
Integrate iOS device camera and motion features to produce augmented reality experiences in your app or game using ARKit.

ARKit Documentation

Posts under ARKit subtopic

Post · Replies · Boosts · Views · Activity

Adding reference image failed in visionOS
I am trying out ARKit image tracking on Vision Pro, but there seems to be a problem when adding reference images. Here is my code:

let images = ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
print("Images: \(images)")
try await appState!.arkitSession.run([imageTracking])

It successfully prints the images, but sometimes it also prints an error message like this:

ARImageTrackingRemoteService: Adding reference image <ARReferenceImage: 0x3032399e0 name="chair" physicalSize=(0.070, 0.093)> failed.

When this error is printed, the corresponding image cannot be tracked. I do not understand why this happens, because sometimes an image is added successfully and other times it is not, even for the same image, which makes my app unstable. There are also some other error messages, and I do not know whether they are related:

ARPredictorRemoteService <0x1042154a0>: Query queue is not running.
Execution of the command buffer was aborted due to an error during execution. Insufficient Permission (to submit GPU work from background) (00000006:kIOGPUCommandBufferCallbackErrorBackgroundExecutionNotPermitted)
1 reply · 0 boosts · 296 views · Mar ’25
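For reference, a minimal sketch of the image-tracking setup the question above describes, assuming the reference images live in an asset-catalog group named "photos" and that the app already owns an ARKitSession; checking provider support and watching anchorUpdates is shown for completeness, not as the cause of the error.

```swift
import ARKit

// Sketch only: the group name "photos" and the surrounding app state are assumptions.
func startImageTracking(session: ARKitSession) async throws {
    guard ImageTrackingProvider.isSupported else { return }

    // Load reference images from the asset catalog group, as in the post above.
    let images = await ReferenceImage.loadReferenceImages(inGroupNamed: "photos")
    let imageTracking = ImageTrackingProvider(referenceImages: images)

    try await session.run([imageTracking])

    // Observe detected images as they appear and move.
    for await update in imageTracking.anchorUpdates {
        print("Image anchor \(update.anchor.id) tracked: \(update.anchor.isTracked)")
    }
}
```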
ARMeshAnchors are very unreliable on iPad Pro (4th gen)
Hello, We are developing an AR app that requires the LiDAR meshes. Unfortunately, the ARMeshAnchors that allow us to retrieve the mesh data are very unreliable: it happens very often that the ARSession removes all ARMeshAnchors and then takes anywhere from 5 s to 30 s to bring them back. Plane detection (ARPlaneAnchor) keeps working fine, and camera tracking also works normally. I tried a basic ARKit sample app and got the same behaviour as in our own app. Is this a known issue? Is there anything we can do to mitigate it? Thank you
1 reply · 0 boosts · 269 views · Mar ’25
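For context, a minimal sketch of how the mesh anchors in question are produced and observed on iOS; the delegate logging is an assumption added to make the add/remove behaviour visible, not a fix for it.

```swift
import ARKit

final class MeshSessionDelegate: NSObject, ARSessionDelegate {
    func configure(_ session: ARSession) {
        let config = ARWorldTrackingConfiguration()
        // Scene reconstruction requires a LiDAR-equipped device.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        config.planeDetection = [.horizontal, .vertical]
        session.delegate = self
        session.run(config)
    }

    // Log when mesh anchors appear and disappear to observe the reported gaps.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        let meshes = anchors.compactMap { $0 as? ARMeshAnchor }
        if !meshes.isEmpty { print("Added \(meshes.count) mesh anchor(s)") }
    }

    func session(_ session: ARSession, didRemove anchors: [ARAnchor]) {
        let meshes = anchors.compactMap { $0 as? ARMeshAnchor }
        if !meshes.isEmpty { print("Removed \(meshes.count) mesh anchor(s)") }
    }
}
```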
ARPlaneGeometry: negative triangle indices?
Hello, When processing an ARPlaneAnchor's geometry via its ARPlaneGeometry, the triangleIndices property is an array of Int16. It is supposed to be an index buffer, which in Metal can only be uint16 or uint32. What am I supposed to do with negative indices? They are rare, but they do appear sometimes. Thank you
1 reply · 0 boosts · 178 views · Mar ’25
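One hedged reading of those negative values: triangleIndices is exposed as Int16, so an index above 32767 shows up negative, and reinterpreting the bit pattern as UInt16 would recover the intended buffer value. A small sketch under that assumption:

```swift
import ARKit

// Assumption: negative Int16 values are unsigned indices that overflowed the signed range.
func unsignedIndices(from geometry: ARPlaneGeometry) -> [UInt16] {
    geometry.triangleIndices.map { UInt16(bitPattern: $0) }
}
```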
Merge MeshAnchor from Scene Reconstruction for Vision Pro
Hi there, I'm trying to merge the mesh anchors into a single mesh, but couldn't find any resources on this. Here is the code where I build a mesh from each mesh anchor and assign it to a model component with a shader graph material:

func run(_ sceneRec: SceneReconstructionProvider) async {
    for await update in sceneRec.anchorUpdates {
        switch update.event {
        case .added, .updated:
            // Get or create entity for this anchor
            let anchorEntity = anchors[update.anchor.id] ?? {
                let entity = ModelEntity()
                root?.addChild(entity)
                anchors[update.anchor.id] = entity
                return entity
            }()

            // Remove any existing children
            for child in anchorEntity.children {
                child.removeFromParent()
            }

            // Generate the mesh from the anchor
            guard let mesh = try? await MeshResource(from: update.anchor) else { return }
            guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
            print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")

            // Get the material to use
            var material: RealityKit.Material
            if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
                material = loadedMaterial
            } else {
                // Use a temporary material until the shader loads
                var tempMaterial = UnlitMaterial()
                tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
                material = tempMaterial
            }

            await MainActor.run {
                anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
                anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)

                // Add collision component with static flag - required for spatial interactions
                anchorEntity.components.set(CollisionComponent(
                    shapes: [shape],
                    isStatic: true,
                    filter: .default
                ))

                // Make entity interactive - enables spatial taps, drags, etc.
                anchorEntity.components.set(InputTargetComponent())

                let shadowComponent = GroundingShadowComponent(
                    castsShadow: true,
                    receivesShadow: true
                )
                anchorEntity.components.set(shadowComponent)
            }

I then use a spatial tap gesture to set the position parameter in the shader graph material, which creates a nice gradient from the tap position on the mesh to the rest of the mesh:

SpatialTapGesture()
    .targetedToAnyEntity()
    .onEnded { value in
        let tappedEntity = value.entity

        // Check if the tapped entity is a child of tracking.meshAnchors
        if isChildOfMeshAnchors(entity: tappedEntity) {
            // Get local position (in the entity's coordinate space)
            let localPosition = value.location3D

            // Convert to world position (scene coordinate space)
            let worldPosition = value.convert(localPosition, from: .local, to: .scene)

            print("Tapped mesh anchor at local position: \(localPosition)")
            print("Tapped mesh anchor at world position: \(worldPosition)")

            // Update the material parameter with the tap position
            updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
        } else {
            print("Tapped entity is not a mesh anchor")
        }
    }

My issue is that because there are several mesh anchors, the gradient often gets cut off by the edge of the mesh generated from a single mesh anchor, as opposed to a nice continuous gradient across the entire scene-reconstructed mesh. I couldn't find any documentation on how to merge meshes from mesh anchors, so any tips would be helpful! Thank you!
3 replies · 0 boosts · 284 views · Mar ’25
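There is no single-call merge API that I know of; one possible approach is to pull each MeshAnchor's vertex and index data out of its geometry buffers, transform the vertices into world space with originFromAnchorTransform, and append everything into one MeshDescriptor. The buffer layout assumed below (packed float3 positions, 32-bit triangle indices) is an assumption, so treat this as a sketch rather than a drop-in solution.

```swift
import ARKit
import RealityKit

// Sketch: combine several MeshAnchors into one MeshResource.
// Assumes packed float3 vertex positions and UInt32 triangle indices.
@MainActor
func mergedMesh(from anchors: [MeshAnchor]) throws -> MeshResource {
    var allPositions: [SIMD3<Float>] = []
    var allIndices: [UInt32] = []

    for anchor in anchors {
        let geometry = anchor.geometry
        let base = UInt32(allPositions.count)
        let transform = anchor.originFromAnchorTransform

        // Read vertex positions out of the GeometrySource buffer and move them to world space.
        let vertices = geometry.vertices
        let vertexData = vertices.buffer.contents().advanced(by: vertices.offset)
        for i in 0..<vertices.count {
            let p = vertexData.advanced(by: i * vertices.stride)
                              .assumingMemoryBound(to: Float.self)
            let world = transform * SIMD4<Float>(p[0], p[1], p[2], 1)
            allPositions.append(SIMD3<Float>(world.x, world.y, world.z))
        }

        // Read triangle indices, offsetting them past the vertices already appended.
        // Assumes triangle primitives (three UInt32 indices per face).
        let faces = geometry.faces
        let indexData = faces.buffer.contents().assumingMemoryBound(to: UInt32.self)
        for i in 0..<(faces.count * 3) {
            allIndices.append(base + indexData[i])
        }
    }

    var descriptor = MeshDescriptor(name: "mergedSceneMesh")
    descriptor.positions = MeshBuffer(allPositions)
    descriptor.primitives = .triangles(allIndices)
    return try MeshResource.generate(from: [descriptor])
}
```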
A question about adding a grounding shadow on Vision Pro
I want to add a grounding shadow to my Entity in a RealityView on Vision Pro. However, it seems that the shadow can only appear on another Entity, so I use ARKit plane detection and add a transparent plane to receive the shadow:

let planeEntity = ModelEntity(mesh: .generatePlane(width: anchor.geometry.extent.width, height: anchor.geometry.extent.height), materials: [material])
planeEntity.components.set(OpacityComponent(opacity: 0.0))

But sometimes a border appears around my Entity on the plane. I do not know why this happens, and I want to remove the border.
5 replies · 0 boosts · 352 views · Mar ’25
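For what it's worth, visionOS can draw a grounding shadow for an entity directly via GroundingShadowComponent, which may make the hand-made catcher plane unnecessary; a minimal sketch, where the box entity is just a stand-in for the real model:

```swift
import RealityKit

// Hypothetical entity; in practice this is the model loaded into the RealityView.
let model = ModelEntity(mesh: .generateBox(size: 0.3),
                        materials: [SimpleMaterial(color: .gray, isMetallic: false)])

// Ask RealityKit to cast a grounding shadow beneath the entity.
model.components.set(GroundingShadowComponent(castsShadow: true))
```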
Adding reference image failed on Vision Pro
I am using ARKit to detect images on Vision Pro. However, I have run into a problem when adding the reference images: some of my images sometimes cannot be added correctly. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) At other times they are added without any problem. I do not know why this happens, and I want them all to be added reliably.
0 replies · 0 boosts · 205 views · Mar ’25
ARKit hand tracking
Hello, I am developing a visionOS application and am interested in obtaining detailed data about the user's hands through ARKit, including but not limited to the transform and rotation angle. I have reviewed the Happy Beam sample, but it appears to only cover recognizing specific gestures. Could you please advise on how to obtain the transform and rotation angle of the user's hand? Thank you.
1 reply · 0 boosts · 373 views · Mar ’25
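A minimal sketch of reading per-joint transforms with HandTrackingProvider: the world-space pose of a joint comes from composing the anchor's originFromAnchorTransform with the joint's anchorFromJointTransform. The joint choice and session ownership here are assumptions.

```swift
import ARKit

func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        let hand = update.anchor
        guard hand.isTracked, let skeleton = hand.handSkeleton else { continue }

        // Pose of the whole hand anchor in world space.
        let handTransform = hand.originFromAnchorTransform

        // World-space transform of a specific joint, e.g. the index fingertip.
        let tip = skeleton.joint(.indexFingerTip)
        let tipTransform = handTransform * tip.anchorFromJointTransform
        print("\(hand.chirality) index tip transform: \(tipTransform)")
    }
}
```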
Is there any way I can use Object Tracking in an iOS (iPhone) AR app?
I have been referencing the Object Tracking tutorial from WWDC 2024 for visionOS, which shows how Create ML is used to create a reference object that can then be tracked in an ARKit session. I am looking forward to building this feature in an AR app for iPhone (I am using an iPhone 13 Pro Max), and I have created a couple of reference objects with Create ML.
1 reply · 0 boosts · 290 views · Mar ’25
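On iOS, ARKit's own object detection path uses ARReferenceObject files (typically scanned on device) together with ARWorldTrackingConfiguration.detectionObjects, rather than the visionOS ReferenceObject pipeline from the WWDC24 session; whether a Create ML .referenceobject can be consumed on iPhone is not something this sketch assumes. A minimal detection setup, with the asset group name made up:

```swift
import ARKit

func runObjectDetection(on session: ARSession) {
    let configuration = ARWorldTrackingConfiguration()

    // Hypothetical asset-catalog group holding scanned .arobject files.
    if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "TrackedObjects",
                                                        bundle: nil) {
        configuration.detectionObjects = objects
    }
    session.run(configuration)
}

// In an ARSessionDelegate, detected objects arrive as ARObjectAnchor instances:
// func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
//     for case let objectAnchor as ARObjectAnchor in anchors {
//         print("Detected \(objectAnchor.referenceObject.name ?? "object")")
//     }
// }
```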
When to use an AnchorEntity or a HandTrackingProvider in visionOS
As I understand it, there are two ways I can track a hand, or a joint, in RealityKit: either create an AnchorEntity, for example AnchorEntity(.hand(.left, location: .palm)), or set up an ARKitSession with a HandTrackingProvider (a lot more code, which I haven't repeated here). Assuming this is correct, when would I want to use one over the other?
2 replies · 0 boosts · 372 views · Feb ’25
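As a side-by-side sketch of the two approaches the question contrasts: an AnchorEntity attaches content to the hand with no extra session code, while a HandTrackingProvider hands back raw joint data to work with yourself (authorization handling omitted).

```swift
import ARKit
import RealityKit

// 1. Declarative: parent content to the left palm; RealityKit keeps it updated.
let palmAnchor = AnchorEntity(.hand(.left, location: .palm))
palmAnchor.addChild(ModelEntity(mesh: .generateSphere(radius: 0.02),
                                materials: [SimpleMaterial()]))

// 2. Imperative: run a HandTrackingProvider and consume anchor data directly.
func observeHands() async throws {
    let session = ARKitSession()
    let provider = HandTrackingProvider()
    try await session.run([provider])
    for await update in provider.anchorUpdates where update.anchor.isTracked {
        // Raw transforms are available here for custom logic (gestures, physics, etc.).
        _ = update.anchor.originFromAnchorTransform
    }
}
```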
How to convert an mlmodel to a reference object?
Hello, I have downloaded and run the sample object-tracking app for visionOS. Now I'm working on tracking my own objects. I have made a model in Create ML using images of my object. However, I cannot see how to convert the Create ML output file (xxx.mlmodel) into a reference object like the files in the sample project. Is there a tool for converting them? TIA
2 replies · 0 boosts · 314 views · Feb ’25
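For context, the sample project's tracking assets are .referenceobject files, which, if I recall the WWDC24 workflow correctly, come from Create ML's spatial Object Tracking template (which takes a 3D model rather than photos) instead of an image-classifier .mlmodel. This sketch only shows the loading side; the file name and bundle lookup are placeholders.

```swift
import ARKit

// Sketch: load a .referenceobject and start tracking it on visionOS.
func startObjectTracking(session: ARKitSession) async throws {
    guard let url = Bundle.main.url(forResource: "MyObject",
                                    withExtension: "referenceobject") else { return }

    let referenceObject = try await ReferenceObject(from: url)
    let objectTracking = ObjectTrackingProvider(referenceObjects: [referenceObject])
    try await session.run([objectTracking])

    for await update in objectTracking.anchorUpdates {
        print("Object anchor \(update.anchor.id) tracked: \(update.anchor.isTracked)")
    }
}
```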
How to make a RealityKit `Entity` respond to Environment light
I am developing a visionOS app. I load a .usdz file as a RealityKit Entity (for example, a cabbage), and I want the following effect: when I turn on a desk lamp in the real world near the Entity, the surface of the Entity responds correctly to the real-world light. I want an effect like this: https://www.reddit.com/r/virtualreality/comments/1as01mm/shiny_disco_ball_reflecting_my_room/ I have looked at APIs such as ImageBasedLightComponent and VirtualEnvironmentProbeComponent in RealityKit, and EnvironmentLightEstimationProvider in ARKit, but I do not know how to put them together in code. It would also be better if the shadow responded to the light correctly.
1 reply · 0 boosts · 469 views · Feb ’25
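A hedged starting point with the RealityKit pieces named in the post: ImageBasedLightComponent supplies an environment map and ImageBasedLightReceiverComponent opts an entity into it. Note this lights the entity from a fixed environment texture rather than reacting live to a real desk lamp; the resource name is a placeholder, and driving it from EnvironmentLightEstimationProvider is beyond this sketch.

```swift
import RealityKit

// Sketch: light a loaded entity with an image-based light.
// "StudioLight" is a placeholder EnvironmentResource in the app bundle.
@MainActor
func applyImageBasedLight(to model: Entity) async throws {
    let environment = try await EnvironmentResource(named: "StudioLight")

    // An entity that owns the image-based light source.
    let lightEntity = Entity()
    lightEntity.components.set(ImageBasedLightComponent(source: .single(environment)))

    // Opt the model into receiving that light.
    model.components.set(ImageBasedLightReceiverComponent(imageBasedLight: lightEntity))
    model.parent?.addChild(lightEntity)
}
```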
Difference in ARKit plane detection from iPhone 8 to iPhone 15
I am developing an ARKit based application that requires plane detection of the tabletop at which the user is seated. Early testing was with an iPhone 8 and iPhone 8+. With those devices, ARKit rapidly detected the plane of the tabletop when it was only 8 to 10 inches away. Using iPhone 15 with the same code, it seems to require me to move the phone more like 15 to 16 inches away before detecting the plane of the table. This is an awkward motion for a user seated at a table. To validate that it was not necessarily a feature of my code, I determined that the same behavior results with Apple's sample AR Interaction application. Has anyone else experienced this, and if so, have suggestions to improve the situation?
2 replies · 0 boosts · 446 views · Feb ’25
xform called "Scene" breaks animations in Quick Look starting with iOS 15
Hello, We discovered that a number of our old animated models were no longer animated on iOS 15 and onwards. After a few days of playing spot-the-difference between usda files, I noticed that all the broken models had an xform called "Scene". Lo and behold, changing the name of that xform fixed the issue on all the models; even lowercase "scene" makes the animations work again. Is "Scene" a reserved keyword or something? What other keywords do we need to avoid so we can create more robust USDZ files? I'm surprised this issue isn't more widespread, considering Blender wraps models in a "Scene" node. At the drive link below you can find two animated cube USDZs. The only difference is the name of one of the xforms; the one with a "Scene" xform is not animated in Quick Look (replicated on iPhone 13 iOS 15.2, iPhone 13 iOS 18.3, and various devices on BrowserStack including iPhone 16 iOS 18.3). https://drive.google.com/drive/folders/1dch1WaM9O6mbHy29S6NGWgnSHkZkPiBf?usp=sharing
1 reply · 0 boosts · 332 views · Feb ’25