Merge MeshAnchor from Scene Reconstruction for Vision Pro

Hi there, I'm trying to merge the mesh anchors from scene reconstruction into a single mesh, but I couldn't find any resources on this. Here is the code where I make a mesh from each mesh anchor and assign it to a model component with a shader graph material.

    func run(_ sceneRec: SceneReconstructionProvider) async {
        for await update in sceneRec.anchorUpdates {
            switch update.event {
            case .added, .updated:
                // Get or create entity for this anchor
                let anchorEntity = anchors[update.anchor.id] ?? {
                    let entity = ModelEntity()
                    root?.addChild(entity)
                    anchors[update.anchor.id] = entity
                    return entity
                }()
                
                // Remove any existing children
                for child in anchorEntity.children {
                    child.removeFromParent()
                }
                
                // Generate the mesh from the anchor
                guard let mesh = try? await MeshResource(from: update.anchor) else { continue }
                guard let shape = try? await ShapeResource.generateStaticMesh(from: update.anchor) else { continue }
                
                print("Mesh added, vertices: \(update.anchor.geometry.vertices.count), bounds: \(mesh.bounds)")
                
                // Get the material to use
                var material: RealityKit.Material
                
                if isMaterialLoaded, let loadedMaterial = self.shaderMaterial {
                    material = loadedMaterial
                } else {
                    // Use a temporary material until the shader loads
                    var tempMaterial = UnlitMaterial()
                    tempMaterial.color = .init(tint: .purple.withAlphaComponent(0.5))
                    material = tempMaterial
                }
                
                await MainActor.run {
                    anchorEntity.components.set(ModelComponent(mesh: mesh, materials: [material]))
                    anchorEntity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
                    
                    // Add collision component with static flag - required for spatial interactions
                    anchorEntity.components.set(CollisionComponent(
                        shapes: [shape],
                        isStatic: true,
                        filter: .default
                    ))
                    
                    // Make entity interactive - enables spatial taps, drags, etc.
                    anchorEntity.components.set(InputTargetComponent())
                    
                    let shadowComponent = GroundingShadowComponent(
                        castsShadow: true,
                        receivesShadow: true
                    )
                    anchorEntity.components.set(shadowComponent)
                }

            case .removed:
                // Remove the entity when its anchor goes away
                anchors[update.anchor.id]?.removeFromParent()
                anchors.removeValue(forKey: update.anchor.id)
            }
        }
    }

I then use a spatial tap gesture to set the position parameter in the shader graph material, which creates a nice gradient from the tap position on the mesh out to the rest of the mesh.

        SpatialTapGesture()
            .targetedToAnyEntity()
            .onEnded { value in
                let tappedEntity = value.entity
                
                // Check if the tapped entity is a child of tracking.meshAnchors
                if isChildOfMeshAnchors(entity: tappedEntity) {
                    // Tap location in the gesture's local coordinate space
                    let localPosition = value.location3D
                    
                    // Convert to world position (scene coordinate space)
                    let worldPosition = value.convert(localPosition, from: .local, to: .scene)
                    
                    print("Tapped mesh anchor at local position: \(localPosition)")
                    print("Tapped mesh anchor at world position: \(worldPosition)")
                    
                    // Update the material parameter with the tap position
                    updateMaterialTapPosition(entity: tappedEntity, position: worldPosition)
                } else {
                    print("Tapped entity is not a mesh anchor")
                }
            }
    }
 
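For reference, updateMaterialTapPosition isn't shown above; it just writes the tapped world position into the material's TapPos parameter, roughly like this (assuming the shader graph material is the first material on the tapped entity's model component):

    // Rough sketch: write the tapped world position into the shader graph
    // material's TapPos parameter. Assumes that material is the first (and
    // only) material on the tapped entity's ModelComponent.
    func updateMaterialTapPosition(entity: Entity, position: SIMD3<Float>) {
        guard var model = entity.components[ModelComponent.self],
              var material = model.materials.first as? ShaderGraphMaterial else { return }

        do {
            // "TapPos" has to match the parameter name promoted in the shader graph.
            try material.setParameter(name: "TapPos", value: .simd3Float(position))
        } catch {
            print("Failed to set TapPos: \(error)")
            return
        }

        // Reassign the updated material so the change takes effect.
        model.materials = [material]
        entity.components.set(model)
    }
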

My issue is that because there are several mesh anchors, the gradient often gets cut off by the edge of the mesh generated from a single mesh anchor, as opposed to a nice continuous gradient across the entire reconstructed scene mesh. I couldn't find any documentation on how to merge the meshes from mesh anchors, so any tips would be helpful! Thank you!

Hi @SandraF,

Cool question! The answer, as with many great questions, is complex. A lot will depend on the fidelity required of the merged mesh.

You could build a combined mesh in this way:

    var combinedEntity = ModelEntity()
    ...
    guard let newMesh = try? await MeshResource(from: update.anchor) else { continue }

    if let existingMesh = combinedEntity.model?.mesh {
        // Merge with the existing mesh
        var newContents = MeshResource.Contents()
        newContents.models = existingMesh.contents.models
        for model in newMesh.contents.models {
            newContents.models.insert(model)
        }

        guard let combinedMesh = try? MeshResource.generate(from: newContents) else { continue }
        combinedEntity.components.set(ModelComponent(mesh: combinedMesh,
                                                     materials: [SimpleMaterial(color: .orange,
                                                                                isMetallic: false)]))
    } else {
        // Make combined have this new mesh
        combinedEntity.components.set(ModelComponent(mesh: newMesh,
                                                     materials: [SimpleMaterial(color: .orange,
                                                                                isMetallic: false)]))
    }

But I don't expect that would behave the way your shader material expects.

Since I don't know exactly what your material expects, perhaps you'd want to do something more like this to get a mesh that the shader understands:

    for model in newMesh.contents.models {
        for part in model.parts {
            if let positions = part.buffers[.positions] {
                print("found positions, merge them properly")
            }
            if let tris = part.buffers[.triangleIndices] {
                print("found triangle indices, merge them properly")
            }
            if let normals = part.buffers[.normals] {
                print("found normals, merge them properly")
            }
            if let uvs = part.buffers[.textureCoordinates] {
                print("found texture coords, merge them properly")
            }
        }
    }

Of course, 'merge them properly' is the complicated part of this endeavor. Merging in the positions and normals should be close to simply appending them to the existing buffers, but then you have to be careful to update the triangle indices so that the new positions and normals are referenced correctly and render as expected.

Once the positions, normals, and triangle indices are properly merged, merging the texture coordinates would likely require something like re-normalizing them, which can be even more complicated.
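To get at that data, the typed accessors on MeshResource.Part hand back plain arrays. A rough, untested sketch (combinedPositions and combinedNormals here are hypothetical arrays you'd keep alongside the combined entity):

    for model in newMesh.contents.models {
        for part in model.parts {
            // Positions and normals come out as plain arrays you can append.
            combinedPositions += part.positions.elements
            combinedNormals += part.normals?.elements ?? []
            // part.triangleIndices?.elements still refers to this part's own
            // vertex numbering, so it has to be remapped before appending.
        }
    }
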

Thanks for the quick response. Here is a screenshot of my shader graph:

I pass the world position of my spatial tap to the TapPos parameter and calculate the distance from it with a Position node in world space to create the color and transparency gradient.

I'm not sure how to order the triangles and positions in a way that preserves the topology of the overall mesh; I often end up with a corrupted mesh.

Hi @SandraF,

The topology is definitely non-trivial. There is no easy answer on how to merge the topology either.

One simple approach would be to ignore the 'seam' where two meshes might overlap and just add the existing position count to the triangle indices for the new positions. That would leave the two meshes unjoined, but they should still render.
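A rough, untested sketch of that idea, assuming each MeshResource holds a single model with a single part, and that both meshes' vertices are already expressed in the same coordinate space:

    // Naive merge: append b's buffers onto a's and shift b's triangle indices
    // by a's vertex count. The seam between the two meshes is left un-joined.
    // Assumes both meshes are already in a common coordinate space (e.g. world
    // space via each anchor's originFromAnchorTransform).
    func naivelyMerged(_ a: MeshResource, _ b: MeshResource) -> MeshResource? {
        guard let partA = a.contents.models.first?.parts.first,
              let partB = b.contents.models.first?.parts.first else { return nil }

        var positions = partA.positions.elements
        var normals = partA.normals?.elements ?? []
        var indices = partA.triangleIndices?.elements ?? []

        // Offset b's indices so they keep pointing at b's vertices after the append.
        let offset = UInt32(positions.count)
        positions += partB.positions.elements
        normals += partB.normals?.elements ?? []
        indices += (partB.triangleIndices?.elements ?? []).map { $0 + offset }

        var part = MeshResource.Part(id: "merged-part", materialIndex: 0)
        part.positions = MeshBuffer(positions)
        if !normals.isEmpty { part.normals = MeshBuffer(normals) }
        part.triangleIndices = MeshBuffer(indices)

        var contents = MeshResource.Contents()
        contents.models.insert(MeshResource.Model(id: "merged-model", parts: [part]))
        return try? MeshResource.generate(from: contents)
    }

You'd then set that merged mesh (with your shader graph material) on the combined entity in place of the per-anchor meshes, and the gradient should be able to run across what used to be separate anchors.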
