After scanning the room I use the .export method, passing a ModelProvider. Then I import the USDZ into an SCNView and continue processing the scene. I would like to apply a texture to the walls and floor, but I can't, because their geometry doesn't contain texture coordinates, and creating them from the USDZ file is not easy. I tried to combine the CapturedRoom data for the walls and floor only, adding the texture coordinates myself. I'm managing, but I'm struggling a lot. Isn't there an easier way to do it? Is a ModelProvider for surfaces planned in the future? If so, where can I access the RoomPlan beta documentation?
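For reference, a minimal sketch of the combine-the-CapturedRoom-data approach mentioned above, under the assumption that each wall is rebuilt as an SCNPlane (which generates its own texture coordinates); capturedRoom and wallTexture are stand-ins for the scan result and the image to apply:

import UIKit
import SceneKit
import RoomPlan

// Hedged workaround sketch: rebuild each wall as an SCNPlane so the geometry
// carries its own texture coordinates, instead of texturing the exported USDZ mesh.
func makeWallNodes(from capturedRoom: CapturedRoom, texture wallTexture: UIImage) -> [SCNNode] {
    capturedRoom.walls.map { surface in
        // SCNPlane generates UVs spanning 0...1 across its width and height.
        let plane = SCNPlane(width: CGFloat(surface.dimensions.x),
                             height: CGFloat(surface.dimensions.y))
        plane.firstMaterial?.diffuse.contents = wallTexture
        plane.firstMaterial?.isDoubleSided = true

        let node = SCNNode(geometry: plane)
        node.simdTransform = surface.transform   // world placement straight from RoomPlan
        return node
    }
}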
SceneKit
Create 3D games and add 3D content to apps using high-level scene descriptions with SceneKit.
Posts under SceneKit tag (64 posts)
Hi everyone,
I'm looking for a way to convert an FBX file to USDZ directly within my iOS app. I'm aware of Reality Converter and the Python USDZ converter tool, but I haven't been able to find any documentation on how to do this directly within the app (assuming the user can upload their own file). Any guidance on how to achieve this would be greatly appreciated.
I've heard about Model I/O and SceneKit, but I haven't found much information on using them for this purpose either.
Thanks!
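For reference, a hedged sketch of how one might probe Model I/O for this at runtime. As far as I know, FBX is not among Model I/O's importable formats, so the check is expected to return false; it mainly confirms that on a given OS version:

import ModelIO

// Hedged sketch: ask Model I/O whether it advertises FBX import and USDZ export
// before attempting an in-app conversion.
func canConvertFBXToUSDZWithModelIO() -> Bool {
    MDLAsset.canImportFileExtension("fbx") && MDLAsset.canExportFileExtension("usdz")
}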
Hello,
I'm playing around with making a fully immersive multiplayer, air-to-air dogfighting game, but I'm having trouble figuring out how to attach a camera to an entity.
I have a plane that's controlled with a gamepad, and I want the camera's position to be pinned to that entity as it moves about space, while maintaining the user's ability to look around.
Is this possible?
--
From my understanding, the current state of SceneKit, ARKit, and RealityKit is a bit confusing with what can and can not be done.
SceneKit
Full control of the camera
Not sure if it can use RealityKit's ECS system.
2D window only - missing full immersion.
ARKit
Full control of the camera* - but only for non-Vision Pro devices, since visionOS doesn't have an ARView.
Has RealityKit's ECS system
2D window only - missing full immersion.
RealityKit
Camera is pinned to the device's position and orientation
Has RealityKit's ECS system
Allows full immersion
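For the SceneKit row in that comparison, the camera pinning itself is plain node parenting; a minimal sketch (aircraftNode is a hypothetical stand-in for the player's plane, and this is the SceneKit technique only, not a visionOS answer):

import SceneKit

// Minimal sketch: parenting a camera node to the aircraft node pins the camera
// to it while leaving the camera's own transform free for look-around.
let aircraftNode = SCNNode()

let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 0.5, 2)   // slightly above and behind the aircraft
aircraftNode.addChildNode(cameraNode)         // camera now follows the aircraft's transform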
I'm trying to update my projects to use Swift 6. If I change the project settings to use Swift 6, my app crashes when I add a closure to the SCNAnimation animationDidStop property. The error occurs inside the SceneKit renderingQueue and indicates that the callback is being called on the wrong queue.
Maybe I need to do something in my code to fix this, but I can't seem to make it work. Could this be a SceneKit bug?
If you create a new game template in Xcode using SceneKit and replace the contents of GameViewController.swift with the following, you will see the app crash after it is launched.
import UIKit
import SceneKit

class GameViewController: UIViewController {

    let player: SCNAnimationPlayer = {
        let a = CABasicAnimation(keyPath: "opacity")
        return SCNAnimationPlayer(animation: SCNAnimation(caAnimation: a))
    }()

    override func viewDidLoad() {
        super.viewDidLoad()

        let scnView = self.view as! SCNView
        scnView.scene = SCNScene()

        // Change the project settings to use Swift 6.
        // Setting this closure will then cause a _dispatch_assert_queue_fail
        // EXC_BREAKPOINT error in the scenekit.renderingQueue.SCNView queue;
        // the only thing on the stack is:
        // "%sBlock was %sexpected to execute on queue [%s (%p)]"
        player.animation.animationDidStop = { (a: SCNAnimation, b: SCNAnimatable, c: Bool) in
            print("stopped")
        }

        scnView.scene?.rootNode.addAnimationPlayer(player, forKey: nil)
        player.play()
    }
}
I have an AR game using ARKit with SceneKit that works just fine in iOS 17.
In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking and obviously the whole game is useless.
I narrowed down the issue to showing the Game Center Access Point.
My app has ViewController 1 (VC1) showing the main menu and that's where I want to show the GC Access Point. From there you open VC2 which shows a list of levels. Selecting any level will open VC3 which has the ARScene.
Following is the code I use to start Game Center in VC1:
GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
    let isGameCenterReady = (gcAuthVC == nil) && (error == nil)

    if let viewController = gcAuthVC {
        self.present(viewController, animated: true, completion: nil)
    }

    if error != nil {
        print(error?.localizedDescription ?? "")
    }

    if isGameCenterReady {
        GKAccessPoint.shared.location = .topLeading
        GKAccessPoint.shared.showHighlights = true
        GKAccessPoint.shared.isActive = true
    }
}
When switching to VC2 I run GKAccessPoint.shared.isActive = false so that the Access Point will no longer show in any of the following VCs. I tried running it in VC1, VC2, and again in VC3 - it doesn't change anything. Once I reach VC3, the background is black.
If in VC1 I don't run GKAccessPoint.shared.isActive = true, so I don't activate the access point, the behavior is as follows:
If I wait until after the Game Center login animation completes and closes on its own and then I proceed to VC2 and VC3, the camera image will show correctly
If I quickly move to VC2 before the Game Center login animation has completed, so my code will close it by setting active = false, and then I continue to VC3, I will see the black background problem.
So it does look like activating the access point and then de-activating it causes the issue. BTW, if I activate the access point and leave it on in all VCs, the same black background issue persists.
Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), when it returns to the foreground, the camera suddenly shows the real world correctly!
I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, it looks like when the app comes back to the foreground it also doesn't run the session start code.
But something does seem to reset itself, so I wonder what that is. Maybe I could trigger the same reset manually in my code?
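For reference, the manual reset mentioned above looks roughly like this (a minimal sketch, assuming an ARSCNView named sceneView and a world-tracking configuration):

import ARKit

// Minimal sketch of a manual AR session reset.
func resetARSession(on sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    sceneView.session.pause()
    sceneView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])
}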
I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently on beta 5, but it started in some of the previous betas as well).
So could this be a bug in iOS 18?
As a workaround I could check the iOS version and, if it's iOS 18, not activate the access point, hoping that the user will not jump to VC2 too quickly, and instead show my own button which will open Game Center. But I'd rather give the users the full experience with their own avatar and the highlights showing up. Plus, certainly some users will move quickly to VC2, and that will be an awful experience.
Any help would be greatly appreciated. Thanks!
Hi guys,
I'm integrating the RoomPlan framework into my app.
I'm able to scan a room and extract the nodes from the CaptureStructure object. So far, I can rebuild the 3D object in the SceneView, but I can't render the openings and the windows correctly. I'm struggling to add these two objects correctly in the wall, in order to make the wall transparent where they are supposed to be.
If I export the CaptureStructure into a usda file and then I load it directly in the SceneView, all the doors, windows and openings are correctly rendered, therefore I do believe that I'm doing something wrong.
Could you please tell me what I'm doing wrong?
I added here a screenshot of my problem:
I have also a prototype, which you can run and see the problem I'm talking about: https://github.com/renanstig/3d-scenekit-prototype
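For reference, a hedged sketch of just the placement step being described: each opening or window is parented under its wall node by converting its world transform into the wall's local space (wallSurface, opening, and wallNode are stand-ins; actually cutting the hole in the wall geometry is a separate step):

import simd
import SceneKit
import RoomPlan

// Hedged sketch: place an opening relative to its parent wall.
func addOpeningNode(for opening: CapturedRoom.Surface,
                    on wallSurface: CapturedRoom.Surface,
                    under wallNode: SCNNode) {
    let plane = SCNPlane(width: CGFloat(opening.dimensions.x),
                         height: CGFloat(opening.dimensions.y))
    let openingNode = SCNNode(geometry: plane)

    // World-to-wall-local conversion: inverse(wall transform) * opening transform.
    openingNode.simdTransform = simd_mul(simd_inverse(wallSurface.transform), opening.transform)
    wallNode.addChildNode(openingNode)
}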
I'm trying to figure out how to engrave some text into this ellipsoid mesh. So far, the only thing I've found that comes close to what I'm looking for is SCNText, but it floats above the ellipsoid and doesn't conform to the angular shape.
let allocator = MTKMeshBufferAllocator(device: MTLCreateSystemDefaultDevice()!)
let disc = MDLMesh.newEllipsoid(
    withRadii: vector_float3(Float(discDiameter/2), Float(discDiameter/2), Float(discThickness/2)),
    radialSegments: 64,
    verticalSegments: 64,
    geometryType: .triangles,
    inwardNormals: false,
    hemisphere: false,
    allocator: allocator
)
let discGeometry = SCNGeometry(mdlMesh: disc)
let material = createIridescentMaterial()
discGeometry.materials = [material]
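For context, a hedged alternative sketch to true engraving: render the text into an image and apply it as a material map, so it follows the ellipsoid's UV mapping instead of floating above it (applyEngravedLabel and its layout values are made-up names for illustration; usage would be something like applyEngravedLabel("ENGRAVED", to: discGeometry)):

import UIKit
import SceneKit

// Hedged sketch: draw the text into an image and use it as a material map
// so it wraps with the mesh's UVs.
func applyEngravedLabel(_ text: String, to geometry: SCNGeometry) {
    let size = CGSize(width: 1024, height: 1024)
    let textImage = UIGraphicsImageRenderer(size: size).image { context in
        UIColor.white.setFill()
        context.fill(CGRect(origin: .zero, size: size))
        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 120),
            .foregroundColor: UIColor.black
        ]
        (text as NSString).draw(at: CGPoint(x: 200, y: 450), withAttributes: attributes)
    }

    let material = SCNMaterial()
    material.diffuse.contents = textImage   // could also feed a normal or displacement source
    geometry.materials = [material]
}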
I want to convert CGPoint into SCNVector3. I am using ARFaceTrackingConfiguration for face tracking.
Below is my code to convert SCNVector3 to CGPoint
let point = faceAnchor.verticeAndProjection(to: sceneView, facePoint: faceAnchor.geometry.vertices[0])
print(point, faceAnchor.geometry.vertices[0])
which prints the values below:
CGPoint = (350.564453125, 643.4456787109375)
SIMD3<Float>(0.014480735, 0.01397189, 0.04508282)
extension ARFaceAnchor {
    // struct to store the 3d vertex and the 2d projection point
    struct VerticesAndProjection {
        var vertex: SIMD3<Float>
        var projected: CGPoint
    }

    // return a struct with vertices and projection
    func verticeAndProjection(to view: ARSCNView, facePoint: Int) -> CGPoint {
        let point = SCNVector3(geometry.vertices[facePoint])
        let col = SIMD4<Float>(SCNVector4())
        let pos = SIMD4<Float>(SCNVector4(point.x, point.y, point.z, 1))
        let pworld = transform * simd_float4x4(col, col, col, pos)
        let vect = view.projectPoint(SCNVector3(pworld.position.x, pworld.position.y, pworld.position.z))
        let p = CGPoint(x: CGFloat(vect.x), y: CGFloat(vect.y))
        return p
    }
}

extension matrix_float4x4 {
    /// Get the position of the transform matrix.
    public var position: SCNVector3 {
        get {
            return SCNVector3(self[3][0], self[3][1], self[3][2])
        }
    }
}
Now I want to convert the same CGPoint back to an SCNVector3.
I tried using the code below, but it is not giving the expected value, which is SIMD3(0.014480735, 0.01397189, 0.04508282):
let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)
let unproject = sceneView.unprojectPoint(SCNVector3(point.x, point.y, CGFloat(projectedOrigin.z)))
let vector = SCNVector3(unproject.x, unproject.y, unproject.z)
Is there any way to convert CGPoint to SCNVector3? I cannot use hitTest because this CGPoint is not present on the node. It is present somewhere on the face area.
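For what it's worth, unprojectPoint only inverts projectPoint if it is given the same depth (z) that projectPoint produced for that particular vertex, not the depth of the scene origin. A minimal sketch under that assumption (sceneView, worldPoint, and faceNode are stand-ins):

// Hedged sketch: keep the projected z for the vertex you care about, then
// unproject with that same depth to recover the world-space point.
let projected = sceneView.projectPoint(worldPoint)   // worldPoint: the vertex in world space
let recovered = sceneView.unprojectPoint(SCNVector3(projected.x, projected.y, projected.z))
// `recovered` is in world space; convert back into the face anchor's local space if you
// need the original vertex value, e.g. with faceNode.simdConvertPosition(_:from: nil).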
As the title suggests, clicking the "Export to SceneKit" button indeed converts a USD to .scn but removes the normals in the process if the mesh has blendshapes.
When I export the same file without any blendshapes / morphtargets, the normals stay on as expected.
If I try to create normals in the SceneKit editor (adding them as a new geometry source), Xcode crashes (no matter whether there are blendshapes or not).
I've tried loading the resulting scene with
[SCNSceneSource.LoadingOption.createNormalsIfAbsent : true]
but this doesn't change anything either.
I suppose this is a bug?
My last resort is to load my character without any blendshapes and then add the targets from a different scene.
Thanks for any insight!
seb
Hi,
I am initializing a SCNNode from a OBJ file. Let's suppose the object is a sphere, and its pivot after loading it from the OBJ file is the bottom of the sphere (where it would rest on the floor). Its default position is the zero vector.
However, I must change the pivot to the center of the sphere. After doing so (based on its bounding box), since the position is still the zero vector, does that mean that the object was translated so that the new pivot lies at (0,0,0)? Or should I set its position to (0,0,0), which will now be based on the new pivot?
To test whether this is needed, I am using a separate button to change the node's position to (0,0,0) after changing its pivot, but I do not see any change visually, which leads me to believe that after changing the pivot, the object is automatically moved to (0,0,0) based on its new pivot. This is probably done faster than the scene renders, which is why I do not notice any difference between the two methods.
I cannot tell which of the two is correct, meaning that I do not know whether I should set the position again to (0,0,0) after changing the pivot or not. Right now it seems like it makes no difference. Any thoughts?
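For reference, my understanding of the pivot semantics, as a hedged sketch: changing the pivot does not touch the position property; it shifts where the geometry sits relative to that position, so no extra position reset should be needed unless you want the visual centre back at the old spot.

import SceneKit

// Hedged sketch: recentre a node's pivot on its bounding-box centre.
// The node's `position` property is not modified by this.
func centerPivot(of node: SCNNode) {
    let (minBox, maxBox) = node.boundingBox
    node.pivot = SCNMatrix4MakeTranslation((minBox.x + maxBox.x) / 2,
                                           (minBox.y + maxBox.y) / 2,
                                           (minBox.z + maxBox.z) / 2)
}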
An SCNView loaded from a xib displays a point-cloud 3D model; it fails to render on iPhone 12 but displays correctly on iPhone 13. The error output is as follows: Execution of the command buffer was aborted due to an error during execution. Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)
2024-07-10 11:01:22.403196+0800 不愁物联网[26648:1375452] Execution of the command buffer was aborted due to an error during execution. Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)
2024-07-10 11:01:22.403458+0800 不愁物联网[26648:1375452] [SceneKit] Error: Resource command buffer execution failed with status 5, error: Error Domain=MTLCommandBufferErrorDomain Code=1 "Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)" UserInfo={NSLocalizedDescription=Discarded (victim of GPU error/recovery) (00000005:kIOGPUCommandBufferCallbackErrorInnocentVictim)}
(
)
2024-07-10 11:01:22.403539+0800 不愁物联网[26648:1375452] Execution of the command buffer was aborted due to an error during execution. Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
2024-07-10 11:01:22.403556+0800 不愁物联网[26648:1375452] Execution of the command buffer was aborted due to an error during execution. Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)
2024-07-10 11:01:22.403586+0800 不愁物联网[26648:1375452] [SceneKit] Error: Main command buffer execution failed with status 5, error: Error Domain=MTLCommandBufferErrorDomain Code=3 "Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)" UserInfo={NSLocalizedDescription=Caused GPU Address Fault Error (0000000b:kIOGPUCommandBufferCallbackErrorPageFault)}
(
)
I have had an app on the App Store for many years enabling users to post text into clouds in augmented reality. Yet last week, abruptly, upon installing the app on the iPhone, the screen started going totally dark and a list of barely comprehensible logs came up, of the kind:
ARSCNCompositor <0x300ad0e00>: ARSCNCompositor (0, 0) initialization failed. Matting is not set up properly.
many times, then
RWorldTrackingTechnique <0x106235180>: Unable to update pose [PredictorFailure] for timestamp 870.392108
ARWorldTrackingTechnique <0x106235180>: Unable to predict pose [1] for timestamp 870.392108
again several times and then:
ARWorldTrackingTechnique <0x106235180>: SLAM error callback: Error Domain=Slam Error Code=7 "Non fatal error occurred due to significant drop in a IMU data" UserInfo={NSDescription=Non fatal error occurred due to significant drop in a IMU data, NSLocalizedFailureReason=SlamEngineNodeGroup Failure: IMU issue: gyro data stream verification failed [Significant data drop]. Failed on timestamp: 870.413247, Last known timestamp: 865.350198, Delta: 5.063049, System timestamp: 870.415781, Delta between system and frame: 0.002534. }
and then again the pose issues several times.
I hoped the new beta version would have solved the issue, but that was not the case. Unfortunately I do not know whether this depends on the beta version or on some other issue, given that the app cannot be installed on the Mac simulator.
So I am trying to create a certain number of spheres in a SceneKit scene based on the number of objects in a list. I think I would put addChildNode in a for loop, but how would I assign them all to the same type of model? Especially because sometimes the list will be empty, so the model would not need to show up at all.
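A minimal sketch of the shared-geometry loop, with items and the spacing as hypothetical stand-ins for your own data and layout:

import UIKit
import SceneKit

// Minimal sketch: one shared SCNSphere geometry, one node per list item.
func addSpheres(for items: [String], to scene: SCNScene) {
    let sphereGeometry = SCNSphere(radius: 0.05)
    sphereGeometry.firstMaterial?.diffuse.contents = UIColor.systemBlue

    for (index, _) in items.enumerated() {
        let node = SCNNode(geometry: sphereGeometry)   // geometry can be shared between nodes
        node.position = SCNVector3(Float(index) * 0.2, 0, 0)
        scene.rootNode.addChildNode(node)
    }
    // An empty list simply adds no nodes, so nothing shows up.
}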
I am trying to make an app that uses SceneKit to display some 3D models, but I started it from the regular app template. When I try to build I get this error: shell script build rule for "/somePath/Scene.scn" must declare at least one output file.
The .scn file is being provided to a view; is that not an output? Or is there some formatting issue that I need to solve?
I have a visionOS app that utilizes DrawableQueue and CADisplayLink to update an Entity, a TextureResource tied to the drawable, and a Material that uses that TextureResource. The TextureResource gets updated when a video frame is ready. Material properties can get updated from the video or from other sources.
Current process: when each video frame is ready, we get the next drawable, render to it, present it, and make an Entity update (e.g. transform). However, I’m experiencing jitter in the rendered content where it seems that the updates to the entity and the drawable being presented are milliseconds off from each other.
Should I be using Drawable.presentOnSceneUpdate() to ensure all updates happen in the same update cycle? And if so, do you have any additional details on how to correctly use this function (the docs are unclear)?
Hello, I'm trying to move my app to visionOS. My app is used by pilots to study airplane systems: it is a 3D airplane cockpit built with SceneKit, and I use SpriteKit scenes to animate the cockpit instruments.
SceneKit allows applying a SpriteKit scene as a material, so I could easily animate all the different instruments and indications there, but I can't find this option in Reality Composer Pro. Is this possible? Any suggestions I can look into to animate and simulate instruments?
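For reference, a minimal sketch of the SceneKit-side technique described above (a live SpriteKit scene assigned as a material's contents animates on the 3D surface):

import SceneKit
import SpriteKit

// Minimal sketch: SceneKit renders the SKScene as a live texture on the geometry.
let instrumentScene = SKScene(size: CGSize(width: 512, height: 512))
instrumentScene.backgroundColor = .black
// ... add SKNodes / SKActions here to animate a gauge ...

let panel = SCNNode(geometry: SCNPlane(width: 0.2, height: 0.2))
panel.geometry?.firstMaterial?.diffuse.contents = instrumentScene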
Topic: Spatial Computing > Reality Composer Pro
Tags: SpriteKit, SceneKit, Reality Composer Pro, visionOS
We have a requirement to add a USDZ file to a UIView, show its content, archive the data, and save it to a file. When the user opens the file, we need to unarchive the USDZ content, bind it to the UIView, and show it to the user. Initially, we created an SCNScene object by passing the USDZ file URL, like below.
do {
    usdzScene = try SCNScene(url: usdzUrl)
} catch let error as NSError {
    print(error)
}
Since SCNScene supports the NSSecureCoding protocol, we directly archive that object, save it to a file, load it back from the file, and recreate the SCNScene object using NSKeyedUnarchiver.
But for some files, we noticed high memory consumption while archiving the SCNScene object using the line below.
func encode(with coder: NSCoder) {
    coder.encode(self.scnScene, forKey: "scnScene")
}
File reference link: toy_drummer_idle.usdz
When we analysed the Apple documentation (check the Discussion section), it says the .scn file extension is faster for SceneKit to process than .usdz.
So we used SCNScene's write(to:) feature to create an .scn file from the given USDZ file.
After that, when we archive the SCNScene object that was created from the .scn file URL, the archive process is much faster and does not consume much memory. It is really much faster than the previous case.
But unfortunately, the SCNScene write method takes a lot of time for this conversion, memory usage climbs as well, and it can cause the app to crash.
I checked the output file size too. The given USDZ file is 18 MB and the generated .scn file is 483 MB. But the SCNScene archive process is very fast.
Please analyse this case and provide some guidance on how we can optimise this behaviour. I really appreciate your feedback.
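For context, a hedged sketch of the archive/unarchive round trip described above (scene and archiveURL are stand-ins, and the call is assumed to run inside a throwing context):

import SceneKit

// Hedged sketch: secure-coding archive of an SCNScene to disk and back.
func archiveAndRestore(scene: SCNScene, at archiveURL: URL) throws -> SCNScene? {
    let data = try NSKeyedArchiver.archivedData(withRootObject: scene, requiringSecureCoding: true)
    try data.write(to: archiveURL)

    let loaded = try Data(contentsOf: archiveURL)
    return try NSKeyedUnarchiver.unarchivedObject(ofClass: SCNScene.self, from: loaded)
}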
Full Code:
import UIKit
import SceneKit

class ViewController: UIViewController {

    var scnView: SCNView?
    var usdzScene: SCNScene?
    var scnScene: SCNScene?

    lazy var exportButton: UIButton = {
        let btn = UIButton(type: UIButton.ButtonType.system)
        btn.tag = 1
        btn.backgroundColor = UIColor.blue
        btn.addTarget(self, action: #selector(buttonPressed(_:)), for: .touchUpInside)
        btn.setTitle("USDZ to SCN", for: .normal)
        btn.setTitleColor(.white, for: .normal)
        btn.layer.borderColor = UIColor.gray.cgColor
        btn.titleLabel?.font = .systemFont(ofSize: 20)
        btn.translatesAutoresizingMaskIntoConstraints = false
        return btn
    }()

    func deleteTempDirectory(directoryName: String) {
        let tempDirectoryUrl = URL(fileURLWithPath: NSTemporaryDirectory())
        let tempDirectory = tempDirectoryUrl.appendingPathComponent(directoryName, isDirectory: true)
        if FileManager.default.fileExists(atPath: URL(string: tempDirectory.absoluteString)!.path) {
            do {
                try FileManager.default.removeItem(at: tempDirectory)
            } catch let error as NSError {
                print(error)
            }
        }
    }

    func createTempDirectory(directoryName: String) -> URL? {
        let tempDirectoryUrl = URL(fileURLWithPath: NSTemporaryDirectory())
        let toBeCreatedDirectoryUrl = tempDirectoryUrl.appendingPathComponent(directoryName, isDirectory: true)
        if !FileManager.default.fileExists(atPath: URL(string: toBeCreatedDirectoryUrl.absoluteString)!.path) {
            do {
                try FileManager.default.createDirectory(at: toBeCreatedDirectoryUrl, withIntermediateDirectories: true, attributes: nil)
            } catch let error as NSError {
                print(error)
                return nil
            }
        }
        return toBeCreatedDirectoryUrl
    }

    @IBAction func buttonPressed(_ sender: UIButton) {
        let scnFolderName = "SCN"
        let scnFileName = "3D"
        deleteTempDirectory(directoryName: scnFolderName)
        guard let scnDirectoryUrl = createTempDirectory(directoryName: scnFolderName) else { return }
        let scnFileUrl = scnDirectoryUrl.appendingPathComponent(scnFileName).appendingPathExtension("scn")
        guard let usdzScene else { return }
        let result = usdzScene.write(to: scnFileUrl, options: nil, delegate: nil, progressHandler: nil)
        if result {
            print("exporting process is success.")
        } else {
            print("exporting process is failed.")
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()

        let usdzUrl: URL? = Bundle.main.url(forResource: "toy_drummer_idle", withExtension: "usdz")
        guard let usdzUrl else { return }

        do {
            usdzScene = try SCNScene(url: usdzUrl)
        } catch let error as NSError {
            print(error)
        }

        guard let usdzScene else { return }

        scnView = SCNView(frame: .zero)
        guard let scnView else { return }
        scnView.translatesAutoresizingMaskIntoConstraints = false
        self.view.addSubview(scnView)
        self.view.addSubview(exportButton)

        NSLayoutConstraint.activate([
            scnView.leadingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.leadingAnchor),
            scnView.trailingAnchor.constraint(equalTo: view.safeAreaLayoutGuide.trailingAnchor),
            scnView.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor),
            scnView.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor, constant: -30),
            exportButton.widthAnchor.constraint(equalToConstant: 200),
            exportButton.heightAnchor.constraint(equalToConstant: 40),
            exportButton.centerXAnchor.constraint(equalTo: view.safeAreaLayoutGuide.centerXAnchor),
            exportButton.bottomAnchor.constraint(equalTo: view.safeAreaLayoutGuide.bottomAnchor),
        ])

        DispatchQueue.main.asyncAfter(deadline: .now() + 0.01) { [weak self] in
            guard let self else { return }
            loadModel(scene: usdzScene)
        }
    }

    func loadModel(scene: SCNScene) {
        guard let scnView else { return }
        scnView.autoenablesDefaultLighting = true
        scnView.scene = scene
        scnView.allowsCameraControl = true
    }
}

We're experiencing an issue with wrong SceneKit hit-testing results in iOS 17.2 compared with iOS 16.1, using either the Metal or the OpenGL ES 2 rendering engine.
Tapping on a 3D model to place an SCNNode:
// pointInScene: tapped point
let hitResults = sceneView.hitTest(pointInScene, options: nil)
return hitResults.first { $0.node.name?.compare("node_name") == .orderedSame }
I am running a modified RoomPlan app in my test environment, and I get two ARSessions active, sometimes more. It appears that the first one is created by SceneKit because it is related to ARSCNView. Who controls that, and what gets processed through it? I noticed that I get a lot of Session Interruptions from Sensor Failure when I am doing world tracking, and the first one happens almost immediately.
When the room capture delegates fire up, I start getting images delivered to the delegate via a second session that is collecting images. How do I tell which session is the SceneKit session and which one is the RoomCapture session on the fly when it comes through the delegate? Is there a difference in the object descriptor that I can use as a differentiator? Relying on the address of the ARSession buffer being different is okay if you get your timing right. It wasn't clear from any of the documentation that there would be TWO or more ARSessions delivering data through the delegates. The books on the use of ARKit are not much help in determining the partition of responsibilities between the origins. The buffer arrivals at the functions supported by the delegates do not have a clear delineation of which function is delivered through which delegate, at least not one discernible from the highly fragmented documentation provided by the developer document library. Can someone give me some guidance here? Are there sources for CLEAR documentation of what is delivered via which delegate for the various interfaces?
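For reference, a hedged sketch of telling the sessions apart by object identity in a shared delegate, assuming you keep references to both owners and that RoomCaptureSession exposes its underlying ARSession via its arSession property:

import ARKit
import RoomPlan

// Hedged sketch: route frames by comparing session identity against known owners.
final class SessionRouter: NSObject, ARSessionDelegate {
    weak var arSCNView: ARSCNView?
    weak var roomCaptureSession: RoomCaptureSession?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if session === arSCNView?.session {
            // Frames delivered by the ARSCNView-backed (SceneKit) session.
        } else if session === roomCaptureSession?.arSession {
            // Frames delivered by RoomPlan's capture session.
        }
    }
}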
Hi,
My app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker are functional in volumetric scenes.
Attempting to use them causes the app to crash with NSInternalInconsistencyException: Presentations are not permitted within volumetric window scenes.
This seems rather limiting. Is there a way either of these components can be utilized? I could build a separate "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open.
Thank you
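For reference, a hedged workaround sketch: a flat swatch row uses no presentation at all, so it can sit inside the volumetric window next to the model (SwatchRow and selectedColor are made-up names, and the binding would drive the material color in your RealityKit content):

import SwiftUI

// Hedged sketch: a presentation-free color control for a volumetric scene.
struct SwatchRow: View {
    @Binding var selectedColor: Color
    let swatches: [Color] = [.red, .orange, .yellow, .green, .blue, .purple, .white]

    var body: some View {
        HStack {
            ForEach(swatches, id: \.self) { color in
                Button {
                    selectedColor = color
                } label: {
                    Circle().fill(color).frame(width: 32, height: 32)
                }
                .buttonStyle(.plain)
            }
        }
    }
}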