Hello everyone, I'm a new developer and I'm still learning the foundations of Swift and SwiftUI while building my first app. Today I wanted to ask how to implement AR Quick Look views inside my app. I want to be able to dynamically preview AR objects in a dedicated view; however, I don't seem to have understood where and how to locate AR objects inside my project.
I tried including them in the Assets folder of the project, in the Resources folder, and in the main folder of my project alongside the MyAppApp.swift file. None of these approaches worked: none of the objects was ever located. I made sure to specify the path to the files every time, but somehow the location isn't recognized. I also tried giving no path so that the app would search for the files in their default location (which I apparently haven't grasped yet), but that attempt failed as well.
I don't have the code sample on me at the moment, but I will write a follow-up comment on this post to show what I wrote, in case anyone is interested in debugging my code. Meanwhile, if anyone would be so kind as to point me to the relevant support article, or to comment below with the sample code they used in their app, I would very much appreciate it, so that I can start debugging. Thank you for reading this, I appreciate you.
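For reference, here is a minimal sketch of one common setup, under the assumption that the .usdz file (named robot.usdz here, a placeholder) is added to the app target itself, so Xcode copies it into the app bundle via Copy Bundle Resources, rather than into the asset catalog. The view type and coordinator below are illustrative, not a fixed convention.

import SwiftUI
import QuickLook
import ARKit

// A minimal sketch (names are placeholders): "robot.usdz" must be added to the
// app target so it ends up in the app bundle; the asset catalog is not the
// right place for a Quick Look .usdz.
struct ARQuickLookView: UIViewControllerRepresentable {
    // If this comes back nil, the file is not in the bundle, which would match
    // the "object was never located" symptom.
    let modelURL = Bundle.main.url(forResource: "robot", withExtension: "usdz")

    func makeUIViewController(context: Context) -> QLPreviewController {
        let controller = QLPreviewController()
        controller.dataSource = context.coordinator
        return controller
    }

    func updateUIViewController(_ controller: QLPreviewController, context: Context) {}

    func makeCoordinator() -> Coordinator { Coordinator(url: modelURL) }

    final class Coordinator: NSObject, QLPreviewControllerDataSource {
        let url: URL?
        init(url: URL?) { self.url = url }

        func numberOfPreviewItems(in controller: QLPreviewController) -> Int {
            url == nil ? 0 : 1
        }

        func previewController(_ controller: QLPreviewController,
                               previewItemAt index: Int) -> QLPreviewItem {
            // ARQuickLookPreviewItem enables the AR tab in the Quick Look UI.
            ARQuickLookPreviewItem(fileAt: url!)
        }
    }
}

You would then present ARQuickLookView from any SwiftUI view, for example in a sheet. The key point is that the file has to be a bundle resource (target membership checked when adding the file) so Bundle.main can find it without any hard-coded path.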
USDZ
USDZ is a 3D file format that shows up as AR content on a website.
Posts under USDZ tag
AR Quick Look is suddenly grayed out on my iPhone 15 Pro. I bought the phone new recently and it was working great. Two days ago I updated to iOS 18.1.4; AR mode kept opening, but I started getting a "move iPhone over surface" message and objects wouldn't detect surfaces correctly. I then updated to iOS 18.5, and now when I open models in Quick Look, AR is completely grayed out.
Can someone help me fix or diagnose the issue?
Thank you
Hello! I’m familiar with the discussion on “Sending messages to the scene”, and I’ve successfully used that code.
However, I have several instances of the same model in my scene.
Is it possible to make only one specific model respond to a notification?
For example, can I pass something like RealityKit.NotificationTrigger.SourceEntity in userInfo or use another method to target just one instance?
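For reference, the posting side I've been using looks roughly like this; the notification name and userInfo keys are the ones Reality Composer Pro's Notification trigger listens for, while the function name and the "StartAnimation" identifier below are my own placeholders.

import Foundation
import RealityKit

// Sketch of the posting side (function name and identifier are placeholders).
// The scene can be taken from any loaded entity via `entity.scene`.
func postNotificationTrigger(named identifier: String, in scene: RealityKit.Scene) {
    NotificationCenter.default.post(
        name: Notification.Name("RealityKit.NotificationTrigger"),
        object: nil,
        userInfo: [
            "RealityKit.NotificationTrigger.Scene": scene,
            "RealityKit.NotificationTrigger.Identifier": identifier
        ]
    )
}

// Usage: postNotificationTrigger(named: "StartAnimation", in: entity.scene!)

As far as I can tell there is no documented per-entity key such as SourceEntity, so the workaround I can think of is to give each instance its own behavior with a unique notification identifier and post the identifier that matches the instance you want to target.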
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: USDZ, Reality Composer, RealityKit, visionOS
In the Creating A 3D Application With Hydra Rendering tutorial on the Apple Developer website, on the last step where I execute this command:
cmake -S ~/Users/macuser/CreatingA3DApplicationWithHydraRendering/ -B ~/Users/macuser/CreatingA3DApplicationWithHydraRendering/
I keep getting an error:
CMake Error at CMakeLists.txt:5 (include):
include could not find requested file:
/Users/macuser/USDInstall/bin/pxrConfig.cmake
I've followed the instructions in the README.md file included with the project files at least five times, and I've also tried moving the pxrConfig.cmake file around and copying it into different folders before re-running the command, but I still couldn't generate the files needed to compile and render the HydraPlayer renderer. How do I get CMake to generate the Xcode project for HydraPlayer?
I use SCNScene's write(to:) to export a scene as a ".usdz" file, then reopen the ".usdz" with SCNScene(url:), and all the node colors fade.
If I export and load only once, the fading is not obvious. After repeating the round trip 4 or 5 times, it's obvious that the colors no longer match the originals and have become much lighter.
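For anyone who wants to reproduce this, a minimal sketch of the round trip described above, assuming an existing SCNScene and using temporary files (the function is illustrative, not my exact code):

import SceneKit
import Foundation

// Repeatedly export to .usdz and reload; after 4-5 iterations the node colors
// reportedly drift noticeably lighter than the originals.
func roundTrip(_ scene: SCNScene, iterations: Int) throws -> SCNScene {
    var current = scene
    for i in 0..<iterations {
        let url = FileManager.default.temporaryDirectory
            .appendingPathComponent("roundtrip-\(i).usdz")
        _ = current.write(to: url, options: nil, delegate: nil, progressHandler: nil)
        current = try SCNScene(url: url, options: nil)
    }
    return current
}

One possible explanation, though I haven't confirmed it, is a color-space (sRGB vs. linear) conversion applied to material colors on each export, which would compound across round trips.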
Hello everyone,
I've been trying for a few weeks now to convert a sequential series of meshes into a stop-motion animation in USDZ format.
In Unreal Engine, I’ve already figured out how to transform the sequential series of individual meshes into a smooth animation using the node system and arrays.
Unfortunately, that node-based animation logic cannot be exported to USDZ from either Unreal or Blender.
Because of this, I have tried several other methods to incorporate the animation logic. Here’s what I’ve tried so far:
I attempted to create the animation in Blender with Render-/Viewports and mapping it to keyframes. However, in my experience, Viewports are not supported in the conversion.
I tried aligning the vertices of individual objects and merging the frames using the Shrinkwrap modifier in Blender, then setting up a morph animation with keyframes. However, because the individual meshes are too different, this results in artifacts, and manually editing each mesh is too difficult for me to handle.
I placed all individual meshes at the same position and animated them sequentially by scaling them from 0 to 100 in keyframes (Frame 1 is visible for 10 frames, then scales down at frame 11, while Frame 2 becomes visible at frame 11, and so on). I also adjusted the keyframes so that the scaling happens in a "constant" manner rather than the default Bezier or linear interpolation. I then converted this animation to .abc, and the result initially looked good. However, some information is lost when converting it with OpenUSD. The animation does not maintain its intended jump-like behavior in USDZ format, and instead, the scaling of individual files is visible in the animation.
I tried using a Blender add-on (StepMotion), which allows the animation to be exported as .abc, but it can only be read in Blender or Unreal. Even in the preview, the animation is not displayed correctly, so converting the animation logic does not work either.
Unfortunately, I have no alternative way to create the animation, as the individual frames have been provided to me as meshes. So far, I haven’t found a way to implement this successfully.
I would be very grateful for any tips or ideas, as I am running out of options on how to make this work.
Thanks in advance!
Topic: Spatial Computing
SubTopic: General
Tags: Core Animation, Reality Converter, Visual Design, USDZ
Hi, is it possible to attach text data or other information (e.g., wall color) to RoomPlan API elements and export it to USDZ after a LiDAR scan?
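For context, a hedged sketch of one way to approach this: as far as I know there is no supported way to embed arbitrary text inside the USDZ that RoomPlan exports, so the geometry can go into the USDZ while the custom data (wall color notes, etc.) goes into a sidecar file keyed by the surface identifiers. The type and file names below are placeholders.

import Foundation
import RoomPlan

// Custom per-wall data kept outside the USDZ (illustrative type).
struct WallNote: Codable {
    let surfaceID: UUID
    let note: String          // e.g. "wall color: RAL 9010"
}

func export(_ room: CapturedRoom, to folder: URL) throws {
    // Geometry from the LiDAR scan goes into the USDZ.
    try room.export(to: folder.appendingPathComponent("room.usdz"),
                    exportOptions: .parametric)

    // Custom information goes into a sidecar JSON, keyed by wall identifiers.
    let notes = room.walls.map { WallNote(surfaceID: $0.identifier, note: "TODO") }
    let data = try JSONEncoder().encode(notes)
    try data.write(to: folder.appendingPathComponent("room-notes.json"))
}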
I have a USDZ 3D model that contains a running-man animation, and I manage to run it well using this code:
import SwiftUI
import RealityKit

struct ContentView: View {
    @State var animationResource: AnimationResource?
    @State var animationDefinition: AnimationDefinition?
    @State var manPlayer = Entity()
    @State var speed: Float = 0.5

    var body: some View {
        VStack {
            RealityView { content in
                if let item = try? await Entity(named: "run.usdz") {
                    manPlayer = item
                    content.add(manPlayer)
                }
            }
            HStack {
                Button(action: playThis) {
                    Text("Play")
                }
                Button(action: increaseSpeed) {
                    Text("increase Speed (+)")
                }
                Button(action: decreaseSpeed) {
                    Text("decrease Speed (-)")
                }
            }
        }
    }

    func playThis() {
        animationResource = manPlayer.availableAnimations[0]
        animationDefinition = animationResource?.definition
        animationDefinition?.repeatMode = .repeat
        animationDefinition?.speed = speed
        // I could not add the definition back to the animation resource,
        // so I generated a new one.
        let animRes = try! AnimationResource.generate(with: animationDefinition!)
        manPlayer.playAnimation(animRes)
    }

    func increaseSpeed() {
        speed += 0.1
        animationDefinition?.speed = speed
        // Is this the best way to change speed? Isn't it as if I'm adding
        // more load to memory by generating another resource each time?
        let animRes = try! AnimationResource.generate(with: animationDefinition!)
        manPlayer.playAnimation(animRes)
    }

    func decreaseSpeed() {
        speed -= 0.1
        animationDefinition?.speed = speed
        let animRes = try! AnimationResource.generate(with: animationDefinition!)
        manPlayer.playAnimation(animRes)
    }
}
How can I control the speed of this animation while it is running, without having to regenerate a resource and replay the animation with every speed change? Every time the animation is replayed it starts over from frame zero, so it never completes its cycle before being cut off and restarted, which I'd prefer to avoid.
The USDZ file is here if you wish to try https://www.worldhotelity.com/stack/run.usdz
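One way to avoid regenerating the resource, assuming RealityKit's AnimationPlaybackController works the way I understand it: keep the controller that playAnimation(_:) returns and change its speed property while the clip runs. In the ContentView above that could look roughly like this (the playback property is new):

    // Added alongside the other @State properties in ContentView above.
    @State var playback: AnimationPlaybackController?

    func playThis() {
        guard let resource = manPlayer.availableAnimations.first else { return }
        // .repeat() loops the clip; the returned controller stays valid while it plays.
        playback = manPlayer.playAnimation(resource.repeat(),
                                           transitionDuration: 0.3,
                                           startsPaused: false)
        playback?.speed = speed
    }

    func increaseSpeed() {
        speed += 0.1
        playback?.speed = speed   // applies immediately, no restart from frame zero
    }

    func decreaseSpeed() {
        speed -= 0.1
        playback?.speed = speed
    }

Because the same controller keeps running, the clip is not cut back to frame zero on every speed change.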
I have a 3D model with morphing animation that works correctly in Blender.
I exported this model as a USDZ file and tried to display it in an Xcode-developed visionOS app, but the morphing animation does not play.
What I Have Tried:
Morphing animation works correctly in Blender.
After exporting to USDZ, the morphing animation does not play in the Xcode app.
Linear motion animations (such as object movement) work fine.
Behavior in Reality Converter:
GLB files do not display.
USDZ files load, but morphing animations do not play.
What I Want to Know:
Does RealityKit support morphing animations, and is there a way to play them in an Xcode-developed app?
If RealityKit does not support morphing animations, what alternative methods can be used to play them?
I am looking for a way to use the existing animations without recreating them.
Additional Information:
I have both the Blender file (where animations work) and the USDZ file (where animations do not play).
I am developing a visionOS app using Xcode.
Any advice or solutions would be greatly appreciated.
Thank you in advance!
In Reality Composer, it is possible to create child components and manipulate them within the hierarchy of a ModelEntity. Is there a way to create child components in other 3D modeling programs, such as Blender?
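For what it's worth, hierarchies built in Blender do survive the USDZ export, so here is a small sketch of how they typically come through on the RealityKit side (the file and node names are assumed):

import RealityKit
import simd

// "robot.usdz" and "Door" are assumed names from a hypothetical Blender export.
func loadRobot() async throws -> Entity {
    let robot = try await Entity(named: "robot.usdz")
    // Objects parented in Blender come through as child entities and keep their
    // names, so they can be found and manipulated individually.
    if let door = robot.findEntity(named: "Door") {
        door.transform.rotation = simd_quatf(angle: .pi / 2, axis: [0, 1, 0])
    }
    return robot
}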
Hello,
we have a RealityKit app that also runs on macOS via Catalyst.
For specific USD assets containing particle systems we have observed a reproducible crash.
Steps to reproduce:
Open Reality Composer Pro
Create new file
Create simple particle system (default one is fine)
Export as USDZ
Create project in Xcode
Call Entity.load(…) and pass in your USD
Running this on an Intel iMac with macOS Sequoia 15.3 will lead to a crash with the following console log:
validateWithDevice:4704: failed assertion `Render Pipeline Descriptor Validation
depthAttachmentPixelFormat (MTLPixelFormatDepth32Float) and stencilAttachmentPixelFormat (MTLPixelFormatStencil8) must match.
'
(The same assertion is printed several more times, interleaved, before the crash.)
Xcode version: 16.2.0
iMac 2020 3,8GHz Intel Core i7
macOS Sequoia 15.3
FB16477373
It would be great if this could be fixed quickly or a workaround provided, since it affects our production app. Thank you!
Hi everyone,
I’m experiencing a critical issue with USDZ files created in Reality Composer on an iPad 9th Generation (iPadOS 18.3). The files work perfectly on iPads from the 10th Generation onwards and on iPad Pros. However, on older devices like the iPad 9th Generation and older iPhones, QuickLook (file preview) crashes when opening them.
This is a major issue because these USDZ files are part of an exhibition where artworks are extended with AR elements via a web page. If some visitors cannot view the 3D content, it significantly impacts the experience.
What’s puzzling is that two years ago, we exported USDZ files from Reality Composer, made them available via a website, and they worked flawlessly on all devices, including older iPads and iPhones. Now, with the latest iPadOS, they consistently crash on older devices.
Has anyone encountered a similar issue? Are there known limitations with QuickLook on older devices, or is there a way to optimize the USDZ files to prevent crashes? Could this be related to changes in iPadOS or RealityKit? Any advice or workaround would be greatly appreciated!
Thanks in advance!
Looking for help on getting "On Tap" to work inside RCP for my AVP project. I can get it to work when using "on added to scene", but if I switch to "on tap", the audio will not play when the audio is attached to an entity in my scene. I'm using the same entity for the tap gesture that the audio is using for the emitter. Here is my workflow for the "on added to scene" behavior that works correctly, to help troubleshoot the non-working "on tap":
Behaviors: "on added to scene". action - timeline
Input target: check mark enabled, allowed all
Collision set to default
Audio library: source mp3 file
Channel Audio: resource mp3 file above
Timeline: Play Audio with mp3 file added
This setup in RCP allows my AVP project to launch correctly with audio "on added to scene". But when I switch the behavior to "on tap", the audio no longer plays and I cannot figure out why. I've tried several different options and nothing works. Please help!
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: Audio, USDZ, Reality Composer Pro, visionOS
Hello,
I have a usdz model created in Maya. It's supposed to have a splashing animation associated with it, and this can be viewed in Maya and Blender, but for some reason when I export it to usdz and then import it into my Reality Composer Pro project, the animation is missing. I expect an animation library to be created on the entity when I drag it in like any other usdz with animation data, but that is not the case.
Any help would be appreciated on this issue.
I have a very basic usdz file from this repo
I call loadTextures() after loading the usdz via MDLAsset. Inspecting the MDLTexture object I can tell it is assigning a colorspace of linear rgb instead of srgb although the image file in the usdz is srgb.
This causes the textures to ultimately render oversaturated.
In the code I later convert the MDLTexture to MTLTexture via MTKTextureLoader but if I set the srgb option it seems to ignore it.
This significantly impacts the usefulness of Model I/O if it can't load a simple usdz texture correctly. Am I missing something?
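For anyone who wants to reproduce or help debug this, a minimal sketch of the load path described above (the file name, the mesh/material traversal, and the .SRGB option are assumptions about the setup, not the exact code):

import ModelIO
import MetalKit

let device = MTLCreateSystemDefaultDevice()!   // assumes a Metal-capable device
let url = URL(fileURLWithPath: "model.usdz")   // placeholder path
let asset = MDLAsset(url: url)
asset.loadTextures()

// Pull the base color texture off the first mesh's first submesh material.
if let mesh = asset.object(at: 0) as? MDLMesh,
   let submesh = mesh.submeshes?.firstObject as? MDLSubmesh,
   let property = submesh.material?.property(with: .baseColor),
   let mdlTexture = property.textureSamplerValue?.texture {

    // Reported issue: the MDLTexture comes back tagged as linear RGB even though
    // the PNG inside the USDZ is sRGB, so the render ends up oversaturated.
    let loader = MTKTextureLoader(device: device)
    let mtlTexture = try? loader.newTexture(
        texture: mdlTexture,
        options: [.SRGB: NSNumber(value: true)]   // reportedly ignored here
    )
    print(mtlTexture?.pixelFormat ?? "no texture")   // expect an *_srgb format
}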
Thanks!
I have created a simple scene in Reality Composer (Composer, not Composer Pro).
It contains just a cube and a text item.
I convert this to a usdz file and load it into an ARKit Swift app.
Since iOS 18 / Xcode 16, the "text" element is not displayed at all.
The cube is displayed, anchors correctly, and can be moved, etc.
The output from usdchecker
➜ Desktop usdchecker GKTUHR1.6.3.usdz -v --arkit
Opening GKTUHR1.6.3.usdz
Checking layer <GKTUHR1.6.3.usdz>.
Checking package <GKTUHR1.6.3.usdz>
Checking prim </Root>.
Checking prim </Root/Scenes>.
Checking prim </Root/Scenes/Scene>.
Checking prim </Root/Scenes/Scene/Gravity>.
Checking prim </Root/Scenes/Scene/sceneGroundPlane>.
Checking prim </Root/Scenes/Scene/sceneGroundPlane/physicsMaterial>.
Checking prim </Root/Scenes/Scene/Children>.
Checking prim </Root/Scenes/Scene/Children/hello>.
Checking prim </Root/Scenes/Scene/Children/hello/Generated>.
Checking prim </Root/Scenes/Scene/Children/hello/Generated/Text>.
Checking prim </Root/Scenes/Scene/Children/hello/Generated/Text/Material>.
Checking prim </Root/Scenes/Scene/Children/hello/Generated/Text/Material/PBRShader>.
Checking shader </Root/Scenes/Scene/Children/hello/Generated/Text/Material/PBRShader>.
Checking prim </Root/Scenes/Scene/Children/hello/Children>.
Checking prim </Root/Scenes/Scene/Children/Box>.
Checking prim </Root/Scenes/Scene/Children/Box/Generated>.
Checking prim </Root/Scenes/Scene/Children/Box/Generated/Mesh0>.
Checking prim </Root/Scenes/Scene/Children/Box/Generated/Mesh0/Mesh0>.
Checking prim </Root/Scenes/Scene/Children/Box/Generated/Mesh0/Material>.
Checking prim </Root/Scenes/Scene/Children/Box/Generated/Mesh0/Material/PBRShader>.
Checking shader </Root/Scenes/Scene/Children/Box/Generated/Mesh0/Material/PBRShader>.
Checking prim </Root/Scenes/Scene/Children/Box/Children>.
Checking prim </Root/Scenes/Scene/Children/Box/PhysicsMaterial_Box>.
Found material bindings but no MaterialBindingAPI applied on the prim </Root/Scenes/Scene/sceneGroundPlane>. (fails 'MaterialBindingAPIAppliedChecker')
Found material bindings but no MaterialBindingAPI applied on the prim </Root/Scenes/Scene/Children/hello/Generated/Text>. (fails 'MaterialBindingAPIAppliedChecker')
Found material bindings but no MaterialBindingAPI applied on the prim </Root/Scenes/Scene/Children/Box>. (fails 'MaterialBindingAPIAppliedChecker')
Found material bindings but no MaterialBindingAPI applied on the prim </Root/Scenes/Scene/Children/Box/Generated/Mesh0>. (fails 'MaterialBindingAPIAppliedChecker')
Failed!
If I put an alpha image texture on a model created in Blender and run it on
RCP or visionOS, the rendering between the front and back due to alpha will result in an unintended rendering. Details are below.
I exported a USDC file of a Blender-created cylindrical object with a PNG (with alpha) texture applied to the inside, and then imported it into Reality Composer Pro.
When multiple objects that make extensive use of transparent textures are placed in front of and behind each other, the following behaviors were observed in the transparent areas:
・The transparent areas do not become transparent
・The transparent areas become transparent together with the image behind them
・The order of the images becomes incorrect
Best regards.
Topic: Spatial Computing
SubTopic: Reality Composer Pro
Tags: USDZ, RealityKit, Reality Composer Pro, visionOS
Hi,
since RealityKit 4 now supports Blend Shapes, I was wondering if there are any workflow or tooling recommendations for baking/exporting them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented and I would appreciate any hints. Thank you!
We have a native iOS app that supports the upload and display of USDZ files. It had been working great since beta (late 2022) and live launch (late 2023) until now, but recently we received reports from some users, so far on Max-model phones (14 and 15 Pro Max) at least. When they tap and launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users, the controls along the top of the player - X (close), AR | Object (toggle), and the share button - move too high up the screen and get stuck (untappable) behind the phone's top status bar (time, camera indicator, connection, battery).
This means that when they open a USDZ file in AR or 3D view, they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox, etc. on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app. So far we have replicated the issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested other 15 Pro Maxes on the same OS, as well as Pros, regular iPhones, and Minis, and they do not show the issue. You would think that a USDZ file is a USDZ file and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items move when you open the USDZ file from within our app, and why only for some users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenSHOT attached below of what is happening on one of the affected user's iPhone 15Pro Max.
In my VisionPro app, I'm facing a problem with loading USDZ models from a RealityKitBundle package, created using Reality Composer Pro.
It was working fine until I added more models to the package. As I added more models with large textures in the project, the app started to show them with texturing problems.
So, when I load the models from the RealityView using Entity(named:in), the mesh loads correctly, but all black, with no textures, as below:
However, when I load the same USDZ directly from the main bundle, using ModelEntity(named:in), it loads fine.
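For clarity, the two load paths being compared look roughly like this (entity and bundle names are placeholders, and realityKitContentBundle is the accessor generated by the default Reality Composer Pro package):

import SwiftUI
import RealityKit
import RealityKitContent   // the Reality Composer Pro package (name assumed)

struct ModelComparisonView: View {
    var body: some View {
        RealityView { content in
            // 1. From the Reality Composer Pro package: the mesh loads, but renders all black.
            if let fromPackage = try? await Entity(named: "MyModel", in: realityKitContentBundle) {
                content.add(fromPackage)
            }
            // 2. The same USDZ added directly to the app target: textures come through correctly.
            if let fromMainBundle = try? await ModelEntity(named: "MyModel.usdz", in: Bundle.main) {
                fromMainBundle.position.x = 0.3   // offset so both copies are visible
                content.add(fromMainBundle)
            }
        }
    }
}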
I know that large textures can cause memory issues, but for one single model I know it's not enough to cause a memory overflow on the Vision Pro. This USDZ model is about 40MB, with something around 800MB of texture memory (from the Reality Composer Pro Statistics tab).
I've built experiences on the Vision Pro with much heavier models, and they do show the same texture issues, but only after more than 3 huge models are enabled in the Reality scene. That's not the case here. The un-textured model appears right from the beginning, so it seems to me this is not a runtime issue on the device, but rather some issue in the packaging process from Reality Composer Pro to Xcode to the device. Am I correct?
I'm also using a basic Mac mini with M2 but only 8GB of RAM. Maybe that's the issue?
As I still want to use RealityComposerPro to build more dev-friendly and interesting applications, I'd really appreciate some guidance here!
Thanks in advance!