I'm an experienced SceneKit developer and I want to begin work on a new project using RealityKit, so the WWDC 2025 session "Bring your SceneKit project to RealityKit" was timely and much appreciated.
However, now I am finding that:
Blender does not properly support exporting armatures in usdc files, and usdc is really the only file format that should be used for creating 3D assets for RealityKit.
The option of exporting from Blender to fbx or some other intermediate format, and then converting that to usdc, is a challenge.
Apple's Reality Converter app, which supposedly can import and convert fbx files to usdc, is no longer available from Apple's website. An older copy I found on the Kodeco website requires Rosetta on Apple Silicon; moreover, this older copy does not in fact import fbx files, or anything else. I find it doesn't work at all.
Apple's Reality Composer Pro, at least as far as I can tell, only supports importing usdc - it is not a file conversion tool.
Alternatively, I am under the impression that Maya supports producing usdc files with armatures, but Maya costs over $2000 per year and I am skilled with Blender, so I believe strongly that I should be able to continue with Blender. Maya's expense and skillset simply shouldn't be a requirement for building RealityKit applications.
What are my options then, if any, to produce assets with armatures and armature based animations using Blender, and then bring them into RealityKit?
Imagine a native macOS app that acts as a "launcher" for a Java game.** For example, the "launcher" app might use the Swift Process API or a similar method to run the java command-line tool (let's assume the user has installed Java themselves) to run the game.
I have seen How to Enable Game Mode. If the native launcher app's Info.plist has the following keys set:
LSApplicationCategoryType set to public.app-category.games
LSSupportsGameMode set to true (for macOS 26+)
GCSupportsGameMode set to true
The launcher itself can cause Game Mode to activate if the launcher is fullscreened. However, if the launcher spawns a Java process that opens its own window, and that Java window is then fullscreened, Game Mode doesn't seem to activate. In this case activating Game Mode for the launcher itself is unnecessary; you'd expect Game Mode to activate when the actual game in the Java window is fullscreened.
Is there a way to get Game Mode to activate in the latter case?
** The concrete case I'm thinking of is a third-party Minecraft Java Edition launcher, but the issue can also be demonstrated in a sample project (FB13786152). It seems like the official Minecraft launcher is able to do this, though it's not clear how. (Is its bundle identifier hardcoded in the OS to allow for this? Changing a sample app's bundle identifier to be the same as the official Minecraft launcher gets the behavior I want, but obviously this is not a practical solution.)
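For reference, the launch mechanism in question is nothing more exotic than this (a sketch; the java path and jar argument are placeholders):

import Foundation

// Minimal launcher sketch: spawn the user's own Java installation to run the game.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/java") // assumes a user-installed Java
process.arguments = ["-jar", "/path/to/game.jar"]             // placeholder path
do {
    try process.run()
    process.waitUntilExit()
} catch {
    print("Failed to launch Java:", error)
}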
Hello,
Thank you for attending today’s Metal & game technologies group lab at WWDC25!
We were delighted to answer many questions from developers and energized by the community engagement.
We hope you enjoyed it and welcome your feedback.
We invite you to carry on the conversation here, particularly if your question appeared in Slido and we were unable to answer it during the lab.
If your question received feedback, let us know if you need clarification.
You may want to ask your question again in a different lab, e.g. the visionOS lab tomorrow.
(We realize that this can be confusing when frameworks interoperate)
We have a lot to learn from each other so let’s get to Q&A and make the best of WWDC25! 😃
Looking forward to your questions posted in new threads.
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is a non-viable option.
Other LiDAR based apps have been tested with this device and the results are the same. No points, or noisy point clouds in apps that allow a lower confidence threshold setting. On devices that exhibit this behavior the "Displaying a point cloud using scene depth" Apple sample app can be used to visualize the issue.
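For anyone reproducing this, the check our app performs amounts to something like the sketch below (simplified; the threshold parameter is illustrative):

import ARKit

// Fraction of the sceneDepth confidence map at or above a threshold. On the
// affected devices this stays near zero regardless of the environment.
func confidenceRatio(in frame: ARFrame, atLeast threshold: ARConfidenceLevel = .high) -> Float {
    guard let map = frame.sceneDepth?.confidenceMap else { return 0 }
    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    guard let base = CVPixelBufferGetBaseAddress(map)?.assumingMemoryBound(to: UInt8.self) else { return 0 }

    var passing = 0
    for y in 0..<height {
        let row = base + y * rowBytes
        for x in 0..<width where row[x] >= UInt8(threshold.rawValue) {
            passing += 1
        }
    }
    return Float(passing) / Float(width * height)
}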
First reports of this new behavior occurred as early as iOS 18.4.
Looking for recommendations on which team(s) at Apple to reach out to with these findings since the behavior manifests on only a small sample of devices.
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
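In case it's useful context for answers: the only workaround I can think of is choosing the ASTC block footprint per image before compressing, using a rough complexity heuristic like the sketch below, then passing the chosen block size to the compression step manually. This is my own assumption, not a documented workflow, and the thresholds and block sizes are guesses.

import Foundation
import CoreGraphics
import ImageIO
import UniformTypeIdentifiers

// Estimate texture "complexity" by comparing the losslessly re-encoded (PNG)
// byte count against the raw RGBA byte count: flat images compress very well
// and can tolerate a coarser ASTC block; detailed images get a finer one.
func suggestedASTCBlock(for url: URL) -> String? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else { return nil }

    let rawBytes = Double(image.width * image.height * 4)

    let data = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(
        data, UTType.png.identifier as CFString, 1, nil) else { return nil }
    CGImageDestinationAddImage(destination, image, nil)
    guard CGImageDestinationFinalize(destination) else { return nil }

    switch Double(data.length) / rawBytes {
    case ..<0.05: return "ASTC_12x12" // near-flat content
    case ..<0.3:  return "ASTC_8x8"
    default:      return "ASTC_4x4"   // high-detail content
    }
}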
Thanks!
I am trying to build some projects. Please check out my project.
Hi,
When analyzing our game in Instruments, I've always been confused by the two items "Drawable Present" and "Drawable Presented" in the GPU track. The timing of "Drawable Present" seems to be when the CPU calls commandBuffer.present(drawable), rather than when the actual encoding completes on the GPU. Also, what does "Drawable Presented" specifically mean? In our case, when a CPU stall occurs, the vsync interval appears to change in the next frame, and a surface that has already been rendered is not displayed. Why is this happening?
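To make the question concrete, here's roughly where the moments in question sit in a typical CAMetalLayer frame (a sketch; the comments mark where I assume the two Instruments items map):

import Metal
import QuartzCore

// A typical frame: present is encoded on the CPU, the GPU later finishes the
// command buffer, and the drawable reaches the screen at some vsync after that.
func drawFrame(layer: CAMetalLayer, queue: MTLCommandQueue) {
    guard let drawable = layer.nextDrawable(),
          let commandBuffer = queue.makeCommandBuffer() else { return }

    // ... encode render passes targeting drawable.texture ...

    drawable.addPresentedHandler { presented in
        // Fires once the frame is actually on screen; presumably "Drawable Presented".
        print("presented at \(presented.presentedTime)")
    }
    commandBuffer.addCompletedHandler { _ in
        // GPU execution finished; presentation may still be pending the next vsync.
    }
    commandBuffer.present(drawable) // the CPU-side call; presumably "Drawable Present"
    commandBuffer.commit()
}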
I am trying to move from AppKit to SwiftUI. As a learning project I am building a cellular-automata-style project based on Paterson's Worms. I am trying something similar to the EA game Worms? for the Commodore 64. There is a video on YouTube of the game running, but I'm not allowed to link it here.
The problem I have is that the animation is driven by a ruleset. When the automaton hits a configuration that is not in the ruleset, it is supposed to stop and ask the user. For each step the model returns either the next move, or nil to indicate the user needs to make a choice that will be sent back to the model and added to the ruleset.
My current approach, and I might be following the wrong path, is a ZStack where the bottom level is the grid, the middle level is the established worm segments, and the top level is either the animation of the next worm segment or the chooser for the user to pick the segment. I've only implemented the animation of the next worm segment. The idea is that when the model adds a segment, it is first animated at the top level and then displayed by the middle level; the top level then animates the next segment. I was animating the trim on the segment to draw the line.
If the current move is nil, then the middle level draws the segment. If the current move has a value, the animation draws it, and on completion sets the current move to nil so that the middle level draws it.
The problem I ran into was resetting the animation to draw the next segment. I've tried two approaches. In one, the completion resets the animation boolean variable, but I then need a manual step to set the next stage of the animation. The other uses the completion to set the next step, but then the animation doesn't run for that step and the display is always a step behind. I'm not sure how to both update the move and reset the animation at the same time.
I have uploaded a simplified version without the full grid and simplified model to GitHub (https://github.com/thomasrdean/AnimationTest). Is there any other way to reset the animation the than the completion so I can use the completion to retrieve the next step from the model?
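For reference, the pattern I'm aiming for looks roughly like this (a sketch assuming the iOS 17 / macOS 14 withAnimation completion; nextMove() is a stand-in for my real model API):

import SwiftUI

// Sketch: restart a trim animation per step, using the withAnimation completion
// to commit the finished segment and fetch the next move.
struct WormStepView: View {
    @State private var trimEnd: CGFloat = 0
    @State private var committed: [Path] = [] // segments the middle layer draws
    @State private var current: Path?         // segment being animated on top

    var body: some View {
        ZStack {
            ForEach(committed.indices, id: \.self) { i in
                committed[i].stroke(lineWidth: 3)
            }
            if let current {
                current.trimmedPath(from: 0, to: trimEnd).stroke(lineWidth: 3)
            }
        }
        .onAppear(perform: advance)
    }

    private func advance() {
        guard let move = nextMove() else { return } // nil: stop and ask the user
        current = move
        trimEnd = 0                                 // reset without animating
        withAnimation(.linear(duration: 0.5)) {
            trimEnd = 1                             // animate the new segment
        } completion: {
            committed.append(move)                  // middle layer takes over
            current = nil
            advance()                               // next step from the model
        }
    }

    // Placeholder for the ruleset-driven model described above.
    private func nextMove() -> Path? { nil }
}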
Hi,
It seems MSL is missing support for a clock() shader instruction, which is available in other graphics APIs such as Vulkan (VK_KHR_shader_clock) or OpenGL (ARB_shader_clock).
It is useful for counting the cost, in clock cycles, of some code inside a shader with much finer granularity than launching a micro-kernel with the same instructions and measuring the cycle cost from the CPU.
It would also be useful for MoltenVK to be able to support those extensions.
Thanks.
What is Game Mode?
Game Mode optimizes your gaming experience by giving your game the highest priority access to your CPU and GPU, lowering usage for background tasks. And it doubles the Bluetooth sampling rate, which reduces input latency and audio latency for wireless accessories like game controllers and AirPods.
See Use Game Mode on Mac
See Port advanced games to Apple platforms
How can I enable Game Mode in my game?
Add the Supports Game Mode property (GCSupportsGameMode) to your game's Info.plist and set it to true
Correctly identify your game’s Application Category with LSApplicationCategoryType (also Info.plist)
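For example (standard Info.plist XML, using the keys listed above):

<key>GCSupportsGameMode</key>
<true/>
<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>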
Note:
Enabling Game Mode makes your game eligible but is not a guarantee; the OS decides if it is ok to enable Game Mode at runtime
An app that enables Game Mode but isn’t a game will be rejected by App Review.
How can I disable Game Mode?
Set GCSupportsGameMode to false.
Note: On Mac, Game Mode is automatically disabled if the game isn't running full screen.
I can't create any breakpoints in Xcode after upgrading to macOS 15.4.
macOS: Version 15.4 (24E248)
visionOS Simulator: 2.3
Xcode: Version 16.2 (16C5032a)
My app works well without any breakpoints.
But if I create any breakpoint, it shows me this:
Couldn't find the Objective-C runtime library in loaded images.
Message from debugger: The LLDB RPC server has crashed. You may need to manually terminate your process. The crash log is located in ~/Library/Logs/DiagnosticReports and has a prefix 'lldb-rpc-server'. Please file a bug and attach the most recent crash log.
I am developing a visionOS app. I am now very interested in Metal and Compositor Services, but I have not explored them in depth. I know that Metal offers a higher degree of control. I am wondering whether using Compositor Services gives me fewer AR capabilities than RealityKit (such as scene reconstruction and understanding, hover effects, etc.).
I'm having a heck of a time getting this to work. I'm trying to add an event notification at the end of a timeline animation to trigger something in code, but I'm not receiving the notification from RC Pro. I've watched that "Compose Interactive 3D Content" video quite a few times now and have tried many different ways. RC Pro has the correct ID names on the notifications. I'm not a programmer at all, just a lowly 3D artist. Here is my code...
import SwiftUI
import RealityKit
import RealityKitContent

extension Notification.Name {
    static let button1Pressed = Notification.Name("button1pressed")
    static let button2Pressed = Notification.Name("button2pressed")
    static let button3Pressed = Notification.Name("button3pressed")
}

struct MainButtons: View {
    @State private var transitionToNextSceneForButton1 = false
    @State private var transitionToNextSceneForButton2 = false
    @State private var transitionToNextSceneForButton3 = false
    @Environment(AppModel.self) var appModel
    @Environment(\.dismissWindow) var dismissWindow

    // Notification publishers for each button
    private let button1PressedReceived = NotificationCenter.default.publisher(for: .button1Pressed)
    private let button2PressedReceived = NotificationCenter.default.publisher(for: .button2Pressed)
    private let button3PressedReceived = NotificationCenter.default.publisher(for: .button3Pressed)

    var body: some View {
        ZStack {
            RealityView { content in
                // Load your RC Pro scene that contains the 3D buttons.
                if let immersiveContentEntity = try? await Entity(named: "MainButtons", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                }
            }
            // Optionally attach a gesture if you want to debug a generic tap:
            .gesture(
                TapGesture().targetedToAnyEntity().onEnded { value in
                    print("3D Object tapped")
                    _ = value.entity.applyTapForBehaviors()
                    // Do not post a test notification here—rely on RC Pro timeline events.
                }
            )
        }
        .onAppear {
            dismissWindow(id: "main")
            // Remove any test notification posting code.
        }
        // Listen for distinct button notifications.
        .onReceive(button1PressedReceived) { (output) in
            print("Button 1 pressed notification received")
            transitionToNextSceneForButton1 = true
        }
        .onReceive(button2PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 2 pressed notification received")
            transitionToNextSceneForButton2 = true
        }
        .onReceive(button3PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 3 pressed notification received")
            transitionToNextSceneForButton3 = true
        }
        // Present next scenes for each button as needed. For example, for button 1:
        .fullScreenCover(isPresented: $transitionToNextSceneForButton1) {
            FacilityTour()
                .environment(appModel)
        }
        // You can add additional fullScreenCover modifiers for button 2 and 3 transitions.
    }
}
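In case it helps whoever answers: from re-watching the session, my understanding (which may be wrong) is that RC Pro timeline Notification actions are all posted under a single shared name, with the ID typed in RC Pro carried in userInfo, rather than as individually named notifications like the ones I declared above. Something like this sketch:

import SwiftUI
import RealityKit

// Sketch based on my reading of the "Compose interactive 3D content in
// Reality Composer Pro" session: all timeline Notification actions arrive
// under one shared notification name.
private let timelineNotifications = NotificationCenter.default.publisher(
    for: Notification.Name("RealityKit.NotificationTrigger")
)

// Replacing the three .onReceive modifiers above with a single one:
// .onReceive(timelineNotifications) { output in
//     guard let id = output.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String else { return }
//     switch id {
//     case "button1pressed": transitionToNextSceneForButton1 = true
//     case "button2pressed": transitionToNextSceneForButton2 = true
//     case "button3pressed": transitionToNextSceneForButton3 = true
//     default: break
//     }
// }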
If I want to edit an image in the Preview app, there are only options to rotate left and right in 90-degree steps. There is no option to rotate by an arbitrary angle. Please look into this and provide that option in the next update.
Hello everyone,
I created a game that is a doctor operation simulator. In this game, the user performs different operations on different body parts such as the face, elbow, foot, etc. The game is live on the Play Store; here is the link:
https://play.google.com/store/apps/details?id=com.nexthope.doctor.hospital.surgery.games&hl=en_US&gl=US
But it was rejected by the App Store review team. Similar games are already on the App Store. Please help me with this issue.
Hi,
I've been working on a game for the past few years, first with Unreal Engine 4 and now Unreal Engine 5.4.4.
I'm experiencing an unusual crash on startup on some devices. The crash is so fast that sometimes I'm barely able to see the launch screen before the app closes itself.
I got an EXC_CRASH (SIGABRT), so I suspect a null-pointer dereference, but I can't quite wrap my head around the cause. I think something is messed up in the packaging of the app, but this is where I'm blocked; I'm not that accustomed to Apple devices.
If someone has some advice to give, please share it; any help will be very valuable. Many thanks.
Log :
Crash log on iPad
Crash dump:
Crashed Thread: 0 tid_103 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGILL)
Exception Codes: KERN_PROTECTION_FAILURE at 0x000000016d3bfea0
Exception Codes: 0x0000000000000002, 0x000000016d3bfea0
Termination Reason: Namespace SIGNAL, Code 4 Illegal instruction: 4
Terminating Process: Unity [7873]
VM Region Info: 0x16d3bfea0 is in 0x169bbc000-0x16d3c0000; bytes after start: 58736288 bytes before end: 351
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
mapped file 169b00000-169ba8000 [ 672K] rw-/rwx SM=PRV Object_id=4d22156e
GAP OF 0x14000 BYTES
---> STACK GUARD 169bbc000-16d3c0000 [ 56.0M] ---/rwx SM=NUL stack guard for thread 0
Stack 16d3c0000-16dbbc000 [ 8176K] rw-/rwx SM=SHM thread 0
Thread 0 Crashed:: tid_103 Dispatch queue: com.apple.main-thread
0 libsystem_platform.dylib 0x1932ee7ac _platform_memset + 108
1 libmonobdwgc-2.0.dylib 0x33977abdc GC_clear_stack_inner + 60
2 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
3 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
4 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
5 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
6 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
7 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
8 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
9 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
10 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
11 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
12 libmonobdwgc-2.0.dylib 0x33976b518 GC_clear_stack + 76
13 libmonobdwgc-2.0.dylib 0x33973c074 mono_gc_alloc_obj + 112
14 libmonobdwgc-2.0.dylib 0x3396e0db4 mono_object_new_specific_checked + 72
15 libmonobdwgc-2.0.dylib 0x3396e116c ves_icall_object_new_specific + 28
Solo Leveling: Arise is a game, but Game Mode is not switching on, and the game crashes every time while playing.
Hey! I'm facing an issue with Equipment collision when adding and moving TabletopKit equipment with different pose rotations.
Let me share a very simple TabletopKit setup as an example:
Table
struct Table: Tabletop {
    var shape: TabletopShape = .rectangular(width: 1, height: 1, thickness: 0.01)
    var id: EquipmentIdentifier = .tableID
}
Board
struct Board: Equipment {
    let id: EquipmentIdentifier = .boardID
    var initialState: BaseEquipmentState {
        .init(
            parentID: .tableID,
            seatControl: .restricted([]),
            pose: .init(position: .init(), rotation: .zero),
            boundingBox: .init(center: .zero, size: .init(1.0, 0, 1.0))
        )
    }
}
Equipment
struct Object: EntityEquipment {
    var id: ID
    var size: SIMD2<Float>
    var position: SIMD2<Double>
    var rotation: Float
    var entity: Entity
    var initialState: BaseEquipmentState

    init(id: Int, size: SIMD2<Float>, position: SIMD2<Double>, rotation: Float) {
        self.id = EquipmentIdentifier(id)
        self.size = size
        self.position = position
        self.rotation = rotation
        self.entity = objectEntity // an Entity loaded elsewhere
        self.initialState = .init(
            parentID: .boardID,
            seatControl: .any,
            pose: .init(
                position: .init(x: position.x, z: position.y),
                rotation: .degrees(Double(rotation))
            ),
            entity: entity
        )
    }
}
Setup
class GameSetup {
    var setup: TableSetup

    init(root: Entity) {
        setup = TableSetup(tabletop: Table())
        setup.add(equipment: Board())
        setup.add(seat: PlayerSeat())

        let object1 = Object(
            id: 2,
            size: .init(x: 0.1, y: 0.1),
            position: .init(x: 0.1, y: -0.1),
            rotation: 0
        )
        let object2 = Object(
            id: 3,
            size: .init(x: 0.2, y: 0.1),
            position: .init(x: -0.1, y: -0.1),
            rotation: 90
        )
        setup.add(equipment: object1)
        setup.add(equipment: object2)
    }
}
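(For completeness, the snippets above assume a small helper for the shared identifiers, omitted from the original; a sketch, with arbitrary raw values:)

// Hypothetical helper assumed by the snippets above.
extension EquipmentIdentifier {
    static var tableID: Self { .init(0) }
    static var boardID: Self { .init(1) }
}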
The issue
When I add two pieces of equipment with different rotation poses, the collisions between them behave oddly. If one is rotated 90º and the other 0º, for example, the former intersects with the latter as if its bounding box had not been rotated.
But if both pieces have the same rotation (e.g. both 0º or both 90º), there's no collision issue at all, which seems to indicate their bounding boxes were correctly rotated.
I'd really appreciate some help understanding if this is a bug or if I'm just missing something.
Thanks in advance!
Topic:
Graphics & Games
SubTopic:
TabletopKit
Tags:
Graphics and Games
RealityKit
visionOS
TabletopKit
For many years, I've noticed that although in native code I can handle continuous and simultaneous Apple Pencil and touch inputs using UIKit, Safari and WKWebView's PointerEvents only seem to allow one input type at a time: Apple Pencil down blocks touch input until it is lifted, and touch input blocks Apple Pencil input. It's as though requiresExclusiveTouchType has been set in the underlying WebKit implementation.
There's decades of research (e.g. https://dl.acm.org/doi/10.1145/1866029.1866036) and several existing native applications in production showing that multimodal inputs open up many unique and useful applications and interactions. Even simple "hold object with finger" + "draw with stylus" controls are the norm.
I recently built a native application using simultaneous multimodal inputs, but it is impossible to port to the web due to this unexpected behavior of PointerEvents (touch events and mouse events exhibit the same behavior). I've researched and attempted every possible flag, change, and CSS rule to get this working, but I believe the behind-the-scenes implementation is what's blocking simultaneous touch types.
This is unexpected and undesired behavior because it's inconsistent with the native behavior. If it's unintended, fixing it should be a high priority for enabling better user experiences on iPad. If it's intended, I don't believe that's reasonable (even if the capability is more complex and mostly used by advanced applications). Please expose a way to support simultaneous touch types in iPadOS/iOS in both Safari and WKWebView.
At minimum, may we have a discussion on how to support the desired behavior? The simplest solution I can think of is a WebKit-specific boolean in Safari and WKWebView, analogous to requiresExclusiveTouchType, that defaults to true (keeping the current exclusive behavior) and can be set to false to get the more flexible behavior I'm expecting.
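For comparison, this is the kind of native UIKit setup I'm describing, which does deliver both input types at once (a minimal sketch):

import UIKit

// Two recognizers, one per touch type; requiresExclusiveTouchType = false plus
// simultaneous recognition lets a finger hold while the pencil draws.
final class CanvasView: UIView, UIGestureRecognizerDelegate {
    override init(frame: CGRect) {
        super.init(frame: frame)
        let pencil = UIPanGestureRecognizer(target: self, action: #selector(draw(_:)))
        pencil.allowedTouchTypes = [NSNumber(value: UITouch.TouchType.pencil.rawValue)]
        let finger = UIPanGestureRecognizer(target: self, action: #selector(hold(_:)))
        finger.allowedTouchTypes = [NSNumber(value: UITouch.TouchType.direct.rawValue)]
        for recognizer in [pencil, finger] {
            recognizer.requiresExclusiveTouchType = false // don't block the other type
            recognizer.delegate = self
            addGestureRecognizer(recognizer)
        }
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // Allow the pencil and finger gestures to run at the same time.
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith other: UIGestureRecognizer) -> Bool {
        true
    }

    @objc private func draw(_ gesture: UIPanGestureRecognizer) { /* stylus: draw stroke */ }
    @objc private func hold(_ gesture: UIPanGestureRecognizer) { /* finger: hold object */ }
}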