Better together: SwiftUI and RealityKit
Discover how to seamlessly blend SwiftUI and RealityKit in visionOS 26. Explore improvements to Model3D, including animation and ConfigurationCatalog support, and see how to transition smoothly to RealityView. Learn how to drive changes to RealityKit components with SwiftUI animations, implement interactive manipulation, use new SwiftUI components for richer interactions, and observe RealityKit changes from your SwiftUI code. You'll also see how to use unified coordinate conversion to convert coordinates across frameworks.
Chapters
- 0:00 - Introduction
- 1:24 - Model3D enhancements
- 6:13 - RealityView transition
- 11:52 - Object manipulation
- 15:35 - SwiftUI components
- 19:08 - Information flow
- 24:56 - Unified coordinate conversion
- 27:01 - Animation
- 29:41 - Next steps
Resources
- Canyon Crosser: Building a volumetric hike-planning app
- Rendering hover effects in Metal immersive apps
Related Videos
WWDC24
WWDC23
WWDC21
WWDC20
-
Hi. I'm Amanda, and I'm a RealityKit engineer. And I'm Maks. I'm a SwiftUI engineer. Today we'll share some great enhancements to both SwiftUI and RealityKit that help them work even better together! Check out this adorable scene! We've got a charming SwiftUI robot, hovering in mid-air, and a grounded RealityKit robot – both yearning for connection. When they get close, sparks fly! But how can they get close enough to truly interact? Maks and I will share how to combine the worlds of traditional UI and interactive 3D content. First, I'll share some enhancements to Model3D.
Then, I'll demonstrate how to transition from using Model3D to using RealityView, and talk about when to choose one versus the other.
I'll tell you about the new Object Manipulation API.
RealityKit gets new Component types, integrating more aspects of SwiftUI.
Information can now flow both ways between SwiftUI and RealityKit - we'll explain.
Coordinate space conversion is easier than ever.
Drive RealityKit component changes with SwiftUI animations.
Let's wire it up! Display 3D models in your apps with just one line of code using Model3D. In visionOS 26, two enhancements let you do even more with Model3D - playing animations, and loading from a ConfigurationCatalog. Since Model3D is a SwiftUI view, it participates in the SwiftUI layout system. I'll use that layout system to make a little sign that displays the robot's name.
Now the sign says that this robot's name is Sparky.
Sparky also has some sweet dance moves! The artist bundled this animation with the robot model asset. New in visionOS 26 is the Model3DAsset type. Load and control animations on your 3D content by constructing a Model3D using a Model3DAsset.
The model loads the animations from the asset, and lets you choose which one to play.
"Model" is an overloaded term, especially in this session as we're converging a UI framework and a 3D game framework. In UI frameworks, a "model" refers to the data structure that represents the information your app uses.
The model holds the data and business logic, allowing the view to display this information. In 3D frameworks like RealityKit, a model refers to a 3D object that can be placed in a scene. You access it via the ModelComponent, which consists of a mesh resource that defines its shape, and materials that determine its appearance. Sometimes that happens. Two worlds collide, bringing their terminologies with them, and sometimes there's overlap. Now, back to Sparky and its animation.
I'm placing the Model3D above a Picker, a Play button, and a time scrubber. In my RobotView, I'm displaying the animated robot itself, and under that, I'm placing a Picker to choose which animation to play, plus the animation playback controls.
First, I'm initializing a Model3DAsset with the scene name to load from my bundle.
Then, once the asset is present, I pass it to the Model3D initializer. Underneath that in the VStack, I'm presenting a customized Picker that lists the animations that are available in this model asset. When an item is chosen from the list, the Picker sets the asset's `selectedAnimation` to the new value. Then the Model3DAsset creates an AnimationPlaybackController to control playback of that chosen animation.
The asset vends an `animationPlaybackController`. Use this object to pause, resume, and seek in the animation.
I'm passing that animationController into my RobotAnimationControls view, which we'll also look at shortly.
In visionOS 26, the existing RealityKit class `AnimationPlaybackController` is now Observable. In my SwiftUI view, I observe the `time` property to display the animation's progress.
I have a @Bindable property called `controller`, which means I'm using the `AnimationPlaybackController` as my view's data model.
When the controller's isPlaying value changes, SwiftUI will re-evaluate my RobotAnimationControls view. I've got a Slider that shows the current time in the animation, relative to the total duration of the animation. You can drag this slider and it will scrub through the animation. Here's Sparky doing its celebration animation! I can fast forward and rewind using the Slider. Go Sparky, it's your birthday! With its dance moves down, Sparky wants to dress up before it heads to the greenhouse to meet the other robot there. I can help it do that with enhancements to RealityKit's ConfigurationCatalog type. This type stores alternative representations of an entity, such as different mesh geometries, component values, or material properties.
In visionOS 26, you can initialize a Model3D with a ConfigurationCatalog and switch between its various representations.
To allow Sparky to try on different outfits, my artist bundled a reality file with several different body types. I load this file as a ConfigurationCatalog from my app's main Bundle. Then, I create my Model3D with the configuration.
This popover presents the configuration options. Choosing from the popover changes Sparky's look.
Dance moves? Check. Outfit? Check. Sparky is ready to meet its new friend in the RealityKit greenhouse. Sparks are going to fly! To make those sparks fly, I'll use a Particle Emitter. But - that’s not something I can do at runtime with the Model3D type. Particle Emitter is a component that I add to a RealityKit entity. More on that in a moment. Importantly, Model3D doesn't support adding components.
So, to add a particle emitter, I'll switch to RealityView. I'll share how to smoothly replace my Model3D with a RealityView without changing the layout. First, I switch the view from Model3D to RealityView.
I load the botanist model from the app bundle inside the `make` closure of the RealityView, creating an entity.
I add that entity to the contents of the RealityView so Sparky appears on screen.
But... now the name sign is pushed too far over to the side. That wasn't happening before when we were using a Model3D.
It's happening now because, by default, RealityView takes up all the available space that the SwiftUI layout system gives it. By contrast, the Model3D sizes itself based on the intrinsic size of the underlying model file. I can fix this! I apply the new `.realityViewLayoutBehavior` modifier with `.fixedSize` to make the RealityView tightly wrap the model's initial bounds.
Much better.
RealityView will use the visual bounds of the entities in its contents to figure out its size.
This sizing is only evaluated once - right after your `make` closure is run.
The other options for `realityViewLayoutBehavior` are .flexible and .centered.
In all three of these RealityViews, I have the bottom of the Sparky model sitting on the origin of the scene, and I've marked that origin with a gizmo, the little multicolored cross showing the axes and origin.
On the left, with the `.flexible` option, the RealityView acts as if it doesn't have the modifier applied. The origin remains in the center of the view.
The `.centered` option moves the origin of the RealityView so that the contents are centered in the view. `.fixedSize` makes the RealityView tightly wrap the contents' bounds, and makes your RealityView behave just like Model3D.
None of these options re-position or scale your entities with respect to the RealityViewContent; they just re-position the RealityView's own origin point. I've sorted out Sparky's sizing in the RealityView. Next I'll get Sparky animating again.
I'll move from Model3D's new animation API to a RealityKit animation API directly on the Entity.
For more detail on the many ways of working with animation in RealityKit, check out the session "Compose interactive 3D content in Reality Composer Pro". I switched from Model3D to RealityView so I could give Sparky a ParticleEmitterComponent, because sparks need to fly when these two robots get close to each other.
Particle Emitters let you make effects that involve hundreds of tiny particles animating at once, like fireworks, rain, and twinkles.
RealityKit provides preset values for these, and you can adjust those presets to get the effect you're after. You can use Reality Composer Pro to design them, and you can configure them in code.
You add the ParticleEmitter to an entity as a Component. Components are a central part of RealityKit, which is based on the "Entity Component System" paradigm. Each object in your scene is an Entity, and you add components to it to tell it what traits and behaviors it has. A Component is the type that holds data about an Entity. A System processes entities that have specific components, performing logic involving that data. There are built-in systems for things like animating particles, for handling physics, for rendering, and many more. You can write your own custom system in RealityKit to do custom logic for your game or app. Watch Dive into RealityKit 2 for a more in-depth look at the Entity Component System in RealityKit. I'll add a particle emitter to each side of Sparky's head. First I make two invisible entities that serve as containers for the sparks effect. I designed my sparks emitter to point to the right. I'll add it directly to my invisible entity on Sparky's right side.
On the other side, I rotate the entity 180 degrees about the y axis so it's pointing leftward.
Putting it all together in the RealityView, here's Sparky with its animation, its name sign in the right position, and sparks flying! RealityKit is great for detailed creation like this! If you're making a game or play-oriented experience, or need fine-grained control over the behavior of your 3D content, choose RealityView.
On the other hand, use Model3D to display a self-contained 3D asset on its own. Think of it like SwiftUI's Image view but for 3D assets.
Model3D's new animation and configuration catalogs let you do more with Model3D. And if your design evolves and you need direct access to the entities, components, and systems, transition smoothly from Model3D to RealityView using realityViewLayoutBehavior. Next I'll share details about the new Object Manipulation API in visionOS 26, which lets people pick up the virtual objects in your app! Object manipulation works from both SwiftUI and RealityKit. With Object manipulation you move the object with a single hand, rotate it with one or both hands, and scale it by pinching and dragging with both hands. You can even pass the object from one hand to the other.
There are two ways to enable this, depending on whether the object is a RealityKit Entity or SwiftUI View.
In SwiftUI, add the new `manipulable` modifier.
To disallow scaling, but keep the ability to move and rotate the robot with either hand, I specify what Operations are supported.
To make the robot feel super heavy, I specify that it has high inertia.
The .manipulable modifier works when Sparky is displayed in a Model3D view. It applies to the whole Model3D, or to any View it's attached to.
When Sparky's in a RealityView, I want to enable manipulation on just the robot entity itself, not the whole RealityView.
In visionOS 26, ManipulationComponent is a new type that you can set on an entity to enable Object Manipulation.
The static function `configureEntity` adds the ManipulationComponent to your entity.
It also adds a CollisionComponent so that the interaction system knows when you've tapped on this entity. It adds an InputTargetComponent which tells the system that this entity responds to gestures. And finally, it adds a HoverEffectComponent which applies a visual effect when a person looks at or hovers their mouse over it.
This is the only line you need to enable manipulation of an entity in your scene. To customize the experience further, there are several parameters you can pass. Here, I'm specifying a purple spotlight effect. I'm allowing all types of input: direct touch and indirect gaze and pinch. And I'm supplying collision shapes that define the outer dimensions of the robot. To respond when a person interacts with an object in your app, the object manipulation system raises events at key moments, such as when the interaction starts and stops, gets updated as the entity is moved, rotated, and scaled, when it is released, and when it is handed off from one hand to another.
Subscribe to these events to update your state. By default, standard sounds play when the interaction begins, a handoff occurs, or the object is released.
To apply custom sounds, I first set the audioConfiguration to `none`. That disables the standard sounds. Then I subscribe to the ManipulationEvents.DidHandOff event, which is delivered when a person passes the robot from one hand to the other.
In that closure, I play my own audio resource. Well, Maks. Sparky's journey has been exciting: animating in Model3D, finding its new home in a RealityView, showing its personality with sparks, and letting people reach out and interact with it. It's come a long way on its path towards the RealityKit greenhouse. It sure has! But for Sparky to truly connect with the robot waiting there, the objects in their virtual space need new capabilities. They need to respond to gestures, present information about themselves, and trigger actions in a way that feels native to SwiftUI.
Sparky's journey toward the RealityKit greenhouse is all about building connection. Deep connection requires rich interactions. That's exactly what the new SwiftUI RealityKit components are designed to enable. The new components in visionOS 26 bring powerful, familiar SwiftUI capabilities directly to RealityKit entities.
RealityKit introduces three key components: First, the ViewAttachmentComponent allows you to add SwiftUI views directly to your entities. Next, the GestureComponent makes your entities responsive to touch and gestures. And finally, the PresentationComponent, which presents SwiftUI views, like popovers, from within your RealityKit scene.
visionOS 1 let you declare attachments up front as part of the RealityView initializer. After evaluating your attachment view builder, the system called your update closure with the results as entities. You could add these entities to your scene and position them in 3D space.
In visionOS 26, this is simplified. Now you create attachments using a RealityKit component from anywhere in your app.
Create your ViewAttachmentComponent by giving it any SwiftUI View. Then, add it to an entity's components collection.
And just like that I moved our NameSign from SwiftUI to RealityKit. Let's explore gestures next! You can already attach gestures to your RealityView using `targetedToEntity` gesture modifiers.
New in visionOS 26 is GestureComponent. Just like ViewAttachmentComponent, you add GestureComponent to your entities directly, passing regular SwiftUI gestures to it. The gesture values are by default reported in the entity’s coordinate space. Super handy! I use GestureComponent with a tap gesture to toggle the name sign on and off.
Check it out. This robot's name is... Bolts! Pro tip: on any entity that's the target of a gesture, also add both InputTargetComponent and CollisionComponent. This advice applies to both GestureComponent and the targeted gestures API.
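As a quick sketch of that pro tip (the collision box size here is just an illustrative placeholder, not the robot's real dimensions):

// Entities targeted by gestures should also carry input-target and collision components.
bolts.components.set(InputTargetComponent())
bolts.components.set(CollisionComponent(shapes: [.generateBox(size: [0.3, 0.5, 0.3])]))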
GestureComponent and ViewAttachmentComponent let me create a name sign for Bolts. But, Bolts is getting ready for a special visitor: Sparky! Bolts wants to look its absolute best for their meeting in the greenhouse. Time for another outfit change! I'll replace Bolts' name sign with UI to pick what Bolts will wear. Truly, a momentous decision.
To emphasize that, I'll show this UI in a popover, using PresentationComponent, directly from RealityKit.
First, I replace `ViewAttachmentComponent` with `PresentationComponent`.
The component takes a boolean binding to control when the popover is presented, and to notify you when someone dismisses the popover.
The `configuration` parameter is the type of presentation to be shown. I'm specifying `popover`.
Inside the popover, I'll present a view with configuration catalog options to dress up Bolts.
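Here's a minimal sketch of that setup; the exact initializer shape is my assumption based on the description above, and ConfigPicker stands in for the outfit-choosing view:

// Present a SwiftUI popover directly from a RealityKit entity (assumed initializer shape).
bolts.components.set(PresentationComponent(
    isPresented: $showConfig,                      // Boolean binding; also flips when the popover is dismissed
    configuration: .popover(arrowEdge: .leading),  // the type of presentation to show
    content: ConfigPicker(
        name: "outfits",
        configCatalog: configCatalog,
        chosenConfig: $configurations["outfits"])  // SwiftUI view shown inside the popover
))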
Now, I can help Bolts pick its best color for when Sparky comes to visit.
Hey Maks, do you think Bolts is more of a summer? Or an autumn? That's a fashion joke.
Bolts is dressed to impress. But first, it has to go to work. Bolts waters plants in the greenhouse. I'll make a mini map, like on a heads-up display in a game, to track Bolts' position in the greenhouse. For that, I need to observe the robot's Transform component.
In visionOS 26, entities are now observable. They can notify other code when their properties change. To be notified, just read an entity’s "observable" property.
From the “observable” property you can watch for changes to the entity's position, scale, and rotation, its Children collection, and even its Components, including your own custom components! Observe these properties directly using a `withObservationTracking` block.
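For example, a direct-observation sketch might look like this (assuming the entity's observable proxy exposes position as described above):

import Observation
import RealityKit

func watchPosition(of robot: Entity) {
    withObservationTracking {
        // Reading through the observable proxy registers this property as a dependency.
        _ = robot.observable.position
    } onChange: {
        // Called once when the position changes; re-register to keep observing.
        print("Robot moved")
        watchPosition(of: robot)
    }
}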
Or lean on SwiftUI's built-in observation tracking. I’ll use SwiftUI to implement my Minimap.
To learn more about Observation, watch "Discover Observation in SwiftUI".
In this view, I display my entity's position on a MiniMap. I'm accessing this observable value on my entity. This tells SwiftUI that my view depends on this value.
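A rough sketch of that minimap view, under the same assumption about the observable proxy (the dot and the meters-to-points scale are placeholders):

struct MiniMap: View {
    let robot: Entity

    var body: some View {
        // Reading the observable position inside body makes this view depend on it,
        // so SwiftUI re-renders the minimap whenever the robot moves.
        let position = robot.observable.position

        Circle()
            .frame(width: 10, height: 10)
            .offset(
                x: CGFloat(position.x) * 100,  // placeholder scale from meters to points
                y: CGFloat(position.z) * 100
            )
    }
}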
As Bolts moves about the greenhouse, watering the plants, its position will change. Each time it does, SwiftUI will call my View's body again, moving its counterpart symbol in the minimap! For a deeper explanation of SwiftUI's data flow, check out the session "Data Essentials in SwiftUI." Our robot friends are really coming together! That's the dream! I liked your description of the difference between "model" and "model" earlier. And sometimes you need to pass data from your data model to your 3D object model, and vice versa. In visionOS 26, observable entities give us a new tool to do that. Since the beginning, you could pass information from SwiftUI to RealityKit in the `update` closure of RealityView. Now with entity's `observable` property, you can send information the other way. RealityKit entities can act like model objects to drive updates to your SwiftUI views! So information can flow both ways now: from SwiftUI to RealityKit and from RealityKit to SwiftUI. But... does this create the potential for an infinite loop? Yes! Let's look at how to avoid creating infinite loops between SwiftUI and RealityKit.
When you read an observable property inside the body of a view, you create a dependency; your view depends on that property. When the property’s value changes, SwiftUI will update the view and re-run its body. RealityView has some special behavior. Think of its… update closure as an extension of the containing view's body.
SwiftUI will call the closure whenever any of that view's state changes, not only when state that is explicitly observed in that closure changes.
Here in my RealityView's update closure, I'm changing the entity position that my view observes. This will write to the position value, which will cause SwiftUI to update the view and re-run its body, causing an infinite loop.
To avoid creating an infinite loop, don't modify your observed state within your update closure.
You are free to modify entities that you're not observing. That won't create an infinite loop because changes to them won't trigger SwiftUI to re-evaluate the view body.
If you do need to modify an observed property, check the existing value of that property and avoid writing that same value back.
This breaks the cycle and avoids an infinite loop.
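For example (a sketch; `targetPosition` is a hypothetical value computed elsewhere):

// Inside the update closure: only write when the value actually changes,
// so the write can't keep re-triggering the view body.
if sparky.position != targetPosition {
    sparky.position = targetPosition
}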
Note that the RealityView's make closure is special.
When you access an observable property in the make closure, that doesn't create a dependency. It's not included in the containing view's observation scope.
Also, the `make` closure is not re-run on changes. It only runs when the containing view first appears.
You can also update the properties on an observed entity from within your own custom system. A system's update function is not inside the scope of the SwiftUI view body evaluation, so this is a good place to change my observed entities' values.
The closures of Gestures are also not inside the scope of the SwiftUI view body evaluation. They are called in response to user input.
You can modify your observed entities' values here, too.
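Here's a minimal custom System sketch along those lines; HoverBobComponent and the bobbing math are made up for illustration (you'd also register the component and system at app launch):

import RealityKit

// A hypothetical marker component carrying the animation phase.
struct HoverBobComponent: Component {
    var phase: Float = 0
}

struct HoverBobSystem: System {
    static let query = EntityQuery(where: .has(HoverBobComponent.self))

    init(scene: RealityKit.Scene) { }

    func update(context: SceneUpdateContext) {
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            var bob = entity.components[HoverBobComponent.self]!
            bob.phase += Float(context.deltaTime)
            entity.components.set(bob)

            // Writing to an observed entity here is fine: a System's update runs
            // outside SwiftUI's view-body evaluation, so it can't cause a loop.
            entity.position.y = 0.05 * sinf(bob.phase)
        }
    }
}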
To recap, it's cool to modify your observed entities in some places, and not in others.
If you do find that you have an infinite loop in your app, here's a tip for fixing it: Split up your larger views into smaller, self-contained views, each one holding only its own necessary state. That way, a change in some unrelated entity won't cause your small view to be re-evaluated. This is also great for performance! You know, Maks, you might find that you don't need to use your `update` closure at all anymore. Since your Entity can now be your view's state, you can modify it in the normal places that you're used to modifying state, and forgo the update closure altogether. Yeah! I feel like avoiding infinite loops is something I have to learn over and over again. But, if I don't use an update closure, I'm less likely to run into one.
I think it's time to bring Bolts and Sparky together.
Bolts is finally done with work - time for its date with Sparky! As I pick up Sparky to bring it over, and the two robots get closer together, I want to make sparks fly as a function of the decreasing distance between them. I'll use our new Unified Coordinate Conversion API to enable this.
Sparky is in a Model3D SwiftUI view now, and Bolts is an Entity in the RealityKit greenhouse. I need to get the absolute distance between these two robots, even though they're in different coordinate spaces.
To solve this, the Spatial framework now defines a `CoordinateSpace3D` protocol that represents an abstract coordinate space. You can easily convert values between any two types that conform to CoordinateSpace3D, even from different frameworks.
RealityKit's `Entity` and `Scene` types conform to CoordinateSpace3D. On the SwiftUI side, GeometryProxy3D has a new .coordinateSpace3D() function that gives you its coordinate space. Additionally, several Gesture types can provide their values relative to any given CoordinateSpace3D. The CoordinateSpace3D protocol works by first converting a value in Sparky's coordinate space to a coordinate space shared by both RealityKit and SwiftUI. After that, it converts from the shared space into Bolts' coordinate space, while taking low-level details like points-to-meters conversion and axis direction into account. In Sparky's Model3D view, whenever the view geometry changes, the system calls my `onGeometryChange3D` function. It passes in a GeometryProxy3D which I use to get its coordinate space. Then, I can convert my view's position to a point in the entity's space so I know how far apart my two robots are from each other. Now as Amanda brings Bolts and Sparky together, the sparks increase. As she pulls them apart, the sparks decrease.
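As a sketch of that pattern (the onGeometryChange3D signature and the converted(from:to:) call are my assumptions about the API shape described above):

struct SparkDistanceView: View {
    let bolts: Entity                       // the RealityKit robot in the greenhouse
    @State private var distance: Double = .infinity

    var body: some View {
        Model3D(named: "sparky")
            .onGeometryChange3D(for: Point3D.self) { proxy in
                // Convert the center of Sparky's view into Bolts' coordinate space (assumed API).
                proxy.frame(in: .local).center
                    .converted(from: proxy.coordinateSpace3D(), to: bolts)
            } action: { sparkyInBoltsSpace in
                // Distance from Bolts' own origin to Sparky, in meters.
                distance = Point3D.zero.distance(to: sparkyInBoltsSpace)
            }
    }
}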
Next, I'll teach these robots to move together and to coordinate their actions. I'll use SwiftUI driven animation for RealityKit components.
SwiftUI already comes with great animation APIs to implicitly animate changes to your view properties.
Here, I animate the Model3D view that Sparky is in. I send it over to the left when I toggle, and then it bounces back to the original position when I toggle again.
I’m adding an animation to my `isOffset` binding, and I'm specifying that I want an extra bouncy animation for it.
In visionOS 26, you can now use SwiftUI animation to implicitly animate changes to RealityKit components.
All you need to do is set a supported component on your entity within a RealityKit animation block and the framework will take care of the rest. There are two ways to associate an animation with a state change. From within a RealityView, you can use `content.animate()` to set new values for your components inside the animate block. RealityKit will use the animation associated with the SwiftUI transaction that triggered the `update` closure, which, in this case, is an extra bouncy animation.
The other way is to call the new Entity.animate() function, passing a SwiftUI animation, and a closure that sets new values for your components. Here, whenever the isOffset property changes, I send Sparky left or right using the entity’s position.
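For instance, a sketch of that call (the exact animate signature is assumed from the description, and the offset value is illustrative):

// When isOffset changes, move Sparky with a bouncy SwiftUI animation.
sparky.animate(.spring(duration: 0.5, bounce: 0.6)) {
    // Setting a supported component (here, the Transform via position)
    // inside the animate block begins an implicit animation.
    sparky.position.x = isOffset ? -0.3 : 0.0
}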
Setting the position inside the animate block will begin an implicit animation of the Transform component, causing the entity to move smoothly to the new position. The power of implicit animation really shines when I combine it with the Object Manipulation API that Amanda introduced. I can use a SwiftUI animation to implement a custom release behavior for Bolts. I’m first going to disable the default release behavior for object manipulation by setting it to .stay.
Then, I will subscribe to the WillRelease event for the manipulation interaction.
And when the object is about to be released, I will snap Sparky back by setting its transform to identity, which resets the scale, translation, and rotation of the entity. Since I’m modifying Sparky’s transform inside the animate block, Sparky will bounce back to its default position. Now Sparky's animation back to its original position is much more fun! All these built-in RealityKit components support implicit animations, including the Transform, the Audio components, and the Model and Light components, which have color properties! Sparky and Bolts had quite a journey. It's so great to see the power of SwiftUI and RealityKit working together.
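Putting that together, a sketch might read as follows; releaseBehavior, the subscription, and the animate call mirror the description above, and exact names are assumptions:

// Keep the entity where it is on release instead of the default return animation.
var manipulation = ManipulationComponent()
manipulation.releaseBehavior = .stay
sparky.components.set(manipulation)

// Just before the manipulation releases the entity, snap it home with a bounce.
willRelease = content.subscribe(to: ManipulationEvents.WillRelease.self) { event in
    sparky.animate(.spring(duration: 0.4, bounce: 0.5)) {
        sparky.transform = .identity   // resets translation, rotation, and scale
    }
}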
With this connection, you're also empowered to develop truly exceptional spatial apps, fostering a real connection between the virtual and the physical! Imagine the possibilities as you seamlessly integrate SwiftUI components into your RealityKit scenes, and as entities dynamically drive changes to your SwiftUI state.
Just like Sparky and Bolts, we hope you're inspired to connect SwiftUI and RealityKit in ways we haven't even imagined yet. Let's build the future together!
-
-
1:42 - Sparky in Model3D
struct ContentView: View {
    var body: some View {
        Model3D(named: "sparky")
    }
}
-
1:52 - Sparky in Model3D with a name sign
struct ContentView: View {
    var body: some View {
        HStack {
            NameSign()

            Model3D(named: "sparky")
        }
    }
}
-
struct RobotView: View {
    @State private var asset: Model3DAsset?

    var body: some View {
        if asset == nil {
            ProgressView().task {
                asset = try? await Model3DAsset(named: "sparky")
            }
        }
    }
}
-
struct RobotView: View {
    @State private var asset: Model3DAsset?

    var body: some View {
        if asset == nil {
            ProgressView().task {
                asset = try? await Model3DAsset(named: "sparky")
            }
        } else if let asset {
            VStack {
                Model3D(asset: asset)
                AnimationPicker(asset: asset)
            }
        }
    }
}
-
struct RobotView: View {
    @State private var asset: Model3DAsset?

    var body: some View {
        if asset == nil {
            ProgressView().task {
                asset = try? await Model3DAsset(named: "sparky")
            }
        } else if let asset {
            VStack {
                Model3D(asset: asset)
                AnimationPicker(asset: asset)
                if let animationController = asset.animationPlaybackController {
                    RobotAnimationControls(playbackController: animationController)
                }
            }
        }
    }
}
-
struct RobotAnimationControls: View {
    @Bindable var controller: AnimationPlaybackController

    var body: some View {
        HStack {
            Button(controller.isPlaying ? "Pause" : "Play") {
                if controller.isPlaying {
                    controller.pause()
                } else {
                    controller.resume()
                }
            }

            Slider(
                value: $controller.time,
                in: 0...controller.duration
            ).id(controller)
        }
    }
}
-
5:41 - Load a Model3D using a ConfigurationCatalog
struct ConfigCatalogExample: View {
    @State private var configCatalog: Entity.ConfigurationCatalog?
    @State private var configurations = [String: String]()
    @State private var showConfig = false

    var body: some View {
        if let configCatalog {
            Model3D(from: configCatalog, configurations: configurations)
                .popover(isPresented: $showConfig, arrowEdge: .leading) {
                    ConfigPicker(
                        name: "outfits",
                        configCatalog: configCatalog,
                        chosenConfig: $configurations["outfits"])
                }
        } else {
            ProgressView()
                .task {
                    await loadConfigurationCatalog()
                }
        }
    }
}
-
6:51 - Switching from Model3D to RealityView
struct RobotView: View {
    let url: URL = Bundle.main.url(forResource: "sparky", withExtension: "reality")!

    var body: some View {
        HStack {
            NameSign()

            RealityView { content in
                if let sparky = try? await Entity(contentsOf: url) {
                    content.add(sparky)
                }
            }
        }
    }
}
-
7:25 - Switching from Model3D to RealityView with layout behavior
struct RobotView: View {
    let url: URL = Bundle.main.url(forResource: "sparky", withExtension: "reality")!

    var body: some View {
        HStack {
            NameSign()

            RealityView { content in
                if let sparky = try? await Entity(contentsOf: url) {
                    content.add(sparky)
                }
            }
            .realityViewLayoutBehavior(.fixedSize)
        }
    }
}
-
8:48 - Switching from Model3D to RealityView with layout behavior and RealityKit animation
struct RobotView: View {
    let url: URL = Bundle.main.url(forResource: "sparky", withExtension: "reality")!

    var body: some View {
        HStack {
            NameSign()

            RealityView { content in
                if let sparky = try? await Entity(contentsOf: url) {
                    content.add(sparky)
                    sparky.playAnimation(getAnimation())
                }
            }
            .realityViewLayoutBehavior(.fixedSize)
        }
    }
}
-
10:34 - Add 2 particle emitters; one to each side of the robot's head
func setupSparks(robotHead: Entity) {
    let leftSparks = Entity()
    let rightSparks = Entity()

    robotHead.addChild(leftSparks)
    robotHead.addChild(rightSparks)

    rightSparks.components.set(sparksComponent())
    leftSparks.components.set(sparksComponent())

    leftSparks.transform.rotation = simd_quatf(Rotation3D(
        angle: .degrees(180),
        axis: .y))

    leftSparks.transform.translation = leftEarOffset()
    rightSparks.transform.translation = rightEarOffset()
}

// Create and configure the ParticleEmitterComponent
func sparksComponent() -> ParticleEmitterComponent { ... }
-
12:30 - Apply the manipulable view modifier
struct RobotView: View {
    let url: URL

    var body: some View {
        HStack {
            NameSign()

            Model3D(url: url)
                .manipulable()
        }
    }
}
-
12:33 - Allow translate, 1- and 2-handed rotation, but not scaling
struct RobotView: View {
    let url: URL

    var body: some View {
        HStack {
            NameSign()

            Model3D(url: url)
                .manipulable(
                    operations: [.translation, .primaryRotation, .secondaryRotation]
                )
        }
    }
}
-
12:41 - The model feels heavy with high inertia
struct RobotView: View {
    let url: URL

    var body: some View {
        HStack {
            NameSign()

            Model3D(url: url)
                .manipulable(inertia: .high)
        }
    }
}
-
13:18 - Add a ManipulationComponent to an entity
RealityView { content in
    let sparky = await loadSparky()
    content.add(sparky)
    ManipulationComponent.configureEntity(sparky)
}
-
RealityView { content in
    let sparky = await loadSparky()
    content.add(sparky)
    ManipulationComponent.configureEntity(
        sparky,
        hoverEffect: .spotlight(.init(color: .purple)),
        allowedInputTypes: .all,
        collisionShapes: myCollisionShapes()
    )
}
-
14:08 - Manipulation interaction events
public enum ManipulationEvents {

    /// When an interaction is about to begin on a ManipulationComponent's entity
    public struct WillBegin: Event { }

    /// When an entity's transform was updated during a ManipulationComponent
    public struct DidUpdateTransform: Event { }

    /// When an entity was released
    public struct WillRelease: Event { }

    /// When the object has reached its destination and will no longer be updated
    public struct WillEnd: Event { }

    /// When the object is directly handed off from one hand to another
    public struct DidHandOff: Event { }
}
-
14:32 - Replace the standard sounds with custom ones
RealityView { content in
    let sparky = await loadSparky()
    content.add(sparky)

    var manipulation = ManipulationComponent()
    manipulation.audioConfiguration = .none
    sparky.components.set(manipulation)

    didHandOff = content.subscribe(to: ManipulationEvents.DidHandOff.self) { event in
        sparky.playAudio(handoffSound)
    }
}
-
16:19 - Builder based attachments
struct RealityViewAttachments: View {
    var body: some View {
        RealityView { content, attachments in
            let bolts = await loadAndSetupBolts()
            if let nameSign = attachments.entity(
                for: "name-sign"
            ) {
                content.add(nameSign)
                place(nameSign, above: bolts)
            }
            content.add(bolts)
        } attachments: {
            Attachment(id: "name-sign") {
                NameSign("Bolts")
            }
        }
        .realityViewLayoutBehavior(.centered)
    }
}
-
16:37 - Attachments created with ViewAttachmentComponent
struct AttachmentComponentAttachments: View {
    var body: some View {
        RealityView { content in
            let bolts = await loadAndSetupBolts()

            let attachment = ViewAttachmentComponent(
                rootView: NameSign("Bolts"))
            let nameSign = Entity(components: attachment)

            place(nameSign, above: bolts)

            content.add(bolts)
            content.add(nameSign)
        }
        .realityViewLayoutBehavior(.centered)
    }
}
-
17:04 - Targeted to entity gesture API
struct AttachmentComponentAttachments: View {
    @State private var bolts = Entity()
    @State private var nameSign = Entity()

    var body: some View {
        RealityView { ... }
            .realityViewLayoutBehavior(.centered)
            .gesture(
                TapGesture()
                    .targetedToEntity(bolts)
                    .onEnded { value in
                        nameSign.isEnabled.toggle()
                    }
            )
    }
}
-
17:10 - Gestures with GestureComponent
struct AttachmentComponentAttachments: View {
    var body: some View {
        RealityView { content in
            let bolts = await loadAndSetupBolts()

            let attachment = ViewAttachmentComponent(
                rootView: NameSign("Bolts"))
            let nameSign = Entity(components: attachment)

            place(nameSign, above: bolts)

            bolts.components.set(GestureComponent(
                TapGesture().onEnded {
                    nameSign.isEnabled.toggle()
                }
            ))

            content.add(bolts)
            content.add(nameSign)
        }
        .realityViewLayoutBehavior(.centered)
    }
}
-
-
- 0:00 - Introduction
Learn about RealityKit SwiftUI enhancements that enable seamless integration of traditional UI and interactive 3D content. Key updates include enhancements to Model3D and RealityView, the introduction of the Object Manipulation API, new Component types, bidirectional data flow between SwiftUI and RealityKit, easier coordinate space conversion, and SwiftUI-driven animations for RealityKit components.
- 1:24 - Model3D enhancements
In visionOS 26, Model3D enables the display of 3D models with animations and loading from a 'ConfigurationCatalog'. You can now create interactive 3D experiences with just a few lines of code. Use the new 'Model3DAsset' type to load and control animations, and 'ConfigurationCatalog' enables switching between different representations of an entity, such as outfits or body types. The example of a robot named Sparky, who you can animate, dress up in different outfits, and control using SwiftUI views and pickers, making it ready to interact with other robots in a virtual greenhouse, demonstrates these features.
- 6:13 - RealityView transition
To add a particle emitter to a 3D model at runtime, you must switch from using 'Model3D' to 'RealityView' in RealityKit because Model3D doesn't support adding components. RealityView is loaded with the model entity, but this causes layout issues as it defaults to taking up all available space. The 'realityViewLayoutBehavior' modifier is applied with 'fixedSize' to resolve this, making RealityView wrap the model's initial bounds tightly. There are three different 'realityViewLayoutBehavior' options: 'flexible', 'centered', and 'fixedSize', each affecting the RealityView's origin point without repositioning the entities. Use Reality Composer Pro to design particle emitters that are configurable in code. Particle emitters are added to entities as components. RealityKit is based on the "Entity Component System" paradigm. The example adds particle emitters to create spark effects, involving invisible entities as containers for the sparks. RealityView is great for detailed creation and fine-grained control over 3D content behavior, while Model3D is suitable for displaying self-contained 3D assets. You can transition smoothly between the two as needed.
- 11:52 - Object manipulation
The new Object Manipulation API in visionOS 26 enables people to interact with virtual objects in apps that use SwiftUI and RealityKit. People can move, rotate, and scale objects with their hands, and even pass objects between hands. For SwiftUI views, you can apply the 'manipulable' modifier, allowing customization of supported operations and object inertia. In RealityKit, add the 'ManipulationComponent' to entities via the static 'configureEntity' function, which also adds collision, input target, and hover effect components. The function has several parameters for behavior customization. Subscribe to 'ManipulationEvents' triggered during interactions, such as starts, stops, updates, releases, and hand-offs, to update app state and customize the experience, including replacing default sounds with custom audio resources.
- 15:35 - SwiftUI components
The new SwiftUI RealityKit components enhance user interaction and connection within RealityKit scenes. These components enable you to seamlessly integrate SwiftUI views into 3D environments. Key features include the 'ViewAttachmentComponent', which allows you to add SwiftUI views directly to entities; the 'GestureComponent', making entities responsive to touch and gestures; and the 'PresentationComponent', which enables the presentation of SwiftUI views, like popovers, within the scene. The configuration parameter on 'PresentationComponent' is the type of presentation to show. These improvements simplify the development process, enabling you to create more dynamic and engaging experiences. In the example, a robot named Bolts can have a name sign that toggles on and off with a tap gesture, and people can choose Bolts' outfit from a popover menu, all within the immersive RealityKit environment.
- 19:08 - Information flow
In visionOS 26, entities are now observable, which allows entities, such as the robot Bolts, to notify other code when their properties change. This is particularly useful for tracking Bolts' movements as it waters plants in the greenhouse. The example uses SwiftUI to implement a minimap that displays Bolts' position in real-time. By accessing Bolts' observable position property, SwiftUI automatically updates the minimap whenever Bolts moves. This two-way data flow between SwiftUI and RealityKit, enabled by observable entities, is a significant new tool. This new observable functionality also introduces the potential for infinite loops if not managed carefully. To avoid these loops, be mindful of where you modify observed state. Don't make changes to observed state within the 'update' closure of 'RealityView', because this can trigger a recursive update cycle. Instead, make modifications in specific places like custom systems, gesture closures, or the 'make' closure, which are outside the scope of SwiftUI's view body evaluation. By breaking down larger views into smaller, self-contained ones, and being cautious about where state is modified, you can ensure smooth and efficient data flow between SwiftUI and RealityKit, avoiding infinite loops and enhancing the overall performance of the application.
- 24:56 - Unified coordinate conversion
The new Unified Coordinate Conversion API bridges the gap between RealityKit and SwiftUI. By implementing the 'CoordinateSpace3D' protocol, you can seamlessly convert values between the two frameworks. This allows calculating the absolute distance between Bolts, an Entity in RealityKit, and Sparky, a Model3D in SwiftUI, as they move closer together, dynamically generating sparks based on their proximity.
- 27:01 - Animation
In visionOS 26, you can now leverage SwiftUI's animation APIs to implicitly animate changes to RealityKit components. This enables smooth and bouncy movements of entities by simply setting supported components within an animation block. There are two methods to achieve this: using 'content.animate()' within a RealityView or calling 'Entity.animate()' directly. This integration allows for custom release behaviors when manipulating entities, making interactions with 3D objects more engaging and fun. Various RealityKit components, including Transform, Audio, Model, and Light, support these implicit animations.
- 29:41 - Next steps
Use this new connection between SwiftUI and RealityKit to create innovative spatial apps by integrating SwiftUI components with RealityKit scenes, enabling dynamic interactions between the virtual and physical worlds, and inspiring new possibilities in app development.