What's new in Metal rendering for immersive apps

    Discover the latest improvements to Metal rendering for immersive apps with Compositor Services. Learn how to add hover effects to highlight the interactive elements of your app, and how to render at higher fidelity with dynamic render quality. Explore the new progressive immersion style, and find out how to bring immersive experiences to macOS apps by rendering Metal content directly from Mac to Vision Pro. To get the most out of this session, we recommend first watching "Discover Metal for immersive apps" from WWDC23.

    Chapters

    • 0:00 - Intro
    • 1:58 - New render loop APIs
    • 4:21 - Hover effects
    • 10:50 - Dynamic render quality
    • 14:44 - Progressive immersion
    • 18:32 - macOS spatial rendering
    • 23:51 - Next steps

    Resources

    • Analyzing the performance of your Metal app
    • Optimizing GPU performance
    • Rendering hover effects in Metal immersive apps
      • HD Video
      • SD Video

    Related Videos

    WWDC25

    • Go further with Metal 4 games
    • Explore Metal 4 games
    • Discover Metal 4
    • What's new in SwiftUI
    • What's new in visionOS 26
    • Explore spatial accessory input on visionOS

    WWDC24

    • Render Metal with passthrough in visionOS

    WWDC23

    • Discover Metal for immersive apps

    Hello, I'm Ricardo, a Software Engineer at Apple. Today, I'm going to show you new capabilities that you can adopt when using Metal to render immersive content on Apple Vision Pro. Last year, we showed how to take advantage of Metal, Compositor Services, and ARKit to create immersive experiences by directly rendering your content on visionOS.

    You can either implement a fully immersive experience, like in Demeo by Resolution Games; or you can adopt the mixed immersion style, to show your content along with the real world.

    And now, because of valuable feedback from developers like you, Metal rendering on visionOS supports exciting new features.

    You'll be able to add even richer details and interactive effects to your apps and games. I'll explain all about the new Compositor Services capabilities in this video.

    To get the most out of it, you should be familiar with the Compositor Services framework, and with Metal rendering techniques. If you haven’t used these technologies before, you can learn about them in these previous videos.

    To adopt the new features, first you'll need to make some changes to your existing render loop. I'll explain how to adopt new APIs that make the pipeline more flexible. Once you've done that, you'll be able to add hover effects to highlight the interactive elements of your app. You'll also be able to dynamically adjust the resolution of your rendered content. There's a new progressive immersion style that lets people adjust their immersion level with the Digital Crown. And you can use your Mac to render immersive content directly to Vision Pro! It all begins with the new render loop APIs. Let's dive in.

    A Metal immersive app starts in SwiftUI, where you create an immersive space that holds a compositor layer.

    The layer provides a layer renderer object for use in your render loop.

    You query frames from the layer renderer. From each frame, you get drawables, which contain textures you use to render your content. If you've already created a Metal immersive app, you are querying a single drawable for each rendered frame. This year, Compositor Services has a new query drawables function that returns an array of drawables. Based on system context, this array will contain one or two drawables.

    Most of the time, you'll get a single drawable. However, whenever you're recording a high-quality video with Reality Composer Pro, a frame will return two drawables. You can identify the drawables with the new target property. The one for your Vision Pro display has the .builtIn value, and the one for your recording has the .capture value. To learn how to capture high-quality videos, take a look at the Developer Documentation.
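
    As a rough sketch of how you might branch on the new target property in your render loop (scene.render(to:) here is a hypothetical per-drawable helper, not an API from the framework):

      for drawable in drawables {
          switch drawable.target {
          case .capture:
              // Extra drawable produced while recording with Reality Composer Pro.
              scene.render(to: drawable)   // hypothetical per-drawable render helper
          default:
              // .builtIn: the drawable shown on the Vision Pro displays.
              scene.render(to: drawable)
          }
      }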

    This render frame function may be familiar to you. After querying the next frame and updating the scene state, I call the query drawable function. Then, I wait for the optimal input time. And I render the scene to the drawable.

    Now, instead I replace query drawable with query drawables. I make sure the array isn't empty, and I render my scene to all the drawables.

    Xcode includes a handy template that shows how to create a Metal immersive app. It's a great starting point. To check it out, start a new Xcode project for a visionOS app, and choose Metal 4 on the Immersive Space Renderer pop-up menu.

    Note that Metal 3 is still fully supported this year, and you can use it on the Xcode template by selecting the "Metal" option.

    You can learn more about how to adopt the latest version of Metal in "Discover Metal 4".

    Once you've adopted the new query drawables function, you'll be able to use all the new features this year. Like adding hover effects on the interactive objects of your scene.

    With them, the person using your app can see which objects are interactive, and can anticipate the targets of their actions. The system will dynamically highlight the object the viewer is looking at. For example, a puzzle game can emphasize the pieces that can be selected by the player.

    Imagine an app that is rendering a scene with several 3D objects, but only some of them can be interacted with. I want to make sure the hover effects only apply to the interactive objects in my scene. When an object is not interactive, it's not tracked and it doesn't have a hover effect. So I render it normally.

    On the other hand, if an object is interactive, I add a new tracking area to the drawable.

    Note that I need to assign a unique object identifier to each tracking area that I register.

    Then, I check if the object should have a hover effect. If it shouldn't, I draw it with the tracking area render value. This is useful for detecting pinches on my objects. But, if the object has a hover effect, I configure it on the tracking area before rendering.

    In code, I need to configure my layer renderer to use hover effects. I set the tracking areas texture to use an 8-bit pixel format. It supports up to 255 concurrent interactive objects. I make sure my layer capabilities support the format, and I set it in the configuration.

    In the tracking area registration code, I check if the object is interactive, and if it is, I register a new tracking area in the drawable with my object identifier. Be sure to keep track of unique identifiers for the lifecycle of your objects. Then, I check if the object has a hover effect, and if so, I add it with the .automatic attribute. This means the system will automatically add a hover effect when the viewer looks at the object. Finally, I render it.

    With Compositor Services, the drawable provides several textures your app can use to render content.

    You may be familiar with the color texture, which is what the viewer of your app sees. There's also the depth texture, where darker colors indicate objects that are further from the viewer. The system uses this texture to make your displayed content more accurate as the viewer moves around the scene.

    This year, there's also a tracking areas texture, which defines the different interactive regions in your scene. A drawable provides you the color and depth textures. Now, you can also query the new tracking areas texture.

    In it, you draw distinct areas corresponding to your interactive objects. With hover effects, when somebody looks at an interactive object in your scene, the system uses the tracking areas texture to find the corresponding region, and applies a hover effect on the matching part of your color texture.

    Here again is the object render function with my configured tracking area. To render to the tracking areas texture I need a system computed render value. I declare a local variable to store it, and I get it from my corresponding tracking area. If my object is not interactive, I can use the default nil render value.

    Finally, I pass it to my draw function, where I'll send it to my fragment shader.

    Let's have a look at the shader code.

    The fragment shader output has a tracking area render value, mapped to the color attachment at index 1. I have set my tracking areas texture there. I have bound the render value to my uniforms struct, and I return it in the shader output, along with the color value. Now, my app has interactive hover effects.

    There's one more thing to keep in mind if you use multisample antialiasing, or MSAA.

    This technique works by rendering an intermediate higher-resolution texture, and then averaging the color values over a sampling window. This is usually done using the multisample resolve option on your target texture store action. You cannot resolve the tracking areas texture in the same way you resolve a color pixel. Doing so would average your render values, resulting in an invalid scalar that doesn't correspond to any tracking area. If you use multisample resolve for your color, you have to implement a custom tile resolver for the tracking areas texture. You can do this by using the don't care store option with a custom tile render pipeline. A good strategy is choosing the render value that appears most frequently in the sampling window on the MSAA source texture.
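
    As a loosely sketched example of the Swift side of such a custom resolve (the tile function name resolveTrackingAreas, the attachment index, and the surrounding pipeline objects are assumptions; the MSL tile function that picks the most frequent render value in the sample window is not shown):

      // Sketch: build a tile render pipeline that resolves the tracking areas
      // texture manually, instead of using a multisample resolve store action.
      func makeTrackingAreasResolvePipeline(device: MTLDevice,
                                            library: MTLLibrary,
                                            rasterSampleCount: Int) throws -> MTLRenderPipelineState {
          let descriptor = MTLTileRenderPipelineDescriptor()
          descriptor.tileFunction = library.makeFunction(name: "resolveTrackingAreas")!   // assumed MSL tile function
          descriptor.colorAttachments[1].pixelFormat = .r8Uint   // tracking areas attachment
          descriptor.rasterSampleCount = rasterSampleCount
          descriptor.threadgroupSizeMatchesTileSize = true
          return try device.makeRenderPipelineState(tileDescriptor: descriptor,
                                                    options: [],
                                                    reflection: nil)
      }

      // After the scene draws, with the tracking areas store action left as .dontCare:
      // renderEncoder.setRenderPipelineState(resolvePipeline)
      // renderEncoder.dispatchThreadsPerTile(MTLSize(width: renderEncoder.tileWidth,
      //                                              height: renderEncoder.tileHeight,
      //                                              depth: 1))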

    For an in-depth review of hover effects, including how to use them with MSAA, you can read "Rendering hover effects in Metal immersive apps" in the Developer Documentation.

    Tracking areas also allow your app to handle object interactions in an easier way than before. Spatial events now include a nullable tracking area identifier. I use this identifier to see if it matches any of my scene objects. If I find a target object, I can perform an action on it.

    I have improved the interactivity of my app with hover effects. The person using my app can clearly see which objects are actionable, and which one will be activated when they pinch. And this makes handling input events easier than ever! Now, you can also draw your content at even higher fidelity than before. By using dynamic render quality, you can adjust the resolution of your content based on the complexity of your scenes.

    First, I'll recap how foveated rendering works. In a standard non-foveated texture, the pixels are evenly distributed through the surface. With foveated rendering, the system helps you draw to a texture where the center has a higher pixel density. This way, your app uses its computing and power resources to make the content look better wherever the viewer is most likely to be looking. This year, you can take advantage of dynamic quality for foveated rendering. You can now control the quality of the frames rendered by your app.

    First, you need to specify a maximum render quality suitable for your app. This sets the upper limit for your app's rendering session. Then, you can adjust the runtime quality within your chosen range based on the type of content you're displaying.

    As you boost the render quality, the high relevance area in your texture expands, leading to a larger overall texture size. Just a heads up, increasing the quality also means your app will use more memory and power.

    If you're rendering text or user interface elements, you will benefit from setting a higher render quality. But if you're showing a complex 3D scene, you may be limited by computing resources. To make sure your app runs smoothly, you need to find a balance between high-quality visuals and the amount of power your app uses.

    You can use Instruments to analyze your app's realtime performance, and you can use the Metal debugger to deep dive and optimize your Metal code and shaders.

    Remember, it's important to profile your app with your most complex scenes to make sure it has enough time to render its frames at a steady pace.

    Check out the Developer Documentation to learn more about optimizing your Metal rendering apps.

    In this code example, I have profiled my app, and have determined that I want to render my app's menu with a .8 quality. This way, the text will look crisper. I want to render the world with a .6 quality because it's a complex scene. I also added a computed property with the maximum render quality that I'm going to use.

    And here's my layer configuration. Dynamic render quality can only be used with foveation, so I check if it's enabled. Then, I set my maximum render quality to the value from my computed property. Remember to set it at the minimum value that makes sense for your content. If you don't, your app will use more memory than it needs to.

    When I load a new scene, I call my adjust render quality function. Adjusting the render quality is only possible if foveation is enabled. I switch over my scene type, and I adjust the render quality accordingly.

    Transitioning between quality values takes a bit of time, rather than being instant. The system makes the transition smooth.

    With dynamic render quality, your highly detailed scenes will really shine. The higher resolution of your rendered scenes can really help with the clarity of your finer details. But remember, you may need to lower the quality during very complex scenes. You can now adjust your app's render quality for your content! New this year, your Metal app can be rendered inside a progressive immersive portal.

    With the progressive immersion style, people using your app can control the immersion level by rotating the Digital Crown. This mode anchors them to the real environment, and can help them feel more at ease when they're viewing complex scenes with movement.

    When you're viewing a Metal app in the progressive immersion mode, the system only renders the content that's inside the current immersion level.

    Here's an example scene from a game being rendered at full immersion. And here's the same scene rendered at partial immersion, after the viewer has adjusted the Digital Crown.

    Compare the two scenes and notice how you can save computing power by not rendering the highlighted area outside the portal. That part is not visible, so it's not necessary to render it.

    New APIs allow you to use a system-computed portal stencil to mask your content. This white oval shows the corresponding portal stencil.

    The stencil buffer works as a mask to your rendered scene. With it, you'll only render the contents inside the portal. You can see that the scene you're rendering doesn't have a smooth edge yet.

    The fading is applied by the system as the last step on your command buffer, resulting in the scene that the viewer sees.

    In order to use the stencil to avoid rendering unnecessary content, first you configure your compositor layer. Make sure your desired stencil format is supported by the layer capabilities, and set it on your configuration. To apply the portal stencil mask, you need to add a render context to the drawable with your command buffer. Drawing your mask on the stencil will prevent any invisible pixels from being rendered. You also have to end the encoding through your render context, instead of directly ending your command encoder. This way, the portal effect is efficiently applied over your content.

    In my app, I create an Immersive Space in SwiftUI, and I add the progressive immersion style as a new option to my list. The person using my app can switch between the progressive and the full styles. I configure the layer next. First, be aware that the progressive immersion style only works with the layered layout. I specify my desired stencil format to 8-bits per pixel. I check that the capabilities support this format, and I set it on my configuration.

    I also set the sample count to 1, since I'm not using MSAA. If you use that technique, set it to the MSAA sampling count.

    In my renderer, I add a render context to the drawable. I pass the same command buffer I will use for my render commands. Then, I draw my portal mask on the stencil attachment. I selected a stencil value that I don't use in any other stencil operations. I set the stencil reference value on the render encoder. This way, my renderer won't draw the area outside the current immersion level. After rendering the scene, note how I end the encoding on my drawable render context.

    To see a working example of a renderer using the progressive immersion style, choose the progressive option in the visionOS Metal app template. That will get you started building a portal-style Metal app.

    Finally, let's explore macOS spatial rendering.

    Until now, I've been talking about building native immersive experiences on Vision Pro.

    This year, you can use the power of your Mac to render and stream immersive content directly to Vision Pro. This can be used to add immersive experiences to existing Mac apps.

    For example, a 3D modeling app can directly preview your scenes on Vision Pro. Or you can build an immersive macOS app from scratch. This way you can make complex immersive experiences with high compute needs, without being constrained by the power usage of Vision Pro.

    Starting a remote immersive session from a Mac app is really easy. When you open an Immersive Space in macOS, you'll be prompted to accept the connection on Vision Pro.

    Do that, and you'll start seeing your Mac-rendered immersive content.

    A typical Mac app is built with SwiftUI or AppKit. You use either of these frameworks to create and display windows. The system renders your window content with Core Animation. You can adopt a variety of macOS frameworks to implement your app's functionality. And the system displays your content on your Mac display. To build a Mac-supported immersive experience, you'll use the same familiar frameworks that allow you to create immersive visionOS apps. First, you use SwiftUI with the new Remote Immersive Space scene type. You then adopt the Compositor Services framework. You use ARKit and Metal to place and render your content. And the system directly displays your immersive scene on Vision Pro.

    The macOS Remote Immersive Space hosts the Compositor Layer and ARKit Session, like a native visionOS app does. They seamlessly connect to your Vision Pro display and sensors. In order to connect your ARKit Session to visionOS, there's a new Remote Device Identifier SwiftUI environment object that you pass to the session initializer.

    This is how a Mac immersive app is structured.

    I define a new remote immersive space, which contains my compositor content. I'll show how it uses the compositor layer in a bit. On Mac, only the progressive and full immersion styles are supported. In the interface of my Mac app, I use the new supports remote scenes environment variable to check if my Mac has this capability. I can customize my UI to show a message if remote scenes are not supported. If they are supported, and I have not opened the immersive space yet, I can launch it.

    The last part of my app is my compositor content. It has my compositor layer and my ARKit session. I create and use a compositor layer the same way I did on visionOS. I access the new remote device identifier SwiftUI environment object, and pass it to the ARKit session initializer. This will connect my Mac's ARKit session to Vision Pro. Last, I start my render loop like I would on a typical Metal immersive app.

    ARKit and the world tracking provider are now available on macOS.

    This allows you to query the Vision Pro location in space. Just as you would in a native immersive app, you will use the device pose to update your scene and drawables before rendering.
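
    A minimal sketch of that per-frame pose query, assuming an already-running WorldTrackingProvider named worldTracking (the pattern mirrors the one used in the Xcode Metal template):

      // Sketch: query the device pose for the drawable's presentation time.
      let presentationTime = drawable.frameTiming.presentationTime
      let time = LayerRenderer.Clock.Instant.epoch.duration(to: presentationTime).timeInterval
      if let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: time) {
          drawable.deviceAnchor = deviceAnchor
          // Build the view and projection matrices from
          // deviceAnchor.originFromAnchorTransform and drawable.views before rendering.
      }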

    A macOS spatial app supports any input device connected to your Mac. You can use keyboard and mouse controls. Or you can connect a gamepad, and handle its input using the Game Controller framework.
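
    For example, a minimal Game Controller framework sketch might look like this (moveCharacter is a hypothetical hook into the scene update):

      import GameController

      // Sketch: observe gamepad connections and forward thumbstick input.
      func observeGamepads() {
          NotificationCenter.default.addObserver(forName: .GCControllerDidConnect,
                                                 object: nil,
                                                 queue: .main) { notification in
              guard let controller = notification.object as? GCController,
                    let gamepad = controller.extendedGamepad else { return }
              gamepad.leftThumbstick.valueChangedHandler = { _, xValue, yValue in
                  moveCharacter(dx: xValue, dy: yValue)   // hypothetical scene update
              }
          }
      }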

    Additionally, you can use pinch events on the interactive elements of your immersive scene by using the 'onSpatialEvent' modifier on your Layer Renderer.

    New this year, you can also create SwiftUI scenes from an existing AppKit or UIKit app. This is a great way of adding new immersive experiences to existing Mac apps. You can learn more about how to do this in "What's new in SwiftUI".

    It's common for rendering engines to be implemented in C or C++. All the APIs I have explained have native equivalents in C. The C types for the Compositor Services framework start with the 'cp' prefix. They use similar patterns and conventions as familiar C libraries such as Core Foundation. For ARKit, the cDevice property gives you a C-compatible remote device identifier. You can pass it into your C framework, and initialize your ARKit Session with the create with device function.

    Now you have all the pieces to use your Mac to power immersive content on Vision Pro.

    I'm excited to see how you use these new capabilities to augment your immersive apps. They allow for better interactivity, higher fidelity, and the new progressive immersion style. And I can't wait to see what you do with the new macOS spatial capabilities.

    To learn more about how to take your immersive apps to the next level, check "Set the scene with SwiftUI in visionOS". And for an overview of other platform improvements, see "What's new in visionOS".

    Thank you for watching!

    • 0:01 - Scene render loop

      // Scene render loop
      
      extension Renderer {
          func renderFrame(with scene: MyScene) {
              guard let frame = layerRenderer.queryNextFrame() else { return }
      
              frame.startUpdate()
              scene.performFrameIndependentUpdates()
              frame.endUpdate()
      
              let drawables = frame.queryDrawables()
              guard !drawables.isEmpty else { return }
      
              guard let timing = frame.predictTiming() else { return }
              LayerRenderer.Clock().wait(until: timing.optimalInputTime)
              frame.startSubmission()
              scene.render(to: drawables)
              frame.endSubmission()
          }
      }
    • 5:54 - Layer configuration

      // Layer configuration
      
      struct MyConfiguration: CompositorLayerConfiguration {
          func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                                 configuration: inout LayerRenderer.Configuration) {
              // Configure other aspects of LayerRenderer
      
              let trackingAreasFormat: MTLPixelFormat = .r8Uint
              if capabilities.supportedTrackingAreasFormats.contains(trackingAreasFormat) {
                  configuration.trackingAreasFormat = trackingAreasFormat
              }
          }
      }
    • 7:54 - Object render function

      // Object render function
      
      extension MyObject {
          func render(drawable: Drawable, renderEncoder: MTLRenderCommandEncoder) {
              var renderValue: LayerRenderer.Drawable.TrackingArea.RenderValue? = nil
              if self.isInteractive {
                  let trackingArea = drawable.addTrackingArea(identifier: self.identifier)
                  if self.usesHoverEffect {
                      trackingArea.addHoverEffect(.automatic)
                  }
                  renderValue = trackingArea.renderValue
              }
              self.draw(with: renderEncoder, trackingAreaRenderValue: renderValue)
          }
      }
    • 8:26 - Metal fragment shader

      // Metal fragment shader
      
      struct FragmentOut
      {
          float4 color [[color(0)]];
          uint16_t trackingAreaRenderValue [[color(1)]];
      };
      
      fragment FragmentOut fragmentShader( /* ... */ )
      {
          // ...
      
          return FragmentOut {
              float4(outColor, 1.0),
              uniforms.trackingAreaRenderValue
          };
      }
    • 10:09 - Event processing

      // Event processing
      
      extension Renderer {
          func processEvent(_ event: SpatialEventCollection.Event) {
              let object = scene.objects.first {
                  $0.identifier == event.trackingAreaIdentifier
              }
              if let object {
                  object.performAction()
              }
          }
      }
    • 13:08 - Quality constants

      // Quality constants
      
      extension MyScene {
          struct Constants {
              static let menuRenderQuality: LayerRenderer.RenderQuality = .init(0.8)
              static let worldRenderQuality: LayerRenderer.RenderQuality = .init(0.6)
              static var maxRenderQuality: LayerRenderer.RenderQuality { menuRenderQuality }
          }
      }
    • 13:32 - Layer configuration

      // Layer configuration
      
      struct MyConfiguration: CompositorLayerConfiguration {
          func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                                 configuration: inout LayerRenderer.Configuration) {
              // Configure other aspects of LayerRenderer

              if configuration.isFoveationEnabled {
                  configuration.maxRenderQuality = MyScene.Constants.maxRenderQuality
              }
          }
      }
    • 13:57 - Set runtime render quality

      // Set runtime render quality
      
      extension MyScene {
          var renderQuality: LayerRenderer.RenderQuality {
              switch type {
              case .world: Constants.worldRenderQuality
              case .menu: Constants.menuRenderQuality
              }
          }
      }
      
      extension Renderer {
          func adjustRenderQuality(for scene: MyScene) {
              guard layerRenderer.configuration.isFoveationEnabled else {
                  return
              }
              layerRenderer.renderQuality = scene.renderQuality
          }
      }
    • 16:58 - SwiftUI immersion style

      // SwiftUI immersion style
      
      @main
      struct MyApp: App {
          @State var immersionStyle: ImmersionStyle = .progressive
      
          var body: some Scene {
              ImmersiveSpace(id: "MyImmersiveSpace") {
                  CompositorLayer(configuration: MyConfiguration()) { @MainActor layerRenderer in
                      Renderer.startRenderLoop(layerRenderer)
                  }
              }
              .immersionStyle(selection: $immersionStyle, in: .progressive, .full)
          }
      }
    • 17:12 - Layer configuration

      // Layer configuration
      
      struct MyConfiguration: CompositorLayerConfiguration {
          func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                                 configuration: inout LayerRenderer.Configuration) {
              // Configure other aspects of LayerRenderer
              
              if configuration.layout == .layered {
                  let stencilFormat: MTLPixelFormat = .stencil8 
                  if capabilities.drawableRenderContextSupportedStencilFormats.contains(
                      stencilFormat
                  ) {
                      configuration.drawableRenderContextStencilFormat = stencilFormat 
                  }
                  configuration.drawableRenderContextRasterSampleCount = 1
              }
          }
      }
    • 17:40 - Render loop

      // Render loop
      
      struct Renderer {
          let portalStencilValue: UInt8 = 200 // Value not used in other stencil operations
      
          func renderFrame(with scene: MyScene,
                           drawable: LayerRenderer.Drawable,
                           commandBuffer: MTLCommandBuffer) {
              let drawableRenderContext = drawable.addRenderContext(commandBuffer: commandBuffer)
              let renderEncoder = configureRenderPass(commandBuffer: commandBuffer)
              drawableRenderContext.drawMaskOnStencilAttachment(commandEncoder: renderEncoder,
                                                                value: portalStencilValue)
              renderEncoder.setStencilReferenceValue(UInt32(portalStencilValue))
              
              scene.render(to: drawable, renderEncoder: renderEncoder)
      
              drawableRenderContext.endEncoding(commandEncoder: renderEncoder)
              drawable.encodePresent(commandBuffer: commandBuffer)
          }
      }
    • 20:55 - App structure

      // App structure
      
      @main
      struct MyImmersiveMacApp: App {
          @State var immersionStyle: ImmersionStyle = .full
      
          var body: some Scene {
              WindowGroup {
                  MyAppContent()
              }
      
              RemoteImmersiveSpace(id: "MyRemoteImmersiveSpace") {
                  MyCompositorContent()
              }
              .immersionStyle(selection: $immersionStyle, in: .full, .progressive)
         }
      }
    • 21:14 - App UI

      // App UI
      
      struct MyAppContent: View {
          @Environment(\.supportsRemoteScenes) private var supportsRemoteScenes
          @Environment(\.openImmersiveSpace) private var openImmersiveSpace
          @State private var spaceState: OpenImmersiveSpaceAction.Result?
      
          var body: some View {
              if !supportsRemoteScenes {
                  Text("Remote SwiftUI scenes are not supported on this Mac.")
              } else if spaceState != nil {
                  MySpaceStateView($spaceState)
              } else {
                  Button("Open remote immersive space") {
                      Task {
                          spaceState = await openImmersiveSpace(id: "MyRemoteImmersiveSpace")
                      }
                  }
              }
          }
      }
    • 21:35 - Compositor content and ARKit session

      // Compositor content and ARKit session
      
      struct MyCompositorContent: CompositorContent {
          @Environment(\.remoteDeviceIdentifier) private var remoteDeviceIdentifier
      
          var body: some CompositorContent {
              CompositorLayer(configuration: MyConfiguration()) { @MainActor layerRenderer in
                  guard let remoteDeviceIdentifier else { return }
                  let arSession = ARKitSession(device: remoteDeviceIdentifier)
                  Renderer.startRenderLoop(layerRenderer, arSession)
              }
          }
      }
    • 23:17 - C interoperability

      // Swift
      let remoteDevice: ar_device_t = remoteDeviceIdentifier.cDevice
      Renderer.start_rendering(layerRenderer, remoteDevice)

      // C
      void start_rendering(cp_layer_renderer_t layer_renderer, ar_device_t remoteDevice) {
          ar_session_t session = ar_session_create_with_device(remoteDevice);
          // ...
      }
    • 0:00 - Intro
    • Metal rendering on visionOS, along with Compositor Services, brings exciting new features this year, including hover effects on interactive objects, dynamic render quality for your rendered content, a brand-new progressive immersion style, and the ability to render immersive content on Vision Pro directly from macOS.

    • 1:58 - New render loop APIs
    • The render loop on visionOS has an important change this year. Instead of returning a single drawable, the new queryDrawables function returns an array of one or two drawables. The second drawable is present whenever you're recording a high-quality video with Reality Composer Pro. Check Xcode for a template that can get you started. Both Metal and Metal 4 are supported.

    • 4:21 - Hover effects
    • Once you’ve adopted the new render loop API, you can start implementing hover effects on the objects that can be interacted with. With hover effects, people can see which objects are interactive, and can anticipate the targets of their actions. The system will dynamically highlight the object someone is looking at. For example, a puzzle game can emphasize the pieces that can be selected by the player. You can do this using the new tracking areas texture, which defines the different interactive regions in your scene. There are a few additional considerations to keep in mind if you’re using MSAA (multisample antialiasing).

    • 10:50 - Dynamic render quality
    • You can now draw your content at even higher fidelity than before. By using dynamic render quality, you can adjust the resolution of your content based on the complexity of your scenes. It builds upon foveated rendering, which prioritizes pixel density where the viewer is looking. You can set a maximum render quality and then adjust runtime quality within that range. Higher quality improves text and UI clarity but increases memory and power usage. Balancing quality and performance is crucial. Use tools like Instruments and Metal debugger to find the right balance.

    • 14:44 - Progressive immersion
    • New this year, you can render content inside a progressive immersive portal. With this, people control the immersion level by rotating the Digital Crown. This grounds them to the real environment, and can help them feel more at ease when they're viewing complex scenes with movement. To implement it, ask the system to provide a stencil buffer to mask content outside of the portal boundary. The system applies a fading effect to the edges of the portal, creating a seamless transition between the real and rendered environments. Pixels outside the portal's view aren’t rendered, saving computing power. Implementation details are shared.

    • 18:32 - macOS spatial rendering
    • macOS spatial rendering enables you to leverage the power of your Mac to render and stream immersive content directly to Apple Vision Pro. This new feature allows existing Mac apps to be enhanced with immersive experiences, such as real-time 3D modeling previews. ARKit and the WorldTrackingProvider are now available on macOS. This allows you to query the Vision Pro's location in space. The macOS RemoteImmersiveSpace hosts the CompositorLayer and ARKitSession, like a native visionOS app would. There's a new remoteDeviceIdentifier you'll use to connect your Mac's ARKit session to Vision Pro. And all of the relevant APIs have native equivalents in C.

    • 23:51 - Next steps
    • These new capabilities in Metal and Compositor Services on visionOS let you bring better interactivity, higher fidelity, and the new progressive immersion style to your apps and games. Next, check "Set the scene with SwiftUI in visionOS" and "What's new in visionOS 26."
