Unity PolySpatial – Live handheld camera feed of graspable objects not rendering on Vision Pro

I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration.

The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.

In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected.

However, when the application is built and run on the Vision Pro, the quad displays only the secondary camera's clear background color. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.

Steps I have taken (a simplified code sketch follows the list):

  • Created a new layer (CameraFeed) and assigned the relevant objects to it.
  • Set the secondary camera’s culling mask to render only the CameraFeed layer.
  • Assigned the RenderTexture as the camera’s target texture.
  • Applied the RenderTexture to an Unlit/Texture material on a quad.
  • Confirmed the camera is active and correctly positioned relative to the object.
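In code, the setup amounts to roughly the following. This is a simplified sketch; the field names, texture size, and the choice to create the RenderTexture and material at runtime are illustrative rather than my exact project code:

```csharp
using UnityEngine;

// Simplified version of the handheld-camera setup; all names are illustrative.
public class HandheldCameraFeed : MonoBehaviour
{
    [SerializeField] private Camera feedCamera;   // secondary camera on the graspable object
    [SerializeField] private Renderer screenQuad; // quad acting as the "screen"
    [SerializeField] private int textureSize = 512;

    private RenderTexture feedTexture;

    private void Start()
    {
        // Render only objects on the CameraFeed layer.
        feedCamera.cullingMask = LayerMask.GetMask("CameraFeed");

        // Route the secondary camera's output into a RenderTexture...
        feedTexture = new RenderTexture(textureSize, textureSize, 16);
        feedCamera.targetTexture = feedTexture;

        // ...and display it on the quad via an Unlit/Texture material.
        // (Unlit/Texture must be included in the build, e.g. via Always Included Shaders.)
        screenQuad.material = new Material(Shader.Find("Unlit/Texture"))
        {
            mainTexture = feedTexture
        };
    }

    private void OnDestroy()
    {
        feedTexture.Release();
    }
}
```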

From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.
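One workaround I am considering, but have not yet verified, is to keep a proxy mesh on the CameraFeed layer that mirrors the graspable object's pose every frame, so the secondary camera has something Unity itself renders (assuming a non-synced, Unity-only object is even possible under PolySpatial, which I am unsure about). A rough sketch, with all names illustrative:

```csharp
using UnityEngine;

// Unverified idea: mirror the graspable object's pose onto a proxy each frame.
// The proxy lives on the CameraFeed layer so the secondary camera can render it.
// All names here are illustrative, not my actual code.
public class CameraFeedProxy : MonoBehaviour
{
    [SerializeField] private Transform graspable; // RealityKit-managed, grabbed by the user
    [SerializeField] private Transform proxy;     // duplicate mesh on the CameraFeed layer

    private void LateUpdate()
    {
        // Copy the world pose after grabbing has moved the object this frame.
        proxy.SetPositionAndRotation(graspable.position, graspable.rotation);
        // Assumes the proxy is unparented, so lossyScale maps directly.
        proxy.localScale = graspable.lossyScale;
    }
}
```

If the proxy appeared in the RenderTexture while the real object did not, that would at least confirm the hypothesis above, but I would prefer an official approach if one exists.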

My questions are:

  1. Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
  2. Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
  3. Has anyone found alternative design patterns or methods for this kind of interaction?

Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR Plugin 2.2.4

Any insight or suggestions would be greatly appreciated. Thank you.
