raycast during RoomCaptureSession

I created an app where the user:

  1. first scans a room using RoomCaptureView (RoomPlan)
  2. then taps on physical elements (objects, walls, ...) in an ARView to record some 3D positions

I can handle taps in an ARView using a UITapGestureRecognizer and the ARView raycast(from:allowing:alignment:) method. This works fine, so I thought I could do the same with the ARView used by RoomCaptureView, so the user can scan a room and record some 3D positions at the same time. Sadly, this approach does not work: the raycast never returns any results.
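For reference, this is roughly how the tap handling works in the standalone ARView (a minimal sketch, with placeholder names and the session setup omitted):

```swift
import UIKit
import RealityKit
import ARKit

// Minimal sketch of the standalone-ARView approach (names are placeholders).
class TapViewController: UIViewController {
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)
        arView.addGestureRecognizer(
            UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        )
    }

    @objc func handleTap(_ gesture: UITapGestureRecognizer) {
        let point = gesture.location(in: arView)
        // Raycast against estimated planes; the result array is empty if nothing was hit.
        guard let result = arView.raycast(from: point,
                                          allowing: .estimatedPlane,
                                          alignment: .any).first else { return }
        let position = SIMD3<Float>(result.worldTransform.columns.3.x,
                                    result.worldTransform.columns.3.y,
                                    result.worldTransform.columns.3.z)
        print("Tapped world position:", position)
    }
}
```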

What I actually need is to map a tap on the screen to a real-world position during a RoomCaptureSession.

Does anyone know how to do this?

I'm not very experienced with iOS development, so this solution might not be very elegant in Swift: you can grab the camera transform and camera intrinsics from the ARFrame within the ARSession, then write an algorithm that maps the tap back to a world position manually.
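A rough sketch of that manual mapping, under two assumptions: the tap point has already been converted into captured-image pixel coordinates (going from a UIKit view point to image coordinates needs the inverse of ARFrame.displayTransform(for:viewportSize:)), and the helper name worldRay(forPixel:in:) is made up. Note that without depth information this only gives you a ray, not a single 3D point:

```swift
import UIKit
import ARKit
import simd

// Sketch: build a world-space ray from a tap, using the camera intrinsics
// and transform of the current ARFrame. `pixel` is assumed to be in
// captured-image pixel coordinates.
func worldRay(forPixel pixel: CGPoint,
              in frame: ARFrame) -> (origin: SIMD3<Float>, direction: SIMD3<Float>) {
    let intrinsics = frame.camera.intrinsics
    let fx = intrinsics[0, 0], fy = intrinsics[1, 1]   // focal lengths
    let cx = intrinsics[2, 0], cy = intrinsics[2, 1]   // principal point

    // Ray in camera space. ARKit's camera looks down -Z with +Y up,
    // while image y grows downward, hence the sign flips.
    let dirCamera = SIMD3<Float>(
        (Float(pixel.x) - cx) / fx,
        -(Float(pixel.y) - cy) / fy,
        -1
    )

    // Transform the ray into world space using the camera pose.
    let cameraTransform = frame.camera.transform
    let origin = SIMD3<Float>(cameraTransform.columns.3.x,
                              cameraTransform.columns.3.y,
                              cameraTransform.columns.3.z)
    let dirWorld4 = cameraTransform * SIMD4<Float>(dirCamera.x, dirCamera.y, dirCamera.z, 0)
    let direction = simd_normalize(SIMD3<Float>(dirWorld4.x, dirWorld4.y, dirWorld4.z))
    return (origin, direction)
}
```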

@benjiv Did you ever manage to implement this? I managed to add an SCNView on top of the RoomCaptureView and copy RoomPlan's camera transform to replicate the camera movement. However, raycasting still did not work.
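In case it helps anyone, a sketch of how one might keep such an overlay camera in sync with the ARKit camera; syncCamera(of:with:orientation:) is a made-up helper and assumes the SCNView's pointOfView is a camera node you created yourself:

```swift
import SceneKit
import ARKit

// Sketch (assumption): mirror the ARKit camera into an SCNView overlay each frame,
// copying both the pose and the projection so screen points line up.
func syncCamera(of scnView: SCNView,
                with frame: ARFrame,
                orientation: UIInterfaceOrientation) {
    guard let pointOfView = scnView.pointOfView else { return }
    // Copy the camera pose.
    pointOfView.simdTransform = frame.camera.transform
    // Copy the projection as well; otherwise hit-testing in the SCNView
    // will not match what the RoomCaptureView is showing.
    let projection = frame.camera.projectionMatrix(for: orientation,
                                                   viewportSize: scnView.bounds.size,
                                                   zNear: 0.01,
                                                   zFar: 100)
    pointOfView.camera?.projectionTransform = SCNMatrix4(projection)
}
```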

I managed to read the feature points, but they seem to be all over the place...
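One way to make them usable might be to pick the feature point closest to the tap ray (for example, the ray from the worldRay(forPixel:in:) sketch above); nearestFeaturePoint(to:in:maxDistance:) is just an illustrative helper:

```swift
import ARKit
import simd

// Sketch: find the raw feature point closest to a world-space ray,
// ignoring anything farther than `maxDistance` from the ray.
func nearestFeaturePoint(to ray: (origin: SIMD3<Float>, direction: SIMD3<Float>),
                         in frame: ARFrame,
                         maxDistance: Float = 0.1) -> SIMD3<Float>? {
    guard let points = frame.rawFeaturePoints?.points else { return nil }
    var best: (point: SIMD3<Float>, distance: Float)?
    for point in points {
        let toPoint = point - ray.origin
        let t = simd_dot(toPoint, ray.direction)
        guard t > 0 else { continue }                      // skip points behind the camera
        let distance = simd_length(toPoint - t * ray.direction)
        if distance < maxDistance, distance < (best?.distance ?? .infinity) {
            best = (point, distance)
        }
    }
    return best?.point
}
```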
