(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Camera & Photos.
The WWDC25 Camera & Photos group lab ran for one hour at 6 PM PDT on Tuesday, June 10th, 2025.
Question 10
Can we directly integrate auto-capture triggers (e.g., when the image is steady or text is detected) using Vision and AVFoundation?
- Yes. Apps can use an AVCaptureSession with an AVCaptureVideoDataOutput (VDO) plus an AVCapturePhotoOutput, run Vision on the VDO buffers, and capture a photo when a certain scene or text is detected.
- Just be careful to run Vision on the VDO buffers asynchronously so it doesn't cause frame drops (see the sketch below).
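A minimal sketch of that flow, assuming your session already has a video data output delivering to this delegate and an AVCapturePhotoOutput attached; the text-detection request and the one-request-in-flight gate are illustrative choices, not the only options:

```swift
import AVFoundation
import Vision

final class AutoCaptureDelegate: NSObject,
    AVCaptureVideoDataOutputSampleBufferDelegate, AVCapturePhotoCaptureDelegate {

    let photoOutput: AVCapturePhotoOutput
    private let visionQueue = DispatchQueue(label: "vision-analysis")
    private var isAnalyzing = false  // gate: at most one Vision request in flight

    init(photoOutput: AVCapturePhotoOutput) {
        self.photoOutput = photoOutput
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Return quickly: running Vision inline here would back up the
        // output's buffer pool and cause frame drops.
        guard !isAnalyzing,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        isAnalyzing = true
        visionQueue.async { [weak self] in
            defer { self?.isAnalyzing = false }
            let request = VNDetectTextRectanglesRequest()
            let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
            try? handler.perform([request])
            if let self, let results = request.results, !results.isEmpty {
                // Text detected: trigger the still capture.
                self.photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
            }
        }
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Handle photo.fileDataRepresentation() here.
    }
}
```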
Question 11
What Camera or Photos framework features support working with images from external media, like connected cameras or SD cards? Any best practices?
- The ImageCaptureCore framework supports camera devices, memory cards, and scanners
- read and write, where supported
- check out the docs to see how to browse connected devices, folders, files, etc.; a minimal browsing sketch follows
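A minimal browsing sketch; the two delegate methods shown are the required ones, and opening a session to enumerate a camera's folders and files via ICCameraDevice is left as the next step:

```swift
import ImageCaptureCore

final class CameraBrowser: NSObject, ICDeviceBrowserDelegate {
    private let browser = ICDeviceBrowser()

    func start() {
        browser.delegate = self
        browser.start()  // begins posting didAdd/didRemove callbacks
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didAdd device: ICDevice, moreComing: Bool) {
        // A camera, memory card, or scanner appeared.
        print("Found: \(device.name ?? "unknown device")")
    }

    func deviceBrowser(_ browser: ICDeviceBrowser, didRemove device: ICDevice, moreGoing: Bool) {
        print("Removed: \(device.name ?? "unknown device")")
    }
}
```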
Question 12
Hi Brad, to follow up on your SwiftUI cautionary note: using AVCaptureVideoPreview inside a UIViewRepresentable is okay, right? Thanks all for the great info!
- Yes, this is totally fine.
- AppKit or UIKit views inside the appropriate SwiftUI representables should give equivalent performance; a typical wrapper is sketched below
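For reference, one common shape for that wrapper, assuming a configured AVCaptureSession:

```swift
import SwiftUI
import AVFoundation

// A UIView whose backing layer is the preview layer, so it resizes correctly.
final class PreviewView: UIView {
    override class var layerClass: AnyClass { AVCaptureVideoPreviewLayer.self }
    var previewLayer: AVCaptureVideoPreviewLayer { layer as! AVCaptureVideoPreviewLayer }
}

struct CameraPreview: UIViewRepresentable {
    let session: AVCaptureSession

    func makeUIView(context: Context) -> PreviewView {
        let view = PreviewView()
        view.previewLayer.session = session
        view.previewLayer.videoGravity = .resizeAspectFill
        return view
    }

    func updateUIView(_ uiView: PreviewView, context: Context) {}
}
```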
Question 13
What’s the “right” way to transition media in my photos app between HDR modes? When I’m in a one-up view, we use HDR, but in other contexts (like thumbnail) we don’t want HDR. Is there a nice way to tone map?
- There’s a suite of new System Tone Mapper APIs in this year’s OSes
- CoreImage, ImageKit, CoreAnimation, CoreGraphics
- For example:
- CoreImage: new CISystemToneMap filter.
- CoreAnimation: layer.preferredDynamicRange = CADynamicRangeConstrainedHigh
- Image views (NSImageView/UIImageView/SwiftUI Image/CALayer) support animating preferredDynamicRange
- Can go from high to constrained to standard
- Tone mapping is provided by the system (CISystemToneMap for the controllable example); a short sketch follows this list
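A minimal sketch of the image-view route on iOS, using UIImageView's preferredImageDynamicRange (available since iOS 17, and the image-view analogue of the layer property above); per the answer, the change can be animated when moving between contexts:

```swift
import UIKit

// One-up view: allow full HDR rendering.
func enterOneUp(_ imageView: UIImageView) {
    UIView.animate(withDuration: 0.25) {
        imageView.preferredImageDynamicRange = .high
    }
}

// Thumbnail/grid: let the system tone map down.
// (.constrainedHigh is the intermediate step mentioned above.)
func enterThumbnail(_ imageView: UIImageView) {
    UIView.animate(withDuration: 0.25) {
        imageView.preferredImageDynamicRange = .standard
    }
}
```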
Question 14
What is your recommendation to preprocess and upscale your depth map in order to render a realistic portrait mode image?
- One way to do this: the CIEdgePreserveUpsample CIFilter can be used to upsample a lower-resolution depth map by using a higher-resolution RGB image as a guide (see the sketch below).
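A minimal sketch using the CIFilterBuiltins spelling of that filter, assuming you already have the full-resolution RGB image and the low-resolution depth map as CIImages:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

func upsampledDepth(rgb: CIImage, lowResDepth: CIImage) -> CIImage? {
    let filter = CIFilter.edgePreserveUpsample()
    filter.inputImage = rgb          // high-resolution guide image
    filter.smallImage = lowResDepth  // low-resolution depth map to upsample
    return filter.outputImage
}
```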
Question 15
For buffering frames for later processing from real-time camera output, should we prefer an AVSampleBufferDisplayLayer-centered approach or an AVCaptureVideoDataOutputSampleBufferDelegate-centered approach? When would we use each?
- AVSampleBufferDisplayLayer and AVCaptureVideoDataOutputSampleBufferDelegate are used hand in hand for custom camera preview.
- For buffering for later processing, ensure you make copies of the VDO buffers so you don't starve the output's buffer pool and drop frames; one way to copy is sketched below
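One way to make those copies, sketched for a single-plane format such as BGRA (planar 4:2:0 formats would need a per-plane copy):

```swift
import AVFoundation

final class FrameBufferer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private(set) var frames: [CVPixelBuffer] = []

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Copy the pixels into a buffer we own; retaining the output's own
        // buffers starves its pool and forces frame drops.
        guard let src = CMSampleBufferGetImageBuffer(sampleBuffer),
              let copy = Self.copy(src) else { return }
        frames.append(copy)
    }

    private static func copy(_ src: CVPixelBuffer) -> CVPixelBuffer? {
        var maybeDst: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault,
                            CVPixelBufferGetWidth(src),
                            CVPixelBufferGetHeight(src),
                            CVPixelBufferGetPixelFormatType(src),
                            nil, &maybeDst)
        guard let dst = maybeDst else { return nil }
        CVPixelBufferLockBaseAddress(src, .readOnly)
        CVPixelBufferLockBaseAddress(dst, [])
        defer {
            CVPixelBufferUnlockBaseAddress(dst, [])
            CVPixelBufferUnlockBaseAddress(src, .readOnly)
        }
        guard let s = CVPixelBufferGetBaseAddress(src),
              let d = CVPixelBufferGetBaseAddress(dst) else { return nil }
        // Row-by-row copy in case the buffers differ in row padding.
        let srcBPR = CVPixelBufferGetBytesPerRow(src)
        let dstBPR = CVPixelBufferGetBytesPerRow(dst)
        for row in 0..<CVPixelBufferGetHeight(src) {
            memcpy(d + row * dstBPR, s + row * srcBPR, min(srcBPR, dstBPR))
        }
        return dst
    }
}
```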
Question 16
Hello, my question is on Deferred Photo Processing. Say I have a photo capture app that adds a CIFilter to the capture. How can I take advantage of Deferred Photo Processing, since I don’t know how to detect when the deferred photo is ready?
- The CIFilter can be applied to the final image at that point (once deferred processing completes and the final photo is available)
- The photo will then have to be re-inserted into the Photos library as an adjustment (see the sketch below)
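A hedged sketch of that re-insertion using PhotosKit's adjustment APIs, assuming asset is the PHAsset for the finalized capture and filter is your configured CIFilter; the format identifier string is a hypothetical placeholder:

```swift
import Photos
import CoreImage

func applyFilter(_ filter: CIFilter, to asset: PHAsset) {
    asset.requestContentEditingInput(with: nil) { input, _ in
        guard let input, let url = input.fullSizeImageURL else { return }
        filter.setValue(CIImage(contentsOf: url), forKey: kCIInputImageKey)
        guard let result = filter.outputImage else { return }

        let output = PHContentEditingOutput(contentEditingInput: input)
        // Hypothetical identifier; use your app's own here.
        output.adjustmentData = PHAdjustmentData(formatIdentifier: "com.example.filter",
                                                 formatVersion: "1.0",
                                                 data: Data())
        let context = CIContext()
        try? context.writeJPEGRepresentation(of: result,
                                             to: output.renderedContentURL,
                                             colorSpace: CGColorSpaceCreateDeviceRGB())
        PHPhotoLibrary.shared().performChanges {
            PHAssetChangeRequest(for: asset).contentEditingOutput = output
        }
    }
}
```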
Question 17
Is digital zoom (e.g., 1.5x) before taking a photo the same as cropping the photo afterward?
- Digital zoom upscales the image to the output dimensions, while cropping afterward yields a smaller output image
- So while digital zoom does crop, it also upscales
Question 18
How do you design camera interfaces that work for both casual users and photography enthusiasts?
- Progressive disclosure: Put the most common controls up front, and make it easy for pros to drill down.
- Sensible Defaults: Choose defaults that work well for casual users, but allow those defaults to be modified for photography enthusiasts
- A good philosophy is: Keep the simple things easy, make the hard things possible
Question 19
Recent iPhone models introduced macro mode, which automatically switches between lenses to account for the focal distance. Is there an official API to implement this, or should I implement it myself using LiDAR values?
- Using the builtInTripleCamera or builtInDualWideCamera virtual devices will automatically switch to macro when available (see the sketch below)
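A minimal selection sketch; the discovery session returns devices in the order the types are listed, so it falls back gracefully on hardware without a triple camera:

```swift
import AVFoundation

let discovery = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInTripleCamera, .builtInDualWideCamera, .builtInWideAngleCamera],
    mediaType: .video,
    position: .back)

// The first match is the most capable available back camera; the virtual
// devices handle the automatic macro lens switching for you.
let device = discovery.devices.first
```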
Question 20
A couple of years ago at WWDC, the option of replacing a camera with a virtual camera was mentioned. How does one do that - make the “physical” camera effectively disappear, so only the virtual camera is accessible to the user?
- You can't prevent the built-in camera from being available to the user
Question 21
Can developers now integrate custom Core ML models with Vision for on-device photo analysis more seamlessly?
- Yes they can: use Vision's CoreMLRequest and provide your model in a model container (see the sketch below)
- This has been supported for a while (iOS 18/macOS 15)
- For more details, go to the Machine Learning & AI group lab on Thursday
- Use smaller images for better performance
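A minimal sketch of that path using Vision's Swift API (iOS 18/macOS 15); MyClassifier stands in for your generated Core ML model class, so that name is hypothetical:

```swift
import Vision
import CoreML

func analyze(_ image: CGImage) async throws -> [any VisionObservation] {
    // Wrap the Core ML model in a container, then hand it to CoreMLRequest.
    let model = try MyClassifier(configuration: MLModelConfiguration()).model
    let container = try CoreMLModelContainer(model: model)
    let request = CoreMLRequest(model: container)
    // Smaller input images run faster, per the note above.
    return try await request.perform(on: image)
}
```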
Question 22
What would you recommend for capture of the new immersive and spatial formats?
- To capture Spatial Video, use AVCaptureMovieFileOutput’s spatialVideoCaptureEnabled property
- Not all device formats support spatial capture; check AVCaptureDevice.activeFormat.spatialVideoCaptureSupported (see the sketch below)
- See WWDC 2024 talk “Build compelling spatial photo and video experiences” for more details
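A minimal enabling sketch; note that the Swift spellings of the two properties carry an is prefix:

```swift
import AVFoundation

func enableSpatialVideo(on output: AVCaptureMovieFileOutput,
                        device: AVCaptureDevice) {
    // Only some device formats support spatial capture, so check first.
    if device.activeFormat.isSpatialVideoCaptureSupported {
        output.isSpatialVideoCaptureEnabled = true
    }
}
```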
Question 23
You mentioned JPEG-XL. What is the current status of support on iOS and macOS for encoding and decoding?
- For decoding, we support JPEG-XL files in all our OSes: regular SDR files as well as ISO HDR files.
- For encoding, we only support JPEG-XL for ProRAW DNG capture, in the Camera app or via the AVFoundation APIs available to third-party apps.
- If you have any requests for improvements or new features related to JPEG-XL, please file a request using Feedback Assistant.
(Note: this is part 2 of a 3 part posting. See Part 1 or Part 3)