I developed a DriverKit extension based on the "Overriding the default USB video class extension" sample, but the article doesn't give the details of the implementation. I asked DTS, who gave two tips:
1. Do you also have a CMIO extension to load in place of the default UVC extension?
2. Your DriverKit extension's Info.plist is also missing the CameraAssistantBundleID.
I want to know why a DriverKit extension needs a CMIO extension, and what are the data and control flows between them?
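For reference, DTS's second tip presumably means adding an entry like this to the dext's Info.plist; a minimal sketch, where the CMIO extension bundle identifier is hypothetical:
<key>CameraAssistantBundleID</key>
<string>com.example.MyVirtualCam.CMIOExtension</string>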
Photos & Camera
Explore technical aspects of capturing high-quality photos and videos, including exposure control, focus modes, and RAW capture options.
Following the documentation, I built a simple demo to verify this.
My environment:
ProductName: macOS
ProductVersion: 15.5
BuildVersion: 24F74
2.4 GHz Quad-Core Intel Core i5
Info.plist:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>IOKitPersonalities</key>
<dict>
<key>UVCamera</key>
<dict>
<key>CFBundleIdentifierKernel</key>
<string>com.apple.kpi.iokit</string>
<key>IOClass</key>
<string>IOUserService</string>
<key>IOMatchCategory</key>
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
<key>IOProviderClass</key>
<string>IOUserResources</string>
<key>IOResourceMatch</key>
<string>IOKit</string>
<key>IOUserClass</key>
<string>UVCamera</string>
<key>IOUserServerName</key>
<string>$(PRODUCT_BUNDLE_IDENTIFIER)</string>
<key>IOProbeScore</key>
<integer>100000</integer>
<key>idVendor</key>
<integer>1452</integer>
<key>idProduct</key>
<integer>34068</integer>
</dict>
</dict>
<key>OSBundleUsageDescription</key>
<string></string>
</dict>
</plist>
UVCamera.cpp
//
// UVCamera.cpp
// UVCamera
//
// Created by DTEN on 2025/6/12.
//
#include <os/log.h>
#include <DriverKit/IOUserServer.h>
#include <DriverKit/IOLib.h>
#include "UVCamera.h"
kern_return_t
IMPL(UVCamera, Start)
{
kern_return_t ret;
ret = Start(provider, SUPERDISPATCH);
os_log(OS_LOG_DEFAULT, "Hello World");
return ret;
}
UVCamera.iig
//
// UVCamera.iig
// UVCamera
//
// Created by DTEN on 2025/6/12.
//
#ifndef UVCamera_h
#define UVCamera_h
#include <Availability.h>
#include <DriverKit/IOService.iig>
class UVCamera: public IOService
{
public:
virtual kern_return_t
Start(IOService * provider) override;
};
#endif /* UVCamera_h */
Then I build it with Xcode and move it to /Library/DriverExtensions:
sudo mv com.lqs.MyVirtualCam.UVCamera.dext /Library/DriverExtensions
sudo kmutil install -R / -r /Library/DriverExtensions
kmutil rebuild done
However, the dext isn't loaded:
kmutil showloaded --list-only | grep UVCamera
No variant specified, falling back to release
What's the problem? Can anyone help me?
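One thing worth checking: kmutil alone does not activate a DriverKit extension. The supported path is to embed the dext in a host app and activate it with the SystemExtensions framework; a minimal sketch of that activation, using the bundle identifier from the post:

import SystemExtensions

final class ExtensionActivator: NSObject, OSSystemExtensionRequestDelegate {
    func activate() {
        let request = OSSystemExtensionRequest.activationRequest(
            forExtensionWithIdentifier: "com.lqs.MyVirtualCam.UVCamera",
            queue: .main)
        request.delegate = self
        OSSystemExtensionManager.shared.submitRequest(request)
    }

    func request(_ request: OSSystemExtensionRequest,
                 actionForReplacingExtension existing: OSSystemExtensionProperties,
                 withExtension ext: OSSystemExtensionProperties) -> OSSystemExtensionRequest.ReplacementAction {
        .replace  // replace any previously activated version
    }

    func requestNeedsUserApproval(_ request: OSSystemExtensionRequest) {
        // The user must approve the extension in System Settings > Privacy & Security
    }

    func request(_ request: OSSystemExtensionRequest,
                 didFinishWithResult result: OSSystemExtensionRequest.Result) {
        print("Activation finished: \(result)")
    }

    func request(_ request: OSSystemExtensionRequest, didFailWithError error: Error) {
        print("Activation failed: \(error)")
    }
}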
When I use IOKit/usb/IOUSBLib to toggle the built-in camera, I get an error: ret IOReturn -536870210 (0xE00002BE, i.e. kIOReturnNoResources).
How can I resolve it? Can I use IOUSBLib to disable or hide the built-in camera?
My environment:
Model Name: MacBook Pro
ProductVersion: 15.5
Model Identifier: MacBookPro15,2
Processor Name: Quad-Core Intel Core i5
Processor Speed: 2.4 GHz
Number of Processors: 1
// Required headers (not shown in the original post)
#include <iostream>
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
#include <IOKit/usb/IOUSBLib.h>
#include <IOKit/usb/USBSpec.h>

// Disable or enable a USB device
bool toggleUSBDevice(uint16_t vendorID, uint16_t productID, bool enable) {
std::cout << (enable ? "Enabling" : "Disabling") << " USB device with VID: 0x"
<< std::hex << vendorID << ", PID: 0x" << productID << std::endl;
// Create a matching dictionary for USB devices with the given VID/PID
CFMutableDictionaryRef matchingDict = IOServiceMatching(kIOUSBDeviceClassName);
if (!matchingDict) {
std::cerr << "Failed to create USB device matching dictionary." << std::endl;
return false;
}
// Set the VID/PID matching criteria
CFNumberRef vendorIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &vendorID);
CFNumberRef productIDRef = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt16Type, &productID);
CFDictionarySetValue(matchingDict, CFSTR(kUSBVendorID), vendorIDRef);
CFDictionarySetValue(matchingDict, CFSTR(kUSBProductID), productIDRef);
CFRelease(vendorIDRef);
CFRelease(productIDRef);
// Get an iterator over the matching devices
io_iterator_t deviceIterator;
if (IOServiceGetMatchingServices(kIOMainPortDefault, matchingDict, &deviceIterator) != KERN_SUCCESS) {
std::cerr << "Failed to get USB device iterator." << std::endl;
CFRelease(matchingDict);
return false;
}
io_service_t usbDevice;
bool result = false;
int deviceCount = 0;
// Iterate over all matching devices
while ((usbDevice = IOIteratorNext(deviceIterator)) != IO_OBJECT_NULL) {
deviceCount++;
// Get the device's registry path
char path[1024];
if (IORegistryEntryGetPath(usbDevice, kIOServicePlane, path) == KERN_SUCCESS) {
std::cout << "Found device at path: " << path << std::endl;
}
// Open the device via a plug-in interface
IOCFPlugInInterface** plugInInterface = NULL;
IOUSBDeviceInterface** deviceInterface = NULL;
SInt32 score;
IOReturn ret = IOCreatePlugInInterfaceForService(
usbDevice,
kIOUSBDeviceUserClientTypeID,
kIOCFPlugInInterfaceID,
&plugInInterface,
&score);
if (ret == kIOReturnSuccess && plugInInterface) {
ret = (*plugInInterface)->QueryInterface(plugInInterface,
CFUUIDGetUUIDBytes(kIOUSBDeviceInterfaceID),
(LPVOID*)&deviceInterface);
(*plugInInterface)->Release(plugInInterface);
}
if (ret != kIOReturnSuccess) {
std::cerr << "Failed to open USB device interface. Error:" << ret << std::endl;
IOObjectRelease(usbDevice);
continue;
}
// Disable or enable the device
if (enable) {
// Enable: force the device to re-enumerate
// (Note: USBDeviceReEnumerate typically requires the device to be opened with USBDeviceOpen first.)
ret = (*deviceInterface)->USBDeviceReEnumerate(deviceInterface, 0);
if (ret == kIOReturnSuccess) {
std::cout << "Device enabled successfully." << std::endl;
result = true;
} else {
std::cerr << "Failed to enable device. Error: " << ret << std::endl;
}
} else {
// Disable: close the device connection
ret = (*deviceInterface)->USBDeviceClose(deviceInterface);
if (ret == kIOReturnSuccess) {
std::cout << "Device disabled successfully." << std::endl;
result = true;
} else {
std::cerr << "Failed to disable device. Error: " << ret << std::endl;
}
}
// Release the device interface
(*deviceInterface)->Release(deviceInterface);
IOObjectRelease(usbDevice);
}
IOObjectRelease(deviceIterator);
if (deviceCount == 0) {
std::cerr << "No device found with specified VID/PID." << std::endl;
return false;
}
return result;
}
The documentation for PHAssetChangeRequest.revertAssetContentToOriginal says it will fail if the original asset content is not on the current device, and that you should use PHAssetResourceManager to download it first. That no longer seems to be the case in the latest iOS versions: no error occurs when I take a photo on my iPhone, edit it, open Photos on my iPad and let it sync, then open my app on the iPad and call revertAssetContentToOriginal for that asset. Does the system now take care of downloading the original when needed?
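For context, the call in question is made inside a standard photo library change block; a minimal sketch, assuming `asset` is a PHAsset fetched from the synced library:

import Photos

func revertToOriginal(_ asset: PHAsset) {
    PHPhotoLibrary.shared().performChanges({
        // Per the docs, this should fail if the original content is not local
        PHAssetChangeRequest(for: asset).revertAssetContentToOriginal()
    }) { success, error in
        print("Revert finished: \(success), error: \(String(describing: error))")
    }
}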
Some users have reported an error when editing portrait photo assets in my app:
The operation couldn’t be completed. (CINonLocalizedDescriptionKey error 3.)
What is that error? Will affected photos always encounter this error (due to data corruption for example) or can it be resolved in a future iOS update?
FB16241301
iOS 26 added a smoothness parameter to CIRoundedRectangleGenerator, for use with CIFilter.roundedRectangleGenerator. What should the smoothness value be to achieve the same corner curve as CALayerCornerCurve.continuous? Does it need to be calculated based on the extent size, and if so, how?
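For experimentation, the generator can be driven like this; a sketch in which the inputSmoothness key name and the 0.6 starting value are assumptions to probe with, not confirmed API or a confirmed match for the continuous curve:

import CoreImage
import CoreImage.CIFilterBuiltins

let filter = CIFilter.roundedRectangleGenerator()
filter.extent = CGRect(x: 0, y: 0, width: 400, height: 300)
filter.radius = 60
filter.color = .white
// Hypothetical key for the iOS 26 smoothness parameter mentioned above
filter.setValue(0.6, forKey: "inputSmoothness")
let output = filter.outputImage  // compare against a CALayer with .continuous corners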
Can I use the IOKit USB library to disable the built-in camera?
Hi,
I'm developing a SwiftUI app using RealityKit and ARKit for an AR measuring feature. I’ve noticed that after navigating away from my AR view and performing extensive cleanup (including removing all anchors/entities, pausing the ARSession, and nil-ing out all references), memory usage remains elevated and sometimes grows with repeated AR sessions.
Each time I enter and exit the AR view, memory increases.
The memory does not return to baseline after cleanup, even though all custom objects are deallocated.
Are there best practices beyond what I’ve described to ensure all ARKit/RealityKit resources are released after an AR session?
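For reference, a teardown of the kind described above might look like this; a sketch assuming a RealityKit ARView hosted in UIKit (names are illustrative):

import ARKit
import RealityKit

func tearDownARView(_ arView: ARView) {
    // Stop the session so ARKit releases camera and tracking resources
    arView.session.pause()
    // Drop all scene content before releasing the view
    arView.scene.anchors.removeAll()
    // Detach the view so RealityKit can release its rendering resources
    arView.removeFromSuperview()
}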
I have a wallpaper app and I need Photos access. When I try to add it in Capabilities, I can't find it. Is there any solution?
Hi! I am making an app for Apple Vision Pro (visionOS 2.5) that scans the surroundings and recognizes all the text around you. I tried to use AVCaptureSession, but when I run the app from Xcode on a real AVP device, the camera is not accessible. I enabled camera access in my Info.plist (NSCameraUsageDescription: "Used for live text recognition") and checked the camera settings on the AVP; there are no restrictions. However, I always get a black square with a crossed-out camera icon instead of the image from the camera.
I tried a couple of different apps from GitHub that use AVCaptureSession, and they all display the black square instead of the picture.
What can be wrong with the camera?
Does the library exist in Xcode 16.4?
"import WorldCaptureKit" gives the error "No such module 'WorldCaptureKit'".
I cannot find any information about the library in the Apple documentation,
but AI keeps suggesting that I use it.
Hi,
I’m building a PPG-based heart rate feature where the user places their finger over the rear telephoto camera. On iPhone 16 Pro Max, I'm explicitly selecting the telephoto lens like this:
videoDevice = AVCaptureDevice.default(.builtInTelephotoCamera, for: .video, position: .back)
And I try to lock it:
if #available(iOS 15.0, *),
device.activePrimaryConstituentDeviceSwitchingBehavior != .unsupported {
try? device.lockForConfiguration()
device.setPrimaryConstituentDeviceSwitchingBehavior(.locked, restrictedSwitchingBehaviorConditions: [])
device.unlockForConfiguration()
}
I also lock everything else to prevent dynamic changes:
try device.lockForConfiguration()
device.focusMode = .locked
device.exposureMode = .locked
device.whiteBalanceMode = .locked
device.videoZoomFactor = 1.0
device.automaticallyEnablesLowLightBoostWhenAvailable = false
device.automaticallyAdjustsVideoHDREnabled = false
device.unlockForConfiguration()
Despite this, the camera still switches to another lens, especially under different lighting, even though the user’s finger fully covers the lens.
Questions:
How can I completely prevent lens switching in this scenario?
Would using videoZoomFactor = 3.0 or 5.0 better enforce use of the telephoto lens?
Thanks!
Gal
I'm developing a video capture app using AVFoundation, designed specifically for use on a boat pylon to record slalom water skiing. This setup involves considerable vibration.
As you may know, the OIS that Apple began adding to lenses with the iPhone 7 is actually very problematic in high-vibration circumstances, ironically creating very shaky video, whereas lenses without OIS produce perfectly stable video. Because of this, up until the iPhone 14, the solution for my app was simply to use the selfie lens, which did not have OIS.
Starting with iPhone 14 through iPhone 16 (non-Pro models), technical specs suggest the selfie lens still does not include OIS. However, I’m still seeing the same kind of shaky video behavior I see on OIS-equipped lenses. The one hardware change I see in this camera module is the addition of PDAF (Phase Detection Autofocus), so that is my best guess as to what is causing the unstable video.
1- Does that make any sense - that in high vibration settings, PDAF could create unstable video in the same way that OIS does? Or could it be something else that was changed between the iPhone 13 and 14 Selfie lens?
Thinking that the issue was PDAF, I figured that if I enabled my app to set a Manual Focus level, that ought to circumvent PDAF (expecting that if a lens is manually focusing, it can’t also be autofocusing via PDAF).
However, even with manual focus locked via AVCaptureDevice in my app, on the Selfie lens of an iPhone 16, the video still comes out very shaky, basically unusable. I also tested with the built-in Apple Camera app (using the press-and-hold to lock focus and exposure) and another 3rd party camera app to lock focus, all with the same results, so it's not that my app just isn't correctly doing manual focus.
So I'm stuck with these questions:
2- Does the selfie camera on iPhones 14–16 use PDAF even when focus is set to locked/manual mode?
3- Is there any way in AVFoundation to disable or suppress PDAF during video recording (e.g., a flag, device format setting, or private API)?
4- Is PDAF behavior or suppression documented or controllable via AVCaptureDevice or any related class?
5- If no control of PDAF is available, are there any best practices for stabilizing or smoothing this effect programmatically?
Note that I also have set my app to use the most aggressive form of stabilization available, so it defaults to .cinematicExtendedEnhanced, if that’s not available, then .cinematicExtended, etc. On the 16 Selfie lens, it is using .cinematicExtended. As an additional question:
6- Would those be the most appropriate stabilization settings for a high vibration environment, and if not, what would be best?
Xcode Version 16.3 (16E140)
App developed in Flutter 3.29.3
Test iPhone device: iPhone 16 Pro running iOS 18.5
I have an app that requires camera access. This used to work with iOS 18.4.x. I have stripped my app down to just requesting camera permission; even then it fails:
flutter: Camera permission: PermissionStatus.denied
flutter: Photos permission: PermissionStatus.denied
flutter: Microphone permission: PermissionStatus.denied
flutter: --- End Debug Info ---
flutter: Loaded translations from asset for en_US
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
container_create_or_lookup_app_group_path_by_app_group_identifier: client is not entitled
flutter: CAMERA PERMISSION STATUS: PermissionStatus.permanentlyDenied
Camera permissions don't show up in my app's settings or under Settings > Privacy & Security > Camera, and I am at a loss to understand why this is happening.
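One way to isolate whether this is a plugin problem or an iOS-level denial is to query and request access natively; a minimal Swift sketch (it assumes NSCameraUsageDescription is present in the Info.plist, and note the Settings toggle typically only appears after the app has actually requested access once):

import AVFoundation

func checkCameraAccess() {
    // Current status without prompting the user
    let status = AVCaptureDevice.authorizationStatus(for: .video)
    print("Native camera authorization status: \(status.rawValue)")

    // Prompts only if the status is .notDetermined
    AVCaptureDevice.requestAccess(for: .video) { granted in
        print("Camera access granted: \(granted)")
    }
}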
ImageIO encoding to HEICS fails in macOS 15.5.
Log:
writeImageAtIndex:1246: *** CMPhotoCompressionSessionAddImageToSequence: err = kCMPhotoError_UnsupportedOperation [-16994] (codec: 'hvc1')
It seems to be related to:
https://github.com/SDWebImage/SDWebImage/issues/3732
Affected versions: iOS 18.4 (simulator and device), macOS 15.5
Unaffected versions: iOS 18.3 (simulator and device), macOS 15.3
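For reproduction, an image-sequence encode along these lines should hit the error; a sketch assuming the "public.heics" type identifier that SDWebImage uses for HEIC sequences:

import Foundation
import ImageIO
import CoreGraphics

func encodeHEICS(frames: [CGImage], to url: URL) -> Bool {
    guard let dest = CGImageDestinationCreateWithURL(
        url as CFURL, "public.heics" as CFString, frames.count, nil) else { return false }
    for frame in frames {
        // The kCMPhotoError_UnsupportedOperation above is reported from this call path
        CGImageDestinationAddImage(dest, frame, nil)
    }
    return CGImageDestinationFinalize(dest)
}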
In the latest production release of our iOS app (deployed via the App Store), we've observed a significant increase in AVCaptureSessionWasInterrupted notifications where the interruption reason has a rawValue of 4. The session does not automatically recover, even after returning from the background or deleting and reinstalling the app. An employee ran into this and was able to capture a recording. We see the below error when attempting to take photos:
"Error Domain=AVFoundationErrorDomain Code=-11803 \"Cannot Record\" UserInfo={AVErrorRecordingFailureDomainKey=3, NSLocalizedDescription=Cannot Record, NSLocalizedRecoverySuggestion=Try recording again.}",
This interruption causes the camera preview to remain black, and any attempt to capture an image fails with the error above.
Some questions from our team:
What common system conditions or foreground app behaviors can cause .videoDeviceNotAvailableWithMultipleForegroundApps (reason 4) to become persistent? Our team is under the impression that interruption reason 4 is mostly associated with iPad and PiP, but neither of those is true in the logs we see.
Is manual recovery of the session required?
Is there a recommended strategy to detect that the session is unrecoverable and gracefully notify the user or rebuild the session?
Are there any instruments in Xcode you would recommend for evaluating the increase in reason 4?
Best,
Ben
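For what it's worth, the detection side can be wired up as below; a minimal sketch assuming a standard AVCaptureSession, decoding the reason rawValue 4 into the enum case named above:

import AVFoundation

func observeInterruptions(for session: AVCaptureSession) {
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.wasInterruptedNotification,
        object: session, queue: .main) { note in
        if let value = note.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
            // rawValue 4 == .videoDeviceNotAvailableWithMultipleForegroundApps
            print("Session interrupted, reason: \(reason)")
        }
    }
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.interruptionEndedNotification,
        object: session, queue: .main) { _ in
        print("Interruption ended; preview should resume")
    }
}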
My app is a camera app that supports Picture-in-Picture (PiP) mode.
Normally, when the device rotates, I get the device orientation from iOS and use it to rotate the camera feed so that the preview stays correctly aligned.
However, when the app enters PiP mode, it is considered to be in the background, and I can no longer receive orientation updates from the system.
As a result, I can’t apply rotation corrections to the camera video in PiP mode.
Is there any way to retrieve device orientation while the app is in the background (specifically during PiP mode)?
Any guidance would be greatly appreciated.
Thank you!
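One possible workaround is to infer orientation from gravity via CoreMotion rather than UIKit orientation notifications; a sketch, with the caveat that accelerometer delivery while PiP backgrounds the app is an assumption that needs verifying:

import CoreMotion

let motionManager = CMMotionManager()

func startOrientationUpdates(handler: @escaping (_ rotationAngle: Double) -> Void) {
    guard motionManager.isAccelerometerAvailable else { return }
    motionManager.accelerometerUpdateInterval = 0.2
    motionManager.startAccelerometerUpdates(to: .main) { data, _ in
        guard let g = data?.acceleration else { return }
        // The gravity vector in the device's XY plane tracks physical orientation
        handler(atan2(g.x, g.y))
    }
}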
Hi guys,
Can I use CMIO to achieve the following feature on macOS when a USB device (Camera/Mic/Speaker) is connected:
When a third-party video conferencing app is not in a meeting, ensure the app defaults to using the USB device (Camera/Mic/Speaker).
When a third-party conferencing app is in a meeting, ensure the app automatically switches to the USB device (Camera/Mic/Speaker).
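A partial sketch for the audio half: I'm not aware of a public API that forces another app's camera choice, but the system default audio input can be set with CoreAudio, which helps when the conferencing app follows the system default:

import CoreAudio

func setDefaultInputDevice(_ deviceID: AudioDeviceID) -> OSStatus {
    var address = AudioObjectPropertyAddress(
        mSelector: kAudioHardwarePropertyDefaultInputDevice,
        mScope: kAudioObjectPropertyScopeGlobal,
        mElement: kAudioObjectPropertyElementMain)
    var device = deviceID
    // For speakers, use kAudioHardwarePropertyDefaultOutputDevice instead
    return AudioObjectSetPropertyData(
        AudioObjectID(kAudioObjectSystemObject), &address,
        0, nil, UInt32(MemoryLayout<AudioDeviceID>.size), &device)
}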
Hi guys,
How to achieve the following feature on macOS when a USB device (Camera/Mic/Speaker) is connected:
When a third-party video conferencing app is not in a meeting, ensure the app defaults to using the USB device (Camera/Mic/Speaker).
When a third-party conferencing app is in a meeting, ensure the app automatically switches to the USB device (Camera/Mic/Speaker).
I want to use an IOKit extension to hide or ignore the built-in camera to implement this requirement.
However, the extension can't be loaded due to invalid permissions on macOS 15.4.1 (BuildVersion 24E263). I also tried running it on macOS 14.4.1, where it loads, but it doesn't load automatically after restarting the laptop because the KDK version doesn't match.
Could you please give me some suggestions? Is it possible to hide the built-in camera on M-series (Apple silicon) Macs? Is there any other way to implement this feature? Thanks a lot.
In our Apple Vision Pro project, a picker pops up and we only want to filter for spatial videos. When the user selects one of the spatial videos, the selection result returns empty. How can we obtain the video the user selected and get the file's path and URL?
The code is as follows:
PhotosPicker(selection: $selectedItem, matching: .videos) {
    Text("Choose a spatial photo or video")
}

func loadTransferable(from imageSelection: PhotosPickerItem) -> Progress {
    return imageSelection.loadTransferable(type: URL.self) { result in
        DispatchQueue.main.async {
            // guard imageSelection == self.imageSelection else { return }
            print("Loaded selection result: \(result)")
            switch result {
            case .success(let url?):
                self.selectSpatialVideoURL = url
                print("Got video URL: \(url)")
            case .success(nil):
                // Handle the success case with an empty value.
                break
            case .failure(let error):
                // Handle the failure case with the provided error.
                print("Spatial video error: \(error)")
            }
        }
    }
}
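If the picker should only offer spatial items, recent SDKs expose a dedicated filter; a sketch, assuming the .spatialMedia PHPickerFilter that newer visionOS/iOS SDKs provide:

import SwiftUI
import PhotosUI

struct SpatialVideoPicker: View {
    @State private var selectedItem: PhotosPickerItem?

    var body: some View {
        // Combine filters so only spatial videos are offered
        PhotosPicker(selection: $selectedItem,
                     matching: .all(of: [.videos, .spatialMedia])) {
            Text("Choose a spatial video")
        }
    }
}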