
ReplayKit


Record or stream video from the screen and audio from the app and microphone using ReplayKit.

Posts under ReplayKit tag (25 posts)

Assistance with Capturing WebView Snapshot in Broadcast Upload Extension Without Adding to View Hierarchy
Hello, I am currently developing a game streaming application using ReplayKit and a Broadcast Upload Extension. I would like to ask for your assistance with capturing snapshots of a WKWebView in the upload extension without adding it to a visible view hierarchy.

From my understanding, calling takeSnapshot(with:) on a WKWebView that is not added to the view hierarchy generally works for simple web pages. However, for more complex web content, such as animations or WebGL, the snapshot returns a blank or static image. I believe this is because rendering such content requires access to the GPU, which is not fully available when the web view is off-screen.

That said, I have observed that certain apps are able to capture live animated web content inside their broadcast upload extensions, even when the main app is terminated. This suggests that the snapshot is not being generated by the main app or by a remote server, especially since network activity confirms the content is served locally (via localhost or a local IP).

Given this, I believe there must be a way to achieve GPU-accelerated rendering for a WKWebView directly within the upload extension context, without attaching it to the app's UI. I would greatly appreciate any guidance, APIs, or recommended techniques that could help me achieve this behavior correctly and within system limitations. Thank you in advance for your support.
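For reference, a minimal Swift sketch of the off-screen snapshot approach described above (the class name and fixed frame size are illustrative; as the poster notes, WebGL or animated content may still render blank when the view is not in a window):

    import UIKit
    import WebKit

    // Minimal sketch: snapshot a WKWebView that is never added to a window.
    final class OffscreenSnapshotter {
        private let webView: WKWebView

        init(url: URL) {
            // Give the view a concrete frame even though it is never shown;
            // a zero-sized frame typically produces an empty snapshot.
            webView = WKWebView(frame: CGRect(x: 0, y: 0, width: 1280, height: 720))
            webView.load(URLRequest(url: url))
        }

        func snapshot(completion: @escaping (UIImage?) -> Void) {
            let config = WKSnapshotConfiguration()
            config.rect = webView.bounds
            webView.takeSnapshot(with: config) { image, error in
                completion(image) // image is nil on failure
            }
        }
    }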
Replies: 0 · Boosts: 0 · Views: 33 · Activity: 2w
How to prevent the main app from being terminated by the system during long-term system-level recording
After logging in to the main app, we turn on screen recording and then switch to another app's interface to perform operations. After roughly ten minutes, on returning to the main app, we found that it had been force-quit by the system, and subsequent operations could not be carried out.
Replies: 1 · Boosts: 0 · Views: 34 · Activity: 2w
App crash in _CFRelease.cold.1
In my app, I implemented screen recording functionality, but there was an unexpected crash:

    0  CoreFoundation           _CFRelease.cold.1 + 16
    1  CoreFoundation           ___CFTypeCollectionRelease
    2  ReplayKit                ___56-[RPScreenRecorder captureHandlerWithSample:timingData:]_block_invoke + 148
    3  libdispatch.dylib        __dispatch_call_block_and_release + 32
    4  libdispatch.dylib        __dispatch_client_callout + 16
    5  libdispatch.dylib        __dispatch_lane_serial_drain + 740
    6  libdispatch.dylib        __dispatch_lane_invoke + 388
    7  libdispatch.dylib        __dispatch_root_queue_drain_deferred_wlh + 292
    8  libdispatch.dylib        __dispatch_workloop_worker_thread + 540
    9  libsystem_pthread.dylib  __pthread_wqthread + 292
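Frame 2 points into ReplayKit's own capture-handler block, which releases the sample buffer after the handler returns. A common cause of this kind of over-release crash, offered here as an assumption rather than a confirmed diagnosis, is client code that releases a CMSampleBuffer it never retained, or uses it asynchronously without keeping it alive. A minimal Swift sketch of the safe pattern:

    import ReplayKit
    import CoreMedia

    // Hedged sketch: keep the sample buffer alive across any queue hop. In
    // Swift, capturing the CMSampleBuffer in the closure already retains it
    // under ARC; in Objective-C the equivalent is CFRetain before the async
    // dispatch and CFRelease after use.
    func startCapture(writerQueue: DispatchQueue) {
        RPScreenRecorder.shared().startCapture(handler: { sampleBuffer, bufferType, error in
            guard error == nil else { return }
            writerQueue.async {
                // append sampleBuffer to an AVAssetWriterInput here;
                // the closure capture keeps it valid until this block runs
                _ = (sampleBuffer, bufferType)
            }
        }, completionHandler: { error in
            if let error = error { print("startCapture failed: \(error)") }
        })
    }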
Replies: 2 · Boosts: 0 · Views: 62 · Activity: May ’25
Screen recording audio and video out of sync
I use startCaptureWithHandler to record the screen and AVAssetWriter appendSampleBuffer: to save audio and video, but when the saved file is played, the audio and video are out of sync. I don't know if it's an AVAssetWriterInput setup problem. Here is my code:

    NSDictionary *audioCompressionSettings = @{
        AVEncoderBitRatePerChannelKey : @(64000),
        AVFormatIDKey : @(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey : @(2),
        AVSampleRateKey : @(44100)
    };
    AVAssetWriterInput *audioAssetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioCompressionSettings];
    audioAssetWriterInput.expectsMediaDataInRealTime = YES;
    [_assetWriter addInput:audioAssetWriterInput];

    NSDictionary *videoCompressSetting = @{
        AVVideoAverageBitRateKey : @(screenWidth * screenHeight * 5),
        AVVideoMaxKeyFrameIntervalKey : @(30),
        AVVideoProfileLevelKey : AVVideoProfileLevelH264MainAutoLevel
    };
    NSDictionary *codecSetting = @{
        AVVideoCodecKey : AVVideoCodecTypeH264,
        AVVideoScalingModeKey : AVVideoScalingModeResize,
        AVVideoWidthKey : @(screenWidth * 2),
        AVVideoHeightKey : @(screenHeight * 2),
        AVVideoCompressionPropertiesKey : videoCompressSetting
    };
    AVAssetWriterInput *videoAssetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:codecSetting];
    videoAssetWriterInput.expectsMediaDataInRealTime = YES;
    [_assetWriter addInput:videoAssetWriterInput];
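One frequent cause of this symptom, offered as an assumption rather than a confirmed diagnosis, is starting the writer session at a time unrelated to the buffers' presentation timestamps. A minimal Swift sketch that anchors the session to the first buffer actually received, so both tracks share the same clock:

    import AVFoundation
    import CoreMedia

    // Hedged sketch: start the session at the first buffer's PTS and append
    // both tracks with their original timestamps to keep them aligned.
    final class SessionStarter {
        private var sessionStarted = false

        func append(_ sampleBuffer: CMSampleBuffer,
                    to input: AVAssetWriterInput,
                    writer: AVAssetWriter) {
            if writer.status == .unknown {
                writer.startWriting()
            }
            if !sessionStarted {
                // Anchor the session clock to the first received buffer.
                let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                writer.startSession(atSourceTime: pts)
                sessionStarted = true
            }
            if writer.status == .writing, input.isReadyForMoreMediaData {
                input.append(sampleBuffer)
            }
        }
    }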
Replies: 0 · Boosts: 0 · Views: 43 · Activity: Apr ’25
ReplayKit: stopping screen recording fails, recording status is false
I want to record the screen, and then call stopCaptureWithHandler:(nullable void (^)(NSError *_Nullable error))handler to stop recording and save the file. Before calling it, I check the recording value of [RPScreenRecorder sharedRecorder], and the value is false. It's weird: the screen is currently being recorded! I wonder if the value of [RPScreenRecorder sharedRecorder].recording affects the stopCaptureWithHandler: method. Here is my code:

    - (void)startCaptureScreen {
        [[RPScreenRecorder sharedRecorder] startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
            // code
        } completionHandler:^(NSError * _Nullable error) {
            // code
        }];
    }

    - (void)stopRecordingHandler {
        if ([[RPScreenRecorder sharedRecorder] isRecording]) {
            // deal with error; sometimes isRecording is false here
        } else {
            [[RPScreenRecorder sharedRecorder] stopCaptureWithHandler:^(NSError * _Nullable error) {
            }];
        }
    }
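Note that the control flow above looks inverted: stopCaptureWithHandler: only runs when isRecording is already false. A minimal Swift sketch of an alternative, assuming the intent is simply to stop any active capture, is to call stopCapture unconditionally and inspect the error instead of gating on isRecording:

    import ReplayKit

    // Hedged sketch: let the error tell you whether a capture was active,
    // rather than trusting isRecording, which the poster observes can read
    // false during an active capture.
    func stopRecording() {
        RPScreenRecorder.shared().stopCapture { error in
            if let error = error {
                // e.g. no capture in progress, or the capture was interrupted
                print("stopCapture failed: \(error.localizedDescription)")
            } else {
                // capture stopped; finish the asset writer here
            }
        }
    }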
Replies: 0 · Boosts: 0 · Views: 33 · Activity: Apr ’25
ReplayKit: screen recording handler returns a nil sampleBuffer
I want to record the screen in my app using startCaptureWithHandler:completionHandler:. The sampleBuffer is supposed to exist, but it has become nil. Not only that, there's another problem: when I want to stop recording and save the video, I first check [RPScreenRecorder sharedRecorder].recording, and it is sometimes false. These problems are unusual and unexpected on iOS 18.3.2, iPhone XS Max. Here is my code:

    -(void)startCaptureScreen {
        NSLog(@"AKA++ startCaptureScreen");
        if ([[RPScreenRecorder sharedRecorder] isRecording]) {
            return;
        }
        // screen recording
        [[RPScreenRecorder sharedRecorder] setMicrophoneEnabled:YES];
        NSLog(@"AKA++ MicrophoneEnabled AAAA startCaptureScreen");
        [[RPScreenRecorder sharedRecorder] setCameraEnabled:YES];
        [[RPScreenRecorder sharedRecorder] startCaptureWithHandler:^(CMSampleBufferRef _Nonnull sampleBuffer, RPSampleBufferType bufferType, NSError * _Nullable error) {
            if (self.assetWriter == nil) {
                if (self.AVAssetWriterStatus == 0) {
                    [self setupAssetWriterAndStartWith:sampleBuffer];
                }
            }
            if (self.AVAssetWriterStatus != 2) {
                return;
            }
            if (error) {
                // deal with error
                return;
            }
            if (self.assetWriter.status != AVAssetWriterStatusWriting) {
                [self assetWriterAppendSampleBufferFailWith:bufferType];
                return;
            }
            if (bufferType == RPSampleBufferTypeVideo) {
                if (self.assetWriter.status == 0 || self.assetWriter.status > 2) {
                } else if (self.videoAssetWriterInput.readyForMoreMediaData == YES) {
                    BOOL success = [self.videoAssetWriterInput appendSampleBuffer:sampleBuffer];
                }
            }
            if (bufferType == RPSampleBufferTypeAudioMic) {
                if (self.assetWriter.status == 0 || self.assetWriter.status > 2) {
                } else if (self.audioAssetWriterInput.readyForMoreMediaData == YES) {
                    BOOL success = [self.audioAssetWriterInput appendSampleBuffer:sampleBuffer];
                }
            }
        } completionHandler:^(NSError * _Nullable error) {
            // deal with error
        }];
    }

And then, when I want to save it:

    -(void)stopRecording {
        if ([[RPScreenRecorder sharedRecorder] isRecording]) {
            // The problem is sporadic: the recording action failed, which confuses me
        }
        [[RPScreenRecorder sharedRecorder] stopCaptureWithHandler:^(NSError * _Nullable error) {
            if (!error) {
                // post message
            }
        }];
    }
Replies: 0 · Boosts: 0 · Views: 27 · Activity: Apr ’25
ReplayKit start and stop capture breaks and gives me an error when switching from Immersive to Mixed and back
Hi, I'm developing a virtual camera system using ReplayKit to capture scene video by directly accessing raw video buffers. The capture mechanism works flawlessly when repeatedly starting and stopping video capture within a continuous immersive environment. However, a critical issue arises when interrupting the immersive space:

Step 1: Enter the immersive environment, then start and stop video capture (multiple times, with no issues).
Step 2: Press the crown button to exit the immersive environment.
Step 3: Return to the immersive space.
Step 4: Attempt to start video capture.

At this point, the startCapture method throws an unexpected error, disrupting the video capture workflow. This is the Xcode error that I see:

    [ERROR] -[RPScreenRecorder startCaptureWithHandler:completionHandler:]_block_invoke_2:500 failed to start due to error: Error Domain=com.apple.ReplayKit.RPRecordingErrorDomain Code=-5803 "Recording failed to start" UserInfo={NSLocalizedDescription=Recording failed to start}

I have tried every way I can find to stop capture, including onDisappear and other methods, and nothing seems to solve this.
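A workaround worth trying, purely as a guess at the lifecycle problem rather than a confirmed fix for -5803, is to stop capture as soon as the scene leaves the active phase (for example, when the crown is pressed), so no stale session survives the exit from the immersive space:

    import SwiftUI
    import ReplayKit

    // Hedged sketch: tear down the capture on any scene-phase change away
    // from .active, before the immersive space is dismissed.
    struct ImmersiveCaptureView: View {
        @Environment(\.scenePhase) private var scenePhase

        var body: some View {
            Text("Capturing")
                .onChange(of: scenePhase) { _, newPhase in
                    if newPhase != .active, RPScreenRecorder.shared().isRecording {
                        RPScreenRecorder.shared().stopCapture { error in
                            if let error = error { print("stopCapture: \(error)") }
                        }
                    }
                }
        }
    }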
Replies: 3 · Boosts: 0 · Views: 228 · Activity: Mar ’25
Unable to find broadcast extension
Issue Summary: In our Flutter application, we use Tencent's TRTC API for voice and video communication. While the broadcast functionality operates correctly on Android, it fails to respond on iOS devices. Attempting to initiate a broadcast results in no action, and long-pressing the broadcast button does not reveal the broadcast extension.

Steps to Reproduce:
1. Add the Broadcast Upload Extension: in Xcode, navigate to File > New > Target, select Broadcast Upload Extension, and add it to the project.
2. Build the project: the build fails with the error "Cycle inside Runner; building could produce unreliable results."
3. Resolve the build cycle error: in the project's Build Phases, locate the Embed App Extensions phase, move it just below Copy Bundle Resources, and enable the "Copy only when installing" option. Rebuild the project; the cycle error is resolved.
4. Test broadcast functionality: install the app on an iOS device and tap the broadcast button; observe no response. Long-press the broadcast button in the Control Center pull-down menu; the broadcast extension is not listed.
5. Isolate the issue: create a new Flutter project and repeat the above steps. The issue persists: broadcast functionality remains unresponsive on iOS.
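One thing worth checking, offered as an assumption based on step 3 above: "Copy only when installing" keeps the extension out of ordinary debug builds, so the system cannot list it; try disabling that option for development builds. For reference, a minimal broadcast upload extension handler; the extension's Info.plist must declare NSExtensionPointIdentifier com.apple.broadcast-services-upload for the system to show it:

    import ReplayKit
    import CoreMedia

    // Minimal Broadcast Upload Extension handler sketch.
    class SampleHandler: RPBroadcastSampleHandler {
        override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
            // Broadcast began; set up the uploader (e.g. hand off to TRTC).
        }

        override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                          with sampleBufferType: RPSampleBufferType) {
            switch sampleBufferType {
            case .video:
                break // forward video frames
            case .audioApp, .audioMic:
                break // forward audio
            @unknown default:
                break
            }
        }

        override func broadcastFinished() {
            // Clean up.
        }
    }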
Replies: 1 · Boosts: 0 · Views: 486 · Activity: Feb ’25
Screen sharing application - URGENT question
There are different kinds of screen-sharing applications, all using different APIs: the API used by AnyDesk, for example, or TeamViewer, which doesn't show light signals (the on-screen recording indicator). I wonder whether this is more appropriate for a corporate application, i.e. under MDM. Could a screen-sharing application be created and validated by Apple that displays no light signals, and that could access the user's screen whenever the other person wanted, after an initial acceptance? In other words, the user accepts sharing his screen once, but won't be prompted to accept the next time. Or is this impossible on iOS? I'd be honored to have some answers.
Replies: 3 · Boosts: 0 · Views: 443 · Activity: Feb ’25
How to Implement Screen Mirroring in iOS for Google TV?
I am developing an iOS application that supports screen mirroring to Google TV (or Chromecast with Google TV). My goal is to mirror the iPhone/iPad screen in real time to a Google TV device.

What I have tried so far: I have explored multiple approaches but haven't found a direct way to achieve low-latency screen mirroring.

Google Cast SDK: primarily designed for casting media (videos, images, audio) rather than real-time mirroring. It supports custom receiver applications, but there are no direct APIs for full screen mirroring. Casting a recorded video is possible, but it introduces latency and is not real time.

ReplayKit screen capture: RPScreenRecorder.shared().startCapture(handler: ...) allows capturing the iPhone screen as a video stream. However, sending this stream to Google TV in real time is a challenge. I could potentially encode the video as HLS and stream it, but the delay is significant.

RTSP/UDP streaming: some third-party libraries support RTSP/UDP streaming for real-time screen sharing, but Google TV does not natively support RTSP, making this approach difficult.

My questions:
Is it possible to achieve real-time screen mirroring on Google TV using the Google Cast SDK?
Does Google TV support WebRTC or any low-latency streaming protocol usable from iOS?
Are there any alternative approaches to mirror an iOS screen to Google TV with minimal latency?

I would appreciate any guidance, code examples, or references to relevant documentation.
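For the ReplayKit leg of such a pipeline, a minimal Swift sketch is below; the FrameSender protocol is hypothetical, standing in for whichever transport (WebRTC, RTP, or similar) the receiver side supports, since ReplayKit itself only supplies the frames:

    import ReplayKit
    import CoreMedia

    // Hypothetical transport abstraction; not a real API.
    protocol FrameSender {
        func send(_ sampleBuffer: CMSampleBuffer, type: RPSampleBufferType)
    }

    func startMirroring(to sender: FrameSender) {
        RPScreenRecorder.shared().startCapture(handler: { sampleBuffer, bufferType, error in
            guard error == nil else { return }
            // Forward each frame to the transport; real-time protocols such
            // as WebRTC avoid HLS's segment-buffering latency.
            sender.send(sampleBuffer, type: bufferType)
        }, completionHandler: { error in
            if let error = error { print("capture failed: \(error)") }
        })
    }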
Replies: 0 · Boosts: 1 · Views: 405 · Activity: Feb ’25
Intercepting incoming video and/or audio for stopping scams
I'm trying to make an app that is able to quietly run in the background. It needs to detect other apps' or the system's incoming video and/or audio, using only on-device resources, to determine whether it might be a scam caller. It will tap into an escalating cascade of resources to do so. For video/image scam detection, it uses OpenCV to detect faces, then refers to a known database of reported scam imagery. For audio scam calls, we defer to known techniques of voice modulation in frequency and/or amplitude. Each video and/or audio result will be relayed via a notification banner as well as recorded in-app. Crucially, if the results are uncertain, users have the option to submit them to a global collaborative cloud database for investigative teams: 60-second audio snippets, or series of images where faces were detected (the 60-second equivalent). In the end, we expect to deploy this app across most parts of Asia and Africa, thereby protecting generations of iPhone and iPad users. However, we have not been able to find a method that does this, and no known correspondence has been able to provide such technical guidance. Please assist.
Replies: 0 · Boosts: 0 · Views: 409 · Activity: Nov ’24
Overlapping Video Frames in RPBroadcastSampleHandler with ReplayKit
I am recording video on iOS using ReplayKit and found that after copying data in the processSampleBuffer:withType: callback using memcpy, the data changes. This occurs particularly frequently when the screen content changes rapidly, making it look like the frames are overlapping. I found that the values starting from byte 672 in the video data on my device often change. Here is the test demo:

    - (void)processSampleBuffer:(CMSampleBufferRef)sampleBuffer withType:(RPSampleBufferType)sampleBufferType {
        switch (sampleBufferType) {
            case RPSampleBufferTypeVideo: {
                CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
                int ret = 0;
                uint8_t *oYData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
                size_t oYSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
                uint8_t *oUVData = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
                size_t oUVSize = CVPixelBufferGetHeightOfPlane(pixelBuffer, 1) * CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
                if (oYSize <= 672) {
                    return;
                }
                uint8_t tempValue = oYData[672];
                uint8_t *tYData = malloc(oYSize);
                memcpy(tYData, oYData, oYSize);
                if (tYData[672] != oYData[672]) {
                    NSLog(@"$$$$$$$$$$$$$$$$------ t:%d o:%d temp:%d", tYData[672], oYData[672], tempValue);
                }
                free(tYData);
                CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
                break;
            }
            default: {
                break;
            }
        }
    }

Output:

    $$$$$$$$$$$$$$$$------ t:110 o:124 temp:110
    $$$$$$$$$$$$$$$$------ t:111 o:133 temp:111
    $$$$$$$$$$$$$$$$------ t:124 o:138 temp:124
    $$$$$$$$$$$$$$$$------ t:133 o:144 temp:133
    $$$$$$$$$$$$$$$$------ t:138 o:151 temp:138
    $$$$$$$$$$$$$$$$------ t:144 o:156 temp:144
    $$$$$$$$$$$$$$$$------ t:151 o:135 temp:151
    $$$$$$$$$$$$$$$$------ t:156 o:78 temp:156
    $$$$$$$$$$$$$$$$------ t:135 o:76 temp:135
    $$$$$$$$$$$$$$$$------ t:78 o:77 temp:78
    $$$$$$$$$$$$$$$$------ t:76 o:80 temp:76
    $$$$$$$$$$$$$$$$------ t:77 o:80 temp:77
    $$$$$$$$$$$$$$$$------ t:80 o:79 temp:80
    $$$$$$$$$$$$$$$$------ t:79 o:80 temp:79
Replies: 0 · Boosts: 0 · Views: 477 · Activity: Oct ’24
Screen Capture ReplayKit: How do I know if the user has consented to the Screen Capture alert?
I understand that there are no delegate methods for this, and that a positive consent from the user to record the screen has to be evaluated via the block parameter. When the user denies the permission by mistake and then tries again, the alert does not show up. How do I reset the permission and make the alert appear again?
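There is no query API for the consent state; it can only be observed through startCapture's handlers. A minimal sketch is below. The -5801 code is the commonly reported user-declined value in the com.apple.ReplayKit.RPRecordingErrorDomain seen elsewhere on this page, but that mapping is an observation rather than documented behavior, and in practice a denial appears to time out on its own rather than persist like a Settings permission:

    import ReplayKit

    // Hedged sketch: treat any startCapture error as "no consent" and log
    // the code; receiving buffers in the handler implies the user consented.
    func startCaptureCheckingConsent() {
        RPScreenRecorder.shared().startCapture(handler: { _, _, _ in
            // Buffers arriving here means the user accepted the alert.
        }, completionHandler: { error in
            if let nsError = error as NSError?,
               nsError.domain == "com.apple.ReplayKit.RPRecordingErrorDomain" {
                print("capture did not start, code \(nsError.code)") // -5801 is commonly the user-declined code
            }
        })
    }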
Replies: 0 · Boosts: 0 · Views: 441 · Activity: Oct ’24
Capturing External Object Images via Vision Pro Passthrough Camera with Enterprise APIs
We are currently working with the Enterprise APIs for visionOS 2 and have successfully obtained the necessary entitlements for passthrough camera access. Our goal is to capture images of external real-world objects using the passthrough camera of the Vision Pro, not just take screenshots or screen captures.

Our specific use case involves:
1. Accessing the raw passthrough camera feed.
2. Capturing high-resolution images of objects in the real world through the camera.
3. Processing and saving these images for further analysis within our custom enterprise app.

We would greatly appreciate any guidance, tutorials, or sample code that could help us achieve this functionality. If there are specific APIs or best practices for handling real-world image capture via passthrough cameras with the Enterprise APIs, please let us know.
Replies: 0 · Boosts: 0 · Views: 571 · Activity: Oct ’24
RPScreenRecorder startCapture: issues with the generated file
Hello all, this is my first post on the developer forums. I am developing an app that records the screen of my app, using AVAssetWriter and RPScreenRecorder startCapture. Everything is working as it should in most cases, but at some seemingly random times the generated file is only a few KB and is corrupted. There seems to be no pattern in the device or the iOS version; it can happen on various phones and iOS versions. The steps I have followed to create the file are:

Configuring the AVAssetWriter:

    videoAssetWriter = try? AVAssetWriter(outputURL: url!, fileType: AVFileType.mp4)
    let size = UIScreen.main.bounds.size
    let width = (Int(size.width / 4)) * 4
    let height = (Int(size.height / 4)) * 4
    let videoOutputSettings: Dictionary<String, Any> = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    videoInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoOutputSettings)
    videoInput?.expectsMediaDataInRealTime = true
    guard let videoInput = videoInput else { return }
    if videoAssetWriter?.canAdd(videoInput) ?? false {
        videoAssetWriter?.add(videoInput)
    }
    let audioInputSettings = [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 12000,
        AVNumberOfChannelsKey: 1,
        AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
    ]
    audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioInputSettings)
    audioInput?.expectsMediaDataInRealTime = true
    guard let audioInput = audioInput else { return }
    if videoAssetWriter?.canAdd(audioInput) ?? false {
        videoAssetWriter?.add(audioInput)
    }

The urlForVideo() function returns a URL in the documents directory, after appending and creating the folders needed. This part seems to be working as it should, as the directories are created and the video file exists in them.

Starting the recording:

    if RPScreenRecorder.shared().isRecording { return }
    RPScreenRecorder.shared().startCapture(handler: { [weak self] sample, bufferType, error in
        if let error = error {
            onError?(error.localizedDescription)
        } else {
            if !RPScreenRecorder.shared().isMicrophoneEnabled {
                RPScreenRecorder.shared().stopCapture { error in
                    if let error = error { return }
                }
                onError?("Microphone was not enabled")
            } else {
                successCompletion?()
                successCompletion = nil
                self?.processSampleBuffer(sample, with: bufferType)
            }
        }
    }) { error in
        if let error = error {
            onError?(error.localizedDescription)
        }
    }

Processing the sample buffers:

    guard CMSampleBufferDataIsReady(sampleBuffer) else { return }
    DispatchQueue.main.async { [weak self] in
        switch sampleBufferType {
        case .video:
            self?.handleVideoBuffer(sampleBuffer)
        case .audioMic:
            self?.add(sample: sampleBuffer, to: self?.audioInput)
        default:
            break
        }
    }

    // The add function from above
    fileprivate func add(sample: CMSampleBuffer, to writerInput: AVAssetWriterInput?) {
        if writerInput?.isReadyForMoreMediaData ?? false {
            writerInput?.append(sample)
        }
    }

    // The handleVideoBuffer function from above
    fileprivate func handleVideoBuffer(_ sampleBuffer: CMSampleBuffer) {
        if self.videoAssetWriter?.status == AVAssetWriter.Status.unknown {
            self.videoAssetWriter?.startWriting()
            self.videoAssetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        } else {
            if (self.videoInput?.isReadyForMoreMediaData) ?? false {
                if self.videoAssetWriter?.status == AVAssetWriter.Status.writing {
                    self.videoInput?.append(sampleBuffer)
                }
            }
        }
    }

Finally, stopping the recording:

    func stopRecording(completion: @escaping (URL?, URL?, Error?) -> Void) {
        RPScreenRecorder.shared().stopCapture { error in
            if let error = error {
                completion(nil, nil, error)
                return
            }
            self.finish { videoURL, _ in
                completion(videoURL, nil, nil)
            }
        }
    }

    // The finish function mentioned above
    fileprivate func finish(completion: @escaping (URL?, URL?) -> Void) {
        let dispatchGroup = DispatchGroup()
        dispatchGroup.enter()
        finishRecordVideo {
            dispatchGroup.leave()
        }
        dispatchGroup.notify(queue: .main) {
            print("Finish with url:\(String(describing: self.urlForVideo()))")
            completion(self.urlForVideo(), nil)
        }
    }

    // The finishRecordVideo function mentioned above
    fileprivate func finishRecordVideo(completion: @escaping () -> Void) {
        videoInput?.markAsFinished()
        audioInput?.markAsFinished()
        videoAssetWriter?.finishWriting {
            if let writer = self.videoAssetWriter {
                if writer.status == .completed {
                    completion()
                } else if writer.status == .failed {
                    // Print the error to find out what went wrong
                    if let error = writer.error {
                        print("Video asset writing failed with error: \(error.localizedDescription). Url: \(writer.outputURL.path)")
                    } else {
                        print("Video asset writing failed, but no error description available.")
                    }
                    completion()
                } else {
                    completion()
                }
            }
        }
    }

What could be the reason for the corrupted files? This has never happened on my own devices, so there is no way to debug using Xcode, and there are no errors in the logs. Can you spot any issues in the code that could cause this kind of problem? Do you have any suggestions? Thanks
Replies: 0 · Boosts: 0 · Views: 553 · Activity: Sep ’24