
AVAudioSession


Use the AVAudioSession object to communicate to the system how you intend to use audio in your app.

Posts under AVAudioSession tag

92 Posts

Post · Replies · Boosts · Views · Activity

Wi-Fi Access Point Not Reconnecting While AVAudioSession Is Active
We've encountered a reproducible issue where the iPhone fails to reconnect to a Wi-Fi access point under the following conditions:
- The device is connected to a 2.4GHz Wi-Fi network.
- A Bluetooth audio accessory (e.g. a headset) is connected.
- AVAudioSession is active (such as during a voice call or when using the Voice Memos app).
- The user moves away from the access point, causing a disconnect.

Upon returning within range, the access point is no longer recognized or reconnected to while AVAudioSession remains active. However, if the Bluetooth device is disconnected or the AVAudioSession is deactivated, the Wi-Fi access point is immediately recognized again. We confirmed this behavior not only in our app but also with Apple's built-in Voice Memos app, suggesting it is not specific to our implementation. It appears that the Wi-Fi system deprioritizes reconnection while AVAudioSession is engaged. Could this be by design? Or is this a known issue or limitation in the interaction between Wi-Fi and AVAudioSession?

Test environment:
- Device: iPhone 13 mini
- iOS: 17.5.1
- Wi-Fi: 2.4GHz band
- Accessories: Bluetooth headset

We'd appreciate clarification on whether this is expected behavior or a bug. Thank you!
0
0
23
15h
Real-time audio application on locked device
I would like to inquire about the feasibility of developing an iOS application with the following requirements:
- The app must support real-time audio communication based on UDP.
- It needs to maintain a TCP signaling connection, even when the device is locked.
- The app will run only on selected devices within a controlled (closed) environment, such as company-managed iPads or iPhones.

Could you please clarify the following:
- Is it technically possible to maintain an active TCP connection when the device is locked?
- What are the current iOS restrictions or limitations for background execution, particularly related to networking and audio?
- Are there any recommended APIs or frameworks (such as VoIP, PushKit, or Background Modes) suitable for this type of application?
1
0
63
1w
Background communication of Apple Watch
I am currently developing an app for the Apple Watch. In RTPController.swift I handle the sending, receiving, and playback of audio; the specifics are as follows.

Overview of the current implementation:
- Audio processing: the AVAudioSession is set to the playAndRecord category and voiceChat mode within RTPController, and the AVAudioEngine is activated.
- Audio reception: RTP packets (audio data) are received over the network within the setupConnection() method of RTPController.
- Audio playback: the received audio data is passed to the playSound(data:) method and played back through the AVAudioEngine and an AVAudioPlayerNode.

Xcode Capabilities settings (Signing & Capabilities):
- Background Modes: Audio, AirPlay, and Picture in Picture
- Voice over IP
- Workout processing

Privacy descriptions in Info.plist:
- Privacy - Health Share Usage Description
- Privacy - Health Update Usage Description
- Privacy - Health Records Usage Description

Question 1: When the Digital Crown is pressed during a call, a message appears on the screen stating "End Call to Continue," and the call cannot be moved to the background. As a result, it is not possible to operate other apps while on a call. Is this behavior due to the specifications of CallKit?

Question 2: Our app stops communicating when it goes into the background, but the built-in Walkie-Talkie app on the Apple Watch can move to the background when the Digital Crown is pressed during a call and continues receiving and playing the other party's audio while in the background. To achieve a background transition during a call, with audio reception and playback continuing in the background, is the current implementation of RTPController and the set of enabled background modes insufficient?

Best regards.
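For reference, here is a minimal sketch of the session and engine configuration this post describes, assuming the received RTP payloads have already been decoded into PCM buffers before playback; the type and method names are illustrative, not the poster's actual code.

```swift
import AVFoundation

final class AudioPipeline {
    private let engine = AVAudioEngine()
    private let playerNode = AVAudioPlayerNode()

    // Minimal sketch of the configuration described above; error handling trimmed.
    func configure() throws {
        let session = AVAudioSession.sharedInstance()
        try session.setCategory(.playAndRecord, mode: .voiceChat, options: [])
        try session.setActive(true)

        engine.attach(playerNode)
        engine.connect(playerNode, to: engine.mainMixerNode,
                       format: engine.mainMixerNode.outputFormat(forBus: 0))
        try engine.start()
        playerNode.play()
    }

    // Hypothetical playback path: incoming RTP payloads are assumed to have been
    // decoded into AVAudioPCMBuffers before being scheduled here.
    func playSound(buffer: AVAudioPCMBuffer) {
        playerNode.scheduleBuffer(buffer, completionHandler: nil)
    }
}
```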
1
0
72
6d
Audio issue on iPhone 15 Pro (iOS 18.5)
I have an audio issue on iPhone 15 Pro (iOS 18.5). After some steps, a new call ends up in a one-way audio state, but tapping mute and then unmute brings it back to normal. See the attached video and check the "yellow dot indicator" for the audio status. Video link: https://youtube.com/shorts/DqYIIIqtMKI?feature=share I saw a similar issue on iOS 15 and iOS 16 and no issue on iOS 17, but now it occurs on iOS 18 with Dynamic Island model devices. Please check. Thanks.
3
1
71
6d
Push to talk channelManager(_:didActivate:) doesn't get called
I am implementing the new Push to Talk framework and I found an issue where channelManager(_:didActivate:) is not called after I immediately return a non-nil activeRemoteParticipant from incomingPushResult. I have tested it and it can play the PTT audio in the foreground and background. The issue only occurs when I join the PTT channel with the app in the foreground and then kill the app. The channel gets restored via channelDescriptor(restoredChannelUUID:). After the channel is restored, I send a PTT push. I can see that my device receives the incomingPushResult and returns the activeRemoteParticipant, and the notification panel shows that A is speaking - but channelManager(_:didActivate:) never gets called, resulting in no audio being played. Rejoining the channel fixes the issue, and reopening the app also seems to fix it.
1
0
49
2w
AVAudioSession dropping wired USBAudio source
My app inputs electrical waveforms from an IV485B39 2-channel USB device using an AVAudioSession. Before attempting to acquire data I make sure the input device is available as follows:

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
NSArray *inputs = [audioSession availableInputs];

I have been using this code for about 10 years. My app is scriptable, so a user can acquire data from the IV485B39 multiple times with various parameter settings (sampling rates and sample durations). Recently the scripts have been failing to complete, and what I have noticed is that when they fail, the list of available inputs is missing the USBAudio input. While debugging I have noticed that when working properly, the list of inputs includes both the internal microphone and the USBAudio device, as shown below.

VIB_TimeSeriesViewController:***Available inputs = (
  "<AVAudioSessionPortDescription: 0x11584c7d0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>",
  "<AVAudioSessionPortDescription: 0x11584cae0, type = USBAudio; name = 485B39 200095708064650803073200616; UID = AppleUSBAudioEngine:Digiducer.com :485B39 200095708064650803073200616:000957 200095708064650803073200616:1; selectedDataSource = (null)>"
)

But when it fails I only see the built-in microphone:

VIB_TimeSeriesViewController:***Available inputs = (
  "<AVAudioSessionPortDescription: 0x11584cef0, type = MicrophoneBuiltIn; name = iPad Microphone; UID = Built-In Microphone; selectedDataSource = Front>"
)

If I only see the built-in microphone, I immediately repeat the three lines of code, and most of the time "inputs" then contains both the internal microphone and the USBAudio device:

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&err];
NSArray *inputs = [audioSession availableInputs];

This fix always works on my M2 iPad Pro and my iPhone 14, but some of my customers have older devices, and even with 3 tries they still get faults about 1 in 10 times. I rolled back my code to a released version from about 12 months ago, where I know we never had this problem, compiled it against the current libraries, and the problem still exists. I assume this is a problem caused by a change in the AVAudioSession framework libraries. I need to find a way to work around the issue or get the library fixed.
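A minimal sketch of the retry approach the post describes, written in Swift for consistency with the rest of this page; it assumes the USB accessory simply takes a moment to re-enumerate after the category is set, and the retry count and delay are arbitrary placeholders.

```swift
import AVFoundation

// Hypothetical helper: poll availableInputs a few times before giving up,
// on the assumption that the USB accessory can take a moment to re-enumerate.
func waitForUSBInput(retries: Int = 5, delay: TimeInterval = 0.3) -> AVAudioSessionPortDescription? {
    let session = AVAudioSession.sharedInstance()
    try? session.setCategory(.record)
    for _ in 0..<retries {
        if let usb = session.availableInputs?.first(where: { $0.portType == .usbAudio }) {
            return usb
        }
        Thread.sleep(forTimeInterval: delay)   // simple blocking wait; call off the main thread
    }
    return nil
}
```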
0
0
33
2w
App Randomly Crashes During Continuous Sound Playback Using AVAudioPlayer
Environment:
- Device: iPad 10th generation
- OS: iOS 18.3.2

We're using AVAudioPlayer to play a sound when a button is tapped. In our use case, this button can be tapped very frequently - roughly every 0.1 to 0.2 seconds. Each tap triggers the following function:

var audioPlayer: AVAudioPlayer?

func soundPlay(resource: String, type: String) {
    guard let path = Bundle.main.path(forResource: resource, ofType: type) else {
        return
    }
    do {
        audioPlayer = try AVAudioPlayer(contentsOf: URL(fileURLWithPath: path))
        audioPlayer!.delegate = self
        try audioSession.setCategory(.playback)
    } catch {
        return
    }
    self.audioPlayer!.play()
}

The issue is that under high-frequency tapping (especially around 0.1-0.15s intervals), the app occasionally crashes. The crash does not occur every time, but it happens randomly - sometimes within 30 seconds, within 1 minute, or even 3 minutes of continuous tapping. Interestingly, adding a delay of 0.2 seconds between button taps seems to prevent the crash entirely; delays shorter than 0.2 seconds (e.g. 0.15s, 0.18s) still result in occasional crashes.

My questions are:
- Is this expected behavior from AVAudioPlayer or AVAudioSession?
- Could this be a known issue or a limitation in AVFoundation?
- Is there any documentation or guidance on handling frequent sound playback safely?

Any insights or recommendations on how to handle rapid, repeated audio playback more reliably would be appreciated.
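Not a confirmed fix for this crash, just a sketch of the usual mitigation: load the player (and set the session category) once and reuse it per tap instead of reallocating it every 100 ms. Type and method names here are illustrative.

```swift
import AVFoundation

final class TapSoundPlayer {
    private var player: AVAudioPlayer?

    // Load the file and configure the session once, not on every tap.
    func preload(resource: String, type: String) {
        guard let url = Bundle.main.url(forResource: resource, withExtension: type) else { return }
        try? AVAudioSession.sharedInstance().setCategory(.playback)
        player = try? AVAudioPlayer(contentsOf: url)
        player?.prepareToPlay()
    }

    // On each tap, rewind and restart the already-loaded player.
    func play() {
        guard let player else { return }
        player.currentTime = 0
        player.play()
    }
}
```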
0
0
90
2w
How to disable/hide Audio Controls on lock screen from WKWebView
Hi, I am trying to remove the audio controls for my app from the lock screen. Since I use WKWebView, there are 3 audio tags in my HTML and I play and pause them via JS. If I do not play any sound after app launch, there are no audio controls on the lock screen; but as soon as I play one of those 3 files (they are even less than 3-second sound effects, e.g. for buttons), the audio controls appear on the lock screen. They remain listed there even when the sounds are paused or not playing.

What I have tried so far, without success:

MPNowPlayingInfoCenter.default().nowPlayingInfo = [:]

and

try audioSession.setCategory(.playback, mode: .default, options: [])
try audioSession.setActive(false, options: .notifyOthersOnDeactivation)

and

UIApplication.shared.endReceivingRemoteControlEvents()

Another problem is that the app scales with the iOS system setting "Display Zoom". Is there a way to prevent that?

This is on the latest Xcode version 16.3 and iOS 18, and I have no background modes in my Capabilities. Nothing has worked so far. Has anyone an idea? Greetings
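One thing worth trying, not verified against WKWebView's internal playback: short UI sounds generally do not claim the Now Playing slot when the session category is ambient and mixable, so a sketch along these lines may avoid the lock-screen controls entirely.

```swift
import AVFoundation

// Sketch: for short button sound effects, an ambient, mixable session is
// typically enough and does not present lock-screen transport controls.
// Whether WKWebView's <audio> playback honors this is an assumption to verify.
func configureForUISounds() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.ambient, options: [.mixWithOthers])
        try session.setActive(true)
    } catch {
        print("Audio session error: \(error)")
    }
}
```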
2
0
46
3w
Need to check Call Status without using CallKit
My app's requirement is to check whether the user is on a call while performing a transaction; if the user is on a call, we need to show a caution alert. For this requirement we used CallKit to detect the call status and it works fine, but Apple recently rejected the application because CallKit is banned in China. Could you please suggest any solution to check only the call status?
3
0
57
3w
AVAudioPlayer/SKAudioNode audio no longer plays after interruption
Hi 👋! We have a SpriteKit-based app where we play AVAudio sounds in three different ways:
1. Effects (incl. UI sounds) with AVAudioPlayer.
2. Long looping tracks with AVAudioPlayer.
3. Short animation effects on the timeline of SpriteKit's SKScene files (effectively SKAudioNode nodes).

We've found that when you exit the app or otherwise interrupt audio playback, future playback often fails. For example, there's a WebKit-based video trailer inside the app, and if you play it, our looping background music track (2) will stop playing and won't resume when you close the trailer (return from WebKit). This is probably due to us not manually restarting the track (so it may well be easily fixed). Periodically played AVAudioPlayer audio (1) is not affected. The more concerning thing, however, is that the audio tracks on SKScene file timelines (3) will no longer play. My hypothesis is that AVAudioEngine gets interrupted and needs to be restarted for those AVAudioNode elements to regain functionality. The thing is, we currently don't deal with AVAudioEngine at all in the app, meaning it is never initiated by us to begin with. Obviously things return to normal when you remove the app from short-term memory and restart it. However, it seems many of our users aren't doing this, and they often report audio failing, presumably due to some interruption in the past without the app ever being cleared from memory.

Any idea why timeline-run SKAudioNodes would fail like this? Should the app react to app backgrounding/foregrounding regarding audio? Any help would be very much appreciated ✌️!
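A minimal sketch of interruption handling along the lines of the hypothesis above: reactivate the session when an interruption ends and restart the scene's audio engine (SKScene exposes one via its audioEngine property). Whether this restart is actually what the SKAudioNode timelines need is an assumption, not a confirmed fix.

```swift
import AVFoundation
import SpriteKit

final class AudioInterruptionObserver {
    private var observer: NSObjectProtocol?

    // Sketch: when an interruption ends, reactivate the session and restart the
    // scene's audio engine so SKAudioNode timelines can play again.
    func start(scene: SKScene) {
        observer = NotificationCenter.default.addObserver(
            forName: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance(),
            queue: .main
        ) { note in
            guard let info = note.userInfo,
                  let raw = info[AVAudioSessionInterruptionTypeKey] as? UInt,
                  let type = AVAudioSession.InterruptionType(rawValue: raw),
                  type == .ended else { return }
            try? AVAudioSession.sharedInstance().setActive(true)
            if !scene.audioEngine.isRunning {
                try? scene.audioEngine.start()
            }
        }
    }
}
```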
0
0
36
3w
iOS Audio Routing - Bluetooth Output + Built-in Microphone Input
Hello! I'm experiencing an issue with iOS's audio routing system when trying to use Bluetooth headphones for audio output while also recording environmental audio from the built-in microphone.

Desired behavior:
- Play audio through a Bluetooth headset (AirPods)
- Record unprocessed environmental audio from the iPhone's built-in microphone

Actual behavior:
- When explicitly selecting the built-in microphone, iOS reports it's using it (in currentRoute.inputs)
- However, the actual audio data received is clearly still coming from the AirPods microphone
- The audio is heavily processed with voice isolation/noise cancellation, removing environmental sounds

Environment details:
- Device: iPhone 12 Pro Max
- iOS Version: 18.4.1
- Hardware: AirPods
- Audio Framework: AVAudioEngine (also tried AudioQueue)

Code attempted - I've tried multiple approaches to force the correct routing:

func configureAudioSession() {
    let session = AVAudioSession.sharedInstance()

    // Configure to allow Bluetooth output but use built-in mic
    try? session.setCategory(.playAndRecord, options: [.allowBluetoothA2DP, .defaultToSpeaker])
    try? session.setActive(true)

    // Explicitly select built-in microphone
    if let inputs = session.availableInputs,
       let builtInMic = inputs.first(where: { $0.portType == .builtInMic }) {
        try? session.setPreferredInput(builtInMic)
        print("Selected input: \(builtInMic.portName)")
    }

    // Log the current route
    let route = session.currentRoute
    print("Current input: \(route.inputs.first?.portName ?? "None")")

    // Configure audio engine with native format
    let inputNode = audioEngine.inputNode
    let nativeFormat = inputNode.inputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: nativeFormat) { buffer, time in
        // Process audio buffer
        // Despite showing "Built-in Microphone" in the route, the audio appears to be
        // coming from the AirPods with voice isolation applied - welp!
    }

    try? audioEngine.start()
}

I've also tried various combinations of:
- Different audio session modes (.default, .measurement, .voiceChat)
- Different option combinations (with/without .allowBluetooth, .allowBluetoothA2DP)
- Setting session.setPreferredInput() both before and after activation

Diagnostic observations when AirPods are connected:
- AVAudioSession.currentRoute.inputs correctly shows "Built-in Microphone" after setPreferredInput()
- The actual audio data received shows clear signs of the AirPods' voice isolation processing
- Background/environmental sounds are actively filtered out
- When recording a test sound played near the phone (not through the app), the recording is nearly silent; only headset voice goes through

Questions:
- Is there a workaround to force iOS to actually use the built-in microphone while maintaining Bluetooth output?
- Are there any lower-level configurations that might resolve this issue?

Any insights, workarounds, or suggestions would be greatly appreciated. This is blocking a critical feature in my application that requires environmental audio recording while providing audio feedback through headphones 😅
0
0
51
May ’25
CallKit fails to activate the audio session more often after upgrading to iOS 18.4.1
Hi, we've noticed that this issue occurs more frequently after upgrading to iOS 18.4.1 and can result in one-way audio. Our app uses CallKit with WebRTC to establish VoIP connections. However, on iOS 18.4.1, CallKit no longer triggers:

func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession)

We're currently comparing the occurrence rate across different iOS versions to better understand the impact. Could you please help analyze the root cause of this issue?
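For context, a minimal sketch of the delegate pattern the post refers to: with CallKit, the system owns audio session activation, so call audio should only be started from provider(_:didActivate:). The webRTCAudioEnabled flag below is purely illustrative, standing in for however the app gates its WebRTC audio unit.

```swift
import AVFoundation
import CallKit

final class CallProviderDelegate: NSObject, CXProviderDelegate {
    // Illustrative stand-in for however the app gates its WebRTC audio unit.
    var webRTCAudioEnabled = false

    func providerDidReset(_ provider: CXProvider) {
        webRTCAudioEnabled = false
    }

    // CallKit owns session activation for calls; audio should only be started here.
    func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
        webRTCAudioEnabled = true
    }

    func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
        webRTCAudioEnabled = false
    }
}
```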
3
0
128
May ’25
Audio Session in Notification Service Extension
Is there any way that I could use AVAudioSession, AVAudioPlayer, or anything similar in a Notification Service Extension? I am trying to implement audio playback in the Notification Service Extension to play a specific audio file when a notification is received, regardless of the app state (foreground, background, or killed), but I am not able to activate the audio session in the Notification Service Extension.

NSError *sessionError = nil;
BOOL success = [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&sessionError];
success = [[AVAudioSession sharedInstance] setActive:YES error:&sessionError];
if (!success) {
    NSLog(@"Error activating audio session: %@", sessionError);
}

Below is the error that I get when I try to run the code above in the Notification Service Extension:

Error Domain=NSOSStatusErrorDomain Code=561015905 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
1
0
53
May ’25
Audio session activation occasionally fails from CarPlay
I'm working on adding CarPlay support to an audio app and am running into an issue. Occasionally, when a user opens the app from CarPlay while the main app scene is either not connected or is currently in the background, I will receive an error when attempting to activate the audio session. The code below mimics my setup:

do {
    try AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
    try AVAudioSession.sharedInstance().setActive(true)
} catch {
    print(error) // NSOSStatusErrorDomain - 560557684: Session activation failed
}

That error code maps to AVAudioSession.ErrorCode.cannotInterruptOthers. Once in this state, all subsequent attempts to play different pieces of content will fail. However, things will start working normally if the user opens the app on their phone and tries again from CarPlay (while the app is in the foreground on their phone). I'm not sure why it would behave this way, and I want to note that I do have the audio background mode capability enabled. Has anyone else encountered this? Are there any workarounds or changes I could make to prevent this from happening?
0
1
65
Apr ’25
AVAudioRecorder loses audio recorded before interruption
Hi everyone, I'm running into an issue with AVAudioRecorder when handling interruptions such as phone calls or alarms.

Problem: when the app is recording audio and an interruption occurs:
- I handle the interruption with audioRecorder?.pause() inside AVAudioSession.interruptionNotification (on .began).
- On .ended, I check for .shouldResume and call audioRecorder?.record() again.
- The recorder resumes successfully, but only the audio recorded after the interruption is saved. The audio recorded before the interruption is lost, even though I'm using the same file URL and not recreating the recorder.

Repro:
1. Start a recording with AVAudioRecorder
2. Simulate a system interruption (e.g., incoming call)
3. Resume recording after the interruption
4. Stop and inspect the output audio file

Expected: full audio (before and after the interruption) should be saved.
Actual: only the audio after the interruption is saved; the earlier part is missing.

Notes:
- According to the documentation, calling .record() after .pause() should resume recording into the same file.
- I confirmed that the file URL does not change, and I do not recreate the recorder instance.
- No error is thrown by the system during this process.
- This behavior happens consistently when the app is interrupted and resumed.

Question: Is this a known issue? Is there a recommended workaround for preserving the full recording when interruptions happen? Thanks in advance!
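For reference, a minimal sketch of the interruption handling the post describes; it assumes the recorder was created once with a fixed file URL and that the session is reactivated before resuming, and it reproduces the reported flow rather than fixing it.

```swift
import AVFoundation

final class RecorderInterruptionHandler {
    let recorder: AVAudioRecorder

    init(recorder: AVAudioRecorder) {
        self.recorder = recorder
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(handleInterruption(_:)),
                                               name: AVAudioSession.interruptionNotification,
                                               object: AVAudioSession.sharedInstance())
    }

    @objc private func handleInterruption(_ note: Notification) {
        guard let info = note.userInfo,
              let raw = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: raw) else { return }

        switch type {
        case .began:
            recorder.pause()
        case .ended:
            let optionsRaw = info[AVAudioSessionInterruptionOptionKey] as? UInt ?? 0
            let options = AVAudioSession.InterruptionOptions(rawValue: optionsRaw)
            if options.contains(.shouldResume) {
                // Reactivate the session before resuming; whether this preserves the
                // earlier part of the file is exactly what the post is asking about.
                try? AVAudioSession.sharedInstance().setActive(true)
                recorder.record()
            }
        @unknown default:
            break
        }
    }
}
```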
0
0
40
Apr ’25
Push notification volume for custom sound
Is there a way to configure the APNs notification sound volume to be louder? I am implementing some custom sounds (narrative sentences) for APNs. It does play the custom sound, but the volume of the custom sound is not loud enough, even though I set the device's volume and the "Ringtone and Alerts" volume to maximum. I tried to amplify the custom sound file; it does play louder, but the gain is minimal if I want to maintain the quality of the sound without distortion. I also tried using a Notification Service Extension with AVAudioPlayer and AVAudioSession to play the sound, and it does play louder at max volume compared with relying on the default sound payload in APNs, but the problem is that AVAudioPlayer and AVAudioSession do not seem to be usable when the application is in the background or killed state. Is there any other alternative I could use?
3
0
69
Apr ’25
Low-Level Networking on watchOS for Duplex audio streaming
I did watch WWDC 2019 Session 716 and understand that an active audio session is key to unlocking low-level networking on watchOS. I'm configuring my audio session and engine as follows:

private func configureAudioSession(completion: @escaping (Bool) -> Void) {
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: [])
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

        // Retrieve sample rate and configure the audio format.
        let sampleRate = audioSession.sampleRate
        print("Active hardware sample rate: \(sampleRate)")
        audioFormat = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)

        // Configure the audio engine.
        audioInputNode = audioEngine.inputNode
        audioEngine.attach(audioPlayerNode)
        audioEngine.connect(audioPlayerNode, to: audioEngine.mainMixerNode, format: audioFormat)
        try audioEngine.start()
        completion(true)
    } catch {
        print("Error configuring audio session: \(error.localizedDescription)")
        completion(false)
    }
}

private func setupUDPConnection() {
    let parameters = NWParameters.udp
    parameters.includePeerToPeer = true
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupTCPConnection() {
    let parameters = NWParameters.tcp
    connection = NWConnection(host: "***.***.xxxxx.***", port: 0000, using: parameters)
    setupNWConnectionHandlers()
}

private func setupWebSocketConnection() {
    guard let url = URL(string: "ws://***.***.xxxxx.***:0000") else {
        print("Invalid WebSocket URL")
        return
    }
    let session = URLSession(configuration: .default)
    webSocketTask = session.webSocketTask(with: url)
    webSocketTask?.resume()
    print("WebSocket connection initiated")
    sendAudioToServer()
    receiveDataFromServer()
    sendWebSocketPing(after: 0.6)
}

private func setupNWConnectionHandlers() {
    connection?.stateUpdateHandler = { [weak self] state in
        DispatchQueue.main.async {
            switch state {
            case .ready:
                print("Connected (NWConnection)")
                self?.isConnected = true
                self?.failToConnect = false
                self?.receiveDataFromServer()
                self?.sendAudioToServer()
            case .waiting(let error), .failed(let error):
                print("Connection error: \(error.localizedDescription)")
                DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
                    self?.setupNetwork()
                }
            case .cancelled:
                print("NWConnection cancelled")
                self?.isConnected = false
            default:
                break
            }
        }
    }
    connection?.start(queue: .main)
}

Duplex in this context refers to two-way audio transmission: simultaneously recording and sending audio while also receiving and playing back incoming audio, similar to a VoIP/SIP call. The setup works fine on the simulator, which suggests that the core logic is correct. However, since the simulator doesn't fully replicate watchOS hardware behavior, especially for audio sessions and networking, issues might arise when running on a real device. The problem likely lies in the Watch's actual hardware limitations, permission constraints, or specific audio session configurations.

I am reaching out to seek further assistance regarding the challenges I've been experiencing with establishing UDP, TCP, and WebSocket connections on watchOS using NWConnection for duplex audio streaming. Despite implementing the recommendations provided earlier, I am still encountering difficulties. From what I can see, your implementation is focused on streaming audio playback with the server.
In my case, I'm looking for a slightly different approach: I want to capture audio and send buffers of a specific size to the server while playing audio simultaneously, essentially achieving full-duplex streaming similar to a VoIP call. Additionally, I'd like to ensure that if no external audio route is connected, the Apple Watch speaker is used by default. Any thoughts or insights on adapting this setup for those requirements would be very welcome.
1
0
54
Apr ’25
AVAudioSession automatically sets the tablet audio volume to 50% when recording audio.
Environment:
- Device: iPad 10th generation
- OS: iOS 18.3.2

I'm using AVAudioSession to record sound in my application, but I recently realized that when the app starts a recording session on the tablet, the OS automatically sets the tablet volume to 50%, and after the recording ends it doesn't change back to the previous volume level. So I would like to know whether this is default OS behavior or a bug. If it's default behavior, I would much appreciate a link to the documentation.
0
0
47
Apr ’25
Creating an initial Now Playing state of paused - impossible?
I am working on an app which plays audio - https://youtu.be/VbAfUk_eYl0?si=nJg5ayy2faWE78-g - and one of its features is that, on restart, if you had paused playback of a file when the app was previously shut down (or were playing one at the time of shutdown), the paused state and position in the file are restored exactly as they were. The functionality works. However, it seems impossible to get the "Now Playing" information in iOS into the right state to reflect that via the MediaPlayer API.

On restart, handlers are attached to the play/pause/togglePlayPause actions on MPRemoteCommandCenter.shared(), and the map of media info is updated on MPNowPlayingInfoCenter.default().nowPlayingInfo. What happens is that iOS's media view shows the audio as playing and offers a pause button - even though the play action is enabled and the pause action is disabled - until playback has actually been initiated. (My workaround is to have the pause action toggle the play state, since otherwise you wouldn't be able to initiate playback from controls in a car without initiating it once from the device first.)

I've created a simplified white-noise-player demo to illustrate the problem - simply build and deploy it, then start the app, lock your device, and look at the playback controls on the lock screen. It will show a pause button - the same behavior I've described. https://github.com/timboudreau/ios-play-pause-demo

I've tried a few things to narrow down the source of the issue - for example, thinking that not setting MPNowPlayingInfoPropertyPlaybackProgress and MPMediaItemPropertyPlaybackDuration on startup might do the trick (since the system interpolates elapsed time and it's recommended to update those properties infrequently), but the result is the same, just without a duration or progress shown.

What governs this behavior, and is there some way to explicitly tell the Media Player API that your current state is paused?
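Not a confirmed answer, just a sketch of the usual way a paused state is expressed through Now Playing info: a playback rate of 0 plus the restored elapsed time. Whether the lock screen honors this before any playback has started is exactly the open question in the post; the title, duration, and elapsed values below are placeholders.

```swift
import MediaPlayer

// Sketch: describe a paused item to the system by publishing a zero playback rate
// along with the restored position. Keys are standard MediaPlayer constants.
func publishPausedNowPlayingState(title: String, duration: TimeInterval, elapsed: TimeInterval) {
    let info: [String: Any] = [
        MPMediaItemPropertyTitle: title,
        MPMediaItemPropertyPlaybackDuration: duration,
        MPNowPlayingInfoPropertyElapsedPlaybackTime: elapsed,
        MPNowPlayingInfoPropertyPlaybackRate: 0.0   // 0 = paused, 1 = playing at normal speed
    ]
    MPNowPlayingInfoCenter.default().nowPlayingInfo = info

    let commands = MPRemoteCommandCenter.shared()
    commands.playCommand.isEnabled = true
    commands.pauseCommand.isEnabled = false
}
```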
0
0
41
Apr ’25