I'm trying to set a specific start time for the song, using ApplicationMusicPlayer.shared.playbackTime but is not working
musicPlayer.playbackTime = 10
try await musicPlayer.prepareToPlay()
try await musicPlayer.play()
How did we do? We’d love to know your thoughts on this year’s conference. Take the survey here
Explore the integration of media technologies within your app. Discuss working with audio, video, camera, and other media functionalities.
Selecting any option will automatically load the page
Post
Replies
Boosts
Views
Activity
Hi,
I am getting into a trap. Please check stack-trace, howto fix this?
regards, Joël
stack-trace with ExtAudioFileWrite
Hi, i have try with RPBroadcastSampleHandler Broadcast Extension and RPSystemBroadcastPickerView but i don't understand how can i mirror my iOS screen to Android smart TV?
Hello everyone,
I am thrilled about the iPhone Mirroring demo on WWDC24 and I have a few thoughts to share.
Will it work through a local network, or can the iPhone be accessed within a global network? Will there be an API to initiate iPhone mirroring via an app? This would be a great feature for MDMs, allowing administrators to provide support for their users. Could you share more details from the development perspective?
Topic:
Media Technologies
SubTopic:
General
We are using a VoiceProcessingIO audio unit in our VoIP application on Mac. In certain scenarios, the AudioComponentInstanceNew call blocks for up to five seconds (at least two). We are using the following code to initialize the audio unit:
OSStatus status;
AudioComponentDescription desc;
AudioComponent inputComponent;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
inputComponent = AudioComponentFindNext(NULL, &desc);
status = AudioComponentInstanceNew(inputComponent, &unit);
We are having the issue with current MacOS versions on a host of different Macs (x86 and x64 alike). It takes two to three seconds until AudioComponentInstanceNew returns.
We also see the following errors in the log multiple times:
AUVPAggregate.cpp:2560 AggInpStreamsChanged wait failed
and those right after (which I don't know if they matter to this issue):
KeystrokeSuppressorCore.cpp:44 ERROR: KeystrokeSuppressor initialization was unsuccessful. Invalid or no plist was provided. AU will be bypassed. vpStrategyManager.mm:486 Error code 2003332927 reported at GetPropertyInfo
Hey!
I'm working on a camera app and I've noticed that the .builtInTripleCamera doesn't behave anything like the native app. Tested on iPhone 15 Pro Max and iPhone 12 Pro Max.
The documentation states the following, but that seems quite different from what is happening in the app:
Automatic switching from one camera to another occurs when the zoom factor, light level, and focus position allow.
So, does it automatically switch like the native camera, or do I need to do something?
Custom Camera vs Native Camera
Custom Camera
Native Camera
The code was adapted from the Apple's project
AVCamFilter.
Just download the AVCamFilter and update videoDeviceDiscoverySession:
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
deviceTypes: [.builtInTripleCamera],
mediaType: .video,
position: .unspecified
)
After the session video, "Build a great Lock Screen camera capture experience", was unclear about the UI.
So do developers need to provide a whole new UI in the extension? The main UI cannot be repurposed?
My app stores and transports lots of groups of similar PNGs. These aren't compressed well by official algorithms like .lzfse, .lz4, .lzbitmap... not even bz2, but I realized that they are well-suited for compression by video codecs since they're highly similar to one another.
I ran an experiment where I compressed a dozen images into an HEVCWithAlpha .mov via AVAssetWriter, and the compression ratio was fantastic, but when I retrieved the PNGs via AVAssetImageGenerator there were lots of artifacts which simply wasn't acceptable. Maybe I'm doing something wrong, or maybe I'm chasing something that doesn't exist.
Is there a way to use video compression like a specialized archive to store and retrieve PNGs losslessly while retaining alpha? I have no intention of using the videos except as condensed storage.
Any suggestions on how to reduce storage size of many large PNGs are also welcome. I also tried using HEVC instead of PNG via the new UIImage.hevcData(), but the decompression/processing times were just insane (5000%+ increase), on top of there being fatal errors when using async.
Hello,
We've a music app reading MPMediaItem.
We got items using MPMediaQuery. But we realized that some downloaded tracks from Apple Music were fetched too. Not all downloaded track but only those who were played recently.
Of course, since these tracks are protected with DRM we can't play them in our player.
It's weird to get them in our query because we added predicate in order to dont fetch protected asset and iCloud item
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyHasProtectedAsset)
MPMediaPropertyPredicate(value: false, forProperty: MPMediaItemPropertyIsCloudItem)
To be sure, we made a second check on each item we've fetched
extension MPMediaItem {
public func isValid() -> Bool {
return self.assetURL != nil && !self.isCloudItem && !self.hasProtectedAsset
}
}
But we still get these items. Their hasProtectedAsset attribute always return false.
I dont know if it's a bug, but since we can't detect this items as Apple Music downloaded track, we can't either:
filter them to not add them in our application library
OR
switch on a MPMusicPlayerController.applicationMusicPlayer to allow the user to play them
Topic:
Media Technologies
SubTopic:
General
Tags:
Apple Music API
Media Player
Media Library
AVFoundation
I'm working on a macOS application that captures audio and video. When the user selects a video capture source (most likely an elgato box), I would like the application to automatically select the audio input from the same device. I was achieving this by pairing video and audio sources that had the same name, but this doesn't work when the user plugs in two capture devices of the same make and model.
With the command system_profiler SPUSBDataType I can list all the USB devices, and I can see that the two elgato boxes have different serial numbers. If I could find this serial number, then I could figure out which AVCaptureDevices come from the same hardware.
Is there a way to get the manufacturer's serial number from the AVCaptureDevice object? Or a way to identify the USB device for an AVCaptureDevice, and from there I could get the serial or some other unique ID?
Well, I will collect a lot of memes from the Internet and save them on my iPhone. I will name and classify them, but I will click on a photo in "All Photos", and its info does not show which album I added to, which makes me very distressed. If I have this function, I will easily manage the memes that I did not correctly add to the corresponding album.
Hi.
I saw that in iOS 18 Beta there is a property "transition" on the Music Kit's ApplicationMusicPlayer. However, in my app I am using MPMusicPlayerApplicationController because I want to play Apple Music songs, local songs and podcasts. But I didn't find an analogue property on MPMusicPlayerApplicationController to specify transitions between songs. Am I missing something?
Thanks,
Dirk
iOS17.4 17.5.1
AVPlayer
https://svip.yzzy23-play.com/20240606/14121_017bbbeb/index.m3u8
大概在8:30s时会无声 我测试过系统safari也会出现这样的情况
之前版本都没问题
I am writing to follow up with my lab in WWDC24.
I had 1:1 lab with Mr. Kavin, we had good 30 minutes lab and for follow up questions Kavin asked me to post it using feedback.
Following is my questin:
We have screenshare in our application and trying to use CFMessagePort for passing CVPixelBufferRef from broadcast extension to Applicaiton.
Questions:
How to copy planes of IOSurface backed CVPixelBufferRef onto another one without using memcpy, is there a zero-copy method?
How to get notified when an IOSurface backed CVPixelBufferRef data get changed by another process.
How to send an IOSurface backed CVPixelBufferRef from Broadcast Extension to application.
How to pass unowned IOSurfaceRef from the Broadcast Extension to appliction.
Hi Team,
I'm using AVPlayer on Apple TV 2nd generation which has that siri click pad with four buttons around and we need to detect where the user clicked on the Siri remote fast forward/backward. I have tested many different approaches to do it, but nothing is working for me.
Does anyone have any idea how to resolve it in Swift?
Hello everyone,
Is it possible to set AVCaptureMovieFileOutput to record the audio for my HECV video as 44100Hz 16 bit mono PCM. If yes: how does it work?
Thank you in advance
Topic:
Media Technologies
SubTopic:
Video
Hello everyone, with the release of Apple's new Final Cut Camera App, we see the possibility to overlay a Focus Peaking indicator over the camera feed, showing focussed areas.
We have already had a contrast based autofocus system for some time via the AVCaptureDevice.Format.AutoFocusSystem.contrastDetection, but I haven't found a way to actually present contrast areas to the user.
Given that Apple now natively has such an algorithm for the Final Cut Camera App, I wonder if we devs now also get access to this. If not, does anybody know of implementations of focus peaking out there?
Thanks and with best regards
The M series utilizes VideoToolBox GPU compression with a YUV422 format kCVPixelFormatType_422YpCbCr8BiPlanarVideoRange input, and the compressed output JPEG image format remains YUV420. For the Intel series GPU compression, a YUV420 format kCVPixelFormatType_420YpCbCr8Planar input is required, and the compressed output JPEG image format is YUV422. The output format after compression is not consistent with the input format. Does VideoToolBox GPU compression support output YUV422 or YUV444 JPEG images and H.264 streams?
Topic:
Media Technologies
SubTopic:
Video
Hi.
I know that playing videos from Apple Music was not possible with iOS 17. There is only the workaround to open the Music app.
My question is whether anybody found a solution for iOS 18 (Beta).
Thanks,
Dirk
OS:VisionOS 1.0
Xcode:15.2
In the application under development, do the following
Open ImmersiveSpace
Add VideoPlayerComponent to Entity
Play 8K Video
the App crash
The Apple symbol appears and returned to the Home
but, The problem does not occur if the application is created by extracting only the part of the 8K video to be played back.
Error Log
apply fence tx failed (client=0x6fbf0fcc) [0xfffffecc (ipc/mig) server died]
Failed to commit transaction (client=0x58510d43) [0x10000003 (ipc/send) invalid destination port]
nw_read_request_report [C1] Receive failed with error "No message available on STREAM"
nw_protocol_socket_reset_linger [C1:2] setsockopt SO_LINGER failed [22: Invalid argument]
Failed to set override status for bind point component member.
Message from debugger: Terminated due to signal 9
I can't share the entire application, but is anyone else experiencing the same problem?
Is this a memory issue?