AVAssetWriterInputTaggedPixelBufferGroupAdaptor Hanging With Tagged Buffers

We've successfully implemented an AVAssetWriter to produce HLS streams (all the code is Objective-C++ for interop with an existing codebase), but we're struggling to extend that pipeline to use tagged buffers.

We're starting to wonder if the tagged buffers required for an MV-HEVC signal are fully supported when producing HLS segments in a live-stream setting.

We generate a live stream of data using something like:

// Segment-based writer: no output URL, segments are delivered via the delegate
UTType *t = [UTType typeWithIdentifier:AVFileTypeMPEG4];
m_writer = [[AVAssetWriter alloc] initWithContentType:t];


// - videoHint describes HEVC and width/height
// - m_videoConfig includes compression settings and, when using MV-HEVC,
//   the required keys (e.g. kVTCompressionPropertyKey_MVHEVCVideoLayerIDs).
//   The app was throwing an exception without these, which was a useful
//   signal that we finally had the configuration right.
m_video = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeVideo outputSettings:m_videoConfig sourceFormatHint:videoHint];
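
For context, the writer itself is configured for segment output along these lines (sketching just the relevant bits; m_segmentSeconds is a placeholder for our segment length):

m_writer.outputFileTypeProfile = AVFileTypeProfileMPEG4AppleHLS;
m_writer.preferredOutputSegmentInterval = CMTimeMake(m_segmentSeconds, 1);
m_writer.initialSegmentStartTime = kCMTimeZero;
m_writer.delegate = self; // conforms to AVAssetWriterDelegate

[m_writer addInput:m_video];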

For either path we're producing CVPixelBufferRefs that contain the raw pixel data (e.g. 32BGRA), so we use an adaptor to keep that as simple as possible.

If we use a single view and an AVAssetWriterInputPixelBufferAdaptor, things work out very well. We produce segments and the delegate is called.

However, if we use the AVAssetWriterInputTaggedPixelBufferGroupAdaptor as demonstrated in the SideBySideToMVHEVC demo project, things go poorly.
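
For reference, the adaptors are created along these lines (a sketch; m_width, m_height, and the adaptor variable names are stand-ins for our own members):

NSDictionary *attrs = @{
    (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA),
    (__bridge NSString *)kCVPixelBufferWidthKey  : @(m_width),
    (__bridge NSString *)kCVPixelBufferHeightKey : @(m_height)
};

// Single-view path
m_videoAdapter = [[AVAssetWriterInputPixelBufferAdaptor alloc]
    initWithAssetWriterInput:m_video
    sourcePixelBufferAttributes:attrs];

// MV-HEVC path (this is the adaptor used by the append call below)
mvVideoAdapter = [[AVAssetWriterInputTaggedPixelBufferGroupAdaptor alloc]
    initWithAssetWriterInput:m_video
    sourcePixelBufferAttributes:attrs];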

We create the tagged buffers with something like:

CMTagCollectionRef collections[2];

CMTag leftTags[] = {
    CMTagMakeWithSInt64Value(
        kCMTagCategory_VideoLayerID, (int64_t)0),
    CMTagMakeWithSInt64Value(
        kCMTagCategory_StereoView, kCMStereoView_LeftEye)
};
CMTagCollectionCreate(
    kCFAllocatorDefault, leftTags, 2, &(collections[0])
);

CMTag rightTags[] = {
    CMTagMakeWithSInt64Value(
        kCMTagCategory_VideoLayerID, (int64_t)1),
    CMTagMakeWithSInt64Value(
        kCMTagCategory_StereoView, kCMStereoView_RightEye)
};
CMTagCollectionCreate(
    kCFAllocatorDefault, rightTags, 2, &(collections[1])
);

CFArrayRef tagCollections = CFArrayCreate(
    kCFAllocatorDefault, (const void **)collections, 2, &kCFTypeArrayCallBacks
);

// `b` and `alt` are CVPixelBufferRef* pointing at the left/right eye buffers
CVPixelBufferRef buffers[] = {*b, *alt};
CFArrayRef bufferArray = CFArrayCreate(
    kCFAllocatorDefault, (const void **)buffers, 2, &kCFTypeArrayCallBacks
);

CMTaggedBufferGroupRef bufferGroup;
OSStatus res = CMTaggedBufferGroupCreate(
    kCFAllocatorDefault, tagCollections, bufferArray, &bufferGroup
);
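
Not shown above: everything here follows the CF Create rule, so once the group exists we check the status and release the intermediate objects (a sketch of the bookkeeping, using the names from the snippet above):

if (res != noErr || bufferGroup == NULL) {
    // report error and bail...
}

// The group holds its own references, so we can drop ours here.
// bufferGroup itself gets a CFRelease after the append below.
CFRelease(bufferArray);
CFRelease(tagCollections);
CFRelease(collections[0]);
CFRelease(collections[1]);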

Perhaps there's something about this OBJC code that I've buggered up? Hopefully!

Anyway, when I submit this tagged buffer group to the adaptor:

if (![mvVideoAdapter appendTaggedPixelBufferGroup:bufferGroup withPresentationTime:pts]) {
    // report error...
}

Appending does not raise any errors; eventually it just hangs on us and we never return from the call...

Real Issue:

So either:

  • The delegate assigned to the AVAssetWriter doesn't fire its assetWriter callback, which should produce the segments
  • The adaptor hangs in appendTaggedPixelBufferGroup before a segment is ready to be completed (though it succeeds for a number of buffer groups before this happens).

This is the same delegate class that's assigned in the non-multi-view code path when MV-HEVC is turned off, and that path works perfectly.

Accepted Answer

Ah yes. A good case of RTFM!

From the docs on AVAssetWriterInput::readyForMoreMediaData:

Clients writing media data from a real-time source, such as an instance of AVCaptureOutput, should set the input's expectsMediaDataInRealTime property to YES to ensure that the value of readyForMoreMediaData is calculated appropriately. When expectsMediaDataInRealTime is YES, readyForMoreMediaData will become NO only when the input cannot process media samples as quickly as they are being provided by the client. If readyForMoreMediaData becomes NO for a real-time source, the client may need to drop samples or consider reducing the data rate of appended samples.

In our case, the real time source is our render engine producing sweet graphics!

We're in the VFX world and tend to get a little over-eager when we start a project. I had the stream at 4K without enough optimizations, and MV-HEVC (sort of) doubles that up. While our render cycle is fast enough, we haven't done enough optimization on the HLS tooling, and that was causing readyForMoreMediaData to become NO.

When you spam the adaptor with buffers it's not ready for, it appears to hang and, AFAICT, never return.
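
The flag the docs point at is a one-liner on the input (using the m_video input from the setup above):

// Mark the input as a real-time source so readyForMoreMediaData
// reflects live throughput instead of letting appends back up
m_video.expectsMediaDataInRealTime = YES;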

For now we've opted to drop frames as required to keep up with the stream.

/// Called every (1 / framerate) seconds (with a little wiggle)
/// from our plugin interface
void HLSInterface::writeFrame()
{
    // Note: m_frameIndex advances even when we drop a frame below,
    // so the pts stays locked to the live timeline
    CMTime pts = CMTimeMake(
        m_frameIndex++ * m_frameRateDuration,
        m_frameRateScale
    );

    if (![m_video readyForMoreMediaData])
    {
        // We can't use this buffer, skip it for now. Ideally
        // this never happens but we can't trust the input
        // adapter not to explode if we overload it
        return;
    }

    // We can accept buffers, tagged or otherwise, so do
    // the thing
    // ...
}