How to reduce CMSampleBuffer volume

Hello,

Basically, I am reading and writing an asset.
To simplify, I am just reading the asset and rewriting it into an output video without any modifications.

However, I want to add a fade-out effect to the last three seconds of the output video.

I don’t know how to do this.

So far, before adding the CMSampleBuffer to the output video, I tried reducing its volume using an extension on CMSampleBuffer.

In the extension, I passed 0.4 for testing, aiming to reduce the video's overall volume by 60%.

My question is:

How can I directly adjust the volume of a CMSampleBuffer?

Here is the extension:

import CoreMedia

extension CMSampleBuffer {
    /// Scales every audio sample in the buffer by `factor` (1.0 = unchanged).
    /// Assumes the block buffer is contiguous and contains 16-bit integer PCM.
    func adjustVolume(by factor: Float) -> CMSampleBuffer? {
        guard let blockBuffer = CMSampleBufferGetDataBuffer(self) else { return nil }

        var length = 0
        var dataPointer: UnsafeMutablePointer<Int8>?

        // Get direct access to the raw audio bytes.
        guard CMBlockBufferGetDataPointer(blockBuffer,
                                          atOffset: 0,
                                          lengthAtOffsetOut: nil,
                                          totalLengthOut: &length,
                                          dataPointerOut: &dataPointer) == kCMBlockBufferNoErr,
              let dataPointer else { return nil }

        // Reinterpret the bytes as Int16 samples and scale each one in place,
        // clamping so the conversion back to Int16 cannot overflow.
        let sampleCount = length / MemoryLayout<Int16>.size
        dataPointer.withMemoryRebound(to: Int16.self, capacity: sampleCount) { pointer in
            for i in 0..<sampleCount {
                let scaled = Float(pointer[i]) * factor
                pointer[i] = Int16(max(Float(Int16.min), min(Float(Int16.max), scaled)))
            }
        }

        return self
    }
}
Answered by Engineer in 840680022

Instead of iterating over the raw PCM samples of a CMSampleBuffer, it is also possible to use setVolumeRamp(fromStartVolume:toEndVolume:timeRange:) to ramp the volume of a sequence of CMSampleBuffers over time.
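
For reference, a minimal sketch of that approach, assuming the ramp is applied through AVMutableAudioMixInputParameters together with an AVAssetExportSession (the function name, preset, and three-second timing here are illustrative, not taken from this thread):

import AVFoundation

// Illustrative only: fade out the last three seconds of an asset's audio
// when exporting with AVAssetExportSession and an AVMutableAudioMix.
func exportWithFadeOut(asset: AVAsset, to outputURL: URL) async throws {
    guard let audioTrack = try await asset.loadTracks(withMediaType: .audio).first,
          let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetHighestQuality) else { return }

    let duration = try await asset.load(.duration)
    let fadeDuration = CMTime(seconds: 3, preferredTimescale: 600)
    let fadeRange = CMTimeRange(start: duration - fadeDuration, duration: fadeDuration)

    // Ramp the audio track's volume from full to silent over the last three seconds.
    let parameters = AVMutableAudioMixInputParameters(track: audioTrack)
    parameters.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 0.0, timeRange: fadeRange)

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [parameters]

    session.audioMix = audioMix
    session.outputURL = outputURL
    session.outputFileType = .mp4
    await session.export()
}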

Hello @Paulo_DEV01, thank you for your post.

In the extension, I passed 0.4 for testing, aiming to reduce the video's overall volume by 60%.

When you work with raw samples, it is usually a good idea to use a decibel scale to change amplitude, as human perception is not linear. In other words, multiplying the raw samples by 0.4 might be reducing the perceived volume a lot less than you would expect.
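
To make that concrete, the usual relationship is dB = 20 * log10(factor), so a linear factor of 0.4 is only about -8 dB; a clearly quieter result needs a much smaller factor. A quick sketch (the helper name is made up for illustration, it is not part of any framework):

import Foundation

// Hypothetical helper: converts a change in decibels to a linear amplitude factor.
func linearGain(fromDecibels dB: Float) -> Float {
    powf(10, dB / 20)
}

let dBForFactor0_4 = 20 * log10f(0.4)           // ≈ -7.96 dB, a fairly mild reduction
let muchQuieter = linearGain(fromDecibels: -20) // 0.1, a far more noticeable drop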

Regarding the fade-out effect, you would use the sample buffer's sample rate and length to work out how many buffers are needed to cover the last three seconds. You could then keep a gain value that ramps down over that interval and update it for every sample, for example.
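
For instance, a per-sample ramp over the final three seconds might look like this rough sketch (the FadeOutRamp type and its names are made up for illustration; 44,100 Hz is just an example rate):

// Illustrative sketch, not from the thread: a per-sample gain that ramps from
// 1.0 down to 0.0 over a given duration, based on the audio sample rate.
struct FadeOutRamp {
    private let totalSamples: Int
    private var samplesProcessed = 0

    init(duration: Double, sampleRate: Double) {
        totalSamples = Int(duration * sampleRate)   // e.g. 3.0 s at 44_100 Hz
    }

    // Returns the gain for the next sample and advances the ramp.
    mutating func nextGain() -> Float {
        guard samplesProcessed < totalSamples else { return 0 }
        let gain = 1 - Float(samplesProcessed) / Float(totalSamples)
        samplesProcessed += 1
        return gain
    }
}

Once the remaining duration of the asset drops below three seconds, you would multiply each decoded Int16 sample by nextGain() instead of a constant factor before writing the buffer out.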

Hello, but in this case, are the sample values in the extension's loop linear values or decibels?

The idea I have is: I convert to a linear value, then multiply by 0.4. After that, I convert back to decibels.

Could you give me an example of the formula for reducing the volume to 40%?

Instead of iterating over the raw PCM samples of a CMSampleBuffer, it is also possible to use setVolumeRamp(fromStartVolume:toEndVolume:timeRange:) to ramp the volume of a sequence of CMSampleBuffers over time.

The issue is that I’m writing a video from two different sources:

  1. The first comes from an AVAssetReader;
  2. The second comes from AVSpeechSynthesizer.write(_:toBufferCallback:).

Using AVAssetWriter, I want to concatenate content from both sources in order.

setVolumeRamp only works when writing the video that comes from the AVAssetReader using AVAssetExportSession. I'm using AVAssetWriter, so AVAssetExportSession doesn't apply.
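
For what it's worth, the write loop for the AVAssetReader source, with a per-buffer gain applied through the extension above before appending, might look roughly like this (a simplified sketch; real code would typically drive this from AVAssetWriterInput.requestMediaDataWhenReady(on:using:)):

import AVFoundation

// Simplified sketch: drain audio sample buffers from an AVAssetReader output
// into an AVAssetWriter input, scaling each buffer's samples before appending.
func appendAudio(from readerOutput: AVAssetReaderTrackOutput,
                 to writerInput: AVAssetWriterInput,
                 gain: Float) {
    while writerInput.isReadyForMoreMediaData {
        guard let sampleBuffer = readerOutput.copyNextSampleBuffer() else {
            writerInput.markAsFinished()   // this source is exhausted
            return
        }
        // adjustVolume(by:) is the extension from the original question.
        let adjusted = sampleBuffer.adjustVolume(by: gain) ?? sampleBuffer
        if !writerInput.append(adjusted) { return }   // stop on a write failure
    }
}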
