
Implementing Scalable Order-Independent Transparency (OIT) in Metal

Hi,

Apple’s documentation on Order-Independent Transparency (OIT) describes an approach using image blocks, where a fixed array of 4 entries is allocated per pixel to store fragment depth and color, which a tile shading compute pass then blends.

However, when the scene’s depth complexity increases, for example by adding more overlapping quads, the OIT implementation breaks down because the number of overlapping fragments exceeds the fixed array size.

Is there a way to dynamically allocate storage for fragments based on actual depth complexity encountered during rasterization, rather than using a fixed-size array? Specifically, can an adaptive array of fragments be maintained and sorted by depth, where the size grows as needed instead of being limited to 4 entries?

Any insights or alternative approaches would be greatly appreciated.

Thank you!

Hello,

We'll investigate whether this value can be changed and, if so, whether it can be done efficiently on the GPU.

/// The number of transparent geometry layers that the app stores in image block memory.
/// Each layer consumes tile memory and increases the value of the pipeline's `imageBlockSampleLength` property.
static constexpr constant short kNumLayers = 4;