20151027 - Temporal Filtering With Dithered Transparency
Possible direction for a technique handling dithered transparency with temporal feedback:
- Render scene depth only, opaque only. Can be lower than screen resolution. Used for culling.
- Render scene color, transparent and opaque using dithering. Output resolution.
- Render transparent layer hash map. Lower than screen resolution. Pixels are 32-bit, split into two 16-bit halves (low 16-bits, high 16-bits). Each frame, each object gets a new random bit position (from 0-15) and keeps its prior frame random bit position. The new random bit sets one bit in the low 16-bits of a per-object constant, and the prior random bit sets a bit in the high 16-bits of the constant. Rendering is an atomicOR of that constant into the map.
- Maybe dilate the layer hash map (OR with the neighborhood) if lower than screen resolution.
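The map update and dilation steps above can be sketched on the CPU (Python as a stand-in for shader code; the function names are illustrative, and a nested list stands in for the atomicOR target image):

```python
def layer_hash_constant(new_bit, prior_bit):
    """Per-object 32-bit constant: this frame's random bit (0-15) in the
    low 16 bits, the prior frame's random bit in the high 16 bits."""
    assert 0 <= new_bit < 16 and 0 <= prior_bit < 16
    return (1 << new_bit) | (1 << (16 + prior_bit))

def splat(hash_map, x, y, constant):
    """Software stand-in for the atomicOR of the object's constant into
    the layer hash map while rendering transparent geometry."""
    hash_map[y][x] |= constant

def dilate(hash_map):
    """OR each pixel with its 3x3 neighborhood, so a lower-than-screen-res
    map still covers edges of the transparent regions."""
    h, w = len(hash_map), len(hash_map[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            v = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        v |= hash_map[ny][nx]
            out[y][x] = v
    return out
```

Overlapping objects OR their constants into the same pixel, so the pixel value becomes a hash of the set of transparent layers covering it.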
The layer hash map segments the image into regions of similar overlap of transparent objects. The hash changes each frame (keeping both the new and prior bit positions) to ensure collisions are distributed across frames.
Could use the layer hash map to check similarity of pixels on a given frame (for any kind of spatial blur), or of a prior frame (temporal blur).
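For the same-frame (spatial) case, both pixels share the same bit layout, so a sketch can XOR the full 32-bit values directly (helper name is hypothetical):

```python
def spatial_difference(pixel_a, pixel_b):
    """Same-frame comparison of two layer hash map pixels. Identical bit
    layout on both sides, so XOR the full 32 bits; zero means the same
    set of layers, and the popcount estimates the amount of difference."""
    return bin((pixel_a ^ pixel_b) & 0xFFFFFFFF).count("1")
```

A spatial blur could then weight neighbor taps down as this count grows.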
Way to diff the hash map for temporal blur: extract the matching 16-bit values (low 16-bits of the prior frame, high 16-bits of the current frame shifted down), then XOR them: zero means the same set of objects, otherwise count the set bits for the amount of difference.
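A sketch of that temporal diff, assuming the bit layout described in the hash map bullet (prior frame's low half vs. current frame's high half; the name is illustrative):

```python
def temporal_difference(prior_pixel, current_pixel):
    """Diff two frames of the layer hash map at one pixel. The prior
    frame's low 16 bits hold that frame's random bit positions; the
    current frame's high 16 bits re-encode those same (prior) positions.
    XOR of the matching halves: zero means the same set of objects,
    otherwise the popcount gives the amount of difference."""
    prior_bits = prior_pixel & 0xFFFF
    matching_bits = (current_pixel >> 16) & 0xFFFF
    return bin(prior_bits ^ matching_bits).count("1")
```

Example: an object that had bit 5 last frame and bit 9 this frame contributes `1 << 5` to the prior pixel's low half and `1 << (16 + 5)` to the current pixel's high half, so the halves cancel.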
Theory is similar to what I was doing with dithered fog, except the layer hash map replaces opaque depth as the likeness check.