
@CodyJasonBennett (Contributor) commented Sep 19, 2024

Related issue: #22017 (comment)

Description

Implements a reversed depth buffer with a [0, 1] projection matrix via EXT_clip_control, which is a strict improvement over logarithmic depth where supported. A reversed depth buffer exploits the high-precision range of [0, ~0.8] in floating point and inverts depth to improve the distribution at distance. This is not only significantly faster than logarithmic depth (where use of gl_FragDepth disables early-z optimizations and MSAA coverage) but also more accurate. See the articles below, which visualize this effect.

https://tomhultonharrop.com/mathematics/graphics/2023/08/06/reverse-z.html
https://developer.nvidia.com/blog/visualizing-depth-precision
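For reference, the raw WebGL2 state this technique relies on looks roughly like the sketch below. This is illustrative only, not code from this PR; `canvas` is assumed to exist, and the extension names come from the EXT_clip_control registry entry.

```javascript
// Illustrative sketch of the underlying WebGL2 setup (not the PR's code).
// EXT_clip_control lets clip-space z span [0, 1] instead of [-1, 1].
const gl = canvas.getContext('webgl2'); // `canvas` assumed to exist
const ext = gl.getExtension('EXT_clip_control');

if (ext) {
  // Keep clip z in [0, 1] so float precision isn't wasted on remapping
  ext.clipControlEXT(ext.LOWER_LEFT_EXT, ext.ZERO_TO_ONE_EXT);
  gl.clearDepth(0); // with reversed z, the far plane clears to 0
  gl.depthFunc(gl.GREATER); // nearer fragments now have larger depth values
}
```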


As both a [0, 1] and reversed projection matrix would require drastic changes to sensitive code like frustum planes or raycasting, I've hidden both conversions when setting the projectionMatrix uniform in WebGLRenderer. It's possible this can be moved to a shader chunk, but any inline clip-space math must account for this mode, and we'll need to add a shader define accordingly.
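For illustration only, the hidden conversion amounts to composing two constant matrices with the camera projection. The helpers below are hypothetical, row-major, and standalone; the actual renderer works with column-major THREE.Matrix4 and would include camera.projectionMatrix in the product.

```javascript
// Row-major 4x4 helpers for illustration (THREE.Matrix4 is column-major).
function multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
  return out;
}

function transform(m, v) {
  return [0, 1, 2, 3].map((r) =>
    m[r * 4] * v[0] + m[r * 4 + 1] * v[1] + m[r * 4 + 2] * v[2] + m[r * 4 + 3] * v[3]
  );
}

// z' = 0.5 * z + 0.5 — remaps clip z from [-1, 1] to [0, 1]
const toZeroToOne = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 0.5, 0.5,
  0, 0, 0, 1,
];

// z' = 1 - z — reverses depth so the far plane maps to 0
const reverseZ = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, -1, 1,
  0, 0, 0, 1,
];

const clipControl = multiply(reverseZ, toZeroToOne);

console.log(transform(clipControl, [0, 0, -1, 1])[2]); // near plane (z = -1) -> depth 1
console.log(transform(clipControl, [0, 0, 1, 1])[2]);  // far plane (z = +1) -> depth 0
```

Doing this once at uniform-upload time keeps the rest of the engine (frustum culling, raycasting) in the familiar [-1, 1] convention.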

The same is true for logarithmic depth, e.g., pmndrs/drei#1737. I've debated adding methods to packing chunks for these.


This example was ported from https://webgpu.github.io/webgpu-samples/?sample=reversedZ and uses a 32-bit depth buffer.

https://jsfiddle.net/n69rqj8u

three-reverse-depth.mp4

@github-actions bot commented Sep 19, 2024

📦 Bundle size

Full ESM build, minified and gzipped.

              Before                       After                        Diff
WebGL         686.03 kB / 169.81 kB gzip   686.77 kB / 170.07 kB gzip   +739 B / +259 B
WebGPU        835 kB / 223.93 kB gzip      835 kB / 223.93 kB gzip      +0 B / +0 B
WebGPU Nodes  834.5 kB / 223.81 kB gzip    834.5 kB / 223.81 kB gzip    +0 B / +0 B

🌳 Bundle size after tree-shaking

Minimal build including a renderer, camera, empty scene, and dependencies.

              Before                       After                        Diff
WebGL         462.34 kB / 111.55 kB gzip   463.05 kB / 111.83 kB gzip   +713 B / +277 B
WebGPU        531.77 kB / 143.36 kB gzip   531.77 kB / 143.36 kB gzip   +0 B / +0 B
WebGPU Nodes  488.43 kB / 133.22 kB gzip   488.43 kB / 133.22 kB gzip   +0 B / +0 B

@Mugen87 added this to the r169 milestone Sep 19, 2024
@CodyJasonBennett (Contributor, Author) commented Sep 20, 2024

Right, they're using depth32float for depth, and the upper bits are used for near-field precision.

https://jsfiddle.net/n69rqj8u


@N8python (Contributor) commented Sep 20, 2024

This would only work with floating-point depth buffers, though. What about the 24-bit depth buffers often used with stencil?

@CodyJasonBennett (Contributor, Author) commented Sep 20, 2024

No. The whole trick here is exploiting floating-point representation, whose precision is clustered toward zero, covering roughly [0, ~0.8] with high precision. To emulate this with integers, you would need a non-linear transform similar to logarithmic depth. This is the same problem we face when applying a gamma curve to fit colors into smaller integer formats; it can be forgone when using high-precision floating point for the same reason.
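To make the clustering concrete, here is a small standalone sketch (not from the PR) that measures the spacing between adjacent float32 values, i.e. one ULP, at two depths:

```javascript
// Standalone illustration (not from the PR): spacing between adjacent
// float32 values (one ULP) at two depths. Precision clusters near 0,
// which is exactly what reversed-z exploits for the far field.
function ulp32(x) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, Math.fround(x));
  const f = view.getFloat32(0);
  // Positive float32 bit patterns are monotonic, so +1 is the next float
  view.setUint32(0, view.getUint32(0) + 1);
  return view.getFloat32(0) - f;
}

console.log(ulp32(0.001)); // 2^-33 ≈ 1.16e-10
console.log(ulp32(0.999)); // 2^-24 ≈ 5.96e-8, 512x coarser than near 0
```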

See #29445 (comment) for instance, which uses the default canvas framebuffer (integer). You lose near-field precision this way, but that's okay for mobile, where you want to keep this low(er) precision format and avoid the blit. WebGPU gives more control here.

We could enable a workaround where you set camera.coordinateSystem = THREE.WebGPUCoordinateSystem, as in the second screenshot, to skip the [-1, 1] -> [0, 1] conversion. I think it's more acceptable for us to recompose the projection matrix's depth components in that case, but the difference shown there is unexpected, and I suspect a false negative. It seems the range wasn't reversed in the matrix there, which favors near-field over far-field precision; I'm not sure that is useful in practice.

@N8python (Contributor) commented:

What I mean is: with 24-bit depth buffers, is the precision uniform over [0, 1]? Or am I wrong here?

@CodyJasonBennett (Contributor, Author) commented:

Precision is not uniform, due to how perspective projection works and the useful range of floating-point representation. We reverse this range to preserve far-field precision over near-field precision.

In other words, the distribution does not change whether you use 32/24/16 bits for depth; what changes is the range you can represent in floating point.
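A standalone way to visualize this (illustrative, not part of the PR): non-negative float32 bit patterns are monotonic when read as unsigned integers, so counting representable values per sub-interval of [0, 1] shows the clustering directly:

```javascript
// Standalone illustration (not from the PR): count representable float32
// values in sub-intervals of [0, 1] via their monotonic bit patterns.
function bitsOf(x) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, Math.fround(x));
  return view.getUint32(0);
}

const low = bitsOf(0.5) - bitsOf(0.0);  // representable values in [0, 0.5)
const high = bitsOf(1.0) - bitsOf(0.5); // representable values in [0.5, 1)

console.log(low / high); // 126 — far more values cluster toward 0
```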

@WestLangley (Collaborator) commented:

@Mugen87 @CodyJasonBennett

I think this feature is deserving of a three.js example.

Also, (self) shadows do not appear to render correctly in my testing.
