WebGPURenderer: New Cache System #25750
Conversation
Great PR! I'll review this on Monday!

Some important things:
The PR looks great so far! The only thing I have noticed is that some node-specific logic/semantics find their way into the renderer.
The original idea was to make the renderer material-agnostic (as far as possible) so it does not rely on a specific material system. But now there are things like getEnvironmentNode() or getFogNode() (so node syntax) inside renderer classes.
Would it be possible to reduce the node-related logic inside WebGPURenderer to a minimum?
The idea would be to define more general interfaces in the renderer (e.g. getEnvironment()) and then inject a material-specific object (e.g. the nodes object that we already use elsewhere) that provides the implementation. In our case it is node-specific, but in theory it could be something different, too.
The overall design goal is to avoid overly tight coupling of the renderer to the node material.
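A minimal sketch of this injection idea (the `Renderer` shape and the `getEnvironment()` delegation below are illustrative stand-ins, not the actual three.js API):

```javascript
// Hypothetical sketch: the renderer only knows a generic, injected "library".
class Renderer {

	constructor( library ) {

		// the library provides material-specific lookups; the renderer stays agnostic
		this.library = library;

	}

	getEnvironment( scene ) {

		// delegate instead of calling node-specific helpers directly
		return this.library.getEnvironment( scene );

	}

}

// A node-based implementation injected from the outside.
// All node-specific semantics live here, not in the renderer.
const nodeLibrary = {

	getEnvironment( scene ) {

		return scene.environmentNode || null;

	}

};

const renderer = new Renderer( nodeLibrary );
```

With this shape, swapping the node system for a different material system only means injecting a different library object.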
I think it ideally should be something like this:

- WebGLRenderer / WebGPURenderer / SoftwareRenderer / another renderer: doesn't know about nodes, takes shaders written in GLSL/WGSL/JS.
- NodeRenderer: a boilerplate "renderer" that knows how to work with nodes.
- WebGLNodeBuilder / WebGPUNodeBuilder / JSNodeBuilder: converts a node material to a GLSL/WGSL/JS shader.
- NodeMaterial system.
- Nodes system.

So here getEnvironmentNode and getFogNode could go exactly in the NodeRenderer.
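The layering above could be sketched roughly like this (all class names and method shapes here are illustrative stand-ins, not real three.js classes):

```javascript
// Lowest layer: a renderer that only consumes plain shader source
// and knows nothing about nodes.
class BaseRenderer {

	render( shaderSource ) {

		return `rendered with: ${ shaderSource }`;

	}

}

// A builder converts a node material into a target shading language.
class WGSLNodeBuilder {

	build( nodeMaterial ) {

		return `/* WGSL */ fn main() { /* ${ nodeMaterial.name } */ }`;

	}

}

// NodeRenderer glues the two layers: it knows how to work with nodes,
// while the base renderer only ever sees finished shader source.
class NodeRenderer {

	constructor( renderer, builder ) {

		this.renderer = renderer;
		this.builder = builder;

	}

	render( nodeMaterial ) {

		const source = this.builder.build( nodeMaterial );
		return this.renderer.render( source );

	}

}
```

Helpers like getEnvironmentNode()/getFogNode() would then live in the NodeRenderer layer, keeping the base renderer material-agnostic.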
Current goals are these:

- `WebGPURenderer` should be able to replace `WebGLRenderer` (except `ShaderMaterial`).
- `WebGPURenderer` should have a `WebGPUBackend` and a `WebGLBackend`.
- This summer, the `<model-viewer>` project has to be able to replace `WebGLRenderer` with `WebGPURenderer` easily so it can use WebGPU when available.
- After replacing the renderer, `<model-viewer>`'s file size should stay relatively the same.
We should be careful with any additional work on the nodes system that gets in the way of achieving these goals.
Why do we need a WebGL backend for the WebGPU renderer? Why can't we make WebGLRenderer and WebGPURenderer (almost) completely interchangeable?
Because we can't ask developers to use two renderers in their projects and do the feature detection themselves. That would also result in 2× bigger build sizes.
The original goal of the nodes code was to solve the tree-shaking problem of the WebGLRenderer by replacing the non-tree-shakeable ShaderChunks with the tree-shakeable NodeMaterial.
The fact that NodeMaterial is able to produce GLSL and WGSL makes it much easier to have the renderer support both back-ends.
And developers should not have to know any of these details.
It seems to me like the best plan is to say that... as long as their project doesn't use ShaderMaterial... we can promise them that they can start using WebGPURenderer in their projects if they want to get WebGPU performance benefits when available, yet the renderer will take care of the backwards compatibility for them when not available.
So, the community's developer experience I'm aiming for is this:
- Same scene graph API.
- The only thing that changes is the renderer.
- Their projects will automatically support WebGL and WebGPU.
- If `NodeMaterial` is able to keep its promise, their bundle sizes should be smaller.
We may need to create a separate npm package though... three-webgpu or something.
However, depending on WebGPU support, we could then remove the WebGLBackend at some point in the future.
Just like how WebGLRenderer now uses WebGL 2 by default yet is still able to fall back to WebGL 1 when WebGL 2 is not available, to make developers' lives easier (instead of making them use a WebGLRenderer and a WebGL2Renderer).
I think we could consider that possibility?

```js
const backend = WebGPU.isAvailable() ? new WebGPUBackend() : new WebGLBackend();
const renderer = new THREE.Renderer( backend );
```
Perhaps if the backend was compatible with raytracing as well, like:

```js
const renderer = new THREE.Renderer( new WebGPURaytracingBackend() );
```

But I don't know if it would be possible, just a thought.
The API and its usage stay the same. The only thing that changes is the name of the renderer's class.
> const backend = WebGPU.isAvailable() ? new WebGPUBackend() : new WebGLBackend();
> const renderer = new THREE.Renderer( backend );

That's acceptable, I think -- but `const renderer = WebGPU.isAvailable() ? new WebGPURenderer() : new WebGLNodeRenderer()` is semantically the same but a little bit simpler. But I don't oppose this approach.
That's basically the same thing WebGPURenderer will do internally:

```js
const backend = WebGPU.isAvailable() ? new WebGPUBackend() : new WebGLBackend();
```

I think it gets more organized too. Updated.
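Internally, that could look something like the following sketch (the availability check and the backend classes are stand-ins for illustration, not the real three.js implementation):

```javascript
// Hypothetical stand-in for the WebGPU capability helper.
const WebGPU = {

	isAvailable() {

		// a real check would inspect navigator.gpu; stubbed here for the sketch
		return typeof navigator !== 'undefined' && 'gpu' in navigator;

	}

};

class WebGPUBackend { get name() { return 'webgpu'; } }
class WebGLBackend { get name() { return 'webgl'; } }

class WebGPURenderer {

	constructor() {

		// the renderer picks its backend itself, so developers
		// never have to do the feature detection in their own code
		this.backend = WebGPU.isAvailable() ? new WebGPUBackend() : new WebGLBackend();

	}

}
```

From the developer's point of view, `new WebGPURenderer()` is all that is needed; the fallback to WebGL happens inside the constructor.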
Now we have

@sunag That sounds like a good compromise!

@Mugen87 Thanks :)
Description

One of the main problems of `WebGPURenderer` today is the caching system. I recreated the cache system around a `WebGPUWeakMap` where the keys are derived from an array that defines the unique flow of the shader.

`WebGPURenderObject` is the class that determines all possible variations of bindings, pipeline and nodes in relation to the object; it also allows obtaining a cache key for revalidation if necessary.

- Moves geometry updating from `WebGLObjects` to `WebGPUGeometries`.
- `scene.overrideMaterial` is now available for use.

Demos
Demo 1
https://raw.githack.com/sunag/three.js/dev-webgpu-cache-system-demo/examples/demo1.html
Demo 2
https://raw.githack.com/sunag/three.js/dev-webgpu-cache-system-demo/examples/demo1.html
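The key-array idea behind `WebGPUWeakMap` from the description can be sketched as a chained `WeakMap`, where each element of the key array indexes one level of nested `WeakMap`s. This is a simplified illustration under that assumption, not the actual three.js implementation, and the class name `ChainMap` is a stand-in:

```javascript
// A cache keyed by an *array* of objects (e.g. [ object, material, scene ]).
// Each array element indexes one level of nested WeakMaps, so entries are
// garbage-collected automatically when any key object is dropped.
class ChainMap {

	constructor() {

		this.weakMap = new WeakMap();

	}

	get( keys ) {

		let map = this.weakMap;

		for ( const key of keys ) {

			map = map.get( key );
			if ( map === undefined ) return undefined;

		}

		return map;

	}

	set( keys, value ) {

		let map = this.weakMap;

		// walk/create intermediate levels for all keys but the last
		for ( let i = 0; i < keys.length - 1; i ++ ) {

			let next = map.get( keys[ i ] );

			if ( next === undefined ) {

				next = new WeakMap();
				map.set( keys[ i ], next );

			}

			map = next;

		}

		// the final key maps to the cached value itself
		map.set( keys[ keys.length - 1 ], value );

	}

}
```

A render object's pipeline could then be cached under `[ object, material ]` and looked up again with the same key array, with eviction handled by the garbage collector rather than manual invalidation.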