Description
I'm unable to read RenderTargets using the WebGPU backend correctly, regardless of which color space is used during creation.
I have tried multiple things; what I've gathered so far is the following:
- None of the color spaces work when rendering alongside an animation loop: `NoColorSpace`, `LinearSRGBColorSpace`, `SRGBColorSpace` (see RenderTarget + WebGPU backend: Setting color space to SRGBColorSpace in constructor breaks "readRenderTargetPixelsAsync" #31654)
- The only case where I have managed to read the data is when setting the RenderTarget on the renderer before any call to `render`/`renderAsync`, then calling the function once... Even then, I cannot reproduce this reliably (see the sketch below)
- RenderTarget pixels can only be properly read using WebGL; might this be due to `preserveDrawingBuffer` not existing for the `WebGPURenderer`?
- In the case of WebGL, the scene background color is ignored while rendering to the RenderTarget (see below)
Side note: my code might not be correct; in that case, how can I read this data correctly regardless of the backend?
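For reference, this is a rough sketch of the only ordering that has (occasionally) produced readable pixels for me: set the RenderTarget before the very first render call, render once with renderAsync, read, and only then start the animation loop. It assumes renderer, scene and camera are set up as in the full code below.
const w = renderer.domElement.width;
const h = renderer.domElement.height;
const rt = new THREE.RenderTarget(w, h, { colorSpace: THREE.SRGBColorSpace });
await renderer.init(); // make sure the WebGPU backend is ready
renderer.setRenderTarget(rt); // set the target BEFORE any render/renderAsync call
await renderer.renderAsync(scene, camera); // render once into the RenderTarget
const pixels = await renderer.readRenderTargetPixelsAsync(rt, 0, 0, w, h);
renderer.setRenderTarget(null);
// only start the animation loop after the read has completed
renderer.setAnimationLoop(() => renderer.render(scene, camera));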
Reproduction steps
(note: check the live example for a complete setup)
- Create a scene with a basic mesh, using a WebGPURenderer with the WebGPU backend
- Create a RenderTarget with any color space, and render to it
- Try to read pixels using readRenderTargetPixelsAsync
Code
import * as THREE from "three";
import {
  LinearSRGBColorSpace,
  MeshStandardNodeMaterial,
  WebGPURenderer,
} from "three/build/three.webgpu";
import { Fn, positionLocal, vec4 } from "three/src/nodes/TSL";

let screenshotted = false;

// IIFE
(async () => {
  const camera = new THREE.PerspectiveCamera(
    70,
    window.innerWidth / window.innerHeight,
    0.1,
    100
  );
  camera.position.z = 2;

  const scene = new THREE.Scene();
  scene.background = new THREE.Color("#111");

  const light = new THREE.PointLight();
  light.position.set(0, 0, 5);
  light.intensity = 100;

  const mesh = new THREE.Mesh(
    new THREE.SphereGeometry(),
    new MeshStandardNodeMaterial({
      colorNode: Fn(() => vec4(positionLocal, 1.0))(),
    })
  );

  scene.add(mesh);
  scene.add(light);

  const renderer = new WebGPURenderer({
    forceWebGL: false, // setting to true restores rendering
  });
  renderer.setPixelRatio(window.devicePixelRatio);
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.setAnimationLoop(() => renderer.render(scene, camera));
  document.body.appendChild(renderer.domElement);

  window.addEventListener("keypress", async () => {
    if (screenshotted) return;
    screenshotted = true;
    await screenshot(renderer, scene, camera);
  });
})();
// ----------------------- START test code -----------------------
async function screenshot(
  renderer: WebGPURenderer,
  scene: THREE.Scene,
  camera: THREE.PerspectiveCamera
) {
  const canvas = renderer.domElement;
  const w = canvas.width,
    h = canvas.height;

  const screenshotRTT = new THREE.RenderTarget(w, h, {
    colorSpace: THREE.SRGBColorSpace, // change this to see differences
  });

  renderer.setRenderTarget(screenshotRTT);
  renderer.render(scene, camera);
  renderer.setRenderTarget(null);

  const rawBuffer = new Uint8Array(w * h * 4);
  rawBuffer.set(
    await renderer.readRenderTargetPixelsAsync(screenshotRTT, 0, 0, w, h)
  );

  // pass the RenderTarget dimensions so the output canvas matches the buffer
  // (canvas.width/height include the device pixel ratio, window.innerWidth does not)
  pixelDataToCanvas(rawBuffer, w, h);
  screenshotRTT.dispose();
}
function pixelDataToCanvas(data: Uint8Array, w: number, h: number) {
  const canvas = document.createElement("canvas");
  canvas.width = w;
  canvas.height = h;
  const ctx = canvas.getContext("2d")!;
  const imgData = ctx.createImageData(w, h);
  imgData.data.set(data);
  ctx.putImageData(imgData, 0, 0);
  document.body.appendChild(canvas);
}
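// Optional helper (untested sketch, not strictly part of the repro): the WebGL path
// returns pixel rows bottom-up (gl.readPixels origin is the lower-left corner), while
// putImageData expects top-down rows. If the two backends look vertically mirrored,
// flipping the rows before pixelDataToCanvas may help when comparing outputs:
function flipRowsVertically(data: Uint8Array, w: number, h: number): Uint8Array {
  const flipped = new Uint8Array(data.length);
  const rowBytes = w * 4;
  for (let y = 0; y < h; y++) {
    // copy row y of the source into row (h - 1 - y) of the destination
    flipped.set(data.subarray(y * rowBytes, (y + 1) * rowBytes), (h - 1 - y) * rowBytes);
  }
  return flipped;
}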
// ------------------------ END test code ------------------------

Live example
- CodeSandbox: https://codesandbox.io/p/sandbox/yhwl56 (note: press a key when focused on the window to trigger the RenderTarget rendering code)
Screenshots
No response
Version
r179
Device
Desktop
Browser
Chrome
OS
Windows