
Conversation

@marcofugaro
Contributor

Related issue: Fixes #21492

Description

We did this at Spline and wanted to give back to three.js as well.

You can try the Draco option here: enable the checkbox and click "Export WaltHead".
https://raw.githack.com/marcofugaro/three.js/gltf-draco-exporter/examples/misc_exporter_gltf.html

[Screenshot: the misc_exporter_gltf example with the new Draco export checkbox enabled]

Currently it uses the existing draco_encoder.js. In a future PR, I'll add the option to use draco_encoder.wasm, which is way faster!

/cc @donmccurdy

This contribution is funded by Spline.

@donmccurdy
Collaborator

Thanks @marcofugaro!

Draco compression will change the vertex count and order when used with the default "edgebreaker" compression method. Switching to "sequential" encoding will prevent reordering, and usually (but not always, unfortunately) prevent changing the number of vertices. Because of this, the accessor.count property must be updated after compression to the new value. Moreover, geometry with morph targets must be compressed with sequential encoding, or else do some very elaborate workarounds, since the number of compressed vertices cannot differ from the number of uncompressed vertices in the morph attributes.
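To make that bookkeeping concrete, here is a minimal sketch of what the count update could look like. This is a hypothetical helper operating on the raw glTF JSON, not the exporter's actual code, and it assumes the encoder reports the new point and face counts:

```javascript
// Hypothetical sketch: after Draco-encoding a primitive, every accessor it
// references must report the encoder's new vertex/index counts, not the
// pre-compression ones.
function patchPrimitiveCounts( json, primitive, encodedPoints, encodedFaces ) {

  for ( const accessorIndex of Object.values( primitive.attributes ) ) {

    json.accessors[ accessorIndex ].count = encodedPoints;

  }

  if ( primitive.indices !== undefined ) {

    // Triangle primitives: three indices per encoded face.
    json.accessors[ primitive.indices ].count = encodedFaces * 3;

  }

}
```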

It would be good to test this on meshes that contain...

  • indices
  • no indices
  • morph targets

This can get a bit complicated, unfortunately. I think I've come to the conclusion that DCC tools should in general not implement optimization steps themselves, but should instead rely on dedicated glTF -> glTF optimization tools.

> Currently it uses the existing draco_encoder.js. In a future PR, I'll add the option to use draco_encoder.wasm, which is way faster!

IE11 is at end-of-life soon, so perhaps we should support only the WASM version for new features. I think the JS --> WASM fallback we do in DRACOLoader is unnecessary complexity at this point.

@marcofugaro
Contributor Author

> the accessor.count property must be updated after compression to the new value.

Any idea how to get that new value? I was using encoder.GetNumberOfEncodedPoints() and encoder.GetNumberOfEncodedFaces(), but of course those don't work for grouped geometries.

They would return the total across all primitives when calling dracoExporter.parse( mesh ) if the geometry had groups. However, I need the count for each one, since grouped geometries are saved as multiple primitives, correct?
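For context, the correspondence works like this: each entry in geometry.groups becomes one glTF primitive, i.e. a slice of the index buffer. A stand-alone sketch with plain arrays and a hypothetical helper name (not the exporter's actual code):

```javascript
// Hypothetical sketch: each group [ start, start + count ) of the index
// buffer becomes its own glTF primitive, so each slice needs its own
// Draco-encoded vertex/face count.
function splitIndexByGroups( index, groups ) {

  return groups.map( ( group ) =>
    index.slice( group.start, group.start + group.count )
  );

}
```

Each slice would then have to be encoded on its own, so that the encoder reports one count per primitive.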

> This can get a bit complicated, unfortunately. I think I've come to the conclusion that DCC tools should in general not implement optimization steps themselves, but should instead rely on dedicated glTF -> glTF optimization tools.

Are you suggesting that we use your library gltf-transform (WebIO) in GLTFExporter? Fine by me, what does @mrdoob think?

> IE11 is at end-of-life soon, so perhaps we should support only the WASM version for new features. I think the JS --> WASM fallback we do in DRACOLoader is unnecessary complexity at this point.

I agree.

@donmccurdy
Collaborator

donmccurdy commented Aug 12, 2021

> However I need the total for each one since grouped geometries are saved as multiple primitives, correct?

That sounds right; I don't think the Draco encoder supports this use case, and it won't tell you what the new indices for each primitive should be. So we would have to encode each primitive separately instead, possibly duplicating some vertices.

So this is possible; it is just complicated. I am not doing much work on GLTFExporter these days, so I don't want to say we "can't" do it this way if interested contributors like you would like to, but it seems like just supporting lossless export is a hard enough problem (see #22165, #22163, #21538, #20474), and we would make it harder for ourselves (i.e. more complicated primitive processing) by taking on extra optimization work that could be done in a separate processing stage.
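A minimal sketch of what "encoding each primitive separately" implies (hypothetical helper, plain flat arrays instead of BufferAttributes): the group's global indices are remapped to a self-contained local vertex list, and any vertex shared between groups ends up duplicated across primitives:

```javascript
// Hypothetical sketch: rebuild a self-contained vertex list for one primitive.
// `positions` is a flat [ x, y, z, ... ] array; vertices referenced by several
// groups are copied into each primitive that uses them.
function compactPrimitive( indices, positions ) {

  const remap = new Map(); // old global vertex index -> new local index
  const localIndices = [];
  const localPositions = [];

  for ( const i of indices ) {

    if ( ! remap.has( i ) ) {

      remap.set( i, remap.size );
      localPositions.push( ...positions.slice( i * 3, i * 3 + 3 ) );

    }

    localIndices.push( remap.get( i ) );

  }

  return { indices: localIndices, positions: localPositions };

}
```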

By contrast, you can add Draco compression (or quantization, or Meshopt compression soon!) with glTF-Transform pretty easily, even without adding it to three.js. For example:

import { WebIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS, DracoMeshCompression } from '@gltf-transform/extensions';

const io = new WebIO()
  .registerExtensions( KHRONOS_EXTENSIONS )
  .registerDependencies( {
    'draco3d.encoder': await new DracoEncoderModule(),
    'draco3d.decoder': await new DracoDecoderModule(),
  } );

new GLTFExporter().parse( scene, function ( glb ) {

  const document = io.readBinary( glb );

  document.createExtension( DracoMeshCompression )
    .setRequired( true )
    .setEncoderOptions( {
      method: DracoMeshCompression.EncoderMethod.EDGEBREAKER
    } );

  glb = io.writeBinary( document );

}, { binary: true } );

Related: https://stackoverflow.com/a/66979159/1314762

/cc @takahirox any preference?

@marcofugaro
Contributor Author

> By contrast, you can add Draco compression (or quantization, or Meshopt compression soon!) with glTF-Transform pretty easily, even without adding it to three.js. For example:

So what about people that are requesting it in #21492? Should we tell them to use gltf-transform after the GLTFExporter step?

@takahirox
Collaborator

takahirox commented Aug 12, 2021

> /cc @takahirox any preference?

Personally I prefer to keep the exporter as simple as possible and to optimize assets with external post-processing tools like glTF-Transform.

If you want GLTFExporter to support Draco compression, you may also try the GLTFExporter extensibility mechanism (sorry, no documentation yet). But the extensibility API is not mature yet, so I'm not sure it can accommodate Draco compression support. If you try it and face any problems with the API, please give us feedback.

const exporter = new GLTFExporter()
  .register( writer => new YourDracoCompressionPlugin( writer ) );
exporter.parse( scene, ... );

The plugin should be reusable, like the ones in https://github.com/takahirox/three-gltf-extensions

@donmccurdy
Collaborator

@marcofugaro sorry for the obstacles here, especially after #21492 had been open a while. I was not originally sure what the complexity would be of getting this into GLTFExporter, and there are some other challenges (see google/draco#713) that are a bit tricky.

I'll leave a comment on #21492 about the use of GLTFExporter with glTF-Transform, I think that is likely to be a more maintainable solution long term.

@marcofugaro
Contributor Author

Thanks for the explanation; we'll rely on your library for our Draco compression. Please keep maintaining it!

On a side note, I also noticed the issue shown in google/draco#713 (comment) on some of my models in some personal projects, glad to know there is a solution!

@forerunrun

forerunrun commented May 7, 2023

Hey, I'm trying to use the following method as quoted from above, but it returns the error "Error: Method requires Uint8Array parameter; received "object"." at the line const document = io.readBinary( glb ). I'm using the latest npm version of three, and I can't seem to find an updated example demonstrating this GLTFExporter workflow. Does anyone have a little guidance on how this would be done?

EDIT: I realised the second (error callback) argument was missing from GLTFExporter().parse. That now returns an ArrayBuffer from which I can extract the Uint8Array. However, going on to export the model with

const optimizedGLB = await io.writeBinary( document );
exportGLTF( optimizedGLB );

exports a 1 kB glb file with no model data inside. Would anyone happen to know why?

import { WebIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS, DracoMeshCompression } from '@gltf-transform/extensions';

const io = new WebIO()
  .registerExtensions( KHRONOS_EXTENSIONS )
  .registerDependencies( {
    'draco3d.encoder': await new DracoEncoderModule(),
    'draco3d.decoder': await new DracoDecoderModule(),
  } );

new GLTFExporter().parse( scene, function ( glb ) {

  const document = io.readBinary( glb );

  document.createExtension( DracoMeshCompression )
    .setRequired( true )
    .setEncoderOptions( {
      method: DracoMeshCompression.EncoderMethod.EDGEBREAKER
    } );

  glb = io.writeBinary( document );

}, { binary: true } );

@donmccurdy
Collaborator

@forerunrun see https://gltf-transform.donmccurdy.com/classes/core.webio.html – the glTF Transform library has been updated since this comment was written. readBinary() accepts a Uint8Array and returns a Promise<Document>. writeBinary(document) returns a Promise<Uint8Array>, so you'd need to await the result before writing.
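The empty ~1 kB file above is the typical symptom of a missing await: the download ends up built from a pending Promise instead of the bytes. A stand-alone illustration, where writeBinary is a hypothetical stand-in for the library call, not the real implementation:

```javascript
// Hypothetical stand-in for io.writeBinary(): async, resolves to the bytes.
async function writeBinary( doc ) {

  return new Uint8Array( [ 0x67, 0x6c, 0x54, 0x46 ] ); // 'glTF' magic

}

// Missing await: `wrong` is a pending Promise, and serializing it for
// download produces a tiny file with no model data.
const wrong = writeBinary( {} );

// Correct: wait for the bytes before creating the Blob / object URL.
writeBinary( {} ).then( ( bytes ) => {

  // build the Blob and URL.createObjectURL( blob ) from `bytes` here

} );
```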

@forerunrun

forerunrun commented May 7, 2023

Thanks @donmccurdy, perfect. Yes, I managed to get it working after a bit of a struggle with outdated comments. Before, I was running the transform with await io.writeBinary( gltfScene ) and then trying to use GLTFExporter to re-parse the result, which was what I was doing wrong. Creating a new Blob from the result, then an object URL with URL.createObjectURL( blob ), and downloading that object URL gives the correct resulting file output!



Development

Successfully merging this pull request may close these issues.

GLTFExporter: Add option for draco compression
