Replies: 1 comment
- This isn't LeelaChessZero: NNUE is fast enough, ONNX models would not be very beneficial, and they would clutter the release binaries.
- NNUE seems to be a standard unique to chess engines, so it might make sense to have a way of porting ONNX models to NNUE, and vice versa. That would make it much easier to incorporate advances in deep learning, which are usually developed in TensorFlow or PyTorch, via ONNX, a format widely adopted by most deep learning frameworks. It's a nice-to-have that could speed up the search for better deep learning models for the engine (a sketch of the export side is below).
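For context, the ONNX half of such a bridge is already easy on the training side. The sketch below is my own illustration, not anything in this repository: it exports a toy PyTorch evaluation network to ONNX with `torch.onnx.export`. The network shape, layer names, and output file name are assumptions, and a real NNUE net differs substantially (sparse HalfKP-style inputs, clipped ReLU, quantized weights).

```python
# Minimal sketch (hypothetical, not part of the engine): export a small
# PyTorch evaluation network to ONNX so external tooling could inspect
# or convert it.
import torch
import torch.nn as nn


class TinyEvalNet(nn.Module):
    """Toy dense eval net: 768 piece-square features -> scalar score."""

    def __init__(self) -> None:
        super().__init__()
        self.ft = nn.Linear(768, 256)   # stand-in for the feature transformer
        self.hidden = nn.Linear(256, 32)
        self.out = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # NNUE uses clipped ReLU (clamp to [0, 1]); torch.clamp mimics that.
        x = torch.clamp(self.ft(x), 0.0, 1.0)
        x = torch.clamp(self.hidden(x), 0.0, 1.0)
        return self.out(x)


if __name__ == "__main__":
    model = TinyEvalNet().eval()
    dummy = torch.zeros(1, 768)          # one board position, dense encoding
    torch.onnx.export(
        model,
        dummy,
        "tiny_eval.onnx",                # hypothetical output path
        input_names=["features"],
        output_names=["score"],
        dynamic_axes={"features": {0: "batch"}, "score": {0: "batch"}},
        opset_version=17,
    )
    # Turning tiny_eval.onnx into the engine's .nnue format would still need
    # a separate, engine-specific quantization and serialization step.
```

The reverse direction is the hard part the reply alludes to: NNUE's quantized, incrementally updated feature transformer does not map cleanly onto a generic ONNX graph, so a converter would be more than a file-format change.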