Releases: orbital-materials/orb-models
v0.6.2
v0.6.1
What's Changed
- Enable stress for orb-v3-conservative-omol by @timduignan in #150
- Added `dftd3_parameters.pt` to the package by @vsimkus in #151
Full Changelog: v0.6.0...v0.6.1
v0.6.0
What's Changed
- Use sensible defaults for charge and spin in readme by @benrhodes26 in #121
- Orbmol fixes clean by @timduignan in #122
- Use pypi instead of nvidia repo for cuml by @vsimkus in #129
- Highlight in MODELS.md that OrbMol models are for non-periodic systems only by @vsimkus in #126
- Enhanced finetune.py with custom loss weights and reference energies by @timduignan in #133
- More info in Error message ChargeSpinConditioner by @thomasloux in #141
- Make `wandb` optional in finetune.py script by @vsimkus in #140
- Alchemiops KNN+D3, TorchSim, Refactor by @vsimkus in #143
- Fix custom ref energies shape in finetune.py by @vsimkus in #146
- Remove StateDict as potential ModelInterface input torchsim. by @CompRhys in #147
New Contributors
- @thomasloux made their first contribution in #141
- @CompRhys made their first contribution in #147
Full Changelog: v0.5.5...v0.6.0
v0.5.5
What's Changed
- Update NaCl examples to use orb-v3 and streamline dependencies by @timduignan in #106
- Fix conservative model loading for finetuning by @vsimkus in #112
- docs: Update README.md by @evansdoe in #100
- Add `r_max` as argument with a default of 6 Å by @ameya98 in #115
- Fix aggregate nodes when last batch in a graph has no edges/nodes by @vsimkus in #119
- Omol models by @benrhodes26 in #120
Full Changelog: 0.5.4...v0.5.5
v0.5.4
v0.5.3
v0.5.2
v0.5.1
What's Changed
- Update readme with paper links by @benrhodes26 in #77
- Pin `dm-tree==0.1.8` due to macOS compilation issues by @vsimkus in #78
- Featurization fixes and a speed benchmarking script by @benrhodes26 in #81
Full Changelog: v0.5.0...v0.5.1
v0.5.0
April 2025: We have released the Orb-v3 set of potentials. These models improve substantially over Orb-v2, in particular:
- Model compilation using PyTorch 2.6.0+, enabling faster inference while maintaining support for dynamic graph sizes
- Wider architecture (1024 vs 512) with fewer layers (5 vs 15) compared to v2, resulting in 2-3x faster performance with a similar parameter count
- Two variants: direct models, and conservative models (forces/stress computed via backpropagation)
- Trained on the larger, more diverse OMat24 dataset
- Improved edge embeddings using Bessel-spherical harmonic outer products (8 Bessel bases, Lmax=3)
- Enhanced stability through a Huber loss and a ZBL pair repulsion term added to the forces
- Models available in both unlimited-neighbor and 20-neighbor-maximum configurations
- A new confidence head providing intrinsic uncertainty estimates for predictions
v0.4.2
What's Changed
- added jupyter notebook and examples for sagemaker by @Arthurhussey in #45
- update notebook by @Arthurhussey in #46
- Replace `torch.cuda.amp.autocast` -> `torch.autocast` by @BenedictIrwin in #43
- added eol announcement by @Arthurhussey in #47
- fixed marketplace product id by @Arthurhussey in #48
- fix: mark the return type as `GraphRegressor` instead of `torch.nn.Module` by @caic99 in #50
- Feature/md tutorial by @timduignan in #49
- Colab for MD by @timduignan in #51
- correct ordering of properties by @DeNeutoy in #55
- added azure model card and examples by @Arthurhussey in #56
New Contributors
- @BenedictIrwin made their first contribution in #43
- @caic99 made their first contribution in #50
- @timduignan made their first contribution in #49
Full Changelog: v0.4.1...v0.4.2