Releases: Boulaouaney/timmx

v0.2.1

26 Feb 05:11

What's Changed

  • feat: add --source torch-export option to CoreML backend by @Boulaouaney in #9

Full Changelog: v0.2.0...v0.2.1

v0.2.0

24 Feb 18:50

What's Changed

  • feat: add onnxslim optimization to ONNX export by @Boulaouaney in #8

Full Changelog: v0.1.0...v0.2.0

v0.1.0

24 Feb 17:49

Initial release of timmx — an extensible CLI and Python package for exporting timm models to deployment formats.

Backends

  • ONNX — dynamo-based export with fallback, dynamic batch, opset selection
  • Core ML — mlprogram/neuralnetwork, dynamic batch, compute precision control
  • LiteRT / TFLite — fp32, fp16, dynamic-int8, int8 quantization with calibration data, NHWC input layout
  • ncnn — export via pnnx with automatic cleanup of intermediate files
  • TensorRT — fp32, fp16, int8 with calibration, dynamic batch, requires CUDA
  • ExecuTorch — XNNPACK and Core ML delegates, fp32 and int8 (PT2E quantization), dynamic batch
  • torch.export — standard PyTorch export with dynamic shapes
  • TorchScript — trace and script modes

Usage

pip install timmx
pip install 'timmx[onnx]'  # install backend extras as needed

timmx export onnx resnet18 --pretrained --output model.onnx
timmx doctor  # check which backends are available

Requirements

  • Python >= 3.11
  • PyTorch >= 2.5.0