quantized-containerized-models is a collection of experiments and best practices for deploying optimized AI models in efficient, containerized environments. The goal is to showcase how modern techniques such as quantization, containerization, and continuous integration/deployment (CI/CD) can work together to deliver fast, lightweight, and production-ready model deployments.
- Quantization – Reduce model size and accelerate inference using techniques like nf4, int8, and sparsity.
- Containerization – Package models with Cog, ensuring reproducible builds and smooth deployments.
- CI/CD Integration – Automated pipelines for linting, testing, building, and deploying directly to Replicate.
- Deployment Tracking – Status page for visibility into workflow health and deployment status. (TODO)
- Open Source – Fully licensed under Apache 2.0.
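As a rough illustration of how the quantization and containerization pieces fit together, a Cog predictor typically loads quantized weights once in `setup()` and serves requests in `predict()`. The sketch below is illustrative only; the model id, input parameter, and generation settings are placeholders rather than code from this repo:

```python
import torch
from cog import BasePredictor, Input
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "example-org/example-model"  # placeholder, not a model from this repo


class Predictor(BasePredictor):
    def setup(self):
        # Runs once per container start: load the model in nf4 to keep VRAM low.
        bnb = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_quant_type="nf4",
            bnb_4bit_compute_dtype=torch.bfloat16,
        )
        self.tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
        self.model = AutoModelForCausalLM.from_pretrained(
            MODEL_ID, quantization_config=bnb, device_map="auto"
        )

    def predict(self, prompt: str = Input(description="Prompt text")) -> str:
        # Runs per request inside the container.
        inputs = self.tokenizer(prompt, return_tensors="pt").to(self.model.device)
        output = self.model.generate(**inputs, max_new_tokens=256)
        return self.tokenizer.decode(output[0], skip_special_tokens=True)
```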
 
- flux-fast-lora-hotswap: Built on the LoRA fast blog post, this deployment uses flux.1-dev models with two LoRAs that can be hot-swapped to reduce generation time and avoid graph breaks.
  - Optimized with nf4 quantization and torch.compile for speedups.
  - Includes an Img2Img variant.
  - Featured in the official Hugging Face blog post.
  - Source code.
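A minimal sketch of the hot-swap pattern, assuming a recent diffusers release that exposes `enable_lora_hotswap` and the `hotswap=` argument on `load_lora_weights`; the LoRA repository ids are hypothetical and nf4 quantization is omitted for brevity:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Reserve LoRA capacity up front so swapping adapters later does not
# invalidate the compiled graph (API availability is an assumption).
pipe.enable_lora_hotswap(target_rank=128)
pipe.load_lora_weights("some-org/lora-one", adapter_name="default")  # hypothetical id
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)

first = pipe("a watercolor fox", num_inference_steps=28).images[0]

# Swap in the second LoRA without reloading the pipeline or recompiling.
pipe.load_lora_weights("some-org/lora-two", adapter_name="default", hotswap=True)  # hypothetical id
second = pipe("a watercolor owl", num_inference_steps=28).images[0]
```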
 
- smollm3-3b-smashed: Uses Pruna to quantize and torch.compile the smollm3-3b model, enabling lower VRAM usage and faster generation.
  - Supports 16k token context windows and hybrid reasoning.
  - Source code.
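The Pruna workflow looks roughly like the following; the model id, the exact SmashConfig keys, and the choice of quantizer are assumptions based on Pruna's documented smash/SmashConfig pattern and may differ from what this deployment actually uses:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from pruna import SmashConfig, smash

model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Configure Pruna: quantize the weights and compile the forward pass.
# The key names and values here are assumptions, not the repo's exact config.
smash_config = SmashConfig()
smash_config["quantizer"] = "hqq"
smash_config["compiler"] = "torch_compile"

smashed_model = smash(model=model, smash_config=smash_config)
```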
 
- phi-4-reasoning-plus-unsloth: Accelerates Microsoft’s Phi-4 reasoning model with Unsloth, achieving faster inference and a smaller memory footprint.
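A short sketch of what loading through Unsloth typically looks like; the model id, sequence length, and 4-bit setting are illustrative assumptions rather than this deployment's exact configuration:

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="microsoft/Phi-4-reasoning-plus",  # assumed model id
    max_seq_length=8192,
    load_in_4bit=True,  # 4-bit weights shrink the memory footprint
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Explain why the sky appears blue.", return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```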
- gemma3-torchao-quant-sparse: Improves inference performance for Gemma-3-4B-IT using torchao int8 quantization combined with sparsity techniques such as granular and magnitude pruning.
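One way to combine the two steps, sketched with torchao's weight-only int8 API and PyTorch's built-in magnitude pruning; the model class, pruning ratio, and layer selection are illustrative assumptions:

```python
import torch
import torch.nn.utils.prune as prune
from transformers import AutoModelForCausalLM
from torchao.quantization import quantize_, int8_weight_only

# Loading the multimodal Gemma 3 checkpoint via AutoModelForCausalLM is an
# assumption; the repo may use a different class or a text-only variant.
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-4b-it", torch_dtype=torch.bfloat16, device_map="cuda"
)

# Magnitude pruning: zero the smallest 30% of weights in each linear layer
# (the 30% ratio is illustrative, not taken from this repo).
for module in model.modules():
    if isinstance(module, torch.nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the sparsity mask into the weights

# Weight-only int8 quantization via torchao.
quantize_(model, int8_weight_only())
```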
 
This project is licensed under the Apache License 2.0.