Production-ready Rust framework for autonomous robotic control with local AI models
Saorsa Robotics provides a comprehensive, safety-first framework for robotic control systems with Vision-Language-Action (VLA) models running entirely on local hardware. Built in Rust for memory safety, performance, and reliability.
- MolmoAct Integration: Action Reasoning Model with 3D spatial understanding and Chain-of-Thought planning
- Candle ML Framework: Lightweight Rust-native inference without Python dependencies
- OpenVLA Support: Compatible with cloud and local VLA models
- On-Device Learning: Continual improvement through OFT adapters and intervention learning
- Formal Constraint System: Expression-based DSL for defining safety boundaries
- Real-Time Monitoring: Watchdog systems with automatic intervention
- Zero Panic Guarantee: No `unwrap()`, `expect()`, or `panic!()` in production code
- Comprehensive Testing: 100% test coverage on safety-critical paths
- Multi-Modal Control: Voice commands, vision processing, and haptic feedback
- Stereo Vision: Depth perception with dual camera calibration
- CAN Bus Integration: Direct hardware control for motors and actuators
- Intent Parsing: Natural language to robot action conversion
- Zero-Copy Data Paths: Efficient memory usage
- Async/Await: Non-blocking I/O throughout
- Action Chunking: Smooth control despite network latency
- Real-Time Capable: Deterministic timing for critical operations
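The action-chunking idea above can be sketched as a small buffer: each inference returns a chunk of future actions, and the control loop drains the chunk at a fixed rate so motion stays smooth while the next inference is still in flight. This is a minimal illustration, not the framework's API; `ActionBuffer` and its fields are hypothetical names.

```rust
use std::collections::VecDeque;

/// Hypothetical action-chunk buffer: holds the actions predicted by the
/// last inference and signals when a new inference should be started.
struct ActionBuffer {
    chunk: VecDeque<[f64; 2]>, // e.g. (x, y) targets; dimension is illustrative
    refill_threshold: usize,
}

impl ActionBuffer {
    fn new(refill_threshold: usize) -> Self {
        Self { chunk: VecDeque::new(), refill_threshold }
    }

    /// True when the buffer is running low and the policy should be queried
    /// again, overlapping inference latency with playback of remaining actions.
    fn needs_refill(&self) -> bool {
        self.chunk.len() <= self.refill_threshold
    }

    /// Append a freshly predicted chunk of actions.
    fn push_chunk(&mut self, actions: &[[f64; 2]]) {
        self.chunk.extend(actions.iter().copied());
    }

    /// Pop the next action for this control tick; `None` on underrun,
    /// which a real controller would treat as "hold last pose".
    fn next_action(&mut self) -> Option<[f64; 2]> {
        self.chunk.pop_front()
    }
}
```

With a threshold of 2 and a 3-action chunk, the buffer asks for a refill after the first pop, so the next chunk can arrive before the current one is exhausted.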
```bash
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Clone repository
git clone https://github.com/dirvine/saorsa-robotics
cd saorsa-robotics

# Build all crates
cargo build --release

# Run all tests
cargo test --all

# Run with safety checks
cargo run --bin sr-cli -- --safety-enabled

# VLA Policy Demo
cargo run --example vla_policy_demo

# Wake Word Detection
cargo run --example wake_word_demo

# Safety Constraints Demo
cargo run --bin safety-demo
```

```text
saorsa-robotics/
├── apps/                   # Application binaries
│   ├── sr-cli/             # Main CLI interface
│   ├── brain-daemon/       # Central coordination daemon
│   ├── safety-demo/        # Safety system demonstration
│   └── kyutai-stt-app/     # Speech-to-text application
├── crates/                 # Core library crates
│   ├── vla-policy/         # Vision-Language-Action models
│   ├── safety-guard/       # Safety constraint engine
│   ├── voice-local/        # On-device voice processing
│   ├── vision-stereo/      # Stereo vision and depth
│   ├── intent-parser/      # NLU and command parsing
│   ├── can-transport/      # CAN bus communication
│   ├── device-registry/    # Hardware device management
│   └── continual-learning/ # Online learning framework
├── examples/               # Example applications
├── configs/                # Device and system configs
└── docs/                   # Technical documentation
```
Implements multiple Vision-Language-Action models for robot control:
```rust
use vla_policy::{create_policy, PolicyConfig, Observation};

// Create MolmoAct policy with 3D reasoning
let config = PolicyConfig {
    model_type: "molmoact".to_string(),
    model_path: "models/molmoact-7b".to_string(),
    // ... configuration
};

let policy = create_policy(config)?;
let action = policy.predict(&observation).await?;
```

Features:
- MolmoAct with Chain-of-Thought reasoning
- Waypoint generation for complex tasks
- Skills framework (Pick, Place, Reach)
- Mock policy for testing
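The mock-policy idea can be sketched with a small trait: anything that maps an observation to an action can back the same interface, so a stub can stand in for a full VLA model when exercising the control loop in tests. The trait and struct names below are illustrative, not the `vla-policy` API.

```rust
/// Hypothetical policy abstraction: an observation in, an action out.
trait Policy {
    fn predict(&self, observation: &[f64]) -> Vec<f64>;
}

/// A mock policy that always returns a zero action of fixed dimension,
/// useful for testing downstream code without loading a model.
struct MockPolicy {
    action_dim: usize,
}

impl Policy for MockPolicy {
    fn predict(&self, _observation: &[f64]) -> Vec<f64> {
        vec![0.0; self.action_dim]
    }
}
```

Because the rest of the stack only sees the trait, swapping the mock for a real model is a one-line change at construction time.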
Expression-based constraint system ensuring safe operation:
```rust
use safety_guard::{SafetyGuard, Constraint};

let mut guard = SafetyGuard::new();

// Define workspace boundaries
guard.add_constraint(Constraint::expression(
    "workspace_x",
    "x >= -0.5 && x <= 0.5"
)?);

// Check if action is safe
if guard.check_action(&action)? {
    robot.execute(action)?;
}
```

Features:
- Mathematical expression constraints
- Real-time evaluation with evalexpr
- Watchdog monitoring
- Automatic intervention on violations
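The constraint check can be sketched without an expression parser: each constraint is a named predicate over the proposed end-effector position, and the guard reports the first violated constraint by name. In the real crate the predicate comes from evaluating an expression string with evalexpr; here a closure stands in for the parsed expression, and `MiniGuard` is a hypothetical name.

```rust
/// A named predicate over a proposed (x, y, z) position.
type Predicate = Box<dyn Fn(f64, f64, f64) -> bool>;

/// Hypothetical minimal guard: all constraints must pass for an action
/// to be allowed; the first violation is reported by constraint name.
struct MiniGuard {
    constraints: Vec<(String, Predicate)>,
}

impl MiniGuard {
    fn new() -> Self {
        Self { constraints: Vec::new() }
    }

    fn add(&mut self, name: &str, predicate: Predicate) {
        self.constraints.push((name.to_string(), predicate));
    }

    /// Ok(()) if every constraint holds, otherwise Err with the name of
    /// the first violated constraint (for audit logging and intervention).
    fn check(&self, x: f64, y: f64, z: f64) -> Result<(), String> {
        for (name, predicate) in &self.constraints {
            if !predicate(x, y, z) {
                return Err(name.clone());
            }
        }
        Ok(())
    }
}
```

Returning the violated constraint's name, rather than a bare boolean, is what lets a watchdog log exactly which boundary an action would have crossed.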
On-device speech recognition and wake word detection:
```rust
use voice_local::{KyutaiProvider, WakeWordDetector};

let provider = KyutaiProvider::new(config)?;
let detector = WakeWordDetector::new("hey robot")?;

// Process audio stream
if detector.detect(&audio_frame)? {
    let command = provider.transcribe(&audio_buffer)?;
    execute_command(command)?;
}
```

Features:
- Kyutai/Mimi model integration
- Real-time transcription
- Wake word detection
- Plugin architecture for custom models
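A common companion to model-based wake word detection is a cheap energy gate that decides whether a frame is even worth running the detector on, saving compute during silence. The sketch below is an illustrative pre-filter, not part of the `voice-local` API, and the threshold value is arbitrary.

```rust
/// Root-mean-square energy of an audio frame (samples in [-1.0, 1.0]).
fn frame_rms(samples: &[f32]) -> f32 {
    if samples.is_empty() {
        return 0.0;
    }
    let sum_sq: f32 = samples.iter().map(|s| s * s).sum();
    (sum_sq / samples.len() as f32).sqrt()
}

/// Gate: only frames above the energy threshold are passed on to the
/// (more expensive) model-based wake word detector.
fn worth_detecting(samples: &[f32], threshold: f32) -> bool {
    frame_rms(samples) > threshold
}
```

A gate like this trades a tiny amount of latency for a large reduction in average CPU load, which matters on battery-powered robots.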
Depth perception and 3D scene understanding:
```rust
use vision_stereo::{StereoCamera, DepthEstimator};

let camera = StereoCamera::new(config)?;
camera.calibrate()?;

let (left, right) = camera.capture()?;
let depth_map = DepthEstimator::compute(&left, &right)?;
let tags = detect_april_tags(&left)?;
```

Features:
- Dual camera calibration
- Real-time depth estimation
- AprilTag detection
- Point cloud generation
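The core relation behind stereo depth estimation is Z = f·B/d for a rectified pair: focal length f in pixels, baseline B in metres, and disparity d in pixels. A minimal sketch (the function name is illustrative, not the `vision-stereo` API):

```rust
/// Depth from disparity for a calibrated, rectified stereo pair.
/// Returns None for non-positive disparity (unmatched pixel, or a point
/// effectively at infinity).
fn depth_from_disparity(focal_px: f64, baseline_m: f64, disparity_px: f64) -> Option<f64> {
    if disparity_px <= 0.0 {
        None
    } else {
        Some(focal_px * baseline_m / disparity_px)
    }
}
```

For example, with a 700 px focal length and a 6 cm baseline, a 42 px disparity puts the point about one metre away; halving the disparity doubles the depth, which is why depth precision degrades quadratically with distance.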
Hardware control via CAN bus:
```rust
use can_transport::{SlcanTransport, Message};

let transport = SlcanTransport::new("/dev/ttyUSB0")?;

// Send motor command
let msg = Message::new(0x123, &[0x01, 0x02, 0x03])?;
transport.send(&msg)?;
```

Features:
- SLCAN protocol support
- ODrive motor control
- T-Motor actuator support
- Mock transport for testing
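SLCAN (the Lawicel ASCII serial-line protocol) frames a standard 11-bit CAN message as `t` + three hex digits of ID + one digit DLC + the data bytes in hex + CR; extended 29-bit frames use `T` with an eight-digit ID. A minimal encoder sketch (function name is illustrative, not the `can-transport` API):

```rust
/// Encode a standard-frame CAN message in SLCAN ASCII form:
/// 't' + 3-hex-digit ID + 1-digit DLC + data bytes as hex + '\r'.
fn slcan_encode(id: u16, data: &[u8]) -> Result<String, &'static str> {
    if id > 0x7FF {
        return Err("standard CAN ID must fit in 11 bits");
    }
    if data.len() > 8 {
        return Err("CAN payload is at most 8 bytes");
    }
    let mut frame = format!("t{:03X}{}", id, data.len());
    for byte in data {
        frame.push_str(&format!("{:02X}", byte));
    }
    frame.push('\r');
    Ok(frame)
}
```

So the motor command above, ID 0x123 with payload `01 02 03`, would go over the serial port as the ASCII string `t1233010203` followed by a carriage return.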
All crates maintain 100% test coverage on critical paths:
```bash
# Run all tests
cargo test --all

# Run with coverage
cargo tarpaulin --out Html

# Run safety-critical tests
cargo test -p safety-guard

# Run benchmarks
cargo bench
```

Current test status:
- ✅ safety-guard: 13/13 passing
- ✅ vla-policy: 21/21 passing
- ✅ voice-local: all doctests passing
- ✅ intent-parser: 1/1 passing
- ✅ Zero compilation warnings
- No Panics: Zero `unwrap()`, `expect()`, or `panic!()` in production
- Error Handling: All errors properly propagated with `Result<T, E>`
- Memory Safety: Guaranteed by Rust's ownership system
- Concurrency Safety: Safe parallelism with Send/Sync traits
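The no-panic style above amounts to one habit: fallible operations return `Result` and propagate with `?` instead of calling `unwrap()`/`expect()`, so the caller decides how to recover. A tiny illustrative sketch (the function is hypothetical, not from this codebase):

```rust
use std::num::ParseIntError;

/// Parse a motor ID from user input. A malformed string surfaces as an
/// Err for the caller to handle, rather than panicking mid-control-loop.
fn parse_motor_id(input: &str) -> Result<u16, ParseIntError> {
    input.trim().parse::<u16>()
}
```

In a control daemon, the caller might log the error and fall back to a safe default instead of crashing, which is exactly the behaviour `unwrap()` would forbid.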
- Formal constraint verification
- Watchdog timers on all operations
- Automatic failsafe modes
- Comprehensive audit logging
- Architecture Overview
- VLA Policy Design
- Safety System
- CAN Protocol
- Vision System
- Voice Control
- Research Notes
- ONNX runtime integration for broader model support
- ROS2 bridge for ecosystem compatibility
- Web dashboard for monitoring and control
- Simulation environment with Bevy
- Distributed multi-robot coordination
- Federated learning across robot fleets
- Custom silicon accelerator support
- Formal verification of safety properties
We welcome contributions! Please see CONTRIBUTING.md for guidelines.
Key areas for contribution:
- Additional VLA model implementations
- Hardware device drivers
- Safety constraint patterns
- Documentation and examples
MIT License - see LICENSE for details.
- Built with Candle for ML inference
- Inspired by LeRobot for robot learning
- Safety patterns from aerospace and automotive industries
- GitHub: @dirvine
- Project: Saorsa Labs
For the original Python implementation for SO-101 arms, see `archive/python-so101`.