- Overview
- Available Workflows
- Tutorials
- Repository Structure
- Contributing
- Support
- License
- Acknowledgement
This repository contains healthcare robotics workflows: complete, end-to-end implementations that demonstrate how to build, simulate, and deploy robotic systems for specific medical applications using the NVIDIA Isaac for Healthcare platform.
Workflows are comprehensive reference implementations that showcase the complete development pipeline from simulation to real-world deployment. Each workflow includes digital twin environments, AI model training capabilities, and deployment frameworks for specific healthcare robotics applications.
This repository currently includes four main workflows:
- Robotic Surgery - Physics-based surgical robot simulation framework with photorealistic rendering for developing autonomous surgical skills. Supports the da Vinci Research Kit (dVRK), dual-arm configurations, and STAR surgical arms. This workflow enables researchers and medical device companies to train AI models for surgical assistance, validate robot behaviors safely, and accelerate development through GPU-parallelized reinforcement learning (see the training sketch after this list). Includes pre-built surgical subtasks such as suture needle manipulation and precision reaching.
- Robotic Ultrasound - Autonomous ultrasound imaging system featuring physics-accurate sensor simulation: GPU-accelerated raytracing models ultrasound wave propagation, tissue interactions, and acoustic properties in real time. The raytracing simulator generates realistic B-mode images by simulating acoustic wave physics, enabling synthetic data generation for training AI models without physical ultrasound hardware. Supports multiple AI policies (PI0, GR00T N1), distributed communication via RTI DDS, and Holoscan deployment for clinical applications. This workflow enables medical imaging researchers, ultrasound device manufacturers, and healthcare AI developers to train robotic scanning protocols and validate autonomous imaging algorithms in GPU-accelerated simulation before clinical deployment.
- Telesurgery - Real-time remote surgical operations framework supporting both simulated and physical environments with low-latency video streaming, haptic feedback, and distributed control systems. The workflow features H.264/HEVC hardware-accelerated video encoding, RTI DDS communication (see the DDS sketch after this list), cross-platform deployment (x86/AARCH64), and a seamless sim-to-real transition. Designed for surgical robotics companies, medical device manufacturers, and telemedicine providers to develop remote surgical capabilities, validate teleoperation systems, and deploy scalable telesurgery solutions across varying network conditions.
- SO-ARM Starter - Surgical assistant robotics system featuring SO-ARM101 manipulator control with a complete data collection, policy training, and deployment pipeline. Implements a GR00T N1.5 diffusion policy for autonomous surgical instrument handling, precise tool positioning, and workspace organization, using dual RGB camera streams (640x480 @ 30 fps room and wrist views) and 6-DOF joint-state feedback. Features physics-based simulation environments, imitation learning from both real-world and simulated demonstrations, real-time policy inference that generates 16-step action sequences (see the inference sketch after this list), and RTI DDS middleware for distributed robot communication.
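To make the GPU-parallelized reinforcement learning idea concrete, here is a minimal rollout sketch. It assumes an Isaac Lab-style batched Gymnasium environment where observations and rewards arrive as torch tensors, one row per parallel scene; the task id `Isaac-Reach-dVRK-v0`, the `num_envs` keyword, and the random-action placeholder are illustrative assumptions, not the workflow's actual entry point.

```python
# GPU-parallel rollout sketch. The task id and num_envs keyword are
# hypothetical; the workflow defines its own tasks and training scripts.
import gymnasium as gym
import torch

NUM_ENVS = 1024  # many surgical scenes stepped in lockstep on one GPU

env = gym.make("Isaac-Reach-dVRK-v0", num_envs=NUM_ENVS)  # hypothetical task id
obs, info = env.reset(seed=0)

for _ in range(1000):
    # Placeholder policy: random actions. A real run would query a PPO/SAC
    # agent here and accumulate (obs, action, reward) batches for updates.
    actions = torch.as_tensor(env.action_space.sample())
    obs, reward, terminated, truncated, info = env.step(actions)

env.close()
```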
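The RTI DDS communication used by several workflows can be pictured with a small publisher sketch. It assumes RTI's Connext Python API (`rti.connextdds`); the `JointCommand` message layout and topic name are hypothetical stand-ins for the workflows' actual IDL definitions.

```python
# DDS publisher sketch using RTI Connext's Python API. JointCommand and the
# topic name are hypothetical; the workflows ship their own IDL types.
from dataclasses import field
from typing import Sequence

import rti.connextdds as dds
import rti.idl as idl


@idl.struct
class JointCommand:  # hypothetical message layout
    positions: Sequence[float] = field(default_factory=list)  # joint targets, radians


participant = dds.DomainParticipant(0)  # join DDS domain 0
topic = dds.Topic(participant, "JointCommand", JointCommand)  # hypothetical topic name
writer = dds.DataWriter(participant.implicit_publisher, topic)

# Publish one command; a teleoperation loop would write at the control rate.
writer.write(JointCommand(positions=[0.0, -0.5, 0.5, 0.0, 0.3, 0.0]))
```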
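The 16-step action-sequence inference in the SO-ARM Starter workflow follows a common action-chunking pattern, sketched below with a stub policy. Every name here (`StubPolicy`, `get_action`, the observation keys) is a hypothetical placeholder, not the GR00T N1.5 API.

```python
# Action-chunking inference loop sketch. StubPolicy and its get_action method
# are hypothetical placeholders, not the GR00T N1.5 interface.
import numpy as np

ACTION_HORIZON = 16  # steps returned per inference call
NUM_JOINTS = 6       # SO-ARM101 degrees of freedom


class StubPolicy:
    """Stand-in for the trained policy; returns a (16, 6) action chunk."""

    def get_action(self, obs: dict) -> np.ndarray:
        return np.zeros((ACTION_HORIZON, NUM_JOINTS), dtype=np.float32)


policy = StubPolicy()
room_rgb = np.zeros((480, 640, 3), dtype=np.uint8)   # 640x480 room camera frame
wrist_rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # 640x480 wrist camera frame
joint_state = np.zeros(NUM_JOINTS, dtype=np.float32)

# One inference call yields a chunk of 16 actions; the controller executes the
# chunk at the control rate while the next chunk is computed.
chunk = policy.get_action({"room": room_rgb, "wrist": wrist_rgb, "state": joint_state})
for action in chunk:
    pass  # send `action` to the robot controller here
```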
Each workflow provides complete simulation environments, training datasets, pre-trained models, and deployment tools to accelerate your healthcare robotics development.
Please see What's New for details on our milestone releases.
Get started with our comprehensive tutorials that guide you through key aspects of the framework:
- Bring Your Own Patient
- Medical Data Conversion (CT-to-USD) - a conversion sketch follows this list
- Bring Your Own Robot
- Bring Your Own Operating Room
- Bring Your Own XR Device
- Sim2Real Transition
- Telesurgery Latency Benchmarking
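As a rough picture of what the CT-to-USD conversion involves, the sketch below reads a CT volume, extracts an isosurface with marching cubes, and writes the triangles to a USD mesh. The file names and the Hounsfield-unit threshold are assumptions, and the tutorial's actual tooling may differ; this only illustrates the shape of the pipeline.

```python
# CT-to-USD conversion sketch. File names and the HU threshold are assumptions;
# the tutorial's own tooling may differ. Requires SimpleITK, scikit-image, OpenUSD.
import SimpleITK as sitk
from skimage import measure
from pxr import Usd, UsdGeom

# Load the CT volume as a (z, y, x) array of Hounsfield units.
volume = sitk.GetArrayFromImage(sitk.ReadImage("patient_ct.nii.gz"))

# Extract a triangle mesh at an assumed soft-tissue threshold.
verts, faces, _, _ = measure.marching_cubes(volume, level=200.0)

# Write the mesh into a new USD stage.
stage = Usd.Stage.CreateNew("patient.usd")
mesh = UsdGeom.Mesh.Define(stage, "/Patient/Body")
mesh.CreatePointsAttr([tuple(map(float, v)) for v in verts])
mesh.CreateFaceVertexCountsAttr([3] * len(faces))           # all faces are triangles
mesh.CreateFaceVertexIndicesAttr(faces.flatten().tolist())
stage.Save()
```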
i4h-workflows/
├── docs/                   # Documentation and guides
├── tutorials/              # Tutorial materials
│   ├── assets/             # Asset-related tutorials
│   └── sim2real/           # Sim2Real transition tutorials
└── workflows/              # Main workflow implementations
    ├── so_arm_starter/     # SO-ARM Starter workflow
    ├── robotic_surgery/    # Robotic surgery workflow
    ├── robotic_ultrasound/ # Robotic ultrasound workflow
    └── telesurgery/        # Telesurgery workflow
We wholeheartedly welcome community contributions that help this framework mature and make it useful for everyone. Contributions can be made through:
- Bug reports
- Feature requests
- Code contributions
- Documentation improvements
- Tutorial additions
Please check our contribution guidelines for detailed information on:
- Development setup
- Code style guidelines
- Pull request process
- Issue reporting
- Documentation standards
For support and troubleshooting:
- Check the documentation
- Search existing issues
- Submit a new issue for:
  - Bug reports
  - Feature requests
  - Documentation improvements
  - General questions
The Isaac for Healthcare framework is licensed under the Apache License 2.0.
The Robotic Surgery workflow originated from the ORBIT-Surgical framework. We would appreciate it if you cite the following works in academic publications as well:
@inproceedings{ORBIT-Surgical,
  author={Yu, Qinxi and Moghani, Masoud and Dharmarajan, Karthik and Schorp, Vincent and Panitch, William Chung-Ho and Liu, Jingzhou and Hari, Kush and Huang, Huang and Mittal, Mayank and Goldberg, Ken and Garg, Animesh},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  title={ORBIT-Surgical: An Open-Simulation Framework for Learning Surgical Augmented Dexterity},
  year={2024},
  pages={15509-15516},
  doi={10.1109/ICRA57147.2024.10611637}
}
@inproceedings{SuFIA-BC,
  author={Moghani, Masoud and Nelson, Nigel and Ghanem, Mohamed and Diaz-Pinto, Andres and Hari, Kush and Azizian, Mahdi and Goldberg, Ken and Huver, Sean and Garg, Animesh},
  booktitle={2025 IEEE International Conference on Robotics and Automation (ICRA)},
  title={SuFIA-BC: Generating High Quality Demonstration Data for Visuomotor Policy Learning in Surgical Subtasks},
  year={2025},
}