(Video: FFW RL reach policy playback)
robotis_lab is a research-oriented repository based on Isaac Lab, designed to enable reinforcement learning (RL) and imitation learning (IL) experiments using Robotis robots in simulation. This project provides simulation environments, configuration tools, and task definitions tailored for Robotis hardware, leveraging NVIDIA Isaac Sim’s powerful GPU-accelerated physics engine and Isaac Lab’s modular RL pipeline.
Important
This repository currently requires Isaac Lab v2.2.0 or higher.
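The RobotisLab-* task IDs used throughout this README are registered as Gymnasium environments on top of Isaac Lab. As a rough sketch of how one of them could be created programmatically (the robotis_lab.tasks import that performs the registration is an assumption; upstream Isaac Lab tasks follow the same pattern):

# Minimal sketch: create a registered task directly. Isaac Sim must be launched first.
from isaaclab.app import AppLauncher

simulation_app = AppLauncher(headless=True).app

import gymnasium as gym
from isaaclab_tasks.utils import parse_env_cfg
import robotis_lab.tasks  # noqa: F401  (assumed import path that registers the RobotisLab-* tasks)

env_cfg = parse_env_cfg("RobotisLab-Reach-OMY-v0", num_envs=16)
env = gym.make("RobotisLab-Reach-OMY-v0", cfg=env_cfg)
obs, _ = env.reset()
env.close()
simulation_app.close()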
Docker installation provides a consistent environment with all dependencies pre-installed.
Prerequisites:
- Docker and Docker Compose installed
- NVIDIA Container Toolkit installed
- NVIDIA GPU with appropriate drivers
Steps:
1. Clone the robotis_lab repository with submodules:
   git clone --recurse-submodules https://github.com/ROBOTIS-GIT/robotis_lab.git
   cd robotis_lab
   If you already cloned without submodules, initialize them:
   git submodule update --init --recursive
2. Build and start the Docker container:
   ./docker/container.sh start
3. Enter the container:
   ./docker/container.sh enter
Docker Commands:
- ./docker/container.sh start - Build and start the container
- ./docker/container.sh enter - Enter the running container
- ./docker/container.sh stop - Stop the container
- ./docker/container.sh logs - View container logs
- ./docker/container.sh clean - Remove the container and image
What's included in the Docker image:
- Isaac Sim 5.1.0
- Isaac Lab v2.3.0 (from third_party submodule)
- CycloneDDS 0.10.2 (from third_party submodule)
- robotis_dds_python (from third_party submodule)
- LeRobot 0.3.3 (in a separate virtual environment at ~/lerobot_env)
- All required dependencies and configurations
Reinforcement learning
OMY Reach Task
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Reach-OMY-v0 --num_envs=512 --headless
# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Reach-OMY-v0 --num_envs=16
OMY Lift Task
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Lift-Cube-OMY-v0 --num_envs=512 --headless
# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Lift-Cube-OMY-v0 --num_envs=16
OMY Open Drawer Task
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Open-Drawer-OMY-v0 --num_envs=512 --headless
# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Open-Drawer-OMY-v0 --num_envs=16
FFW-BG2 Reach Task
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Reach-FFW-BG2-v0 --num_envs=512 --headless
# Play
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Reach-FFW-BG2-v0 --num_envs=16
Imitation learning
If you want to control a SINGLE ROBOT with the keyboard during playback, add --keyboard at the end of the play script.
Key bindings:
  Command                       Key
  Toggle gripper (open/close)   K
  Move arm along x-axis         W / S
  Move arm along y-axis         A / D
  Move arm along z-axis         Q / E
  Rotate arm along x-axis       Z / X
  Rotate arm along y-axis       T / G
  Rotate arm along z-axis       C / V
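These bindings match Isaac Lab's built-in SE(3) keyboard teleoperation device; a minimal sketch of reading commands from that device is shown below (assuming robotis_lab reuses it; a running Isaac Sim GUI window is required for keyboard input):

# Minimal sketch, assuming the keyboard teleop device is Isaac Lab's Se3Keyboard.
from isaaclab.app import AppLauncher

simulation_app = AppLauncher(headless=False).app  # keyboard input needs the GUI window

from isaaclab.devices import Se3Keyboard

teleop = Se3Keyboard(pos_sensitivity=0.05, rot_sensitivity=0.05)
teleop.reset()
delta_pose, gripper_open = teleop.advance()  # 6-DoF delta pose command and gripper toggle state
simulation_app.close()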
OMY Stack Task (Stack the blocks in the following order: blue → red → green.)
# Teleop and record
python scripts/imitation_learning/isaaclab_recorder/record_demos.py --task RobotisLab-Stack-Cube-OMY-IK-Rel-v0 --teleop_device keyboard --dataset_file ./datasets/dataset.hdf5 --num_demos 10
# Annotate
python scripts/imitation_learning/isaaclab_mimic/annotate_demos.py --device cuda --task RobotisLab-Stack-Cube-OMY-IK-Rel-Mimic-v0 --auto --input_file ./datasets/dataset.hdf5 --output_file ./datasets/annotated_dataset.hdf5 --headless
# Generate dataset with Mimic
python scripts/imitation_learning/isaaclab_mimic/generate_dataset.py \
--device cuda --num_envs 100 --generation_num_trials 1000 \
--input_file ./datasets/annotated_dataset.hdf5 --output_file ./datasets/generated_dataset.hdf5 --headless
# Train
python scripts/imitation_learning/robomimic/train.py \
--task RobotisLab-Stack-Cube-OMY-IK-Rel-v0 --algo bc \
--dataset ./datasets/generated_dataset.hdf5
# Play
python scripts/imitation_learning/robomimic/play.py \
--device cuda --task RobotisLab-Stack-Cube-OMY-IK-Rel-v0 --num_rollouts 50 \
--checkpoint /PATH/TO/desired_model_checkpoint.pth
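The recorded and generated datasets above are plain HDF5 files, so you can sanity-check them before training. A quick illustrative check (it assumes the robomimic-style layout with a top-level data group; adjust the path and keys if your file differs):

# Illustrative dataset check (assumes a robomimic-style top-level "data" group).
import h5py

with h5py.File("./datasets/generated_dataset.hdf5", "r") as f:
    demos = sorted(f["data"].keys())
    print(f"{len(demos)} demonstrations found")
    first = f["data"][demos[0]]
    print("datasets in first demo:", list(first.keys()))
    if "actions" in first:
        print("action shape:", first["actions"].shape)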
FFW-BG2 Pick and Place Task (Move the red stick into the basket.)
# Teleop and record
python scripts/imitation_learning/isaaclab_recorder/record_demos.py --task RobotisLab-PickPlace-FFW-BG2-IK-Rel-v0 --teleop_device keyboard --dataset_file ./datasets/dataset.hdf5 --num_demos 10 --enable_cameras
# Annotate
python scripts/imitation_learning/isaaclab_mimic/annotate_demos.py --device cuda --task RobotisLab-PickPlace-FFW-BG2-Mimic-v0 --input_file ./datasets/dataset.hdf5 --output_file ./datasets/annotated_dataset.hdf5 --enable_cameras
# Generate dataset with Mimic
python scripts/imitation_learning/isaaclab_mimic/generate_dataset.py \
--device cuda --num_envs 20 --generation_num_trials 300 \
--input_file ./datasets/annotated_dataset.hdf5 --output_file ./datasets/generated_dataset.hdf5 --enable_cameras --headless
# Train
python scripts/imitation_learning/robomimic/train.py \
--task RobotisLab-PickPlace-FFW-BG2-IK-Rel-v0 --algo bc \
--dataset ./datasets/generated_dataset.hdf5
# Play
python scripts/imitation_learning/robomimic/play.py \
--device cuda --task RobotisLab-PickPlace-FFW-BG2-IK-Rel-v0 --num_rollouts 50 \
--checkpoint /PATH/TO/desired_model_checkpoint.pth --enable_cameras
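play.py loads the checkpoint for you; if you want to inspect or reuse a trained policy elsewhere, robomimic's checkpoint utilities can load it directly. A minimal sketch (the observation dictionary passed to the policy is task-specific and omitted here):

# Minimal sketch of loading a robomimic BC checkpoint outside of play.py.
import robomimic.utils.file_utils as FileUtils

policy, ckpt_dict = FileUtils.policy_from_checkpoint(
    ckpt_path="/PATH/TO/desired_model_checkpoint.pth",
    device="cuda",
    verbose=True,
)
policy.start_episode()
# action = policy(ob=obs)  # obs must match the observation keys/shapes used during training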
Important
OMY Hardware Setup: To run Sim2Real with the real OMY robot, you need to bring up the robot.
This can be done using ROBOTIS’s open_manipulator repository.
AI WORKER Hardware Setup: To run Sim2Real with the real AI WORKER robot, you need to bring up the robot.
This can be done using ROBOTIS’s ai_worker repository.
Training and inference on the collected dataset should be carried out with ROBOTIS’s physical_ai_tools repository.
Reinforcement learning
OMY Reach Task (introduction video on YouTube)
Run Sim2Real Reach Policy on OMY
# Train
python scripts/reinforcement_learning/rsl_rl/train.py --task RobotisLab-Reach-OMY-v0 --num_envs=512 --headless
# Play (You must run rsl_rl play in order to generate the policy file.)
python scripts/reinforcement_learning/rsl_rl/play.py --task RobotisLab-Reach-OMY-v0 --num_envs=16
# Sim2Real
python scripts/sim2real/reinforcement_learning/inference/OMY/reach/run_omy_reach.py --model_dir=<2025-07-10_08-47-09>
Replace <2025-07-10_08-47-09> with the actual timestamp folder name under logs/rsl_rl/reach_omy/.
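To avoid hunting for the timestamp by hand, you can pick the newest run folder programmatically. A small sketch (it assumes, as in upstream Isaac Lab, that the rsl_rl play script exported a TorchScript policy to <run>/exported/policy.pt):

# Sketch: locate the newest rsl_rl run and optionally load the exported TorchScript policy.
from pathlib import Path
import torch

log_root = Path("logs/rsl_rl/reach_omy")
latest_run = max(
    (p for p in log_root.iterdir() if p.is_dir()),
    key=lambda p: p.stat().st_mtime,
)
print("latest run:", latest_run.name)  # pass this folder name to --model_dir

policy_path = latest_run / "exported" / "policy.pt"  # assumption: written by the play script
if policy_path.exists():
    policy = torch.jit.load(policy_path)
    print(policy)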
Imitation learning
OMY Pick and Place Task
Sim2Sim (demo video)
Sim2Real (demo video)
# Teleop and record demos
python scripts/sim2real/imitation_learning/recorder/record_demos.py --task=RobotisLab-Real-Pick-Place-Bottle-OMY-v0 --robot_type OMY --dataset_file ./datasets/omy_pick_place_task.hdf5 --num_demos 10 --enable_cameras
[Optional] Mimic dataset generation
# Convert joint-space actions to end-effector (ee_pose) actions
python scripts/sim2real/imitation_learning/mimic/action_data_converter.py --input_file ./datasets/omy_pick_place_task.hdf5 --output_file ./datasets/processed_omy_pick_place_task.hdf5 --action_type ik
# Annotate dataset
python scripts/sim2real/imitation_learning/mimic/annotate_demos.py --task RobotisLab-Real-Mimic-Pick-Place-Bottle-OMY-v0 --auto --input_file ./datasets/processed_omy_pick_place_task.hdf5 --output_file ./datasets/annotated_dataset.hdf5 --enable_cameras --headless
# Generate dataset
python scripts/sim2real/imitation_learning/mimic/generate_dataset.py --device cuda --num_envs 10 --task RobotisLab-Real-Mimic-Pick-Place-Bottle-OMY-v0 --generation_num_trials 500 --input_file ./datasets/annotated_dataset.hdf5 --output_file ./datasets/generated_dataset.hdf5 --enable_cameras --headless
# Convert ee_pose actions back to joint-space actions
python scripts/sim2real/imitation_learning/mimic/action_data_converter.py --input_file ./datasets/generated_dataset.hdf5 --output_file ./datasets/processed_generated_dataset.hdf5 --action_type joint
# Convert the Isaac Lab HDF5 dataset to a LeRobot dataset (run inside the LeRobot virtual environment)
python scripts/sim2real/imitation_learning/data_converter/OMY/isaaclab2lerobot.py \
--task=RobotisLab-Real-Pick-Place-Bottle-OMY-v0 \
--robot_type OMY \
--dataset_file ./datasets/processed_omy_pick_place_task.hdf5   # or ./datasets/processed_generated_dataset.hdf5
# Inference in simulation
python scripts/sim2real/imitation_learning/inference/inference_demos.py --task RobotisLab-Real-Pick-Place-Bottle-OMY-v0 --robot_type OMY --enable_cameras
This repository is licensed under the Apache 2.0 License. See LICENSE for details.
- Isaac Lab: BSD-3-Clause License, see LICENSE-IsaacLab