In this tutorial we show a minimalistic example of training a policy. For the official robomimic tutorial, please refer to the robomimic tutorial
Quick start with a minimal implementation: open train_lift_minimal.ipynb in Colab
DAgger on the Lift task: open train_lift_dag.ipynb in Colab
The following commands are taken from this link
conda create -n robomimic_venv python=3.8.0
conda activate robomimic_venv
Install PyTorch
- Install from here https://pytorch.org/get-started/locally/
Install robomimic
cd <PATH_TO_YOUR_INSTALL_DIRECTORY>
git clone https://github.com/ARISE-Initiative/robomimic.git
cd robomimic
pip install -e .
Install robosuite (Use from source installation, don't use pip install robosuite)
cd <PATH_TO_INSTALL_DIR>
git clone https://github.com/ARISE-Initiative/robosuite.git
cd robosuite
git checkout v1.4.1
pip install -r requirements.txt
Choose either Option 1 or Option 2, then press CTRL+C when you are done. The data will be saved in the robosuite/robosuite/models/assets/demonstrations folder. Note the file path on your computer; it should look similar to the following:
/home/ns/robosuite/robosuite/models/assets/demonstrations/1739396875_9637682/demo.hdf5
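Since each collection session is saved under a timestamped folder, a small helper can locate the most recent demo.hdf5 automatically. This is a sketch, not part of robosuite; the folder names shown are just examples of the timestamped pattern above.

```python
from pathlib import Path

def latest_demo_file(demos_root: str) -> Path:
    """Return the demo.hdf5 from the most recently modified session folder.

    robosuite saves each session under a timestamped folder, e.g.
    demonstrations/1739396875_9637682/demo.hdf5.
    """
    candidates = sorted(Path(demos_root).glob("*/demo.hdf5"),
                        key=lambda p: p.stat().st_mtime)
    if not candidates:
        raise FileNotFoundError(f"No demo.hdf5 found under {demos_root}")
    return candidates[-1]

# Example (adjust to your install location):
# print(latest_demo_file("/home/ns/robosuite/robosuite/models/assets/demonstrations"))
```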
Option 1: Collect data using keyboard
cd robosuite/robosuite/scripts
conda activate robomimic_venv
python collect_human_demonstrations.py
Option 2: Collect data using a SpaceMouse. Please see the installation instructions.
cd robosuite/robosuite/scripts
conda activate robomimic_venv
python collect_human_demonstrations.py --device spacemouse
cd robomimic/robomimic/scripts/conversion
conda activate robomimic_venv
python convert_robosuite.py --dataset demo_file_path.hdf5
- Replace "demo_file_path.hdf5" with the path to your hdf5 file.
cd robomimic/robomimic/scripts/
conda activate robomimic_venv
python dataset_states_to_obs.py --dataset demo_file_path.hdf5 \
    --output_name output_filepath.hdf5 \
    --done_mode 2 --camera_names agentview robot0_eye_in_hand --camera_height 84 --camera_width 84
- Note: change the dataset path and the output path accordingly.
cd bc_tutorial/robomimic_tasks
conda activate robomimic_venv
python hdf52videos.py --dataset demo_image_filepath.hdf5
The videos will be saved inside a "videos" folder in the same directory as the hdf5 file.
- The data is stored in hdf5 format. The structure is as follows:
low_dim_v141.hdf5
├──data
│  ├──demo_0
│  │  ├──action (7)
│  │  ├──obs
│  │  │  ├──object (10)
│  │  │  ├──robot0_eef_pos (3)
│  │  │  ├──robot0_eef_quat (4)
│  │  │  ├──robot0_gripper_qpos (2)
│  │  │  ├──robot0_joint_pos (7)
│  │  │  ...
│  │  ├──next_obs
│  │  ├──rewards
│  │  ...
│  ├──demo_1
│  ...
├──mask
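A quick way to verify this structure in your own file is to walk the hdf5 tree with h5py (assumed installed; it is a robomimic dependency). This is a small inspection sketch; replace the commented-out path with your dataset.

```python
import h5py

def print_tree(path: str) -> None:
    """Print every group and dataset in an hdf5 file, with dataset shapes."""
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(f"{name}  shape={obj.shape}")
            else:
                print(name)  # groups such as data/demo_0/obs
        f.visititems(visit)

# print_tree("low_dim_v141.hdf5")  # replace with your dataset path
```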
From this data we are interested in:
- action: 7-dimensional action space (delta x, delta y, delta z, delta roll, delta pitch, delta yaw, gripper)
- obs: observation space (object (10), robot0_eef_pos (3), robot0_eef_quat (4), robot0_gripper_qpos (2)), i.e. 19-dimensional
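The 19-dimensional observation above is just the concatenation of the listed components. A minimal sketch with dummy numpy arrays (the values are placeholders; only the component sizes come from the dataset):

```python
import numpy as np

# Dummy observation with the component sizes listed above.
obs = {
    "object": np.zeros(10),
    "robot0_eef_pos": np.zeros(3),
    "robot0_eef_quat": np.zeros(4),
    "robot0_gripper_qpos": np.zeros(2),
}

# Concatenate in a fixed key order to get the 19-dimensional low-dim observation.
keys = ["object", "robot0_eef_pos", "robot0_eef_quat", "robot0_gripper_qpos"]
flat_obs = np.concatenate([obs[k] for k in keys])
assert flat_obs.shape == (19,)
```

Keeping the key order fixed matters: the policy's input layout must be identical at training and evaluation time.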
See the data_info.ipynb file to understand the data structure.
Please see the robomimic tutorials here: Getting Started
If you run into rendering-related errors, installing the following system dependencies may help:
sudo apt install libosmesa6-dev libgl1-mesa-glx libglfw3 patchelf
conda install -c conda-forge glew
conda install -c conda-forge mesalib
conda install -c menpo glfw3
You can also try installing these in the "base" environment instead of the "robomimic_venv" environment.