Yuheng Lei, Sitong Mao, Shunbo Zhou, Hongyuan Zhang, Xuelong Li, Ping Luo
[Paper] [Pretraining Checkpoint (LIBERO-90)]
Please run the following commands in the given order to install the dependencies and the LIBERO benchmark.
conda create -n dmpel python=3.8.13
conda activate dmpel
pip install -r requirements.txt
Then install the libero package:
pip install -e .
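After the steps above, a quick sanity check can confirm the environment is usable. This is a sketch; the importable package name `libero` is an assumption based on the editable install above:

```shell
# Sanity check after installation: confirm the interpreter is on PATH and
# that the libero package is importable. The package name "libero" is an
# assumption based on the `pip install -e .` step above.
python --version
python -c "import libero" 2>/dev/null \
  && echo "libero import: OK" \
  || echo "libero import: FAILED (re-run 'pip install -e .')"
```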
We leverage high-quality human teleoperation demonstrations for the task suites in LIBERO. To download the demonstration dataset, run:
python benchmark_scripts/download_libero_datasets.py

For a detailed walk-through of the LIBERO benchmark, please refer to either the documentation or the original paper.
We can start training by running:
export CUDA_VISIBLE_DEVICES=GPU_ID && \
export MUJOCO_EGL_DEVICE_ID=GPU_ID && \
python libero/lifelong/main.py seed=SEED \
benchmark_name=BENCHMARK \
policy=POLICY \
lifelong=ALGO

where the placeholders are chosen from:
- BENCHMARK from [LIBERO_90]
- ALGO from [multitask]
- POLICY from [bc_foundation_policy_fft, bc_foundation_policy_frozen]
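For concreteness, the command above with its placeholders filled in might look as follows. This is a dry-run sketch that only prints the command; GPU_ID=0 and SEED=0 are illustrative values, not the paper's exact configuration. Remove the final `echo` indirection to actually launch:

```shell
# Dry-run sketch: build and print a fully substituted pretraining command.
# GPU 0, seed 0, the LIBERO-90 benchmark, the multitask algorithm, and the
# full fine-tuning policy are example choices from the option lists above.
GPU_ID=0
SEED=0
CMD="CUDA_VISIBLE_DEVICES=$GPU_ID MUJOCO_EGL_DEVICE_ID=$GPU_ID \
python libero/lifelong/main.py seed=$SEED \
benchmark_name=LIBERO_90 lifelong=multitask policy=bc_foundation_policy_fft"
echo "$CMD"
```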
We provide the following template script for pretraining:
sh exp_scripts/pretraining_scripts/run_chunkonlyfft_base_clip.sh
For lifelong learning, use the same command with:
- BENCHMARK from [LIBERO_SPATIAL, LIBERO_OBJECT, LIBERO_GOAL, LIBERO_10]
- ALGO from [base, er, ewc, packnet, lotus, l2m, iscil, tail, dmpel]
- POLICY from [bc_foundation_policy_fft, bc_foundation_policy_frozen, bc_hierarchical_policy, bc_foundation_tail_policy, bc_foundation_l2m_policy, bc_foundation_iscil_policy, bc_foundation_dmpel_policy]
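To sweep several algorithms over the four task suites, the per-run commands can be generated in a loop. The sketch below is a dry run that only prints commands; the algorithm-to-policy mapping (`bc_foundation_${ALGO}_policy`) is an assumption inferred from the naming of the POLICY options, so verify it against the scripts in exp_scripts/lifelong_scripts before launching:

```shell
# Dry-run sketch: print one training command per (benchmark, algorithm) pair.
# Seed 0 is illustrative. The policy name is derived from the algorithm name,
# which is an assumption based on the option names listed above.
for BENCHMARK in LIBERO_SPATIAL LIBERO_OBJECT LIBERO_GOAL LIBERO_10; do
  for ALGO in tail l2m iscil dmpel; do
    POLICY="bc_foundation_${ALGO}_policy"
    echo "python libero/lifelong/main.py seed=0 \
benchmark_name=$BENCHMARK lifelong=$ALGO policy=$POLICY"
  done
done
```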
We provide scripts to reproduce the results in the paper in exp_scripts/lifelong_scripts. For example, we can evaluate DMPEL on LIBERO-Goal by running:
sh exp_scripts/lifelong_scripts/dmpel.sh
Note that the pretrained model path should point to the final checkpoint saved during pretraining. We also provide our pretraining checkpoint to facilitate the replication of the results presented in the main paper.
This codebase is built with reference to the following repositories: