Welcome to the DEEL France–Québec Hackathon repository 🎉 Here you’ll find everything you need to get started: objectives, rules, tips, and resources to make the most out of this exciting challenge.
The challenge: teach a robot arm to place 1, 2, or 3 colored cubes onto a 2×2 wooden grid by reading a small reference card that shows, for each cube color, a colored cross on the target cell.
Sounds simple? In practice, it’s a tough robotics + learning task! Success depends on how strategically you build datasets (coverage & curriculum), train models, and plan your approach.
Your robot will be evaluated over 4 levels of increasing difficulty. Each level has 5 trials. You must succeed on at least 3/5 trials to unlock the next level.
Level 1 — Single Color, Fixed Position
Task: Pick up one cube of a single color and place it on the same target cell every time (as indicated by the card) in ≤ 20s.
Scoring: 5 points per success (+5 bonus if the cube is grasped on the first attempt).
Level 2 — Single Cube, Varying Positions
Task: Pick up one cube and place it on the correct cell (varies across trials per the card) in ≤ 20s.
Scoring: 10 points per success (+5 bonus for first-attempt grasps).
Level 3 — Two Colored Cubes, Varying Positions
Task: Pick and place 2 cubes of different colors on their respective target cells (per the card) in ≤ 30s.
Scoring: 30 points per success.
Note
💡 Adding the second cube too early in your dataset may make Levels 1 & 2 harder.
Level 4 — Three Colored Cubes, Varying Positions
Task: Pick and place 3 cubes of different colors on their respective target cells (per the card) in ≤ 30s.
Scoring: 50 points per success.
Day 1
09:00–09:15: Hackathon intro
09:15–09:45: Team meet-up & strategy discussion
09:45: 🟢 Official start!
11:00 (recommended): Run first training tests to verify dataset format, logging, and checkpoints.
12:00 (recommended): Test inference from early checkpoints.
18:00: First evaluation attempt (even with partial training).
Day 2
09:00: Evaluation with overnight training results
16:30: Final evaluations & team presentations (strategies, results, lessons learned)
17:30: Clean-up and prep for MobiliT.AI poster session
Plan your recording setup: Each team gets 2 external cameras. Place them wisely (wrist, top, side view, etc.). Name them clearly when recording datasets.
Record plenty of data: Rotate who records to get variety. Consistency matters!
Mark positions: If you set up a controlled environment, mark object positions (objects may be moved overnight).
Robustness vs. simplicity:
- Controlled environment = easier early progress.
- Varied environments = robustness but needs more data.
- Curriculum strategy: Start with lots of Level 1 data, then gradually add Level 2, Level 3, etc. This is not an obligation, however: if you and your teammates prefer an all-or-nothing strategy, we respect that!
Note
💡 Check out the dataset guidelines for more details.
- ACT (RTX 4090): batch size 32, 100k steps → ~7h
- SmolVLA (RTX 4090): batch size 64, 20k steps → ~7h
- Pi0.5: TBD
Note
- Think strategically: scoring is incremental, so don't rush straight for Level 4.
- Save checkpoints and test early & often.
- Teamwork matters as much as models: rotate recording, share insights.
- Remember: the goal is to learn, experiment, and have fun 🎉
This guide explains how to prepare your environment for the DEEL France–Québec Hackathon.
We cover three main scenarios:
- Recording data on a Windows machine.
- Training models on DEEL’s cluster machines.
- Running inference on Windows with your trained checkpoints.
For the hackathon, we provide a dedicated fork of Hugging Face’s LeRobot repository.
👉 Use the custom repository as your codebase.
You can distinguish at least two installation settings: a minimal installation on Windows for recording data and running inference, and a setup for training a model. The training instructions below assume you will use one of the machines provided to you on DEEL's cluster; if you want to train in your own environment, some adaptation may be needed.
Note
Training can be tested independently of data recording, since we provide you with a ready-to-use dataset.
Warning
We strongly advise reading the dataset guidelines section and this blog post before recording your dataset.
1. Install FFmpeg
Open PowerShell and run:
winget install ffmpeg

2. Clone the Repository
git clone git@github.com:deel-ai/lerobot-hackathon.git
cd lerobot-hackathon

3. Create and Activate Conda Environment
conda create -y -n lerobot python=3.10
conda activate lerobot

4. Install PyTorch with the correct CUDA version
First, check your CUDA version using the command:
nvidia-smi

Then, install torch and torchvision for the corresponding CUDA version by following the official PyTorch installation guide.
For example, if your CUDA version is 12.4, run:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124

5. Install Dependencies
pip install -e .
pip install -e ".[feetech]"Calibration ensures that leader and follower arms map correctly.
A well-calibrated robot allows a model trained on one setup to generalize to another.
- Connect the Arms
- Power both arms.
- Use USB-C → USB-A cables to connect arms to your PC.
- Always use the same USB ports for consistency.
Find the port names:
lerobot-find-port

Example output:
Finding all available ports for the MotorBus.
['COM4', 'COM5']
Remove the USB cable from your MotorsBus and press Enter when done.
[...Disconnect corresponding leader or follower arm and press Enter...]
The port of this MotorsBus is 'COM4'
Reconnect the USB cable.

In our example, if you disconnected the leader arm, then the leader arm is on 'COM4' and the follower on 'COM5'.
Remember this mapping; it will matter later when you operate the arms.
- Calibrate the Follower Arm
lerobot-calibrate \
--robot.type=so101_follower \
--robot.port='COM5' \ # <- The port of your robot
--robot.id=follower_idontheetiquette \ # <- Give the robot a unique name
--robot.calibration_dir="path\to\lerobot-hackathon\calibration\robots\so101_follower"

Note
Using PowerShell, replace the trailing " \ " with a backtick ("`"); in the Windows command prompt (cmd), use "^" instead.
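For example, the follower calibration command above looks like this in PowerShell (same flags, only the line continuations change; adapt the port, id, and paths to your setup):

lerobot-calibrate `
    --robot.type=so101_follower `
    --robot.port='COM5' `
    --robot.id=follower_idontheetiquette `
    --robot.calibration_dir="path\to\lerobot-hackathon\calibration\robots\so101_follower"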
For the calibration itself, the easiest approach is to follow the video in the dedicated section of this tutorial.
Your file will be saved in your calibration directory under the name "follower_idontheetiquette.json". This is convenient: once a calibration has been done properly, you can share the file with your teammates so they do not need to redo the calibration.
- Calibrate the Leader Arm
lerobot-calibrate \
--teleop.type=so101_leader \
--teleop.port='COM4' \ # <- The port of your robot
--teleop.id=leader_idontheetiquette \ # <- Give the robot a unique name
--teleop.calibration_dir="path\to\lerobot-hackathon\calibration\teleoperators\so101_leader"

Note
The --robot flag is for the follower and --teleop for the leader arm.
- Test Calibration
With your hands on the leader arm, run:
lerobot-teleoperate \
--robot.type=so101_follower \
--robot.port="COM5" \
--robot.id="follower_f0" \
--robot.calibration_dir="path\to\lerobot-hackathon\calibration\robots\so101_follower" \
--teleop.type=so101_leader \
--teleop.port="COM4" \
--teleop.id="leader_l0" \
--teleop.calibration_dir="path\to\lerobot-hackathon\calibration\teleoperators\so101_leader" \
--display_data=true

A rerun.io window should open. The follower arm must match the leader precisely. In case of lag, try killing unnecessary processes.
If it is still not working, redo the calibration.
Warning
Move joints slowly during calibration. Fast manual motions can trigger motor faults or overheating protection, which may later cause detection issues during teleop/record/inference.
Warning
Troubleshooting — Motors blinking red / not detected.
If a motor LED blinks red or a motor is not detected when starting teleoperation, recording, or inference, it may be an overheating protection state (see related discussion in the LeRobot repo, issue #441).
Quick fix that worked for us:
- Power off the affected arm (disconnect the arm’s power).
- Wait a few seconds.
- Power it back on and retry.
- Plug cameras into your computer.
A dock might be needed to have enough ports and to keep your computer charging.
- Identify them:
lerobot-find-cameras opencv

Example output:
--- Detected Cameras ---
Camera #0:
Name: OpenCV Camera @ 0
Type: OpenCV
Id: 0
Backend api: AVFOUNDATION
Default stream profile:
Format: 16.0
Width: 1920
Height: 1080
Fps: 15.0
--------------------
(more cameras ...)

Note
This identifier might change if you reboot your computer or re-plug your camera; this behavior mostly depends on your operating system.
- Match IDs with physical placement by checking saved images in:
repository_dir/outputs/captured_images

Let's say that opencv_0.png comes from a camera positioned on the left of our robot and opencv_2.png from a camera positioned in front of it.
- Example teleoperation with two cameras:
lerobot-teleoperate \
--robot.type=so101_follower \
--robot.port="COM5" \
--robot.id="follower_f0" \
--robot.calibration_dir="path\to\lerobot-hackathon\calibration\robots\so101_follower" \
--robot.cameras="{ left: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, front: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}}" ^ # <-- Your setting
--teleop.type=so101_leader \
--teleop.port="COM4" \
--teleop.id="leader_l0" \
--teleop.calibration_dir="path\to\lerobot-hackathon\calibration\teleoperators\so101_leader" \
--display_data=true

Tip
Performance & latency: We strongly recommend 480p (width=640, height=480) for each camera.
With 2 cameras at ≥720p, we observed choppy teleoperation/inference that severely hurts success rates. 480p keeps streams smooth while preserving enough detail for the task.
You should now "see" what your robots is seeing in the rerun.io window. It will be useful to use that vision before actually recording data to place them as you wish.
Once you are happy with your installation setting, you are now ready to record data.
Each team should create a Hugging Face dataset repo:
DEEL-AI/Hackathon_TeamXX
Note
If you do not have the rights to write for the DEEL-AI organization please come see us!
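If the repository does not exist yet, one way to create it from the command line is through the huggingface_hub Python API (a sketch, assuming you are already logged in with hf auth login; pushing with lerobot-record may also create it for you):

python -c "from huggingface_hub import create_repo; create_repo('DEEL-AI/Hackathon_TeamXX', repo_type='dataset', exist_ok=True)"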
Keyboard Shortcuts During Recording
Before running the recording command, it’s useful to know the available shortcuts:
- Redo a recording: if you consider the current attempt low-quality, press ← (left arrow).
  - This gives you time to reset the environment and place the robot back in its initial position.
  - Once ready, press → (right arrow) to start recording again.
- Save a recording: when you're satisfied with an episode and have placed the robot in its final position, press → (right arrow).
  - This saves the episode.
  - You then have time to reset the environment before pressing → (right arrow) again to start the next episode.
Note
A practical shortcut: if you’re happy with a recording, simply double-press the right arrow. The saving process takes long enough to let you reset the environment before the next episode starts.
The command to record:
lerobot-record \
--robot.type=so101_follower \
--robot.port='COM5' \
--robot.id=follower_f0 \
--robot.calibration_dir="path\to\lerobot-hackathon\calibration\robots\so101_follower" \
--teleop.type=so101_leader \
--teleop.port='COM4' \
--teleop.id=leader_l0 \
--teleop.calibration_dir="path\to\lerobot-hackathon\calibration\teleoperators\so101_leader" \
--display_data=true \
--robot.cameras="{ left: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, front: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}}" \ # <-- Change with your setting
--dataset.root='path_to_locally_save_the_ds' \
--dataset.repo_id='DEEL-AI/Hackathon_TeamXX' \
--dataset.num_episodes=25 \ # Number of episodes you will record at once
--dataset.single_task="Pick and place one green cube on the cell indicated by the card on a 2×2 grid." \ # <-- can be adapted but most follow the guidelines in the dataset section
--dataset.push_to_hub=True \
--resume=false # <-- Set to true once it has been initialized

Warning
Change --resume to true after your first recording.
Note
You can change --dataset.single_task to adjust the task prompt. For example: "Pick and place one green cube on the top-left cell of a 2×2 grid (as shown on the card)."
BEFORE RECORDING, take a look at the dataset guidelines.
- Create environment:
conda create -y -n lerobot python=3.10
conda activate lerobot

- Install FFmpeg:
conda install ffmpeg=7.1.1 -c conda-forge

Check that when you run:
which ffmpeg

the output looks like:
/home/lucas.hervier/.conda/envs/lerobot/bin/ffmpeg

and that libsvtav1 appears in the output of the following command:
ffmpeg -encoders
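To avoid scrolling through the full encoder list, you can filter it directly (assuming grep is available on the machine):

ffmpeg -encoders | grep libsvtav1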
- Clone & install repo:

git clone git@github.com:deel-ai/lerobot-hackathon.git
cd lerobot-hackathon
pip install -e .

- Authenticate:
hf auth login
wandb login

Note
Make sure the token you use has the right permissions (write access is needed to push datasets or models to the Hub).
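If you prefer not to paste the token interactively (for example in a batch job), you can also expose it through the standard HF_TOKEN environment variable, which huggingface_hub picks up automatically:

export HF_TOKEN=hf_your_token_here # <-- hypothetical placeholder, use your own token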
- Download dataset:
hf download DEEL-AI/Hackathon_TeamXX --repo-type dataset

- Launch a training (ACT example):

CUDA_VISIBLE_DEVICES=id_of_your_gpu lerobot-train \
--dataset.repo_id=DEEL-AI/Hackathon_TeamXX \
--policy.type=act \
--output_dir=/output_dir_with_space/train/act_so101_test \
--job_name=act_so101_test \
--policy.device=cuda \
--policy.push_to_hub=false \
--batch_size=64 \
--save_freq=10000 \
--steps=100000 \
--wandb.enable=true \
--wandb.disable_artifact=true

As a test, you can run the command above with dataset.repo_id=DEEL-AI/Hackathon_Team0Z. (Don't forget to download it first with hf download DEEL-AI/Hackathon_Team0Z --repo-type dataset.)
For this test, the number of steps and the saving frequency were deliberately set low. Keep in mind that a low saving frequency produces many checkpoints and fills up disk space quickly, so be mindful of this parameter.
ACT comes with default values for steps and save_freq; it is up to you to check whether they are the ones you want to use.
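As an illustration, a short smoke-test run on the sample dataset could look like the following (hypothetical values chosen small on purpose; adjust the GPU id and output_dir to your machine):

CUDA_VISIBLE_DEVICES=0 lerobot-train \
    --dataset.repo_id=DEEL-AI/Hackathon_Team0Z \
    --policy.type=act \
    --output_dir=/output_dir_with_space/train/act_smoke_test \
    --job_name=act_smoke_test \
    --policy.device=cuda \
    --policy.push_to_hub=false \
    --batch_size=32 \
    --steps=2000 \ # <-- small on purpose, just to validate the pipeline
    --save_freq=500 \
    --wandb.enable=false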
If you want to resume a training:
lerobot-train \
--config_path=/output_dir_with_space/train/outputs/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
--resume=true \
--policy.device=cuda \
--policy.push_to_hub=false \
--steps=200000 \
--wandb.enable=true \
--wandb.disable_artifact=true

Warning
Here --steps is not the number of additional steps to run; it is the total step count to reach, counted from step 0. For example, resuming from a checkpoint saved at step 100000 with --steps=200000 runs 100000 additional steps.
Additionally, to finetune SmolVLA:
pip install -e ".[smolvla]"Finally check your pytorch version and use set_cuda_version with matching distributions.
Then:
lerobot-train \
--policy.path=cijerezg/smolvla-test \
--dataset.repo_id=DEEL-AI/Hackathon_TeamXX \
--output_dir=/output_dir_with_space/train/smolvla_so101_test \
--job_name=smolvla_so101_test \
--policy.device=cuda \
--policy.push_to_hub=false \
--batch_size=64 \
--steps=20000 \
--save_freq=5000 \
--wandb.enable=true \
--wandb.disable_artifact=true

As a test, you can run the command above with dataset.repo_id=DEEL-AI/Hackathon_Team0Z.
Resuming a training works the same way as for ACT:
lerobot-train \
--config_path=/output_dir_with_space/train/outputs/smolvla_so101_test/checkpoints/last/pretrained_model/train_config.json \
--resume=true \
--policy.device=cuda \
--policy.push_to_hub=false \
--steps=40000 \
--wandb.enable=true \
--wandb.disable_artifact=true

To run inference on your PC, you first need to get your trained checkpoints. If you want to fetch them from DEEL's machines, you can use scp:
scp -r username@deelXX:/output_dir_with_space/train/act_so101_test/checkpoints/last C:\path\to\outputs\train\act_so101_test\checkpoints

Then, if you have the environment built as described in the recording section, you can run:
lerobot-record \
--robot.type=so101_follower \
--robot.port='COM5' \
--robot.id=follower_f0 \
--robot.calibration_dir="path\to\lerobot-hackathon\calibration\robots\so101_follower" \
--robot.cameras="{ left: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}, front: {type: opencv, index_or_path: 2, width: 640, height: 480, fps: 30}}" \ # <-- Change with your setting
--teleop.type=so101_leader \
--teleop.port='COM4' \
--teleop.id=leader_l0 \
--teleop.calibration_dir="path\to\lerobot-hackathon\calibration\teleoperators\so101_leader" \
--display_data=true \
--dataset.single_task="Pick and place one green cube on the cell indicated by the card on a 2×2 grid." ^
--dataset.root='./eval_hackathon_9_cubes_v1' ^
--dataset.repo_id='DEEL-AI/eval_Hackathon_TeamXX' ^
--dataset.push_to_hub=false ^
--policy.path="C:\path\to\outputs\train\act_so101_test\checkpoints\last\pretrained_model"You don’t need to include the teleop arguments if you prefer not to. However, adding them allows you to press ← (left arrow) during inference to temporarily take manual control of the robot and reset it, before pressing → (right arrow) to continue to the next episode.
Note
You can modify the --dataset.single_task flag to change the task prompt. That said, we recommend using the exact same task strings as those in your dataset to ensure consistency.
The keyboard shortcuts behave the same way as during recording, except you won’t need to teleoperate—the robot will autonomously execute episodes. For testing, we provide a sample policy that you can run before training your own. Its behavior may be erratic, but it’s useful for verifying that inference is working correctly.
✅ You are now fully set up and ready to record, train, and run inference!
The following is adapted from this HF blog post (but we changed the resolution tip):
Browse all datasets: HF Datasets
Visualize a LeRobot Dataset: use the interactive tool HF Robot Viz Space
You can also use the local commands as described here:
- Visualize data stored on a local machine:
local$ lerobot-dataset-viz \
--repo-id lerobot/pusht \
--episode-index 0
- Visualize data stored on a distant machine with a local viewer:
distant$ lerobot-dataset-viz \
--repo-id lerobot/pusht \
--episode-index 0 \
--save 1 \
--output-dir path/to/directory
local$ scp distant:path/to/directory/lerobot_pusht_episode_0.rrd .
local$ rerun lerobot_pusht_episode_0.rrd
- Visualize data stored on a distant machine through streaming:
(You need to forward the websocket port to the distant machine, with
ssh -L 9087:localhost:9087 username@remote-host)
distant$ lerobot-dataset-viz \
--repo-id lerobot/pusht \
--episode-index 0 \
--mode distant \
--ws-port 9087
local$ rerun ws://localhost:9087
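The same tool works for your own recordings. For example, once your team dataset is on the Hub (replace TeamXX with your team number):

local$ lerobot-dataset-viz \
    --repo-id DEEL-AI/Hackathon_TeamXX \
    --episode-index 0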
- At least 1 core member with a full setup per team (7/7)
- Cross-review the produced code
- Build the teams
- 4 Developers
- 4 Testers
- Print all the Leader arms (7/7)
- Print all the Follower arms (7/7)
- Prepare a guide for calibrating the arms correctly
- Prepare a guide for setting up the machines to record data
- Print parts for the Camera
- Prepare the wooden boards (7/7)
- Define the best way to handle the generated datasets (per team)
- Make utils code to ease the dataset manipulation
- Make utils code to train using ACT on DEEL's machine
- Make utils code to finetune SmolVLA
- Prepare a guide for training on a custom dataset with the defined dataset strategy
