This repository provides a ROS 2 package for generating explanations in autonomous robots based on log analysis using LLMs.
explainable_ros uses Retrieval-Augmented Generation (RAG) to filter the most relevant logs from those generated by the robot during the execution of its behavior.
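As a rough illustration of this retrieval step (a toy sketch only, not the package's implementation): each log line is embedded, scored against the question, and the top-k most relevant logs are kept as LLM context. The hashed bag-of-words `embed` below is an assumption made for self-containment; the real pipeline uses the embedding and reranker models listed in the usage section.

```python
# Toy sketch of RAG-style log filtering (not the actual implementation):
# embed every log line, score it against the question, keep the top-k.
import math
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    # Stand-in hashed bag-of-words embedding; the real system uses a
    # proper embedding model (bge-base-en-v1.5) plus a reranker.
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dim] += count
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

def filter_logs(question: str, logs: list[str], k: int = 5) -> list[str]:
    # The k most question-relevant logs become the LLM's context.
    q = embed(question)
    return sorted(logs, key=lambda log: cosine(embed(log), q), reverse=True)[:k]
```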
To enrich the robot's internal data, a VLM processes the images captured by the onboard camera, describes them, and logs the descriptions through rosout. This makes it possible to combine the logs generated by the robot's subsystems with information about the environment. The workflow of the system is illustrated in the following figure:
The high-level representation of the components that make up the system is shown in the following figure.
Note that the examples shown in the usage section were generated using this rosbag.
You must have llama_ros and the CUDA Toolkit (a llama_ros dependency) installed.
```shell
cd ros2_ws/src
git clone https://github.com/Dsobh/explainable_ros.git
pip install -r explainable_ros/requirements.txt
cd ../
colcon build
```

You can also use Docker. To do this, you can build the Dockerfile found in the root of this repository or download the corresponding image from Docker Hub.
For the examples shown in this section we use the following models (available in the llama_ros repository):

- Embedding model: bge-base-en-v1.5.yaml
- Reranker model: jina-reranker
- Base model: Qwen2
- Run the embedding model:

```shell
ros2 llama launch ~/ros2_ws/src/llama_ros/llama_bringup/models/bge-base-en-v1.5.yaml
```

- Run the reranker model:

```shell
ros2 llama launch ~/ros2_ws/src/llama_ros/llama_bringup/models/jina-reranker.yaml
```

- Run the base model:

```shell
ros2 llama launch ~/ros2_ws/src/llama_ros/llama_bringup/models/Qwen2.yaml
```

- Now you can run the main node of the system:

```shell
ros2 run explainable_ros explainability_node
```

This node subscribes to the rosout topic and processes the logs to add them to the context of the LLM. You can play a rosbag file to generate logs and test the operation of the system; a sketch of the underlying log-collection pattern is shown after this list.
- To request an explanation, use the /question service:

```shell
ros2 service call /question explainable_ros_msgs/srv/Question "{'question': 'What is happening?'}"
```

[ADD IMG]
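As referenced above, here is a minimal sketch of the log-collection pattern used by the explainability node, assuming only standard rclpy and the rcl_interfaces Log message (the real node additionally embeds and stores the logs for retrieval):

```python
# Sketch: subscribe to /rosout and buffer log messages as plain strings.
import rclpy
from rclpy.node import Node
from rcl_interfaces.msg import Log

class LogCollectorSketch(Node):
    def __init__(self):
        super().__init__("log_collector_sketch")
        self.logs: list[str] = []
        # /rosout carries the log records published by every node
        self.create_subscription(Log, "/rosout", self.on_log, 100)

    def on_log(self, msg: Log) -> None:
        # Keep "node_name: message" strings as the retrieval corpus
        self.logs.append(f"{msg.name}: {msg.msg}")

def main():
    rclpy.init()
    rclpy.spin(LogCollectorSketch())

if __name__ == "__main__":
    main()
```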
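To call the /question service programmatically, a minimal rclpy client could look like the following sketch (the layout of the response message is not documented here, so the whole result is printed):

```python
# Sketch: minimal client for the /question explanation service.
import rclpy
from rclpy.node import Node
from explainable_ros_msgs.srv import Question

def main():
    rclpy.init()
    node = Node("question_client")
    client = node.create_client(Question, "/question")
    client.wait_for_service()
    request = Question.Request()
    request.question = "What is happening?"
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    # Print the raw response; field names depend on the service definition
    node.get_logger().info(f"Response: {future.result()}")
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()
```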
Run the container:

```shell
sudo docker run --rm -it --entrypoint bash <docker_name:tag>
```

Run a second container:

```shell
docker exec -it <container_id> bash
```

- Run a VLM model:

```shell
ros2 launch llama_bringup minicpm-2.6.launch.py
```

- Run the visual descriptor node:

```shell
ros2 run explainable_ros visual_descriptor_node
```

This node subscribes to the /camera/rgb/image_raw topic and, every 5 seconds, describes the image captured by the camera and logs the description to /rosout; a minimal sketch of this pattern is shown below.
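In the sketch below, `describe_image` is a hypothetical placeholder for the actual VLM query (the real node uses the VLM launched through llama_ros); only the subscribe-buffer-timer-log structure is illustrated.

```python
# Sketch of the periodic image-description pattern described above.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image

class VisualDescriptorSketch(Node):
    def __init__(self):
        super().__init__("visual_descriptor_sketch")
        self.last_image = None
        # Keep only the most recent camera frame
        self.create_subscription(Image, "/camera/rgb/image_raw", self.on_image, 10)
        # Every 5 seconds, describe the latest frame and log it to /rosout
        self.create_timer(5.0, self.on_timer)

    def on_image(self, msg: Image) -> None:
        self.last_image = msg

    def on_timer(self) -> None:
        if self.last_image is None:
            return
        description = self.describe_image(self.last_image)
        self.get_logger().info(description)  # published on /rosout

    def describe_image(self, image: Image) -> str:
        # Hypothetical placeholder: the actual node queries a VLM here
        return "VLM description placeholder"

def main():
    rclpy.init()
    rclpy.spin(VisualDescriptorSketch())

if __name__ == "__main__":
    main()
```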
- llama_ros → A repository that provides a set of ROS 2 packages to integrate llama.cpp into ROS 2.
A series of rosbags (ROS 2 Humble) published on Zenodo is listed below. These rosbags can be used to test the explainability capabilities of the project.
- Sobrín Hidalgo, D. (2024). Navigation Test in Simulated Environment Rosbag. Human obstacle detection. (1.0.0) [Data set]. Robotics Group. https://doi.org/10.5281/zenodo.10896141
- Sobrín Hidalgo, D. (2024). Navigation Benchmark Rosbags Inspired by ERL Competition Test (1.0.0) [Data set]. Robotics Group. https://doi.org/10.5281/zenodo.10518775
- Sobrín-Hidalgo, D., González-Santamarta, M. A., Guerrero-Higueras, Á. M., Rodríguez-Lera, F. J., & Matellán-Olivera, V. (2024). Explaining Autonomy: Enhancing Human-Robot Interaction through Explanation Generation with Large Language Models. arXiv preprint arXiv:2402.04206.
- Sobrín-Hidalgo, D., González-Santamarta, M. Á., Guerrero-Higueras, Á. M., Rodríguez-Lera, F. J., & Matellán-Olivera, V. (2024). Enhancing Robot Explanation Capabilities through Vision-Language Models: a Preliminary Study by Interpreting Visual Inputs for Improved Human-Robot Interaction. arXiv preprint arXiv:2404.09705.
If your work uses this repository, please cite the repository or the following paper:
@article{sobrin2024explaining,
title={Explaining Autonomy: Enhancing Human-Robot Interaction through Explanation Generation with Large Language Models},
author={Sobr{\'\i}n-Hidalgo, David and Gonz{\'a}lez-Santamarta, Miguel A and Guerrero-Higueras, {\'A}ngel M and Rodr{\'\i}guez-Lera, Francisco J and Matell{\'a}n-Olivera, Vicente},
journal={arXiv preprint arXiv:2402.04206},
year={2024}
}
This project has been partially funded by the Recovery, Transformation, and Resilience Plan, financed by the European Union (Next Generation), through the TESCAC project (Traceability and Explainability in Autonomous Systems for improved Cybersecurity) granted by INCIBE to the University of León, and by the EDMAR project (Explainable Decision Making in Autonomous Robots), grant PID2021-126592OB-C21 funded by MCIN/AEI/10.13039/501100011033 and by ERDF "A way of making Europe".




