30 changes: 30 additions & 0 deletions _pages/exercises/AutonomousCars/autoparking.md

## Robot API

This exercise now supports ROS 2-direct implementation in addition to the original HAL-based approach. Below you'll find the details for both options.

### HAL-based Implementation

- `import HAL` - to import the HAL (Hardware Abstraction Layer) library class. This class contains the functions that send and receive information to and from the Hardware (Gazebo).
- `import WebGUI` - to import the WebGUI (Web Graphical User Interface) library class. This class contains the functions used to view the debugging information, like image widgets.
- `HAL.getPose3d()` - to get all the position information.
- `HAL.setV()` - to set the linear speed.
- `HAL.setW()` - to set the angular velocity.

### ROS 2-direct Implementation

Use standard ROS 2 topics for direct communication with the simulation.

> ⚠️ Even when using ROS 2-direct, you must still import `WebGUI` if you want visualization.

- `/prius_autoparking/cmd_vel` - Publish to this topic to set both linear and angular velocities. Message type: `geometry_msgs/msg/Twist`

- `/prius_autoparking/odom` - Subscribe to this topic to receive the car odometry. Message type: `nav_msgs/msg/Odometry`

- `/prius_autoparking/scan_front` - Subscribe to this topic to receive the front laser scan. Message type: `sensor_msgs/msg/LaserScan`

- `/prius_autoparking/scan_side` - Subscribe to this topic to receive the right-side laser scan. Message type: `sensor_msgs/msg/LaserScan`

- `/prius_autoparking/scan_back` - Subscribe to this topic to receive the rear laser scan. Message type: `sensor_msgs/msg/LaserScan`

- `/prius_autoparking/pc2` - Subscribe to this topic to receive 3D LiDAR data. Message type: `sensor_msgs/msg/PointCloud2`

In this exercise, the WebGUI does not require extra topics.

To control the loop frequency, use the standard ROS 2 mechanisms for managing loop timing:

- `rclpy.spin()` - Event-driven execution using callbacks.
- `rclpy.spin_once()` - Single-step processing, often with custom timers.
- `rclpy.Rate()` - Loop-based frequency control.
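The topics and timing mechanisms above can be combined into a minimal node. The sketch below is illustrative only: the node name, the stop distance, and the trivial control logic are placeholder assumptions, not part of the exercise API.

```python
import math

def closest_valid_range(ranges):
    """Smallest finite, positive reading from a LaserScan `ranges` list."""
    valid = [r for r in ranges if math.isfinite(r) and r > 0.0]
    return min(valid) if valid else float("inf")

def main():
    # ROS 2 wiring; run from a sourced ROS 2 environment.
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist
    from sensor_msgs.msg import LaserScan

    class AutoparkingSketch(Node):
        def __init__(self):
            super().__init__("autoparking_sketch")  # hypothetical node name
            self.cmd_pub = self.create_publisher(Twist, "/prius_autoparking/cmd_vel", 10)
            self.create_subscription(
                LaserScan, "/prius_autoparking/scan_side", self.on_side_scan, 10)
            self.side_clearance = float("inf")
            # A timer gives a fixed 10 Hz control loop (one of the timing options above).
            self.create_timer(0.1, self.control_step)

        def on_side_scan(self, msg):
            self.side_clearance = closest_valid_range(msg.ranges)

        def control_step(self):
            cmd = Twist()
            # Placeholder logic: creep forward while the side is clear.
            cmd.linear.x = 1.0 if self.side_clearance > 2.0 else 0.0
            self.cmd_pub.publish(cmd)

    rclpy.init()
    rclpy.spin(AutoparkingSketch())

# Call main() from a ROS 2 environment; it is not executed here.
```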

## Laser attributes

`HAL.getFrontLaserData()`, `HAL.getRightLaserData()` and `HAL.getBackLaserData()` return an instance of a class with the following attributes:
36 changes: 18 additions & 18 deletions _pages/exercises/AutonomousCars/end_to_end_visual_control.md

## Robot API

This exercise now supports ROS 2-direct implementation in addition to the original HAL-based approach. Below you'll find the details for both options.

### HAL-based Implementation

- `HAL.setW(velocity)` - to set the angular velocity.
- `WebGUI.showImage(image)` - allows you to view a debug image with relevant information.

### ROS 2-direct Implementation

Use standard ROS 2 topics for direct communication with the simulation.

> ⚠️ Even when using ROS 2-direct, you must still import `WebGUI` if you want visualization.

#### ROS 2 Topics

- `/cam_f1_left/image_raw` - Subscribe to this topic to receive the camera image. Message type: `sensor_msgs/msg/Image`

- `/cmd_vel` - Publish to this topic to set both linear and angular velocities. Message type: `geometry_msgs/msg/Twist`

- `/webgui/image` - Publish to this topic to display a debug image in the WebGUI.
Message type: `sensor_msgs/msg/Image`
QoS: `TRANSIENT_LOCAL`, depth `10`

- `/odom` - Subscribe to this topic if you want lap and map feedback, as used internally by the WebGUI.
Message type: `nav_msgs/msg/Odometry`

The user node must load the trained model locally and run inference on the images received from `/cam_f1_left/image_raw`.

#### Model loading from local file

```python
# TODO: USER CODE
from model import model_path_func

model_path = model_path_func("model.onnx")
```
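Building on the snippet above, a possible node layout is sketched below. It assumes `cv_bridge` and `onnxruntime` are available, and the `predict()` body is a placeholder — the real preprocessing and output layout depend on your trained model. The speed limits in `clip_command` are illustrative assumptions.

```python
def clip_command(v, w, v_max=5.0, w_max=1.0):
    """Clamp predicted speeds to (assumed) safe limits before publishing."""
    v = max(-v_max, min(v_max, v))
    w = max(-w_max, min(w_max, w))
    return v, w

def main():
    # ROS 2 wiring; assumes rclpy, cv_bridge and onnxruntime are installed.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from geometry_msgs.msg import Twist
    from cv_bridge import CvBridge
    import onnxruntime as ort
    from model import model_path_func

    class PilotSketch(Node):
        def __init__(self):
            super().__init__("e2e_pilot_sketch")  # hypothetical node name
            self.bridge = CvBridge()
            self.session = ort.InferenceSession(model_path_func("model.onnx"))
            self.cmd_pub = self.create_publisher(Twist, "/cmd_vel", 10)
            self.create_subscription(Image, "/cam_f1_left/image_raw", self.on_image, 10)

        def on_image(self, msg):
            frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
            v, w = self.predict(frame)
            v, w = clip_command(v, w)
            cmd = Twist()
            cmd.linear.x, cmd.angular.z = v, w
            self.cmd_pub.publish(cmd)

        def predict(self, frame):
            # TODO: resize/normalize `frame` and run self.session; the input
            # tensor shape and output layout depend on your trained model.
            return 0.0, 0.0

    rclpy.init()
    rclpy.spin(PilotSketch())

# Call main() from a ROS 2 environment; it is not executed here.
```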

## Run the Exercise

33 changes: 33 additions & 0 deletions _pages/exercises/AutonomousCars/global_navigation.md

## Robot API

This exercise now supports ROS 2-direct implementation in addition to the original HAL-based approach. Below you'll find the details for both options.

### HAL-based Implementation

* `import HAL` - to import the HAL (Hardware Abstraction Layer) library class. This class contains the functions that send and receive information to and from the Hardware (Gazebo).
* `import WebGUI` - to import the WebGUI (Web Graphical User Interface) library class. This class contains the functions used to view the debugging information, like image widgets.
* `HAL.setV()` - to set the linear speed.

The map image has a resolution of 400x400 pixels and indicates whether there is an obstacle by its color. The map in the Gazebo world is centered at [0, 0] and has a width and height of 500 meters. Therefore, each pixel in the map image represents a cell in the Gazebo world with a width and height of 1.25 meters.

### ROS 2-direct Implementation

#### ROS 2 Topics

Use standard ROS 2 topics for direct communication with the simulation.

> ⚠️ Even when using ROS 2-direct, you must still import `WebGUI` if you want visualization.

- `/cmd_vel` - Publish to this topic to set both linear and angular velocities. Message type: `geometry_msgs/msg/Twist`

- `/odom` - Subscribe to this topic to receive the robot odometry.
Message type: `nav_msgs/msg/Odometry`

- `/webgui/current_target` - Subscribe to this topic to receive the current goal selected from the WebGUI.
Message type: `geometry_msgs/msg/Point`
QoS: `TRANSIENT_LOCAL`, depth `1`

- `/webgui/path` - Publish to this topic to display the planned path in the WebGUI.
Message type: `nav_msgs/msg/Path`

- `/webgui/debug_image` - Publish to this topic to display a debug image typically used to visualize the cost map, wavefront expansion, or any other intermediate grid representation.
Message type: `sensor_msgs/msg/Image`

To control the loop frequency, use the standard ROS 2 mechanisms for managing loop timing:

- `rclpy.spin()` - Event-driven execution using callbacks.
- `rclpy.spin_once()` - Single-step processing, often with custom timers.
- `rclpy.Rate()` - Loop-based frequency control.
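As an illustration, the sketch below subscribes to `/webgui/current_target` with the latched QoS described above and publishes a `nav_msgs/msg/Path` to `/webgui/path`. The straight-line interpolation is a placeholder for a real planner, and the node name and start pose are assumptions.

```python
def interpolate_path(start, goal, steps):
    """Evenly spaced (x, y) waypoints from start to goal, inclusive."""
    x0, y0 = start
    x1, y1 = goal
    return [
        (x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
        for i in range(steps + 1)
    ]

def main():
    # ROS 2 wiring; run from a sourced ROS 2 environment.
    import rclpy
    from rclpy.node import Node
    from rclpy.qos import QoSProfile, DurabilityPolicy
    from geometry_msgs.msg import Point, PoseStamped
    from nav_msgs.msg import Path

    class PlannerSketch(Node):
        def __init__(self):
            super().__init__("global_nav_sketch")  # hypothetical node name
            latched = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)
            self.create_subscription(Point, "/webgui/current_target", self.on_target, latched)
            self.path_pub = self.create_publisher(Path, "/webgui/path", 10)

        def on_target(self, target):
            # Straight-line placeholder; a real solution plans around obstacles.
            path = Path()
            path.header.frame_id = "map"
            for x, y in interpolate_path((0.0, 0.0), (target.x, target.y), 20):
                pose = PoseStamped()
                pose.pose.position.x = x
                pose.pose.position.y = y
                path.poses.append(pose)
            self.path_pub.publish(path)

    rclpy.init()
    rclpy.spin(PlannerSketch())

# Call main() from a ROS 2 environment; it is not executed here.
```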

## Videos

{% include youtubePlayer.html id=page.youtubeId5 %}
22 changes: 21 additions & 1 deletion _pages/exercises/AutonomousCars/obstacle_avoidance.md

Use standard ROS 2 topics for direct communication with the simulation.

> ⚠️ Even when using ROS 2-direct, you must still import `WebGUI` if you want visualization.

- `/cmd_vel` - Publish to this topic to set both linear and angular velocities.
Message type: `geometry_msgs/msg/Twist`

- `/f1/laser/scan` - Subscribe to this topic to get laser scan data.
Message type: `sensor_msgs/msg/LaserScan`

The field `ranges` in `sensor_msgs/msg/LaserScan` contains the measured distances expressed in **meters**.

**WebGUI:**
- `/webgui/current_target` - Subscribe to this topic to receive the current target.
Message type: `geometry_msgs/msg/Point`
QoS: `TRANSIENT_LOCAL`, depth `1`

- `/webgui/target_reached` - Publish to notify that the current target has been reached.
Message type: `std_msgs/msg/Bool`
When `data=True`, the GUI updates to the next target.

- `/webgui/local_target` - Publish to visualize the current local target.
Message type: `geometry_msgs/msg/Point`

- `/webgui/force/car` - Publish to visualize the attractive force.
Message type: `geometry_msgs/msg/Point`

- `/webgui/force/obs` - Publish to visualize the repulsive force.
Message type: `geometry_msgs/msg/Point`

- `/webgui/force/avg` - Publish to visualize the resulting force.
Message type: `geometry_msgs/msg/Point`
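A possible shape for the repulsive-force computation behind `/webgui/force/obs` is sketched below; the linear weighting scheme and the `max_dist` threshold are illustrative assumptions, not the prescribed VFF formula.

```python
import math

def repulsive_force(ranges, angle_min, angle_increment, max_dist=5.0):
    """Sum per-beam pushes away from nearby obstacles (illustrative weighting)."""
    fx = fy = 0.0
    for i, r in enumerate(ranges):
        if 0.0 < r < max_dist:
            angle = angle_min + i * angle_increment
            weight = (max_dist - r) / max_dist  # closer obstacle -> stronger push
            fx -= weight * math.cos(angle)
            fy -= weight * math.sin(angle)
    return fx, fy

def publish_obstacle_force(pub, ranges, angle_min, angle_increment):
    # `pub` is a publisher previously created on /webgui/force/obs.
    from geometry_msgs.msg import Point
    fx, fy = repulsive_force(ranges, angle_min, angle_increment)
    msg = Point()
    msg.x, msg.y = fx, fy
    pub.publish(msg)
```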

#### Python
To control the loop frequency, use the standard ROS 2 mechanisms for managing loop timing:
67 changes: 67 additions & 0 deletions _pages/exercises/ComputerVision/3d_reconstruction.md

## Robot API

This exercise now supports ROS 2-direct implementation in addition to the original HAL-based approach. Below you'll find the details for both options.

### HAL-based Implementation

* `import HAL` - to import the HAL (Hardware Abstraction Layer) library class. This class contains the functions that send and receive information to and from the Hardware (Gazebo).
* `import WebGUI` - to import the WebGUI (Web Graphical User Interface) library class. This class contains the functions used to view the debugging information, like image widgets.

```python
def algorithm(self):
    # color = [1.0, 0.0, 0.0] R, G, B
    # self.point.plotPoint(position, color)
```

### ROS 2-direct Implementation

Use standard ROS 2 topics for direct communication with the simulation.

> ⚠️ Even when using ROS 2-direct, you must still import `WebGUI` if you want visualization.

#### ROS 2 Topics

- `/cam_turtlebot_left/image_raw` - Subscribe to this topic to receive the left camera image. Message type: `sensor_msgs/msg/Image`

- `/cam_turtlebot_right/image_raw` - Subscribe to this topic to receive the right camera image. Message type: `sensor_msgs/msg/Image`

- `/webgui/image_left` - Publish to this topic to display the left image in the WebGUI.
Message type: `sensor_msgs/msg/Image`
QoS: `TRANSIENT_LOCAL`, depth `10`

- `/webgui/image_right` - Publish to this topic to display the right image in the WebGUI.
Message type: `sensor_msgs/msg/Image`
QoS: `TRANSIENT_LOCAL`, depth `10`

- `/webgui/paint_matching` - Publish to this topic to enable or disable matching visualization.
Message type: `std_msgs/msg/Bool`

- `/webgui/points_new` - Publish to this topic to add newly reconstructed 3D points.
Message type: `std_msgs/msg/String`
Format: JSON list of points `[x, y, z, r, g, b]`

- `/webgui/points_all` - Publish to this topic to replace the full reconstructed point cloud.
Message type: `std_msgs/msg/String`
Format: JSON list of points

- `/webgui/image_matching` - Publish to this topic to visualize correspondences between both images.
Message type: `std_msgs/msg/Float32MultiArray`
Format: `[x1, y1, x2, y2]`

- `/webgui/clear_points` - Publish to this topic to clear all reconstructed points.
Message type: `std_msgs/msg/Empty`

#### Note
In this exercise, the 3D reconstruction is not performed through ROS 2 topics.
Functions such as `backproject()`, `project()`, `graficToOptical()`, `opticalToGrafic()` and `getCameraPosition()` are local geometric utilities based on the stereo calibration file (`/workspace/code/3d_reconstruction_conf.yml`).

#### Example: publishing new 3D points

```python
import json
from std_msgs.msg import String

# Each point is [x, y, z, r, g, b]; the values below are illustrative
points = [
    [0.5, 0.1, 1.2, 255, 0, 0],
    [0.6, 0.2, 1.1, 0, 255, 0],
]

msg = String()
msg.data = json.dumps(points)

# points_pub is a publisher previously created on /webgui/points_new
points_pub.publish(msg)
```
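Similarly, for the matching visualization a correspondence can be sent as a flat float list in the `[x1, y1, x2, y2]` layout described above. The node name and pixel coordinates below are hypothetical.

```python
def matching_msg_data(x1, y1, x2, y2):
    """Flat float list in the [x1, y1, x2, y2] layout the WebGUI expects."""
    return [float(x1), float(y1), float(x2), float(y2)]

def main():
    # ROS 2 wiring; run from a sourced ROS 2 environment.
    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import Float32MultiArray

    rclpy.init()
    node = Node("matching_debug_sketch")  # hypothetical node name
    pub = node.create_publisher(Float32MultiArray, "/webgui/image_matching", 10)

    msg = Float32MultiArray()
    msg.data = matching_msg_data(120, 80, 125, 80)  # hypothetical pixel pair
    pub.publish(msg)

    node.destroy_node()
    rclpy.shutdown()

# Call main() from a ROS 2 environment; it is not executed here.
```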

To control the loop frequency, use the standard ROS 2 mechanisms for managing loop timing:

- `rclpy.spin()` - Event-driven execution using callbacks.
- `rclpy.spin_once()` - Single-step processing, often with custom timers.
- `rclpy.Rate()` - Loop-based frequency control.

### 3D Viewer

35 changes: 35 additions & 0 deletions _pages/exercises/ComputerVision/marker_visual_loc.md
The red robot represents the user estimated position.

## Robot API

This exercise now supports ROS 2-direct implementation in addition to the original HAL-based approach. Below you'll find the details for both options.

### HAL-based Implementation

* `import HAL` - to import the HAL (Hardware Abstraction Layer) library class. This class contains the functions that send and receive information to and from the Hardware (Gazebo).
* `import WebGUI` - to import the WebGUI (Web Graphical User Interface) library class. This class contains the functions used to view the debugging information, like image widgets.
* `HAL.getImage()` - to get the image.
* `HAL.getOdom().yaw` - to get the approximated orientation position of the robot (with noise).
* `HAL.getLaserData()` - to obtain the laser sensor data, which consists of 180 pairs of values (0-180º, distance in meters).

### ROS 2-direct Implementation


#### ROS 2 Topics

Use standard ROS 2 topics for direct communication with the simulation.

> ⚠️ Even when using ROS 2-direct, you must still import `WebGUI` if you want visualization.

- `/turtlebot3/cmd_vel` - Publish to this topic to set both linear and angular velocities. Message type: `geometry_msgs/msg/Twist`

- `/turtlebot3/odom` - Subscribe to this topic to receive the robot ground-truth odometry. Message type: `nav_msgs/msg/Odometry`

- `/turtlebot3/odom_noisy` - Subscribe to this topic to receive the noisy odometry. Message type: `nav_msgs/msg/Odometry`

- `/turtlebot3/laser/scan` - Subscribe to this topic to receive laser data. Message type: `sensor_msgs/msg/LaserScan`

- `/turtlebot3/camera/image_raw` - Subscribe to this topic to receive the camera image. Message type: `sensor_msgs/msg/Image`

- `/webgui/estimated_pose` - Publish to this topic to display the estimated robot pose in the WebGUI. Message type: `geometry_msgs/msg/PoseStamped`
QoS: `TRANSIENT_LOCAL`, depth `1`

- `/webgui/image_debug` - Publish to this topic to display a debug image in the WebGUI. Message type: `sensor_msgs/msg/Image`
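For `/webgui/estimated_pose`, the estimated yaw must be encoded as a quaternion in the `PoseStamped` message. A minimal sketch follows; the node name, frame id, and pose values are illustrative assumptions.

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` radians about Z."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def main():
    # ROS 2 wiring; run from a sourced ROS 2 environment.
    import rclpy
    from rclpy.node import Node
    from rclpy.qos import QoSProfile, DurabilityPolicy
    from geometry_msgs.msg import PoseStamped

    rclpy.init()
    node = Node("pose_debug_sketch")  # hypothetical node name
    latched = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)
    pub = node.create_publisher(PoseStamped, "/webgui/estimated_pose", latched)

    pose = PoseStamped()
    pose.header.frame_id = "map"     # assumed frame
    pose.pose.position.x = 1.0       # illustrative estimate
    pose.pose.position.y = 2.0
    _, _, qz, qw = yaw_to_quaternion(0.5)
    pose.pose.orientation.z = qz
    pose.pose.orientation.w = qw
    pub.publish(pose)

    node.destroy_node()
    rclpy.shutdown()

# Call main() from a ROS 2 environment; it is not executed here.
```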

To control the loop frequency, use the standard ROS 2 mechanisms for managing loop timing:

- `rclpy.spin()` - Event-driven execution using callbacks.
- `rclpy.spin_once()` - Single-step processing, often with custom timers.
- `rclpy.Rate()` - Loop-based frequency control.

Here is an example of how to parse the laser data:

```python
# Build (angle, distance) pairs from a sensor_msgs/msg/LaserScan message
def parse_laser_data(scan):
    laser = []
    for i, dist in enumerate(scan.ranges):
        angle = scan.angle_min + i * scan.angle_increment
        laser.append((angle, dist))
    return laser
```
35 changes: 35 additions & 0 deletions _pages/exercises/ComputerVision/montecarlo_visual_loc.md

## Robot API

This exercise now supports ROS 2-direct implementation in addition to the original HAL-based approach. Below you'll find the details for both options.

### HAL-based Implementation

* `import HAL` - to import the HAL (Hardware Abstraction Layer) library class. This class contains the functions that send and receive information to and from the Hardware (Gazebo).
* `import WebGUI` - to import the WebGUI (Web Graphical User Interface) library class. This class contains the functions used to view the debugging information, like image widgets.
* `HAL.getImage()` - to get the image.
The instruction to get the image with the roof textures is:
```python
array = WebGUI.getColorMap('/resources/exercises/montecarlo_visual_loc/images/color_mapgrannyannie.png')
```

### ROS 2-direct Implementation

#### ROS 2 Topics

Use standard ROS 2 topics for direct communication with the simulation.

> ⚠️ Even when using ROS 2-direct, you must still import `WebGUI` if you want visualization.

- `/cmd_vel` - Publish to this topic to set both linear and angular velocities. Message type: `geometry_msgs/msg/Twist`

- `/odom` - Subscribe to this topic to receive the robot ground-truth odometry. Message type: `nav_msgs/msg/Odometry`

- `/odom_noisy` - Subscribe to this topic to receive the noisy odometry. Message type: `nav_msgs/msg/Odometry`

- `/roombaROS/laser/scan` - Subscribe to this topic to receive laser data. Message type: `sensor_msgs/msg/LaserScan`

- `/camera/image_raw` - Subscribe to this topic to receive the camera image. Message type: `sensor_msgs/msg/Image`

- `/webgui/estimated_pose` - Publish to this topic to display the estimated robot pose in the WebGUI. Message type: `geometry_msgs/msg/PoseStamped`
QoS: `TRANSIENT_LOCAL`, depth `1`

- `/webgui/particles` - Publish to this topic to display the particle set in the WebGUI. Message type: `geometry_msgs/msg/PoseArray`
QoS: `TRANSIENT_LOCAL`, depth `1`

- `/webgui/image_debug` - Publish to this topic to display a debug image in the WebGUI. Message type: `sensor_msgs/msg/Image`

To control the loop frequency, use the standard ROS 2 mechanisms for managing loop timing:

- `rclpy.spin()` - Event-driven execution using callbacks.
- `rclpy.spin_once()` - Single-step processing, often with custom timers.
- `rclpy.Rate()` - Loop-based frequency control.
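A sketch of publishing the particle set to `/webgui/particles` follows; the particle values are hypothetical, and the weight-normalization helper is a generic utility of Monte Carlo localization, not part of the exercise API.

```python
import math

def normalize_weights(weights):
    """Scale particle weights so they sum to 1 (uniform if all are zero)."""
    total = sum(weights)
    if total <= 0:
        n = len(weights)
        return [1.0 / n] * n
    return [w / total for w in weights]

def main():
    # ROS 2 wiring; run from a sourced ROS 2 environment.
    import rclpy
    from rclpy.node import Node
    from rclpy.qos import QoSProfile, DurabilityPolicy
    from geometry_msgs.msg import Pose, PoseArray

    rclpy.init()
    node = Node("particles_debug_sketch")  # hypothetical node name
    latched = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)
    pub = node.create_publisher(PoseArray, "/webgui/particles", latched)

    cloud = PoseArray()
    cloud.header.frame_id = "map"  # assumed frame
    for x, y, yaw in [(1.0, 2.0, 0.0), (1.1, 2.1, 0.3)]:  # hypothetical particles
        pose = Pose()
        pose.position.x, pose.position.y = x, y
        pose.orientation.z = math.sin(yaw / 2.0)
        pose.orientation.w = math.cos(yaw / 2.0)
        cloud.poses.append(pose)
    pub.publish(cloud)

    node.destroy_node()
    rclpy.shutdown()

# Call main() from a ROS 2 environment; it is not executed here.
```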

## Theory
