Work in progress: Traffic Sign Recognition for the city of Rostock.
In images taken during 360° "street view" rides, traffic signs should be detected and classified in order to later compare detections with the city's cadastre of traffic signs.
- Own images are preprocessed to filter out irrelevant images and restructure the remaining images.
- Remaining images are labeled using the annotation tool "CVAT" (manually drawing frames around traffic signs), the annotations are saved in YOLOv8 Segmentation format.
- Annotations need to be prepared for training by adjusting the paths in the annotation files.
- Annotations are used for fine-tuning a pre-trained model – enabling an existing model that can distinguish everyday objects to specifically classify traffic signs.
- The resulting model is then used during inference (application) to automatically draw frames around traffic signs in new images.
Filter & restructure source images using the `preprocessing/restructure_source_images.py` script.
- Download the source images to a local folder.
- Edit the `preprocessing/restructure_source_images.py` script to point to the source images folder.
- Run the script: `python preprocessing/restructure_source_images.py`
Using the Computer Vision Annotation Tool (CVAT).
See the list of all traffic sign classes here: annotation/all_traffic_signs/Readme.md
Assuming a Linux environment with Docker and Docker Compose installed.
- Run the installation script `annotation/install.sh`; it will download CVAT to `annotation/cvat` and ask you to create an admin user.
- Run the start script `annotation/start.sh`.
- Open the URL `http://localhost:8080` in your web browser to open CVAT.
- Inside the CVAT web interface, create a new project using the labels from `annotation/labels.tsr_example.json`. (In the future, the final labels will be provided in such a file; for now, we use this example.)
- Open the newly created project and create a new task: select a subset (type one of train/test/val manually; do not select the existing entries with slightly different spelling) and upload some of the image files (adhering to the annotation guidelines).
- Select the newly created job to start annotating (adhere to the annotation guidelines).
- Once all annotations are finished, export the annotations via "Project -> Export Dataset" in YOLOv8 Segmentation 1.0 format.
- Additionally, you may backup the project via "Project -> Backup".
- Run the stop script `annotation/stop.sh`.
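In the exported YOLOv8 Segmentation format, each image gets a text file in which every line holds a class index followed by the normalized x/y coordinates of the polygon outlining one traffic sign. A minimal parser for such a line, as a sketch (the function name is our own, not part of any library):

```python
def parse_seg_line(line: str) -> tuple[int, list[tuple[float, float]]]:
    """Parse one line of a YOLO segmentation label file:
    '<class-id> x1 y1 x2 y2 ...' with coordinates normalized to [0, 1]."""
    fields = line.split()
    class_id = int(fields[0])
    coords = [float(v) for v in fields[1:]]
    if len(coords) % 2 != 0:
        raise ValueError("expected an even number of polygon coordinates")
    # pair up (x, y) coordinates into polygon vertices
    polygon = list(zip(coords[0::2], coords[1::2]))
    return class_id, polygon
```

For example, `parse_seg_line("3 0.1 0.2 0.4 0.2 0.4 0.6")` yields class `3` and a triangle with three vertices.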
Pre-selecting traffic signs in the annotation task based on previous labeling iterations will be implemented in the future.
The export from CVAT should be unzipped and copied into a subfolder of the `datasets` folder, e.g. a subfolder named `tsr`.
Once the dataset is copied there, some paths inside the `.txt` and `.yaml` files of the export need to be adjusted.
For this, please run the `datasets/fix_path.sh` script like this, where `tsr` is the name of the subfolder you created for your dataset:
`datasets/fix_path.sh tsr`
To view the statistics of the dataset, run the `datasets/stats.py` script like this, where `tsr` is again the name of your dataset subfolder:
`python datasets/stats.py tsr`
Using YOLO. See the usage here: /recognition/README.md

