diff --git a/.styleguide b/.styleguide
index 94517f39bd..7259f0408f 100644
--- a/.styleguide
+++ b/.styleguide
@@ -19,6 +19,7 @@ modifiableFileExclude {
   \.webp$
   \.ico$
   \.rknn$
+  \.tflite$
   \.mp4$
   \.ttf$
   \.woff2$
diff --git a/build.gradle b/build.gradle
index 3ec1fd50b9..9e242c1baf 100644
--- a/build.gradle
+++ b/build.gradle
@@ -40,6 +40,7 @@ ext {
     javalinVersion = "5.6.2"
     libcameraDriverVersion = "v2025.0.3"
     rknnVersion = "dev-v2025.0.0-1-g33b6263"
+    rubikVersion = "v2025.1.0"

     frcYear = "2025"
     mrcalVersion = "v2025.0.0";
diff --git a/docs/.styleguide b/docs/.styleguide
index 6236e908de..a3bf6f6ac3 100644
--- a/docs/.styleguide
+++ b/docs/.styleguide
@@ -11,6 +11,7 @@ modifiableFileExclude {
   \.webp$
   \.ico$
   \.rknn$
+  \.tflite$
   \.svg$
   \.woff2$
   gradlew
diff --git a/docs/source/docs/objectDetection/about-object-detection.md b/docs/source/docs/objectDetection/about-object-detection.md
index bc3510c1ce..3b062c952f 100644
--- a/docs/source/docs/objectDetection/about-object-detection.md
+++ b/docs/source/docs/objectDetection/about-object-detection.md
@@ -2,7 +2,7 @@

 ## How does it work?

-PhotonVision supports object detection using neural network accelerator hardware built into Orange Pi 5/5+ coprocessors. Please note that the Orange Pi 5/5+ are the only coprocessors that are currently supported. The Neural Processing Unit, or NPU, is [used by PhotonVision](https://github.com/PhotonVision/rknn_jni/tree/main) to massively accelerate certain math operations like those needed for running ML-based object detection.
+PhotonVision supports object detection using neural network accelerator hardware, commonly known as an NPU. The two coprocessors currently supported are the {ref}`Orange Pi 5 ` and the {ref}`Rubik Pi 3 `.
 PhotonVision currently ships with a model trained on the [COCO dataset](https://cocodataset.org/) by [Ultralytics](https://github.com/ultralytics/ultralytics) (this model is licensed under [AGPLv3](https://www.gnu.org/licenses/agpl-3.0.en.html)). This model is meant to be used for testing and other miscellaneous purposes. It is not meant to be used in competition. For the 2025 post-season, PhotonVision also ships with a pretrained ALGAE model. A model to detect coral is available in the PhotonVision discord, but will not be distributed with PhotonVision.

@@ -18,7 +18,7 @@ This model output means that while its fairly easy to say that "this rectangle p

 ## Tuning and Filtering

-Compared to other pipelines, object detection exposes very few tuning handles. The Confidence slider changes the minimum confidence that the model needs to have in a given detection to consider it valid, as a number between 0 and 1 (with 0 meaning completely uncertain and 1 meaning maximally certain). The Non-Maximum Suppresion (NMS) Threshold slider is used to filter out overlapping detections. Lower values mean more detections are allowed through, but may result in false positives. It's generally recommended that teams leave this set at the default, unless they find they're unable to get usable results with solely the Confidence slider.
+Compared to other pipelines, object detection exposes very few tuning handles. The Confidence slider changes the minimum confidence that the model needs to have in a given detection to consider it valid, as a number between 0 and 1 (with 0 meaning completely uncertain and 1 meaning maximally certain). The Non-Maximum Suppression (NMS) Threshold slider is used to filter out overlapping detections. Higher values mean more detections are allowed through, but may result in false positives. It's generally recommended that teams leave this set at the default, unless they find they're unable to get usable results with solely the Confidence slider.
 ```{raw} html
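
For reviewers wanting a concrete picture of what the rewritten tuning paragraph describes: the Confidence slider and NMS Threshold slider amount to a greedy confidence filter followed by non-maximum suppression. The sketch below is illustrative only, not PhotonVision's actual implementation; the names `iou` and `filter_detections` and the default thresholds are assumptions.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def filter_detections(dets, conf_threshold=0.5, nms_threshold=0.45):
    # dets: list of (box, confidence). Drop low-confidence detections,
    # then greedily keep the highest-confidence box and suppress any
    # later box whose overlap with a kept box exceeds nms_threshold.
    # A higher nms_threshold lets more overlapping boxes through.
    dets = [d for d in dets if d[1] >= conf_threshold]
    dets.sort(key=lambda d: d[1], reverse=True)
    kept = []
    for box, conf in dets:
        if all(iou(box, k[0]) <= nms_threshold for k in kept):
            kept.append((box, conf))
    return kept
```

This makes the corrected wording in the hunk above easy to check: raising `nms_threshold` relaxes the overlap test, so more (possibly duplicate) detections survive.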