diff --git a/docs/README.MD b/docs/README.MD
index 1e7a067ea9..017204cb8a 100644
--- a/docs/README.MD
+++ b/docs/README.MD
@@ -6,4 +6,4 @@
 PhotonVision is a free open-source vision processing software for FRC teams. This repository is the source code for our ReadTheDocs documentation, which can be found [here](https://docs.photonvision.org).
 
-[Contribution and formatting guidelines for this project](https://docs.photonvision.org/en/latest/docs/contributing/photonvision-docs/index.html)
+[Contribution and formatting guidelines for this project](https://docs.photonvision.org/en/latest/docs/contributing/index.html)
diff --git a/docs/source/docs/apriltag-pipelines/about-apriltags.md b/docs/source/docs/apriltag-pipelines/about-apriltags.md
index 0eba914f8e..f96ec5c229 100644
--- a/docs/source/docs/apriltag-pipelines/about-apriltags.md
+++ b/docs/source/docs/apriltag-pipelines/about-apriltags.md
@@ -10,5 +10,5 @@ AprilTags are a common type of visual fiducial marker. Visual fiducial markers a
 A more technical explanation can be found in the [WPILib documentation](https://docs.wpilib.org/en/latest/docs/software/vision-processing/apriltag/apriltag-intro.html).
 
 :::{note}
-You can get FIRST's [official PDF of the targets used in 2024 here](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf).
+You can get FIRST's [official PDF of the targets used in 2025 here](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf).
 :::
diff --git a/docs/source/docs/apriltag-pipelines/multitag.md b/docs/source/docs/apriltag-pipelines/multitag.md
index da5169fb04..40bb4178d0 100644
--- a/docs/source/docs/apriltag-pipelines/multitag.md
+++ b/docs/source/docs/apriltag-pipelines/multitag.md
@@ -51,7 +51,7 @@ The returned field to camera transform is a transform from the fixed field origi
 
 ## Updating the Field Layout
 
-PhotonVision ships by default with the [2024 field layout JSON](https://github.com/wpilibsuite/allwpilib/blob/main/apriltag/src/main/native/resources/edu/wpi/first/apriltag/2024-crescendo.json). The layout can be inspected by navigating to the settings tab and scrolling down to the "AprilTag Field Layout" card, as shown below.
+PhotonVision ships by default with the [2025 field layout JSON](https://github.com/wpilibsuite/allwpilib/blob/main/apriltag/src/main/native/resources/edu/wpi/first/apriltag/2025-reefscape.json). The layout can be inspected by navigating to the settings tab and scrolling down to the "AprilTag Field Layout" card, as shown below.
 
 ```{image} images/field-layout.png
 :alt: The currently saved field layout in the Photon UI
diff --git a/docs/source/docs/examples/aimingatatarget.md b/docs/source/docs/examples/aimingatatarget.md
index 1217c2ff63..5cf50ea864 100644
--- a/docs/source/docs/examples/aimingatatarget.md
+++ b/docs/source/docs/examples/aimingatatarget.md
@@ -7,7 +7,7 @@ The following example is from the PhotonLib example repository ([Java](https://g
 
 - A Robot
 - A camera mounted rigidly to the robot's frame, cenetered and pointed forward.
 - A coprocessor running PhotonVision with an AprilTag or Aurco 2D Pipeline.
-- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.
+- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.
 
 ## Code
diff --git a/docs/source/docs/objectDetection/about-object-detection.md b/docs/source/docs/objectDetection/about-object-detection.md
index 25bd1e0dde..4d55ede355 100644
--- a/docs/source/docs/objectDetection/about-object-detection.md
+++ b/docs/source/docs/objectDetection/about-object-detection.md
@@ -4,13 +4,7 @@
 PhotonVision supports object detection using neural network accelerator hardware built into Orange Pi 5/5+ coprocessors. The Neural Processing Unit, or NPU, is [used by PhotonVision](https://github.com/PhotonVision/rknn_jni/tree/main) to massively accelerate certain math operations like those needed for running ML-based object detection.
 
-For the 2024 season, PhotonVision shipped with a **pre-trained NOTE detector** (shown above), as well as a mechanism for swapping in custom models. Future development will focus on enabling lower friction management of multiple custom models.
-
-```{image} images/notes-ui.png
-
-```
-
-For the 2025 season, we intend to release a new trained model once gamepiece data is released.
+For the 2025 season, PhotonVision does not currently ship with a pre-trained detector. If teams are interested in using object detection, they can follow the custom process outlined {ref}`below `.
 
 ## Tracking Objects
 
@@ -49,6 +43,4 @@ Coming soon!
 PhotonVision currently ONLY supports YOLOv5 models trained and converted to `.rknn` format for RK3588 CPUs! Other models require different post-processing code and will NOT work. The model conversion process is also highly particular. Proceed with care.
 :::
-Our [pre-trained NOTE model](https://github.com/PhotonVision/photonvision/blob/main/photon-server/src/main/resources/models/note-640-640-yolov5s.rknn) is automatically extracted from the JAR when PhotonVision starts, only if a file named “note-640-640-yolov5s.rknn” and "labels.txt" does not exist in the folder `photonvision_config/models/`. This technically allows power users to replace the model and label files with new ones without rebuilding Photon from source and uploading a new JAR.
-
-Use a program like WinSCP or FileZilla to access your coprocessor's filesystem, and copy the new `.rknn` model file into /home/pi. Next, SSH into the coprocessor and `sudo mv /path/to/new/model.rknn /opt/photonvision/photonvision_config/models/note-640-640-yolov5s.rknn`. Repeat this process with the labels file, which should contain one line per label the model outputs with no training newline. Next, restart PhotonVision via the web UI.
+Use a program like WinSCP or FileZilla to access your coprocessor's filesystem, and copy the new `.rknn` model file into /home/pi. Next, SSH into the coprocessor and `sudo mv /path/to/new/model.rknn /opt/photonvision/photonvision_config/models/NEW-MODEL-NAME.rknn`. Repeat this process with the labels file, which should contain one line per label the model outputs with no trailing newline. Next, restart PhotonVision via the web UI.
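
The labels-file format in the patched docs above is easy to get wrong: the file must contain one label per line with no trailing newline. A pre-upload sanity check could look like the sketch below; `check_labels` is a hypothetical helper for illustration, not part of PhotonVision or of the patch itself:

```python
def check_labels(raw: bytes) -> list[str]:
    """Validate a labels file: one label per line, no trailing newline.

    Returns the label list, or raises ValueError describing the problem.
    """
    if raw.endswith(b"\n"):
        raise ValueError("labels file must not end with a trailing newline")
    labels = raw.decode("utf-8").split("\n")
    if any(not label.strip() for label in labels):
        raise ValueError("labels file contains an empty or blank line")
    return labels


# Example: a well-formed three-label file passes and yields the label list.
print(check_labels(b"note\ncube\ncone"))  # → ['note', 'cube', 'cone']
```

Running this against the bytes of your `labels.txt` before copying it to the coprocessor catches the trailing-newline mistake locally, rather than after a PhotonVision restart.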