2 changes: 1 addition & 1 deletion docs/README.MD
@@ -6,4 +6,4 @@ PhotonVision is a free open-source vision processing software for FRC teams.

This repository is the source code for our ReadTheDocs documentation, which can be found [here](https://docs.photonvision.org).

[Contribution and formatting guidelines for this project](https://docs.photonvision.org/en/latest/docs/contributing/photonvision-docs/index.html)
[Contribution and formatting guidelines for this project](https://docs.photonvision.org/en/latest/docs/contributing/index.html)
2 changes: 1 addition & 1 deletion docs/source/docs/apriltag-pipelines/about-apriltags.md
@@ -10,5 +10,5 @@ AprilTags are a common type of visual fiducial marker. Visual fiducial markers a
A more technical explanation can be found in the [WPILib documentation](https://docs.wpilib.org/en/latest/docs/software/vision-processing/apriltag/apriltag-intro.html).

:::{note}
You can get FIRST's [official PDF of the targets used in 2024 here](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf).
You can get FIRST's [official PDF of the targets used in 2025 here](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf).
:::
2 changes: 1 addition & 1 deletion docs/source/docs/apriltag-pipelines/multitag.md
@@ -51,7 +51,7 @@ The returned field to camera transform is a transform from the fixed field origi

## Updating the Field Layout

PhotonVision ships by default with the [2024 field layout JSON](https://github.com/wpilibsuite/allwpilib/blob/main/apriltag/src/main/native/resources/edu/wpi/first/apriltag/2024-crescendo.json). The layout can be inspected by navigating to the settings tab and scrolling down to the "AprilTag Field Layout" card, as shown below.
PhotonVision ships by default with the [2025 field layout JSON](https://github.com/wpilibsuite/allwpilib/blob/main/apriltag/src/main/native/resources/edu/wpi/first/apriltag/2025-reefscape.json). The layout can be inspected by navigating to the settings tab and scrolling down to the "AprilTag Field Layout" card, as shown below.

```{image} images/field-layout.png
:alt: The currently saved field layout in the Photon UI
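One way to sanity-check a layout file outside the UI is to look up a tag's pose directly from the JSON. The sketch below assumes the standard WPILib layout schema (a `tags` array of `{ID, pose}` entries); the function name and sample values are hypothetical, for illustration only:

```typescript
// Shape of one entry in a WPILib-style AprilTag field layout JSON (assumed schema).
interface TagEntry {
  ID: number;
  pose: { translation: { x: number; y: number; z: number } };
}

// Look up a tag's field-relative translation by ID; returns null if the tag is absent.
function findTagTranslation(layout: { tags: TagEntry[] }, id: number) {
  const tag = layout.tags.find((t) => t.ID === id);
  return tag ? tag.pose.translation : null;
}

// Tiny hypothetical layout for illustration -- real values come from the shipped JSON.
const layout = {
  tags: [{ ID: 7, pose: { translation: { x: 1.5, y: 2.0, z: 0.5 } } }]
};

console.log(findTagTranslation(layout, 7)); // { x: 1.5, y: 2, z: 0.5 }
console.log(findTagTranslation(layout, 99)); // null
```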
2 changes: 1 addition & 1 deletion docs/source/docs/examples/aimingatatarget.md
@@ -7,7 +7,7 @@ The following example is from the PhotonLib example repository ([Java](https://g
- A Robot
- A camera mounted rigidly to the robot's frame, centered and pointed forward.
- A coprocessor running PhotonVision with an AprilTag or Aruco 2D Pipeline.
- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2024/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.
- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.

## Code

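The collapsed code section drives the robot's rotation from the target's yaw. The core idea can be sketched as plain proportional control; the gain and sign convention below are illustrative assumptions, not the tuned values from the PhotonLib example:

```typescript
// Proportional turn: command a rotation opposite to the measured yaw error,
// so the robot rotates toward the target. kP is an illustrative gain only.
function aimRotationSpeed(targetYawDegrees: number, kP = 0.25): number {
  return -kP * targetYawDegrees;
}

// A target 10 degrees to the right yields a negative (leftward) rotation command.
console.log(aimRotationSpeed(10)); // -2.5
console.log(aimRotationSpeed(-10)); // 2.5
```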
12 changes: 2 additions & 10 deletions docs/source/docs/objectDetection/about-object-detection.md
@@ -4,13 +4,7 @@

PhotonVision supports object detection using neural network accelerator hardware built into Orange Pi 5/5+ coprocessors. The Neural Processing Unit, or NPU, is [used by PhotonVision](https://github.com/PhotonVision/rknn_jni/tree/main) to massively accelerate certain math operations like those needed for running ML-based object detection.

For the 2024 season, PhotonVision shipped with a **pre-trained NOTE detector** (shown above), as well as a mechanism for swapping in custom models. Future development will focus on enabling lower friction management of multiple custom models.

```{image} images/notes-ui.png

```

For the 2025 season, we intend to release a new trained model once gamepiece data is released.
For the 2025 season, PhotonVision does not currently ship with a pre-trained detector. If teams are interested in using object detection, they can follow the custom process outlined {ref}`below <docs/objectDetection/about-object-detection:Uploading Custom Models>`.

## Tracking Objects

@@ -49,6 +43,4 @@ Coming soon!
PhotonVision currently ONLY supports YOLOv5 models trained and converted to `.rknn` format for RK3588 CPUs! Other models require different post-processing code and will NOT work. The model conversion process is also highly particular. Proceed with care.
:::

Our [pre-trained NOTE model](https://github.com/PhotonVision/photonvision/blob/main/photon-server/src/main/resources/models/note-640-640-yolov5s.rknn) is automatically extracted from the JAR when PhotonVision starts, only if a file named “note-640-640-yolov5s.rknn” and "labels.txt" does not exist in the folder `photonvision_config/models/`. This technically allows power users to replace the model and label files with new ones without rebuilding Photon from source and uploading a new JAR.

Use a program like WinSCP or FileZilla to access your coprocessor's filesystem, and copy the new `.rknn` model file into /home/pi. Next, SSH into the coprocessor and `sudo mv /path/to/new/model.rknn /opt/photonvision/photonvision_config/models/note-640-640-yolov5s.rknn`. Repeat this process with the labels file, which should contain one line per label the model outputs, with no trailing newline. Next, restart PhotonVision via the web UI.
Use a program like WinSCP or FileZilla to access your coprocessor's filesystem, and copy the new `.rknn` model file into /home/pi. Next, SSH into the coprocessor and `sudo mv /path/to/new/model.rknn /opt/photonvision/photonvision_config/models/NEW-MODEL-NAME.rknn`. Repeat this process with the labels file, which should contain one line per label the model outputs, with no trailing newline. Next, restart PhotonVision via the web UI.
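The labels-file rules above (one label per line, no trailing newline) and the model/labels matching-name convention used elsewhere in this PR can be expressed as small helpers. This is a sketch under those assumptions; the function names are made up for illustration and are not part of PhotonVision:

```typescript
// Derive the conventional labels filename from an .rknn model filename,
// e.g. "foo.rknn" -> "foo-labels.txt" (assumed convention, per this PR's dialog text).
function labelsFileNameFor(rknnFileName: string): string {
  return rknnFileName.replace(/\.rknn$/, "") + "-labels.txt";
}

// Serialize labels one per line with no trailing newline, per the docs above.
function serializeLabels(labels: string[]): string {
  return labels.join("\n");
}

console.log(labelsFileNameFor("foo.rknn")); // foo-labels.txt
console.log(serializeLabels(["note", "robot"])); // note\nrobot
```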
98 changes: 98 additions & 0 deletions photon-client/src/components/settings/DeviceControlCard.vue
@@ -202,6 +202,57 @@ const handleSettingsImport = () => {
importFile.value = null;
};

const showObjectDetectionImportDialog = ref(false);
const importRKNNFile = ref<File | null>(null);
const importLabelsFile = ref<File | null>(null);

const handleObjectDetectionImport = () => {
if (importRKNNFile.value === null || importLabelsFile.value === null) return;

const formData = new FormData();
formData.append("rknn", importRKNNFile.value);
formData.append("labels", importLabelsFile.value);

useStateStore().showSnackbarMessage({
message: "Importing Object Detection Model...",
color: "secondary",
timeout: -1
});

axios
.post("/utils/importObjectDetectionModel", formData, {
headers: { "Content-Type": "multipart/form-data" }
})
.then((response) => {
useStateStore().showSnackbarMessage({
message: response.data.text || response.data,
color: "success"
});
})
.catch((error) => {
if (error.response) {
useStateStore().showSnackbarMessage({
color: "error",
message: error.response.data.text || error.response.data
});
} else if (error.request) {
useStateStore().showSnackbarMessage({
color: "error",
message: "Error while trying to process the request! The backend didn't respond."
});
} else {
useStateStore().showSnackbarMessage({
color: "error",
message: "An error occurred while trying to process the request."
});
}
});

showObjectDetectionImportDialog.value = false;
importRKNNFile.value = null;
importLabelsFile.value = null;
};

const showFactoryReset = ref(false);
const expected = "Delete Everything";
const yesDeleteMySettingsText = ref("");
@@ -355,6 +406,53 @@ const nukePhotonConfigDirectory = () => {
</v-btn>
</v-col>
</v-row>
<v-row>
<v-col cols="12">
<v-btn color="secondary" @click="() => (showObjectDetectionImportDialog = true)">
<v-icon left class="open-icon"> mdi-import </v-icon>
<span class="open-label">Import Object Detection Model</span>
</v-btn>
<v-dialog
v-model="showObjectDetectionImportDialog"
width="600"
@input="
() => {
importRKNNFile = null;
importLabelsFile = null;
}">
<v-card color="primary" dark>
<v-card-title>Import Object Detection Model</v-card-title>
<v-card-text>
Upload a new object detection model to this device that can be used in a pipeline.
By convention, the labels file should have the same name as the RKNN file, with -labels appended.
For example, if the RKNN file is named <i>foo.rknn</i>, the labels file should be named <i>foo-labels.txt</i>.
<v-row class="mt-6 ml-4 mr-8">
<v-file-input
label="RKNN File"
v-model="importRKNNFile"
accept=".rknn"
/>
</v-row>
<v-row class="mt-6 ml-4 mr-8">
<v-file-input
label="Labels File"
v-model="importLabelsFile"
accept=".txt"
/>
</v-row>
<v-row
class="mt-12 ml-8 mr-8 mb-1"
style="display: flex; align-items: center; justify-content: center"
align="center">
<v-btn color="secondary" :disabled="importRKNNFile === null || importLabelsFile === null" @click="handleObjectDetectionImport">
<v-icon left class="open-icon"> mdi-import </v-icon>
<span class="open-label">Import Object Detection Model</span>
</v-btn>
</v-row>
</v-card-text>
</v-card>
</v-dialog>
</v-row>
<v-divider class="mt-3 pb-3" />
<v-row>
<v-col cols="12">