diff --git a/docs/source/docs/additional-resources/best-practices.md b/docs/source/docs/additional-resources/best-practices.md index 7dd11ecd0b..ca8e5d327e 100644 --- a/docs/source/docs/additional-resources/best-practices.md +++ b/docs/source/docs/additional-resources/best-practices.md @@ -3,27 +3,31 @@ ## Before Competition - Ensure you have spares of the relevant electronics if you can afford it (switch, coprocessor, cameras, etc.). -- Download the latest release .jar onto your computer and update your Pi if necessary (only update if the release is labeled "critical" or similar, we do not recommend updating right before an event in case there are unforeseen bugs). -- Test out PhotonVision at your home setup. -- Ensure that you have set up SmartDashboard / Shuffleboard to view your camera streams during matches. -- Follow all the recommendations under the Networking section in installation (network switch and static IP). -- Use high quality ethernet cables that have been rigorously tested. -- Set up port forwarding using the guide in the Networking section in installation. +- Stay on the latest version of PhotonVision until you have verified that your full robot system is functional. +- Some time before the competition, lock down the version you are using and do not upgrade unless you encounter a critical bug. +- Have a copy of the installation image for the version you are using on your programming laptop, in case re-imaging (without internet) is needed. +- Extensively test at your home setup. Practice tuning from scratch under different lighting conditions. +- Use SmartDashboard / Shuffleboard to view your camera streams during practice. +- Confirm you have followed all the recommendations under the Networking section in installation (network switch and static IP). +- Only use high-quality ethernet cables that have been rigorously tested. +- Set up RIO USB port forwarding using the guide in the Networking section in installation.
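The port forwarding bullet above is typically implemented in robot code with WPILib's `PortForwarder`; a minimal sketch (assuming the coprocessor is reachable at the default `photonvision.local` hostname) looks like:

```java
import edu.wpi.first.net.PortForwarder;
import edu.wpi.first.wpilibj.TimedRobot;

public class Robot extends TimedRobot {
    @Override
    public void robotInit() {
        // Forward the PhotonVision web/stream ports over the roboRIO's USB
        // interface so the UI stays reachable while tethered at competition.
        for (int port = 5800; port <= 5809; port++) {
            PortForwarder.add(port, "photonvision.local", port);
        }
    }
}
```

Forwarding the whole 5800-5809 range covers the web UI and the camera streams.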
## During the Competition -- Make sure you take advantage of the field calibration time given at the start of the event: - - Bring your robot to the field at the allotted time. - - Turn on your robot and pull up the dashboard on your driver station. - - Point your robot at the AprilTags(s) and ensure you get a consistent tracking (you hold one AprilTag consistently, the ceiling lights aren't detected, etc.). - - If you have problems with your pipeline, go to the pipeline tuning section and retune the pipeline using the guide there. - - Move the robot close, far, angled, and around the field to ensure no extra AprilTags are found. - - Go to a practice match to ensure everything is working correctly. +- Use the field calibration time given at the start of the event: + - Bring your robot to the field at the allotted time. + - Make sure the field has match-accurate lighting conditions active. + - Turn on your robot and pull up the dashboard on your driver station. + - Point your robot at the targets and ensure you get consistent tracking (each target is tracked consistently, the ceiling lights aren't detected, etc.). + - If you have problems with your pipeline, go to the pipeline tuning section and retune the pipeline using the guide there. + - Move the robot close, far, angled, and around the field to ensure no extra targets are found. + - Monitor camera feeds during a practice match to ensure everything is working correctly. - After field calibration, use the "Export Settings" button in the "Settings" page to create a backup. - - Do this for each coprocessor on your robot that runs PhotonVision, and name your exports with meaningful names. - - This will contain camera information/calibration, pipeline information, network settings, etc. - - In the event of software/hardware failures (IE lost SD Card, broken device), you can then use the "Import Settings" button and select "All Settings" to restore your settings.
- - This effectively works as a snapshot of your PhotonVision data that can be restored at any point. -- Before every match, check the ethernet connection going into your coprocessor and that it is seated fully. -- Ensure that exposure is as low as possible and that you don't have the dashboard up when you don't need it to reduce bandwidth. + - Do this for each coprocessor on your robot that runs PhotonVision, and name your exports with meaningful names. + - This will contain camera information/calibration, pipeline information, network settings, etc. + - In the event of software/hardware failures (e.g. a lost SD card or broken device), you can then use the "Import Settings" button and select "All Settings" to restore your settings. + - This effectively works as a snapshot of your PhotonVision data that can be restored at any point. +- Before every match: + - Check that the ethernet and USB connectors are seated fully. + - Close streaming dashboards when you don't need them to reduce bandwidth. - Stream at as low of a resolution as possible while still detecting AprilTags to stay within field bandwidth limits. diff --git a/docs/source/docs/additional-resources/config.md b/docs/source/docs/additional-resources/config.md index e29fb1e87c..31412ddcaf 100644 --- a/docs/source/docs/additional-resources/config.md +++ b/docs/source/docs/additional-resources/config.md @@ -48,3 +48,6 @@ A variety of files can be imported back into PhotonVision: - {code}`hardwareSettings.json` - {code}`networkSettings.json` - Useful for simple hardware or network configuration tasks without overwriting all settings.
+ + + diff --git a/docs/source/docs/advanced-installation/sw_install/romi.md b/docs/source/docs/advanced-installation/sw_install/romi.md index 6a26fff343..f26b059d1f 100644 --- a/docs/source/docs/advanced-installation/sw_install/romi.md +++ b/docs/source/docs/advanced-installation/sw_install/romi.md @@ -32,7 +32,7 @@ Next, from the SSH terminal, run `sudo nano /home/pi/runCamera` then arrow down ``` -After the Romi reboots, you should be able to open the PhotonVision UI at: [`http://10.0.0.2:5800/`](http://10.0.0.2:5800/). From here, you can adjust {ref}`Settings ` and configure {ref}`Pipelines `. +After the Romi reboots, you should be able to open the PhotonVision UI at: [`http://10.0.0.2:5800/`](http://10.0.0.2:5800/). From here, you can adjust settings and configure {ref}`Pipelines `. :::{warning} In order for settings, logs, etc. to be saved / take effect, ensure that PhotonVision is in writable mode. diff --git a/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md b/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md index d715ffd008..eb3672acd7 100644 --- a/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md +++ b/docs/source/docs/apriltag-pipelines/2D-tracking-tuning.md @@ -21,7 +21,9 @@ AprilTag pipelines come with reasonable defaults to get you up and running with ### Target Family -Target families are defined by two numbers (before and after the h). The first number is the number of bits the tag is able to encode (which means more tags are available in the respective family) and the second is the hamming distance. Hamming distance describes the ability for error correction while identifying tag ids. A high hamming distance generally means that it will be easier for a tag to be identified even if there are errors. However, as hamming distance increases, the number of available tags decreases. 
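The error-correction idea can be made concrete: the hamming distance between two payloads is just the count of differing bits. A small illustrative sketch (the bit patterns here are made up for the example, not real 36h11 codes):

```java
public class HammingDemo {
    // Hamming distance = number of differing bits between two payloads.
    static int hammingDistance(long a, long b) {
        return Long.bitCount(a ^ b);
    }

    public static void main(String[] args) {
        long validCode = 0b100101L; // hypothetical valid tag code
        long decoded   = 0b101101L; // the same code with one bit flipped
        // One bit differs, so a family with a large enough hamming distance
        // can still match this decoded payload to the intended tag id.
        System.out.println(hammingDistance(validCode, decoded)); // prints 1
    }
}
```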
The 2024 FRC game will be using 36h11 tags, which can be found [here](https://github.com/AprilRobotics/apriltag-imgs/tree/main/tag36h11). +Target families are defined by two numbers (before and after the h). The first number is the number of bits the tag is able to encode (which means more tags are available in the respective family) and the second is the hamming distance. Hamming distance describes the ability for error correction while identifying tag ids. A high hamming distance generally means that it will be easier for a tag to be identified even if there are errors. However, as hamming distance increases, the number of available tags decreases. + +The 2025 FRC game will be using 36h11 tags, which can be found [here](https://github.com/AprilRobotics/apriltag-imgs/tree/main/tag36h11). ### Decimate diff --git a/docs/source/docs/apriltag-pipelines/detector-types.md b/docs/source/docs/apriltag-pipelines/detector-types.md index 76d01b1008..0bd43f2ad2 100644 --- a/docs/source/docs/apriltag-pipelines/detector-types.md +++ b/docs/source/docs/apriltag-pipelines/detector-types.md @@ -12,4 +12,4 @@ The AprilTag pipeline type is based on the [AprilTag](https://april.eecs.umich.e ## AruCo -The AruCo pipeline is based on the [AruCo](https://docs.opencv.org/4.8.0/d9/d6a/group__aruco.html) library implementation from OpenCV. It is ~2x higher fps and ~2x lower latency than the AprilTag pipeline type, but is less accurate. We recommend this pipeline type for teams that need to run at a higher framerate or have a lower powered device. This pipeline type is new for the 2024 season and is not as well tested as AprilTag. +The AruCo pipeline is based on the [AruCo](https://docs.opencv.org/4.8.0/d9/d6a/group__aruco.html) library implementation from OpenCV. It is ~2x higher fps and ~2x lower latency than the AprilTag pipeline type, but is less accurate. 
We recommend this pipeline type for teams that need to run at a higher framerate or have a lower powered device. This pipeline type was new for the 2024 season. diff --git a/docs/source/docs/description.md b/docs/source/docs/description.md index 2d342d2bee..334272eaa3 100644 --- a/docs/source/docs/description.md +++ b/docs/source/docs/description.md @@ -2,7 +2,7 @@ ## Description -PhotonVision is a free, fast, and easy-to-use vision processing solution for the _FIRST_ Robotics Competition. PhotonVision is designed to get vision working on your robot _quickly_, without the significant cost of other similar solutions. +PhotonVision is a free, fast, and easy-to-use vision processing solution for the _FIRST_ Robotics Competition. PhotonVision is designed to get vision working on your robot _quickly_, and at a lower cost than other solutions. Using PhotonVision, teams can go from setting up a camera and coprocessor to detecting and tracking AprilTags and other targets by simply tuning sliders. With an easy to use interface, comprehensive documentation, and a feature rich vendor dependency, no experience is necessary to use PhotonVision. No matter your resources, using PhotonVision is easy compared to its alternatives. ## Advantages @@ -19,19 +19,15 @@ The PhotonVision user interface is simple and modular, making things easier for ### PhotonLib Vendor Dependency -The PhotonLib vendor dependency allows you to easily get necessary target data (without having to work directly with NetworkTables) while also providing utility methods to get distance and position on the field. This helps your team focus less on getting data and more on using it to do cool things. This also has the benefit of having a structure that ensures all data is from the same timestamp, which is helpful for latency compensation.
+The PhotonLib vendor dependency allows you to easily get necessary target data (without having to work directly with NetworkTables) while also providing utility methods to get distance and position on the field. A serialization strategy is used to guarantee data coherency, which is helpful for latency compensation. This helps your team focus less on getting data and more on using it to do cool things. ### User Calibration Using PhotonVision allows the user to calibrate for their specific camera, which will get you the best tracking results. This is extremely important as every camera (even if it is the same model) will have it's own quirks and user calibration allows for those to be accounted for. -### High FPS Processing +### Low Latency, High FPS Processing -Compared to alternative solutions, PhotonVision boasts higher frames per second which allows for a smoother video stream and detection of targets to ensure you aren't losing out on any performance. - -### Low Latency - -PhotonVision provides low latency processing to make sure you get vision measurements as fast as possible, which makes complex vision tasks easier. We guarantee that all measurements are sent from the same timestamp, making life easier for your programmers. +PhotonVision exposes specialized hardware on select coprocessors to maximize processing speed. This allows for lower-latency detection of targets to ensure you aren't losing out on any performance. ### Fully Open Source and Active Developer Community diff --git a/docs/source/docs/hardware/picamconfig.md b/docs/source/docs/hardware/picamconfig.md index 4a2226abe5..1177887fca 100644 --- a/docs/source/docs/hardware/picamconfig.md +++ b/docs/source/docs/hardware/picamconfig.md @@ -1,5 +1,7 @@ # Pi Camera Configuration +This page covers specifics about the _Raspberry Pi_ CSI camera configuration. + ## Background The Raspberry Pi CSI Camera port is routed through and processed by the GPU.
Since the GPU boots before the CPU, it must be configured properly for the attached camera. Additionally, this configuration cannot be changed without rebooting. diff --git a/docs/source/docs/hardware/selecting-hardware.md b/docs/source/docs/hardware/selecting-hardware.md index 81c09948cd..c04445b4d8 100644 --- a/docs/source/docs/hardware/selecting-hardware.md +++ b/docs/source/docs/hardware/selecting-hardware.md @@ -1,11 +1,10 @@ # Selecting Hardware :::{note} -It is highly recommended that you read the {ref}`quick start guide`, and use the hardware recommended there that -is not touched on here. +See the {ref}`quick start guide`, for latest, specific recommendations on hardware to use for PhotonVision. ::: -In order to use PhotonVision, you need a coprocessor and a camera. Other than the recommended hardware found in the {ref}`quick start guide`, this page will help you select hardware that should work for photonvision even though it is not supported/recommended. +In order to use PhotonVision, you need a coprocessor and a camera. This page discusses the specifics of why that hardware is recommended. ## Choosing a Coprocessor @@ -22,41 +21,76 @@ In order to use PhotonVision, you need a coprocessor and a camera. Other than th - Note that we only support using the Raspberry Pi's MIPI-CSI port, other MIPI-CSI ports from other coprocessors will probably not work. - Ethernet port for networking +Note these are bare minimums. Most high-performance vision processing will require higher specs. + ### Coprocessor Recommendations -When selecting a coprocessor, it is important to consider various factors, particularly when it comes to AprilTag detection. Opting for a coprocessor with a more powerful CPU can generally result in higher FPS AprilTag detection, leading to more accurate pose estimation. However, it is important to note that there is a point of diminishing returns, where the benefits of a more powerful CPU may not outweigh the additional cost. 
Other coprocessors can be used but may require some extra work / command line usage in order to get it working properly. +Vision processing on one camera stream is usually a CPU-bound operation. Some operations can be done in parallel, but not all. USB bandwidth and network data transfer also add fixed overhead. + +Faster CPUs generally result in lower latency, but eventually with diminishing returns. More cores allow for some improvement, especially if multiple camera streams are being processed. + +PhotonVision is most commonly tested on Raspbian (Debian-based) operating systems. + +Other coprocessors can be used but may require some extra work / command line usage in order to get it working properly. + +### Power Supply + +Coprocessors need a steady, regulated power supply. Under-volting the processor will result in CPU throttling, low performance, unexpected reboots, and sometimes electrical damage. Many coprocessors draw 5-10 amps of current. + +Be sure to select a power supply which regulates the robot's variable battery voltage into a steady voltage the coprocessor can use. + +### Storage Media + +Most single-board computer coprocessors use micro SD cards as their storage media. + +Three important considerations include total storage space, read/write speed, and robustness. + +PhotonVision is not usually disk-bound, other than during coprocessor boot-up and initial startup. Some disk writing is done at runtime for logging, settings, and saving camera images on command. + +Better storage space and read/write speed mostly matter if image capture is used frequently on the field. + +Industrial-grade SD cards are recommended for their stability under shock, vibration, variable voltage, and power-off. Raspberry Pi and Orange Pi coprocessors are generally robust against robot power interruptions; teams have anecdotally reported that SanDisk industrial SD cards reduce the chances of unexpected settings or log file corruption on shutdown.
+ ## Choosing a Camera -PhotonVision works with Pi Cameras and most USB Cameras. Other cameras such as webcams, virtual cameras, etc. are not officially supported and may not work. It is important to note that fisheye cameras should only be used as a driver camera / gamepeice detection and not for detecting targets / AprilTags. +PhotonVision relies on [CSCore](https://github.com/wpilibsuite/allwpilib/tree/main/cscore) to detect and process cameras, so camera support is determined based on compatibility with CSCore, along with native support for the camera within your OS (e.g. [V4L compatibility](https://en.wikipedia.org/wiki/Video4Linux)). -PhotonVision relies on [CSCore](https://github.com/wpilibsuite/allwpilib/tree/main/cscore) to detect and process cameras, so camera support is determined based off compatibility with CScore along with native support for the camera within your OS (ex. [V4L compatibility](https://en.wikipedia.org/wiki/Video4Linux) if using a Linux machine like a Raspberry Pi). +PhotonVision attempts to support most USB cameras. Exceptions include: -:::{note} -Logitech Cameras and integrated laptop cameras will not work with PhotonVision due to oddities with their drivers. We recommend using a different camera. -::: +- All Logitech brand cameras + - Logitech uses a non-standard driver which is not currently supported +- Built-in webcams + - Driver support is too varied. Some may happen to work, but most have been found to be non-functional +- Virtual cameras (OBS, Snapchat camera, etc.) + - PhotonVision assumes the camera has real physical hardware to control; these do not expose the minimum number of controls. + +Use caution when using multiple identical cameras, as only the physical USB port they are plugged into can differentiate them. PhotonVision provides a "strict matching" setting which can reduce errors related to identical cameras.
Arducam has a [tool that allows for identical cameras to be renamed](https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/) by their physical location or purpose. -:::{note} -We do not currently support the usage of two of the same camera on the same coprocessor. You can only use two or more cameras if they are of different models or they are from Arducam, which has a [tool that allows for cameras to be renamed](https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/). -::: ### Cameras Attributes -For colored shape detection, any non-fisheye camera supported by PhotonVision will work. We recommend a high fps USB camera. +For colored shape detection, any non-fisheye camera supported by PhotonVision will work. + +For driver camera, we recommend a USB camera with a fisheye lens, so your driver can see more of the field. Use the minimum acceptable resolution to help keep latency low. -For driver camera, we recommend a USB camera with a fisheye lens, so your driver can see more of the field. +For AprilTag detection, we recommend you use a camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency. -For AprilTag detection, we recommend you use a global shutter camera that has ~100 degree diagonal FOV. This will allow you to see more AprilTags in frame, and will allow for more accurate pose estimation. You also want a camera that supports high FPS, as this will allow you to update your pose estimator at a higher frequency. +For object detection, we recommend a USB camera. Some fisheye lenses may be ok, but very wide angle cameras may distort the gamepiece beyond recognition. -Another cause of image distortion is 'rolling shutter.' 
This occurs when the camera captures pixels sequentially from top to bottom, which can also lead to distortion if the camera or object is moving. +Global shutter cameras are recommended in all cases, to reduce rolling-shutter image shear while the robot is moving. ```{image} images/rollingshutter.gif :align: center ``` +Cameras capable of capturing a good image with very short exposures will also help reduce image blur. Usually, high-FPS-capable cameras designed for computer vision are better at this than "consumer-grade" USB webcams. + ### Using Multiple Cameras -Using multiple cameras on your robot will help you detect more AprilTags at once and improve your pose estimation as a result. In order to use multiple cameras, you will need to create multiple PhotonPoseEstimators and add all of their measurements to a single drivetrain pose estimator. Please note that the accuracy of your robot to camera transform is especially important when using multiple cameras as any error in the transform will cause your pose estimations to "fight" each other. For more information, see {ref}`the programming reference. `. +Keeping the target(s) in view of the robot often requires more than one camera. PhotonVision has no hardcoded limit on the number of cameras supported. The limit is usually dependent on CPU (can all frames be processed fast enough?) and USB bandwidth (can all cameras send their images without overwhelming the bus?). + +Note that cameras are not synchronized together. Frames are captured and processed asynchronously. Robot code must fuse estimates together. For more information, see {ref}`the programming reference. `.
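The fusion step described above is commonly structured like the following sketch. `fieldLayout`, `robotToLeftCam`, and `driveEstimator` are assumed to exist elsewhere in your robot code, the camera name is an example, and PhotonLib signatures vary slightly between seasons, so treat this as a shape rather than a drop-in implementation:

```java
// Sketch: fusing one camera's asynchronous estimates into a drivetrain
// pose estimator; repeat the same pattern for each additional camera.
PhotonCamera leftCam = new PhotonCamera("Photon-OrangePi-Left"); // example name
PhotonPoseEstimator leftEstimator = new PhotonPoseEstimator(
    fieldLayout, PoseStrategy.MULTI_TAG_PNP_ON_COPROCESSOR, robotToLeftCam);

// Called from a periodic method in robot code:
for (var result : leftCam.getAllUnreadResults()) {
    leftEstimator.update(result).ifPresent(est ->
        // Each estimate carries its own capture timestamp, since frames
        // from different cameras are not synchronized.
        driveEstimator.addVisionMeasurement(
            est.estimatedPose.toPose2d(), est.timestampSeconds));
}
```

An accurate robot-to-camera transform matters here: any error in `robotToLeftCam` shows up directly in the fused pose.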
## Performance Matrix diff --git a/docs/source/docs/objectDetection/about-object-detection.md b/docs/source/docs/objectDetection/about-object-detection.md index d9ecdf87b6..25bd1e0dde 100644 --- a/docs/source/docs/objectDetection/about-object-detection.md +++ b/docs/source/docs/objectDetection/about-object-detection.md @@ -4,12 +4,14 @@ PhotonVision supports object detection using neural network accelerator hardware built into Orange Pi 5/5+ coprocessors. The Neural Processing Unit, or NPU, is [used by PhotonVision](https://github.com/PhotonVision/rknn_jni/tree/main) to massively accelerate certain math operations like those needed for running ML-based object detection. -For the 2024 season, PhotonVision ships with a **pre-trained NOTE detector** (shown above), as well as a mechanism for swapping in custom models. Future development will focus on enabling lower friction management of multiple custom models. +For the 2024 season, PhotonVision shipped with a **pre-trained NOTE detector** (shown above), as well as a mechanism for swapping in custom models. Future development will focus on enabling lower friction management of multiple custom models. ```{image} images/notes-ui.png ``` +For the 2025 season, we intend to release a new trained model once gamepiece data is released. + ## Tracking Objects Before you get started with object detection, ensure that you have followed the previous sections on installation, wiring, and networking. Next, open the Web UI, go to the top right card, and switch to the “Object Detection” type. You should see a screen similar to the image above. diff --git a/docs/source/docs/quick-start/common-setups.md b/docs/source/docs/quick-start/common-setups.md index 888e270828..590a0c7497 100644 --- a/docs/source/docs/quick-start/common-setups.md +++ b/docs/source/docs/quick-start/common-setups.md @@ -1,49 +1,46 @@ # Common Hardware Setups +PhotonVision requires dedicated hardware, above and beyond a roboRIO. 
This page lists hardware that is frequently used with PhotonVision. + ## Coprocessors +- Orange Pi 5 4GB + - Supports up to 2 object detection streams, along with 2 AprilTag streams at 1280x800 (30fps). +- Raspberry Pi 5 2GB + - Supports up to 2 AprilTag streams at 1280x800 (30fps). + :::{note} The Orange Pi 5 is the only currently supported device for object detection. ::: -- Orange Pi 5 4GB - - Able to process two object detection streams at once while also processing 1 to 2 AprilTag streams at 1280x800 (30fps). -- Raspberry Pi 5 2GB - - A good cheaper option. Doesn't support object detection. Able to process 2 AprilTag streams at 1280x800 (30fps). - ## SD Cards +- 8GB or larger micro SD card + :::{important} -It is highly recommended that you use an industrial micro SD card, as they offer far greater protection against corruption from improper shutdowns, like most cards -face every time the robot is turned off. +Industrial-grade SD cards from major manufacturers are recommended for robotics applications; for example, the SanDisk SDSDQAF3-016G-I. ::: -- 8GB or larger micro SD card - - Many teams have found that an industrial micro sd card are much more stable in competition. One example is the SanDisk industrial 16GB micro SD card. - ## Cameras -- AprilTag +Innomaker and Arducam are common manufacturers of hardware designed specifically for vision processing. - - Innomaker or Arducam OV9281 UVC USB cameras. +- AprilTag Detection + - OV9281 - Object Detection - - - Arducam OV9782 works well with its global shutter. - - Most other fixed-focus color UVC USB webcams. + - OV9782 - Driver Camera - OV9281 - OV9782 - Pi Camera Module V1 {ref}`(More setup info)` - - Most other fixed-focus UVC USB webcams + +Feel free to get started with any color webcam you have sitting around. ## Power - Pololu S13V30F5 Regulator - - - Wide power range input. Recommended by many teams.
- - Redux Robotics Zinc-V Regulator - - Recently released for the 2025 season, offering reliable and easy integration. +See {ref}`(Selecting Hardware)` for info on why these are recommended. diff --git a/docs/source/docs/quick-start/networking.md b/docs/source/docs/quick-start/networking.md index bbf30fa7b7..12a189976a 100644 --- a/docs/source/docs/quick-start/networking.md +++ b/docs/source/docs/quick-start/networking.md @@ -11,7 +11,7 @@ When using PhotonVision off robot, you _MUST_ plug the coprocessor into a physic :::{tab-item} New Radio (2025 - present) ```{danger} -Ensure that DIP switches 1 and 2 are turned off; otherwise, the radio PoE feature will fry your coprocessor. [More info.](https://frc-radio.vivid-hosting.net/getting-started/passive-power-over-ethernet-poe-for-downstream-devices) +Ensure that the radio's DIP switches 1 and 2 are turned off; otherwise, the radio PoE feature may electrically destroy your coprocessor. [More info.](https://frc-radio.vivid-hosting.net/getting-started/passive-power-over-ethernet-poe-for-downstream-devices) ``` ```{image} images/networking-diagram-vividhosting.png @@ -33,13 +33,13 @@ PhotonVision _STRONGLY_ recommends the usage of a network switch on your robot. ## Network Hostname -Rename each device from the default "Photonvision" to a unique hostname (e.g., "Photon-OrangePi-Left" or "Photon-RPi5-Back"). This helps differentiate multiple coprocessors on your network, making it easier to manage them. Navigate to the settings page and scroll down to the network section. You will find the hostname is set to "photonvision" by default, this can only contain letters (A-Z), numeric characters (0-9), and the minus sign (-). +Rename each device from the default "photonvision" to a unique hostname (e.g., "Photon-OrangePi-Left" or "Photon-RPi5-Back"). This helps differentiate multiple coprocessors on your network, making it easier to manage them. Navigate to the settings page and scroll down to the network section. 
You will find the hostname is set to "photonvision" by default; it can only contain letters (A-Z), numeric characters (0-9), and the minus sign (-). ```{image} images/editHostname.png :alt: The hostname can be edited in the settings page under the network section. ``` -## Digital Networking +## Robot Networking PhotonVision _STRONGLY_ recommends the usage of Static IPs as it increases reliability on the field and when using PhotonVision in general. To properly set up your static IP, follow the steps below: @@ -61,6 +61,8 @@ Power-cycle your robot and then you will now be access the PhotonVision dashboard ```{image} images/static.png :alt: Correctly set static IP ``` +The "team number" field will accept (in addition to a team number) an IP address or hostname. This is useful for testing PhotonVision on the same computer as a simulated robot program; +you can set the team number to "localhost", and PhotonVision will send data to the network tables in the simulated robot. ## Port Forwarding diff --git a/docs/source/docs/settings.md b/docs/source/docs/settings.md deleted file mode 100644 index d35657c326..0000000000 --- a/docs/source/docs/settings.md +++ /dev/null @@ -1,23 +0,0 @@ -# Settings - -```{image} assets/settings.png -``` - -## General - -Here, you can view general data on your system, including version, hardware, your platform, and performance statistics. You can also export/import the settings in a .zip file or restart PhotonVision/your coprocessor. - -## Networking - -Here, you can set your team number, switch your IP between DHCP and static, and specify your host name. For more information about on-robot networking, click [here.](https://docs.wpilib.org/en/latest/docs/networking/networking-introduction/networking-basics.html) - -The "team number" field will accept (in addition to a team number) an IP address or hostname.
This is useful for testing PhotonVision on the same computer as a simulated robot program; -you can set the team number to "localhost", and PhotonVision will send data to the network tables in the simulated robot. - -:::{note} -Something must be entered into the team number field if using PhotonVision on a robot. Using a team number is recommended (as opposed to an IP address or hostname). -::: - -## LEDs - -If your coprocessor electronics support hardware-controlled LED's and has the proper hardware configuration set up, here you can adjust the brightness of your LEDs. diff --git a/docs/source/docs/troubleshooting/networking-troubleshooting.md b/docs/source/docs/troubleshooting/networking-troubleshooting.md index 2600015a1f..1eb47d2db5 100644 --- a/docs/source/docs/troubleshooting/networking-troubleshooting.md +++ b/docs/source/docs/troubleshooting/networking-troubleshooting.md @@ -10,16 +10,11 @@ A few issues make up the majority of support requests. Run through this checklis - Ethernet straight from a laptop to a coprocessor will not work (most likely), due to the unreliability of link-local connections. - Even if there's a switch between your laptop and coprocessor, you'll still want a radio or router in the loop somehow. - The FRC radio is the _only_ router we will officially support due to the innumerable variations between routers. -- (Raspberry Pi, Orange Pi & Limelight only) have you flashed the correct image, and is it up to date? - - Limelights 2/2+ should be flashed using the Limelight 2 image (eg, `photonvision-v2024.2.8-linuxarm64_limelight2.img.xz`). - - Limelights 3 should be flashed using the Limelight 3 image (eg, `photonvision-v2024.2.8-linuxarm64_limelight3.img.xz`). - - Raspberry Pi devices (including Pi 3, Pi 4, CM3 and CM4) should be flashed using the Raspberry Pi image (eg, `photonvision-v2024.2.8-linuxarm64_RaspberryPi.img.xz`). 
- - Orange Pi 5 devices should be flashed using the Orange Pi 5 image (eg, `photonvision-v2024.2.8-linuxarm64_orangepi5.img.xz`). - - Orange Pi 5+ devices should be flashed using the Orange Pi 5+ image (eg, `photonvision-v2024.2.8-linuxarm64_orangepi5plus.img.xz`). -- Is your robot code using a **2024** version of WPILib, and is your coprocessor using the most up to date **2024** release? - - 2022, 2023 and 2024 versions of either cannot be mix-and-matched! - - Your PhotonVision version can be checked on the {ref}`settings tab`. -- Is your team number correctly set on the {ref}`settings tab`? +- (Raspberry Pi, Orange Pi & Limelight only) have you flashed the correct image, and is it [up to date](https://github.com/PhotonVision/photonvision/releases/latest)? +- Is your robot code using a **2025** version of WPILib, and is your coprocessor using the most up to date **2025** release? + - 2022, 2023, 2024, and 2025 versions of either cannot be mix-and-matched! + - Your PhotonVision version can be checked on the settings tab. +- Is your team number correctly set on the settings tab? ### photonvision.local Not Found @@ -35,7 +30,7 @@ Please check that: 1\. You don't have the NetworkTables Server on (toggleable in the settings tab). Turn this off when doing work on a robot. 2\. You have your team number set properly in the settings tab. 3\. Your camera name in the `PhotonCamera` constructor matches the name in the UI. -4\. You are using the 2024 version of WPILib and RoboRIO image. +4\. You are using the 2025 version of WPILib and RoboRIO image. 5\. Your robot is on. 
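For the camera-name check in item 3, the string passed to the `PhotonCamera` constructor must match the camera's name in the web UI character-for-character (the name below is only an example):

```java
// "Photon-OrangePi-Left" must exactly match the camera name shown in the
// PhotonVision UI, including case and any dashes.
PhotonCamera camera = new PhotonCamera("Photon-OrangePi-Left");
```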
If all of the above are met and you still have issues, feel free to {ref}`contact us ` and provide the following information: diff --git a/docs/source/index.md b/docs/source/index.md index f85b07ef76..8917825487 100644 --- a/docs/source/index.md +++ b/docs/source/index.md @@ -91,7 +91,6 @@ docs/description docs/quick-start/index docs/hardware/index docs/advanced-installation/index -docs/settings ``` ```{toctree} diff --git a/photonlib-java-examples/aimandrange/WPILib-License.md b/photonlib-java-examples/aimandrange/WPILib-License.md index 645e54253a..d744196fe9 100644 --- a/photonlib-java-examples/aimandrange/WPILib-License.md +++ b/photonlib-java-examples/aimandrange/WPILib-License.md @@ -1,4 +1,4 @@ -Copyright (c) 2009-2024 FIRST and other WPILib contributors +Copyright (c) 2009-2025 FIRST and other WPILib contributors All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/photonlib-java-examples/aimattarget/WPILib-License.md b/photonlib-java-examples/aimattarget/WPILib-License.md index 645e54253a..d744196fe9 100644 --- a/photonlib-java-examples/aimattarget/WPILib-License.md +++ b/photonlib-java-examples/aimattarget/WPILib-License.md @@ -1,4 +1,4 @@ -Copyright (c) 2009-2024 FIRST and other WPILib contributors +Copyright (c) 2009-2025 FIRST and other WPILib contributors All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/photonlib-java-examples/poseest/WPILib-License.md b/photonlib-java-examples/poseest/WPILib-License.md index 645e54253a..d744196fe9 100644 --- a/photonlib-java-examples/poseest/WPILib-License.md +++ b/photonlib-java-examples/poseest/WPILib-License.md @@ -1,4 +1,4 @@ -Copyright (c) 2009-2024 FIRST and other WPILib contributors +Copyright (c) 2009-2025 FIRST and other WPILib contributors All rights reserved. Redistribution and use in source and binary forms, with or without