@@ -15,7 +15,7 @@ SSH into the Raspberry Pi (using Windows command line, or a tool like [Putty](ht
:::{attention}
The version of WPILibPi for the Romi is 2023.2.1, which is not compatible with the current version of PhotonVision. **If you are using WPILibPi 2023.2.1 on your Romi, you must install PhotonVision v2023.4.2 or earlier!**

-To install a compatible version of PhotonVision, enter these commands in the SSH terminal connected to the Raspberry Pi. This will download and run the install script, which will intall PhotonVision on your Raspberry Pi and configure it to run at startup.
+To install a compatible version of PhotonVision, enter these commands in the SSH terminal connected to the Raspberry Pi. This will download and run the install script, which will install PhotonVision on your Raspberry Pi and configure it to run at startup.

```bash
$ wget https://git.io/JJrEP -O install.sh
```
@@ -50,7 +50,7 @@ When a new camera (ie, one we can't match by-path to a deserialized CameraConfig

## Startup:

-- GIVEN An emtpy set of deserialized Camera Configurations
+- GIVEN An empty set of deserialized Camera Configurations
<br>WHEN PhotonVision starts
<br>THEN no VisionModules will be started

@@ -72,12 +72,12 @@ When a new camera (ie, one we can't match by-path to a deserialized CameraConfig

## Camera (re)enumeration:

-- GIVEN a NEW USB CAMERA is avaliable for enumeration
+- GIVEN a NEW USB CAMERA is available for enumeration
<br>WHEN a USB camera is discovered by VisionSourceManager
<br>AND the USB camera's VIDEO DEVICE PATH is not in the set of DESERIALIZED CAMERA CONFIGURATIONS
<br>THEN a UNIQUE NAME will be assigned to the camera info

-- GIVEN a NEW USB CAMERA is avaliable for enumeration
+- GIVEN a NEW USB CAMERA is available for enumeration
<br>WHEN a USB camera is discovered by VisionSourceManager
<br>AND the USB camera's VIDEO DEVICE PATH is in the set of DESERIALIZED CAMERA CONFIGURATIONS
<br>THEN a UNIQUE NAME equal to the matching DESERIALIZED CAMERA CONFIGURATION will be assigned to the camera info
@@ -86,13 +86,13 @@ When a new camera (ie, one we can't match by-path to a deserialized CameraConfig
## Creating from a new camera

- Given: A UNIQUE NAME from a NEW USB CAMERA
-<br>WHEN I request a new VisionModule is created for this NEW USB CAMREA
+<br>WHEN I request a new VisionModule is created for this NEW USB CAMERA
<br>AND the camera has a VALID USB PATH
<br>AND the camera's VALID USB PATH is not in use by any CURRENTLY ACTIVE CAMERAS
<br>THEN a NEW VisionModule will be started for the NEW USB CAMERA using the VALID USB PATH

- Given: A UNIQUE NAME from a NEW USB CAMERA
-<br>WHEN I request a new VisionModule is created for this NEW USB CAMREA
+<br>WHEN I request a new VisionModule is created for this NEW USB CAMERA
<br>AND the camera does not have a VALID USB PATH
<br>AND the camera's VIDEO DEVICE PATH is not in use by any CURRENTLY ACTIVE CAMERAS
<br>THEN a NEW VisionModule will be started for the NEW USB CAMERA using the VIDEO DEVICE PATH
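The unique-name rule these scenarios describe (assign the camera's name, appending a suffix until it no longer collides with an existing configuration) can be sketched standalone. The helper below is hypothetical, not PhotonVision's actual implementation:

```java
import java.util.HashSet;
import java.util.Set;

public class UniqueNameSketch {
    // Hypothetical helper: appends " (1)", " (2)", ... until the candidate
    // no longer collides with an already-assigned camera name.
    static String uniqueName(String base, Set<String> taken) {
        String candidate = base;
        int suffix = 1;
        while (taken.contains(candidate)) {
            candidate = base + " (" + suffix++ + ")";
        }
        return candidate;
    }

    public static void main(String[] args) {
        Set<String> taken = new HashSet<>(Set.of("OV9782", "OV9782 (1)"));
        System.out.println(uniqueName("OV9782", taken)); // OV9782 (2)
        System.out.println(uniqueName("C270", taken));   // C270
    }
}
```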
@@ -3,7 +3,7 @@

## A primer on time

-Expecially starting around 2022 with AprilTags making localization easier, providing a way to know when a camera image was captured at became more important for localization.
+Especially starting around 2022 with AprilTags making localization easier, providing a way to know when a camera image was captured became more important for localization.
Since the [creation of USBFrameProvider](https://github.com/PhotonVision/photonvision/commit/f92bf670ded52b59a00352a4a49c277f01bae305), we used the time [provided by CSCore](https://github.wpilib.org/allwpilib/docs/release/java/edu/wpi/first/cscore/CvSink.html#grabFrame(org.opencv.core.Mat)) to tell when a camera image was captured at, but just keeping track of "CSCore told us frame N was captured 104.21s after the Raspberry Pi turned on" isn't very helpful. We can decompose this into asking:

- At what time was a particular image captured at, in the coprocessor's timebase?
@@ -29,13 +29,13 @@ WPILib's CSCore is a platform-agnostic wrapper around Windows, Linux, and MacOS

Prior to https://github.com/wpilibsuite/allwpilib/pull/7609, CSCore used the [time it dequeued the buffer at](https://github.com/wpilibsuite/allwpilib/blob/17a03514bad6de195639634b3d57d5ac411d601e/cscore/src/main/native/linux/UsbCameraImpl.cpp#L559) as the image capture time. But this doesn't account for exposure time or latency introduced by the camera + USB stack + Linux itself.

-V4L does expose (with some [very heavy caviets](https://github.com/torvalds/linux/blob/fc033cf25e612e840e545f8d5ad2edd6ba613ed5/drivers/media/usb/uvc/uvc_video.c#L600) for some troublesome cameras) its best guess at the time an image was captured at via [buffer flags](https://www.kernel.org/doc/html/v4.9/media/uapi/v4l/buffer.html#buffer-flags). In my testing, all my cameras were able to provide timestamps with both these flags set:
+V4L does expose (with some [very heavy caveats](https://github.com/torvalds/linux/blob/fc033cf25e612e840e545f8d5ad2edd6ba613ed5/drivers/media/usb/uvc/uvc_video.c#L600) for some troublesome cameras) its best guess at the time an image was captured via [buffer flags](https://www.kernel.org/doc/html/v4.9/media/uapi/v4l/buffer.html#buffer-flags). In my testing, all my cameras were able to provide timestamps with both these flags set:
- `V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC`: The buffer timestamp has been taken from the CLOCK_MONOTONIC clock [...] accessible via `clock_gettime()`.
- `V4L2_BUF_FLAG_TSTAMP_SRC_SOE`: Start Of Exposure. The buffer timestamp has been taken when the exposure of the frame has begun.

I'm sure that we'll find a camera that doesn't play nice, because we can't have nice things :). But until then, using this timestamp gets us a free accuracy bump.

-Other things to note: This gets us an estimate at when the camera *started* collecting photons. The camera's sensor will remain collecitng light for up to the total integration time, plus readout time for rolling shutter cameras.
+Other things to note: this gets us an estimate of when the camera *started* collecting photons. The camera's sensor will keep collecting light for up to the total integration time, plus readout time for rolling shutter cameras.
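Once V4L hands us a start-of-exposure timestamp on the coprocessor's `CLOCK_MONOTONIC`, relating it to another timebase is just offset arithmetic using the clock offset that time synchronization measures. A minimal sketch — all names and numbers here are hypothetical, not PhotonVision's real API:

```java
public class TimebaseSketch {
    // Convert a capture time on the coprocessor's monotonic clock into the
    // robot's timebase, given the measured offset (robot clock minus
    // coprocessor clock), all in microseconds.
    static long toRobotTimeMicros(long captureMonotonicUs, long robotMinusCoprocUs) {
        return captureMonotonicUs + robotMinusCoprocUs;
    }

    public static void main(String[] args) {
        // e.g. "captured 104.21 s after boot", robot clock measured 3.5 s ahead
        long captureUs = 104_210_000L;
        long offsetUs = 3_500_000L;
        System.out.println(toRobotTimeMicros(captureUs, offsetUs)); // 107710000
    }
}
```

Latency at use time is then simply the robot's current time minus this converted capture time.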

## Latency Testing

@@ -105,7 +105,7 @@ public class Robot extends TimedRobot {
```
</details>

-I've decreased camera exposure as much as possible (so we know with reasonable confidence that the image was collected right at the start of the exposure time reported by V4L), but we only get back new images at 60fps. So we don't know when between frame N and N+1 the LED turned on - just that somtime between now and 1/60th of a second a go, the LED turned on.
+I've decreased camera exposure as much as possible (so we know with reasonable confidence that the image was collected right at the start of the exposure time reported by V4L), but we only get back new images at 60fps. So we don't know when between frame N and N+1 the LED turned on - just that sometime between now and 1/60th of a second ago, the LED turned on.

The test coprocessor was an Orange Pi 5 running a PhotonVision 2025 (Ubuntu 24.04 based) image, with an ArduCam OV9782 at 1280x800, 60fps, MJPG running a reflective pipeline.

@@ -133,4 +133,4 @@ With the camera capturing at 60fps, the time between successive frames is only ~

### Future Work

-This test also makes no effort to isolate error from time syncronization from error introduced by frame time measurement - we're just interested in overall error. Future work could investigate the latency contribution
+This test also makes no effort to isolate error from time synchronization from error introduced by frame time measurement - we're just interested in overall error. Future work could investigate the latency contribution
@@ -76,7 +76,7 @@ Communication between server and clients shall occur over the User Datagram Prot

## Message Format

-The message format forgoes CRCs (as these are provided by the Ethernet physical layer) or packet delimination (as our packetsa are assumed be under the network MTU). **TSP Ping** and **TSP Pong** messages shall be encoded in a manor compatible with a WPILib packed struct with respect to byte alignment and endienness.
+The message format forgoes CRCs (as these are provided by the Ethernet physical layer) or packet delineation (as our packets are assumed to be under the network MTU). **TSP Ping** and **TSP Pong** messages shall be encoded in a manner compatible with a WPILib packed struct with respect to byte alignment and endianness.
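As a sketch of what "packed struct, little-endian, no CRC or framing" means in practice, the encoder below packs a ping-like message with `ByteBuffer`. The field layout here is hypothetical — the actual TSP Ping fields are specified in the section below:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class TspPingSketch {
    // Hypothetical layout: 1-byte version, 1-byte message id, u64 client time.
    // Packed (no padding) and little-endian, matching WPILib struct conventions.
    static byte[] encodePing(byte version, byte messageId, long clientTimeUs) {
        ByteBuffer buf = ByteBuffer.allocate(1 + 1 + 8).order(ByteOrder.LITTLE_ENDIAN);
        buf.put(version);
        buf.put(messageId);
        buf.putLong(clientTimeUs);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] packet = encodePing((byte) 1, (byte) 1, 1234L);
        System.out.println(packet.length); // 10 bytes, far under a typical 1500-byte MTU
    }
}
```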

### TSP Ping

@@ -98,7 +98,7 @@ The message format forgoes CRCs (as these are provided by the Ethernet physical

## Optional Protocol Extensions

-Clients may publish statistics to NetworkTables. If they do, they shall publish to a key that is globally unique per participant in the Time Synronization network. If a client implements this, it shall provide the following publishers:
+Clients may publish statistics to NetworkTables. If they do, they shall publish to a key that is globally unique per participant in the Time Synchronization network. If a client implements this, it shall provide the following publishers:

| Key | Type | Notes |
| ------ | ------ | ---- |
2 changes: 1 addition & 1 deletion docs/source/docs/description.md
@@ -27,7 +27,7 @@ Using PhotonVision allows the user to calibrate for their specific camera, which

### Low Latency, High FPS Processing

-PhotonVision exposes specalized hardware on select coprocessors to maximize processing speed. This allows for lower-latency detection of targets to ensure you aren't losing out on any performance.
+PhotonVision exposes specialized hardware on select coprocessors to maximize processing speed. This allows for lower-latency detection of targets to ensure you aren't losing out on any performance.

### Fully Open Source and Active Developer Community

4 changes: 2 additions & 2 deletions docs/source/docs/examples/aimingatatarget.md
@@ -5,8 +5,8 @@ The following example is from the PhotonLib example repository ([Java](https://g
## Knowledge and Equipment Needed

- A Robot
-- A camera mounted rigidly to the robot's frame, cenetered and pointed forward.
-- A coprocessor running PhotonVision with an AprilTag or Aurco 2D Pipeline.
+- A camera mounted rigidly to the robot's frame, centered and pointed forward.
+- A coprocessor running PhotonVision with an AprilTag or Aruco 2D Pipeline.
- [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.

## Code
@@ -106,7 +106,7 @@ You can get a [translation](https://docs.wpilib.org/en/latest/docs/software/adva
.. code-block:: C++

// Calculate a translation from the camera to the target.
-frc::Translation2d translation = photonlib::PhotonUtils::EstimateCameraToTargetTranslationn(
+frc::Translation2d translation = photonlib::PhotonUtils::EstimateCameraToTargetTranslation(
distance, frc::Rotation2d(units::degree_t(-target.GetYaw())));

.. code-block:: Python
2 changes: 1 addition & 1 deletion docs/source/docs/quick-start/arducam-cameras.md
@@ -17,6 +17,6 @@ Arducam cameras are supported for setups with multiple devices. This is possible
3. **Save Settings**: Ensure that you save the settings after selecting the appropriate camera model for each device.

```{image} images/setArducamModel.png
-:alt: The camera model can be selected from the Arudcam model selector in the cameras tab
+:alt: The camera model can be selected from the Arducam model selector in the cameras tab
:align: center
```
@@ -19,7 +19,7 @@

/**
* Add migrations by adding the SQL commands for each migration sequentially to this array. DO NOT
-* edit or delete existing SQL commands. That will lead to producing an icompatible database.
+* edit or delete existing SQL commands. That will lead to producing an incompatible database.
*
* <p>You can use multiple SQL statements in one migration step as long as you separate them with a
* semicolon (;).
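The append-only migration array this comment describes might look like the following. This is illustrative only — the table names and SQL are invented, not PhotonVision's actual schema:

```java
public class MigrationsSketch {
    // Illustrative only. New entries are appended at the end; existing
    // entries must never be edited or deleted, or old databases become
    // incompatible with the expected migration history.
    static final String[] MIGRATIONS = {
        // migration 0: original schema
        "CREATE TABLE cameras (unique_name TEXT PRIMARY KEY, config_json TEXT);",
        // migration 1: two statements in one step, separated by a semicolon
        "ALTER TABLE cameras ADD COLUMN otherpaths_json TEXT; "
            + "CREATE TABLE global (filename TEXT PRIMARY KEY, contents BLOB);",
    };

    public static void main(String[] args) {
        System.out.println(MIGRATIONS.length); // 2
    }
}
```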
@@ -28,7 +28,7 @@
* Our blessed images inject the current version via this build workflow:
* https://github.com/PhotonVision/photon-image-modifier/blob/2e5ddb6b599df0be921c12c8dbe7b939ecd7f615/.github/workflows/main.yml#L67
*
-* <p>This class provides a convienent abstraction around this
+* <p>This class provides a convenient abstraction around this
*/
public class OsImageVersion {
private static final Logger logger = new Logger(OsImageVersion.class, LogGroup.General);
@@ -31,7 +31,7 @@ public enum CameraQuirk {
/** Separate red/blue gain controls available */
@JsonAlias("AWBGain") // remove after https://github.com/PhotonVision/photonvision/issues/1488
AwbRedBlueGain,
-/** Will not work with photonvision - Logitec C270 at least */
+/** Will not work with photonvision - Logitech C270 at least */
CompletelyBroken,
/** Has adjustable focus and autofocus switch */
AdjustableFocus,
@@ -31,9 +31,9 @@ public class QuirkyCamera {
// SeeCam, which has an odd exposure range
new QuirkyCamera(
0x2560, 0xc128, "See3Cam_24CUG", CameraQuirk.Gain, CameraQuirk.See3Cam_24CUG),
-// Chris's older generic "Logitec HD Webcam"
+// Chris's older generic "Logitech HD Webcam"
new QuirkyCamera(0x9331, 0x5A3, CameraQuirk.CompletelyBroken),
-// Logitec C270
+// Logitech C270
new QuirkyCamera(0x825, 0x46D, CameraQuirk.CompletelyBroken),
// A laptop internal camera someone found broken
new QuirkyCamera(0x0bda, 0x5510, CameraQuirk.CompletelyBroken),
@@ -180,7 +180,7 @@ public void remakeSettables() {
// And update the settables' FrameStaticProps
settables.setVideoMode(oldVideoMode);

-// Propogate our updated settables over to the frame provider
+// Propagate our updated settables over to the frame provider
((USBFrameProvider) this.usbFrameProvider).updateSettables(this.settables);
}

@@ -145,7 +145,7 @@ public void overrideTakeSnapshot() {
}

/**
-* Returns the match Data collected from the NT. eg : Q58 for qualfication match 58. If not in
+* Returns the match Data collected from the NT. eg : Q58 for qualification match 58. If not in
* event, returns N/A-0-EVENTNAME
*/
private String getMatchData() {
@@ -444,7 +444,7 @@ boolean setPipeline(int index) {
return false;
}

-visionRunner.runSyncronously(
+visionRunner.runSynchronously(
() -> {
settables.setVideoModeInternal(pipelineSettings.cameraVideoModeIndex);
settables.setBrightness(pipelineSettings.cameraBrightness);
@@ -206,7 +206,7 @@ public void startCalibration(Map<String, Object> data) {
parentModule.startCalibration(deserialized);
parentModule.saveAndBroadcastAll();
} catch (Exception e) {
-logger.error("Error deserailizing start-calibration request", e);
+logger.error("Error deserializing start-calibration request", e);
}
}

@@ -92,7 +92,7 @@ public void stopProcess() {
}
}

-public Future<Void> runSyncronously(Runnable runnable) {
+public Future<Void> runSynchronously(Runnable runnable) {
CompletableFuture<Void> future = new CompletableFuture<>();

synchronized (runnableList) {
@@ -109,7 +109,7 @@ public Future<Void> runSyncronously(Runnable runnable) {
return future;
}
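The pattern renamed here (enqueue a task for the vision thread, return a `CompletableFuture` it completes) can be exercised standalone. The sketch below substitutes a trivial drain loop for the real VisionRunner processing thread — it illustrates the mechanism, not PhotonVision's full class:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Future;

public class RunSyncSketch {
    private final Queue<Runnable> runnableList = new ArrayDeque<>();

    // Same shape as the method above: wrap the caller's task so that the
    // processing thread completes the future once the task has actually run.
    public Future<Void> runSynchronously(Runnable runnable) {
        CompletableFuture<Void> future = new CompletableFuture<>();
        synchronized (runnableList) {
            runnableList.add(
                () -> {
                    runnable.run();
                    future.complete(null);
                });
        }
        return future;
    }

    // Stand-in for one iteration of the vision processing loop.
    void drainOnce() {
        synchronized (runnableList) {
            Runnable r;
            while ((r = runnableList.poll()) != null) r.run();
        }
    }

    public static void main(String[] args) throws Exception {
        RunSyncSketch runner = new RunSyncSketch();
        Future<Void> done = runner.runSynchronously(() -> System.out.println("settings applied"));
        new Thread(runner::drainOnce).start();
        done.get(); // blocks until the "vision thread" has executed the task
    }
}
```

Callers like `setPipeline` use this to apply camera settings on the processing thread without racing an in-flight frame.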

-public <T> Future<T> runSyncronously(Callable<T> callable) {
+public <T> Future<T> runSynchronously(Callable<T> callable) {
CompletableFuture<T> future = new CompletableFuture<>();

synchronized (runnableList) {
@@ -366,7 +366,7 @@ private static List<PVCameraInfo> filterAllowedDevices(List<PVCameraInfo> allDev
* CameraConfiguration}'s matchedCameraInfo. We depend on the underlying {@link VisionSource} to
* be robust to disconnected sources at boot
*
-* <p>Verify that nickname is unique within the set of desesrialized camera configurations, adding
+* <p>Verify that nickname is unique within the set of deserialized camera configurations, adding
* random characters if this isn't the case
*/
protected VisionSource loadVisionSourceFromCamConfig(CameraConfiguration configuration) {
@@ -70,7 +70,7 @@ public void onCameraConnected() {
public abstract void setGain(int gain);

// Pretty uncommon so instead of abstract this is just a no-op by default
-// Overriddenn by cameras with AWB gain support
+// Overridden by cameras with AWB gain support
public void setRedGain(int red) {}

public void setBlueGain(int blue) {}
@@ -287,7 +287,7 @@ public static void calibrateCommon(
}

/**
-* Uses a given camera coefficientss matrix set to "undistort" every image file found in a given
+* Uses a given camera coefficients matrix set to "undistort" every image file found in a given
* directory and display them. Provides an easy way to visually debug the results of the
* calibration routine. Seems to play havoc with CI and takes a chunk of time, so shouldn't
* usually be left active in tests.
@@ -139,7 +139,7 @@ public void testUndistortImagePointsWithRotation(@Enum ImageRotationMode rot) {

Point[] originalPoints = {new Point(100, 100), new Point(200, 200), new Point(300, 100)};

-// Distort the origonal points
+// Distort the original points
var distortedOriginalPoints =
OpenCVHelp.distortPoints(
List.of(originalPoints),
@@ -172,9 +172,9 @@ public static Pose2d estimateFieldToCamera(Transform2d cameraToTarget, Pose2d fi
* Estimates the pose of the robot in the field coordinate system, given the pose of the fiducial
* tag, the robot relative to the camera, and the target relative to the camera.
*
-* @param fieldRelativeTagPose Pose3D the field relative pose of the target
 * @param cameraToRobot Transform3D of the robot relative to the camera. Origin of the robot is
 * defined as the center.
+* @param fieldRelativeTagPose Pose3D the field relative pose of the target
* @param cameraToTarget Transform3D of the target relative to the camera, returned by
* PhotonVision
* @return Transform3d Robot position relative to the field
@@ -122,7 +122,7 @@ public void testTimeSyncServerWithPhotonCamera() throws InterruptedException, IO

private static Stream<Arguments> testNtOffsets() {
return Stream.of(
-// various initializaiton orders
+// various initialization orders
Arguments.of(1, 10, 30, 30),
Arguments.of(10, 2, 30, 30),
Arguments.of(10, 10, 30, 30),
2 changes: 1 addition & 1 deletion photon-serde/README.md
@@ -14,7 +14,7 @@ Like Rosmsg. But worse.

The code for a single type is split across 3 files. Let's look at PnpResult:
- [The struct definition](src/struct/pnpresult_struct.h): This is the data the object holds. Auto-generated. The data this object holds can be primitives or other, fully-deserialized types (like Vec2)
-- [The user class](src/targeting/pnpresult_struct.h): This is the fully-deserialized PnpResult type. This contains extra functions users might need to expose like `Amgiguity`, or other computed helper things.
+- [The user class](src/targeting/pnpresult_struct.h): This is the fully-deserialized PnpResult type. This contains extra functions users might need to expose like `Ambiguity`, or other computed helper things.
- [The serde interface](src/serde/pnpresult_struct.h): This is a template specialization for converting the user class to/from bytes

## Prior art