diff --git a/docs/source/docs/advanced-installation/sw_install/romi.md b/docs/source/docs/advanced-installation/sw_install/romi.md index f26b059d1f..25b24455ab 100644 --- a/docs/source/docs/advanced-installation/sw_install/romi.md +++ b/docs/source/docs/advanced-installation/sw_install/romi.md @@ -15,7 +15,7 @@ SSH into the Raspberry Pi (using Windows command line, or a tool like [Putty](ht :::{attention} The version of WPILibPi for the Romi is 2023.2.1, which is not compatible with the current version of PhotonVision. **If you are using WPILibPi 2023.2.1 on your Romi, you must install PhotonVision v2023.4.2 or earlier!** -To install a compatible version of PhotonVision, enter these commands in the SSH terminal connected to the Raspberry Pi. This will download and run the install script, which will intall PhotonVision on your Raspberry Pi and configure it to run at startup. +To install a compatible version of PhotonVision, enter these commands in the SSH terminal connected to the Raspberry Pi. This will download and run the install script, which will install PhotonVision on your Raspberry Pi and configure it to run at startup. ```bash $ wget https://git.io/JJrEP -O install.sh diff --git a/docs/source/docs/contributing/design-descriptions/camera-matching.md b/docs/source/docs/contributing/design-descriptions/camera-matching.md index 75ab3b8c2b..3476849349 100644 --- a/docs/source/docs/contributing/design-descriptions/camera-matching.md +++ b/docs/source/docs/contributing/design-descriptions/camera-matching.md @@ -50,7 +50,7 @@ When a new camera (ie, one we can't match by-path to a deserialized CameraConfig ## Startup: -- GIVEN An emtpy set of deserialized Camera Configurations +- GIVEN An empty set of deserialized Camera Configurations
WHEN PhotonVision starts
THEN no VisionModules will be started @@ -72,12 +72,12 @@ When a new camera (ie, one we can't match by-path to a deserialized CameraConfig ## Camera (re)enumeration: -- GIVEN a NEW USB CAMERA is avaliable for enumeration +- GIVEN a NEW USB CAMERA is available for enumeration
WHEN a USB camera is discovered by VisionSourceManager
AND the USB camera's VIDEO DEVICE PATH is not in the set of DESERIALIZED CAMERA CONFIGURATIONS
THEN a UNIQUE NAME will be assigned to the camera info -- GIVEN a NEW USB CAMERA is avaliable for enumeration +- GIVEN a NEW USB CAMERA is available for enumeration
WHEN a USB camera is discovered by VisionSourceManager
AND the USB camera's VIDEO DEVICE PATH is in the set of DESERIALIZED CAMERA CONFIGURATIONS
THEN a UNIQUE NAME equal to the matching DESERIALIZED CAMERA CONFIGURATION will be assigned to the camera info @@ -86,13 +86,13 @@ When a new camera (ie, one we can't match by-path to a deserialized CameraConfig ## Creating from a new camera - Given: A UNIQUE NAME from a NEW USB CAMERA -
WHEN I request a new VisionModule is created for this NEW USB CAMREA +
WHEN I request a new VisionModule is created for this NEW USB CAMERA
AND the camera has a VALID USB PATH
AND the camera's VALID USB PATH is not in use by any CURRENTLY ACTIVE CAMERAS
THEN a NEW VisionModule will be started for the NEW USB CAMERA using the VALID USB PATH - Given: A UNIQUE NAME from a NEW USB CAMERA -
WHEN I request a new VisionModule is created for this NEW USB CAMREA +
WHEN I request a new VisionModule is created for this NEW USB CAMERA
AND the camera does not have a VALID USB PATH
AND the camera's VIDEO DEVICE PATH is not in use by any CURRENTLY ACTIVE CAMERAS
THEN a NEW VisionModule will be started for the NEW USB CAMERA using the VIDEO DEVICE PATH diff --git a/docs/source/docs/contributing/design-descriptions/e2e-latency.md b/docs/source/docs/contributing/design-descriptions/e2e-latency.md index 3417626d05..7d81b59b44 100644 --- a/docs/source/docs/contributing/design-descriptions/e2e-latency.md +++ b/docs/source/docs/contributing/design-descriptions/e2e-latency.md @@ -3,7 +3,7 @@ ## A primer on time -Expecially starting around 2022 with AprilTags making localization easier, providing a way to know when a camera image was captured at became more important for localization. +Especially starting around 2022 with AprilTags making localization easier, providing a way to know when a camera image was captured at became more important for localization. Since the [creation of USBFrameProvider](https://github.com/PhotonVision/photonvision/commit/f92bf670ded52b59a00352a4a49c277f01bae305), we used the time [provided by CSCore](https://github.wpilib.org/allwpilib/docs/release/java/edu/wpi/first/cscore/CvSink.html#grabFrame(org.opencv.core.Mat)) to tell when a camera image was captured at, but just keeping track of "CSCore told us frame N was captured 104.21s after the Raspberry Pi turned on" isn't very helpful. We can decompose this into asking: - At what time was a particular image captured at, in the coprocessor's timebase? @@ -29,13 +29,13 @@ WPILib's CSCore is a platform-agnostic wrapper around Windows, Linux, and MacOS Prior to https://github.com/wpilibsuite/allwpilib/pull/7609, CSCore used the [time it dequeued the buffer at](https://github.com/wpilibsuite/allwpilib/blob/17a03514bad6de195639634b3d57d5ac411d601e/cscore/src/main/native/linux/UsbCameraImpl.cpp#L559) as the image capture time. But this doesn't account for exposure time or latency introduced by the camera + USB stack + Linux itself. 
-V4L does expose (with some [very heavy caviets](https://github.com/torvalds/linux/blob/fc033cf25e612e840e545f8d5ad2edd6ba613ed5/drivers/media/usb/uvc/uvc_video.c#L600) for some troublesome cameras) its best guess at the time an image was captured at via [buffer flags](https://www.kernel.org/doc/html/v4.9/media/uapi/v4l/buffer.html#buffer-flags). In my testing, all my cameras were able to provide timestamps with both these flags set: +V4L does expose (with some [very heavy caveats](https://github.com/torvalds/linux/blob/fc033cf25e612e840e545f8d5ad2edd6ba613ed5/drivers/media/usb/uvc/uvc_video.c#L600) for some troublesome cameras) its best guess at the time an image was captured at via [buffer flags](https://www.kernel.org/doc/html/v4.9/media/uapi/v4l/buffer.html#buffer-flags). In my testing, all my cameras were able to provide timestamps with both these flags set: - `V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC`: The buffer timestamp has been taken from the CLOCK_MONOTONIC clock [...] accessible via `clock_gettime()`. - `V4L2_BUF_FLAG_TSTAMP_SRC_SOE`: Start Of Exposure. The buffer timestamp has been taken when the exposure of the frame has begun. I'm sure that we'll find a camera that doesn't play nice, because we can't have nice things :). But until then, using this timestamp gets us a free accuracy bump. -Other things to note: This gets us an estimate at when the camera *started* collecting photons. The camera's sensor will remain collecitng light for up to the total integration time, plus readout time for rolling shutter cameras. +Other things to note: This gets us an estimate at when the camera *started* collecting photons. The camera's sensor will remain collecting light for up to the total integration time, plus readout time for rolling shutter cameras. 
## Latency Testing @@ -105,7 +105,7 @@ public class Robot extends TimedRobot { ``` -I've decreased camera exposure as much as possible (so we know with reasonable confidence that the image was collected right at the start of the exposure time reported by V4L), but we only get back new images at 60fps. So we don't know when between frame N and N+1 the LED turned on - just that somtime between now and 1/60th of a second a go, the LED turned on. +I've decreased camera exposure as much as possible (so we know with reasonable confidence that the image was collected right at the start of the exposure time reported by V4L), but we only get back new images at 60fps. So we don't know when between frame N and N+1 the LED turned on - just that sometime between now and 1/60th of a second ago, the LED turned on. The test coprocessor was an Orange Pi 5 running a PhotonVision 2025 (Ubuntu 24.04 based) image, with an ArduCam OV9782 at 1280x800, 60fps, MJPG running a reflective pipeline. @@ -133,4 +133,4 @@ With the camera capturing at 60fps, the time between successive frames is only ~ ### Future Work -This test also makes no effort to isolate error from time syncronization from error introduced by frame time measurement - we're just interested in overall error. Future work could investigate the latency contribution +This test also makes no effort to isolate error from time synchronization from error introduced by frame time measurement - we're just interested in overall error.
Future work could investigate the latency contribution diff --git a/docs/source/docs/contributing/design-descriptions/time-sync.md b/docs/source/docs/contributing/design-descriptions/time-sync.md index 507adaa922..6d051c5660 100644 --- a/docs/source/docs/contributing/design-descriptions/time-sync.md +++ b/docs/source/docs/contributing/design-descriptions/time-sync.md @@ -76,7 +76,7 @@ Communication between server and clients shall occur over the User Datagram Prot ## Message Format -The message format forgoes CRCs (as these are provided by the Ethernet physical layer) or packet delimination (as our packetsa are assumed be under the network MTU). **TSP Ping** and **TSP Pong** messages shall be encoded in a manor compatible with a WPILib packed struct with respect to byte alignment and endienness. +The message format forgoes CRCs (as these are provided by the Ethernet physical layer) or packet delimitation (as our packets are assumed to be under the network MTU). **TSP Ping** and **TSP Pong** messages shall be encoded in a manner compatible with a WPILib packed struct with respect to byte alignment and endianness. ### TSP Ping @@ -98,7 +98,7 @@ The message format forgoes CRCs (as these are provided by the Ethernet physical ## Optional Protocol Extensions -Clients may publish statistics to NetworkTables. If they do, they shall publish to a key that is globally unique per participant in the Time Synronization network. If a client implements this, it shall provide the following publishers: +Clients may publish statistics to NetworkTables. If they do, they shall publish to a key that is globally unique per participant in the Time Synchronization network.
If a client implements this, it shall provide the following publishers: | Key | Type | Notes | | ------ | ------ | ---- | diff --git a/docs/source/docs/description.md b/docs/source/docs/description.md index 334272eaa3..35e6e74b08 100644 --- a/docs/source/docs/description.md +++ b/docs/source/docs/description.md @@ -27,7 +27,7 @@ Using PhotonVision allows the user to calibrate for their specific camera, which ### Low Latency, High FPS Processing -PhotonVision exposes specalized hardware on select coprocessors to maximize processing speed. This allows for lower-latency detection of targets to ensure you aren't losing out on any performance. +PhotonVision exposes specialized hardware on select coprocessors to maximize processing speed. This allows for lower-latency detection of targets to ensure you aren't losing out on any performance. ### Fully Open Source and Active Developer Community diff --git a/docs/source/docs/examples/aimingatatarget.md b/docs/source/docs/examples/aimingatatarget.md index 5cf50ea864..fc191b985f 100644 --- a/docs/source/docs/examples/aimingatatarget.md +++ b/docs/source/docs/examples/aimingatatarget.md @@ -5,8 +5,8 @@ The following example is from the PhotonLib example repository ([Java](https://g ## Knowledge and Equipment Needed - A Robot -- A camera mounted rigidly to the robot's frame, cenetered and pointed forward. -- A coprocessor running PhotonVision with an AprilTag or Aurco 2D Pipeline. +- A camera mounted rigidly to the robot's frame, centered and pointed forward. +- A coprocessor running PhotonVision with an ArUco 2D Pipeline. - [A printout of AprilTag 7](https://firstfrc.blob.core.windows.net/frc2025/FieldAssets/Apriltag_Images_and_User_Guide.pdf), mounted on a rigid and flat surface.
## Code diff --git a/docs/source/docs/programming/photonlib/using-target-data.md b/docs/source/docs/programming/photonlib/using-target-data.md index ff700715dd..1f8a8ae1e4 100644 --- a/docs/source/docs/programming/photonlib/using-target-data.md +++ b/docs/source/docs/programming/photonlib/using-target-data.md @@ -106,7 +106,7 @@ You can get a [translation](https://docs.wpilib.org/en/latest/docs/software/adva .. code-block:: C++ // Calculate a translation from the camera to the target. - frc::Translation2d translation = photonlib::PhotonUtils::EstimateCameraToTargetTranslationn( + frc::Translation2d translation = photonlib::PhotonUtils::EstimateCameraToTargetTranslation( distance, frc::Rotation2d(units::degree_t(-target.GetYaw()))); .. code-block:: Python diff --git a/docs/source/docs/quick-start/arducam-cameras.md b/docs/source/docs/quick-start/arducam-cameras.md index 82eaaaf87e..96178f6726 100644 --- a/docs/source/docs/quick-start/arducam-cameras.md +++ b/docs/source/docs/quick-start/arducam-cameras.md @@ -17,6 +17,6 @@ Arducam cameras are supported for setups with multiple devices. This is possible 3. **Save Settings**: Ensure that you save the settings after selecting the appropriate camera model for each device. ```{image} images/setArducamModel.png -:alt: The camera model can be selected from the Arudcam model selector in the cameras tab +:alt: The camera model can be selected from the Arducam model selector in the cameras tab :align: center ``` diff --git a/photon-core/src/main/java/org/photonvision/common/configuration/DatabaseSchema.java b/photon-core/src/main/java/org/photonvision/common/configuration/DatabaseSchema.java index e13059390d..d55c3494dc 100644 --- a/photon-core/src/main/java/org/photonvision/common/configuration/DatabaseSchema.java +++ b/photon-core/src/main/java/org/photonvision/common/configuration/DatabaseSchema.java @@ -19,7 +19,7 @@ /** * Add migrations by adding the SQL commands for each migration sequentially to this array. 
DO NOT - * edit or delete existing SQL commands. That will lead to producing an icompatible database. + * edit or delete existing SQL commands. That will lead to producing an incompatible database. * *

You can use multiple SQL statements in one migration step as long as you separate them with a * semicolon (;). diff --git a/photon-core/src/main/java/org/photonvision/common/hardware/OsImageVersion.java b/photon-core/src/main/java/org/photonvision/common/hardware/OsImageVersion.java index 211e7b6d61..7b00abf20a 100644 --- a/photon-core/src/main/java/org/photonvision/common/hardware/OsImageVersion.java +++ b/photon-core/src/main/java/org/photonvision/common/hardware/OsImageVersion.java @@ -28,7 +28,7 @@ * Our blessed images inject the current version via this build workflow: * https://github.com/PhotonVision/photon-image-modifier/blob/2e5ddb6b599df0be921c12c8dbe7b939ecd7f615/.github/workflows/main.yml#L67 * - *

This class provides a convienent abstraction around this + *

This class provides a convenient abstraction around this */ public class OsImageVersion { private static final Logger logger = new Logger(OsImageVersion.class, LogGroup.General); diff --git a/photon-core/src/main/java/org/photonvision/vision/camera/CameraQuirk.java b/photon-core/src/main/java/org/photonvision/vision/camera/CameraQuirk.java index 61e51aba8e..a3dace6549 100644 --- a/photon-core/src/main/java/org/photonvision/vision/camera/CameraQuirk.java +++ b/photon-core/src/main/java/org/photonvision/vision/camera/CameraQuirk.java @@ -31,7 +31,7 @@ public enum CameraQuirk { /** Separate red/blue gain controls available */ @JsonAlias("AWBGain") // remove after https://github.com/PhotonVision/photonvision/issues/1488 AwbRedBlueGain, - /** Will not work with photonvision - Logitec C270 at least */ + /** Will not work with photonvision - Logitech C270 at least */ CompletelyBroken, /** Has adjustable focus and autofocus switch */ AdjustableFocus, diff --git a/photon-core/src/main/java/org/photonvision/vision/camera/QuirkyCamera.java b/photon-core/src/main/java/org/photonvision/vision/camera/QuirkyCamera.java index 9cf24a1b82..4f85879f0b 100644 --- a/photon-core/src/main/java/org/photonvision/vision/camera/QuirkyCamera.java +++ b/photon-core/src/main/java/org/photonvision/vision/camera/QuirkyCamera.java @@ -31,9 +31,9 @@ public class QuirkyCamera { // SeeCam, which has an odd exposure range new QuirkyCamera( 0x2560, 0xc128, "See3Cam_24CUG", CameraQuirk.Gain, CameraQuirk.See3Cam_24CUG), - // Chris's older generic "Logitec HD Webcam" + // Chris's older generic "Logitech HD Webcam" new QuirkyCamera(0x9331, 0x5A3, CameraQuirk.CompletelyBroken), - // Logitec C270 + // Logitech C270 new QuirkyCamera(0x825, 0x46D, CameraQuirk.CompletelyBroken), // A laptop internal camera someone found broken new QuirkyCamera(0x0bda, 0x5510, CameraQuirk.CompletelyBroken), diff --git a/photon-core/src/main/java/org/photonvision/vision/camera/USBCameras/USBCameraSource.java 
b/photon-core/src/main/java/org/photonvision/vision/camera/USBCameras/USBCameraSource.java index 13e5f538e2..ab7bf5bc7b 100644 --- a/photon-core/src/main/java/org/photonvision/vision/camera/USBCameras/USBCameraSource.java +++ b/photon-core/src/main/java/org/photonvision/vision/camera/USBCameras/USBCameraSource.java @@ -180,7 +180,7 @@ public void remakeSettables() { // And update the settables' FrameStaticProps settables.setVideoMode(oldVideoMode); - // Propogate our updated settables over to the frame provider + // Propagate our updated settables over to the frame provider ((USBFrameProvider) this.usbFrameProvider).updateSettables(this.settables); } diff --git a/photon-core/src/main/java/org/photonvision/vision/frame/consumer/FileSaveFrameConsumer.java b/photon-core/src/main/java/org/photonvision/vision/frame/consumer/FileSaveFrameConsumer.java index 6549e9c203..4b1b61ede9 100644 --- a/photon-core/src/main/java/org/photonvision/vision/frame/consumer/FileSaveFrameConsumer.java +++ b/photon-core/src/main/java/org/photonvision/vision/frame/consumer/FileSaveFrameConsumer.java @@ -145,7 +145,7 @@ public void overrideTakeSnapshot() { } /** - * Returns the match Data collected from the NT. eg : Q58 for qualfication match 58. If not in + * Returns the match Data collected from the NT. eg : Q58 for qualification match 58. 
If not in * event, returns N/A-0-EVENTNAME */ private String getMatchData() { diff --git a/photon-core/src/main/java/org/photonvision/vision/processes/VisionModule.java b/photon-core/src/main/java/org/photonvision/vision/processes/VisionModule.java index 91efbad777..34e9976ec7 100644 --- a/photon-core/src/main/java/org/photonvision/vision/processes/VisionModule.java +++ b/photon-core/src/main/java/org/photonvision/vision/processes/VisionModule.java @@ -444,7 +444,7 @@ boolean setPipeline(int index) { return false; } - visionRunner.runSyncronously( + visionRunner.runSynchronously( () -> { settables.setVideoModeInternal(pipelineSettings.cameraVideoModeIndex); settables.setBrightness(pipelineSettings.cameraBrightness); diff --git a/photon-core/src/main/java/org/photonvision/vision/processes/VisionModuleChangeSubscriber.java b/photon-core/src/main/java/org/photonvision/vision/processes/VisionModuleChangeSubscriber.java index 8301506ad7..d40e0d75bc 100644 --- a/photon-core/src/main/java/org/photonvision/vision/processes/VisionModuleChangeSubscriber.java +++ b/photon-core/src/main/java/org/photonvision/vision/processes/VisionModuleChangeSubscriber.java @@ -206,7 +206,7 @@ public void startCalibration(Map data) { parentModule.startCalibration(deserialized); parentModule.saveAndBroadcastAll(); } catch (Exception e) { - logger.error("Error deserailizing start-calibration request", e); + logger.error("Error deserializing start-calibration request", e); } } diff --git a/photon-core/src/main/java/org/photonvision/vision/processes/VisionRunner.java b/photon-core/src/main/java/org/photonvision/vision/processes/VisionRunner.java index 4d6b6c28fc..c8baa0a138 100644 --- a/photon-core/src/main/java/org/photonvision/vision/processes/VisionRunner.java +++ b/photon-core/src/main/java/org/photonvision/vision/processes/VisionRunner.java @@ -92,7 +92,7 @@ public void stopProcess() { } } - public Future runSyncronously(Runnable runnable) { + public Future runSynchronously(Runnable 
runnable) { CompletableFuture future = new CompletableFuture<>(); synchronized (runnableList) { @@ -109,7 +109,7 @@ public Future runSyncronously(Runnable runnable) { return future; } - public Future runSyncronously(Callable callable) { + public Future runSynchronously(Callable callable) { CompletableFuture future = new CompletableFuture<>(); synchronized (runnableList) { diff --git a/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceManager.java b/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceManager.java index fa3cb2f41e..f264f7c7cd 100644 --- a/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceManager.java +++ b/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceManager.java @@ -366,7 +366,7 @@ private static List filterAllowedDevices(List allDev * CameraConfiguration}'s matchedCameraInfo. We depend on the underlying {@link VisionSource} to * be robust to disconnected sources at boot * - *

Verify that nickname is unique within the set of desesrialized camera configurations, adding + *

Verify that nickname is unique within the set of deserialized camera configurations, adding * random characters if this isn't the case */ protected VisionSource loadVisionSourceFromCamConfig(CameraConfiguration configuration) { diff --git a/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceSettables.java b/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceSettables.java index 2b299c14dc..000c8bcbfd 100644 --- a/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceSettables.java +++ b/photon-core/src/main/java/org/photonvision/vision/processes/VisionSourceSettables.java @@ -70,7 +70,7 @@ public void onCameraConnected() { public abstract void setGain(int gain); // Pretty uncommon so instead of abstract this is just a no-op by default - // Overriddenn by cameras with AWB gain support + // Overridden by cameras with AWB gain support public void setRedGain(int red) {} public void setBlueGain(int blue) {} diff --git a/photon-core/src/test/java/org/photonvision/vision/pipeline/Calibrate3dPipeTest.java b/photon-core/src/test/java/org/photonvision/vision/pipeline/Calibrate3dPipeTest.java index f8b6640a3a..7c8cf2a1ba 100644 --- a/photon-core/src/test/java/org/photonvision/vision/pipeline/Calibrate3dPipeTest.java +++ b/photon-core/src/test/java/org/photonvision/vision/pipeline/Calibrate3dPipeTest.java @@ -287,7 +287,7 @@ public static void calibrateCommon( } /** - * Uses a given camera coefficientss matrix set to "undistort" every image file found in a given + * Uses a given camera coefficients matrix set to "undistort" every image file found in a given * directory and display them. Provides an easy way to visually debug the results of the * calibration routine. Seems to play havoc with CI and takes a chunk of time, so shouldn't * usually be left active in tests. 
diff --git a/photon-core/src/test/java/org/photonvision/vision/pipeline/CalibrationRotationPipeTest.java b/photon-core/src/test/java/org/photonvision/vision/pipeline/CalibrationRotationPipeTest.java index c5d192f8f4..c1bb4c2d47 100644 --- a/photon-core/src/test/java/org/photonvision/vision/pipeline/CalibrationRotationPipeTest.java +++ b/photon-core/src/test/java/org/photonvision/vision/pipeline/CalibrationRotationPipeTest.java @@ -139,7 +139,7 @@ public void testUndistortImagePointsWithRotation(@Enum ImageRotationMode rot) { Point[] originalPoints = {new Point(100, 100), new Point(200, 200), new Point(300, 100)}; - // Distort the origonal points + // Distort the original points var distortedOriginalPoints = OpenCVHelp.distortPoints( List.of(originalPoints), diff --git a/photon-lib/src/main/java/org/photonvision/PhotonUtils.java b/photon-lib/src/main/java/org/photonvision/PhotonUtils.java index 90c407f555..e9ea78c9ee 100644 --- a/photon-lib/src/main/java/org/photonvision/PhotonUtils.java +++ b/photon-lib/src/main/java/org/photonvision/PhotonUtils.java @@ -172,9 +172,9 @@ public static Pose2d estimateFieldToCamera(Transform2d cameraToTarget, Pose2d fi * Estimates the pose of the robot in the field coordinate system, given the pose of the fiducial * tag, the robot relative to the camera, and the target relative to the camera. * - * @param fieldRelativeTagPose Pose3D the field relative pose of the target * @param cameraToRobot Transform3D of the robot relative to the camera. Origin of the robot is * defined as the center. 
+ * @param fieldRelativeTagPose Pose3D the field relative pose of the target * @param cameraToTarget Transform3D of the target relative to the camera, returned by * PhotonVision * @return Transform3d Robot position relative to the field diff --git a/photon-lib/src/test/java/org/photonvision/PhotonCameraTest.java b/photon-lib/src/test/java/org/photonvision/PhotonCameraTest.java index fa93b90d3d..17c6f54e38 100644 --- a/photon-lib/src/test/java/org/photonvision/PhotonCameraTest.java +++ b/photon-lib/src/test/java/org/photonvision/PhotonCameraTest.java @@ -122,7 +122,7 @@ public void testTimeSyncServerWithPhotonCamera() throws InterruptedException, IO private static Stream testNtOffsets() { return Stream.of( - // various initializaiton orders + // various initialization orders Arguments.of(1, 10, 30, 30), Arguments.of(10, 2, 30, 30), Arguments.of(10, 10, 30, 30), diff --git a/photon-serde/README.md b/photon-serde/README.md index be58bceeff..50777ebffb 100644 --- a/photon-serde/README.md +++ b/photon-serde/README.md @@ -14,7 +14,7 @@ Like Rosmsg. But worse. The code for a single type is split across 3 files. Let's look at PnpResult: - [The struct definition](src/struct/pnpresult_struct.h): This is the data the object holds. Auto-generated. The data this object holds can be primitives or other, fully-deserialized types (like Vec2) -- [The user class](src/targeting/pnpresult_struct.h): This is the fully-deserialized PnpResult type. This contains extra functions users might need to expose like `Amgiguity`, or other computed helper things. +- [The user class](src/targeting/pnpresult_struct.h): This is the fully-deserialized PnpResult type. This contains extra functions users might need to expose like `Ambiguity`, or other computed helper things. 
- [The serde interface](src/serde/pnpresult_struct.h): This is a template specialization for converting the user class to/from bytes ## Prior art diff --git a/photon-server/src/main/java/org/photonvision/server/RequestHandler.java b/photon-server/src/main/java/org/photonvision/server/RequestHandler.java index 1a318b324f..a61f69a027 100644 --- a/photon-server/src/main/java/org/photonvision/server/RequestHandler.java +++ b/photon-server/src/main/java/org/photonvision/server/RequestHandler.java @@ -432,7 +432,7 @@ public static void onLogExportRequest(Context ctx) { var tempPath2 = Files.createTempFile("photonvision-kernelogs", ".txt"); // In the command below: // dmesg = output all kernel logs since current boot - // cat /var/log/kern.log = output all kernal logs since first boot + // cat /var/log/kern.log = output all kernel logs since first boot shell.executeBashCommand( "journalctl -u photonvision.service > " + tempPath.toAbsolutePath() @@ -467,8 +467,8 @@ public static void onLogExportRequest(Context ctx) { } } catch (IOException e) { ctx.status(500); - ctx.result("There was an error while exporting journactl logs"); - logger.error("There was an error while exporting journactl logs", e); + ctx.result("There was an error while exporting journalctl logs"); + logger.error("There was an error while exporting journalctl logs", e); } } diff --git a/photon-targeting/src/main/java/org/photonvision/targeting/PhotonPipelineResult.java b/photon-targeting/src/main/java/org/photonvision/targeting/PhotonPipelineResult.java index 9537dfb37a..598e5eecd6 100644 --- a/photon-targeting/src/main/java/org/photonvision/targeting/PhotonPipelineResult.java +++ b/photon-targeting/src/main/java/org/photonvision/targeting/PhotonPipelineResult.java @@ -164,7 +164,7 @@ public Optional getMultiTagResult() { /** * Returns the estimated time the frame was taken, in the Time Sync Server's time base (nt::Now). 
- * This is calculated using the estiamted offset between Time Sync Server time and local time. The + * This is calculated using the estimated offset between Time Sync Server time and local time. The * robot shall run a server, so the offset shall be 0. * * @return The timestamp in seconds diff --git a/photonlib-java-examples/aimandrange/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java b/photonlib-java-examples/aimandrange/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java index b41017a3e1..c1676f6ff4 100644 --- a/photonlib-java-examples/aimandrange/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java +++ b/photonlib-java-examples/aimandrange/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java @@ -143,8 +143,8 @@ public void setChassisSpeeds( } /** - * Command the swerve modules to the desired states. Velocites exceeding the maximum speed will be - * desaturated (while preserving the ratios between modules). + * Command the swerve modules to the desired states. Velocities exceeding the maximum speed will + * be desaturated (while preserving the ratios between modules). * * @param openLoop If swerve modules should use feedforward only and ignore velocity feedback * control. diff --git a/photonlib-java-examples/aimattarget/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java b/photonlib-java-examples/aimattarget/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java index b41017a3e1..c1676f6ff4 100644 --- a/photonlib-java-examples/aimattarget/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java +++ b/photonlib-java-examples/aimattarget/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java @@ -143,8 +143,8 @@ public void setChassisSpeeds( } /** - * Command the swerve modules to the desired states. Velocites exceeding the maximum speed will be - * desaturated (while preserving the ratios between modules). + * Command the swerve modules to the desired states. 
Velocities exceeding the maximum speed will + * be desaturated (while preserving the ratios between modules). * * @param openLoop If swerve modules should use feedforward only and ignore velocity feedback * control. diff --git a/photonlib-java-examples/poseest/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java b/photonlib-java-examples/poseest/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java index b41017a3e1..c1676f6ff4 100644 --- a/photonlib-java-examples/poseest/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java +++ b/photonlib-java-examples/poseest/src/main/java/frc/robot/subsystems/drivetrain/SwerveDrive.java @@ -143,8 +143,8 @@ public void setChassisSpeeds( } /** - * Command the swerve modules to the desired states. Velocites exceeding the maximum speed will be - * desaturated (while preserving the ratios between modules). + * Command the swerve modules to the desired states. Velocities exceeding the maximum speed will + * be desaturated (while preserving the ratios between modules). * * @param openLoop If swerve modules should use feedforward only and ignore velocity feedback * control.