diff --git a/content/blog/cosa-nyu-ml-tools/index.md b/content/blog/cosa-nyu-ml-tools/index.md index 349e099..8da456d 100644 --- a/content/blog/cosa-nyu-ml-tools/index.md +++ b/content/blog/cosa-nyu-ml-tools/index.md @@ -1,5 +1,5 @@ --- -templateKey: #blog-post +templateKey: blog-post title: COSA x NYU Machine Learning Tools for Creative Coding author: ml5.js description: Join us at ITP for an informal series of talks and workshops exploring open-source machine learning tools for creative coding, presented in partnership with the Clinic for Open Source Arts (COSA)! diff --git a/content/model-card/bodypose/images/_main.jpg b/content/model-card/bodypose/images/_main.jpg new file mode 100644 index 0000000..00247cc Binary files /dev/null and b/content/model-card/bodypose/images/_main.jpg differ diff --git a/content/model-card/bodypose/images/_thumb.jpg b/content/model-card/bodypose/images/_thumb.jpg new file mode 100644 index 0000000..5828a1c Binary files /dev/null and b/content/model-card/bodypose/images/_thumb.jpg differ diff --git a/content/model-card/bodypose/index.md b/content/model-card/bodypose/index.md new file mode 100644 index 0000000..b9da2b6 --- /dev/null +++ b/content/model-card/bodypose/index.md @@ -0,0 +1,69 @@ +--- +templateKey: blog-post +title: BodyPose Model Card +author: ml5.js +description: Friendly Machine Learning for the Web +keywords: bias, model card, BodyPose +image: "./images/_thumb.jpg" +externalLink: (link) +date: "2025-03-14" +tags: + - BodyPose +featured: true +--- +BodyPose is built on TensorFlow's [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:~:text=MoveNet%20is%20an%20ultra%20fast,known%20as%20Lightning%20and%20Thunder) and [BlazePose](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker) models. 
+ +______ +## MoveNet +MoveNet was trained on [two datasets](https://storage.googleapis.com/movenet/MoveNet.SinglePose%20Model%20Card.pdf): + +**COCO Keypoint Dataset Training Set 2017** +- Date created: **2017** +- Size: **28K images** +- How the data was collected: “In-the-wild images with diverse scenes, instance sizes, and occlusions.” The original dataset of 64K images was distilled to the final 28K to include only images with three or fewer people. +- Bias: + * According to the public [model card](https://storage.googleapis.com/movenet/MoveNet.SinglePose%20Model%20Card.pdf), the qualitative analysis shows that although the dataset has a 3:1 male-to-female ratio and favors young and light-skinned individuals, the model is stated to perform “fairly” (< 5% performance differences between most categories). + * Categories of evaluation: + * Male / Female (gender) + * Young / Middle-age / Old (age) + * Darker / Medium / Lighter (skin tone) + * A fair amount of [research](https://medium.com/@rxtang/mitigating-gender-bias-in-captioning-systems-5a956e1e0d6d#:~:text=COCO%20dataset%20has%20an%20imbalanced,the%20bias%20learned%20by%20models) has examined the COCO dataset. Most of it shows that the dataset exhibits numerous biases due to the underrepresentation of certain demographics. + +**Active Dataset Training Set** +- Date created: **2017-2021** ([estimated](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html)) +- Size: **23.5K images** +- How the data was collected: “Images sampled from **YouTube fitness videos** which capture people exercising (e.g. HIIT, weight-lifting, etc.), stretching, or dancing. It contains diverse poses and motion with more motion blur and self-occlusions.” +- Bias: + * According to the model card, the models are stated to perform “fairly” (< 5% performance differences between all categories). 
 * Categories of evaluation: + * Male / Female (gender) + * Young / Middle-age / Old (age) + * Darker / Medium / Lighter (skin tone) + * Unlike the COCO dataset, the Active Single Person Image set is not public, so no additional research has been conducted to evaluate its fairness. + +As stated, fitness videos uploaded to YouTube were used to assemble this internal Google dataset. Only in [2024](https://support.google.com/youtube/thread/313644973/third-party-ai-trainability-on-youtube?hl=en) did Google [provide](https://support.google.com/youtube/answer/15509945?hl=en) creators the option to opt out of having their videos used for Google's AI/ML research. + +___ +## BlazePose +The information below is drawn from BlazePose’s [research paper](https://arxiv.org/pdf/2006.10204) and [model card](https://drive.google.com/file/d/10WlcTvrQnR_R2TdTmKw0nkyRLqrwNkWU/preview): +- Date created: **2020-2021 (estimated)** +- Size: **80K** +- How the data was collected: Not stated in the original research paper. The model card asserts: “This model was trained and evaluated on images, including consented images (30K), of people using a mobile AR application captured with smartphone cameras in various “in-the-wild” conditions. The majority of training images (85K) capture a wide range of fitness poses.” +- Bias: + * According to the model card, the models are stated to perform “fairly”. + * Categories of evaluation: + * 14 subregions + * Male / Female (gender) + * 6 skin tones + * Evaluation results: + * Subregions (14): difference in confidence between average and worst performing regions of 4.8% for the heavy, 4.8% for the full and 6.5% for the lite model. + * Gender: difference in confidence is 1.1% for the heavy model, 2.2% for the full model and 3.1% for the lite model. + * Skin tones: difference in confidence between worst and best performing categories is 5.7% for the heavy model, 7.0% for the full model and 7.3% for the lite model. + +No additional research has been conducted to evaluate the fairness. 
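The “difference in confidence” figures quoted throughout these cards can be read as a simple gap metric over per-category mean confidences. A minimal illustrative sketch of that reading (the category names and scores below are hypothetical, not taken from any model card):

```javascript
// Illustrative only: computing a "difference in confidence" fairness gap
// from per-category mean confidence scores. Hypothetical example values.
const confidences = {
  darker: 0.942,
  medium: 0.961,
  lighter: 0.975,
};

// Gap between the best- and worst-performing categories, in percentage points.
function fairnessGap(scores) {
  const values = Object.values(scores);
  return (Math.max(...values) - Math.min(...values)) * 100;
}

console.log(fairnessGap(confidences).toFixed(1) + "%"); // → "3.3%"
```

Under BlazePose's stated “fairly” criterion, a gap like this would fall below the 5% threshold; note the cards sometimes report average-vs-worst rather than best-vs-worst gaps.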
+There is no specific information on how **consent** was obtained for the images. + + +____ + +#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6). \ No newline at end of file diff --git a/content/model-card/bodysegmentation/images/_main.jpg b/content/model-card/bodysegmentation/images/_main.jpg new file mode 100644 index 0000000..00247cc Binary files /dev/null and b/content/model-card/bodysegmentation/images/_main.jpg differ diff --git a/content/model-card/bodysegmentation/images/_thumb.jpg b/content/model-card/bodysegmentation/images/_thumb.jpg new file mode 100644 index 0000000..5828a1c Binary files /dev/null and b/content/model-card/bodysegmentation/images/_thumb.jpg differ diff --git a/content/model-card/bodysegmentation/index.md b/content/model-card/bodysegmentation/index.md new file mode 100644 index 0000000..b8516e8 --- /dev/null +++ b/content/model-card/bodysegmentation/index.md @@ -0,0 +1,39 @@ +--- +templateKey: blog-post +title: BodySegmentation Model Card +author: ml5.js +description: Friendly Machine Learning for the Web +keywords: bias, model card, BodySegmentation +image: "./images/_thumb.jpg" +externalLink: (link) +date: "2025-03-14" +tags: + - BodySegmentation +featured: true +--- +ml5.js BodySegmentation provides two models, **SelfieSegmentation** and **BodyPix**: + +______ +## SelfieSegmentation + +**MediaPipe Selfie Segmentation [Model Card](https://storage.googleapis.com/mediapipe-assets/Model%20Card%20MediaPipe%20Selfie%20Segmentation.pdf)** +- Date created: **2021** +- Size: **Not stated** +- How the data was collected: “This model was trained and evaluated on images, including consented images of people using a mobile AR application captured with smartphone cameras in various “in-the-wild” 
conditions.” +- Bias: + * Categories of evaluation: + * 17 demographic subregions + * 6 skin tones + * Male / Female (gender) + * Evaluation results: + * Subregions: Difference in confidence between average and worst performing regions of 1.11% for the general model, and 1.28% for the landscape model, lower than the criteria. + * Gender: Differences in confidence are 1.6% for the general model and 0.6% for the landscape model. + * Skin tone: Difference in confidence between the worst and best performing categories. + +____ +## BodyPix +This [short article](https://medium.com/tensorflow/introducing-bodypix-real-time-person-segmentation-in-the-browser-with-tensorflow-js-f1948126c2a0) is the only information on BodyPix that we have found. + +____ + +#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6). \ No newline at end of file diff --git a/content/model-card/facemesh/images/_main.jpg b/content/model-card/facemesh/images/_main.jpg new file mode 100644 index 0000000..00247cc Binary files /dev/null and b/content/model-card/facemesh/images/_main.jpg differ diff --git a/content/model-card/facemesh/images/_thumb.jpg b/content/model-card/facemesh/images/_thumb.jpg new file mode 100644 index 0000000..5828a1c Binary files /dev/null and b/content/model-card/facemesh/images/_thumb.jpg differ diff --git a/content/model-card/facemesh/index.md b/content/model-card/facemesh/index.md new file mode 100644 index 0000000..6a683a2 --- /dev/null +++ b/content/model-card/facemesh/index.md @@ -0,0 +1,63 @@ +--- +templateKey: blog-post +title: FaceMesh Model Card +author: ml5.js +description: Friendly Machine Learning for the Web +keywords: bias, model card, FaceMesh +image: "./images/_thumb.jpg" +externalLink: (link) +date: "2025-03-14" +tags: + - FaceMesh +featured: 
true +--- + +## MediaPipe Face Mesh + +**These are the [Model Card](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view?pli=1), the [Research](https://arxiv.org/pdf/1907.06724), and the [Research Blog](https://sites.google.com/view/perception-cv4arvr/facemesh)** +- Date created: **2018** +- Size: **Not stated** +- How the data was collected: “All dataset images were captured on a diverse set of smartphone cameras, both front- and back-facing. All images were captured in a real-world environment with different light, noise and motion conditions via an AR (Augmented Reality) application.” + +- Bias: + * According to the model card, the models are stated to perform “well” across most groups. + * Categories of evaluation: + * 17 geographic subregions + * 6 skin tones + * Male / Female (gender) + * Evaluation results: + * Subregions: Difference in confidence between best and worst performing regions of 0.9% for the tracking mode and 1.56% for the reacquisition mode. + * Gender: Difference in confidence is 0.02% for the tracking mode and 0.1% for the reacquisition mode. + * Skin tones: Difference in confidence is 0.24% for tracking mode and 1.12% for the reacquisition mode. + * No additional research has been conducted to evaluate the fairness. + +_____ +## MediaPipe Attention Mesh +**This is the [Model Card](https://drive.google.com/file/d/1tV7EJb3XgMS7FwOErTgLU1ZocYyNmwlf/preview)** +- Date created: **2020** +- Size: **30K (estimated)** +- How the data was collected: “All dataset images were captured on a diverse set of smartphone cameras, both front- and back-facing. All images were captured in a real-world environment with different light, noise and motion conditions via an AR (Augmented Reality) application.” + +- Bias: + * According to the model card, the models are stated to perform “well” across most groups. 
+ + * Categories of evaluation: + * 17 geographic subregions + * 6 skin tones + * Male / Female (gender) + * Evaluation results: + * Subregions: Difference in confidence between best and worst performing regions of 1.22% for the tracking mode and 1.27% for the reacquisition mode. + * Gender: Difference in confidence is 0.01% for the tracking mode and 0.03% for the reacquisition mode. + * Skin tones: Difference in confidence is 0.54% for tracking mode and 0.88% for the reacquisition mode. + +- Potential bias: + * The model may have difficulty with facial accessories such as glasses or cultural headwear, which could result in significant confidence differences if evaluated. + +____ + +#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6). + diff --git a/content/model-card/handpose/images/_main.jpg b/content/model-card/handpose/images/_main.jpg new file mode 100644 index 0000000..00247cc Binary files /dev/null and b/content/model-card/handpose/images/_main.jpg differ diff --git a/content/model-card/handpose/images/_thumb.jpg b/content/model-card/handpose/images/_thumb.jpg new file mode 100644 index 0000000..5828a1c Binary files /dev/null and b/content/model-card/handpose/images/_thumb.jpg differ diff --git a/content/model-card/handpose/index.md b/content/model-card/handpose/index.md new file mode 100644 index 0000000..393f1aa --- /dev/null +++ b/content/model-card/handpose/index.md @@ -0,0 +1,29 @@ +--- +templateKey: blog-post +title: HandPose Model Card +author: ml5.js +description: Friendly Machine Learning for the Web +keywords: bias, model card, HandPose +image: "./images/_thumb.jpg" +externalLink: (link) +date: "2025-03-14" +tags: + - HandPose +featured: true +--- +## Hand Detection / Tracking + +**Hand Detection/Tracking 
[Model Card](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view?pli=1)** +- Date created: **2021** (estimated) +- Size: **Not stated** +- How the data was collected: “This model was trained and evaluated on images of people using a mobile AR application captured with smartphone cameras in various “in-the-wild” conditions.” + +- Bias: + * No evaluation has been conducted on this model. No access to the dataset was provided. + * As stated on the model card: “as with many human sensing tools, performance may vary across skin tones, gender, age, and potentially other sensitive demographic characteristics.” + * No additional research has been conducted to evaluate the fairness. + +____ + +#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6). + diff --git a/package-lock.json b/package-lock.json index 5122e72..24f4309 100644 --- a/package-lock.json +++ b/package-lock.json @@ -494,13 +494,12 @@ } }, "node_modules/@babel/helpers": { - "version": "7.20.7", - "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.20.7.tgz", - "integrity": "sha512-PBPjs5BppzsGaxHQCDKnZ6Gd9s6xl8bBCluz3vEInLGRJmnZan4F6BYCeqtyXqkk4W5IlPmjK4JlOuZkpJ3xZA==", + "version": "7.26.10", + "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.26.10.tgz", + "integrity": "sha512-UPYc3SauzZ3JGgj87GgZ89JVdC5dj0AoetR5Bw6wj4niittNyFh6+eOGonYvJ1ao6B8lEa3Q3klS7ADZ53bc5g==", "dependencies": { - "@babel/template": "^7.20.7", - "@babel/traverse": "^7.20.7", - "@babel/types": "^7.20.7" + "@babel/template": "^7.26.9", + "@babel/types": "^7.26.10" }, "engines": { "node": ">=6.9.0" } }, @@ -584,11 +583,11 @@ } }, "node_modules/@babel/parser": { - "version": "7.26.2", - "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.26.2.tgz", - "integrity": "sha512-DWMCZH9WA4Maitz2q21SRKHo9QXZxkDsbNZoVD62gusNtNBBqDg9i7uOhASfTfIGNzW+O+r7+jAlM8dwphcJKQ==", + "version": "7.26.10", + "resolved": 
"https://registry.npmjs.org/@babel/parser/-/parser-7.26.10.tgz", + "integrity": "sha512-6aQR2zGE/QFi8JpDLjUZEPYOs7+mhKXm86VaKFiLP35JQwQb6bwUE+XbvkH0EptsYhbNBSUGaUBLKqxH1xSgsA==", "dependencies": { - "@babel/types": "^7.26.0" + "@babel/types": "^7.26.10" }, "bin": { "parser": "bin/babel-parser.js" @@ -1916,24 +1915,29 @@ } }, "node_modules/@babel/runtime": { - "version": "7.21.5", - "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.21.5.tgz", - "integrity": "sha512-8jI69toZqqcsnqGGqwGS4Qb1VwLOEp4hz+CXPywcvjs60u3B4Pom/U/7rm4W8tMOYEB+E9wgD0mW1l3r8qlI9Q==", + "version": "7.26.10", + "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.26.10.tgz", + "integrity": "sha512-2WJMeRQPHKSPemqk/awGrAiuFfzBmOIPXKizAsVhWH9YJqLZ0H+HS4c8loHGgW6utJ3E/ejXQUsiGaQy2NZ9Fw==", "dependencies": { - "regenerator-runtime": "^0.13.11" + "regenerator-runtime": "^0.14.0" }, "engines": { "node": ">=6.9.0" } }, + "node_modules/@babel/runtime/node_modules/regenerator-runtime": { + "version": "0.14.1", + "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.14.1.tgz", + "integrity": "sha512-dYnhHh0nJoMfnkZs6GmmhFknAGRrLznOu5nc9ML+EJxGvrx6H7teuevqVqCuPcPK//3eDrrjQhehXVx9cnkGdw==" + }, "node_modules/@babel/template": { - "version": "7.25.9", - "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.25.9.tgz", - "integrity": "sha512-9DGttpmPvIxBb/2uwpVo3dqJ+O6RooAFOS+lB+xDqoE2PVCE8nfoHMdZLpfCQRLwvohzXISPZcgxt80xLfsuwg==", + "version": "7.26.9", + "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.26.9.tgz", + "integrity": "sha512-qyRplbeIpNZhmzOysF/wFMuP9sctmh2cFzRAZOn1YapxBsE1i9bJIY586R/WBLfLcmcBlM8ROBiQURnnNy+zfA==", "dependencies": { - "@babel/code-frame": "^7.25.9", - "@babel/parser": "^7.25.9", - "@babel/types": "^7.25.9" + "@babel/code-frame": "^7.26.2", + "@babel/parser": "^7.26.9", + "@babel/types": "^7.26.9" }, "engines": { "node": ">=6.9.0" @@ -1978,9 +1982,9 @@ "integrity": 
"sha512-sGkPx+VjMtmA6MX27oA4FBFELFCZZ4S4XqeGOXCv68tT+jb3vk/RyaKWP0PTKyWtmLSM0b+adUTEvbs1PEaH2w==" }, "node_modules/@babel/types": { - "version": "7.26.0", - "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.26.0.tgz", - "integrity": "sha512-Z/yiTPj+lDVnF7lWeKCIJzaIkI0vYO87dMpZ4bg4TDrFe4XXLFWL1TbXU27gBP3QccxV9mZICCrnjnYlJjXHOA==", + "version": "7.26.10", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.26.10.tgz", + "integrity": "sha512-emqcG3vHrpxUKTrxcblR36dcrcoRDvKmnL/dCL6ZsHaShW80qxCAcNhzQZrpeM765VzEos+xOi4s+r4IXzTwdQ==", "dependencies": { "@babel/helper-string-parser": "^7.25.9", "@babel/helper-validator-identifier": "^7.25.9" @@ -5011,9 +5015,9 @@ } }, "node_modules/axios": { - "version": "1.7.7", - "resolved": "https://registry.npmjs.org/axios/-/axios-1.7.7.tgz", - "integrity": "sha512-S4kL7XrjgBmvdGut0sN3yJxqYzrDOnivkBiN0OFs6hLiUam3UPvswUo0kqGyhqUZGEOytHyumEdXsAkgCOUf3Q==", + "version": "1.8.3", + "resolved": "https://registry.npmjs.org/axios/-/axios-1.8.3.tgz", + "integrity": "sha512-iP4DebzoNlP/YN2dpwCgb8zoCmhtkajzS48JvwmkSkXvPI3DHc7m+XYL5tGnSlJtR6nImXZmdCuN5aP8dh1d8A==", "dependencies": { "follow-redirects": "^1.15.6", "form-data": "^4.0.0", @@ -14383,9 +14387,9 @@ } }, "node_modules/prismjs": { - "version": "1.29.0", - "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.29.0.tgz", - "integrity": "sha512-Kx/1w86q/epKcmte75LNrEoT+lX8pBpavuAbvJWRXar7Hz8jrtF+e3vY751p0R8H9HdArwaCTNDDzHg/ScJK1Q==", + "version": "1.30.0", + "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.30.0.tgz", + "integrity": "sha512-DEvV2ZF2r2/63V+tK8hQvrR2ZGn10srHbXviTlcv7Kpzw8jWiNTqbVgjO3IY8RxrrOUF8VPMQQFysYYYv0YZxw==", "engines": { "node": ">=6" } @@ -18380,13 +18384,12 @@ } }, "@babel/helpers": { - "version": "7.20.7", - "resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.20.7.tgz", - "integrity": "sha512-PBPjs5BppzsGaxHQCDKnZ6Gd9s6xl8bBCluz3vEInLGRJmnZan4F6BYCeqtyXqkk4W5IlPmjK4JlOuZkpJ3xZA==", + "version": "7.26.10", + 
"resolved": "https://registry.npmjs.org/@babel/helpers/-/helpers-7.26.10.tgz", + "integrity": "sha512-UPYc3SauzZ3JGgj87GgZ89JVdC5dj0AoetR5Bw6wj4niittNyFh6+eOGonYvJ1ao6B8lEa3Q3klS7ADZ53bc5g==", "requires": { - "@babel/template": "^7.20.7", - "@babel/traverse": "^7.20.7", - "@babel/types": "^7.20.7" + "@babel/template": "^7.26.9", + "@babel/types": "^7.26.10" } }, "@babel/highlight": { @@ -18451,11 +18454,11 @@ } }, "@babel/parser": { - "version": "7.26.2", - "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.26.2.tgz", - "integrity": "sha512-DWMCZH9WA4Maitz2q21SRKHo9QXZxkDsbNZoVD62gusNtNBBqDg9i7uOhASfTfIGNzW+O+r7+jAlM8dwphcJKQ==", + "version": "7.26.10", + "resolved": "https://registry.npmjs.org/@babel/parser/-/parser-7.26.10.tgz", + "integrity": "sha512-6aQR2zGE/QFi8JpDLjUZEPYOs7+mhKXm86VaKFiLP35JQwQb6bwUE+XbvkH0EptsYhbNBSUGaUBLKqxH1xSgsA==", "requires": { - "@babel/types": "^7.26.0" + "@babel/types": "^7.26.10" } }, "@babel/plugin-bugfix-firefox-class-in-computed-class-key": { @@ -19290,21 +19293,28 @@ } }, "@babel/runtime": { - "version": "7.21.5", - "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.21.5.tgz", - "integrity": "sha512-8jI69toZqqcsnqGGqwGS4Qb1VwLOEp4hz+CXPywcvjs60u3B4Pom/U/7rm4W8tMOYEB+E9wgD0mW1l3r8qlI9Q==", + "version": "7.26.10", + "resolved": "https://registry.npmjs.org/@babel/runtime/-/runtime-7.26.10.tgz", + "integrity": "sha512-2WJMeRQPHKSPemqk/awGrAiuFfzBmOIPXKizAsVhWH9YJqLZ0H+HS4c8loHGgW6utJ3E/ejXQUsiGaQy2NZ9Fw==", "requires": { - "regenerator-runtime": "^0.13.11" + "regenerator-runtime": "^0.14.0" + }, + "dependencies": { + "regenerator-runtime": { + "version": "0.14.1", + "resolved": "https://registry.npmjs.org/regenerator-runtime/-/regenerator-runtime-0.14.1.tgz", + "integrity": "sha512-dYnhHh0nJoMfnkZs6GmmhFknAGRrLznOu5nc9ML+EJxGvrx6H7teuevqVqCuPcPK//3eDrrjQhehXVx9cnkGdw==" + } } }, "@babel/template": { - "version": "7.25.9", - "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.25.9.tgz", 
- "integrity": "sha512-9DGttpmPvIxBb/2uwpVo3dqJ+O6RooAFOS+lB+xDqoE2PVCE8nfoHMdZLpfCQRLwvohzXISPZcgxt80xLfsuwg==", + "version": "7.26.9", + "resolved": "https://registry.npmjs.org/@babel/template/-/template-7.26.9.tgz", + "integrity": "sha512-qyRplbeIpNZhmzOysF/wFMuP9sctmh2cFzRAZOn1YapxBsE1i9bJIY586R/WBLfLcmcBlM8ROBiQURnnNy+zfA==", "requires": { - "@babel/code-frame": "^7.25.9", - "@babel/parser": "^7.25.9", - "@babel/types": "^7.25.9" + "@babel/code-frame": "^7.26.2", + "@babel/parser": "^7.26.9", + "@babel/types": "^7.26.9" } }, "@babel/traverse": { @@ -19337,9 +19347,9 @@ } }, "@babel/types": { - "version": "7.26.0", - "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.26.0.tgz", - "integrity": "sha512-Z/yiTPj+lDVnF7lWeKCIJzaIkI0vYO87dMpZ4bg4TDrFe4XXLFWL1TbXU27gBP3QccxV9mZICCrnjnYlJjXHOA==", + "version": "7.26.10", + "resolved": "https://registry.npmjs.org/@babel/types/-/types-7.26.10.tgz", + "integrity": "sha512-emqcG3vHrpxUKTrxcblR36dcrcoRDvKmnL/dCL6ZsHaShW80qxCAcNhzQZrpeM765VzEos+xOi4s+r4IXzTwdQ==", "requires": { "@babel/helper-string-parser": "^7.25.9", "@babel/helper-validator-identifier": "^7.25.9" @@ -21429,9 +21439,9 @@ "integrity": "sha512-RE3mdQ7P3FRSe7eqCWoeQ/Z9QXrtniSjp1wUjt5nRC3WIpz5rSCve6o3fsZ2aCpJtrZjSZgjwXAoTO5k4tEI0w==" }, "axios": { - "version": "1.7.7", - "resolved": "https://registry.npmjs.org/axios/-/axios-1.7.7.tgz", - "integrity": "sha512-S4kL7XrjgBmvdGut0sN3yJxqYzrDOnivkBiN0OFs6hLiUam3UPvswUo0kqGyhqUZGEOytHyumEdXsAkgCOUf3Q==", + "version": "1.8.3", + "resolved": "https://registry.npmjs.org/axios/-/axios-1.8.3.tgz", + "integrity": "sha512-iP4DebzoNlP/YN2dpwCgb8zoCmhtkajzS48JvwmkSkXvPI3DHc7m+XYL5tGnSlJtR6nImXZmdCuN5aP8dh1d8A==", "requires": { "follow-redirects": "^1.15.6", "form-data": "^4.0.0", @@ -28137,9 +28147,9 @@ } }, "prismjs": { - "version": "1.29.0", - "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.29.0.tgz", - "integrity": 
"sha512-Kx/1w86q/epKcmte75LNrEoT+lX8pBpavuAbvJWRXar7Hz8jrtF+e3vY751p0R8H9HdArwaCTNDDzHg/ScJK1Q==" + "version": "1.30.0", + "resolved": "https://registry.npmjs.org/prismjs/-/prismjs-1.30.0.tgz", + "integrity": "sha512-DEvV2ZF2r2/63V+tK8hQvrR2ZGn10srHbXviTlcv7Kpzw8jWiNTqbVgjO3IY8RxrrOUF8VPMQQFysYYYv0YZxw==" }, "probe-image-size": { "version": "7.2.3",