2 changes: 1 addition & 1 deletion content/blog/cosa-nyu-ml-tools/index.md
@@ -1,5 +1,5 @@
---
templateKey: #blog-post
templateKey: blog-post
title: COSA x NYU Machine Learning Tools for Creative Coding
author: ml5.js
description: Join us at ITP for an informal series of talks and workshops exploring open-source machine learning tools for creative coding, presented in partnership with the Clinic for Open Source Arts (COSA)!
Binary file added content/model-card/bodypose/images/_main.jpg
Binary file added content/model-card/bodypose/images/_thumb.jpg
69 changes: 69 additions & 0 deletions content/model-card/bodypose/index.md
@@ -0,0 +1,69 @@
---
templateKey: blog-post
title: BodyPose Model Card
author: ml5.js
description: Friendly Machine Learning for the Web
keywords: bias, model card, BodyPose
image: "./images/_thumb.jpg"
externalLink: (link)
date: "2025-03-14"
tags:
- BodyPose
featured: true
---
BodyPose is built on TensorFlow's [MoveNet](https://www.tensorflow.org/hub/tutorials/movenet#:~:text=MoveNet%20is%20an%20ultra%20fast,known%20as%20Lightning%20and%20Thunder) and [BlazePose](https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker) models.

______
## MoveNet
MoveNet was trained on [two datasets](https://storage.googleapis.com/movenet/MoveNet.SinglePose%20Model%20Card.pdf):

**COCO Keypoint Dataset Training Set 2017**
- Date created: **2017**
- Size: **28K images**
- How the data was collected: “In-the-wild images with diverse scenes, instance sizes, and occlusions.” The original dataset of 64K images was distilled to the final 28K by keeping only images with three or fewer people.
- Bias:
* According to the public [model card](https://storage.googleapis.com/movenet/MoveNet.SinglePose%20Model%20Card.pdf), the qualitative analysis shows that although the dataset has a 3:1 male-to-female ratio and skews toward young and light-skinned individuals, the model is stated to perform “fairly” (< 5% performance differences between most categories).
* Categories of evaluation:
* Male / Female (gender)
* Young / Middle-age / Old (age)
* Darker / Medium / Lighter (skin tone)
* There has been a fair amount of [research](https://medium.com/@rxtang/mitigating-gender-bias-in-captioning-systems-5a956e1e0d6d#:~:text=COCO%20dataset%20has%20an%20imbalanced,the%20bias%20learned%20by%20models) on the COCO dataset. Most of it finds that the dataset carries numerous biases due to underrepresentation of certain demographics.

**Active Dataset Training Set**
- Date created: **2017-2021** ([estimated](https://blog.tensorflow.org/2021/05/next-generation-pose-detection-with-movenet-and-tensorflowjs.html))
- Size: **23.5k images**
- How the data was collected: “Images sampled from **YouTube fitness videos** which capture people exercising (e.g. HIIT, weight-lifting, etc.), stretching, or dancing. It contains diverse poses and motion with more motion blur and self-occlusions.”
- Bias:
* According to the model card, the models are stated to perform “fairly” (< 5% performance differences between all categories).
* Categories of evaluation:
* Male / Female (gender)
* Young / Middle-age / Old (age)
* Darker / Medium / Lighter (skin tone)
* Unlike the COCO dataset, the Active Single Person Image set is not public, so no additional research has been conducted to evaluate its fairness.
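The “< 5% performance differences” criterion used above can be made concrete: a model is reported as performing “fairly” when the gap between the best and worst performing group on each demographic axis stays under a threshold. A minimal sketch of that check (the function, group names, and scores are all illustrative, not part of MoveNet's tooling):

```javascript
// Illustrative fairness check based on the "< 5% performance
// difference" criterion quoted from the MoveNet model card.
// Group names and scores below are invented example numbers.
function isFair(groupScores, threshold = 0.05) {
  const values = Object.values(groupScores);
  const gap = Math.max(...values) - Math.min(...values);
  return { gap, fair: gap < threshold };
}

// One object per evaluation axis, mirroring the card's categories.
const byGender = { male: 0.92, female: 0.90 };
const byAge = { young: 0.93, middleAge: 0.91, old: 0.89 };

console.log(isFair(byGender).fair, isFair(byAge).fair); // → true true
```

Both gaps (0.02 and 0.04) fall under the 0.05 threshold, so this hypothetical model would be labeled “fair” on both axes.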

As stated, fitness videos uploaded to YouTube were used to assemble this internal Google dataset. Only in [2024](https://support.google.com/youtube/thread/313644973/third-party-ai-trainability-on-youtube?hl=en) did Google [provide](https://support.google.com/youtube/answer/15509945?hl=en) creators the opportunity to opt out of having their videos used for Google's AI/ML research.

___
## BlazePose
BlazePose is documented in a [research paper](https://arxiv.org/pdf/2006.10204) and a [model card](https://drive.google.com/file/d/10WlcTvrQnR_R2TdTmKw0nkyRLqrwNkWU/preview):
- Date created: **2020-2021 (estimated)**
- Size: **80K**
- How the data was collected: Not stated in the original research paper. The model card asserts: “This model was trained and evaluated on images, including consented images (30K), of people using a mobile AR application captured with smartphone cameras in various “in-the-wild” conditions. The majority of training images (85K) capture a wide range of fitness poses.”
- Bias:
* According to the model card, the models are stated to perform “fairly”.
* Categories of evaluation:
* 14 subregions
* Male / Female (gender)
* 6 skin tones
* Evaluation results:
* Subregions (14): difference in confidence between the average and worst performing regions is 4.8% for the heavy model, 4.8% for the full model, and 6.5% for the lite model.
* Gender: difference in confidence is 1.1% for the heavy model, 2.2% for the full model and 3.1% for the lite model.
* Skin tones: difference in confidence between worst and best performing categories is 5.7% for the heavy model, 7.0% for the full model and 7.3% for the lite model.
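The gaps quoted above can be reproduced mechanically from per-group confidence scores. A small JavaScript sketch of that computation (all scores below are invented, chosen only so the resulting skin-tone gaps match the figures quoted above):

```javascript
// Reproduces the reporting format above: for each model variant,
// the confidence gap (in %) between the best and worst performing
// category. Scores are invented for illustration.
const confidenceBySkinTone = {
  heavy: [0.950, 0.962, 0.948, 0.941, 0.920, 0.905], // 6 skin tones
  full:  [0.945, 0.955, 0.940, 0.930, 0.900, 0.885],
  lite:  [0.940, 0.950, 0.935, 0.920, 0.890, 0.877],
};

// Gap between best and worst category, rounded to one decimal place.
function gapPercent(scores) {
  return +(100 * (Math.max(...scores) - Math.min(...scores))).toFixed(1);
}

for (const [variant, scores] of Object.entries(confidenceBySkinTone)) {
  console.log(`${variant}: ${gapPercent(scores)}% gap`);
}
```

With these invented scores the gaps come out to 5.7%, 7.0%, and 7.3% for the heavy, full, and lite models, matching the skin-tone figures reported in the card.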

No additional research has been conducted to evaluate fairness.
There is no specific information on how **consent** was obtained for the images.


____

#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6).
39 changes: 39 additions & 0 deletions content/model-card/bodysegmentation/index.md
@@ -0,0 +1,39 @@
---
templateKey: blog-post
title: BodySegmentation Model Card
author: ml5.js
description: Friendly Machine Learning for the Web
keywords: bias, model card, BodySegmentation
image: "./images/_thumb.jpg"
externalLink: (link)
date: "2025-03-14"
tags:
- BodySegmentation
featured: true
---
ml5.js BodySegmentation provides two underlying models, **SelfieSegmentation** and **BodyPix**:

______
## SelfieSegmentation

**MediaPipe Selfie Segmentation [Model Card](https://storage.googleapis.com/mediapipe-assets/Model%20Card%20MediaPipe%20Selfie%20Segmentation.pdf)**
- Date created: **2021**
- Size: **Not stated**
- How the data was collected: “This model was trained and evaluated on images, including consented images of people using a mobile AR application captured with smartphone cameras in various “in-the-wild” conditions.”
- Bias:
* Categories of evaluation:
* 17 demographical subregions
* 6 skin tones
* Male / Female (gender)
* Evaluation results:
* Subregions: Difference in confidence between the average and worst performing regions is 1.11% for the general model and 1.28% for the landscape model, below the stated criterion.
* Gender: Differences in confidence are 1.6% for the general model and 0.6% for the landscape model.
* Skin tone: Difference in confidence between worst and best performing

____
## BodyPix
This [short article](https://medium.com/tensorflow/introducing-bodypix-real-time-person-segmentation-in-the-browser-with-tensorflow-js-f1948126c2a0) is the only information on BodyPix that we have found.

____

#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6).
Binary file added content/model-card/facemesh/images/_main.jpg
Binary file added content/model-card/facemesh/images/_thumb.jpg
63 changes: 63 additions & 0 deletions content/model-card/facemesh/index.md
@@ -0,0 +1,63 @@
---
templateKey: blog-post
title: Facemesh Model Card
author: ml5.js
description: Friendly Machine Learning for the Web
keywords: bias, model card, Facemesh
image: "./images/_thumb.jpg"
externalLink: (link)
date: "2025-03-14"
tags:
- Facemesh
featured: true
---

## MediaPipe Face Mesh

**Sources: the [Model Card](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view?pli=1), the [research paper](https://arxiv.org/pdf/1907.06724), and the [research blog](https://sites.google.com/view/perception-cv4arvr/facemesh)**
- Date created: **2018**
- Size: **Not stated**
- How the data was collected: “All dataset images were captured on a diverse set of smartphone cameras, both front- and back-facing. All images were captured in a real-world environment with different light, noise and motion conditions via an AR (Augmented Reality) application.”

- Bias:
* According to the model card, the models are stated to perform “well” across most groups.
* Categories of evaluation:
* 17 geographic subregions
* 6 skin tones
* Male / Female (gender)
* Evaluation results:
* Subregions: Difference in confidence between best and worst performing regions of 0.9% for the tracking mode and 1.56% for the reacquisition mode.
* Genders: Difference in confidence is 0.02% for the tracking mode and 0.1% for the reacquisition mode.
* Skin tones: Difference in confidence is 0.24% for tracking mode and 1.12% for the reacquisition mode.
* No additional research has been conducted to evaluate fairness.

_____
## MediaPipe Attention Mesh
**This is the [Model Card](https://drive.google.com/file/d/1tV7EJb3XgMS7FwOErTgLU1ZocYyNmwlf/preview)**
- Date created: **2020**
- Size: **30K (estimated)**
- How the data was collected: “All dataset images were captured on a diverse set of smartphone cameras, both front- and back-facing. All images were captured in a real-world environment with different light, noise and motion conditions via an AR (Augmented Reality) application.”

- Bias:
* According to the model card, the models are stated to perform “well” across most groups.

* Categories of evaluation:
* 17 geographic subregions
* 6 skin tones
* Male / Female (gender)
* Evaluation results:
* Subregions: Difference in confidence between best and worst performing regions of 1.22% for the tracking mode and 1.27% for the reacquisition mode.
* Gender: Difference in confidence is 0.01% for the tracking mode and 0.03% for the reacquisition mode.
* Skin tones: Difference in confidence is 0.54% for tracking mode and 0.88% for the reacquisition mode.

- Potential bias:
* The model may have difficulty with facial accessories such as glasses or cultural headwear, which could produce large confidence differences if evaluated.

____

#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6).

Binary file added content/model-card/handpose/images/_main.jpg
Binary file added content/model-card/handpose/images/_thumb.jpg
29 changes: 29 additions & 0 deletions content/model-card/handpose/index.md
@@ -0,0 +1,29 @@
---
templateKey: blog-post
title: HandPose Model Card
author: ml5.js
description: Friendly Machine Learning for the Web
keywords: bias, model card, HandPose
image: "./images/_thumb.jpg"
externalLink: (link)
date: "2025-03-14"
tags:
- HandPose
featured: true
---
## Hand Detection / Tracking

**Hand Detection/Tracking [Model Card](https://drive.google.com/file/d/1sv4sSb9BSNVZhLzxXJ0jBv9DqD-4jnAz/view?pli=1)**
- Date created: **2021** (estimated)
- Size: **Not stated**
- How the data was collected: “This model was trained and evaluated on images of people using a mobile AR application captured with smartphone cameras in various “in-the-wild” conditions.”

- Bias:
* No fairness evaluation has been conducted on this model, and no access to the dataset was provided.
* As stated on the model card: “as with many human sensing tools, performance may vary across skin tones, gender, age, and potentially other sensitive demographic characteristics.”
* No additional research has been conducted to evaluate fairness.

____

#### Please submit any feedback/information you believe would be useful regarding this model [here](https://forms.gle/BPG44g3cJywSKjde6).
