32 changes: 32 additions & 0 deletions docs/zh_cn/dataset_zoo/2d_animal_keypoint.md
@@ -33,6 +33,10 @@ MMPose supported datasets:

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227796953-95ae1e30-5323-43f8-9a19-c4c2326e9835.png" height="200px">
</div>

For the [Animal-Pose](https://sites.google.com/view/animal-pose/) dataset, prepare the data as follows:

1. Download the images of [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#data), specifically the five animal categories (dog, cat, sheep, cow, horse), which we use as the trainval set.
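A minimal sketch of step 1, assuming a POSIX shell and that `$MMPOSE` points at your MMPose checkout (the mirror URL is the official one linked above):

```shell
# Sketch of step 1: fetch the VOC2012 trainval images and unpack them
# under $MMPOSE/data. The target directory is an assumption.
mkdir -p $MMPOSE/data && cd $MMPOSE/data
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
tar -xf VOCtrainval_11-May-2012.tar
```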
@@ -118,6 +122,10 @@ Those images from other sources (1000 images with 1000 annotations) are used for

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227797151-091dc21a-d944-49c9-8b62-cc47fa89e69f.png" height="200px">
</div>

For the [AP-10K](https://github.com/AlexTheBad/AP-10K/) dataset, images and annotations can be downloaded from [download](https://drive.google.com/file/d/1-FNNGcdtAQRehYYkGY1y4wzFNg4iWNad/view?usp=sharing).
Note that the images and annotations are for non-commercial use only.
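One possible way to fetch the archive from the command line is the third-party `gdown` tool; the file id comes from the link above, while the output name and archive format are assumptions:

```shell
# Sketch: download the AP-10K archive by its Google Drive file id and unpack
# it under $MMPOSE/data. The output name and .tar.gz format are assumptions;
# adjust if the release is packaged differently.
pip install gdown
cd $MMPOSE/data
gdown 1-FNNGcdtAQRehYYkGY1y4wzFNg4iWNad -O ap10k.tar.gz
tar -xzf ap10k.tar.gz
```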

@@ -170,6 +178,10 @@ The annotation files in the 'annotation' folder contain 50 labeled animal species.

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227797934-32bc1b2c-7957-4a29-94df-8e431842ab3b.png" height="200px">
</div>

For the [Horse-10](http://www.mackenziemathislab.org/horse10) dataset, images can be downloaded from [download](http://www.mackenziemathislab.org/horse10).
Please download the annotation files from [horse10_annotations](https://download.openmmlab.com/mmpose/datasets/horse10_annotations.tar). Note that the images and annotations are for non-commercial use only, per the authors (see http://horse10.deeplabcut.org for more information).
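As a sketch, the download-and-extract step for the annotations might look like this (the URL is the one above; the target directory is an assumption):

```shell
# Sketch: fetch the converted Horse-10 annotations and unpack them under
# $MMPOSE/data; the images themselves are downloaded manually from the site.
cd $MMPOSE/data
wget https://download.openmmlab.com/mmpose/datasets/horse10_annotations.tar
tar -xf horse10_annotations.tar
```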
Extract them under {MMPose}/data, and make them look like this:
@@ -216,6 +228,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227799576-f10f8469-9432-4139-beb4-195037dee72c.png" height="200px">
</div>

For the [MacaquePose](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html) dataset, images can be downloaded from [download](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html).
Please download the annotation files from [macaque_annotations](https://download.openmmlab.com/mmpose/datasets/macaque_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -266,6 +282,10 @@ Since the official dataset does not provide the test set, we randomly select 125

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227802774-bb4e4ef2-2ade-42ad-80f1-97f2a7faa9e2.png" height="200px">
</div>

For the [Vinegar Fly](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [vinegar_fly_images](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_images.tar).
Please download the annotation files from [vinegar_fly_annotations](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -314,6 +334,10 @@ Since the official dataset does not provide the test set, we randomly select 90%

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227802779-09d0ec8c-8971-4c67-a315-e2d1355f7f72.png" height="200px">
</div>

For the [Desert Locust](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [locust_images](https://download.openmmlab.com/mmpose/datasets/locust_images.tar).
Please download the annotation files from [locust_annotations](https://download.openmmlab.com/mmpose/datasets/locust_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -362,6 +386,10 @@ Since the official dataset does not provide the test set, we randomly select 90%

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227802783-ace952bb-1ff9-4720-80a8-c63cc9e714b6.png" height="200px">
</div>

For the [Grévy’s Zebra](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [zebra_images](https://download.openmmlab.com/mmpose/datasets/zebra_images.tar).
Please download the annotation files from [zebra_annotations](https://download.openmmlab.com/mmpose/datasets/zebra_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -408,6 +436,10 @@ Since the official dataset does not provide the test set, we randomly select 90%

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227797386-fce99241-8a0e-4a40-a179-dad013e6c5a4.png" height="200px">
</div>

ATRW captures images of the Amur tiger (also known as the Siberian tiger or Northeast China tiger) in the wild.
For the [ATRW](https://cvwc2019.github.io/challenge.html) dataset, please download the images from
[Pose_train](https://lilablobssc.blob.core.windows.net/cvwc2019/train/atrw_pose_train.tar.gz),
36 changes: 36 additions & 0 deletions docs/zh_cn/dataset_zoo/2d_body_keypoint.md
@@ -37,6 +37,10 @@ MMPose supported datasets:

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227864552-489d03de-e1b8-4ca2-8ac1-80dd99826cb7.png" height="300px">
</div>

For [COCO](http://cocodataset.org/) data, please download from [COCO download](http://cocodataset.org/#download). The 2017 Train/Val images are needed for COCO keypoint training and validation.
[HRNet-Human-Pose-Estimation](https://github.com/HRNet/HRNet-Human-Pose-Estimation) provides the person detection results on COCO val2017, which we use to reproduce our multi-person pose estimation results.
Please download them from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
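A sketch of the image and annotation download, assuming the standard COCO mirrors and a `data/coco` layout (an assumption here):

```shell
# Sketch: fetch the COCO 2017 images and annotations. The person detection
# results for val2017 still come from the drive links above.
mkdir -p $MMPOSE/data/coco && cd $MMPOSE/data/coco
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip -q train2017.zip
unzip -q val2017.zip
unzip -q annotations_trainval2017.zip
```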
@@ -91,6 +95,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227864660-e5f51e7d-deca-41d8-9725-8b5432bcc0e6.png" height="300px">
</div>

For [MPII](http://human-pose.mpi-inf.mpg.de/) data, please download from [MPII Human Pose Dataset](http://human-pose.mpi-inf.mpg.de/).
We have converted the original annotation files into JSON format; please download them from [mpii_annotations](https://download.openmmlab.com/mmpose/datasets/mpii_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -147,6 +155,10 @@ python tools/dataset/mat2json work_dirs/res50_mpii_256x256/pred.mat data/mpii/an

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227864382-ab722299-6806-4ae4-babb-7bcc5fb09662.png" height="300px">
</div>

For [MPII-TRB](https://github.com/kennymckormick/Triplet-Representation-of-human-Body) data, please download from [MPII Human Pose Dataset](http://human-pose.mpi-inf.mpg.de/).
Please download the annotation files from [mpii_trb_annotations](https://download.openmmlab.com/mmpose/datasets/mpii_trb_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -187,6 +199,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227864755-dd19644e-fccb-458b-a8c0-de55920261f5.png" height="300px">
</div>

For [AIC](https://github.com/AIChallenger/AI_Challenger_2017) data, please download from [AI Challenger 2017](https://github.com/AIChallenger/AI_Challenger_2017). The 2017 Train/Val data are needed for keypoint training and validation.
Please download the annotation files from [aic_annotations](https://download.openmmlab.com/mmpose/datasets/aic_annotations.tar).
Download and extract them under $MMPOSE/data, and make them look like this:
@@ -233,6 +249,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227864868-54a98493-df3a-44d8-acbc-6ec22043dfb9.png" height="300px">
</div>

For [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose) data, please download from [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose).
Please download the annotation files and human detection results from [crowdpose_annotations](https://download.openmmlab.com/mmpose/datasets/crowdpose_annotations.tar).
For top-down approaches, we follow [CrowdPose](https://arxiv.org/abs/1812.00324) and use the [pre-trained weights](https://pjreddie.com/media/files/yolov3.weights) of [YOLOv3](https://github.com/eriklindernoren/PyTorch-YOLOv3) to generate the detected human bounding boxes.
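Fetching those detector weights might look like the following sketch; the `checkpoints/` location is an assumption:

```shell
# Sketch: fetch the YOLOv3 weights used to generate human bounding boxes
# for the top-down setting. Point your detector config at this file.
mkdir -p $MMPOSE/checkpoints && cd $MMPOSE/checkpoints
wget https://pjreddie.com/media/files/yolov3.weights
```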
@@ -280,6 +300,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227864552-489d03de-e1b8-4ca2-8ac1-80dd99826cb7.png" height="300px">
</div>

For [OCHuman](https://github.com/liruilong940607/OCHumanApi) data, please download the images and annotations from [OCHuman](https://github.com/liruilong940607/OCHumanApi).
Move them under $MMPOSE/data, and make them look like this:

@@ -322,6 +346,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227865030-2fd33ade-2cc2-4b67-aca0-6dea2124b63c.png" height="300px">
</div>

For [MHP](https://lv-mhp.github.io/dataset) data, please download from [MHP](https://lv-mhp.github.io/dataset).
Please download the annotation files from [mhp_annotations](https://download.openmmlab.com/mmpose/datasets/mhp_annotations.tar.gz).
Please download and extract them under $MMPOSE/data, and make them look like this:
@@ -377,6 +405,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227865114-3f98c673-f6d0-4518-ae99-653f475f9fc8.png" height="300px">
</div>

For [PoseTrack18](https://posetrack.net/users/download.php) data, please download from [PoseTrack18](https://posetrack.net/users/download.php).
Please download the annotation files from [posetrack18_annotations](https://download.openmmlab.com/mmpose/datasets/posetrack18_annotations.tar).
We have merged the officially released per-video annotation files into two JSON files (posetrack18_train.json and posetrack18_val.json). We also generate the [mask files](https://download.openmmlab.com/mmpose/datasets/posetrack18_mask.tar) to speed up training.
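A sketch of fetching both the merged annotations and the mask files (URLs as above; the target directory is an assumption):

```shell
# Sketch: fetch the merged PoseTrack18 annotations and the pre-generated
# mask files, then unpack both under $MMPOSE/data.
cd $MMPOSE/data
wget https://download.openmmlab.com/mmpose/datasets/posetrack18_annotations.tar
wget https://download.openmmlab.com/mmpose/datasets/posetrack18_mask.tar
tar -xf posetrack18_annotations.tar
tar -xf posetrack18_mask.tar
```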
@@ -469,6 +501,10 @@ pip install git+https://github.com/svenkreiss/poseval.git

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227865619-d65f64ae-991d-4693-99c2-caecd1beb1fc.png" height="300px">
</div>

For [sub-JHMDB](http://jhmdb.is.tue.mpg.de/dataset) data, please download the [images](http://files.is.tue.mpg.de/jhmdb/Rename_Images.tar.gz) from [JHMDB](http://jhmdb.is.tue.mpg.de/dataset).
Please download the annotation files from [jhmdb_annotations](https://download.openmmlab.com/mmpose/datasets/jhmdb_annotations.tar).
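Combining the two downloads, a sketch might look like this (URLs from above; unpacking directly under $MMPOSE/data is an assumption):

```shell
# Sketch: fetch the JHMDB images and the converted sub-JHMDB annotations,
# then unpack both under $MMPOSE/data as described next.
cd $MMPOSE/data
wget http://files.is.tue.mpg.de/jhmdb/Rename_Images.tar.gz
wget https://download.openmmlab.com/mmpose/datasets/jhmdb_annotations.tar
tar -xzf Rename_Images.tar.gz
tar -xf jhmdb_annotations.tar
```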
Move them under $MMPOSE/data, and make them look like this:
16 changes: 16 additions & 0 deletions docs/zh_cn/dataset_zoo/2d_face_keypoint.md
@@ -32,6 +32,10 @@ MMPose supported datasets:

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227780043-e557acc3-5966-48ba-ac9e-b6c5916f1e55.jpg" height="200px">
</div>

For 300W data, please download images from [300W Dataset](https://ibug.doc.ic.ac.uk/resources/300-W/).
Please download the annotation files from [300w_annotations](https://download.openmmlab.com/mmpose/datasets/300w_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -108,6 +112,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227785100-4b3d1e39-0d64-47c0-9e55-c947ce70866e.png" height="200px">
</div>

For WFLW data, please download images from [WFLW Dataset](https://wywu.github.io/projects/LAB/WFLW.html).
Please download the annotation files from [wflw_annotations](https://download.openmmlab.com/mmpose/datasets/wflw_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -215,6 +223,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227786792-06604943-c062-4bcd-bb2d-a2f78d80115b.png" height="200px">
</div>

For COFW data, please download from [COFW Dataset (Color Images)](http://www.vision.caltech.edu/xpburgos/ICCV13/Data/COFW_color.zip).
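A sketch of the download; where the `.mat` files land inside the archive is an assumption, so verify after unzipping:

```shell
# Sketch: fetch the COFW color archive and unpack it, then move the two
# .mat files into data/cofw as described next.
mkdir -p $MMPOSE/data/cofw && cd $MMPOSE/data/cofw
wget http://www.vision.caltech.edu/xpburgos/ICCV13/Data/COFW_color.zip
unzip COFW_color.zip
```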
Move `COFW_train_color.mat` and `COFW_test_color.mat` to `data/cofw/` and make them look like:

@@ -274,6 +286,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227787217-2f226dc0-e5d7-4d0b-9ab8-68b53a5467c2.png" height="200px">
</div>

For the [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download). The 2017 Train/Val images are needed for COCO keypoint training and validation.
Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive).
Download the person detection results on COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
4 changes: 4 additions & 0 deletions docs/zh_cn/dataset_zoo/2d_fashion_landmark.md
@@ -43,6 +43,10 @@ MMPose supported datasets:

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227774588-443fc5cc-7842-472a-abd5-827f0e3fd27f.png" height="150px">
</div>

For the [DeepFashion](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html) dataset, images can be downloaded from [download](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html).
Please download the annotation files from [fld_annotations](https://download.openmmlab.com/mmpose/datasets/fld_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
24 changes: 24 additions & 0 deletions docs/zh_cn/dataset_zoo/2d_hand_keypoint.md
@@ -34,6 +34,10 @@ MMPose supported datasets:

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227771101-03a27bd8-ccc0-4eb9-a111-660f191a7a16.png" height="200px">
</div>

For [OneHand10K](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html) data, please download from [OneHand10K Dataset](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html).
Please download the annotation files from [onehand10k_annotations](https://download.openmmlab.com/mmpose/datasets/onehand10k_annotations.tar).
Extract them under {MMPose}/data, and make them look like this:
@@ -81,6 +85,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227771101-03a27bd8-ccc0-4eb9-a111-660f191a7a16.png" height="200px">
</div>

For [FreiHAND](https://lmb.informatik.uni-freiburg.de/projects/freihand/) data, please download from [FreiHand Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html).
Since the official dataset does not provide a validation set, we randomly split the training data 8:1:1 into train/val/test.
Please download the annotation files from [freihand_annotations](https://download.openmmlab.com/mmpose/datasets/frei_annotations.tar).
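A minimal sketch of fetching those annotations (URL as above; they already encode the 8:1:1 split described above):

```shell
# Sketch: fetch the converted FreiHAND annotations, which already encode the
# random 8:1:1 train/val/test split, and unpack them under $MMPOSE/data.
cd $MMPOSE/data
wget https://download.openmmlab.com/mmpose/datasets/frei_annotations.tar
tar -xf frei_annotations.tar
```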
@@ -129,6 +137,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227771101-03a27bd8-ccc0-4eb9-a111-660f191a7a16.png" height="200px">
</div>

For [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html), please download from [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html).
Following [Simon et al.](https://arxiv.org/abs/1704.07809), the panoptic images (hand143_panopticdb) and the MPII & NZSL training sets (manual_train) are used for training, while the MPII & NZSL test set (manual_test) is used for testing.
Please download the annotation files from [panoptic_annotations](https://download.openmmlab.com/mmpose/datasets/panoptic_annotations.tar).
@@ -183,6 +195,10 @@ year = {2020}

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227771753-5df1d722-59bd-4815-b85f-64a5ef79bbf5.png" height="200px">
</div>

For [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/), please download from [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/).
Please download the annotation files from [annotations](https://drive.google.com/drive/folders/1pWXhdfaka-J0fSAze0MsajN0VpZ8e8tO).
Extract them under {MMPose}/data, and make them look like this:
@@ -232,6 +248,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227772014-f7406a2b-2e64-42fb-8081-200d40104553.png" height="200px">
</div>

For [RHD Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html), please download from [RHD Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html).
Please download the annotation files from [rhd_annotations](https://download.openmmlab.com/mmpose/datasets/rhd_annotations.zip).
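As a sketch (note this annotation package is a zip rather than a tar; the target directory is an assumption):

```shell
# Sketch: fetch the converted RHD annotations and unpack them under
# $MMPOSE/data.
cd $MMPOSE/data
wget https://download.openmmlab.com/mmpose/datasets/rhd_annotations.zip
unzip rhd_annotations.zip
```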
Extract them under {MMPose}/data, and make them look like this:
@@ -288,6 +308,10 @@ mmpose

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227771101-03a27bd8-ccc0-4eb9-a111-660f191a7a16.png" height="200px">
</div>

For the [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download). The 2017 Train/Val images are needed for COCO keypoint training and validation.
Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive).
Download the person detection results on COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
8 changes: 8 additions & 0 deletions docs/zh_cn/dataset_zoo/2d_wholebody_keypoint.md
@@ -26,6 +26,10 @@ MMPose supported datasets:

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227770977-c8f00355-c43a-467e-8444-d307789cf4b2.png" height="300px">
</div>

For the [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download). The 2017 Train/Val images are needed for COCO keypoint training and validation.
Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive).
Download the person detection results on COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
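A hedged sketch of the remaining setup; the Extended COCO API referenced below is installed from PyPI, and the annotation filenames are assumptions based on the v1.0 release:

```shell
# Sketch: install the Extended COCO API used for whole-body evaluation and
# place the downloaded annotations next to the COCO images.
# The json filenames below are assumptions based on the v1.0 release.
pip install xtcocotools
mv coco_wholebody_train_v1.0.json coco_wholebody_val_v1.0.json \
   $MMPOSE/data/coco/annotations/
```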
@@ -80,6 +84,10 @@ Please also install the latest version of [Extended COCO API](https://github.com

</details>

<div align="center">
<img src="https://user-images.githubusercontent.com/100993824/227771087-b839ea5b-4461-4ba7-8a9a-823b78e2ca44.png" height="300px">
</div>

For the [Halpe](https://github.com/Fang-Haoshu/Halpe-FullBody/) dataset, please download the images and annotations from [Halpe download](https://github.com/Fang-Haoshu/Halpe-FullBody).
The images of the training set are from [HICO-Det](https://drive.google.com/open?id=1QZcJmGVlF9f4h-XLWe9Gkmnmj2z1gSnk) and those of the validation set are from [COCO](http://images.cocodataset.org/zips/val2017.zip).
Download the person detection results on COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).