From 1d2b4bb4e3b7d93e029ad4d1ab5d8c0ab561e05a Mon Sep 17 00:00:00 2001
From: walsvid
Date: Mon, 27 Mar 2023 17:29:54 +0800
Subject: [PATCH] Add keypoints order images in zh_cn doc

---
 docs/zh_cn/dataset_zoo/2d_animal_keypoint.md  | 32 +++++++++++++++++
 docs/zh_cn/dataset_zoo/2d_body_keypoint.md    | 36 +++++++++++++++++++
 docs/zh_cn/dataset_zoo/2d_face_keypoint.md    | 16 +++++++++
 docs/zh_cn/dataset_zoo/2d_fashion_landmark.md |  4 +++
 docs/zh_cn/dataset_zoo/2d_hand_keypoint.md    | 24 +++++++++++++
 .../dataset_zoo/2d_wholebody_keypoint.md      |  8 +++++
 docs/zh_cn/dataset_zoo/3d_body_keypoint.md    | 12 +++++++
 docs/zh_cn/dataset_zoo/3d_hand_keypoint.md    |  4 +++
 8 files changed, 136 insertions(+)

diff --git a/docs/zh_cn/dataset_zoo/2d_animal_keypoint.md b/docs/zh_cn/dataset_zoo/2d_animal_keypoint.md
index ba9d871137..2429602537 100644
--- a/docs/zh_cn/dataset_zoo/2d_animal_keypoint.md
+++ b/docs/zh_cn/dataset_zoo/2d_animal_keypoint.md
@@ -33,6 +33,10 @@ MMPose supported datasets:
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [Animal-Pose](https://sites.google.com/view/animal-pose/) dataset, we prepare the dataset as follows:

1. Download the images of [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#data), especially the five categories (dog, cat, sheep, cow, horse), which we use as trainval dataset.
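As a rough command-line sketch of this step, the VOC2012 trainval archive can be fetched and unpacked as below; the archive URL and the `data/animalpose` target directory are assumptions rather than something this patch or the surrounding text specifies:

```shell
# Sketch: fetch the PASCAL VOC2012 trainval images for Animal-Pose preparation.
# The mirror URL and target directory are assumptions; adjust them to your setup.
mkdir -p data/animalpose
wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar -P data/animalpose
tar -xf data/animalpose/VOCtrainval_11-May-2012.tar -C data/animalpose
# The dog/cat/sheep/cow/horse images then sit under data/animalpose/VOCdevkit/VOC2012/JPEGImages
```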

@@ -118,6 +122,10 @@ Those images from other sources (1000 images with 1000 annotations) are used for
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [AP-10K](https://github.com/AlexTheBad/AP-10K/) dataset, images and annotations can be downloaded from [download](https://drive.google.com/file/d/1-FNNGcdtAQRehYYkGY1y4wzFNg4iWNad/view?usp=sharing). Note that the images and annotations are for non-commercial use only.
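Because the archive is hosted on Google Drive, a plain `wget` will not work; a helper such as `gdown` can pull it from the command line. A minimal sketch, in which the output filename and archive format are assumptions to be checked after download:

```shell
# Sketch: download the AP-10K release from Google Drive using its file id.
# Output name and archive format are assumptions; inspect the file once fetched.
pip install gdown
mkdir -p data
gdown 1-FNNGcdtAQRehYYkGY1y4wzFNg4iWNad -O data/ap-10k-archive
tar -xzf data/ap-10k-archive -C data   # use unzip instead if it turns out to be a .zip
```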

@@ -170,6 +178,10 @@ The annotation files in 'annotation' folder contains 50 labeled animal species.
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [Horse-10](http://www.mackenziemathislab.org/horse10) dataset, images can be downloaded from [download](http://www.mackenziemathislab.org/horse10). Please download the annotation files from [horse10_annotations](https://download.openmmlab.com/mmpose/datasets/horse10_annotations.tar). Note that the images and annotations are for non-commercial use only, per the authors (see http://horse10.deeplabcut.org for more information). Extract them under {MMPose}/data, and make them look like this:

@@ -216,6 +228,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [MacaquePose](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html) dataset, images can be downloaded from [download](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html). Please download the annotation files from [macaque_annotations](https://download.openmmlab.com/mmpose/datasets/macaque_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -266,6 +282,10 @@ Since the official dataset does not provide the test set, we randomly select 125
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [Vinegar Fly](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [vinegar_fly_images](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_images.tar). Please download the annotation files from [vinegar_fly_annotations](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -314,6 +334,10 @@ Since the official dataset does not provide the test set, we randomly select 90%
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [Desert Locust](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [locust_images](https://download.openmmlab.com/mmpose/datasets/locust_images.tar). Please download the annotation files from [locust_annotations](https://download.openmmlab.com/mmpose/datasets/locust_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -362,6 +386,10 @@ Since the official dataset does not provide the test set, we randomly select 90%
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [Grévy’s Zebra](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [zebra_images](https://download.openmmlab.com/mmpose/datasets/zebra_images.tar). Please download the annotation files from [zebra_annotations](https://download.openmmlab.com/mmpose/datasets/zebra_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -408,6 +436,10 @@ Since the official dataset does not provide the test set, we randomly select 90%
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

ATRW captures images of the Amur tiger (also known as the Siberian tiger or Northeast-China tiger) in the wild. For [ATRW](https://cvwc2019.github.io/challenge.html) dataset, please download images from [Pose_train](https://lilablobssc.blob.core.windows.net/cvwc2019/train/atrw_pose_train.tar.gz),

diff --git a/docs/zh_cn/dataset_zoo/2d_body_keypoint.md b/docs/zh_cn/dataset_zoo/2d_body_keypoint.md
index 625e4d5714..c5bf70a3f8 100644
--- a/docs/zh_cn/dataset_zoo/2d_body_keypoint.md
+++ b/docs/zh_cn/dataset_zoo/2d_body_keypoint.md
@@ -37,6 +37,10 @@ MMPose supported datasets:
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [COCO](http://cocodataset.org/) data, please download from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation. [HRNet-Human-Pose-Estimation](https://github.com/HRNet/HRNet-Human-Pose-Estimation) provides person detection results of COCO val2017 to reproduce our multi-person pose estimation results. Please download from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
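For the image and annotation download itself, a minimal shell sketch is shown below; the COCO zip URLs are the standard public ones, and the `data/coco` layout is an assumption that should be matched against the directory tree given for this dataset:

```shell
# Sketch: fetch COCO 2017 Train/Val images plus keypoint annotations.
# Target layout (data/coco/...) is an assumption; align it with the tree in the doc.
mkdir -p data/coco && cd data/coco
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip -q train2017.zip && unzip -q val2017.zip && unzip -q annotations_trainval2017.zip
```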

@@ -91,6 +95,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [MPII](http://human-pose.mpi-inf.mpg.de/) data, please download from [MPII Human Pose Dataset](http://human-pose.mpi-inf.mpg.de/). We have converted the original annotation files into json format; please download them from [mpii_annotations](https://download.openmmlab.com/mmpose/datasets/mpii_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -147,6 +155,10 @@ python tools/dataset/mat2json work_dirs/res50_mpii_256x256/pred.mat data/mpii/an
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [MPII-TRB](https://github.com/kennymckormick/Triplet-Representation-of-human-Body) data, please download from [MPII Human Pose Dataset](http://human-pose.mpi-inf.mpg.de/). Please download the annotation files from [mpii_trb_annotations](https://download.openmmlab.com/mmpose/datasets/mpii_trb_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -187,6 +199,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [AIC](https://github.com/AIChallenger/AI_Challenger_2017) data, please download from [AI Challenger 2017](https://github.com/AIChallenger/AI_Challenger_2017); 2017 Train/Val is needed for keypoints training and validation. Please download the annotation files from [aic_annotations](https://download.openmmlab.com/mmpose/datasets/aic_annotations.tar). Download and extract them under $MMPOSE/data, and make them look like this:

@@ -233,6 +249,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose) data, please download from [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose). Please download the annotation files and human detection results from [crowdpose_annotations](https://download.openmmlab.com/mmpose/datasets/crowdpose_annotations.tar). For top-down approaches, we follow [CrowdPose](https://arxiv.org/abs/1812.00324) and use the [pre-trained weights](https://pjreddie.com/media/files/yolov3.weights) of [YOLOv3](https://github.com/eriklindernoren/PyTorch-YOLOv3) to generate the detected human bounding boxes.
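A minimal sketch of fetching these files from the command line; the destination directories are assumptions, so place the weights and annotations wherever your detector and dataset configs expect them:

```shell
# Sketch: download the CrowdPose annotation bundle and the YOLOv3 weights
# used for human detection in the top-down pipeline. Paths are assumptions.
mkdir -p data/crowdpose checkpoints
wget https://download.openmmlab.com/mmpose/datasets/crowdpose_annotations.tar
tar -xf crowdpose_annotations.tar -C data/crowdpose
wget https://pjreddie.com/media/files/yolov3.weights -P checkpoints/
```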

@@ -280,6 +300,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [OCHuman](https://github.com/liruilong940607/OCHumanApi) data, please download the images and annotations from [OCHuman](https://github.com/liruilong940607/OCHumanApi). Move them under $MMPOSE/data, and make them look like this:

@@ -322,6 +346,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [MHP](https://lv-mhp.github.io/dataset) data, please download from [MHP](https://lv-mhp.github.io/dataset). Please download the annotation files from [mhp_annotations](https://download.openmmlab.com/mmpose/datasets/mhp_annotations.tar.gz). Download and extract them under $MMPOSE/data, and make them look like this:

@@ -377,6 +405,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [PoseTrack18](https://posetrack.net/users/download.php) data, please download from [PoseTrack18](https://posetrack.net/users/download.php). Please download the annotation files from [posetrack18_annotations](https://download.openmmlab.com/mmpose/datasets/posetrack18_annotations.tar). We have merged the video-wise separated official annotation files into two json files (posetrack18_train.json & posetrack18_val.json). We also generate the [mask files](https://download.openmmlab.com/mmpose/datasets/posetrack18_mask.tar) to speed up training.
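A minimal shell sketch for this step follows; the `data/posetrack18` target directory is an assumption and should be checked against the directory tree given for this dataset:

```shell
# Sketch: fetch the merged PoseTrack18 annotations and pre-generated masks.
# data/posetrack18 is an assumed target; match the tree shown in the doc.
mkdir -p data/posetrack18
wget https://download.openmmlab.com/mmpose/datasets/posetrack18_annotations.tar
wget https://download.openmmlab.com/mmpose/datasets/posetrack18_mask.tar
tar -xf posetrack18_annotations.tar -C data/posetrack18
tar -xf posetrack18_mask.tar -C data/posetrack18
```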

@@ -469,6 +501,10 @@ pip install git+https://github.com/svenkreiss/poseval.git
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [sub-JHMDB](http://jhmdb.is.tue.mpg.de/dataset) data, please download the [images](http://files.is.tue.mpg.de/jhmdb/Rename_Images.tar.gz) from [JHMDB](http://jhmdb.is.tue.mpg.de/dataset). Please download the annotation files from [jhmdb_annotations](https://download.openmmlab.com/mmpose/datasets/jhmdb_annotations.tar). Move them under $MMPOSE/data, and make them look like this:

diff --git a/docs/zh_cn/dataset_zoo/2d_face_keypoint.md b/docs/zh_cn/dataset_zoo/2d_face_keypoint.md
index e92e970327..17eb823954 100644
--- a/docs/zh_cn/dataset_zoo/2d_face_keypoint.md
+++ b/docs/zh_cn/dataset_zoo/2d_face_keypoint.md
@@ -32,6 +32,10 @@ MMPose supported datasets:
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For 300W data, please download images from [300W Dataset](https://ibug.doc.ic.ac.uk/resources/300-W/). Please download the annotation files from [300w_annotations](https://download.openmmlab.com/mmpose/datasets/300w_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -108,6 +112,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For WFLW data, please download images from [WFLW Dataset](https://wywu.github.io/projects/LAB/WFLW.html). Please download the annotation files from [wflw_annotations](https://download.openmmlab.com/mmpose/datasets/wflw_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -215,6 +223,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For COFW data, please download from [COFW Dataset (Color Images)](http://www.vision.caltech.edu/xpburgos/ICCV13/Data/COFW_color.zip). Move `COFW_train_color.mat` and `COFW_test_color.mat` to `data/cofw/` and make them look like:

@@ -274,6 +286,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation. Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive). Download person detection results of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).

diff --git a/docs/zh_cn/dataset_zoo/2d_fashion_landmark.md b/docs/zh_cn/dataset_zoo/2d_fashion_landmark.md
index c0eb2c8435..42f213e40a 100644
--- a/docs/zh_cn/dataset_zoo/2d_fashion_landmark.md
+++ b/docs/zh_cn/dataset_zoo/2d_fashion_landmark.md
@@ -43,6 +43,10 @@ MMPose supported datasets:
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [DeepFashion](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html) dataset, images can be downloaded from [download](http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion/LandmarkDetection.html). Please download the annotation files from [fld_annotations](https://download.openmmlab.com/mmpose/datasets/fld_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

diff --git a/docs/zh_cn/dataset_zoo/2d_hand_keypoint.md b/docs/zh_cn/dataset_zoo/2d_hand_keypoint.md
index 1d369e775b..aade35850c 100644
--- a/docs/zh_cn/dataset_zoo/2d_hand_keypoint.md
+++ b/docs/zh_cn/dataset_zoo/2d_hand_keypoint.md
@@ -34,6 +34,10 @@ MMPose supported datasets:
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [OneHand10K](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html) data, please download from [OneHand10K Dataset](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html). Please download the annotation files from [onehand10k_annotations](https://download.openmmlab.com/mmpose/datasets/onehand10k_annotations.tar). Extract them under {MMPose}/data, and make them look like this:

@@ -81,6 +85,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [FreiHAND](https://lmb.informatik.uni-freiburg.de/projects/freihand/) data, please download from [FreiHand Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html). Since the official dataset does not provide a validation set, we randomly split the training data into 8:1:1 for train/val/test. Please download the annotation files from [freihand_annotations](https://download.openmmlab.com/mmpose/datasets/frei_annotations.tar).
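A minimal shell sketch for fetching the pre-split annotations; the `data/freihand` target directory is an assumption to be checked against the directory tree given for this dataset:

```shell
# Sketch: download and unpack the FreiHAND annotations released by MMPose.
# data/freihand is an assumed target; match the tree shown in the doc.
mkdir -p data/freihand
wget https://download.openmmlab.com/mmpose/datasets/frei_annotations.tar
tar -xf frei_annotations.tar -C data/freihand
```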

@@ -129,6 +137,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html), please download from [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html). Following [Simon et al](https://arxiv.org/abs/1704.07809), the panoptic images (hand143_panopticdb) and the MPII & NZSL training sets (manual_train) are used for training, while the MPII & NZSL test set (manual_test) is used for testing. Please download the annotation files from [panoptic_annotations](https://download.openmmlab.com/mmpose/datasets/panoptic_annotations.tar).

@@ -183,6 +195,10 @@ year = {2020}
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/), please download from [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/). Please download the annotation files from [annotations](https://drive.google.com/drive/folders/1pWXhdfaka-J0fSAze0MsajN0VpZ8e8tO). Extract them under {MMPose}/data, and make them look like this:

@@ -232,6 +248,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [RHD Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html), please download from [RHD Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html). Please download the annotation files from [rhd_annotations](https://download.openmmlab.com/mmpose/datasets/rhd_annotations.zip). Extract them under {MMPose}/data, and make them look like this:

@@ -288,6 +308,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation. Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive). Download person detection results of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).

diff --git a/docs/zh_cn/dataset_zoo/2d_wholebody_keypoint.md b/docs/zh_cn/dataset_zoo/2d_wholebody_keypoint.md
index e3d573ffbd..a082c657c6 100644
--- a/docs/zh_cn/dataset_zoo/2d_wholebody_keypoint.md
+++ b/docs/zh_cn/dataset_zoo/2d_wholebody_keypoint.md
@@ -26,6 +26,10 @@ MMPose supported datasets:
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation. Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive). Download person detection results of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).

@@ -80,6 +84,10 @@ Please also install the latest version of [Extended COCO API](https://github.com
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [Halpe](https://github.com/Fang-Haoshu/Halpe-FullBody/) dataset, please download images and annotations from [Halpe download](https://github.com/Fang-Haoshu/Halpe-FullBody). The images of the training set are from [HICO-Det](https://drive.google.com/open?id=1QZcJmGVlF9f4h-XLWe9Gkmnmj2z1gSnk) and those of the validation set are from [COCO](http://images.cocodataset.org/zips/val2017.zip). Download person detection results of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).

diff --git a/docs/zh_cn/dataset_zoo/3d_body_keypoint.md b/docs/zh_cn/dataset_zoo/3d_body_keypoint.md
index ee04b01bc8..82e21010fc 100644
--- a/docs/zh_cn/dataset_zoo/3d_body_keypoint.md
+++ b/docs/zh_cn/dataset_zoo/3d_body_keypoint.md
@@ -32,6 +32,10 @@ MMPose supported datasets:
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [Human3.6M](http://vision.imar.ro/human3.6m/description.php), please download from the official website and run the [preprocessing script](/tools/dataset_converters/preprocess_h36m.py), which will extract camera parameters and pose annotations at full framerate (50 FPS) and downsampled framerate (10 FPS). The processed data should have the following structure:

```text

@@ -90,6 +94,10 @@ year = {2015}
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

Please follow [voxelpose-pytorch](https://github.com/microsoft/voxelpose-pytorch) to prepare this dataset.

1. Download the dataset by following the instructions in [panoptic-toolbox](https://github.com/CMU-Perceptual-Computing-Lab/panoptic-toolbox) and extract them under `$MMPOSE/data/panoptic`.

@@ -137,6 +145,10 @@ mmpose
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

Please follow [voxelpose-pytorch](https://github.com/microsoft/voxelpose-pytorch) to prepare these two datasets.

1. Please download the datasets from the [official website](http://campar.in.tum.de/Chair/MultiHumanPose) and extract them under `$MMPOSE/data/campus` and `$MMPOSE/data/shelf`, respectively. The original data include images as well as the ground truth pose file `actorsGT.mat`.

diff --git a/docs/zh_cn/dataset_zoo/3d_hand_keypoint.md b/docs/zh_cn/dataset_zoo/3d_hand_keypoint.md
index 17537e4476..2b1f4d3923 100644
--- a/docs/zh_cn/dataset_zoo/3d_hand_keypoint.md
+++ b/docs/zh_cn/dataset_zoo/3d_hand_keypoint.md
@@ -25,6 +25,10 @@ year = {2020}
+
+<div align="center">
+  <img src="..." height="300px">
+</div>

For [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/), please download from [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/). Please download the annotation files from [annotations](https://drive.google.com/drive/folders/1pWXhdfaka-J0fSAze0MsajN0VpZ8e8tO). Extract them under {MMPose}/data, and make them look like this: