
Commit 33c168a

Pose Proposal Network Tested + Pre-Trained Models => Google Drive + Documentation Update (#302)
* Fix PPN post-processing
* Fix PPN post-processing: re
* Clean PPN header
* Remove useless scripts
* More details on TensorRT building
* Doc comment on PPN
* Models => Google Drive | Update documents
* Format C++ codes; update CI; update install scripts
* Fix FAKE
1 parent d247583 commit 33c168a

38 files changed: 395 additions & 388 deletions

.github/workflows/ci.yml

Lines changed: 21 additions & 8 deletions
```diff
@@ -7,16 +7,29 @@ jobs:
 
     # https://help.github.com/en/articles/virtual-environments-for-github-actions#supported-virtual-environments
     runs-on: ubuntu-18.04
+    strategy:
+      matrix:
+        python-version: [3.6, 3.7, 3.8]
 
     steps:
-      - uses: actions/checkout@v1
-      - run: sudo apt-get install libopencv-dev libgflags-dev # dependencies
-      - run: sh scripts/download-test-data.sh
-      - run: sh scripts/download-tinyvgg-model.sh
-      - run: sh scripts/download-openpose-thin-model.sh
-      - run: sh scripts/download-openpose-res50-model.sh
-      - run: sh scripts/download-openpose-coco-model.sh
-      - run: cmake . -DBUILD_TESTS=1 -DBUILD_FAKE=1 -DBUILD_EXAMPLES=1 -DBUILD_LIB=1 -DBUILD_USER_CODES=0 -DEXECUTABLE_OUTPUT_PATH=./bin
+      - uses: actions/checkout@v2
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Initialize Python Env
+        run: python3 -m pip install --upgrade pip
+      - name: Install System Dependencies
+        run: sudo apt-get install libopencv-dev libgflags-dev # dependencies
+      - name: Check download scripts.
+        run: |
+          sh scripts/download-test-data.sh
+          sh scripts/download-tinyvgg-model.sh
+          sh scripts/download-openpose-thin-model.sh
+          sh scripts/download-openpose-res50-model.sh
+          sh scripts/download-openpose-coco-model.sh
+      - name: Build Project(NO GPU)
+        run: cmake . -DBUILD_TESTS=1 -DBUILD_FAKE=1 -DBUILD_EXAMPLES=1 -DBUILD_LIB=1 -DBUILD_USER_CODES=0 -DEXECUTABLE_OUTPUT_PATH=./bin
       - run: cmake --build . --config Release
 
       # - run: ctest -C Release
```

.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -44,6 +44,7 @@ venv
 _build
 docs/make.bat
 examples/user_codes/*.cpp
+debug.*
 
 !docs/Makefile
 !docs/markdown/images/*
```

README.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -4,7 +4,7 @@
 
 HyperPose is a library for building human pose estimation systems that can efficiently operate in the wild.
 
-> **Note**: We are in the process of migrating our APIs from 1.0 to 2.0. The migration is expected to finish by July 2020.
+> **News**: The PoseProposal inference model is released! See the HyperPose models on [Google Drive](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing).
 
 ## Features
 
@@ -19,14 +19,14 @@ You can install HyperPose and learn its APIs through [Documentation](https://hyp
 
 ## Example
 
-We provide an example to show human pose estimation achieved by HyperPose. You need to install CUDA Toolkit 10+, TensorRT 7+, OpenCV 3.2+ and gFlags (cmake version), and enable C++ 17 support. Once the prerequisite are ready, run the following script:
+We provide an example to show human pose estimation achieved by HyperPose. You need to install CUDA Toolkit 10+, TensorRT 7+, OpenCV 3.2+ and gFlags (cmake version), and enable C++ 17 support. Once the prerequisites are met, run the following script:
 
 ```bash
-sudo apt -y install git cmake build-essential subversion curl libgflags-dev # libopencv-dev # [optional]
+sudo apt -y install git cmake build-essential subversion libgflags-dev libopencv-dev
 sh scripts/download-test-data.sh # Install data for examples.
 sh scripts/download-tinyvgg-model.sh # Install tiny-vgg model.
 mkdir build && cd build
-cmake .. -DCMAKE_BUILD_TYPE=RELEASE && make -j$(nproc) # Build library && examples.
+cmake .. -DCMAKE_BUILD_TYPE=RELEASE && make -j # Build library && examples.
 ./example.operator_api_batched_images_paf # The ouput images will be in the build folder.
 ```
````

docs/markdown/design/design.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -47,7 +47,7 @@ int main() {
     using namespace hyperpose;
 
     const cv::Size network_resolution{384, 256};
-    const dnn::uff uff_model{ "../data/models/hao28-600000-256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
+    const dnn::uff uff_model{ "../data/models/TinyVGG-V1-HW=256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
 
     // * Input video.
     auto capture = cv::VideoCapture("../data/media/video.avi");
@@ -106,7 +106,7 @@ int main() {
     using namespace hyperpose;
 
     const cv::Size network_resolution{384, 256};
-    const dnn::uff uff_model{ "../data/models/hao28-600000-256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
+    const dnn::uff uff_model{ "../data/models/TinyVGG-V1-HW=256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
 
     // * Input video.
     auto capture = cv::VideoCapture("../data/media/video.avi");
```

docs/markdown/install/prediction.md

Lines changed: 7 additions & 4 deletions
````diff
@@ -6,10 +6,13 @@
 * CMake 3.5+
 * Third-Party
   * OpenCV3.2+.
-  * [CUDA 10](https://developer.nvidia.com/cuda-downloads), [TensorRT 7](https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt_304/tensorrt-install-guide/index.html).
+  * [CUDA 10.2](https://developer.nvidia.com/cuda-downloads), [CuDNN 7.6.5](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html), [TensorRT 7.0](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html). (For Linux users, the [Debian installation](https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-debian) is highly recommended.)
   * gFlags(optional, for examples/tests)
 
-> Older versions of the packages may also work but are not tested.
+> Other versions of the packages may also work but are not tested.
+
+> Each TensorRT version requires specific CUDA and CuDNN versions. For the CUDA and CuDNN requirements of TensorRT 7, please refer to the [support matrix](https://docs.nvidia.com/deeplearning/tensorrt/support-matrix/index.html#platform-matrix).
+> Also, for Ubuntu 18.04 users, this [third-party blog](https://ddkang.github.io/2020/01/02/installing-tensorrt.html) may help.
 
 ## Build On Ubuntu 18.04
 
@@ -18,7 +21,7 @@
 sudo apt -y install cmake libopencv-dev
 # You may also install OpenCV from source to get best performance.
 
-# >>> Install CUDA/TensorRT
+# >>> Install CUDA/CuDNN/TensorRT
 
 # >>> Build gFlags(Optional) from source. Install it if you want to run the examples.
 wget https://github.com/gflags/gflags/archive/v2.2.2.zip
@@ -32,7 +35,7 @@ sudo make install
 git clone https://github.com/tensorlayer/hyperpose.git
 cd hyperpose
 mkdir build && cd build
-cmake .. -DCMAKE_BUILD_TYPE=RELEASE && make -j$(nproc)
+cmake .. -DCMAKE_BUILD_TYPE=Release && make -j
 ```
 
 ## Build with User Codes
````

docs/markdown/performance/prediction.md

Lines changed: 0 additions & 2 deletions
```diff
@@ -13,8 +13,6 @@
 > **Environment**: [email protected], GPU@1070Ti, CPU@i7(12 logic cores).
 >
 > **Tested Video Source**: Crazy Updown Funk(resolution@640x360, frame_count@7458, source@[YouTube](https://www.youtube.com/watch?v=2DiQUX11YaY))
->
-> **Availability**: All model above are available [here](https://github.com/tensorlayer/pretrained-models/tree/master/models/hyperpose).
 
 > OpenPose performance is not tested with batch processing as it seems not to be implemented. (see [here](https://github.com/CMU-Perceptual-Computing-Lab/openpose/issues/100))
```

docs/markdown/performance/supports.md

Lines changed: 2 additions & 5 deletions
```diff
@@ -16,11 +16,8 @@
 ### Supported Post-Processing Methods
 
 - Part Association Field(PAF)
-- Pose Proposal Networks(Coming Soon)
+- Pose Proposal Networks
 
 ### Released Prediction Models
 
-- [Tiny VGG](https://github.com/tensorlayer/pretrained-models/blob/master/models/hyperpose/hao28-600000-256x384.uff)
-- [OpenPose-COCO](https://github.com/tensorlayer/pretrained-models/blob/master/models/hyperpose/openpose_coco.onnx)
-- [OpenPose-Thin](https://github.com/tensorlayer/pretrained-models/blob/master/models/hyperpose/openpose_thin.onnx)
-- [ResNet18(for PAF)](https://github.com/tensorlayer/pretrained-models/blob/master/models/hyperpose/lopps_resnet50.onnx)
+We released the models on [Google Drive](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing). `.onnx` and `.uff` files are for inference.
```

docs/markdown/quick_start/prediction.md

Lines changed: 11 additions & 4 deletions
````diff
@@ -30,9 +30,10 @@ sh scripts/download-openpose-thin-model.sh # ~20 MB
 sh scripts/download-tinyvgg-model.sh # ~30 MB
 sh scripts/download-openpose-res50-model.sh # ~45 MB
 sh scripts/download-openpose-coco-model.sh # ~200 MB
+sh scripts/download-ppn-res50-model.sh # ~50 MB (PoseProposal Algorithm)
 ```
 
-> You can download them manually to `${HyperPose}/data/models/` via [LINK](https://github.com/tensorlayer/pretrained-models/tree/master/models/hyperpose) **if the network is not working**.
+> You can download them manually to `${HyperPose}/data/models/` via [LINK](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing) **if the network is not working**.
 
 ## Predict a sequence of images
 
@@ -46,17 +47,23 @@ sh scripts/download-openpose-coco-model.sh # ~200 MB
 # Take images in ../data/media as a big batch and do prediction.
 
 ./example.operator_api_batched_images_paf
-# The same as: `./example.operator_api_batched_images_paf --model_file ../data/models/hao28-600000-256x384.uff --input_folder ../data/media --input_width 384 --input_height 256`
+# The same as: `./example.operator_api_batched_images_paf --model_file ../data/models/TinyVGG-V1-HW=256x384.uff --input_folder ../data/media --input_width 384 --input_height 256`
 ```
 
 The output images will be in the build folder.
 
 ### Using a precise model
 
 ```bash
-./example.operator_api_batched_images_paf --model_file ../data/models/openpose_thin.onnx --input_width 432 --input_height 368
+./example.operator_api_batched_images_paf --model_file ../data/models/openpose-thin-V2-HW=368x432.onnx --input_width 432 --input_height 368
 
-./example.operator_api_batched_images_paf --model_file ../data/models/openpose_coco.onnx --input_width 656 --input_height 368
+./example.operator_api_batched_images_paf --model_file ../data/models/openpose-coco-V2-HW=368x656.onnx --input_width 656 --input_height 368
+```
+
+### Use PoseProposal model
+
+```bash
+./example.operator_api_batched_images_pose_proposal --model_file ../data/models/ppn-resnet50-V2-HW=384x384.onnx --input_width 368 --input_height 368
 ```
 
 ### Convert models into TensorRT Engine Protobuf format
````

docs/markdown/tutorial/faq.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -24,7 +24,7 @@ Refer to [here](https://www.learnopencv.com/tag/install/).
 
 Download them manually:
 
-- All prediction models are available [here](https://github.com/tensorlayer/pretrained-models/tree/master/models/hyperpose).
+- All prediction models are available on [Google Drive](https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing).
 - The test data are taken from the [OpenPose Project](https://github.com/CMU-Perceptual-Computing-Lab/openpose/tree/master/examples/media).
 
 ## Training
@@ -34,7 +34,7 @@ Download them manually:
 ### TensorRT Error?
 
 - See the `tensorrt.log`. (it contains more informations about logging and is located in where you execute the binary)
-- You may meet `ERROR: Tensor image cannot be both input and output` when using the `hao28-600000-256x384.uff` model. And just ignore it.
+- You may meet `ERROR: Tensor image cannot be both input and output` when using the `TinyVGG-V1-HW=256x384.uff` model. And just ignore it.
 
 ### Performance?
```
docs/markdown/tutorial/prediction.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -133,7 +133,7 @@ int main() {
     using namespace hyperpose;
 
     const cv::Size network_resolution{384, 256};
-    const dnn::uff uff_model{ "../data/models/hao28-600000-256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
+    const dnn::uff uff_model{ "../data/models/TinyVGG-V1-HW=256x384.uff", "image", {"outputs/conf", "outputs/paf"} };
 
     // * Input video.
     auto capture = cv::VideoCapture("../data/media/video.avi");
```
