[Model] Add PIPNet and FaceLandmark1000 Support #548
Merged
Commits (131 total; changes shown from 120 commits)
- 1684b05 first commit for yolov7 (ziqi-jin)
- 71c00d9 pybind for yolov7 (ziqi-jin)
- 21ab2f9 CPP README.md (ziqi-jin)
- d63e862 CPP README.md (ziqi-jin)
- 7b3b0e2 modified yolov7.cc (ziqi-jin)
- d039e80 README.md (ziqi-jin)
- a34a815 python file modify (ziqi-jin)
- eb010a8 merge test (ziqi-jin)
- 39f64f2 delete license in fastdeploy/ (ziqi-jin)
- d071b37 repush the conflict part (ziqi-jin)
- d5026ca README.md modified (ziqi-jin)
- fb376ad README.md modified (ziqi-jin)
- 4b8737c file path modified (ziqi-jin)
- ce922a0 file path modified (ziqi-jin)
- 6e00b82 file path modified (ziqi-jin)
- 8c359fb file path modified (ziqi-jin)
- 906c730 file path modified (ziqi-jin)
- 80c1223 README modified (ziqi-jin)
- 6072757 README modified (ziqi-jin)
- 2c6e6a4 move some helpers to private (ziqi-jin)
- 48136f0 add examples for yolov7 (ziqi-jin)
- 6feca92 api.md modified (ziqi-jin)
- ae70d4f api.md modified (ziqi-jin)
- f591b85 api.md modified (ziqi-jin)
- f0def41 YOLOv7 (ziqi-jin)
- 15b9160 yolov7 release link (ziqi-jin)
- 4706e8c yolov7 release link (ziqi-jin)
- dc83584 yolov7 release link (ziqi-jin)
- 086debd copyright (ziqi-jin)
- 4f980b9 change some helpers to private (ziqi-jin)
- 2e61c95 Merge branch 'develop' into develop (ziqi-jin)
- 80beadf change variables to const and fix documents. (ziqi-jin)
- 8103772 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- f5f7a86 gitignore (ziqi-jin)
- e6cec25 Transfer some funtions to private member of class (ziqi-jin)
- e25e4f2 Transfer some funtions to private member of class (ziqi-jin)
- e8a8439 Merge from develop (#9) (ziqi-jin)
- a182893 first commit for yolor (ziqi-jin)
- 3aa015f for merge (ziqi-jin)
- d6b98aa Develop (#11) (ziqi-jin)
- 871cfc6 Merge branch 'yolor' into develop (ziqi-jin)
- 013921a Yolor (#16) (ziqi-jin)
- 7a5a6d9 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- c996117 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 0aefe32 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 2330414 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 4660161 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 033c18e Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 6c94d65 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 85fb256 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 90ca4cb add is_dynamic for YOLO series (#22) (ziqi-jin)
- f6a4ed2 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 3682091 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- ca1e110 Merge remote-tracking branch 'upstream/develop' into develop (ziqi-jin)
- 93ba6a6 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 767842e Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- cc32733 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 2771a3b Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- a1e29ac Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 5ecc6fe Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 2780588 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- c00be81 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 9082178 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 4b14f56 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 4876b82 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 9cebb1f Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- d1e3b29 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 69cf0d2 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 2ff10e1 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- a673a2c Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 832d777 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- e513eac Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- ded2054 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 19db925 modify ppmatting backend and docs (ziqi-jin)
- 15be4a6 modify ppmatting docs (ziqi-jin)
- 3a5b93a fix the PPMatting size problem (ziqi-jin)
- f765853 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- c2332b0 fix LimitShort's log (ziqi-jin)
- 950f948 retrigger ci (ziqi-jin)
- 64a13c9 modify PPMatting docs (ziqi-jin)
- 09c073d modify the way for dealing with LimitShort (ziqi-jin)
- 99969b6 Merge branch 'develop' into develop (jiangjiajun)
- cf248de Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 9d4a4c9 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 622fbf7 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- d1cf1ad Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- ff9a07e Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 2707b03 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 896d1d9 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 25ee7e2 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 79068d3 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 74b3ee0 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- a75c0c4 add python comments for external models (ziqi-jin)
- 985d273 modify resnet c++ comments (ziqi-jin)
- e32a25c modify C++ comments for external models (ziqi-jin)
- 8a73af6 modify python comments and add result class comments (ziqi-jin)
- 2aa7939 Merge branch 'develop' into doc_python (jiangjiajun)
- 887c53a Merge branch 'develop' into doc_python (jiangjiajun)
- 963b9b9 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 337e8c0 fix comments compile error (ziqi-jin)
- d1d6890 modify result.h comments (ziqi-jin)
- 67234dd Merge branch 'develop' into doc_python (jiangjiajun)
- 440e2a9 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- ac35141 Merge branch 'doc_python' into develop (ziqi-jin)
- 3d83785 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 363a485 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- dc44eac Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 07717b4 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 33b4c62 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- f911f3b Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- ebb9365 Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- 0c60494 c++ version for FaceLandmark1000 (ziqi-jin)
- 0ac31bd Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- b2c068e add pipnet land1000 sigle test and python code (ziqi-jin)
- 84083c1 fix facelandmark1000 sigle test (ziqi-jin)
- 83c726f fix python examples for PIPNet and FaceLandmark1000 (ziqi-jin)
- 96e1783 fix examples links for PIPNet and FaceLandmark1000 (ziqi-jin)
- 2d71fcf modify test_vision_colorspace_convert.cc (ziqi-jin)
- a2c30bd modify facealign readme (ziqi-jin)
- dce7000 retrigger ci (ziqi-jin)
- f6a0f8e modify README (ziqi-jin)
- c7ee59c test ci (ziqi-jin)
- 661a1ef Merge branch 'PaddlePaddle:develop' into develop (ziqi-jin)
- e6b28f2 Merge pull request #40 from ziqi-jin/develop (ziqi-jin)
- 87d0c26 fix download_prebuilt_libraries.md (ziqi-jin)
- 52e47d6 fix download_prebuilt_libraries.md (ziqi-jin)
- 03ae82d modify for comments (ziqi-jin)
- 1960ee7 modify supported_num_landmarks (ziqi-jin)
- 6fc8b14 retrigger ci (ziqi-jin)
- 947fe9b check code style (ziqi-jin)
- 20b3fa3 check code style (ziqi-jin)
Files changed
**README.md** (25 additions, 0 deletions)

# FaceLandmark Model Deployment

## Model Version Notes

- [FaceLandmark1000](https://github.com/Single430/FaceLandmark1000/tree/1a951b6)

## Supported Models

FastDeploy currently supports deployment of the following models:

- [FaceLandmark1000 model](https://github.com/Single430/FaceLandmark1000)

## Download Pre-trained Models

For developers' convenience, the exported FaceLandmark models are provided below and can be downloaded and used directly.

| Model | Parameter Size | Accuracy | Notes |
|:------|:---------------|:---------|:------|
| [FaceLandmark1000](https://bj.bcebos.com/paddlehub/fastdeploy/FaceLandmark1000.onnx) | 2.1 MB | - | |

## Detailed Deployment Docs

- [Python deployment](python)
- [C++ deployment](cpp)
**examples/vision/facealign/face_landmark_1000/cpp/CMakeLists.txt** (14 additions, 0 deletions)

```cmake
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Specify the path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")
include(${FASTDEPLOY_INSTALL_DIR}/utils/gflags.cmake)
include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link the FastDeploy library dependencies
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS} gflags pthread)
```
**examples/vision/facealign/face_landmark_1000/cpp/README.md** (84 additions, 0 deletions)

# FaceLandmark1000 C++ Deployment Example

This directory provides `infer.cc`, a quick example of deploying FaceLandmark1000 on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The hardware and software environment meets the requirements; see [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the prebuilt deployment library and samples code for your development environment; see [FastDeploy prebuilt libraries](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to complete the build and test. FastDeploy version 0.6.0 or above (x.x.x >= 0.6.0) is required for FaceLandmark1000 support.

```bash
mkdir build
cd build
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j

# Download the officially converted FaceLandmark1000 model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/FaceLandmark1000.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/facealign_input.png

# CPU inference
./infer_demo --model FaceLandmark1000.onnx --image facealign_input.png --device cpu
# GPU inference
./infer_demo --model FaceLandmark1000.onnx --image facealign_input.png --device gpu
# TensorRT inference on GPU
./infer_demo --model FaceLandmark1000.onnx --image facealign_input.png --device gpu --backend trt
```

The visualized result looks like this:

<div width="500">
<img width="470" height="384" float="left" src="https://user-images.githubusercontent.com/67993288/200761309-90c096e2-c2f3-4140-8012-32ed84e5f389.jpg">
</div>

The commands above apply only to Linux or macOS. For using the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## FaceLandmark1000 C++ Interface

### FaceLandmark1000 Class

```c++
fastdeploy::vision::facealign::FaceLandmark1000(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const ModelFormat& model_format = ModelFormat::ONNX)
```

Loads and initializes a FaceLandmark1000 model, where model_file is the exported ONNX model.

**Parameters**

> * **model_file**(str): Path to the model file
> * **params_file**(str): Path to the parameters file; pass an empty string when the model format is ONNX
> * **runtime_option**(RuntimeOption): Backend inference configuration; defaults to None, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format; ONNX by default

#### Predict Function

> ```c++
> FaceLandmark1000::Predict(cv::Mat* im, FaceAlignmentResult* result)
> ```
>
> Model prediction interface; takes an input image and directly outputs the landmark results.
>
> **Parameters**
>
> > * **im**: Input image; note it must be in HWC, BGR format
> > * **result**: Landmark results; see [Vision model prediction results](../../../../../docs/api/vision_results/) for a description of FaceAlignmentResult

### Class Member Variables

Users can modify the following preprocessing parameter according to their needs, which affects the final inference and deployment result:

> > * **size**(vector<int>): Modifies the resize dimensions used during preprocessing; contains two integers representing [width, height]; default is [128, 128]

- [Model introduction](../../)
- [Python deployment](../python)
- [Vision model prediction results](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)
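For readers who want to go beyond a single command-line run, the following is a minimal sketch (not part of the PR) of reusing one model instance across several images with the constructor and `Predict` interface documented above. The image file names are placeholders, error handling is kept to a minimum, and the default CPU runtime option is used.

```c++
#include <iostream>
#include <string>
#include <vector>

#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;  // default backend on CPU

  // Constructor as documented: params_file stays empty for an ONNX model.
  auto model = fastdeploy::vision::facealign::FaceLandmark1000(
      "FaceLandmark1000.onnx", "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  // Placeholder image list; the model object is reused across predictions.
  std::vector<std::string> images = {"face_0.png", "face_1.png"};
  for (const auto& path : images) {
    auto im = cv::imread(path);
    auto im_vis = im.clone();  // Predict may modify its input, as in infer.cc
    fastdeploy::vision::FaceAlignmentResult res;
    if (!model.Predict(&im, &res)) {
      std::cerr << "Failed to predict " << path << std::endl;
      continue;
    }
    // VisFaceAlignment is the same helper used by infer.cc below.
    cv::imwrite("vis_" + path, fastdeploy::vision::VisFaceAlignment(im_vis, res));
  }
  return 0;
}
```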
**examples/vision/facealign/face_landmark_1000/cpp/infer.cc** (110 additions, 0 deletions)

```c++
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"
#include "gflags/gflags.h"

DEFINE_string(model, "", "Directory of the inference model.");
DEFINE_string(image, "", "Path of the image file.");
DEFINE_string(device, "cpu",
              "Type of inference device, support 'cpu' or 'gpu'.");
DEFINE_string(backend, "default",
              "The inference runtime backend, support: ['default', 'ort', "
              "'paddle', 'ov', 'trt', 'paddle_trt']");
DEFINE_bool(use_fp16, false,
            "Whether to use FP16 mode, only support 'trt' and 'paddle_trt' backend");

void PrintUsage() {
  std::cout << "Usage: infer_demo --model model_path --image img_path --device [cpu|gpu] --backend "
               "[default|ort|paddle|ov|trt|paddle_trt] "
               "--use_fp16 false"
            << std::endl;
  std::cout << "Default value of device: cpu" << std::endl;
  std::cout << "Default value of backend: default" << std::endl;
  std::cout << "Default value of use_fp16: false" << std::endl;
}

bool CreateRuntimeOption(fastdeploy::RuntimeOption* option) {
  if (FLAGS_device == "gpu") {
    option->UseGpu();
    if (FLAGS_backend == "ort") {
      option->UseOrtBackend();
    } else if (FLAGS_backend == "paddle") {
      option->UsePaddleBackend();
    } else if (FLAGS_backend == "trt" || FLAGS_backend == "paddle_trt") {
      option->UseTrtBackend();
      option->SetTrtInputShape("input", {1, 3, 128, 128});
      if (FLAGS_backend == "paddle_trt") {
        option->EnablePaddleToTrt();
      }
      if (FLAGS_use_fp16) {
        option->EnableTrtFP16();
      }
    } else if (FLAGS_backend == "default") {
      return true;
    } else {
      std::cout << "While inference with GPU, only support default/ort/paddle/trt/paddle_trt now, "
                << FLAGS_backend << " is not supported." << std::endl;
      return false;
    }
  } else if (FLAGS_device == "cpu") {
    if (FLAGS_backend == "ort") {
      option->UseOrtBackend();
    } else if (FLAGS_backend == "ov") {
      option->UseOpenVINOBackend();
    } else if (FLAGS_backend == "paddle") {
      option->UsePaddleBackend();
    } else if (FLAGS_backend == "default") {
      return true;
    } else {
      std::cout << "While inference with CPU, only support default/ort/ov/paddle now, "
                << FLAGS_backend << " is not supported." << std::endl;
      return false;
    }
  } else {
    std::cerr << "Only support device CPU/GPU now, " << FLAGS_device
              << " is not supported." << std::endl;
    return false;
  }

  return true;
}

int main(int argc, char* argv[]) {
  google::ParseCommandLineFlags(&argc, &argv, true);
  auto option = fastdeploy::RuntimeOption();
  if (!CreateRuntimeOption(&option)) {
    PrintUsage();
    return -1;
  }

  auto model = fastdeploy::vision::facealign::FaceLandmark1000(FLAGS_model, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  auto im = cv::imread(FLAGS_image);
  auto im_bak = im.clone();

  fastdeploy::vision::FaceAlignmentResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisFaceAlignment(im_bak, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;

  return 0;
}
```
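Beyond printing `res.Str()` and saving the visualization, downstream code usually needs the raw points. The helper below is a sketch only: it assumes `FaceAlignmentResult` exposes its points as a `landmarks` vector of (x, y) pairs, as the vision-results documentation linked from the README describes; it is not code from the PR.

```c++
#include <algorithm>
#include <iostream>

#include "fastdeploy/vision.h"

// Print the landmark count and their bounding box (assumes a `landmarks`
// member holding (x, y) pairs, per the vision results docs).
void PrintLandmarkStats(const fastdeploy::vision::FaceAlignmentResult& res) {
  if (res.landmarks.empty()) {
    std::cout << "No landmarks detected." << std::endl;
    return;
  }
  float min_x = res.landmarks[0][0], max_x = res.landmarks[0][0];
  float min_y = res.landmarks[0][1], max_y = res.landmarks[0][1];
  for (const auto& pt : res.landmarks) {
    min_x = std::min(min_x, pt[0]);
    max_x = std::max(max_x, pt[0]);
    min_y = std::min(min_y, pt[1]);
    max_y = std::max(max_y, pt[1]);
  }
  std::cout << res.landmarks.size() << " landmarks, bounding box: ["
            << min_x << ", " << min_y << "] - [" << max_x << ", " << max_y
            << "]" << std::endl;
}
```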
**examples/vision/facealign/face_landmark_1000/python/README.md** (71 additions, 0 deletions)

# FaceLandmark1000 Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The hardware and software environment meets the requirements; see [FastDeploy environment requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Install the FastDeploy Python wheel package; see [FastDeploy Python installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

This directory provides `infer.py`, a quick example of deploying FaceLandmark1000 on CPU/GPU, as well as on GPU with TensorRT acceleration. FastDeploy version >= 0.6.0 is required for FaceLandmark1000 support. Run the following script to complete the deployment:

```bash
# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/facealign/face_landmark_1000/python

# Download the FaceLandmark1000 model file and a test image
## Original ONNX model
wget https://bj.bcebos.com/paddlehub/fastdeploy/FaceLandmark1000.onnx
wget https://bj.bcebos.com/paddlehub/fastdeploy/facealign_input.png

# CPU inference
python infer.py --model FaceLandmark1000.onnx --image facealign_input.png --device cpu
# GPU inference
python infer.py --model FaceLandmark1000.onnx --image facealign_input.png --device gpu
# TensorRT inference
python infer.py --model FaceLandmark1000.onnx --image facealign_input.png --device gpu --backend trt
```

The visualized result looks like this:

<div width="500">
<img width="470" height="384" float="left" src="https://user-images.githubusercontent.com/67993288/200761309-90c096e2-c2f3-4140-8012-32ed84e5f389.jpg">
</div>

## FaceLandmark1000 Python Interface

```python
fd.vision.facealign.FaceLandmark1000(model_file, params_file=None, runtime_option=None, model_format=ModelFormat.ONNX)
```

Loads and initializes a FaceLandmark1000 model, where model_file is the exported ONNX model.

**Parameters**

> * **model_file**(str): Path to the model file
> * **params_file**(str): Path to the parameters file; not needed when the model format is ONNX
> * **runtime_option**(RuntimeOption): Backend inference configuration; defaults to None, i.e., the default configuration is used
> * **model_format**(ModelFormat): Model format; ONNX by default

### predict Function

> ```python
> FaceLandmark1000.predict(input_image)
> ```
>
> Model prediction interface; takes an input image and directly outputs the landmark coordinates.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): Input data; note it must be in HWC, BGR format

> **Returns**
>
> > Returns a `fastdeploy.vision.FaceAlignmentResult` structure; see [Vision model prediction results](../../../../../docs/api/vision_results/) for details

## Other Documents

- [FaceLandmark1000 model introduction](..)
- [FaceLandmark1000 C++ deployment](../cpp)
- [Model prediction result description](../../../../../docs/api/vision_results/)
- [How to switch the model inference backend](../../../../../docs/cn/faq/how_to_change_backend.md)
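The `infer.py` script referenced above is part of the PR but not reproduced on this page, so the following is only a minimal sketch of driving the documented Python interface for a single image on GPU, with argument parsing omitted. The `fd.RuntimeOption`/`use_gpu` calls and the `vis_face_alignment` visualization helper are assumptions drawn from FastDeploy's other Python examples, not from this page.

```python
# Minimal sketch of an infer.py-style script (not the PR's actual infer.py).
import cv2
import fastdeploy as fd

# Device configuration; use_gpu() is assumed here, drop it for CPU inference.
option = fd.RuntimeOption()
option.use_gpu()

# Constructor as documented above: params_file can be omitted for an ONNX model.
model = fd.vision.facealign.FaceLandmark1000(
    "FaceLandmark1000.onnx", runtime_option=option)

# predict() takes an HWC, BGR image and returns a FaceAlignmentResult.
im = cv2.imread("facealign_input.png")
result = model.predict(im)
print(result)

# vis_face_alignment is assumed to mirror the C++ VisFaceAlignment helper.
vis_im = fd.vision.vis_face_alignment(im, result)
cv2.imwrite("vis_result.jpg", vis_im)
print("Visualized result saved in ./vis_result.jpg")
```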