Commit 7e00c5f

Authored by: ziqi-jin, jiangjiajun, root, DefTruth, felixhjh
Modify PPMatting backend and docs (PaddlePaddle#182)
* first commit for yolov7
* pybind for yolov7
* CPP README.md
* modified yolov7.cc
* python file modify
* delete license in fastdeploy/
* repush the conflict part
* README.md modified
* file path modified
* README modified
* move some helpers to private
* add examples for yolov7
* api.md modified
* YOLOv7
* yolov7 release link
* copyright
* change some helpers to private
* change variables to const and fix documents
* gitignore
* Transfer some functions to private members of class
* Merge from develop (#9):
  * Fix compile problem in different python version (PaddlePaddle#26)
  * fix some usage problem in linux
  * Add PaddleDetection/PPYOLOE model support (PaddlePaddle#22): add ppdet/ppyoloe, demo code and documents
  * add convert processor to vision (PaddlePaddle#27)
  * update .gitignore
  * Added checking for cmake include dir
  * fixed missing trt_backend option bug when init from trt
  * remove un-needed data layout and add pre-check for dtype
  * changed RGB2BRG to BGR2RGB in ppcls model
  * add model_zoo yolov6 c++/python demo
  * fixed CMakeLists.txt typos
  * update yolov6 cpp/README.md
  * add yolox c++/pybind and model_zoo demo
  * move some helpers to private
  * add normalize with alpha and beta
  * add version notes for yolov5/yolov6/yolox
  * add copyright to yolov5.cc
  * revert normalize
  * fixed some bugs in yolox
  * fixed examples/CMakeLists.txt to avoid conflicts
  * format examples/CMakeLists summary
  * Fix bug while the inference result is empty with YOLOv5 (PaddlePaddle#29)
  * Add multi-label function for yolov5
  * Update README.md: update doc
  * Update fastdeploy_runtime.cc: fix wrong name of variable option.trt_max_shape
  * Update runtime_option.md: update resnet model dynamic shape setting name from images to x
  * Fix bug when inference result boxes are empty
  * Delete detection.py
* first commit for yolor
* for merge
* Develop (PaddlePaddle#11), Yolor (PaddlePaddle#16), Develop (PaddlePaddle#13), Develop (PaddlePaddle#14): merges repeating the develop change list above
* documents
* add is_dynamic for YOLO series (PaddlePaddle#22)
* modify ppmatting backend and docs
* modify ppmatting docs
* fix the PPMatting size problem
* fix LimitShort's log
* retrigger ci
* modify PPMatting docs
* modify the way for dealing with LimitShort

Co-authored-by: Jason <jiangjiajun@baidu.com>
Co-authored-by: root <root@bjyz-sys-gpu-kongming3.bjyz.baidu.com>
Co-authored-by: DefTruth <31974251+DefTruth@users.noreply.github.com>
Co-authored-by: huangjianhui <852142024@qq.com>
Co-authored-by: Jason <928090362@qq.com>
1 parent f96fdad commit 7e00c5f

13 files changed

Lines changed: 193 additions & 12 deletions

csrc/fastdeploy/vision/common/processors/limit_short.h

Lines changed: 1 addition & 0 deletions
@@ -34,6 +34,7 @@ class LimitShort : public Processor {
   static bool Run(Mat* mat, int max_short = -1, int min_short = -1,
                   ProcLib lib = ProcLib::OPENCV_CPU);
+  int GetMaxShort() { return max_short_; }

  private:
   int max_short_;
csrc/fastdeploy/vision/common/processors/resize_by_long.cc

Lines changed: 78 additions & 0 deletions

@@ -0,0 +1,78 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#include "fastdeploy/vision/common/processors/resize_by_long.h"
+
+namespace fastdeploy {
+namespace vision {
+
+bool ResizeByLong::CpuRun(Mat* mat) {
+  cv::Mat* im = mat->GetCpuMat();
+  int origin_w = im->cols;
+  int origin_h = im->rows;
+  double scale = GenerateScale(origin_w, origin_h);
+  if (use_scale_) {
+    cv::resize(*im, *im, cv::Size(), scale, scale, interp_);
+  } else {
+    int width = static_cast<int>(round(scale * im->cols));
+    int height = static_cast<int>(round(scale * im->rows));
+    cv::resize(*im, *im, cv::Size(width, height), 0, 0, interp_);
+  }
+  mat->SetWidth(im->cols);
+  mat->SetHeight(im->rows);
+  return true;
+}
+
+#ifdef ENABLE_OPENCV_CUDA
+bool ResizeByLong::GpuRun(Mat* mat) {
+  cv::cuda::GpuMat* im = mat->GetGpuMat();
+  int origin_w = im->cols;
+  int origin_h = im->rows;
+  double scale = GenerateScale(origin_w, origin_h);
+  im->convertTo(*im, CV_32FC(im->channels()));
+  if (use_scale_) {
+    cv::cuda::resize(*im, *im, cv::Size(), scale, scale, interp_);
+  } else {
+    int width = static_cast<int>(round(scale * im->cols));
+    int height = static_cast<int>(round(scale * im->rows));
+    cv::cuda::resize(*im, *im, cv::Size(width, height), 0, 0, interp_);
+  }
+  mat->SetWidth(im->cols);
+  mat->SetHeight(im->rows);
+  return true;
+}
+#endif
+
+double ResizeByLong::GenerateScale(const int origin_w, const int origin_h) {
+  int im_size_max = std::max(origin_w, origin_h);
+  int im_size_min = std::min(origin_w, origin_h);
+  double scale = 1.0f;
+  if (target_size_ == -1) {
+    if (im_size_max > max_size_) {
+      scale = static_cast<double>(max_size_) / static_cast<double>(im_size_max);
+    }
+  } else {
+    scale =
+        static_cast<double>(target_size_) / static_cast<double>(im_size_max);
+  }
+  return scale;
+}
+
+bool ResizeByLong::Run(Mat* mat, int target_size, int interp, bool use_scale,
+                       int max_size, ProcLib lib) {
+  auto r = ResizeByLong(target_size, interp, use_scale, max_size);
+  return r(mat, lib);
+}
+}  // namespace vision
+}  // namespace fastdeploy
csrc/fastdeploy/vision/common/processors/resize_by_long.h

Lines changed: 49 additions & 0 deletions

@@ -0,0 +1,49 @@
+// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+#pragma once
+
+#include "fastdeploy/vision/common/processors/base.h"
+
+namespace fastdeploy {
+namespace vision {
+
+class ResizeByLong : public Processor {
+ public:
+  ResizeByLong(int target_size, int interp = 1, bool use_scale = true,
+               int max_size = -1) {
+    target_size_ = target_size;
+    max_size_ = max_size;
+    interp_ = interp;
+    use_scale_ = use_scale;
+  }
+  bool CpuRun(Mat* mat);
+#ifdef ENABLE_OPENCV_CUDA
+  bool GpuRun(Mat* mat);
+#endif
+  std::string Name() { return "ResizeByLong"; }
+
+  static bool Run(Mat* mat, int target_size, int interp = 1,
+                  bool use_scale = true, int max_size = -1,
+                  ProcLib lib = ProcLib::OPENCV_CPU);
+
+ private:
+  double GenerateScale(const int origin_w, const int origin_h);
+  int target_size_;
+  int max_size_;
+  int interp_;
+  bool use_scale_;
+};
+}  // namespace vision
+}  // namespace fastdeploy

csrc/fastdeploy/vision/common/processors/transform.h

Lines changed: 1 addition & 0 deletions
@@ -24,6 +24,7 @@
 #include "fastdeploy/vision/common/processors/pad.h"
 #include "fastdeploy/vision/common/processors/pad_to_size.h"
 #include "fastdeploy/vision/common/processors/resize.h"
+#include "fastdeploy/vision/common/processors/resize_by_long.h"
 #include "fastdeploy/vision/common/processors/resize_by_short.h"
 #include "fastdeploy/vision/common/processors/resize_to_int_mult.h"
 #include "fastdeploy/vision/common/processors/stride_pad.h"

csrc/fastdeploy/vision/matting/ppmatting/ppmatting.cc

Lines changed: 51 additions & 3 deletions
@@ -26,8 +26,8 @@ PPMatting::PPMatting(const std::string& model_file,
                      const RuntimeOption& custom_option,
                      const Frontend& model_format) {
   config_file_ = config_file;
-  valid_cpu_backends = {Backend::PDINFER, Backend::ORT};
-  valid_gpu_backends = {Backend::PDINFER, Backend::ORT, Backend::TRT};
+  valid_cpu_backends = {Backend::ORT, Backend::PDINFER};
+  valid_gpu_backends = {Backend::PDINFER, Backend::TRT};
   runtime_option = custom_option;
   runtime_option.model_format = model_format;
   runtime_option.model_file = model_file;
@@ -74,6 +74,11 @@ bool PPMatting::BuildPreprocessPipelineFromConfig() {
       if (op["min_short"]) {
         min_short = op["min_short"].as<int>();
       }
+      FDINFO << "Detected LimitShort processing step in yaml file, if the "
+                "model is exported from PaddleSeg, please make sure the "
+                "input of your model is fixed with a square shape, and "
+                "greater than or equal to "
+             << max_short << "." << std::endl;
       processors_.push_back(
           std::make_shared<LimitShort>(max_short, min_short));
     } else if (op["type"].as<std::string>() == "ResizeToIntMult") {
@@ -92,6 +97,19 @@ bool PPMatting::BuildPreprocessPipelineFromConfig() {
         std = op["std"].as<std::vector<float>>();
       }
       processors_.push_back(std::make_shared<Normalize>(mean, std));
+    } else if (op["type"].as<std::string>() == "ResizeByLong") {
+      int target_size = op["target_size"].as<int>();
+      processors_.push_back(std::make_shared<ResizeByLong>(target_size));
+    } else if (op["type"].as<std::string>() == "Pad") {
+      // size: (w, h)
+      auto size = op["size"].as<std::vector<int>>();
+      std::vector<float> value = {127.5, 127.5, 127.5};
+      if (op["fill_value"]) {
+        value = op["fill_value"].as<std::vector<float>>();
+      }
+      processors_.push_back(std::make_shared<Cast>("float"));
+      processors_.push_back(
+          std::make_shared<PadToSize>(size[1], size[0], value));
     }
   }
   processors_.push_back(std::make_shared<HWC2CHW>());
@@ -102,11 +120,30 @@ bool PPMatting::BuildPreprocessPipelineFromConfig() {
 bool PPMatting::Preprocess(Mat* mat, FDTensor* output,
                            std::map<std::string, std::array<int, 2>>* im_info) {
   for (size_t i = 0; i < processors_.size(); ++i) {
+    if (processors_[i]->Name().compare("LimitShort") == 0) {
+      int input_h = static_cast<int>(mat->Height());
+      int input_w = static_cast<int>(mat->Width());
+      auto processor = dynamic_cast<LimitShort*>(processors_[i].get());
+      int max_short = processor->GetMaxShort();
+      if (runtime_option.backend != Backend::PDINFER) {
+        if (input_w != input_h || input_h < max_short || input_w < max_short) {
+          FDWARNING << "Detected LimitShort processing step in yaml file and "
+                       "the input image does not meet its requirement, "
+                       "FastDeploy will resize the input image into ("
+                    << max_short << "," << max_short << ")." << std::endl;
+          Resize::Run(mat, max_short, max_short);
+        }
+      }
+    }
     if (!(*(processors_[i].get()))(mat)) {
       FDERROR << "Failed to process image data in " << processors_[i]->Name()
               << "." << std::endl;
       return false;
     }
+    if (processors_[i]->Name().compare("ResizeByLong") == 0) {
+      (*im_info)["resize_by_long"] = {static_cast<int>(mat->Height()),
+                                      static_cast<int>(mat->Width())};
+    }
   }

   // Record output shape of preprocessed image
@@ -135,6 +172,7 @@ bool PPMatting::Postprocess(
   // Get alpha first and resize it (using OpenCV)
   auto iter_ipt = im_info.find("input_shape");
   auto iter_out = im_info.find("output_shape");
+  auto resize_by_long = im_info.find("resize_by_long");
   FDASSERT(iter_out != im_info.end() && iter_ipt != im_info.end(),
            "Cannot find input_shape or output_shape from im_info.");
   int out_h = iter_out->second[0];
@@ -145,7 +183,17 @@ bool PPMatting::Postprocess(
   // TODO: switch to FDTensor or Mat arithmetic; this currently depends on cv::Mat
   float* alpha_ptr = static_cast<float*>(alpha_tensor.Data());
   cv::Mat alpha_zero_copy_ref(out_h, out_w, CV_32FC1, alpha_ptr);
-  Mat alpha_resized(alpha_zero_copy_ref);  // ref-only, zero copy.
+  cv::Mat cropped_alpha;
+  if (resize_by_long != im_info.end()) {
+    int resize_h = resize_by_long->second[0];
+    int resize_w = resize_by_long->second[1];
+    alpha_zero_copy_ref(cv::Rect(0, 0, resize_w, resize_h))
+        .copyTo(cropped_alpha);
+  } else {
+    cropped_alpha = alpha_zero_copy_ref;
+  }
+  Mat alpha_resized(cropped_alpha);  // ref-only, zero copy.
+
   if ((out_h != ipt_h) || (out_w != ipt_w)) {
     // already allocated a new continuous memory after resize.
     // cv::resize(alpha_resized, alpha_resized, cv::Size(ipt_w, ipt_h));

csrc/fastdeploy/vision/matting/ppmatting/ppmatting.h

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ class FASTDEPLOY_DECL PPMatting : public FastDeployModel {
                       const RuntimeOption& custom_option = RuntimeOption(),
                       const Frontend& model_format = Frontend::PADDLE);

-  std::string ModelName() const { return "PaddleMat"; }
+  std::string ModelName() const { return "PaddleMatting"; }

   virtual bool Predict(cv::Mat* im, MattingResult* result);
examples/vision/matting/modnet/python/README.md

Lines changed: 1 addition & 0 deletions
@@ -33,6 +33,7 @@ python infer.py --model modnet_photographic_portrait_matting.onnx --image mattin
 <img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186852116-cf91445b-3a67-45d9-a675-c69fe77c383a.jpg">
 <img width="200" height="200" float="left" src="https://user-images.githubusercontent.com/67993288/186851964-4c9086b9-3490-4fcb-82f9-2106c63aa4f3.jpg">
 </div>
+
 ## MODNet Python Interface

 ```python

examples/vision/matting/ppmatting/README.md

Lines changed: 3 additions & 4 deletions
@@ -15,19 +15,18 @@

 Before deployment, PPMatting needs to be exported as a deployment model; see [Export Model](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.6/Matting) for the export steps

-Note: do not perform NMS removal when exporting the model; just export normally.

 ## Download Pre-trained Models

 For developers' convenience, exported models of each PPMatting series are provided below for direct download and use.

-The accuracy metrics come from the model descriptions in PPMatting; see the notes in PPMatting for details.
+The accuracy metrics come from the model descriptions in PPMatting (no accuracy data provided); see the notes in PPMatting for details.


 | Model | Parameter Size | Accuracy | Notes |
 |:---------------------------------------------------------------- |:----- |:----- | :------ |
-| [PPMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 87MB | - |
-| [PPMatting](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 87MB | - |
+| [PPMatting-512](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-512.tgz) | 87MB | - |
+| [PPMatting-1024](https://bj.bcebos.com/paddlehub/fastdeploy/PP-Matting-1024.tgz) | 87MB | - |
examples/vision/matting/ppmatting/cpp/README.md

Lines changed: 1 addition & 1 deletion
@@ -27,7 +27,7 @@ wget https://bj.bcebos.com/paddlehub/fastdeploy/matting_bgr.jpg

 # CPU inference
 ./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 0
-# GPU inference (TODO: ORT-GPU inference reports an error)
+# GPU inference
 ./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 1
 # TensorRT inference on GPU
 ./infer_demo PP-Matting-512 matting_input.jpg matting_bgr.jpg 2

examples/vision/matting/ppmatting/cpp/infer.cc

Lines changed: 3 additions & 1 deletion
@@ -25,8 +25,9 @@ void CpuInfer(const std::string& model_dir, const std::string& image_file,
   auto model_file = model_dir + sep + "model.pdmodel";
   auto params_file = model_dir + sep + "model.pdiparams";
   auto config_file = model_dir + sep + "deploy.yaml";
+  auto option = fastdeploy::RuntimeOption();
   auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
-                                                      config_file);
+                                                      config_file, option);
   if (!model.Initialized()) {
     std::cerr << "Failed to initialize." << std::endl;
     return;
@@ -58,6 +59,7 @@ void GpuInfer(const std::string& model_dir, const std::string& image_file,

   auto option = fastdeploy::RuntimeOption();
   option.UseGpu();
+  option.UsePaddleBackend();
   auto model = fastdeploy::vision::matting::PPMatting(model_file, params_file,
                                                       config_file, option);
   if (!model.Initialized()) {
