
Conversation

@rainyfly
Collaborator

PR types

Model

Description

  1. Add the style transfer models from PaddleHub to the FastDeploy model repository.

@rainyfly changed the title [Model] add style transfer model (do not merge, examples to be added) [Model] add style transfer model Dec 22, 2022
@DefTruth self-requested a review December 22, 2022 11:15
@jiangjiajun requested a review from heliqi December 26, 2022 07:17
@heliqi (Collaborator) left a comment

LGTM

const ModelFormat& model_format) {

valid_cpu_backends = {Backend::PDINFER};
valid_gpu_backends = {Backend::PDINFER, Backend::TRT};
Member

I see that the GPU backend supports TRT, which in theory means the model can be converted to ONNX via paddle2onnx. Should the ORT and OpenVINO backends also be tested accordingly? If they are supported, performance is usually quite good.
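
For reference, a rough sketch of how those backends could be exercised through RuntimeOption when running the example; this assumes the usual fastdeploy::RuntimeOption API and the common "fastdeploy/vision.h" include, and it leaves out the model construction since that class is defined in this PR:

```cpp
#include "fastdeploy/vision.h"

int main() {
  fastdeploy::RuntimeOption option;
  option.UseCpu();
  // Route inference through ONNX Runtime (this relies on the model
  // converting cleanly via paddle2onnx).
  option.UseOrtBackend();
  // Or try OpenVINO on CPU instead:
  // option.UseOpenVINOBackend();
  // Pass `option` into the style-transfer model's constructor and call
  // Predict() as in the example to compare backend performance.
  return 0;
}
```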

Collaborator Author

The GPU backend does not support TRT for now; it raises an error, so it has been removed. The cause of the error has been filed as an iCafe card. At the moment, anything that requires conversion to ONNX fails with an error.

|[animegan_v2_paprika_74](https://www.paddlepaddle.org.cn/hubdetail?name=animegan_v2_paprika_74&en_category=GANs)|Converts the input image into the anime style of Satoshi Kon's film Paprika; the model weights are converted from the official AnimeGAN V2 open-source project|paddle|
|[animegan_v2_paprika_98](https://www.paddlepaddle.org.cn/hubdetail?name=animegan_v2_paprika_98&en_category=GANs)|Converts the input image into the anime style of Satoshi Kon's film Paprika; the model weights are converted from the official AnimeGAN V2 open-source project|paddle|

## Speed comparison (ips) between the FastDeploy Paddle backend deployment and PaddleHub
Member

For this speed metric (ips), is higher better? Consider adding a note so that users are not confused when they see it.

Collaborator Author

done

results->resize(shape[0]);
float* infer_result_data = reinterpret_cast<float*>(output_tensor.Data());
for(size_t i = 0; i < results->size(); ++i){
float* data = new float[shape[1]*shape[2]*3];
Member

The memory allocated with new here is never freed, which causes a memory leak. I suggest switching to std::vector and specifying the element count at construction to avoid the leak.
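
A minimal sketch of the suggested change, assuming the per-image element count is shape[1] * shape[2] * 3 as in the excerpt above; how the original code fills the buffer is not shown here, so the memcpy is only illustrative:

```cpp
#include <cstring>  // std::memcpy (illustrative; the original fill step is not shown)
#include <vector>

// std::vector owns exactly shape[1] * shape[2] * 3 floats and releases them
// automatically when it goes out of scope, so there is no delete[] to forget.
std::vector<float> data(shape[1] * shape[2] * 3);
std::memcpy(data.data(), infer_result_data + i * data.size(),
            data.size() * sizeof(float));
// Pass data.data() wherever the old new[]-ed pointer was used.
Mat result_mat = Mat::Create(shape[1], shape[2], 3, FDDataType::FP32, data.data());
```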

Collaborator Author

done

Mat fd_image = WrapMat(res);
BGR2RGB::Run(&fd_image);
res = *(fd_image.GetOpenCVMat());
res.copyTo(results->at(i));
Member

Also, the code inside this loop does not appear to be indented; please check the code style.

Collaborator Author

done

Mat result_mat = Mat::Create(shape[1], shape[2], 3, FDDataType::FP32, data);
std::vector<float> mean{127.5f, 127.5f, 127.5f};
std::vector<float> std{127.5f, 127.5f, 127.5f};
Convert::Run(&result_mat, mean, std);
Member

Here you can use Mat directly to create a zero-copy reference to the FP32 data and then convert it to uint8. There is no need to create an intermediate data buffer, which introduces extra memory allocation and copying.

Mat result_mat = Mat::Create(shape[1], shape[2], 3, FDDataType::FP32, infer_result_data+i*size);
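
Expanding that suggestion into a small sketch: it assumes size equals shape[1] * shape[2] * 3 and that Convert applies y = alpha * x + beta, as the surrounding call suggests; the uint8 step here goes through OpenCV's convertTo rather than any specific FastDeploy processor:

```cpp
// Wrap the i-th image of the inference output without copying the FP32 data.
int64_t size = shape[1] * shape[2] * 3;
Mat result_mat = Mat::Create(shape[1], shape[2], 3, FDDataType::FP32,
                             infer_result_data + i * size);
// Map [-1, 1] back to [0, 255] in place: x * 127.5 + 127.5.
Convert::Run(&result_mat, {127.5f, 127.5f, 127.5f}, {127.5f, 127.5f, 127.5f});
// Materialize the final 8-bit image only at the end.
cv::Mat out;
result_mat.GetOpenCVMat()->convertTo(out, CV_8UC3);
```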

Collaborator Author

done

void BindKeyPointDetection(pybind11::module& m);
void BindHeadPose(pybind11::module& m);
void BindSR(pybind11::module& m);
void BindGenerationModel(pybind11::module& m);
Member

Please rename this to BindGeneration, to keep it consistent with how the other modules are written.
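
For illustration only, a hypothetical sketch of the renamed declaration alongside its siblings, plus a registration call in the usual pybind11 style; the registration function shown here is not taken from the repository:

```cpp
#include <pybind11/pybind11.h>

// Forward declarations follow the existing Bind<Task> naming pattern;
// BindGeneration replaces BindGenerationModel for consistency.
void BindKeyPointDetection(pybind11::module& m);
void BindHeadPose(pybind11::module& m);
void BindSR(pybind11::module& m);
void BindGeneration(pybind11::module& m);

// Hypothetical registration point: each binder is invoked once on the module.
void BindVisionModules(pybind11::module& m) {
  BindKeyPointDetection(m);
  BindHeadPose(m);
  BindSR(m);
  BindGeneration(m);
}
```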

Collaborator Author

done

:param runtime_option: (fastdeploy.RuntimeOption) RuntimeOption for inference of this model; if it is None, the default backend on CPU will be used
:param model_format: (fastdeploy.ModelFormat) Model format of the loaded model
"""
# 调用基函数进行backend_option的初始化 (i.e. call the base class function to initialize backend_option)
Member

Remove the Chinese comments or change them to English; the same applies to the other occurrences.

Collaborator Author

done

@DefTruth requested a review from jiangjiajun December 30, 2022 08:59
@jiangjiajun merged commit 87bcb5d into PaddlePaddle:develop Jan 3, 2023
DefTruth added a commit to DefTruth/FastDeploy that referenced this pull request Jan 3, 2023
* [Backend] Update paddle inference version (PaddlePaddle#990)

2.4-dev3 -> 2.4-dev4

* Add uie benchmark

* fix trt dy shape

* update uie benchmark

* Update uie benchmark output

* Fix cpu_num_thread->cpu_num_threads

* Update backend name

* [Model] Update PPSeg Preprocess (PaddlePaddle#1007)

* Update PPSeg pybind and python

* Update PPSeg pybind and python

* [Model] Update PPDet Preprocess (PaddlePaddle#1006)

* Update navigation docs

* Update navigation docs

* Update navigation docs

* Update navigation docs

* Update PPDet PreProcess

* Update PPDet PreProcess

* Update PPDet pybind and python

* Update

* Update ppdet

* [Other]Update Paddle Lite for RV1126  (PaddlePaddle#1013)

update lite link

* [Other] Remove TRT static libs in package (PaddlePaddle#1011)

* remove duplicated and useless libs

* use os system to run ldd

* remove filter libs by ldd

* [Serving]update ocr model.py from np.object to np.object_ (PaddlePaddle#1017)

* [Serving]update ocr model.py from np.object to np.object_

* Update model.py

* [Bug Fix] Fix build with Paddle Inference on Jetson (PaddlePaddle#1019)

Fix build with Paddle Inference on Jetson

* Update README.md

* [Serving]update np.object to np.object_  (PaddlePaddle#1021)

np.object to np.object_

* Update README.md

* fresh doc version release/1.0.2 (PaddlePaddle#1023)

fresh doc version

* [Other] Add a interface to get all pretrained models available from hub model server (PaddlePaddle#1022)

add get model list

* [Doc]Revise one wording (PaddlePaddle#1028)

* First commit

* Add a missing translation

* deleted:    docs/en/quantize.md

* Update one translation

* Update en version

* Update one translation in code

* Standardize one writing

* Standardize one writing

* Update some en version

* Fix a grammar problem

* Update en version for api/vision result

* Merge branch 'develop' of https://github.com/charl-u/FastDeploy into develop

* Checkout the link in README in vision_results/ to the en documents

* Modify a title

* Add link to serving/docs/

* Finish translation of demo.md

* Update english version of serving/docs/

* Update title of readme

* Update some links

* Modify a title

* Update some links

* Update en version of java android README

* Modify some titles

* Modify some titles

* Modify some titles

* modify article to document

* [Model] add style transfer model (PaddlePaddle#922)

* add style transfer model

* add examples for generation model

* add unit test

* add speed comparison

* add speed comparison

* add variable for constant

* add preprocessor and postprocessor

* add preprocessor and postprocessor

* fix

* fix according to review

Co-authored-by: DefTruth <[email protected]>

Co-authored-by: Jack Zhou <[email protected]>
Co-authored-by: Zheng-Bicheng <[email protected]>
Co-authored-by: yeliang2258 <[email protected]>
Co-authored-by: Wang Xinyu <[email protected]>
Co-authored-by: heliqi <[email protected]>
Co-authored-by: Jason <[email protected]>
Co-authored-by: leiqing <[email protected]>
Co-authored-by: Zeref996 <[email protected]>
Co-authored-by: chenjian <[email protected]>
Co-authored-by: charl-u <[email protected]>