Merged
18 changes: 9 additions & 9 deletions docs/cn/build_and_install/download_prebuilt_libraries.md
@@ -22,7 +22,7 @@ FastDeploy provides prebuilt libraries for each platform, which developers can download and install directly.

### Python Installation

-Install the Release version (currently latest 1.0.1):
+Install the Release version (currently latest 1.0.2):
```bash
pip install fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
```
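After installing, a quick numeric comparison of dotted version strings can confirm whether an installed wheel is behind the 1.0.2 release. This is a minimal sketch; the commented `importlib.metadata` lookup assumes the distribution name of the wheel you installed (e.g. `fastdeploy-gpu-python`) and may need adjusting.

```python
def is_outdated(installed: str, latest: str = "1.0.2") -> bool:
    """True if `installed` is an older dotted version than `latest`."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(latest)

# To check the wheel you actually installed (distribution name is an assumption):
# from importlib.metadata import version
# print(is_outdated(version("fastdeploy-gpu-python")))

print(is_outdated("1.0.1"))  # → True
print(is_outdated("1.0.2"))  # → False
```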
@@ -43,8 +43,8 @@ Release version

| Platform | File | Description |
| :--- | :--- | :---- |
-| Linux x64 | [fastdeploy-linux-x64-gpu-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-1.0.1.tgz) | Built with g++ 8.2, CUDA 11.2, cuDNN 8.2 |
-| Windows x64 | [fastdeploy-win-x64-gpu-1.0.1.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-gpu-1.0.1.zip) | Built with Visual Studio 16 2019, CUDA 11.2, cuDNN 8.2 |
+| Linux x64 | [fastdeploy-linux-x64-gpu-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-1.0.2.tgz) | Built with g++ 8.2, CUDA 11.2, cuDNN 8.2 |
+| Windows x64 | [fastdeploy-win-x64-gpu-1.0.2.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-gpu-1.0.2.zip) | Built with Visual Studio 16 2019, CUDA 11.2, cuDNN 8.2 |

Develop version (nightly build)

@@ -65,7 +65,7 @@ Develop version (nightly build)

### Python Installation

-Install the Release version (currently latest 1.0.1):
+Install the Release version (currently latest 1.0.2):
```bash
pip install fastdeploy-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
```
@@ -81,11 +81,11 @@ Release version

| Platform | File | Description |
| :--- | :--- | :---- |
-| Linux x64 | [fastdeploy-linux-x64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-1.0.1.tgz) | Built with g++ 8.2 |
-| Windows x64 | [fastdeploy-win-x64-1.0.1.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-1.0.1.zip) | Built with Visual Studio 16 2019 |
-| Mac OSX x64 | [fastdeploy-osx-x86_64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-x86_64-1.0.1.tgz) | Built with clang++ 10.0.0 |
-| Mac OSX arm64 | [fastdeploy-osx-arm64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-arm64-1.0.1.tgz) | Built with clang++ 13.0.0 |
-| Linux aarch64 | [fastdeploy-linux-aarch64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-aarch64-1.0.1.tgz) | Built with gcc 6.3 |
+| Linux x64 | [fastdeploy-linux-x64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-1.0.2.tgz) | Built with g++ 8.2 |
+| Windows x64 | [fastdeploy-win-x64-1.0.2.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-1.0.2.zip) | Built with Visual Studio 16 2019 |
+| Mac OSX x64 | [fastdeploy-osx-x86_64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-x86_64-1.0.2.tgz) | Built with clang++ 10.0.0 |
+| Mac OSX arm64 | [fastdeploy-osx-arm64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-arm64-1.0.2.tgz) | Built with clang++ 13.0.0 |
+| Linux aarch64 | [fastdeploy-linux-aarch64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-aarch64-1.0.2.tgz) | Built with gcc 6.3 |
| Android armv7&v8 | [fastdeploy-android-1.0.0-shared.tgz](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-1.0.0-shared.tgz) | Built with NDK 25 and clang++; supports arm64-v8a and armeabi-v7a |

## Java SDK Installation
22 changes: 11 additions & 11 deletions docs/en/build_and_install/download_prebuilt_libraries.md
@@ -23,7 +23,7 @@ FastDeploy supports Computer Vision, Text and NLP model deployment on CPU and Nv

### Python SDK

-Install the released version(the newest 1.0.1 for now)
+Install the released version (the newest 1.0.2 for now)

```
pip install fastdeploy-gpu-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
@@ -43,12 +43,12 @@ conda config --add channels conda-forge && conda install cudatoolkit=11.2 cudnn=

### C++ SDK

-Install the released version(Latest 1.0.1
+Install the released version (Latest 1.0.2)

| Platform | File | Description |
|:----------- |:--------------------------------------------------------------------------------------------------------------------- |:--------------------------------------------------------- |
-| Linux x64 | [fastdeploy-linux-x64-gpu-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-1.0.1.tgz) | g++ 8.2, CUDA 11.2, cuDNN 8.2 |
-| Windows x64 | [fastdeploy-win-x64-gpu-1.0.1.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-gpu-1.0.1.zip) | Visual Studio 16 2019, CUDA 11.2, cuDNN 8.2 |
+| Linux x64 | [fastdeploy-linux-x64-gpu-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-gpu-1.0.2.tgz) | g++ 8.2, CUDA 11.2, cuDNN 8.2 |
+| Windows x64 | [fastdeploy-win-x64-gpu-1.0.2.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-gpu-1.0.2.zip) | Visual Studio 16 2019, CUDA 11.2, cuDNN 8.2 |

Install the Develop version(Nightly build)

@@ -70,7 +70,7 @@ FastDeploy supports computer vision, text and NLP model deployment on CPU with P

### Python SDK

-Install the released version(Latest 1.0.1 for now)
+Install the released version (Latest 1.0.2 for now)

```
pip install fastdeploy-python -f https://www.paddlepaddle.org.cn/whl/fastdeploy.html
@@ -84,15 +84,15 @@ pip install fastdeploy-python==0.0.0 -f https://www.paddlepaddle.org.cn/whl/fast

### C++ SDK

-Install the released version(Latest 1.0.1 for now, Android is 1.0.1
+Install the released version (Latest 1.0.2 for now; Android is 1.0.0)

| Platform | File | Description |
|:------------- |:--------------------------------------------------------------------------------------------------------------------- |:------------------------------ |
-| Linux x64 | [fastdeploy-linux-x64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-1.0.1.tgz) | g++ 8.2 |
-| Windows x64 | [fastdeploy-win-x64-1.0.1.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-1.0.1.zip) | Visual Studio 16 2019 |
-| Mac OSX x64 | [fastdeploy-osx-x86_64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-x86_64-1.0.1.tgz) | clang++ 10.0.0 |
-| Mac OSX arm64 | [fastdeploy-osx-arm64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-arm64-1.0.1.tgz) | clang++ 13.0.0 |
-| Linux aarch64 | [fastdeploy-osx-arm64-1.0.1.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-aarch64-1.0.1.tgz) | gcc 6.3 |
+| Linux x64 | [fastdeploy-linux-x64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-1.0.2.tgz) | g++ 8.2 |
+| Windows x64 | [fastdeploy-win-x64-1.0.2.zip](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-win-x64-1.0.2.zip) | Visual Studio 16 2019 |
+| Mac OSX x64 | [fastdeploy-osx-x86_64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-x86_64-1.0.2.tgz) | clang++ 10.0.0 |
+| Mac OSX arm64 | [fastdeploy-osx-arm64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-osx-arm64-1.0.2.tgz) | clang++ 13.0.0 |
+| Linux aarch64 | [fastdeploy-linux-aarch64-1.0.2.tgz](https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-aarch64-1.0.2.tgz) | gcc 6.3 |
| Android armv7&v8 | [fastdeploy-android-1.0.0-shared.tgz](https://bj.bcebos.com/fastdeploy/release/android/fastdeploy-android-1.0.0-shared.tgz)| NDK 25, clang++, support arm64-v8a and armeabi-v7a |
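The table above can be scripted. As a hedged sketch, the helper below maps `uname` output to the matching 1.0.2 CPU C++ SDK archive name; the `choose_pkg` function and the URL pattern are taken from the table, and GPU builds would use the `-gpu-` archive names instead.

```shell
# Sketch: pick the matching 1.0.2 C++ SDK archive for the current platform.
choose_pkg() {
  case "$1-$2" in
    Linux-x86_64)  echo "fastdeploy-linux-x64-1.0.2.tgz" ;;
    Linux-aarch64) echo "fastdeploy-linux-aarch64-1.0.2.tgz" ;;
    Darwin-x86_64) echo "fastdeploy-osx-x86_64-1.0.2.tgz" ;;
    Darwin-arm64)  echo "fastdeploy-osx-arm64-1.0.2.tgz" ;;
    *) echo "no prebuilt archive for $1/$2" >&2; return 1 ;;
  esac
}

# Build the full download URL using the release path shown in the table.
echo "https://bj.bcebos.com/fastdeploy/release/cpp/$(choose_pkg "$(uname -s)" "$(uname -m)")"
```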

## Java SDK
4 changes: 2 additions & 2 deletions serving/README.md
@@ -20,15 +20,15 @@ FastDeploy builds an end-to-end serving deployment based on [Triton Inference Se
CPU images only support Paddle/ONNX models for serving deployment on CPUs, and supported inference backends include OpenVINO, Paddle Inference, and ONNX Runtime

```shell
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.1-cpu-only-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.2-cpu-only-21.10
```

#### GPU Image

GPU images support Paddle/ONNX models for serving deployment on GPU and CPU, and supported inference backends include OpenVINO, TensorRT, Paddle Inference, and ONNX Runtime

```
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.1-gpu-cuda11.4-trt8.4-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.2-gpu-cuda11.4-trt8.4-21.10
```
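Once the image is pulled, a typical interactive launch looks like the sketch below. The `docker run` flags are illustrative, not the project's documented invocation: `--gpus all` assumes Docker with the NVIDIA container runtime, and the volume mount path is a placeholder.

```shell
# The image tag pulled above, split out so it is easy to bump next release.
TAG="1.0.2-gpu-cuda11.4-trt8.4-21.10"
IMAGE="registry.baidubce.com/paddlepaddle/fastdeploy:${TAG}"
echo "${IMAGE}"

# Illustrative interactive launch (commented; requires the NVIDIA runtime):
# docker run --gpus all --rm -it --net=host -v "$PWD:/workspace" "${IMAGE}" bash
```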

Users can also compile the image by themselves according to their own needs, referring to the following documents:
4 changes: 2 additions & 2 deletions serving/README_CN.md
@@ -17,13 +17,13 @@ FastDeploy is based on [Triton Inference Server](https://github.com/triton-inference-se
#### CPU Image
CPU images only support serving deployment of Paddle/ONNX models on CPU; supported inference backends include OpenVINO, Paddle Inference, and ONNX Runtime
```shell
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.1-cpu-only-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.2-cpu-only-21.10
```

#### GPU Image
GPU images support serving deployment of Paddle/ONNX models on GPU and CPU; supported inference backends include OpenVINO, TensorRT, Paddle Inference, and ONNX Runtime
```
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.1-gpu-cuda11.4-trt8.4-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.2-gpu-cuda11.4-trt8.4-21.10
```

Users can also build images themselves as needed, referring to the following documents: