This repository was archived by the owner on Jan 24, 2024. It is now read-only.
84 changes: 54 additions & 30 deletions benchmark/tool/C/README.md
@@ -7,45 +7,69 @@ The demo can be run from the command line and can be used to test the inference
## Android
To compile and run this demo in an Android environment, follow these steps:

- **Step 1, build PaddlePaddle for Android.**

Refer to [this document](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_en.md) to compile the Android version of PaddlePaddle. After following the mentioned steps, `make install` will generate an output directory containing three subdirectories: `include`, `lib`, and `third_party` (`libpaddle_capi_shared.so` will be produced in the `lib` directory).

- **Step 2, build the inference demo.**

Compile `inference.cc` to an executable program for the Android environment as follows:

- For armeabi-v7a
```bash
$ git clone https://github.com/PaddlePaddle/Mobile.git
$ cd Mobile/benchmark/tool/C/
$ mkdir build
$ cd build

$ cmake .. \
      -DANDROID_ABI=armeabi-v7a \
      -DANDROID_STANDALONE_TOOLCHAIN=your/path/to/arm_standalone_toolchain \
      -DPADDLE_ROOT=your/path/to/paddle_install_dir \
      -DCMAKE_BUILD_TYPE=MinSizeRel

$ make
```

Here `PADDLE_ROOT` is the output directory generated in Step 1.

- For arm64-v8a
```bash
$ git clone https://github.com/PaddlePaddle/Mobile.git
$ cd Mobile/benchmark/tool/C/
$ mkdir build
$ cd build

$ cmake .. \
      -DANDROID_ABI=arm64-v8a \
      -DANDROID_STANDALONE_TOOLCHAIN=your/path/to/arm64_standalone_toolchain \
      -DPADDLE_ROOT=your/path/to/paddle_install_dir \
      -DCMAKE_BUILD_TYPE=MinSizeRel

$ make
```

Here `PADDLE_ROOT` is the output directory generated in Step 1.

- **Step 3, prepare a merged model.**

Model config files (`.py`, e.g. [Mobilenet](https://github.com/PaddlePaddle/Mobile/blob/develop/models/mobilenet.py)) contain only the structure of the models. Developers can choose a [model config here](https://github.com/PaddlePaddle/Mobile/tree/develop/models) to train their own models, and the PaddlePaddle [models repository](https://github.com/PaddlePaddle/models) has several tutorials for building and training models. The model parameter file (`.tar.gz`) is generated during training. The configuration file (`.py`) and the parameter file (`.tar.gz`) then need to be merged into a single file; please refer to [merge\_config\_parameters](https://github.com/PaddlePaddle/Mobile/tree/develop/tools/merge_config_parameters) for details.

- **Step 4, run the demo.**

Users can run the demo program by logging in to the Android device via [adb](https://developer.android.google.cn/studio/command-line/adb.html?hl=zh-cn#Enabling) and specifying the PaddlePaddle model on the command line as follows:

```bash
$ adb push inference /data/local/tmp                    # transfer the executable to the device
$ adb push mobilenet_flowers102.paddle /data/local/tmp  # transfer the model to the device
$ adb shell                                             # log in to the Android device
odin:/ $ cd /data/local/tmp                             # switch to the working directory
odin:/data/local/tmp $ ls
inference mobilenet_flowers102.paddle
odin:/data/local/tmp $ chmod +x inference
odin:/data/local/tmp $ ./inference --merged_model ./mobilenet_flowers102.paddle --input_size 150528  # run the executable
I1211 17:12:53.334666 4858 Util.cpp:166] commandline:
Time of init paddle 3.4388 ms.
Time of create from merged model file 141.045 ms.
Time of forward time 398.818 ms.
```

**Review comment (Collaborator):** It is best to indicate which phone this example was run on.

**Reply (Author):** I just want to show the output of inference; the timings here are not precise. I will add a comment about this in the next PR.

**Note:** `input_size` is 150528 because the input size of the model is `3 * 224 * 224 = 150528`.
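The `--input_size` flag is just the flattened length of the model's input tensor. As a quick sanity check (the `(3, 224, 224)` shape comes from the README; the helper function below is only illustrative):

```python
def flattened_input_size(channels, height, width):
    """Length of a flattened CHW input tensor, as passed to --input_size."""
    return channels * height * width

# Mobilenet in this demo takes 3 x 224 x 224 image data as input.
print(flattened_input_size(3, 224, 224))  # 150528
```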
111 changes: 111 additions & 0 deletions benchmark/tool/C/README_cn.md
@@ -0,0 +1,111 @@
# Demo Program

This is a simple demo program written in C++ that performs model inference through PaddlePaddle's C-API.

The demo can be run from the command line on Linux or on an Android phone, and can be used to test the performance of different models.

## Android
Follow the steps below to build an executable program that runs on Android devices.

- **Step 1, build the PaddlePaddle library for the Android platform.**

Follow the [Android build guide](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/mobile/cross_compiling_for_android_cn.md) to build the PaddlePaddle library for the Android platform. After `make install`, the PaddlePaddle library is installed under the directory specified by `CMAKE_INSTALL_PREFIX`, which contains the following subdirectories:
  - `include`, the header files needed to use PaddlePaddle; usually it is enough to add `#include <paddle/capi.h>` to your code.
  - `lib`, the PaddlePaddle library files for the target architecture, including:
    - the shared library, `libpaddle_capi_shared.so`;
    - the static libraries, `libpaddle_capi_layers.a` and `libpaddle_capi_engine.a`.
  - `third_party`, the third-party libraries that PaddlePaddle depends on.

- **Step 2, build the demo program.**

The demo project is managed with CMake. Follow the steps below to build an executable program that runs on Android devices. The **standalone toolchain** configured in Step 1 is needed again in this step.

- For the armeabi-v7a architecture

```bash
$ git clone https://github.com/PaddlePaddle/Mobile.git
$ cd Mobile/benchmark/tool/C/
$ mkdir build
$ cd build

$ cmake .. \
      -DANDROID_ABI=armeabi-v7a \
      -DANDROID_STANDALONE_TOOLCHAIN=your/path/to/arm_standalone_toolchain \
      -DPADDLE_ROOT=your/path/to/paddle_install_dir \
      -DCMAKE_BUILD_TYPE=MinSizeRel

$ make
```

Here `PADDLE_ROOT` is the output directory generated in Step 1.

- For the arm64-v8a architecture

```bash
$ git clone https://github.com/PaddlePaddle/Mobile.git
$ cd Mobile/benchmark/tool/C/
$ mkdir build
$ cd build

$ cmake .. \
      -DANDROID_ABI=arm64-v8a \
      -DANDROID_STANDALONE_TOOLCHAIN=your/path/to/arm64_standalone_toolchain \
      -DPADDLE_ROOT=your/path/to/paddle_install_dir \
      -DCMAKE_BUILD_TYPE=MinSizeRel

$ make
```

After the commands above finish, the target executable `inference` is generated in the `build` directory.

- **Step 3, prepare the model.**

Using a `merged model` is recommended on Android devices. Taking Mobilenet as an example, to generate a `merged model` you first need the following files:
  - The model config file [mobilenet.py](https://github.com/PaddlePaddle/Mobile/blob/develop/models/mobilenet.py), which describes the network structure of `Mobilenet` using PaddlePaddle's v2 API. More commonly used network configs, along with instructions for training models with PaddlePaddle, are available in [models](https://github.com/PaddlePaddle/Mobile/tree/develop/models).
  - The model parameter file. Parameters trained with the PaddlePaddle v2 API are stored as a `.tar.gz` file. For example, we provide [mobilenet\_flowers102.tar.gz](http://cloud.dlnel.org/filepub/?uuid=4a3fcd7a-719c-479f-96e1-28a4c3f2195e), the parameter file of a `Mobilenet` classification model trained on the [flowers102](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/) dataset. You can also download it with the following command:

```bash
wget -c "http://cloud.dlnel.org/filepub/?uuid=4a3fcd7a-719c-479f-96e1-28a4c3f2195e" -O mobilenet_flowers102.tar.gz
```

**Note that the model config file used to merge the model must contain only the `inference` network.**

Once the model config file (`.py`) and the parameter file (`.tar.gz`) are ready, and the PaddlePaddle Python package is installed on the machine, run the following script to generate the required `merged model`:

```bash
$ cd Mobile/tools/merge_config_parameters
$ python merge_model.py
```

You can also directly download [mobilenet\_flowers102.paddle](http://cloud.dlnel.org/filepub/?uuid=d3b95cf9-4dc3-476f-bdc7-98ac410c4f71) to try it out. To download it from the command line:

```bash
wget -c "http://cloud.dlnel.org/filepub/?uuid=d3b95cf9-4dc3-476f-bdc7-98ac410c4f71" -O mobilenet_flowers102.paddle
```

For more details about generating a `merged model`, please refer to [merge\_config\_parameters](https://github.com/PaddlePaddle/Mobile/tree/develop/tools/merge_config_parameters).

- **Step 4, test on the Android device.**

This is a command-line test program that runs on Android devices. From a desktop terminal, you can use [adb](https://developer.android.google.cn/studio/command-line/adb.html?hl=zh-cn#Enabling) to transfer the files to the Android device, log in to the device, and run the executable for testing.

```bash
$ adb push inference /data/local/tmp                    # transfer the executable to the device
$ adb push mobilenet_flowers102.paddle /data/local/tmp  # transfer the model to the device
$ adb shell                                             # log in to the Android device
odin:/ $ cd /data/local/tmp                             # switch to the working directory
odin:/data/local/tmp $ ls
inference mobilenet_flowers102.paddle
odin:/data/local/tmp $ chmod +x inference
odin:/data/local/tmp $ ./inference --merged_model ./mobilenet_flowers102.paddle --input_size 150528  # run the test program
I1211 17:12:53.334666  4858 Util.cpp:166] commandline:
Time of init paddle 3.4388 ms.
Time of create from merged model file 141.045 ms.
Time of forward time 398.818 ms.
```

The `inference` executable takes two runtime flags:
- `--merged_model`, the path to the model.
- `--input_size`, the length of the model's input data. Since `mobilenet` takes `3 x 224 x 224` image data as input, set `--input_size 150528`.

## Note

This demo only roughly measures a model's inference speed on Android devices. It feeds random data to the model, so if you need to test or verify the model's correctness, please modify it to fit your actual needs.
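The note above says the demo feeds random data to the model. A minimal Python sketch of preparing such a random input buffer (the function name and the use of `random.uniform` are illustrative, not the demo's actual C++ code):

```python
import random

def make_random_input(input_size):
    """Generate a random float buffer of the given length, like the demo's random input."""
    return [random.uniform(0.0, 1.0) for _ in range(input_size)]

# A buffer matching --input_size 150528 (3 x 224 x 224).
buf = make_random_input(3 * 224 * 224)
print(len(buf))  # 150528
```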
14 changes: 7 additions & 7 deletions deployment/model/merge_config_parameters/README.md
@@ -1,7 +1,7 @@

# Merge model config and parameters

Integrate the model configuration and model parameters into one file.
The merged file is used in our capi forward predict program.

## Demo
@@ -10,23 +10,23 @@ This applies to all PaddlePaddle v2 models; we show a demo of mobilenet.

### Step 1: Preparations

**Model Config:** [Mobilenet model config](../../models/mobilenet.py).
**Model Parameters:** [Mobilenet model parameters pretrained on flowers102, download here](https://pan.baidu.com/s/1geHkrw3)

### Step 2: Merge

Run the following code:

```python
from paddle.utils.merge_model import merge_v2_model

# import your network configuration
from mobilenet import mobile_net

net = mobile_net(3*224*224, 102, 1.0)
param_file = './mobilenet_flowers102.tar.gz'
output_file = './output.paddle'

merge_v2_model(net, param_file, output_file)

```
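Conceptually, the merge step packs the serialized network config and the trained parameters into a single file that the C-API forward program loads. The sketch below is a framework-free illustration of that idea using only the standard library; it does **not** reproduce PaddlePaddle's actual merged-model format:

```python
import struct

def merge_files(config_bytes, param_bytes):
    """Pack config and parameters into one blob, prefixing the config length."""
    return struct.pack("<Q", len(config_bytes)) + config_bytes + param_bytes

def split_files(blob):
    """Recover the two sections from a merged blob."""
    (config_len,) = struct.unpack("<Q", blob[:8])
    return blob[8:8 + config_len], blob[8 + config_len:]

merged = merge_files(b"network-config", b"trained-parameters")
cfg, params = split_files(merged)
print(cfg, params)  # b'network-config' b'trained-parameters'
```

The length prefix plays the role of the header that lets a single reader split the file back into its two parts at load time.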
13 changes: 13 additions & 0 deletions deployment/model/merge_config_parameters/merge_model.py
@@ -0,0 +1,13 @@
import paddle.v2 as paddle
from paddle.utils.merge_model import merge_v2_model

# import network configuration
from mobilenet import mobile_net

if __name__ == "__main__":
image_size = 224
num_classes = 102
net = mobile_net(3 * image_size * image_size, num_classes, 1.0)
param_file = './mobilenet_flowers102.tar.gz'
output_file = './mobilenet_flowers102.paddle'
merge_v2_model(net, param_file, output_file)