This repository was archived by the owner on Apr 20, 2026. It is now read-only.

Commit bf43f42

First release of Intel Edge AI Performance Evaluation Tool. Please refer to docs/ReleaseNote.md for release note.
1 parent 4e913c7 commit bf43f42

33 files changed

Lines changed: 41827 additions & 0 deletions

LICENSE

Lines changed: 21 additions & 0 deletions
MIT License

Copyright (c) 2022 Intel Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice (including the next paragraph) shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md

Lines changed: 201 additions & 0 deletions
# Intel® Edge AI Performance Evaluation Toolkit User Guide

Intel® Edge AI Performance Evaluation Toolkit is a customer enabling tool designed to make it easy to qualify and evaluate deep learning inference performance on Intel edge platforms.

## Components

The toolkit consists of scripts, configuration files, an Intel Power and Thermal Analysis Tool (PTAT) workspace file, and optimized OpenVINO INT8 IR models, briefly described below:

* **OS setup scripts** - set up the container runtime environment on both Ubuntu Linux and Windows.

* **OpenVINO POT quantization scripts and configuration files** - quantize OpenVINO FP32/FP16 IR models to INT8 with the OpenVINO Post-Training Optimization Tool (POT).

* **Benchmark scripts and Intel PTAT workspace file** - benchmark the optimized INT8 IR model and monitor system frequency and thermal conditions to qualify system performance.

## Supported HW

* Intel® Core™ i7-1165G7 Processor
* Intel® Core™ i7-1185G7E Processor
* Intel® Celeron® 6305E

## License

Intel® Edge AI Performance Evaluation Toolkit is licensed under the MIT License. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

## User Guide

Below are the steps to get started on Ubuntu 20.04.4 and Windows 10 21H2.

### User Guide for Ubuntu 20.04

**Install Ubuntu 20.04.4**

https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview

**Clone Intel® Edge AI Performance Evaluation Toolkit**

```bash
sudo apt update
sudo apt upgrade
sudo apt install git
git clone https://github.com/intel/Intel-Edge-AI-Performance-Evaluation-Toolkit.git
```

**Install the Docker utility by running**

```bash
cd Intel-Edge-AI-Performance-Evaluation-Toolkit
bash tools/install_docker.sh
```

Then reboot the system.

**Install Intel Power and Thermal Analysis Tool**

Tool download link: [Intel® Power And Thermal Analysis Tool](https://www.intel.com/content/www/us/en/secure/design/confidential/software-kits/kit-details.html?kitId=637737)

![](https://i.imgur.com/JQ9QPDp.png)

**Load the Intel PTAT workspace file from ptat_workspace.xml**

![](https://i.imgur.com/FcvpSv0.png)

**Run Benchmark and Quantization Scripts**

1. **Copy the yolo-v4-tf FP16_INT8 IR model to the Downloads folder in the home directory**

```bash
cd Intel-Edge-AI-Performance-Evaluation-Toolkit
cp -ar openvino_models/ $HOME/Downloads
```

2. **Run benchmark_app on the yolo-v4-tf FP16_INT8 IR model on the CPU**

```bash
bash run_yolo-v4-tf-int8-cpu_benchmark.sh
```
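The benchmark scripts wrap OpenVINO's `benchmark_app`, whose run summary ends with a `Throughput: <value> FPS` line. If you redirect a benchmark run to a log file, the figure can be pulled out for later comparison; a minimal sketch (the embedded log excerpt and its numbers are illustrative, not measured results):

```shell
# Extract the throughput figure from a saved benchmark_app log.
# In practice, point LOG at a file captured from the benchmark script;
# the sample content below is only for illustration.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
[ INFO ] Count:      11284 iterations
[ INFO ] Duration:   60023.45 ms
[ INFO ] Throughput: 187.99 FPS
EOF

# Grab the summary line, then keep only the numeric field.
fps=$(grep -o 'Throughput:[[:space:]]*[0-9.]*' "$LOG" | awk '{print $2}')
echo "Throughput: $fps FPS"
rm -f "$LOG"
```

This is handy when comparing CPU and GPU runs, or runs before and after quantization, without re-reading full logs by hand.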
![](https://i.imgur.com/aOpi5ZF.png)

3. **Run benchmark_app on the yolo-v4-tf FP16_INT8 IR model on the GPU**

```bash
bash run_yolo-v4-tf-int8-gpu_benchmark.sh
```

![](https://i.imgur.com/G6yS6wp.png)

4. **Run quantization on the yolo-v3-tf FP16 IR model**

```bash
bash quantize_yolo-v3-tf_int8.sh
```

### User Guide for Windows 10

**Install Windows 10 21H2**

* Download the Windows Insider Preview ISO (microsoft.com) and install it.

* Install the required graphics driver (30.0.101.xxxx).

**Download Intel® Edge AI Performance Evaluation Toolkit from the GitHub link below**

https://github.com/intel/Intel-Edge-AI-Performance-Evaluation-Toolkit/archive/refs/heads/main.zip

Extract it to C:\Users\Public\Intel-Edge-AI-Performance-Evaluation-Toolkit.

**Enable Hyper-V (run PowerShell as Administrator)**

Please refer to tools\enable-hyper-v.ps1 and run the following:

```
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```

Press Y to reboot the system.

**Install WSL2 (run PowerShell as Administrator) and reboot**

Please refer to tools\install_wsl2.ps1 and run the following:

```
wsl --install
Restart-Computer
```

After the reboot, WSL will start automatically and install Ubuntu. Enter a user name and password for WSL Ubuntu when prompted.

**Install the Docker utility by running in WSL**

```bash
cd /mnt/c/Users/Public/Intel-Edge-AI-Performance-Evaluation-Toolkit
bash tools/install_docker.sh
```

Reboot to activate the Docker settings.

**Install Intel Power and Thermal Analysis Tool**

Tool download link: [Intel® Power And Thermal Analysis Tool](https://www.intel.com/content/www/us/en/secure/design/confidential/software-kits/kit-details.html?kitId=637737)

![](https://i.imgur.com/eIBHHVw.png)

![](https://i.imgur.com/GorsHl2.png)

**Launch the Intel PTAT tool as administrator**

![](https://i.imgur.com/pUyTewz.png)

**Load the Intel PTAT workspace file from ptat_workspace.json**

![](https://i.imgur.com/RGt1qd9.png)

**Run Benchmark and Quantization Scripts in WSL**

1. **Copy the yolo-v4-tf FP16_INT8 IR model to the Downloads folder in the home directory**

```bash
cd /mnt/c/Users/Public/Intel-Edge-AI-Performance-Evaluation-Toolkit
mkdir $HOME/Downloads
cp -ar openvino_models/ $HOME/Downloads
```

2. **Run benchmark_app on the yolo-v4-tf FP16_INT8 IR model on the CPU**

```bash
bash run_yolo-v4-tf-int8-cpu_benchmark.sh
```

![](https://i.imgur.com/rXvPvTF.png)

3. **Run benchmark_app on the yolo-v4-tf FP16_INT8 IR model on the GPU**

```bash
bash run_yolo-v4-tf-int8-gpu_benchmark.sh
```

![](https://i.imgur.com/dXLwmhI.png)

4. **Run quantization on the yolo-v3-tf FP16 IR model**

```bash
sudo apt install unzip
bash tools/download_coco_dataset.sh
bash quantize_yolo-v3-tf_int8.sh
```

## How to Contribute

See [CONTRIBUTING](https://github.com/intel/Intel-Edge-AI-Performance-Evaluation-Toolkit/blob/main/CONTRIBUTING.md) for details. Thank you!

## Get Support

Please report questions, issues, and suggestions using
[GitHub* Issues](https://github.com/intel/Intel-Edge-AI-Performance-Evaluation-Toolkit/issues).

docs/ReleaseNote.md

Lines changed: 71 additions & 0 deletions
# Release Note for Intel® Edge AI Performance Evaluation Toolkit 1.0

## Introduction

Intel® Edge AI Performance Evaluation Toolkit is a customer enabling tool designed to make it easy to qualify and evaluate deep learning inference performance on Intel edge platforms.

## New in This Release

This is the first release of Intel® Edge AI Performance Evaluation Toolkit.

## Known Limitations

The toolkit has been validated on the Tiger Lake Intel® Core™ i7-1165G7 Processor, Intel® Core™ i7-1185G7E Processor, and Intel® Celeron® 6305E, with OpenVINO™ toolkit v2022.1.

## Included in This Release

The toolkit consists of scripts, configuration files, an Intel Power and Thermal Analysis Tool (PTAT) workspace file, and optimized OpenVINO INT8 IR models, briefly described below:

* **OS setup scripts** - set up the container runtime environment on both Ubuntu Linux and Windows.

* **OpenVINO POT quantization scripts and configuration files** - quantize OpenVINO FP32/FP16 IR models to INT8 with the OpenVINO Post-Training Optimization Tool (POT).

* **Benchmark scripts and Intel PTAT workspace file** - benchmark the optimized INT8 IR model and monitor system frequency and thermal conditions to qualify system performance.

## Hardware and Software Compatibility

Supported Intel processors:

* Intel® Core™ i7-1185G7E Processor
* Intel® Core™ i7-1165G7 Processor
* Intel® Celeron® 6305E

Supported operating systems:

* Windows 10 21H2, 64-bit
* Ubuntu* 20.04 long-term support (LTS), 64-bit

## Where to Find This Release

Intel® Edge AI Performance Evaluation Toolkit is available at https://github.com/intel/Intel-Edge-AI-Performance-Evaluation-Toolkit

## Helpful Links

[Intel® Power And Thermal Analysis Tool](https://www.intel.com/content/www/us/en/secure/design/confidential/software-kits/kit-details.html?kitId=637737)

[Intel® Distribution of OpenVINO™ toolkit Benchmark Results](https://docs.openvino.ai/latest/openvino_docs_performance_benchmarks_openvino.html#doxid-openvino-docs-performance-benchmarks-openvino)

## Legal Information

You may not use or facilitate the use of this document in connection with any infringement or other legal analysis concerning Intel products described herein. You agree to grant Intel a non-exclusive, royalty-free license to any patent claim thereafter drafted which includes subject matter disclosed herein.

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest Intel product specifications and roadmaps.

The products described may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.

Intel technologies’ features and benefits depend on system configuration and may require enabled hardware, software or service activation. Learn more at http://www.intel.com/ or from the OEM or retailer.

No computer system can be absolutely secure.

Intel, Atom, Arria, Core, Movidius, Xeon, OpenVINO, and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos.

*Other names and brands may be claimed as the property of others.

Copyright © 2022, Intel Corporation. All rights reserved.

For more complete information about compiler optimizations, see our [Optimization Notice](https://software.intel.com/en-us/articles/optimization-notice#opt-en).
Lines changed: 21 additions & 0 deletions
```json
{
    "model": {
        "model_name": "yolo-v3-tf",
        "model": "/mnt/openvino_models/public/yolo-v3-tf/FP16/yolo-v3-tf.xml",
        "weights": "/mnt/openvino_models/public/yolo-v3-tf/FP16/yolo-v3-tf.bin"
    },
    "engine": {
        "config": "/mnt/yolo-v3-tf-int8.yml"
    },
    "compression": {
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```
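This configuration drives the Post-Training Optimization Tool, so a stray comma or a mistyped model path is the most common way a long quantization run fails late. It can help to sanity-check that the file parses as valid JSON before launching POT; a minimal sketch (the config body is written inline with a heredoc to keep the example self-contained; in practice point `CONFIG` at the real config file):

```shell
# Validate a POT config before launching a long quantization run.
CONFIG=$(mktemp)
cat > "$CONFIG" <<'EOF'
{
  "model": {
    "model_name": "yolo-v3-tf",
    "model": "/mnt/openvino_models/public/yolo-v3-tf/FP16/yolo-v3-tf.xml",
    "weights": "/mnt/openvino_models/public/yolo-v3-tf/FP16/yolo-v3-tf.bin"
  },
  "engine": { "config": "/mnt/yolo-v3-tf-int8.yml" },
  "compression": {
    "algorithms": [
      { "name": "DefaultQuantization",
        "params": { "preset": "performance", "stat_subset_size": 300 } }
    ]
  }
}
EOF

# python3 -m json.tool exits non-zero with a parse error on malformed JSON.
if python3 -m json.tool "$CONFIG" > /dev/null; then
  status=OK
else
  status=INVALID
fi
echo "config $status"
rm -f "$CONFIG"
```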
Lines changed: 47 additions & 0 deletions
```yaml
models:
  - name: yolo-v3-tf
    launchers:
      - framework: openvino
        device: CPU
        adapter:
          type: yolo_v3
          anchors: "10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326"
          num: 9
          coords: 4
          classes: 80
          anchor_masks: [[6, 7, 8], [3, 4, 5], [0, 1, 2]]
          outputs:
            - conv2d_58/Conv2D/YoloRegion
            - conv2d_66/Conv2D/YoloRegion
            - conv2d_74/Conv2D/YoloRegion
    datasets:
      - name: ms_coco_detection_80_class_without_background
        annotation_conversion:
          converter: mscoco_detection
          annotation_file: /mnt/coco_dataset/annotations/instances_val2017.json
        data_source: /mnt/coco_dataset/val2017
        preprocessing:
          - type: resize
            size: 416

        postprocessing:
          - type: resize_prediction_boxes
          - type: filter
            apply_to: prediction
            min_confidence: 0.001
            remove_filtered: True
          - type: nms
            overlap: 0.5
          - type: clip_boxes
            apply_to: prediction

        metrics:
          - type: map
            integral: 11point
            ignore_difficult: true
            presenter: print_scalar
            reference: 0.6227
          - type: coco_precision
            max_detections: 100
            threshold: 0.5
            reference: 0.677
```
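The `annotation_file` and `data_source` entries above expect the COCO 2017 validation set at `/mnt/coco_dataset` inside the container (presumably the layout produced by `tools/download_coco_dataset.sh`). A sketch of the expected directory tree, built under a temporary root purely for illustration (the image file name is a placeholder for one of the 5000 val2017 images):

```shell
# Illustrate the directory layout the accuracy-checker config expects.
# On a real run this tree lives at /mnt/coco_dataset inside the container.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/coco_dataset/annotations" "$ROOT/coco_dataset/val2017"
touch "$ROOT/coco_dataset/annotations/instances_val2017.json"
touch "$ROOT/coco_dataset/val2017/000000000139.jpg"   # placeholder image

# Check the two paths the YAML refers to.
ann_present=no; img_present=no
[ -f "$ROOT/coco_dataset/annotations/instances_val2017.json" ] && ann_present=yes
[ -d "$ROOT/coco_dataset/val2017" ] && img_present=yes
echo "annotation file: $ann_present, image dir: $img_present"
rm -rf "$ROOT"
```

A quick check like this before launching quantization avoids the accuracy-checker failing partway through because the dataset mount is missing or misnamed.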
Lines changed: 29 additions & 0 deletions
```
INFO:openvino.tools.pot.app.run:Output log dir: /mnt/openvino_models/public/yolo-v3-tiny-tf/FP16-INT8/yolo-v3-tiny-tf_DefaultQuantization/2022-08-04_05-46-04
INFO:openvino.tools.pot.app.run:Creating pipeline:
 Algorithm: DefaultQuantization
 Parameters:
    preset                     : performance
    stat_subset_size           : 300
    target_device              : ANY
    model_type                 : None
    dump_intermediate_model    : False
    inplace_statistics         : True
    exec_log_dir               : /mnt/openvino_models/public/yolo-v3-tiny-tf/FP16-INT8/yolo-v3-tiny-tf_DefaultQuantization/2022-08-04_05-46-04
===========================================================================
INFO:openvino.tools.pot.pipeline.pipeline:Inference Engine version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
INFO:openvino.tools.pot.pipeline.pipeline:Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
INFO:openvino.tools.pot.pipeline.pipeline:Post-Training Optimization Tool version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
INFO:openvino.tools.pot.statistics.collector:Start computing statistics for algorithms : DefaultQuantization
INFO:openvino.tools.pot.statistics.collector:Computing statistics finished
INFO:openvino.tools.pot.pipeline.pipeline:Start algorithm: DefaultQuantization
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Start computing statistics for algorithm : ActivationChannelAlignment
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Start computing statistics for algorithms : MinMaxQuantization,FastBiasCorrection
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:openvino.tools.pot.pipeline.pipeline:Finished: DefaultQuantization
===========================================================================
INFO:openvino.tools.pot.pipeline.pipeline:Evaluation of generated model
INFO:openvino.tools.pot.engines.ac_engine:Start inference on the whole dataset
INFO:openvino.tools.pot.engines.ac_engine:Inference finished
INFO:openvino.tools.pot.app.run:map : 0.35638613189657226
INFO:openvino.tools.pot.app.run:coco_precision : 0.3932753353568194
```
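The last two lines of a POT run log report the accuracy of the quantized model, which is what gets compared against the `reference` values in the accuracy-checker YAML. Pulling them out of a saved log can be done with plain text tools; a sketch over the log format above (the excerpt is embedded in a heredoc so the example is self-contained):

```shell
# Extract the final accuracy metrics from a POT run log.
# In practice, point LOG at the captured log file.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
INFO:openvino.tools.pot.app.run:map : 0.35638613189657226
INFO:openvino.tools.pot.app.run:coco_precision : 0.3932753353568194
EOF

# The metric value is the last whitespace-separated field on each line.
map=$(grep ':map :' "$LOG" | awk '{print $NF}')
coco=$(grep ':coco_precision :' "$LOG" | awk '{print $NF}')
echo "map=$map coco_precision=$coco"
rm -f "$LOG"
```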
Binary file not shown.
