
Commit d7b0156

[MXNET-711] Website build and version dropdown update (apache#11892)
* adding param for list of tags to display on website
* using new website display argument for artifact placement in version folder
* adding display logic
* remove restricted setting for testing
* update usage instructions
* reverted Jenkinsfile to use restricted nodes

[MXAPPS-581] Fixes for broken Straight Dope tests. (apache#11923)

* Update relative paths pointing to the data directory to point to the correct place in the testing temporary folder.
* Enable the notebooks that were previously broken because of relative file paths not pointing to the correct place.
* Move some notebooks we do not plan to test to the whitelist. These notebooks are not published in the Straight Dope book.
* Clean-up: Convert print statements to info/warn/error logging statements. Add some logging statements for better status.

Disable flaky test: test_spatial_transformer_with_type (apache#11930) apache#11839

Add linux and macos MKLDNN Building Instruction (apache#11049)

* add linux and macos doc
* update doc
* Update MKL_README.md
* Update MKL_README.md Add convolution code to verify mkldnn backend
* add homebrew link
* rename to MKLDNN_README
* add mkl verify
* trigger
* trigger
* set mac complier to gcc47
* add VS2017 support experimentally
* improve quality
* improve quality
* modify mac build instruction since prepare_mkldnn.sh has been rm
* trigger
* add some improvement

[MXNET-531] Add download util (apache#11866)

* add changes to example
* place the file to the util
* add retry scheme
* fix the retry logic
* change the DownloadUtil to Util
* Trigger the CI

[MXNET-11241] Avoid use of troublesome cudnnFind() results when grad_req='add' (apache#11338)

* Add tests that fail due to issue 11241
* Fix apache#11241 Conv1D throws CUDNN_STATUS_EXECUTION_FAILED
* Force algo 1 when grad_req==add with large c. Expand tests.
* Shorten test runtimes.

Improving documentation and error messages for Async distributed training with Gluon (apache#11910)

* Add description about update on kvstore
* add async check for gluon
* only raise error if user set update_on_kvstore
* fix condition
* add async nightly test
* fix case when no kvstore
* add example for trainer creation in doc

[MXNET-641] fix R windows install docs (apache#11805)

* fix R windows install docs
* addressed PR comments
* PR comments
* PR comments
* fixed line wrappings
* fixed line wrappings

a hot fix for mkldnn link (apache#11939)

re-enabling randomized test_l2_normalization (apache#11900)

[MXNET-651] MXNet Model Backwards Compatibility Checker (apache#11626)

* Added MNIST-MLP-Module-API models to check model save and load_checkpoint methods
* Added LENET with Conv2D operator training file
* Added LENET with Conv2d operator inference file
* Added LanguageModelling with RNN training file
* Added LamguageModelling with RNN inference file
* Added hybridized LENET Gluon Model training file
* Added hybridized LENET gluon model inference file
* Added license headers
* Refactored the model and inference files and extracted out duplicate code in a common file
* Added runtime function for executing the MBCC files
* Added JenkinsFile for MBCC to be run as a nightly job
* Added boto3 install for s3 uploads
* Added README for MBCC
* Added license header
* Added more common functions from lm_rnn_gluon_train and inference files into common.py to clean up code
* Added scripts for training models on older versions of MXNet
* Added check for preventing inference script from crashing in case no trained models are found
* Fixed indentation issue
* Replaced Penn Tree Bank Dataset with Sherlock Holmes Dataset
* Fixed indentation issue
* Removed training in models and added smaller models. Now we are simply checking a forward pass in the model with dummy data.
* Updated README
* Fixed indentation error
* Fixed indentation error
* Removed code duplication in the training file
* Added comments for runtime_functions script for training files
* Merged S3 Buckets for storing data and models into one
* Automated the process to fetch MXNet versions from git tags
* Added defensive checks for the case where the data might not be found
* Fixed issue where we were performing inference on state model files
* Replaced print statements with logging ones
* Removed boto install statements and move them into ubuntu_python docker
* Separated training and uploading of models into separate files so that training runs in Docker and upload runs outside Docker
* Fixed pylint warnings
* Updated comments and README
* Removed the venv for training process
* Fixed indentation in the MBCC Jenkins file and also separated out training and inference into two separate stages
* Fixed indendation
* Fixed erroneous single quote
* Added --user flag to check for Jenkins error
* Removed unused methods
* Added force flag in the pip command to install mxnet
* Removed the force-re-install flag
* Changed exit 1 to exit 0
* Added quotes around the shell command
* added packlibs and unpack libs for MXNet builds
* Changed PythonPath from relative to absolute
* Created dedicated bucket with correct permission
* Fix for python path in training
* Changed bucket name to CI bucket
* Added set -ex to the upload shell script
* Now raising an exception if no models are found in the S3 bucket
* Added regex to train models script
* Added check for performing inference only on models trained on same major versions
* Added set -ex flags to shell scripts
* Added multi-version regex checks in training
* Fixed typo in regex
* Now we will train models for all the minor versions for a given major version by traversing the tags
* Added check for validating current_version

[MXNET-531] NeuralStyle Example for Scala (apache#11621)

* add initial neuralstyle and test coverage
* Add two more test and README
* kill comments
* patch on memory leaks fix
* fix formatting issues
* remove redundant files
* disable the Gan example for now
* add ignore method
* add new download scheme to match the changes
1 parent 4b3988e commit d7b0156

51 files changed

Lines changed: 2316 additions & 953 deletions

MKLDNN_README.md

Lines changed: 301 additions & 0 deletions
@@ -0,0 +1,301 @@
# Build/Install MXNet with MKL-DNN

Building MXNet with [Intel MKL-DNN](https://github.com/intel/mkl-dnn) can deliver significantly better performance when training and running inference on Intel Xeon CPUs. The performance improvements are documented on this [page](https://mxnet.incubator.apache.org/faq/perf.html#intel-cpu). Below are instructions for the Linux, MacOS, and Windows platforms.
<h2 id="0">Contents</h2>

* [1. Linux](#1)
* [2. MacOS](#2)
* [3. Windows](#3)
* [4. Verify MXNet with python](#4)
* [5. Enable MKL BLAS](#5)
* [6. Support](#6)
<h2 id="1">Linux</h2>

### Prerequisites

```
sudo apt-get update
sudo apt-get install -y build-essential git
sudo apt-get install -y libopenblas-dev liblapack-dev
sudo apt-get install -y libopencv-dev
sudo apt-get install -y graphviz
```

### Clone MXNet sources

```
git clone --recursive https://github.com/apache/incubator-mxnet.git
cd incubator-mxnet
```

### Build MXNet with MKL-DNN

```
make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl USE_INTEL_PATH=/opt/intel
```

If you don't have the full [MKL](https://software.intel.com/en-us/intel-mkl) library installed, you can use OpenBLAS instead by setting `USE_BLAS=openblas`.
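For example, an OpenBLAS-based build (a sketch, assuming the Ubuntu prerequisites above, which install `libopenblas-dev`) would be:

```shell
# Build with OpenBLAS instead of the full MKL library
make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=openblas
```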

<h2 id="2">MacOS</h2>

### Prerequisites

Install the dependencies required for MXNet with the following commands:

- [Homebrew](https://brew.sh/)
- gcc (the clang shipped with macOS does not support OpenMP)
- OpenCV (for computer vision operations)

```
# Paste this command in a Mac terminal to install Homebrew
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

# install dependencies
brew update
brew install pkg-config
brew install graphviz
brew tap homebrew/core
brew install opencv
brew tap homebrew/versions
brew install gcc49
brew link gcc49  # gcc-5 and gcc-7 also work
```

### Clone MXNet sources

```
git clone --recursive https://github.com/apache/incubator-mxnet.git
cd incubator-mxnet
```

### Enable OpenMP for MacOS

If you want to enable OpenMP for better performance, modify the Makefile in the MXNet root directory so that the `-fopenmp` flag is added to CFLAGS on Darwin as well:

```
ifeq ($(USE_OPENMP), 1)
# ifneq ($(UNAME_S), Darwin)
	CFLAGS += -fopenmp
# endif
endif
```

### Build MXNet with MKL-DNN

```
make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
```

*Note: OpenCV is temporarily disabled in this build.*

<h2 id="3">Windows</h2>

We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/); experimentally, you can also try the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).

**Visual Studio 2015**

To build and install MXNet yourself, you need the following dependencies. Install the required dependencies:

1. If [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) is not already installed, download and install it. You can download and install the free community edition.
2. Download and install [CMake 3](https://cmake.org/) if it is not already installed.
3. Download and install [OpenCV 3](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
4. Unzip the OpenCV package.
5. Set the environment variable ```OpenCV_DIR``` to point to the OpenCV build directory (```C:\opencv\build\x64\vc14``` for example). Also, add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for example) to the ```PATH``` variable.
6. If you have the Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` to point to the ```MKL``` directory that contains the ```include``` and ```lib``` directories. If you want to use MKL BLAS, pass ```-DUSE_BLAS=mkl``` to cmake. Typically, you can find the directory in ```C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
7. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that you should also download ```mingw64.dll.zip``` along with OpenBLAS and add the DLLs to ```PATH```.
8. Set the environment variable ```OpenBLAS_HOME``` to point to the ```OpenBLAS``` directory that contains the ```include``` and ```lib``` directories. Typically, you can find the directory in ```C:\Program files (x86)\OpenBLAS\```.

After you have installed all of the required dependencies, build the MXNet source code:

1. Download the MXNet source code from [GitHub](https://github.com/apache/incubator-mxnet). Don't forget to pull the submodules:
```
git clone --recursive https://github.com/apache/incubator-mxnet.git
```
2. Copy the file `3rdparty/mkldnn/config_template.vcxproj` to the incubator-mxnet root.
3. Start a Visual Studio command prompt.
4. Use [CMake 3](https://cmake.org/) to create a Visual Studio solution in ```./build``` or some other directory. Make sure to specify the architecture in the [CMake 3](https://cmake.org/) command:
```
mkdir build
cd build
cmake -G "Visual Studio 14 Win64" .. -DUSE_CUDA=0 -DUSE_CUDNN=0 -DUSE_NVRTC=0 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=All -DUSE_MKLDNN=1 -DCMAKE_BUILD_TYPE=Release
```
5. In Visual Studio, open the solution file (```.sln```) and compile it. These commands produce a library called ```libmxnet.dll``` in the ```./build/Release/``` or ```./build/Debug``` folder. ```libmkldnn.dll``` will be in ```./build/3rdparty/mkldnn/src/Release/```.
6. Make sure that all the DLL files used above (such as `libmkldnn.dll`, `libmklml.dll`, `libiomp5.dll`, `libopenblas.dll`, etc.) are added to the system ```PATH```. For convenience, you can put all of them in ```\windows\system32```. Otherwise, you will run into `Not Found Dependencies` errors when loading mxnet.
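If you are unsure whether the DLLs are discoverable, a small script can scan ```PATH``` for them (an illustrative sketch, not part of MXNet; the helper name and DLL list are assumptions based on the list above):

```python
import os

def find_on_path(dll_name, path=None):
    """Return the first directory on the given search path that contains dll_name, or None."""
    for d in (path or os.environ.get("PATH", "")).split(os.pathsep):
        if d and os.path.isfile(os.path.join(d, dll_name)):
            return d
    return None

# Report where each required DLL was found, if anywhere
for dll in ("libmkldnn.dll", "libmklml.dll", "libiomp5.dll", "libopenblas.dll"):
    where = find_on_path(dll)
    print(dll, "->", where if where else "NOT FOUND")
```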

**Visual Studio 2017**

To build and install MXNet yourself using [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/), you need the following dependencies. Install the required dependencies:

1. If [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/) is not already installed, download and install it. You can download and install the free community edition.
2. Download and install [CMake 3](https://cmake.org/files/v3.11/cmake-3.11.0-rc4-win64-x64.msi) if it is not already installed.
3. Download and install [OpenCV](https://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.4.1/opencv-3.4.1-vc14_vc15.exe/download).
4. Unzip the OpenCV package.
5. Set the environment variable ```OpenCV_DIR``` to point to the OpenCV build directory (e.g., ```OpenCV_DIR = C:\utils\opencv\build```).
6. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](https://sourceforge.net/projects/openblas/files/v0.2.20/OpenBLAS%200.2.20%20version.zip/download).
7. Set the environment variable ```OpenBLAS_HOME``` to point to the ```OpenBLAS``` directory that contains the ```include``` and ```lib``` directories (e.g., ```OpenBLAS_HOME = C:\utils\OpenBLAS```).

After you have installed all of the required dependencies, build the MXNet source code:

1. Start ```cmd``` in Windows.
2. Download the MXNet source code from GitHub with the following commands:
```
cd C:\
git clone --recursive https://github.com/apache/incubator-mxnet.git
```
3. Copy the file `3rdparty/mkldnn/config_template.vcxproj` to the incubator-mxnet root.
4. Follow [this link](https://docs.microsoft.com/en-us/visualstudio/install/modify-visual-studio) to modify ```Individual components```, check ```VC++ 2017 version 15.4 v14.11 toolset```, and click ```Modify```.
5. Switch Visual Studio 2017 to the v14.11 toolset using the following command (by default, VS2017 is installed in the following path):
```
"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvars64.bat" -vcvars_ver=14.11
```
6. Create a build directory and change into it, for example:
```
mkdir C:\build
cd C:\build
```
7. Run CMake on the MXNet source code with the following command:
```
cmake -G "Visual Studio 15 2017 Win64" .. -T host=x64 -DUSE_CUDA=0 -DUSE_CUDNN=0 -DUSE_NVRTC=0 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=All -DUSE_MKLDNN=1 -DCMAKE_BUILD_TYPE=Release
```
8. After CMake completes successfully, compile the MXNet source code with the following command:
```
msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount
```
9. Make sure that all the DLL files used above (such as `libmkldnn.dll`, `libmklml.dll`, `libiomp5.dll`, `libopenblas.dll`, etc.) are added to the system ```PATH```. For convenience, you can put all of them in ```\windows\system32```. Otherwise, you will run into `Not Found Dependencies` errors when loading mxnet.

<h2 id="4">Verify MXNet with python</h2>

```
cd python
sudo python setup.py install
python -c "import mxnet as mx;print((mx.nd.ones((2, 3))*2).asnumpy());"
```

Expected output:

```
[[ 2.  2.  2.]
 [ 2.  2.  2.]]
```

### Verify whether MKL-DNN works

After MXNet is installed, you can verify that the MKL-DNN backend works with a single convolution layer:

```
import mxnet as mx
import numpy as np

num_filter = 32
kernel = (3, 3)
pad = (1, 1)
shape = (32, 32, 256, 256)

x = mx.sym.Variable('x')
w = mx.sym.Variable('w')
y = mx.sym.Convolution(data=x, weight=w, num_filter=num_filter, kernel=kernel, no_bias=True, pad=pad)
exe = y.simple_bind(mx.cpu(), x=shape)

exe.arg_arrays[0][:] = np.random.normal(size=exe.arg_arrays[0].shape)
exe.arg_arrays[1][:] = np.random.normal(size=exe.arg_arrays[1].shape)

exe.forward(is_train=False)
o = exe.outputs[0]
t = o.asnumpy()
```

You can enable verbose logging by setting the `MKLDNN_VERBOSE` environment variable:

```
export MKLDNN_VERBOSE=1
```

Running the code snippet above should then print output like the following, which shows that the `convolution` and `reorder` primitives from MKL-DNN were called. The log messages also include layout information and primitive execution times.

```
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,32x32x256x256,6.47681
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,32x32x3x3,0.0429688
mkldnn_verbose,exec,convolution,jit:avx512_common,forward_inference,fsrc:nChw16c fwei:OIhw16i16o fbia:undef fdst:nChw16c,alg:convolution_direct,mb32_g1ic32oc32_ih256oh256kh3sh1dh0ph1_iw256ow256kw3sw1dw0pw1,9.98193
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_oihw out:f32_OIhw16i16o,num:1,32x32x3x3,0.0510254
mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nChw16c out:f32_nchw,num:1,32x32x256x256,20.4819
```
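Each verbose line is a comma-separated record, so a short script can summarize the primitives and their execution times (a sketch; the field positions are inferred from the sample output above and are not an official format specification):

```python
def parse_mkldnn_verbose(line):
    """Split an mkldnn_verbose line into primitive name, implementation, and time.

    Based on the sample output above: field 2 is the primitive,
    field 3 the JIT implementation, and the last field the time in ms.
    """
    fields = line.strip().split(",")
    return {
        "primitive": fields[2],
        "impl": fields[3],
        "time_ms": float(fields[-1]),
    }

log = [
    "mkldnn_verbose,exec,reorder,jit:uni,undef,in:f32_nchw out:f32_nChw16c,num:1,32x32x256x256,6.47681",
    "mkldnn_verbose,exec,convolution,jit:avx512_common,forward_inference,fsrc:nChw16c fwei:OIhw16i16o fbia:undef fdst:nChw16c,alg:convolution_direct,mb32_g1ic32oc32_ih256oh256kh3sh1dh0ph1_iw256ow256kw3sw1dw0pw1,9.98193",
]
records = [parse_mkldnn_verbose(l) for l in log]
total = sum(r["time_ms"] for r in records)  # total primitive time in ms
```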

<h2 id="5">Enable MKL BLAS</h2>

To make it convenient for customers, Intel introduced a new license called the [Intel® Simplified license](https://software.intel.com/en-us/license/intel-simplified-software-license) that allows redistribution not only of dynamic libraries but also of headers, examples, and static libraries.

Installing and enabling the full MKL installation enables MKL support for all operators under the linalg namespace.

1. Download and install the latest full MKL version following the instructions on the [Intel website](https://software.intel.com/en-us/mkl).
2. Run `make -j $(nproc) USE_BLAS=mkl`
3. Navigate into the python directory
4. Run `sudo python setup.py install`

### Verify whether MKL works

After MXNet is installed, you can verify that MKL BLAS works with a single dot layer:

```
import mxnet as mx
import numpy as np

shape_x = (1, 10, 8)
shape_w = (1, 12, 8)

x_npy = np.random.normal(0, 1, shape_x)
w_npy = np.random.normal(0, 1, shape_w)

x = mx.sym.Variable('x')
w = mx.sym.Variable('w')
y = mx.sym.batch_dot(x, w, transpose_b=True)
exe = y.simple_bind(mx.cpu(), x=x_npy.shape, w=w_npy.shape)

exe.forward(is_train=False)
o = exe.outputs[0]
t = o.asnumpy()
```
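As a sanity check on what `batch_dot` with `transpose_b=True` computes, here is an equivalent pure-NumPy reference (an illustrative sketch, not part of MXNet): for each batch `i`, the output is `x[i] @ w[i].T`, so the shapes above yield a `(1, 10, 12)` result.

```python
import numpy as np

shape_x = (1, 10, 8)
shape_w = (1, 12, 8)

x_npy = np.random.normal(0, 1, shape_x)
w_npy = np.random.normal(0, 1, shape_w)

# batch_dot(x, w, transpose_b=True): per-batch matrix product x[i] @ w[i].T
ref = np.einsum('bik,bjk->bij', x_npy, w_npy)
print(ref.shape)  # (1, 10, 12)
```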

You can enable verbose logging by setting the `MKL_VERBOSE` environment variable:

```
export MKL_VERBOSE=1
```

Running the code snippet above should then print output like the following, which shows that the `SGEMM` routine from MKL was called. The log message also includes the call arguments and execution time.

```
Numpy + Intel(R) MKL: THREADING LAYER: (null)
Numpy + Intel(R) MKL: setting Intel(R) MKL to use INTEL OpenMP runtime
Numpy + Intel(R) MKL: preloading libiomp5.so runtime
MKL_VERBOSE Intel(R) MKL 2018.0 Update 1 Product build 20171007 for Intel(R) 64 architecture Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) enabled processors, Lnx 2.40GHz lp64 intel_thread NMICDev:0
MKL_VERBOSE SGEMM(T,N,12,10,8,0x7f7f927b1378,0x1bc2140,8,0x1ba8040,8,0x7f7f927b1380,0x7f7f7400a280,12) 8.93ms CNR:OFF Dyn:1 FastMM:1 TID:0 NThr:40 WDiv:HOST:+0.000
```

<h2 id="6">Next Steps and Support</h2>

- For questions or support specific to MKL, visit the [Intel MKL](https://software.intel.com/en-us/mkl) site.
- For questions or support specific to MKL-DNN, visit the [Intel MKLDNN](https://github.com/intel/mkl-dnn) repository.
- If you find bugs, please open an issue on GitHub for [MXNet with MKL](https://github.com/apache/incubator-mxnet/labels/MKL) or [MXNet with MKLDNN](https://github.com/apache/incubator-mxnet/labels/MKLDNN).

MKL_README.md

Lines changed: 0 additions & 77 deletions
This file was deleted.

README.md

Lines changed: 1 addition & 1 deletion
```
@@ -38,7 +38,7 @@ What's New
 * [Version 0.8.0 Release](https://github.com/dmlc/mxnet/releases/tag/v0.8.0)
 * [Updated Image Classification with new Pre-trained Models](./example/image-classification)
 * [Python Notebooks for How to Use MXNet](https://github.com/dmlc/mxnet-notebooks)
-* [MKLDNN for Faster CPU Performance](./MKL_README.md)
+* [MKLDNN for Faster CPU Performance](./MKLDNN_README.md)
 * [MXNet Memory Monger, Training Deeper Nets with Sublinear Memory Cost](https://github.com/dmlc/mxnet-memonger)
 * [Tutorial for NVidia GTC 2016](https://github.com/dmlc/mxnet-gtc-tutorial)
 * [Embedding Torch layers and functions in MXNet](https://mxnet.incubator.apache.org/faq/torch.html)
```

ci/docker/install/ubuntu_python.sh

Lines changed: 2 additions & 2 deletions
```
@@ -29,5 +29,5 @@ wget -nv https://bootstrap.pypa.io/get-pip.py
 python3 get-pip.py
 python2 get-pip.py

-pip2 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1
-pip3 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1
+pip2 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 boto3
+pip3 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 boto3
```
