[Feature] Add docker files #67
Merged
18 commits, all by AllentDan:

- c7ad68d add gpu and cpu dockerfile
- 206aa66 fix lint
- 88b4d28 fix cpu docker and remove redundant
- 17720c3 use pip instead
- 924d902 add build arg and readme
- 5d75ea3 fix grammar
- 21f76bc update readme
- a7e8705 add chinese doc for dockerfile and add docker build to build.md
- 3d705f1 grammar
- aef6fb5 refine dockerfiles
- f79dc7b add FAQs
- 487030e update Dpplcv_DIR for SDK building
- a3e4458 remove mmcls
- 51afea0 add sdk demos
- 010196a Merge branch 'docker-new' into docker
- 60f15eb fix typo and lint
- 8144262 update FAQs
- 18e2899 resolve conflicts
Filter by extension
Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
There are no files selected for viewing
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
docker/CPU/Dockerfile (new file, +105 lines):

```dockerfile
FROM openvino/ubuntu18_dev:2021.4.2
ARG PYTHON_VERSION=3.7
ARG TORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
ARG ONNXRUNTIME_VERSION=1.8.1
ARG MMCV_VERSION=1.4.0
ARG CMAKE_VERSION=3.20.0
USER root
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    libopencv-dev libspdlog-dev \
    gnupg \
    libssl-dev \
    libprotobuf-dev protobuf-compiler \
    build-essential \
    libjpeg-dev \
    libpng-dev \
    ccache \
    cmake \
    gcc \
    g++ \
    git \
    vim \
    wget \
    curl \
    && rm -rf /var/lib/apt/lists/*

RUN curl -fsSL -v -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
    chmod +x ~/miniconda.sh && \
    ~/miniconda.sh -b -p /opt/conda && \
    rm ~/miniconda.sh && \
    /opt/conda/bin/conda install -y python=${PYTHON_VERSION} conda-build pyyaml numpy ipython cython typing typing_extensions mkl mkl-include ninja && \
    /opt/conda/bin/conda clean -ya

### pytorch
RUN /opt/conda/bin/pip install torch==${TORCH_VERSION}+cpu torchvision==${TORCHVISION_VERSION}+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
ENV PATH /opt/conda/bin:$PATH

### install mmcv-full
RUN /opt/conda/bin/pip install mmcv-full==${MMCV_VERSION} -f https://download.openmmlab.com/mmcv/dist/cpu/torch${TORCH_VERSION}/index.html

WORKDIR /root/workspace

### get onnxruntime
RUN wget https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRUNTIME_VERSION}/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
    && tar -zxvf onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz

ENV ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}

### update cmake to ${CMAKE_VERSION}
RUN wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz &&\
    tar -zxvf cmake-${CMAKE_VERSION}.tar.gz &&\
    cd cmake-${CMAKE_VERSION} &&\
    ./bootstrap &&\
    make &&\
    make install

### install onnxruntime and openvino
RUN /opt/conda/bin/pip install onnxruntime==${ONNXRUNTIME_VERSION} openvino-dev

### build ncnn
RUN git clone https://github.com/Tencent/ncnn.git &&\
    cd ncnn &&\
    export NCNN_DIR=$(pwd) &&\
    git submodule update --init &&\
    mkdir -p build && cd build &&\
    cmake -DNCNN_VULKAN=OFF -DNCNN_SYSTEM_GLSLANG=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_PYTHON=ON -DNCNN_BUILD_TOOLS=ON -DNCNN_BUILD_BENCHMARK=ON -DNCNN_BUILD_TESTS=ON .. &&\
    make install &&\
    cd /root/workspace/ncnn/python &&\
    pip install -e .

### install mmdeploy
WORKDIR /root/workspace
ARG VERSION
RUN git clone https://github.com/open-mmlab/mmdeploy.git &&\
    cd mmdeploy &&\
    if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on master" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
    git submodule update --init --recursive &&\
    rm -rf build &&\
    mkdir build &&\
    cd build &&\
    cmake -DMMDEPLOY_TARGET_BACKENDS=ncnn -Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn .. &&\
    make -j$(nproc) &&\
    cmake -DMMDEPLOY_TARGET_BACKENDS=ort .. &&\
    make -j$(nproc) &&\
    cd .. &&\
    pip install -e .

### build SDK
ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:/opt/intel/openvino/deployment_tools/ngraph/lib:/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64:${LD_LIBRARY_PATH}"
RUN cd mmdeploy && rm -rf build/CM* && mkdir -p build && cd build && cmake .. \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DCMAKE_CXX_COMPILER=g++-7 \
    -DONNXRUNTIME_DIR=${ONNXRUNTIME_DIR} \
    -Dncnn_DIR=/root/workspace/ncnn/build/install/lib/cmake/ncnn \
    -DInferenceEngine_DIR=/opt/intel/openvino/deployment_tools/inference_engine/share \
    -DMMDEPLOY_TARGET_DEVICES=cpu \
    -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
    -DMMDEPLOY_TARGET_BACKENDS="ort;ncnn;openvino" \
    -DMMDEPLOY_CODEBASES=all &&\
    cmake --build . -- -j$(nproc) && cmake --install . &&\
    cd install/example && mkdir -p build && cd build &&\
    cmake -DMMDeploy_DIR=/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy .. &&\
    cmake --build . && export SPDLOG_LEVEL=warn &&\
    if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for CPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for CPU devices successfully!" ; fi
```
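The clone step above branches on the optional `VERSION` build arg: empty means build on master, otherwise check out the matching release tag. The same logic can be sketched as a standalone helper (the function name is hypothetical, and the variable is quoted here for safety, unlike the Dockerfile's `[ -z ${VERSION} ]`):

```shell
# Resolve which git ref a build should use: "master" when no version
# is passed, otherwise the release tag ref "tags/v<version>".
resolve_ref() {
    version="$1"
    if [ -z "${version}" ]; then
        echo "master"
    else
        echo "tags/v${version}"
    fi
}

resolve_ref ""       # prints: master
resolve_ref "0.1.0"  # prints: tags/v0.1.0
```

The unquoted `[ -z ${VERSION} ]` in the Dockerfile happens to work because an empty expansion collapses to the one-argument form `[ -z ]`, which is true; quoting makes the intent explicit.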
docker/GPU/Dockerfile (new file, +90 lines):

```dockerfile
FROM nvcr.io/nvidia/tensorrt:21.04-py3

ARG CUDA=10.2
ARG PYTHON_VERSION=3.8
ARG TORCH_VERSION=1.8.0
ARG TORCHVISION_VERSION=0.9.0
ARG ONNXRUNTIME_VERSION=1.8.1
ARG MMCV_VERSION=1.4.0
ARG CMAKE_VERSION=3.20.0
ENV FORCE_CUDA="1"

ENV DEBIAN_FRONTEND=noninteractive

### update apt and install libs
RUN apt-get update &&\
    apt-get install -y vim libsm6 libxext6 libxrender-dev libgl1-mesa-glx git wget libssl-dev libopencv-dev libspdlog-dev --no-install-recommends &&\
    rm -rf /var/lib/apt/lists/*

RUN curl -fsSL -v -o ~/miniconda.sh -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
    chmod +x ~/miniconda.sh && \
    ~/miniconda.sh -b -p /opt/conda && \
    rm ~/miniconda.sh && \
    /opt/conda/bin/conda install -y python=${PYTHON_VERSION} conda-build pyyaml numpy ipython cython typing typing_extensions mkl mkl-include ninja && \
    /opt/conda/bin/conda clean -ya

### pytorch
RUN /opt/conda/bin/conda install pytorch==${TORCH_VERSION} torchvision==${TORCHVISION_VERSION} cudatoolkit=${CUDA} -c pytorch
ENV PATH /opt/conda/bin:$PATH

### install mmcv-full
RUN /opt/conda/bin/pip install mmcv-full==${MMCV_VERSION} -f https://download.openmmlab.com/mmcv/dist/cu${CUDA//./}/torch${TORCH_VERSION}/index.html

WORKDIR /root/workspace
### get onnxruntime
RUN wget https://github.com/microsoft/onnxruntime/releases/download/v${ONNXRUNTIME_VERSION}/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz \
    && tar -zxvf onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}.tgz &&\
    pip install onnxruntime-gpu==${ONNXRUNTIME_VERSION}

### copy TensorRT python packages from pip dist-packages to conda site-packages
RUN cp -r /usr/local/lib/python${PYTHON_VERSION}/dist-packages/tensorrt* /opt/conda/lib/python${PYTHON_VERSION}/site-packages/

### update cmake
RUN wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}.tar.gz &&\
    tar -zxvf cmake-${CMAKE_VERSION}.tar.gz &&\
    cd cmake-${CMAKE_VERSION} &&\
    ./bootstrap &&\
    make &&\
    make install

### install mmdeploy
ENV ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-${ONNXRUNTIME_VERSION}
ENV TENSORRT_DIR=/workspace/tensorrt
ARG VERSION
RUN git clone https://github.com/open-mmlab/mmdeploy &&\
    cd mmdeploy &&\
    if [ -z ${VERSION} ] ; then echo "No MMDeploy version passed in, building on master" ; else git checkout tags/v${VERSION} -b tag_v${VERSION} ; fi &&\
    git submodule update --init --recursive &&\
    rm -rf build &&\
    mkdir build &&\
    cd build &&\
    cmake -DMMDEPLOY_TARGET_BACKENDS=ort .. &&\
    make -j$(nproc) &&\
    cmake -DMMDEPLOY_TARGET_BACKENDS=trt .. &&\
    make -j$(nproc) &&\
    cd .. &&\
    pip install -e .

### build SDK
RUN git clone https://github.com/openppl-public/ppl.cv.git &&\
    cd ppl.cv &&\
    ./build.sh cuda
RUN cd /root/workspace/mmdeploy &&\
    rm -rf build/CM* &&\
    mkdir -p build && cd build &&\
    cmake .. \
      -DMMDEPLOY_BUILD_SDK=ON \
      -DCMAKE_CXX_COMPILER=g++ \
      -Dpplcv_DIR=/root/workspace/ppl.cv/cuda-build/install/lib/cmake/ppl \
      -DTENSORRT_DIR=${TENSORRT_DIR} \
      -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
      -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
      -DMMDEPLOY_TARGET_BACKENDS="trt" \
      -DMMDEPLOY_CODEBASES=all &&\
    cmake --build . -- -j$(nproc) && cmake --install . &&\
    cd install/example && mkdir -p build && cd build &&\
    cmake -DMMDeploy_DIR=/root/workspace/mmdeploy/build/install/lib/cmake/MMDeploy .. &&\
    cmake --build . && export SPDLOG_LEVEL=warn &&\
    if [ -z ${VERSION} ] ; then echo "Built MMDeploy master for GPU devices successfully!" ; else echo "Built MMDeploy version v${VERSION} for GPU devices successfully!" ; fi

ENV LD_LIBRARY_PATH="/root/workspace/mmdeploy/build/lib:${LD_LIBRARY_PATH}"
```
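The mmcv-full install step composes its wheel index URL from the CUDA and torch versions, turning `10.2` into `cu102` via bash's `${CUDA//./}` substitution. A portable sketch of the same derivation (using `tr -d .` as a POSIX stand-in for the bash-only substitution; the URL is reproduced only to illustrate how it is assembled):

```shell
# Derive the mmcv-full wheel index URL the way the GPU Dockerfile does.
CUDA=10.2
TORCH_VERSION=1.8.0
CU_TAG="cu$(echo "${CUDA}" | tr -d .)"   # 10.2 -> cu102
URL="https://download.openmmlab.com/mmcv/dist/${CU_TAG}/torch${TORCH_VERSION}/index.html"
echo "${URL}"
```

Because `${CUDA//./}` is a bash extension, this line depends on the base image's `/bin/sh` accepting bash-style substitutions; the portable form above avoids that assumption.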
docker/README.md (new file, +45 lines):

## Docker usage

We provide two Dockerfiles, one for CPU and one for GPU. For CPU users, MMDeploy is installed with the ONNXRuntime, ncnn and OpenVINO backends. For GPU users, MMDeploy is installed with the TensorRT backend. In addition, users can choose which MMDeploy version to install when building the Docker image.

### Build docker image

For CPU users, build the Docker image with the latest MMDeploy through:
```
cd mmdeploy
docker build docker/CPU/ -t mmdeploy:master-cpu
```
For GPU users, build the Docker image with the latest MMDeploy through:
```
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:master-gpu
```

To install a specific MMDeploy version, append `--build-arg VERSION=${VERSION}` to the build command. Taking GPU as an example:
```
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:0.1.0 --build-arg VERSION=0.1.0
```

### Run docker container

After the Docker image is built successfully, use `docker run` to launch the container. Taking the GPU image as an example:
```
docker run --gpus all -it -p 8080:8081 mmdeploy:master-gpu
```

### FAQs

1. CUDA error: the provided PTX was compiled with an unsupported toolchain:

   As described [here](https://forums.developer.nvidia.com/t/cuda-error-the-provided-ptx-was-compiled-with-an-unsupported-toolchain/185754), update the GPU driver to the latest one available for your GPU.

2. docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
   ```
   # Add the package repositories
   distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
   curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
   curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

   sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
   sudo systemctl restart docker
   ```
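The first line of the FAQ snippet above derives a distribution string such as `ubuntu18.04` by sourcing `/etc/os-release` in a subshell and concatenating its `ID` and `VERSION_ID` fields. A self-contained sketch, simulated against a temporary file instead of the real `/etc/os-release`:

```shell
# Reproduce the FAQ's distribution derivation against a fake os-release.
tmp=$(mktemp)
cat > "${tmp}" <<'EOF'
ID=ubuntu
VERSION_ID="18.04"
EOF
# Source the file in a subshell and join the two fields;
# the quotes around 18.04 are stripped by shell assignment.
distribution=$(. "${tmp}"; echo "$ID$VERSION_ID")
echo "${distribution}"   # prints: ubuntu18.04
rm -f "${tmp}"
```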
Chinese Docker usage doc (new file, +46 lines, translated from Chinese):

## Docker usage

We provide two Dockerfiles for CPU and GPU respectively. For CPU users, we install MMDeploy against the ONNXRuntime, ncnn and OpenVINO backends. For GPU users, we install MMDeploy with the TensorRT backend. In addition, users can install different versions of mmdeploy when building the Docker image.

### Build the image

For CPU users, build the Docker image with the latest MMDeploy as follows:
```
cd mmdeploy
docker build docker/CPU/ -t mmdeploy:master-cpu
```
For GPU users, build the Docker image with the latest MMDeploy as follows:
```
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:master-gpu
```

To install a specific version of MMDeploy, append `--build-arg VERSION=${VERSION}` to the build command. Taking GPU as an example:
```
cd mmdeploy
docker build docker/GPU/ -t mmdeploy:0.1.0 --build-arg VERSION=0.1.0
```

### Run the docker container

After the Docker image is built successfully, use `docker run` to launch the service. Taking the GPU image as an example:
```
docker run --gpus all -it -p 8080:8081 mmdeploy:master-gpu
```

### FAQs

1. CUDA error: the provided PTX was compiled with an unsupported toolchain:

   As described [here](https://forums.developer.nvidia.com/t/cuda-error-the-provided-ptx-was-compiled-with-an-unsupported-toolchain/185754), update the GPU driver to the latest version your GPU supports.

2. docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
   ```
   # Add the package repositories
   distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
   curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
   curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

   sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
   sudo systemctl restart docker
   ```