File tree: getstarted/build_and_install, cluster_train_v2/openmpi/docker_cluster
5 files changed: +7 −7 lines
@@ -14,7 +14,7 @@
 
 $ export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
 $ export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
-$ docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddlepaddle:latest-gpu
+$ docker run ${CUDA_SO} ${DEVICES} -it paddlepaddle/paddle:latest-gpu
 
 For more information on installing and using Docker, please refer to the `PaddlePaddle Docker documentation <http://www.paddlepaddle.org/doc_cn/build_and_install/install/docker_install.html >`_ .
 
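The two exports above build the `-v` and `--device` flags by listing the host's NVIDIA driver libraries and device nodes. A minimal sketch of how to inspect what they expand to before running the container (the library and device names in the comments are typical examples, not output taken from this diff):

    # Build the mount flags exactly as the doc does, then print them for inspection.
    export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
    export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')

    echo ${CUDA_SO}   # e.g. -v /usr/lib64/libcuda.so:/usr/lib64/libcuda.so -v /usr/lib64/libnvidia-ml.so:/usr/lib64/libnvidia-ml.so ...
    echo ${DEVICES}   # e.g. --device /dev/nvidia0:/dev/nvidia0 --device /dev/nvidiactl:/dev/nvidiactl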
@@ -114,15 +114,15 @@ PaddlePaddle Book is an interactive Jupyter Notebook made for users and developers
 
 .. code-block:: bash
 
-   nvidia-docker run -it -v $PWD:/work paddledev/paddle:latest-gpu /bin/bash
+   nvidia-docker run -it -v $PWD:/work paddlepaddle/paddle:latest-gpu /bin/bash
 
 **Note: If nvidia-docker is not installed, you can try the following method to mount the CUDA libraries and Linux devices into the Docker container:**
 
 .. code-block:: bash
 
    export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
    export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
-   docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:latest-gpu
+   docker run ${CUDA_SO} ${DEVICES} -it paddlepaddle/paddle:latest-gpu
 
 **About AVX:**
 
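A quick way to confirm that either command above actually exposes the GPUs is to run `nvidia-smi` inside the container. A hedged sketch, assuming the renamed image from this diff and that the driver utilities are visible in the container (nvidia-docker normally injects them; with the manual mount you may also need to bind-mount the `nvidia-smi` binary):

    # Should print the same GPU table as on the host if device access works.
    nvidia-docker run --rm paddlepaddle/paddle:latest-gpu nvidia-smi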
@@ -122,15 +122,15 @@ GPU driver installed before move on.
 
 .. code-block:: bash
 
-   nvidia-docker run -it -v $PWD:/work paddledev/paddle:latest-gpu /bin/bash
+   nvidia-docker run -it -v $PWD:/work paddlepaddle/paddle:latest-gpu /bin/bash
 
 **NOTE: If you don't have nvidia-docker installed, try the following method to mount CUDA libs and devices into the container.**
 
 .. code-block:: bash
 
    export CUDA_SO="$(\ls /usr/lib64/libcuda* | xargs -I{} echo '-v {}:{}') $(\ls /usr/lib64/libnvidia* | xargs -I{} echo '-v {}:{}')"
    export DEVICES=$(\ls /dev/nvidia* | xargs -I{} echo '--device {}:{}')
-   docker run ${CUDA_SO} ${DEVICES} -it paddledev/paddle:latest-gpu
+   docker run ${CUDA_SO} ${DEVICES} -it paddlepaddle/paddle:latest-gpu
 
 **About AVX:**
 
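Both language versions of this page end with an AVX note. The usual way to check whether the host CPU supports AVX (and therefore which image or build to pick) is to look at `/proc/cpuinfo`; a small sketch, not part of the diff:

    # Prints "AVX supported" when the CPU advertises the avx flag.
    if grep -q avx /proc/cpuinfo; then
        echo "AVX supported"
    else
        echo "AVX not supported"
    fi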
@@ -1,7 +1,7 @@
 # Build this image: docker build -t mpi .
 #
 
-FROM paddledev/paddle:0.10.0rc3
+FROM paddlepaddle/paddle:0.10.0rc3
 
 ENV DEBIAN_FRONTEND noninteractive
 
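This Dockerfile hunk (presumably the file under cluster_train_v2/openmpi/docker_cluster listed in the file tree) only swaps the base image. Building and entering the resulting image works the same way as before; a sketch using the `mpi` tag from the file's own comment:

    # Build the OpenMPI cluster image on top of the renamed base image,
    # then open an interactive shell in it.
    docker build -t mpi .
    docker run -it --rm mpi /bin/bash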
@@ -5,4 +5,4 @@ docker run --rm \
   -e "WITH_AVX=ON" \
   -e "WITH_DOC=ON" \
   -e "WOBOQ=ON" \
-  ${1:-"paddledev/paddle:dev"}
+  ${1:-"paddlepaddle/paddle:latest-dev"}
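The last hunk changes the script's default image. `${1:-...}` is ordinary Bash default-value expansion: the first positional argument wins, otherwise the quoted default is used. A small standalone demonstration (not part of the diff):

    #!/bin/bash
    # Simulate calling the script with and without an image argument.
    set --                                          # no arguments
    echo "${1:-"paddlepaddle/paddle:latest-dev"}"   # prints the default image

    set -- paddlepaddle/paddle:0.10.0rc3            # first argument provided
    echo "${1:-"paddlepaddle/paddle:latest-dev"}"   # prints paddlepaddle/paddle:0.10.0rc3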