A collection of Dockerfiles with automated builds enabled on the Docker Hub. Not suitable for production environments: these images are under continuous development, so breaking changes may be introduced.
All images are based on Ubuntu Core 16.04 LTS and built with best practices in mind, minimising image size and the number of layers. Dependencies are indicated left to right, e.g. cuda-theano is Theano built on top of CUDA. Explicit dependencies are excluded.
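For example, an image following this naming scheme could be pulled as shown below, where <namespace> is a placeholder for the Docker Hub user or organisation publishing the images:
docker pull <namespace>/cuda-theano `# <namespace> is a placeholder, not part of this repo`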
Starting graphical (X11) applications is possible with the following command:
docker run -it `# Running interactively, but can be replaced with -d for daemons` \
-e DISPLAY `# Pass $DISPLAY` \
-v /tmp/.X11-unix:/tmp/.X11-unix `# Pass X11 socket` \
--ipc=host `# Allows MIT-SHM` \
<image>

General information on running desktop applications with Docker can be found in this blog post. You will probably also need to configure the X server host access controls (xhost) to allow connections. For hardware acceleration on Linux, it is possible to use nvidia-docker (with an image built for NVIDIA Docker), although OpenGL is not fully supported.
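For example, a minimal way to grant all local clients (including containers sharing the X11 socket) access to the X server is the following; this relaxes access control, so adjust to your own security requirements:
xhost +local: `# Allows all local (non-network) connections to the X server`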
On Mac OS X, use XQuartz and allow connections from network clients. Then the following can be used:
docker run -it \
-e DISPLAY=`ifconfig en0 | grep inet | awk '$1=="inet" {print $2}'`:0 `# Use XQuartz network $DISPLAY` \
--ipc=host \
<image>

Most containers run as a foreground process. To daemonise (in Docker terminology, detach) such a container, it is possible to use:
docker run -d <image> sh -c "while true; do sleep 1; done"
It is now possible to access the daemonised container, for example using bash:
docker exec -it <id> bash
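When it is no longer needed, the daemonised container can be stopped and removed:
docker stop <id>
docker rm <id>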
To start containers on the host from within a Docker container, the container requires docker-engine to be installed, with the same API version as the Docker daemon on the host. The Docker socket also needs to be mounted inside the container:
-v /var/run/docker.sock:/var/run/docker.sock
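For example, a sketch of launching such a container (assuming <image> has a compatible docker-engine installed):
docker run -it `# Running interactively` \
-v /var/run/docker.sock:/var/run/docker.sock `# Mount the host's Docker socket` \
<image>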
All images pull the most recent versions of CUDA and cuDNN from NVIDIA's images on the Docker Hub.
These images need to be run on an Ubuntu host OS with NVIDIA Docker installed. The driver requirements can be found on the NVIDIA Docker wiki.
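For example, assuming the nvidia-docker wrapper from NVIDIA Docker 1.x is installed, a CUDA-based image can be started with:
nvidia-docker run -it <image> `# Exposes the host's GPUs and driver to the container`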
These Dockerfiles have been modified from Kaixhin's source; the original repo can be found here.