Placeholder for anything to do with Docker (https://docker.com)
You can discuss all things related to this page on the forums here
There is a separate page that addresses the design of a Docker contrib here
There is also a page to discuss on how to create a Docker image of SME here
- 1 About
- 2 Considerations
- 3 Installation
- 4 Using a Docker image
- 5 Challenges
- 6 Building your own images
- 7 Setting up a (Private) Docker repository
- 8 Docker Compose
- 9 Shipyard web GUI
- 10 Related articles of interest
- 11 Things to do
- 12 Issues
- 13 Koozai SME v10
Docker is an open-source project that automates the deployment of applications inside software containers, providing an additional layer of abstraction and automation of operating-system-level virtualization on Linux. Docker uses resource-isolation features of the Linux kernel, such as cgroups and kernel namespaces, to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting virtual machines.
Why Docker on SME Server?
Docker containers hold one or more applications (and all their dependencies) and can be started and stopped at will. When active, containers use Linux kernel namespaces and operate in isolation from the rest of your server, except for storage/mount points and networking, depending on the configuration of the container. Some applications require special PHP versions or other modifications to your server settings that are not desirable and may affect yum updates and upgrades. Docker containers are a way to pack such an application with all its dependencies and run it in isolation. You can have multiple containers running, depending on your server's hardware capacity.
- ownCloud running in a container with a higher version of PHP than SME Server provides
- A PostgreSQL application running in a container without having to install PostgreSQL on SME Server
- Service on demand: you can start/stop a container (even scripted) when you need the service within the container
- Move containers from one SME Server to another (backup or production) without installing the application itself
- Time-based services, e.g. cron jobs: only have an application running when you need it
- Keep SME Server's stock stability, security and flexibility, yet run exotic applications
- Storage of image library (local/NAS)
- Storage of Docker application data (local/NAS)
- Networking e.g. bridged with host, new bridge with host or port mapping
- Stand-alone all-in-one Docker containers or linked containers
- Only use TRUSTED repos with images. Who built the image, and what's in it?
- A naming convention for images that identifies the source (person or repo), SME version, application and version, e.g.:
owncloud-7.0.1-smeserver-9.0-john
wordpress-3.9.1-smeserver-8.1-mary
ehour-1.4.1-smeserver-9.0-richard
sharedfolders-2.1.1-smeserver-9.0-fws
frontaccounting-3.2.1-smeserver-8.1-contribsorg
Why include the SME Server version in the naming convention if everything is inside the container? It could well be that the application inside the container uses some SME Server specifics, such as the db, templates or Perl interaction. In that case we need to be sure we know for which SME Server version the image was built.
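As a sketch, the convention above could be applied when tagging a local image. The variable names below are just one way to assemble the string; the `docker tag` step is commented out because it requires a running Docker daemon and an existing local image ID:

```shell
# Build an image name following the convention:
# <app>-<appversion>-smeserver-<smeversion>-<author>
APP=owncloud
APPVER=7.0.1
SMEVER=9.0
AUTHOR=john
NAME="${APP}-${APPVER}-smeserver-${SMEVER}-${AUTHOR}"
echo "$NAME"    # owncloud-7.0.1-smeserver-9.0-john

# Then tag a local image with it (hypothetical image ID; requires docker):
# docker tag 55db4355a2de "$NAME"
```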
- Verification (checksum) of available images
- Setting up trusted Docker repos
- Disable Docker repos that are enabled by default at installation, and provide a command that enables them, à la yum
There is a contrib that will set up a lot of this for you in the Reetp Repo:
Add the reetpTest repo:
yum --enablerepo=reetpTest,epel install smeserver-docker
Most of the settings in the Manual Installation below are replicated in the contrib with templates.
These can be pulled using docker itself as per Manual Installation below. Note that some require a higher version of docker. Regrettably I can't change that!
Alternatively you can create a docker-compose.yml file directly, or via templates, in:
docker-compose will automatically run this at boot.
Create your compose file and then run this from the configs directory:
docker-compose up -d
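For illustration, a minimal docker-compose.yml in the v1 format understood by compose 1.5.x might look like this. The service name, image and port mapping are hypothetical examples, not part of the contrib; the `docker-compose up` step is shown commented because it requires a running Docker daemon:

```shell
# Write a minimal (hypothetical) compose file; compose 1.5.x uses the
# original v1 format, with service names at the top level.
cat > docker-compose.yml <<'EOF'
web:
  image: centos:centos6
  command: /bin/sleep 3600
  ports:
    - "8088:80"
EOF

# Then, from the configs directory (requires docker-compose and docker):
# docker-compose up -d
```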
Docker attempts to guess what network to use and sets a bridged interface for it.
Access to the container.
This allows access to any local services, and any ports in the container will appear locally
This maps container port 80 to host port 8088:
# host:container
ports:
  - "8088:80"
So if you ran an Apache service in the container on port 80, you could connect to it from the host using http://localhost:8088
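A sketch of the equivalent mapping with plain `docker run`; "my-apache-image" is a hypothetical image that serves HTTP on port 80, and the commands require a running Docker daemon:

```shell
# -p maps host:container, so host port 8088 forwards to container port 80.
docker run -d -p 8088:80 --name web my-apache-image

# From the host:
curl http://localhost:8088/
```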
Using --net="host" means it is easier to connect to the container using the local IP address. Simple port forwarding/opening will suffice.
However, it exposes all ports on the container locally, and there may also be conflicts with local ports.
Using a port mapping is preferred, but your SME Server will then block container access to local services such as DNS.
The answer is probably to statically set the Docker network and then add that network to 'Local networks'. You can then expose ports via the docker config entry, e.g.:
docker=service
    status=enabled
    TCPPort=8088
    UDPPort=1234
I am currently working on this, but the LocalNetworking approach doesn't work. It probably needs manipulation of the firewall with templates.
Login to container
If permitted, most containers can be logged into using this:
docker exec -t -i -u root <container_name> /bin/bash
Note that most of the following is now in the contrib. See above.
Docker requires some RPMs that are not available in the default upstream repos, so we need to enable the epel repo first. See epel.
Then we can install Docker and its dependencies:
yum install docker-io --enablerepo=epel
Make the Docker service start at boot time:
ln -s /etc/rc.d/init.d/e-smith-service /etc/rc7.d/S95docker
chkconfig docker on
config set docker service
config setprop docker status enabled
Docker comes with a configuration file located at /etc/sysconfig/docker.
In this file you can set default parameters that apply to all containers run by Docker. By default it holds no arguments. All arguments can also be set manually when starting a container, in which case each individual container can have its own specific parameters. To see a list of all available arguments that can be used in the Docker configuration file, enter:
SME Server specifics
By default, Docker will store all images, containers and other data in /var/lib/docker.
For SME Server this is not ideal, because we would like to incorporate all Docker data into the pre-defined backup procedures that come with SME Server. The preferred location for Docker data is /home/e-smith/files/docker.
We want this to be the default location for all Docker data on SME Server, so we add the '-g' argument and the desired path to the storage location to the docker configuration file like this:
# /etc/sysconfig/docker
#
# Other arguments to pass to the docker daemon process
# These will be parsed by the sysv initscript and appended
# to the arguments list passed to docker -d
other_args="-g /home/e-smith/files/docker -H unix:///var/run/docker.sock"
Since the Docker service always checks this configuration file upon (re)start, it will automatically pick up the arguments you have provided and act accordingly. This also implies that you can have multiple (though not simultaneous) storage locations if you omit the configuration file and pass arguments manually on the command line.
The second argument, '-H unix:///var/run/docker.sock', tells Docker which socket to bind to.
It is important to adjust the config file before you start using Docker; otherwise it will create its default storage location in /var/lib/docker.
You can still change the storage location at a later stage by copying all data to the new location you've defined with the -g argument.
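A sketch of such a move, using throwaway directories so the commands can be tried safely. On a real server the source would be /var/lib/docker, the destination the new -g location, and you would stop the docker service before copying:

```shell
# Scratch directories standing in for /var/lib/docker and the new -g path.
OLD=/tmp/docker-old-data
NEW=/tmp/docker-new-data
mkdir -p "$OLD/containers"
touch "$OLD/containers/example-container"

# service docker stop          # on a real server: stop docker before copying
mkdir -p "$NEW"
cp -a "$OLD/." "$NEW/"         # preserve ownership, permissions and links
# service docker start         # then start docker again with -g pointing here

ls "$NEW/containers"
```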
Once the above changes have been made, the Docker service can be started, and Docker will create its new storage layout in /home/e-smith/files/docker.
service docker start
You can check whether the docker daemon is running:
service docker status
and if it created the storage layout correctly:
ls -l /home/e-smith/files/docker/
Using a Docker image
By default, there are pre-built images available from the official Docker Hub. In our examples we will use the pre-built centos6 image.
To get a list of all available Centos images you can use:
docker search centos
You will be flooded with available images from the Docker Hub, because anyone can create a free account on Docker Hub with a repository of their own. We limit our testing to the official CentOS repo. With all the other images you are on your own, and usage is at your own risk.
Downloading a docker image
To download the centos6 image to your local server, issue the following command as root:
docker pull centos:centos6
where 'centos' is the main repository and 'centos6' the specific version. If you issue only 'docker pull centos', all CentOS versions will be downloaded, so be specific.
Once the image has been downloaded, you can check your local images by issuing:
docker images
The listing includes the image ID and name; these are important for running additional commands once the container is running.
Running a docker container
Now that we have downloaded the centos6 image, it's time to give it a spin. To start the centos6 container we can issue the following command:
docker run -t -i --net="host" centos:centos6 bash
This tells Docker to run the centos6 container interactively from the local centos repo, use the host network interface and start bash. After a few seconds you will be presented with the bash prompt inside the centos6 container.
To check that we are really inside the centos6 container, we can display the release version:
cat /etc/redhat-release
which will result in:
CentOS release 6.5 (Final)
From here you can use the normal commands like yum etc.
To exit the container, give the normal 'exit' command, which stops the centos6 container and brings you back to the prompt of your local server.
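Exiting this way leaves a stopped container behind. A sketch of finding and re-entering it (`<container_id>` is a placeholder for an ID from the listing; requires a running Docker daemon):

```shell
docker ps -a                  # list all containers, including stopped ones
docker start <container_id>   # start the stopped container again
docker attach <container_id>  # reattach to its console if it runs interactively
```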
To run a container in the background, issue the docker run command with the -d flag instead of the -i flag.
Copy docker images
Docker images are stored on your local server. If you want to run an image on another machine, you first have to take the image out of your local image repository and save it in a transferable format; for this we save the image as a .tar file. Listing all available images on your local server with 'docker images' will result in (example):
[root@sme9 ~]# docker images
REPOSITORY                 TAG   IMAGE ID       CREATED          VIRTUAL SIZE
sme9                       6.5   55db4355a2de   46 minutes ago   854.7 MB
leszekk/centos_minimalcd   6.5   bc56fa8f1204   8 months ago     452.6 MB
To create a copy of our sme9 image and save it as 'copyofsme9', enter the following command:
docker save sme9:6.5 > /tmp/copyofsme9.tar
which results in a copyofsme9.tar file in the /tmp directory of your local server. You can now copy or move this file to another server, or simply archive it for later use.
To use the copyofsme9.tar file on another server with Docker, load it into the repository of the new server:
docker load -i /downloads/copyofsme9.tar
After Docker has loaded the file, you can check its availability by executing 'docker images', and you can then use it just like any other image on your new server. You can use the save and load commands to clean up your local repository and share copies of your images.
Some thoughts to share on Docker networking:
- Network port mapping
- Network Configuration
Note: could we use FWS webapps to create an Apache subdomain where the Docker web application can be reached, 'masquerading' an unusual HTTP port? e.g.
owncloud.mydomain.com vs mydomain.com:8000
would require ibay checking
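One hedged way this might be sketched, assuming the standard SME custom-template mechanism and a container serving HTTP on a hypothetical port 8000. The fragment name, path prefix and port are illustrative only, not a tested contrib:

```shell
# Create a custom httpd.conf template fragment that proxies a URL path
# to the container's (hypothetical) port 8000.
mkdir -p /etc/e-smith/templates-custom/etc/httpd/conf/httpd.conf
cat > /etc/e-smith/templates-custom/etc/httpd/conf/httpd.conf/98owncloudproxy <<'EOF'
ProxyPass /owncloud http://127.0.0.1:8000/
ProxyPassReverse /owncloud http://127.0.0.1:8000/
EOF

# Regenerate the config and restart apache for the change to take effect:
# expand-template /etc/httpd/conf/httpd.conf
# service httpd-e-smith restart
```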
Docker Name resolution
Normally you would add the DNS servers directly in the file /etc/sysconfig/docker; if you don't, your Docker container can ping an IP address but will never resolve domain names. These are the OpenDNS servers, but you can change them.
# cat /etc/sysconfig/docker
# /etc/sysconfig/docker
#
# Other arguments to pass to the docker daemon process
# These will be parsed by the sysv initscript and appended
# to the arguments list passed to docker -d
other_args="--dns 188.8.131.52 --dns 184.108.40.206"
or you can add them directly on the command line:
docker run -i -t --dns 220.127.116.11 --dns 18.104.22.168 sme9_real:6.5 /bin/bash
- How to interact with localhost PAM or LDAP from within a container?
I think that you can access localhost services by adding:
--net="host" to docker run
This means any services on the Docker container are equally valid 'localhost' services accessible from the server itself, so you need to ensure the server is properly firewalled. See Issues below.
- Many more...
Building your own images
Manually, or with https://github.com/docker/fig (fig has since been superseded by Docker Compose)
'Proposal test image:'
An application that requires Java, PHP, Apache, MySQL and LDAP. The localhost MySQL and localhost LDAP should be used by the application. The application should be publicly available, either on a subdomain or on a specific port of the FQDN. The application should only be available between 08:00 and 19:00. All application data should be covered by the default SME Server backup mechanisms, including the image itself.
- Building the image based on centos6
- Configure networking, bridges and ports
- Start/restart and stop syntax of the application
- Configure cron
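The first of the steps above can be sketched with a minimal Dockerfile based on the centos6 image pulled earlier. The package list and image tag are illustrative only; the build step is commented because it requires a running Docker daemon:

```shell
# Write a minimal Dockerfile based on the centos6 image.
cat > Dockerfile <<'EOF'
FROM centos:centos6
# Install the application stack (illustrative package list)
RUN yum -y install httpd php && yum clean all
EXPOSE 80
CMD ["/bin/bash"]
EOF

# Then build and tag it following the naming convention (requires docker):
# docker build -t myapp-1.0-smeserver-9.0-yourname .
```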
Setting up a (Private) Docker repository
docker=service
    access=public
    status=enabled
The binary is included in the smeserver-docker contrib.
The latest version that you can use with the installed version of docker (currently 1.7.1) is docker-compose version 1.5.2 https://github.com/docker/compose/releases/tag/1.5.2
curl -L https://github.com/docker/compose/releases/download/1.5.2/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
Shipyard web GUI
There is a separate page on how to install Shipyard, the Docker web GUI here
Related articles of interest
Things to do
- Get the shipyard GUI going
- A LOT more ;-)
You will find that if you use 'host' networking, Docker sets /sys read-only and you will get an error with raid_check, as per this bug
If you don't use host networking, you use the internal IP address set by Docker, but this address is unknown to SME as a local network, and SME will block any queries emanating from the container. I am looking at this with the contrib.
Koozai SME v10
Some basic scratchpad notes as I go
Don't use the extras repo to install
db yum_repositories set docker-ce-stable repository \
    BaseURL 'https://download.docker.com/linux/centos/7/$basearch/stable' \
    EnableGroups no \
    GPGCheck yes \
    GPGKey https://download.docker.com/linux/centos/gpg \
    Name 'Docker Stable' \
    Visible yes \
    status disabled
yum --enablerepo=extras,docker-ce-stable install docker-ce docker-ce-cli
Files to modify?
systemd unit file
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -g /home/e-smith/files/docker
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target