In this post I go over the initial set of Docker containers I’m using.
Once I had the OS online and Docker installed, I deployed a sample container (Hello World) to learn how it’s done. I soon found myself wanting a UI and looked for something that would serve as one for Docker. Portainer quickly turned up in my search, and since I was still new to the world of Docker, it amused me that Portainer was itself deployed as a container on Docker. Of course.
At this point I began to configure the docker-compose.yml file to handle the Docker container deployments. This was one of the advantages of using docker-compose: instead of typing out a long command with many attributes to start each container, I could define them all in a file and tell Docker to use that file. To make these service definitions easier, I needed to define some environment variables for the user, group, and timezone. I executed ‘id‘ and noted down the user ID for dockeruser and the group ID for the docker group. Then I edited /etc/environment with the following:
PUID=1005
PGID=999
TZ="America/Chicago"
USERDIR="/home/dockeruser"
MYSQL_ROOT_PASSWORD="notmyrealpassword"
Saved the file and rebooted.
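After logging back in, it’s easy to sanity-check that the variables took (they’re read at login, hence the reboot):

cat /etc/environment
echo $PUID

If that echo prints 1005, docker-compose will be able to resolve ${PUID} and friends from the shell environment.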
Then in docker-compose.yml, I defined Portainer. I just followed their guide and defined it as the following in the file:
version: "3.6" services: portainer: image: portainer/portainer container_name: portainer restart: always command: --templates http://portainer/templates.json ports: - "9000:9000" volumes: - /var/run/docker.sock:/var/run/docker.sock - ${USERDIR}/docker/portainer/data:/data - ${USERDIR}/docker/shared:/shared environment: - TZ=${TZ}
Once that was saved, I then ran the following command:
docker-compose -f ~/docker/docker-compose.yml up -d
And voila – Portainer was up and running on port 9000.
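It can be confirmed from the command line too – docker ps lists the running containers, and a filter narrows it down to just the one:

docker ps --filter name=portainer

The STATUS column should read “Up”, along with how long it’s been running.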
After this, it was just a matter of defining the containers in the docker-compose.yml file, and running the docker-compose command afterward to bring them online. I’ll run through the quick specifics for each container, as some of them needed additional configuration.
Watchtower
This container exists solely to watch for updated images for any of the deployed containers, and then refresh those containers to pick up the updates.
  watchtower:
    container_name: watchtower
    restart: always
    image: v2tec/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --schedule "0 0 4 * * *" --cleanup
The only part to configure is defining when it should run. The schedule is a six-field cron expression (the first field is seconds), so here I have it checking daily at 4am.
But what if I didn’t want certain containers to be updated? I’ve seen on the Home Assistant forums that a release can break users’ setups. For those containers, Watchtower supports a label that can be set to false, which makes Watchtower pass over them when updating:
    labels:
      - com.centurylinklabs.watchtower.enable="false"
This will be used in the other container definitions.
ddclient
This is the Dynamic DNS updater I came across when I discovered the Docker Container Store. For ddclient to work, I needed to deploy the container once so it would auto-generate the configuration file, then edit that configuration and redeploy.
  ddclient:
    image: "linuxserver/ddclient"
    container_name: ddclient
    restart: always
    volumes:
      - ${USERDIR}/docker/ddclient:/config
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
I chose ddclient because my domain host, Google Domains, has instructions on how to configure ddclient as a DynDNS updater.
Once it was deployed initially, I saw that the ddclient.conf file was created. I edited the file according to Google Domains’ instructions, saved, and redeployed the container. After refreshing my Google Domains page, I saw the IP was updated to the WAN IP from my router. (This was especially satisfying, since I had been having trouble getting the built-in DynDNS feature on the Google Fiber router itself to work. ¯\_(ツ)_/¯ )
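For reference, the resulting ddclient.conf follows the shape of Google Domains’ published example – something like this, where the login and password are the credentials Google generates for the dynamic DNS hostname (placeholders here, along with the hostname):

protocol=dyndns2
use=web
server=domains.google.com
ssl=yes
login=generated-username
password='generated-password'
home.example.com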
Instead of configuring the server to always set its IP to something static, I’ve configured the router to always assign the server the same IP. This is in the same place where I open any ports I want to expose to the internet.
MariaDB
Next up was the database. MariaDB comes from the original creators of MySQL and improves on its performance. From all of the reading I had been doing on getting this server online, it looked like MariaDB was the way to go. I had previous experience with MySQL, and after diving into MariaDB I had no issues – it was very similar.
  mariadb:
    image: "linuxserver/mariadb"
    container_name: "mariadb"
    hostname: mariadb
    volumes:
      - ${USERDIR}/docker/mariadb:/config
    ports:
      - target: 3306
        published: 3306
        protocol: tcp
        mode: host
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
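With the port published on the host, it’s easy to verify the database is reachable from another machine on the LAN using the stock mysql client (substitute your server’s IP for the placeholder):

mysql -h 192.168.1.50 -P 3306 -u root -p

It prompts for the MYSQL_ROOT_PASSWORD value and drops into the MariaDB shell.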
phpMyAdmin
This is the same package I had used before for administering MySQL. With this deployment I defined a link to the MariaDB database using the container name.
  phpmyadmin:
    hostname: phpmyadmin
    container_name: phpmyadmin
    image: phpmyadmin/phpmyadmin
    restart: always
    links:
      - mariadb:db
    ports:
      - 9100:80
    environment:
      - PMA_HOST=mariadb
      - MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD}
Home Assistant
Finally for Home Assistant, I followed their instructions for deploying the container and configuration. Instead of running it from the command line, I took their example and adapted it to the docker-compose.yml format.
  homeassistant:
    container_name: homeassistant
    restart: always
    image: homeassistant/home-assistant
    volumes:
      - ${USERDIR}/docker/homeassistant:/config
      - /etc/localtime:/etc/localtime:ro
      - ${USERDIR}/docker/shared:/shared
    ports:
      - "8123:8123"
    privileged: true
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
      - TZ=${TZ}
    labels:
      - com.centurylinklabs.watchtower.enable="false"
Note here the added label to instruct Watchtower to pass over this container when running the auto-updates. This means I’ll have to stay on top of new Home Assistant developments, and deploy updates manually.
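When a release does look safe, the manual update is just a pull and re-up of that one service:

docker-compose -f ~/docker/docker-compose.yml pull homeassistant
docker-compose -f ~/docker/docker-compose.yml up -d homeassistant

The pull grabs the new image, and up -d recreates only the homeassistant container against it.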
The configuration for Home Assistant was similar to ddclient – once the container was deployed, the various configuration files were created. Whenever those are updated, I’ll rerun the docker-compose command (or restart the container) so Home Assistant reads the new edits.
Docker Networking
While dipping my toe into the Docker waters, I learned early on that networks can be defined in various configurations depending on the need. Left alone, Docker handles the networking for new containers itself: unless otherwise directed, each new container lands on a default network in bridged mode, is assigned an internal IP (172.18.x.x, for instance), and Docker bridges that network with the network on the host.
What this means: within the Docker environment, containers communicate with each other via container name and/or internal Docker network IP. Any communication outside the Docker environment uses the host’s IP address – Docker handles the bridging automatically. All I had to worry about was the port each container used, making sure that nothing I defined in docker-compose.yml reused or conflicted with a port already taken.
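Docker’s CLI shows this plumbing directly. Listing the networks and inspecting the one docker-compose created (named after the directory holding the compose file – here that would be docker_default) reveals each container and its internal IP:

docker network ls
docker network inspect docker_default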
Here’s a screenshot of Portainer, listing my containers:

[Screenshot: Portainer container list]
As of this moment, that is all I’ve done to bring the home server online. The use of Docker is powerful. Containers are lightweight and compartmentalized. If I see something that grabs my interest – I can deploy the container quickly and in no time have a new service up and running to evaluate.
Here’s a screenshot from ‘htop’ showing the resource usage on the server with everything running:

[Screenshot: htop resource usage with all containers running]
The CPU cores aren’t even touched. I was more concerned with the memory usage since I was working with only 8GB of RAM – yet even with something like Home Assistant running the memory usage is reasonable. I’ll have to see how this pans out once Home Assistant is fully online. (The NUC has a spare RAM slot open and I plan to upgrade with another 8GB stick once the RAM prices return to “normal”. For posterity – it’s currently 6/4/2018 and the matching RAM for my current SO-DIMM is running $120 after tax.)
For the memory, I set up a script to run nightly that calls ‘sync‘ (flushing everything in memory to disk) and then clears the cache. This frees up a few MB of memory and generally keeps things tidy. Here’s the script:
#!/bin/sh
sync; echo 3 > /proc/sys/vm/drop_caches
This is set up as a cron job to run nightly.
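Since writing to /proc/sys/vm/drop_caches requires root, the job goes in root’s crontab (sudo crontab -e). The entry looks something like this, with the script path being wherever you saved it:

0 3 * * * /home/dockeruser/scripts/clear-cache.sh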
Home Assistant – What’s Configured Now?
Until we’re in the new house, I’m holding off on getting Home Assistant fully configured and online. For now, I’ve set up a few sensors. Here are some screenshots:

[Screenshot: Home Assistant overview – weather, commute, and package sensors]
The weather information is being pulled from the closest PWS associated with Weather Underground. (Another future expansion I plan to make with the smart home – a PWS that will coordinate with the sprinkler system.) The commute information uses the Waze component to calculate live travel time for a set route. Package delivery is tracked with the UPS sensor, with USPS and FedEx coming.
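As a sketch, the Waze piece of Home Assistant’s configuration.yaml looks roughly like this (the name and coordinates are placeholders for the route’s endpoints):

sensor:
  - platform: waze_travel_time
    name: Commute
    origin: 32.7767, -96.7970
    destination: 32.9857, -96.7502
    region: 'US'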

[Screenshot: Home Assistant system monitor tab]
The second tab is a system monitor – showing various parameters of the server and an hourly speed test of the WAN connection.
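Those are built on the systemmonitor and speedtest sensor platforms – roughly like this, with the resource list trimmed down for illustration:

sensor:
  - platform: systemmonitor
    resources:
      - type: processor_use
      - type: memory_use_percent
      - type: disk_use_percent
        arg: /
  - platform: speedtest
    minute: 0
    monitored_conditions:
      - ping
      - download
      - upload

Setting minute: 0 runs the speed test at the top of every hour.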