This post covers the steps I took to install Ubuntu 18.04 on the Intel NUC (NUC7i7DNHE) and bring it online, ready for the Docker environment.
Since this server was going to have some exposure to the internet beyond my house, I needed to ensure I followed some basic security best practices. I made a list of the various aspects to consider:
- Minimal OS – Install only the packages needed, adding more on an as-needed basis.
- Lock down ‘root’ access – No external login for root, and use a unique password (see the sketch after this list).
- Create a dedicated user for the Docker environment instead of using root.
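For the root-lockdown item, the usual Ubuntu approach is to disable root logins at the SSH daemon. This is a minimal sketch, assuming the OpenSSH server is (or will be) installed:

sudo vi /etc/ssh/sshd_config
# set this directive to forbid root logins over SSH:
PermitRootLogin no
# then restart the service to pick up the change:
sudo systemctl restart ssh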
Fortunately, an out-of-the-box Ubuntu install handles most of the mundane server administration details, such as log file management and date/time syncing.
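If you’d rather verify those defaults than take them on faith, two quick checks cover the examples above: timedatectl reports whether the clock is NTP-synced, and journalctl shows how much space the system journal is using:

timedatectl
journalctl --disk-usage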
NUC Preparation
Following the directions in the box, I removed the bottom plate from the NUC and installed the M.2 SSD and SO-DIMM. I also took the opportunity to add a 100GB SSD in the drive bay as an additional drive. Once done, I closed it up and connected power, a monitor, keyboard, and mouse.
After booting into the NUC’s BIOS (F2), I checked the version and applied any available update. This went smoothly, and when it finished I had a system ready for an OS install.
Installing and Configuring Ubuntu 18.04
Given that the NUC has no CD or DVD drive, I needed to either use a bootable USB flash drive, or do a network install. I decided to go the USB flash route and followed the guide on Ubuntu’s website. I used the 18.04 LTS image.
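I won’t duplicate the guide here, but if you’re writing the image from another Linux box, dd does the job. A sketch, assuming the ISO filename and that the flash drive shows up as /dev/sdX (verify the device first – dd will happily overwrite the wrong disk):

sudo dd if=ubuntu-18.04-desktop-amd64.iso of=/dev/sdX bs=4M status=progress
sync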
I decided to go with Ubuntu over other Linux distros or Windows for a few reasons: my familiarity with Ubuntu Linux from prior projects, the support for Docker on Ubuntu, and the desire to create a GUI-less server.
Once the USB drive was ready, I plugged it in and booted the NUC, pressing F10 to bring up the boot menu and choosing the USB drive as the source. The Ubuntu installation is straightforward, except that 18.04 adds a ‘Minimal Installation’ option, which is what I selected. You still get the GNOME desktop installed, but it reduces the number of packages I’ll have to uninstall later.
Once installation was complete, I logged into the server as the user account I created and uninstalled the desktop:
sudo apt-get remove ubuntu-gnome-desktop
sudo apt-get remove gnome-shell
sudo apt-get remove --auto-remove ubuntu-gnome-desktop
sudo apt-get purge ubuntu-gnome-desktop
sudo apt-get purge --auto-remove ubuntu-gnome-desktop
sudo apt-get autoremove
sudo dpkg-reconfigure gdm3
sudo apt-get remove gdm3
Then I updated the apt repository lists to pull from mirrors in my region:
sudo vi /etc/apt/sources.list
Wherever a URL began with ‘archive’, I prepended ‘us.’ to it – for example:
deb http://archive.ubuntu.com/ubuntu bionic multiverse
to
deb http://us.archive.ubuntu.com/ubuntu bionic multiverse
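If you’d rather not edit the file by hand, the same substitution can be done with sed, assuming all the mirror entries use the archive.ubuntu.com host:

sudo sed -i 's|http://archive.ubuntu.com|http://us.archive.ubuntu.com|g' /etc/apt/sources.list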
This step isn’t strictly necessary; I just wanted to reduce the number of network hops updates and installs would take.
Then I went through the update for the system:
sudo apt-get update
sudo apt-get upgrade
Next, I followed this guide to enable automatic updates for security patches.
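The short version of that guide is to install the unattended-upgrades package and turn it on via its debconf prompt:

sudo apt-get install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades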
I then enabled the various media codecs (in anticipation of this machine becoming a media server):
sudo apt install ubuntu-restricted-extras
Once these steps were complete, I added the new user for Docker:
sudo adduser dockeruser
sudo usermod -aG sudo dockeruser
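A quick sanity check that the account landed in the sudo group:

groups dockeruser
# output should list both the user's own group and 'sudo'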
Installing Docker and Docker Compose
After setting up the new user for running Docker, I logged out and back in as that account. Now to install Docker, start it, and enable it to launch at boot:
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
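One optional step worth mentioning: the docker.io package creates a docker group, and adding the user to it avoids needing sudo for every docker command (with the caveat that docker group membership is effectively root-equivalent):

sudo usermod -aG docker dockeruser
# log out and back in for the group change to take effect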
Once that was done, I tested it with:
docker --version
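For a more end-to-end test than a version check, the stock hello-world image confirms the daemon can pull and run a container:

sudo docker run hello-world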
And finally, I grabbed the Docker Compose package, which makes it easier to drive Docker from the command line and to automate multi-container setups:
sudo apt-get install docker-compose
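To verify the install, and to show what Compose buys you, here’s a version check plus a minimal, hypothetical docker-compose.yml – the nginx service and port mapping are placeholders, not part of my actual setup:

docker-compose --version

# docker-compose.yml
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"

Running docker-compose up -d in the directory holding that file pulls the image and starts the container in the background.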
And with that, the new server was up and running. Next up was deploying the first Docker containers.