Rebuilding a Solid Base Home Server

As time goes on, software is updated, upgraded, and sometimes falls out of support entirely. This is true for my home server as well. In previous articles I discussed how I built my server using CentOS 7. As of June 30th, 2024, that OS has gone the way of the dodo. There are many flavors of Linux to choose from, so we begin our journey with a thought exercise leading us to our OS of choice.

Operating System

CentOS was the operating system of choice for those who liked the Red Hat environment but did not wish to invest in a RHEL subscription. It had all of the packages provided by RHEL, without support and on a bit of a delay. I speak of it in the past tense because, midway through the life of CentOS 8, the project was turned into an upstream OS rather than a downstream one. This means that instead of receiving the tried-and-true packages and updates after RHEL has shipped them, you receive them before RHEL does, which can lead to instability when something has a bug. The upside is that you might get the fix sooner; the downside is that you will hit bugs the RHEL user base never sees, because they are fixed by the time the packages land there. In the end this did not appeal to me, so I ruled out CentOS as an OS going forward.

Enter the new contender, AlmaLinux. Many Linux users shared my unhappiness with how CentOS was being handled, and a new project was formed. AlmaLinux fills the downstream-of-RHEL gap, and even follows the same version numbering for ease of use. Since this is similar to what I have used in previous builds, I chose the latest version of Alma, which at this time is 9.

The most obvious path is to elevate from CentOS 7 to Alma 9 using a tool such as AlmaLinux's Leapp-based ELevate tool [1]. Unfortunately, due to the age of my hardware and the fact that 40 TB is hard to back up on a budget, I had to attempt this on a live system with no rollback plan. This was ill-advised, I knew the risks, and it did not work out. Thankfully I was still able to access my data and, with some creative hard disk juggling, ensure I didn't lose anything important. So, after cleaning up the ripped-out hair on the floor and confirming I had what I needed to move forward, we will start from scratch and apply the backed-up data later on.

I will gloss over installing Alma 9 itself, as that is fairly straightforward. The only changes I made from the defaults were static IPs and mounting the bulk of the 40 TB of storage at /opt, as I want all my custom configuration to live there rather than scattered across directories that make less sense. I will note here that, per best practices, the root user has no password and the account is locked so nobody can log in as root directly. My own user does have sudo privileges.

Setting the Base

Before installing the packages I need, I'm going to quickly make a directory and add it to the PATH. This ensures I have a dedicated place to house the scripts I will write over the life of this machine. In my case, I created /opt/custom_config/bin/. I then added a file to /etc/profile.d/ called custom_environment.sh and populated it with the following.

#!/bin/bash

# Append the custom script directory to every login shell's PATH.
export PATH=$PATH:/opt/custom_config/bin
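
If you want the new PATH entry in your current session without logging out and back in, you can source the profile script and confirm the directory is there:

source /etc/profile.d/custom_environment.sh
echo "$PATH" | tr ':' '\n' | grep custom_config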

Installing Docker

The first package I need is Docker. Thankfully, Alma and CentOS are two sides of the same coin, so the steps are interchangeable; here are the steps I took [2]. Please note that even though DNF is the package manager for Alma 9, yum is still a valid command, as it is essentially a symlink to DNF and a drop-in replacement, so no changes are needed.

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
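
With the packages in place, a quick smoke test doesn't hurt. The hello-world image is Docker's standard check; the usermod line is optional and only needed if you'd rather run docker without sudo (log out and back in for the group change to apply).

sudo docker run --rm hello-world
# Optional: let your user run docker without sudo.
sudo usermod -aG docker $USER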

Adding SSL

Now, I do host some services on this server that I wish to access securely from the open internet. Thankfully I have already migrated most of them to the cloud, so the list is very short. One thing I need in order to access the remaining services is an SSL certificate. For this I will use Let's Encrypt [3]. Unfortunately the cloud provider I use doesn't have a certbot plugin, so I have to obtain my certificate manually for now, but that makes this much more informative: I get to show you how to collect a certificate and install it by hand. Using the following steps, gathered from a buried page in the certbot documentation, let's install it [4].

sudo dnf install python3 augeas-libs
sudo python3 -m venv /opt/certbot/
sudo /opt/certbot/bin/pip install --upgrade pip
sudo /opt/certbot/bin/pip install certbot
sudo ln -s /opt/certbot/bin/certbot /usr/bin/certbot
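
A quick check confirms the symlink works and certbot is on the PATH; the same venv pip can also be used later to keep certbot itself up to date:

certbot --version
# Later, to upgrade certbot inside its venv:
sudo /opt/certbot/bin/pip install --upgrade certbot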

Now that certbot is installed, we can collect our certificate. Since I need to renew it manually, I created a script that does it for me and also ensures my proxy restarts with the new cert. We begin by creating the directory /opt/nginx_data/certificates/ within our /opt partition. Once the directory exists, we can create the following script in the bin directory we added to the PATH earlier.

#!/bin/bash

# Request the wildcard certificate interactively (manual DNS challenge).
certbot certonly --manual -d "*.<your.domain.here>"
# Copy the real certificate files, following the symlinks in /etc/letsencrypt/live/.
cp --dereference --force /etc/letsencrypt/live/<your.domain.here>/fullchain.pem /opt/nginx_data/certificates/fullchain.pem
cp --dereference --force /etc/letsencrypt/live/<your.domain.here>/privkey.pem /opt/nginx_data/certificates/privkey.pem
# Restart the proxy so it serves the new certificate, then confirm it came back up.
systemctl restart nginx
echo "Sleeping for 10 seconds."
sleep 10
systemctl status nginx
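
Assuming you saved the script as, say, renew_certs.sh (the name here is my own choice) in /opt/custom_config/bin/, make it executable and run it with root privileges, since certbot and systemctl both need them. Note that sudo typically resets PATH via secure_path, so call it by its full path:

sudo chmod +x /opt/custom_config/bin/renew_certs.sh
sudo /opt/custom_config/bin/renew_certs.sh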

Please note that this copies the actual certificate files, not just the symlinks, to the directory of your choice; our installation of Nginx will not work correctly if you do not dereference. At this point I will assume you have whatever Nginx config you'd like prepared. Mine references certificates located in the /opt/certificates/ directory. But why there, when we copied them to /opt/nginx_data/certificates/? Let's explore!
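
Before answering that, here is a minimal sketch of what such a proxy config might look like. The service name, backend address, and port are placeholders for whatever you host; note that the certificate paths point at /opt/certificates/:

server {
    listen 443 ssl;
    server_name <your-service>.<your.domain.here>;

    ssl_certificate     /opt/certificates/fullchain.pem;
    ssl_certificate_key /opt/certificates/privkey.pem;

    location / {
        proxy_pass http://<internal-host>:<port>;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}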

Adding Nginx

Since we have Docker on the system, there is no sense installing a reverse proxy directly on the host. So I simply add the following file to /etc/systemd/system/ as nginx.service.

[Unit]
Description=NGINX Service
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop nginx
ExecStartPre=-/usr/bin/docker rm nginx
ExecStartPre=/usr/bin/docker pull nginx:latest
ExecStart=/usr/bin/docker run --rm --name nginx \
-v /opt/nginx_data/certificates/:/opt/certificates/:ro \
-v /opt/nginx_data/<your-service>.proxy.conf:/etc/nginx/conf.d/<your-service>.proxy.conf:ro \
-p 443:443 \
nginx:latest
ExecStop=/usr/bin/docker stop nginx

[Install]
WantedBy=default.target

What does this all do? First, it runs some pre-steps:

  1. Stops any nginx container already running.
  2. Removes any nginx container that is stopped.
  3. Pulls the latest nginx image.

Then it mounts in the certificates as well as the explicitly defined config file (you can add more volumes as needed), publishes port 443, and starts the container. Running the container as a systemd service has the benefit of being watched by the OS. Run the following steps to reload systemd, start the service, and ensure it starts again after an OS restart. The certificate script from earlier restarts the container as well, which saves you the trouble of remembering to do it when needed.

sudo systemctl daemon-reload
sudo systemctl start nginx.service
sudo systemctl enable nginx.service
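
A couple of quick checks confirm the container is up and answering TLS; the -k flag is only there because a wildcard certificate won't match localhost:

sudo docker ps --filter name=nginx
curl -kI https://localhost/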

Adding DNS

As a final touch for this article, we will add one more service: DNS. This is quite simple as well, and gives you the ability to direct your home traffic across multiple devices as easily as tweaking one hosts file. Our choice for this is CoreDNS, which in our case only requires three simple files and one directory. Let's start by creating the /opt/dns_data/ directory. Once we have that, we can add our Corefile, which defines our Hosts file and preferred upstream DNS providers.

. {
   errors
   health {
      lameduck 5s
   }
   ready
   hosts /etc/coredns/Hosts {
      fallthrough
   }
   forward . 75.75.75.75:53 75.75.76.76:53 {
      max_concurrent 1000
   }
   cache 30
   loop
   reload
   loadbalance
}

In this case, my primary and backup DNS providers are 75.75.75.75 and 75.75.76.76 respectively. All my custom entries live in /opt/dns_data/Hosts, as shown here. It really is as easy as a hosts file: IP address on the left, hostname on the right.

10.0.1.3 <my.hosted.service.domain>

Finally, we will add the coredns.service file to /etc/systemd/system/.

[Unit]
Description=CoreDNS Service
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
Restart=always
ExecStartPre=-/usr/bin/docker stop coredns
ExecStartPre=-/usr/bin/docker rm coredns
ExecStartPre=/usr/bin/docker pull coredns/coredns:latest
ExecStart=/usr/bin/docker run --rm --name coredns \
-v /opt/dns_data/:/etc/coredns/ \
-p 53:53/udp \
coredns/coredns:latest \
-conf /etc/coredns/Corefile
ExecStop=/usr/bin/docker stop coredns

[Install]
WantedBy=default.target

Similar to the Nginx unit above, this ensures the image being pulled is the latest on each restart, mounts in the config files we made above, and exposes port 53/udp for the internal network to use as a DNS server. Now we just need to reload systemd, turn the service on, and ensure it starts when the server restarts.

sudo systemctl daemon-reload
sudo systemctl start coredns.service
sudo systemctl enable coredns.service
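
Assuming you have the dig utility available (it ships in the bind-utils package), you can point queries at the new server to confirm that both a custom Hosts entry and upstream forwarding resolve:

sudo dnf install -y bind-utils
dig @127.0.0.1 <my.hosted.service.domain>
dig @127.0.0.1 almalinux.org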

With this step complete, we now have a solid base server for whatever projects we have ahead.

[1] https://wiki.almalinux.org/elevate/ELevating-CentOS7-to-AlmaLinux-9.html
[2] https://docs.docker.com/engine/install/centos/
[3] https://letsencrypt.org/
[4] https://certbot.eff.org/instructions?ws=other&os=pip&tab=wildcard
