Ansible setup
Over the years, I had accumulated quite a collection of servers that all demanded attention to stay up-to-date and to keep running like they should. A monitoring server here, an Uptime Kuma there, a web server, a Dokploy instance, another server with some Docker containers on it, and before I knew it, I ended up with about 10 servers that all demanded attention. And trust me, that does not scale well if you don't think about it beforehand.
Moving to a dedicated server
All of those servers aren't free, and they added up to a monthly amount where it started to make sense to move to another solution: a dedicated server that offers not only more than enough resources for what I needed now, but also enough headroom for anything I could ever need in the future.
So I started planning things out. One of the biggest issues I had was that each server was manually configured, and I kept postponing changes because with every change, I had to rebuild my mental map of how the server was set up. This time, I wanted to plan things out properly.
Requirements
There were a few things I wanted to optimize on this server: I would keep the parts I really like and make sure that everything hooking into them is set up properly. Whenever you host something on a server, you most likely want to expose it to the outside world using a web server (and my web server of choice is Caddy). I took a good look at my current servers, and these were the use cases:
- A static website (like this one)
- A PHP-powered website
- A WordPress website
- A redirect
- A proxy (to, for example, a Docker container or a web application)
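To make those use cases concrete, here is a rough sketch of what they could look like in a Caddyfile. The domains, paths, and ports are placeholders, not my actual configuration; a WordPress site is essentially the PHP case with its own document root.

```caddyfile
# Static website
example.com {
	root * /var/www/example.com
	file_server
}

# PHP-powered website (WordPress works the same way)
php.example.com {
	root * /var/www/php.example.com
	php_fastcgi unix//run/php/php-fpm.sock
	file_server
}

# Redirect
old.example.com {
	redir https://example.com{uri} permanent
}

# Proxy to a Docker container or web application
app.example.com {
	reverse_proxy localhost:8080
}
```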
Next to that, there were also some other things I wanted to manage, like Docker containers that do not need to be exposed themselves.
Options
In order to manage the creation and the maintenance of the server, I wanted to use an automation tool. I personally had the most experience with Puppet.
Puppet
I used Puppet extensively at Nucleus, but that was a few years ago, and while I love the setup of Puppet, it might be a bit too complex for my use case: Puppet really shines in a setup where you have a Puppet master server, and the agents contact the master to get their catalog and then enforce it. Ideally, I would utilize as few servers as possible, so I would need to go for a masterless setup.
One of the things that complicated a Puppet setup, though, is that not all the tools I wanted to use have proper Puppet modules, and I didn't want to start building entire modules for every part of my setup. So I quickly ruled out Puppet.
Chef
Another option I had brief experience with, also at Nucleus, was Chef. I didn't work a lot with it, and for that reason I decided not to look into it too much this time.
Ansible
I had dabbled with Ansible a few times in the past, and it is the simplest of the three options. While Chef and Puppet require an agent to be installed, Ansible simply runs from your computer over SSH. It is a simple solution that can be quite powerful. Since we also utilize Ansible at Sofico, it felt right to go with Ansible this time as well.
Initial steps
I am going to assume that you have already set up Ansible; if not, the Ansible website is probably going to be a good resource to help you.
I decided on this structure for my Ansible code:
    .
    ├── inventory
    │   ├── production
    │   └── test
    ├── playbooks
    └── roles
        ├── backup
        ├── common
        ├── containers
        ├── jumphost
        ├── monitoring
        ├── notifications
        ├── security
        ├── webserver
        └── wireguard
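The inventory files map hosts to groups, which the playbooks can then target. As a sketch, a minimal inventory/production could look like this (the hostnames and user are placeholders, not my actual machines):

```ini
# inventory/production — group names mirror the playbooks they belong to
[jumphost]
jump.example.com

[webserver]
web.example.com

[all:vars]
ansible_user=ansible
```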
I am not going to go into too much detail here, but I wanted to be able to utilize Ansible to manage both my jumphost (which runs my WireGuard VPN) and my web server.
Since Ubuntu is probably the best supported operating system for Ansible, I decided to go with Ubuntu Server. I previously used openSUSE Leap, but I ran into several issues (like parts of PHP not being installed by default that are included out of the box on Ubuntu), so I opted not to use openSUSE Leap this time around.
I am not going to go into full detail about my setup, as it took me weeks to finalize and explaining it in a single blog post would not do it justice, but I will walk through several parts here.
Playbooks
The bread and butter of Ansible is playbooks: they describe what you want to run and which hosts to apply it to. For my setup, I built my own playbooks that each focus on a specific domain:
- backup
- common
- jumphost
- monitoring
- security
- webserver
The main goal of using automation is ensuring that everything that needs to happen actually happens, and that is exactly what my setup is geared towards. These playbooks all contain roles that perform the required tasks.
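As a sketch, a playbook in this structure might look like the following. The role names match the layout above, but the host group and other details are assumptions on my part:

```yaml
# playbooks/webserver.yml — illustrative, not my actual playbook
- name: Configure web servers
  hosts: webserver
  become: true
  roles:
    - common
    - webserver
```

You would then apply it with something like `ansible-playbook -i inventory/production playbooks/webserver.yml`.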
Roles
My playbooks all contain roles: the backup playbook, for example, contains the roles common and backup. The common role runs on every machine and performs a variety of tasks:
- Creating the correct users and adding SSH keys to them
- Configuring the SSH daemon to only allow logins using SSH keys (no passwords)
- Setting up the firewall rules
- Updating the MOTD to a custom template
- Installing common packages
- Configuring the time zone of the server
You know, the kind of things that take a lot of time to configure manually, but are now completely automated.
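A few of the common-role tasks above might be sketched like this. The user name, package list, and time zone are illustrative assumptions, not my actual values:

```yaml
# roles/common/tasks/main.yml (excerpt) — a sketch, not the real role
- name: Create the admin user
  ansible.builtin.user:
    name: admin
    groups: sudo
    append: true
    shell: /bin/bash

- name: Add an SSH key for the admin user
  ansible.posix.authorized_key:
    user: admin
    key: "{{ lookup('file', 'files/admin.pub') }}"

- name: Disable password logins over SSH
  ansible.builtin.lineinfile:
    path: /etc/ssh/sshd_config
    regexp: '^#?PasswordAuthentication'
    line: PasswordAuthentication no
  notify: Restart sshd

- name: Install common packages
  ansible.builtin.apt:
    name: [htop, vim, curl]
    state: present

- name: Configure the time zone
  community.general.timezone:
    name: Europe/Brussels
```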
The back-up role has other tasks it performs, for example:
- Installing and configuring BorgBackup
- Setting up back-up schedules
- Creating back-up scripts to create back-ups (and send notifications)
- Creating systemd configuration for my back-up scripts
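The back-up tasks above could be sketched like this; the package, paths, and template names are assumptions for illustration:

```yaml
# roles/backup/tasks/main.yml (excerpt) — a sketch, not the real role
- name: Install BorgBackup
  ansible.builtin.apt:
    name: borgbackup
    state: present

- name: Deploy the back-up script
  ansible.builtin.template:
    src: backup.sh.j2
    dest: /usr/local/bin/backup.sh
    mode: "0750"

- name: Create systemd service and timer for the back-up
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "/etc/systemd/system/{{ item }}"
  loop:
    - backup.service
    - backup.timer

- name: Enable and start the back-up timer
  ansible.builtin.systemd:
    name: backup.timer
    enabled: true
    state: started
    daemon_reload: true
```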
In this way, I can rest assured that everything goes well. My monitoring stack checks whether the back-ups run correctly and will alert me if needed.
That is it, folks.
And that is all I can cover today. I am pretty sure I will be talking more about my Ansible setup in the future, but for now, this is a quick overview of how it currently looks.