Sometimes Simple Does the Job

One thing in my life that I have never kept a secret is that I have ADHD. It’s one of those things that just defines my life and has always been there. It affects everything I do, often in ways I don’t even realize.

I think I’m currently at a place in my life where I have it under control, but it hasn’t always been that way. The last few years have been a bit of a challenge for me, especially the pandemic: being quite a social person, the isolation that came with it affected me quite profoundly.

That, however, is a story for another day. What I want to talk about today is how I too often tend to go for complex solutions to simple problems. It’s not that I want to go for complex solutions, but I often find myself doing so. That mainly comes down to familiarity: you learn to build things “the proper way”, while that often isn’t the best solution for a given problem.

Let me introduce you to a project I’ve worked on over the last few years: something I built for Bobfans, the biggest fansite for the Belgian theme park Bobbejaanland. If you’ve ever visited my website before, that shouldn’t come as a surprise. There are quite a few theme parks within reach for me, but as a child, Bobbejaanland was the one we visited the most.

So when Bobbejaanland started showing live waiting times in the park, it only made sense that I would go ahead and build something on top of that. That turned into the Bobfans API, which powers an easy-to-use website that simply displays the current waiting times for Bobbejaanland.

When I first worked on it, I was under quite a lot of time pressure and I hacked something together quickly in Go. It wasn’t very pretty and it was quite a mess, but it did the job and it ran during the entire season. However, I wasn’t proud of it: it wasn’t optimized (it made a GET request to the Bobbejaanland API for every incoming request) and we were just parsing the response into a table that could be sorted by waiting time.

So, for the next season, I decided to go ahead and fully rewrite it. It was still in Go, but I had also learnt to work with Kubernetes, so I decided to rebuild the application as microservices. I ended up with an application divided into five microservices:

  1. Fetch, which does the actual fetching of the data from the Bobbejaanland API.
  2. Average, which calculates the average waiting time per attraction over the current day.
  3. Maintenance, which runs daily, fetches the park opening hours (checking whether the park is open) and checks whether any new attractions have been added to the API.
  4. Web, which is the HTML front-end that users interact with, and which also exposes some API endpoints.
  5. Metrics, which exposes metrics that Prometheus can scrape to visualize the waiting times in Grafana.

These applications are all built in a GitHub Actions pipeline and deployed to a Kubernetes cluster using Flux. The data was stored in a MariaDB database, also deployed using the MariaDB Operator and Flux.

The first version of this ran on RackSpace Spot, which is the cheapest way I found to run a Kubernetes cluster. However, I ended up running into a lot of issues, especially with the MariaDB database. For some reason, my MariaDB volume got stuck/locked a few times and I needed assistance from the RackSpace Spot support team (and, the service being as cheap as it is, there isn’t actually a real support team). It seems I had found some genuine issues and they were able to fix them, but I ended up with a corrupted database, and even after it was fixed, it broke a few more times. Even though I had a lot of experience with MariaDB, running it within Kubernetes made things way harder than they should have been.

Eventually, I purchased two Intel N100 powered mini PCs, installed them in my homelab and started running Kubernetes on them. It only made sense to move the Bobfans API to that cluster, so I eventually did, and used a Cloudflare tunnel to expose it to the internet, which worked great! I made some configuration errors (the classic mistake of not setting resource limits) and an error in my backup job that led to a database backup being created every minute instead of every hour. Add to that that I also didn’t clean up the database backups, and the disk was filling up really quickly, which I didn’t catch in time. So my cluster behaved really strangely and I had to figure all that out, but in the end it was an easy fix.

However, due to personal reasons, I moved to another home, and setting my Kubernetes cluster up again was low priority. I had a Fritz Box at my old address (and I loved that thing), and my cluster was set up assuming I would always have one, which was no longer the case. So it wasn’t as simple as setting up my cluster again… and it got added to my ever-growing to-do list. I also realized that setting it up again meant messing with the database again, which kind of paralyzed me completely. I knew it was going to be a mess, and I just kept pushing it out in front of me.

Until a few days ago, when I finally decided to do something about it. Around that time, Goilerplate (a Go-based boilerplate for projects) launched; it uses SQLite and came with a great write-up about why. And honestly, I had never even considered SQLite. It’s not that I had discarded it as an option: I knew what SQLite was, but I had never used it or even seriously considered it. It simply never occurred to me.

The write-up made a lot of good points: it’s a simple solution, it works and it scales. For 99% of projects, SQLite is more than adequate and probably a better solution than a full-blown database. So with the Bobfans API in mind, I thought: why not? I decided to rewrite the Bobfans API using SQLite instead, and I am so glad that I did!

The joy of programming

With the rewrite came a lot of questions. How would I deploy this now, since my Kubernetes cluster isn’t ready to be used yet? How would I develop and build it? Would SQLite support being used by multiple microservices?

Well, the best way to find out is by trying it. So I modified my code to use SQLite instead, and decided to rework parts of my application while I was at it: while it was nice to have multiple microservices, it didn’t exactly make things easier, and perhaps some of them could be merged into one.
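For that last question: as far as I can tell, SQLite happily handles multiple processes sharing one database file as long as only one of them writes at a time, and enabling write-ahead logging plus a busy timeout is the usual recipe for making that smooth. In SQLite terms, that is just two pragmas:

```sql
-- WAL mode lets readers keep reading while a single writer writes.
PRAGMA journal_mode=WAL;
-- Wait up to 5 seconds for a lock instead of failing immediately.
PRAGMA busy_timeout=5000;
```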

So I went ahead and merged fetch, average and maintenance into a new application called scheduler. It does all the same things, but uses gocron to schedule the jobs. Every minute the fetch job fetches the data, every 15 minutes the averages get calculated, and early in the morning the maintenance job runs to check whether the park is open (and during what hours) and whether any new attractions have been added to the API.
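The scheduler itself uses gocron, but the overall shape is simple enough to sketch with nothing but the standard library. The job bodies below are placeholders, and the intervals are shortened so the demo terminates (the real ones are one minute, fifteen minutes and once a day):

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

// startJob runs job on its own goroutine every interval, until stop is closed.
// The real scheduler uses gocron for this; a ticker loop captures the idea.
func startJob(interval time.Duration, stop <-chan struct{}, job func()) {
	ticker := time.NewTicker(interval)
	go func() {
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				job()
			case <-stop:
				return
			}
		}
	}()
}

// runDemo wires up placeholder fetch and average jobs, lets them tick for a
// short while, then stops them and reports how often each one ran.
func runDemo() (fetches, averages int64) {
	var f, a atomic.Int64
	stop := make(chan struct{})
	startJob(5*time.Millisecond, stop, func() { f.Add(1) })  // fetch
	startJob(20*time.Millisecond, stop, func() { a.Add(1) }) // average
	time.Sleep(60 * time.Millisecond)
	close(stop)
	return f.Load(), a.Load()
}

func main() {
	f, a := runDemo()
	fmt.Println("fetch ran:", f > 0, "| average ran:", a > 0)
}
```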

The other application, web, got some polish but mainly remained the same. Instead of fetching the data from MariaDB, it now fetches it from the SQLite database. I did end up improving it a lot, though; more about that later.

Further optimization

When my rebuild was completed and I got ready to deploy it once again, I noticed how easy the deployment was. A Docker Compose file and a volume for the SQLite database were just about all that was needed. I modified my pipeline to build the new images, added a registry login in Dokploy and an automatic backup job to an S3 bucket, and voilà: the application ran. It was a lot easier than I thought it would be, and the performance was even better than with MariaDB.
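I won’t share my exact setup, but the Compose file is roughly this shape (image names, ports and the volume name here are made up for the example):

```yaml
services:
  web:
    image: ghcr.io/example/bobfans-web:latest        # placeholder image name
    ports:
      - "8080:8080"
    volumes:
      - bobfans-data:/data    # the SQLite file lives on this volume
  scheduler:
    image: ghcr.io/example/bobfans-scheduler:latest  # placeholder image name
    volumes:
      - bobfans-data:/data    # same volume, so both services see the same DB

volumes:
  bobfans-data:
```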

However, I did start thinking about things. I fetched all the information every minute and stored all of it: whether a ride was open, what the waiting time was at that point, it all got added to the database, even if the data was exactly the same as before. With how easy development had become, I made a copy of my database for development (which was as easy as downloading the backup from the S3 bucket) and started improving on this: instead of storing the waiting times for every ride every minute, what about only storing them when they differ from the previous value? If an attraction has a 5-minute queue for a whole hour, I would previously have stored that 60 times. Now I store it only once: when the waiting time changes, the new value gets stored; if it’s the same, it no longer does.

To make sure I can keep track of whether the application is still running properly, the ride record gets updated on every run. That way I can easily see when it was last updated and whether everything is still working.

Fetch also logs all actions it takes to stdout, so I can easily see what’s going on in Dokploy itself. And it just… works. With no issue at all.

With all the energy I got from how well this went, I decided to go ahead and fix the CSS too. I took a serious look at how the front-end worked and realized there were still parts left over from when it was a simple table, like a table sorting library. That was obviously completely useless now that there were no tables anymore… so it got removed. I used a single animation from animate.css, so I dropped the library and added that one animation to my own CSS file.

In the end, I was able to remove jQuery, the table sorter plugin, and the animate.css and table sorter CSS files. The single JavaScript file I had could also be removed, as it no longer did anything. I ended up with a page that was about 90% smaller. That motivated me to experiment with something I had wanted to try for a long time: HTMX. Because how cool would it be if the waiting times updated automatically?

So, I built a new endpoint that just returns the waiting times for all the attractions. A single line in my HTML file now makes an HTMX request every minute to fetch the new waiting times, and it automatically updates the page. And it took almost no effort at all.

And I would almost never have done any of this, because of the paralysis the database caused me. Stripping away that complexity led to a ton of fun, and I’m glad that I did it.

Conclusion

I’ve learnt a lot from this project. Not only that I shouldn’t discard solutions that look too simple, but also that I should always honestly investigate whether I need something more complex. I should start simple and, only if the situation requires a more advanced solution, build that at that point. I shouldn’t assume that things need to be done the “best” or “most scalable” way when, 99% of the time, the simple approach is just fine and the rest only adds overhead and complexity to the project.

Complexity that will inevitably mean I ignore the project instead of improving it. So my new train of thought is: always start simple. Don’t over-engineer things; start simple and add complexity when it’s needed. Just start building, start shipping and start learning. And if more complexity is needed, go ahead and build it then.

Stop being afraid, start doing. I can do it. And so can you.

  • Kevin

Posted on Monday, 3 November 2025