-
Building a multi-network ADS-B feeder with a $20 dongle
For a while now, I’ve wondered what to do with my old Raspberry Pi 3 Model B from 2017, which has basically been doing nothing ever since I replaced it with the Atomic Pi in 2019 and an old PC in 2022. I considered building a stratum 1 NTP server with it, but ultimately did that with a serial port on my server instead.
Recently, I’ve discovered a new and interesting use for a Raspberry Pi—using it to receive Automatic Dependent Surveillance–Broadcast (ADS-B) signals. These signals are used by planes to broadcast positions and information about themselves, and are what websites like Flightradar24 and FlightAware use to track planes. In fact, these websites rely on volunteers around the world running ADS-B receivers and feeding the data to them to track planes worldwide.
Since I love running public services (e.g. mirror.quantum5.ca), I thought I might run one of these receivers myself and feed the data to anyone who wants it. I quickly looked at the requirements for Flightradar24 and found they weren’t much at all—all you need is a Raspberry Pi, a view of the sky, and a cheap DVB-T TV tuner, such as the popular RTL2832U/R820T dongle, whose software-defined radio (SDR) can be used to receive ADS-B signals.
I have enough open sky out of my window to run a stratum 1 NTP server with a GPS receiver, so I figured that was also sufficient for ADS-B1. Since I found an RTL2832U/R820T combo unit with an antenna for around US$20 on AliExpress, I just decided on a whim to buy one. Today, it arrived, and I set out to build my own ADS-B receiver.
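For a taste of the setup, here’s a rough sketch of the kind of commands involved. Package names and flags vary by distro and feeder, so treat this as illustrative rather than my exact configuration:

```sh
# Check that the RTL2832U dongle is detected (rtl-sdr package)
rtl_test -t

# Decode ADS-B on 1090 MHz, show nearby aircraft in a live table, and
# serve the decoded data over the network for feeders to consume
# (dump1090 has several forks, e.g. dump1090-mutability or dump1090-fa)
dump1090 --interactive --net

# Flightradar24's feeder walks you through station signup interactively
fr24feed --signup
```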
-
A whirlwind tour of systemd-nspawn containers
In the last yearly update, I talked about isolating my self-hosted LLMs running in Ollama as well as Open WebUI in systemd-nspawn containers and promised a blog post about it. However, while writing that blog post, a footnote on why I am using it instead of Docker accidentally turned into a full blog post on its own. Here’s the actual post on systemd-nspawn.
Fundamentally, systemd-nspawn is a lightweight Linux namespaces-based container technology, not dissimilar to Docker. The difference is mostly in image management—instead of describing how to build images with Dockerfiles and distributing prebuilt, read-only images containing ready-to-run software, systemd-nspawn is typically used with a writable root filesystem, functioning more similarly to a virtual machine. For those of you who remember using chroot to run software on a different Linux distro, it can also be described as chroot on steroids.
I find systemd-nspawn especially useful in the following scenarios:
- When you want to run some software with some degree of isolation on a VPS, where you can’t create a full virtual machine due to nested virtualization not being available1;
- When you need to share access to hardware, such as a GPU (which is why I run LLMs in systemd-nspawn);
- When you don’t want the overhead of virtualization;
- When you want to directly access some files on the host system without resorting to virtiofs; and
- When you would normally use Docker but can’t or don’t want to. For reasons, please see the footnote-turned-blog post.
In this post, I’ll describe the process of setting up systemd-nspawn containers and how to use them in some common scenarios.
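As a preview, creating a bootable Debian container takes only a few commands. Here’s a minimal sketch, assuming a Debian host and a hypothetical container named mycontainer:

```sh
# Tooling: systemd-nspawn itself plus a Debian bootstrapper
apt install systemd-container debootstrap

# Create a minimal Debian root filesystem where machinectl expects it
debootstrap stable /var/lib/machines/mycontainer

# Open a shell inside the container, e.g. to set the root password
systemd-nspawn -D /var/lib/machines/mycontainer

# Boot it as a full machine under systemd's management
machinectl start mycontainer
machinectl shell mycontainer
```
-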
Docker considered harmful
In the last yearly update, I talked about isolating my self-hosted LLMs running in Ollama, as well as Open WebUI, in systemd-nspawn containers. However, as I contemplated writing such a blog post, I realized the inevitable question would be: why not run it in Docker?
After all, Docker is super popular in self-hosting circles for its “convenience” and “security.” There’s a vast repository of images for almost any software you might want. You could run almost anything with a simple docker run, and it’ll run securely in a container. What’s not to like?
This is probably going to be one of my most controversial blog posts, but the truth is that over the past decade, I’ve run into so many issues with Docker that I’ve simply had enough of it. I now avoid Docker like the plague. In fact, if some software is only available as a Docker container—or worse, requires Docker Compose—I sigh and create a full VM to lock away the madness.
This may seem extreme, but fundamentally, this boils down to several things:
- The Docker daemon’s complete overreach;
- Docker’s lack of UID isolation by default;
- Docker’s lack of init by default; and
- The quality of Docker images.
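For the middle two points, it’s worth noting that opt-in mitigations exist; the following is an illustrative sketch, not an endorsement:

```sh
# PID 1 in a container is the application itself unless you ask for an
# init process; --init injects one (tini) to reap zombies and forward
# signals ("some-image" is a placeholder)
docker run --init some-image

# UID 0 in the container is UID 0 on the host unless user namespace
# remapping is enabled daemon-wide in /etc/docker/daemon.json:
#   { "userns-remap": "default" }
# after which containers run under a subordinate UID range instead
```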
Let’s dive into this.
-
On ECC RAM on AMD Ryzen
Last time, I talked about how a bad stick of RAM drove me into buying ECC RAM for my Ryzen 9 3900X home server build—mostly that ECC would have been able to detect that something was wrong with the RAM and also correct for single-bit errors, which would have saved me a ton of headache.
Now that I’ve received the RAM and run it for a while, I’ll write about the entire experience of getting it working and my attempts to cause errors to verify the ECC functionality.
Spoilers: Injecting faults was way harder than it appeared from online research.
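For reference, here’s roughly how one can confirm that ECC is actually active on Linux, a sketch using standard tools rather than my exact procedure:

```sh
# The DIMMs should report an ECC correction type, e.g. "Multi-bit ECC"
dmidecode -t memory | grep -i 'error correction'

# The kernel's EDAC subsystem exposes corrected error counts per
# memory controller once the appropriate EDAC driver is loaded
grep . /sys/devices/system/edac/mc/mc*/ce_count

# rasdaemon logs ECC events persistently for later inspection
ras-mc-ctl --status
ras-mc-ctl --errors
```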
-
2024: Year in Review
For the past two years, I’ve been writing year-end reviews to look back upon the year that had gone by and reflect on what had happened. I thought I might as well continue the tradition this year.
However, I’ll try a new format—instead of grouping by month, I’ll group by area. I’ll focus on the following areas:
- BGP and operating my own autonomous system;
- My homebrew CDN for this blog;
- My home server;
- My new mechanical keyboard;
- My travel router project; and
- My music hobby.
Without further ado, let’s begin.
-
On btrfs and memory corruption
As you may have heard, I have a home server, which hosts mirror.quantum5.ca and doubles as my home NAS. To ensure my data is protected, I am running btrfs, which has data checksums to ensure that bit rot can be detected, and I am using the raid1 mode to enable btrfs to recover from such events and restore the correct data. In this mode, btrfs ensures that there are two copies of every file, each on a distinct drive. In theory, if one of the copies is damaged due to bit rot, or even if an entire drive is lost, all of your data can still be recovered.
For years, this setup has worked perfectly. I originally created this btrfs array on my old Atomic Pi, with drives inside a USB HDD dock, and the same array is still running on my current Ryzen home server—five years later—even after a bunch of drive changes and capacity upgrades.
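For the curious, the moving parts look roughly like this, with illustrative device names and mount points rather than my exact invocations:

```sh
# Two-drive array with both data and metadata in raid1, i.e. every
# block is stored on two distinct drives
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb

# A scrub re-reads everything, verifies checksums, and rewrites any
# bad copy using the good copy from the other drive
btrfs scrub start /mnt/nas
btrfs scrub status /mnt/nas

# Per-device counters for read, write, and corruption errors
btrfs device stats /mnt/nas
```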
In the past week, however, my NAS has experienced some terrible data corruption issues. Most annoyingly, it damaged a backup that I needed to restore, forcing me to perform some horrific sorcery to recover most of the data. After a whole day of troubleshooting, I was eventually able to track the problem down to a bad stick of RAM. Removing it enabled my NAS to function again, albeit with less available RAM than before.
I will now explain my setup and detail the entire event for posterity, including my thoughts on how btrfs fared against such memory corruption, how I managed to mostly recover the broken backup, and what might be done to prevent this in the future.
-
Implementing ASPA validation in the bird2 filter language
When we looked at route authorization, we discussed how Resource Public Key Infrastructure (RPKI)—or more specifically, route origin authorizations—could prevent some types of BGP hijacking, but not all of it. We also mentioned that Autonomous System Provider Authorization (ASPA), a draft standard that extends RPKI to also authenticate the AS path, could prevent unauthorized networks from acting as upstreams. (For more information about upstreams, see my post on autonomous systems).
Essentially, an ASPA is a type of resource certificate in RPKI, just like Route Origin Authorizations (ROAs), which describe which ASNs are allowed to announce a certain IP prefix. However, ASPAs describe which networks are allowed to act as upstreams for any given AS.
There are two parts to deploying ASPA:
- Creating an ASPA resource certificate for your network and publishing it, so that everyone knows who your upstreams are; and
- Checking routes you receive from other networks, rejecting the ones that are invalid according to ASPA.
The first part is fairly straightforward, with RPKI software like Krill offering support out of the box. One simply has to set up delegated RPKI with the RIR that issued the ASN. I’ll give a quick overview of the process, but it’s not the main focus today.
Unfortunately, the second part is less than trivial, since ASPA is just a draft standard, not widely supported by router software. Only OpenBGPD, which I don’t use, has implemented experimental support. However, that doesn’t mean we can’t use ASPA today—we simply need to implement it ourselves. Thus, I embarked on this journey to implement ASPA filtering in the bird2 filter language.
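To give a flavour of what this involves, here’s a minimal sketch of the pairwise check in the bird2 filter language (BIRD 2.0.9 or later for the for loop). It assumes a hypothetical generated function aspa_check(customer, provider) returning 0 for no ASPA, 1 for authorized, and 2 for unauthorized, and it glosses over complications like AS_SETs:

```
# Sketch only: walking the AS path from our neighbour towards the
# origin, each adjacent pair must be (provider, customer) according
# to the customer's published ASPA.
function aspa_verify_upstream()
int prev;
{
  prev = 0;
  for int asn in bgp_path do {
    # Skip prepends, where the same ASN appears multiple times in a row
    if prev != 0 && prev != asn then {
      # prev is nearer to us, so prev must be a provider of asn
      if aspa_check(asn, prev) = 2 then return false;
    }
    prev = asn;
  }
  return true;
}
```

The real work, covered in the rest of the post, is generating something like aspa_check from published ASPA objects and deciding how to treat ASes that haven’t published one.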
-
Installing Debian (and Proxmox) by hand from a rescue environment
Normally, installing Debian is a simple process: you boot from the installer CD image and follow the menu options in debian-installer. Simple, right? Or even easier, just use the Debian image provided by your server vendor, since Debian is quite popular and an image is bound to be available. Given the simplicity of this, you might have idly wondered: what’s actually going on behind debian-installer’s pretty menus? Well, you are about to find out.
You see, recently, I got this cheap headless dedicated server without IPMI1—really, just an Intel N100 mini PC. To cut costs, there was no video feed, as that would require separate hardware to receive and stream the screen. Instead, there’s only the ability to power cycle and boot from PXE, which is used to perform a variety of tasks, such as booting rescue CDs or performing automated installation of operating systems. This shouldn’t have been a problem for my use case, since there was a Proxmox 8 image right there, and I just set it to install automatically.
Of course that didn’t work, because I wouldn’t be writing about it if it did! As it turns out, the Proxmox 8 image (and also the Debian 12 image) didn’t have the firmware for the Realtek NICs on the mini PC, which prevented them from working. I thought that I just needed to install the firmware package, but when I booted into the included Finnix rescue system, it appeared that Debian wasn’t installed at all! Clearly, the PXE installer failed to start due to the missing firmware.
What now? Well, I’ve already done some pretty sketchy Debian installs in the past, so I thought I might as well just go all out and install a full Debian system through the rescue system. Unlike last time though, I’ll do a complete clean install, instead of keeping the partition scheme.
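The skeleton of such an install looks roughly like the following, a heavily abridged sketch with placeholder device names and suite; the rest of the post fills in the details:

```sh
# Partition the disk (fdisk/parted), then create and mount the root fs
mkfs.ext4 /dev/sda2
mount /dev/sda2 /mnt

# Bootstrap a minimal Debian system into the target directory
debootstrap bookworm /mnt

# Bind the virtual filesystems and enter the new system
for fs in dev proc sys; do mount --rbind "/$fs" "/mnt/$fs"; done
chroot /mnt /bin/bash

# Inside the chroot: kernel, the missing Realtek firmware, bootloader
apt install linux-image-amd64 firmware-realtek grub-pc
grub-install /dev/sda
update-grub
```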
-
Custom mechanical keyboard: OS-specific custom RGB lighting with QMK
My old Corsair keyboard has been struggling recently. It has some weird issues, either in hardware or firmware, that cause it to sometimes go crazy and randomly “press” the wrong keys, forcing me to pull out my backup keyboard until the lunacy1 passes. On top of that, managing it requires Corsair’s bloated, Windows-only iCUE software or a reverse-engineered alternative like ckb-next, which isn’t fun for a Linux user like me, and even with ckb-next, the customization is limited.
So I figured I’d get a new keyboard. I have a few simple requirements:
- It should be a 100% keyboard because I use the numpad quite a bit for number entry, e.g. to manage my personal finances;
- It should have a backlight since I often use my computer at night in relative darkness, and while I can touch type just fine, being able to see the keyboard is nice;
- It should have tactile mechanical switches, but not the obnoxious clicky ones. For reference, my old keyboard has Cherry MX browns, which I liked; and
- It should have properly programmable and customizable firmware. QMK is the popular option, so I searched for keyboards supporting that, and failing that, at least keyboards with proper first-party Linux support.
As it turned out, I couldn’t find any prebuilt mechanical keyboards that ticked all the boxes and were in stock, so I figured I might just get into the custom mechanical keyboard scene and build my own. Thus began a journey of immense frustration and nerd-sniping…
-
On the Inter-RIR transfer of AS200351 from RIPE NCC to ARIN
As you might know already, on May 24, 2024, at the RIPE NCC General Meeting, model C for the 2025 charging scheme was adopted. I will not go into the details here, such as the lack of an option to preserve the status quo1, but model C involved adding an annual fee of 50 EUR per ASN, billed to the sponsoring LIR. This meant that the sponsoring LIR for AS200351 would be forced to bill me annually for at least 50 EUR for the ASN, plus some administrative overhead and fees for payment processing2.
To protest against this fee and save myself some money, I decided to transfer AS200351 to ARIN, which charges nothing extra for me to hold an additional ASN, since my current service category at ARIN allows up to 3 ASNs and I had only one with them already: AS54148.
And so, on June 2nd, I decided to initiate the process to transfer AS200351, which was in active use, to ARIN. As it turned out, this became an ordeal, especially on the RIPE NCC end. Since I’ve been asked many times about the process, I am writing this post to share my experience, so that you know what to expect.