Running Unifi Controller on a Raspberry Pi
I recently migrated my Unifi controller from a bhyve instance in Triton to an LXC container on a Raspberry Pi. I won’t go into all the reasons why here, but suffice it to say that I had made some choices about my existing network that, while they made sense at the time, didn’t really gel with the way Unifi is intended to operate. I’ve been running a controller myself for over a year, and I already have a router and several spare Raspberry Pis lying around, so a Cloud Key or Dream Machine wasn’t something I was willing to pay for just yet.
Finding the right distro
Shopping around for an operating system to run on the Raspberry Pi, I ended up choosing Ubuntu, since FreeBSD isn’t supported for the Unifi controller. The main reason I chose Ubuntu is that it has a 64-bit arm64 image while raspbian and alpine do not. Ubuntu also supports the WiFi on the rpi3 and rpi4 out of the box, which I definitely wanted without having to fiddle with it. I actually ended up not using either of those, but more on that later. I’d also been using Ubuntu for my bhyve controller instance, so I figured getting it set up would be pretty straightforward.
The Ubuntu images for Raspberry Pi have some really nice features. There are a number of files on the SD card that feed directly into cloud-init, which is something I’m quite accustomed to from using Ubuntu on Triton. This made configuring networking, including wifi, and my ssh keys a cinch.
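For instance, the SD card’s system-boot partition holds network-config and user-data files that cloud-init reads on first boot. A minimal sketch of the sort of thing I mean (the SSID, password, and GitHub user here are placeholders):

# network-config on the system-boot partition (values are placeholders)
version: 2
ethernets:
  eth0:
    dhcp4: true
wifis:
  wlan0:
    dhcp4: true
    access-points:
      "My Home Network":
        password: "walt sent me"

#cloud-config
# user-data on the same partition; imports ssh keys for a (placeholder) GitHub user
ssh_import_id: [gh:example-user]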
First Steps
I ran into a couple of problems initially. First, Unifi’s apt repo only has packages for armhf, not arm64. I figured, oh well, it’s not like 64-bit is actually giving me much on a system with only 1GB of RAM, so I re-imaged the SD card with 32-bit ubuntu-18, loaded my network-config, booted it up, and promptly ran into my second issue: the Unifi controller doesn’t run on Ubuntu 18 due to an issue with MongoDB. I could have, maybe, looked around for a ppa with an older version of mongo and apt-pinned it, but that seemed both fairly fragile in the long run and not ideal. I remembered that Ubuntu comes with LXD installed by default and decided to give it a try.
Now, this was my first time using either lxc or lxd. I’ve used Docker, though never runc, but lxc containers feel more like SmartOS Zones than the single-process containers of Docker. I did a bit of reading to get a primer on lxc, and with my newfound knowledge I figured out that Ubuntu provides xenial armhf lxc images that support cloud-init (whereas images from other sources often don’t). Bingo.
Creating the container was super simple. Props to the lxc people.
sudo lxc launch ubuntu:16.04 unifictl
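A quick sanity check that the container actually came up (both are standard lxc subcommands; output will vary):

# list containers and show details for the new one
sudo lxc list
sudo lxc info unifictl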
Networking Misadventure
Having never used lxc before, getting the networking right took me a few tries. By default, lxd wants to set up a bridge on a private network, with dnsmasq providing DHCP and iptables masquerading (NAT) for everything behind it. I want my controller to have a direct network interface for L2 discoverability with Unifi devices. I spent far too much time trying to figure out what the recommended way to do this was. As near as I can tell, if you’re not using the default, you’re basically just on your own and you can do whatever you want. And coming from illumos with crossbow, virtual networking on Linux…let’s just say it leaves a lot to be desired.
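For reference, you can inspect what lxd created by default before deciding whether to keep any of it:

# list lxd-managed networks and dump the default bridge's config
lxc network list
lxc network show lxdbr0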
Ultimately I went with a bridge attached to the wired interface for the container (since I needed the controller on vlan 1 for managing devices), with wlan0 connecting to my wifi network (which is on vlan 3). My modified netplan looked like this:
# This file is generated from information provided by the datasource.
# Changes to it will not persist across an instance reboot.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      accept-ra: no
  bridges:
    br0:
      dhcp4: false
      accept-ra: no
      interfaces: [eth0]
      addresses: [172.28.1.10/24]
  wifis:
    wlan0:
      dhcp4: true
      optional: true
      access-points:
        "My Home Network":
          password: "walt sent me"
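Since this reconfigures the very interface you’re likely ssh’d in over, netplan try is a safer first step than netplan apply; it rolls back automatically unless you confirm the change:

sudo netplan try    # applies the config, reverts unless confirmed
sudo netplan apply  # makes it permanent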
With that set, I needed to reconfigure LXD. Initially I did this by purging and re-installing the packages, but apparently all I needed was lxc network delete lxdbr0 to remove the lxd bridge I didn’t want so that I could use my own. My final lxd preseed looks like this.
# lxd init --preseed <<EOF
config: {}
networks: []
storage_pools:
- config: {}
  description: ""
  name: default
  driver: dir
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      nictype: bridged
      parent: br0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
cluster: null
EOF
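You can verify what the preseed produced with the usual show subcommands:

lxc profile show default
lxc storage show default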
This creates a non-clustered (because it’s just one rpi), local-only lxd with a storage pool named default that just uses a directory on the filesystem. Other options are btrfs or lvm, neither of which I had set up, nor wanted to deal with configuring. For a raspberry pi where I’m probably only ever going to run one container, this is good enough. Maybe the next time I get around to it, zfs will be an option.
Next up was setting a static IP, since I don’t want the controller changing IPs on the devices and causing an issue with the inform IP. Nearly everything I found said to add a device with lxc config device ... and set raw.lxc values, but additional post-provision manual configuration seems absurd to me. There had to be a better way. This is again where LXC falls short, because there’s absolutely no guidance here whatsoever, and the answer really is that if you’re not using the default you’re completely on your own. However, I did eventually find lxc/lxd#2534 where stgraber says:
Though, note that the preferred way to do this is through your Linux distribution’s own configuration mechanism rather than pre-configure things through raw.lxc.
For Ubuntu, that’d be through some cloud-init configuration of some sort, that said, if raw.lxc works for you, that’s fine too :)
I suppose in hindsight it should have been obvious to me that I wasn’t looking for how to configure container networking, I was looking for how to pass in cloud-init data. Coming from illumos, I’m used to the global zone configuring networking on behalf of zones and not allowing them permission to modify it.
Since I needed an ubuntu-16 container to run the unifi controller, and the older version of cloud-init in xenial only supports version 1 cloud-config networking, the format was different from what I used to provision the rpi itself.
#cloud-config
network:
  version: 1
  config:
    - type: physical
      name: eth0
      subnets:
        - type: static
          ipv4: true
          address: 172.28.1.11
          netmask: 255.255.255.0
          gateway: 172.28.1.1
          control: auto
    - type: nameserver
      address: 8.8.8.8
And finally, launching the container.
lxc launch ubuntu:16.04 unifictl --config=user.network-config="$(cat network.yml)"
As far as I can tell, there’s no way to pass in a filename, thus resorting to a subshell. Since YAML is a superset of JSON, you could also do a one-liner of all JSON. I don’t know, choose whichever pain you’d prefer to have.
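For the record, the all-JSON flavor would look something like this (functionally the same config as the YAML above, just inlined):

lxc launch ubuntu:16.04 unifictl --config=user.network-config='{"network": {"version": 1, "config": [{"type": "physical", "name": "eth0", "subnets": [{"type": "static", "ipv4": true, "address": "172.28.1.11", "netmask": "255.255.255.0", "gateway": "172.28.1.1", "control": "auto"}]}, {"type": "nameserver", "address": "8.8.8.8"}]}}'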
At long last, getting the controller installed
Getting into the running container is as easy as lxc exec unifictl bash, and you’re root with a static IP. From here, there are a number of scripts and tutorials for setting up the unifi controller. That seemed like overkill. I did the following:
# apt source
echo 'deb http://www.ui.com/downloads/unifi/debian stable ubiquiti' > /etc/apt/sources.list.d/100-ubnt-unifi.list
apt-key adv --keyserver keyserver.ubuntu.com --recv '06E85760C0A52C50'
# install
apt update && apt install openjdk-8-jre-headless unifi
# Make sure mongo and unifi run
systemctl enable mongodb
systemctl enable unifi
systemctl start mongodb
systemctl start unifi
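To confirm it actually came up (the -k is needed because the controller ships with a self-signed certificate, and it can take a minute or two to start listening):

# inside the container
systemctl status unifi --no-pager
curl -k https://127.0.0.1:8443/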
At this point I’ve got unifi running in a container on vlan 1 where it can talk to all of my devices, and my wireless network is on vlan 3.
Finalizing the set up with a reverse proxy and SSL certificates
I like to keep my networks isolated, so I added an nginx reverse proxy to the rpi itself (in what I would call the global zone, though Linux apparently doesn’t have a name for it).
Ubiquiti has documented the ports necessary to access the controller. Ports 8080, 8443, 8880, and 8843 carry HTTP traffic (8443 and 8843 over SSL). Port 6789 is for the mobile speed test and is plain TCP. STUN on port 3478 is only needed by Unifi devices, which are on VLAN 1, so it won’t need to be proxied.
Here’s the nginx config that I used. Note that I’ve elided common settings such as logging and SSL. See https://ssl-config.mozilla.org to generate a suitable SSL configuration for your site, and always use Let’s Encrypt if possible.
# non-ssl ports
server {
    listen 8080;
    listen 8880;
    listen [::]:8080;
    listen [::]:8880;
    server_name _;

    location / {
        proxy_pass_header server;
        proxy_pass_header date;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr; proto=$scheme; by=$server_addr";
        proxy_set_header X-Forwarded-For "$remote_addr";
        proxy_pass http://172.28.1.11:$server_port;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}

# ssl ports
server {
    listen 8443 ssl http2;
    listen 8843 ssl http2;
    listen [::]:8443 ssl http2;
    listen [::]:8843 ssl http2;
    server_name _;

    # SSL options go here. See https://ssl-config.mozilla.org
    location / {
        proxy_pass_header server;
        proxy_pass_header date;
        proxy_set_header Host $host;
        proxy_set_header Forwarded "for=$remote_addr; proto=$scheme; by=$server_addr";
        proxy_set_header X-Forwarded-For "$remote_addr";
        proxy_pass https://172.28.1.11:$server_port;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
I put this in /etc/nginx/sites-available and symlinked it in sites-enabled, as is normal on Debian/Ubuntu.
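Something along these lines, assuming the file is named unifi (the name is arbitrary):

ln -s /etc/nginx/sites-available/unifi /etc/nginx/sites-enabled/unifi
nginx -t && systemctl reload nginx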
As I mentioned, the mobile speed test on port 6789 is not HTTP, so it needs to go outside of the http stanza. Given that both the sites-enabled and conf.d include directives are inside the http stanza, the stream stanza needs to go directly in nginx.conf. Append this to the end.
# Unifi controller mobile speed test
stream {
    server {
        listen [::]:6789;
        proxy_pass 172.28.1.11:6789;
        proxy_buffer_size 128k;
    }
}
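After another nginx -t and reload, a quick way to confirm the TCP proxy answers over IPv6 (the address below is a documentation placeholder; substitute the rpi’s actual address):

nginx -t && systemctl reload nginx
# 2001:db8::10 is a placeholder for the rpi's IPv6 address
nc -vz 2001:db8::10 6789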
I could also have created a container for this, and I may still do that when I have some time.
The last hurdle
There was one final issue after getting the controller set up. Everything seemed to work great, but since I was replacing a non-unifi switch with a unifi switch, I had to do some reconfiguration of the network, including the wireless access points. In the past, whenever wireless was down for whatever reason (e.g., firmware updates), I could disconnect wifi on my phone and reach my controller’s IPv6 address over the cell network. This worked because my controller, being a bhyve instance, was wired. The raspberry pi, however, was connected to my main network over wifi; I didn’t include IPv6 on vlan 1 since the unifi devices don’t yet support it (or maybe they just don’t support it without a USG?). So whenever the wifi was down, I couldn’t access the controller remotely at all. Maybe this is something you can deal with, but in my experience, when the wifi is down is precisely when I need to access the controller.
I needed to change the Pi to use wired networking for vlan 3 rather than connecting over wifi. To do this, I changed the switch port profile for the pi to All and changed the netplan to add a vlan interface.
#cloud-config
# This file is generated from information provided by the datasource.
# Changes to it will not persist across an instance reboot.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      accept-ra: no
  bridges:
    br0:
      dhcp4: false
      accept-ra: no
      interfaces: [eth0]
      addresses: [172.28.1.10/24]
  vlans:
    vlan.3:
      id: 3
      link: br0
      dhcp4: true
      accept-ra: yes
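After a netplan apply, the tagged interface should come up with a DHCP lease from vlan 3:

sudo netplan apply
# vlan.3 should now have an address from the vlan 3 subnet
ip -br addr show dev vlan.3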
I now have the equivalent of a cloud key for the price of a spare raspberry pi I had lying around.
Conclusion
To summarize, here are the key components to reproducing this.
- Ubuntu 18 raspberry pi image. I used 32-bit, but if I were to do it again I’d try 64-bit first.
- Use only wired networking. I still don’t know what will happen when I need to update the firmware on the switch. Juniper switches can still pass traffic while the switch control plane is rebooting. Here’s hoping the unifi can do the same! Maybe it’s unavoidable and I might as well just use wifi. We’ll see.
- Create your own bridge to give the controller instance an interface directly on the network with no NAT. Or, have fun with iptables.
- Ubuntu 16 armhf image (lxc launch ubuntu:16.04/armhf, if you’re using arm64). You could also use Debian; you might be able to use a release newer than ubuntu-16 without the mongo problem, but Xenial is LTS until 2021.