The Great Big Server Upgrade: Part 1
For the past five years or so, my home server has been a Dell office PC I found at a thrift store for $5 during my student days.
It’s served me incredibly well over the years. Between hosting Jellyfin, Nextcloud, and (at one point) Minecraft, there’s no way I haven’t gotten my money’s worth out of it. But I also can’t help but feel like I’m starting to outgrow it.
I should upgrade the storage from the single 1 TB HDD I started with, since I’m starting to run low on space. I could probably add more RAM than the 8 GB it currently has while I’m at it too. The CPU is also due for an upgrade, and at that point I’d essentially be building a brand new computer.
Making a plan
I started out considering two options.
- I could build an all-new home server. I would be in full control of the hardware, and future upgrades would be fairly straightforward. I would want a solid offsite backup solution though, which could get expensive. Not to mention that family outside my network would have their downloads limited by my 40 Mbps home upload speed.
- I could rent a dedicated server from a hosting provider. Bandwidth would be no concern and networking would likely be more reliable in general. Everything I stored on it could be considered an offsite backup, meaning as long as I kept a local copy of everything important I would be golden. This option had the potential to be more expensive in the long run, though.
I thought about it for a bit and started liking Option 2 more and more. A large reason for that was OVHcloud’s Eco dedicated servers (though I bet other hosting providers have similar offerings). There were a few configurations I liked that were a lot more affordable than I was expecting. The one I settled on, for instance, I could run for two years before the total cost would even start to approach what I expected to spend building my own server. And this was all before RAM and SSD prices recently went to the moon.
Another benefit to Option 2 in hindsight is that it would give me a pretty low-risk way to experiment and get familiar with some new-to-me tools like TrueNAS and Proxmox. Buying my own hardware would be a much bigger commitment than just a month or two of renting a server.
So with a plan in mind I signed up for a dedicated server from OVHcloud and started figuring out the next steps.
The first attempt at TrueNAS
TrueNAS had been recommended to me by a friend, so I decided to start there and get it installed.
If you’re like me and this is the first time you’re hearing about TrueNAS, in a nutshell it’s a really easy way to set up a NAS with ZFS. It also has a bunch of other nice features, like easy-to-deploy apps that run on Docker. YouTube has lots of video guides that give a better picture of what it’s like, if you’re curious.
Once I had TrueNAS up and running I had a lot of fun poking around and getting ideas for how I wanted things set up. One thing that was worrying me, though, was that my TrueNAS instance was public-facing on the internet, totally exposed for anyone to poke at. Ideally I wanted to block all incoming internet traffic and instead have it accessible only through a Tailscale VPN. I was able to set up Tailscale easily enough by adding it as an app through the TrueNAS web UI, but it turns out that TrueNAS has no built-in firewall at all. Fair enough, most people aren’t installing it on a machine that directly faces the internet. Still, I really didn’t like the idea of leaving everything open, so I started looking into options.
Even though TrueNAS is based on Debian and it’s not hard to open a shell on it, the developers really discourage making any kind of change outside of the web UI. I got the feeling that adding my own firewall would be extremely unsupported and prone to breakage, if I could even get one working in the first place (I’m something of an idiot when it comes to networking). So I went back to the drawing board.
Adding Proxmox to the mix
Another tool I had recently learned about was Proxmox. It’s also based on Debian, but is focused on creating and managing virtual machines. My new plan was to reinstall the server with Proxmox instead of TrueNAS, install TrueNAS in a VM inside of Proxmox, then have Proxmox worry about all the networking.
There were a few extra steps to worry about when running TrueNAS in a VM. The most important one was passing through the host’s SATA controller so that TrueNAS could work with the storage drives directly. I learned that passing through the whole controller, rather than individual drives, makes a real difference in TrueNAS being able to monitor drive health. Thankfully this was mostly painless and could be done through the Proxmox UI.
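If you’d rather script it than click through the UI, Proxmox’s qm tool can make the same change from the command line. Here’s a small Python sketch that just finds the SATA controller on the host and prints the command to run; the VM id is a made-up example and your PCI address will differ:

```python
# find_sata_passthrough.py - a helper sketch, not official Proxmox tooling.
# Finds SATA controllers on the Proxmox host and prints the `qm set` command
# that would pass one through to a VM. The VM id below is a placeholder.
import subprocess

VMID = 100  # hypothetical TrueNAS VM id

# `lspci -nn` prints every PCI device with its address and vendor:device IDs.
lspci = subprocess.run(["lspci", "-nn"], capture_output=True, text=True, check=True)

for line in lspci.stdout.splitlines():
    if "SATA" in line:
        pci_address = line.split()[0]  # e.g. "00:17.0"
        print(f"Found SATA controller: {line}")
        # Passing through the whole controller; IOMMU must be enabled on the host.
        print(f"Run: qm set {VMID} --hostpci0 0000:{pci_address}")
```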
Once I had TrueNAS working, I installed Tailscale on the Proxmox host and made sure that was working. Then I was able to add some firewall rules in the Proxmox UI to block all incoming connections except from Tailscale.
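A quick way to sanity-check rules like these is to try connecting to the same ports over the public address and over the Tailscale address and see which ones answer. Here’s a sketch of that check; the addresses are placeholders, and 8006 and 443 are just the default Proxmox and TrueNAS web UI ports:

```python
# firewall_check.py - a quick sanity-check sketch, not part of Proxmox.
# Tries TCP connections to the server's public IP and its Tailscale IP.
# With the firewall set up correctly, only the Tailscale address should answer.
import socket

PUBLIC_IP = "203.0.113.10"    # placeholder: the server's public address
TAILSCALE_IP = "100.64.0.10"  # placeholder: the server's Tailscale address
PORTS = [22, 443, 8006]       # SSH, TrueNAS web UI, Proxmox web UI (defaults)

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in (PUBLIC_IP, TAILSCALE_IP):
    for port in PORTS:
        status = "open" if can_connect(host, port) else "blocked/unreachable"
        print(f"{host}:{port} -> {status}")
```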
I might write more about the specifics in a future post, but essentially I had to do the following (there’s a rough sketch after the list):
- Create a new network bridge to connect the Proxmox host to the VM.
- Route outbound traffic from that bridge out through the host’s interface.
- Forward all ports not being used by Proxmox to the TrueNAS VM.
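To give a flavour of the last two steps, here’s a rough sketch of the kind of NAT and port-forwarding rules involved, written as a small Python script that shells out to iptables. The bridge name, subnet, VM address, and port list are all placeholders, the real setup forwards everything not used by Proxmox rather than a short list, and Proxmox has its own ways of persisting rules, so treat this as an illustration rather than a recipe:

```python
# nat_forward_sketch.py - illustration of the NAT + port-forward idea, not an exact setup.
# Assumes a Linux host with iptables; run as root. All names and addresses are placeholders.
# Note: net.ipv4.ip_forward must be enabled on the host, and depending on your
# FORWARD chain policy you may also need matching ACCEPT rules.
import subprocess

WAN_IF = "vmbr0"                  # bridge holding the host's public interface (placeholder)
LAN_SUBNET = "192.168.100.0/24"   # internal bridge subnet the VM lives on (placeholder)
VM_IP = "192.168.100.2"           # TrueNAS VM address on that bridge (placeholder)
FORWARD_PORTS = [22, 80, 443]     # example ports to hand to the VM; Proxmox keeps 8006, etc.

def iptables(*args: str) -> None:
    """Run a single iptables command, raising if it fails."""
    subprocess.run(["iptables", *args], check=True)

# Step 2: masquerade outbound traffic from the internal bridge out the host's interface.
iptables("-t", "nat", "-A", "POSTROUTING", "-s", LAN_SUBNET, "-o", WAN_IF, "-j", "MASQUERADE")

# Step 3: forward selected inbound ports on the public interface to the TrueNAS VM.
for port in FORWARD_PORTS:
    iptables("-t", "nat", "-A", "PREROUTING", "-i", WAN_IF, "-p", "tcp",
             "--dport", str(port), "-j", "DNAT",
             "--to-destination", f"{VM_IP}:{port}")
```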
Once all this was done and debugged, I was finally able to access both Proxmox and TrueNAS over Tailscale only, and TrueNAS had a working outgoing connection.
Figuring out file transfers
The last piece of the puzzle (for now) was to figure out the best way to do transfers to and from TrueNAS’s shares.
One of my goals was to make this NAS available to family for their backups too, not just mine. Whatever I ended up doing, I wanted to keep it fairly easy for them to set up and use. Thankfully, TrueNAS supports SMB shares out of the box, and SMB is very widely supported and often doesn’t require installing any extra software to use.
The only problem was that SMB in my testing was painfully slow, around 10% of the speed I should have been getting. I did a lot of debugging on this and went pretty far down the rabbit hole. My takeaway: SMB is just a bad protocol to use outside of a LAN when there’s any kind of meaningful latency. I had come across multiple sources describing how chatty a protocol SMB is, and by that point it was the only explanation left for the poor performance I was seeing.
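A quick back-of-envelope calculation shows why latency hurts so much. The numbers here are made up for illustration (I’m assuming a round-trip time and how much data the client keeps in flight per request), but they show the shape of the problem: a chatty, request-and-wait protocol is capped by how many round trips it can make per second, not by how fat the pipe is.

```python
# Back-of-envelope: why a chatty protocol crawls over a high-latency link.
# All numbers are made-up assumptions for illustration only.

rtt_s = 0.060                # assumed round-trip time to the server: 60 ms
in_flight_bytes = 1 * 2**20  # assume the client waits on ~1 MiB per round trip
link_mbps = 1000             # assumed raw link speed: 1 Gbps

# If the client sends a chunk and waits for the reply before sending the next,
# throughput is capped at (data per round trip) / (round-trip time).
capped_mbps = in_flight_bytes * 8 / rtt_s / 1e6
print(f"Latency-capped throughput: {capped_mbps:.0f} Mbps")
print(f"Fraction of the link actually used: {capped_mbps / link_mbps:.0%}")
# -> roughly 140 Mbps, i.e. ~14% of a gigabit link, no matter how fast the pipe is.
```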
I set things up for users to connect over SFTP instead, and there I finally saw the speeds I was expecting. SFTP is a little more setup for users (at least on Windows and Mac; the Gnome file browser has built-in support!), but that was definitely worth the trade-off if it meant transfers going ten times faster.
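One nice side effect of SFTP is that it’s easy to script if I ever want to automate a transfer or benchmark speeds. Here’s a minimal sketch using the paramiko library; the host, credentials, and paths are all placeholders, not my real setup:

```python
# sftp_upload_sketch.py - minimal SFTP upload using paramiko (pip install paramiko).
# Host, credentials, and paths below are placeholders.
import paramiko

HOST = "nas.example.com"  # placeholder: Tailscale name or IP of the server
USER = "backup-user"      # placeholder SFTP user on TrueNAS
PASSWORD = "changeme"     # placeholder; key-based auth is the better option

client = paramiko.SSHClient()
# Accepting unknown host keys automatically is convenient for a sketch,
# but for real use you'd want to verify the server's key instead.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

sftp = client.open_sftp()
# Upload a local file into the share; both paths are made up.
sftp.put("backup-2024.tar.gz", "/mnt/tank/backups/backup-2024.tar.gz")
sftp.close()
client.close()
```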
Where to go from here?
The NAS functionality of this new server is pretty much done. I’ve been testing it out by backing some of my files up, and I’ve already got automated ZFS snapshots and VM backups going.
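The automated snapshots are handled through TrueNAS, but under the hood it’s plain ZFS, so taking or listing one by hand boils down to a couple of commands. Here’s a small sketch wrapping the ZFS CLI; the dataset name is a placeholder:

```python
# zfs_snapshot_sketch.py - manual equivalent of what an automated snapshot task does.
# The pool/dataset name is a placeholder; run on the TrueNAS host with root privileges.
import subprocess
from datetime import datetime, timezone

DATASET = "tank/backups"  # placeholder dataset

# Create a snapshot named with a UTC timestamp, e.g. tank/backups@manual-20240101T120000
snap_name = f"{DATASET}@manual-{datetime.now(timezone.utc):%Y%m%dT%H%M%S}"
subprocess.run(["zfs", "snapshot", snap_name], check=True)
print(f"created {snap_name}")

# List existing snapshots for the dataset.
subprocess.run(["zfs", "list", "-t", "snapshot", "-r", DATASET], check=True)
```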
Over the next little while I’ll be migrating most of my services over to this new server, starting simple and then working up to the important ones.
There are a few services I want to check out for the first time too, like Immich. I don’t have any plans right now to ditch Google Photos, but I’m excited to see if Immich can serve as a good backup location.
If there’s anything else you think I should look into, or if you have any feedback at all, feel free to shoot me a message. Like I said, this is my first time really digging deep into a lot of this stuff and I’m hoping I’m doing things right.
— JP