So I recently finally got around to repurposing an old Medion PC that I saved from becoming landfill fodder about 4 or 5 years ago.
I decided to rehouse it in a new case so I could fit more drives. Think this build from Wolfgang, except with a 2013-or-older motherboard instead of something snazzy and modern with many SATA ports and ECC RAM support.
The rehousing wasn't exactly smooth. It turns out the graphics card's PCIe connector is busted (there was a reason it got replaced as a family member's main computer in the first place), so my dreams of using it for transcoding went down the drain. (Not that the card itself was anything to write home about; I think it's an Nvidia card that predates the GTX series.)
I also took the chance to swap the PSU for a slightly more powerful modular one. I didn't fully trust that the one that came with the PC would be up to the task, and I'd rather have the PSU run at lower capacity (and make less noise.)
Also, it turned out that the noise from the case fans (which happen to be the same ones Wolfgang used in the video) has a specific pitch that drives me insane, so I also spent a week or two tweaking configs and buying thingamabobs to try to control the fans (which are not PWM-controlled and run at full speed 24/7) and get them to spin slower. In the end, resistor cable adapters (which bring the voltage down) and some silicone clips saved the day. But the journey to get there was tough, involving a lot of waiting for packages that I later had to return.
After several weeks of unwillingly intermittent work on it, I have finally been able to move on to the software side.
The important hardware bits
I'm running an Intel i7-4770. The OS lives on a 500GB SSD, and I have two 4TB drives (one bought specifically for the occasion, and one moved over from my old NAS server), plus an old 6TB drive that used to live in my main PC. The three data drives are pooled with mergerfs, and I plan to get a SkyHawk 6TB drive to add as a parity drive in a SnapRAID configuration. (I know SkyHawk is not ideal, but the market is what it is right now, and it's the best deal I can get on a 6TB at the moment.)
I'll probably get refurbished drives when I expand the storage pool in the future, but I didn't have the courage to use refurbished drives for the starting lineup.
If you're wondering about this mergerfs and SnapRAID setup, I mentioned it in my post about my first homelab server, but I'm copying the setup from Perfect Media Server.
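For a rough idea of what that setup looks like, the pooling boils down to one fstab entry plus a SnapRAID config. This is a hedged sketch under assumed mount points (`/mnt/disk*` for the data drives, `/mnt/parity1` for the future parity drive), not my exact config; check Perfect Media Server for the real thing:

```shell
# /etc/fstab — pool the data drives under /mnt/storage with mergerfs
# (category.create=mfs writes new files to the drive with most free space)
/mnt/disk* /mnt/storage fuse.mergerfs cache.files=off,dropcacheonclose=true,category.create=mfs 0 0

# /etc/snapraid.conf — the planned 6TB drive holds the parity data
# parity /mnt/parity1/snapraid.parity
# content /var/snapraid/snapraid.content
# data d1 /mnt/disk1/
# data d2 /mnt/disk2/
# data d3 /mnt/disk3/
```

With that in place, a periodic `snapraid sync` computes the parity that lets you recover a failed data drive.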
Software time
I can finally try out Proxmox on this machine! I've seen it many times on homelab YouTube, and it made me salivate, but my previous NAS was just too limited to run it, so I had to make do with vanilla Debian.
So, to really make sure I got it right, I installed Proxmox 3 times in a row. Nah, just kidding. It just so happened that for the first install I was following a guide that recommended merging the 2 separate volumes that the standard installation creates (one for the system and one for the containers and VMs); however, after giving it a bit of thought, my planned setup is to run containers off the SSD whenever possible and bind mount folders on the HDDs for the services that need beefy storage. And it would be nice to keep the SSD space used by containers separate from the OS space.
I also chose ext4 as the format for the drive partitions (both the first and second times), since I figured the important data would live on the storage drives anyway. But giving it a bit more thought, I realized it would be a pain in the ass to spin up a new service again if something went wrong; so I decided to reinstall a third time, this time with a proper filesystem with snapshot support that I could set up Proxmox to manage. Against my better judgement, I went with btrfs instead of ZFS, because my storage drives were already formatted as btrfs and I was somewhat familiar with it. I know btrfs support in Proxmox is still technically a "technology preview", but I'm not a company installing Proxmox on a production server, so I figure I'll mostly be fine.
(Just after installation is really the best time to do a reinstall. You haven't had time to fill it with important files to back up yet.)
Apparently the btrfs installation doesn't set up 2 separate volumes, though, so the second installation was a complete waste of time.
Boxes within boxes
Even though this server rocks an i7, I don't fully trust its capabilities, and I plan to cram it full of services. So while the "best" practice for managing Docker containers is to create a VM and host them there, I instead opted for a multi-purpose LXC container for all the Docker services that won't be accessed from outside the LAN, plus a separate LXC container for each service exposed to the internet. All of them unprivileged.
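Creating one of those unprivileged containers from the command line looks roughly like this. The VMID, template name, storage name, and resource sizes are all placeholders I made up for the example:

```shell
# Hedged sketch: create an unprivileged LXC container from a Debian template
# (101, "docker-lan", "local-btrfs" and the sizes are assumptions)
pct create 101 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --unprivileged 1 \
  --hostname docker-lan \
  --cores 2 --memory 2048 \
  --rootfs local-btrfs:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp
```

The same thing can of course be done from the web UI wizard; the CLI version is just easier to show in a post.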
I find it weird that the web UI doesn't have a way to set up a folder bind mount for container storage. Since I plan to balance my storage load between drives with the help of mergerfs, a raw disk image (the default storage system) is the complete opposite of what I need for the beefier containers. However, there's a neat little command that sets up a bind mount from a host folder into an LXC container:
pct set NNN -mpN /path/to/host/dir,mp=/container/dir
Where NNN is the LXC container's ID, and mpN is the mount point entry number (starting at mp0).
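For instance, giving a container access to a media folder on the pool would look like this (container ID and paths are hypothetical):

```shell
# Expose the mergerfs pool's media folder inside container 101 as /media
pct set 101 -mp0 /mnt/storage/media,mp=/media

# The mount point ends up in the container's config, which you can verify with:
pct config 101
```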
I do want the services to benefit from SSD speeds whenever possible, so my setup is a small raw disk image on the SSD, plus, for any service with real storage needs, a bind mount to a subfolder on the mergerfs mount.
Sharing around
Now that I finally have a somewhat beefy storage system, I'm taking the chance to centralize my data. Yes, I do plan to implement a 3-2-1 backup system, don't panic; it's just not ideal to have all my stuff scattered around external hard drives and different computers in different places with no specific order to it. Diversification is good as long as it is intentional, and this very much is not.
In the past I set up Samba shares without much thought, since my daily driver OS was Windows; but I recently switched to Linux, and found out there's a faster, more Linux-appropriate tool for sharing folders on the local network: NFS. This also lets me keep my self-hosted services on their current machines while replacing their internal or external storage with NFS storage.
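The server side of an NFS share is just a line in /etc/exports. A hedged sketch, with the export path and LAN subnet as assumptions:

```shell
# /etc/exports on the server — share the media folder read-write with the LAN
# /mnt/storage/media 192.168.1.0/24(rw,sync,no_subtree_check)

# After editing, re-export and list what's being served:
exportfs -ra
exportfs -v
```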
For example, there's a Jellyfin server on a laptop that I don't wanna move, because the laptop rocks an Nvidia GTX 950M, which is the best graphics card I own that I can dedicate to transcoding, and I don't really have the ability to take it out of the laptop and use it elsewhere. But I still prefer an NFS share over a dangling external HDD connected to the side of the laptop.
In fact, this laptop sent me on a long sidequest to figure out why my Jellyfin server was lagging when it previously wouldn't (with the dangling drive). It turned out that the Ethernet port on my laptop is busted and can't reach the high speeds of any cable I hook up to it; but before reaching that conclusion, I had to learn what all of the advanced transcoding settings in the Jellyfin admin panel mean and what they do when changed. That incidentally left my Jellyfin configuration better optimized for the laptop (which improved the situation somewhat, but not enough, until I forced it onto WiFi, which removed all remaining lag.) It also involved fiddling with the NFS configuration (like the size of data per read/write request) and setting up autofs, which mounts NFS shares on demand instead of at startup like /etc/fstab does; the latter can run into issues with the NFS server dropping the connection if it sits idle for too long. Maybe I'll collect the details of that whole sidequest in a separate post.
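The autofs part boils down to two small config files on the client. The server name, share path, and rsize/wsize values here are assumptions for illustration, not my exact tuning:

```shell
# /etc/auto.master — mount shares on demand under /mnt/nfs,
# and unmount them again after 5 minutes of inactivity:
# /mnt/nfs /etc/auto.nfs --timeout=300

# /etc/auto.nfs — one line per share; rsize/wsize set the amount of
# data transferred per read/write request:
# media -fstype=nfs4,rsize=1048576,wsize=1048576 nas.lan:/mnt/storage/media

# Reload autofs after editing; then simply accessing /mnt/nfs/media
# triggers the mount:
systemctl reload autofs
ls /mnt/nfs/media
```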
Things I've learned
LXC containers
NFS shares
autofs
Media transcoding & streaming