My Unraid Build: Thoughts and Lessons Learned
I’ve been using a NAS (network-attached storage) for years to back up data and run a Plex server. I started with a simple two-disk RAID 1 Synology NAS and currently run a Synology DS916+ with a DX517 expansion unit attached. It’s a few years old, but it still gets the job done. Synology’s NAS operating system offers a lot of apps, including a DNS server and a RADIUS server, and you can even run virtual machines. Unfortunately, the CPU and RAM in typical NAS hardware just aren’t built with VMs in mind. This is where my journey led me to Unraid.
Unraid is an OS that you download, install on a USB drive, and run on almost any hardware. This unlocks more possibilities because you can build your own server, or repurpose an old machine with higher specs than a pre-built NAS like a Synology. Want a Ryzen CPU in your Unraid server? No problem!
While the primary use case is still a NAS, the freedom to choose your own hardware makes running VMs and Docker containers much more sustainable on Unraid. As the name suggests, Unraid is “like” RAID, but it’s not hardware RAID. It uses software to provide dedicated-parity protection similar to RAID 4 across multiple disks (or, with only two disks, effectively RAID 1).
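To make the dedicated-parity idea concrete, here’s a toy sketch in Python of how single-parity protection works in general. This is the textbook XOR scheme that RAID 4-style layouts are built on, not Unraid’s actual implementation:

```python
# Toy model of single dedicated parity (RAID 4-style): the parity disk
# stores the bytewise XOR of all data disks, so any ONE failed data disk
# can be rebuilt from the parity disk plus the surviving disks.
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_disks = [b"\x01\x02", b"\x0f\x00", b"\xaa\x55"]
parity = xor_blocks(data_disks)

# Simulate losing disk 1 and rebuilding it from parity + survivors.
survivors = [data_disks[0], data_disks[2]]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_disks[1]  # the lost disk's contents come back
```

This is also why a single parity disk only protects against one simultaneous drive failure; a second parity disk is what lets you survive two.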
You can have one or two parity disks, which must be the largest disks in your array, and up to 28 data disks. You can also have SSD pools for caching. The software is not free, but the cost is not unreasonable. Note that the disk count includes your SSD cache drives as well:
$59 for up to 6 disks
$89 for up to 12 disks
$129 for unlimited disks
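When planning a build, the tier you need is driven by the total number of attached drives, cache included. A throwaway helper to illustrate (tier names and whether the boot USB counts are my assumptions; worth double-checking against Unraid’s current licensing docs):

```python
# Hypothetical planning helper: map attached drive count to license tier.
# Prices are the ones listed above; cache SSDs count toward the limit,
# and (as I understand it) the USB boot stick does not.
def license_tier(total_drives: int):
    if total_drives <= 6:
        return ("Basic", 59)
    if total_drives <= 12:
        return ("Plus", 89)
    return ("Pro", 129)

# My build: 4 HDDs + 1 VM SSD + 2 cache SSDs = 7 drives
print(license_tier(7))  # ('Plus', 89)
```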
After doing a lot of research on what I wanted for the build (though it turns out I probably should have done more), I settled on the following parts list.
Motherboard: ASRock Rack X570D4U-2L2T
CPU: AMD Ryzen 9 5950X
RAM: Corsair Vengeance LPX 64GB (2x32GB) DDR4 3200
PSU: Seasonic FOCUS PX-850, 850W 80+ Platinum PSU
Storage: WD Red Plus 10TB (x4), Samsung 870 QVO 2TB (x1)
SSD Cache: ~~Crucial P5 1TB 3D NAND PCIe Gen 3 x4 NVMe (x2)~~ Samsung 840 EVO 250GB (x2)
PCIe: ~~ASUS Hyper M.2 X16 PCIe 4.0 X4 Expansion Card~~, ~~Nvidia GTX TITAN X~~, LSI Internal SATA/SAS 9211-8i 6Gb/s PCI-Express 2.0 RAID Controller Card
Case: Rosewill RSV-L4000U 4U Server Chassis Rackmount Case
Fans: Noctua NF-A8 ULN (x2), Noctua NF-A12x25 FLX (x5), Noctua NH-D15 SSO2 D-Type Premium CPU Cooler
Extras: Cable Matters Internal Mini SAS to SATA Cable (x2), Cable Matters 2-Pack 15 Pin SATA to 4 SATA Power Splitter Cable (x2), SanDisk 16GB Cruzer Fit USB 2.0 Flash Drive
The motherboard is a server board that got very good reviews, and overall I’m very happy with it. One of its appeals for me was the networking: two 1GbE NICs, two 10GbE NICs, and a dedicated management NIC. It also has onboard graphics with a VGA output, so there’s no need for a discrete GPU just to get video out.
I was able to update the BIOS by downloading it from ASRock’s website and booting to DOS from a flash drive; the same process worked for the BMC firmware. Even before the BIOS update, the board POSTed without issues with the AMD 5950X installed.
Finally, one of the key features of this motherboard is the ability to bifurcate the top PCIe slot. It’s an x16 slot that can be bifurcated into 4x4x4x4, 8x8, 4x4x8, or 8x4x4. This is important if you’re trying to run something like the ASUS Hyper M.2 card in that slot.
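The split options are just different ways of dividing the slot’s 16 lanes, which is why a passive carrier card like the ASUS Hyper M.2 (one x4 link per NVMe drive) needs the 4x4x4x4 setting. A quick sanity check:

```python
# Each bifurcation option divides the same 16 lanes; a passive M.2
# carrier card wants 4x4x4x4 so every NVMe drive gets its own x4 link.
options = {
    "4x4x4x4": [4, 4, 4, 4],
    "8x8":     [8, 8],
    "4x4x8":   [4, 4, 8],
    "8x4x4":   [8, 4, 4],
}
assert all(sum(widths) == 16 for widths in options.values())

four_drive_capable = [name for name, widths in options.items()
                      if widths.count(4) >= 4]
print(four_drive_capable)  # only one split offers four x4 links
```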
This motherboard was sold out everywhere I looked, so I ended up getting a used one on eBay for about $100 off retail. Buying used computer hardware is always a roll of the dice, but fortunately this one had no issues.
CPU, RAM, PSU
The AMD Ryzen 9 5950X is overkill for this setup, but I wanted to future-proof the server. This CPU crushes it, and with 16 cores I’m able to run a lot of VMs. However, the limit on PCIe lanes was a killer for me; I would probably go with an Intel chip in the future. AMD’s desktop Ryzen chips only have 24 PCIe lanes. I’m not an expert on PCIe lanes, but the top slot takes up 16 and the bottom takes 8, and the second 10GbE NIC also shares some of those lanes. This limitation led to some of the issues I ran into.
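To see why the budget is so tight, here’s a rough back-of-the-envelope calculation. The slot widths are from the board as described above, and how the chipset link and NICs are wired is board-specific, so treat this as an illustration rather than an exact accounting:

```python
# Rough PCIe lane budget for this build (illustration only; exact slot
# wiring and NIC lane sharing depend on the specific board).
TOTAL_LANES = 24   # Ryzen 5000 desktop CPUs expose 24 PCIe 4.0 lanes
CHIPSET_LINK = 4   # typically 4 of those feed the X570 chipset

slots = {"top x16 slot": 16, "bottom x8 slot": 8}
available = TOTAL_LANES - CHIPSET_LINK
demand = sum(slots.values())
print(f"slots want {demand} lanes, {available} available after the chipset link")
```

With both slots populated, the demand already exceeds what the CPU can supply, so something has to give (reduced link widths, shared lanes, or dropped devices).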
For RAM, I went with non-ECC RAM because it was cheaper. I did not need anything too fast. I used 32GB sticks and left two slots open for expansion to 128GB in the future if needed.
An 850W PSU is also probably overkill for this setup, but better to have more than enough power than not enough, right? I went with a Platinum-rated unit because this server will be running 24/7.
One of the key parts was the LSI SATA/SAS controller card. By default this card ships in IR (Integrated RAID) mode. Since it’s going to be used with Unraid, it has to be flashed to IT (Initiator Target) mode so that Unraid sees the attached disks as individual drives instead of the card trying to manage them as RAID.
I ran into a few issues trying to flash the card to IT mode. I initially followed this blog, but when I ran the flashing tool I got the error “ERROR: Failed to initialize PAL. Exiting program.” I then found another article that suggested flashing from a UEFI shell instead of DOS. I had a very hard time finding the EFI flash file on the manufacturer’s website, along with an EFI shell to launch it from, so I uploaded the files, with instructions in the readme, to my GitHub for anyone who needs them.
Once the card is flashed to IT mode, you can connect your drives to it. I went with Western Digital mainly because of supply-chain shortages; I wanted Seagate IronWolf drives but could not find any. The WD drives worked fine, and Unraid recognized them right away.
I had wanted to use the ASUS Hyper M.2 PCIe expansion card with two NVMe drives as my SSD cache, but I could not get it to work. The drives would appear and then disappear in Unraid, and trying every bifurcation setting made no difference, so I ended up returning the NVMe drives. I bought the expansion card on eBay as well ($75), and since I may use it for a different project, I’ll hold on to it for now.
I had a couple of older Samsung 250GB SSDs in the closet and tossed them in as a RAID 0 cache pool, which gives me 500GB of cache for now. I also bought a 2TB Samsung SSD to host my VMs on, which greatly improves their performance.
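The pool sizing here is simple arithmetic: striping (RAID 0) adds capacities together, while mirroring (RAID 1) gives you the smallest member. A quick sketch for equal-size drives (real btrfs pools with mixed drive sizes behave more subtly):

```python
# Cache pool capacity math for equal-size drives: RAID 0 stripes
# (capacities add up), RAID 1 mirrors (capacity = smallest member).
def pool_capacity_gb(drives_gb, mode):
    if mode == "raid0":
        return sum(drives_gb)
    if mode == "raid1":
        return min(drives_gb)
    raise ValueError(f"unknown mode: {mode!r}")

cache = [250, 250]  # two old Samsung 840 EVOs
print(pool_capacity_gb(cache, "raid0"))  # 500 -- fast, but no redundancy
print(pool_capacity_gb(cache, "raid1"))  # 250 -- survives one drive failure
```

The trade-off is that a RAID 0 cache pool has no redundancy: losing either SSD loses whatever is on the cache that hasn’t been moved to the array yet.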
Unraid also recommends a USB 2.0 drive to host the OS, so I bought a SanDisk 16GB Cruzer Fit. Its low profile reduces the chance of accidentally bumping or snapping the drive when walking by or moving the server around.
I had also intended to use an old Nvidia GTX TITAN X as a passthrough GPU for VMs, but I could not get that to work either. The 5950X has no integrated graphics, so the motherboard’s onboard graphics will have to do for now. I may end up building a different server with Intel chip(s) so that I have enough PCIe lanes to run a GPU.
I am VERY happy with the Rosewill case. There is plenty of room, especially since the motherboard is mATX, and plenty of places to mount many different types of boards. I replaced all of the stock fans with Noctua fans, which came at a premium but were a big improvement: two 80mm fans in the back and five 120mm fans in front of the motherboard. The CPU cooler I got is just a tad tall for the case, but I was able to gently fit the cover on. Even though these fans are premium and “quiet,” with all of them going the server has a good hum, so it may be loud if you’re putting it in your office.
Being a 4U case, it’s also pretty big to find a home for. I have a rack in my office and rested it on NavePoint 1U Adjustable 4-Post Rack Mount Server Shelf Rails instead of mounting it directly. This lets me pull the server out without unscrewing anything.
Being pretty new to Unraid, I did not know what to expect. The storage shares were easy to set up and can be restricted to specific users. However, there’s no way to encrypt individual shares (at least none that I found), which is something my Synology NAS can do.
However, one of the main use cases I wanted was VMs, and that part has been amazing. I provisioned three VMs using 8 cores and up to 16GB of RAM; Unraid lets you set a min/max so RAM is allocated dynamically.
While I’m not running anything crazy, with all three VMs started the server barely breaks a sweat.
I also like that the dashboard has a lot of data built in. Temperature and SMART checks are there at a glance.
Networking settings are pretty robust as well, with the ability to bridge and bond the different NICs. If you’re not bonding, you can attach individual VMs or Docker containers to specific bridges.
I’ve barely dipped into the Community Apps store, but there’s a ton there to play with: plenty of capabilities you can use right out of the box (WireGuard VPN, the Nessus vulnerability scanner, Plex, a Minecraft server, etc.).
So overall, I am happy with the final build. I’m disappointed I was not able to use the NVMe drives or the GPU (very possibly user error), but the performance is still outstanding and will certainly meet my needs for the foreseeable future. The Unraid community seems very active, and there’s a lot on the forums to help new users learn and troubleshoot their setups.