Fixing Network Attached Storage with commodity hardware and BSD

Many years ago, I finally got sick of failing disks and the panic that follows them, so I decided to buy a NAS (Network Attached Storage). A fair amount of research suggested that ReadyNAS devices were cost-effective, flexible and reliable, so I bought an NV+. Although it's a little eccentric to configure (done via a web interface), it was straightforward enough, and I could even install my own software on it, as it runs Linux. This was useful: I run Bacula for my backups, and I could install it directly on the NAS.

Of course, the whole point of using a NAS is that the disks are replicated, so I should never have to deal with a failed disk again - just swap in a new one and the data will be recovered from the other disk(s). What's more, the ReadyNAS will even allow you to expand your storage by adding bigger disks as time goes by. What could possibly go wrong?

About a week ago, after many years of reliable service, I found out: the power supply. One morning, my ReadyNAS was dead, never to live again. No problem, just replace the power supply, you say. No such luck: the model I own went out of production long ago, and its power supplies are about as easy to find as hens' teeth. OK, so buy a new ReadyNAS? Haha - the joke is on me: NV+ v1 disks don't work in an NV+ v2, and they don't even make the v2 any more, though it is still possible to find them around (unlike the v1).

So, what's the one true answer? Luckily there is one: commodity hardware. I set up a new system using an HP Microserver, FreeBSD and ZFS. It's cheaper, does everything a ReadyNAS can do, and now even power supplies can't catch me out - if the box ever dies, I just buy a new one and plug the disks in. It doesn't have to be HP: so long as it has a 64-bit Intel-compatible CPU and SATA disks, it'll work just fine. In my defence, this wasn't possible when I bought the ReadyNAS, and it does require some sysadmin skillz, but this time I have a truly bulletproof solution. And thanks to Bacula, the contents of my dead ReadyNAS are easily recovered.
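For the record, the ZFS side of a setup like this is only a couple of commands. A minimal sketch, assuming two data disks that show up as ada1 and ada2 (the device names and the pool/dataset names here are illustrative, not from my actual box):

```shell
# Check what FreeBSD calls your disks first: camcontrol devlist
zpool create tank mirror ada1 ada2        # two-way mirrored pool named "tank"
zfs create -o compression=on tank/backups # a dataset to hold Bacula volumes
zpool status tank                         # both disks should show as ONLINE
```

If a disk dies, `zpool replace tank ada1 <newdisk>` resilvers onto the replacement; and because it's plain ZFS, the pool imports on any machine that speaks it.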

Netgear ReadyNAS NV+ [Amazon UK]

HP ProLiant MicroServer [Amazon UK]

Netgear ReadyNAS NV+ [Amazon US]

HP ProLiant MicroServer [Amazon US]


ZFS on FreeBSD



  1. I’ve been using unRAID for a couple of years now. It’s not the fastest I/O, but it gets the job done and I’ve been quite happy with it.  Linux skills really aren’t needed, though I’m running the beta and have a number of plugins going, so they have been helpful.  Best part?  Commodity hardware, and I don’t have to have matching disks.  So when those 8TB disks roll on in, I can just slot them in and go.

  2. In case anybody has a specific interest, this is the lspci:
    00:00.0 Host bridge: Advanced Micro Devices [AMD] RS880 Host Bridge
    00:01.0 PCI bridge: Hewlett-Packard Company Device 9602
    00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2)
    00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] (rev 40)
    00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 42)
    00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 IDE Controller (rev 40)
    00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller (rev 40)
    00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge (rev 40)
    00:16.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller
    00:16.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller
    00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor HyperTransport Configuration
    00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Address Map
    00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor DRAM Controller
    00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Miscellaneous Control
    00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 10h Processor Link Control
    01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS880M [Mobility Radeon HD 4225/4250]
    02:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5723 Gigabit Ethernet PCIe (rev 10)

    HP clearly hasn’t done a spec bump in some time (the last Turion was released c. 2010), so my system BIOS doesn’t recognize 3TB drives properly; but if booted from something else, Linux or BSD has no issues with them. Conveniently, the motherboard has an internal USB socket for a boot flash drive if you want to go that route. The drive bays are also not hot-swap, and the PSU is not redundant or hot-swap; but that’s life in the cheap seats.

    Acoustics and thermals are both satisfactory. The only lingering question is what, exactly, a “Hewlett-Packard Company Device 9602” is.

  3. I had the same issue.  I didn’t get fancy with an extension cable because I had an extra power supply available.  I just swapped around the right pins, powered it up long enough to get the data off, and called it permanently dead.  I was hitting the maximum capacity of the thing so it wasn’t too hard to declare it dead.

    Now I’ve got a rackmount computer running Debian and software RAID.  Not quite as slick as the ReadyNAS, but a lot more flexible.

  4. I played with FreeNAS before deciding the concept was nifty and I’d stick with it.  Around the same time, a friend was also looking into getting a NAS, which made us useful to each other — I’d done some research into what was out there, he had actual use cases he needed, so we bounced thoughts off of each other.

    One of the things I pointed out to him was that, regardless of what he did, he would want to make sure that either the disks, *or the backups*, could be read by something else he had access to, in case the hardware and company went belly-up.  FreeNAS gave me that with ZFS, which should be a safe data format for a few years at least.

    Of course, the data on it may not be important (if it is, essentially, a local cache of data principally stored elsewhere), in which case that is a box of a different paradigm.

  5. I wanted to run MythTV with a Ceton card on my NAS, so I went with ZFS on Linux: a kernel-level driver with good performance and stability. The project is run out of Lawrence Livermore labs. Nothing against FreeBSD; my hardware is just currently better supported in Linux. One thing to watch out for is snapshots. ZFS and a few other systems do them correctly: snapshots are expected to live indefinitely and grow to any size needed. The old Linux LVM support, which the ReadyNAS used, expected snapshots to exist only long enough to get a consistent backup, which sucks. I use snapshots to lock down my data while still exporting it with write permissions, without worrying that it can somehow be destroyed. Snapshots are really best used as copy-on-write points in time that can be rolled back or forward as needed.
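The snapshot workflow described above looks roughly like this in practice (a sketch only; the pool/dataset name `tank/data` and the date label are invented for illustration):

```shell
zfs snapshot tank/data@2013-06-21   # cheap copy-on-write point in time
zfs list -t snapshot                # snapshots persist and grow as long as needed
zfs rollback tank/data@2013-06-21   # undo any damage done since the snapshot
```

Because the snapshot is read-only, you can keep exporting the live dataset read-write and still have a guaranteed-intact copy to roll back to.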

  6. I’ve been using the built-in mdadm tool to manage my disks for a while. Since I use a PC anyway, it’s also my router, firewall, file server, and debugging toolbox – now it runs MythTV and a few other things. I give my box about 3 or 4 years before I move to another one. This type of home server will become more common in non-sysadmin households as time goes on.

  7. If anyone from the UK is interested in the HP N54L Microserver, there is a £100 cashback available until the end of June if you buy from an official reseller (which doesn’t include amazon marketplace sellers), which brings it down to about £90 if you search around. It makes them very affordable little boxes for use as a NAS or (like me) learning to mess with linux for the first time. You should be able to find the details by googling.

    1.  Given that my current NAS is my old gaming rig in a full tower case with a watercooled Sempron, and so many fans it’s essentially a weapon of sonic warfare, I might go for that, ta…

  8. I’m not sure why using a replacement power supply would be that hard, even if the exact model originally used is no longer available. Wouldn’t it be cheaper than making up a new system?

    1.  As many of us discovered when our ReadyNAS power supplies died, the manufacturer used a non-standard supply with not-quite-standard connectors. If you’re comfortable modding a power supply, you can fix it yourself (some of us did).

      It’s a nasty shock (ahem) for people accustomed to working on hardware to discover that the manufacturer chose to use nonstandard components when standard items probably could have served just as well.

      1. Yeah, if nothing will actually plug in properly without taking some wires apart and installing a new connector, I can see myself being too lazy to bother.

      2. My PS died way out of warranty and they sent me a new one free.  Then that one died and I bought another from them.  Are they out of stock?

  9. I’m running a DLink-323 NAS device which, after years of service, is also starting to flake out. I’m currently debating whether to pick up another off the shelf NAS device or build one myself. Something like:
    Building it myself would give me more flexibility. Plus, it uses standard components so if things die, they can easily be replaced.
    On the other hand, I’m targeting low energy consumption, low noise, and a minimal device footprint, which the off-the-shelf devices seem to be suited for.

    1. Build around the case.  This is the most capacious mini-ITX case you can get, and it’s pretty darn cheap to boot:

      If you need space for more than 5 drives, you can get a 3-in-2 drive cage for around $50.  If you need room for more than 8 drives, then you’re just plain going to have to go bigger.

      An atom-based mini-ITX board, some RAM, and the NAS software of your choice (I’m a fan of unRAID) is all you need to finish it off.

      1. Atom boards are a bit of a disappointment: they have the punch for it; but Intel has, apparently deliberately, crippled all but a relative handful of (rather expensive) oddballs. You generally get 2 SATA ports and a single 33MHz PCI slot for an add-on controller. AMD is incrementally better, since their APU boards at least tend to come with 4 SATA ports and PCIe; but you’ll burn a bit more power on graphics punch you’ll never need.

        It’s too bad that SATA port multipliers are…best left to the adventurous… because once you try to drive a bunch of spindles, you usually have a vendor trying to oversell you on some other part of the system. Yes, I want more than four drives. No, I don’t want a l33t gamer board, or a Xeon. No, that hardware RAID card costs more than my entire computer, get it away from me.

      2.  I used this one

        in my FreeNAS build. It has two more drive bays than the Coolermaster, though it is twice as expensive. But it also has a big honking fan, which you’re going to need with all those drives. You can fit 7 drives in without any modification.

        FreeNAS is a little resource intensive and the things I read tended to warn away from Atom hardware. I used a $50 Celeron and it works fine.

        You need a separate disk to store the OS; it can’t run off the data drives. Some people use a low-capacity hard drive or flash, but it is more common to run off a USB flash device. I used this,

        since it is tiny, almost flush against the case, and so unlikely to be dislodged. Now I’ve got 12 TB of ZFS storage built for the price of half that from a commercial supplier. And all of it is commodity hardware.

  10. +1 for the FreeNAS on HP Microserver crowd. Have 12T online now. (5x3T in raidz1)

    There’s an established BIOS hack to make the drives hot-swappable, and you can put an adapter in the optical drive bay to get the fifth spindle, because raidz really wants the number of data disks to be a power of 2.

  11. I decided long ago, when I got burned with a Windows-specific NVidia RAID,  that I want to able to read the disks on any computer I choose, so I do a JBOD off of a Mac Mini server.  I can mirror what I choose in the OS.

  12. regarding rolling your own – spend a little bit of time reading the linux-raid mailing list before you jump into this whole-hog. lots and lots and lots of stuff can go wrong with creating a raid from scratch (performance-wise), and lots can go wrong when a disk fails. just read through all the horror stories and understand what you need to know *before* one of your disks fails and you issue a mdadm command that will destroy all your data forever.
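A cautious first response when an md disk fails might look like the sketch below: gather and save state before touching anything, so a panicked command can't eat the array. The device names (`md0`, `sd[abcd]1`) are examples only.

```shell
cat /proc/mdstat                                  # which arrays exist, which are degraded
mdadm --detail /dev/md0                           # array-level state and failed members
mdadm --examine /dev/sd[abcd]1 > superblocks.txt  # save per-disk RAID metadata first
```

Only after the metadata is recorded (and ideally the failing disk imaged) is it sensible to start issuing `--fail`/`--remove`/`--add` commands.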

  13. NAS4Free is the fork when FreeNAS went commercial. It nicely combines FreeBSD, ZFS, and other NAS functions. I have been running it on an HP Microserver for a couple of years now, and it is great.

    1. nas4free really does not support raid6? that’s really bad… a very common raid5 failure scenario is for one disk to throw a bad block and drop out of the array. then you try to rebuild your array with a new disk, during which time you have no parity protection. and then one of your other disks throws a bad sector and now you’re toast, unless you know what you are doing.

      with today’s giant disks an uncorrectable read error somewhere on the disk is something of a certainty, eventually.
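The back-of-envelope numbers bear this out. A sketch, assuming the commonly quoted consumer-drive spec of one unrecoverable read error per 10^14 bits (the 3 TB rebuild size is a hypothetical example):

```shell
# Probability of at least one URE while reading the surviving disks
# during a rebuild, using the Poisson approximation p = 1 - exp(-n*r).
awk 'BEGIN {
  tb = 3;                    # terabytes read during the rebuild (example)
  bits = tb * 8e12;          # bits read
  rate = 1e-14;              # URE spec: 1 error per 1e14 bits
  p = 1 - exp(-bits * rate);
  printf "P(at least one URE over %g TB) = %.1f%%\n", tb, p * 100
}'
```

Roughly a one-in-five chance per 3 TB rebuild under that spec, which is why a single-parity scheme with no remaining redundancy during rebuild is uncomfortable.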

  14.  it’s super dangerous to run raid5 with today’s enormous disks. people think of a ‘disk failure’ as a total failure of a disk but in reality all it takes is one uncorrectable sector and you’re dead. if you are going to do this, use raid6 and take the size hit, it’s worth it.

    1. Agreed:
      My (very fast) main computer has a RAID controller built in to the motherboard.  While initially setting it up, I chose RAID 5 using three 1TB drives.  A few days later, the RAID controller detected something flaky on one drive.  Rebuild time?  Seventy-four hours…

      I thought that the error-correcting scheme was essentially using XOR gates, which would have had zero effect on data throughput, but apparently not.  People who recommend RAID 5 haven’t watched it rebuild a disk.  Or at least a disk that is larger than, say, 40 MB.

      While it was rebuilding, the computer was essentially unusable.  Imagine your AV software deciding to scan your whole HDD while you’re defragmenting.

      I read up some more about RAID schemes and came across a funny website (which I can’t locate right now), called something like ‘RAID 5 MUST DIE!’, which explained why RAID 5 wasn’t a good choice with today’s cheap, large-capacity HDDs.  I went out and purchased another 1TB HDD, and reinstalled the OS under RAID 10 (striping plus mirroring).

      Now, if/when the RAID detects something odd about one of the HDDs, except for the rebuilding of the first two tracks of the HDD where HDD access is a little slower, I can’t tell the difference between performance while rebuilding and under normal operation.
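The XOR intuition above is right as far as the arithmetic goes; the 74-hour rebuild is slow not because XOR is expensive but because every surviving disk must be read end to end. A toy sketch of the parity arithmetic itself (the byte values are invented):

```shell
# RAID 5 parity in miniature: parity is the XOR of the data blocks, so
# any single lost block can be rebuilt by XOR-ing the survivors with it.
d1=$(( 0xA5 )); d2=$(( 0x3C )); d3=$(( 0x0F ))
parity=$(( d1 ^ d2 ^ d3 ))
rebuilt_d2=$(( d1 ^ d3 ^ parity ))   # pretend d2's disk died
printf 'parity=0x%02X rebuilt_d2=0x%02X\n' "$parity" "$rebuilt_d2"
```

The recomputed `rebuilt_d2` equals the original `d2`; the per-byte cost really is trivial, so rebuild time is dominated by raw disk throughput.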

      1. Thanks for putting in that correction. I was about to start chewing people out for recommending RAID 5 and 6 so much.

        If you’re putting the money in for the board and case and such, it makes more sense to go with RAID 10 and do some combination of buying an extra disk or two and/or reducing size by a small amount. At least in my opinion.

  15. I’ve been contemplating this for a while, but I’m wondering about the choice of OS, as there are various streaming applications (Logitech Media Server etc.) that I wonder whether FreeBSD can run. Many are available for Linux, and all are available for Windows. Windows Server seems like the most compatible option, but it’s not terribly desirable for cost reasons.

    1. Logitech Media Server (née SlimServer, Squeezebox Server) runs under any operating system because it’s written in Perl. It uses some associated binaries (sox, flac, lame), but these too are cross-platform. FreeBSD is an ideal platform for it: lightweight, robust, free, free.

  16. Those little HP boxes are nice, a friend of mine is running one as a NAS at his house, backing up the family computers.  He said it’s 10TB total.

    1. drobo has even more problems than other solutions, as the raid system (beyondraid?) is proprietary to drobo.

  17. Oh, I see, you mean fixing the [general NAS situation] with commodity hardware.  I was really hoping there was going to be an easy way to replace the OS on your existing NAS (theoretically possible, but so very difficult in practice).

    1. If it’s on the list, it’s pretty easy.

      If it’s just an x86 in a funny shape, it’s generally even easier.

      If neither of those hold, things get more exciting.

      1. But, as somebody once said in a review of some piece of equipment or other, “Like anything with a processor and some RAM, the Linux wolves are circling.”

  18. I’m running Windows 8 and Drive Bender – it’s a spiritual successor to Windows Home Server.  All disks are NTFS and readable; it’s pretty awesome.  Plus I have all the advantages of running a Windows system and not having to deal with all that Linux stuff I don’t care to learn.

  19.  Small question: how much power does that Microserver draw, compared to the old NAS?
    I would like a NAS that handles like a proper PC, but doubling power consumption along the way seems like a much steeper price than the acquisition cost makes it look.

  20. RAID is not backup. In a home NAS setup, RAID can be an unnecessary complication that makes your data less safe.

    Why not RAID?
    - the failure of any disk puts the other disks in your RAID set under unusual strain
    - you are not protected from file system corruption
    - you are not protected from user error (files deleted/overwritten accidentally)
    - RAID sets can be difficult to import/access on other systems

    Why RAID?
    - high availability: you need the data to be accessible without interruption even when a disk fails

    Ask yourself if you need your home NAS to stay online even through inevitable disk failures, or if the safety of your files is more important. If you care about your files, routinely backing up your online data to simple independent drives is essential.
    You can have both, of course, if you have the disks. But backup first; then, maybe you want RAID too for availability, capacity, or performance reasons.

    Filesystems like ZFS have features to mitigate some of the risks to data: scrubs can identify/correct some file system corruption, and snapshots can protect against user mistakes. These are great – if you implement them – and useful both in single-disk and RAID configurations. But they don’t help when disks fail.
    And all hard drives fail; it’s simply a question of when.
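The scrub mentioned above is one command to run routinely, e.g. from cron (a sketch; the pool name `tank` is an example):

```shell
zpool scrub tank       # read every block and verify it against its checksum
zpool status -v tank   # report scrub progress and any corrected/uncorrectable errors
```

On a mirrored or raidz pool, a scrub silently repairs bad blocks from the redundant copy; on a single disk it can still detect corruption, just not fix it.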

    1. To be fair, though, a RAID 0 with two disks in it isn’t a huge hassle; it’s not terribly complicated; and it provides one level of local redundancy.

      Then again, so would any number of backup scripts or programs…hell, even running cp -au [root directory of drive 0] [root directory of drive 1] once a night (or while one is away at work) could work…if not in an especially intelligent manner.  And there are plenty of smarter programs than cp.

      1. I’m sure you meant to say RAID 1 (mirroring). “RAID 0” is the opposite of redundancy.

        I use rsync to back up from one drive to another – it provides intelligent file comparison so only changed files are copied, integrity checking, and good logging and statistics. And it runs equally well locally and over the network.
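A minimal sketch of such a nightly rsync job (the source and destination paths and log location are examples, not from the comment above):

```shell
# Mirror one drive onto another; --delete keeps the copy exact,
# --stats and --log-file give the logging mentioned above.
rsync -a --delete --stats --log-file=/var/log/nas-backup.log \
    /tank/data/ /mnt/backup/data/
```

For a remote copy, the destination simply becomes `user@host:/path/`, and rsync's delta transfer keeps network traffic down to the changed files.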

  21. Does anybody have a decent across-the-table comparable machine to something from the Synology line? I’m looking at their 4-drive systems (412+, 413), and from what I can tell they don’t cost a whole lot more than a regular home NAS server. A four-bay DS413 costs around $450 on amazon, handles everything for you, and is highly-regarded and recommended.

    Sure, it’s not DIY or roll-your-own, but it is optimized to work as a NAS and has a lot of useful plugins. 

    I ask because, when I first started pricing things out, the cheapest reasonable box I could find was about $350, and for me, going through the hassle of putting it all together myself and getting everything working didn’t seem worth the money.

  22. I have such a low-fi system: a 2008 iMac and 3 external 2TB drives that I manually back up to 3 other external 2TB drives.  It’s a bit kludgy compared to some of the above (technical term…) but works perfectly.  Plus I can log into it from anywhere on any device, see the desktop, move files about etc., run a streaming radio station when I want, stream music, films etc. to anywhere, all good!
