curator: a file server

This machine was constructed in early 2014.


  • Runs Linux.
  • Uses SnapRAID for backups.
  • Headless: no keyboard or monitor required.
  • 8 SATA devices: 1 parity and 7 data (LATER: 6 data and 2 parity)
  • No optical drives.
  • Discs spin down when not in use.
  • Easy disc replacement; a hot-pluggable backplane would be convenient, although I don't have to have the machine up while swapping discs. (Later: I gave up on that).
  • Will be a file source for XBMC.
  • Quiet.
  • Low power.


  • Bleeding edge.
  • Fast CPU.
  • Transcoding or any sort of heavy processing.
  • Saving money compared to a prebuilt system.
  • Cool appearance.
  • PCI expansion slots.



First motherboard


  • FM2+ / FM2 AMD A88X (Bolton D4) HDMI SATA 6Gb/s USB 3.0 Micro ATX AMD Motherboard
  • from: NewEgg
  • price: $77.99 + $5.67 shipping

I wanted a board with 8 SATA ports. AMD board + CPU combinations were about half the cost of Intel.

The newer FM2 socket boards are cheaper than the older AM3 boards.

This is my first AMD build, and my first Gigabyte board.

I like the micro-ATX size because it leaves more room in the case and allows for better cooling.

I replaced this as part of the Fourth expansion, found it was not at fault, and returned to it because it has HDMI.

Second motherboard

(Tested but not used)


  • AMD Socket FM2 A85X (Hudson D4) HDMI SATA 6Gb/s USB 3.0 Micro ATX Motherboard
  • from: Dealstech via Amazon
  • price: $69.99 + $10.32 shipping

I picked this one as being similar to the original. The board layout is nearly identical.


AMD A4-5300

  • Trinity 3.4GHz (3.6GHz Turbo) Socket FM2 65W Dual-Core Desktop APU (CPU + GPU) with DirectX 11 Graphic AMD Radeon HD 7480D AD5300OKHJBOX
  • from: Amazon
  • price: $49.99 + free shipping

I just selected a 65W processor to fit the FM2 socket of the motherboard.

Lower-power AMD processors have been due for a while, but I did not want to wait. I regret that something like the 45W Sempron 145 was not available for this motherboard at the time; it seemed like a good file server choice.

CPU cooler

I was getting a 'rat-tat-tat' noise I could not identify, but suspected the stock cooler, which worked fine otherwise. (Later: noise continues. Power supply fan or cable rattling in that area?)

I replaced it with:

Noctua Low-profile Quiet CPU Cooler for AMD Based Retail Cooling NH-L9A

  • from: Amazon
  • price $49.99 + free shipping


First memory

Crucial 4GB kit (2GBx2)

  • DDR3 PC3-12800 • CL=11 • Unbuffered • NON-ECC • DDR3-1600 • 1.5V • 256Meg x 64 • Part #: CT2KIT25664BA160B
  • from: Crucial
  • price: $49.99 + $3.50 tax, free shipping

Found by using the Crucial memory configurator: just select the motherboard.

Second memory

Bought for the Fourth expansion. Using 2 4GB sticks, which is abundant memory.

I thought the original memory was bad, but turned out to be ok.

Crucial 4GB DDR3L-1600 UDIMM

  • DDR3 PC3-12800 • CL=11 • Unbuffered • NON-ECC • DDR3-1600 • 1.35V • 512Meg x 64
  • from: Crucial
  • price: $37.99 + $2.28 tax, $6.99 expedited shipping

Soon bought a second stick of the above.

  • price: $37.99 + $2.38 tax.


First case


  • from: Amazon
  • price: $89.99 + free shipping

There are cases with internal space for 8 3.5" drives, but I could not find one with a hot-pluggable SATA backplane. Also: it would be good to have externally visible indicator lights for each drive.

An approach often used by server builders is the 5-in-3, 4-in-3, etc., hard drive cage that fits into external 5.25" spaces.

For eight drives, two 4-in-3 cages require 6 external bays. I picked a well-reviewed one with a plain external appearance.

Second case

Silverstone GD08

  • from: Amazon
  • price: $154.20 + $10.79 tax

Hard Drive Cages

(Abandoned for the Fourth expansion)

ICY DOCK FlexCage MB974SP-2B

  • Tray-less 4 x 3.5 Inch HDD in 3 x 5.25 Inch Bay SATA Cage - Front USB 3.0 Hub
  • from: Amazon
  • price: $119.62 each (2) + free shipping

Note that each cage is more expensive than the case itself.

The fan on one cage was pleasantly quiet, but the other was noisier than anything else in the case. I replaced it with the Thermaltake fan below.

Later: during the Fourth expansion I took the opportunity to replace the cooling fan with the Noctua below. (Then stopped using the cages).

The LEDs are brilliant blue, way too bright for a dark bedroom.

Power supply

Corsair CX430M

  • CX Series 430 Watt ATX/EPS Modular 80 PLUS Bronze ATX12V/EPS12V 384 Power Supply
  • from: Amazon
  • price: $46.47

Online calculators said I needed something over 300W for this design, with all 8 3.5" 7200RPM SATA discs running at once; the NewEgg calculator gave 321W.

I picked a modular 430W PSU. Single-rail, as recommended in the UnRaid forums.

Hard drives

Original discs

Toshiba PH3300U-1I72

  • Toshiba Desktop 7200 3.0TB 7200RPM SATA 6Gb/s NCQ 64MB Cache 3.5-Inch Internal Bare Drive PH3300U-1I72
  • from: Amazon
  • price: $120.00 + free shipping

This will be a combined system and data disk.

3TB disks are the sweet spot for TB/$ right now.

Toshiba DT01ACA300

  • Toshiba 3.5-Inch 3TB 7200 RPM SATA3/SATA 6.0 GB/s 64MB Hard Drive DT01ACA300
  • from: Amazon
  • price: $109.99 + free shipping

This will be the parity disk.

I have a collection of 1TB Seagate drives, already populated, which will be the remaining data disks. I will replace these with larger capacity drives as I need the space or they begin to fail.

Later, for the First expansion.

Toshiba PH3400U-1I72

  • Toshiba 4TB SATA 6Gb/s 7200rpm, 128MB Cache 3.5-Inch Internal Hard Drive (PH3400U-1I72)
  • from: Amazon
  • price: $129.99 + free shipping
  • quantity: 3

Later, for the Second expansion and Third Expansion.

  • Western Digital 4TB WD40EZRX Green drives. I received eight of these as a gift.

Replacement hard drive cage fans

(Later: no longer needed).

Thermaltake ISGC Fan 8

  • from: Amazon
  • price: $12.99

The ICYDock cage uses an 80x80x25mm fan, either 2- or 3-pin. Replacing it without removing the cage was not difficult.

The new fan is nicely quiet. Of the remaining noise it is hard to tell what part comes from the fans and what from the drives.

Noctua SSO Bearing Fan Retail Cooling NF-R8

  • from: Amazon
  • price: $9.95


USB sticks, for boot devices and temporary storage.

  • SanDisk Cruzer Fit 8 GB USB Flash Drive (5 Pack) SDCZ33-008G-B35-5PK
    • from: Amazon
    • price: $38.95 for 5 + free shipping

Assembly notes

(Later: this section is obsolete. See Fourth expansion for notes on the second case).


Both case sides come off without tools.

I removed and stored away:

  • the VGA guide and optional 92mm fan
  • 6 of the 10 5.25" drive bay covers
  • the 4 hard drive adapter trays
  • the drive bay locking knobs

Hard drive cages

The IcyDock hard drive cages are an easy fit if you slightly bend out the spring tabs on one side of the case. Else it is tight and the tabs scratch the metal sides of the cage.

Each cage is held by 4 screws on one side and 2 on the other.

If you want to place the bay in the top-most position of the case, you will need to shift the bundled cables from the top panel up just slightly. That means cutting the cable ties holding them in place.

I decided to leave empty spaces above and below each cage for better cooling. From top to bottom:

  • (empty)
  • 3 spaces containing a cage
  • (empty)
  • (empty)
  • 3 spaces containing a cage
  • (empty)

Each empty space has a bay cover.

The IcyDock docs say the SATA connectors are numbered 1 through 4 from top to bottom; the cage itself has them labeled from bottom to top.

Both cages have heavy USB 3 cables coming out the back which I'm not using, as the motherboard has only one USB 3 header and that goes to the top panel. I tucked the cables above each cage and used cable ties threaded through the case to fix them.

A tip: some drives have to be pushed into place a bit more firmly than the latching door will manage. The door can close and the power light come on while the drive still has no data connection. After opening the door and giving the drive a little extra push to seat it, I have not yet had one come loose.

Power supply

The power supply fan blows out through the bottom of the case.

Two modular cables have two SATA connectors each. Since each drive cage needs two power connectors, that works out neatly.


The microATX board and the case match in all 8 standoff positions. I don't think it's going anywhere.

One USB 2 header is attached to the top panel, leaving one header unused.

Ethernet hardware address:

74:d4:35:01:b5:64 (assigned by the router)

Smoke test

All well on first power on.

The TinyCore USB stick booted without having to modify the BIOS boot order.

CPU fan, the two case fans, and one hard drive cage fan all spun up. (Need a hard drive inserted into the cage for the cage fan to work).

Hardware summary results

  • Power consumption

    Idling, with one disk spinning, the system uses 40W.

    With all disks working, it runs about 100W, occasionally spiking higher.

  • Temperature

    I need to get the AMD sensors working to be sure, but it seems to run cool.

    SMART reports disk temperatures, but I don't know how accurate that is.

  • Noise

    It is a quiet box by office standards, but may be distracting in a small home theater room. Currently I have it in the basement and use powerline ethernet, which is slow, but perhaps fast enough to serve media files.

  • Appearance:

    Although "good looking" was not a goal, I think it looks nice anyway. But those lights!


My original intention was to use a Linux distribution that booted from a USB stick and ran entirely from RAM, meaning we would not need a spinning system disk. Fast, low-power, silent. See List of Linux distributions that run from RAM for candidates.

I experimented with Tiny Core Linux and Slax but became impatient with getting everything running just right and so fell back to a hard disk install of openSuSE, which had been my desktop OS for many years.

This means one disk must always be spinning for swap files and logs, etc. The installation includes vastly more software than I need for a file server. Time and interest permitting, I may investigate a RAM-based system again in the future.

Later, for the Third Expansion I switched to Arch Linux, my desktop and laptop distribution.

In the Fourth expansion I finally switched to a USB stick for the boot and system device.


I downloaded the 13.1 Network Install ISO.

Copied that to a USB stick with:

dd if=openSUSE-13.1-NET-x86_64.iso of=/dev/sdc   # MAKE SURE THAT IS THE USB STICK!!!

Booted the new machine from the stick.

Did a standard installation with LXDE for a desktop. I like its speed and minimalism compared to the other choices.

The installation suggested a separate partition for /home, but in retrospect I should have allocated the majority of the system disk to a top-level directory for data. I fixed that up by hand later.

After the setup and configuration period, I changed the default run level to console login. No point in having X or a window manager running in a headless setup.

Arch Linux


Disk layout

The system disk is a new 3TB unit, formatted with ext4. A generous Linux installation takes less than 80GB. The remaining space is in a partition mounted at:


I have six old pre-filled 1TB discs, formatted with ext3, mounted at:

/mnt/tera01 through /mnt/tera06

A long fsck ran on each disk the first time I mounted them. I had been hot-plugging them on my desktop system and apparently fsck never runs that way.

The last SATA slot is the parity disk, a new 3TB unit formatted with ext4, mounted on:


Disk Maintenance

I run three utilities on a regular schedule with cron in the middle of the night, on different days. Results are emailed to me so I don't forget them.



Run smartctl --test=long on each disk. For a 3TB drive this can take 4 or 5 hours. The tests run in parallel for multiple drives and you can continue to use them during the test.

This runs in the drive's firmware and the OS knows nothing about it. You can poll the drive with smartctl -a to see if it's done. I have the cron script wait 6 hours and then capture the smartctl -a results for each drive to date-stamped output files, which I save for historical analysis.
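The nightly SMART job can be sketched as below; the device glob, wait time, and output directory are assumptions, not my exact script:

```shell
#!/bin/sh
# Hypothetical SMART job: start long self-tests on all drives, wait for
# them to finish, then capture full reports to date-stamped files.
OUTDIR=/var/log/smart
mkdir -p "$OUTDIR"
STAMP=$(date +%Y%m%d)
for dev in /dev/sd[a-h]; do
    smartctl --test=long "$dev"
done
sleep $((6 * 3600))     # long tests on 3TB drives take 4-5 hours
for dev in /dev/sd[a-h]; do
    smartctl -a "$dev" > "$OUTDIR/$(basename "$dev")-$STAMP.txt"
done
```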

This is not guaranteed to find disks that are about to fail. Disks sometimes go bad without a SMART warning. But if counts appear for these attributes:

  • Reallocated_Sector_Ct
  • Current_Pending_Sector
  • Offline_Uncorrectable

...then the odds of the disk failing soon are said to be high.

One of my 1TB disks shows 1 Reallocated_Sector_Ct. I'm keeping an eye on it.



This checks and repairs ext2/ext3/ext4 filesystems.

Normally this is run at boot time when a disk has not been checked after so many mounts or after a given time period. Since the file server is not going to be rebooted very often, I want to do this on a running system.

You are not supposed to run e2fsck on mounted disks, so the job will dismount each disk, run the check, and remount it.

I cannot dismount the system disk while running, so that will need some other method.

Tip: I have to un-export each volume for NFS first, else umount fails with "device busy", even though no process accessing the drive shows with lsof or fuser.

I do:

exportfs -ua

before, and:

exportfs -a

after the check.


SnapRAID scrub


For the data array: this checks the data and parity files for errors. If errors are found they are marked and can be corrected with snapraid -e fix.

By default snapraid scrub checks the oldest 12% of the array, "oldest" in the sense of "since last scrubbed", not filesystem date. Run weekly, the whole array will be checked about every 2 months.

With my initial data set, the default scrub takes about 30 minutes.

Run snapraid status to get a histogram of the scrub history.
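The nightly scheduling might look like this in root's crontab; the days, times, and script names here are assumptions for illustration, not my actual entries:

```shell
# hypothetical crontab entries; cron emails the output to me
0 2 * * 1  /usr/local/bin/smart-long-tests.sh
0 2 * * 3  /usr/local/bin/fsck-data-discs.sh
0 2 * * 5  snapraid scrub && snapraid status
```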


Since a given drive is sometimes not used for a week at a time, I wanted to spin them down after an interval. I know that whether to do this is a long-running debate, but I have not seen an authoritative opinion on either side. I've never used the feature before, so I'm going to try it now.

For each drive I have a line in /etc/rc.d/after.local: (LATER: not in systemd)

/sbin/hdparm -S 180 /dev/sdX

...which will cause the disk to spin down after 15 minutes of inactivity.

You can check the status of the drive with:

/sbin/hdparm -C /dev/sdX

...or force it to spin down with:

/sbin/hdparm -y /dev/sdX

The system disk is always spinning.
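For reference, hdparm -S values in this range count units of 5 seconds, so 180 gives the 15-minute timeout. A boot-time loop over the array discs might look like this (the label glob is an assumption; the system disc is skipped):

```shell
# hypothetical boot-time loop; 180 x 5 s = 900 s = 15 minutes
for dev in /dev/disk/by-label/tera* /dev/disk/by-label/parity*; do
    /sbin/hdparm -S 180 "$dev"
done
```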


My desktop systems are Linux, so it is natural to use NFS to mount the file server shares.

I do not currently use any pooling software to combine the separate disks into one view. I am used to managing them separately and will continue to do so for now.

On the client machines, each disk is mounted in /etc/fstab like so:

curator:/mnt/tera01 /mnt/curator/tera01 nfs defaults,bg,hard,intr 0 0

bg: background the mount so the system doesn't hang at boot time if the server is down.
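The server-side counterpart lives in /etc/exports; a sketch, assuming a typical home network range and read-only exports (both assumptions, not my actual entries):

```shell
# /etc/exports -- hypothetical entries on curator
/mnt/tera01  192.168.1.0/24(ro,no_subtree_check)
/mnt/tera02  192.168.1.0/24(ro,no_subtree_check)
```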


(LATER: see the configuration in the Third Expansion).

I do keep a read-only pool. This is mounted at /pool on the system disk and contains symbolic links to the rest of the array.

I export this with samba because nfs + links to multiple drives = headache. SMB handles it better, if you use the configuration parameters shown in the SnapRaid FAQ:

# In the global section of smb.conf
unix extensions = no

# In the share section of smb.conf
comment = Pool
path = /pool
read only = yes
guest ok = yes
wide links = yes

SnapRAID has a pool command to create links for the entire array, but I use a script of my own invention. I wanted to customize the structure of the pool so I could rename links as necessary and provide subset views, for example when using XBMC profiles.

I don't understand name mangling in samba very well. To be compatible with ancient Windows requirements, some characters in a file name will cause it to be presented in uppercase 8.3 format. When creating links I edit these characters out of the name: "? : /".
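The character stripping when creating a pool link can be sketched as below; the helper name and /pool path are assumptions (my actual script is more elaborate):

```shell
# hypothetical helper: link a file into the pool with samba-unfriendly
# characters removed from the visible name
mklink() {
    name=$(basename "$1" | tr -d '?:')   # '/' cannot appear in a file name anyway
    ln -s "$1" "/pool/$name"
}
```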

On the Linux clients, the line in /etc/fstab for the pool:

//curator/pool /mnt/curator/pool cifs guest,ro,uid=wmcclain,gid=users,iocharset=utf8,mapchars 0 0


SnapRAID (which is not a standard RAID solution) is a backup program that saves hash and parity information and allows correction of disk errors and recovery from bad disks.

It is targeted towards collections of large files that are not often modified, like a media file library.

You can see its many virtues on the web pages, and here is what seems like a fair comparison with similar software.

I selected SnapRAID because:

  • it covers what I need in a lean, non-obtrusive way
  • the simple command-line interface appeals to my unix biases
  • it is open source and written in C, and I could maintain it if I had to
  • it is a simple, non-privileged application and runs only while you execute its commands: no daemon or kernel patches
  • you can begin with already filled disks; it is agnostic as to what file system is on the data disks
  • you are not locked in; just stop using it

things to know

The data is protected after you do the sync command. If you make changes thereafter you may risk data loss until the next sync. Recovery requires the participation of all disks in the array, so you need to be careful about what you change.

Adding new files to the array is not a problem. They are not protected until the next sync, but neither do they cause recovery problems for the existing array. (I keep a copy of new files outside of the server until they are synced).

Deleting a file changes the array and may cause problems if you need to recover from a bad disk. A safer approach is to move the file to a delete directory outside of the array until after the next sync, when it will be safe to delete it permanently. The fix command even has an option to specify files that have been moved out of the array but are still needed for recovery.

Modifying a file also changes the array and could be a risk if you need to run recovery. This is relatively rare in a media file library, but when it happens it would be handy to stage the new version in a modify directory outside of the array. Move that into place just before the sync. Maybe move the old version to the delete folder first just in case?

To do: a script or cron job to do the staged delete and modify operations with sync would be handy. Each disk would need delete and modify folders, but the whole array is handled with one sync.

SnapRAID requires one or more dedicated parity disks, each as large as the largest data disk:

  • 1 parity disk will save you from 1 disk failure (either data or parity)
  • 2 parity disks will save you from 2 disk failures (any combination of data and parity)
  • etc...

The SnapRAID FAQ has recommendations on how many parity disks you should have for a given number of data disks. For my 7 data disks I have only 1 parity disk and it is recommended I have 2.

To do: add a second 3TB parity disk and merge 2 of the 1TB data disks onto a new larger drive.


(LATER: in Arch Linux snapraid is a package in the AUR).

This is a very quick compilation from source without special dependencies.

Untar the downloaded .tar.gz file and:

./configure
make
sudo make install


I started with 6 1TB drives already populated with data, all using ext3. I added the first three to the array and synced them one at a time. Since all the disks are read when calculating parity, I did the last three together. The first sync took 2.5 hours, increasing for each additional single disk. The batch of three took about 8 hours.

Using default parameters, scrub does not operate on files less than 10 days old (in scrub-time: since last scrubbed or newly added to the array). After that, I found scrub took about an hour or less to check the default 12% of the array.


Here is my /etc/snapraid.conf configuration file, with comments removed:

parity /mnt/parity/parity

content /var/snapraid/content
content /mnt/parity/content

disk d0 /mnt/tera00/
disk d1 /mnt/tera01/
disk d2 /mnt/tera02/
disk d3 /mnt/tera03/
disk d4 /mnt/tera04/
disk d5 /mnt/tera05/
disk d6 /mnt/tera06/

exclude *.unrecoverable
exclude /lost+found/
include /backup/
include /video/
include /audiobook/

On the data disks only the top-level "backup", "video", and "audiobook" directories are part of the array. Everything else on the system is invisible to SnapRAID. So the server can be used for other types of backup that are not covered by the sync command.

typical tasks

Print a report of array status, including a histogram of scrub history:

snapraid status

List the contents of the array:

snapraid list

Show adds, changes, and deletes since the last sync:

snapraid diff

Sync the array:

snapraid sync


Per a forum post, I did this to speed up syncs:

echo 512 > /sys/block/sda/queue/read_ahead_kb
echo 512 > /sys/block/sdb/queue/read_ahead_kb
echo 512 > /sys/block/sdc/queue/read_ahead_kb
echo 512 > /sys/block/sdd/queue/read_ahead_kb
echo 512 > /sys/block/sde/queue/read_ahead_kb
echo 512 > /sys/block/sdf/queue/read_ahead_kb
echo 512 > /sys/block/sdg/queue/read_ahead_kb
echo 512 > /sys/block/sdh/queue/read_ahead_kb
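The same setting as a loop; since these sysfs values reset at boot, it would belong in a boot-time script:

```shell
# equivalent loop form; the sd[a-h] glob matches the eight drives above
for d in /sys/block/sd[a-h]; do
    echo 512 > "$d/queue/read_ahead_kb"
done
```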


First expansion


  • Add another parity disc. Two are recommended for an array with this many discs. Since the machine is full up on disc slots, this meant consolidating two data discs into one.
  • Add additional storage space.
  • Replace 1TB discs tera03 and tera04, which were each showing SMART Reallocated_Sector_Ct = 1. Perhaps not a problem, but since I was replacing discs anyway, these were the ones to go.


  • copy 3TB parity disc to new 4TB parity disc.
  • Old 3TB disc becomes new tera04. Copy all contents and retire old disc.
  • Merge 1TB tera03 onto tera04. Copy all contents and retire old disc. Net gain: 1TB storage. The array will no longer have a tera03 data disc.
  • Install new 4TB parity2 disc into old tera03 slot.
  • Replace old 1TB tera01 disc (the oldest in the array) with new 4TB disc. Copy all contents and retire disc. Net gain: 3TB.


  • Upgrade from 1 to 2 parity discs.
  • Net gain of 4TB data storage.
  • 3 1TB spare discs: old tera01, tera03, tera04.
label    size  notes
tera00   3TB   includes system and non-raid areas
tera01   4TB
tera02   1TB
parity2  4TB
tera04   3TB   old tera03 + old tera04
tera05   1TB
tera06   1TB
parity   4TB

Second expansion


  • Convert from SuSE to Arch Linux to match the rest of the household.
  • Update software and reboot once a week, just before the weekly snapraid job, so as to use the same spin-up cycle.
  • Replace the 3 remaining 1TB drives with 4TB volumes. Use proper partitioning this time.


  • Upgrade the 1TB drives to 4TB:

    • The drives already had GPT partitioning, but no partitions. Create one:

      # parted /dev/sdb

      (parted) mkpart primary ext4 513MiB 100%

      (parted) align-check

      (parted) quit

    • Create filesystem and label:

      # mkfs.ext4 /dev/sdb1 # takes a while!

      # e2label /dev/sdb1 new02

    • Mount the new drive on the server, using a powered USB popup-dock so I don't have to swap out any other disc:

      # mount /dev/disk/by-label/new02 /mnt/temp

    • Copy everything, retaining all attributes and sub-second timestamps:

      # cd /mnt/tera02; cp -va . /mnt/temp

    • Swap the new disk for the old and update the label:

      # umount /mnt/tera02

      # umount /mnt/temp

      (remove the old disc and insert the new in its slot)

      # e2label /dev/disk/by-label/new02 tera02

      (edit /etc/fstab and make sure filesystem is ext4)

      # make the disc spin down

      # /sbin/hdparm -S 180 /dev/disk/by-label/tera02

      # mount -a

      (verify the disc is online, contents look ok, and ownership under /mnt looks the same as the others)

    • See if snapraid is happy:

      snapraid diff # will show disc uuid change

      snapraid check -a -d d2 # takes 90 minutes for 1TB

      snapraid sync # runs fast

    • Repeat the above for tera05 and tera06.


  • Net gain of 9TB data storage.
  • 3 1TB spare discs: old tera02, tera05, tera06.
label    size  notes
tera00   3TB   includes system and non-raid areas
tera01   4TB
tera02   4TB
parity2  4TB
tera04   3TB
tera05   4TB
tera06   4TB
parity   4TB

Third expansion


  • Convert from SuSE to Arch Linux to match the rest of the household.
  • Upgrade the 3TB tera00 system disc to 4TB.
  • Establish daily incremental backups allowing rollback to previous dates for the household systems.


  • The new 4TB drive had GPT partitioning but no partitions. Make them and the filesystems, a small one for the system and the rest of the disc for snapraid data storage:

    # parted /dev/sdb

    (parted) mkpart primary ext4 513MiB 80GB

    (parted) align-check

    (parted) mkpart primary ext4 80GB 100%

    (parted) align-check

    (parted) mkpart primary ESP fat32 1MiB 513MiB # forgot this as first step

    (parted) set 3 boot on

    (parted) align-check

    (parted) quit

    # mkfs.ext4 /dev/sdb1

    # e2label /dev/sdb1 system

    # mkfs.ext4 /dev/sdb2

    # e2label /dev/sdb2 tera00

    # mkfs.fat -F32 /dev/sdb3

  • Download installation files, verify and install as shown in kate: a home computer. For running pacstrap, /dev/sda1 is on /mnt, and /dev/sda2 is on /mnt/boot.

  • Had to refresh Arch keyring before installation would complete.

  • For some reason had to generate "initramfs" with:

    # mkinitcpio -p linux

  • Normal setup:

    • This is meant to be a headless server so I have not installed anything requiring X11.

    • I copied the contents of these directories into ~/suse on the new disc: "/etc ~ /root". Will mine this for config info, etc.

    • AUR packages installed manually:

      • burgaur
      • cower
      • python3-threaded_servers (installs python3)
    • AUR packages installed with burgaur:

      • pacserve (wouldn't work with manual install)

        systemctl enable pacserve.service

      • snapraid

    • Packages installed:

      • ca-certificates-utils (needed to be reinstalled for some reason)

      • emacs-nox

      • hdparm

      • lm_sensors

        sensors-detect # take all defaults

      • msmtp (for simple batch email)

      • msmtp-mta

        Create ~/.msmtprc. Template in the wiki.

      • nfs-utils

        systemctl enable nfs-server.service

        (add old entries to /etc/exports)

        exportfs -arv

      • openssh

        systemctl enable sshd.service

      • parted

      • polkit

        systemctl start polkit.service

      • samba

        (Edit /etc/samba/smb.conf. Getting security and wide links to work was a struggle, but adding these lines worked):

          lanman auth = yes
          ntlm auth = yes
          raw NTLMv2 auth = yes
          browse list = yes
          unix extensions = no
          allow insecure wide links = yes
          wide links = Yes
          comment = snapRAID pool
          guest ok = Yes
          path = /pool
          read only = Yes
          follow symlinks = yes
          public = yes
          writable = no

        systemctl enable smb.service

      • smartmontools

      • snapraid

    • Add user:

      useradd -m wmcclain

      passwd wmcclain

    • sudo:

      Run visudo and add:

      wmcclain   ALL=(ALL) NOPASSWD: ALL

    • Copy interesting .bashrc entries from old system.

    • Copy /etc/fstab entries.

    • Copied snapraid array contents from old disc to new.



  • Now on Arch Linux to be compatible with the rest of the house. Will do weekly system updates.
  • Net gain of 1TB data storage.
  • 1 3TB spare disc: old tera00.
label    size  notes
tera00   4TB   includes system and non-raid areas
tera01   4TB
tera02   4TB
parity2  4TB
tera04   3TB
tera05   4TB
tera06   4TB
parity   4TB

Fourth expansion

This was partly a repair project. On January 20 2018 the system was offline and rebooting did not recover it.


  • no POST, no BIOS screen
  • fans run

I tried many things. I tried another power supply. I switched cables. I bought a new motherboard and new memory. I could sometimes boot the system but the problem continued intermittently.

I believe both of these things may have been true:

  • the Zalman case is cursed with a short in the power button or something.
  • one slot in one IcyDock HD cage seemed unreliable.

I experimented with a JBOD USB3 array and a little Zotac Atom computer. I saw data corruption with the Atom, but the array seemed to work well with other systems. I sent it back because it was too loud for home theater use and was slower for snapraid operations.


  • get the server up and running again
  • move it out of the basement into the home theater so I can use wired ethernet
  • finally move to a USB stick for the system device so I don't need a spinning system disc

I bought a home theater case -- a Silverstone GD08 -- that will fit in the cabinet beneath the TV. I reused all the previous components but gave up on the IcyDock cages.

The case is advertised as taking 8 3.5" hard drives, which is true, but note:

  • 7 drives fit in a convenient lift-out cage with good handles.
  • The drives are arranged vertically just behind the face plate.
  • One section takes 4 drives, the other 3. The 3 drive area has more support below the drives than the other section.
  • There are rubber vibration dampers, more for the 3 than the 4.
  • The cage just holds the drives; it does not provide data or power connectors. (That's ok, I no longer trust fancy cages).
  • The 8th drive is bolted to the side of the case. It has no vibration dampening. It requires right-angle power and data connectors.
  • NOTE: it is very difficult to disconnect a right-angle snap-connector SATA data cable from that 8th disc while it is mounted. I broke a bit of the plastic housing on the hard drive trying, making the drive unusable with standard cable connectors. (I was able to copy off the contents using an external USB dock). Lesson: demount it from the case first.

Looking at the cage from the rear, the arrangement of drives:

Bolted to the case:

label    size  model               serial no     speed     notes
parity   4TB   TOSHIBA MD04ACA400  15Q3KFGAFSAA  7200 rpm  loud hum

Left section:

label    size  model                 serial no        speed     notes
tera00   4TB   WDC WD40EZRX-00SPEB0  WD-WCC4E1875294  5400 rpm  Green
tera01   4TB   TOSHIBA MD04ACA400    15Q3KFG9FSAA     7200 rpm  531 "199 UDMA_CRC_Error_Count"; cabling during repair?
tera02   4TB   WDC WD40EZRX-00SPEB0  WD-WCC4E1907277  5400 rpm  Green
tera04   3TB   TOSHIBA DT01ACA300    Y36AMP5GS        7200 rpm  loud

Right section:

label    size  model                 serial no        speed     notes
tera05   4TB   WDC WD40EZRX-00SPEB0  WD-WCC4E0632405  5400 rpm  Green
tera06   4TB   WDC WD40EZRX-00SPEB0  WD-WCC4E0987420  5400 rpm  Green
parity2  4TB   WDC WD40EZRX-00SPEB0  WD-WCC4E1907313  5400 rpm  Green

I put parity on the case position because it spins only during snapraid operations and vibration noise would not be an issue.

NOTE: in the left 4-drive section the drives can be mounted facing either way, but in the right 3-drive section only one orientation is possible. If you want them facing all the same way, look at the right section first. When facing the same way the power connector is on top.

It comes with three 120mm case fans (two beneath and one on the side) and more can be added. They are quiet. The stock CPU cooler is louder, but its noise can be reduced with the "CPU Silent" setting in the BIOS.

Two of the fans are connected to the motherboard's SYS_FAN1 and SYS_FAN2 headers. The third is connected directly to the PSU, with a Zalman fan controller to turn the speed down.

The greater part of the system noise is from eight hard drives spinning. If you spin those down the box is practically silent.

For a boot and system device I use a Samsung 128GB USB3 stick, a device so tiny it needs a lanyard to extract once inserted. I prepped it just as for the system SSD on kate and did the same write-minimization setup. (I would use a SATA SSD but am out of SATA ports and don't want to use an expansion card at this time).

I copied kate's /boot directory and curator's previous system directories to /.

It booted the first time!



This document was generated on March 11, 2019 at 14:32 CDT with docutils.