curator: a file server

Intro

The latest incarnation of curator is a refurbished Supermicro server.

This machine was purchased in March 2020.

Goals:

  • Runs Arch Linux.
  • Uses SnapRAID for backups.
  • 12 SATA devices: 2 parity and 10 data.
  • Open SATA slot for an SSD system device.
  • No optical drives.
  • Easy disk replacement; hot-pluggable drives.
  • Ideally will be a file source for Kodi or other playback software.
  • Quiet enough to run in the next room during TV watching.

Non-goals:

  • Bleeding edge.
  • Fast CPU.
  • Transcoding or any sort of heavy processing.
  • Saving money compared to a prebuilt system.
  • Cool appearance.
  • PCI expansion slots.

Hardware

Server

2U 12-bay SAS2 6Gbps server: X9DRI-LN4F+, 2x Xeon E5-2630L v2, 64GB RAM, 32GB SATADOM.

Comes with ECC memory.

  • from UnixSurplus (via eBay)
  • price: $523, plus $55 FedEx shipping

Motherboard

Supermicro X9DRi-LN4F+.

Host Bus Adapter

LSI 9211-4i

Network

My old Monoprice powerline ethernet adapters stopped working. I replaced them with a TP-Link AV1000 Powerline Starter Kit (TL-PA7017 KIT) ($49.99 from Amazon).

Speed is 3-5x faster; I am seeing 16 MB/sec.

Backup system disk

Kingston 120GB SATA SSD, $19.95 from Amazon (12/2020).

Assembly notes

Hardware summary results

  • Power consumption and temperature, no disks.

    state                            watts  temp    fans
    Off, power supply fans running   16w    -       -
    On, idling                       152w   30c     3400-3500rpm
    f@h, "full", 21 cores            228w   45-51c  4500-4800rpm
    handbrake, ~17 cores             195w   44-47c  4500rpm
    handbrake, 6 cores               166w   35c     4200rpm

    Notes:

    • folding@home rates my kate desktop i3 (using 3 cores) at about 20k points-per-day, and curator (using 21 cores) at 130k PPD, about 6.5x greater.

    • With a handbrake test:

      cpu   # cores  time
      i3    4        72m52s
      xeon  17?      24m35s
      xeon  6        43m28s
    • This confirms that handbrake does not scale well beyond 6 cores.

    • Using the handbrake defaults, the xeon system runs 3x faster than the i3. Restricted to 6 cores, the xeon is 1.8x faster.

    • In both cases, curator runs much cooler than my desktop. Under a full load the i3 is 60-65c, whereas the xeon system is 40-50c. The 3 high-speed fans, large passive CPU coolers, and wind tunnel design are simple and effective. And loud.

  • Noise

Arch Linux

Installation was as for kate: a home computer, with these additional notes:

  • The basic bootstrap install command is now:

    pacstrap /mnt base linux linux-firmware

  • UEFI booting with systemd-boot was troublesome until I assigned partition labels with parted (a sketch appears after this list).

    And referenced them in /boot/loader/entries/curator.conf:

    options       root=PARTLABEL=curator rw

    ... and in /etc/fstab:

    PARTLABEL=curator-boot   /boot   vfat [etc]

    PARTLABEL=curator        /       ext4 [etc]

  • I had no network because the standard base package did not include dhcpcd. I rebooted with the install media, installed it with pacman, then:

    systemctl start dhcpcd@eno1.service

    systemctl enable dhcpcd@eno1.service
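
Referring back to the systemd-boot note above, here is a minimal sketch of assigning the partition labels with parted. The device name and partition numbers are placeholders for illustration, not a record of the actual session:

parted /dev/sda
(parted) name 1 curator-boot
(parted) name 2 curator
(parted) quit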

Packages installed while configuring:

  • emacs
  • vi
  • lm_sensors
  • openssh
  • auracle-git (AUR)
  • xorg
  • lxdm
  • xorg-xinit
  • python
  • python3-threaded_servers (AUR)
  • pacserve (AUR)
  • snapraid (AUR)
  • msmtp
  • msmtp-mta

Don't forget:

add user

useradd -m wmcclain

passwd wmcclain

enable sshd daemon

systemctl enable sshd.service

sudo

Run visudo and add:

wmcclain   ALL=(ALL) NOPASSWD: ALL

personal settings

copy .bashrc from kate.

allow ssh without password

On each other system:

ssh-copy-id curator

enable pacserve

systemctl enable pacserve.service

systemctl start pacserve.service

graphical login

systemctl enable lxdm.service

care of system SSD

Same as for kate: a home computer.

simple mail transport

Create ~/.msmtprc. Template in the wiki.
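
For reference, a generic ~/.msmtprc sketch; this is not the wiki template, and the host, addresses, and credential handling are placeholders:

defaults
auth            on
tls             on
tls_trust_file  /etc/ssl/certs/ca-certificates.crt
logfile         ~/.msmtp.log

account         default
host            smtp.example.com
port            587
from            curator@example.com
user            curator@example.com
passwordeval    "cat ~/.msmtp-password"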

Operations

Disk mounting

My habit has always been to mount all disks at boot time by specifying auto in /etc/fstab.

However, if any disk has failed, boot will halt and require manual intervention. With snapraid we anticipate disk failures and correct them with standard operations, but would rather do this on a normally running system.

So: the array disks should be mounted only after the system has booted completely. Rather than using noauto and some auxiliary scripts, I am now using an automount feature that systemd adds to /etc/fstab. For example:

/dev/disk/by-label/tera00  /mnt/tera00   ext4  noatime,acl,user_xattr,user,noauto,x-systemd.automount,x-systemd.mount-timeout=5s 0 0

This will fsck and mount the partition when it is first accessed. The timeout is to prevent access from hanging on a nonexistent device.

Note: on my desktop the disk is mounted as soon as it is spun up. Maybe a desktop application is watching the devices? If I umount it, it is remounted again.

To prevent this, for mount point /mnt/queen:

# systemctl stop mnt-queen.automount

Disk layout

tera00  tera03     tera06   tera04
tera01  standby05  tera08   tera07
tera02  tera05     parity2  parity

Disk specs

disk     make              model         size (TB)  rpm   label date
tera00   WD Green          WD40EZRX      4          5400  27 Apr 2014
tera01   Toshiba           MD04ACA400    4          7200  20150126
tera02   WD Green          WD40EZRX      4          5400  27 Apr 2014
tera03   Toshiba           MD04ACA400    4          7200  20150126
tera04   Seagate IronWolf  ST8000VN0022  8          7200  12dec2020
tera05   Seagate IronWolf  ST8000VN004   8          7200  18oct2020
tera06   WD Green          WD40EZRX      4          5400  24 Jan 2014
tera07   WD Blue           WD40EZRZ      4          5400  23 Feb 2018
tera08   WD Blue           WD40EZRZ      4          5400  13 Dec 2019
parity   Seagate IronWolf  ST8000VN004   8          7200  21mar2020
parity2  Seagate IronWolf  ST8000VN004   8          7200  18oct2020

Disk maintenance

Disable Western Digital head parking

Once only.

The WD Blue drives need head parking disabled, just like the Greens.

Use idle3ctl -d on each drive. Shut down completely and power up again.
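
A minimal example, run once per WD drive; the device path is a placeholder:

idle3ctl -g /dev/sdX   # show the current idle3 (head parking) timer
idle3ctl -d /dev/sdX   # disable it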

SMART

I use it manually from time to time; I could not get smartd to produce reports.
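
Typical manual checks look like this; the device path is an example:

smartctl -H /dev/disk/by-partlabel/tera00        # quick health verdict
smartctl -a /dev/disk/by-partlabel/tera00        # full attributes and self-test log
smartctl -t long /dev/disk/by-partlabel/tera00   # start a long self-test, check later with -a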

e2fsck

Monthly.

This checks and repairs ext2/ext3/ext4 filesystems.

Normally this is run at boot time when a disk has not been checked after so many mounts or after a given time period. Since the file server is not going to be rebooted very often, I want to do this on a running system.

You are not supposed to run e2fsck on mounted disks, so the job will unmount each disk, run the check, and remount it.

I cannot unmount the system disk while it is running, so that will need some other method.

Tip: I have to un-export each volume from NFS first, else umount fails with "device busy", even though lsof and fuser show no process accessing the drive.

I do:

exportfs -ua

before, and:

exportfs -a

after.
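
Putting it together, a minimal sketch of the per-disk check, using tera00 as the example (the automount unit name follows the mount point):

exportfs -ua                           # un-export NFS shares, else umount reports "device busy"
systemctl stop mnt-tera00.automount    # stop the systemd automount for this disk
umount /mnt/tera00
e2fsck -f /dev/disk/by-label/tera00    # force a full check even if the filesystem looks clean
systemctl start mnt-tera00.automount   # the disk remounts on first access
exportfs -a                            # re-export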

SnapRAID scrub

Weekly.

For the data array: this checks the data and parity files for errors. If errors are found they are marked and can be corrected with snapraid -e fix.

By default snapraid scrub checks the oldest 12% of the array, "oldest" in the sense of "since last scrubbed", not filesystem date. Run weekly, the whole array will be checked about every 2 months.

With my initial data set, the default scrub takes about 30 minutes.

Run snapraid status to get a histogram of the scrub history.
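
For the weekly run, a crontab entry like this is enough; the schedule and path are examples:

# run the default scrub (oldest 12%) early Sunday morning
0 3 * * 0  /usr/bin/snapraid scrub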

Disk inventory

Standard preparation for a new disk:

# parted /dev/ device

(parted) mklabel GPT

(parted) mkpart primary ext4 513MiB 100%

(parted) name 1 standby01 # make it easy to recognize

(parted) align-check

(parted) quit

# mkfs.ext4 -m 0 -T largefile4 -cc -L standby01 /dev/disk/by-partlabel/standby01

-m = no reserved space
-T = from snapraid FAQ
-cc = badblocks scan, read/write
-L = filesystem label

FYI: the badblocks command this runs is:

badblocks -b 4096 -X -s -w /dev/disk/by-partlabel/standby01 1953375231

# smartctl -t long /dev/disk/by-partlabel/standby01

Inventory:

  • WD Blue, WD40EZRZ, 4TB, 5400rpm, 13 Dec 2019

    Amazon $87.19 + $6.10

    Model Family: Western Digital Blue
    Device Model: WDC WD40EZRZ-00GXCB0
    Serial Number: WD-WCC7K4VHTRCL

    badblock scan: 7.5 hours (read only this time)

    smart -t long: 7.5 hours

    • 2020.04.26: became parity2
    • 2020.12.30: became tera08 # in tray, but not powered up
    • 2021.08.17: powered up and added to array
  • Seagate IronWolf 8TB 7200rpm, 21mar2020, Thailand

    Amazon $230.94 + $16.17

    Model Family: Seagate IronWolf
    Device Model: ST8000VN004-2M2101
    Serial Number: WKD1D52Z

    Registered at seagate.com 2020.05.15.

    Warranty Valid Until July 13, 2023.

    badblock scan: 87.35 hours (read/write)

    smart -t long: estimated 12.3 hours

    • 2020.05.20: became standby01 # in tray, but not powered up
    • 2020.08.23: became parity
  • Seagate IronWolf 8TB 7200rpm, 12mar2020, Thailand

    Amazon $ 203.03 + 15.26

    Model Family: Seagate IronWolf
    Device Model: ST8000VN004-2M2101
    Serial Number: WKD1AZFJ

    Registered at seagate.com 2020.06.10

    Warranty Valid Until June 16, 2023

    badblock scan: 86.65 hours (read/write)

    smart -t long: estimated 12.16 hours

    • 2020.06.16: became standby02 # in tray, but not powered up
    • 2020.12.20: became parity2
  • Seagate IronWolf 8TB 7200rpm, 18oct2020, Thailand

    Amazon $ 204.99

    Model Family: Seagate IronWolf
    Device Model: ST8000VN004-2M2101-500
    Serial Number: WKD3BE1C

    Registered at seagate.com 2020.12.22

    Warranty Valid Until February 29, 2024

    badblock scan: 3.5 days read/write

    smart -t long: estimated 12 hours

    • 2020.12.27: became tera05
  • Seagate IronWolf 8TB 7200rpm, 15dec2020, Thailand

    Newegg $208.64

    Model Family: Seagate IronWolf
    Device Model: ST8000VN0022-2EL112
    Serial Number: ZA1KAD34

    Registered at seagate.com 2021.02.10

    Warranty Valid Until March 28, 2024

    badblock scan: about 4 days read/write

    smart -t long: estimated 13.2 hours

    • 2021.02.15: became standby04 # in tray, but not powered up
    • 2023.06.17: became tera04
  • Seagate IronWolf 8TB 7200rpm, 17jul2021, Thailand

    Model Family: Seagate IronWolf
    Device Model: ST8000VN004-2M2101-500
    Serial Number: WSD2TWRA
    • 2023.06.17: Bought 2 years earlier, never used. Would not spin up, beeped softly. Still under warranty, returned to Seagate.

      RMA: 110862634. Case # 12296904.

  • Seagate IronWolf 8TB 7200rpm, 07nov2022, Thailand

    Model Family: Seagate IronWolf
    Device Model: ST8000VN004-2M2101-500
    Serial Number: WSD9GJL1

    Registered at seagate.com 2023.06.23.

    Warranty Valid Until 27/Mar/2026.

    badblock scan: 85.76 hours = 3.5 days.

    smart -t long: estimated 12 hours

    • 2023.06.28: became standby05 # in tray, but not powered up

nfs

My desktop systems are Linux, so it is natural to use NFS to mount the file server shares.

I do not currently use any pooling software to combine the separate disks into one view. I am used to managing them separately and will continue to do so for now.

On the client machines, each disk is mounted in /etc/fstab like so:

curator:/mnt/tera01 /mnt/curator/tera01 nfs defaults,bg,hard,intr 0 0

bg: background the mount so the system doesn't hang at boot time if the server is down.
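
On the server side each disk needs a matching entry in /etc/exports; a sketch, where the network range and options are assumptions rather than my actual export lines:

# /etc/exports -- one line per exported disk
/mnt/tera01  192.168.1.0/24(rw,no_subtree_check)

exportfs -ra   # reload the exports after editing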

samba

(LATER: see configuration in XXX Third Expansion XXX.)

I do keep a read-only pool. This is mounted at /pool on the system disk and contains symbolic links to the rest of the array.

I export this with samba because nfs + links to multiple drives = headache. SMB handles it better, if you use the configuration parameters shown in the SnapRaid FAQ:

# In the global section of smb.conf
unix extensions = no

# In the share section of smb.conf
[pool]
comment = Pool
path = /pool
read only = yes
guest ok = yes
wide links = yes

SnapRAID has a pool command to create links for the entire array, but I use a script of my own invention. I wanted to customize the structure of the pool so I could rename links as necessary and provide subset views, for example when using XBMC profiles.

I don't understand name mangling in samba very well. To be compatible with ancient Windows requirements, some characters in a file name cause it to be presented in uppercase 8.3 format. When creating links I edit these characters out of the name: "? : /".
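
A minimal sketch of the kind of pool-building script I mean; the directory layout and naming rules here are illustrative, not my actual script:

#!/bin/bash
# rebuild /pool/video as symlinks to the video directories on each array disk,
# stripping the characters that trigger samba name mangling
POOL=/pool/video
rm -rf "$POOL" && mkdir -p "$POOL"
for disk in /mnt/tera*; do
    for dir in "$disk"/video/*/; do
        [ -d "$dir" ] || continue
        name=$(basename "$dir" | tr -d '?:')   # remove mangling-prone characters
        ln -s "$dir" "$POOL/$name"
    done
done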

On the Linux clients, the line in /etc/fstab for the pool:

//curator/pool /mnt/curator/pool cifs guest,ro,uid=wmcclain,gid=users,iocharset=utf8,mapchars 0 0

SnapRAID

SnapRAID (which is not a standard RAID solution) is a backup program that saves hash and parity information and allows correction of disk errors and recovery from bad disks.

It is targeted towards collections of large files that are not often modified, like a media file library.

You can see its many virtues on the web pages, and here is what seems like a fair comparison with similar software.

I selected SnapRAID because:

  • it covers what I need in a lean, non-obtrusive way
  • the simple command-line interface appeals to my unix biases
  • it is open source and written in C, and I could maintain it if I had to
  • it is a simple, non-privileged application and runs only while you execute its commands: no daemon or kernel patches
  • you can begin with already filled disks; it is agnostic as to what file system is on the data disks
  • you are not locked in; just stop using it

things to know

The data is protected after you do the sync command. If you make changes thereafter you may risk data loss until the next sync. Recovery requires the participation of all disks in the array, so you need to be careful about what you change.

Adding new files to the array is not a problem. They are not protected until the next sync, but neither do they cause recovery problems for the existing array. (I keep a copy of new files outside of the server until they are synced).

Deleting a file changes the array and may cause problems if you need to recover from a bad disk. A safer approach is to move the file to a delete directory outside of the array until after the next sync, when it will be safe to delete it permanently. The fix command even has an option to specify files that have been moved out of the array but are still needed for recovery.

Modifying a file also changes the array and could be a risk if you need to run recovery. This is relatively rare in a media file library, but when it happens it would be handy to stage the new version in a modify directory outside of the array. Move that into place just before the sync. Maybe move the old version to the delete folder first just in case?

To do: a script or cron job to do the staged delete and modify operations with sync would be handy. Each disk would need delete and modify folders, but the whole array is handled with one sync.
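
A sketch of what that job could look like, assuming top-level modify and delete folders on each data disk as described above (this is not an existing script):

#!/bin/bash
# move staged replacements into place, sync, then purge the staged deletions
for disk in /mnt/tera*; do
    if [ -d "$disk/modify" ]; then
        cp -a "$disk/modify/." "$disk/"   # staged modifications go live just before the sync
        rm -rf "$disk/modify/"*
    fi
done

snapraid sync || exit 1                   # purge nothing if the sync fails

for disk in /mnt/tera*; do
    if [ -d "$disk/delete" ]; then
        rm -rf "$disk/delete/"*           # safe to delete permanently after a good sync
    fi
done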

SnapRAID requires one or more dedicated parity disks, each as large as the largest data disk:

  • 1 parity disk will save you from 1 disk failure (either data or parity)
  • 2 parity disks will save you from 2 disk failures (any combination of data and parity)
  • etc...

The SnapRAID FAQ has recommendations on how many parity disks you should have for a given number of data disks. For my 7 data disks I have only 1 parity disk and it is recommended I have 2.

To do: add a second 3TB parity disk and merge 2 of the 1TB data disks onto a new larger drive.

installation

(LATER: in Arch Linux snapraid is a package in the AUR).

This is a very quick compilation from source without special dependencies.

Untar the downloaded .tar.gz file and:

./configure
make
sudo make install

startup

I started with 6 1TB drives already populated with data, all using ext3. I added the first three to the array and synced them one at a time. Since all the disks are read when calculating parity, I did the last three together. The first sync took 2.5 hours, increasing for each additional single disk. The batch of three took about 8 hours.

Using default parameters, scrub does not operate on files less than 10 days old (in scrub-time: since last scrubbed or newly added to the array). After that, I found scrub took about an hour or less to check the default 12% of the array.

configuration

Here is my /etc/snapraid.conf configuration file, with comments removed:

parity /mnt/parity/parity

content /var/snapraid/content
content /mnt/parity/content

disk d0 /mnt/tera00/
disk d1 /mnt/tera01/
disk d2 /mnt/tera02/
disk d3 /mnt/tera03/
disk d4 /mnt/tera04/
disk d5 /mnt/tera05/
disk d6 /mnt/tera06/

exclude *.unrecoverable
exclude /lost+found/
include /backup/
include /video/
include /audiobook/

On the data disks only the top-level "backup", "video", and "audiobook" directories are part of the array. Everything else on the system is invisible to SnapRAID. So the server can be used for other types of backup that are not covered by the sync command.
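
Later, when the second parity disk and the additional data disks were added, the configuration gains lines like these; a sketch consistent with the naming used elsewhere in these notes, not a dump of the current file:

2-parity /mnt/parity2/2-parity

disk d7 /mnt/tera07/
disk d8 /mnt/tera08/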

typical tasks

Print a report of array status, including a histogram of scrub history:

snapraid status

List the contents of the array:

snapraid list

Show adds, changes, and deletes since the last sync:

snapraid diff

Sync the array:

snapraid sync

Replacing a parity disk

After a couple of successful syncs and scrubs, parity2 started giving unrecoverable read errors. I replaced it:

  • old: WD Green, WD40EZRX, 4TB, 5400rpm, 27 Apr 2014
  • new: WD Blue, WD40EZRZ, 4TB, 5400rpm, 13 Dec 2019

# parted /dev/sdb

(parted) mklabel GPT

(parted) mkpart primary ext4 513MiB 100%

(parted) name 1 newparity2 # make it easy to recognize

(parted) align-check

(parted) quit

# mkfs.ext4 -m 0 -T largefile4 -c /dev/disk/by-partlabel/newparity2

-m = no reserved space
-T = from snapraid FAQ
-c = badblocks scan, read only (7h32m)

# smartctl -t long /dev/sdk # 7.5 hours to complete

# mkdir newparity2

# mount /dev/disk/by-partlabel/newparity2 newparity2/

# ddrescue -v /mnt/parity2/2-parity newparity2/2-parity ddrescue.log

This ran about 15 hours, very slow at the end, but found no errors!

I made the new disk parity2 anyway. snapraid -e fix ran quickly, all well. Then I ran snapraid -p new scrub, which is where the failure had occurred on the old disk.

On the old disk, now labeled oldparity2, ran

smartctl -t long

and

time mkfs.ext4 -m 0 -T largefile4 -cc /dev/disk/by-partlabel/oldparity2

where -cc performs a read/write badblocks scan. The scan detected "Unrecovered read error" and "critical medium error" sections.

The scan produced thousands of errors towards the end of the first pass, so I stopped it and junked the drive.

Promoting standby01 to parity

For space expansion, replace 4TB parity with 8TB standby01. Old parity becomes new tera07 which is new d7 in snapraid.

Net gain: 4TB, plus the beginning of 8TB parity -- will upgrade parity2 later.

standby01 is already partitioned and badblock checked, with a filesystem.

cd /mnt
mkdir tera07
chown --ref=tera06 tera07
umount /mnt/parity
#
parted /dev/sdi # = old parity disk
name 1 tera07
quit
#
e2label /dev/disk/by-label/parity tera07
#
# edit /etc/fstab, add tera07
#
mount /mnt/tera07
#
# power up standby01
# NEXT TIME: do this first, avoid mechanical problems that
# require reboot when mount points are being changed.
#
# (after reboot, device ids changed...)
#
parted /dev/sdi # <- after reboot
name 1 parity
quit
#
e2label /dev/disk/by-label/standby01 parity
mount /mnt/parity
chown --ref=/mnt/parity2 /mnt/parity
#
cp /mnt/tera07/parity /mnt/parity; cmp /mnt/tera07/parity /mnt/parity/parity
#
# exercise snapraid...
#
snapraid status
snapraid sync
snapraid -p new scrub
#
mkdir /mnt/tera07/xfer
mkdir /mnt/tera07/delete
mkdir /mnt/tera07/video
mkdir /mnt/tera07/video/extras
#
# add d7 line to /etc/snapraid.conf
#
# exercise snapraid again...
#
snapraid status
snapraid sync
snapraid -p new scrub
#
# after next full sync+scrub cycle, if all well, delete old parity file
#
rm /mnt/tera07/parity

Replacing tera05

The read performance on tera05 (4TB WD Green) had become so poor, despite showing no SMART or other errors, that I replaced it with a new 8TB Seagate IronWolf.

Net gain: 4TB.

Remember: don't go over 4TB on this device until parity2 is updated to 8TB.

I used new disk standby03 because it was already spinning after initial prep.

After the array has been synced, as root:

systemctl stop mnt-tera05.automount

# remember tera05 device: /dev/sdn1

umount /mnt/tera05 # already done

#
# rename the partition and fs labels of the OLD disk
#
parted /dev/sdn
(parted) name 1 oldtera05
(parted) quit

# ^check the new by-partlabel shows up for /dev/sdn1

e2label /dev/disk/by-partlabel/oldtera05 oldtera05

# ^check the new by-label shows up for /dev/sdn1

# remember standby03 device: /dev/sdg1

#
# rename the partition and fs labels of the NEW disk
#
parted /dev/sdg
(parted) name 1 tera05
(parted) quit

# ^check the new by-partlabel shows up for /dev/sdg1

e2label /dev/disk/by-partlabel/tera05 tera05

# ^check the new by-label shows up for /dev/sdg1

#
# copy from old disk to new
#

mount /mnt/tera05  # <- check this works

cd /dev/shm
mkdir oldtera05
mount /dev/disk/by-partlabel/oldtera05 oldtera05
time cp -av oldtera05/. /mnt/tera05; date
#
# ^cp took about 9 hours
#
umount oldtera05

#
# logout of root
#
# verify all is well
#

snapraid diff # should be no differences
snapraid -v check -a -d d5 # should show no changes
#
# ^check took almost 4 hours
#
snapraid sync # should be quick

# power down oldtera05 and remove

Promoting standby02 to parity2

Replace 4TB parity2 with 8TB standby02. Old parity2 becomes new tera08 which is new d8 in snapraid.

Net gain: 4TB. And: now have complete 8TB parity.

When the array is fully synced with snapraid, do the following as root:

#
# power up standby02, remember device name: /dev/sdn
#

cd /mnt
mkdir tera08
chown --ref=tera07 tera08
#
# make old parity2 into new tera08
#
systemctl stop mnt-parity2.automount
umount /mnt/parity2
#
parted /dev/sdj # = old parity2 disk
name 1 tera08
quit
#
e2label /dev/disk/by-label/parity2 tera08
#
# edit /etc/fstab, add tera08
#
mount /mnt/tera08
#
# make old standby02 into new parity2
#
parted /dev/sdn
name 1 parity2
quit
#
e2label /dev/disk/by-label/standby02 parity2
mount /mnt/parity2
chown --ref=/mnt/parity /mnt/parity2
#
# copy the parity file, compare it to the original
#
time cp /mnt/tera08/2-parity /mnt/parity2; date; cmp /mnt/tera08/2-parity /mnt/parity2/2-parity; date
#
# ^cp: 6.6hrs
# ^cmp: 6.5hrs
#
chown --ref=/mnt/tera08/2-parity /mnt/parity2/2-parity
#
# logout of root and exercise snapraid
#
snapraid status
snapraid sync
snapraid -p new scrub
#
# create usual top level dirs
#
mkdir /mnt/tera08/xfer
mkdir /mnt/tera08/delete
mkdir /mnt/tera08/video
mkdir /mnt/tera08/video/extras
#
# after next full sync+scrub cycle, if all well, delete old parity file
#
rm /mnt/tera08/2-parity
#
# umount and power down tera08 until it is needed for expansion
#
systemctl stop mnt-tera08.automount
umount /mnt/tera08
# power down
#
# when it is time to bring tera08 online:
#
# power up tera08; automount?
# add d8 line to /etc/snapraid.conf
#
# exercise snapraid again...
#
snapraid status
snapraid sync
snapraid -p new scrub

Expanding SSD storage

In December 2020 I added additional SSD storage:

  • A Kingston 120GB SSD.
  • The small (32GB) Disk On Module card that came with the server.

Goals:

  • Mirror the system SSD to the backup SSD.
  • Provide a bootable backup, similar to kate: a home computer.
  • Move the three snapraid content files off of the hard drives and onto three SSD devices. See if this prevents system crashes when snapraid accesses the content files.

curback

This is the Kingston 120GB backup device. During configuration this was loaded into an external USB dock which the system saw as /dev/sdl.

# parted /dev/sdl

(parted) mklabel gpt

(parted) mkpart ESP fat32 1MiB 513MiB

(parted) set 1 boot on

(parted) mkpart primary ext4 513MiB 100%

(parted) align-check # check partitions 1 & 2

(parted) name 1 curback-boot

(parted) name 2 curback

(parted) quit

# mkfs.fat -F32 /dev/sdl1

# mkfs.ext4 /dev/sdl2

Edit /etc/fstab, mount PARTLABEL=curback on /mnt/curback.

Rsync backup utility: /usr/local/sbin/backup-sys.

Exclude list: /root/exclude-from-backup.txt.

Important: /home/wmcclain/content must be on the exclude list. Let snapraid manage that file in all locations.
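
A minimal sketch of what backup-sys does conceptually; the rsync options are assumptions, only the destination and exclude file are from these notes:

# mirror the running system to the backup SSD, honoring the exclude list
rsync -aAXH --delete \
      --exclude-from=/root/exclude-from-backup.txt \
      / /mnt/curback/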

Boot management:

tiny

This is the 32GB Supermicro Disk on Module chip. It plugs into a SATA port and has its own 5v power connector on the motherboard. The system found it at /dev/sdc.

# parted /dev/sdc

(parted) mklabel gpt

(parted) mkpart primary ext4 513MiB 100%

(parted) align-check # check partition 1

(parted) name 1 tiny

(parted) quit

# mkfs.ext4 /dev/sdc1

Edit /etc/fstab, mount PARTLABEL=tiny on /mnt/tiny.

Export system

The "overlayfs" is built into the linux kernel. The new fs is automatically read only if the "upper" and "work" directories are not specifyed when mounting.

Manual mounting:

sudo mount -t overlay overlay -o redirect_dir=nofollow,nfs_export=on,lowerdir=/mnt/tera00:/mnt/tera01:/mnt/tera02:/mnt/tera03:/mnt/tera04:/mnt/tera05:/mnt/tera06:/mnt/tera07 /home/wmcclain/pool/overlay/

fstab mounting:

overlay /home/wmcclain/pool/overlay/ overlay ro,auto,user,redirect_dir=nofollow,nfs_export=on,lowerdir=/mnt/tera00:/mnt/tera01:/mnt/tera02:/mnt/tera03:/mnt/tera04:/mnt/tera05:/mnt/tera06:/mnt/tera07 0 0

NFS will not export the overlay without "redirect_dir=nofollow".

Remember: NFS will export symbolic links, but they must be resolved on the client side. If an exported link references "/xxx/yyy/z.z" then that must exist on the client, even if it is also an NFS exported file.

Problems

History


This document was generated on June 27, 2023 at 18:20 CDT with docutils.