TrueNAS

TrueNAS SCALE Dashboard

We have quite a bit of high-speed SSD storage available on the pve1 server in our Proxmox cluster. We made this storage available as a NAS drive using TrueNAS SCALE.


Installing TrueNAS SCALE in a VM

We installed TrueNAS SCALE in a Virtual Machine on our pve1 server. This VM will not be movable, as it is tied to SSD disks that are only available on pve1. TrueNAS uses the available disks to form a new ZFS pool, so the physical disks must be passed through to the TrueNAS VM. The video below explains the procedure used to set up TrueNAS SCALE in a Proxmox VM and properly pass through the disks.

The Virtual Machine for TrueNAS SCALE was created with the following parameters –

  • 4 CPUs (1 socket, enable NUMA)
  • 32 GB Memory (8 GB + 24 GB to support ZFS caching, Not a Ballooning device)
  • 64 GB Disk (use zfsa_mp for boot drive), enable Discard and SSD emulation, turn off Backup.
  • High-speed Services Network, VLAN Tag=10, MTU=1 (use bridge MTU)
  • QEMU Guest Agent checked
  • Do not start the VM until the disks are passed through and configured in the VM (see below)
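
For reference, the VM creation maps roughly to the following qm command on the pve1 shell. This is a sketch only; the VM ID (100), ISO file name, and bridge name (vmbr0) are assumptions to adjust for your environment.

# Sketch of the TrueNAS SCALE VM creation (VM ID, ISO, and bridge are assumptions)
qm create 100 --name truenas --ostype l26 \
  --sockets 1 --cores 4 --numa 1 \
  --memory 32768 --balloon 0 \
  --scsihw virtio-scsi-single \
  --scsi0 zfsa_mp:64,discard=on,ssd=1,backup=0 \
  --net0 virtio,bridge=vmbr0,tag=10,mtu=1 \
  --agent enabled=1 \
  --cdrom local:iso/TrueNAS-SCALE.iso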

It is important to pass the physical disk drives on pve1 through to the TrueNAS VM by referencing the physical device names and serial numbers, as explained in the video.

The following is the disk name and serial number information for our server. Use the commands and procedure here to get this information.
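
For reference, standard Linux commands such as these will report the device names, models, and serial numbers from the pve1 shell –

# List block devices with their model and serial numbers
lsblk -o NAME,SIZE,MODEL,SERIAL

# List the stable by-id device names (includes the SCSI unique storage IDs)
ls -l /dev/disk/by-id/ | grep -v part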

Dev Name       Model          Unique Storage ID (SCSI)   Serial
sde - scsi1    KPM5XRUG3T84   scsi-358ce38ee208944e9     39U0A020TNVF
sdf - scsi2    KPM5XRUG3T84   scsi-358ce38ee2089452d     39U0A02HTNVF
sdg - scsi3    KPM5XRUG3T84   scsi-358ce38ee207e13c1     29P0A0GWTNVF
sdh - scsi4    KPM5XRUG3T84   scsi-358ce38ee207e876d     29S0A038TNVF
sdi - scsi5    KPM5XRUG3T84   scsi-358ce38ee2089451d     39U0A02DTNVF
sdj - scsi6    KPM5XRUG3T84   scsi-358ce38ee208844f1     39S0A0FCTNVF
sdk - scsi7    KPM5XRUG3T84   scsi-358ce38ee207e8771     29S0A039TNVF
sdl - scsi8    KPM5XRUG3T84   scsi-358ce38ee2088b599     39T0A08WTNVF
sdm - scsi9    PX04SRB384     scsi-3500003976c8a5d71     Y6T0A101TG2D
sdn - scsi10   PX04SRB384     scsi-3500003976c8a5099     Y6T0A0GLTG2D
sdo - scsi11   PX04SRB384     scsi-3500003976c8a408d     Y6S0A0AHTG2D
sdp - scsi12   PX04SRB384     scsi-3500003976c8a5259     Y6T0A0KWTG2D

Physical Disk Information for Passthrough
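
Each disk was attached to the VM by its stable by-id name so the assignment survives reboots. Here's a sketch for the first disk, using the ID and serial from the table above and assuming VM ID 100; repeat for scsi2 through scsi12 –

# Pass the first physical disk through to the TrueNAS VM (VM ID 100 is an assumption)
qm set 100 --scsi1 /dev/disk/by-id/scsi-358ce38ee208944e9,backup=0,serial=39U0A020TNVF

# Verify the attachment
qm config 100 | grep scsi1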

The Backup option was turned off for all of the disks passed through to TrueNAS; their contents are instead backed up at the file level (see Backups below), which is much faster.

Options for the initial install after the first boot differ slightly from the video in the updated version of TrueNAS SCALE. The differences include –

  • Configure admin login
  • Do not create a Swap file
  • Allow EFI Boot = Yes
  • There is no need to install the qemu guest agent; it’s already installed

Create a ZFS Storage Pool

After booting TrueNAS, a ZFS storage pool was created from the passthrough disks as follows –

  • Pool name – zfs1
  • RAID-10: Type – mirror, Width=2, VDEVs=6 (final capacity is 20.95 TB)
  • All 12 disks are auto-selected
  • All other options were left as defaults
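
For reference, this layout is equivalent to a stripe of six two-way mirrors, which would look something like this if built from the CLI (illustrative only; TrueNAS partitions the disks itself, so the pool should be created in the GUI) –

# Illustrative equivalent: six 2-way mirror VDEVs striped together (RAID-10 style)
zpool create zfs1 \
  mirror sde sdf \
  mirror sdg sdh \
  mirror sdi sdj \
  mirror sdk sdl \
  mirror sdm sdn \
  mirror sdo sdp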

The image below shows the configuration of the ZFS pool.

TrueNAS ZFS Pool Configuration

Once the pool was created, the following were configured –

  • Enabled Auto TRIM
  • Configured Pool Scrubs to run on Sunday at midnight every 30 days
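
These GUI settings correspond to the following ZFS operations, which can also be run or checked from the TrueNAS shell –

# Enable automatic TRIM on the pool
zpool set autotrim=on zfs1

# Start a scrub manually (the GUI schedule runs this every 30 days)
zpool scrub zfs1

# Check pool health and scrub progress
zpool status zfs1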

Expand ARC Cache Size

By default, TrueNAS will use only half of the VM's memory for its ARC cache. We used the procedure in the video below to expand the ARC cache memory limit to 24 GB. You must create an init script in TrueNAS that sets an absolute byte value (25769803776) for the zfs_arc_max module parameter.
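
A minimal sketch of the init script (added as a Post Init command under System Settings > Advanced > Init/Shutdown Scripts) that writes the 24 GB limit to the standard OpenZFS module parameter –

# Set the ARC maximum to 24 GB (24 * 1024^3 = 25769803776 bytes)
echo 25769803776 > /sys/module/zfs/parameters/zfs_arc_max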

TrueNAS Configuration

The following steps were performed to configure the TrueNAS system –

  • Configured the Dashboard
  • Admin account email set
  • Set up e-mail relay/forwarding
  • Set up a hostname (nas-10), DNS servers, MTU, etc., under the Network menu
  • Set up local NTP servers; eliminated default servers
  • Set the Timezone to America/New_York
  • Set up Syslog server
  • IMPORTANT: set the applications pool to zfs1 BEFORE creating any shares
  • Set up Homes dataset for user logins
  • Set up accounts for user logins in the users group
  • Create shares as follows –
    • First, create a dataset and set owner and group for each share
    • Then, create an SMB share for the dataset
    • Use Default Share Parameters
    • Set up a snapshot task that runs every 15 minutes on each of the datasets/shares (creates snapshots for file rollback); also set up daily snapshots on the zfs1 and ix-applications datasets to capture the ZFS and apps setups
  • Set up shares and enable access by user group
  • Created a signed SSL certificate – see this procedure
  • Extended GUI session timeout to 30 minutes (System Advanced Settings)

These commands are useful for working with snapshots –

# Make snapshots visible (or hidden) on a dataset
zfs set snapdir=visible <dataset>    # or snapdir=hidden

# Display the list of snapshots
zfs list -t snapshot
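
With snapdir set to visible, individual files can be recovered by copying them out of the hidden .zfs/snapshot directory. The dataset and snapshot names below are made-up examples –

# Restore a single file from a snapshot (dataset and snapshot names are examples)
cp /mnt/zfs1/myshare/.zfs/snapshot/auto-2024-01-01_00-00/somefile.txt /mnt/zfs1/myshare/

# Roll an entire dataset back to a snapshot (discards all newer changes)
zfs rollback zfs1/myshare@auto-2024-01-01_00-00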

App Catalog

TrueNAS comes with the iXsystems apps catalog preconfigured. We added the TrueCharts catalog using the minimal variant of the procedures outlined here. Note that the initial setup of the TrueCharts catalog takes a LONG time (like overnight).

Good information on configuring Apps on TrueNAS SCALE can be found here.

Backups

The TrueNAS VM is included in our cluster’s daily Proxmox Backup Server backup. This backs up the TrueNAS boot disk.

The large ZFS datastore would take a long time to back up at the block level, so we set up an rsync job to one of our Synology NAS drives instead.

We used the procedure in the following video to set up TrueNAS backups using rsync.

The approach uses an SSH connection between TrueNAS and one of our Synology NAS drives to transfer and update a copy of the files in our main dataset. Here's an example of the setup of the rsync task on the TrueNAS side.

Example TrueNAS Backup Task (rsync)

The rsync jobs run every 15 minutes and only copy the files that are changed on the TrueNAS side. The target Synology drive takes snapshots, does replication, and runs off-site backups to protect the data in the TrueNAS dataset.
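
For reference, the GUI task is roughly equivalent to the following rsync invocation; the Synology hostname, user, and paths are assumptions –

# Push changed files from the TrueNAS dataset to the Synology over SSH
# (hostname, user, and paths are assumptions)
rsync -az -e ssh /mnt/zfs1/myshare/ backupuser@synology1:/volume1/truenas-backup/myshare/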

Here’s another procedure for setting this up that looks pretty good. I have not tried this one.

Data Protection and Integrity

We are using the ZFS file system, pool scrubbing, S.M.A.R.T. tests, snapshots, and rsync replication to protect our data. Here's an overview of our final data integrity setup –

TrueNAS Data Integrity Configuration

File Browser App

We installed the File Browser App from the TrueNAS catalog. This app provides a web GUI that enables file and directory manipulation on our NAS. The following videos will help to get File Browser set up –

The key to getting this working without permission errors is to set the ACLs on each of the datasets that are exposed in shares and the File Browser (if used) as follows:

Dataset ACLs for Shares and the File Browser
