Category Archives: Storage

For information related to storage and NAS devices.

Raspberry Pi NAS 2

Raspberry Pi NAS 2

We’ve built a second NAS and Docker environment using another Raspberry Pi 5. This NAS features four 2.5 in 960 GB SSD drives in a RAID-0 array for fast shared storage on our network.

Raspberry Pi NAS Hardware Components

Raspberry Pi 5 Single Board Computer

We use the following components to build our system –

I had five 960 GB 2.5″ SSD drives left over from a previous project that were available for this build.

The following video covers the hardware assembly –

We used a 2.5 GbE USB adapter to create a 2.5 GbE network interface on our NAS.
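
If you want to confirm that the adapter negotiated a 2.5 Gbps link, a quick check from the shell works. This is a sketch; the interface name eth1 is an assumption, so check yours with ip link first –

sudo apt install ethtool
ip -br link
sudo ethtool eth1 | grep -i speed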

2.5 GbE USB Adapter

The configuration of the Fan/Display HAT top board is covered here.

Fan/Display Top Board

This board comes as a kit that includes spacers for mounting it on top of the Raspberry Pi 5/SSD Drive Interface HAT in the base kit.

Software Components and Installation

We installed the following software on our system to create our NAS –

CasaOS

CasaOS Web UI

CasaOS is included to add a very nice GUI for managing our NAS and its Docker applications. Here’s a useful video on how to install CasaOS on the Raspberry Pi –

Installation

The first step is to install the 64-bit Lite version of Raspberry Pi OS. This is done by first installing a full desktop version on a flash card and then using Raspberry Pi Imager to install the Lite version on our SSD boot drive. We did this on our macOS computer using the USB-to-SATA adapter and balenaEtcher.

We used the process covered in the video above to install CasaOS.
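
For reference, the CasaOS project provides a one-line install script; this is our understanding of the command the video uses (verify the current command on the CasaOS site before running it) –

curl -fsSL https://get.casaos.io | sudo bash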

Creating a RAID

We chose to create a RAID-0 array using the four SSD drives in our NAS. Our experience with SSDs in a light-duty application like ours suggests that this approach will be reasonably reliable. We also back up the contents of the NAS daily via rsync to one of our Synology NAS drives.

RAID-0 Storage Array

CasaOS does not provide support for RAID, so this is done using the underlying Linux OS. The process is explained here.
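
As a sketch of that process, the array can be built with mdadm along these lines. The device names and the /DATA mount point are assumptions – confirm yours with lsblk before running anything –

# Create a RAID-0 array from the four data SSDs
sudo apt install mdadm
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Format the array and mount it where the shares will live
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /DATA

# Persist the array and the mount across reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0 /DATA ext4 defaults 0 0' | sudo tee -a /etc/fstab
sudo update-initramfs -u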

File Share

CasaOS makes all of its shares public and does not password-protect shared folders. While this may be acceptable for home use where the network is isolated from the public Internet, it certainly is not a good security practice.

Fortunately, the Debian Linux-derived distro we are running includes Samba file share support, which we can use to protect our shares properly. This article explains the basics of how to do this.

Here’s an example of the information in smb.conf for one of our shares –

[Public]
    path = /DATA/Public
    browsable = yes
    writeable = yes
    create mask = 0644
    directory mask = 0755
    public = no
    comment = "General purpose public share"

You will also need to create a Samba user for your Samba shares to work. Samba user privileges can be added to any of the existing Raspberry Pi OS users with the following command –

# sudo smbpasswd -a <User ID to add>

It’s also important to correctly set the shared folder’s owner, group, and modes.
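
For example, something along these lines works for the Public share above; the pi user and group are placeholders, so use whatever owner you want for the share –

# Give the share a sensible owner and match the masks in smb.conf
sudo chown -R pi:pi /DATA/Public
sudo find /DATA/Public -type d -exec chmod 755 {} \;
sudo find /DATA/Public -type f -exec chmod 644 {} \;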

We need to restart the Samba service anytime configuration changes are made. This can be done with the following command –

# sudo systemctl restart smbd

Samba File Server

Samba File Server

We have quite a bit of high-speed SSD storage available on the pve1 server in our Proxmox cluster. We made this storage available as a NAS drive using the Turnkey Samba File Server.

Installing the Turnkey Samba File Server

We installed the Turnkey File Server in an LXC container that runs on our pve1 server. This LXC will not be movable, as it is associated with SSD disks that are only available on pve1. The first step is to create a ZFS file system (zfsb) on pve1 to hold the LXC boot drive and storage.
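
A minimal sketch of that step, assuming a mirrored pair of SSDs – the pool layout and device paths are assumptions, so use the /dev/disk/by-id names for your own drives –

# Create the zfsb pool on the SSDs dedicated to the file server
zpool create zfsb mirror /dev/disk/by-id/<ssd-1> /dev/disk/by-id/<ssd-2>

# Register it with Proxmox so it can hold LXC root disks and mount points
pvesm add zfspool zfsb_mp --pool zfsb --content rootdir,images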

The video below explains the procedure used to set up the File Server LXC and configure Samba shares.

The LXC container for our File Server was created with the following parameters (a rough command-line equivalent is sketched after the list) –

  • 2 CPUs
  • 1 GB Memory
  • 8 GB Boot Disk in zfsb_mp
  • 8 TB Share Disk in zfsb_mp (mounted as /mnt/shares with PBS backups enabled.)
  • High-speed Services Network, VLAN Tag=10
  • The container is unprivileged
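
For reference, a rough pct equivalent of these settings looks like this; the container ID, template name, and bridge are placeholders, so adjust them for your environment –

pct create 110 local:vztmpl/debian-11-turnkey-fileserver_17.1-1_amd64.tar.gz \
    --hostname nas-10 \
    --cores 2 --memory 1024 \
    --rootfs zfsb_mp:8 \
    --mp0 zfsb_mp:8192,mp=/mnt/shares,backup=1 \
    --net0 name=eth0,bridge=vmbr0,tag=10,ip=dhcp \
    --unprivileged 1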

File Server LXC Configuration

The following steps were performed to configure our File Server –

  • Set the system name to nas-10
  • Configured postfix to forward email
  • Set the timezone
  • Installed standard tools
  • Updated the system via apt update && apt upgrade
  • Installed SSL certificates using a variation of the procedures here and here.
  • Set up Samba users, groups, and shares per the video above

Backups

Our strategy for backing up our file server is to run an rsync job via cron inside the File Server LXC container. The rsync job copies the contents of our file shares to one of our NAS drives. The NAS drive then implements a 3-2-1 backup strategy for our data.
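
A minimal sketch of such a job, assuming a nightly run at 01:00; the destination host and paths are placeholders –

# /etc/cron.d/fileserver-backup -- mirror the shares to a Synology NAS nightly
0 1 * * * root rsync -a --delete /mnt/shares/ backup@<nas-hostname>:/volume1/FileServerBackup/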

Raspberry Pi NAS

Raspberry Pi NAS

We’ve built a NAS and Docker environment using a Raspberry Pi 5. Our NAS features a 2 TB NVMe SSD drive for fast shared storage on our network.

Raspberry Pi NAS Hardware Components

Raspberry Pi 5 Single Board Computer

We use the following components to build our system –

Here’s a photo of the completed hardware assembly –

Pi NAS Internals

Software Components and Installation

We installed the following software on our system to create our NAS –

CasaOS

CasaOS GUI

CasaOS is included to add a very nice GUI for managing our NAS and its Docker applications. Here’s a useful video on how to install CasaOS on the Raspberry Pi –

Installation

The first step is to install the 64-bit Lite version of Raspberry Pi OS. We did this by first installing a full desktop version on a flash card and booting from it, then using Raspberry Pi Imager to install the Lite version on our NVMe SSD.

After removing the flash card and booting from the NVMe SSD, the following configuration changes were made –

  • Set the system name to NAS-11
  • Enabled SSH
  • Set our user ID and password
  • Applied all available updates
  • Updated /boot/firmware/config.txt to enable PCIe Gen3 operation with our SSD (see the snippet below)
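
For reference, the relevant config.txt lines look like this; this is our understanding of the standard Raspberry Pi 5 settings, so double-check them against the official documentation –

# /boot/firmware/config.txt -- enable the external PCIe connector and Gen3 speeds
dtparam=pciex1
dtparam=pciex1_gen=3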

We used the process covered in the video above to install CasaOS.

CasaOS makes all of its shares public and does not password-protect shared folders. While this may be acceptable for home use where the network is isolated from the public Internet, it certainly is not a good security practice.

Fortunately, the Debian Linux-derived distro we are running includes Samba file share support, which we can use to protect our shares properly. This article explains the basics of how to do this.

Here’s an example of the information in smb.conf for one of our shares –

[Public]
    path = /DATA/Public
    browsable = yes
    writeable = yes
    create mask = 0644
    directory mask = 0755
    public = no
    comment = "General purpose public share"

You will also need to create a Samba user for your Samba shares to work. Samba user privileges can be added to any of the existing Raspberry Pi OS users with the following command –

# sudo smbpasswd -a <User ID to add>

It’s also important to correctly set the shared folder’s owner, group, and modes.

We need to restart the Samba service anytime configuration changes are made. This can be done with the following command –

# sudo systemctl restart smbd

High-Availability Storage Cluster

Synology HA Storage Cluster

We are building a High-Availability (HA) Storage Cluster to complement our Proxmox HA Server Cluster. Synology has a nice HA solution that we can use for this. To use Synology’s HA solution, one must have the following:

  • Two identical Synology NAS devices (we are using a pair of rack-mounted Synology RS1221+ units)
  • Both NAS devices must have identical memory and disk configurations.
  • Both NAS devices must have at least two network interfaces available (we are using dual 10 GbE network cards in both of our NAS devices)

The two NAS devices work in an active/standby configuration and present a single IP interface for access to storage and administration.

Synology HA Documentation

Synology provides good documentation for their HA system. Here are some useful links:

The video above provides a good overview of Synology HA and how to configure it.

Storage Cluster Hardware

Synology RS1221+ NAS

We are using a pair of Synology RS1221+ rack-mounted NAS servers. Each one is configured with the following hardware options:

Networking

Our Proxmox Cluster connects to our HA Storage Cluster via ethernet. We store the virtual disk drives for the VMs and LXCs in this cluster on our HA Storage Cluster, so maximizing the speed of these connections and minimizing their latency is important to the overall performance of our workloads.

Each node in our Proxmox Cluster has a dedicated high-speed connection (25 GbE for pve1, 10 GbE for pve2 and pve3) to a dedicated Storage VLAN. These connections are made through a UniFi switch – an Enterprise XG 24. This switch is supported by a large UPS that provides battery backup power for our Networking Rack.

Ubiquiti Enterprise XG 24 Switch

This approach minimizes latency because the cluster’s storage traffic is handled entirely by a single switch.

Ideally, we would have a pair of these switches and redundant connections to our Proxmox and HA Storage clusters to maximize reliability. While this would be a nice enhancement, we have chosen to use a single switch for cost reasons.

The NAS drives in our HA Storage Cluster are configured with interfaces directly on our Storage VLAN. This ensures that the nodes in our Proxmox cluster can access the HA Storage Cluster without a routing hop through our firewall. We also set the MTU for this network to 9000 (Jumbo Frames) to minimize packet overhead.
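
A quick way to confirm that jumbo frames work end-to-end is to ping the NAS from a Proxmox node with fragmentation disabled; the IP address is a placeholder, and 8972 bytes of payload plus the IP/ICMP headers equals 9000 –

ping -M do -s 8972 -c 4 <storage-vlan-ip-of-nas>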

Storage Design

Each Synology RS1221+ in our cluster has eight 960 GB Enterprise SSDs. The performance of the resulting storage system is important as we will be storing the disks for the VMs and LXCs in our Proxmox Cluster on our HA Storage System. The following are the criteria we used to select a storage pool configuration:

  • Performance – we want to be able to saturate the 10 GbE interfaces to our HA Storage Cluster
  • Reliability – we want to be protected against single-drive failures. We will keep spare drives and use backups to manage the chance of simultaneous multiple-drive failures.
  • Storage Capacity – we want to use the available SSD storage capacity efficiently.

We considered using either a RAID-10 or RAID-5 configuration.

Storage Devices – 960 GB Enterprise SSDs

Toshiba 960 GB SSD Performance

Our SSD drives are enterprise models with good throughput and IO/s (IOPs) performance.

960 GB SSD Reliability Features

They also offer desirable reliability features, including good write endurance and MTBF numbers. Our drives also include sudden power-off protection to maintain data integrity in the event of a power failure that our UPS system cannot cover.

Performance Comparison – RAID-10 vs. RAID-5

We used a RAID performance calculator to estimate the performance of our storage system. Based on actual runtime data from the VMs and LXCs running in Proxmox, our IO workload is almost completely write-dominated. This is probably because read caching on our servers satisfies most read operations from memory.

The first option we considered was RAID-10. The estimated performance for this configuration is shown below.

RAID-10 Throughput Performance

As you can see, this configuration’s throughput will more than saturate our 10 GbE connections to our HA Storage Cluster.

The next option we considered was RAID-5. The estimated performance for this configuration is shown below.

RAID-5 Throughput Performance

As you can see, throughput takes a substantial hit due to the need to generate and store parity data on every write. Even so, the RAID-5 configuration should still be able to saturate our 10 GbE connections to the Storage Cluster.

The result is that, given our 10 GbE connections to the Storage Cluster, the RAID-10 and RAID-5 configurations will provide effectively the same level of performance.

Capacity Comparison – RAID-10 vs. RAID-5

The next step in our design process was to compare the usable storage capacity between RAID-10 and RAID-5 using Synology’s RAID Calculator.

RAID-10 vs. RAID-5 Usable Storage Capacity

Not surprisingly, the RAID-5 configuration creates roughly twice as much usable storage when compared to the RAID-10 configuration.

Chosen Configuration

We decided to format our SSDs as a Btrfs storage pool configured as RAID-5. We chose RAID-5 for the following reasons:

  • A good balance between write performance and reliability
  • Efficient use of available SSD storage space
  • Acceptable overall reliability (protection against single-disk failures) given the following:
    • Our storage pools are fully redundant between the primary and secondary NAS pools
    • We run regular automatic snapshots, replications, and backups via Synology’s Hyper Backup as well as server-side backups via Proxmox Backup Server.

The following shows the expected IO/s (IOPs) for our storage system.

RAID-5 IOPs Performance

This level of performance should be more than adequate for our three-node cluster’s workload.

Dataset / Share Configuration

The final dataset format that we will use for our vdisks is TBD at this point. We plan to test the performance of both iSCSI LUNs and NFS shares. If these perform roughly the same for our workloads, we will use NFS to gain better support for snapshots and replication features. At present, we are using an NFS dataset to store our vdisks.

HA Configuration

Configuring the pair of RS1221+ NAS servers for HA was straightforward. Only minimal configuration is needed on the secondary NAS to get its storage and network configurations to match the primary NAS. The process that enables HA on the primary NAS will overwrite all of the settings on the secondary NAS.

Here are the steps that we used to do this.

  • Install all of the upgrades and SSDs in both units
  • Connect both units to our network and install an ethernet connection between the two units for heartbeats and synchronization
  • Install DSM on each unit and set a static IP address for the network-facing ethernet connections (we do not set IPs for the heartbeat connections – Synology HA takes care of this)
  • Configure the network interfaces on both units to provide direct interfaces to our Storage VLAN (see the previous section)
  • Make sure that the MTU settings are identical on each unit. This includes the MTU setting for unused ethernet interfaces. We had to edit the /etc/synoinfo.conf file on each unit to set the MTU values for the inactive interfaces.
  • Ensure both units are running up-to-date versions of the DSM software
  • Configure the pair for HA (see the documentation above)
  • Complete the configuration of the cluster pair, including –
    • Shares
    • Backups
    • Snapshots and Replication
    • Install Apps

The following shows the completed configuration of our HA Storage Cluster.

Completed HA Cluster Configuration

The cluster uses a single IP address to present a GUI that configures and manages the primary and secondary NAS units as if they were a single NAS. The same IP address always points to the active NAS for file sharing and iSCSI I/O operations.

Voting Server

A voting server avoids split-brain scenarios where both units in the HA cluster try to act as the master. Any server that is always accessible via ping to both NAS drives in the cluster can serve as a Voting Server. We used the gateway for the Storage VLAN where the cluster is connected for this purpose.

Performance Benchmarking

We used the ATTO Disk Benchmarking Tool to perform benchmark tests on the complete HA cluster. The benchmarks were run from an M2 Mac Mini running macOS, which used an SMB share to access the Storage Cluster over a 10 GbE connection on the Storage VLAN.

Storage Cluster Benchmark Configuration

The following are the benchmark results –

Storage Cluster Throughput Benchmarks

The Storage Cluster’s performance is quite good, and the 10 GbE connection is saturated for 128 KB writes and larger. The slightly lower read throughput results from a combination of our SSDs’ wire-speed performance and the additional latency on writes due to the need to copy data from the primary NAS to the secondary NAS.

Storage Cluster IOPs Benchmarks

IOs/sec (IOPs) performance is important for the virtual disks used by VMs and LXC containers, as they frequently perform smaller writes.

We also ran benchmarks from a VM running Windows 10 in our Proxmox Cluster. These benchmarks benefit from a number of caching and compression features in our architecture, including:

  • Write Caching with the Windows 10 OS
  • Write Caching with the iSCSI vdisk driver in Proxmox
  • Write Caching on the NAS drives in our Storage Cluster

Windows VM Disk Benchmarks

The overall performance figures for the Windows VM benchmark exceed the capacity of the 10 GbE connections to the Storage Cluster and are quite good. Also, the IOPs performance is close to the specified maximum performance values for the RS1221+ NAS.

Windows VM IOPs Benchmarks

Failure Testing

The following scenarios were tested under a full workload –

  • Manually switch between the active and standby NAS devices
  • Simulate a network failure by disconnecting the primary NAS ethernet cable
  • Simulate an active NAS failure by pulling power from the primary NAS
  • Simulate a disk failure by pulling a disk from the primary NAS pool

In all cases, our system failed over within 30 seconds or less and continued handling the workload without error.

Synology NAS

Main NAS Storage Rack – Synology RS2421RP+ and RX1217RP+ NAS Drives

We use a variety of NAS drives for storage in our Home Lab.

| Device | Model | Storage Capacity | RAID Level | Purpose | Network Interface |
| --- | --- | --- | --- | --- | --- |
| NAS-1 | Synology RS2421RP+/RX1223RP | 272 TB HDD | RAID-6 | Backups and Snapshot Replication | Dual 10 GbE Optical |
| NAS-2 | Synology RS2421RP+ | 145 TB HDD | RAID-6 | Video Surveillance and Backups | Dual 10 GbE Optical |
| NAS-3 | Synology RS1221+/RX418+ | 112 TB HDD/SSD | RAID-5 & 6 | Media Storage and DVR | 10 GbE Optical |
| NAS-4 | Synology RS2421RP+/RX1223RP | 290 TB HDD | RAID-6 | Backups and Snapshot Replication | Dual 10 GbE Optical |
| NAS-5 | Synology FS2017+ | 17 TB SSD | RAID F1 | High-Speed Storage for Video Editing & TimeMachine Backups | 25 GbE Optical |
| NAS-6 | Synology DS1621xs+/DX517 | 116 TB HDD | RAID-5 | General Purpose Storage | Dual 10 GbE Optical |
| NAS-7 | Dual Synology RS1221+ in High-Availability configuration | 24 TB SSD | RAID-5 | VM and Docker Volumes | 10 GbE Interface |
| NAS-10 | Dell Server-based File Server using ZFS | 23 TB SAS SSD | RAID-10 | High-Speed Scratch Storage | 25 GbE Optical |
| NAS-11 | Raspberry Pi NAS | 2 TB NVMe | n/a | Experimentation | 2.5 GbE |
| NAS-12 | Raspberry Pi NAS | 3.5 TB SSD | RAID-0 | Experimentation | 2.5 GbE |

The table above lists all of the NAS drives in our Home Lab. Most of our production storage is implemented using Synology NAS Drives. Our total storage capacity is just over 1 Petabyte. Our setup also provides approximately 70 TB of high-speed solid-state storage.

Systems with Dual Optical interfaces are configured as LACP LAGs to increase network interface capacity and improve reliability.

Hardware and Power

We have moved to mostly rack-mounted NAS drives to save space and power. The picture above shows one of our racks which contains Synology NAS drives. We have also opted for Synology Rack Mount systems with redundant power supplies to improve reliability. Our racks include dual UPS devices to further enhance reliability.

Basic Setup and Configuration

We cover some details of configuring our Synology NAS devices running DSM 7.2 here.

Multiple VLANs and Bonds on Synology NAS

Our NAS devices use pairs of ethernet connections configured as 802.3ad LACP bonded interfaces. This approach improves reliability and enhances interface capacity when multiple sessions are active on the same device. DSM supports LACP-bonded interfaces on a single VLAN. This can be easily configured with the DSM GUI.

A few of our NAS drives benefit from multiple interfaces on separate VLANs. This avoids situations where high-volume IP traffic needs to be routed between VLANs for applications such as playing media and surveillance camera recording. Setting this up requires accessing and configuring DSM’s underpinning Linux environment via SSH. The procedure for setting this up is explained here and here.

Creating a RAM Disk

You can create a RAM disk on your Synology NAS by creating a mount point in one of your shares and installing a shell script that runs when the NAS boots to create and mount the RAM disk. For example, if your mount point is a folder named tmp inside a share named Public on volume1, then the following script –

#!/bin/sh
mount -t tmpfs -o size=50% ramdisk /volume1/Public/tmp

will create a RAM disk that uses 50% of the available RAM on your NAS and is accessible as /volume1/Public/tmp by packages running on your NAS. The RAM disk will be removed when you reboot your NAS so you’ll need to run the command above each time your NAS boots. This can be scheduled to run on boot using the Synology Task Scheduler.
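
After a reboot, you can confirm that the RAM disk came up with something like –

df -h /volume1/Public/tmp
mount | grep ramdisk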

Proxmox Backup Server

This page covers the installation of the Proxmox Backup Server (PBS) in our HomeLab. We run the PBS in a VM on our server and store backups in shared storage on one of our NAS drives.

We are running a Proxmox Test Node and a Raspberry Pi Proxmox Cluster that can access our Proxmox Backup Server (PBS). This approach enables backups and transfers of VMs and LXCs between our Production Proxmox Cluster, our Proxmox Test Node, and Raspberry Pi Proxmox Cluster.

Proxmox Backup Server Installation

We used the following procedure to install PBS on our server.

PBS was created using the recommended VM settings in the video. The VM is created with the following resources:

  • 4 CPUs
  • 4096 MB Memory
  • 32 GB SSD Storage (Shared PVE-storage)
  • HS Services Network

Once the VM is created, the next step is to run the PBS installer.

After the PBS install is complete, PBS is booted, the QEMU Guest Agent is installed, and the VM is updated using the following commands –

# apt update
# apt upgrade
# apt-get install qemu-guest-agent
# reboot

PBS can now be accessed via the web interface using the following URL –

https://<PBS VM IP Address>:8007

Create a Backup Datastore on a NAS Drive

The steps are as follows –

  • Install CIFS utils
# Install the CIFS utilities package in the PBS VM
apt install cifs-utils
  • Create a mount point for the NAS PBS store
mkdir /mnt/pbs-store
  • Create a Samba credentials file to enable logging into NAS share
vi /etc/samba/.smbcreds
...
username=<NAS Share User Name>
password=<NAS Share Password>
...
chmod 400 /etc/samba/.smbcreds
  • Test mount the NAS share in PBS and make a directory to contain the PBS backups
mount -t cifs \
    -o rw,vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup \
    //<nas-#>.anita-fred.net/PBS-backups \
    /mnt/pbs-store
mkdir /mnt/pbs-store/pbs-backups
  • Make the NAS share mount permanent by adding it to /etc/fstab
vi /etc/fstab
...after the last line add the following line
# Mount PBS backup store from NAS
//nas-#.anita-fred.net/PBS-backups /mnt/pbs-store cifs vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup,defaults 0 0
  • Create a datastore to hold the PBS backups in the Proxmox Backup Server as follows. The datastore will take some time to create (be patient).
PBS Datastore Configuration
PBS Datastore Prune Options
  • Add the PBS store as storage at the Proxmox datacenter level. Use the information from the PBS dashboard to set the fingerprint.
PBS Storage in Proxmox VE
  • The PBS-backups store can now be used as a target in Proxmox backups. NOTE THAT YOU CANNOT BACK UP THE PBS VM TO PBS-BACKUPS.

| Proxmox Cluster/Node | PBS Datastore | Purpose |
| --- | --- | --- |
| Production Cluster | PBS-backups | Backups for 3-node production cluster |
| Raspberry Pi Cluster | RPI-backups | Backups for 3-node Raspberry Pi Cluster |
| NUC Test Node | NUC-backups | Backups for our Proxmox Test Node |

As the table above indicates, additional datastores are created for our Raspberry Pi Cluster and our NUC Proxmox Test Node.
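
These can be created in the PBS GUI as shown above, or from the PBS shell with proxmox-backup-manager; this is a sketch, and the directory paths are placeholders (each datastore needs its own directory) –

proxmox-backup-manager datastore create RPI-backups /mnt/pbs-store/rpi-backups
proxmox-backup-manager datastore create NUC-backups /mnt/pbs-store/nuc-backups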

Setup Boot Delay

The network share for the Proxmox Backup store needs time to mount before the Backup Server starts on boot. This can be set for each node under System/Options/Start on Boot delay. A 30-second delay seems to work well.

Setup Backup, Pruning, and Garbage Collection

The overall schedule for Proxmox backup operations is as follows:

  • 02:00 – Run a PVE Backup on the PBS Backup Server VM from our Production Cluster (run in suspend mode; stop mode causes problems)
  • 02:30 – Run PBS Backups in all Clusters/Nodes on all VMs and LXCs EXCEPT for the PBS Backup Server VM
  • 03:00 – Run Pruning on all PBS datastores
  • 03:30 – Run Garbage Collection on all PBS datastores
  • 05:00 – Verify all backups in all PBS datastores

Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, modify /etc/chrony/chrony.conf to use our servers for the pool. This must be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.
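
The change amounts to replacing the default Debian pool entry with our local servers; this is a sketch, and the server names are placeholders for your own NTP servers –

# /etc/chrony/chrony.conf
#pool 2.debian.pool.ntp.org iburst
server ntp1.<your-domain> iburst
server ntp2.<your-domain> iburst

# Apply the change
systemctl restart chrony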

Backup Temp Directory

Proxmox backups use vzdump to create compressed backups. By default, backups use /var/tmp, which lives on the boot drive of each node in a Proxmox Cluster. To ensure adequate space for vzdump and reduce the load on each server’s boot drive, we have configured a temp directory on the local ZFS file systems on each of our Proxmox servers. The tmp directory configuration needs to be done on each node in the cluster (details here). The steps to set this up are as follows:

# Create a tmp directory on local node ZFS stores
# (do this once for each server in the cluster)
cd /zfsa
mkdir tmp

# Turn on and verify ACL for ZFSA store
zfs get acltype zfsa
zfs set acltype=posixacl zfsa
zfs get acltype zfsa

# Configure vzdump to use the ZFS tmp dir
# add/set tmpdir as follows 
# (do on each server)
cd /etc
vi vzdump.conf
tmpdir: /zfsa/tmp
:wq

Welcome To Our Home Lab

Home Network Dashboard
Home Network Dashboard

This site is dedicated to documenting the setup, features, and operation of our Home Lab. Our Home Lab consists of several different components and systems, including:

  • A high-performance home network with redundant Internet connections
  • A storage system that utilizes multiple NAS devices
  • Multiple enterprise-grade servers in a high-availability cluster
  • Applications, services, and websites
  • Powered via dual-UPS protected power feeds and a backup generator

Home Network

Home Network Core, High-Availability Storage, and Secondary Server Rack

Our Home Network uses a two-tiered structure with a core based on high-speed, 25 GbE-capable aggregation switches and optically connected edge switches. We use Ubiquiti UniFi equipment throughout. We have installed multiple OM4 multi-mode fiber links from the core to each room in our house. The speed of these links ranges from 1 Gbps to 25 Gbps, with most connections running as dual-fiber LACP LAG links.

We have redundant Internet connections which include 1 Gbps optical fiber and a 400 Mbps/12 Mbps cable modem service.

Our Network Rack also includes two SuperMicro Servers and a pair of Synology NAS drives in a high-availability configuration. These drives provide solid-state storage for Proxmox Virtual Machine disks and Docker volumes.

Main Server and Storage

Main Server Rack and NAS Storage Rack

Our Server Rack houses our main Dell Server and several of our Synology NAS Drives. It features redundant UPS power and includes rack-mounted Raspberry Pi systems which provide several different functions in our Home Lab.

Our servers run Proxmox in a high-availability configuration. In total, we have 104 CPUs and 1 TB of RAM available in our primary Proxmox cluster.

This rack includes an all-SSD, high-speed NAS that we use for video editing. It also includes a NAS that stores our video and audio media collection and provides access to this content throughout our home and on the go when we travel.

High Capacity Storage System

Main NAS Storage Rack

Our NAS Rack provides high-capacity storage via several Synology NAS Drives. It features redundant UPS power and includes additional rack-mounted Raspberry Pi systems which provide several different functions in our Home Lab. This rack also houses our Raspberry Pi NAS and NAS 2 systems.

Our total storage capacity is just over 1 Petabyte. Our setup also provides approximately 70 TB of high-speed solid-state storage.

Power Over Ethernet (PoE)

Main Power Over Ethernet (PoE) Switch

We make use of Power Over Ethernet (PoE) switches at many edge locations in our network to power devices through their ethernet cables.

The switch shown above is located centrally where all of the CAT6 ethernet connections in our home terminate. It powers our Surveillance Cameras, IP Telephones, Access Points, etc.

Home Media System

Our Home Theater

We use our Home Network and NAS System to provide a Home Media System. Our Media System sources content from streaming services as well as video and audio content stored on our Media NAS drive, and it enables this content to be viewed from any TV or Smart Device in our home. We can also view our content remotely via the Internet when traveling or in our cars.

Surveillance System

Synology Surveillance Station

We use Synology Surveillance Station running on one of our NAS drives to support a variety of IP cameras throughout our home. This software uses the host NAS drive for storing recordings and provides image recognition and other security features.

Telephone System

Telephone System Dashboard

We use Ubiquiti UniFi Talk to provide managed telephone service within our home.

Ubiquiti IP Telephone

This system uses PoE-powered IP Telephones which we have installed throughout our home.

Applications, Services, and Websites

We are hosting several websites, including:

Set-up information for our self-hosted sites may be found here.