Tag Archives: Network

WireGuard VPN

Our UniFi system can support several different VPN configurations. We use the VPN server built into our UniFi Dream Machine SE together with WireGuard clients on our iPhones, iPads, macOS laptops, and Windows laptops. The UniFi system makes setting up our WireGuard VPNs simple.

The following video explains the various VPN options and how to configure them.


5 Types of VPNs on Unifi and How To Configure Them

We use DDNS to ensure that our domains point to our router when our ISPs change our IP address. Once the clients are installed, they use these domains so that they always point at our network’s current IP.

Iperf3


Iperf3 is a common tool for network performance testing. We run an Iperf3 server in a Docker container. You can find information on how to set up and use Iperf3 here.
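As an illustration, an iperf3 server can be run in Docker roughly as follows; the image name and port are common defaults and may differ from our exact setup.

# Run an iperf3 server in a container on the default port 5201
docker run -d --name iperf3-server --restart unless-stopped \
  -p 5201:5201 networkstatic/iperf3 -s

# From another machine on the network, run a 30-second test against the server
iperf3 -c <server-ip> -t 30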

Pihole with a Cloudflare Tunnel

Pihole in Docker

We are running three Pihole installations, which enable load balancing and high availability for our DNS services. We also use a Cloudflare encrypted tunnel to protect information in external DNS queries via the Internet.

Our three instances are deployed on a combination of Docker host VMs in our Proxmox Cluster and a stand-alone Raspberry Pi Docker host.

Deploy Pihole with a Cloudflare Tunnel

Our software service stack for our Docker PiHole installs includes the following applications:

Our combined stack was created using information in the following video:


Deploy PiHole with Cloudflare Tunnel in Docker
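For reference, the core of such a stack looks roughly like the docker run sketch below: a cloudflared container acting as a local DNS-over-HTTPS proxy, with Pihole pointed at it as its upstream resolver. The container names, network, ports, and timezone are illustrative assumptions rather than our exact stack.

# Shared Docker network so Pihole can reach cloudflared by name (names are examples)
docker network create pihole-net

# cloudflared acting as a local DNS-over-HTTPS proxy on port 5053
docker run -d --name cloudflared --network pihole-net --restart unless-stopped \
  cloudflare/cloudflared:latest proxy-dns --port 5053 --address 0.0.0.0

# Pihole using the cloudflared container as its upstream DNS server
docker run -d --name pihole --network pihole-net --restart unless-stopped \
  -p 53:53/tcp -p 53:53/udp -p 8080:80/tcp \
  -e TZ=America/New_York -e PIHOLE_DNS_='cloudflared#5053' \
  pihole/pihole:latest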

Ubuntu Port 53 Fix

Ubuntu VMs include a DNS caching stub listener (systemd-resolved) on port 53, which prevents Pihole from binding to that port. To fix this, run the commands at this link on the host Ubuntu VM before installing the Pihole and Cloudflare Tunnel containers.
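For reference, the linked fix generally amounts to disabling the systemd-resolved stub listener so port 53 is free; the commands below are a sketch of that approach, so verify them against the linked instructions.

# Disable the systemd-resolved stub listener so port 53 is free for Pihole
sudo sed -i 's/#DNSStubListener=yes/DNSStubListener=no/' /etc/systemd/resolved.conf
# Point /etc/resolv.conf at the full resolver configuration
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf
# Restart systemd-resolved and confirm nothing is listening on port 53
sudo systemctl restart systemd-resolved
sudo ss -lntup | grep ':53 ' || echo 'port 53 is free'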

Scheduled Block List Updates

We keep our block lists up to date by doing a Gravity pull, which we run daily via a cron job. This can be configured on the RPi host using the following commands –

# Edit the user crontab
sudo crontab -u <user-id> -e

# Add the following to the user crontab (replace min and hr with the desired run time)
min hr * * * su ubuntu -c "/usr/bin/docker exec pihole pihole -g" | /usr/bin/mailx -s "RPi Docker - Gravity Pull" [email protected]

Cloudflare DDNS


We use Cloudflare to host our domains and the associated external DNS records. Cloudflare provides excellent security and scaling features and is free for our use cases.

We do not have a static IP address from either of our ISPs. This, coupled with the potential of a failover from our primary to our secondary ISP, requires us to use DDNS to keep the IPs for our domains up to date in Cloudflare’s DNS.

We run a Docker container for each domain that periodically checks to see if our external IP address has changed and updates our DNS records in Cloudflare. The repository for this container can be found here.

Deploying the DDNS update container is done via a simple Docker Compose file –

version: '2'
services:
  cloudflare-ddns:
    image: oznu/cloudflare-ddns:latest
    restart: unless-stopped
    container_name: your-container-name
    environment:
        - API_KEY=YOUR-CF-API-KEY
        - ZONE=yourdomain.com
        - PROXIED=true
        # Runs every 5 minutes
        - CRON=*/5 * * * *

You’ll need a separate container for each DNS Zone you host on Cloudflare.
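To bring a DDNS container up and confirm it is working, something like the following should do (the container name matches the example above):

# Start the DDNS container defined in the compose file
docker compose up -d
# Watch the logs to confirm the Cloudflare record checks and updates are running
docker logs -f your-container-name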

Docker Networking

Docker can create its own internal networks. There are multiple network types (bridge, host, overlay, macvlan, and none), so this aspect of Docker can be confusing.

Docker Networking Types

The following video explains the Docker networking options and provides examples of their creation and use.


Docker Networking Explained
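As a quick illustration, the commands below create a user-defined bridge network and attach a container to it; the network and container names are just examples.

# Create a user-defined bridge network (the name is an example)
docker network create --driver bridge lab-net
# List the available networks, including the default bridge, host, and none networks
docker network ls
# Run a container attached to the new network
docker run -d --name web --network lab-net nginx
# Inspect the network to see its subnet and attached containers
docker network inspect lab-net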

Raspberry Pi – Docker and PiHole

PiHole in Docker

We have set up a Raspberry Pi 5 system to run a third PiHole DNS server in our network. This ensures that DNS services are available even if our other servers are down.

To make this PiHole easy to manage, we configured our Raspberry Pi to run Docker. This enables us to manage the PiHole installation on the Pi from the Portainer instance used to manage our systems running docker.

We are also running the Traefik reverse proxy. Traefik is used to provide an SSL certificate for our PiHole.

Raspberry Pi Hardware

Raspberry Pi Docker Host

Our docker host consists of a PoE-powered Raspberry Pi 5 system. The hardware components used include:

OS Installation

We are running the 64-bit Lite version (no GUI desktop) of Raspberry Pi OS. The configuration steps on the initial boot include:

  • Setting the keyboard layout to English (US)
  • Setting a unique user name
  • Setting a strong password

Once the system was booted, we used sudo raspi-config to set the following additional options (a non-interactive sketch of some of these steps follows the list):

  • Update raspi-config to the latest version
  • Set the system’s hostname
  • Enable SSH
  • Set the timezone
  • Configure predictable network interface names
  • Expand the filesystem to use all of the space on our flash card
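The same settings can also be applied from a script. The raspi-config nonint calls below are our best-guess non-interactive equivalents (the hostname is just an example), so verify the option names on your OS release:

# Non-interactive raspi-config examples (verify option names on your release)
sudo raspi-config nonint do_hostname rpi-docker-01   # example hostname
sudo raspi-config nonint do_ssh 0                    # 0 = enable the SSH server
sudo raspi-config nonint do_expand_rootfs            # grow the root filesystem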

Next, we did a sudo apt update && sudo apt dist-upgrade to update our system and rebooted.

The RPi 5 works well with the PoE HAT we are using; however, the RPi 5 booted up with its USB interfaces in low-power mode. The PoE HAT provides enough power to enable USB boot, so we added the following to bring our RPi up in full-power USB mode:

$ sudo vi /boot/firmware/config.txt

[all]
# Enable RPi 5 to provide full power to USB
usb_max_current_enable=1
:wq

# After rebooting, check USB power mode
$ vcgencmd get_config usb_max_current_enable
usb_max_current_enable=1

Finally, we created and ran a script to install our SSH keys on the system, and we verified that SSH access was working. With this done, we ran our Ansible configuration script to install the standard set of tools and utilities that we use on our Linux systems.
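Our key-installation script is specific to our environment, but the basic step amounts to something like the following (the user, host, and key path are examples):

# Copy an existing public key to the Pi and confirm key-based login works
ssh-copy-id -i ~/.ssh/id_ed25519.pub pi-admin@rpi-docker-01
ssh pi-admin@rpi-docker-01 hostname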

Mail Forwarding

We need to forward emails from containers and scripts on the system, so we set up email forwarding using the procedure here.

Docker/Docker Compose Installation

Installing Docker and the Docker Compose plugin involves a series of command-line steps on the RPi. To automate this process, we created a script that runs on our Ubuntu Admin server. The steps required for these installations are covered in the following video:


Steps to install Docker and Docker Compose on a Raspberry Pi
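Our script is tailored to our admin server, but the Docker portion of the install typically reduces to Docker’s convenience script; the sketch below is only an approximation of the scripted steps shown in the video.

# Install Docker and the Compose plugin using Docker's convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Allow the current user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker $USER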

Some important adjustments to the steps in the video included:

The installation can be verified at the end with the following commands:

# docker --version
# docker compose version
# docker run hello-world

Portainer Agent

We installed the Portainer Edge agent using the following command, which is run on the RPi:

# docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:2.19.5

The final step is to connect the Edge Agent to our Portainer.

Traefik Reverse Proxy and PiHole with Cloudflare Tunnel

Our software service stack for our Raspberry Pi includes the following applications:

These applications are installed via custom scripts and Docker Compose using a single stack. Our combined stack was created using a combination of the information in the following videos:


Deploy PiHole with Cloudflare Tunnel in Docker


Deploying Traefik in Docker

Scheduled Block List Updates

We keep our PiHole block lists up to date by doing a Gravity pull, which we run daily via a cron job. This can be configured on the RPi host using the following commands –

# Edit the user crontab
sudo crontab -u <user-id> -e

# Add the following to the user crontab (replace min and hr with the desired run time)
min hr * * * su ubuntu -c "/usr/bin/docker exec pihole pihole -g" | /usr/bin/mailx -s "RPi Docker - Gravity Pull" [email protected]

Cloudflare DDNS

We host our domains externally on Cloudflare. We use Docker containers to keep our external IP address up to date in Cloudflare’s DNS system. You can learn about how to set this up here.

Watchtower

We are running the Watchtower container to keep our containers on our RPi Docker host up to date. You can learn more about Watchtower and how to install it here.
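As a reference, a minimal Watchtower deployment looks roughly like the following; the image tag and interval are common defaults rather than necessarily what we run.

# Run Watchtower with access to the Docker API via the socket
docker run -d --name watchtower --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower --cleanup --interval 86400   # check daily and prune old images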

Backups

We back up our Raspberry Pi Docker host using Synology Active Backup for Business running on one of our Synology NAS drives.

Home Network Infrastructure

Gen 2 Gen 4 Home Network Core Rack

We use UniFi equipment throughout. We chose the UniFi platform for our second-generation home network primarily for its single-pane-of-glass management and configuration capabilities.

Network Structure

Home Network Architecture

The image above shows our network’s structure. Our network is a two-tiered structure with a core based upon high-speed, 25 GbE-capable aggregation switches and optically connected edge switches. We have installed multiple OM4 multi-mode fiber links from the core to each room in our house. The speed of these links ranges from 1 Gbps to 25 Gbps, with most connections running as dual-fiber LACP LAG links.

Access Layer

At the top layer, redundant Internet connections provide Internet Access and ensure that we remain connected to the outside world.

Firewall, Routing, and Management Layer

UniFi Dream Machine Pro SE

Our network’s firewall and routing layer implements security and routing functions using a UniFi UDM Pro router and firewall.

Home Network Dashboard

The UDM also provides a single-pane-of-glass management interface. All configuration functions are performed via the GUI provided by the UDM.

Core Aggregation Layer

UniFi High-Capacity Aggregation Switch

The core layer uses a pair of high-capacity Aggregation Switches to provide optical access links to all of the switches in our network’s edge layer. We also include a high-speed 10 GbE wired ethernet switch at this layer. All of our storage devices and servers are connected directly to the core layer of our network to maximize performance and minimize latency.

Edge Connectivity Layer

Example UniFi High-Speed Edge Switch

The edge layer uses various switches connected to the core layer, combining 25 GbE, 10 GbE, and 1 GbE optical links. Many of these links are built using pairs of optical links in an LACP/LAG configuration.

UniFi Firewall/Router, Core, and Edge Switches In Our Network

Our edge switches are deployed throughout our home. We use a variety of edge switches in our network, depending on each room’s connectivity needs.

Wi-Fi Access and Telephony

UniFi WiFi APs and Telephones

This layer connects all our devices, including WiFi Access Points and our Telephones.

Synology NAS

Main NAS Storage Rack – Synology RS2421RP+ and RX1217RP+ NAS Drives

We use a variety of NAS drives for storage in our Home Lab.

| Device | Model | Storage Capacity | RAID Level | Purpose | Network Interface |
| --- | --- | --- | --- | --- | --- |
| NAS-1 | Synology RS2421RP+/RX1223RP | 272 TB HDD | RAID-6 | Backups and Snapshot Replication | Dual 10 GbE Optical |
| NAS-2 | Synology RS2421RP+ | 145 TB HDD | RAID-6 | Video Surveillance and Backups | Dual 10 GbE Optical |
| NAS-3 | Synology RS1221+/RX418+ | 112 TB HDD/SSD | RAID-5&6 | Media Storage and DVR | 10 GbE Optical |
| NAS-4 | Synology RS2421RP+/RX1223RP | 290 TB HDD | RAID-6 | Backups and Snapshot Replication | Dual 10 GbE Optical |
| NAS-5 | Synology FS2017+ | 17 TB SSD | RAID F1 | High-Speed Storage for Video Editing & Time Machine Backups | 25 GbE Optical |
| NAS-6 | Synology DS1621xs+/DX517 | 116 TB HDD | RAID-5 | General Purpose Storage | Dual 10 GbE Optical |
| NAS-7 | Dual Synology RP1221+ in High-Availability configuration | 24 TB SSD | RAID-5 | VM and Docker Volumes | 10 GbE Interface |
| NAS-10 | Dell Server-based File Server using ZFS | 23 TB SAS SSD | RAID-10 | High-Speed Scratch Storage | 25 GbE Optical |
| NAS-11 | Raspberry Pi NAS | 2 TB NVMe | n/a | Experimentation | 2.5 GbE |
| NAS-12 | Raspberry Pi NAS | 3.5 TB SSD | RAID-0 | Experimentation | 2.5 GbE |

The table above lists all of the NAS drives in our Home Lab. Most of our production storage is implemented using Synology NAS Drives. Our total storage capacity is just over 1 Petabyte. Our setup also provides approximately 70 TB of high-speed solid-state storage.

Systems with Dual Optical interfaces are configured as LACP LAGs to increase network interface capacity and improve reliability.

Hardware and Power

We have moved to mostly rack-mounted NAS drives to save space and power. The picture above shows one of our racks, which contains Synology NAS drives. We have also opted for Synology rack-mount systems with redundant power supplies to improve reliability. Our racks include dual UPS devices to further enhance reliability.

Basic Setup and Configuration

We cover some details of configuring our Synology NAS devices running DSM 7.2 here.

Multiple VLANs and Bonds on Synology NAS

Our NAS devices use pairs of ethernet connections configured as 802.3ad LACP bonded interfaces. This approach improves reliability and enhances interface capacity when multiple sessions are active on the same device. DSM supports LACP-bonded interfaces on a single VLAN. This can be easily configured with the DSM GUI.

A few of our NAS drives benefit from multiple interfaces on separate VLANs. This avoids situations where high-volume IP traffic needs to be routed between VLANs for applications such as playing media and surveillance camera recording. Setting this up requires accessing and configuring DSM’s underlying Linux environment via SSH. The procedure for setting this up is explained here and here.
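The linked procedures essentially come down to adding a VLAN sub-interface on top of the bond from the DSM command line. The commands below are a generic Linux sketch of that idea (the VLAN ID and addressing are examples), not the exact persistent DSM configuration, so follow the linked write-ups for the full steps.

# Create a VLAN 20 sub-interface on the existing bond (example values)
sudo ip link add link bond0 name bond0.20 type vlan id 20
sudo ip addr add 192.168.20.10/24 dev bond0.20
sudo ip link set bond0.20 up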

Creating a RAM Disk

You can create a RAM disk on your Synology NAS by creating a mount point in one of your shares and installing a shell script that runs when the NAS boots to create and mount the RAM disk. If your mount point is in a share named Public on your Storage Pool on volume1 and is called tmp, then –

#!/bin/sh
# Mount a tmpfs RAM disk (up to 50% of system RAM) at the share's tmp mount point
mount -t tmpfs -o size=50% ramdisk /volume1/Public/tmp

will create a RAM disk that can use up to 50% of the RAM on your NAS and is accessible as /volume1/Public/tmp by packages running on your NAS. The RAM disk will be removed when you reboot your NAS, so you’ll need to run the command above each time your NAS boots. This can be scheduled to run on boot using the Synology Task Scheduler.
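After a reboot, you can confirm the RAM disk is mounted; for example, over SSH:

# Should show a tmpfs filesystem mounted at the share path
df -h /volume1/Public/tmp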

Welcome To Our Home Lab

Home Network Dashboard

This site is dedicated to documenting the setup, features, and operation of our Home Lab. Our Home Lab consists of several different components and systems, including:

  • A high-performance home network with redundant Internet connections
  • A storage system that utilizes multiple NAS devices
  • Multiple enterprise-grade servers in a high-availability cluster
  • Applications, services, and websites
  • Dual-UPS-protected power feeds and a backup generator

Home Network

Home Network Core, High-Availability Storage, and Secondary Server Rack

Our Home Network uses a two-tiered structure with a core based upon high-speed, 25 GbE-capable aggregation switches and optically connected edge switches. We use Ubiquiti UniFi equipment throughout. We have installed multiple OM4 multi-mode fiber links from the core to each room in our house. The speed of these links ranges from 1 Gbps to 25 Gbps, with most connections running as dual-fiber LACP LAG links.

We have redundant Internet connections which include 1 Gbps optical fiber and a 400 Mbps/12 Mbps cable modem service.

Our Network Rack also includes two SuperMicro servers and a pair of Synology NAS drives in a high-availability configuration. These drives provide solid-state storage for Proxmox Virtual Machine disks and Docker volumes.

Main Server and Storage

Main Server Rack and NAS Storage Rack

Our Server Rack houses our main Dell Server and several of our Synology NAS Drives. It features redundant UPS power and includes rack-mounted Raspberry Pi systems which provide several different functions in our Home Lab.

Our servers run Proxmox in a high-availability configuration. In total, we have 104 CPUs and 1 TB of RAM available in our primary Proxmox cluster.

This rack includes an all-SSD, high-speed NAS that we use for video editing. It also includes a NAS that stores our video and audio media collection and provides access to this content throughout our home and on the go when we travel.

High Capacity Storage System

Main NAS Storage Rack

Our NAS Rack provides high-capacity storage via several Synology NAS Drives. It features redundant UPS power and includes additional rack-mounted Raspberry Pi systems which provide several different functions in our Home Lab. This rack also houses our Raspberry Pi NAS and NAS 2 systems.

Our total storage capacity is just over 1 Petabyte. Our setup also provides approximately 70 TB of high-speed solid-state storage.

Power Over Ethernet (PoE)

Main Power Over Ethernet (PoE) Switch

We make use of Power Over Ethernet (PoE) switches at many edge locations in our network to power devices through their ethernet cables.

The switch shown above is located centrally where all of the CAT6 ethernet connections in our home terminate. It powers our Surveillance Cameras, IP Telephones, Access Points, etc.

Home Media System

Our Home Theater

We use our Home Network and NAS System to provide a Home Media System. Our Media System sources content from streaming services as well as video and audio content stored on our Media NAS drive, and it enables this content to be viewed from any TV or smart device in our home. We can also view our content remotely when traveling or in our cars via the Internet.

Surveillance System

Synology Surveillance Station

We use Synology Surveillance Station running on one of our NAS drives to support a variety of IP cameras throughout our home. This software uses the host NAS drive for storing recordings and provides image recognition and other security features.

Telephone System

Telephone System Dashboard

We use Ubiquiti UniFi Talk to provide managed telephone service within our home.

Ubiquiti IP Telephone

This system uses PoE-powered IP Telephones which we have installed throughout our home.

Applications, Services, and Websites

We are hosting several websites, including:

Set-up information for our self-hosted sites may be found here.