
Raspberry Pi Home Server

Raspberry Pi Home Server and NAS
PiNAS – A Raspberry Pi Home Server and NAS

While having enterprise-grade equipment in our Home Lab is nice, I aimed to build something simple and inexpensive that beginners can tackle. The solution is a simple Raspberry Pi Home Server.

Project Objectives

Raspberry Pi Home Server Running CasaOS

Many applications and services can be hosted on a home server. For this project, we chose a basic set of capabilities for our Raspberry Pi Home Server project –

  • Sharing files on a home network via Network Attached Storage (NAS)
    • Photos, Music, Videos, Documents, …
  • A DNS Server to create easy-to-remember names to access IP devices and services
    • 192.168.1.xxx vs. your-service.your-domain.net
  • Creating a personal domain via Cloudflare and obtaining a signed SSL Certificate for your web services
  • Setting up a Reverse Proxy to tie it all together in a secure fashion
  • Serving media for viewing across TVs, PCs, and Smart Devices (phones, tablets)
  • Keeping your devices and apps up and working via monitoring

This project also offers an opportunity to learn about and use modern IT technology, and you can build upon it to add applications to –

  • Create a website and share it with the world
  • Build a “Smart Home”
  • Add a Home Lab dashboard

We’ll be making use of Docker for much of this project. Sample Docker Compose files are included for many of the applications that follow. Note that files will need some adjustments. In particular, replace <example – password> items with your custom values. Use strong passwords. Keep your passwords and API keys secure.

Back to top

Raspberry Pi Home Server Hardware

PiTech Home Server and NAS

We recommend a Raspberry Pi 4B or Pi 5 system with 8 GB of RAM for your home server. For storage, we recommend an SSD device for reliability and capacity reasons. Below are links to systems that we’ve built.

  • PiNAS – RPi 5 system with a 2 TB NVMe drive
  • PiNAS 2 – RPi 5 system with 4 x 2.5″ SSDs
  • PiLab – RPi 4B System with a 1 TB 2.5″ SSD
  • PiTech (coming soon) – RPi 5 System with a 2 TB 2.5″ SSD

If you are buying new hardware for your home server, I recommend a system like PiNAS. The PiLab and PiTech systems are good choices if you already have a Raspberry Pi 4B or Raspberry Pi 5 and a suitable 2.5″ SSD drive available.

Back to top

Project Prerequisites

The prerequisites below are needed for this project. We suggest that you have these items in place before you start the steps outlined in the next sections –

  • Raspberry Pi Hardware with adequate storage
  • Broadband Internet and a Router that you can set up to:
      • Reserve IP addresses for your devices
      • Open ports to the Internet
  • A free account on Cloudflare and a domain registered there
  • A Password Manager like Dashlane to generate, encrypt, and store strong passwords
      • Be sure to set up 2-factor Authentication for access to your Password Manager

    Back to top

    OS and CasaOS Installation

    The video above covers the steps to install Raspberry Pi (RPi) OS and CasaOS on your Raspberry Pi. The steps are as follows –

    • Assemble your hardware and connect the RPi
      to your network – use a wired Ethernet
    • Install RPi OS 64-bit Lite via network install
    • Set up a reserved IP address for your RPi in your router
        • The exact procedure depends on your router model
      • Install CasaOS
        • $ curl -fsSL https://get.casaos.io | sudo bash
        • Set CasaOS login and password

      Once CasaOS is up and running, we recommend doing the next steps –

      • Set up a fixed IP assignment for your server in your router’s DHCP settings (note the IP – we’ll use it in the steps that follow)
      • Set a strong Linux password via the CasaOS terminal
      • Change CasaOS port from 80 to 9095

      Back to top

      Setup Network Attached Storage (NAS)

      CasaOS File Shares

      You can use the Files app in CasaOS to share your folders on your network. These shares are not password-protected and can be viewed by anyone who can access your home network.

      Password Protecting CasaOS Shared Folders

      This can be done by manually configuring Samba file sharing in Linux.

      First, set up and share all of your main folders in CasaOS. This is necessary as adding extra shared folders will overwrite the changes we will make here.

      Next, we must create a user and set a password for file sharing. The commands below will create a user called shareuser and set a password for the user.

      $ sudo adduser --no-create-home --disabled-password \
      --disabled-login shareuser
      $ sudo smbpasswd -a shareuser

      The second command prompts you to enter a password to access protected shared folders. The CasaOS Terminal can filter certain characters in your password. It is best to run these commands via SSH from another computer.

      Now, we can turn on password protection for our shared folders by editing /etc/samba/smb.casa.conf using the following command.

      $ sudo nano /etc/samba/smb.casa.conf 

You can protect each share by modifying its section as shown in the example below. The key settings are public, guest ok, valid users, and write list.

      [Media]
      comment = CasaOS share Media
      public = No
      path = /DATA/Media
      browseable = Yes
      read only = Yes
      guest ok = No

      valid users = shareuser
      write list = shareuser

      create mask = 0777
      directory mask = 0777
      force user = root

      When you are done making the changes, run the following command to apply your changes.

      $ sudo service smbd restart 

      Your shared folders are now password-protected. When accessing them from your Mac or Windows PC, you will be prompted to enter your user name, which is shareuser. You will also need to enter the password that you set.

      Back to top

      A First Application – OpenSpeedTest

      OpenSpeedTest

      We’ll use the CasaOS App Store to install a simple speed test application called OpenSpeedTest on our home server. We’ll use the Utilities version of this app.

      Once our speed test is installed, we can run it using the icon in the CasaOS dashboard or from any web browser on our home network using the following URL –

http://<your server IP>:3004

OpenSpeedTest runs as a container inside Docker on your Linux OS. Docker lets you run applications with very little processing and memory overhead. More about Docker follows.

      Back to top

      Docker, Portainer, and Watchtower

      We’ll use Docker to install and run applications on our home server. Docker provides an efficient environment to host applications.  We’ll use Docker Compose to set up our applications to run in Docker.

      Back to top

      Portainer

      We’ll install an application called Portainer from the CasaOS app store next.

      Portainer Running on Our Home Server

      Portainer provides a graphical user interface (GUI) that makes using Docker much easier. So, we’ll use Portainer to install and manage all the Apps on our home server.

      Back to top

      Watchtower – Automatic Checks for Container Updates

      Next, we’ll install a container called Watchtower. Watchtower will periodically check for updated versions of all of our Docker images.

      Here is a template Docker Compose file for installing Watchtower using a Stack in Portainer.

Docker Compose
      # Watchtower – check for container image updates
      services:
          watchtower:
              container_name: Watchtower
              image: containrrr/watchtower:latest
              security_opt:
                  - no-new-privileges:true
              volumes:
                  - /var/run/docker.sock:/var/run/docker.sock
              
              # Restart on crashes and reboots
              restart: unless-stopped
              
              # Configure the container
              environment:
                  # Set Timezone
                  - TZ=America/New_York
                  
                  # Cleanup old images
                  - WATCHTOWER_CLEANUP=true
                            
                  # Monitor only - disable auto updates
                  - WATCHTOWER_MONITOR_ONLY=true
                   
                  # Set schedule to run at 5 am daily
                  - WATCHTOWER_SCHEDULE=0 0 5 * * *
      Watchtower Docker Compose Template

If Watchtower finds any updates, the newly pulled images will show up as unused in the Images section of Portainer. To update a container, re-create it using the latest image. Afterward, you can remove the old, now-unused images for the updated containers, as they are no longer needed.
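If you manage a stack from the command line rather than through Portainer, the equivalent update flow looks roughly like this (a sketch that assumes the stack's Compose file is in the current directory) –

# Sketch: update a stack's containers after Watchtower reports new images
docker compose pull      # fetch the updated images
docker compose up -d     # re-create the containers from the new images
docker image prune -f    # remove the old, now-unused images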

      Back to top

      Configuring Your Domain On Cloudflare

You should already have purchased a domain on Cloudflare as part of completing the prerequisites for this project. Next, we'll set up a Dynamic DNS service on our home server. This will keep the IP address of our Internet connection current in our Cloudflare DNS while keeping it hidden behind Cloudflare's proxy.

      We’ll use a Cloudflare DDNS container to do this. The steps are –

      • Obtain an API token on Cloudflare to allow our container to edit our domain’s DNS records on Cloudflare
      • Next, we’ll paste our token into our Docker Compose for our container. We’ll deploy our container as a stack in Portainer. Refer below for a template Docker Compose file.
      • Finally, we’ll log in to Cloudflare and check that our Internet IP address is correct.
Docker Compose
      # Cloudflare DDNS: set IP address for your domain on Cloudflare
      services:
        cloudflare-ddns:
          image: oznu/cloudflare-ddns:latest
          restart: unless-stopped
          container_name: Cloudflare-DDNS-Update
          security_opt:
            - no-new-privileges:true
          
          environment:
            - API_KEY=<Your API key from Cloudflare goes here>
            - ZONE=<Your domain name goes here>
            - PROXIED=true
      
            # Check for IP changes every 5 minutes
            - CRON=*/5 * * * *
      Cloudflare DDNS Docker Compose Template

      Back to top

      PiHole DNS Server

      PiHole in Docker
      PiHole DNS Server

      Next, we’ll set up PiHole as an ad-blocking DNS server for our home network. We’ll also use an encrypted tunnel between our PiHole server and Cloudflare to keep our web browsing activities private.

      We’ll deploy PiHole by creating a Stack in Portainer using the Docker Compose template below.

Docker Compose
      # Deploy PiHole with an encrypted tunnel to Cloudflare
      services:
        cloudflared:
          container_name: cloudflared
          image: cloudflare/cloudflared:latest
          security_opt:
            - no-new-privileges:true
      
          # Restart on crashes and reboots
          restart: unless-stopped
      
    # Cloudflare tunnel used in proxy DNS mode
          command: proxy-dns
      
          environment:
            # Use standard Cloudflare DNS servers for Internet
            - "TUNNEL_DNS_UPSTREAM=https://1.1.1.1/dns-query,https://1.0.0.1/dns-query"
      
            # Listen on an unprivileged port
            - "TUNNEL_DNS_PORT=5053"
      
            # Listen on all interfaces
            - "TUNNEL_DNS_ADDRESS=0.0.0.0"
      
          # Attach Cloudflared only to the private network
          networks:
            pihole_internal:
              ipv4_address: 172.70.9.2
      
        pihole:
          container_name: pihole
          image: pihole/pihole:latest
          hostname: pitech-pihole
          security_opt:
            - no-new-privileges:true
      
          # Restart on crashes and reboots
          restart: unless-stopped
      
          # Set external ports for PiHole access
          ports:
            - "53:53/tcp"
            - "53:53/udp"
          # - "67:67/udp"
            - "500:80/tcp"
          # - "443:443/tcp"
      
          # Attach PiHole to the private network
          networks:
            pihole_internal:
              ipv4_address: 172.70.9.3
      
          environment:
            # Set local timezone
            TZ: 'America/New_York'
      
            # Substitute your strong password
            FTLCONF_webserver_api_password: '<your pihole dashboard password goes here>'
            FTLCONF_webtheme: 'default-dark'
            FTLCONF_dns_upstreams: '172.70.9.2#5053'    # Use Cloudflared tunnel
            FTLCONF_dns_listeningMode: 'all'
            FTLCONF_dns_dnssec: 'true'
      
    # Volume stores your PiHole settings
          volumes:
            - '/DATA/AppData/pihole/:/etc/pihole/'
      
          # Make sure Cloudflare tunnel is up before PiHole
          depends_on:
            - cloudflared
      
      # Create the internal private network
      networks:
        pihole_internal:
           ipam:
             config:
       - subnet: 172.70.9.0/29    # A /29 subnet provides 6 usable host addresses
           name: pihole_internal
      
      PiHole Docker Compose Template

      Once our PiHole stack is up and running, we can access our PiHole dashboard via our web browser using the URL below –

http://<your server IP>:500/admin/

      You’ll want to set up A and CNAME records for your IP devices and services. You will also need a DNS A Record to point at your new server: server-name.your-domain.

Finally, change the DHCP settings in your router so that it hands out your PiHole server's IP address as your network's DNS server.
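For reference, the records take roughly this form (the names and addresses below are placeholders). In the PiHole web UI they are entered under Local DNS; in hosts/dnsmasq syntax they look like –

# A record (hosts-file format): point the server name at its LAN IP
192.168.1.100    pinas.your-domain.net

# CNAME records (dnsmasq format): point each service at the server
cname=plex.your-domain.net,pinas.your-domain.net
cname=backup.your-domain.net,pinas.your-domain.net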

      Back to top

      Reverse Proxy Using Nginx Proxy Manager

      Nginx Proxy Manager

      Next, we’ll set up Nginx Proxy Manager (NPM). NPM will offer several valuable services for us, including –

      • The ability to use subdomain names for our services and automatically add the correct port numbers for our hosted services
      • We will obtain a signed wildcard SSL certificate for our domain from Let’s Encrypt. This certificate will allow secure web connections (https) for our services. NPM will use a DNS-01 Challenge to obtain our SSL certificate. This way, we won’t have to open any ports to the Internet.

      We can use the Docker Compose template below to deploy Nginx Proxy Manager in Portainer.

Docker Compose
# Nginx Proxy Manager: Reverse Proxy
      services:
        app:
          image: 'jc21/nginx-proxy-manager:latest'
          restart: unless-stopped
          security_opt:
            - no-new-privileges:true
      
          # Internal network for communicating with the database
          networks:
            - proxy
      
          # Expose ports to the outside world
          ports:
            - '80:80'
            - '81:81'    # For configuration GUI
            - '443:443'
      
          environment:
            DB_MYSQL_HOST: 'db'
            DB_MYSQL_PORT: 3306
            DB_MYSQL_USER: 'npm'
            DB_MYSQL_PASSWORD: '<your DB password>'  # Replace with a strong password
            DB_MYSQL_NAME: 'npm'
      
          # Persistent storage for npm configuration and SSL certs    
          volumes:
            - /DATA/AppData/nginx-proxy-mgr/data:/data
            - /DATA/AppData/nginx-proxy-mgr/letsencrypt:/etc/letsencrypt
        
        db:
          image: 'jc21/mariadb-aria:latest'
          restart: unless-stopped
          security_opt:
            - no-new-privileges:true
      
          # Join the internal network
          networks:
            - proxy
      
          environment:
            MYSQL_ROOT_PASSWORD: '<your DB password>' # Replace with the same strong password
            MYSQL_DATABASE: 'npm'
            MYSQL_USER: 'npm'
            MYSQL_PASSWORD: '<your DB password>'      # Replace with the same strong password
      
          # Persistent storage for the configuration database
          volumes:
            - /DATA/AppData/nginx-proxy-mgr/mysql:/var/lib/mysql
      
      # Define the private internal network
      networks:
        proxy:
          name: npm_proxy
      
Nginx Proxy Manager Docker Compose Template

      Back to top

      Configuring Nginx Proxy Manager

      The video above covers the configuration of NPM. We can use the same Cloudflare API token we obtained for the DDNS Service to create our SSL certificate.

We can now set up our apps to use NPM and our domain's wildcard SSL certificate. The steps to add a Proxy Host for each service in NPM are as follows –

• Set up a subdomain for the service in PiHole
• Set up a proxy host for the service in NPM using:
  • Your server IP and the port for your service
  • The options to Block Common Exploits and turn on WebSockets Support
  • Your domain's SSL certificate, with Force SSL enabled
  • HTTP/2 (except for OpenSpeedTest), with HSTS and HSTS sub-domain support enabled

      Back to top

      Backups Using Duplicati

      Duplicati Backup

We can use the Duplicati Backup app to back up the data and configuration information stored on our Raspberry Pi Home Server. Duplicati provides deduplicated, encrypted backups and offers a variety of local, network, and cloud backup destinations.

      Duplicati can be deployed as a Portainer Stack using the following template.

Docker Compose
      # Duplicati: backup files and folders to a variety of stores
      services:
        duplicati:
          image: lscr.io/linuxserver/duplicati:latest
          container_name: duplicati
          restart: unless-stopped
          security_opt:
            - no-new-privileges:true
      
          # Set the port for GUI access
          ports:
            - 8200:8200
      
          environment:
            - PUID=1000
            - PGID=1000
      
            # Set local timezone
            - TZ=America/New_York
      
            # Choose a default for encryption - replace it with a strong key
            - SETTINGS_ENCRYPTION_KEY='<replace with your strong key>'
      
            # Can add args when this container is launched
            - CLI_ARGS= #optional
      
            # Initial password will be 'changeme' if left blank
            - DUPLICATI__WEBSERVICE_PASSWORD=
      
          # Set locations of sources for backups and destination stores
          volumes:
            # Location for configuration storage
            - /DATA/AppData/duplicati/config:/config
      
            # Location for local backup storage on this server
            - /Backups:/backups
      
            # Root folder for creating backups - Using CasaOS DATA directory
            - /DATA:/source
      Duplicati Backup Docker Compose template

You should create the /Backups directory on your Raspberry Pi Home Server using the CasaOS Terminal. This directory should be owned by the root user and have access mode 777.

      # Run these commands in the CasaOS Terminal

      $ sudo mkdir /Backups
      $ sudo chown root:root /Backups
      $ sudo chmod 777 /Backups

Finally, set up a subdomain in PiHole and a proxy host in Nginx Proxy Manager for Duplicati.

      Back to top

      Steps To Perform a Backup

      To back up your data and applications completely, a few steps are required.

      1. Backup your Docker Volume Data (only required when adding or changing application configuration).
      2. Make backups of your settings in PiHole and Portainer and store them in your Configs folder in CasaOS. This is only required when you change your PiHole DNS or Portainer’s configuration.
      3. Backup your Duplicati backup configurations (only required when you add or change the backups configured in Duplicati).
      4. Run a manual or scheduled backup using Duplicati.

Steps 1 – 3 above are only required when you add or reconfigure applications, Duplicati backups, or DNS records on your Home Server. Typically, you will simply run scheduled Duplicati backups, which capture the data stored in your CasaOS shares, including the configuration information saved via steps 1 – 3.

      Back to top

      Docker Volume Data Backup

      We want to include copies of the Docker volumes and configuration data for all our Apps in our backups. To do this, copy the scripts in this section to your home directory on your Home Server. Then, execute them remotely via SSH from your PC. Also, remember to make the scripts executable and run them as the root user.

      Backup Script
Bash Script
      #!/bin/bash
      #
# bu-config.sh - Script to make .zip backups of all our
#    docker container volume data folders. Backup .zip files
#    are stored in the CasaOS Configs folder.
      #
      # usage: $ sudo bash ./bu-config.sh
      #
      # Note: This script must be executed remotely via SSH. Do not
      #    use the CasaOS Terminal.
      
      # Configuration
      DATE_STR=`date +%F_%H-%M-%S%Z`
      BACKUP_DIR="/DATA/Configs"
      APP_DIR="AppData"
      DOCKER_ROOT="/DATA/$APP_DIR"
      
      # Make sure we are running as root
      if [ "$EUID" -ne 0 ]
      	then echo "Please run as root"
      	exit 1
      fi
      
      # Make sure backup folder exists
      if [ ! -d "$BACKUP_DIR" ]
      then
      	# Need to create the backup directory
      	mkdir "$BACKUP_DIR"
      
      	# Set owner, group, and mode - follow CasaOS standard
      	chown root:root "$BACKUP_DIR"
      	chmod 777 "$BACKUP_DIR"
      fi
      
      # Warn user not to run this script from the CasaOS Terminal
echo -e "\n*** WARNING: You cannot run this script using the CasaOS Terminal - use ssh from your PC instead ***"
      
      # Confirm its OK to stop containers
      echo -e "\n*** The following docker containers are running ***"
      docker ps --format '{{.Names}} - {{.Status}}' | sed -e 's/^/    /' 
      read -p "OK to stop these containers (type yes to continue)? " resp
      if [[ "$resp" != "yes" && "$resp" != "y" ]]
      then
      	echo ">>> Backup aborted <<<"
      	exit 2
      fi
      
      # Stop all docker containers
      echo -e "\n*** Stopping docker containers ***"
      docker ps --format '{{.Names}} - {{.Status}}' | sed -e 's/^/    /' 
      docker stop $(docker ps -q) > /dev/null
      
      # Create a backup and set owner/permissions for each folder
      echo -e "\n*** Creating backups of Apps in $DOCKER_ROOT in $BACKUP_DIR ***"
      cd $DOCKER_ROOT
      for FOLDER in *
      do
      	# Skip files - folders only
	if [ -f "$FOLDER" ]
      	then
      		continue;
      	fi
      
      	# Backup filename and full pathname
      	BACKUP_FILE=BU_"$FOLDER"_"$DATE_STR".zip	# Avoid problems mixing _ and $VAR
      	BACKUP_PATH="$BACKUP_DIR/$BACKUP_FILE"
      
      	# Create the backup
      	echo -e "    Backing up $FOLDER to $BACKUP_PATH"
	zip -q -r "$BACKUP_PATH" "$FOLDER"
      
      	# Set owner/permissions
      	chown root:root "$BACKUP_PATH"
      	chmod 766 "$BACKUP_PATH"
      	
      	# Show size of backup
      	cd $BACKUP_DIR
      	echo -e "    `du -h $BACKUP_FILE |  sed -e 's/\s\+/ - /g'`\n"
      	cd $DOCKER_ROOT
      done
      
      # Handle files in docker root directory
      BACKUP_FILE=BU_"$APP_DIR"_"$DATE_STR".zip	# Avoid problems mixing _ and $VAR
      BACKUP_PATH="$BACKUP_DIR/$BACKUP_FILE"
      
      # Create the backup
      echo -e "    Backing up $APP_DIR files to $BACKUP_PATH"
      cd $DOCKER_ROOT
      zip -q $BACKUP_PATH *
      
      # Set owner/permissions
      chown root:root "$BACKUP_PATH"
      chmod 766 "$BACKUP_PATH"
      	
      # Show size of backup
      cd $BACKUP_DIR
      echo -e "    `du -h $BACKUP_FILE |  sed -e 's/\s\+/ - /g'`\n"
      cd $DOCKER_ROOT
      
      # Start all docker containers
      docker start $(docker ps -a -q) > /dev/null
      echo -e "*** Started docker containers ***"
      docker ps --format '{{.Names}} - {{.Status}}' | sed -e 's/^/    /' 
      
      # All done
      exit 0
      Backup Script for Docker Volumes

We have developed a custom shell script (shown above) that stops all running Docker containers, generates zip backups of each application's Docker volumes, and then restarts all of the containers. The script also creates a new Configs folder within CasaOS; after running the script for the first time, you should share this folder.

      Example Execution
      Example Execution of Configuration Backup Script

      The above image shows an example of the script’s execution. You must run this script as the root user via SSH from a PC or external server. You can’t execute the script from the CasaOS Terminal.

      Docker Recovery Script

      The Docker backup script could fail, preventing your containers from being restarted. While this is unlikely, we’ve created the script below to restart your existing Docker containers.

      Bash Script
      #!/bin/bash
      #
      # restart-apps.sh - Script to restart Docker containers
      #    if the backup script fails
      #
      # usage: $ sudo bash ./restart-apps.sh
      #
      # Note: This script must be executed remotely via SSH. Do not
      #    use the CasaOS Terminal.
      #
      
      # Make sure we are running as root
      if [ "$EUID" -ne 0 ]
      	then echo "Please run as root"
      	exit
      fi
      
      # Start all docker containers
      docker start $(docker ps -a -q) > /dev/null
      echo -e "*** Started docker containers ***"
      docker ps --format '{{.Names}} - {{.Status}}' | sed -e 's/^/    /' 
      
      # All done
      exit 0
      Script to Restart Docker Containers

      Back to top

      PiHole and Portainer Application Configuration Backup

PiHole, Portainer, and Duplicati have specific commands in their GUIs to back up their settings. You can use these to download backups to your PC. Then, move them to the CasaOS Configs folder for Duplicati to back up.

      PiHole
      PiHole Teleporter – Settings Backup

Using the Teleporter item under Settings, you can back up PiHole's entire configuration, including all your custom DNS records. After downloading the backup to your PC, move it to the CasaOS Configs folder for Duplicati to back up.

      Back to top

      Portainer
      Portainer Settings Backup

You can back up Portainer's program configuration using the “Download backup” button under Administration/Settings. After downloading the backup to your PC, move it to the CasaOS Configs folder for Duplicati to back up.

      Note that this process does not back up your Stacks, Containers, Images, or Volume Data. It only backs up the Portainer app’s configuration.

      You can recreate your Stacks using these two steps –

      • First, use the Docker Volume Backup zip files. Restore your persistent volumes in the CasaOS AppData folder.
      • Second, use the Docker Compose templates you used to create the containers.

      Back to top

      Duplicati Backup Configurations
      Duplicati Backup Configuration Export

      You can back up each Duplicati configuration using the export link on the configuration screen for each associated backup.

You should move the backups downloaded to your PC to the CasaOS Configs folder for Duplicati to back up.

      Back to top

      Backup Storage via SFTP

      Duplicati can store backups on any device using the SFTP protocol. SFTP is secure and can transfer your backup data over your home network.

      Configuring an SFTP Backup Destination in Duplicati
      Duplicati SFTP Backup Destination Configuration – macOS System with USB Hard Drive Example

You can set up Duplicati to use any SFTP storage server as a backup destination. Enter the SFTP URI and login information. The Path on server setting should point to the folder on the target device that will store your backups.

      Back to top

      Configuring an SFTP Server

      You can set up an SFTP storage server on your Windows PC, macOS System, or Synology NAS. The videos below cover how to do this on Windows and macOS.

      Setting up an SFTP Server on Windows
      Setting Up an SFTP Server on macOS

      Back to top

      Cloud Backup Storage using Backblaze B2

      Backblaze B2 offers cost-effective cloud storage for your backups. To start, sign up for a B2 storage account on Backblaze. Then, create a bucket to store your backups. Finally, set up a backup destination in Duplicati. The configuration in Duplicati is shown below.

      Backblaze B2 Backup Destination Configuration in Duplicati

      Back to top

      Monitoring Using Uptime Kuma

      Uptime Kuma Monitoring App

Our next service is Uptime Kuma. This app can monitor IP devices, services, websites, and anything else with an IP address, confirming that everything you care about on your network and in the cloud is up and running.

      We will again use a Portainer Stack to deploy Uptime Kuma. The Docker Compose template is shown below.

Docker Compose
      # Uptime Kuma: monitor services, websites, and devices
      services:
          uptime-kuma:
              container_name: Uptime-Kuma
              image: louislam/uptime-kuma:latest
              restart: unless-stopped
              security_opt:
                  - no-new-privileges:true
      
              # External port for accessing GUI
              ports:
                  - '4001:3001'
              
              # Set specific DNS server to find local services        
              dns:
                  - <your home server IP>	# Replace with your server IP
      
              # Persistent storage for configuration/docker access
              volumes:
                  - /DATA/AppData/uptime-kuma:/app/data
                  - /var/run/docker.sock:/var/run/docker.sock
      
      Uptime Kuma Docker Compose Template

      Back to top

      Configuring Uptime Kuma

      You can set up Uptime Kuma to send texts or emails when a monitored device or service goes down. For more information, see the settings sections inside the App.

      Back to top

      Home Media Using Plex Media Server

      Plex Media Server

Plex Media Server provides a system for organizing your media content, so you can build a complete home media system around it.

      You must create an account on plex.tv before starting the installation. Also, you will want to open port 32400 in your router and point it to your home server. This will allow you to access your media from outside your home via the Internet.

      We can install Plex Media Server using a Portainer Stack. First, prepare your Stack for deployment by obtaining a Plex Claim Token and pasting it into your Stack in Portainer. Note that your Claim Token is only valid for 4 minutes, so you should deploy your Stack quickly. You can use the Docker Compose template below to deploy Plex.

Docker Compose
      # Plex Media Server: Access multimedia content from anywhere
      services:
        plex:
          container_name: plex
          image: lscr.io/linuxserver/plex:latest
          restart: unless-stopped
          security_opt:
            - no-new-privileges:true
          
          # Causes all Plex media server ports to be passed to the outside
          network_mode: host
      
          environment:
            - PUID=1000
            - PGID=1000
            - TZ=America/New_York
            - VERSION=docker
            - PLEX_CLAIM=claim-<insert your plex claim token here>
      
          # Must point to CasaOS Media folders and AppData (for config info)
          volumes:
            - /DATA/AppData/plex/config:/config
            - /DATA/Media/TV Shows:/tv
            - /DATA/Media/Movies:/movies
            - /DATA/Media/Music:/music
      Plex Media Server Docker Compose Template

      Back to top

      Configuring Plex Media Server

      Once you have deployed your stack, you can create a subdomain name for Plex in PiHole. Next, set up a proxy host for Plex in Nginx Proxy Manager. Finally, open your Plex Server in your web browser and log in to your account.

      You should now have Plex Media Server up and running and ready to be configured. The video below explains how to do so.

      Back to top

      Raspberry Pi Home Server Final Cleanup

      Raspberry Pi Home Server Running CasaOS

      If you’ve followed all of the steps, you should now have your Raspberry Pi Home Server fully set up. The last thing to do is create shortcuts in the Apps section of your CasaOS dashboard for all your Apps.

      Some of your Apps will already have shortcuts. To add a new one, click the “+” on the right side of the Apps label. Then, choose Add external link and set up a link.

      App Shortcut for Plex Media Server

      You can use the subdomain names you created for your Nginx Proxy Manager Proxy Hosts. You should use https in your shortcuts. Finally, look around the web for a URL to a small graphic to use as an icon for your shortcut.

      Back to top

      Future Projects

      If you’ve gotten this far, you now have a capable Raspberry Pi Home Server and NAS. You’ve also built a platform using Cloudflare, Nginx Proxy Manager, and PiHole to allow you to do much more.

      Here are a few projects that we plan to do on our home server in the future –

      • We’ll be setting up a WordPress website and exposing it to the Internet
      • We’ll install Home Assistant and use it to manage our Smart Home devices
      • Also, we’ll be installing a home lab dashboard like Dashy to offer a simple user interface for all of our services

      Many of these services are already running on the Docker/Portainer system that runs on our Proxmox Cluster. You can also find information about these here.

      Back to top

      Raspberry Pi NAS 2

      Raspberry Pi NAS 2

We've built a second NAS and Docker environment using another Raspberry Pi 5. This NAS features four 2.5″ 960 GB SSD drives in a RAID-0 array for fast shared storage on our network.

      Raspberry Pi NAS Hardware Components

      Raspberry Pi 5 Single Board Computer

      We use the following components to build our system –

I had five 960 GB 2.5″ SSD drives left over from a previous project available for this build.

      The following video covers the hardware assembly –

      We used a 2.5 GbE USB adapter to create a 2.5 GbE network interface on our NAS.

      2.5 GbE USB Adapter

      The configuration of the Fan/Display HAT top board is covered here.

      FAN/Display Top Board

This board comes as a kit that includes spacers to mount it on top of the Raspberry Pi 5/SSD Drive Interface HAT in the base kit.

      Software Components and Installation

      We installed the following software on our system to create our NAS –

CasaOS

      CasaOS Web UI

CasaOS is included to add a very nice GUI for managing our NAS. Here's a useful video on how to install CasaOS on the Raspberry Pi –

      Installation

The first step is to install the 64-bit Lite version of Raspberry Pi OS. This is done by first installing a full desktop version on a flash card and then using Raspberry Pi Imager to install the Lite version on our SSD boot drive. We did this on our macOS computer using the USB to SATA adapter and balenaEtcher.

      We used the process covered in the video above to install CasaOS.

      Creating a RAID

We chose to create a RAID-0 array using four of the SSD drives in our NAS. Experience indicates that this approach will be reasonably reliable with SSDs in a light-duty application like ours. We also back up the contents of the NAS daily via rsync to one of our Synology NAS drives.

      RAID-0 Storage Array

CasaOS does not provide support for RAID, so this is done using the underlying Linux OS. The process is explained here, and a sketch of the approach is shown below.
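A rough sketch with mdadm follows (the device names, filesystem, and mount point are examples only; adapt them to your drives and to where CasaOS keeps its data) –

# Create a RAID-0 array from four SSDs (example device names)
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Create a filesystem and mount it (example mount point)
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /DATA

# Persist the array definition so it assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u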

      File Share

      CasaOS makes all of its shares public and does not password-protect shared folders. While this may be acceptable for home use where the network is isolated from the public Internet, it certainly is not a good security practice.

      Fortunately, the Debian Linux-derived distro we are running includes Samba file share support, which we can use to protect our shares properly. This article explains the basics of how to do this.

      Here’s an example of the information in smb.conf for one of our shares –

      [Public]
          path = /DATA/Public
          browsable = yes
          writeable = Yes
          create mask = 0644
          directory mask = 0755
          public = no
          comment = "General purpose public share"

      You will also need to create a Samba user for your Samba shares to work. Samba user privileges can be added to any of the existing Raspberry Pi OS users with the following command –

      # sudo smbpasswd -a <User ID to add>

      It’s also important to correctly set the shared folder’s owner, group, and modes.
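For example, assuming the Public share above and a Samba user named <user> (both placeholders), something like this sets a reasonable owner and mode –

# Example only - adjust the path, user, and modes to your setup
sudo chown -R <user>:<user> /DATA/Public
sudo chmod -R 775 /DATA/Public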

      We need to restart the Samba service anytime configuration changes are made. This can be done with the following command –

      # sudo systemctl restart smbd

      Proxmox Test Node

      Proxmox Lab Node – AMD NUC Computer

We have built a single-node Proxmox system using an AMD NUC computer for testing and learning purposes. The hardware configuration for this system is as follows:

      Proxmox Installation/ZFS Storage

      Proxmox installation is straightforward. We used the same procedure as our Production Cluster. The two NVMe drives were configured as follows:

• 1 TB NVMe – ZFS-formatted boot pool named rpool
• 2 TB NVMe – ZFS-formatted pool named zfsa, mount point zfsa_mp

      Networking Configuration

Virtual Bridge   | Purpose                 | VLAN      | Speed    | Adapter
vmbr0 (Mgmt)     | Proxmox Management      | Computers | 2.5 GbE  | LAN #2 on NUC
vmbr1 (LS Svcs)  | Low-Speed Services      | All VLANs | 500 Mbps | LAN #1 on NUC
vmbr2 (HS Svcs)  | High-Speed Services     | All VLANs | 2.5 GbE  | USB-C Adapter #1
vmbr3 (Storage)  | HA Storage for VMs/LXCs | Storage   | 2.5 GbE  | USB-C Adapter #2

      The Networking configuration on our test node mirrors the setup in our Production Cluster. The table above outlines the Test Node networking setup. We could not configure one of the ports on the host system to operate above 500 Mbps.

      Storage Configuration

      Proxmox Test Node Storage Configuration

      The table above shows the storage configuration for our Test Node. NUC-storage is implemented on our high-availability NAS. Access is provided to both the Production Cluster and NUC Proxmox Backup Server datastores (more info here).

      Proxmox Backup Server Configuration

      Backups for our Test Node mirror the configuration and scheduling of Backups on our production Cluster (more info here).

      Additional Configuration

      The following additional items are configured for our test node:

      Server Cluster

      Proxmox Cluster Configuration

      Our server cluster consists of three servers. Our approach was to pair one high-capacity server (a Dell R740 dual-socket machine) with two smaller Supermicro servers.

Node | Model                 | CPU                                   | RAM    | Storage           | OOB Mgmt. | Network
pve1 | Dell R740             | 2 x Xeon Gold 6154 3.0 GHz (36 cores) | 768 GB | 16 x 3.84 TB SSDs | iDRAC     | 2 x 10 GbE, 2 x 25 GbE
pve2 | Supermicro 5018D-FN4T | Xeon D-1540 2.0 GHz (8 cores)         | 128 GB | 2 x 7.68 TB SSDs  | IPMI      | 2 x 1 GbE, 4 x 10 GbE
pve3 | Supermicro 5018D-FN4T | Xeon D-1540 2.0 GHz (8 cores)         | 128 GB | 2 x 7.68 TB SSDs  | IPMI      | 2 x 1 GbE, 4 x 10 GbE

      Cluster Servers

This approach allows us to handle most of our workloads on the high-capacity server, enjoy the advantages of High Availability (HA), and move workloads to the smaller servers to prevent downtime during maintenance activities.

      Server Networking Configuration

      All three servers in our cluster have similar networking interfaces consisting of:

      • An OOB management interface (iDRAC or IPMI)
      • Two low-speed ports (1 GbE or 10 GbE)
      • Two high-speed ports (10 GbE or 25 GbE)
      • PVE2 and PVE3 each have an additional two high-speed ports (10 GbE) via an add-on NIC

      The following table shows the interfaces on our three servers and how they are mapped to the various functions available via a standard set of bridges on each server.

Cluster Node      | OOB Mgmt.   | PVE Mgmt.     | Low-Speed Svcs.         | High-Speed Svcs.         | Storage Svcs.
pve1 (R740)       | 1 GbE iDRAC | 10 GbE Port 1 | 10 GbE Port 2           | 25 GbE Port 1            | 25 GbE Port 2
pve2 (5018D-FN4T) | 1 GbE IPMI  | 10 GbE Port 1 | 1 GbE Ports 1 & 2 (LAG) | 10 GbE Ports 3 & 4 (LAG) | 10 GbE Port 2
pve3 (5018D-FN4T) | 1 GbE IPMI  | 10 GbE Port 1 | HS Svcs (LAG)           | 10 GbE Ports 3 & 4 (LAG) | 10 GbE Port 2

      Each machine uses a combination of interfaces and bridges to realize a standard networking setup. PVE2 and PVE3 also utilize LACP bonds to provide higher capacity for the low-speed and high-speed service bridges.

      You can see how we configured the LACP Bond interfaces in this video.

      Network Bonding on Proxmox

      We must add specific routes to ensure the separate Storage VLAN is used for Virtual Disk I/O. This is done via the following adjustments to the vmbr3 bridge in /etc/network/interfaces.
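The exact stanza depends on your bond and addressing plan; a sketch consistent with the /32 addressing mentioned in the cluster checklist later on this page might look like this (the bridge port, addresses, and subnet are examples, not our actual values) –

# /etc/network/interfaces - vmbr3 (Storage) sketch
auto vmbr3
iface vmbr3 inet static
        address 192.168.100.11/32
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
        # Route Storage VLAN traffic directly out this bridge, not via the router
        up ip route add 192.168.100.0/24 dev vmbr3
        down ip route del 192.168.100.0/24 dev vmbr3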

      Finally, use the IP address the target NAS uses in the Storage VLAN when configuring the NFS share for PVE-storage. This ensures that the dedicated Storage VLAN will be used for Virtual Disk I/O by all nodes in our Proxmox Cluster. We ran

      # traceroute <storage NAS IP>

      from each of our servers to confirm that we have a direct LAN connection to PVE-Storage that does not go through our router.

      Cluster Setup

      We are currently running a three-server Proxmox cluster. Our servers consist of:

      • A Dell R740 Server
      • Two Supermicro 5018D-FN4T Servers

      The first step was to prepare each server in the cluster as follows:

      • Install and configure Proxmox
      • Setup a standard networking configuration
      • Confirm that all servers can ping the shared storage NAS using the storage VLAN

We used the procedure in the following video to set up and configure our cluster –

The first step was to use the pve1 server to create a cluster. Next, we added the other servers to the cluster (a CLI sketch follows the checklist below). If there are problems connecting to shared stores, check the following:

• Is the Storage VLAN connection using an address like 192.168.100.<srv>/32?
• Is there a direct route for VLAN 1000 (Storage) that does not use the router? Check via traceroute <storage-addr>
• Is the target NAS drive sitting on the Storage VLAN with multiple gateways enabled?
• Can you ping the storage server from inside the server Proxmox instances?
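Here is the CLI sketch mentioned above (the cluster name and IP address are examples; the same steps can be done from the Datacenter > Cluster GUI) –

# On pve1 - create the cluster
pvecm create homelab-cluster

# On pve2 and pve3 - join the cluster using pve1's address
pvecm add 192.168.10.11

# Check cluster status from any node
pvecm status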

      Backups

      For backups to work correctly, we need to modify the Proxmox /etc/vzdump.conf file to set the tmpdir to /var/tmp/ as follows:

      # vzdump default settings
      
      tmpdir:  /var/tmp/
      #tmpdir: DIR
      #dumpdir: DIR
      ...

This causes all backups to use the local /var/tmp directory when creating backup archives.

      We later upgraded to Proxmox Backup Server. You can see how PBS was installed and configured here.

      NFS Backup Mount

      We set up an NFS backup mount on one of our NAS drives to store Proxmox backups.

      An NFS share was set up on NAS-5 as follows:

      • Share PVE-backups (/volume2/PVE-backups)
      • Used the default Management Network

      A Storage volume was configured in Proxmox to use for backups as follows:

      NAS-5 NFS Share for PVE Backups

      A Note About DNS Load

      Proxmox constantly does DNS lookups on the servers associated with NFS and other mounted filesystems, which can result in very high transaction loads on our DNS servers. To avoid this problem, we replaced the server domain names with the associated IP addresses. Note that this cannot be done for the virtual mount for the Proxmox Backup Server, as PBS uses a certificate to validate the domain name used to access it. These adjustments can be made by editing the storage configuration file at /etc/pve/storage.cfg on any node in the cluster (changes in this file are synced for all nodes).
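For illustration, an NFS entry in /etc/pve/storage.cfg that uses an IP address rather than a hostname looks something like the sketch below (the storage name, export path, and address are examples, not our actual values) –

nfs: PVE-backups
        export /volume2/PVE-backups
        path /mnt/pve/PVE-backups
        server 192.168.1.50
        content backup
        prune-backups keep-last=3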

      NFS Virtual Disk Mount

      We also created an NFS share for VM and LXC virtual disk storage. The volume chosen provides high-speed SSD storage on a dedicated Storage VLAN.

      Global Backup Job

      A Datacenter level backup job was set up to run daily at 1 am for all VMs and containers as follows (this was later replaced with Proxmox Backup Server backups as explained here):

      Proxmox Backup Job

      The following retention policy was used:

      Proxmox Backup Retention Policy

      Node File Backups

We installed the Proxmox Backup Client on each of our server nodes and created a cron-scheduled script that backs up the files on each node to our Proxmox Backup Server each day. The following video explains how to install and configure the PBS client.

      For the installation to work properly, the locations of the PBS repository and access credentials must be set in both the script and the login bash shell. We also need to create a cron job to run the backup script daily.
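Our script isn't reproduced here, but a minimal sketch of the approach looks like this (the repository name, datastore, credentials, and paths are placeholders, not our actual values) –

#!/bin/bash
# pbs-host-backup.sh - back up this node's files to PBS (sketch)

# PBS repository and credentials - replace with your own values
export PBS_REPOSITORY='backup@pbs@pbs.your-domain.net:PBS-backups'
export PBS_PASSWORD='<PBS password or API token secret>'
export PBS_FINGERPRINT='<PBS certificate fingerprint>'

# Back up the node's root filesystem as a pxar archive named after the host
proxmox-backup-client backup root.pxar:/ --backup-id "$(hostname)"

# Example /etc/cron.d entry to run the script daily at 4 am:
# 0 4 * * * root /usr/local/bin/pbs-host-backup.sh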

      Setup SSL Certificates

      We use the procedure in the video below to set up signed SSL certificates for our three server nodes and the Proxmox Backup server.

      This approach uses a Let’s Encrypt DNS-01 challenge via Cloudflare DNS to authenticate with Let’s Encrypt and obtain a signed certificate for each server node in the cluster and for PBS.

      Setup SSH Keys

A public/private key pair is created and set up for Proxmox VE and all VMs and LXCs to ensure secure SSH access. The following procedure is used to do this. The public keys are installed on each server using the ssh-copy-id username@host command.
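For example (the key type, comment, and host name below are illustrative) –

# Generate a key pair on your admin machine
ssh-keygen -t ed25519 -C "homelab admin"

# Install the public key on each Proxmox node, VM, or LXC
ssh-copy-id root@pve1.your-domain.net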

      Setup Remote Syslog

      Sending logs to a remote syslog server requires the manual installation of the rsyslog service as follows –

      # apt update && apt install rsyslog

      Once the service is installed, you can create the following file to set the address of your remote Syslog server –

      # vi /etc/rsyslog.d/remote-logging.conf
      ...
      # Setup remote syslog
      *.*  @syslog.mydomain.com:514
      ...
      # systemctl restart rsyslog

      High Availability (HA)

      Proxmox can support automatic failover (High Availability) of VMs and Containers to any node in a cluster. The steps to configure this are:

      • Move the virtual disks for all VMs and LXC containers to shared storage. In our case, this is PVE-storage. Note that our TrueNAS VM must run on pve1 as it uses disks that are only available on pve1.
      • Enable HA for all VMs and LXCs (except TrueNAS)
      • Setup an HA group to govern where the VMs and LXC containers migrate to if a node fails

      Cluster Failover Configuration – VMs & LXCs

We generally run all of our workloads on pve1 since it is our cluster's highest-performance, highest-capacity node. Should this node fail, we want the pve1 workloads to be distributed evenly between the pve2 and pve3 nodes. We can do this by setting up an HA Failover Group as follows:

      HA Failover Group Configuration

The nofailback option is set so workloads don't automatically migrate back to pve1 when we manually migrate them to other nodes to support maintenance operations.
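The same group and resource assignments can also be made from the CLI with ha-manager; the group name, node weights, and VM ID below are examples only –

# Create the failover group: spread failed-over guests across pve2/pve3
# and do not fail back automatically
ha-manager groupadd pve1-failover --nodes "pve2:1,pve3:1" --nofailback true

# Add a VM to HA and assign it to the group
ha-manager add vm:101 --group pve1-failover --state started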

      Windows Virtual Machines

One of our Homelab environment's goals is to run our Windows desktop OSs on virtual machines. This lets us easily access standard OS environments such as Microsoft Windows from a web browser.

      Windows Virtual Machine Setup

      We use the following procedure to set up our Windows VMs –

      The following ISO images are downloaded to the PVE-templates Share on our Proxmox cluster –

      Each Windows VM is created with the following options (all other choices used the defaults) –

• Name the VM windows-<machine name>
      • Use the Windows 10 desktop ISO image.
      • Add an additional drive for the VirtIO drivers and use the Windows VirtIO Driver ISO image.
      • The Type/Version is set to Microsoft Windows 10.
      • Check the Qemu Agent option (we’ll install this later).
      • Set the SCSI Controller to VirtIO SCSI.
      • Use PVE-storage and create a 128 GB disk
      • Set Discard and SSD Emulation options
      • Set Cache to Write Back
      • Allocate 4 CPU Cores
• Allocate 16 GB of memory with a 4 GB minimum (Ballooning Device enabled)
      • Run on HS Services Network, use Intel E1000 NIC, set VLAN Tag to 10

      Start the VM and install Windows. Some notes include –

      • Enter the Windows 10 Pro product key
      • Use the Windows Driver disk to load a driver for the disk
      • Once Windows is up, use Windows Driver disk to install drivers for devices that did not install automatically. You can find the correct driver by searching for drivers from the root of the Windows Driver disk.
      • Install the qemu guest agent from the Windows Driver disk. It’s in the guest agent directory.
      • Set the Computer name, Workgroup, and Domain name for the VM.
      • Do a Windows update and install all updates next.

      Setup Windows applications as follows –

      • Install Chrome browser
      • Install Dashlane password manager
      • Install Dropbox and Synology Drive
      • Install Start10
      • Install Directory Opus File Manager
      • Install PDF Viewer
      • Install Printers
      • Install media tools, VLC Player, and QuickTime Player
      • Install Network utilities, WebSSH
      • Install windows gadgets
      • Install DXlab, tqsl, etc.
      • Install Microsoft Office and Outlook
      • Install SmartSDR
      • Install WSJT-X, JTDX, JTalert
• Install PSTRotator, Amplifier app
      • Install RealVNC
      • Install Benchmarks (Disk, Graphics, Geekbench)
      • Install Folding at Home
      • Need a sound driver for audio (Windows Remote Desktop or RealVNC).

      Docker in an LXC Container

Using this procedure, we set up Docker using the Turnkey Core LXC container (Debian Linux).

      Docker LXC Container Configuration

      The container is created with the following resources:

      • 4 CPUs
• 4 GB (4096 MB) of Memory
      • 8 GB SSD Storage (Shared PVE-storage)
      • LS Services Network

      Portainer Edge Agent

      We manage Docker using a single Portainer instance.

      Portainer Management Interface

      This is done via the Portainer Edge Agent. The steps to install the Portainer Edge Agent are as follows:

      1. Create a new environment on the Portainer Host
        • Select and use the Portainer edge agent choice
        • BE CAREFUL TO SELECT THE PORTAINER HOST URL, NOT THE AGENT when setting up
2. Carefully copy the EDGE_ID and the EDGE_KEY fields into the script in the next step, which is used to spin up the edge agent
      3. Install the Portainer Edge Agent on the  docker container as follows:
      docker run -d \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /var/lib/docker/volumes:/var/lib/docker/volumes \
      -v /:/host \
      -v portainer_agent_data:/data \
      --restart always \
      -e EDGE=1 \
      -e EDGE_ID=<replace with id from portainer> \
      -e EDGE_KEY=<replace with key from portainer> \
      -e EDGE_INSECURE_POLL=1 \
      --name portainer_edge_agent \
      portainer/agent:latest

      Mail Forwarding

      More work needs to be done here. Here’s some information to help get started –

      Proxmox Backup Server

      This page covers the installation of the Proxmox Backup Server (PBS) in our HomeLab. We run the PBS in a VM on our server and store backups in shared storage on one of our NAS drives.

      We are running a Proxmox Test Node and a Raspberry Pi Proxmox Cluster that can access our Proxmox Backup Server (PBS). This approach enables backups and transfers of VMs and LXCs between our Production Proxmox Cluster, our Proxmox Test Node, and Raspberry Pi Proxmox Cluster.

      Proxmox Backup Server Installation

      We used the following procedure to install PBS on our server.

      PBS was created using the recommended VM settings in the video. The VM is created with the following resources:

      • 4 CPUs
• 4 GB (4096 MB) of Memory
      • 32 GB SSD Storage (Shared PVE-storage)
      • HS Services Network

      Once the VM is created, the next step is to run the PBS installer.

      After the PBS install is complete, PBS is booted, the QEMU Guest Agent is installed, and the VM is updated using the following commands –

      # apt update
      # apt upgrade
      # apt-get install qemu-guest-agent
      # reboot

      PBS can now be accessed via the web interface using the following URL –

      https://<PBS VM IP Address>:8007

      Create a Backup Datastore on a NAS Drive

      The steps are as follows –

      • Install CIFS utils
# Install CIFS utilities on Proxmox
      apt install cifs-utils
      • Create  a mount point for the NAS PBS store
      mkdir /mnt/pbs-store
      • Create a Samba credentials file to enable logging into NAS share
      vi /etc/samba/.smbcreds
      ...
      username=<NAS Share User Name>
      password=<NAS Share Password>
      ...
      chmod 400 /etc/samba/.smbcreds
      • Test mount the NAS share in PBS  and make a directory to contain the PBS backups
mount -t cifs \
    -o rw,vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup \
    //<nas-#>.anita-fred.net/PBS-backups \
    /mnt/pbs-store
mkdir /mnt/pbs-store/pbs-backups
      • Make the NAS share mount permanent by adding it to /etc/fstab
      vi /etc/fstab
      ...after the last line add the following line
      # Mount PBS backup store from NAS
      //nas-#.anita-fred.net/PBS-backups /mnt/pbs-store cifs vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup,defaults 0 0
      • Create a datastore to hold the PBS backups in the Proxmox Backup Server as follows. The datastore will take some time to create (be patient).

      PBS Datastore Configuration

      PBS Datastore Prune Options

      • Add the PBS store as storage at the Proxmox datacenter level. Use the information from the PBS dashboard to set the fingerprint.

      PBS Storage in Proxmox VE

      • The PBS-backups store can now be used as a target in Proxmox backups. NOTE THAT YOU CANNOT BACK UP THE PBS VM TO PBS-BACKUPS.

Proxmox Cluster/Node | PBS Datastore | Purpose
Production Cluster   | PBS-backups   | Backups for 3-node production cluster
Raspberry Pi Cluster | RPI-backups   | Backups for 3-node Raspberry Pi Cluster
NUC Test Node        | NUC-backups   | Backups for our Proxmox Test Node

      As the table above indicates, additional datastores are created for our Raspberry Pi Cluster and our NUC Proxmox Test Node.

      Setup Boot Delay

      The NFS share for the Proxmox Backup store needs time to start before the Backup server starts on boot. This can be set for each node under System/Options/Start on Boot delay. A 30-second delay seems to work well.

      Setup Backup, Pruning, and Garbage Collection

      The overall schedule for Proxmox backup operations is as follows:

• 02:00 – Run a PVE Backup of the PBS Backup Server VM from our Production Cluster (run in suspend mode; stop mode causes problems)
• 02:30 – Run PBS Backups in all Clusters/Nodes on all VMs and LXCs EXCEPT for the PBS Backup Server VM
• 03:00 – Run Pruning on all PBS datastores
• 03:30 – Run Garbage Collection on all PBS datastores
• 05:00 – Verify all backups in all PBS datastores
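
The backup jobs themselves are scheduled in the PVE web UI, and pruning, garbage collection, and verification schedules are set per datastore in PBS. For reference, a garbage-collection run can also be started and checked manually from the PBS shell; a short sketch using the PBS-backups datastore from above:

# Start a garbage-collection run and check its status
proxmox-backup-manager garbage-collection start PBS-backups
proxmox-backup-manager garbage-collection status PBS-backups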

      Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, modify /etc/chrony/chrony.conf to use our servers for the pool. This must be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.

      Backup Temp Directory

      Proxmox backups use vzdump to create compressed backups. By default, backups use /var/tmp, which lives on the boot drive of each node in a Proxmox Cluster. To ensure adequate space for vzdump and reduce the load on each server’s boot drive, we have configured a temp directory on the local ZFS file systems on each of our Proxmox servers. The tmp directory configuration needs to be done on each node in the cluster (details here). The steps to set this up are as follows:

      # Create a tmp directory on local node ZFS stores
      # (do this once for each server in the cluster)
      cd /zfsa
      mkdir tmp
      
      # Turn on and verify ACL for ZFSA store
      zfs get acltype zfsa
      zfs set acltype=posixacl zfsa
      zfs get acltype zfsa
      
# Configure vzdump to use the ZFS tmp dir
      # add/set tmpdir as follows 
      # (do on each server)
      cd /etc
      vi vzdump.conf
      tmpdir: /zfsa/tmp
      :wq

      Proxmox VE

      This page covers the Proxmox VE install and setup on our server. You can find a great deal of information about Proxmox in the Proxmox VE Administrator’s Guide.

      Proxmox Installation/ZFS Storage

      Proxmox was installed on our server using the steps in the following video:

The Proxmox boot images are installed on NVMe drives (ZFS RAID1 on our Dell Server BOSS card, or ZFS single-disk on the NVMe drives in our Supermicro servers). This video also covers the creation of a ZFS storage pool and filesystem. A single filesystem called zfsa was set up as RAID10 with lz4 compression across four SSD disks on each server.
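
For reference, the zfsa pool and filesystem described above can also be created from a node's shell; a minimal sketch of a four-SSD RAID10 (striped mirrors) layout with placeholder disk names:

# Create a striped-mirror (RAID10) pool named zfsa from four SSDs
# (use /dev/disk/by-id paths for the real disks)
zpool create zfsa mirror sda sdb mirror sdc sdd

# Enable lz4 compression and confirm the layout
zfs set compression=lz4 zfsa
zpool status zfsa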

      A Community Proxmox VE License was purchased and installed for each node. The Proxmox installation was updated on each server using the Enterprise Repository.

      Linux Configuration

      I like to install a few additional tools to help me manage our Proxmox installations. They include the nslookup and ifconfig commands and the tmux terminal multiplexor. The commands to install these tools are found here.
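
On a Debian-based Proxmox node, these tools typically come from the following packages (nslookup from dnsutils, ifconfig from net-tools):

# Install DNS tools, legacy networking tools, and tmux
apt update
apt install dnsutils net-tools tmux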

      Cluster Creation

      With these steps done, we can create a 3-node cluster. See our Cluster page for details.
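
For reference, cluster creation boils down to a couple of pvecm commands; a minimal sketch with a hypothetical cluster name:

# On the first node: create the cluster
pvecm create my-cluster

# On each additional node: join using the first node's IP address
pvecm add <first-node-ip>

# Check cluster membership and quorum
pvecm status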

      ZFS Snapshots

      Creating ZFS snapshots of the Proxmox installation can be useful before making changes. This enables rollback to a previous version of the filesystem should any changes need to be undone. Here are some useful commands for this purpose:

# List existing snapshots and datasets
zfs list -t snapshot
zfs list

# Create, roll back to, and destroy a snapshot of the node's root dataset
zfs snapshot rpool/ROOT/<node-name>@<snap-name>
zfs rollback rpool/ROOT/<node-name>@<snap-name>
zfs destroy rpool/ROOT/<node-name>@<snap-name>

Be careful to select the proper dataset: snapshots of the pool that contains the dataset don't support this use case. Also, you can only roll back directly to the latest snapshot. If you want to roll back to an earlier snapshot, you must first destroy all of the later snapshots.

      In the case of a Proxmox cluster node, the shared files in the associated cluster filesystem will not be included in the snapshot. You can learn more about the Proxmox cluster file system and its shared files here.

      You can view all of the snapshots inside the invisible /.zfs directory on the host filesystem as follows:

      # cd /.zfs/snapshot/<name>
      # ls -la

      Local NTP Servers

      We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, we need to modify /etc/chrony/chrony.conf to use our servers for the pool. This needs to be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.

      The first step before following the configuration procedures above is to install chrony on each node –

      apt install chrony
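
A minimal sketch of the change, assuming hypothetical local NTP server names in place of the default Debian pool entry:

# In /etc/chrony/chrony.conf, replace the default "pool ..." line with
# your local NTP servers (names below are placeholders)
server ntp1.your-domain.net iburst
server ntp2.your-domain.net iburst

# Apply the change and verify the sources
systemctl restart chrony
chronyc sources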

      Mail Forwarding

      We used the following procedure to configure postfix to support forwarding e-mail through smtp2go. Postfix does not seem to work with passwords containing a $ sign. A separate login was set up in smtp2go for forwarding purposes.

      Some key steps in the process include:

      # Install postfix and the supporting modules
      # for smtp2go forwarding
      sudo apt-get install postfix
      sudo apt-get install libsasl2-modules
      
      # Install mailx
      sudo apt -y install bsd-mailx
      sudo apt -y install mailutils
      
      # Run this command to configure postfix
      # per the procedure above
      sudo dpkg-reconfigure postfix
      
      # Use a working prototype of main.cf to edit
      sudo vi /etc/postfix/main.cf
      
      # Setup /etc/mailname -
      #   use version from working server
      #   MAKE SURE mailname is lower case/matches DNS
      sudo uname -n > /etc/mailname
      
      # Restart postfix
      sudo systemctl reload postfix
      sudo service postfix restart
      
      # Reboot may be needed
      sudo reboot
      
      # Test
      echo "Test" | mailx -s "PVE email" <email addr>

      vGPU

Our servers each include an Nvidia Tesla P4 GPU, which can be shared using Nvidia's vGPU software. The information on how to set up Proxmox for vGPU may be found here. This procedure also explains how to enable IOMMU for GPU pass-through (not sharing). We do not have IOMMU set up on our servers at this time.

      You’ll need to install the git command and the cc compiler to use this procedure. This can be done with the following commands –

      # apt update
      # apt install git
      # apt install build-essential

      Now you can follow the procedure here. Be sure to include the steps to enable IOMMU. I downloaded and installed the 6.4 vGPU driver from the Nvidia site and did a final reboot of the server.
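
For reference, enabling IOMMU on an Intel system comes down to adding the kernel flags below to /etc/default/grub and regenerating the GRUB configuration (AMD systems use amd_iommu=on instead); the linked procedure covers the full details:

# In /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# Apply and reboot
update-grub
reboot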

      vGPU Types

The vGPU drivers support a number of GPU types, and you'll want to select the appropriate one in each VM. Note that mixing vGPU sizes on a GPU is not allowed (i.e., if one vGPU uses 2 GB of memory, they all must). The following table shows the available types (this data can be obtained by running mdevctl types on your system).

Q Profiles - Not Good for OpenGL/Games
vGPU Type | Name | Memory | Instances
nvidia-63 | GRID P4-1Q | 1 GB | 8
nvidia-64 | GRID P4-2Q | 2 GB | 4
nvidia-65 | GRID P4-4Q | 4 GB | 2
nvidia-66 | GRID P4-8Q | 8 GB | 1

A Profiles - Windows VMs
vGPU Type | Name | Memory | Instances
nvidia-67 | GRID P4-1A | 1 GB | 8
nvidia-68 | GRID P4-2A | 2 GB | 4
nvidia-69 | GRID P4-4A | 4 GB | 2
nvidia-70 | GRID P4-8A | 8 GB | 1

B Profiles - Linux VMs
vGPU Type | Name | Memory | Instances
nvidia-17 | GRID P4-1B | 1 GB | 8
nvidia-243 | GRID P4-1B4 | 1 GB | 8
nvidia-157 | GRID P4-2B | 2 GB | 4
nvidia-243 | GRID P4-2B4 | 2 GB | 4
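
Once the driver is installed, mdevctl types lists the profiles above, and a profile can be assigned to a VM by adding a hostpci entry with an mdev type. A hypothetical sketch with placeholder VM ID and PCI address (pick the profile from the table that matches your VM's OS and memory needs):

# List the supported vGPU (mediated device) types on this host
mdevctl types

# Example: assign a 2 GB profile to a VM
# (replace <vmid> and the PCI address, which you can find with lspci)
qm set <vmid> -hostpci0 0000:3b:00.0,mdev=nvidia-64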

      Disabling Enterprise Repository

      Proxmox No Subscription Repositories
      Proxmox No Subscription Repositories

      We recommend purchasing at least a Community Support License for production Proxmox servers. We are running some test servers here and we have chosen to use the No Subscription repositories for these systems. The following videos explain how to configure the No Subscription repositories. These procedures work with Proxmox 8.3.

      Explains how to configure the No Subscription repositories

      Disable the No Subscription warning messages
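
For Proxmox 8.x (Debian bookworm), the repository changes shown in the videos amount to roughly the following sketch; adjust the file names to match your installation:

# Disable the enterprise repositories
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list

# Add the No Subscription repository and refresh the package lists
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update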

      Problems with Out-of-Date Keys on Server Nodes

      I have occasionally encountered issues with SSH keys becoming outdated on our servers. The solution is to run the following commands on all servers. A reboot is also sometimes necessary.

# Update certs and reload PVE proxy
      pvecm updatecerts -F && systemctl restart pvedaemon pveproxy
      
      # Reboot if needed
      reboot

      DNS Performance Improvements

Some Proxmox components can issue DNS lookups at high rates. Some things that help with this include:

• Using IP addresses instead of DNS names for NFS shares in /etc/pve/storage.cfg
• Setting high-use DNS names like ‘pbs.your-domain‘ in /etc/hosts (you’ll need to do this for each node in your cluster; see the example after this list)
      • If you use the Metrics Server feature in Datacenter, you’ll want to use an IP address instead of a DNS name to access your metrics database.
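
For example, a hypothetical /etc/hosts entry for a PBS instance looks like this (the IP address and name are placeholders):

# Add to /etc/hosts on each PVE node
192.168.1.50    pbs.your-domain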

      Welcome To Our Home Lab

      Home Network Dashboard
      Home Network Dashboard

      This site is dedicated to documenting the setup, features, and operation of our Home Lab. Our Home Lab consists of several different components and systems, including:

      • A high-performance home network with redundant Internet connections
      • A storage system that utilizes multiple NAS devices
      • Multiple enterprise-grade servers in a high-availability cluster
      • Applications, services, and websites
      • Powered via dual-UPS protected power feeds and a backup generator

      Home Network

      Home Network Core, High-Availability Storage and Secondary Server Rack
      Home Network Core, High-Availability Storage, and Secondary Server Rack

Our Home Network uses a two-tiered structure with a core based upon high-speed, 25 GbE-capable aggregation switches and optically connected edge switches. We use Ubiquiti UniFi equipment throughout. We have installed multiple OM4 multi-mode fiber links from the core to each room in our house. The speed of these links ranges from 1 Gbps to 25 Gbps, with most connections running as dual-fiber LACP LAG links.

      We have redundant Internet connections which include 1 Gbps optical fiber and a 400 Mbps/12 Mbps cable modem service.

Our Network Rack also includes two Supermicro servers and a pair of Synology NAS drives in a high-availability configuration. These drives provide solid-state storage for Proxmox virtual machine disks and Docker volumes.

      Main Server and Storage

      Main Server Rack and NAS Storage Rack
      Main Server Rack and NAS Storage Rack

      Our Server Rack houses our main Dell Server and several of our Synology NAS Drives. It features redundant UPS power and includes rack-mounted Raspberry Pi systems which provide several different functions in our Home Lab.

      Our servers run Proxmox in a high-availability configuration. In total, we have 104 CPUs and 1 TB of RAM available in our primary Proxmox cluster.

This rack includes an all-SSD, high-speed NAS that we use for video editing. It also includes a NAS that stores our video and audio media collection and provides access to this content throughout our home and on the go when we travel.

      High Capacity Storage System

      Main NAS Storage Rack
      Main NAS Storage Rack

      Our NAS Rack provides high-capacity storage via several Synology NAS Drives. It features redundant UPS power and includes additional rack-mounted Raspberry Pi systems which provide several different functions in our Home Lab. This rack also houses our Raspberry Pi NAS and NAS 2 systems.

      Our total storage capacity is just over 1 Petabyte. Our setup also provides approximately 70 TB of high-speed solid-state storage.

      Power Over Ethernet (PoE)

      Main Power Over Ethernet (PoE) Switch

      We make use of Power Over Ethernet (PoE) switches at many edge locations in our network to power devices through their ethernet cables.

      The switch shown above is located centrally where all of the CAT6 ethernet connections in our home terminate. It powers our Surveillance Cameras, IP Telephones, Access Points, etc.

      Home Media System

      Our Home Theater
      Our Home Theater

We use our Home Network and NAS system to provide a Home Media System. Our Media System sources content from streaming services as well as video and audio stored on our Media NAS drive, and makes it viewable on any TV or smart device in our home. We can also view our content remotely via the Internet when traveling or in our cars.

      Surveillance System

      Synology Surveillance System
      Synology Surveillance Station

      We use Synology Surveillance Station running on one of our NAS drives to support a variety of IP cameras throughout our home. This software uses the host NAS drive for storing recordings and provides image recognition and other security features.

      Telephone System

      Telephone System Dashboard
      Telephone System Dashboard

We use Ubiquiti UniFi Talk to provide managed telephone service within our home.

Ubiquiti IP Telephone

      This system uses PoE-powered IP Telephones which we have installed throughout our home.

      Applications, Services, and Websites

      We are hosting several websites, including:

      Set-up information for our self-hosted sites may be found here.