
Expand Proxmox VM Storage

There are many reasons why you may need to expand Proxmox VM storage. Growing a VM's virtual disk in Proxmox is always challenging: the process requires several steps, and mistakes can result in data loss. The video above provides an easy-to-understand guide on how to expand a VM disk.

Before beginning, you should make a backup. It also helps to format your VM disks using LVM thin provisioning.
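At a high level, the process grows the virtual disk on the Proxmox host and then grows the partition and filesystem inside the guest. A minimal sketch, assuming VM ID 100 with its disk attached as scsi0 and an ext4 filesystem on /dev/sda1 (adjust all of these for your VM):

```shell
# On the Proxmox host: grow the virtual disk by 10 GiB
qm resize 100 scsi0 +10G

# Inside the Linux guest: grow the partition, then the filesystem
growpart /dev/sda 1
resize2fs /dev/sda1
```

The video walks through the same steps in detail, including the in-guest portion.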

Proxmox Staging Cluster

Proxmox Staging Node – AMD NUC Computer

We have built a Proxmox staging cluster using a pair of Intel NUC computers and an AMD NUC computer for staging, testing, and experimental purposes.

Node Hardware

The hardware configuration for this system is as follows:

Node | Model | CPU | RAM | Storage | Network
pve1 | ASUS NUC 15 Pro Tall | Intel Core Ultra 7 255H 3.0 GHz (16 CPUs) | 128 GB DDR5 | 1 TB M.2 NVMe | 3 x 2.5 GbE
pve2 | ASUS NUC 15 Pro Tall | Intel Core Ultra 7 255H 3.0 GHz (16 CPUs) | 128 GB DDR5 | 1 TB M.2 NVMe | 3 x 2.5 GbE
pve3 | GMKtec K8 Plus Mini PC | AMD Ryzen 7 8845HS 3.8 GHz (16 CPUs) | 96 GB DDR5 | 1 TB M.2 NVMe | 3 x 2.5 GbE

Each NUC has an onboard 2.5 GbE Ethernet interface, with two Plugable 2.5 GbE USB-C Ethernet adapters added for additional network interfaces.

Proxmox Installation/ZFS Storage

Proxmox installation is straightforward. We used the same procedure as our Production Cluster.

Networking Configuration

Virtual Bridge | Purpose | VLAN | Speed | Adapter
vmbr0 (Mgmt) | Proxmox Management | Computers | 2.5 GbE | LAN on NUC
vmbr1 (Services) | Services | All VLANs | 2 x 2.5 GbE (LACP bond, shared) | 2 x 2.5 GbE USB-C Adapters
vmbr2 (Storage) | HA Storage for VMs/LXCs | Storage | 2 x 2.5 GbE (LACP bond, shared) | 2 x 2.5 GbE USB-C Adapters

The networking configuration on our test node mirrors the setup in our Production Cluster. The table above outlines the Staging Cluster node networking setup. A single LACP bond (2 x 2.5 GbE) is shared between the Services and Storage bridges.
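For reference, a bond of this type is declared in /etc/network/interfaces on each node. The sketch below shows the general shape; the slave interface names are assumptions (the actual USB NIC names will differ on your hardware):

```shell
# /etc/network/interfaces (excerpt) - interface names are assumptions
auto bond0
iface bond0 inet manual
    bond-slaves eth1 eth2          # the two USB-C 2.5 GbE adapters
    bond-miimon 100
    bond-mode 802.3ad              # LACP
    bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```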

Storage Configuration

Staging Node Storage Configuration

The table above shows the storage configuration for our Staging Cluster Nodes. PVE-Storage is implemented on our high-availability NAS.

Proxmox Backup Server Configuration

Backups for our Staging Cluster Nodes mirror the configuration and scheduling of Backups on our production Cluster (more info here).

Additional Configuration

The following additional items are configured for our Staging cluster nodes:

  • Community License to enable access to Enterprise Repositories
  • SSL Certificate from Lets Encrypt
  • Postfix for e-mail forwarding
  • Clock sync via NTP
  • Monitoring via built-in InfluxDB and Grafana

Proxmox Monitoring

Proxmox Cluster Metrics

We set up a Grafana dashboard to implement Proxmox monitoring. The main components in our monitoring stack are Proxmox's built-in metrics export, InfluxDB, and Grafana.

The following sections cover the setup and configuration of our monitoring stack.

Proxmox Monitoring Setup

The following video explains how to set up a Grafana dashboard for Proxmox. This installation uses the monitoring function built into Proxmox to feed data to InfluxDB.
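Proxmox stores its metric-server settings in /etc/pve/status.cfg (also editable under Datacenter → Metric Server). A sketch for an InfluxDB v2 target; the section name, server address, organization, bucket, and token below are placeholders, not values from our setup:

```shell
# /etc/pve/status.cfg (sketch - placeholder values)
influxdb: homelab-influx
        server 192.168.1.50
        port 8086
        influxdbproto http
        organization homelab
        bucket proxmox
        token <InfluxDB API token>
```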

And here is a video that explains setting up self-signed certificates –


Configuring Self-Signed Certificates

We are using the Proxmox [Flux] dashboard with our setup.

Proxmox Backup Server

This page covers the installation of the Proxmox Backup Server (PBS) in our HomeLab. We run the PBS in a VM on our server and store backups in shared storage on one of our NAS drives.

We are running a Proxmox Test Node and a Raspberry Pi Proxmox Cluster that can access our Proxmox Backup Server (PBS). This approach enables backups and transfers of VMs and LXCs between our Production Proxmox Cluster, our Proxmox Test Node, and our Raspberry Pi Proxmox Cluster.

Proxmox Backup Server Installation

We used the following procedure to install PBS on our server.

PBS was created using the recommended VM settings in the video. The VM is created with the following resources:

  • 4 CPUs
  • 4096 MB Memory
  • 32 GB SSD Storage (Shared PVE-storage)
  • HS Services Network

Once the VM is created, the next step is to run the PBS installer.

After the PBS install is complete, PBS is booted, the QEMU Guest Agent is installed, and the VM is updated using the following commands –

# apt update
# apt upgrade
# apt-get install qemu-guest-agent
# reboot

PBS can now be accessed via the web interface using the following URL –

https://<PBS VM IP Address>:8007

Create a Backup Datastore on a NAS Drive

The steps are as follows –

  • Install CIFS utils
# Install CIFS utilities for mounting the NAS SMB share
apt install cifs-utils
  • Create a mount point for the NAS PBS store
mkdir /mnt/pbs-store
  • Create a Samba credentials file to enable logging into NAS share
vi /etc/samba/.smbcreds
...
username=<NAS Share User Name>
password=<NAS Share Password>
...
chmod 400 /etc/samba/.smbcreds
  • Test mount the NAS share in PBS and make a directory to contain the PBS backups
mount -t cifs \
    -o rw,vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup \
    //<nas-#>.anita-fred.net/PBS-backups \
    /mnt/pbs-store
mkdir /mnt/pbs-store/pbs-backups
  • Make the NAS share mount permanent by adding it to /etc/fstab
vi /etc/fstab
...after the last line add the following line
# Mount PBS backup store from NAS
//nas-#.anita-fred.net/PBS-backups /mnt/pbs-store cifs vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup,defaults 0 0
  • Create a datastore to hold the PBS backups in the Proxmox Backup Server as follows. The datastore will take some time to create (be patient).

PBS Datastore Configuration

PBS Datastore Prune Options
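As an alternative to the web UI, the datastore can also be created from the PBS shell; a sketch using the mount point and datastore name from the steps above:

```shell
# Create a PBS datastore backed by the NAS mount
proxmox-backup-manager datastore create PBS-backups /mnt/pbs-store/pbs-backups
```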

  • Add the PBS store as storage at the Proxmox datacenter level. Use the information from the PBS dashboard to set the fingerprint.

PBS Storage in Proxmox VE

  • The PBS-backups store can now be used as a target in Proxmox backups. Note: you cannot back up the PBS VM to PBS-backups.

Proxmox Cluster/Node | PBS Datastore | Purpose
Production Cluster | PBS-backups | Backups for 3-node production cluster
Raspberry Pi Cluster | RPI-backups | Backups for 3-node Raspberry Pi Cluster
NUC Test Node | NUC-backups | Backups for our Proxmox Test Node

As the table above indicates, additional datastores are created for our Raspberry Pi Cluster and our NUC Proxmox Test Node.

Setup Boot Delay

The network share for the Proxmox Backup store needs time to mount before the Backup Server starts on boot. This delay can be set for each node under System/Options/Start on boot delay. A 30-second delay seems to work well.
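The same delay can also be set from a node shell; a sketch (we believe the UI option maps to the startall-onboot-delay node setting):

```shell
# Delay auto-started guests by 30 seconds after boot
pvenode config set --startall-onboot-delay 30
```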

Setup Backup, Pruning, and Garbage Collection

The overall schedule for Proxmox backup operations is as follows:

  • 02:00 – Run a PVE Backup on the PBS Backup Server VM from our Production Cluster (run in suspend mode; stop mode causes problems)
  • 02:30 – Run PBS Backups in all Clusters/Nodes on all VMs and LXCs EXCEPT for the PBS Backup Server VM
  • 03:00 – Run pruning on all PBS datastores
  • 03:30 – Run garbage collection on all PBS datastores
  • 05:00 – Verify all backups in all PBS datastores
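The prune and garbage-collection jobs are scheduled in the PBS UI, but the equivalent operations can also be run on demand from the PBS shell; a sketch using a datastore name from the table above:

```shell
# Start and then check garbage collection on a datastore
proxmox-backup-manager garbage-collection start PBS-backups
proxmox-backup-manager garbage-collection status PBS-backups
```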

Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, modify /etc/chrony/chrony.conf to use our servers for the pool. This must be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.
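The edit amounts to replacing the default Debian pool line with local servers; a sketch with placeholder hostnames:

```shell
# /etc/chrony/chrony.conf (excerpt)
# Comment out the default pool:
#pool 2.debian.pool.ntp.org iburst
# ...and point chrony at the local NTP servers instead:
server ntp1.example.lan iburst
server ntp2.example.lan iburst
```

Restart chrony afterward (systemctl restart chrony) and confirm the sources with chronyc sources.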

Backup Temp Directory

Proxmox backups use vzdump to create compressed backups. By default, backups use /var/tmp, which lives on the boot drive of each node in a Proxmox Cluster. To ensure adequate space for vzdump and reduce the load on each server’s boot drive, we have configured a temp directory on the local ZFS file systems on each of our Proxmox servers. The tmp directory configuration needs to be done on each node in the cluster (details here). The steps to set this up are as follows:

# Create a tmp directory on local node ZFS stores
# (do this once for each server in the cluster)
cd /zfsa
mkdir tmp

# Turn on and verify ACL for ZFSA store
zfs get acltype zfsa
zfs set acltype=posixacl zfsa
zfs get acltype zfsa

# Configure vzdump to use the ZFS tmp dir
# add/set tmpdir as follows
# (do on each server)
cd /etc
vi vzdump.conf
tmpdir: /zfsa/tmp
:wq