Samba File Server


We have quite a bit of high-speed SSD storage available on the pve1 server in our Proxmox cluster. We made this storage available as a NAS drive using the Turnkey Samba File Server.

Installing the Turnkey Samba File Server

We installed the Turnkey File Server in an LXC container that runs on our pve1 server. This LXC will not be movable, as it is tied to SSD disks that are only available on pve1. The first step is to create a ZFS file system (zfsb) on pve1 to hold the LXC boot drive and share storage.
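A minimal sketch of that step is shown below, assuming a mirrored pool named zfsb built from two placeholder SSD devices; the actual pool layout and storage name on pve1 may differ:

# Create a ZFS pool on the pve1 SSDs (device names are placeholders)
zpool create -o ashift=12 zfsb mirror /dev/sdX /dev/sdY
zfs set compression=lz4 zfsb

# Register it in Proxmox as storage restricted to pve1
pvesm add zfspool zfsb_mp --pool zfsb --nodes pve1 --content images,rootdir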

The video below explains the procedure used to set up the File Server LXC and configure Samba shares.

The LXC container for our File Server was created with the following parameters (a sketch of an equivalent pct create command follows the list) –

  • 2 CPUs
  • 1 GB Memory
  • 8 GB Boot Disk in zfsb_mp
  • 8 TB Share Disk in zfsb_mp (mounted as /mnt/shares with PBS backups enabled.)
  • High-speed Services Network, VLAN Tag=10
  • The container is unprivileged
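For reference, a roughly equivalent container could be created from the CLI with something like the sketch below; the VM ID, template filename, and bridge name are placeholders, and we actually created ours through the web UI:

pct create 110 local:vztmpl/debian-11-turnkey-fileserver_17.1-1_amd64.tar.gz \
    --hostname nas-10 \
    --cores 2 --memory 1024 \
    --unprivileged 1 \
    --rootfs zfsb_mp:8 \
    --mp0 zfsb_mp:8192,mp=/mnt/shares,backup=1 \
    --net0 name=eth0,bridge=vmbr0,tag=10,ip=dhcp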

File Server LXC Configuration

The following steps were performed to configure our File Server –

  • Set the system name to nas-10
  • Configured postfix to forward email
  • Set the timezone
  • Installed standard tools
  • Updated the system via apt update && apt upgrade
  • Installed SSL certificates using a variation of the procedures here and here.
  • Set up Samba users, groups, and shares per the video above (a sample share definition is sketched below)
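As a rough illustration of that last step, the sketch below shows the general pattern; the user, group, and share names are placeholders rather than our actual configuration:

# Create a group and user for the share and set a Samba password
addgroup shareusers
adduser --ingroup shareusers alice
smbpasswd -a alice

# Example share stanza added to /etc/samba/smb.conf
[shared]
    path = /mnt/shares/shared
    valid users = @shareusers
    read only = no

# Reload Samba to pick up the new share
systemctl restart smbd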

Backups

Our strategy for backing up our file server is to run an rsync job via cron inside the File Server LXC container. The rsync job copies the contents of our file shares to one of our NAS drives, which in turn implements a 3-2-1 backup strategy for our data.
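A minimal sketch of such a job is shown below, assuming a placeholder NAS hostname and destination path, and that key-based SSH login is set up for the backup user:

# /etc/cron.d/backup-shares (inside the File Server LXC)
# Copy the file shares to the NAS every night at 02:00
0 2 * * * root rsync -a --delete /mnt/shares/ backup@<nas-name>:/volume1/fileserver-backup/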

Proxmox Backup Server

This page covers the installation of the Proxmox Backup Server in our HomeLab. Our approach will be to run the Proxmox Backup Server (PBS) in a VM on our server and use shared storage on one of our NAS drives to store backups.

Proxmox Backup Server Installation

We used the following procedure to install PBS on our server.

PBS was created using the recommended VM settings in the video. The VM is created with the following resources:

  • 4 CPUs
  • 4096 MB Memory
  • 32 GB SSD Storage (Shared PVE-storage)
  • HS Services Network

Once the VM is created, the next step is to run the PBS installer.

After the PBS install is complete, PBS is booted, the QEMU Guest Agent is installed, and the VM is updated using the following commands –

# apt update
# apt upgrade
# apt-get install qemu-guest-agent
# reboot

PBS can now be accessed via the web interface using the following URL –

https://<PBS VM IP Address>:8007

Create a Backup Datastore on a NAS Drive

The steps are as follows –

  • Install CIFS utils
# Install the CIFS utilities in the PBS VM
apt install cifs-utils
  • Create a mount point for the NAS PBS store
mkdir /mnt/pbs-store
  • Create a Samba credentials file to enable logging into NAS share
vi /etc/samba/.smbcreds
...
username=<NAS Share User Name>
password=<NAS Share Password>
...
chmod 400 /etc/samba/.smbcreds
  • Test mount the NAS share in PBS and make a directory to contain the PBS backups
mount -t cifs \
    -o rw,vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup \
    //<nas-#>.anita-fred.net/PBS-backups \
    /mnt/pbs-store
mkdir /mnt/pbs-store/pbs-backups
  • Make the NAS share mount permanent by adding it to /etc/fstab
vi /etc/fstab
...after the last line add the following line
# Mount PBS backup store from NAS
//nas-#.anita-fred.net/PBS-backups /mnt/pbs-store cifs vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup,defaults 0 0
  • Create a datastore to hold the PBS backups in the Proxmox Backup Server as follows. The datastore will take some time to create (be patient).
PBS Datastore Configuration
PBS Datastore Prune Options
  • Add the PBS store as storage at the Proxmox datacenter level. Use the information from the PBS dashboard to set the fingerprint (a CLI alternative is sketched after this list).
PBS Storage in Proxmox VE
  • The PBS-backups store can now be used as a target in Proxmox backups. NOTE THAT YOU CANNOT BACK UP THE PBS VM TO PBS-BACKUPS.
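The same storage can also be added from a PVE node's command line; the sketch below uses placeholder values for the PBS address, datastore name, password, and fingerprint:

# Add the PBS datastore as storage at the datacenter level
pvesm add pbs PBS-backups \
    --server <PBS VM IP Address> \
    --datastore <datastore name> \
    --username root@pam \
    --password '<PBS password>' \
    --fingerprint '<fingerprint from the PBS dashboard>'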

Setup Boot Delay

The CIFS share for the Proxmox backup store needs time to become available before the Backup Server VM starts on boot. This can be set for each node under System/Options/Start on boot delay. A 30-second delay seems to work well.
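If you prefer the command line, the same node-level setting can be made with pvenode (a sketch, assuming a 30-second delay):

# Delay automatic guest startup on this node by 30 seconds
pvenode config set --startall-onboot-delay 30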

Setup Backup, Pruning, and Garbage Collection

The overall schedule for Proxmox backup operations is as follows (a CLI sketch for the PBS-side schedule appears after the list):

  • 03:00 – Run Pruning on the PBS-backups store
  • 03:30 – Run PBS Backups on all VMs and LXCs EXCEPT for the PBS Backup Server VM
  • 04:00 – Run a standard PVE Backup on the PBS Backup Server VM (run in suspend mode; stop mode causes problems)
  • 04:30 – Run Garbage Collection on the PBS-backups store
  • 05:00 – Verify all backups in the PBS-backups store
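The garbage-collection timing on the PBS side can also be set from the PBS CLI; a sketch, assuming the datastore is named pbs-backups (prune options can be set the same way or in the PBS web UI):

# Set the garbage-collection schedule on the PBS datastore
proxmox-backup-manager datastore update pbs-backups --gc-schedule '04:30'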

Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, modify /etc/chrony/chrony.conf to use our servers for the pool. This must be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.

Backup Temp Directory

Proxmox backups use vzdump to create compressed backups. By default, backups use /var/tmp, which lives on the boot drive of each node in a Proxmox Cluster. To ensure adequate space for vzdump and reduce the load on each server’s boot drive, we have configured a temp directory on the local ZFS file systems on each of our Proxmox servers. The tmp directory configuration needs to be done on each node in the cluster (details here). The steps to set this up are as follows:

# Create a tmp directory on local node ZFS stores
# (do this once for each server in the cluster)
cd /zfsa
mkdir tmp

# Turn on and verify ACL for ZFSA store
zfs get acltype zfsa
zfs set acltype=posixacl zfsa
zfs get acltype zfsa

# Configure vzdump to use the ZFS tmp dir
# add/set tmpdir as follows 
# (do on each server)
cd /etc
vi vzdump.conf
tmpdir: /zfsa/tmp
:wq

Proxmox VE

This page covers the Proxmox VE install and setup on our server. You can find a great deal of information about Proxmox in the Proxmox VE Administrator’s Guide.

Proxmox Installation/ZFS Storage

Proxmox was installed on our server using the steps in the following video:

The Proxmox boot images are installed on NVMe drives (ZFS RAID1 on the BOSS card in our Dell server, or a single-drive ZFS setup on the NVMe drives in our Supermicro servers). This video also covers the creation of a ZFS storage pool and filesystem. A single filesystem called zfsa was set up using RAID10 (striped mirrors) and lz4 compression across four SSD disks on each server.
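A sketch of that pool creation, with placeholder device names for the four SSDs:

# Create a RAID10 (striped mirrors) pool from four SSDs and enable lz4 compression
zpool create -o ashift=12 zfsa mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zfs set compression=lz4 zfsa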

A Community Proxmox VE License was purchased and installed for each node. The Proxmox installation was updated on each server using the Enterprise Repository.

Linux Configuration

I like to install a few additional tools to help me manage our Proxmox installations. They include the nslookup and ifconfig commands and the tmux terminal multiplexor. The commands to install these tools are found here.
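On Debian-based Proxmox nodes, these come from the dnsutils, net-tools, and tmux packages:

# nslookup is provided by dnsutils, ifconfig by net-tools
apt install dnsutils net-tools tmux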

Cluster Creation

With these steps done, we can create a 3-node cluster. See our Cluster page for details.
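The basic commands are sketched below; the cluster name and address are placeholders, and the Cluster page covers the full procedure:

# On the first node - create the cluster
pvecm create <cluster-name>

# On each additional node - join the cluster
pvecm add <IP address of the first node>

# Verify cluster status
pvecm status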

ZFS Snapshots

Creating ZFS snapshots of the Proxmox installation can be useful before making changes. This enables rollback to a previous version of the filesystem should any changes need to be undone. Here are some useful commands for this purpose:

zfs list -t snapshot
zfs list
zfs snapshot rpool/ROOT/<node-name>@<snap-name>
zfs rollback rpool/ROOT/<node-name>@<snap-name>
zfs destroy rpool/ROOT/<node-name>@<snap-name>

Be careful to select the proper dataset – snapshots on the pool that contains the dataset don't support this use case. Also, you can only roll back directly to the latest snapshot. If you want to roll back to an earlier snapshot, you must first destroy all of the later snapshots.

In the case of a Proxmox cluster node, the shared files in the associated cluster filesystem will not be included in the snapshot. You can learn more about the Proxmox cluster file system and its shared files here.

You can view all of the snapshots inside the hidden /.zfs directory on the host filesystem as follows:

# cd /.zfs/snapshot/<name>
# ls -la

Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, we need to modify /etc/chrony/chrony.conf to use our servers for the pool. This needs to be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.

The first step before following the configuration procedures above is to install chrony on each node –

apt install chrony
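The change itself looks something like the sketch below; the local NTP server hostnames are placeholders (the real ones are on the page linked above):

# In /etc/chrony/chrony.conf, comment out the default Debian pool
#pool 2.debian.pool.ntp.org iburst

# and point chrony at the local NTP servers instead
server <ntp-server-1> iburst
server <ntp-server-2> iburst

# Then restart chrony to apply the change
systemctl restart chrony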

Mail Forwarding

We used the following procedure to configure postfix to support forwarding e-mail through smtp2go. Postfix does not seem to work with passwords containing a $ sign. A separate login was set up in smtp2go for forwarding purposes.

Some key steps in the process include:

# Install postfix and the supporting modules
# for smtp2go forwarding
sudo apt-get install postfix
sudo apt-get install libsasl2-modules

# Install mailx
sudo apt -y install bsd-mailx
sudo apt -y install mailutils

# Run this command to configure postfix
# per the procedure above
sudo dpkg-reconfigure postfix

# Use a working prototype of main.cf to edit
sudo vi /etc/postfix/main.cf

# Setup /etc/mailname -
#   use version from working server
#   MAKE SURE mailname is lower case/matches DNS
sudo uname -n > /etc/mailname

# Restart postfix
sudo systemctl reload postfix
sudo service postfix restart

# Reboot may be needed
sudo reboot

# Test
echo "Test" | mailx -s "PVE email" <email addr>

vGPU

Our servers each include an NVIDIA Tesla P4 GPU. This GPU is shareable using NVIDIA's vGPU software. The information on how to set up Proxmox for vGPU may be found here. This procedure also explains how to enable IOMMU for GPU pass-through (not sharing). We do not have IOMMU set up on our servers at this time.

You’ll need to install the git command and the cc compiler to use this procedure. This can be done with the following commands –

# apt update
# apt install git
# apt install build-essential

Now you can follow the procedure here. Be sure to include the steps to enable IOMMU. I downloaded and installed the 6.4 vGPU driver from the Nvidia site and did a final reboot of the server.

vGPU Types

The vGPU drivers support a number of GPU types. You'll want to select the appropriate one in each VM. Note that mixing vGPU sizes on a single GPU is not allowed (i.e., if one vGPU instance uses 2 GB of memory, all of them must). The following table shows the types available. (This data can be obtained by running mdevctl types on your system.)

Q Profiles - Not Good for OpenGL/Games

vGPU Type    Name           Memory   Instances
nvidia-63    GRID P4-1Q     1 GB     8
nvidia-64    GRID P4-2Q     2 GB     4
nvidia-65    GRID P4-4Q     4 GB     2
nvidia-66    GRID P4-8Q     8 GB     1

A Profiles - Windows VMs

vGPU Type    Name           Memory   Instances
nvidia-67    GRID P4-1A     1 GB     8
nvidia-68    GRID P4-2A     2 GB     4
nvidia-69    GRID P4-4A     4 GB     2
nvidia-70    GRID P4-8A     8 GB     1

B Profiles - Linux VMs

vGPU Type    Name           Memory   Instances
nvidia-17    GRID P4-1B     1 GB     8
nvidia-243   GRID P4-1B4    1 GB     8
nvidia-157   GRID P4-2B     2 GB     4
nvidia-243   GRID P4-2B4    2 GB     4
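To use one of these profiles, the mediated device is added to a VM's hardware. A CLI sketch follows, with a placeholder VM ID and PCI address; the same setting is available in the web UI by adding the GPU as a PCI device and choosing the MDev type:

# List the mediated device types the GPU supports
mdevctl types

# Assign a 2 GB Q profile (nvidia-64) to VM 100
qm set 100 -hostpci0 01:00.0,mdev=nvidia-64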

Problems with Out Of Date Keys on Server Nodes

I have occasionally seen problems with the SSH keys getting out of date on our servers. The fix for this is to run the following commands on all of the servers. A reboot is also sometimes necessary.

# Update certs and reload the PVE proxy
pvecm updatecerts -F && systemctl restart pvedaemon pveproxy

# Reboot if needed
reboot