
Linux Desktop Virtual Machines

Ubuntu Desktop

Several Ubuntu Linux desktop VMs support general-purpose desktop applications and our DXalarm Clocks. The following steps create the base VMs for these applications.

VM Install

The VMs are created as follows:

  • Image – Ubuntu Desktop, downloaded from here.
  • CPUs – 4
  • Memory – 4096 MB, Ballooning = off
  • Network – LS Services, VLAN 10
  • Disk – 32 GB, SSD, Discard = on, Cache = Write through
  • QEMU Agent – Installed via apt install qemu-guest-agent && systemctl start qemu-guest-agent

The CPU and Memory parameters are chosen to be on the high side for most applications. This enables a quick installation and setup of the resulting VM. These parameters can be adjusted lower to match the actual workload for each provisioned VM.
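For reference, a roughly equivalent VM creation from the Proxmox CLI might look like the following. This is a sketch only; the VM ID, bridge name (vmbr0), storage names, and ISO file name are assumptions that must be adjusted to match your environment.

# Sketch: create the VM from the Proxmox CLI (IDs and names are placeholders)
qm create 100 --name ubuntu-desktop --ostype l26 \
  --cores 4 --memory 4096 --balloon 0 \
  --net0 virtio,bridge=vmbr0,tag=10 \
  --scsihw virtio-scsi-pci \
  --scsi0 zfsa:32,discard=on,ssd=1,cache=writethrough \
  --ide2 local:iso/ubuntu-desktop.iso,media=cdrom \
  --agent enabled=1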

Run the Ubuntu installer on the initial boot as follows:

  • US English
  • Normal Installation, Download Updates while installing
  • Erase Disk and Install Ubuntu
  • Timezone = New York, Automatic Timezone, AM/PM Time Format
  • Set computer name and login credentials

I also enabled SSH access for all logins. The procedure for doing this can be found here.
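For reference, a minimal version of that procedure on Ubuntu (assuming the stock openssh-server package) is:

# Install and start the OpenSSH server
sudo apt install openssh-server
sudo systemctl enable --now ssh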

Next came a few post-setup configuration steps:

  • Dark Style
  • Set Desktop Wallpaper
  • Setup Remote Desktop and VNC Sharing

Finally, we installed the following via the Ubuntu Software app:

  • Extension Manager
  • Allow Locked Remote Desktop via Extension Manager (by name search)

E-mail Forwarding

Outbound e-mail is set up via nullmailer using the procedure outlined here.

# Run this as root
sudo bash

# Install nullmailer and mail apps
apt-get install nullmailer mailutils

# Move to the nullmailer directory
cd /etc/nullmailer

# Create configuration files
vi defaultdomain
...
anita-fred.net
:wq

vi adminaddr
...
<my-email-address>
:wq

# This file sets up TLS access to smtp2go
vi remotes
...
mail.smtp2go.com smtp --port=587 --starttls --user=<my smtp2go login ID> --pass=<my smtp2go password>
:wq

# The next three steps are important!
chmod 644 defaultdomain adminaddr
chmod 600 remotes
chown mail:mail defaultdomain adminaddr remotes

# Check status of nullmailer
service nullmailer status

# Send a test e-mail
mailx -s "Test e-mail via nullmailer MTA" <email address>
...
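If the test message does not arrive, the nullmailer queue and the mail log are the first places to look (paths assume the stock Debian/Ubuntu packaging):

# Check for messages stuck in the queue
ls /var/spool/nullmailer/queue

# Watch nullmailer's log output
tail -f /var/log/mail.log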

Sound Through Remote Desktop Client

The xRDP extensions enable sound from our Ubuntu VMs via Remote Desktop. The procedure to install the necessary extensions in Ubuntu VMs can be found here.

The steps are as follows:

# Must run these commands as normal user, not root
su - fkemmerer

# Download script, unzip it, & make it executable
wget https://www.c-nergy.be/downloads/xRDP/xrdp-installer-1.4.3.zip
unzip xrdp-installer-1.4.3.zip
chmod +x xrdp-installer-1.4.3.sh

# Must use -s to install sound drivers
./xrdp-installer-1.4.3.sh -s

# Shut down the machine, then start it again to complete the install
sudo shutdown now

Applications

We installed the following applications:

  • Chrome & associated apps
  • VLC Player

Install the IDLE Python IDE as follows:

sudo apt install idle

Template Conversion and Use

The fully set up VM has been converted to a template. The template can be used to create Ubuntu Desktop VMs using the following steps:

  • Clone the template to a VM (a full/unlinked clone is preferred; see the CLI sketch after this list)
  • Edit /etc/hosts and /etc/hostname to set the system name for the new VM
  • Add the new VM to the Backup and HA configurations
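A sketch of the clone step from the Proxmox CLI (the template ID, new VM ID, and name are placeholders):

# Full (unlinked) clone of the template into a new VM
qm clone <template-id> <new-vm-id> --name <new-vm-name> --full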


Windows Virtual Machines

One of our HomeLab environment’s goals is to run our desktop OSs on virtual machines. This lets us easily access standard OS environments such as Microsoft Windows from a web browser.

Windows VM Setup

We use the following procedure to set up our Windows VMs –

The following ISO images are downloaded to the PVE-templates Share on our Proxmox cluster –

Each Windows VM is created with the following options (all other choices used the defaults) –

  • Name the VM windows-<machine name>
  • Use the Windows 10 desktop ISO image.
  • Add an additional drive for the VirtIO drivers and use the Windows VirtIO Driver ISO image.
  • The Type/Version is set to Microsoft Windows 10.
  • Check the Qemu Agent option (we’ll install this later).
  • Set the SCSI Controller to VirtIO SCSI.
  • Use PVE-storage and create a 128 GB disk
  • Set Discard and SSD Emulation options
  • Set Cache to Write Back
  • Allocate 4 CPU Cores
  • Allocate 16 GB of memory (4 GB minimum) with the Ballooning Device enabled
  • Run on HS Services Network, use Intel E1000 NIC, set VLAN Tag to 10

Start the VM and install Windows. Some notes include –

  • Enter the Windows 10 Pro product key
  • Use the Windows Driver disk to load a storage driver so the installer can see the VirtIO disk
  • Once Windows is up, use Windows Driver disk to install drivers for devices that did not install automatically. You can find the correct driver by searching for drivers from the root of the Windows Driver disk.
  • Install the qemu guest agent from the Windows Driver disk. It’s in the guest agent directory.
  • Set the Computer name, Workgroup, and Domain name for the VM.
  • Do a Windows update and install all updates next.

Setup Windows applications as follows –

  • Install Chrome browser
  • Install Dashlane password manager
  • Install Dropbox and Synology Drive
  • Install Start10
  • Install Directory Opus File Manager
  • Install PDF Viewer
  • Install Printers
  • Install media tools, VLC Player, and QuickTime Player
  • Install Network utilities, WebSSH
  • Install windows gadgets
  • Install DXlab, tqsl, etc.
  • Install Microsoft Office and Outlook
  • Install SmartSDR
  • Install WSJT-X, JTDX, JTalert
  • Install PSTRotator, Amplifier app
  • Install RealVNC
  • Install Benchmarks (Disk, Graphics, Geekbench)
  • Install Folding at Home
  • Install a sound driver for audio (needed for Windows Remote Desktop or RealVNC)

Proxmox VE

This page covers the Proxmox install and setup on our server. You can find a great deal of information about Proxmox in the Proxmox VE Administrator’s Guide.

Proxmox Installation/ZFS Storage

Proxmox was installed on our server using the steps in the following video:

The Proxmox boot images are installed on NVMe drives (ZFS RAID1 on our Dell server’s BOSS card, or a single ZFS disk on the NVMe drives in our Supermicro servers). This video also covers the creation of a ZFS storage pool and filesystem. A single filesystem called zfsa was set up using RAID10 and lz4 compression using four SSD disks on each server.

A Community Proxmox VE License was purchased and installed for each node. The Proxmox installation was updated on each server using the Enterprise Repository.

Linux Configuration

I like to install a few additional tools to help manage our Proxmox installations. They include the nslookup and ifconfig commands and the tmux terminal multiplexer. The commands to install these tools are found here.
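For reference, on Debian-based Proxmox these tools come from the following packages (package names assume the stock repositories):

# nslookup, ifconfig, and tmux
apt install dnsutils net-tools tmux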

Cluster Creation

With these steps done, we can create a 3-node cluster. See our Cluster page for details.

ZFS Snapshots

Creating ZFS snapshots of the Proxmox installation can be useful before making changes. This enables rollback to a previous version of the filesystem should any changes need to be undone. Here are some useful commands for this purpose:

zfs list -t snapshot
zfs list
zfs snapshot rpool/ROOT/<node-name>@<snap-name>
zfs rollback rpool/ROOT/<node-name>@<snap-name>
zfs destroy rpool/ROOT/<node-name>@<snap-name>

Be careful to select the proper dataset – snapshots of the pool that contains the dataset don’t support this use case. Also, you can only roll back to the latest snapshot directly. If you want to roll back to an earlier snapshot, you must first destroy all of the later snapshots.

In the case of a Proxmox cluster node, the shared files in the associated cluster filesystem will not be included in the snapshot. You can learn more about the Proxmox cluster file system and its shared files here.

You can view all of the snapshots inside the hidden /.zfs directory on the host filesystem as follows:

# cd /.zfs/snapshot/<name>
# ls -la

Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, we need to modify /etc/chrony/chrony.conf to use our servers for the pool. This needs to be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.

The first step before following the configuration procedures above is to install chrony on each node –

apt install chrony
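A minimal sketch of the change in /etc/chrony/chrony.conf, assuming two hypothetical local NTP server names; comment out the default pool line and list the local servers instead:

# Replace the default "pool ..." line with the local servers
#pool 2.debian.pool.ntp.org iburst
server ntp1.anita-fred.net iburst
server ntp2.anita-fred.net iburst

# Apply the change and verify the sources
systemctl restart chrony
chronyc sources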

Mail Forwarding

We used the following procedure to configure postfix to support forwarding e-mail through smtp2go. Postfix does not seem to work with passwords containing a $ sign. A separate login was set up in smtp2go for forwarding purposes.

Some key steps in the process include:

# Install postfix and the supporting modules
# for smtp2go forwarding
sudo apt-get install postfix
sudo apt-get install libsasl2-modules

# Install mailx
sudo apt -y install bsd-mailx
sudo apt -y install mailutils

# Run this command to configure postfix
# per the procedure above
sudo dpkg-reconfigure postfix

# Use a working prototype of main.cf to edit
sudo vi /etc/postfix/main.cf

# Setup /etc/mailname -
#   use version from working server
#   MAKE SURE mailname is lower case/matches DNS
uname -n | tr 'A-Z' 'a-z' | sudo tee /etc/mailname

# Restart postfix
sudo systemctl reload postfix
sudo service postfix restart

# Reboot may be needed
sudo reboot

# Test
echo "Test" | mailx -s "PVE email" <email addr>
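For reference, the relay-related portion of /etc/postfix/main.cf ends up looking something like this. This is a sketch, assuming smtp2go on port 587 with SASL credentials stored in /etc/postfix/sasl_passwd; match it against the procedure linked above.

# Relay settings in /etc/postfix/main.cf (sketch)
relayhost = [mail.smtp2go.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# Create the credentials file and its hash map
echo "[mail.smtp2go.com]:587 <login>:<password>" | sudo tee /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
sudo chmod 600 /etc/postfix/sasl_passwd*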

vGPU

Our servers each include an Nvidia Tesla P4 GPU. This GPU is sharable using Nvidia’s vGPU software. The information on how to set up Proxmox for vGPU may be found here. This procedure also explains how to enable IOMMU for GPU pass-through (not sharing). We do not use GPU pass-through on our servers at this time.

You’ll need to install git and a C compiler to use this procedure. This can be done with the following commands –

# apt update
# apt install git
# apt install build-essential

Now you can follow the procedure here. Be sure to include the steps to enable IOMMU. I downloaded and installed the 6.4 vGPU driver from the Nvidia site and did a final reboot of the server.

vGPU Types

The vGPU drivers support a number of GPU types. You’ll want to select the appropriate one in each VM. Note that mixing vGPU sizes on a single physical GPU is not allowed (i.e., if one vGPU instance uses 2 GB of memory, all instances on that GPU must). The following tables show the available types. (This data can be obtained by running mdevctl types on your system.)

Q Profiles - Not Good for OpenGL/Games

vGPU Type    Name          Memory   Instances
nvidia-63    GRID P4-1Q    1 GB     8
nvidia-64    GRID P4-2Q    2 GB     4
nvidia-65    GRID P4-4Q    4 GB     2
nvidia-66    GRID P4-8Q    8 GB     1

A Profiles - Windows VMs

vGPU Type    Name          Memory   Instances
nvidia-67    GRID P4-1A    1 GB     8
nvidia-68    GRID P4-2A    2 GB     4
nvidia-69    GRID P4-4A    4 GB     2
nvidia-70    GRID P4-8A    8 GB     1

B Profiles - Linux VMs

vGPU Type    Name          Memory   Instances
nvidia-17    GRID P4-1B    1 GB     8
nvidia-243   GRID P4-1B4   1 GB     8
nvidia-157   GRID P4-2B    2 GB     4
nvidia-243   GRID P4-2B4   2 GB     4
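Once the driver is installed, a vGPU profile can be attached to a VM from the Proxmox CLI. A sketch follows; the VM ID and PCI address are placeholders (check the GPU's address with lspci):

# List the supported profiles
mdevctl types

# Attach a 2 GB Q profile to VM 101 (placeholder PCI address)
qm set 101 --hostpci0 0000:3b:00.0,mdev=nvidia-64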

Problems with Out Of Date Keys on Server Nodes

I have occasionally seen problems with the SSH keys getting out of date on our servers. The fix for this is to run the following commands on all of the servers. A reboot is also sometimes necessary.

# Update certs and reload PVE proxy
pvecm updatecerts -F && systemctl restart pvedaemon pveproxy

# Reboot if needed
reboot

WordPress in an LXC Container

We’ve set up several Proxmox LXC containers to host our WordPress sites on our server. LXC containers are more efficient than full VMs in terms of server resource utilization. You can learn more about Proxmox LXC containers vs. Virtual Machines here. We went through the following steps to set this up.

WordPress Container

The setup uses the WordPress LXC container from Turnkey Linux. This lightweight Debian Linux environment uses MariaDB and Apache2 to complete the LAMP stack. I followed this YouTube video to complete the installation –

Once the container was set up, I set the timezone for Debian as follows:

# timedatectl set-timezone "America/New_York"

Next, I updated the Debian installation via the following:

# apt update && apt upgrade

I also installed nslookup and apt-utils via the following:

# apt-get install dnsutils
# apt-get install apt-utils

Increase WordPress Memory Limit

The following addition was made just before the “…Happy publishing” line in the wp-config.php file to enable the WordPress installation to take advantage of the additional memory.

/* Increase WP Memory limit */
define('WP_MEMORY_LIMIT', '1024M');

/* That's all, stop editing! Happy publishing. */

Increase the Size of OPcache

The PHP OPcache stores the compiled versions of the PHP scripts that make up our websites. The default size of the cache is 128 MB. We’ve increased our OPcache to 512 MB to improve performance. OPcache memory is shared across all of the WordPress sites in the host LXC container. The steps to modify the size of the OPcache are as follows:

# vi /etc/php/8.2/apache2/php.ini

... Modify the following lines in the file

; The OPcache shared memory storage size.
;opcache.memory_consumption=128
opcache.memory_consumption=512

# systemctl restart apache2

Email Access

The supplied postfix MTA did not work, so we’ve installed the WP Mail SMTP plugin and configured it to forward email through our email service.

SPF and DMARC records need to be set up for the domain. This needs to be done properly to accommodate the multiple servers that can send e-mail on behalf of our domain. The DMARC step is simple – the necessary steps can be found here. The DMARC setup for a domain can be confirmed here.
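For reference, the records end up looking something like the following sketch (zone-file syntax; the smtp2go include hostname and the reporting mailbox are assumptions that must match your own setup):

; SPF - authorize smtp2go (and any other senders) for the domain
anita-fred.net.          IN TXT  "v=spf1 include:spf.smtp2go.com ~all"

; DMARC - start with a monitor-only policy and aggregate reports
_dmarc.anita-fred.net.   IN TXT  "v=DMARC1; p=none; rua=mailto:postmaster@anita-fred.net"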

SSL Certificate

This procedure was used to set up apache2 with a signed SSL certificate – click here. It uses Certbot with Let’s Encrypt to obtain and install a signed SSL certificate in Apache. A DNS-01 challenge is used, which does not require the Apache server to be reachable from the Internet.

Once the initial certificates are installed, a script can be created to run the certbot commands to check if the SSL certificates need to be updated.
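A minimal sketch of such a script, assuming certbot's renew subcommand and an Apache reload on successful renewal:

#!/bin/sh
# install_cert.sh - renew any certificates that are close to expiry
# and reload Apache so the new certificates are picked up
certbot renew --quiet --deploy-hook "systemctl reload apache2"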

The final step was to schedule weekly checks for SSL certificate updates using cron. This is done by executing the script above once a week.

Persistent Object Cache

We are using the Redis persistent object cache on our sites. Information on Redis and how to install it may be found here and here.

The following changes in each site’s wp-config.php must be made first:

<?php

/* Enable Redis cache - this line must be
   right after the opening <?php line */
define( 'WP_CACHE', true );

Then add the following to the “salt” section of wp-config.php:

/* Redis cache salt - unique for each WordPress
                      site on this host */
define('WP_CACHE_KEY_SALT',
       'homelab.anita-fred.net');

The WP_CACHE_KEY_SALT can be anything, but the value must be unique for each WordPress site hosted on the LXC.

The next step is to install the redis-server and php-redis packages on the LXC as follows:

# apt-get update
# apt-get install redis-server
# apt-get install php-redis
# systemctl restart apache2

The final step is to install the Redis Object Cache plugin on each WordPress site and use the plugin to enable the Redis cache.

Enable Gzip Compression

Gzip compression gives users with slower internet connections a better overall experience. This feature can be enabled as follows (more info here). First, run the following command in the LXC container to enable gzip compression for all websites.

# a2enmod deflate

Next, use a plugin in each WordPress website to enable Gzip compression in the associated .htaccess file.
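The directives such a plugin adds to .htaccess look roughly like this (a sketch; the exact MIME-type list varies by plugin):

<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/html text/css text/plain
  AddOutputFilterByType DEFLATE application/javascript application/json
</IfModule>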

With these steps done, restarting apache2 will enable compression.

# systemctl restart apache2

You can confirm that Gzip compression is working via the GiftOfSpeed web tool.

Static Browser Caching

First, install the apache2 mod_expires extension using the following commands –

# a2enmod expires
# systemctl restart apache2
# apachectl -M | grep expires

Next, set the static caching expiration header to 365 days using the caching plugin installed in each WordPress site.
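The resulting directives look roughly like this (a sketch of what such a caching plugin writes; adjust the types as needed):

<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/png "access plus 365 days"
  ExpiresByType image/jpeg "access plus 365 days"
  ExpiresByType text/css "access plus 365 days"
  ExpiresByType application/javascript "access plus 365 days"
</IfModule>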

Setup SSH Keys

A public/private key pair is created and set up for Proxmox VE and all VMs and LXCs to ensure secure SSH access. The following procedure is used to do this. Password access should be disabled once the SSH keys are in place and working.
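A minimal sketch of that procedure, assuming an ed25519 key and the stock OpenSSH tools:

# On the admin machine: create a key pair
ssh-keygen -t ed25519

# Copy the public key to each PVE node, VM, and LXC
ssh-copy-id root@<host>

# Then, on each host, disable password logins in
# /etc/ssh/sshd_config and restart sshd:
#   PasswordAuthentication no
systemctl restart sshd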

Add Additional Websites

Apache2 can support multiple websites using a single IP address. This is the configuration that we are using for our home lab. The following procedure explains how to set this up:

Once an additional WordPress instance is installed and an associated database is set up, the following steps should be performed:

  • The SSL certificate for the site must be pulled and installed via certbot before enabling a new website via a2ensite
  • Set the owner and group permissions for all files in the WordPress installation (see the sketch below)
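A sketch of the ownership/permission step, assuming the site lives under /var/www and Apache runs as www-data:

# Give Apache ownership of the WordPress files
chown -R www-data:www-data /var/www/<site>

# Standard directory and file permissions
find /var/www/<site> -type d -exec chmod 755 {} \;
find /var/www/<site> -type f -exec chmod 644 {} \;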

WordPress Scheduler

WordPress’s internal scheduler (WP-Cron) only runs when a site receives visits, which can be a problem early in a new site’s development when there are few visitors. To address this, we’ve installed a cron job for our sites as follows –

# Modify root crontab
cd /var/spool/cron/crontabs
vi root

... Add these lines...
... The first ensures SSL cert updates...

# m h dom mon dow command
0 4 * * sun /bin/sh /root/install_cert.sh
*/5 * * * * /usr/bin/curl https://travel.anita-fred.net/wp-cron.php?doing_wp_cron
*/5 * * * * /usr/bin/curl https://homelab.anita-fred.net/wp-cron.php?doing_wp_cron
*/5 * * * * /usr/bin/curl https://www.anita-fred.net/wp-cron.php?doing_wp_cron