# Our Home Lab

> Anita's and Fred's Home Lab

---

## Pages

- [Expand Proxmox VM Storage](https://homelab.anita-fred.net/expand-proxmox-vdisk/): There are many reasons to expand Proxmox VM Storage. It is always challenging to grow a VM's virtual disk in Proxmox. Here's how to do it...
- [Web AI App Development Using Generative Coding Tools](https://homelab.anita-fred.net/ai-app-development/): The next step of our AI journey involved Web AI App development. We use a tool called Bolt.new to build a simple World Clock App.
- [Stable Diffusion WebUI](https://homelab.anita-fred.net/stable-diffusion-webui/): We want to generate images locally. We chose Stable Diffusion WebUI. This tool provides both text-image and image-image...
- [A Simple AI Project: Write Me A Story](https://homelab.anita-fred.net/ai-project/): I wanted to do a simple project since we have LLMs and Image Generation in our Home Lab. We will write an AI-generated story.
- [Open WebUI](https://homelab.anita-fred.net/open-webui/): Open WebUI simplifies the process of embedding AI capabilities into web environments. We installed it using Docker...
- [Ollama: A Flexible Solution for Custom Machine Learning Models](https://homelab.anita-fred.net/ollama/): We will use Ollama as a core tool to experiment with Large Language Models running locally on our LLM Workstation.
- [Building an LLM Workstation Using Apple Silicon](https://homelab.anita-fred.net/llm-server-workstation/): We are building a large language model (LLM) workstation and server around a Mac Computer running Apple Silicon. A new Mac Studio M3 Ultra...
- [Artificial Intelligence and Machine Learning](https://homelab.anita-fred.net/artificial-intelligence/): We are experimenting with Artificial Intelligence and Machine Learning. We are building a Workstation using Apple Silicon.
- [Raspberry Pi Home Server](https://homelab.anita-fred.net/raspberry-pi-home-server/): While it's nice to have enterprise-grade equipment in our Home Lab, I wanted to build something simpler: a Raspberry Pi Home Server...
- [Home Media System](https://homelab.anita-fred.net/home-media-system/): One reason to have a home network is to create a home media system that enables us to use our content collection anywhere in our home.
- [Raspberry Pi NAS 2](https://homelab.anita-fred.net/pi-nas-2/): We've built a second NAS and Docker environment using another Raspberry Pi 5. This NAS features four 2.5 in...
- [IT Tools](https://homelab.anita-fred.net/it-tools/): We have found a useful Docker application for Homelab folks. It's called IT Tools and it is run using a Docker container with Traefik...
- [Proxmox Test Node](https://homelab.anita-fred.net/test-node/): We have built a Proxmox single node using an AMD NUC computer for testing and learning purposes. The hardware configuration for...
- [Raspberry Pi Servers](https://homelab.anita-fred.net/raspberry-pi/): We use Raspberry Pi Single-Board computers (SBCs) for various server applications in our Home Lab. Applications include DNS, NAS, Proxmox...
- [Wallabag](https://homelab.anita-fred.net/wallabag/): We're running a Docker container called Wallabag, which can be used to save cleanly formatted copies of web pages and articles...
- [RSS Hub](https://homelab.anita-fred.net/rsshub/): We're running a Docker container called RSS Hub, which detects RSS feeds available on websites that we browse...
- [Docker Monitoring](https://homelab.anita-fred.net/docker-monitoring/): Our Docker monitoring dashboard solution covers two aspects of Docker container performance.
It uses Grafana to implement...
- [Proxmox Monitoring](https://homelab.anita-fred.net/proxmox-monitoring/): We set up a Grafana Dashboard to monitor our Proxmox Cluster. The main components in our Proxmox monitoring stack include...
- [File Browser](https://homelab.anita-fred.net/filebrowser/): File Browser is a simple Docker container that provides a file manager in your web browser. It is helpful to have access to files...
- [Grafana Logging and Monitoring](https://homelab.anita-fred.net/grafana/): We've added a Monitoring and Logging system. The system is based on Grafana, Prometheus, Grafana Loki, Promtail, Telegraf, and InfluxDB.
- [VS Code Server](https://homelab.anita-fred.net/code-server/): VS Code Server allows editing using a web browser on any computer. The VS Code web interface is hosted from a server...
- [WireGuard VPN](https://homelab.anita-fred.net/wireguard-vpn/): The WireGuard VPN server built into our Unifi System provides secure connections to our iPhones, iPads, macOS, and Windows systems.
- [Adminer](https://homelab.anita-fred.net/adminer/): We run Adminer in a container to enable the configuration and editing of MySQL databases. Information on configuring Adminer...
- [Dashy](https://homelab.anita-fred.net/dashy/): We have created many websites and services for our Home Lab. It's nice to have an organized dashboard to access these tools. We use Dashy...
- [Iperf3](https://homelab.anita-fred.net/iperf/): Iperf3 is a common tool for network performance testing. We run an Iperf3 server in a Docker container. You can find information...
- [Speedtest Tracker](https://homelab.anita-fred.net/speedtest-tracker/): We run a Docker container called Speedtest Tracker to monitor the performance of our Internet connection. The article covers setup...
- [Nginx Proxy Manager](https://homelab.anita-fred.net/nginx-proxy-manager/): Many services and devices in our home lab have web interfaces. We secure many of them using Nginx Proxy Manager as a reverse proxy.
- [Uptime Kuma](https://homelab.anita-fred.net/uptime-kuma/): We use a tool called Uptime Kuma to monitor the operational status of our home lab. Uptime Kuma can monitor many different...
- [Pihole with a Cloudflare Tunnel](https://homelab.anita-fred.net/pihole/): We are running three Pihole installations, which enable load balancing and high availability for our DNS services. We also use Cloudflare...
- [Watchtower Container Update](https://homelab.anita-fred.net/watchtower/): We are running the Watchtower container on all our stand-alone Docker hosts to keep our containers up to date.
- [Cloudflare DDNS](https://homelab.anita-fred.net/cloudflare-ddns/): We use Cloudflare to host our domains and the associated external DNS records. Cloudflare provides excellent security and scaling features...
- [Docker Networking](https://homelab.anita-fred.net/docker-networking/): Docker can create its own internal networks. There are multiple options here, so this aspect of Docker can be confusing.
- [Traefik Reverse Proxy](https://homelab.anita-fred.net/traefik-reverse-proxy/): We are using Traefik as a reverse proxy in our Home Lab. Traefik is deployed on our Docker Swarm Cluster and Raspberry Pi Docker server.
- [Raspberry Pi - Docker and PiHole](https://homelab.anita-fred.net/rpi-docker/): We have set up a Raspberry Pi 5 system to run a third PiHole DNS server in our network. This ensures that DNS services are available...
- [Docker Infrastructure](https://homelab.anita-fred.net/docker/): We've been using Docker hosts and Portainer to run various containerized applications in our Home Lab. Our applications have been hosted...
- [Ubuntu Server VM Template](https://homelab.anita-fred.net/ubuntu-server-template/): It is common to need to create Ubuntu Server VMs to host various applications. To facilitate the creation of such VMs, we've created...
- [Linux Desktop Virtual Machines](https://homelab.anita-fred.net/linux-desktop/): Several Ubuntu Linux desktop VMs support general-purpose desktop applications and our DXalarm Clocks. The following steps create the base...
- [Samba File Server](https://homelab.anita-fred.net/file-server/): We have quite a bit of high-speed SSD storage available on the pve1 server in our Proxmox cluster. We made this storage available as a NAS...
- [Linux Packages and Tools](https://homelab.anita-fred.net/linux-tools/): We use a variety of Linux Packages and Tools in our Homelab. This page explains how we set up and manage them.
- [Raspberry Pi NAS](https://homelab.anita-fred.net/rpi-nas/): We've built a NAS and Docker Staging environment using a Raspberry Pi 5. Our NAS features a 2 TB NVMe SSD...
- [Homelab Projects](https://homelab.anita-fred.net/home-lab-projects/): We built our Home Lab to do projects that help us learn about modern data centers and IT technology. Here are some future projects...
- [Uninterruptible Power](https://homelab.anita-fred.net/power/): Uninterruptible power for our network, servers, and storage is key to our Home Lab's high-availability strategy.
- [High-Availability Storage Cluster](https://homelab.anita-fred.net/ha-storage/): We are building a High-Availability (HA) Storage Cluster to complement our Proxmox HA Server Cluster. Synology has a nice HA solution...
- [Server Cluster](https://homelab.anita-fred.net/server-cluster/): Our server cluster consists of three servers. Our approach was to pair one high-capacity Dell server with two smaller Supermicro servers.
- [Home Network Infrastructure](https://homelab.anita-fred.net/network-infrastructure/): We use UniFi equipment for our second-generation home network, primarily for its single-pane-of-glass management and configuration capabilities.
- [Windows Virtual Machines](https://homelab.anita-fred.net/windows-vm/): One of our Homelab environment's goals is to run our Windows desktop OSs on virtual machines. This enables us to get Windows...
- [Synology NAS](https://homelab.anita-fred.net/synology-nas/): We cover some details of configuring our Synology NAS devices running DSM 7.2 here. All of our Synology NAS devices use pairs of ethernet...
- [Docker in an LXC Container](https://homelab.anita-fred.net/docker-lxc/): Using this procedure, we set up Docker using the Turnkey Core LXC container (Debian Linux). The container is created to run Docker.
- [Proxmox Backup Server](https://homelab.anita-fred.net/pbs/): This page covers the installation of the Proxmox Backup Server in our HomeLab. Our approach is to run the Proxmox Backup Server in a VM...
- [Installed Plugins](https://homelab.anita-fred.net/plugins/): This page has a list of all the installed WordPress plugins on our website. Please log in to see the details.
- [Proxmox VE](https://homelab.anita-fred.net/proxmox/): This page covers the Proxmox VE install and setup on our server. You can find a great deal of information about Proxmox in...
- [Welcome To Our Home Lab](https://homelab.anita-fred.net/): This site is dedicated to documenting the setup, features, and operation of our Home Lab. Our Home Lab consists of several components...
- [Wordpress in an LXC Container](https://homelab.anita-fred.net/wordpress-lxc/): We've set up several Proxmox LXC containers to host several WordPress sites on our server. LXC containers are efficient...
- [Privacy Policy](https://homelab.anita-fred.net/privacy-policy/): Who we are: Our website address is https://homelab.anita-fred.net. Our site documents and shares information about our Home Lab...

---

## Posts

---

# Detailed Content

## Pages

### Expand Proxmox VM Storage

> There are many reasons to expand Proxmox VM Storage. It is always challenging to grow a VM's virtual disk in Proxmox. Here's how to do it...

- Published: 2025-04-21
- Modified: 2025-04-21
- URL: https://homelab.anita-fred.net/expand-proxmox-vdisk/
- Categories: VMs and LXCs
- Tags: Proxmox, Storage, VM

https://youtu.be/58m0hnt0ij8?si=5KGEBjFZo2RJHyqC

There are many reasons why you may need to expand Proxmox VM storage. It is always challenging to grow a VM's virtual disk in Proxmox. The process requires several steps, and mistakes can result in the loss of data. The video above provides an easy-to-understand guide on how to expand a VM disk. Before beginning, you should make a backup. It also helps to format your VM disks using LVM thin provisioning.
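In outline, the process looks like the sketch below. This is a minimal, hedged example, assuming VM ID 100, a SCSI disk named scsi0, and an Ubuntu guest with the default LVM layout; your VM ID, disk name, and volume names will differ, and you should watch the video before running anything.

```bash
# On the Proxmox host: grow the virtual disk by 20 GiB
# (VM ID 100 and disk scsi0 are placeholders for your own values)
qm resize 100 scsi0 +20G

# Inside the guest: grow the partition, the LVM volume, and the filesystem
sudo growpart /dev/sda 3                              # extend partition 3 to fill the disk
sudo pvresize /dev/sda3                               # tell LVM the physical volume grew
sudo lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv   # give the space to the logical volume
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv               # grow the ext4 filesystem
```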
---

### Web AI App Development Using Generative Coding Tools

> The next step of our AI journey involved Web AI App development. We use a tool called Bolt.new to build a simple World Clock App.

- Published: 2025-03-27
- Modified: 2025-03-27
- URL: https://homelab.anita-fred.net/ai-app-development/

The next step of our AI journey involved Web AI App development. I have been using a tool called Bolt.new to build some simple web applications. Bolt.new is an AI-powered web development agent. It allows users to build full-stack applications by prompting the AI to generate code. Users can then run, edit, and deploy the application directly from their browser, without writing any code themselves. The user interacts with the app using a chat dialog to create and refine their application. AI app development using tools like Bolt.new tremendously boosts productivity.

#### Getting Started with Bolt.new

https://www.youtube.com/watch?v=1SfUMQ1yTY8

The video above demonstrates building a web app using Bolt.new. The tool handles code generation. Bolt also handles the necessary interactions with a back-end service called netlify.com to run the generated app.

#### Simple Digital World Clock App

To learn about AI generative app development, I decided to build a simple World Clock App. This web app displays the time in two different time zones. It allows users to search for the time zones in a specified country. The app also includes a visit counter tied to a database hosted by Supabase. The app took about 30 minutes to build and deploy. You can try it out here.

---

### Stable Diffusion WebUI

> We want to generate images locally. We chose Stable Diffusion WebUI. This tool provides both text-image and image-image...

- Published: 2025-03-23
- Modified: 2025-04-05
- URL: https://homelab.anita-fred.net/stable-diffusion-webui/
- Categories: AI and Machine Learning
- Tags: AI, LLM

Stable Diffusion WebUI Image Generator

We want to generate images locally. Several different applications can do this. To get started, we chose Stable Diffusion WebUI. This tool provides both text-to-image and image-to-image generation capabilities. The WebUI part offers a user interface. Users can select image generation models, set up these models, and generate images from text.

#### Installing Stable Diffusion and the WebUI

A simple procedure for setting up this tool can be found here. We used the installation procedure for AUTOMATIC1111 on Apple Silicon. We created a simple shell script to launch the tool. Another good resource is the git repository for Stable Diffusion.

```bash
# Location where AI tools are installed
cd ~/AI/stable-diffusion-webui

# Log startup of Stable Diffusion
echo "*** Launching Stable Diffusion at `date` ***" >> ~/AI/ai.log

# Run Stable Diffusion using login/password; enable API
./webui.sh --api --listen --gradio-auth <user>:<password> >> ~/AI/ai.log
```

Bash script to launch Stable Diffusion

We also used the Mac Automator to launch it when logging in. The final step is to allow access to the WebUI via a proxy host, our Nginx Proxy Manager.

#### Choosing and Downloading Models

Once Stable Diffusion is installed, you'll want to download models to generate images. A good source for models is Hugging Face. Here's a guide to choosing models. We have these models installed here:

- Stable Diffusion v1.5
- Stable Diffusion XL

#### Learning To Use Stable Diffusion

The quality and look of the images you can generate using Stable Diffusion are endless. You can control how your images look and their content through the choices that you make in:

- Your image prompt
- The model you use
- The settings you use to set up your model

You can find a beginner's guide to Stable Diffusion here.

#### Generating Images Using Text LLMs

Using Open WebUI to Generate an Image (the button above the blue arrow triggers image generation from text)

The associated image generation engine can be used from Open WebUI to generate images from text created by LLMs. The steps to do this are as follows.

#### Configuring Open WebUI to Generate Images

First, you will need to set up Open WebUI image generation. Our current configuration is as follows:

Open WebUI Image Generation Set Up

Use a text LLM like llava:34b to generate an Image Prompt describing the desired image. Select the generate image button (shown above the blue arrow in the image) to run your configured image generation model. Our settings are:

- sd_xl_refiner_1.0 - the base model used to generate images
- Heun Sampler - a high-quality image convergence algorithm
- CFG Scale set to 7 - sets how closely the image prompt is followed during image generation
- Image Size of 800x600 - the size of the generated images in pixels
- Steps set to 50 - the number of steps used to refine the image based on your prompt

You can experiment with these settings using the Stable Diffusion WebUI. This will help you find a combination that produces good results on your setup.
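Because the WebUI was launched with the --api flag, the same settings can also be exercised from the command line. Below is a hedged sketch against AUTOMATIC1111's txt2img endpoint; the prompt is an example, the port assumes the WebUI default of 7860, and jq is assumed to be installed (on macOS the decode flag may be -D rather than --decode):

```bash
# Request one 800x600 image from the local Stable Diffusion WebUI API,
# mirroring the sampler/CFG/steps settings described above
curl -s http://localhost:7860/sdapi/v1/txt2img \
  -H "Content-Type: application/json" \
  -d '{
        "prompt": "a lakeside cottage in New England, colorful autumn foliage",
        "sampler_name": "Heun",
        "cfg_scale": 7,
        "width": 800,
        "height": 600,
        "steps": 50
      }' | jq -r '.images[0]' | base64 --decode > image.png
```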
---

### A Simple AI Project: Write Me A Story

> I wanted to do a simple project since we have LLMs and Image Generation in our Home Lab. We will write an AI-generated story.

- Published: 2025-03-23
- Modified: 2025-03-31
- URL: https://homelab.anita-fred.net/ai-project/
- Categories: AI and Machine Learning
- Tags: AI, LLM

I wanted to do a simple project using our home lab's LLMs and image generation tools. This project will show the capabilities of our platform. We will write an AI-generated story.

#### Creating The Story

I used the deepseek-r1:32b LLM to create the AI story. Here's the prompt -

> Please write me a story about a lovely home by a lake in New England. The scene is a fall with colorful fall foliage.

The prompt is all the information I gave the LLM running here on my Mac Mini system. The AI model generated the rest. The model was run using a combination of Open WebUI and Ollama.

#### Creating The Accompanying Image

I used the gemma3:27b model to generate a photo prompt from the story. I then used the image generation choice in Open WebUI to generate an image from the image prompt. Here's my setup for image generation in Open WebUI. To generate the image, I used the AUTOMATIC1111 Stable Diffusion engine along with the sd_xl_refiner_1.0 model. You can learn more about the local Stable Diffusion WebUI engine we are using. The entire process was run locally on our Mac Mini system. It took less than 5 minutes to finish.

How did my AI story turn out? You be the judge.

#### A Lovely Home by a Lake in New England

In the heart of New England's serene countryside stands an enchanting cottage, its wooden walls weathered yet warm, wrapped in ivy that clings gracefully like an embrace from the earth. The house is nestled beside a tranquil lake, whose surface mirrors the vibrant tapestry of autumn hues reflected by the surrounding foliage.

The exterior of the cottage is painted in soft earth tones—muted greens and rusty reds—that harmonize perfectly with the season's palette. A wraparound porch invites weary travelers to rest, its wooden slats worn smooth by time and weather. The roof, steeply pitched, seems to bow under the weight of golden leaves that have drifted down from the trees.

The lake is a mirror of the fall, its waters rippling gently with hues of amber, burgundy, and gold. Along its shores, birches stand tall, their trunks white as snow against the deepening forest backdrop. The air carries the crispness of approaching winter, yet the sun still warms the land, casting long shadows that dance across the landscape.

Inside the cottage, a crackling fire in the hearth fills the room with a comforting glow. Plush chairs and sofas are arranged around the fireplace, their upholstery soft to the touch, inviting stories to be shared. Large windows frame the view of the lake and forest, allowing sunlight to filter through, highlighting dust motes that twirl in the golden beams.

The scent of wood smoke mingles with the earthy aroma of damp leaves outside. A cat curls up by the fire, its fur as warm as the blankets draped over the furniture. The house is alive with the sounds of life: the occasional splash from the lake, birdsong echoing through the trees, and the distant crunch of leaves underfoot as someone takes a stroll through the woods.

Autumn activities are a delightful part of daily life here. Apples picked from nearby orchards are transformed into pies that fill the house with a sweet aroma. Walks through the forest reveal hidden treasures—acorns, pine cones, and the occasional squirrel darting across the path. The simplicity of these moments is cherished, each day a gift to be savored.

As evening approaches, the sky softens into hues of peach and lavender, casting a warm glow over the landscape. The lake reflects this twilight beauty, its surface still as if holding its breath. Inside the cottage, the fire casts dancing shadows on the walls, and the sounds of the day give way to the gentle rustle of leaves in the breeze.

This is a place where time moves slowly, where each season brings its own magic. In the fall, the cottage by the lake becomes a haven of warmth and beauty, a testament to nature's artistry, inviting all who enter to find peace and wonder in the simple joys of life.
---

### Open WebUI

> Open WebUI simplifies the process of embedding AI capabilities into web environments. We installed it using Docker...

- Published: 2025-03-21
- Modified: 2025-03-23
- URL: https://homelab.anita-fred.net/open-webui/
- Categories: AI and Machine Learning
- Tags: AI, LLM

Open WebUI running an image analysis model.

Open WebUI simplifies the process of embedding AI capabilities into web environments. It allows developers to create dynamic interfaces. Users can enter data, view AI-generated outputs, and visualize real-time results. Its open-source nature provides flexibility for customization, adapting to various AI project requirements, or integrating with popular machine learning libraries. (This summary was written, in part, using the deepseek-r1:32b model.)

#### Open WebUI Installation

We installed Open WebUI as a Docker container using the approach outlined in the video below.

https://youtu.be/RQFfK7xIL28?si=t1AgAPTpmiIvkOEb&t=1198

Encryption and reverse proxy implementation is handled using Traefik. A docker-compose template file for our installation follows. It can be installed as a Portainer stack.

```yaml
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:0.5.20
    container_name: openwebui
    restart: unless-stopped
    environment:
      # Ollama Config
      - OLLAMA_BASE_URL=http://<ollama-host>:11434
    # Map configuration to our docker persistent store
    volumes:
      - /mnt/nfs/docker/open-webui/data:/app/backend/data:rw
    # Connect to our Traefik proxy network
    networks:
      - proxy
    # Configure for Traefik reverse proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.openwebui.rule=Host(`openwebui.anita-fred.net`)"
      - "traefik.http.middlewares.openwebui-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.routers.openwebui.middlewares=openwebui-https-redirect"
      - "traefik.http.routers.openwebui-secure.entrypoints=https"
      - "traefik.http.routers.openwebui-secure.rule=Host(`openwebui.anita-fred.net`)"
      - "traefik.http.routers.openwebui-secure.tls=true"
      - "traefik.http.routers.openwebui-secure.service=openwebui"
      - "traefik.http.services.openwebui.loadbalancer.server.port=8080"
      - "traefik.docker.network=proxy"

volumes:
  data:
    driver: local

networks:
  proxy:
    external: true
```

Docker Compose template for Open WebUI

Once the stack is running, you can set up your user account. The final step is to allow access to the WebUI via a proxy host, our Nginx Proxy Manager.

#### What Can I Do With Open WebUI and LLMs? How-to Tutorials

This tool can be used for a variety of applications involving local- and cloud-based LLMs. A great place to start is the Tutorials page.

#### Using Paid APIs

https://www.youtube.com/watch?v=nQCOTzS5oU0

You can use Open WebUI with paid services like ChatGPT. The video above explains how to set this up.
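Once the stack is up, a quick sanity check from any machine on the network confirms that Traefik is routing to the WebUI. This is just a sketch; the hostname comes from the Traefik labels in the compose file above:

```bash
# Expect an HTTP 200 (or a redirect to the login page) served over the Traefik certificate
curl -I https://openwebui.anita-fred.net
```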
---

### Ollama: A Flexible Solution for Custom Machine Learning Models

> We will use Ollama as a core tool to experiment with Large Language Models running locally on our LLM Workstation.

- Published: 2025-03-21
- Modified: 2025-03-23
- URL: https://homelab.anita-fred.net/ollama/
- Categories: AI and Machine Learning
- Tags: AI, LLM

Ollama is an open-source platform designed for training and deploying custom machine-learning models locally. It enables users to work without relying on cloud services. It supports various model architectures, offering flexibility for diverse applications. Ideal for researchers and developers seeking privacy and control over their data, it facilitates offline AI development and experimentation. (This paragraph was written using the deepseek-r1:32b model running on an Apple Mac Mini M4 Pro.)

We will use Ollama as a core tool to experiment with Large Language Models running locally on our LLM Workstation.

#### Installing and Running the Tool

You can install Ollama by downloading and running the installer. Next, you can choose a model (e.g., llama3.2) and execute the following command to install and run it.

```bash
ollama run llama3.2
```

We'll ask the model to write a paragraph about large language models. Commands to control the model process are entered by starting with a /. Here is a list of options.

#### Exposing Ollama on Our Network

An API that enables applications to interact with or change the running model is available on http://localhost:11434. We want to make the API accessible across our network. We can create a plist to set the OLLAMA_HOST environment variable to "0.0.0.0:11434" to expose the API on our workstation's IP interface. The plist file should be created and saved in ~/Library/LaunchAgents. It is named com.ollama.plist. The plist's contents are shown below.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.ollama</string>
    <key>Program</key>
    <string>/Applications/Ollama.app/Contents/MacOS/Ollama</string>
    <key>EnvironmentVariables</key>
    <dict>
        <key>OLLAMA_HOST</key>
        <string>0.0.0.0:11434</string>
    </dict>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

com.ollama.plist contents

Finally, make sure that Ollama is stopped and run the following command. Then restart Ollama.

```bash
launchctl load ~/Library/LaunchAgents/com.ollama.plist
```

You can now access the API anywhere on your network as http://<workstation-ip>:11434.
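To verify remote access, any host on the LAN can call the REST API directly. A minimal sketch using Ollama's generate endpoint; the workstation address is a placeholder, and the model must already be pulled:

```bash
# Ask for a short completion over the network (stream disabled for readable output)
curl http://192.168.1.50:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```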
---

### Building an LLM Workstation Using Apple Silicon

> We are building a large language model (LLM) workstation and server around a Mac Computer running Apple Silicon. A new Mac Studio M3 Ultra...

- Published: 2025-03-21
- Modified: 2025-03-23
- URL: https://homelab.anita-fred.net/llm-server-workstation/
- Categories: AI and Machine Learning
- Tags: AI, LLM

We are building a Large Language Model (LLM) workstation and server around a Mac Computer and Apple Silicon. My current machine is a Mac Mini M4 Pro. The specs for this machine are -

- M4 Pro Chip
- 48 GB of Unified Memory
- 16 CPU Cores
- 20 GPU Cores
- 16-Core Neural Engine
- 2 TB SSD Storage

We have a new Mac Studio M3 Ultra coming. This upgrade should give us considerably more processing power for my LLM Workstation. The specs for the new machine are -

- M3 Ultra Chip
- 512 GB of Unified Memory
- 32 CPU Cores
- 80 GPU Cores
- 32-Core Neural Engine
- 2 TB SSD Storage
- 5 TB external Thunderbolt 5 SSD Storage

The setup has a pair of Apple 5K Studio Displays. They allow me to use it as my primary desktop machine. Our LLM Workstation is a good platform for learning about Artificial Intelligence and Machine Learning and for our planned AI projects. We will set up our Workstation to run LLMs continuously. We will expose these LLMs via our Home Network. A web (Open WebUI) interface will connect them to other computers and smart devices around our home. Our current Docker setup will be used for this purpose.

---

### Artificial Intelligence and Machine Learning

> We are experimenting with Artificial Intelligence and Machine Learning. We are building a Workstation using Apple Silicon.

- Published: 2025-03-21
- Modified: 2025-03-24
- URL: https://homelab.anita-fred.net/artificial-intelligence/
- Categories: AI and Machine Learning
- Tags: AI, LLM

We are interested in learning about and experimenting with Artificial Intelligence and Machine Learning, and our plans include these projects:

- Setting up an environment to run Large Language Models (LLMs) locally on a high-performance Mac system
- Exposing our LLMs via a web interface so that we can use them within our network from multiple devices
- Creating specialized versions of publicly available LLMs using Retrieval-Augmented Generation (RAG) and LLM fine-tuning techniques
- Running benchmarks using various LLMs on our Apple Silicon Workstation and Server
- Exploring Artificial Intelligence applications, including software generation, image generation, and domain-specific learning

We are building a high-performance Workstation and Server using Apple Silicon. It should be ideal for these and many other projects. Are you wondering what you can do with all of this? Here's a simple project that we did using our AI platform.

---

### Raspberry Pi Home Server

> While it's nice to have enterprise-grade equipment in our Home Lab, I wanted to build something simpler: a Raspberry Pi Home Server...

- Published: 2025-03-11
- Modified: 2025-05-02
- URL: https://homelab.anita-fred.net/raspberry-pi-home-server/
- Categories: Raspberry Pi
- Tags: CasaOS, Docker, Media, NAS, Server, Storage

PiNAS - A Raspberry Pi Home Server and NAS

While having enterprise-grade equipment in our Home Lab is nice, I aimed to build something simple and inexpensive for beginners. The solution is a simple Raspberry Pi Home Server.

#### Project Objectives

Raspberry Pi Home Server Running CasaOS

Many applications and services can be hosted on a home server. For this project, we chose a basic set of capabilities for our Raspberry Pi Home Server project:

- Sharing files on a home network via Network Attached Storage (NAS) - photos, music, videos, documents, ...
- A DNS server to create easy-to-remember names to access IP devices and services (192.168.1.xxx vs. your-service.your-domain.net)
- Creating a personal domain via Cloudflare and obtaining a signed SSL certificate for your web services
- Setting up a reverse proxy to tie it all together in a secure fashion
- Serving media for viewing across TVs, PCs, and smart devices (phones, tablets)
- Keeping your devices and apps up and working via monitoring

This project also offers an opportunity to learn about and use modern IT technology, and one can build upon it to add applications to:

- Create a website and share it with the world
- Build a "Smart Home"
- Add a Home Lab dashboard
- ...

We'll be making use of Docker for much of this project. Sample Docker Compose files are included for many of the applications that follow. Note that the files will need some adjustments. In particular, replace items with your custom values. Use strong passwords. Keep your passwords and API keys secure.

#### Raspberry Pi Home Server Hardware

PiTech Home Server and NAS

We recommend a Raspberry Pi 4B or Pi 5 system with 8 GB of RAM for your home server. For storage, we recommend an SSD device for reliability and capacity reasons. Below are links to systems that we've built:

- PiNAS - RPi 5 system with a 2 TB NVMe drive
- PiNAS 2 - RPi 5 system with 4 x 2.5" SSDs
- PiLab - RPi 4B system with a 1 TB 2.5" SSD
- PiTech (coming soon) - RPi 5 system with a 2 TB 2.5" SSD
5" SSD If you buy new hardware to build your home server, I recommend a system like PiNAS. The PiLab and PiTech systems are good choices. These options are ideal if you already have a Raspberry Pi 4B or Raspberry Pi 5. Make sure you also have a suitable 2. 5" SSD drive available. Back to top Project Prerequisites The prerequisites below are needed for this project. We suggest that you finish these items in place before you start the rest of the steps outlined in the next sections - Raspberry Pi Hardware with adequate storage Broadband Internet and a Router that you can set up to: Reserve IP addresses for your devices Open ports to the Internet A free account on Cloudflare Choose and register a domain on Cloudflare Suggest using a Password Manager like Dashlane to generate, encrypt, and store strong passwords Be sure to set up 2-factor Authentication for access to your Password Manager Back to top OS and CasaOS Installation https://player. vimeo. com/video/1064753965 The video above covers the steps to install Raspberry Pi (RPi) OS and CasaOS on your Raspberry Pi. The steps are as follows - Assemble your hardware and connect the RPito your network - use a wired Ethernet Install RPi OS 64-bit Lite via network install Update your RPi OS Set up the OS Generate a key pair and set up SSH Set up a reserved IP address for your RPi in your router The exact procedure depends on your router model Install CasaOS $ curl -fsSL https://get. casaos. io | sudo bash Set CasaOS login and password Once CasaOS is up and running, we recommend doing the next steps - Set up a fixed IP assignment for your server in your router's DHCP settings (note the IP - we'll use it in the steps that follow) Set a strong Linux password via the CasaOS terminal Change CasaOS port from 80 to 9095 Back to top Setup Network Attached Storage (NAS) CasaOS File Shares You can use the Files app in CasaOS to share your folders on your network. These shares are not password-protected and can be viewed by anyone who can access your home network. Password Protecting CasaOS Shared Folders This can be done by manually configuring Samba file sharing in Linux. First, set up and share all of your main folders in CasaOS. This is necessary as adding extra shared folders will overwrite the changes we will make here. Next, we must create a user and set a password for file sharing. The commands below will create a user called shareuser and set a password for the user. $ sudo adduser --no-create-home --disabled-password \ --disabled-login shareuser$ sudo smbpasswd -a shareuser The second command prompts you to enter a password to access protected shared folders. The CasaOS Terminal can filter certain characters in your password. It is best to run these commands via SSH from another computer. Now, we can turn on password protection for our shared folders by editing /etc/samba/smb. casa. conf using the following command. $ sudo nano /etc/samba/smb. casa. conf You can protect each share by modifying the lines shown in bold in the example below for the share. comment = CasaOS share Mediapublic = Nopath = /DATA/Mediabrowseable = Yesread only = Yesguest ok = Novalid users = shareuserwrite list = shareusercreate mask = 0777directory mask = 0777force user = root When you are done making the changes, run the following command to apply your changes. $ sudo service smbd restart Your shared folders are now password-protected. When accessing them from your Mac or Windows PC, you will be prompted to enter your user name, which is shareuser. 
#### A First Application - OpenSpeedTest

OpenSpeedTest

We'll use the CasaOS App Store to install a simple speed test application called OpenSpeedTest on our home server. We'll use the Utilities version of this app. Once our speed test is installed, we...

---

### Home Media System

> One reason to have a home network is to create a home media system that enables us to use our content collection anywhere in our home.

- Published: 2025-02-25
- Modified: 2025-03-21
- URL: https://homelab.anita-fred.net/home-media-system/
- Categories: Media
- Tags: Media

Our Home Theater

We have been into home theater and whole-house video systems for many years. As a result, we have purchased a large collection of movies, albums, TV shows, and other video content on discs. One of the main reasons for our investment in our Home Network and our Homelab was to create a home media system that enables us to use our extensive content collection anywhere in our home. This project also enables us to access our content from our cars and while traveling.

We own all of the content that we use. Our purchased content is in the form of physical discs (mostly) and some high-quality downloaded audio content from HDTracks. We respect the rights of the original creators of the content that we use.

#### Our Content Lineup

Apple TV Interface - Apps for Home Media System

We subscribe to the following online services for streaming content:

- Hulu for our main streaming channel lineup
- Hallmark Plus for Movies and TV Shows
- Great American Pure Flix for Movies and TV Shows
- Disney Plus for Movies and TV Shows
- Apple TV Plus for Movies and TV Shows
- Amazon Prime Video for Movies and TV Shows
- Netflix for Movies and TV Shows
- YouTube Premium for short video content

We use a combination of AppleTV 4K clients, our iPhones/iPads, and our macOS laptops to run apps to access all of these services.

#### Home Media System Architecture

Whole House Media Architecture

Our Home Network and NAS storage play a central role in our media architecture. We can source content from multiple sources, including the Internet, local NAS storage, or our off-air IP tuners. Our content can be viewed or played on any of the TVs and in our Theater via AppleTV 4K clients. Content can also be viewed and played on our iPhones, iPads, and laptops via the Internet. The AppleTV 4Ks, iPhones, iPads, and laptops in our home run applications that provide consistent user interfaces to all of our content stores and streaming services across all of our devices.

Central to our architecture is a Synology NAS, which provides content storage and servers that organize and stream our stored video to all of our devices. We consume most of our content by streaming it in its native resolution over our network to our Apple TV 4K clients, macOS laptops, and iPhones/iPads. Our client devices then convert the video and audio into a format that we can view. Our Media NAS can also transcode content to formats that require reduced streaming bandwidth. This is useful when we use our content on the go via the Internet. Our NAS does not provide hardware transcoding, but it does well enough transcoding content with 1080p resolution or less.

We use wired Ethernet connections for all of the media devices within our home to ensure a quality viewing experience for 4K content and multi-channel audio. Our iPhone and iPad devices play our content fine using our WiFi network.
Our Synology NAS devices provide snapshots and backups for all of our content, including offsite backups via the Internet and immutable snapshots for ransomware protection.

#### Off-Air Video Tuners

IP Off-Air Tuners

We also receive our local channels in off-air digital formats using HDHomeRun tuners. We have three of these, including one that can receive 4K transmissions. These tuners provide IP/Ethernet interfaces to Channels DVR (see below) over our home network.

#### Synology NAS Media Server

Main Server Rack and NAS Storage Rack

We use a Synology RS1224+ NAS in our server rack (the NAS device below the drawer in the rack above) to store our purchased movie and audio content. This NAS runs two media server applications that make our content easier to use:

- Channels DVR - Consolidates our Hulu streaming channels and our off-air tuners into a channel lineup and guide. Channels DVR also provides recording and playback of these sources to and from our Media NAS storage.
- Plex Media Server - Organizes all our videos into visually appealing libraries and adds posters, ratings, cast lists, season and episode details, and more.

Channels DVR Server and Plex Media Server run as native packages on our Synology Media NAS. Information on downloading and installing these servers is available via these links:

- Download and install Channels DVR Server
- Download and install Plex Media Server

#### TV and Theater Clients

Apple TV UI on Screen

We use Apple TV 4K media streaming devices in our Home Theater and on all of the TVs in our house.

Apple TV 4K

The AppleTV 4Ks run apps that provide interfaces to all of our content services and our Channels DVR Server and Plex Media Server. They are capable of displaying 4K content and support Dolby ATMOS multi-channel sound. Our Apple TV 4Ks connect to the Internet and our Media NAS via our Home Network. We use wired Ethernet interfaces to connect our AppleTVs to ensure a consistently good viewing experience.

#### Channels DVR

Channels DVR can consolidate channels from TV Everywhere-capable streaming services and off-air tuners like those from HDHomeRun into a single channel lineup and program guide. It also records channels onto our Media NAS storage. Channels DVR provides a host of features, including commercial skipping. You can learn more about the features of Channels DVR here.

#### Channels DVR Server Optimizations

We've implemented a few optimizations of our Channels DVR Server to improve its performance. These include:

- Creating an SSD volume for the Channels DVR Server application and associated metadata. This optimization required separating the recording storage from the app's storage. The recordings are stored on an HDD-based volume on the hosting Synology NAS.
- Creating an NVMe read-only cache for the Synology storage pool that stores our recordings.

These optimizations speed up Channels DVR significantly.

#### Plex Media Server

Plex Media Server

We use Plex Media Server to store and organize all of our purchased video and audio content. Here is a video that provides a brief overview of Plex Media Server, what it does, and how to get started setting it up -

https://player.vimeo.com/video/271752262

The following are some feature summaries from the Plex Media site: Movies - Organize all your videos into beautiful libraries...
---

### Raspberry Pi NAS 2

- Published: 2025-02-25
- Modified: 2025-02-28
- URL: https://homelab.anita-fred.net/pi-nas-2/
- Categories: General, Raspberry Pi, Storage
- Tags: Docker, Raspberry Pi, Server, Storage

Raspberry Pi NAS 2

We've built a second NAS and Docker environment using another Raspberry Pi 5. This NAS features four 2.5 in 960 GB SSD drives in a RAID-0 array for fast shared storage on our network.

#### Raspberry Pi NAS Hardware Components

Raspberry Pi 5 Single Board Computer

We use the following components to build our system:

- Raspberry Pi 5 SBC with 8 GB memory
- An Active Cooler for the Raspberry Pi 5
- Plugable 2.5 GbE USB-C Ethernet Adapter
- A Radxa Penta SATA HAT, which provides a "case" and mounting for four SATA drives
- A Radxa Penta SATA Top Board, which adds a fan for cooling and a status display; it is required to avoid overheating the components
- A 12 Vdc, 10 A power supply
- A Sabrent SATA-to-USB cable to connect an additional SSD drive to hold the OS
- Five 960 GB 2.5" SSD drives - four for storage and one for OS storage and boot - which I had available from a previous project

The following video covers the hardware assembly -

https://www.youtube.com/watch?v=l30sADfDiM8

We used a 2.5 GbE USB adapter to create a 2.5 GbE network interface on our NAS.

2.5 GbE USB Adapter

The configuration of the Fan/Display HAT top board is covered here.

FAN/Display Top Board

This board comes as a kit that includes spacers to mount it on top of the Raspberry Pi 5/SSD drive interface HAT in the base kit.

#### Software Components and Installation

We installed the following software on our system to create our NAS:

- Raspberry Pi OS 64-bit Lite version - for system configuration and general applications
- CasaOS - for the Docker environment and container applications

#### CasaOS

CasaOS Web UI

CasaOS is included to add a very nice GUI for managing file sharing and Docker containers. Here's a useful video on how to install CasaOS on the Raspberry Pi.

#### Installation

The first step is to install the 64-bit Lite version of Raspberry Pi OS. This is done by first installing a full desktop version on a flash card and then using Raspberry Pi Imager to install the Lite version on our SSD boot drive. We did this on our macOS computer using the USB-to-SATA adapter and balenaEtcher. We used the process covered in the video above to install CasaOS.

#### Creating a RAID

We chose to create a RAID-0 array using the four SSD drives in our NAS. Experience with SSD drives in a light-duty application like ours indicates that this approach will be reasonably reliable. We also back up the contents of the NAS daily via rsync to one of our Synology NAS drives.

RAID-0 Storage Array

CasaOS does not provide support for RAID, so this is done using the underlying Linux OS. The process is explained here.
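In outline, the array is created with mdadm. The sketch below is a hedged example assuming the four SSDs appear as sda through sdd and the array is mounted at /DATA; confirm your device names with lsblk before running anything, as mdadm --create is destructive:

```bash
# Build a 4-drive RAID-0 array and put an ext4 filesystem on it
sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /DATA

# Persist the array definition and the mount across reboots
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
echo '/dev/md0 /DATA ext4 defaults 0 0' | sudo tee -a /etc/fstab
sudo update-initramfs -u
```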
#### File Share

CasaOS makes all of its shares public and does not password-protect shared folders. While this may be acceptable for home use where the network is isolated from the public Internet, it certainly is not a good security practice. Fortunately, the Debian-derived Linux distro we are running includes Samba file share support, which we can use to protect our shares properly. This article explains the basics of how to do this. Here's an example of the information in smb.conf for one of our shares -

```
path = /DATA/Public
browsable = yes
writeable = Yes
create mask = 0644
directory mask = 0755
public = no
comment = "General purpose public share"
```

You will also need to create a Samba user for your Samba shares to work. Samba user privileges can be added to any of the existing Raspberry Pi OS users with the following command -

```bash
sudo smbpasswd -a <username>
```

It's also important to correctly set the shared folder's owner, group, and modes. We need to restart the Samba service anytime configuration changes are made. This can be done with the following command -

```bash
sudo systemctl restart smbd
```

---

### IT Tools

> We have found a useful Docker application for Homelab folks. It's called IT Tools and it is run using a Docker container with Traefik...

- Published: 2025-01-27
- Modified: 2025-01-27
- URL: https://homelab.anita-fred.net/it-tools/
- Categories: Docker
- Tags: Docker, Tools

IT Tools

We have found a useful Docker application for Homelab folks. It's called IT Tools. It can be run as a Docker container. Our installation uses our Traefik reverse proxy.

https://www.youtube.com/watch?v=CbIASgzUIUU&t=182s

The video above covers the installation via Docker Compose.

---

### Proxmox Test Node

- Published: 2025-01-10
- Modified: 2025-02-28
- URL: https://homelab.anita-fred.net/test-node/
- Categories: Server
- Tags: Proxmox, Server

Proxmox Lab Node - AMD NUC Computer

We have built a Proxmox single node using an AMD NUC computer for testing and learning purposes. The hardware configuration for this system is as follows:

- GMKtec K8 Plus Mini PC, AMD Ryzen 7 8845HS processor with 8C/16T, up to 5.1 GHz, with a 1 TB NVMe SSD
- Crucial 96 GB DDR5 5600 MHz memory upgrade
- 2 TB Samsung 990 EVO Plus M.2 NVMe SSD as a second drive
- M.2 SSD 5 mm low-profile heat sink for the second NVMe
- Two Plugable 2.5 GbE USB-C Ethernet Adapters for additional network interfaces

#### Proxmox Installation/ZFS Storage

Proxmox installation is straightforward. We used the same procedure as our Production Cluster. The two NVMe drives were configured as follows:

- 1 TB NVMe - ZFS-formatted boot pool named rpool
- 2 TB NVMe - ZFS-formatted pool named zfsa, mount point zfsa_mp

#### Networking Configuration

The networking configuration on our test node mirrors the setup in our Production Cluster. The table above outlines the Test Node networking setup. We could not configure one of the ports on the host system to operate above 500 Mbps.

#### Storage Configuration

Proxmox Test Node Storage Configuration

The table above shows the storage configuration for our Test Node. NUC-storage is implemented on our high-availability NAS. Access is provided to both the Production Cluster and NUC Proxmox Backup Server datastores (more info here).

#### Proxmox Backup Server Configuration

Backups for our Test Node mirror the configuration and scheduling of backups on our Production Cluster (more info here).

#### Additional Configuration

The following additional items are configured for our test node:

- Use of the No-Subscription Repository (sketched below)
- SSL certificate from Let's Encrypt
- Postfix for e-mail forwarding
- Clock sync via NTP
- Monitoring via built-in InfluxDB and Grafana
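For reference, switching a node to the no-subscription repository amounts to a couple of file edits. This is a hedged sketch for Proxmox VE 8 on Debian bookworm; verify the release name and file names against your installation:

```bash
# Disable the enterprise repos (they return errors without a subscription)
sudo sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
sudo sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/ceph.list

# Enable the no-subscription repo, then refresh the package index
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" | \
    sudo tee /etc/apt/sources.list.d/pve-no-subscription.list
sudo apt update
```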
---

### Raspberry Pi Servers

> We use Raspberry Pi Single-Board computers (SBCs) for various server applications in our Home Lab. Applications include DNS, NAS, Proxmox...

- Published: 2025-01-04
- Modified: 2025-05-29
- URL: https://homelab.anita-fred.net/raspberry-pi/
- Categories: Raspberry Pi
- Tags: Raspberry Pi

Raspberry Pi Rack Mount System

We use Raspberry Pi (RPi) Single-Board Computers (SBCs) for various server applications in our Home Lab.

#### Uctronics Raspberry Pi Rack Mount

We use rack-mount cases from Uctronics for many of our RPis. Rack-mounting our RPis takes less space and enables additional features, including Solid-State Disk (SSD) storage and displays. We've added a PoE HAT to each of our RPis to allow powering the units via Ethernet.

Pi Rack Module

These cases feature removable rack-mount carriers for four Raspberry Pi single-board computers (SBCs). The package includes boards that enable SSD storage as the RPi's main drive. Each Pi Rack module adds some nice features for the associated Raspberry Pi, including:

- A display showing the RPi's IP address, operating parameters, and temperature
- Front panel access to the SD card slot
- SSD storage via either a 2.5" SSD drive (4B model) or an NVMe drive (5 model)
- Convenient access to the Pi's USB connections
- Indicator lights for SSD and SD card activity

The procedures for installing the software for the display can be found here. The following configuration changes are sometimes required to enable the RPi's IP address to be displayed:

```bash
# Turn off deterministic network names
sudo raspi-config      # change the option under 'Advanced'

# Add the following to the end of the line in the boot command line
sudo vi /boot/firmware/cmdline.txt
#   net.ifnames=0 biosdevname=0
#   :wq

sudo reboot
```

The case also includes cooling fans to keep the RPis cool. The following are some server applications that run on Raspberry Pi systems in our Home Lab.

#### Network UPS Tools (NUT) Servers

We use the Network UPS Tools software running on Raspberry Pi computers to manage our critical UPS devices. This software allows us to remotely monitor their operational condition. It also enables our storage devices and servers to sense when a complete backup power loss is imminent and perform a controlled shutdown to protect themselves and the data that they store. You can find a summary of the available features here. You can find more information on our NUT servers here.

#### PiLAB

We built a Raspberry Pi 4B system to demonstrate using an RPi to build a simple home server. The hardware used for PiLAB is as follows:

- An 8 GB Raspberry Pi 4B Single-Board Computer
- A Raspberry Pi PoE+ HAT with Low-Profile Heatsink
- A 1 TB 2.5" SSD drive

CasaOS Running On PiLAB

We are running CasaOS on PiLAB, which provides a simple GUI interface for managing file sharing and Docker containers. The applications running as Docker containers on PiLAB include:

- PiHole ad-blocking DNS
- A local speed test server (OpenSpeedTest)
- An Internet speed monitor (MySpeed)
- A simple SMB-based NAS
- Portainer for managing Docker containers

#### PiNAS

Raspberry Pi NAS

We also built a variation of PiLAB we call PiNAS. This system uses a Raspberry Pi 5 SBC running CasaOS and a larger NVMe drive to build a more capable NAS. PiNAS is installed in a stand-alone case with an external power supply. You can learn more about PiNAS here.

#### PiHole Server via Docker

PiHole in Docker

We have set up a Raspberry Pi 5 system to run a third PiHole DNS server in our network. This ensures that DNS services are available even if our other servers are down. This system is installed in our Raspberry Pi 5 Rack Mount system. You can learn more about this system here.
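As a rough sketch of what such a PiHole deployment looks like, the container needs the DNS ports and a persistent config volume. The timezone, password, and admin port below are placeholders, and recent PiHole images have moved to FTLCONF_-style environment variables, so check the image documentation for your version:

```bash
# Run PiHole as the host's DNS server, with its admin UI moved off port 80
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp -p 8081:80/tcp \
  -e TZ=America/New_York \
  -e WEBPASSWORD=changeme \
  -v ./etc-pihole:/etc/pihole \
  -v ./etc-dnsmasq.d:/etc/dnsmasq.d \
  --restart unless-stopped \
  pihole/pihole:latest
```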
#### Raspberry Pi 5 Proxmox Cluster

We built a three-node Proxmox cluster using Raspberry Pi 5 SBCs. This system is installed in our Raspberry Pi 5 Rack Mount system. Each node includes the following hardware:

- Raspberry Pi 5 with 16 GB RAM
- 2 TB NVMe storage drive
- PoE HAT for power
- Two Plugable 2.5 GbE USB-C Ethernet Adapters on each RPi for additional network interfaces

https://www.youtube.com/watch?v=DOuWlZ96Xww

We used the procedure in the video above to install Proxmox on our Raspberry Pi 5 nodes. This is a high-availability cluster for application testing before deployment on our production Proxmox cluster. It also uses our Proxmox Backup Server and the upcoming Proxmox Data Center Manager.

#### Networking Configuration

We added two USB-C 2.5 GbE adapters to each node to enable the networking configuration in the table above.

---

### Wallabag

> We're running a Docker container called Wallabag, which can be used to save cleanly formatted copies of web pages and articles...

- Published: 2024-12-10
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/wallabag/
- Categories: Docker, Website
- Tags: Docker, Website

Wallabag

We're running a Docker container called Wallabag, which can be used to save cleanly formatted copies of web pages and articles. Smartphone (iPhone and Android) apps are available for Wallabag, making viewing the saved content easy on the go. The following video explains what Wallabag does.

https://www.youtube.com/watch?v=mY0KuiabKuE

#### Wallabag Installation

This video covers the installation of Wallabag using Docker Compose. It can run on any Docker host.

https://www.youtube.com/watch?v=2RWhkCVJZcs&t=1512s

Wallabag in Docker

This video also contains some good information on sources for other self-hosted apps.

---

### RSS Hub

> We're running a Docker container called RSS Hub, which detects RSS feeds available on websites that we browse...

- Published: 2024-12-09
- Modified: 2025-02-28
- URL: https://homelab.anita-fred.net/rsshub/
- Categories: Docker, Website
- Tags: Docker, Website

RSS Hub

We're running a Docker container called RSSHub, which detects RSS feeds available on websites that we browse. You can learn more about RSS Hub here.

#### RSS Hub Installation

This page covers the installation and use of the RSSHub container and the associated RSSHub Radar Chrome browser extension.

How to Install RSS Hub in Docker

We use the BazQux RSS Reader with our installation. The following shows our configuration for the RSS Hub Radar Chrome extension, which is used with RSSHub -

RSS Hub Radar Chrome Extension Configuration

---

### Docker Monitoring

> Our Docker monitoring dashboard solution covers two aspects of Docker container performance. It uses Grafana to implement...

- Published: 2024-07-12
- Modified: 2025-04-19
- URL: https://homelab.anita-fred.net/docker-monitoring/
- Categories: Docker, Logging and Monitoring
- Tags: Docker, Monitoring

Docker Node Exporter Dashboard

Many of our applications and services run as Docker containers. Our monitoring dashboard solution covers several aspects of Docker performance:

- Docker host VM performance via Node Exporter
- Windows VM performance via Windows Exporter
- Docker container performance via cAdvisor

These data collectors enable several Grafana dashboards that help us manage our Docker cluster.

#### Monitoring Setup

We run a combination of Node Exporter and cAdvisor on each of our Docker host VMs. These containers scrape data for our Docker hosts and feed it to the Prometheus instance in our Docker stack.
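A minimal sketch of the two collectors on a single host, based on the standard invocations from the upstream documentation (the Prometheus instance then scrapes ports 9100 and 8080; adjust ports and mounts to your environment):

```bash
# Node Exporter: host-level CPU/memory/disk metrics on :9100
docker run -d --name node-exporter --restart unless-stopped \
  --net host --pid host -v /:/host:ro,rslave \
  quay.io/prometheus/node-exporter:latest --path.rootfs=/host

# cAdvisor: per-container metrics on :8080
docker run -d --name cadvisor --restart unless-stopped -p 8080:8080 \
  -v /:/rootfs:ro -v /var/run:/var/run:ro -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  gcr.io/cadvisor/cadvisor:latest
```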
The following video explains how all of this is set up -

#### Dashboards

We are using several dashboards to implement our Docker monitoring solution.

##### Docker Node Summary

Docker Host Summary Dashboard

We are using a modified version of the Grafana dashboard above to monitor the overall performance of our Docker nodes.

##### Docker Node Details

Docker Host Details

We are using a modified version of the Grafana dashboard above to monitor and enable drilling into detailed performance metrics for our Docker nodes.

##### Docker Container Summary

Docker Container Summary Dashboard

We are using a modified version of the Grafana dashboard above to provide a summary view of the containers in our Docker cluster.

##### Docker Container Details

Docker Container Details Dashboard

We are using a modified version of the Grafana dashboard above to monitor and enable drilling into the detailed performance of containers in our Docker cluster.

##### Windows VM Dashboard

Windows VM Dashboard

We are using a modified version of the Grafana dashboard above to monitor and enable drilling into the performance of Windows VMs in our Docker cluster.

---

### Proxmox Monitoring

> We set up a Grafana Dashboard to monitor our Proxmox Cluster. The main components in our Proxmox monitoring stack include...

- Published: 2024-07-08
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/proxmox-monitoring/
- Categories: Docker, Logging and Monitoring, Server
- Tags: Cluster, Docker, Encryption, Monitoring, Proxmox

Proxmox Cluster Metrics

We set up a Grafana Dashboard to implement Proxmox monitoring. The main components in our monitoring stack include:

- A Proxmox Metric Server running in our Proxmox Cluster
- InfluxDB (part of our Grafana Logging and Monitoring stack) to store the metrics data
- A Grafana dashboard to display the Proxmox metrics

The following sections cover the setup and configuration of our monitoring stack.

#### Proxmox Monitoring Setup

The following video explains how to set up a Grafana dashboard for Proxmox. This installation uses the monitoring function built into Proxmox to feed data to InfluxDB. And here is a video that explains setting up self-signed certificates -

Configuring Self-Signed Certificates

We are using the Proxmox dashboard with our setup.

---

### File Browser

> File Browser is a simple Docker container that provides a file manager in your web browser. It is helpful to have access to files...

- Published: 2024-07-08
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/filebrowser/
- Categories: Docker, Storage
- Tags: Docker, Storage

File Browser

It is helpful to have access to the files and directories associated with our Docker persistent volume stores. File Browser is a simple Docker container that provides a file manager in your web browser.

#### Installation

The following video covers the installation and use of the File Browser container.

File Browser Installation and Use
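As a quick sketch of what the container needs, it serves a root directory and keeps its state in a small database file. The paths and port below are examples, and newer image releases have moved the database location, so check the project README for your version:

```bash
# The database file must exist before mounting, or Docker creates a directory instead
touch filebrowser.db

# Web file manager for the mounted directory, UI on :8085
docker run -d --name filebrowser --restart unless-stopped \
  -p 8085:80 \
  -v /mnt/nfs/docker:/srv \
  -v "$(pwd)/filebrowser.db:/database.db" \
  filebrowser/filebrowser
```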
Setup Logging and Monitoring in Docker

Configure Loki and Promtail

Grafana Loki and Promtail work together to scrape and store log data. These tools can scrape Docker data and accept syslog data as well. The following video explains how to configure Loki and Promtail.

Configure Grafana Loki and Promtail for logs

There are a few details that we needed to do differently than the video:

- We had to configure a tsdb schema for Loki
- The links for configuring the Loki Docker driver can be found here and here.
- Set parameters in the Loki Docker driver via /etc/docker/daemon.json to avoid blocking the Docker daemon.
- Recreating containers with Portainer does not enable Loki to access their logs. To make this work, we needed to use docker compose up -d --force-recreate

The contents of /etc/docker/daemon.json are as follows:

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://localhost:3100/loki/api/v1/push",
    "loki-batch-size": "400",
    "loki-retries": "2",
    "loki-max-backoff": "800ms",
    "loki-timeout": "1s",
    "keep-file": "true",
    "mode": "non-blocking"
  }
}
```

Syslog

We have configured a combination of Loki and Promtail to accept syslog events. Promtail does not support syslog events using the UDP protocol. To solve this problem, we set up rsyslog on the Ubuntu system that hosts the Promtail Docker container to consolidate and forward all syslog events as a front end to Promtail. Information on configuring rsyslog as a front end to Promtail can be found here.

Monitoring Dashboards

The following video provides some information on configuring dashboards and other monitoring capabilities.

Create and Configure Grafana Dashboards

---

### VS Code Server
> VS Code Server allows editing using a web browser on any computer. The VS Code web interface is hosted from a server...

- Published: 2024-07-05
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/code-server/
- Categories: Docker, Software Development
- Tags: Docker

We do a variety of software development and Java coding tasks. To make this easier and more accessible from all our computers, we use VS Code and VS Code Server. This tool allows editing using a web browser on any computer. The VS Code web interface is hosted from a server running in a Docker container.

Installation and Set Up

The following video explains how to set up the tool and connect it to a GitHub repository.

VS Code Server Installation and Set Up

VS Code Extensions

The following video suggests several useful VS Code plugin extensions.

---

### WireGuard VPN
> The WireGuard VPN server built into our Unifi System provides secure connections to our iPhones, iPads, macOS, and Windows systems.

- Published: 2024-07-05
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/wireguard-vpn/
- Categories: Network
- Tags: Network, VPN

Our Unifi system can support several different VPN configurations. We used the VPN server built into our Unifi Dream Machine SE and configured it to use WireGuard clients on our iPhones, iPads, macOS laptops, and Windows laptops. The Unifi system makes setting up our WireGuard VPNs simple. The following video explains the various VPN options and how to configure them.

5 Types of VPNs on Unifi and How To Configure Them

We use DDNS to ensure that our domains point to our router when our ISPs change our IP address. After the clients are installed, they are updated to point at our network's current IP.

---

### Adminer
> We run Adminer in a container to enable the configuration and editing of MySQL databases. Information on configuring Adminer...
- Published: 2024-07-05
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/adminer/
- Categories: Docker, Website
- Tags: Website

We run Adminer in a container to enable the configuration and editing of MySQL databases. Information on configuring Adminer can be found here.

---

### Dashy
> We have created many websites and services for our Home Lab. It's nice to have an organized dashboard to access these tools. We use Dashy...

- Published: 2024-07-05
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/dashy/
- Categories: Docker
- Tags: Dashboard

Dashy dashboard

We have created many websites and services for our Home Lab. It's nice to have an organized dashboard to access these tools. We use a dashboard tool called Dashy for this purpose. Dashy runs in a Docker container and is easy to configure. The following video explains how to set up and configure Dashy.

How To Set Up and Configure Dashy

---

### Iperf3
> Iperf3 is a common tool for network performance testing. We run an Iperf3 server in a Docker container. You can find information...

- Published: 2024-07-05
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/iperf/
- Categories: Docker, Network
- Tags: Network

Iperf3

Iperf3 is a common tool for network performance testing. We run an Iperf3 server in a Docker container. You can find information on how to set up and use Iperf3 here.

---

### Speedtest Tracker
> We run a docker container called Speedtest Tracker to monitor the performance of our Internet Connection. The article covers setup...

- Published: 2024-07-04
- Modified: 2025-02-28
- URL: https://homelab.anita-fred.net/speedtest-tracker/
- Categories: Docker, Network
- Tags: Internet

We run a Docker container called Speedtest Tracker to monitor the performance of our Internet connection. This container is a self-hosted application that monitors the performance and uptime of our Internet connection. It uses test servers on the Internet to measure our upload and download speeds, latency, and jitter. The main use case for this tool is to build a history of your Internet performance and your ISP's uptime so you can tell when you're not receiving your ISP's advertised rates.

Setup and Configuration

This container is easy to set up in Docker. We used the process in the video below -

We also configured the container to store test results in our InfluxDB.

Speedtest Tracker Grafana Dashboard

This allows us to configure a Grafana Dashboard to view the results of our tests.

Speedtest Tracker Results

The Grafana Dashboard that we used can be found here. You can learn more about how we have deployed and configured Grafana dashboards in our Home Lab here.

---

### Nginx Proxy Manager
> Many services and devices in our home lab have web interfaces. We secure many of them using Nginx Proxy Manager as a reverse proxy.

- Published: 2024-07-04
- Modified: 2025-03-01
- URL: https://homelab.anita-fred.net/nginx-proxy-manager/
- Categories: Docker, Network
- Tags: Docker

Many services and devices in our home lab have web interfaces. We secure many of them using Nginx Proxy Manager as a reverse proxy. Traefik Reverse Proxy provides ingress control and SSL certificates for our Docker services. While Traefik can be used for services outside Docker, configuring it is complex and requires restarting the Traefik container. As a result, we also run Nginx PM in a container to enable SSL certificates and simple reverse proxy configuration of our web-based services outside of Docker. A quick sketch of such a container appears below.
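As a sketch, a minimal stand-alone deployment of Nginx Proxy Manager looks like this (the jc21/nginx-proxy-manager image is the upstream default, and port 81 serves the admin UI; our actual container is attached to a macvlan network, as described below):

```bash
docker run -d --name nginx-proxy-manager --restart unless-stopped \
  -p 80:80 \
  -p 81:81 \
  -p 443:443 \
  -v "$PWD/data:/data" \
  -v "$PWD/letsencrypt:/etc/letsencrypt" \
  jc21/nginx-proxy-manager:latest
```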
Nginx Proxy Manager Installation

Installation is easy. The following video explains the process, including using a DNS-01 challenge to obtain SSL certificates via Let's Encrypt. We configured a Docker macvlan network for the Nginx PM container so that the proxy can determine the source IP addresses that access it. This enables IP filtering and other features.

---

### Uptime Kuma
> We use a tool called Uptime Kuma to monitor the operational status of our home lab. Uptime Kuma can monitor many different...

- Published: 2024-07-04
- Modified: 2025-04-19
- URL: https://homelab.anita-fred.net/uptime-kuma/
- Categories: Docker
- Tags: Monitoring

As our Home Lab and the associated network become more complex, monitoring the operational status of our services and equipment becomes essential. We use a tool called Uptime Kuma to monitor the operational status of our home lab. This tool can monitor various types of equipment and services, providing multiple mechanisms to notify us when a service is unavailable.

Uptime Kuma Docker Install

We deployed this tool as a Docker container in our Docker cluster. It is easy to install and configure. We used the following video to help with the installation -

Docker Install

Monitor Local and Remote Docker Hosts

Uptime Kuma can be used to monitor the health of Docker containers running on local and remote Docker hosts. The local Docker host can be monitored by binding /var/run/docker.sock to the Uptime Kuma container. Some additional configuration is required on remote Docker hosts to expose Docker information. The process for setting up both of these cases is covered here.

Performance and Backups

The tool is sensitive to the performance of the volume store that contains its database. For this reason, we bound Uptime Kuma's persistent volume to storage inside the Docker Host VM instead of using our high-availability network store. We also used the root crontab to back up the local VM configuration data to the Docker volume on our high-availability store as follows:

```
# Backup local VM configuration for uptime kuma to HA docker volume
*/15 * * * * /usr/bin/rsync -r --delete /home/ubuntu/uptime-kuma/ /home/ubuntu/docker/uptime-kuma/data
```

---

### Pihole with a Cloudflare Tunnel
> We are running three Pihole installations, which enable load balancing and high availability for our DNS services. We also use Cloudflare...

- Published: 2024-06-30
- Modified: 2025-03-01
- URL: https://homelab.anita-fred.net/pihole/
- Categories: Docker, Network
- Tags: Docker, Network

We are running three Pihole installations, which enable load balancing and high availability for our DNS services. We also use a Cloudflare encrypted tunnel to protect the information in external DNS queries via the Internet. Our three instances are deployed on a combination of Docker host VMs in our Proxmox Cluster and a stand-alone Raspberry Pi Docker host.

Deploy Pihole with a Cloudflare Tunnel

The software stack for our Docker Pihole installs includes the following applications:

- Pihole - Ad blocking DNS server
- Cloudflare Tunnel - For encrypted DNS lookups via the Internet

Our combined stack was created using information in the following video:

Deploy PiHole with Cloudflare Tunnel in Docker

Ubuntu Port 53 Fix

Ubuntu VMs include a DNS caching service on port 53, which prevents Pihole from being deployed. To fix this, run the commands at this link on the host Ubuntu VM before installing the Pihole and Cloudflare Tunnel containers.
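For reference, the linked fix generally amounts to disabling systemd-resolved's DNS stub listener so Pihole can bind port 53. A sketch of the usual commands (verify against the linked procedure before running them):

```bash
# Stop systemd-resolved from listening on 127.0.0.53:53
sudo sed -i 's/^#\?DNSStubListener=yes/DNSStubListener=no/' /etc/systemd/resolved.conf

# Point /etc/resolv.conf at the real resolver configuration
sudo ln -sf /run/systemd/resolve/resolv.conf /etc/resolv.conf

# Apply the change
sudo systemctl restart systemd-resolved
```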
Scheduled Block List Updates

We update our block lists by doing a Gravity pull. We do this daily via a cron job. This can be configured on the RPi host using the following commands -

```
# Edit the user crontab
sudo crontab -u <user> -e

# Add the following to the user crontab
min hr * * * su ubuntu -c "/usr/bin/docker exec pihole pihole -g" | /usr/bin/mailx -s "RPi Docker - Gravity Pull" your-email@mydomain.com
```

---

### Watchtower Container Update
> We are running the Watchtower container on all our stand-alone docker hosts to keep our containers up to date.

- Published: 2024-06-24
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/watchtower/
- Categories: Docker
- Tags: Docker

We are running the Watchtower container on all our stand-alone Docker hosts to keep our containers up to date. The following video explains how to install and configure Watchtower.

Install and Configure Watchtower on Docker

We have Watchtower configured to detect and notify us about updated container images. We install these manually using Portainer.

---

### Cloudflare DDNS
> We use Cloudflare to host our domains and the associated external DNS records. Cloudflare provides excellent security and scaling features...

- Published: 2024-06-24
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/cloudflare-ddns/
- Categories: Docker, Network
- Tags: Docker, Network

We use Cloudflare to host our domains and the associated external DNS records. Cloudflare provides excellent security and scaling features and is free for our use cases. We do not have a static IP address from either of our ISPs. This, coupled with the potential of a failover from our primary to our secondary ISP, requires us to use DDNS to keep the IPs for our domains up to date in Cloudflare's DNS. We run a Docker container for each domain that periodically checks whether our external IP address has changed and updates our DNS records in Cloudflare. The repository for this container can be found here. Deploying the DDNS update container is done via a simple Docker Compose file -

```yaml
version: '2'
services:
  cloudflare-ddns:
    image: oznu/cloudflare-ddns:latest
    restart: unless-stopped
    container_name: your-container-name
    environment:
      - API_KEY=YOUR-CF-API-KEY
      - ZONE=yourdomain.com
      - PROXIED=true
      # Runs every 5 minutes
      - CRON=*/5 * * * *
```

You'll need a separate container for each DNS Zone you host on Cloudflare.

---

### Docker Networking
> Docker can create its own internal networks. There are multiple options here, so this aspect of Docker can be confusing.

- Published: 2024-06-21
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/docker-networking/
- Categories: Docker, Network
- Tags: Docker, Network

Docker can create its own internal networks. There are multiple options here, so this aspect of Docker can be confusing.

Docker Networking Types

The following video explains the Docker networking options and provides examples of their creation and use.

Docker Networking Explained

---

### Traefik Reverse Proxy
> We are using Traefik as a reverse proxy in our Home Lab. Traefik is deployed on our Docker Swarm Cluster and Raspberry Pi Docker server.

- Published: 2024-06-21
- Modified: 2025-03-01
- URL: https://homelab.anita-fred.net/traefik-reverse-proxy/
- Categories: Docker
- Tags: Docker, Security

We are using Traefik as a reverse proxy in our Home Lab. Traefik is deployed on our Docker Cluster and Raspberry Pi Docker server. Traefik is set up to use Let's Encrypt to obtain and renew SSL certificates for our domain. We use a DNS-01 challenge and Cloudflare for this purpose.
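As a sketch, the Traefik side of this reduces to a certificate resolver configured for Cloudflare's DNS-01 challenge. The resolver name `letsencrypt`, the e-mail address, and the storage path below are placeholders, and a Cloudflare API token must be supplied to the container (e.g., via the CF_DNS_API_TOKEN environment variable):

```bash
# Excerpt of Traefik container command flags (sketch, Traefik v2/v3 syntax)
--certificatesresolvers.letsencrypt.acme.dnschallenge=true
--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare
--certificatesresolvers.letsencrypt.acme.email=you@example.com
--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json
```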
The steps required to deploy Traefik are covered in this video:

Deploy Traefik with Lets Encrypt SSL Certificates

We also used the information in this video to separate and secure external and internal access to our Docker containers via Traefik:

Secure Traffic External Access

Adding Workloads

Traefik can serve as a reverse proxy for services in our Docker environment, external workloads on VMs, and stand-alone Docker hosts such as our Raspberry Pi Docker host. The last two chapters of the following video explain how to set up additional services behind a Traefik reverse proxy.

Configuring Traefik 3

---

### Raspberry Pi - Docker and PiHole
> We have set up a Raspberry Pi 5 system to run a third PiHole DNS server in our network. This ensures that DNS services are available...

- Published: 2024-06-20
- Modified: 2025-05-04
- URL: https://homelab.anita-fred.net/rpi-docker/
- Categories: Docker
- Tags: Docker, Network

We have set up a Raspberry Pi 5 system to run a third PiHole DNS server in our network. This ensures that DNS services are available even if our other servers are down. To make this PiHole easy to manage, we configured our Raspberry Pi to run Docker. This enables us to manage the PiHole installation on the Pi from the Portainer instance used to manage our systems running Docker. We are also running the Traefik reverse proxy, which provides an SSL certificate for our PiHole.

Raspberry Pi Hardware

Raspberry Pi Docker Host

Our Docker host consists of a PoE-powered Raspberry Pi 5 system. The hardware components used include:

- Raspberry Pi 5 8 GB 2.4 GHz quad-core SBC
- Waveshare PoE HAT for Raspberry Pi 5
- GeeekPi Aluminum Case for Raspberry Pi 5, with Active Cooler
- SanDisk 256 GB Extreme microSDXC UHS-I Memory Card

OS Installation

We are running the 64-bit Lite version (no GUI desktop) of Raspberry Pi OS. The configuration steps on the initial boot include:

- Setting the keyboard layout to English (US)
- Setting a unique user name
- Setting a strong password

After the system is booted, we used sudo raspi-config to set the following additional options:

- Updated raspi-config to the latest version
- Set the system's hostname
- Enabled ssh
- Set the timezone
- Configured predictable network names
- Expanded the filesystem to use all of the space on our flash card

Next, we did a sudo apt update && sudo apt dist-upgrade to update our system and rebooted. The RPi 5 works well with the PoE HAT we are using. The RPi 5 booted up with the USB interfaces in low-power mode. The PoE HAT provides enough power to enable USB boot, so we added the following to bring our RPi up in full-power USB mode:

```
$ sudo vi /boot/firmware/config.txt

# Enable RPi 5 to provide full power to USB
usb_max_current_enable=1

:wq

# After rebooting, check USB power mode
$ vcgencmd get_config usb_max_current_enable
usb_max_current_enable=1
```

Finally, we created and ran a script to install our SSH keys on the system, and we verified that SSH access was working. With this done, we ran our Ansible configuration script to install the standard set of tools and utilities that we use on our Linux systems.

Mail Forwarding

We will need to forward emails from containers and scripts on the system. To do this, we set up email forwarding using the procedure here.

Docker/Docker Compose Installation

Installing Docker and the Docker Compose plugin involves a series of command-line steps on the RPi.
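At its core, the process uses Docker's convenience script. Here is a sketch of the commands our script automates (the exact steps are covered in the video below):

```bash
# Install Docker via the convenience script (Debian-based Raspberry Pi OS)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Install the Docker Compose v2 plugin
sudo apt-get install -y docker-compose-plugin

# Allow the current user to run docker without sudo (log out/in afterwards)
sudo usermod -aG docker $USER
```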
To automate this process, we created a script that runs on our Ubuntu Admin server. The steps required for these installations are covered in the following video:

Steps to install Docker and Docker Compose on a Raspberry Pi

One important adjustment to the steps in the video: we installed the Docker Compose plugin instead of Docker Compose. The procedure to install the plugin can be found here. The installation can be verified at the end with the following commands:

```
docker --version
docker compose version
docker run hello-world
```

Portainer Agent

We installed the Portainer Edge agent using the following command, which is run on the RPi:

```
docker run -d \
  -p 9001:9001 \
  --name portainer_agent \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  portainer/agent:2.19.5
```

The final step is to connect the Edge Agent to our Portainer.

Traefik Reverse Proxy and PiHole with Cloudflare Tunnel

Our software service stack for our Raspberry Pi includes the following applications:

- PiHole - Ad blocking DNS server
- Cloudflare Tunnel - For encrypted DNS lookups via the Internet
- Traefik - Reverse proxy with SSL encryption

These applications are installed via custom scripts and Docker Compose using a single stack. Our combined stack was created using a combination of the information in the following videos:

Deploy PiHole with Cloudflare Tunnel in Docker

Deploying Traefik in Docker

Scheduled Block List Updates

We update our PiHole block list by doing a Gravity pull. We do this daily via a cron job. This can be configured on the RPi host using the following commands -

```
# Edit the user crontab
sudo crontab -u <user> -e

# Add the following to the user crontab
min hr * * * su ubuntu -c "/usr/bin/docker exec pihole pihole -g" | /usr/bin/mailx -s "RPi Docker - Gravity Pull" your-email@mydomain.com
```

Cloudflare DDNS

We host our domains externally on Cloudflare. We use Docker containers to keep our external IP address up to date in Cloudflare's DNS system. You can learn about how to set this up here.

Watchtower

We are running the Watchtower container to keep the containers on our RPi Docker host up to date. You can learn more about Watchtower and how to install it here.

Backups

We back up our Raspberry Pi Docker host using Synology Active Backup for Business running on one of our Synology NAS drives. The parameters for the backup job are as follows:

- Backup type - File Server, Multi-Version
- Access via SSH - this requires configuring SSH to support root access. The procedure to set this up can be found here.
- Schedule - Run daily at 2 am

---

### Docker Infrastructure
> We've been using Docker hosts and Portainer to run various containerized applications in our Home Lab. Our applications have been hosted...

- Published: 2024-06-20
- Modified: 2025-03-22
- URL: https://homelab.anita-fred.net/docker/
- Categories: Docker
- Tags: Docker

We've been using Docker hosts and Portainer to run various containerized applications in our Home Lab infrastructure. Our applications have been hosted using a combination of our Synology NAS drives and our Proxmox Cluster.

Getting Started With Docker

The following video provides a good beginner's overview of Docker and how to get started.

Getting Started With Docker

Architecture

We run our Docker infrastructure using our Proxmox Cluster and a stand-alone Raspberry Pi. We have a total of four Docker hosts in our setup.
Three run on top of Ubuntu Server VMs on our Proxmox Cluster, and the fourth runs on a Raspberry Pi using Raspberry Pi OS. The Proxmox VMs utilize Proxmox High-Availability features to ensure that no single failure causes our Docker hosts to fail. We also spread the VM workload across our three physical servers to improve the capacity and performance of our Docker system. Our Synology High-Availability storage system stores the persistent volumes for our Docker system. This enables high-performance storage for our container volumes, allows configuration file editing, and facilitates backups.

Docker and Docker Compose Setup

We installed Docker and the Docker Compose plugin on our Ubuntu VMs using the convenience script procedure documented here. The procedure for installing Docker and the Docker Compose plugin on the Raspberry Pi is covered here.

Mail Forwarding

Containers and other workloads need to be able to send mail. This procedure can enable mail forwarding from inside the host VMs.

Volume Storage

We use our shared high-availability storage pool as a location for persistent volume storage in Docker. This approach makes it easier to edit container configuration files and perform backups. We access this storage via NFS mounts on our Docker host VMs. This requires that the docker volume share on our HA NAS device be mounted inside the VM running Docker. The following video explains setting up the necessary NFS client on our Docker VMs.

Set up NFS on Ubuntu

Here are some notes on our installation:

- It's essential to get the NFS permissions and user ID mapping correct on the Synology NFS server
- We used the autofs approach covered in the video to mount our NFS share (see the chapter at 20:08 in the video)
- We created a script to automate the setup of the NFS client and autofs

Traefik Reverse Proxy and Portainer

We have deployed a combination of Traefik as a reverse proxy and Portainer on our Docker infrastructure. Both of these applications are deployed via a combined Docker Compose .yml file.

Portainer Environment - Setting the Public IP

It is essential to set some details about your Docker environment so that the links and ports associated with your containers work correctly in the Portainer UI. This should be done for each Docker instance that you manage via Portainer. Configure the Public IP for your Docker instance in Administration / Environment-related / Environments. The procedure for deploying Traefik is covered here. The steps to add Portainer are covered here.

---

### Ubuntu Server VM Template
> It is common to need to create Ubuntu Server VMs to host various applications. To facilitate the creation of such VMs, we’ve created...

- Published: 2024-06-20
- Modified: 2025-03-01
- URL: https://homelab.anita-fred.net/ubuntu-server-template/
- Categories: VMs and LXCs
- Tags: Cluster

It is common to need to create Ubuntu Server VMs to host various applications. To facilitate the creation of such VMs, we’ve created a Proxmox template using the procedure in this video:

Create VM Template Including Cloud-Init

The template can be used to create VMs to support Docker Swarm, Kubernetes, and other applications running on Ubuntu Server. The QEMU Guest Agent should also be installed on each VM after it has been cloned from the template.

---

### Linux Desktop Virtual Machines
> Several Ubuntu Linux desktop VMs support general-purpose desktop applications and our DXalarm Clocks. The following steps create the base...
- Published: 2024-04-05
- Modified: 2025-03-01
- URL: https://homelab.anita-fred.net/linux-desktop/
- Categories: VMs and LXCs
- Tags: Linux, Virtual Environment

Ubuntu Desktop

Several Ubuntu Linux desktop Virtual Machines (VMs) support general-purpose desktop applications and our DXalarm Clocks. The following steps create the base VMs for these applications.

Linux Virtual Machine Install

The VMs are created as follows:

- Image - Ubuntu Desktop, downloaded from here
- CPUs - 4
- Memory - 4096 MB, Ballooning = off
- Network - LS Services, VLAN 10
- Disk - 32 GB, SSD, Discard = on, Cache = Write through
- QEMU Agent - installed via apt install qemu-guest-agent && systemctl start qemu-guest-agent

The CPU and memory parameters are chosen to be on the high side for most applications. This enables a quick installation and setup of the resulting VM. These parameters can be adjusted lower to match the actual workload for each provisioned VM.

Run the Ubuntu installer on the initial boot as follows:

- US English
- Normal Installation, Download Updates while installing
- Erase Disk and Install Ubuntu
- Timezone = New York, Automatic Timezone, AM/PM Time Format
- Set computer name and login credentials

I also installed SSH access for all logins. The procedure for doing this can be found here. Next came a few post-setup configuration steps:

- Dark Style
- Set Desktop Wallpaper
- Set up Remote Desktop and VNC Sharing

Finally, we installed the following via the Ubuntu Software apps:

- Extension Manager
- Allow Locked Remote Desktop via Extension Manager (by name search)

E-mail Forwarding

Outbound e-mail is set up via nullmailer using the procedure outlined here.

```
# Run this as root
sudo bash

# Install nullmailer and mail apps
apt-get install nullmailer mailutils

# Move to the nullmailer directory
cd /etc/nullmailer

# Create configuration files
vi defaultdomain
...
anita-fred.net
:wq

vi adminaddr
...
:wq

# This file sets up TLS access to smtp2go
vi /etc/nullmailer/remotes
...
mail.smtp2go.com smtp --port=587 --starttls --user=<user> --pass=<password>
:wq

# The next three steps are important!
chmod 644 defaultdomain adminaddr
chmod 600 remotes
chown mail:mail defaultdomain adminaddr remotes

# Check the status of the nullmailer service
service nullmailer status

# Send a test e-mail
mailx -s "Test e-mail via nullmailer MTA" ...
```

Sound Through Remote Desktop Client

The xRDP extensions enable sound from our Ubuntu VMs via Remote Desktop. The procedure to install the necessary extensions in Ubuntu VMs can be found here. The steps are as follows:

```
# Must run these commands as a normal user, not root
su - fkemmerer

# Download the script, unzip it, & make it executable
wget https://www.c-nergy.be/downloads/xRDP/xrdp-installer-1.4.3.zip
unzip xrdp-installer-1.4.3.zip
chmod +x xrdp-installer-1.4.3.sh

# Must use -s to install sound drivers
./xrdp-installer-1.4.3.sh -s

# Shutdown the machine, then reboot
sudo shutdown
```

Applications

We installed the following applications:

- Chrome & associated apps
- VLC Player

Install the Python IDE interface as follows: sudo apt install idle

Template Conversion and Use

The fully set-up VM has been converted to a template. The template can be used to create Ubuntu Desktop VMs using the following steps:

- Clone (an unlinked clone is preferred) the template to a VM
- Edit /etc/hosts and /etc/hostname to set the system name for the new VM
- Add the new VM to the Backup and HA configurations

---

### Samba File Server
> We have quite a bit of high-speed SSD storage available on the pve1 server in our Proxmox cluster. We made this storage available as a NAS...
- Published: 2024-03-22
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/file-server/
- Categories: Storage
- Tags: Backups, Storage

Samba File Server

We have quite a bit of high-speed SSD storage available on the pve1 server in our Proxmox cluster. We made this storage available as a NAS drive using the Turnkey Samba File Server.

Installing the Turnkey Samba File Server

We installed the Turnkey File Server in an LXC container that runs on our pve1 storage. This LXC will not be movable, as it is associated with SSD disks that are only available on pve1. The first step is to create a ZFS file system (zfsb) on pve1 to hold the LXC boot drive and storage. The video below explains the procedure used to set up the File Server LXC and configure Samba shares. The LXC container for our File Server was created with the following parameters -

- 2 CPUs
- 1 GB Memory
- 8 GB Boot Disk in zfsb_mp
- 8 TB Share Disk in zfsb_mp (mounted as /mnt/shares with PBS backups enabled)
- High-speed Services Network, VLAN Tag = 10
- The container is unprivileged

File Server LXC Configuration

The following steps were performed to configure our File Server -

- Set the system name to nas-10
- Configured postfix to forward email
- Set the timezone
- Installed standard tools
- Updated the system via apt update && apt upgrade
- Installed SSL certificates using a variation of the procedures here and here
- Set up Samba users, groups, and shares per the video above

Backups

Our strategy for backing up our file server is to run an rsync job via cron inside the host LXC container. The rsync job copies the contents of our file shares to one of our NAS drives. The NAS drive then implements a 3-2-1 backup strategy for our data.

---

### Linux Packages and Tools
> We use a variety of Linux Packages and Tools in our Homelab. This page explains how we set up and manage them.

- Published: 2024-03-18
- Modified: 2025-03-01
- URL: https://homelab.anita-fred.net/linux-tools/
- Categories: Configuration
- Tags: Linux

We use a variety of Linux Packages and Tools in our Homelab. This page explains how we set up and manage them.

Git Repo

We maintain a private git repository to store Docker Compose files, Ansible playbooks, and shell configuration scripts for setting up tools and utilities on our machines, VMs, and LXCs. The following video explains how to set up a git client on an Ubuntu server to access and update our repo.

How to Install and Configure Git

Some notes on the installation process above:

- You must use your GitHub access token at any password prompts
- You need to install Ansible via sudo apt install ansible

Network and DNS Tools

Most of the systems here run Linux distributions derived from Debian or Ubuntu. We add a standard set of tools to our Linux machines to create our working environment. These packages are installed with the following commands:

```
# Update repository information
apt update

# Network utilities including nslookup & ifconfig
apt install dnsutils
apt install net-tools

# The tmux terminal multiplexor
apt install tmux
```

tmux on macOS

macOS does not support repository access out of the box. To enable this, we first need to install Brew, and then we can install tmux. The commands to do this are (from the macOS terminal in user mode) -

```
# Install brew on macOS (this takes awhile...)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Update PATH to include brew
(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/<username>/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

# Now install tmux
brew install tmux
```

---

### Raspberry Pi NAS
> We've built a NAS and Docker Staging environment using a Raspberry Pi 5. Our NAS features a 2 TB NVMe SSD...

- Published: 2024-03-12
- Modified: 2025-02-28
- URL: https://homelab.anita-fred.net/rpi-nas/
- Categories: Storage
- Tags: Docker, Raspberry Pi, Storage

Raspberry Pi NAS

We've built a NAS and Docker environment using a Raspberry Pi 5. Our NAS features a 2 TB NVMe SSD drive for fast shared storage on our network.

Raspberry Pi NAS Hardware Components

Raspberry Pi 5 Single Board Computer

We used the following components to build our system -

- Raspberry Pi 5 SBC with 8 GB memory
- Geekworm X1001 PCIe NVMe SSD PIP PCIe Board
- Samsung 980 PRO SSD with Heatsink, 2 TB PCIe Gen 4 NVMe M.2 SSD
- Plugable 2.5 GbE USB-C Ethernet Adapter
- GeeekPi Aluminum Case for Raspberry Pi 5, with Active Cooler
- CanaKit 45W USB-C Power Supply (27 W @ 5 A)
- A 32 GB microSDHC flash card to enable the initial OS installation

Here's a photo of the completed hardware assembly -

Pi NAS Internals

Software Components and Installation

We installed the following software on our system to create our NAS -

- Raspberry Pi OS 64-bit Lite version - for system configuration and general applications
- CasaOS - for the Docker environment and container applications

CasaOS GUI

CasaOS is included to add a very nice GUI for managing our Docker environment. Here's a useful video on how to install CasaOS on the Raspberry Pi -

Installation

The first step is to install the 64-bit Lite version of Raspberry Pi OS. This is done by first installing a full desktop version on a flash card and then using Raspberry Pi Imager to install the Lite version on our NVMe SSD. After removing the flash card and booting to the NVMe SSD, the following configuration changes were made -

- The system name is set to NAS-11
- Enabled SSH
- Set our user ID and password
- Applied all available updates
- Updated /boot/firmware/config.txt to enable PCIe Gen 3 operation with our SSD

We used the process covered in the video above to install CasaOS. CasaOS makes all of its shares public and does not password-protect shared folders. While this may be acceptable for home use where the network is isolated from the public Internet, it certainly is not a good security practice. Fortunately, the Debian-derived distro we are running includes Samba file share support, which we can use to protect our shares properly. This article explains the basics of how to do this. Here's an example of the information in smb.conf for one of our shares -

```
[Public]
path = /DATA/Public
browsable = yes
writeable = yes
create mask = 0644
directory mask = 0755
public = no
comment = "General purpose public share"
```

You will also need to create a Samba user for your Samba shares to work. Samba user privileges can be added to any of the existing Raspberry Pi OS users with the following command -

```
sudo smbpasswd -a <user>
```

It's also important to correctly set the shared folder's owner, group, and modes. We need to restart the Samba service anytime configuration changes are made. This can be done with the following command -

```
sudo systemctl restart smbd
```

---

### Homelab Projects
> We built our Home Lab to do projects to help us to learn about modern data centers and IT technology. Here are some future projects...
- Published: 2024-03-10
- Modified: 2025-03-23
- URL: https://homelab.anita-fred.net/home-lab-projects/
- Categories: General

We built our Home Lab to do projects that help us learn about modern data centers and IT technology. Here are some Home Lab projects that we are planning to do.

Raspberry Pi Home Lab Project

Pi Rack Module

We provide many training presentations for the Amateur Radio community. We are working on a project to build a simple, low-power Home Server and NAS device using a Raspberry Pi 4.

Artificial Intelligence

We are experimenting with Artificial Intelligence tools. We are approaching this in a way that allows us to run things locally on our own systems. We want to experiment with Generative AI using Large Language Models for text and image generation. We have already begun our AI journey. Want to see what we are doing so far? Check out our AI Category or try the AI menu above.

Home Automation

We plan to set up Home Assistant to manage our smart devices, Home Lab, and Media Services.

Replace PCs with VMs and Thin Clients

Currently, we run quite a few computers in our home. We plan to replace some machines with VMs and access them via thin clients and web browsers.

Ansible Playbooks and Semaphore

We have done some work with Ansible related to configuring Linux hosts. We plan to create playbooks to update our systems, set up VMs, and more. We also want to explore Ansible Semaphore to improve the management and automation of our Ansible playbooks. The following video covers Ansible Semaphore -

Set up and Application of Ansible Semaphore

Containers, Containers, Containers

There are lots of cool applications and services that can run in Docker or LXC containers. We will continue to look for opportunities to try new containers and add more services to our Home Lab. Check back again soon for more on our Home Lab projects.

---

### Uninterruptible Power
> Uninterruptible power for our network, servers, and storage is key to our Home Lab's high-availability strategy.

- Published: 2024-03-10
- Modified: 2025-01-04
- URL: https://homelab.anita-fred.net/power/
- Categories: Infrastructure
- Tags: Power

CyberPower Uninterruptible Power Supply (UPS)

Uninterruptible power for our network, servers, and storage is key to our Home Lab's high-availability strategy. Our home uses residential power, so we experience frequent power interruptions. Here in New England, storms and wind events cause power outages lasting from a few seconds to as long as a week. As a result, we need a reliable, tiered power backup system to protect our equipment and keep our Home Lab online.

Power Architecture

We use a two-tiered power architecture. The first tier uses sine-wave Uninterruptible Power Supplies (UPSs) to protect our equipment from surges and provide a few minutes to maybe an hour of backup power. We have standardized on CyberPower equipment for this tier.

Generac 20 KW Propane Power Generator

The second tier uses a Generac 20 KW whole-house generator. The generator system automatically kicks in about a minute after an extended power failure begins. Our generator and its associated large propane tank can power our home, including our Home Lab and Amateur Radio Station, for 7 - 10 days. Our generator system includes automatic load-shedding devices for our air conditioner, range, hot tub, and other high-current devices to avoid overloading the generator.

Redundant Internet

A weak link in our power backup strategy is our Internet connection. Our modems are backed up by our two-tier power management system.
We also have redundant connections to fiber- and cable-based ISPs to provide additional resilience in the face of wide-area power outages.

Power Monitoring and Managed Shutdown

We use the Network UPS Tools (NUT) software running on Raspberry Pi computers to manage our critical UPS devices. This software allows us to remotely monitor the operational condition of our UPS devices. It also enables our storage devices and servers to sense when a complete backup power loss is imminent and perform a controlled shutdown to protect themselves and the data they store. You can find a summary of the available features here.

NUT Setup and Configuration

Raspberry Pi 4B NUT Server

Each of our NUT Raspberry Pi devices is PoE-powered. They are built using the following components:

- An 8 GB Raspberry Pi 4B Single-Board Computer
- A Raspberry Pi PoE+ HAT with Low Profile Heatsink
- A SanDisk 32 GB Ultra microSDHC UHS-I Memory Card
- A tall aluminum case

We have moved our NUT servers to a rack-mount solution. You can learn more about it here.

Software Components and Installation

We followed the process in the following video to install the software on each of our NUT Servers. The software components required are as follows -

- Raspberry Pi OS Lite 64-bit
- NUT Software Tools for Debian Linux (installed via apt install; see the video for details)
- Synology DSM UPS Server Support Software

Automatic Shutdown

We configured automatic shutdown for our servers and NAS devices using the following approaches -

- Automatic Shutdown for Proxmox; the video above is also useful for configuring NUT client support in Proxmox
- Automatic Shutdown for Synology NAS devices

The following table shows the overall configuration for our automatic shutdown setup -

Configuring Synology NASs

Configuring a Synology NAS device to use our NUT servers is straightforward once the NUT servers are properly configured to meet the interface Synology DSM expects.

Synology NAS UPS Configuration

Configuring Proxmox Servers

Configuring a Proxmox server to work with a NUT server is more complex. The basic steps are:

- Ensure that email support is working on the server (we used Postfix to enable mail forwarding)
- Install the NUT client package: apt-get install nut-client
- Configure the NUT client by editing the following files in /etc/nut: nut.conf, upsmon.conf, and upssched.conf
- Create a custom shell script to process various UPS events. The script includes e-mail notifications and logging and is placed in the /etc/nut directory.

With these steps completed, we can restart the NUT client by rebooting the server.

---

### High-Availability Storage Cluster
> We are building a High-Availability (HA) Storage Cluster to complement our Proxmox HA Server Cluster. Synology has a nice HA solution...

- Published: 2024-03-03
- Modified: 2025-03-22
- URL: https://homelab.anita-fred.net/ha-storage/
- Categories: Storage
- Tags: HA, Storage

Synology HA Storage Cluster

We are building a High-Availability (HA) Storage Cluster to complement our Proxmox HA Server Cluster. Synology has a nice HA solution that we can use for this. To use Synology's HA solution, one must have the following:

- Two identical Synology NAS devices (we are using a pair of RS1221+ rack-mounted Synology NAS devices)
- Both NAS devices must have identical memory and disk configurations.
- Both NAS devices must have at least two network interfaces available (we are using dual 10 GbE network cards in both of our NAS devices)

The two NAS devices work in an active/standby configuration and present a single IP interface for access to storage and administration.

Synology HA Documentation

Synology provides good documentation for their HA system. Here are some useful links:

- Synology HA Webpage
- Synology HA Whitepaper
- Synology HA User's Guide

The video above provides a good overview of Synology HA and how to configure it.

Storage Cluster Hardware

Synology RS1221+ NAS

We are using a pair of Synology RS1221+ rack-mounted NAS servers. Each one is configured with the following hardware options:

- Synology RS1221+ Rack Mounted NAS
- Synology 2-Port 10GbE RJ-45 PCIe Network Adapter E10G30-T2
- A-Tech 32GB (2x16GB) RAM Replacement for Synology DDR4 2666MHz PC4-21300 ECC SODIMM Memory Upgrade
- Eight 960 GB 6 Gbps SATA III Toshiba Enterprise SSDs

Networking

Our Proxmox Cluster will connect to our HA Storage Cluster via Ethernet connections. We will be storing the virtual disk drives for the VMs and LXCs in this cluster on our HA Storage Cluster. Maximizing the speed and minimizing the latency of these connections is important to the overall performance of our workloads. Each node in our Proxmox Cluster has dedicated high-speed connections (25 GbE for pve1, 10 GbE for pve2 and pve3) to a dedicated Storage VLAN. These connections are made through a single UniFi switch - an Enterprise XG 24. This switch is supported by a large UPS that provides battery backup power for our Networking Rack.

Ubiquiti Enterprise XG 24 Switch

This approach minimizes latency, as the storage traffic is handled entirely within a single switch. Ideally, we would have a pair of these switches and redundant connections to our Proxmox and HA Storage clusters to maximize reliability. While this would be a nice enhancement, we have chosen to use a single switch for cost reasons. Both NAS drives in our HA Storage Cluster are configured with an interface on our Storage VLAN. This approach ensures that the nodes in our Proxmox cluster can access the HA Storage Cluster directly without a routing hop through our firewall. We also set the MTU for this network to 9000 (Jumbo Frames) to minimize packet overhead.

Storage Design

Each Synology RS1221+ in our cluster has eight 960 GB Enterprise SSDs. The performance of the resulting storage system is important, as we will be storing the disks for the VMs and LXCs in our Proxmox Cluster on our HA Storage System. The following are the criteria we used to select a storage pool configuration:

- Performance - we want to be able to saturate the 10 GbE interfaces to our HA Storage Cluster
- Reliability - we want to be protected against single-drive failures. We will keep spare drives and use backups to manage the chance of simultaneous multiple-drive failures.
- Storage Capacity - we want to use the available SSD storage capacity efficiently.

We considered using either a RAID-10 or a RAID-5 configuration.

Storage Devices - 960 GB Enterprise SSDs

Toshiba 960 GB SSD Performance

Our SSD drives are enterprise models with good throughput and IO/s (IOPS) performance.

960 GB SSD Reliability Features

They also feature some desirable reliability features, including good write endurance and MTBF numbers. Our drives also feature sudden power-off protection to maintain data integrity in the event of a power failure that cannot be covered by our UPS system.

Performance Comparison - RAID-10 vs. RAID-5
We used a RAID performance calculator to estimate the performance of our storage system. Based on actual runtime data from our VMs and LXCs running in Proxmox, our IO workload is almost completely write-dominated. This is probably because most read operations are served from read caches in our servers' memory.

The first option we considered was RAID-10. The estimated performance for this configuration is shown below.

RAID-10 Throughput Performance

As you can see, this configuration's throughput will more than saturate our 10 GbE connections to our HA Storage Cluster. The next option we considered was RAID-5. The estimated performance for this configuration is shown below.

RAID-5 Throughput Performance

As you can see, performance takes a substantial hit due to the need to generate and store parity data each time storage is written. Even so, the RAID-5 configuration should also be able to saturate our 10 GbE connections to the Storage Cluster. The result is that the RAID-10 and RAID-5 configurations will provide the same effective performance level given our 10 GbE connections to our Storage Cluster.

Capacity Comparison - RAID-10 vs. RAID-5

The next step in our design process was to compare the usable storage capacity between RAID-10 and RAID-5 using Synology's RAID Calculator.

RAID-10 vs. RAID-5 Usable Storage Capacity

Not surprisingly, the RAID-5 configuration creates roughly twice as much usable storage as the RAID-10 configuration: with eight 960 GB drives per NAS, RAID-10 yields 4 x 960 GB (about 3.8 TB) of usable space, while RAID-5 yields 7 x 960 GB (about 6.7 TB).

Chosen Configuration

We decided to format our SSDs as a Btrfs storage pool configured as RAID-5. We chose RAID-5 for the following reasons:

- A good balance between write performance and reliability
- Efficient use of available SSD storage space
- Acceptable overall reliability (single disk failures) given the following:
  - Our storage pools are fully redundant between the primary and secondary NAS pools
  - We run regular automatic snapshots, replications, and backups via Synology's Hyper Backup as well as server-side backups via Proxmox Backup Server.

The following shows the expected IO/s (IOPS) for our storage system.

RAID-5 IOPs Performance

This level of performance should be more than adequate for our three-node cluster's workload.

Dataset / Share Configuration

The final dataset format that we will use for our vdisks is TBD at this point. We plan to test the performance of both iSCSI LUNs and NFS shares. If these perform roughly the same for our workloads,...

---

### Server Cluster
> Our server cluster consists of three servers. Our approach was to pair one high-capacity Dell server with two smaller Supermicro servers.

- Published: 2024-02-25
- Modified: 2025-03-01
- URL: https://homelab.anita-fred.net/server-cluster/
- Categories: Server
- Tags: Cluster, HA, Server

Proxmox Cluster Configuration

Our server cluster consists of three servers. Our approach was to pair one high-capacity server (a Dell R740 dual-socket machine) with two smaller Supermicro servers.

Cluster Servers

This approach allows us to handle most of our workloads on the high-capacity server, gain the advantages of HA availability, and move workloads to the smaller servers to prevent downtime during maintenance activities.
Server Networking Configuration

All three servers in our cluster have similar networking interfaces consisting of:

- An OOB management interface (iDRAC or IPMI)
- Two low-speed ports (1 GbE or 10 GbE)
- Two high-speed ports (10 GbE or 25 GbE)
- PVE2 and PVE3 each have an additional two high-speed ports (10 GbE) via an add-on NIC

The following table shows the interfaces on our three servers and how they are mapped to the various functions available via a standard set of bridges on each server. Each machine uses a combination of interfaces and bridges to realize a standard networking setup. PVE2 and PVE3 also utilize LACP bonds to provide higher capacity for the low-speed and high-speed service bridges.

Standard Networking Setup - pve2 node example

You can see how we configured the LACP bond interfaces in this video.

Network Bonding on Proxmox

We must add specific routes to ensure the separate Storage VLAN is used for virtual disk I/O. This is done via the following adjustments to the vmbr3 bridge in /etc/network/interfaces. The following is an example of /etc/network/interfaces for pve3 -

```
auto vmbr3
iface vmbr3 inet static
    address 192.168.100.120/24
    bridge-ports eno4
    bridge-stp off
    bridge-fd 0
    up ip route add 192.168.100.0/24 dev vmbr3
    down ip route del 192.168.100.0/24 dev vmbr3
    mtu 9000
#Virtual Disks (Storage)
```

Note that the bridge has an IP address in the Storage VLAN whose host part matches pve3's address in the Computer VLAN. Finally, use the IP address the target NAS uses in the Storage VLAN when configuring the NFS share for PVE-storage. This ensures that the dedicated Storage VLAN will be used for virtual disk I/O by all nodes in our Proxmox Cluster. We ran traceroute from each of our servers to confirm that we have a direct LAN connection to PVE-storage that does not go through our router.

Cluster Setup

We are currently running a three-server Proxmox cluster. Our servers consist of:

- A Dell R740 Server
- Two Supermicro 5018D-FN4T Servers

The first step was to prepare each server in the cluster as follows:

- Install and configure Proxmox
- Set up a standard networking configuration
- Confirm that all servers can ping the shared storage NAS using the Storage VLAN

We used the procedure in the following video to set up and configure our cluster -

We first used the pve1 server to create a cluster. Next, we added the other servers to the cluster. If there are problems with connecting to shared stores, check the following:

- Is the Storage VLAN connection using an address like 192.168.100.x/32?
- Is there a direct route for VLAN 1000 (Storage) that does not use the router? Check via traceroute.
- Is the target NAS drive sitting on the Storage VLAN with multiple gateways enabled?
- Can you ping the storage server from inside each server's Proxmox instance?

Backups

For backups to work correctly, we need to modify the Proxmox /etc/vzdump.conf file to set tmpdir to /var/tmp/ as follows:

```
# vzdump default settings

tmpdir: /var/tmp/
#tmpdir: DIR
#dumpdir: DIR
...
```

This causes our backups to use the Proxmox tmp file directory to create backup archives for all backups. We later upgraded to Proxmox Backup Server. You can see how PBS was installed and configured here.

NFS Backup Mount

We set up an NFS backup mount on one of our NAS drives to store Proxmox backups.
An NFS share was set up on NAS-5 as follows:

- Share PVE-backups (/volume2/PVE-backups)
- Used the default Management Network

A storage volume was configured in Proxmox to use for backups as follows:

NAS-5 NFS Share for PVE Backups

A Note About DNS Load

Proxmox constantly does DNS lookups on the servers associated with NFS and other mounted filesystems, which can result in very high transaction loads on our DNS servers. To avoid this problem, we replaced the server domain names with the associated IP addresses. Note that this cannot be done for the virtual mount for the Proxmox Backup Server, as PBS uses a certificate to validate the domain name used to access it. These adjustments can be made by editing the storage configuration file at /etc/pve/storage.cfg on any node in the cluster (changes in this file are synced to all nodes).

NFS Virtual Disk Mount

We also created an NFS share for VM and LXC virtual disk storage. The volume chosen provides high-speed SSD storage on a dedicated Storage VLAN. An NFS share was set up on NAS-5 as follows:

- Share PVE-storage (/volume1/PVE-storage)
- IP Permission: 192.168.10.0/24 (Computer VLAN Devices)

NAS-5 NFS Share for PVE Storage

Global Backup Job

A Datacenter-level backup job was set up to run daily at 1 am for all VMs and containers as follows (this was later replaced with Proxmox Backup Server backups, as explained here):

Proxmox Backup Job

The following retention policy was used:

Proxmox Backup Retention Policy

Node File Backups

We installed the Proxmox Backup Client on each of our server nodes and created a cron-scheduled script that backs up the files on each node to our Proxmox Backup Server daily. The following video explains how to install and configure the PBS client. For the installation to work properly, the locations of the PBS repository and access credentials must be set in both the script and the login bash shell. We also need to create a cron job to run the backup script daily.

Setup SSL Certificates

We used the procedure in the video below to set up signed SSL certificates for our three server nodes and the Proxmox Backup Server. This approach uses a Let's Encrypt DNS-01 challenge via Cloudflare DNS to authenticate with Let's Encrypt and obtain a signed certificate for each server node in the cluster and for PBS....

---

### Home Network Infrastructure
> We use UniFi equipment for our second-generation home network primarily for its single-pane-of-glass management and configuration capabilities.

- Published: 2024-02-22
- Modified: 2025-03-21
- URL: https://homelab.anita-fred.net/network-infrastructure/
- Categories: Network
- Tags: Network

Home Network Core Rack

We use UniFi equipment throughout. We chose the UniFi platform for our second-generation home network primarily for its single-pane-of-glass management and configuration capabilities.

Network Structure

The image above shows our network's structure. Our network is a two-tiered structure with a core based upon high-speed, 25 GbE-capable aggregation switches and optically connected edge switches. We have installed multiple OM4 multi-mode fiber links from the core to each room in our house. The speed of these links ranges from 1 Gbps to 25 Gbps, with most connections running as dual-fiber LACP LAG links.

Access Layer

At the top layer, redundant Internet connections provide Internet access and ensure that we remain connected to the outside world.
Firewall, Routing, and Management Layer

UniFi Dream Machine Pro SE

Our network's firewall and routing layer implements security and routing functions using a UniFi UDM Pro SE router and firewall.

Home Network Dashboard

The UDM also provides a single-pane-of-glass management interface. All configuration functions are performed via the GUI provided by the UDM.

Core Aggregation Layer

UniFi High-Capacity Aggregation Switch

The core layer uses a pair of high-capacity aggregation switches to provide optical access links to all of the switches in our network's edge layer. We also include a high-speed 10 GbE wired Ethernet switch at this layer. All of our storage devices and servers are connected directly to the core layer of our network to maximize performance and minimize latency.

Edge Connectivity Layer

Example UniFi High-Speed Edge Switch

The edge layer uses various switches connected to the core layer, combining 25 GbE, 10 GbE, and 1 GbE optical links. Many of these links are built using pairs of optical links in an LACP/LAG configuration.

UniFi Firewall/Router, Core, and Edge Switches In Our Network

Our edge switches are deployed throughout our home. We use a variety of edge switches in our network, depending on each room's connectivity needs.

Wi-Fi Access and Telephony

UniFi WiFi APs and Telephones

This layer connects all our devices, including WiFi Access Points and our telephones.

---

### Windows Virtual Machines
> One of our Homelab environment's goals is to run our Windows desktop OSs on virtual machines. This enables us to get Windows...

- Published: 2024-02-22
- Modified: 2024-12-10
- URL: https://homelab.anita-fred.net/windows-vm/
- Categories: VMs and LXCs
- Tags: Server, Virtual Environment, Windows

One of our Homelab environment's goals is to run our Windows desktop OSs on virtual machines. This enables us to access standard OS environments such as Microsoft Windows easily from a web browser.

Windows Virtual Machine Setup

We use the following procedure to set up our Windows VMs. The following ISO images are downloaded to the PVE-templates share on our Proxmox cluster -

- Windows 10 Desktop ISO
- Windows Driver ISO

Each Windows VM is created with the following options (all other choices used the defaults) -

- Name the VM windows-<name>
- Use the Windows 10 Desktop ISO image. Add an additional drive for the VirtIO drivers and use the Windows VirtIO Driver ISO image.
- The Type/Version is set to Microsoft Windows 10.
- Check the QEMU Agent option (we'll install this later).
- Set the SCSI Controller to VirtIO SCSI.
- Use PVE-storage and create a 128 GB disk. Set the Discard and SSD Emulation options. Set Cache to Write Back.
- Allocate 4 CPU cores.
- Allocate 16 GB of memory (4 GB minimum) with the Ballooning Device enabled.
- Run on the HS Services Network, use the Intel E1000 NIC, and set the VLAN Tag to 10.

Start the VM and install Windows. Some notes include -

- Enter the Windows 10 Pro product key
- Use the Windows Driver disk to load a driver for the disk
- Once Windows is up, use the Windows Driver disk to install drivers for devices that did not install automatically. You can find the correct driver by searching for drivers from the root of the Windows Driver disk.
- Install the QEMU guest agent from the Windows Driver disk. It's in the guest-agent directory.
- Set the Computer name, Workgroup, and Domain name for the VM.
- Do a Windows update and install all updates next.
Set up Windows applications as follows:
- Install Chrome browser
- Install Dashlane password manager
- Install Dropbox and Synology Drive
- Install Start10
- Install Directory Opus File Manager
- Install PDF Viewer
- Install Printers
- Install media tools: VLC Player and QuickTime Player
- Install network utilities: WebSSH
- Install Windows gadgets
- Install DXLab, TQSL, etc.
- Install Microsoft Office and Outlook
- Install SmartSDR
- Install WSJT-X, JTDX, and JTAlert
- Install PSTRotator and the Amplifier app
- Install RealVNC
- Install benchmarks (Disk, Graphics, Geekbench)
- Install Folding at Home

A sound driver is needed for audio (via Windows Remote Desktop or RealVNC). We also enabled Automatic Logon using this procedure.

--- ### Synology NAS > We cover some details of configuring our Synology NAS devices running DSM 7.2 here. All of our Synology NAS devices use pairs of ethernet... - Published: 2024-02-18 - Modified: 2025-03-02 - URL: https://homelab.anita-fred.net/synology-nas/ - Categories: Storage - Tags: Network

Main NAS Storage Rack - Synology RS2421RP+ and RX1217RP+

NAS Drives

We use a variety of NAS drives for storage in our Home Lab. The table above lists all of the NAS drives in our Home Lab. Most of our production storage is implemented using Synology NAS drives. Our total storage capacity is just over 1 Petabyte. Our setup also provides approximately 70 TB of high-speed solid-state storage. Systems with dual optical interfaces are configured as LACP LAGs to increase network interface capacity and improve reliability.

Hardware and Power

We have moved to mostly rack-mounted NAS drives to save space and power. The picture above shows one of our racks, which contains Synology NAS drives. We have also opted for Synology rack-mount systems with redundant power supplies to improve reliability. Our racks include dual UPS devices to further enhance reliability.

Basic Setup and Configuration

We cover some details of configuring our Synology NAS devices running DSM 7.2 here.

Multiple VLANs and Bonds on Synology NAS

Our NAS devices use pairs of ethernet connections configured as 802.3ad LACP bonded interfaces. This approach improves reliability and enhances interface capacity when multiple sessions are active on the same device. DSM supports LACP-bonded interfaces on a single VLAN. This can be easily configured with the DSM GUI. A few of our NAS drives benefit from multiple interfaces on separate VLANs. This avoids situations where high-volume IP traffic needs to be routed between VLANs for applications such as media playback and surveillance camera recording. Setting this up requires accessing and configuring DSM's underlying Linux environment via SSH. The procedure for setting this up is explained here and here.

Creating a RAM Disk

You can create a RAM disk on your Synology NAS by creating a mount point in one of your shares and installing a shell script that runs when the NAS boots to create and mount the RAM disk. If your mount point is in a share named Public on your volume1 storage pool and is called tmp, then the following script:

```
#!/bin/sh
mount -t tmpfs -o size=50% ramdisk /volume1/Public/tmp
```

will create a RAM disk that can use up to 50% of the available RAM on your NAS and is accessible as /volume1/Public/tmp by packages running on your NAS. The RAM disk is removed when you reboot your NAS, so you'll need to run the command above each time your NAS boots. This can be scheduled to run on boot using the Synology Task Scheduler.

--- ### Docker in an LXC Container > Using this procedure, we set up Docker using the Turnkey Core LXC container (Debian Linux).
The container is created to run Docker. - Published: 2024-02-16 - Modified: 2025-02-28 - URL: https://homelab.anita-fred.net/docker-lxc/ - Categories: VMs and LXCs - Tags: Docker, Server

Using this procedure, we set up Docker using the Turnkey Core LXC container (Debian Linux).

Docker LXC Container Configuration

The container is created with the following resources:
- 4 CPUs
- 4096 MB Memory
- 8 GB SSD Storage (Shared PVE-storage)
- LS Services Network

Portainer Edge Agent

We manage Docker using a single Portainer instance.

Portainer Management Interface

This is done via the Portainer Edge Agent. The steps to install the Portainer Edge Agent are as follows:
- Create a new environment on the Portainer host.
- Select the Portainer Edge Agent choice. BE CAREFUL TO SELECT THE PORTAINER HOST URL, NOT THE AGENT, when setting this up.
- Carefully copy the EDGE_ID and EDGE_KEY values into the script in the next step, which is used to spin up the edge agent.
- Install the Portainer Edge Agent on the Docker container as follows:

```
docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -v /:/host \
  -v portainer_agent_data:/data \
  --restart always \
  -e EDGE=1 \
  -e EDGE_ID= \
  -e EDGE_KEY= \
  -e EDGE_INSECURE_POLL=1 \
  --name portainer_edge_agent \
  portainer/agent:latest
```

Mail Forwarding

More work needs to be done here. Here's some information to help get started:
- Postfix configuration in Turnkey LXC
- Procedure to get postfix configured to support forwarding e-mail through smtp2go

--- ### Proxmox Backup Server > This page covers the installation of the Proxmox Backup Server in our HomeLab. Our approach is to run the Proxmox Backup Server in a VM... - Published: 2024-02-13 - Modified: 2025-03-01 - URL: https://homelab.anita-fred.net/pbs/ - Categories: Server, Storage - Tags: Backups, Proxmox, Server, Storage

This page covers the installation of the Proxmox Backup Server (PBS) in our HomeLab. We run PBS in a VM on our server and store backups in shared storage on one of our NAS drives. We are running a Proxmox Test Node and a Raspberry Pi Proxmox Cluster that can access our Proxmox Backup Server. This approach enables backups and transfers of VMs and LXCs between our Production Proxmox Cluster, our Proxmox Test Node, and our Raspberry Pi Proxmox Cluster.

Proxmox Backup Server Installation

We used the following procedure to install PBS on our server. PBS was created using the recommended VM settings in the video. The VM is created with the following resources:
- 4 CPUs
- 4096 MB Memory
- 32 GB SSD Storage (Shared PVE-storage)
- HS Services Network

Once the VM is created, the next step is to run the PBS installer.

Proxmox Backup Server Install

After the PBS install is complete, PBS is booted, the QEMU Guest Agent is installed, and the VM is updated using the following commands:

```
apt update
apt upgrade
apt-get install qemu-guest-agent
reboot
```

PBS can now be accessed via the web interface using the following URL - https://:8007

Create a Backup Datastore on a NAS Drive

The steps are as follows:

```
# Install the CIFS utilities on PBS
apt install cifs-utils

# Create a mount point for the NAS PBS store
mkdir /mnt/pbs-store

# Create a Samba credentials file to enable logging into the NAS share
vi /etc/samba/.smbcreds
    username=
    password=
chmod 400 /etc/samba/.smbcreds

# Test-mount the NAS share in PBS and make a directory to contain the PBS backups
mount -t cifs -o rw,vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup \
    //nas-#.anita-fred.net/PBS-backups /mnt/pbs-store
mkdir /mnt/pbs-store/pbs-backups
```

Make the NAS share mount permanent by adding the following line after the last line of /etc/fstab:

```
# Mount PBS backup store from NAS
//nas-#.anita-fred.net/PBS-backups /mnt/pbs-store cifs vers=3.0,credentials=/etc/samba/.smbcreds,uid=backup,gid=backup,defaults 0 0
```

Create a datastore to hold the PBS backups in the Proxmox Backup Server as follows. The datastore will take some time to create (be patient).

PBS Datastore Configuration

PBS Datastore Prune Options

Add the PBS store as storage at the Proxmox datacenter level. Use the information from the PBS dashboard to set the fingerprint.

PBS Storage in Proxmox VE

The PBS-backups store can now be used as a target in Proxmox backups. NOTE THAT YOU CANNOT BACK UP THE PBS VM TO PBS-BACKUPS. As the table above indicates, additional datastores are created for our Raspberry Pi Cluster and our NUC Proxmox Test Node.

Setup Boot Delay

The NFS share for the Proxmox backup store needs time to start before the Backup Server starts on boot. This can be set for each node under System/Options/Start on Boot delay. A 30-second delay seems to work well.

Setup Backup, Pruning, and Garbage Collection

The overall schedule for Proxmox backup operations is as follows:
- 02:00 - Run a PVE backup of the PBS Backup Server VM from our Production Cluster (run in suspend mode; stop mode causes problems)
- 02:30 - Run PBS backups in all clusters/nodes on all VMs and LXCs EXCEPT for the PBS Backup Server VM
- 03:00 - Run pruning on all PBS datastores
- 03:30 - Run garbage collection on all PBS datastores
- 05:00 - Verify all backups in all PBS datastores

Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, modify /etc/chrony/chrony.conf to use our servers for the pool. This must be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details.

Backup Temp Directory

Proxmox backups use vzdump to create compressed backups. By default, backups use /var/tmp, which lives on the boot drive of each node in a Proxmox cluster. To ensure adequate space for vzdump and reduce the load on each server's boot drive, we have configured a temp directory on the local ZFS filesystems on each of our Proxmox servers. The tmpdir configuration needs to be done on each node in the cluster (details here). The steps to set this up are as follows:

```
# Create a tmp directory on the local node's ZFS store
# (do this once for each server in the cluster)
cd /zfsa
mkdir tmp

# Turn on and verify ACL support for the zfsa store
zfs get acltype zfsa
zfs set acltype=posixacl zfsa
zfs get acltype zfsa

# Configure vzdump to use the ZFS tmp directory
# (add/set tmpdir as follows on each server)
vi /etc/vzdump.conf
    tmpdir: /zfsa/tmp
```
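With the tmpdir in place, a quick manual run confirms that vzdump picks it up. This is a sketch only - the VM ID below is a placeholder:

```
# Verify the tmpdir setting is active
grep tmpdir /etc/vzdump.conf

# Run a one-off backup of a test VM (ID 100 is a placeholder) to the PBS store
vzdump 100 --storage PBS-backups --mode snapshot
```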
--- ### Installed Plugins - Published: 2024-01-31 - Modified: 2025-05-01 - URL: https://homelab.anita-fred.net/plugins/ - Categories: General, Website - Tags: Website

This page lists all of the installed WordPress plugins on our website. Please log in to see the details. Purchased plugins are tagged as or . These are supported or enhanced versions of a free plugin. Also, some plugins are not active at all times. Such plugins are tagged as .

Some items that still need to be done here include:
- Set up a backup system
- Update network images on the home page and link to the stationproject.blog post
- Solve site health warnings: DMARC for email in the domain
- Improve Really Simple SSL - SSL Health Check (A -> A+); only works with browsers that support SNI

Installed Plugins
- Akismet Anti-spam: Spam Protection - Used by millions, Akismet is quite possibly the best way in the world to protect your blog from spam. Even while you sleep, your site is protected from comment spam.
- Broken Link Checker - Checks your site for broken links and notifies you on the dashboard if any are found.
- Classic Editor - Enables the WordPress classic editor and the old-style Edit Post screen with TinyMCE, Meta Boxes, etc. It supports the older plugins that extend this screen.
- Classic Widgets - Enables the classic widgets settings screens in Appearance and Customizer. Disables the block editor from managing widgets.
- Cloudflare - Cloudflare speeds up and protects your WordPress site.
- Code Block Pro - Code highlighting powered by the VS Code engine. Enables embedding of snippets with links.
- Easy Table of Contents - Adds a user-friendly and fully automatic way to create and display a table of contents generated from the page content.
- Easy Username Updater - Allows an admin to update usernames.
- Enable Media Replace - Enables replacing media files by uploading a new file in the "Edit Media" section of the WordPress Media Library.
- Health Check & Troubleshooting - Checks the health of your WordPress install.
- Jetpack - Security, performance, and marketing tools made by WordPress experts. Jetpack protects your site so you can focus on more important things.
- Members - A user and role management plugin that puts you in full control of your site's permissions. This plugin allows you to edit your roles and their capabilities, clone existing roles, assign multiple roles per user, block post content, or even make your site completely private.
- Pages with category and tag - Adds Categories and Tags to Pages.
- Really Simple SSL - Lightweight SSL & Hardening Plugin.
- Really Simple SSL Pro - Adds additional security and content protection features to the base plugin.
- Redis Object Cache - A persistent object cache backend powered by Redis. It supports Predis, PhpRedis, Relay, replication, sentinels, clustering, and WP-CLI.
- Simple History - A plugin that logs various things in WordPress and presents those events in a very nice GUI.
- TablePress - Embed beautiful and interactive tables into your WordPress website's posts and pages, without having to write code!
- UpdraftPlus - Backup/Restore - Backup and restore: take backups locally, or back up to Amazon S3, Dropbox, Google Drive, Rackspace, (S)FTP, WebDAV & email, on automatic schedules.
- User Menus - Customize your menus with a user's name and avatar, or show items based on user role.
- User Switching - Instant switching between user accounts in WordPress.
- Webmaster Spelling Notifications - Allows site visitors to send reports to the webmaster/website owner about any spelling or grammatical errors they find. Visitors select text with the mouse, press Ctrl+Enter, and enter a comment, and the webmaster is notified of the error. A nice, simple plugin - no external websites are needed, it is fully customizable, and the plugin language is easily changed. We added a content sidebar note to let users know how to report errors.
- Website LLMs.txt - Manages and automatically generates LLMs.txt files for LLM/AI consumption and integrates with SEO plugins (Yoast SEO, RankMath).
- Wordfence Security - Anti-virus, Firewall, and Malware Scan.
- WP Crontrol - Lets you view and control what's happening in the WP-Cron system.
- WP Mail SMTP - Reconfigures the wp_mail function to use Gmail/Mailgun/SendGrid/SMTP instead of the default mail function and creates an options page to manage the settings.
- WP OPcache - Allows you to manage Zend OPcache inside your WordPress admin dashboard.
- WP-Optimize Premium - Clean, Compress, Cache - WP-Optimize makes your site fast and efficient. It cleans the database, compresses images, and caches pages. Fast sites attract more traffic and users.
- Yoast SEO (paid) - Improve your WordPress SEO: write better content and have a fully optimized WordPress site using the Yoast SEO plugin.

--- ### Proxmox VE > This page covers the Proxmox VE install and setup on our server. You can find a great deal of information about Proxmox in... - Published: 2024-01-25 - Modified: 2025-05-29 - URL: https://homelab.anita-fred.net/proxmox/ - Categories: Server - Tags: Backups, Proxmox, Server, Storage, Virtual Environment

This page covers the Proxmox VE install and setup on our server. You can find a great deal of information about Proxmox in the Proxmox VE Administrator's Guide.

Proxmox Installation/ZFS Storage

Proxmox was installed on our server using the steps in the following video: The Proxmox boot images are installed on NVMe drives (ZFS RAID1 on our Dell server's BOSS card, or a single ZFS drive on the NVMe drives in our Supermicro servers). This video also covers the creation of a ZFS storage pool and filesystem. A single filesystem called zfsa was set up using RAID10 and lz4 compression across four SSD disks on each server. A Community Proxmox VE License was purchased and installed for each node. The Proxmox installation was updated on each server using the Enterprise Repository.

Linux Configuration

I like to install a few additional tools to help me manage our Proxmox installations. They include the nslookup and ifconfig commands and the tmux terminal multiplexer. The commands to install these tools are found here.

Cluster Creation

With these steps done, we can create a 3-node cluster. See our Cluster page for details.

ZFS Snapshots

Creating ZFS snapshots of the Proxmox installation can be useful before making changes. This enables rollback to a previous version of the filesystem should any changes need to be undone. Here are some useful commands for this purpose (the dataset and snapshot names are placeholders):

```
zfs list -t snapshot
zfs list
zfs snapshot rpool/ROOT/<dataset>@<snapshot>
zfs rollback rpool/ROOT/<dataset>@<snapshot>
zfs destroy rpool/ROOT/<dataset>@<snapshot>
```

Be careful to select the proper dataset - snapshots on the pool that contains the dataset don't support this use case. Also, you can only roll back to the latest snapshot directly. If you want to roll back to an earlier snapshot, you must first destroy all of the later snapshots. In the case of a Proxmox cluster node, the shared files in the associated cluster filesystem will not be included in the snapshot. You can learn more about the Proxmox cluster file system and its shared files here.

You can view all of the snapshots inside the invisible /.zfs directory on the host filesystem as follows:

```
cd /.zfs/snapshot/
ls -la
```

Local NTP Servers

We want Proxmox and Proxmox Backup Server to use our local NTP servers for time synchronization. To do this, we need to modify /etc/chrony/chrony.conf to use our servers for the pool. This needs to be done on each server individually and inside the Proxmox Backup Server VM. See the following page for details. The first step, before following the configuration procedures above, is to install chrony on each node (apt install chrony).
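As a sketch of the chrony.conf change itself (our actual NTP server names differ; the hostnames below are placeholders), the stock Debian pool entry is commented out and local servers are added:

```
# Comment out the stock Debian pool line in /etc/chrony/chrony.conf
sed -i 's/^pool /# pool /' /etc/chrony/chrony.conf

# Add local NTP servers (placeholder hostnames)
cat >> /etc/chrony/chrony.conf <<'EOF'
server ntp1.example.net iburst
server ntp2.example.net iburst
EOF

# Restart chrony and confirm the sources
systemctl restart chrony
chronyc sources
```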
Mail Forwarding

We used the following procedure to configure postfix to support forwarding e-mail through smtp2go. Postfix does not seem to work with passwords containing a $ sign, so a separate login was set up in smtp2go for forwarding purposes. Some key steps in the process include:

```
# Install postfix and the supporting modules for smtp2go forwarding
sudo apt-get install postfix
sudo apt-get install libsasl2-modules

# Install mailx
sudo apt -y install bsd-mailx
sudo apt -y install mailutils

# Run this command to configure postfix per the procedure above
sudo dpkg-reconfigure postfix

# Use a working prototype of main.cf to edit
sudo vi /etc/postfix/main.cf

# Set up /etc/mailname - use the version from a working server
# MAKE SURE mailname is lower case and matches DNS
sudo uname -n > /etc/mailname

# Restart postfix
sudo systemctl reload postfix
sudo service postfix restart

# A reboot may be needed
sudo reboot

# Test
echo "Test" | mailx -s "PVE email"
```

Here are the contents of /etc/postfix/main.cf on our host machines:

```
# See /usr/share/postfix/main.cf.dist for a commented, more complete version

# Debian specific: Specifying a file name will cause the first
# line of that file to be used as the name. The Debian default
# is /etc/mailname.
myorigin = /etc/mailname

myhostname = .anita-fred.net
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 3.6 on
# fresh installs.
compatibility_level = 3.6

# This group of lines added by FCK to support mail forwarding via smtp2go
# relayhost =
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = static::
smtp_sasl_security_options = noanonymous
header_size_limit = 4096000
relayhost = :2525

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level=may
smtp_tls_CApath=/etc/ssl/certs
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, .anita-fred.net, , localhost.localdomain, localhost
mynetworks = 127.0.0.0/8 192.168.10.0/24
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = ipv4
```

vGPU

Our servers each include an Nvidia Tesla P4 GPU. This GPU is shareable using Nvidia's vGPU software. The information on how to set up Proxmox for vGPU may be found here. This procedure also explains how to enable IOMMU for GPU pass-through (not sharing). We do not have IOMMU set up on our servers at this time. You'll need to install the git command and the cc compiler to use this procedure. This can be done with the following commands:

```
apt update
apt install git
apt install build-essential
```

Now you can follow the procedure here. Be sure to include the steps to enable IOMMU. I downloaded and installed the 6.4 vGPU driver from the Nvidia site and did a final reboot of the server.

vGPU Types

The vGPU drivers support a number of GPU types. You'll want to select the appropriate one in each VM. Note that mixing multiple vGPU memory sizes is not allowed (i.e., if one vGPU uses 2 GB of memory, all must). The following table shows the types available (this data can be obtained by running mdevctl types on your system).
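A sketch of what this looks like from the CLI (the VM ID, PCI address, and vGPU type below are placeholders - use values from your own mdevctl output):

```
# List the vGPU (mediated device) types the host GPU supports
mdevctl types

# Assign a vGPU of a given type to a VM
# (VM ID, PCI address, and type name are placeholders)
qm set 100 --hostpci0 0000:3b:00.0,mdev=nvidia-63
```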
Disabling Enterprise Repository

Proxmox No Subscription Repositories

We recommend purchasing at least a Community Support License for production Proxmox servers. We are running some test servers here, and we have chosen to use the No Subscription repositories for these systems. The following videos explain...

--- ### Welcome To Our Home Lab > This site is dedicated to documenting the setup, features, and operation of our Home Lab. Our Home Lab consists of several components... - Published: 2024-01-24 - Modified: 2025-04-18 - URL: https://homelab.anita-fred.net/ - Categories: General, Infrastructure, Network, Server, Storage - Tags: NAS, Network, Server, Storage, Surveillance, Telephone

Home Network Dashboard

This site is dedicated to documenting the setup, features, and operation of our Home Lab. Our Home Lab consists of several different components and systems, including:
- A high-performance home network with redundant Internet connections
- A storage system that utilizes multiple NAS devices
- Multiple enterprise-grade servers in a high-availability cluster
- Applications, services, and websites
- Power via dual UPS-protected power feeds and a backup generator

Home Network

Home Network Core, High-Availability Storage, and Secondary Server Rack

Our Home Network uses a two-tiered structure with a core based upon high-speed 25 GbE capable aggregation switches and optically connected edge switches. We use Ubiquiti UniFi equipment throughout. We have installed multiple OM4 multi-mode fiber links from the core to each room in our house. The speed of these links ranges from 1 Gbps to 25 Gbps, with most connections running as dual-fiber LACP LAG links. We have redundant Internet connections, which include 1 Gbps optical fiber and 400 Mbps/12 Mbps cable modem service. Our Network Rack also includes two Supermicro servers and a pair of Synology NAS drives in a high-availability configuration. These drives provide solid-state storage for Proxmox virtual machine disks and Docker volumes.

Main Server and Storage

Main Server Rack and NAS Storage Rack

Our Server Rack houses our main Dell server and several of our Synology NAS drives. It features redundant UPS power and includes rack-mounted Raspberry Pi systems that provide several different functions in our Home Lab. Our servers run Proxmox in a high-availability configuration. In total, we have 104 CPUs and 1 TB of RAM available in our primary Proxmox cluster. This rack includes an all-SSD, high-speed NAS that we use for video editing. It also includes a NAS that stores our video and audio media collection and provides access to this content throughout our home and on the go when we travel.

High Capacity Storage System

Main NAS Storage Rack

Our NAS Rack provides high-capacity storage via several Synology NAS drives. It features redundant UPS power and includes additional rack-mounted Raspberry Pi systems that provide several different functions in our Home Lab. This rack also houses our Raspberry Pi NAS and NAS 2 systems. Our total storage capacity is just over 1 Petabyte. Our setup also provides approximately 70 TB of high-speed solid-state storage.
Power Over Ethernet (PoE)

Main Power Over Ethernet (PoE) Switch

We make use of Power over Ethernet (PoE) switches at many edge locations in our network to power devices through their ethernet cables. The switch shown above is located centrally, where all of the CAT6 ethernet connections in our home terminate. It powers our surveillance cameras, IP telephones, access points, etc.

Home Media System

Our Home Theater

We use our Home Network and NAS system to provide a Home Media System. Our media system sources content from streaming services as well as video and audio content stored on our Media NAS drive, and enables it to be viewed from any TV or smart device in our home. We can also view our content remotely via the Internet when traveling or in our cars.

Surveillance System

Synology Surveillance Station

We use Synology Surveillance Station running on one of our NAS drives to support a variety of IP cameras throughout our home. This software uses the host NAS drive for storing recordings and provides image recognition and other security features.

Telephone System

Telephone System Dashboard

We use Ubiquiti UniFi Talk to provide managed telephone service within our home.

Ubiquiti IP Telephone

This system uses PoE-powered IP telephones, which we have installed throughout our home.

Applications, Services, and Websites

We are hosting several websites, including:
- This site, which documents our Home Lab (self-hosted)
- Our Hobbies and Pets Website (self-hosted)
- Our Travel Adventures Website (self-hosted)
- My Amateur Radio Activities Website (self-hosted)
- The Nashua Area Radio Society Website (currently in the cloud)
- Our Amateur Radio Station Building Site (currently in the cloud)

Setup information for our self-hosted sites may be found here.

--- ### Wordpress in an LXC Container > We've set up several Proxmox LXC containers to host several WordPress sites on our server. LXC containers are efficient... - Published: 2024-01-24 - Modified: 2025-03-21 - URL: https://homelab.anita-fred.net/wordpress-lxc/ - Categories: VMs and LXCs, Website - Tags: Virtual Environment, Website

We've set up several Proxmox LXC containers to host our WordPress sites on our server. LXC containers are more efficient than VMs in terms of server resource utilization. You can learn more about Proxmox LXC containers vs. virtual machines here. We went through the following steps to set this up. The current and planned sites include:
- www.anitafred.net - Our personal website (future)
- homelab.anita-fred.net - Our Home Lab website
- ab1oc-4-director.org - Our campaign website (future)

WordPress Container

The setup uses the WordPress LXC container from Turnkey Linux. This lightweight Debian Linux environment uses MariaDB and Apache2 to complete the LAMP stack. I followed this YouTube video to complete the installation.

Each Proxmox LXC is configured as follows:
- IP: 192.168.10.16x (.160-.169)
- DNS: 192.168.10.30
- Domain: homelab.anita-fred.net, www.anita-fred.net (future)
- Disk: 64 GB SSD Storage (Shared PVE-storage)
- CPU: 4 Cores
- Memory: 1 GB / 512 MB Swap
- Network: LS Services

I elected to install the security updates during the container installation process.
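For reference, a container with roughly this shape could also be created from the CLI. This is a sketch only - the container ID, Turnkey template file name, bridge, and gateway are placeholders, not our actual values:

```
# Create a Turnkey WordPress LXC roughly matching the configuration above
# (ID, template file name, bridge, and gateway are placeholders)
pct create 160 local:vztmpl/debian-12-turnkey-wordpress_18.1-1_amd64.tar.gz \
  --hostname homelab \
  --cores 4 --memory 1024 --swap 512 \
  --rootfs PVE-storage:64 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.10.160/24,gw=192.168.10.1 \
  --nameserver 192.168.10.30 \
  --onboot 1
```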
Here's the final access information for the various components in the installation.

WordPress Component Access Info

Once the container was set up, I set the timezone for Debian as follows:

```
timedatectl set-timezone "America/New_York"
```

Next, I updated the Debian installation:

```
apt update && apt upgrade
```

I also installed nslookup and apt-utils:

```
apt-get install dnsutils
apt-get install apt-utils
```

Increase WordPress Memory Limit

The following addition was made just before the "... Happy publishing" line in the wp-config.php file to enable the WordPress installation to take advantage of the additional memory.

```
/* Increase WP Memory limit */
define('WP_MEMORY_LIMIT', '1024M');

/* That's all, stop editing! Happy publishing. */
```

Increase the Size of OPcache

The PHP OPcache stores the compiled versions of the PHP scripts that make up our websites. The default size of the cache is 128 MB. We've increased our OPcache to 512 MB to improve performance. OPcache memory is shared across all of the WordPress sites in the host LXC container. To modify the size of the OPcache, edit the PHP configuration for Apache (under /etc/php/8.2/apache2), change the following lines, and restart Apache with systemctl restart apache2:

```
; The OPcache shared memory storage size.
;opcache.memory_consumption=128
opcache.memory_consumption=512
```

Email Access

The supplied postfix MTA did not work, so we've installed the WP Mail SMTP plugin and configured it to forward email through our email service. SPF and DMARC records need to be set up for the domain. This must be done properly to accommodate the multiple servers that can send e-mail on behalf of our domain. The DMARC step is simple - the necessary steps can be found here. The DMARC setup for a domain can be confirmed here.

SSL Certificate

This procedure was used to set up Apache2 with a signed SSL certificate - click here. It uses Certbot with Let's Encrypt to obtain and install a signed SSL certificate in Apache. A DNS-01 challenge is used, which does not require an external Internet connection to Apache. Once the initial certificates are installed, a script can be created to run the certbot commands that check whether the SSL certificates need to be renewed. The final step was to schedule weekly checks for SSL certificate renewals using cron. This is done by executing the script mentioned above once a week.

Persistent Object Cache

We are using the Redis persistent object cache on our sites. Information on Redis and how to install it may be found here and here. The following changes must first be made in each site's wp-config.php (typically defining WP_REDIS_HOST, WP_REDIS_PORT, and a unique WP_CACHE_KEY_SALT for each site):

--- ### Privacy Policy - Published: 2023-11-03 - Modified: 2024-02-22 - URL: https://homelab.anita-fred.net/privacy-policy/ - Categories: General

Who we are

Our website address is: https://homelab.anita-fred.net. Our site documents and shares information about our Home Lab installation and features.

Comments

When visitors leave comments on the site, we collect the data shown in the comments form, as well as the visitor's IP address and browser user agent string, to help with spam detection. An anonymized string created from your email address (also called a hash) may be provided to the Gravatar service to see if you are using it. The Gravatar service privacy policy is available here: https://automattic.com/privacy/. After approval of your comment, your profile picture is visible to the public in the context of your comment.

Media

If you upload images to the website, you should avoid uploading images with embedded location data (EXIF GPS) included.
Visitors to the website can download and extract any location data from images on the website.

Cookies

If you leave a comment on our site, you may opt in to saving your name, email address, and website in cookies. These are for your convenience so you do not have to fill in your details again when you leave another comment. These cookies will last for one year. If you visit our login page, we will set a temporary cookie to determine if your browser accepts cookies. This cookie contains no personal data and is discarded when you close your browser. When you log in, we will set up several cookies to save your login information and screen display choices. Login cookies last two days, and screen options cookies last a year. If you select "Remember Me", your login will persist for two weeks. If you log out of your account, the login cookies will be removed. If you edit or publish an article, an additional cookie will be saved in your browser. This cookie includes no personal data and simply indicates the post ID of the article you just edited. It expires after one day.

Embedded content from other websites

Articles on this site may include embedded content (e.g., videos, images, articles, etc.). Embedded content from other websites behaves exactly as if the visitor has visited the other website. These websites may collect data about you, use cookies, embed additional third-party tracking, and monitor your interaction with that embedded content, including tracking your interaction with the embedded content if you have an account and are logged in to that website.

Who we share your data with

If you request a password reset, your IP address will be included in the reset email.

How long we retain your data

If you leave a comment, the comment and its metadata are retained indefinitely. This is so we can automatically recognize and approve any follow-up comments instead of holding them in a moderation queue. For users who register on our website (if any), we also store the personal information they provide in their user profile. All users can see, edit, or delete their personal information at any time (except they cannot change their username). Website administrators can also see and edit that information.

What rights you have over your data

If you have an account on this site or have left comments, you can request to receive an exported file of the personal data we hold about you, including any data you have provided to us. You can also request that we erase any personal data we hold about you. This does not include any data we are obliged to keep for administrative, legal, or security purposes.

Where your data is sent

Visitor comments may be checked through an automated spam detection service.

--- --- ## Posts ---