Kubernetes Cluster Setup

We have set up a Kubernetes Cluster and plan to migrate some of our Docker workloads to Kubernetes.

What Is Kubernetes?

The following video explains what Kubernetes is and why you might want to use it:

Kubernetes Architecture

[Diagram: Kubernetes Cluster Architecture]

It helps to begin with a sound understanding of Kubernetes architecture. The following video does a good job of explaining the basics:

The next video explains how Kubernetes achieves high availability and scalability:

The final video explains the various components that make up a Kubernetes cluster:

Learning Kubernetes

Kubernetes is complex and can be challenging to understand and use. The following video series thoroughly covers the basics to help you build a solid understanding of Kubernetes.

Kubernetes Node VMs

We are implementing a high availability (HA) configuration. Our Kubernetes cluster requires four types of virtual machines to host the necessary nodes:

  • VMs for Master Nodes
  • VMs for Persistent Volume Storage Nodes
  • VMs for Workers that run Containers
  • VM for Cluster Administration

To facilitate the creation of these VMs, we’ve created a Proxmox template using the procedure outlined here.
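For example, cloning these VMs from the template can be scripted with Proxmox's qm tool. A minimal sketch, assuming the template has VM ID 9000 and its disk is scsi0 (the IDs, names, and sizes below are placeholders matching the table further down):

# qm clone 9000 201 --name K3S-Master1 --full    # full (non-linked) clone of the template
# qm set 201 --cores 4 --memory 8192             # 4 cores, 8G RAM
# qm resize 201 scsi0 60G                        # grow the disk (resize can only increase)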

The template was used to create the VMs to support our Kubernetes installation. The QEMU Guest Agent was also installed on each VM.
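On Debian/Ubuntu guests, installing and enabling the agent typically looks like the following (the agent option must also be enabled on the VM's Options tab in Proxmox):

# sudo apt install qemu-guest-agent
# sudo systemctl enable --now qemu-guest-agent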

Administrative Node Setup

We will use our Administration VM to set up and manage our cluster via a set of Ansible playbooks stored in a provided GitHub repository. To set up Ansible and Git on our Administration VM, we followed the steps in the following video:

Finally, we install Ansible with the following command:

# sudo apt install ansible
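A quick check that Ansible and Git are ready to use:

# ansible --version
# git --version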

Cluster VM Preparation

The VM configuration of our cluster is summarized in the following table:

Node                             VM Name(s)       Cores  Memory  Storage
Master Nodes                     K3S-Master[1-3]  4      8G      60G
Worker Nodes                     K3S-Worker1      8      32G     150G
                                 K3S-Worker[2-3]  4      32G     150G
Volume Storage Nodes (Longhorn)  K3S-Store[1-3]   4      8G      200G
Admin Node                       K3S-Admin        4      4G      20G

Our Master, Worker, and Storage node types each run as three instances, with each instance on a different physical server in our Proxmox cluster. This approach enables our Kubernetes cluster to keep operating (HA) should one of our servers fail. The Administrative node runs as a single instance protected by our Proxmox cluster’s HA setup.

The K3S-Worker1 node runs on the dual-socket Dell R740 server in our Proxmox cluster, which has considerably more compute and memory resources than our other servers. K3S-Worker2 and K3S-Worker3 each run on one of our two Supermicro servers.

We added these VMs to our Ansible vm-config playbook, which initializes newly created VMs and LXCs. We ran the playbook to prepare our cluster VMs for the Kubernetes installation.
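A run of that playbook against just the new cluster VMs might look like this (the file names and host pattern are placeholders for our repository’s actual layout):

# ansible-playbook -i hosts.ini vm-config.yml --limit 'k3s-*'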

K3S Cluster Setup

Setting up a Kubernetes cluster can be a complex task. We chose the K3S distribution of Kubernetes, which simplifies things somewhat. We also use kube-vip as the load balancer for the Kubernetes API and MetalLB as the load balancer for external services.
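As a sketch of the MetalLB side: releases from v0.13 onward are configured with an IPAddressPool and an L2Advertisement resource that tell MetalLB which LAN addresses it may hand out (the address range below is a placeholder; older releases use a ConfigMap instead):

# kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder LAN range for LoadBalancer IPs
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool
EOF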

We found an excellent video that explains the configuration process. The author provides an Ansible playbook that automates our K3S cluster installation and sets up K3S in a high-availability configuration.

Initially, we did not add our three storage nodes (k3s-store[1-3]) as worker nodes. This ensures that the pods created during the subsequent installation of Rancher and Cert-Manager land on our worker nodes (k3s-worker[1-3]). Once Rancher and Cert-Manager are installed, we modify the hosts.ini file associated with the video’s Ansible playbook to add the three storage nodes and run the playbook again. Longhorn and Traefik can then be installed. Adding the storage nodes via the same playbook ensures that every node in our K3S cluster runs the same version of K3S.
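For illustration, the change to hosts.ini amounts to appending the storage nodes to the playbook’s worker group, roughly like this (the group names vary by playbook version):

[master]
k3s-master1
k3s-master2
k3s-master3

[node]
k3s-worker1
k3s-worker2
k3s-worker3
# added after Rancher and Cert-Manager are installed:
k3s-store1
k3s-store2
k3s-store3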

We also installed kubectl on our administration node and successfully tested the Nginx test deployment in the video. With these steps completed, our K3S high-availability cluster is ready for Rancher.
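The smoke test boils down to deploying Nginx and exposing it through MetalLB; a minimal version, assuming kubectl already points at the cluster:

# kubectl create deployment nginx --image=nginx
# kubectl expose deployment nginx --port=80 --type=LoadBalancer
# kubectl get service nginx

The EXTERNAL-IP column of the last command should show an address from the MetalLB pool.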

Rancher – Cluster Management

Rancher provides a GUI for managing our K3S cluster. We deployed Rancher as a high-availability application in our K3S cluster. The procedure for doing this is explained in the following video:

We used the MetalLB load balancer that is part of our K3S installation to set up load balancing and expose our high-availability Rancher service. The command to do this is:

# kubectl expose deployment rancher \
     --name rancher-lb \
     --load-balancer-ip=[your IP] \
     --port=443 \
     --type=LoadBalancer \
     -n cattle-system
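
The assigned address can then be verified with:

# kubectl get service rancher-lb -n cattle-system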

Longhorn – Persistent Volume Storage

Applications often require persistent storage. We have installed Longhorn to provide this functionality in our cluster. We used the procedure in the following video to install Longhorn:
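For reference, Longhorn can also be installed directly from its released manifest with kubectl; a sketch (pin the version tag to the release you actually want):

# kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/deploy/longhorn.yaml
# kubectl -n longhorn-system get pods    # wait until all pods are Running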

Ingress Controller – Traefik

We are using Traefik as our Ingress Controller. Traefik provides security features, SSL certificates via Let’s Encrypt, and service routing within our K3S cluster. The following videos provide setup and security configuration procedures.
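As an illustration of the routing piece, a minimal Traefik IngressRoute that sends a hostname to a backend service might look like this (the host, service, and certificate resolver names are placeholders; newer Traefik versions use the traefik.io API group instead of traefik.containo.us):

# kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: whoami
  namespace: default
spec:
  entryPoints:
    - websecure                        # Traefik's HTTPS entry point
  routes:
    - match: Host(`whoami.example.com`)
      kind: Rule
      services:
        - name: whoami                 # backend Service to route to
          port: 80
  tls:
    certResolver: letsencrypt          # resolver configured for Let's Encrypt
EOF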

Monitoring and Alerting – Prometheus and Grafana

The app catalog in Rancher provides a pre-assembled K3S monitoring solution based on Prometheus, Grafana, and alerting via Prometheus Alertmanager. The following video explains how to set this up (it’s easy):
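Once the chart is installed from the catalog, a quick health check is to list the monitoring pods (the namespace below is the rancher-monitoring chart’s default):

# kubectl -n cattle-monitoring-system get pods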
