Building a Small K8s Cluster on a Single PC - Chapter 2 - Forging the Cluster.

by Eduardo Higueras, August 22nd, 2022

Too Long; Didn't Read

Second in a series of articles introducing my guide series for building a small Kubernetes cluster on a single low-end PC. This entry covers the guides that detail how to set up the cluster itself on a few Debian VMs with Rancher K3s.

In my previous article, I started the overview of my guides (link to the project at the end of this article) with the ones that explain how to set up the virtualization platform Proxmox VE on a single, ordinary PC. In this piece I'll tackle those that deal with creating the cluster itself with Debian virtual machines and K3s.

Chapter 02. Forging the cluster

Setting up a cluster demands weighing a number of aspects, from the internal networking between the cluster nodes to the resources you're going to assign to each node, among several other details. My guides cover all of these concerns, and a bit more.

G017 - Virtual Networking ~ Network configuration

The initial Proxmox VE setup has its networking wrapped around a default SDN (Software Defined Network) that comes with just one virtual Linux Bridge. This configuration is enough to give connectivity to the Proxmox VE host itself and any VMs you create on it, but you cannot use it to isolate the internal networking of a cluster. You need to create another Linux Bridge, this one completely isolated from external connections. This guide explains how to do this rather easy procedure in Proxmox VE, plus a little extra about managing these Linux Bridges with shell commands.
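
For reference, the isolated bridge boils down to a few lines in the host's /etc/network/interfaces file. Below is a minimal sketch; the vmbr1 name and the subnet are illustrative choices, not values mandated by the guide.

```
# /etc/network/interfaces (excerpt): an isolated Linux Bridge.
# "vmbr1" and the address are hypothetical; adjust them to your setup.
auto vmbr1
iface vmbr1 inet static
        address 10.0.0.1/24
        bridge-ports none    # no physical NIC attached: traffic never leaves the host
        bridge-stp off
        bridge-fd 0
```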

Alternative network setup with OVS

There's an alternative configuration you might like to try. Instead of using Linux Bridges, you could use the Open vSwitch package to set up the whole software-defined networking of your Proxmox VE host. I explain how to set up a basic configuration in the guide G910 - Appendix 10 ~ Setting up virtual network with Open vSwitch, but bear in mind that OVS is a more complex system that also demands more resources to run.
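
For a rough comparison, the same isolated bridge declared with Open vSwitch would look something like the sketch below, assuming the openvswitch-switch package is already installed; the bridge name is again hypothetical.

```
# /etc/network/interfaces (excerpt): the same idea, but managed by Open vSwitch.
auto vmbr1
iface vmbr1 inet manual
        ovs_type OVSBridge
```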

G018 - K3s cluster setup 01 ~ Requirements and arrangement

This is where you have to make certain important decisions. What kind of cluster do you want to run? Which Kubernetes distribution is the most adequate? Which apps or services will you deploy in the cluster later? You have to answer these questions while taking into account the constraints of your host system. This guide does exactly that, first reviewing the hardware requirements of each piece of software, then proposing a particular cluster setup that can fit into the host system. In particular, my guide proposes Rancher K3s as the Kubernetes distribution to use, since it's designed to be as lightweight as possible. It also takes a look at the requirements of Nextcloud, Gitea and a monitoring stack that includes Prometheus and Grafana; all of them will be services deployed in the cluster later. And, finally, the K3s cluster arrangement I put forward is made up of one master (in K8s jargon) or server (for K3s) node and two worker (K8s) or agent (K3s) nodes.

G019 - K3s cluster setup 02 ~ Storage setup

At this point, your Proxmox VE host has just a very elementary storage setup. You need to organize its free space to make it useful for your purposes, which means configuring it in such a way that Proxmox VE can make proper use of it. This guide explains exactly that, using a combination of LVM and Proxmox VE configuration that enables different storage spaces for different purposes: from storing your VM images and templates to storing backups.
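
To give a flavor of what's involved, here's a hedged sketch of the kind of commands used: carving a thin pool out of an LVM volume group and registering it as a Proxmox VE storage. The names (vgdata, ssdthin, ssd-storage) and the size are made up for the example; the guide uses its own layout.

```
# Create an LVM thin pool inside an existing volume group (names and size are hypothetical).
lvcreate --type thin-pool -L 200G -n ssdthin vgdata

# Register it in Proxmox VE as storage for VM disk images and container volumes.
pvesm add lvmthin ssd-storage --vgname vgdata --thinpool ssdthin --content images,rootdir
```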

G020 - K3s cluster setup 03 ~ Debian VM creation

In my proposed setup, I use three small Debian-based virtual machines. To build each of them I use what Proxmox VE calls VM templates, which I create in a two-stage approach. The first stage is the creation and configuration of a regular Debian VM that can be used for anything. In the G020 guide I start this first stage with the creation of the VM, set up with a particular hardware configuration that includes, among other details, two network cards, and the installation of a barebones Debian system on it.
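
Creating such a VM from the command line would look roughly like this. It's only a sketch: the guide works through the web console, and the VM ID, names and sizes here are placeholders.

```
# Create a small Debian VM with two virtio NICs, one per bridge (all values are illustrative).
qm create 100 --name debian-base --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --net1 virtio,bridge=vmbr1 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:10 \
  --cdrom local:iso/debian-netinst.iso
```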

G021 - K3s cluster setup 04 ~ Debian VM configuration

After installing Debian into the virtual machine, you still have to configure the VM properly, although in a generic way so it remains useful for almost anything. This procedure is very similar to the one applied to the Proxmox VE host, since it involves installing certain packages through apt, system hardening and sysctl adjustments, among other things. All of this is explained in this guide.
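
As an illustration of the sysctl side of that hardening, a drop-in file like the following does the trick. The file name and the parameter choices are examples of common hardening values, not a transcript of the guide.

```
# /etc/sysctl.d/80-hardening.conf (hypothetical file): a few typical hardening knobs.
# Ignore ICMP redirects, don't send them, and enable reverse-path filtering.
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.rp_filter = 1
# Log packets with impossible source addresses.
net.ipv4.conf.all.log_martians = 1

# Apply the changes without rebooting:
#   sysctl --system
```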

G022 - K3s cluster setup 05 ~ Connecting the VM to the NUT server

This guide is all about connecting the Debian VM to the UPS unit through the NUT software. This implies turning the NUT setup in the Proxmox VE host into a NUT server, and enabling a NUT client in the VM. I also explain how to set up the NUT software so that it can shut down the VM when it detects a situation that requires a controlled emergency shutdown, such as when the UPS kicks in due to a power loss. Of course, if you don't have a UPS unit (hint: you should, if your Proxmox VE host is a regular PC), you can skip this guide.
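
In NUT terms, the split looks like this: the host serves the UPS over the network, and the VM runs a client that monitors it. Below is a minimal sketch of the VM side; the UPS name, host address, user and password are placeholders.

```
# On the Debian VM. /etc/nut/nut.conf: run NUT as a network client only.
MODE=netclient

# /etc/nut/upsmon.conf: watch the UPS served by the Proxmox VE host (port 3493
# is NUT's default) and shut this VM down when the battery runs out.
MONITOR myups@192.168.1.10:3493 1 upsmonuser secretpass slave
SHUTDOWNCMD "/sbin/shutdown -h +0"
```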

G023 - K3s cluster setup 06 ~ Debian VM template and backup

With the Debian VM completely configured, you can turn it into a VM template. It's a very simple procedure done through the Proxmox VE web console, and it turns the VM into a frozen-in-time compressed image that you can clone to create other VMs. The main advantage is the smaller chunk of space it takes up from your storage capacity, and the main disadvantage is that its frozen nature means it'll get old over time. You cannot turn a VM template back into a regular VM, at least not through the Proxmox VE web console, so you may very well keep the VM as is if you happen to have plenty of storage space. This guide also shows how to make a backup of a VM manually through the Proxmox VE web console, a process that is also rather easy to do.
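
Both operations also have shell equivalents, if you ever prefer the command line over the web console; the VM ID and storage name below are placeholders.

```
# Turn VM 100 into a template (a one-way operation, as noted above).
qm template 100

# Make a compressed backup of VM 100 onto a given storage.
vzdump 100 --storage local --compress zstd
```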

G024 - K3s cluster setup 07 ~ K3s node VM template setup

With the generic Debian VM available, you can go through the second stage and create a Kubernetes node VM template. The G024 guide explains all that is required to clone a VM from the Debian VM template and make it ready to become a K3s node. If you're wondering why further specific configuration is needed, know that a Kubernetes node in general, and a K3s one in particular, has certain particularities that must be accounted for in the supporting system. Things like disabling swap, or enabling the second network card to later facilitate the internal cluster communication through the isolated Linux Bridge (prepared earlier in Proxmox VE), must be left ready in this more specialized VM template.
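
The swap part of that preparation, for instance, amounts to something like this sketch, which assumes the usual swap entry exists in /etc/fstab.

```
# Disable swap right away and keep it disabled across reboots,
# since Kubernetes nodes are expected to run without swap.
swapoff -a
sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out any swap entries
```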

G025 - K3s cluster setup 08 ~ K3s Kubernetes cluster setup

At last, with the K3s node VM template ready, you can start creating the VMs you'll use as nodes in your K3s Kubernetes cluster. Remember that the cluster I propose is made up of three nodes: one is the master/server, while the remaining two are the workers/agents. Each node has its own particularities to attend to, such as requiring a unique hostname or having certain ports enabled in the Proxmox VE firewall. All of these and more details are covered by this G025 guide, plus the proper preconfiguration and installation of the K3s software in each of the VMs. At the end of this walkthrough, I also give some basic indications on how to monitor your K3s cluster with the kubectl command and the K3s log files.
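
For orientation, the upstream K3s installer makes this step look deceptively short. Here's a rough sketch of the two roles; the server address is a placeholder, and any extra flags used in the guide are omitted.

```
# On the server (master) node: install K3s in server mode.
curl -sfL https://get.k3s.io | sh -s - server

# On each agent (worker) node: join the cluster using the server's address and
# the token found on the server at /var/lib/rancher/k3s/server/node-token.
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.1.21:6443 \
  K3S_TOKEN=<token-from-server> sh -
```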

Deploying a cluster with two or more master/server nodes

If you want an even more complete K8s experience, you might like to try building a cluster with more than one master/server node. The differences in the K3s configuration are small in length but have very significant implications. The very first thing to know is that, while a single-server K3s cluster uses just an SQLite database, a multi-server one requires a full datastore engine like etcd (the default in a standard Kubernetes installation). This has implications in aspects such as performance or firewalling, things that are covered in detail in my extra guide G908 - Appendix 08 ~ K3s cluster with two or more server nodes.
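
The "small in length" part refers to flags like the ones below: the first server bootstraps the embedded etcd datastore with --cluster-init, and any further server joins it. A sketch only; addresses and the token are placeholders.

```
# First server: start a new cluster with the embedded etcd datastore.
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# Additional servers: join the existing cluster as servers, not agents.
curl -sfL https://get.k3s.io | sh -s - server \
  --server https://192.168.1.21:6443 --token <token-from-first-server>
```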

G026 - K3s cluster setup 09 ~ Setting up a kubectl client for remote access

You don't want to be monitoring your cluster from your master node; that's not safe at all. You want to use another client system from which you can launch kubectl commands remotely against your server node. This G026 guide explains how to install the official kubectl in a Debian-based Linux system, and how to establish the connection with your K3s cluster. It also introduces kubeval, a tool for validating Kubernetes manifests.
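
Establishing that remote connection is mostly a matter of copying the kubeconfig file that K3s generates on the server and pointing it at the server's reachable address. A sketch, with placeholder user and IP:

```
# On the client machine: fetch the kubeconfig from the server node and
# replace the loopback address with the server's real IP.
scp user@192.168.1.21:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/192.168.1.21/' ~/.kube/config

# Verify the connection.
kubectl get nodes
```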

G027 - K3s cluster setup 10 ~ Deploying the MetalLB load balancer

K3s comes with embedded services that provide all the essential functionality required to run a cluster. Yet in my G025 guide I disable two particular services in the K3s configuration: the metrics-server and the default load balancer. The first is disabled due to the need of adjusting the metrics-server configuration, something better done by deploying it later. The second is disabled because there's a better load balancer that can be deployed in your cluster: MetalLB. The G027 guide shows you how to deploy MetalLB with a Kustomize project, and what a fitting configuration for this service looks like in your cluster.


Kustomize is an official Kubernetes tool, embedded in the kubectl command for quite a while now, that allows you to compose deployments as coherent projects. In other words, you can consider it an official alternative to Helm charts and similar tools.
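
A Kustomize project can be as small as a directory holding a kustomization.yaml file. Here's a hedged sketch of what one for MetalLB might contain; the file names are illustrative, and the upstream manifest is assumed to have been downloaded beforehand.

```
# kustomization.yaml (sketch): compose MetalLB's upstream manifest
# with your own configuration resources.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- metallb-native.yaml   # upstream MetalLB manifest, downloaded beforehand
- pool.yaml             # your own configuration (see the next note for its content)

# Deploy the whole project with kubectl's embedded Kustomize:
#   kubectl apply -k .
```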

Adjusting the configuration for the newer version of MetalLB

Like any other software, MetalLB is updated frequently, and from its 0.13.0 version onwards it has changed how it's configured. I've amended my G027 guide with the appendix guide G912 - Appendix 12 ~ Adapting MetalLB config to CR, where I explain how to do the MetalLB configuration the new way. Stick to the configuration explained in the G027 guide only if, for some reason, you need or want to work with an older version of MetalLB.
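
Under the new model, the addresses MetalLB can hand out are declared with custom resources instead of a ConfigMap. A hedged sketch of the pool.yaml file referenced above; the names and the address range are placeholders.

```
# pool.yaml (sketch): MetalLB >= 0.13.0 configuration as custom resources.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.40-192.168.1.49   # an illustrative range on the LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool
```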

G028 - K3s cluster setup 11 ~ Deploying the metrics-server service

The default metrics-server is the other service that was configured as disabled in your K3s cluster. The G028 guide explains how to deploy it with a Kustomize project, while also indicating the small configuration change that demanded deploying this service later. This service is the one that makes kubectl top work, giving you stats on the CPU and RAM resources consumed by the nodes and pods deployed in your cluster.
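
Once the metrics-server is up, checking that it works is a one-liner (give it a minute or two to gather its first samples):

```
# Resource usage per node, then per pod across all namespaces.
kubectl top nodes
kubectl top pods -A
```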

G029 - K3s cluster setup 12 ~ Setting up cert-manager and wildcard certificate

At this point you'll have a fully functional cluster, but there's still a service that is very common to have in a Kubernetes setup: cert-manager. With this service you can manage certificates, such as the one you'll need later to enable secure connections to other services, like Nextcloud or Gitea, deployed in later guides. In this G029 guide you'll see how to deploy cert-manager and also how to use it to generate a self-signed wildcard certificate you can use later.
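
In cert-manager terms, a self-signed setup boils down to an issuer plus a certificate. Below is a minimal sketch, not the guide's exact manifests; the names, namespace and wildcard domain are placeholders.

```
# Self-signed issuer and a wildcard certificate (all names are illustrative).
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-cert
  namespace: certificates
spec:
  secretName: wildcard-cert-tls
  duration: 8760h   # one year
  dnsNames:
  - "*.cluster.example.io"
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
```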

There's a particular functionality that cert-manager doesn't provide: syncing a certificate's secret between different namespaces. To help you with this task, there's another piece of software called Reflector that you can deploy in your cluster to take care of this in a more automated way. The guide also explains the setup and use of this other utility service.
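
Reflector works through annotations on the secret to be mirrored; with cert-manager they can be stamped in through the Certificate's secretTemplate field. A sketch assuming the Emberstack Reflector and a reasonably recent cert-manager; the namespace list is a placeholder.

```
# Excerpt of a Certificate spec: annotate the generated secret so
# Reflector mirrors it into other namespaces (values are illustrative).
spec:
  secretTemplate:
    annotations:
      reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
      reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: "nextcloud,gitea"
      reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
```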

G030 - K3s cluster setup 13 ~ Deploying the Kubernetes Dashboard

One way to see what's going on in your cluster is to use kubectl commands. A more convenient one is to look through a graphical interface, and K8s has its own official one, known as the Kubernetes Dashboard. My G030 guide explains how to deploy it, how to access it from your kubectl client system, and also gives hints on how to navigate through this dashboard.
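
Reaching the Dashboard from the kubectl client usually goes through the API server's proxy. A sketch of the common access pattern; the admin-user service account is a hypothetical name you'd have created beforehand, and the token command requires a recent kubectl.

```
# Generate a login token for a previously created service account.
kubectl -n kubernetes-dashboard create token admin-user

# Start a local proxy to the cluster's API server...
kubectl proxy
# ...then browse to the Dashboard through it:
#   http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```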

G031 - K3s cluster setup 14 ~ Enabling the Traefik dashboard

One of the embedded services that come included in any K3s installation is the Traefik ingress controller. In the cluster setup explained in my guides it runs just fine, but know that it also has its own graphical dashboard. With the G031 guide you can see a basic way of enabling access to it in a secure manner, since the dashboard itself doesn't come with any kind of login control.
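
One safe way to peek at it without exposing it to the outside is a temporary port-forward from your client machine. A sketch; the deployment name and port match what K3s ships with Traefik v2, but double-check them in your own cluster.

```
# Forward Traefik's dashboard port to the local machine...
kubectl -n kube-system port-forward deployment/traefik 9000:9000
# ...and browse to it locally (note the trailing slash):
#   http://localhost:9000/dashboard/
```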

In the next chapter

With your K3s cluster fully ready and able, it's time to give it something to do for real. What about deploying some useful services like Nextcloud or Gitea, or even a Prometheus-based monitoring stack? In my guides I did just that, and I'll tell you about it in my next article.

Small homelab K8s cluster on Proxmox VE