Setting up a k3s cluster on Proxmox VE and installing WordPress using Helm

Intro and Base Setup

This is documentation of my homelab project to learn virtualization, Kubernetes, container deployment, NGINX, database management systems and web security. It will keep changing and evolving, mostly serving as a guide for myself, but also for anyone who would like to try out the project.

Hardware

The hardware I'll be using for this project is an old laptop of mine, an ASUS N56JR with a few changed specifications.

CPU: Intel Core i7-4700HQ (4-core, 2.40 - 3.40 GHz) / RAM: 12 GB (1x 4096 MB, 1x 8192 MB) DDR3, 1600 MHz / HDD: 256 GB

It's connected to the router via Ethernet cable and accessed over SSH from my main NUC workstation, which is on Wi-Fi.

Server

The base of the server is Proxmox VE 7.2, an open-source virtualization platform and hypervisor based on Debian 11. The ISO was installed directly on the laptop through the interactive installer.

Use

Access to the hypervisor web interface is through the address shown after installation, 192.168.X.X:8006, using the login user and password set during installation. From here on, everything is done through the NUC workstation and the laptop is left closed and running.

The NUC8i3BEH workstation is currently running Solus OS x86_64. Desktop config files can be found in the desktop repo.


GitHub Repo

I'll be creating a GitHub repo to contain all my documentation and some config files.

  1. Created a k3s-cluster repo through the GitHub web interface
  2. Created a local repo in $HOME/Code/github/k3s-cluster
  3. Initialize the local repo using git:
git init -b main
git add .
# Adds the files in the local repository and stages them for commit. To unstage a file, use 'git reset HEAD YOUR-FILE'.
git commit -m "First Commit"
# Commits the tracked changes and prepares them to be pushed to a remote repository. To undo this commit and modify the files, use 'git reset --soft HEAD~1', then add and commit again.
  4. Copy the remote repository URL from GitHub and add it:
git remote add origin <REMOTE_URL>
# Sets the new remote
git remote -v
# Verifies the new remote URL
  5. Push changes from local to remote:
git push origin main
  6. Adding local files in the future:
git add .
# Adds the files to your local repository and stages them for commit. To unstage a file, use 'git reset HEAD YOUR-FILE'.
git commit -m "Commit Description"
# Commits the tracked changes and prepares them to be pushed to a remote repository.
git push origin main
# Pushes the changes in your local repository up to the remote repository you specified as the origin

Setting up VM Environment

For this step I've been consulting a tutorial on EnigmaCurry's dev blog.

Most of the code is taken from there.

SSH keys

  1. Created an SSH host entry in the $HOME/.ssh/config file:
Host pve
	Hostname 192.168.X.X
	User root

Change 192.168.X.X to the IP address of the Proxmox host.

  2. Created an SSH identity on the workstation by running ssh-keygen
  3. Run ssh-copy-id pve, confirming the SSH key fingerprint and entering the root password chosen during install, to copy the key to the Proxmox server.
  4. SSH to the Proxmox host: ssh pve
  5. Disable password authentication by editing /etc/ssh/sshd_config with your favorite text editor
  6. Remove the # from the PasswordAuthentication line and change yes to no
  7. Save /etc/ssh/sshd_config and close the editor
  8. Restart ssh, run: systemctl restart sshd
  9. Test whether PasswordAuthentication is really turned off with a non-existent username: ssh nonexistent@pve

This attempt should return Permission denied (publickey). If no password prompt came up, everything is working.
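The sshd_config edit in steps 5-7 can also be done non-interactively. A minimal sketch, assuming the stock Debian config where the line reads #PasswordAuthentication yes (back up the file first):

sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
# Verify the result before restarting sshd
grep PasswordAuthentication /etc/ssh/sshd_config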

Firewall

The Proxmox firewall will now be enabled so that only connections from the workstation are allowed. This is all done from the Proxmox web interface.

  1. Click on Datacenter in the Server View list
  2. Find Firewall settings
  3. Find Firewall Options below
  4. Double-click on the Firewall entry and change No to Yes

The firewall is disabled by default; it is now enabled.

Ubuntu cloud-init template

  1. SSH back into the Proxmox server (ssh pve)
  2. Download the Ubuntu 20.04 LTS cloud image:
wget http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
  3. Create a new VM that will serve as a template:
qm create 9000
  4. Import the cloud image:
qm importdisk 9000 focal-server-cloudimg-amd64.img local-lvm
  5. You can delete the downloaded image if you wish:
rm focal-server-cloudimg-amd64.img
  6. Configure the VM:
qm set 9000 --name Ubuntu-20.04 --memory 2048 --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0 \
  --ide0 none,media=cdrom --ide2 local-lvm:cloudinit --boot c \
  --bootdisk scsi0 --serial0 socket --vga std --ciuser root \
  --sshkeys $HOME/.ssh/authorized_keys --ipconfig0 ip=dhcp
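You can double-check that the options were applied by dumping the VM's configuration:

qm config 9000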

Resizing the template with GParted

  1. Download the GParted ISO image to resize the disk:
wget -P /var/lib/vz/template/iso \
  https://downloads.sourceforge.net/gparted/gparted-live-1.3.0-1-amd64.iso
  2. Set the first boot device to load GParted:
qm set 9000 --ide0 local:iso/gparted-live-1.3.0-1-amd64.iso,media=cdrom \
  --boot 'order=ide0;scsi0'
  3. Resize the disk to whatever size you choose. To add 50 GB write +50G, for 15 GB +15G and so on:
qm resize 9000 scsi0 +15G

In the Proxmox web interface you can now find VM 9000. All these settings can also be done via the interface. For now, start the VM and click on the Console in the node list.

  1. Once GParted loads, click Fix when the error prompt comes up.
  2. Select /dev/sda1 and Resize/Move
  3. Resize the partition all the way to the right so there is no free space left.
  4. Apply your settings and shut down the VM.
  5. Remove the GParted drive from the VM:
qm set 9000 --delete ide0
  6. You can now convert your virtual machine into a template:
qm template 9000

Installing k3s

Now that we have a VM template, we can create a cluster with one master and two worker nodes.

In the Proxmox interface we can now clone our 9000 template.

  1. Right-click on the template, select Clone and then Linked Clone to save disk space. This option requires the template image to remain present; Full Clones can function independently.
  2. Enter the name of the clone: k3s-1, pve-k3s-1 or whatever you choose.
  3. Start the VM and repeat for as many clones as you wish. I made 3 clones for 1 master and 2 workers.
  4. Wait for DHCP to assign IPs to the VMs. You can find them through various methods; I used Nmap on my Solus workstation:
sudo eopkg it nmap

If you're using an Ubuntu or Debian based system you can install it with:

sudo apt install nmap

Then:

nmap -sP 192.168.1.0/24

Note: You will need to alter the IP address scheme to match yours.

Output should be something like this:

Starting Nmap 7.90 ( https://nmap.org ) at 2022-06-19 11:10 EEST
Nmap scan report for ralink.ralinktech.com (192.168.1.1)
Host is up (0.0039s latency).
Nmap scan report for 192.168.1.2
Host is up (0.035s latency).
Nmap scan report for 192.168.1.6
Host is up (0.000076s latency).
Nmap scan report for 192.168.1.7
Host is up (0.029s latency).
Nmap scan report for 192.168.1.8
Host is up (0.029s latency).
Nmap scan report for 192.168.1.9
Host is up (0.040s latency).
Nmap scan report for 192.168.1.10
Host is up (0.020s latency).
Nmap done: 256 IP addresses (7 hosts up) scanned in 3.40 seconds

If you're running the command right after creating and starting the clones, the new VMs should be the last three entries. In my case: 192.168.1.8, 192.168.1.9 and 192.168.1.10.

  5. Now it's time to edit $HOME/.ssh/config on the workstation and add sections for each node:
# Host SSH client config: ~/.ssh/config

Host pve-k3s-1
    Hostname 192.168.X.X
    User root
    
Host pve-k3s-2
    Hostname 192.168.X.X
    User root

Host pve-k3s-3
    Hostname 192.168.X.X
    User root
  6. Test login to k3s-1 via ssh:
ssh pve-k3s-1
  7. Now it's time to install the k3s server on what will be the master node:
curl -sfL https://get.k3s.io | sh -s - server --disable traefik
  8. Retrieve the cluster token:
cat /var/lib/rancher/k3s/server/node-token
  9. SSH into the k3s-2 and k3s-3 nodes and install the worker agent, replacing the value of K3S_URL with the master node's IP and K3S_TOKEN with the token you copied in the previous step:
# Install K3s worker agent: fill in K3S_URL and K3S_TOKEN
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.X.X:6443 K3S_TOKEN=xxxx sh
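Back on the master node, you can confirm that both workers joined the cluster; k3s bundles its own kubectl, so this works before any workstation setup:

k3s kubectl get nodes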

DHCP Reservations

As this is a low-budget home project, the cluster IPs are assigned through DHCP. I'm not using static IPs, but my router settings do provide DHCP reservations. In my case, I used each VM's MAC address to reserve a slot in the 192.168.1.0 - 192.168.1.100 IP range.

To find the MAC Address of each VM:

  1. Go to the Proxmox VE web interface.
  2. Click on the VM in the Server View list.
  3. Click on Hardware -> Network Device
  4. Copy the MAC address into your router's DHCP reservation options and assign the desired IP.

In my case, pve-k3s-1, the master node, is reserved at 192.168.1.7.

If you don't do this, there is a risk of your nodes changing IPs within your network and bringing chaos into your configuration files. If the IPs change, hosts, kubernetes configs and other settings that required the VM IPs will need to be reconfigured.

  5. Restart all machines and check that the IPs stayed the same by SSHing into each of them.

Local Workstation Access

Exit the ssh sessions from the nodes and return to the workstation.

  1. Create the kubectl config file:
mkdir -p $HOME/.kube && \
scp pve-k3s-1:/etc/rancher/k3s/k3s.yaml $HOME/.kube/config && \
echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bash_profile && \
export KUBECONFIG=$HOME/.kube/config
  2. Now edit the $HOME/.kube/config file and replace 127.0.0.1 with the IP address of the master node. Also find and replace default with the name of the cluster, k3s-1, and save the file.
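Both replacements can also be done with sed. A quick sketch, assuming your master's IP in place of 192.168.X.X; in a fresh k3s kubeconfig the string default only appears as the context, cluster and user names, so a global replace is safe:

sed -i -e 's/127.0.0.1/192.168.X.X/g' -e 's/default/k3s-1/g' $HOME/.kube/config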

Install kubectl and helm

kubectl

On my Solus workstation, I used the package from the official repository.

sudo eopkg it kubectl

If you're running an Ubuntu or Debian based system:

  1. Update the package index and install the packages needed to use the Kubernetes apt repository:
sudo apt update
sudo apt-get install -y apt-transport-https ca-certificates curl
  2. Download the Google Cloud public signing key:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
  3. Add the Kubernetes apt repository:
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  4. Update the apt package index with the new repository and install kubectl:
sudo apt-get update
sudo apt-get install -y kubectl
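You can verify the client installed correctly before pointing it at the cluster:

kubectl version --client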

Running

  1. Test that kubectl works.
kubectl get nodes

Output should be:

NAME    STATUS   ROLES                  AGE     VERSION
k3s-1   Ready    control-plane,master   5d18h   v1.23.6+k3s1
k3s-2   Ready    <none>                 5d17h   v1.23.6+k3s1
k3s-3   Ready    <none>                 5d17h   v1.23.6+k3s1

Don't worry if the worker nodes have <none> values under ROLES. Everything is working.

You now have a working k3s cluster running on Proxmox.
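If the empty ROLES column bothers you, you can label the workers yourself. This is purely cosmetic metadata and doesn't change scheduling; the node names here are the ones from my setup:

kubectl label node k3s-2 node-role.kubernetes.io/worker=worker
kubectl label node k3s-3 node-role.kubernetes.io/worker=worker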

helm

Consult the official helm docs to choose the best way to install helm on your particular workstation.

I chose the binary release, as there isn't an official package in the Solus repos.

If you're on an Ubuntu or Debian based system:

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

It's a good idea to add some chart repositories now:

  1. Add the bitnami repo:
helm repo add bitnami https://charts.bitnami.com/bitnami
  2. Add the k8s-at-home repo:
helm repo add k8s-at-home https://k8s-at-home.com/charts/
  3. Add the helm stable repo:
helm repo add stable https://charts.helm.sh/stable

Note that many charts in the stable repo are deprecated.

To search for charts in a particular repo:

helm search repo <repo name>

Replace <repo name> with the actual name of your chosen repository.
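For example, to confirm that the WordPress chart used in the next section is available:

helm search repo bitnami/wordpress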

Create snapshots in Proxmox

Now that the basic setup is complete, it's a good idea to create some snapshots to revert to this base level configuration easily.

In the Proxmox web interface, click on each VM of the cluster and create snapshots:

  1. Shut down all machines and click Take Snapshot in the Snapshots tab. Name them something like k3s_1_off

  2. Start all VMs and wait for them to load. Take another set of snapshots, this time with the machines running, and check Include RAM on each. Name them something like k3s_1_on

Make sure to create new snapshots every time you make major changes.
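Snapshots can also be taken from the Proxmox host shell with qm. A sketch, where 100 is a placeholder for your actual VM IDs:

qm snapshot 100 k3s_1_on --vmstate 1
# --vmstate 1 saves the RAM state, like the Include RAM checkbox in the web interface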

Installing the WordPress Helm chart

I'll be using the ready-made Helm chart from Bitnami.

  1. Install the chart in a dedicated namespace:
helm install wordpress bitnami/wordpress --namespace wordpress --create-namespace
  2. Output after installation. Follow the given steps; it takes a few moments to get all services running (see the watch command after this list):
Your WordPress site can be accessed through the following DNS name from within your cluster:

    wordpress.wordpress.svc.cluster.local (port 80)

To access your WordPress site from outside the cluster follow the steps below:

1. Get the WordPress URL by running these commands:

  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        Watch the status with: 'kubectl get svc --namespace wordpress -w wordpress'

   export SERVICE_IP=$(kubectl get svc --namespace wordpress wordpress --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
   echo "WordPress URL: http://$SERVICE_IP/"
   echo "WordPress Admin URL: http://$SERVICE_IP/admin"

2. Open a browser and access WordPress using the obtained URL.

3. Login with the following credentials below to see your blog:

  echo Username: user
  echo Password: $(kubectl get secret --namespace wordpress wordpress -o jsonpath="{.data.wordpress-password}" | base64 -d)
  3. Take another snapshot in Proxmox to save the working WordPress environment.
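To watch the WordPress pods come up while the chart deploys:

kubectl get pods --namespace wordpress -w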

