This is the documentation of my homelab project to learn virtualization, Kubernetes, container deployment, NGINX, database management systems and web security. It will keep changing and evolving, serving mostly as a guide for myself, but also for anyone who would like to try out the project.
The hardware I'll be using for this project is an old laptop of mine, an ASUS N56JR with a few upgraded components.
CPU: Intel Core i7-4700HQ (4-core, 2.40 - 3.40 GHz) / RAM: 12 GB (1x 4096 MB, 1x 8192 MB) - DDR3, 1600 MHz / HDD: 256 GB
The laptop is connected to the router via Ethernet cable and is accessed over SSH from my main NUC workstation, which connects over Wi-Fi.
The base of the server is Proxmox VE 7.2, an open-source virtualization platform and hypervisor based on Debian 11. The ISO was installed directly on the laptop through the interactive installer.
The hypervisor web interface is reached at https://192.168.X.X:8006, using the login user and password set during installation. From here on, everything is done through the NUC workstation and the laptop is left closed and running.
The NUC8i3BEH workstation is currently running Solus OS x86_64. Desktop config files can be found in the desktop repo.
I'll be creating a GitHub repo to contain all my documentation and some config files.
- Created a `k3s-cluster` repo through the online interface in GitHub
- Created a local repo in `$HOME/Code/github/k3s-cluster`
- Setting up the remote repo using `git`:
```
git init -b main
git add .
# Adds the files in the local repository and stages them for commit. To unstage a file, use 'git reset HEAD YOUR-FILE'.
git commit -m "First Commit"
# Commits the tracked changes and prepares them to be pushed to a remote repository. To remove this commit and modify the file, use 'git reset --soft HEAD~1' and commit and add the file again.
```
- Copy the remote repository URL from GitHub:
```
git remote add origin <REMOTE_URL>
# Sets the new remote
git remote -v
# Verifies the new remote URL
```
- Pushing changes from local to remote:
```
git push origin main
```
- Adding local files in the future:
```
git add .
# Adds the file to your local repository and stages it for commit. To unstage a file, use 'git reset HEAD YOUR-FILE'.
git commit -m "Commit Description"
# Commits the tracked changes and prepares them to be pushed to a remote repository. To remove this commit and modify the file, use 'git reset --soft HEAD~1' and commit and add the file again.
git push origin main
# Pushes the changes in your local repository up to the remote repository you specified as the origin
```
For this step I've been consulting a tutorial on EnigmaCurry's dev blog. Most of the code is taken from there.
- Created an SSH host entry in the `$HOME/.ssh/config` file:
```
Host pve
    Hostname 192.168.X.X
    User root
```
Change 192.168.X.X to the IP of the Proxmox host.
- Created an SSH identity on the workstation by running `ssh-keygen`
- Run `ssh-copy-id pve`, confirm the SSH key fingerprint and enter the remote password chosen during install; this lets you log in to the Proxmox server via SSH.
- SSH to the Proxmox host: `ssh pve`
- Disable password authentication by editing `/etc/ssh/sshd_config` with your favorite text editor
- Uncomment the `PasswordAuthentication` line (remove the leading `#`) and change `yes` to `no`
- Save `/etc/ssh/sshd_config` and close the editor
- Restart ssh: `systemctl restart sshd`
- Test whether `PasswordAuthentication` is really turned off by using a non-existent username: `ssh nonexistent@pve`
This attempt should return `Permission denied (publickey)`. If no password prompt came up, everything is working.
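If you prefer to script this change instead of editing the file by hand, a sed one-liner along these lines should work (just a sketch, assuming the stock sshd_config where the directive is present but commented out):
```
# Force PasswordAuthentication to "no", whether the line is commented out or not,
# then restart sshd so the change takes effect
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd
```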
The Proxmox firewall will now be set up so that only connections from the workstation are allowed. This is all done from the Proxmox web interface.
- Click on `Datacenter` in the `Server View` list
- Find the `Firewall` settings
- Find the Firewall `Options` below
- Double-click on the `Firewall` entry and change `No` to `Yes`
The firewall is disabled by default and is now enabled.
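The same result can also be achieved from the shell by editing the datacenter firewall file. A minimal sketch, assuming the workstation's address is 192.168.X.X (substitute your real IP); it enables the firewall and explicitly allows SSH and the web interface from that address only:
```
# /etc/pve/firewall/cluster.fw - datacenter-level firewall configuration
[OPTIONS]
enable: 1

[RULES]
# Allow SSH (22) and the Proxmox web UI (8006) from the workstation only
IN ACCEPT -source 192.168.X.X -p tcp -dport 22
IN ACCEPT -source 192.168.X.X -p tcp -dport 8006
```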
- SSH back into the Proxmox server (`ssh pve`)
- Download the Ubuntu 20.04 LTS cloud image:
```
wget http://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
```
- Create a new VM that will serve as a template:
```
qm create 9000
```
- Import the cloud image:
```
qm importdisk 9000 focal-server-cloudimg-amd64.img local-lvm
```
- You can delete the downloaded image if you wish:
```
rm focal-server-cloudimg-amd64.img
```
- Configure the VM:
```
qm set 9000 --name Ubuntu-20.04 --memory 2048 --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0 \
    --ide0 none,media=cdrom --ide2 local-lvm:cloudinit --boot c \
    --bootdisk scsi0 --serial0 socket --vga std --ciuser root \
    --sshkeys $HOME/.ssh/authorized_keys --ipconfig0 ip=dhcp
```
- Downloading the `gparted` ISO image to resize the disk:
```
wget -P /var/lib/vz/template/iso \
    https://downloads.sourceforge.net/gparted/gparted-live-1.3.0-1-amd64.iso
```
- Setting the first boot device to load `gparted`:
```
qm set 9000 --ide0 local:iso/gparted-live-1.3.0-1-amd64.iso,media=cdrom \
    --boot 'order=ide0;scsi0'
```
- Resizing the disk to whatever size you choose. To add 50 GB write `+50G`, for 15 GB `+15G`, and so on:
```
qm resize 9000 scsi0 +15G
```
In the Proxmox web interface you can now find VM 9000. All these settings can also be done via the interface. For now, start the VM and click on the Console in the node list.
- Once GParted loads, click `Fix` when the error prompt comes up.
- Select `/dev/sda1` and `Resize/Move`.
- Resize the partition all the way to the right so there is no free space left. `Apply` your settings and shut down the VM.
- Remove the GParted drive from the VM:
```
qm set 9000 --delete ide0
```
- You can now convert your virtual machine into a template:
```
qm template 9000
```
Now that we have a VM template, we can create a cluster with one master and two worker nodes.
In the Proxmox interface we can now clone our 9000 template.
- Right-click on the template, select `Clone` and then `Linked Clone` to save disk space. This option requires the template image to be present. `Full Clones` can function independently.
- Enter the name of the clone, e.g. `k3s-1`, `pve-k3s-1` or whatever you choose.
- Start the VM and repeat for as many clones as you wish. I only made 3 clones for 1 `master` and 2 `workers`. (A CLI alternative with `qm clone` is sketched after the Nmap output below.)
- Wait for DHCP to assign IPs to the VMs. You can find them through various methods. I used `Nmap` on my Solus workstation:
```
sudo eopkg it nmap
```
If you're using an Ubuntu or Debian based system you can install it with:
```
sudo apt install nmap
```
Then:
```
nmap -sP 192.168.1.0/24
```
Note: You will need to alter the IP address scheme to match yours.
Output should be something like this:
```
Starting Nmap 7.90 ( https://nmap.org ) at 2022-06-19 11:10 EEST
Nmap scan report for ralink.ralinktech.com (192.168.1.1)
Host is up (0.0039s latency).
Nmap scan report for 192.168.1.2
Host is up (0.035s latency).
Nmap scan report for 192.168.1.6
Host is up (0.000076s latency).
Nmap scan report for 192.168.1.7
Host is up (0.029s latency).
Nmap scan report for 192.168.1.8
Host is up (0.029s latency).
Nmap scan report for 192.168.1.9
Host is up (0.040s latency).
Nmap scan report for 192.168.1.10
Host is up (0.020s latency).
Nmap done: 256 IP addresses (7 hosts up) scanned in 3.40 seconds
```
If you run the command right after creating and starting the clones, they should be the last three entries. In my case: 192.168.1.8, 192.168.1.9 and 192.168.1.10.
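As mentioned above, the clones can also be created from the Proxmox shell instead of the web interface. A rough sketch, assuming the template kept ID 9000 and that VM IDs 101-103 are free on your node (the names match the ones used later):
```
# Create three linked clones of template 9000 and start them.
# Linked clones are the default when cloning a template; pass --full 1 for full clones.
for i in 1 2 3; do
    qm clone 9000 10$i --name pve-k3s-$i
    qm start 10$i
done
```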
- Now it's time to edit `$HOME/.ssh/config` on the workstation and add sections for each node:
```
# Host SSH client config: ~/.ssh/config
Host pve-k3s-1
    Hostname 192.168.X.X
    User root

Host pve-k3s-2
    Hostname 192.168.X.X
    User root

Host pve-k3s-3
    Hostname 192.168.X.X
    User root
```
- Test login to k3s-1 via ssh:
```
ssh pve-k3s-1
```
- Now it's time to install the k3s server on what is to be the `master` node:
```
curl -sfL https://get.k3s.io | sh -s - server --disable traefik
```
- Retrieve the cluster token:
```
cat /var/lib/rancher/k3s/server/node-token
```
- SSH into the `k3s-2` and `k3s-3` nodes and install the `worker` agents, replacing the value in `K3S_URL` with the master node's IP and `K3S_TOKEN` with the token value you copied from the previous command:
```
# Install K3s worker agent: fill in K3S_URL and K3S_TOKEN
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.X.X:6443 K3S_TOKEN=xxxx sh
```
As this is a home, low-budget project, the cluster IPs are assigned through DHCP. I'm not using static IPs, but my ISP / router settings do provide DHCP reservations. In my case, I used each VM's MAC address to reserve a slot in the 192.168.1.0 - 192.168.1.100 IP range.
To find the MAC address of each VM:
- Go to the Proxmox VE web interface.
- Click on the VM in the `Server View` list.
- Click on `Hardware` -> `Network Device`.
- Copy the `MAC Address` into your ISP's `DHCP Reservations` options and assign the desired IP.
In my case, pve-k3s-1, the master node, is reserved at 192.168.1.7.
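The MAC addresses can also be read straight from the Proxmox shell instead of the web interface; a quick sketch, again assuming the clones got VM IDs 101-103:
```
# Print the net0 line of each clone - it contains the MAC address
for i in 101 102 103; do
    qm config $i | grep ^net0
done
```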
If you don't do this, there is a risk of your nodes changing IPs within your network and bringing chaos into your configuration files. If the IPs change, hosts, Kubernetes configs and other settings that required the VM IPs will need to be reconfigured.
- Restart all machines and check that the IPs stayed the same by `ssh`-ing into each of them.
Exit the ssh sessions from the nodes and return to the workstation.
- Create the `kubectl` config file:
```
mkdir -p $HOME/.kube && \
    scp pve-k3s-1:/etc/rancher/k3s/k3s.yaml $HOME/.kube/config && \
    echo "export KUBECONFIG=$HOME/.kube/config" >> $HOME/.bash_profile && \
    export KUBECONFIG=$HOME/.kube/config
```
- Now edit the `$HOME/.kube/config` file and replace `127.0.0.1` with the IP address of the `master` node. Also find and replace `default` with the name of the cluster, `k3s-1`, and save the file (a sed sketch for this edit follows below).
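If you'd rather make that edit non-interactively, something like the following should do it (a sketch; 192.168.1.7 is my master node's reserved IP, adjust to yours):
```
# Point the kubeconfig at the master node and rename the default cluster/context/user to k3s-1
sed -i -e 's/127.0.0.1/192.168.1.7/g' -e 's/: default/: k3s-1/g' $HOME/.kube/config
```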
On my Solus workstation, I used the package from the official repository.
```
sudo eopkg it kubectl
```
If you're running an Ubuntu or Debian based system:
- Update the repositories and install the packages needed to use the Kubernetes `apt` repository:
```
sudo apt update
sudo apt-get install -y apt-transport-https ca-certificates curl
```
- Download the Google Cloud public signing key:
```
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
```
- Add the Kubernetes `apt` repository:
```
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```
- Update the `apt` package index with the new repository and install `kubectl`:
```
sudo apt-get update
sudo apt-get install -y kubectl
```
- Test that `kubectl` works:
```
kubectl get nodes
```
Output should be:
```
NAME    STATUS   ROLES                  AGE     VERSION
k3s-1   Ready    control-plane,master   5d18h   v1.23.6+k3s1
k3s-2   Ready    <none>                 5d17h   v1.23.6+k3s1
k3s-3   Ready    <none>                 5d17h   v1.23.6+k3s1
```
Don't worry if the worker nodes show `<none>` under ROLES. Everything is working.
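If you want the workers to show a proper role instead of `<none>`, you can optionally label them; this is purely cosmetic (standard `kubectl label` usage, the role name is my own choice):
```
# Add a worker role label to the agent nodes; it will show up under ROLES
kubectl label node k3s-2 node-role.kubernetes.io/worker=worker
kubectl label node k3s-3 node-role.kubernetes.io/worker=worker
```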
You now have a working k3s cluster running on Proxmox.
Consult the official helm docs to choose the best way to install helm on your particular workstation.
I chose the binary release, as there isn't an official package in the Solus repos.
If you're on an Ubuntu or Debian based system:
```
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
```
It's a good idea to add some chart repositories now:
- Add the `bitnami` repo:
```
helm repo add bitnami https://charts.bitnami.com/bitnami
```
- Add the `kubernetes at home` repo:
```
helm repo add k8s-at-home https://k8s-at-home.com/charts/
```
- Add the `helm stable` repo:
```
helm repo add stable https://charts.helm.sh/stable
```
Note that many charts in this repo are deprecated.
To search for charts in a particular repo:
```
helm search repo <repo name>
```
Replace `<repo name>` with the actual name of your chosen repository.
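For example, to look up the WordPress chart that will be used in a later step (just an illustration of the search syntax):
```
# Search the bitnami repo for charts matching "wordpress"
helm search repo bitnami/wordpress
```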
Now that the basic setup is complete, it's a good idea to create some snapshots so you can easily revert to this base-level configuration.
In the Proxmox web interface, click on each VM of the cluster and create snapshots:
- Shut down all machines and click `Take Snapshot` in the Snapshots tab. Name them something like `k3s_1_off`.
- Start all VMs and wait for them to load. Take another set of snapshots, this time with the machines running, and check `Include RAM` on each. Name them something like `k3s_1_on`.
Make sure to create new snapshots every time you make major changes.
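Snapshots can also be taken from the Proxmox shell with `qm snapshot`; a small sketch, assuming VM ID 101 for one of the clones:
```
# Snapshot VM 101; --vmstate 1 also saves the RAM contents (the "Include RAM" checkbox)
qm snapshot 101 k3s_1_on --vmstate 1
```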
I'll be using the ready-made Helm chart from Bitnami.
- Install the chart in a dedicated namespace:
```
helm install wordpress bitnami/wordpress --namespace wordpress --create-namespace
```
- Output after installation. Follow the given steps. It takes a few moments to get all services running:
```
Your WordPress site can be accessed through the following DNS name from within your cluster:

    wordpress.wordpress.svc.cluster.local (port 80)

To access your WordPress site from outside the cluster follow the steps below:

1. Get the WordPress URL by running these commands:

   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
         Watch the status with: 'kubectl get svc --namespace wordpress -w wordpress'

   export SERVICE_IP=$(kubectl get svc --namespace wordpress wordpress --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
   echo "WordPress URL: http://$SERVICE_IP/"
   echo "WordPress Admin URL: http://$SERVICE_IP/admin"

2. Open a browser and access WordPress using the obtained URL.

3. Login with the following credentials below to see your blog:

   echo Username: user
   echo Password: $(kubectl get secret --namespace wordpress wordpress -o jsonpath="{.data.wordpress-password}" | base64 -d)
```
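Before taking the next snapshot, it's worth checking that everything actually came up (standard `kubectl` commands against the namespace created above):
```
# The WordPress and MariaDB pods should reach Running/Ready, and the wordpress
# service should receive an external IP from k3s' built-in service load balancer
kubectl get pods --namespace wordpress
kubectl get svc --namespace wordpress wordpress
```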
- Take another snapshot in Proxmox to save the working WordPress environment.
