Documentation Restructure

This commit is contained in:
2026-02-27 04:02:06 -07:00
parent 52e6f83418
commit 554c04aa32
201 changed files with 378 additions and 47 deletions


@@ -0,0 +1,79 @@
---
tags:
- Containers
- Docker
- Containerization
---
**Purpose**:
This document outlines the general workflow of using Visual Studio Code to author and update custom containers and push them to a container registry hosted in Gitea. It references the `git-repo-updater` project throughout.
!!! note "Assumptions"
This document assumes you are authoring the containers in Microsoft Windows, and does not include the fine-tuning necessary to work in Linux or macOS environments. You are on your own if you want to author containers in Linux.
## Install Visual Studio Code
The management of the Gitea repositories, Dockerfile building, and pushing container images to the Gitea container registry will all involve using just Visual Studio Code. You can download Visual Studio Code from this [direct download link](https://code.visualstudio.com/docs/?dv=win64user).
## Configure Required Docker Extensions
You will need to locate and install the `Dev Containers`, `Docker`, and `WSL` extensions in Visual Studio Code before moving forward. The extensions may prompt you to install Docker Desktop onto your computer as part of the installation process. Do so, and once the Docker "Engine" is running you can move on to the next step.
!!! warning
You need to have Docker Desktop "Engine" running whenever working with containers, as it is necessary to build the images. VSCode will complain if it is not running.
## Add Gitea Container Registry
At this point, we need to add the Gitea container registry to Visual Studio Code so it can browse, pull, and push container images.
- Click the Docker icon on the left-hand toolbar
- Under "**Registries**", click "**Connect Registry...**"
- In the dropdown menu that appears, click "**Generic Registry V2**"
- Enter `https://git.bunny-lab.io/container-registry`
- Registry Username: `nicole.rappe`
- Registry Password or Personal Access Token: `Personal Access API Token You Generated in Gitea`
- You will now see a sub-listing named "**Generic Registry V2**"
- If you click the dropdown, you will see "**https://git.bunny-lab.io/container-registry**"
- Under this section, you will see any containers in the registry that you have access to; in this case, `container-registry/git-repo-updater`
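If you prefer working from a terminal, the same registry connection can be sanity-checked with a plain `docker login` (assuming Docker Desktop's engine is running); the username below is just the example account used in this document:
``` sh
# Log into the Gitea container registry from a terminal
docker login git.bunny-lab.io -u nicole.rappe
# When prompted for a password, paste the Personal Access API Token you generated in Gitea
```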
## Add Source Control Repository
Now it is time to pull down the repository where the container's core elements are stored on Gitea.
- Click the "**Source Control**" button on the left-hand menu then click the "**Clone Repository**" button
- Enter `https://git.bunny-lab.io/container-registry/git-repo-updater.git`
- Click the dropdown menu option "**Clone from URL**" then choose a location to locally store the repository on your computer
- When prompted with "**Would you like to open the cloned repository**", click the "**Open**" button
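If you would rather clone from a terminal instead of the Source Control menu, a minimal equivalent looks like this (assuming the `code` command was added to your PATH when VSCode was installed):
``` sh
# Clone the repository locally, then open the folder in Visual Studio Code
git clone https://git.bunny-lab.io/container-registry/git-repo-updater.git
code git-repo-updater
```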
## Making Changes
You will be presented with four files in this specific repository: `.env`, `docker-compose.yml`, `Dockerfile`, and `repo_watcher.sh`
- `.env` contains the environment variables passed to the container to tell it which ntfy server to talk to, which credentials to use with Gitea, and which repositories to download and push into production servers
- `docker-compose.yml` is an example docker-compose file that can be used in Portainer to deploy the server along with the contents of the `.env` file
- `Dockerfile` is the base of the container, telling docker what operating system to use and how to start the script in the container
- `repo_watcher.sh` is the script called by the `Dockerfile` which loops checking for updates in Gitea repositories that were configured in the `.env` file
### Push to Repository
When you make any changes, you will need to first commit them to the repository:
- Save all of the edited files
- Click the "**Source Control**" button in the toolbar
- Write a message about what you changed in the commit description field
- Click the "**Commit**" button
- Click the "**Sync Changes**" button that appears
- You may be presented with various dialogs; just click the equivalent of "**Yes/OK**" for each of them
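For reference, the buttons above map roughly to the following commands in VSCode's integrated terminal; the commit message is just a placeholder:
``` sh
# Stage, commit, and push the edited files
git add .
git commit -m "Describe what you changed"
git push
```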
### Build the Dockerfile
At this point, we need to build the Dockerfile, which takes all of the changes and packages them into a container image
- Navigate back to the file explorer inside of Visual Studio Code
- Right-click the `Dockerfile`, then click "**Build Image...**"
- In the "Tag Image As..." window, type in `git.bunny-lab.io/container-registry/git-repo-updater:latest`
- When you navigate back to the Docker menu, you will see a new image appear under the "**Images**" section
- You should see something similar to "**Latest - X Seconds Ago**", indicating this is the image you just built
- Delete the older image(s) by right-clicking on them and selecting "**Remove...**"
- Push the image to the container registry in Gitea by right-clicking the latest image, and selecting "**Push...**"
- In the dropdown menu that appears, enter `git.bunny-lab.io/container-registry/git-repo-updater:latest`
- You can confirm if it was successful by navigating to the [Gitea Container Webpage](https://git.bunny-lab.io/container-registry/-/packages/container/git-repo-updater/latest) and seeing if it says "**Published Now**" or "**Published 1 Minute Ago**"
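For reference, the "**Build Image...**" and "**Push...**" menu items roughly correspond to the following terminal commands, assuming Docker Desktop's engine is running and you are already logged into the registry:
``` sh
# Build the image from the Dockerfile in the repository folder and tag it
docker build -t git.bunny-lab.io/container-registry/git-repo-updater:latest .
# Push the tagged image to the Gitea container registry
docker push git.bunny-lab.io/container-registry/git-repo-updater:latest
```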
!!! warning "CRLF End of Line Sequences"
When you are editing files in the container's repository, you need to ensure that Visual Studio Code is editing that file in "**LF**" mode and not "**CRLF**". You can find this toggle at the bottom-right of the VSCode window. Simply clicking on the letters "**CRLF**" will let you toggle the file to "**LF**". If you do not make this change, the container will misinterpret the Dockerfile and/or scripts inside of the container and have runtime errors.
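One way to avoid the CRLF problem entirely is to enforce LF endings at the repository level with a `.gitattributes` file. This is an optional addition that is not part of the original repository; a minimal sketch:
``` sh
# Force LF line endings for every file tracked in the repository
printf '* text=auto eol=lf\n' > .gitattributes
git add .gitattributes
git commit -m "Enforce LF line endings"
```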
## Deploy the Container
You can now use the `.env` file along with the `docker-compose.yml` file inside of Portainer to deploy a stack using the container you just built / updated.
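If you ever deploy outside of Portainer, the same two files can be brought up directly with Docker Compose; a minimal sketch, assuming both files sit in the current folder (Portainer exposes the environment file as `stack.env`, so the copy below mirrors that name):
``` sh
# Mirror the .env file under the name referenced by docker-compose.yml
cp .env stack.env
# Pull the freshly pushed image and start the stack in the background
docker compose pull
docker compose up -d
```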


@@ -0,0 +1,114 @@
---
tags:
- Containers
- Docker
- Containerization
---
**Purpose**: Docker container running Alpine Linux that automates and improves upon much of the script described in the [Git Repo Updater](../../../../../scripts/bash/git-repo-updater.md) document. It offers the additional benefit of checking for updates every 5 seconds instead of every 60 seconds, accepts environment variables for credentials and notification settings, and can monitor any number of repositories.
### Deployment
You can find the up-to-date Gitea repository that includes the `docker-compose.yml` and `.env` files you need to deploy everything [here](https://git.bunny-lab.io/container-registry/-/packages/container/git-repo-updater/latest).
```jsx title="docker-compose.yml"
version: '3.3'
services:
git-repo-updater:
privileged: true
container_name: git-repo-updater
env_file:
- stack.env
image: git.bunny-lab.io/container-registry/git-repo-updater:latest
volumes:
- /srv/containers:/srv/containers
- /srv/containers/git-repo-updater/Repo_Cache:/root/Repo_Cache
restart: always
```
```jsx title=".env"
# Gitea Credentials
GIT_USERNAME=nicole.rappe
GIT_PASSWORD=USE-AN-APP-PASSWORD
# NTFY Push Notification Server URL
NTFY_URL=https://ntfy.cyberstrawberry.net/git-repo-updater
# Repository/Destination Pairs (Add as Many as Needed)
REPO_01="https://${GIT_USERNAME}:${GIT_PASSWORD}@git.bunny-lab.io/bunny-lab/docs.git,/srv/containers/material-mkdocs/docs/docs"
REPO_02="https://${GIT_USERNAME}:${GIT_PASSWORD}@git.bunny-lab.io/GitOps/servers.bunny-lab.io.git,/srv/containers/homepage-docker"
```
### Build / Development
If you want to learn how the container was assembled, the related build files are located [here](https://git.cyberstrawberry.net/container-registry/git-repo-updater).
```jsx title="Dockerfile"
# Use Alpine as the base image of the container
FROM alpine:latest
# Install necessary packages
RUN apk --no-cache add git curl rsync
# Add script
COPY repo_watcher.sh /repo_watcher.sh
RUN chmod +x /repo_watcher.sh
# Create Directory to store Repositories
RUN mkdir -p /root/Repo_Cache
# Start script (Alpine uses /bin/sh instead of /bin/bash)
CMD ["/bin/sh", "-c", "/repo_watcher.sh"]
```
```jsx title="repo_watcher.sh"
#!/bin/sh
# Function to process each repo-destination pair
process_repo() {
FULL_REPO_URL=$1
DESTINATION=$2
# Extract the URL without credentials for logging and notifications
CLEAN_REPO_URL=$(echo "$FULL_REPO_URL" | sed 's/https:\/\/[^@]*@/https:\/\//')
# Directory to hold the repository locally
REPO_DIR="/root/Repo_Cache/$(basename $CLEAN_REPO_URL .git)"
# Clone the repo if it doesn't exist, or navigate to it if it does
if [ ! -d "$REPO_DIR" ]; then
curl -d "Cloning: $CLEAN_REPO_URL" $NTFY_URL
git clone "$FULL_REPO_URL" "$REPO_DIR" > /dev/null 2>&1
fi
cd "$REPO_DIR" || exit
# Fetch the latest changes
git fetch origin main > /dev/null 2>&1
# Check if the local repository is behind the remote
LOCAL=$(git rev-parse @)
REMOTE=$(git rev-parse @{u})
if [ "$LOCAL" != "$REMOTE" ]; then
curl -d "Updating: $CLEAN_REPO_URL" $NTFY_URL
git pull origin main > /dev/null 2>&1
rsync -av --delete --exclude '.git/' ./ "$DESTINATION" > /dev/null 2>&1
fi
}
# Main loop
while true; do
# Iterate over each environment variable matching 'REPO_[0-9]+'
env | grep '^REPO_[0-9]\+=' | while IFS='=' read -r name value; do
# Split the value by comma and read into separate variables
OLD_IFS="$IFS" # Save the original IFS
IFS=',' # Set IFS to comma for splitting
set -- $value # Set positional parameters ($1, $2, ...)
REPO_URL="$1" # Assign first parameter to REPO_URL
DESTINATION="$2" # Assign second parameter to DESTINATION
IFS="$OLD_IFS" # Restore original IFS
process_repo "$REPO_URL" "$DESTINATION"
done
# Wait for 5 seconds before the next iteration
sleep 5
done
```


@@ -0,0 +1,63 @@
---
tags:
- Docker
- Portainer
- Containerization
---
### Update The Package Manager
We need to update the server before installing Docker.
=== "Ubuntu Server"
``` sh
sudo apt update
sudo apt upgrade -y
```
=== "Rocky Linux"
``` sh
sudo dnf check-update
```
### Deploy Docker
Install Docker, then deploy Portainer.
Convenience Script:
``` sh
curl -fsSL https://get.docker.com | sudo sh
dockerd-rootless-setuptool.sh install
```
Alternative Methods:
=== "Ubuntu Server"
``` sh
sudo apt install docker.io -y
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest # (1)
```
1. Be sure to set the `-v /srv/containers/portainer:/data` value to a safe place that gets backed up regularly.
=== "Rocky Linux"
``` sh
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker --now # (1)
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest # (2)
```
1. This is needed to ensure that docker starts automatically every time the server is turned on.
2. Be sure to set the `-v /srv/containers/portainer:/data` value to a safe place that gets backed up regularly.
### Configure Docker Network
I highly recommend setting up a [Dedicated Docker MACVLAN Network](../../../../reference/infrastructure/networking/docker-networking/docker-networking.md). You can use it to keep your containers on their own subnet.
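As a rough illustration of what that looks like, a MACVLAN network can also be created straight from the CLI; the subnet, gateway, parent interface, and network name below are placeholders that need to match your own LAN (the linked document covers the details):
``` sh
# Create a MACVLAN network so containers get their own addresses on the LAN
docker network create -d macvlan \
  --subnet=192.168.5.0/24 \
  --gateway=192.168.5.1 \
  -o parent=eth0 \
  docker-macvlan
```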
### Access Portainer WebUI
You will be able to access the Portainer WebUI at the following address: `https://<IP Address>:9443`
!!! warning
You need to be quick, as there is a timeout period after which you won't be able to onboard / provision Portainer and will be forced to restart its container. If this happens, you can find the container using `sudo docker container ls` followed by `sudo docker restart <ID of Portainer Container>`.
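If you do hit the timeout, recovery is just the two commands mentioned above; the container ID is a placeholder you copy from the first command's output:
``` sh
# Find the Portainer container's ID, then restart it to reset the onboarding window
sudo docker container ls
sudo docker restart <ID of Portainer Container>
```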


@@ -0,0 +1,193 @@
---
tags:
- Kubernetes
- Containerization
---
# Deploy Generic Kubernetes
The instructions outlined below assume you are deploying the environment using Ansible Playbooks either via Ansible's CLI or AWX.
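The playbooks below are intended to be run in numerical order against an inventory file like the `hosts` template at the end of this section; a minimal sketch of the CLI invocation, assuming the playbooks are saved under the titles shown below:
``` sh
# Run the playbooks in order against the inventory file
ansible-playbook -i hosts 01-deploy-k8s-user.yml
ansible-playbook -i hosts 02-install-k8s.yml
ansible-playbook -i hosts 03-configure-controllers.yml
ansible-playbook -i hosts 04-join-worker-nodes.yml
```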
### Deploy K8S User
```jsx title="01-deploy-k8s-user.yml"
- hosts: 'controller-nodes, worker-nodes'
become: yes
tasks:
- name: create the k8sadmin user account
user: name=k8sadmin append=yes state=present createhome=yes shell=/bin/bash
- name: allow 'k8sadmin' to use sudo without needing a password
lineinfile:
dest: /etc/sudoers
line: 'k8sadmin ALL=(ALL) NOPASSWD: ALL'
validate: 'visudo -cf %s'
- name: set up authorized keys for the k8sadmin user
authorized_key: user=k8sadmin key="{{item}}"
with_file:
- ~/.ssh/id_rsa.pub
```
### Install K8S
```jsx title="02-install-k8s.yml"
---
- hosts: "controller-nodes, worker-nodes"
remote_user: nicole
become: yes
become_method: sudo
become_user: root
gather_facts: yes
connection: ssh
tasks:
- name: Create containerd config file
file:
path: "/etc/modules-load.d/containerd.conf"
state: "touch"
- name: Add conf for containerd
blockinfile:
path: "/etc/modules-load.d/containerd.conf"
block: |
overlay
br_netfilter
- name: modprobe
shell: |
sudo modprobe overlay
sudo modprobe br_netfilter
- name: Set system configurations for Kubernetes networking
file:
path: "/etc/sysctl.d/99-kubernetes-cri.conf"
state: "touch"
- name: Add conf for containerd
blockinfile:
path: "/etc/sysctl.d/99-kubernetes-cri.conf"
block: |
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
- name: Apply new settings
command: sudo sysctl --system
- name: install containerd
shell: |
sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo systemctl restart containerd
- name: disable swap
shell: |
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
- name: install and configure dependencies
shell: |
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
- name: Create kubernetes repo file
file:
path: "/etc/apt/sources.list.d/kubernetes.list"
state: "touch"
- name: Add K8s Source
blockinfile:
path: "/etc/apt/sources.list.d/kubernetes.list"
block: |
deb https://apt.kubernetes.io/ kubernetes-xenial main
- name: Install Kubernetes
shell: |
sudo apt-get update
sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
sudo apt-mark hold kubelet kubeadm kubectl
```
### Configure ControlPlanes
```jsx title="03-configure-controllers.yml"
- hosts: controller-nodes
become: yes
tasks:
- name: Initialize the K8S Cluster
shell: kubeadm init --pod-network-cidr=10.244.0.0/16
args:
chdir: $HOME
creates: cluster_initialized.txt
- name: Create .kube directory
become: yes
become_user: k8sadmin
file:
path: /home/k8sadmin/.kube
state: directory
mode: 0755
- name: Copy admin.conf to user's kube config
copy:
src: /etc/kubernetes/admin.conf
dest: /home/k8sadmin/.kube/config
remote_src: yes
owner: k8sadmin
- name: Install the Pod Network
become: yes
become_user: k8sadmin
shell: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
args:
chdir: $HOME
- name: Get the token for joining the worker nodes
become: yes
become_user: k8sadmin
shell: kubeadm token create --print-join-command
register: kubernetes_join_command
- name: Output Join Command to the Screen
debug:
msg: "{{ kubernetes_join_command.stdout }}"
- name: Copy join command to local file.
become: yes
local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
```
### Join Worker Node(s)
```jsx title="04-join-worker-nodes.yml"
- hosts: worker-nodes
become: yes
gather_facts: yes
tasks:
- name: Copy join command from Ansible host to the worker nodes.
become: yes
copy:
src: /tmp/kubernetes_join_command
dest: /tmp/kubernetes_join_command
mode: 0777
- name: Join the Worker nodes to the cluster.
become: yes
command: sh /tmp/kubernetes_join_command
register: joined_or_not
```
### Host Inventory File Template
```jsx title="hosts"
[controller-nodes]
k8s-ctrlr-01 ansible_host=192.168.3.6 ansible_user=nicole
[worker-nodes]
k8s-node-01 ansible_host=192.168.3.4 ansible_user=nicole
k8s-node-02 ansible_host=192.168.3.5 ansible_user=nicole
[all:vars]
ansible_become_user=root
ansible_become_method=sudo
```


@@ -0,0 +1,226 @@
---
tags:
- Kubernetes
- RKE2
- Rancher
- Containerization
---
# Deploy RKE2 Cluster
Deploying a Rancher RKE2 Cluster is fairly straightforward. Just run the commands in order and pay attention to which steps apply to all machines in the cluster, the controlplanes, and the workers.
!!! note "Prerequisites"
This document assumes you are running **Ubuntu Server 24.04.3 LTS**. It also assumes that every node in the cluster has a unique hostname.
## All Cluster Nodes
Assume all commands are run as root moving forward (e.g. after `sudo su`).
### Run Updates
You will need to run these commands on every server that participates in the cluster, then perform a reboot of the server **PRIOR** to moving on to the next section.
``` sh
apt update && apt upgrade -y
apt install nfs-common iptables nano htop -y
echo "Adding 15 Second Delay to Ensure Previous Commands finish running"
sleep 15
apt autoremove -y
reboot
```
!!! tip
If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
## Initial ControlPlane Node
When you are starting a brand new cluster, you need to create what is referred to as the "Initial ControlPlane". This node is responsible for bootstrapping the entire cluster together in the beginning, and will eventually assist in handling container workloads and orchestrating operations in the cluster.
!!! warning
You only want to follow the instructions for the **initial** controlplane once. Running them on another machine to create additional controlplanes will cause that machine to try to set up a second, separate cluster, wreaking havoc. Instead, follow the instructions in the next section to add redundant controlplanes.
### Download & Run the Server Deployment Script
```
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
### Enable & Configure Services
``` sh
# Start and Enable the Kubernetes Service
systemctl enable --now rke2-server.service
# Symlink the Kubectl Management Command
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl
# Temporarily Export the Kubeconfig to manage the cluster from CLI during initial deployment.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
# Add a Delay to Allow Cluster to Finish Initializing / Get Ready
echo "Adding 60 Second Delay to Ensure Cluster is Ready - Run (kubectl get node) if the server is still not ready to know when to proceed."
sleep 60
# Check that the Cluster Node is Running and Ready
kubectl get node
```
!!! example
When the cluster is ready, you should see something like this when you run `kubectl get node`
This may be a good point to step away for 5 minutes, get a cup of coffee, and come back so it has a little extra time to be fully ready before moving on.
```
root@awx:/home/nicole# kubectl get node
NAME STATUS ROLES AGE VERSION
awx Ready control-plane,etcd,master 3m21s v1.26.12+rke2r1
```
### Install Helm, Cert-Manager, Rancher, and Longhorn
``` sh
# Install Helm
curl -L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-4 | bash
# Install Necessary Helm Repositories
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo add longhorn https://charts.longhorn.io
helm repo update
# Install the Cert-Manager CRDs
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.19.2/cert-manager.crds.yaml
# Install Cert-Manager via Helm (Jetstack chart)
helm upgrade -i cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace
# Install Rancher via Helm
helm upgrade -i rancher rancher-latest/rancher --create-namespace --namespace cattle-system --set hostname=rke2-cluster.bunny-lab.io --set bootstrapPassword=bootStrapAllTheThings --set replicas=1
# Install Longhorn via Helm
helm upgrade -i longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```
!!! example "Be Patient - Come back in 20 Minutes"
Rancher is going to take a while to fully set itself up, and things will appear broken in the meantime. Depending on how many resources you gave the cluster, it may take more or less time. A good ballpark is giving it at least 20 minutes to deploy itself before attempting to log into the webUI at https://rke2-cluster.bunny-lab.io.
If you want to keep an eye on the deployment progress, you need to run the following command: `KUBECONFIG=/etc/rancher/rke2/rke2.yaml kubectl get pods --all-namespaces`
The output should look like how it does below:
```
NAMESPACE NAME READY STATUS RESTARTS AGE
cattle-fleet-system fleet-controller-59cdb866d7-94r2q 1/1 Running 0 4m31s
cattle-fleet-system gitjob-f497866f8-t726l 1/1 Running 0 4m31s
cattle-provisioning-capi-system capi-controller-manager-6f87d6bd74-xx22v 1/1 Running 0 55s
cattle-system helm-operation-28dcp 0/2 Completed 0 109s
cattle-system helm-operation-f9qww 0/2 Completed 0 4m39s
cattle-system helm-operation-ft8gq 0/2 Completed 0 26s
cattle-system helm-operation-m27tq 0/2 Completed 0 61s
cattle-system helm-operation-qrgj8 0/2 Completed 0 5m11s
cattle-system rancher-64db9f48c-qm6v4 1/1 Running 3 (8m8s ago) 13m
cattle-system rancher-webhook-65f5455d9c-tzbv4 1/1 Running 0 98s
cert-manager cert-manager-55cf8685cb-86l4n 1/1 Running 0 14m
cert-manager cert-manager-cainjector-fbd548cb8-9fgv4 1/1 Running 0 14m
cert-manager cert-manager-webhook-655b4d58fb-s2cjh 1/1 Running 0 14m
kube-system cloud-controller-manager-awx 1/1 Running 5 (3m37s ago) 19m
kube-system etcd-awx 1/1 Running 0 19m
kube-system helm-install-rke2-canal-q9vm6 0/1 Completed 0 19m
kube-system helm-install-rke2-coredns-q8w57 0/1 Completed 0 19m
kube-system helm-install-rke2-ingress-nginx-54vgk 0/1 Completed 0 19m
kube-system helm-install-rke2-metrics-server-87zhw 0/1 Completed 0 19m
kube-system helm-install-rke2-snapshot-controller-crd-q6bh6 0/1 Completed 0 19m
kube-system helm-install-rke2-snapshot-controller-tjk5f 0/1 Completed 0 19m
kube-system helm-install-rke2-snapshot-validation-webhook-r9pcn 0/1 Completed 0 19m
kube-system kube-apiserver-awx 1/1 Running 0 19m
kube-system kube-controller-manager-awx 1/1 Running 5 (3m37s ago) 19m
kube-system kube-proxy-awx 1/1 Running 0 19m
kube-system kube-scheduler-awx 1/1 Running 5 (3m35s ago) 19m
kube-system rke2-canal-gm45f 2/2 Running 0 19m
kube-system rke2-coredns-rke2-coredns-565dfc7d75-qp64p 1/1 Running 0 19m
kube-system rke2-coredns-rke2-coredns-autoscaler-6c48c95bf9-fclz5 1/1 Running 0 19m
kube-system rke2-ingress-nginx-controller-lhjwq 1/1 Running 0 17m
kube-system rke2-metrics-server-c9c78bd66-fnvx8 1/1 Running 0 18m
kube-system rke2-snapshot-controller-6f7bbb497d-dw6v4 1/1 Running 4 (6m17s ago) 18m
kube-system rke2-snapshot-validation-webhook-65b5675d5c-tdfcf 1/1 Running 0 18m
longhorn-system csi-attacher-785fd6545b-6jfss 1/1 Running 1 (6m17s ago) 9m39s
longhorn-system csi-attacher-785fd6545b-k7jdh 1/1 Running 0 9m39s
longhorn-system csi-attacher-785fd6545b-rr6k4 1/1 Running 0 9m39s
longhorn-system csi-provisioner-8658f9bd9c-58dc8 1/1 Running 0 9m38s
longhorn-system csi-provisioner-8658f9bd9c-g8cv2 1/1 Running 0 9m38s
longhorn-system csi-provisioner-8658f9bd9c-mbwh2 1/1 Running 0 9m38s
longhorn-system csi-resizer-68c4c75bf5-d5vdd 1/1 Running 0 9m36s
longhorn-system csi-resizer-68c4c75bf5-r96lf 1/1 Running 0 9m36s
longhorn-system csi-resizer-68c4c75bf5-tnggs 1/1 Running 0 9m36s
longhorn-system csi-snapshotter-7c466dd68f-5szxn 1/1 Running 0 9m30s
longhorn-system csi-snapshotter-7c466dd68f-w96lw 1/1 Running 0 9m30s
longhorn-system csi-snapshotter-7c466dd68f-xt42z 1/1 Running 0 9m30s
longhorn-system engine-image-ei-68f17757-jn986 1/1 Running 0 10m
longhorn-system instance-manager-fab02be089480f35c7b2288110eb9441 1/1 Running 0 10m
longhorn-system longhorn-csi-plugin-5j77p 3/3 Running 0 9m30s
longhorn-system longhorn-driver-deployer-75fff9c757-dps2j 1/1 Running 0 13m
longhorn-system longhorn-manager-2vfr4 1/1 Running 4 (10m ago) 13m
longhorn-system longhorn-ui-7dc586665c-hzt6k 1/1 Running 0 13m
longhorn-system longhorn-ui-7dc586665c-lssfj 1/1 Running 0 13m
```
!!! note
Be sure to write down the "*bootstrapPassword*" variable for when you log into Rancher later. In this example, the password is `bootStrapAllTheThings`.
Also be sure to adjust the "*hostname*" variable to reflect the FQDN of the cluster. You can leave it default like this and change it upon first login if you want. This is important for the last step where you adjust DNS. The example given is `rke2-cluster.bunny-lab.io`.
### Log into webUI
At this point, you can log into the webUI at https://rke2-cluster.bunny-lab.io using the default `bootStrapAllTheThings` password (or whatever password you configured). You can change the password after logging in by navigating to **Home > Users & Authentication > "..." > Edit Config > "New Password" > Save**. From here, you can deploy more nodes, or deploy single-node workloads such as an Ansible AWX Operator.
### Rebooting the ControlNode
If you ever find yourself needing to reboot the ControlNode, and need to run kubectl CLI commands, you will need to run the command below to import the cluster credentials upon every reboot. Reboots should take much less time to get the cluster ready again as compared to the original deployments.
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
```
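If you would rather not type that after every reboot, one option is to persist the export in root's shell profile; a minimal sketch, assuming you manage the node as root:
``` sh
# Persist the kubeconfig export so kubectl works in every new shell
echo 'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' >> /root/.bashrc
```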
## Create Additional ControlPlane Node(s)
This is the part where you can add additional controlplane nodes to add additional redundancy to the RKE2 Cluster. This is important for high-availability environments.
### Download the Server Deployment Script
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
### Configure and Connect to Existing/Initial ControlPlane Node
``` sh
# Symlink the Kubectl Management Command
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl
# Manually Create a Rancher-Kubernetes-Specific Config File
mkdir -p /etc/rancher/rke2/
# Inject IP of Initial ControlPlane Node into Config File
echo "server: https://192.168.3.69:9345" > /etc/rancher/rke2/config.yaml
# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml
# Start and Enable the Kubernetes Service
systemctl enable --now rke2-server.service
```
!!! note
Be sure to change the IP address of the initial controlplane node provided in the example above to match your environment.
## Add Worker Node(s)
Worker nodes are the bread-and-butter of a Kubernetes cluster. They handle running container workloads and act as storage for the cluster (this can be configured to varying degrees based on your needs).
### Download the Worker Deployment Script
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -
```
### Configure and Connect to RKE2 Cluster
``` sh
# Manually Create a Rancher-Kubernetes-Specific Config File
mkdir -p /etc/rancher/rke2/
# Inject IP of Initial ControlPlane Node into Config File
echo "server: https://192.168.3.21:9345" > /etc/rancher/rke2/config.yaml
# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml
# Start and Enable the Kubernetes Service
systemctl enable --now rke2-agent.service
```
## DNS Server Record
You will need to set up some kind of DNS server record to point the FQDN of the cluster (e.g. `rke2-cluster.bunny-lab.io`) to the IP address of the Initial ControlPlane. This can be achieved in a number of ways, such as editing the Windows `HOSTS` file, Linux's `/etc/hosts` file, a Windows DNS Server "A" Record, or an NGINX/Traefik Reverse Proxy.
Once you have added the DNS record, you should be able to access the login page for the Rancher RKE2 Kubernetes cluster. Use the `bootstrapPassword` mentioned previously to log in, then change it immediately from the user management area of Rancher.
| TYPE OF ACCESS | FQDN | IP ADDRESS |
| -------------- | ------------------------------------- | ------------ |
| HOST FILE | rke2-cluster.bunny-lab.io | 192.168.3.69 |
| REVERSE PROXY | http://rke2-cluster.bunny-lab.io:80 | 192.168.5.29 |
| DNS RECORD | A Record: rke2-cluster.bunny-lab.io | 192.168.3.69 |
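As a concrete example of the HOST FILE option, appending a single line to `/etc/hosts` on a Linux workstation is enough for testing (on Windows, the equivalent file is `C:\Windows\System32\drivers\etc\hosts`):
``` sh
# Point the cluster FQDN at the Initial ControlPlane node
echo "192.168.3.69 rke2-cluster.bunny-lab.io" | sudo tee -a /etc/hosts
```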