# Migrating `docker-compose.yml` to Rancher RKE2 Cluster

You may be comfortable operating with Portainer or `docker-compose`, but there comes a point where you may want to migrate those existing workloads to a Kubernetes cluster as easily as possible. Luckily, there is a way to do this using a tool called "**Kompose**". Follow the instructions below to convert and deploy your existing `docker-compose.yml` into a Kubernetes cluster such as Rancher RKE2.

!!! info "RKE2 Cluster Deployment"

    This document assumes that you have an existing Rancher RKE2 cluster deployed. If not, you can deploy one by following the [Deploy RKE2 Cluster](https://docs.bunny-lab.io/Servers/Containerization/Kubernetes/Deployment/Rancher RKE2/) documentation.

    We also assume that the cluster within Rancher RKE2 is named `local`, which is the default name when setting up a Kubernetes cluster the way the documentation above describes.

## Installing Kompose

The first step involves downloading Kompose from https://kompose.io/installation and installing it onto your environment of choice.

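Kompose is available through most common package managers; a rough sketch is below (the Chocolatey package name matches the install path visible in the converted manifest later in this document, while the Linux release URL and version are assumptions to verify against https://kompose.io/installation):

```sh
# Windows, via Chocolatey
choco install kubernetes-kompose

# Linux (x86_64), via a release binary; adjust the version as needed
curl -L https://github.com/kubernetes/kompose/releases/download/v1.37.0/kompose-linux-amd64 -o kompose
chmod +x kompose
sudo mv kompose /usr/local/bin/kompose
```
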
Once Kompose is installed, save a copy of your `docker-compose.yml` file somewhere on-disk, then open up a terminal and run the following command:

```sh
kompose --file docker-compose.yml convert --stdout > ntfy-k8s.yaml
```

This will attempt to convert the `docker-compose.yml` file into a Kubernetes manifest YAML file. A before-and-after example can be seen below:

=== "(Original) docker-compose.yml"

    ``` yaml
    version: "2.1"
    services:
      ntfy:
        image: binwiederhier/ntfy
        container_name: ntfy
        command:
          - serve
        environment:
          - NTFY_ATTACHMENT_CACHE_DIR=/var/lib/ntfy/attachments
          - NTFY_BASE_URL=https://ntfy.bunny-lab.io
          - TZ=America/Denver # optional: Change to your desired timezone
        #user: UID:GID # optional: Set custom user/group or uid/gid
        volumes:
          - /srv/containers/ntfy/cache:/var/cache/ntfy
          - /srv/containers/ntfy/etc:/etc/ntfy
        ports:
          - 80:80
        restart: always
        networks:
          docker_network:
            ipv4_address: 192.168.5.45

    networks:
      default:
        external:
          name: docker_network
      docker_network:
        external: true
    ```

=== "(Converted) ntfy-k8s.yaml"

    ``` yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
        kompose.version: 1.37.0 (fb0539e64)
      labels:
        io.kompose.service: ntfy
      name: ntfy
    spec:
      ports:
        - name: "80"
          port: 80
          targetPort: 80
      selector:
        io.kompose.service: ntfy

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
        kompose.version: 1.37.0 (fb0539e64)
      labels:
        io.kompose.service: ntfy
      name: ntfy
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: ntfy
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
            kompose.version: 1.37.0 (fb0539e64)
          labels:
            io.kompose.service: ntfy
        spec:
          containers:
            - args:
                - serve
              env:
                - name: NTFY_ATTACHMENT_CACHE_DIR
                  value: /var/lib/ntfy/attachments
                - name: NTFY_BASE_URL
                  value: https://ntfy.bunny-lab.io
                - name: TZ
                  value: America/Denver
              image: binwiederhier/ntfy
              name: ntfy
              ports:
                - containerPort: 80
                  protocol: TCP
              volumeMounts:
                - mountPath: /var/cache/ntfy
                  name: ntfy-claim0
                - mountPath: /etc/ntfy
                  name: ntfy-claim1
          restartPolicy: Always
          volumes:
            - name: ntfy-claim0
              persistentVolumeClaim:
                claimName: ntfy-claim0
            - name: ntfy-claim1
              persistentVolumeClaim:
                claimName: ntfy-claim1

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        io.kompose.service: ntfy-claim0
      name: ntfy-claim0
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        io.kompose.service: ntfy-claim1
      name: ntfy-claim1
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
    ```

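Before importing the converted manifest, two optional sanity checks can save time (a sketch, assuming `kubectl` is configured against the RKE2 cluster). Note that Kompose replaced the original bind mounts with PersistentVolumeClaims (`ntfy-claim0` / `ntfy-claim1`), which will only bind if the cluster has a default StorageClass:

```sh
# Validate the manifest client-side without sending it to the cluster
kubectl apply --dry-run=client -f ntfy-k8s.yaml

# Confirm a default StorageClass exists so the PVCs can bind
kubectl get storageclass
```
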
## Deploy Workload into Rancher RKE2 Cluster

At this point, you need to import the YAML file you created into the Kubernetes cluster. This will occur in four sequential stages:

- Setting up a "**Project**" to logically organize your containers
- Setting up a "**Namespace**" for your container to isolate it from other containers in your Kubernetes cluster
- Importing the YAML file into the aforementioned namespace
- Configuring Ingress to allow external access to the container / service stack

### Create a Project

The purpose of the Project is to logically group your services together, under a name such as `Home Automation`, `Log Analysis Systems`, or `Network Tools`. You can create one by logging into your Rancher RKE2 cluster (e.g. https://rke2-cluster.bunny-lab.io). The Project name is unique to Rancher, is used purely for organizational purposes, and does not affect the namespaces / containers in any way.

- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces > "Create Project"**
- **Name**: <Friendly Name> (e.g. `Home Automation`)
- **Description**: <Useful Description for the Group of Services> (e.g. `Various services that automate things within Bunny Lab`)
- Click the "**Create**" button

### Create a Namespace within the Project

At this point, we need to create a namespace. A namespace isolates the networking, credentials, secrets, and storage of each service/stack from the others, which ensures that if someone exploits one of your services, they will not be able to move laterally into another service within the same Kubernetes cluster.

- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces > <ProjectName> > "Create Namespace"**
- The namespace should be named based on its operational context, such as `prod-ntfy` or `dev-ntfy`

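If you prefer working from a terminal, the same namespace can also be created with `kubectl` (a sketch, assuming your kubeconfig points at the RKE2 cluster); note that a namespace created this way initially sits outside of any Rancher Project and can be moved into one afterward from the Projects/Namespaces page:

```sh
kubectl create namespace prod-ntfy
```
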
### Import Converted YAML Manifest into Namespace

At this point, we can proceed to import the YAML file we generated at the beginning of this document.

- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces**
- At the top-right of the screen, click the upload / up-arrow button with the tooltip text "**Import YAML**"
- Click the "**Read from File**" button
- Navigate to your `ntfy-k8s.yaml` file (the name of your converted file may differ), then click the "**Open**" button
- On the top-right of the dialog box, open the "**Default Namespace**" dropdown menu and select the `prod-ntfy` namespace we created earlier
- Click the blue "**Import**" button at the bottom of the dialog box

!!! warning "Be Patient"

    This part of the process can take a while depending on the container stack and the complexity of the service, since Kubernetes has to download the container images and deploy them into newly spun-up pods. Be patient: click on the `prod-ntfy` namespace and check the "**Workloads**" tab; once the `ntfy` workload exists and shows as **Active**, you can move on to the next step.

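If you would rather skip the UI for this step, the same import can be done from a terminal (a sketch, assuming `kubectl` access to the cluster); the rollout command blocks until the Deployment reports ready:

```sh
# Apply the converted manifest into the prod-ntfy namespace
kubectl apply -f ntfy-k8s.yaml --namespace prod-ntfy

# Wait for the ntfy Deployment to finish rolling out
kubectl -n prod-ntfy rollout status deployment/ntfy
```
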
### Configuring Ingress

This final step within Kubernetes itself involves reconfiguring the service to listen via a "NodePort" instead of "ClusterIP". Don't worry, you do not have to fiddle with the ports that the container uses; this change is entirely within Kubernetes and does not alter the original `docker-compose.yml` ports of the container(s) you imported.

- Navigate to: **Clusters > `local` > Service Discovery > Services > ntfy**
- On the top-right, click the blue "**Show Configuration**" button
- On the bottom-right, click the blue "**Edit Config**" button
- On the bottom-right, click the "**Edit as YAML**" button
- Within the YAML editor, locate the `spec:` section; inside it you will see `type: ClusterIP` > change that to `type: NodePort` (see the sketch after this list)
- On the bottom-right, click the blue "**Save**" button and wait for the process to finish
- On the new page that appears, click on the `ntfy` service again
- Click on the "**Ports**" tab
- You will see a table column labeled "Node Port" with a number in the 30,000s, such as `30996`. This will be important later

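For reference, the relevant part of the edited Service should end up looking roughly like the sketch below; the `nodePort` value is assigned automatically by Kubernetes and will differ in your cluster:

``` yaml
spec:
  type: NodePort
  ports:
    - name: "80"
      port: 80
      targetPort: 80
      nodePort: 30996
```
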
!!! success "Verifying Access Before Configuring Reverse Proxy"

    At this point, you will want to verify that you can access the service via the cluster node IP addresses, such as the examples below. Every cluster node should route traffic to the container's service, and all of them will be used for load-balancing later in the reverse proxy configuration file.

    - http://192.168.3.69:30996
    - http://192.168.3.70:30996
    - http://192.168.3.71:30996
    - http://192.168.3.72:30996

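    A quick way to check every node from a terminal is a small `curl` loop over the node IPs (a sketch using the example addresses above; adjust the IPs and NodePort to match your cluster):

    ``` sh
    for ip in 192.168.3.69 192.168.3.70 192.168.3.71 192.168.3.72; do
      curl -s -o /dev/null -w "$ip: %{http_code}\n" "http://$ip:30996"
    done
    ```
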
## Configuring Reverse Proxy

If you were able to successfully reach the service directly via one of the cluster node IP addresses and its assigned NodePort, then you can proceed to creating a reverse proxy configuration file for the service. This will be very similar to the original `docker-compose.yml`-era reverse proxy configuration file, but with additional IP addresses to load-balance across the Kubernetes cluster nodes.

!!! info "Section Considerations"

    This section of the document does not (*currently*) cover the process of setting up health checks to ensure that the load-balanced server destinations in the reverse proxy are online before traffic is redirected to them. This is on my to-do list of things to implement to further harden the deployment process.

    This section also does not cover the process of setting up a reverse proxy itself. If you want to follow along with this document, you can deploy a Traefik reverse proxy via the [Traefik](https://docs.bunny-lab.io/Servers/Containerization/Docker/Compose/Traefik/) deployment documentation.

With the above considerations in mind, we just need to make some small changes to the existing Traefik configuration file so that it load-balances across every node of the cluster and high-availability functions as expected.

=== "(Original) ntfy.bunny-lab.io.yml"

    ``` yaml
    http:
      routers:
        ntfy:
          entryPoints:
            - websecure
          tls:
            certResolver: letsencrypt
          service: ntfy
          rule: Host(`ntfy.bunny-lab.io`)

      services:
        ntfy:
          loadBalancer:
            passHostHeader: true
            servers:
              - url: http://192.168.5.45:80
    ```

=== "(Updated) ntfy.bunny-lab.io.yml"

    ``` yaml
    http:
      routers:
        ntfy:
          entryPoints:
            - websecure
          tls:
            certResolver: letsencrypt
          service: ntfy
          rule: Host(`ntfy.bunny-lab.io`)

      services:
        ntfy:
          loadBalancer:
            passHostHeader: true
            servers:
              - url: http://192.168.3.69:30996
              - url: http://192.168.3.70:30996
              - url: http://192.168.3.71:30996
              - url: http://192.168.3.72:30996
    ```

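As a starting point for the health checks mentioned in the considerations above, Traefik's file provider supports a per-service `healthCheck` block that probes each server before sending it traffic. A minimal sketch is below (the path and timings are assumptions to tune for your service; the remaining `servers` entries stay as in the updated file):

``` yaml
http:
  services:
    ntfy:
      loadBalancer:
        passHostHeader: true
        healthCheck:
          path: /          # endpoint Traefik probes on each server
          interval: 10s    # how often each server is probed
          timeout: 3s      # how long to wait before a probe is marked failed
        servers:
          - url: http://192.168.3.69:30996
```
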
!!! success "Verify Access via Reverse Proxy"

    If everything worked, you should be able to access the service at https://ntfy.bunny-lab.io. If one of the cluster nodes goes offline, Kubernetes will automatically reschedule the workload onto another node, and the reverse proxy will send the web requests to the remaining nodes.
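
    To observe that failover for yourself, you can watch the pod get rescheduled while you take a node offline (a sketch, assuming `kubectl` access):

    ``` sh
    kubectl -n prod-ntfy get pods -o wide --watch
    ```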