diff --git a/Servers/Containerization/Kubernetes/Migrating Docker-Compose.yml to k8s.md b/Servers/Containerization/Kubernetes/Migrating Docker-Compose.yml to k8s.md
index 63a2ef0..e0ebeab 100644
--- a/Servers/Containerization/Kubernetes/Migrating Docker-Compose.yml to k8s.md
+++ b/Servers/Containerization/Kubernetes/Migrating Docker-Compose.yml to k8s.md
@@ -211,5 +211,56 @@ This final step within Kubernetes itself involves reconfiguring the container to
 ## Configuring Reverse Proxy
 If you were able to successfully verify access to the service by talking to it directly via one of the cluster node IP addresses and its assigned NodePort number, you can proceed to creating a reverse proxy configuration file for the service. This will be very similar to the original `docker-compose.yml` version of the reverse proxy configuration file, but with additional server addresses so that traffic is load-balanced across the Kubernetes cluster nodes.
 
-!!! info "Considerations"
-    This section of the document
\ No newline at end of file
+!!! info "Section Considerations"
+    This section of the document does not (*currently*) cover setting up health checks to confirm that the load-balanced server destinations in the reverse proxy are online before traffic is sent to them. This is on my to-do list of things to implement to further harden the deployment process; a rough sketch of what it might look like is included at the end of this section.
+
+    This section also does not cover setting up the reverse proxy itself. If you want to follow along with this document, you can deploy a Traefik reverse proxy via the [Traefik](https://docs.bunny-lab.io/Servers/Containerization/Docker/Compose/Traefik/) deployment documentation.
+
+With the above considerations in mind, we just need to make a few small changes to the existing Traefik configuration file so that it load-balances across every node of the cluster and high availability functions as expected.
+
+=== "(Original) ntfy.bunny-lab.io.yml"
+
+    ``` yaml
+    http:
+      routers:
+        ntfy:
+          entryPoints:
+            - websecure
+          tls:
+            certResolver: letsencrypt
+          service: ntfy
+          rule: Host(`ntfy.bunny-lab.io`)
+      services:
+        ntfy:
+          loadBalancer:
+            passHostHeader: true
+            servers:
+              - url: http://192.168.5.45:80
+    ```
+
+=== "(Updated) ntfy.bunny-lab.io.yml"
+
+    ``` yaml
+    http:
+      routers:
+        ntfy:
+          entryPoints:
+            - websecure
+          tls:
+            certResolver: letsencrypt
+          service: ntfy
+          rule: Host(`ntfy.bunny-lab.io`)
+
+      services:
+        ntfy:
+          loadBalancer:
+            passHostHeader: true
+            servers:
+              - url: http://192.168.3.69:30996
+              - url: http://192.168.3.70:30996
+              - url: http://192.168.3.71:30996
+              - url: http://192.168.3.72:30996
+    ```
+
+!!! success "Verify Access via Reverse Proxy"
+    If everything worked, you should be able to access the service at https://ntfy.bunny-lab.io. If one of the cluster nodes goes offline, Kubernetes will automatically reschedule the workload onto another cluster node, which will take over serving the web requests.
\ No newline at end of file
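+!!! info "(Sketch) Traefik Health Checks"
+    As a rough sketch of the health-check to-do item mentioned in the considerations above, Traefik's file provider supports a `healthCheck` block on the `loadBalancer`. This is not part of the deployment yet: the `/v1/health` path is an assumption based on ntfy's built-in health endpoint, and the interval/timeout values are arbitrary placeholders, so validate everything against your own deployment before relying on it.
+
+    ``` yaml
+    http:
+      services:
+        ntfy:
+          loadBalancer:
+            passHostHeader: true
+            healthCheck:
+              path: /v1/health   # Assumed ntfy health endpoint; confirm it exists on your version
+              interval: 10s      # Placeholder probe interval
+              timeout: 3s        # Placeholder probe timeout
+            servers:
+              - url: http://192.168.3.69:30996
+              - url: http://192.168.3.70:30996
+              - url: http://192.168.3.71:30996
+              - url: http://192.168.3.72:30996
+    ```
+
+    With a health check in place, Traefik should stop sending traffic to any node whose probe fails, rather than blindly round-robining across all four NodePort addresses.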