**Purpose**:
Deploying a Rancher RKE2 Cluster-based Ansible AWX Operator server. This can scale up to a larger, more enterprise-grade environment if needed.
!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later with at least 8GB of memory, 4 CPU cores, and 64GB of storage.
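    A quick way to confirm the host meets these minimums (standard Ubuntu tooling, nothing AWX-specific) is shown below:
    ``` sh
    # Check CPU core count, total memory, and free disk space on the root filesystem
    nproc
    free -h
    df -h /
    ```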
## Deploy Rancher RKE2 Cluster
You will need to deploy a [Rancher RKE2 Cluster](https://docs.bunny-lab.io/Containers/Kubernetes/Rancher%20RKE2/Rancher%20RKE2%20Cluster/) on an Ubuntu Server-based virtual machine. After this phase, you can focus on the Ansible AWX-specific deployment. A single ControlPlane node is all you need to set up AWX; additional infrastructure can be added after-the-fact.
!!! tip
    If this is a virtual machine, the best time to take a checkpoint / snapshot of the VM is right after deploying the RKE2 cluster and validating that it functions, in case you need to roll back the server(s) if something is misconfigured during the AWX deployment.
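    A minimal way to validate the cluster before taking that snapshot (assuming the default RKE2 kubeconfig path) is to confirm the node reports as `Ready`:
    ``` sh
    export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
    # The ControlPlane node should show a "Ready" status
    kubectl get nodes
    ```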
## Server Configuration
The AWX deployment consists of three YAML files that configure the AWX containers as well as the NGINX ingress networking. You will need all of them in the same folder for the deployment to be successful. For the purpose of this example, we will put all of them into a folder located at `/awx`.
``` sh
# Make the deployment folder
mkdir -p /awx
cd /awx
# Run a command to adjust open file limits in Ubuntu Server (just-in-case)
ulimit -n 4096
```
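Note that `ulimit -n` only raises the open-file limit for the current shell session. If you want the higher limit to persist across reboots and new sessions, one optional approach (a sketch, not strictly required for this deployment) is to add entries to `/etc/security/limits.conf`:
``` sh
# Optional: persist a higher open-file limit for all users (takes effect on the next login)
echo '* soft nofile 4096' | sudo tee -a /etc/security/limits.conf
echo '* hard nofile 4096' | sudo tee -a /etc/security/limits.conf
```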
### Create AWX Deployment Configuration Files
You will need to create these files, all in the same directory, using the content of the examples below. Be sure to replace values such as the `host: awx.bunny-lab.io` entry under `spec.rules` in the `ingress.yml` file with a hostname you can point a DNS server / record to.
=== "awx.yml"
    ```yaml title="/awx/awx.yml"
    apiVersion: awx.ansible.com/v1beta1
    kind: AWX
    metadata:
      name: awx
    spec:
      service_type: ClusterIP
    ```
=== "ingress.yml"
    ```yaml title="/awx/ingress.yml"
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress
    spec:
      rules:
        - host: awx.bunny-lab.io
          http:
            paths:
              - pathType: Prefix
                path: "/"
                backend:
                  service:
                    name: awx-service
                    port:
                      number: 80
    ```
=== "kustomization.yml"
    ```yaml title="/awx/kustomization.yml"
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - github.com/ansible/awx-operator/config/default?ref=2.10.0
      - awx.yml
      - ingress.yml
    images:
      - name: quay.io/ansible/awx-operator
        newTag: 2.10.0
    namespace: awx
    ```
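Before deploying anything, you can optionally render the combined manifests locally to confirm the three files are well-formed and reference each other correctly. This is just a sanity check; it fetches the remote `awx-operator` resources, so it requires internet access:
``` sh
cd /awx
# Build the kustomization without applying it; errors here usually point to a typo in one of the YAML files
kubectl kustomize .
```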
## Ensure the Kubernetes Cluster is Ready
Check that the status of the cluster is ready by running the following commands; the output should appear similar to the [Rancher RKE2 Example](https://docs.bunny-lab.io/Containers/Kubernetes/Rancher%20RKE2/Rancher%20RKE2%20Cluster/#install-helm-rancher-certmanager-jetstack-rancher-and-longhorn):
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get pods --all-namespaces
```
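If you plan to run `kubectl` regularly from this host, it can be convenient to persist the `KUBECONFIG` variable so you do not have to export it in every new shell (an optional convenience, assuming a bash login shell):
``` sh
# Make the RKE2 kubeconfig the default for future sessions
echo 'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' >> ~/.bashrc
source ~/.bashrc
```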
## Deploy AWX using Kustomize
Now it is time to tell Kubernetes to read the configuration files using Kustomize (*built into newer versions of Kubernetes*) to deploy AWX into the cluster.
!!! warning "Be Patient"
    The AWX deployment process can take a while. Use the commands in the [Troubleshooting](https://docs.bunny-lab.io/Containers/Kubernetes/Rancher%20RKE2/AWX%20Operator/Ansible%20AWX%20Operator/#troubleshooting) section if you want to track the progress after running the commands below.
    If you get an error that looks like the one below, re-run the `kubectl apply -k .` command a second time after waiting about 10 seconds. The second time, the error should be gone.
    ``` sh
    error: resource mapping not found for name: "awx" namespace: "awx" from ".": no matches for kind "AWX" in version "awx.ansible.com/v1beta1"
    ensure CRDs are installed first
    ```
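    This error appears because the AWX custom resource is applied before the operator's CRDs have finished registering. Re-running the apply resolves it; alternatively, you can wait for the CRD to become established and then re-apply (a hedged alternative, assuming the standard `awxs.awx.ansible.com` CRD name shipped by the operator):
    ``` sh
    # Wait until the AWX CRD is registered, then re-apply the kustomization
    kubectl wait --for=condition=established crd/awxs.awx.ansible.com --timeout=120s
    kubectl apply -k .
    ```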
    To check on the progress of the deployment, you can run the following command: `kubectl get pods -n awx`
    You will know that AWX is ready to be accessed in the next step if the output looks like below:
    ```
    NAME                                               READY   STATUS    RESTARTS        AGE
    awx-operator-controller-manager-7b9ccf9d4d-cnwhc   2/2     Running   2 (3m41s ago)   9m41s
    awx-postgres-13-0                                  1/1     Running   0               6m12s
    awx-task-7b5f8cf98c-rhrpd                          4/4     Running   0               4m46s
    awx-web-6dbd7df9f7-kn8k2                           3/3     Running   0               93s
    ```
``` sh
# Navigate to the deployment folder and apply the Kustomize configuration
cd /awx
kubectl apply -k .
```
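If you prefer the shell to block until the operator reports as available, rather than repeatedly polling `kubectl get pods`, something like the following should work (the deployment name is taken from the example pod output above, so adjust it if yours differs):
``` sh
# Wait for the operator deployment to become Available; the operator then creates the postgres/web/task pods
kubectl wait --namespace awx --for=condition=Available deployment/awx-operator-controller-manager --timeout=600s
# Watch the remaining AWX pods come up (Ctrl+C to stop watching)
kubectl get pods -n awx -w
```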
## Access the AWX WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible from your network (such as your personal computer) unless you set up a DNS record pointing to it. In the example above, you would have an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 Cluster host.
The RKE2 Cluster will translate `awx.bunny-lab.io` to the AWX web-service container(s) automatically. SSL certificates are not covered in this documentation, but suffice it to say, they can be configured on another reverse proxy such as Traefik or via Cert-Manager / JetStack. The process of setting this up goes beyond the scope of this document.
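If you want to test the ingress before creating a DNS record, you can fake the hostname resolution from your workstation with `curl` (the `192.168.1.100` address is only a placeholder; replace it with the internal IP of your RKE2 host):
``` sh
# Send a request to the RKE2 ingress while pretending awx.bunny-lab.io resolves to the node
curl --resolve awx.bunny-lab.io:80:192.168.1.100 http://awx.bunny-lab.io
```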
!!! success "Accessing the AWX WebUI"
    If you have gotten this far, you should now be able to access AWX via the WebUI and log in.
    - AWX WebUI: https://awx.bunny-lab.io
    ![Ansible AWX WebUI](awx.png)
AWX generates its own secure admin password the first time it is deployed. You can run the following command to retrieve it.
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
## Troubleshooting
You may want to track the deployment process to verify that it is actually doing something. There are a few Kubernetes commands that can assist with this, listed below.
!!! failure "Nested Reverse Proxy Issues"
    My homelab environment primarily uses a Traefik reverse proxy to handle all communications, but AWX currently has issues running behind Traefik/NGINX, and documentation outlining how to fix this does not exist here yet. For the time being, when you create the DNS record, use an `A` record pointing directly to the IP address of the Virtual Machine running the Rancher / AWX Operator cluster.
### Show the container deployment progress for AWX
```
kubectl get pods -n awx
```
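If a pod is stuck in a `Pending` or `CrashLoopBackOff` state, the namespace events and a pod description usually explain why. The pod name below is only an example; substitute one from your own `kubectl get pods -n awx` output:
```
# Show recent events in the awx namespace, oldest first
kubectl get events -n awx --sort-by=.metadata.creationTimestamp
# Inspect a specific pod in detail (replace the example pod name with a real one from your cluster)
kubectl describe pod awx-web-6dbd7df9f7-kn8k2 -n awx
```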
### AWX-Manager Deployment Logs
You may want to track the internal logs of the `awx-manager` container, which is responsible for the majority of the automated deployment of AWX. You can do so by running the command below.
```
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
!!! note
    The `-6c58d59d97-qj2n2` noted at the end of the Kubernetes "Pod" mentioned in the command above is randomized. You will need to change it based on the name shown when running the `kubectl get pods -n awx` command.
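    If you would rather not look up the random suffix each time, `kubectl` can also resolve the pod through its parent deployment (a convenience sketch using the deployment name shown in the pod listing above):
    ```
    # Follow the awx-manager logs via the deployment instead of a specific pod name
    kubectl logs -n awx deployment/awx-operator-controller-manager -c awx-manager -f
    ```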