Restructured Documentation

2023-12-21 05:28:41 -07:00
parent fc878acaa4
commit bfbdbc147c
59 changed files with 2 additions and 2 deletions

@ -0,0 +1,28 @@
# WinRM (Kerberos)
**Name**: "Kerberos WinRM"
```jsx title="Input Configuration"
fields:
  - id: username
    type: string
    label: Username
  - id: password
    type: string
    label: Password
    secret: true
  - id: krb_realm
    type: string
    label: Kerberos Realm (Domain)
required:
  - username
  - password
  - krb_realm
```
```jsx title="Injector Configuration"
extra_vars:
  ansible_user: '{{ username }}'
  ansible_password: '{{ password }}'
  ansible_winrm_transport: kerberos
  ansible_winrm_kerberos_realm: '{{ krb_realm }}'
```
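The injector only supplies the username, password, transport, and realm. Connection details such as the WinRM port are normally set on the inventory or host instead; a minimal sketch of matching host variables (assuming WinRM over HTTPS on port 5986, as used in the inventories later in this documentation) might look like this:
```jsx title="Example Host Variables (sketch)"
ansible_connection: winrm
ansible_port: 5986
ansible_winrm_server_cert_validation: ignore
```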

@ -0,0 +1,36 @@
---
sidebar_position: 1
---
# AWX Credential Types
When interacting with devices via Ansible playbooks, you need to provide the playbook with credentials it can use to connect to the device. Examples include domain credentials for Windows devices and local sudo-capable user credentials for Linux.
## Windows-based Credentials
### NTLM
NTLM-based authentication is not the most secure method of remotely running playbooks on Windows devices, but the traffic is still encrypted, using the self-signed SSL certificate the device creates when WinRM is provisioned correctly.
```jsx title="(NTLM) nicole.rappe@MOONGATE.LOCAL"
Credential Type: Machine
Username: nicole.rappe@MOONGATE.LOCAL
Password: <Encrypted>
Privilege Escalation Method: runas
Privilege Escalation Username: nicole.rappe@MOONGATE.LOCAL
```
### Kerberos
Kerberos-based authentication is generally considered the most secure method of authenticating with Windows devices, but it is trickier to configure because it requires additional setup inside the AWX cluster to function properly. At this time, there is no working Kerberos documentation.
```jsx title="(Kerberos WinRM) nicole.rappe"
Credential Type: Kerberos WinRM
Username: nicole.rappe
Password: <Encrypted>
Kerberos Realm (Domain): MOONGATE.LOCAL
```
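To verify that a Windows machine credential actually authenticates before building larger jobs, a small connectivity-check playbook can be run as a job template against the target inventory. Below is a minimal sketch using the `ansible.windows.win_ping` module; the playbook itself is illustrative and not part of this setup.
```jsx title="Example: WinRM connectivity test (sketch)"
- hosts: all
  gather_facts: false
  tasks:
    - name: Verify WinRM authentication and connectivity
      ansible.windows.win_ping:
```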
## Linux-based Credentials
```jsx title="(LINUX) nicole"
Credential Type: Machine
Username: nicole
Password: <Encrypted>
Privilege Escalation Method: sudo
Privilege Escalation Username: root
```
:::note
`WinRM / Kerberos`-based credentials do not currently work as expected. For now, use either `Linux` or `NTLM`-based credentials.
:::

@ -0,0 +1,39 @@
# Host Inventories
When you deploy playbooks, you target hosts that exist in "Inventories". These inventories consist of a list of hosts and their corresponding IP addresses, as well as any host-specific variables that need to be declared for the playbook to run.
```jsx title="(NTLM) MOON-HOST-01"
Name: (NTLM) MOON-HOST-01
Host(s): MOON-HOST-01 @ 192.168.3.4
Variables:
---
ansible_connection: winrm
ansible_winrm_kerberos_delegation: false
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```
```jsx title="(NTLM) CyberStrawberry - Windows Hosts"
Name: (NTLM) CyberStrawberry - Windows Hosts
Host(s): MOON-HOST-01 @ 192.168.3.4
Host(s): MOON-HOST-02 @ 192.168.3.5
Variables:
---
ansible_connection: winrm
ansible_winrm_kerberos_delegation: false
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```
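For reference, the same hosts and variables expressed as a plain Ansible YAML inventory (outside of AWX) would look roughly like the sketch below; the group name `windows` is arbitrary and only used for illustration.
```jsx title="Equivalent standalone YAML inventory (sketch)"
all:
  children:
    windows:
      hosts:
        MOON-HOST-01:
          ansible_host: 192.168.3.4
        MOON-HOST-02:
          ansible_host: 192.168.3.5
      vars:
        ansible_connection: winrm
        ansible_winrm_kerberos_delegation: false
        ansible_port: 5986
        ansible_winrm_transport: ntlm
        ansible_winrm_server_cert_validation: ignore
```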
```jsx title="(LINUX) Unsorted Devices"
Name: (LINUX) Unsorted Devices
Host(s): CLSTR-COMPUTE-01 @ 192.168.3.50
Host(s): CLSTR-COMPUTE-02 @ 192.168.3.51
Variables:
---
None
```

@ -0,0 +1,16 @@
# AWX Projects
When you want to run playbooks on hosts in your inventories, you need to host the playbooks in a "Project". A project can be as simple as a connection to Gitea/GitHub where the playbooks are stored in a repository.
```jsx title="Ansible Playbooks (Gitea)"
Name: Ansible Playbooks (Gitea)
Source Control Type: Git
Source Control URL: https://git.cyberstrawberry.net/nicole.rappe/ansible.git
Source Control Credential: CyberStrawberry Gitea
```
```jsx title="Resources > Credentials > CyberStrawberry Gitea"
Name: CyberStrawberry Gitea
Credential Type: Source Control
Username: nicole.rappe
Password: <Encrypted> #If you use MFA on Gitea/Github, use an App Password instead for the project.
```
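Job templates later in this documentation reference playbooks by their path relative to the root of this repository (for example, `playbooks/Windows/Hyper-V/Deploy-VM.yml`), so the repository is assumed to be laid out roughly like this:
```
ansible.git (repository root)
└── playbooks/
    └── Windows/
        └── Hyper-V/
            └── Deploy-VM.yml
```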

@ -0,0 +1,100 @@
---
sidebar_position: 1
---
# Deploy AWX on RKE2 Cluster
Deploying Ansible AWX on a Rancher RKE2 cluster may be overkill for a homelab; however, the configuration below lets you scale the cluster as your needs grow and gives you experience with a more enterprise-ready setup.
![Ansible AWX WebUI](awx.png)
:::note Prerequisites
This document assumes you are running **Ubuntu Server 20.04** or later with at least 8GB of memory and 4 CPU cores.
:::
## Deploy Rancher RKE2 Cluster
You will need to follow the [Rancher RKE2 Cluster Deployment](https://docs.cyberstrawberry.net/Container%20Documentation/Kubernetes/Rancher%20RKE2/Rancher%20RKE2%20Cluster)
guide to set up the cluster itself first. After that, you can focus on the AWX-specific aspects of the deployment. If you are only deploying AWX in a small environment, a single ControlPlane node is all you need.
:::tip
If this is a virtual machine, the best time to take a checkpoint / snapshot is right after deploying the RKE2 cluster and validating that it functions, so you can roll the server(s) back if you accidentally misconfigure something later.
:::
## Server Configuration
The AWX deployment consists of three YAML files that configure the AWX containers as well as the NGINX ingress networking. All of them need to be in the same folder for the deployment to succeed. For this example, we will put them in a folder located at `/awx`.
### Make the deployment folder
```
mkdir -p /awx
cd /awx
```
### Adjust the open file limit in Ubuntu Server (just in case)
```
ulimit -n 4096
```
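Note that `ulimit -n` only raises the limit for the current shell session. If you want the higher limit to survive reboots and new logins, one option (an assumption, not something this deployment strictly requires) is to set it in `/etc/security/limits.conf`:
```
# /etc/security/limits.conf — raise the open-file limit for all users
*  soft  nofile  4096
*  hard  nofile  4096
```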
### Create the AWX deployment configuration files
Create all of these files in the same directory using the contents of the examples below. Be sure to change values such as the `host: ansible.cyberstrawberry.net` entry under `spec.rules` in the `awx-ingress.yml` file to a hostname you can point a DNS record at.
```jsx title="nano /awx/kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
  - awx-ingress.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```jsx title="nano /awx/awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: ClusterIP
```
```jsx title="nano /awx/awx-ingress.yml"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-ingress
spec:
  rules:
    - host: ansible.cyberstrawberry.net
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: awx-service
                port:
                  number: 80
```
## Deploy AWX using Kustomize
Now it is time to tell Rancher / Kubernetes to read the configuration files using Kustomize (built into newer versions of Kubernetes) and deploy AWX into the cluster. Be sure that you are still in the `/awx` folder before running this command. **Be patient:** the AWX deployment process can take a while. Go grab a cup of coffee and use the commands in the [Troubleshooting](https://docs.cyberstrawberry.net/Container%20Documentation/Kubernetes/Rancher%20RKE2/RKE2%20Ansible%20AWX#troubleshooting) section if you want to track the progress more directly.
```
kubectl apply -k .
```
:::caution
If you get any errors mentioning "**CRD**" in the output, wait about 10 seconds and re-run the `kubectl apply -k .` command; the error should be gone on the second run.
:::
## Access the WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible from your network (such as your home computer) unless you set up a DNS record pointing to it. In the example above, you would create an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 cluster host. The RKE2 cluster will route `ansible.cyberstrawberry.net` to the AWX web-service container(s) automatically. SSL certificates are not covered in this documentation; suffice it to say, they can be configured on another reverse proxy such as Traefik or via cert-manager / JetStack, but setting that up is outside the scope of this document.
- AWX WebUI: https://ansible.cyberstrawberry.net
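If you just want to test access before touching your DNS server, a temporary hosts-file entry on your workstation pointing the hostname at the RKE2 node works as well (the IP below is a placeholder):
```
# /etc/hosts (Linux/macOS) or C:\Windows\System32\drivers\etc\hosts (Windows)
<RKE2-node-IP>   ansible.cyberstrawberry.net
```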
### Retrieving the Auto-Generated Admin Password
AWX generates its own secure admin password the first time it is set up. This password is stored as a *secret* in Kubernetes. You can view it in the Rancher WebUI of the RKE2 cluster, as long as you have a DNS record matching the hostname you assigned to Rancher when you first signed in.
- Rancher WebUI: https://awx-cluster.cyberstrawberry.net
- Alternatively, you can run the following command to pull the admin password from the Kubernetes secret directly:
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
## Troubleshooting
You may want to track the deployment process to verify that it is actually doing something. The Kubernetes commands below can assist with this.
### Show the container deployment progress for AWX
```
kubectl get pods -n awx
```
### AWX-Manager Deployment Logs
You may want to follow the internal logs of the `awx-manager` container, which is responsible for the majority of the automated AWX deployment. You can do so by running the command below.
```
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
:::note
The `-6c58d59d97-qj2n2` suffix at the end of the pod name in the command above is randomized. Change it to match the name shown by the `kubectl get pods -n awx` command.
:::
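To avoid having to look up the randomized suffix at all, you can also point `kubectl logs` at the operator Deployment instead of a specific pod (assuming the Deployment is named `awx-operator-controller-manager`, which the pod name above suggests):
```
kubectl logs -n awx deployment/awx-operator-controller-manager -c awx-manager -f
```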

@ -0,0 +1,21 @@
# Templates
A template is a pre-constructed combination of an inventory, a playbook, and credentials that performs a specific kind of task against a predefined group of hosts.
```jsx title="Deploy Hyper-V VM"
Name: Deploy Hyper-V VM
Inventory: (NTLM) MOON-HOST-01
Playbook: playbooks/Windows/Hyper-V/Deploy-VM.yml
Credentials: (NTLM) nicole.rappe@MOONGATE.local
Execution Environment: AWX EE (latest)
Project: Ansible Playbooks (Gitea)
Variables:
---
random_number: "{{ lookup('password', '/dev/null chars=digits length=4') }}"
random_letters: "{{ lookup('password', '/dev/null chars=ascii_uppercase length=4') }}"
vm_name: "NEXUS-TEST-{{ random_number }}{{ random_letters }}"
vm_memory: "8589934592" #Measured in Bytes (e.g. 8GB)
vm_storage: "68719476736" #Measured in Bytes (e.g. 64GB)
iso_path: "C:\\ubuntu-22.04-live-server-amd64.iso"
vm_folder: "C:\\Virtual Machines\\{{ vm_name_fact }}"
```
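The `Deploy-VM.yml` playbook itself is not part of this commit. Purely to illustrate how a playbook might consume the template variables above, here is a hypothetical sketch that creates the VM with PowerShell via `ansible.windows.win_shell`; the task logic and names are assumptions, not the actual playbook.
```jsx title="Hypothetical Deploy-VM.yml sketch"
- hosts: all
  gather_facts: false
  tasks:
    - name: Record the generated VM name (the template's vm_folder references vm_name_fact)
      ansible.builtin.set_fact:
        vm_name_fact: "{{ vm_name }}"

    - name: Create the Hyper-V VM and attach the installer ISO (illustrative only)
      ansible.windows.win_shell: |
        New-VM -Name "{{ vm_name }}" -Generation 2 -MemoryStartupBytes {{ vm_memory }} `
          -NewVHDPath "{{ vm_folder }}\{{ vm_name }}.vhdx" -NewVHDSizeBytes {{ vm_storage }} `
          -Path "{{ vm_folder }}"
        Add-VMDvdDrive -VMName "{{ vm_name }}" -Path "{{ iso_path }}"
```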

Binary file not shown.
