Additional Doc Restructure
operations/automation/ansible/awx/awx-kerberos-implementation.md
## Kerberos Implementation

You may find that you need to run playbooks on domain-joined Windows devices using Kerberos authentication. Some extra steps are required to set this up after you have successfully deployed the AWX Operator into Kubernetes.

### Configure Windows Devices

You will need to prepare the Windows devices so that they can be remotely controlled by Ansible playbooks. Run the following PowerShell script on all of the devices that will be managed by the Ansible AWX environment.

- [WinRM Prerequisite Setup Script](../enable-winrm-on-windows-devices.md)

### Create an AWX Instance Group

At this point, we need to make an "Instance Group" for the AWX Execution Environments that will use both the Kerberos configuration and the custom DNS records defined by the ConfigMap files created below. Reference information was found [here](https://github.com/kurokobo/awx-on-k3s/blob/main/tips/use-kerberos.md#create-container-group). This group allows for persistence across playbooks/templates, so that if you establish Kerberos authentication in one playbook, it will persist through the entire job's workflow.

Create the following files in the `/awx` folder on the AWX Operator server you deployed earlier (when setting up the Kubernetes cluster and deploying the AWX Operator into it), so we can later mount them into the new Execution Environment we will be building.

=== "Custom DNS Records"

    ```yaml title="/awx/custom_dns_records.yml"
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-dns
      namespace: awx
    data:
      custom-hosts: |
        192.168.3.25 LAB-DC-01.bunny-lab.io LAB-DC-01
        192.168.3.26 LAB-DC-02.bunny-lab.io LAB-DC-02
        192.168.3.4 VIRT-NODE-01.bunny-lab.io VIRT-NODE-01
        192.168.3.5 BUNNY-NODE-02.bunny-lab.io BUNNY-NODE-02
    ```

=== "Kerberos Configuration File"

    ```ini title="/awx/krb5.conf"
    [libdefaults]
        default_realm = BUNNY-LAB.IO
        dns_lookup_realm = false
        dns_lookup_kdc = false

    [realms]
        BUNNY-LAB.IO = {
            kdc = 192.168.3.25
            kdc = 192.168.3.26
            admin_server = 192.168.3.25
        }

    [domain_realm]
        192.168.3.25 = BUNNY-LAB.IO
        192.168.3.26 = BUNNY-LAB.IO
        .bunny-lab.io = BUNNY-LAB.IO
        bunny-lab.io = BUNNY-LAB.IO
    ```

Then we apply these ConfigMaps to the AWX namespace with the following commands:
``` sh
cd /awx
kubectl -n awx create configmap awx-kerberos-config --from-file=/awx/krb5.conf
kubectl apply -f custom_dns_records.yml
```
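Before moving on, you can optionally verify that both ConfigMaps exist and contain the expected data; `awx-kerberos-config` is the name the pod spec below expects:

``` sh
# Confirm both ConfigMaps exist in the awx namespace
kubectl -n awx get configmap custom-dns awx-kerberos-config

# Inspect the rendered contents of the Kerberos configuration
kubectl -n awx describe configmap awx-kerberos-config
```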
- Open the AWX UI and click on "**Instance Groups**" under the "**Administration**" section, then press "**Add > Add container group**".
- Enter a descriptive name (e.g. `Kerberos`) and click the "**Customize Pod Specification**" toggle.
- Put the following YAML in "**Custom pod spec**", then press the "**Save**" button.
```yaml title="Custom Pod Spec"
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  initContainers:
    - name: init-hosts
      image: busybox
      command:
        - sh
        - '-c'
        - cat /etc/custom-dns/custom-hosts >> /etc/hosts
      volumeMounts:
        - name: custom-dns
          mountPath: /etc/custom-dns
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: awx-kerberos-volume
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
  volumes:
    - name: awx-kerberos-volume
      configMap:
        name: awx-kerberos-config
    - name: custom-dns
      configMap:
        name: custom-dns
```
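Once saved, a rough sanity check of the container group is to launch any job that uses it and watch the ephemeral Execution Environment pod appear in the `awx` namespace (AWX typically names these pods `automation-job-*`; this is an informal spot-check, not an official step):

``` sh
# Watch for the ephemeral Execution Environment pods created by the container group
kubectl -n awx get pods -w
```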
### Job Template & Inventory Examples

At this point, you need to adjust your existing Job Template(s) that communicate via Kerberos with domain-joined Windows devices so that they use the "**Kerberos**" Instance Group, while keeping the same Execution Environment you have been using up until this point. This injects the Kerberos configuration into the Execution Environment at playbook runtime. When the job has finished running (or when a workflow job template finishes chain-loading multiple playbooks), the ephemeral pod ceases to exist, and the Kerberos keytab data is regenerated at the next runtime.

Also add the following variables to the job template you have associated with the playbook below:
``` yaml
---
kerberos_user: nicole.rappe@BUNNY-LAB.IO
kerberos_password: <DomainPassword>
```

You will also want to ensure your inventory file is configured to use Kerberos authentication; the following example is a starting point:
```ini
virt-node-01 ansible_host=virt-node-01.bunny-lab.io
bunny-node-02 ansible_host=bunny-node-02.bunny-lab.io

[virtualizationHosts]
virt-node-01
bunny-node-02

[virtualizationHosts:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_scheme=https
ansible_winrm_server_cert_validation=ignore
#kerberos_user=nicole.rappe@BUNNY-LAB.IO # Optional; unnecessary here if you define it in the Job Template.
#kerberos_password=<DomainPassword> # Optional; unnecessary here if you define it in the Job Template.
```
!!! failure "Usage of Fully-Qualified Domain Names"
    It is **critical** that you define Kerberos-authenticated devices with fully-qualified domain names. This is just something I found out from 4+ hours of troubleshooting. If the device is Linux, or you are using NTLM authentication instead of Kerberos authentication, you can skip this warning. If you do not define the inventory using FQDNs, Kerberos will fail to run the commands against the targeted device(s).

    In this example, the host is defined via FQDN: `virt-node-01 ansible_host=virt-node-01.bunny-lab.io`
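If you want to validate Kerberos connectivity by hand before wiring it into templates, a hedged sketch from inside an Execution Environment pod (or any host with the same `krb5.conf`) might look like the following; the inventory path is an illustrative assumption:

``` sh
# Acquire a ticket manually, then test WinRM connectivity with Ansible's win_ping module
kinit nicole.rappe@BUNNY-LAB.IO
ansible -i inventory.ini virtualizationHosts -m ansible.windows.win_ping
```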
### Kerberos Connection Playbook

At this point, you need a playbook that you can run in a Workflow Job Template (to keep things modular and simplified) to establish a connection to an Active Directory Domain Controller via Kerberos before running additional playbooks/templates against the actual devices.

You can visualize the connection workflow below:

``` mermaid
graph LR
    A[Update AWX Project] --> B[Update Project Inventory]
    B --> C[Establish Kerberos Connection]
    C --> D[Run Playbook against Windows Device]
```

The following playbook is an example pulled from https://git.bunny-lab.io

!!! note "Playbook Redundancies"
    There are several areas where this playbook could be optimized and redundancies removed. I have not had enough time to iterate through it deeply enough to narrow down exactly what can be removed, so for now it will remain as-is, since it functions as expected with the example below.

```yaml title="Establish_Kerberos_Connection.yml"
---
- name: Generate Kerberos Ticket to Communicate with Domain-Joined Windows Devices
  hosts: localhost
  vars:
    kerberos_password: "{{ lookup('env', 'KERBEROS_PASSWORD') }}" # Alternatively, you can set this as an environment variable
    # BE SURE TO PASS "kerberos_user: nicole.rappe@BUNNY-LAB.IO" and "kerberos_password: <domain_admin_password>" to the template variables when running this playbook in a template.

  tasks:
    - name: Generate the keytab file
      ansible.builtin.shell: |
        ktutil <<EOF
        addent -password -p {{ kerberos_user }} -k 1 -e aes256-cts
        {{ kerberos_password }}
        wkt /tmp/krb5.keytab
        quit
        EOF
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: generate_keytab_result

    - name: Ensure keytab file was generated successfully
      ansible.builtin.fail:
        msg: "Failed to generate keytab file"
      when: generate_keytab_result.rc != 0

    - name: Keytab successfully generated
      ansible.builtin.debug:
        msg: "Keytab successfully generated at /tmp/krb5.keytab"
      when: generate_keytab_result.rc == 0

    - name: Acquire Kerberos ticket using keytab
      ansible.builtin.shell: |
        kinit -kt /tmp/krb5.keytab {{ kerberos_user }}
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: kinit_result

    - name: Ensure Kerberos ticket was acquired successfully
      ansible.builtin.fail:
        msg: "Failed to acquire Kerberos ticket"
      when: kinit_result.rc != 0

    - name: Kerberos ticket successfully acquired
      ansible.builtin.debug:
        msg: "Kerberos ticket successfully acquired for user {{ kerberos_user }}"
      when: kinit_result.rc == 0
```
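After the playbook runs, the ticket can be confirmed from the same Execution Environment with `klist`, which should list a `krbtgt/BUNNY-LAB.IO@BUNNY-LAB.IO` entry in the ticket cache (a quick sanity check rather than an official step):

``` sh
# List cached Kerberos tickets to confirm kinit succeeded
klist
```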
operations/automation/ansible/awx/awx.png (binary image, 122 KiB, not shown)
operations/automation/ansible/awx/connect-awx-to-gitea.md
**Purpose**: Once AWX is deployed, you will want to connect it to Gitea at https://git.bunny-lab.io. The reason for this is so we can pull our playbooks, inventories, and templates into AWX automatically, making it more stateless overall and more resilient to potential failures of either AWX or the underlying Kubernetes cluster hosting it.

## Obtain Gitea Token

You already have this documented in Vaultwarden's password notes for awx.bunny-lab.io, but in case it gets lost, go to the [Gitea Token Page](https://git.bunny-lab.io/user/settings/applications) to set up an application token with read-only access for AWX, with a descriptive name.
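As a quick sanity check (not part of the original setup), you can confirm the token works against Gitea's REST API; a valid token returns your user record as JSON:

``` sh
# Replace <Gitea Token> with the application token generated above
curl -s -H "Authorization: token <Gitea Token>" https://git.bunny-lab.io/api/v1/user
```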
## Create Gitea Credentials

Before you move on and create the project, you need to associate the Gitea token with an AWX "Credential". Navigate to **Resources > Credentials > Add**

| **Field** | **Value** |
| :--- | :--- |
| Credential Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Organization | `Default` *(Click the Magnifying Lens)* |
| Credential Type | `Source Control` |
| Username | `Gitea Username` *(e.g. `nicole`)* |
| Password | `<Gitea Token>` |

## Create an AWX Project

In order to link AWX to Gitea, you have to connect the two of them together with an AWX "Project". Navigate to **Resources > Projects > Add**

**Project Variables**:

| **Field** | **Value** |
| :--- | :--- |
| Project Name | `Bunny-Lab` |
| Description | `Homelab Environment` |
| Organization | `Default` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source Control Type | `Git` |

**Gitea-specific Variables**:

| **Field** | **Value** |
| :--- | :--- |
| Source Control URL | `https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git` |
| Source Control Branch/Tag/Commit | `main` |
| Source Control Credential | `git.bunny-lab.io` *(Click the Magnifying Lens)* |

## Add Playbooks

AWX automatically imports any playbooks it finds in the project and makes them available to templates operating within the same project-space (e.g. "Bunny-Lab"). This means no special configuration is needed for the playbooks.

## Create an Inventory

You will want to associate an inventory with the Gitea project now. Navigate to **Resources > Inventories > Add**

| **Field** | **Value** |
| :--- | :--- |
| Inventory Name | `Homelab` |
| Description | `Homelab Inventory` |
| Organization | `Default` |

### Add Gitea Inventory Source

Now you will want to connect this inventory to the inventory file(s) hosted in the aforementioned Gitea repository. Navigate to **Resources > Inventories > Homelab > Sources > Add**

| **Field** | **Value** |
| :--- | :--- |
| Source Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source | `Sourced from a Project` |
| Project | `Bunny-Lab` |
| Inventory File | `inventories/homelab.ini` |

!!! info "Overwriting Existing Inventory Data"
    Make sure that the checkboxes for "**Overwrite**" and "**Overwrite Variables**" are checked. This ensures that if devices and/or group variables are removed from the inventory file in Gitea, they will also be removed from the inventory inside AWX.

## Webhooks

Optionally, set up webhooks in Gitea to trigger inventory updates in AWX upon changes in the repository. This section is not documented yet, but will eventually be documented.
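Until that documentation exists, a rough sketch of what such a webhook would ultimately trigger is a project sync via AWX's REST API; the project ID and credentials below are placeholder assumptions:

``` sh
# Manually trigger a project sync (what a Gitea webhook would automate)
curl -s -X POST -u admin:<AWX Password> \
  https://awx.bunny-lab.io/api/v2/projects/<project_id>/update/
```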
operations/automation/ansible/awx/deployment/awx-in-minikube.md
# Deploy AWX on Minikube Cluster
Minikube Cluster-based deployment of Ansible AWX (formerly Ansible Tower).
!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later.

## Install Minikube Cluster
### Update the Ubuntu Server
```
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```

### Download and Install Minikube (Ubuntu Server)
Additional Documentation: https://minikube.sigs.k8s.io/docs/start/
```
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb

# Download Docker and Common Tools
sudo apt install docker.io nfs-common iptables nano htop -y

# Configure Docker User
sudo usermod -aG docker nicole
```
:::caution
Be sure to change the `nicole` username in the `sudo usermod -aG docker nicole` command to whatever your local username is.
:::
### Fully log out, then sign back in to the server
```
exit
```
### Validate that permissions allow you to run Docker commands while non-root
```
docker ps
```

### Initialize Minikube Cluster
Additional Documentation: https://github.com/ansible/awx-operator
```
minikube start --driver=docker
minikube kubectl -- get nodes
minikube kubectl -- get pods -A
```

### Make Sure the Minikube Cluster Automatically Starts on Boot
```jsx title="/etc/systemd/system/minikube.service"
[Unit]
Description=Minikube service
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=nicole
ExecStart=/usr/bin/minikube start --driver=docker
ExecStop=/usr/bin/minikube stop

[Install]
WantedBy=multi-user.target
```
:::caution
Be sure to change the `nicole` username in the `User=nicole` line of the config to whatever your local username is.
:::
:::info
If you plan on exposing AWX through Minikube's ingress addon, add `--addons=ingress` to the `minikube start` commands above; you can omit it if you plan on running AWX behind an existing reverse proxy using a "**NodePort**" connection.
:::
### Restart the Service Daemon and Enable/Start Minikube Automatic Startup
```
sudo systemctl daemon-reload
sudo systemctl enable minikube
sudo systemctl start minikube
```

### Make a Command Alias for `kubectl`
Be sure to add the following to the bottom of your existing profile file noted below.
```jsx title="~/.bashrc"
...
alias kubectl="minikube kubectl --"
```
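You can confirm the alias works by opening a new shell (or re-sourcing your profile) and issuing any read-only command:

``` sh
source ~/.bashrc
kubectl get nodes
```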
:::tip
If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
:::

## Make AWX Operator Kustomization File
Find the latest tag version here: https://github.com/ansible/awx-operator/releases
```jsx title="kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```jsx title="awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx

---
apiVersion: v1
kind: Service
metadata:
  name: awx-service
  namespace: awx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080 # Choose an available port in the range of 30000-32767
  selector:
    app.kubernetes.io/name: awx-web
```
### Apply Configuration File
Run from the same directory as the `kustomization.yml` file.
```
kubectl apply -k .
```
:::info
If you get any errors, especially ones relating to "CRD"s, wait 30 seconds and try re-running the `kubectl apply -k .` command to fully apply the `awx.yml` configuration file and bootstrap the AWX deployment.
:::

### View Logs / Track Deployment Progress
```
kubectl logs -n awx deployments/awx-operator-controller-manager -c awx-manager
```
### Get AWX WebUI Address
```
minikube service -n awx awx-service --url
```
### Get WebUI Password
```
kubectl get secret -n awx awx-admin-password -o jsonpath="{.data.password}" | base64 --decode ; echo
```
operations/automation/ansible/awx/deployment/awx-operator.md
**Purpose**:
Deploying a Rancher RKE2 Cluster-based Ansible AWX Operator server. This can scale to a larger, more enterprise-grade environment if needed.

!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 22.04** or later with at least 16GB of memory, 8 CPU cores, and 64GB of storage.

## Deploy Rancher RKE2 Cluster
You will need to deploy a [Rancher RKE2 Cluster](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md) on an Ubuntu Server-based virtual machine. After this phase, you can focus on the Ansible AWX-specific deployment. A single ControlPlane node is all you need to set up AWX; additional infrastructure can be added after-the-fact.

!!! tip "Checkpoint/Snapshot Reminder"
    If this is a virtual machine, after deploying the RKE2 cluster and validating it functions, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something during deployment.

## Server Configuration
The AWX deployment consists of 3 YAML files that configure the containers for AWX as well as the NGINX ingress networking side of things. You will need all of them in the same folder for the deployment to be successful. For the purpose of this example, we will put all of them into a folder located at `/awx`.

``` sh
# Make the deployment folder
mkdir -p /awx
cd /awx
```

We need to increase the filesystem access limits.

Temporarily set the limits now:
``` sh
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```

Permanently set the limits for later:
```jsx title="/etc/sysctl.conf"
# <End of File>
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

Apply the settings:
``` sh
sudo sysctl -p
```
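You can read the values back to confirm they are active:

``` sh
# Verify the new inotify limits are in effect
sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances
```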
### Create AWX Deployment Configuration Files
You will need to create these files, all in the same directory, using the content of the examples below. Be sure to replace values such as the `spec.rules.host=awx.bunny-lab.io` in the `ingress.yml` file with a hostname you can point a DNS server / record to.

=== "awx.yml"

    ```yaml title="/awx/awx.yml"
    apiVersion: awx.ansible.com/v1beta1
    kind: AWX
    metadata:
      name: awx
    spec:
      service_type: ClusterIP
    ```

=== "ingress.yml"

    ```yaml title="/awx/ingress.yml"
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress
    spec:
      rules:
        - host: awx.bunny-lab.io
          http:
            paths:
              - pathType: Prefix
                path: "/"
                backend:
                  service:
                    name: awx-service
                    port:
                      number: 80
    ```

=== "kustomization.yml"

    ```yaml title="/awx/kustomization.yml"
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - github.com/ansible/awx-operator/config/default?ref=2.10.0
      - awx.yml
      - ingress.yml
    images:
      - name: quay.io/ansible/awx-operator
        newTag: 2.10.0
    namespace: awx
    ```

## Ensure the Kubernetes Cluster is Ready
Check that the status of the cluster is ready by running the following commands; the output should appear similar to the [Rancher RKE2 Example](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md#install-helm-rancher-certmanager-jetstack-rancher-and-longhorn):
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get pods --all-namespaces
```

## Ensure the Timezone / Date is Accurate
You want to make sure that the Kubernetes environment and the node itself have accurate time for a number of reasons, not least of which: if you are using Ansible with Kubernetes authentication and the date/time is inaccurate, things will not work correctly.
``` sh
sudo timedatectl set-timezone America/Denver
```
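A quick read-back confirms the timezone took effect and whether the clock is synchronized:

``` sh
timedatectl status
```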
## Deploy AWX using Kustomize
Now it is time to tell Kubernetes to read the configuration files using Kustomize (*built into newer versions of Kubernetes*) to deploy AWX into the cluster.
``` sh
cd /awx
kubectl apply -k .
```

!!! warning "Be Patient"
    The AWX deployment process can take a while. Use the commands in the [Troubleshooting](./awx-operator.md#troubleshooting) section if you want to track the progress after running the commands above.

If you get an error that looks like the below, re-run the `kubectl apply -k .` command a second time after waiting about 10 seconds. The second time, the error should be gone.
``` sh
error: resource mapping not found for name: "awx" namespace: "awx" from ".": no matches for kind "AWX" in version "awx.ansible.com/v1beta1"
ensure CRDs are installed first
```

To check on the progress of the deployment, you can run the following command: `kubectl get pods -n awx`
You will know that AWX is ready to be accessed in the next step if the output looks like below:
```
NAME                                               READY   STATUS    RESTARTS        AGE
awx-operator-controller-manager-7b9ccf9d4d-cnwhc   2/2     Running   2 (3m41s ago)   9m41s
awx-postgres-13-0                                  1/1     Running   0               6m12s
awx-task-7b5f8cf98c-rhrpd                          4/4     Running   0               4m46s
awx-web-6dbd7df9f7-kn8k2                           3/3     Running   0               93s
```

!!! warning "Be Patient - Wait 20 Minutes"
    The process may take a while to spin up AWX, PostgreSQL, Redis, and other workloads necessary for AWX to function. Depending on the speed of the server, it may take between 5 and 20 minutes for AWX to be ready to connect to. You can watch the progress via the CLI commands listed above, or directly on Rancher's WebUI at https://rancher.bunny-lab.io.

## Access the AWX WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible to the host's network (such as your personal computer) unless you set up a DNS record pointing to it. In the example above, you would have an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 Cluster host.

The RKE2 cluster will translate `awx.bunny-lab.io` to the AWX web-service container(s) automatically, due to having an internal reverse proxy within the Kubernetes cluster. SSL certificates generated within Kubernetes / Rancher RKE2 are not covered in this documentation; suffice to say, the AWX server can be configured behind another reverse proxy such as Traefik, or via Cert-Manager / JetStack. The process of setting this up goes outside the scope of this document.

### Traefik Implementation
If you want to put this behind Traefik, you will need a slightly unique Traefik configuration file, seen below, to transparently pass traffic through into the RKE2 cluster's reverse proxy.

```yaml title="awx.bunny-lab.io.yml"
tcp:
  routers:
    awx-tcp-router:
      rule: "HostSNI(`awx.bunny-lab.io`)"
      entryPoints: ["websecure"]
      service: awx-nginx-service
      tls:
        passthrough: true
#      middlewares:
#        - auth-bunny-lab-io # Referencing the Keycloak Server

  services:
    awx-nginx-service:
      loadBalancer:
        servers:
          - address: "192.168.3.10:443"
```

!!! success "Accessing the AWX WebUI"
    If you have gotten this far, you should now be able to access AWX via the WebUI and log in.

    - AWX WebUI: https://awx.bunny-lab.io

![awx.png](awx.png)

You may see a prompt about "AWX is currently upgrading. This page will refresh when complete". Be patient and let it finish; when it is done, it will take you to a login page.
AWX will generate its own secure password the first time you set up AWX. The username is `admin`. You can run the following command to retrieve the password:
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```

## Change Admin Password
You will want to change the admin password straight away. Use the following navigation structure to find where to change the password:
``` mermaid
graph LR
    A[AWX Dashboard] --> B[Access]
    B --> C[Users]
    C --> D[admin]
    D --> E[Edit]
```

## Troubleshooting
You may want to track the deployment process to verify that it is actually doing something. There are a few Kubernetes commands that can assist with this, listed below.

### AWX-Manager Deployment Logs
You may want to track the internal logs of the `awx-manager` container, which is responsible for the majority of the automated deployment of AWX. You can do so by running the command below.
```
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
!!! note
    The `-6c58d59d97-qj2n2` suffix at the end of the Kubernetes "Pod" mentioned in the command above is randomized. You will need to change it based on the name shown when running the `kubectl get pods -n awx` command.
## Upgrading from 2.10.0 to 2.19.1+
There is a known issue with upgrading / installing AWX Operator beyond version 2.10.0, because the PostgreSQL database upgrades from 13.0 to 15.0 and its permissions have changed. The following workflow will help get past that and adjust the permissions in such a way that allows the upgrade to proceed successfully. If this is a clean installation and the fresh install of 2.19.1 is not working yet, you can also perform this step. (It won't work out of the box because of this bug; the developers of AWX have not implemented an official fix themselves at this time.)

### Create a Temporary Pod to Adjust Permissions
We need to create a pod that will mount the PostgreSQL PVC and make changes to permissions, then destroy the v15.0 pod to have the AWX Operator automatically regenerate it.

```yaml title="/awx/temp-pod.yml"
apiVersion: v1
kind: Pod
metadata:
  name: temp-pod
  namespace: awx
spec:
  containers:
    - name: temp-container
      image: busybox
      command: ['sh', '-c', 'sleep 3600']
      volumeMounts:
        - mountPath: /var/lib/pgsql/data
          name: postgres-data
  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: postgres-15-awx-postgres-15-0
  restartPolicy: Never
```

``` sh
# Deploy Temporary Pod
kubectl apply -f /awx/temp-pod.yml

# Open a Shell in the Temporary Pod
kubectl exec -it temp-pod -n awx -- sh

# Adjust Permissions of the PostgreSQL 15.0 Database Folder
chown -R 26:root /var/lib/pgsql/data
exit

# Delete the Temporary Pod
kubectl delete pod temp-pod -n awx

# Delete the Crashlooped PostgreSQL 15.0 Pod to Regenerate It
kubectl delete pod awx-postgres-15-0 -n awx

# Track the Migration
kubectl get pods -n awx
kubectl logs -n awx awx-postgres-15-0
```

!!! warning "Be Patient"
    This upgrade may take a few minutes depending on the speed of the node it is running on. Be patient and wait until the output looks similar to this:
    ```
    root@awx:/awx# kubectl get pods -n awx
    NAME                                               READY   STATUS      RESTARTS   AGE
    awx-migration-24.6.1-bh5vb                         0/1     Completed   0          9m55s
    awx-operator-controller-manager-745b55d94b-2dhvx   2/2     Running     0          25m
    awx-postgres-15-0                                  1/1     Running     0          12m
    awx-task-7946b46dd6-7z9jm                          4/4     Running     0          10m
    awx-web-9497647b4-s4gmj                            3/3     Running     0          10m
    ```

    If you see a migration pod, like in the above example, feel free to delete it with the following command: `kubectl delete pod awx-migration-24.6.1-bh5vb -n awx`.
# WinRM (Kerberos)
**Name**: "Kerberos WinRM"

This is a custom AWX credential type: the input configuration defines the fields presented in the UI, and the injector configuration maps those fields to Ansible connection variables at runtime.

```jsx title="Input Configuration"
fields:
  - id: username
    type: string
    label: Username
  - id: password
    type: string
    label: Password
    secret: true
  - id: krb_realm
    type: string
    label: Kerberos Realm (Domain)
required:
  - username
  - password
  - krb_realm
```

```jsx title="Injector Configuration"
extra_vars:
  ansible_user: '{{ username }}'
  ansible_password: '{{ password }}'
  ansible_winrm_transport: kerberos
  ansible_winrm_kerberos_realm: '{{ krb_realm }}'
```
operations/automation/ansible/credentials/overview.md
---
sidebar_position: 1
---
# AWX Credential Types
When interacting with devices via Ansible playbooks, you need to provide the playbook with credentials to connect to the device with. Examples are domain credentials for Windows devices, and local sudo user credentials for Linux.

## Windows-based Credentials
### NTLM
NTLM-based authentication is not exactly the most secure method of remotely running playbooks on Windows devices, but it is still encrypted using SSL certificates created by the device itself when provisioned correctly to enable WinRM functionality.
```jsx title="(NTLM) nicole.rappe@MOONGATE.LOCAL"
Credential Type: Machine
Username: nicole.rappe@MOONGATE.LOCAL
Password: <Encrypted>
Privilege Escalation Method: runas
Privilege Escalation Username: nicole.rappe@MOONGATE.LOCAL
```
### Kerberos
Kerberos-based authentication is generally considered the most secure method of authentication with Windows devices, but it can be trickier to set up, since it requires additional setup inside of AWX and the cluster for it to function properly. At this time, there is no working Kerberos documentation.
```jsx title="(Kerberos WinRM) nicole.rappe"
Credential Type: Kerberos WinRM
Username: nicole.rappe
Password: <Encrypted>
Kerberos Realm (Domain): MOONGATE.LOCAL
```
## Linux-based Credentials
```jsx title="(LINUX) nicole"
Credential Type: Machine
Username: nicole
Password: <Encrypted>
Privilege Escalation Method: sudo
Privilege Escalation Username: root
```

:::note
`WinRM / Kerberos` based credentials do not currently work as expected. At this time, use either `Linux` or `NTLM` based credentials.
:::
**Purpose**:
You will need to enable secure WinRM management of the Windows devices you are running playbooks against, as compared to the Linux devices. The following PowerShell script needs to be run on every Windows device you intend to run Ansible playbooks on. This script can also be useful for simply enabling / resetting WinRM configurations for Hyper-V hosts in general; just omit the PowerShell script remote-signing section if you don't plan on using it for Ansible.

``` powershell
# Script to configure WinRM over HTTPS on the Hyper-V host

# Ensure WinRM is enabled
Write-Host "Enabling WinRM..."
winrm quickconfig -force

# Resolve the device's fully-qualified domain name for the certificate and listener
$fqdn = [System.Net.Dns]::GetHostByName($env:COMPUTERNAME).HostName

# Generate a self-signed certificate (Optional: Use your certificate if you have one)
$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName $fqdn
$certThumbprint = $cert.Thumbprint

# Function to delete existing HTTPS listener
function Remove-HTTPSListener {
    Write-Host "Removing existing HTTPS listener if it exists..."
    $listeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
    foreach ($listener in $listeners) {
        if ($listener.Transport -eq "HTTPS") {
            Write-Host "Deleting listener with Address: $($listener.Address) and Transport: $($listener.Transport)"
            Remove-WSManInstance -ResourceURI winrm/config/listener -SelectorSet @{Address=$listener.Address; Transport=$listener.Transport}
        }
    }
    Start-Sleep -Seconds 5 # Wait for a few seconds to ensure deletion
}

# Remove existing HTTPS listener
Remove-HTTPSListener

# Confirm deletion
$existingListeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
if ($existingListeners | Where-Object { $_.Transport -eq "HTTPS" }) {
    Write-Host "Failed to delete the existing HTTPS listener. Exiting script."
    exit 1
}

# Create a new HTTPS listener bound to the FQDN and self-signed certificate
Write-Host "Creating a new HTTPS listener..."
$listenerCmd = "winrm create winrm/config/Listener?Address=*+Transport=HTTPS '@{Hostname=`"$fqdn`"; CertificateThumbprint=`"$certThumbprint`"}'"
Invoke-Expression $listenerCmd

# Set TrustedHosts to allow connections from any IP address (adjust as needed for security)
Write-Host "Setting TrustedHosts to allow any IP address..."
winrm set winrm/config/client '@{TrustedHosts="*"}'

# Enable the firewall rule for WinRM over HTTPS
Write-Host "Enabling firewall rule for WinRM over HTTPS..."
$existingFirewallRule = Get-NetFirewallRule -DisplayName "WinRM HTTPS" -ErrorAction SilentlyContinue
if (-not $existingFirewallRule) {
    New-NetFirewallRule -Name "WINRM-HTTPS-In-TCP-PUBLIC" -DisplayName "WinRM HTTPS" -Enabled True -Direction Inbound -Protocol TCP -LocalPort 5986 -RemoteAddress Any -Action Allow
}

# Ensure Kerberos authentication is enabled
Write-Host "Enabling Kerberos authentication for WinRM..."
winrm set winrm/config/service/auth '@{Kerberos="true"}'

# Configure the WinRM service to use HTTPS and Kerberos
Write-Host "Configuring WinRM service to use HTTPS and Kerberos..."
winrm set winrm/config/service '@{AllowUnencrypted="false"}'

# Configure the WinRM client to use Kerberos
Write-Host "Configuring WinRM client to use Kerberos..."
winrm set winrm/config/client/auth '@{Kerberos="true"}'

# Ensure the PowerShell execution policy is set to allow remotely running scripts
Write-Host "Setting PowerShell execution policy to RemoteSigned..."
Set-ExecutionPolicy RemoteSigned -Force

Write-Host "Configuration complete. The Hyper-V host is ready for remote management over HTTPS with Kerberos authentication."
```
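From the Ansible control side, you can do a rough reachability check of the new HTTPS listener; even an HTTP error response (such as `405 Method Not Allowed`) indicates the listener is answering on port 5986. This is a hedged spot-check, not a full WinRM validation:

``` sh
# Probe the WinRM HTTPS endpoint; -k skips validation of the self-signed certificate
curl -vk https://<device-fqdn>:5986/wsman
```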
operations/automation/ansible/inventories/overview.md
# Host Inventories
When you are deploying playbooks, you target hosts that exist in "Inventories". These inventories consist of a list of hosts and their corresponding IP addresses, as well as any host-specific variables that may be necessary to declare to run the playbook. You can see an example inventory file below.

Keep in mind that the "Group Variables" section varies based on your environment. NTLM is considered insecure, but may be necessary when you are interacting with Windows servers that are not domain-joined; otherwise, you want to use Kerberos authentication. This is outlined more in the [AWX Kerberos Implementation](../awx/awx-kerberos-implementation.md#job-template-inventory-examples) documentation.

!!! note "Inventory Data Relationships"
    An inventory file consists of hosts, groups, and variables. A host belongs to a group, and a group can have variables configured for it. If you run a playbook / job template against a host, it will apply the variables associated with the group that host belongs to (if any) during runtime.

```ini title="https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini"
# Networking
pfsense-example ansible_host=192.168.3.1

# Servers
example01 ansible_host=192.168.3.2
example02 ansible_host=192.168.3.3
example03 ansible_host=example03.domain.com # FQDN is required for Ansible in Windows domain-joined Kerberos environments.
example04 ansible_host=example04.domain.com # FQDN is required for Ansible in Windows domain-joined Kerberos environments.

# Group Definitions
[linuxServers]
example01
example02

[domainControllers]
example03
example04

[domainControllers:vars]
ansible_connection=winrm
ansible_winrm_kerberos_delegation=false
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```
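Before committing inventory changes to the repository, you can check that the file parses the way you expect (assuming `ansible-core` is available locally):

``` sh
# Render the parsed inventory as JSON to confirm hosts, groups, and variables
ansible-inventory -i inventories/homelab.ini --list
```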
operations/automation/ansible/playbooks/playbooks.md
!!! warning "DOCUMENT UNDER CONSTRUCTION"
    This document is a "scaffold" document. It is missing significant portions of several sections and should not be read with any scrutiny until it is more feature-complete down the road. Come back later and I should have added more to this document by then.

**Purpose**:
This is an indexed list of Ansible playbooks / workflows that I have developed to deploy and manage various aspects of my lab environment. The list is not dynamically updated, so it may sometimes be out-of-date.

## Linux Playbooks
### Deployments
Deployment playbooks are playbooks (or a series of playbooks forming a "Workflow Job Template") that deploy a server or piece of software.

- Authentik
    - [1-Authentik-Bootstrapper.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/1-Authentik-Bootstrapper.yml)
    - [2-Deploy-Cluster.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/2-Deploy-Cluster.yml)
    - [3-Deploy-Authentik.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/3-Deploy-Authentik.yml)
    - [Check_Cluster_Nodes.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/Check_Cluster_Nodes.yml)
    - [Check_Cluster_Pods.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/Check_Cluster_Pods.yml)
- Immich
    - [Full_Deployment.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Immich/Full_Deployment.yml)
- Keycloak
    - [Deploy-Keycloak.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Keycloak/Deploy-Keycloak.yml)
- Portainer
    - [Deploy-Portainer.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Portainer/Deploy-Portainer.yml)
- PrivacyIDEA
    - [privacyIDEA.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/privacyIDEA.yml)
- Rancher RKE2 Kubernetes Cluster
    - [PLACEHOLDER]()
    - [PLACEHOLDER]()
    - [PLACEHOLDER]()
    - [PLACEHOLDER]()
    - [PLACEHOLDER]()

### Kerberos
This playbook is designed to be chain-loaded before any playbooks that involve interacting with Active Directory domain-joined Windows devices. It establishes a connection with Active Directory using domain credentials and sets up a keytab file (among other things), so that the execution environment the subsequent jobs run in is able to run against Windows devices. This ensures the connection is encrypted the entire time the playbooks are running, instead of using lower-security authentication methods like NTLM, which do not always work in most circumstances. You can find more information in the [Kerberos Authentication](../awx/awx-kerberos-implementation.md#kerberos-implementation) section of the AWX documentation. `It does require additional setup prior to running the playbook.`

- [Establish_Kerberos_Connection.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Establish_Kerberos_Connection.yml)

!!! warning "Ansible w/ Kerberos is **not** for beginners"
    I advise against jumping into the deep end with setting up Kerberos authentication for your playbooks until you have made yourself more comfortable with how Kubernetes works; at the very least, you need to read the linked documentation above very closely to ensure nothing goes wrong during the setup.

### Security
Security playbooks do things like secure devices with additional auditing functionality, login notifications, enforcing SSH certificate-based authentication, and things of that sort.

- Install SSH Public Key Authentication
    - [PLACEHOLDER]()
- SSH Login Notifications
    - [PLACEHOLDER]()

## Windows Playbooks
### Deployments
Deployment playbooks are playbooks (or a series of playbooks forming a "Workflow Job Template") that deploy a server or piece of software.

- Hyper-V - Deploy GuestVM
    - [PLACEHOLDER]()
- Query Active Directory Domain Computers
    - [PLACEHOLDER]()
- Install BGInfo
    - [PLACEHOLDER]()
operations/automation/ansible/projects/overview.md
# AWX Projects
When you want to run playbooks on host devices in your inventory files, you need to host the playbooks in a "Project". Projects can be as simple as a connection to Gitea/GitHub that stores playbooks in a repository.

```jsx title="Ansible Playbooks (Gitea)"
Name: Bunny Lab
Source Control Type: Git
Source Control URL: https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git
Source Control Credential: Bunny Lab (Gitea)
```

```jsx title="Resources > Credentials > Bunny Lab (Gitea)"
Name: Bunny Lab (Gitea)
Credential Type: Source Control
Username: nicole.rappe
Password: <Encrypted> # If you use MFA on Gitea/GitHub, use an app password for the project instead.
```
operations/automation/ansible/templates/overview.md
# Templates
Templates are pre-constructed groups of devices, playbooks, and credentials that perform a specific kind of task against a predefined group of hosts or a device inventory.

```jsx title="Deploy Hyper-V VM"
Name: Deploy Hyper-V VM
Inventory: (NTLM) MOON-HOST-01
Playbook: playbooks/Windows/Hyper-V/Deploy-VM.yml
Credentials: (NTLM) nicole.rappe@MOONGATE.local
Execution Environment: AWX EE (latest)
Project: Ansible Playbooks (Gitea)

Variables:
---
random_number: "{{ lookup('password', '/dev/null chars=digits length=4') }}"
random_letters: "{{ lookup('password', '/dev/null chars=ascii_uppercase length=4') }}"
vm_name: "NEXUS-TEST-{{ random_number }}{{ random_letters }}"
vm_memory: "8589934592" # Measured in bytes (e.g. 8GB)
vm_storage: "68719476736" # Measured in bytes (e.g. 64GB)
iso_path: "C:\\ubuntu-22.04-live-server-amd64.iso"
vm_folder: "C:\\Virtual Machines\\{{ vm_name_fact }}"
```