Re-Structured Documentation

This commit is contained in:
2024-11-17 22:09:46 -07:00
parent a5169d1abd
commit f67c858dd3
97 changed files with 0 additions and 12 deletions

View File

@ -0,0 +1,68 @@
**Purpose**: Once AWX is deployed, you will want to connect it to Gitea at https://git.bunny-lab.io. This lets AWX pull in playbooks, inventories, and templates automatically, keeping AWX largely stateless and more resilient to failures of either AWX or the underlying Kubernetes cluster hosting it.
## Obtain Gitea Token
You already have this documented in Vaultwarden's password notes for awx.bunny-lab.io, but in case it gets lost, go to the [Gitea Token Page](https://git.bunny-lab.io/user/settings/applications) to set up an application token with read-only access for AWX, with a descriptive name.
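If you prefer to script this step, the same kind of token can be created through Gitea's API. The example below is a hedged sketch: it assumes a Gitea version recent enough to support scoped tokens, and the token name and scope are illustrative values.
``` sh
# Create a read-only token for AWX via the Gitea API (you will be prompted for your Gitea password)
curl -u <gitea-username> \
  -X POST "https://git.bunny-lab.io/api/v1/users/<gitea-username>/tokens" \
  -H "Content-Type: application/json" \
  -d '{"name": "AWX", "scopes": ["read:repository"]}'
```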
## Create Gitea Credentials
Before you move on and create the project, you need to associate the Gitea token with an AWX "Credential". Navigate to **Resources > Credentials > Add**
| **Field** | **Value** |
| :--- | :--- |
| Credential Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Organization | `Default` *(Click the Magnifying Lens)* |
| Credential Type | `Source Control` |
| Username | `Gitea Username` *(e.g. `nicole`)* |
| Password | `<Gitea Token>` |
## Create an AWX Project
In order to link AWX to Gitea, you have to connect the two of them together with an AWX "Project". Navigate to **Resources > Projects > Add**
**Project Variables**:
| **Field** | **Value** |
| :--- | :--- |
| Project Name | `Bunny-Lab` |
| Description | `Homelab Environment` |
| Organization | `Default` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source Control Type | `Git` |
**Gitea-specific Variables**:
| **Field** | **Value** |
| :--- | :--- |
| Source Control URL | `https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git` |
| Source Control Branch/Tag/Commit | `main` |
| Source Control Credential | `git.bunny-lab.io` *(Click the Magnifying Lens)* |
## Add Playbooks
AWX automatically imports any playbooks it finds in the project and makes them available to templates operating within the same project space (e.g. "Bunny-Lab"). This means no special configuration is needed for the playbooks.
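For reference, a minimal sketch of the repository layout AWX will see is shown below. Only `inventories/homelab.ini` and `playbooks/Windows/Hyper-V/Deploy-VM.yml` are paths referenced elsewhere in this documentation; the rest of the structure is an assumption and your repository may differ.
``` sh
# Clone the repository (you will need your Gitea credentials if it is private) and list its contents
git clone https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git
find awx.bunny-lab.io -type f -name "*.yml" -o -type f -name "*.ini"
# Hypothetical output:
# awx.bunny-lab.io/inventories/homelab.ini
# awx.bunny-lab.io/playbooks/Windows/Hyper-V/Deploy-VM.yml
```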
## Create an Inventory
You will want to associate an inventory with the Gitea project now. Navigate to **Resources > Inventories > Add**
| **Field** | **Value** |
| :--- | :--- |
| Inventory Name | `Homelab` |
| Description | `Homelab Inventory` |
| Organization | `Default` |
### Add Gitea Inventory Source
Now you will want to connect this inventory to the inventory file(s) hosted in the aforementioned Gitea repository. Navigate to **Resources > Inventories > Homelab > Sources > Add**
| **Field** | **Value** |
| :--- | :--- |
| Source Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source | `Sourced from a Project` |
| Project | `Bunny-Lab` |
| Inventory File | `inventories/homelab.ini` |
Check the box at the bottom named "**Update on Launch**". This will pull the latest inventory each time a job is run. It may slightly slow down jobs, but it ensures the inventory is current every time a job runs.
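You can also trigger a sync of this inventory source on demand through the AWX API, which is handy for testing the connection. This is a hedged example: the inventory source ID (`1`) and the admin credentials are placeholders you would substitute with your own values.
``` sh
# Look up the inventory source ID by name, then kick off an on-demand sync
curl -k -u admin:<password> "https://awx.bunny-lab.io/api/v2/inventory_sources/?name=git.bunny-lab.io"
curl -k -u admin:<password> -X POST "https://awx.bunny-lab.io/api/v2/inventory_sources/1/update/"
```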
## Webhooks
Optionally, set up webhooks in Gitea to trigger inventory updates in AWX whenever the repository changes. This section is not documented yet.

View File

@ -0,0 +1,28 @@
# WinRM (Kerberos)
**Name**: "Kerberos WinRM"
```jsx title="Input Configuration"
fields:
  - id: username
    type: string
    label: Username
  - id: password
    type: string
    label: Password
    secret: true
  - id: krb_realm
    type: string
    label: Kerberos Realm (Domain)
required:
  - username
  - password
  - krb_realm
```
```jsx title="Injector Configuration"
extra_vars:
  ansible_user: '{{ username }}'
  ansible_password: '{{ password }}'
  ansible_winrm_transport: kerberos
  ansible_winrm_kerberos_realm: '{{ krb_realm }}'
```

View File

@ -0,0 +1,36 @@
---
sidebar_position: 1
---
# AWX Credential Types
When interacting with devices via Ansible playbooks, you need to provide the playbook with credentials it can use to connect to the device. Examples include domain credentials for Windows devices and local sudo user credentials for Linux.
## Windows-based Credentials
### NTLM
NTLM-based authentication is not the most secure method of remotely running playbooks on Windows devices, but traffic is still encrypted using the SSL certificate the device creates for itself when WinRM is provisioned correctly.
```jsx title="(NTLM) nicole.rappe@MOONGATE.LOCAL"
Credential Type: Machine
Username: nicole.rappe@MOONGATE.LOCAL
Password: <Encrypted>
Privilege Escalation Method: runas
Privilege Escalation Username: nicole.rappe@MOONGATE.LOCAL
```
### Kerberos
Kerberos-based authentication is generally considered the most secure method of authenticating with Windows devices, but it can be trickier to set up since it requires additional configuration inside of AWX (in the cluster) to function properly. At this time, there is no working Kerberos documentation.
```jsx title="(Kerberos WinRM) nicole.rappe"
Credential Type: Kerberos WinRM
Username: nicole.rappe
Password: <Encrypted>
Kerberos Realm (Domain): MOONGATE.LOCAL
```
## Linux-based Credentials
```jsx title="(LINUX) nicole"
Credential Type: Machine
Username: nicole
Password: <Encrypted>
Privilege Escalation Method: sudo
Privilege Escalation Username: root
```
:::note
`WinRM / Kerberos` based credentials do not currently work as-expected. At this time, use either `Linux` or `NTLM` based credentials.
:::

View File

@ -0,0 +1,139 @@
# Deploy AWX on Minikube Cluster
Minikube-based deployment of Ansible AWX (formerly Ansible Tower).
!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later.
## Install Minikube Cluster
### Update the Ubuntu Server
```
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```
### Download and Install Minikube (Ubuntu Server)
Additional Documentation: https://minikube.sigs.k8s.io/docs/start/
```
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
# Download Docker and Common Tools
sudo apt install docker.io nfs-common iptables nano htop -y
# Configure Docker User
sudo usermod -aG docker nicole
```
:::caution
Be sure to change the `nicole` username in the `sudo usermod -aG docker nicole` command to whatever your local username is.
:::
### Fully log out, then sign back in to the server
```
exit
```
### Validate that permissions allow you to run docker commands while non-root
```
docker ps
```
### Initialize Minikube Cluster
Additional Documentation: https://github.com/ansible/awx-operator
```
minikube start --driver=docker
minikube kubectl -- get nodes
minikube kubectl -- get pods -A
```
### Make sure Minikube Cluster Automatically Starts on Boot
```jsx title="/etc/systemd/system/minikube.service"
[Unit]
Description=Minikube service
After=network.target
[Service]
Type=oneshot
RemainAfterExit=yes
User=nicole
ExecStart=/usr/bin/minikube start --driver=docker
ExecStop=/usr/bin/minikube stop
[Install]
WantedBy=multi-user.target
```
:::caution
Be sure to change the `nicole` username in the `User=nicole` line of the config to whatever your local username is.
:::
:::info
The ingress addon (`--addons=ingress`) can be left out of the `minikube start` command (as shown above) if you plan on running AWX behind an existing reverse proxy using a "**NodePort**" connection; otherwise, add it when starting Minikube.
:::
### Restart Service Daemon and Enable/Start Minikube Automatic Startup
```
sudo systemctl daemon-reload
sudo systemctl enable minikube
sudo systemctl start minikube
```
### Make command alias for `kubectl`
Be sure to add the following to the bottom of your existing profile file noted below.
```jsx title="~/.bashrc"
...
alias kubectl="minikube kubectl --"
```
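After adding the alias, reload your shell profile and confirm that `kubectl` reaches the Minikube cluster:
``` sh
source ~/.bashrc
kubectl get nodes
```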
:::tip
If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
:::
## Make AWX Operator Kustomization File:
Find the latest tag version here: https://github.com/ansible/awx-operator/releases
```jsx title="kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```jsx title="awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
---
apiVersion: v1
kind: Service
metadata:
  name: awx-service
  namespace: awx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080 # Choose an available port in the range of 30000-32767
  selector:
    app.kubernetes.io/name: awx-web
```
### Apply Configuration File
Run from the same directory as the `kustomization.yml` file.
```
kubectl apply -k .
```
:::info
If you get any errors, especially ones relating to "CRD"s, wait 30 seconds, then try re-running the `kubectl apply -k .` command to fully apply the configuration and bootstrap the AWX deployment.
:::
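While the operator bootstraps AWX, you can watch the namespace until all pods report a `Running` status before moving on:
``` sh
kubectl get pods -n awx -w
```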
### View Logs / Track Deployment Progress
```
kubectl logs -f -n awx deployments/awx-operator-controller-manager -c awx-manager
```
### Get AWX WebUI Address
```
minikube service -n awx awx-service --url
```
### Get WebUI Password:
```
kubectl get secret -n awx awx-admin-password -o jsonpath="{.data.password}" | base64 --decode ; echo
```
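As a quick sanity check, you can hit AWX's ping endpoint with the admin credentials you just retrieved. This assumes the `30080` NodePort defined in `awx.yml` above; replace `<node-ip>` with the address reported by the `minikube service` command.
``` sh
curl -k -u admin:<password> http://<node-ip>:30080/api/v2/ping/
```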

View File

@ -0,0 +1,71 @@
**Purpose**:
You will need to enable secure WinRM management on the Windows devices you run playbooks against (unlike Linux devices, which are managed over SSH). The following PowerShell script needs to be run on every Windows device you intend to run Ansible playbooks on:
``` powershell
# Script to configure WinRM over HTTPS on the Hyper-V host
# Ensure WinRM is enabled
Write-Host "Enabling WinRM..."
winrm quickconfig -force
# Generate a self-signed certificate (Optional: Use your certificate if you have one)
$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName "hyperv-host.local"
$certThumbprint = $cert.Thumbprint
# Function to delete existing HTTPS listener
function Remove-HTTPSListener {
Write-Host "Removing existing HTTPS listener if it exists..."
$listeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
foreach ($listener in $listeners) {
if ($listener.Transport -eq "HTTPS") {
Write-Host "Deleting listener with Address: $($listener.Address) and Transport: $($listener.Transport)"
Remove-WSManInstance -ResourceURI winrm/config/listener -SelectorSet @{Address=$listener.Address; Transport=$listener.Transport}
}
}
Start-Sleep -Seconds 5 # Wait for a few seconds to ensure deletion
}
# Remove existing HTTPS listener
Remove-HTTPSListener
# Confirm deletion
$existingListeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
if ($existingListeners | Where-Object { $_.Transport -eq "HTTPS" }) {
Write-Host "Failed to delete the existing HTTPS listener. Exiting script."
exit 1
}
# Create a new HTTPS listener
Write-Host "Creating a new HTTPS listener..."
$listenerCmd = "winrm create winrm/config/Listener?Address=*+Transport=HTTPS '@{Hostname=`"hyperv-host.local`"; CertificateThumbprint=`"$certThumbprint`"}'"
Invoke-Expression $listenerCmd
# Set TrustedHosts to allow connections from any IP address (adjust as needed for security)
Write-Host "Setting TrustedHosts to allow any IP address..."
winrm set winrm/config/client '@{TrustedHosts="*"}'
# Enable the firewall rule for WinRM over HTTPS
Write-Host "Enabling firewall rule for WinRM over HTTPS..."
$existingFirewallRule = Get-NetFirewallRule -DisplayName "WinRM HTTPS" -ErrorAction SilentlyContinue
if (-not $existingFirewallRule) {
New-NetFirewallRule -Name "WINRM-HTTPS-In-TCP-PUBLIC" -DisplayName "WinRM HTTPS" -Enabled True -Direction Inbound -Protocol TCP -LocalPort 5986 -RemoteAddress Any -Action Allow
}
# Ensure Kerberos authentication is enabled
Write-Host "Enabling Kerberos authentication for WinRM..."
winrm set winrm/config/service/auth '@{Kerberos="true"}'
# Configure the WinRM service to use HTTPS and Kerberos
Write-Host "Configuring WinRM service to use HTTPS and Kerberos..."
winrm set winrm/config/service '@{AllowUnencrypted="false"}'
# Configure the WinRM client to use Kerberos
Write-Host "Configuring WinRM client to use Kerberos..."
winrm set winrm/config/client/auth '@{Kerberos="true"}'
# Ensure the PowerShell execution policy is set to allow running scripts
Write-Host "Setting PowerShell execution policy to RemoteSigned..."
Set-ExecutionPolicy RemoteSigned -Force
Write-Host "Configuration complete. The Hyper-V host is ready for remote management over HTTPS with Kerberos authentication."
```
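Once the script has run, you can verify WinRM reachability from the Ansible control node before involving AWX. This is a hedged sketch: the hostname matches the placeholder used in the script above, the inventory path and `windowsServers` group come from the homelab inventory documented elsewhere in this section, and the credentials are placeholders.
``` sh
# Confirm the HTTPS listener answers (an HTTP 405 response code means the listener is up)
curl -k -s -o /dev/null -w "%{http_code}\n" https://hyperv-host.local:5986/wsman
# Ad-hoc connectivity test against the Windows hosts (requires pywinrm on the control node)
ansible windowsServers -i inventories/homelab.ini -m ansible.windows.win_ping \
  -e ansible_user='<domain-user>' -e ansible_password='<password>'
```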

View File

@ -0,0 +1,62 @@
## Upgrading from 2.10.0 to 2.19.1+
There is a known issue with upgrading / installing AWX Operator beyond version 2.10.0, because the bundled PostgreSQL database upgrades from 13.0 to 15.0 and its data-directory permissions change. The following workflow will help get past that and adjust the permissions so the upgrade can proceed successfully. If this is a clean installation and the fresh install of 2.19.1 is not working yet, you can also perform this step. (It won't work out of the box because of this bug; the developers of AWX seem to just not care about this issue and have not implemented an official fix at this time.)
### Create a Temporary Pod to Adjust Permissions
We need to create a pod that will mount the PostgreSQL PVC, make changes to permissions, then destroy the v15.0 pod to have the AWX Operator automatically regenerate it.
```yaml title="/awx/temp-pod.yml"
apiVersion: v1
kind: Pod
metadata:
  name: temp-pod
  namespace: awx
spec:
  containers:
    - name: temp-container
      image: busybox
      command: ['sh', '-c', 'sleep 3600']
      volumeMounts:
        - mountPath: /var/lib/pgsql/data
          name: postgres-data
  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: postgres-15-awx-postgres-15-0
  restartPolicy: Never
```
``` sh
# Deploy Temporary Pod
kubectl apply -f /awx/temp-pod.yml
# Open a Shell in the Temporary Pod
kubectl exec -it temp-pod -n awx -- sh
# Adjust Permissions of the PostgreSQL 15.0 Database Folder
chown -R 26:root /var/lib/pgsql/data
exit
# Delete the Temporary Pod
kubectl delete pod temp-pod -n awx
# Delete the Crashlooped PostgreSQL 15.0 Pod to Regenerate It
kubectl delete pod awx-postgres-15-0 -n awx
# Track the Migration
kubectl get pods -n awx
kubectl logs -n awx awx-postgres-15-0
```
!!! warning "Be Patient"
This upgrade may take a few minutes depending on the speed of the node it is running on. Be patient and wait until the output looks similar to this:
```
root@awx:/awx# kubectl get pods -n awx
NAME READY STATUS RESTARTS AGE
awx-migration-24.6.1-bh5vb 0/1 Completed 0 9m55s
awx-operator-controller-manager-745b55d94b-2dhvx 2/2 Running 0 25m
awx-postgres-15-0 1/1 Running 0 12m
awx-task-7946b46dd6-7z9jm 4/4 Running 0 10m
awx-web-9497647b4-s4gmj 3/3 Running 0 10m
```
If you see a migration pod, as seen in the above example, you can feel free to delete it with the following command: `kubectl delete pod awx-migration-24.6.1-bh5vb -n awx`.
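Once the migration has completed, a quick way to confirm the upgraded deployment is healthy is to check the pods and the AWX custom resource itself; depending on your operator version, the resource status may also report the deployed AWX version.
``` sh
kubectl get pods -n awx
kubectl describe awx awx -n awx | grep -i version
```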

View File

@ -0,0 +1,97 @@
# Host Inventories
When you are deploying playbooks, you target hosts that exist in "Inventories". These inventories consist of a list of hosts and their corresponding IP addresses, as well as any host-specific variables that may be necessary to declare to run the playbook. You can see an example of my Bunny Lab inventory file at the time of writing this document, below:
!!! note "Inventory Data Relationships"
An inventory file consists of hosts, groups, and variables. A host belongs to a group, and a group can have variables configured for it. If you run a playbook / job template against a host, the variables associated with the group that host belongs to (if any) are applied at runtime.
```ini title="https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini"
# Networking
bunny-pfsense-01 ansible_host=192.168.3.1
# Servers
pfsense ansible_host=192.168.3.1
lab-jelly-01 ansible_host=192.168.3.2
moon-storage-01 ansible_host=192.168.3.3
virt-node-01 ansible_host=virt-node-01.bunny-lab.io
virt-node-02 ansible_host=virt-node-02.bunny-lab.io
lab-photos-01 ansible_host=lab-photos-01.bunny-lab.io
lab-veeam-01 ansible_host=192.168.3.8
lab-veeam-02 ansible_host=192.168.3.9
awx ansible_host=192.168.3.10
lab-games-02 ansible_host=lab-games-01.bunny-lab.io
bunny-docker-01 ansible_host=192.168.3.12
mail ansible_host=mail.bunny-lab.io
lab-games-03 ansible_host=lab-games-03.bunny-lab.io
lab-veeam-03 ansible_host=192.168.3.15
alpine-work-01 ansible_host=192.168.3.17
lab-auth-01 ansible_host=192.168.3.18
lab-auth-02 ansible_host=192.168.3.20
container-node-01 ansible_host=192.168.3.19
lab-dc-01 ansible_host=192.168.3.25
lab-dc-02 ansible_host=192.168.3.26
lab-iris-01 ansible_host=192.168.3.27
lab-games-01 ansible_host=192.168.3.28
cloud ansible_host=192.168.3.29
lab-dt-01 ansible_host=192.168.3.30
lab-sophos-01 ansible_host=192.168.3.254
# Workstations
bunny-dsktp-01 ansible_host=10.0.0.20
bunny-lptp-01 ansible_host=10.0.0.17
bunny-lptp-02 ansible_host=10.0.0.4
lab-dt-01 ansible_host=192.168.3.30
# Group Definitions
[domainControllers]
lab-dc-01
lab-dc-02
[domainControllers:vars]
ansible_connection=winrm
ansible_winrm_kerberos_delegation=false
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
[containerOrchestration]
container-node-01
[windowsServers]
lab-dc-01
lab-dc-02
virt-node-01
virt-node-02
lab-veeam-01
lab-games-01
[windowsServers:vars]
ansible_connection=winrm
ansible_winrm_kerberos_delegation=false
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
[linuxServers]
lab-jelly-01
lab-photos-01
mail
alpine-work-01
lab-auth-01
lab-auth-02
container-node-01
lab-dt-01
cloud
[workstations]
bunny-dsktp-01
bunny-lptp-01
bunny-lptp-02
[workstations:vars]
ansible_connection=winrm
ansible_winrm_kerberos_delegation=false
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```
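If you want to sanity-check this file outside of AWX, `ansible-inventory` can parse it locally and display the group/host relationships:
``` sh
ansible-inventory -i inventories/homelab.ini --graph
```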

View File

@ -0,0 +1,201 @@
## Kerberos Implementation
You may find that you need to run playbooks on domain-joined Windows devices using Kerberos. You will need to go through some extra steps to set this up after you have successfully deployed AWX Operator into Kubernetes.
### Configure Windows Devices
You will need to prepare the Windows devices to allow them to be remotely controlled by Ansible playbooks. Run the following powershell script on all of the devices that will be managed by the Ansible AWX environment.
- [WinRM Prerequisite Setup Script](https://docs.bunny-lab.io/Docker%20%26%20Kubernetes/Servers/AWX/AWX%20Operator/Enable%20Kerberos%20WinRM/)
### Create an AWX Instance Group
At this point, we need to make an "Instance Group" for the AWX Execution Environments that will use both a Kerberos keytab file and the custom DNS servers defined by the ConfigMap files created below. Reference information was found [here](https://github.com/kurokobo/awx-on-k3s/blob/main/tips/use-kerberos.md#create-container-group). This group allows for persistence across playbooks/templates, so if you establish Kerberos authentication in one playbook, it persists through the entire job's workflow.
Create the following files in the `/awx` folder on the AWX Operator server you deployed earlier (when setting up the Kubernetes cluster and deploying AWX Operator into it), so we can later mount them into the new Execution Environment we will be building.
=== "Custom DNS Records"
```yaml title="/awx/custom_dns_records.yml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-dns
  namespace: awx
data:
  custom-hosts: |
    192.168.3.25 LAB-DC-01.bunny-lab.io LAB-DC-01
    192.168.3.26 LAB-DC-02.bunny-lab.io LAB-DC-02
    192.168.3.4 VIRT-NODE-01.bunny-lab.io VIRT-NODE-01
    192.168.3.5 BUNNY-NODE-02.bunny-lab.io BUNNY-NODE-02
```
=== "Kerberos Keytab File"
```ini title="/awx/krb5.conf"
[libdefaults]
default_realm = BUNNY-LAB.IO
dns_lookup_realm = false
dns_lookup_kdc = false
[realms]
BUNNY-LAB.IO = {
kdc = 192.168.3.25
kdc = 192.168.3.26
admin_server = 192.168.3.25
}
[domain_realm]
192.168.3.25 = BUNNY-LAB.IO
192.168.3.26 = BUNNY-LAB.IO
.bunny-lab.io = BUNNY-LAB.IO
bunny-lab.io = BUNNY-LAB.IO
```
Then we apply these configmaps to the AWX namespace with the following commands:
``` sh
cd /awx
kubectl -n awx create configmap awx-kerberos-config --from-file=/awx/krb5.conf
kubectl apply -f custom_dns_records.yml
```
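You can confirm both ConfigMaps landed in the `awx` namespace before building the container group:
``` sh
kubectl -n awx get configmap awx-kerberos-config custom-dns
```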
- Open AWX UI and click on "**Instance Groups**" under the "**Administration**" section, then press "**Add > Add container group**".
- Enter a descriptive name as you like (e.g. `Kerberos`) and click the toggle "**Customize Pod Specification**".
- Put the following YAML string in "**Custom pod spec**" then press the "**Save**" button
```yaml title="Custom Pod Spec"
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  initContainers:
    - name: init-hosts
      image: busybox
      command:
        - sh
        - '-c'
        - cat /etc/custom-dns/custom-hosts >> /etc/hosts
      volumeMounts:
        - name: custom-dns
          mountPath: /etc/custom-dns
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: awx-kerberos-volume
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
  volumes:
    - name: awx-kerberos-volume
      configMap:
        name: awx-kerberos-config
    - name: custom-dns
      configMap:
        name: custom-dns
```
### Job Template & Inventory Examples
At this point, you need to adjust your existing Job Template(s) that communicate via Kerberos with domain-joined Windows devices to use the "**Kerberos**" Instance Group, while keeping the same Execution Environment you have been using up until this point. This changes the Execution Environment to include the Kerberos keytab file in the EE at playbook runtime. When the playbook has completed running (or, if you are chain-loading multiple playbooks in a workflow job template, when the workflow completes), the keytab ceases to exist; the Kerberos keytab data is regenerated at the next runtime.
Also add the following variables to the job template you have associated with the playbook below:
``` yaml
---
kerberos_user: nicole.rappe@BUNNY-LAB.IO
kerberos_password: <DomainPassword>
```
You will want to ensure your inventory file is configured to use Kerberos Authentication as well, so the following example is a starting point:
```ini
virt-node-01 ansible_host=virt-node-01.bunny-lab.io
bunny-node-02 ansible_host=bunny-node-02.bunny-lab.io
[virtualizationHosts]
virt-node-01
bunny-node-02
[virtualizationHosts:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_scheme=https
ansible_winrm_server_cert_validation=ignore
#kerberos_user=nicole.rappe@BUNNY-LAB.IO #Optional, if you define this in the Job Template, it is not necessary here.
#kerberos_password=<DomainPassword> #Optional, if you define this in the Job Template, it is not necessary here.
```
!!! failure "Usage of Fully-Quality Domain Names"
It is **critical** that you define Kerberos-authenticated devices with fully qualified domain names. This is just something I found out from 4+ hours of troubleshooting. If the device is Linux or you are using NTLM authentication instead of Kerberos authentication, you can skip this warning. If you do not define the inventory using FQDNs, it will fail to run the commands against the targeted device(s).
In this example, the host is defined via FQDN: `virt-node-01 ansible_host=virt-node-01.bunny-lab.io`
### Kerberos Connection Playbook
At this point, you need a playbook that you can run in a Workflow Job Template (to keep things modular and simplified) to establish a connection to an Active Directory Domain Controller via Kerberos before running additional playbooks/templates against the actual devices.
You can visualize the connection workflow below:
``` mermaid
graph LR
A[Update AWX Project] --> B[Update Project Inventory]
B --> C[Establish Kerberos Connection]
C --> D[Run Playbook against Windows Device]
```
The following playbook is an example pulled from https://git.bunny-lab.io
!!! note "Playbook Redundancies"
I have several areas where I could optimize this playbook and remove redundancies. I just have not had enough time to iterate through it deeply-enough to narrow down exact things I can remove, so for now, it will remain as-is, since it functions as-expected with the example below.
```yaml title="Establish_Kerberos_Connection.yml"
---
- name: Generate Kerberos Ticket to Communicate with Domain-Joined Windows Devices
  hosts: localhost
  vars:
    kerberos_password: "{{ lookup('env', 'KERBEROS_PASSWORD') }}"  # Alternatively, you can set this as an environment variable
    # BE SURE TO PASS "kerberos_user: nicole.rappe@BUNNY-LAB.IO" and "kerberos_password: <domain_admin_password>" to the template variables when running this playbook in a template.
  tasks:
    - name: Generate the keytab file
      ansible.builtin.shell: |
        ktutil <<EOF
        addent -password -p {{ kerberos_user }} -k 1 -e aes256-cts
        {{ kerberos_password }}
        wkt /tmp/krb5.keytab
        quit
        EOF
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: generate_keytab_result
    - name: Ensure keytab file was generated successfully
      fail:
        msg: "Failed to generate keytab file"
      when: generate_keytab_result.rc != 0
    - name: Keytab successfully generated
      ansible.builtin.debug:
        msg: "Keytab successfully generated at /tmp/krb5.keytab"
      when: generate_keytab_result.rc == 0
    - name: Acquire Kerberos ticket using keytab
      ansible.builtin.shell: |
        kinit -kt /tmp/krb5.keytab {{ kerberos_user }}
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: kinit_result
    - name: Ensure Kerberos ticket was acquired successfully
      fail:
        msg: "Failed to acquire Kerberos ticket"
      when: kinit_result.rc != 0
    - name: Kerberos ticket successfully acquired
      ansible.builtin.debug:
        msg: "Kerberos ticket successfully acquired for user {{ kerberos_user }}"
      when: kinit_result.rc == 0
```

View File

@ -0,0 +1,16 @@
# AWX Projects
When you want to run playbooks on host devices in your inventory files, you need to host the playbooks in a "Project". Projects can be as simple as a connection to Gitea/Github to store playbooks in a repository.
```jsx title="Ansible Playbooks (Gitea)"
Name: Bunny Lab
Source Control Type: Git
Source Control URL: https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git
Source Control Credential: Bunny Lab (Gitea)
```
```jsx title="Resources > Credentials > Bunny Lab (Gitea)"
Name: Bunny Lab (Gitea)
Credential Type: Source Control
Username: nicole.rappe
Password: <Encrypted> #If you use MFA on Gitea/Github, use an App Password instead for the project.
```

View File

@ -0,0 +1,21 @@
# Templates
Templates are basically pre-constructed groups of devices, playbooks, and credentials that perform a specific kind of task against a predefined group of hosts or device inventory.
```jsx title="Deploy Hyper-V VM"
Name: Deploy Hyper-V VM
Inventory: (NTLM) MOON-HOST-01
Playbook: playbooks/Windows/Hyper-V/Deploy-VM.yml
Credentials: (NTLM) nicole.rappe@MOONGATE.local
Execution Environment: AWX EE (latest)
Project: Ansible Playbooks (Gitea)
Variables:
---
random_number: "{{ lookup('password', '/dev/null chars=digits length=4') }}"
random_letters: "{{ lookup('password', '/dev/null chars=ascii_uppercase length=4') }}"
vm_name: "NEXUS-TEST-{{ random_number }}{{ random_letters }}"
vm_memory: "8589934592" #Measured in Bytes (e.g. 8GB)
vm_storage: "68719476736" #Measured in Bytes (e.g. 64GB)
iso_path: "C:\\ubuntu-22.04-live-server-amd64.iso"
vm_folder: "C:\\Virtual Machines\\{{ vm_name_fact }}"
```


View File

@ -0,0 +1,212 @@
**Purpose**: Puppet Bolt can be leveraged in an Ansible-esque manner to connect to and enroll devices such as Windows Servers, Linux Servers, and various workstations. To this end, it could be used to run ad-hoc tasks or enroll devices into a centralized Puppet server. (e.g. `LAB-PUPPET-01.bunny-lab.io`)
!!! note "Assumptions"
This deployment assumes you are deploying Puppet Bolt onto the same server as Puppet. If you have not already, follow the [Puppet Deployment](https://docs.bunny-lab.io/Servers%20%26%20Workflows/Linux/Automation/Puppet/Puppet/) documentation to do so before continuing with the Puppet Bolt deployment.
## Initial Preparation
``` sh
# Install Bolt Repository
sudo rpm -Uvh https://yum.puppet.com/puppet-tools-release-el-9.noarch.rpm
sudo yum install -y puppet-bolt
# Verify Installation
bolt --version
# Clone Puppet Bolt Repository into Bolt Directory
#sudo git clone https://git.bunny-lab.io/GitOps/Puppet-Bolt.git /etc/puppetlabs/bolt <-- Disabled for now
sudo mkdir -p /etc/puppetlabs/bolt
sudo chown -R $(whoami):$(whoami) /etc/puppetlabs/bolt
sudo chmod -R 644 /etc/puppetlabs/bolt
#sudo chmod -R u+rwx,g+rx,o+rx /etc/puppetlabs/bolt/modules/bolt <-- Disabled for now
# Initialize A New Bolt Project
cd /etc/puppetlabs/bolt
bolt project init bunny_lab
```
## Configuring Inventory
At this point, you will want to create an inventory file that you can use for tracking devices. For now, this will have hard-coded credentials until a cleaner method is figured out.
``` yaml title="/etc/puppetlabs/bolt/inventory.yaml"
# Inventory file for Puppet Bolt
groups:
  - name: linux_servers
    targets:
      - lab-auth-01.bunny-lab.io
      - lab-auth-02.bunny-lab.io
    config:
      transport: ssh
      ssh:
        host-key-check: false
        private-key: "/etc/puppetlabs/bolt/id_rsa_OpenSSH" # (1)
        user: nicole
        native-ssh: true
  - name: windows_servers
    config:
      transport: winrm
      winrm:
        realm: BUNNY-LAB.IO
        ssl: true
        user: "BUNNY-LAB\\nicole.rappe"
        password: DomainPassword # (2)
    groups:
      - name: domain_controllers
        targets:
          - lab-dc-01.bunny-lab.io
          - lab-dc-02.bunny-lab.io
      - name: dedicated_game_servers
        targets:
          - lab-games-01.bunny-lab.io
          - lab-games-02.bunny-lab.io
          - lab-games-03.bunny-lab.io
          - lab-games-04.bunny-lab.io
          - lab-games-05.bunny-lab.io
      - name: hyperv_hosts
        targets:
          - virt-node-01.bunny-lab.io
          - bunny-node-02.bunny-lab.io
```
1. Point the inventory file to the private key (if you use key-based authentication instead of password-based SSH authentication.)
2. Replace this with your actual domain admin / domain password.
### Validate Bolt Inventory Works
If the inventory file is created correctly, you will see the hosts listed when you run the command below:
``` sh
cd /etc/puppetlabs/bolt
bolt inventory show
```
??? example "Example Output of `bolt inventory show`"
You should expect to see output similar to the following:
``` sh
[root@lab-puppet-01 bolt-lab]# bolt inventory show
Targets
lab-auth-01.bunny-lab.io
lab-auth-02.bunny-lab.io
lab-dc-01.bunny-lab.io
lab-dc-02.bunny-lab.io
lab-games-01.bunny-lab.io
lab-games-02.bunny-lab.io
lab-games-03.bunny-lab.io
lab-games-04.bunny-lab.io
lab-games-05.bunny-lab.io
virt-node-01.bunny-lab.io
bunny-node-02.bunny-lab.io
Inventory source
/tmp/bolt-lab/inventory.yaml
Target count
11 total, 11 from inventory, 0 adhoc
Additional information
Use the '--targets', '--query', or '--rerun' option to view specific targets
Use the '--detail' option to view target configuration and data
```
## Configuring Kerberos
If you work with Windows-based devices in a domain environment, you will need to set up Puppet so it can perform Kerberos authentication while interacting with Windows devices. This involves a little bit of setup, but nothing too crazy.
### Install Krb5
We need to install the necessary software on the puppet server to allow Kerberos authentication to occur.
=== "Rocky, CentOS, RHEL, Fedora"
``` sh
sudo yum install krb5-workstation
```
=== "Debian, Ubuntu"
``` sh
sudo apt-get install krb5-user
```
=== "SUSE"
``` sh
sudo zypper install krb5-client
```
### Prepare `/etc/krb5.conf` Configuration
We need to configure Kerberos to know how to reach the domain. This is achieved by editing `/etc/krb5.conf` to look similar to the following, substituting your own domain for the example values.
``` ini
[libdefaults]
default_realm = BUNNY-LAB.IO
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 7d
forwardable = true
[realms]
BUNNY-LAB.IO = {
kdc = LAB-DC-01.bunny-lab.io # (1)
kdc = LAB-DC-02.bunny-lab.io # (2)
admin_server = LAB-DC-01.bunny-lab.io # (3)
}
[domain_realm]
.bunny-lab.io = BUNNY-LAB.IO
bunny-lab.io = BUNNY-LAB.IO
```
1. Your primary domain controller
2. Your secondary domain controller (if applicable)
3. This is your Primary Domain Controller (PDC)
### Initialize Kerberos Connection
Now we need to log into the domain using (preferably) domain administrator credentials, such as the example below. You will be prompted to enter your domain password.
``` sh
kinit nicole.rappe@BUNNY-LAB.IO
klist
```
??? example "Example Output of `klist`"
You should expect to see output similar to the following. Finding a way to ensure the Kerberos tickets live longer is still under research, as 7 days is not exactly practical for long-term deployments.
``` sh
[root@lab-puppet-01 bolt-lab]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: nicole.rappe@BUNNY-LAB.IO
Valid starting Expires Service principal
11/14/2024 21:57:03 11/15/2024 07:57:03 krbtgt/BUNNY-LAB.IO@BUNNY-LAB.IO
renew until 11/21/2024 21:57:03
```
### Prepare Windows Devices
Windows devices need to be prepared ahead-of-time in order for WinRM functionality to work as-expected. I have prepared a powershell script that you can run on each device that needs remote management functionality. You can port this script based on your needs, and deploy it via whatever methods you have available to you. (e.g. Ansible, Group Policies, existing RMM software, manually via remote desktop, etc).
You can find the [WinRM Enablement Script](https://docs.bunny-lab.io/Docker%20%26%20Kubernetes/Servers/AWX/AWX%20Operator/Enable%20Kerberos%20WinRM/?h=winrm) in the Bunny Lab documentation.
## Ad-Hoc Command Examples
At this point, you should finally be ready to connect to Windows and Linux devices and run commands on them ad-hoc. Puppet Bolt Modules and Plans will be discussed further down the road.
??? example "Example Output of `bolt command run whoami -t domain_controllers --no-ssl-verify`"
You should expect to see output similar to the following. This is what you will see when leveraging WinRM via Kerberos on Windows devices.
``` sh
[root@lab-puppet-01 bolt-lab]# bolt command run whoami -t domain_controllers --no-ssl-verify
CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
Started on lab-dc-01.bunny-lab.io...
Started on lab-dc-02.bunny-lab.io...
Finished on lab-dc-02.bunny-lab.io:
bunny-lab\nicole.rappe
Finished on lab-dc-01.bunny-lab.io:
bunny-lab\nicole.rappe
Successful on 2 targets: lab-dc-01.bunny-lab.io,lab-dc-02.bunny-lab.io
Ran on 2 targets in 1.91 sec
```
??? example "Example Output of `bolt command run whoami -t linux_servers`"
You should expect to see output similar to the following. This is what you will see when leveraging native SSH on Linux devices.
``` sh
[root@lab-puppet-01 bolt-lab]# bolt command run whoami -t linux_servers
CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
Started on lab-auth-01.bunny-lab.io...
Started on lab-auth-02.bunny-lab.io...
Finished on lab-auth-02.bunny-lab.io:
nicole
Finished on lab-auth-01.bunny-lab.io:
nicole
Successful on 2 targets: lab-auth-01.bunny-lab.io,lab-auth-02.bunny-lab.io
Ran on 2 targets in 0.68 sec
```
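Beyond single commands, Bolt can also push a local script to targets and execute it, which is a convenient middle ground before writing full plans. This is a hedged example: `./scripts/check_disk.sh` is a hypothetical local script path, and the target groups come from the inventory file above.
``` sh
# Run a local shell script on every Linux server defined in the inventory
bolt script run ./scripts/check_disk.sh -t linux_servers
# Run an ad-hoc PowerShell command against the Hyper-V hosts over WinRM
bolt command run "Get-Service WinRM" -t hyperv_hosts --no-ssl-verify
```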

View File

@ -0,0 +1,422 @@
**Purpose**:
Puppet is another declarative configuration management tool that excels in system configuration and enforcement. Like Ansible, it's designed to maintain the desired state of a system's configuration but uses a client-server (master-agent) architecture by default.
!!! note "Assumptions"
This document assumes you are deploying Puppet Server onto Rocky Linux 9.4. Any version of RHEL/CentOS/Alma/Rocky should behave similarly.
## Architectural Overview
### Detailed
``` mermaid
sequenceDiagram
participant Gitea as Gitea Repo (Puppet Environment)
participant r10k as r10k (Environment Deployer)
participant PuppetMaster as Puppet Server (lab-puppet-01.bunny-lab.io)
participant Agent as Managed Agent (fedora.bunny-lab.io)
participant Neofetch as Neofetch Package
%% PuppetMaster pulling environment updates
PuppetMaster->>Gitea: Pull Puppet Environment updates
Gitea-->>PuppetMaster: Send latest Puppet repository code
%% r10k deployment process
PuppetMaster->>r10k: Deploy environment with r10k
r10k->>PuppetMaster: Fetch and install Puppet modules
r10k-->>PuppetMaster: Compile environments and apply updates
%% Agent enrollment process
Agent->>PuppetMaster: Request to enroll (Agent Check-in)
PuppetMaster->>Agent: Verify SSL Certificate & Authenticate
Agent-->>PuppetMaster: Send facts about system (Facter)
%% PuppetMaster compiles catalog for the agent
PuppetMaster->>PuppetMaster: Compile Catalog
PuppetMaster->>PuppetMaster: Check if 'neofetch' is required in manifest
PuppetMaster-->>Agent: Send compiled catalog with 'neofetch' installation instructions
%% Agent installs neofetch
Agent->>Agent: Check if 'neofetch' is installed
Agent--xNeofetch: 'neofetch' not installed
Agent->>Neofetch: Install 'neofetch'
Neofetch-->>Agent: Installation complete
%% Agent reports back to PuppetMaster
Agent->>PuppetMaster: Report status (catalog applied and neofetch installed)
```
### Simplified
``` mermaid
sequenceDiagram
participant Gitea as Gitea (Puppet Repository)
participant PuppetMaster as Puppet Server
participant Agent as Managed Agent (fedora.bunny-lab.io)
participant Neofetch as Neofetch Package
%% PuppetMaster pulling environment updates
PuppetMaster->>Gitea: Pull environment updates
Gitea-->>PuppetMaster: Send updated code
%% Agent enrollment and catalog request
Agent->>PuppetMaster: Request catalog (Check-in)
PuppetMaster->>Agent: Send compiled catalog (neofetch required)
%% Agent installs neofetch
Agent->>Neofetch: Install neofetch
Neofetch-->>Agent: Installation complete
%% Agent reports back
Agent->>PuppetMaster: Report catalog applied (neofetch installed)
```
### Breakdown
#### 1. **PuppetMaster Pulls Updates from Gitea**
- PuppetMaster uses `r10k` to fetch the latest environment updates from Gitea. These updates include manifests, hiera data, and modules for the specified Puppet environments.
#### 2. **PuppetMaster Compiles Catalogs and Modules**
- After pulling updates, the PuppetMaster compiles the latest node-specific catalogs based on the manifests and modules. It ensures the configuration is ready for agents to retrieve.
#### 3. **Agent (fedora.bunny-lab.io) Checks In**
- The Puppet agent on `fedora.bunny-lab.io` checks in with the PuppetMaster for its catalog. This request tells the PuppetMaster to compile the node's desired configuration.
#### 4. **Agent Downloads and Applies the Catalog**
- The agent retrieves its compiled catalog from the PuppetMaster. It compares the current system state with the desired state outlined in the catalog.
#### 5. **Agent Installs `neofetch`**
- The agent identifies that `neofetch` is missing and installs it using the system's package manager. The installation follows the directives in the catalog.
#### 6. **Agent Reports Success**
- Once changes are applied, the agent sends a report back to the PuppetMaster. The report includes details of the changes made, confirming `neofetch` was installed.
## Deployment Steps:
You will need to perform a few steps outlined in the [official Puppet documentation](https://www.puppet.com/docs/puppet/7/install_puppet.html) to get a Puppet server operational. A summarized workflow is seen below:
### Install Puppet Repository
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Add Puppet Repository / Enable Puppet on YUM
sudo rpm -Uvh https://yum.puppet.com/puppet7-release-el-9.noarch.rpm
```
### Install Puppet Server
**Installation Scope**: Puppet Server
``` sh
# Install the Puppet Server
sudo yum install -y puppetserver
systemctl enable --now puppetserver
# Validate Successful Deployment
exec bash
puppetserver -v
```
### Install Puppet Agent
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Install Puppet Agent (This will already be installed on the Puppet Server)
sudo yum install -y puppet-agent
# Enable the Puppet Agent
sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
# Configure Puppet Server to Connect To
puppet config set server lab-puppet-01.bunny-lab.io --section main
# Establish Secure Connection to Puppet Server
puppet ssl bootstrap
# ((On the Puppet Server))
# You will see an error stating: "Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (fedora.bunny-lab.io)."
# Run the following command (as root) on the Puppet Server to generate a certificate
sudo su
puppetserver ca sign --certname fedora.bunny-lab.io
```
#### Validate Agent Functionality
At this point, you want to ensure that the device being managed by the agent is able to pull down configurations from the Puppet Server. You will know if it worked by getting a message similar to `Notice: Applied catalog in X.XX seconds` after running the following command:
``` sh
puppet agent --test
```
## Install r10k
At this point, we need to configure Gitea as the storage repository for the Puppet "Environments" (e.g. `Production` and `Development`). We can do this by leveraging a tool called "r10k" which pulls a Git repository and configures it as the environment in Puppet.
``` sh
# Install r10k Pre-Requisites
sudo dnf install -y ruby ruby-devel gcc make
# Install r10k Gem (The Software)
# Note: If you encounter any issues with permissions, you can install the gem with "sudo gem install r10k --no-document".
sudo gem install r10k
# Verify the Installation (Run this as a non-root user)
r10k version
```
### Configure r10k
``` sh
# Create the r10k Configuration Directory
sudo mkdir -p /etc/puppetlabs/r10k
# Create the r10k Configuration File
sudo nano /etc/puppetlabs/r10k/r10k.yaml
```
```yaml title="/etc/puppetlabs/r10k/r10k.yaml"
---
# Cache directory for r10k
cachedir: '/var/cache/r10k'
# Sources define which repositories contain environments (use the same HTTPS URL you will store credentials for below)
sources:
  puppet:
    remote: 'https://git.bunny-lab.io/GitOps/Puppet.git'
    basedir: '/etc/puppetlabs/code/environments'
```
``` sh
# Lockdown the Permissions of the Configuration File
sudo chmod 600 /etc/puppetlabs/r10k/r10k.yaml
# Create r10k Cache Directory
sudo mkdir -p /var/cache/r10k
sudo chown -R puppet:puppet /var/cache/r10k
```
## Configure Gitea
At this point, we need to set up the branches and file/folder structure of the Puppet repository on Gitea.
You will make a repository on Gitea with the following files and structure as noted by each file's title. You will make a mirror copy of all of the files below in both the `Production` and `Development` branches of the repository. For the sake of this example, the repository will be located at `https://git.bunny-lab.io/GitOps/Puppet.git`
!!! example "Example Agent & Neofetch"
You will notice there is a section for `fedora.bunny-lab.io` as well as mentions of `neofetch`. These are purely examples in my homelab of a computer I was testing against during the development of the Puppet Server and associated documentation. You can feel free to not include the entire `modules/neofetch/manifests/init.pp` file in the Gitea repository, as well as remove this entire section from the `manifests/site.pp` file:
``` yaml
# Node definition for the Fedora agent
node 'fedora.bunny-lab.io' {
# Include the neofetch class to ensure Neofetch is installed
include neofetch
}
```
=== "Puppetfile"
This file is used by the Puppet Server (PuppetMaster) to prepare the environment by installing modules / Forge packages into the environment prior to devices getting their configurations. It's important and the modules included in this example are the bare-minimum to get things working with PuppetDB functionality.
```json title="Puppetfile"
forge 'https://forge.puppet.com'
mod 'puppetlabs-stdlib', '9.6.0'
mod 'puppetlabs-puppetdb', '8.1.0'
mod 'puppetlabs-postgresql', '10.3.0'
mod 'puppetlabs-firewall', '8.1.0'
mod 'puppetlabs-inifile', '6.1.1'
mod 'puppetlabs-concat', '9.0.2'
mod 'puppet-systemd', '7.1.0'
```
=== "environment.conf"
This file is mostly redundant, as it states the values below, which are the default values Puppet works with. I only included it in case I had a unique use-case that required a more custom approach to the folder structure. (This is very unlikely).
```yaml title="environment.conf"
# Specifies the module path for this environment
modulepath = modules:$basemodulepath
# Optional: Specifies the manifest file for this environment
manifest = manifests/site.pp
# Optional: Set the environment's config_version (e.g., a script to output the current Git commit hash)
# config_version = scripts/config_version.sh
# Optional: Set the environment's environment_timeout
# environment_timeout = 0
```
=== "site.pp"
This file is kind of like an inventory of devices and their states. In this example, you will see that the puppet server itself is named `lab-puppet-01.bunny-lab.io` and the agent device is named `fedora.bunny-lab.io`. By "including" modules like PuppetDB, it installs the PuppetDB role and configures it automatically on the Puppet Server. By stating the firewall rules, it also ensures that those firewall ports are open no matter what, and if they close, Puppet will re-open them automatically. Port 8140 is for Agent communication, and port 8081 is for PuppetDB functionality.
!!! example "Neofetch Example"
In the example configuration below, you will notice this section. This tells Puppet to deploy the neofetch package to any device that has `include neofetch` written. Grouping devices etc is currently undocumented as of writing this.
``` sh
# Node definition for the Fedora agent
node 'fedora.bunny-lab.io' {
# Include the neofetch class to ensure Neofetch is installed
include neofetch
}
```
```yaml title="manifests/site.pp"
# Node definition for the Puppet Server
node 'lab-puppet-01.bunny-lab.io' {
# Include the puppetdb class with custom parameters
class { 'puppetdb':
listen_address => '0.0.0.0', # Allows access from all network interfaces
}
# Configure the Puppet Server to use PuppetDB
include puppetdb
include puppetdb::master::config
# Ensure the required iptables rules are in place using Puppet's firewall resources
firewall { '100 allow Puppet traffic on 8140':
proto => 'tcp',
dport => '8140',
jump => 'accept', # Corrected parameter from action to jump
chain => 'INPUT',
ensure => 'present',
}
firewall { '101 allow PuppetDB traffic on 8081':
proto => 'tcp',
dport => '8081',
jump => 'accept', # Corrected parameter from action to jump
chain => 'INPUT',
ensure => 'present',
}
}
# Node definition for the Fedora agent
node 'fedora.bunny-lab.io' {
# Include the neofetch class to ensure Neofetch is installed
include neofetch
}
# Default node definition (optional)
node default {
# This can be left empty or include common classes for all other nodes
}
```
=== "init.pp"
This is used by the neofetch class noted in the `site.pp` file. It is basically the declaration of how we want neofetch to be on the devices that include the neofetch "class". In this case, we don't care how it does it, but it will install Neofetch, whether that is through yum, dnf, or apt; these few lines of code are OS-agnostic. The formatting / philosophy is similar to the modules in Ansible playbooks and how they declare the "state" of things.
```yaml title="modules/neofetch/manifests/init.pp"
class neofetch {
package { 'neofetch':
ensure => installed,
}
}
```
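Before committing changes to either branch, you can optionally validate the manifests locally (from the root of a clone of the Puppet repository, on any machine with the Puppet agent installed) to catch syntax errors early:
``` sh
/opt/puppetlabs/bin/puppet parser validate manifests/site.pp modules/neofetch/manifests/init.pp
```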
### Storing Credentials to Gitea
We need to be able to pull down the data from Gitea's Puppet repository under the root user so that r10k can automatically pull down any changes made to the Puppet environments (e.g. `Production` and `Development`). Each Git branch represents a different Puppet environment. We will use an application token to do this.
Navigate to "**Gitea > User (Top-Right) > Settings > Applications
- Token Name: `Puppet r10k`
- Permissions: `Repository > Read Only`
- Click the "**Generate Token**" button to finish.
!!! warning "Securely Store the Application Token"
It is critical that you store the token somewhere safe like a password manager as you will need to reference it later and might need it in the future if you re-build the r10k environment.
Now we want to configure Gitea to store the credentials for later use by r10k:
``` sh
# Enable Stored Credentials (We will address security concerns further down...)
sudo yum install -y git
sudo git config --global credential.helper store
# Clone the Git Repository Once to Store the Credentials (Use the Application Token as the password)
# Username: nicole.rappe
# Password: <Application Token Value>
sudo git clone https://git.bunny-lab.io/GitOps/Puppet.git /tmp/PuppetTest
# Verify the Credentials are Stored
sudo cat /root/.git-credentials
# Lockdown Permissions
sudo chmod 600 /root/.git-credentials
# Cleanup After Ourselves
sudo rm -rf /tmp/PuppetTest
```
Finally we validate that everything is working by pulling down the Puppet environments using r10k on the Puppet Server:
``` sh
# Deploy Puppet Environments from Gitea
sudo /usr/local/bin/r10k deploy environment -p
# Validate r10k is Installing Modules in the Environments
sudo ls /etc/puppetlabs/code/environments/production/modules
sudo ls /etc/puppetlabs/code/environments/development/modules
```
!!! success "Successful Puppet Environment Deployment
If you got no errors about Puppetfile formatting or Gitea permissions errors, then you are good to move onto the next step.
## External Node Classifier (ENC)
An ENC allows you to define node-specific data, including the environment, on the Puppet Server. The agent requests its configuration, and the Puppet Server provides the environment and classes to apply.
**Advantages**:
- **Centralized Control**: Environments and classifications are managed from the server.
- **Security**: Agents cannot override their assigned environment.
- **Scalability**: Suitable for managing environments for hundreds or thousands of nodes.
### Create an ENC Script
``` sh
sudo mkdir -p /opt/puppetlabs/server/data/puppetserver/scripts/
```
```ruby title="/opt/puppetlabs/server/data/puppetserver/scripts/enc.rb"
#!/usr/bin/env ruby
# enc.rb
require 'yaml'
node_name = ARGV[0]
# Define environment assignments
node_environments = {
'fedora.bunny-lab.io' => 'development',
# Add more nodes and their environments as needed
}
environment = node_environments[node_name] || 'production'
# Define classes to include per node (optional)
node_classes = {
'fedora.bunny-lab.io' => ['neofetch'],
# Add more nodes and their classes as needed
}
classes = node_classes[node_name] || []
# Output the YAML document
output = {
'environment' => environment,
'classes' => classes
}
puts output.to_yaml
```
``` sh
# Ensure the File is Executable
sudo chmod +x /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```
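You can run the ENC script by hand to confirm it returns the environment and classes you expect for a given node; the node name below matches the example agent used throughout this document.
``` sh
sudo /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb fedora.bunny-lab.io
# Expected output based on the mappings above:
# ---
# environment: development
# classes:
# - neofetch
```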
### Configure Puppet Server to Use the ENC
Edit the Puppet Server's `puppet.conf` and set the `node_terminus` and `external_nodes` parameters:
```ini title="/etc/puppetlabs/puppet/puppet.conf"
[master]
node_terminus = exec
external_nodes = /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```
Restart the Puppet Service
``` sh
sudo systemctl restart puppetserver
```
## Pull Puppet Environments from Gitea
At this point, we can tell r10k to pull down the Puppet environments (e.g. `Production` and `Development`) that we made in the Gitea repository in previous steps. Run the following command on the Puppet Server to pull down the environments. This will download / configure any Puppet Forge modules as well as any hand-made modules such as Neofetch.
``` sh
sudo /usr/local/bin/r10k deploy environment -p
# OPTIONAL: You can pull down a specific environment instead of all environments if you specify the branch name, seen here:
#sudo /usr/local/bin/r10k deploy environment development -p
```
### Apply Configuration to Puppet Server
At this point, we are going to deploy the configuration from Gitea to the Puppet Server itself so that it installs PuppetDB automatically, as well as configures firewall ports and other small things, to function properly. Once this is completed, you can add additional agents / managed devices and they will be able to communicate with the Puppet Server over the network.
``` sh
sudo /opt/puppetlabs/bin/puppet agent -t
```
!!! success "Puppet Server Deployed and Validated"
Congratulations! You have successfully deployed an entire Puppet Server, integrated Gitea and r10k to deploy environment changes in a versioned manner, and validated functionality against a managed device using the agent (such as a spare laptop/desktop). If you got this far, be proud, because it took me over 12 hours to write this documentation, allowing you to deploy a server in less than 30 minutes.