Documentation Restructure
All checks were successful
Automatic Documentation Deployment / Sync Docs to https://kb.bunny-lab.io (push) Successful in 5s
BIN
deployments/automation/ansible/awx/awx.png
Normal file
Binary file not shown.
After: Size 122 KiB
146
deployments/automation/ansible/awx/deployment/awx-in-minikube.md
Normal file
@@ -0,0 +1,146 @@
---
tags:
- Ansible
- AWX
- Automation
---

# Deploy AWX on Minikube Cluster
Minikube-based deployment of Ansible AWX (formerly known as Ansible Tower).

!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later.

## Install Minikube Cluster
### Update the Ubuntu Server
``` sh
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```

### Download and Install Minikube (Ubuntu Server)
Additional Documentation: https://minikube.sigs.k8s.io/docs/start/
``` sh
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb

# Install Docker and Common Tools
sudo apt install docker.io nfs-common iptables nano htop -y

# Add Your User to the Docker Group
sudo usermod -aG docker nicole
```
!!! warning
    Be sure to change the `nicole` username in the `sudo usermod -aG docker nicole` command to whatever your local username is.

### Fully Log Out, Then Sign Back Into the Server
``` sh
exit
```

### Validate That You Can Run Docker Commands as a Non-Root User
``` sh
docker ps
```

### Initialize Minikube Cluster
Additional Documentation: https://github.com/ansible/awx-operator
``` sh
minikube start --driver=docker
minikube kubectl -- get nodes
minikube kubectl -- get pods -A
```

### Make Sure the Minikube Cluster Automatically Starts on Boot
```ini title="/etc/systemd/system/minikube.service"
[Unit]
Description=Minikube service
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=nicole
ExecStart=/usr/bin/minikube start --driver=docker
ExecStop=/usr/bin/minikube stop

[Install]
WantedBy=multi-user.target
```
!!! warning
    Be sure to change the `nicole` username in the `User=nicole` line of the config to whatever your local username is.
!!! info
    If you plan on running AWX behind an existing reverse proxy using a "**NodePort**" connection, there is no need for the `--addons=ingress` flag; otherwise, append it to the `minikube start` command above and to the `ExecStart` line of the service file.

### Restart the Service Daemon and Enable/Start Minikube Automatic Startup
``` sh
sudo systemctl daemon-reload
sudo systemctl enable minikube
sudo systemctl start minikube
```

### Make a Command Alias for `kubectl`
Be sure to add the following to the bottom of your existing profile file noted below.
```sh title="~/.bashrc"
...
alias kubectl="minikube kubectl --"
```
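After re-sourcing the profile (or logging back in), you can confirm the alias is registered before relying on it. A minimal sketch, run in the current shell:

```shell
# Define the alias in the current shell (as ~/.bashrc will on future logins)
# and confirm it resolves to minikube's bundled kubectl
alias kubectl="minikube kubectl --"
alias kubectl   # prints the alias definition
```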

!!! tip
    If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to roll back the server(s) if you accidentally misconfigure something.

## Make the AWX Operator Kustomization File
Find the latest tag version here: https://github.com/ansible/awx-operator/releases
```yaml title="kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```yaml title="awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx

---
apiVersion: v1
kind: Service
metadata:
  name: awx-service
  namespace: awx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080 # Choose an available port in the range of 30000-32767
  selector:
    app.kubernetes.io/name: awx-web
```

### Apply the Configuration Files
Run from the same directory as the `kustomization.yml` and `awx.yml` files.
``` sh
kubectl apply -k .
```
!!! info
    If you get any errors, especially ones relating to "CRD"s, wait 30 seconds and re-run the `kubectl apply -k .` command to fully apply the configuration and bootstrap the AWX deployment.

### View Logs / Track Deployment Progress
``` sh
kubectl logs -n awx deployments/awx-operator-controller-manager -c awx-manager
```

### Get the AWX WebUI Address
``` sh
minikube service -n awx awx-service --url
```

### Get the WebUI Password
``` sh
kubectl get secret -n awx awx-admin-password -o jsonpath="{.data.password}" | base64 --decode ; echo
```
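The password field of that secret is stored base64-encoded, which is why the command pipes through `base64 --decode`. As a quick illustration with a dummy value (not a real AWX password):

```shell
# Decoding works the same way outside the cluster; "c3VwZXJzZWNyZXQ=" is just
# the base64 form of a placeholder string used here for demonstration
echo "c3VwZXJzZWNyZXQ=" | base64 --decode ; echo
# prints: supersecret
```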
198
deployments/automation/ansible/awx/deployment/awx-operator.md
Normal file
@@ -0,0 +1,198 @@

---
tags:
- Ansible
- AWX
- Automation
---

**Purpose**:
Deploying a Rancher RKE2 Cluster-based Ansible AWX Operator server. This can scale to a larger, more enterprise-grade environment if needed.

!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 22.04** or later with at least 16GB of memory, 8 CPU cores, and 64GB of storage.

## Deploy Rancher RKE2 Cluster
You will need to deploy a [Rancher RKE2 Cluster](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md) on an Ubuntu Server-based virtual machine. After this phase, you can focus on the Ansible AWX-specific deployment. A single ControlPlane node is all you need to set up AWX; additional infrastructure can be added after-the-fact.

!!! tip "Checkpoint/Snapshot Reminder"
    If this is a virtual machine, after deploying the RKE2 cluster and validating it functions, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to roll back the server(s) if you accidentally misconfigure something during deployment.

## Server Configuration
The AWX deployment consists of 3 YAML files that configure the containers for AWX as well as the NGINX ingress networking. You will need all of them in the same folder for the deployment to be successful. For the purpose of this example, we will put all of them into a folder located at `/awx`.

``` sh
# Make the deployment folder
mkdir -p /awx
cd /awx
```

We need to increase filesystem access limits.

Temporarily set the limits now:
``` sh
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```

Permanently set the limits for later:
```ini title="/etc/sysctl.conf"
# <End of File>
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

Apply the settings:
``` sh
sudo sysctl -p
```
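You can read the live values back via `/proc` to confirm the settings took effect:

```shell
# The running kernel exposes the current inotify limits under /proc
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```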

### Create AWX Deployment Configuration Files
You will need to create these files all in the same directory using the content of the examples below. Be sure to replace values such as the `host: awx.bunny-lab.io` entry in the `ingress.yml` file with a hostname you can point a DNS server / record to.

=== "awx.yml"

    ```yaml title="/awx/awx.yml"
    apiVersion: awx.ansible.com/v1beta1
    kind: AWX
    metadata:
      name: awx
    spec:
      service_type: ClusterIP
    ```

=== "ingress.yml"

    ```yaml title="/awx/ingress.yml"
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress
    spec:
      rules:
        - host: awx.bunny-lab.io
          http:
            paths:
              - pathType: Prefix
                path: "/"
                backend:
                  service:
                    name: awx-service
                    port:
                      number: 80
    ```

=== "kustomization.yml"

    ```yaml title="/awx/kustomization.yml"
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - github.com/ansible/awx-operator/config/default?ref=2.10.0
      - awx.yml
      - ingress.yml
    images:
      - name: quay.io/ansible/awx-operator
        newTag: 2.10.0
    namespace: awx
    ```

## Ensure the Kubernetes Cluster is Ready
Check that the status of the cluster is ready by running the following commands; the output should appear similar to the [Rancher RKE2 Example](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md#install-helm-rancher-certmanager-jetstack-rancher-and-longhorn):
``` sh
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get pods --all-namespaces
```

## Ensure the Timezone / Date is Accurate
You want to make sure that the Kubernetes environment and the node itself have accurate time for a number of reasons, not least of which: if you are using Ansible with Kubernetes authentication and the date/time is inaccurate, things will not work correctly.
``` sh
sudo timedatectl set-timezone America/Denver
```
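A quick way to confirm the change took effect is to print the timezone abbreviation and UTC offset currently in effect:

```shell
# Prints the active timezone abbreviation and its UTC offset (e.g. MST -0700)
date +"%Z %z"
```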

## Deploy AWX using Kustomize
Now it is time to tell Kubernetes to read the configuration files using Kustomize (*built in to newer versions of Kubernetes*) to deploy AWX into the cluster.
!!! warning "Be Patient"
    The AWX deployment process can take a while. Use the commands in the [Troubleshooting](./awx-operator.md#troubleshooting) section if you want to track the progress after running the commands below.

``` sh
cd /awx
kubectl apply -k .
```

If you get an error that looks like the one below, re-run the `kubectl apply -k .` command a second time after waiting about 10 seconds. The second time, the error should be gone.
``` sh
error: resource mapping not found for name: "awx" namespace: "awx" from ".": no matches for kind "AWX" in version "awx.ansible.com/v1beta1"
ensure CRDs are installed first
```

To check on the progress of the deployment, you can run the following command: `kubectl get pods -n awx`

You will know that AWX is ready to be accessed in the next step if the output looks like below:
```
NAME                                               READY   STATUS    RESTARTS        AGE
awx-operator-controller-manager-7b9ccf9d4d-cnwhc   2/2     Running   2 (3m41s ago)   9m41s
awx-postgres-13-0                                  1/1     Running   0               6m12s
awx-task-7b5f8cf98c-rhrpd                          4/4     Running   0               4m46s
awx-web-6dbd7df9f7-kn8k2                           3/3     Running   0               93s
```

!!! warning "Be Patient - Wait 20 Minutes"
    The process may take a while to spin up AWX, PostgreSQL, Redis, and other workloads necessary for AWX to function. Depending on the speed of the server, it may take between 5 and 20 minutes for AWX to be ready to connect to. You can watch the progress via the CLI commands listed above, or directly on Rancher's WebUI at https://rancher.bunny-lab.io.

## Access the AWX WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible to the host's network (such as your personal computer) unless you set up a DNS record pointing to it. In the example above, you would have an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 Cluster host.

The RKE2 Cluster will translate `awx.bunny-lab.io` to the AWX web-service container(s) automatically, due to having an internal reverse proxy within the Kubernetes Cluster. SSL certificates generated within Kubernetes/Rancher RKE2 are not covered in this documentation, but suffice it to say, the AWX server can be configured behind another reverse proxy such as Traefik, or via Cert-Manager / JetStack. The process of setting this up goes outside the scope of this document.

### Traefik Implementation
If you want to put this behind Traefik, you will need a slightly unique Traefik configuration file, seen below, to transparently pass traffic through to the RKE2 Cluster's reverse proxy.

```yaml title="awx.bunny-lab.io.yml"
tcp:
  routers:
    awx-tcp-router:
      rule: "HostSNI(`awx.bunny-lab.io`)"
      entryPoints: ["websecure"]
      service: awx-nginx-service
      tls:
        passthrough: true
      # middlewares:
      #   - auth-bunny-lab-io # Referencing the Keycloak Server

  services:
    awx-nginx-service:
      loadBalancer:
        servers:
          - address: "192.168.3.10:443"
```

!!! success "Accessing the AWX WebUI"
    If you have gotten this far, you should now be able to access AWX via the WebUI and log in.

    - AWX WebUI: https://awx.bunny-lab.io

    ![Initial AWX WebUI Login Page](awx.png)

    You may see a prompt stating "AWX is currently upgrading. This page will refresh when complete". Be patient and let it finish; when it is done, it will take you to a login page.
    AWX generates its own secure admin password the first time it is set up. The username is `admin`. You can run the following command to retrieve the password:
    ``` sh
    kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
    ```

## Change Admin Password
You will want to change the admin password straight away. Use the following navigation structure to find where to change the password:
``` mermaid
graph LR
    A[AWX Dashboard] --> B[Access]
    B --> C[Users]
    C --> D[admin]
    D --> E[Edit]
```

## Troubleshooting
You may wish to track the deployment process to verify that it is actually doing something. A few Kubernetes commands that can assist with this are listed below.

### AWX-Manager Deployment Logs
You may want to track the internal logs of the `awx-manager` container, which is responsible for the majority of the automated deployment of AWX. You can do so by running the command below.
``` sh
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
!!! note
    The `-6c58d59d97-qj2n2` suffix at the end of the Kubernetes pod name in the command above is randomized. You will need to change it based on the name shown when running the `kubectl get pods -n awx` command.
@@ -0,0 +1,69 @@

---
tags:
- Ansible
- AWX
- Automation
---

## Upgrading from 2.10.0 to 2.19.1+
There is a known issue with upgrading / installing AWX Operator beyond version 2.10.0: the PostgreSQL database upgrades from 13.0 to 15.0, and the data-directory permissions changed with it. The following workflow will get past that and adjust the permissions in such a way that allows the upgrade to proceed successfully. If this is a clean installation and the fresh install of 2.19.1 is not working yet, you can also perform this step. (It won't work out of the box because of this bug; the developers of AWX have not implemented an official fix at this time.)

### Create a Temporary Pod to Adjust Permissions
We need to create a pod that will mount the PostgreSQL PVC, make changes to permissions, then destroy the v15.0 pod to have the AWX Operator automatically regenerate it.

```yaml title="/awx/temp-pod.yml"
apiVersion: v1
kind: Pod
metadata:
  name: temp-pod
  namespace: awx
spec:
  containers:
    - name: temp-container
      image: busybox
      command: ['sh', '-c', 'sleep 3600']
      volumeMounts:
        - mountPath: /var/lib/pgsql/data
          name: postgres-data
  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: postgres-15-awx-postgres-15-0
  restartPolicy: Never
```

``` sh
# Deploy the Temporary Pod
kubectl apply -f /awx/temp-pod.yml

# Open a Shell in the Temporary Pod
kubectl exec -it temp-pod -n awx -- sh

# Adjust Permissions of the PostgreSQL 15.0 Database Folder
chown -R 26:root /var/lib/pgsql/data
exit

# Delete the Temporary Pod
kubectl delete pod temp-pod -n awx

# Delete the Crashlooped PostgreSQL 15.0 Pod to Regenerate It
kubectl delete pod awx-postgres-15-0 -n awx

# Track the Migration
kubectl get pods -n awx
kubectl logs -n awx awx-postgres-15-0
```

!!! warning "Be Patient"
    This upgrade may take a few minutes depending on the speed of the node it is running on. Be patient and wait until the output looks similar to this:
    ```
    root@awx:/awx# kubectl get pods -n awx
    NAME                                               READY   STATUS      RESTARTS   AGE
    awx-migration-24.6.1-bh5vb                         0/1     Completed   0          9m55s
    awx-operator-controller-manager-745b55d94b-2dhvx   2/2     Running     0          25m
    awx-postgres-15-0                                  1/1     Running     0          12m
    awx-task-7946b46dd6-7z9jm                          4/4     Running     0          10m
    awx-web-9497647b4-s4gmj                            3/3     Running     0          10m
    ```

If you see a migration pod, as seen in the above example, you can feel free to delete it with the following command: `kubectl delete pod awx-migration-24.6.1-bh5vb -n awx`.
38
deployments/automation/index.md
Normal file
@@ -0,0 +1,38 @@

---
tags:
- Operations
- Automation
- Index
- Documentation
---

# Automation
## Purpose
Infrastructure automation, orchestration, and workflow tooling.

## Includes
- Ansible and Puppet patterns
- Inventory and credential conventions
- CI/CD and automation notes

## New Document Template
````markdown
# <Document Title>
## Purpose
<what this automation doc exists to describe>

!!! info "Assumptions"
    - <platform or tooling assumptions>
    - <privilege assumptions>

## Inputs
- <variables, inventories, secrets>

## Procedure
```sh
# Commands or job steps
```

## Validation
- <command + expected result>
````
219
deployments/automation/puppet/deployment/puppet-bolt.md
Normal file
@@ -0,0 +1,219 @@

---
tags:
- Puppet
- Automation
---

**Purpose**: Puppet Bolt can be leveraged in an Ansible-esque manner to connect to and enroll devices such as Windows servers, Linux servers, and various workstations. To this end, it can be used to run ad-hoc tasks or enroll devices into a centralized Puppet server. (e.g. `LAB-PUPPET-01.bunny-lab.io`)

!!! note "Assumptions"
    This deployment assumes you are deploying Puppet Bolt onto the same server as Puppet. If you have not already, follow the [Puppet Deployment](./puppet.md) documentation to do so before continuing with the Puppet Bolt deployment.

## Initial Preparation
``` sh
# Install Bolt Repository
sudo rpm -Uvh https://yum.puppet.com/puppet-tools-release-el-9.noarch.rpm
sudo yum install -y puppet-bolt

# Verify Installation
bolt --version

# Clone Puppet Bolt Repository into Bolt Directory
#sudo git clone https://git.bunny-lab.io/GitOps/Puppet-Bolt.git /etc/puppetlabs/bolt <-- Disabled for now
sudo mkdir -p /etc/puppetlabs/bolt
sudo chown -R $(whoami):$(whoami) /etc/puppetlabs/bolt
sudo chmod -R u+rwX,g+rX,o+rX /etc/puppetlabs/bolt # Capital X keeps directories traversable; a recursive 644 would strip the execute bit from directories
#sudo chmod -R u+rwx,g+rx,o+rx /etc/puppetlabs/bolt/modules/bolt <-- Disabled for now

# Initialize A New Bolt Project
cd /etc/puppetlabs/bolt
bolt project init bunny_lab
```

## Configuring Inventory
At this point, you will want to create an inventory file that you can use for tracking devices. For now, this will have hard-coded credentials until a cleaner method is figured out.
``` yaml title="/etc/puppetlabs/bolt/inventory.yaml"
# Inventory file for Puppet Bolt
groups:
  - name: linux_servers
    targets:
      - lab-auth-01.bunny-lab.io
      - lab-auth-02.bunny-lab.io
    config:
      transport: ssh
      ssh:
        host-key-check: false
        private-key: "/etc/puppetlabs/bolt/id_rsa_OpenSSH" # (1)
        user: nicole
        native-ssh: true

  - name: windows_servers
    config:
      transport: winrm
      winrm:
        realm: BUNNY-LAB.IO
        ssl: true
        user: "BUNNY-LAB\\nicole.rappe"
        password: DomainPassword # (2)
    groups:
      - name: domain_controllers
        targets:
          - lab-dc-01.bunny-lab.io
          - lab-dc-02.bunny-lab.io
      - name: dedicated_game_servers
        targets:
          - lab-games-01.bunny-lab.io
          - lab-games-02.bunny-lab.io
          - lab-games-03.bunny-lab.io
          - lab-games-04.bunny-lab.io
          - lab-games-05.bunny-lab.io
      - name: hyperv_hosts
        targets:
          - virt-node-01.bunny-lab.io
          - bunny-node-02.bunny-lab.io
```

1. Point the inventory file to the private key (if you use key-based authentication instead of password-based SSH authentication).
2. Replace this with your actual domain admin password.

### Validate Bolt Inventory Works
If the inventory file is created correctly, you will see the hosts listed when you run the command below:
``` sh
cd /etc/puppetlabs/bolt
bolt inventory show
```

??? example "Example Output of `bolt inventory show`"
    You should expect to see output similar to the following:
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt inventory show
    Targets
      lab-auth-01.bunny-lab.io
      lab-auth-02.bunny-lab.io
      lab-dc-01.bunny-lab.io
      lab-dc-02.bunny-lab.io
      lab-games-01.bunny-lab.io
      lab-games-02.bunny-lab.io
      lab-games-03.bunny-lab.io
      lab-games-04.bunny-lab.io
      lab-games-05.bunny-lab.io
      virt-node-01.bunny-lab.io
      bunny-node-02.bunny-lab.io

    Inventory source
      /tmp/bolt-lab/inventory.yaml

    Target count
      11 total, 11 from inventory, 0 adhoc

    Additional information
      Use the '--targets', '--query', or '--rerun' option to view specific targets
      Use the '--detail' option to view target configuration and data
    ```

## Configuring Kerberos
If you work with Windows-based devices in a domain environment, you will need to set up Puppet so it can perform Kerberos authentication while interacting with Windows devices. This involves a little bit of setup, but nothing too crazy.

### Install Krb5
We need to install the necessary software on the Puppet server to allow Kerberos authentication to occur.
=== "Rocky, CentOS, RHEL, Fedora"

    ``` sh
    sudo yum install krb5-workstation
    ```

=== "Debian, Ubuntu"

    ``` sh
    sudo apt-get install krb5-user
    ```

=== "SUSE"

    ``` sh
    sudo zypper install krb5-client
    ```

### Prepare `/etc/krb5.conf` Configuration
We need to configure Kerberos to know how to reach the domain. This is achieved by editing `/etc/krb5.conf` to look similar to the following, substituting the example values with your own domain.
``` ini
[libdefaults]
    default_realm = BUNNY-LAB.IO
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 7d
    forwardable = true

[realms]
    BUNNY-LAB.IO = {
        kdc = LAB-DC-01.bunny-lab.io # (1)
        kdc = LAB-DC-02.bunny-lab.io # (2)
        admin_server = LAB-DC-01.bunny-lab.io # (3)
    }

[domain_realm]
    .bunny-lab.io = BUNNY-LAB.IO
    bunny-lab.io = BUNNY-LAB.IO
```

1. Your primary domain controller
2. Your secondary domain controller (if applicable)
3. This is your Primary Domain Controller (PDC)

### Initialize Kerberos Connection
Now we need to log into the domain using (preferably) domain administrator credentials, such as the example below. You will be prompted to enter your domain password.
``` sh
kinit nicole.rappe@BUNNY-LAB.IO
klist
```
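Tickets obtained this way eventually expire (the `ticket_lifetime` above is 7 days). One hedged option, assuming your KDC policy permits renewable tickets, is to request renewability in the `[libdefaults]` section of `/etc/krb5.conf` (illustrative value):

``` ini
# Illustrative addition to [libdefaults]: request renewable tickets so a
# scheduled `kinit -R` can refresh the ticket without re-entering the password
renew_lifetime = 7d
```

A periodic `kinit -R` (for example, from cron) can then renew the existing ticket up to the renew lifetime, which may ease the lifetime limitation noted elsewhere in this document.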

??? example "Example Output of `klist`"
    You should expect to see output similar to the following. Finding a way to ensure the Kerberos tickets live longer is still under research, as 7 days is not exactly practical for long-term deployments.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# klist
    Ticket cache: FILE:/tmp/krb5cc_0
    Default principal: nicole.rappe@BUNNY-LAB.IO

    Valid starting       Expires              Service principal
    11/14/2024 21:57:03  11/15/2024 07:57:03  krbtgt/BUNNY-LAB.IO@BUNNY-LAB.IO
            renew until 11/21/2024 21:57:03
    ```

### Prepare Windows Devices
Windows devices need to be prepared ahead of time in order for WinRM functionality to work as expected. I have prepared a PowerShell script that you can run on each device that needs remote management functionality. You can port this script based on your needs, and deploy it via whatever methods you have available to you. (e.g. Ansible, Group Policies, existing RMM software, manually via remote desktop, etc.)

You can find the [WinRM Enablement Script](../../../../workflows/operations/automation/ansible/enable-winrm-on-windows-devices.md) in the Bunny Lab documentation.

## Ad-Hoc Command Examples
At this point, you should finally be ready to connect to Windows and Linux devices and run commands on them ad-hoc. Puppet Bolt Modules and Plans will be discussed further down the road.
??? example "Example Output of `bolt command run whoami -t domain_controllers --no-ssl-verify`"
    You should expect to see output similar to the following. This is what you will see when leveraging WinRM via Kerberos on Windows devices.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt command run whoami -t domain_controllers --no-ssl-verify
    CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
    Started on lab-dc-01.bunny-lab.io...
    Started on lab-dc-02.bunny-lab.io...
    Finished on lab-dc-02.bunny-lab.io:
      bunny-lab\nicole.rappe
    Finished on lab-dc-01.bunny-lab.io:
      bunny-lab\nicole.rappe
    Successful on 2 targets: lab-dc-01.bunny-lab.io,lab-dc-02.bunny-lab.io
    Ran on 2 targets in 1.91 sec
    ```

??? example "Example Output of `bolt command run whoami -t linux_servers`"
    You should expect to see output similar to the following. This is what you will see when leveraging native SSH on Linux devices.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt command run whoami -t linux_servers
    CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
    Started on lab-auth-01.bunny-lab.io...
    Started on lab-auth-02.bunny-lab.io...
    Finished on lab-auth-02.bunny-lab.io:
      nicole
    Finished on lab-auth-01.bunny-lab.io:
      nicole
    Successful on 2 targets: lab-auth-01.bunny-lab.io,lab-auth-02.bunny-lab.io
    Ran on 2 targets in 0.68 sec
    ```
428
deployments/automation/puppet/deployment/puppet.md
Normal file
@@ -0,0 +1,428 @@

---
tags:
- Puppet
- Automation
---

**Purpose**:
Puppet is another declarative configuration management tool that excels in system configuration and enforcement. Like Ansible, it is designed to maintain the desired state of a system's configuration, but it uses a client-server (master-agent) architecture by default.

!!! note "Assumptions"
    This document assumes you are deploying the Puppet server onto Rocky Linux 9.4. Any version of RHEL/CentOS/Alma/Rocky should behave similarly.

## Architectural Overview
### Detailed
``` mermaid
sequenceDiagram
    participant Gitea as Gitea Repo (Puppet Environment)
    participant r10k as r10k (Environment Deployer)
    participant PuppetMaster as Puppet Server (lab-puppet-01.bunny-lab.io)
    participant Agent as Managed Agent (fedora.bunny-lab.io)
    participant Neofetch as Neofetch Package

    %% PuppetMaster pulling environment updates
    PuppetMaster->>Gitea: Pull Puppet Environment updates
    Gitea-->>PuppetMaster: Send latest Puppet repository code

    %% r10k deployment process
    PuppetMaster->>r10k: Deploy environment with r10k
    r10k->>PuppetMaster: Fetch and install Puppet modules
    r10k-->>PuppetMaster: Compile environments and apply updates

    %% Agent enrollment process
    Agent->>PuppetMaster: Request to enroll (Agent Check-in)
    PuppetMaster->>Agent: Verify SSL Certificate & Authenticate
    Agent-->>PuppetMaster: Send facts about system (Facter)

    %% PuppetMaster compiles catalog for the agent
    PuppetMaster->>PuppetMaster: Compile Catalog
    PuppetMaster->>PuppetMaster: Check if 'neofetch' is required in manifest
    PuppetMaster-->>Agent: Send compiled catalog with 'neofetch' installation instructions

    %% Agent installs neofetch
    Agent->>Agent: Check if 'neofetch' is installed
    Agent--xNeofetch: 'neofetch' not installed
    Agent->>Neofetch: Install 'neofetch'
    Neofetch-->>Agent: Installation complete

    %% Agent reports back to PuppetMaster
    Agent->>PuppetMaster: Report status (catalog applied and neofetch installed)
```

### Simplified
``` mermaid
sequenceDiagram
    participant Gitea as Gitea (Puppet Repository)
    participant PuppetMaster as Puppet Server
    participant Agent as Managed Agent (fedora.bunny-lab.io)
    participant Neofetch as Neofetch Package

    %% PuppetMaster pulling environment updates
    PuppetMaster->>Gitea: Pull environment updates
    Gitea-->>PuppetMaster: Send updated code

    %% Agent enrollment and catalog request
    Agent->>PuppetMaster: Request catalog (Check-in)
    PuppetMaster->>Agent: Send compiled catalog (neofetch required)

    %% Agent installs neofetch
    Agent->>Neofetch: Install neofetch
    Neofetch-->>Agent: Installation complete

    %% Agent reports back
    Agent->>PuppetMaster: Report catalog applied (neofetch installed)
```
### Breakdown
#### 1. **PuppetMaster Pulls Updates from Gitea**
- The PuppetMaster uses `r10k` to fetch the latest environment updates from Gitea. These updates include the manifests, Hiera data, and modules for each Puppet environment.

#### 2. **PuppetMaster Compiles Catalogs and Modules**
- After pulling updates, the PuppetMaster compiles the latest node-specific catalogs based on the manifests and modules, ensuring the configuration is ready for agents to retrieve.

#### 3. **Agent (fedora.bunny-lab.io) Checks In**
- The Puppet agent on `fedora.bunny-lab.io` checks in with the PuppetMaster for its catalog. This request prompts the PuppetMaster to compile that node's desired configuration.

#### 4. **Agent Downloads and Applies the Catalog**
- The agent retrieves its compiled catalog from the PuppetMaster and compares the current system state with the desired state outlined in the catalog.

#### 5. **Agent Installs `neofetch`**
- The agent identifies that `neofetch` is missing and installs it using the system's package manager, following the directives in the catalog.

#### 6. **Agent Reports Success**
- Once the changes are applied, the agent sends a report back to the PuppetMaster detailing what changed and confirming that `neofetch` was installed.

## Deployment Steps
You will need to perform a few steps outlined in the [official Puppet documentation](https://www.puppet.com/docs/puppet/7/install_puppet.html) to get a Puppet server operational. A summarized workflow is shown below:

### Install Puppet Repository
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Add the Puppet 7 repository / enable Puppet on YUM
sudo rpm -Uvh https://yum.puppet.com/puppet7-release-el-9.noarch.rpm
```

### Install Puppet Server
**Installation Scope**: Puppet Server
``` sh
# Install the Puppet Server
sudo yum install -y puppetserver
sudo systemctl enable --now puppetserver

# Validate a successful deployment
exec bash
puppetserver -v
```

### Install Puppet Agent
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Install the Puppet agent (this will already be installed on the Puppet Server)
sudo yum install -y puppet-agent

# Enable the Puppet agent
sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true

# Configure which Puppet Server to connect to
sudo /opt/puppetlabs/bin/puppet config set server lab-puppet-01.bunny-lab.io --section main

# Establish a secure connection to the Puppet Server
sudo /opt/puppetlabs/bin/puppet ssl bootstrap

# ((On the Puppet Server))
# You will see an error stating: "Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (fedora.bunny-lab.io)."
# Run the following command (as root) on the Puppet Server to sign the agent's certificate
sudo su
puppetserver ca sign --certname fedora.bunny-lab.io
```

#### Validate Agent Functionality
At this point, you want to ensure that the device managed by the agent can pull down configurations from the Puppet Server. You will know it worked if you get a message similar to `Notice: Applied catalog in X.XX seconds` after running the following command:
``` sh
puppet agent --test
```

## Install r10k
At this point, we need to configure Gitea as the storage repository for the Puppet environments (e.g. `production` and `development`). We can do this by leveraging a tool called r10k, which pulls a Git repository and deploys each of its branches as a Puppet environment.
``` sh
# Install r10k prerequisites
sudo dnf install -y ruby ruby-devel gcc make

# Install the r10k gem (the software itself)
# Note: If you encounter any issues with permissions, you can install the gem with "sudo gem install r10k --no-document".
sudo gem install r10k

# Verify the installation (run this as a non-root user)
r10k version
```

### Configure r10k
``` sh
# Create the r10k configuration directory
sudo mkdir -p /etc/puppetlabs/r10k

# Create the r10k configuration file
sudo nano /etc/puppetlabs/r10k/r10k.yaml
```

```yaml title="/etc/puppetlabs/r10k/r10k.yaml"
---
# Cache directory for r10k
cachedir: '/var/cache/r10k'

# Sources define which repositories contain environments
# (Use the HTTPS clone URL, since the Git credentials stored later are for HTTPS)
sources:
  puppet:
    remote: 'https://git.bunny-lab.io/GitOps/Puppet.git'
    basedir: '/etc/puppetlabs/code/environments'
```

``` sh
# Lock down the permissions of the configuration file
sudo chmod 600 /etc/puppetlabs/r10k/r10k.yaml

# Create the r10k cache directory
sudo mkdir -p /var/cache/r10k
sudo chown -R puppet:puppet /var/cache/r10k
```

## Configure Gitea
At this point, we need to set up the branches and file/folder structure of the Puppet repository on Gitea.

Create a repository on Gitea with the files and structure noted by each file's title below, mirroring all of the files in both the `production` and `development` branches. For the sake of this example, the repository will be located at `https://git.bunny-lab.io/GitOps/Puppet.git`

!!! example "Example Agent & Neofetch"
    You will notice there is a section for `fedora.bunny-lab.io` as well as mentions of `neofetch`. These are purely examples from my homelab of a computer I was testing against while developing the Puppet Server and this documentation. Feel free to omit the `modules/neofetch/manifests/init.pp` file from the Gitea repository, and to remove this entire section from the `manifests/site.pp` file:

    ``` puppet
    # Node definition for the Fedora agent
    node 'fedora.bunny-lab.io' {
      # Include the neofetch class to ensure Neofetch is installed
      include neofetch
    }
    ```

=== "Puppetfile"
    This file is used by the Puppet Server (PuppetMaster) to prepare the environment by installing modules / Forge packages into the environment before devices receive their configurations. The modules included in this example are the bare minimum needed for PuppetDB functionality.

    ```ruby title="Puppetfile"
    forge 'https://forge.puppet.com'
    mod 'puppetlabs-stdlib', '9.6.0'
    mod 'puppetlabs-puppetdb', '8.1.0'
    mod 'puppetlabs-postgresql', '10.3.0'
    mod 'puppetlabs-firewall', '8.1.0'
    mod 'puppetlabs-inifile', '6.1.1'
    mod 'puppetlabs-concat', '9.0.2'
    mod 'puppet-systemd', '7.1.0'
    ```

=== "environment.conf"
    This file is mostly redundant, as it only states the default values Puppet already works with. I included it in case I ever have a unique use-case that requires a more customized folder structure (which is very unlikely).

    ```ini title="environment.conf"
    # Specifies the module path for this environment
    modulepath = modules:$basemodulepath

    # Optional: Specifies the manifest file for this environment
    manifest = manifests/site.pp

    # Optional: Set the environment's config_version (e.g., a script to output the current Git commit hash)
    # config_version = scripts/config_version.sh

    # Optional: Set the environment's environment_timeout
    # environment_timeout = 0
    ```

=== "site.pp"
    This file acts as an inventory of devices and their desired states. In this example, the Puppet Server itself is named `lab-puppet-01.bunny-lab.io` and the agent device is named `fedora.bunny-lab.io`. By "including" modules like PuppetDB, it installs the PuppetDB role and configures it automatically on the Puppet Server. Declaring the firewall rules ensures those ports stay open no matter what; if they close, Puppet re-opens them automatically. Port 8140 is for agent communication, and port 8081 is for PuppetDB functionality.

    !!! example "Neofetch Example"
        In the example configuration below, you will notice this section. It tells Puppet to deploy the neofetch package to any device that has `include neofetch` declared. Grouping devices is currently undocumented as of this writing.
        ``` puppet
        # Node definition for the Fedora agent
        node 'fedora.bunny-lab.io' {
          # Include the neofetch class to ensure Neofetch is installed
          include neofetch
        }
        ```

    ```puppet title="manifests/site.pp"
    # Node definition for the Puppet Server
    node 'lab-puppet-01.bunny-lab.io' {

      # Include the puppetdb class with custom parameters
      class { 'puppetdb':
        listen_address => '0.0.0.0', # Allows access from all network interfaces
      }

      # Configure the Puppet Server to use PuppetDB
      include puppetdb
      include puppetdb::master::config

      # Ensure the required iptables rules are in place using Puppet's firewall resources
      firewall { '100 allow Puppet traffic on 8140':
        ensure => 'present',
        proto  => 'tcp',
        dport  => '8140',
        jump   => 'accept', # The 'jump' parameter replaces the older 'action' parameter
        chain  => 'INPUT',
      }

      firewall { '101 allow PuppetDB traffic on 8081':
        ensure => 'present',
        proto  => 'tcp',
        dport  => '8081',
        jump   => 'accept', # The 'jump' parameter replaces the older 'action' parameter
        chain  => 'INPUT',
      }
    }

    # Node definition for the Fedora agent
    node 'fedora.bunny-lab.io' {
      # Include the neofetch class to ensure Neofetch is installed
      include neofetch
    }

    # Default node definition (optional)
    node default {
      # This can be left empty or include common classes for all other nodes
    }
    ```

=== "init.pp"
    This file backs the neofetch class referenced in the `site.pp` file. It is the declaration of how we want neofetch to be on any device that includes the neofetch "class". We don't care how Puppet does it: these few OS-agnostic lines install Neofetch whether the package manager is yum, dnf, or apt. The philosophy is similar to modules in Ansible playbooks, which declare the "state" of things.

    ```puppet title="modules/neofetch/manifests/init.pp"
    class neofetch {
      package { 'neofetch':
        ensure => installed,
      }
    }
    ```

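If you prefer to bootstrap the layout from the command line, the files and branches described above can be staged locally before pushing to Gitea. This is a hypothetical sketch, not part of the original workflow: the temporary path and commit identity are placeholders, and you would still add your own remote (`git remote add origin https://git.bunny-lab.io/GitOps/Puppet.git`) and push both branches afterwards.

``` sh
# Hypothetical sketch: stage the repository layout locally, then push both branches to Gitea
set -e
repo=$(mktemp -d)
cd "$repo"

# File/folder structure described in the tabs above
mkdir -p manifests modules/neofetch/manifests
touch Puppetfile environment.conf manifests/site.pp modules/neofetch/manifests/init.pp

git init -q
git add -A
git -c user.name=example -c user.email=example@example.com commit -qm "Initial Puppet environment layout"

# r10k maps each branch name to a Puppet environment of the same name
git branch -m production
git branch development production   # 'development' starts as a copy of 'production'
git branch --list
```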
### Storing Credentials to Gitea
We need to be able to pull down the data from Gitea's Puppet repository as the root user so that r10k can automatically pull down any changes made to the Puppet environments (e.g. `production` and `development`). Each Git branch represents a different Puppet environment. We will use an application token to do this.

Navigate to **Gitea > User (Top-Right) > Settings > Applications**

- Token Name: `Puppet r10k`
- Permissions: `Repository > Read Only`
- Click the "**Generate Token**" button to finish.

!!! warning "Securely Store the Application Token"
    It is critical that you store the token somewhere safe, like a password manager, as you will need to reference it later and might need it in the future if you rebuild the r10k environment.

Now we want to configure git to store the credentials for later use by r10k:
``` sh
# Enable stored credentials (we will address the security concerns further down...)
sudo yum install -y git
sudo git config --global credential.helper store

# Clone the Git repository once to store the credentials (use the application token as the password)
# Username: nicole.rappe
# Password: <Application Token Value>
sudo git clone https://git.bunny-lab.io/GitOps/Puppet.git /tmp/PuppetTest

# Verify the credentials are stored
sudo cat /root/.git-credentials

# Lock down permissions
sudo chmod 600 /root/.git-credentials

# Clean up after ourselves
sudo rm -rf /tmp/PuppetTest
```

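For reference, git's credential store keeps one line per host. A hedged illustration of what the stored entry looks like (the token value is a placeholder for the one you generated):

```text title="/root/.git-credentials"
https://nicole.rappe:<APPLICATION_TOKEN>@git.bunny-lab.io
```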
Finally, we validate that everything is working by pulling down the Puppet environments using r10k on the Puppet Server:
``` sh
# Deploy Puppet environments from Gitea
sudo /usr/local/bin/r10k deploy environment -p

# Validate r10k is installing modules in the environments
sudo ls /etc/puppetlabs/code/environments/production/modules
sudo ls /etc/puppetlabs/code/environments/development/modules
```

!!! success "Successful Puppet Environment Deployment"
    If you got no errors about Puppetfile formatting or Gitea permissions, you are good to move on to the next step.

## External Node Classifier (ENC)
An ENC allows you to define node-specific data, including the environment, on the Puppet Server. The agent requests its configuration, and the Puppet Server provides the environment and classes to apply.

**Advantages**:

- **Centralized Control**: Environments and classifications are managed from the server.
- **Security**: Agents cannot override their assigned environment.
- **Scalability**: Suitable for managing environments for hundreds or thousands of nodes.

### Create an ENC Script
``` sh
sudo mkdir -p /opt/puppetlabs/server/data/puppetserver/scripts/
```

```ruby title="/opt/puppetlabs/server/data/puppetserver/scripts/enc.rb"
#!/usr/bin/env ruby
# enc.rb

require 'yaml'

node_name = ARGV[0]

# Define environment assignments
node_environments = {
  'fedora.bunny-lab.io' => 'development',
  # Add more nodes and their environments as needed
}

environment = node_environments[node_name] || 'production'

# Define classes to include per node (optional)
node_classes = {
  'fedora.bunny-lab.io' => ['neofetch'],
  # Add more nodes and their classes as needed
}

classes = node_classes[node_name] || []

# Output the YAML document
output = {
  'environment' => environment,
  'classes'     => classes
}

puts output.to_yaml
```

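Run by hand (e.g. `sudo /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb fedora.bunny-lab.io`), the script emits the YAML document the Puppet Server consumes. Given the mappings above, the Fedora node yields:

```yaml
---
environment: development
classes:
- neofetch
```

Any node name not listed in the hashes falls through to `production` with an empty class list.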
``` sh
# Ensure the file is executable
sudo chmod +x /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```

### Configure Puppet Server to Use the ENC
Edit the Puppet Server's `puppet.conf` and set the `node_terminus` and `external_nodes` parameters:
```ini title="/etc/puppetlabs/puppet/puppet.conf"
[master]
node_terminus = exec
external_nodes = /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```

Restart the Puppet Server service:
``` sh
sudo systemctl restart puppetserver
```

## Pull Puppet Environments from Gitea
At this point, we can tell r10k to pull down the Puppet environments (e.g. `production` and `development`) that we made in the Gitea repository in previous steps. Run the following command on the Puppet Server to pull down the environments. This will download and configure any Puppet Forge modules as well as any hand-made modules such as Neofetch.
``` sh
sudo /usr/local/bin/r10k deploy environment -p

# OPTIONAL: You can pull down a specific environment instead of all environments by specifying the branch name:
#sudo /usr/local/bin/r10k deploy environment development -p
```

### Apply Configuration to Puppet Server
At this point, we are going to deploy the configuration from Gitea to the Puppet Server itself, so it installs PuppetDB automatically and configures firewall ports and other small things needed to function properly. Once this is completed, you can add additional agents / managed devices and they will be able to communicate with the Puppet Server over the network.
``` sh
sudo /opt/puppetlabs/bin/puppet agent -t
```

!!! success "Puppet Server Deployed and Validated"
    Congratulations! You have successfully deployed an entire Puppet Server, integrated Gitea and r10k to deploy environment changes in a versioned manner, and validated functionality against a managed device using the agent (such as a spare laptop/desktop). If you got this far, be proud: it took me over 12 hours to write this documentation so that you can deploy a server in less than 30 minutes.