Re-Structured Documentation

**New file:** `Servers/Automation/Puppet/Deployment/Puppet Bolt.md`

**Purpose**: Puppet Bolt can be leveraged in an Ansible-esque manner to connect to and enroll devices such as Windows servers, Linux servers, and various workstations. It can be used to run ad-hoc tasks or to enroll devices into a centralized Puppet server (e.g. `LAB-PUPPET-01.bunny-lab.io`).

!!! note "Assumptions"
    This deployment assumes you are deploying Puppet Bolt onto the same server as Puppet. If you have not already, follow the [Puppet Deployment](https://docs.bunny-lab.io/Servers%20%26%20Workflows/Linux/Automation/Puppet/Puppet/) documentation before continuing with the Puppet Bolt deployment.

## Initial Preparation

``` sh
# Install Bolt Repository
sudo rpm -Uvh https://yum.puppet.com/puppet-tools-release-el-9.noarch.rpm
sudo yum install -y puppet-bolt

# Verify Installation
bolt --version

# Clone Puppet Bolt Repository into Bolt Directory
#sudo git clone https://git.bunny-lab.io/GitOps/Puppet-Bolt.git /etc/puppetlabs/bolt <-- Disabled for now
sudo mkdir -p /etc/puppetlabs/bolt
sudo chown -R $(whoami):$(whoami) /etc/puppetlabs/bolt
sudo chmod -R u+rwX,go+rX /etc/puppetlabs/bolt  # Capital "X" keeps directories traversable while regular files stay non-executable
#sudo chmod -R u+rwx,g+rx,o+rx /etc/puppetlabs/bolt/modules/bolt <-- Disabled for now

# Initialize A New Bolt Project
cd /etc/puppetlabs/bolt
bolt project init bunny_lab
```
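
If the project initialized correctly, Bolt will have created a `bolt-project.yaml` in the project directory. The contents shown in the comments below are an assumption based on Bolt's defaults rather than output captured from this lab:

``` sh
# Confirm the Bolt project was created
cat /etc/puppetlabs/bolt/bolt-project.yaml

# Expected to contain at least the project name, e.g.:
# ---
# name: bunny_lab
```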

## Configuring Inventory

At this point, you will want to create an inventory file that you can use for tracking devices. For now, this uses hard-coded credentials until a cleaner method is figured out.

``` yaml title="/etc/puppetlabs/bolt/inventory.yaml"
# Inventory file for Puppet Bolt
groups:
  - name: linux_servers
    targets:
      - lab-auth-01.bunny-lab.io
      - lab-auth-02.bunny-lab.io
    config:
      transport: ssh
      ssh:
        host-key-check: false
        private-key: "/etc/puppetlabs/bolt/id_rsa_OpenSSH" # (1)
        user: nicole
        native-ssh: true

  - name: windows_servers
    config:
      transport: winrm
      winrm:
        realm: BUNNY-LAB.IO
        ssl: true
        user: "BUNNY-LAB\\nicole.rappe"
        password: DomainPassword # (2)
    groups:
      - name: domain_controllers
        targets:
          - lab-dc-01.bunny-lab.io
          - lab-dc-02.bunny-lab.io
      - name: dedicated_game_servers
        targets:
          - lab-games-01.bunny-lab.io
          - lab-games-02.bunny-lab.io
          - lab-games-03.bunny-lab.io
          - lab-games-04.bunny-lab.io
          - lab-games-05.bunny-lab.io
      - name: hyperv_hosts
        targets:
          - virt-node-01.bunny-lab.io
          - bunny-node-02.bunny-lab.io
```

1. Point the inventory file at the private key (if you use key-based authentication instead of password-based SSH authentication).
2. Replace this with your actual domain admin / domain password.

### Validate Bolt Inventory Works

If the inventory file is created correctly, you will see the hosts listed when you run the command below:

``` sh
cd /etc/puppetlabs/bolt
bolt inventory show
```

??? example "Example Output of `bolt inventory show`"
    You should expect to see output similar to the following:
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt inventory show
    Targets
      lab-auth-01.bunny-lab.io
      lab-auth-02.bunny-lab.io
      lab-dc-01.bunny-lab.io
      lab-dc-02.bunny-lab.io
      lab-games-01.bunny-lab.io
      lab-games-02.bunny-lab.io
      lab-games-03.bunny-lab.io
      lab-games-04.bunny-lab.io
      lab-games-05.bunny-lab.io
      virt-node-01.bunny-lab.io
      bunny-node-02.bunny-lab.io

    Inventory source
      /tmp/bolt-lab/inventory.yaml

    Target count
      11 total, 11 from inventory, 0 adhoc

    Additional information
      Use the '--targets', '--query', or '--rerun' option to view specific targets
      Use the '--detail' option to view target configuration and data
    ```

## Configuring Kerberos

If you work with Windows-based devices in a domain environment, you will need to set up Puppet so it can perform Kerberos authentication while interacting with Windows devices. This involves a little bit of setup, but nothing too crazy.

### Install Krb5

We need to install the necessary software on the Puppet server to allow Kerberos authentication to occur.

=== "Rocky, CentOS, RHEL, Fedora"

    ``` sh
    sudo yum install krb5-workstation
    ```

=== "Debian, Ubuntu"

    ``` sh
    sudo apt-get install krb5-user
    ```

=== "SUSE"

    ``` sh
    sudo zypper install krb5-client
    ```

### Prepare `/etc/krb5.conf` Configuration

We need to configure Kerberos so it knows how to reach the domain. This is achieved by editing `/etc/krb5.conf` to look similar to the following, substituting your own domain for the example values.

``` ini
[libdefaults]
    default_realm = BUNNY-LAB.IO
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 7d
    forwardable = true

[realms]
    BUNNY-LAB.IO = {
        kdc = LAB-DC-01.bunny-lab.io # (1)
        kdc = LAB-DC-02.bunny-lab.io # (2)
        admin_server = LAB-DC-01.bunny-lab.io # (3)
    }

[domain_realm]
    .bunny-lab.io = BUNNY-LAB.IO
    bunny-lab.io = BUNNY-LAB.IO
```

1. Your primary domain controller
2. Your secondary domain controller (if applicable)
3. This is your Primary Domain Controller (PDC)

### Initialize Kerberos Connection

Now we need to log into the domain using (preferably) domain administrator credentials, such as in the example below. You will be prompted to enter your domain password.

``` sh
kinit nicole.rappe@BUNNY-LAB.IO
klist
```

??? example "Example Output of `klist`"
    You should expect to see output similar to the following. Finding a way to make the Kerberos tickets live longer is still under research, as 7 days is not exactly practical for long-term deployments.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# klist
    Ticket cache: FILE:/tmp/krb5cc_0
    Default principal: nicole.rappe@BUNNY-LAB.IO

    Valid starting       Expires              Service principal
    11/14/2024 21:57:03  11/15/2024 07:57:03  krbtgt/BUNNY-LAB.IO@BUNNY-LAB.IO
        renew until 11/21/2024 21:57:03
    ```
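
As one possible workaround for the ticket-lifetime concern noted above, `kinit` can be re-run non-interactively from a keytab on a schedule. This is only a sketch and has not been validated in this lab; it assumes a keytab for the `nicole.rappe@BUNNY-LAB.IO` principal is acceptable in your environment, and the key version number (`-k 1`) may need to match the account's actual kvno in Active Directory.

``` sh
# Build a keytab interactively with ktutil (part of krb5-workstation)
sudo ktutil
#   ktutil:  addent -password -p nicole.rappe@BUNNY-LAB.IO -k 1 -e aes256-cts-hmac-sha1-96
#   ktutil:  wkt /root/bolt.keytab
#   ktutil:  quit
sudo chmod 600 /root/bolt.keytab

# Refresh the ticket non-interactively (e.g. from a daily cron job)
kinit -k -t /root/bolt.keytab nicole.rappe@BUNNY-LAB.IO
```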

### Prepare Windows Devices

Windows devices need to be prepared ahead of time in order for WinRM functionality to work as expected. I have prepared a PowerShell script that you can run on each device that needs remote management functionality. You can adapt this script to your needs and deploy it via whatever methods you have available to you (e.g. Ansible, Group Policies, existing RMM software, manually via remote desktop, etc.).

You can find the [WinRM Enablement Script](https://docs.bunny-lab.io/Docker%20%26%20Kubernetes/Servers/AWX/AWX%20Operator/Enable%20Kerberos%20WinRM/?h=winrm) in the Bunny Lab documentation.

## Ad-Hoc Command Examples

At this point, you should finally be ready to connect to Windows and Linux devices and run commands on them ad-hoc. Puppet Bolt modules and plans will be discussed further down the road.

??? example "Example Output of `bolt command run whoami -t domain_controllers --no-ssl-verify`"
    You should expect to see output similar to the following. This is what you will see when leveraging WinRM via Kerberos on Windows devices.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt command run whoami -t domain_controllers --no-ssl-verify
    CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
    Started on lab-dc-01.bunny-lab.io...
    Started on lab-dc-02.bunny-lab.io...
    Finished on lab-dc-02.bunny-lab.io:
        bunny-lab\nicole.rappe
    Finished on lab-dc-01.bunny-lab.io:
        bunny-lab\nicole.rappe
    Successful on 2 targets: lab-dc-01.bunny-lab.io,lab-dc-02.bunny-lab.io
    Ran on 2 targets in 1.91 sec
    ```

??? example "Example Output of `bolt command run whoami -t linux_servers`"
    You should expect to see output similar to the following. This is what you will see when leveraging native SSH on Linux devices.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt command run whoami -t linux_servers
    CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
    Started on lab-auth-01.bunny-lab.io...
    Started on lab-auth-02.bunny-lab.io...
    Finished on lab-auth-02.bunny-lab.io:
        nicole
    Finished on lab-auth-01.bunny-lab.io:
        nicole
    Successful on 2 targets: lab-auth-01.bunny-lab.io,lab-auth-02.bunny-lab.io
    Ran on 2 targets in 0.68 sec
    ```
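
The SSH transport in the inventory above connects as an unprivileged user (`nicole`), so commands that require root can be escalated with Bolt's `--run-as` option. A minimal sketch, assuming sudo is configured for that user on the targets:

``` sh
# Same ad-hoc command, but escalated to root via sudo on the targets
bolt command run whoami -t linux_servers --run-as root

# If passwordless sudo is not configured, you may also need to supply --sudo-password
# (check "bolt command run --help" for the exact flag behavior in your Bolt version).
```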

**New file:** `Servers/Automation/Puppet/Deployment/Puppet.md`

**Purpose**:
Puppet is another declarative configuration management tool that excels in system configuration and enforcement. Like Ansible, it is designed to maintain the desired state of a system's configuration, but it uses a client-server (master-agent) architecture by default.

!!! note "Assumptions"
    This document assumes you are deploying Puppet Server onto Rocky Linux 9.4. Any version of RHEL/CentOS/Alma/Rocky should behave similarly.

## Architectural Overview

### Detailed

``` mermaid
sequenceDiagram
    participant Gitea as Gitea Repo (Puppet Environment)
    participant r10k as r10k (Environment Deployer)
    participant PuppetMaster as Puppet Server (lab-puppet-01.bunny-lab.io)
    participant Agent as Managed Agent (fedora.bunny-lab.io)
    participant Neofetch as Neofetch Package

    %% PuppetMaster pulling environment updates
    PuppetMaster->>Gitea: Pull Puppet Environment updates
    Gitea-->>PuppetMaster: Send latest Puppet repository code

    %% r10k deployment process
    PuppetMaster->>r10k: Deploy environment with r10k
    r10k->>PuppetMaster: Fetch and install Puppet modules
    r10k-->>PuppetMaster: Compile environments and apply updates

    %% Agent enrollment process
    Agent->>PuppetMaster: Request to enroll (Agent Check-in)
    PuppetMaster->>Agent: Verify SSL Certificate & Authenticate
    Agent-->>PuppetMaster: Send facts about system (Facter)

    %% PuppetMaster compiles catalog for the agent
    PuppetMaster->>PuppetMaster: Compile Catalog
    PuppetMaster->>PuppetMaster: Check if 'neofetch' is required in manifest
    PuppetMaster-->>Agent: Send compiled catalog with 'neofetch' installation instructions

    %% Agent installs neofetch
    Agent->>Agent: Check if 'neofetch' is installed
    Agent--xNeofetch: 'neofetch' not installed
    Agent->>Neofetch: Install 'neofetch'
    Neofetch-->>Agent: Installation complete

    %% Agent reports back to PuppetMaster
    Agent->>PuppetMaster: Report status (catalog applied and neofetch installed)
```

### Simplified

``` mermaid
sequenceDiagram
    participant Gitea as Gitea (Puppet Repository)
    participant PuppetMaster as Puppet Server
    participant Agent as Managed Agent (fedora.bunny-lab.io)
    participant Neofetch as Neofetch Package

    %% PuppetMaster pulling environment updates
    PuppetMaster->>Gitea: Pull environment updates
    Gitea-->>PuppetMaster: Send updated code

    %% Agent enrollment and catalog request
    Agent->>PuppetMaster: Request catalog (Check-in)
    PuppetMaster->>Agent: Send compiled catalog (neofetch required)

    %% Agent installs neofetch
    Agent->>Neofetch: Install neofetch
    Neofetch-->>Agent: Installation complete

    %% Agent reports back
    Agent->>PuppetMaster: Report catalog applied (neofetch installed)
```

### Breakdown

#### 1. **PuppetMaster Pulls Updates from Gitea**
- PuppetMaster uses `r10k` to fetch the latest environment updates from Gitea. These updates include manifests, hiera data, and modules for the specified Puppet environments.

#### 2. **PuppetMaster Compiles Catalogs and Modules**
- After pulling updates, the PuppetMaster compiles the latest node-specific catalogs based on the manifests and modules. It ensures the configuration is ready for agents to retrieve.

#### 3. **Agent (fedora.bunny-lab.io) Checks In**
- The Puppet agent on `fedora.bunny-lab.io` checks in with the PuppetMaster for its catalog. This request tells the PuppetMaster to compile the node's desired configuration.

#### 4. **Agent Downloads and Applies the Catalog**
- The agent retrieves its compiled catalog from the PuppetMaster. It compares the current system state with the desired state outlined in the catalog.

#### 5. **Agent Installs `neofetch`**
- The agent identifies that `neofetch` is missing and installs it using the system's package manager. The installation follows the directives in the catalog.

#### 6. **Agent Reports Success**
- Once changes are applied, the agent sends a report back to the PuppetMaster. The report includes details of the changes made, confirming `neofetch` was installed.

## Deployment Steps
You will need to perform a few steps outlined in the [official Puppet documentation](https://www.puppet.com/docs/puppet/7/install_puppet.html) to get a Puppet server operational. A summarized workflow is seen below:

### Install Puppet Repository
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Add Puppet Repository / Enable Puppet on YUM
sudo rpm -Uvh https://yum.puppet.com/puppet7-release-el-9.noarch.rpm
```

### Install Puppet Server
**Installation Scope**: Puppet Server
``` sh
# Install the Puppet Server
sudo yum install -y puppetserver
systemctl enable --now puppetserver

# Validate Successful Deployment
exec bash
puppetserver -v
```
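
Optionally, confirm the service is running and listening on the agent port (8140) before moving on. This is just a quick sanity check; it assumes `ss` from iproute2 is present (it is on a default Rocky Linux install):

``` sh
# Confirm the Puppet Server service is active
systemctl status puppetserver --no-pager

# Confirm it is listening on TCP 8140
sudo ss -tlnp | grep 8140
```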

### Install Puppet Agent
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Install Puppet Agent (This will already be installed on the Puppet Server)
sudo yum install -y puppet-agent

# Enable the Puppet Agent
sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true

# Configure Puppet Server to Connect To
puppet config set server lab-puppet-01.bunny-lab.io --section main

# Establish Secure Connection to Puppet Server
puppet ssl bootstrap

# ((On the Puppet Server))
# You will see an error stating: "Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (fedora.bunny-lab.io)."
# Run the following command (as root) on the Puppet Server to generate a certificate
sudo su
puppetserver ca sign --certname fedora.bunny-lab.io
```
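
If you are unsure of the agent's certname, or want to confirm the signing request actually arrived, you can list certificate requests on the Puppet Server before signing:

``` sh
# List pending (unsigned) certificate requests
sudo puppetserver ca list

# List all certificates, including those already signed
sudo puppetserver ca list --all
```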

#### Validate Agent Functionality
At this point, you want to ensure that the device being managed by the agent is able to pull down configurations from the Puppet Server. You will know it worked if you get a message similar to `Notice: Applied catalog in X.XX seconds` after running the following command:
``` sh
puppet agent --test
```
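
If you want to preview what the agent would change without actually modifying the system, Puppet's no-op mode gives you a dry run:

``` sh
# Dry run: report pending changes without applying them
puppet agent --test --noop
```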

## Install r10k
At this point, we need to configure Gitea as the storage repository for the Puppet "environments" (e.g. `Production` and `Development`). We can do this by leveraging a tool called "r10k", which pulls a Git repository and configures its branches as environments in Puppet.
``` sh
# Install r10k Pre-Requisites
sudo dnf install -y ruby ruby-devel gcc make

# Install r10k Gem (The Software)
# Note: If you encounter any issues with permissions, you can install the gem with "sudo gem install r10k --no-document".
sudo gem install r10k

# Verify the Installation (Run this as a non-root user)
r10k version
```

### Configure r10k
``` sh
# Create the r10k Configuration Directory
sudo mkdir -p /etc/puppetlabs/r10k

# Create the r10k Configuration File
sudo nano /etc/puppetlabs/r10k/r10k.yaml
```

```yaml title="/etc/puppetlabs/r10k/r10k.yaml"
---
# Cache directory for r10k
cachedir: '/var/cache/r10k'

# Sources define which repositories contain environments (use the repository's HTTPS clone URL; its credentials are stored in a later step)
sources:
  puppet:
    remote: 'https://git.bunny-lab.io/GitOps/Puppet.git'
    basedir: '/etc/puppetlabs/code/environments'
```

``` sh
# Lock Down the Permissions of the Configuration File
sudo chmod 600 /etc/puppetlabs/r10k/r10k.yaml

# Create r10k Cache Directory
sudo mkdir -p /var/cache/r10k
sudo chown -R puppet:puppet /var/cache/r10k
```

## Configure Gitea
At this point, we need to set up the branches and file/folder structure of the Puppet repository on Gitea.

You will make a repository on Gitea with the following files and structure as noted by each file's title. You will make a mirror copy of all of the files below in both the `Production` and `Development` branches of the repository. For the sake of this example, the repository will be located at `https://git.bunny-lab.io/GitOps/Puppet.git`

!!! example "Example Agent & Neofetch"
    You will notice there is a section for `fedora.bunny-lab.io` as well as mentions of `neofetch`. These are purely examples from my homelab of a computer I was testing against during the development of the Puppet Server and this documentation. Feel free to leave the entire `modules/neofetch/manifests/init.pp` file out of the Gitea repository, and remove this entire section from the `manifests/site.pp` file:

    ``` puppet
    # Node definition for the Fedora agent
    node 'fedora.bunny-lab.io' {
      # Include the neofetch class to ensure Neofetch is installed
      include neofetch
    }
    ```

=== "Puppetfile"
    This file is used by the Puppet Server (PuppetMaster) to prepare the environment by installing modules / Forge packages into the environment prior to devices getting their configurations. It is important, and the modules included in this example are the bare minimum needed to get things working with PuppetDB functionality.

    ```ruby title="Puppetfile"
    forge 'https://forge.puppet.com'
    mod 'puppetlabs-stdlib', '9.6.0'
    mod 'puppetlabs-puppetdb', '8.1.0'
    mod 'puppetlabs-postgresql', '10.3.0'
    mod 'puppetlabs-firewall', '8.1.0'
    mod 'puppetlabs-inifile', '6.1.1'
    mod 'puppetlabs-concat', '9.0.2'
    mod 'puppet-systemd', '7.1.0'
    ```

=== "environment.conf"
    This file is mostly redundant, as it states the values below, which are the defaults Puppet works with. I only included it in case I had a unique use-case that required a more custom approach to the folder structure. (This is very unlikely.)

    ```ini title="environment.conf"
    # Specifies the module path for this environment
    modulepath = modules:$basemodulepath

    # Optional: Specifies the manifest file for this environment
    manifest = manifests/site.pp

    # Optional: Set the environment's config_version (e.g., a script to output the current Git commit hash)
    # config_version = scripts/config_version.sh

    # Optional: Set the environment's environment_timeout
    # environment_timeout = 0
    ```

=== "site.pp"
    This file is kind of like an inventory of devices and their states. In this example, you will see that the Puppet server itself is named `lab-puppet-01.bunny-lab.io` and the agent device is named `fedora.bunny-lab.io`. By "including" modules like PuppetDB, it installs the PuppetDB role and configures it automatically on the Puppet Server. By stating the firewall rules, it also ensures that those firewall ports stay open no matter what; if they close, Puppet will re-open them automatically. Port 8140 is for agent communication, and port 8081 is for PuppetDB functionality.

    !!! example "Neofetch Example"
        In the example configuration below, you will notice this section. This tells Puppet to deploy the neofetch package to any device that has `include neofetch` written. Grouping devices, etc., is currently undocumented as of writing this.
        ``` puppet
        # Node definition for the Fedora agent
        node 'fedora.bunny-lab.io' {
          # Include the neofetch class to ensure Neofetch is installed
          include neofetch
        }
        ```

    ```puppet title="manifests/site.pp"
    # Node definition for the Puppet Server
    node 'lab-puppet-01.bunny-lab.io' {

      # Include the puppetdb class with custom parameters
      class { 'puppetdb':
        listen_address => '0.0.0.0',  # Allows access from all network interfaces
      }

      # Configure the Puppet Server to use PuppetDB
      include puppetdb
      include puppetdb::master::config

      # Ensure the required iptables rules are in place using Puppet's firewall resources
      firewall { '100 allow Puppet traffic on 8140':
        proto  => 'tcp',
        dport  => '8140',
        jump   => 'accept',  # Corrected parameter from action to jump
        chain  => 'INPUT',
        ensure => 'present',
      }

      firewall { '101 allow PuppetDB traffic on 8081':
        proto  => 'tcp',
        dport  => '8081',
        jump   => 'accept',  # Corrected parameter from action to jump
        chain  => 'INPUT',
        ensure => 'present',
      }
    }

    # Node definition for the Fedora agent
    node 'fedora.bunny-lab.io' {
      # Include the neofetch class to ensure Neofetch is installed
      include neofetch
    }

    # Default node definition (optional)
    node default {
      # This can be left empty or include common classes for all other nodes
    }
    ```

=== "init.pp"
    This is used by the neofetch class noted in the `site.pp` file. It is basically the declaration of how we want neofetch to be on the devices that include the neofetch "class". In this case, we don't care how it does it, but it will install Neofetch, whether that is through yum, dnf, or apt; these few lines of code are OS-agnostic. The formatting / philosophy is similar to the modules in Ansible playbooks and how they declare the "state" of things.

    ```puppet title="modules/neofetch/manifests/init.pp"
    class neofetch {
      package { 'neofetch':
        ensure => installed,
      }
    }
    ```
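
Before committing changes to Gitea, you can optionally syntax-check the manifests and the Puppetfile from any local working copy of the repository. A small sketch; the `/tmp/PuppetTest` path is just an assumed checkout location:

``` sh
# Validate Puppet manifest syntax
sudo /opt/puppetlabs/bin/puppet parser validate /tmp/PuppetTest/manifests/site.pp
sudo /opt/puppetlabs/bin/puppet parser validate /tmp/PuppetTest/modules/neofetch/manifests/init.pp

# Validate the Puppetfile syntax (does not install anything)
cd /tmp/PuppetTest && r10k puppetfile check
```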

### Storing Credentials to Gitea
We need to be able to pull down the data from Gitea's Puppet repository as the root user so that r10k can automatically pull down any changes made to the Puppet environments (e.g. `Production` and `Development`). Each Git branch represents a different Puppet environment. We will use an application token to do this.

Navigate to **Gitea > User (Top-Right) > Settings > Applications**

- Token Name: `Puppet r10k`
- Permissions: `Repository > Read Only`
- Click the "**Generate Token**" button to finish.

!!! warning "Securely Store the Application Token"
    It is critical that you store the token somewhere safe, like a password manager, as you will need to reference it later and might need it again if you re-build the r10k environment.

Now we want to configure Git to store the credentials for later use by r10k:
``` sh
# Enable Stored Credentials (We will address security concerns further down...)
sudo yum install -y git
sudo git config --global credential.helper store

# Clone the Git Repository Once to Store the Credentials (Use the Application Token as the password)
# Username: nicole.rappe
# Password: <Application Token Value>
sudo git clone https://git.bunny-lab.io/GitOps/Puppet.git /tmp/PuppetTest

# Verify the Credentials are Stored
sudo cat /root/.git-credentials

# Lock Down Permissions
sudo chmod 600 /root/.git-credentials

# Clean Up After Ourselves
sudo rm -rf /tmp/PuppetTest
```

Finally, we validate that everything is working by pulling down the Puppet environments using r10k on the Puppet Server:
``` sh
# Deploy Puppet Environments from Gitea
sudo /usr/local/bin/r10k deploy environment -p

# Validate r10k is Installing Modules in the Environments
sudo ls /etc/puppetlabs/code/environments/production/modules
sudo ls /etc/puppetlabs/code/environments/development/modules
```

!!! success "Successful Puppet Environment Deployment"
    If you got no errors about Puppetfile formatting or Gitea permissions, then you are good to move on to the next step.

## External Node Classifier (ENC)
An ENC allows you to define node-specific data, including the environment, on the Puppet Server. The agent requests its configuration, and the Puppet Server provides the environment and classes to apply.

**Advantages**:

- **Centralized Control**: Environments and classifications are managed from the server.
- **Security**: Agents cannot override their assigned environment.
- **Scalability**: Suitable for managing environments for hundreds or thousands of nodes.

### Create an ENC Script
``` sh
sudo mkdir -p /opt/puppetlabs/server/data/puppetserver/scripts/
```

```ruby title="/opt/puppetlabs/server/data/puppetserver/scripts/enc.rb"
#!/usr/bin/env ruby
# enc.rb

require 'yaml'

node_name = ARGV[0]

# Define environment assignments
node_environments = {
  'fedora.bunny-lab.io' => 'development',
  # Add more nodes and their environments as needed
}

environment = node_environments[node_name] || 'production'

# Define classes to include per node (optional)
node_classes = {
  'fedora.bunny-lab.io' => ['neofetch'],
  # Add more nodes and their classes as needed
}

classes = node_classes[node_name] || []

# Output the YAML document
output = {
  'environment' => environment,
  'classes' => classes
}

puts output.to_yaml
```

``` sh
# Ensure the File is Executable
sudo chmod +x /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```
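
You can run the ENC script by hand to confirm it returns the expected classification for a node before wiring it into the Puppet Server:

``` sh
# Manually classify a node to verify the script output
sudo /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb fedora.bunny-lab.io

# Expected output, based on the script above:
# ---
# environment: development
# classes:
# - neofetch
```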

### Configure Puppet Server to Use the ENC
Edit the Puppet Server's `puppet.conf` and set the `node_terminus` and `external_nodes` parameters:
```ini title="/etc/puppetlabs/puppet/puppet.conf"
[master]
node_terminus = exec
external_nodes = /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```

Restart the Puppet Server service:
``` sh
sudo systemctl restart puppetserver
```

## Pull Puppet Environments from Gitea
At this point, we can tell r10k to pull down the Puppet environments (e.g. `Production` and `Development`) that we made in the Gitea repository in previous steps. Run the following command on the Puppet Server to pull down the environments. This will download / configure any Puppet Forge modules as well as any hand-made modules such as Neofetch.
``` sh
sudo /usr/local/bin/r10k deploy environment -p

# OPTIONAL: You can pull down a specific environment instead of all environments if you specify the branch name, seen here:
#sudo /usr/local/bin/r10k deploy environment development -p
```
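
r10k only pulls changes when it is invoked, so if you want the Puppet Server to track Gitea without manual intervention, one option is a simple root cron job. This is a sketch of one possible approach (the 15-minute interval is arbitrary); a systemd timer or a Gitea webhook would work just as well:

``` sh
# Edit root's crontab
sudo crontab -e

# Then add a line like the following (runs every 15 minutes):
# */15 * * * * /usr/local/bin/r10k deploy environment -p >> /var/log/r10k-deploy.log 2>&1
```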

### Apply Configuration to Puppet Server
At this point, we are going to apply the configuration from Gitea to the Puppet Server itself so that it installs PuppetDB automatically, configures firewall ports, and handles other small things needed to function properly. Once this is completed, you can add additional agents / managed devices and they will be able to communicate with the Puppet Server over the network.
``` sh
sudo /opt/puppetlabs/bin/puppet agent -t
```

!!! success "Puppet Server Deployed and Validated"
    Congratulations! You have successfully deployed an entire Puppet Server, integrated Gitea and r10k to deploy environment changes in a versioned manner, and validated functionality against a managed device using the agent (such as a spare laptop/desktop). If you got this far, be proud, because it took me over 12 hours to write this documentation so that you can deploy a server in less than 30 minutes.