Documentation Restructure
All checks were successful
Automatic Documentation Deployment / Sync Docs to https://kb.bunny-lab.io (push) Successful in 5s
BIN
deployments/automation/ansible/awx/awx.png
Normal file
Binary file not shown. (Size: 122 KiB)
146
deployments/automation/ansible/awx/deployment/awx-in-minikube.md
Normal file
@@ -0,0 +1,146 @@
---
tags:
- Ansible
- AWX
- Automation
---

# Deploy AWX on Minikube Cluster
Minikube Cluster-based deployment of Ansible AWX (formerly known as Ansible Tower).

!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later.
## Install Minikube Cluster
### Update the Ubuntu Server
``` sh
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```

### Download and Install Minikube (Ubuntu Server)
Additional Documentation: https://minikube.sigs.k8s.io/docs/start/
``` sh
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb

# Install Docker and Common Tools
sudo apt install docker.io nfs-common iptables nano htop -y

# Add Your User to the Docker Group
sudo usermod -aG docker nicole
```
!!! warning
    Be sure to change the `nicole` username in the `sudo usermod -aG docker nicole` command to your own local username.

### Fully Log Out, then Sign Back into the Server
Group membership changes only take effect on a new login session.
``` sh
exit
```
### Validate that Permissions Allow Running Docker Commands as Non-Root
``` sh
docker ps
```

### Initialize Minikube Cluster
Additional Documentation: https://github.com/ansible/awx-operator
``` sh
minikube start --driver=docker
minikube kubectl -- get nodes
minikube kubectl -- get pods -A
```
### Make Sure the Minikube Cluster Automatically Starts on Boot
```ini title="/etc/systemd/system/minikube.service"
[Unit]
Description=Minikube service
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=nicole
ExecStart=/usr/bin/minikube start --driver=docker
ExecStop=/usr/bin/minikube stop

[Install]
WantedBy=multi-user.target
```
!!! warning
    Be sure to change the `nicole` username in the `User=nicole` line of the config to your own local username.
!!! info
    You can add `--addons=ingress` to the `minikube start` commands if you want Minikube's ingress controller. Omit it if you plan on running AWX behind an existing reverse proxy using a "**NodePort**" connection.

### Restart the Service Daemon and Enable/Start Minikube on Boot
``` sh
sudo systemctl daemon-reload
sudo systemctl enable minikube
sudo systemctl start minikube
```
### Make a Command Alias for `kubectl`
Add the following to the bottom of your existing profile file:
```sh title="~/.bashrc"
...
alias kubectl="minikube kubectl --"
```
!!! tip
    If this is a virtual machine, now would be the best time to take a checkpoint/snapshot of the VM before moving forward, in case you need to roll back the server(s) after a misconfiguration.
## Make AWX Operator Kustomization File
Find the latest tag version here: https://github.com/ansible/awx-operator/releases
```yaml title="kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```yaml title="awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx

---
apiVersion: v1
kind: Service
metadata:
  name: awx-service
  namespace: awx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080 # Choose an available port in the range of 30000-32767
  selector:
    app.kubernetes.io/name: awx-web
```
### Apply the Configuration Files
Run from the same directory as the `kustomization.yml` file.
``` sh
kubectl apply -k .
```
!!! info
    If you get any errors, especially ones relating to "CRD"s, wait 30 seconds and re-run the `kubectl apply -k .` command; the Custom Resource Definitions need a moment to register before the `AWX` resource in `awx.yml` can be applied.
### View Logs / Track Deployment Progress
```
kubectl logs -n awx awx-operator-controller-manager -c awx-manager
```
!!! note
    The operator pod name normally includes a randomized suffix (e.g. `awx-operator-controller-manager-xxxxx`); run `kubectl get pods -n awx` to find the exact name.

### Get AWX WebUI Address
```
minikube service -n awx awx-service --url
```

### Get WebUI Password
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
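The pipeline above works because Kubernetes stores secret values base64-encoded: the jsonpath expression extracts the encoded string, and `base64 --decode` recovers the plaintext. A self-contained demo of just the decode step (the encoded value here is a made-up example, not a real password):

```shell
# Kubernetes would return something like this from the jsonpath query:
encoded="aHVudGVyMg=="
# Piping it through base64 --decode yields the usable password:
printf '%s' "$encoded" | base64 --decode ; echo   # prints: hunter2
```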
198
deployments/automation/ansible/awx/deployment/awx-operator.md
Normal file
@@ -0,0 +1,198 @@
---
tags:
- Ansible
- AWX
- Automation
---

**Purpose**:
Deploy a Rancher RKE2 Cluster-based Ansible AWX Operator server. This can scale to a larger, more enterprise-grade environment as needed.

!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 22.04** or later with at least 16GB of memory, 8 CPU cores, and 64GB of storage.
## Deploy Rancher RKE2 Cluster
You will need to deploy a [Rancher RKE2 Cluster](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md) on an Ubuntu Server-based virtual machine. After this phase, you can focus on the Ansible AWX-specific deployment. A single ControlPlane node is all you need to set up AWX; additional infrastructure can be added after the fact.

!!! tip "Checkpoint/Snapshot Reminder"
    If this is a virtual machine, after deploying the RKE2 cluster and validating it functions, now would be the best time to take a checkpoint/snapshot of the VM before moving forward, in case you need to roll back the server(s) if you misconfigure something during deployment.
## Server Configuration
The AWX deployment consists of three YAML files that configure the AWX containers as well as the NGINX ingress networking. You will need all of them in the same folder for the deployment to succeed. For the purpose of this example, we will put all of them into a folder located at `/awx`.

``` sh
# Make the deployment folder
mkdir -p /awx
cd /awx
```

We need to increase filesystem access limits.

Temporarily set the limits now:
``` sh
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```

Permanently set the limits for later:
```ini title="/etc/sysctl.conf"
# <End of File>
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

Apply the settings:
``` sh
sudo sysctl -p
```
### Create AWX Deployment Configuration Files
You will need to create these files, all in the same directory, using the content of the examples below. Be sure to replace values such as `host: awx.bunny-lab.io` in the `ingress.yml` file with a hostname you can point a DNS server / record at.
=== "awx.yml"

    ```yaml title="/awx/awx.yml"
    apiVersion: awx.ansible.com/v1beta1
    kind: AWX
    metadata:
      name: awx
    spec:
      service_type: ClusterIP
    ```

=== "ingress.yml"

    ```yaml title="/awx/ingress.yml"
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress
    spec:
      rules:
      - host: awx.bunny-lab.io
        http:
          paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: awx-service
                port:
                  number: 80
    ```

=== "kustomization.yml"

    ```yaml title="/awx/kustomization.yml"
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - github.com/ansible/awx-operator/config/default?ref=2.10.0
      - awx.yml
      - ingress.yml
    images:
      - name: quay.io/ansible/awx-operator
        newTag: 2.10.0
    namespace: awx
    ```
## Ensure the Kubernetes Cluster is Ready
Check that the status of the cluster is ready by running the following commands; the output should appear similar to the [Rancher RKE2 Example](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md#install-helm-rancher-certmanager-jetstack-rancher-and-longhorn):
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get pods --all-namespaces
```
## Ensure the Timezone / Date is Accurate
Make sure that the Kubernetes environment and the node itself have accurate time. Among other reasons: if you are using Ansible with Kubernetes authentication and the date/time is inaccurate, authentication will not work correctly.
``` sh
sudo timedatectl set-timezone America/Denver
```
## Deploy AWX using Kustomize
Now it is time to tell Kubernetes to read the configuration files using Kustomize (*built in to newer versions of Kubernetes*) to deploy AWX into the cluster.
!!! warning "Be Patient"
    The AWX deployment process can take a while. Use the commands in the [Troubleshooting](./awx-operator.md#troubleshooting) section if you want to track the progress after running the commands below.
``` sh
cd /awx
kubectl apply -k .
```

If you get an error that looks like the one below, wait about 10 seconds and re-run the `kubectl apply -k .` command; the error should be gone on the second attempt.
``` sh
error: resource mapping not found for name: "awx" namespace: "awx" from ".": no matches for kind "AWX" in version "awx.ansible.com/v1beta1"
ensure CRDs are installed first
```

To check on the progress of the deployment, you can run the following command: `kubectl get pods -n awx`

You will know that AWX is ready to be accessed in the next step if the output looks like below:
```
NAME                                               READY   STATUS    RESTARTS        AGE
awx-operator-controller-manager-7b9ccf9d4d-cnwhc   2/2     Running   2 (3m41s ago)   9m41s
awx-postgres-13-0                                  1/1     Running   0               6m12s
awx-task-7b5f8cf98c-rhrpd                          4/4     Running   0               4m46s
awx-web-6dbd7df9f7-kn8k2                           3/3     Running   0               93s
```
!!! warning "Be Patient - Wait 20 Minutes"
    The process may take a while to spin up AWX, PostgreSQL, Redis, and other workloads necessary for AWX to function. Depending on the speed of the server, it may take between 5 and 20 minutes for AWX to be ready to connect to. You can watch the progress via the CLI commands listed above, or directly in Rancher's WebUI at https://rancher.bunny-lab.io.
## Access the AWX WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible to the host's network (such as your personal computer) unless you set up a DNS record pointing to it. In the example above, you would have an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 Cluster host.

The RKE2 Cluster will translate `awx.bunny-lab.io` to the AWX web-service container(s) automatically, because the Kubernetes Cluster runs an internal reverse proxy. SSL certificates generated within Kubernetes/Rancher RKE2 are not covered in this documentation, but suffice to say, the AWX server can be configured behind another reverse proxy such as Traefik, or via Cert-Manager / JetStack; setting that up goes outside the scope of this document.
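As a sketch, the DNS record described above might look like the following BIND-style zone entry (hypothetical zone file; substitute your RKE2 host's actual internal IP):

```
; In the bunny-lab.io zone file
awx    IN    A    192.168.3.10
```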
### Traefik Implementation
If you want to put this behind Traefik, you will need a slightly unusual Traefik configuration file, seen below, to transparently pass TLS traffic through to the RKE2 Cluster's internal reverse proxy.
```yaml title="awx.bunny-lab.io.yml"
tcp:
  routers:
    awx-tcp-router:
      rule: "HostSNI(`awx.bunny-lab.io`)"
      entryPoints: ["websecure"]
      service: awx-nginx-service
      tls:
        passthrough: true
#      middlewares:
#        - auth-bunny-lab-io # Referencing the Keycloak Server

  services:
    awx-nginx-service:
      loadBalancer:
        servers:
          - address: "192.168.3.10:443"
```
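For this dynamic configuration file to be picked up, Traefik's static configuration must watch the directory the file lives in. A minimal sketch using the file provider (the `/etc/traefik/conf.d` path is an assumption; use wherever your dynamic config files live):

```yaml
# Excerpt of Traefik's static configuration (e.g. traefik.yml)
providers:
  file:
    directory: /etc/traefik/conf.d
    watch: true
```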
!!! success "Accessing the AWX WebUI"
    If you have gotten this far, you should now be able to access AWX via the WebUI and log in.

    - AWX WebUI: https://awx.bunny-lab.io

    ![AWX WebUI](awx.png)

You may see a prompt that says "AWX is currently upgrading. This page will refresh when complete". Be patient and let it finish; when it is done, it will take you to a login page.

AWX generates its own secure admin password the first time it is set up. The username is `admin`. You can run the following command to retrieve the password:
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
## Change Admin Password
You will want to change the admin password straight away. Use the following navigation structure to find where to change the password:
``` mermaid
graph LR
  A[AWX Dashboard] --> B[Access]
  B --> C[Users]
  C --> D[admin]
  D --> E[Edit]
```
## Troubleshooting
You may want to track the deployment process to verify that it is actually doing something. There are a few Kubernetes commands that can assist with this, listed below.

### AWX-Manager Deployment Logs
You may want to track the internal logs of the `awx-manager` container, which is responsible for the majority of the automated deployment of AWX. You can do so by running the command below:
```
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
!!! note
    The `-6c58d59d97-qj2n2` suffix at the end of the Kubernetes "Pod" name in the command above is randomized. You will need to change it based on the name shown when running the `kubectl get pods -n awx` command.
@@ -0,0 +1,69 @@
---
tags:
- Ansible
- AWX
- Automation
---

## Upgrading from 2.10.0 to 2.19.1+
There is a known issue with upgrading / installing AWX Operator beyond version 2.10.0, because the PostgreSQL database upgrades from 13.0 to 15.0 and the data directory permissions changed. The following workflow will help get past that and adjust the permissions in a way that allows the upgrade to proceed successfully. If this is a clean installation of 2.19.1 that is not working yet, you can also perform this step. (It won't work out of the box because of this bug; the developers of AWX have not implemented an official fix at this time.)
### Create a Temporary Pod to Adjust Permissions
We need to create a pod that mounts the PostgreSQL PVC and makes changes to the permissions, then destroy the v15.0 pod so the AWX Operator automatically regenerates it.
```yaml title="/awx/temp-pod.yml"
apiVersion: v1
kind: Pod
metadata:
  name: temp-pod
  namespace: awx
spec:
  containers:
  - name: temp-container
    image: busybox
    command: ['sh', '-c', 'sleep 3600']
    volumeMounts:
    - mountPath: /var/lib/pgsql/data
      name: postgres-data
  volumes:
  - name: postgres-data
    persistentVolumeClaim:
      claimName: postgres-15-awx-postgres-15-0
  restartPolicy: Never
```
``` sh
# Deploy the Temporary Pod
kubectl apply -f /awx/temp-pod.yml

# Open a Shell in the Temporary Pod
kubectl exec -it temp-pod -n awx -- sh

# Adjust Permissions of the PostgreSQL 15.0 Database Folder
# (UID 26 is the postgres user inside the PostgreSQL container image)
chown -R 26:root /var/lib/pgsql/data
exit

# Delete the Temporary Pod
kubectl delete pod temp-pod -n awx

# Delete the Crashlooped PostgreSQL 15.0 Pod to Regenerate It
kubectl delete pod awx-postgres-15-0 -n awx

# Track the Migration
kubectl get pods -n awx
kubectl logs -n awx awx-postgres-15-0
```
!!! warning "Be Patient"
    This upgrade may take a few minutes depending on the speed of the node it is running on. Be patient and wait until the output looks something similar to this:
    ```
    root@awx:/awx# kubectl get pods -n awx
    NAME                                               READY   STATUS      RESTARTS   AGE
    awx-migration-24.6.1-bh5vb                         0/1     Completed   0          9m55s
    awx-operator-controller-manager-745b55d94b-2dhvx   2/2     Running     0          25m
    awx-postgres-15-0                                  1/1     Running     0          12m
    awx-task-7946b46dd6-7z9jm                          4/4     Running     0          10m
    awx-web-9497647b4-s4gmj                            3/3     Running     0          10m
    ```

If you see a migration pod, like the one in the above example, you can feel free to delete it with the following command: `kubectl delete pod awx-migration-24.6.1-bh5vb -n awx`.
38
deployments/automation/index.md
Normal file
@@ -0,0 +1,38 @@
---
tags:
- Operations
- Automation
- Index
- Documentation
---

# Automation
## Purpose
Infrastructure automation, orchestration, and workflow tooling.

## Includes
- Ansible and Puppet patterns
- Inventory and credential conventions
- CI/CD and automation notes

## New Document Template
````markdown
# <Document Title>
## Purpose
<what this automation doc exists to describe>

!!! info "Assumptions"
    - <platform or tooling assumptions>
    - <privilege assumptions>

## Inputs
- <variables, inventories, secrets>

## Procedure
```sh
# Commands or job steps
```

## Validation
- <command + expected result>
````
219
deployments/automation/puppet/deployment/puppet-bolt.md
Normal file
@@ -0,0 +1,219 @@
---
tags:
- Puppet
- Automation
---

**Purpose**: Puppet Bolt can be leveraged in an Ansible-esque manner to connect to and enroll devices such as Windows servers, Linux servers, and various workstations. To this end, it can be used to run ad-hoc tasks or enroll devices into a centralized Puppet server (e.g. `LAB-PUPPET-01.bunny-lab.io`).

!!! note "Assumptions"
    This deployment assumes you are deploying Puppet Bolt onto the same server as Puppet. If you have not already, follow the [Puppet Deployment](./puppet.md) documentation before continuing with the Puppet Bolt deployment.
## Initial Preparation
``` sh
# Install Bolt Repository
sudo rpm -Uvh https://yum.puppet.com/puppet-tools-release-el-9.noarch.rpm
sudo yum install -y puppet-bolt

# Verify Installation
bolt --version

# Clone Puppet Bolt Repository into Bolt Directory
#sudo git clone https://git.bunny-lab.io/GitOps/Puppet-Bolt.git /etc/puppetlabs/bolt <-- Disabled for now
sudo mkdir -p /etc/puppetlabs/bolt
sudo chown -R $(whoami):$(whoami) /etc/puppetlabs/bolt
# Use a capital "X" so directories keep their execute (traversal) bit
sudo chmod -R u+rwX,go+rX /etc/puppetlabs/bolt
#sudo chmod -R u+rwx,g+rx,o+rx /etc/puppetlabs/bolt/modules/bolt <-- Disabled for now

# Initialize a New Bolt Project
cd /etc/puppetlabs/bolt
bolt project init bunny_lab
```
## Configuring Inventory
At this point, you will want to create an inventory file that you can use for tracking devices. For now, this will have hard-coded credentials until a cleaner method is figured out.
``` yaml title="/etc/puppetlabs/bolt/inventory.yaml"
# Inventory file for Puppet Bolt
groups:
  - name: linux_servers
    targets:
      - lab-auth-01.bunny-lab.io
      - lab-auth-02.bunny-lab.io
    config:
      transport: ssh
      ssh:
        host-key-check: false
        private-key: "/etc/puppetlabs/bolt/id_rsa_OpenSSH" # (1)
        user: nicole
        native-ssh: true

  - name: windows_servers
    config:
      transport: winrm
      winrm:
        realm: BUNNY-LAB.IO
        ssl: true
        user: "BUNNY-LAB\\nicole.rappe"
        password: DomainPassword # (2)
    groups:
      - name: domain_controllers
        targets:
          - lab-dc-01.bunny-lab.io
          - lab-dc-02.bunny-lab.io
      - name: dedicated_game_servers
        targets:
          - lab-games-01.bunny-lab.io
          - lab-games-02.bunny-lab.io
          - lab-games-03.bunny-lab.io
          - lab-games-04.bunny-lab.io
          - lab-games-05.bunny-lab.io
      - name: hyperv_hosts
        targets:
          - virt-node-01.bunny-lab.io
          - bunny-node-02.bunny-lab.io
```
1. Point the inventory file at the private key (if you use key-based authentication instead of password-based SSH authentication).
2. Replace this with your actual domain admin password.
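One way to avoid hard-coding the WinRM password is Bolt's `prompt` inventory plugin, which asks for the value at run time. A hedged sketch of what the `password:` key could look like instead (assumes a Bolt release that ships the built-in `prompt` plugin):

```yaml
# Replaces the hard-coded "password: DomainPassword" line
password:
  _plugin: prompt
  message: "Enter the BUNNY-LAB domain password"
```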
### Validate Bolt Inventory Works
If the inventory file is created correctly, you will see the hosts listed when you run the command below:
``` sh
cd /etc/puppetlabs/bolt
bolt inventory show
```
??? example "Example Output of `bolt inventory show`"
    You should expect to see output similar to the following:
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt inventory show
    Targets
      lab-auth-01.bunny-lab.io
      lab-auth-02.bunny-lab.io
      lab-dc-01.bunny-lab.io
      lab-dc-02.bunny-lab.io
      lab-games-01.bunny-lab.io
      lab-games-02.bunny-lab.io
      lab-games-03.bunny-lab.io
      lab-games-04.bunny-lab.io
      lab-games-05.bunny-lab.io
      virt-node-01.bunny-lab.io
      bunny-node-02.bunny-lab.io

    Inventory source
      /tmp/bolt-lab/inventory.yaml

    Target count
      11 total, 11 from inventory, 0 adhoc

    Additional information
      Use the '--targets', '--query', or '--rerun' option to view specific targets
      Use the '--detail' option to view target configuration and data
    ```
## Configuring Kerberos
If you work with Windows-based devices in a domain environment, you will need to set up Puppet so it can perform Kerberos authentication while interacting with Windows devices. This involves a little bit of setup, but nothing too crazy.

### Install Krb5
We need to install the necessary software on the Puppet server to allow Kerberos authentication to occur.
=== "Rocky, CentOS, RHEL, Fedora"

    ``` sh
    sudo yum install krb5-workstation
    ```

=== "Debian, Ubuntu"

    ``` sh
    sudo apt-get install krb5-user
    ```

=== "SUSE"

    ``` sh
    sudo zypper install krb5-client
    ```
### Prepare `/etc/krb5.conf` Configuration
We need to configure Kerberos so it knows how to reach the domain. This is achieved by editing `/etc/krb5.conf` to look similar to the following, substituting the example values with your own domain:
``` ini
[libdefaults]
    default_realm = BUNNY-LAB.IO
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 7d
    forwardable = true

[realms]
    BUNNY-LAB.IO = {
        kdc = LAB-DC-01.bunny-lab.io # (1)
        kdc = LAB-DC-02.bunny-lab.io # (2)
        admin_server = LAB-DC-01.bunny-lab.io # (3)
    }

[domain_realm]
    .bunny-lab.io = BUNNY-LAB.IO
    bunny-lab.io = BUNNY-LAB.IO
```
1. Your primary domain controller
2. Your secondary domain controller (if applicable)
3. Your Primary Domain Controller (PDC)
### Initialize Kerberos Connection
Now we need to log into the domain using (preferably) domain administrator credentials, such as in the example below. You will be prompted to enter your domain password.
``` sh
kinit nicole.rappe@BUNNY-LAB.IO
klist
```
??? example "Example Output of `klist`"
    You should expect to see output similar to the following. Finding a way to ensure the Kerberos tickets live longer is still under research, as 7 days is not exactly practical for long-term deployments.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# klist
    Ticket cache: FILE:/tmp/krb5cc_0
    Default principal: nicole.rappe@BUNNY-LAB.IO

    Valid starting       Expires              Service principal
    11/14/2024 21:57:03  11/15/2024 07:57:03  krbtgt/BUNNY-LAB.IO@BUNNY-LAB.IO
            renew until 11/21/2024 21:57:03
    ```
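One possible avenue for the ticket-lifetime problem (untested here, and subject to whatever renewable-lifetime cap the domain's KDC policy enforces): request renewable tickets in `/etc/krb5.conf` and renew them periodically with `kinit -R`.

```ini
# Hypothetical additions to [libdefaults] in /etc/krb5.conf
renew_lifetime = 30d
forwardable = true
```

A scheduled job running `kinit -R` before each expiry would then keep the ticket cache fresh, up to the renewable lifetime the domain actually grants.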
### Prepare Windows Devices
Windows devices need to be prepared ahead of time in order for WinRM functionality to work as expected. I have prepared a PowerShell script that you can run on each device that needs remote management functionality. You can port this script based on your needs and deploy it via whatever methods are available to you (e.g. Ansible, Group Policies, existing RMM software, or manually via remote desktop).

You can find the [WinRM Enablement Script](../../../../workflows/operations/automation/ansible/enable-winrm-on-windows-devices.md) in the Bunny Lab documentation.
## Ad-Hoc Command Examples
At this point, you should finally be ready to connect to Windows and Linux devices and run commands on them ad-hoc. Puppet Bolt Modules and Plans will be discussed further down the road.
??? example "Example Output of `bolt command run whoami -t domain_controllers --no-ssl-verify`"
    You should expect to see output similar to the following. This is what you will see when leveraging WinRM via Kerberos on Windows devices.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt command run whoami -t domain_controllers --no-ssl-verify
    CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
    Started on lab-dc-01.bunny-lab.io...
    Started on lab-dc-02.bunny-lab.io...
    Finished on lab-dc-02.bunny-lab.io:
      bunny-lab\nicole.rappe
    Finished on lab-dc-01.bunny-lab.io:
      bunny-lab\nicole.rappe
    Successful on 2 targets: lab-dc-01.bunny-lab.io,lab-dc-02.bunny-lab.io
    Ran on 2 targets in 1.91 sec
    ```
??? example "Example Output of `bolt command run whoami -t linux_servers`"
    You should expect to see output similar to the following. This is what you will see when leveraging native SSH on Linux devices.
    ``` sh
    [root@lab-puppet-01 bolt-lab]# bolt command run whoami -t linux_servers
    CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
    Started on lab-auth-01.bunny-lab.io...
    Started on lab-auth-02.bunny-lab.io...
    Finished on lab-auth-02.bunny-lab.io:
      nicole
    Finished on lab-auth-01.bunny-lab.io:
      nicole
    Successful on 2 targets: lab-auth-01.bunny-lab.io,lab-auth-02.bunny-lab.io
    Ran on 2 targets in 0.68 sec
    ```
428
deployments/automation/puppet/deployment/puppet.md
Normal file
@@ -0,0 +1,428 @@
---
tags:
- Puppet
- Automation
---

**Purpose**:
Puppet is another declarative configuration management tool that excels in system configuration and enforcement. Like Ansible, it is designed to maintain the desired state of a system's configuration, but it uses a client-server (master-agent) architecture by default.

!!! note "Assumptions"
    This document assumes you are deploying Puppet Server onto Rocky Linux 9.4. Any version of RHEL/CentOS/Alma/Rocky should behave similarly.
## Architectural Overview
### Detailed
``` mermaid
sequenceDiagram
    participant Gitea as Gitea Repo (Puppet Environment)
    participant r10k as r10k (Environment Deployer)
    participant PuppetMaster as Puppet Server (lab-puppet-01.bunny-lab.io)
    participant Agent as Managed Agent (fedora.bunny-lab.io)
    participant Neofetch as Neofetch Package

    %% PuppetMaster pulling environment updates
    PuppetMaster->>Gitea: Pull Puppet Environment updates
    Gitea-->>PuppetMaster: Send latest Puppet repository code

    %% r10k deployment process
    PuppetMaster->>r10k: Deploy environment with r10k
    r10k->>PuppetMaster: Fetch and install Puppet modules
    r10k-->>PuppetMaster: Compile environments and apply updates

    %% Agent enrollment process
    Agent->>PuppetMaster: Request to enroll (Agent Check-in)
    PuppetMaster->>Agent: Verify SSL Certificate & Authenticate
    Agent-->>PuppetMaster: Send facts about system (Facter)

    %% PuppetMaster compiles catalog for the agent
    PuppetMaster->>PuppetMaster: Compile Catalog
    PuppetMaster->>PuppetMaster: Check if 'neofetch' is required in manifest
    PuppetMaster-->>Agent: Send compiled catalog with 'neofetch' installation instructions

    %% Agent installs neofetch
    Agent->>Agent: Check if 'neofetch' is installed
    Agent--xNeofetch: 'neofetch' not installed
    Agent->>Neofetch: Install 'neofetch'
    Neofetch-->>Agent: Installation complete

    %% Agent reports back to PuppetMaster
    Agent->>PuppetMaster: Report status (catalog applied and neofetch installed)
```
### Simplified
``` mermaid
sequenceDiagram
    participant Gitea as Gitea (Puppet Repository)
    participant PuppetMaster as Puppet Server
    participant Agent as Managed Agent (fedora.bunny-lab.io)
    participant Neofetch as Neofetch Package

    %% PuppetMaster pulling environment updates
    PuppetMaster->>Gitea: Pull environment updates
    Gitea-->>PuppetMaster: Send updated code

    %% Agent enrollment and catalog request
    Agent->>PuppetMaster: Request catalog (Check-in)
    PuppetMaster->>Agent: Send compiled catalog (neofetch required)

    %% Agent installs neofetch
    Agent->>Neofetch: Install neofetch
    Neofetch-->>Agent: Installation complete

    %% Agent reports back
    Agent->>PuppetMaster: Report catalog applied (neofetch installed)
```
### Breakdown
|
||||
#### 1. **PuppetMaster Pulls Updates from Gitea**
- The PuppetMaster uses `r10k` to fetch the latest environment updates from Gitea. These updates include manifests, hiera data, and modules for the specified Puppet environments.

#### 2. **PuppetMaster Compiles Catalogs and Modules**
- After pulling updates, the PuppetMaster compiles the latest node-specific catalogs based on the manifests and modules. It ensures the configuration is ready for agents to retrieve.

#### 3. **Agent (fedora.bunny-lab.io) Checks In**
- The Puppet agent on `fedora.bunny-lab.io` checks in with the PuppetMaster for its catalog. This request tells the PuppetMaster to compile the node's desired configuration.

#### 4. **Agent Downloads and Applies the Catalog**
- The agent retrieves its compiled catalog from the PuppetMaster. It compares the current system state with the desired state outlined in the catalog.

#### 5. **Agent Installs `neofetch`**
- The agent identifies that `neofetch` is missing and installs it using the system's package manager. The installation follows the directives in the catalog.

#### 6. **Agent Reports Success**
- Once changes are applied, the agent sends a report back to the PuppetMaster. The report includes details of the changes made, confirming `neofetch` was installed.

## Deployment Steps:
You will need to perform a few steps outlined in the [official Puppet documentation](https://www.puppet.com/docs/puppet/7/install_puppet.html) to get a Puppet server operational. A summarized workflow is seen below:

### Install Puppet Repository
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Add Puppet Repository / Enable Puppet on YUM
sudo rpm -Uvh https://yum.puppet.com/puppet7-release-el-9.noarch.rpm
```

### Install Puppet Server
**Installation Scope**: Puppet Server
``` sh
# Install the Puppet Server
sudo yum install -y puppetserver
sudo systemctl enable --now puppetserver

# Validate Successful Deployment
exec bash
puppetserver -v
```

### Install Puppet Agent
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Install Puppet Agent (This will already be installed on the Puppet Server)
sudo yum install -y puppet-agent

# Enable the Puppet Agent
sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true

# Configure Puppet Server to Connect To
puppet config set server lab-puppet-01.bunny-lab.io --section main

# Establish Secure Connection to Puppet Server
puppet ssl bootstrap

# ((On the Puppet Server))
# You will see an error stating: "Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (fedora.bunny-lab.io)."
# Run the following command (as root) on the Puppet Server to generate a certificate
sudo su
puppetserver ca sign --certname fedora.bunny-lab.io
```

#### Validate Agent Functionality
At this point, you want to ensure that the device being managed by the agent is able to pull down configurations from the Puppet Server. You will know if it worked by getting a message similar to `Notice: Applied catalog in X.XX seconds` after running the following command:
``` sh
puppet agent --test
```

## Install r10k
At this point, we need to configure Gitea as the storage repository for the Puppet "Environments" (e.g. `Production` and `Development`). We can do this by leveraging a tool called "r10k" which pulls a Git repository and configures it as the environment in Puppet.
``` sh
# Install r10k Pre-Requisites
sudo dnf install -y ruby ruby-devel gcc make

# Install r10k Gem (The Software)
# Note: If you encounter any issues with permissions, you can install the gem with "sudo gem install r10k --no-document".
sudo gem install r10k

# Verify the Installation (Run this as a non-root user)
r10k version
```

### Configure r10k
``` sh
# Create the r10k Configuration Directory
sudo mkdir -p /etc/puppetlabs/r10k

# Create the r10k Configuration File
sudo nano /etc/puppetlabs/r10k/r10k.yaml
```

```yaml title="/etc/puppetlabs/r10k/r10k.yaml"
---
# Cache directory for r10k
cachedir: '/var/cache/r10k'

# Sources define which repositories contain environments
# (Use the HTTPS clone URL here, since the stored Git credentials configured later in this guide are for HTTPS)
sources:
  puppet:
    remote: 'https://git.bunny-lab.io/GitOps/Puppet.git'
    basedir: '/etc/puppetlabs/code/environments'
```

``` sh
# Lockdown the Permissions of the Configuration File
sudo chmod 600 /etc/puppetlabs/r10k/r10k.yaml

# Create r10k Cache Directory
sudo mkdir -p /var/cache/r10k
sudo chown -R puppet:puppet /var/cache/r10k
```

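Before touching the live file, it can help to see the expected YAML shape in one piece. The sketch below writes a scratch copy to `/tmp` (so the real `/etc/puppetlabs/r10k/r10k.yaml` is untouched) and checks the fields this guide relies on; the paths and remote are the same ones used above.

```shell
# Write a scratch copy of the r10k config and sanity-check the fields this guide relies on.
# Uses /tmp so the real /etc/puppetlabs/r10k/r10k.yaml is untouched.
cat > /tmp/r10k.yaml <<'EOF'
---
cachedir: '/var/cache/r10k'
sources:
  puppet:
    remote: 'https://git.bunny-lab.io/GitOps/Puppet.git'
    basedir: '/etc/puppetlabs/code/environments'
EOF

grep -q "^cachedir: '/var/cache/r10k'" /tmp/r10k.yaml && echo "cachedir OK"
grep -q "remote: 'https://" /tmp/r10k.yaml && echo "remote uses HTTPS"
grep -q "basedir: '/etc/puppetlabs/code/environments'" /tmp/r10k.yaml && echo "basedir OK"
```

If any of the three checks stays silent, compare your file against the listing above before running a deploy.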
## Configure Gitea
At this point, we need to set up the branches and file/folder structure of the Puppet repository on Gitea.

You will make a repository on Gitea with the following files and structure as noted by each file's title. You will make a mirror copy of all of the files below in both the `Production` and `Development` branches of the repository. For the sake of this example, the repository will be located at `https://git.bunny-lab.io/GitOps/Puppet.git`

!!! example "Example Agent & Neofetch"
    You will notice there is a section for `fedora.bunny-lab.io` as well as mentions of `neofetch`. These are purely examples in my homelab of a computer I was testing against during the development of the Puppet Server and associated documentation. Feel free to omit the entire `modules/neofetch/manifests/init.pp` file from the Gitea repository, as well as remove this entire section from the `manifests/site.pp` file:

    ``` puppet
    # Node definition for the Fedora agent
    node 'fedora.bunny-lab.io' {
      # Include the neofetch class to ensure Neofetch is installed
      include neofetch
    }
    ```

=== "Puppetfile"
    This file is used by the Puppet Server (PuppetMaster) to prepare the environment by installing modules / Forge packages into the environment prior to devices getting their configurations. It's important, and the modules included in this example are the bare minimum to get things working with PuppetDB functionality.

    ```ruby title="Puppetfile"
    forge 'https://forge.puppet.com'
    mod 'puppetlabs-stdlib', '9.6.0'
    mod 'puppetlabs-puppetdb', '8.1.0'
    mod 'puppetlabs-postgresql', '10.3.0'
    mod 'puppetlabs-firewall', '8.1.0'
    mod 'puppetlabs-inifile', '6.1.1'
    mod 'puppetlabs-concat', '9.0.2'
    mod 'puppet-systemd', '7.1.0'
    ```

=== "environment.conf"
    This file is mostly redundant, as it states the values below, which are the default values Puppet works with. I only included it in case I had a unique use-case that required a more custom approach to the folder structure. (This is very unlikely.)

    ```ini title="environment.conf"
    # Specifies the module path for this environment
    modulepath = modules:$basemodulepath

    # Optional: Specifies the manifest file for this environment
    manifest = manifests/site.pp

    # Optional: Set the environment's config_version (e.g., a script to output the current Git commit hash)
    # config_version = scripts/config_version.sh

    # Optional: Set the environment's environment_timeout
    # environment_timeout = 0
    ```

=== "site.pp"
    This file is kind of like an inventory of devices and their states. In this example, you will see that the Puppet Server itself is named `lab-puppet-01.bunny-lab.io` and the agent device is named `fedora.bunny-lab.io`. By "including" modules like PuppetDB, it installs the PuppetDB role and configures it automatically on the Puppet Server. By stating the firewall rules, it also ensures that those firewall ports are open no matter what, and if they close, Puppet will re-open them automatically. Port 8140 is for agent communication, and port 8081 is for PuppetDB functionality.

    !!! example "Neofetch Example"
        In the example configuration below, you will notice this section. This tells Puppet to deploy the neofetch package to any device that has `include neofetch` written. Grouping devices is currently undocumented as of writing this.
        ``` puppet
        # Node definition for the Fedora agent
        node 'fedora.bunny-lab.io' {
          # Include the neofetch class to ensure Neofetch is installed
          include neofetch
        }
        ```

    ```puppet title="manifests/site.pp"
    # Node definition for the Puppet Server
    node 'lab-puppet-01.bunny-lab.io' {

      # Include the puppetdb class with custom parameters
      class { 'puppetdb':
        listen_address => '0.0.0.0', # Allows access from all network interfaces
      }

      # Configure the Puppet Server to use PuppetDB
      include puppetdb
      include puppetdb::master::config

      # Ensure the required iptables rules are in place using Puppet's firewall resources
      firewall { '100 allow Puppet traffic on 8140':
        ensure => 'present',
        proto  => 'tcp',
        dport  => '8140',
        jump   => 'accept',
        chain  => 'INPUT',
      }

      firewall { '101 allow PuppetDB traffic on 8081':
        ensure => 'present',
        proto  => 'tcp',
        dport  => '8081',
        jump   => 'accept',
        chain  => 'INPUT',
      }
    }

    # Node definition for the Fedora agent
    node 'fedora.bunny-lab.io' {
      # Include the neofetch class to ensure Neofetch is installed
      include neofetch
    }

    # Default node definition (optional)
    node default {
      # This can be left empty or include common classes for all other nodes
    }
    ```

=== "init.pp"
    This is used by the neofetch class noted in the `site.pp` file. This is basically the declaration of how we want neofetch to be on the devices that include the neofetch "class". In this case, we don't care how it does it, but it will install Neofetch, whether that is through yum, dnf, or apt; a few lines of code are OS-agnostic. The formatting / philosophy is similar in a way to the modules in Ansible playbooks, and how they declare the "state" of things.

    ```puppet title="modules/neofetch/manifests/init.pp"
    class neofetch {
      package { 'neofetch':
        ensure => installed,
      }
    }
    ```

### Storing Credentials to Gitea
We need to be able to pull down the data from Gitea's Puppet repository as the root user so that r10k can automatically pull down any changes made to the Puppet environments (e.g. `Production` and `Development`). Each Git branch represents a different Puppet environment. We will use an application token to do this.

Navigate to "**Gitea > User (Top-Right) > Settings > Applications**"

- Token Name: `Puppet r10k`
- Permissions: `Repository > Read Only`
- Click the "**Generate Token**" button to finish.

!!! warning "Securely Store the Application Token"
    It is critical that you store the token somewhere safe, like a password manager, as you will need to reference it later and might need it in the future if you re-build the r10k environment.

Now we want to configure Git to store the credentials for later use by r10k:
``` sh
# Enable Stored Credentials (We will address security concerns further down...)
sudo yum install -y git
sudo git config --global credential.helper store

# Clone the Git Repository Once to Store the Credentials (Use the Application Token as the password)
# Username: nicole.rappe
# Password: <Application Token Value>
sudo git clone https://git.bunny-lab.io/GitOps/Puppet.git /tmp/PuppetTest

# Verify the Credentials are Stored
sudo cat /root/.git-credentials

# Lockdown Permissions
sudo chmod 600 /root/.git-credentials

# Cleanup After Ourselves
sudo rm -rf /tmp/PuppetTest
```

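For reference, the `store` credential helper writes one `https://user:token@host` line per host. The sketch below reproduces that format against a throwaway file in `/tmp` so you know what to expect when you `cat /root/.git-credentials`; `SAMPLE-TOKEN` is a placeholder, not a real credential.

```shell
# Reproduce the line format git's "store" helper writes, using a throwaway file.
# SAMPLE-TOKEN is a placeholder, not a real credential.
printf 'https://%s:%s@git.bunny-lab.io\n' "nicole.rappe" "SAMPLE-TOKEN" > /tmp/git-credentials-example
cat /tmp/git-credentials-example

# Lock it down the same way the real file is locked down
chmod 600 /tmp/git-credentials-example
stat -c '%a' /tmp/git-credentials-example   # prints 600
```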
Finally, we validate that everything is working by pulling down the Puppet environments using r10k on the Puppet Server:
``` sh
# Deploy Puppet Environments from Gitea
sudo /usr/local/bin/r10k deploy environment -p

# Validate r10k is Installing Modules in the Environments
sudo ls /etc/puppetlabs/code/environments/production/modules
sudo ls /etc/puppetlabs/code/environments/development/modules
```

!!! success "Successful Puppet Environment Deployment"
    If you got no errors about Puppetfile formatting or Gitea permission errors, then you are good to move onto the next step.

## External Node Classifier (ENC)
An ENC allows you to define node-specific data, including the environment, on the Puppet Server. The agent requests its configuration, and the Puppet Server provides the environment and classes to apply.

**Advantages**:

- **Centralized Control**: Environments and classifications are managed from the server.
- **Security**: Agents cannot override their assigned environment.
- **Scalability**: Suitable for managing environments for hundreds or thousands of nodes.

### Create an ENC Script
``` sh
sudo mkdir -p /opt/puppetlabs/server/data/puppetserver/scripts/
```

```ruby title="/opt/puppetlabs/server/data/puppetserver/scripts/enc.rb"
#!/usr/bin/env ruby
# enc.rb

require 'yaml'

node_name = ARGV[0]

# Define environment assignments
node_environments = {
  'fedora.bunny-lab.io' => 'development',
  # Add more nodes and their environments as needed
}

environment = node_environments[node_name] || 'production'

# Define classes to include per node (optional)
node_classes = {
  'fedora.bunny-lab.io' => ['neofetch'],
  # Add more nodes and their classes as needed
}

classes = node_classes[node_name] || []

# Output the YAML document
output = {
  'environment' => environment,
  'classes' => classes
}

puts output.to_yaml
```

``` sh
# Ensure the File is Executable
sudo chmod +x /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```

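The classification logic above reduces to a simple lookup: known nodes get their assigned environment and classes, everything else falls back to `production` with no classes. This plain-shell sketch mirrors that behavior for a quick sanity check without invoking Ruby; the hostnames are the examples used throughout this guide.

```shell
# Plain-shell mirror of enc.rb's lookup: known nodes get their mapping,
# everything else falls back to the production environment with no classes.
enc_lookup() {
  case "$1" in
    fedora.bunny-lab.io)
      echo "environment: development"
      echo "classes: [neofetch]"
      ;;
    *)
      echo "environment: production"
      echo "classes: []"
      ;;
  esac
}

enc_lookup fedora.bunny-lab.io       # development + neofetch
enc_lookup some-other.bunny-lab.io   # falls back to production, no classes
```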
### Configure Puppet Server to Use the ENC
Edit the Puppet Server's `puppet.conf` and set the `node_terminus` and `external_nodes` parameters:
```ini title="/etc/puppetlabs/puppet/puppet.conf"
[master]
node_terminus = exec
external_nodes = /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```

Restart the Puppet Server service:
``` sh
sudo systemctl restart puppetserver
```

## Pull Puppet Environments from Gitea
At this point, we can tell r10k to pull down the Puppet environments (e.g. `Production` and `Development`) that we made in the Gitea repository in previous steps. Run the following command on the Puppet Server to pull down the environments. This will download / configure any Puppet Forge modules as well as any hand-made modules such as Neofetch.
``` sh
sudo /usr/local/bin/r10k deploy environment -p

# OPTIONAL: You can pull down a specific environment instead of all environments if you specify the branch name, seen here:
#sudo /usr/local/bin/r10k deploy environment development -p
```

### Apply Configuration to Puppet Server
At this point, we are going to deploy the configuration from Gitea to the Puppet Server itself so it installs PuppetDB automatically, as well as configures firewall ports and other small things to function properly. Once this is completed, you can add additional agents / managed devices and they will be able to communicate with the Puppet Server over the network.
``` sh
sudo /opt/puppetlabs/bin/puppet agent -t
```

!!! success "Puppet Server Deployed and Validated"
    Congratulations! You have successfully deployed an entire Puppet Server, integrated Gitea and r10k to deploy environment changes in a versioned environment, and validated functionality against a managed device using the agent (such as a spare laptop/desktop). If you got this far, be proud, because it took me over 12 hours to write this documentation, allowing you to deploy a server in less than 30 minutes.

deployments/index.md

---
tags:
  - Deployments
  - Index
  - Documentation
---

# Deployments
## Purpose
Build and deployment documentation for platforms, services, and automation stacks.

## Includes
- Platform deployments (virtualization and containerization)
- Service deployments and integration patterns
- Automation stack deployment guides

---
tags:
  - Containers
  - Docker
  - Containerization
---

**Purpose**:
This document will outline the general workflow of using Visual Studio Code to author and update custom containers and push them to a container registry hosted in Gitea. This will be referencing the `git-repo-updater` project throughout.

!!! note "Assumptions"
    This document assumes you are authoring the containers in Microsoft Windows, and does not include the fine-tuning necessary to work in Linux or MacOS environments. You are on your own if you want to author containers in Linux.

## Install Visual Studio Code
The management of the Gitea repositories, Dockerfile building, and pushing container images to the Gitea container registry will all involve using just Visual Studio Code. You can download Visual Studio Code from this [direct download link](https://code.visualstudio.com/docs/?dv=win64user).

## Configure Required Docker Extensions
You will need to locate and install the `Dev Containers`, `Docker`, and `WSL` extensions in Visual Studio Code to move forward. This may request that you install Docker Desktop onto your computer as part of the installation process. Proceed to do so, then when the Docker "Engine" is running, you can proceed to the next step.

!!! warning
    You need to have the Docker Desktop "Engine" running whenever working with containers, as it is necessary to build the images. VSCode will complain if it is not running.

## Add Gitea Container Registry
At this point, we need to add a registry to Visual Studio Code so it can proceed with pulling down the repository data.

- Click the Docker icon on the left-hand toolbar
- Under "**Registries**", click "**Connect Registry...**"
- In the dropdown menu that appears, click "**Generic Registry V2**"
    - Enter `https://git.bunny-lab.io/container-registry`
    - Registry Username: `nicole.rappe`
    - Registry Password or Personal Access Token: `Personal Access API Token You Generated in Gitea`
- You will now see a sub-listing named "**Generic Registry V2**"
    - If you click the dropdown, you will see "**https://git.bunny-lab.io/container-registry**"
    - Under this section, you will see any containers in the registry that you have access to; in this case, you will see `container-registry/git-repo-updater`

## Add Source Control Repository
Now it is time to pull down the repository where the container's core elements are stored on Gitea.

- Click the "**Source Control**" button on the left-hand menu then click the "**Clone Repository**" button
- Enter `https://git.bunny-lab.io/container-registry/git-repo-updater.git`
- Click the dropdown menu option "**Clone from URL**" then choose a location to locally store the repository on your computer
- When prompted with "**Would you like to open the cloned repository**", click the "**Open**" button

## Making Changes
You will be presented with four files in this specific repository: `.env`, `docker-compose.yml`, `Dockerfile`, and `repo_watcher.sh`

- `.env` is the environment variables passed to the container to tell it which ntfy server to talk to, which credentials to use with Gitea, and which repositories to download and push into production servers
- `docker-compose.yml` is an example docker-compose file that can be used in Portainer to deploy the server along with the contents of the `.env` file
- `Dockerfile` is the base of the container, telling Docker what operating system to use and how to start the script in the container
- `repo_watcher.sh` is the script called by the `Dockerfile`, which loops checking for updates in the Gitea repositories that were configured in the `.env` file

### Push to Repository
When you make any changes, you will need to first commit them to the repository

- Save all of the edited files
- Click the "**Source Control**" button in the toolbar
- Write a message about what you changed in the commit description field
- Click the "**Commit**" button
- Click the "**Sync Changes**" button that appears
- You may be presented with various dialogs; just click the equivalent of "**Yes/OK**" to each of them

### Build the Dockerfile
At this point, we need to build the Dockerfile, which takes all of the changes and packages them into a container image

- Navigate back to the file explorer inside of Visual Studio Code
- Right-click the `Dockerfile`, then click "**Build Image...**"
- In the "Tag Image As..." window, type in `git.bunny-lab.io/container-registry/git-repo-updater:latest`
- When you navigate back to the Docker menu, you will see a new image appear under the "**Images**" section
- You should see something similar to "**Latest - X Seconds Ago**", indicating this is the image you just built
- Delete the older image(s) by right-clicking on them and selecting "**Remove...**"
- Push the image to the container registry in Gitea by right-clicking the latest image, and selecting "**Push...**"
- In the dropdown menu that appears, enter `git.bunny-lab.io/container-registry/git-repo-updater:latest`
- You can confirm if it was successful by navigating to the [Gitea Container Webpage](https://git.bunny-lab.io/container-registry/-/packages/container/git-repo-updater/latest) and seeing if it says "**Published Now**" or "**Published 1 Minute Ago**"

!!! warning "CRLF End of Line Sequences"
    When you are editing files in the container's repository, you need to ensure that Visual Studio Code is editing that file in "**LF**" mode and not "**CRLF**". You can find this toggle at the bottom-right of the VSCode window. Simply clicking on the letters "**CRLF**" will let you toggle the file to "**LF**". If you do not make this change, the container will misunderstand the Dockerfile and/or scripts inside of the container and have runtime errors.

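You can see why the line-ending warning matters with two tiny scripts: identical content, different endings. This sketch uses throwaway files in `/tmp`; the LF copy runs normally, while the CRLF copy fails because the stray `\r` becomes part of the interpreter path in the shebang.

```shell
# Two identical scripts, one saved with CRLF endings and one with LF.
printf '#!/bin/sh\r\necho hello\r\n' > /tmp/crlf.sh
printf '#!/bin/sh\necho hello\n'     > /tmp/lf.sh
chmod +x /tmp/crlf.sh /tmp/lf.sh

/tmp/lf.sh                      # prints "hello"
/tmp/crlf.sh 2>&1 | head -n1    # fails: the \r corrupts the "#!/bin/sh" interpreter path
```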
## Deploy the Container
You can now use the `.env` file along with the `docker-compose.yml` file inside of Portainer to deploy a stack using the container you just built / updated.

---
tags:
  - Containers
  - Docker
  - Containerization
---

**Purpose**: Docker container running Alpine Linux that automates and improves upon much of the script mentioned in the [Git Repo Updater](../../../../../scripts/bash/git-repo-updater.md) document. It offers the additional benefits of checking for updates every 5 seconds instead of every 60 seconds. It also accepts environment variables to provide credentials and notification settings, and can have an infinite number of monitored repositories.

### Deployment
You can find the current up-to-date Gitea repository that includes the `docker-compose.yml` and `.env` files that you need to deploy everything [here](https://git.bunny-lab.io/container-registry/-/packages/container/git-repo-updater/latest)
```yaml title="docker-compose.yml"
version: '3.3'
services:
  git-repo-updater:
    privileged: true
    container_name: git-repo-updater
    env_file:
      - stack.env
    image: git.bunny-lab.io/container-registry/git-repo-updater:latest
    volumes:
      - /srv/containers:/srv/containers
      - /srv/containers/git-repo-updater/Repo_Cache:/root/Repo_Cache
    restart: always
```

```sh title=".env"
# Gitea Credentials
GIT_USERNAME=nicole.rappe
GIT_PASSWORD=USE-AN-APP-PASSWORD

# NTFY Push Notification Server URL
NTFY_URL=https://ntfy.cyberstrawberry.net/git-repo-updater

# Repository/Destination Pairs (Add as Many as Needed)
REPO_01="https://${GIT_USERNAME}:${GIT_PASSWORD}@git.bunny-lab.io/bunny-lab/docs.git,/srv/containers/material-mkdocs/docs/docs"
REPO_02="https://${GIT_USERNAME}:${GIT_PASSWORD}@git.bunny-lab.io/GitOps/servers.bunny-lab.io.git,/srv/containers/homepage-docker"
```
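Each `REPO_NN` value is a comma-separated `url,destination` pair; the watcher script splits it with `IFS`. The sketch below runs that split on its own so you can verify a pair before deploying (the `user:token` portion is a placeholder, and the paths are the examples from the `.env` above).

```shell
# Split a REPO_NN pair the same way the watcher container does.
# "user:token" is a placeholder credential.
REPO_01="https://user:token@git.bunny-lab.io/bunny-lab/docs.git,/srv/containers/material-mkdocs/docs/docs"

OLD_IFS="$IFS"
IFS=','
set -- $REPO_01   # $1 = repository URL, $2 = destination path
IFS="$OLD_IFS"

echo "URL:         $1"
echo "Destination: $2"
```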
### Build / Development
If you want to learn how the container was assembled, the related build files are located [here](https://git.cyberstrawberry.net/container-registry/git-repo-updater)
```dockerfile title="Dockerfile"
# Use Alpine as the base image of the container
FROM alpine:latest

# Install necessary packages
RUN apk --no-cache add git curl rsync

# Add script
COPY repo_watcher.sh /repo_watcher.sh
RUN chmod +x /repo_watcher.sh

# Create Directory to store Repositories
RUN mkdir -p /root/Repo_Cache

# Start script (Alpine uses /bin/sh instead of /bin/bash)
CMD ["/bin/sh", "-c", "/repo_watcher.sh"]
```

```sh title="repo_watcher.sh"
#!/bin/sh

# Function to process each repo-destination pair
process_repo() {
    FULL_REPO_URL=$1
    DESTINATION=$2

    # Extract the URL without credentials for logging and notifications
    CLEAN_REPO_URL=$(echo "$FULL_REPO_URL" | sed 's/https:\/\/[^@]*@/https:\/\//')

    # Directory to hold the repository locally
    REPO_DIR="/root/Repo_Cache/$(basename $CLEAN_REPO_URL .git)"

    # Clone the repo if it doesn't exist, or navigate to it if it does
    if [ ! -d "$REPO_DIR" ]; then
        curl -d "Cloning: $CLEAN_REPO_URL" $NTFY_URL
        git clone "$FULL_REPO_URL" "$REPO_DIR" > /dev/null 2>&1
    fi
    cd "$REPO_DIR" || exit

    # Fetch the latest changes
    git fetch origin main > /dev/null 2>&1

    # Check if the local repository is behind the remote
    LOCAL=$(git rev-parse @)
    REMOTE=$(git rev-parse @{u})

    if [ "$LOCAL" != "$REMOTE" ]; then
        curl -d "Updating: $CLEAN_REPO_URL" $NTFY_URL
        git pull origin main > /dev/null 2>&1
        rsync -av --delete --exclude '.git/' ./ "$DESTINATION" > /dev/null 2>&1
    fi
}

# Main loop
while true; do
    # Iterate over each environment variable matching 'REPO_[0-9]+'
    env | grep '^REPO_[0-9]\+=' | while IFS='=' read -r name value; do
        # Split the value by comma and read into separate variables
        OLD_IFS="$IFS"   # Save the original IFS
        IFS=','          # Set IFS to comma for splitting
        set -- $value    # Set positional parameters ($1, $2, ...)
        REPO_URL="$1"    # Assign first parameter to REPO_URL
        DESTINATION="$2" # Assign second parameter to DESTINATION
        IFS="$OLD_IFS"   # Restore original IFS

        process_repo "$REPO_URL" "$DESTINATION"
    done

    # Wait for 5 seconds before the next iteration
    sleep 5
done
```
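The URL-cleaning and cache-path logic near the top of `process_repo` can be exercised on its own; the sketch below runs just those two lines against a sample URL (`SECRET-TOKEN` is a placeholder credential) so you can confirm that notifications never leak the token.

```shell
# Stand-alone run of the watcher's URL-cleaning and cache-path derivation.
# SECRET-TOKEN is a placeholder credential.
FULL_REPO_URL="https://nicole.rappe:SECRET-TOKEN@git.bunny-lab.io/GitOps/Puppet.git"

# Strip embedded credentials so logs and ntfy notifications never leak them
CLEAN_REPO_URL=$(echo "$FULL_REPO_URL" | sed 's/https:\/\/[^@]*@/https:\/\//')
echo "$CLEAN_REPO_URL"   # prints https://git.bunny-lab.io/GitOps/Puppet.git

# Derive the local cache directory from the repository name
REPO_DIR="/root/Repo_Cache/$(basename $CLEAN_REPO_URL .git)"
echo "$REPO_DIR"         # prints /root/Repo_Cache/Puppet
```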
---
tags:
  - Docker
  - Portainer
  - Containerization
---

### Update The Package Manager
We need to update the server before installing Docker

=== "Ubuntu Server"

    ``` sh
    sudo apt update
    sudo apt upgrade -y
    ```

=== "Rocky Linux"

    ``` sh
    sudo dnf check-update
    ```

### Deploy Docker
Install Docker then deploy Portainer

Convenience Script:
```
curl -fsSL https://get.docker.com | sudo sh
dockerd-rootless-setuptool.sh install
```

Alternative Methods:

=== "Ubuntu Server"

    ``` sh
    sudo apt install docker.io -y
    docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest # (1)
    ```

    1. Be sure to set the `-v /srv/containers/portainer:/data` value to a safe place that gets backed up regularly.

=== "Rocky Linux"

    ``` sh
    sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo dnf install -y docker-ce docker-ce-cli containerd.io
    sudo systemctl enable docker --now # (1)
    docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest # (2)
    ```

    1. This is needed to ensure that Docker starts automatically every time the server is turned on.
    2. Be sure to set the `-v /srv/containers/portainer:/data` value to a safe place that gets backed up regularly.

### Configure Docker Network
I highly recommend setting up a [Dedicated Docker MACVLAN Network](../../../../reference/infrastructure/networking/docker-networking/docker-networking.md). You can use it to keep your containers on their own subnet.

### Access Portainer WebUI
You will be able to access the Portainer WebUI at the following address: `https://<IP Address>:9443`
!!! warning
    You need to be quick, as there is a timeout period where you won't be able to onboard / provision Portainer and will be forced to restart its container. If this happens, you can find the container using `sudo docker container ls` followed by `sudo docker restart <ID of Portainer Container>`.

---
tags:
  - Kubernetes
  - Containerization
---

# Deploy Generic Kubernetes
The instructions outlined below assume you are deploying the environment using Ansible Playbooks either via Ansible's CLI or AWX.

### Deploy K8S User
```yaml title="01-deploy-k8s-user.yml"
- hosts: 'controller-nodes, worker-nodes'
  become: yes

  tasks:
    - name: create the k8sadmin user account
      user: name=k8sadmin append=yes state=present createhome=yes shell=/bin/bash

    - name: allow 'k8sadmin' to use sudo without needing a password
      lineinfile:
        dest: /etc/sudoers
        line: 'k8sadmin ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: set up authorized keys for the k8sadmin user
      authorized_key: user=k8sadmin key="{{item}}"
      with_file:
        - ~/.ssh/id_rsa.pub
```

### Install K8S
|
||||
```yaml title="02-install-k8s.yml"
---
- hosts: "controller-nodes, worker-nodes"
  remote_user: nicole
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Create containerd config file
      file:
        path: "/etc/modules-load.d/containerd.conf"
        state: "touch"

    - name: Add conf for containerd
      blockinfile:
        path: "/etc/modules-load.d/containerd.conf"
        block: |
          overlay
          br_netfilter

    - name: modprobe
      shell: |
        sudo modprobe overlay
        sudo modprobe br_netfilter

    - name: Set system configurations for Kubernetes networking
      file:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        state: "touch"

    - name: Add sysctl settings for Kubernetes networking
      blockinfile:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        block: |
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-ip6tables = 1

    - name: Apply new settings
      command: sudo sysctl --system

    - name: install containerd
      shell: |
        sudo apt-get update && sudo apt-get install -y containerd
        sudo mkdir -p /etc/containerd
        sudo containerd config default | sudo tee /etc/containerd/config.toml
        sudo systemctl restart containerd

    - name: disable swap
      shell: |
        sudo swapoff -a
        sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

    - name: install and configure dependencies
      shell: |
        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    - name: Create kubernetes repo file
      file:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        state: "touch"

    - name: Add K8s Source
      blockinfile:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        block: |
          deb https://apt.kubernetes.io/ kubernetes-xenial main

    - name: Install Kubernetes
      shell: |
        sudo apt-get update
        sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
        sudo apt-mark hold kubelet kubeadm kubectl
```
|
||||
|
||||
### Configure ControlPlanes
|
||||
```yaml title="03-configure-controllers.yml"
- hosts: controller-nodes
  become: yes

  tasks:
    - name: Initialize the K8S Cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: Create .kube directory
      become: yes
      become_user: k8sadmin
      file:
        path: /home/k8sadmin/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/k8sadmin/.kube/config
        remote_src: yes
        owner: k8sadmin

    - name: Install the Pod Network
      become: yes
      become_user: k8sadmin
      shell: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      args:
        chdir: $HOME

    - name: Get the token for joining the worker nodes
      become: yes
      become_user: k8sadmin
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command

    - name: Output Join Command to the Screen
      debug:
        msg: "{{ kubernetes_join_command.stdout }}"

    - name: Copy join command to local file.
      become: yes
      local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
```
|
||||
|
||||
### Join Worker Node(s)
|
||||
```yaml title="04-join-worker-nodes.yml"
- hosts: worker-nodes
  become: yes
  gather_facts: yes

  tasks:
    - name: Copy join command from Ansible host to the worker nodes.
      become: yes
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777

    - name: Join the Worker nodes to the cluster.
      become: yes
      command: sh /tmp/kubernetes_join_command
      register: joined_or_not
```
|
||||
|
||||
### Host Inventory File Template
|
||||
```ini title="hosts"
[controller-nodes]
k8s-ctrlr-01 ansible_host=192.168.3.6 ansible_user=nicole

[worker-nodes]
k8s-node-01 ansible_host=192.168.3.4 ansible_user=nicole
k8s-node-02 ansible_host=192.168.3.5 ansible_user=nicole

[all:vars]
ansible_become_user=root
ansible_become_method=sudo
```
|
||||
---
|
||||
tags:
|
||||
- Kubernetes
|
||||
- RKE2
|
||||
- Rancher
|
||||
- Containerization
|
||||
---
|
||||
|
||||
# Deploy RKE2 Cluster
|
||||
Deploying a Rancher RKE2 Cluster is fairly straightforward. Just run the commands in order and pay attention to which steps apply to all machines in the cluster, to the controlplanes, and to the workers.
|
||||
|
||||
!!! note "Prerequisites"
|
||||
This document assumes you are running **Ubuntu Server 24.04.3 LTS**. It also assumes that every node in the cluster has a unique hostname.
|
||||
|
||||
## All Cluster Nodes
|
||||
Assume all commands are running as root moving forward. (e.g. `sudo su`)
|
||||
|
||||
### Run Updates
|
||||
You will need to run these commands on every server that participates in the cluster then perform a reboot of the server **PRIOR** to moving onto the next section.
|
||||
``` sh
apt update && apt upgrade -y
apt install nfs-common iptables nano htop -y
echo "Adding 15 Second Delay to Ensure Previous Commands finish running"
sleep 15
apt autoremove -y
reboot
```
|
||||
!!! tip
|
||||
If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
|
||||
## Initial ControlPlane Node
|
||||
When you are starting a brand new cluster, you need to create what is referred to as the "Initial ControlPlane". This node is responsible for bootstrapping the entire cluster together in the beginning, and will eventually assist in handling container workloads and orchestrating operations in the cluster.
|
||||
!!! warning
|
||||
    You only want to follow the instructions for the **initial** controlplane once. Running them on another machine to create additional controlplanes will cause each machine to try to set up its own separate cluster, wreaking havoc. Instead, follow the instructions in the next section to add redundant controlplanes.
|
||||
|
||||
### Download the Run Server Deployment Script
|
||||
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
|
||||
### Enable & Configure Services
|
||||
``` sh
# Start and Enable the Kubernetes Service
systemctl enable --now rke2-server.service

# Symlink the Kubectl Management Command
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl

# Temporarily Export the Kubeconfig to manage the cluster from CLI during initial deployment.
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# Add a Delay to Allow Cluster to Finish Initializing / Get Ready
echo "Adding 60 Second Delay to Ensure Cluster is Ready - Run (kubectl get node) if the server is still not ready to know when to proceed."
sleep 60

# Check that the Cluster Node is Running and Ready
kubectl get node
```
|
||||
|
||||
!!! example
    When the cluster is ready, you should see something like this when you run `kubectl get node`.

    This may be a good point to step away for 5 minutes, get a cup of coffee, and come back so it has a little extra time to be fully ready before moving on.
    ```
    root@awx:/home/nicole# kubectl get node
    NAME   STATUS   ROLES                       AGE     VERSION
    awx    Ready    control-plane,etcd,master   3m21s   v1.26.12+rke2r1
    ```
|
||||
|
||||
### Install Helm, Cert-Manager, Rancher, and Longhorn
|
||||
``` sh
# Install Helm
curl -L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-4 | bash

# Install Necessary Helm Repositories
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo add longhorn https://charts.longhorn.io
helm repo update

# Install the Cert-Manager CRDs
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.19.2/cert-manager.crds.yaml

# Install Cert-Manager via Helm (from the Jetstack repository)
helm upgrade -i cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace

# Install Rancher via Helm
helm upgrade -i rancher rancher-latest/rancher --create-namespace --namespace cattle-system --set hostname=rke2-cluster.bunny-lab.io --set bootstrapPassword=bootStrapAllTheThings --set replicas=1

# Install Longhorn via Helm
helm upgrade -i longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```
|
||||
|
||||
!!! example "Be Patient - Come back in 20 Minutes"
|
||||
Rancher is going to take a while to fully set itself up, and things will appear broken in the meantime. Depending on how many resources you gave the cluster, it may take more or less time. A good ballpark is giving it at least 20 minutes to deploy itself before attempting to log into the webUI at https://rke2-cluster.bunny-lab.io.
|
||||
|
||||
If you want to keep an eye on the deployment progress, you need to run the following command: `KUBECONFIG=/etc/rancher/rke2/rke2.yaml kubectl get pods --all-namespaces`
|
||||
The output should look like how it does below:
|
||||
```
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
cattle-fleet-system fleet-controller-59cdb866d7-94r2q 1/1 Running 0 4m31s
|
||||
cattle-fleet-system gitjob-f497866f8-t726l 1/1 Running 0 4m31s
|
||||
cattle-provisioning-capi-system capi-controller-manager-6f87d6bd74-xx22v 1/1 Running 0 55s
|
||||
cattle-system helm-operation-28dcp 0/2 Completed 0 109s
|
||||
cattle-system helm-operation-f9qww 0/2 Completed 0 4m39s
|
||||
cattle-system helm-operation-ft8gq 0/2 Completed 0 26s
|
||||
cattle-system helm-operation-m27tq 0/2 Completed 0 61s
|
||||
cattle-system helm-operation-qrgj8 0/2 Completed 0 5m11s
|
||||
cattle-system rancher-64db9f48c-qm6v4 1/1 Running 3 (8m8s ago) 13m
|
||||
cattle-system rancher-webhook-65f5455d9c-tzbv4 1/1 Running 0 98s
|
||||
cert-manager cert-manager-55cf8685cb-86l4n 1/1 Running 0 14m
|
||||
cert-manager cert-manager-cainjector-fbd548cb8-9fgv4 1/1 Running 0 14m
|
||||
cert-manager cert-manager-webhook-655b4d58fb-s2cjh 1/1 Running 0 14m
|
||||
kube-system cloud-controller-manager-awx 1/1 Running 5 (3m37s ago) 19m
|
||||
kube-system etcd-awx 1/1 Running 0 19m
|
||||
kube-system helm-install-rke2-canal-q9vm6 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-coredns-q8w57 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-ingress-nginx-54vgk 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-metrics-server-87zhw 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-snapshot-controller-crd-q6bh6 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-snapshot-controller-tjk5f 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-snapshot-validation-webhook-r9pcn 0/1 Completed 0 19m
|
||||
kube-system kube-apiserver-awx 1/1 Running 0 19m
|
||||
kube-system kube-controller-manager-awx 1/1 Running 5 (3m37s ago) 19m
|
||||
kube-system kube-proxy-awx 1/1 Running 0 19m
|
||||
kube-system kube-scheduler-awx 1/1 Running 5 (3m35s ago) 19m
|
||||
kube-system rke2-canal-gm45f 2/2 Running 0 19m
|
||||
kube-system rke2-coredns-rke2-coredns-565dfc7d75-qp64p 1/1 Running 0 19m
|
||||
kube-system rke2-coredns-rke2-coredns-autoscaler-6c48c95bf9-fclz5 1/1 Running 0 19m
|
||||
kube-system rke2-ingress-nginx-controller-lhjwq 1/1 Running 0 17m
|
||||
kube-system rke2-metrics-server-c9c78bd66-fnvx8 1/1 Running 0 18m
|
||||
kube-system rke2-snapshot-controller-6f7bbb497d-dw6v4 1/1 Running 4 (6m17s ago) 18m
|
||||
kube-system rke2-snapshot-validation-webhook-65b5675d5c-tdfcf 1/1 Running 0 18m
|
||||
longhorn-system csi-attacher-785fd6545b-6jfss 1/1 Running 1 (6m17s ago) 9m39s
|
||||
longhorn-system csi-attacher-785fd6545b-k7jdh 1/1 Running 0 9m39s
|
||||
longhorn-system csi-attacher-785fd6545b-rr6k4 1/1 Running 0 9m39s
|
||||
longhorn-system csi-provisioner-8658f9bd9c-58dc8 1/1 Running 0 9m38s
|
||||
longhorn-system csi-provisioner-8658f9bd9c-g8cv2 1/1 Running 0 9m38s
|
||||
longhorn-system csi-provisioner-8658f9bd9c-mbwh2 1/1 Running 0 9m38s
|
||||
longhorn-system csi-resizer-68c4c75bf5-d5vdd 1/1 Running 0 9m36s
|
||||
longhorn-system csi-resizer-68c4c75bf5-r96lf 1/1 Running 0 9m36s
|
||||
longhorn-system csi-resizer-68c4c75bf5-tnggs 1/1 Running 0 9m36s
|
||||
longhorn-system csi-snapshotter-7c466dd68f-5szxn 1/1 Running 0 9m30s
|
||||
longhorn-system csi-snapshotter-7c466dd68f-w96lw 1/1 Running 0 9m30s
|
||||
longhorn-system csi-snapshotter-7c466dd68f-xt42z 1/1 Running 0 9m30s
|
||||
longhorn-system engine-image-ei-68f17757-jn986 1/1 Running 0 10m
|
||||
longhorn-system instance-manager-fab02be089480f35c7b2288110eb9441 1/1 Running 0 10m
|
||||
longhorn-system longhorn-csi-plugin-5j77p 3/3 Running 0 9m30s
|
||||
longhorn-system longhorn-driver-deployer-75fff9c757-dps2j 1/1 Running 0 13m
|
||||
longhorn-system longhorn-manager-2vfr4 1/1 Running 4 (10m ago) 13m
|
||||
longhorn-system longhorn-ui-7dc586665c-hzt6k 1/1 Running 0 13m
|
||||
longhorn-system longhorn-ui-7dc586665c-lssfj 1/1 Running 0 13m
|
||||
```
|
||||
|
||||
!!! note
|
||||
Be sure to write down the "*bootstrapPassword*" variable for when you log into Rancher later. In this example, the password is `bootStrapAllTheThings`.
|
||||
Also be sure to adjust the "*hostname*" variable to reflect the FQDN of the cluster. You can leave it default like this and change it upon first login if you want. This is important for the last step where you adjust DNS. The example given is `rke2-cluster.bunny-lab.io`.
|
||||
|
||||
### Log into webUI
|
||||
At this point, you can log into the webUI at https://rke2-cluster.bunny-lab.io using the default `bootStrapAllTheThings` password, or whatever password you configured. You can change the password after logging in by navigating to **Home > Users & Authentication > "..." > Edit Config > "New Password" > Save**. From here, you can deploy more nodes, or deploy single-node workloads such as an Ansible AWX Operator.
|
||||
|
||||
### Rebooting the ControlNode
|
||||
If you ever find yourself needing to reboot the ControlNode, and need to run kubectl CLI commands, you will need to run the command below to import the cluster credentials upon every reboot. Reboots should take much less time to get the cluster ready again as compared to the original deployments.
|
||||
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
```
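To avoid re-exporting the variable by hand after every reboot, it can be appended to root's shell profile. A minimal sketch (the `grep` guard keeps the line from being added twice):

```shell
# Persist the kubeconfig path for future shells.
LINE='export KUBECONFIG=/etc/rancher/rke2/rke2.yaml'
PROFILE="$HOME/.bashrc"
grep -qxF "$LINE" "$PROFILE" 2>/dev/null || echo "$LINE" >> "$PROFILE"
```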
|
||||
|
||||
## Create Additional ControlPlane Node(s)
|
||||
This is the part where you can add additional controlplane nodes to add additional redundancy to the RKE2 Cluster. This is important for high-availability environments.
|
||||
|
||||
### Download the Server Deployment Script
|
||||
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
|
||||
### Configure and Connect to Existing/Initial ControlPlane Node
|
||||
``` sh
# Symlink the Kubectl Management Command
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl

# Manually Create a Rancher-Kubernetes-Specific Config File
mkdir -p /etc/rancher/rke2/

# Inject IP of Initial ControlPlane Node into Config File
echo "server: https://192.168.3.69:9345" > /etc/rancher/rke2/config.yaml

# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml

# Start and Enable the Kubernetes Service
systemctl enable --now rke2-server.service
```
|
||||
!!! note
|
||||
Be sure to change the IP address of the initial controlplane node provided in the example above to match your environment.
|
||||
|
||||
## Add Worker Node(s)
|
||||
Worker nodes are the bread-and-butter of a Kubernetes cluster. They handle running container workloads, and acting as storage for the cluster (this can be configured to varying degrees based on your needs).
|
||||
|
||||
### Download the Server Worker Script
|
||||
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -
```
|
||||
### Configure and Connect to RKE2 Cluster
|
||||
``` sh
# Manually Create a Rancher-Kubernetes-Specific Config File
mkdir -p /etc/rancher/rke2/

# Inject IP of Initial ControlPlane Node into Config File
echo "server: https://192.168.3.21:9345" > /etc/rancher/rke2/config.yaml

# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml

# Start and Enable the Kubernetes Service
systemctl enable --now rke2-agent.service
```
|
||||
|
||||
## DNS Server Record
|
||||
You will need to set up some kind of DNS server record to point the FQDN of the cluster (e.g. `rke2-cluster.bunny-lab.io`) to the IP address of the Initial ControlPlane. This can be achieved in a number of ways, such as editing the Windows `HOSTS` file, Linux's `/etc/hosts` file, a Windows DNS Server "A" Record, or an NGINX/Traefik Reverse Proxy.
|
||||
|
||||
Once you have added the DNS record, you should be able to access the login page for the Rancher RKE2 Kubernetes cluster. Use the `bootstrapPassword` mentioned previously to log in, then change it immediately from the user management area of Rancher.
|
||||
|
||||
| TYPE OF ACCESS | FQDN                                | IP ADDRESS   |
| -------------- | ----------------------------------- | ------------ |
| HOST FILE      | rke2-cluster.bunny-lab.io           | 192.168.3.69 |
| REVERSE PROXY  | http://rke2-cluster.bunny-lab.io:80 | 192.168.5.29 |
| DNS RECORD     | A Record: rke2-cluster.bunny-lab.io | 192.168.3.69 |
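For the host-file option on a Linux client, the record is a single line. A sketch assuming the FQDN/IP pair from the table above (run as root):

```shell
# Append the cluster record to the local hosts file.
echo "192.168.3.69 rke2-cluster.bunny-lab.io" >> /etc/hosts
```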
|
||||
---
|
||||
tags:
|
||||
- Platforms
|
||||
- Index
|
||||
- Documentation
|
||||
---
|
||||
|
||||
# Platforms
|
||||
## Purpose
|
||||
Virtualization and containerization platforms, cluster builds, and base OS images.
|
||||
|
||||
## Includes
|
||||
- Hypervisors and virtualization stacks
|
||||
- Kubernetes and Docker foundations
|
||||
- Base image and cluster provisioning patterns
|
||||
|
||||
## New Document Template
|
||||
````markdown
|
||||
# <Document Title>
|
||||
## Purpose
|
||||
<what this platform doc exists to describe>
|
||||
|
||||
!!! info "Assumptions"
|
||||
- <OS / platform version>
|
||||
- <privilege assumptions>
|
||||
|
||||
## Architectural Overview
|
||||
<ASCII diagram or concise flow>
|
||||
|
||||
## Procedure
|
||||
```sh
|
||||
# Commands (grouped and annotated)
|
||||
```
|
||||
|
||||
## Validation
|
||||
- <command + expected result>
|
||||
````
|
||||
---
|
||||
tags:
|
||||
- Documentation
|
||||
---
|
||||
|
||||
**Purpose**: Deploying a Windows Server Node into the Hyper-V Failover Cluster is an essential part of rebuilding and expanding the backbone of my homelab. The documentation below goes over the process of setting up a bare-metal host from scratch and integrating it into the Hyper-V Failover Cluster.
|
||||
|
||||
!!! note "Prerequisites & Assumptions"
|
||||
    This document assumes you have installed and are running a bare-metal Hewlett-Packard Enterprise server with iLO (Integrated Lights-Out) and the latest build of **Windows Server 2022 Datacenter (Desktop Experience)**.
|
||||
|
||||
This document also assumes that you are adding an additional server node to an existing Hyper-V Failover Cluster. This document does not outline the exact process of setting up a Hyper-V Failover Cluster from-scratch, setting up a domain, DNS server, etc. Those are assumed to already exist in the environment. Your domain controller(s) need to be online and accessible from the Failover Cluster node you are building for things to work correctly.
|
||||
|
||||
Download the newest build ISO of Windows Server 2022 at the [Microsoft Evaluation Center](https://go.microsoft.com/fwlink/p/?linkid=2195686&clcid=0x409&culture=en-us&country=us)
|
||||
|
||||
### Enable Remote Desktop
|
||||
Enable remote desktop however you can, but just be sure to disable NLA, see the notes below for details.
|
||||
!!! warning "Disable NLA (Network Level Authentication)"
|
||||
    Ensure that "Allow Connections only from computers running Remote Desktop with Network Level Authentication" is un-checked. This is important because, in a Hyper-V Failover Cluster, if the domain controller(s) are not running, you may be effectively locked out of Remote Desktop on the cluster's nodes, forcing you to use iLO or a physical console to log in and bootstrap the cluster's Guest VMs online.
|
||||
|
||||
This step can be disregarded if the domain controller(s) exist outside of the Hyper-V Failover Cluster.
|
||||
|
||||
``` powershell
# Enable Remote Desktop (NLA-Disabled)
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server" -Name "fDenyTSConnections" -Value 0
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" -Name "UserAuthentication" -Value 0
Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
```
|
||||
|
||||
### Provision Server Roles, Activate, and Domain Join
|
||||
``` powershell
# Rename the server
Rename-Computer BUNNY-NODE-02

# Install Hyper-V, Failover, and MPIO Server Roles
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools

# Change edition of Windows (Then Reboot)
irm https://get.activated.win | iex

# Force activate server (KMS38)
irm https://get.activated.win | iex

# Configure DNS Servers
Get-NetAdapter | Where-Object { $_.Status -eq 'Up' } | ForEach-Object { Set-DnsClientServerAddress -InterfaceIndex $_.InterfaceIndex -ServerAddresses ("192.168.3.25","192.168.3.26") }

# Domain-join the server
Add-Computer -DomainName BUNNY-LAB.io

# Restart the Server
Restart-Computer
```
|
||||
|
||||
## Failover Cluster Configuration
|
||||
### Configure Cluster SET Networking
|
||||
!!! note "Disable Embedded Ports"
|
||||
We want to only use the 10GbE Cluster_SET network for both virtual machines and the virtualization host itself. This ensures that **all** traffic goes through the 10GbE team. Disable all other non-10GbE network adapters.
|
||||
You will need to start off by configuring a Switch Embedded Teaming (SET) team. This is the backbone that the server will use for all Guest VM traffic as well as remote-desktop access to the server node itself. You will need to rename the network adapters to make management easier.
|
||||
|
||||
- Navigate to "Network Connections" then "Change Adapter Options"
    * Rename the network adapters with simpler names. e.g. (`Ethernet 1` becomes `Port_1`)
    * For the sake of demonstration, assume there are 2 10GbE NICs (`Port_1` and `Port_2`)
||||
|
||||
``` powershell
# Create Switch Embedded Teaming (SET) team
New-VMSwitch -Name Cluster_SET -NetAdapterName Port_1, Port_2 -EnableEmbeddedTeaming $true

# Disable IPv4 and IPv6 on all other network adapters
Get-NetAdapter | Where-Object { $_.Name -ne "vEthernet (Cluster_SET)" } | ForEach-Object { Set-NetAdapterBinding -Name $_.Name -ComponentID "ms_tcpip" -Enabled $false; Set-NetAdapterBinding -Name $_.Name -ComponentID "ms_tcpip6" -Enabled $false }

# Set IP Address of Cluster_SET for host-access and clustering
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster_SET)" -IPAddress 192.168.3.5 -PrefixLength 24 -DefaultGateway 192.168.3.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Cluster_SET)" -ServerAddresses ("192.168.3.25","192.168.3.26")
```
|
||||
### Configure iSCSI Initiator to Connect to TrueNAS Core Server
|
||||
At this point, now that we have verified that the 10GbE NICs can ping their respective iSCSI target server IP addresses, we can add them to the iSCSI Initiator in Server Manager which will allow us to mount the cluster storage for the Hyper-V Failover Cluster.
|
||||
|
||||
- Open **Server Manager > MPIO**
    * Navigate to the "Discover Multi-Paths" tab
    * Check the "Add support for iSCSI devices" checkbox
    * Click the "Add" button

- Open **TrueNAS Core Server**
    * Navigate to the [TrueNAS Core server](http://192.168.3.3) and add the "Initiator Name" seen on the "Configuration" tab of the iSCSI Initiator on the Virtualization Host to the `Sharing > iSCSI > Initiator Groups` > "iSCSI-Connected Servers"

- Open **iSCSI Initiator**
    * Click on the "Discovery" tab
    * Click the "Discover Portal" button
    * Enter the IP address "192.168.3.3". Leave the port as "3260".
    * Example Initiator Name: `iqn.1991-05.com.microsoft:bunny-node-02.bunny-lab.io`
    * Click the "Targets" tab to go back to the main page
    * Click the "Refresh" button to display available iSCSI Targets
    * Click on the first iSCSI Target `iqn.2005-10.org.moon-storage-01.ctl:iscsi-cluster-storage` then click the "Connect" button
    * Check the "Enable Multi-Path" checkbox
    * Click the "Advanced" button
    * Click the "OK" button
    * Navigate to "Disk Management" to bring the iSCSI drives "Online" (Don't do anything else in Disk Management after this)
|
||||
|
||||
## Initialize and Join to Existing Failover-Cluster
|
||||
### Validate Server is Ready to Join Cluster
|
||||
Now it is time to set up the Failover Cluster itself so we can join the server to the existing cluster.
|
||||
|
||||
- Open **Server Manager**
    * Click on the "Tools" dropdown menu
    * Click on "Failover Cluster Manager"
    * Click the "Validate Configuration" button in the middle of the window that appears
    * Click "Next"
    * Enter Server Name: `BUNNY-NODE-02.bunny-lab.io`
    * Click the "Add" button, then "Next"
    * Ensure "Run All Tests (Recommended)" is selected, then click "Next", then click "Next" to start.
||||
### Join Server to Failover Cluster
|
||||
* On the left-hand side, right-click on "Failover Cluster Manager" in the tree
* Click on "Connect to Cluster"
* Enter `USAGI-CLUSTER.bunny-lab.io`
* Click "OK"
* Expand "USAGI-CLUSTER.bunny-lab.io" on the left-hand tree
* Right-click on "Nodes"
* Click "Add Node..."
* Click "Next"
* Enter Server Name: `BUNNY-NODE-02.bunny-lab.io`
* Click the "Add" button, then "Next"
* Ensure that the "Run Configuration Validation Tests" radio box is checked, then click "Next"
* Validate that the node was successfully added to the Hyper-V Failover Cluster
|
||||
|
||||
## Cleanup & Final Touches
|
||||
Ensure that you run all available Windows Updates before delegating guest VM roles to the new server in the failover cluster. This ensures you are up-to-date before you become reliant on the server for production operations.
|
||||
---
|
||||
tags:
|
||||
- OpenStack
|
||||
- Ansible
|
||||
---
|
||||
|
||||
!!! warning "Document Under Construction"
|
||||
This document is very unfinished and should **NOT** be followed by anyone for deployment at this time.
|
||||
|
||||
**Purpose**: Deploying OpenStack via Ansible.
|
||||
|
||||
## Required Hardware/Infrastructure Breakdown
|
||||
Every node in the OpenStack environment (including the deployment node) will be running Rocky Linux 9.5, as OpenStack Ansible only supports CentOS/RHEL/Rocky for its deployment.
|
||||
|
||||
| **Hostname** | **IP** | **Storage** | **Memory** | **CPU** | **Network** | **Purpose** |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| OPENSTACK-BOOTSTRAPPER | 192.168.3.46 (eth0) | 32GB (OS) | 4GB | 4-Cores | eth0 | OpenStack Ansible Playbook Deployment Node |
| OPENSTACK-NODE-01 | 192.168.3.43 (eth0) | 250GB (OS), 500GB (Ceph Storage) | 32GB | 16-Cores | eth0, eth1 | OpenStack Cluster/Target Node |
| OPENSTACK-NODE-02 | 192.168.3.44 (eth0) | 250GB (OS), 500GB (Ceph Storage) | 32GB | 16-Cores | eth0, eth1 | OpenStack Cluster/Target Node |
| OPENSTACK-NODE-03 | 192.168.3.45 (eth0) | 250GB (OS), 500GB (Ceph Storage) | 32GB | 16-Cores | eth0, eth1 | OpenStack Cluster/Target Node |
|
||||
|
||||
## Configure Hard-Coded DNS for Cluster Nodes
|
||||
We want to ensure everything works even if the nodes have no internet access. Hard-coding the FQDNs protects us against several possible failure scenarios, such as an unreachable DNS server.
|
||||
|
||||
Run the following script to add the DNS entries.
|
||||
```sh
# Make yourself root
sudo su
```
|
||||
|
||||
!!! note "Run `sudo su` Separately"
|
||||
When I ran `sudo su` and the echo commands below as one block of commands, it did not correctly write the changes to the `/etc/hosts` file. Just run `sudo su` by itself, then you can copy paste the codeblock below for all of the echo lines for each DNS entry.
|
||||
|
||||
```sh
# Add the OpenStack node entries to /etc/hosts
echo "192.168.3.43 OPENSTACK-NODE-01.bunny-lab.io OPENSTACK-NODE-01" >> /etc/hosts
echo "192.168.3.44 OPENSTACK-NODE-02.bunny-lab.io OPENSTACK-NODE-02" >> /etc/hosts
echo "192.168.3.45 OPENSTACK-NODE-03.bunny-lab.io OPENSTACK-NODE-03" >> /etc/hosts
```
|
||||
|
||||
### Validate DNS Entries Added
|
||||
```sh
|
||||
cat /etc/hosts
|
||||
```
|
||||
|
||||
!!! example "/etc/hosts Example Contents"
    When you run `cat /etc/hosts`, you should see output similar to the following:
    ```ini title="/etc/hosts"
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.3.43 OPENSTACK-NODE-01.bunny-lab.io OPENSTACK-NODE-01
    192.168.3.44 OPENSTACK-NODE-02.bunny-lab.io OPENSTACK-NODE-02
    192.168.3.45 OPENSTACK-NODE-03.bunny-lab.io OPENSTACK-NODE-03
    ```
|
||||
|
||||
## OpenStack Deployment Node
The "Deployment" node / bootstrapper is responsible for running Ansible playbooks against the cluster nodes that will eventually be running OpenStack. [Original Deployment Node Documentation](https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/deploymenthost.html)

### Install Necessary Software
```sh
sudo su
dnf upgrade
dnf install -y git chrony openssh-server python3-devel sudo
dnf group install -y "Development Tools"
```

### Configure SSH keys
Ansible uses SSH with public key authentication to connect the deployment host and target hosts. Run the following commands to configure this.
!!! warning "Do not run as root"
    You want to make sure you run these commands as a normal user (e.g. `nicole`).

```sh
# Generate SSH Keys (Private / Public)
ssh-keygen

# Install Public Key on OpenStack Cluster/Target Nodes
ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@openstack-node-01.bunny-lab.io
ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@openstack-node-02.bunny-lab.io
ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@openstack-node-03.bunny-lab.io

# Validate that SSH Authentication Works Successfully on Each Node
ssh nicole@openstack-node-01.bunny-lab.io
ssh nicole@openstack-node-02.bunny-lab.io
ssh nicole@openstack-node-03.bunny-lab.io
```
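
The three `ssh-copy-id` invocations can also be generated from a single node list, which makes adding a fourth node a one-line change. A minimal sketch (a dry run — it only prints the commands; the node names and username are the ones assumed above):

```sh
# Build the ssh-copy-id commands from one list of node shortnames.
# Echoed rather than executed so the output can be reviewed first.
NODES="openstack-node-01 openstack-node-02 openstack-node-03"
for node in $NODES; do
  echo "ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@${node}.bunny-lab.io"
done
```

Drop the `echo` (keeping the command itself) once the printed commands look right.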

### Install the source and dependencies
Install the source and dependencies for the deployment host.
```sh
sudo su
git clone -b master https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
bash scripts/bootstrap-ansible.sh
```

### Disable Firewalld
The `firewalld` service is enabled on most CentOS systems by default, and its default ruleset prevents OpenStack components from communicating properly. Stop the `firewalld` service and mask it to prevent it from starting.
```sh
systemctl stop firewalld
systemctl mask firewalld
```

## OpenStack Target Node (1/3)
Now we need to get the cluster/target nodes configured so that OpenStack can be deployed onto them via the bootstrapper node later. [Original Target Node Documentation](https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/targethosts.html)

### Disable SELinux
Running with SELinux enabled is not currently supported in OpenStack-Ansible for CentOS/RHEL due to a lack of maintainers for the feature.
```sh
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
```

### Disable Firewalld
The `firewalld` service is enabled on most CentOS systems by default, and its default ruleset prevents OpenStack components from communicating properly. Stop the `firewalld` service and mask it to prevent it from starting.
```sh
systemctl stop firewalld
systemctl mask firewalld
```

### Install Necessary Software
```sh
dnf upgrade
dnf install -y iputils lsof openssh-server sudo tcpdump python3
```

### Reduce Kernel Logging
Reduce the kernel log level by changing the `printk` value in your sysctls. (Note: `sudo echo ... >> file` does not work, because the redirection runs without root privileges; pipe through `sudo tee -a` instead.)
```sh
echo "kernel.printk='4 1 7 4'" | sudo tee -a /etc/sysctl.conf
```

### Configure Local Cinder/Ceph Storage (Optional if using iSCSI)
At this point, we need to configure `/dev/sdb` as the local storage for Cinder.
```sh
pvcreate --metadatasize 2048 /dev/sdb
vgcreate cinder-volumes /dev/sdb
```

!!! failure "`Cannot use /dev/sdb: device is partitioned`"
    You may (in rare cases) see this error when running `pvcreate --metadatasize 2048 /dev/sdb`. If that happens, use `lsblk` to identify the correct disk. In this example, we want the 500GB disk located at `/dev/sda`:
    ```
    [root@openstack-node-02 nicole]# lsblk
    NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
    sda        8:0    0   500G  0 disk
    sdb        8:16   0   250G  0 disk
    ├─sdb1     8:17   0   600M  0 part /boot/efi
    ├─sdb2     8:18   0     1G  0 part /boot
    ├─sdb3     8:19   0  15.7G  0 part [SWAP]
    └─sdb4     8:20   0 232.7G  0 part /
    sr0       11:0    1  1024M  0 rom
    ```
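
That `lsblk` check can be scripted: a whole disk with no child partitions is the likely Cinder candidate. A hedged sketch (my own convenience helper, not part of the official docs — it only prints the device name and never modifies anything):

```sh
# Find whole disks that have no partitions; print the first one as the candidate.
CANDIDATE="$(lsblk -dn -o NAME,TYPE 2>/dev/null | awk '$2 == "disk" {print $1}' | while read -r d; do
  # a single output line for the device means it has no child partitions
  [ "$(lsblk -n "/dev/$d" 2>/dev/null | wc -l)" -eq 1 ] && echo "/dev/$d"
done | head -n 1)"
echo "Cinder candidate disk: ${CANDIDATE:-none found}"
```

Eyeball the printed device before feeding it to `pvcreate`.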

!!! question "End of Current Documentation"
    This is the end of where I have currently iterated in my lab while following along with the official documentation and generalizing it for my specific lab scenarios. The following link is where I am currently at/stuck and need to revisit at my earliest convenience:

    https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/targethosts.html#configuring-the-network
---
tags:
  - OpenStack
---

# OpenStack
OpenStack is basically a highly-available, cluster-friendly virtual machine hypervisor. This particular variant is deployed via Canonical's MicroStack environment using SNAP. It will deploy OpenStack onto a single node, which can later be expanded to additional nodes. You can also use something like OpenShift to deploy a Kubernetes Cluster onto OpenStack automatically via its various APIs.

**Reference Documentation**:

- https://discourse.ubuntu.com/t/single-node-guided/35765
- https://microstack.run/docs/single-node-guided

!!! note
    This document assumes your bare-metal host server is running Ubuntu 22.04 LTS, has at least 16GB of Memory (**32GB for Multi-Node Deployments**), two network interfaces (one for management, one for remote VM access), 200GB of Disk Space for the root filesystem, another 200GB disk for Ceph distributed storage, and 4 processor cores. See [Single-Node Mode System Requirements](https://ubuntu.com/openstack/install)

!!! note "Assumed Networking on the First Cluster Node"
    - **eth0** = 192.168.3.5
    - **eth1** = 192.168.5.200

### Update APT then install upgrades
```
sudo apt update && sudo apt upgrade -y && sudo apt install htop ncdu iptables nano -y
```
!!! tip
    At this time, it would be a good idea to take a checkpoint/snapshot of the server (if it is a virtual machine). This gives you a starting point to come back to as you troubleshoot inevitable deployment issues.

### Update SNAP then install OpenStack SNAP
```
sudo snap refresh
sudo snap install openstack --channel 2023.1
```
### Install & Configure Dependencies
Sunbeam can generate a script to ensure that the machine has all of the required dependencies installed and is configured correctly for use in MicroStack.
```
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sudo reboot
```
### Bootstrapping
Deploy the OpenStack cloud using the cluster bootstrap command.
```
sunbeam cluster bootstrap
```
!!! warning
    If you get an "Unable to connect to websocket" error, run `sudo snap restart lxd`.
    [Known Bug Report](https://bugs.launchpad.net/snap-openstack/+bug/2033400)

!!! note
    Management networks shared by hosts = `192.168.3.0/24`
    MetalLB address allocation range (supports multiple ranges, comma separated) (10.20.21.10-10.20.21.20): `192.168.3.50-192.168.3.60`

### Cloud Initialization
- nicole@moon-stack-01:~$ `sunbeam configure --openrc demo-openrc`
- Local or remote access to VMs [local/remote] (local): `remote`
- CIDR of network to use for external networking (10.20.20.0/24): `192.168.5.0/24`
- IP address of default gateway for external network (192.168.5.1):
- Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y):
- Username to use for access to OpenStack (demo): `nicole`
- Password to use for access to OpenStack (Vb********): `<PASSWORD>`
- Network range to use for project network (192.168.122.0/24):
- List of nameservers guests should use for DNS resolution (192.168.3.11 192.168.3.10):
- Enable ping and SSH access to instances? [y/n] (y):
- Start of IP allocation range for external network (192.168.5.2): `192.168.5.201`
- End of IP allocation range for external network (192.168.5.254): `192.168.5.251`
- Network type for access to external network [flat/vlan] (flat):
- Free network interface that will be configured for external traffic: `eth1`
- WARNING: Interface eth1 is configured. Any configuration will be lost, are you sure you want to continue? [y/n]: `y`

### Pull Down / Generate the Dashboard URL
```
sunbeam openrc > admin-openrc
sunbeam dashboard-url
```
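
The `admin-openrc` file generated above is just a list of `export` statements that the OpenStack CLI reads. A hedged sketch of sanity-checking one before use (the values below are made-up samples written to a temporary file, not output from a real deployment — substitute `./admin-openrc`):

```sh
# Source an openrc-style file and confirm the variables the CLI needs are set.
OPENRC="$(mktemp)"       # stand-in; use ./admin-openrc for the real file
cat > "$OPENRC" <<'EOF'
export OS_AUTH_URL=https://example.bunny-lab.io:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=sample-password
EOF
. "$OPENRC"
for var in OS_AUTH_URL OS_USERNAME OS_PASSWORD; do
  if printenv "$var" > /dev/null; then
    echo "$var is set"
  else
    echo "$var is MISSING"
  fi
done
```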

### Launch a Test VM
Verify the cloud by launching a VM called `test` based on the `ubuntu` image (Ubuntu 22.04 LTS).
```
sunbeam launch ubuntu --name test
```
!!! note "Sample Output"
    - Launching an OpenStack instance ...
    - Access instance with `ssh -i /home/ubuntu/.config/openstack/sunbeam ubuntu@10.20.20.200`
---
tags:
  - Proxmox
  - Ubuntu
---

## Purpose
You may need to deploy many copies of a virtual machine rapidly, and don't want to go through the hassle of setting everything up ad-hoc as the needs arise for each VM workload. Creating a cloud-init template allows you to more rapidly deploy production-ready copies of a template VM (that you create below) into a ProxmoxVE environment.

### Download Image and Import into ProxmoxVE
You will first need to pull down the OS image from Ubuntu's website via CLI, as there is currently no way to do this via the WebUI. Using SSH or the Shell within the WebUI of one of the ProxmoxVE servers, run the following commands to download and import the image into ProxmoxVE.
```sh
# Make a place to keep cloud images
mkdir -p /var/lib/vz/template/images/ubuntu && cd /var/lib/vz/template/images/ubuntu

# Download Ubuntu 24.04 LTS cloud image (amd64, server)
wget -q --show-progress https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

# Create a Placeholder VM to Attach Cloud Image
qm create 9000 --name ubuntu-2404-cloud --memory 8192 --cores 8 --net0 virtio,bridge=vmbr0

# Set UEFI (OVMF) + SCSI controller (Cloud images expect UEFI firmware and SCSI disk.)
qm set 9000 --bios ovmf --scsihw virtio-scsi-pci
qm set 9000 --efidisk0 nfs-cluster-storage:0,pre-enrolled-keys=1

# Import the disk into ProxmoxVE
qm importdisk 9000 noble-server-cloudimg-amd64.img nfs-cluster-storage --format qcow2

# Query ProxmoxVE to find out where the volume was created
pvesm list nfs-cluster-storage | grep 9000

# Attach the disk to the placeholder VM
qm set 9000 --scsi0 nfs-cluster-storage:9000/vm-9000-disk-0.qcow2

# Configure Disk to Boot
qm set 9000 --boot c --bootdisk scsi0
```
### Add Cloud-Init Drive & Configure Template Defaults
Now that the Ubuntu cloud image is attached as the VM's primary disk, you need to attach a Cloud-Init drive. This special drive is where Proxmox writes your user data (username, SSH keys, network settings, etc.) at clone time.
```sh
# Add a Cloud-Init drive to the VM
qm set 9000 --ide2 nfs-cluster-storage:cloudinit

# Enable QEMU Guest Agent
qm set 9000 --agent enabled=1

# Set a default Cloud-Init user (replace 'nicole' with your preferred username)
qm set 9000 --ciuser nicole

# Set a default password (this will be resettable per-clone)
qm set 9000 --cipassword 'SuperSecretPassword'

# Set DNS Servers and Search Domain
qm set 9000 --nameserver "1.1.1.1 1.0.0.1"
qm set 9000 --searchdomain bunny-lab.io

# Enable automatic package upgrades within the VM on first boot
qm set 9000 --ciupgrade 1

# Download your infrastructure public SSH key onto the Proxmox node
wget -O /root/infrastructure_id_rsa.pub \
  https://git.bunny-lab.io/Infrastructure/LinuxServer_SSH_PublicKey/raw/branch/main/id_rsa.pub

# Tell Proxmox to inject this key via Cloud-Init
qm set 9000 --sshkey /root/infrastructure_id_rsa.pub

# Configure networking to use DHCP by default (this will be overridden at cloning)
qm set 9000 --ipconfig0 ip=dhcp
```

### Setup Packages in VM & Convert to Template
At this point, we have a few things we need to do before we can turn the VM into a template and make clones of it. You will need to boot up the VM we made (ID 9000) and run the following commands to prepare it for becoming a template:

```sh
# Install Updates (run these inside the guest VM)
sudo apt update && sudo apt upgrade
sudo apt install -y qemu-guest-agent cloud-init
sudo systemctl enable qemu-guest-agent --now

# Magic Stuff Goes Here =============================

# Shut down the guest, then run this back on the Proxmox node to convert the
# placeholder VM into a reusable template (ignore chattr errors on NFS storage backends)
qm template 9000
```
### Clone the Template into a New VM
You can now create new VMs instantly from the template we created above.

=== "Via WebUI"

    - Log into the ProxmoxVE node where the template was created
    - Right-Click the Template > "**Clone**"
    - Give the new VM a name
    - Set the "Mode" of the clone to "**Full Clone**"
    - Navigate to the new GuestVM in ProxmoxVE and click on the "**Cloud-Init**" tab
    - Change the "**User**" and "**Password**" fields if you want to change them
    - Double-click on the "**IP Config (net0)**" option
        - **IPv4/CIDR**: `192.168.3.67/24`
        - **Gateway (IPv4)**: `192.168.3.1`
    - Click the "**OK**" button
    - Start the VM and wait for it to automatically provision itself

=== "Via CLI"

    ```sh
    # Create a new VM (example: VM 9100) cloned from the template
    qm clone 9000 9100 --name ubuntu-2404-test --full

    # Optionally, override Cloud-Init settings for this clone:
    qm set 9100 --ciuser nicole --cipassword 'AnotherStrongPass'
    qm set 9100 --ipconfig0 ip=192.168.3.67/24,gw=192.168.3.1

    # Boot the new cloned VM
    qm start 9100
    ```
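
The per-clone CLI overrides can be wrapped in a tiny helper so each new clone is one call. A sketch (a dry run — it echoes the `qm` commands instead of executing them; the VM ID, username, and addresses are just examples):

```sh
# Compose the per-clone commands as printable strings for review.
build_clone_cmds() {
  vmid="$1"; user="$2"; ip="$3"; gw="$4"
  echo "qm clone 9000 ${vmid} --name ubuntu-2404-${user} --full"
  echo "qm set ${vmid} --ciuser ${user}"
  echo "qm set ${vmid} --ipconfig0 ip=${ip},gw=${gw}"
}
build_clone_cmds 9100 nicole 192.168.3.67/24 192.168.3.1
```

Pipe the output into `sh` (or remove the `echo`s) once you are happy with the generated commands.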

### Configure VM Hostname
At this point, the hostname of the VM will be randomized, and you will probably want to set it statically. You can do that with the following commands after the server has finished starting (the hostname `cloud-init-clone-01` is just an example — substitute your own):
```sh
sudo hostnamectl set-hostname cloud-init-clone-01
# Optionally keep cloud-init from overwriting the hostname on the next boot
sudo sed -i 's/^preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg
```
---
tags:
  - Proxmox
  - iSCSI
  - Storage
---

## Purpose
This document describes the **end-to-end procedure** for creating a **thick-provisioned iSCSI-backed shared storage target** on **TrueNAS CORE**, and consuming it from a **Proxmox VE cluster** using **shared LVM**.

This approach is intended to:

- Provide SAN-style block semantics
- Enable Proxmox-native snapshot functionality (LVM volume chains)
- Avoid third-party plugins or middleware
- Be fully reproducible via CLI

## Assumptions
- TrueNAS **CORE** (not SCALE)
- ZFS pool already exists and is healthy
- SSH service is enabled on TrueNAS
- Proxmox VE nodes have network connectivity to TrueNAS
- iSCSI traffic is on a reliable, low-latency network (10GbE recommended)
- All VM workloads are drained from at least one Proxmox node for maintenance

!!! note "Proxmox VE Version Context"
    This guide assumes **Proxmox VE 9.1.4 (or later)**. Snapshot-as-volume-chain support on shared LVM (e.g., iSCSI) is available and improved, including enhanced handling of vTPM state in offline snapshots.

!!! warning "Important"
    `volblocksize` **cannot be changed after zvol creation**. Choose carefully.
## Target Architecture

```
ZFS Pool
└─ Zvol (Thick / Reserved)
   └─ iSCSI Extent
      └─ Proxmox LVM PV
         └─ Shared VG
            └─ VM Disks
```

## Create a Dedicated Zvol for Proxmox

### Variables
Adjust as needed before execution.

```sh
POOL_NAME="CLUSTER-STORAGE"
ZVOL_NAME="iscsi-storage"
ZVOL_SIZE="14T"
VOLBLOCKSIZE="16K"
```
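
Since these variables feed destructive commands later, a quick format check catches typos early. A minimal sketch (my own precaution, not part of the official tooling — it assumes sizes are written as `<number><K|M|G|T>`):

```sh
POOL_NAME="CLUSTER-STORAGE"
ZVOL_NAME="iscsi-storage"
ZVOL_SIZE="14T"
VOLBLOCKSIZE="16K"

# Fail early if a size value does not look like <number><K|M|G|T>.
for val in "$ZVOL_SIZE" "$VOLBLOCKSIZE"; do
  echo "$val" | grep -Eq '^[0-9]+[KMGT]$' || { echo "Bad size: $val"; exit 1; }
done
echo "Variables look sane: ${POOL_NAME}/${ZVOL_NAME} (${ZVOL_SIZE}, ${VOLBLOCKSIZE} blocks)"
```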
|
||||
### Create the Zvol (Thick-Provisioned)
|
||||
```sh
|
||||
zfs create -V ${ZVOL_SIZE} \
|
||||
-o volblocksize=${VOLBLOCKSIZE} \
|
||||
-o compression=lz4 \
|
||||
-o refreservation=${ZVOL_SIZE} \
|
||||
${POOL_NAME}/${ZVOL_NAME}
|
||||
```
|
||||
|
||||
!!! note
|
||||
The `refreservation` enforces **true thick provisioning** and prevents overcommit.
|
||||
|
||||

## Configure iSCSI Target (TrueNAS CORE)

This section uses a **CLI-only approach**:

- **CLI** is used for ZFS and LUN (extent backing) creation
- **CLI** is used for the iSCSI portal, target, and LUN association
- **CLI** is used again for validation

### Enable iSCSI Service

```sh
service ctld start
sysrc ctld_enable=YES
```

### Create the iSCSI LUN Backing (CLI)
This step creates the **actual block-backed LUN** that will be exported via iSCSI.

```sh
# Sanity check: confirm the backing zvol exists
ls -l /dev/zvol/${POOL_NAME}/${ZVOL_NAME}

# Create CTL LUN backed by the zvol
ctladm create -b block \
  -o file=/dev/zvol/${POOL_NAME}/${ZVOL_NAME} \
  -S ISCSI-STORAGE \
  -d ISCSI-STORAGE
```

### Verify the LUN is real and correctly sized

```sh
ctladm devlist -v
```

!!! tip
    `Size (Blocks)` must be **non-zero** and match the zvol size. If it is `0`, stop and correct before proceeding.

### Configure iSCSI Portal, Target, and Extent Association (CLI Only)

!!! warning "Do NOT Use the TrueNAS iSCSI GUI"
    **Once you choose a CLI-managed iSCSI configuration, the TrueNAS Web UI must never be used for iSCSI.**
    Opening or modifying **Sharing → Block Shares (iSCSI)** in the GUI will **overwrite CTL runtime state**, invalidate manual `ctladm` configuration, and result in targets that appear correct but expose **no LUNs** to initiators.

    **This configuration is CLI-owned and CLI-managed.**

    - Do **not** add, edit, or view iSCSI objects in the GUI
    - Do **not** use the iSCSI wizard
    - Do **not** mix GUI extents with CLI-created LUNs

#### Create iSCSI Portal (Listen on All Interfaces)

```sh
# Backup any existing ctl.conf
cp -av /etc/ctl.conf /etc/ctl.conf.$(date +%Y%m%d-%H%M%S).bak 2>/dev/null || true

# Write a clean /etc/ctl.conf
cat > /etc/ctl.conf <<'EOF'
# --- Bunny Lab: Proxmox iSCSI (CLI-only) ---
auth-group "no-auth" {
    auth-type none
    initiator-name "iqn.1993-08.org.debian:01:5b963dd51f93" # cluster-node-01 ("cat /etc/iscsi/initiatorname.iscsi")
    initiator-name "iqn.1993-08.org.debian:01:1b4df0fa3540" # cluster-node-02 ("cat /etc/iscsi/initiatorname.iscsi")
    initiator-name "iqn.1993-08.org.debian:01:5669aa2d89a2" # cluster-node-03 ("cat /etc/iscsi/initiatorname.iscsi")
}

# Listen on all interfaces on the default iSCSI port
portal-group "pg0" {
    listen 0.0.0.0:3260
    discovery-auth-group "no-auth"
}

# Create a target IQN
target "iqn.2026-01.io.bunny-lab:storage" {
    portal-group "pg0"
    auth-group "no-auth"

    # Export LUN 0 backed by the zvol device
    lun 0 {
        path /dev/zvol/CLUSTER-STORAGE/iscsi-storage
        serial "ISCSI-STORAGE"
        device-id "ISCSI-STORAGE"
    }
}
EOF

# Restart ctld to apply the configuration file
service ctld restart

# Verify the iSCSI listener is actually up
sockstat -4l | grep ':3260'

# Verify CTL now shows an iSCSI frontend
ctladm portlist -v | egrep -i '(^Port|iscsi|listen=)'
```

!!! success
    At this point, the iSCSI target is live and correctly exposing a block device to initiators. You may now proceed to the **Connect from ProxmoxVE Nodes** section.

## Connect from ProxmoxVE Nodes
Perform the following **on each Proxmox node**.

```sh
# Install iSCSI Utilities
apt update
apt install -y open-iscsi lvm2

# Discover Target
iscsiadm -m discovery -t sendtargets -p <TRUENAS_IP>

# Log In
iscsiadm -m node --login

# Print Session Details (confirms the login and lists the attached disk)
iscsiadm -m session -P 3

### Verify Device
# If everything worked successfully, you should see something like "sdi 8:128 0 8T 0 disk".
lsblk
```

## Create Shared LVM (Execute on One Node Only)

!!! warning "Important"
    **Only run LVM creation on ONE node**. All other nodes will only scan.

```sh
# Initialize Physical Volume
pvcreate /dev/sdX

# Create Volume Group
vgcreate vg_proxmox_iscsi /dev/sdX
```

## Register Storage in Proxmox
### Rescan LVM (Other Nodes)
```sh
pvscan
vgscan
```

### Add Storage (GUI)
**Datacenter → Storage → Add → LVM**

- ID: `iscsi-cluster-lvm`
- Volume Group: `vg_proxmox_iscsi`
- Content: `Disk image, Container`
- Shared: ✔️
- Allow Snapshots as Volume-Chain: ✔️

## Validation

- Snapshot create / revert / delete
- Live migration between nodes
- PBS backup and restore test

!!! success
    If all validation tests pass, the storage is production-ready.

## Expanding iSCSI Storage (No Downtime)
If you need to expand the storage space of the newly-created iSCSI LUN, run the ZFS commands below on the TrueNAS Core server. The first command increases the size, and the second command pre-allocates the space (thick-provisioned).

!!! warning "ProxmoxVE Cluster-specific Notes"

    - `pvresize` must be executed on **exactly one** ProxmoxVE node.
    - All other nodes should only perform `pvscan` / `vgscan` after the resize.
    - Running `pvresize` on multiple nodes can corrupt shared LVM metadata.

```sh
# Expand Zvol (TrueNAS)
zfs set volsize=16T CLUSTER-STORAGE/iscsi-storage
zfs set refreservation=16T CLUSTER-STORAGE/iscsi-storage
service ctld restart

# Rescan the block device on all ProxmoxVE nodes
echo 1 > /sys/class/block/sdX/device/rescan

# Verify on all nodes that the new size is displayed
lsblk /dev/sdX

# Run this on only ONE of the ProxmoxVE nodes
pvresize /dev/sdX

# Rescan on the other nodes (the ones you did not run pvresize on); they will now see the expanded free space
pvscan
vgscan
```
---
tags:
  - Proxmox
---

## Initial Installation / Configuration
Proxmox Virtual Environment is an open-source server virtualization management solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, highly available clusters, storage, and networks with an integrated, easy-to-use web interface or via CLI.

!!! note
    This document assumes you have a storage server that hosts both ISO files via a CIFS/SMB share and an iSCSI LUN (VM & container storage). This document assumes that you are using a TrueNAS Core server to host both of these services.

### Create the First Node
You will need to download the [Proxmox VE 8.1 ISO Installer](https://www.proxmox.com/en/downloads) from the official Proxmox website. Once it is downloaded, you can use [Balena Etcher](https://etcher.balena.io/#download-etcher) or [Rufus](https://rufus.ie/en/) to deploy Proxmox onto a server.

!!! warning
    If you are virtualizing Proxmox under a Hyper-V environment, you will need to follow the [Official Documentation](https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/enable-nested-virtualization) to ensure that nested virtualization is enabled. An example is listed below:
    ```
    Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true # (1)
    Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On # (2)
    ```

    1. This tells Hyper-V to allow the GuestVM to behave as a hypervisor, nested under Hyper-V, allowing the virtualization functionality of the hypervisor's CPU to be passed through to the GuestVM.
    2. This tells Hyper-V to allow your GuestVM to have multiple nested virtual machines with their own independent MAC addresses. This is useful when using nested virtual machines, but is also a requirement when you set up a [Docker Network](../../../../reference/infrastructure/networking/docker-networking/docker-networking.md) leveraging MACVLAN technology.
### Networking
You will need to set a static IP address; in this case, it will be an address within the 20GbE network. You will be prompted to enter these during the ProxmoxVE installation. Be sure to set the hostname to something that matches the following FQDN: `proxmox-node-01.MOONGATE.local`.

| Hostname        | IP Address      | Subnet Mask         | Gateway | DNS Server | iSCSI Portal IP |
| --------------- | --------------- | ------------------- | ------- | ---------- | --------------- |
| proxmox-node-01 | 192.168.101.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.101.100 |
| proxmox-node-01 | 192.168.103.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.103.100 |
| proxmox-node-02 | 192.168.102.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.102.100 |
| proxmox-node-02 | 192.168.104.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.104.100 |

### iSCSI Initiator Configuration
You will need to add the iSCSI initiator from the Proxmox node to the allowed initiator list in TrueNAS Core under "**Sharing > Block Shares (iSCSI) > Initiators Groups**".

In this instance, we will reference Group ID `2`. We need to add the initiator to the "**Allowed Initiators (IQN)**" section. This also includes the following networks that are allowed to connect to the iSCSI portal:

- `192.168.101.0/24`
- `192.168.102.0/24`
- `192.168.103.0/24`
- `192.168.104.0/24`

To get the iSCSI initiator IQN of the current Proxmox node, navigate to the Proxmox server's WebUI, typically located at `https://<IP>:8006`, then log in with username `root` and whatever password you set during initial setup when the ISO image was mounted earlier.

- On the left-hand side, click on the name of the server node (e.g. `proxmox-node-01` or `proxmox-node-02`)
- Click on "**Shell**" to open a CLI to the server
- Run the following command to get the iSCSI initiator (IQN) name to give to TrueNAS Core for the previously-mentioned steps:
```sh
cat /etc/iscsi/initiatorname.iscsi | grep "InitiatorName=" | sed 's/InitiatorName=//'
```

!!! example
    Output of this command will look something like `iqn.1993-08.org.debian:01:b16b0ff1778`.
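
If you want to confirm the `grep`/`sed` pipeline extracts exactly what you expect, you can run it against a sample file first. A hedged sketch (the file contents below mimic a typical `initiatorname.iscsi`, and the IQN is the example value from above, not a real initiator):

```sh
# Run the same extraction against a sample initiatorname.iscsi file.
SAMPLE="$(mktemp)"
cat > "$SAMPLE" <<'EOF'
## DO NOT EDIT OR REMOVE THIS FILE!
InitiatorName=iqn.1993-08.org.debian:01:b16b0ff1778
EOF
IQN="$(grep "InitiatorName=" "$SAMPLE" | sed 's/InitiatorName=//')"
echo "$IQN"
```

Once the sample prints a bare IQN, point the same pipeline at the real `/etc/iscsi/initiatorname.iscsi`.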

## Disable Enterprise Subscription Functionality
You will likely not be paying for / using the enterprise subscription, so we are going to disable that functionality and enable the no-subscription builds. These builds are surprisingly stable and should not cause you any issues.

Add the no-subscription update repository:
```jsx title="/etc/apt/sources.list"
# Add to the end of the file
# Non-Production / No-Subscription Updates
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```

!!! warning
    Please note the reference to `bookworm` in the sections above and below this notice; this may differ depending on the version of ProxmoxVE you are deploying. Reference the version indicated by the rest of the entries in the sources.list file to know which one to use in the added line.

Comment out the enterprise repository:
```jsx title="/etc/apt/sources.list.d/pve-enterprise.list"
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
```

Pull / install available updates:
```sh
apt-get update
apt dist-upgrade
reboot
```

## NIC Teaming
You will need to set up NIC teaming to configure an LACP LAGG. This will add redundancy and a way for devices outside of the 20GbE backplane to interact with the server.

- Ensure that all of the network interfaces appear as something similar to the following:
```jsx title="/etc/network/interfaces"
iface eno1 inet manual
iface eno2 inet manual
# etc
```

- Adjust the network interfaces to add a bond:
```jsx title="/etc/network/interfaces"
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.11/24
    gateway 192.168.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    # bridge-vlan-aware yes # I do not use VLANs
    # bridge-vids 2-4094    # I do not use VLANs (this could be set to any VLANs you want it a member of)
```

!!! warning
    Be sure to include both interfaces for the (dual-port) 10GbE connections in the network configuration. The final example document will be updated at a later point in time once the production server is operational.

- Reboot the server again to make the networking changes take effect fully. Use iLO / iDRAC / IPMI if you have that functionality on your server, in case your configuration goes errant and needs manual intervention / troubleshooting to regain SSH control of the Proxmox server.

## Generalizing VMs for Cloning / Templating
These are the commands I run after cloning a Linux machine so that it resets all information from the machine it was cloned from.

!!! note
    If you use cloud-init-aware OS images as described under Cloud-Init Support on https://pve.proxmox.com/pve-docs/chapter-qm.html, these steps won't be necessary!

```jsx title="Change Hostname"
sudo nano /etc/hostname
```

```jsx title="Change Hosts File"
sudo nano /etc/hosts
```

```jsx title="Reset the Machine ID"
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
```

```jsx title="Regenerate SSH Host Keys"
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
```

```jsx title="Reboot the Server to Apply Changes"
reboot
```
## Configure Alerting
|
||||
Setting up alerts in Proxmox is important and critical to making sure you are notified if something goes wrong with your servers.
|
||||
|
||||
https://technotim.live/posts/proxmox-alerts/
|
||||
|
||||
---
tags:
  - Proxmox
  - ZFS
  - iSCSI
---

**Purpose**: There is a way to integrate ProxmoxVE and TrueNAS more deeply using SSH, simplifying the deployment of virtual disks/volumes passed into GuestVMs in ProxmoxVE. Using ZFS over iSCSI will give you the following non-exhaustive list of benefits:

- Automatically make Zvols in a ZFS Storage Pool
- Automatically bind device-based iSCSI Extents/LUNs to the Zvols
- Allow TrueNAS to handle VM snapshots directly
- Simplify the filesystem overhead of using TrueNAS and iSCSI with ProxmoxVE
- Ability to take snapshots of GuestVMs
- Ability to perform live-migrations of GuestVMs between ProxmoxVE cluster nodes

!!! note "Environment Assumptions"
    This document assumes you are running at least 2 ProxmoxVE nodes. For the sake of the example, it will assume they are named `proxmox-node-01` and `proxmox-node-02`. We will also assume you are using TrueNAS Core. TrueNAS SCALE should work in the same way, but there may be minor operational / setup differences between the two deployments of TrueNAS.

    Secondly, this guide assumes the ProxmoxVE cluster nodes and TrueNAS server exist on the same network `192.168.101.0/24`.

## ZFS over iSCSI Operational Flow
``` mermaid
sequenceDiagram
    participant ProxmoxVE as ProxmoxVE Cluster
    participant TrueNAS as TrueNAS Core (inc. iSCSI & ZFS Storage)

    ProxmoxVE->>TrueNAS: Cluster VM node connects via SSH to create ZVol for VM
    TrueNAS->>TrueNAS: Create ZVol in ZFS storage pool
    TrueNAS->>TrueNAS: Bind ZVol to iSCSI LUN
    ProxmoxVE->>TrueNAS: Connect to iSCSI & attach ZVol as VM storage
    ProxmoxVE->>TrueNAS: (On-Demand) Connect via SSH to create VM snapshot of ZVol
    TrueNAS->>TrueNAS: Create Snapshot of ZVol/VM
```

## All ZFS Storage Nodes / TrueNAS Servers
### Configure SSH Key Exchange
You first need to make some changes to the SSHD configuration of the ZFS server(s) storing data for your cluster. This is fairly straightforward and only needs two lines adjusted. This is based on the [Proxmox ZFS over ISCSI](https://pve.proxmox.com/wiki/Legacy:_ZFS_over_iSCSI) documentation. Be sure to restart the SSH service or reboot the storage server after making the changes below before proceeding to the next steps.

=== "OpenSSH-based OS"

    ```jsx title="/etc/ssh/sshd_config"
    UseDNS no
    GSSAPIAuthentication no
    ```

=== "Solaris-based OS"

    ```jsx title="/etc/ssh/sshd_config"
    LookupClientHostnames no
    VerifyReverseMapping no
    GSSAPIAuthentication no
    ```

## All ProxmoxVE Cluster Nodes
### Configure SSH Key Exchange
The first step is creating SSH trust between the ProxmoxVE cluster nodes and the TrueNAS storage appliance. You will leverage the ProxmoxVE `shell` on every node of the cluster to run the following commands.

**Note**: I name the SSH key files after the server address `192.168.101.100` for simplicity, so I know which server the identity belongs to. You could also name them something else, like `storage.bunny-lab.io_id_rsa`.

``` sh
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.101.100_id_rsa # (1)
ssh-copy-id -i /etc/pve/priv/zfs/192.168.101.100_id_rsa.pub root@192.168.101.100 # (2)
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa root@192.168.101.100 # (3)
```

1. Do not set a password. It will break the automatic functionality.
2. Send the SSH key to the TrueNAS server.
3. Connect to the TrueNAS server at least once to finish establishing the connection.

### Install & Configure Storage Provider
Now you need to set up the storage provider in TrueNAS. You will run the commands below within a ProxmoxVE shell. When finished, log out of the ProxmoxVE WebUI, clear the browser cache for ProxmoxVE, then log back in. This adds a new storage provider called `FreeNAS-API` under the `ZFS over iSCSI` storage type.

``` sh
keyring_location=/usr/share/keyrings/ksatechnologies-truenas-proxmox-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/gpg.284C106104A8CE6D.key' | gpg --dearmor >> ${keyring_location}

#################################################################
cat << EOF > /etc/apt/sources.list.d/ksatechnologies-repo.list
# Source: KSATechnologies
# Site: https://cloudsmith.io
# Repository: KSATechnologies / truenas-proxmox
# Description: TrueNAS plugin for Proxmox VE - Production
deb [signed-by=${keyring_location}] https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/deb/debian any-version main

EOF
#################################################################

apt update
apt install freenas-proxmox
apt full-upgrade

systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
```

## Primary ProxmoxVE Cluster Node
From this point, we are ready to add the shared storage provider to the cluster via the primary node. Using the primary node is not strictly required; it just simplifies the documentation.

Navigate to **"Datacenter (BUNNY-CLUSTER) > Storage > Add > ZFS over iSCSI"**

| **Field** | **Value** | **Additional Notes** |
| :--- | :--- | :--- |
| ID | `bunny-zfs-over-iscsi` | Friendly Name |
| Portal | `192.168.101.100` | IP Address of iSCSI Portal |
| Pool | `PROXMOX-ZFS-STORAGE` | This is the ZFS Storage Pool you will use to store GuestVM Disks |
| ZFS Block Size | `4k` | |
| Target | `iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage` | The iSCSI Target |
| Target Group | `<Leave Blank>` | |
| Enable | `<Checked>` | |
| iSCSI Provider | `FreeNAS-API` | |
| Thin-Provision | `<Checked>` | |
| Write Cache | `<Checked>` | |
| API use SSL | `<Unchecked>` | Disabled unless you have SSL enabled on TrueNAS |
| API Username | `root` | This is the account that is allowed to make ZFS zvols / datasets |
| API IPv4 Host | `192.168.101.100` | iSCSI Portal Address |
| API Password | `<Root Password of TrueNAS Box>` | |
| Nodes | `proxmox-node-01,proxmox-node-02` | All ProxmoxVE Cluster Nodes |

!!! success "Storage is Provisioned"
    At this point, the storage should propagate throughout the ProxmoxVE cluster and appear as a location to deploy virtual machines and/or containers. You can now use this storage for snapshots and live-migrations between ProxmoxVE cluster nodes as well.

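For reference, the storage entry written to `/etc/pve/storage.cfg` on the cluster should look roughly like the sketch below. This is an illustrative reconstruction from the table values above, not copied from a live system, and the exact keys may vary by plugin version:

```jsx title="/etc/pve/storage.cfg (illustrative sketch)"
zfs: bunny-zfs-over-iscsi
        blocksize 4k
        iscsiprovider FreeNAS-API
        pool PROXMOX-ZFS-STORAGE
        portal 192.168.101.100
        target iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage
        content images
        nodes proxmox-node-01,proxmox-node-02
        sparse 1
```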
---
tags:
  - Rancher
  - Harvester
---

**Purpose**: Rancher Harvester is an awesome tool that acts like a self-hosted cloud VDI provider, similar to AWS, Linode, and other online cloud compute platforms. In most scenarios, you will deploy "Rancher" in addition to Harvester to orchestrate the deployment, management, and rolling upgrades of a Kubernetes cluster. You can also just run standalone virtual machines, similar to Hyper-V, RHEV, oVirt, Bhyve, XenServer, XCP-NG, and VMware ESXi.

:::note Prerequisites
This document assumes your bare-metal host has at least 32GB of memory, 200GB of disk space, and 8 processor cores. See [Recommended System Requirements](https://docs.harvesterhci.io/v1.1/install/requirements)
:::

## First Harvester Node
### Download Installer ISO
You will need to navigate to the Rancher Harvester GitHub to download the [latest ISO release of Harvester](https://releases.rancher.com/harvester/v1.1.2/harvester-v1.1.2-amd64.iso), currently **v1.1.2**, then image it onto a USB flashdrive using a tool like [Rufus](https://github.com/pbatard/rufus/releases/download/v4.2/rufus-4.2p.exe). Proceed to boot the bare-metal server from the USB drive to begin the Harvester installation process.
### Begin Setup Process
You will be waiting a few minutes while the server boots from the USB drive, but you will eventually land on a page where it asks you to set up various values to use for networking and the cluster itself.
The values seen below are examples and represent how my homelab is configured.

- **Management Interface(s)**: `eno1,eno2,eno3,eno4`
- **Network Bond Mode**: `Active-Backup`
- **IP Address**: `192.168.3.254/24` *<---- **Note:** Be sure to add CIDR notation.*
- **Gateway**: `192.168.3.1`
- **DNS Server(s)**: `1.1.1.1,1.0.0.1,8.8.8.8,8.8.4.4`
- **Cluster VIP (Virtual IP)**: `192.168.3.251` *<---- **Note**: See "VIRTUAL IP CONFIGURATION" note below.*
- **Cluster Node Token**: `19-USED-when-JOINING-more-NODES-to-EXISTING-cluster-55`
- **NTP Server(s)**: `0.suse.pool.ntp.org`

:::caution Virtual IP Configuration
The VIP assigned to the first node in the cluster will act as a proxy to the built-in load-balancing system. It is important that you do not create a second node with the same VIP (this could cause instability in the existing cluster), or use an existing VIP as the node IP address of a new Harvester cluster node.
:::
:::tip
Based on your preference, it would be good to assign the device a static DHCP reservation, or use numbers counting down from **.254** (e.g. `192.168.3.254`, `192.168.3.253`, `192.168.3.252`, etc...)
:::

### Wait for Installation to Complete
The installation process will take quite some time, but when it is finished, the Harvester node will reboot and take you to a splash screen with the Harvester logo, with indicators as to what the VIP and management interface IPs are configured as, and whether or not the associated systems are operational and ready. **Be patient until both statuses say `READY`**. If after 15 minutes the status has still not changed to `READY` for both fields, see the note below.
:::caution Issues with `rancher-harvester-repo` Image
During my initial deployment efforts with Harvester v1.1.2, I noticed that the Harvester node never came online. That was because something bugged out during installation and the `rancher-harvester-repo` image was not properly installed prior to node initialization. This will effectively soft-lock the node unless you reinstall it from scratch, as the Docker Hub registry that Harvester looks to in order to finish the deployment no longer exists, and the process depends on the local image bundled with the installer ISO.

If this happens, you unfortunately need to start over, reinstall Harvester, and hope that it works the second time around. No other workarounds are currently known at this time on version 1.1.2.
:::

## Additional Harvester Nodes
If you work in a production environment, you will want more than one Harvester node to allow live-migrations, high-availability, and better load-balancing in the Harvester cluster. The section below outlines the steps necessary to create additional Harvester nodes, join them to the existing Harvester cluster, and validate that they are functioning without issues.
### Installation Process
Not Documented Yet
### Joining Node to Existing Cluster
Not Documented Yet

## Installing Rancher
If you plan on using Harvester for more than just running virtual machines (e.g. containers), you will want to deploy Rancher inside of the Harvester cluster in order to orchestrate the deployment, management, and rolling upgrades of various forms of Kubernetes clusters (RKE2 suggested). The steps below go over the process of deploying a high-availability Rancher environment to "adopt" Harvester as a VDI/compute platform for deploying the Kubernetes cluster.
### Provision ControlPlane Node(s) VMs on Harvester
Not Documented Yet
### Adopt Harvester as Cluster Target
Not Documented Yet
### Deploy Production Kubernetes Cluster to Harvester
Not Documented Yet

83
deployments/services/asset-management/homebox.md
Normal file
83
deployments/services/asset-management/homebox.md
Normal file
@@ -0,0 +1,83 @@
|
||||
---
tags:
  - Homebox
  - Asset Management
  - Docker
---

**Purpose**: Homebox is the inventory and organization system built for the home user! With a focus on simplicity and ease of use, Homebox is the perfect solution for your home inventory, organization, and management needs.

[Reference Documentation](https://hay-kot.github.io/homebox/quick-start/)

!!! warning "Protect with Keycloak"
    The GitHub project for this software appears to have been archived in a read-only state in June 2024. There is no default admin credential, so setting the environment variable `HBOX_OPTIONS_ALLOW_REGISTRATION` to `false` will literally make you unable to log into the system. You also cannot change it after the fact; registering an account, then disabling registration and restarting the container does not work that way.

    Due to this behavior, it is imperative that you deploy this either only internally, or if it is external, put it behind something like [Authentik](../authentication/authentik.md) or [Keycloak](../authentication/keycloak/deployment.md).

## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.4"

services:
  homebox:
    image: ghcr.io/hay-kot/homebox:latest
    container_name: homebox
    restart: always
    environment:
      - HBOX_LOG_LEVEL=info
      - HBOX_LOG_FORMAT=text
      - HBOX_MODE=production
      - HBOX_OPTIONS_ALLOW_REGISTRATION=true
      - HBOX_WEB_MAX_UPLOAD_SIZE=50
      - HBOX_WEB_READ_TIMEOUT=20
      - HBOX_WEB_WRITE_TIMEOUT=20
      - HBOX_WEB_IDLE_TIMEOUT=60
      - HBOX_MAILER_HOST=${HBOX_MAILER_HOST}
      - HBOX_MAILER_PORT=${HBOX_MAILER_PORT}
      - HBOX_MAILER_USERNAME=${HBOX_MAILER_USERNAME}
      - HBOX_MAILER_PASSWORD=${HBOX_MAILER_PASSWORD}
      - HBOX_MAILER_FROM=${HBOX_MAILER_FROM}
    volumes:
      - /srv/containers/homebox:/data/
    ports:
      - 7745:7745
    networks:
      docker_network:
        ipv4_address: 192.168.5.25

networks:
  docker_network:
    external: true
```

```yaml title=".env"
HBOX_MAILER_HOST=mail.bunny-lab.io
HBOX_MAILER_PORT=587
HBOX_MAILER_USERNAME=noreply@bunny-lab.io
HBOX_MAILER_PASSWORD=REDACTED
HBOX_MAILER_FROM=noreply@bunny-lab.io
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    homebox:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: homebox
      rule: Host(`box.bunny-lab.io`)
      middlewares:
        - "auth-bunny-lab-io" # Referencing the Keycloak Server

  services:
    homebox:
      loadBalancer:
        servers:
          - url: http://192.168.5.25:7745
        passHostHeader: true
```

143
deployments/services/asset-management/snipe-it.md
Normal file
143
deployments/services/asset-management/snipe-it.md
Normal file
@@ -0,0 +1,143 @@
|
||||
---
tags:
  - Snipe-IT
  - Asset Management
  - Docker
---

**Purpose**: A free open source IT asset/license management system.

!!! warning
    The Snipe-IT container will attempt to launch after the MariaDB container starts, but MariaDB takes a while to set itself up before it can accept connections; as a result, Snipe-IT will fail to initialize the database. Just wait about 30 seconds after deploying the stack, then restart the Snipe-IT container to initialize the database. You will know it worked if you see notes about data being `Migrated`.

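As a hedged alternative to the manual restart described above, the `db` service can be given a healthcheck and Snipe-IT gated on it. This sketch is not part of the original stack, and `depends_on` conditions require a Compose version that honors them:

```yaml title="docker-compose.yml (optional healthcheck sketch)"
services:
  snipeit:
    depends_on:
      db:
        condition: service_healthy   # wait until MariaDB accepts connections
  db:
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 --silent"]
      interval: 10s
      timeout: 5s
      retries: 10
```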
## Docker Configuration
```yaml title="docker-compose.yml"
version: '3.7'

services:
  snipeit:
    image: snipe/snipe-it
    ports:
      - "8000:80"
    depends_on:
      - db
    env_file:
      - stack.env
    volumes:
      - /srv/containers/snipe-it:/var/lib/snipeit
    networks:
      docker_network:
        ipv4_address: 192.168.5.50

  redis:
    image: redis:6.2.5-buster
    ports:
      - "6379:6379"
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.51

  db:
    image: mariadb:10.5
    ports:
      - "3306:3306"
    env_file:
      - stack.env
    volumes:
      - /srv/containers/snipe-it/db:/var/lib/mysql
    networks:
      docker_network:
        ipv4_address: 192.168.5.52

  mailhog:
    image: mailhog/mailhog:v1.0.1
    ports:
      # - 1025:1025
      - "8025:8025"
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.53

networks:
  docker_network:
    external: true
```

```yaml title=".env"
APP_ENV=production
APP_DEBUG=false
APP_KEY=base64:SomethingSecure
APP_URL=https://assets.bunny-lab.io
APP_TIMEZONE='America/Denver'
APP_LOCALE=en
MAX_RESULTS=500
PRIVATE_FILESYSTEM_DISK=local
PUBLIC_FILESYSTEM_DISK=local_public
DB_CONNECTION=mysql
DB_HOST=db
DB_DATABASE=snipedb
DB_USERNAME=snipeuser
DB_PASSWORD=SomethingSecure
DB_PREFIX=null
DB_DUMP_PATH='/usr/bin'
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
IMAGE_LIB=gd
MYSQL_DATABASE=snipedb
MYSQL_USER=snipeuser
MYSQL_PASSWORD=SomethingSecure
MYSQL_ROOT_PASSWORD=SomethingSecure
REDIS_HOST=redis
REDIS_PASSWORD=SomethingSecure
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=mail.bunny-lab.io
MAIL_PORT=587
MAIL_USERNAME=assets@bunny-lab.io
MAIL_PASSWORD=SomethingSecure
MAIL_ENCRYPTION=starttls
MAIL_FROM_ADDR=assets@bunny-lab.io
MAIL_FROM_NAME='Bunny Lab Asset Management'
MAIL_REPLYTO_ADDR=assets@bunny-lab.io
MAIL_REPLYTO_NAME='Bunny Lab Asset Management'
MAIL_AUTO_EMBED_METHOD='attachment'
DATA_LOCATION=/srv/containers/snipe-it
APP_TRUSTED_PROXIES=192.168.5.29
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    assets-bunny-lab-io:
      entryPoints:
        - websecure
      rule: "Host(`assets.bunny-lab.io`)"
      service: "assets-bunny-lab-io"
      tls:
        certResolver: letsencrypt
      middlewares:
        - "assets-bunny-lab-io"
        - "auth-bunny-lab-io" # Referencing the Keycloak Server

  middlewares:
    assets-bunny-lab-io:
      headers:
        customRequestHeaders:
          X-Forwarded-Proto: "https"
          X-Forwarded-Host: "assets.bunny-lab.io"
        customResponseHeaders:
          X-Custom-Header: "CustomValue" # Example of a static header

  services:
    assets-bunny-lab-io:
      loadBalancer:
        servers:
          - url: "http://192.168.5.50:8080"
        passHostHeader: true
```

---
tags:
  - Active Directory
  - Certificate Services
  - Authentication
---

## Purpose
This document outlines the Microsoft-recommended best practices for deploying a secure, internal-use-only, two-tier Public Key Infrastructure (PKI) using Windows Server 2022 or newer. The PKI supports securing S/MIME email, 802.1X Wi-Fi with NPS, and LDAP over SSL (LDAPS).

!!! abstract "CA Deployment Breakdown"
    The environment will consist of at least 2 virtual machines. For the purposes of this document they will be named `LAB-CA-01` and `LAB-CA-02`, short for "*Lab Certificate Authority [01|02]*". In a two-tier hierarchy, an offline Root CA (*you intentionally keep this VM offline*) signs a single "*Subordinate*" Enterprise CA certificate. The Subordinate CA is domain-joined and handles all certificate requests. Clients trust the PKI via Group Policy and Active Directory integration.

    In this case, `LAB-CA-01` is the Root CA, while `LAB-CA-02` is the Intermediary/Subordinate CA. You can add more than one subordinate CA if you desire more redundancy in your environment. Making them operate together is generally automatic and does not require manual intervention.

!!! note "Certificate Authority Server Provisioning Assumptions"
    - OS = Windows Server 2022/2025, bare-metal or as a VM
    - Give it at least 4GB of RAM
    - [Change the edition of Windows Server from "**Evaluation**" to "**Standard**" via DISM](../../../../workflows/operations/windows/change-windows-edition.md)
    - Ensure the server is fully updated
    - [Ensure the server is activated](../../../../workflows/operations/windows/change-windows-edition.md#force-activation-edition-switcher)
    - Ensure the timezone is correctly configured
    - Ensure the hostname is correctly configured

!!! note "Domain Environment Assumptions"
    It is assumed that you already have existing infrastructure hosting an Active Directory Domain with at least one domain controller. This document does not outline how to set up a domain controller; you will need to figure that out on your own.

## Offline (Non-Domain-Joined) Root CA `LAB-CA-01`
### Role Deployment
This is the initial deployment of the root certificate authority; the settings here should be double- and triple-checked before proceeding through each step.

- Provision a **non-domain-joined** Windows Server
    - It is critical that this device is not domain-joined, for security purposes
- Navigate to "**Server Manager > Manage > Add Roles and Features**"
- Check "**Active Directory Certificate Services**"
- When prompted to confirm, click the "**Add Features**" button
    - Ensure the "**Include management tools (if applicable)**" checkbox is checked
- Click "**Next**" > "**Next**" > "**Next**"
- You will be told that the name of the server cannot be changed after this point and that it will be associated with `WORKGROUP`; this is fine and you can proceed
- Check the boxes for the following role services:
    - `Certification Authority`
    - `Certification Authority Web Enrollment`
- When prompted to confirm multiple times, click the "**Add Features**" button
    - Ensure the "**Include management tools (if applicable)**" checkbox is checked
- There are additional steps such as `Configure AIA and CDP extensions with HTTP paths` and `Publish root cert and CRL to AD and internal HTTP`, but these do not apply to an LDAPS-only deployment and are more meant for websites / webhosting (current understanding)
- Click "**Next**" > "**Next**" > "**Next**" > "**Install**"
- Restart the Server

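The role installation and CA configuration can also be scripted from an elevated PowerShell prompt. This is a hedged sketch of equivalent commands rather than the documented GUI procedure; the values mirror those used in the Role Configuration section below:

```powershell
# Install the CA role and Web Enrollment role service with management tools
Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools

# Configure a standalone Root CA with the same values as the wizard
Install-AdcsCertificationAuthority -CAType StandaloneRootCA `
    -CACommonName "BunnyLab-RootCA" -CADistinguishedNameSuffix "O=Bunny Lab,C=US" `
    -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" `
    -KeyLength 4096 -HashAlgorithmName SHA256 `
    -ValidityPeriod Years -ValidityPeriodUnits 10

Install-AdcsWebEnrollment
```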
### Role Configuration
We have a few things to configure within the CA to make it ready to handle certificate requests.

- Navigate to "**Server Manager > (Alert Flag) > Post-deployment Configuration: Active Directory Certificate Services**"
- You will be prompted for an admin user; in this example, you will use the pre-populated `LAB-CA-01\Administrator`
- Check the boxes for `Certification Authority` and `Certification Authority Web Enrollment` then click "**Next**"
- Check the "**Standalone CA**" radio box then click "**Next**"
- Check the "**Root CA**" radio box then click "**Next**"
- Check the "**Create a new private key**" radio box then click "**Next**"
- Click the dropdown menu for "**Select a cryptographic provider**" and ensure that "**RSA#Microsoft Software Key Storage Provider**" is selected
    - *Microsoft Software Key Storage Provider (KSP) is the latest, most flexible provider, designed to work with the Cryptography Next Generation (CNG) APIs. It offers better support for modern algorithms and improved security management (such as support for key attestation, better hardware integration, and improved key protection mechanisms).*
- Set the key length to `4096`
- Set the hash algorithm to `SHA256`
- Click "**Next**"
- **Common Name for this CA**: `BunnyLab-RootCA`
- **Distinguished name suffix**: `O=Bunny Lab,C=US`
- **Preview of distinguished name**: `CN=BunnyLab-RootCA,O=Bunny Lab,C=US`
- Click "**Next**"
- Specify the validity period: `10 Years`, then click "**Next**" > "**Next**" > "**Configure**"

You will see a finalization screen confirming everything we have configured; it should look something like what is seen below:

| **Field** | **Value** |
| :--- | :--- |
| CA Type | Standalone Root |
| Cryptographic provider | RSA#Microsoft Software Key Storage Provider |
| Hash Algorithm | SHA256 |
| Key Length | 4096 |
| Allow Administrator Interaction | Disabled |
| Certificate Validity Period | `<10 Years from Today>` |
| Distinguished Name | CN=BunnyLab-RootCA,O=Bunny Lab,C=US |
| Certificate Database Location | C:\Windows\system32\CertLog |
| Certificate Database Log Location | C:\Windows\system32\CertLog |

!!! success "Active Directory Certificate Services"
    If everything went well, you will see that "**Certification Authority**" and "**Certification Authority Web Enrollment**" both have a status of "**Configuration succeeded**". At this point, you can click the "**Close**" button to conclude the Root CA configuration.

## Online (Domain-Joined) Subordinate/Intermediary CA `LAB-CA-02`
### Role Deployment
Now that we have set up the root certificate authority, we can focus on setting up the subordinate CA.

!!! warning "Enterprise Admin Requirement"
    When you are setting up the role, you **absolutely** have to use an "*Enterprise*" Admin account. This could be a service account like `svcCertAdmin` or something similar.

- Navigate to "**Server Manager > (Alert Flag) > Post-deployment Configuration: Active Directory Certificate Services**"
- Under credentials, enter the username for an Enterprise Admin (e.g. `BUNNY-LAB\nicole.rappe`)
- Click "**Next**"
- Check the following roles (*we will add the rest after setting up the core CA functionality*):
    - `Certification Authority`
    - `Certification Authority Web Enrollment`
- Check the "**Enterprise CA**" radio box then click "**Next**"
- Check the "**Subordinate CA**" radio box then click "**Next**"
- Check the "**Create a new private key**" radio box then click "**Next**"
- Click the dropdown menu for "**Select a cryptographic provider**" and ensure that "**RSA#Microsoft Software Key Storage Provider**" is selected
    - *Microsoft Software Key Storage Provider (KSP) is the latest, most flexible provider, designed to work with the Cryptography Next Generation (CNG) APIs. It offers better support for modern algorithms and improved security management (such as support for key attestation, better hardware integration, and improved key protection mechanisms).*
- Set the key length to `4096`
- Set the hash algorithm to `SHA256`
- Click "**Next**"
- **Common Name for this CA**: `BunnyLab-SubordinateCA-01`
- **Distinguished name suffix**: `DC=bunny-lab,DC=io`
    - This will be auto-filled based on the domain that the CA is joined to
- **Preview of distinguished name**: `CN=BunnyLab-SubordinateCA-01,DC=bunny-lab,DC=io`
- Click "**Next**"
- Select the "**Save a certificate request to file on the target machine**" radio button
    - This will auto-populate the destination to something like "`C:\LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req`"
- Click "**Next**" > "**Next**" > "**Configure**"

You will see a finalization screen confirming everything we have configured; it should look something like what is seen below:

| **Field** | **Value** |
| :--- | :--- |
| CA Type | Enterprise Subordinate |
| Cryptographic provider | RSA#Microsoft Software Key Storage Provider |
| Hash Algorithm | SHA256 |
| Key Length | 4096 |
| Allow Administrator Interaction | Disabled |
| Certificate Validity Period | Determined by the parent CA |
| Distinguished Name | CN=BunnyLab-SubordinateCA-01,DC=bunny-lab,DC=io |
| Offline Request File Location | `C:\LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req` |
| Certificate Database Location | C:\Windows\system32\CertLog |
| Certificate Database Log Location | C:\Windows\system32\CertLog |

!!! quote "Pending Certificate Signing Request"
    You will see a screen telling you that the **Certification Authority Web Enrollment** was successful, but it will give a warning about the **Certification Authority**: "The Active Directory Certificate Services installation is incomplete. To complete the installation, use the request file <file-name> to obtain a certificate from the parent CA [*the Root CA*]. Then, use the Certification Authority snap-in to install the certificate. To complete this procedure, right-click the node with the name of the CA, and then click 'Install CA Certificate'."

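The Root-signs-Subordinate handshake can be exercised end-to-end with OpenSSL as a self-contained sanity model of what the signing process below produces. This is purely illustrative (toy keys, file names borrowed from the guide), not a replacement for the AD CS procedure:

```shell
# Toy two-tier chain: create a root CA, sign a subordinate CA cert with it,
# then verify the subordinate against the root.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out RootCA.cer \
  -subj "/CN=BunnyLab-RootCA/O=Bunny Lab/C=US" -days 3650

openssl req -new -newkey rsa:2048 -nodes -keyout sub.key -out sub.csr \
  -subj "/CN=BunnyLab-SubordinateCA-01"

# Mark the signed certificate as a CA certificate, like the wizard does
printf 'basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign,cRLSign\n' > sub_ext.cnf
openssl x509 -req -in sub.csr -CA RootCA.cer -CAkey root.key -CAcreateserial \
  -out LAB-CA-02-SubCA.cer -days 1825 -extfile sub_ext.cnf

openssl verify -CAfile RootCA.cer LAB-CA-02-SubCA.cer  # prints "LAB-CA-02-SubCA.cer: OK"
```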
### Role Configuration
|
||||
At this point, we will need to focus on getting the certificate signing request generated on `LAB-CA-02` to `LAB-CA-01` (the rootCA), this can be via temporary network access or via a USB flashdrive.
|
||||
|
||||
!!! danger
|
||||
If using a USB flashdrive is not viable, don't leave the RootCA server on the network any longer than what is absolutely necessary.
|
||||
|
||||
- Once the certificate signing request file `C:\LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req` is on `LAB-CA-01` (the Root CA), we can proceed to get it signed.
- Navigate to "**Server Manager > Tools > Certification Authority**"
- Right-click the CA node in the tree view on the left-hand sidebar (e.g. `BunnyLab-RootCA`)
- Click on "**All Tasks > Submit new request...**"
- Browse to and select the subordinate CA's `.req` file (e.g. `LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req`)
- Click on "**BunnyLab-RootCA > Pending Requests**"
- Right-click the request we just imported, and select "**All Tasks > Issue**"
- Click on "**BunnyLab-RootCA > Issued Certificates**"
- Locate the new subordinate CA certificate, and double-click it
- Click the "**Details**" tab
- Click the "**Copy to File...**" button
- Click "**Next**"
- Choose `DER encoded binary X.509 (.CER)` and save the file as `LAB-CA-02-SubCA.cer`
- Export the Root CA certificate:
    - Right-click the `BunnyLab-RootCA` node > "**Properties > View Certificate > Details > Copy to File...**"
    - Save it as `RootCA.cer`
- Copy both `LAB-CA-02-SubCA.cer` (the signed subordinate CA certificate) and `RootCA.cer` (the Root CA certificate) to the subordinate CA (`LAB-CA-02`) using a secure method (e.g. a USB drive)
- On `LAB-CA-02` (the subordinate CA), navigate to "**Server Manager > Tools > Certification Authority**"
- Right-click the CA node in the tree view on the left-hand sidebar (e.g. `BunnyLab-SubordinateCA-01`)
- Click on "**All Tasks > Install CA Certificate**"
- Browse to and select `LAB-CA-02-SubCA.cer` (*you may need to change the certificate file extension filter to `X.509 Certificate`*)
- When prompted for the CA chain or root certificate, browse for and select the `RootCA.cer` you transferred earlier along with the `LAB-CA-02-SubCA.cer`
- Launch `certlm.msc` to open the "**Certificates - Local Computer**" management window
- Right-click "**Trusted Root Certification Authorities**" > "**All Tasks > Import**"
- Click "**Next**"
- Browse to the `BunnyLab-RootCA.crl` located at `\\LAB-CA-01\CertEnroll\BunnyLab-RootCA.crl` (*if the Root CA is temporarily on the network*) or copy the file manually via USB drive from `C:\Windows\System32\certsrv\CertEnroll\BunnyLab-RootCA.crl`
- Place all certificates in the following store: "**Trusted Root Certification Authorities**"
- Click "**Next**" and finish importing the Certificate Revocation List
- Right-click the CA node in the tree view on the left-hand sidebar (e.g. `BunnyLab-SubordinateCA-01`)
- Click on "**All Tasks > Start Service**"
- Verify that the CA status is now green (running)

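The snap-in steps for submitting the request and installing the issued certificate can also be done from an elevated command prompt. This is a hedged sketch using the built-in `certreq` and `certutil` tools; the CA config string `"LAB-CA-01.bunny-lab.io\BunnyLab-RootCA"` and the file paths are assumptions based on the lab names used above:

```powershell
# On LAB-CA-01 (Root CA): submit the request and save the issued certificate.
# If the request lands in "Pending Requests", issue it from the snap-in
# (or with 'certutil -resubmit <RequestId>') and retrieve it afterwards.
certreq -submit -config "LAB-CA-01.bunny-lab.io\BunnyLab-RootCA" `
  C:\LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req C:\LAB-CA-02-SubCA.cer

# On LAB-CA-02 (Subordinate CA): install the signed certificate and start the service.
certutil -installCert C:\LAB-CA-02-SubCA.cer
net start certsvc
```
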
## Create Auto-Enrollment Group Policy

The Certificate Auto-Enrollment Group Policy enables domain-joined devices (*computers, including domain controllers*) to automatically request, renew, and install certificates from the Enterprise CA (in this case, the subordinate CA `LAB-CA-02`).

### Create GPO
- Open the Group Policy Management editor on one of your domain controllers, then choose "**Create a GPO in this domain, and link it here...**" at a location where it can target the domain controllers; this may be at the domain root, or in a specific OU that holds the domain controllers (e.g. `bunny-lab.io\Domain Controllers`)
- Name the new GPO something like "**Certificate Auto-Enrollment**"
- Edit the GPO
- Navigate to "**Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies**"
- Find and open "**Certificate Services Client - Auto-Enrollment**"
- Set the Configuration Model to "**Enabled**"
- Check both checkboxes: "**Renew expired certificates, update pending certificates, and remove revoked certificates**" and "**Update certificates that use certificate templates**"
- Click "**OK**"
- Navigate to "**Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > Trusted Root Certification Authorities**"
- Right-click the "**Trusted Root Certification Authorities**" folder and select "**Import...**" > Browse for the `RootCA.cer` that you previously generated (*copy it to the domain controller from one of the Certificate Authorities if needed*)
- Proceed to import the certificate, clicking through all of the prompts and confirmations until the import finishes
- Navigate to "**Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > Intermediate Certification Authorities**"
- Right-click the "**Intermediate Certification Authorities**" folder and select "**Import...**" > Browse for the `LAB-CA-02-SubCA.cer` that you previously generated (*copy it to the domain controller from one of the Certificate Authorities if needed*)
- Proceed to import the certificate, clicking through all of the prompts and confirmations until the import finishes
- Run `gpupdate /force` on your domain controller(s) and give them a few minutes to pull down their new domain controller certificates

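Rather than waiting for the next refresh cycle, the GPO application and enrollment trigger can be sketched from an elevated prompt on a domain controller (standard built-in commands; no lab-specific assumptions beyond running them on a targeted machine):

```powershell
# Force a Group Policy refresh, immediately trigger certificate auto-enrollment,
# then confirm which GPOs actually applied to this computer.
gpupdate /force
certutil -pulse
gpresult /r /scope computer
```
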
### Validate Auto-Enrollment Functionality

At this point, you need to check that there is a certificate installed under "**Certificates - Local Computer > Personal > Certificates**" for domain controller server authentication.

- Load the Certificates - Local Computer console (`certlm.msc`) and navigate to "**Personal > Certificates**" > You should see something similar to the following:

| **Issued To** | **Issued By** | **Expiration Date** | **Intended Purposes** | **Certificate Template** |
| :--- | :--- | :--- | :--- | :--- |
| LAB-DC-01.bunny-lab.io | BunnyLab-SubordinateCA-01 | 7/15/2026 | Directory Service Email Replication | Directory Email Replication |
| LAB-DC-01.bunny-lab.io | BunnyLab-SubordinateCA-01 | 7/15/2026 | Client Authentication, Server Authentication, Smart Card Logon | Domain Controller Authentication |
| LAB-DC-01.bunny-lab.io | BunnyLab-SubordinateCA-01 | 7/15/2026 | Client Authentication, Server Authentication, Smart Card Logon, KDC Authentication | Kerberos Authentication |

### Validate LDAPS Connectivity

Lastly, we want to ensure that LDAPS is functioning. By default, once these certificates are enrolled on the domain controller(s), LDAPS *should* just work out of the box. To verify, run the following command from any device on the same network as the domain controllers. If it comes back successful, as in the example output below, then you are golden:

```powershell
PS C:\Users\nicole.rappe> Test-NetConnection LAB-DC-01.bunny-lab.io -Port 636

ComputerName     : LAB-DC-01.bunny-lab.io
RemoteAddress    : 192.168.3.25
RemotePort       : 636
InterfaceAlias   : Ethernet
SourceAddress    : 192.168.3.254
TcpTestSucceeded : True

PS C:\Users\nicole.rappe> Test-NetConnection LAB-DC-02.bunny-lab.io -Port 636

ComputerName     : LAB-DC-02.bunny-lab.io
RemoteAddress    : 192.168.3.26
RemotePort       : 636
InterfaceAlias   : Ethernet
SourceAddress    : 192.168.3.254
TcpTestSucceeded : True
```
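`Test-NetConnection` only proves the port is open. If you want to inspect the actual certificate handshake from a Linux or macOS host, a rough equivalent (assuming `openssl` is installed and the DC is reachable from that machine) is:

```shell
# Prints the negotiated protocol and certificate verification result
# (-brief requires a reasonably recent OpenSSL).
openssl s_client -connect LAB-DC-01.bunny-lab.io:636 -brief < /dev/null
```

A `Verification: OK` (or at minimum a certificate chain issued by `BunnyLab-SubordinateCA-01`) indicates the auto-enrolled certificate is being served.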

!!! success "Successful LDAPS Connectivity"
    LDAPS should now be functional on your domain controller(s).

!!! abstract "Raw Unprocessed/Unimplemented Steps"
    Publish CRLs regularly, configure overlap periods, and monitor expiration. Enable Delta CRLs on the Subordinate CA, but not on the Root CA.

    Security Recommendations:

    - Harden the CA servers; limit access to PKI administrators.
    - Use BitLocker or an HSM for key protection.
    - Monitor certificate issuance and renewals with audit logs and scripts.

@@ -0,0 +1,34 @@
---
tags:
  - Active Directory
  - Group Policy
  - Authentication
---

**Purpose**:
To deploy a desktop shortcut pointing to the root path of a network share (e.g. `\\storage.bunny-lab.io`). There is a quirk in how Windows handles network shares in shortcuts: it does not like a shortcut that points directly at a root UNC path, so we point the shortcut at `explorer.exe` and pass the UNC path as an argument.

### Group Policy Location
``` mermaid
graph LR
A[Create Group Policy] --> B[User Configuration]
B --> C[Preferences]
C --> D[Windows Settings]
D --> E[Shortcuts]
```

### Group Policy Settings
- **Action**: `Update`
- **Name**: `<FriendlyName>`
- **Target Type**: `File System Object`
- **Location**: `Desktop`
- **Target Path**: `C:\Windows\explorer.exe`
- **Arguments**: `\\storage.bunny-lab.io`
- **Start In**: `<Blank>`
- **Shortcut Key**: `<None>`
- **Run**: `Normal Window`
- **Icon File Path**: `%SystemRoot%\System32\SHELL32.dll`
- **Icon Index**: `9`

### Additional Notes
Navigate to the "**Common**" tab in the properties of the shortcut, and check "**Run in logged-on user's security context (user policy option)**".

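To verify the explorer.exe trick locally before rolling it out via GPO, the same shortcut can be sketched in PowerShell using the `WScript.Shell` COM object (the shortcut name `Storage.lnk` is a placeholder; share path and icon index mirror the GPO settings above):

```powershell
# Create the equivalent explorer.exe-based shortcut on the current user's desktop.
$shell = New-Object -ComObject WScript.Shell
$lnk = $shell.CreateShortcut("$env:USERPROFILE\Desktop\Storage.lnk")
$lnk.TargetPath   = "$env:SystemRoot\explorer.exe"
$lnk.Arguments    = "\\storage.bunny-lab.io"
$lnk.IconLocation = "$env:SystemRoot\System32\SHELL32.dll,9"
$lnk.Save()
```
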
@@ -0,0 +1,19 @@
---
tags:
  - Active Directory
  - LDAP
  - Authentication
---

**Purpose**: LDAP settings are used by various services, from privacyIDEA to Nextcloud. This outlines the basic parameters in my homelab that are necessary to make it function.

| **Field** | **Value** | **Description** |
| :--- | :--- | :--- |
| Server Address(es) | `ldap://bunny-dc-01.bunny-lab.io` / `192.168.3.8`, `ldap://bunny-dc-02.bunny-lab.io` / `192.168.3.9` | Domain Controllers |
| Port | `389` | Unencrypted LDAP |
| STARTTLS | `Disabled` | |
| Base DN | `CN=Users,DC=bunny-lab,DC=io` | This is where users are pulled from |
| User / Bind DN | `CN=Nicole Rappe,CN=Users,DC=bunny-lab,DC=io` | This is the domain admin used to connect to LDAP |
| User / Bind Password | `<Password for User / Bind DN>` | Domain credentials for the domain admin account |
| Login Attribute | `(&(&(|(objectclass=person))(|(|(memberof=CN=Domain Users,CN=Users,DC=bunny-lab,DC=io)(primaryGroupID=513))))(samaccountname=%uid))` | LDAP filter used by Nextcloud |
| Login Attribute | `(sAMAccountName=*)(objectCategory=person)` | Used by privacyIDEA |

@@ -0,0 +1,14 @@
---
tags:
  - Active Directory
  - Authentication
---

## Purpose
If you have a device that has lost trust with the domain for some reason and won't let you log in with domain credentials, run the following command as a local administrator on the device to repair the trust relationship:

```powershell
Test-ComputerSecureChannel -Repair -Credential (Get-Credential)
```

If it outputs `True`, go ahead and log out, then try to log in again with domain credentials.

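If the repair above keeps failing, an alternative worth trying is resetting the machine account password directly against a known-good domain controller. The DC name below is an example from this lab; substitute your own:

```powershell
# Reset this computer's machine account password against a specific DC,
# then reboot before testing domain logins again.
Reset-ComputerMachinePassword -Server LAB-DC-01.bunny-lab.io -Credential (Get-Credential)
```
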
52
deployments/services/authentication/authelia.md
Normal file
@@ -0,0 +1,52 @@
---
tags:
  - Authelia
  - Authentication
  - Docker
---

**Purpose**: Authelia is an open-source authentication and authorization server and portal fulfilling the identity and access management (IAM) role of information security by providing multi-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion for common reverse proxies.

```yaml title="docker-compose.yml"
services:
  authelia:
    image: authelia/authelia
    container_name: authelia
    volumes:
      - /mnt/authelia/config:/config
    networks:
      docker_network:
        ipv4_address: 192.168.5.159
    expose:
      - 9091
    restart: unless-stopped
    healthcheck:
      disable: true
    environment:
      - TZ=America/Denver

  redis:
    image: redis:alpine
    container_name: redis
    volumes:
      - /mnt/authelia/redis:/data
    networks:
      docker_network:
        ipv4_address: 192.168.5.158
    expose:
      - 6379
    restart: unless-stopped
    environment:
      - TZ=America/Denver

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
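The container expects an Authelia `configuration.yml` inside `/mnt/authelia/config`. As a starting point, here is a minimal sketch using key names from the Authelia v4 configuration reference; the domain, secrets, file-based user backend, and access-control rule are illustrative assumptions, and exact key names should be checked against the documentation for your Authelia version:

```yaml title="configuration.yml (sketch)"
theme: auto
server:
  host: 0.0.0.0
  port: 9091
log:
  level: info
authentication_backend:
  file:
    path: /config/users_database.yml   # assumed file-based backend
session:
  secret: <RandomSessionSecret>        # placeholder
  domain: bunny-lab.io                 # assumed cookie domain
  redis:
    host: 192.168.5.158                # the redis container above
    port: 6379
storage:
  encryption_key: <RandomEncryptionKey> # placeholder
  local:
    path: /config/db.sqlite3
notifier:
  filesystem:
    filename: /config/notification.txt
access_control:
  default_policy: deny
  rules:
    - domain: "*.bunny-lab.io"          # assumed protected domains
      policy: one_factor
```
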
175
deployments/services/authentication/authentik.md
Normal file
@@ -0,0 +1,175 @@
---
tags:
  - Authentik
  - Authentication
  - Docker
---

!!! bug
    The docker-compose version of this deployment appears bugged and has known issues; deployment via Kubernetes is required for stability and support.

**Purpose**: Authentik is an open-source Identity Provider focused on flexibility and versatility. With authentik, site administrators, application developers, and security engineers have a dependable and secure solution for authentication in almost any type of environment. There are robust recovery actions available for users and applications, including user profile and password management. You can quickly edit, deactivate, or even impersonate a user profile, and set a new password for new users or reset an existing password.

This document is based on the [Official Docker-Compose Documentation](https://goauthentik.io/docs/installation/docker-compose). It is meant for testing / small-scale production deployments.

## Docker Configuration

```yaml title="docker-compose.yml"
---
version: "3.4"

services:
  postgresql:
    image: docker.io/library/postgres:12-alpine
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - /srv/containers/authentik/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${PG_PASS:?database password required}
      POSTGRES_USER: ${PG_USER:-authentik}
      POSTGRES_DB: ${PG_DB:-authentik}
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.2

  redis:
    image: docker.io/library/redis:alpine
    command: --save 60 1 --loglevel warning
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - /srv/containers/authentik/redis:/data
    networks:
      docker_network:
        ipv4_address: 192.168.5.3

  server:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2023.10.7}
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    volumes:
      - /srv/containers/authentik/media:/media
      - /srv/containers/authentik/custom-templates:/templates
    env_file:
      - stack.env
    ports:
      - "${COMPOSE_PORT_HTTP:-9000}:9000"
      - "${COMPOSE_PORT_HTTPS:-9443}:9443"
    depends_on:
      - postgresql
      - redis
    networks:
      docker_network:
        ipv4_address: 192.168.5.4

  worker:
    image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2023.10.7}
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_REDIS__HOST: redis
      AUTHENTIK_POSTGRESQL__HOST: postgresql
      AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
      AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
      AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
    # `user: root` and the docker socket volume are optional.
    # See more for the docker socket integration here:
    # https://goauthentik.io/docs/outposts/integrations/docker
    # Removing `user: root` also prevents the worker from fixing the permissions
    # on the mounted folders, so when removing this make sure the folders have the correct UID/GID
    # (1000:1000 by default)
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /srv/containers/authentik/media:/media
      - /srv/containers/authentik/certs:/certs
      - /srv/containers/authentik/custom-templates:/templates
    env_file:
      - stack.env
    depends_on:
      - postgresql
      - redis
    networks:
      docker_network:
        ipv4_address: 192.168.5.5

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
PG_PASS=<See Below>
AUTHENTIK_SECRET_KEY=<See Below>
AUTHENTIK_BOOTSTRAP_PASSWORD=<SecurePassword>
AUTHENTIK_BOOTSTRAP_TOKEN=<SecureOneTimePassword>
AUTHENTIK_BOOTSTRAP_EMAIL=nicole.rappe@bunny-lab.io

## SMTP Host Emails are sent to
#AUTHENTIK_EMAIL__HOST=localhost
#AUTHENTIK_EMAIL__PORT=25
## Optionally authenticate (don't add quotation marks to your password)
#AUTHENTIK_EMAIL__USERNAME=
#AUTHENTIK_EMAIL__PASSWORD=
## Use StartTLS
#AUTHENTIK_EMAIL__USE_TLS=false
## Use SSL
#AUTHENTIK_EMAIL__USE_SSL=false
#AUTHENTIK_EMAIL__TIMEOUT=10
## Email address authentik will send from, should have a correct @domain
#AUTHENTIK_EMAIL__FROM=authentik@localhost
```

!!! note "Generating Passwords"
    Navigate to the online [PWGen Password Generator](https://pwgen.io/en/) to generate the passwords for `PG_PASS` (40 characters) and `AUTHENTIK_SECRET_KEY` (50 characters).

    Because of a PostgreSQL limitation, only passwords up to 99 characters are supported.
    See https://www.postgresql.org/message-id/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
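If you prefer to stay on the command line, the same secrets can be generated with `openssl`; this sketch sticks to alphanumeric characters, which also sidesteps the "problematic symbols" issue described below the note:

```shell
# Generate candidate values for the .env secrets: 40 characters for PG_PASS,
# 50 for AUTHENTIK_SECRET_KEY (lengths taken from the note above).
raw="$(openssl rand -base64 96 | tr -d '+/=\n')"
PG_PASS="${raw:0:40}"
AUTHENTIK_SECRET_KEY="${raw:0:50}"
printf 'PG_PASS=%s\nAUTHENTIK_SECRET_KEY=%s\n' "$PG_PASS" "$AUTHENTIK_SECRET_KEY"
```

Paste the printed values into the `.env` file shown above.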

!!! warning "Password Symbols"
    You may encounter the Authentik WebUI throwing `Forbidden` errors; this is likely caused by using a password with "problematic" characters in the `PG_PASS` environment variable. Avoid using `,`, `;`, or `:` in the password you generate.

## WebUI Initial Setup

To start the initial setup, navigate to https://192.168.5.4:9443/if/flow/initial-setup/

## Traefik Reverse Proxy Configuration

If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below:

``` yaml
http:
  routers:
    PLACEHOLDER:
      entryPoints:
        - websecure
      tls:
        certResolver: myresolver
      service: PLACEHOLDER
      rule: Host(`PLACEHOLDER.bunny-lab.io`)

  services:
    PLACEHOLDER:
      loadBalancer:
        servers:
          - url: http://PLACEHOLDER:80
        passHostHeader: true
```
238
deployments/services/authentication/keycloak/deployment.md
Normal file
@@ -0,0 +1,238 @@
---
tags:
  - Keycloak
  - Authentication
  - Docker
---

**Purpose**: Keycloak is an open-source identity and access management system for modern applications and services.

- [Original Reference Compose File](https://github.com/JamesTurland/JimsGarage/blob/main/Keycloak/docker-compose.yaml)
- [Original Reference Deployment Video](https://www.youtube.com/watch?v=6ye4lP9EA2Y)
- [Theme Customization Documentation](https://www.baeldung.com/spring-keycloak-custom-themes)

## Keycloak Authentication Sequence
``` mermaid
sequenceDiagram
    participant User
    participant Traefik as Traefik Reverse Proxy
    participant Keycloak
    participant Services

    User->>Traefik: Access service URL
    Traefik->>Keycloak: Redirect to Keycloak for authentication
    User->>Keycloak: Provide credentials for authentication
    Keycloak->>User: Return authorization token/cookie
    User->>Traefik: Send request with authorization token/cookie
    Traefik->>Keycloak: Validate token/cookie
    Keycloak->>Traefik: Token/cookie is valid
    Traefik->>Services: Forward request to services
    Services->>Traefik: Response back to Traefik
    Traefik->>User: Return service response
```
## Docker Configuration

=== "docker-compose.yml"

    ```yaml
    version: '3.7'

    services:
      postgres:
        image: postgres:16.2
        volumes:
          - /srv/containers/keycloak/db:/var/lib/postgresql/data
        environment:
          POSTGRES_DB: ${POSTGRES_DB}
          POSTGRES_USER: ${POSTGRES_USER}
          POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -U keycloak"]
          interval: 10s
          timeout: 5s
          retries: 5
        networks:
          keycloak_internal_network: # Network for internal communication
            ipv4_address: 172.16.238.3 # Static IP for PostgreSQL in internal network

      keycloak:
        image: quay.io/keycloak/keycloak:23.0.6
        command: start
        volumes:
          - /srv/containers/keycloak/themes:/opt/keycloak/themes
          - /srv/containers/keycloak/base-theme:/opt/keycloak/themes/base
        environment:
          TZ: America/Denver # (1)
          KC_PROXY_ADDRESS_FORWARDING: true # (2)
          KC_HOSTNAME_STRICT: false
          KC_HOSTNAME: auth.bunny-lab.io # (3)
          KC_PROXY: edge # (4)
          KC_HTTP_ENABLED: true
          KC_DB: postgres
          KC_DB_USERNAME: ${POSTGRES_USER}
          KC_DB_PASSWORD: ${POSTGRES_PASSWORD}
          KC_DB_URL_HOST: postgres
          KC_DB_URL_PORT: 5432
          KC_DB_URL_DATABASE: ${POSTGRES_DB}
          KC_TRANSACTION_RECOVERY: true
          KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
          KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
          KC_HEALTH_ENABLED: true
          DB_POOL_MAX_SIZE: 20 # (5)
          DB_POOL_MIN_SIZE: 5 # (6)
          DB_POOL_ACQUISITION_TIMEOUT: 30 # (7)
          DB_POOL_IDLE_TIMEOUT: 300 # (8)
          JDBC_PARAMS: "connectTimeout=30"
          KC_HOSTNAME_DEBUG: false # (9)
        ports:
          - 8080:8080
        restart: always
        depends_on:
          postgres:
            condition: service_healthy
        healthcheck:
          test: ["CMD", "curl", "-f", "http://localhost:8080/auth"] # Health check for Keycloak
          interval: 30s # Health check interval
          timeout: 10s # Health check timeout
          retries: 3 # Health check retries
        networks:
          docker_network:
            ipv4_address: 192.168.5.2
          keycloak_internal_network: # Network for internal communication
            ipv4_address: 172.16.238.2 # Static IP for Keycloak in internal network

    networks:
      default:
        external:
          name: docker_network
      docker_network:
        external: true
      keycloak_internal_network: # Internal network for private communication
        driver: bridge # Network driver
        ipam: # IP address management
          config:
            - subnet: 172.16.238.0/24 # Subnet for internal network
    ```

    1. This sets the timezone of the Keycloak server to your timezone. This is not strictly necessary according to the official documentation; I just add it to all of my containers as a baseline environment variable.
    2. This assumes you are running Keycloak behind a reverse proxy; in my particular case, Traefik.
    3. Set this to the FQDN that you expect to reach the Keycloak server at behind your reverse proxy.
    4. This assumes you are running Keycloak behind a reverse proxy; in my particular case, Traefik.
    5. Maximum connections in the database pool.
    6. Minimum idle connections in the database pool.
    7. Timeout for acquiring a connection from the database pool.
    8. Timeout for closing idle connections to the database.
    9. If this is enabled, navigate to https://auth.bunny-lab.io/realms/master/hostname-debug to troubleshoot the deployment if you experience any issues logging into the web portal or admin UI.

=== ".env"

    ```yaml
    POSTGRES_DB=keycloak
    POSTGRES_USER=keycloak
    POSTGRES_PASSWORD=SomethingSecure # (1)
    KEYCLOAK_ADMIN=admin
    KEYCLOAK_ADMIN_PASSWORD=SomethingSuperSecureToLoginAsAdmin # (2)
    ```

    1. This is used internally by Keycloak to interact with the PostgreSQL database server.
    2. This is used to log into the web admin portal at https://auth.bunny-lab.io

## Traefik Reverse Proxy Configuration

If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below:

```yaml
http:
  routers:
    auth:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: auth
      rule: Host(`auth.bunny-lab.io`)
      middlewares:
        - auth-headers

  services:
    auth:
      loadBalancer:
        servers:
          - url: http://192.168.5.2:8080
        passHostHeader: true

  middlewares:
    auth-headers:
      headers:
        sslRedirect: true
        stsSeconds: 31536000
        stsIncludeSubdomains: true
        stsPreload: true
        forceSTSHeader: true
        customRequestHeaders:
          X-Forwarded-Proto: https
          X-Forwarded-Port: "443"
```

# Traefik Keycloak Middleware

At this point, we need to add the official Keycloak plugin to Traefik's main configuration. In this example, it is assumed you are configuring this in Portainer/Docker Compose, not via a static yml/toml file. Assume you followed the [Docker Compose based Traefik Deployment](../../edge/traefik.md).

## Install Keycloak Plugin

If you do not already have the following at the end of the `command:` section of the docker-compose.yml file in Portainer, go ahead and add it:
``` yaml
# Keycloak plugin configuration
- "--experimental.plugins.keycloakopenid.moduleName=github.com/Gwojda/keycloakopenid"
- "--experimental.plugins.keycloakopenid.version=v0.1.34"
```

## Add Middleware to Traefik Dynamic Configuration

You will want to ensure the following exists in the dynamically-loaded config file folder. You can name the file whatever you want, but it will act as a one-for-all middleware for any services you want communicating as a specific OAuth2 `Client ID`. For example, you might want some services to exist in a particular realm of Keycloak, or to have different client rules apply to certain services. If this is the case, you can create multiple middlewares in this single yaml file, each handling a different service / realm. It can get pretty complicated if you want to handle a multi-tenant environment, such as one seen in an enterprise.

```jsx title="keycloak-middleware.yml"
http:
  middlewares:
    auth-bunny-lab-io:
      plugin:
        keycloakopenid:
          KeycloakURL: "https://auth.bunny-lab.io" # <- Also supports complete URL, e.g. https://my-keycloak-url.com/auth
          ClientID: "traefik-reverse-proxy"
          ClientSecret: "https://auth.bunny-lab.io > Clients > traefik-reverse-proxy > Credentials > Client Secret"
          KeycloakRealm: "master"
          Scope: "openid profile email"
          TokenCookieName: "AUTH_TOKEN"
          UseAuthHeader: "false"
          # IgnorePathPrefixes: "/api,/favicon.ico [comma delimited] (optional)"
```

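Before wiring the middleware into services, you can sanity-check that the `KeycloakURL` and realm are correct by fetching the realm's OIDC discovery document, which plugins like this rely on (run from any machine that can reach the lab's Keycloak instance):

```shell
# Should return a JSON document containing "authorization_endpoint",
# "token_endpoint", and "userinfo_endpoint" for the master realm.
curl -fsS https://auth.bunny-lab.io/realms/master/.well-known/openid-configuration
```
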
## Configure Valid Redirect URLs

At this point, within Keycloak, you need to configure the domains that users may be redirected to after authenticating. You can do this with wildcards. Navigate to "**https://auth.bunny-lab.io > Clients > traefik-reverse-proxy > Valid redirect URIs**". A simple example is adding `https://tools.bunny-lab.io/*` to the list of valid redirect URIs. If a site is not in this list, even if it has the middleware configured in Traefik, authentication will fail and the user will not be able to proceed to the website protected behind Keycloak.

## Adding Middleware to Dynamic Traefik Service Config Files

At this point, you are in the final stretch. You just need to add the middleware to the Traefik dynamic config files to ensure that traffic is routed through Keycloak when someone attempts to access a given service. Put the following `middlewares:` section under the relevant router in the config file:

```yaml
      middlewares:
        - auth-bunny-lab-io # Referencing the Keycloak Server
```

A full example config file would look like the following:
```yaml
http:
  routers:
    example:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: example
      rule: Host(`example.bunny-lab.io`)
      middlewares:
        - auth-bunny-lab-io # Referencing the Keycloak Server Traefik Middleware

  services:
    example:
      loadBalancer:
        servers:
          - url: http://192.168.5.16:80
        passHostHeader: true
```

@@ -0,0 +1,10 @@
---
tags:
  - Keycloak
  - OAuth2
  - Authentication
  - Docker
---

You can deploy Keycloak via a [docker-compose stack](../deployment.md) found within the "Containerization" section of the documentation.

@@ -0,0 +1,20 @@
---
tags:
  - Gitea
  - Keycloak
  - OAuth2
  - Authentication
---

### OAuth2 Configuration
These are the variables referenced by the associated service to connect its authentication system to [Keycloak](../deployment.md).

| **Parameter** | **Value** |
| :--- | :--- |
| Authentication Name | `auth-bunny-lab-io` |
| OAuth2 Provider | `OpenID Connect` |
| Client ID (Key) | `git-bunny-lab-io` |
| Client Secret | `https://auth.bunny-lab.io > Clients > git-bunny-lab-io > Credentials > Client Secret` |
| OpenID Connect Auto Discovery URL | `https://auth.bunny-lab.io/realms/master/.well-known/openid-configuration` |
| Skip Local 2FA | Yes |

@@ -0,0 +1,23 @@
---
tags:
  - Portainer
  - Keycloak
  - OAuth2
  - Authentication
---

### OAuth2 Configuration
These are the variables referenced by the associated service to connect its authentication system to [Keycloak](../deployment.md).

| **Parameter** | **Value** |
| :--- | :--- |
| Client ID | `container-node-01` |
| Client Secret | `https://auth.bunny-lab.io > Clients > container-node-01 > Credentials > Client Secret` |
| Authorization URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/auth` |
| Access Token URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/token` |
| Resource URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/userinfo` |
| Redirect URL | `https://192.168.3.19:9443` |
| Logout URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/logout` |
| User Identifier | `email` |
| Scopes | `email openid profile` |

144
deployments/services/authentication/privacyidea.md
Normal file
@@ -0,0 +1,144 @@
---
tags:
  - PrivacyIDEA
  - Authentication
---

**Purpose**: privacyIDEA is a modular authentication system. Using privacyIDEA you can enhance your existing applications like local login, VPN, remote access, SSH connections, access to web sites or web portals with a second factor during authentication.

!!! info "Assumptions"
    It is assumed you have a provisioned virtual machine / physical machine running Ubuntu Server 22.04 to deploy a privacyIDEA server onto.

## AWX Deployment
### Add Server to Inventory and Pull Inventory/Playbook Updates from Gitea
You need to target the new server using a template in AWX (preferably).

- We will assume the FQDN of the server is `auth.bunny-lab.io` or just `auth`
- Be sure to add the host into the [AWX Homelab Inventory File](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini)
- Update / Sync the "**Bunny-Lab**" project in AWX ([Resources > Projects > Bunny-Lab > Sync](https://awx.bunny-lab.io/#/projects/8/details))
- Update / Sync the git.bunny-lab.io Inventory Source ([Resources > Inventories > Homelab > Sources > git.bunny-lab.io > Sync](https://awx.bunny-lab.io/#/inventories/inventory/2/sources/9/details))

### Create a Template
Next, you want to make a template to automate the deployment of privacyIDEA on any servers that are members of the `[privacyideaServers]` inventory host group. This is useful for development / testing, as well as rapid re-deployment / scaling.

- Navigate to **Resources > Templates > Add**

| **Field** | **Value** |
| :--- | :--- |
| Template Name | `Deploy PrivacyIDEA Server` |
| Description | `Ubuntu Server 22.04 Required` |
| Project | `Bunny-Lab` *(Click the Magnifying Lens)* |
| Inventory | `Homelab` |
| Playbook | `playbooks/Linux/Deployments/privacyIDEA.yml` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Credentials | `SSH: (LINUX) nicole` |

**Options**:

- [X] Privilege Escalation: Checked
- [X] Enable Fact Storage: Checked

### Launch the Template
Now we need to launch the template. Assuming all of the above was completed, we can now deploy the playbook/template against the Ubuntu Server via SSH.

- Launch the Template (Rocket Button)
- As the template runs, you will see deployment progress output on the screen

!!! success
    You will know if everything was successful if you see something that looks like the following:
    ``` sh
    ok: [auth]
    TASK [Install wget and software-properties-common] *****************************
    ok: [auth]
    TASK [Download PrivacyIDEA signing key] ****************************************
    changed: [auth]
    TASK [Add signing key for Ubuntu 22.04LTS] *************************************
    changed: [auth]
    TASK [Add PrivacyIDEA repository] **********************************************
    changed: [auth]
    TASK [Update apt cache] ********************************************************
    changed: [auth]
    TASK [Install PrivacyIDEA with Apache2] ****************************************
    changed: [auth]
    PLAY RECAP *********************************************************************
    auth : ok=7 changed=5 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
    ```

## Admin Access to WebUI
### Create a privacyIDEA Administrator Account
You will need to use the CLI on the server in order to create the first administrative account. Run the following command and provide a password for the administrator account.
``` sh
sudo pi-manage admin add nicole.rappe -e nicole.rappe@bunny-lab.io
```

### Log into the WebUI
Assuming you created an `A` record in the DNS server pointing to the IP address of the privacyIDEA server, navigate to https://auth.bunny-lab.io and sign in with your newly-created username and password. (e.g. `nicole.rappe`)

## Connect to Active Directory/LDAP
### Create a LDAP User ID Resolver
This is what will connect privacyIDEA to an LDAP backend to pull down users for authentication in Active Directory. Begin by navigating to "**Config > Users > New LDAP Resolver**"

| **Field** | **Value** |
| :--- | :--- |
| Resolver Name | `BunnyLab-LDAP` |
| Server URI | `ldap://bunny-dc-01.bunny-lab.io, ldap://bunny-dc-02.bunny-lab.io` |
| Pooling Strategy | `ROUND_ROBIN` |
| StartTLS | `<Unchecked>` |
| Base DN | `CN=Users,DC=bunny-lab,DC=io` |
| Scope | `SUBTREE` |
| Bind Type | `Simple` |
| Bind DN | `CN=Nicole Rappe,CN=Users,DC=bunny-lab,DC=io` |
| Bind Password | `<Domain Admin Password for "nicole.rappe">` |

- Click the "**Preset Active Directory**" button.
- Click the "**Test LDAP Resolver**" button.

### Associate User ID Resolver with a Realm
Now we need to create what is called a "**Realm**". Users need to be in realms to have tokens assigned. A user who is not a member of a realm cannot have a token assigned and cannot authenticate. You can combine several different User ID Resolvers (see UserIdResolvers) into a realm. Navigate to "**Config > Realms**"

| **Field** | **Value** |
| :--- | :--- |
| Realm Name | `Bunny-Lab` |
| Resolver(s) | `BunnyLab-LDAP` |

## Configure Push Notifications
### Create Policies
You will need to create several policies. You can make them all individual, or merge the ones with identical scopes together to keep things more organized. To begin, navigate to "**Config > Policies > Create New Policy**"

- **Scope**: `Enrollment` > "**push_firebase_configuration**" = `poll only`
- **Scope**: `Enrollment` > "**push_registration_url**" = `https://auth.bunny-lab.io/ttype/push`
- **Scope**: `Enrollment` > "**push_ssl_verify**" = `0`
- **Scope**: `Authentication` > "**push_allow_polling**" = `allow`

## Enrolling the First Token
!!! bug "Push Notifications Broken"
    Currently, the push notification system (e.g. Cisco Duo-style approvals) is not behaving as expected. For now, you can use other authentication methods for the tokens, such as HOTP (on-demand MFA codes) or TOTP (conventional time-based MFA codes).

### TOTP Token
Navigate to "**Tokens > Enroll Token**"

| **Field** | **Value** |
| :--- | :--- |
| Token Type | `TOTP` |
| Realm | `Bunny-Lab` |
| Username | `[256da6f8-9ddb-4ec5-9409-1a95fea27615] nicole.rappe (Nicole Rappe)` |

Use any MFA authenticator app, like Bitwarden or Google Authenticator, to add the code, and store the secret key somewhere safe.
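The enrolled secret is just shared key material: any RFC 6238 implementation yields the same codes the authenticator app shows. A minimal stdlib sketch, verified against the RFC's published SHA-1 test secret rather than a real token:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second steps since epoch."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in Base32); at t=59s the 6-digit code is 287082.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))
```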

## Install Credential Provider
### Install Credential Provider Subscription File
In order to use the Credential Provider, you have to upload a subscription file. The free tier allows up to 50 devices using the Credential Provider, but you can alter the source code of privacyIDEA to ignore subscriptions and just unlock everything (custom python code planned).

When you want to leverage MFA in an environment using the server, you need to have a domain-joined computer running the Credential Provider, which can be found on the [Official Credential Provider Github Page](https://github.com/privacyidea/privacyidea-credential-provider/releases).

- Download the MSI
- Run the installer on the computer
- Click "**Next**"
- Check the "**Agree**" checkbox, then click "**Next**"
- Hostname: `auth.bunny-lab.io`
- Path: `/path/to/pi`
- [x] Ignore Unknown CA Errors when Using SSL
- [x] Ignore Invalid Common Name Errors when Using SSL
- Click "**Next**" > "**Next**" > "**Next**"
- Click "**Install**" then "**Finish**"

You can now log out and verify that the credential provider is displayed as an option, and can log in using your domain username, domain password, and TOTP that you configured in the privacyIDEA WebUI.
77
deployments/services/automation-tools/activepieces.md
Normal file
@@ -0,0 +1,77 @@
---
tags:
  - Activepieces
  - Automation
  - Docker
---

**Purpose**: Self-hosted open-source no-code business automation tool.

```yaml title="docker-compose.yml"
version: '3.0'
services:
  activepieces:
    image: activepieces/activepieces:0.3.11
    container_name: activepieces
    restart: unless-stopped
    privileged: true
    ports:
      - '8080:80'
    environment:
      - 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
      - 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
      - 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
    env_file: stack.env
    depends_on:
      - postgres
      - redis
    networks:
      docker_network:
        ipv4_address: 192.168.5.62
  postgres:
    image: 'postgres:14.4'
    container_name: postgres
    restart: unless-stopped
    environment:
      - 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
      - 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
      - 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
    volumes:
      - '/srv/containers/activepieces/postgresql:/var/lib/postgresql/data'
    networks:
      docker_network:
        ipv4_address: 192.168.5.61
  redis:
    image: 'redis:7.0.7'
    container_name: redis
    restart: unless-stopped
    volumes:
      - '/srv/containers/activepieces/redis:/data'
    networks:
      docker_network:
        ipv4_address: 192.168.5.60
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
AP_ENGINE_EXECUTABLE_PATH=dist/packages/engine/main.js
AP_ENCRYPTION_KEY=e81f8754faa04acaa7b13caa5d2c6a5a
AP_JWT_SECRET=REDACTED #BE SURE TO SET THIS WITH A VALID JWT SECRET > REFER TO OFFICIAL DOCUMENTATION
AP_ENVIRONMENT=prod
AP_FRONTEND_URL=https://ap.cyberstrawberry.net
AP_NODE_EXECUTABLE_PATH=/usr/local/bin/node
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=192.168.5.61
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=REDACTED #USE A SECURE SHORT PASSWORD > ENSURE ITS NOT TOO LONG FOR POSTGRESQL
AP_REDIS_HOST=redis
AP_REDIS_PORT=6379
AP_SANDBOX_RUN_TIME_SECONDS=600
AP_TELEMETRY_ENABLED=true
```
36
deployments/services/automation-tools/node-red.md
Normal file
@@ -0,0 +1,36 @@
---
tags:
  - Node-RED
  - Automation
  - Docker
---

**Purpose**: Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.

```yaml title="docker-compose.yml"
version: "3.7"

services:
  node-red:
    image: nodered/node-red:latest
    environment:
      - TZ=America/Denver
    ports:
      - "1880:1880"
    networks:
      docker_network:
        ipv4_address: 192.168.5.92
    volumes:
      - /srv/containers/node-red:/data
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
84
deployments/services/automation-tools/semaphore-ui.md
Normal file
@@ -0,0 +1,84 @@
---
tags:
  - Semaphore
  - Automation
  - Docker
---

**Purpose**: User-friendly web interface for executing Ansible playbooks, Terraform, OpenTofu code, and Bash scripts. It is designed to make your automation tasks easier and more enjoyable.

[Website Details](https://semaphoreui.com/)

!!! info "Standalone VM Assumption"
    It is assumed that you are deploying Semaphore UI in its own standalone virtual machine. These instructions don't accommodate MACVLAN docker networking, and assume that Semaphore UI and its PostgreSQL database backend share their IP address with the VM they are running on.

## Docker Configuration
```yaml title="docker-compose.yml"
services:
  semaphore-ui:
    ports:
      - 3000:3000
    image: public.ecr.aws/semaphore/pro/server:v2.13.12
    privileged: true
    environment:
      SEMAPHORE_DB_DIALECT: postgres
      SEMAPHORE_DB_HOST: postgres
      SEMAPHORE_DB_NAME: semaphore
      SEMAPHORE_DB_USER: root
      SEMAPHORE_DB_PASS: SuperSecretDBPassword
      SEMAPHORE_ADMIN: nicole
      SEMAPHORE_ADMIN_PASSWORD: SuperSecretPassword
      SEMAPHORE_ADMIN_NAME: Nicole Rappe
      SEMAPHORE_ADMIN_EMAIL: infrastructure@bunny-lab.io
      SEMAPHORE_EMAIL_SENDER: "noreply@bunny-lab.io"
      SEMAPHORE_EMAIL_HOST: "mail.bunny-lab.io"
      SEMAPHORE_EMAIL_PORT: "587"
      SEMAPHORE_EMAIL_USERNAME: "noreply@bunny-lab.io"
      SEMAPHORE_EMAIL_PASSWORD: "SuperSecretSMTPPassword"
      ANSIBLE_HOST_KEY_CHECKING: "False"
    volumes:
      - /srv/containers/semaphore-ui/data:/var/lib/semaphore
      - /srv/containers/semaphore-ui/config:/etc/semaphore
      - /srv/containers/semaphore-ui/tmp:/tmp/semaphore
    depends_on:
      - postgres

  postgres:
    image: postgres:12-alpine
    ports:
      - 5432:5432
    volumes:
      - /srv/containers/semaphore-ui/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=semaphore
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=SuperSecretDBPassword
      - TZ=America/Denver
    restart: always
```

```yaml title=".env"
N/A - Will be cleaned up later.
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    semaphore:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: semaphore
      rule: Host(`semaphore.bunny-lab.io`)

  services:
    semaphore:
      loadBalancer:
        servers:
          - url: http://192.168.3.51:3000
        passHostHeader: true
```
50
deployments/services/backup/kopia.md
Normal file
@@ -0,0 +1,50 @@
---
tags:
  - Kopia
  - Backup
  - Docker
---

**Purpose**: Cross-platform backup tool for Windows, macOS & Linux with fast, incremental backups, client-side end-to-end encryption, compression and data deduplication. CLI and GUI included.

```yaml title="docker-compose.yml"
version: '3.7'
services:
  kopia:
    image: kopia/kopia:latest
    hostname: kopia-backup
    user: root
    restart: always
    ports:
      - 51515:51515
    environment:
      - KOPIA_PASSWORD=${KOPIA_ENCRYPTION_PASSWORD}
      - TZ=America/Denver
    privileged: true
    volumes:
      - /srv/containers/kopia/config:/app/config
      - /srv/containers/kopia/cache:/app/cache
      - /srv/containers/kopia/logs:/app/logs
      - /srv:/srv
      - /usr/share/zoneinfo:/usr/share/zoneinfo
    entrypoint: ["/bin/kopia", "server", "start", "--insecure", "--timezone=America/Denver", "--address=0.0.0.0:51515", "--override-username=${KOPIA_SERVER_USERNAME}", "--server-username=${KOPIA_SERVER_USERNAME}", "--server-password=${KOPIA_SERVER_PASSWORD}", "--disable-csrf-token-checks"]
    networks:
      docker_network:
        ipv4_address: 192.168.5.14
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```
!!! note "Credentials"
    Your username will be `kopia@kopia-backup` and the password will be the value you set for `--server-password` in the entrypoint section of the compose file. The `KOPIA_PASSWORD` is used by the backup repository, such as Backblaze B2, to encrypt/decrypt the backed-up data, and must be updated in the compose file if the repository is changed / updated.

```yaml title=".env"
KOPIA_ENCRYPTION_PASSWORD=PasswordUsedToEncryptDataOnBackblazeB2
KOPIA_SERVER_PASSWORD=ThisIsUsedToLogIntoKopiaWebUI
KOPIA_SERVER_USERNAME=kopia@kopia-backup
```
52
deployments/services/communication/niltalk.md
Normal file
@@ -0,0 +1,52 @@
---
tags:
  - Niltalk
  - Communication
  - Docker
---

**Purpose**: Niltalk is a web based disposable chat server. It allows users to create password protected disposable, ephemeral chatrooms and invite peers to chat rooms.

```yaml title="docker-compose.yml"
version: "3.7"

services:
  redis:
    image: redis:alpine
    volumes:
      - /srv/niltalk
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.196

  niltalk:
    image: kailashnadh/niltalk:latest
    ports:
      - "9000:9000"
    depends_on:
      - redis
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.197
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.niltalk.rule=Host(`temp.cyberstrawberry.net`)"
      - "traefik.http.routers.niltalk.entrypoints=websecure"
      - "traefik.http.routers.niltalk.tls.certresolver=myresolver"
      - "traefik.http.services.niltalk.loadbalancer.server.port=9000"
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true

volumes:
  niltalk-data:
```

```yaml title=".env"
Not Applicable
```
@@ -0,0 +1,17 @@
---
tags:
  - Rocket.Chat
  - Communication
---

**Purpose**: When someone types a message that includes a ticket number (e.g. `T00000000.0000`) we want to replace that text with an API-friendly URL that leverages Markdown language as well.

From RocketChat, navigate to the "Marketplace" and look for "**Word Replacer**". You can find the application's [GitHub Page](https://github.com/Dimsday/WordReplacer) for additional information / source code review. Proceed to install the application. Once it has been installed, use the following RegEx filter / string in the application's settings:

``` json
[{"search": "T(\\d{8}\\.\\d{4})", "replace": "[$&](https://ww15.autotask.net/Autotask/AutotaskExtend/ExecuteCommand.aspx?Code=OpenTicketDetail&TicketNumber=$&)"}]
```

!!! success
    Now everything should be functional and replacing ticket numbers with valid links that open the ticket in Autotask.
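The app's `$&` back-reference is the whole match; in Python's `re` module the equivalent is `\g<0>`, so the configured rule can be sanity-checked locally before deploying it (the ticket number below is made up):

```python
import re

# Same search/replace pair as the Word Replacer config, with $& mapped to \g<0>.
pattern = r"T(\d{8}\.\d{4})"
replacement = (
    r"[\g<0>](https://ww15.autotask.net/Autotask/AutotaskExtend/"
    r"ExecuteCommand.aspx?Code=OpenTicketDetail&TicketNumber=\g<0>)"
)

message = "Please review T12345678.0001 when you get a chance."
# The ticket number becomes a Markdown link whose target opens the ticket in Autotask.
print(re.sub(pattern, replacement, message))
```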
107
deployments/services/communication/rocketchat/deployment.md
Normal file
@@ -0,0 +1,107 @@
---
tags:
  - Rocket.Chat
  - Communication
  - Docker
---

**Purpose**: Deploy a RocketChat and MongoDB database together.

!!! caution "Folder Pre-Creation"
    You need to make the folders for the Mongo database before launching the container stack for the first time. If you do not make this folder ahead of time, Mongo will give Permission Denied errors for the data directory. You can create the folder as well as adjust permissions with the following commands:
    ``` sh
    mkdir -p /srv/containers/rocketchat/mongodb/data
    chmod -R 777 /srv/containers/rocketchat
    ```

```yaml title="docker-compose.yml"
services:
  rocketchat:
    image: registry.rocket.chat/rocketchat/rocket.chat:${RELEASE:-latest}
    restart: always
#    labels:
#      traefik.enable: "true"
#      traefik.http.routers.rocketchat.rule: Host(`${DOMAIN:-}`)
#      traefik.http.routers.rocketchat.tls: "true"
#      traefik.http.routers.rocketchat.entrypoints: https
#      traefik.http.routers.rocketchat.tls.certresolver: le
    environment:
      MONGO_URL: "${MONGO_URL:-\
        mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
        ${MONGODB_DATABASE:-rocketchat}?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
      MONGO_OPLOG_URL: "${MONGO_OPLOG_URL:\
        -mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
        local?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
      ROOT_URL: ${ROOT_URL:-http://localhost:${HOST_PORT:-3000}}
      PORT: ${PORT:-3000}
      DEPLOY_METHOD: docker
      DEPLOY_PLATFORM: ${DEPLOY_PLATFORM:-}
      REG_TOKEN: ${REG_TOKEN:-}
    depends_on:
      - rc_mongodb
    expose:
      - ${PORT:-3000}
    dns:
      - 1.1.1.1
      - 1.0.0.1
      - 8.8.8.8
      - 8.8.4.4
    ports:
      - "${BIND_IP:-0.0.0.0}:${HOST_PORT:-3000}:${PORT:-3000}"
    networks:
      docker_network:
        ipv4_address: 192.168.5.2

  rc_mongodb:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION:-5.0}
    restart: always
    volumes:
      - /srv/containers/rocketchat/mongodb:/bitnami/mongodb
    environment:
      MONGODB_REPLICA_SET_MODE: primary
      MONGODB_REPLICA_SET_NAME: ${MONGODB_REPLICA_SET_NAME:-rs0}
      MONGODB_PORT_NUMBER: ${MONGODB_PORT_NUMBER:-27017}
      MONGODB_INITIAL_PRIMARY_HOST: ${MONGODB_INITIAL_PRIMARY_HOST:-rc_mongodb}
      MONGODB_INITIAL_PRIMARY_PORT_NUMBER: ${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}
      MONGODB_ADVERTISED_HOSTNAME: ${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}
      MONGODB_ENABLE_JOURNAL: ${MONGODB_ENABLE_JOURNAL:-true}
      ALLOW_EMPTY_PASSWORD: ${ALLOW_EMPTY_PASSWORD:-yes}
    networks:
      docker_network:
        ipv4_address: 192.168.5.3

networks:
  docker_network:
    external: true
```

```yaml title=".env"
TZ=America/Denver
RELEASE=6.3.0
PORT=3000 #Redundant - Can be Removed
MONGODB_VERSION=6.0
MONGODB_INITIAL_PRIMARY_HOST=rc_mongodb #Redundant - Can be Removed
MONGODB_ADVERTISED_HOSTNAME=rc_mongodb #Redundant - Can be Removed
```

## Reverse Proxy Configuration
```yaml title="nginx.conf"
# Rocket.Chat Server
server {
    listen 443 ssl;
    server_name rocketchat.domain.net;
    error_log /var/log/nginx/new_rocketchat_error.log;
    client_max_body_size 500M;
    location / {
        proxy_pass http://192.168.5.2:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Nginx-Proxy true;
        proxy_redirect off;
    }
}
```
16
deployments/services/cpanel/creating-email-server.md
Normal file
@@ -0,0 +1,16 @@
---
tags:
  - cPanel
  - Email
---

## Purpose
This documentation helps you deploy an email server within a cPanel hosted environment.

!!! note "Assumptions"
    It is assumed that the cPanel environment is set up prior to following this documentation, as deploying cPanel itself is not covered in this document.

## Step

### Sub-Step
66
deployments/services/dashboards/dashy.md
Normal file
@@ -0,0 +1,66 @@
---
tags:
  - Dashy
  - Dashboards
  - Docker
---

**Purpose**: A self-hostable personal dashboard built for you. Includes status-checking, widgets, themes, icon packs, a UI editor and tons more!

```yaml title="docker-compose.yml"
version: "3.8"
services:
  dashy:
    container_name: Dashy

    # Pull latest image from DockerHub
    image: lissy93/dashy

    # Set port that web service will be served on. Keep container port as 80
    ports:
      - 4000:80

    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashy.rule=Host(`dashboard.cyberstrawberry.net`)"
      - "traefik.http.routers.dashy.entrypoints=websecure"
      - "traefik.http.routers.dashy.tls.certresolver=myresolver"
      - "traefik.http.services.dashy.loadbalancer.server.port=80"

    # Set any environmental variables
    environment:
      - NODE_ENV=production
      - UID=1000
      - GID=1000

    # Pass in your config file below, by specifying the path on your host machine
    volumes:
      - /srv/Containers/Dashy/conf.yml:/app/public/conf.yml
      - /srv/Containers/Dashy/item-icons:/app/public/item-icons

    # Specify restart policy
    restart: unless-stopped

    # Configure healthchecks
    healthcheck:
      test: ['CMD', 'node', '/app/services/healthcheck']
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s

    # Connect container to Docker_Network
    networks:
      docker_network:
        ipv4_address: 192.168.5.57
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
65
deployments/services/dashboards/homepage-docker.md
Normal file
@@ -0,0 +1,65 @@
---
tags:
  - Docker
  - Homepage
  - Dashboards
---

**Purpose**: A highly customizable homepage (or startpage / application dashboard) with Docker and service API integrations.

```yaml title="docker-compose.yml"
version: '3.8'
services:
  homepage:
    image: ghcr.io/gethomepage/homepage:latest
    container_name: homepage
    volumes:
      - /srv/containers/homepage-docker:/config
      - /srv/containers/homepage-docker/icons:/app/public/icons
    ports:
      - 80:80
      - 443:443
      - 3000:3000
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Denver
      - HOMEPAGE_ALLOWED_HOSTS=servers.bunny-lab.io
    dns:
      - 192.168.3.25
      - 192.168.3.26
    restart: unless-stopped
    extra_hosts:
      - "rancher.bunny-lab.io:192.168.3.21"
    networks:
      docker_network:
        ipv4_address: 192.168.5.44

  dockerproxy:
    image: ghcr.io/tecnativa/docker-socket-proxy:latest
    container_name: dockerproxy
    environment:
      - CONTAINERS=1 # Allow access to viewing containers
      - SERVICES=1 # Allow access to viewing services (necessary when using Docker Swarm)
      - TASKS=1 # Allow access to viewing tasks (necessary when using Docker Swarm)
      - POST=0 # Disallow any POST operations (effectively read-only)
    ports:
      - 127.0.0.1:2375:2375
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro # Mounted as read-only
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.46

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
102
deployments/services/devops/gitea.md
Normal file
@@ -0,0 +1,102 @@
---
tags:
  - Gitea
  - DevOps
  - Docker
---

**Purpose**: Gitea is a painless self-hosted all-in-one software development service; it includes Git hosting, code review, team collaboration, package registry and CI/CD. It is similar to GitHub, Bitbucket and GitLab. Gitea was forked from Gogs originally and almost all the code has been changed.

[Detailed SMTP Configuration Reference](https://docs.gitea.com/administration/config-cheat-sheet)

## Docker Configuration
```yaml title="docker-compose.yml"
version: "3"

services:
  server:
    image: gitea/gitea:latest
    container_name: gitea
    privileged: true
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - TZ=America/Denver
      - GITEA__mailer__ENABLED=true
      - GITEA__mailer__FROM=${GITEA__mailer__FROM:?GITEA__mailer__FROM not set}
      - GITEA__mailer__PROTOCOL=smtp+starttls
      - GITEA__mailer__HOST=${GITEA__mailer__HOST:?GITEA__mailer__HOST not set}
      - GITEA__mailer__IS_TLS_ENABLED=true
      - GITEA__mailer__USER=${GITEA__mailer__USER:-apikey}
      - GITEA__mailer__PASSWD="""${GITEA__mailer__PASSWD:?GITEA__mailer__PASSWD not set}"""
    restart: always
    volumes:
      - /srv/containers/gitea:/data
#      - /etc/timezone:/etc/timezone:ro
#      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"
    networks:
      docker_network:
        ipv4_address: 192.168.5.70
#    labels:
#      - "traefik.enable=true"
#      - "traefik.http.routers.gitea.rule=Host(`git.bunny-lab.io`)"
#      - "traefik.http.routers.gitea.entrypoints=websecure"
#      - "traefik.http.routers.gitea.tls.certresolver=letsencrypt"
#      - "traefik.http.services.gitea.loadbalancer.server.port=3000"
    depends_on:
      - postgres

  postgres:
    image: postgres:12-alpine
    ports:
      - 5432:5432
    volumes:
      - /srv/containers/gitea/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=gitea
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - TZ=America/Denver
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.71

networks:
  docker_network:
    external: true
```

```yaml title=".env"
GITEA__mailer__FROM=noreply@bunny-lab.io
GITEA__mailer__HOST=mail.bunny-lab.io
GITEA__mailer__PASSWD=SecureSMTPPassword
GITEA__mailer__USER=noreply@bunny-lab.io
POSTGRES_PASSWORD=SomethingSuperSecure
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    git:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: git
      rule: Host(`git.bunny-lab.io`)

  services:
    git:
      loadBalancer:
        servers:
          - url: http://192.168.5.70:3000
        passHostHeader: true
```
37
deployments/services/dns/adguard-home.md
Normal file
@@ -0,0 +1,37 @@
|
||||
---
tags:
  - AdGuard Home
  - DNS
  - Docker
---

**Purpose**: AdGuard Home is a network-wide software for blocking ads & tracking. After you set it up, it will cover ALL your home devices, and you don't need any client-side software for that. With the rise of Internet-Of-Things and connected devices, it becomes more and more important to be able to control your whole network.

```yaml title="docker-compose.yml"
version: '3'

services:
  app:
    image: adguard/adguardhome
    ports:
      - 3000:3000
      - 53:53
      - 80:80
    volumes:
      - /srv/containers/adguard_home/workingdir:/opt/adguardhome/work
      - /srv/containers/adguard_home/config:/opt/adguardhome/conf
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.189

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
deployments/services/dns/pi-hole.md (new file, 48 lines)
---
tags:
  - Pi-hole
  - DNS
  - Docker
---

**Purpose**: Pi-hole is a Linux network-level advertisement and Internet tracker blocking application which acts as a DNS sinkhole and optionally a DHCP server, intended for use on a private network.

```yaml title="docker-compose.yml"
version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "80:80/tcp"
    environment:
      TZ: 'America/Denver'
      WEBPASSWORD: 'REDACTED' # USE A SECURE PASSWORD HERE
    # Volumes store your data between container upgrades
    volumes:
      - /srv/containers/pihole/app:/etc/pihole
      - /srv/containers/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.190

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
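After the stack is up, you can verify that Pi-hole is answering DNS queries. This is a quick check of my own, assuming the container IP from the compose file above and that `dig` (from `dnsutils`/`bind-utils`) is installed on the querying machine:

```sh
# Should return one or more A records if Pi-hole is resolving upstream correctly
dig @192.168.5.190 example.com +short
```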
deployments/services/dns/windows-server/best-practices.md (new file, 95 lines)
---
tags:
  - DNS
  - Windows Server
  - Windows
---

## Purpose
This document outlines best practices for DNS server configuration in Active Directory environments, focusing on both performance and security considerations. The goal is to enhance the stability, efficiency, and security of DNS infrastructure within enterprise networks.

## Performance Best Practices
!!! note "Performance Recommendations Overview"
    The following list is organized in order of priority, with the most critical practices listed first.

### Redundancy and High Availability
* **Always have at least two DNS servers, preferably three (1 master, 2 slaves).**
  Ensures redundancy and high availability.

### Internal DNS Usage
* **Domain-joined computers should only use internal DNS servers.**
  This ensures that end-user computers can always resolve internal resources and simplifies troubleshooting and management.
* **Extended Reason:** Using only internal DNS servers increases security and streamlines DNS operations.

### DNS Server Self-Referencing
* **A DNS server should have the `127.0.0.1` loopback as a secondary or tertiary DNS server.**
  Improves the DNS server's own performance and availability.
* **Extended Reason:** Setting the loopback address as the primary DNS can prevent Active Directory from locating replication partners. Use it as secondary or tertiary only.

!!! info "Recent Changes"
    The guidance of using `127.0.0.1` has changed to pointing at the actual full IP address of the server itself. I need to research this more to determine where this updated guideline came from. For example, if the DNS server's IP were `192.168.3.25`, you would set that as the value for the secondary DNS server.

!!! warning "Do **NOT** Use `127.0.0.1` as the Primary DNS Server"
    When you are setting up domain controllers / DNS servers, you do not want to use the DC itself as the primary. This can cause all sorts of unexpected issues with reliability and replication. Always have another DNS server as the primary, THEN set the `127.0.0.1` localhost as secondary or tertiary.

### DNS Server Prioritization
* **Prioritize DNS servers based on proximity to endpoints.**
  Assign the primary DNS server as the local server, and the secondary as a remote branch server, to improve lookup speeds.

### DNS Record Aging and Scavenging
* **Enable DNS record aging/scavenging (preferably 7 days).**
  Keeps DNS recordsets manageable, which improves lookup performance and troubleshooting.

### Use of CNAME Records
* **Use CNAME records for DNS aliasing. Avoid A records for aliases.**
  Updating one host record updates all associated aliases, and PTR records remain properly configured.

## Security Best Practices
!!! note "Security Recommendations Overview"
    The following list is organized in order of priority, with the most critical practices listed first.

### Network Exposure
* **DNS servers should never be publicly accessible from the internet.**
  This prevents attackers from performing reconnaissance or planning attacks using exposed DNS infrastructure.

### Administrative Access
* **Restrict RDP/remote desktop access to DNS servers/domain controllers to a limited list of administrators.**
  Reduces the risk of reconnaissance, reverse shell attacks, and malware installation.

### Use of Slave DNS Servers
* **End-users should be issued only replicated/slave DNS servers.**
  Protects the master/authoritative DNS server from being directly exposed as an attack vector.
* **Extended Reason:** In branch office scenarios, assign the local replicated server as primary, and main office replicated servers as secondary and tertiary, keeping the master server isolated.

### DNS Server Cache Lockdown
* **Lock the DNS server cache to 100% (read-only).**
  Prevents DNS cache poisoning by allowing cache changes only after TTL expiry.

### DNS Logging
* **Enable DNS logging.**
  Facilitates troubleshooting and administration.

### DNS Security Filtering
* **Enable DNS security filtering via a DNS forwarder or a security appliance.**
  Use a secure public DNS (e.g., 9.9.9.9) or a firewall appliance (e.g., Sophos XG Firewall) to add a security layer to all DNS queries.

### Enable DNSSEC
* **Enable DNSSEC (DNS Security Extensions).**
  Protects against DNS record spoofing and related attacks.

### DNS Socket Port Randomization
* **Enable DNS socket port randomization.**
  Prevents network attacks by making DNS queries originate from unpredictable ports.
* **Note:** Enabled by default on Windows Server 2016 and newer.

## Additional Notes
!!! note "Best Practices Analyzer"
    It is recommended to run the official Windows Server DNS Best Practices Analyzer (BPA) on your managed servers for insights specific to your domain environment.

## Sources / References
* [Active Directory Pro: DNS Best Practices](https://activedirectorypro.com/dns-best-practices/)
* [Spiceworks: Best Practice for DNS Servers](https://community.spiceworks.com/topic/1110865-best-practice-for-dns-servers)
* [Microsoft Docs: Creating a DNS Infrastructure Design](https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/creating-a-dns-infrastructure-design)
* [PhoenixNAP: DNS Best Practices for Security](https://phoenixnap.com/kb/dns-best-practices-security)
* [Monitis: Best Practices for Active Directory Integrated DNS](https://www.monitis.com/blog/best-practices-for-active-directory-integrated-dns)
* [DNS Knowledge: Authoritative Name Server](https://www.dnsknowledge.com/whatis/authoritative-name-server/)
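As a concrete example of the aging/scavenging recommendation above, scavenging can be enabled server-wide with the `DnsServer` PowerShell module. This is a sketch, not part of the original guidance; run it on the DNS server itself and verify the intervals fit your environment before applying:

```powershell
# Enable scavenging with 7-day intervals (DnsServer module, Windows Server 2012+)
Set-DnsServerScavenging -ScavengingState $true `
    -ScavengingInterval 7.00:00:00 `
    -RefreshInterval 7.00:00:00 `
    -NoRefreshInterval 7.00:00:00 `
    -ApplyOnAllZones

# Confirm the new settings took effect
Get-DnsServerScavenging
```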
deployments/services/documentation/docusaurus.md (new file, 41 lines)
---
tags:
  - Docusaurus
  - Documentation
  - Docker
---

**Purpose**: An optimized site generator in React. Docusaurus helps you to move fast and write content. Build documentation websites, blogs, marketing pages, and more.

```yaml title="docker-compose.yml"
version: "3"

services:
  docusaurus:
    image: awesometic/docusaurus
    container_name: docusaurus
    environment:
      - TARGET_UID=1000
      - TARGET_GID=1000
      - AUTO_UPDATE=true
      - WEBSITE_NAME=docusaurus
      - TEMPLATE=classic
      - TZ=America/Denver
    restart: always
    volumes:
      - /srv/containers/docusaurus:/docusaurus
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "80:80"
    networks:
      docker_network:
        ipv4_address: 192.168.5.72

networks:
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
deployments/services/documentation/material-mkdocs.md (new file, 188 lines)
---
tags:
  - MkDocs
  - Material MkDocs
  - Documentation
  - Docker
---

**Purpose**: Documentation that simply works. Write your documentation in Markdown and create a professional static site for your Open Source or commercial project in minutes – searchable, customizable, more than 60 languages, for all devices.

## Deploy Material MKDocs
```yaml title="docker-compose.yml"
version: '3'

services:
  mkdocs:
    container_name: mkdocs
    image: squidfunk/mkdocs-material
    restart: always
    environment:
      - TZ=America/Denver
    ports:
      - "8000:8000"
    volumes:
      - /srv/containers/material-mkdocs/docs:/docs
    networks:
      docker_network:
        ipv4_address: 192.168.5.76

networks:
  docker_network:
    external: true
```

```yaml title=".env"
N/A
```

## Config Example
When you deploy MKDocs, you will need to give it a configuration that tells MKDocs how to structure itself. The configuration below is what I used in my deployment. This file is one folder level higher than the `/docs` folder that holds the documentation of the website.
```yaml title="/srv/containers/material-mkdocs/docs/mkdocs.yml"
# Project information
site_name: Bunny Lab
site_url: https://kb.bunny-lab.io
site_author: Nicole Rappe
site_description: >-
  Server, Script, Workflow, and Networking Documentation
repo_url: https://git.bunny-lab.io/bunny-lab/docs
repo_name: bunny-lab/docs
edit_uri: _edit/main/

# Configuration
theme:
  name: material
  custom_dir: material/overrides
  features:
    - announce.dismiss
    - content.action.edit
    # - content.action.view
    - content.code.annotate
    - content.code.copy
    - content.code.select
    - content.tabs.link
    - content.tooltips
    # - header.autohide
    # - navigation.expand
    # - navigation.footer
    - navigation.indexes
    - navigation.instant
    - navigation.instant.prefetch
    - navigation.instant.progress
    - navigation.prune
    - navigation.path
    # - navigation.sections
    - navigation.tabs
    - navigation.tabs.sticky
    - navigation.top
    - navigation.tracking
    - search.highlight
    - search.share
    - search.suggest
    - toc.follow
    # - toc.integrate ## If this is enabled, the TOC will appear on the left navigation menu.
  palette:
    - media: "(prefers-color-scheme)"
      toggle:
        icon: material/link
        name: Switch to light mode
    - media: "(prefers-color-scheme: light)"
      scheme: default
      primary: deep purple
      accent: deep purple
      toggle:
        icon: material/toggle-switch
        name: Switch to dark mode
    - media: "(prefers-color-scheme: dark)"
      scheme: slate
      primary: black
      accent: deep purple
      toggle:
        icon: material/toggle-switch-off
        name: Switch to system preference
  font:
    text: Roboto
    code: Roboto Mono
  favicon: assets/favicon.png
  icon:
    logo: logo

# Plugins
plugins:
  - search:
      separator: '[\s\u200b\-_,:!=\[\]()"`/]+|\.(?!\d)|&[lg]t;|(?!\b)(?=[A-Z][a-z])'
  - minify:
      minify_html: true
  - blog
  - tags

# Hooks
hooks:
  - material/overrides/hooks/shortcodes.py
  - material/overrides/hooks/translations.py

# Additional configuration
extra:
  status:
    new: Recently added
    deprecated: Deprecated

extra_css:
  - stylesheets/extra.css

# Extensions
markdown_extensions:
  - abbr
  - admonition
  - attr_list
  - def_list
  - footnotes
  - md_in_html
  - toc:
      permalink: true
      toc_depth: 3
  - pymdownx.arithmatex:
      generic: true
  - pymdownx.betterem:
      smart_enable: all
  - pymdownx.caret
  - pymdownx.details
  - pymdownx.emoji:
      emoji_generator: !!python/name:material.extensions.emoji.to_svg
      emoji_index: !!python/name:material.extensions.emoji.twemoji
  - pymdownx.highlight:
      anchor_linenums: true
      line_spans: __span
      pygments_lang_class: true
  - pymdownx.inlinehilite
  - pymdownx.keys
  - pymdownx.magiclink:
      normalize_issue_symbols: true
      repo_url_shorthand: true
      user: squidfunk
      repo: mkdocs-material
  - pymdownx.mark
  - pymdownx.smartsymbols
  - pymdownx.snippets:
      auto_append:
        - includes/mkdocs.md
  - pymdownx.superfences:
      custom_fences:
        - name: mermaid
          class: mermaid
          format: !!python/name:pymdownx.superfences.fence_code_format
  - pymdownx.tabbed:
      alternate_style: true
      combine_header_slug: true
      slugify: !!python/object/apply:pymdownx.slugs.slugify
        kwds:
          case: lower
  - pymdownx.tasklist:
      custom_checkbox: true
  - pymdownx.tilde
```

## Cleaning up
When the server is deployed, it will come with a bunch of unnecessary documentation that tells you how to use it. You will want to go into the `/docs` folder and delete everything except `assets/favicon.png`, `schema.json`, and `/schema`. These files are necessary to allow MKDocs to automatically detect and structure the documentation based on the file/folder structure under `/docs`.

## Hotloading Bug Workaround
There is a [known bug](https://github.com/mkdocs/mkdocs/issues/4055) with the most recent version of Material MKDocs (as of writing) that causes it to not hotload changes immediately. This can be fixed by entering a shell in the docker container using `/bin/sh`, then running the following command to downgrade the Python "click" package: `pip install click==8.2.1`. After running the command, restart the container and hotloaded changes should start working again. You will have to run this command every time you re-deploy Material MKDocs until the issue is resolved officially.
deployments/services/documentation/zensical.md (new file, 358 lines)
---
tags:
  - Zensical
  - Documentation
---

## Purpose
After many years of using Material for MKDocs, which saw steady feature and security updates, the project finally reached EOL around the end of 2025. The maintainers pivoted to a new successor called [Zensical](https://zensical.org/docs/get-started/). This document outlines my particular process for setting up a standalone documentation server within a virtual machine.

!!! info "Assumptions"
    It is assumed that you are deploying this server onto `Ubuntu Server 24.04.2 LTS (Minimal)`. It is also assumed that you are running every command as a user with superuser privileges (e.g. `root`).

A GuestVM with a 16GB virtual disk is generally sufficient; expand it over time based on your needs. CPU count and RAM allocation can also be kept extremely low, since this is simply a static website at the end of the day.

## Architectural Overview
It is useful to understand the flow of data and how everything inter-connects, so I have provided a sequence diagram that you can follow below:

``` mermaid
sequenceDiagram
    autonumber
    actor Author as Doc Author
    participant Gitea as Gitea (Repo + Actions)
    participant Runner as Act Runner
    participant Zensical as Zensical Server (watch + build)
    participant NGINX as NGINX (serves static site)

    Author->>Gitea: Push to main
    Gitea-->>Runner: Trigger workflow job
    Runner->>Zensical: rsync docs → /srv/zensical/docs
    Zensical-->>Zensical: Watch detects change
    Zensical->>Zensical: Rebuild site → /srv/zensical/site
    NGINX-->>NGINX: Serve files from /srv/zensical/site
```
## Setup Python Environment
The first thing we need to do is install the necessary Python packages, then install the Zensical software stack inside a virtual environment.

```sh
sudo apt update && sudo apt upgrade -y
sudo apt install -y nano python3 python3.12-venv
mkdir -p /srv/zensical
cd /srv/zensical
python3 -m venv .venv
source .venv/bin/activate
pip install zensical
zensical new .
deactivate

# Remove Placeholder Example Docs
rm -rf /srv/zensical/docs/{*,.*}
```
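To confirm the install succeeded, you can query the package from inside the virtual environment. This is a quick sanity check of my own, not part of the official Zensical docs:

```sh
# Prints the installed Zensical package name, version, and location
/srv/zensical/.venv/bin/pip show zensical
```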
## Zensical
### Configure Settings
Now we want to set some sensible defaults for Zensical, styling it to look as close to Material for MKDocs as possible.

```sh
sudo tee /srv/zensical/zensical.toml > /dev/null <<'EOF'
[project]
site_name = "Bunny Lab"
site_description = "Server, Script, Workflow, and Networking Documentation"
site_author = "Nicole Rappe"
site_url = "https://kb.bunny-lab.io/"
repo_url = "https://git.bunny-lab.io/bunny-lab/docs"
repo_name = "bunny-lab/docs"
edit_uri = "_edit/main/"

[project.theme]
variant = "classic"
language = "en"
features = [
  "announce.dismiss",
  "content.action.edit",
  "content.code.annotate",
  "content.code.copy",
  "content.code.select",
  "content.footnote.tooltips",
  "content.tabs.link",
  "content.tooltips",
  "navigation.indexes",
  "navigation.instant",
  "navigation.instant.prefetch",
  "navigation.instant.progress",
  "navigation.path",
  "navigation.tabs",
  "navigation.tabs.sticky",
  "navigation.top",
  "navigation.tracking",
  "search.highlight",
]

[[project.theme.palette]]
scheme = "default"
toggle.icon = "lucide/sun"
toggle.name = "Switch to dark mode"

[[project.theme.palette]]
scheme = "slate"
toggle.icon = "lucide/moon"
toggle.name = "Switch to light mode"

EOF
```
### Create Watchdog Service
Since NGINX has taken over hosting the webpages, Zensical does not need to be accessible from other servers, only from NGINX itself, which runs on the same host. We only use the `zensical serve` command to keep a watchdog on the documentation folder and automatically rebuild the static site content when changes are detected. These changes are then served by NGINX's webserver.

```sh
# Create Service User, Assign Access, and Lockdown Zensical Data
sudo useradd --system --home /srv/zensical --shell /usr/sbin/nologin zensical || true
sudo chown -R zensical:zensical /srv/zensical
sudo find /srv/zensical -type d -exec chmod 2775 {} \;
sudo find /srv/zensical -type f -exec chmod 664 {} \; # This step likes to take a while, sometimes up to a minute.
sudo chmod 755 /srv/zensical/.venv/bin/* # Ensure Python Environment Executables Function
```

```sh
# Make Zensical Binary Executable for Service
sudo chmod +x /srv/zensical/.venv/bin/zensical

# Add Additional User(s) to Folder for Extra Access (Such as Doc Runners)
sudo usermod -aG zensical nicole

# Create Service
sudo tee /etc/systemd/system/zensical-watchdog.service > /dev/null <<'EOF'
[Unit]
Description=Zensical Document Changes Watchdog (zensical serve)
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=zensical
Group=zensical
WorkingDirectory=/srv/zensical

# Run the venv binary directly; no activation needed
ExecStart=/srv/zensical/.venv/bin/zensical serve

Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
EOF

# Start & Enable Automatic Startup of Service
sudo systemctl daemon-reload
sudo systemctl enable --now zensical-watchdog
```
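With the unit enabled, a quick health check can confirm the watchdog is running and rebuilding. These are standard systemd commands, not anything Zensical-specific:

```sh
systemctl is-active zensical-watchdog              # should print "active"
journalctl -u zensical-watchdog -n 20 --no-pager   # recent build/watch output
```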
### Updating
You will obviously want to keep Zensical up-to-date. You can run the following commands to upgrade it. This is taken and simplified from the original [Upgrade Documentation](https://zensical.org/docs/upgrade/) on Zensical's website.

```sh
# Upgrade Zensical
systemctl stop zensical-watchdog
cd /srv/zensical
source .venv/bin/activate
pip install --upgrade --force-reinstall zensical
deactivate
systemctl start zensical-watchdog
```
## NGINX Webserver
We deploy NGINX as the webserver because reverse proxies like Traefik do not seem to get along with Zensical's built-in server at all. Attempts to resolve this all failed, so pointing NGINX's root directory at the statically-built site data that Zensical generates is the second-best solution I came up with. Traefik can be reasonably expected to behave when interacting with NGINX rather than with Zensical's built-in webserver.

```sh
sudo apt install -y nginx
sudo rm -f /etc/nginx/sites-enabled/default
sudo tee /etc/nginx/sites-available/zensical.conf > /dev/null <<'EOF'
server {
    listen 80;
    listen [::]:80;
    server_name _;

    root /srv/zensical/site;
    index index.html;

    # Primary document handling
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Static asset caching (safe for docs)
    location ~* \.(css|js|png|jpg|jpeg|gif|svg|ico|woff2?)$ {
        expires 7d;
        add_header Cache-Control "public, max-age=604800, immutable";
        try_files $uri =404;
    }

    # Prevent access to source or metadata
    location ~* \.(toml|md)$ {
        deny all;
    }
}
EOF

sudo ln -s /etc/nginx/sites-available/zensical.conf /etc/nginx/sites-enabled/zensical.conf
sudo nginx -t
sudo systemctl reload nginx
sudo systemctl enable nginx
```
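Before putting Traefik in front of it, you can confirm NGINX is serving the built site locally. A quick check from the Zensical host itself:

```sh
# Expect "HTTP/1.1 200 OK" once Zensical has produced a build in /srv/zensical/site
curl -sI http://localhost/ | head -n 3
```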
## Gitea ACT Runner
Now it is time for arguably the most important stage of deployment: setting up a [Gitea Act Runner](https://docs.gitea.com/usage/actions/act-runner). This is how document changes in a Gitea repository propagate automatically into Zensical's `/srv/zensical/docs` folder.

```sh
# Install Dependencies
sudo apt install -y nodejs npm git rsync curl

# Create dedicated Gitea runner service account
sudo useradd --system --create-home --home /var/lib/gitea_runner --shell /usr/sbin/nologin gitearunner || true

# Allow the runner to write documentation changes
sudo usermod -aG zensical gitearunner

# Allow the runner to start and stop the Zensical Watchdog Service
sudo tee /etc/sudoers.d/gitearunner-systemctl > /dev/null <<'EOF'
gitearunner ALL=NOPASSWD: /usr/bin/systemctl start zensical-watchdog.service, /usr/bin/systemctl stop zensical-watchdog.service
EOF
sudo chmod 440 /etc/sudoers.d/gitearunner-systemctl
sudo chown root:root /etc/sudoers.d/gitearunner-systemctl
sudo visudo -c

# Download Newest Gitea Runner Binary (https://gitea.com/gitea/act_runner/releases)
cd /tmp
wget https://gitea.com/gitea/act_runner/releases/download/v0.2.13/act_runner-0.2.13-linux-amd64
sudo install -m 0755 act_runner-0.2.13-linux-amd64 /usr/local/bin/gitea_runner
gitea_runner --version

# Generate Gitea Runner Configuration
sudo mkdir -p /etc/gitea_runner
sudo chown gitearunner:gitearunner /etc/gitea_runner
sudo -u gitearunner gitea_runner generate-config > /etc/gitea_runner/config.yaml
```
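It is worth confirming the sudoers rule above actually lets the runner account control the watchdog service before wiring up the workflow. This is a sanity check of my own; the `-n` flag makes sudo fail immediately instead of prompting for a password:

```sh
# Both commands should succeed silently if the sudoers rule is correct
sudo -u gitearunner sudo -n /usr/bin/systemctl stop zensical-watchdog.service
sudo -u gitearunner sudo -n /usr/bin/systemctl start zensical-watchdog.service
```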
### Configure Registration Token
- Navigate to: "**<Gitea Repo> > Settings > Actions > Runners**"
    - If you don't see this, it needs to be enabled. Navigate to: "**<Gitea Repo> > Settings > "Enable Repository Actions: Enabled" > Update Settings**"
- Click the "**Create New Runner**" button on the top-right of the page and copy the registration token somewhere temporarily.
- Navigate back to the GuestVM running Zensical and run the following commands.

```sh
# Start Token Registration Process
sudo -u gitearunner env HOME=/var/lib/gitea_runner /usr/local/bin/gitea_runner register --config /etc/gitea_runner/config.yaml

# Gitea Instance URL: https://git.bunny-lab.io
# Gitea Runner Token: <Gitea-Runner-Token>
# Runner Name: zensical-docs-runner

# Move Runner Config to Correct Location & Configure Permissions
sudo mv /tmp/.runner /var/lib/gitea_runner/.runner
sudo chown gitearunner:gitearunner /var/lib/gitea_runner/.runner
sudo chmod 600 /var/lib/gitea_runner/.runner
```
### Create Service
Now we need to configure the Gitea runner to start automatically via a service, just like the Zensical Watchdog service.

```sh
# Create Gitea Runner Service
sudo tee /etc/systemd/system/gitea-runner.service > /dev/null <<'EOF'
[Unit]
Description=Gitea Actions Runner (gitea_runner)
After=network-online.target
Wants=network-online.target

[Service]
Environment=HOME=/var/lib/gitea_runner
User=gitearunner
Group=gitearunner
WorkingDirectory=/var/lib/gitea_runner
ExecStart=/usr/local/bin/gitea_runner daemon --config /etc/gitea_runner/config.yaml
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
EOF

# Remove Container-Based Configurations to Force Runner to Run in Host Mode
sudo sed -i \
  '/^[[:space:]]*labels:/,/^[[:space:]]*cache:/{
  /^[[:space:]]*labels:/c\  labels:\n    - "zensical-host:host"
  /^[[:space:]]*cache:/!d
  }' \
  /etc/gitea_runner/config.yaml

# Enable and Start the Service
sudo systemctl daemon-reload
sudo systemctl enable --now gitea-runner.service
```
### Repository Workflow
Place the following file into your documentation repository at the given location; it enables the runner to execute whenever changes are pushed to the repository.

```yaml title=".gitea/workflows/automatic-deployment.yml"
name: Automatic Documentation Deployment

on:
  push:
    branches: [ main ]

jobs:
  zensical_deploy:
    name: Sync Docs to https://kb.bunny-lab.io
    runs-on: zensical-host

    steps:
      - name: Checkout Repository
        uses: actions/checkout@v3

      - name: Stop Zensical Service
        run: sudo /usr/bin/systemctl stop zensical-watchdog.service

      - name: Sync repository into /srv/zensical/docs
        run: |
          rsync -rlD --delete \
            --exclude='.git/' \
            --exclude='.gitea/' \
            --exclude='assets/' \
            --exclude='schema/' \
            --exclude='stylesheets/' \
            --exclude='schema.json' \
            --chmod=D2775,F664 \
            . /srv/zensical/docs/

      - name: Start Zensical Service
        run: sudo /usr/bin/systemctl start zensical-watchdog.service

      - name: Notify via NTFY
        if: always()
        run: |
          curl -d "https://kb.bunny-lab.io - Zensical job status: ${{ job.status }}" https://ntfy.bunny-lab.io/gitea-runners
```
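The `rsync -rlD --delete` step above is what keeps `/srv/zensical/docs` an exact mirror of the repository (minus the excluded paths): `-rlD` copies directories, symlinks, and special files, `--delete` removes anything no longer in the repo, and `--exclude` keeps non-doc paths out. Its behavior can be demonstrated safely on throwaway directories:

```sh
# Build a fake source tree and a stale destination
src=$(mktemp -d); dst=$(mktemp -d)
echo "kept" > "$src/page.md"
mkdir -p "$src/.git" && echo "ref" > "$src/.git/HEAD"
echo "stale" > "$dst/old.md"

# Same flags as the workflow: mirror the tree, delete extras, never copy .git/
rsync -rlD --delete --exclude='.git/' "$src"/ "$dst"/

ls "$dst"   # page.md only: old.md was deleted, .git/ was never copied
```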
## Traefik Reverse Proxy
It is assumed that you use a [Traefik](../edge/traefik.md) reverse proxy and are configured to use [dynamic configuration files](../edge/traefik.md#dynamic-configuration-files). Add the file below to expose the Zensical service to the rest of the world.

```yaml title="kb.bunny-lab.io.yml"
http:
  routers:
    kb:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: kb
      rule: Host(`kb.bunny-lab.io`)

  services:
    kb:
      loadBalancer:
        servers:
          - url: http://192.168.3.8:80
        passHostHeader: true
```
deployments/services/edge/nginx.md (new file, 41 lines)
---
tags:
  - Nginx
  - Reverse Proxy
  - Docker
---

**Purpose**: NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more.

```yaml title="docker-compose.yml"
---
version: "2.1"
services:
  nginx:
    image: lscr.io/linuxserver/nginx:latest
    container_name: nginx
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Denver
    volumes:
      - /srv/containers/nginx-portfolio-website:/config
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.12

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
**New file:** `deployments/services/edge/traefik.md` (+198 lines)
|
||||
---
|
||||
tags:
|
||||
- Traefik
|
||||
- Reverse Proxy
|
||||
- Docker
|
||||
---
|
||||
|
||||
**Purpose**: A traefik reverse proxy is a server that sits between your network firewall and servers hosting various web services on your private network(s). Traefik automatically handles the creation of Let's Encrypt SSL certificates if you have a domain registrar that is supported by Traefik such as CloudFlare; by leveraging API keys, Traefik can automatically make the DNS records for Let's Encrypt's DNS "challenges" whenever you add a service behind the Traefik reverse proxy.
|
||||
|
||||
!!! info "Assumptions"
|
||||
This Traefik deployment document assumes you have deployed [Portainer](../../platforms/containerization/docker/deploy-portainer.md) to either a Rocky Linux or Ubuntu Server environment. Other docker-compose friendly operating systems have not been tested, so your mileage may vary regarding successful deployment ouside of these two operating systems.
|
||||
|
||||
Portainer makes deploying and updating Traefik so much easier than via a CLI. It's also much more intuitive.
|
||||
|
||||
## Deployment on Portainer
|
||||
- Login to Portainer (e.g. https://<portainer-ip>:9443)
|
||||
- Navigate to "**Environment (usually "local") > Stacks > "+ Add Stack"**"
|
||||
- Enter the following `docker-compose.yml` and `.env` environment variables into the webpage
|
||||
- When you have finished making adjustments to the environment variables (and docker-compose data if needed), click the "**Deploy the Stack**" button
|
||||
|
||||
!!! warning "Get DNS Registrar API Keys BEFORE DEPLOYMENT"
|
||||
When you are deploying this container, you have to be mindful to set valid data for the environment variables related to the DNS registrar. In this example, it is CloudFlare.
|
||||
|
||||
```jsx title="Environment Variables"
|
||||
CF_API_EMAIL=nicole.rappe@bunny-lab.io
|
||||
CF_API_KEY=REDACTED-CLOUDFLARE-DOMAIN-API-KEY
|
||||
```
|
||||
|
||||
If these are not set, Traefik will still work, but SSL certificates will not be issued from Let's Encrypt, and SSL traffic will be terminated using a self-signed Traefik-based certificate, which is only good for local non-production testing.
|
||||
|
||||
If you plan on using HTTP-based challenges, you will need to make the following changes in the docker-compose.yml data:
|
||||
|
||||
- Un-comment `"--certificatesresolvers.myresolver.acme.tlschallenge=true"`
|
||||
- Comment-out `"--certificatesresolvers.letsencrypt.acme.dnschallenge=true"`
|
||||
- Comment-out `"--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"`
|
||||
- Lastly, you need to ensure that port 80 on your firewall is opened to the IP of the Traefik Reverse Proxy to allow Let's Encrypt to do TLS-based challenges.
|
||||
|
||||
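For reference, this is a sketch of what the Let's Encrypt portion of the `command:` list would look like after making those three changes. Note that the un-commented line references a resolver named `myresolver`, exactly as it appears in the compose file, so any router `certResolver` references may need to be updated to match:

```yaml
# LetsEncrypt (sketch of the TLS/HTTP-challenge variant)
- "--certificatesresolvers.myresolver.acme.tlschallenge=true"
### - "--certificatesresolvers.letsencrypt.acme.dnschallenge=true"
### - "--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"
- "--certificatesresolvers.letsencrypt.acme.email=${LETSENCRYPT_EMAIL}"
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
```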
### Stack Deployment Information
```yaml title="docker-compose.yml"
version: "3.3"
services:
  traefik:
    image: "traefik:latest"
    restart: always
    container_name: "traefik-bunny-lab-io"
    cap_add:
      - NET_ADMIN
    entrypoint:
      - /bin/sh
      - -lc
      - |
        ip link set dev eth0 mtu 1500
        exec traefik "$@"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    labels:
      - "traefik.http.routers.traefik-proxy.middlewares=my-buffering"
      - "traefik.http.middlewares.my-buffering.buffering.maxRequestBodyBytes=104857600"
      - "traefik.http.middlewares.my-buffering.buffering.maxResponseBodyBytes=104857600"
      - "traefik.http.middlewares.my-buffering.buffering.memRequestBodyBytes=2097152"
      - "traefik.http.middlewares.my-buffering.buffering.memResponseBodyBytes=2097152"
      - "traefik.http.middlewares.my-buffering.buffering.retryExpression=IsNetworkError() && Attempts() <= 2"
    command:
      # Globals
      - "--log.level=ERROR"
      - "--api.insecure=true"
      - "--global.sendAnonymousUsage=false"
      # Docker
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      # File Provider
      - "--providers.file.directory=/etc/traefik/dynamic"
      - "--providers.file.watch=true"

      # Entrypoints
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure" # Redirect HTTP to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https" # Redirect HTTP to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.permanent=true" # Redirect HTTP to HTTPS
      # LetsEncrypt
      ### - "--certificatesresolvers.myresolver.acme.tlschallenge=true" # Enable if doing Port 80 Let's Encrypt Challenges
      - "--certificatesresolvers.letsencrypt.acme.dnschallenge=true" # Disable if doing Port 80 Let's Encrypt Challenges
      - "--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare" # Disable if doing Port 80 Let's Encrypt Challenges
      - "--certificatesresolvers.letsencrypt.acme.email=${LETSENCRYPT_EMAIL}"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"

      # Keycloak plugin configuration
      - "--experimental.plugins.keycloakopenid.moduleName=github.com/Gwojda/keycloakopenid" # Optional if you have Keycloak Deployed
      - "--experimental.plugins.keycloakopenid.version=v0.1.34" # Optional if you have Keycloak Deployed

    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/srv/containers/traefik/letsencrypt:/letsencrypt"
      - "/srv/containers/traefik/config:/etc/traefik"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/srv/containers/traefik/cloudflare:/cloudflare"
    networks:
      docker_network:
        ipv4_address: 192.168.5.29
    environment:
      - CF_API_EMAIL=${CF_API_EMAIL}
      - CF_API_KEY=${CF_API_KEY}
    extra_hosts:
      - "mail.bunny-lab.io:192.168.3.13" # Just an Example

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
CF_API_EMAIL=nicole.rappe@bunny-lab.io
CF_API_KEY=REDACTED-CLOUDFLARE-DOMAIN-API-KEY
LETSENCRYPT_EMAIL=nicole.rappe@bunny-lab.io
```

!!! info
    There is a distinction between the "Global API Key" and a "Token API Key". The main difference is that the "Global API Key" can change anything in Cloudflare, while the "Token API Key" can only change what it was granted delegated permissions to.

## Adding Servers / Services to Traefik
Traefik can be configured in two ways: container labels and dynamic configuration files. We will go over each below.

### Docker-Compose Labels
The first method is to read "labels" from the docker-compose file of any container deployed on the same host as Traefik. These labels typically look something like the following:

```yaml title="docker-compose.yml"
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.gitea.rule=Host(`example.bunny-lab.io`)"
  - "traefik.http.routers.gitea.entrypoints=websecure"
  - "traefik.http.routers.gitea.tls.certresolver=letsencrypt"
  - "traefik.http.services.gitea.loadbalancer.server.port=8080"
```

By adding these labels to any container on the same server as Traefik, Traefik will automatically "adopt" the service, route traffic to it, and assign it an SSL certificate from Let's Encrypt. The only downside, as mentioned above, is that if you are dealing with something that is not a container, or a container on a different physical server, you need to rely on dynamic configuration files such as the one seen below.

### Dynamic Configuration Files
Dynamic configuration files exist inside the Traefik container at `/etc/traefik/dynamic`. Any `*.yml` files located in this folder will be hot-loaded any time they are modified. This makes it convenient to leverage something such as the [Git Repo Updater](../../platforms/containerization/docker/custom-containers/git-repo-updater.md) container alongside [Gitea](../devops/gitea.md) to push configuration files from Git into the production environment, saving yourself headaches and enabling version control over every service behind the reverse proxy.

An example of a dynamic configuration file would look something like this:

```yaml title="/etc/traefik/dynamic/example.bunny-lab.io.yml"
http:
  routers:
    example:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      http2:
      service: example
      rule: Host(`example.bunny-lab.io`)

  services:
    example:
      loadBalancer:
        servers:
          - url: http://192.168.5.70:8080
        passHostHeader: true
```

You can see the similarities with the labeling method: you designate the proxy name (`example.bunny-lab.io`), the internal IP address (`192.168.5.70`), the protocol used to request data from the service internally (`http`), and the port the service is listening on internally (`8080`). If you want to know more about parameters such as `passHostHeader: true`, you will need to do some of your own research.

!!! example "Service Naming Considerations"
    When you deploy a service behind a Traefik-based reverse proxy, the names of the `router` and `service` have to be unique. The router can have the same name as the service, such as `example`, but I recommend naming the services to match the FQDN of the service itself.

    For example, `remote.bunny-lab.io` would be written as `remote-bunny-lab-io`. This keeps things organized and easy to read if you are troubleshooting in Traefik's logs or webUI. The complete configuration file would look like the example below:

    ```yaml title="/etc/traefik/dynamic/remote.bunny-lab.io.yml"
    http:
      routers:
        remote-bunny-lab-io:
          entryPoints:
            - websecure
          tls:
            certResolver: letsencrypt
          http2:
          service: remote-bunny-lab-io
          rule: Host(`remote.bunny-lab.io`)

      services:
        remote-bunny-lab-io:
          loadBalancer:
            servers:
              - url: http://192.168.5.70:8080
            passHostHeader: true
    ```
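The FQDN-to-service-name convention is a simple character substitution. As a quick illustrative sketch (not part of any tooling in this document), it amounts to:

```shell
fqdn="remote.bunny-lab.io"
service_name="${fqdn//./-}"   # replace every dot with a dash
echo "$service_name"          # prints: remote-bunny-lab-io
```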

**New file:** `deployments/services/email/iredmail/deploy-iredmail.md` (+279 lines)
---
tags:
  - IredMail
  - Email
---

**Purpose**:
Self-hosted open-source email server that can be set up in minutes, and is enterprise-grade if upgraded with an iRedAdmin-Pro license.

!!! note "Assumptions"
    It is assumed you are running at least Rocky Linux 9.3. While you can use CentOS Stream, Alma, Debian, Ubuntu, FreeBSD, and OpenBSD, the more enterprise-level sections of my homelab are built on Rocky Linux.

!!! warning "iRedMail / iRedAdmin-Pro Version Mismatching"
    This document assumes you are deploying iRedMail 1.6.8, which at the time of writing coincided with iRedAdmin-Pro 5.5. If you are not careful, you may end up with mismatched versions down the road as iRedMail keeps getting updates. Because you have to pay for a license to get access to the original iRedAdmin-Pro-SQL repository data, if a newer version of iRedAdmin-Pro comes out after February 2025, this document may not account for it, leaving you on an older version of the software. This is unavoidable if you want to avoid paying $500/year to license this software.

## Overview
The instructions below are specific to my homelab environment, but can be easily adapted to your needs. This guide also assumes you want to operate a PostgreSQL-based iRedMail installation. You can follow along with the official documentation on [Installation](https://docs.iredmail.org/install.iredmail.on.rhel.html) as well as [DNS Record Configuration](https://docs.iredmail.org/setup.dns.html) if you want more detailed explanations throughout the installation process.

## Configure FQDN
Ensure the FQDN of the server is correctly set in `/etc/hostname`. The `/etc/hosts` file will be automatically injected with the FQDN from `/etc/hostname` by a script further down, so don't worry about editing it.

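To make the hosts-file injection concrete, here is a small sketch (using a hypothetical FQDN set directly in the script, whereas the real command further down reads it from `/etc/hostname`) of the line that gets prepended to `/etc/hosts`:

```shell
fqdn="mail.bunny-lab.io"   # hypothetical value; the real script uses $(cat /etc/hostname)
short="${fqdn%%.*}"        # strip everything after the first dot
echo "127.0.0.1 $fqdn $short localhost localhost.localdomain localhost4 localhost4.localdomain4"
```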
## Disable SELinux
iRedMail doesn't work with SELinux, so please disable it by setting the value below in its config file `/etc/selinux/config`. After a server reboot, SELinux will be completely disabled.
``` sh
# Elevate to Root User
sudo su

# Disable SELinux
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config # (1)
setenforce 0
```

1. If you prefer to let SELinux print warnings instead of enforcing, you can set this value instead: `SELINUX=permissive`

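If you want to sanity-check what that `sed` expression does before pointing it at the real config file, you can rehearse it against a throwaway copy (a sketch, not part of the original procedure):

```shell
tmp="$(mktemp)"
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$tmp"   # same substitution as above
result="$(grep '^SELINUX=' "$tmp")"               # SELINUXTYPE= is untouched
echo "$result"                                    # prints: SELINUX=disabled
rm -f "$tmp"
```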

## iRedMail Installation

### Set Domain and iRedMail Version
Start by connecting to the server / VM via SSH, then set the silent deployment variables below.
``` sh
# Define some deployment variables.
VERSION="1.6.8" # (1)
MAIL_DOMAIN="bunny-lab.io" # (2)
```

1. This is the version of iRedMail you are deploying. You can find the newest version on the [iRedMail Download Page](https://www.iredmail.org/download.html).
2. This is the domain suffix that appears after mailbox names. e.g. `first.last@bunny-lab.io` would use a domain value of `bunny-lab.io`.

You will then bootstrap a silent, unattended installation of iRedMail. (I've automated as much as I can to make this as turn-key as possible.) Just copy/paste the whole block below into your terminal and hit ENTER.

!!! danger "Storage Space Requirements"
    You absolutely need to ensure that `/var/vmail` has a lot of space, at least 16GB. This is where all of your emails / mailboxes / a lot of settings will live. If possible, create a second physical/virtual disk specifically for the `/var` partition, or at minimum for `/var/vmail`, so you can expand it over time if necessary. LVM-based provisioning is recommended but not required.

### Install iRedMail
``` sh
# Automatically configure the /etc/hosts file to point to the server listed in "/etc/hostname".
sudo sed -i "1i 127.0.0.1 $(cat /etc/hostname) $(cut -d '.' -f 1 /etc/hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4" /etc/hosts

# Check for Updates in the Package Manager
yum update -y

# Install Extra Packages for Enterprise Linux
dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm

# Download the iRedMail binaries and extract them
cd /root
curl https://codeload.github.com/iredmail/iRedMail/tar.gz/refs/tags/$VERSION -o iRedMail-$VERSION.tar.gz
tar zxf iRedMail-$VERSION.tar.gz

# Create the unattended config file for silent deployment. This will automatically generate random 32-character passwords for all of the databases.
(echo "export STORAGE_BASE_DIR='/var/vmail'"; echo "export WEB_SERVER='NGINX'"; echo "export BACKEND_ORIG='PGSQL'"; echo "export BACKEND='PGSQL'"; for var in VMAIL_DB_BIND_PASSWD VMAIL_DB_ADMIN_PASSWD MLMMJADMIN_API_AUTH_TOKEN NETDATA_DB_PASSWD AMAVISD_DB_PASSWD IREDADMIN_DB_PASSWD RCM_DB_PASSWD SOGO_DB_PASSWD SOGO_SIEVE_MASTER_PASSWD IREDAPD_DB_PASSWD FAIL2BAN_DB_PASSWD PGSQL_ROOT_PASSWD DOMAIN_ADMIN_PASSWD_PLAIN; do echo "export $var='$(openssl rand -base64 48 | tr -d '+/=' | head -c 32)'"; done; echo "export FIRST_DOMAIN='$MAIL_DOMAIN'"; echo "export USE_IREDADMIN='YES'"; echo "export USE_SOGO='YES'"; echo "export USE_NETDATA='YES'"; echo "export USE_FAIL2BAN='YES'"; echo "#EOF") > /root/iRedMail-$VERSION/config

# Make Config Read-Only
chmod 400 /root/iRedMail-$VERSION/config

# Change into the iRedMail Directory
cd /root/iRedMail-$VERSION

# Deploy iRedMail via the Install Script
AUTO_USE_EXISTING_CONFIG_FILE=y \
AUTO_INSTALL_WITHOUT_CONFIRM=y \
AUTO_CLEANUP_REMOVE_SENDMAIL=y \
AUTO_CLEANUP_REPLACE_FIREWALL_RULES=y \
AUTO_CLEANUP_RESTART_FIREWALL=n \
AUTO_CLEANUP_REPLACE_MYSQL_CONFIG=y \
bash iRedMail.sh
```

When the installation is complete, take note of any output it gives you for future reference. Then reboot the server to finalize the installation.
```
reboot
```

!!! warning "Automatically-Generated Postmaster Password"
    When you deploy iRedMail, it will give you a username and password for the postmaster account. If you forget to document this, you can log back into the server via SSH and find the credentials at `/root/iRedMail-$VERSION/iRedMail.tips`. This file is critical and also contains passwords and DNS information, such as the DKIM record.
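The config generator above derives every database password from the same `openssl` pipeline. You can verify in isolation that it always yields a 32-character password (a sketch, assuming `openssl` is installed):

```shell
# Same pipeline the config one-liner uses for each password variable
pw="$(openssl rand -base64 48 | tr -d '+/=' | head -c 32)"
echo "${#pw}"   # prints: 32
```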

## Networking Configuration

### Nested Reverse Proxy Configuration
In my homelab environment, I run a Traefik reverse proxy in front of everything, including the NGINX reverse proxy that iRedMail creates. In this scenario, I have to make some adjustments to the reverse proxy's dynamic configuration to ensure it steps aside and lets the NGINX reverse proxy inside of iRedMail handle everything, including its own SSL termination with Let's Encrypt.

``` yaml
tcp:
  routers:
    mail-tcp-router:
      rule: "HostSNI(`mail.bunny-lab.io`)"
      entryPoints: ["websecure"]
      service: mail-nginx-service
      tls:
        passthrough: true

  services:
    mail-nginx-service:
      loadBalancer:
        servers:
          - address: "192.168.3.13:443"
```

### Let's Encrypt ACME Certbot
At this point, we want to set up automatic Let's Encrypt SSL termination inside of iRedMail so we don't have to touch this manually in the future.

#### Generate SSL Certificate
=== "Debian/Ubuntu"

    ``` sh
    # Download the Certbot
    sudo apt update
    sudo apt install -y certbot
    sudo certbot certonly --webroot -w /var/www/html -d mail.bunny-lab.io

    # Set up Symbolic Links (Where iRedMail Expects Them)
    sudo mv /etc/ssl/certs/iRedMail.crt{,.bak}
    sudo mv /etc/ssl/private/iRedMail.key{,.bak}
    sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/fullchain.pem /etc/ssl/certs/iRedMail.crt
    sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/privkey.pem /etc/ssl/private/iRedMail.key

    # Restart iRedMail Services
    sudo systemctl restart postfix dovecot nginx
    ```

=== "CentOS/Rocky/AlmaLinux"

    ``` sh
    # Download the Certbot
    sudo yum install -y epel-release
    sudo yum install -y certbot
    sudo certbot certonly --webroot -w /var/www/html -d mail.bunny-lab.io

    # Set up Symbolic Links (Where iRedMail Expects Them)
    sudo mv /etc/pki/tls/certs/iRedMail.crt{,.bak}
    sudo mv /etc/pki/tls/private/iRedMail.key{,.bak}
    sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/fullchain.pem /etc/pki/tls/certs/iRedMail.crt
    sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/privkey.pem /etc/pki/tls/private/iRedMail.key

    # Restart iRedMail Services
    sudo systemctl restart postfix dovecot nginx
    ```

#### Configure Automatic Renewal
To automate the renewal process, set up a cron job that runs the `certbot renew` command regularly. This command will renew certificates that are due to expire within 30 days.

Open the crontab editor with the following command:
```
sudo crontab -e
```

Add the following line to run the renewal process daily at 3:01 AM:
```
1 3 * * * certbot renew --post-hook 'systemctl restart postfix dovecot nginx'
```

### DNS Records
Now you need to set up DNS records in Cloudflare (or whichever DNS registrar you have configured) so that the mail server can be found and validated.

| **Type** | **Name** | **Content** | **Proxy Status** | **TTL** |
| :--- | :--- | :--- | :--- | :--- |
| MX | bunny-lab.io | mail.bunny-lab.io | DNS Only | Auto |
| TXT | bunny-lab.io | "v=spf1 a:mail.bunny-lab.io ~all" | DNS Only | Auto |
| TXT | dkim._domainkey | v=DKIM1; p=`IREDMAIL-DKIM-VALUE` | DNS Only | 1 Hour |
| TXT | _dmarc | "v=DMARC1; p=reject; pct=100; rua=mailto:postmaster@bunny-lab.io; ruf=mailto:postmaster@bunny-lab.io" | DNS Only | Auto |

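For readers more familiar with zone-file syntax, the same records can be sketched as follows. The MX priority of 10 is an assumption (the table above does not specify one), and the DKIM public key remains the placeholder value from the table:

```
bunny-lab.io.    IN MX  10 mail.bunny-lab.io.
bunny-lab.io.    IN TXT "v=spf1 a:mail.bunny-lab.io ~all"
dkim._domainkey  IN TXT "v=DKIM1; p=IREDMAIL-DKIM-VALUE"
_dmarc           IN TXT "v=DMARC1; p=reject; pct=100; rua=mailto:postmaster@bunny-lab.io; ruf=mailto:postmaster@bunny-lab.io"
```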
### Port Forwarding
Lastly, we need to set up port forwarding to open the ports necessary for the server to send and receive email.

| **Protocol** | **Port** | **Destination Server** | **Description** |
| :--- | :--- | :--- | :--- |
| TCP | 995 | 192.168.3.13 | POP3 service: port 110 over STARTTLS |
| TCP | 993 | 192.168.3.13 | IMAP service: port 143 over STARTTLS |
| TCP | 587 | 192.168.3.13 | SMTP service: port 587 over STARTTLS |
| TCP | 25 | 192.168.3.13 | SMTP (Email Server-to-Server Communication) |

## Install iRedAdmin-Pro
When it comes to adding extra features, start by copying the data from this [Bunny Lab repository](https://git.bunny-lab.io/bunny-lab/iRedAdmin-Pro-SQL) into the folder below by running these commands first:

``` sh
# Stop the iRedMail Services
sudo systemctl stop postfix dovecot nginx

# Grant Temporary Access to the iRedAdmin Files and Folders
sudo chown nicole:nicole -R /opt/www/iRedAdmin-2.5

# Copy the data from the repository mentioned above into this folder, merging identical folders and files. Feel free to use your preferred file transfer tool / method (e.g. MobaXTerm / WinSCP).

# Change permissions back to normal
sudo chown iredadmin:iredadmin -R /opt/www/iRedAdmin-2.5

# Reboot the Server
sudo reboot
```

### Activate iRedAdmin-Pro
At this point, if you want to use iRedAdmin-Pro, you either have a valid license key, or you adjust the Python function responsible for checking license keys to bypass the check, effectively forcing iRedAdmin to be activated. In this instance, we will be forcing activation by adjusting this function, seen below.

There is someone else who outlined all of these changes, plus additional (aesthetic) ones like removing the renew-license button from the license page, but the core functionality is seen below. If you want to see the original repository this was inspired by, it can be found [Here](https://github.com/marcus-alicia/iRedAdmin-Pro-SQL).

``` sh
# Take ownership of the python script
sudo chown nicole:nicole /opt/www/iRedAdmin-2.5/libs/sysinfo.py
```

=== "Original Activation Function"

    ```python title="/opt/www/iRedAdmin-2.5/libs/sysinfo.py"
    def get_license_info():
        if len(__id__) != 32:
            web.conn_iredadmin.delete("updatelog")
            session.kill()
            raise web.seeother("/login?msg=INVALID_PRODUCT_ID")

        params = {
            "v": __version__,
            "f": __id__,
            "lang": settings.default_language,
            "host": get_hostname(),
            "backend": settings.backend,
            "webmaster": settings.webmaster,
            "mac": ",".join(get_all_mac_addresses()),
        }

        url = "https://lic.iredmail.org/check_version/licenseinfo/" + __id__ + ".json"
        url += "?" + urllib.parse.urlencode(params)

        try:
            urlopen = __get_proxied_urlopen()
            _json = urlopen(url).read()
            lic_info = json.loads(_json)
            lic_info["id"] = __id__
            return True, lic_info
        except Exception as e:
            return False, web.urlquote(e)
    ```

=== "Bypassed Activation Function"

    ```python title="/opt/www/iRedAdmin-2.5/libs/sysinfo.py"
    def get_license_info():
        return True, {
            "status": "active",
            "product": "iRedAdmin-Pro-SQL",
            "licensekey": "forcefully-open-source",
            "upgradetutorials": "https://docs.iredmail.org/iredadmin-pro.releases.html",
            "purchased": "Never",
            "contacts": "nicole.rappe@bunny-lab.io",
            "latestversion": "5.5",
            "expired": "Never",
            "releasenotes": "https://docs.iredmail.org/iredadmin-pro.releases.html",
            "id": __id__
        }
    ```

``` sh
# Revert ownership of the python script
sudo chown iredadmin:iredadmin /opt/www/iRedAdmin-2.5/libs/sysinfo.py

# Reboot the Server (To be safe)
sudo reboot
```

!!! success "Successful Activation"
    At this point, if you navigate to the [iRedAdmin-Pro License Page](https://mail.bunny-lab.io/iredadmin/system/license) you should see that the server is activated successfully.
**New file** (+42 lines)
---
tags:
  - IredMail
  - SMTP
  - Email
---

## Purpose
You may need to troubleshoot the outgoing SMTP email queue / active sessions in iRedMail for one reason or another. This can provide useful insight into why emails are not being delivered, etc.

### Overall Queue Backlog
You can run the following commands to get the complete backlog of all email senders in the queue. This can be useful for tracking the queue's "drainage" over time.

```sh
# List the total number of queued messages
postqueue -p | egrep -c '^[A-F0-9]'

# Itemize and count the queued messages per sender (the sender is the last field of each queue entry's first line)
postqueue -p | awk '/^[A-F0-9]/ {print $NF}' | sort | uniq -c | sort -rn
```

!!! example "Example Output"
    - 10392 problematic@bunny-lab.io
    - 301 prettybad@bunny-lab.io
    - 39 infrastructure@bunny-lab.io
    - 20 nicole.rappe@bunny-lab.io

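To see how the per-sender counting pipeline behaves, you can feed it a small hand-written sample of `postqueue -p`-style output (the queue IDs and message details here are hypothetical):

```shell
sample='A1B2C3D4E5*    1024 Mon Feb  3 10:00:00  problematic@bunny-lab.io
F6E5D4C3B2     2048 Mon Feb  3 10:01:00  nicole.rappe@bunny-lab.io
A9B8C7D6E5     4096 Mon Feb  3 10:02:00  problematic@bunny-lab.io'

# Same pipeline as above, minus the live postqueue call
echo "$sample" | awk '/^[A-F0-9]/ {print $NF}' | sort | uniq -c | sort -rn
# Expected counts: 2 problematic@bunny-lab.io, 1 nicole.rappe@bunny-lab.io
```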
### Investigating Individual Emails
You can run `postqueue -p` to list all queued messages, then run `postcat -vq <message-ID>` to read detailed information on any specific queued SMTP message:

```sh
postqueue -p
postcat -vq 4dgHry5LZnzH6x08 # (1)
```

1. Example message ID gathered from the previous `postqueue -p` command.

### Attempt to Gracefully Reload Postfix
You may want to try to unstick things by gracefully reloading the postfix service via `postfix reload`. This ensures we don't drop / disconnect / lose the active outgoing SMTP sessions in the queue. It may not resolve the underlying issue, but it's worth noting down.

### Reattempt Delivery
You can attempt redelivery by running `postqueue -f` to try to free up the queue. Postfix will immediately re-attempt delivery of all queued messages instead of waiting for their scheduled retry times. It does not override remote rejections or fix underlying delivery errors; it only accelerates the next delivery attempt.
**New file** (+9 lines)
---
tags:
  - IredMail
  - Email
---

| Server | Port(s) | Security | Auth Method | Username |
|:------------------|:----------------------------------------------|:----------|:----------------|:-------------------|
| `mail.bunny-lab.io` | **IMAP:** 143 `Internal`, 993 `External`<br>**SMTP:** 587, 25 `Fallback` | STARTTLS | Normal Password | user@bunny-lab.io |
**New file:** `deployments/services/email/mailcow.md` (+238 lines)
|
||||
---
|
||||
tags:
|
||||
- Mailcow
|
||||
- Email
|
||||
- Docker
|
||||
---
|
||||
|
||||
!!! warning "Under Construction"
|
||||
The deployment of Mailcow is mostly correct here, but with the exception that we dont point DNS records to the reverse proxy (internally) because it's currently not functioning as expected. So for the time being, you would open all of the ports up to the Mailcow server's internal IP address via port forwarding on your firewall.
|
||||
|
||||
## Purpose
|
||||
The purpose of this document is to illustrate how to deploy Mailcow in a dockerized format.
|
||||
|
||||
!!! note "Assumptions"
|
||||
It is assumed that you are deploying Mailcow into an existing Ubuntu Server environment. If you are using a different operating system, refer to the [official documentation](https://docs.mailcow.email/getstarted/install/).
|
||||
|
||||
### Setting Up Docker
|
||||
Go ahead and set up docker and docker-compose with the following commands:
|
||||
```bash
|
||||
sudo su # (1)
|
||||
curl -sSL https://get.docker.com/ | CHANNEL=stable sh # (2)
|
||||
apt install docker-compose-plugin # (3)
|
||||
systemctl enable --now docker # (4)
|
||||
```
|
||||
|
||||
1. Make yourself root.
|
||||
2. Install `Docker`
|
||||
3. Install `Docker-Compose`
|
||||
4. Make docker run automatically when the server is booted.
|
||||
|
||||
### Download and Deploy Mailcow
|
||||
Run the following commands to pull down the mailcow deployment files and install them with docker. Go get a cup of coffee as the `docker compose pull` command may take a while to run.
|
||||
|
||||
!!! note "Potential `Docker Compose` Issues"
|
||||
If you run the `docker-compose pull` command and it fails for some reason, change the command to `docker compose pull` instead. This is just the difference between the plugin version of compose versus the standalone version. Both will have the same result.
|
||||
|
||||
```bash
|
||||
cd /opt
|
||||
git clone https://github.com/mailcow/mailcow-dockerized
|
||||
cd mailcow-dockerized
|
||||
./generate_config.sh # (1)
|
||||
docker-compose pull # (2)
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
1. Generate a configuration file. Use a FQDN (`host.domain.tld`) as hostname when asked.
|
||||
2. If you get an error about the ports of the `nginx-mailcow` service in the `docker-compose.yml` stack, change the ports for that service as follows:
|
||||
```yaml
|
||||
ports:
|
||||
- "${HTTPS_BIND:-0.0.0.0}:${HTTPS_PORT:-443}:${HTTPS_PORT:-443}"
|
||||
- "${HTTP_BIND:-0.0.0.0}:${HTTP_PORT:-80}:${HTTP_PORT:-80}"
|
||||
```
|
||||
|
||||
### Reverse-Proxy Configuration
For the purposes of this document, it will be assumed that you are deploying Mailcow behind Traefik. You can use the following dynamic configuration file to achieve this:

```yaml title="/srv/containers/traefik/config/dynamic/mail.bunny-lab.io.yml"
# ========================
# Mailcow / Traefik Config
# ========================

# ----------------------------------------------------
# HTTP Section - Handles Mailcow web UI via Traefik
# ----------------------------------------------------
http:
  routers:
    mailcow-server:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: mailcow-http
      rule: Host(`mail.bunny-lab.io`)

  services:
    mailcow-http:
      loadBalancer:
        servers:
          - url: http://192.168.3.61:80
        passHostHeader: true

# ----------------------------------------------------
# TCP Section - Handles all mail protocols
# ----------------------------------------------------
tcp:
  routers:
    # -----------
    # SMTP Router (Port 25, non-TLS, all mail deliveries)
    # -----------
    mailcow-smtp:
      entryPoints:
        - smtp
      rule: "" # Empty rule = accept ALL connections on port 25 (plain SMTP)
      service: mailcow-smtp

    # -----------
    # SMTPS Router (Port 465, implicit TLS)
    # -----------
    mailcow-smtps:
      entryPoints:
        - smtps
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: mailcow-smtps
      tls:
        passthrough: true

    # -----------
    # Submission Router (Port 587, implicit TLS or STARTTLS)
    # -----------
    mailcow-submission:
      entryPoints:
        - submission
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: mailcow-submission
      tls:
        passthrough: true

    # -----------
    # IMAPS Router (Port 993, implicit TLS)
    # -----------
    mailcow-imaps:
      entryPoints:
        - imaps
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: mailcow-imaps
      tls:
        passthrough: true

    # -----------
    # IMAP Router (Port 143, can be STARTTLS)
    # -----------
    mailcow-imap:
      entryPoints:
        - imap
      rule: "HostSNI(`*`)" # Match any SNI (for TLS connections)
      service: mailcow-imap
      tls:
        passthrough: true

    # -----------
    # POP3S Router (Port 995, implicit TLS)
    # -----------
    mailcow-pop3s:
      entryPoints:
        - pop3s
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: mailcow-pop3s
      tls:
        passthrough: true

    # -----------
    # Dovecot Managesieve (Port 4190, implicit TLS)
    # -----------
    mailcow-dovecot-managesieve:
      entryPoints:
        - dovecot-managesieve # Matches the :4190 entrypoint defined in the Traefik static configuration
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: dovecot-managesieve
      tls:
        passthrough: true

  services:
    # SMTP (Port 25, plain)
    mailcow-smtp:
      loadBalancer:
        servers:
          - address: "192.168.3.61:25"

    # SMTPS (Port 465, implicit TLS)
    mailcow-smtps:
      loadBalancer:
        servers:
          - address: "192.168.3.61:465"

    # Submission (Port 587, implicit TLS or STARTTLS)
    mailcow-submission:
      loadBalancer:
        servers:
          - address: "192.168.3.61:587"

    # IMAPS (Port 993, implicit TLS)
    mailcow-imaps:
      loadBalancer:
        servers:
          - address: "192.168.3.61:993"

    # IMAP (Port 143, plain/STARTTLS)
    mailcow-imap:
      loadBalancer:
        servers:
          - address: "192.168.3.61:143"

    # POP3S (Port 995, implicit TLS)
    mailcow-pop3s:
      loadBalancer:
        servers:
          - address: "192.168.3.61:995"

    # Dovecot Managesieve (Port 4190, implicit TLS)
    dovecot-managesieve:
      loadBalancer:
        servers:
          - address: "192.168.3.61:4190"
```
### Traefik-Specific Configuration
You will need to add some extra entrypoints and ports to Traefik itself so it can listen for this new traffic.

```yaml
# Entrypoints
- "--entrypoints.smtp.address=:25"
- "--entrypoints.smtps.address=:465"
- "--entrypoints.submission.address=:587"
- "--entrypoints.imap.address=:143"
- "--entrypoints.imaps.address=:993"
- "--entrypoints.pop3.address=:110"
- "--entrypoints.pop3s.address=:995"
- "--entrypoints.dovecot-managesieve.address=:4190"

# Ports
- "25:25"
- "110:110"
- "143:143"
- "465:465"
- "587:587"
- "993:993"
- "995:995"
- "4190:4190"
```
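After restarting Traefik with the new entrypoints, a quick loop over the mail ports confirms it is actually listening on all of them. This is a minimal sketch under stated assumptions: `192.168.3.61` is the host address used elsewhere in this document, so substitute your own Traefik host, and each port is probed with bash's built-in `/dev/tcp` redirection rather than a dedicated tool.

```shell
#!/usr/bin/env bash
# Hedged sketch: probe each mail-related port on the Traefik host.
# The address is an assumption carried over from the examples above;
# replace it with your own reverse-proxy host.
host="192.168.3.61"
checked=0
for port in 25 110 143 465 587 993 995 4190; do
  # timeout(1) kills the probe after one second if the port is filtered
  if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "port ${port}: open"
  else
    echo "port ${port}: closed or filtered"
  fi
  checked=$((checked + 1))
done
echo "probed ${checked} ports"
```

Ports reported as closed here usually mean the entrypoint or port mapping above was missed, or a firewall in front of Traefik is filtering the traffic.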
### Login to Mailcow
At this point, the Mailcow server has been deployed, so you can log into it.

- **Administrators**: `https://${MAILCOW_HOSTNAME}/admin` (Username: `admin` | Password: `moohoo`)
- **Regular Mailbox Users**: `https://${MAILCOW_HOSTNAME}` (*FQDN only*)

### Mail-Client Considerations
If you have MFA enabled within Mailcow, be sure to generate an app password for your mail client. (MFA is non-functional in Roundcube/SOGo; you set it up via Mailcow itself.) You can manage app passwords via the Mailcow configuration page at https://mail.bunny-lab.io/user, under the "**App Passwords**" tab.

### Running Updates
If you want to run updates, just SSH into the server, navigate to `/opt/mailcow-dockerized`, and run `./update.sh`. I recommend avoiding the IPv6 implementation section. Be patient, and the upgrade will be fully automated.
---
tags:
  - Microsoft Exchange
  - Lets Encrypt
  - Email
---

**Purpose**: If you want to set up automatic Let's Encrypt SSL certificates on a Microsoft Exchange server, you have to go through a few steps to install the win-acme bot and configure it to automatically renew certificates.

!!! note "ACME Bot Provisioning Considerations"
    This document assumes you want a fully-automated one-liner command for configuring the ACME bot. It is also completely valid to step through the bot interactively to configure the SSL certificate, the IIS server, etc., and it will automatically create a Scheduled Task to renew on its own. The whole process is very straightforward, with most answers being the default option.

### Download the Win-ACME Bot:

* Log into the on-premise Exchange Server via Datto RMM
* Navigate to: [https://www.win-acme.com/](https://www.win-acme.com/)
* On the top-right of the website, you will see a "**Download**" button with the most recent version of the win-acme bot
* Extract the contents of the ZIP file to "**C:\\Program Files (x86)\\Lets Encrypt**"
    * Make the "**Lets Encrypt**" folder if it does not already exist

### Configure `settings_default.json`:

* The next step is to modify the win-acme configuration so the bot can export the private key data that Exchange needs
* Using a text editor, open the "**settings_default.json**" file
* Look for the setting called "**PrivateKeyExportable**" and change the value from "**false**" to "**true**"
* Save and close the file

### Download and Install the SSL Certificate:

* Open an administrative Command Line (DO NOT USE POWERSHELL)
* Navigate to the Let's Encrypt bot directory: `CD "C:\Program Files (x86)\Lets Encrypt"`
* Invoke the bot to automatically download the certificate and install it into the IIS server that hosts Exchange
* Be sure to change the placeholder subdomains to match the domain of the actual Exchange Server
    * (e.g. "**mail.example.org**" | "**autodiscover.example.org**")
```
wacs.exe --target manual --host mail.example.org,autodiscover.example.org --certificatestore My --acl-fullcontrol "network service,administrators" --installation iis,script --installationsiteid 1 --script "./Scripts/ImportExchange.ps1" --scriptparameters "'{CertThumbprint}' 'IIS,SMTP,IMAP' 1 '{CacheFile}' '{CachePassword}' '{CertFriendlyName}'" --verbose
```

* While the command is running, it will ask for an email address for alerts and abuse notifications; just use "**infrastructure@bunny-lab.io**"
* If you run into any unexpected errors that result in anything other than exiting with a status "0", consult with Nicole Rappe to proceed
* Check that the domain of the Exchange Server is reachable on port 80, as Let's Encrypt uses this to issue the certificate.
    * Searching the external IP of the server on [Shodan](https://www.shodan.io/) will reveal all open ports.
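The port-80 reachability mentioned above can also be sanity-checked from any Linux/macOS machine with bash before invoking `wacs.exe`. This is a minimal sketch: `mail.example.org` is the same placeholder hostname used in the command above, so substitute your real Exchange FQDN.

```shell
#!/usr/bin/env bash
# Hedged sketch: verify the Exchange host answers on port 80, which
# Let's Encrypt's HTTP-01 validation requires. The hostname below is
# the placeholder from the wacs.exe example above; replace it.
host="mail.example.org"
if timeout 5 bash -c "exec 3<>/dev/tcp/${host}/80" 2>/dev/null; then
  result="reachable"
else
  result="unreachable"
fi
echo "Port 80 on ${host}: ${result}"
```

If the result is `unreachable`, check the firewall/NAT rules in front of the Exchange server before re-running the bot.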
### Troubleshooting:
If you find that any of the services such as [https://mail.example.org/ecp](https://mail.example.org/ecp), [https://autodiscover.example.org](https://autodiscover.example.org), or [https://mail.example.org/owa](https://mail.example.org/owa) do not let you log in, proceed with the steps below to correct the certificate binding in IIS Manager:

* Open "**Server Manager**" > Tools > "**Internet Information Services (IIS) Manager**"
* Expand the "**Connections**" server tree on the left-hand side of the IIS Manager
* Expand the "**Sites**" folder
* Click on "**Default Web Site**"
* On the right-hand Actions menu, click on "**Bindings...**"
* A table will appear with different endpoints on the Exchange server > What you are looking for is an entry that looks like the following:
    * **Type**: https
    * **Host Name**: autodiscover.example.org
    * **Port**: 443
* Double-click on the row, or click it once then click the "**Edit**" button to open the settings for that endpoint
* Under "**SSL Certificate**" > Make sure the certificate name matches the following format: "**\[Manual\] autodiscover.example.org @ YYYY/MM/DD**"
    * If it does not match the above, use the dropdown menu to correct it and click the "**OK**" button
* Then repeat for the following entry:
    * **Type**: https
    * **Host Name**: mail.example.org
    * **Port**: 443
    * Repeat the steps seen above, except this time for "**mail.example.org**"
* Click on "**Exchange Back End**"
* On the right-hand Actions menu, click on "**Bindings...**"
* A table will appear with different endpoints on the Exchange server > What you are looking for is an entry that looks like the following:
    * **Type**: https
    * **Host Name**: *(blank)*
    * **Port**: 444
    * Repeat the steps seen above, ensuring that the "**\[Manual\] autodiscover.example.org @ YYYY/MM/DD**" certificate is selected and applied
    * Click the "**OK**" button
* On the left-hand menu under "**Connections**" in IIS Manager, click on the server name itself
    * (e.g. "**EXAMPLE-EXCHANGE (DOMAIN\\dptadmin)**")
* On the right-hand "**Actions**" menu > Under "Manage Server" > Select "Restart"
* Wait for the IIS server to restart itself, then try accessing the Exchange webpages that were exhibiting login issues

### Additional Documentation:

* [https://www.alitajran.com/install-free-lets-encrypt-certificate-in-exchange-server/](https://www.alitajran.com/install-free-lets-encrypt-certificate-in-exchange-server/)
---
tags:
  - Microsoft Exchange
  - Email
---

**Purpose**:
This document is meant to be an abstract guide on what to do before installing Cumulative Updates on Microsoft Exchange Server. There are a few considerations that need to be made ahead of time. This list was put together through sheer brute force while troubleshooting an update issue for a server on 12/16/2024.

!!! abstract "Overview"
    We are looking to add an administrative user to several domain security groups, adjust local security policy to put them into the "Manage Auditing and Security Logs" security policy, and run the `setup.exe` included on the Cumulative Update ISO images within a `SeSecurityPrivilege` operational context.

## Domain Group Membership
You have to be logged in with a domain user that possesses the following domain group memberships; if these group memberships are missing, the upgrade process will fail.

- `Enterprise Admins`
- `Schema Admins`
- `Organization Management`

## User Rights Management
You have to be part of the "**Local Policies > User Rights Assignment > Manage Auditing and Security Logs**" security policy. You can set this via Group Policy Management or locally on the Exchange server via `secpol.msc`. This is required for the "Monitoring Tools" portion of the upgrade.

It's recommended to reboot the server after making this change to be triple-sure that everything was applied correctly.

!!! note "Security Policy Only Required on Exchange Server"
    While the `Enterprise Admins`, `Schema Admins`, and `Organization Management` security group memberships are required on a domain-wide level, the security policy membership for "Manage Auditing and Security Logs" mentioned above is only required on the Exchange Server itself. You can create a group policy that only targets the Exchange Server to add this, or you can make your user a domain-wide member of "Manage Auditing and Security Logs" (optional). If no existing policies are in place affecting the Exchange server, you can just use `secpol.msc` to manually add your user to this security policy for the duration of the upgrade/update (or leave it there for future updates).

## Running Updater within `SeSecurityPrivilege` Operational Context
At this point, you would technically be ready to invoke `setup.exe` on the Cumulative Update ISO image to launch the upgrade process, but we are going to go the extra mile and manually enable the `SeSecurityPrivilege` within a PowerShell session, then use that same session to invoke `setup.exe` so the updater runs within that context. This is not strictly necessary, but something I added as a "hail mary" to make the upgrade successful.

### Open Powershell ISE
The first thing we are going to do is open the PowerShell ISE so we can copy/paste the following PowerShell script; this script will explicitly enable `SeSecurityPrivilege` for anyone who holds that privilege within the PowerShell session.

!!! warning "Run Powershell ISE as Administrator"
    In order for everything to work correctly, the ISE has to be launched by right-clicking "Run as Administrator"; otherwise it is guaranteed that the updater application will fail at some point.

```powershell title="SeSecurityPrivilege Enablement Script"
# Create a Privilege Adjustment
$definition = @"
using System;
using System.Runtime.InteropServices;

public class Privilege
{
    const int SE_PRIVILEGE_ENABLED = 0x00000002;
    const int TOKEN_ADJUST_PRIVILEGES = 0x0020;
    const int TOKEN_QUERY = 0x0008;
    const string SE_SECURITY_NAME = "SeSecurityPrivilege";

    [DllImport("advapi32.dll", SetLastError = true)]
    public static extern bool OpenProcessToken(IntPtr ProcessHandle, int DesiredAccess, out IntPtr TokenHandle);

    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    public static extern bool LookupPrivilegeValue(string lpSystemName, string lpName, out long lpLuid);

    [DllImport("advapi32.dll", SetLastError = true)]
    public static extern bool AdjustTokenPrivileges(IntPtr TokenHandle, bool DisableAllPrivileges, ref TOKEN_PRIVILEGES NewState, int BufferLength, IntPtr PreviousState, IntPtr ReturnLength);

    [StructLayout(LayoutKind.Sequential, Pack = 1)]
    public struct TOKEN_PRIVILEGES
    {
        public int PrivilegeCount;
        public long Luid;
        public int Attributes;
    }

    public static bool EnablePrivilege()
    {
        IntPtr tokenHandle;
        TOKEN_PRIVILEGES tokenPrivileges;

        if (!OpenProcessToken(System.Diagnostics.Process.GetCurrentProcess().Handle, TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, out tokenHandle))
            return false;

        if (!LookupPrivilegeValue(null, SE_SECURITY_NAME, out tokenPrivileges.Luid))
            return false;

        tokenPrivileges.PrivilegeCount = 1;
        tokenPrivileges.Attributes = SE_PRIVILEGE_ENABLED;

        return AdjustTokenPrivileges(tokenHandle, false, ref tokenPrivileges, 0, IntPtr.Zero, IntPtr.Zero);
    }
}
"@

Add-Type -TypeDefinition $definition
[Privilege]::EnablePrivilege()
```

### Validate Privilege
At this point, we now have a PowerShell session operating with the `SeSecurityPrivilege` privilege enabled. We want to confirm this by running the following commands:

```powershell
whoami # (1)
whoami /priv # (2)
```

1. Output will appear similar to "bunny-lab\nicole.rappe", prefixing the username of the person running the command with the domain they belong to.
2. Reference the privilege table seen below to validate that the output of this command matches it.

| **Privilege Name** | **Description** | **State** |
| :--- | :--- | :--- |
| `SeSecurityPrivilege` | Manage auditing and security log | Enabled |

### Execute `setup.exe`
Finally, at the last stage, we mount the Cumulative Update ISO file (e.g. a 6.6GB ISO image) and, using the PowerShell session we made above, navigate to the drive it is mounted on and invoke `setup.exe`, causing it to run under the `SeSecurityPrivilege` operational state.

```powershell
D: # (1)
.\Setup.EXE /m:upgrade /IAcceptExchangeServersLicenseTerms_DiagnosticDataON # (2)
```

1. Replace this drive letter with whatever letter was assigned when you mounted the ISO image for the Exchange updater.
2. This launches the Exchange updater application. Be patient and give it time to launch. At this point, you should be good to proceed with the update. You can optionally change the argument to `/IAcceptExchangeServersLicenseTerms_DiagnosticDataOFF` if you do not need diagnostic data.

!!! success "Ready to Proceed with Updating Exchange"
    At this point, after doing the three sections above, you should be safe to do the upgrade/update of Microsoft Exchange Server. The installer will run its own readiness checks for other aspects such as IIS Rewrite Modules, and will give you a link to download / upgrade them separately, then give you the option to "**Retry**" after installing the module so the installer can re-check and proceed.

## Post-Update Health Checks
After the update(s) are installed, you will likely want to check that things are healthy and operational: validating mail flow in both directions, running `Get-Queue` to check for backlogged emails, etc.

!!! note "Under Construction"
    This section is under construction and will be built out based on feedback from others.
---
tags:
  - DFS
  - Windows Server
  - Windows
  - File Services
---

## Purpose
If you want data available from a single, consistent UNC path while hosting it on multiple file servers, use **DFS Namespaces (DFSN)**. A namespace presents a *virtual* folder tree (for example, `\\bunny-lab.io\Projects`) whose folders point to one or more **folder targets** (actual SMB shares on your servers).
**DFS Replication (DFSR)** is a *separate* feature you configure to keep the contents of those targets in sync.

This document walks through creating a domain-based DFS namespace and enabling DFS Replication for two servers.

!!! info "Assumptions"
    You have two Windows Server machines (e.g., `LAB-FPS-01` and `LAB-FPS-02`) running an edition that supports DFS (Standard or Datacenter), both activated, domain-joined, and using static IPs.

### Installing Server Roles
Install the roles on **both servers**:

* **Server Manager → Manage → Add Roles and Features**
* Click **Next** to **Server Roles**
* Expand **File and Storage Services**
* Expand **File and iSCSI Services**
* Check **File Server**
* Check **DFS Namespaces**
* Check **DFS Replication**
* **Next → Next → Install**, then finish.

### Create & Configure Network Shares
Create (or identify) the folders you want to publish in the namespace, and share them on **each** server. Be sure to enable **Access-based Enumeration** on all of the folder shares for additional security. The files only need to exist on one of the file servers; then you create empty top-level folders with the same names on the replica servers, and data will be replicated automatically from the file server into the empty folders.

Additionally, it is recommended (if possible) to make the share names hidden, for example `\\LAB-FPS-01\Projects$`. This ensures that users access the share via DFS at `\\bunny-lab.io\Projects` and don't accidentally access the network shares directly, bypassing DFS. For example, the local path would be `Z:\Projects` but the network share would be `\\LAB-FPS-01\Projects$`. *This wouldn't break things like replication, but it would muck things up a little bit organizationally. The data would still be replicated between both servers; we just don't want users using direct server shares like that, which bypasses the high-availability and load-balancing features of DFS.*

!!! warning "What must match vs. what can differ"
    - **Must exist on each server:** a shared folder to act as the *folder target* (path can differ per server).
    - **Share permissions:** are **not replicated**; set them on each server.
    - **NTFS permissions inside the replicated folder:** **are replicated** by DFSR and should be consistent.
    - Targets do **not** have to use identical share names/paths, but keeping them consistent simplifies things.

| **Permission Type** | **User / Group** | **Access Level** | **Notes** |
| :---- | :---- | :---- | :---- |
| Share | `Everyone` (or `Authenticated Users`) | Full Control | Best practice is to grant broad Full Control on the **share** and enforce access with NTFS. |
| NTFS | `SYSTEM` | Full Control | Required for DFSR service. |
| NTFS | `Share_Admins` | Full Control | Optional admin group for data management. |
| NTFS | *Business groups needing access* | Modify | Grant least privilege to required users/groups. |

!!! info "Note On Inheritance"
    Disabling inheritance is **not required** for DFS/DFSR. Keep it enabled unless you have a clear reason to flatten ACLs; inheritance often reduces long-term admin overhead.

### DFS Breakdown
A **namespace** is a logical view like `\\bunny-lab.io\Projects`. Inside it, you create DFS **folders** (e.g., `Scripting`) that point to one or more **folder targets**, such as:

* `\\LAB-FPS-01\Projects$\Scripting`
* `\\LAB-FPS-02\Projects$\Scripting`

The namespace root itself isn't where you store data; it's a directory of links. Place data in the folder targets the DFS folder points to.

### DFS Configuration
You can run these steps from either server (or any admin workstation with the RSAT tools). DFSN configuration is stored in AD and on the namespace servers, and applies across members automatically.

#### Create Namespace

* **Server Manager → Tools → DFS Management**
* Right-click **Namespaces** → **New Namespace...**
* Choose a server to host the namespace (e.g., `LAB-FPS-01`) → **Next**
* Name the namespace (e.g., `Projects`) → **Next**
    * You can leave **Edit Settings** at defaults; those control the local folder that backs the namespace root, not your data.
* Choose **Domain-based namespace** and check **Enable Windows Server 2008 mode** (required for larger scale and access-based enumeration).
    * Resulting path: `\\bunny-lab.io\Projects`
* **Next → Create**

#### Make Namespace Highly-Available
We have to perform an extra step to ensure that every file server can act within a multi-master context, allowing for high availability. In this example, we will add `LAB-FPS-02` as a secondary namespace server for every namespace that we create.

- Right-click **DFS Management** > **Namespaces** > `\\bunny-lab.io\Projects`
- Click **Add Namespace Server...**
- Under "Namespace Server", enter `LAB-FPS-02`, then click **OK**.

#### Enable Access-Based Enumeration on Namespace

- Right-click **DFS Management** > **Namespaces** > `\\bunny-lab.io\Projects`
- Click **Properties**
- Click **Advanced**
- Check **Enable access-based enumeration for this namespace**
- Click **OK**

#### Link Folders to Namespace
Create the DFS folders and add folder targets:

* Right-click the new namespace (e.g., `\\bunny-lab.io\Projects`) → **New Folder...**
* **Name:** `Scripting`
* **Add** folder targets (one per server), e.g.:
    * `\\LAB-FPS-01\Projects$\Scripting`
    * `\\LAB-FPS-02\Projects$\Scripting`
    * You can simply copy-paste the previous server location and substitute the hostname (e.g. switching `01` to `02`) instead of browsing for the folder.
    * You *may* be prompted to create the folder because it does not exist on `LAB-FPS-02`; in this circumstance, you can tell it to create the folder automatically with read-only permissions. *Don't worry: when replication from `LAB-FPS-01` occurs, NTFS permissions will be overwritten with the correct users and groups.*
* When prompted *"Create a replication group to synchronize the folder targets?"*, click **Yes** to launch the DFS Replication wizard.

!!! info "**Be patient**"
    The Replication wizard can take ~1 minute to appear.

#### Configure Replication Group
In the Replication wizard that appears after about a minute, you can configure the replication group for the folder:

!!! bug "If Wizard did Not Appear (or Crashed)"
    In my homelab testing, there were two times when the wizard crashed or simply never opened. If this happens to you, you can manually re-trigger the wizard for the target folder by right-clicking the folder (e.g. `\\bunny-lab.io\Projects\Scripting`) and selecting **Replicate Folder**.

* **Replication Group Name**: *(leave as suggested)*
* **Replicated Folder Name**: *(leave as suggested)*
* **Next → Next**
* **Primary member**: pick the server with the **most up-to-date** copy of the data (e.g., `LAB-FPS-01`).

!!! abstract "Replication Behavior and Expectations"
    When you first create a replication group, DFSR needs a baseline copy of the data to start from. You designate one server as the Primary Member (e.g. `LAB-FPS-01`) to serve as that baseline. During the first sync, DFSR assumes that whatever exists in the primary member's folder is the "truth." So if the same file exists on another server (e.g. `LAB-FPS-02`) but with different timestamps, sizes, or hashes, the primary member's copy wins - but only during this first synchronization. After the initial sync is complete, the "primary" flag loses all authority. Replication becomes multi-master, meaning every member can make changes, and DFSR uses its conflict-resolution algorithm (based on version vectors, update sequence numbers, and timestamps) to decide which change wins going forward. In other words, no server remains "the boss" after initialization. Files that exist only on other member servers will not be wiped; they will be replicated across all member servers, including the primary member.

* **Topology**: `Full mesh` (good for two servers; for many sites, consider hub-and-spoke).
* **Replication schedule**: leave **Full** (24x7) unless you need bandwidth windows.
* **Create**

!!! success "Replication group created"
    You should see green ticks for the following. Give everything some time to replicate, as it depends on Active Directory replication speeds to push the configuration out across the DFS member servers and begin replication.

    - ✅ Create replication group
    - ✅ Create members
    - ✅ Update folder security
    - ✅ Create replicated folder
    - ✅ Create membership objects
    - ✅ Update folder properties
    - ✅ Create connections
### Troubleshooting / Diagnostics
#### Checking DFS Status
You may want to put together a simple table report of the DFS namespaces, replication info, and target folders. You can run the following PowerShell script to generate a table-based report of the current structure of the DFS namespaces in your domain.

??? example "Powershell Reporting Script"
    ```powershell
    # Automatically detect current AD domain and use it as DFS prefix
    try {
        $Domain = ([System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()).Name
        $DomainPrefix = "\\$Domain"
    } catch {
        Write-Warning "Unable to detect domain automatically. Falling back to manual value."
        $DomainPrefix = "\\bunny-lab.io"
    }

    Import-Module DFSN -ErrorAction Stop
    Import-Module DFSR -ErrorAction Stop

    function Get-ServerNameFromPath {
        param([string]$Path)
        if ([string]::IsNullOrWhiteSpace($Path)) { return $null }
        if ($Path -like "\\*") { return ($Path -split '\\')[2] }
        return $null
    }
    function Get-Max3 {
        param([int[]]$Values)
        if (-not $Values) { return 0 }
        return (($Values | Measure-Object -Maximum).Maximum)
    }

    # Build: GroupName (lower) -> memberships[]
    $allGroups = Get-DfsReplicationGroup -ErrorAction SilentlyContinue
    $groupMembershipMap = @{}
    foreach ($g in $allGroups) {
        $ms = Get-DfsrMembership -GroupName $g.GroupName -ErrorAction SilentlyContinue
        $groupMembershipMap[$g.GroupName.ToLower()] = $ms
    }

    # Flatten all memberships for regex fallback
    $allMemberships = @()
    foreach ($arr in $groupMembershipMap.Values) { if ($arr) { $allMemberships += $arr } }

    $rows = New-Object System.Collections.Generic.List[psobject]

    # Enumerate namespace roots
    $roots = Get-DfsnRoot -ErrorAction Stop | Where-Object { $_.Path -like "$DomainPrefix\*" }

    Write-Host "DFS Namespace and Replication Overview" -ForegroundColor Cyan
    Write-Host "------------------------------------------------------`n"

    foreach ($root in $roots) {

        $rootPath = $root.Path
        $rootLeaf = ($rootPath -split '\\')[-1]

        $nsServers = @()
        $rootTargets = Get-DfsnRootTarget -Path $rootPath -ErrorAction SilentlyContinue
        foreach ($rt in $rootTargets) {
            $srv = Get-ServerNameFromPath $rt.TargetPath
            if ($srv) { $nsServers += $srv }
        }

        # Folders under this root
        $folders = Get-DfsnFolder -Path "$rootPath\*" -ErrorAction SilentlyContinue | Sort-Object Path

        foreach ($f in $folders) {
            $namespaceFull = $f.Path
            $leaf = ($f.Path -split '\\')[-1]

            # DFSN folder targets
            $targets = Get-DfsnFolderTarget -Path $f.Path -ErrorAction SilentlyContinue
            $targets = @($targets | Sort-Object { Get-ServerNameFromPath $_.TargetPath }) # ensure array

            # Map to DFSR group by naming; fallback to regex on ContentPath
            $candidateGroup = ((($rootPath -replace '^\\\\','') + '\' + $leaf).ToLower())
            if ($groupMembershipMap.ContainsKey($candidateGroup)) {
                $msForFolder = $groupMembershipMap[$candidateGroup]
            } else {
                $escapedRootLeaf = [regex]::Escape($rootLeaf)
                $escapedLeaf = [regex]::Escape($leaf)
                $regex = "\\$escapedRootLeaf\\$escapedLeaf($|\\)"
                $msForFolder = $allMemberships | Where-Object { $_.ContentPath -imatch $regex }
            }
            $msForFolder = @($msForFolder) # normalize to array

            # Build aligned rows: one per target
            $targetLines = @()
            $replLines = @()

            foreach ($t in $targets) {
                $tServer = Get-ServerNameFromPath $t.TargetPath
                $targetLines += $t.TargetPath

                $msForServer = $null
                if ($msForFolder.Count -gt 0) {
                    $msForServer = $msForFolder | Where-Object { $_.ComputerName -ieq $tServer } | Select-Object -First 1
                }
                if ($msForServer -and $msForServer.ContentPath) { $replLines += $msForServer.ContentPath } else { $replLines += '' }
            }

            # Max line count for row expansion (PS 5.1 safe)
            $maxLines = Get-Max3 @($targetLines.Count, $replLines.Count, $nsServers.Count)

            for ($i = 0; $i -lt $maxLines; $i++) {

                # Precompute values (PS 5.1: no inline-if in hashtables)
                $nsVal = ''
                if ($i -eq 0) { $nsVal = $namespaceFull }

                $targetVal = ''
                if ($i -lt $targetLines.Count) { $targetVal = $targetLines[$i] }

                $replVal = ''
                if ($i -lt $replLines.Count) { $replVal = $replLines[$i] }

                $nsServerVal = ''
                if ($i -lt $nsServers.Count) { $nsServerVal = $nsServers[$i] }

                $row = [PSCustomObject]@{
                    'Namespace' = $nsVal
                    'Member Folder Target(s)' = $targetVal
                    'Replication Locations' = $replVal
                    'Namespace Servers' = $nsServerVal
                }
                $rows.Add($row) | Out-Null
            }
        }
    }

    # Render as a PowerShell bordered grid with one-space left/right padding in every cell
|
||||
function Write-DfsGrid {
|
||||
[CmdletBinding()]
|
||||
param(
|
||||
[Parameter(Mandatory)]
|
||||
[System.Collections.IEnumerable]$Data,
|
||||
|
||||
[string[]]$Columns = @('Namespace','Member Folder Target(s)','Replication Locations','Namespace Servers'),
|
||||
|
||||
# Reasonable max widths; tune to your console (these are content+padding widths)
|
||||
[int[]]$MaxWidths = @(70, 70, 52, 30),
|
||||
|
||||
[switch]$Ascii # use +-| instead of box-drawing if your console garbles Unicode
|
||||
)
|
||||
|
||||
# Ensure arrays align
|
||||
if ($MaxWidths.Count -lt $Columns.Count) {
|
||||
$pad = New-Object System.Collections.Generic.List[int]
|
||||
$pad.AddRange($MaxWidths)
|
||||
for ($i=$MaxWidths.Count; $i -lt $Columns.Count; $i++) { $pad.Add(40) }
|
||||
$MaxWidths = $pad.ToArray()
|
||||
}
|
||||
|
||||
# Characters
|
||||
if ($Ascii) {
|
||||
$H = @{ tl='+'; tr='+'; bl='+'; br='+'; hz='-'; vt='|'; tj='+'; mj='+'; bj='+' }
|
||||
} else {
|
||||
# Box-drawing
|
||||
$H = @{ tl='┌'; tr='┐'; bl='└'; br='┘'; hz='─'; vt='│'; tj='┬'; mj='┼'; bj='┴' }
|
||||
try { [Console]::OutputEncoding = [Text.UTF8Encoding]::UTF8 } catch {}
|
||||
}
|
||||
|
||||
function TruncPad([string]$s, [int]$w) {
|
||||
if ($null -eq $s) { $s = '' }
|
||||
$s = $s -replace '\r','' -replace '\t',' '
|
||||
if ($s.Length -le $w) { return $s.PadRight($w, ' ') }
|
||||
if ($w -le 1) { return $s.Substring(0, $w) }
|
||||
return ($s.Substring(0, $w-1) + '…')
|
||||
}
|
||||
|
||||
# Materialize and compute widths (include one-space left/right padding for header and data)
|
||||
$rows = @($Data | ForEach-Object {
|
||||
$o = @{}
|
||||
foreach ($c in $Columns) { $o[$c] = [string]($_.$c) }
|
||||
[pscustomobject]$o
|
||||
})
|
||||
|
||||
$widths = @()
|
||||
for ($i=0; $i -lt $Columns.Count; $i++) {
|
||||
$col = $Columns[$i]
|
||||
# Start with header length including padding
|
||||
$max = (" " + $col + " ").Length
|
||||
foreach ($r in $rows) {
|
||||
$len = (" " + [string]$r.$col + " ").Length
|
||||
if ($len -gt $max) { $max = $len }
|
||||
}
|
||||
$widths += [Math]::Min($max, $MaxWidths[$i])
|
||||
}
|
||||
|
||||
# Line builders
|
||||
function DrawTop() {
|
||||
$line = $H.tl
|
||||
for ($i = 0; $i -lt $widths.Count; $i++) {
|
||||
$line += ($H.hz * $widths[$i])
|
||||
if ($i -lt ($widths.Count - 1)) {
|
||||
$line += $H.tj
|
||||
} else {
|
||||
$line += $H.tr
|
||||
}
|
||||
}
|
||||
$line
|
||||
}
|
||||
function DrawMid([string[]]$Columns, [int[]]$widths, $H) {
|
||||
$line = $H.vt
|
||||
for ($i=0; $i -lt $widths.Count; $i++) {
|
||||
$line += TruncPad (" " + $Columns[$i] + " ") $widths[$i]
|
||||
$line += $H.vt
|
||||
}
|
||||
$line
|
||||
}
|
||||
function DrawSep() {
|
||||
$line = $H.vt
|
||||
for ($i=0; $i -lt $widths.Count; $i++) {
|
||||
$line += ($H.hz * $widths[$i])
|
||||
$line += $H.vt
|
||||
}
|
||||
$line
|
||||
}
|
||||
function DrawHeaderSep() {
|
||||
$line = $H.vt
|
||||
for ($i=0; $i -lt $widths.Count; $i++) {
|
||||
$line += ($H.hz * $widths[$i])
|
||||
$line += $H.vt
|
||||
}
|
||||
$line
|
||||
}
|
||||
function DrawBottom() {
|
||||
$line = $H.bl
|
||||
for ($i = 0; $i -lt $widths.Count; $i++) {
|
||||
$line += ($H.hz * $widths[$i])
|
||||
if ($i -lt ($widths.Count - 1)) {
|
||||
$line += $H.bj
|
||||
} else {
|
||||
$line += $H.br
|
||||
}
|
||||
}
|
||||
$line
|
||||
}
|
||||
function DrawRow($r, [string[]]$Columns, [int[]]$widths, $H) {
|
||||
$line = $H.vt
|
||||
for ($i=0; $i -lt $widths.Count; $i++) {
|
||||
$val = [string]$r.($Columns[$i])
|
||||
$line += TruncPad (" " + $val + " ") $widths[$i]
|
||||
$line += $H.vt
|
||||
}
|
||||
$line
|
||||
}
|
||||
|
||||
# Render with group separators between namespaces (when the Namespace cell is non-empty)
|
||||
Write-Host (DrawTop)
|
||||
Write-Host (DrawMid -Columns $Columns -widths $widths -H $H)
|
||||
Write-Host (DrawHeaderSep)
|
||||
|
||||
$first = $true
|
||||
foreach ($r in $rows) {
|
||||
if (-not $first -and ([string]$r.$($Columns[0])) ) {
|
||||
# Namespace changed → draw a separator
|
||||
Write-Host (DrawSep)
|
||||
}
|
||||
$first = $false
|
||||
Write-Host (DrawRow -r $r -Columns $Columns -widths $widths -H $H)
|
||||
}
|
||||
|
||||
Write-Host (DrawBottom)
|
||||
}
|
||||
|
||||
Write-DfsGrid -Data $rows
|
||||
```
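
The width-and-truncation logic at the heart of `Write-DfsGrid` (pad each cell with one space on each side, cap the column at a maximum width, and truncate overlong values with an ellipsis) is language-agnostic; here is a minimal Python sketch of the same idea, for illustration only and not part of the script above:

```python
def trunc_pad(s: str, w: int) -> str:
    """Pad s to exactly w characters; truncate with a trailing ellipsis if too long."""
    s = (s or "").replace("\r", "").replace("\t", " ")
    if len(s) <= w:
        return s.ljust(w)
    if w <= 1:
        return s[:w]
    return s[: w - 1] + "…"


def column_width(header: str, values: list[str], max_width: int) -> int:
    """Widest padded cell (one space on each side), capped at max_width."""
    widest = max(len(f" {v} ") for v in [header, *values])
    return min(widest, max_width)
```

For example, `trunc_pad("abcdefgh", 5)` keeps four characters and appends an ellipsis, exactly like the PowerShell `TruncPad` helper.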

#### Fixing Inconsistent DFS Management GUI
Sometimes the DFS Management GUI becomes "inconsistent": the namespaces and replication groups shown differ between member servers, and namespaces or replication groups may be missing entirely. DFS Management is an MMC snap-in, and MMC persists per-user console state under `%APPDATA%\Microsoft\MMC\`. If that state gets out of sync (common after service hiccups or server crashes), the snap-in can render partial or incorrect namespace/replication trees even when DFS itself is fine. Deleting the cached `dfsmgmt*` console file forces a fresh enumeration. A few extra commands are included below for thoroughness.

Before anything else, make sure Active Directory itself is not having replication issues, as that would be a deeper, more complicated problem. Run the following commands on one of your domain controllers:
```powershell
repadmin /syncall /AdeP
repadmin /replsummary
```

If AD-level replication is successful and timely, you can proceed to run the commands below (one line at a time):
```sh
# Pull down the DFS configuration from Active Directory & restart DFSR
dfsrdiag pollad
net stop dfsr
net start dfsr

# Clear the DFS Management snap-in cache
taskkill /im mmc.exe /f
del "%appdata%\Microsoft\MMC\dfsmgmt*"
dfsmgmt.msc
```

!!! success "DFS Management GUI Restored"
    At this point, the DFS Management snap-in should be successfully showing all of the DFS namespaces and replication groups when you re-open "DFS Management".

#### Check Replication Progress
You may want to check that replication is occurring bi-directionally between every member server in your DFS deployment. The script below shows every replication group and the backlog status for each direction.

```powershell
# --- CONFIG ---
$Members = @("LAB-FPS-01","LAB-FPS-02")
$SummarizeAcrossFolders = $true # $true = one line per direction per RG; $false = per-folder lines

function Invoke-DfsrBacklogStatus {
    param(
        [Parameter(Mandatory)] [string] $RG,
        [Parameter(Mandatory)] [string] $RF,
        [Parameter(Mandatory)] [string] $Send,
        [Parameter(Mandatory)] [string] $Recv
    )

    $out = & dfsrdiag backlog /rgname:"$RG" /rfname:"$RF" /sendingmember:"$Send" /receivingmember:"$Recv" 2>&1 | Out-String
    $outTrim = ($out -split "`r?`n" | ForEach-Object { $_.Trim() }) | Where-Object { $_ -ne "" }

    if ($out -match 'No Backlog') {
        return [pscustomobject]@{ Status="No Backlog"; Count=0; Detail=$null }
    }

    $count = $null
    $countLine = $outTrim | Where-Object { $_ -match '(?i)backlog' } | Select-Object -First 1
    if ($countLine -and ($countLine -match '(\d+)')) { $count = [int]$matches[1] }

    $detail = ($outTrim | Select-Object -First 8) -join " | "

    return [pscustomobject]@{
        Status = if ($count -ne $null) { "Backlog: $count" } else { "Backlog/Check Output" }
        Count  = $count
        Detail = $detail
    }
}

$groups = Get-DfsReplicationGroup | Sort-Object GroupName

foreach ($g in $groups) {
    $rg = $g.GroupName
    $rfs = Get-DfsReplicatedFolder -GroupName $rg | Sort-Object FolderName

    Write-Host ""
    Write-Host ("== Replication Group: {0} ==" -f $rg)

    foreach ($send in $Members) {
        foreach ($recv in $Members) {
            if ($send -eq $recv) { continue }

            if ($SummarizeAcrossFolders) {
                $worstCount = 0
                $nonZero = @()
                $errorsOrDetails = @()

                foreach ($rfObj in $rfs) {
                    $rf = $rfObj.FolderName
                    $res = Invoke-DfsrBacklogStatus -RG $rg -RF $rf -Send $send -Recv $recv

                    if ($res.Status -ne "No Backlog") {
                        $nonZero += [pscustomobject]@{ RF=$rf; Status=$res.Status; Count=$res.Count; Detail=$res.Detail }
                        if ($res.Count -ne $null -and $res.Count -gt $worstCount) { $worstCount = $res.Count }

                        # NOTE: ${rf} avoids the ':' parsing issue
                        if ($res.Detail) { $errorsOrDetails += "RF=${rf}: $($res.Detail)" }
                    }
                }

                if ($nonZero.Count -eq 0) {
                    Write-Host ("{0} -> {1}: No Backlog" -f $send, $recv)
                } else {
                    if ($worstCount -gt 0) {
                        Write-Host ("{0} -> {1}: Backlog (max {2} across RFs)" -f $send, $recv, $worstCount)
                    } else {
                        Write-Host ("{0} -> {1}: Backlog/Errors (see details)" -f $send, $recv)
                    }

                    $errorsOrDetails | Select-Object -First 5 | ForEach-Object { Write-Host (" - {0}" -f $_) }
                    if ($errorsOrDetails.Count -gt 5) { Write-Host " - ... (more omitted)" }
                }
            }
            else {
                foreach ($rfObj in $rfs) {
                    $rf = $rfObj.FolderName
                    $res = Invoke-DfsrBacklogStatus -RG $rg -RF $rf -Send $send -Recv $recv

                    if ($res.Status -eq "No Backlog") {
                        Write-Host ("{0} -> {1} [{2}]: No Backlog" -f $send, $recv, $rf)
                    } else {
                        Write-Host ("{0} -> {1} [{2}]: {3}" -f $send, $recv, $rf, $res.Status)
                        if ($res.Detail) { Write-Host (" - {0}" -f $res.Detail) }
                    }
                }
            }
        }
    }
}
```

!!! example "Example Output"
    You will see output like the following when you run the script.

    ```powershell
    == Replication Group: bunny-lab.io\music\fl studio plugins ==
    LAB-FPS-01 -> LAB-FPS-02: No Backlog
    LAB-FPS-02 -> LAB-FPS-01: No Backlog

    == Replication Group: bunny-lab.io\music\personal music ==
    LAB-FPS-01 -> LAB-FPS-02: No Backlog
    LAB-FPS-02 -> LAB-FPS-01: No Backlog

    == Replication Group: bunny-lab.io\music\shared music ==
    LAB-FPS-01 -> LAB-FPS-02: No Backlog
    LAB-FPS-02 -> LAB-FPS-01: No Backlog

    == Replication Group: bunny-lab.io\projects\coding ==
    LAB-FPS-01 -> LAB-FPS-02: No Backlog
    LAB-FPS-02 -> LAB-FPS-01: No Backlog
    ```
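
The parsing approach in `Invoke-DfsrBacklogStatus` (look for a line mentioning "backlog" and pull the first integer, treating "No Backlog" as zero) can be sketched in Python; this is an illustration of the parsing idea only, and the sample strings are illustrative rather than captured from a real server:

```python
import re
from typing import Optional


def parse_backlog(output: str) -> Optional[int]:
    """Return 0 for 'No Backlog', the first integer on a 'backlog' line, or None."""
    if "No Backlog" in output:
        return 0
    for line in output.splitlines():
        if "backlog" in line.lower():
            match = re.search(r"(\d+)", line)
            if match:
                return int(match.group(1))
    return None  # output did not match either pattern; inspect it manually
```

A `None` result corresponds to the script's "Backlog/Check Output" status, where the raw `dfsrdiag` output should be inspected by hand.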

72
deployments/services/gaming/ark-survival-ascended.md
Normal file
---
tags:
  - ARK
  - Gaming
---

**Purpose**:
This document outlines the prerequisites as well as the deployment process for an ARK: Survival Ascended server.

## Prerequisites
We need to install the Visual C++ Redistributable for both x86 and x64:

- [Download Visual C++ Redistributable (x64)](https://aka.ms/vs/17/release/vc_redist.x64.exe)
- [Download Visual C++ Redistributable (x86)](https://aka.ms/vs/17/release/vc_redist.x86.exe)

## Run Unreal Engine Certificate Trust Script
There is an issue where running a dedicated server requires API access to Epic Games, and that will not work without installing a few certificates. The original GitHub page can be found [here](https://github.com/Ch4r0ne/UnrealEngine_Dedicated_Server_Install_CA/tree/main), which explains the reason for this in more detail.

!!! note "Run as Administrator"
    You need to run the command as an administrator. This command will download the script automatically and temporarily bypass the script execution policy to run the script:
    ```
    PowerShell -ExecutionPolicy Bypass -Command "irm 'https://raw.githubusercontent.com/Ch4r0ne/UnrealEngine_Dedicated_Server_Install_CA/main/Install_Certificate.ps1' | iex"
    ```

## SteamCMD Deployment Script
You will need to make a folder somewhere on the computer, such as the desktop, and name it something like "ARK Updater", then put the following script into it. You will need to run this script before you can proceed to the next step.

```jsx title="C:\Users\nicole.rappe\Desktop\ARK_Updater\Update_Server.bat"
@echo off
set STEAMCMDDIR="C:\SteamCMD\"
set SERVERDIR="C:\ASAServer\"
set ARKAPPID=2430930
cd /d %STEAMCMDDIR%
del steamcmd.exe
timeout /t 5 /nobreak
curl -o steamcmd.zip https://steamcdn-a.akamaihd.net/client/installer/steamcmd.zip
powershell Expand-Archive -Path .\steamcmd.zip -DestinationPath .\
start "" /wait steamcmd.exe +force_install_dir "%SERVERDIR%" +login anonymous +app_update %ARKAPPID% validate +quit
exit
```

## Launch Script
Now you need to configure a launch script to actually start the dedicated server. This can be placed anywhere, but I suggest putting it into `C:\asaserver\ShooterGame\Saved` along with the world save data.

```jsx title="C:\asaserver\ShooterGame\Saved\Launch_Server.bat"
@echo off
start C:\asaserver\ShooterGame\Binaries\Win64\ArkAscendedServer.exe ScorchedEarth_WP?listen?SessionName=BunnyLab?Port=7777?QueryPort=27015?ServerPassword=SomethingSecure?ServerAdminPassword=SomethingVerySecure -WinLiveMaxPlayers=50 -log -crossplay-enable-pc -crossplay-enable-wingdk -mods=928548,928621,928597,928818,929543,937546,930684,930404,940022,941697,930851,948051,932365,929420,967786,930494
exit
```

!!! tip "Adding Mods"
    Mods are found on [CurseForge](https://www.curseforge.com/ark-survival-ascended). The mod ID you need is listed on CurseForge as the `Project ID`. Just copy that number and put it in a comma-separated list such as what is seen in the example above.
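
The mod list in the launch line is just the CurseForge Project IDs joined with commas; a tiny helper sketch (hypothetical, purely for illustration) that builds the `-mods=` argument:

```python
def build_mods_arg(project_ids: list[int]) -> str:
    """Join CurseForge Project IDs into ARK's -mods= launch argument."""
    return "-mods=" + ",".join(str(pid) for pid in project_ids)


print(build_mods_arg([928548, 928621, 928597]))  # -mods=928548,928621,928597
```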

## Dump Configuration .ini Files
At this point, you will want to launch the server and have someone join it so it can generate the necessary world files / configuration data. Then you will run the following commands in the console (from the server hosting the ARK server) in order to dump the configuration (ini) files to disk.

```
enablecheats <AdminPassword>
cheat SaveWorld
cheat DoExit
```

You will find the dumped configuration files at `C:\asaserver\ShooterGame\Saved\Config\WindowsServer`. The files you care about are `Game.ini` and `GameUserSettings.ini`.

!!! warning "Do not modify while server is running"
    If you modify these configuration files while the server is running, it will overwrite the values when the server is stopped again. Be sure to either set the variables in-game via the console so it dumps them to disk, or wait until the server is stopped to make configuration ini file changes.

!!! info "Optional: Generate Files from Singleplayer World"
    You may want to start a singleplayer world and set all of the configuration variables to your desired values, then load into the world. Once you have made landfall, quit out of the game to shut down the singleplayer world.

    From this point, you can find your `Game.ini` and `GameUserSettings.ini` files in `steamapps\common\ARK Survival Ascended\ShooterGame\Saved\Config\Windows`. Simply copy these two files into your server's configuration folder located at `C:\asaserver\ShooterGame\Saved\Config\WindowsServer` and launch the server.

52
deployments/services/gaming/pterodactyl.md
Normal file
---
tags:
  - Pterodactyl
  - Gaming
---

**Purpose**: Pterodactyl is the open-source game server management panel built with PHP, React, and Go. Designed with security in mind, Pterodactyl runs all game servers in isolated Docker containers while exposing a beautiful and intuitive UI to administrators and users.
[Official Website](https://pterodactyl.io/panel/1.0/getting_started.html)

!!! note
    This documentation assumes you are running Rocky Linux 9.3 or higher.

**Install EPEL Repository and other tools**:
```bash
sudo yum -y install epel-release curl ca-certificates gnupg
```

**Add Redis Repository**:
```bash
sudo rpm --import https://packages.redis.io/gpg
echo "[redis6]
name=Redis 6 repository
baseurl=https://packages.redis.io/rpm/6/rhel/8/\$basearch/
enabled=1
gpgcheck=1
gpgkey=https://packages.redis.io/gpg" | sudo tee /etc/yum.repos.d/redis.repo
```

**Add MariaDB Repository**:
```bash
sudo curl -LsS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
```

**Update Repositories List**:
```bash
sudo yum update
```

**Install Dependencies**:
Before installing PHP, check the available PHP versions in your enabled repositories. Install PHP and other dependencies as follows:
```bash
sudo yum -y install php php-{common,cli,gd,mysql,mbstring,bcmath,xml,fpm,curl,zip} mariadb-server nginx tar unzip git redis
```

**Install Composer**:
```bash
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
chmod +x /usr/local/bin/composer
```

These steps should work well with Rocky Linux and similar RHEL-based distributions, using `yum` for package management. However, keep in mind that package names and versions may vary between repositories, so you might need to adjust them based on what's available in your system's repositories.

39
deployments/services/gaming/valheim.md
Normal file
---
tags:
  - Valheim
  - Gaming
---

**Purpose**:
This document outlines the prerequisites as well as the deployment process for a dedicated Valheim server.

## Prerequisites
We need to install the Visual C++ Redistributable for both x86 and x64:

- [Download Visual C++ Redistributable (x64)](https://download.visualstudio.microsoft.com/download/pr/1754ea58-11a6-44ab-a262-696e194ce543/3642E3F95D50CC193E4B5A0B0FFBF7FE2C08801517758B4C8AEB7105A091208A/VC_redist.x64.exe)
- [Download Visual C++ Redistributable (x86)](https://download.visualstudio.microsoft.com/download/pr/b4834f47-d829-4e11-80f6-6e65081566b5/A32DD41EAAB0C5E1EAA78BE3C0BB73B48593DE8D97A7510B97DE3FD993538600/VC_redist.x86.exe)

## SteamCMD Deployment Script
You will need to make a folder somewhere on the computer, such as the desktop, and name it something like "Valheim Updater", then put the following script into it. You will need to run this script before you can proceed to the next step.

```jsx title="C:\Users\nicole.rappe\Downloads\SteamCMD\Update_Server.bat"
@echo off
steamcmd.exe +force_install_dir "C:\Valheim_Dedicated_Server" +login anonymous +app_update 896660 -beta public validate +quit
```

## Launch Script
Now you need to configure a launch script to actually start the dedicated server. This can be placed anywhere, but I suggest putting it into `C:\valheim_dedicated_server` along with the server files.

```jsx title="C:\valheim_dedicated_server\Launch_Server.bat"
@echo off
set SteamAppId=892970

echo "Starting server PRESS CTRL-C to exit"

valheim_server -nographics -batchmode -name "Bunny Lab" -port 2456 -world "Dedicated" -password "SomethingVerySecure" -crossplay -saveinterval 300 -backups 72 -backupshort 600 -backuplong 21600
```

!!! warning "Launch Script Considerations"
    - Make a local copy of this script to avoid it being overwritten by Steam.
    - The minimum password length is 5 characters, and the password can't be part of the server name.
    - Make sure ports TCP/UDP 2456-2457 are being forwarded to your server through your server VM & firewall.

56
deployments/services/home-and-iot/frigate.md
Normal file
---
tags:
  - Frigate
  - IoT
  - Docker
---

**Purpose**: A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.

```yaml title="docker-compose.yml"
version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: blakeblackshear/frigate:stable
    shm_size: "256mb" # update for your cameras based on Frigate's shm calculation
    # devices:
    #   - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
    #   - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
    #   - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/1TB_STORAGE/frigate/config.yml:/config/config.yml:ro
      - /mnt/1TB_STORAGE/frigate/media:/media/frigate
      - type: tmpfs # Optional: cache in RAM, reduces SSD/SD card wear
        target: /tmp/cache
        tmpfs:
          size: 4000000000
    ports:
      - "5000:5000"
      - "1935:1935" # RTMP feeds
    environment:
      FRIGATE_RTSP_PASSWORD: ${FRIGATE_RTSP_PASSWORD}
    networks:
      docker_network:
        ipv4_address: 192.168.5.201

  mqtt:
    container_name: mqtt
    image: eclipse-mosquitto:1.6
    ports:
      - "1883:1883"
    networks:
      docker_network:
        ipv4_address: 192.168.5.202

networks:
  docker_network:
    external: true
```
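
The `shm_size` value above should be sized to your cameras. A rough calculator sketch, assuming the per-camera formula published in Frigate's docs (`width * height * 1.5 * 9 + 270480` bytes at the detect resolution); verify against the current Frigate documentation before relying on it:

```python
import math


def frigate_shm_mb(cameras: list[tuple[int, int]]) -> int:
    """Estimate shm_size in MB from (width, height) detect resolutions.

    Assumes the per-camera estimate from Frigate's docs:
    width * height * 1.5 * 9 + 270480 bytes.
    """
    total_bytes = sum(w * h * 1.5 * 9 + 270480 for w, h in cameras)
    return math.ceil(total_bytes / 1048576)


# e.g. two cameras detecting at 1280x720:
print(frigate_shm_mb([(1280, 720), (1280, 720)]))
```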

```yaml title=".env"
FRIGATE_RTSP_PASSWORD=SomethingSecure101
```

44
deployments/services/home-and-iot/homeassistant.md
Normal file
---
tags:
  - Home Assistant
  - IoT
  - Docker
---

**Purpose**: Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts.

```yaml title="docker-compose.yml"
version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    environment:
      - TZ=America/Denver
    volumes:
      - /srv/containers/Home-Assistant-Core:/config
      - /etc/localtime:/etc/localtime:ro
    restart: always
    privileged: true
    ports:
      - 8123:8123
    networks:
      docker_network:
        ipv4_address: 192.168.5.252
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homeassistant.rule=Host(`automation.cyberstrawberry.net`)"
      - "traefik.http.routers.homeassistant.entrypoints=websecure"
      - "traefik.http.routers.homeassistant.tls.certresolver=myresolver"
      - "traefik.http.services.homeassistant.loadbalancer.server.port=8123"

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```

40
deployments/services/index.md
Normal file
---
tags:
  - Services
  - Index
  - Documentation
---

# Services
## Purpose
Deployable services and applications in the lab (auth, email, monitoring, etc).

## Includes
- Service deployments and configs
- Dependencies and integrations
- Operational notes specific to the service

## New Document Template
````markdown
# <Document Title>
## Purpose
<what this service does and why it exists>

!!! info "Assumptions"
    - <platform assumptions>
    - <dependency assumptions>

## Dependencies
- <required services, ports, DNS, storage>

## Procedure
```sh
# Commands or deployment steps
```

## Validation
- <command + expected result>

## Rollback
- <how to undo or recover>
````

68
deployments/services/media-and-gaming/emulatorjs.md
Normal file
---
tags:
  - EmulatorJS
  - Media
  - Gaming
  - Docker
---

**Purpose**: EmulatorJS is browser-based emulation, portable to nearly any device, for many retro consoles. A mix of emulators is used between Libretro and EmulatorJS.

## Docker Configuration
```yaml title="docker-compose.yml"
---
services:
  emulatorjs:
    image: lscr.io/linuxserver/emulatorjs:latest
    container_name: emulatorjs
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Denver
      - SUBFOLDER=/ #optional
    volumes:
      - /srv/containers/emulatorjs/config:/config
      - /srv/containers/emulatorjs/data:/data
    ports:
      - 3000:3000
      - 80:80
      - 4001:4001 #optional
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.200

networks:
  docker_network:
    external: true
```

```yaml title=".env"
N/A
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    emulatorjs:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      http2:
      service: emulatorjs
      rule: Host(`emulatorjs.bunny-lab.io`)

  services:
    emulatorjs:
      loadBalancer:
        servers:
          - url: http://192.168.5.200:80
        passHostHeader: true
```

!!! note
    - Port 80 = Frontend
    - Port 3000 = Management Backend

86
deployments/services/media-and-gaming/pyload.md
Normal file
|
||||
---
|
||||
tags:
|
||||
- Pyload
|
||||
- Media
|
||||
- Gaming
|
||||
- Docker
|
||||
---
|
||||
|
||||
**Purpose**: pyLoad-ng is a Free and Open Source download manager written in Python and designed to be extremely lightweight, easily extensible and fully manageable via web.
|
||||
|
||||
[Detailed LinuxServer.io Deployment Info](https://docs.linuxserver.io/images/docker-pyload-ng/)
|
||||
|
||||
|
||||
## Docker Configuration
|
||||
```yaml title="docker-compose.yml"
|
||||
version: '3.9'
|
||||
|
||||
services:
|
||||
pyload-ng:
|
||||
image: lscr.io/linuxserver/pyload-ng:latest
|
||||
container_name: pyload-ng
|
||||
environment:
|
||||
- PUID=1000
|
||||
- PGID=1000
|
||||
- TZ=America/Denver
|
||||
volumes:
|
||||
- /srv/containers/pyload-ng/config:/config
|
||||
- nfs-share:/downloads
|
||||
ports:
|
||||
- 8000:8000
|
||||
- 9666:9666 #optional
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
docker_network:
|
||||
ipv4_address: 192.168.5.30
|
||||
|
||||
volumes:
|
||||
nfs-share:
|
||||
driver: local
|
||||
driver_opts:
|
||||
type: nfs
|
||||
o: addr=192.168.3.3,nolock,soft,rw # Options for the NFS mount
|
||||
device: ":/mnt/STORAGE/Downloads" # NFS path on the server
|
||||
|
||||
networks:
|
||||
docker_network:
|
||||
external: true
|
||||
```
|
||||
|
||||
1. Set this to your own timezone.
|
||||
2. This is optional. Additional documentation needed to convey what this port is used for. Possibly API access.
|
||||
3. This assumes you want your download folder to be a SMB network share, this section allows you to connect to the share so Pyload can download content directly into the network folder. Replace the username and `REDACTED` password with your actual credentials. Remove the `domain` argument if the SMB server is not domain-joined.
|
||||
4. This is the destination network share to target with the given credentials in section 3.
|
||||
|
||||
!!! note "NFS Mount Assumptions"
|
||||
The NFS folder in this example is both exported via NFS on a TrueNAS Core server, while also being exported as an NFS export. `mapall user` and `mapall group` is configured to the user and group owners of the folder set in the permissions of the dataset in TrueNAS Core. In this case, the mapall user is `BUNNY-LAB\nicole.rappe` and the mapall group is `BUNNY-LAB\Domain Admins`.
|
||||
|
||||
```yaml title=".env"
|
||||
N/A
|
||||
```
|
||||
|
||||
## Traefik Reverse Proxy Configuration
|
||||
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
|
||||
```yaml
http:
  routers:
    pyload:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: pyload
      rule: Host(`pyload.bunny-lab.io`)

  services:
    pyload:
      loadBalancer:
        servers:
          - url: http://192.168.5.30:8000
        passHostHeader: true
```

!!! warning "Change Default Admin Credentials"
    Pyload ships with the username `pyload` and password `pyload`. Make sure you change these credentials immediately after the initial login.
    Navigate to "**Settings > Users > pyload > Change Password**"

15
deployments/services/microsoft-365/change-mfa-settings.md
Normal file
@@ -0,0 +1,15 @@

---
tags:
  - MFA
---

**Purpose**:
Sometimes you may need to change the MFA methods on an account, such as adding a new email address or phone number for SMS-based MFA. This can be done fairly quickly and only involves a few steps:

- Navigate to the [Azure Web Portal](https://portal.azure.com) and log in using your Office365 admin credentials.
- Navigate to the [Azure Active Directory (Microsoft Entra ID) Users List](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers)
- Click on the user account that needs its MFA information changed / wiped
- On the left-hand navigation menu, click on "**Authentication Methods**" at the bottom
- Make adjustments to existing methods or click on "**+ Add Authentication Method**"
    - Valid options generally are phone numbers, email addresses, and a "**Temporary Access Pass**"
- Save the changes by clicking the "**Add**" button, then have the user attempt to log in again using their configured MFA method

81
deployments/services/monitoring/gatus.md
Normal file
@@ -0,0 +1,81 @@

---
tags:
  - Gatus
  - Monitoring
  - Docker
---

**Purpose**: Gatus Service Status Server.

## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.9"
services:
  postgres:
    image: postgres
    restart: always
    volumes:
      - /srv/containers/gatus/database:/var/lib/postgresql
    ports:
      - "5432:5432"
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.9
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-postgres}"]
      interval: 10s
      retries: 5
      start_period: 30s

  gatus:
    image: twinproduction/gatus:latest
    restart: always
    ports:
      - "8080:8080"
    env_file:
      - stack.env
    volumes:
      - /srv/containers/gatus/config:/config
    depends_on:
      postgres:
        condition: service_healthy
    dns:
      - 192.168.3.25
      - 192.168.3.26
    networks:
      docker_network:
        ipv4_address: 192.168.5.8

networks:
  docker_network:
    external: true
```

```yaml title=".env"
N/A
```
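
The compose file above mounts `/srv/containers/gatus/config` into the container at `/config`, but no Gatus configuration is shown. Below is a minimal `config.yaml` sketch; the endpoint name, monitored URL, and Postgres connection string are placeholders / assumptions, not values taken from this deployment:

```yaml
# /srv/containers/gatus/config/config.yaml -- minimal sketch (placeholder values)
storage:
  type: postgres
  path: "postgres://postgres:postgres@192.168.5.9:5432/postgres?sslmode=disable"

endpoints:
  - name: example-website        # placeholder endpoint name
    url: "https://example.com"   # placeholder URL to monitor
    interval: 60s
    conditions:
      - "[STATUS] == 200"
      - "[RESPONSE_TIME] < 1000"
```

Restart the Gatus container after changing the file if the changes do not apply automatically.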

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    status-bunny-lab:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: status-bunny-lab
      rule: Host(`status.bunny-lab.io`)
      middlewares:
        - "auth-bunny-lab-io" # Referencing the Keycloak Server

  services:
    status-bunny-lab:
      loadBalancer:
        servers:
          - url: http://192.168.5.8:8080
        passHostHeader: true
```
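
The router above references an `auth-bunny-lab-io` middleware ("the Keycloak Server") that is not defined in this snippet. A hedged sketch of what such a middleware could look like using Traefik's `forwardAuth`; the address and headers are assumptions and must point at your own Keycloak-aware forward-auth service:

```yaml
http:
  middlewares:
    auth-bunny-lab-io:
      forwardAuth:
        # Hypothetical address -- replace with your own forward-auth
        # service fronting Keycloak (e.g. oauth2-proxy or
        # traefik-forward-auth); not part of the original deployment.
        address: "http://192.168.5.100:4181"
        trustForwardHeader: true
        authResponseHeaders:
          - "X-Forwarded-User"
```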

108
deployments/services/monitoring/speedtest-tracker.md
Normal file
@@ -0,0 +1,108 @@

---
tags:
  - Speedtest Tracker
  - Monitoring
  - Docker
---

## Purpose:
Speedtest Tracker is a self-hosted application that monitors the performance and uptime of your internet connection over time.
[Detailed Configuration Reference](https://docs.speedtest-tracker.dev/getting-started/installation)

## Docker Configuration
```yaml title="docker-compose.yml"
services:
  speedtest-tracker:
    image: lscr.io/linuxserver/speedtest-tracker:latest
    restart: unless-stopped
    container_name: speedtest-tracker
    ports:
      - 8080:80
      - 8443:443
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=${TIMEZONE}
      - ASSET_URL=${PUBLIC_FQDN}
      - APP_TIMEZONE=${TIMEZONE}
      - DISPLAY_TIMEZONE=${TIMEZONE}
      - SPEEDTEST_SCHEDULE=*/15 * * * * # (1)
      - SPEEDTEST_SERVERS=61622 # (3)
      - APP_KEY=${BASE64_APPKEY} # (2)
      - DB_CONNECTION=pgsql
      - DB_HOST=db
      - DB_PORT=5432
      - DB_DATABASE=${DB_DATABASE}
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD=${DB_PASSWORD}
    volumes:
      - /srv/containers/speedtest-tracker/config:/config
      - /srv/containers/speedtest-tracker/custom-ssl-keys:/config/keys
    depends_on:
      - db
    networks:
      docker_network:
        ipv4_address: 192.168.5.38

  db:
    image: postgres:17
    restart: always
    environment:
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - TZ=${TIMEZONE}
    volumes:
      - /srv/containers/speedtest-tracker/db:/var/lib/postgresql/data
    healthcheck:
      # Uses the DB_* variables defined in .env (POSTGRES_USER / POSTGRES_DB are not set there)
      test: ["CMD-SHELL", "pg_isready -U ${DB_USERNAME} -d ${DB_DATABASE}"]
      interval: 5s
      retries: 5
      timeout: 5s
    networks:
      docker_network:
        ipv4_address: 192.168.5.39
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

1. You can use [Crontab Guru](https://crontab.guru) to generate a cron expression to schedule automatic speedtests. e.g. `*/15 * * * *` runs a speedtest every 15 minutes.

2. You can generate a secure appkey with the following command: `echo -n 'base64:'; openssl rand -base64 32;` > Copy this key, including the `base64:` prefix, and paste it as your `APP_KEY` environment variable value.

3. This restricts the speedtest target to a specific speedtest server. In this example, it is a Missoula, MT speedtest server. You can get these codes from the yellow Speedtest button menu in the WebUI, then come back and redeploy the stack with the number entered here.

```yaml title=".env"
DB_PASSWORD=SecurePassword
DB_DATABASE=speedtest_tracker
DB_USERNAME=speedtest_tracker
TIMEZONE=America/Denver
PUBLIC_FQDN=https://speedtest.bunny-lab.io
BASE64_APPKEY=SECUREAPPKEY
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    speedtest-tracker:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: speedtest-tracker
      rule: Host(`speedtest.bunny-lab.io`)

  services:
    speedtest-tracker:
      loadBalancer:
        servers:
          - url: http://192.168.5.38:80
        passHostHeader: true
```

40
deployments/services/monitoring/uptimekuma.md
Normal file
@@ -0,0 +1,40 @@

---
tags:
  - Uptime Kuma
  - Monitoring
  - Docker
---

**Purpose**: Deploy Uptime Kuma uptime monitor to monitor services in the homelab and send notifications to various services.

```yaml title="docker-compose.yml"
version: '3'
services:
  uptimekuma:
    image: louislam/uptime-kuma
    ports:
      - 3001:3001
    volumes:
      - /mnt/uptimekuma:/app/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Allow status page to exist within an iframe
      - UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=1
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.uptime-kuma.rule=Host(`status.cyberstrawberry.net`)"
      - "traefik.http.routers.uptime-kuma.entrypoints=websecure"
      - "traefik.http.routers.uptime-kuma.tls.certresolver=letsencrypt"
      - "traefik.http.services.uptime-kuma.loadbalancer.server.port=3001"
    networks:
      docker_network:
        ipv4_address: 192.168.5.211
networks:
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```

43
deployments/services/notifications/ntfy.md
Normal file
@@ -0,0 +1,43 @@

---
tags:
  - ntfy
  - Notifications
  - Docker
---

**Purpose**: ntfy (pronounced notify) is a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, and/or using a REST API. It's infinitely flexible, and 100% free software.

```yaml title="docker-compose.yml"
version: "2.1"
services:
  ntfy:
    image: binwiederhier/ntfy
    container_name: ntfy
    command:
      - serve
    environment:
      - NTFY_ATTACHMENT_CACHE_DIR=/var/lib/ntfy/attachments
      - NTFY_BASE_URL=https://ntfy.bunny-lab.io
      - TZ=America/Denver # optional: Change to your desired timezone
    #user: UID:GID # optional: Set custom user/group or uid/gid
    volumes:
      - /srv/containers/ntfy/cache:/var/cache/ntfy
      - /srv/containers/ntfy/etc:/etc/ntfy
    ports:
      - 80:80
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.45

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
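
The compose file mounts `/srv/containers/ntfy/etc` at `/etc/ntfy`, which is where ntfy looks for a `server.yml`. The environment variables above already cover the basics, but the same options (and more) can be set in the config file instead. A minimal sketch; the auth settings are assumptions and not part of the original deployment:

```yaml
# /srv/containers/ntfy/etc/server.yml -- minimal sketch (auth values are assumptions)
base-url: "https://ntfy.bunny-lab.io"
behind-proxy: true                  # trust client IPs passed by the reverse proxy
auth-file: "/var/lib/ntfy/user.db"  # enables user accounts (created via `ntfy user add`)
auth-default-access: "deny-all"     # require login before publishing/subscribing
```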

126
deployments/services/productivity/collabora-code-server.md
Normal file
@@ -0,0 +1,126 @@

---
tags:
  - Collabora
  - Productivity
  - Docker
---

## Purpose:
The Collabora CODE Server is used by Nextcloud Office to open and edit documents and spreadsheets collaboratively. When Nextcloud is not deployed via [Nextcloud AIO](./nextcloud-aio.md), and is instead installed natively rather than as a container, you may run into stability issues where the Collabora CODE Server randomly breaks and stops allowing users to edit documents. If this happens, you can follow this document to stand up a dedicated Collabora CODE Server on the same host as your Nextcloud server.

!!! info "Assumptions"

    - It is assumed that you are running an ACME Certificate Bot on your Nextcloud server to generate certificates for Nextcloud. *This document does not outline the process for setting up an ACME Certificate Bot*.
    - It is also assumed that you are running Ubuntu Server 24.04.3 LTS.
    - It is lastly assumed that, until changes are made to allow otherwise, this will only work for internal access. Unless you port-forward port `9980`, Collabora will not function for public internet-facing access.

### Install Docker and Configure Portainer
The first thing you need to do is install Docker, then Portainer. You can do this by following the [Portainer Deployment](../../platforms/containerization/docker/deploy-portainer.md) documentation.

### Portainer Stack
```yaml title="docker-compose.yml"
name: app
services:
  code:
    image: collabora/code
    container_name: collabora
    restart: always
    networks:
      - collabora-net
    environment:
      - domain=${NEXTCLOUD_COLLABORA_URL}
      - aliasgroup1=${NEXTCLOUD_COLLABORA_URL}
      - username=${CODESERVER_ADMIN_USER} # Used to login @ https://cloud.bunny-lab.io:9980/browser/dist/admin/admin.html
      - password=${CODESERVER_ADMIN_PASSWORD} # Used to login @ https://cloud.bunny-lab.io:9980/browser/dist/admin/admin.html
      # CODE speaks HTTP internally, TLS is terminated at nginx
      - extra_params=--o:ssl.enable=false --o:ssl.termination=true
    # no direct port mapping; only reachable via proxy

  collabora-proxy:
    image: nginx:alpine
    container_name: collabora-proxy
    restart: always
    depends_on:
      - code
    networks:
      - collabora-net
    ports:
      # Host port 9980 -> container port 443 (HTTPS)
      - "9980:443"
    volumes:
      # Our nginx vhost config (this exists outside of the container anywhere you want to put it, by default "/opt/collabora/nginx.conf")
      - /opt/collabora/nginx.conf:/etc/nginx/conf.d/default.conf:ro

      # Mount the entire letsencrypt tree so symlinks keep working
      - /etc/letsencrypt:/etc/letsencrypt:ro

networks:
  collabora-net:
    driver: bridge
```

```yaml title=".env"
NEXTCLOUD_COLLABORA_URL=cloud\\.bunny-lab\\.io
CODESERVER_ADMIN_USER=admin
CODESERVER_ADMIN_PASSWORD=ChangeThisPassword
```

## NGINX Reverse Proxy Configuration
The Collabora CODE container has no published ports of its own; all traffic flows through the bundled `collabora-proxy` NGINX container, which terminates TLS using the existing LetsEncrypt certificates and forwards requests to CODE. Place the following configuration on the Docker host (by default at `/opt/collabora/nginx.conf`):
```nginx title="/opt/collabora/nginx.conf"
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 443 ssl;
    server_name cloud.bunny-lab.io;

    ssl_certificate /etc/letsencrypt/live/cloud.bunny-lab.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cloud.bunny-lab.io/privkey.pem;

    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers on;

    # Main proxy to CODE
    location / {
        proxy_pass http://collabora:9980;

        # Required for WebSockets
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;

        # Standard headers
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;

        proxy_read_timeout 36000;
        proxy_connect_timeout 36000;
        proxy_send_timeout 36000;

        proxy_buffering off;
        proxy_request_buffering off;
    }
}
```

### Configuring Nextcloud Office
Now that the Collabora CODE Server is deployed and instructed to use the existing LetsEncrypt SSL certificates located in `/etc/letsencrypt/live/cloud.bunny-lab.io/` on the Ubuntu host, we can proceed to reconfigure Nextcloud to use this new server.

- Login to the Nextcloud server as an administrator
- Navigate to "**Apps**"
- Ensure that any existing ONLYOFFICE or built-in Collabora CODE Server apps are disabled / removed from Nextcloud itself
- Navigate to "**Administration Settings**"
- In the left-hand "**Administration**" sidebar, look for something like "**Office**" or "**Nextcloud Office**" and click on it
- Check the radio box that says "**Use your own server**"
- For the URL, enter `https://cloud.bunny-lab.io:9980`, uncheck the "**Disable certificate verification (insecure)**" checkbox, then click the "**Save**" button.

!!! success "Collabora Online Server is Reachable"
    At this point, you should see a green banner at the top of the Nextcloud webpage stating something like "**Collabora Online Development Edition 25.04.7.2 a246f9ab3c**". This indicates that Nextcloud can successfully talk with the Collabora CODE Server, and you can now verify that everything is working by creating and editing some documents and spreadsheets.

### Administrating Collabora CODE Server
As aforementioned, managing Collabora CODE Server sessions can be useful: the admin console shows metrics about who is editing documents and lets you terminate sessions that get stuck. You can login to the management web interface at https://cloud.bunny-lab.io:9980/browser/dist/admin/admin.html using the `CODESERVER_ADMIN_USER` and `CODESERVER_ADMIN_PASSWORD` credentials.

169
deployments/services/productivity/nextcloud-aio.md
Normal file
@@ -0,0 +1,169 @@

---
tags:
  - Nextcloud AIO
  - Nextcloud
  - Productivity
  - Docker
---

**Purpose**:
Deploy a Nextcloud AIO Server. [Official Nextcloud All-in-One Documentation](https://github.com/nextcloud/all-in-one).
This version of Nextcloud consists of 12 containers that are centrally managed by a single "master" container. It is more orchestrated and automates the implementation of Nextcloud Office, Nextcloud Talk, and other integrations / apps.

!!! note "Assumptions"
    It is assumed you are running Rocky Linux 9.3.

    It is also assumed that you are using Traefik as your reverse proxy in front of Nextcloud AIO. If it isn't, refer to the [reverse proxy documentation](https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md) to configure other reverse proxies such as NGINX.
=== "Simplified Docker-Compose.yml"

    ```yaml title="docker-compose.yml"
    services:
      nextcloud-aio-mastercontainer:
        image: nextcloud/all-in-one:latest
        init: true
        restart: always
        container_name: nextcloud-aio-mastercontainer
        volumes:
          - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
          - /var/run/docker.sock:/var/run/docker.sock:ro
        ports:
          - 8080:8080
        dns:
          - 1.1.1.1
          - 1.0.0.1
        environment:
          - APACHE_PORT=11000
          - APACHE_IP_BINDING=0.0.0.0
          - NEXTCLOUD_MEMORY_LIMIT=4096M
          - NEXTCLOUD_ADDITIONAL_APKS=imagemagick
          - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick
    volumes:
      nextcloud_aio_mastercontainer:
        name: nextcloud_aio_mastercontainer
    ```

=== "Extended Docker-Compose.yml"

    ```yaml title="docker-compose.yml"
    services:
      nextcloud-aio-mastercontainer:
        image: nextcloud/all-in-one:latest
        init: true
        restart: always
        container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
        volumes:
          - nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
          - /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
        ports:
          # - 80:80 # Can be removed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
          - 8080:8080
          # - 8443:8443 # Can be removed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
        dns:
          - 1.1.1.1
          - 1.0.0.1
        environment: # Is needed when using any of the options below
          # AIO_DISABLE_BACKUP_SECTION: false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
          - APACHE_PORT=11000 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
          - APACHE_IP_BINDING=0.0.0.0 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
          # BORG_RETENTION_POLICY: --keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
          # COLLABORA_SECCOMP_DISABLED: false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
          # NEXTCLOUD_DATADIR: /mnt/ncdata # Allows to set the host directory for Nextcloud's datadir. ⚠️⚠️⚠️ Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
          # NEXTCLOUD_MOUNT: /mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
          # NEXTCLOUD_UPLOAD_LIMIT: 10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
          # NEXTCLOUD_MAX_TIME: 3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
          - NEXTCLOUD_MEMORY_LIMIT=4096M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
          # NEXTCLOUD_TRUSTED_CACERTS_DIR: /path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nextcloud container (Useful e.g. for LDAPS) See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
          # NEXTCLOUD_STARTUP_APPS="deck twofactor_totp tasks calendar contacts notes" # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
          - NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
          - NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
          # NEXTCLOUD_ENABLE_DRI_DEVICE: true # This allows to enable the /dev/dri device in the Nextcloud container. ⚠️⚠️⚠️ Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
          # NEXTCLOUD_KEEP_DISABLED_APPS: false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
          # TALK_PORT: 3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
          # WATCHTOWER_DOCKER_SOCKET_PATH: /var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macos it needs to be '/var/run/docker.sock'
        # networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
        #   - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
        # security_opt: ["label:disable"] # Is needed when using SELinux

      # # Optional: Caddy reverse proxy. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
      # # You can find further examples here: https://github.com/nextcloud/all-in-one/discussions/588
      # caddy:
      #   image: caddy:alpine
      #   restart: always
      #   container_name: caddy
      #   volumes:
      #     - ./Caddyfile:/etc/caddy/Caddyfile
      #     - ./certs:/certs
      #     - ./config:/config
      #     - ./data:/data
      #     - ./sites:/srv
      #   network_mode: "host"

    volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
      nextcloud_aio_mastercontainer:
        name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work

    # # Optional: If you need ipv6, follow step 1 and 2 of https://github.com/nextcloud/all-in-one/blob/main/docker-ipv6-support.md first and then uncomment the below config in order to activate ipv6 for the internal nextcloud-aio network.
    # # Please make sure to uncomment also the networking lines of the mastercontainer above in order to actually create the network with docker-compose
    # networks:
    #   nextcloud-aio:
    #     name: nextcloud-aio # This line is not allowed to be changed as otherwise the created network will not be used by the other containers of AIO
    #     driver: bridge
    #     enable_ipv6: true
    #     ipam:
    #       driver: default
    #       config:
    #         - subnet: fd12:3456:789a:2::/64 # IPv6 subnet to use
    ```

## Traefik Reverse Proxy Configuration
```yaml title="cloud.bunny-lab.io.yml"
http:
  routers:
    nextcloud-aio:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: nextcloud-aio
      middlewares:
        - nextcloud-chain
      rule: Host(`cloud.bunny-lab.io`)

  services:
    nextcloud-aio:
      loadBalancer:
        servers:
          - url: http://192.168.3.29:11000

  middlewares:
    nextcloud-secure-headers:
      headers:
        hostsProxyHeaders:
          - "X-Forwarded-Host"
        referrerPolicy: "same-origin"

    https-redirect:
      redirectscheme:
        scheme: https

    nextcloud-chain:
      chain:
        middlewares:
          # - ... (e.g. rate limiting middleware)
          - https-redirect
          - nextcloud-secure-headers
```

## Initial Setup
You will need to navigate to https://192.168.3.29:8080 to access the Nextcloud AIO configuration tool. This is where you will get the AIO password and the encryption passphrase for backups, and be able to configure the timezone, among other things.

### Domain Validation
It will ask you to provide a domain name. In this example, we will use `cloud.bunny-lab.io`. Assuming you have configured the Traefik reverse proxy as seen above, when you press the "**Validate Domain**" button, Nextcloud will spin up a container named something similar to `domain-validator`. This will spin up a server listening on https://cloud.bunny-lab.io. If you visit that address, it should give you something similar to `f940935260b41691ac2246ba9e7823a301a1605ae8a023ee`. This will confirm that the domain validation will succeed.

!!! warning "Domain Validation Failing"
    If visiting the web server at https://cloud.bunny-lab.io results in an error 502 or 404, try destroying the domain validation container in Portainer / Docker, then click the validation button in the Nextcloud AIO WebUI to spin up a new container automatically, at which point it should be functional.

### Configuring Additional Packages
At this point, the rest of the setup is fairly straightforward. You just check every checkbox for the apps you want to install automatically and be patient while Nextcloud deploys about 11 containers. You can track the progress more accurately if you log into Portainer and watch the container listing and logs to follow along until every container reports "**Healthy**", indicating everything is ready; then press the "**Refresh**" button on the Nextcloud AIO WebUI to confirm it's ready to be used.

71
deployments/services/productivity/nextcloud.md
Normal file
@@ -0,0 +1,71 @@

---
tags:
  - Nextcloud
  - Productivity
  - Docker
---

**Purpose**: Deploy a Nextcloud and PostgreSQL database together.

```yaml title="docker-compose.yml"
version: "2.1"
services:
  app:
    image: nextcloud:apache
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`files.bunny-lab.io`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
      - "traefik.http.services.nextcloud.loadbalancer.server.port=80"
    environment:
      - TZ=${TZ}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST=${POSTGRES_HOST}
      - OVERWRITEPROTOCOL=https
      - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER}
      - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
      - NEXTCLOUD_TRUSTED_DOMAINS=${NEXTCLOUD_TRUSTED_DOMAINS}
    volumes:
      - /srv/containers/nextcloud/html:/var/www/html
    ports:
      - 443:443
      - 80:80
    restart: always
    depends_on:
      - db
    networks:
      docker_network:
        ipv4_address: 192.168.5.17
  db:
    image: postgres:12-alpine
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - /srv/containers/nextcloud/db:/var/lib/postgresql/data
    ports:
      - 5432:5432
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.18

networks:
  docker_network:
    external: true
```

```yaml title=".env"
TZ=America/Denver
POSTGRES_PASSWORD=SomeSecurePassword
POSTGRES_USER=ncadmin
POSTGRES_HOST=192.168.5.18
POSTGRES_DB=nextcloud
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=SomeSuperSecurePassword
NEXTCLOUD_TRUSTED_DOMAINS=cloud.bunny-lab.io
```

70
deployments/services/productivity/onlyoffice-ee.md
Normal file
@@ -0,0 +1,70 @@

---
tags:
  - OnlyOffice
  - Productivity
  - Docker
---

**Purpose**: ONLYOFFICE offers a secure online office suite highly compatible with MS Office formats. Generally used with Nextcloud to edit documents directly within the web browser.

```yaml title="docker-compose.yml"
version: '3'

services:
  app:
    image: onlyoffice/documentserver-ee
    ports:
      - 80:80
      - 443:443
    volumes:
      - /srv/containers/onlyoffice/DocumentServer/logs:/var/log/onlyoffice
      - /srv/containers/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data
      - /srv/containers/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice
      - /srv/containers/onlyoffice/DocumentServer/db:/var/lib/postgresql
      - /srv/containers/onlyoffice/DocumentServer/fonts:/usr/share/fonts/truetype/custom
      - /srv/containers/onlyoffice/DocumentServer/forgotten:/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten
      - /srv/containers/onlyoffice/DocumentServer/rabbitmq:/var/lib/rabbitmq
      - /srv/containers/onlyoffice/DocumentServer/redis:/var/lib/redis
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.rule=Host(`office.cyberstrawberry.net`)"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.entrypoints=websecure"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.tls.certresolver=myresolver"
      - "traefik.http.services.cyberstrawberry-onlyoffice.loadbalancer.server.port=80"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.middlewares=onlyoffice-headers"
      - "traefik.http.middlewares.onlyoffice-headers.headers.customrequestheaders.X-Forwarded-Proto=https"
      #- "traefik.http.middlewares.onlyoffice-headers.headers.accessControlAllowOrigin=*"
    environment:
      - JWT_ENABLED=true
      - JWT_SECRET=REDACTED # SET THIS TO SOMETHING SECURE
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.143
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```
|
||||
|
||||
```yaml title=".env"
|
||||
Not Applicable
|
||||
```
|
||||
:::tip
|
||||
If you wish to use this in a non-commercial homelab environment without limits, [this script](https://wiki.muwahhid.ru/ru/Unraid/Docker/Onlyoffice-Document-Server) does an endless trial without functionality limits.
|
||||
```
|
||||
docker stop office-document-server-ee
|
||||
docker rm office-document-server-ee
|
||||
rm -r /mnt/user/appdata/onlyoffice/DocumentServer
|
||||
sleep 5
|
||||
<USE A PORTAINER WEBHOOK TO RECREATE THE CONTAINER OR REFERENCE THE DOCKER RUN METHOD BELOW>
|
||||
```
|
||||
|
||||
Docker Run Method:
|
||||
```
|
||||
docker run -d --name='office-document-server-ee' --net='bridge' -e TZ="Europe/Moscow" -e HOST_OS="Unraid" -e 'JWT_ENABLED'='true' -e 'JWT_SECRET'='mySecret' -p '8082:80/tcp' -p '4432:443/tcp' -v '/mnt/user/appdata/onlyoffice/DocumentServer/logs':'/var/log/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/data':'/var/www/onlyoffice/Data':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/lib':'/var/lib/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/db':'/var/lib/postgresql':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/fonts':'/usr/share/fonts/truetype/custom':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/forgotten':'/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/rabbitmq':'/var/lib/rabbitmq':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/redis':'/var/lib/redis':'rw' 'onlyoffice/documentserver-ee'
|
||||
```
|
||||
:::
|
||||
|
||||
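On the Nextcloud side, the ONLYOFFICE connector app is usually pointed at this Document Server. A sketch using `occ` (the `nextcloud` container name and the exact setting keys are assumptions; verify against the connector's admin documentation):

```shell
# Install the connector app (skip if already installed via the app store).
docker exec -u www-data nextcloud php occ app:install onlyoffice

# Point the connector at the Document Server and share the JWT secret
# configured via JWT_SECRET above.
docker exec -u www-data nextcloud php occ config:app:set onlyoffice DocumentServerUrl --value="https://office.cyberstrawberry.net/"
docker exec -u www-data nextcloud php occ config:app:set onlyoffice jwt_secret --value="REDACTED"
```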
66
deployments/services/productivity/stirling-pdf.md
Normal file
@@ -0,0 +1,66 @@
---
tags:
  - Stirling PDF
  - Productivity
  - Docker
---

**Purpose**: A powerful, locally hosted, web-based PDF manipulation tool running in Docker that allows you to perform various operations on PDF files, such as splitting, merging, converting, reorganizing, adding images, rotating, compressing, and more. This web application started as a 100% ChatGPT-made application and has evolved to include a wide range of features to handle all your PDF needs.

## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.8"
services:
  app:
    image: frooodle/s-pdf:latest
    container_name: stirling-pdf
    environment:
      - TZ=America/Denver
      - DOCKER_ENABLE_SECURITY=false
    volumes:
      - /srv/containers/stirling-pdf/datastore:/datastore
      - /srv/containers/stirling-pdf/trainingData:/usr/share/tesseract-ocr/5/tessdata # Required for extra OCR languages
      - /srv/containers/stirling-pdf/extraConfigs:/configs
      - /srv/containers/stirling-pdf/customFiles:/customFiles/
      - /srv/containers/stirling-pdf/logs:/logs/
    ports:
      - 8080:8080
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.54

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
N/A
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    stirling-pdf:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      http2:
      service: stirling-pdf
      rule: Host(`pdf.bunny-lab.io`)

  services:
    stirling-pdf:
      loadBalancer:
        servers:
          - url: http://192.168.5.54:8080
        passHostHeader: true
```
56
deployments/services/productivity/trilium.md
Normal file
@@ -0,0 +1,56 @@
---
tags:
  - Trilium
  - Productivity
  - Docker
---

**Purpose**: Build your personal knowledge base with [Trilium Notes](https://github.com/zadam/trilium/tree/master).

```yaml title="docker-compose.yml"
version: '2.1'
services:
  trilium:
    image: zadam/trilium
    restart: always
    environment:
      - TRILIUM_DATA_DIR=/home/node/trilium-data
    ports:
      - "8080:8080"
    volumes:
      - /srv/containers/trilium:/home/node/trilium-data
    networks:
      docker_network:
        ipv4_address: 192.168.5.11

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
N/A
```

# Traefik Configuration
```yaml title="notes.bunny-lab.io.yml"
http:
  routers:
    notes:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      http2:
      service: notes
      rule: Host(`notes.bunny-lab.io`)

  services:
    notes:
      loadBalancer:
        servers:
          - url: http://192.168.5.11:8080
        passHostHeader: true
```
56
deployments/services/productivity/wordpress.md
Normal file
@@ -0,0 +1,56 @@
---
tags:
  - WordPress
  - Productivity
  - Docker
---

**Purpose**: At its core, WordPress is the simplest, most popular way to create your own website or blog. In fact, WordPress powers over 43.3% of all the websites on the Internet. Yes – more than two in five websites that you visit are likely powered by WordPress.

```yaml title="docker-compose.yml"
version: '3.7'
services:
  wordpress:
    image: wordpress:latest
    restart: always
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_HOST: 192.168.5.216
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - /srv/Containers/WordPress/Server:/var/www/html
    networks:
      docker_network:
        ipv4_address: 192.168.5.217
    depends_on:
      - db
  db:
    image: lscr.io/linuxserver/mariadb
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      REMOTE_SQL: http://URL1/your.sql,https://URL2/your.sql
    volumes:
      - /srv/Containers/WordPress/DB:/config
    networks:
      docker_network:
        ipv4_address: 192.168.5.216

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
WORDPRESS_DB_PASSWORD=SecurePassword101
MYSQL_ROOT_PASSWORD=SecurePassword202
```
156
deployments/services/remote-access/apache-guacamole.md
Normal file
@@ -0,0 +1,156 @@
---
tags:
  - Apache Guacamole
  - Docker
---

**Purpose**: HTML5-based Remote Access Broker for SSH, RDP, and VNC. Useful for remote access into an environment.

### Docker Compose Stack
=== "docker-compose.yml"

    ```yaml
    version: '3'

    services:
      app:
        image: jasonbean/guacamole
        ports:
          - 8080:8080
        volumes:
          - /srv/containers/guacamole:/config
        environment:
          - OPT_MYSQL=Y
          - OPT_MYSQL_EXTENSION=N
          - OPT_SQLSERVER=N
          - OPT_LDAP=N
          - OPT_DUO=N
          - OPT_CAS=N
          - OPT_TOTP=Y # (1)
          - OPT_QUICKCONNECT=N
          - OPT_HEADER=N
          - OPT_SAML=N
          - PUID=99
          - PGID=100
          - TZ=America/Denver # (2)
        restart: unless-stopped
        networks:
          docker_network:
            ipv4_address: 192.168.5.43

    networks:
      default:
        external:
          name: docker_network
      docker_network:
        external: true
    ```

    1. Enable this if you want multi-factor authentication enabled. Must be set BEFORE the container is initially deployed. Cannot be added retroactively.
    2. Set to your own timezone.

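Because `OPT_TOTP` cannot be toggled on after the database has been initialized, the only route is to redeploy from a clean config volume. A sketch (paths from the compose file above; this wipes all existing connections and users, so export anything you need first):

```shell
# Stop and remove the stack, then move the initialized config out of the way.
docker compose down
sudo mv /srv/containers/guacamole /srv/containers/guacamole.bak  # keep a backup instead of deleting

# Set OPT_TOTP=Y in docker-compose.yml, then bring the stack back up;
# the container re-initializes its database with TOTP enabled.
docker compose up -d
```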
=== "docker-compose.yml (OpenID / Keycloak Integration)"
|
||||
|
||||
```yaml
|
||||
version: '3'
|
||||
|
||||
services:
|
||||
app:
|
||||
image: jasonbean/guacamole
|
||||
ports:
|
||||
- 8080:8080
|
||||
volumes:
|
||||
- /srv/containers/apache-guacamole:/config
|
||||
environment:
|
||||
- OPT_MYSQL=Y
|
||||
- OPT_MYSQL_EXTENSION=N
|
||||
- OPT_SQLSERVER=N
|
||||
- OPT_LDAP=N
|
||||
- OPT_DUO=N
|
||||
- OPT_CAS=N
|
||||
- OPT_TOTP=N
|
||||
- OPT_QUICKCONNECT=N
|
||||
- OPT_HEADER=N
|
||||
- OPT_SAML=N
|
||||
- OPT_OIDC=Y # Enable OpenID Connect
|
||||
- OIDC_ISSUER=${OPENID_REALM_URL} # Your Keycloak realm URL
|
||||
- OIDC_CLIENT_ID=${OPENID_CLIENT_ID} # Client ID for Guacamole in Keycloak
|
||||
- OIDC_CLIENT_SECRET=${OPENID_CLIENT_SECRET} # Client Secret for Guacamole in Keycloak
|
||||
- OIDC_REDIRECT_URI=${OPENID_REDIRECT_URI} # Redirect URI for Guacamole
|
||||
- PUID=99
|
||||
- PGID=100
|
||||
- TZ=America/Denver
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
docker_network:
|
||||
ipv4_address: 192.168.5.43
|
||||
|
||||
networks:
|
||||
default:
|
||||
external:
|
||||
name: docker_network
|
||||
docker_network:
|
||||
external: true
|
||||
```
|
||||
|
||||
1. You cannot enable TOTP / Multi-factor authentication if you have OpenID configured. This is just a known issue.
|
||||
2. Set to your own timezone.
|
||||
|
||||
### Environment Variables
|
||||
=== ".env"
|
||||
|
||||
``` sh
|
||||
N/A
|
||||
```
|
||||
|
||||
=== ".env (OpenID / Keycloak Integration)"
|
||||
|
||||
```yaml
|
||||
OPENID_REALM_URL=https://auth.bunny-lab.io/realms/master
|
||||
OPENID_CLIENT_ID=apache-guacamole
|
||||
OPENID_CLIENT_SECRET=<YOUR-CLIENT-ID-SECRET>
|
||||
OPENID_REDIRECT_URI=http://remote.bunny-lab.io
|
||||
```
|
||||
|
||||
## Reverse Proxy Configuration
|
||||
|
||||
=== "Traefik"
|
||||
|
||||
``` yaml
|
||||
http:
|
||||
routers:
|
||||
apache-guacamole:
|
||||
entryPoints:
|
||||
- websecure
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
service: apache-guacamole
|
||||
rule: Host(`remote.bunny-lab.io`)
|
||||
|
||||
services:
|
||||
apache-guacamole:
|
||||
loadBalancer:
|
||||
servers:
|
||||
- url: http://192.168.5.43:8080
|
||||
passHostHeader: true
|
||||
```
|
||||
|
||||
=== "NGINX"
|
||||
|
||||
```yaml
|
||||
server {
|
||||
listen 443 ssl;
|
||||
server_name remote.bunny-lab.io;
|
||||
client_max_body_size 0;
|
||||
ssl on;
|
||||
location / {
|
||||
proxy_pass http://192.168.5.43:8080;
|
||||
proxy_buffering off;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $http_connection;
|
||||
access_log off;
|
||||
}
|
||||
}
|
||||
```
|
||||
110
deployments/services/remote-access/firefox.md
Normal file
@@ -0,0 +1,110 @@
---
tags:
  - Firefox
  - Docker
---

**Purpose**: Sometimes you just want an instance of Firefox running on an Alpine Linux container that has persistence (extensions, bookmarks, history, etc) outside of the container (with bind-mapped folders). This is useful for a number of reasons, but insecure by default, so you have to protect it behind something like a [Keycloak Server](../authentication/keycloak/deployment.md) so it is not misused.

## Keycloak Authentication Sequence
``` mermaid
sequenceDiagram
    participant User
    participant Traefik as Traefik Reverse Proxy
    participant Keycloak
    participant RockyLinux as Rocky Linux VM
    participant FirewallD as FirewallD
    participant Alpine as Alpine Container

    User->>Traefik: Access https://work-environment.bunny-lab.io
    Traefik->>Keycloak: Redirect to Authenticate against Work Realm
    User->>Keycloak: Authenticate
    Keycloak->>User: Authorization Cookie Stored on Internet Browser
    User->>Traefik: Pass Authorization Cookie to Traefik
    Traefik->>RockyLinux: Traefik Forwards Traffic to Rocky Linux VM
    RockyLinux->>FirewallD: Traffic Passes Local Firewall
    FirewallD->>RockyLinux: Filter traffic (Port 5800)
    FirewallD->>Alpine: Allow Traffic from Traefik
    Alpine->>User: WebUI Access to Firefox Work Environment Granted
```

## Docker Configuration
```yaml title="docker-compose.yml"
version: '3'
services:
  firefox:
    image: jlesage/firefox # Docker image for Firefox
    environment:
      - TZ=America/Denver # Timezone setting
      - DARK_MODE=1 # Enable dark mode
      - WEB_AUDIO=1 # Enable web audio
      - KEEP_APP_RUNNING=1 # Keep the application running
    ports:
      - "5800:5800" # Port mapping for VNC WebUI
    volumes:
      - /srv/containers/firefox:/config:rw # Persistent storage for configuration
    restart: always # Always restart the container in case of failure
    network_mode: host # Use the host network
```

```yaml title=".env"
N/A
```

## Local Firewall Hardening
Because this browser allows anyone who can reach it to use it, it is important to lock it down so that only specifically-allowed devices, in this case the Traefik Reverse Proxy, can reach port 5800 (with SSH left open for administration). This ensures that only the proxy can communicate with Firefox's container, keeping it securely protected behind Keycloak's middleware in Traefik.

These rules will drop all traffic by default, allow port 22, and restrict access to port 5800.

``` sh
# Set the default zone to drop
sudo firewall-cmd --set-default-zone=drop

# Create a new zone named traefik-proxy
sudo firewall-cmd --permanent --new-zone=traefik-proxy

# Allow traffic to port 5800 only from 192.168.5.29 in the traefik-proxy zone
sudo firewall-cmd --permanent --zone=traefik-proxy --add-source=192.168.5.29
sudo firewall-cmd --permanent --zone=traefik-proxy --add-port=5800/tcp

# Allow SSH traffic on port 22 from any IP in the drop zone
sudo firewall-cmd --permanent --zone=drop --add-service=ssh

# Reload FirewallD to apply the changes
sudo firewall-cmd --reload
```

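After reloading, the zone assignments can be double-checked with:

```shell
# Show the sources, ports, and services bound to each zone.
sudo firewall-cmd --zone=traefik-proxy --list-all
sudo firewall-cmd --zone=drop --list-all
```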
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    work-environment:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: work-environment
      rule: Host(`work-environment.bunny-lab.io`)
      middlewares:
        - work-environment # Referencing the Keycloak Server

  services:
    work-environment:
      loadBalancer:
        servers:
          - url: http://192.168.5.4:5800
        passHostHeader: true
#        # Adding forwardingTimeouts to set the send and read timeouts to 1 hour (3600 seconds)
#        forwardingTimeouts:
#          dialTimeout: "3600s"
#          responseHeaderTimeout: "3600s"
```

## Firefox Special Configurations
Due to the nature of how this is deployed, you need to make some additional configurations to the Firefox settings after-the-fact. Some of this could be automated with environment variables at deployment time, but for now will be handled manually.

- **Install Power Tabs Extension**: This extension is useful for keeping things organized.
- **Install Merge All Windows Extension**: At times, you may misclick somewhere in the Firefox environment, causing Firefox to open a new instance / window and losing all of your tabs, and because there is no window manager, there is no way to alt+tab or switch between the instances of Firefox, effectively breaking your current session and forcing you to re-open tabs. With this extension, you can merge all of the windows, collapsing them into one window, resolving the issue.
- **Configure New Tab behavior**: If a new tab opens in a new window, it will absolutely throw everything into disarray; that is why all hyperlinks will be forced to open in a new tab instead of a new window. You can do this by navigating to `about:config` and setting the variable `browser.link.open_newwindow.restriction` to a value of `0`. [Original Reference Documentation](https://support.mozilla.org/en-US/questions/1066799)
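The `browser.link.open_newwindow.restriction` tweak can also be pre-seeded from the host via the profile's `user.js` (the exact profile path under the bind-mounted `/config` directory is an assumption; check where the container created the profile):

```js
// /srv/containers/firefox/profile/user.js  (path is an assumption)
// Force links that request a new window to open in a new tab instead.
user_pref("browser.link.open_newwindow.restriction", 0);
```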
39
deployments/services/rmm/tacticalrmm.md
Normal file
@@ -0,0 +1,39 @@
---
tags:
  - Tactical RMM
  - RMM
---

**Purpose**:
Tactical RMM is a remote monitoring & management tool built with Django, Vue and Golang. [Official Documentation](https://docs.tacticalrmm.com/install_server/).

!!! Requirements
    Ubuntu Server 22.04 LTS, 8GB RAM, 64GB Storage.

## Deployment Script
```
# Check for Updates
sudo apt update
sudo apt install -y wget curl sudo ufw
sudo apt -y upgrade

# Create TacticalRMM User
sudo useradd -m -G sudo -s /bin/bash tactical
sudo passwd tactical

# Configure Firewall Rules
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow https
sudo ufw allow ssh
echo "y" | sudo ufw enable
sudo ufw reload

# Switch to TacticalRMM User
sudo su - tactical

# Deploy TacticalRMM via Deployment Script
wget https://raw.githubusercontent.com/amidaware/tacticalrmm/master/install.sh
chmod +x install.sh
./install.sh
```
66
deployments/services/security-and-utility/changedetection.md
Normal file
@@ -0,0 +1,66 @@
---
tags:
  - ChangeDetection
  - Security
  - Docker
---

**Purpose**: Detect website content changes and perform meaningful actions - trigger notifications via Discord, Email, Slack, Telegram, API calls and many more.

## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.8"
services:
  app:
    image: dgtlmoon/changedetection.io
    container_name: changedetection.io
    environment:
      - TZ=America/Denver
    volumes:
      - /srv/containers/changedetection/datastore:/datastore
    ports:
      - 5000:5000
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.changedetection.rule=Host(`changedetection.bunny-lab.io`)"
      - "traefik.http.routers.changedetection.entrypoints=websecure"
      - "traefik.http.routers.changedetection.tls.certresolver=letsencrypt"
      - "traefik.http.services.changedetection.loadbalancer.server.port=5000"
    networks:
      docker_network:
        ipv4_address: 192.168.5.49

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
N/A
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    changedetection:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      http2:
      service: changedetection
      rule: Host(`changedetection.bunny-lab.io`)

  services:
    changedetection:
      loadBalancer:
        servers:
          - url: http://192.168.5.49:5000
        passHostHeader: true
```
35
deployments/services/security-and-utility/cyberchef.md
Normal file
@@ -0,0 +1,35 @@
---
tags:
  - CyberChef
  - Security
  - Docker
---

**Purpose**: The Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis.

```yaml title="docker-compose.yml"
version: "3.8"
services:
  app:
    image: mpepping/cyberchef:latest
    container_name: cyberchef
    environment:
      - TZ=America/Denver
    ports:
      - 8000:8000
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.55

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
N/A
```
33
deployments/services/security-and-utility/it-tools.md
Normal file
@@ -0,0 +1,33 @@
---
tags:
  - IT-Tools
  - Security
  - Docker
---

**Purpose**: Collection of handy online tools for developers, with great UX.

```yaml title="docker-compose.yml"
version: "3"

services:
  server:
    image: corentinth/it-tools:latest
    container_name: it-tools
    environment:
      - TZ=America/Denver
    restart: always
    ports:
      - "80:80"
    networks:
      docker_network:
        ipv4_address: 192.168.5.16

networks:
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
89
deployments/services/security-and-utility/password-pusher.md
Normal file
@@ -0,0 +1,89 @@
---
tags:
  - Password Pusher
  - Security
  - Docker
---

**Purpose**: An application to securely communicate passwords over the web. Passwords automatically expire after a certain number of views and/or time has passed. Track who, what and when.

## Docker Configuration
```yaml title="docker-compose.yml"
version: '3'

services:
  passwordpusher:
    image: docker.io/pglombardo/pwpush:release
    expose:
      - 5100
    restart: always
    environment:
      # Read the documentation on how to generate a master key, then put it below
      - PWPUSH_MASTER_KEY=${PWPUSH_MASTER_KEY}
    networks:
      docker_network:
        ipv4_address: 192.168.5.170
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.passwordpusher.rule=Host(`temp.bunny-lab.io`)"
      - "traefik.http.routers.passwordpusher.entrypoints=websecure"
      - "traefik.http.routers.passwordpusher.tls.certresolver=letsencrypt"
      - "traefik.http.services.passwordpusher.loadbalancer.server.port=5100"

networks:
  docker_network:
    external: true
```

```yaml title=".env"
PWPUSH_MASTER_KEY=<PASSWORD>
PWP__BRAND__TITLE="Bunny Lab"
PWP__BRAND__SHOW_FOOTER_MENU=false
PWP__BRAND__LIGHT_LOGO="https://cloud.bunny-lab.io/apps/theming/image/logo?v=22"
PWP__BRAND__DARK_LOGO="https://cloud.bunny-lab.io/apps/theming/image/logo?v=22"
PWP__BRAND__TAGLINE="Secure Temporary Information Exchange"
PWP__MAIL__RAISE_DELIVERY_ERRORS=true
PWP__MAIL__SMTP_ADDRESS=mail.bunny-lab.io
PWP__MAIL__SMTP_PORT=587
PWP__MAIL__SMTP_USER_NAME=noreply@bunny-lab.io
PWP__MAIL__SMTP_PASSWORD=<SMTP_CREDENTIALS>
PWP__MAIL__SMTP_AUTHENTICATION=plain
PWP__MAIL__SMTP_STARTTLS=true
PWP__MAIL__SMTP_OPEN_TIMEOUT=10
PWP__MAIL__SMTP_READ_TIMEOUT=10
PWP__HOST_DOMAIN=bunny-lab.io
PWP__HOST_PROTOCOL=https
PWP__MAIL__MAILER_SENDER='"noreply" <noreply@bunny-lab.io>'
PWP__SHOW_VERSION=false
PWP__ENABLE_FILE_PUSHES=true
PWP__FILES__EXPIRE_AFTER_DAYS_DEFAULT=2
PWP__FILES__EXPIRE_AFTER_DAYS_MAX=7
PWP__FILES__EXPIRE_AFTER_VIEWS_DEFAULT=5
PWP__FILES__EXPIRE_AFTER_VIEWS_MAX=10
PWP__FILES__RETRIEVAL_STEP_DEFAULT=true
PWP__ENABLE_URL_PUSHES=true
PWP__LOG_LEVEL=info
```

!!! note "PWPUSH_MASTER_KEY"
    Generate a master key by visiting the [official online key generator](https://pwpush.com/en/pages/generate_key).

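If you would rather not fetch the key from the online generator, a 32-byte hex key of the same shape can be produced locally (verify the expected key format against the Password Pusher documentation before relying on this):

```shell
# Emit 32 random bytes as a 64-character hex string.
openssl rand -hex 32
```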
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    password-pusher:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: password-pusher
      rule: Host(`temp.bunny-lab.io`)

  services:
    password-pusher:
      loadBalancer:
        servers:
          - url: http://192.168.5.170:5100
        passHostHeader: true
```
58
deployments/services/security-and-utility/searx.md
Normal file
@@ -0,0 +1,58 @@
---
tags:
  - Searx
  - Security
  - Docker
---

**Purpose**: Deploys a SearX Meta Search Engine Server

## Docker Configuration
```yaml title="docker-compose.yml"
version: '3'
services:
  searx:
    image: searx/searx:latest
    ports:
      - 8080:8080
    volumes:
      - /srv/containers/searx/:/etc/searx
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.searx.rule=Host(`searx.bunny-lab.io`)"
      - "traefik.http.routers.searx.entrypoints=websecure"
      - "traefik.http.routers.searx.tls.certresolver=letsencrypt"
      - "traefik.http.services.searx.loadbalancer.server.port=8080"
    networks:
      docker_network:
        ipv4_address: 192.168.5.124

networks:
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
  routers:
    searx:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: searx
      rule: Host(`searx.bunny-lab.io`)

  services:
    searx:
      loadBalancer:
        servers:
          - url: http://192.168.5.124:8080
        passHostHeader: true
```
69
deployments/services/security-and-utility/vaultwarden.md
Normal file
@@ -0,0 +1,69 @@
---
tags:
  - Vaultwarden
  - Security
  - Docker
---

**Purpose**: Unofficial Bitwarden compatible server written in Rust, formerly known as bitwarden_rs.

```yaml title="docker-compose.yml"
---
version: "2.1"
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    environment:
      - TZ=America/Denver
      - INVITATIONS_ALLOWED=false
      - SIGNUPS_ALLOWED=false
      - WEBSOCKET_ENABLED=false
      - ADMIN_TOKEN=REDACTED # PUT A REALLY REALLY REALLY SECURE PASSWORD HERE
    volumes:
      - /srv/containers/vaultwarden:/data
    ports:
      - 80:80
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.15
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.bunny-vaultwarden.rule=Host(`vault.bunny-lab.io`)"
      - "traefik.http.routers.bunny-vaultwarden.entrypoints=websecure"
      - "traefik.http.routers.bunny-vaultwarden.tls.certresolver=letsencrypt"
      - "traefik.http.services.bunny-vaultwarden.loadbalancer.server.port=80"

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```
!!! warning "ADMIN_TOKEN"
    It is **CRITICAL** that you never share the `ADMIN_TOKEN` with anyone. It allows you to log into the instance at https://vault.example.com/admin to add users, delete users, make changes system wide, etc.

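A sufficiently random value for `ADMIN_TOKEN` can be generated locally, for example (the exact recommendation may differ in the Vaultwarden wiki, which also supports Argon2 hashes in newer releases):

```shell
# Emit 48 random bytes as a 64-character base64 string.
openssl rand -base64 48
```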
```yaml title=".env"
|
||||
Not Applicable
|
||||
```
|
||||
## Traefik Reverse Proxy Configuration
|
||||
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
|
||||
```yaml
|
||||
http:
|
||||
routers:
|
||||
bunny-vaultwarden:
|
||||
entryPoints:
|
||||
- websecure
|
||||
tls:
|
||||
certResolver: letsencrypt
|
||||
service: vaultwarden
|
||||
rule: Host(`vault.bunny-lab.io`)
|
||||
|
||||
services:
|
||||
vaultwarden:
|
||||
loadBalancer:
|
||||
servers:
|
||||
- url: http://192.168.5.15:80
|
||||
passHostHeader: true
|
||||
```
|
||||