Renamed Processes to Workflows

This commit is contained in:
2024-02-05 20:22:11 -07:00
parent 6506203f7f
commit 83f41fb3a4
13 changed files with 0 additions and 0 deletions

View File

@ -0,0 +1,46 @@
**Purpose**: Pterodactyl is the open-source game server management panel built with PHP, React, and Go. Designed with security in mind, Pterodactyl runs all game servers in isolated Docker containers while exposing a beautiful and intuitive UI to administrators and users.
[Official Website](https://pterodactyl.io/panel/1.0/getting_started.html)
!!! note
This documentation assumes you are running Rocky Linux 9.3 or higher.
**Install EPEL Repository and other tools**:
```bash
sudo yum -y install epel-release curl ca-certificates gnupg
```
**Add Redis Repository**:
```bash
sudo rpm --import https://packages.redis.io/gpg
echo "[redis6]
name=Redis 6 repository
baseurl=https://packages.redis.io/rpm/6/rhel/8/\$basearch/
enabled=1
gpgcheck=1
gpgkey=https://packages.redis.io/gpg" | sudo tee /etc/yum.repos.d/redis.repo
```
**Add MariaDB Repository**:
```bash
sudo curl -LsS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
```
**Update Repositories List**:
```bash
sudo yum update
```
**Install Dependencies**:
Before installing PHP, check the available PHP versions in your enabled repositories. Install PHP and other dependencies as follows:
```bash
sudo yum -y install php php-{common,cli,gd,mysql,mbstring,bcmath,xml,fpm,curl,zip} mariadb-server nginx tar unzip git redis
```
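The note above says to check which PHP versions your enabled repositories actually offer before installing. On Rocky Linux you can inspect the module streams first; a quick sketch, where the `php:8.1` stream name is only an example and may differ on your system:
```bash
# List the PHP module streams offered by the enabled repositories
sudo dnf module list php

# Optionally switch to a newer stream before installing (example stream shown)
sudo dnf module enable php:8.1 -y
```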
**Installing Composer**:
```bash
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
sudo chmod +x /usr/local/bin/composer
```
These steps should work on Rocky Linux and similar RHEL-based distributions that use `yum`/`dnf` for package management. Keep in mind that package names and versions vary between repositories, so you may need to adjust them based on what is available on your system.

View File

@ -0,0 +1,7 @@
# MicroCloud
Canonical MicroCloud is a useful clustering tool for deploying virtual machines and managing containers.
!!! note
This section is currently under construction. Information here will change as the documentation evolves and the deployment process is refined.
PLACEHOLDER DATA

View File

@ -0,0 +1,76 @@
# OpenStack
OpenStack is an open-source cloud platform that functions as a highly available, cluster-friendly virtual machine hypervisor. This particular variant is deployed via Canonical's MicroStack (Sunbeam) environment using snap. It deploys OpenStack onto a single node, which can later be expanded to additional nodes. You can also use something like OpenShift to deploy a Kubernetes cluster onto OpenStack automatically via its various APIs.
**Reference Documentation**:
- https://discourse.ubuntu.com/t/single-node-guided/35765
- https://microstack.run/docs/single-node-guided
!!! note
This document assumes your bare-metal host server is running Ubuntu 22.04 LTS and has at least 16GB of memory (**32GB for Multi-Node Deployments**), two network interfaces (one for management, one for remote VM access), 200GB of disk space for the root filesystem, another 200GB disk for Ceph distributed storage, and 4 processor cores. See [Single-Node Mode System Requirements](https://ubuntu.com/openstack/install)
!!! note "Assumed Networking on the First Cluster Node"
- **eth0** = 192.168.3.5
- **eth1** = 192.168.5.200
### Update APT then install upgrades
```
sudo apt update && sudo apt upgrade -y && sudo apt install htop ncdu iptables nano -y
```
!!! tip
At this time, it would be a good idea to take a checkpoint/snapshot of the server (if it is a virtual machine). This gives you a starting point to come back to as you troubleshoot inevitable deployment issues.
### Update SNAP then install OpenStack SNAP
```
sudo snap refresh
sudo snap install openstack --channel 2023.1
```
### Install & Configure Dependencies
Sunbeam can generate a script to ensure that the machine has all of the required dependencies installed and is configured correctly for use in MicroStack.
```
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sudo reboot
```
### Bootstrapping
Deploy the OpenStack cloud using the cluster bootstrap command.
```
sunbeam cluster bootstrap
```
!!! warning
If you get an "Unable to connect to websocket" error, run `sudo snap restart lxd`.
[Known Bug Report](https://bugs.launchpad.net/snap-openstack/+bug/2033400)
!!! note
Management networks shared by hosts = `192.168.3.0/24`
MetalLB address allocation range (supports multiple ranges, comma separated) (10.20.21.10-10.20.21.20): `192.168.3.50-192.168.3.60`
### Cloud Initialization:
- nicole@moon-stack-01:~$ `sunbeam configure --openrc demo-openrc`
- Local or remote access to VMs [local/remote] (local): `remote`
- CIDR of network to use for external networking (10.20.20.0/24): `192.168.5.0/24`
- IP address of default gateway for external network (192.168.5.1):
- Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y):
- Username to use for access to OpenStack (demo): `nicole`
- Password to use for access to OpenStack (Vb********): `<PASSWORD>`
- Network range to use for project network (192.168.122.0/24):
- List of nameservers guests should use for DNS resolution (192.168.3.11 192.168.3.10):
- Enable ping and SSH access to instances? [y/n] (y):
- Start of IP allocation range for external network (192.168.5.2): `192.168.5.201`
- End of IP allocation range for external network (192.168.5.254): `192.168.5.251`
- Network type for access to external network [flat/vlan] (flat):
- Free network interface that will be configured for external traffic: `eth1`
- WARNING: Interface eth1 is configured. Any configuration will be lost, are you sure you want to continue? [y/n]: y
### Pull Down / Generate the Dashboard URL
```
sunbeam openrc > admin-openrc
sunbeam dashboard-url
```
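The `admin-openrc` file generated above is a standard OpenStack credentials file. If you need the admin username and password to log into the dashboard URL, you can source it and read the usual `OS_*` variables (a quick sketch, assuming the default variable names are present):
```
source admin-openrc
echo "$OS_USERNAME / $OS_PASSWORD"
```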
### Launch a Test VM:
Verify the cloud by launching a VM called test based on the ubuntu image (Ubuntu 22.04 LTS).
```
sunbeam launch ubuntu --name test
```
!!! note "Sample Output"
- Launching an OpenStack instance ...
- Access instance with `ssh -i /home/ubuntu/.config/openstack/sunbeam ubuntu@10.20.20.200`

View File

@ -0,0 +1,151 @@
## Initial Installation / Configuration
Proxmox Virtual Environment is an open source server virtualization management solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, highly available clusters, storage and networks with an integrated, easy-to-use web interface or via CLI.
!!! note
This document assumes you have a storage server that hosts both ISO files via CIFS/SMB share, and has the ability to set up an iSCSI LUN (VM & Container storage). This document assumes that you are using a TrueNAS Core server to host both of these services.
### Create the first Node
You will need to download the [Proxmox VE 8.1 ISO Installer](https://www.proxmox.com/en/downloads) from the Official Proxmox Website. Once it is downloaded, you can use [Balena Etcher](https://etcher.balena.io/#download-etcher) or [Rufus](https://rufus.ie/en/) to deploy Proxmox onto a server.
!!! warning
If you are virtualizing Proxmox under a Hyper-V environment, you will need to follow the [Official Documentation](https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/enable-nested-virtualization) to ensure that nested virtualization is enabled. An example is listed below:
```
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true # (1)
Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On # (2)
```
1. This tells Hyper-V to allow the GuestVM to behave as a hypervisor, nested under Hyper-V, allowing the virtualization functionality of the Hypervisor's CPU to be passed-through to the GuestVM.
2. This tells Hyper-V to allow your GuestVM to have multiple nested virtual machines with their own independent MAC addresses. This is useful when using nested Virtual Machines, but is also a requirement when you set up a [Docker Network](https://docs.bunny-lab.io/Containers/Docker/Docker%20Networking/) leveraging MACVLAN technology.
### Networking
You will need to set a static IP address, in this case, it will be an address within the 20GbE network. You will be prompted to enter these during the ProxmoxVE installation. Be sure to set the hostname to something that matches the following FQDN: `proxmox-node-01.MOONGATE.local`.
| Hostname | IP Address | Subnet Mask | Gateway | DNS Server | iSCSI Portal IP |
| --------------- | --------------- | ------------------- | ------- | ---------- | ----------------- |
| proxmox-node-01 | 192.168.101.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.101.100 |
| proxmox-node-01 | 192.168.103.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.103.100 |
| proxmox-node-02 | 192.168.102.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.102.100 |
| proxmox-node-02 | 192.168.104.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.104.100 |
### iSCSI Initiator Configuration
You will need to add the iSCSI initiator from the proxmox node to the allowed initiator list in TrueNAS Core under "**Sharing > Block Shares (iSCSI) > Initiators Groups**"
In this instance, we will reference Group ID: `2`. We need to add the initiator to the "**Allowed Initiators (IQN)**" section. This also includes the following networks that are allowed to connect to the iSCSI portal:
- `192.168.101.0/24`
- `192.168.102.0/24`
- `192.168.103.0/24`
- `192.168.104.0/24`
To get the iSCSI Initiator IQN of the current Proxmox node, navigate to the Proxmox server's webUI, typically located at `https://<IP>:8006`, then log in with username `root` and the password you set during the initial setup when the ISO image was mounted earlier.
- On the left-hand side, click on the name of the server node (e.g. `proxmox-node-01` or `proxmox-node-02`)
- Click on "**Shell**" to open a CLI to the server
- Run the following command to get the iSCSI Initiator (IQN) name to give to TrueNAS Core for the previously-mentioned steps:
``` sh
cat /etc/iscsi/initiatorname.iscsi | grep "InitiatorName=" | sed 's/InitiatorName=//'
```
!!! example
Output of this command will look something like `iqn.1993-08.org.debian:01:b16b0ff1778`.
## Disable Enterprise Subscription functionality
You will likely not be paying for / using the enterprise subscription, so we are going to disable that functionality and enable unstable builds. The unstable builds are surprisingly stable, and should not cause you any issues.
Add Unstable Update Repository:
```jsx title="/etc/apt/sources.list"
# Add to the end of the file
# Non-Production / Unstable Updates
deb https://download.proxmox.com/debian/pve bookworm pve-no-subscription
```
!!! warning
Please note the reference to `bookworm` in the sections above and below this notice; it may differ depending on the version of Proxmox VE you are deploying. Reference the release codename used by the other entries in the sources.list file to know which one to use in the added line.
Comment-Out Enterprise Repository:
```jsx title="/etc/apt/sources.list.d/pve-enterprise.list"
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
```
Pull / Install Available Updates:
``` sh
apt-get update
apt dist-upgrade
reboot
```
## NIC Teaming
You will need to set up NIC teaming to configure a LACP LAGG. This will add redundancy and a way for devices outside of the 20GbE backplane to interact with the server.
- Ensure that all of the network interfaces appear as something similar to the following:
```jsx title="/etc/network/interfaces"
iface eno1 inet manual
iface eno2 inet manual
# etc
```
- Adjust the network interfaces to add a bond:
```jsx title="/etc/network/interfaces"
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 192.168.0.11/24
gateway 192.168.0.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
# bridge-vlan-aware yes # I do not use VLANs
# bridge-vids 2-4094 # I do not use VLANs (This could be set to any VLANs you want it a member of)
```
!!! warning
Be sure to include both interfaces for the (Dual-Port) 10GbE connections in the network configuration. Final example document will be updated at a later point in time once the production server is operational.
- Reboot the server again so the networking changes fully take effect. Use iLO / iDRAC / IPMI if you have that functionality on your server in case the configuration goes errant and needs manual intervention / troubleshooting to regain SSH control of the Proxmox server. An optional no-reboot alternative is sketched below.
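If the node has `ifupdown2` installed (the default on current Proxmox VE releases), you can instead apply the new `/etc/network/interfaces` configuration without a full reboot. Treat this as an optional shortcut and keep out-of-band access ready in case the bond comes up wrong:
``` sh
ifreload -a
```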
## Generalizing VMs for Cloning / Templating:
These are the commands I run after cloning a Linux machine so that it resets all information for the machine it was cloned from.
!!! note
If you use cloud-init-aware OS images as described under Cloud-Init Support on https://pve.proxmox.com/pve-docs/chapter-qm.html, these steps won't be necessary!
```jsx title="Change Hostname"
sudo nano /etc/hostname
```
```jsx title="Change Hosts File"
sudo nano /etc/hosts
```
```jsx title="Reset the Machine ID"
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
```
```jsx title="Regenerate SSH Keys"
rm -f /etc/ssh/ssh_host_*
dpkg-reconfigure openssh-server
```
```jsx title="Reboot the Server to Apply Changes"
reboot
```
## Configure Alerting
Setting up alerts in Proxmox is critical to making sure you are notified if something goes wrong with your servers.
https://technotim.live/posts/proxmox-alerts/

View File

@ -0,0 +1,51 @@
**Purpose**: Rancher Harvester is an awesome tool that acts like a self-hosted cloud VDI provider, similar to AWS, Linode, and other online cloud compute platforms. In most scenarios, you will deploy "Rancher" in addition to Harvester to orchestrate the deployment, management, and rolling upgrades of a Kubernetes Cluster. You can also just run standalone Virtual Machines, similar to Hyper-V, RHEV, oVirt, Bhyve, XenServer, XCP-NG, and VMware ESXi.
:::note Prerequisites
This document assumes your bare-metal host has at least 32GB of Memory, 200GB of Disk Space, and 8 processor cores. See [Recommended System Requirements](https://docs.harvesterhci.io/v1.1/install/requirements)
:::
## First Harvester Node
### Download Installer ISO
You will need to navigate to the Rancher Harvester GitHub to download the [latest ISO release of Harvester](https://releases.rancher.com/harvester/v1.1.2/harvester-v1.1.2-amd64.iso), currently **v1.1.2**, then image it onto a USB flash drive using a tool like [Rufus](https://github.com/pbatard/rufus/releases/download/v4.2/rufus-4.2p.exe). Proceed to boot the bare-metal server from the USB drive to begin the Harvester installation process.
### Begin Setup Process
You will be waiting a few minutes while the server boots from the USB drive, but you will eventually land on a page where it asks you to set up various values to use for networking and the cluster itself.
The values seen below are examples and represent how my homelab is configured.
- **Management Interface(s)**: `eno1,eno2,eno3,eno4`
- **Network Bond Mode**: `Active-Backup`
- **IP Address**: `192.168.3.254/24` *<---- **Note:** Be sure to add CIDR Notation*.
- **Gateway**: `192.168.3.1`
- **DNS Server(s)**: `1.1.1.1,1.0.0.1,8.8.8.8,8.8.4.4`
- **Cluster VIP (Virtual IP)**: `192.168.3.251` *<---- **Note**: See "VIRTUAL IP CONFIGURATION" note below.*
- **Cluster Node Token**: `19-USED-when-JOINING-more-NODES-to-EXISTING-cluster-55`
- **NTP Server(s)**: `0.suse.pool.ntp.org`
:::caution Virtual IP Configuration
The VIP assigned to the first node in the cluster will act as a proxy to the built-in load-balancing system. It is important that you do not create a second node with the same VIP (Could cause instability in existing cluster), or use an existing VIP as the Node IP address of a new Harvester Cluster Node.
:::
:::tip
Based on your preference, it would be good to assign the device a static DHCP reservation, or use numbers counting down from **.254** (e.g. `192.168.3.254`, `192.168.3.253`, `192.168.3.252`, etc...)
:::
### Wait for Installation to Complete
The installation process will take quite some time, but when it is finished, the Harvester node will reboot and take you to a splash screen with the Harvester logo, with indicators as to what the VIP and Management Interface IPs are configured as, and whether or not the associated systems are operational and ready. **Be patient until both statuses say `READY`**. If after 15 minutes the status has still not changed to `READY` for both fields, see the note below.
:::caution Issues with `rancher-harvester-repo` Image
During my initial deployment efforts with Harvester v1.1.2, I noticed that the Harvester node never came online. That was because something bugged-out during installation and the `rancher-harvester-repo` image was not properly installed prior to node initialization. This will effectively soft-lock the node unless you reinstall it from scratch, as the Docker Hub registry that Harvester looks for to finish the deployment no longer exists and the deployment depends on the local image bundled with the installer ISO.
If this happens, you unfortunately need to start over and reinstall Harvester and hope that it works the second time around. No other workarounds are currently known at this time on version 1.1.2.
:::
## Additional Harvester Nodes
If you work in a production environment, you will want more than one Harvester node to allow live-migrations, high-availability, and better load-balancing in the Harvester Cluster. The section below will outline the steps necessary to create additional Harvester nodes, join them to the existing Harvester cluster, and validate that they are functioning without issues.
### Installation Process
Not Documented Yet
### Joining Node to Existing Cluster
Not Documented Yet
## Installing Rancher
If you plan on using Harvester for more than just running Virtual Machines (e.g. Containers), you will want to deploy Rancher inside of the Harvester Cluster in order to orchestrate the deployment, management, and rolling upgrades of various forms of Kubernetes Clusters (RKE2 suggested). The steps below go over the process of deploying a High-Availability Rancher environment to "adopt" Harvester as a VDI/compute platform for deploying the Kubernetes Cluster.
### Provision ControlPlane Node(s) VMs on Harvester
Not Documented Yet
### Adopt Harvester as Cluster Target
Not Documented Yet
### Deploy Production Kubernetes Cluster to Harvester
Not Documented Yet

View File

@ -0,0 +1,124 @@
**Purpose**:
Self-hosted open-source email server that can be set up in minutes and is enterprise-grade when upgraded with an iRedAdmin-Pro license.
!!! note "Assumptions"
It is assumed you are running at least Rocky Linux 9.3. While you can use CentOS Stream, Alma, Debian, Ubuntu, FreeBSD, and OpenBSD, the more enterprise-level sections of my homelab are built on Rocky Linux.
## Overview
The instructions below are specific to my homelab environment, but can be easily ported depending on your needs. This guide also assumes you want to operate a PostgreSQL-based iRedMail installation. You can follow along with the official documentation on [Installation](https://docs.iredmail.org/install.iredmail.on.rhel.html) as well as [DNS Record Configuration](https://docs.iredmail.org/setup.dns.html) if you want more detailed explanations throughout the installation process.
## Configure FQDN
Ensure the FQDN of the server is correctly set in `/etc/hostname`. The `/etc/hosts` file will be automatically injected using the FQDN from `/etc/hostname` in a script further down, don't worry about editing it.
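A quick way to verify and set the FQDN from the shell (the hostname below is the example mail host used later in this guide; substitute your own):
``` sh
# Show the current hostname / FQDN
hostnamectl status

# Set the FQDN that iRedMail will use (example value)
sudo hostnamectl set-hostname mail.bunny-lab.io
```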
## Disable SELinux
iRedMail doesn't work with SELinux, so disable it by setting the value below in its config file `/etc/selinux/config`. After a server reboot, SELinux will be completely disabled.
``` sh
# Elevate to Root User
sudo su
# Disable SELinux
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config # (1)
setenforce 0
```
1. If you prefer to let SELinux print warnings instead of enforcing, you can set this value instead: `SELINUX=permissive`
## Set Domain and iRedMail Version
Start by connecting to the server / VM via SSH, then set silent deployment variables below.
``` sh
# Define some deployment variables.
VERSION="1.6.8" # (1)
MAIL_DOMAIN="bunny-lab.io" # (2)
```
1. This is the version of iRedMail you are deploying. You can find the newest version on the [iRedMail Download Page](https://www.iredmail.org/download.html).
2. This is the domain suffix that appears after mailbox names. e.g. `first.last@bunny-lab.io` would use a domain value of `bunny-lab.io`.
You will then proceed to bootstrap a silent unattended installation of iRedMail. (I've automated as much as I can to make this as turn-key as possible). Just copy/paste this whole thing into your terminal and hit ENTER.
!!! danger "Storage Space Requirements"
You absolutely need to ensure that `/var/vmail` has a lot of space. At least 16GB. This is where all of your emails / mailboxes / a lot of settings will be. If possible, create a second physical/virtual disk specifically for the `/var` partition, or specifically for `/var/vmail` at minimum, so you can expand it over time if necessary. LVM-based provisioning is recommended but not required.
``` sh
# Automatically configure the /etc/hosts file to point to the server listed in "/etc/hostname".
sudo sed -i "1i 127.0.0.1 $(cat /etc/hostname) $(cut -d '.' -f 1 /etc/hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4" /etc/hosts
# Check for Updates in the Package Manager
yum update -y
# Install Extra Packages for Enterprise Linux
dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
# Download the iRedMail binaries and extract them
cd /root
curl https://codeload.github.com/iredmail/iRedMail/tar.gz/refs/tags/$VERSION -o iRedMail-$VERSION.tar.gz
tar zxf iRedMail-$VERSION.tar.gz
# Create the unattend config file for silent deployment. This will automatically generate random 32-character passwords for all of the databases.
(echo "export STORAGE_BASE_DIR='/var/vmail'"; echo "export WEB_SERVER='NGINX'"; echo "export BACKEND_ORIG='PGSQL'"; echo "export BACKEND='PGSQL'"; for var in VMAIL_DB_BIND_PASSWD VMAIL_DB_ADMIN_PASSWD MLMMJADMIN_API_AUTH_TOKEN NETDATA_DB_PASSWD AMAVISD_DB_PASSWD IREDADMIN_DB_PASSWD RCM_DB_PASSWD SOGO_DB_PASSWD SOGO_SIEVE_MASTER_PASSWD IREDAPD_DB_PASSWD FAIL2BAN_DB_PASSWD PGSQL_ROOT_PASSWD DOMAIN_ADMIN_PASSWD_PLAIN; do echo "export $var='$(openssl rand -base64 48 | tr -d '+/=' | head -c 32)'"; done; echo "export FIRST_DOMAIN='$MAIL_DOMAIN'"; echo "export USE_IREDADMIN='YES'"; echo "export USE_SOGO='YES'"; echo "export USE_NETDATA='YES'"; echo "export USE_FAIL2BAN='YES'"; echo "#EOF") > /root/iRedMail-$VERSION/config
# Make Config Read-Only
chmod 400 /root/iRedMail-$VERSION/config
# Set Environment Variables for Silent Deployment
cd /root/iRedMail-$VERSION
# Deploy iRedMail via the Install Script
AUTO_USE_EXISTING_CONFIG_FILE=y \
AUTO_INSTALL_WITHOUT_CONFIRM=y \
AUTO_CLEANUP_REMOVE_SENDMAIL=y \
AUTO_CLEANUP_REPLACE_FIREWALL_RULES=y \
AUTO_CLEANUP_RESTART_FIREWALL=n \
AUTO_CLEANUP_REPLACE_MYSQL_CONFIG=y \
bash iRedMail.sh
```
When the installation is completed, take note of any output it gives you for future reference. Then reboot the server to finalize the server installation.
```
reboot
```
!!! warning "Automatically-Generated Postmaster Password"
When you deploy iRedMail, it will give you a username and password for the postmaster account. If you accidentally forget to document this, you can log back into the server via SSH and see the credentials at `/root/iRedMail-$VERSION/iRedMail.tips`. This file is critical and contains passwords and DNS information such as DKIM record information as well.
## Nested Reverse Proxy Configuration
In my homelab environment, I run the Traefik reverse proxy in front of everything, including the NGINX reverse proxy that iRedMail creates. In this scenario, I have to make some custom adjustments to the reverse proxy's dynamic configuration to ensure Traefik will accept the self-signed certificates from iRedMail and communicate with the mail server successfully. An example Traefik configuration file is shown below.
``` yaml
# ROUTER
http:
routers:
mail:
entryPoints:
- websecure
rule: "Host(`mail.bunny-lab.io`)"
service: mail
middlewares:
- add-real-ip-header
- add-host-header
tls:
certResolver: myresolver
# MIDDLEWARE (May not actually be necessary)
middlewares:
add-real-ip-header:
headers:
customRequestHeaders:
X-Real-IP: ""
add-host-header:
headers:
customRequestHeaders:
Host: "mail.bunny-lab.io"
# SERVICE
mail:
loadBalancer:
serversTransport: insecureTransport
servers:
- url: "https://192.168.3.13:443"
passHostHeader: true
# TRANSPORT
serversTransports:
insecureTransport:
insecureSkipVerify: true
```

View File

@ -0,0 +1,138 @@
**Purpose**: privacyIDEA is a modular authentication system. Using privacyIDEA you can enhance your existing applications like local login, VPN, remote access, SSH connections, access to web sites or web portals with a second factor during authentication.
!!! info "Assumptions"
It is assumed you have a provisioned virtual machine / physical machine, running Ubuntu Server 22.04 to deploy a privacyIDEA server.
## AWX Deployment
### Add Server to Inventory and Pull Inventory/Playbook Updates from Gitea
You need to target the new server using a template in AWX (preferably).
- We will assume the FQDN of the server is `auth.bunny-lab.io` or just `auth`
- Be sure to add the host into the [AWX Homelab Inventory File](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini)
- Update / Sync the "**Bunny-Lab**" project in AWX ([Resources > Projects > Bunny-Lab > Sync](https://awx.bunny-lab.io/#/projects/8/details))
- Update / Sync the git.bunny-lab.io Inventory Source ([Resources > Inventories > Homelab > Sources > git.bunny-lab.io > Sync](https://awx.bunny-lab.io/#/inventories/inventory/2/sources/9/details))
### Create a Template
Next, you want to make a template to automate the deployment of privacyIDEA on any servers that are members of the `[privacyideaServers]` inventory host group. This is useful for development / testing, as well as rapid re-deployment / scaling.
- Navigate to **Resources > Templates > Add**
| **Field** | **Value** |
| :--- | :--- |
| Template Name | `Deploy PrivacyIDEA Server` |
| Description | `Ubuntu Server 22.04 Required` |
| Project | `Bunny-Lab` *(Click the Magnifying Lens)* |
| Inventory | `Homelab` |
| Playbook | `playbooks/Linux/Deployments/privacyIDEA.yml` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Credentials | `SSH: (LINUX) nicole` |
**Options**:
- [X] Privilege Escalation: Checked
- [X] Enable Fact Storage: Checked
### Launch the Template
Now we need to launch the template. Assuming all of the above was completed, we can now deploy the playbook/template against the Ubuntu Server via SSH.
- Launch the Template (Rocket Button)
- As the template runs, you will see deployment progress output on the screen
!!! success
You will know if everything was successful if you see something that looks like the following:
``` sh
ok: [auth]
TASK [Install wget and software-properties-common] *****************************
ok: [auth]
TASK [Download PrivacyIDEA signing key] ****************************************
changed: [auth]
TASK [Add signing key for Ubuntu 22.04LTS] *************************************
changed: [auth]
TASK [Add PrivacyIDEA repository] **********************************************
changed: [auth]
TASK [Update apt cache] ********************************************************
changed: [auth]
TASK [Install PrivacyIDEA with Apache2] ****************************************
changed: [auth]
PLAY RECAP *********************************************************************
auth                       : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
## Admin Access to WebUI
### Create a privacyIDEA Administrator Account
You will need to use the CLI in the server in order to create the first administrative account. Run the following command and provide a password for the administrator account.
``` sh
sudo pi-manage admin add nicole.rappe -e nicole.rappe@bunny-lab.io
```
### Log into the WebUI
Assuming you created an `A` record in the DNS server pointing to the IP address of the privacyIDEA server, navigate to https://auth.bunny-lab.io and sign in with your newly-created username and password (e.g. `nicole.rappe`).
## Connect to Active Directory/LDAP
### Create a LDAP User ID Resolver
This is what will connect privacyIDEA to an LDAP backend to pull-down users for authentication in Active Directory. Begin by navigating to "**Config > Users > New LDAP Resolver**"
| **Field** | **Value** |
| :--- | :--- |
| Resolver Name | `BunnyLab-LDAP` |
| Server URI | `ldap://bunny-dc-01.bunny-lab.io, ldap://bunny-dc-02.bunny-lab.io` |
| Pooling Strategy | `ROUND_ROBIN` |
| StartTLS | `<Unchecked>` |
| Base DN | `CN=Users,DC=bunny-lab,DC=io` |
| Scope | `SUBTREE` |
| Bind Type | `Simple` |
| Bind DN | `CN=Nicole Rappe,CN=Users,DC=bunny-lab,DC=io` |
| Bind Password | `<Domain Admin Password for "nicole.rappe">` |
- Click the "**Preset Active Directory**" button.
- Click the "**Test LDAP Resolver**" button.
### Associate User ID Resolver with a Realm
Now we need to create what is called a "**Realm**". Users need to be in realms to have tokens assigned. A user who is not a member of a realm cannot have a token assigned and cannot authenticate. You can combine several different User ID Resolvers (see UserIdResolvers) into a realm. Navigate to "**Config > Realms**"
| **Field** | **Value** |
| :--- | :--- |
| Realm Name | `Bunny-Lab` |
| Resolver(s) | `BunnyLab-LDAP` |
## Configure Push Notifications
### Create Policies
You will need to create several policies. You can make them all individual, or merge the ones with identical scopes together to keep things more organized. To begin, navigate to "**Config > Policies > Create New Policy**"
- **Scope**: `Enrollment` > "**push_firebase_configuration**" = `poll only`
- **Scope**: `Enrollment` > "**push_registration_url**" = `https://auth.bunny-lab.io/ttype/push`
- **Scope**: `Enrollment` > "**push_ssl_verify**" = `0`
- **Scope**: `Authentication` > "**push_allow_polling**" = `allow`
## Enrolling the First Token
!!! bug "Push Notifications Broken"
Currently, the push notification system (similar to Cisco Duo) is not behaving as expected. For now, you can use other authentication methods for the tokens, such as HOTP (on-demand MFA codes) or TOTP (conventional time-based MFA codes).
### TOTP Token
Navigate to "**Tokens > Enroll Token**"
| **Field** | **Value** |
| :--- | :--- |
| Token Type | `TOTP` |
| Realm | `Bunny-Lab` |
| Username | `[256da6f8-9ddb-4ec5-9409-1a95fea27615] nicole.rappe (Nicole Rappe)` |
Use any MFA authenticator app like Bitwarden or Google Authenticator to add the code and store the secret key somewhere safe.
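To confirm the new token works outside of the WebUI, you can also query privacyIDEA's standard validation endpoint directly. A sketch, assuming the default `/validate/check` REST endpoint is enabled; replace the `pass` value with your token PIN (if set) followed by the current OTP from your authenticator app:
``` sh
# Ask privacyIDEA to validate a PIN+OTP for the enrolled user
curl -s -X POST https://auth.bunny-lab.io/validate/check \
  -d "user=nicole.rappe" \
  -d "pass=123456"
```
A JSON response containing `"value": true` indicates the OTP was accepted.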
## Install Credential Provider
### Install Credential Provider Subscription File
In order to use the Credential Provider, you have to upload a subscription file. The free-tier allows up to 50 devices using the Credential Provider, but you can alter the source code of privacyIDEA to ignore subscriptions and just unlock everything (custom python code planned).
When you want to leverage MFA in an environment using the server, you need to have a domain-joined computer running the Credential Provider, which can be found on the [Official Credential Provider Github Page](https://github.com/privacyidea/privacyidea-credential-provider/releases).
- Download the MSI
- Run the installer on the computer
- Click "**Next**"
- Check the "**Agree**" checkbox, then click "**Next**"
- Hostname: `auth.bunny-lab.io`
- Path: `/path/to/pi`
- [x] Ignore Unknown CA Errors when Using SSL
- [x] Ignore Invalid Common Name Errors when Using SSL
- Click "**Next**" > "**Next**" > "**Next**"
- Click "**Install**" then "**Finish**"
You can now log out and verify that the credential provider is displayed as an option, and can log in using your domain username, domain password, and TOTP that you configured in the privacyIDEA WebUI.

View File

@ -0,0 +1,20 @@
**Purpose**: You may find that you need to adopt a device that was onboarded by a different Veeam Backup & Replication server. Maybe the old server died, or maybe you are restructuring your backup infrastructure, and want a new server taking over the backup responsibilities for the device.
If this happens, Veeam will complain that the device is managed by a different server. To circumvent this, make the following changes in the Windows Registry based on the version of Veeam Backup & Replication you are currently using, then try to update the agent / back up the agent again; it should succeed after the registry changes are made.
**Reference Material**:
https://forums.veeam.com/servers-workstations-f49/how-do-we-move-agent-to-associate-with-a-new-veeam-server-t79977.html
=== "VBR v11"
```jsx title="HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication"
AgentDiscoveryIgnoreOwnership
REG_DWORD (32-bit) Value: 1
```
=== "VBR v12"
```jsx title="HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication"
ProtectionGroupIgnoreOwnership
REG_DWORD (32-bit) Value: 1
```
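If you prefer not to edit the registry by hand, the same value can be created from an elevated Command Prompt or PowerShell session. A sketch for the v12 value name shown above (swap in `AgentDiscoveryIgnoreOwnership` for v11):
```jsx title="Elevated Command Prompt"
reg add "HKLM\SOFTWARE\Veeam\Veeam Backup and Replication" /v ProtectionGroupIgnoreOwnership /t REG_DWORD /d 1 /f
```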

View File

@ -0,0 +1,66 @@
# Changing Windows Editions
### Changing Editions:
Windows Server: `DISM /ONLINE /set-edition:serverstandard /productkey:AAAAA-BBBBB-CCCCC-DDDDD-EEEEE /AcceptEula`
Windows (Home/Pro): `DISM /ONLINE /set-edition:professional /productkey:AAAAA-BBBBB-CCCCC-DDDDD-EEEEE /AcceptEula`
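Before switching, it can help to confirm which edition is currently installed and which editions DISM will allow you to move to; these are standard DISM queries:
```
DISM /Online /Get-CurrentEdition
DISM /Online /Get-TargetEditions
```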
### Force Activation / Edition Switcher:
`irm https://massgrave.dev/get | iex`
## Generic Install Keys
### Windows 10
| Windows Edition | RTM Generic Key (Retail) | [**KMS Client Setup Key**](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj612867(v%3dws.11)) |
| :--- | :--- | :--- |
| Windows 10 Home | YTMG3-N6DKC-DKB77-7M9GH-8HVX7 | TX9XD-98N7V-6WMQ6-BX7FG-H8Q99 |
| Windows 10 Home N | 4CPRK-NM3K3-X6XXQ-RXX86-WXCHW | 3KHY7-WNT83-DGQKR-F7HPR-844BM |
| Windows 10 Home Single Language | BT79Q-G7N6G-PGBYW-4YWX6-6F4BT | 7HNRX-D7KGG-3K4RQ-4WPJ4-YTDFH |
| Windows 10 Pro | VK7JG-NPHTM-C97JM-9MPGT-3V66T | W269N-WFGWX-YVC9B-4J6C9-T83GX |
| Windows 10 Pro N | 2B87N-8KFHP-DKV6R-Y2C8J-PKCKT | MH37W-N47XK-V7XM9-C7227-GCQG9 |
| Windows 10 Pro for Workstations | DXG7C-N36C4-C4HTG-X4T3X-2YV77 | NRG8B-VKK3Q-CXVCJ-9G2XF-6Q84J |
| Windows 10 Pro N for Workstations | WYPNQ-8C467-V2W6J-TX4WX-WT2RQ | 9FNHH-K3HBT-3W4TD-6383H-6XYWF |
| Windows 10 S | 3NF4D-GF9GY-63VKH-QRC3V-7QW8P | |
| Windows 10 Education | YNMGQ-8RYV3-4PGQ3-C8XTP-7CFBY | NW6C2-QMPVW-D7KKK-3GKT6-VCFB2 |
| Windows 10 Education N | 84NGF-MHBT6-FXBX8-QWJK7-DRR8H | 2WH4N-8QGBV-H22JP-CT43Q-MDWWJ |
| Windows 10 Pro Education | 8PTT6-RNW4C-6V7J2-C2D3X-MHBPB | 6TP4R-GNPTD-KYYHQ-7B7DP-J447Y |
| Windows 10 Pro Education N | GJTYN-HDMQY-FRR76-HVGC7-QPF8P | YVWGF-BXNMC-HTQYQ-CPQ99-66QFC |
| Windows 10 Enterprise | XGVPP-NMH47-7TTHJ-W3FW7-8HV2C | NPPR9-FWDCX-D2C8J-H872K-2YT43 |
| Windows 10 Enterprise G | | YYVX9-NTFWV-6MDM3-9PT4T-4M68B |
| Windows 10 Enterprise G N | FW7NV-4T673-HF4VX-9X4MM-B4H4T | 44RPN-FTY23-9VTTB-MP9BX-T84FV |
| Windows 10 Enterprise N | WGGHN-J84D6-QYCPR-T7PJ7-X766F | DPH2V-TTNVB-4X9Q3-TJR4H-KHJW4 |
| Windows 10 Enterprise S | NK96Y-D9CD8-W44CQ-R8YTK-DYJWX | FWN7H-PF93Q-4GGP8-M8RF3-MDWWW |
| Windows 10 Enterprise 2015 LTSB | | WNMTR-4C88C-JK8YV-HQ7T2-76DF9 |
| Windows 10 Enterprise 2015 LTSB N | | 2F77B-TNFGY-69QQF-B8YKP-D69TJ |
| Windows 10 Enterprise LTSB 2016 | | DCPHK-NFMTC-H88MJ-PFHPY-QJ4BJ |
| Windows 10 Enterprise N LTSB 2016 | RW7WN-FMT44-KRGBK-G44WK-QV7YK | QFFDN-GRT3P-VKWWX-X7T3R-8B639 |
| Windows 10 Enterprise LTSC 2019 | | M7XTQ-FN8P6-TTKYV-9D4CC-J462D |
| Windows 10 Enterprise N LTSC 2019 | | 92NFX-8DJQP-P6BBQ-THF9C-7CG2H |
| Windows 10 Home | 37GNV-YCQVD-38XP9-T848R-FC2HD | |
| Windows 10 Home N | 33CY4-NPKCC-V98JP-42G8W-VH636 | |
| Windows 10 Pro | NF6HC-QH89W-F8WYV-WWXV4-WFG6P | |
| Windows 10 Pro N | NH7W7-BMC3R-4W9XT-94B6D-TCQG3 | |
| Windows 10 SL | NTRHT-XTHTG-GBWCG-4MTMP-HH64C | |
| Windows 10 CHN SL | 7B6NC-V3438-TRQG7-8TCCX-H6DDY | |
| Windows 10 Home | 46J3N-RY6B3-BJFDY-VBFT9-V22HG | |
| Windows 10 Home N | PGGM7-N77TC-KVR98-D82KJ-DGPHV | |
| Windows 10 Pro | RHGJR-N7FVY-Q3B8F-KBQ6V-46YP4 | |
| Windows 10 Pro N | 2KMWQ-NRH27-DV92J-J9GGT-TJF9R | |
| Windows 10 SL | GH37Y-TNG7X-PP2TK-CMRMT-D3WV4 | |
| Windows 10 CHN SL | 68WP7-N2JMW-B676K-WR24Q-9D7YC | |
### Windows Server
| Windows Edition | RTM Generic Key (Retail) | [**KMS Client Setup Key**](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj612867(v%3dws.11)) |
| :--- | :--- | :--- |
| Windows Server 2016 Datacenter | | CB7KF-BWN84-R7R2Y-793K2-8XDDG |
| Windows Server 2016 Standard | | WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY |
| Windows Server 2016 Essentials | | JCKRF-N37P4-C2D82-9YXRT-4M63B |
| Windows Server 2019 Datacenter | | WMDGN-G9PQG-XVVXX-R3X43-63DFG |
| Windows Server 2019 Standard | | N69G4-B89J2-4G8F4-WWYCC-J464C |
| Windows Server 2019 Essentials | | WVDHN-86M7X-466P6-VHXV7-YY726 |
| Windows Server 2022 Standard | | VDYBN-27WPP-V4HQT-9VMD4-VMK7H |
| Windows Server 2022 Datacenter Azure | | NTBV8-9K7Q8-V27C6-M2BTV-KHMXV |
| Windows Server 2022 Datacenter | | WX4NM-KYWYW-QJJR4-XV3QB-6VM33 |
## Additional Reference Documentation:
https://www.tenforums.com/tutorials/95922-generic-product-keys-install-windows-10-editions.html
[https://learn.microsoft.com/en-us/windows-server/get-started/kms-client-activation-keys](https://learn.microsoft.com/en-us/windows-server/get-started/kms-client-activation-keys)

View File

@ -0,0 +1,23 @@
**Purpose**: Sometimes you may find that you need to convert a `.crt` or `.pem` certificate file into a `.pfx` file that Microsoft IIS Server Manager can import for something like Exchange Server or another custom IIS-based server.
# Download the Certificate Files
This step will vary based on how you are obtaining the certificates. The primary thing to focus on is making sure you have the certificate file and the private key.
```jsx title="Certificate Folder Structure"
certificate.crt
certificate.pem
gd-g2_iis_intermediates.p7b
private.key
```
# Convert using OpenSSL
You will need a Linux machine such as Ubuntu 22.04 LTS, or the Windows equivalent of OpenSSL, in order to run the necessary commands to convert and package the files into a `.pfx` file that IIS Server Manager can use.
:::note
You need to make sure that all of the certificate files as well as private key are in the same folder (to keep things simple) during the conversion process. **It will prompt you to enter a password for the PFX file, choose anything you want.**
:::
```jsx title="OpenSSL Conversion Command"
openssl pkcs12 -export -out IIS-Certificate.pfx -inkey private.key -in certificate.crt -certfile gd-g2_iis_intermediates.p7b
```
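:::note
The `-certfile` flag expects a PEM-formatted list of certificates. If OpenSSL rejects the GoDaddy `.p7b` bundle (which is PKCS#7), convert it to PEM first and point `-certfile` at the converted file instead; the `intermediates.pem` name below is just an example:
```jsx title="Optional PKCS#7 to PEM Conversion"
# Add "-inform DER" if the bundle is binary (DER) rather than Base64
openssl pkcs7 -print_certs -in gd-g2_iis_intermediates.p7b -out intermediates.pem
```
:::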
:::tip
You can rename the files anything you want for organizational purposes. After all, they are just plaintext files. For example, you could rename `gd-g2_iis_intermediates.p7b` to `intermediate.bundle` and it would still work without issue in the command. During the import phase in IIS Server Manager, you can check a box to enable exporting the certificate, effectively reverse-engineering it back into a certificate and private key.
:::

View File

@ -0,0 +1,27 @@
**Purpose**:
To deploy a shortcut to the desktop pointing to a network share's root path (e.g. `\\storage.bunny-lab.io`). Windows has a quirk with how it handles network shares and shortcuts, and it does not like shortcuts that point directly at a root UNC path, so the shortcut instead launches `explorer.exe` with the UNC path as an argument.
### Group Policy Location
``` mermaid
graph LR
A[Create Group Policy] --> B[User Configuration]
B --> C[Preferences]
C --> D[Windows Settings]
D --> E[Shortcuts]
```
### Group Policy Settings
- **Action**: `Update`
- **Name**: `<FriendlyName>`
- **Target Type**: `File System Object`
- **Location**: `Desktop`
- **Target Path**: `C:\windows\explorer.exe`
- **Arguments**: `\\storage.bunny-lab.io`
- **Start In**: `<Blank>`
- **Shortcut Key**: `<None>`
- **Run**: `Normal Window`
- **Icon File Path**: `%SystemRoot%\System32\SHELL32.dll`
- **Icon Index**: `9`
### Additional Notes
Navigate to the "**Common**" tab in the properties of the shortcut, and check the "**Run in logged-on user's security context (user policy option)**".
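To sanity-check these values before pushing the GPO, you can create the same shortcut locally with PowerShell. A sketch mirroring the settings above; the `.lnk` file name is arbitrary:
``` powershell
# Build the shortcut with the same target, arguments, and icon as the GPO preference
$shell = New-Object -ComObject WScript.Shell
$shortcut = $shell.CreateShortcut("$env:USERPROFILE\Desktop\Storage.lnk")
$shortcut.TargetPath = "C:\Windows\explorer.exe"
$shortcut.Arguments = "\\storage.bunny-lab.io"
$shortcut.IconLocation = "$env:SystemRoot\System32\SHELL32.dll,9"
$shortcut.Save()
```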

View File

@ -0,0 +1,135 @@
**Purpose**: Deploying a Windows Server Node into the Hyper-V Failover Cluster is an essential part of rebuilding and expanding the backbone of my homelab. The documentation below goes over the process of setting up a bare-metal host from scratch and integrating it into the Hyper-V Failover Cluster.
!!! note "Prerequisites"
This document assumes you have a bare-metal Hewlett-Packard Enterprise server with iLO (Integrated Lights-Out) running the latest build of **Windows Server 2022 Datacenter (Desktop Experience)**. Windows will prompt you that your build is expired if it is too old.
Download the newest build ISO of Windows Server 2022 at the [Microsoft Evaluation Center](https://go.microsoft.com/fwlink/p/?linkid=2195686&clcid=0x409&culture=en-us&country=us)
!!! info "Assumption that Cluster Already Exists"
This document also assumes that you are adding an additional server node to an existing Hyper-V Failover Cluster. This document does not outline the exact process of setting up a Hyper-V Failover Cluster from-scratch, setting up a domain, DNS server, etc. Those are assumed to already exist in the environment.
## Preparation
### Enable Remote Desktop
The first thing you will want to do is get remote access via Remote Desktop. This will enable higher resolution, faster response times with the GUI, and the ability to transfer files to and from the server more easily.
- Connect to the server via the iLO Remote Console
- Login using your `Administrator` credentials you created during the operating system installation
- Open **Server Manager**
* Navigate to "Local Server"
* Under "Remote Desktop"
* Click on "Disabled"
* Select: "Allow Remote Connections to this Computer"
!!! warning "Disable NLA (Network Level Authentication)"
Ensure that "Allow Connections only from computers running Remote Desktop with Network Level Authentication" is un-checked. This is important because, in a Hyper-V Failover Cluster, if the domain controller(s) are not running you may be effectively locked out of Remote Desktop on the failover cluster's nodes, forcing you to use iLO or a physical console on the server to log in and bootstrap the cluster's Guest VMs back online.
This step can be disregarded if the domain controller(s) exist outside of the Hyper-V Failover Cluster.
- Locate the (*current*) DHCP-issued IP address of the server for Remote Desktop
* You will want to use Remote Desktop for the next stage of deployment to transfer an ISO file to the server
* Log into the server with Remote Desktop using the `Administrator` credentials you created when initially installing the operating system
* You can use `ipconfig /all` to locate the current DHCP-issued IP address
### Provision Server Role & Domain Join
You will want to rename the computer so it has the correct naming scheme before installing any server roles or domain joining it. The general naming convention is `MOON-NODE-<0#>`. Use a domain administrator credential for the join command when prompted. Restart the computer to finalize the changes.
**Increment the hostname number based on the existing servers in the cluster / homelab.**
``` powershell
# Rename the server
Rename-Computer MOON-NODE-01
# Domain-join the server
Add-Computer MOONGATE.local
# Install Hyper-V server role
Install-WindowsFeature -Name Hyper-V -IncludeManagementTools
# Install the Failover Clustering feature
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
# Restart the server to apply all pending configurations
Restart-Computer
```
## Failover Cluster Configuration
### Configure Cluster SET Networking
You will need to start off by configuring a Switch Embedded Teaming (SET) team. This is the backbone that the server will use for all Guest VM traffic as well as remote-desktop access to the server node itself. You will need to rename the network adapters to make management easier.
- Navigate to "Network Connections" then "Change Adapter Options"
* Rename the network adapters with simpler names. e.g. (`Embedded LOM 1 Port 1` becomes `Port_1`)
* For the sake of demonstration, assume there are 4 NICs (`Port_1`, `Port_2`, `Port_3`, and `Port_4`)
!!! warning "10GbE Network Adapters"
Be sure to leave the dual 10GbE network adapters out of the renaming work. They will be used later with the iSCSI Initiator.
``` powershell
# Switch Embedded Teaming (SET) team
New-VMSwitch -Name Cluster_SET -NetAdapterName Port_1, Port_2, Port_3, Port_4 -EnableEmbeddedTeaming $true
```
### Configure Static IP Address
You may be booted out of the Remote Desktop session at this time due to how the network team changed the configuration. Leverage iLO to remotely access the server again to configure a static IP address on the new `vEthernet (Cluster_SET)` NIC using the following configuration. **While in the NIC Properties, disable IPv6.**
| IP ADDRESS | SUBNET MASK | GATEWAY | PRIMARY DNS | SECONDARY DNS |
| ----------- | ------------- | ----------- | ------------ | ------------- |
| 192.168.3.5 | 255.255.255.0 | 192.168.3.1 | 192.168.3.10 | 192.168.3.11 |
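If you prefer to apply the same settings from the iLO console session with PowerShell instead of the NIC properties GUI, a sketch using the values from the table above:
``` powershell
# Assign the static IP, gateway, and DNS servers to the SET team's virtual NIC
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster_SET)" -IPAddress 192.168.3.5 -PrefixLength 24 -DefaultGateway 192.168.3.1
Set-DnsClientServerAddress -InterfaceAlias "vEthernet (Cluster_SET)" -ServerAddresses 192.168.3.10,192.168.3.11

# Disable IPv6 on the same adapter
Disable-NetAdapterBinding -Name "vEthernet (Cluster_SET)" -ComponentID ms_tcpip6
```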
### Configure Static IP Addresses for 10GbE Networking
You will now want to set up the network adapters for the 10GbE iSCSI back-end. Configure both of the `Intel(R) Ethernet Controller x540-AT2` 10GbE NICs and change their IP addresses to match the table below. Rename the NICs to match a `NIC1` and `NIC2` naming scheme. Also disable IPv6.
!!! warning
Make sure that you test that each interface can ping their respective iSCSI target by performing a ping using the IP address in the "ISCSI PING IP" column of the table. If it fails to successfully ping, swap the IP addresses of the 10GbE NICs until it succeeds.
| IP Address | Subnet Mask | Gateway | Primary DNS | Secondary DNS | iSCSI Ping IP |
| --------------- | ------------- | ------------- | ------------ | ------------- | --------------- |
| 192.168.102.200 | 255.255.255.0 | <Leave Blank> | 192.168.3.10 | 192.168.3.11 | 192.168.102.100 |
| 192.168.104.200 | 255.255.255.0 | <Leave Blank> | 192.168.3.10 | 192.168.3.11 | 192.168.104.100 |
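One way to run the ping test described in the warning above from PowerShell; the `NIC1`/`NIC2` aliases assume you renamed the adapters as described:
``` powershell
# Confirm which IP address ended up on which 10GbE adapter
Get-NetIPAddress -InterfaceAlias NIC1, NIC2 -AddressFamily IPv4 | Select-Object InterfaceAlias, IPAddress

# Ping each iSCSI portal; each subnet should route out of its matching NIC
ping 192.168.102.100
ping 192.168.104.100
```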
### Configure iSCSI Initiator to Connect to TrueNAS Core Server
At this point, now that we have verified that the 10GbE NICs can ping their respective iSCSI target server IP addresses, we can add them to the iSCSI Initiator in Server Manager which will allow us to mount the cluster storage for the Hyper-V Failover Cluster.
- Open **Server Manager**
* Click on the "Tools" dropdown menu
* Click on "iSCSI Initiator"
* You will be prompted to start the Microsoft iSCSI service. Click on "Yes" to proceed.
* Click on the "Discovery" tab
* Click the "Discover Portal" button
* Enter the IP addresses of the "iSCSI Ping IP(s)" from the previous section. Leave the port as "3260".
* Navigate to the [TrueNAS Core server](https://storage.cyberstrawberry.net) and add the "Initiator Name" seen on the "Configuration" tab to the `Sharing > iSCSI > Initiator Groups` > "Hyper-V Failover Cluster Hosts"
* Example Initiator Name: `iqn.1991-05.com.microsoft:moon-node-01.moongate.local`
* This is not explicitly documented at this time and is different from lab to lab in regards to the iSCSI implementation
* Click the "Targets" tab to go back to the main page
* Click the "Refresh" button to display available iSCSI Targets
* Click on the first iSCSI Target `failover-cluster-storage` then click the "Connect" button
* Check the "Enable Multi-Path" checkbox
* Click the "Advanced" button
* Click the "OK" button
* Repeat the connection process seen above for all remaining iSCSI Targets
* Close out of the iSCSI Initiator window
* Navigate to "Disk Management" to bring the iSCSI drives "Online"
## Initialize and Join to Existing Failover-Cluster
### Validate Server is Ready to Join Cluster
Now it is time to set up the Failover Cluster itself so we can join the server to the existing cluster.
- Open **Server Manager**
* Click on the "Tools" dropdown menu
* Click on "Failover Cluster Manager"
* Click the "Validate Configuration" button in the middle of the window that appears
* Click "Next"
* Enter Server Name: `MOON-NODE-01.moongate.local`
* Click the "Add" button, then "Next"
* Ensure "Run All Tests (Recommended)" is selected, then click "Next", then click "Next" to start.
### Join Server to Failover Cluster
* On the left-hand side, right-click on the "Failover Cluster Manager" in the tree
* Click on "Connect to Cluster"
* Enter `MOON-CLUSTER.moongate.local`
* Click "OK"
* Expand "MOON-CLUSTER.moongate.local" on the left-hand tree
* Right-click on "Nodes"
* Click "Add Node..."
* Click "Next"
* Enter Server Name: `MOON-NODE-01.moongate.local`
* Click the "Add" button, then "Next"
* Ensure that "Run Configuration Validation Tests" radio box is checked, then click "Next"
* Validate that the node was successfully added to the Hyper-V Failover Cluster
## Cleanup & Final Touches
### Activate Windows Server
You will need to change the edition from "**Windows Server 2022 Datacenter Evaluation**" to "**Windows Server 2022 Datacenter**". This will ensure that the server does not randomly reboot itself. If you have a license, you can install it now. Otherwise, you can force-activate using the [Changing Windows Edition](https://docs.cyberstrawberry.net/mkdocs-material/homelab/Windows%20Server/Change%20Windows%20Edition/) documentation.
### Run Windows Updates
Ensure that you run all available Windows Updates before delegating guest VM roles to the new server in the failover cluster. This ensures you are up-to-date before you become reliant on the server for production operations.

View File

@ -0,0 +1,80 @@
**Purpose**: If you run an environment with multiple Hyper-V Failover Clusters that replicate to one another via a `Hyper-V Replica Broker` role installed on a host within the Failover Cluster, a GuestVM will sometimes fail to replicate itself to the replica cluster, and in those cases it may not be able to recover on its own. This guide outlines the process of rebuilding replication for GuestVMs on a one-by-one basis.
!!! note "Assumptions"
This guide assumes you have two Hyper-V Failover Clusters; for the sake of the guide, we will refer to the production cluster as `CLUSTER-01` and the replication cluster as `CLUSTER-02`. This guide also assumes that replication was set up beforehand, and does not include instructions on how to deploy a Replica Broker (at this time).
## Production Cluster - CLUSTER-01
### Locate the GuestVM
You need to start by locating the GuestVM in the Production cluster, CLUSTER-01. You will know you found the VM if the "Replication Health" is either `Unhealthy`, `Warning`, or `Critical`.
### Remove Replication from GuestVM
- Within a node of the Hyper-V: Failover Cluster Manager
- Right-Click the GuestVM
- Navigate to "**Replication > Remove Replication**"
- Confirm the removal by clicking the "**Yes**" button. You will know if it removed replication when the "Replication State" of the GuestVM is `Not enabled`
## Replication Cluster - CLUSTER-02
### Note the storage GUID of the GuestVM in the replication cluster
- Within a node of the replication cluster's Hyper-V: Failover Cluster Manager
- Right-Click the same GuestVM and click "Manage..." `This will open Hyper-V Manager`
- Right-Click the GuestVM and click "Settings..."
- Navigate to "**SCSI Controller**"
- Click on one of the Virtual Disks attached to the replica VM, and note the full folder path for later. e.g. `C:\ClusterStorage\Volume1\HYPER-V REPLICA\VIRTUAL HARD DISKS\020C9A30-EB02-41F3-8D8B-3561C4521182`
!!! warning "Noting the GUID of the GuestVM"
You need to note the folder location so you have the GUID. Without the GUID, cleaning up the old storage associated with the GuestVM replica files will be much more difficult / time-consuming. Note it down somewhere safe, and reference it later in this guide.
### Delete the GuestVM from the Replication Cluster
Now that you have noted the GUID of the storage folder of the GuestVM, we can safely move onto removing the GuestVM from the replication cluster.
- Within a node of the replication cluster's Hyper-V: Failover Cluster Manager
- Right-Click the GuestVM
- Navigate to "**Replication > Remove Replication**"
- Confirm the removal by clicking the "**Yes**" button. You will know if it removed replication when the "Replication State" of the GuestVM is `Not enabled`
- Right-Click the GuestVM (again) `You will see that "Enable Replication" is an option now, indicating it was successfully removed.`
!!! note "Replica Checkpoint Merges"
When you removed replication, there may have been replication checkpoints that automatically try to merge together with a `Merge in Progress` status. Just let it finish before moving forward.
- Within the same node of the replication cluster's Hyper-V: Failover Cluster Manager `Switch back from Hyper-V Manager`
- Right-Click the GuestVM and click "**Remove**"
- Confirm the action by clicking the "**Yes**" button
### Delete the GuestVM manually from Hyper-V Manager on all replication cluster hosts
At this point in time, we need to remove the GuestVM from all of the servers in the cluster; removing it from the Hyper-V Failover Cluster did not remove it from the cluster's nodes. We can automate part of this work by opening Hyper-V Manager on the same failover node we have been working on, then connecting the rest of the replication nodes to it so we have one place from which to manage all of the nodes, avoiding hopping between servers.
- Open Hyper-V Manager
- Right-Click "Hyper-V Manager" on the left-hand navigation menu
- Click "Connect to Server..."
- Type the names of every node in the replication cluster to connect to each of them, repeating the two steps above for every node
- Remove GuestVM from the node it appears on
- On one of the replication cluster nodes, we will see the GuestVM listed, we are going to Right-Click the GuestVM and select "**Delete**"
### Delete the GuestVM's replicated VHDX storage from replication ClusterStorage
Now we need to clean up the storage left behind by the replication cluster.
- Within a node of the replication cluster
- Navigate to `C:\ClusterStorage\Volume1\HYPER-V REPLICA\VIRTUAL HARD DISKS`
- Delete the entire GUID folder noted in the previous steps. `e.g. 020C9A30-EB02-41F3-8D8B-3561C4521182`
## Production Cluster - CLUSTER-01
### Re-Enable Replication on GuestVM in Cluster-01 (Production Cluster)
At this point, we have disabled replication for the GuestVM and cleaned up traces of it in the replication cluster. Now we need to re-enable replication on the GuestVM back in the production cluster.
- Within a node of the production Hyper-V: Failover Cluster Manager
- Right-Click the GuestVM
- Navigate to "**Replication > Enable Replication...**"
- Click "Next"
- For the "**Replica Server**", enter the name of the role of the Hyper-V Replica Broker role in the (replication cluster's) Failover Cluster. `e.g. CLUSTER-02-REPL`, then click "Next"
- Click the "Select Certificate" button, since the Broker was configured with Certificate-based authentication instead of Kerberos (in this example environment). It will prompt you to accept the certificate by clicking "OK". (e.g. `HV Replica Root CA`), then click "Next"
- Make sure every drive you want replicated is checked, then click "Next"
- Replication Frequency: `5 Minutes`, then click "Next"
- Additional Recovery Points: `Maintain only the latest recovery point`, then click "Next"
- Initial Replication Method: `Send initial copy over the network`
- Schedule Initial Replication: `Start replication immediately`
- Click "Next"
- Click "Finish"
!!! success "Replication Enabled"
If everything was successful, you will see a dialog box named "Enable replication for `<GuestVM>`" with a message similar to the following: "Replica virtual machine `<GuestVM>` was successfully created on the specified Replica server `<Node-in-Replication-Cluster>`."
At this point, you can click "Close" to finish the process. Under the GuestVM details, you will see "Replication State": `Initial Replication in Progress`.
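While the initial replication runs, you can also watch its progress from PowerShell on any production cluster node using the built-in Hyper-V cmdlets (replace the VM name with your own):
``` powershell
# Show the replication state and health for the GuestVM
Get-VMReplication -VMName "<GuestVM>"

# Show detailed replication statistics for the GuestVM
Measure-VMReplication -VMName "<GuestVM>"
```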