Documentation Restructure
All checks were successful
Automatic Documentation Deployment / Sync Docs to https://kb.bunny-lab.io (push) Successful in 5s

This commit is contained in:
2026-02-27 04:02:06 -07:00
parent 52e6f83418
commit 554c04aa32
201 changed files with 378 additions and 47 deletions

42
workflows/index.md Normal file
View File

@@ -0,0 +1,42 @@
---
tags:
- Workflows
- Index
- Documentation
---
# Workflows
## Purpose
Runbooks for maintenance, troubleshooting, backups, and day-2 operations.
## Includes
- Backup and DR workflows
- Routine maintenance tasks
- Troubleshooting runbooks
## New Document Template
````markdown
# <Document Title>
## Purpose
<what this runbook exists to solve>
!!! warning "Risk"
- <irreversible actions or data impact>
## Procedure
```sh
# Commands or steps (grouped and annotated)
```
## Validation
- <command + expected result>
## Troubleshooting
### Symptoms
- <what you see>
### Resolution
```sh
# Fix steps
```
````

View File

@@ -0,0 +1,210 @@
---
tags:
- Ansible
- AWX
- Kerberos
- Automation
---
## Kerberos Implementation
You may find that you need to run playbooks on domain-joined Windows devices using Kerberos. You need to go through some extra steps to set this up after you have successfully deployed AWX Operator into Kubernetes.
### Configure Windows Devices
You will need to prepare the Windows devices so that they can be remotely managed by Ansible playbooks. Run the following PowerShell script on all of the devices that will be managed by the Ansible AWX environment.
- [WinRM Prerequisite Setup Script](../enable-winrm-on-windows-devices.md)
### Create an AWX Instance Group
At this point, we need to make an "Instance Group" for the AWX Execution Environments that will use both a keytab file and custom DNS records defined by the ConfigMap files created below. Reference information was found [here](https://github.com/kurokobo/awx-on-k3s/blob/main/tips/use-kerberos.md#create-container-group). This group allows for persistence across playbooks/templates, so that if you establish Kerberos authentication in one playbook, it will persist through the entire job's workflow.
Create the following files in the `/awx` folder on the AWX Operator server you deployed earlier (when setting up the Kubernetes cluster and deploying AWX Operator into it) so we can later mount them into the new Execution Environment we will be building.
=== "Custom DNS Records"
```yaml title="/awx/custom_dns_records.yml"
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-dns
  namespace: awx
data:
  custom-hosts: |
    192.168.3.25 LAB-DC-01.bunny-lab.io LAB-DC-01
    192.168.3.26 LAB-DC-02.bunny-lab.io LAB-DC-02
    192.168.3.4 VIRT-NODE-01.bunny-lab.io VIRT-NODE-01
    192.168.3.5 BUNNY-NODE-02.bunny-lab.io BUNNY-NODE-02
```
=== "Kerberos Keytab File"
```ini title="/awx/krb5.conf"
[libdefaults]
default_realm = BUNNY-LAB.IO
dns_lookup_realm = false
dns_lookup_kdc = false
[realms]
BUNNY-LAB.IO = {
kdc = 192.168.3.25
kdc = 192.168.3.26
admin_server = 192.168.3.25
}
[domain_realm]
192.168.3.25 = BUNNY-LAB.IO
192.168.3.26 = BUNNY-LAB.IO
.bunny-lab.io = BUNNY-LAB.IO
bunny-lab.io = BUNNY-LAB.IO
```
Then we apply these configmaps to the AWX namespace with the following commands:
``` sh
cd /awx
kubectl -n awx create configmap awx-kerberos-config --from-file=/awx/krb5.conf
kubectl apply -f custom_dns_records.yml
```
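Before moving on, it can help to verify that both ConfigMaps actually exist in the `awx` namespace:
```sh
# Confirm both ConfigMaps were created in the awx namespace
kubectl -n awx get configmap custom-dns awx-kerberos-config

# Optionally inspect the rendered host entries
kubectl -n awx describe configmap custom-dns
```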
- Open AWX UI and click on "**Instance Groups**" under the "**Administration**" section, then press "**Add > Add container group**".
- Enter a descriptive name as you like (e.g. `Kerberos`) and click the toggle "**Customize Pod Specification**".
- Put the following YAML string in "**Custom pod spec**" then press the "**Save**" button
```yaml title="Custom Pod Spec"
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  initContainers:
    - name: init-hosts
      image: busybox
      command:
        - sh
        - '-c'
        - cat /etc/custom-dns/custom-hosts >> /etc/hosts
      volumeMounts:
        - name: custom-dns
          mountPath: /etc/custom-dns
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: awx-kerberos-volume
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
  volumes:
    - name: awx-kerberos-volume
      configMap:
        name: awx-kerberos-config
    - name: custom-dns
      configMap:
        name: custom-dns
```
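Once a job template runs against this container group, AWX spins up an ephemeral automation pod from the spec above. As a quick verification sketch (the pod name below is a placeholder; grab the real one from the first command while a job is running), you can confirm the Kerberos configuration was mounted:
```sh
# Watch for the ephemeral automation job pods while a job is running
kubectl -n awx get pods --watch

# Replace <automation-job-pod> with the actual pod name from the previous command
kubectl -n awx exec <automation-job-pod> -- cat /etc/krb5.conf
```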
### Job Template & Inventory Examples
At this point, you need to adjust your existing Job Template(s) that communicate via Kerberos with domain-joined Windows devices to use the "**Kerberos**" Instance Group, while keeping the same Execution Environment you have been using up until this point. This causes the Execution Environment to include the Kerberos keytab file in the EE at playbook runtime. When the playbook has completed running (or, if you are chain-loading multiple playbooks in a workflow job template, when the workflow finishes), the ephemeral EE and its keytab cease to exist. The Kerberos keytab data will be regenerated at the next runtime.
Also add the following variables to the job template you have associated with the playbook below:
``` yaml
---
kerberos_user: nicole.rappe@BUNNY-LAB.IO
kerberos_password: <DomainPassword>
```
You will want to ensure your inventory file is configured to use Kerberos Authentication as well, so the following example is a starting point:
```ini
virt-node-01 ansible_host=virt-node-01.bunny-lab.io
bunny-node-02 ansible_host=bunny-node-02.bunny-lab.io
[virtualizationHosts]
virt-node-01
bunny-node-02
[virtualizationHosts:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_scheme=https
ansible_winrm_server_cert_validation=ignore
#kerberos_user=nicole.rappe@BUNNY-LAB.IO #Optional, if you define this in the Job Template, it is not necessary here.
#kerberos_password=<DomainPassword> #Optional, if you define this in the Job Template, it is not necessary here.
```
!!! failure "Usage of Fully-Quality Domain Names"
It is **critical** that you define Kerberos-authenticated devices with fully qualified domain names. This is just something I found out from 4+ hours of troubleshooting. If the device is Linux or you are using NTLM authentication instead of Kerberos authentication, you can skip this warning. If you do not define the inventory using FQDNs, it will fail to run the commands against the targeted device(s).
In this example, the host is defined via FQDN: `virt-node-01 ansible_host=virt-node-01.bunny-lab.io`
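A quick way to sanity-check that a given FQDN resolves to the address you expect (run from the AWX host or any machine with the same DNS view; `nslookup` may require the `bind-utils` package) is:
```sh
# Both of these should return the address you placed in DNS or the custom-dns ConfigMap
getent hosts virt-node-01.bunny-lab.io
nslookup virt-node-01.bunny-lab.io
```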
### Kerberos Connection Playbook
At this point, you need a playbook that you can run in a Workflow Job Template (to keep things modular and simplified) to establish a connection to an Active Directory Domain Controller via Kerberos before running additional playbooks/templates against the actual devices.
You can visualize the connection workflow below:
``` mermaid
graph LR
A[Update AWX Project] --> B[Update Project Inventory]
B --> C[Establish Kerberos Connection]
C --> D[Run Playbook against Windows Device]
```
The following playbook is an example pulled from https://git.bunny-lab.io.
!!! note "Playbook Redundancies"
There are several areas where I could optimize this playbook and remove redundancies. I just have not had enough time to iterate through it deeply enough to narrow down exactly what can be removed, so for now it will remain as-is, since it functions as expected with the example below.
```yaml title="Establish_Kerberos_Connection.yml"
---
- name: Generate Kerberos Ticket to Communicate with Domain-Joined Windows Devices
  hosts: localhost
  vars:
    kerberos_password: "{{ lookup('env', 'KERBEROS_PASSWORD') }}" # Alternatively, you can set this as an environment variable
    # BE SURE TO PASS "kerberos_user: nicole.rappe@BUNNY-LAB.IO" and "kerberos_password: <domain_admin_password>" to the template variables when running this playbook in a template.
  tasks:
    - name: Generate the keytab file
      ansible.builtin.shell: |
        ktutil <<EOF
        addent -password -p {{ kerberos_user }} -k 1 -e aes256-cts
        {{ kerberos_password }}
        wkt /tmp/krb5.keytab
        quit
        EOF
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: generate_keytab_result

    - name: Ensure keytab file was generated successfully
      fail:
        msg: "Failed to generate keytab file"
      when: generate_keytab_result.rc != 0

    - name: Keytab successfully generated
      ansible.builtin.debug:
        msg: "Keytab successfully generated at /tmp/krb5.keytab"
      when: generate_keytab_result.rc == 0

    - name: Acquire Kerberos ticket using keytab
      ansible.builtin.shell: |
        kinit -kt /tmp/krb5.keytab {{ kerberos_user }}
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: kinit_result

    - name: Ensure Kerberos ticket was acquired successfully
      fail:
        msg: "Failed to acquire Kerberos ticket"
      when: kinit_result.rc != 0

    - name: Kerberos ticket successfully acquired
      ansible.builtin.debug:
        msg: "Kerberos ticket successfully acquired for user {{ kerberos_user }}"
      when: kinit_result.rc == 0
```

View File

@@ -0,0 +1,75 @@
---
tags:
- Ansible
- AWX
- Gitea
- Automation
---
**Purpose**: Once AWX is deployed, you will want to connect it to Gitea at https://git.bunny-lab.io. The reason for this is so we can pull our playbooks, inventories, and templates into AWX automatically, making it more stateless overall and more resilient to potential failures of either AWX or the underlying Kubernetes cluster hosting it.
## Obtain Gitea Token
You already have this documented in Vaultwarden's password notes for awx.bunny-lab.io, but in case it gets lost, go to the [Gitea Token Page](https://git.bunny-lab.io/user/settings/applications) to set up an application token with read-only access for AWX, with a descriptive name.
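If you want to confirm the token works before wiring it into AWX, a quick check against Gitea's API (the token value below is a placeholder) looks like this:
```sh
# Should return your Gitea user profile as JSON if the token is valid
curl -s -H "Authorization: token <YOUR_GITEA_TOKEN>" https://git.bunny-lab.io/api/v1/user
```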
## Create Gitea Credentials
Before you move on and create the project, you need to associate the Gitea token with an AWX "Credential". Navigate to **Resources > Credentials > Add**
| **Field** | **Value** |
| :--- | :--- |
| Credential Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Organization | `Default` *(Click the Magnifying Lens)* |
| Credential Type | `Source Control` |
| Username | `Gitea Username` *(e.g. `nicole`)* |
| Password | `<Gitea Token>` |
## Create an AWX Project
In order to link AWX to Gitea, you have to connect the two of them together with an AWX "Project". Navigate to **Resources > Projects > Add**
**Project Variables**:
| **Field** | **Value** |
| :--- | :--- |
| Project Name | `Bunny-Lab` |
| Description | `Homelab Environment` |
| Organization | `Default` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source Control Type | `Git` |
**Gitea-specific Variables**:
| **Field** | **Value** |
| :--- | :--- |
| Source Control URL | `https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git` |
| Source Control Branch/Tag/Commit | `main` |
| Source Control Credential | `git.bunny-lab.io` *(Click the Magnifying Lens)* |
## Add Playbooks
AWX automatically imports any playbooks it finds from the project, and makes them available for templates operating within the same project-space. (e.g. "Bunny-Lab"). This means no special configuration is needed for the playbooks.
## Create an Inventory
You will want to associate an inventory with the Gitea project now. Navigate to **Resources > Inventories > Add**
| **Field** | **Value** |
| :--- | :--- |
| Inventory Name | `Homelab` |
| Description | `Homelab Inventory` |
| Organization | `Default` |
### Add Gitea Inventory Source
Now you will want to connect this inventory to the inventory file(s) hosted in the aforementioned Gitea repository. Navigate to **Resources > Inventories > Homelab > Sources > Add**
| **Field** | **Value** |
| :--- | :--- |
| Source Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source | `Sourced from a Project` |
| Project | `Bunny-Lab` |
| Inventory File | `inventories/homelab.ini` |
!!! info "Overwriting Existing Inventory Data"
You want to make sure that the checkboxes for "**Overwrite**" and "**Overwrite Variables**" are checked. This ensures that if devices and/or group variables are removed from the inventory file in Gitea, they will also be removed from the inventory inside AWX.
## Webhooks
Optionally, set up webhooks in Gitea to trigger inventory updates in AWX upon changes in the repository. This section is not documented yet, but will eventually be documented.
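As a rough sketch of what the webhook would ultimately call: AWX exposes an API endpoint that kicks off an inventory source sync, so a Gitea webhook (or anything else) can POST to it. The inventory source ID and token below are placeholders and should be verified against your AWX instance:
```sh
# Trigger a sync of the Gitea-backed inventory source via the AWX API
# <AWX_TOKEN> and the inventory source ID (5) are placeholders for illustration
curl -s -X POST \
  -H "Authorization: Bearer <AWX_TOKEN>" \
  https://awx.bunny-lab.io/api/v2/inventory_sources/5/update/
```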

View File

@@ -0,0 +1,35 @@
---
tags:
- Ansible
- WinRM
- Automation
---
# WinRM (Kerberos)
**Name**: "Kerberos WinRM"
```jsx title="Input Configuration"
fields:
- id: username
type: string
label: Username
- id: password
type: string
label: Password
secret: true
- id: krb_realm
type: string
label: Kerberos Realm (Domain)
required:
- username
- password
- krb_realm
```
```jsx title="Injector Configuration"
extra_vars:
ansible_user: '{{ username }}'
ansible_password: '{{ password }}'
ansible_winrm_transport: kerberos
ansible_winrm_kerberos_realm: '{{ krb_realm }}'
```

View File

@@ -0,0 +1,40 @@
---
sidebar_position: 1
tags:
- Ansible
- Automation
---
# AWX Credential Types
When interacting with devices via Ansible Playbooks, you need to provide the playbook with credentials to connect to the device with. Examples are domain credentials for Windows devices, and local sudo user credentials for Linux.
## Windows-based Credentials
### NTLM
NTLM-based authentication is not the most secure method of remotely running playbooks on Windows devices, but the traffic is still encrypted using SSL certificates created by the device itself when it is provisioned correctly for WinRM functionality.
```jsx title="(NTLM) nicole.rappe@MOONGATE.LOCAL"
Credential Type: Machine
Username: nicole.rappe@MOONGATE.LOCAL
Password: <Encrypted>
Privilege Escalation Method: runas
Privilege Escalation Username: nicole.rappe@MOONGATE.LOCAL
```
### Kerberos
Kerberos-based authentication is generally considered the most secure method of authentication with Windows devices, but can be trickier to set up since it requires additional setup inside of AWX in the cluster for it to function properly. At this time, there is no working Kerberos documentation.
```jsx title="(Kerberos WinRM) nicole.rappe"
Credential Type: Kerberos WinRM
Username: nicole.rappe
Password: <Encrypted>
Kerberos Realm (Domain): MOONGATE.LOCAL
```
## Linux-based Credentials
```jsx title="(LINUX) nicole"
Credential Type: Machine
Username: nicole
Password: <Encrypted>
Privilege Escalation Method: sudo
Privilege Escalation Username: root
```
!!! note
`WinRM / Kerberos` based credentials do not currently work as-expected. At this time, use either `Linux` or `NTLM` based credentials.

View File

@@ -0,0 +1,79 @@
---
tags:
- Ansible
- WinRM
- Windows
- Automation
---
**Purpose**:
You will need to enable secure WinRM management of the Windows devices you are running playbooks against, which requires more preparation than Linux devices. The following PowerShell script needs to be run on every Windows device you intend to run Ansible playbooks on. This script can also be useful for simply enabling / resetting WinRM configurations for Hyper-V hosts in general; just omit the PowerShell remote-signing (execution policy) section if you don't plan on using it for Ansible.
``` powershell
# Script to configure WinRM over HTTPS on the Hyper-V host
# Ensure WinRM is enabled
Write-Host "Enabling WinRM..."
winrm quickconfig -force
# Generate a self-signed certificate (Optional: Use your certificate if you have one)
$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName "$((Get-WmiObject -Class Win32_ComputerSystem).Domain)"
$certThumbprint = $cert.Thumbprint
# Function to delete existing HTTPS listener
function Remove-HTTPSListener {
    Write-Host "Removing existing HTTPS listener if it exists..."
    $listeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
    foreach ($listener in $listeners) {
        if ($listener.Transport -eq "HTTPS") {
            Write-Host "Deleting listener with Address: $($listener.Address) and Transport: $($listener.Transport)"
            Remove-WSManInstance -ResourceURI winrm/config/listener -SelectorSet @{Address=$listener.Address; Transport=$listener.Transport}
        }
    }
    Start-Sleep -Seconds 5 # Wait for a few seconds to ensure deletion
}
# Remove existing HTTPS listener
Remove-HTTPSListener
# Confirm deletion
$existingListeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
if ($existingListeners | Where-Object { $_.Transport -eq "HTTPS" }) {
    Write-Host "Failed to delete the existing HTTPS listener. Exiting script."
    exit 1
}
# Create a new HTTPS listener
Write-Host "Creating a new HTTPS listener..."
$listenerCmd = "winrm create winrm/config/Listener?Address=*+Transport=HTTPS '@{Hostname=`"$(Get-WmiObject -Class Win32_ComputerSystem).DomainName`"; CertificateThumbprint=`"$certThumbprint`"}'"
Invoke-Expression $listenerCmd
# Set TrustedHosts to allow connections from any IP address (adjust as needed for security)
Write-Host "Setting TrustedHosts to allow any IP address..."
winrm set winrm/config/client '@{TrustedHosts="*"}'
# Enable the firewall rule for WinRM over HTTPS
Write-Host "Enabling firewall rule for WinRM over HTTPS..."
$existingFirewallRule = Get-NetFirewallRule -DisplayName "WinRM HTTPS" -ErrorAction SilentlyContinue
if (-not $existingFirewallRule) {
    New-NetFirewallRule -Name "WINRM-HTTPS-In-TCP-PUBLIC" -DisplayName "WinRM HTTPS" -Enabled True -Direction Inbound -Protocol TCP -LocalPort 5986 -RemoteAddress Any -Action Allow
}
# Ensure Kerberos authentication is enabled
Write-Host "Enabling Kerberos authentication for WinRM..."
winrm set winrm/config/service/auth '@{Kerberos="true"}'
# Configure the WinRM service to use HTTPS and Kerberos
Write-Host "Configuring WinRM service to use HTTPS and Kerberos..."
winrm set winrm/config/service '@{AllowUnencrypted="false"}'
# Configure the WinRM client to use Kerberos
Write-Host "Configuring WinRM client to use Kerberos..."
winrm set winrm/config/client/auth '@{Kerberos="true"}'
# Ensure the PowerShell execution policy is set to allow remotely running scripts
Write-Host "Setting PowerShell execution policy to RemoteSigned..."
Set-ExecutionPolicy RemoteSigned -Force
Write-Host "Configuration complete. The Hyper-V host is ready for remote management over HTTPS with Kerberos authentication."
```

View File

@@ -0,0 +1,41 @@
---
tags:
- Ansible
- Automation
---
# Host Inventories
When you are deploying playbooks, you target hosts that exist in "Inventories". These inventories consist of a list of hosts and their corresponding IP addresses, as well as any host-specific variables that may be necessary to declare to run the playbook. You can see an example inventory file below.
Keep in mind the "Group Variables" section varies based on your environment. NTLM is considered insecure, but may be necessary when you are interacting with Windows servers that are not domain-joined. Otherwise you want to use Kerberos authentication. This is outlined more in the [AWX Kerberos Implementation](../awx/awx-kerberos-implementation.md#job-template-inventory-examples) documentation.
!!! note "Inventory Data Relationships"
An inventory file consists of hosts, groups, and variables. A host belongs to a group, and a group can have variables configured for it. If you run a playbook / job template against a host, it will assign the variables associated to the group that host belongs to (if any) during runtime.
```ini title="https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini"
# Networking
pfsense-example ansible_host=192.168.3.1
# Servers
example01 ansible_host=192.168.3.2
example02 ansible_host=192.168.3.3
example03 ansible_host=example03.domain.com # FQDN is required for Ansible in Windows Domain-Joined Kerberos environments.
example04 ansible_host=example04.domain.com # FQDN is required for Ansible in Windows Domain-Joined Kerberos environments.
# Group Definitions
[linuxServers]
example01
example02
[domainControllers]
example03
example04
[domainControllers:vars]
ansible_connection=winrm
ansible_winrm_kerberos_delegation=false
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```

View File

@@ -0,0 +1,62 @@
---
tags:
- Ansible
- Automation
---
!!! warning "DOCUMENT UNDER CONSTRUCTION"
This document is a "scaffold" document. It is missing significant portions of several sections and should not be read with any scrutiny until it is more feature-complete down-the-road. Come back later and I should have added more to this document hopefully by then.
**Purpose**:
This is an indexed list of Ansible Playbooks / Workflows that I have developed to deploy and manage various aspects of my lab environment. The list is not dynamically updated, so it may sometimes be out-of-date.
## Linux Playbooks
### Deployments
Deployment playbooks are meant to be playbooks (or a series of playbooks forming a "Workflow Job Template") that deploy a server or piece of software.
- Authentik
- [1-Authentik-Bootstrapper.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/1-Authentik-Bootstrapper.yml)
- [2-Deploy-Cluster.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/2-Deploy-Cluster.yml)
- [3-Deploy-Authentik.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/3-Deploy-Authentik.yml)
- [Check_Cluster_Nodes.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/Check_Cluster_Nodes.yml)
- [Check_Cluster_Pods.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/Check_Cluster_Pods.yml)
- Immich
- [Full_Deployment.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Immich/Full_Deployment.yml)
- Keycloak
- [Deploy-Keycloak.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Keycloak/Deploy-Keycloak.yml)
- Portainer
- [Deploy-Portainer.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Portainer/Deploy-Portainer.yml)
- PrivacyIDEA
- [privacyIDEA.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/privacyIDEA.yml)
- Rancher RKE2 Kubernetes Cluster
- [PLACEHOLDER]()
- [PLACEHOLDER]()
- [PLACEHOLDER]()
- [PLACEHOLDER]()
- [PLACEHOLDER]()
### Kerberos
This playbook is designed to be chain-loaded before any playbooks that involve interacting with Active Directory domain-joined Windows devices. It establishes a connection with Active Directory using domain credentials, sets up a keytab file (among other things), and makes it so the execution environment that the subsequent jobs run in is able to run against Windows devices. This ensures the connection is encrypted the entire time the playbooks are running, instead of using lower-security authentication methods like NTLM, which do not work reliably in many circumstances anyway. You can find more information in the [Kerberos Authentication](../awx/awx-kerberos-implementation.md#kerberos-implementation) section of the AWX documentation. `It does require additional setup prior to running the playbook.`
- [Establish_Kerberos_Connection.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Establish_Kerberos_Connection.yml)
!!! warning "Ansible w/ Kerberos is **not** for beginners"
I advise against jumping into the deep end of setting up Kerberos authentication for your playbooks until you have made yourself more comfortable with how Kubernetes works; at the very least, read the linked documentation above very closely to ensure nothing goes wrong during the setup.
### Security
Security playbooks do things like secure devices with additional auditing functionality, login notifications, enforcing SSH certificate-based authentication, things of that sort.
- Install SSH Public Key Authentication
- [PLACEHOLDER]()
- SSH Login Notifications
- [PLACEHOLDER]()
## Windows Playbooks
### Deployments
Deployment playbooks are meant to be playbooks (or a series of playbooks forming a "Workflow Job Template") that deploy a server or piece of software.
- Hyper-V - Deploy GuestVM
- [PLACEHOLDER]()
- Query Active Directory Domain Computers
- [PLACEHOLDER]()
- Install BGInfo
- [PLACEHOLDER]()

View File

@@ -0,0 +1,22 @@
---
tags:
- Ansible
- Automation
---
# AWX Projects
When you want to run playbooks on host devices in your inventory files, you need to host the playbooks in a "Project". Projects can be as simple as a connection to Gitea/Github to store playbooks in a repository.
```jsx title="Ansible Playbooks (Gitea)"
Name: Bunny Lab
Source Control Type: Git
Source Control URL: https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git
Source Control Credential: Bunny Lab (Gitea)
```
```jsx title="Resources > Credentials > Bunny Lab (Gitea)"
Name: Bunny Lab (Gitea)
Credential Type: Source Control
Username: nicole.rappe
Password: <Encrypted> #If you use MFA on Gitea/Github, use an App Password instead for the project.
```

View File

@@ -0,0 +1,27 @@
---
tags:
- Ansible
- Automation
---
# Templates
Templates are basically pre-constructed groups of devices, playbooks, and credentials that perform a specific kind of task against a predefined group of hosts or device inventory.
```jsx title="Deploy Hyper-V VM"
Name: Deploy Hyper-V VM
Inventory: (NTLM) MOON-HOST-01
Playbook: playbooks/Windows/Hyper-V/Deploy-VM.yml
Credentials: (NTLM) nicole.rappe@MOONGATE.local
Execution Environment: AWX EE (latest)
Project: Ansible Playbooks (Gitea)
Variables:
---
random_number: "{{ lookup('password', '/dev/null chars=digits length=4') }}"
random_letters: "{{ lookup('password', '/dev/null chars=ascii_uppercase length=4') }}"
vm_name: "NEXUS-TEST-{{ random_number }}{{ random_letters }}"
vm_memory: "8589934592" #Measured in Bytes (e.g. 8GB)
vm_storage: "68719476736" #Measured in Bytes (e.g. 64GB)
iso_path: "C:\\ubuntu-22.04-live-server-amd64.iso"
vm_folder: "C:\\Virtual Machines\\{{ vm_name_fact }}"
```

View File

@@ -0,0 +1,27 @@
---
tags:
- Veeam
- Backup
- Disaster Recovery
---
**Purpose**: You may find that you need to adopt a device that was onboarded by a different Veeam Backup & Replication server. Maybe the old server died, or maybe you are restructuring your backup infrastructure, and want a new server taking over the backup responsibilities for the device.
If this happens, Veeam will complain that the device is managed by a different server. To circumvent this, perform the following changes in the Windows Registry based on the version of Veeam Backup & Replication you are currently using, then try to update the agent / back up the agent again, and it should be successful after the registry changes are made.
**Reference Material**:
https://forums.veeam.com/servers-workstations-f49/how-do-we-move-agent-to-associate-with-a-new-veeam-server-t79977.html
=== "VBR v11"
```jsx title="HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication"
AgentDiscoveryIgnoreOwnership
REG_DWORD (32-bit) Value: 1
```
=== "VBR v12"
```jsx title="HKEY_LOCAL_MACHINE\SOFTWARE\Veeam\Veeam Backup and Replication"
ProtectionGroupIgnoreOwnership
REG_DWORD (32-bit) Value: 1
```

View File

@@ -0,0 +1,45 @@
---
tags:
- Veeam
- Backup
- Disaster Recovery
---
**Purpose**:
The purpose of this document is to explain the core concepts / terminology of things seen in Veeam Backup & Replication from a relatively high-level. It's more of a quick-reference guide than a formal education.
## Backup Jobs
Backup jobs take many forms, but the most common are explained in more detail below. Note that this is not an exhaustive list of the different kinds of backup jobs, just the ones I am currently most familiar with.
- **Backup**: This is the simplest of the backup job options. A "Backup" backup job will take a backup of a workstation, server, File Server, specific local files and folders on a device, or a GuestVM running in a hypervisor such as Hyper-V, VMWare ESXi, or ProxmoxVE.
- **Backup Copy**:
- This is when you make a copy of backup data stored on the Veeam server, and send it somewhere else, such as an off-site "Service Provider" such as Veeam partners.
- You can also send backup copies to local drives, SMB network shares, NFS shares, File Servers, pretty much anywhere you can send normal backups, but with the key difference being the data is originating from the Veeam backup server itself instead of the original server/VM.
- **SureBackup**: This is where things get a little more complex. SureBackup is where you effectively "Verify" your backups by spinning them up inside of a lab environment. While they are spun up, they are checked to see if they fully boot, they can have antivirus scans, ransomware scans, custom scripts executed, and validate the integrity of the backups. The general core components are listed below:
- **Virtual Lab**: The virtual lab is a virtual machine environment that you set up for Veeam to leverage to spin up backups on a hypervisor that you configure, such as a remote Hyper-V server in the same building, or perhaps if you have Hyper-V locally installed on the same server as Veeam itself, you would configure the virtual lab's hypervisor to point to `127.0.0.1` or `localhost`.
- The virtual lab will have its own unique virtual networking for the VMs to communicate on, so they don't conflict with the production servers/VMs.
- **Application Groups**: Application groups are defined groups of devices that need to be running when the backups are being validated. For example, in my homelab, I have an application group named `Domain Controllers`, and I put `LAB-DC-01` and `LAB-DC-02` into that application group. I use this as the application group associated with the Virtual Lab because most of my services are authenticated with Active Directory, and if the DCs were missing during backup verification, a variety of issues would ensue. When the Backup Verification Lab (Virtual Lab) is launched on the targeted hypervisor, it spins up the application group devices from backups first, ensuring they are running and functional, before the virtual lab starts verifying backup objects designated in the "Linked Jobs", seen in the next section.
- **Linked Jobs**: These are the "Backup Jobs" you want to verify in the virtual lab mentioned above. If you have a large backup job with a bunch of machines you don't want verified, you can configure "Exclusions" in the SureBackup job settings to exclude those objects/devices from verification.
## Replication Jobs
As the name states, Veeam Backup & Replication can also handle replicating Servers/VMs from either their original locations or from a recent backup and push them into a hypervisor for rapid failover/failback functionality. Very useful for workloads that need to be spun up nearly immediately due to strict RTO requirements. There are some additional notes regarding replication seen below.
!!! warning "Orchestrate Replication & Failover via Veeam, not the Hypervisor"
You want to coordinate anything replication-wise directly in Veeam Backup & Replication, not directly on the hypervisor itself. While you can do the latter, it is not only slower, but it also does not give you the option to fail replicas back into production if you spin up a replica directly on its hypervisor.
- **Replication Restore Points**: Similar to backups, replicas can have multiple restore points associated with them, so you have more than one option when spinning up a replica in a hypervisor.
- **Planned Failover**: A planned failover is when you are scheduling the hypervisor to be offline and simply don't have enough resources to live-migrate it to another cluster host, or you might not even have a virtualization cluster to work with in the first place. In cases like this, a "Planned Failover" tells Veeam to make a fresh replica right now, then shuts down the production VM on its hypervisor, and spins up the replica on the replica server. (If you installed Hyper-V on the Veeam server, it would spin up the replica on the backup server itself).
- A "Planned Failover" allows you to perform a "**Failback to Production**" when the failover event has concluded. This means that while the production VM was offline and the replica took over the production load, any changes made such as new files added, applications installed, etc will be replicated back to the production VM when the replica is "Failed back to Production". **This is the ideal choice in most circumstances**.
- **Failover Now**: "Failover Now" means that the production hypervisor is likely completely dead and may need to be rebuilt, or you simply don't need to replicate changes back to the production hypervisor after the failover event has concluded, such as for a low-priority print server. Any changes made while the replica is operational will be completely lost when the production VM is turned back on or a restore is pushed back onto a new hypervisor.
## Backup Infrastructure
### Backup Repository
A backup repository is simply a destination to send the backups or backup copies. It can be anything from direct attached storage to a SMB file share on a NAS, or even off-site storage like Backblaze B2 or Amazon S3.
- If you use object storage like Backblaze B2 or Amazon S3, you can configure an "Immutability Period" for backups that are sent to these destinations, meaning if your backup server was hit by ransomware or a malicious actor, neither they nor you could delete the backups in the off-site storage such as Backblaze B2 until the immutability period had passed, such as 7 days, 30 days, or however long you configured.
- You can adjust the immutability period after-the-fact, but backups that have already been pushed to a backup repository will be immutable for the time period configured when they were originally uploaded, and attempts to delete them will tell you when you are allowed to delete them. You won't be able to delete them even from Amazon or Backblaze's own internal tools / websites during this immutability period.
### Backup Proxy
A backup "proxy" simply refers to a machine that is running the "**Veeam Backup Transport**" agent on it. The Veeam Backup & Replication server installs a proxy onto itself, but it also deploys proxies onto workstations, servers, and hypervisors. These proxies are how the "Veeam Backup & Replication Console" interacts with the devices and performs backups and restores.
### Service Provider
Service Providers are not the same as cloud storage providers such as Backblaze B2, Amazon S3, etc. Service Providers are Veeam "partners" who manage, maintain, and deploy Veeam backup appliances at client environments, as well as providing support to clients within the Veeam ecosystem. You can also use Service Providers as a cloud backup destination in Veeam Backup & Replication for off-site backups.
## Misc Terminology
- **Unstructured Data**: This refers to a device such as a Windows or Linux server that you can access via WinRM or SSH, and from which you want to back up specific files and folders without backing up the entire device / VM. This is useful in cases where you cannot install a Veeam Agent, the operating system is unsupported by Veeam, or the device is not operating under a hypervisor, such as a bare-metal server.
- When you add a device to Veeam's "Inventory" via the "Unstructured Data" section, if you want to perform backups on the device, you will have to make a special backup job under "**Backups > File Server**", because Veeam will treat the unstructured data as a file server.

View File

@@ -0,0 +1,33 @@
---
tags:
- Veeam
- Backup
- Disaster Recovery
---
**Purpose**:
There may come a time that you need to free up space in a Veeam Backup & Replication backup repository because you are running out of space. In these cases, you need to manually trim the older backups in a specific way to ensure this is non-destructive.
## Manual Removal of Backup Data
You need to perform these steps to carefully delete the oldest full backup chain w/ incrementals.
- Log into the Veeam Backup & Replication server and locate the local folder hosting the backup repository that is running out of space
- Locate the oldest "**Full**" backup and delete it along with all of the "**incremental**" backups after it, leading up to (but not including) the next full backup in the chain.
!!! warning "Incremental Backups Affecting the Chain"
Be mindful that if you delete the incremental backups but not the full backup associated with those incrementals, you will break the backup chain. In the event this happens, either un-delete the incrementals and try again, or go one level deeper and delete the second-oldest full backup and all of its incrementals to correct the chain's structure.
## Rescan Backup Repository
At this point, you can just re-scan the backup repository within Veeam so the Veeam database gets updated to notice the missing backup files that you just deleted. [Rescanning Backup Repositories](https://helpcenter.veeam.com/docs/backup/vsphere/rescanning_backup_repositories.html?ver=120)
- Launch Veeam Backup & Replication Console
- Navigate to "**Backup Infrastructure > Backup Repositories**"
- Locate the backup repository you deleted backup files from, then "**Right-click > Rescan**"
## Removing Restore Points from Database
At this point, you have deleted the backup files and re-scanned the backup repository(s) to ensure that Veeam updated its database to notice the now-missing backup files. Now you need to tell Veeam to "forget" about the older backups you deleted so they are no longer displayed within Veeam itself. [Removing Missing Restore Points](https://helpcenter.veeam.com/docs/backup/vsphere/remove_missing_point.html?ver=120)
- Navigate to "**Home > Backups > Disk**"
- Locate the backup job associated with the device's backup files you deleted
- Right-click the associated backup job > "**Properties...**"
- In the Backup Properties window, right-click the missing restore point(s) and click "**Forget**" > "**All Unavailable Backups**"

View File

@@ -0,0 +1,65 @@
---
tags:
- Proxmox
- Veeam
- Backup
- Disaster Recovery
---
**Purpose**:
When you migrate virtual machines from Hyper-V (and possibly other platforms) to ProxmoxVE, you may run into several issues, such as the disks being in `.raw` format instead of `.qcow2`, among other things. One thing in particular, which is the reason for this document, is that if you migrate Rocky Linux from Hyper-V into ProxmoxVE using Veeam Backup & Replication, it will break the storage system so badly that the operating system will not boot.
### Fixing Boot Issues
Some high-level things to do to fix this are listed below:
- Switch the VM processor type to `host`.
- The socket and core counts are reversed, so a single-socket CPU with 16 cores will appear as 16 sockets with one core each; flip these around to correct the issue.
- The storage controller needs to be set to `VirtIO SCSI`.
- The display driver needs to be set to `Default`. (A command-line sketch covering these changes follows this list.)
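If you prefer the ProxmoxVE shell over the web UI, the same changes can be made with `qm set`; this is a hedged sketch assuming a hypothetical VM ID of `100` and a single 16-core socket:
```sh
# Set the CPU type to host, correct the socket/core counts, and switch the SCSI controller and display
qm set 100 --cpu host --sockets 1 --cores 16 --scsihw virtio-scsi-pci --vga std
```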
#### Dracut Emergency Shell
If you start the VM and you reach a "dracut" prompt, then the bootloader got nuked and needs to be regenerated. Follow the steps below to work through this process:
- Boot from a Rocky Linux 9.5+ installation ISO in the broken Rocky Linux VM
- Select "**Troubleshooting -->**" in the boot menu
- Select "**Rescue a Rocky Linux System**"
- Press through the prompt with value `1` and `Continue` to select the automatic mounting of the detected operating system of the virtual machine
- Press **<ENTER>** to enter the shell, then run the following commands to fix the booting issues
```sh
chroot /mnt/sysroot
dracut --force --regenerate-all
grub2-mkconfig -o /boot/grub2/grub.cfg
exit
exit
```
!!! info "Boot Fix May Trigger Reboot Twice"
During the process, you may notice that the VM reboots itself a second time. This is normal and can be left alone. The VM will eventually reach the login screen. Once you get this far, you can log in and fix the networking issues in the VM to get it stabilized.
### Fixing Network Issues
The VM will lose the adapter name of `eth0` and put something else like `ens18` that needs to be reconfigured manually to get networking functional again:
- Type `ethtool ens18`, and if the link speed is `Unknown!`, then power off the VM and switch the network adapter from `VirtIO (paravirtualized)` to `Intel E1000`, then boot the VM back up.
- Run the following commands to assign the new `ens18` interface as a networking interface for the VM to use:
```sh
# Create the Interface (Replace the IP & DNS Variables)
nmcli connection add type ethernet ifname ens18 con-name ens18 ipv4.method manual ipv4.addresses 192.168.3.21/24 ipv4.gateway 192.168.3.1 ipv4.dns "1.1.1.1 1.0.0.1"
# Bring the Connection Online
nmcli connection up ens18
```
!!! success "VM Successfully Fixed"
At this point, the virtual machine should be booting, and have network access, bringing it back into production use.
### Convert VM Disk from `.RAW` to `.QCOW2`
Given that the migration process via Veeam Backup & Replication ignores the destination disk format (at the time of writing this), it is necessary to convert the format of the disk from `.raw` to `.qcow2` so that you can perform things like VM snapshots, which are essential during updates, development, and testing.
Open a shell on the ProxmoxVE server that is currently holding the VM you need to convert the disks for, then locate the disks (a hedged sketch for this follows the conversion commands below), and run the following commands to convert them.
```sh
# Convert a Single Disk
qemu-img convert -f raw -O qcow2 source.raw destination.qcow2
# Convert All Disks in a Given Directory
find . -type f -name "*.raw" -exec sh -c 'qemu-img convert -f raw -O qcow2 "$1" "${1%.raw}.qcow2"' _ {} \;
```
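Locating the `.raw` files is not covered in detail above, but as a hedged starting point (assuming a hypothetical VM ID of `100` and default directory-backed `local` storage; adjust the paths for your storage configuration):
```sh
# Show which disk files are attached to the VM
qm config 100 | grep -i raw

# Directory-backed storage usually keeps disk images under /var/lib/vz/images/<vmid>/
ls -lh /var/lib/vz/images/100/
```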

View File

@@ -0,0 +1,18 @@
---
tags:
- Veeam
- Backup
- Disaster Recovery
---
## Purpose
You may find that you need to migrate cloud backups that are being sent to a server running the Veeam Service Provider Console (VSPC), for example due to exhausted storage space on existing repositories. If so, follow the steps below.
### Migrate Backup Repository Data
- Log into VSPC website and disable all associated jobs that need to be migrated
- Log into the endpoint (or the Veeam backup proxy running at the client location, if there is one) and disable the associated job that we need to migrate the data for.
- Navigate to the directory structure where the backup is located on-disk, such as `E:\Backups\clientname` and move it to the destination backup repository, such as `F:\Backups\clientname`
- Log back into Veeam Backup & Replication Console and re-scan the new repositories
### Move Backup Job Location in VSPC Portal
At this point, we need to point the affected backup job(s) to the new location. This is a job-level change, not a company-level change.

View File

@@ -0,0 +1,24 @@
---
tags:
- Veeam
- Backup
- Disaster Recovery
---
**Purpose**:
This is meant as a high-level best-practice retention policy for most use-cases. This document is fairly bare-bones, but the general idea is that the following advanced GFS retention policy is typically configured on backup copy jobs, specifically ones that send backups off-site, though it can also be used for local backup repositories.
Navigate to Jobs > Backup (or Backup Copy) > (Find a Backup Job) > Right-Click > Edit > Storage (or Target) > "**Keep Certain Full Backups for Archival Purposes**: Checked" > Click on the "**Configure**" button.
Optional: Click the "**Save as Default**" button before clicking the "**OK**" button to make this default behavior for new backup jobs.
| **Description** | **Status** | **Value** |
| :--- | :--- | :--- |
| Keep Weekly Full Backups | Enabled | 4 |
| Keep Monthly Full Backups | Enabled | 3 |
| Keep Yearly Full Backups | Enabled | 1 (`3 - 7 for Medical HIPAA`) |
!!! note "7 Daily Backups Assumption"
This document assumes that you keep (at least) 7 daily backups in the normal backup schedule, meaning **7 daily, 4 weekly, 3 monthly, and 1 yearly** backups are maintained at all times.

View File

@@ -0,0 +1,36 @@
---
tags:
- Veeam
- Backup
- Disaster Recovery
---
### Symptoms
When you try to run a backup to a remote backup server, the backup job fails and gives the following error:
!!! failure "Error: No cloud gateways are available: failed to validate certificates of some gateways"
### Reason
This means that the SSL certificate installed on the Veeam Backup & Replication Server being used by the endpoint is expired. While you can update the `Web UI` and `Server` SSL certificates, the "**Gateway Connect**" certificate is different. When the certificate is expired, the backup agent stops trusting the backup server and fails all backup jobs to that server until the certificate is updated.
### Resolution Steps
You need to remotely log into the Veeam Backup & Replication server via RMM tools, RDP, etc. Once logged in, open the Veeam Backup & Replication console and log in.
- At the bottom-left sidebar of Veeam Backup & Replication window, you will see tabs such as `Home`, `Inventory`, `Backup Infrastructure`, `Storage Infrastructure`, `Tape Infrastructure`, and `Cloud Connect`. Proceed to click on the "**Cloud Connect**" tab in the sidebar.
- Click on the "**Manage Certificates**" button
- Click on the "**Select an existing certificate from the certificate store**" radio button
- You will be prompted that "*This certificate is used by one or more Cloud Gateways*", simply click on "**Yes**" to proceed.
- Look for the current and up-to-date / valid certificate from the certificate list then click "**Next**"
- e.g. `*.bunny-lab.io`
- You will be given a summary of the changes > Click the "**Finish**" button to finish updating the gateway's certificate.
### Re-Attempt Backup
At this point, the endpoint should immediately trust the new certificate from the remote backup server (assuming the server is `Managed` and not `Standalone`). The backups should be running successfully the next time you run them.
!!! info "Standalone Mode"
In the event that the device is in-fact standalone, you can run the following command on the device via commandline to tell it to immediately sync the configuration settings of the remote backup server with the local backup agent:
```batch
::Connect to the backup server and download current configuration settings.
"C:\Program Files\Veeam\Endpoint Backup\Veeam.Agent.Configurator.exe" -syncnow
```

View File

@@ -0,0 +1,21 @@
---
tags:
- iLO
- Hardware
- Licensing
---
!!! info "Assumptions of Usage"
It should go without saying that using one of these keys does not entitle you to support from Hewlett-Packard Enterprise. These are meant for homelab environments where licensing / auditing does not matter.
| **iLO Version** | **License Key** |
| :--- | :--- |
| iLO Standard Trial | `34T6L-4C9PX-X8D9C-GYD26-8SQWM` |
| iLO 1 Advanced | `247RH-ZPJ8S-7B17D-FCE55-DDD17` |
| iLO 2 / 3 / 4 Advanced | `35DPH-SVSXJ-HGBJN-C7N5R-2SS4W` |
| iLO 2 / 3 / 4 Advanced | `35SCR-RYLML-CBK7N-TD3B9-GGBW2` |
!!! warning "Do not Use in Production Work Environments"
In (rare) cases, these keys can be used as a temporary solution when working in a production work environment, then promptly removed after the work is performed. Leaving them installed on a server could lead to legal consequences: if Hewlett-Packard Enterprise inspected the server while providing support and found it using one of these keys, it could fail a software licensing audit.
`REMOVE THE KEY AFTER USAGE`

View File

@@ -0,0 +1,48 @@
---
tags:
- ZFS
- iSCSI
- Linux
- Filesystems
---
**Purpose**:
The purpose of this workflow is to illustrate the process of expanding storage for a Linux server that uses an iSCSI-based ZFS storage. We want the VM to have more storage space, so this document will go over the steps to expand that usable space.
!!! info "Assumptions"
It is assumed you are using an Ubuntu based operating system, as these commands may not be the same on other distributions of Linux.
This document also assumes you did not enable Logical Volume Management (LVM) when deploying your server. If you did, you will need to perform additional LVM-specific steps after increasing the space.
## Increase iSCSI Disk Size
This part should be fairly straight-forward. Using whatever hypervisor / storage appliance is hosting the iSCSI target, expand the disk space of the LUN to the desired size.
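How you do this depends entirely on the appliance. As one hedged example, if the iSCSI LUN is backed by a ZFS zvol (e.g. on TrueNAS or a plain ZFS box), the expansion might look like the following, where the pool/zvol path and new size are assumptions:
```sh
# Grow the zvol backing the iSCSI extent to 10TB (pool/dataset names are placeholders)
zfs set volsize=10T tank/iscsi/lun0
```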
## Extend ZFS Pool
This step goes over how to increase the usable space of the ZFS pool within the server itself after it was expanded.
``` sh
iscsiadm -m session --rescan # (1)
lsblk # (2)
parted /dev/sdX # (3)
unit TB # (4)
resizepart X XXTB # (5)
zpool list # (6)
zpool online -e <POOL-NAME> /dev/sdX # (7)
zpool scrub <POOL-NAME> # (8)
```
1. Re-scan iSCSI targets for changes.
2. Leverage `lsblk` to ensure that the storage size increase from the hypervisor / storage appliance reflects correctly.
3. Open partitioning utility on the ZFS volume / LUN / iSCSI disk. Replace `dev/sdX` with the actual device name.
4. Self-explanatory storage measurement.
5. Resizes whatever partition is given to fit the new storage capacity. Replace `X` with the partition number. Replace `XXTB` with a valid value, such as `10TB`.
6. This will allow you to list all ZFS pools that are available for the next command.
7. Brings the ZFS Pool back online. Replace `<POOL-NAME>` with the actual name of the ZFS pool.
8. This tells the system to scan the ZFS pool for any errors or corruption and correct them. Think of it as a form of housekeeping.
## Check on Scrubbing Progress
At this point, the ZFS pool has been expanded and a scrub task has been started. The scrubbing task can take several hours / days to run, so to keep track of it, you can run the following command to check the status of the ZFS pool / scrubbing task.
```sh
zpool status
```

View File

@@ -0,0 +1,150 @@
---
tags:
- Linux
- Filesystems
---
**Purpose**:
The purpose of this workflow is to illustrate the process of expanding storage for a RHEL-based Linux server acting as a GuestVM. We want the VM to have more storage space, so this document will go over the steps to expand that usable space.
!!! info "Assumptions"
It is assumed you are using a RHEL variant of Linux such as Rocky Linux. This should apply to any version of Linux, but was written in a Rocky Linux 9.4 lab environment.
This document also assumes you did not enable Logical Volume Management (LVM) when deploying your server. If you did, you will need to perform additional LVM-specific steps after increasing the space.
!!! abstract "Oracle Linux Disk / LVM Terminology Idiosyncrasy"
Oracle Linux refers to disks as `/dev/hda` and `/dev/hda2` and not something like `/dev/sda` / `/dev/sda2`. You will see certain parts of this document mention `/dev/hda`; in those cases, you may need to switch it to a standard `/dev/sda<#>` in order to make it work in your particular environment.
## Increase GuestVM Virtual Disk Size
This part should be fairly straight-forward. Using whatever hypervisor is running the Linux GuestVM, expand the virtual disk to the desired size.
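As one hedged example, if the GuestVM happens to be running on ProxmoxVE, the resize can be done from the host shell like this (the VM ID, disk name, and size increment are assumptions):
```sh
# Grow the first SCSI disk of VM 100 by 512 GiB
qm resize 100 scsi0 +512G
```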
## Extend Partition Table
This step goes over how to increase the usable space of the virtual disk within the GuestVM itself after it was expanded within the hypervisor.
!!! warning "Be Careful"
When you follow these steps, you will be deleting the existing partition and immediately re-creating it. If you do not use the **EXACT SAME** starting sector for the new partition, you will destroy data. Be sure to read every annotation next to each command to fully understand what you are doing.
=== "Using GDISK"
``` sh
sudo dnf install gdisk -y
gdisk /dev/<diskNumber> # (1)
p <ENTER> # (2)
d <ENTER> # (3)
4 <ENTER> # (4)
n <ENTER> # (5)
4 <ENTER> # (6)
<DEFAULT-FIRST-SECTOR-VALUE> (Just press ENTER) # (7)
<DEFAULT-LAST-SECTOR-VALUE> (Just press ENTER) # (8)
<FILESYSTEM-TYPE=8300 (Linux Filesystem)> (Just press ENTER) # (9)
w <ENTER> # (10)
```
??? info "Detailed Command Breakdown"
1. The first command needs you to enter the disk identifier. In most cases, this will likely be the first disk, such as `/dev/sda`. You do not need to indicate a partition number in this step, as you will be asked for one in a later step after identifying all of the partitions on this disk in the next command.
2. This will list all of the partitions on the disk.
3. This will ask you for a partition number to delete. Generally this is the last partition number listed. In the example below, you would type `4` then press ++enter++ to schedule the deletion of the partition.
4. See the previous annotation for details on what entering `4` does in this context.
5. This tells gdisk to create a new partition.
6. This tells gdisk to re-make partition 4 (the one we just deleted in the example).
7. We just want to leave this as the default. In my example, it would look like this:
`First sector (34-2147483614, default = 19826688) or {+-}size{KMGTP}: 19826688`
8. We just want to leave this as the default. In my example, it would look like this:
`Last sector (19826688-2147483614, default = 2147483614) or {+-}size{KMGTP}: 2147483614`
9. Just leave this as-is and press ++enter++ without entering any values. Assuming you are using XFS, as this guide was written for, the default "Linux Filesystem" is what you want for XFS.
10. This will write the changes to the partition table making them reality instead of just staging the changes.
!!! example "Example Output"
```
Command (? for help): p
Disk /dev/sda: 2147483648 sectors, 1024.0 GiB
Model: Virtual Disk
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 8A5C2469-B07B-42AC-8E57-E756E62D37D1
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 2147483614
Partitions will be aligned on 2048-sector boundaries
Total free space is 1073743838 sectors (512.0 GiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1230847 600.0 MiB EF00 EFI System Partition
2 1230848 3327999 1024.0 MiB 8300
3 3328000 19826687 7.9 GiB 8200
4 19826688 1073741790 502.5 GiB 8300 Linux filesystem
```
=== "Using FDISK"
``` sh
pvdisplay # (1)
fdisk /dev/hda # (2)
p <ENTER> # List Partitions
d <ENTER> # Delete a partition
2 <ENTER> # Delete Partition 2 (e.g. /dev/hda2)
n <ENTER> # Make a new Partition
p <ENTER> # Primary Partition Type
Starting Sector: <ENTER> # Use Default Value
Ending Sector: <ENTER> # Use Default Value
w <ENTER> # Commit all queued-up changes and write them to the disk
```
??? info "Detailed Command Breakdown"
1. Use pvdisplay to get the target disk identifier
2. Replace `/dev/hda` with the target disk identifier found in the previous step
**Point of No Return**:
When you press `w` and then ++enter++ in either `gdisk` or `fdisk`, the changes will be written to disk, meaning there is no turning back unless you have full GuestVM backups or a snapshot to roll back with, for example via Veeam Backup & Replication. Be certain the first and last sector values are correctly configured before proceeding. (The default values are generally good for this.)
## Detect the New Partition Sizes
At this point, the operating system won't detect the changes without a reboot, so we are going to force it to detect them immediately with the following commands to avoid a reboot (if we can avoid it).
``` sh
sudo partprobe /dev/<drive> # Drive Example: /dev/sda (Rocky) or /dev/hda (Oracle Linux)
sudo partx -u /dev/<diskNumber>
```
!!! bug "Partition Size Not Expanded? Reboot."
If you notice the partition still has not expanded to the desired size, you may have no choice but to reboot the server, then re-run the `gdisk` or `fdisk` commands a second time. In my lab environment, it didn't work until I rebooted. This might have been a hiccup on my end, but it's something to keep in mind if you run into the same issue of the size not changing.
``` sh
sudo reboot
```
## Resize the Filesystem
=== "XFS Filesystem"
``` sh
sudo xfs_growfs /
```
=== "Ext4 Filesystem"
``` sh
resize2fs /dev/sda4 # Replace sda4 with the partition that holds the Ext4 filesystem
```
=== "Ext4 Filesystem w/ LVM"
``` sh
# Increase the Physical Volume Group Size
pvdisplay # Check the Current Size of the Physical Volume
pvresize /dev/hda2 # Enlarge the Physical Volume to Fit the New Partition Size
pvdisplay # Validate the Size of the Physical Volume Increased to the New Size
# Increase the Logical Volume Group Size
lvextend -l +100%FREE /dev/VolGroup00/LogVol00 # Get this from running "lvdisplay" to find the correct Logical Volume Name
# Resize the Filesystem of the Disk to Fit the new Logical Volume
resize2fs /dev/VolGroup00/LogVol00
```
## Validate Storage Expansion
At this point, you can leverage `lsblk` or `df -h` to determine if the usable storage space was successfully increased or not. In this example, you can see that I increased my storage space from 512GB to 1TB.
!!! example "Example Command Output"
Command: `lsblk | grep "sda4"`
```
└─sda4 8:4 0 1014.5G 0 part /
```
Command: `df -h | grep "sda4"`
```
/dev/sda4 1015G 145G 871G 15% /
```

View File

@@ -0,0 +1,58 @@
---
tags:
- Fedora
- Linux
- Workstation
---
**Purpose**:
This document serves as a general guideline for my workstation deployment process when working with Fedora Workstation 41 and up. This document will constantly evolve over time based on my needs.
## Automate Initial Configurations
```sh
# Set Hostname
sudo hostnamectl set-hostname lab-desktop-01
# Setup Automatic Drive Mounting
echo "/dev/disk/by-uuid/B865-7BDB /mnt/500GB_WINDOWS_OS auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/C006EBA006EB95A6 /mnt/640GB_HDD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/24C82CFEC82CCFBA /mnt/1TB_SSD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/D64E9F534E9F2AEF /mnt/120GB_SSD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/16D05248D0522E6D /mnt/2TB_SSD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0" | sudo tee -a /etc/fstab
# Install Software
sudo dnf upgrade -y
sudo dnf install -y steam firefox
sudo dnf install -y @xfce-desktop-environment
# Reboot Workstation
sudo reboot
```
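Before rebooting, it may be worth confirming that the new `/etc/fstab` entries mount cleanly; a quick sanity check using the mount points from the snippet above:
```sh
sudo mkdir -p /mnt/500GB_WINDOWS_OS /mnt/640GB_HDD_STORAGE /mnt/1TB_SSD_STORAGE /mnt/120GB_SSD_STORAGE /mnt/2TB_SSD_STORAGE
sudo mount -a                    # Attempts every fstab entry; errors here usually mean a bad UUID or mount point
findmnt /mnt/2TB_SSD_STORAGE     # Spot-check that one of the mounts resolved
```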
!!! warning "Read-Only NTFS Disks (When Using Dual-Boot)"
If you want to dual boot, ensure that the Windows side does not have "Fast Boot" enabled. You can find the setting under "Change what the power button does" in the Windows power options; uncheck the "Fast Boot" checkbox, then shut down.
The problem with Fast Boot is that it effectively leaves the shared disks between Windows and Linux in a locked read-only state, which makes installing Steam games and software impossible.
## Manually Address Remaining Things
At this point, we need to do some manual work, since not everything can be handled by the terminal.
### Install Software (Software Manager)
Now we need to install a few things:
- NVIDIA Graphics Drivers Control Panel
- Discord Canary
- Betterbird
- Visual Studio Code
- Signal Desktop
- Solaar (Logitech Unifying Software equivalent on Linux)
### Import XFCE Panel Configuration
At this point, we want to restore our custom taskbar / panels in XFCE, so the easiest way to do that is to import the configuration backup located in Nextcloud.
Backups are located here: https://cloud.bunny-lab.io/f/792649
### Configure Window Snapping
By default, XFCE has a very small threshold for snapping windows to the edges of the screen, such as a half-and-half arrangement. This can be adjusted by navigating to "**Applications Menu > Settings > Settings Manager > Windows Manager Tweaks > Placement**"
Once you have reached this window, you will see a slider from "**Small**" to "**Large**". Slide the slider all the way to the right, facing "**Large**". Now windows will snap to the sides of the screen successfully.

View File

@@ -0,0 +1,76 @@
---
tags:
- Fedora
- Linux
- Desktop Environment
- Workstation
---
## Purpose
You may find that you need to install an XFCE desktop environment on Fedora Server, for example to support something like Rustdesk remote access. If so, you can follow the steps below.
### Install & Configure XFCE
We need to install XFCE and configure it to be the default environment when the server turns on.
```sh
sudo dnf install @xfce-desktop-environment -y
sudo systemctl set-default graphical.target
sudo reboot
```
### Install Rustdesk
We need to install Rustdesk into the server.
```sh
curl -L -o /tmp/rustdesk_installer.rpm https://github.com/rustdesk/rustdesk/releases/download/1.4.0/rustdesk-1.4.0-0.x86_64.rpm
cd /tmp
sudo dnf install -y ./rustdesk_installer.rpm
```
!!! info "Configure Rustdesk"
You need to use a tool like "MobaXterm" or "PuTTY" to leverage X11-Forwarding, which lets you run `rustdesk` as a GUI on your local workstation. From there, you need to configure the relay server information (if you are using a self-hosted relay). This is also where you would set up a permanent password for the server and document the device ID number.
Be sure to check the box for "**Enable remote configuration modification**" when setting up Rustdesk.
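If your workstation already runs an X server (e.g. a Linux desktop), plain `ssh -X` achieves the same thing as the tools above; the hostname and username below are placeholders:
```sh
ssh -X nicole@lab-server    # -X enables X11 forwarding for the session
rustdesk                    # Run inside the SSH session; the GUI renders on your local display
```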
### Configure Automatic Login
For Rustdesk specifically, we have to configure XFCE to automatically login via SDDM then immediately lock the computer once it's logged in, so the XFCE session is running, allowing Rustdesk to connect to it.
**Create SDDM Config File**:
```sh
sudo mkdir -p /etc/sddm.conf.d/
sudo nano /etc/sddm.conf.d/autologin.conf
```
```ini title="/etc/sddm.conf.d/autologin.conf"
[Autologin]
User=nicole
Session=xfce.desktop
```
!!! note "Determining Session Strings"
If you're unsure of the correct session string, check what's available by typing `ls /usr/share/xsessions/`. You will be looking for something like `xfce.desktop`
### Configure Lock on Initial Login
At this point, it's not the most secure thing to just leave a server logged-in upon boot, so the following steps will instantly lock the server after logging in, while allowing the XFCE session to persist so Rustdesk can attach to it for remote management of the server.
!!! warning "Not Functional Yet"
I have tried implementing the below, but it seems to just ignore it and stay logged-in without locking the device. This needs to be troubleshot further.
```sh
mkdir -p ~/.config/autostart
nano ~/.config/autostart/xfce-lock.desktop
```
```ini title="~/.config/autostart/xfce-lock.desktop"
[Desktop Entry]
Type=Application
Exec=xfce4-screensaver-command -l
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name=Auto Lock
Comment=Lock the screen on login
```
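One possible cause is that the autostart entry fires before the `xfce4-screensaver` daemon is running, so the lock command has nothing to talk to. A hedged variant worth trying adds a short delay (the 10-second value is arbitrary):
```ini
# Untested variant: delay the lock so the screensaver daemon has time to start
Exec=sh -c "sleep 10 && xfce4-screensaver-command -l"
```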
Lastly, test that everything is working by rebooting the server.
```sh
sudo reboot
```

View File

@@ -0,0 +1,21 @@
---
tags:
- Fedora
- Linux
- Flatpak
- Workstation
---
## Purpose
You may need to install flatpak packages like Signal in your workstation environment. If you need to do this, you only need to run a few commands.
```sh
# Usually already installed
sudo dnf install flatpak
# Add Flathub Repo
flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# Install Signal
flatpak install flathub org.signal.Signal
```
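Afterwards, you can confirm the install and launch the app from the terminal (it will also appear in the application menu):
```sh
flatpak list --app              # Shows installed Flatpak applications
flatpak run org.signal.Signal   # Launches Signal
```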

View File

@@ -0,0 +1,18 @@
---
tags:
- Fedora
- Linux
- Workstation
---
**Purpose**:
If you want to upgrade Fedora Workstation to a new version (e.g. 42 --> 43), you can run the following commands to do so; adjust `--releasever` to match your target release. The overall process is fairly straightforward and requires a reboot.
```sh
sudo dnf upgrade --refresh
sudo dnf system-upgrade download --releasever=43
sudo dnf system-upgrade reboot
```
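After the reboot, you can confirm the new release:
```sh
cat /etc/fedora-release
```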
**Additional Documentation**:
https://docs.fedoraproject.org/en-US/quick-docs/upgrading-fedora-new-release/

View File

@@ -0,0 +1,47 @@
---
tags:
- UPS
- APC
- Power
---
**Purpose**: When an APC battery backup's battery dies, you can manually replace the cells and 'refurbish' the battery. The following diagram shows how to rewire the cells.
!!! warning "Work in Progress"
This document is still being written
## Wiring Diagram
``` mermaid
graph TB
%% Define cells and connections
Cell1["Cell 1<br>Black (Negative) to Black (Negative)"] -.-> AndersonNeg["Anderson Connector Negative<br>(Black)"]
Cell1 -->|"Red (Positive) to Black (Negative)"| Cell2["Cell 2<br>Red (Positive) to Black (Negative)"]
Cell2 -->|"Red (Positive) to Black (Negative)"| Cell3["Cell 3<br>Red (Positive) to Black (Negative)"]
Cell3 -->|"Red (Positive) to Fuse"| Fuse["30A Fuse"]
Fuse -->|"Red (Positive) to Black (Negative)"| Cell4["Cell 4<br>Red (Positive) to Black (Negative)"]
Cell4 -->|"Red (Positive) to Anderson Connector Positive"| AndersonPos["Anderson Connector Positive<br>(Red)"]
%% Define styles
classDef battery fill:#f2f2f2,stroke:#000,stroke-width:2px;
class Cell1,Cell2,Cell3,Cell4 battery;
classDef fuse fill:#ffcc00,stroke:#000,stroke-width:2px;
class Fuse fuse;
classDef anderson fill:#00ccff,stroke:#000,stroke-width:2px;
class AndersonPos,AndersonNeg anderson;
classDef positive fill:#ff0000,stroke:#000,stroke-width:2px;
class AndersonPos positive;
classDef negative fill:#000000,stroke:#fff,stroke-width:2px;
class AndersonNeg negative;
%% Define line colors for clarity
linkStyle 0 stroke:#000,stroke-width:2px;
linkStyle 1 stroke:#ff0000,stroke-width:2px;
linkStyle 2 stroke:#ff0000,stroke-width:2px;
linkStyle 3 stroke:#ff0000,stroke-width:2px;
linkStyle 4 stroke:#ff0000,stroke-width:2px;
linkStyle 5 stroke:#ff0000,stroke-width:2px;
```

View File

@@ -0,0 +1,13 @@
---
tags:
- UPS
- Backup
- Power
---
| **Battery Backup** | **Status** | **Connected Device(s)** | **Estimated Runtime** | **Shutdown Threshold** | **UPS Web Management** |
| :--- | :--- | :--- | :--- | :--- | :---: |
| Outer-Left `#1` | ![](https://status.bunny-lab.io/api/v1/endpoints/battery-backups_outer-left-1-(virt-node-01--10-port-10gbe-network-switch--pfsense-firewall)/uptimes/7d/badge.svg) | - VIRT-NODE-01<br>- 10-Port 10GbE Network Switch<br>- pfSense Firewall | 10 Minutes | 3 Minutes Remaining | [:fontawesome-solid-car-battery: Manage](http://192.168.3.4:3052){ .md-button } |
| Inner-Left `#2` | ![](https://status.bunny-lab.io/api/v1/endpoints/battery-backups_inner-left-2-(bunny-node-02--24-port-1gbe-network-switch)/uptimes/7d/badge.svg) | - BUNNY-NODE-02<br>- 24-Port 1GbE Network Switch | 12 Minutes | 3 Minutes Remaining | [:fontawesome-solid-car-battery: Manage](http://192.168.3.5:3052){ .md-button } |
| Inner-Right `#3` | ![](https://status.bunny-lab.io/api/v1/endpoints/battery-backups_inner-right-3-(moon-storage-01--wireless-ap)/uptimes/7d/badge.svg) | - MOON-STORAGE-01<br>- Wireless AP | 16 Minutes | 3 Minutes Remaining | [:fontawesome-solid-car-battery: Manage](http://192.168.3.3:3052){ .md-button } |
| Outer-Right `#4` | ![](https://status.bunny-lab.io/api/v1/endpoints/battery-backups_outer-right-4-(lab-draas-01--lab-pool-01--8-port-1gbe-network-switch--internet-modem--poe-surveillance-cameras)/uptimes/7d/badge.svg) | - LAB-DRAAS-01<br>- LAB-POOL-01<br>- 8-Port 1GbE Network Switch<br>- Internet Modem<br>- PoE Surveillance Cameras | 13 Minutes | 3 Minutes Remaining | [:fontawesome-solid-car-battery: Manage](http://192.168.3.33:3052){ .md-button } |

View File

@@ -0,0 +1,71 @@
---
tags:
- Windows
---
# Changing Windows Editions
### Changing Editions:
Windows Server: `DISM /ONLINE /set-edition:serverstandard /productkey:AAAAA-BBBBB-CCCCC-DDDDD-EEEEE /AcceptEula`
Windows (Home/Pro): `DISM /ONLINE /set-edition:professional /productkey:AAAAA-BBBBB-CCCCC-DDDDD-EEEEE /AcceptEula`
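To confirm what the install is currently running and which editions it can be switched to, DISM can report both:
Check Current Edition: `DISM /ONLINE /Get-CurrentEdition`
List Target Editions: `DISM /ONLINE /Get-TargetEditions`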
### Force Activation / Edition Switcher:
`irm https://get.activated.win | iex`
## Generic Install Keys
### Windows 10
| Windows Edition | RTM Generic Key (Retail) | [**KMS Client Setup Key**](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj612867(v%3dws.11)) |
| :--- | :--- | :--- |
| Windows 10 Home | YTMG3-N6DKC-DKB77-7M9GH-8HVX7 | TX9XD-98N7V-6WMQ6-BX7FG-H8Q99 |
| Windows 10 Home N | 4CPRK-NM3K3-X6XXQ-RXX86-WXCHW | 3KHY7-WNT83-DGQKR-F7HPR-844BM |
| Windows 10 Home Single Language | BT79Q-G7N6G-PGBYW-4YWX6-6F4BT | 7HNRX-D7KGG-3K4RQ-4WPJ4-YTDFH |
| Windows 10 Pro | VK7JG-NPHTM-C97JM-9MPGT-3V66T | W269N-WFGWX-YVC9B-4J6C9-T83GX |
| Windows 10 Pro N | 2B87N-8KFHP-DKV6R-Y2C8J-PKCKT | MH37W-N47XK-V7XM9-C7227-GCQG9 |
| Windows 10 Pro for Workstations | DXG7C-N36C4-C4HTG-X4T3X-2YV77 | NRG8B-VKK3Q-CXVCJ-9G2XF-6Q84J |
| Windows 10 Pro N for Workstations | WYPNQ-8C467-V2W6J-TX4WX-WT2RQ | 9FNHH-K3HBT-3W4TD-6383H-6XYWF |
| Windows 10 S | 3NF4D-GF9GY-63VKH-QRC3V-7QW8P | |
| Windows 10 Education | YNMGQ-8RYV3-4PGQ3-C8XTP-7CFBY | NW6C2-QMPVW-D7KKK-3GKT6-VCFB2 |
| Windows 10 Education N | 84NGF-MHBT6-FXBX8-QWJK7-DRR8H | 2WH4N-8QGBV-H22JP-CT43Q-MDWWJ |
| Windows 10 Pro Education | 8PTT6-RNW4C-6V7J2-C2D3X-MHBPB | 6TP4R-GNPTD-KYYHQ-7B7DP-J447Y |
| Windows 10 Pro Education N | GJTYN-HDMQY-FRR76-HVGC7-QPF8P | YVWGF-BXNMC-HTQYQ-CPQ99-66QFC |
| Windows 10 Enterprise | XGVPP-NMH47-7TTHJ-W3FW7-8HV2C | NPPR9-FWDCX-D2C8J-H872K-2YT43 |
| Windows 10 Enterprise G | | YYVX9-NTFWV-6MDM3-9PT4T-4M68B |
| Windows 10 Enterprise G N | FW7NV-4T673-HF4VX-9X4MM-B4H4T | 44RPN-FTY23-9VTTB-MP9BX-T84FV |
| Windows 10 Enterprise N | WGGHN-J84D6-QYCPR-T7PJ7-X766F | DPH2V-TTNVB-4X9Q3-TJR4H-KHJW4 |
| Windows 10 Enterprise S | NK96Y-D9CD8-W44CQ-R8YTK-DYJWX | FWN7H-PF93Q-4GGP8-M8RF3-MDWWW |
| Windows 10 Enterprise 2015 LTSB | | WNMTR-4C88C-JK8YV-HQ7T2-76DF9 |
| Windows 10 Enterprise 2015 LTSB N | | 2F77B-TNFGY-69QQF-B8YKP-D69TJ |
| Windows 10 Enterprise LTSB 2016 | | DCPHK-NFMTC-H88MJ-PFHPY-QJ4BJ |
| Windows 10 Enterprise N LTSB 2016 | RW7WN-FMT44-KRGBK-G44WK-QV7YK | QFFDN-GRT3P-VKWWX-X7T3R-8B639 |
| Windows 10 Enterprise LTSC 2019 | | M7XTQ-FN8P6-TTKYV-9D4CC-J462D |
| Windows 10 Enterprise N LTSC 2019 | | 92NFX-8DJQP-P6BBQ-THF9C-7CG2H |
| Windows 10 Home | 37GNV-YCQVD-38XP9-T848R-FC2HD | |
| Windows 10 Home N | 33CY4-NPKCC-V98JP-42G8W-VH636 | |
| Windows 10 Pro | NF6HC-QH89W-F8WYV-WWXV4-WFG6P | |
| Windows 10 Pro N | NH7W7-BMC3R-4W9XT-94B6D-TCQG3 | |
| Windows 10 SL | NTRHT-XTHTG-GBWCG-4MTMP-HH64C | |
| Windows 10 CHN SL | 7B6NC-V3438-TRQG7-8TCCX-H6DDY | |
| Windows 10 Home | 46J3N-RY6B3-BJFDY-VBFT9-V22HG | |
| Windows 10 Home N | PGGM7-N77TC-KVR98-D82KJ-DGPHV | |
| Windows 10 Pro | RHGJR-N7FVY-Q3B8F-KBQ6V-46YP4 | |
| Windows 10 Pro N | 2KMWQ-NRH27-DV92J-J9GGT-TJF9R | |
| Windows 10 SL | GH37Y-TNG7X-PP2TK-CMRMT-D3WV4 | |
| Windows 10 CHN SL | 68WP7-N2JMW-B676K-WR24Q-9D7YC | |
### Windows Server
| Windows Edition | RTM Generic Key (Retail) | [**KMS Client Setup Key**](https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012/jj612867(v%3dws.11)) |
| :--- | :--- | :--- |
| Windows Server 2016 Datacenter | | CB7KF-BWN84-R7R2Y-793K2-8XDDG |
| Windows Server 2016 Standard | | WC2BQ-8NRM3-FDDYY-2BFGV-KHKQY |
| Windows Server 2016 Essentials | | JCKRF-N37P4-C2D82-9YXRT-4M63B |
| Windows Server 2019 Datacenter | | WMDGN-G9PQG-XVVXX-R3X43-63DFG |
| Windows Server 2019 Standard | | N69G4-B89J2-4G8F4-WWYCC-J464C |
| Windows Server 2019 Essentials | | WVDHN-86M7X-466P6-VHXV7-YY726 |
| Windows Server 2022 Standard | | VDYBN-27WPP-V4HQT-9VMD4-VMK7H |
| Windows Server 2022 Datacenter Azure | | NTBV8-9K7Q8-V27C6-M2BTV-KHMXV |
| Windows Server 2022 Datacenter | | WX4NM-KYWYW-QJJR4-XV3QB-6VM33 |
## Additional Reference Documentation:
https://www.tenforums.com/tutorials/95922-generic-product-keys-install-windows-10-editions.html
[https://learn.microsoft.com/en-us/windows-server/get-started/kms-client-activation-keys](https://learn.microsoft.com/en-us/windows-server/get-started/kms-client-activation-keys)

View File

@@ -0,0 +1,50 @@
---
tags:
- Windows
---
**Purpose**:
Sometimes you are running a virtual machine and are running out of space, and want to expand the operating system disk. However, there is a recovery partition to-the-right of the operating system partition. When this happens, you have to delete that partition in order to expand the storage space for the operating system.
These commands can be run in a headless environment using just powershell.
!!! warning "Use Correct Drive & Partition Numbers"
In my example codeblock, I assume the OS drive is `0` and the recovery partition is `4`. Please validate your own drive and partition numbers with the supplied `list disk` and `list partition` commands. Failure to identify the correct drive and/or partition could result in the unintended destruction of data.
**From within the VM** > Open a powershell window and run the following commands:
```powershell
diskpart # (1)
list disk # (2)
select disk 0 # (3)
list partition # (4)
select partition 4 # (5)
delete partition override # (6)
select partition 3 # (7)
extend # (8)
exit # (9)
```
1. This opens the disk management CLI tool.
2. This displays all disks attached to the device.
3. Ensure this disk number corresponds to the operating system disk. Open the Disk Management GUI if you are not 100% certain.
4. List all partitions on the previously-selected disk.
5. This partition number is for the partition of type "**Recovery**". If you see a different partition with a type of "**Recovery**" use that partition number instead.
6. This instructs the computer to delete the partition and ignore the fact that it was a recovery partition.
7. You want to select the operating system partition now, so we can expand it. This partition will generally be of a type "**Primary**" and be the largest size partition on the disk.
8. This will expand the operating system partition into the unallocated space that is now available to it.
9. Gracefully close the disk management CLI utility.
## Free Space Validation
From this point, you might want to verify the free space has been accounted for, so you can run the following command to check for free space:
```powershell
Get-Volume | Select-Object DriveLetter, FileSystem, @{Name="FreeSpace(GB)"; Expression={"{0:N2}" -f ($_.SizeRemaining / 1GB)}}, @{Name="TotalSize(GB)"; Expression={"{0:N2}" -f ($_.Size / 1GB)}}
```
!!! example "Output Example"
```
DriveLetter FileSystem FreeSpace(GB) TotalSize(GB)
----------- ---------- ------------- -------------
C NTFS 398.40 476.20
FAT32 0.06 0.09
NTFS 0.11 0.63
```

View File

@@ -0,0 +1,39 @@
---
tags:
- Windows
- VSS
- Backup
---
## Purpose
There are times when you may need to delete shadow copies (Volume Shadow Copies) from a drive, commonly to free up disk space. While this is usually straightforward, you may encounter scenarios where shadow copies cannot be deleted through normal means. The following methods provide ways to forcibly remove all shadow copies from a specific volume.
!!! warning
The examples below will **permanently delete all shadow copies** on the specified drive. The examples use drive `D:` > Adjust the drive letter as needed.
## Method 1: Delete Shadow Copies Using `vssadmin`
The `vssadmin` utility is the standard tool for managing shadow copies. It is typically safe and handles deletions gracefully.
However, some antivirus or endpoint protection software may block its execution due to its similarity to behavior used by ransomware. If `vssadmin` fails, use the `diskshadow` method described below.
```cmd
vssadmin delete shadows /for=D: /all /quiet
```
* `/for=D:` specifies the target volume.
* `/all` removes all shadow copies on that volume.
* `/quiet` suppresses confirmation prompts.
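To review what exists on the volume before (or after) deleting, you can list the shadow copies first:
```cmd
vssadmin list shadows /for=D:
```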
## Method 2: Delete Shadow Copies Using `diskshadow`
`diskshadow` is a more direct and lower-level tool than `vssadmin`. It should be used as a fallback option if `vssadmin` fails or is blocked.
```cmd
diskshadow
set context persistent nowriters
delete shadows volume D:
exit
```
Explanation:
* `set context persistent nowriters` ensures the command does not involve writer components (e.g., for backups), reducing the chance of interference.
* `delete shadows volume D:` removes all persistent shadow copies for volume `D:`.

View File

@@ -0,0 +1,21 @@
---
tags:
- Windows 11
- Windows
- User Accounts
---
**Purpose:** You may find that Windows 11 does not allow you to install it with a local account. This is a documented case of Microsoft attempting to push Microsoft accounts, and can be bypassed by following the workflow below:
## Initial Boot to Windows 11 Installer
- Begin installing the OS as normal, selecting the region/language
- Before you accept the EULA, press `SHIFT+F10` to open the Administrative Command Prompt
- ~~Type `OOBE\BYPASSNRO` > This will reboot the computer back into the installer~~
- The `OOBE\BYPASSNRO` method no longer works, instead type the command `start ms-cxh:localonly`, you will then be prompted to enter a local user account and the OOBE will finalize.
## Second Boot to Windows 11 Installer (Legacy `BYPASSNRO` Method)
- When prompted for an internet connection, select `I don't have Internet`
- Set up the local administrator account as normal and finish the OS installation process
!!! warning "Disconnect Internet"
To ensure a clean installation devoid of additional issues, make sure to disconnect the physical/virtual network from the device before proceeding to install Windows 11 as normal. This time, you will not be prompted to login with a Microsoft account.

View File

@@ -0,0 +1,29 @@
---
tags:
- Windows Server
- Windows
- SSL
---
**Purpose**: Sometimes you may find that you need to convert a `.crt` or `.pem` certificate file into a `.pfx` file that Microsoft IIS Server Manager can import for something like Exchange Server or another custom IIS-based server.
## Download the Certificate Files
This step will vary based on how you are obtaining the certificates. The primary thing to focus on is making sure you have the certificate file and the private key.
```jsx title="Certificate Folder Structure"
certificate.crt
certificate.pem
gd-g2_iis_intermediates.p7b
private.key
```
## Convert using OpenSSL
You will need a Linux machine such as Ubuntu 22.04 LTS, or the Windows build of OpenSSL, in order to run the necessary commands to convert and package the files into a `.pfx` file that IIS Server Manager can use.
!!! note
You need to make sure that all of the certificate files as well as private key are in the same folder (to keep things simple) during the conversion process. **It will prompt you to enter a password for the PFX file, choose anything you want.**
```jsx title="OpenSSL Conversion Command"
# Convert the PKCS#7 intermediate bundle into a PEM chain first ("intermediate.pem" is just an illustrative name; add -inform DER if the .p7b is binary)
openssl pkcs7 -print_certs -in gd-g2_iis_intermediates.p7b -out intermediate.pem
# Package the certificate, private key, and intermediate chain into the PFX (you will be prompted for an export password)
openssl pkcs12 -export -out IIS-Certificate.pfx -inkey private.key -in certificate.crt -certfile intermediate.pem
```
!!! tip
You can rename the files anything you want for organizational purposes. Afterall, they are just plaintext files. For example, you could rename `gd-g2_iis_intermediates.p7b` to `intermediate.bundle` and it would still work without issue in the command. During the import phase in IIS Server Manager, you can check a box to enable Exporting the certificate, effectively reverse-engineering it back into a certificate and private key.

View File

@@ -0,0 +1,275 @@
---
tags:
- Kubernetes
- Docker
- Containerization
---
# Migrating `docker-compose.yml` to Rancher RKE2 Cluster
You may be comfortable operating with Portainer or `docker-compose`, but there comes a point where you might want to migrate those existing workloads to a Kubernetes cluster as easily as possible. Luckily, there is a way to do this using a tool called "**Kompose**". Follow the instructions below to convert and deploy your existing `docker-compose.yml` into a Kubernetes cluster such as Rancher RKE2.
!!! info "RKE2 Cluster Deployment"
This document assumes that you have an existing Rancher RKE2 cluster deployed. If not, you can deploy one following the [Deploy RKE2 Cluster](../../../../deployments/platforms/containerization/kubernetes/deployment/rancher-rke2.md) documentation.
We also assume that the cluster name within Rancher RKE2 is named `local`, which is the default cluster name when setting up a Kubernetes Cluster in the way seen in the above documentation.
## Installing Kompose
The first step involves downloading Kompose from https://kompose.io/installation. Once you have it downloaded and installed onto your environment of choice, save a copy of your `docker-compose.yml` file somewhere on-disk, then open up a terminal and run the following command:
```sh
kompose --file docker-compose.yml convert --stdout > ntfy-k8s.yaml
```
This will convert the `docker-compose.yml` file into a Kubernetes manifest YAML file. A before-and-after example can be seen below:
=== "(Original) docker-compose.yml"
``` yaml
version: "2.1"
services:
ntfy:
image: binwiederhier/ntfy
container_name: ntfy
command:
- serve
environment:
- NTFY_ATTACHMENT_CACHE_DIR=/var/lib/ntfy/attachments
- NTFY_BASE_URL=https://ntfy.bunny-lab.io
- TZ=America/Denver # optional: Change to your desired timezone
#user: UID:GID # optional: Set custom user/group or uid/gid
volumes:
- /srv/containers/ntfy/cache:/var/cache/ntfy
- /srv/containers/ntfy/etc:/etc/ntfy
ports:
- 80:80
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.45
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
=== "(Converted) ntfy-k8s.yaml"
``` yaml
---
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
kompose.version: 1.37.0 (fb0539e64)
labels:
io.kompose.service: ntfy
name: ntfy
spec:
ports:
- name: "80"
port: 80
targetPort: 80
selector:
io.kompose.service: ntfy
---
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
kompose.version: 1.37.0 (fb0539e64)
labels:
io.kompose.service: ntfy
name: ntfy
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: ntfy
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
kompose.version: 1.37.0 (fb0539e64)
labels:
io.kompose.service: ntfy
spec:
containers:
- args:
- serve
env:
- name: NTFY_ATTACHMENT_CACHE_DIR
value: /var/lib/ntfy/attachments
- name: NTFY_BASE_URL
value: https://ntfy.bunny-lab.io
- name: TZ
value: America/Denver
image: binwiederhier/ntfy
name: ntfy
ports:
- containerPort: 80
protocol: TCP
volumeMounts:
- mountPath: /var/cache/ntfy
name: ntfy-claim0
- mountPath: /etc/ntfy
name: ntfy-claim1
restartPolicy: Always
volumes:
- name: ntfy-claim0
persistentVolumeClaim:
claimName: ntfy-claim0
- name: ntfy-claim1
persistentVolumeClaim:
claimName: ntfy-claim1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
io.kompose.service: ntfy-claim0
name: ntfy-claim0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
io.kompose.service: ntfy-claim1
name: ntfy-claim1
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
```
## Deploy Workload into Rancher RKE2 Cluster
At this point, you need to import the yaml file you created into the Kubernetes cluster. This will occur in four sequential stages:
- Setting up a "**Project**" to logically organize your containers
- Setting up a "**Namespace**" for your container to isolate it from other containers in your Kubernetes cluster
- Importing the YAML file into the aforementioned namespace
- Configuring Ingress to allow external access to the container / service stack.
### Create a Project
The purpose of the project is to logically organize your services together. This can be something like `Home Automation`, `Log Analysis Systems`, `Network Tools`, etc. You can do this by logging into your Rancher RKE2 cluster (e.g. https://rke2-cluster.bunny-lab.io). This Project name is unique to Rancher and purely used for organizational purposes and does not affect the namespaces / containers in any way.
- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces > "Create Project"**
- **Name**: <Friendly Name> (e.g. `Home Automation`)
- **Description**: <Useful Description for the Group of Services> (e.g. `Various services that automate things within Bunny Lab`)
- Click the "**Create**" button
### Create a Namespace within the Project
At this point, we need to create a namespace. This basically isolates the networking, credentials, secrets, and storage between the services/stacks. This ensures that if someone exploits one of your services, they will not be able to laterally move into another service within the same Kubernetes cluster.
- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces > <ProjectName> > "Create Namespace"**
- The namespace should be named based on its operational context, such as `prod-ntfy` or `dev-ntfy`.
### Import Converted YAML Manifest into Namespace
At this point, we can now proceed to import the YAML file we generated in the beginning of this document.
- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces**
- At the top-right of the screen will be an upload / up-arrow button with tooltip text stating "Import YAML" > Click on this button
- Click the "**Read from File**" button
- Navigate to your `ntfy-k8s.yaml` file. (Name will differ from your actual converted file) > then click the "**Open**" button.
- On the top-right of the dialog box will be a "**Default Namespace**" dropdown menu, select the `prod-ntfy` namespace we created earlier.
- Click the blue "**Import**" button at the bottom of the dialog box.
!!! warning "Be Patient"
This part of the process can take a while depending on the container stack and complexity of the service. It has to download container images and deploy them into newly spun-up pods within Kubernetes. Just be patient and click on the `prod-ntfy` namespace, then look at the "**Workloads**" tab to see if the "ntfy" service exists and is Active, then you can move onto the next step.
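If you prefer to watch the rollout from a terminal instead (assuming your kubeconfig points at the RKE2 cluster and you used the `prod-ntfy` namespace from above):
```sh
kubectl -n prod-ntfy get pods,svc,pvc                 # Pods should reach Running, PVCs should be Bound
kubectl -n prod-ntfy rollout status deployment/ntfy   # Waits until the deployment finishes rolling out
```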
### Configuring Ingress
This final step within Kubernetes itself involves reconfiguring the service to listen via a "NodePort" instead of "ClusterIP". Don't worry, you do not have to change the ports that the container uses; this is entirely within Kubernetes itself and does not alter the original `docker-compose.yml` ports of the container(s) you imported.
- Navigate to: **Clusters > `local` > Service Discovery > Services > ntfy**
- On the top-right, click on the blue "**Show Configuration**" button
- On the bottom-right, click the blue "**Edit Config**" button
- On the bottom-right, click the "**Edit as YAML**" button
- Within the yaml editor, you will see a section named `spec:`, within that section is a subsection named `type:`. You will see a value of `type: ClusterIP` > You want to change that to `type: NodePort`
- On the bottom-right, click the blue "**Save**" button and wait for the process to finish.
- On the new page that appears, click on the `ntfy` service again
- Click on the "**Ports**" tab
- You will see a column of the table labeled "Node Port" with a number in the 30,000s such as `30996`. This will be important for later.
!!! success "Verifying Access Before Configuring Reverse Proxy"
At this point, you will want to verify that you can access the service via the cluster node IP addresses such as the examples seen below, all of the cluster nodes should route the traffic to the container's service and will be used for load-balancing later in the reverse proxy configuration file.
- http://192.168.3.69:30996
- http://192.168.3.70:30996
- http://192.168.3.71:30996
- http://192.168.3.72:30996
## Configuring Reverse Proxy
If you were able to successfully verify access to the service by talking to it directly via one of the cluster node IP addresses and its assigned NodePort number, then you can proceed to creating a reverse proxy configuration file for the service. This will be very similar to the original `docker-compose.yml` version of the reverse proxy configuration file, but with additional IP addresses to load-balance across the Kubernetes cluster nodes.
!!! info "Section Considerations"
This section of the document does not (*currently*) cover the process of setting up health checks to ensure that the load-balanced server destinations in the reverse proxy are online before redirecting traffic to them. This is on my to-do list of things to implement to further harden the deployment process.
This section also does not cover the process of setting up a reverse proxy. If you want to follow along with this document, you can deploy a Traefik reverse proxy via the [Traefik](../../../../deployments/services/edge/traefik.md) deployment documentation.
With the above considerations in mind, we just need to make some small changes to the existing Traefik configuration file to ensure that it load-balances across every node of the cluster so that high-availability functions as expected.
=== "(Original) ntfy.bunny-lab.io.yml"
``` yaml
http:
routers:
ntfy:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: ntfy
rule: Host(`ntfy.bunny-lab.io`)
services:
ntfy:
loadBalancer:
passHostHeader: true
servers:
- url: http://192.168.5.45:80
```
=== "(Updated) ntfy.bunny-lab.io.yml"
``` yaml
http:
routers:
ntfy:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: ntfy
rule: Host(`ntfy.bunny-lab.io`)
services:
ntfy:
loadBalancer:
passHostHeader: true
servers:
- url: http://192.168.3.69:30996
- url: http://192.168.3.70:30996
- url: http://192.168.3.71:30996
- url: http://192.168.3.72:30996
```
!!! success "Verify Access via Reverse Proxy"
If everything worked, you should be able to access the service at https://ntfy.bunny-lab.io, and if one of the cluster nodes goes offline, Rancher will automatically migrate the load to another cluster node which will take over the web request.

View File

@@ -0,0 +1,85 @@
---
tags:
- Documentation
---
**Purpose**: If you run an environment with multiple Hyper-V: Failover Clusters that replicate to one another via a `Hyper-V Replica Broker` role installed on a host within the Failover Cluster, a GuestVM will sometimes fail to replicate itself to the replica cluster, and in those cases it may not be able to recover on its own. This guide outlines the process of rebuilding replication for GuestVMs on a one-by-one basis.
!!! note "Assumptions"
This guide assumes you have two Hyper-V Failover Clusters, for the sake of the guide, we will refer to the Production cluster as `CLUSTER-01` and the Replication cluster as `CLUSTER-02`. This guide also assumes that Replication was set up beforehand, and does not include instructions on how to deploy a Replica Broker (at this time).
## Production Cluster - CLUSTER-01
### Locate the GuestVM
You need to start by locating the GuestVM in the Production cluster, CLUSTER-01. You will know you found the VM if the "Replication Health" is either `Unhealthy`, `Warning`, or `Critical`.
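As a hedged alternative to clicking through the Failover Cluster Manager, the same state and health information can be pulled with PowerShell on the node that owns the VM (the VM name below is an example):
```powershell
Get-VM -VMName "SERVER-01" | Select-Object VMName, ReplicationState, ReplicationHealth, ReplicationMode
```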
### Remove Replication from GuestVM
- Within a node of the Hyper-V: Failover Cluster Manager
- Right-Click the GuestVM
- Navigate to "**Replication > Remove Replication**"
- Confirm the removal by clicking the "**Yes**" button. You will know if it removed replication when the "Replication State" of the GuestVM is `Not enabled`
## Replication Cluster - CLUSTER-02
### Note the storage GUID of the GuestVM in the replication cluster
- Within a node of the replication cluster's Hyper-V: Failover Cluster Manager
- Right-Click the same GuestVM and click "Manage..." `This will open Hyper-V Manager`
- Right-Click the GuestVM and click "Settings..."
- Navigate to "**ISCSI Controller**"
- Click on one of the Virtual Disks attached to the replica VM, and note the full folder path for later. e.g. `C:\ClusterStorage\Volume1\HYPER-V REPLICA\VIRTUAL HARD DISKS\020C9A30-EB02-41F3-8D8B-3561C4521182`
!!! warning "Noting the GUID of the GuestVM"
You need to note the folder location so you have the GUID. Without the GUID, cleaning up the old storage associated with the GuestVM replica files will be much more difficult / time-consuming. Note it down somewhere safe, and reference it later in this guide.
### Delete the GuestVM from the Replication Cluster
Now that you have noted the GUID of the storage folder of the GuestVM, we can safely move onto removing the GuestVM from the replication cluster.
- Within a node of the replication cluster's Hyper-V: Failover Cluster Manager
- Right-Click the GuestVM
- Navigate to "**Replication > Remove Replication**"
- Confirm the removal by clicking the "**Yes**" button. You will know if it removed replication when the "Replication State" of the GuestVM is `Not enabled`
- Right-Click the GuestVM (again) `You will see that "Enable Replication" is an option now, indicating it was successfully removed.`
!!! note "Replica Checkpoint Merges"
When you removed replication, there may have been replication checkpoints that automatically try to merge together with a `Merge in Progress` status. Just let it finish before moving forward.
- Within the same node of the replication cluster's Hyper-V: Failover Cluster Manager `Switch back from Hyper-V Manager`
- Right-Click the GuestVM and click "**Remove**"
- Confirm the action by clicking the "**Yes**" button
### Delete the GuestVM manually from Hyper-V Manager on all replication cluster hosts
At this point in time, we need to remove the GuestVM from all of the servers in the cluster. Just because we removed it from the Hyper-V: Failover Cluster did not remove it from the cluster's nodes. We can automate part of this work by opening Hyper-V Manager on the same Failover Node we have been working on thus far, and from there we can connect the rest of the replication nodes to the manager to have one place to connect to all of the nodes, avoiding hopping between servers.
- Open Hyper-V Manager
- Right-Click "Hyper-V Manager" on the left-hand navigation menu
- Click "Connect to Server..."
- Type the names of every node in the replication cluster to connect to each of them, repeating the two steps above for every node
- Remove GuestVM from the node it appears on
- On one of the replication cluster nodes, we will see the GuestVM listed, we are going to Right-Click the GuestVM and select "**Delete**"
### Delete the GuestVM's replicated VHDX storage from replication ClusterStorage
Now we need to clean up the storage left behind by the replication cluster.
- Within a node of the replication cluster
- Navigate to `C:\ClusterStorage\Volume1\HYPER-V REPLICA\VIRTUAL HARD DISKS`
- Delete the entire GUID folder noted in the previous steps. `e.g. 020C9A30-EB02-41F3-8D8B-3561C4521182`
## Production Cluster - CLUSTER-01
### Re-Enable Replication on GuestVM in Cluster-01 (Production Cluster)
At this point, we have disabled replication for the GuestVM and cleaned up traces of it in the replication cluster. Now we need to re-enable replication on the GuestVM back in the production cluster.
- Within a node of the production Hyper-V: Failover Cluster Manager
- Right-Click the GuestVM
- Navigate to "**Replication > Enable Replication...**"
- Click "Next"
- For the "**Replica Server**", enter the name of the role of the Hyper-V Replica Broker role in the (replication cluster's) Failover Cluster. `e.g. CLUSTER-02-REPL`, then click "Next"
- Click the "Select Certificate" button, since the Broker was configured with Certificate-based authentication instead of Kerberos (in this example environment). It will prompt you to accept the certificate by clicking "OK". (e.g. `HV Replica Root CA`), then click "Next"
- Make sure every drive you want replicated is checked, then click "Next"
- Replication Frequency: `5 Minutes`, then click "Next"
- Additional Recovery Points: `Maintain only the latest recovery point`, then click "Next"
- Initial Replication Method: `Send initial copy over the network`
- Schedule Initial Replication: `Start replication immediately`
- Click "Next"
- Click "Finish"
!!! success "Replication Enabled"
If everything was successful, you will see a dialog box named "Enable replication for `<GuestVM>`" with a message similar to the following: "Replica virtual machine `<GuestVM>` was successfully created on the specified Replica server `<Node-in-Replication-Cluster>`."
At this point, you can click "Close" to finish the process. Under the GuestVM details, you will see "Replication State": `Initial Replication in Progress`.

View File

@@ -0,0 +1,34 @@
## Purpose
If you have a GuestVM that will not stop gracefully, either because the Hyper-V host is in a bad state or the VMMS service won't allow you to restart it, you can perform a hail-mary by forcefully stopping the GuestVM's Hyper-V worker process (`vmwp.exe`).
!!! warning "May Cause GuestVM to be Inconsistent"
This is meant as a last-resort when there are no other options on-the-table. You may end up corrupting the GuestVM by doing this.
### Get the VMID of the GuestVM
```powershell
Get-VM SERVER-01 | Select VMName, VMId
# Example Output
# VMName VMId
# ------ ------------------------------------
# SERVER-01 3e4b6f91-6c6c-4075-9b7e-389d46315074
```
### Extrapolate Process ID
Now you need to hunt-down the process ID associated with the GuestVM.
```powershell
Get-CimInstance Win32_Process -Filter "Name='vmwp.exe'" |
Where-Object { $_.CommandLine -match "3e4b6f91-6c6c-4075-9b7e-389d46315074" } |
Select-Object ProcessId, CommandLine
# Example Output
# ProcessId CommandLine
# --------- ---------------------------------------------------------
# 12488 "C:\Windows\System32\vmwp.exe" -vmid 3e4b6f91-6c6c-4075-9b7e-389d46315074
```
### Terminate Process
Lastly, you terminate the process by its ID.
```powershell
Stop-Process -Id 12488 -Force
```
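If you find yourself doing this often, the lookup and kill can be combined into one hedged snippet ("SERVER-01" is an example VM name; the same caveats about GuestVM consistency apply):
```powershell
# Look up the GuestVM's GUID, find the matching vmwp.exe worker process, and terminate it
$vmid = (Get-VM "SERVER-01").VMId.ToString()
Get-CimInstance Win32_Process -Filter "Name='vmwp.exe'" |
Where-Object { $_.CommandLine -match $vmid } |
ForEach-Object { Stop-Process -Id $_.ProcessId -Force }
```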

View File

@@ -0,0 +1,40 @@
---
tags:
- Kerberos
---
**Purpose**:
You may find that you want to live-migrate GuestVMs between Hyper-V hosts that are not clustered as a Hyper-V Failover Cluster; if so, you will run into permission issues. One way to work around this is to use CredSSP as the authentication mechanism, which is not ideal but useful in a pinch, or you can use Kerberos-based authentication.
This document will cover both scenarios.
=== "Kerberos Authentication (*Preferred*)"
- Log into a domain controller that both Hyper-V hosts are capable of communicating with
- Open "**Server Manager > Tools " Active Directory Users & Computers**"
- Locate the computer objects representing both of the Hyper-V servers and repeat the steps below for each Hyper-V computer object:
- Right-Click > "**Properties**"
- Click on the "**Delegation**" Tab
- Check the radio button for the option "**Trust this computer for delegation to specified services only.**"
- Ensure that "**Use Kerberos only**" is checked
- Click on the "**Add**" button
- Click the "**Users or Computers...**" button
- Within the object search field, type in the name of the Hyper-V server you want to delegate access to (this will be the opposite host, e.g. VIRT-NODE-02; repeat these steps later to delegate access for VIRT-NODE-01, and so on)
- You will see a list of services that you can allow delegation to, add the following services:
- `cisvc`
- `mcsvc`
- `cifs`
- `Virtual Machine Migration Service`
- `Microsoft Virtualization Console`
- Click the "**Apply**" button, then click the "**OK**" button to finalize these changes.
- Repeat the above steps for the opposite Hyper-V host, so that both hosts are delegated to each other
- e.g. `VIRT-NODE-01 <---(delegation)---> VIRT-NODE-02`
=== "CredSSP Authentication"
- Log into both Hyper-V hosts as the same administrative user, preferably a domain administrator
- From the Hyper-V host currently running the GuestVM that needs to be migrated, open Hyper-V Manager and right-click > "**Move**" the guestVM.
- Select the destination by providing the fully-qualified domain name of the destination server (or in some cases the shorthand hostname of the destination server)
- It should begin the migration process.
**Note**: Do not perform a "Pull" from source to the destination. You want to always "Push" the VM to its destination. It will generally fail if you try to "Pull" the VM to its destination due to the way that CredSSP works in this context.
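Whichever authentication method you use, the migration can also be initiated from PowerShell on the source host; a minimal sketch (the VM and destination names are examples):
```powershell
# Run on the host currently running the GuestVM ("push" the VM to its destination)
Move-VM -Name "SERVER-01" -DestinationHost "VIRT-NODE-02"
```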

View File

@@ -0,0 +1,12 @@
---
tags:
- Proxmox
---
**Purpose**: The purpose of this document is to outline common tasks that you may need to run in your cluster to perform various tasks.
## Delete Node from Cluster
Sometimes you may need to delete a node from the cluster if you have re-built it or had issues and needed to destroy it. In these instances, you would run the following command (assuming you have a 3-node quorum in your cluster).
```sh
pvecm delnode proxmox-node-01
```
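Before and after removing the node, you can confirm membership and quorum from any remaining node:
```sh
pvecm status   # Quorum information and vote counts
pvecm nodes    # Current cluster members
```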

View File

@@ -0,0 +1,20 @@
---
tags:
- Proxmox
---
## Purpose
In some very specific situations, you will find that an LVM volume group just won't come online in ProxmoxVE. If this happens, you can run the following commands (replacing the placeholder names) to manually bring the storage online.
```sh
lvchange -an local-vm-storage/local-vm-storage
lvchange -an local-vm-storage/local-vm-storage_tmeta
lvchange -an local-vm-storage/local-vm-storage_tdata
vgchange -ay local-vm-storage
```
!!! info "Be Patient"
It can take some time for everything to come online.
!!! success
If you see something like this: `6 logical volume(s) in volume group "local-vm-storage" now active`, then you successfully brought the volume online.
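You can also confirm the state of the volume group and its logical volumes directly:
```sh
vgs local-vm-storage       # Volume group summary
lvs -a local-vm-storage    # All LVs, including thin-pool metadata volumes
```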

View File

@@ -0,0 +1,43 @@
---
tags:
- Proxmox
---
## Purpose
There are a few steps you have to take when upgrading ProxmoxVE from 8.4.1+ to 9.0+. The process is fairly straightforward, so just follow the instructions seen below.
!!! info "GuestVM Assumptions"
It is assumed that if you are running a ProxmoxVE cluster, you will migrate all GuestVMs to another cluster node. If this is a standalone ProxmoxVE server, you will shut down all GuestVMs safely before proceeding.
!!! warning "Perform `pve8to9` Readiness Check"
It's critical that you run the `pve8to9` command to ensure that your ProxmoxVE server meets all of the requirements and doesn't have any failures or potentially server-breaking warnings. If the `pve8to9` command is unknown, then run `apt update && apt dist-upgrade` in the shell then try again. Warnings should be addressed ad-hoc, but *CPU Microcode warnings can be safely ignored*.
**Example pve8to9 Summary Output**:
```sh
= SUMMARY =
TOTAL: 48
PASSED: 39
SKIPPED: 8
WARNINGS: 1
FAILURES: 0
```
### Update Repositories from `bookworm` to `trixie`
```sh
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/pve-install-repo.list
apt update
```
### Upgrade to ProxmoxVE 9.0
!!! warning "Run Upgrade Commands in iLO/iDRAC/IPMI"
At this point, its very likely that if you are using SSH, it may unexpectedly have the session terminated, so you absolutely want to use a local or remote console to the server to run the commands below, both to ensure you maintain access to the console, as well as to see if any issues arise during POST after the reboot.
```sh
apt dist-upgrade -y
reboot
```
!!! note "Disable `pve-enterprise` Repository"
At this point, the ProxmoxVE server should be running on v9.0+, you will want to disable the `pve-enterprise` repository as it will goof up future updates if you don't disable it.
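A minimal sketch of one way to disable it, assuming the repository is still defined in the classic list file (newer installs may ship it as a deb822 `.sources` file instead, in which case comment out or remove that file):
```sh
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list   # Comment out the enterprise repository
apt update
```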