Additional Doc Restructure

This commit is contained in:
2026-01-27 05:57:50 -07:00
parent e73bb0376f
commit 886fd0db07
78 changed files with 0 additions and 0 deletions

## Kerberos Implementation
You may find that you need to run playbooks on domain-joined Windows devices using Kerberos. This requires some extra steps after you have fully deployed AWX Operator into Kubernetes.
### Configure Windows Devices
You will need to prepare the Windows devices so they can be remotely controlled by Ansible playbooks. Run the following PowerShell script on every device that will be managed by the Ansible AWX environment.
- [WinRM Prerequisite Setup Script](../enable-winrm-on-windows-devices.md)
### Create an AWX Instance Group
At this point, we need to make an "Instance Group" for the AWX Execution Environments that will use both the Kerberos configuration file and the custom DNS servers defined by the ConfigMap files created below. Reference information was found [here](https://github.com/kurokobo/awx-on-k3s/blob/main/tips/use-kerberos.md#create-container-group). This group allows for persistence across playbooks/templates, so that if you establish Kerberos authentication in one playbook, it persists through the entire job's workflow.
Create the following files in the `/awx` folder on the AWX Operator server you deployed earlier (when setting up the Kubernetes Cluster and deploying AWX Operator into it), so they can later be mounted into the new Execution Environment we will be building.
=== "Custom DNS Records"
    ```yaml title="/awx/custom_dns_records.yml"
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: custom-dns
      namespace: awx
    data:
      custom-hosts: |
        192.168.3.25 LAB-DC-01.bunny-lab.io LAB-DC-01
        192.168.3.26 LAB-DC-02.bunny-lab.io LAB-DC-02
        192.168.3.4 VIRT-NODE-01.bunny-lab.io VIRT-NODE-01
        192.168.3.5 BUNNY-NODE-02.bunny-lab.io BUNNY-NODE-02
    ```
=== "Kerberos Configuration File"
    ```ini title="/awx/krb5.conf"
    [libdefaults]
        default_realm = BUNNY-LAB.IO
        dns_lookup_realm = false
        dns_lookup_kdc = false

    [realms]
        BUNNY-LAB.IO = {
            kdc = 192.168.3.25
            kdc = 192.168.3.26
            admin_server = 192.168.3.25
        }

    [domain_realm]
        192.168.3.25 = BUNNY-LAB.IO
        192.168.3.26 = BUNNY-LAB.IO
        .bunny-lab.io = BUNNY-LAB.IO
        bunny-lab.io = BUNNY-LAB.IO
    ```
Then we apply these ConfigMaps to the `awx` namespace with the following commands:
``` sh
cd /awx
kubectl -n awx create configmap awx-kerberos-config --from-file=/awx/krb5.conf
kubectl apply -f custom_dns_records.yml
```
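Before building the container group, you can confirm that both ConfigMaps landed in the `awx` namespace. This is a sketch assuming `kubectl` is pointed at the same cluster:

``` sh
# List ConfigMaps in the awx namespace; both "custom-dns" and "awx-kerberos-config" should appear
kubectl get configmap -n awx
# Inspect the rendered contents of each ConfigMap
kubectl describe configmap custom-dns -n awx
kubectl describe configmap awx-kerberos-config -n awx
```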
- Open AWX UI and click on "**Instance Groups**" under the "**Administration**" section, then press "**Add > Add container group**".
- Enter a descriptive name as you like (e.g. `Kerberos`) and click the toggle "**Customize Pod Specification**".
- Put the following YAML string in "**Custom pod spec**", then press the "**Save**" button.
```yaml title="Custom Pod Spec"
apiVersion: v1
kind: Pod
metadata:
  namespace: awx
spec:
  serviceAccountName: default
  automountServiceAccountToken: false
  initContainers:
    - name: init-hosts
      image: busybox
      command:
        - sh
        - '-c'
        - cat /etc/custom-dns/custom-hosts >> /etc/hosts
      volumeMounts:
        - name: custom-dns
          mountPath: /etc/custom-dns
  containers:
    - image: quay.io/ansible/awx-ee:latest
      name: worker
      args:
        - ansible-runner
        - worker
        - '--private-data-dir=/runner'
      resources:
        requests:
          cpu: 250m
          memory: 100Mi
      volumeMounts:
        - name: awx-kerberos-volume
          mountPath: /etc/krb5.conf
          subPath: krb5.conf
  volumes:
    - name: awx-kerberos-volume
      configMap:
        name: awx-kerberos-config
    - name: custom-dns
      configMap:
        name: custom-dns
```
### Job Template & Inventory Examples
At this point, you need to adjust your existing Job Template(s) that communicate with domain-joined Windows devices via Kerberos to use the "**Kerberos**" Instance Group, while keeping the same Execution Environment you have been using up until this point. This changes the Execution Environment to include the Kerberos keytab file in the EE at playbook runtime. When the playbook has completed running (or, if you are chain-loading multiple playbooks in a workflow job template, when the entire workflow has completed), it will cease to exist. The Kerberos keytab data will be regenerated at the next runtime.
Also add the following variables to the job template you have associated with the playbook below:
``` yaml
---
kerberos_user: nicole.rappe@BUNNY-LAB.IO
kerberos_password: <DomainPassword>
```
You will want to ensure your inventory file is configured to use Kerberos Authentication as well, so the following example is a starting point:
```ini
virt-node-01 ansible_host=virt-node-01.bunny-lab.io
bunny-node-02 ansible_host=bunny-node-02.bunny-lab.io
[virtualizationHosts]
virt-node-01
bunny-node-02
[virtualizationHosts:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_scheme=https
ansible_winrm_server_cert_validation=ignore
#kerberos_user=nicole.rappe@BUNNY-LAB.IO #Optional, if you define this in the Job Template, it is not necessary here.
#kerberos_password=<DomainPassword> #Optional, if you define this in the Job Template, it is not necessary here.
```
!!! failure "Usage of Fully-Qualified Domain Names"
    It is **critical** that you define Kerberos-authenticated devices with fully qualified domain names. This is something I found out from 4+ hours of troubleshooting. If the device is Linux, or you are using NTLM authentication instead of Kerberos authentication, you can skip this warning. If you do not define the inventory using FQDNs, it will fail to run the commands against the targeted device(s).
    In this example, the host is defined via FQDN: `virt-node-01 ansible_host=virt-node-01.bunny-lab.io`
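If Kerberos authentication misbehaves, it can help to verify it manually from inside a running Execution Environment pod, independent of any playbook. This is a sketch; the pod name below is a placeholder (container-group pod names are randomized per job), and `kinit`/`klist` must exist in the EE image:

``` sh
# Find the name of a running Kerberos container-group EE pod
kubectl get pods -n awx
# Open a shell inside it (replace the pod name with one from the output above)
kubectl exec -it -n awx automation-job-1234-abcde -- bash
# Inside the pod: request and list a ticket manually to prove /etc/krb5.conf works
kinit nicole.rappe@BUNNY-LAB.IO
klist
```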
### Kerberos Connection Playbook
At this point, you need a playbook that you can run in a Workflow Job Template (to keep things modular and simplified) to establish a connection to an Active Directory Domain Controller via Kerberos before running additional playbooks/templates against the actual devices.
You can visualize the connection workflow below:
``` mermaid
graph LR
A[Update AWX Project] --> B[Update Project Inventory]
B --> C[Establish Kerberos Connection]
C --> D[Run Playbook against Windows Device]
```
The following playbook is an example pulled from https://git.bunny-lab.io.
!!! note "Playbook Redundancies"
    I have several areas where I could optimize this playbook and remove redundancies. I just have not had enough time to iterate through it deeply enough to narrow down exactly what I can remove, so for now it will remain as-is, since it functions as expected with the example below.
```yaml title="Establish_Kerberos_Connection.yml"
---
- name: Generate Kerberos Ticket to Communicate with Domain-Joined Windows Devices
  hosts: localhost
  vars:
    kerberos_password: "{{ lookup('env', 'KERBEROS_PASSWORD') }}" # Alternatively, you can set this as an environment variable
    # BE SURE TO PASS "kerberos_user: nicole.rappe@BUNNY-LAB.IO" and "kerberos_password: <domain_admin_password>" to the template variables when running this playbook in a template.

  tasks:
    - name: Generate the keytab file
      ansible.builtin.shell: |
        ktutil <<EOF
        addent -password -p {{ kerberos_user }} -k 1 -e aes256-cts
        {{ kerberos_password }}
        wkt /tmp/krb5.keytab
        quit
        EOF
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: generate_keytab_result

    - name: Ensure keytab file was generated successfully
      ansible.builtin.fail:
        msg: "Failed to generate keytab file"
      when: generate_keytab_result.rc != 0

    - name: Keytab successfully generated
      ansible.builtin.debug:
        msg: "Keytab successfully generated at /tmp/krb5.keytab"
      when: generate_keytab_result.rc == 0

    - name: Acquire Kerberos ticket using keytab
      ansible.builtin.shell: |
        kinit -kt /tmp/krb5.keytab {{ kerberos_user }}
      environment:
        KRB5_CONFIG: /etc/krb5.conf
      register: kinit_result

    - name: Ensure Kerberos ticket was acquired successfully
      ansible.builtin.fail:
        msg: "Failed to acquire Kerberos ticket"
      when: kinit_result.rc != 0

    - name: Kerberos ticket successfully acquired
      ansible.builtin.debug:
        msg: "Kerberos ticket successfully acquired for user {{ kerberos_user }}"
      when: kinit_result.rc == 0
```
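Once the ticket exists, a minimal follow-up playbook in the same workflow can confirm that the Windows hosts actually accept the Kerberos credentials. This is a sketch and not part of the original workflow: the group name `virtualizationHosts` matches the inventory example above, and `ansible.windows.win_ping` assumes the `ansible.windows` collection is present in your Execution Environment.

```yaml title="Verify_Kerberos_Connection.yml"
---
- name: Verify Kerberos Authentication Against Windows Hosts
  hosts: virtualizationHosts
  gather_facts: false
  tasks:
    - name: Round-trip a WinRM connection using the Kerberos ticket
      ansible.windows.win_ping:
```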

**Purpose**: Once AWX is deployed, you will want to connect it to Gitea at https://git.bunny-lab.io. The reason for this is so we can pull our playbooks, inventories, and templates into AWX automatically, making it more stateless overall and more resilient to potential failures of either AWX or the underlying Kubernetes Cluster hosting it.
## Obtain Gitea Token
You already have this documented in Vaultwarden's password notes for awx.bunny-lab.io, but in case it gets lost, go to the [Gitea Token Page](https://git.bunny-lab.io/user/settings/applications) to set up an application token with read-only access for AWX, with a descriptive name.
## Create Gitea Credentials
Before you move on and create the project, you need to associate the Gitea token with an AWX "Credential". Navigate to **Resources > Credentials > Add**
| **Field** | **Value** |
| :--- | :--- |
| Credential Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Organization | `Default` *(Click the Magnifying Lens)* |
| Credential Type | `Source Control` |
| Username | `Gitea Username` *(e.g. `nicole`)* |
| Password | `<Gitea Token>` |
## Create an AWX Project
In order to link AWX to Gitea, you have to connect the two of them together with an AWX "Project". Navigate to **Resources > Projects > Add**
**Project Variables**:
| **Field** | **Value** |
| :--- | :--- |
| Project Name | `Bunny-Lab` |
| Description | `Homelab Environment` |
| Organization | `Default` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source Control Type | `Git` |
**Gitea-specific Variables**:
| **Field** | **Value** |
| :--- | :--- |
| Source Control URL | `https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git` |
| Source Control Branch/Tag/Commit | `main` |
| Source Control Credential | `git.bunny-lab.io` *(Click the Magnifying Lens)* |
## Add Playbooks
AWX automatically imports any playbooks it finds in the project and makes them available to templates operating within the same project-space (e.g. "Bunny-Lab"). This means no special configuration is needed for the playbooks.
## Create an Inventory
You will want to associate an inventory with the Gitea project now. Navigate to **Resources > Inventories > Add**
| **Field** | **Value** |
| :--- | :--- |
| Inventory Name | `Homelab` |
| Description | `Homelab Inventory` |
| Organization | `Default` |
### Add Gitea Inventory Source
Now you will want to connect this inventory to the inventory file(s) hosted in the aforementioned Gitea repository. Navigate to **Resources > Inventories > Homelab > Sources > Add**
| **Field** | **Value** |
| :--- | :--- |
| Source Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source | `Sourced from a Project` |
| Project | `Bunny-Lab` |
| Inventory File | `inventories/homelab.ini` |
!!! info "Overwriting Existing Inventory Data"
    You want to make sure that the checkboxes for "**Overwrite**" and "**Overwrite Variables**" are checked. This ensures that if devices and/or group variables are removed from the inventory file in Gitea, they will also be removed from the inventory inside AWX.
## Webhooks
Optionally, set up webhooks in Gitea to trigger inventory updates in AWX upon changes in the repository. This section is not documented yet, but will eventually be documented.
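As a rough starting point until this is formally documented: AWX exposes an update endpoint per project, and Gitea's webhook settings let you POST to an arbitrary URL with a custom Authorization header on every push. The sketch below shows the AWX side of that call; the project ID (`8`) and the token are placeholders you would look up in your own AWX instance.

``` sh
# Trigger a project sync from Gitea via the AWX API.
# Replace the project ID and <AWX_API_TOKEN> with values from your deployment.
curl -X POST \
  -H "Authorization: Bearer <AWX_API_TOKEN>" \
  https://awx.bunny-lab.io/api/v2/projects/8/update/
```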

# Deploy AWX on Minikube Cluster
Minikube Cluster-based deployment of Ansible AWX (formerly Ansible Tower).
!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later.
## Install Minikube Cluster
### Update the Ubuntu Server
```
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```
### Download and Install Minikube (Ubuntu Server)
Additional Documentation: https://minikube.sigs.k8s.io/docs/start/
```
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb
# Download Docker and Common Tools
sudo apt install docker.io nfs-common iptables nano htop -y
# Configure Docker User
sudo usermod -aG docker nicole
```
!!! warning
    Be sure to change the `nicole` username in the `sudo usermod -aG docker nicole` command to whatever your local username is.
### Fully Logout then sign back in to the server
```
exit
```
### Validate that permissions allow you to run docker commands while non-root
```
docker ps
```
### Initialize Minikube Cluster
Additional Documentation: https://github.com/ansible/awx-operator
```
minikube start --driver=docker
minikube kubectl -- get nodes
minikube kubectl -- get pods -A
```
### Make sure Minikube Cluster Automatically Starts on Boot
```ini title="/etc/systemd/system/minikube.service"
[Unit]
Description=Minikube service
After=network.target
[Service]
Type=oneshot
RemainAfterExit=yes
User=nicole
ExecStart=/usr/bin/minikube start --driver=docker
ExecStop=/usr/bin/minikube stop
[Install]
WantedBy=multi-user.target
```
!!! warning
    Be sure to change the `nicole` username in the `User=nicole` line of the config to whatever your local username is.
!!! info
    If you want Minikube's ingress addon, append `--addons=ingress` to the `ExecStart` line. You can omit it if you plan on running AWX behind an existing reverse proxy using a "**NodePort**" connection.
### Restart Service Daemon and Enable/Start Minikube Automatic Startup
```
sudo systemctl daemon-reload
sudo systemctl enable minikube
sudo systemctl start minikube
```
### Make command alias for `kubectl`
Be sure to add the following to the bottom of your existing profile file noted below.
```bash title="~/.bashrc"
...
alias kubectl="minikube kubectl --"
```
!!! tip
    If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
## Make AWX Operator Kustomization File:
Find the latest tag version here: https://github.com/ansible/awx-operator/releases
```yaml title="kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```yaml title="awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
---
apiVersion: v1
kind: Service
metadata:
  name: awx-service
  namespace: awx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080 # Choose an available port in the range of 30000-32767
  selector:
    app.kubernetes.io/name: awx-web
```
### Apply Configuration File
Run from the same directory as the `kustomization.yml` file.
```
kubectl apply -k .
```
!!! info
    If you get any errors, especially ones relating to "CRD"s, wait 30 seconds, and try re-running the `kubectl apply -k .` command to fully apply the `awx.yml` configuration file to bootstrap the AWX deployment.
### View Logs / Track Deployment Progress
```
kubectl logs -n awx awx-operator-controller-manager -c awx-manager
```
### Get AWX WebUI Address
```
minikube service -n awx awx-service --url
```
### Get WebUI Password:
```
kubectl get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode ; echo
```
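The value stored in the secret is base64-encoded, which is why the command pipes through `base64 --decode`. The decode half of the pipeline can be sanity-checked in isolation with a throwaway value (the sample string below is illustrative, not a real secret):

``` sh
# Decode a known sample value the same way the kubectl pipeline does
echo 'c2VjcmV0LXBhc3N3b3Jk' | base64 --decode ; echo
# → secret-password
```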

**Purpose**:
Deploying a Rancher RKE2 Cluster-based Ansible AWX Operator server. This can scale to a larger more enterprise environment if needed.
!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 22.04** or later with at least 16GB of memory, 8 CPU cores, and 64GB of storage.
## Deploy Rancher RKE2 Cluster
You will need to deploy a [Rancher RKE2 Cluster](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md) on an Ubuntu Server-based virtual machine. After this phase, you can focus on the Ansible AWX-specific deployment. A single ControlPlane node is all you need to set up AWX, additional infrastructure can be added after-the-fact.
!!! tip "Checkpoint/Snapshot Reminder"
    If this is a virtual machine, after deploying the RKE2 cluster and validating it functions, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something during deployment.
## Server Configuration
The AWX deployment will consist of 3 yaml files that configure the containers for AWX as well as the NGINX ingress networking-side of things. You will need all of them in the same folder for the deployment to be successful. For the purpose of this example, we will put all of them into a folder located at `/awx`.
``` sh
# Make the deployment folder
mkdir -p /awx
cd /awx
```
We need to increase filesystem access limits:
Temporarily Set the Limits Now:
``` sh
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```
Permanently Set the Limits for Later:
```ini title="/etc/sysctl.conf"
# <End of File>
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```
Apply the Settings:
``` sh
sudo sysctl -p
```
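You can verify the limits took effect by reading them back; the kernel exposes the live values under `/proc/sys`:

``` sh
# Read the active inotify limits back from the kernel
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances
```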
### Create AWX Deployment Configuration Files
You will need to create these files all in the same directory using the content of the examples below. Be sure to replace values such as the `spec.host=awx.bunny-lab.io` in the `awx-ingress.yml` file to a hostname you can point a DNS server / record to.
=== "awx.yml"
    ```yaml title="/awx/awx.yml"
    apiVersion: awx.ansible.com/v1beta1
    kind: AWX
    metadata:
      name: awx
    spec:
      service_type: ClusterIP
    ```
=== "ingress.yml"
    ```yaml title="/awx/ingress.yml"
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress
    spec:
      rules:
        - host: awx.bunny-lab.io
          http:
            paths:
              - pathType: Prefix
                path: "/"
                backend:
                  service:
                    name: awx-service
                    port:
                      number: 80
    ```
=== "kustomization.yml"
    ```yaml title="/awx/kustomization.yml"
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - github.com/ansible/awx-operator/config/default?ref=2.10.0
      - awx.yml
      - ingress.yml
    images:
      - name: quay.io/ansible/awx-operator
        newTag: 2.10.0
    namespace: awx
    ```
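Before applying anything, you can optionally render the merged manifests locally to catch indentation or reference mistakes. This is a sketch assuming a `kubectl` new enough (v1.14+) to have kustomize built in:

``` sh
cd /awx
# Render the kustomization without applying it; errors here indicate a YAML problem
kubectl kustomize . | head -n 40
```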
## Ensure the Kubernetes Cluster is Ready
Check that the status of the cluster is ready by running the following commands, it should appear similar to the [Rancher RKE2 Example](../../../../platforms/containerization/kubernetes/deployment/rancher-rke2.md#install-helm-rancher-certmanager-jetstack-rancher-and-longhorn):
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get pods --all-namespaces
```
## Ensure the Timezone / Date is Accurate
You want to make sure that the Kubernetes environment and the node itself have accurate time for a number of reasons; not least, if you are using Ansible with Kubernetes authentication and the date/time is inaccurate, things will not work correctly.
``` sh
sudo timedatectl set-timezone America/Denver
```
## Deploy AWX using Kustomize
Now it is time to tell Kubernetes to read the configuration files using Kustomize (*built-in to newer versions of Kubernetes*) to deploy AWX into the cluster.
!!! warning "Be Patient"
    The AWX deployment process can take a while. Use the commands in the [Troubleshooting](./awx-operator.md#troubleshooting) section if you want to track the progress after running the commands below.
``` sh
cd /awx
kubectl apply -k .
```
If you get an error that looks like the below, re-run the `kubectl apply -k .` command a second time after waiting about 10 seconds. The second time the error should be gone.
``` sh
error: resource mapping not found for name: "awx" namespace: "awx" from ".": no matches for kind "AWX" in version "awx.ansible.com/v1beta1"
ensure CRDs are installed first
```
To check on the progress of the deployment, you can run the following command: `kubectl get pods -n awx`
You will know that AWX is ready to be accessed in the next step if the output looks like below:
```
NAME                                               READY   STATUS    RESTARTS        AGE
awx-operator-controller-manager-7b9ccf9d4d-cnwhc   2/2     Running   2 (3m41s ago)   9m41s
awx-postgres-13-0                                  1/1     Running   0               6m12s
awx-task-7b5f8cf98c-rhrpd                          4/4     Running   0               4m46s
awx-web-6dbd7df9f7-kn8k2                           3/3     Running   0               93s
```
!!! warning "Be Patient - Wait 20 Minutes"
    The process may take a while to spin up AWX, postgresql, redis, and other workloads necessary for AWX to function. Depending on the speed of the server, it may take between 5 and 20 minutes for AWX to be ready to connect to. You can watch the progress via the CLI commands listed above, or directly on Rancher's WebUI at https://rancher.bunny-lab.io.
## Access the AWX WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible to the host's network (such as your personal computer) unless you set up a DNS record pointing to it. In the example above, you would have an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 Cluster host.
The RKE2 Cluster will translate `awx.bunny-lab.io` to the AWX web-service container(s) automatically, due to having an internal reverse proxy within the Kubernetes Cluster. SSL certificates generated within Kubernetes / Rancher RKE2 are not covered in this documentation, but suffice to say, the AWX server can be placed behind another reverse proxy such as Traefik, or behind Cert-Manager / JetStack. The process of setting this up goes outside the scope of this document.
### Traefik Implementation
If you want to put this behind traefik, you will need a slightly unique traefik configuration file, seen below, to effectively transparently passthrough traffic into the RKE2 Cluster's reverse proxy.
```yaml title="awx.bunny-lab.io.yml"
tcp:
  routers:
    awx-tcp-router:
      rule: "HostSNI(`awx.bunny-lab.io`)"
      entryPoints: ["websecure"]
      service: awx-nginx-service
      tls:
        passthrough: true
#      middlewares:
#        - auth-bunny-lab-io # Referencing the Keycloak Server
  services:
    awx-nginx-service:
      loadBalancer:
        servers:
          - address: "192.168.3.10:443"
```
!!! success "Accessing the AWX WebUI"
    If you have gotten this far, you should now be able to access AWX via the WebUI and log in.

    - AWX WebUI: https://awx.bunny-lab.io

    ![Ansible AWX WebUI](../awx.png)
You may see a prompt stating "AWX is currently upgrading. This page will refresh when complete". Be patient and let it finish; when it's done, it will take you to a login page.
AWX generates its own secure admin password the first time it is deployed. The username is `admin`. You can run the following command to retrieve the password:
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
## Change Admin Password
You will want to change the admin password straight-away. Use the following navigation structure to find where to change the password:
``` mermaid
graph LR
A[AWX Dashboard] --> B[Access]
B --> C[Users]
C --> D[admin]
D --> E[Edit]
```
## Troubleshooting
You may want to track the deployment process to verify that it is actually doing something. There are a few Kubernetes commands that can assist with this, listed below.
### AWX-Manager Deployment Logs
You may want to track the internal logs of the `awx-manager` container which is responsible for the majority of the automated deployment of AWX. You can do so by running the command below.
```
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
!!! note
    The `-6c58d59d97-qj2n2` noted at the end of the Kubernetes "Pod" mentioned in the command above is randomized. You will need to change it based on the name shown when running the `kubectl get pods -n awx` command.

## Upgrading from 2.10.0 to 2.19.1+
There is a known issue with upgrading / installing AWX Operator beyond version 2.10.0, because the bundled PostgreSQL database upgrades from 13.0 to 15.0 and its data-directory permissions changed. The following workflow will help get past that and adjust the permissions in such a way that allows the upgrade to proceed successfully. If this is a clean installation, you can also perform this step if the fresh install of 2.19.1 is not working yet; it won't work out of the box because of this bug. (The developers of AWX seem to just not care about this issue, and have not implemented an official fix at this time.)
### Create a Temporary Pod to Adjust Permissions
We need to create a pod that will mount the PostgreSQL PVC, make changes to permissions, then destroy the v15.0 pod to have the AWX Operator automatically regenerate it.
```yaml title="/awx/temp-pod.yml"
apiVersion: v1
kind: Pod
metadata:
  name: temp-pod
  namespace: awx
spec:
  containers:
    - name: temp-container
      image: busybox
      command: ['sh', '-c', 'sleep 3600']
      volumeMounts:
        - mountPath: /var/lib/pgsql/data
          name: postgres-data
  volumes:
    - name: postgres-data
      persistentVolumeClaim:
        claimName: postgres-15-awx-postgres-15-0
  restartPolicy: Never
```
``` sh
# Deploy Temporary Pod
kubectl apply -f /awx/temp-pod.yml
# Open a Shell in the Temporary Pod
kubectl exec -it temp-pod -n awx -- sh
# Adjust Permissions of the PostgreSQL 15.0 Database Folder
chown -R 26:root /var/lib/pgsql/data
exit
# Delete the Temporary Pod
kubectl delete pod temp-pod -n awx
# Delete the Crashlooped PostgreSQL 15.0 Pod to Regenerate It
kubectl delete pod awx-postgres-15-0 -n awx
# Track the Migration
kubectl get pods -n awx
kubectl logs -n awx awx-postgres-15-0
```
!!! warning "Be Patient"
    This upgrade may take a few minutes depending on the speed of the node it is running on. Be patient and wait until the output looks something similar to this:
    ```
    root@awx:/awx# kubectl get pods -n awx
    NAME                                               READY   STATUS      RESTARTS   AGE
    awx-migration-24.6.1-bh5vb                         0/1     Completed   0          9m55s
    awx-operator-controller-manager-745b55d94b-2dhvx   2/2     Running     0          25m
    awx-postgres-15-0                                  1/1     Running     0          12m
    awx-task-7946b46dd6-7z9jm                          4/4     Running     0          10m
    awx-web-9497647b4-s4gmj                            3/3     Running     0          10m
    ```
If you see a migration pod, as seen in the above example, you can feel free to delete it with the following command: `kubectl delete pod awx-migration-24.6.1-bh5vb -n awx`.

# WinRM (Kerberos)
**Name**: "Kerberos WinRM"
```yaml title="Input Configuration"
fields:
  - id: username
    type: string
    label: Username
  - id: password
    type: string
    label: Password
    secret: true
  - id: krb_realm
    type: string
    label: Kerberos Realm (Domain)
required:
  - username
  - password
  - krb_realm
```
```yaml title="Injector Configuration"
extra_vars:
  ansible_user: '{{ username }}'
  ansible_password: '{{ password }}'
  ansible_winrm_transport: kerberos
  ansible_winrm_kerberos_realm: '{{ krb_realm }}'
```

---
sidebar_position: 1
---
# AWX Credential Types
When interacting with devices via Ansible Playbooks, you need to provide the playbook with credentials to connect to the device with. Examples are domain credentials for Windows devices, and local sudo user credentials for Linux.
## Windows-based Credentials
### NTLM
NTLM-based authentication is not exactly the most secure method of remotely running playbooks on Windows devices, but it is still encrypted using SSL certificates created by the device itself when provisioned correctly to enable WinRM functionality.
```text title="(NTLM) nicole.rappe@MOONGATE.LOCAL"
Credential Type: Machine
Username: nicole.rappe@MOONGATE.LOCAL
Password: <Encrypted>
Privilege Escalation Method: runas
Privilege Escalation Username: nicole.rappe@MOONGATE.LOCAL
```
### Kerberos
Kerberos-based authentication is generally considered the most secure method of authentication with Windows devices, but can be trickier to set up since it requires additional setup inside of AWX in the cluster for it to function properly. At this time, there is no working Kerberos documentation.
```text title="(Kerberos WinRM) nicole.rappe"
Credential Type: Kerberos WinRM
Username: nicole.rappe
Password: <Encrypted>
Kerberos Realm (Domain): MOONGATE.LOCAL
```
## Linux-based Credentials
```text title="(LINUX) nicole"
Credential Type: Machine
Username: nicole
Password: <Encrypted>
Privilege Escalation Method: sudo
Privilege Escalation Username: root
```
!!! note
    `WinRM / Kerberos` based credentials do not currently work as-expected. At this time, use either `Linux` or `NTLM` based credentials.

**Purpose**:
You will need to enable secure WinRM management of the Windows devices you run playbooks against, as compared to the Linux devices. The following PowerShell script needs to be run on every Windows device you intend to run Ansible playbooks on. This script can also be useful for simply enabling / resetting WinRM configurations for Hyper-V hosts in general; just omit the PowerShell remote-signing section if you don't plan on using it for Ansible.
``` powershell
# Script to configure WinRM over HTTPS on the Hyper-V host
# Ensure WinRM is enabled
Write-Host "Enabling WinRM..."
winrm quickconfig -force
# Generate a self-signed certificate (Optional: Use your certificate if you have one)
# Note: Win32_ComputerSystem exposes a "Domain" property; the FQDN is built from the hostname plus that domain
$cert = New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName "$env:COMPUTERNAME.$((Get-WmiObject -Class Win32_ComputerSystem).Domain)"
$certThumbprint = $cert.Thumbprint
# Function to delete existing HTTPS listener
function Remove-HTTPSListener {
    Write-Host "Removing existing HTTPS listener if it exists..."
    $listeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
    foreach ($listener in $listeners) {
        if ($listener.Transport -eq "HTTPS") {
            Write-Host "Deleting listener with Address: $($listener.Address) and Transport: $($listener.Transport)"
            Remove-WSManInstance -ResourceURI winrm/config/listener -SelectorSet @{Address=$listener.Address; Transport=$listener.Transport}
        }
    }
    Start-Sleep -Seconds 5 # Wait for a few seconds to ensure deletion
}
# Remove existing HTTPS listener
Remove-HTTPSListener
# Confirm deletion
$existingListeners = Get-WSManInstance -ResourceURI winrm/config/listener -Enumerate
if ($existingListeners | Where-Object { $_.Transport -eq "HTTPS" }) {
    Write-Host "Failed to delete the existing HTTPS listener. Exiting script."
    exit 1
}
# Create a new HTTPS listener
Write-Host "Creating a new HTTPS listener..."
$listenerCmd = "winrm create winrm/config/Listener?Address=*+Transport=HTTPS '@{Hostname=`"$env:COMPUTERNAME.$((Get-WmiObject -Class Win32_ComputerSystem).Domain)`"; CertificateThumbprint=`"$certThumbprint`"}'"
Invoke-Expression $listenerCmd
# Set TrustedHosts to allow connections from any IP address (adjust as needed for security)
Write-Host "Setting TrustedHosts to allow any IP address..."
winrm set winrm/config/client '@{TrustedHosts="*"}'
# Enable the firewall rule for WinRM over HTTPS
Write-Host "Enabling firewall rule for WinRM over HTTPS..."
$existingFirewallRule = Get-NetFirewallRule -DisplayName "WinRM HTTPS" -ErrorAction SilentlyContinue
if (-not $existingFirewallRule) {
    New-NetFirewallRule -Name "WINRM-HTTPS-In-TCP-PUBLIC" -DisplayName "WinRM HTTPS" -Enabled True -Direction Inbound -Protocol TCP -LocalPort 5986 -RemoteAddress Any -Action Allow
}
# Ensure Kerberos authentication is enabled
Write-Host "Enabling Kerberos authentication for WinRM..."
winrm set winrm/config/service/auth '@{Kerberos="true"}'
# Configure the WinRM service to use HTTPS and Kerberos
Write-Host "Configuring WinRM service to use HTTPS and Kerberos..."
winrm set winrm/config/service '@{AllowUnencrypted="false"}'
# Configure the WinRM client to use Kerberos
Write-Host "Configuring WinRM client to use Kerberos..."
winrm set winrm/config/client/auth '@{Kerberos="true"}'
# Ensure the PowerShell execution policy is set to allow remotely running scripts
Write-Host "Setting PowerShell execution policy to RemoteSigned..."
Set-ExecutionPolicy RemoteSigned -Force
Write-Host "Configuration complete. The Hyper-V host is ready for remote management over HTTPS with Kerberos authentication."
```
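After the script completes, it is worth confirming that the listener actually responds before pointing AWX at the host. A quick check from another domain-joined machine might look like the following (the hostname is illustrative; substitute one of your own devices):

```powershell
# Verify the WinRM HTTPS listener is answering (run from another domain-joined machine)
Test-WSMan -ComputerName LAB-DC-01.bunny-lab.io -UseSSL
# A successful response returns protocol/vendor information; errors point at listener, certificate, or firewall issues
```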

# Host Inventories
When you are deploying playbooks, you target hosts that exist in "Inventories". These inventories consist of a list of hosts and their corresponding IP addresses, as well as any host-specific variables that may be necessary to declare to run the playbook. You can see an example inventory file below.
Keep in mind that the "Group Variables" section varies based on your environment. NTLM is considered insecure but may be necessary when you are interacting with Windows servers that are not domain-joined; otherwise, use Kerberos authentication. This is outlined in more detail in the [AWX Kerberos Implementation](../awx/awx-kerberos-implementation.md#job-template-inventory-examples) documentation.
!!! note "Inventory Data Relationships"
An inventory file consists of hosts, groups, and variables. A host belongs to a group, and a group can have variables configured for it. If you run a playbook / job template against a host, it will assign the variables associated to the group that host belongs to (if any) during runtime.
```ini title="https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini"
# Networking
pfsense-example ansible_host=192.168.3.1
# Servers
example01 ansible_host=192.168.3.2
example02 ansible_host=192.168.3.3
example03 ansible_host=example03.domain.com # FQDN is required for Ansible in Windows Domain-Joined Kerberos environments.
example04 ansible_host=example04.domain.com # FQDN is required for Ansible in Windows Domain-Joined Kerberos environments.
# Group Definitions
[linuxServers]
example01
example02
[domainControllers]
example03
example04
[domainControllers:vars]
ansible_connection=winrm
ansible_winrm_kerberos_delegation=false
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```
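For domain-joined hosts where Kerberos is available, the group variables would instead look something like the following sketch (the values are illustrative; adjust them for your environment):

```ini
[domainControllers:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_kerberos_delegation=false
ansible_winrm_server_cert_validation=ignore  # illustrative; validate certificates where possible
```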

!!! warning "DOCUMENT UNDER CONSTRUCTION"
    This document is a "scaffold" document. It is missing significant portions of several sections and should not be read with scrutiny until it is more feature-complete down the road. Check back later; I will hopefully have added more to this document by then.
**Purpose**:
This is an indexed list of Ansible Playbooks / Workflows that I have developed to deploy and manage various aspects of my lab environment. The list is not dynamically updated, so it may sometimes be out-of-date.
## Linux Playbooks
### Deployments
Deployment playbooks are meant to be playbooks (or a series of playbooks forming a "Workflow Job Template") that deploy a server or piece of software.
- Authentik
- [1-Authentik-Bootstrapper.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/1-Authentik-Bootstrapper.yml)
- [2-Deploy-Cluster.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/2-Deploy-Cluster.yml)
- [3-Deploy-Authentik.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/3-Deploy-Authentik.yml)
- [Check_Cluster_Nodes.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/Check_Cluster_Nodes.yml)
- [Check_Cluster_Pods.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Authentik/Check_Cluster_Pods.yml)
- Immich
- [Full_Deployment.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Immich/Full_Deployment.yml)
- Keycloak
- [Deploy-Keycloak.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Keycloak/Deploy-Keycloak.yml)
- Portainer
- [Deploy-Portainer.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/Portainer/Deploy-Portainer.yml)
- PrivacyIDEA
- [privacyIDEA.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Deployments/privacyIDEA.yml)
- Rancher RKE2 Kubernetes Cluster
- [PLACEHOLDER]()
- [PLACEHOLDER]()
- [PLACEHOLDER]()
- [PLACEHOLDER]()
- [PLACEHOLDER]()
### Kerberos
This playbook is designed to be chain-loaded before any playbooks that interact with Active Directory domain-joined Windows devices. It establishes a connection with Active Directory using domain credentials and sets up a keytab file (among other things) so that the execution environment the subsequent jobs run in is able to run against Windows devices. This keeps the connection encrypted for the entire duration of the playbooks instead of falling back to lower-security authentication methods like NTLM, which often do not work in domain environments anyway. You can find more information in the [Kerberos Authentication](../awx/awx-kerberos-implementation.md#kerberos-implementation) section of the AWX documentation. `It does require additional setup prior to running the playbook.`
- [Establish_Kerberos_Connection.yml](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/playbooks/Linux/Establish_Kerberos_Connection.yml)
!!! warning "Ansible w/ Kerberos is **not** for beginners"
    I advise against jumping into the deep end of Kerberos authentication for your playbooks until you are comfortable with how Kubernetes works. At the very least, read the linked documentation above very closely to ensure nothing goes wrong during setup.
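Conceptually, the heavy lifting in that playbook reduces to obtaining a ticket inside the execution environment before any Windows tasks run. A minimal sketch of such a task follows; the principal and keytab path are assumptions for illustration, not the playbook's actual values:

```yaml
- name: Obtain a Kerberos ticket for subsequent Windows playbooks
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Request a ticket from the KDC using a keytab
      ansible.builtin.command:
        cmd: kinit -kt /etc/krb5.keytab svc-ansible@BUNNY-LAB.IO  # hypothetical principal and keytab path
```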
### Security
Security playbooks do things like secure devices with additional auditing functionality, login notifications, enforcing SSH certificate-based authentication, things of that sort.
- Install SSH Public Key Authentication
- [PLACEHOLDER]()
- SSH Login Notifications
- [PLACEHOLDER]()
## Windows Playbooks
### Deployments
Deployment playbooks are meant to be playbooks (or a series of playbooks forming a "Workflow Job Template") that deploy a server or piece of software.
- Hyper-V - Deploy GuestVM
- [PLACEHOLDER]()
- Query Active Directory Domain Computers
- [PLACEHOLDER]()
- Install BGInfo
- [PLACEHOLDER]()

# AWX Projects
When you want to run playbooks on host devices in your inventory files, you need to host the playbooks in a "Project". Projects can be as simple as a connection to Gitea/Github to store playbooks in a repository.
```jsx title="Ansible Playbooks (Gitea)"
Name: Bunny Lab
Source Control Type: Git
Source Control URL: https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git
Source Control Credential: Bunny Lab (Gitea)
```
```jsx title="Resources > Credentials > Bunny Lab (Gitea)"
Name: Bunny Lab (Gitea)
Credential Type: Source Control
Username: nicole.rappe
Password: <Encrypted> #If you use MFA on Gitea/Github, use an App Password instead for the project.
```

# Templates
Templates are pre-constructed combinations of an inventory, playbook, credentials, and execution environment that perform a specific kind of task against a predefined group of hosts.
```jsx title="Deploy Hyper-V VM"
Name: Deploy Hyper-V VM
Inventory: (NTLM) MOON-HOST-01
Playbook: playbooks/Windows/Hyper-V/Deploy-VM.yml
Credentials: (NTLM) nicole.rappe@MOONGATE.local
Execution Environment: AWX EE (latest)
Project: Ansible Playbooks (Gitea)
Variables:
---
random_number: "{{ lookup('password', '/dev/null chars=digits length=4') }}"
random_letters: "{{ lookup('password', '/dev/null chars=ascii_uppercase length=4') }}"
vm_name: "NEXUS-TEST-{{ random_number }}{{ random_letters }}"
vm_memory: "8589934592" #Measured in Bytes (e.g. 8GB)
vm_storage: "68719476736" #Measured in Bytes (e.g. 64GB)
iso_path: "C:\\ubuntu-22.04-live-server-amd64.iso"
vm_folder: "C:\\Virtual Machines\\{{ vm_name_fact }}"
```
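The two `lookup('password', ...)` expressions above simply draw random characters on each run; nothing is persisted because the lookup targets `/dev/null`. A rough Python equivalent of how `vm_name` gets built (a sketch, not Ansible's actual lookup implementation) is:

```python
import random
import string

def random_chars(chars: str, length: int) -> str:
    # Rough stand-in for lookup('password', '/dev/null chars=... length=...'): random draw, nothing persisted
    return "".join(random.choice(chars) for _ in range(length))

random_number = random_chars(string.digits, 4)
random_letters = random_chars(string.ascii_uppercase, 4)
vm_name = f"NEXUS-TEST-{random_number}{random_letters}"
print(vm_name)  # e.g. NEXUS-TEST-4821QKZM
```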

# Automation
## Purpose
Infrastructure automation, orchestration, and workflow tooling.
## Includes
- Ansible and Puppet patterns
- Inventory and credential conventions
- CI/CD and automation notes
## New Document Template
````markdown
# <Document Title>
## Purpose
<what this automation doc exists to describe>
!!! info "Assumptions"
- <platform or tooling assumptions>
- <privilege assumptions>
## Inputs
- <variables, inventories, secrets>
## Procedure
```sh
# Commands or job steps
```
## Validation
- <command + expected result>
````

**Purpose**: Puppet Bolt can be leveraged in an Ansible-esque manner to connect to and enroll devices such as Windows Servers, Linux Servers, and various workstations. To this end, it could be used to run ad-hoc tasks or enroll devices into a centralized Puppet server. (e.g. `LAB-PUPPET-01.bunny-lab.io`)
!!! note "Assumptions"
    This deployment assumes you are deploying Puppet Bolt onto the same server as Puppet. If you have not already, follow the [Puppet Deployment](./puppet.md) documentation before continuing with the Puppet Bolt deployment.
## Initial Preparation
``` sh
# Install Bolt Repository
sudo rpm -Uvh https://yum.puppet.com/puppet-tools-release-el-9.noarch.rpm
sudo yum install -y puppet-bolt
# Verify Installation
bolt --version
# Clone Puppet Bolt Repository into Bolt Directory
#sudo git clone https://git.bunny-lab.io/GitOps/Puppet-Bolt.git /etc/puppetlabs/bolt <-- Disabled for now
sudo mkdir -p /etc/puppetlabs/bolt
sudo chown -R $(whoami):$(whoami) /etc/puppetlabs/bolt
sudo chmod -R u+rwX,go+rX /etc/puppetlabs/bolt # Directories need the execute bit to remain traversable
#sudo chmod -R u+rwx,g+rx,o+rx /etc/puppetlabs/bolt/modules/bolt <-- Disabled for now
# Initialize A New Bolt Project
cd /etc/puppetlabs/bolt
bolt project init bunny_lab
```
## Configuring Inventory
At this point, you will want to create an inventory file that you can use for tracking devices. For now, this will have hard-coded credentials until a cleaner method is figured out.
``` yaml title="/etc/puppetlabs/bolt/inventory.yaml"
# Inventory file for Puppet Bolt
groups:
- name: linux_servers
targets:
- lab-auth-01.bunny-lab.io
- lab-auth-02.bunny-lab.io
config:
transport: ssh
ssh:
host-key-check: false
private-key: "/etc/puppetlabs/bolt/id_rsa_OpenSSH" # (1)
user: nicole
native-ssh: true
- name: windows_servers
config:
transport: winrm
winrm:
realm: BUNNY-LAB.IO
ssl: true
user: "BUNNY-LAB\\nicole.rappe"
password: DomainPassword # (2)
groups:
- name: domain_controllers
targets:
- lab-dc-01.bunny-lab.io
- lab-dc-02.bunny-lab.io
- name: dedicated_game_servers
targets:
- lab-games-01.bunny-lab.io
- lab-games-02.bunny-lab.io
- lab-games-03.bunny-lab.io
- lab-games-04.bunny-lab.io
- lab-games-05.bunny-lab.io
- name: hyperv_hosts
targets:
- virt-node-01.bunny-lab.io
- bunny-node-02.bunny-lab.io
```
1. Point the inventory file to the private key (if you use key-based authentication instead of password-based SSH authentication.)
2. Replace this with your actual domain admin / domain password.
### Validate Bolt Inventory Works
If the inventory file is created correctly, you will see the hosts listed when you run the command below:
``` sh
cd /etc/puppetlabs/bolt
bolt inventory show
```
??? example "Example Output of `bolt inventory show`"
You should expect to see output similar to the following:
``` sh
[root@lab-puppet-01 bolt-lab]# bolt inventory show
Targets
lab-auth-01.bunny-lab.io
lab-auth-02.bunny-lab.io
lab-dc-01.bunny-lab.io
lab-dc-02.bunny-lab.io
lab-games-01.bunny-lab.io
lab-games-02.bunny-lab.io
lab-games-03.bunny-lab.io
lab-games-04.bunny-lab.io
lab-games-05.bunny-lab.io
virt-node-01.bunny-lab.io
bunny-node-02.bunny-lab.io
Inventory source
/tmp/bolt-lab/inventory.yaml
Target count
11 total, 11 from inventory, 0 adhoc
Additional information
Use the '--targets', '--query', or '--rerun' option to view specific targets
Use the '--detail' option to view target configuration and data
```
## Configuring Kerberos
If you work with Windows-based devices in a domain environment, you will need to set up Puppet so it can perform Kerberos authentication while interacting with Windows devices. This involves a little bit of setup, but nothing too crazy.
### Install Krb5
We need to install the necessary software on the puppet server to allow Kerberos authentication to occur.
=== "Rocky, CentOS, RHEL, Fedora"
``` sh
sudo yum install krb5-workstation
```
=== "Debian, Ubuntu"
``` sh
sudo apt-get install krb5-user
```
=== "SUSE"
``` sh
sudo zypper install krb5-client
```
### Prepare `/etc/krb5.conf` Configuration
We need to configure Kerberos so it knows how to reach the domain. This is achieved by editing `/etc/krb5.conf` to look similar to the following, substituting your own domain for the example values.
``` ini
[libdefaults]
default_realm = BUNNY-LAB.IO
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 7d
forwardable = true
[realms]
BUNNY-LAB.IO = {
kdc = LAB-DC-01.bunny-lab.io # (1)
kdc = LAB-DC-02.bunny-lab.io # (2)
admin_server = LAB-DC-01.bunny-lab.io # (3)
}
[domain_realm]
.bunny-lab.io = BUNNY-LAB.IO
bunny-lab.io = BUNNY-LAB.IO
```
1. Your primary domain controller
2. Your secondary domain controller (if applicable)
3. This is your Primary Domain Controller (PDC)
### Initialize Kerberos Connection
Now we need to log into the domain using (preferably) domain administrator credentials, such as in the example below. You will be prompted to enter your domain password.
``` sh
kinit nicole.rappe@BUNNY-LAB.IO
klist
```
??? example "Example Output of `klist`"
You should expect to see output similar to the following. Finding a way to ensure the Kerberos tickets live longer is still under research, as 7 days is not exactly practical for long-term deployments.
``` sh
[root@lab-puppet-01 bolt-lab]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: nicole.rappe@BUNNY-LAB.IO
Valid starting Expires Service principal
11/14/2024 21:57:03 11/15/2024 07:57:03 krbtgt/BUNNY-LAB.IO@BUNNY-LAB.IO
renew until 11/21/2024 21:57:03
```
### Prepare Windows Devices
Windows devices need to be prepared ahead of time for WinRM functionality to work as expected. I have prepared a PowerShell script that you can run on each device that needs remote management functionality. You can adapt this script to your needs and deploy it via whatever methods you have available (e.g. Ansible, Group Policy, existing RMM software, or manually via Remote Desktop).
You can find the [WinRM Enablement Script](../../ansible/enable-winrm-on-windows-devices.md) in the Bunny Lab documentation.
## Ad-Hoc Command Examples
At this point, you should finally be ready to connect to Windows and Linux devices and run commands on them ad-hoc. Puppet Bolt Modules and Plans will be discussed further down the road.
??? example "Example Output of `bolt command run whoami -t domain_controllers --no-ssl-verify`"
You should expect to see output similar to the following. This is what you will see when leveraging WinRM via Kerberos on Windows devices.
``` sh
[root@lab-puppet-01 bolt-lab]# bolt command run whoami -t domain_controllers --no-ssl-verify
CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
Started on lab-dc-01.bunny-lab.io...
Started on lab-dc-02.bunny-lab.io...
Finished on lab-dc-02.bunny-lab.io:
bunny-lab\nicole.rappe
Finished on lab-dc-01.bunny-lab.io:
bunny-lab\nicole.rappe
Successful on 2 targets: lab-dc-01.bunny-lab.io,lab-dc-02.bunny-lab.io
Ran on 2 targets in 1.91 sec
```
??? example "Example Output of `bolt command run whoami -t linux_servers`"
You should expect to see output similar to the following. This is what you will see when leveraging native SSH on Linux devices.
``` sh
[root@lab-puppet-01 bolt-lab]# bolt command run whoami -t linux_servers
CLI arguments ["ssl-verify"] might be overridden by Inventory: /tmp/bolt-lab/inventory.yaml [ID: cli_overrides]
Started on lab-auth-01.bunny-lab.io...
Started on lab-auth-02.bunny-lab.io...
Finished on lab-auth-02.bunny-lab.io:
nicole
Finished on lab-auth-01.bunny-lab.io:
nicole
Successful on 2 targets: lab-auth-01.bunny-lab.io,lab-auth-02.bunny-lab.io
Ran on 2 targets in 0.68 sec
```

**Purpose**:
Puppet is another declarative configuration management tool that excels in system configuration and enforcement. Like Ansible, it's designed to maintain the desired state of a system's configuration but uses a client-server (master-agent) architecture by default.
!!! note "Assumptions"
    This document assumes you are deploying Puppet Server onto Rocky Linux 9.4. Any version of RHEL/CentOS/Alma/Rocky should behave similarly.
## Architectural Overview
### Detailed
``` mermaid
sequenceDiagram
participant Gitea as Gitea Repo (Puppet Environment)
participant r10k as r10k (Environment Deployer)
participant PuppetMaster as Puppet Server (lab-puppet-01.bunny-lab.io)
participant Agent as Managed Agent (fedora.bunny-lab.io)
participant Neofetch as Neofetch Package
%% PuppetMaster pulling environment updates
PuppetMaster->>Gitea: Pull Puppet Environment updates
Gitea-->>PuppetMaster: Send latest Puppet repository code
%% r10k deployment process
PuppetMaster->>r10k: Deploy environment with r10k
r10k->>PuppetMaster: Fetch and install Puppet modules
r10k-->>PuppetMaster: Compile environments and apply updates
%% Agent enrollment process
Agent->>PuppetMaster: Request to enroll (Agent Check-in)
PuppetMaster->>Agent: Verify SSL Certificate & Authenticate
Agent-->>PuppetMaster: Send facts about system (Facter)
%% PuppetMaster compiles catalog for the agent
PuppetMaster->>PuppetMaster: Compile Catalog
PuppetMaster->>PuppetMaster: Check if 'neofetch' is required in manifest
PuppetMaster-->>Agent: Send compiled catalog with 'neofetch' installation instructions
%% Agent installs neofetch
Agent->>Agent: Check if 'neofetch' is installed
Agent--xNeofetch: 'neofetch' not installed
Agent->>Neofetch: Install 'neofetch'
Neofetch-->>Agent: Installation complete
%% Agent reports back to PuppetMaster
Agent->>PuppetMaster: Report status (catalog applied and neofetch installed)
```
### Simplified
``` mermaid
sequenceDiagram
participant Gitea as Gitea (Puppet Repository)
participant PuppetMaster as Puppet Server
participant Agent as Managed Agent (fedora.bunny-lab.io)
participant Neofetch as Neofetch Package
%% PuppetMaster pulling environment updates
PuppetMaster->>Gitea: Pull environment updates
Gitea-->>PuppetMaster: Send updated code
%% Agent enrollment and catalog request
Agent->>PuppetMaster: Request catalog (Check-in)
PuppetMaster->>Agent: Send compiled catalog (neofetch required)
%% Agent installs neofetch
Agent->>Neofetch: Install neofetch
Neofetch-->>Agent: Installation complete
%% Agent reports back
Agent->>PuppetMaster: Report catalog applied (neofetch installed)
```
### Breakdown
#### 1. **PuppetMaster Pulls Updates from Gitea**
- PuppetMaster uses `r10k` to fetch the latest environment updates from Gitea. These updates include manifests, hiera data, and modules for the specified Puppet environments.
#### 2. **PuppetMaster Compiles Catalogs and Modules**
- After pulling updates, the PuppetMaster compiles the latest node-specific catalogs based on the manifests and modules. It ensures the configuration is ready for agents to retrieve.
#### 3. **Agent (fedora.bunny-lab.io) Checks In**
- The Puppet agent on `fedora.bunny-lab.io` checks in with the PuppetMaster for its catalog. This request tells the PuppetMaster to compile the node's desired configuration.
#### 4. **Agent Downloads and Applies the Catalog**
- The agent retrieves its compiled catalog from the PuppetMaster. It compares the current system state with the desired state outlined in the catalog.
#### 5. **Agent Installs `neofetch`**
- The agent identifies that `neofetch` is missing and installs it using the system's package manager. The installation follows the directives in the catalog.
#### 6. **Agent Reports Success**
- Once changes are applied, the agent sends a report back to the PuppetMaster. The report includes details of the changes made, confirming `neofetch` was installed.
## Deployment Steps
You will need to perform a few steps outlined in the [official Puppet documentation](https://www.puppet.com/docs/puppet/7/install_puppet.html) to get a Puppet server operational. A summarized workflow is seen below:
### Install Puppet Repository
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Add Puppet Repository / Enable Puppet on YUM
sudo rpm -Uvh https://yum.puppet.com/puppet7-release-el-9.noarch.rpm
```
### Install Puppet Server
**Installation Scope**: Puppet Server
``` sh
# Install the Puppet Server
sudo yum install -y puppetserver
systemctl enable --now puppetserver
# Validate Successful Deployment
exec bash
puppetserver -v
```
### Install Puppet Agent
**Installation Scope**: Puppet Server / Managed Devices
``` sh
# Install Puppet Agent (This will already be installed on the Puppet Server)
sudo yum install -y puppet-agent
# Enable the Puppet Agent
sudo /opt/puppetlabs/bin/puppet resource service puppet ensure=running enable=true
# Configure Puppet Server to Connect To
puppet config set server lab-puppet-01.bunny-lab.io --section main
# Establish Secure Connection to Puppet Server
puppet ssl bootstrap
# ((On the Puppet Server))
# You will see an error stating: "Couldn't fetch certificate from CA server; you might still need to sign this agent's certificate (fedora.bunny-lab.io)."
# Run the following command (as root) on the Puppet Server to generate a certificate
sudo su
puppetserver ca sign --certname fedora.bunny-lab.io
```
#### Validate Agent Functionality
At this point, you want to ensure that the device being managed by the agent is able to pull down configurations from the Puppet Server. You will know if it worked by getting a message similar to `Notice: Applied catalog in X.XX seconds` after running the following command:
``` sh
puppet agent --test
```
## Install r10k
At this point, we need to configure Gitea as the storage repository for the Puppet "Environments" (e.g. `Production` and `Development`). We can do this by leveraging a tool called "r10k" which pulls a Git repository and configures it as the environment in Puppet.
``` sh
# Install r10k Pre-Requisites
sudo dnf install -y ruby ruby-devel gcc make
# Install r10k Gem (The Software)
# Note: If you encounter any issues with permissions, you can install the gem with "sudo gem install r10k --no-document".
sudo gem install r10k
# Verify the Installation (Run this as a non-root user)
r10k version
```
### Configure r10k
``` sh
# Create the r10k Configuration Directory
sudo mkdir -p /etc/puppetlabs/r10k
# Create the r10k Configuration File
sudo nano /etc/puppetlabs/r10k/r10k.yaml
```
```yaml title="/etc/puppetlabs/r10k/r10k.yaml"
---
# Cache directory for r10k
cachedir: '/var/cache/r10k'
# Sources define which repositories contain environments (Be sure to use the SSH URL, not the Git URL)
sources:
puppet:
remote: 'https://git.bunny-lab.io/GitOps/Puppet.git'
basedir: '/etc/puppetlabs/code/environments'
```
``` sh
# Lockdown the Permissions of the Configuration File
sudo chmod 600 /etc/puppetlabs/r10k/r10k.yaml
# Create r10k Cache Directory
sudo mkdir -p /var/cache/r10k
sudo chown -R puppet:puppet /var/cache/r10k
```
## Configure Gitea
At this point, we need to set up the branches and file/folder structure of the Puppet repository on Gitea.
You will make a repository on Gitea with the following files and structure as noted by each file's title. You will make a mirror copy of all of the files below in both the `Production` and `Development` branches of the repository. For the sake of this example, the repository will be located at `https://git.bunny-lab.io/GitOps/Puppet.git`
!!! example "Example Agent & Neofetch"
    You will notice there is a section for `fedora.bunny-lab.io` as well as mentions of `neofetch`. These are examples from my homelab: a computer I was testing against while developing the Puppet Server and this documentation. Feel free to omit the entire `modules/neofetch/manifests/init.pp` file from the Gitea repository, and remove this entire section from the `manifests/site.pp` file:
``` yaml
# Node definition for the Fedora agent
node 'fedora.bunny-lab.io' {
# Include the neofetch class to ensure Neofetch is installed
include neofetch
}
```
=== "Puppetfile"
    This file is used by the Puppet Server (PuppetMaster) to prepare the environment by installing modules / Forge packages into the environment before devices receive their configurations. It is important; the modules included in this example are the bare minimum needed for PuppetDB functionality.
```json title="Puppetfile"
forge 'https://forge.puppet.com'
mod 'puppetlabs-stdlib', '9.6.0'
mod 'puppetlabs-puppetdb', '8.1.0'
mod 'puppetlabs-postgresql', '10.3.0'
mod 'puppetlabs-firewall', '8.1.0'
mod 'puppetlabs-inifile', '6.1.1'
mod 'puppetlabs-concat', '9.0.2'
mod 'puppet-systemd', '7.1.0'
```
=== "environment.conf"
This file is mostly redundant, as it states the values below, which are the default values Puppet works with. I only included it in case I had a unique use-case that required a more custom approach to the folder structure. (This is very unlikely).
```yaml title="environment.conf"
# Specifies the module path for this environment
modulepath = modules:$basemodulepath
# Optional: Specifies the manifest file for this environment
manifest = manifests/site.pp
# Optional: Set the environment's config_version (e.g., a script to output the current Git commit hash)
# config_version = scripts/config_version.sh
# Optional: Set the environment's environment_timeout
# environment_timeout = 0
```
=== "site.pp"
This file is kind of like an inventory of devices and their states. In this example, you will see that the puppet server itself is named `lab-puppet-01.bunny-lab.io` and the agent device is named `fedora.bunny-lab.io`. By "including" modules like PuppetDB, it installs the PuppetDB role and configures it automatically on the Puppet Server. By stating the firewall rules, it also ensures that those firewall ports are open no matter what, and if they close, Puppet will re-open them automatically. Port 8140 is for Agent communication, and port 8081 is for PuppetDB functionality.
!!! example "Neofetch Example"
In the example configuration below, you will notice this section. This tells Puppet to deploy the neofetch package to any device that has `include neofetch` written. Grouping devices etc is currently undocumented as of writing this.
``` sh
# Node definition for the Fedora agent
node 'fedora.bunny-lab.io' {
# Include the neofetch class to ensure Neofetch is installed
include neofetch
}
```
```yaml title="manifests/site.pp"
# Node definition for the Puppet Server
node 'lab-puppet-01.bunny-lab.io' {
# Include the puppetdb class with custom parameters
class { 'puppetdb':
listen_address => '0.0.0.0', # Allows access from all network interfaces
}
# Configure the Puppet Server to use PuppetDB
include puppetdb
include puppetdb::master::config
# Ensure the required iptables rules are in place using Puppet's firewall resources
firewall { '100 allow Puppet traffic on 8140':
proto => 'tcp',
dport => '8140',
jump => 'accept', # Corrected parameter from action to jump
chain => 'INPUT',
ensure => 'present',
}
firewall { '101 allow PuppetDB traffic on 8081':
proto => 'tcp',
dport => '8081',
jump => 'accept', # Corrected parameter from action to jump
chain => 'INPUT',
ensure => 'present',
}
}
# Node definition for the Fedora agent
node 'fedora.bunny-lab.io' {
# Include the neofetch class to ensure Neofetch is installed
include neofetch
}
# Default node definition (optional)
node default {
# This can be left empty or include common classes for all other nodes
}
```
=== "init.pp"
    This is used by the neofetch class noted in the `site.pp` file. It is basically the declaration of how we want neofetch to be on the devices that include the neofetch "class". In this case, we don't care how it does it, but it will install Neofetch, whether through yum, dnf, or apt; these few lines of code are OS-agnostic. The formatting / philosophy is similar to the modules in Ansible playbooks, and how they declare the "state" of things.
```yaml title="modules/neofetch/manifests/init.pp"
class neofetch {
package { 'neofetch':
ensure => installed,
}
}
```
### Storing Credentials to Gitea
We need to be able to clone Gitea's Puppet repository as the root user so that r10k can automatically pull down any changes made to the Puppet environments (e.g. `Production` and `Development`). Each Git branch represents a different Puppet environment. We will use an application token for this.
Navigate to **Gitea > User (Top-Right) > Settings > Applications**:
- Token Name: `Puppet r10k`
- Permissions: `Repository > Read Only`
- Click the "**Generate Token**" button to finish.
!!! warning "Securely Store the Application Token"
It is critical that you store the token somewhere safe like a password manager as you will need to reference it later and might need it in the future if you re-build the r10k environment.
Now we want to configure Gitea to store the credentials for later use by r10k:
``` sh
# Enable Stored Credentials (We will address security concerns further down...)
sudo yum install -y git
sudo git config --global credential.helper store
# Clone the Git Repository Once to Store the Credentials (Use the Application Token as the password)
# Username: nicole.rappe
# Password: <Application Token Value>
sudo git clone https://git.bunny-lab.io/GitOps/Puppet.git /tmp/PuppetTest
# Verify the Credentials are Stored
sudo cat /root/.git-credentials
# Lockdown Permissions
sudo chmod 600 /root/.git-credentials
# Cleanup After Ourselves
sudo rm -rf /tmp/PuppetTest
```
Finally we validate that everything is working by pulling down the Puppet environments using r10k on the Puppet Server:
``` sh
# Deploy Puppet Environments from Gitea
sudo /usr/local/bin/r10k deploy environment -p
# Validate r10k is Installing Modules in the Environments
sudo ls /etc/puppetlabs/code/environments/production/modules
sudo ls /etc/puppetlabs/code/environments/development/modules
```
!!! success "Successful Puppet Environment Deployment"
If you got no errors about Puppetfile formatting or Gitea permissions errors, then you are good to move onto the next step.
## External Node Classifier (ENC)
An ENC allows you to define node-specific data, including the environment, on the Puppet Server. The agent requests its configuration, and the Puppet Server provides the environment and classes to apply.
**Advantages**:
- **Centralized Control**: Environments and classifications are managed from the server.
- **Security**: Agents cannot override their assigned environment.
- **Scalability**: Suitable for managing environments for hundreds or thousands of nodes.
### Create an ENC Script
``` sh
sudo mkdir -p /opt/puppetlabs/server/data/puppetserver/scripts/
```
```ruby title="/opt/puppetlabs/server/data/puppetserver/scripts/enc.rb"
#!/usr/bin/env ruby
# enc.rb
require 'yaml'
node_name = ARGV[0]
# Define environment assignments
node_environments = {
'fedora.bunny-lab.io' => 'development',
# Add more nodes and their environments as needed
}
environment = node_environments[node_name] || 'production'
# Define classes to include per node (optional)
node_classes = {
'fedora.bunny-lab.io' => ['neofetch'],
# Add more nodes and their classes as needed
}
classes = node_classes[node_name] || []
# Output the YAML document
output = {
'environment' => environment,
'classes' => classes
}
puts output.to_yaml
```
``` sh
# Ensure the File is Executable
sudo chmod +x /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```
### Configure Puppet Server to Use the ENC
Edit the Puppet Server's `puppet.conf` and set the `node_terminus` and `external_nodes` parameters:
```ini title="/etc/puppetlabs/puppet/puppet.conf"
[master]
node_terminus = exec
external_nodes = /opt/puppetlabs/server/data/puppetserver/scripts/enc.rb
```
Restart the Puppet Service
``` sh
sudo systemctl restart puppetserver
```
## Pull Puppet Environments from Gitea
At this point, we can tell r10k to pull down the Puppet environments (e.g. `Production` and `Development`) that we made in the Gitea repository in previous steps. Run the following command on the Puppet Server to pull down the environments. This will download / configure any Puppet Forge modules as well as any hand-made modules such as Neofetch.
``` sh
sudo /usr/local/bin/r10k deploy environment -p
# OPTIONAL: You can pull down a specific environment instead of all environments if you specify the branch name, seen here:
#sudo /usr/local/bin/r10k deploy environment development -p
```
### Apply Configuration to Puppet Server
At this point, we are going to deploy the configuration from Gitea to the Puppet Server itself so it installs PuppetDB automatically, as well as configures firewall ports and other small things to function properly. Once this is completed, you can add additional agents / managed devices and they will be able to communicate with the Puppet Server over the network.
``` sh
sudo /opt/puppetlabs/bin/puppet agent -t
```
!!! success "Puppet Server Deployed and Validated"
Congratulations! You have successfully deployed an entire Puppet Server, integrated Gitea and r10k to deploy versioned environment changes, and validated functionality against a managed device using the agent (such as a spare laptop/desktop). If you got this far, be proud, because it took me over 12 hours to write this documentation so you could deploy a server in less than 30 minutes.
## Purpose
This document defines the **authoritative documentation style contract** used throughout the Bunny Lab homelab documentation.
It is intended to be provided to:
- AI assistants
- Automation tools
- Contributors
The goal is to ensure all documentation is:
- Technically precise
- CLI-first
- Easy to audit
- Easy to reproduce
---
## General Writing Principles
- Write for experienced operators, not beginners
- Prefer **explicit commands** over descriptive prose
- Avoid narrative filler
- Assume the reader understands the underlying technologies
---
## Document Flow and Structure
Documentation is written with the assumption that the reader:
- Reads **top to bottom**
- Executes actions within the **current section**
- Does not require explicit step numbering
Sections define **context and scope**.
Ordering is implicit and intentional.
---
## Core Sections (Recommended)
Most documents should include, at minimum:
- **Purpose** (why this doc exists)
- **Assumptions** (platform, privileges, prerequisites)
- **Procedure** (commands and configuration)
Include these when applicable:
- **Architectural Overview** (diagram or flow)
- **Validation** (explicit checks with expected output)
- **Troubleshooting** → **Symptoms** / **Resolution**
---
## Headings
- `#` — Document title (one per document)
- `##` — Major logical phases or topics
- `###` — Subsections only when needed
Headings replace the need for numbered steps.
Avoid over-fragmentation.
---
## Admonitions
Admonitions are **intentional and sparse**, not decorative.
Use them to:
- Highlight irreversible actions
- Call out one-time decisions
- Enforce safety boundaries
Common forms:
```markdown
!!! warning "Important"
!!! note
!!! tip
!!! success
```
Do **not** restate obvious information inside admonitions.
---
## Code Blocks (Critical)
Code blocks are the **primary instructional vehicle**.
### Rules
- Always fenced
- Always copy/paste-ready
- Prefer fewer, larger blocks over many small ones
- Use inline shell comments (`#`) to explain intent
Example:
```sh
# Enable iSCSI service and persist across reboots
service iscsitarget start
sysrc iscsitarget_enable=YES
```
Avoid explanatory prose between every command.
---
## Shell Fencing
- Use `` ```sh `` fences for shell commands
- Use plain `` ``` `` fences for diagrams or pseudo-structure
- Do not mix command output with commands unless explicitly labeled
---
## Inline Code
Use backticks for:
- Dataset names
- Volume groups
- Filenames
- Command names
- One-off parameters
Example:
`CLUSTER-STORAGE/iscsi-proxmox`
---
## Lists
- Use bullet lists for inventories, criteria, and checks
- Avoid numbered lists for procedures
- Ordering is conveyed by section layout, not numbering
---
## Diagrams
- ASCII diagrams only
- Used to describe hierarchy or flow
- Must reinforce understanding, not decorate
---
## Validation Sections
Validation is **mandatory** for any procedure that affects:
- Storage
- Networking
- Virtualization
- Data integrity
Validation lists should be explicit and testable.
For lower-risk or informational documents, validation is optional.
---
## Tone and Voice
- Neutral
- Operational
- Conservative
- “Boring is correct”
Avoid:
- Marketing language
- Storytelling
- Over-explanation
---
## Anti-Patterns (Do Not Use)
- Numbered procedural steps
- GUI-only workflows when CLI exists
- Excessive screenshots
- One-command-per-codeblock sprawl
- Implicit assumptions
- Hidden prerequisites
---
## Summary
Bunny Lab documentation prioritizes:
- Determinism
- Safety
- Reproducibility
- Auditability
If a step cannot be reproduced from the documentation alone, it is incomplete.
# Foundations
## Purpose
Defines the baseline documentation standards, shared references, and structural conventions used everywhere else in this knowledgebase.
## Includes
- Documentation styling contract
- Inventory and naming conventions
- Shared templates and glossary references
## New Document Template
````markdown
# <Document Title>
## Purpose
<one paragraph describing why this exists>
!!! info "Assumptions"
- <OS / platform / privilege assumptions>
- <required tools or prerequisites>
## Scope
- <what is covered>
- <what is explicitly out of scope>
## Procedure
```sh
# Commands go here (grouped and annotated)
```
## Validation
- <command + expected result>
## Troubleshooting
### Symptoms
- <what you see>
### Resolution
```sh
# Fix steps
```
````
**Purpose**: PLACEHOLDER
## Docker Configuration
```yaml title="docker-compose.yml"
PLACEHOLDER
```
```yaml title=".env"
PLACEHOLDER
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
PLACEHOLDER:
entryPoints:
- websecure
tls:
certResolver: myresolver
service: PLACEHOLDER
rule: Host(`PLACEHOLDER.bunny-lab.io`)
services:
PLACEHOLDER:
loadBalancer:
servers:
- url: http://PLACEHOLDER:80
passHostHeader: true
```
*Purpose*: Sometimes you need two Linux computers to be able to talk to each other without requiring a password. Passwordless SSH can be achieved by running the following commands:
!!! note "Non-Root Key Storage Considerations"
When you generate SSH keys, they will be stored in a specific user's profile, the one currently executing the commands. If you want to have passwordless SSH, you would run the commands from a non-root user (e.g. `nicole`).
``` sh
ssh-keygen # (1)
ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@192.168.3.18 # (2)
ssh -i /home/nicole/.ssh/id_rsa nicole@192.168.3.18 # (3)
```
1. Just leave all of the default options and do not put a password on the SSH key.
2. Change the directories to account for your given username, and change the destination to the user@IP corresponding to the remote server. You will be prompted to enter the password once to store the SSH public key on the remote computer.
3. This command is to validate that everything worked. If the remote user is the same as the local user (e.g. `nicole`) then you don't need to add the `-i /home/nicole/.ssh/id_rsa` section to the SSH command.
!!! warning "Run before configuring Global SSH Infrastructure Key"
There is a global automation that leverages a [Global Infrastructure Public SSH Key](https://git.bunny-lab.io/Infrastructure/LinuxServer_SSH_PublicKey). If this runs before you run the commands above, you will be unable to configure SSH key relationships and it will need to be done manually.
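If you want to see what `ssh-keygen` produces before touching a real user profile, a minimal non-interactive sketch looks like the following. The key type, comment, and temporary paths here are example values, not part of the automation above:

```shell
# Sketch only: generate a passwordless ed25519 keypair into a temp directory
# (key type, comment, and paths are example values)
KEYDIR=$(mktemp -d)
ssh-keygen -t ed25519 -N "" -C "demo-key" -f "$KEYDIR/id_ed25519" >/dev/null
ls "$KEYDIR"
```

The `-N ""` flag supplies an empty passphrase so no interactive prompt appears, which is what makes the key usable for unattended logins.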
``` sh
# Auto-detect and enable connected displays
xrandr --auto
# Attach provider 4's outputs to provider 0 (provider indices are machine-specific; list them with `xrandr --listproviders`)
xrandr --setprovideroutputsource 4 0
# HDMI-1 as primary at 1920x1080@75Hz, DVI-I-1-1 at 60Hz to its right, internal panel (eDP-1) off
xrandr --output HDMI-1 --primary --mode 1920x1080 --rate 75.00 --output DVI-I-1-1 --mode 1920x1080 --rate 60.00 --right-of HDMI-1 --output eDP-1 --off
```
# Git Repo Updater (Script)
## Purpose
Standalone `repo_watcher.sh` script used by the Git Repo Updater container. This script clones or pulls one or more repositories and rsyncs them into destination paths.
For the containerized version and deployment details, see the [Git Repo Updater container doc](../../platforms/containerization/docker/custom-containers/git-repo-updater.md).
## Script
```sh
#!/bin/sh
# Function to process each repo-destination pair
process_repo() {
FULL_REPO_URL=$1
DESTINATION=$2
# Extract the URL without credentials for logging and notifications
CLEAN_REPO_URL=$(echo "$FULL_REPO_URL" | sed 's/https:\/\/[^@]*@/https:\/\//')
# Directory to hold the repository locally
REPO_DIR="/root/Repo_Cache/$(basename $CLEAN_REPO_URL .git)"
# Clone the repo if it doesn't exist, or navigate to it if it does
if [ ! -d "$REPO_DIR" ]; then
curl -d "Cloning: $CLEAN_REPO_URL" $NTFY_URL
git clone "$FULL_REPO_URL" "$REPO_DIR" > /dev/null 2>&1
fi
cd "$REPO_DIR" || exit
# Fetch the latest changes
git fetch origin main > /dev/null 2>&1
# Check if the local repository is behind the remote
LOCAL=$(git rev-parse @)
REMOTE=$(git rev-parse @{u})
if [ "$LOCAL" != "$REMOTE" ]; then
curl -d "Updating: $CLEAN_REPO_URL" $NTFY_URL
git pull origin main > /dev/null 2>&1
rsync -av --delete --exclude '.git/' ./ "$DESTINATION" > /dev/null 2>&1
fi
}
# Main loop
while true; do
# Iterate over each environment variable matching 'REPO_[0-9]+'
env | grep '^REPO_[0-9]\+=' | while IFS='=' read -r name value; do
# Split the value by comma and read into separate variables
OLD_IFS="$IFS" # Save the original IFS
IFS=',' # Set IFS to comma for splitting
set -- $value # Set positional parameters ($1, $2, ...)
REPO_URL="$1" # Assign first parameter to REPO_URL
DESTINATION="$2" # Assign second parameter to DESTINATION
IFS="$OLD_IFS" # Restore original IFS
process_repo "$REPO_URL" "$DESTINATION"
done
# Wait for 5 seconds before the next iteration
sleep 5
done
```
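The watcher discovers its work via `REPO_N` environment variables, each holding a comma-separated `<clone URL>,<destination>` pair, which the main loop splits on the first comma. A hypothetical example (the URLs, token, and paths below are placeholders):

```shell
# Hypothetical values only; substitute your own repo URLs and destinations
REPO_1="https://ci-user:token@git.bunny-lab.io/org/repo-a.git,/srv/containers/repo-a"
REPO_2="https://git.bunny-lab.io/org/repo-b.git,/srv/staging/repo-b"
export REPO_1 REPO_2

# The script splits each value on the comma, yielding the URL and destination:
OLD_IFS="$IFS"; IFS=','
set -- $REPO_2
echo "URL: $1"
echo "Destination: $2"
IFS="$OLD_IFS"
```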
**Purpose**:
You may need to install the QEMU guest agent on linux VMs manually, while Windows-based devices work out-of-the-box after installing the VirtIO guest tools installer.
=== "Ubuntu Server"
```sh
sudo su
apt update
apt install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent
```
=== "Rocky Linux"
```sh
sudo su
dnf install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent
```
**Purpose**:
If you need to set up RDP access to a Linux environment, you will want to install XRDP. Once it is installed, you can leverage other tools such as Apache Guacamole to remotely connect to it.
``` sh
# Install and Start XRDP Service
sudo dnf install epel-release -y
sudo dnf install xrdp -y
sudo systemctl enable --now xrdp
# Open Firewall Rules for RDP Traffic
sudo firewall-cmd --permanent --add-port=3389/tcp
sudo firewall-cmd --reload
# Configure Desktop Environment to Launch when you Login via RDP (Run as Non-Root User)
# XFCE4 Desktop Environment
echo "startxfce4" > ~/.Xclients
chmod +x ~/.Xclients
```
**Purpose**: Convert an existing mdadm array to RAID 5 and monitor the reshape progress. [Reference](https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04)
``` sh
sudo mdadm --grow /dev/md0 -l 5
cat /proc/mdstat
```
**Purpose**:
If you want to check if a certain TCP port is open on a server.
## Netcat Command
``` sh
netcat -z -n -v <IP ADDRESS> <PORT>
```
## Purpose
This script is run via a cronjob on `cluster-node-02` at midnight to automatically roll back the DeepLab environment to a previous snapshot every night.
### Bash Script
```sh title="/root/deeplab-rollback.sh"
#!/usr/bin/env bash
# ProxmoxVE Nightly DeepLab Rollback Script
SNAPNAME="ROLLBACK"
DC=140
WIN10=141
WIN11=111
ALL=("$DC" "$WIN10" "$WIN11")
log(){ echo "[$(date '+%F %T')] $*"; }
# Force Stop DeepLab VMs
for id in "${ALL[@]}"; do
log "Force stopping VM $id"
/usr/sbin/qm stop "$id" || true
done
# Rollback Snapshots
for id in "${ALL[@]}"; do
log "Rolling back VM $id to snapshot $SNAPNAME"
/usr/sbin/qm rollback "$id" "$SNAPNAME"
done
# Start DC
log "Starting DC ($DC)"
/usr/sbin/qm start "$DC"
# Wait 2 minutes
log "Waiting 2 minutes for DC to initialize..."
sleep 120
# Start Win10 + Win11
log "Starting WIN10 ($WIN10) and WIN11 ($WIN11)"
/usr/sbin/qm start "$WIN10" &
/usr/sbin/qm start "$WIN11" &
wait
log "Lab Rollback Complete."
```
### Crontab Scheduling
Type `crontab -e` to add an entry to run the job at midnight every day.
=== "With Logging"
``` sh
0 0 * * * /root/deeplab-rollback.sh >> /var/log/deeplab-rollback.log 2>&1
```
=== "Without Logging"
``` sh
0 0 * * * /root/deeplab-rollback.sh > /dev/null 2>&1
```
The commands outlined in this short document are meant to be a quick-reference for setting the timezone and date/time of a Linux-based server.
### Set Timezone
```sh
sudo timedatectl set-timezone America/Denver
```
### Set Time & Date
```sh
sudo date -s "1 JAN 2025 03:30:00"
```
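To confirm the changes took effect, `date` can print the current time along with the timezone abbreviation and UTC offset:

```shell
# Show the current date/time, then just the timezone name and UTC offset
date
date +"%Z %z"
```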
**Purpose**:
If you find that you need to migrate a container, along with any supporting files, permissions, etc from an old server to a new server, rsync helps make this as painless as possible.
Be sure to perform the following steps to make sure that you can copy the container's files.
!!! warning
You need to stop the running containers on the old server before copying their data over, otherwise the state of the data may be unstable. Once you have migrated the data, you can spin up the containers on the new server and confirm they work before deleting the data on the old server.
On the destination (new) server, the directory needs to exist and be writable by the user copying the data over SSH:
## Copying Data Between the Old and New Servers
=== "Safe Method"
``` sh
sudo mkdir -p /srv/containers/example
sudo chmod 740 /srv/containers/example
sudo chown nicole:nicole /srv/containers/example
```
=== "Quick & Dirty Method"
``` sh
sudo mkdir -p /srv/containers
sudo chmod 777 /srv/containers
```
On the source (old) server, perform an rsync over to the new server, authenticating yourself as you will be prompted to do so:
=== "Safe Method"
``` sh
rsync -avz -e ssh --progress /srv/containers/example/* nicole@192.168.3.30:/srv/containers/example
```
=== "Quick & Dirty Method"
``` sh
rsync -avz -e ssh --progress /srv/containers/example nicole@192.168.3.30:/srv/containers
```
=== "Quick & Dirty w/ Provided SSH Key Method"
This method assumes that you have the private key for your SSH-based authentication locally on the server somewhere safe with permissions `chmod 600` applied to it. In this example, I placed the private key at `/tmp/id_rsa_OpenSSH`.
``` sh
rsync -avz -e "ssh -i /tmp/id_rsa_OpenSSH" --progress /srv/containers/pihole nicole@192.168.3.62:/srv/containers
```
## Spinning Up Docker / Portainer Stack
Once everything has been moved over, copy the `docker-compose.yml` and `.env` (environment variables) files from the old server to the new one. Since we maintained the same folder structure, point them at the same locations and the container should spin up like nothing ever happened.
**Purpose**: You may find that you need to transfer a file, such as a public SSH key, or some other kind of file between two devices. In this scenario, we assume both devices have the `netcat` command available to them. By putting a network listener on the device receiving the file, then sending the file to that device's IP and port, you can successfully transfer data between computers without needing to set up SSH, FTP, or anything else to establish initial trust between the devices. [Original Reference Material](https://www.youtube.com/shorts/1j17UBGqSog).
!!! warning
The data being transferred will not be encrypted. If you are transferring relatively-safe files such as public SSH keys, etc, this should be fine.
### Destination Computer
Run the following command on the computer that will be receiving the file.
``` sh
netcat -l <random-port> > /tmp/OUTPUT-AS-FILE.txt
```
### Source Computer
Run the following command on the computer that will be sending the file to the destination computer.
``` sh
cat INPUT-DATA.txt | netcat <IP-of-Destination-Computer> <Port-of-Destination-Computer> -q 0
```
!!! info
The `-q 0` command argument causes the netcat connection to close itself automatically when the transfer is complete.
``` batch
@echo off
REM Change to the Blue Iris 5 directory
CD "C:\Program Files\Blue Iris 5"
:LOOP
REM Check if the BlueIris.exe process is running
tasklist /FI "IMAGENAME eq BlueIris.exe" | find /I "BlueIris.exe" >nul
REM If the process is not found, start the process
if errorlevel 1 (
REM Start BlueIrisAdmin.exe without blocking the loop
start "" BlueIrisAdmin.exe
)
REM Wait for 10 seconds before checking again
timeout /t 10 /nobreak >nul
REM Go back to the beginning of the loop
GOTO :LOOP
```
Robocopy is a useful tool that can be leveraged to copy files and folders from one location to another (e.g. Over the network to another server) without losing file and folder ACLs (permissions / ownership data).
!!! warning "Run as Domain Admin"
When you run Robocopy, especially when transferring data across the network to another remote server, you need to be sure to run the command prompt under the session of a domain admin. Secondly, it needs to be ran as an administrator to ensure the command is successful. This can be done by going to the start menu and typing "**Command Prompt**" > **Right Clicking** > "**Run as Administrator**" while logged in as a domain administrator.
An example of using Robocopy is below, with a full breakdown:
```powershell
robocopy "E:\Source" "Z:\Destination" /Z /B /R:5 /W:5 /MT:4 /COPYALL /E
```
- `robocopy "Source" "Destination"` : Initiates the Robocopy command to copy files from the specified source directory to the designated destination directory.
- `/Z` : Enables Robocopy's restartable mode, which allows it to resume file transfer from the point of interruption once the network connection is re-established.
- `/B` : Activates Backup Mode, enabling Robocopy to override Access Control Lists (ACLs) and copy files regardless of the existing file or folder permissions.
- `/R:5` : Sets the maximum retry count to 5, meaning Robocopy will attempt to copy a file up to five times if the initial attempt fails.
- `/W:5` : Configures a wait time of 5 seconds between retry attempts, providing a brief pause before trying to copy a file again.
- `/MT:4` : Employs multi-threading with 4 threads, allowing Robocopy to process multiple files simultaneously, each in its own thread.
- `/COPYALL` : Instructs Robocopy to preserve all file and folder attributes, including security permissions, timestamps, and ownership information during the copy process.
- `/E` : Directs Robocopy to include all subdirectories in the copy operation, ensuring even empty directories are replicated in the destination.
!!! tip "Usage of Administrative Shares"
Whenever dealing with copying data from one server to another, try to leverage "Administrative Shares", also referred to as "Default Shares". These exist in such a way that, if the server exists in a Windows-based domain, you can type something like `\\SERVER\C$` or `\\SERVER\E$` to access files and bypass most file access restrictions (ACLs). This generally only applies to read-access, write-access may be denied in some circumstances.
An adjusted example can be seen below to account for this usage.
**This example assumes you are running robocopy from the destination computer**.
**Remember**: You are always **PULLING** data with administrative shares, not pushing it. The source should be the administrative share, and the destination should be local (in this example). There are scenarios where you can move data between two network shares, but it's best (and cleaner) to always have a remote/local relationship in the transfer.
```powershell
robocopy "\\SERVER\E$\SOURCE" "E:\DESTINATION" /Z /B /R:5 /W:5 /MT:4 /COPYALL /E
```
# Reference
## Purpose
Quick-use scripts and snippets for day-to-day operations.
## Includes
- Bash, PowerShell, and Batch snippets
- One-off utilities and helpers
## New Document Template
````markdown
# <Script Title>
## Purpose
<why this script exists>
## Script
```sh
# Script content
```
## Usage
```sh
# Example usage
```
## Notes
- <edge cases or caveats>
````
!!! info "Prerequisite: [Connect to Azure AD](./connect-to-azure-ad.md)"
The uppercase `SMTP` address is the primary address, while lowercase `smtp` entries are aliases. You can find the values in Active Directory under **"User > Attribute Editor > proxyAddresses"**.
``` powershell
Get-AzureADUser -ObjectId "user@domain.com" | Select -Property ProxyAddresses
```
!!! example "Example Output"
``` powershell
smtp:alias@domain.com
smtp:alias@domain.onmicrosoft.com
SMTP:primaryaddress@domain.com
```
**Purpose**: Sometimes you will need to connect to Azure AD via powershell in order to perform troubleshooting / automation.
## Update Nuget Package Manager
``` powershell
Install-PackageProvider -Name NuGet -Force -ForceBootstrap
```
## Install AzureAD Powershell Modules
You will need to install the modules for AzureAD before you can run the commands necessary for querying Azure.
``` powershell
Install-Module -Name AzureAD
```
## Connect to AzureAD
When you run the following command, it will open a dialog box to take the username, password, and MFA code (if applicable) for an administrative account in the Azure Active Directory.
``` powershell
Connect-AzureAD
```
**Purpose**: Sometimes you will need to connect to Office365 via powershell in order to perform troubleshooting / automation that either is too complex to do via the website, or is not exposed / possible to do via the website.
## Update Nuget Package Manager
``` powershell
Install-PackageProvider -Name NuGet -Force -ForceBootstrap
```
## Install ExchangeOnlineManagement Powershell Modules
You will need to install and import the modules for Exchange Online before you can run the commands necessary for interacting with it.
``` powershell
Install-Module -Name ExchangeOnlineManagement -Force
Import-Module ExchangeOnlineManagement
```
## Connect to Exchange Online
When you run the following command, it will open a dialog box to take the username, password, and MFA code (if applicable) for an administrative account in the Exchange Online environment.
``` powershell
Connect-ExchangeOnline -UserPrincipalName admin@domain.com
```
**Purpose**:
Sometimes you just need a basic script that outputs a pretty directory and file tree. This script offers files and folders to ignore, and outputs a fancy directory tree.
```powershell
function Export-Tree {
param (
[string]$Path = ".",
[string]$OutFile = "directory_tree.txt"
)
$global:TreeLines = @()
$global:IgnoreList = @(
".git",
"Dependencies"
)
function Walk-Tree {
param (
[string]$Folder,
[string]$Prefix
)
$items = Get-ChildItem -Path $Folder -Force | Where-Object {
$_.Name -ne "." -and $_.Name -ne ".." -and
($global:IgnoreList -notcontains $_.Name)
} | Sort-Object PSIsContainer, Name
$count = $items.Count
for ($i = 0; $i -lt $count; $i++) {
$item = $items[$i]
$connector = if ($i -eq $count - 1) { "└── " } else { "├── " }
$global:TreeLines += "$Prefix$connector$($item.Name)"
if ($item.PSIsContainer) {
$newPrefix = if ($i -eq $count - 1) { "$Prefix " } else { "$Prefix" }
Walk-Tree -Folder $item.FullName -Prefix $newPrefix
}
}
}
Walk-Tree -Folder $Path -Prefix ""
$global:TreeLines | Set-Content -Path $OutFile -Encoding UTF8
}
# Run it
Export-Tree -Path "." -OutFile "directory_tree.txt"
```
## Purpose
When it comes to best practices with Windows-based DNS servers, you never want `127.0.0.1` (or the IP of the server itself) as the primary DNS server; instead, a *different* DNS server should be primary, with `127.0.0.1` as the secondary or tertiary DNS server.
The following script automatically detects which network interface has a default gateway (a server should only ever have one default gateway). It then checks whether the primary DNS server is the localhost IP. If it is, and a secondary DNS server is configured, the script performs an `nslookup` against that secondary server; if the lookup succeeds, the secondary server is promoted to primary and the original primary (loopback) becomes the secondary.
```powershell
<#
Section: Information Gathering
- Gather the adapter(s) with an IP, DNS servers, AND a default gateway set via WMI.
#>
$adapters = Get-WmiObject -Class Win32_NetworkAdapterConfiguration | Where-Object {
$_.IPAddress -ne $null -and
$_.DNSServerSearchOrder -ne $null -and
$_.DefaultIPGateway -ne $null -and
$_.DefaultIPGateway.Count -gt 0
}
foreach ($adapter in $adapters) {
Write-Host "-----------------------------------------------------------"
Write-Host "Adapter Name: $($adapter.Description)"
Write-Host "IP Address: $($adapter.IPAddress -join ', ')"
Write-Host "Default Gateway: $($adapter.DefaultIPGateway -join ', ')"
Write-Host "DNS Server(s): $($adapter.DNSServerSearchOrder -join ', ')"
$localIPs = $adapter.IPAddress + "127.0.0.1"
<#
Section: Information Analysis
- Identify primary and secondary DNS.
- Check if primary DNS matches any local IP.
#>
$primaryDNS = $adapter.DNSServerSearchOrder[0]
$secondaryDNS = $null
if ($adapter.DNSServerSearchOrder.Count -ge 2) {
$secondaryDNS = $adapter.DNSServerSearchOrder[1]
}
$isPrimaryLocal = $false
foreach ($local in $localIPs) {
if ($primaryDNS -eq $local) {
$isPrimaryLocal = $true
break
}
}
if ($isPrimaryLocal) {
Write-Host "Primary DNS matches local IP: Yes"
} else {
Write-Host "Primary DNS matches local IP: No"
}
<#
Section: Information Processing
- If the primary DNS is a local IP and a secondary exists:
a. Test the secondary DNS with nslookup on google.com.
b. Only swap if nslookup is successful.
#>
if ($isPrimaryLocal -and $secondaryDNS) {
Write-Host "Testing nslookup on secondary DNS ($secondaryDNS)..."
$nslookupResult = nslookup google.com $secondaryDNS 2>&1
# Simple check for nslookup success
$nslookupSuccess = $false
if ($nslookupResult -match "Name:\s*google\.com") { $nslookupSuccess = $true }
if ($nslookupResult -match "Non-authoritative answer:") { $nslookupSuccess = $true }
if ($nslookupResult -match "Address:") { $nslookupSuccess = $true }
if ($nslookupSuccess) {
Write-Host "NSlookup via secondary DNS: SUCCESS"
# Swap
$newDnsServers = @($secondaryDNS, $primaryDNS)
if ($adapter.DNSServerSearchOrder.Count -gt 2) {
$newDnsServers += $adapter.DNSServerSearchOrder[2..($adapter.DNSServerSearchOrder.Count - 1)]
}
$result = $adapter.SetDNSServerSearchOrder($newDnsServers)
if ($result.ReturnValue -eq 0) {
Write-Host "DNS servers swapped. New primary: $secondaryDNS, New secondary: $primaryDNS"
} else {
Write-Host "Failed to set new DNS order. Return code: $($result.ReturnValue)"
}
} else {
Write-Host "NSlookup via secondary DNS: FAILED"
Write-Host "DNS servers NOT swapped."
}
} elseif ($isPrimaryLocal -and -not $secondaryDNS) {
Write-Host "No secondary DNS set. No changes made."
} else {
Write-Host "DNS servers are correct. No changes needed."
}
Write-Host "-----------------------------------------------------------"
}
Write-Host "DNS check and correction completed for adapters with a default gateway."
```
**Purpose**:
Locate specific files, and copy them with a renamed datestamp appended to a specific directory.
``` powershell
# Define an array of objects, each having a prefix and a suffix
$files = @(
@{Prefix="name"; Suffix="Extension"},
@{Prefix="name"; Suffix="Extension"}
)
# Define the destination directory
$destination = "C:\folder\to\copy\to"
# Loop over the file name patterns
foreach ($file in $files) {
# Search for files that start with the current prefix and end with the current suffix
$matchedFiles = Get-ChildItem -Path C:\ -Recurse -ErrorAction SilentlyContinue -Filter "$($file.Prefix)*.$($file.Suffix)" # Avoid clobbering the automatic $matches variable
# Loop over the matching files
foreach ($match in $matchedFiles) {
# Get the file's last modified date
$lastModifiedDate = $match.LastWriteTime
# Get the file's owner
$owner = (Get-Acl -Path $match.FullName).Owner
# Output the file name, last modified date, and owner
Write-Output "File: $($match.FullName), Last Modified Date: $lastModifiedDate, Owner: $owner"
# Generate a unique name for the copied file by appending the last modified date and time
$newName = "{0}_{1:yyyyMMdd_HHmmss}{2}" -f $match.BaseName, $lastModifiedDate, $match.Extension
# Copy the file to the destination directory with the new name
Copy-Item -Path $match.FullName -Destination (Join-Path -Path $destination -ChildPath $newName)
}
}
```
## Purpose
Sometimes when you try to run Windows Updates, you may run into issues where updates just fail to install for seemingly nebulous reasons. You can run the following commands (in order) to try to resolve the issue.
```powershell
# Image integrity repair tools
DISM /Online /Cleanup-Image /RestoreHealth
DISM /Online /Cleanup-Image /StartComponentCleanup
sfc /scannow
# Stop all Windows Update services (in order) to unlock underlying files and folders.
net stop usosvc
net stop wuauserv
net stop bits
net stop cryptsvc
# Purge the Windows Update cache folders and recreate them
Remove-Item -Recurse -Force "$env:windir\SoftwareDistribution"
Remove-Item -Recurse -Force "$env:windir\System32\catroot2"
New-Item -ItemType Directory -Path "$env:windir\SoftwareDistribution" | Out-Null
New-Item -ItemType Directory -Path "$env:windir\System32\catroot2" | Out-Null
# Start all Windows Update services (in order)
net start cryptsvc
net start bits
net start wuauserv
net start usosvc
```
!!! info "Attempt Windows Updates"
At this point, you can try re-running Windows Updates and seeing if the device makes it past the errors and installs the updates successfully or not. If not, **panic**.
``` powershell
$computers = Get-ADComputer -Filter * -SearchBase "OU=Computers,DC=bunny-lab,DC=io"
$computers | ForEach-Object -Process {Invoke-GPUpdate -Computer $_.name -RandomDelayInMinutes 0 -Force}
```
## Purpose
This script is designed to iterate over every computer object within an Active Directory domain. It then reaches out to those devices over the network, iterates over every local user profile on them, and uses CIM to determine which profiles have not been logged into in X number of days. If executed without the dry-run flag, it will then delete those profiles (*this does not delete local or domain users; it only cleans up their local profile data on the workstation*).
!!! note "Windows Servers not Targeted"
For safety, this script is designed to not target servers. There is no telling the potential turmoil of clearing profiles in server environments, and to avoid that risk altogether, we skip them entirely.
!!! example "Commandline Arguments"
You can execute the script with the following arguments to change the behavior of the script: `.\UserProfileDataPruner.ps1`
- `-DryRun` : Do not delete local profile data, just report on what (*would*) be deleted.
- `-InactiveDays 90` : Adjust the threshold of the pruning cutoff. (*Default = 90 Days*)
- `-PilotTestingDevices` : Optional comma-separated list of devices to target (*instead of all eligible workstations in Active Directory*)
### Script
You can find the full script below, save it as `UserProfileDataPruner.ps1`:
```powershell
<#
UserProfileDataPruner.ps1
Prune stale local user profile data on Windows workstations.
- Deletes only on-disk profile data via Win32_UserProfile.Delete()
- Never deletes user accounts
- Skips servers (ProductType != 1)
- Parameters:
-DryRun -> shows [DRY-RUN]: lines; no deletions
-InactiveDays [int] -> default 90
-PilotTestingDevices -> optional comma-separated list or string[]; if omitted, auto-discovers all enabled Windows workstations in AD
Output:
[INFO]: Total Hosts Queried: <N>
[INFO]: Skipped host(s) due to WinRM/Connectivity/TimeDifference/SPN issues: <N>
[DRY-RUN]/[INFO]: Deleting X profile(s) on "<HOST>" (user1,user2,...) # per host
<ASCII summary table at the bottom>
#>
[CmdletBinding()]
param(
[switch]$DryRun,
[int]$InactiveDays = 90,
[string[]]$PilotTestingDevices
)
begin {
function Write-Info([string]$msg){ Write-Host "[INFO]: $msg" }
function Write-Dry ([string]$msg){ Write-Host "[DRY-RUN]: $msg" }
function Convert-LastUse {
param([object]$raw)
if ($null -eq $raw) { return $null }
if ($raw -is [datetime]) {
$dt = [datetime]$raw
if ($dt.Kind -eq [System.DateTimeKind]::Utc) { return $dt } else { return $dt.ToUniversalTime() }
}
if ($raw -is [int64] -or $raw -is [uint64] -or $raw -is [int] -or ($raw -is [string] -and $raw -match '^\d+$')) {
try { return [DateTime]::FromFileTimeUtc([int64]$raw) } catch { return $null }
}
if ($raw -is [string] -and $raw -match '^\d{14}\.\d{6}[-+]\d{3}$') {
try { $d = [System.Management.ManagementDateTimeConverter]::ToDateTime($raw); return $d.ToUniversalTime() }
catch { return $null }
}
if ($raw -is [string]) {
$tmp = $null
if ([DateTime]::TryParse($raw, [ref]$tmp)) { return $tmp.ToUniversalTime() }
}
return $null
}
function Try-TranslateSid($sid) {
try {
(New-Object System.Security.Principal.SecurityIdentifier($sid)).Translate([System.Security.Principal.NTAccount]).Value
} catch { $null }
}
function Test-HostOnline {
param([string]$Computer)
try { Test-WSMan -ComputerName $Computer -ErrorAction Stop | Out-Null; return $true }
catch { return $false }
}
function Show-AsciiTable {
param([hashtable]$Data)
# Keep order if [ordered] was used
$keys = @($Data.Keys)
$values = $keys | ForEach-Object { [string]$Data[$_] }
$wKey = [Math]::Max(4, ($keys | ForEach-Object { $_.ToString().Length } | Measure-Object -Maximum).Maximum)
$wVal = [Math]::Max(5, ($values | ForEach-Object { $_.ToString().Length } | Measure-Object -Maximum).Maximum)
$sep = '+' + ('-'*($wKey+2)) + '+' + ('-'*($wVal+2)) + '+'
Write-Host $sep
foreach ($k in $keys) {
$v = [string]$Data[$k]
Write-Host ('| {0} | {1} |' -f $k.PadRight($wKey), $v.PadRight($wVal))
}
Write-Host $sep
}
# Normalize any comma-separated single string into an array
if ($PilotTestingDevices -and $PilotTestingDevices.Count -eq 1 -and $PilotTestingDevices[0] -match ',') {
$PilotTestingDevices = $PilotTestingDevices[0].Split(',') | ForEach-Object { $_.Trim() } | Where-Object { $_ }
}
# Targets: use provided list, else discover workstations from AD
$Targets = @()
if ($PilotTestingDevices -and ($PilotTestingDevices | Where-Object { -not [string]::IsNullOrWhiteSpace($_) })) {
$Targets = $PilotTestingDevices | ForEach-Object { $_.Trim() } | Where-Object { $_ } | Select-Object -Unique
} else {
try { Import-Module ActiveDirectory -ErrorAction Stop }
catch { throw "ActiveDirectory module not found. Install RSAT or specify -PilotTestingDevices." }
# Discover enabled Windows workstations (exclude servers)
$ad = Get-ADComputer -Filter * -Properties OperatingSystem, DNSHostName, Enabled
$Targets = $ad |
Where-Object {
$_.Enabled -and $_.DNSHostName -and
($_.OperatingSystem -like 'Windows*') -and
($_.OperatingSystem -notmatch 'Server')
} |
Select-Object -ExpandProperty DNSHostName
}
if (-not $Targets -or $Targets.Count -eq 0) { throw "No eligible Windows workstations to query." }
$CutoffUtc = [DateTime]::UtcNow.AddDays(-$InactiveDays)
$Throttle = 25
# ---------- Remote blocks ----------
$RemoteEnumerateProfiles = {
param([datetime]$CutoffUtc)
function Convert-LastUse {
param([object]$raw)
if ($null -eq $raw) { return $null }
if ($raw -is [datetime]) {
$dt = [datetime]$raw
if ($dt.Kind -eq [System.DateTimeKind]::Utc) { return $dt } else { return $dt.ToUniversalTime() }
}
if ($raw -is [int64] -or $raw -is [uint64] -or $raw -is [int] -or ($raw -is [string] -and $raw -match '^\d+$')) {
try { return [DateTime]::FromFileTimeUtc([int64]$raw) } catch { return $null }
}
if ($raw -is [string] -and $raw -match '^\d{14}\.\d{6}[-+]\d{3}$') {
try { $d = [System.Management.ManagementDateTimeConverter]::ToDateTime($raw); return $d.ToUniversalTime() }
catch { return $null }
}
if ($raw -is [string]) {
$tmp = $null
if ([DateTime]::TryParse($raw, [ref]$tmp)) { return $tmp.ToUniversalTime() }
}
return $null
}
function Try-TranslateSid($sid) {
try { (New-Object System.Security.Principal.SecurityIdentifier($sid)).Translate([System.Security.Principal.NTAccount]).Value }
catch { $null }
}
# Skip non-workstations
try {
$hasCIM = [bool](Get-Command -Name Get-CimInstance -ErrorAction SilentlyContinue)
$os = if ($hasCIM) { Get-CimInstance Win32_OperatingSystem -ErrorAction Stop }
else { Get-WmiObject Win32_OperatingSystem -ErrorAction Stop }
if ($os.ProductType -ne 1) { return } # not a workstation
} catch { return }
try {
$hasCIM = [bool](Get-Command -Name Get-CimInstance -ErrorAction SilentlyContinue)
$profiles = if ($hasCIM) {
Get-CimInstance -ClassName Win32_UserProfile -ErrorAction Stop
} else {
Get-WmiObject -Class Win32_UserProfile -ErrorAction Stop
}
$profiles = $profiles | Where-Object {
$_.Special -eq $false -and
$_.Loaded -eq $false -and
$_.LocalPath -like 'C:\Users\*'
}
foreach ($p in $profiles) {
$sid = $p.SID
$nameGuess = Split-Path $p.LocalPath -Leaf
$acc = Try-TranslateSid $sid
$accName = if ($acc -and ($acc -like '*\*')) { ($acc -split '\\',2)[1] } else { $nameGuess }
$luUtc = Convert-LastUse ($p.PSObject.Properties['LastUseTime'].Value)
# Optional fast size (not always present)
$sizeBytes = $null
$szProp = $p.PSObject.Properties['Size']
if ($szProp -and $szProp.Value -ne $null) { try { $sizeBytes = [int64]$szProp.Value } catch { } }
$stale = ($null -eq $luUtc) -or ($luUtc -lt $CutoffUtc)
[PSCustomObject]@{
Computer = $env:COMPUTERNAME
SID = $sid
AccountName = $accName
AccountFQN = $acc
LocalPath = $p.LocalPath
LastUseUtc = $luUtc
Eligible = $stale
SizeBytes = $sizeBytes
}
}
} catch {
Write-Error ($_.Exception.Message)
}
}
$RemoteDeleteProfiles = {
param([string[]]$SIDs)
$results = @()
$hasCIM = [bool](Get-Command -Name Get-CimInstance -ErrorAction SilentlyContinue)
foreach ($sid in $SIDs) {
try {
if ($hasCIM) {
$obj = Get-CimInstance -ClassName Win32_UserProfile -Filter ("SID='{0}'" -f $sid) -ErrorAction Stop
if (-not $obj -or $obj.Loaded -or $obj.Special) { $results += [pscustomobject]@{SID=$sid;Deleted=$false;Code='SKIP';Message='Not found or loaded/special'}; continue }
$rv = Invoke-CimMethod -InputObject $obj -MethodName Delete -ErrorAction Stop
if ($rv.ReturnValue -eq 0) { $results += [pscustomobject]@{SID=$sid;Deleted=$true; Code=0; Message='OK'} }
else { $results += [pscustomobject]@{SID=$sid;Deleted=$false;Code=$rv.ReturnValue;Message='Delete returned non-zero'} }
} else {
$obj = Get-WmiObject -Class Win32_UserProfile -Filter ("SID='{0}'" -f $sid) -ErrorAction Stop
if (-not $obj -or $obj.Loaded -or $obj.Special) { $results += [pscustomobject]@{SID=$sid;Deleted=$false;Code='SKIP';Message='Not found or loaded/special'}; continue }
$rv = $obj.Delete()
if ($rv.ReturnValue -eq 0) { $results += [pscustomobject]@{SID=$sid;Deleted=$true; Code=0; Message='OK'} }
else { $results += [pscustomobject]@{SID=$sid;Deleted=$false;Code=$rv.ReturnValue;Message='Delete returned non-zero'} }
}
} catch {
$results += [pscustomobject]@{SID=$sid;Deleted=$false;Code='EXC';Message=$_.Exception.Message}
}
}
return $results
}
    $SkippedHosts = New-Object System.Collections.Generic.HashSet[string] # names we couldn't query or errored on
}
process {
# Reachability (WSMan) filter
$TotalHostsQueried = $Targets.Count
$reachable = @()
foreach ($c in $Targets) {
if (Test-HostOnline $c) { $reachable += $c } else { $null = $SkippedHosts.Add([string]$c) }
}
if (-not $reachable) {
Write-Info ("Total Hosts Queried: {0}" -f $TotalHostsQueried)
Write-Info ("Skipped host(s) due to WinRM/Connectivity/TimeDifference/SPN issues: {0}" -f $SkippedHosts.Count)
throw "No reachable hosts via WinRM."
}
# Inventory
$remoteErrors = @()
$inv = Invoke-Command -ComputerName $reachable -ThrottleLimit 25 `
-ScriptBlock $RemoteEnumerateProfiles -ArgumentList $CutoffUtc `
-ErrorAction Continue -ErrorVariable +remoteErrors
foreach ($e in $remoteErrors) {
if ($e.PSComputerName) { $null = $SkippedHosts.Add([string]$e.PSComputerName) }
}
$rows = $inv | Where-Object { $_ -and $_.SID -and $_.LocalPath -like 'C:\Users\*' }
$eligibleRows = $rows | Where-Object { $_.Eligible -eq $true }
# Plan per host
$plan = @{}
foreach ($r in $eligibleRows) {
if (-not $plan.ContainsKey($r.Computer)) {
$plan[$r.Computer] = [PSCustomObject]@{
SIDs = New-Object System.Collections.Generic.List[string]
Names = New-Object System.Collections.Generic.List[string]
Size = [int64]0
}
}
$plan[$r.Computer].SIDs.Add($r.SID)
$plan[$r.Computer].Names.Add($r.AccountName)
if ($r.SizeBytes -ne $null) { $plan[$r.Computer].Size += [int64]$r.SizeBytes }
}
# Host-level counters first
Write-Info ("Total Hosts Queried: {0}" -f $TotalHostsQueried)
Write-Info ("Skipped host(s) due to WinRM/Connectivity/TimeDifference/SPN issues: {0}" -f $SkippedHosts.Count)
# Per-host summary lines
$hostKeys = $plan.Keys | Sort-Object
foreach ($h in $hostKeys) {
$sids = $plan[$h].SIDs | Sort-Object -Unique
$names = $plan[$h].Names | Where-Object { $_ } | Sort-Object -Unique
$list = '(' + ($names -join ',') + ')'
if ($DryRun) { Write-Dry ("Deleting {0} profile(s) on ""{1}"" {2}" -f $sids.Count, $h, $list) }
else { Write-Info("Deleting {0} profile(s) on ""{1}"" {2}" -f $sids.Count, $h, $list) }
}
# Execute deletes when not DryRun
if (-not $DryRun -and $hostKeys.Count -gt 0) {
foreach ($h in $hostKeys) {
$sids = $plan[$h].SIDs | Sort-Object -Unique
try {
$null = Invoke-Command -ComputerName $h -ThrottleLimit 1 `
-ScriptBlock $RemoteDeleteProfiles -ArgumentList (,$sids) `
-ErrorAction Stop
} catch {
Write-Info ("Host ""{0}"": delete attempt failed: {1}" -f $h, $_.Exception.Message)
}
}
}
# ---------- Bottom summary table ----------
$analyzed = $rows.Count
$evaluated = $rows.Count
$eligibleCount = $eligibleRows.Count
$fleetBytes = ($eligibleRows | Where-Object { $_.SizeBytes -ne $null } | Measure-Object -Property SizeBytes -Sum).Sum
$fleetGB = if ($fleetBytes -and $fleetBytes -gt 0) { "{0} GB" -f ([Math]::Round($fleetBytes / 1GB, 2)) } else { "N/A" }
$summary = [ordered]@{
"Local User Profiles Analyzed" = "$analyzed"
"User Profiles Evaluated" = "$evaluated"
("User Profiles Not Logged in for {0}+ Days" -f $InactiveDays) = "$eligibleCount"
"Estimated Data To Remove (if executed)" = "$fleetGB"
}
Show-AsciiTable -Data $summary
}
end { }
```

View File

@@ -0,0 +1,47 @@
Rclone is a command-line program to manage files on cloud storage. It is a feature-rich alternative to cloud vendors' web storage interfaces. Over 70 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.
!!! warning "Be Mindful of Sync Type"
    The `bisync` command is meant to keep multiple locations synced with each other, while in contrast the `sync` command forces the source to overwrite the destination. If you just want to dump the source into the destination on top of existing data, use rclone's `copy` command.
## Usage Overview
There is a lot to keep in mind when using rclone, primarily with the `sync` command. You can find more information in the [Official Documentation](https://rclone.org/commands/).
## rClone `bisync` Implementation
Perform bi-directional synchronization between two locations (e.g. `Local` and `Remote`). Bisync provides a bi-directional cloud sync solution in rclone, retaining the file structure and history data in both the `Local` and `Remote` locations from the first time you run it.
### Example Usage
The following commands illustrate how to use bisync to synchronize a local folder and a remote folder (assumed to be Google Drive).
!!! example "Explanation of Command Arguments"
- `--drive-skip-gdocs`: This prevents the sync from syncing Google Drive specific documents such as `*.gsheet`, `*.gdoc`, etc.
- `--resilient`: This means that if there are network interruptions, rclone will attempt to recover on its own automatically.
- `--conflict-resolve newer`: This is how the bisync determines how to declare a "*winner*" and a "*loser*".
- The winner being the newer file, and the loser being the older file.
- `--conflict-loser delete`: This is the action to perform to the older file when a conflict is found in either direction.
- `--update`: This skips files that are newer on the destination, allowing us to ensure that the newest changes on the remote storage are pulled down before performing our first bisync.
=== "Initial Sync"
We want to first sync down any files that are from the remote location (Google Drive/Remote Folder/Network Share/etc) and overwrite any local files with the newer files. This ONLY overwrites local files that are older than the remote files, but if the local files are newer, they are left alone.
```powershell
.\rclone.exe sync "Remote" "Local" --update --log-level INFO --drive-skip-gdocs --create-empty-src-dirs --progress
```
=== "Subsequent Syncs"
    At this point the local directory has the newest remote version of every file that exists in both locations: any file changed in Google Drive that was newer than its local copy has overwritten the local copy, while newer local files were left alone. This second command performs the first and all subsequent bisyncs, with conflict resolution, meaning:
- If the remote file was newer, it deletes the older local file and overwrites it with the newer remote file,
- If the local file was newer, it deletes the older remote file and overwrites it with the newer local file
```powershell
.\rclone.exe bisync "Local" "Remote" --create-empty-src-dirs --conflict-resolve newer --conflict-loser delete --compare size,modtime,checksum --resilient --log-level ERROR --drive-skip-gdocs --fix-case --force --progress --exclude="**/*.lnk"
```
=== "Repairing a Broken BiSync"
If you find your bisync has somehow gone awry, and you need to re-create the differencing databases that are used by rclone to determine which files are local and which are remote, you can run the following command to (non-destructively) re-build the databases to restore bisync functionality.
The only core difference between this command and the "Subsequent Sync" command, is the addition of `--resync` to the argument list.
```powershell
.\rclone.exe bisync "Local" "Remote" --create-empty-src-dirs --conflict-resolve newer --conflict-loser delete --compare size,modtime,checksum --resilient --log-level ERROR --drive-skip-gdocs --fix-case --force --progress --exclude="**/*.lnk" --resync
```
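Before trusting a new bisync pairing, it can help to preview exactly what rclone would change without touching any files. `--dry-run` is a standard rclone flag; the command below mirrors the "Subsequent Syncs" invocation above with that flag added:

```powershell
# Preview the bisync without modifying either side; INFO logging shows each planned action.
# "Local" and "Remote" follow the same naming used in the commands above.
.\rclone.exe bisync "Local" "Remote" --dry-run --create-empty-src-dirs --conflict-resolve newer --conflict-loser delete --compare size,modtime,checksum --resilient --log-level INFO --drive-skip-gdocs --exclude="**/*.lnk"
```

If the preview shows unexpected deletions or overwrites, adjust the conflict-resolution arguments before running the real sync.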

View File

@@ -0,0 +1,34 @@
**Purpose**:
Sometimes you need to restart a service across every computer in an Active Directory Domain. This PowerShell script restarts a specific service by name domain-wide, processing each device serially, one-by-one.
!!! warning "Under Construction"
This document is under construction and is not yet generalized for general-purpose use. Manual work is needed to repurpose this script for general usage.
```powershell
# Clear the screen before running the script
Clear-Host
Write-Host "Starting Domain-Wide Service Restart" -ForegroundColor Green
# Main Script -------------------------------------------------------------------------------------------------------------------------
# Get a list of all servers from Active Directory
Write-Host "Retrieving server list from Active Directory..."
$servers = Get-ADComputer -Filter * -Property OperatingSystem | Where-Object {
$_.OperatingSystem -like "*Server*"
} | Select-Object -ExpandProperty Name
# Loop through all servers and start the 'cagservice' service
foreach ($server in $servers) {
Write-Host ("Attempting to start 'cagservice' on " + $server + "...")
try {
Invoke-Command -ComputerName $server -ScriptBlock {
Start-Service -Name cagservice -ErrorAction Stop
Write-Host "'cagservice' started successfully on $env:COMPUTERNAME"
} -ErrorAction Stop
} catch {
Write-Host ("Failed to start 'cagservice' on " + $server + ": " + $_.Exception.Message) -ForegroundColor Red
}
}
Write-Host "Script execution completed." -ForegroundColor Green
```
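A minimal sketch of how the loop above could be generalized: parameterize the service name instead of hardcoding `cagservice`, and use `Restart-Service` to match the stated purpose. The wrapper filename and parameter name below are assumptions, not part of the original script:

```powershell
# Hypothetical generalized wrapper, e.g.: .\Restart-DomainService.ps1 -ServiceName "Spooler"
param(
    [Parameter(Mandatory)][string]$ServiceName
)
# Same server discovery as the original script
$servers = Get-ADComputer -Filter * -Property OperatingSystem |
    Where-Object { $_.OperatingSystem -like "*Server*" } |
    Select-Object -ExpandProperty Name
foreach ($server in $servers) {
    try {
        Invoke-Command -ComputerName $server -ScriptBlock {
            param($svc)
            # Restart (stop + start) rather than only starting the service
            Restart-Service -Name $svc -ErrorAction Stop
        } -ArgumentList $ServiceName -ErrorAction Stop
        Write-Host "Restarted '$ServiceName' on $server" -ForegroundColor Green
    } catch {
        Write-Host "Failed to restart '$ServiceName' on ${server}: $($_.Exception.Message)" -ForegroundColor Red
    }
}
```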

View File

@@ -0,0 +1,511 @@
**Purpose**:
You may need to upgrade a device to Windows 11 using an ISO stored on a UNC network share; the script below handles that.
!!! note "Environment Variables"
A few environment variables should be set when running the script. This can be done by hardcoding them into the script, or by setting them in the PowerShell session before executing the script.
### Environment Variables
| **Variable** | **Type** | **Default Value** | **Additional Notes** |
| :--- | :--- | :--- | :--- |
|usrReboot|Boolean|true|Configure whether to reboot the device immediately once it is ready to install Windows 11.|
|usrImagePath|String|Supply URI here|The network URI of the Windows 11 ISO to download using BITS. Windows 11 ISO links can be generated using a newly-made link on the Microsoft website.|
|usrOverrideChecks|Boolean|false|Override blocking issues?|
|usrShowOOBE|Selection|`Skip Out-of-Box Experience`|Display or skip the post-install Out-of-Box Experience dialogue (Alternative is `Show Out-of-Box Experience`).|
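For example, the variables can be set in the current session before invoking the script. The UNC path and script filename below are placeholders; substitute your own:

```powershell
# Placeholder values; adjust the share path and preferences for your environment.
$env:usrReboot         = 'true'
$env:usrImagePath      = '\\FILESERVER\ISOs\Win11_English_x64.iso'   # hypothetical UNC path
$env:usrOverrideChecks = 'false'
$env:usrShowOOBE       = 'Skip Out-of-Box Experience'

# Then run the upgrade script in the same session (filename is illustrative):
.\Upgrade-ToWindows11.ps1
```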
```powershell
function generateSHA256 ($executable, $storedHash) {
$fileBytes = [io.File]::ReadAllBytes("$executable")
$bytes = [Security.Cryptography.HashAlgorithm]::Create("SHA256").ComputeHash($fileBytes)
$varCalculatedHash=-Join ($bytes | ForEach {"{0:x2}" -f $_})
if ($storedHash -match $varCalculatedHash) {
write-host "+ Filehash verified for file $executable`: $storedHash"
} else {
write-host "! ERROR: Filehash mismatch for file $executable."
write-host " Expected value: $storedHash"
write-host " Received value: $varCalculatedHash"
write-host " Please report this error."
exit 1
}
}
function verifyPackage ($file, $certificate, $thumbprint1, $thumbprint2, $name, $url) { #special two-thumbprint edition
$varChain = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Chain
try {
$varChain.Build((Get-AuthenticodeSignature -FilePath "$file").SignerCertificate) | out-null
} catch [System.Management.Automation.MethodInvocationException] {
write-host "- ERROR: $name installer did not contain a valid digital certificate."
write-host " This could suggest a change in the way $name is packaged; it could"
write-host " also suggest tampering in the connection chain."
write-host "- Please ensure $url is whitelisted and try again."
write-host " If this issue persists across different devices, please file a support ticket."
}
$varIntermediate=($varChain.ChainElements | ForEach-Object {$_.Certificate} | Where-Object {$_.Subject -match "$certificate"}).Thumbprint
if ($varIntermediate -ne $thumbprint1 -and $varIntermediate -ne $thumbprint2) {
write-host "- ERROR: $file did not pass verification checks for its digital signature."
write-host " This could suggest that the certificate used to sign the $name installer"
write-host " has changed; it could also suggest tampering in the connection chain."
write-host `r
if ($varIntermediate) {
write-host ": We received: $varIntermediate"
write-host " We expected: $thumbprint1"
write-host " -OR- : $thumbprint2"
write-host " Please report this issue."
}
write-host "- Installation cannot continue. Exiting."
exit 1
} else {
write-host "+ Digital Signature verification passed."
}
}
function quitOr {
if ($env:usrOverrideChecks -match 'true') {
write-host "! This is a blocking error and should abort the process; however, the usrOverrideChecks"
write-host " flag has been enabled, and the error will thus be ignored."
write-host " Support will not be able to assist with issues that arise as a consequence of this action."
} else {
write-host "! This is a blocking error; the operation has been aborted."
Write-Host " If you do not believe the error to be valid, you can re-run this Component with the"
write-host " `'usrOverrideChecks`' flag enabled, which will ignore blocking errors and proceed."
write-host " Support will not be able to assist with issues that arise as a consequence of this action."
Stop-Process -name setupHost -ErrorAction SilentlyContinue
exit 1
}
}
function makeHTTPRequest ($tempHost) { #makeHTTPRequest v5: make an HTTP request and ensure a status code (any) is returned
$tempRequest = [System.Net.WebRequest]::Create($tempHost)
try {
$tempResponse=$tempRequest.getResponse()
$TempReturn=($tempResponse.StatusCode -as [int])
} catch [System.Exception] {
$tempReturn=$_.Exception.Response.StatusCode.Value__
}
if ($tempReturn -match '200') {
write-host "- Confirmed file at $tempHost is ready for download."
} else {
write-host "! ERROR: No file was found at $temphost."
write-host " If you are downloading from Microsoft, this may mean a bad URL was entered;"
write-host " bear in mind that ISO links generated from Microsoft.com are only valid for"
write-host " 24 hours before needing to be re-calculated."
write-host " Generate new links at https://www.microsoft.com/software-download/windows11."
exit 1
}
}
#===================================================================================================================================================
#kernel data
[int]$varKernel=(get-wmiObject win32_operatingSystem buildNumber).buildNumber
#user text
write-host `r
write-host "Windows 11 Updater: Update Windows 10 2004+ to the latest build of Windows 11"
write-host "==============================================================================="
write-host "`: Upgrading from: Build $varKernel /" (get-WMiObject -Class win32_operatingSystem).caption
############################################### LANGUAGE INFORMATION ###############################################
#language table (for ISO download) :: 2021 seagull/datto, inc.
$varLCID=(Get-ItemProperty hklm:\system\controlset001\control\nls\language -name InstallLanguage).InstallLanguage
$arrLCID=@{
"0401"=[PSCustomObject]@{Title="Arabic"; Localisation="Arabic"; MSKeyword="Arabic"; DattoKeyword="US"}
"0416"=[PSCustomObject]@{Title="Brazilian Portuguese"; Localisation="Brazilian Portuguese"; MSKeyword="BrazilianPortuguese"; DattoKeyword="US"}
"0402"=[PSCustomObject]@{Title="Bulgarian"; Localisation="Bulgarian"; MSKeyword="Bulgarian"; DattoKeyword="US"}
"0804"=[PSCustomObject]@{Title="Chinese (Simplified)"; Localisation="Chinese (Simplified)"; MSKeyword="Chinese(Simplified)"; DattoKeyword="US"}
"0004"=[PSCustomObject]@{Title="Chinese (Simplified)"; Localisation="Chinese (Simplified)"; MSKeyword="Chinese(Simplified)"; DattoKeyword="US"}
"7804"=[PSCustomObject]@{Title="Chinese (Simplified)"; Localisation="Chinese (Simplified)"; MSKeyword="Chinese(Simplified)"; DattoKeyword="US"}
"1004"=[PSCustomObject]@{Title="Chinese (Singapore)"; Localisation="Chinese (Simplified)"; MSKeyword="Chinese(Simplified)"; DattoKeyword="US"}
"0C04"=[PSCustomObject]@{Title="Chinese (Hong Kong)"; Localisation="Chinese (Traditional)"; MSKeyword="Chinese(Traditional)"; DattoKeyword="US"}
"0404"=[PSCustomObject]@{Title="Chinese (Taiwan)"; Localisation="Chinese (Traditional)"; MSKeyword="Chinese(Traditional)"; DattoKeyword="US"}
"7C04"=[PSCustomObject]@{Title="Chinese (Traditional)"; Localisation="Chinese (Traditional)"; MSKeyword="Chinese(Traditional)"; DattoKeyword="US"}
"041A"=[PSCustomObject]@{Title="Croatian"; Localisation="Croatian"; MSKeyword="Croatian"; DattoKeyword="US"}
"0405"=[PSCustomObject]@{Title="Czech"; Localisation="Czech"; MSKeyword="Czech"; DattoKeyword="US"}
"0005"=[PSCustomObject]@{Title="Czech"; Localisation="Czech"; MSKeyword="Czech"; DattoKeyword="US"}
"0006"=[PSCustomObject]@{Title="Danish"; Localisation="Danish"; MSKeyword="Danish"; DattoKeyword="US"}
"0406"=[PSCustomObject]@{Title="Danish"; Localisation="Danish"; MSKeyword="Danish"; DattoKeyword="US"}
"0013"=[PSCustomObject]@{Title="Dutch"; Localisation="Dutch"; MSKeyword="Dutch"; DattoKeyword="US"}
"0813"=[PSCustomObject]@{Title="Dutch (Belgium)"; Localisation="Dutch"; MSKeyword="Dutch"; DattoKeyword="US"}
"0413"=[PSCustomObject]@{Title="Dutch (Netherlands)"; Localisation="Dutch"; MSKeyword="Dutch"; DattoKeyword="US"}
"0009"=[PSCustomObject]@{Title="English (Generic)"; Localisation="English"; MSKeyword="English"; DattoKeyword="US"}
"0409"=[PSCustomObject]@{Title="English (United States)"; Localisation="English"; MSKeyword="English"; DattoKeyword="US"}
"0809"=[PSCustomObject]@{Title="English (UK)"; Localisation="English (International)"; MSKeyword="EnglishInternational"; DattoKeyword="UK"}
"0C09"=[PSCustomObject]@{Title="English (Australia)"; Localisation="English (International)"; MSKeyword="EnglishInternational"; DattoKeyword="US"}
"1409"=[PSCustomObject]@{Title="English (New Zealand)"; Localisation="English (International)"; MSKeyword="EnglishInternational"; DattoKeyword="US"}
"1009"=[PSCustomObject]@{Title="English (Canada)"; Localisation="English (International)"; MSKeyword="EnglishInternational"; DattoKeyword="US"}
"1C09"=[PSCustomObject]@{Title="English (South Africa)"; Localisation="English (International)"; MSKeyword="EnglishInternational"; DattoKeyword="US"}
"0025"=[PSCustomObject]@{Title="Estonian"; Localisation="Estonian"; MSKeyword="Estonian"; DattoKeyword="US"}
"0425"=[PSCustomObject]@{Title="Estonian"; Localisation="Estonian"; MSKeyword="Estonian"; DattoKeyword="US"}
"000B"=[PSCustomObject]@{Title="Finnish"; Localisation="Finnish"; MSKeyword="Finnish"; DattoKeyword="US"}
"040B"=[PSCustomObject]@{Title="Finnish"; Localisation="Finnish"; MSKeyword="Finnish"; DattoKeyword="US"}
"000C"=[PSCustomObject]@{Title="French"; Localisation="French"; MSKeyword="French"; DattoKeyword="US"}
"040C"=[PSCustomObject]@{Title="French"; Localisation="French"; MSKeyword="French"; DattoKeyword="US"}
"080C"=[PSCustomObject]@{Title="French (Belgium)"; Localisation="French"; MSKeyword="French"; DattoKeyword="US"}
"100C"=[PSCustomObject]@{Title="French (Switzerland)"; Localisation="French"; MSKeyword="French"; DattoKeyword="US"}
"0C0C"=[PSCustomObject]@{Title="French Canadian"; Localisation="French Canadian"; MSKeyword="FrenchCanadian"; DattoKeyword="US"}
"0007"=[PSCustomObject]@{Title="German"; Localisation="German"; MSKeyword="German"; DattoKeyword="US"}
"0407"=[PSCustomObject]@{Title="German"; Localisation="German"; MSKeyword="German"; DattoKeyword="US"}
"0C07"=[PSCustomObject]@{Title="German (Austria)"; Localisation="German"; MSKeyword="German"; DattoKeyword="US"}
"0807"=[PSCustomObject]@{Title="German (Switzerland)"; Localisation="German"; MSKeyword="German"; DattoKeyword="US"}
"0008"=[PSCustomObject]@{Title="Greek"; Localisation="Greek"; MSKeyword="Greek"; DattoKeyword="US"}
"0408"=[PSCustomObject]@{Title="Greek"; Localisation="Greek"; MSKeyword="Greek"; DattoKeyword="US"}
"000D"=[PSCustomObject]@{Title="Hebrew"; Localisation="Hebrew"; MSKeyword="Hebrew"; DattoKeyword="US"}
"040D"=[PSCustomObject]@{Title="Hebrew"; Localisation="Hebrew"; MSKeyword="Hebrew"; DattoKeyword="US"}
"000E"=[PSCustomObject]@{Title="Hungarian"; Localisation="Hungarian"; MSKeyword="Hungarian"; DattoKeyword="US"}
"040E"=[PSCustomObject]@{Title="Hungarian"; Localisation="Hungarian"; MSKeyword="Hungarian"; DattoKeyword="US"}
"0010"=[PSCustomObject]@{Title="Italian"; Localisation="Italian"; MSKeyword="Italian"; DattoKeyword="US"}
"0410"=[PSCustomObject]@{Title="Italian"; Localisation="Italian"; MSKeyword="Italian"; DattoKeyword="US"}
"0810"=[PSCustomObject]@{Title="Italian (Switzerland)"; Localisation="Italian"; MSKeyword="Italian"; DattoKeyword="US"}
"0011"=[PSCustomObject]@{Title="Japanese"; Localisation="Japanese"; MSKeyword="Japanese"; DattoKeyword="US"}
"0411"=[PSCustomObject]@{Title="Japanese"; Localisation="Japanese"; MSKeyword="Japanese"; DattoKeyword="US"}
"0012"=[PSCustomObject]@{Title="Korean"; Localisation="Korean"; MSKeyword="Korean"; DattoKeyword="US"}
"0412"=[PSCustomObject]@{Title="Korean"; Localisation="Korean"; MSKeyword="Korean"; DattoKeyword="US"}
"0026"=[PSCustomObject]@{Title="Latvian"; Localisation="Latvian"; MSKeyword="Latvian"; DattoKeyword="US"}
"0426"=[PSCustomObject]@{Title="Latvian"; Localisation="Latvian"; MSKeyword="Latvian"; DattoKeyword="US"}
"0027"=[PSCustomObject]@{Title="Lithuanian"; Localisation="Lithuanian"; MSKeyword="Lithuanian"; DattoKeyword="US"}
"0427"=[PSCustomObject]@{Title="Lithuanian"; Localisation="Lithuanian"; MSKeyword="Lithuanian"; DattoKeyword="US"}
"0014"=[PSCustomObject]@{Title="Norwegian (Bokm?l)"; Localisation="Norwegian"; MSKeyword="Norwegian"; DattoKeyword="US"}
"7C14"=[PSCustomObject]@{Title="Norwegian (Bokm?l)"; Localisation="Norwegian"; MSKeyword="Norwegian"; DattoKeyword="US"}
"0414"=[PSCustomObject]@{Title="Norwegian (Bokm?l)"; Localisation="Norwegian"; MSKeyword="Norwegian"; DattoKeyword="US"}
"7814"=[PSCustomObject]@{Title="Norwegian (Nynorsk)"; Localisation="Norwegian"; MSKeyword="Norwegian"; DattoKeyword="US"}
"0814"=[PSCustomObject]@{Title="Norwegian (Nynorsk)"; Localisation="Norwegian"; MSKeyword="Norwegian"; DattoKeyword="US"}
"0015"=[PSCustomObject]@{Title="Polish"; Localisation="Polish"; MSKeyword="Polish"; DattoKeyword="US"}
"0415"=[PSCustomObject]@{Title="Polish"; Localisation="Polish"; MSKeyword="Polish"; DattoKeyword="US"}
"0816"=[PSCustomObject]@{Title="Portuguese"; Localisation="Portuguese"; MSKeyword="Portuguese"; DattoKeyword="US"}
"0018"=[PSCustomObject]@{Title="Romanian"; Localisation="Romanian"; MSKeyword="Romanian"; DattoKeyword="US"}
"0418"=[PSCustomObject]@{Title="Romanian"; Localisation="Romanian"; MSKeyword="Romanian"; DattoKeyword="US"}
"0818"=[PSCustomObject]@{Title="Moldovan"; Localisation="Romanian"; MSKeyword="Romanian"; DattoKeyword="US"}
"0019"=[PSCustomObject]@{Title="Russian"; Localisation="Russian"; MSKeyword="Russian"; DattoKeyword="US"}
"0419"=[PSCustomObject]@{Title="Russian"; Localisation="Russian"; MSKeyword="Russian"; DattoKeyword="US"}
"0819"=[PSCustomObject]@{Title="Russian (Moldova)"; Localisation="Russian"; MSKeyword="Russian"; DattoKeyword="US"}
"701A"=[PSCustomObject]@{Title="Serbian (Latin)"; Localisation="Serbian Latin"; MSKeyword="SerbianLatin"; DattoKeyword="US"}
"7C1A"=[PSCustomObject]@{Title="Serbian (Latin)"; Localisation="Serbian Latin"; MSKeyword="SerbianLatin"; DattoKeyword="US"}
"181A"=[PSCustomObject]@{Title="Serbian (Latin, BO/HE)"; Localisation="Serbian Latin"; MSKeyword="SerbianLatin"; DattoKeyword="US"}
"2C1A"=[PSCustomObject]@{Title="Serbian (Latin, MO)"; Localisation="Serbian Latin"; MSKeyword="SerbianLatin"; DattoKeyword="US"}
"241A"=[PSCustomObject]@{Title="Serbian (Latin)"; Localisation="Serbian Latin"; MSKeyword="SerbianLatin"; DattoKeyword="US"}
"081A"=[PSCustomObject]@{Title="Serbian (Latin, SR/MO)"; Localisation="Serbian Latin"; MSKeyword="SerbianLatin"; DattoKeyword="US"}
"001B"=[PSCustomObject]@{Title="Slovak"; Localisation="Slovak"; MSKeyword="Slovak"; DattoKeyword="US"}
"041B"=[PSCustomObject]@{Title="Slovak"; Localisation="Slovak"; MSKeyword="Slovak"; DattoKeyword="US"}
"0024"=[PSCustomObject]@{Title="Slovenian"; Localisation="Slovenian"; MSKeyword="Slovenian"; DattoKeyword="US"}
"0424"=[PSCustomObject]@{Title="Slovenian"; Localisation="Slovenian"; MSKeyword="Slovenian"; DattoKeyword="US"}
"000A"=[PSCustomObject]@{Title="Spanish (Spain)"; Localisation="Spanish"; MSKeyword="Spanish"; DattoKeyword="US"}
"040A"=[PSCustomObject]@{Title="Spanish (Spain)"; Localisation="Spanish"; MSKeyword="Spanish"; DattoKeyword="US"}
"0C0A"=[PSCustomObject]@{Title="Spanish (Spain)"; Localisation="Spanish"; MSKeyword="Spanish"; DattoKeyword="US"}
"2C0A"=[PSCustomObject]@{Title="Spanish (Argentina)"; Localisation="Spanish (Mexico)"; MSKeyword="Spanish(Mexico)"; DattoKeyword="US"}
"340A"=[PSCustomObject]@{Title="Spanish (Chile)"; Localisation="Spanish (Mexico)"; MSKeyword="Spanish(Mexico)"; DattoKeyword="US"}
"580A"=[PSCustomObject]@{Title="Spanish (Latin America)"; Localisation="Spanish (Mexico)"; MSKeyword="Spanish(Mexico)"; DattoKeyword="US"}
"080A"=[PSCustomObject]@{Title="Spanish (M?xico)"; Localisation="Spanish (Mexico)"; MSKeyword="Spanish(Mexico)"; DattoKeyword="US"}
"001D"=[PSCustomObject]@{Title="Swedish"; Localisation="Swedish"; MSKeyword="Swedish"; DattoKeyword="US"}
"041D"=[PSCustomObject]@{Title="Swedish"; Localisation="Swedish"; MSKeyword="Swedish"; DattoKeyword="US"}
"001E"=[PSCustomObject]@{Title="Thai"; Localisation="Thai"; MSKeyword="Thai"; DattoKeyword="US"}
"041E"=[PSCustomObject]@{Title="Thai"; Localisation="Thai"; MSKeyword="Thai"; DattoKeyword="US"}
"001F"=[PSCustomObject]@{Title="Turkish"; Localisation="Turkish"; MSKeyword="Turkish"; DattoKeyword="US"}
"041F"=[PSCustomObject]@{Title="Turkish"; Localisation="Turkish"; MSKeyword="Turkish"; DattoKeyword="US"}
"0022"=[PSCustomObject]@{Title="Ukrainian"; Localisation="Ukrainian"; MSKeyword="Ukrainian"; DattoKeyword="US"}
"0422"=[PSCustomObject]@{Title="Ukrainian"; Localisation="Ukrainian"; MSKeyword="Ukrainian"; DattoKeyword="US"}
}
#if they're running something we don't understand...
if (!($($arrLCID[$varLCID].DattoKeyword))) {
$arrLCID[$varLCID]=[PSCustomObject]@{Title="Unknown";Localisation="English";MSKeyword="English";DattoKeyword="US"}
}
#output this information
write-host ": Device language: $($arrLCID[$varLCID].Title)"
write-host ": Suggested carryover: $($arrLCID[$varLCID].Localisation)"
############################################### ISO COMPATIBILITY ###############################################
#define an early SKU list and add to it depending on user choice
$arrGoodSKU=@(48,49,98,99,100,101)
if (($env:usrImagePath -as [string]).Length -lt 2 -or $env:usrImagePath -eq 'Supply URI here') {
#nothing
write-host "! ERROR: No image path defined."
write-host " The Component works by downloading a Windows 11 ISO from the Internet"
write-host " (or a local share), mounting it and installing from it."
write-host " Without a link to an ISO, nothing can be downloaded."
write-host `r
write-host " Generate a Windows 11 ISO download link good for 24 hours at:"
write-host " https://www.microsoft.com/software-download/windows11"
exit 1
} elseif ($env:usrImagePath -match 'software-download.microsoft.com') {
#microsoft
write-host ": ISO Download location: Microsoft servers."
write-host " Please be aware that Microsoft's ISO download links expire after 24 hours."
write-host " If the download fails, your link may need to be re-generated."
makeHTTPRequest $env:usrImagePath
#compare ISO region to device region
$varISOLang=$env:usrImagePath.split('_')[1]
write-host ": MS ISO Language: $varISOLang"
if ($varISOLang -ne $($arrLCID[$varLCID].MSKeyword)) {
write-host "! ERROR: Mismatch between device language and Microsoft ISO."
write-host " The languages must match up as closely as possible otherwise the installation will fail."
write-host " This error can be overridden if you are certain this will not pose an issue."
quitOr
}
} else {
#custom
write-host ": ISO location set by user. Edition, Language &c. defined by image."
$arrGoodSKU+=4,27,84,161,162 #add valid SKUs beyond our/MS's reach
}
#separate check: check SKU if the user is not supplying their own ISO
[int]$varSKU=(Get-WmiObject -Class win32_operatingsystem -Property OperatingSystemSKU).OperatingSystemSKU
if ($arrGoodSKU | ? {$_ -eq $varSKU}) {
write-host "+ Device Windows SKU ($varSKU) is supported."
} else {
write-host "! ERROR: Device Windows SKU ($varSKU) not supported."
write-host " Windows 11 can only be installed on devices running Windows 10 2004 onward;"
write-host " meaning devices with SKUs discontinued by Microsoft are not compatible."
write-host " Enterprise, Pro-for-Workstations and Education edition ISOs are not supplied"
write-host " from Microsoft and thus, these cannot be updated from a Microsoft URL."
write-host " This error can be overridden if you are certain the SKU will not pose an issue."
quitOr
}
write-host "`: ISO download path: $env:usrImagePath"
############################################### HARDWARE COMPAT ###############################################
#architecture
if ((Get-WMIObject -Class Win32_Processor).Architecture -ne 9) {
write-host "! ERROR: This device does not have an AMD64/EM64T-capable processor."
write-host " Windows 11 will not run on 32-bit devices."
write-host " Installation cancelled; exiting."
exit 1
} elseif ([intptr]::Size -eq 4) {
write-host ": 32-bit Windows detected, but device processor is AMD64/EM64T-capable."
write-host " An architecture upgrade will be attempted; the device will lose"
write-host " the ability to run 16-bit programs, but 32-bit programs will"
write-host " continue to work using Windows-on-Windows (WOW) emulation."
} else {
write-host "+ 64-bit architecture checks passed."
}
#minimum W10-04
if ($varKernel -lt 19041) {
write-host "! ERROR: Windows 10 version 2004 or higher is required to proceed."
quitOr
}
#services pipe timeout
REG ADD "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d "300000" /f 2>&1>$null
write-host ": Device service timeout period configured to five minutes."
write-host "+ Target device OS is Windows 10 2004 or greater."
#make sure it's licensed (v3)
if (!(Get-WmiObject SoftwareLicensingProduct | ? { $_.LicenseStatus -eq 1 } | select -ExpandProperty Description | select-string "Windows" -Quiet)) {
write-host "! ERROR: Windows 10 can only be installed on devices with an active Windows licence."
quitOr
}
write-host "+ Target device has a valid Windows licence."
#make sure we have enough disk space - installation plus iso hosting
$varSysFree = [Math]::Round((Get-WMIObject -Class Win32_Volume |Where-Object {$_.DriveLetter -eq $env:SystemDrive} | Select -expand FreeSpace) / 1GB)
if ($varSysFree -lt 20) {
write-host "! ERROR: System drive requires at least 20GB: 13 for installation, 7 for the disc image."
quitOr
}
write-host "+ Target device has at least 20GB of free hard disk space."
#check for RAM
if (((Get-WmiObject -class "cim_physicalmemory" | Measure-Object -Property Capacity -Sum).Sum / 1024 / 1024 / 1024) -lt 4) {
write-host "! ERROR: This machine may not have enough RAM installed."
write-host " Windows 11 requires at least 4GB of system RAM to be installed."
write-host " In case of errors, please check this device's RAM."
quitOr
} else {
write-host "+ Device has at least 4GB of RAM installed."
}
#TPM check
$varTPM=@(0,0,0) # present :: enabled :: activated
if ((Get-WmiObject -Class Win32_TPM -EnableAllPrivileges -Namespace "root\CIMV2\Security\MicrosoftTpm").__SERVER) { # TPM installed
$varTPM[0]=1
if ((Get-WmiObject -Namespace ROOT\CIMV2\Security\MicrosoftTpm -Class Win32_Tpm).IsEnabled().isenabled -eq $true) { # TPM enabled
$varTPM[1]=1
if ((Get-WmiObject -Namespace ROOT\CIMV2\Security\MicrosoftTpm -Class Win32_Tpm).IsActivated().isactivated -eq $true) { # TPM activated
$varTPM[2]=1
} else {
$varTPM[2]=0
}
} else {
$varTPM[1]=0
$varTPM[2]=0
}
}
switch -Regex ($varTPM -as [string]) {
'^0' {
write-host "! ERROR: This system does not contain a Trusted Platform Module (TPM)."
write-host " Windows 11 requires the use of a TPM to install."
write-host " Your device may contain a firmware TPM (fTPM) which can be enabled in the BIOS/uEFI settings. More info:"
write-host " https://support.microsoft.com/en-us/windows/enable-tpm-2-0-on-your-pc-1fd5a332-360d-4f46-a1e7-ae6b0c90645c"
write-host "- Cannot continue; exiting."
quitOr
} '0 0$' {
write-host "! ERROR: Whilst a TPM was detected in this system, the WMI reports that it is disabled."
write-host " Please re-enable the use of the TPM and try installing again."
write-host "- Cannot continue; exiting."
quitOr
} default {
write-host "! ERROR: Whilst a TPM was detected in this system, the WMI reports that it has been deactivated."
write-host " Please re-activate the TPM and try installing again."
write-host "- Cannot continue; exiting."
quitOr
} '1$' {
write-host "+ TPM installed and active."
} $null {
write-host "! ERROR: A fault has occurred during the TPM checking subroutine. Please report this."
quitOr
}
# to those who read my scripts: this logic is taken from the "bitlocker & TPM audit" component, which offers a much more in-depth
# look at a device's bitlocker/TPM status than is offered here. grab it from the comstore today! -- seagull nov '21
}
############################################### IMAGE TRANSFER ###############################################
# Added UNC/SMB support; preserves all original error handling / unattended flow
##############################################################################################################
# We still import BITS for HTTP/S transfers, but SMB copies don't rely on it.
try {
Import-Module BitsTransfer -Force -ErrorAction Stop
Write-Host ": BitsTransfer module loaded."
}
catch {
Write-Host ": BitsTransfer module unavailable HTTP/HTTPS downloads will fail."
}
if ($env:usrImagePath -match 'amp;') {
$env:usrImagePath = $env:usrImagePath -replace 'amp;'
}
$isoDestPath = "$env:PUBLIC\Win11.iso"
$srcPath = $env:usrImagePath
function Copy-IsoFromUNC {
param(
[string]$UncPath,
[string]$DestPath
)
Write-Host ": Copying ISO from UNC share: $UncPath"
if (-not (Test-Path $UncPath)) {
Write-Host "! ERROR: UNC path not reachable or permissions denied (`$UncPath`)."
Write-Host " Remember Datto RMM executes as NT AUTHORITY\\SYSTEM; the share must allow"
Write-Host " read access for the computer account **or** the Everyone group."
quitOr
}
# Robocopy provides restartable transfers & built-in hashing.
$roboLog = "$env:PUBLIC\\robocopy_win11_iso.log"
$cmd = @(
'robocopy',
('"' + (Split-Path $UncPath -Parent) + '"'),
('"' + (Split-Path $DestPath -Parent) + '"'),
('"' + (Split-Path $UncPath -Leaf) + '"'),
'/NFL','/NDL','/NJH','/NJS','/NP', # quiet output
'/R:3','/W:5', # retry logic
'/V','/FFT','/Z', # verbose output, tolerant timestamps, restartable
"/LOG:`"$roboLog`""
) -join ' '
Write-Host ": Executing -> $cmd"
Invoke-Expression $cmd | Out-Null
if (-not (Test-Path $DestPath)) {
Write-Host "! ERROR: Robocopy did not create $DestPath. Check $roboLog."
quitOr
}
Write-Host "+ ISO copied successfully to $DestPath"
}
################################################################
# Decide transfer method #
################################################################
if ($srcPath -match '^(\\\\|//)') {
# --- UNC / SMB path --------------------------------------
Copy-IsoFromUNC -UncPath $srcPath -DestPath $isoDestPath
}
elseif ($srcPath -match '^https?://') {
# --- HTTP / HTTPS download (original BITS logic) ----------
Write-Host ": Downloading ISO via BITS from $srcPath"
Start-BitsTransfer -Source $srcPath -Destination $isoDestPath -DisplayName 'Windows 11 ISO'
}
else {
Write-Host "! ERROR: usrImagePath must begin with http(s):// **or** \\\\server\\share\\Win11.iso"
quitOr
}
################################################################
# Post-transfer validation #
################################################################
if (Test-Path $isoDestPath -ErrorAction SilentlyContinue) {
Write-Host "+ ISO present at $isoDestPath transfer verified."
} else {
Write-Host "! ERROR: ISO not found at $isoDestPath after transfer."
Write-Host " For SMB shares, double-check permissions; for HTTP, ensure URL is valid."
quitOr
}
#extract the image
generateSHA256 7z.dll "DB2897EEEA65401EE1BD8FEEEBD0DBAE8867A27FF4575F12B0B8A613444A5EF7"
generateSHA256 7z.exe "A20D93E7DC3711E8B8A8F63BD148DDC70DE8C952DE882C5495AC121BFEDB749F"
.\7z.exe x -y "$env:PUBLIC\Win11.iso" `-o"$env:PUBLIC\Win11Extract" -aoa -bsp0 -bso0
#verify extraction
if (!(test-path "$env:PUBLIC\Win11Extract\setup.exe" -ErrorAction SilentlyContinue)) {
write-host "! ERROR: Extraction of Windows 11 ISO failed."
write-host " Possible causes/fixes:"
write-host " - Download aborted. Check that the ISO can be mounted."
write-host " - Inadequate allowlisting. Ensure the ISO is reachable over the network."
write-host " - Permission issues. On a UNC share, the ISO must be viewable by LocalSystem."
write-host " - Something caused the extraction to fail (very high CPU usage?)"
write-host " Operations aborted: cannot proceed."
quitOr
}
start-sleep -Seconds 15
Remove-Item "$env:PUBLIC\Win11.iso" -Force
write-host "+ ISO extracted to $env:PUBLIC\Win11Extract. ISO file deleted."
#make a cleanup script to remove the win11 folder post-install :: ps2 compat
@"
@echo off
REM This is a cleanup script. For more information, consult your systems administrator.
rd `"$env:PUBLIC\Win11Extract`" /s /q
del `"$env:PUBLIC\cleanup.bat`" /s /q /f
"@ | set-content -path "$env:PUBLIC\cleanup.bat" -Force
#verify the windows 11 setup.exe -- just to make sure it's legit
verifyPackage "$env:PUBLIC\Win11Extract\setup.exe" 'Microsoft Code Signing PCA' "8BFE3107712B3C886B1C96AAEC89984914DC9B6B" "3CAF9BA2DB5570CAF76942FF99101B993888E257" "Windows 11 Setup" "your network location"
#install
start-sleep -Seconds 30
if ($env:usrReboot -match 'true') {
& "$env:PUBLIC\Win11Extract\setup.exe" /auto upgrade /eula accept /quiet /compat IgnoreWarning /PostOOBE "$env:PUBLIC\cleanup.bat" /showOOBE $env:usrShowOOBE
} else {
& "$env:PUBLIC\Win11Extract\setup.exe" /auto upgrade /eula accept /quiet /compat IgnoreWarning /PostOOBE "$env:PUBLIC\cleanup.bat" /showOOBE $env:usrShowOOBE /NoReboot
}
#close
write-host "================================================================"
write-host "`- The Windows 11 Setup executable has been instructed to begin installation."
write-host " This Component has performed its job and will retire, but the task is still ongoing`;"
write-host " if errors occur with the installation process, logs will be saved automatically in"
write-host " $env:WinDir\logs\SetupDiag\SetupDiagResults.xml after the fact."
if ($env:usrReboot -match 'true') {
write-host " Please be aware that several hours may pass before the device shows visible signs."
} else {
write-host " Please allow ~4 hours for the setup preparation step to conclude and then reboot the"
write-host " device to begin the upgrade process."
}
```

## Purpose
Sometimes things go awry with backup servers and Hyper-V, and a large number of extra `.avhdx` virtual differencing disks are created, taking up a ton of space. This is problematic because if the underlying storage runs out of space, the virtual machines running on it will stop working. In rare cases this can involve dozens or even hundreds of differencing disks that need to be manually merged, or "collapsed", to reclaim the lost space.
This script automatically iterates through the entire differencing disk chain all the way back to the base / parent disk, then collapses the chain downward from the newest checkpoint (provided as an argument to the script) into the original (non-differencing) base disk. This automates a huge amount of work when the issue is caused by backup servers or other unexplained anomalies.
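Conceptually, the chain discovery at the heart of the script is just a repeated lookup of each disk's `ParentPath` until it comes back empty. A minimal sketch (using a hypothetical path) looks like this:

```powershell
# Walk a differencing-disk chain from the newest .avhdx up to its base .vhdx
$disk = "Z:\Example\Virtual Hard Disks\Example.avhdx"  # hypothetical path
while ($disk) {
    Write-Output $disk
    $disk = (Get-VHD -Path $disk).ParentPath  # empty once the base disk is reached
}
```

The full script below wraps this same walk with safety checks and the actual `Merge-VHD` calls.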
## Powershell Script
Copy the contents of the following script somewhere on your computer and save it as `Get-HyperVParentDisks.ps1`.
``` powershell
param (
[Parameter(Mandatory=$true, HelpMessage="Specify the path to the AVHDX file.")]
[string]$AVHDXPath,
[Parameter(Mandatory=$false, HelpMessage="Specify this flag to merge disks into their parents.")]
[switch]$MergeIntoParents,
[Parameter(Mandatory=$false, HelpMessage="Specify this flag to simulate the merging process without actually performing it.")]
[switch]$DryRun
)
function Get-ParentDisk {
param (
[string]$ChildDisk
)
$diskInfo = Get-VHD -Path $ChildDisk
return $diskInfo.ParentPath
}
function Get-AllParentDisksChain {
param ([string]$CurrentDisk)
$parentDiskChain = @()
while ($CurrentDisk) {
$parentDisk = Get-ParentDisk -ChildDisk $CurrentDisk
if ($parentDisk) {
$parentDiskChain += $CurrentDisk # Add the current disk to the chain before moving to the parent
$CurrentDisk = $parentDisk
} else {
break
}
}
$parentDiskChain += $CurrentDisk # Add the base disk at the end of the chain
return $parentDiskChain
}
function Merge-DiskIntoParent {
param ([string]$ChildDisk, [string]$ParentDisk, [int]$DiskNumber, [int]$TotalDisks)
if ($DryRun) {
Write-Output "[Differential Disk $DiskNumber of $TotalDisks]"
Write-Output "Child: $ChildDisk"
Write-Output "Parent: $ParentDisk"
Write-Output "[Dry Run] Would Merge Child into Parent"
} else {
Write-Output "[Differential Disk $DiskNumber of $TotalDisks]"
Write-Output "Child: $ChildDisk"
Write-Output "Parent: $ParentDisk"
try {
$childDiskInfo = Get-VHD -Path $ChildDisk
if ($childDiskInfo.VhdFormat -ne 'VHDX' -or $childDiskInfo.VhdType -ne 'Differencing') {
Write-Output "Error: $ChildDisk is not a valid differencing disk (AVHDX) and cannot be merged."
throw "Invalid Disk Type for Merging."
}
Merge-VHD -Path $ChildDisk -DestinationPath $ParentDisk -Confirm:$false
Write-Output "Successfully Merged Child into Parent"
} catch {
Write-Output "Failed to Merge ${ChildDisk} into ${ParentDisk}: $_"
Restart-Service -Name vmms
throw "Merge failed. Halting Script."
}
}
}
Write-Output "Starting Parent Disk Chain Search for: $AVHDXPath"
$parentDiskChain = Get-AllParentDisksChain -CurrentDisk $AVHDXPath
$totalDisks = $parentDiskChain.Count
Write-Output "Total Parent Disks Found: $totalDisks"
if ($MergeIntoParents) {
Write-Output "`nStarting Merge Process..."
for ($i = 0; $i -lt ($totalDisks - 1); $i++) {
$currentDisk = $parentDiskChain[$i]
$nextDisk = $parentDiskChain[$i + 1]
Merge-DiskIntoParent -ChildDisk $currentDisk -ParentDisk $nextDisk -DiskNumber ($i + 1) -TotalDisks $totalDisks
}
Write-Output "Merge Process Completed."
} elseif ($DryRun) {
Write-Output "[Dry Run] Merge Simulation Completed."
}
```
## Script Usage Syntax
You can run the script in a few different ways, seen below:
Merge Disks:
`.\Get-HyperVParentDisks.ps1 -MergeIntoParents -AVHDXPath "Z:\Example\Virtual Hard Disks\Example.avhdx"`
!!! info "Example Output"
```
Starting Parent Disk Chain Search for: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_E5F78673-3DAD-4211-AC0A-A3BDEB763B63.avhdx
Total Parent Disks Found: 6
Starting Merge Process...
[Differential Disk 1 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_E5F78673-3DAD-4211-AC0A-A3BDEB763B63.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_8B9EDF27-6B7D-4766-AE60-ED67BF3055AE.avhdx
Successfully Merged Child into Parent
[Differential Disk 2 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_8B9EDF27-6B7D-4766-AE60-ED67BF3055AE.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_6607B03C-E3F8-49CC-A69B-68BA3DACE81F.avhdx
Successfully Merged Child into Parent
[Differential Disk 3 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_6607B03C-E3F8-49CC-A69B-68BA3DACE81F.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_BB68092D-626C-47AA-A20D-93DB0FEB4167.avhdx
Successfully Merged Child into Parent
[Differential Disk 4 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_BB68092D-626C-47AA-A20D-93DB0FEB4167.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_2E6147A8-1C6E-4A07-ABA8-6DE3AFB79974.avhdx
Successfully Merged Child into Parent
[Differential Disk 5 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_2E6147A8-1C6E-4A07-ABA8-6DE3AFB79974.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER.vhdx
Successfully Merged Child into Parent
Merge Process Completed.
```
Dry Run (Non-Destructive):
`.\Get-HyperVParentDisks.ps1 -MergeIntoParents -DryRun -AVHDXPath "Z:\Example\Virtual Hard Disks\Example.avhdx"`
!!! info "Example Output"
```
Starting Parent Disk Chain Search for: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_E5F78673-3DAD-4211-AC0A-A3BDEB763B63.avhdx
Total Parent Disks Found: 6
Starting Merge Process...
[Differential Disk 1 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_E5F78673-3DAD-4211-AC0A-A3BDEB763B63.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_8B9EDF27-6B7D-4766-AE60-ED67BF3055AE.avhdx
[Dry Run] Would Merge Child into Parent
[Differential Disk 2 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_8B9EDF27-6B7D-4766-AE60-ED67BF3055AE.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_6607B03C-E3F8-49CC-A69B-68BA3DACE81F.avhdx
[Dry Run] Would Merge Child into Parent
[Differential Disk 3 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_6607B03C-E3F8-49CC-A69B-68BA3DACE81F.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_BB68092D-626C-47AA-A20D-93DB0FEB4167.avhdx
[Dry Run] Would Merge Child into Parent
[Differential Disk 4 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_BB68092D-626C-47AA-A20D-93DB0FEB4167.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_2E6147A8-1C6E-4A07-ABA8-6DE3AFB79974.avhdx
[Dry Run] Would Merge Child into Parent
[Differential Disk 5 of 6]
Child: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER_2E6147A8-1C6E-4A07-ABA8-6DE3AFB79974.avhdx
Parent: Z:\DISK-MERGE-TESTER\Virtual Hard Disks\DISK-MERGE-TESTER.vhdx
[Dry Run] Would Merge Child into Parent
Merge Process Completed.
```

**Purpose**:
You may find that you cannot delete a VHDX file for a virtual machine you removed from Hyper-V and/or a Hyper-V Failover Cluster, and either cannot afford to, or do not want to, reboot your virtualization host(s) to release the file lock held by `SYSTEM`.
Run the following commands to unlock the file and delete it:
```powershell
Dismount-VHD -Path "C:\Path\To\Disk.vhdx" -ErrorAction SilentlyContinue
Remove-Item -Path "C:\Path\To\Disk.vhdx" -Force
```
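If the deletion still fails, it can help to confirm whether Windows actually considers the file attached before forcing the dismount. A quick check (sketch, using the same placeholder path):

```powershell
# Inspect the attach state of the VHDX before attempting removal
Get-VHD -Path "C:\Path\To\Disk.vhdx" | Select-Object Path, Attached, DiskNumber
```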

**Purpose**: Sometimes a Hyper-V Failover Cluster node does not want to shut down, or is having issues preventing you from migrating VMs to another node in the cluster, etc. In these situations, you can run this script to force a cluster node to reboot itself.
!!! warning "Run from a Different Server"
You absolutely do not want to run this script locally on the node that is having the issues. Some of these commands only work when the script is run from another node in the cluster (or another domain-joined device) while logged in with a domain administrator account.
```powershell
# PowerShell Script to Reboot a Hyper-V Failover Cluster Node and Kill clussvc
# Prompt for the hostname
$hostName = Read-Host -Prompt "Enter the hostname of the Hyper-V Failover Cluster Node"
# Output the step
Try{
Write-Host "Sending reboot command to $hostName..."
# Send the reboot command
Restart-Computer -ComputerName $hostName -Force -ErrorAction Stop
}
Catch{
Write-Host "Reboot already in queue"
}
# Output waiting
Write-Host "Waiting for 120 seconds..."
# Wait for 120 seconds
Start-Sleep -Seconds 120
# Output stopping clussvc
Write-Host "Checking if Cluster Service needs to be stopped"
# Kill the clussvc service
Invoke-Command -ComputerName $hostName -ScriptBlock {
try {
$service = Get-Service -Name clussvc
$process = Get-Process -Name $service.Name
Stop-Process -Id $process.Id -Force
} catch {
Write-Host "Error stopping clussvc: $_"
}
}
# Output the step
Write-Host "Waiting for 60 seconds..."
Start-Sleep -Seconds 60
# Kill the VMMS service
Invoke-Command -ComputerName $hostName -ScriptBlock {
try {
$service = Get-Service -Name vmms
$process = Get-Process -Name $service.Name
Stop-Process -Id $process.Id -Force
} catch {
Write-Host "Error stopping VMMS: $_"
}
}
# Output the completion
Write-Host "Reboot for $hostName should now be underway."
```

**Purpose**:
This script *bumps* any replication that has entered a paused state due to a replication error, and records failed restart attempts in a log. Logs rotate out every 5 days.
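Before scheduling the full script, you can inspect the states it keys off manually. A sketch of the same health query it performs per node (run locally on a cluster node):

```powershell
# List replications on the local node whose health has degraded to Critical
Get-VMReplication | Where-Object { $_.ReplicationHealth -eq 'Critical' } |
    Select-Object Name, ReplicationState, ReplicationHealth
```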
``` powershell
# Define the directory to store the log files
$logDir = "C:\ClusterStorage\Volume1\Scripts\Logs"
if (-not (Test-Path $logDir)) {
New-Item -Path $logDir -ItemType Directory
}
# Get today's date and format it for the log file name
$today = Get-Date -Format "yyyyMMdd"
$logFile = Join-Path -Path $logDir -ChildPath "ReplicationLog_$today.txt"
# Manually create the log file if it doesn't exist
if (-not (Test-Path $logFile)) {
Write-Host "Log file does not exist. Attempting creation..."
try {
New-Item -Path $logFile -ItemType File
Write-Host "Log file $logFile created successfully."
} catch {
Write-Error "Failed to create log file. Error: $_"
}
}
# Delete log files older than 5 days
Get-ChildItem -Path $logDir -Filter "ReplicationLog_*.txt" | Where-Object {
$_.CreationTime -lt (Get-Date).AddDays(-5)
} | Remove-Item
# Get a list of all nodes in the cluster
$clusterNodes = Get-ClusterNode
# Iterate over each cluster node
foreach ($node in $clusterNodes) {
try {
# Get VMs with Critical ReplicationHealth from the current node
$vmsInCriticalState = Get-VMReplication -ComputerName $node.Name | Where-Object { $_.ReplicationHealth -eq 'Critical' }
} catch {
Write-Error "Failed to retrieve VMs from Node: $($node.Name). Error: $_"
# Log the error and continue to the next node
Add-Content -Path $logFile -Value "Failed to retrieve VMs from Node: $($node.Name) at $(Get-Date)"
continue
}
foreach ($vm in $vmsInCriticalState) {
Write-Host "Checking VM: $($vm.Name) on Node: $($node.Name) for replication issues."
Write-Host "Replication State for VM: $($vm.Name) is $($vm.ReplicationState)"
# Check if the replication state is valid to resume
if ($vm.ReplicationState -eq 'Resynchronization required' -or $vm.ReplicationState -eq 'WaitingForStartResynchronize') {
Write-Warning "Replication for VM: $($vm.Name) on Node: $($node.Name) is in '$($vm.ReplicationState)' state. Skipping..."
# Log the VM that is in 'Resynchronization required' or 'WaitingForStartResynchronize' state
Add-Content -Path $logFile -Value "Replication for VM: $($vm.Name) on Node: $($node.Name) is in '$($vm.ReplicationState)' state at $(Get-Date)"
continue
}
try {
# Try to resume replication for the VM
Resume-VMReplication -VMName $vm.Name -ComputerName $node.Name
Write-Host "Resumed replication for VM: $($vm.Name) on Node: $($node.Name)"
} catch {
Write-Error "Failed to resume replication for VM: $($vm.Name) on Node: $($node.Name) - $_"
# Write the failed VM name to the log file
Add-Content -Path $logFile -Value "Failed to resume replication for VM: $($vm.Name) on Node: $($node.Name) at $(Get-Date)"
}
}
}
```

**Purpose**: This script was purpose-built for the Minecraft servers in my homelab. It may need to be adapted to fit your own environment.
```powershell
clear
# #################################################
# # BUNNY LAB - MINECRAFT UPGRADE SCRIPT #
# #################################################
# Function to display the banner
function Show-Banner {
Write-Host "#################################################" -ForegroundColor Cyan
Write-Host "# BUNNY LAB - MINECRAFT UPGRADE SCRIPT #" -ForegroundColor Cyan
Write-Host "#################################################" -ForegroundColor Cyan
}
# Function to get user input for the zip file
function Get-ZipFileName {
Write-Host "Step 1: Getting the zip file name from user input..." -ForegroundColor Yellow
Write-Host "Please enter the name of the newest Minecraft ZIP file in the downloads folder (e.g., 'Server-Files-x.xx.zip'): " -ForegroundColor Yellow
$zipFileName = Read-Host
$zipFilePath = "C:\Users\nicole.rappe\Downloads\$zipFileName"
# Check if the zip file exists in the Downloads folder
Write-Host "Verifying if the specified ZIP file exists at: $zipFilePath" -ForegroundColor Yellow
if (-not (Test-Path $zipFilePath)) {
Write-Host "File not found! Please check the file name and try again." -ForegroundColor Red
exit
}
Write-Host "ZIP file found: $zipFilePath" -ForegroundColor Green
return $zipFilePath
}
# Function to unzip the file without nesting
function Unzip-ServerFiles {
param (
[string]$zipFilePath,
[string]$destinationFolder
)
Write-Host "Step 2: Unzipping the server files to: $destinationFolder" -ForegroundColor Yellow
# Create a temporary folder for extraction
$tempFolder = "$env:TEMP\MinecraftTemp"
Write-Host "Creating temporary folder for extraction at: $tempFolder" -ForegroundColor Yellow
# Remove the temporary folder if it exists, then recreate it
if (Test-Path $tempFolder) {
Write-Host "Temporary folder exists. Removing existing folder..." -ForegroundColor Yellow
Remove-Item -Recurse -Force $tempFolder
}
New-Item -ItemType Directory -Path $tempFolder
Write-Host "Unzipping new server files..." -ForegroundColor Green
Expand-Archive -Path $zipFilePath -DestinationPath $tempFolder -Force
Write-Host "Moving unzipped files to destination folder: $destinationFolder" -ForegroundColor Green
# Move the contents of the temporary folder to the destination
Get-ChildItem -Path $tempFolder | Move-Item -Destination $destinationFolder -Force # Moving top-level items carries their contents along; -Recurse would try to move already-moved children and throw errors
# Clean up the temporary folder
Write-Host "Cleaning up temporary folder..." -ForegroundColor Yellow
Remove-Item -Recurse -Force $tempFolder
}
# Function to copy specific files/folders from the old deployment
function Copy-ServerData {
param (
[string]$sourceFolder,
[string]$destinationFolder
)
Write-Host "Step 3: Copying server data from: $sourceFolder to: $destinationFolder" -ForegroundColor Yellow
# Files to copy
Write-Host "Copying essential files..." -ForegroundColor Yellow
Copy-Item "$sourceFolder\eula.txt" "$destinationFolder\eula.txt" -Force
Copy-Item "$sourceFolder\user_jvm_args.txt" "$destinationFolder\user_jvm_args.txt" -Force
Copy-Item "$sourceFolder\ops.json" "$destinationFolder\ops.json" -Force
Copy-Item "$sourceFolder\server.properties" "$destinationFolder\server.properties" -Force
# Copy-Item "$sourceFolder\mods\ftbbackups2-neoforge-1.21-1.0.28.jar" "$destinationFolder\mods\ftbbackups2-neoforge-1.21-1.0.28.jar" -Force
Copy-Item "$sourceFolder\config\ftbbackups2.json" "$destinationFolder\config\ftbbackups2.json" -Force
Write-Host "Copying world data and backups folder..." -ForegroundColor Yellow
# Folder to copy (recursively)
Copy-Item "$sourceFolder\world" "$destinationFolder\world" -Recurse -Force
# New-Item -ItemType SymbolicLink -Path "\backups" -Target "Z:\"
}
# Function to rename the old folder with the current date
function Rename-OldServer {
param (
[string]$oldFolderPath
)
$currentDate = Get-Date -Format "MM-dd-yyyy"
$backupFolderPath = "$oldFolderPath.backup.$currentDate"
Write-Host "Step 4: Renaming old server folder to: $backupFolderPath" -ForegroundColor Yellow
Rename-Item -Path $oldFolderPath -NewName $backupFolderPath
Write-Host "Old server folder renamed to: $backupFolderPath" -ForegroundColor Green
}
# Function to rename the new deployment to 'ATM10'
function Rename-NewServer {
param (
[string]$newDeploymentPath,
[string]$finalServerPath
)
Write-Host "Step 5: Renaming new deployment folder to 'ATM10' at: $finalServerPath" -ForegroundColor Yellow
Rename-Item -Path $newDeploymentPath -NewName $finalServerPath
Write-Host "New server folder renamed to 'ATM10' at: $finalServerPath" -ForegroundColor Green
}
# Main Script Logic
# Show banner
Show-Banner
# Variables for folder paths
$oldServerFolder = "C:\Users\nicole.rappe\Desktop\Minecraft_Server\ATM10"
$newDeploymentFolder = "C:\Users\nicole.rappe\Desktop\Minecraft_Server\ATM10_NewDeployment"
$finalServerFolder = "C:\Users\nicole.rappe\Desktop\Minecraft_Server\ATM10"
# Step 1: Get the zip file name from the user
$zipFilePath = Get-ZipFileName
# Step 2: Unzip the file to the new deployment folder without nesting
Unzip-ServerFiles -zipFilePath $zipFilePath -destinationFolder $newDeploymentFolder
# Step 3: Copy necessary files/folders from the old server
Copy-ServerData -sourceFolder $oldServerFolder -destinationFolder $newDeploymentFolder
# Step 4: Rename the old server folder with the current date
Rename-OldServer -oldFolderPath $oldServerFolder
# Step 5: Rename the new deployment folder to 'ATM10'
Rename-NewServer -newDeploymentPath $newDeploymentFolder -finalServerPath $finalServerFolder
# Step 6. Create Symbolic Link to Backup Drive
Write-Host "Step 6: Create Symbolic Link to Backup Drive" -ForegroundColor Cyan
cd "C:\Users\nicole.rappe\Desktop\Minecraft_Server\ATM10"
cmd.exe /c mklink /D backups Z:\
# Step 7: Notify the user that the server is ready to launch
Write-Host "Step 7: Server Ready to Launch!" -ForegroundColor Cyan
Write-Host "Press any key to exit the script"
[System.Console]::ReadKey($true) # Waits for a key press and doesn't display the pressed key
clear
```

View File

@@ -0,0 +1,149 @@
**Purpose**: In some cases you may want to back up or exfiltrate data from a local device to Nextcloud via a script. Using Nextcloud as the destination for this is not well documented, but you can achieve it by uploading to a password-protected "File Drop" share over WebDAV, as the scripts below demonstrate:
## Windows
!!! abstract "Environment Variables"
You will need to assign the following variables either within the script or externally via environment variables at the time the script is executed.
| **Variable** | **Default Value** | **Description** |
| :--- | :--- | :--- |
| `NEXTCLOUD_SERVER_URL` | `https://cloud.bunny-lab.io` | This is the base URL of the Nextcloud server that data will be copied to. |
| `NEXTCLOUD_SHARE_PASSWORD` | `<Share Password>` | You need to create a share on Nextcloud, and configure it as a `File Drop`, then put a share password to protect it. Put that password here. |
| `NEXTCLOUD_SHARE` | `ABCDEFGHIJK` | The tail end of a Nextcloud share link, e.g. `https://cloud.bunny-lab.io/s/<<ABCDEFGHIJK>>` |
| `IGNORE_LIST` | `AppData;AMD;Drivers;Radeon;Program Files;Program Files (x86);Windows;$SysReset;$WinREAgent;PerfLogs;ProgramData;Recovery;System Volume Information;hiberfile.sys;pagefile.sys;swapfile.sys` | This is a list of files/folders to ignore when iterating through directories. A sensible default is selected if you choose to copy everything from the root C:\ directory. |
| `PRIMARY_DIR` | `C:\Users\Example` | This directory target is the primary focus of the upload / backup / exfiltration. The script will iterate through this target first before it moves onto the secondary target. The target can be a directory or a single file. This will act as the main priority of the transfer. |
| `SECONDARY_DIR` | `C:\` | This is the secondary target, it's less important but nice-to-have with the upload / backup / exfiltration once the primary copy is completed. The target can be a directory or a single file. |
| `LOGFILE` | `C:\Windows\Temp\nc_pull.log` | This file gives the script "persistence". If the computer is shut down or rebooted, the script reads this file the next time it is run, determines which files were already uploaded, and resumes from that point. If the transfer is meant to be hidden, put this file somewhere it is not likely to be found. |
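Before running the script, the environment variables from the table above need to exist in the session. A minimal setup sketch (all values are hypothetical placeholders; substitute your own server, share token, and paths):

``` powershell
# Hypothetical example values; adjust for your environment before running the script.
$env:NEXTCLOUD_SERVER_URL     = "https://cloud.bunny-lab.io"
$env:NEXTCLOUD_SHARE          = "ABCDEFGHIJK"
$env:NEXTCLOUD_SHARE_PASSWORD = "<Share Password>"
$env:IGNORE_LIST              = "AppData;Windows;Program Files;Program Files (x86)"
$env:PRIMARY_DIR              = "C:\Users\Example"
$env:SECONDARY_DIR            = "C:\"
$env:LOGFILE                  = "C:\Windows\Temp\nc_pull.log"
```

These only persist for the current PowerShell session; run the upload script from the same session afterwards.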
### Powershell Script
``` powershell
# --------------------------
# Function for File Upload Logic
# --------------------------
Function Upload-Files ($targetDir) {
Get-ChildItem -Path $targetDir -Recurse -File -Force -ErrorAction SilentlyContinue | ForEach-Object {
try {
# --------------------------
# Check Ignore List
# --------------------------
$ignore = $false # Initialize variable to check if current folder should be ignored
foreach ($item in $IGNORE_LIST) {
if ($_.Directory -match [regex]::Escape($item)) {
$ignore = $true
break
}
}
if ($ignore) {
Write-Host "Ignoring file $($_.FullName) due to directory match in ignore list."
return
}
# --------------------------
# Upload File Process
# --------------------------
$filename = $_.Name # Extract just the filename
# Check if this file has been uploaded before by searching in the log file
if ((Get-Content $LOGFILE) -notcontains $_.FullName) {
Write-Host "Uploading $($_.FullName) ..."
# Upload the file
$response = Invoke-RestMethod -Uri ($URL + $filename) -Method Put -InFile $_.FullName -Headers @{'X-Requested-With' = 'XMLHttpRequest'} -Credential $credentials
# Record this file in the log since it was successfully uploaded
Add-Content -Path $LOGFILE -Value $_.FullName
} else {
Write-Host "Skipping previously uploaded file $($_.FullName)"
}
} catch {
Write-Host "Error encountered while uploading: $($_.Exception.Message)"
}
}
}
# --------------------------
# Initialize Environment Variables
# --------------------------
$securePassword = ConvertTo-SecureString $env:NEXTCLOUD_SHARE_PASSWORD -AsPlainText -Force
$credentials = New-Object System.Management.Automation.PSCredential ($env:NEXTCLOUD_SHARE, $securePassword)
$PRIMARY_DIR = $env:PRIMARY_DIR
$SECONDARY_DIR = $env:SECONDARY_DIR
$LOGFILE = $env:LOGFILE
$URL = "$env:NEXTCLOUD_SERVER_URL/public.php/webdav/"
$IGNORE_LIST = $env:IGNORE_LIST -split ';' # Split the folder names into an array
# --------------------------
# Checking Log File
# --------------------------
if (-not (Test-Path $LOGFILE)) {
New-Item -Path $LOGFILE -ItemType "file"
}
# --------------------------
# Perform Uploads
# --------------------------
Write-Host "Uploading files from primary directory: $PRIMARY_DIR"
Upload-Files $PRIMARY_DIR
Write-Host "Uploading files from secondary directory: $SECONDARY_DIR"
Upload-Files $SECONDARY_DIR
```
## MacOS/Linux
!!! abstract "Environment Variables"
You will need to assign the following variables either within the script or externally via environment variables at the time the script is executed.
| **Variable** | **Default Value** | **Description** |
| :--- | :--- | :--- |
| `NEXTCLOUD_SERVER_URL` | `https://cloud.bunny-lab.io` | This is the base URL of the Nextcloud server that data will be copied to. |
| `NEXTCLOUD_SHARE_PASSWORD` | `<Share Password>` | You need to create a share on Nextcloud, and configure it as a `File Drop`, then put a share password to protect it. Put that password here. |
| `NEXTCLOUD_SHARE` | `ABCDEFGHIJK` | The tail end of a Nextcloud share link, e.g. `https://cloud.bunny-lab.io/s/<<ABCDEFGHIJK>>` |
| `DATA_TO_COPY` | `/home/bunny/example` | This directory target is the primary focus of the upload / backup / exfiltration. The script will iterate through this target first before it moves onto the secondary target. The target can be a directory or a single file. This will act as the main priority of the transfer. |
| `LOGFILE` | `/tmp/uploaded_files.log` | This file gives the script "persistence". If the computer is shut down or rebooted, the script reads this file the next time it is run, determines which files were already uploaded, and resumes from that point. If the transfer is meant to be hidden, put this file somewhere it is not likely to be found. |
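As on Windows, the variables from the table above must be exported in the shell before the script runs. A setup sketch with hypothetical placeholder values:

``` sh
# Hypothetical example values; substitute your own server, share token, and paths.
export NEXTCLOUD_SERVER_URL="https://cloud.bunny-lab.io"
export NEXTCLOUD_SHARE="ABCDEFGHIJK"
export NEXTCLOUD_SHARE_PASSWORD="example-password"
export DATA_TO_COPY="/home/bunny/example"
export LOGFILE="/tmp/uploaded_files.log"
# Then run the script from the same shell session, e.g.:
# bash ./nextcloud_upload.sh
echo "Upload target: $NEXTCLOUD_SERVER_URL/public.php/webdav/"
```

The script name above is hypothetical; use whatever filename you saved the Bash script as.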
### Bash Script
``` sh
#!/bin/bash
# Directory to search
DIR="$DATA_TO_COPY"
# URL for the upload
URL="$NEXTCLOUD_SERVER_URL/public.php/webdav/"
# Check if log file exists. If not, create one.
if [ ! -f "$LOGFILE" ]; then
touch "$LOGFILE"
fi
# Iterate over each file in the directory and its subdirectories
find "$DIR" -type f -print0 | while IFS= read -r -d '' file; do
# Extract just the filename
filename=$(basename "$file")
# Check if this file has been uploaded before
if ! grep -q "$file" "$LOGFILE"; then
echo "Uploading $file ..."
# Upload the file
    # Upload the file and capture the HTTP status code returned by the server
    status_code=$(curl -k -s -o /dev/null -w '%{http_code}' -T "$file" -u "$NEXTCLOUD_SHARE:$NEXTCLOUD_SHARE_PASSWORD" -H 'X-Requested-With: XMLHttpRequest' "$URL$filename")
    # A WebDAV PUT returns 201 (created) or 204 (overwritten) on success
    if [ "$status_code" = "201" ] || [ "$status_code" = "204" ]; then
      # Record this file in the log since it was successfully uploaded
      echo "$file" >> "$LOGFILE"
    else
      echo "Failed to upload $file (HTTP $status_code)"
    fi
else
echo "Skipping previously uploaded file $file"
fi
done
```

View File

@@ -0,0 +1,6 @@
**Purpose**:
Sometimes you need a report of every user in a domain, and if/when their passwords will expire. This one-liner command will help automate that reporting.
``` powershell
Get-ADUser -Filter "enabled -eq 'true'" -Properties PasswordLastSet, PasswordNeverExpires | Format-Table Name, PasswordLastSet, PasswordNeverExpires > C:\PWReport.txt
```
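The report above shows when each password was last set. If you want the computed expiry date itself, Active Directory exposes the constructed attribute `msDS-UserPasswordExpiryTimeComputed`; a sketch using it (requires the ActiveDirectory module; the output path is a hypothetical example):

``` powershell
Get-ADUser -Filter "enabled -eq 'true'" -Properties "msDS-UserPasswordExpiryTimeComputed", PasswordNeverExpires |
    Select-Object Name, PasswordNeverExpires,
        @{Name="PasswordExpires"; Expression={
            if ($_.PasswordNeverExpires) { "Never" }
            # Constructed attribute arrives as a Windows FILETIME value
            else { [datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed") }
        }} |
    Export-Csv -Path C:\PWExpiryReport.csv -NoTypeInformation
```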

View File

@@ -0,0 +1,5 @@
``` powershell
$DaysInactive = 30
$time = (Get-Date).AddDays(-$DaysInactive)
Get-ADComputer -Filter {LastLogonTimeStamp -lt $time} -ResultPageSize 2000 -ResultSetSize $null -Properties Name | Select-Object Name
```
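If you also want to see when each stale computer last checked in, the same query can be extended to decode `LastLogonTimeStamp` and export the results; a sketch (requires the ActiveDirectory module; the CSV path is a hypothetical example):

``` powershell
$DaysInactive = 30
$time = (Get-Date).AddDays(-$DaysInactive)
Get-ADComputer -Filter {LastLogonTimeStamp -lt $time} -Properties Name, LastLogonTimeStamp |
    Select-Object Name,
        # LastLogonTimeStamp is stored as a Windows FILETIME value
        @{Name="LastLogon"; Expression={[DateTime]::FromFileTime($_.LastLogonTimeStamp).ToString('MM-dd-yyyy')}} |
    Export-Csv -Path C:\StaleComputers.csv -NoTypeInformation
```

Note that `LastLogonTimeStamp` is only replicated every 9-14 days, so treat the dates as approximate.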

View File

@@ -0,0 +1,6 @@
``` powershell
$InactiveDays = 30
$Days = (Get-Date).AddDays(-$InactiveDays)
Get-ADUser -Filter {LastLogonTimeStamp -lt $Days -and Enabled -eq $true} -Properties LastLogonTimeStamp |
Select-Object Name, @{Name="Date"; Expression={[DateTime]::FromFileTime($_.LastLogonTimeStamp).ToString('MM-dd-yyyy')}}
```

View File

@@ -0,0 +1,21 @@
**Purpose**:
This script will iterate over all network shares hosted by the computer it is running on and report the *recursive* ACL permissions of every folder and subfolder within each share, including hidden ones. It is very I/O intensive, since it recursively walks every shared directory.
``` powershell
$AllShares = Get-SMBShare | Where-Object {$_.Description -NotMatch "Default share|Remote Admin|Remote IPC|Printer Drivers"} | Select-Object -ExpandProperty Path
$Output = @()
ForEach ($SMBDirectory in $AllShares)
{
$FolderPath = Get-ChildItem -Directory -Path $SMBDirectory -Recurse -Force
ForEach ($Folder in $FolderPath) {
$Acl = Get-Acl -Path $Folder.FullName
ForEach ($Access in $Acl.Access)
{
$Properties = [ordered]@{'Folder Name'=$Folder.FullName;'Group/User'=$Access.IdentityReference;'Permissions'=$Access.FileSystemRights;'Inherited'=$Access.IsInherited}
$Output += New-Object -TypeName PSObject -Property $Properties
}
}
}
$Output | Export-CSV -Path C:\SMB_REPORT.csv -NoTypeInformation -Append
```
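Because `$Output +=` rebuilds the entire array for every ACL entry, large shares can consume significant memory before anything is written. A streaming sketch of the same report (same columns; the output path is the same example path) pipes each object straight to `Export-Csv` instead:

``` powershell
Get-SmbShare | Where-Object { $_.Description -notmatch "Default share|Remote Admin|Remote IPC|Printer Drivers" } |
    ForEach-Object { Get-ChildItem -Directory -Path $_.Path -Recurse -Force } |
    ForEach-Object {
        $folder = $_
        # Emit one record per ACL entry as soon as it is read
        (Get-Acl -Path $folder.FullName).Access | ForEach-Object {
            [pscustomobject]@{
                'Folder Name' = $folder.FullName
                'Group/User'  = $_.IdentityReference
                'Permissions' = $_.FileSystemRights
                'Inherited'   = $_.IsInherited
            }
        }
    } | Export-Csv -Path C:\SMB_REPORT.csv -NoTypeInformation
```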

View File

@@ -0,0 +1,10 @@
**Purpose**:
This script will iterate over all network shares hosted by the computer it is running on and report the *top-level* share permissions for each one. It does not descend below the top level, so it is very I/O friendly.
``` powershell
$AllShares = Get-SMBShare | Where-Object {$_.Description -NotMatch "Default share|Remote Admin|Remote IPC|Printer Drivers"} | Select-Object -ExpandProperty Name
ForEach ($SMBDirectory in $AllShares)
{
Get-SMBShareAccess -Name $SMBDirectory | Export-CSV -Path C:\SMB_REPORT.csv -NoTypeInformation -Append
}
```