Compare commits
29 Commits: 1f5b2c89e0...main
| SHA1 |
|---|
| de4757b0c7 |
| 9e566f4c86 |
| 0b4e6ced95 |
| d71189db1f |
| ca9606ea23 |
| bf464c1f34 |
| 51cdd1fdb6 |
| 554c04aa32 |
| 52e6f83418 |
| 39ffb700f0 |
| 3fdbfbd3c3 |
| 8b56b6bb71 |
| 91875cd475 |
| af8ec84ece |
| 417321150b |
| 54b41b1fdf |
| 7342cb64bc |
| daf24d7480 |
| 2b82ef254d |
| f85e77dcba |
| 328ccc1d09 |
| ed88b2d0f9 |
| 0f0036809a |
| 1e374ec423 |
| 6a1de1a436 |
| b0a094e6fa |
| e6e8f489ae |
| d682e0b54f |
| 6921fd4b3f |
.gitea/workflows/automatic-deployment.yml (new file, 37 lines)
@@ -0,0 +1,37 @@
+name: Automatic Documentation Deployment
+
+on:
+  push:
+    branches: [ main ]
+
+jobs:
+  zensical_deploy:
+    name: Sync Docs to https://kb.bunny-lab.io
+    runs-on: zensical-host
+
+    steps:
+      - name: Checkout Repository
+        uses: actions/checkout@v3
+
+      - name: Stop Zensical Service
+        run: sudo /usr/bin/systemctl stop zensical-watchdog.service
+
+      - name: Sync repository into /srv/zensical/docs
+        run: |
+          rsync -rlD --delete \
+            --exclude='.git/' \
+            --exclude='.gitea/' \
+            --exclude='assets/' \
+            --exclude='schema/' \
+            --exclude='stylesheets/' \
+            --exclude='schema.json' \
+            --chmod=D2775,F664 \
+            . /srv/zensical/docs/
+
+      - name: Start Zensical Service
+        run: sudo /usr/bin/systemctl start zensical-watchdog.service
+
+      - name: Notify via NTFY
+        if: always()
+        run: |
+          curl -d "https://kb.bunny-lab.io - Zensical job status: ${{ job.status }}" https://ntfy.bunny-lab.io/gitea-runners
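The `--chmod=D2775,F664` flag in the sync step above asks rsync to give directories mode 2775 (group-writable, with the setgid bit so new entries inherit the group, here the `zensical` group the runner is added to) and files mode 664. A minimal sketch of what those octal modes mean, using plain `chmod`/`stat` instead of rsync (the `/tmp/zensical-demo` path is only an illustration):

```shell
# D2775: 2 = setgid bit, 775 = rwx owner/group, r-x others.
mkdir -p /tmp/zensical-demo
chmod 2775 /tmp/zensical-demo
stat -c '%a' /tmp/zensical-demo        # prints the octal mode, e.g. 2775

# F664: rw owner/group, read-only others.
touch /tmp/zensical-demo/page.md
chmod 664 /tmp/zensical-demo/page.md
stat -c '%a' /tmp/zensical-demo/page.md
```

The setgid bit on directories is what keeps files written by the runner owned by the shared group, so the Zensical service can still read and serve them.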
@@ -1,65 +0,0 @@
-name: GitOps Automatic Documentation Deployment
-
-on:
-  push:
-    branches: [ main ]
-
-jobs:
-  mkdocs_deploy:
-    name: Sync Docs to https://docs.bunny-lab.io
-    runs-on: gitea-runner-mkdocs
-
-    steps:
-      - name: Install Node.js, git, rsync, and curl
-        run: |
-          apk add --no-cache nodejs npm git rsync curl
-
-      - name: Checkout Repository
-        uses: actions/checkout@v3
-
-      - name: Copy Repository Data to Production Server
-        run: |
-          rsync -a --delete \
-            --exclude='.git/' \
-            --exclude='.gitea/' \
-            --exclude='assets/' \
-            --exclude='schema/' \
-            --exclude='stylesheets/' \
-            --exclude='schema.json' \
-            . /Gitops_Destination/
-
-      - name: Trigger Material MKDocs Container Re-Deployment
-        run: |
-          curl --fail --show-error --silent --insecure \
-            -X POST \
-            "https://192.168.3.48:9443/api/stacks/webhooks/c891d2b5-7eca-42ef-8c3f-896bffbae803"
-
-      - name: Notify via NTFY
-        if: always()
-        run: |
-          curl -d "https://docs.bunny-lab.io - MKDocs job status: ${{ job.status }}" https://ntfy.bunny-lab.io/gitea-runners
-
-  zensical_deploy:
-    name: Sync Docs to https://kb.bunny-lab.io
-    runs-on: zensical-host
-
-    steps:
-      - name: Checkout Repository
-        uses: actions/checkout@v3
-
-      - name: Sync repository into /srv/zensical/docs
-        run: |
-          rsync -rlD --delete \
-            --exclude='.git/' \
-            --exclude='.gitea/' \
-            --exclude='assets/' \
-            --exclude='schema/' \
-            --exclude='stylesheets/' \
-            --exclude='schema.json' \
-            --chmod=D2775,F664 \
-            . /srv/zensical/docs/
-
-      - name: Notify via NTFY
-        if: always()
-        run: |
-          curl -d "https://kb.bunny-lab.io - Zensical job status: ${{ job.status }}" https://ntfy.bunny-lab.io/gitea-runners
@@ -15,7 +15,7 @@ tags:
 ---
 
 # Learning to Leverage Gitea Runners
-When I first started my journey with a GitOps mentality to transition a portion of my homelab's infrastructure to an "**Infrastructure-as-Code**" structure, I built my own Docker container that I called the [Git-Repo-Updater](../../platforms/containerization/docker/custom-containers/git-repo-updater.md). This self-made tool was useful to me because it copied the contents of Gitea repositories into bind-mounted container folders on my Portainer servers. This allowed me to set up configurations for Homepage-Docker, Material MkDocs, Traefik Reverse Proxy, and others to pull configuration changes from Gitea directly into the production servers, causing them to hot-load the changes almost instantly (within 10 seconds, give or take).
+When I first started my journey with a GitOps mentality to transition a portion of my homelab's infrastructure to an "**Infrastructure-as-Code**" structure, I built my own Docker container that I called the [Git-Repo-Updater](../../deployments/platforms/containerization/docker/custom-containers/git-repo-updater.md). This self-made tool was useful to me because it copied the contents of Gitea repositories into bind-mounted container folders on my Portainer servers. This allowed me to set up configurations for Homepage-Docker, Material MkDocs, Traefik Reverse Proxy, and others to pull configuration changes from Gitea directly into the production servers, causing them to hot-load the changes almost instantly (within 10 seconds, give or take).
 
 ## Criticisms of Git-Repo-Updater
 When I made the [Git-Repo-Updater docker container stack](https://git.bunny-lab.io/container-registry/git-repo-updater), I ran into the issue of having built something that existing solutions already covered but that I simply did not understand well enough to use yet. This caused me to essentially delegate the GitOps workflow to a bash script with a few environment variables, running inside of an Alpine Linux container. While the container did its job, it would occasionally have hiccups, caching issues, or repository branch errors that made no sense. This lack of transparency, plus the need to build an entire VSCode development environment to push new docker package updates to Gitea's [package repository for Git-Repo-Updater](https://git.bunny-lab.io/container-registry/-/packages/container/git-repo-updater/latest), caused a lot of development headaches.
@@ -115,3 +115,4 @@ Gitea Act Runners are a beautiful thing, and it's a damn shame it took me this l
 
+
 
 
@@ -22,7 +22,7 @@ tags:
 So, I want to start with a little context. As part of a long-standing project, I have tried to deploy OpenStack. OpenStack is sort of envisioned as "Infrastructure as a Service (IaaS)": you deploy an OpenStack cluster, which can run its own KVM for virtual machines and containers, or it can interface with an existing hypervisor infrastructure, such as Hyper-V. In most cases, people branch out the "Control", "Compute", and "Storage" roles onto different physical servers, but in my homelab, I have been attempting to deploy it via a "Converged" model, with Control, Compute, and Storage on each node, spanning a high-availability cluster of 3 nodes.
 
 ## The Problem
-The problems come in with the overall documentation provided for deploying [Canonical Openstack](https://ubuntu.com/openstack/install), for which I have detailed my frustrations in my own attempted re-write of the documentation [here](../../platforms/virtualization/openstack/canonical-openstack.md). I have also attempted to deploy it via [Ansible OpenStack](https://docs.openstack.org/project-deploy-guide/openstack-ansible/2024.1/), and my documentation thus far is visible [here](../../platforms/virtualization/openstack/ansible-openstack.md).
+The problems come in with the overall documentation provided for deploying [Canonical Openstack](https://ubuntu.com/openstack/install), for which I have detailed my frustrations in my own attempted re-write of the documentation [here](../../deployments/platforms/virtualization/openstack/canonical-openstack.md). I have also attempted to deploy it via [Ansible OpenStack](https://docs.openstack.org/project-deploy-guide/openstack-ansible/2024.1/), and my documentation thus far is visible [here](../../deployments/platforms/virtualization/openstack/ansible-openstack.md).
 
 You see, OpenStack is like ice cream: it has many different ways to deploy it, it can be as simple or as overly complex as you need it to be, and it scales *really well* across a fleet of servers in a datacenter. My problems come in because the Canonical deployment has never worked fully or properly (their own development team is hesitant to recommend the current documentation), and the Ansible OpenStack deployment process, while relatively simple, requires a base of existing knowledge that makes translating the instructions into more user-friendly instructions in my homelab documentation a difficult task. Eventually I want to automate as much of the process as I can, but that will take time.
 
(image changed: 122 KiB before, 122 KiB after)
@@ -182,7 +182,7 @@ klist
 ### Prepare Windows Devices
 Windows devices need to be prepared ahead of time in order for WinRM functionality to work as expected. I have prepared a PowerShell script that you can run on each device that needs remote management functionality. You can adapt this script to your needs and deploy it via whatever methods you have available to you (e.g. Ansible, Group Policies, existing RMM software, manually via remote desktop, etc.).
 
-You can find the [WinRM Enablement Script](../../ansible/enable-winrm-on-windows-devices.md) in the Bunny Lab documentation.
+You can find the [WinRM Enablement Script](../../../../workflows/operations/automation/ansible/enable-winrm-on-windows-devices.md) in the Bunny Lab documentation.
 
 ## Ad-Hoc Command Examples
 At this point, you should finally be ready to connect to Windows and Linux devices and run commands on them ad-hoc. Puppet Bolt Modules and Plans will be discussed further down the road.
deployments/index.md (new file, 15 lines)
@@ -0,0 +1,15 @@
+---
+tags:
+  - Deployments
+  - Index
+  - Documentation
+---
+
+# Deployments
+## Purpose
+Build and deployment documentation for platforms, services, and automation stacks.
+
+## Includes
+- Platform deployments (virtualization and containerization)
+- Service deployments and integration patterns
+- Automation stack deployment guides
@@ -5,7 +5,7 @@ tags:
 - Containerization
 ---
 
-**Purpose**: Docker container running Alpine Linux that automates and improves upon much of the script mentioned in the [Git Repo Updater](../../../../reference/bash/git-repo-updater.md) document. It offers the additional benefits of checking for updates every 5 seconds instead of every 60 seconds. It also accepts environment variables to provide credentials and notification settings, and can have an infinite number of monitored repositories.
+**Purpose**: Docker container running Alpine Linux that automates and improves upon much of the script mentioned in the [Git Repo Updater](../../../../../scripts/bash/git-repo-updater.md) document. It offers the additional benefits of checking for updates every 5 seconds instead of every 60 seconds. It also accepts environment variables to provide credentials and notification settings, and can have an infinite number of monitored repositories.
 
 ### Deployment
 You can find the current up-to-date Gitea repository that includes the `docker-compose.yml` and `.env` files that you need to deploy everything [here](https://git.bunny-lab.io/container-registry/-/packages/container/git-repo-updater/latest)
@@ -54,7 +54,7 @@ Alternative Methods:
 2. Be sure to set the `-v /srv/containers/portainer:/data` value to a safe place that gets backed up regularly.
 
 ### Configure Docker Network
-I highly recommend setting up a [Dedicated Docker MACVLAN Network](../../../networking/docker-networking/docker-networking.md). You can use it to keep your containers on their own subnet.
+I highly recommend setting up a [Dedicated Docker MACVLAN Network](../../../../reference/infrastructure/networking/docker-networking/docker-networking.md). You can use it to keep your containers on their own subnet.
 
 ### Access Portainer WebUI
 You will be able to access the Portainer WebUI at the following address: `https://<IP Address>:9443`
@@ -20,7 +20,7 @@ You will need to download the [Proxmox VE 8.1 ISO Installer](https://www.proxmox
 ```
 
 1. This tells Hyper-V to allow the GuestVM to behave as a hypervisor, nested under Hyper-V, allowing the virtualization functionality of the hypervisor's CPU to be passed through to the GuestVM.
-2. This tells Hyper-V to allow your GuestVM to have multiple nested virtual machines with their own independent MAC addresses. This is useful when using nested virtual machines, but is also a requirement when you set up a [Docker Network](../../../networking/docker-networking/docker-networking.md) leveraging MACVLAN technology.
+2. This tells Hyper-V to allow your GuestVM to have multiple nested virtual machines with their own independent MAC addresses. This is useful when using nested virtual machines, but is also a requirement when you set up a [Docker Network](../../../../reference/infrastructure/networking/docker-networking/docker-networking.md) leveraging MACVLAN technology.
 
 ### Networking
 You will need to set a static IP address; in this case, it will be an address within the 20GbE network. You will be prompted to enter these during the ProxmoxVE installation. Be sure to set the hostname to something that matches the following FQDN: `proxmox-node-01.MOONGATE.local`.
@@ -129,7 +129,8 @@ In the Replication wizard that appears after about a minute, you can configure t
 - ✅Update folder properties
 - ✅Create connections
 
-### Checking DFS Status
+### Troubleshooting / Diagnostics
+#### Checking DFS Status
 You may want to put together a simple table report of the DFS namespaces, replication info, and target folders. You can run the following PowerShell script to generate a nice table-based report of the current structure of the DFS namespaces in your domain.
 
 ??? example "Powershell Reporting Script"
@@ -395,3 +396,147 @@ You may want to put together a simple table report of the DFS namespaces, replic
 
     Write-DfsGrid -Data $rows
     ```
+
+#### Fixing Inconsistent DFS Management GUI
+Sometimes the GUI for managing DFS becomes "inconsistent", where the namespaces and replication groups shown differ between member servers, and namespaces or replication groups may be missing entirely. DFS Management is an MMC snap-in, and MMC persists per-user console state under `%APPDATA%\Microsoft\MMC\`. If that state gets out of sync (common after service hiccups or server crashes), the snap-in can render partial or incorrect namespace/replication trees even when DFS itself is fine. Deleting the cached dfsmgmt* console forces a fresh enumeration. We will also include a few extra commands for thoroughness.
+
+Before anything else, we want to make sure that Active Directory itself is not having replication issues, as that would be a deeper, more complicated issue. Run the following commands on one of your domain controllers:
+```powershell
+repadmin /syncall /AdeP
+repadmin /replsummary
+```
+
+If AD-level replication is successful and timely, you can proceed to run the commands below (one line at a time):
+```sh
+# Pull-Down DFS Configuration from Active Directory & Restart DFSR
+dfsrdiag pollad
+net stop dfsr
+net start dfsr
+
+# Clear DFS Management Snap-In Cache
+taskkill /im mmc.exe /f
+del "%appdata%\Microsoft\MMC\dfsmgmt*"
+dfsmgmt.msc
+```
+
+!!! success "DFS Management GUI Restored"
+    At this point, the DFS Management snap-in should be successfully showing all of the DFS namespaces and replication groups when you re-open "DFS Management".
+
+#### Check Replication Progress
+You may want to check that replication is occurring bi-directionally between every member server in your DFS deployment. I wrote the script below, which shows every replication group and each directional backlog status.
+
+```powershell
+# --- CONFIG ---
+$Members = @("LAB-FPS-01","LAB-FPS-02")
+$SummarizeAcrossFolders = $true  # $true = one line per direction per RG; $false = per-folder lines
+
+function Invoke-DfsrBacklogStatus {
+    param(
+        [Parameter(Mandatory)] [string] $RG,
+        [Parameter(Mandatory)] [string] $RF,
+        [Parameter(Mandatory)] [string] $Send,
+        [Parameter(Mandatory)] [string] $Recv
+    )
+
+    $out = & dfsrdiag backlog /rgname:"$RG" /rfname:"$RF" /sendingmember:"$Send" /receivingmember:"$Recv" 2>&1 | Out-String
+    $outTrim = ($out -split "`r?`n" | ForEach-Object { $_.Trim() }) | Where-Object { $_ -ne "" }
+
+    if ($out -match 'No Backlog') {
+        return [pscustomobject]@{ Status="No Backlog"; Count=0; Detail=$null }
+    }
+
+    $count = $null
+    $countLine = $outTrim | Where-Object { $_ -match '(?i)backlog' } | Select-Object -First 1
+    if ($countLine -and ($countLine -match '(\d+)')) { $count = [int]$matches[1] }
+
+    $detail = ($outTrim | Select-Object -First 8) -join " | "
+
+    return [pscustomobject]@{
+        Status = if ($count -ne $null) { "Backlog: $count" } else { "Backlog/Check Output" }
+        Count  = $count
+        Detail = $detail
+    }
+}
+
+$groups = Get-DfsReplicationGroup | Sort-Object GroupName
+
+foreach ($g in $groups) {
+    $rg  = $g.GroupName
+    $rfs = Get-DfsReplicatedFolder -GroupName $rg | Sort-Object FolderName
+
+    Write-Host ""
+    Write-Host ("== Replication Group: {0} ==" -f $rg)
+
+    foreach ($send in $Members) {
+        foreach ($recv in $Members) {
+            if ($send -eq $recv) { continue }
+
+            if ($SummarizeAcrossFolders) {
+                $worstCount = 0
+                $nonZero = @()
+                $errorsOrDetails = @()
+
+                foreach ($rfObj in $rfs) {
+                    $rf = $rfObj.FolderName
+                    $res = Invoke-DfsrBacklogStatus -RG $rg -RF $rf -Send $send -Recv $recv
+
+                    if ($res.Status -ne "No Backlog") {
+                        $nonZero += [pscustomobject]@{ RF=$rf; Status=$res.Status; Count=$res.Count; Detail=$res.Detail }
+                        if ($res.Count -ne $null -and $res.Count -gt $worstCount) { $worstCount = $res.Count }
+
+                        # ✅ FIX: ${rf} avoids the ':' parsing issue
+                        if ($res.Detail) { $errorsOrDetails += "RF=${rf}: $($res.Detail)" }
+                    }
+                }
+
+                if ($nonZero.Count -eq 0) {
+                    Write-Host ("{0} -> {1}: No Backlog" -f $send, $recv)
+                } else {
+                    if ($worstCount -gt 0) {
+                        Write-Host ("{0} -> {1}: Backlog (max {2} across RFs)" -f $send, $recv, $worstCount)
+                    } else {
+                        Write-Host ("{0} -> {1}: Backlog/Errors (see details)" -f $send, $recv)
+                    }
+
+                    $errorsOrDetails | Select-Object -First 5 | ForEach-Object { Write-Host ("  - {0}" -f $_) }
+                    if ($errorsOrDetails.Count -gt 5) { Write-Host "  - ... (more omitted)" }
+                }
+            }
+            else {
+                foreach ($rfObj in $rfs) {
+                    $rf = $rfObj.FolderName
+                    $res = Invoke-DfsrBacklogStatus -RG $rg -RF $rf -Send $send -Recv $recv
+
+                    if ($res.Status -eq "No Backlog") {
+                        Write-Host ("{0} -> {1} [{2}]: No Backlog" -f $send, $recv, $rf)
+                    } else {
+                        Write-Host ("{0} -> {1} [{2}]: {3}" -f $send, $recv, $rf, $res.Status)
+                        if ($res.Detail) { Write-Host ("  - {0}" -f $res.Detail) }
+                    }
+                }
+            }
+        }
+    }
+}
+```
+
+!!! example "Example Output"
+    You will see output like the following when you run the script.
+
+    ```powershell
+    == Replication Group: bunny-lab.io\music\fl studio plugins ==
+    LAB-FPS-01 -> LAB-FPS-02: No Backlog
+    LAB-FPS-02 -> LAB-FPS-01: No Backlog
+
+    == Replication Group: bunny-lab.io\music\personal music ==
+    LAB-FPS-01 -> LAB-FPS-02: No Backlog
+    LAB-FPS-02 -> LAB-FPS-01: No Backlog
+
+    == Replication Group: bunny-lab.io\music\shared music ==
+    LAB-FPS-01 -> LAB-FPS-02: No Backlog
+    LAB-FPS-02 -> LAB-FPS-01: No Backlog
+
+    == Replication Group: bunny-lab.io\projects\coding ==
+    LAB-FPS-01 -> LAB-FPS-02: No Backlog
+    LAB-FPS-02 -> LAB-FPS-01: No Backlog
+    ```
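The count-extraction step in the script above (find a line mentioning "backlog", then pull the first integer out of it with `(\d+)`) can be sketched in plain shell against a hypothetical sample line. Note the sample text is an assumption for illustration, not verbatim `dfsrdiag` output:

```shell
# Hypothetical dfsrdiag backlog line; real output wording may differ.
sample='Backlog File Count: 14'

# Mirror the PowerShell logic: match "backlog" case-insensitively,
# then take the first run of digits as the backlog count.
count=$(printf '%s\n' "$sample" | grep -i 'backlog' | grep -oE '[0-9]+' | head -n 1)
echo "$count"   # prints 14 for this sample
```

One caveat worth noting: a first-digits match like this would grab the wrong number if the matched line contained an earlier numeral (e.g. a member name like `LAB-FPS-02`), which is why the sample here keeps the count as the only digits on the line.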
@@ -16,9 +16,9 @@ This document outlines the Microsoft-recommended best practices for deploying a
 !!! note "Certificate Authority Server Provisioning Assumptions"
     - OS = Windows Server 2022/2025 bare-metal or as a VM
     - You should give it at least 4GB of RAM.
-    - [Change the edition of Windows Server from "**Evaluation**" to "**Standard**" via DISM](../../../operations/windows/change-windows-edition.md)
+    - [Change the edition of Windows Server from "**Evaluation**" to "**Standard**" via DISM](../../../../workflows/operations/windows/change-windows-edition.md)
     - Ensure the server is fully updated
-    - [Ensure the server is activated](../../../operations/windows/change-windows-edition.md#force-activation-edition-switcher)
+    - [Ensure the server is activated](../../../../workflows/operations/windows/change-windows-edition.md#force-activation-edition-switcher)
     - Ensure the timezone is correctly configured
    - Ensure the hostname is correctly configured
 
@@ -214,6 +214,14 @@ sudo useradd --system --create-home --home /var/lib/gitea_runner --shell /usr/sb
 # Allow the runner to write documentation changes
 sudo usermod -aG zensical gitearunner
 
+# Allow the runner to start and stop Zensical Watchdog Service
+sudo tee /etc/sudoers.d/gitearunner-systemctl > /dev/null <<'EOF'
+gitearunner ALL=NOPASSWD: /usr/bin/systemctl start zensical-watchdog.service, /usr/bin/systemctl stop zensical-watchdog.service
+EOF
+sudo chmod 440 /etc/sudoers.d/gitearunner-systemctl
+sudo chown root:root /etc/sudoers.d/gitearunner-systemctl
+sudo visudo -c
+
 # Download Newest Gitea Runner Binary (https://gitea.com/gitea/act_runner/releases)
 cd /tmp
 wget https://gitea.com/gitea/act_runner/releases/download/v0.2.13/act_runner-0.2.13-linux-amd64
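The sudoers fragment added above is intentionally narrow: one rule, two exact command strings, no wildcards, so the runner can bounce the watchdog service and nothing else. A quick sanity-check sketch (the `/tmp` scratch path is arbitrary) that the fragment mentions exactly the two intended `systemctl` invocations:

```shell
# Write the fragment to a scratch file (same content as the rule above).
cat > /tmp/gitearunner-systemctl <<'EOF'
gitearunner ALL=NOPASSWD: /usr/bin/systemctl start zensical-watchdog.service, /usr/bin/systemctl stop zensical-watchdog.service
EOF

# Extract every systemctl command the rule grants; expect start and stop only.
grep -oE '/usr/bin/systemctl (start|stop) zensical-watchdog\.service' /tmp/gitearunner-systemctl
```

On the real host, `sudo visudo -cf /etc/sudoers.d/gitearunner-systemctl` validates the file's syntax before sudo ever loads it, which is what the `sudo visudo -c` step above is doing for the whole sudoers configuration.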
@@ -286,8 +294,8 @@ sudo systemctl enable --now gitea-runner.service
 ### Repository Workflow
 Place the following file into your documentation repository at the given location, and this will enable the runner to execute when changes happen to the repository data.
 
-```yaml title=".gitea/workflows/gitops-automatic-deployment.yml"
-name: GitOps Automatic Documentation Deployment
+```yaml title=".gitea/workflows/automatic-deployment.yml"
+name: Automatic Documentation Deployment
 
 on:
   push:
@@ -302,6 +310,9 @@ jobs:
       - name: Checkout Repository
         uses: actions/checkout@v3
 
+      - name: Stop Zensical Service
+        run: sudo /usr/bin/systemctl stop zensical-watchdog.service
+
       - name: Sync repository into /srv/zensical/docs
         run: |
           rsync -rlD --delete \
@@ -314,10 +325,14 @@ jobs:
           --chmod=D2775,F664 \
           . /srv/zensical/docs/
 
+      - name: Start Zensical Service
+        run: sudo /usr/bin/systemctl start zensical-watchdog.service
+
       - name: Notify via NTFY
         if: always()
         run: |
           curl -d "https://kb.bunny-lab.io - Zensical job status: ${{ job.status }}" https://ntfy.bunny-lab.io/gitea-runners
+
 ```
 
 ## Traefik Reverse Proxy
index.md (31 lines)
@@ -1,33 +1,21 @@
 # Home
 ## Homelab Documentation Structure
-This documentation details the design, setup, and day-to-day management of my homelab environment. The goal is to keep it deterministic, CLI-first, and easy to audit or reproduce.
+This documentation details the design, setup, and day-to-day management of my homelab environment. The goal is to keep it deterministic, CLI-first, and easy to audit or reproduce. Some of the decisions made may not involve using the best security practices. Use your own discretion when following this documentation.
 
 ---
 
 ## Top-Level Sections
 **Foundations**
 - Conventions, templates, glossary, and shared standards
+**Deployments**
+- Platform, service, and automation deployment guides
 **Hardware**
 - Node build sheets, storage layouts, and physical inventory
+**Workflows**
+- Day-2 runbooks, maintenance procedures, and troubleshooting flows
-
-**Networking**
-- Addressing plans, firewall rules, VPNs, and network services
-
-**Platforms**
-- Virtualization and containerization stacks (hypervisors, Kubernetes, Docker)
-
-**Services**
-- Deployable apps and services (auth, docs, email, monitoring, etc.)
-
-**Automation**
-- Ansible, Puppet, and workflow automation notes
-
-**Operations**
-- Runbooks for maintenance, backups, and troubleshooting
 **Scripts**
 - Quick-use Bash, PowerShell, and Batch scripts/snippets
-
 **Reference**
-- Quick scripts and snippets for day-to-day tasks
+- Foundations, hardware inventory, and networking reference material
-
 **Blog**
 - Narrative posts and lessons learned
@@ -40,7 +28,7 @@ This documentation details the design, setup, and day-to-day management of my ho
 - **Personal Environment:** These docs reflect my own environment, goals, and risk tolerance.
 - **Security & Scale:** Approaches described here are suited to homelab or SMB use, and may need adjustments for enterprise-scale, regulatory compliance, or higher security standards.
 - **No Credentials:** All sensitive info is redacted or generalized.
-- **Assumptions:** Some guides assume specific tools, e.g. [Portainer](./platforms/containerization/docker/deploy-portainer.md), [AWX](./automation/ansible/awx/deployment/awx-operator.md), etc. Substitute with your preferred tools as needed.
+- **Assumptions:** Some guides assume specific tools, e.g. [Portainer](deployments/platforms/containerization/docker/deploy-portainer.md), [AWX](./deployments/automation/ansible/awx/deployment/awx-operator.md), etc. Substitute with your preferred tools as needed.
 
 ---
 
@@ -52,3 +40,4 @@ This documentation details the design, setup, and day-to-day management of my ho
 ---
 
 > _“Homelabs are for learning, breaking things, and sharing the journey. Hope you find something helpful here!”_
+
Some files were not shown because too many files have changed in this diff.