Documentation Restructure

2026-01-27 05:25:22 -07:00
parent 3ea11e04ff
commit e73bb0376f
205 changed files with 469 additions and 146 deletions

View File

@@ -0,0 +1,76 @@
**Purpose**: Homebox is the inventory and organization system built for the Home User! With a focus on simplicity and ease of use, Homebox is the perfect solution for your home inventory, organization, and management needs.
[Reference Documentation](https://hay-kot.github.io/homebox/quick-start/)
!!! warning "Protect with Keycloak"
    The GitHub project for this software appears to have been archived in a read-only state in June 2024. There is no default admin credential, so setting the environment variable `HBOX_OPTIONS_ALLOW_REGISTRATION` to `false` will lock you out of the system entirely. You also cannot change it after the fact: registering an account, disabling registration, and restarting the container does not work.
    Due to this behavior, it is imperative that you deploy this either internally only, or, if it is external, put it behind something like [Authentik](../authentication/authentik.md) or [Keycloak](../authentication/keycloak/deployment.md).
## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.4"
services:
homebox:
image: ghcr.io/hay-kot/homebox:latest
container_name: homebox
restart: always
environment:
- HBOX_LOG_LEVEL=info
- HBOX_LOG_FORMAT=text
- HBOX_MODE=production
- HBOX_OPTIONS_ALLOW_REGISTRATION=true
- HBOX_WEB_MAX_UPLOAD_SIZE=50
- HBOX_WEB_READ_TIMEOUT=20
- HBOX_WEB_WRITE_TIMEOUT=20
- HBOX_WEB_IDLE_TIMEOUT=60
- HBOX_MAILER_HOST=${HBOX_MAILER_HOST}
- HBOX_MAILER_PORT=${HBOX_MAILER_PORT}
- HBOX_MAILER_USERNAME=${HBOX_MAILER_USERNAME}
- HBOX_MAILER_PASSWORD=${HBOX_MAILER_PASSWORD}
- HBOX_MAILER_FROM=${HBOX_MAILER_FROM}
volumes:
- /srv/containers/homebox:/data/
ports:
- 7745:7745
networks:
docker_network:
ipv4_address: 192.168.5.25
networks:
docker_network:
external: true
```
```yaml title=".env"
HBOX_MAILER_HOST=mail.bunny-lab.io
HBOX_MAILER_PORT=587
HBOX_MAILER_USERNAME=noreply@bunny-lab.io
HBOX_MAILER_PASSWORD=REDACTED
HBOX_MAILER_FROM=noreply@bunny-lab.io
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
homebox:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: homebox
rule: Host(`box.bunny-lab.io`)
middlewares:
- "auth-bunny-lab-io" # Referencing the Keycloak Server
services:
homebox:
loadBalancer:
servers:
- url: http://192.168.5.25:7745
passHostHeader: true
```

View File

@@ -0,0 +1,136 @@
**Purpose**: A free open source IT asset/license management system.
!!! warning
    The Snipe-IT container will attempt to launch after the MariaDB container starts, but MariaDB takes a while to set itself up before it can accept connections; as a result, Snipe-IT will fail to initialize the database on first launch. Just wait about 30 seconds after deploying the stack, then restart the Snipe-IT container to initialize the database. You will know it worked if you see notes about data being `Migrated`.
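If you would rather not restart the container by hand, one workaround is to gate Snipe-IT on a MariaDB healthcheck. This is a sketch, not part of the original stack; it assumes `mysqladmin` is available in the `mariadb:10.5` image and that your Compose version honors `condition: service_healthy`:
```yaml
# Sketch: add a healthcheck to the db service and make Snipe-IT wait for it
services:
  db:
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h localhost -p\"$$MYSQL_ROOT_PASSWORD\" --silent"]
      interval: 10s
      retries: 12
  snipeit:
    depends_on:
      db:
        condition: service_healthy
```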
## Docker Configuration
```yaml title="docker-compose.yml"
version: '3.7'
services:
snipeit:
image: snipe/snipe-it
ports:
- "8000:80"
depends_on:
- db
env_file:
- stack.env
volumes:
- /srv/containers/snipe-it:/var/lib/snipeit
networks:
docker_network:
ipv4_address: 192.168.5.50
redis:
image: redis:6.2.5-buster
ports:
- "6379:6379"
env_file:
- stack.env
networks:
docker_network:
ipv4_address: 192.168.5.51
db:
image: mariadb:10.5
ports:
- "3306:3306"
env_file:
- stack.env
volumes:
- /srv/containers/snipe-it/db:/var/lib/mysql
networks:
docker_network:
ipv4_address: 192.168.5.52
mailhog:
image: mailhog/mailhog:v1.0.1
ports:
# - 1025:1025
- "8025:8025"
env_file:
- stack.env
networks:
docker_network:
ipv4_address: 192.168.5.53
networks:
docker_network:
external: true
```
```yaml title=".env"
APP_ENV=production
APP_DEBUG=false
APP_KEY=base64:SomethingSecure
APP_URL=https://assets.bunny-lab.io
APP_TIMEZONE='America/Denver'
APP_LOCALE=en
MAX_RESULTS=500
PRIVATE_FILESYSTEM_DISK=local
PUBLIC_FILESYSTEM_DISK=local_public
DB_CONNECTION=mysql
DB_HOST=db
DB_DATABASE=snipedb
DB_USERNAME=snipeuser
DB_PASSWORD=SomethingSecure
DB_PREFIX=null
DB_DUMP_PATH='/usr/bin'
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
IMAGE_LIB=gd
MYSQL_DATABASE=snipedb
MYSQL_USER=snipeuser
MYSQL_PASSWORD=SomethingSecure
MYSQL_ROOT_PASSWORD=SomethingSecure
REDIS_HOST=redis
REDIS_PASSWORD=SomethingSecure
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=mail.bunny-lab.io
MAIL_PORT=587
MAIL_USERNAME=assets@bunny-lab.io
MAIL_PASSWORD=SomethingSecure
MAIL_ENCRYPTION=starttls
MAIL_FROM_ADDR=assets@bunny-lab.io
MAIL_FROM_NAME='Bunny Lab Asset Management'
MAIL_REPLYTO_ADDR=assets@bunny-lab.io
MAIL_REPLYTO_NAME='Bunny Lab Asset Management'
MAIL_AUTO_EMBED_METHOD='attachment'
DATA_LOCATION=/srv/containers/snipe-it
APP_TRUSTED_PROXIES=192.168.5.29
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
assets-bunny-lab-io:
entryPoints:
- websecure
rule: "Host(`assets.bunny-lab.io`)"
service: "assets-bunny-lab-io"
tls:
certResolver: letsencrypt
middlewares:
- "assets-bunny-lab-io"
- "auth-bunny-lab-io" # Referencing the Keycloak Server
middlewares:
assets-bunny-lab-io:
headers:
customRequestHeaders:
X-Forwarded-Proto: "https"
X-Forwarded-Host: "assets.bunny-lab.io"
customResponseHeaders:
X-Custom-Header: "CustomValue" # Example of a static header
services:
assets-bunny-lab-io:
loadBalancer:
servers:
- url: "http://192.168.5.50:8080"
passHostHeader: true
```

View File

@@ -0,0 +1,227 @@
## Purpose
This document outlines the Microsoft-recommended best practices for deploying a secure, internal-use-only, two-tier Public Key Infrastructure (PKI) using Windows Server 2022 or newer. The PKI supports securing S/MIME email, 802.1X Wi-Fi with NPS, and LDAP over SSL (LDAPS).
!!! abstract "CA Deployment Breakdown"
The environment will consist of at least 2 virtual machines. For the purposes of this document they will be named `LAB-CA-01` and `LAB-CA-02`. This stands for "*Lab Certificate Authority [01|02]*". In a two-tier hierarchy, an offline (*you intentionally keep this VM offline*) Root CA signs a single "*Subordinate*" Enterprise CA certificate. The Subordinate CA is domain-joined and handles all certificate requests. Clients trust the PKI via Group Policy and Active Directory integration.
In this case, `LAB-CA-01` is the Root CA, while `LAB-CA-02` is the Intermediary/Subordinate CA. You can add more than one subordinate CA if you desire more redundancy in your environment. Making them operate together is generally automatic and does not require manual intervention.
!!! note "Certificate Authority Server Provisioning Assumptions"
- OS = Windows Server 2022/2025 bare-metal or as a VM
- You should give it at least 4GB of RAM.
- [Change the edition of Windows Server from "**Evaluation**" to "**Standard**" via DISM](../../../operations/windows/change-windows-edition.md)
- Ensure the server is fully updated
- [Ensure the server is activated](../../../operations/windows/change-windows-edition.md#force-activation-edition-switcher)
- Ensure the timezone is correctly configured
- Ensure the hostname is correctly configured
!!! note "Domain Environment Assumptions"
    It is assumed that you already have existing infrastructure hosting an Active Directory Domain with at least one domain controller. This document does not outline how to set up a domain controller; you will need to figure that out on your own.
## Offline (Non-Domain-Joined) Root CA `LAB-CA-01`
### Role Deployment
This is the initial deployment of the root certificate authority; the settings here should be double- and triple-checked before proceeding through each step.
- Provision a **non-domain-joined** Windows Server
- It is critical, for security purposes, that this device is not domain-joined
- Navigate to "**Server Manager > Manage > Add Roles and Features**"
- Check "**Active Directory Certificate Services**"
- When prompted to confirm, click the "**Add Features**" button
- Ensure the "**Include management tools (if applicable)**" checkbox is checked.
- Click "**Next**" > "**Next**" > "**Next**"
- You will be told that the name of the server cannot be changed after this point, and it will be associated with `WORKGROUP` > This is fine and you can proceed.
- Check the boxes for the following role services:
- `Certification Authority`
- `Certification Authority Web Enrollment`
- When prompted to confirm multiple times, click the "**Add Features**" button
- Ensure the "**Include management tools (if applicable)**" checkbox is checked.
- There are additional steps, such as `Configure AIA and CDP extensions with HTTP paths` and `Publish root cert and CRL to AD and internal HTTP`, but these do not apply to an LDAPS-only deployment; they are more relevant to websites / web hosting. (current understanding)
- Click "**Next**" > "**Next**" > "**Next**" > "**Install**"
- Restart the Server
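If you prefer scripting over clicking through Server Manager, the same role deployment can roughly be done in PowerShell (a sketch under the same assumptions as the wizard steps above):
```powershell
# Install the Certification Authority and Web Enrollment role services,
# including the management tools
Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools

# Reboot to mirror the wizard-based procedure above
Restart-Computer
```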
### Role Configuration
We have a few things we need to configure within the CA to make it ready to handle certificate requests.
- Navigate to "**Server Manager > (Alert Flag) > Post-deployment Configuration: Active Directory Certificate Services**"
- You will be prompted for an admin user; in this example, use the pre-populated `LAB-CA-01\Administrator`
- Check the boxes for `Certification Authority` and `Certification Authority Web Enrollment` then click "**Next**"
- Check the "**Standalone CA**" radio box then click "**Next**"
- Check the "**Root CA** radio box then click "**Next**"
- Check the "**Create a new private key**" radio box then click "**Next**"
- Click the dropdown menu for "**Select a cryptographic provider**" and ensure that "**RSA#Microsoft Software Key Storage Provider**" is selected
- *Microsoft Software Key Storage Provider (KSP) is the latest, most flexible provider designed to work with the Cryptography Next Generation (CNG) APIs. It offers better support for modern algorithms and improved security management (such as support for key attestation, better hardware integration, and improved key protection mechanisms).*
- Set the key length to `4096`
- Set the hash algorithm to `SHA256`
- Click "**Next**"
- **Common Name for this CA**: `BunnyLab-RootCA`
- **Distinguished name suffix**: `O=Bunny Lab,C=US`
- **Preview of distinguished name**: `CN=BunnyLab-RootCA,O=Bunny Lab,C=US`
- Click "**Next**"
- Specify the validity period: `10 Years` then click "**Next**" > "**Next**" > "**Configure**"
You will see a finalization screen confirming everything we have configured; it should look something like the following:
| **Field** | **Value** |
| :--- | :--- |
| CA Type | Standalone Root |
| Cryptographic provider | RSA#Microsoft Software Key Storage Provider |
| Hash Algorithm | SHA256 |
| Key Length | 4096 |
| Allow Administrator Interaction | Disabled |
| Certificate Validity Period | `<10 Years from Today>` |
| Distinguished Name | CN=BunnyLab-RootCA,O=Bunny Lab,C=US |
| Certificate Database Location | C:\Windows\system32\CertLog |
| Certificate Database Log Location | C:\Windows\system32\CertLog |
!!! success "Active Directory Certificate Services"
If everything went well, you will see that the "**Certificate Authority**" and "**Certification Authority Web Enrollment**" both have a status of "**Configuration succeeded**". At this point, you can click the "**Close**" button to conclude the Root CA configuration.
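For reference, the same Root CA configuration can roughly be expressed in PowerShell via `Install-AdcsCertificationAuthority` (a sketch mirroring the values chosen above; run from an elevated prompt):
```powershell
Install-AdcsCertificationAuthority `
    -CAType StandaloneRootCA `
    -CryptoProviderName "RSA#Microsoft Software Key Storage Provider" `
    -KeyLength 4096 `
    -HashAlgorithmName SHA256 `
    -CACommonName "BunnyLab-RootCA" `
    -CADistinguishedNameSuffix "O=Bunny Lab,C=US" `
    -ValidityPeriod Years `
    -ValidityPeriodUnits 10 `
    -Force
```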
## Online (Domain-Joined) Subordinate/Intermediary CA `LAB-CA-02`
### Role Deployment
Now that we have set up the root certificate authority, we can focus on setting up the subordinate CA.
!!! warning "Enterprise Admin Requirement"
When you are setting up the role, you **absolutely** have to use an "*Enterprise*" Admin account. This could be a service account like `svcCertAdmin` or something similar.
- Navigate to "**Server Manager > (Alert Flag) > Post-deployment Configuration: Active Directory Certificate Services**"
- Under credentials, enter the username for an Enterprise Admin. (e.g. `BUNNY-LAB\nicole.rappe`)
- Click "**Next**"
- Check the following roles (*we will add the rest after setting up the core CA functionality*)
- `Certification Authority`
- `Certification Authority Web Enrollment`
- Check the "**Enterprise CA**" radio box then click "**Next**"
- Check the "**Subordinate CA**" radio box then click "**Next**"
- Check the "**Create a new private key**" radio box then click "**Next**"
- Click the dropdown menu for "**Select a cryptographic provider**" and ensure that "**RSA#Microsoft Software Key Storage Provider**" is selected
- *Microsoft Software Key Storage Provider (KSP) is the latest, most flexible provider designed to work with the Cryptography Next Generation (CNG) APIs. It offers better support for modern algorithms and improved security management (such as support for key attestation, better hardware integration, and improved key protection mechanisms).*
- Set the key length to `4096`
- Set the hash algorithm to `SHA256`
- Click "**Next**"
- **Common Name for this CA**: `BunnyLab-SubordinateCA-01`
- **Distinguished name suffix**: `DC=bunny-lab,DC=io`
- This will be auto-filled based on the domain that the CA is joined to
- **Preview of distinguished name**: `CN=BunnyLab-SubordinateCA-01,DC=bunny-lab,DC=io`
- Click "**Next**"
- Select the "**Save a certificate request to file on the target machine**" radio button
- This will auto-populate the destination to something like "`C:\LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req`"
- Click "**Next**" > "**Next**" > "**Configure**"
You will see a finalization screen confirming everything we have configured; it should look something like the following:
| **Field** | **Value** |
| :--- | :--- |
| CA Type | Enterprise Subordinate |
| Cryptographic provider | RSA#Microsoft Software Key Storage Provider |
| Hash Algorithm | SHA256 |
| Key Length | 4096 |
| Allow Administrator Interaction | Disabled |
| Certificate Validity Period | Determined by the parent CA |
| Distinguished Name | CN=BunnyLab-SubordinateCA-01,DC=bunny-lab,DC=io |
| Offline Request File Location | `C:\LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req` |
| Certificate Database Location | C:\Windows\system32\CertLog |
| Certificate Database Log Location | C:\Windows\system32\CertLog |
!!! quote "Pending Certificate Signing Request"
    You will see a screen telling you that the **Certification Authority Web Enrollment** was successful, but it will give a warning about the **Certification Authority**: "The Active Directory Certificate Services installation is incomplete. To complete the installation, use the request file <file-name> to obtain a certificate from the parent CA [*The RootCA*]. Then, use the Certification Authority snap-in to install the certificate. To complete this procedure, right-click the node with the name of the CA, and then click 'Install CA Certificate'."
### Role Configuration
At this point, we need to get the certificate signing request generated on `LAB-CA-02` over to `LAB-CA-01` (the Root CA); this can be done via temporary network access or a USB flash drive.
!!! danger
    If using a USB flash drive is not viable, don't leave the Root CA server on the network any longer than is absolutely necessary.
- Once the certificate signing request file `C:\LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req` is on `LAB-CA-01` (RootCA) we can proceed to get it signed.
- Navigate to "**Server Manager > Tools > Certification Authority**"
- Right-click the CA node in the treeview on the left-hand sidebar (e.g. `BunnyLab-RootCA`)
- Click on "**All Tasks" > "Submit new request...**"
- Browse to and select the subordinate CA's `.req` file (e.g. `LAB-CA-02.bunny-lab.io_bunny-lab-LAB-CA-02-CA.req`)
- Click on "**BunnyLab-RootCA > Pending Requests**
- Right-click the request we just imported, and select "**All Tasks > Issue**"
- Click on ""**BunnyLab-RootCA > Issued Certificates**"
- Locate the new subordinate CA certificate, and double-click it.
- Click the "**Details**" tab
- Click the "**Copy to File**" button
- Click "**Next**"
- Choose `DER encoded binary X.509 (.CER)` and save as `LAB-CA-02-SubCA.cer`.
- Export the Root CA certificate:
- Right-click the `BunnyLab-RootCA` node > Properties > View Certificate > Details > Copy to File...
- Save as `RootCA.cer`
- Copy both `LAB-CA-02-SubCA.cer` (the signed subordinate CA cert) and `RootCA.cer` (the root CA cert) to the subordinate CA (`LAB-CA-02`), using a secure method (e.g. USB drive).
- On `LAB-CA-02` (Subordinate CA), navigate to "**Server Manager > Tools > Certification Authority**"
- Right-click the CA node in the treeview on the left-hand sidebar (e.g. `BunnyLab-SubordinateCA-01`)
- Click on "**All Tasks" > "Install CA Certificate**"
- Browse to and select `LAB-CA-02-SubCA.cer` (*you may need to change the cert file extension filter to `X.509 Certificate`*)
- When prompted for the CA chain or root certificate, browse for and select the `RootCA.cer` you transferred earlier along with the `LAB-CA-02-SubCA.cer`
- Launch `certlm.msc` to open the `[Certificates - Local Computer]` management window
- Right-click "**Trusted Root Certification Authorities**" > All Tasks > Import
- Click "**Next**"
- Browse to the `BunnyLab-RootCA.crl` located on `\\LAB-CA-01\CertEnroll\BunnyLab-RootCA.crl` (*if the RootCA is temporarily on the network*) or copy the file manually via USB drive from `C:\Windows\System32\certsrv\CertEnroll\BunnyLab-RootCA.crl`
- Place all certificates in the following store: "Trusted Root Certification Authorities"
- Click "**Next**" and finish importing the Certificate Revocation List
- Right-click the CA node in the treeview on the left-hand sidebar (e.g. `BunnyLab-SubordinateCA-01`)
- Click on "**All Tasks" > "Start Service**"
- Verify that the CA status is now green (running).
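To sanity-check the subordinate CA at this point, `certutil` offers a couple of quick probes (run from an elevated prompt on `LAB-CA-02`):
```powershell
# Ping the CA's request interface to confirm the service is answering
certutil -ping

# Display CA information, including certificate chain state
certutil -CAInfo
```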
## Create Auto-Enrollment Group Policy
The Certificate Auto-Enrollment Group Policy enables domain-joined devices (*computers, including domain controllers*) to automatically request, renew, and install certificates from the Enterprise CA (in this case, the Subordinate CA `LAB-CA-02`).
### Create GPO
- Open the Group Policy Management editor on one of your domain controllers, then "**Create a GPO in this domain, and link it here**" wherever it will be able to target the domain controllers; this may be at the root, or in a specific OU that holds domain controllers (e.g. `bunny-lab.io\Domain Controllers`)
- Name the new GPO something like "**Certificate Auto-Enrollment**"
- Edit the GPO
- Navigate to "**Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies**"
- Find and open "**Certificate Services Client - Auto-Enrollment**"
- Set the Configuration Model to "**Enabled**"
- Check both checkboxes for "**Renew expired certificates, update pending certificates, and remove revoked certificates**" and "**Update certificates that use certificate templates**"
- Click "**OK**"
- Navigate to "**Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > Trusted Root Certification Authorities**"
- Right-click the "**Trusted Root Certification Authorities**" folder and select "**Import...**" > Proceed to browse for the `RootCA.cer` that you previously generated. (*copy it to the domain controller if needed from one of the Certificate Authorities*)
- Proceed to import the certificate, clicking-through all of the prompts and confirmations until it finishes the import.
- Navigate to "**Computer Configuration > Policies > Windows Settings > Security Settings > Public Key Policies > Intermediate Certification Authorities**"
- Right-click the "**Trusted Root Certification Authorities**" folder and select "**Import...**" > Proceed to browse for the `LAB-CA-02-SubCA.cer` that you previously generated. (*copy it to the domain controller if needed from one of the Certificate Authorities*)
- Proceed to import the certificate, clicking-through all of the prompts and confirmations until it finishes the import.
- Run a `gpupdate /force` on your domain controller(s) and give them a few minutes to pull down their new domain controller certificates
### Validate Auto-Enrollment Functionality
At this point, you need to check that there is a certificate installed within "**Certificates - Local Computer > Personal > Certificates**" for Domain Controller server authentication.
- Load the Certificates - Local Computer console (`certlm.msc`) and navigate to "**Personal > Certificates**" > You should see something similar to the following:
| **Issued To** | **Issued By** | **Expiration Date** | **Intended Purposes** | **Certificate Template** |
| :--- | :--- | :--- | :--- | :--- |
| LAB-DC-01.bunny-lab.io | BunnyLab-SubordinateCA-01 | 7/15/2026 | Directory Service Email Replication | Directory Email Replication |
| LAB-DC-01.bunny-lab.io | BunnyLab-SubordinateCA-01 | 7/15/2026 | Client Authentication, Server Authentication, Smart Card Logon | Domain Controller Authentication |
| LAB-DC-01.bunny-lab.io | BunnyLab-SubordinateCA-01 | 7/15/2026 | Client Authentication, Server Authentication, Smart Card Logon, KDC Authentication | Kerberos Authentication |
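If the expected certificates have not appeared yet, you can trigger auto-enrollment immediately instead of waiting for the next policy refresh (a quick sketch; run from an elevated prompt on the domain controller):
```powershell
# Trigger certificate auto-enrollment now
certutil -pulse

# List the Local Computer Personal store to confirm the new certificates arrived
certutil -store My
```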
### Validate LDAPS Connectivity
Lastly, we want to ensure that LDAPS is functioning. By default, once these certs are enrolled on the domain controller(s), LDAPS *should* just work out of the box. To verify this, you can run this command on any device on the same network as the domain controllers. If it comes back successful like in the following example output, then you are golden:
```powershell
PS C:\Users\nicole.rappe> Test-NetConnection LAB-DC-01.bunny-lab.io -Port 636
ComputerName : LAB-DC-01.bunny-lab.io
RemoteAddress : 192.168.3.25
RemotePort : 636
InterfaceAlias : Ethernet
SourceAddress : 192.168.3.254
TcpTestSucceeded : True
PS C:\Users\nicole.rappe> Test-NetConnection LAB-DC-02.bunny-lab.io -Port 636
ComputerName : LAB-DC-02.bunny-lab.io
RemoteAddress : 192.168.3.26
RemotePort : 636
InterfaceAlias : Ethernet
SourceAddress : 192.168.3.254
TcpTestSucceeded : True
```
!!! success "Successful LDAPS Connectivity"
LDAPS should now be functional on your domain controller(s).
!!! abstract "Raw Unprocessed/Unimplemented Steps"
Publish CRLs regularly, configure overlap periods, and monitor expiration. Enable Delta CRLs on the Subordinate CA, but not on the Root.
    **Security Recommendations**:
- Harden CA servers; limit access to PKI admins.
- Use BitLocker or HSM for key protection.
- Monitor issuance and renewals with audit logs and scripts.
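As a sketch of the CRL-publishing step mentioned above, `certutil` can publish a fresh CRL on the Subordinate CA and exercise the revocation-checking paths (the file name is illustrative):
```powershell
# Publish a new CRL (and delta CRL, if enabled) on the issuing CA
certutil -crl

# Verify a certificate's chain, including CRL/AIA URL retrieval (hypothetical file name)
certutil -verify -urlfetch C:\LAB-CA-02-SubCA.cer
```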

View File

@@ -0,0 +1,27 @@
**Purpose**:
To deploy a shortcut to the desktop pointing to a network share's root path (e.g. `\\storage.bunny-lab.io`). Windows has a quirk in how it handles network shares and shortcuts, and does not like a shortcut pointing directly at a root UNC path, hence the `explorer.exe` indirection in the settings below.
### Group Policy Location
``` mermaid
graph LR
A[Create Group Policy] --> B[User Configuration]
B --> C[Preferences]
C --> D[Windows Settings]
D --> E[Shortcuts]
```
### Group Policy Settings
- **Action**: `Update`
- **Name**: `<FriendlyName>`
- **Target Type**: `File System Object`
- **Location**: `Desktop`
- **Target Path**: `C:\windows\explorer.exe`
- **Arguments**: `\\storage.bunny-lab.io`
- **Start In**: `<Blank>`
- **Shortcut Key**: `<None>`
- **Run**: `Normal Window`
- **Icon File Path**: `%SystemRoot%\System32\SHELL32.dll`
- **Icon Index**: `9`
### Additional Notes
Navigate to the "**Common**" tab in the properties of the shortcut, and check the "**Run in logged-on user's security context (user policy option)**".

View File

@@ -0,0 +1,12 @@
**Purpose**: LDAP settings are used in various services from privacyIDEA to Nextcloud. This will outline the basic parameters in my homelab that are necessary to make it function.
| **Field** | **Value** | **Description** |
| :--- | :--- | :--- |
| Server Address(es) | `ldap://bunny-dc-01.bunny-lab.io` / `192.168.3.8`, `ldap://bunny-dc-02.bunny-lab.io` / `192.168.3.9` | Domain Controllers |
| Port | `389` | Unencrypted LDAP |
| STARTTLS | `Disabled` | |
| Base DN | `CN=Users,DC=bunny-lab,DC=io` | This is where users are pulled from |
| User / Bind DN | `CN=Nicole Rappe,CN=Users,DC=bunny-lab,DC=io` | This is the domain admin used to connect to LDAP |
| User / Bind Password | `<Password for User / Bind DN>` | Domain Credentials for Domain Admin account |
| Login Attribute | `(&(&(|(objectclass=person))(|(|(memberof=CN=Domain Users,CN=Users,DC=bunny-lab,DC=io)(primaryGroupID=513))))(samaccountname=%uid))` | LDAP filter used by Nextcloud |
| Login Attribute | `(sAMAccountName=*)(objectCategory=person)` | Used by PrivacyIDEA |
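Before wiring these values into a service, you can sanity-check the bind DN and base DN with a quick `ldapsearch` from any Linux host (a sketch; assumes the OpenLDAP client utilities are installed, and `-W` prompts for the bind password):
``` sh
ldapsearch -x -H ldap://bunny-dc-01.bunny-lab.io \
  -D "CN=Nicole Rappe,CN=Users,DC=bunny-lab,DC=io" -W \
  -b "CN=Users,DC=bunny-lab,DC=io" \
  "(sAMAccountName=nicole.rappe)" cn mail
```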

View File

@@ -0,0 +1,8 @@
## Purpose
If you have a device that has lost trust in the domain for some reason and won't let you log in using domain credentials, run the following command as a local administrator on the device to repair trust.
```powershell
Test-ComputerSecureChannel -Repair -Credential (Get-Credential)
```
If it outputs `True`, go ahead and log out, then try to log in again with the domain credentials.

View File

@@ -0,0 +1,45 @@
**Purpose**: Authelia is an open-source authentication and authorization server and portal fulfilling the identity and access management (IAM) role of information security in providing multi-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion for common reverse proxies.
```yaml title="docker-compose.yml"
services:
authelia:
image: authelia/authelia
container_name: authelia
volumes:
- /mnt/authelia/config:/config
networks:
docker_network:
ipv4_address: 192.168.5.159
expose:
- 9091
restart: unless-stopped
healthcheck:
disable: true
environment:
- TZ=America/Denver
redis:
image: redis:alpine
container_name: redis
volumes:
- /mnt/authelia/redis:/data
networks:
docker_network:
ipv4_address: 192.168.5.158
expose:
- 6379
restart: unless-stopped
environment:
- TZ=America/Denver
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```jsx title=".env"
Not Applicable
```

View File

@@ -0,0 +1,168 @@
!!! bug
    The docker-compose version of the deployment appears to be bugged and has known issues; deployment via Kubernetes is required for stability and support.
**Purpose**: Authentik is an open-source Identity Provider, focused on flexibility and versatility. With authentik, site administrators, application developers, and security engineers have a dependable and secure solution for authentication in almost any type of environment. There are robust recovery actions available for the users and applications, including user profile and password management. You can quickly edit, deactivate, or even impersonate a user profile, and set a new password for new users or reset an existing password.
This document is based on the [Official Docker-Compose Documentation](https://goauthentik.io/docs/installation/docker-compose). It is meant for testing / small-scale production deployments.
## Docker Configuration
```yaml title="docker-compose.yml"
---
version: "3.4"
services:
postgresql:
image: docker.io/library/postgres:12-alpine
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
start_period: 20s
interval: 30s
retries: 5
timeout: 5s
volumes:
- /srv/containers/authentik/db:/var/lib/postgresql/data
environment:
POSTGRES_PASSWORD: ${PG_PASS:?database password required}
POSTGRES_USER: ${PG_USER:-authentik}
POSTGRES_DB: ${PG_DB:-authentik}
env_file:
- stack.env
networks:
docker_network:
ipv4_address: 192.168.5.2
redis:
image: docker.io/library/redis:alpine
command: --save 60 1 --loglevel warning
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
start_period: 20s
interval: 30s
retries: 5
timeout: 3s
volumes:
- /srv/containers/authentik/redis:/data
networks:
docker_network:
ipv4_address: 192.168.5.3
server:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2023.10.7}
restart: unless-stopped
command: server
environment:
AUTHENTIK_REDIS__HOST: redis
AUTHENTIK_POSTGRESQL__HOST: postgresql
AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
volumes:
- /srv/containers/authentik/media:/media
- /srv/containers/authentik/custom-templates:/templates
env_file:
- stack.env
ports:
- "${COMPOSE_PORT_HTTP:-9000}:9000"
- "${COMPOSE_PORT_HTTPS:-9443}:9443"
depends_on:
- postgresql
- redis
networks:
docker_network:
ipv4_address: 192.168.5.4
worker:
image: ${AUTHENTIK_IMAGE:-ghcr.io/goauthentik/server}:${AUTHENTIK_TAG:-2023.10.7}
restart: unless-stopped
command: worker
environment:
AUTHENTIK_REDIS__HOST: redis
AUTHENTIK_POSTGRESQL__HOST: postgresql
AUTHENTIK_POSTGRESQL__USER: ${PG_USER:-authentik}
AUTHENTIK_POSTGRESQL__NAME: ${PG_DB:-authentik}
AUTHENTIK_POSTGRESQL__PASSWORD: ${PG_PASS}
# `user: root` and the docker socket volume are optional.
# See more for the docker socket integration here:
# https://goauthentik.io/docs/outposts/integrations/docker
# Removing `user: root` also prevents the worker from fixing the permissions
# on the mounted folders, so when removing this make sure the folders have the correct UID/GID
# (1000:1000 by default)
user: root
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /srv/containers/authentik/media:/media
- /srv/containers/authentik/certs:/certs
- /srv/containers/authentik/custom-templates:/templates
env_file:
- stack.env
depends_on:
- postgresql
- redis
networks:
docker_network:
ipv4_address: 192.168.5.5
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
PG_PASS=<See Below>
AUTHENTIK_SECRET_KEY=<See Below>
AUTHENTIK_BOOTSTRAP_PASSWORD=<SecurePassword>
AUTHENTIK_BOOTSTRAP_TOKEN=<SecureOneTimePassword>
AUTHENTIK_BOOTSTRAP_EMAIL=nicole.rappe@bunny-lab.io
## SMTP Host Emails are sent to
#AUTHENTIK_EMAIL__HOST=localhost
#AUTHENTIK_EMAIL__PORT=25
## Optionally authenticate (don't add quotation marks to your password)
#AUTHENTIK_EMAIL__USERNAME=
#AUTHENTIK_EMAIL__PASSWORD=
## Use StartTLS
#AUTHENTIK_EMAIL__USE_TLS=false
## Use SSL
#AUTHENTIK_EMAIL__USE_SSL=false
#AUTHENTIK_EMAIL__TIMEOUT=10
## Email address authentik will send from, should have a correct @domain
#AUTHENTIK_EMAIL__FROM=authentik@localhost
```
!!! note "Generating Passwords"
Navigate to the online [PWGen Password Generator](https://pwgen.io/en/) to generate the passwords for `PG_PASS` (40 characters) and `AUTHENTIK_SECRET_KEY` (50 characters).
    Because of a PostgreSQL limitation, only passwords up to 99 characters are supported. See https://www.postgresql.org/message-id/09512C4F-8CB9-4021-B455-EF4C4F0D55A0@amazon.com
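If you would rather generate these locally instead of using a website, `openssl` works as well (a sketch; hex output also avoids the problematic symbols called out in the warning below):
```sh
# 40-character value for PG_PASS (20 random bytes as hex)
openssl rand -hex 20
# 50-character value for AUTHENTIK_SECRET_KEY (25 random bytes as hex)
openssl rand -hex 25
```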
!!! warning "Password Symbols"
You may encounter the Authentik WebUI throwing `Forbidden` errors, and this is likely caused by you using a password with "problematic" characters for the `PG_PASS` environment variable. Try to avoid using `,` or `;` or `:` in the password you generate.
## WebUI Initial Setup
To start the initial setup, navigate to https://192.168.5.4:9443/if/flow/initial-setup/
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
routers:
PLACEHOLDER:
entryPoints:
- websecure
tls:
certResolver: myresolver
service: PLACEHOLDER
rule: Host(`PLACEHOLDER.bunny-lab.io`)
services:
PLACEHOLDER:
loadBalancer:
servers:
- url: http://PLACEHOLDER:80
passHostHeader: true
```

View File

@@ -0,0 +1,231 @@
**Purpose**: Keycloak is an open source identity and access management system for modern applications and services.
- [Original Reference Compose File](https://github.com/JamesTurland/JimsGarage/blob/main/Keycloak/docker-compose.yaml)
- [Original Reference Deployment Video](https://www.youtube.com/watch?v=6ye4lP9EA2Y)
- [Theme Customization Documentation](https://www.baeldung.com/spring-keycloak-custom-themes)
## Keycloak Authentication Sequence
``` mermaid
sequenceDiagram
participant User
participant Traefik as Traefik Reverse Proxy
participant Keycloak
participant Services
User->>Traefik: Access service URL
Traefik->>Keycloak: Redirect to Keycloak for authentication
User->>Keycloak: Provide credentials for authentication
Keycloak->>User: Return authorization token/cookie
User->>Traefik: Send request with authorization token/cookie
Traefik->>Keycloak: Validate token/cookie
Keycloak->>Traefik: Token/cookie is valid
Traefik->>Services: Forward request to services
Services->>Traefik: Response back to Traefik
Traefik->>User: Return service response
```
## Docker Configuration
=== "docker-compose.yml"
```yaml
version: '3.7'
services:
postgres:
image: postgres:16.2
volumes:
- /srv/containers/keycloak/db:/var/lib/postgresql/data
environment:
POSTGRES_DB: ${POSTGRES_DB}
POSTGRES_USER: ${POSTGRES_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
healthcheck:
test: ["CMD-SHELL", "pg_isready -U keycloak"]
interval: 10s
timeout: 5s
retries: 5
networks:
keycloak_internal_network: # Network for internal communication
ipv4_address: 172.16.238.3 # Static IP for PostgreSQL in internal network
keycloak:
image: quay.io/keycloak/keycloak:23.0.6
command: start
volumes:
- /srv/containers/keycloak/themes:/opt/keycloak/themes
- /srv/containers/keycloak/base-theme:/opt/keycloak/themes/base
environment:
TZ: America/Denver # (1)
KC_PROXY_ADDRESS_FORWARDING: true # (2)
KC_HOSTNAME_STRICT: false
KC_HOSTNAME: auth.bunny-lab.io # (3)
KC_PROXY: edge # (4)
KC_HTTP_ENABLED: true
KC_DB: postgres
KC_DB_USERNAME: ${POSTGRES_USER}
KC_DB_PASSWORD: ${POSTGRES_PASSWORD}
KC_DB_URL_HOST: postgres
KC_DB_URL_PORT: 5432
KC_DB_URL_DATABASE: ${POSTGRES_DB}
KC_TRANSACTION_RECOVERY: true
KEYCLOAK_ADMIN: ${KEYCLOAK_ADMIN}
KEYCLOAK_ADMIN_PASSWORD: ${KEYCLOAK_ADMIN_PASSWORD}
KC_HEALTH_ENABLED: true
DB_POOL_MAX_SIZE: 20 # (5)
DB_POOL_MIN_SIZE: 5 # (6)
DB_POOL_ACQUISITION_TIMEOUT: 30 # (7)
DB_POOL_IDLE_TIMEOUT: 300 # (8)
JDBC_PARAMS: "connectTimeout=30"
KC_HOSTNAME_DEBUG: false # (9)
ports:
- 8080:8080
restart: always
depends_on:
postgres:
condition: service_healthy
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/auth"] # Health check for Keycloak
interval: 30s # Health check interval
timeout: 10s # Health check timeout
retries: 3 # Health check retries
networks:
docker_network:
ipv4_address: 192.168.5.2
keycloak_internal_network: # Network for internal communication
ipv4_address: 172.16.238.2 # Static IP for Keycloak in internal network
networks:
default:
external:
name: docker_network
docker_network:
external: true
keycloak_internal_network: # Internal network for private communication
driver: bridge # Network driver
ipam: # IP address management
config:
- subnet: 172.16.238.0/24 # Subnet for internal network
```
1. This sets the timezone of the Keycloak server to your timezone. This is not strictly necessary according to the official documentation; I just like to add it to all of my containers as a baseline environment variable
2. This assumes you are running Keycloak behind a reverse proxy, in my particular case, Traefik
3. Set this to the FQDN that you are expecting to reach the Keycloak server at behind your reverse proxy
4. This assumes you are running Keycloak behind a reverse proxy, in my particular case, Traefik
5. Maximum connections in the database pool
6. Minimum idle connections in the database pool
7. Timeout for acquiring a connection from the database pool
8. Timeout for closing idle connections to the database
9. If this is enabled, navigate to https://auth.bunny-lab.io/realms/master/hostname-debug to troubleshoot the deployment if you experience issues logging into the web portal or admin UI
=== ".env"
```yaml
POSTGRES_DB=keycloak
POSTGRES_USER=keycloak
POSTGRES_PASSWORD=SomethingSecure # (1)
KEYCLOAK_ADMIN=admin
KEYCLOAK_ADMIN_PASSWORD=SomethingSuperSecureToLoginAsAdmin # (2)
```
1. This is used internally by Keycloak to interact with the PostgreSQL database server
2. This is used to log into the web admin portal at https://auth.bunny-lab.io
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
auth:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: auth
rule: Host(`auth.bunny-lab.io`)
middlewares:
- auth-headers
services:
auth:
loadBalancer:
servers:
- url: http://192.168.5.2:8080
passHostHeader: true
middlewares:
auth-headers:
headers:
sslRedirect: true
stsSeconds: 31536000
stsIncludeSubdomains: true
stsPreload: true
forceSTSHeader: true
customRequestHeaders:
X-Forwarded-Proto: https
X-Forwarded-Port: "443"
```
# Traefik Keycloak Middleware
At this point, we need to add the Keycloak OpenID plugin to Traefik's main configuration. In this example, it is assumed that you configure this in Portainer/Docker Compose rather than via a static yml/toml file, and that you followed the [Docker Compose based Traefik Deployment](../../edge/traefik.md).
## Install Keycloak Plugin
If you do not already have the following added to the end of your `command:` section of the docker-compose.yml file in Portainer, go ahead and add it:
``` yaml
# Keycloak plugin configuration
- "--experimental.plugins.keycloakopenid.moduleName=github.com/Gwojda/keycloakopenid"
- "--experimental.plugins.keycloakopenid.version=v0.1.34"
```
## Add Middleware to Traefik Dynamic Configuration
You will want to ensure the following exists in the dynamically-loaded config file folder. You can name the file whatever you want; it acts as a one-for-all middleware for any services you want communicating as a specific OAuth2 `Client ID`. For example, you might want some services to exist in a particular realm of Keycloak, or have different client rules apply to certain services. If this is the case, you can create multiple middlewares in this single yaml file, each handling a different service / realm. It can get pretty complicated if you want to handle a multi-tenant environment, such as one seen in an enterprise setting.
```jsx title="keycloak-middleware.yml"
http:
middlewares:
auth-bunny-lab-io:
plugin:
keycloakopenid:
KeycloakURL: "https://auth.bunny-lab.io" # <- Also supports complete URL, e.g. https://my-keycloak-url.com/auth
ClientID: "traefik-reverse-proxy"
ClientSecret: "https://auth.bunny-lab.io > Clients > traefik-reverse-proxy > Credentials > Client Secret"
KeycloakRealm: "master"
Scope: "openid profile email"
TokenCookieName: "AUTH_TOKEN"
UseAuthHeader: "false"
          # IgnorePathPrefixes: "/api,/favicon.ico" # comma-delimited (optional)
```
## Configure Valid Redirect URLs
At this point, within Keycloak, you need to configure domains that you are allowed to visit after authenticating. You can do this with wildcards, but generally you navigate to "**https://auth.bunny-lab.io > Clients > traefik-reverse-proxy > Valid redirect URIs**" A simple example is adding `https://tools.bunny-lab.io/*` to the list of valid redirect URLs. If the site is not in this list, even if it has the middleware configured in Traefik, it will fail to authenticate and not let the user proceed to the website being protected behind Keycloak.
## Adding Middleware to Dynamic Traefik Service Config Files
At this point, you are in the final stretch; you just need to add the middleware to the Traefik dynamic config files to ensure traffic gets routed through Keycloak when someone attempts to access that service. Put the following middleware section under the `routers:` section of the config file.
```yaml
middlewares:
- auth-bunny-lab-io # Referencing the Keycloak Server
```
A full example config file would look like the following:
```yaml
http:
routers:
example:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: example
rule: Host(`example.bunny-lab.io`)
middlewares:
- auth-bunny-lab-io # Referencing the Keycloak Server Traefik Middleware
services:
example:
loadBalancer:
servers:
- url: http://192.168.5.16:80
passHostHeader: true
```

View File

@@ -0,0 +1,2 @@
You can deploy Keycloak via a [docker-compose stack](../deployment.md) found within the "Containerization" section of the documentation.

View File

@@ -0,0 +1,12 @@
### OAuth2 Configuration
These are variables referenced by the associated service to connect its authentication system to [Keycloak](../deployment.md).
| **Parameter** | **Value** |
| :--- | :--- |
| Authentication Name | `auth-bunny-lab-io` |
| OAuth2 Provider | `OpenID Connect` |
| Client ID (Key) | `git-bunny-lab-io` |
| Client Secret | `https://auth.bunny-lab.io > Clients > git-bunny-lab-io > Credentials > Client Secret` |
| OpenID Connect Auto Discovery URL | `https://auth.bunny-lab.io/realms/master/.well-known/openid-configuration` |
| Skip Local 2FA | Yes |

View File

@@ -0,0 +1,15 @@
### OAuth2 Configuration
These are variables referenced by the associated service to connect its authentication system to [Keycloak](../deployment.md).
| **Parameter** | **Value** |
| :--- | :--- |
| Client ID | `container-node-01` |
| Client Secret | `https://auth.bunny-lab.io > Clients > container-node-01 > Credentials > Client Secret` |
| Authorization URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/auth` |
| Access Token URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/token` |
| Resource URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/userinfo` |
| Redirect URL | `https://192.168.3.19:9443` |
| Logout URL | `https://auth.bunny-lab.io/realms/master/protocol/openid-connect/logout` |
| User Identifier | `email` |
| Scopes | `email openid profile` |

View File

@@ -0,0 +1,138 @@
**Purpose**: privacyIDEA is a modular authentication system. Using privacyIDEA you can enhance your existing applications like local login, VPN, remote access, SSH connections, access to web sites or web portals with a second factor during authentication.
!!! info "Assumptions"
    It is assumed you have a provisioned virtual or physical machine running Ubuntu Server 22.04 on which to deploy a privacyIDEA server.
## AWX Deployment
### Add Server to Inventory and Pull Inventory/Playbook Updates from Gitea
You need to target the new server using a template in AWX (preferably).
- We will assume the FQDN of the server is `auth.bunny-lab.io` or just `auth`
- Be sure to add the host into the [AWX Homelab Inventory File](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini), as sketched after this list
- Update / Sync the "**Bunny-Lab**" project in AWX ([Resources > Projects > Bunny-Lab > Sync](https://awx.bunny-lab.io/#/projects/8/details))
- Update / Sync the git.bunny-lab.io Inventory Source ([Resources > Inventories > Homelab > Sources > git.bunny-lab.io > Sync](https://awx.bunny-lab.io/#/inventories/inventory/2/sources/9/details))
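For reference, the inventory entry might look something like this (a hypothetical sketch; the `[privacyideaServers]` group name comes from the template section below):
``` ini
# Hypothetical entry in inventories/homelab.ini
[privacyideaServers]
auth.bunny-lab.io
```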
### Create a Template
Next, you want to make a template to automate the deployment of privacyIDEA on any servers that are members of the `[privacyideaServers]` inventory host group. This is useful for development / testing, as well as rapid re-deployment / scaling.
- Navigate to **Resources > Templates > Add**
| **Field** | **Value** |
| :--- | :--- |
| Template Name | `Deploy PrivacyIDEA Server` |
| Description | `Ubuntu Server 22.04 Required` |
| Project | `Bunny-Lab` *(Click the Magnifying Lens)* |
| Inventory | `Homelab` |
| Playbook | `playbooks/Linux/Deployments/privacyIDEA.yml` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Credentials | `SSH: (LINUX) nicole` |
**Options**:
- [X] Privilege Escalation: Checked
- [X] Enable Fact Storage: Checked
### Launch the Template
Now we need to launch the template. Assuming all of the above was completed, we can now deploy the playbook/template against the Ubuntu Server via SSH.
- Launch the Template (Rocket Button)
- As the template runs, you will see deployment progress output on the screen
!!! success
You will know if everything was successful if you see something that looks like the following:
``` sh
ok: [auth]
TASK [Install wget and software-properties-common] *****************************
ok: [auth]
TASK [Download PrivacyIDEA signing key] ****************************************
changed: [auth]
TASK [Add signing key for Ubuntu 22.04LTS] *************************************
changed: [auth]
TASK [Add PrivacyIDEA repository] **********************************************
changed: [auth]
TASK [Update apt cache] ********************************************************
changed: [auth]
TASK [Install PrivacyIDEA with Apache2] ****************************************
changed: [auth]
    PLAY RECAP *********************************************************************
    auth                       : ok=7    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
## Admin Access to WebUI
### Create a privacyIDEA Administrator Account
You will need to use the CLI on the server in order to create the first administrative account. Run the following command and provide a password for the administrator account.
``` sh
sudo pi-manage admin add nicole.rappe -e nicole.rappe@bunny-lab.io
```
### Log into the WebUI
Assuming you created an `A` record in the DNS server pointing to the IP address of the privacyIDEA server, navigate to https://auth.bunny-lab.io and sign in with your newly-created username and password. (e.g. `nicole.rappe`)
## Connect to Active Directory/LDAP
### Create a LDAP User ID Resolver
This is what will connect privacyIDEA to an LDAP backend to pull-down users for authentication in Active Directory. Begin by navigating to "**Config > Users > New LDAP Resolver**"
| **Field** | **Value** |
| :--- | :--- |
| Resolver Name | `BunnyLab-LDAP` |
| Server URI | `ldap://bunny-dc-01.bunny-lab.io, ldap://bunny-dc-02.bunny-lab.io` |
| Pooling Strategy | `ROUND_ROBIN` |
| StartTLS | `<Unchecked>` |
| Base DN | `CN=Users,DC=bunny-lab,DC=io` |
| Scope | `SUBTREE` |
| Bind Type | `Simple` |
| Bind DN | `CN=Nicole Rappe,CN=Users,DC=bunny-lab,DC=io` |
| Bind Password | `<Domain Admin Password for "nicole.rappe">` |
- Click the "**Preset Active Directory**" button.
- Click the "**Test LDAP Resolver**" button.
### Associate User ID Resolver with a Realm
Now we need to create what is called a "**Realm**". Users need to be in realms to have tokens assigned; a user who is not a member of a realm cannot have a token assigned and cannot authenticate. You can combine several different User ID Resolvers (see UserIdResolvers) into a realm. Navigate to "**Config > Realms**"
| **Field** | **Value** |
| :--- | :--- |
| Realm Name | `Bunny-Lab` |
| Resolver(s) | `BunnyLab-LDAP` |
## Configure Push Notifications
### Create Policies
You will need to create several policies; you can make them all individual, or merge the ones with identical scopes to keep things more organized. To begin, navigate to "**Config > Policies > Create New Policy**"
- **Scope**: `Enrollment` > "**push_firebase_configuration**" = `poll only`
- **Scope**: `Enrollment` > "**push_registration_url**" = `https://auth.bunny-lab.io/ttype/push`
- **Scope**: `Enrollment` > "**push_ssl_verify**" = `0`
- **Scope**: `Authentication` > "**push_allow_polling**" = `allow`
## Enrolling the First Token
!!! bug "Push Notifications Broken"
    Currently, the push notification system (e.g. Cisco DUO-style push approvals) is not behaving as expected. For now, you can use other authentication methods for the tokens, such as HOTP (on-demand MFA codes) or TOTP (conventional time-based MFA codes).
### TOTP Token
Navigate to "**Tokens > Enroll Token**"
| **Field** | **Value** |
| :--- | :--- |
| Token Type | `TOTP` |
| Realm | `Bunny-Lab` |
| Username | `[256da6f8-9ddb-4ec5-9409-1a95fea27615] nicole.rappe (Nicole Rappe)` |
Use any MFA authenticator app like Bitwarden or Google Authenticator to add the code and store the secret key somewhere safe.
## Install Credential Provider
### Install Credential Provider Subscription File
In order to use the Credential Provider, you have to upload a subscription file. The free tier allows up to 50 devices using the Credential Provider, but you can alter the source code of privacyIDEA to ignore subscriptions and just unlock everything (custom python code planned).
When you want to leverage MFA in an environment using the server, you need to have a domain-joined computer running the Credential Provider, which can be found on the [Official Credential Provider Github Page](https://github.com/privacyidea/privacyidea-credential-provider/releases).
- Download the MSI
- Run the installer on the computer
- Click "**Next**"
- Check the "**Agree**" checkbox, then click "**Next**"
- Hostname: `auth.bunny-lab.io`
- Path: `/path/to/pi`
- [x] Ignore Unknown CA Errors when Using SSL
- [x] Ignore Invalid Common Name Errors when Using SSL
- Click "**Next**" > "**Next**" > "**Next**"
- Click "**Install**" then "**Finish**"
You can now log out and verify that the credential provider is displayed as an option, then log in using your domain username, domain password, and the TOTP you configured in the privacyIDEA WebUI.

View File

@@ -0,0 +1,70 @@
**Purpose**: Self-hosted open-source no-code business automation tool.
```yaml title="docker-compose.yml"
version: '3.0'
services:
activepieces:
image: activepieces/activepieces:0.3.11
container_name: activepieces
restart: unless-stopped
privileged: true
ports:
- '8080:80'
environment:
- 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
- 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
- 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
env_file: stack.env
depends_on:
- postgres
- redis
networks:
docker_network:
ipv4_address: 192.168.5.62
postgres:
image: 'postgres:14.4'
container_name: postgres
restart: unless-stopped
environment:
- 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
- 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
- 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
volumes:
      - /srv/containers/activepieces/postgresql:/var/lib/postgresql/data
networks:
docker_network:
ipv4_address: 192.168.5.61
redis:
image: 'redis:7.0.7'
container_name: redis
restart: unless-stopped
volumes:
      - /srv/containers/activepieces/redis:/data
networks:
docker_network:
ipv4_address: 192.168.5.60
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```jsx title=".env"
AP_ENGINE_EXECUTABLE_PATH=dist/packages/engine/main.js
AP_ENCRYPTION_KEY=e81f8754faa04acaa7b13caa5d2c6a5a
AP_JWT_SECRET=REDACTED #BE SURE TO SET THIS WITH A VALID JWT SECRET > REFER TO OFFICIAL DOCUMENTATION
AP_ENVIRONMENT=prod
AP_FRONTEND_URL=https://ap.cyberstrawberry.net
AP_NODE_EXECUTABLE_PATH=/usr/local/bin/node
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=192.168.5.61
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=REDACTED #USE A SECURE SHORT PASSWORD > ENSURE ITS NOT TOO LONG FOR POSTGRESQL
AP_REDIS_HOST=redis
AP_REDIS_PORT=6379
AP_SANDBOX_RUN_TIME_SECONDS=600
AP_TELEMETRY_ENABLED=true
```

View File

@@ -0,0 +1,29 @@
**Purpose**: Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.
```yaml title="docker-compose.yml"
version: "3.7"
services:
node-red:
image: nodered/node-red:latest
environment:
- TZ=America/Denver
ports:
- "1880:1880"
networks:
docker_network:
ipv4_address: 192.168.5.92
volumes:
- /srv/containers/node-red:/data
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```

View File

@@ -0,0 +1,77 @@
**Purpose**: User friendly web interface for executing Ansible playbooks, Terraform, OpenTofu code and Bash scripts. It is designed to make your automation tasks easier and more enjoyable.
[Website Details](https://semaphoreui.com/)
!!! info "Standalone VM Assumption"
    It is assumed that you are deploying Semaphore UI in its own standalone virtual machine. These instructions don't accommodate MACVLAN docker networking, and assume that Semaphore UI and its PostgreSQL database backend share their IP address with the VM they are running on.
## Docker Configuration
```yaml title="docker-compose.yml"
services:
semaphore-ui:
ports:
- 3000:3000
image: public.ecr.aws/semaphore/pro/server:v2.13.12
privileged: true
environment:
SEMAPHORE_DB_DIALECT: postgres
SEMAPHORE_DB_HOST: postgres
SEMAPHORE_DB_NAME: semaphore
SEMAPHORE_DB_USER: root
SEMAPHORE_DB_PASS: SuperSecretDBPassword
SEMAPHORE_ADMIN: nicole
SEMAPHORE_ADMIN_PASSWORD: SuperSecretPassword
SEMAPHORE_ADMIN_NAME: Nicole Rappe
SEMAPHORE_ADMIN_EMAIL: infrastructure@bunny-lab.io
SEMAPHORE_EMAIL_SENDER: "noreply@bunny-lab.io"
SEMAPHORE_EMAIL_HOST: "mail.bunny-lab.io"
SEMAPHORE_EMAIL_PORT: "587"
SEMAPHORE_EMAIL_USERNAME: "noreply@bunny-lab.io"
SEMAPHORE_EMAIL_PASSWORD: "SuperSecretSMTPPassword"
ANSIBLE_HOST_KEY_CHECKING: "False"
volumes:
- /srv/containers/semaphore-ui/data:/var/lib/semaphore
- /srv/containers/semaphore-ui/config:/etc/semaphore
- /srv/containers/semaphore-ui/tmp:/tmp/semaphore
depends_on:
- postgres
postgres:
image: postgres:12-alpine
ports:
- 5432:5432
volumes:
- /srv/containers/semaphore-ui/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=semaphore
- POSTGRES_USER=root
- POSTGRES_PASSWORD=SuperSecretDBPassword
- TZ=America/Denver
restart: always
```
```yaml title=".env"
N/A - Will be cleaned up later.
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
semaphore:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: semaphore
rule: Host(`semaphore.bunny-lab.io`)
services:
semaphore:
loadBalancer:
servers:
- url: http://192.168.3.51:3000
passHostHeader: true
```

services/backup/kopia.md
View File

@@ -0,0 +1,43 @@
**Purpose**: Cross-platform backup tool for Windows, macOS & Linux with fast, incremental backups, client-side end-to-end encryption, compression and data deduplication. CLI and GUI included.
```yaml title="docker-compose.yml"
version: '3.7'
services:
kopia:
image: kopia/kopia:latest
hostname: kopia-backup
user: root
restart: always
ports:
- 51515:51515
environment:
      - KOPIA_PASSWORD=${KOPIA_ENCRYPTION_PASSWORD}
- TZ=America/Denver
privileged: true
volumes:
- /srv/containers/kopia/config:/app/config
- /srv/containers/kopia/cache:/app/cache
- /srv/containers/kopia/logs:/app/logs
- /srv:/srv
- /usr/share/zoneinfo:/usr/share/zoneinfo
entrypoint: ["/bin/kopia", "server", "start", "--insecure", "--timezone=America/Denver", "--address=0.0.0.0:51515", "--override-username=${KOPIA_SERVER_USERNAME}", "--server-username=${KOPIA_SERVER_USERNAME}", "--server-password=${KOPIA_SERVER_PASSWORD}", "--disable-csrf-token-checks"]
networks:
docker_network:
ipv4_address: 192.168.5.14
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
!!! note "Credentials"
Your username will be `kopia@kopia-backup` and the password will be the value you set for `--server-password` in the entrypoint section of the compose file. The `KOPIA_PASSWORD:` is used by the backup repository, such as Backblaze B2, to encrypt/decrypt the backed-up data, and must be updated in the compose file if the repository is changed / updated.
```yaml title=".env"
KOPIA_ENCRYPTION_PASSWORD=PasswordUsedToEncryptDataOnBackblazeB2
KOPIA_SERVER_PASSWORD=ThisIsUsedToLogIntoKopiaWebUI
KOPIA_SERVER_USERNAME=kopia@kopia-backup
```
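Note that the server needs a repository connected before it can store backups. As a rough sketch (not from the original stack), creating a Backblaze B2 repository from inside the container might look like the following, with the container name, bucket, and keys as placeholders:
```sh
# Run inside the Kopia container; KOPIA_PASSWORD from the environment is used
# as the repository encryption password
docker exec -it <kopia-container-name> kopia repository create b2 \
  --bucket=<b2-bucket-name> \
  --key-id=<b2-application-key-id> \
  --key=<b2-application-key>
```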

View File

@@ -0,0 +1,45 @@
**Purpose**: Niltalk is a web-based disposable chat server. It allows users to create password-protected, ephemeral chat rooms and invite peers to them.
```yaml title="docker-compose.yml"
version: "3.7"
services:
redis:
image: redis:alpine
volumes:
- /srv/niltalk
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.196
niltalk:
image: kailashnadh/niltalk:latest
ports:
- "9000:9000"
depends_on:
- redis
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.197
labels:
- "traefik.enable=true"
- "traefik.http.routers.niltalk.rule=Host(`temp.cyberstrawberry.net`)"
- "traefik.http.routers.niltalk.entrypoints=websecure"
- "traefik.http.routers.niltalk.tls.certresolver=myresolver"
- "traefik.http.services.niltalk.loadbalancer.server.port=9000"
networks:
default:
external:
name: docker_network
docker_network:
external: true
volumes:
niltalk-data:
```
```yaml title=".env"
Not Applicable
```


@@ -0,0 +1,11 @@
**Purpose**:
When someone types a message that includes a ticket number (e.g. `T00000000.0000`), we want to replace that text with an API-friendly URL that leverages Markdown formatting as well.
From RocketChat, navigate to the "Marketplace" and look for "**Word Replacer**". You can find the application's [GitHub Page](https://github.com/Dimsday/WordReplacer) for additional information / source code review. Proceed to install the application. Once it has been installed, use the following RegEx filter / string in the application's settings:
``` json
[{"search": "T(\\d{8}\\.\\d{4})", "replace": "[$&](https://ww15.autotask.net/Autotask/AutotaskExtend/ExecuteCommand.aspx?Code=OpenTicketDetail&TicketNumber=$&)"}]
```
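For example, a chat message containing a ticket number would be rewritten as follows (`$&` expands to the entire matched ticket number):
```
Input:  Please review T12345678.0001 when you have a moment.
Output: Please review [T12345678.0001](https://ww15.autotask.net/Autotask/AutotaskExtend/ExecuteCommand.aspx?Code=OpenTicketDetail&TicketNumber=T12345678.0001) when you have a moment.
```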
!!! success
Now everything should be functional and replacing ticket numbers with valid links that open the ticket in Autotask.


@@ -0,0 +1,100 @@
**Purpose**: Deploy a RocketChat and MongoDB database together.
!!! caution "Folder Pre-Creation"
You need to create the folder for the Mongo database before launching the container stack for the first time. If you do not make this folder ahead of time, Mongo will give Permission Denied errors for the data directory. You can create the folder as well as adjust permissions with the following commands:
``` sh
mkdir -p /srv/containers/rocketchat/mongodb/data
chmod -R 777 /srv/containers/rocketchat
```
```yaml title="docker-compose.yml"
services:
rocketchat:
image: registry.rocket.chat/rocketchat/rocket.chat:${RELEASE:-latest}
restart: always
# labels:
# traefik.enable: "true"
# traefik.http.routers.rocketchat.rule: Host(`${DOMAIN:-}`)
# traefik.http.routers.rocketchat.tls: "true"
# traefik.http.routers.rocketchat.entrypoints: https
# traefik.http.routers.rocketchat.tls.certresolver: le
environment:
MONGO_URL: "${MONGO_URL:-\
mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
${MONGODB_DATABASE:-rocketchat}?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
MONGO_OPLOG_URL: "${MONGO_OPLOG_URL:\
-mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
local?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
ROOT_URL: ${ROOT_URL:-http://localhost:${HOST_PORT:-3000}}
PORT: ${PORT:-3000}
DEPLOY_METHOD: docker
DEPLOY_PLATFORM: ${DEPLOY_PLATFORM:-}
REG_TOKEN: ${REG_TOKEN:-}
depends_on:
- rc_mongodb
expose:
- ${PORT:-3000}
dns:
- 1.1.1.1
- 1.0.0.1
- 8.8.8.8
- 8.8.4.4
ports:
- "${BIND_IP:-0.0.0.0}:${HOST_PORT:-3000}:${PORT:-3000}"
networks:
docker_network:
ipv4_address: 192.168.5.2
rc_mongodb:
image: docker.io/bitnami/mongodb:${MONGODB_VERSION:-5.0}
restart: always
volumes:
      - /srv/containers/rocketchat/mongodb:/bitnami/mongodb
environment:
MONGODB_REPLICA_SET_MODE: primary
MONGODB_REPLICA_SET_NAME: ${MONGODB_REPLICA_SET_NAME:-rs0}
MONGODB_PORT_NUMBER: ${MONGODB_PORT_NUMBER:-27017}
MONGODB_INITIAL_PRIMARY_HOST: ${MONGODB_INITIAL_PRIMARY_HOST:-rc_mongodb}
MONGODB_INITIAL_PRIMARY_PORT_NUMBER: ${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}
MONGODB_ADVERTISED_HOSTNAME: ${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}
MONGODB_ENABLE_JOURNAL: ${MONGODB_ENABLE_JOURNAL:-true}
ALLOW_EMPTY_PASSWORD: ${ALLOW_EMPTY_PASSWORD:-yes}
networks:
docker_network:
ipv4_address: 192.168.5.3
networks:
  docker_network:
external: true
```
```yaml title=".env"
TZ=America/Denver
RELEASE=6.3.0
PORT=3000 #Redundant - Can be Removed
MONGODB_VERSION=6.0
MONGODB_INITIAL_PRIMARY_HOST=rc_mongodb #Redundant - Can be Removed
MONGODB_ADVERTISED_HOSTNAME=rc_mongodb #Redundant - Can be Removed
```
## Reverse Proxy Configuration
```yaml title="nginx.conf"
# Rocket.Chat Server
server {
listen 443 ssl;
server_name rocketchat.domain.net;
error_log /var/log/nginx/new_rocketchat_error.log;
client_max_body_size 500M;
location / {
proxy_pass http://192.168.5.2:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Nginx-Proxy true;
proxy_redirect off;
}
}
```


@@ -0,0 +1,10 @@
## Purpose
This documentation helps you deploy an email server within a cPanel hosted environment.
!!! note "Assumptions"
It is assumed that the cPanel environment is set up (prior) to following this documentation, as deploying cPanel itself is not covered in this document.
## Step
### Sub-Step


@@ -0,0 +1,59 @@
**Purpose**: A self-hostable personal dashboard built for you. Includes status-checking, widgets, themes, icon packs, a UI editor and tons more!
```yaml title="docker-compose.yml"
version: "3.8"
services:
dashy:
container_name: Dashy
# Pull latest image from DockerHub
image: lissy93/dashy
# Set port that web service will be served on. Keep container port as 80
ports:
- 4000:80
labels:
- "traefik.enable=true"
- "traefik.http.routers.dashy.rule=Host(`dashboard.cyberstrawberry.net`)"
- "traefik.http.routers.dashy.entrypoints=websecure"
- "traefik.http.routers.dashy.tls.certresolver=myresolver"
- "traefik.http.services.dashy.loadbalancer.server.port=80"
# Set any environmental variables
environment:
- NODE_ENV=production
- UID=1000
- GID=1000
# Pass in your config file below, by specifying the path on your host machine
volumes:
- /srv/Containers/Dashy/conf.yml:/app/public/conf.yml
- /srv/Containers/Dashy/item-icons:/app/public/item-icons
# Specify restart policy
restart: unless-stopped
# Configure healthchecks
healthcheck:
test: ['CMD', 'node', '/app/services/healthcheck']
interval: 1m30s
timeout: 10s
retries: 3
start_period: 40s
# Connect container to Docker_Network
networks:
docker_network:
ipv4_address: 192.168.5.57
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```


@@ -0,0 +1,58 @@
**Purpose**: A highly customizable homepage (or startpage / application dashboard) with Docker and service API integrations.
```yaml title="docker-compose.yml"
version: '3.8'
services:
homepage:
image: ghcr.io/gethomepage/homepage:latest
container_name: homepage
volumes:
- /srv/containers/homepage-docker:/config
- /srv/containers/homepage-docker/icons:/app/public/icons
ports:
- 80:80
- 443:443
- 3000:3000
environment:
- PUID=1000
- PGID=1000
- TZ=America/Denver
- HOMEPAGE_ALLOWED_HOSTS=servers.bunny-lab.io
dns:
- 192.168.3.25
- 192.168.3.26
restart: unless-stopped
extra_hosts:
- "rancher.bunny-lab.io:192.168.3.21"
networks:
docker_network:
ipv4_address: 192.168.5.44
dockerproxy:
image: ghcr.io/tecnativa/docker-socket-proxy:latest
container_name: dockerproxy
environment:
- CONTAINERS=1 # Allow access to viewing containers
- SERVICES=1 # Allow access to viewing services (necessary when using Docker Swarm)
- TASKS=1 # Allow access to viewing tasks (necessary when using Docker Swarm)
- POST=0 # Disallow any POST operations (effectively read-only)
ports:
- 127.0.0.1:2375:2375
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro # Mounted as read-only
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.46
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```

services/devops/gitea.md Normal file

@@ -0,0 +1,95 @@
**Purpose**: Gitea is a painless self-hosted all-in-one software development service. It includes Git hosting, code review, team collaboration, package registry, and CI/CD. It is similar to GitHub, Bitbucket, and GitLab. Gitea was originally forked from Gogs, and almost all of the code has since been changed.
[Detailed SMTP Configuration Reference](https://docs.gitea.com/administration/config-cheat-sheet)
## Docker Configuration
```yaml title="docker-compose.yml"
version: "3"
services:
server:
image: gitea/gitea:latest
container_name: gitea
privileged: true
environment:
- USER_UID=1000
- USER_GID=1000
- TZ=America/Denver
- GITEA__mailer__ENABLED=true
- GITEA__mailer__FROM=${GITEA__mailer__FROM:?GITEA__mailer__FROM not set}
- GITEA__mailer__PROTOCOL=smtp+starttls
- GITEA__mailer__HOST=${GITEA__mailer__HOST:?GITEA__mailer__HOST not set}
- GITEA__mailer__IS_TLS_ENABLED=true
- GITEA__mailer__USER=${GITEA__mailer__USER:-apikey}
- GITEA__mailer__PASSWD="""${GITEA__mailer__PASSWD:?GITEA__mailer__PASSWD not set}"""
restart: always
volumes:
- /srv/containers/gitea:/data
# - /etc/timezone:/etc/timezone:ro
# - /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "222:22"
networks:
docker_network:
ipv4_address: 192.168.5.70
# labels:
# - "traefik.enable=true"
# - "traefik.http.routers.gitea.rule=Host(`git.bunny-lab.io`)"
# - "traefik.http.routers.gitea.entrypoints=websecure"
# - "traefik.http.routers.gitea.tls.certresolver=letsencrypt"
# - "traefik.http.services.gitea.loadbalancer.server.port=3000"
depends_on:
- postgres
postgres:
image: postgres:12-alpine
ports:
- 5432:5432
volumes:
- /srv/containers/gitea/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=gitea
- POSTGRES_USER=gitea
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- TZ=America/Denver
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.71
networks:
docker_network:
external: true
```
```yaml title=".env"
GITEA__mailer__FROM=noreply@bunny-lab.io
GITEA__mailer__HOST=mail.bunny-lab.io
GITEA__mailer__PASSWD=SecureSMTPPassword
GITEA__mailer__USER=noreply@bunny-lab.io
POSTGRES_PASSWORD=SomethingSuperSecure
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
git:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: git
rule: Host(`git.bunny-lab.io`)
services:
git:
loadBalancer:
servers:
- url: http://192.168.5.70:3000
passHostHeader: true
```


@@ -0,0 +1,30 @@
**Purpose**: AdGuard Home is network-wide software for blocking ads & tracking. After you set it up, it will cover ALL your home devices, and you don't need any client-side software for that. With the rise of Internet-of-Things and connected devices, it becomes more and more important to be able to control your whole network.
```yaml title="docker-compose.yml"
version: '3'
services:
app:
image: adguard/adguardhome
ports:
- 3000:3000
- 53:53
- 80:80
volumes:
- /srv/containers/adguard_home/workingdir:/opt/adguardhome/work
- /srv/containers/adguard_home/config:/opt/adguardhome/conf
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.189
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```

services/dns/pi-hole.md Normal file

@@ -0,0 +1,41 @@
**Purpose**: Pi-hole is a Linux network-level advertisement and Internet tracker blocking application which acts as a DNS sinkhole and optionally a DHCP server, intended for use on a private network.
```yaml title="docker-compose.yml"
version: "3"
# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
pihole:
container_name: pihole
image: pihole/pihole:latest
# For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
ports:
- "53:53/tcp"
- "53:53/udp"
- "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
- "80:80/tcp"
environment:
TZ: 'America/Denver'
WEBPASSWORD: 'REDACTED' #USE A SECURE PASSWORD HERE
# Volumes store your data between container upgrades
volumes:
- /srv/containers/pihole/app:/etc/pihole
- /srv/containers/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
# https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
cap_add:
- NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.190
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```


@@ -0,0 +1,88 @@
## Purpose
This document outlines best practices for DNS server configuration in Active Directory environments, focusing on both performance and security considerations. The goal is to enhance the stability, efficiency, and security of DNS infrastructure within enterprise networks.
## Performance Best Practices
!!! note "Performance Recommendations Overview"
The following list is organized in order of priority, with the most critical practices listed first.
### Redundancy and High Availability
* **Always have at least two DNS servers, preferably three (1 master, 2 slaves).**
Ensures redundancy and high availability.
### Internal DNS Usage
* **Domain-joined computers should only use internal DNS servers.**
This ensures that end-user computers can always resolve internal resources and simplifies troubleshooting and management.
* **Extended Reason:** Using only internal DNS servers increases security and streamlines DNS operations.
### DNS Server Self-Referencing
* **A DNS server should have `127.0.0.1` loopback as a secondary or tertiary DNS server.**
Improves the DNS server's own performance and availability.
* **Extended Reason:** Setting the loopback address as the primary DNS can prevent Active Directory from locating replication partners. Use as secondary or tertiary only.
!!! info "Recent Changes"
The usage of `127.0.0.1` has been changed to point to the actual full IP address of the server itself. I need to research this more to determine where this updated guideline came from. For example, if the DNS server IP was `192.168.3.25`, you would set that as the value for the secondary DNS server.
!!! warning "Do **NOT** Use `127.0.0.1` as Primary DNS Server"
When you are setting up domain controllers / DNS servers, you do not want to use the DC itself as the primary. This can cause all sorts of unexpected issues with reliability and replication. Always have another DNS server as the primary, THEN set the 127.0.0.1 localhost as secondary or tertiary.
### DNS Server Prioritization
* **Prioritize DNS servers based on proximity to endpoints.**
Assign the primary DNS server as the local server, and secondary as a remote branch server, to improve lookup speeds.
### DNS Record Aging and Scavenging
* **Enable DNS record aging/scavenging (preferably 7 days).**
Keeps DNS recordsets manageable, which improves lookup performance and troubleshooting.
### Use of CNAME Records
* **Use CNAME records for DNS aliasing. Avoid A records for aliases.**
Updating one host record updates all associated aliases, and PTR records remain properly configured.
## Security Best Practices
!!! note "Security Recommendations Overview"
The following list is organized in order of priority, with the most critical practices listed first.
### Network Exposure
* **DNS servers should never be publicly accessible from the internet.**
This prevents attackers from performing reconnaissance or planning attacks using exposed DNS infrastructure.
### Administrative Access
* **Restrict RDP/remote desktop access to DNS servers/domain controllers to a limited list of administrators.**
Reduces the risk of reconnaissance, reverse shell attacks, and malware installation.
### Use of Slave DNS Servers
* **End-users should be issued only replicated/slave DNS servers.**
Protects the master/authoritative DNS server from being directly exposed as an attack vector.
* **Extended Reason:** In branch office scenarios, assign the local replicated server as primary, and main office replicated servers as secondary and tertiary, keeping the master server isolated.
### DNS Server Cache Lockdown
* **Lock the DNS server cache to 100% (read-only).**
Prevents DNS cache poisoning by allowing cache changes only after TTL expiry.
### DNS Logging
* **Enable DNS logging.**
Facilitates troubleshooting and administration.
### DNS Security Filtering
* **Enable DNS security filtering via DNS forwarder or a security appliance.**
Use secure public DNS (e.g., 9.9.9.9) or a firewall appliance (e.g., Sophos XG Firewall) to add a security layer to all DNS queries.
### Enable DNSSEC
* **Enable DNSSEC (DNS Security Extensions).**
Protects against DNS record spoofing and related attacks.
### DNS Socket Port Randomization
* **Enable DNS socket port randomization.**
Prevents network attacks by making DNS queries originate from unpredictable ports.
* **Note:** Enabled by default on Windows Server 2016 and newer.
## Additional Notes
!!! note "Best Practices Analyzer"
It is recommended to run the official Windows Server DNS Best Practices Analyzer (BPA) on your managed servers for insights specific to your domain environment.
## Sources / References
* [Active Directory Pro: DNS Best Practices](https://activedirectorypro.com/dns-best-practices/)
* [Spiceworks: DNS Server Best Practice](https://community.spiceworks.com/topic/1110865-best-practice-for-dns-servers)
* [Microsoft Docs: Creating a DNS Infrastructure Design](https://docs.microsoft.com/en-us/windows-server/identity/ad-ds/plan/creating-a-dns-infrastructure-design)
* [PhoenixNAP: DNS Best Practices Security](https://phoenixnap.com/kb/dns-best-practices-security)
* [Monitis: Best Practices for Active Directory Integrated DNS](https://www.monitis.com/blog/best-practices-for-active-directory-integrated-dns)
* [DNS Knowledge: Authoritative Name Server](https://www.dnsknowledge.com/whatis/authoritative-name-server/)


@@ -0,0 +1,34 @@
**Purpose**: An optimized site generator in React. Docusaurus helps you to move fast and write content. Build documentation websites, blogs, marketing pages, and more.
```yaml title="docker-compose.yml"
version: "3"
services:
docusaurus:
image: awesometic/docusaurus
container_name: docusaurus
environment:
- TARGET_UID=1000
- TARGET_GID=1000
- AUTO_UPDATE=true
- WEBSITE_NAME=docusaurus
- TEMPLATE=classic
- TZ=America/Denver
restart: always
volumes:
- /srv/containers/docusaurus:/docusaurus
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "80:80"
networks:
docker_network:
ipv4_address: 192.168.5.72
networks:
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```


@@ -0,0 +1,180 @@
**Purpose**: Documentation that simply works. Write your documentation in Markdown and create a professional static site for your Open Source or commercial project in minutes: searchable, customizable, available in more than 60 languages, for all devices.
## Deploy Material MKDocs
```yaml title="docker-compose.yml"
version: '3'
services:
mkdocs:
container_name: mkdocs
image: squidfunk/mkdocs-material
restart: always
environment:
- TZ=America/Denver
ports:
- "8000:8000"
volumes:
- /srv/containers/material-mkdocs/docs:/docs
networks:
docker_network:
ipv4_address: 192.168.5.76
networks:
docker_network:
external: true
```
```yaml title=".env"
N/A
```
## Config Example
When you deploy MKDocs, you will need to give it a configuration to tell MKDocs how to structure itself. The configuration below is what I used in my deployment. This file is one folder level higher than the `/docs` folder that holds the documentation of the website.
```yaml title="/srv/containers/material-mkdocs/docs/mkdocs.yml"
# Project information
site_name: Bunny Lab
site_url: https://kb.bunny-lab.io
site_author: Nicole Rappe
site_description: >-
Server, Script, Workflow, and Networking Documentation
repo_url: https://git.bunny-lab.io/bunny-lab/docs
repo_name: bunny-lab/docs
edit_uri: _edit/main/
# Configuration
theme:
name: material
custom_dir: material/overrides
features:
- announce.dismiss
- content.action.edit
# - content.action.view
- content.code.annotate
- content.code.copy
- content.code.select
- content.tabs.link
- content.tooltips
# - header.autohide
# - navigation.expand
# - navigation.footer
- navigation.indexes
- navigation.instant
- navigation.instant.prefetch
- navigation.instant.progress
- navigation.prune
- navigation.path
# - navigation.sections
- navigation.tabs
- navigation.tabs.sticky
- navigation.top
- navigation.tracking
- search.highlight
- search.share
- search.suggest
- toc.follow
# - toc.integrate ## If this is enabled, the TOC will appear on the left navigation menu.
palette:
- media: "(prefers-color-scheme)"
toggle:
icon: material/link
name: Switch to light mode
- media: "(prefers-color-scheme: light)"
scheme: default
primary: deep purple
accent: deep purple
toggle:
icon: material/toggle-switch
name: Switch to dark mode
- media: "(prefers-color-scheme: dark)"
scheme: slate
primary: black
accent: deep purple
toggle:
icon: material/toggle-switch-off
name: Switch to system preference
font:
text: Roboto
code: Roboto Mono
favicon: assets/favicon.png
icon:
logo: logo
# Plugins
plugins:
- search:
separator: '[\s\u200b\-_,:!=\[\]()"`/]+|\.(?!\d)|&[lg]t;|(?!\b)(?=[A-Z][a-z])'
- minify:
minify_html: true
- blog
- tags
# Hooks
hooks:
- material/overrides/hooks/shortcodes.py
- material/overrides/hooks/translations.py
# Additional configuration
extra:
status:
new: Recently added
deprecated: Deprecated
extra_css:
- stylesheets/extra.css
# Extensions
markdown_extensions:
- abbr
- admonition
- attr_list
- def_list
- footnotes
- md_in_html
- toc:
permalink: true
toc_depth: 3
- pymdownx.arithmatex:
generic: true
- pymdownx.betterem:
smart_enable: all
- pymdownx.caret
- pymdownx.details
- pymdownx.emoji:
emoji_generator: !!python/name:material.extensions.emoji.to_svg
emoji_index: !!python/name:material.extensions.emoji.twemoji
- pymdownx.highlight:
anchor_linenums: true
line_spans: __span
pygments_lang_class: true
- pymdownx.inlinehilite
- pymdownx.keys
- pymdownx.magiclink:
normalize_issue_symbols: true
repo_url_shorthand: true
user: squidfunk
repo: mkdocs-material
- pymdownx.mark
- pymdownx.smartsymbols
- pymdownx.snippets:
auto_append:
- includes/mkdocs.md
- pymdownx.superfences:
custom_fences:
- name: mermaid
class: mermaid
format: !!python/name:pymdownx.superfences.fence_code_format
- pymdownx.tabbed:
alternate_style: true
combine_header_slug: true
slugify: !!python/object/apply:pymdownx.slugs.slugify
kwds:
case: lower
- pymdownx.tasklist:
custom_checkbox: true
- pymdownx.tilde
```
## Cleaning up
When the server is deployed, it will come with a bunch of unnecessary documentation that tells you how to use it. You will want to go into the `/docs` folder, and delete everything except `assets/favicon.png`, `schema.json`, and `/schema`. These files are necessary to allow MKDocs to automatically detect and structure the documentation based on the file folder structure under `/docs`.
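If you prefer to script the cleanup, here is a rough sketch; it assumes the host-side path from the compose file above and keeps the entire `assets` folder (rather than only `favicon.png`) for simplicity:
```sh
cd /srv/containers/material-mkdocs/docs/docs
# Delete everything except the assets folder, the schema folder, and schema.json
find . -mindepth 1 -maxdepth 1 \
  ! -name 'assets' \
  ! -name 'schema' \
  ! -name 'schema.json' \
  -exec rm -rf {} +
```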
## Hotloading Bug Workaround
There is a [known bug](https://github.com/mkdocs/mkdocs/issues/4055) with the most recent version of Material MKDocs (as of writing) that causes it to not hotload changes immediately. This can be fixed by entering a shell in the docker container using `/bin/sh` then running the following command to downgrade the python "click" package: `pip install click==8.2.1`. After running the command, restart the container and hotloaded changes should start working again. You will have to run this command every time you re-deploy Material MKDocs until the issue is resolved officially.
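As concrete commands, assuming the container name `mkdocs` from the compose file above:
```sh
# Downgrade the python "click" package inside the container
docker exec mkdocs pip install click==8.2.1
# Restart the container so hotloading works again
docker restart mkdocs
```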


@@ -0,0 +1,325 @@
## Purpose
After many years of Material for MKDocs receiving new features and security updates, it finally reached EOL around the end of 2025. The project maintainers pivoted to a new successor called [Zensical](https://zensical.org/docs/get-started/). This document outlines my particular process for setting up a standalone documentation server within a virtual machine.
!!! info "Assumptions"
It is assumed that you are deploying this server into `Ubuntu Server 24.04.2 LTS (Minimal)`. It is also assumed that you are running every command as a user with superuser privileges (e.g. `root`).
You are generally safe to give the GuestVM a 16GB virtual disk and expand it over time based on your needs. CPU count and RAM allocation can also be extremely low based on your preferences, since this is simply a static-page website at the end of the day.
## Architectural Overview
It is useful to understand the flow of data and how everything inter-connects, so I have provided a sequence diagram that you can follow below:
``` mermaid
sequenceDiagram
autonumber
actor Author as Doc Author
participant Gitea as Gitea (Repo + Actions)
participant Runner as Act Runner
participant Zensical as Zensical Server (watch + build)
participant NGINX as NGINX (serves static site)
Author->>Gitea: Push to main
Gitea-->>Runner: Trigger workflow job
Runner->>Zensical: rsync docs → /srv/zensical/docs
Zensical-->>Zensical: Watch detects change
Zensical->>Zensical: Rebuild site → /srv/zensical/site
NGINX-->>NGINX: Serve files from /srv/zensical/site
```
## Setup Python Environment
The first thing we need to do is install the necessary Python packages and set up the Zensical software stack inside a virtual environment.
```sh
sudo apt update && sudo apt upgrade -y
sudo apt install -y nano python3 python3.12-venv
mkdir -p /srv/zensical
cd /srv/zensical
python3 -m venv .venv
source .venv/bin/activate
pip install zensical
zensical new .
deactivate
# Remove Placeholder Example Docs
rm -rf /srv/zensical/docs/{*,.[!.]*}
```
## Zensical
### Configure Settings
Now we want to set some sensible defaults for Zensical to style it to look as close to Material for MKDocs as possible.
```sh
sudo tee /srv/zensical/zensical.toml > /dev/null <<'EOF'
[project]
site_name = "Bunny Lab"
site_description = "Server, Script, Workflow, and Networking Documentation"
site_author = "Nicole Rappe"
site_url = "https://kb.bunny-lab.io/"
repo_url = "https://git.bunny-lab.io/bunny-lab/docs"
repo_name = "bunny-lab/docs"
edit_uri = "_edit/main/"
[project.theme]
variant = "classic"
language = "en"
features = [
"announce.dismiss",
"content.action.edit",
"content.code.annotate",
"content.code.copy",
"content.code.select",
"content.footnote.tooltips",
"content.tabs.link",
"content.tooltips",
"navigation.indexes",
"navigation.instant",
"navigation.instant.prefetch",
"navigation.instant.progress",
"navigation.path",
"navigation.tabs",
"navigation.tabs.sticky",
"navigation.top",
"navigation.tracking",
"search.highlight",
]
[[project.theme.palette]]
scheme = "default"
toggle.icon = "lucide/sun"
toggle.name = "Switch to dark mode"
[[project.theme.palette]]
scheme = "slate"
toggle.icon = "lucide/moon"
toggle.name = "Switch to light mode"
EOF
```
### Create Watchdog Service
Since NGINX has taken over hosting the webpages, this does not need to be accessible from other servers, only from NGINX itself, which runs on the same host as Zensical. We only want to use the `zensical serve` command to keep a watchdog on the documentation folder and automatically rebuild the static site content when changes are detected. These changes are then served by NGINX's webserver.
```sh
# Create Service User, Assign Access, and Lockdown Zensical Data
sudo useradd --system --home /srv/zensical --shell /usr/sbin/nologin zensical || true
sudo chown -R zensical:zensical /srv/zensical
sudo find /srv/zensical -type d -exec chmod 2775 {} \;
sudo find /srv/zensical -type f -exec chmod 664 {} \; # This step likes to take a while, sometimes up to a minute.
```
```sh
# Make Zensical Binary Executable for Service
sudo chmod +x /srv/zensical/.venv/bin/zensical
# Add Additional User(s) to Folder for Extra Access (Such as Doc Runners)
sudo usermod -aG zensical nicole
# Create Service
sudo tee /etc/systemd/system/zensical-watchdog.service > /dev/null <<'EOF'
[Unit]
Description=Zensical Document Changes Watchdog (zensical serve)
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=zensical
Group=zensical
WorkingDirectory=/srv/zensical
# Run the venv binary directly; no activation needed
ExecStart=/srv/zensical/.venv/bin/zensical serve
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target
EOF
# Start & Enable Automatic Startup of Service
sudo systemctl daemon-reload
sudo systemctl enable --now zensical-watchdog.service
```
## NGINX Webserver
We need to deploy NGINX as a webserver, because when using reverse proxies like Traefik, it seems to not get along with Zensical at all. Attempts to resolve this all failed, so putting the statically-built copies of site data that Zensical generates into NGINX's root directory is the second-best solution I came up with. Traefik can be reasonably expected to behave when interacting with NGINX versus Zensical's built-in webserver.
```sh
sudo apt install -y nginx
sudo rm -f /etc/nginx/sites-enabled/default
sudo tee /etc/nginx/sites-available/zensical.conf > /dev/null <<'EOF'
server {
listen 80;
listen [::]:80;
server_name _;
root /srv/zensical/site;
index index.html;
# Primary document handling
location / {
try_files $uri $uri/ /index.html;
}
# Static asset caching (safe for docs)
location ~* \.(css|js|png|jpg|jpeg|gif|svg|ico|woff2?)$ {
expires 7d;
add_header Cache-Control "public, max-age=604800, immutable";
try_files $uri =404;
}
# Prevent access to source or metadata
location ~* \.(toml|md)$ {
deny all;
}
}
EOF
sudo ln -s /etc/nginx/sites-available/zensical.conf /etc/nginx/sites-enabled/zensical.conf
sudo nginx -t
sudo systemctl reload nginx
sudo systemctl enable nginx
```
## Gitea ACT Runner
Now it is time for arguably the most important stage of deployment: setting up a [Gitea Act Runner](https://docs.gitea.com/usage/actions/act-runner). This is how document changes in a Gitea repository will propagate automatically into Zensical's `/srv/zensical/docs` folder.
```sh
# Install Dependencies
sudo apt install -y nodejs npm git rsync curl
# Create dedicated Gitea runner service account
sudo useradd --system --create-home --home /var/lib/gitea_runner --shell /usr/sbin/nologin gitearunner || true
# Allow the runner to write documentation changes
sudo usermod -aG zensical gitearunner
# Download Newest Gitea Runner Binary (https://gitea.com/gitea/act_runner/releases)
cd /tmp
wget https://gitea.com/gitea/act_runner/releases/download/v0.2.13/act_runner-0.2.13-linux-amd64
sudo install -m 0755 act_runner-0.2.13-linux-amd64 /usr/local/bin/gitea_runner
gitea_runner --version
# Generate Gitea Runner Configuration
sudo mkdir -p /etc/gitea_runner
sudo chown gitearunner:gitearunner /etc/gitea_runner
sudo -u gitearunner gitea_runner generate-config > /etc/gitea_runner/config.yaml
```
### Configure Registration Token
- Navigate to: "**<Gitea Repo> > Settings > Actions > Runners**"
- If you don't see this, it needs to be enabled. Navigate to: "**<Gitea Repo> > Settings > "Enable Repository Actions: Enabled" > Update Settings**"
- Click the "**Create New Runner**" button on the top-right of the page and copy the registration token somewhere temporarily.
- Navigate back to the GuestVM running Zensical and run the following commands.
```sh
# Start Token Registration Process
sudo -u gitearunner env HOME=/var/lib/gitea_runner /usr/local/bin/gitea_runner register --config /etc/gitea_runner/config.yaml
# Gitea Instance URL: https://git.bunny-lab.io
# Gitea Runner Token: <Gitea-Runner-Token>
# Runner Name: zensical-docs-runner
# Move Runner Config to Correct Location & Configure Permissions
sudo mv /tmp/.runner /var/lib/gitea_runner/.runner
sudo chown gitearunner:gitearunner /var/lib/gitea_runner/.runner
sudo chmod 600 /var/lib/gitea_runner/.runner
```
### Create Service
Now we need to configure the Gitea runner to start automatically via a service just like the Zensical Watchdog service.
```sh
# Create Gitea Runner Service
sudo tee /etc/systemd/system/gitea-runner.service > /dev/null <<'EOF'
[Unit]
Description=Gitea Actions Runner (gitea_runner)
After=network-online.target
Wants=network-online.target
[Service]
Environment=HOME=/var/lib/gitea_runner
User=gitearunner
Group=gitearunner
WorkingDirectory=/var/lib/gitea_runner
ExecStart=/usr/local/bin/gitea_runner daemon --config /etc/gitea_runner/config.yaml
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target
EOF
# Remove Container-Based Configurations to Force Runner to Run in Host Mode
sudo sed -i \
'/^[[:space:]]*labels:/,/^[[:space:]]*cache:/{
/^[[:space:]]*labels:/c\ labels:\n - "zensical-host:host"
/^[[:space:]]*cache:/!d
}' \
/etc/gitea_runner/config.yaml
# Enable and Start the Service
sudo systemctl daemon-reload
sudo systemctl enable --now gitea-runner.service
```
### Repository Workflow
Place the following file into your documentation repository at the given location and this will enable the runner to execute when changes happen to the repository data.
```yaml title="gitea/workflows/gitops-automatic-deployment.yml"
name: GitOps Automatic Documentation Deployment
on:
push:
branches: [ main ]
jobs:
zensical_deploy:
name: Sync Docs to https://kb.bunny-lab.io
runs-on: zensical-host
steps:
- name: Checkout Repository
uses: actions/checkout@v3
- name: Sync repository into /srv/zensical/docs
run: |
rsync -rlD --delete \
--exclude='.git/' \
--exclude='.gitea/' \
--exclude='assets/' \
--exclude='schema/' \
--exclude='stylesheets/' \
--exclude='schema.json' \
--chmod=D2775,F664 \
. /srv/zensical/docs/
- name: Notify via NTFY
if: always()
run: |
curl -d "https://kb.bunny-lab.io - Zensical job status: ${{ job.status }}" https://ntfy.bunny-lab.io/gitea-runners
```
## Traefik Reverse Proxy
It is assumed that you use a [Traefik](../edge/traefik.md) reverse proxy and are configured to use [dynamic configuration files](../edge/traefik.md#dynamic-configuration-files). Add the file below to expose the Zensical service to the rest of the world.
```yaml title="kb.bunny-lab.io.yml"
http:
routers:
kb:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: kb
rule: Host(`kb.bunny-lab.io`)
services:
kb:
loadBalancer:
servers:
- url: http://192.168.3.8:80
passHostHeader: true
```

services/edge/nginx.md Normal file

@@ -0,0 +1,34 @@
**Purpose**: NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more.
```yaml title="docker-compose.yml"
---
version: "2.1"
services:
nginx:
image: lscr.io/linuxserver/nginx:latest
container_name: nginx
environment:
- PUID=1000
- PGID=1000
- TZ=America/Denver
volumes:
- /srv/containers/nginx-portfolio-website:/config
ports:
- 80:80
- 443:443
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.12
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```

services/edge/traefik.md Normal file

@@ -0,0 +1,191 @@
**Purpose**: A Traefik reverse proxy is a server that sits between your network firewall and the servers hosting various web services on your private network(s). Traefik automatically handles the creation of Let's Encrypt SSL certificates if you have a domain registrar that is supported by Traefik, such as Cloudflare; by leveraging API keys, Traefik can automatically create the DNS records for Let's Encrypt's DNS "challenges" whenever you add a service behind the reverse proxy.
!!! info "Assumptions"
This Traefik deployment document assumes you have deployed [Portainer](../../platforms/containerization/docker/deploy-portainer.md) to either a Rocky Linux or Ubuntu Server environment. Other docker-compose friendly operating systems have not been tested, so your mileage may vary regarding successful deployment outside of these two operating systems.
Portainer makes deploying and updating Traefik so much easier than via a CLI. It's also much more intuitive.
## Deployment on Portainer
- Login to Portainer (e.g. https://<portainer-ip>:9443)
- Navigate to "**Environment (usually "local") > Stacks > "+ Add Stack"**"
- Enter the following `docker-compose.yml` and `.env` environment variables into the webpage
- When you have finished making adjustments to the environment variables (and docker-compose data if needed), click the "**Deploy the Stack**" button
!!! warning "Get DNS Registrar API Keys BEFORE DEPLOYMENT"
When you are deploying this container, you have to be mindful to set valid data for the environment variables related to the DNS registrar. In this example, it is CloudFlare.
```jsx title="Environment Variables"
CF_API_EMAIL=nicole.rappe@bunny-lab.io
CF_API_KEY=REDACTED-CLOUDFLARE-DOMAIN-API-KEY
```
If these are not set, Traefik will still work, but SSL certificates will not be issued from Let's Encrypt, and SSL traffic will be terminated using a self-signed Traefik-based certificate, which is only good for local non-production testing.
If you plan on using HTTP-based challenges, you will need to make the following changes in the docker-compose.yml data:
- Un-comment `"--certificatesresolvers.myresolver.acme.tlschallenge=true"`
- Comment-out `"--certificatesresolvers.letsencrypt.acme.dnschallenge=true"`
- Comment-out `"--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare"`
- Lastly, you need to ensure that port 80 on your firewall is opened to the IP of the Traefik Reverse Proxy to allow Let's Encrypt to do TLS-based challenges.
### Stack Deployment Information
```yaml title="docker-compose.yml"
version: "3.3"
services:
traefik:
image: "traefik:latest"
restart: always
container_name: "traefik-bunny-lab-io"
cap_add:
- NET_ADMIN
entrypoint:
- /bin/sh
- -lc
- |
ip link set dev eth0 mtu 1500
exec traefik "$@"
ulimits:
nofile:
soft: 65536
hard: 65536
labels:
- "traefik.http.routers.traefik-proxy.middlewares=my-buffering"
- "traefik.http.middlewares.my-buffering.buffering.maxRequestBodyBytes=104857600"
- "traefik.http.middlewares.my-buffering.buffering.maxResponseBodyBytes=104857600"
- "traefik.http.middlewares.my-buffering.buffering.memRequestBodyBytes=2097152"
- "traefik.http.middlewares.my-buffering.buffering.memResponseBodyBytes=2097152"
- "traefik.http.middlewares.my-buffering.buffering.retryExpression=IsNetworkError() && Attempts() <= 2"
command:
# Globals
- "--log.level=ERROR"
- "--api.insecure=true"
- "--global.sendAnonymousUsage=false"
# Docker
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
# File Provider
- "--providers.file.directory=/etc/traefik/dynamic"
- "--providers.file.watch=true"
# Entrypoints
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
- "--entrypoints.web.http.redirections.entrypoint.to=websecure" # Redirect HTTP to HTTPS
- "--entrypoints.web.http.redirections.entrypoint.scheme=https" # Redirect HTTP to HTTPS
- "--entrypoints.web.http.redirections.entrypoint.permanent=true" # Redirect HTTP to HTTPS
# LetsEncrypt
### - "--certificatesresolvers.myresolver.acme.tlschallenge=true" # Enable if doing Port 80 Let's Encrypt Challenges
- "--certificatesresolvers.letsencrypt.acme.dnschallenge=true" # Disable if doing Port 80 Let's Encrypt Challenges
- "--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare" # Disable if doing Port 80 Let's Encrypt Challenges
- "--certificatesresolvers.letsencrypt.acme.email=${LETSENCRYPT_EMAIL}"
- "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
# Keycloak plugin configuration
- "--experimental.plugins.keycloakopenid.moduleName=github.com/Gwojda/keycloakopenid" # Optional if you have Keycloak Deployed
- "--experimental.plugins.keycloakopenid.version=v0.1.34" # Optional if you have Keycloak Deployed
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- "/srv/containers/traefik/letsencrypt:/letsencrypt"
- "/srv/containers/traefik/config:/etc/traefik"
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "/srv/containers/traefik/cloudflare:/cloudflare"
networks:
docker_network:
ipv4_address: 192.168.5.29
environment:
- CF_API_EMAIL=${CF_API_EMAIL}
- CF_API_KEY=${CF_API_KEY}
extra_hosts:
- "mail.bunny-lab.io:192.168.3.13" # Just an Example
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
CF_API_EMAIL=nicole.rappe@bunny-lab.io
CF_API_KEY=REDACTED-CLOUDFLARE-DOMAIN-API-KEY
LETSENCRYPT_EMAIL=nicole.rappe@bunny-lab.io
```
!!! info
There is a distinction between the "Global API Key" and a "Token API Key". The main difference being that the "Global API Key" can change anything in Cloudflare, while the "Token API Key" can only change what it was granted delegated permissions to.
## Adding Servers / Services to Traefik
Traefik operates in two ways: the first is labels, while the second is dynamic configuration files. We will go over each below.
### Docker-Compose Labels
The first is that it reads "labels" from the docker-compose file of any deployed containers on the same host as Traefik. These labels typically look something like the following:
```yaml title="docker-compose.yml"
labels:
- "traefik.enable=true"
- "traefik.http.routers.gitea.rule=Host(`example.bunny-lab.io`)"
- "traefik.http.routers.gitea.entrypoints=websecure"
- "traefik.http.routers.gitea.tls.certresolver=letsencrypt"
- "traefik.http.services.gitea.loadbalancer.server.port=8080"
```
By adding these labels to any container on the same server as Traefik, Traefik will automatically "adopt" this service, route traffic to it, and assign it an SSL certificate from Let's Encrypt. The only downside, as mentioned above, is that if you are dealing with something that is not a container, or a container on a different physical server, you need to rely on dynamic configuration files, such as the one seen below.
### Dynamic Configuration Files
Dynamic configuration files exist under the Traefik container located at `/etc/traefik/dynamic`. Any `*.yml` files located in this folder will be hot-loaded anytime they are modified. This makes it convenient to leverage something such as the [Git Repo Updater](../../platforms/containerization/docker/custom-containers/git-repo-updater.md) container to leverage [Gitea](../devops/gitea.md) to push configuration files from Git into the production environment, saving yourself headache and enabling version control over every service behind the reverse proxy.
An example of a dynamic configuration file would look something like this:
```yaml title="/etc/traefik/dynamic/example.bunny-lab.io.yml"
http:
routers:
example:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: example
rule: Host(`example.bunny-lab.io`)
services:
example:
loadBalancer:
servers:
- url: http://192.168.5.70:8080
passHostHeader: true
```
You can see the similarities with the labeling method: you designate the proxy hostname `example.bunny-lab.io`, the internal IP address `192.168.5.70`, the protocol used to request data from the service internally (`http`), and the port the service listens on internally (`8080`). If you want to know more about parameters such as `passHostHeader: true`, you will need to do some of your own research into them.
!!! example "Service Naming Considerations"
When you deploy a service into a Traefik-based reverse proxy, the name of the `router` and `service` have to be unique. The router can have the same name as the service, such as `example`, but I recommend naming the services to match the FQDN of the service itself.
For example, `remote.bunny-lab.io` would be written as `remote-bunny-lab-io`. This keeps things organized and easy to read if you are troubleshooting things in Traefik's logs or webUI. The complete configuration file would look like the example below:
```yaml title="/etc/traefik/dynamic/remote.bunny-lab.io.yml"
http:
routers:
remote-bunny-lab-io:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: remote-bunny-lab-io
rule: Host(`remote.bunny-lab.io`)
services:
remote-bunny-lab-io:
loadBalancer:
servers:
- url: http://192.168.5.70:8080
passHostHeader: true
```


@@ -0,0 +1,273 @@
**Purpose**:
Self-hosted open-source email server that can be set up in minutes, and is enterprise-grade if upgraded with an iRedAdmin-Pro license.
!!! note "Assumptions"
It is assumed you are running at least Rocky Linux 9.3. While you can use CentOS Stream, Alma, Debian, Ubuntu, FreeBSD, and OpenBSD, the more enterprise-level sections of my homelab are built on Rocky Linux.
!!! warning "iRedMail / iRedAdmin-Pro Version Mismatching"
This document assumes you are deploying iRedMail 1.6.8, which at the time of writing, coincided with iRedAdmin-Pro 5.5. If you are not careful, you may end up with mismatched versions down the road as iRedMail keeps getting updates. Due to how you have to pay for a license in order to get access to the original iRedAdmin-Pro-SQL repository data, if a newer version of iRedAdmin-Pro comes out after February 2025, this document may not account for that, leaving you on an older version of the software. This is unavoidable if you want to avoid paying $500/year for licensing this software.
## Overview
The instructions below are specific to my homelab environment, but can be easily ported depending on your needs. This guide also assumes you want to operate a PostgreSQL-based iRedMail installation. You can follow along with the official documentation on [Installation](https://docs.iredmail.org/install.iredmail.on.rhel.html) as well as [DNS Record Configuration](https://docs.iredmail.org/setup.dns.html) if you want more detailed explanations throughout the installation process.
## Configure FQDN
Ensure the FQDN of the server is correctly set in `/etc/hostname`. The `/etc/hosts` file will be automatically injected using the FQDN from `/etc/hostname` in a script further down, don't worry about editing it.
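For example, a minimal sketch assuming the FQDN `mail.bunny-lab.io` used throughout this guide:
``` sh
# Set the FQDN; hostnamectl also writes it to /etc/hostname
sudo hostnamectl set-hostname mail.bunny-lab.io
```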
## Disable SELinux
iRedMail doesn't work with SELinux, so please disable it by setting the value below in its config file `/etc/selinux/config`. After a server reboot, SELinux will be completely disabled.
``` sh
# Elevate to Root User
sudo su
# Disable SELinux
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config # (1)
setenforce 0
```
1. If you prefer to let SELinux prints warnings instead of enforcing, you can set this value instead: `SELINUX=permissive`
## iRedMail Installation
### Set Domain and iRedMail Version
Start by connecting to the server / VM via SSH, then set silent deployment variables below.
``` sh
# Define some deployment variables.
VERSION="1.6.8" # (1)
MAIL_DOMAIN="bunny-lab.io" # (2)
```
1. This is the version of iRedMail you are deploying. You can find the newest version on the [iRedMail Download Page](https://www.iredmail.org/download.html).
2. This is the domain suffix that appears after mailbox names. e.g. `first.last@bunny-lab.io` would use a domain value of `bunny-lab.io`.
You will then proceed to bootstrap a silent unattended installation of iRedMail. (I've automated as much as I can to make this as turn-key as possible). Just copy/paste this whole thing into your terminal and hit ENTER.
!!! danger "Storage Space Requirements"
You absolutely need to ensure that `/var/vmail` has a lot of space. At least 16GB. This is where all of your emails / mailboxes / a lot of settings will be. If possible, create a second physical/virtual disk specifically for the `/var` partition, or specifically for `/var/vmail` at minimum, so you can expand it over time if necessary. LVM-based provisioning is recommended but not required.
### Install iRedMail
``` sh
# Automatically configure the /etc/hosts file to point to the server listed in "/etc/hostname".
sudo sed -i "1i 127.0.0.1 $(cat /etc/hostname) $(cut -d '.' -f 1 /etc/hostname) localhost localhost.localdomain localhost4 localhost4.localdomain4" /etc/hosts
# Check for Updates in the Package Manager
yum update -y
# Install Extra Packages for Enterprise Linux
dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
# Download the iRedMail binaries and extract them
cd /root
curl https://codeload.github.com/iredmail/iRedMail/tar.gz/refs/tags/$VERSION -o iRedMail-$VERSION.tar.gz
tar zxf iRedMail-$VERSION.tar.gz
# Create the unattend config file for silent deployment. This will automatically generate random 32-character passwords for all of the databases.
(echo "export STORAGE_BASE_DIR='/var/vmail'"; echo "export WEB_SERVER='NGINX'"; echo "export BACKEND_ORIG='PGSQL'"; echo "export BACKEND='PGSQL'"; for var in VMAIL_DB_BIND_PASSWD VMAIL_DB_ADMIN_PASSWD MLMMJADMIN_API_AUTH_TOKEN NETDATA_DB_PASSWD AMAVISD_DB_PASSWD IREDADMIN_DB_PASSWD RCM_DB_PASSWD SOGO_DB_PASSWD SOGO_SIEVE_MASTER_PASSWD IREDAPD_DB_PASSWD FAIL2BAN_DB_PASSWD PGSQL_ROOT_PASSWD DOMAIN_ADMIN_PASSWD_PLAIN; do echo "export $var='$(openssl rand -base64 48 | tr -d '+/=' | head -c 32)'"; done; echo "export FIRST_DOMAIN='$MAIL_DOMAIN'"; echo "export USE_IREDADMIN='YES'"; echo "export USE_SOGO='YES'"; echo "export USE_NETDATA='YES'"; echo "export USE_FAIL2BAN='YES'"; echo "#EOF") > /root/iRedMail-$VERSION/config
# Make Config Read-Only
chmod 400 /root/iRedMail-$VERSION/config
# Set Environment Variables for Silent Deployment
cd /root/iRedMail-$VERSION
# Deploy iRedMail via the Install Script
AUTO_USE_EXISTING_CONFIG_FILE=y \
AUTO_INSTALL_WITHOUT_CONFIRM=y \
AUTO_CLEANUP_REMOVE_SENDMAIL=y \
AUTO_CLEANUP_REPLACE_FIREWALL_RULES=y \
AUTO_CLEANUP_RESTART_FIREWALL=n \
AUTO_CLEANUP_REPLACE_MYSQL_CONFIG=y \
bash iRedMail.sh
```
When the installation is completed, take note of any output it gives you for future reference. Then reboot the server to finalize the server installation.
```
reboot
```
!!! warning "Automatically-Generated Postmaster Password"
When you deploy iRedMail, it will give you a username and password for the postmaster account. If you accidentally forget to document this, you can log back into the server via SSH and see the credentials at `/root/iRedMail-$VERSION/iRedMail.tips`. This file is critical and contains passwords and DNS information such as DKIM record information as well.
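For reference, you can print that file at any time (the `VERSION` variable matches the one set at the start of the installation):
``` sh
cat /root/iRedMail-$VERSION/iRedMail.tips
```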
## Networking Configuration
### Nested Reverse Proxy Configuration
In my homelab environment, I run Traefik reverse proxy in front of everything, which includes the NGINX reverse proxy that iRedMail creates. In my scenario, I have to make some custom adjustments to the reverse proxy dynamic configuration data to ensure it will step aside and let the NGINX reverse proxy inside of iRedMail handle everything, including handling its own SSL termination with Let's Encrypt.
``` yaml
tcp:
routers:
mail-tcp-router:
rule: "HostSNI(`mail.bunny-lab.io`)"
entryPoints: ["websecure"]
service: mail-nginx-service
tls:
passthrough: true
services:
mail-nginx-service:
loadBalancer:
servers:
- address: "192.168.3.13:443"
```
### Let's Encrypt ACME Certbot
At this point, we want to set up automatic Let's Encrypt SSL termination inside of iRedMail so we don't have to manually touch this in the future.
#### Generate SSL Certificate
=== "Debian/Ubuntu"
``` sh
# Download the Certbot
sudo apt update
sudo apt install -y certbot
sudo certbot certonly --webroot -w /var/www/html -d mail.bunny-lab.io
# Set up Symbolic Links (Where iRedMail Expects Them)
sudo mv /etc/ssl/certs/iRedMail.crt{,.bak}
sudo mv /etc/ssl/private/iRedMail.key{,.bak}
sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/fullchain.pem /etc/ssl/certs/iRedMail.crt
sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/privkey.pem /etc/ssl/private/iRedMail.key
# Restart iRedMail Services
sudo systemctl restart postfix dovecot nginx
```
=== "CentOS/Rocky/AlmaLinux"
``` sh
# Download the Certbot
sudo yum install -y epel-release
sudo yum install -y certbot
sudo certbot certonly --webroot -w /var/www/html -d mail.bunny-lab.io
# Set up Symbolic Links (Where iRedMail Expects Them)
sudo mv /etc/pki/tls/certs/iRedMail.crt{,.bak}
sudo mv /etc/pki/tls/private/iRedMail.key{,.bak}
sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/fullchain.pem /etc/pki/tls/certs/iRedMail.crt
sudo ln -s /etc/letsencrypt/live/mail.bunny-lab.io/privkey.pem /etc/pki/tls/private/iRedMail.key
# Restart iRedMail Services
sudo systemctl restart postfix dovecot nginx
```
#### Configure Automatic Renewal
To automate the renewal process, set up a cron job that runs the certbot renew command regularly. This command will renew certificates that are due to expire within 30 days.
Open the crontab editor with the following command:
```
sudo crontab -e
```
Add the following line to run the renewal process daily at 3:01 AM:
```
1 3 * * * certbot renew --post-hook 'systemctl restart postfix dovecot nginx'
```
### DNS Records
Now you need to set up DNS records in Cloudflare (or the DNS Registrar you have configured) so that the mail server can be found and validated.
| **Type** | **Name** | **Content** | **Proxy Status** | **TTL** |
| :--- | :--- | :--- | :--- | :--- |
| MX | bunny-lab.io | mail.bunny-lab.io | DNS Only | Auto |
| TXT | bunny-lab.io | "v=spf1 a:mail.bunny-lab.io ~all" | DNS Only | Auto |
| TXT | dkim._domainkey | v=DKIM1; p=`IREDMAIL-DKIM-VALUE` | DNS Only | 1 Hour |
| TXT | _dmarc | "v=DMARC1; p=reject; pct=100; rua=mailto:postmaster@bunny-lab.io; ruf=mailto:postmaster@bunny-lab.io" | DNS Only | Auto |
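After creating the records above, you can spot-check propagation from any machine with `dig` installed; a quick verification sketch:
``` sh
# Verify the MX, SPF, DKIM, and DMARC records resolve as expected
dig +short bunny-lab.io MX
dig +short bunny-lab.io TXT
dig +short dkim._domainkey.bunny-lab.io TXT
dig +short _dmarc.bunny-lab.io TXT
```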
### Port Forwarding
Lastly, we need to set up port forwarding to open the ports necessary for the server to send and receive email.
| **Protocol** | **Port** | **Destination Server** | **Description** |
| :--- | :--- | :--- | :--- |
| TCP | 995 | 192.168.3.13 | POP3 service: port 110 over STARTTLS |
| TCP | 993 | 192.168.3.13 | IMAP service: port 143 over STARTTLS |
| TCP | 587 | 192.168.3.13 | SMTP service: port 587 over STARTTLS |
| TCP | 25 | 192.168.3.13 | SMTP (Email Server-to-Server Communication) |
## Install iRedAdmin-Pro
When it comes to adding extra features, start by copying the data from this [Bunny Lab repository](https://git.bunny-lab.io/bunny-lab/iRedAdmin-Pro-SQL) to the following folder by running these commands first:
``` sh
# Stop the iRedMail Services
sudo systemctl stop postfix dovecot nginx
# Grant Temporary Access to the iRedAdmin Files and Folders
sudo chown nicole:nicole -R /opt/www/iRedAdmin-2.5
# Copy the data from the repository mentioned above into this folder, merging identical folders and files. Feel free to use your preferred file transfer tool / method (e.g. MobaXTerm / WinSCP).
# Change permissions back to normal
sudo chown iredadmin:iredadmin -R /opt/www/iRedAdmin-2.5
# Reboot the Server
sudo reboot
```
### Activate iRedAdmin-Pro
At this point, if you want to use iRedAdmin-Pro, you either have a valid license key, or you adjust the python function responsible for checking license keys to bypass the check, effectively forcing iRedAdmin to be activated. In this instance, we will be forcing activation by adjusting this function, seen below.
There is someone else who outlined all of these changes, and additional (aesthetic) ones, like removing the renew license button from the license page, but the core functionality is seen below. If you want to see the original repository this was inspired from, it can be found [Here](https://github.com/marcus-alicia/iRedAdmin-Pro-SQL)
``` sh
# Take permission of the python script
sudo chown nicole:nicole /opt/www/iRedAdmin-2.5/libs/sysinfo.py
```
=== "Original Activation Function"
```python title="/opt/www/iRedAdmin-2.5/libs/sysinfo.py"
def get_license_info():
if len(__id__) != 32:
web.conn_iredadmin.delete("updatelog")
session.kill()
raise web.seeother("/login?msg=INVALID_PRODUCT_ID")
params = {
"v": __version__,
"f": __id__,
"lang": settings.default_language,
"host": get_hostname(),
"backend": settings.backend,
"webmaster": settings.webmaster,
"mac": ",".join(get_all_mac_addresses()),
}
url = "https://lic.iredmail.org/check_version/licenseinfo/" + __id__ + ".json"
url += "?" + urllib.parse.urlencode(params)
try:
urlopen = __get_proxied_urlopen()
_json = urlopen(url).read()
lic_info = json.loads(_json)
lic_info["id"] = __id__
return True, lic_info
except Exception as e:
return False, web.urlquote(e)
```
=== "Bypassed Activation Function"
```python title="/opt/www/iRedAdmin-2.5/libs/sysinfo.py"
def get_license_info():
return True, {
"status": "active",
"product": "iRedAdmin-Pro-SQL",
"licensekey": "forcefully-open-source",
"upgradetutorials": "https://docs.iredmail.org/iredadmin-pro.releases.html",
"purchased": "Never",
"contacts": "nicole.rappe@bunny-lab.io",
"latestversion": "5.5",
"expired": "Never",
"releasenotes": "https://docs.iredmail.org/iredadmin-pro.releases.html",
"id": __id__
}
```
``` sh
# Revert permission of the python script
sudo chown iredadmin:iredadmin /opt/www/iRedAdmin-2.5/libs/sysinfo.py
# Reboot the Server (To be safe)
sudo reboot
```
!!! success "Successful Activation"
At this point, if you navigate to the [iRedAdmin-Pro License Page](https://mail.bunny-lab.io/iredadmin/system/license) you should see the server is activated successfully.


@@ -0,0 +1,35 @@
## Purpose
You may need to troubleshoot the outgoing SMTP email queue / active sessions in iRedMail for one reason or another. This can provide useful insight into the reason why emails are not being delivered, etc.
### Overall Queue Backlog
You can run the following command to get the complete backlog of all email senders in the queue. This can be useful for tracking the queue's "drainage" over-time.
```sh
# List the total number of queued messages
postqueue -p | egrep -c '^[A-F0-9]'
# Itemize and count the queued messages per sender (the sender is the 7th field of each queue entry)
postqueue -p | awk '/^[A-F0-9]/ {print $7}' | sort | uniq -c | sort -rn
```
!!! example "Example Output"
- 10392 problematic@bunny-lab.io
- 301 prettybad@bunny-lab.io
- 39 infrastructure@bunny-lab.io
- 20 nicole.rappe@bunny-lab.io
### Investigating Individual Emails
You can run the following command to list all queued messages: `postqueue -p`. You can then run `postcat -vq <message-ID>` to read detailed information on any specific queued SMTP message:
```sh
postqueue -p
postcat -vq 4dgHry5LZnzH6x08 # (1)
```
1. Example message ID gathered from the previous `postqueue -p` command.
### Attempt to Gracefully Reload Postfix
You may want to try to unstick things by gracefully reloading the Postfix service via `postfix reload`. This ensures that we don't drop, disconnect, or lose the active outgoing SMTP sessions in the queue. It may not resolve the underlying issue, but it's a low-risk step worth noting.
### Reattempt Delivery
You can attempt redelivery via running `postqueue -f` to try to free up the queue. Postfix will immediately re-attempt delivery of all queued messages instead of waiting for their scheduled retry time. It does not override remote rejections or fix underlying delivery errors; it only accelerates the next delivery attempt.
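If a single message needs an immediate retry, or a poison message is clogging the queue, you can also target individual queue IDs; these are standard Postfix tools, and the ID below is just the placeholder from the earlier example:
```sh
postqueue -i 4dgHry5LZnzH6x08  # Re-attempt delivery of one specific queued message
postsuper -d 4dgHry5LZnzH6x08  # Permanently delete a stuck message (irreversible)
postsuper -d ALL deferred      # Delete every deferred message (use with extreme caution)
```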


@@ -0,0 +1,3 @@
| Server | Port(s) | Security | Auth Method | Username |
|:------------------|:----------------------------------------------|:----------|:----------------|:-------------------|
| `mail.bunny-lab.io` | **IMAP:** 143 `Internal`, 993 `External`<br>**SMTP:** 587, 25 `Fallback` | STARTTLS | Normal Password | user@bunny-lab.io |

services/email/mailcow.md

@@ -0,0 +1,231 @@
!!! warning "Under Construction"
The deployment of Mailcow is mostly correct here, with the exception that we don't point DNS records to the reverse proxy (internally) because it's currently not functioning as expected. So for the time being, you would open all of the ports up to the Mailcow server's internal IP address via port forwarding on your firewall.
## Purpose
The purpose of this document is to illustrate how to deploy Mailcow in a dockerized format.
!!! note "Assumptions"
It is assumed that you are deploying Mailcow into an existing Ubuntu Server environment. If you are using a different operating system, refer to the [official documentation](https://docs.mailcow.email/getstarted/install/).
### Setting Up Docker
Go ahead and set up docker and docker-compose with the following commands:
```bash
sudo su # (1)
curl -sSL https://get.docker.com/ | CHANNEL=stable sh # (2)
apt install docker-compose-plugin # (3)
systemctl enable --now docker # (4)
```
1. Make yourself root.
2. Install `Docker`
3. Install `Docker-Compose`
4. Make docker run automatically when the server is booted.
### Download and Deploy Mailcow
Run the following commands to pull down the mailcow deployment files and install them with docker. Go get a cup of coffee as the `docker compose pull` command may take a while to run.
!!! note "Potential `Docker Compose` Issues"
If you run the `docker-compose pull` command and it fails for some reason, change the command to `docker compose pull` instead. This is just the difference between the plugin version of compose versus the standalone version. Both will have the same result.
```bash
cd /opt
git clone https://github.com/mailcow/mailcow-dockerized
cd mailcow-dockerized
./generate_config.sh # (1)
docker-compose pull # (2)
docker-compose up -d
```
1. Generate a configuration file. Use a FQDN (`host.domain.tld`) as hostname when asked.
2. If you get an error about the ports of the `nginx-mailcow` service in the `docker-compose.yml` stack, change the ports for that service as follows:
```yaml
ports:
  - "${HTTPS_BIND:-0.0.0.0}:${HTTPS_PORT:-443}:${HTTPS_PORT:-443}"
  - "${HTTP_BIND:-0.0.0.0}:${HTTP_PORT:-80}:${HTTP_PORT:-80}"
```
### Reverse-Proxy Configuration
For the purposes of this document, it will be assumed that you are deploying Mailcow behind Traefik. You can use the following dynamic configuration file to achieve this:
```yaml title="/srv/containers/traefik/config/dynamic/mail.bunny-lab.io.yml"
# ========================
# Mailcow / Traefik Config
# ========================

# ----------------------------------------------------
# HTTP Section - Handles Mailcow web UI via Traefik
# ----------------------------------------------------
http:
  routers:
    mailcow-server:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: mailcow-http
      rule: Host(`mail.bunny-lab.io`)

  services:
    mailcow-http:
      loadBalancer:
        servers:
          - url: http://192.168.3.61:80
        passHostHeader: true

# ----------------------------------------------------
# TCP Section - Handles all mail protocols
# ----------------------------------------------------
tcp:
  routers:
    # -----------
    # SMTP Router (Port 25, non-TLS, all mail deliveries)
    # -----------
    mailcow-smtp:
      entryPoints:
        - smtp
      rule: "HostSNI(`*`)" # TCP routers require a rule; without a TLS block this matches ALL connections on port 25 (plain SMTP)
      service: mailcow-smtp

    # -----------
    # SMTPS Router (Port 465, implicit TLS)
    # -----------
    mailcow-smtps:
      entryPoints:
        - smtps
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: mailcow-smtps
      tls:
        passthrough: true

    # -----------
    # Submission Router (Port 587, STARTTLS)
    # -----------
    mailcow-submission:
      entryPoints:
        - submission
      rule: "HostSNI(`*`)"
      service: mailcow-submission
      # No TLS passthrough here: STARTTLS connections begin in plaintext,
      # so a TLS-matching router would never see them

    # -----------
    # IMAPS Router (Port 993, implicit TLS)
    # -----------
    mailcow-imaps:
      entryPoints:
        - imaps
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: mailcow-imaps
      tls:
        passthrough: true

    # -----------
    # IMAP Router (Port 143, plain/STARTTLS)
    # -----------
    mailcow-imap:
      entryPoints:
        - imap
      rule: "HostSNI(`*`)"
      service: mailcow-imap
      # No TLS passthrough here either: port 143 starts in plaintext (STARTTLS)

    # -----------
    # POP3S Router (Port 995, implicit TLS)
    # -----------
    mailcow-pop3s:
      entryPoints:
        - pop3s
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: mailcow-pop3s
      tls:
        passthrough: true

    # -----------
    # Dovecot Managesieve (Port 4190, implicit TLS)
    # -----------
    mailcow-dovecot-managesieve:
      entryPoints:
        - dovecot-managesieve # Uses the dedicated 4190 entrypoint (not pop3s)
      rule: "HostSNI(`*`)" # Match any SNI (required for TLS)
      service: dovecot-managesieve
      tls:
        passthrough: true

  services:
    # SMTP (Port 25, plain)
    mailcow-smtp:
      loadBalancer:
        servers:
          - address: "192.168.3.61:25"

    # SMTPS (Port 465, implicit TLS)
    mailcow-smtps:
      loadBalancer:
        servers:
          - address: "192.168.3.61:465"

    # Submission (Port 587, STARTTLS)
    mailcow-submission:
      loadBalancer:
        servers:
          - address: "192.168.3.61:587"

    # IMAPS (Port 993, implicit TLS)
    mailcow-imaps:
      loadBalancer:
        servers:
          - address: "192.168.3.61:993"

    # IMAP (Port 143, plain/STARTTLS)
    mailcow-imap:
      loadBalancer:
        servers:
          - address: "192.168.3.61:143"

    # POP3S (Port 995, implicit TLS)
    mailcow-pop3s:
      loadBalancer:
        servers:
          - address: "192.168.3.61:995"

    # Dovecot Managesieve (Port 4190, implicit TLS)
    dovecot-managesieve:
      loadBalancer:
        servers:
          - address: "192.168.3.61:4190"
```
### Traefik-Specific Configuration
You will need to add some extra entrypoints and ports to Traefik itself so it can listen for this new traffic.
```yaml
#Entrypoints
- "--entrypoints.smtp.address=:25"
- "--entrypoints.smtps.address=:465"
- "--entrypoints.submission.address=:587"
- "--entrypoints.imap.address=:143"
- "--entrypoints.imaps.address=:993"
- "--entrypoints.pop3.address=:110"
- "--entrypoints.pop3s.address=:995"
- "--entrypoints.dovecot-managesieve.address=:4190"
#Ports
- "25:25"
- "110:110"
- "143:143"
- "465:465"
- "587:587"
- "993:993"
- "995:995"
- "4190:4190"
```
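Once Traefik has reloaded this configuration, a quick way to confirm the TCP passthrough is behaving is to check which certificate each mail port presents; you should see Mailcow's certificate (issued for `mail.bunny-lab.io`), not one terminated by Traefik. These are standard `openssl` invocations, with the hostname assumed from this example:
```sh
# Implicit TLS on SMTPS (465) - print the certificate subject and issuer
openssl s_client -connect mail.bunny-lab.io:465 -servername mail.bunny-lab.io </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
# STARTTLS on the submission port (587)
openssl s_client -connect mail.bunny-lab.io:587 -starttls smtp </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
```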
### Login to Mailcow
At this point, the Mailcow server has been deployed so you can log into it.
- **Administrators**: `https://${MAILCOW_HOSTNAME}/admin` (Username: `admin` | Password: `moohoo`)
- **Regular Mailbox Users**: `https://${MAILCOW_HOSTNAME}` (*FQDN only*)
### Mail-Client Considerations
You need to generate an app password if you have MFA enabled within Mailcow. (MFA is non-functional in Roundcube/SOGo; you set it up via Mailcow itself.) You can manage app passwords via the Mailcow user page: https://mail.bunny-lab.io/user, then look for the "**App Passwords**" tab.
### Running Updates
If you want to run updates, just SSH into the server, and navigate to `/opt/mailcow-dockerized` and run `./update.sh`. I recommend avoiding the IPv6 implementation section. Be patient, and the upgrade will be fully-automated.
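If you only want to see whether an update is available without applying anything, the update script has a check mode (assuming a reasonably current copy of the script):
```bash
cd /opt/mailcow-dockerized
./update.sh --check # Reports whether updates are available, then exits
```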


@@ -0,0 +1,71 @@
**Purpose**: If you want to set up automatic Let's Encrypt SSL certificates on a Microsoft Exchange server, you have to go through a few steps to install the WinACME bot, and configure it to automatically renew certificates.
!!! note "ACME Bot Provisioning Considerations"
This document assumes you want a fully-automated one-liner command for configuring the ACME Bot. It is also completely valid to go step-by-step through the bot to configure the SSL certificate, the IIS server, etc., and it will automatically create a Scheduled Task to renew on its own. The whole process is very straightforward, with most answers being the default option.
### Download the Win-ACME Bot:
* Log into the on-premise Exchange Server via Datto RMM
* Navigate to: [https://www.win-acme.com/](https://www.win-acme.com/)
* On the top-right of the website, you will see a "**Download**" button with the most recent version of the Win-ACME bot
* Extract the contents of the ZIP file to "**C:\\Program Files (x86)\\Lets Encrypt**"
* Make the "**Lets Encrypt**" folder if it does not already exist
### Configure `settings_default.json`:
* The next step involves us making a modification to the configuration of the Win-ACME bot that allows us to export the necessary private key data for Exchange
* Using a text editor, open the "**settings\_default.json**" file
* Look for the setting called "**PrivateKeyExportable**" and change the value from "**false**" to "**true**"
* Save and close the file
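If you prefer to script the change instead of using a text editor, a minimal sketch from an elevated PowerShell session looks like this (assuming the setting lives under the `Security` section, as it does in recent win-acme builds):
```powershell
$path = "C:\Program Files (x86)\Lets Encrypt\settings_default.json"
$settings = Get-Content $path -Raw | ConvertFrom-Json
$settings.Security.PrivateKeyExportable = $true  # Allow the private key to be exported for Exchange
$settings | ConvertTo-Json -Depth 10 | Set-Content $path
```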
### Download and Install the SSL Certificate:
* Open an administrative Command Line (DO NOT USE POWERSHELL)
* Navigate to the Let's Encrypt bot directory: `CD "C:\Program Files (x86)\Lets Encrypt"`
* Invoke the bot to automatically download and install the certificate into the IIS Server that Exchange uses to host the Exchange Server
* Be sure to change the placeholder subdomains to match the domain of the actual Exchange Server
* (e.g. "**mail.example.org**" | "**autodiscover.example.org**")
```
wacs.exe --target manual --host mail.example.org,autodiscover.example.org --certificatestore My --acl-fullcontrol "network service,administrators" --installation iis,script --installationsiteid 1 --script "./Scripts/ImportExchange.ps1" --scriptparameters "'{CertThumbprint}' 'IIS,SMTP,IMAP' 1 '{CacheFile}' '{CachePassword}' '{CertFriendlyName}'" --verbose
```
* When the command is running, it will ask for an email address for alerts and abuse notifications, just put "**infrastructure@bunny-lab.io**"
* If you run into any unexpected errors that result in anything other than exiting with a status "0", consult with Nicole Rappe to proceed
* Check that the domain of the Exchange Server is reachable on port 80, as Let's Encrypt uses this for validation when issuing the certificate.
* Searching the external IP of the server on [Shodan](https://www.shodan.io/) will reveal all open ports.
### Troubleshooting:
If you find that any of the services such as [https://mail.example.org/ecp](https://mail.example.org/ecp), [https://autodiscover.example.org](https://autodiscover.example.org), or [https://mail.example.org/owa](https://mail.example.org/owa) do not let you log in, proceed with the steps below to correct the "Certificate Binding" in IIS Manager:
* Open "**Server Manager**" > Tools > "**Internet Information Services (IIS) Manager**"
* Expand the "**Connections**" server tree on the left-hand side of the IIS Manager
* Expand the "**Sites**" folder
* Click on "**Default Web Site**"
* On the right-hand Actions menu, click on "**Bindings...**"
* A table will appear with different endpoints on the Exchange server > What you are looking for is an entry that looks like the following:
* **Type**: https
* **Host Name**: autodiscover.example.org
* **Port**: 443
* Double-click on the row, or click one then click the "**Edit**" button to open the settings for that endpoint
* Under "**SSL Certificate**" > Make sure the certificate name matches the following format: "**\[Manual\] autodiscover.example.org @ YYYY/MM/DD**"
* If it does not match the above, use the dropdown menu to correct it and click the "**OK**" button
* **Type**: https
* **Host Name**: mail.example.org
* **Port**: 443
* Repeat the steps seen above, except this time for "**mail.example.org**"
* Click on "**Exchange Back End**"
* On the right-hand Actions menu, click on "**Bindings...**"
* A table will appear with different endpoints on the Exchange server > What you are looking for is an entry that looks like the following:
* **Type**: https
* **Host Name**: <blank>
* **Port**: 444
* Repeat the steps seen above, ensuring that the "**\[Manual\] autodiscover.example.org @ YYYY/MM/DD**" certificate is selected and applied
* Click the "**OK**" button
* On the left-hand menu under "**Connections**" in IIS Manager, click on the server name itself
* (e.g. "**EXAMPLE-EXCHANGE (DOMAIN\\dptadmin**")
* On the right-hand "**Actions**" menu > Under "Manage Server" > Select "Restart"
* Wait for the IIS server to restart itself, then try accessing the webpages for Exchange that were exhibiting issues logging in
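If you want to double-check the bindings without clicking through IIS Manager, the built-in `WebAdministration` module can list the SSL bindings and the certificates available to choose from (a read-only sanity check):
```powershell
Import-Module WebAdministration
Get-ChildItem IIS:\SslBindings | Select-Object IPAddress, Port, Host, Thumbprint   # Current HTTPS bindings
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, NotAfter, Thumbprint # Certificates in the machine store
```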
### Additional Documentation:
* [https://www.alitajran.com/install-free-lets-encrypt-certificate-in-exchange-server/](https://www.alitajran.com/install-free-lets-encrypt-certificate-in-exchange-server/)


@@ -0,0 +1,117 @@
**Purpose**:
This document is meant to be an abstract guide on what to do before installing Cumulative Updates on Microsoft Exchange Server. There are a few considerations that need to be made ahead of time. This list was put together through sheer brute-force while troubleshooting an update issue for a server on 12/16/2024.
!!! abstract "Overview"
We are looking to add an administrative user to several domain security groups, adjust local security policy to put them into the "Manage Auditing and Security Logs" security policy, and run the setup.exe included on the Cumulative Update ISO images within a `SeSecurityPrivilege` operational context.
## Domain Group Membership
You have to be logged in with a domain user that possesses the following domain group memberships; if any of these are missing, the upgrade process will fail. A quick way to verify membership is shown after the list.
- `Enterprise Admins`
- `Schema Admins`
- `Organization Management`
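To verify before starting, either of these works (the first assumes the RSAT ActiveDirectory module is installed; `whoami` is built in):
```powershell
Get-ADPrincipalGroupMembership -Identity $env:USERNAME | Sort-Object Name | Select-Object -ExpandProperty Name
whoami /groups | findstr /i "Enterprise Schema Organization"  # Faster, no RSAT required
```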
## User Rights Management
You have to be a member of the "**Local Policies > User Rights Assignment > Manage Auditing and Security Logs**" security policy. You can set this via Group Policy Management or locally on the Exchange server via `secpol.msc`. This is required for the "Monitoring Tools" portion of the upgrade.
It's recommended to reboot the server after making this change to be triple-sure that everything was applied correctly.
!!! note "Security Policy Only Required on Exchange Server"
While the `Enterprise Admins`, `Schema Admins`, and `Organization Management` security group memberships are required on a domain-wide level, the security policy membership for "Manage Auditing and Security Logs" mentioned above is only required on the Exchange Server itself. You can create a group policy that only targets the Exchange Server to add this, or you can make your user a domain-wide member of "Manage Auditing and Security Logs" (Optional). If no existing policies are in-place affecting the Exchange server, you can just use `secpol.msc` to manually add your user to this security policy for the duration of the upgrade/update (or leave it there for future updates).
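To confirm the right actually landed on the Exchange server before you reboot, you can export the effective local policy and search it; `secedit` is built into Windows:
```powershell
secedit /export /areas USER_RIGHTS /cfg "$env:TEMP\rights.inf" | Out-Null
Select-String -Path "$env:TEMP\rights.inf" -Pattern 'SeSecurityPrivilege'  # Your user/group SID should appear on this line
```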
## Running Updater within `SeSecurityPrivilege` Operational Context
At this point, you would technically be ready to invoke `setup.exe` on the Cumulative Update ISO image to launch the upgrade process, but we are going to go the extra mile to manually "Enable" the `SeSecurityPrivilege` within a Powershell session, then use that same session to invoke the `setup.exe` so the updater runs within that context. This is not really necessary, but something I added as a "hail mary" to make the upgrade successful.
### Open Powershell ISE
The first thing we are going to do is open the PowerShell ISE so we can copy/paste the following PowerShell script; this script explicitly enables `SeSecurityPrivilege` for anyone who holds that privilege within the PowerShell session.
!!! warning "Run Powershell ISE as Administrator"
In order for everything to work correctly, the ISE has to be launched by right-clicking and selecting "Run as Administrator"; otherwise it is guaranteed that the updater application will fail at some point.
```powershell title="SeSecurityPrivilege Enablement Script"
# Create a Privilege Adjustment
$definition = @"
using System;
using System.Runtime.InteropServices;
public class Privilege
{
const int SE_PRIVILEGE_ENABLED = 0x00000002;
const int TOKEN_ADJUST_PRIVILEGES = 0x0020;
const int TOKEN_QUERY = 0x0008;
const string SE_SECURITY_NAME = "SeSecurityPrivilege";
[DllImport("advapi32.dll", SetLastError = true)]
public static extern bool OpenProcessToken(IntPtr ProcessHandle, int DesiredAccess, out IntPtr TokenHandle);
[DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
public static extern bool LookupPrivilegeValue(string lpSystemName, string lpName, out long lpLuid);
[DllImport("advapi32.dll", SetLastError = true)]
public static extern bool AdjustTokenPrivileges(IntPtr TokenHandle, bool DisableAllPrivileges, ref TOKEN_PRIVILEGES NewState, int BufferLength, IntPtr PreviousState, IntPtr ReturnLength);
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct TOKEN_PRIVILEGES
{
public int PrivilegeCount;
public long Luid;
public int Attributes;
}
public static bool EnablePrivilege()
{
IntPtr tokenHandle;
TOKEN_PRIVILEGES tokenPrivileges;
if (!OpenProcessToken(System.Diagnostics.Process.GetCurrentProcess().Handle, TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, out tokenHandle))
return false;
if (!LookupPrivilegeValue(null, SE_SECURITY_NAME, out tokenPrivileges.Luid))
return false;
tokenPrivileges.PrivilegeCount = 1;
tokenPrivileges.Attributes = SE_PRIVILEGE_ENABLED;
return AdjustTokenPrivileges(tokenHandle, false, ref tokenPrivileges, 0, IntPtr.Zero, IntPtr.Zero);
}
}
"@
Add-Type -TypeDefinition $definition
[Privilege]::EnablePrivilege()
```
### Validate Privilege
At this point, we now have a powershell session operating with the `SeSecurityPrivilege` privilege enabled. We want to confirm this by running the following commands:
```powershell
whoami # (1)
whoami /priv # (2)
```
1. Output will appear similar to "bunny-lab\nicole.rappe", prefixing the username of the person running the command with the domain they belong to.
2. Reference the privilege table seen below to validate the output of this command matches what you see below.
| **Privilege Name** | **Description** | **State** |
| :--- | :--- | :--- |
| `SeSecurityPrivilege` | Manage auditing and security log | Enabled |
### Execute `setup.exe`
Finally, at the last stage, we mount the Cumulative Update ISO (e.g. a 6.6GB image), and using the PowerShell session we made above, we navigate to the drive letter it was mounted to and invoke `setup.exe`, causing it to run under the `SeSecurityPrivilege` operational state.
```powershell
D: # (1)
.\Setup.EXE /m:upgrade /IAcceptExchangeServersLicenseTerms_DiagnosticDataON # (2)
```
1. Replace this drive letter with whatever letter was assigned when you mounted the ISO image for the Exchange Updater.
2. This launches the Exchange updater application. Be patient and give it time to launch. At this point, you should be good to proceed with the update. You can optionally change the argument to `/IAcceptExchangeServersLicenseTerms_DiagnosticDataOFF` if you do not need diagnostic data.
!!! success "Ready to Proceed with Updating Exchange"
At this point, after doing the three sections above, you should be safe to do the upgrade/update of Microsoft Exchange Server. The installer will run its own readiness checks for other aspects such as IIS Rewrite Modules and will give you a link to download / upgrade it separately, then giving you the option to "**Retry**" after installing the module for the installer to re-check and proceed.
## Post-Update Health Checks
After the update(s) are installed, you will likely want to check to ensure things are healthy and operational, validating mail flow in both directions, running `Get-Queue` to check for backlogged emails, etc.
!!! note "Under Construction"
This section is under construction and will be based on some feedback from others to help build the section out.


@@ -0,0 +1,72 @@
## Purpose
If you operate an Exchange Database Availability Group (DAG) with 2 or more servers, you may need to perform maintenance on one of the members. During that maintenance, it's possible that a database copy on the rebooted server will fall out of date, which may suspend database replication to that DAG member.
## Checking DAG Database Replication Status
You will want to first log into one of the DAG servers and open the *"Exchange Management Shell"*. From there, run the following command to get the status of database replication. An example of the kind of output you would see is below the command.
```powershell
Get-MailboxDatabaseCopyStatus * | Format-Table Name, Status, CopyQueueLength, ReplayQueueLength, ContentIndexState
```
| **Name** | **Status** | **CopyQueueLength** | **ReplayQueueLength** | **ContentIndexState** |
| :--- | ---: | ---: | ---: | ---: |
| DB01\MX-DAG-01 | Mounted | 0 | 0 | Healthy |
| DB01\MX-DAG-02 | Healthy | 0 | 0 | Healthy |
!!! info "Example Output Breakdown"
In the above example output, you can see that there are two member servers in the DAG, `MX-DAG-01` and `MX-DAG-02`. Then you will see that there is a status of `Mounted`, this means that `MX-DAG-01` is the active production server; this means that it is handling all mailflow and web requests / webmail.
**CopyQueueLength**: This is a number of database "*transaction logs*" that have taken place since a replica database stopped getting updates. This is the queue of all database transactions that are being copied from the production (mounted) database to replica databases. This data is not immediately written to the replica database(s).
**CopyReplayLength**: This represents the queue of all data that was successfully copied from the production database to the replica database on the given DAG member that still needs to process on the replica database. The "**CopyQueueLength**" will need to reach zero before the "**CopyReplayLength**" will start making meaningful progress to reaching zero.
When both the "**CopyQueueLength**" and "**CopyReplayLength**" queues have reached zero, the replica database(s) will have reached 100% parity with the production (active/mounted) database.
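While waiting for the queues to drain, a simple polling loop in the Exchange Management Shell saves re-typing the status command (the interval is arbitrary; press CTRL+C to stop):
```powershell
while ($true) {
    Clear-Host
    Get-MailboxDatabaseCopyStatus * | Format-Table Name, Status, CopyQueueLength, ReplayQueueLength, ContentIndexState
    Start-Sleep -Seconds 60  # Re-check every minute
}
```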
## Changing Active/Mounted DAG Member
You may find that you need to perform work on one of the DAG members, and that requires you to failover the responsibility of hosting the Exchange environment to one of the other members of the DAG. You can generally do this with one command, seen below:
```powershell
Move-ActiveMailboxDatabase -Identity "DB01" -ActivateOnServer "MX-DAG-02" -MountDialOverride BestAvailability
```
!!! info "Argument Breakdown"
`-MountDialOverride`
Specifies how tolerant Exchange should be to database copy health when mounting a database on the target server. This setting controls the level of availability Exchange requires before mounting the mailbox database after the move.
`-MountDialOverride`
Instructs Exchange to mount the database as long as at least one healthy copy is available. This option maximizes uptime by allowing a database to mount even if some copies are unhealthy, prioritizing availability over strict health checks.
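After the move completes, you can confirm which server now holds the active copy:
```powershell
Get-MailboxDatabaseCopyStatus -Identity "DB01" | Format-Table Name, Status, ActiveCopy
```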
## Troubleshooting
You may run into issues where either the `Status` or `ContentIndexState` are either Unhealthy, Suspended, or Failed. If this happens, you need to resume replication of the database from the production active/mounted server to the server that is having issues. In the worst-case, you would re-seed the replica database from-scratch.
### If `Status` is Unhealthy or Suspended
If one of the DAG members has a status of "**Unhealthy**", you can run the following command to attempt to resume replication.
```powershell
Resume-MailboxDatabaseCopy -Identity "DB01\MX-DAG-02"
```
If this fails to cause replication to resume, you can try telling the database to just focus on replication, which tells it to copy the queues and replay them on the replica database, while avoiding interacting with the "**ContentIndexState**" which can be individually fixed in the commands below:
```powershell
Resume-MailboxDatabaseCopy -Identity "DB01\MX-DAG-02" -ReplicationOnly
```
### If `Status` is `ServiceDown`
If you see this, it generally means that the Exchange services are not running for one reason or another. You can remediate this with a PowerShell script. You will then have to double-check your work to ensure that all "Microsoft Exchange" services with a startup mode of "Automatic" are running; if not, manually start them, then check the status of the DAG again to see if it changes from `ServiceDown` to `Healthy`. Depending on the speed of the Exchange server, it may take 5-10 minutes for the services to fully initialize and be ready to handle requests. Go get a coffee and come back to check on the status of the DAG.
[:material-powershell: Restart Exchange Services Script](../restart-exchange-services.md){ .md-button }
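Once the services have had time to come up, a quick way to spot any automatic Exchange service that failed to start (note that the `StartType` property requires PowerShell 5.0 or newer):
```powershell
Get-Service | Where-Object {
    $_.DisplayName -like "Microsoft Exchange *" -and
    $_.StartType -eq 'Automatic' -and
    $_.Status -ne 'Running'
}
```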
### If `ContentIndexState` is Unhealthy or Suspended
If you see that the "**ContentIndexState**" is unhappy, you can run the following command to force it to re-seed / rebuild itself. (This is non-destructive since it is happening on a replica database.)
```powershell
Update-MailboxDatabaseCopy "DB01\MX05" -CatalogOnly -BeginSeed
```
### If Replica Database is FUBAR
If the replica database just is not playing nice, you can take the *nuclear option* of completely rebuilding the replica database.
!!! warning
This will destroy the replica database, so be careful to ensure you have a backup (if possible) before you do this. The following command will completely replace the replica database and replicate the data from the production active/mounted database to the newly-created replica database.
```powershell
Update-MailboxDatabaseCopy -Identity "DB01\MX-DAG-02" -SourceServer "MX-DAG-01" -DeleteExistingFiles
```


@@ -0,0 +1,14 @@
### Purpose:
Sometimes Microsoft Exchange Server will misbehave and the services will need to be *bumped* to fix them. This script iterates over all of the Exchange-related services and restarts them automatically for you.
``` powershell
$servicelist = Get-Service | Where-Object { $_.DisplayName -like "Microsoft Exchange *" }
$servicelist += Get-Service | Where-Object { $_.DisplayName -eq "IIS Admin Service" }
$servicelist += Get-Service | Where-Object { $_.DisplayName -eq "Windows Management Instrumentation" }
$servicelist += Get-Service | Where-Object { $_.DisplayName -eq "World Wide Web Publishing Service" }

foreach ($service in $servicelist) {
    Set-Service -Name $service.Name -StartupType Automatic # Ensure the service starts on boot
    Restart-Service -Name $service.Name -Force             # Restart it (starts it if it was stopped)
}
```


@@ -0,0 +1,20 @@
**Purpose**: Sometimes you need to set an autoreply on a mailbox on behalf of someone else. In these cases, you can leverage the "Exchange Admin Shell" to configure an auto-reply to anyone who sends an email to the mailbox.
In the example below, replace `<username>` with the shortened username of the target user. (e.g. `nicole.rappe` not `nicole.rappe@bunny-lab.io`)
``` powershell
Set-MailboxAutoReplyConfiguration -Identity <username> -AutoReplyState Scheduled -StartTime "1/1/2025 00:00:00" -EndTime "1/15/2025 00:00:00" -InternalMessage "Example,<br><br>Message here.<br><br>Thank you." -ExternalMessage "Example,<br><br>Message here.<br><br>Thank you."
```
!!! note "Internal vs External"
When you configure auto-replies, you can have different replies sent to people within the same organization versus external senders; keep this in mind based on who the mailbox typically corresponds with.
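To confirm the schedule took effect, or to cancel the auto-reply early, the matching `Get`/`Set` calls are:
```powershell
Get-MailboxAutoReplyConfiguration -Identity <username> | Format-List AutoReplyState, StartTime, EndTime
Set-MailboxAutoReplyConfiguration -Identity <username> -AutoReplyState Disabled  # Turn the auto-reply off early
```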
!!! example "Example Email Reply"
The email auto reply will look something like this based on the command above.
```
Example,
Message Here.
Thank you.
```


@@ -0,0 +1,389 @@
## Purpose
If you want data available from a single, consistent UNC path while hosting it on multiple file servers, use **DFS Namespaces (DFSN)**. A namespace presents a *virtual* folder tree (for example, `\\bunny-lab.io\Projects`) whose folders point to one or more **folder targets** (actual SMB shares on your servers).
**DFS Replication (DFSR)** is a *separate* feature you configure to keep the contents of those targets in sync.
This document walks through creating a domain-based DFS namespace and enabling DFS Replication for two servers.
!!! info "Assumptions"
You have two Windows Server machines (e.g., `LAB-FPS-01` and `LAB-FPS-02`) running an edition that supports DFS (Standard or Datacenter), both activated, domain-joined, and using static IPs.
### Installing Server Roles
Install the roles on **both servers**:
* **Server Manager → Manage → Add Roles and Features**
* Click **Next** to **Server Roles**
* Expand **File and Storage Services**
* Expand **File and iSCSI Services**
* Check **File Server**
* Check **DFS Namespaces**
* Check **DFS Replication**
* **Next → Next → Install**, then finish.
### Create & Configure Network Shares
Create (or identify) the folders you want to publish in the namespace, and share them on **each** server. Be sure to enable **Access-based Enumeration** on all of the folder shares for additional security. The files only need to exist on one of the file servers; on the replica servers, create empty top-level folders with the same names, and the data will be replicated automatically from the populated server into the empty folders.
Additionally, it is recommended (if possible) to hide the share names, for example `\\LAB-FPS-01\Projects$`. This ensures that users access the share via DFS at `\\bunny-lab.io\Projects` rather than accidentally hitting the network shares directly and bypassing DFS. In this example, the local path would be `Z:\Projects` while the network share would be `\\LAB-FPS-01\Projects$`. *This wouldn't break things like replication, but it would muck things up a little bit organizationally. The data would still be replicated between both servers; we just don't want users using direct server shares, which bypasses the high-availability and load-balancing features of DFS.* A sketch for creating such a share is shown below.
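Assuming the data lives at `Z:\Projects` and you are in an elevated PowerShell session on the file server, the hidden share with access-based enumeration could be created like this:
```powershell
# Create a hidden share ($ suffix) with access-based enumeration enabled; lock down access via NTFS
New-SmbShare -Name 'Projects$' -Path 'Z:\Projects' -FullAccess 'Everyone' -FolderEnumerationMode AccessBased
```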
!!! warning "What must match vs. what can differ"
- **Must exist on each server:** a shared folder to act as the *folder target* (path can differ per server).
- **Share permissions:** are **not replicated**; set them on each server.
- **NTFS permissions inside the replicated folder:** **are replicated** by DFSR and should be consistent.
- Targets do **not** have to use identical share names/paths, but keeping them consistent simplifies things.
| **Permission Type** | **User / Group** | **Access Level** | **Notes** |
| :---- | :---- | :---- | :---- |
| Share | `Everyone` (or `Authenticated Users`) | Full Control | Best practice is to grant broad Full Control on the **share** and enforce access with NTFS. |
| NTFS | `SYSTEM` | Full Control | Required for DFSR service. |
| NTFS | `Share_Admins` | Full Control | Optional admin group for data management. |
| NTFS | *Business groups needing access* | Modify | Grant least privilege to required users/groups. |
!!! info "Note On Inheritance"
Disabling inheritance is **not required** for DFS/DFSR. Keep it enabled unless you have a clear reason to flatten ACLs; inheritance often reduces long-term admin overhead.
### DFS Breakdown
A **namespace** is a logical view like `\\bunny-lab.io\Projects`. Inside it, you create DFS **folders** (e.g., `Scripting`) that point to one or more **folder targets**, such as:
* `\\LAB-FPS-01\Projects$\Scripting`
* `\\LAB-FPS-02\Projects$\Scripting`
The namespace root itself isn't where you store data; it's a directory of links. Place data in the folder targets the DFS folder points to.
### DFS Configuration
You can run these steps from either server (or any admin workstation with the RSAT tools). DFSN configuration is stored in AD and on namespace servers and applies across members automatically.
#### Create Namespace
* **Server Manager → Tools → DFS Management**
* Right-click **Namespaces** → **New Namespace...**
* Choose a server to host the namespace (e.g., `LAB-FPS-01`) → **Next**
* Name the namespace (e.g., `Projects`) → **Next**
* You can leave **Edit Settings** at defaults; those control the local folder that backs the namespace root, not your data.
* Choose **Domain-based namespace** and check **Enable Windows Server 2008 mode** (required for larger scale and Access-based enumeration).
* Resulting path: `\\bunny-lab.io\Projects`
* **Next → Create**
#### Make Namespace Highly-Available
We have to perform an extra step to ensure that every file server can act within a multi-master context, allowing for high availability. In this example, we will add `LAB-FPS-02` as a secondary namespace server for every namespace that we create.
- Right-Click **DFS Management** > **Namespaces** > `\\bunny-lab.io\Projects`
- Click **Add Namespace Server...**
- Under "Namespace Server" enter `LAB-FPS-02` then click **OK**.
#### Enable Access-Based Enumeration on Namespace
- Right-Click **DFS Management** > **Namespaces** > `\\bunny-lab.io\Projects`
- Click **Properties**
- Click **Advanced**
- Check **Enable access-based enumeration for this namespace**
- Click **OK**
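ABE can also be toggled on the namespace from PowerShell:
```powershell
Set-DfsnRoot -Path '\\bunny-lab.io\Projects' -EnableAccessBasedEnumeration $true
```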
#### Link Folders to Namespace
Create the DFS folders and add folder targets:
* Right-click the new namespace (e.g., `\\bunny-lab.io\Projects`) → **New Folder...**
* **Name:** `Scripting`
* **Add** folder targets (one per server), e.g.:
* `\\LAB-FPS-01\Projects$\Scripting`
* `\\LAB-FPS-02\Projects$\Scripting`
* You can simply copy-paste the previous server location and substitute the hostname (e.g. switching `01` to `02`) instead of browsing for the folder.
* You *may* be prompted to create the folder because it does not exist on `LAB-FPS-02`, in this circumstance, you can tell it to create the folder automatically with read-only permissions. *Don't worry, when replication from `LAB-FPS-01` occurs, NTFS permissions will be overwritten to the correct users and groups.*
* When prompted *"Create a replication group to synchronize the folder targets?"*, click **Yes** to launch the DFS Replication wizard.
!!! info "**Be patient**"
The Replication wizard can take ~1 minute to appear.
#### Configure Replication Group
In the Replication wizard that appears after about a minute, you can configure the replication group for the folder:
!!! bug "If Wizard did Not Appear (or Crashed)"
In my homelab testing, I had two times when the wizard crashed or simply never opened. If this happens to you, you can manually re-trigger the wizard for the target folder by right-clicking the folder (e.g. `\\bunny-lab.io\Projects\Scripting`) and selecting **Replicate Folder**.
* **Replication Group Name**: *(leave as suggested)*
* **Replicated Folder Name**: *(leave as suggested)*
* **Next → Next**
* **Primary member**: pick the server with the **most up-to-date** copy of the data (e.g., `LAB-FPS-01`).
!!! abstract "Replication Behavior and Expectations"
When you first create a replication group, DFSR needs a baseline copy of the data to start from. You designate one server as the Primary Member to serve as that baseline. (e.g. `LAB-FPS-01`) During the first sync, DFSR assumes that whatever exists on the primary member's folder is the "truth." So if the same file exists on another server (e.g. `LAB-FPS-02`) but with different timestamps, sizes, or hashes, the primary member's copy wins - but only during this first synchronization. After that initial sync is complete, the "primary" flag loses all authority. Replication becomes multi-master, meaning every member can make changes, and DFSR uses its conflict resolution algorithm (based on version vectors, update sequence numbers, and timestamps) to decide which change wins going forward. In other words, no server remains “the boss” after initialization. Files unique to other member servers that only exist on them will not be wiped and will be replicated across all member servers including the primary member.
* **Topology**: `Full mesh` (good for two servers; for many sites, consider hub-and-spoke).
* **Replication schedule**: leave **Full** (24x7) unless you need bandwidth windows.
* **Create**
!!! success "Replication group created"
You should see green ticks for the following. Give everything some time to replicate as it depends on active directory replication speeds to push out the configuration across the DFS member servers and begin the replication.
- ✅Create replication group
- ✅Create members
- ✅Update folder security
- ✅Create replicated folder
- ✅Create membership objects
- ✅Update folder properties
- ✅Create connections
### Checking DFS Status
You may want to put together a simple table report of the DFS namespaces, replication info, and target folders. You can run the following powershell script to generate a nice table-based report of the current structure of the DFS namespaces in your domain.
??? example "Powershell Reporting Script"
```powershell
# Automatically detect current AD domain and use it as DFS prefix
try {
$Domain = ([System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain()).Name
$DomainPrefix = "\\$Domain"
} catch {
Write-Warning "Unable to detect domain automatically. Falling back to manual value."
$DomainPrefix = "\\bunny-lab.io"
}
Import-Module DFSN -ErrorAction Stop
Import-Module DFSR -ErrorAction Stop
function Get-ServerNameFromPath {
param([string]$Path)
if ([string]::IsNullOrWhiteSpace($Path)) { return $null }
if ($Path -like "\\*") { return ($Path -split '\\')[2] }
return $null
}
function Get-Max3 {
param([int[]]$Values)
if (-not $Values) { return 0 }
return (($Values | Measure-Object -Maximum).Maximum)
}
# Build: GroupName (lower) -> memberships[]
$allGroups = Get-DfsReplicationGroup -ErrorAction SilentlyContinue
$groupMembershipMap = @{}
foreach ($g in $allGroups) {
$ms = Get-DfsrMembership -GroupName $g.GroupName -ErrorAction SilentlyContinue
$groupMembershipMap[$g.GroupName.ToLower()] = $ms
}
# Flatten all memberships for regex fallback
$allMemberships = @()
foreach ($arr in $groupMembershipMap.Values) { if ($arr) { $allMemberships += $arr } }
$rows = New-Object System.Collections.Generic.List[psobject]
# Enumerate namespace roots
$roots = Get-DfsnRoot -ErrorAction Stop | Where-Object { $_.Path -like "$DomainPrefix\*" }
Write-Host "DFS Namespace and Replication Overview" -ForegroundColor Cyan
Write-Host "------------------------------------------------------`n"
foreach ($root in $roots) {
$rootPath = $root.Path
$rootLeaf = ($rootPath -split '\\')[-1]
$nsServers = @()
$rootTargets = Get-DfsnRootTarget -Path $rootPath -ErrorAction SilentlyContinue
foreach ($rt in $rootTargets) {
$srv = Get-ServerNameFromPath $rt.TargetPath
if ($srv) { $nsServers += $srv }
}
# Folders under this root
$folders = Get-DfsnFolder -Path "$rootPath\*" -ErrorAction SilentlyContinue | Sort-Object Path
foreach ($f in $folders) {
$namespaceFull = $f.Path
$leaf = ($f.Path -split '\\')[-1]
# DFSN folder targets
$targets = Get-DfsnFolderTarget -Path $f.Path -ErrorAction SilentlyContinue
$targets = @($targets | Sort-Object { Get-ServerNameFromPath $_.TargetPath }) # ensure array
# Map to DFSR group by naming; fallback to regex on ContentPath
$candidateGroup = ((($rootPath -replace '^\\\\','') + '\' + $leaf).ToLower())
if ($groupMembershipMap.ContainsKey($candidateGroup)) {
$msForFolder = $groupMembershipMap[$candidateGroup]
} else {
$escapedRootLeaf = [regex]::Escape($rootLeaf)
$escapedLeaf = [regex]::Escape($leaf)
$regex = "\\$escapedRootLeaf\\$escapedLeaf($|\\)"
$msForFolder = $allMemberships | Where-Object { $_.ContentPath -imatch $regex }
}
$msForFolder = @($msForFolder) # normalize to array
# Build aligned rows: one per target
$targetLines = @()
$replLines = @()
foreach ($t in $targets) {
$tServer = Get-ServerNameFromPath $t.TargetPath
$targetLines += $t.TargetPath
$msForServer = $null
if ($msForFolder.Count -gt 0) {
$msForServer = $msForFolder | Where-Object { $_.ComputerName -ieq $tServer } | Select-Object -First 1
}
if ($msForServer -and $msForServer.ContentPath) { $replLines += $msForServer.ContentPath } else { $replLines += '' }
}
# Max line count for row expansion (PS 5.1 safe)
$maxLines = Get-Max3 @($targetLines.Count, $replLines.Count, $nsServers.Count)
for ($i = 0; $i -lt $maxLines; $i++) {
# Precompute values (PS 5.1: no inline-if in hashtables)
$nsVal = ''
if ($i -eq 0) { $nsVal = $namespaceFull }
$targetVal = ''
if ($i -lt $targetLines.Count) { $targetVal = $targetLines[$i] }
$replVal = ''
if ($i -lt $replLines.Count) { $replVal = $replLines[$i] }
$nsServerVal = ''
if ($i -lt $nsServers.Count) { $nsServerVal = $nsServers[$i] }
$row = [PSCustomObject]@{
'Namespace' = $nsVal
'Member Folder Target(s)' = $targetVal
'Replication Locations' = $replVal
'Namespace Servers' = $nsServerVal
}
$rows.Add($row) | Out-Null
}
}
}
# Render as a PowerShell bordered grid with one-space left/right padding in every cell
function Write-DfsGrid {
[CmdletBinding()]
param(
[Parameter(Mandatory)]
[System.Collections.IEnumerable]$Data,
[string[]]$Columns = @('Namespace','Member Folder Target(s)','Replication Locations','Namespace Servers'),
# Reasonable max widths; tune to your console (these are content+padding widths)
[int[]]$MaxWidths = @(70, 70, 52, 30),
[switch]$Ascii # use +-| instead of box-drawing if your console garbles Unicode
)
# Ensure arrays align
if ($MaxWidths.Count -lt $Columns.Count) {
$pad = New-Object System.Collections.Generic.List[int]
$pad.AddRange($MaxWidths)
for ($i=$MaxWidths.Count; $i -lt $Columns.Count; $i++) { $pad.Add(40) }
$MaxWidths = $pad.ToArray()
}
# Characters
if ($Ascii) {
$H = @{ tl='+'; tr='+'; bl='+'; br='+'; hz='-'; vt='|'; tj='+'; mj='+'; bj='+' }
} else {
# Box-drawing
$H = @{ tl='┌'; tr='┐'; bl='└'; br='┘'; hz='─'; vt='│'; tj='┬'; mj='┼'; bj='┴' }
try { [Console]::OutputEncoding = [Text.UTF8Encoding]::UTF8 } catch {}
}
function TruncPad([string]$s, [int]$w) {
if ($null -eq $s) { $s = '' }
$s = $s -replace '\r','' -replace '\t',' '
if ($s.Length -le $w) { return $s.PadRight($w, ' ') }
if ($w -le 1) { return $s.Substring(0, $w) }
return ($s.Substring(0, $w-1) + '…')
}
# Materialize and compute widths (include one-space left/right padding for header and data)
$rows = @($Data | ForEach-Object {
$o = @{}
foreach ($c in $Columns) { $o[$c] = [string]($_.$c) }
[pscustomobject]$o
})
$widths = @()
for ($i=0; $i -lt $Columns.Count; $i++) {
$col = $Columns[$i]
# Start with header length including padding
$max = (" " + $col + " ").Length
foreach ($r in $rows) {
$len = (" " + [string]$r.$col + " ").Length
if ($len -gt $max) { $max = $len }
}
$widths += [Math]::Min($max, $MaxWidths[$i])
}
# Line builders
function DrawTop() {
$line = $H.tl
for ($i = 0; $i -lt $widths.Count; $i++) {
$line += ($H.hz * $widths[$i])
if ($i -lt ($widths.Count - 1)) {
$line += $H.tj
} else {
$line += $H.tr
}
}
$line
}
function DrawMid([string[]]$Columns, [int[]]$widths, $H) {
$line = $H.vt
for ($i=0; $i -lt $widths.Count; $i++) {
$line += TruncPad (" " + $Columns[$i] + " ") $widths[$i]
$line += $H.vt
}
$line
}
function DrawSep() {
$line = $H.vt
for ($i=0; $i -lt $widths.Count; $i++) {
$line += ($H.hz * $widths[$i])
$line += $H.vt
}
$line
}
function DrawHeaderSep() {
$line = $H.vt
for ($i=0; $i -lt $widths.Count; $i++) {
$line += ($H.hz * $widths[$i])
$line += $H.vt
}
$line
}
function DrawBottom() {
$line = $H.bl
for ($i = 0; $i -lt $widths.Count; $i++) {
$line += ($H.hz * $widths[$i])
if ($i -lt ($widths.Count - 1)) {
$line += $H.bj
} else {
$line += $H.br
}
}
$line
}
function DrawRow($r, [string[]]$Columns, [int[]]$widths, $H) {
$line = $H.vt
for ($i=0; $i -lt $widths.Count; $i++) {
$val = [string]$r.($Columns[$i])
$line += TruncPad (" " + $val + " ") $widths[$i]
$line += $H.vt
}
$line
}
# Render with group separators between namespaces (when the Namespace cell is non-empty)
Write-Host (DrawTop)
Write-Host (DrawMid -Columns $Columns -widths $widths -H $H)
Write-Host (DrawHeaderSep)
$first = $true
foreach ($r in $rows) {
if (-not $first -and ([string]$r.$($Columns[0])) ) {
# Namespace changed → draw a separator
Write-Host (DrawSep)
}
$first = $false
Write-Host (DrawRow -r $r -Columns $Columns -widths $widths -H $H)
}
Write-Host (DrawBottom)
}
Write-DfsGrid -Data $rows
```


@@ -0,0 +1,66 @@
**Purpose**:
This document outlines some of the prerequisites as well as the deployment process for an ARK: Survival Ascended server.
## Prerequisites
We need to install the Visual C++ Redistributable for both x86 and x64
- [Download Visual C++ Redistributable (x64)](https://aka.ms/vs/17/release/vc_redist.x64.exe)
- [Download Visual C++ Redistributable (x86)](https://aka.ms/vs/17/release/vc_redist.x86.exe)
## Run Unreal Engine Certificate Trust Script
There is an issue where if you run a dedicated server, part of that requires API access to Epic Games and that will not work without installing a few certificates. The original Github page can be found [here](https://github.com/Ch4r0ne/UnrealEngine_Dedicated_Server_Install_CA/tree/main), which details the reason for it in more detail.
!!! note "Run as Administrator"
You need to run the command as an administrator. This command will download the script automatically and temporarily bypass the script execution policy to run the script:
```
PowerShell -ExecutionPolicy Bypass -Command "irm 'https://raw.githubusercontent.com/Ch4r0ne/UnrealEngine_Dedicated_Server_Install_CA/main/Install_Certificate.ps1' | iex"
```
## SteamCMD Deployment Script
You will need to make a folder somewhere on the computer, such as the desktop, and name it something like "ARK Updater", then put the following script into it. You will need to run this script before you can proceed to the next step.
```jsx title="C:\Users\nicole.rappe\Desktop\ARK_Updater\Update_Server.bat"
@echo off
set STEAMCMDDIR="C:\SteamCMD\"
set SERVERDIR="C:\ASAServer\"
set ARKAPPID=2430930
cd /d %STEAMCMDDIR%
del steamcmd.exe
timeout /t 5 /nobreak
curl -o steamcmd.zip https://steamcdn-a.akamaihd.net/client/installer/steamcmd.zip
powershell Expand-Archive -Path .\steamcmd.zip -DestinationPath .\
start "" /wait steamcmd.exe +force_install_dir "%SERVERDIR%" +login anonymous +app_update %ARKAPPID% validate +quit
exit
```
## Launch Script
Now you need to configure a launch script to actually start the dedicated server. This can be placed anywhere, but I suggest putting it into `C:\asaserver\ShooterGame\Saved` along with the world save data.
```jsx title="C:\asaserver\ShooterGame\Saved\Launch_Server.bat"
@echo off
start C:\asaserver\ShooterGame\Binaries\Win64\ArkAscendedServer.exe ScorchedEarth_WP?listen?SessionName=BunnyLab?Port=7777?QueryPort=27015?ServerPassword=SomethingSecure?ServerAdminPassword=SomethingVerySecure -WinLiveMaxPlayers=50 -log -crossplay-enable-pc -crossplay-enable-wingdk -mods=928548,928621,928597,928818,929543,937546,930684,930404,940022,941697,930851,948051,932365,929420,967786,930494
exit
```
!!! tip "Adding Mods"
When you are adding mods, you will notice they are found on [CurseForge](https://www.curseforge.com/ark-survival-ascended). When you are looking for the mod ID, it is actually listed under CurseForge as the `Project ID`. Just copy that number and put it in a comma-separated list such as what is seen in the example above.
## Dump Configuration .ini Files
At this point, you will want to launch the server and have someone join it so it can generate the necessary world files / configuration data. Then you will run the following commands in the console (from the server hosting the ARK server) in order to dump the configuration (ini) files to disk.
```
enablecheats <AdminPassword>
cheat SaveWorld
cheat DoExit
```
You will find the dumped configuration files at `C:\asaserver\ShooterGame\Saved\Config\WindowsServer`. The files you care about are `Game.ini` and `GameUserSettings.ini`.
!!! warning "Do not modify while server is running"
If you modify these configuration files while the server is running, it will overwrite the values when the server is stopped again. Be sure to either set the variables in-game via the console so it dumps them to disk, or wait until the server is stopped to make configuration ini file changes.
!!! info "Optional: Generate Files from Singleplayer World"
You may want to start a singleplayer world and set all of the configuration variables to your desired values, then load into the world. Once you have made landfall, quit out of the game to shut down the singleplayer world.
From this point, you can find your `Game.ini` and `GameUserSettings.ini` files in `steamapps\common\ARK Survival Ascended\ShooterGame\Saved\Config\Windows`. Simply copy these two files into your server's configuration folder located at `C:\asaserver\ShooterGame\Saved\Config\WindowsServer` and launch the server.


@@ -0,0 +1,46 @@
**Purpose**: Pterodactyl is the open-source game server management panel built with PHP, React, and Go. Designed with security in mind, Pterodactyl runs all game servers in isolated Docker containers while exposing a beautiful and intuitive UI to administrators and users.
[Official Website](https://pterodactyl.io/panel/1.0/getting_started.html)
!!! note
This documentation assumes you are running Rocky Linux 9.3 or higher.
**Install EPEL Repository and other tools**:
```bash
sudo yum -y install epel-release curl ca-certificates gnupg
```
**Add Redis Repository**:
```bash
sudo rpm --import https://packages.redis.io/gpg
echo "[redis6]
name=Redis 6 repository
baseurl=https://packages.redis.io/rpm/6/rhel/8/\$basearch/
enabled=1
gpgcheck=1
gpgkey=https://packages.redis.io/gpg" | sudo tee /etc/yum.repos.d/redis.repo
```
**Add MariaDB Repository**:
```bash
sudo curl -LsS https://downloads.mariadb.com/MariaDB/mariadb_repo_setup | sudo bash
```
**Update Repositories List**:
```bash
sudo yum update
```
**Install Dependencies**:
Before installing PHP, check the available PHP versions in your enabled repositories. Install PHP and other dependencies as follows:
```bash
sudo yum -y install php php-{common,cli,gd,mysql,mbstring,bcmath,xml,fpm,curl,zip} mariadb-server nginx tar unzip git redis
```
**Installing Composer**:
```bash
curl -sS https://getcomposer.org/installer | php
sudo mv composer.phar /usr/local/bin/composer
chmod +x /usr/local/bin/composer
```
This script should work well with Rocky Linux and similar RHEL-based distributions, using `yum` for package management. However, keep in mind that package names and versions may vary between repositories, so you might need to adjust them based on what's available in your system's repositories.
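As a quick sanity check after the dependencies install (service names assumed for Rocky Linux 9):
```bash
php -v             # Confirm PHP installed and meets the panel's version requirement
composer --version # Confirm Composer is on the PATH
sudo systemctl enable --now mariadb redis nginx # Start the supporting services now and at boot
```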


@@ -0,0 +1,33 @@
**Purpose**:
This document outlines some of the prerequisites as well as the deployment process for a dedicated Valheim server.
## Prerequisites
We need to install the Visual C++ Redistributable for both x86 and x64
- [Download Visual C++ Redistributable (x64)](https://download.visualstudio.microsoft.com/download/pr/1754ea58-11a6-44ab-a262-696e194ce543/3642E3F95D50CC193E4B5A0B0FFBF7FE2C08801517758B4C8AEB7105A091208A/VC_redist.x64.exe)
- [Download Visual C++ Redistributable (x86)](https://download.visualstudio.microsoft.com/download/pr/b4834f47-d829-4e11-80f6-6e65081566b5/A32DD41EAAB0C5E1EAA78BE3C0BB73B48593DE8D97A7510B97DE3FD993538600/VC_redist.x86.exe)
## SteamCMD Deployment Script
You will need to make a folder somewhere on the computer, such as the desktop, and name it something like "SteamCMD", then put `steamcmd.exe` and the following script into it. You will need to run this script before you can proceed to the next step.
```jsx title="C:\Users\nicole.rappe\Downloads\SteamCMD\Update_Server.bat"
@echo off
steamcmd.exe +force_install_dir "C:\Valheim_Dedicated_Server" +login anonymous +app_update 896660 -beta public validate +quit
```
## Launch Script
Now you need to configure a launch script to actually start the dedicated server. This can be placed anywhere, but I suggest putting it into `C:\Valheim_Dedicated_Server` along with the server files.
```jsx title="C:\valheim_dedicated_server\Launch_Server.bat"
@echo off
set SteamAppId=892970
echo "Starting server PRESS CTRL-C to exit"
valheim_server -nographics -batchmode -name "Bunny Lab" -port 2456 -world "Dedicated" -password "SomethingVerySecure" -crossplay -saveinterval 300 -backups 72 -backupshort 600 -backuplong 21600
```
!!! warning "Launch Script Considerations"
- Make a local copy of this script to avoid it being overwritten by Steam.
- The minimum password length is 5 characters, and the password can't be contained in the server name.
- You need to make sure ports TCP/UDP 2456-2457 are being forwarded to your server through your server VM & firewall (a local firewall rule sketch is shown below).
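A minimal sketch for opening those ports in Windows Firewall on the server itself (rule names are arbitrary):
```powershell
New-NetFirewallRule -DisplayName 'Valheim Server (UDP)' -Direction Inbound -Protocol UDP -LocalPort 2456-2457 -Action Allow
New-NetFirewallRule -DisplayName 'Valheim Server (TCP)' -Direction Inbound -Protocol TCP -LocalPort 2456-2457 -Action Allow
```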


@@ -0,0 +1,49 @@
**Purpose**: A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.
```yaml title="docker-compose.yml"
version: "3.9"
services:
frigate:
container_name: frigate
privileged: true # this may not be necessary for all setups
restart: unless-stopped
image: blakeblackshear/frigate:stable
shm_size: "256mb" # update for your cameras based on calculation above
# devices:
# - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
# - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
# - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
volumes:
- /etc/localtime:/etc/localtime:ro
- /mnt/1TB_STORAGE/frigate/config.yml:/config/config.yml:ro
- /mnt/1TB_STORAGE/frigate/media:/media/frigate
- type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
target: /tmp/cache
tmpfs:
size: 4000000000
ports:
- "5000:5000"
- "1935:1935" # RTMP feeds
environment:
FRIGATE_RTSP_PASSWORD: ${FRIGATE_RTSP_PASSWORD}
networks:
docker_network:
ipv4_address: 192.168.5.201
mqtt:
container_name: mqtt
image: eclipse-mosquitto:1.6
ports:
- "1883:1883"
networks:
docker_network:
ipv4_address: 192.168.5.202
networks:
docker_network:
external: true
```
```yaml title=".env"
FRIGATE_RTSP_PASSWORD=SomethingSecure101
```
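After bringing the stack up, the container logs are the quickest way to confirm the cameras and detector came online:
```sh
docker compose up -d
docker logs -f frigate # Watch for camera connection and detector startup messages
```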


@@ -0,0 +1,37 @@
**Purpose**: Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts.
```yaml title="docker-compose.yml"
version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    environment:
      - TZ=America/Denver
    volumes:
      - /srv/containers/Home-Assistant-Core:/config
      - /etc/localtime:/etc/localtime:ro
    restart: always
    privileged: true
    ports:
      - 8123:8123
    networks:
      docker_network:
        ipv4_address: 192.168.5.252
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homeassistant.rule=Host(`automation.cyberstrawberry.net`)"
      - "traefik.http.routers.homeassistant.entrypoints=websecure"
      - "traefik.http.routers.homeassistant.tls.certresolver=myresolver"
      - "traefik.http.services.homeassistant.loadbalancer.server.port=8123"

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```
```yaml title=".env"
Not Applicable
```

services/index.md

@@ -0,0 +1,33 @@
# Services
## Purpose
Deployable services and applications in the lab (auth, email, monitoring, etc).
## Includes
- Service deployments and configs
- Dependencies and integrations
- Operational notes specific to the service
## New Document Template
````markdown
# <Document Title>
## Purpose
<what this service does and why it exists>
!!! info "Assumptions"
- <platform assumptions>
- <dependency assumptions>
## Dependencies
- <required services, ports, DNS, storage>
## Procedure
```sh
# Commands or deployment steps
```
## Validation
- <command + expected result>
## Rollback
- <how to undo or recover>
````


@@ -0,0 +1,60 @@
**Purpose**: Emulatorjs is a browser web based emulation portable to nearly any device for many retro consoles. A mix of emulators is used between Libretro and EmulatorJS.
## Docker Configuration
```yaml title="docker-compose.yml"
---
services:
emulatorjs:
image: lscr.io/linuxserver/emulatorjs:latest
container_name: emulatorjs
environment:
- PUID=1000
- PGID=1000
- TZ=America/Denver
- SUBFOLDER=/ #optional
volumes:
- /srv/containers/emulatorjs/config:/config
- /srv/containers/emulatorjs/data:/data
ports:
- 3000:3000
- 80:80
- 4001:4001 #optional
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.200
networks:
docker_network:
external: true
```
```yaml title=".env"
N/A
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
    emulatorjs:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: emulatorjs
rule: Host(`emulatorjs.bunny-lab.io`)
services:
emulatorjs:
loadBalancer:
servers:
- url: http://192.168.5.200:80
passHostHeader: true
```
!!! note
Port 80 = Frontend
Port 3000 = Management Backend

View File

@@ -0,0 +1,78 @@
**Purpose**: pyLoad-ng is a Free and Open Source download manager written in Python and designed to be extremely lightweight, easily extensible and fully manageable via web.
[Detailed LinuxServer.io Deployment Info](https://docs.linuxserver.io/images/docker-pyload-ng/)
## Docker Configuration
```yaml title="docker-compose.yml"
version: '3.9'
services:
pyload-ng:
image: lscr.io/linuxserver/pyload-ng:latest
container_name: pyload-ng
environment:
- PUID=1000
- PGID=1000
      - TZ=America/Denver # (1)
volumes:
- /srv/containers/pyload-ng/config:/config
      - nfs-share:/downloads # (3)
ports:
- 8000:8000
      - 9666:9666 # (2)
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.30
volumes:
nfs-share:
driver: local
driver_opts:
type: nfs
o: addr=192.168.3.3,nolock,soft,rw # Options for the NFS mount
      device: ":/mnt/STORAGE/Downloads" # (4) NFS export path on the server
networks:
docker_network:
external: true
```
1. Set this to your own timezone.
2. This port is optional; it is used by pyLoad's Click'N'Load feature.
3. This mounts an NFS export as the download folder, allowing pyLoad to download content directly into the network share. Adjust the `addr` option to point at your NFS server.
4. This is the NFS export path on the server targeted in section 3.
!!! note "NFS Mount Assumptions"
    The folder in this example lives on a TrueNAS Core server and is exported via NFS. `mapall user` and `mapall group` are configured to the user and group owners of the folder, as set in the permissions of the dataset in TrueNAS Core. In this case, the mapall user is `BUNNY-LAB\nicole.rappe` and the mapall group is `BUNNY-LAB\Domain Admins`.
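Before deploying the stack, you can confirm the export is visible from the Docker host; a quick sanity check, assuming the `showmount` utility is installed:
```sh
showmount -e 192.168.3.3
# Expect /mnt/STORAGE/Downloads to be listed among the exports
```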
```yaml title=".env"
N/A
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
pyload:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: pyload
rule: Host(`pyload.bunny-lab.io`)
services:
pyload:
loadBalancer:
servers:
- url: http://192.168.5.30:8000
passHostHeader: true
```
!!! warning "Change Default Admin Credentials"
Pyload ships with the username `pyload` and password `pyload`. Make sure you change the credentials immediately after initial login.
Navigate to "**Settings > Users > Pyload:"Change Password"**"

View File

@@ -0,0 +1,10 @@
**Purpose**:
Sometimes you may need to change the MFA on an account, by adding a new email or phone number for SMS-based MFA. This can be done fairly quickly and only involves a few steps:
- Navigate to the [Azure Web Portal](https://portal.azure.com) and log in using your Office365 admin credentials.
- Navigate to the [Azure Active Directory (Microsoft Entra ID) Users List](https://portal.azure.com/#view/Microsoft_AAD_UsersAndTenants/UserManagementMenuBlade/~/AllUsers)
- Click on the User Account that needs their MFA information changed / wiped
- On the left-hand navigation menu, click on "**Authentication Methods**" at the bottom
- Make adjustments to existing methods or click on "**+ Add Authentication Method**"
- Valid options generally are Phone Numbers, Email Addresses, and a "**Temporary Access Pass**"
- Save the changes by clicking the "**Add**" button, then have the user attempt to log in again using their MFA method configured

View File

@@ -0,0 +1,74 @@
**Purpose**: Gatus is a self-hosted, automated service health dashboard (status page) for monitoring the services in your lab.
## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.9"
services:
postgres:
image: postgres
restart: always
volumes:
      - /srv/containers/gatus/database:/var/lib/postgresql/data
ports:
- "5432:5432"
env_file:
- stack.env
networks:
docker_network:
ipv4_address: 192.168.5.9
healthcheck:
test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-postgres} -d ${POSTGRES_DB:-postgres}"]
interval: 10s
retries: 5
start_period: 30s
gatus:
image: twinproduction/gatus:latest
restart: always
ports:
- "8080:8080"
env_file:
- stack.env
volumes:
- /srv/containers/gatus/config:/config
depends_on:
postgres:
condition: service_healthy
dns:
- 192.168.3.25
- 192.168.3.26
networks:
docker_network:
ipv4_address: 192.168.5.8
networks:
docker_network:
external: true
```
```yaml title=".env"
POSTGRES_USER=gatus
POSTGRES_PASSWORD=SomethingSecure
POSTGRES_DB=gatus
```
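Gatus reads its endpoint definitions from `/config/config.yaml`. A minimal sketch, assuming the Postgres credentials above and one hypothetical endpoint; see the Gatus documentation for the full list of conditions and alerting providers:
```yaml title="config.yaml"
storage:
  type: postgres
  path: "postgres://gatus:SomethingSecure@192.168.5.9:5432/gatus?sslmode=disable"

endpoints:
  - name: Homepage
    url: "https://bunny-lab.io"
    interval: 60s
    conditions:
      - "[STATUS] == 200"
```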
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
status-bunny-lab:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: status-bunny-lab
rule: Host(`status.bunny-lab.io`)
middlewares:
- "auth-bunny-lab-io" # Referencing the Keycloak Server
services:
status-bunny-lab:
loadBalancer:
servers:
- url: http://192.168.5.8:8080
passHostHeader: true
```

View File

@@ -0,0 +1,101 @@
**Purpose**: Speedtest Tracker is a self-hosted application that monitors the performance and uptime of your internet connection over time.
[Detailed Configuration Reference](https://docs.speedtest-tracker.dev/getting-started/installation)
## Docker Configuration
```yaml title="docker-compose.yml"
services:
speedtest-tracker:
image: lscr.io/linuxserver/speedtest-tracker:latest
restart: unless-stopped
container_name: speedtest-tracker
ports:
- 8080:80
- 8443:443
environment:
- PUID=1000
- PGID=1000
- TZ=${TIMEZONE}
- ASSET_URL=${PUBLIC_FQDN}
- APP_TIMEZONE=${TIMEZONE}
- DISPLAY_TIMEZONE=${TIMEZONE}
- SPEEDTEST_SCHEDULE=*/15 * * * * # (1)
- SPEEDTEST_SERVERS=61622 # (3)
- APP_KEY=${BASE64_APPKEY} # (2)
- DB_CONNECTION=pgsql
- DB_HOST=db
- DB_PORT=5432
- DB_DATABASE=${DB_DATABASE}
- DB_USERNAME=${DB_USERNAME}
- DB_PASSWORD=${DB_PASSWORD}
volumes:
- /srv/containers/speedtest-tracker/config:/config
- /srv/containers/speedtest-tracker/custom-ssl-keys:/config/keys
depends_on:
- db
networks:
docker_network:
ipv4_address: 192.168.5.38
db:
image: postgres:17
restart: always
environment:
- POSTGRES_DB=${DB_DATABASE}
- POSTGRES_USER=${DB_USERNAME}
- POSTGRES_PASSWORD=${DB_PASSWORD}
- TZ=${TIMEZONE}
volumes:
- /srv/containers/speedtest-tracker/db:/var/lib/postgresql/data
healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USERNAME} -d ${DB_DATABASE}"]
interval: 5s
retries: 5
timeout: 5s
networks:
docker_network:
ipv4_address: 192.168.5.39
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
1. You can use [Crontab Guru](https://crontab.guru) to generate a cron expression to schedule automatic speedtests. e.g. `*/15 * * * *` runs a speedtest every 15 minutes.
2. You can generate a secure appkey with the following command: `echo -n 'base64:'; openssl rand -base64 32;` Copy the full output, including the `base64:` prefix, and paste it as your `APP_KEY` environment variable value.
3. This restricts the speedtest target to a specific speedtest server. In this example, it is a Missoula, MT speedtest server. You can get these codes from the yellow Speedtest button menu in the WebUI and then come back and redeploy the stack with the number entered here.
```yaml title=".env"
DB_PASSWORD=SecurePassword
DB_DATABASE=speedtest_tracker
DB_USERNAME=speedtest_tracker
TIMEZONE=America/Denver
PUBLIC_FQDN=https://speedtest.bunny-lab.io
BASE64_APPKEY=SECUREAPPKEY
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
speedtest-tracker:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: speedtest-tracker
rule: Host(`speedtest.bunny-lab.io`)
services:
speedtest-tracker:
loadBalancer:
servers:
- url: http://192.168.5.38:80
passHostHeader: true
```

View File

@@ -0,0 +1,33 @@
**Purpose**: Deploy Uptime Kuma uptime monitor to monitor services in the homelab and send notifications to various services.
```yaml title="docker-compose.yml"
version: '3'
services:
uptimekuma:
image: louislam/uptime-kuma
ports:
- 3001:3001
volumes:
- /mnt/uptimekuma:/app/data
- /var/run/docker.sock:/var/run/docker.sock
environment:
# Allow status page to exist within an iframe
- UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=1
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.uptime-kuma.rule=Host(`status.cyberstrawberry.net`)"
- "traefik.http.routers.uptime-kuma.entrypoints=websecure"
- "traefik.http.routers.uptime-kuma.tls.certresolver=letsencrypt"
- "traefik.http.services.uptime-kuma.loadbalancer.server.port=3001"
networks:
docker_network:
ipv4_address: 192.168.5.211
networks:
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```

View File

@@ -0,0 +1,36 @@
**Purpose**: ntfy (pronounced notify) is a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, and/or using a REST API. It's infinitely flexible, and 100% free software.
```yaml title="docker-compose.yml"
version: "2.1"
services:
ntfy:
image: binwiederhier/ntfy
container_name: ntfy
command:
- serve
environment:
- NTFY_ATTACHMENT_CACHE_DIR=/var/lib/ntfy/attachments
- NTFY_BASE_URL=https://ntfy.bunny-lab.io
- TZ=America/Denver # optional: Change to your desired timezone
#user: UID:GID # optional: Set custom user/group or uid/gid
volumes:
- /srv/containers/ntfy/cache:/var/cache/ntfy
- /srv/containers/ntfy/etc:/etc/ntfy
ports:
- 80:80
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.45
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```
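Once the service is up behind your reverse proxy, publishing a test notification is a single HTTP request (topics are created on first use; `test-topic` here is just an example name):
```sh
curl -d "Backup complete" https://ntfy.bunny-lab.io/test-topic
```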

View File

@@ -0,0 +1,119 @@
**Purpose**: The Collabora CODE Server is used by Nextcloud Office to open and edit documents and spreadsheets collaboratively. When Nextcloud is not deployed via [Nextcloud AIO](./nextcloud-aio.md) and is instead installed directly on the host rather than as a container, you may run into stability issues where the built-in Collabora CODE Server randomly breaks and stops allowing users to edit documents. If this happens, you can follow this document to stand up a dedicated Collabora CODE Server on the same host as your Nextcloud server.
!!! info "Assumptions"
- It is assumed that you are running an ACME Certificate Bot on your Nextcloud server to generate certificates for Nextcloud.
- It is also assumed that you are running Ubuntu Server 24.04.3 LTS. *This document does not outline the process for setting up an ACME Certificate Bot*.
    - It is lastly assumed that (until changes are made to allow such) this will only work for internal access; unless you port-forward port `9980`, Collabora will not function for public, internet-facing access.
### Install Docker and Configure Portainer
The first thing you need to do is install Docker then Portainer. You can do this by following the [Portainer Deployment](../../platforms/containerization/docker/deploy-portainer.md) documentation.
### Portainer Stack
```yaml title="docker-compose.yml"
name: app
services:
code:
image: collabora/code
container_name: collabora
restart: always
networks:
- collabora-net
environment:
- domain=${NEXTCLOUD_COLLABORA_URL}
- aliasgroup1=${NEXTCLOUD_COLLABORA_URL}
- username=${CODESERVER_ADMIN_USER} # Used to login @ https://cloud.bunny-lab.io:9980/browser/dist/admin/admin.html
- password=${CODESERVER_ADMIN_PASSWORD} # Used to login @ https://cloud.bunny-lab.io:9980/browser/dist/admin/admin.html
# CODE speaks HTTP internally, TLS is terminated at nginx
- extra_params=--o:ssl.enable=false --o:ssl.termination=true
# no direct port mapping; only reachable via proxy
collabora-proxy:
image: nginx:alpine
container_name: collabora-proxy
restart: always
depends_on:
- code
networks:
- collabora-net
ports:
# Host port 9980 -> container port 443 (HTTPS)
- "9980:443"
volumes:
# Our nginx vhost config (this exists outside of the container anywhere you want to put it, by default "/opt/collabora/nginx.conf")
- /opt/collabora/nginx.conf:/etc/nginx/conf.d/default.conf:ro
# Mount the entire letsencrypt tree so symlinks keep working
- /etc/letsencrypt:/etc/letsencrypt:ro
networks:
collabora-net:
driver: bridge
```
```yaml title=".env"
NEXTCLOUD_COLLABORA_URL=cloud\\.bunny-lab\\.io
CODESERVER_ADMIN_USER=admin
CODESERVER_ADMIN_PASSWORD=ChangeThisPassword
```
## NGINX Reverse Proxy Configuration
This NGINX container terminates TLS for Collabora using the host's existing Let's Encrypt certificates and proxies WebSocket traffic through to the CODE container, using the configuration outlined below.
```yaml title="/opt/collabora/nginx.conf"
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 443 ssl;
server_name cloud.bunny-lab.io;
ssl_certificate /etc/letsencrypt/live/cloud.bunny-lab.io/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/cloud.bunny-lab.io/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
# Main proxy to CODE
location / {
proxy_pass http://collabora:9980;
# Required for WebSockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# Standard headers
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_read_timeout 36000;
proxy_connect_timeout 36000;
proxy_send_timeout 36000;
proxy_buffering off;
proxy_request_buffering off;
}
}
```
### Configuring Nextcloud Office
Now that the Collabora CODE Server was deployed and instructed to use the existing LetsEncrypt SSL Certificates located in `/etc/letsencrypt/live/cloud.bunny-lab.io/` on the Ubuntu host, we can proceed to reconfiguring Nextcloud to use this new server.
- Login to the Nextcloud server as an administrator
- Navigate to "**Apps**"
- Ensure that any existing ONLYOFFICE or Built-in Collabora CODE Server apps are disabled / removed from Nextcloud itself
- Navigate to "**Administration Settings**"
- In the left-hand "**Administration**" sidebar, look for something like "**Office**" or "**Nextcloud Office**" and click on it
- Check the radio box that says "**Use your own server**"
- For the URL, enter `https://cloud.bunny-lab.io:9980` and uncheck the "**Disable certificate verification (insecure)**" checkbox, then click the "**Save**" button.
!!! success "Collabora Online Server is Reachable"
At this point, you should see a green banner at the top of the Nextcloud webpage stating something like "**Collabora Online Development Edition 25.04.7.2 a246f9ab3c**". This would indicate that Nextcloud should be able to successfully talk with the Collabora CODE Server and that you can now proceed to verify that everything is working by trying to create and edit some documents and spreadsheets.
### Administrating Collabora CODE Server
As mentioned earlier, the admin console lets you manage Collabora CODE Server sessions, view useful metrics about who is editing documents, and terminate sessions if they get stuck. You can log into the management web interface at https://cloud.bunny-lab.io:9980/browser/dist/admin/admin.html using the `CODESERVER_ADMIN_USER` and `CODESERVER_ADMIN_PASSWORD` credentials.
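To confirm that CODE is answering through the NGINX proxy, you can query the standard WOPI discovery endpoint (run from any machine that can reach port `9980`):
```sh
curl -sk https://cloud.bunny-lab.io:9980/hosting/discovery | head
# Expect an XML document beginning with <wopi-discovery>
```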

View File

@@ -0,0 +1,161 @@
**Purpose**:
Deploy a Nextcloud AIO Server. [Official Nextcloud All-in-One Documentation](https://github.com/nextcloud/all-in-one).
This version of Nextcloud consists of 12 containers that are centrally managed by a single "master" container. It is more orchestrated and automates the implementation of Nextcloud Office, Nextcloud Talk, and other integrations / apps.
!!! note "Assumptions"
It is assumed you are running Rocky Linux 9.3.
    It is also assumed that you are using Traefik as your reverse proxy in front of Nextcloud AIO. If it isn't, refer to the [reverse proxy documentation](https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md) to configure other reverse proxies such as NGINX.
=== "Simplified Docker-Compose.yml"
```yaml title="docker-compose.yml"
services:
nextcloud-aio-mastercontainer:
image: nextcloud/all-in-one:latest
init: true
restart: always
container_name: nextcloud-aio-mastercontainer
volumes:
- nextcloud_aio_mastercontainer:/mnt/docker-aio-config
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- 8080:8080
dns:
- 1.1.1.1
- 1.0.0.1
environment:
- APACHE_PORT=11000
- APACHE_IP_BINDING=0.0.0.0
- NEXTCLOUD_MEMORY_LIMIT=4096M
- NEXTCLOUD_ADDITIONAL_APKS=imagemagick
- NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick
volumes:
nextcloud_aio_mastercontainer:
name: nextcloud_aio_mastercontainer
```
=== "Extended Docker-Compose.yml"
```yaml title="docker-compose.yml"
services:
nextcloud-aio-mastercontainer:
image: nextcloud/all-in-one:latest
init: true
restart: always
container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
volumes:
- nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
- /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
ports:
# - 80:80 # Can be removed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
- 8080:8080
# - 8443:8443 # Can be removed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
dns:
- 1.1.1.1
- 1.0.0.1
environment: # Is needed when using any of the options below
# AIO_DISABLE_BACKUP_SECTION: false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
- APACHE_PORT=11000 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
- APACHE_IP_BINDING=0.0.0.0 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
# BORG_RETENTION_POLICY: --keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
# COLLABORA_SECCOMP_DISABLED: false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
# NEXTCLOUD_DATADIR: /mnt/ncdata # Allows to set the host directory for Nextcloud's datadir. ⚠️⚠️⚠️ Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
# NEXTCLOUD_MOUNT: /mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
# NEXTCLOUD_UPLOAD_LIMIT: 10G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
# NEXTCLOUD_MAX_TIME: 3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
- NEXTCLOUD_MEMORY_LIMIT=4096M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
# NEXTCLOUD_TRUSTED_CACERTS_DIR: /path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nexcloud container (Useful e.g. for LDAPS) See See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
# NEXTCLOUD_STARTUP_APPS="deck twofactor_totp tasks calendar contacts notes" # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
- NEXTCLOUD_ADDITIONAL_APKS=imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
- NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS=imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
# NEXTCLOUD_ENABLE_DRI_DEVICE: true # This allows to enable the /dev/dri device in the Nextcloud container. ⚠️⚠️⚠️ Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-transcoding-for-nextcloud
# NEXTCLOUD_KEEP_DISABLED_APPS: false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
# TALK_PORT: 3478 # This allows to adjust the port that the talk container is using. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
# WATCHTOWER_DOCKER_SOCKET_PATH: /var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macos it needs to be '/var/run/docker.sock'
# networks: # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
# - nextcloud-aio # Is needed when you want to create the nextcloud-aio network with ipv6-support using this file, see the network config at the bottom of the file
# security_opt: ["label:disable"] # Is needed when using SELinux
# # Optional: Caddy reverse proxy. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
# # You can find further examples here: https://github.com/nextcloud/all-in-one/discussions/588
# caddy:
# image: caddy:alpine
# restart: always
# container_name: caddy
# volumes:
# - ./Caddyfile:/etc/caddy/Caddyfile
# - ./certs:/certs
# - ./config:/config
# - ./data:/data
# - ./sites:/srv
# network_mode: "host"
volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
nextcloud_aio_mastercontainer:
name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work
# # Optional: If you need ipv6, follow step 1 and 2 of https://github.com/nextcloud/all-in-one/blob/main/docker-ipv6-support.md first and then uncomment the below config in order to activate ipv6 for the internal nextcloud-aio network.
# # Please make sure to uncomment also the networking lines of the mastercontainer above in order to actually create the network with docker-compose
# networks:
# nextcloud-aio:
# name: nextcloud-aio # This line is not allowed to be changed as otherwise the created network will not be used by the other containers of AIO
# driver: bridge
# enable_ipv6: true
# ipam:
# driver: default
# config:
# - subnet: fd12:3456:789a:2::/64 # IPv6 subnet to use
```
## Traefik Reverse Proxy Configuration
```yaml title="cloud.bunny-lab.io.yml"
http:
routers:
nextcloud-aio:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: nextcloud-aio
middlewares:
- nextcloud-chain
rule: Host(`cloud.bunny-lab.io`)
services:
nextcloud-aio:
loadBalancer:
servers:
- url: http://192.168.3.29:11000
middlewares:
nextcloud-secure-headers:
headers:
hostsProxyHeaders:
- "X-Forwarded-Host"
referrerPolicy: "same-origin"
https-redirect:
redirectscheme:
scheme: https
nextcloud-chain:
chain:
middlewares:
# - ... (e.g. rate limiting middleware)
- https-redirect
- nextcloud-secure-headers
```
## Initial Setup
You will need to navigate to https://192.168.3.29:8080 to access the Nextcloud AIO configuration tool. This is where you will get the AIO password, encryption passphrase for backups, and be able to configure the timezone, among other things.
### Domain Validation
It will ask you to provide a domain name. In this example, we will use `cloud.bunny-lab.io`. Assuming you have configured the Traefik reverse proxy as seen above, when you press the "**Validate Domain**" button, Nextcloud will spin up a container named something similar to `domain-validator`, which starts a web server listening on https://cloud.bunny-lab.io. If you visit that address, it should give you something similar to `f940935260b41691ac2246ba9e7823a301a1605ae8a023ee`. This confirms that the domain validation will succeed.
!!! warning "Domain Validation Failing"
    If visiting the web server at https://cloud.bunny-lab.io results in an error 502 or 404, try destroying the domain validation container in Portainer / Docker, then click the validation button in the Nextcloud AIO WebUI to spin up a new container automatically, at which point it should be functional.
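A quick way to check the validator from the command line before clicking the button again (the token below is just the example value from above; yours will differ):
```sh
curl -s https://cloud.bunny-lab.io
# Expect a long hexadecimal token, e.g. f940935260b41691ac2246ba9e7823a301a1605ae8a023ee
```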
### Configuring Additional Packages
At this point, the rest of the setup is fairly straightforward. You just check every checkbox for the apps you want to install automatically, and be patient while Nextcloud deploys about 11 containers. You can track the progress more accurately if you log into Portainer and watch the container listing and logs to follow-along until every container reports "**Healthy**" indicating everything is ready, then press the "**Refresh**" button on the Nextcloud AIO WebUI to confirm it's ready to be used.

View File

@@ -0,0 +1,64 @@
**Purpose**: Deploy a Nextcloud and PostgreSQL database together.
```yaml title="docker-compose.yml"
version: "2.1"
services:
app:
image: nextcloud:apache
labels:
- "traefik.enable=true"
- "traefik.http.routers.nextcloud.rule=Host(`files.bunny-lab.io`)"
- "traefik.http.routers.nextcloud.entrypoints=websecure"
- "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
- "traefik.http.services.nextcloud.loadbalancer.server.port=80"
environment:
- TZ=${TZ}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_HOST=${POSTGRES_HOST}
- OVERWRITEPROTOCOL=https
- NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER}
- NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
- NEXTCLOUD_TRUSTED_DOMAINS=${NEXTCLOUD_TRUSTED_DOMAINS}
volumes:
- /srv/containers/nextcloud/html:/var/www/html
ports:
- 443:443
- 80:80
restart: always
depends_on:
- db
networks:
docker_network:
ipv4_address: 192.168.5.17
db:
image: postgres:12-alpine
environment:
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- /srv/containers/nextcloud/db:/var/lib/postgresql/data
ports:
- 5432:5432
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.18
networks:
docker_network:
external: true
```
```yaml title=".env"
TZ=America/Denver
POSTGRES_PASSWORD=SomeSecurePassword
POSTGRES_USER=ncadmin
POSTGRES_HOST=192.168.5.18
POSTGRES_DB=nextcloud
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=SomeSuperSecurePassword
NEXTCLOUD_TRUSTED_DOMAINS=cloud.bunny-lab.io
```
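After the stack settles, a quick sanity check from the Docker host confirms Nextcloud finished installing (this assumes the service name `app` from the compose file above):
```sh
docker compose exec -u www-data app php occ status
# Expect "installed: true" along with the Nextcloud version
```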

View File

@@ -0,0 +1,63 @@
**Purpose**: ONLYOFFICE offers a secure online office suite highly compatible with MS Office formats. Generally used with Nextcloud to edit documents directly within the web browser.
```yaml title="docker-compose.yml"
version: '3'
services:
app:
image: onlyoffice/documentserver-ee
ports:
- 80:80
- 443:443
volumes:
- /srv/containers/onlyoffice/DocumentServer/logs:/var/log/onlyoffice
- /srv/containers/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data
- /srv/containers/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice
- /srv/containers/onlyoffice/DocumentServer/db:/var/lib/postgresql
- /srv/containers/onlyoffice/DocumentServer/fonts:/usr/share/fonts/truetype/custom
- /srv/containers/onlyoffice/DocumentServer/forgotten:/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten
- /srv/containers/onlyoffice/DocumentServer/rabbitmq:/var/lib/rabbitmq
- /srv/containers/onlyoffice/DocumentServer/redis:/var/lib/redis
labels:
- "traefik.enable=true"
- "traefik.http.routers.cyberstrawberry-onlyoffice.rule=Host(`office.cyberstrawberry.net`)"
- "traefik.http.routers.cyberstrawberry-onlyoffice.entrypoints=websecure"
- "traefik.http.routers.cyberstrawberry-onlyoffice.tls.certresolver=myresolver"
- "traefik.http.services.cyberstrawberry-onlyoffice.loadbalancer.server.port=80"
- "traefik.http.routers.cyberstrawberry-onlyoffice.middlewares=onlyoffice-headers"
- "traefik.http.middlewares.onlyoffice-headers.headers.customrequestheaders.X-Forwarded-Proto=https"
#- "traefik.http.middlewares.onlyoffice-headers.headers.accessControlAllowOrigin=*"
environment:
- JWT_ENABLED=true
- JWT_SECRET=REDACTED #SET THIS TO SOMETHING SECURE
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.143
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```
!!! tip
    If you wish to use this in a non-commercial homelab environment without limits, [this script](https://wiki.muwahhid.ru/ru/Unraid/Docker/Onlyoffice-Document-Server) runs an endless trial without functionality limits.
    ```
    docker stop office-document-server-ee
    docker rm office-document-server-ee
    rm -r /mnt/user/appdata/onlyoffice/DocumentServer
    sleep 5
    <USE A PORTAINER WEBHOOK TO RECREATE THE CONTAINER OR REFERENCE THE DOCKER RUN METHOD BELOW>
    ```
    Docker Run Method:
    ```
    docker run -d --name='office-document-server-ee' --net='bridge' -e TZ="Europe/Moscow" -e HOST_OS="Unraid" -e 'JWT_ENABLED'='true' -e 'JWT_SECRET'='mySecret' -p '8082:80/tcp' -p '4432:443/tcp' -v '/mnt/user/appdata/onlyoffice/DocumentServer/logs':'/var/log/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/data':'/var/www/onlyoffice/Data':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/lib':'/var/lib/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/db':'/var/lib/postgresql':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/fonts':'/usr/share/fonts/truetype/custom':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/forgotten':'/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/rabbitmq':'/var/lib/rabbitmq':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/redis':'/var/lib/redis':'rw' 'onlyoffice/documentserver-ee'
    ```

View File

@@ -0,0 +1,59 @@
**Purpose**: This is a powerful, locally hosted, web-based PDF manipulation tool, deployed via Docker, that allows you to perform various operations on PDF files, such as splitting, merging, converting, reorganizing, adding images, rotating, compressing, and more. This locally hosted web application started as a 100% ChatGPT-made application and has evolved to include a wide range of features to handle all your PDF needs.
## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.8"
services:
app:
image: frooodle/s-pdf:latest
container_name: stirling-pdf
environment:
- TZ=America/Denver
- DOCKER_ENABLE_SECURITY=false
volumes:
- /srv/containers/stirling-pdf/datastore:/datastore
- /srv/containers/stirling-pdf/trainingData:/usr/share/tesseract-ocr/5/tessdata #Required for extra OCR languages
- /srv/containers/stirling-pdf/extraConfigs:/configs
- /srv/containers/stirling-pdf/customFiles:/customFiles/
- /srv/containers/stirling-pdf/logs:/logs/
ports:
- 8080:8080
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.54
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
N/A
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
stirling-pdf:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: stirling-pdf
rule: Host(`pdf.bunny-lab.io`)
services:
stirling-pdf:
loadBalancer:
servers:
- url: http://192.168.5.54:8080
passHostHeader: true
```

View File

@@ -0,0 +1,49 @@
**Purpose**: Build your personal knowledge base with [Trilium Notes](https://github.com/zadam/trilium/tree/master).
```yaml title="docker-compose.yml"
version: '2.1'
services:
trilium:
image: zadam/trilium
restart: always
environment:
- TRILIUM_DATA_DIR=/home/node/trilium-data
ports:
- "8080:8080"
volumes:
- /srv/containers/trilium:/home/node/trilium-data
networks:
docker_network:
ipv4_address: 192.168.5.11
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
N/A
```
## Traefik Reverse Proxy Configuration
```yaml title="notes.bunny-lab.io.yml"
http:
routers:
notes:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: notes
rule: Host(`notes.bunny-lab.io`)
services:
notes:
loadBalancer:
servers:
- url: http://192.168.5.11:8080
passHostHeader: true
```

View File

@@ -0,0 +1,49 @@
**Purpose**: At its core, WordPress is the simplest, most popular way to create your own website or blog. In fact, WordPress powers over 43.3% of all the websites on the Internet; more than two in five websites that you visit are likely powered by WordPress.
```yaml title="docker-compose.yml"
version: '3.7'
services:
wordpress:
image: wordpress:latest
restart: always
ports:
- 80:80
environment:
WORDPRESS_DB_HOST: 192.168.5.216
WORDPRESS_DB_USER: wordpress
WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
WORDPRESS_DB_NAME: wordpress
volumes:
- /srv/Containers/WordPress/Server:/var/www/html
networks:
docker_network:
ipv4_address: 192.168.5.217
depends_on:
- db
db:
image: lscr.io/linuxserver/mariadb
restart: always
ports:
- 3306:3306
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_DATABASE: wordpress
MYSQL_USER: wordpress
REMOTE_SQL: http://URL1/your.sql,https://URL2/your.sql
volumes:
- /srv/Containers/WordPress/DB:/config
networks:
docker_network:
ipv4_address: 192.168.5.216
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
WORDPRESS_DB_PASSWORD=SecurePassword101
MYSQL_ROOT_PASSWORD=SecurePassword202
```

View File

@@ -0,0 +1,150 @@
**Purpose**: HTML5-based Remote Access Broker for SSH, RDP, and VNC. Useful for remote access into an environment.
### Docker Compose Stack
=== "docker-compose.yml"
```yaml
version: '3'
services:
app:
image: jasonbean/guacamole
ports:
- 8080:8080
volumes:
- /srv/containers/guacamole:/config
environment:
- OPT_MYSQL=Y
- OPT_MYSQL_EXTENSION=N
- OPT_SQLSERVER=N
- OPT_LDAP=N
- OPT_DUO=N
- OPT_CAS=N
- OPT_TOTP=Y # (1)
- OPT_QUICKCONNECT=N
- OPT_HEADER=N
- OPT_SAML=N
- PUID=99
- PGID=100
- TZ=America/Denver # (2)
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.43
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
1. Enable this if you want multi-factor authentication enabled. Must be set BEFORE the container is initially deployed. Cannot be added retroactively.
2. Set to your own timezone.
=== "docker-compose.yml (OpenID / Keycloak Integration)"
```yaml
version: '3'
services:
app:
image: jasonbean/guacamole
ports:
- 8080:8080
volumes:
- /srv/containers/apache-guacamole:/config
environment:
- OPT_MYSQL=Y
- OPT_MYSQL_EXTENSION=N
- OPT_SQLSERVER=N
- OPT_LDAP=N
- OPT_DUO=N
- OPT_CAS=N
        - OPT_TOTP=N # (1)
- OPT_QUICKCONNECT=N
- OPT_HEADER=N
- OPT_SAML=N
- OPT_OIDC=Y # Enable OpenID Connect
- OIDC_ISSUER=${OPENID_REALM_URL} # Your Keycloak realm URL
- OIDC_CLIENT_ID=${OPENID_CLIENT_ID} # Client ID for Guacamole in Keycloak
- OIDC_CLIENT_SECRET=${OPENID_CLIENT_SECRET} # Client Secret for Guacamole in Keycloak
- OIDC_REDIRECT_URI=${OPENID_REDIRECT_URI} # Redirect URI for Guacamole
- PUID=99
- PGID=100
        - TZ=America/Denver # (2)
restart: unless-stopped
networks:
docker_network:
ipv4_address: 192.168.5.43
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
1. You cannot enable TOTP / Multi-factor authentication if you have OpenID configured. This is just a known issue.
2. Set to your own timezone.
### Environment Variables
=== ".env"
``` sh
N/A
```
=== ".env (OpenID / Keycloak Integration)"
```yaml
OPENID_REALM_URL=https://auth.bunny-lab.io/realms/master
OPENID_CLIENT_ID=apache-guacamole
OPENID_CLIENT_SECRET=<YOUR-CLIENT-ID-SECRET>
OPENID_REDIRECT_URI=http://remote.bunny-lab.io
```
## Reverse Proxy Configuration
=== "Traefik"
``` yaml
http:
routers:
apache-guacamole:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: apache-guacamole
rule: Host(`remote.bunny-lab.io`)
services:
apache-guacamole:
loadBalancer:
servers:
- url: http://192.168.5.43:8080
passHostHeader: true
```
=== "NGINX"
```yaml
server {
listen 443 ssl;
server_name remote.bunny-lab.io;
client_max_body_size 0;
ssl on;
location / {
proxy_pass http://192.168.5.43:8080;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $http_connection;
access_log off;
}
}
```

View File

@@ -0,0 +1,104 @@
**Purpose**: Sometimes you just want an instance of Firefox running on an Alpine Linux container, that has persistence (Extensions, bookmarks, history, etc) outside of the container (with bind-mapped folders). This is useful for a number of reasons, but insecure by default, so you have to protect it behind something like a [Keycloak Server](../authentication/keycloak/deployment.md) so it is not misused.
## Keycloak Authentication Sequence
``` mermaid
sequenceDiagram
participant User
participant Traefik as Traefik Reverse Proxy
participant Keycloak
participant RockyLinux as Rocky Linux VM
participant FirewallD as FirewallD
participant Alpine as Alpine Container
User->>Traefik: Access https://work-environment.bunny-lab.io
Traefik->>Keycloak: Redirect to Authenticate against Work Realm
User->>Keycloak: Authenticate
Keycloak->>User: Authorization Cookie Stored on Internet Browser
User->>Traefik: Pass Authorization Cookie to Traefik
Traefik->>RockyLinux: Traefik Forwards Traffic to Rocky Linux VM
RockyLinux->>FirewallD: Traffic Passes Local Firewall
FirewallD->>RockyLinux: Filter traffic (Port 5800)
FirewallD->>Alpine: Allow Traffic from Traefik
Alpine->>User: WebUI Access to Firefox Work Environment Granted
```
## Docker Configuration
```yaml title="docker-compose.yml"
version: '3'
services:
firefox:
image: jlesage/firefox # Docker image for Firefox
environment:
- TZ=America/Denver # Timezone setting
- DARK_MODE=1 # Enable dark mode
- WEB_AUDIO=1 # Enable web audio
- KEEP_APP_RUNNING=1 # Keep the application running
ports:
- "5800:5800" # Port mapping for VNC WebUI
volumes:
- /srv/containers/firefox:/config:rw # Persistent storage for configuration
restart: always # Always restart the container in case of failure
network_mode: host # Use the host network
```
```yaml title=".env"
N/A
```
## Local Firewall Hardening
It is important, because this browser allows anyone who can reach it to use it, to lock it down so that only specifically-allowed devices (in this case, the Traefik Reverse Proxy) can access the SSH port and port 5800. This ensures that only the proxy can communicate with Firefox's container, keeping it securely protected behind Keycloak's middleware in Traefik.
These rules will drop all traffic by default, allow port 22, and restrict access to port 5800.
``` sh
# Set the default zone to drop
sudo firewall-cmd --set-default-zone=drop
# Create a new zone named traefik-proxy
sudo firewall-cmd --permanent --new-zone=traefik-proxy
# Allow traffic to port 5800 only from 192.168.5.29 in the traefik-proxy zone
sudo firewall-cmd --permanent --zone=traefik-proxy --add-source=192.168.5.29
sudo firewall-cmd --permanent --zone=traefik-proxy --add-port=5800/tcp
# Allow SSH traffic on port 22 from any IP in the drop zone
sudo firewall-cmd --permanent --zone=drop --add-service=ssh
# Reload FirewallD to apply the changes
sudo firewall-cmd --reload
```
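You can verify the zones took effect with a couple of read-only queries:
```sh
sudo firewall-cmd --get-default-zone
# Expect: drop
sudo firewall-cmd --zone=traefik-proxy --list-all
# Expect 192.168.5.29 as a source and 5800/tcp among the ports
```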
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
work-environment:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: work-environment
rule: Host(`work-environment.bunny-lab.io`)
middlewares:
- work-environment # Referencing the Keycloak Server
services:
work-environment:
loadBalancer:
servers:
- url: http://192.168.5.4:5800
passHostHeader: true
# # Adding forwardingTimeouts to set the send and read timeouts to 1 hour (3600 seconds)
# forwardingTimeouts:
# dialTimeout: "3600s"
# responseHeaderTimeout: "3600s"
```
## Firefox Special Configurations
Due to the nature of how this is deployed, you need to make some additional configurations to the Firefox settings after-the-fact. Some of this could be automated with environment variables at deployment time, but for now will be handled manually.
- **Install Power Tabs Extension**: This extension is useful for keeping things organized.
- **Install Merge All Windows Extension**: At times, you may misclick somewhere in the Firefox environment causing Firefox to open a new instance / window losing all of your tabs, and because there is no window manager, there is no way to alt+tab or switch between the instances of Firefox, effectively breaking your current session forcing you to re-open tabs. With this extension, you can merge all of the windows, collapsing them into one window, resolving the issue.
- **Configure New Tab behavior**: If a new tab opens in a new window, it will absolutely throw everything into disarray, that is why all hyperlinks will be forced to open in a new tab instead of a new window. You can do this by navigating to `about:config` and setting the variable `browser.link.open_newwindow.restriction` to a value of `0`. [Original Reference Documentation](https://support.mozilla.org/en-US/questions/1066799)

View File

@@ -0,0 +1,33 @@
**Purpose**:
Tactical RMM is a remote monitoring & management tool built with Django, Vue and Golang. [Official Documentation](https://docs.tacticalrmm.com/install_server/).
!!! note "Requirements"
    Ubuntu Server 22.04 LTS, 8GB RAM, 64GB Storage.
## Deployment Script
```sh
# Check for Updates
sudo apt update
sudo apt install -y wget curl sudo ufw
sudo apt -y upgrade
# Create TacticalRMM User
sudo useradd -m -G sudo -s /bin/bash tactical
sudo passwd tactical
# Configure Firewall Rules
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow https
sudo ufw allow ssh
echo "y" | sudo ufw enable
sudo ufw reload
# Switch to TacticalRMM User
sudo su - tactical
# Deploy TacticalRMM via Deployment Script
wget https://raw.githubusercontent.com/amidaware/tacticalrmm/master/install.sh
chmod +x install.sh
./install.sh
```

View File

@@ -0,0 +1,59 @@
**Purpose**: Detect website content changes and perform meaningful actions - trigger notifications via Discord, Email, Slack, Telegram, API calls and many more.
## Docker Configuration
```yaml title="docker-compose.yml"
version: "3.8"
services:
app:
image: dgtlmoon/changedetection.io
container_name: changedetection.io
environment:
- TZ=America/Denver
volumes:
- /srv/containers/changedetection/datastore:/datastore
ports:
- 5000:5000
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.changedetection.rule=Host(`changedetection.bunny-lab.io`)"
- "traefik.http.routers.changedetection.entrypoints=websecure"
- "traefik.http.routers.changedetection.tls.certresolver=letsencrypt"
- "traefik.http.services.changedetection.loadbalancer.server.port=5000"
networks:
docker_network:
ipv4_address: 192.168.5.49
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
N/A
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
routers:
changedetection:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
http2:
service: changedetection
rule: Host(`changedetection.bunny-lab.io`)
services:
changedetection:
loadBalancer:
servers:
- url: http://192.168.5.49:5000
passHostHeader: true
```

View File

@@ -0,0 +1,28 @@
**Purpose**: The Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis.
```yaml title="docker-compose.yml"
version: "3.8"
services:
app:
image: mpepping/cyberchef:latest
container_name: cyberchef
environment:
- TZ=America/Denver
ports:
- 8000:8000
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.55
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
```yaml title=".env"
N/A
```

View File

@@ -0,0 +1,26 @@
**Purpose**: Collection of handy online tools for developers, with great UX.
```yaml title="docker-compose.yml"
version: "3"
services:
server:
image: corentinth/it-tools:latest
container_name: it-tools
environment:
- TZ=America/Denver
restart: always
ports:
- "80:80"
networks:
docker_network:
ipv4_address: 192.168.5.16
networks:
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```

View File

@@ -0,0 +1,82 @@
**Purpose**: An application to securely communicate passwords over the web. Passwords automatically expire after a certain number of views and/or time has passed. Track who, what and when.
## Docker Configuration
```yaml title="docker-compose.yml"
version: '3'
services:
passwordpusher:
image: docker.io/pglombardo/pwpush:release
expose:
- 5100
restart: always
environment:
      # Read the documentation on how to generate a master key, then put it below
- PWPUSH_MASTER_KEY=${PWPUSH_MASTER_KEY}
networks:
docker_network:
ipv4_address: 192.168.5.170
labels:
- "traefik.enable=true"
- "traefik.http.routers.passwordpusher.rule=Host(`temp.bunny-lab.io`)"
- "traefik.http.routers.passwordpusher.entrypoints=websecure"
- "traefik.http.routers.passwordpusher.tls.certresolver=letsencrypt"
- "traefik.http.services.passwordpusher.loadbalancer.server.port=5100"
networks:
docker_network:
external: true
```
```yaml title=".env"
PWPUSH_MASTER_KEY=<PASSWORD>
PWP__BRAND__TITLE="Bunny Lab"
PWP__BRAND__SHOW_FOOTER_MENU=false
PWP__BRAND__LIGHT_LOGO="https://cloud.bunny-lab.io/apps/theming/image/logo?v=22"
PWP__BRAND__DARK_LOGO="https://cloud.bunny-lab.io/apps/theming/image/logo?v=22"
PWP__BRAND__TAGLINE="Secure Temporary Information Exchange"
PWP__MAIL__RAISE_DELIVERY_ERRORS=true
PWP__MAIL__SMTP_ADDRESS=mail.bunny-lab.io
PWP__MAIL__SMTP_PORT=587
PWP__MAIL__SMTP_USER_NAME=noreply@bunny-lab.io
PWP__MAIL__SMTP_PASSWORD=<SMTP_CREDENTIALS>
PWP__MAIL__SMTP_AUTHENTICATION=plain
PWP__MAIL__SMTP_STARTTLS=true
PWP__MAIL__SMTP_OPEN_TIMEOUT=10
PWP__MAIL__SMTP_READ_TIMEOUT=10
PWP__HOST_DOMAIN=bunny-lab.io
PWP__HOST_PROTOCOL=https
PWP__MAIL__MAILER_SENDER='"noreply" <noreply@bunny-lab.io>'
PWP__SHOW_VERSION=false
PWP__ENABLE_FILE_PUSHES=true
PWP__FILES__EXPIRE_AFTER_DAYS_DEFAULT=2
PWP__FILES__EXPIRE_AFTER_DAYS_MAX=7
PWP__FILES__EXPIRE_AFTER_VIEWS_DEFAULT=5
PWP__FILES__EXPIRE_AFTER_VIEWS_MAX=10
PWP__FILES__RETRIEVAL_STEP_DEFAULT=true
PWP__ENABLE_URL_PUSHES=true
PWP__LOG_LEVEL=info
```
!!! note "PWPUSH_MASTER_KEY"
Generate a master key by visiting the [official online key generator](https://pwpush.com/en/pages/generate_key).
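Note that the compose file above only passes `PWPUSH_MASTER_KEY` into the container; the remaining `PWP__*` settings in the .env will only take effect if they are also passed through. A minimal sketch, assuming a Portainer stack where the variables live in `stack.env`:
```yaml
  passwordpusher:
    env_file:
      - stack.env
```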
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
password-pusher:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: password-pusher
rule: Host(`temp.bunny-lab.io`)
services:
password-pusher:
loadBalancer:
servers:
- url: http://192.168.5.170:5100
passHostHeader: true
```

View File

@@ -0,0 +1,51 @@
**Purpose**: Deploys a SearX Meta Search Engine Server
## Docker Configuration
```yaml title="docker-compose.yml"
version: '3'
services:
searx:
image: searx/searx:latest
ports:
- 8080:8080
volumes:
- /srv/containers/searx/:/etc/searx
restart: always
labels:
- "traefik.enable=true"
- "traefik.http.routers.searx.rule=Host(`searx.bunny-lab.io`)"
- "traefik.http.routers.searx.entrypoints=websecure"
- "traefik.http.routers.searx.tls.certresolver=letsencrypt"
- "traefik.http.services.searx.loadbalancer.server.port=8080"
networks:
docker_network:
ipv4_address: 192.168.5.124
networks:
docker_network:
external: true
```
```yaml title=".env"
Not Applicable
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
searx:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: searx
rule: Host(`searx.bunny-lab.io`)
services:
searx:
loadBalancer:
servers:
- url: http://192.168.5.124:8080
passHostHeader: true
```

View File

@@ -0,0 +1,62 @@
**Purpose**: Unofficial Bitwarden compatible server written in Rust, formerly known as bitwarden_rs.
```yaml title="docker-compose.yml"
---
version: "2.1"
services:
vaultwarden:
image: vaultwarden/server:latest
container_name: vaultwarden
environment:
- TZ=America/Denver
- INVITATIONS_ALLOWED=false
- SIGNUPS_ALLOWED=false
- WEBSOCKET_ENABLED=false
- ADMIN_TOKEN=REDACTED #PUT A REALLY REALLY REALLY SECURE PASSWORD HERE
volumes:
- /srv/containers/vaultwarden:/data
ports:
- 80:80
restart: always
networks:
docker_network:
ipv4_address: 192.168.5.15
labels:
- "traefik.enable=true"
- "traefik.http.routers.bunny-vaultwarden.rule=Host(`vault.bunny-lab.io`)"
- "traefik.http.routers.bunny-vaultwarden.entrypoints=websecure"
- "traefik.http.routers.bunny-vaultwarden.tls.certresolver=letsencrypt"
- "traefik.http.services.bunny-vaultwarden.loadbalancer.server.port=80"
networks:
default:
external:
name: docker_network
docker_network:
external: true
```
!!! warning "ADMIN_TOKEN"
It is **CRITICAL** that you never share the `ADMIN_TOKEN` with anyone. It allows you to log into the instance at https://vault.example.com/admin to add users, delete users, make changes system wide, etc.
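Rather than storing a plaintext secret, recent Vaultwarden releases (1.28+) can generate an Argon2 hash to use as the `ADMIN_TOKEN`; a quick sketch using the image itself:
```sh
docker run --rm -it vaultwarden/server /vaultwarden hash
# Prompts for a password, then prints an '$argon2id$...' string to use as ADMIN_TOKEN
```
If you paste the resulting hash into a compose file, remember to escape each `$` as `$$` so Docker Compose does not try to interpolate it.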
```yaml title=".env"
Not Applicable
```
## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
```yaml
http:
routers:
bunny-vaultwarden:
entryPoints:
- websecure
tls:
certResolver: letsencrypt
service: vaultwarden
rule: Host(`vault.bunny-lab.io`)
services:
vaultwarden:
loadBalancer:
servers:
- url: http://192.168.5.15:80
passHostHeader: true
```