Additional Doc Restructure
This commit is contained in:
41
infrastructure/networking/controllers/unifi-controller.md
Normal file
@@ -0,0 +1,41 @@
**Purpose**: The UniFi® Controller is a wireless network management software solution from Ubiquiti Networks™. It allows you to manage multiple wireless networks using a web browser.

```yaml title="docker-compose.yml"
version: "2.1"
services:
  controller:
    image: lscr.io/linuxserver/unifi-controller:latest
    container_name: controller
    environment:
      - PUID=1000
      - PGID=1000
      #- MEM_LIMIT=1024 #optional
      #- MEM_STARTUP=1024 #optional
    volumes:
      - /srv/containers/unifi-controller:/config
    ports:
      - 8443:8443
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
      - 1900:1900/udp #optional
      - 8843:8843 #optional
      - 8880:8880 #optional
      - 6789:6789 #optional
      - 5514:5514/udp #optional
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.140
        # ipv4_address: 192.168.3.140

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```yaml title=".env"
Not Applicable
```
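With the compose file saved, the stack can be brought up in the usual way. This is a sketch assuming Docker Compose v2; the `docker network create` values mirror the "Configure Docker Network" section elsewhere in this documentation and should be matched to your environment.

```sh
# Create the external network first if it does not exist yet (one-time step).
docker network inspect docker_network >/dev/null 2>&1 || \
  docker network create -d macvlan --subnet=192.168.5.0/24 --gateway=192.168.5.1 -o parent=eth0 docker_network

# Start the controller and tail its logs to confirm a clean startup.
docker compose up -d
docker compose logs -f controller
```

Once the logs settle, the web UI should be reachable at `https://192.168.5.140:8443`.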
@@ -0,0 +1,39 @@
**Purpose**:
If you need to deploy the UniFi Controller bare-metal in a virtual machine, you can do so with a few simple commands. Feel free to reference the [original documentation](https://help.ui.com/hc/en-us/articles/220066768-Updating-and-Installing-Self-Hosted-UniFi-Network-Servers-Linux) if additional clarity is needed.

!!! note "Assumptions"
    This document assumes that you are running Ubuntu Server (22.04 or higher). The instructions are not designed to accommodate RHEL-based Linux distributions.

!!! warning "INCOMPLETE DOCUMENT"
    This document was originally written with the intention of comprehensively covering the deployment of the MongoDB server and UniFi Network Controller. However, I opted instead to use the nearly turn-key automated installation script found [here](https://community.ui.com/questions/UniFi-Installation-Scripts-or-UniFi-Easy-Update-Script-or-UniFi-Lets-Encrypt-or-UniFi-Easy-Encrypt-/ccbc7530-dd61-40a7-82ec-22b17f027776).

```sh
apt-get update; apt-get install ca-certificates curl -y
curl -sO https://get.glennr.nl/unifi/install/install_latest/unifi-latest.sh && bash unifi-latest.sh
```

## Install Components
The installation consists of a MongoDB server and a UniFi Network (controller) server. Install the database first, then the UniFi Controller, so the controller can provision the newly-installed local MongoDB database server.

### General Configuration
We need to configure APT with a few commands to ensure that we can download the MongoDB and UniFi packages.
```sh
sudo apt-get update && sudo apt-get install ca-certificates apt-transport-https
echo 'deb [ arch=amd64,arm64 ] https://www.ui.com/downloads/unifi/debian stable ubiquiti' | sudo tee /etc/apt/sources.list.d/100-ubnt-unifi.list
sudo wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg
echo "deb [trusted=yes] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
sudo apt-get update
```

!!! note "Alternative GPG Key Installation"
    If you run into issues installing the GPG key for the UniFi packages, you can alternatively run the command seen below:
    ```sh
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50
    ```

### MongoDB Server
Run the following commands to install the MongoDB server and enable it to start automatically. Original reference documentation can be found [here](https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-ubuntu/).
```sh
# Sketch based on the linked MongoDB guide and the 3.6 repository configured
# above; verify the package name against the current documentation.
sudo apt-get install -y mongodb-org
sudo systemctl enable --now mongod
```
@@ -0,0 +1,19 @@
**Purpose**:
You may find that you only have one network adapter on a server / VM and need to have multiple virtual networks associated with it. For example, Home Assistant exists on the `192.168.3.0/24` network, but it also needs to access devices on the `192.168.4.0/24` surveillance network. To facilitate this, we will make a MACVLAN sub-interface: a virtual interface that is parented to the actual physical interface.

!!! info "Assumptions"
    It is assumed that you are running Rocky Linux (or CentOS / RedHat).

## Create the Permanent Sub-Interface
You will begin by creating a new connection named `macvlan-surveillance`, which backs a virtual interface named `surveillance`.
``` sh
nmcli connection add type macvlan con-name macvlan-surveillance ifname surveillance dev ens18 mode bridge ipv4.method manual ipv4.addresses 192.168.4.100/24
nmcli connection up macvlan-surveillance
nmcli connection show
```

## Bind a Docker Network to the Sub-Interface
Now you need to run the following command to allow Docker to use this interface for the `surveillance_network`:
``` sh
docker network create -d macvlan --subnet=192.168.4.0/24 --gateway=192.168.4.1 -o parent=surveillance surveillance_network
```
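To confirm the sub-interface and Docker network are working, a few sanity checks can be run. This is a sketch using the example addresses from above; adjust the interface name, gateway, and network name to your environment.

```sh
# The sub-interface should be up with the expected 192.168.4.100 address.
ip addr show surveillance

# Traffic sourced from the sub-interface should reach the surveillance gateway.
ping -c 3 -I surveillance 192.168.4.1

# A throwaway container attached to the macvlan network should also reach it.
docker run --rm --network surveillance_network alpine ping -c 3 192.168.4.1
```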
@@ -0,0 +1,8 @@
### Configure Docker Network
We want to use a dedicated subnet / network specifically for containers, so they don't trample over the **SERVER** and **LAN** networks. If you are unsure of the name of the network adapter, in this case `eth0`, just type `ip addr` in the terminal to list the network interfaces and locate it.
```
docker network create -d macvlan --subnet=192.168.5.0/24 --gateway=192.168.5.1 -o parent=eth0 docker_network
```

!!! note
    Be sure to replace `eth0` with the correct interface name using `ip addr` in the terminal; it may appear as something else, like `ens18`. If the interface doesn't exist, Docker will produce an error complaining about it.
@@ -0,0 +1,29 @@
**Purpose**: You may have a Sophos XGS appliance and need more than one interface to act as additional LAN ports. You can achieve this with bridges.

!!! info "Assumptions"
    It is assumed that your Sophos XGS appliance has at least 3 interfaces: one for `WAN`, one for `LAN`, and a third that will act as a member of the bridge. You can have as many member interfaces in the bridge as needed, but you need at least one.

## Log in to the Firewall
You will need to access the firewall either directly on the local network at `https://<IP-of-Firewall>:4444` or remotely in Sophos Central.

## Configure a LAN Bridge
Navigate to "**Configure > Network > Interfaces > "Add Interface" > "Add Bridge"**"

| **Field** | **Value** |
| :--- | :--- |
| Name | `LAN Bridge` |
| Hardware | `br0` |
| Enable routing on this bridge pair | `<Unchecked>` |
| Member Interfaces | `<Interfaces-of-Additional-Ports> / Zone: "LAN"` |

!!! warning
    The LAN interface itself needs to be a member of the bridge. If it is not, the Sophos appliance will not allow you to use the same IP address as the existing LAN interface.

### IPv4 Configuration

| **Field** | **Value** |
| :--- | :--- |
| IP Assignment | `Static` |
| IPv4/netmask | `<IP-of-LAN-Interface> / <CIDR-of-LAN-Interface>` |
| Gateway IP | `<Blank>` |
| Member Interfaces | `<Interfaces-of-Additional-Ports> / Zone: "LAN"` |
@@ -0,0 +1,159 @@
**Purpose**: Generally speaking, when you have site-to-site VPN tunnels, you have to ensure that the *health* of the tunnel is as expected. Sometimes VPN tunnels will report that they are online and connected, but in reality, no traffic is flowing to the remote side of the tunnel. In these instances, we can create a script that pings a device on the remote end, and if it does not respond in a timely manner, the script restarts the VPN tunnel automatically.

!!! note "Assumptions"
    This document assumes that you will be running a PowerShell script in a Windows environment. The `curl` commands can be used interchangeably on Linux, but the example script provided here uses `curl.exe` within a PowerShell script, and instead of running on a schedule via crontab, it uses the Windows Task Scheduler.

    I will attempt to provide Linux-equivalent commands where possible.
## Sophos Environment
### Configure Sophos XGS Firewall ACLs
You need to configure a user account that will be used specifically for leveraging the API controls that allow resetting the VPN tunnel(s). At this stage, log into your Sophos XGS Firewall. For this example, we will assume you can reach your firewall at https://172.16.16.16:4444 and log in as the administrator.

### Create API Access Profile
You need to create a profile that the API user will leverage to issue commands to the firewall's VPN settings. Without this profile, the user may have either not enough, or too much, access.

- Navigate to **System > Profiles > Device Access > "Add"**
- Profile Name: `VPNTunnelAPI`
- Check the radio box column named "**None**" to deny all permissions to all areas of the firewall
- Expand the "**VPN**" section of the permission tree, and check the box for "**Read-Write**" next to "**Connect Tunnel**"
- Click the "**Save**" button to save the access profile

### Create API Access User
Now we need to make a user account that we will use inside the script to authenticate against the firewall using the previously-mentioned access profile.

- Navigate to **Configure > Authentication > Users > "Add"**
- Username: `TunnelCheckerAPIUser`
- Name: `TunnelCheckerAPIUser`
- User Type: `Administrator`
- Profile: `VPNTunnelAPI`
- Password: `01_placeholder_PASSWORD_here_02`
- Group: `Open Group`
- Click the "**Save**" button to save the API user account

### Create Device Access ACL
Now we need to configure an ACL within the firewall to allow API access from the specific server we will be using in the next section.

- Navigate to **Administration > Device Access > Local service ACL exception rule > "Add"**
- Rule Name: `API Access (IPSec Tunnel Heartbeat Script)`
- Source Zone: `<Zone-of-Device-Running-Script>` (*e.g. a server network*)
- Source Network/Host: `<IP_HOST_OF_DEVICE_RUNNING_SCRIPT>`
- Destination Host: `XGS Firewall (Local IP)` (*This is an IP host pointing to the internal IP of the firewall*)
- Services: `HTTPS`
- Action: `Accept`

### Configure API Access via IP
Lastly, you need to configure API access to allow communication from the IP of the device. This seems redundant given the previous "Device Access ACL", but it's required for this to work; otherwise you will get a `Sophos API Operations are not allowed from the requester IP address` error when running the script.

- Navigate to **System > Backup & Firmware > API > API Configuration**
- Add the IP of the Server/Device
- Click the "**Apply**" button
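Before wiring up the full script, you can sanity-check the API access from the server with a single request. This is a sketch using the example IP and credentials from above; substitute your own values.

```sh
# A working setup returns an XML response containing an authentication status;
# an ACL/IP misconfiguration instead returns the "Sophos API Operations are
# not allowed from the requester IP address" error described above.
curl -k "https://172.16.16.16:4444/webconsole/APIController?reqxml=<Request><Login><Username>TunnelCheckerAPIUser</Username><Password>01_placeholder_PASSWORD_here_02</Password></Login></Request>"
```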
## Server Environment
### Choose a Server
It is important to choose a server/device that is able to communicate with the devices on the remote end of the tunnel. If it cannot ping the remote device(s), it will assume that the tunnel is offline and restart the VPN tunnel in an infinite loop.

### Prepare the Script Folder
You need a place to put the script (and, on Windows, `curl.exe`). Follow the instructions specific to your platform below:

=== "Windows"
    Download `curl.exe` from this location: [Download](https://curl.se/windows/dl-8.10.0_1/curl-8.10.0_1-win64-mingw.zip) and place it somewhere on the operating system, such as `C:\Scripts\VPN_Tunnel_Checker`. Then create a script named `Tunnel_Checker.ps1` in that same folder with the content below:

    !!! note "Curl Files Extraction"
        You will want to extract all of the files included in the zip file's `bin` folder. Specifically, copy the following files into the `C:\Scripts\VPN_Tunnel_Checker` folder:

        - `curl.exe`
        - `curl-ca-bundle`
        - `libcurl-x64.def`
        - `libcurl-x64.dll`

    ``` powershell
    function Reset-VPN-Tunnel {
        Write-Host "VPN Tunnel Broken - Bringing VPN Tunnel Down..."
        # The URL must be quoted: unquoted angle brackets are parse errors in PowerShell.
        .\curl.exe -k "https://172.16.16.16:4444/webconsole/APIController?reqxml=<Request><Login><Username>TunnelCheckerAPIUser</Username><Password>01_placeholder_PASSWORD_here_02</Password></Login><Set><VPNIPSecConnection><DeActive><Name>VPN_TUNNEL_NAME</Name></DeActive></VPNIPSecConnection></Set></Request>"

        Start-Sleep -Seconds 5

        Write-Host "Bringing VPN Tunnel Up..."
        .\curl.exe -k "https://172.16.16.16:4444/webconsole/APIController?reqxml=<Request><Login><Username>TunnelCheckerAPIUser</Username><Password>01_placeholder_PASSWORD_here_02</Password></Login><Set><VPNIPSecConnection><Active><Name>VPN_TUNNEL_NAME</Name></Active></VPNIPSecConnection></Set></Request>"
    }

    function Check-VPN-Tunnel {
        # Server Connectivity Check
        Write-Host "Checking Tunnel Connection to PLACEHOLDER..."
        if (-not (Test-Connection '10.0.0.29' -Quiet)) {
            Reset-VPN-Tunnel
        }

        # Server Connectivity Check
        Write-Host "Checking Tunnel Connection to PLACEHOLDER..."
        if (-not (Test-Connection '10.0.0.30' -Quiet)) {
            Reset-VPN-Tunnel
        }
    }

    function Trace-VPN-Tunnel {
        Write-Host "Tracing Path to PLACEHOLDER:"
        pathping -n -w 500 -p 100 10.0.0.29

        Write-Host "Tracing Path to PLACEHOLDER:"
        pathping -n -w 500 -p 100 10.0.0.30
    }

    CD "C:\Scripts\VPN_Tunnel_Checker"
    Check-VPN-Tunnel

    #Write-Host "Checking Tunnel Quality After Running Script..."
    #Trace-VPN-Tunnel
    ```

    !!! note "Optional Reporting"
        You may find that you want some extra logging so you can confirm the script is doing its job. You can add the following to the script above to add that functionality.

        Add the following at the bottom of each server check in the `Check-VPN-Tunnel` function, directly below the `Reset-VPN-Tunnel` call.

        ``` powershell
        Add-Content -Path "C:\Scripts\VPN_Tunnel_Checker\Tunnel.log" -Value "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') PLACEHOLDER Connection Down"
        ```

        Lastly, if you want to log heartbeats and not just tunnel-down events, change the very end of the script, where the `Check-VPN-Tunnel` function is called, to look like this. The purpose of this is to show that the script is actually running; I recommend only implementing it temporarily during initial deployment.

        ``` powershell
        CD "C:\Scripts\VPN_Tunnel_Checker"
        Check-VPN-Tunnel
        Add-Content -Path "C:\Scripts\VPN_Tunnel_Checker\Tunnel.log" -Value "$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss') Heartbeat"
        ```

=== "Linux"

    ``` sh
    PLACEHOLDER
    ```
### Create Scheduled Task
At this point, you need this script to run automatically every 5 minutes or so, so you need to create a task in the Windows Task Scheduler to achieve this.

=== "Windows"

    - Open "**Task Scheduler**" on the device
    - Expand "**Task Scheduler Library**" in the tree on the left-hand side
    - Right-click anywhere in the task list and select "**Create New Task...**"
    - **General**:
        - Name: `Check VPN Tunnel Every 5 Minutes`
        - When running this task, use the following user account: `SYSTEM`
    - **Triggers**:
        - Click "**New...**"
        - Begin the Task: `On a Schedule`
        - Settings: `Daily`
        - Advanced Settings > Repeat Task Every: `5 Minutes` > for a duration of `1 Day`
    - **Actions**:
        - Click "**New...**"
        - Action: `Start a Program`
        - Program/Script: `C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe`
        - Add Arguments: `-ExecutionPolicy Bypass -File "C:\Scripts\VPN_Tunnel_Checker\Tunnel_Checker.ps1"`
    - Press the "**OK**" button to save the scheduled task, then wait 5 minutes to ensure it triggers as expected.

=== "Linux"

    - PLACEHOLDER
    - PLACEHOLDER
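On Windows, the same task can also be created non-interactively. The sketch below uses `schtasks.exe` with the script path from earlier; the flags are standard but should be verified against `schtasks /create /?` on your system before relying on it.

``` powershell
# Creates a repeating task running as SYSTEM; /sc minute /mo 5 = every 5 minutes.
schtasks /create /tn "Check VPN Tunnel Every 5 Minutes" /ru SYSTEM /sc minute /mo 5 `
  /tr "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -File \"C:\Scripts\VPN_Tunnel_Checker\Tunnel_Checker.ps1\""
```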
@@ -0,0 +1,96 @@
**Purpose**: You may have two Sophos XGS appliances (or a mixed configuration) and need to set up a site-to-site VPN tunnel between two remote locations. You can achieve this with a simple passphrase-based IPSec VPN tunnel.

!!! info "Assumptions"
    This documentation only provides instruction for Sophos XGS-based devices. It does not account for third-party vendors or hardware from other manufacturers. If you need to set up a mixed VPN tunnel with a different brand of networking device, you need to do your best to match the settings on the tunnels manually (e.g. encryption type, phase lifetimes, etc).

## Architecture

!!! tip "Best Practices - Initiators / Responders"
    If you have a hub-and-spoke network, where one location acts as a central authority (e.g. domain controllers, auth servers, identity providers, headquarters, etc), you will set up the central "hub" as a VPN responder on its side of the VPN tunnel, and all the remote "spoke" locations behave as VPN initiators.

``` mermaid
graph TB
    Responder((Responder<br/>Headquarters))
    Initiator1((Initiator<br/>Remote Site 1))
    Initiator2((Initiator<br/>Remote Site 2))
    Initiator3((Initiator<br/>Remote Site 3))
    Initiator4((Initiator<br/>Remote Site 4))
    Initiator5((Initiator<br/>Remote Site 5))

    Initiator1 --> Responder
    Initiator2 --> Responder
    Initiator3 --> Responder
    Initiator4 --> Responder
    Initiator5 --> Responder
```
## Log in to the Firewall
You will need to access the firewall either directly on the local network at `https://<IP-of-Firewall>:4444` or remotely in Sophos Central.

## Configure an IPSec VPN Tunnel
Navigate to "**Configure > Site-to-Site VPN > Add**"

### General Settings

| **Field** | **Value** |
| :--- | :--- |
| Name | `<ThisLocation> to <RemoteLocation>` |
| IP Version | `Dual` |
| Connection Type | `Tunnel Interface` (*Also known as a "Route-Based VPN"*) |
| Gateway Type | `Initiate the Connection` / `Respond Only` (*See "Best Practices" Section*) |

### Encryption

| **Field** | **Value** |
| :--- | :--- |
| Encryption Profile | `Custom_IKEv2_Initiator` / `Custom_IKEv2_Responder` (*Based on the "Gateway Type"*) |
| Authentication Type | `Preshared Key / Passphrase` |

### Gateway Settings

| **Field** | **Value** |
| :--- | :--- |
| Listening Interface | `<WAN Interface / Generally "Port2">` (*Internal IP Address*) |
| Gateway Address | `<Public IP of Remote Firewall>` |
| Local ID Type | `IP Address` (*Usually Optional*) |
| Remote ID Type | `<If the Remote Firewall has one, enter it; otherwise leave blank>` (*Usually Optional*) |
| Local Subnet | `<Leave Blank>` |
| Remote Subnet | `<Leave Blank>` |

!!! note "Tunnel IDs / Subnets"
    If one side of the tunnel indicates a Local ID, you need to input that as the Remote ID on the other end of the tunnel. While tunnel IDs are generally optional, if one side uses them, both must.

    - "Route-based" VPNs do not need subnets configured
    - "Policy-based" VPNs require subnets to be configured

## Configure IPSec Encryption Profile
Navigate to "**System > Profiles > IPSec Profiles > Custom_IKEv2_`<Initiator>/<Responder>`**"

!!! info "Explanation of Phases and their Relation to Initiators/Responders"
    Phase 1 can be described as establishing the initial tunnel connectivity from the initiator to the responder (local to remote), while Phase 2 can be thought of as individual devices establishing connections through the VPN tunnel (individual endpoint connectivity).

    The responder's Phase 1 & 2 lifetime values are 300 seconds longer than the initiator's Phase 1 & 2 lifetime values.

=== "Initiator Phase Lifetime Values"

    | **Field** | **Value** | **Notes** |
    | :--- | :--- | :--- |
    | Phase 1 Lifetime | *Default Value*: `28800` | `<Longer Lifetime Compared to Phase 2>` |
    | Phase 2 Lifetime | *Default Value*: `14400` | `<Shorter Lifetime Compared to Phase 1>` |

=== "Responder Phase Lifetime Values"

    | **Field** | **Value** | **Notes** |
    | :--- | :--- | :--- |
    | Phase 1 Lifetime | *Default Value + 300 Seconds*: `29100` | `<Longer Lifetime Compared to Phase 2>` |
    | Phase 2 Lifetime | *Default Value + 300 Seconds*: `14700` | `<Shorter Lifetime Compared to Phase 1>` |

!!! warning "Remote / Local Phase Lifetimes"
    Within the context of the remote and local VPN tunnels, the lifetime of the Phase 1 and Phase 2 encryption keys needs to be shorter on the initiator side than on the responder side of the VPN tunnel.

## Repeat Steps on Remote Firewall
You will need to repeat the steps on both firewalls, so that one firewall is the initiator and the other is configured as the responder. Take special note of the admonitions regarding initiator / responder / local / remote differences.

## Connect the IPSec Tunnels
Now start the tunnel on the initiator side first, then start the tunnel on the responder side. If both sides show green status indicators, the tunnel should be active.
@@ -0,0 +1,35 @@
## Purpose
This document outlines the generalized process of configuring remote access on a Sophos XGS Firewall to allow a VPN user to RDP into a workstation. *Setting up remote SSL VPN access is not covered in this document.*

### Create MAC Host for Destination Device
The first step in the process is to create a MAC address host for the device being RDP'd into, so that if its IP rotates, the firewall rule will continue to work correctly.

- Navigate to **Sophos XGS Firewall > [System] Hosts and Services**
- Click on the **Mac Host** tab > "**Add**"
    - Name: `<Device-Hostname>`
    - Description: `<Workstation Remote Access for (username)>`
    - Type: `Mac Address`
    - MAC Address: `<mac address of device>`
- Click **Save**

### Configure Firewall Rule
- Navigate to **[Protect] Rules and Policies > Add Firewall Rule (New Firewall Rule)**
- Rule Name: `Remote Workstation Access for (username)`
- Source Zone: `VPN`
- Source Networks and Devices: `Any`
- Destination Zone: `LAN`
- Destination Networks: `<MAC Host We Previously Made>`
- Services > Add New Item > `RDP`
    - If `RDP` does not exist, click "Add" > `Services`
        - Name: `RDP`
        - Description: `Remote Desktop Protocol`
        - Type: `TCP/UDP`
        - Protocol: `TCP`
        - Source Port: `1:65535`
        - Destination Port: `3389`
        - Click **Save**
- Check **Match Known Users**
- Under "Users or Groups", click "Add New Item"
    - Search for the username of the person using the VPN that needs to access the workstation (e.g. `nicole.rappe@bunny-lab.io`)
- Click the **Save** button and have the user try to connect to the VPN, then RDP into their workstation.
30
infrastructure/networking/index.md
Normal file
@@ -0,0 +1,30 @@
# Networking

## Purpose
Network topology, addressing, firewalling, VPN, and network service dependencies.

## Includes
- IP tables and address plans
- Firewall and VPN configurations
- Network controllers and DNS-related services

## New Document Template
````markdown
# <Document Title>

## Purpose
<what this network doc exists to describe>

!!! info "Assumptions"
    - <platform or appliance assumptions>
    - <privilege assumptions>

## Architecture
<ASCII diagram or concise topology notes>

## Procedure
```sh
# Commands or config steps
```

## Validation
- <command + expected result>
````
@@ -0,0 +1,10 @@
### IP Addresses
Documented IP addresses of the Hyper-V Failover Cluster VMs that exist behind the Sophos XG Firewall VM. All of these machines are funneled through the Sophos XG Firewall VM before they are allowed to communicate on the physical network with other devices.

## 172.16.16.0/24 Network
| **IP Address** | **FQDN** | **Additional Notes** |
| :--- | :--- | :--- |
| 172.16.16.1 | LAB-SOPHOS-01.bunny-lab.io | Sophos XG Firewall |
| 172.16.16.2 | LAB-IRIS-01 | Blue Iris Surveillance |
| 172.16.16.3 | `NOT IN USE` | `NOT IN USE` |
| 172.16.16.4 | `NOT IN USE` | `NOT IN USE` |
@@ -0,0 +1,53 @@
### IP Addresses
Documented IP addresses of containers.

## 192.168.5.0/24 Network
| **IP Address** | **Hostname** | **Additional Notes** |
| :--- | :--- | :--- |
| 192.168.5.1 | PFSENSE | pfSense Firewall |
| 192.168.5.2 | AUTH.bunny-lab.io | Keycloak Server |
| 192.168.5.3 | AUTH.bunny-lab.io | Keycloak PostgreSQL DB Server |
| 192.168.5.4 | ALPINE-WORK-01.bunny-lab.io | Firefox Containerized Environment |
| 192.168.5.5 | ACTUAL.bunny-lab.io | Actual Finance Server |
| 192.168.5.6 | Monica | Server |
| 192.168.5.7 | Monica | Database |
| 192.168.5.8 | Gatus | Server |
| 192.168.5.9 | Gatus | Database |
| 192.168.5.10 | `UNDOCUMENTED - Active` | |
| 192.168.5.11 | NOTES.bunny-lab.io | Trillium Notes |
| 192.168.5.12 | `UNDOCUMENTED - Active` | |
| 192.168.5.13 | LAB-CLOUD-02 | Docker Proxy |
| 192.168.5.14 | `UNDOCUMENTED - Active` | |
| 192.168.5.15 | `UNDOCUMENTED - Active` | |
| 192.168.5.16 | `UNDOCUMENTED - Active` | |
| 192.168.5.17 | `UNDOCUMENTED - Active` | |
| 192.168.5.18 | `UNDOCUMENTED - Active` | |
| 192.168.5.19 | Immich | Server |
| 192.168.5.20 | Immich | Microservices |
| 192.168.5.21 | Immich | Machine Learning |
| 192.168.5.22 | Immich | Redis DB |
| 192.168.5.23 | Immich | PostgreSQL DB |
| 192.168.5.24 | Sanctuary | Content Database Server |
| 192.168.5.25 | Homebox | Inventory Management System |
| 192.168.5.26 | Material MkDocs | Voice Training Documentation |
| 192.168.5.27 | MM.bunny-lab.io | Mattermost PostgreSQL DB |
| 192.168.5.28 | MM.bunny-lab.io | Mattermost Server |
| 192.168.5.29 | `UNDOCUMENTED - Active` | |
| 192.168.5.30 | Pyload | Python-Based Download Manager |
| 192.168.5.31 | FLATNOTES.bunny-lab.io | Note-Taking Server |
| 192.168.5.32 | SEMAPHORE.bunny-lab.io | Ansible Semaphore |
| 192.168.5.33 | SEMAPHORE.bunny-lab.io | Ansible Semaphore PostgreSQL Database |
| 192.168.5.34 | CONTACT.bunny-lab.io | Linkstack |
| 192.168.5.35 | BIFROST.bunny-lab.io | Rustdesk HBBS |
| 192.168.5.36 | FLYFF.bunny-lab.io | Material MKDocs |
| 192.168.5.37 | traggo.bunny-lab.io | Traggo Time-Management Server |
| 192.168.5.38 | speedtest.bunny-lab.io | Speedtest Tracker |
| 192.168.5.39 | speedtest.bunny-lab.io | Speedtest Tracker Database |
| 192.168.5.40 | apprise.bunny-lab.io | Apprise Notification Relaying Service |
| 192.168.5.41 | joplin.bunny-lab.io | Joplin Documentation |
| 192.168.5.42 | todo.bunny-lab.io | Tududi Server |
| 192.168.5.43 | `UNDOCUMENTED - Active` | |
| 192.168.5.44 | `UNDOCUMENTED - Active` | |
| 192.168.5.45 | `UNDOCUMENTED - Active` | |
| 192.168.5.46 | kb.bunny-lab.io | Zensical Documentation Server |
| 192.168.5.47 | help.bunny-lab.io | Rustdesk Info Page |
235
infrastructure/networking/ip-tables/homelab-server-inventory.md
Normal file
@@ -0,0 +1,235 @@
## Overview
All servers (physical and virtual) are documented on this page. They are written in a specific annotated manner to make them copy/paste ready for the Ansible AWX Operator server, which interacts with devices in the homelab over the `SSH` and `WinRM` protocols. This allows me to automate functions such as updates across the entire homelab declaratively rather than individually.

**Note**: This list does not include Docker/Kubernetes-based workloads/servers. Those can be found within the [Container Network IP Table](../../networking/ip-tables/192-168-5-0-container-network.md) document. Given that Ansible does not interact with containers in my homelab (*yet*), those devices are not listed here.

## Updating Ansible Inventory
Whenever changes are made here, they need to be replicated to the production Ansible AWX inventory file. This ensures that Ansible AWX is always up-to-date. Simply copy/paste the codeblock below into the linked inventory file, and commit the change with a comment explaining what was added/removed from the inventory list.

[:material-ansible: Edit Ansible AWX Inventory File](https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/_edit/main/inventories/homelab.ini){ .md-button }
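Before committing, a quick sanity check can list only the active (non-commented) hosts so they can be compared against this page. This is a sketch assuming the `inventories/homelab.ini` path from the link above:

```shell
# Print active host entries (lines starting with a host name, not '#'),
# showing the inventory hostname and its ansible_host target.
grep -E '^[A-Za-z0-9]' inventories/homelab.ini | awk '{print $1, $2}'
```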
```ini title="https://git.bunny-lab.io/GitOps/awx.bunny-lab.io/src/branch/main/inventories/homelab.ini"
# ----------------------------------------------------------------- #
#   Servers (Physical and Virtual) - Hosts Mapped to IPs / FQDNs    #
# ----------------------------------------------------------------- #
# Deep Nodes
deep-node ansible_host=10.0.0.20

lab-pfsense-01 ansible_host=192.168.3.1 # (1)
lab-mini-pc ansible_host=192.168.3.2 # (2)
lab-storage-01 ansible_host=192.168.3.3 # (3)
cluster-node-01 ansible_host=192.168.3.4 # (4)
cluster-node-02 ansible_host=192.168.3.5 # (5)
lab-games-04 ansible_host=lab-games-04.bunny-lab.io # (6)
lab-photos-01 ansible_host=lab-photos-01.bunny-lab.io # (7)
kb-bunny-lab-io ansible_host=kb.bunny-lab.io # (8)
# NOT-IN-USE ansible_host=192.168.3.9 # (9)
lab-ansible-01 ansible_host=lab-ansible-01.bunny-lab.io # (10)
lab-games-02 ansible_host=lab-games-02.bunny-lab.io # (11)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (12)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (13)
lab-games-03 ansible_host=lab-games-03.bunny-lab.io # (14)
#lab-docker-01 ansible_host=lab-docker-01.bunny-lab.io # (15)
lab-games-05 ansible_host=lab-games-05.bunny-lab.io # (16)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (17)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (18)
#container-node-01 ansible_host=container-node-01.bunny-lab.io # (19)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (20)
lab-puppet-01 ansible_host=lab-puppet-01.bunny-lab.io # (21)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (22)
usagi-cluster ansible_host=usagi-cluster.bunny-lab.io # (23)
lab-pool-01 ansible_host=lab-pool-01.bunny-lab.io # (24)
lab-dc-01 ansible_host=lab-dc-01.bunny-lab.io # (25)
lab-dc-03 ansible_host=lab-dc-03.bunny-lab.io # (26)
lab-camera-01 ansible_host=lab-camera-01.bunny-lab.io # (27)
lab-games-01 ansible_host=lab-games-01.bunny-lab.io # (28)
lab-cloud-01 ansible_host=lab-cloud-01.bunny-lab.io # (29)
deeplab-win11 ansible_host=deeplab-win11.dev-testing.lab # (30)
lab-dt-02 ansible_host=lab-dt-02.bunny-lab.io # (31)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (32)
cluster-node-03 ansible_host=192.168.3.33 # (33)
lab-draas-01-ilo ansible_host=192.168.3.34 # (34)
deeplab-dc ansible_host=deeplab-dc-01.dev-testing.lab # (35)
deeplab-win10 ansible_host=deeplab-win10.dev-testing.lab # (36)
# NOT-IN-USE ansible_host=192.168.3.37 # (37)
# NOT-IN-USE ansible_host=192.168.3.38 # (38)
# NOT-IN-USE ansible_host=192.168.3.39 # (39)
# NOT-IN-USE ansible_host=192.168.3.40 # (40)
# NOT-IN-USE ansible_host=192.168.3.41 # (41)
# NOT-IN-USE ansible_host=192.168.3.42 # (42)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (43)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (44)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (45)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (46)
# NOT-IN-USE ansible_host=NOT-IN-USE.bunny-lab.io # (47)
lab-docker-02 ansible_host=192.168.3.48 # (48)
lab-fps-01 ansible_host=192.168.3.49 # (49)
# NOT-IN-USE ansible_host=192.168.3.50 # (50)
# NOT-IN-USE ansible_host=192.168.3.51 # (51)
# NOT-IN-USE ansible_host=192.168.3.52 # (52)
# NOT-IN-USE ansible_host=192.168.3.53 # (53)
veeam-draas-01 ansible_host=192.168.3.54 # (54)
# NOT-IN-USE ansible_host=192.168.3.55 # (55)
# NOT-IN-USE ansible_host=192.168.3.56 # (56)
# NOT-IN-USE ansible_host=192.168.3.57 # (57)
lab-photos-02 ansible_host=192.168.3.58 # (58)
lab-ca-01 ansible_host=192.168.3.59 # (59)
lab-ca-02 ansible_host=192.168.3.60 # (60)
lab-mail-02 ansible_host=192.168.3.61 # (61)
# NOT-IN-USE ansible_host=192.168.3.62 # (62)
# NOT-IN-USE ansible_host=192.168.3.63 # (63)
lab-iso-01 ansible_host=192.168.3.64 # (64)
# NOT-IN-USE ansible_host=192.168.3.65 # (65)
# NOT-IN-USE ansible_host=192.168.3.66 # (66)
lab-fps-02 ansible_host=192.168.3.67 # (67)
lab-pbs-01 ansible_host=192.168.3.68 # (68)
rke2-cluster-node-01 ansible_host=192.168.3.69 # (69)
rke2-cluster-node-02 ansible_host=192.168.3.70 # (70)
rke2-cluster-node-03 ansible_host=192.168.3.71 # (71)
rke2-cluster-node-04 ansible_host=192.168.3.72 # (72)
rke2-cluster-node-05 ansible_host=192.168.3.73 # (73)
# NOT-IN-USE ansible_host=192.168.3.74 # (74)
# NOT-IN-USE ansible_host=192.168.3.75 # (75)
deeplab-node-02 ansible_host=192.168.3.253 # (253)
# NOT-IN-USE ansible_host=192.168.3.254 # (254)

# ----------------------------------------------------------------- #
#   Host Groups - Various Logical Groupings of Devices              #
# ----------------------------------------------------------------- #
[domainControllers]
lab-dc-01
lab-dc-03

[windowsDomainServers]
lab-games-04
lab-games-02
lab-games-03
lab-games-05
lab-dc-01
lab-dc-03
lab-camera-01
lab-games-01
lab-ca-01
lab-ca-02
lab-mail-02

[windowsStandaloneServers]
#lab-dt-02

[linuxServers]
lab-photos-01
lab-photos-02
lab-puppet-01
lab-cloud-01
lab-docker-02
lab-iso-01

[workstations]
lab-operator-01

# ----------------------------------------------------------------- #
#   Host Group Variables - Defining Specific Requirements of Groups #
# ----------------------------------------------------------------- #
[domainControllers:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_scheme=https
ansible_winrm_server_cert_validation=ignore

[windowsDomainServers:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=kerberos
ansible_winrm_scheme=https
ansible_winrm_server_cert_validation=ignore

[windowsStandaloneServers:vars]
ansible_connection=winrm
ansible_winrm_kerberos_delegation=false
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore

[linuxServers:vars]
ansible_connection=ssh
```
1. pfSense Firewall @ `192.168.3.1` | [Documentation](https://www.pfsense.org/getting-started/)
2. 3.7GHz High-Performance Mini-PC @ `192.168.3.2`
3. TrueNAS Core Server @ `192.168.3.3` | [Documentation](https://www.truenas.com/truenas-core/)
4. ProxmoxVE Virtualization Node @ `192.168.3.4`
5. ProxmoxVE Virtualization Node @ `192.168.3.5`
6. Satisfactory Dedicated Server @ `192.168.3.6` | [Documentation](https://satisfactory.wiki.gg/wiki/Dedicated_servers)
7. Immich Server @ `192.168.3.7` | [Documentation](https://immich.app/docs/install/docker-compose/)
8. Zensical Documentation Server @ `192.168.3.8` | [Documentation](https://hub.docker.com/r/zensical/zensical)
9. Not Currently In-Use @ `192.168.3.9` | [Documentation](https://example.com)
10. Ansible AWX @ `192.168.3.10` | [Documentation](../../automation/ansible/awx/deployment/awx-operator.md)
11. Minecraft - All The Mods 9 @ `192.168.3.11` | [Documentation](https://www.curseforge.com/minecraft/modpacks/all-the-mods-9)
12. Ferrumgate Server @ `192.168.3.12` | [Documentation](https://ferrumgate.com/)
13. NOT IN USE @ `192.168.3.13`
14. Minecraft - All The Mods 10 @ `192.168.3.14` | [Documentation](https://www.curseforge.com/minecraft/modpacks/all-the-mods-10)
15. Docker Container Environment (Portainer) @ `192.168.3.15` | [Documentation](../../platforms/containerization/docker/deploy-portainer.md)
16. Valheim Server @ `192.168.3.16` | [Documentation](../../services/gaming/valheim.md)
17. Not Currently In-Use @ `192.168.3.17` | [Documentation](https://example.com)
18. Keycloak Server @ `192.168.3.18` | [Documentation](../../services/authentication/keycloak/deployment.md)
19. Docker Container Environment (Portainer) @ `192.168.3.19` | [Documentation](../../platforms/containerization/docker/deploy-portainer.md)
20. PrivacyIDEA @ `192.168.3.20` | [Documentation](../../services/authentication/privacyidea.md)
21. Puppet Server @ `192.168.3.21` | [Documentation](../../automation/puppet/deployment/puppet.md)
22. Not Currently In-Use @ `192.168.3.22` | [Documentation](https://example.com)
23. Hyper-V Failover Cluster @ `192.168.3.23` | [Documentation](../../platforms/virtualization/hyper-v/failover-cluster/deploy-failover-cluster-node.md)
24. TrueNAS SCALE @ `192.168.3.24` | [Documentation](https://www.truenas.com/truenas-scale/)
25. Primary Domain Controller @ `192.168.3.25` | [Documentation](https://example.com)
26. Secondary Domain Controller @ `192.168.3.26` | [Documentation](https://example.com)
27. Blue Iris Surveillance @ `192.168.3.27` | [Documentation](https://blueirissoftware.com/)
28. ARK: Survival Ascended Server @ `192.168.3.28` | [Documentation](../../services/gaming/ark-survival-ascended.md)
29. Nextcloud AIO @ `192.168.3.29` | [Documentation](../../services/productivity/nextcloud-aio.md)
30. Dev-Testing Win11 Lab Environment @ `192.168.3.35` | [Documentation](https://example.com)
31. Windows 11 Work VM @ `192.168.3.31` | [Documentation](https://example.com)
32. Matrix Synapse HomeServer @ `192.168.3.32` | [Documentation](https://github.com/matrix-org/synapse)
33. ProxmoxVE Virtualization Node @ `192.168.3.33`
34. LAB-DRAAS-01 iLO IPMI Access Interface @ `192.168.3.34` | [Documentation](https://example.com)
35. Dev-Testing DC Lab Environment @ `192.168.3.35` | [Documentation](https://example.com)
36. Dev-Testing Win10 Lab Environment @ `192.168.3.35` | [Documentation](https://example.com)
37. Security Onion @ `192.168.3.37` | [Documentation](https://docs.securityonion.net/en/2.4/installation.html)
38. Wazuh SIEM Security VM @ `192.168.3.38` | [Documentation](https://documentation.wazuh.com/current/quickstart.html)
39. Rancher Harvester Hyper-Converged Infrastructure
40. Rancher Harvester Hyper-Converged Infrastructure - VIP Management Interface
41. Pterodactyl Dedicated Game Server Hosting Dashboard
42. NGINX Server Package Caching Server
43. NOT IN USE @ `192.168.3.43`
44. NOT IN USE @ `192.168.3.44`
45. NOT IN USE @ `192.168.3.45`
46. NOT IN USE @ `192.168.3.46`
47. NOT IN USE @ `192.168.3.47`
48. Portainer Docker Node
49. Windows File Server
50. Flyff VM 01
51. Semaphore UI Automation Server
52. Headless Laptop Server 01
53. Headless Laptop Server 02
54. Veeam Backup & Replication Server
55. Veeam Backup Worker
56. Veeam Backup Worker
57. Veeam Backup Worker
58. Immich Server @ `192.168.3.58` | [Documentation](https://immich.app/docs/install/docker-compose/)
59. Active Directory Root Certificate Services VM
60. Active Directory Intermediary Certificate Services VM
61. Mailcow Email Server
62. NOT IN USE @ `192.168.3.62`
63. Proxmox Datacenter Manager
64. H5ai File Sharing Server
65. Netbird VPN Server
66. Kasm Workspaces VM
67. Windows File Server
68. Proxmox Backup Server
69. Rancher RKE2 Kubernetes Laptop Cluster Node
70. Rancher RKE2 Kubernetes Laptop Cluster Node
71. Rancher RKE2 Kubernetes Laptop Cluster Node
72. Rancher RKE2 Kubernetes Laptop Cluster Node
73. Rancher RKE2 Kubernetes Laptop Cluster Node
74. Rancher Harvester Node
75. Rancher Harvester Cluster VIP
253. Fedora Workstation 42 VM
254. Windows 11 Workstation
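Before committing inventory changes, a quick sanity check can catch malformed entries. The sketch below is illustrative only: it writes a short stand-in excerpt of the inventory to a temporary file, then counts active hosts and reserved slots. Against the real repository, point the `grep` commands at `inventories/homelab.ini` instead.

```sh
# Write a short stand-in excerpt of the inventory (illustrative only)
cat > /tmp/homelab-excerpt.ini <<'EOF'
deep-node ansible_host=10.0.0.20
lab-pfsense-01 ansible_host=192.168.3.1
# NOT-IN-USE ansible_host=192.168.3.9
EOF

# Count active (uncommented) host entries
grep -cE '^[A-Za-z0-9_-]+ +ansible_host=' /tmp/homelab-excerpt.ini

# Count reserved "NOT-IN-USE" slots
grep -c '^# NOT-IN-USE' /tmp/homelab-excerpt.ini
```

With Ansible installed locally, `ansible-inventory -i inventories/homelab.ini --graph` will also surface syntax mistakes before they ever reach AWX.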
**Purpose**: This is a scaffold document outlining, at a high level, how to change the IP address of a server on either Debian-based or RHEL-based operating systems.
=== "Ubuntu / Debian"

    ``` sh
    # Edit the Netplan file
    sudo nano /etc/netplan/<name-of-netplan-file>
    # <edit existing networking> --> <save file>

    # Apply the Netplan changes
    sudo netplan apply
    ```

=== "Rocky / Fedora / RHEL"

    ``` sh
    # Modify the existing connection via nmcli
    sudo nmcli connection modify ens18-connection \
      ipv4.addresses 192.168.3.13/24 \
      ipv4.gateway 192.168.3.1 \
      ipv4.dns "192.168.3.25,192.168.3.26" \
      ipv4.method manual

    # Cycle the connection to bring the new settings online
    sudo nmcli connection down ens18-connection
    sudo nmcli connection up ens18-connection
    ```
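For reference, the Netplan file edited in the Ubuntu / Debian tab might look like the following once a static address is set. This is an illustrative sketch only: the filename, the interface name `ens18`, and the addresses (mirrored from the `nmcli` example) are assumptions to adapt to your environment.

```yaml
# /etc/netplan/00-installer-config.yaml (hypothetical filename)
network:
  version: 2
  ethernets:
    ens18:                      # interface name is an assumption
      dhcp4: false
      addresses:
        - 192.168.3.13/24
      routes:
        - to: default
          via: 192.168.3.1
      nameservers:
        addresses: [192.168.3.25, 192.168.3.26]
```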
infrastructure/networking/misc/tuya-smart-lights.md
### pfSense DHCP Reservations for Tuya-Based Smart Devices
| **Description** | **IP Address** | **MAC Address** | **Hostname** | **Device ID** | **Local Key** |
| :--- | :--- | :--- | :--- | :--- | :--- |
| Bottom of Stairs | 10.0.0.200 | bcddc29072bf | ESP\_9072BF | 50316010bcddc29072bf | 5c92ba765b22a96f |
| Right Monitor | 10.0.0.201 | bcddc2901aef | ESP\_901AEF | 50316010bcddc2901aef | ea4852cf67fbff52 |
| Downstairs Light | 10.0.0.202 | bcddc28fe4c4 | ESP\_8FE4C4 | 74160333bcddc28fe4c4 | c207fffba7143bdb |
| Right TV Light | 10.0.0.203 | b4e62d4bc3fe | ESP\_4BC3FE | 36087764b4e62d4bc3fe | 9e807e03a398a31c |
| Nightstand | 10.0.0.204 | b4e62d4bc3cb | ESP\_4BC3CB | 36087764b4e62d4bc3cb | 739fca9d40634ad5 |
| Top of Stairs | 10.0.0.205 | bcddc2904ed9 | ESP\_904ED9 | 50316010bcddc2904ed9 | 23452028cfe464c0 |
| Bathroom | 10.0.0.206 | 2cf432220421 | ESP\_220421 | 105480752cf432220421 | 03099191b6ab585e |
| Front Porch | 10.0.0.207 | bcddc2947aae | ESP\_947AAE | 50316010bcddc2947aae | 18d074fa9ff47087 |
| Left Monitor | 10.0.0.208 | 2cf43221af1a | ESP\_21AF1A | 105480752cf43221af1a | aef4f6067548c3a9 |
| Puppy Nook | 10.0.0.209 | cc50e3feaa2b | ESP\_FEAA2B | 76380710cc50e3feaa2b | 7c881c27da5079b6 |
| TV | 10.0.0.210 | cc50e378bab9 | ESP\_78BAB9 | 35138222cc50e378bab9 | 89b366574278e990 |
| Left TV Light | 10.0.0.211 | cc50e378916d | ESP\_78916D | 10548075cc50e378916d | a2d0aeb1ad676b9f |
| Bedroom Light | 10.0.0.212 | bcddc2907645 | | 50316010bcddc2907645 | d73b0af93d1bf2da |
| Garden Water Pump | 10.0.0.213 | 98f4abef7c2c | | 2182401498f4abef7c2c | 4715733dd7850f00 |
| WiFi Water Timer | 10.0.0.214 | | | eb54e6ae7c7536a4bbawyw | 2a72efa9b2f36437 |
| Tech Room Light Strips | 10.0.0.215 | | | eb7dd0deaab376a2ffsqwl | 2a72efa9b2f36437 |
| Irrigation Hub | 10.0.0.216 | 10d5615ab16b | | eb3b374a82a993d252j7v9 | 2a72efa9b2f36437 |
| Front Lawn Sprinkler | | | | ebace67e93f8fde4ccdv7p | 2a72efa9b2f36437 |
### Misc Color Profile Notes:
- **1 NAME 3 4 0 255 2 5 1500 8000**
- 20 NAME 22 23 29 1000 21 24 2700 6500
infrastructure/networking/vpn/netbird.md
## Purpose
Netbird is a free and open-source VPN server and client platform. This document illustrates how to deploy Netbird into a homelab or business environment.

!!! note "Assumptions"
    It is assumed that you are running Rocky Linux 10. You can technically use any distribution, but the command syntax will differ by platform, and this document will not outline every possible operating system.

### Install Prerequisites
You need to install a few things before deploying Netbird. Run the following commands to set up the server environment. This also assumes that you have opened all of the necessary ports listed in the [official Netbird deployment documentation](https://docs.netbird.io/selfhosted/selfhosted-quickstart) and configured a reverse proxy pointing at port 80 on the Netbird server.

!!! warning "Run as Non-Sudo"
    Run all of the commands below as a normal user; do not use `sudo su` when deploying Netbird.
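If the reverse proxy mentioned in the prerequisites happens to be Traefik, a file-provider dynamic-configuration entry could look something like this. This is an illustrative sketch only: the router/service names, the `websecure` entrypoint, and the backend address (the Netbird host's IP from this guide) are assumptions, not part of the official Netbird instructions.

```yaml
http:
  routers:
    netbird:
      rule: "Host(`vpn.bunny-lab.io`)"
      entryPoints:
        - websecure
      service: netbird
      tls: {}
  services:
    netbird:
      loadBalancer:
        servers:
          - url: "http://192.168.3.65:80"
```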
```sh
# Update system & install necessary packages
sudo dnf update -y
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin jq
sudo systemctl enable docker --now

# Configure normal user to have docker privileges
sudo usermod -aG docker nicole

# Logout and log back in via SSH
exit
ssh nicole@192.168.3.65

# Create Netbird project directory and pull down installation files
sudo mkdir -p /srv/containers/netbird
sudo chmod -R 770 /srv/containers/netbird
sudo chown -R nicole:docker /srv/containers/netbird
cd /srv/containers/netbird
curl -sSLO https://github.com/netbirdio/netbird/releases/latest/download/getting-started-with-zitadel.sh

# Deploy Netbird
export NETBIRD_DOMAIN=vpn.bunny-lab.io
bash getting-started-with-zitadel.sh
```
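Before kicking off the installer, it may be worth confirming that the docker group change from the steps above actually took effect after re-logging in. A quick illustrative check:

```sh
# Confirm the current SSH session picked up the docker group
id -nG
# If "docker" is not in the list, log out and back in before continuing

# Confirm the Docker daemon is reachable without sudo
docker info > /dev/null 2>&1 && echo "Docker OK" || echo "Docker not reachable yet"
```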
### Example Deployment Output
If everything is working correctly, you can go make some coffee and come back. When everything is done getting set up, you will see output similar to the below:
```sh

```