Re-Structured Documentation

**Purpose**: This document outlines common maintenance tasks you may need to run against your Proxmox cluster.
## Delete Node from Cluster
Sometimes you may need to delete a node from the cluster, for example if you have rebuilt it or had issues and needed to destroy it. In these instances, you would run the following command (assuming you have a 3-node quorum in your cluster).
```
pvecm delnode proxmox-node-01
```
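Before and after removing a node, it helps to confirm quorum and membership from any remaining node. A minimal check using standard `pvecm` subcommands might look like this:
``` sh
pvecm status   # shows quorum state and expected votes
pvecm nodes    # lists the nodes currently in the cluster
```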

## Initial Installation / Configuration
Proxmox Virtual Environment is an open source server virtualization management solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, highly available clusters, storage and networks with an integrated, easy-to-use web interface or via CLI.
!!! note
This document assumes you have a storage server that hosts ISO files via a CIFS/SMB share and can provide an iSCSI LUN for VM & container storage. It also assumes you are using a TrueNAS Core server to host both of these services.
### Create the first Node
You will need to download the [Proxmox VE 8.1 ISO Installer](https://www.proxmox.com/en/downloads) from the official Proxmox website. Once it is downloaded, you can use [Balena Etcher](https://etcher.balena.io/#download-etcher) or [Rufus](https://rufus.ie/en/) to write the installer to a USB drive, then boot the server from it to install Proxmox.
!!! warning
If you are virtualizing Proxmox under a Hyper-V environment, you will need to follow the [Official Documentation](https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/enable-nested-virtualization) to ensure that nested virtualization is enabled. An example is listed below:
```
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true # (1)
Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On # (2)
```
1. This tells Hyper-V to allow the GuestVM to behave as a hypervisor, nested under Hyper-V, so the virtualization functionality of the hypervisor's CPU is passed through to the GuestVM.
2. This tells Hyper-V to allow your GuestVM to have multiple nested virtual machines with their own independent MAC addresses. This is useful when using nested virtual machines, but is also a requirement when you set up a [Docker Network](https://docs.bunny-lab.io/Containers/Docker/Docker%20Networking/) leveraging MACVLAN technology.
### Networking
You will need to set a static IP address; in this case, it will be an address within the 20GbE network. You will be prompted to enter these values during the ProxmoxVE installation. Be sure to set the hostname to something that matches the following FQDN: `proxmox-node-01.MOONGATE.local`.
| Hostname | IP Address | Subnet Mask | Gateway | DNS Server | iSCSI Portal IP |
| --------------- | --------------- | ------------------- | ------- | ---------- | ----------------- |
| proxmox-node-01 | 192.168.101.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.101.100 |
| proxmox-node-01 | 192.168.103.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.103.100 |
| proxmox-node-02 | 192.168.102.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.102.100 |
| proxmox-node-02 | 192.168.104.200 | 255.255.255.0 (/24) | None | 1.1.1.1 | 192.168.104.100 |
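After the installer finishes, you can sanity-check the hostname and addressing from the node's shell. This is a minimal sketch; interface names and the exact output will vary with your hardware:
``` sh
hostname --fqdn   # should report proxmox-node-01.MOONGATE.local
ip -br addr       # confirm the static addresses entered during installation
```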
### iSCSI Initiator Configuration
You will need to add the iSCSI initiator from the Proxmox node to the allowed initiator list in TrueNAS Core under "**Sharing > Block Shares (iSCSI) > Initiators Groups**".
In this instance, we will reference Group ID: `2`. We need to add the initiator to the "**Allowed Initiators (IQN)**" section. This also includes the following networks that are allowed to connect to the iSCSI portal:
- `192.168.101.0/24`
- `192.168.102.0/24`
- `192.168.103.0/24`
- `192.168.104.0/24`
To get the iSCSI initiator IQN of the current Proxmox node, navigate to the Proxmox server's web UI, typically located at `https://<IP>:8006`, and log in with the username `root` and the password you set during the initial installation.
- On the left-hand side, click on the name of the server node (e.g. `proxmox-node-01` or `proxmox-node-02`)
- Click on "**Shell**" to open a CLI to the server
- Run the following command to get the iSCSI Initiator (IQN) name to give to TrueNAS Core for the previously-mentioned steps:
``` sh
cat /etc/iscsi/initiatorname.iscsi | grep "InitiatorName=" | sed 's/InitiatorName=//'
```
!!! example
Output of this command will look something like `iqn.1993-08.org.debian:01:b16b0ff1778`.
## Disable Enterprise Subscription functionality
You will likely not be paying for / using the enterprise subscription, so we are going to disable that repository and enable the no-subscription (non-production) repository instead. The no-subscription builds are surprisingly stable and should not cause you any issues.
Add Unstable Update Repository:
```jsx title="/etc/apt/sources.list"
# Add to the end of the file
# Non-Production / Unstable Updates
deb https://download.proxmox.com/debian/pve bookworm pve-no-subscription
```
!!! warning
Please note the reference to `bookworm` in the sections above and below this notice; it may differ depending on the version of ProxmoxVE you are deploying. Reference the Debian codename used by the other entries in the sources.list file to know which one to use in the added line.
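If you are unsure which Debian codename your node is based on, a quick check (a minimal sketch) is:
``` sh
grep VERSION_CODENAME /etc/os-release   # e.g. VERSION_CODENAME=bookworm on ProxmoxVE 8.x
```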
Comment-Out Enterprise Repository:
```jsx title="/etc/apt/sources.list.d/pve-enterprise.list"
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
```
Pull / Install Available Updates:
``` sh
apt-get update
apt dist-upgrade
reboot
```
## NIC Teaming
You will need to set up NIC teaming to configure a LACP LAGG. This will add redundancy and a way for devices outside of the 20GbE backplane to interact with the server.
- Ensure that all of the network interfaces appear as something similar to the following:
```jsx title="/etc/network/interfaces"
iface eno1 inet manual
iface eno2 inet manual
# etc
```
- Adjust the network interfaces to add a bond:
```jsx title="/etc/network/interfaces"
auto eno1
iface eno1 inet manual
auto eno2
iface eno2 inet manual
auto bond0
iface bond0 inet manual
bond-slaves eno1 eno2
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer2+3
auto vmbr0
iface vmbr0 inet static
address 192.168.0.11/24
gateway 192.168.0.1
bridge-ports bond0
bridge-stp off
bridge-fd 0
# bridge-vlan-aware yes # I do not use VLANs
# bridge-vids 2-4094 # I do not use VLANs (This could be set to any VLANs you want it a member of)
```
!!! warning
Be sure to include both interfaces of the dual-port 10GbE connection in the network configuration. The final example document will be updated once the production server is operational.
- Reboot the server again so the networking changes fully take effect. Use iLO / iDRAC / IPMI if your server has that functionality, in case the configuration goes errant and needs manual intervention / troubleshooting to regain SSH control of the Proxmox server.
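Once the node is back up, you can verify that the LACP bond actually negotiated and that the bridge came up. A minimal sketch, assuming the `bond0` and `vmbr0` names used above:
``` sh
cat /proc/net/bonding/bond0   # "Bonding Mode" should show IEEE 802.3ad Dynamic link aggregation
ip -br link show bond0        # the bond should be UP
ip -br addr show vmbr0        # the bridge should hold the static address
```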
## Generalizing VMs for Cloning / Templating
These are the commands I run after cloning a Linux machine so that it resets all identifying information inherited from the machine it was cloned from.
!!! note
If you use cloud-init-aware OS images as described under Cloud-Init Support on https://pve.proxmox.com/pve-docs/chapter-qm.html, these steps won't be necessary!
```jsx title="Change Hostname"
sudo nano /etc/hostname
```
```jsx title="Change Hosts File"
sudo nano /etc/hosts
```
```jsx title="Reset the Machine ID"
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
```
```jsx title="Regenerate SSH Keys"
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
```
```jsx title="Reboot the Server to Apply Changes"
reboot
```
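After the reboot, it is worth confirming the clone now has its own identity. A minimal sketch of what to check (the ed25519 host key path is typical on Debian/Ubuntu guests):
``` sh
hostnamectl                                       # confirm the new hostname
cat /etc/machine-id                               # should differ from the source VM
ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub  # host key fingerprint should differ
```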
## Configure Alerting
Setting up alerts in Proxmox is critical to making sure you are notified if something goes wrong with your servers.
https://technotim.live/posts/proxmox-alerts/

**Purpose**: There is a way to integrate ProxmoxVE and TrueNAS more deeply using SSH, simplifying the deployment of virtual disks/volumes passed into GuestVMs in ProxmoxVE. Using ZFS over iSCSI will give you the following non-exhaustive list of benefits:
- Automatically make Zvols in a ZFS Storage Pool
- Automatically bind device-based iSCSI Extents/LUNs to the Zvols
- Allow TrueNAS to handle VM snapshots directly
- Simplify the filesystem overhead of using TrueNAS and iSCSI with ProxmoxVE
- Ability to take snapshots of GuestVMs
- Ability to perform live-migrations of GuestVMs between ProxmoxVE cluster nodes
!!! note "Environment Assumptions"
This document assumes you are running at least 2 ProxmoxVE nodes. For the sake of the example, it will assume they are named `proxmox-node-01` and `proxmox-node-02`. We will also assume you are using TrueNAS Core; TrueNAS SCALE should work in the same way, but there may be minor operational / setup differences between the two deployments of TrueNAS.
Secondly, this guide assumes the ProxmoxVE cluster nodes and TrueNAS server exist on the same network `192.168.101.0/24`.
## ZFS over iSCSI Operational Flow
``` mermaid
sequenceDiagram
participant ProxmoxVE as ProxmoxVE Cluster
participant TrueNAS as TrueNAS Core (inc. iSCSI & ZFS Storage)
ProxmoxVE->>TrueNAS: Cluster VM node connects via SSH to create ZVol for VM
TrueNAS->>TrueNAS: Create ZVol in ZFS storage pool
TrueNAS->>TrueNAS: Bind ZVol to iSCSI LUN
ProxmoxVE->>TrueNAS: Connect to iSCSI & attach ZVol as VM storage
ProxmoxVE->>TrueNAS: (On-Demand) Connect via SSH to create VM snapshot of ZVol
TrueNAS->>TrueNAS: Create Snapshot of ZVol/VM
```
## All ZFS Storage Nodes / TrueNAS Servers
### Configure SSH Key Exchange
You first need to make some changes to the SSHD configuration of the ZFS server(s) storing data for your cluster. This is fairly straightforward and only needs two lines adjusted. This is based on the [Proxmox ZFS over ISCSI](https://pve.proxmox.com/wiki/Legacy:_ZFS_over_iSCSI) documentation. Be sure to restart the SSH service or reboot the storage server after making the changes below before proceeding to the next steps.
=== "OpenSSH-based OS"
```jsx title="/etc/ssh/sshd_config"
UseDNS no
GSSAPIAuthentication no
```
=== "Solaris-based OS"
```jsx title="/etc/ssh/sshd_config"
LookupClientHostnames no
VerifyReverseMapping no
GSSAPIAuthentication no
```
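How you restart SSH depends on the storage OS; the commands below are hedged examples, and on TrueNAS you can also simply toggle the SSH service from the web UI instead:
``` sh
# Debian-based storage hosts (e.g. a TrueNAS SCALE shell):
systemctl restart ssh
# FreeBSD / Solaris-derived hosts (service name may vary by platform):
service sshd restart
```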
## All ProxmoxVE Cluster Nodes
### Configure SSH Key Exchange
The first step is creating SSH trust between the ProxmoxVE cluster nodes and the TrueNAS storage appliance. You will leverage the ProxmoxVE `shell` on every node of the cluster to run the following commands.
**Note**: I will name the SSH key after the server address `192.168.101.100` for simplicity, so it is obvious which server the identity belongs to. You could also name it something else, such as `storage.bunny-lab.io_id_rsa`.
``` sh
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.101.100_id_rsa # (1)
ssh-copy-id -i /etc/pve/priv/zfs/192.168.101.100_id_rsa.pub root@192.168.101.100 # (2)
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa root@192.168.101.100 # (3)
```
1. Do not set a password. It will break the automatic functionality.
2. Send the SSH key to the TrueNAS server.
3. Connect to the TrueNAS server at least once to finish establishing the connection.
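To confirm the key exchange works non-interactively (which is what the plugin relies on), you can run a quick test from each ProxmoxVE node; `zfs list` is a harmless read-only command on the TrueNAS side:
``` sh
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa root@192.168.101.100 zfs list
```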
### Install & Configure Storage Provider
Now you need to install and configure the storage provider plugin on the ProxmoxVE nodes. Run the commands below within a ProxmoxVE shell; when finished, log out of the ProxmoxVE WebUI, clear the browser cache for ProxmoxVE, then log back in. This adds a new storage provider called `FreeNAS-API` under the `ZFS over iSCSI` storage type.
``` sh
keyring_location=/usr/share/keyrings/ksatechnologies-truenas-proxmox-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/gpg.284C106104A8CE6D.key' | gpg --dearmor >> ${keyring_location}
#################################################################
cat << EOF > /etc/apt/sources.list.d/ksatechnologies-repo.list
# Source: KSATechnologies
# Site: https://cloudsmith.io
# Repository: KSATechnologies / truenas-proxmox
# Description: TrueNAS plugin for Proxmox VE - Production
deb [signed-by=${keyring_location}] https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/deb/debian any-version main
EOF
#################################################################
apt update
apt install freenas-proxmox
apt full-upgrade
systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
```
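Before moving on, it can be worth confirming the plugin package actually installed; a quick check:
``` sh
apt policy freenas-proxmox   # should show an installed version from the Cloudsmith repository
```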
## Primary ProxmoxVE Cluster Node
From this point, we are ready to add the shared storage provider to the cluster via the primary node in the cluster. Using the primary node is not strictly required; it just simplifies the documentation.
Navigate to **"Datacenter (BUNNY-CLUSTER) > Storage > Add > ZFS over iSCSI"**
| **Field** | **Value** | **Additional Notes** |
| :--- | :--- | :--- |
| ID | `bunny-zfs-over-iscsi` | Friendly Name |
| Portal | `192.168.101.100` | IP Address of iSCSI Portal |
| Pool | `PROXMOX-ZFS-STORAGE` | This is the ZFS Storage Pool you will use to store GuestVM Disks |
| ZFS Block Size | `4k` | |
| Target | `iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage` | The iSCSI Target |
| Target Group | `<Leave Blank>` | |
| Enable | `<Checked>` | |
| iSCSI Provider | `FreeNAS-API` | |
| Thin-Provision | `<Checked>` | |
| Write Cache | `<Checked>` | |
| API use SSL | `<Unchecked>` | Disabled unless you have SSL Enabled on TrueNAS |
| API Username | `root` | This is the account that is allowed to make ZFS zvols / datasets |
| API IPv4 Host | `192.168.101.100` | iSCSI Portal Address |
| API Password | `<Root Password of TrueNAS Box>` | |
| Nodes | `proxmox-node-01,proxmox-node-02` | All ProxmoxVE Cluster Nodes |
!!! success "Storage is Provisioned"
At this point, the storage should propagate throughout the ProxmoxVE cluster, and appear as a location to deploy virtual machines and/or containers. You can now use this storage for snapshots and live-migrations between ProxmoxVE cluster nodes as well.
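Once added, the new storage should report healthy from any cluster node. A minimal verification sketch, assuming the storage ID `bunny-zfs-over-iscsi` used above:
``` sh
pvesm status                      # the new storage should be listed as active
pvesm list bunny-zfs-over-iscsi   # lists any guest disks stored on it
```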