Documentation Restructure

## Purpose

You may need to rapidly deploy many copies of a virtual machine and not want the hassle of configuring each one by hand as new workloads appear. Creating a cloud-init template allows you to quickly deploy production-ready copies of a template VM (created below) into a ProxmoxVE environment.

### Download Image and Import into ProxmoxVE

You will first need to pull down the OS image from Ubuntu's website via the CLI, as there is currently no way to do this through the WebUI. Using SSH or the Shell of one of the ProxmoxVE servers, run the following commands to download the image and import it into ProxmoxVE. Replace `nfs-cluster-storage` with the name of the storage you want the VM disks to live on.

```sh
# Make a place to keep cloud images
mkdir -p /var/lib/vz/template/images/ubuntu && cd /var/lib/vz/template/images/ubuntu

# Download Ubuntu 24.04 LTS cloud image (amd64, server)
wget -q --show-progress https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

# Create a Placeholder VM to Attach Cloud Image
qm create 9000 --name ubuntu-2404-cloud --memory 8192 --cores 8 --net0 virtio,bridge=vmbr0

# Set UEFI (OVMF) + SCSI controller (Cloud images expect UEFI firmware and SCSI disk.)
qm set 9000 --bios ovmf --scsihw virtio-scsi-pci
qm set 9000 --efidisk0 nfs-cluster-storage:0,pre-enrolled-keys=1

# Import the disk into ProxmoxVE
qm importdisk 9000 noble-server-cloudimg-amd64.img nfs-cluster-storage --format qcow2

# Query ProxmoxVE to find out where the volume was created
pvesm list nfs-cluster-storage | grep 9000

# Attach the disk to the placeholder VM
qm set 9000 --scsi0 nfs-cluster-storage:9000/vm-9000-disk-0.qcow2

# Configure Disk to Boot
qm set 9000 --boot c --bootdisk scsi0
```
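
As a quick optional sanity check (not part of the original procedure), you can dump the placeholder VM's configuration to confirm the imported disk, EFI disk, and boot order all look right:

```sh
# Confirm the imported disk is attached as scsi0, the EFI disk exists, and scsi0 is the boot disk
qm config 9000 | grep -E 'scsi0|efidisk0|boot'
```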

### Add Cloud-Init Drive & Configure Template Defaults

Now that the Ubuntu cloud image is attached as the VM’s primary disk, you need to attach a Cloud-Init drive. This special drive is where Proxmox writes your user data (username, SSH keys, network settings, etc.) at clone time.

```sh
# Add a Cloud-Init drive to the VM
qm set 9000 --ide2 nfs-cluster-storage:cloudinit

# Enable QEMU Guest Agent
qm set 9000 --agent enabled=1

# Set a default Cloud-Init user (replace 'nicole' with your preferred username)
qm set 9000 --ciuser nicole

# Set a default password (this can be reset per-clone)
qm set 9000 --cipassword 'SuperSecretPassword'

# Set DNS Servers and Search Domain
qm set 9000 --nameserver "1.1.1.1 1.0.0.1"
qm set 9000 --searchdomain bunny-lab.io

# Enable automatic package upgrades within the VM on first boot
qm set 9000 --ciupgrade 1

# Download your infrastructure public SSH key onto the Proxmox node
wget -O /root/infrastructure_id_rsa.pub \
  https://git.bunny-lab.io/Infrastructure/LinuxServer_SSH_PublicKey/raw/branch/main/id_rsa.pub

# Tell Proxmox to inject this key via Cloud-Init (note: the option is "sshkeys", plural)
qm set 9000 --sshkeys /root/infrastructure_id_rsa.pub

# Configure networking to use DHCP by default (this will be overridden at cloning)
qm set 9000 --ipconfig0 ip=dhcp
```
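
Optionally, you can preview the Cloud-Init data Proxmox will generate from the settings above before converting anything into a template:

```sh
# Preview the generated user-data (username, SSH keys, upgrade setting, etc.)
qm cloudinit dump 9000 user

# Preview the generated network configuration
qm cloudinit dump 9000 network
```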

### Setup Packages in VM & Convert to Template

A few things need to happen before the VM can be turned into a template and cloned. Boot the VM you created (ID 9000) and run the following commands inside it to prepare it for becoming a template:

```sh
# Install updates and the required guest packages
sudo apt update && sudo apt upgrade -y
sudo apt install -y qemu-guest-agent cloud-init
sudo systemctl enable qemu-guest-agent --now

# (Placeholder) Install any additional packages or apply site-specific customizations here

# Shut the VM down cleanly before templating
sudo shutdown now

# Back on the Proxmox host: convert the placeholder VM into a reusable template
# (ignore chattr errors on NFS storage backends)
qm template 9000
```

### Clone the Template into a New VM

You can now create new VMs instantly from the template we created above.

=== "Via WebUI"

    - Log into the ProxmoxVE node where the template was created
    - Right-click the template > "**Clone**"
    - Give the new VM a name
    - Set the "Mode" of the clone to "**Full Clone**"
    - Navigate to the new GuestVM in ProxmoxVE and click on the "**Cloud-Init**" tab
    - Change the "**User**" and "**Password**" fields if you want to change them
    - Double-click on the "**IP Config (net0)**" option
        - **IPv4/CIDR**: `192.168.3.67/24`
        - **Gateway (IPv4)**: `192.168.3.1`
    - Click the "**OK**" button
    - Start the VM and wait for it to automatically provision itself

=== "Via CLI"

    ``` sh
    # Create a new VM (example: VM 9100) cloned from the template
    qm clone 9000 9100 --name ubuntu-2404-test --full

    # Optionally, override Cloud-Init settings for this clone:
    qm set 9100 --ciuser nicole --cipassword 'AnotherStrongPass'
    qm set 9100 --ipconfig0 ip=192.168.3.67/24,gw=192.168.3.1

    # Boot the new cloned VM
    qm start 9100
    ```

### Configure VM Hostname

At this point, the hostname of the VM will be randomized, and you will probably want to set it statically. After the server has finished starting, you can do that with the following commands (replace `new-hostname` with the name you want):

```sh
# Set the hostname persistently
sudo hostnamectl set-hostname new-hostname

# Update any references to the old hostname
sudo nano /etc/hosts
```

**File**: `platforms/virtualization/proxmox/common-tasks.md`

**Purpose**: This document outlines common tasks that you may need to run against your ProxmoxVE cluster.

## Delete Node from Cluster

Sometimes you may need to delete a node from the cluster because you rebuilt it, or because it had issues and needed to be destroyed. In these instances, run the following command from one of the remaining cluster nodes (assuming you still have quorum, e.g. a 3-node cluster):

```sh
pvecm delnode proxmox-node-01
```
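
Afterwards, you can confirm the node was removed and that the remaining cluster is still quorate:

```sh
# List the remaining cluster members
pvecm nodes

# Check overall cluster / quorum health
pvecm status
```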

## Purpose

This document describes the **end-to-end procedure** for creating a **thick-provisioned iSCSI-backed shared storage target** on **TrueNAS CORE**, and consuming it from a **Proxmox VE cluster** using **shared LVM**.

This approach is intended to:

- Provide SAN-style block semantics
- Enable Proxmox-native snapshot functionality (LVM volume chains)
- Avoid third-party plugins or middleware
- Be fully reproducible via CLI

## Assumptions

- TrueNAS **CORE** (not SCALE)
- ZFS pool already exists and is healthy
- SSH service is enabled on TrueNAS
- Proxmox VE nodes have network connectivity to TrueNAS
- iSCSI traffic is on a reliable, low-latency network (10GbE recommended)
- All VM workloads are drained from at least one Proxmox node for maintenance

!!! note "Proxmox VE Version Context"
    This guide assumes **Proxmox VE 9.1.4 (or later)**. Snapshot-as-volume-chain support on shared LVM (e.g., iSCSI) is available and improved, including enhanced handling of vTPM state in offline snapshots.

!!! warning "Important"
    `volblocksize` **cannot be changed after zvol creation**. Choose carefully.

## Target Architecture

```
ZFS Pool
 └─ Zvol (Thick / Reserved)
     └─ iSCSI Extent
         └─ Proxmox LVM PV
             └─ Shared VG
                 └─ VM Disks
```

## Create a Dedicated Zvol for Proxmox

### Variables

Adjust as needed before execution.

```sh
POOL_NAME="CLUSTER-STORAGE"
ZVOL_NAME="iscsi-storage"
ZVOL_SIZE="14T"
VOLBLOCKSIZE="16K"
```

### Create the Zvol (Thick-Provisioned)

```sh
zfs create -V ${ZVOL_SIZE} \
  -o volblocksize=${VOLBLOCKSIZE} \
  -o compression=lz4 \
  -o refreservation=${ZVOL_SIZE} \
  ${POOL_NAME}/${ZVOL_NAME}
```
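
A quick optional check that the zvol was created with the intended properties:

```sh
# Confirm size, block size, reservation, and compression on the new zvol
zfs get volsize,volblocksize,refreservation,compression ${POOL_NAME}/${ZVOL_NAME}
```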

!!! note
    The `refreservation` enforces **true thick provisioning** and prevents overcommit.

## Configure iSCSI Target (TrueNAS CORE)

This section is driven entirely from the **CLI**:

- **CLI** is used for ZFS and LUN (extent backing) creation
- **CLI** is used for the iSCSI portal, target, and extent association (see the warning below about avoiding the GUI)
- **CLI** is used again for validation

### Enable iSCSI Service

```sh
service ctld start
sysrc ctld_enable=YES
```

### Create the iSCSI LUN Backing (CLI)

This step creates the **actual block-backed LUN** that will be exported via iSCSI.

```sh
# Sanity check: confirm the backing zvol exists
ls -l /dev/zvol/${POOL_NAME}/${ZVOL_NAME}

# Create CTL LUN backed by the zvol
ctladm create -b block \
  -o file=/dev/zvol/${POOL_NAME}/${ZVOL_NAME} \
  -S ISCSI-STORAGE \
  -d ISCSI-STORAGE
```

### Verify the LUN Is Real and Correctly Sized

```sh
ctladm devlist -v
```

!!! tip
    `Size (Blocks)` must be **non-zero** and match the zvol size. If it is `0`, stop and correct before proceeding.

### Configure iSCSI Portal, Target, and Extent Association (CLI Only)

!!! warning "Do NOT Use the TrueNAS iSCSI GUI"
    **Once you choose a CLI-managed iSCSI configuration, the TrueNAS Web UI must never be used for iSCSI.**
    Opening or modifying **Sharing → Block Shares (iSCSI)** in the GUI will **overwrite CTL runtime state**, invalidate manual `ctladm` configuration, and result in targets that appear correct but expose **no LUNs** to initiators.

    **This configuration is CLI-owned and CLI-managed.**

    - Do **not** add, edit, or view iSCSI objects in the GUI
    - Do **not** use the iSCSI wizard
    - Do **not** mix GUI extents with CLI-created LUNs

#### Create iSCSI Portal (Listen on All Interfaces)

```sh
# Backup any existing ctl.conf
cp -av /etc/ctl.conf /etc/ctl.conf.$(date +%Y%m%d-%H%M%S).bak 2>/dev/null || true

# Write a clean /etc/ctl.conf
cat > /etc/ctl.conf <<'EOF'
# --- Bunny Lab: Proxmox iSCSI (CLI-only) ---
auth-group "no-auth" {
    auth-type none
    initiator-name "iqn.1993-08.org.debian:01:5b963dd51f93" # cluster-node-01 ("cat /etc/iscsi/initiatorname.iscsi")
    initiator-name "iqn.1993-08.org.debian:01:1b4df0fa3540" # cluster-node-02 ("cat /etc/iscsi/initiatorname.iscsi")
    initiator-name "iqn.1993-08.org.debian:01:5669aa2d89a2" # cluster-node-03 ("cat /etc/iscsi/initiatorname.iscsi")
}

# Listen on all interfaces on the default iSCSI port
portal-group "pg0" {
    listen 0.0.0.0:3260
    discovery-auth-group "no-auth"
}

# Create a target IQN
target "iqn.2026-01.io.bunny-lab:storage" {
    portal-group "pg0"
    auth-group "no-auth"

    # Export LUN 0 backed by the zvol device
    lun 0 {
        path /dev/zvol/CLUSTER-STORAGE/iscsi-storage
        serial "ISCSI-STORAGE"
        device-id "ISCSI-STORAGE"
    }
}
EOF

# Restart ctld to apply the configuration file
service ctld restart

# Verify the iSCSI listener is actually up
sockstat -4l | grep ':3260'

# Verify CTL now shows an iSCSI frontend
ctladm portlist -v | egrep -i '(^Port|iscsi|listen=)'
```

!!! success
    At this point, the iSCSI target is live and correctly exposing a block device to initiators. You may now proceed to the **Connect from ProxmoxVE Nodes** section.

## Connect from ProxmoxVE Nodes

Perform the following **on each Proxmox node**.

```sh
# Install iSCSI Utilities
apt update
apt install -y open-iscsi lvm2

# Discover Target
iscsiadm -m discovery -t sendtargets -p <TRUENAS_IP>

# Log In
iscsiadm -m node --login

# Show Session Details (verbose output confirming the target is attached)
iscsiadm -m session -P 3

# Verify Device
# If everything works successfully, you should see something like "sdi 8:128 0 8T 0 disk".
lsblk
```
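
So that the session survives node reboots, you may also want to mark the discovered node records for automatic login (a minimal sketch, assuming the default records created by the discovery step above):

```sh
# Log back in automatically at boot
iscsiadm -m node -p <TRUENAS_IP> --op update -n node.startup -v automatic
```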

## Create Shared LVM (Execute on One Node Only)

!!! warning "Important"
    **Only run LVM creation on ONE node**. All other nodes will only scan.

```sh
# Initialize Physical Volume (replace /dev/sdX with the iSCSI disk found via lsblk)
pvcreate /dev/sdX

# Create Volume Group
vgcreate vg_proxmox_iscsi /dev/sdX
```
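
You can verify the new physical volume and volume group before moving on:

```sh
# Confirm the PV was initialized and the VG exists with the expected size
pvs /dev/sdX
vgs vg_proxmox_iscsi
```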

## Register Storage in Proxmox

### Rescan LVM (Other Nodes)

```sh
pvscan
vgscan
```

### Add Storage (GUI)

**Datacenter → Storage → Add → LVM** (a CLI equivalent is sketched after this list)

- ID: `iscsi-cluster-lvm`
- Volume Group: `vg_proxmox_iscsi`
- Content: `Disk image, Container`
- Shared: ✔️
- Allow Snapshots as Volume-Chain: ✔️
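
If you prefer the CLI, a roughly equivalent storage definition can be added with `pvesm` (a sketch of the basic fields only; the snapshot-as-volume-chain option can still be toggled in the GUI afterwards):

```sh
# Register the shared LVM storage cluster-wide (run once on any node)
pvesm add lvm iscsi-cluster-lvm --vgname vg_proxmox_iscsi --shared 1 --content images,rootdir
```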

## Validation

- Snapshot create / revert / delete (a smoke-test sketch follows this list)
- Live migration between nodes
- PBS backup and restore test
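
A minimal smoke test of the snapshot path, assuming a disposable test VM with ID `100` already has its disk on the new storage:

```sh
# Create, list, roll back to, and delete a snapshot
qm snapshot 100 pre-test
qm listsnapshot 100
qm rollback 100 pre-test
qm delsnapshot 100 pre-test
```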

!!! success
    If all validation tests pass, the storage is production-ready.

## Expanding iSCSI Storage (No Downtime)

If you need to expand the newly-created iSCSI LUN, run the ZFS commands below on the TrueNAS Core server. The first command increases the zvol size; the second pre-allocates the new space (keeping it thick-provisioned).

!!! warning "ProxmoxVE Cluster-specific Notes"

    - `pvresize` must be executed on **exactly one** ProxmoxVE node.
    - All other nodes should only perform `pvscan` / `vgscan` after the resize.
    - Running `pvresize` on multiple nodes can corrupt shared LVM metadata.

```sh
# Expand Zvol (TrueNAS)
zfs set volsize=16T CLUSTER-STORAGE/iscsi-storage
zfs set refreservation=16T CLUSTER-STORAGE/iscsi-storage
service ctld restart

# Rescan the block device on all ProxmoxVE nodes
echo 1 > /sys/class/block/sdX/device/rescan

# Verify on all nodes that the new size is displayed
lsblk /dev/sdX

# Run this on only ONE of the ProxmoxVE nodes
pvresize /dev/sdX

# Rescan on the other nodes (they will now see the expanded free space)
pvscan
vgscan
```
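
To confirm the expansion propagated all the way up the stack, check the volume group's free space on any node:

```sh
# VFree should now reflect the additional capacity
vgs vg_proxmox_iscsi
```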

## Purpose

In some very specific situations, an LVM thin pool / volume group just won't come online in ProxmoxVE. If this happens, you can run the following commands (replacing the placeholder names with your own storage) to manually bring the storage online.

```sh
# Deactivate the thin pool and its metadata / data sub-volumes
lvchange -an local-vm-storage/local-vm-storage
lvchange -an local-vm-storage/local-vm-storage_tmeta
lvchange -an local-vm-storage/local-vm-storage_tdata

# Re-activate every logical volume in the volume group
vgchange -ay local-vm-storage
```
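
You can also inspect the activation state of the individual logical volumes directly:

```sh
# The "a" in the lv_attr column indicates an active volume
lvs -a -o lv_name,vg_name,lv_attr local-vm-storage
```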

!!! info "Be Patient"
    It can take some time for everything to come online.

!!! success
    If you see something like this: `6 logical volume(s) in volume group "local-vm-storage" now active`, then you successfully brought the volume online.

## Purpose

There are a few steps you have to take when upgrading ProxmoxVE from 8.4.1+ to 9.0+. The process is fairly straightforward; just follow the instructions below.

!!! info "GuestVM Assumptions"
    It is assumed that if you are running a ProxmoxVE cluster, you will migrate all GuestVMs to another cluster node. If this is a standalone ProxmoxVE server, shut down all GuestVMs safely before proceeding.

!!! warning "Perform `pve8to9` Readiness Check"
    It's critical that you run the `pve8to9` command to ensure that your ProxmoxVE server meets all of the requirements and doesn't have any failures or potentially server-breaking warnings. If the `pve8to9` command is unknown, run `apt update && apt dist-upgrade` in the shell and try again. Warnings should be addressed ad-hoc, but *CPU Microcode warnings can be safely ignored*.

**Example pve8to9 Summary Output**:

```sh
= SUMMARY =

TOTAL:     48
PASSED:    39
SKIPPED:   8
WARNINGS:  1
FAILURES:  0
```

### Update Repositories from `bookworm` to `trixie`

```sh
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/pve-install-repo.list
apt update
```
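
A quick check that no repository entries still reference `bookworm`:

```sh
# Should return no matches once all entries have been rewritten to trixie
grep -rn bookworm /etc/apt/sources.list /etc/apt/sources.list.d/
```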

### Upgrade to ProxmoxVE 9.0

!!! warning "Run Upgrade Commands in iLO/iDRAC/IPMI"
    If you are using SSH, it is very likely the session will be terminated unexpectedly during the upgrade. Use a local or remote console (iLO / iDRAC / IPMI) to run the commands below, both to ensure you maintain access to the console and to catch any issues that arise during POST after the reboot.

```sh
apt dist-upgrade -y
reboot
```
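
After the node comes back up, confirm it is actually running ProxmoxVE 9.x:

```sh
# Prints the running pve-manager / kernel version
pveversion
```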

!!! note "Disable `pve-enterprise` Repository"
    At this point, the ProxmoxVE server should be running on v9.0+. You will want to disable the `pve-enterprise` repository, as leaving it enabled will cause errors during future updates. One way to do that is sketched below.
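
A minimal sketch of disabling the repository, assuming it still lives in the classic `/etc/apt/sources.list.d/pve-enterprise.list` file (newer installs may use the deb822-style `pve-enterprise.sources` format instead, in which case edit that file):

```sh
# Comment out the enterprise repository entry and refresh the package lists
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
apt update
```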

**File**: `platforms/virtualization/proxmox/proxmoxve.md`

## Initial Installation / Configuration

Proxmox Virtual Environment is an open-source server virtualization management solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, highly available clusters, storage, and networks with an integrated, easy-to-use web interface or via CLI.

!!! note
    This document assumes you have a storage server that hosts ISO files via a CIFS/SMB share and can also provide an iSCSI LUN (for VM & container storage). This document assumes you are using a TrueNAS Core server to host both of these services.

### Create the first Node

You will need to download the [Proxmox VE 8.1 ISO Installer](https://www.proxmox.com/en/downloads) from the official Proxmox website. Once it is downloaded, you can use [Balena Etcher](https://etcher.balena.io/#download-etcher) or [Rufus](https://rufus.ie/en/) to deploy Proxmox onto a server.

!!! warning
    If you are virtualizing Proxmox under a Hyper-V environment, you will need to follow the [Official Documentation](https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/enable-nested-virtualization) to ensure that nested virtualization is enabled. An example is listed below:
    ```
    Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true # (1)
    Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On # (2)
    ```

    1. This tells Hyper-V to allow the GuestVM to behave as a hypervisor, nested under Hyper-V, allowing the virtualization functionality of the hypervisor's CPU to be passed through to the GuestVM.
    2. This tells Hyper-V to allow your GuestVM to have multiple nested virtual machines with their own independent MAC addresses. This is useful when using nested virtual machines, but is also a requirement when you set up a [Docker Network](../../../networking/docker-networking/docker-networking.md) leveraging MACVLAN technology.

### Networking

You will need to set a static IP address; in this case, it will be an address within the 20GbE network. You will be prompted to enter these during the ProxmoxVE installation. Be sure to set the hostname to something that matches the following FQDN: `proxmox-node-01.MOONGATE.local`.

| Hostname        | IP Address      | Subnet Mask         | Gateway | DNS Server | iSCSI Portal IP |
| --------------- | --------------- | ------------------- | ------- | ---------- | --------------- |
| proxmox-node-01 | 192.168.101.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.101.100 |
| proxmox-node-01 | 192.168.103.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.103.100 |
| proxmox-node-02 | 192.168.102.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.102.100 |
| proxmox-node-02 | 192.168.104.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.104.100 |

### iSCSI Initiator Configuration

You will need to add the iSCSI initiator from the Proxmox node to the allowed initiator list in TrueNAS Core under "**Sharing > Block Shares (iSCSI) > Initiators Groups**".

In this instance, we will reference Group ID: `2`. We need to add the initiator to the "**Allowed Initiators (IQN)**" section. This also includes the following networks that are allowed to connect to the iSCSI portal:

- `192.168.101.0/24`
- `192.168.102.0/24`
- `192.168.103.0/24`
- `192.168.104.0/24`

To get the iSCSI Initiator IQN of the current Proxmox node, navigate to the Proxmox server's WebUI, typically located at `https://<IP>:8006`, then log in with username `root` and the password you set during the initial setup when the ISO image was mounted earlier.

- On the left-hand side, click on the name of the server node (e.g. `proxmox-node-01` or `proxmox-node-02`)
- Click on "**Shell**" to open a CLI to the server
- Run the following command to get the iSCSI Initiator (IQN) name to give to TrueNAS Core for the previously-mentioned steps:

``` sh
cat /etc/iscsi/initiatorname.iscsi | grep "InitiatorName=" | sed 's/InitiatorName=//'
```

!!! example
    Output of this command will look something like `iqn.1993-08.org.debian:01:b16b0ff1778`.

## Disable Enterprise Subscription Functionality

You will likely not be paying for / using the enterprise subscription, so we are going to disable that functionality and enable the no-subscription ("unstable") builds. These builds are surprisingly stable and should not cause you any issues.

Add the no-subscription update repository:

```jsx title="/etc/apt/sources.list"
# Add to the end of the file
# Non-Production / Unstable Updates
deb https://download.proxmox.com/debian/pve bookworm pve-no-subscription
```

!!! warning
    Please note the reference to `bookworm` in the sections above and below this notice; this may differ depending on the version of ProxmoxVE you are deploying. Reference the version indicated by the rest of the entries in the sources.list file to know which release name to use in the added line.

Comment out the enterprise repository:

```jsx title="/etc/apt/sources.list.d/pve-enterprise.list"
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
```

Pull / Install Available Updates:

``` sh
apt-get update
apt dist-upgrade
reboot
```

## NIC Teaming

You will need to set up NIC teaming to configure an LACP LAGG. This adds redundancy and gives devices outside of the 20GbE backplane a way to interact with the server.

- Ensure that all of the network interfaces appear as something similar to the following:

```jsx title="/etc/network/interfaces"
iface eno1 inet manual
iface eno2 inet manual
# etc
```

- Adjust the network interfaces to add a bond:

```jsx title="/etc/network/interfaces"
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
    address 192.168.0.11/24
    gateway 192.168.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    # bridge-vlan-aware yes # I do not use VLANs
    # bridge-vids 2-4094 # I do not use VLANs (This could be set to any VLANs you want it a member of)
```
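
After the reboot described further below, you can confirm the LACP bond actually formed with both member NICs:

```sh
# Shows the bonding mode, LACP partner details, and the state of each slave interface
cat /proc/net/bonding/bond0
```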

!!! warning
    Be sure to include both interfaces of the (dual-port) 10GbE connections in the network configuration. The final example document will be updated at a later point in time once the production server is operational.

- Reboot the server again so the networking changes take full effect. Use iLO / iDRAC / IPMI if your server has that functionality, in case the configuration goes errant and needs manual intervention / troubleshooting to regain SSH control of the Proxmox server.

## Generalizing VMs for Cloning / Templating

These are the commands I run after cloning a Linux machine so that it resets all machine-specific information inherited from the machine it was cloned from.

!!! note
    If you use cloud-init-aware OS images as described under [Cloud-Init Support](https://pve.proxmox.com/pve-docs/chapter-qm.html) in the Proxmox documentation, these steps won't be necessary!

```jsx title="Change Hostname"
sudo nano /etc/hostname
```

```jsx title="Change Hosts File"
sudo nano /etc/hosts
```

```jsx title="Reset the Machine ID"
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
```

```jsx title="Regenerate SSH Keys"
sudo rm -f /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server
```

```jsx title="Reboot the Server to Apply Changes"
reboot
```

## Configure Alerting

Setting up alerts in Proxmox is critical to making sure you are notified if something goes wrong with your servers.

https://technotim.live/posts/proxmox-alerts/

**File**: `platforms/virtualization/proxmox/zfs-over-iscsi.md`

**Purpose**: ProxmoxVE and TrueNAS can be integrated more deeply using SSH, simplifying the deployment of virtual disks/volumes passed into GuestVMs in ProxmoxVE. Using ZFS over iSCSI will give you the following non-exhaustive list of benefits:

- Automatically make Zvols in a ZFS storage pool
- Automatically bind device-based iSCSI Extents/LUNs to the Zvols
- Allow TrueNAS to handle VM snapshots directly
- Simplify the filesystem overhead of using TrueNAS and iSCSI with ProxmoxVE
- Ability to take snapshots of GuestVMs
- Ability to perform live-migrations of GuestVMs between ProxmoxVE cluster nodes

!!! note "Environment Assumptions"
    This document assumes you are running at least 2 ProxmoxVE nodes. For the sake of the example, it will assume they are named `proxmox-node-01` and `proxmox-node-02`. We will also assume you are using TrueNAS Core; TrueNAS SCALE should work in the same way, but there may be minor operational / setup differences between the two deployments of TrueNAS.

    Secondly, this guide assumes the ProxmoxVE cluster nodes and TrueNAS server exist on the same network, `192.168.101.0/24`.

## ZFS over iSCSI Operational Flow

``` mermaid
sequenceDiagram
    participant ProxmoxVE as ProxmoxVE Cluster
    participant TrueNAS as TrueNAS Core (inc. iSCSI & ZFS Storage)

    ProxmoxVE->>TrueNAS: Cluster VM node connects via SSH to create ZVol for VM
    TrueNAS->>TrueNAS: Create ZVol in ZFS storage pool
    TrueNAS->>TrueNAS: Bind ZVol to iSCSI LUN
    ProxmoxVE->>TrueNAS: Connect to iSCSI & attach ZVol as VM storage
    ProxmoxVE->>TrueNAS: (On-Demand) Connect via SSH to create VM snapshot of ZVol
    TrueNAS->>TrueNAS: Create Snapshot of ZVol/VM
```

## All ZFS Storage Nodes / TrueNAS Servers

### Configure SSH Key Exchange

You first need to make some changes to the SSHD configuration of the ZFS server(s) storing data for your cluster. This is fairly straightforward and only needs two lines adjusted. This is based on the [Proxmox ZFS over iSCSI](https://pve.proxmox.com/wiki/Legacy:_ZFS_over_iSCSI) documentation. Be sure to restart the SSH service or reboot the storage server after making the changes below before proceeding to the next steps.

=== "OpenSSH-based OS"

    ```jsx title="/etc/ssh/sshd_config"
    UseDNS no
    GSSAPIAuthentication no
    ```

=== "Solaris-based OS"

    ```jsx title="/etc/ssh/sshd_config"
    LookupClientHostnames no
    VerifyReverseMapping no
    GSSAPIAuthentication no
    ```

## All ProxmoxVE Cluster Nodes

### Configure SSH Key Exchange

The first step is creating SSH trust between the ProxmoxVE cluster nodes and the TrueNAS storage appliance. You will leverage the ProxmoxVE `shell` on every node of the cluster to run the following commands.

**Note**: I will be naming the SSH key after the server's address, `192.168.101.100`, for simplicity, so I know which server the identity belongs to. You could also name it something else, like `storage.bunny-lab.io_id_rsa`.

``` sh
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.101.100_id_rsa # (1)
ssh-copy-id -i /etc/pve/priv/zfs/192.168.101.100_id_rsa.pub root@192.168.101.100 # (2)
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa root@192.168.101.100 # (3)
```

1. Do not set a password. It will break the automatic functionality.
2. Send the SSH key to the TrueNAS server.
3. Connect to the TrueNAS server at least once to finish establishing the connection.

### Install & Configure Storage Provider

Now you need to set up the storage provider in ProxmoxVE. Run the commands below within a ProxmoxVE shell, then, when finished, log out of the ProxmoxVE WebUI, clear the browser cache for ProxmoxVE, and log back in. This will have added a new storage provider called `FreeNAS-API` under the `ZFS over iSCSI` storage type.

``` sh
keyring_location=/usr/share/keyrings/ksatechnologies-truenas-proxmox-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/gpg.284C106104A8CE6D.key' | gpg --dearmor >> ${keyring_location}

#################################################################
cat << EOF > /etc/apt/sources.list.d/ksatechnologies-repo.list
# Source: KSATechnologies
# Site: https://cloudsmith.io
# Repository: KSATechnologies / truenas-proxmox
# Description: TrueNAS plugin for Proxmox VE - Production
deb [signed-by=${keyring_location}] https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/deb/debian any-version main

EOF
#################################################################

apt update
apt install freenas-proxmox
apt full-upgrade

systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
```

## Primary ProxmoxVE Cluster Node

From this point, we are ready to add the shared storage provider to the cluster via the primary node in the cluster. This is not strictly required; it just simplifies the documentation.

Navigate to **"Datacenter (BUNNY-CLUSTER) > Storage > Add > ZFS over iSCSI"**

| **Field** | **Value** | **Additional Notes** |
| :--- | :--- | :--- |
| ID | `bunny-zfs-over-iscsi` | Friendly Name |
| Portal | `192.168.101.100` | IP Address of iSCSI Portal |
| Pool | `PROXMOX-ZFS-STORAGE` | This is the ZFS Storage Pool you will use to store GuestVM Disks |
| ZFS Block Size | `4k` | |
| Target | `iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage` | The iSCSI Target |
| Target Group | `<Leave Blank>` | |
| Enable | `<Checked>` | |
| iSCSI Provider | `FreeNAS-API` | |
| Thin-Provision | `<Checked>` | |
| Write Cache | `<Checked>` | |
| API use SSL | `<Unchecked>` | Disabled unless you have SSL enabled on TrueNAS |
| API Username | `root` | This is the account that is allowed to make ZFS zvols / datasets |
| API IPv4 Host | `192.168.101.100` | iSCSI Portal Address |
| API Password | `<Root Password of TrueNAS Box>` | |
| Nodes | `proxmox-node-01,proxmox-node-02` | All ProxmoxVE Cluster Nodes |

!!! success "Storage is Provisioned"
    At this point, the storage should propagate throughout the ProxmoxVE cluster and appear as a location to deploy virtual machines and/or containers. You can now use this storage for snapshots and live-migrations between ProxmoxVE cluster nodes as well. A quick smoke test is sketched below.
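
A quick smoke test of the integration, assuming an existing GuestVM with ID `100`:

```sh
# On a ProxmoxVE node: allocate a small 8 GiB test disk for VM 100 on the new storage
qm set 100 --scsi1 bunny-zfs-over-iscsi:8

# On the TrueNAS server: the plugin should have automatically created a matching zvol
zfs list -t volume -r PROXMOX-ZFS-STORAGE
```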