## Purpose
This document describes the **end-to-end procedure** for creating a **thick-provisioned iSCSI-backed shared storage target** on **TrueNAS CORE**, and consuming it from a **Proxmox VE cluster** using **shared LVM**.
This approach is intended to:
- Provide SAN-style block semantics
- Enable Proxmox-native snapshot functionality (LVM volume chains)
- Avoid third-party plugins or middleware
- Be fully reproducible via CLI
## Assumptions
- TrueNAS **CORE** (not SCALE)
- ZFS pool already exists and is healthy
- SSH service is enabled on TrueNAS
- Proxmox VE nodes have network connectivity to TrueNAS
- iSCSI traffic is on a reliable, low-latency network (10GbE recommended)
- All VM workloads are drained from at least one Proxmox node for maintenance
!!! note "Proxmox VE Version Context"
This guide assumes **Proxmox VE 9.1.4 (or later)**, where snapshot-as-volume-chain support on shared LVM (e.g., iSCSI) is available, including improved handling of vTPM state in offline snapshots.
!!! warning "Important"
`volblocksize` **cannot be changed after zvol creation**. Choose carefully.
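Once the zvol exists, the block size can be read back (but never altered). A quick check, using the literal pool/zvol names from this guide:
```sh
# volblocksize is read-only after creation
zfs get -H -o value volblocksize CLUSTER-STORAGE/iscsi-storage
```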
## Target Architecture
```
ZFS Pool
└─ Zvol (Thick / Reserved)
   └─ iSCSI Extent
      └─ Proxmox LVM PV
         └─ Shared VG
            └─ VM Disks
```
## Create a Dedicated Zvol for Proxmox
### Variables
Adjust as needed before execution.
```sh
POOL_NAME="CLUSTER-STORAGE"
ZVOL_NAME="iscsi-storage"
ZVOL_SIZE="14T"
VOLBLOCKSIZE="16K"
```
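Before creating the zvol, it is worth confirming the pool is healthy and has enough free space. A minimal sanity check, assuming the variables above are set in the current shell:
```sh
# Prints "pool 'CLUSTER-STORAGE' is healthy" when no errors exist
zpool status -x ${POOL_NAME}
# Show remaining free space in the pool
zfs get -H -o value available ${POOL_NAME}
```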
### Create the Zvol (Thick-Provisioned)
```sh
zfs create -V ${ZVOL_SIZE} \
    -o volblocksize=${VOLBLOCKSIZE} \
    -o compression=lz4 \
    -o refreservation=${ZVOL_SIZE} \
    ${POOL_NAME}/${ZVOL_NAME}
```
!!! note
The `refreservation` enforces **true thick provisioning** and prevents overcommit.
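To confirm the zvol was created thick-provisioned with the intended properties, read them back:
```sh
# volsize and refreservation should match; volblocksize is now locked in
zfs get volsize,refreservation,volblocksize,compression ${POOL_NAME}/${ZVOL_NAME}
```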
## Configure iSCSI Target (TrueNAS CORE)
This section is performed **entirely from the CLI**:
- **CLI** is used for ZFS and LUN (extent backing) creation
- **CLI** is used for the iSCSI portal, target, and LUN association
- **CLI** is used again for validation
### Enable iSCSI Service
```sh
# Start the CTL iSCSI target daemon now
service ctld start
# Persist the service across reboots
sysrc ctld_enable=YES
```
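To confirm the daemon is running and the boot-time setting was written:
```sh
# Should report that ctld is running
service ctld status
# Should print ctld_enable: YES
sysrc ctld_enable
```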
### Create the iSCSI LUN Backing (CLI)
This step creates the **actual block-backed LUN** that will be exported via iSCSI.
```sh
# Sanity check: confirm the backing zvol exists
ls -l /dev/zvol/${POOL_NAME}/${ZVOL_NAME}

# Create CTL LUN backed by the zvol
ctladm create -b block \
    -o file=/dev/zvol/${POOL_NAME}/${ZVOL_NAME} \
    -S ISCSI-STORAGE \
    -d ISCSI-STORAGE
```
### Verify the LUN is real and correctly sized
```sh
ctladm devlist -v
```
!!! tip
`Size (Blocks)` must be **non-zero** and match the zvol size. If it is `0`, stop and correct before proceeding.
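To cross-check, print the zvol size in exact bytes and compare it against `Size (Blocks)` multiplied by the LUN's block size (typically 512 bytes):
```sh
# Zvol size in raw bytes (parseable output)
zfs get -Hp -o value volsize ${POOL_NAME}/${ZVOL_NAME}
```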
### Configure iSCSI Portal, Target, and Extent Association (CLI Only)
!!! warning "Do NOT Use the TrueNAS iSCSI GUI"
**Once you choose a CLI-managed iSCSI configuration, the TrueNAS Web UI must never be used for iSCSI.**
Opening or modifying **Sharing → Block Shares (iSCSI)** in the GUI will **overwrite CTL runtime state**, invalidate manual `ctladm` configuration, and result in targets that appear correct but expose **no LUNs** to initiators.
**This configuration is CLI-owned and CLI-managed.**
- Do **not** add, edit, or view iSCSI objects in the GUI
- Do **not** use the iSCSI wizard
- Do **not** mix GUI extents with CLI-created LUNs
#### Write /etc/ctl.conf (Portal, Target, and LUN)
```sh
# Backup any existing ctl.conf
cp -av /etc/ctl.conf /etc/ctl.conf.$(date +%Y%m%d-%H%M%S).bak 2>/dev/null || true

# Write a clean /etc/ctl.conf
cat > /etc/ctl.conf <<'EOF'
# --- Bunny Lab: Proxmox iSCSI (CLI-only) ---
auth-group "no-auth" {
    auth-type none
    initiator-name "iqn.1993-08.org.debian:01:5b963dd51f93" # cluster-node-01 ("cat /etc/iscsi/initiatorname.iscsi")
    initiator-name "iqn.1993-08.org.debian:01:1b4df0fa3540" # cluster-node-02 ("cat /etc/iscsi/initiatorname.iscsi")
    initiator-name "iqn.1993-08.org.debian:01:5669aa2d89a2" # cluster-node-03 ("cat /etc/iscsi/initiatorname.iscsi")
}

# Listen on all interfaces on the default iSCSI port
portal-group "pg0" {
    listen 0.0.0.0:3260
    discovery-auth-group "no-auth"
}

# Define the target IQN
target "iqn.2026-01.io.bunny-lab:storage" {
    portal-group "pg0"
    auth-group "no-auth"

    # Export LUN 0 backed by the zvol device
    lun 0 {
        path /dev/zvol/CLUSTER-STORAGE/iscsi-storage
        serial "ISCSI-STORAGE"
        device-id "ISCSI-STORAGE"
    }
}
EOF

# Restart ctld to apply the configuration file
service ctld restart

# Verify the iSCSI listener is actually up
sockstat -4l | grep ':3260'

# Verify CTL now shows an iSCSI frontend
ctladm portlist -v | egrep -i '(^Port|iscsi|listen=)'
```
!!! success
At this point, the iSCSI target is live and correctly exposing a block device to initiators. You may now proceed to the **Connect from Proxmox VE Nodes** section.
## Connect from Proxmox VE Nodes
Perform the following **on each Proxmox node**.
```sh
# Install iSCSI Utilities
apt update
apt install -y open-iscsi lvm2

# Discover Target
iscsiadm -m discovery -t sendtargets -p <TRUENAS_IP>

# Log In
iscsiadm -m node --login

# Print Session Details (including attached disks)
iscsiadm -m session -P 3

# Verify Device
# If everything worked, lsblk should show a new disk matching the zvol size,
# e.g. "sdX 8:128 0 14T 0 disk".
lsblk
```
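Depending on how the node records were created, open-iscsi may not log back in automatically after a reboot. A sketch that marks the discovered records for automatic login, reusing the `<TRUENAS_IP>` placeholder from above:
```sh
# Mark the discovered target(s) for automatic login at boot
iscsiadm -m node -p <TRUENAS_IP> --op update -n node.startup -v automatic
# Confirm the change
iscsiadm -m node --op show | grep node.startup
```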
## Create Shared LVM (Execute on One Node Only)
!!! warning "Important"
**Only run LVM creation on ONE node**. All other nodes will only scan.
```sh
# Initialize Physical Volume (replace /dev/sdX with the iSCSI disk identified via lsblk)
pvcreate /dev/sdX

# Create Volume Group
vgcreate vg_proxmox_iscsi /dev/sdX
```
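Confirm the physical volume and volume group look correct before registering the storage:
```sh
# The PV should sit on the iSCSI disk; the VG should show its full size
pvs
vgs vg_proxmox_iscsi
```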
## Register Storage in Proxmox
### Rescan LVM (Other Nodes)
```sh
# Make the new PV/VG visible on the nodes that did not create them
pvscan
vgscan
```
### Add Storage (GUI)
**Datacenter → Storage → Add → LVM**
- ID: `iscsi-cluster-lvm`
- Volume Group: `vg_proxmox_iscsi`
- Content: `Disk image, Container`
- Shared: ✔️
- Allow Snapshots as Volume-Chain: ✔️
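The same storage entry can also be registered from any node's shell with `pvesm`. A minimal sketch; the volume-chain snapshot option is omitted here because its CLI name may vary by PVE release, so enable it via the GUI checkbox or verify against `man pvesm`:
```sh
# Register the shared LVM storage cluster-wide
pvesm add lvm iscsi-cluster-lvm \
    --vgname vg_proxmox_iscsi \
    --content images,rootdir \
    --shared 1
```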
## Validation
- Snapshot create / revert / delete
- Live migration between nodes
- PBS backup and restore test
!!! success
If all validation tests pass, the storage is production-ready.
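For the snapshot checks, a quick CLI smoke test, assuming a disposable test VM with ID `100` whose disk lives on `iscsi-cluster-lvm`:
```sh
# Create, list, roll back, and delete a test snapshot
qm snapshot 100 validation-test
qm listsnapshot 100
qm rollback 100 validation-test
qm delsnapshot 100 validation-test
```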
!!! note "Cutover-Specific Commands"
The remaining sections apply only when migrating away from a pre-existing NFS share. Skip them entirely if you are setting everything up fresh.
## Decommission NFS (After Cutover)
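Before destroying the dataset, remove the old NFS storage definition from Proxmox so no node still references it. The storage ID `nfs-cluster-storage` below is an assumption; check `pvesm status` for the actual ID:
```sh
# On any Proxmox node: drop the stale NFS storage entry
pvesm remove nfs-cluster-storage
```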
```sh
# Irreversible: destroys the old NFS dataset and all data on it
zfs destroy CLUSTER-STORAGE/NFS-STORAGE
```
## Expand iSCSI Storage (No Downtime)
```sh
# Expand Zvol (TrueNAS)
zfs set volsize=16T CLUSTER-STORAGE/iscsi-storage
zfs set refreservation=16T CLUSTER-STORAGE/iscsi-storage
# Reload ctld so the exported LUN reflects the new size
service ctld reload
# On each Proxmox node: rescan the session, then grow the PV on one node
iscsiadm -m session --rescan
pvresize /dev/sdX
```
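After the rescan and `pvresize`, confirm the volume group picked up the new capacity:
```sh
# VSize/VFree should reflect the additional space
pvs /dev/sdX
vgs vg_proxmox_iscsi
```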