# Configuring iSCSI-based Cluster Storage

*Last updated by Nicole Rappe (commit `5188bf6689`) on 2026-01-06.*

## Purpose
This document describes the **end-to-end procedure** for creating a **thick-provisioned iSCSI-backed shared storage target** on **TrueNAS CORE**, and consuming it from a **Proxmox VE cluster** using **shared LVM**.
This approach is intended to:
- Provide SAN-style block semantics
- Enable Proxmox-native snapshot functionality (LVM volume chains)
- Avoid third-party plugins or middleware
- Be fully reproducible via CLI
## Assumptions
- TrueNAS **CORE** (not SCALE)
- ZFS pool already exists and is healthy
- SSH service is enabled on TrueNAS
- Proxmox VE nodes have network connectivity to TrueNAS
- iSCSI traffic is on a reliable, low-latency network (10GbE recommended)
- All VM workloads are drained from at least one Proxmox node for maintenance
!!! note "Proxmox VE Version Context"
    This guide assumes **Proxmox VE 9.1.4 (or later)**, where snapshot-as-volume-chain support on shared LVM (e.g., iSCSI) is available and improved, including enhanced handling of vTPM state in offline snapshots.
!!! warning "Important"
    `volblocksize` **cannot be changed after zvol creation**. Choose carefully.
## Target Architecture
```
ZFS Pool
└─ Zvol (Thick / Reserved)
   └─ iSCSI Extent
      └─ Proxmox LVM PV
         └─ Shared VG
            └─ VM Disks
```
## Create a Dedicated Zvol for Proxmox
### Variables
Adjust as needed before execution.
```sh
POOL_NAME="CLUSTER-STORAGE"
ZVOL_NAME="iscsi-storage"
ZVOL_SIZE="14T"
VOLBLOCKSIZE="16K"
```
### Create the Zvol (Thick-Provisioned)
```sh
zfs create -V ${ZVOL_SIZE} \
  -o volblocksize=${VOLBLOCKSIZE} \
  -o compression=lz4 \
  -o refreservation=${ZVOL_SIZE} \
  ${POOL_NAME}/${ZVOL_NAME}
```
!!! note
    The `refreservation` enforces **true thick provisioning** and prevents overcommit.
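To confirm the zvol actually came out thick, its key properties can be checked immediately after creation. This sketch assumes the example names from the Variables section above:

```sh
# Verify size, block size, reservation, and compression on the new zvol
zfs get volsize,volblocksize,refreservation,compression CLUSTER-STORAGE/iscsi-storage
```

`refreservation` should equal `volsize`; if it shows `none`, the zvol was created sparse.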
## Configure iSCSI Target (TrueNAS CORE)
This section uses a **hybrid approach**:
- **CLI** is used for ZFS and LUN (extent backing) creation
- **TrueNAS GUI** is used for iSCSI portal, target, and association
- **CLI** is used again for validation
### Enable iSCSI Service
```sh
# Persist the CTL (iSCSI target) daemon across reboots, then start it
sysrc ctld_enable=YES
service ctld start
```
### Create the iSCSI LUN Backing (CLI)
This step creates the **actual block-backed LUN** that will be exported via iSCSI.
```sh
# Sanity check: confirm the backing zvol exists
ls -l /dev/zvol/${POOL_NAME}/${ZVOL_NAME}

# Create CTL LUN backed by the zvol
ctladm create -b block \
  -o file=/dev/zvol/${POOL_NAME}/${ZVOL_NAME} \
  -S ISCSI-STORAGE \
  -d ISCSI-STORAGE
```
### Verify the LUN is real and correctly sized
```sh
ctladm devlist -v
```
!!! tip
    `Size (Blocks)` must be **non-zero** and match the zvol size. If it is `0`, stop and correct before proceeding.
### Configure iSCSI Portal, Target, and Association (GUI)
In the TrueNAS Web UI, navigate to **Sharing → Block Shares (iSCSI)**, then complete the tasks in the tabs below, working in order from left to right:
=== "Portals"

    **Portals → Add**

    * IP Address: `0.0.0.0`
    * Port: `3260`

=== "Targets"

    **Targets → Add**

    * Target Name: `iqn.2026-01.io.bunny-lab:storage`
    * Authentication: `None`
    * Portal Group: `<Select the portal created above>`

=== "Extents"

    **Extents → Add**

    * Extent Name: `ISCSI-STORAGE`
    * Extent Type: `Device`
    * Device: `/dev/zvol/${POOL_NAME}/${ZVOL_NAME}` (e.g. `CLUSTER-STORAGE/iscsi-storage (14T)`)

=== "Associated Targets"

    **Associated Targets → Add**

    * Target: `iqn.2026-01.io.bunny-lab:storage`
    * Extent: `ISCSI-STORAGE`
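Once the GUI steps are saved, the export can be sanity-checked from the TrueNAS shell. This is an optional verification sketch using the standard FreeBSD CTL tooling:

```sh
# List CTL ports; the target IQN created above should appear as an iSCSI port
ctladm portlist

# Re-check the LUN list to confirm the extent is still attached and sized
ctladm devlist -v
```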
## Connect from Proxmox VE Nodes
Perform the following **on each Proxmox node**.
```sh
# Install iSCSI Utilities
apt update
apt install -y open-iscsi lvm2

# Discover Target
iscsiadm -m discovery -t sendtargets -p <TRUENAS_IP>

# Log In
iscsiadm -m node --login
```
### Verify Device
```sh
lsblk
```
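So each node reattaches the LUN automatically after a reboot, the node record can be switched to automatic startup. A sketch, assuming the example IQN from this guide (substitute your target and portal IP):

```sh
# Make the iSCSI login persistent across reboots on this node
iscsiadm -m node -T iqn.2026-01.io.bunny-lab:storage -p <TRUENAS_IP> \
  --op update -n node.startup -v automatic
```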
## Create Shared LVM (One Node Only)
!!! warning "Important"
    **Only run LVM creation on ONE node**. All other nodes will only scan.
```sh
# Initialize Physical Volume
pvcreate /dev/sdX

# Create Volume Group
vgcreate vg_proxmox_iscsi /dev/sdX
```
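`/dev/sdX` must be the iSCSI-attached LUN, not a local disk. One way to identify the correct device before running `pvcreate`:

```sh
# List block devices with their transport type; the iSCSI LUN shows TRAN=iscsi
lsblk -o NAME,SIZE,TRAN,MODEL
```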
## Register Storage in Proxmox
### Rescan LVM (Other Nodes)
```sh
pvscan
vgscan
```
### Add Storage (GUI or CLI)
**Datacenter → Storage → Add → LVM**
- ID: `iscsi-lvm`
- Volume Group: `vg_proxmox_iscsi`
- Content: `Disk image`
- Shared: ✔️
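The same registration can be done from the CLI on any cluster node with `pvesm`, using the ID and volume group named above:

```sh
# Register the shared LVM storage cluster-wide
pvesm add lvm iscsi-lvm --vgname vg_proxmox_iscsi --content images --shared 1

# Confirm the new storage shows up as active
pvesm status
```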
## Validation
- Snapshot create / revert / delete
- Live migration between nodes
- PBS backup and restore test
!!! success
    If all validation tests pass, the storage is production-ready.
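A minimal smoke test of the snapshot and migration paths, assuming a disposable test VM with ID `100` on the new storage and a second node named `proxmox-node2` (both are example values):

```sh
# Create, list, and delete a snapshot on the test VM
qm snapshot 100 smoke-test
qm listsnapshot 100
qm delsnapshot 100 smoke-test

# Live-migrate the test VM to another node and back
qm migrate 100 proxmox-node2 --online
```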
!!! note "Cutover-Specific Commands"
    The remaining sections cover cutover-specific commands; they are not relevant if you are setting everything up fresh without a pre-existing NFS share.
## Decommission NFS (After Cutover)
```sh
zfs destroy CLUSTER-STORAGE/NFS-STORAGE
```
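`zfs destroy` is irreversible, so it is worth confirming what would be removed first. ZFS supports a dry run for exactly this:

```sh
# Take one last look at the dataset and any children before destroying it
zfs list -r CLUSTER-STORAGE/NFS-STORAGE

# Dry-run the destroy (-n) with verbose output (-v) to preview what would go
zfs destroy -nv CLUSTER-STORAGE/NFS-STORAGE
```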
## Expand iSCSI Storage (No Downtime)
```sh
# Expand Zvol (TrueNAS)
zfs set volsize=16T CLUSTER-STORAGE/iscsi-storage
zfs set refreservation=16T CLUSTER-STORAGE/iscsi-storage

# Rescan the iSCSI session on each Proxmox node so the larger LUN is seen
iscsiadm -m session --rescan

# Grow the LVM physical volume into the new space
pvresize /dev/sdX
```