## Purpose

You may need to deploy many copies of a virtual machine rapidly, without the hassle of configuring everything ad-hoc as the need arises for each VM workload. Creating a cloud-init template lets you rapidly deploy production-ready copies of a template VM (created below) into a ProxmoxVE environment.
### Download Image and Import into ProxmoxVE

First, pull down the OS image from Ubuntu's website via the CLI, as there is currently no way to do this through the WebUI. Using SSH or the Shell within the WebUI of one of the ProxmoxVE servers, run the following commands to download and import the image into ProxmoxVE.
```sh
# Make a place to keep cloud images
mkdir -p /var/lib/vz/template/images/ubuntu && cd /var/lib/vz/template/images/ubuntu

# Download Ubuntu 24.04 LTS cloud image (amd64, server)
wget -q --show-progress https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img

# Create a placeholder VM to attach the cloud image to
qm create 9000 --name ubuntu-2404-cloud --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0

# Set UEFI (OVMF) firmware + SCSI controller (cloud images expect UEFI firmware and a SCSI disk)
qm set 9000 --bios ovmf --scsihw virtio-scsi-pci

# Import the disk into ProxmoxVE
qm importdisk 9000 noble-server-cloudimg-amd64.img nfs-cluster-storage --format qcow2

# Query ProxmoxVE to find out where the volume was created
pvesm list nfs-cluster-storage | grep 9000

# Attach the disk to the placeholder VM
qm set 9000 --scsi0 nfs-cluster-storage:9000/vm-9000-disk-0.qcow2

# Configure the disk to boot
qm set 9000 --boot c --bootdisk scsi0
```
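Before moving on, it can help to confirm the firmware, controller, and boot disk were all applied as expected. This quick verification step is an addition to the original guide, using the standard `qm config` command:

```sh
# Print the placeholder VM's configuration and show only the lines we just set
qm config 9000 | grep -E 'bios|scsihw|scsi0|boot'
```

If `scsi0` is missing from the output, re-check the volume name returned by `pvesm list` above.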

### Add Cloud-Init Drive & Configure Template Defaults

Now that the Ubuntu cloud image is attached as the VM’s primary disk, you need to attach a Cloud-Init drive. This special drive is where Proxmox writes your user data (username, SSH keys, network settings, etc.) at clone time.
```sh
# Add a Cloud-Init drive to the VM
qm set 9000 --ide2 nfs-cluster-storage:cloudinit

# Enable the QEMU Guest Agent
qm set 9000 --agent enabled=1

# Set a default Cloud-Init user (replace 'nicole' with your preferred username)
qm set 9000 --ciuser nicole

# Set a default password (this can be reset per-clone)
qm set 9000 --cipassword 'SuperSecretPassword'

# Download your infrastructure public SSH key onto the Proxmox node
wget -O /root/infrastructure_id_rsa.pub \
  https://git.bunny-lab.io/Infrastructure/LinuxServer_SSH_PublicKey/raw/branch/main/id_rsa.pub

# Tell Proxmox to inject this key via Cloud-Init (the qm option is --sshkeys, plural)
qm set 9000 --sshkeys /root/infrastructure_id_rsa.pub

# Configure networking to use DHCP by default (this will be overridden at cloning)
qm set 9000 --ipconfig0 ip=dhcp
```
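With the defaults in place, the typical next step is to convert the VM into a template and clone it, overriding the Cloud-Init settings per clone as mentioned above. This is a sketch using standard `qm` commands; the clone ID `101`, the VM name, and the IP addresses are illustrative:

```sh
# Convert the placeholder VM into a reusable template
qm template 9000

# Full-clone the template into a new VM (ID and name are examples)
qm clone 9000 101 --name web01 --full

# Override the template's DHCP default with a static address for this clone
qm set 101 --ipconfig0 ip=192.168.1.50/24,gw=192.168.1.1
```

Each clone receives its own Cloud-Init drive, so per-clone overrides like `--ipconfig0` or `--cipassword` do not affect the template or other clones.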