Re-Structured Documentation

Servers/Virtualization/Canonical/MicroCloud.md

# MicroCloud

Canonical MicroCloud is a useful clustering tool for deploying virtual machines and managing containers.

!!! note
    This section is currently under construction. Information here will change as the documentation evolves and the deployment process is refined.

PLACEHOLDER DATA

Servers/Virtualization/Canonical/OpenStack.md

# OpenStack

OpenStack is a highly available, cluster-friendly virtual machine hypervisor platform. This particular variant is deployed via Canonical's MicroStack environment using snap packages. It deploys OpenStack onto a single node, which can later be expanded to additional nodes. You can also use a tool such as OpenShift to deploy a Kubernetes cluster onto OpenStack automatically via its APIs.

**Reference Documentation**:

- https://discourse.ubuntu.com/t/single-node-guided/35765
- https://microstack.run/docs/single-node-guided

!!! note
    This document assumes your bare-metal host server is running Ubuntu 22.04 LTS and has at least 16GB of memory (**32GB for Multi-Node Deployments**), two network interfaces (one for management, one for remote VM access), 200GB of disk space for the root filesystem, another 200GB disk for Ceph distributed storage, and 4 processor cores. See [Single-Node Mode System Requirements](https://ubuntu.com/openstack/install).

!!! note "Assumed Networking on the First Cluster Node"
    - **eth0** = 192.168.3.5
    - **eth1** = 192.168.5.200
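
For reference, here is a minimal netplan sketch that would produce the addressing assumed above. The file path and the management gateway are assumptions on my part and may differ in your environment; the nameservers reuse the ones referenced later in this document. Apply it with `sudo netplan apply`.

```
# /etc/netplan/00-installer-config.yaml (illustrative path)
network:
  version: 2
  ethernets:
    eth0:                              # management interface
      addresses: [192.168.3.5/24]
      routes:
        - to: default
          via: 192.168.3.1             # assumed management gateway
      nameservers:
        addresses: [192.168.3.11, 192.168.3.10]
    eth1:                              # external / remote VM access interface
      addresses: [192.168.5.200/24]
```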

### Update APT then install upgrades

```
sudo apt update && sudo apt upgrade -y && sudo apt install htop ncdu iptables nano -y
```

!!! tip
    At this time, it would be a good idea to take a checkpoint/snapshot of the server (if it is a virtual machine). This gives you a starting point to come back to as you troubleshoot inevitable deployment issues.

### Update SNAP then install OpenStack SNAP

```
sudo snap refresh
sudo snap install openstack --channel 2023.1
```

### Install & Configure Dependencies

Sunbeam can generate a script to ensure that the machine has all of the required dependencies installed and is configured correctly for use in MicroStack.

```
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sudo reboot
```

### Bootstrapping

Deploy the OpenStack cloud using the cluster bootstrap command.

```
sunbeam cluster bootstrap
```

!!! warning
    If you get an "Unable to connect to websocket" error, run `sudo snap restart lxd`.

    [Known Bug Report](https://bugs.launchpad.net/snap-openstack/+bug/2033400)

!!! note
    Management networks shared by hosts = `192.168.3.0/24`

    MetalLB address allocation range (supports multiple ranges, comma separated) (10.20.21.10-10.20.21.20): `192.168.3.50-192.168.3.60`

### Cloud Initialization:

- nicole@moon-stack-01:~$ `sunbeam configure --openrc demo-openrc`
- Local or remote access to VMs [local/remote] (local): `remote`
- CIDR of network to use for external networking (10.20.20.0/24): `192.168.5.0/24`
- IP address of default gateway for external network (192.168.5.1):
- Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y):
- Username to use for access to OpenStack (demo): `nicole`
- Password to use for access to OpenStack (Vb********): `<PASSWORD>`
- Network range to use for project network (192.168.122.0/24):
- List of nameservers guests should use for DNS resolution (192.168.3.11 192.168.3.10):
- Enable ping and SSH access to instances? [y/n] (y):
- Start of IP allocation range for external network (192.168.5.2): `192.168.5.201`
- End of IP allocation range for external network (192.168.5.254): `192.168.5.251`
- Network type for access to external network [flat/vlan] (flat):
- Free network interface that will be configured for external traffic: `eth1`
- WARNING: Interface eth1 is configured. Any configuration will be lost, are you sure you want to continue? [y/n]: y

### Pull Down / Generate the Dashboard URL

```
sunbeam openrc > admin-openrc
sunbeam dashboard-url
```
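
With the credentials exported to `admin-openrc`, you can optionally verify the deployment from the CLI. This is a minimal sketch and assumes the OpenStack client is installed (for example via the `openstackclients` snap, which is an assumption on my part); adapt it to however you normally install the client.

```
sudo snap install openstackclients
source admin-openrc
openstack service list   # should list keystone, glance, nova, neutron, etc.
openstack image list     # the demo 'ubuntu' image should appear if demo resources were populated
```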

### Launch a Test VM:

Verify the cloud by launching a VM called ‘test’ based on the ‘ubuntu’ image (Ubuntu 22.04 LTS).

```
sunbeam launch ubuntu --name test
```

!!! note "Sample Output"
    - Launching an OpenStack instance ...
    - Access instance with `ssh -i /home/ubuntu/.config/openstack/sunbeam ubuntu@10.20.20.200`

Servers/Virtualization/Proxmox/Common Tasks.md

**Purpose**: This document outlines common tasks that you may need to perform in your Proxmox cluster.

## Delete Node from Cluster

Sometimes you may need to delete a node from the cluster, for example if you have re-built it or had issues and needed to destroy it. In these instances, you would run the following command (assuming you have a 3-node quorum in your cluster).

```
pvecm delnode proxmox-node-01
```
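
Before removing a node, it can help to confirm the current cluster membership and quorum state so that you remove the correct entry. A quick sketch using standard `pvecm` subcommands:

```
pvecm status   # shows quorum information and vote counts
pvecm nodes    # lists cluster members and their node IDs
```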

Servers/Virtualization/Proxmox/ProxmoxVE.md

## Initial Installation / Configuration

Proxmox Virtual Environment is an open source server virtualization management solution based on QEMU/KVM and LXC. You can manage virtual machines, containers, highly available clusters, storage and networks with an integrated, easy-to-use web interface or via CLI.

!!! note
    This document assumes you have a storage server that hosts ISO files via a CIFS/SMB share and can provide an iSCSI LUN for VM & container storage. It also assumes you are using a TrueNAS Core server to host both of these services.

### Create the first Node

You will need to download the [Proxmox VE 8.1 ISO Installer](https://www.proxmox.com/en/downloads) from the official Proxmox website. Once it is downloaded, you can use [Balena Etcher](https://etcher.balena.io/#download-etcher) or [Rufus](https://rufus.ie/en/) to image it onto a USB drive and install Proxmox onto a server.

!!! warning
    If you are virtualizing Proxmox under a Hyper-V environment, you will need to follow the [Official Documentation](https://learn.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/enable-nested-virtualization) to ensure that nested virtualization is enabled. An example is listed below:

    ```
    Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true # (1)
    Get-VMNetworkAdapter -VMName <VMName> | Set-VMNetworkAdapter -MacAddressSpoofing On # (2)
    ```

    1. This tells Hyper-V to allow the GuestVM to behave as a hypervisor, nested under Hyper-V, allowing the virtualization functionality of the Hypervisor's CPU to be passed through to the GuestVM.
    2. This tells Hyper-V to allow your GuestVM to have multiple nested virtual machines with their own independent MAC addresses. This is useful when using nested virtual machines, but is also a requirement when you set up a [Docker Network](https://docs.bunny-lab.io/Containers/Docker/Docker%20Networking/) leveraging MACVLAN technology.

### Networking

You will need to set a static IP address; in this case, it will be an address within the 20GbE network. You will be prompted to enter these values during the ProxmoxVE installation. Be sure to set the hostname to something that matches the following FQDN: `proxmox-node-01.MOONGATE.local`. A sample `/etc/network/interfaces` snippet for these addresses is shown after the table below.

| Hostname        | IP Address      | Subnet Mask         | Gateway | DNS Server | iSCSI Portal IP |
| --------------- | --------------- | ------------------- | ------- | ---------- | --------------- |
| proxmox-node-01 | 192.168.101.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.101.100 |
| proxmox-node-01 | 192.168.103.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.103.100 |
| proxmox-node-02 | 192.168.102.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.102.100 |
| proxmox-node-02 | 192.168.104.200 | 255.255.255.0 (/24) | None    | 1.1.1.1    | 192.168.104.100 |
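
As referenced above, here is a rough sketch of what the corresponding static entries might look like in `/etc/network/interfaces` on `proxmox-node-01`. The interface names `eno1` and `eno2` are placeholders for the 20GbE ports and should be adapted to your hardware; the Proxmox installer normally creates the management bridge for you already.

```jsx title="/etc/network/interfaces (illustrative)"
auto eno1
iface eno1 inet static
        address 192.168.101.200/24
        # No gateway on the storage networks, per the table above

auto eno2
iface eno2 inet static
        address 192.168.103.200/24
```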

### iSCSI Initiator Configuration

You will need to add the iSCSI initiator from the Proxmox node to the allowed initiator list in TrueNAS Core under "**Sharing > Block Shares (iSCSI) > Initiators Groups**".

In this instance, we will reference Group ID: `2`. We need to add the initiator to the "**Allowed Initiators (IQN)**" section. This also includes the following networks that are allowed to connect to the iSCSI portal:

- `192.168.101.0/24`
- `192.168.102.0/24`
- `192.168.103.0/24`
- `192.168.104.0/24`

To get the iSCSI initiator IQN of the current Proxmox node, navigate to the Proxmox server's WebUI, typically located at `https://<IP>:8006`, then log in with username `root` and the password you set during the initial installation from the mounted ISO image.

- On the left-hand side, click on the name of the server node (e.g. `proxmox-node-01` or `proxmox-node-02`)
- Click on "**Shell**" to open a CLI to the server
- Run the following command to get the iSCSI Initiator (IQN) name to give to TrueNAS Core for the previously-mentioned steps:

``` sh
cat /etc/iscsi/initiatorname.iscsi | grep "InitiatorName=" | sed 's/InitiatorName=//'
```

!!! example
    Output of this command will look something like `iqn.1993-08.org.debian:01:b16b0ff1778`.

## Disable Enterprise Subscription functionality

You will likely not be paying for or using the enterprise subscription, so we are going to disable that functionality and enable the no-subscription builds. These builds are surprisingly stable and should not cause you any issues.

Add Unstable Update Repository:
```jsx title="/etc/apt/sources.list"
# Add to the end of the file
# Non-Production / Unstable Updates
deb https://download.proxmox.com/debian bookworm pve-no-subscription
```

!!! warning
    Please note the reference to `bookworm` in the sections above and below this notice; this may be different depending on the version of ProxmoxVE you are deploying. Reference the version indicated by the rest of the entries in the sources.list file to know which codename to use in the added line.

Comment-Out Enterprise Repository:
```jsx title="/etc/apt/sources.list.d/pve-enterprise.list"
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise
```

Pull / Install Available Updates:
``` sh
apt-get update
apt dist-upgrade
reboot
```

## NIC Teaming

You will need to set up NIC teaming (an LACP / 802.3ad bond) to add redundancy and give devices outside of the 20GbE backplane a way to interact with the server.

- Ensure that all of the network interfaces appear as something similar to the following:

```jsx title="/etc/network/interfaces"
iface eno1 inet manual
iface eno2 inet manual
# etc
```

- Adjust the network interfaces to add a bond:
```jsx title="/etc/network/interfaces"
auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.11/24
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        # bridge-vlan-aware yes # I do not use VLANs
        # bridge-vids 2-4094 # I do not use VLANs (This could be set to any VLANs you want it a member of)
```

!!! warning
    Be sure to include both interfaces for the (Dual-Port) 10GbE connections in the network configuration. Final example document will be updated at a later point in time once the production server is operational.

- Reboot the server again so the networking changes fully take effect. Use iLO / iDRAC / IPMI if you have that functionality on your server, in case your configuration goes errant and needs manual intervention / troubleshooting to regain SSH control of the Proxmox server.
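
Alternatively, on recent Proxmox VE releases that ship with `ifupdown2`, you can usually re-apply `/etc/network/interfaces` without a full reboot (a hedged sketch; fall back to a reboot and out-of-band access if the bond does not come up cleanly):

``` sh
ifreload -a   # re-applies the interfaces configuration via ifupdown2
```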

## Generalizing VMs for Cloning / Templating
These are the commands I run after cloning a Linux machine so that it resets all unique information inherited from the machine it was cloned from.

!!! note
    If you use cloud-init-aware OS images, as described under [Cloud-Init Support](https://pve.proxmox.com/pve-docs/chapter-qm.html) in the Proxmox documentation, these steps won't be necessary!

```jsx title="Change Hostname"
sudo nano /etc/hostname
```

```jsx title="Change Hosts File"
sudo nano /etc/hosts
```
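
On systemd-based guests, you can also update the hostname in one step with `hostnamectl` (a sketch; `new-hostname` is a placeholder):

```jsx title="Set Hostname via hostnamectl (optional)"
sudo hostnamectl set-hostname new-hostname
```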

```jsx title="Reset the Machine ID"
rm -f /etc/machine-id /var/lib/dbus/machine-id
dbus-uuidgen --ensure=/etc/machine-id
dbus-uuidgen --ensure
```

```jsx title="Regenerate SSH Keys"
# Remove the existing SSH host keys, then recreate them
# (Debian/Ubuntu approach; other distributions can use `ssh-keygen -A` instead)
sudo rm -f /etc/ssh/ssh_host_*
sudo dpkg-reconfigure openssh-server
```

```jsx title="Reboot the Server to Apply Changes"
reboot
```

## Configure Alerting

Setting up alerts in Proxmox is critical to making sure you are notified if something goes wrong with your servers.

https://technotim.live/posts/proxmox-alerts/

Servers/Virtualization/Proxmox/ZFS-Over-iSCSI.md

**Purpose**: There is a way to integrate ProxmoxVE and TrueNAS more deeply using SSH, simplifying the deployment of virtual disks/volumes passed into GuestVMs in ProxmoxVE. Using ZFS over iSCSI will give you the following non-exhaustive list of benefits:

- Automatically make Zvols in a ZFS Storage Pool
- Automatically bind device-based iSCSI Extents/LUNs to the Zvols
- Allow TrueNAS to handle VM snapshots directly
- Simplify the filesystem overhead of using TrueNAS and iSCSI with ProxmoxVE
- Ability to take snapshots of GuestVMs
- Ability to perform live-migrations of GuestVMs between ProxmoxVE cluster nodes

!!! note "Environment Assumptions"
    This document assumes you are running at least 2 ProxmoxVE nodes. For the sake of the example, it will assume they are named `proxmox-node-01` and `proxmox-node-02`. We will also assume you are using TrueNAS Core. TrueNAS SCALE should work in the same way, but there may be minor operational / setup differences between the two deployments of TrueNAS.

    Secondly, this guide assumes the ProxmoxVE cluster nodes and TrueNAS server exist on the same network, `192.168.101.0/24`.

## ZFS over iSCSI Operational Flow

``` mermaid
sequenceDiagram
    participant ProxmoxVE as ProxmoxVE Cluster
    participant TrueNAS as TrueNAS Core (inc. iSCSI & ZFS Storage)

    ProxmoxVE->>TrueNAS: Cluster VM node connects via SSH to create ZVol for VM
    TrueNAS->>TrueNAS: Create ZVol in ZFS storage pool
    TrueNAS->>TrueNAS: Bind ZVol to iSCSI LUN
    ProxmoxVE->>TrueNAS: Connect to iSCSI & attach ZVol as VM storage
    ProxmoxVE->>TrueNAS: (On-Demand) Connect via SSH to create VM snapshot of ZVol
    TrueNAS->>TrueNAS: Create Snapshot of ZVol/VM
```

## All ZFS Storage Nodes / TrueNAS Servers

### Configure SSH Key Exchange

You first need to make some changes to the SSHD configuration of the ZFS server(s) storing data for your cluster. This is fairly straightforward and only needs two lines adjusted. This is based on the [Proxmox ZFS over ISCSI](https://pve.proxmox.com/wiki/Legacy:_ZFS_over_iSCSI) documentation. Be sure to restart the SSH service or reboot the storage server after making the changes below before proceeding to the next steps (a restart sketch follows the tabs below).

=== "OpenSSH-based OS"

    ```jsx title="/etc/ssh/sshd_config"
    UseDNS no
    GSSAPIAuthentication no
    ```

=== "Solaris-based OS"

    ```jsx title="/etc/ssh/sshd_config"
    LookupClientHostnames no
    VerifyReverseMapping no
    GSSAPIAuthentication no
    ```
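
After editing `sshd_config`, restart the SSH service so the changes take effect (a sketch; TrueNAS Core is FreeBSD-based, while a Linux-based storage server would typically use `systemctl`):

``` sh
# FreeBSD / TrueNAS Core shell
service sshd restart

# systemd-based Linux storage server
sudo systemctl restart sshd
```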

## All ProxmoxVE Cluster Nodes

### Configure SSH Key Exchange

The first step is creating SSH trust between the ProxmoxVE cluster nodes and the TrueNAS storage appliance. You will leverage the ProxmoxVE `shell` on every node of the cluster to run the following commands.

**Note**: I will be naming the SSH key `192.168.101.100` for simplicity, so I know which server the identity belongs to. You could also name it something else, like `storage.bunny-lab.io_id_rsa`.

``` sh
mkdir /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.168.101.100_id_rsa # (1)
ssh-copy-id -i /etc/pve/priv/zfs/192.168.101.100_id_rsa.pub root@192.168.101.100 # (2)
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa root@192.168.101.100 # (3)
```

1. Do not set a password. It will break the automatic functionality.
2. Send the SSH key to the TrueNAS server.
3. Connect to the TrueNAS server at least once to finish establishing the connection.
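
Optionally, you can confirm that the key works non-interactively and that ZFS commands can be run remotely, which is essentially what the plugin does behind the scenes (a quick sanity check, assuming the key path used above):

``` sh
ssh -i /etc/pve/priv/zfs/192.168.101.100_id_rsa root@192.168.101.100 zfs list
```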

### Install & Configure Storage Provider

Now you need to set up the storage provider for TrueNAS. You will run the commands below within a ProxmoxVE shell, then when finished, log out of the ProxmoxVE WebUI, clear the browser cache for ProxmoxVE, then log back in. This will have added a new storage provider called `FreeNAS-API` under the `ZFS over iSCSI` storage type.

``` sh
keyring_location=/usr/share/keyrings/ksatechnologies-truenas-proxmox-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/gpg.284C106104A8CE6D.key' | gpg --dearmor >> ${keyring_location}

#################################################################
cat << EOF > /etc/apt/sources.list.d/ksatechnologies-repo.list
# Source: KSATechnologies
# Site: https://cloudsmith.io
# Repository: KSATechnologies / truenas-proxmox
# Description: TrueNAS plugin for Proxmox VE - Production
deb [signed-by=${keyring_location}] https://dl.cloudsmith.io/public/ksatechnologies/truenas-proxmox/deb/debian any-version main

EOF
#################################################################

apt update
apt install freenas-proxmox
apt full-upgrade

systemctl restart pvedaemon
systemctl restart pveproxy
systemctl restart pvestatd
```

## Primary ProxmoxVE Cluster Node

From this point, we are ready to add the shared storage provider to the cluster via the primary node in the cluster. This is not strictly required; it just simplifies the documentation.

Navigate to **"Datacenter (BUNNY-CLUSTER) > Storage > Add > ZFS over iSCSI"** and enter the values below. A sketch of the resulting `storage.cfg` entry is shown after the table.

| **Field** | **Value** | **Additional Notes** |
| :--- | :--- | :--- |
| ID | `bunny-zfs-over-iscsi` | Friendly Name |
| Portal | `192.168.101.100` | IP Address of iSCSI Portal |
| Pool | `PROXMOX-ZFS-STORAGE` | This is the ZFS Storage Pool you will use to store GuestVM Disks |
| ZFS Block Size | `4k` | |
| Target | `iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage` | The iSCSI Target |
| Target Group | `<Leave Blank>` | |
| Enable | `<Checked>` | |
| iSCSI Provider | `FreeNAS-API` | |
| Thin-Provision | `<Checked>` | |
| Write Cache | `<Checked>` | |
| API use SSL | `<Unchecked>` | Disabled unless you have SSL enabled on TrueNAS |
| API Username | `root` | This is the account that is allowed to make ZFS zvols / datasets |
| API IPv4 Host | `192.168.101.100` | iSCSI Portal Address |
| API Password | `<Root Password of TrueNAS Box>` | |
| Nodes | `proxmox-node-01,proxmox-node-02` | All ProxmoxVE Cluster Nodes |
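
Once saved, the storage definition is written to `/etc/pve/storage.cfg` and shared across the cluster. A rough, illustrative sketch of the entry is shown below using the standard ZFS-over-iSCSI storage options; the exact `iscsiprovider` value and the API host/credential keys written by the `freenas-proxmox` plugin may differ by plugin version, so treat this purely as orientation and check the file on your own node.

```jsx title="/etc/pve/storage.cfg (illustrative)"
zfs: bunny-zfs-over-iscsi
        blocksize 4k
        iscsiprovider FreeNAS-API
        pool PROXMOX-ZFS-STORAGE
        portal 192.168.101.100
        target iqn.2005-10.org.moon-storage-01.ctl:proxmox-zfs-storage
        sparse 1
        content images
        nodes proxmox-node-01,proxmox-node-02
```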

!!! success "Storage is Provisioned"
    At this point, the storage should propagate throughout the ProxmoxVE cluster, and appear as a location to deploy virtual machines and/or containers. You can now use this storage for snapshots and live-migrations between ProxmoxVE cluster nodes as well.

Servers/Virtualization/Rancher Harvester/Harvester.md

**Purpose**: Rancher Harvester is an awesome tool that acts like a self-hosted cloud VDI provider, similar to AWS, Linode, and other online cloud compute platforms. In most scenarios, you will deploy "Rancher" in addition to Harvester to orchestrate the deployment, management, and rolling upgrades of a Kubernetes Cluster. You can also just run standalone Virtual Machines, similar to Hyper-V, RHEV, oVirt, Bhyve, XenServer, XCP-NG, and VMware ESXi.

:::note Prerequisites
This document assumes your bare-metal host has at least 32GB of Memory, 200GB of Disk Space, and 8 processor cores. See [Recommended System Requirements](https://docs.harvesterhci.io/v1.1/install/requirements)
:::

## First Harvester Node
### Download Installer ISO
You will need to navigate to the Rancher Harvester GitHub to download the [latest ISO release of Harvester](https://releases.rancher.com/harvester/v1.1.2/harvester-v1.1.2-amd64.iso), currently **v1.1.2**, then image it onto a USB flash drive using a tool like [Rufus](https://github.com/pbatard/rufus/releases/download/v4.2/rufus-4.2p.exe). Proceed to boot the bare-metal server from the USB drive to begin the Harvester installation process.
### Begin Setup Process
You will be waiting a few minutes while the server boots from the USB drive, but you will eventually land on a page where it asks you to set up various values to use for networking and the cluster itself.
The values seen below are examples and represent how my homelab is configured.
- **Management Interface(s)**: `eno1,eno2,eno3,eno4`
- **Network Bond Mode**: `Active-Backup`
- **IP Address**: `192.168.3.254/24` *<---- **Note:** Be sure to add CIDR Notation*.
- **Gateway**: `192.168.3.1`
- **DNS Server(s)**: `1.1.1.1,1.0.0.1,8.8.8.8,8.8.4.4`
- **Cluster VIP (Virtual IP)**: `192.168.3.251` *<---- **Note**: See "VIRTUAL IP CONFIGURATION" note below.*
- **Cluster Node Token**: `19-USED-when-JOINING-more-NODES-to-EXISTING-cluster-55`
- **NTP Server(s)**: `0.suse.pool.ntp.org`

:::caution Virtual IP Configuration
The VIP assigned to the first node in the cluster will act as a proxy to the built-in load-balancing system. It is important that you do not create a second node with the same VIP (this could cause instability in the existing cluster), or use an existing VIP as the Node IP address of a new Harvester Cluster Node.
:::
:::tip
Based on your preference, it would be good to assign the device a static DHCP reservation, or use addresses counting down from **.254** (e.g. `192.168.3.254`, `192.168.3.253`, `192.168.3.252`, etc...)
:::

### Wait for Installation to Complete
The installation process will take quite some time, but when it is finished, the Harvester Node will reboot and take you to a splash screen with the Harvester logo, with indicators as to what the VIP and Management Interface IPs are configured as, and whether or not the associated systems are operational and ready. **Be patient until both statuses say `READY`**. If after 15 minutes the status has still not changed to `READY` for both fields, see the note below.
:::caution Issues with `rancher-harvester-repo` Image
During my initial deployment efforts with Harvester v1.1.2, I noticed that the Harvester Node never came online. That was because something bugged out during installation and the `rancher-harvester-repo` image was not properly installed prior to node initialization. This effectively soft-locks the node unless you reinstall it from scratch, as the Docker Hub registry that Harvester looks for to finish the deployment no longer exists, and the deployment therefore depends on the local image bundled with the installer ISO.

If this happens, you unfortunately need to start over and reinstall Harvester and hope that it works the second time around. No other workarounds are currently known at this time on version 1.1.2.
:::

## Additional Harvester Nodes
If you work in a production environment, you will want more than one Harvester node to allow live-migrations, high-availability, and better load-balancing in the Harvester Cluster. The section below will outline the steps necessary to create additional Harvester nodes, join them to the existing Harvester cluster, and validate that they are functioning without issues.
### Installation Process
Not Documented Yet
### Joining Node to Existing Cluster
Not Documented Yet

## Installing Rancher
If you plan on using Harvester for more than just running Virtual Machines (e.g. Containers), you will want to deploy Rancher inside of the Harvester Cluster in order to orchestrate the deployment, management, and rolling upgrades of various forms of Kubernetes Clusters (RKE2 suggested). The steps below will go over the process of deploying a High-Availability Rancher environment to "adopt" Harvester as a VDI/compute platform for deploying the Kubernetes Cluster.
### Provision ControlPlane Node(s) VMs on Harvester
Not Documented Yet
### Adopt Harvester as Cluster Target
Not Documented Yet
### Deploy Production Kubernetes Cluster to Harvester
Not Documented Yet