Initial Commit

Bringing Documentation into Gitea
2023-12-21 01:15:09 -07:00
commit b9aeaabbfb
90 changed files with 26956 additions and 0 deletions

---
sidebar_position: 1
---
# Deploy Rancher Harvester Cluster
Rancher Harvester is an awesome tool that acts like a self-hosted cloud VDI provider, similar to AWS, Linode, and other online cloud compute platforms. In most scenarios, you will deploy "Rancher" in addition to Harvester to orchestrate the deployment, management, and rolling upgrades of a Kubernetes Cluster. You can also just run standalone Virtual Machines, similar to Hyper-V, RHEV, oVirt, Bhyve, XenServer, XCP-NG, and VMware ESXi.
:::note Prerequisites
This document assumes your bare-metal host has at least 32GB of Memory, 200GB of Disk Space, and 8 processor cores. See [Recommended System Requirements](https://docs.harvesterhci.io/v1.1/install/requirements)
:::
## First Harvester Node
### Download Installer ISO
You will need to navigate to the Rancher Harvester GitHub to download the [latest ISO release of Harvester](https://releases.rancher.com/harvester/v1.1.2/harvester-v1.1.2-amd64.iso), currently **v1.1.2**, then image it onto a USB flash drive using a tool like [Rufus](https://github.com/pbatard/rufus/releases/download/v4.2/rufus-4.2p.exe). Boot the bare-metal server from the USB drive to begin the Harvester installation process.
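If you are preparing the installer from a Linux workstation instead of using Rufus, a rough command-line equivalent looks like the sketch below; `/dev/sdX` is a placeholder for your actual USB device, so verify it with `lsblk` before writing anything.
```
# Download the Harvester v1.1.2 installer ISO (same release linked above)
wget https://releases.rancher.com/harvester/v1.1.2/harvester-v1.1.2-amd64.iso
# Identify the USB drive, then substitute it for /dev/sdX below
lsblk
# Write the ISO to the USB drive - this erases everything on it
sudo dd if=harvester-v1.1.2-amd64.iso of=/dev/sdX bs=4M status=progress oflag=sync
```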
### Begin Setup Process
The server will take a few minutes to boot from the USB drive, but you will eventually land on a setup page that asks you for various values to use for networking and the cluster itself.
The values seen below are examples and represent how my homelab is configured.
- **Management Interface(s)**: `eno1,eno2,eno3,eno4`
- **Network Bond Mode**: `Active-Backup`
- **IP Address**: `192.168.3.254/24` *<---- **Note:** Be sure to add CIDR Notation*.
- **Gateway**: `192.168.3.1`
- **DNS Server(s)**: `1.1.1.1,1.0.0.1,8.8.8.8,8.8.4.4`
- **Cluster VIP (Virtual IP)**: `192.168.3.251` *<---- **Note**: See "VIRTUAL IP CONFIGURATION" note below.*
- **Cluster Node Token**: `19-USED-when-JOINING-more-NODES-to-EXISTING-cluster-55`
- **NTP Server(s)**: `0.suse.pool.ntp.org`
:::caution Virtual IP Configuration
The VIP assigned to the first node in the cluster will act as a proxy to the built-in load-balancing system. It is important that you do not create a second node with the same VIP (this could cause instability in the existing cluster), or use an existing VIP as the Node IP address of a new Harvester cluster node.
:::
:::tip
Depending on your preference, it is a good idea to either give the device a static DHCP reservation or use addresses counting down from **.254** (e.g. `192.168.3.254`, `192.168.3.253`, `192.168.3.252`, etc.)
:::
### Wait for Installation to Complete
The installation process will take quite some time. When it is finished, the Harvester node will reboot and land on a splash screen with the Harvester logo, showing what the VIP and Management Interface IPs are configured as and whether or not the associated systems are operational and ready. **Be patient until both statuses say `READY`**. If after 15 minutes the status has still not changed to `READY` for both fields, see the note below.
:::caution Issues with `rancher-harvester-repo` Image
During my initial deployment efforts with Harvester v1.1.2, I noticed that the Harvester node never came online. Something bugged out during installation and the `rancher-harvester-repo` image was not properly installed prior to node initialization. This effectively soft-locks the node unless you reinstall it from scratch, because the Docker Hub registry that Harvester looks for to finish the deployment no longer exists, and the deployment instead depends on the local image bundled with the installer ISO.
If this happens, you unfortunately need to start over, reinstall Harvester, and hope it works the second time around. No other workarounds are currently known for version 1.1.2.
:::
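Once both statuses read `READY`, a quick sanity check from another machine on the management network is to confirm the node and the VIP are answering; the addresses below are just the example values from my configuration above.
```
# Ping the node's management IP and the cluster VIP (example values from above)
ping -c 3 192.168.3.254
ping -c 3 192.168.3.251
# The Harvester web UI should answer on the VIP over HTTPS (self-signed cert, hence -k)
curl -k -I https://192.168.3.251
```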
## Additional Harvester Nodes
If you work in a production environment, you will want more than one Harvester node to allow live-migrations, high-availability, and better load-balancing in the Harvester Cluster. The section below will outline the steps necessary to create additional Harvester nodes, join them to the existing Harvester cluster, and validate that they are functioning without issues.
### Installation Process
Not Documented Yet
### Joining Node to Existing Cluster
Not Documented Yet
## Installing Rancher
If you plan on using Harvester for more than just running Virtual Machines (e.g. Containers), you will want to deploy Rancher inside of the Harvester Cluster in order to orchestrate the deployment, management, and rolling upgrades of various forms of Kubernetes Clusters (RKE2 suggested). The steps below will go over the process of deploying a High-Availability Rancher environment to "adopt" Harvester as a VDI/compute platform for deploying the Kubernetes Cluster.
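As a rough preview of where this section is headed (the detailed steps are still being documented below), the Rancher install itself is typically done with Helm from a workstation that has `kubectl` access to the RKE2 cluster running on the ControlPlane VMs. This is only a sketch under those assumptions; the hostname is a placeholder and cert-manager must already be installed in the cluster.
```
# Add the official Rancher Helm chart repository
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo update
# Rancher is installed into the cattle-system namespace
kubectl create namespace cattle-system
# Install a 3-replica (High-Availability) Rancher; replace the hostname with your own DNS name
helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.example.com \
  --set replicas=3
```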
### Provision ControlPlane Node(s) VMs on Harvester
Not Documented Yet
### Adopt Harvester as Cluster Target
Not Documented Yet
### Deploy Production Kubernetes Cluster to Harvester
Not Documented Yet

---
sidebar_position: 2
---
# Canonical OpenStack
OpenStack is essentially a highly-available, cluster-friendly virtual machine hypervisor platform. This particular variant is deployed via Canonical's MicroStack environment using SNAP. It will deploy OpenStack onto a single node, which can later be expanded to additional nodes. You can also use something like OpenShift to deploy a Kubernetes Cluster onto OpenStack automatically via its various APIs.
**Reference Documentation**:
- https://discourse.ubuntu.com/t/single-node-guided/35765
- https://microstack.run/docs/single-node-guided
:::note Prerequisites
This document assumes your bare-metal host server is running Ubuntu 22.04 LTS and has at least 16GB of Memory (**32GB for Multi-Node Deployments**), two network interfaces (one for management, one for remote VM access), 200GB of Disk Space for the root filesystem, another 200GB disk for Ceph distributed storage, and 4 processor cores. See [Single-Node Mode System Requirements](https://ubuntu.com/openstack/install)
:::
:::note Assumed Networking on the First Cluster Node
- **eth0** = 192.168.3.5
- **eth1** = 192.168.5.200
:::
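Before installing anything, it is worth confirming that the interface names and addresses on the host actually match the layout assumed above (`eth0`/`eth1` and the addresses are just my example values).
```
# Show a brief summary of every interface and its assigned addresses
ip -br addr show
# Expected to show something along the lines of:
# eth0    UP    192.168.3.5/24
# eth1    UP    192.168.5.200/24
```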
### Update APT then install upgrades
```
sudo apt update && sudo apt upgrade -y && sudo apt install htop ncdu iptables nano -y
```
:::tip
At this time, it would be a good idea to take a checkpoint/snapshot of the server (if it is a virtual machine). This gives you a starting point to come back to as you troubleshoot inevitable deployment issues.
:::
### Update SNAP then install OpenStack SNAP
```
sudo snap refresh
sudo snap install openstack --channel 2023.1
```
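To confirm the snap landed on the expected channel before moving on, you can list it (the revision shown will vary).
```
# Verify the openstack snap is installed and tracking the 2023.1 channel
snap list openstack
```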
### Install & Configure Dependencies
Sunbeam can generate a script to ensure that the machine has all of the required dependencies installed and is configured correctly for use in MicroStack.
```
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sudo reboot
```
### Bootstrapping
Deploy the OpenStack cloud using the cluster bootstrap command.
```
sunbeam cluster bootstrap
```
:::caution
If you get an "Unable to connect to websocket" error, run `sudo snap restart lxd`.
[Known Bug Report](https://bugs.launchpad.net/snap-openstack/+bug/2033400)
:::
:::note Bootstrap Variables:
- Management networks shared by hosts = `192.168.3.0/24`
- MetalLB address allocation range (supports multiple ranges, comma separated) (10.20.21.10-10.20.21.20): `192.168.3.50-192.168.3.60`
:::
### Cloud Initialization:
- nicole@moon-stack-01:~$ `sunbeam configure --openrc demo-openrc`
- Local or remote access to VMs [local/remote] (local): `remote`
- CIDR of network to use for external networking (10.20.20.0/24): `192.168.5.0/24`
- IP address of default gateway for external network (192.168.5.1):
- Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y):
- Username to use for access to OpenStack (demo): `nicole`
- Password to use for access to OpenStack (Vb********): `<PASSWORD>`
- Network range to use for project network (192.168.122.0/24):
- List of nameservers guests should use for DNS resolution (192.168.3.11 192.168.3.10):
- Enable ping and SSH access to instances? [y/n] (y):
- Start of IP allocation range for external network (192.168.5.2): `192.168.5.201`
- End of IP allocation range for external network (192.168.5.254): `192.168.5.251`
- Network type for access to external network [flat/vlan] (flat):
- Free network interface that will be configured for external traffic: `eth1`
- WARNING: Interface eth1 is configured. Any configuration will be lost, are you sure you want to continue? [y/n]: y
### Pull Down / Generate the Dashboard URL
```
sunbeam openrc > admin-openrc
sunbeam dashboard-url
```
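With `admin-openrc` generated, you can point the standard OpenStack CLI at the new cloud to confirm the core services are up. This assumes the OpenStack client is installed separately (for example via `sudo apt install python3-openstackclient`); it is not strictly required for the remaining steps in this guide.
```
# Load the admin credentials into the current shell
source admin-openrc
# Basic health checks against the new cloud
openstack service list
openstack network list
```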
### Launch a Test VM:
Verify the cloud by launching a VM called `test` based on the `ubuntu` image (Ubuntu 22.04 LTS).
```
sunbeam launch ubuntu --name test
```
:::note Sample output:
Launching an OpenStack instance ...
Access instance with `ssh -i /home/ubuntu/.config/openstack/sunbeam ubuntu@10.20.20.200`
:::
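Once you have confirmed SSH access to the instance works, the test VM can be cleaned up so it does not linger in the demo project (this assumes the OpenStack client is configured as described above).
```
# Remove the test instance created by `sunbeam launch`
openstack server delete test
# Confirm it is gone
openstack server list
```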