Documentation Restructure

platforms/virtualization/openstack/ansible-openstack.md (new file)

!!! warning "Document Under Construction"
    This document is very unfinished and should **NOT** be followed by anyone for deployment at this time.

**Purpose**: Deploying OpenStack via Ansible.
## Required Hardware/Infrastructure Breakdown
Every node in the OpenStack environment (including the deployment node) will be running Rocky Linux 9.5; this guide follows the OpenStack-Ansible deployment path for CentOS/RHEL/Rocky.

| **Hostname** | **IP** | **Storage** | **Memory** | **CPU** | **Network** | **Purpose** |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| OPENSTACK-BOOTSTRAPPER | 192.168.3.46 (eth0) | 32GB (OS) | 4GB | 4-Cores | eth0 | OpenStack-Ansible Playbook Deployment Node |
| OPENSTACK-NODE-01 | 192.168.3.43 (eth0) | 250GB (OS), 500GB (Ceph Storage) | 32GB | 16-Cores | eth0, eth1 | OpenStack Cluster/Target Node |
| OPENSTACK-NODE-02 | 192.168.3.44 (eth0) | 250GB (OS), 500GB (Ceph Storage) | 32GB | 16-Cores | eth0, eth1 | OpenStack Cluster/Target Node |
| OPENSTACK-NODE-03 | 192.168.3.45 (eth0) | 250GB (OS), 500GB (Ceph Storage) | 32GB | 16-Cores | eth0, eth1 | OpenStack Cluster/Target Node |
## Configure Hard-Coded DNS for Cluster Nodes
We want everything to keep working even if the nodes have no internet access. Hard-coding the FQDNs in `/etc/hosts` protects us against DNS outages and other failure scenarios.

Run the following commands to add the DNS entries.
```sh
# Make yourself root
sudo su
```

!!! note "Run `sudo su` Separately"
    When `sudo su` and the echo commands below were run as one block, the changes were not written to `/etc/hosts` correctly. Run `sudo su` by itself first, then copy and paste the code block below containing the echo lines for each DNS entry.

```sh
# Add the OpenStack node entries to /etc/hosts
echo "192.168.3.43 OPENSTACK-NODE-01.bunny-lab.io OPENSTACK-NODE-01" >> /etc/hosts
echo "192.168.3.44 OPENSTACK-NODE-02.bunny-lab.io OPENSTACK-NODE-02" >> /etc/hosts
echo "192.168.3.45 OPENSTACK-NODE-03.bunny-lab.io OPENSTACK-NODE-03" >> /etc/hosts
```
### Validate DNS Entries Added
```sh
cat /etc/hosts
```

!!! example "/etc/hosts Example Contents"
    When you run `cat /etc/hosts`, you should see output similar to the following:
    ```ini title="/etc/hosts"
    127.0.0.1    localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1          localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.3.43 OPENSTACK-NODE-01.bunny-lab.io OPENSTACK-NODE-01
    192.168.3.44 OPENSTACK-NODE-02.bunny-lab.io OPENSTACK-NODE-02
    192.168.3.45 OPENSTACK-NODE-03.bunny-lab.io OPENSTACK-NODE-03
    ```
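
As an optional extra check, you can confirm that each FQDN resolves through `/etc/hosts` without relying on external DNS. This is a minimal sketch assuming the hostnames used above; `getent hosts` resolves names via NSS, which consults `/etc/hosts`.
```sh
# Confirm each node's FQDN resolves locally via /etc/hosts
for node in OPENSTACK-NODE-01 OPENSTACK-NODE-02 OPENSTACK-NODE-03; do
    getent hosts "${node}.bunny-lab.io"
done
```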
## OpenStack Deployment Node
The "Deployment" node / bootstrapper is responsible for running Ansible playbooks against the cluster nodes that will eventually be running OpenStack. [Original Deployment Node Documentation](https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/deploymenthost.html)

### Install Necessary Software
```sh
sudo su
dnf upgrade
dnf install -y git chrony openssh-server python3-devel sudo
dnf group install -y "Development Tools"
```
### Configure SSH keys
Ansible uses SSH with public key authentication to connect from the deployment host to the target hosts. Run the following commands to configure this.

!!! warning "Do not run as root"
    Make sure you run these commands as a normal user (e.g. `nicole`).

```sh
# Generate SSH Keys (Private / Public)
ssh-keygen

# Install Public Key on OpenStack Cluster/Target Nodes
ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@openstack-node-01.bunny-lab.io
ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@openstack-node-02.bunny-lab.io
ssh-copy-id -i /home/nicole/.ssh/id_rsa.pub nicole@openstack-node-03.bunny-lab.io

# Validate that SSH Authentication Works Successfully on Each Node
ssh nicole@openstack-node-01.bunny-lab.io
ssh nicole@openstack-node-02.bunny-lab.io
ssh nicole@openstack-node-03.bunny-lab.io
```
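
If you would rather validate key-based authentication non-interactively, the optional sketch below uses the same user and hostnames as above; `BatchMode=yes` makes SSH fail immediately instead of falling back to a password prompt.
```sh
# Fails fast if public key authentication is not working on a node
for node in openstack-node-01 openstack-node-02 openstack-node-03; do
    ssh -o BatchMode=yes "nicole@${node}.bunny-lab.io" hostname
done
```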
### Install the source and dependencies
Install the source and dependencies for the deployment host.
```sh
sudo su
git clone -b master https://opendev.org/openstack/openstack-ansible /opt/openstack-ansible
cd /opt/openstack-ansible
bash scripts/bootstrap-ansible.sh
```
### Disable Firewalld
The `firewalld` service is enabled on most CentOS systems by default, and its default ruleset prevents OpenStack components from communicating properly. Stop the `firewalld` service and mask it to prevent it from starting.
```sh
systemctl stop firewalld
systemctl mask firewalld
```
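
To confirm the service is both stopped and masked, these optional `systemctl` queries report the unit state directly:
```sh
# Expected output: "inactive" and "masked" respectively
systemctl is-active firewalld
systemctl is-enabled firewalld
```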
## OpenStack Target Node (1/3)
Now we need to get the cluster/target nodes configured so that OpenStack can be deployed onto them via the bootstrapper node later. [Original Target Node Documentation](https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/targethosts.html)

### Disable SELinux
Running with SELinux enabled is not currently supported in OpenStack-Ansible for CentOS/RHEL due to a lack of maintainers for the feature.
```sh
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
```
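
The `sed` change above only takes effect on the next boot. If you also want to leave enforcing mode immediately, this optional step switches to permissive mode at runtime and confirms the current state:
```sh
# Drop to permissive mode for the current boot and verify
sudo setenforce 0
getenforce   # Expected: Permissive now, Disabled after the next reboot
```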
### Disable Firewalld
The `firewalld` service is enabled on most CentOS systems by default, and its default ruleset prevents OpenStack components from communicating properly. Stop the `firewalld` service and mask it to prevent it from starting.
```sh
systemctl stop firewalld
systemctl mask firewalld
```
### Install Necessary Software
```sh
dnf upgrade
dnf install -y iputils lsof openssh-server sudo tcpdump python3
```
### Reduce Kernel Logging
Reduce the kernel log level by changing the `printk` value in your sysctls. Note that `sudo` does not apply to a `>>` redirection, so append the line with `tee` (or run the command from a root shell).
```sh
# Append the printk setting as root (tee runs privileged, unlike a plain '>>' redirection)
echo "kernel.printk='4 1 7 4'" | sudo tee -a /etc/sysctl.conf
```
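
To load the new setting without rebooting and confirm the active value, you can optionally reload the sysctl configuration (assuming your `sysctl` accepts the quoted value exactly as written above):
```sh
# Re-read /etc/sysctl.conf and display the active printk setting
sudo sysctl -p
cat /proc/sys/kernel/printk
```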
### Configure Local Cinder/Ceph Storage (Optional if using iSCSI)
At this point, we need to configure `/dev/sdb` as the local storage for Cinder.
```sh
pvcreate --metadatasize 2048 /dev/sdb
vgcreate cinder-volumes /dev/sdb
```

!!! failure "`Cannot use /dev/sdb: device is partitioned`"
    You may (in rare cases) see this error when running `pvcreate --metadatasize 2048 /dev/sdb`. If that happens, use `lsblk` to find the device name of the expected disk. In this example, we want the 500GB disk, which is located at `/dev/sda`:
    ```
    [root@openstack-node-02 nicole]# lsblk
    NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
    sda        8:0    0   500G  0 disk
    sdb        8:16   0   250G  0 disk
    ├─sdb1     8:17   0   600M  0 part /boot/efi
    ├─sdb2     8:18   0     1G  0 part /boot
    ├─sdb3     8:19   0  15.7G  0 part [SWAP]
    └─sdb4     8:20   0 232.7G  0 part /
    sr0       11:0    1  1024M  0 rom
    ```
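
Once `pvcreate` and `vgcreate` succeed on whichever device turned out to be the data disk, a quick optional check confirms that LVM sees the new volume group:
```sh
# List physical volumes and confirm the cinder-volumes volume group exists
pvs
vgs cinder-volumes
```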

!!! question "End of Current Documentation"
    This is as far as I have currently iterated in my lab while following along with the official documentation and generalizing it for my specific lab scenarios. The following link is where I am currently at/stuck and need to revisit at my earliest convenience:

    https://docs.openstack.org/project-deploy-guide/openstack-ansible/latest/targethosts.html#configuring-the-network

platforms/virtualization/openstack/canonical-openstack.md (new file)
# OpenStack
OpenStack is essentially a highly available, cluster-friendly virtual machine hypervisor platform. This particular variant is deployed via Canonical's MicroStack environment using snap. It will deploy OpenStack onto a single node, which can later be expanded to additional nodes. You can also use something like OpenShift to deploy a Kubernetes cluster onto OpenStack automatically via its various APIs.

**Reference Documentation**:

- https://discourse.ubuntu.com/t/single-node-guided/35765
- https://microstack.run/docs/single-node-guided

!!! note
    This document assumes your bare-metal host server is running Ubuntu 22.04 LTS, has at least 16GB of Memory (**32GB for Multi-Node Deployments**), two network interfaces (one for management, one for remote VM access), 200GB of Disk Space for the root filesystem, another 200GB disk for Ceph distributed storage, and 4 processor cores. See [Single-Node Mode System Requirements](https://ubuntu.com/openstack/install)

!!! note "Assumed Networking on the First Cluster Node"
    - **eth0** = 192.168.3.5
    - **eth1** = 192.168.5.200
### Update APT then install upgrades
```sh
sudo apt update && sudo apt upgrade -y && sudo apt install htop ncdu iptables nano -y
```

!!! tip
    At this time, it would be a good idea to take a checkpoint/snapshot of the server (if it is a virtual machine). This gives you a starting point to come back to as you troubleshoot inevitable deployment issues.
### Update SNAP then install OpenStack SNAP
```sh
sudo snap refresh
sudo snap install openstack --channel 2023.1
```
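
Optionally, confirm the snap installed from the expected channel before continuing:
```sh
# Show the installed openstack snap, its version, and the channel it tracks
snap list openstack
```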
### Install & Configure Dependencies
Sunbeam can generate a script to ensure that the machine has all of the required dependencies installed and is configured correctly for use in MicroStack.
```sh
sunbeam prepare-node-script | bash -x && newgrp snap_daemon
sudo reboot
```
### Bootstrapping
Deploy the OpenStack cloud using the cluster bootstrap command.
```sh
sunbeam cluster bootstrap
```

!!! warning
    If you get an "Unable to connect to websocket" error, run `sudo snap restart lxd`.
    [Known Bug Report](https://bugs.launchpad.net/snap-openstack/+bug/2033400)

!!! note
    - Management networks shared by hosts = `192.168.3.0/24`
    - MetalLB address allocation range (supports multiple ranges, comma separated) (10.20.21.10-10.20.21.20): `192.168.3.50-192.168.3.60`
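
If your Sunbeam release provides it, listing the cluster members is a convenient optional way to confirm the bootstrap completed (this assumes the `sunbeam cluster list` subcommand is available on your snap channel; check `sunbeam --help` if it differs):
```sh
# Show cluster members and their status after bootstrapping
sunbeam cluster list
```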
### Cloud Initialization
- nicole@moon-stack-01:~$ `sunbeam configure --openrc demo-openrc`
- Local or remote access to VMs [local/remote] (local): `remote`
- CIDR of network to use for external networking (10.20.20.0/24): `192.168.5.0/24`
- IP address of default gateway for external network (192.168.5.1):
- Populate OpenStack cloud with demo user, default images, flavors etc [y/n] (y):
- Username to use for access to OpenStack (demo): `nicole`
- Password to use for access to OpenStack (Vb********): `<PASSWORD>`
- Network range to use for project network (192.168.122.0/24):
- List of nameservers guests should use for DNS resolution (192.168.3.11 192.168.3.10):
- Enable ping and SSH access to instances? [y/n] (y):
- Start of IP allocation range for external network (192.168.5.2): `192.168.5.201`
- End of IP allocation range for external network (192.168.5.254): `192.168.5.251`
- Network type for access to external network [flat/vlan] (flat):
- Free network interface that will be configured for external traffic: `eth1`
- WARNING: Interface eth1 is configured. Any configuration will be lost, are you sure you want to continue? [y/n]: `y`
### Pull Down / Generate the Dashboard URL
```sh
sunbeam openrc > admin-openrc
sunbeam dashboard-url
```
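
The generated `admin-openrc` file can be sourced to run OpenStack CLI commands against the new cloud. This optional sketch assumes an `openstack` command-line client is available on the host (client installation is not covered by this guide):
```sh
# Load admin credentials into the environment and query the cloud
source admin-openrc
openstack service list   # Requires an OpenStack CLI client to be installed
```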
### Launch a Test VM
Verify the cloud by launching a VM called `test` based on the `ubuntu` image (Ubuntu 22.04 LTS).
```sh
sunbeam launch ubuntu --name test
```

!!! note "Sample Output"
    - Launching an OpenStack instance ...
    - Access instance with `ssh -i /home/ubuntu/.config/openstack/sunbeam ubuntu@10.20.20.200`