Documentation Restructure

2026-01-27 05:25:22 -07:00
parent 3ea11e04ff
commit e73bb0376f
205 changed files with 469 additions and 146 deletions


@@ -0,0 +1,40 @@
**Purpose**:
The purpose of this workflow is to illustrate the process of expanding storage for a Linux server backed by iSCSI-based ZFS storage. We want the VM to have more usable space, so this document goes over the steps to expand it.
!!! info "Assumptions"
It is assumed you are using an Ubuntu-based operating system, as these commands may not be the same on other Linux distributions.
This document also assumes you did not enable Logical Volume Management (LVM) when deploying your server. If you did, you will need to perform additional LVM-specific steps after increasing the space.
## Increase iSCSI Disk Size
This part should be fairly straightforward. Using whatever hypervisor / storage appliance hosts the iSCSI target, expand the disk space of the LUN to the desired size.
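If the iSCSI target is itself backed by a ZFS zvol (for example on a TrueNAS appliance), the storage-side expansion might look like the sketch below; the dataset name and size are placeholders for your environment.
``` sh
# On the storage appliance -- assumes the LUN is backed by a ZFS zvol;
# "tank/iscsi/vm-disk" is a hypothetical dataset name
zfs get volsize tank/iscsi/vm-disk       # Check the current zvol size
zfs set volsize=10T tank/iscsi/vm-disk   # Grow the zvol to the new size
```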
## Extend ZFS Pool
This step goes over how to increase the usable space of the ZFS pool within the server itself after it was expanded.
``` sh
iscsiadm -m session --rescan # (1)
lsblk # (2)
parted /dev/sdX # (3)
unit TB # (4)
resizepart X XXTB # (5)
zpool list # (6)
zpool online -e <POOL-NAME> /dev/sdX # (7)
zpool scrub <POOL-NAME> # (8)
```
1. Re-scan iSCSI targets for changes.
2. Leverage `lsblk` to ensure that the storage size increase from the hypervisor / storage appliance reflects correctly.
3. Open the partitioning utility on the ZFS volume / LUN / iSCSI disk. Replace `/dev/sdX` with the actual device name.
4. Sets the unit of measurement to terabytes for the commands that follow.
5. Resizes whatever partition is given to fit the new storage capacity. Replace `X` with the partition number. Replace `XXTB` with a valid value, such as `10TB`.
6. Lists all available ZFS pools so you can identify the pool name needed for the next command.
7. Expands the device to use the newly added capacity (the `-e` flag) and brings it online. Replace `<POOL-NAME>` with the actual name of the ZFS pool.
8. This tells the system to scan the ZFS pool for any errors or corruption and correct them. Think of it as a form of housekeeping.
## Check on Scrubbing Progress
At this point, the ZFS pool has been expanded and a scrub task has been started. Scrubbing can take several hours or even days, so you can run the following command to check the status of the ZFS pool and the scrub's progress.
```sh
zpool status
```
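If you would rather have the status refresh on its own while the scrub runs, wrapping the command in `watch` is a simple option:
``` sh
watch -n 60 zpool status <POOL-NAME> # Re-runs the status check every 60 seconds; exit with CTRL+C
```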


@@ -0,0 +1,144 @@
**Purpose**:
The purpose of this workflow is to illustrate the process of expanding storage for a RHEL-based Linux server running as a GuestVM. We want the VM to have more usable space, so this document goes over the steps to expand it.
!!! info "Assumptions"
It is assumed you are using a RHEL variant of Linux such as Rocky Linux. These steps should apply to most distributions, but were written in a Rocky Linux 9.4 lab environment.
This document also assumes you did not enable Logical Volume Management (LVM) when deploying your server. If you did, you will need to perform additional LVM-specific steps after increasing the space.
!!! abstract "Oracle Linux Disk / LVM Terminology Idiosyncrasy"
Oracle Linux refers to disks as `/dev/hda` and `/dev/hda2` rather than something like `/dev/sda` / `/dev/sda2`. Certain parts of this document mention `/dev/hda`; in those cases, you may need to switch it to the standard `/dev/sda<#>` naming to make it work in your particular environment.
## Increase GuestVM Virtual Disk Size
This part should be fairly straightforward. Using whatever hypervisor is running the Linux GuestVM, expand the virtual disk to the desired size.
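As an illustration, on a KVM / libvirt hypervisor the expansion might look like the following; the domain name and disk path are placeholders, and other hypervisors will have their own equivalents.
``` sh
# Grow the virtual disk while the GuestVM is running (libvirt example):
sudo virsh blockresize rocky-lab-vm /var/lib/libvirt/images/rocky-lab-vm.qcow2 1T
# Or, with the GuestVM powered off:
sudo qemu-img resize /var/lib/libvirt/images/rocky-lab-vm.qcow2 1T
```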
## Extend Partition Table
This step goes over how to increase the usable space of the virtual disk within the GuestVM itself after it was expanded within the hypervisor.
!!! warning "Be Careful"
When you follow these steps, you will be deleting the existing partition and immediately re-creating it. If you do not use the **EXACT SAME** starting sector for the new partition, you will destroy data. Be sure to read every annotation next to each command to fully understand what you are doing.
=== "Using GDISK"
``` sh
sudo dnf install gdisk -y
gdisk /dev/<diskNumber> # (1)
p <ENTER> # (2)
d <ENTER> # (3)
4 <ENTER> # (4)
n <ENTER> # (5)
4 <ENTER> # (6)
<DEFAULT-FIRST-SECTOR-VALUE> (Just press ENTER) # (7)
<DEFAULT-LAST-SECTOR-VALUE> (Just press ENTER) # (8)
<FILESYSTEM-TYPE=8300 (Linux Filesystem)> (Just press ENTER) # (9)
w <ENTER> # (10)
```
??? info "Detailed Command Breakdown"
1. The first command needs you to enter the disk identifier. In most cases, this will likely be the first disk, such as `/dev/sda`. You do not need to indicate a partition number in this step, as you will be asked for one in a later step after identifying all of the partitions on this disk in the next command.
2. This will list all of the partitions on the disk.
3. This will ask you for a partition number to delete. Generally this is the last partition number listed. In the example below, you would type `4` then press ++enter++ to schedule the deletion of the partition.
4. See the previous annotation for details on what entering `4` does in this context.
5. This tells gdisk to create a new partition.
6. This tells gdisk to re-make partition 4 (the one we just deleted in the example).
7. We just want to leave this as the default. In my example, it would look like this:
`First sector (34-2147483614, default = 19826688) or {+-}size{KMGTP}: 19826688`
8. We just want to leave this as the default. In my example, it would look like this:
`Last sector (19826688-2147483614, default = 2147483614) or {+-}size{KMGTP}: 2147483614`
9. Just leave this as-is and press ++enter++ without entering any values. Assuming you are using XFS, as this guide was written for, the default "Linux Filesystem" is what you want for XFS.
10. This will write the changes to the partition table making them reality instead of just staging the changes.
!!! example "Example Output"
```
Command (? for help): p
Disk /dev/sda: 2147483648 sectors, 1024.0 GiB
Model: Virtual Disk
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 8A5C2469-B07B-42AC-8E57-E756E62D37D1
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 2147483614
Partitions will be aligned on 2048-sector boundaries
Total free space is 1073743838 sectors (512.0 GiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1230847 600.0 MiB EF00 EFI System Partition
2 1230848 3327999 1024.0 MiB 8300
3 3328000 19826687 7.9 GiB 8200
4 19826688 1073741790 502.5 GiB 8300 Linux filesystem
```
=== "Using FDISK"
``` sh
pvdisplay # (1)
fdisk /dev/hda # (2)
p <ENTER> # List Partitions
d <ENTER> # Delete a partition
2 <ENTER> # Delete Partition 2 (e.g. /dev/hda2)
n <ENTER> # Make a new Partition
p <ENTER> # Primary Partition Type
Starting Sector: <ENTER> # Use Default Value
Ending Sector: <ENTER> # Use Default Value
w <ENTER> # Commit all queued-up changes and write them to the disk
```
??? info "Detailed Command Breakdown"
1. Use pvdisplay to get the target disk identifier
2. Replace `/dev/hda` with the target disk identifier found in the previous step
**Point of No Return**:
In both `gdisk` and `fdisk`, pressing `w` then ++enter++ writes the changes to disk, meaning there is no turning back unless you have full GuestVM backups or a snapshot to roll back with (e.g. via Veeam Backup & Replication). Be certain the first and last sector values are correctly configured before proceeding (the default values are generally correct).
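As an extra safety net before pressing `w`, you can back up the current partition table with `sgdisk` (installed alongside `gdisk`); the backup file path here is just an example.
``` sh
sudo sgdisk --backup=/root/sda-gpt.bak /dev/sda # Save the GPT headers and partition entries to a file
# If the new partition table is wrong, the old table (not the data) can be restored with:
# sudo sgdisk --load-backup=/root/sda-gpt.bak /dev/sda
```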
## Detect the New Partition Sizes
At this point, the operating system won't detect the changes without a reboot, so we force it to detect them immediately with the following commands, avoiding a reboot if we can.
``` sh
sudo partprobe /dev/<drive> # Drive Example: /dev/sda (Rocky) or /dev/hda (Oracle Linux)
sudo partx -u /dev/<diskNumber>
```
!!! bug "Partition Size Not Expanded? Reboot."
If you notice the partition still has not expanded to the desired size, you may have no choice but to reboot the server, then re-run the `gdisk` or `fdisk` commands a second time. In my lab environment, it didn't work until I rebooted. This might have been a hiccup on my end, but it's something to keep in mind if you run into the same issue of the size not changing.
``` sh
sudo reboot
```
## Resize the Filesystem
=== "XFS Filesystem"
``` sh
sudo xfs_growfs /
```
=== "Ext4 Filesystem"
``` sh
sudo resize2fs /dev/sdX4 # Replace with the actual root partition (e.g. /dev/sda4), not the whole disk
```
=== "Ext4 Filesystem w/ LVM"
``` sh
# Increase the Physical Volume Group Size
pvdisplay # Check the Current Size of the Physical Volume
pvresize /dev/hda2 # Enlarge the Physical Volume to Fit the New Partition Size
pvdisplay # Validate the Size of the Physical Volume Increased to the New Size
# Increase the Logical Volume Group Size
lvextend -l +100%FREE /dev/VolGroup00/LogVol00 # Get this from running "lvdisplay" to find the correct Logical Volume Name
# Resize the Filesystem of the Disk to Fit the new Logical Volume
resize2fs /dev/VolGroup00/LogVol00
```
## Validate Storage Expansion
At this point, you can leverage `lsblk` or `df -h` to determine if the usable storage space was successfully increased or not. In this example, you can see that I increased my storage space from 512GB to 1TB.
!!! example "Example Command Output"
Command: `lsblk | grep "sda4"`
```
└─sda4 8:4 0 1014.5G 0 part /
```
Command: `df -h | grep "sda4"`
```
/dev/sda4 1015G 145G 871G 15% /
```


@@ -0,0 +1,51 @@
**Purpose**:
This document serves as a general guideline for my workstation deployment process when working with Fedora Workstation 41 and up. This document will constantly evolve over time based on my needs.
## Automate Initial Configurations
```sh
# Set Hostname
sudo hostnamectl set-hostname lab-desktop-01
# Setup Automatic Drive Mounting
echo "/dev/disk/by-uuid/B865-7BDB /mnt/500GB_WINDOWS_OS auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/C006EBA006EB95A6 /mnt/640GB_HDD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/24C82CFEC82CCFBA /mnt/1TB_SSD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/D64E9F534E9F2AEF /mnt/120GB_SSD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/16D05248D0522E6D /mnt/2TB_SSD_STORAGE auto nosuid,nodev,nofail,x-gvfs-show 0 0" | sudo tee -a /etc/fstab
# Install Software
sudo dnf update -y
sudo dnf install -y steam firefox
sudo dnf install -y @xfce-desktop-environment
# Reboot Workstation
sudo reboot
```
!!! warning "Read-Only NTFS Disks (When Using Dual-Boot)"
If you want to dual boot, you need to ensure that the Windows side does not have "Fast Boot" enabled. You can find the Fast Boot setting under Windows' "Change what the power button does" power options; uncheck the "Fast Boot" checkbox, then shut down.
The problem with Fast Boot is that it effectively leaves the disks shared between Windows and Linux in a locked, read-only state, which makes installing Steam games and software onto them impossible.
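Because Fast Startup relies on hibernation, it can also be disabled from an elevated Command Prompt on the Windows side; this is a sketch of the equivalent command rather than part of the Linux setup itself.
```bat
:: Run from an elevated Command Prompt in Windows; disabling hibernation
:: also disables Fast Startup / Fast Boot
powercfg /hibernate off
```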
## Manually Address Remaining Things
At this point, we need to do some manual work, since not everything can be handled by the terminal.
### Install Software (Software Manager)
Now we need to install a few things:
- NVIDIA Graphics Drivers Control Panel
- Discord Canary
- Betterbird
- Visual Studio Code
- Signal Desktop
- Solaar (Logitech Unifying Software equivalent for Linux)
### Import XFCE Panel Configuration
At this point, we want to restore our custom taskbar / panels in XFCE, so the easiest way to do that is to import the configuration backup located in Nextcloud.
Backups are located here: https://cloud.bunny-lab.io/f/792649
### Configure Window Snapping
By default, XFCE has a very small threshold for "snapping" windows to the sides of the screen, such as a half-and-half arrangement. This can be adjusted by navigating to "**Applications Menu > Settings > Settings Manager > Window Manager Tweaks > Placement**"
Once there, you will see a slider from "**Small**" to "**Large**". Slide it all the way to the right, toward "**Large**". Windows will now snap to the sides of the screen successfully.
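The same tweak can be scripted with `xfconf-query`; the property path below is an assumption based on xfwm4's `general` channel and may vary between XFCE versions.
``` sh
xfconf-query -c xfwm4 -p /general/snap_width       # Check the current snap distance (in pixels)
xfconf-query -c xfwm4 -p /general/snap_width -s 50 # Raise it to the maximum the Settings dialog allows
```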


@@ -0,0 +1,68 @@
## Purpose
You may find that you need to install an XFCE desktop environment on Fedora Server, for example to set up something like Rustdesk remote access. If that is the case, you can follow the steps below.
### Install & Configure XFCE
We need to install XFCE and configure it to be the default environment when the server turns on.
```sh
sudo dnf install @xfce-desktop-environment -y
sudo systemctl set-default graphical.target
sudo reboot
```
#### Install Rustdesk
We need to install Rustdesk onto the server.
```sh
curl -L -o /tmp/rustdesk_installer.rpm https://github.com/rustdesk/rustdesk/releases/download/1.4.0/rustdesk-1.4.0-0.x86_64.rpm
sudo dnf install -y /tmp/rustdesk_installer.rpm
```
!!! info "Configure Rustdesk"
You need to use a tool like "MobaXterm" or "PuTTY" to leverage X11 forwarding, which lets you run `rustdesk` as a GUI on your local workstation. From there, configure the relay server information (if you are using a self-hosted relay). This is also where you set a permanent password for the server and document the device ID number.
Be sure to check the box for "**Enable remote configuration modification**" when setting up Rustdesk.
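From a Linux or macOS workstation, plain OpenSSH can provide the same X11 forwarding as MobaXterm / PuTTY; the username and hostname below are placeholders.
``` sh
ssh -X nicole@lab-server-01 rustdesk # -X enables X11 forwarding; the Rustdesk GUI renders on your local display
```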
### Configure Automatic Login
For Rustdesk specifically, we configure SDDM to log the user in automatically and then immediately lock the session, so an XFCE session is always running for Rustdesk to connect to.
**Create SDDM Config File**:
```sh
sudo mkdir -p /etc/sddm.conf.d/
sudo nano /etc/sddm.conf.d/autologin.conf
```
```ini title="/etc/sddm.conf.d/autologin.conf"
[Autologin]
User=nicole
Session=xfce.desktop
```
!!! note "Determining Session Strings"
If you're unsure of the correct session string, check what's available by typing `ls /usr/share/xsessions/`. You will be looking for something like `xfce.desktop`
### Configure Lock on Initial Login
At this point, it's not the most secure thing to leave a server logged in upon boot, so the following steps instantly lock the screen after login while allowing the XFCE session to persist, so Rustdesk can attach to it for remote management of the server.
!!! warning "Not Functional Yet"
I have tried implementing the steps below, but the system seems to ignore them and stay logged in without locking. This needs further troubleshooting.
```sh
mkdir -p ~/.config/autostart
nano ~/.config/autostart/xfce-lock.desktop
```
```ini title="~/.config/autostart/xfce-lock.desktop"
[Desktop Entry]
Type=Application
Exec=xfce4-screensaver-command -l
Hidden=false
NoDisplay=false
X-GNOME-Autostart-enabled=true
Name=Auto Lock
Comment=Lock the screen on login
```
Lastly, test that everything is working by rebooting the server.
```sh
sudo reboot
```


@@ -0,0 +1,13 @@
## Purpose
You may need to install flatpak packages like Signal in your workstation environment. If you need to do this, you only need to run a few commands.
```sh
# Usually already installed
sudo dnf install flatpak
# Add Flathub Repo
flatpak --user remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
# Install Signal
flatpak install --user flathub org.signal.Signal
```


@@ -0,0 +1,11 @@
**Purpose**:
If you want to upgrade Fedora Workstation to a new version (e.g. 42 --> 43), you can run the following commands. The overall process is fairly straightforward and requires a reboot.
```sh
sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade # Provides the system-upgrade command (if not already present)
sudo dnf system-upgrade download --releasever=43 # Replace 43 with the target release
sudo dnf system-upgrade reboot
```
**Additional Documentation**:
https://docs.fedoraproject.org/en-US/quick-docs/upgrading-fedora-new-release/