Documentation Restructure
All checks were successful
Automatic Documentation Deployment / Sync Docs to https://kb.bunny-lab.io (push) Successful in 5s
---
tags:
  - Kubernetes
  - Docker
  - Containerization
---

# Migrating `docker-compose.yml` to Rancher RKE2 Cluster
You may be comfortable operating with Portainer or `docker-compose`, but there comes a point where you might want to migrate those existing workloads to a Kubernetes cluster as easily as possible. Luckily, there is a way to do this using a tool called "**Kompose**". Follow the instructions below to convert and deploy your existing `docker-compose.yml` into a Kubernetes cluster such as Rancher RKE2.

!!! info "RKE2 Cluster Deployment"
    This document assumes that you have an existing Rancher RKE2 cluster deployed. If not, you can deploy one by following the [Deploy RKE2 Cluster](../../../../deployments/platforms/containerization/kubernetes/deployment/rancher-rke2.md) documentation.

    We also assume that the cluster within Rancher RKE2 is named `local`, which is the default cluster name when setting up a Kubernetes cluster in the way described in the above documentation.

## Installing Kompose
The first step involves downloading Kompose from https://kompose.io/installation. Once you have it downloaded and installed onto your environment of choice, save a copy of your `docker-compose.yml` file somewhere on-disk, then open up a terminal and run the following command:
```sh
kompose --file docker-compose.yml convert --stdout > ntfy-k8s.yaml
```

This will attempt to convert the `docker-compose.yml` file into a Kubernetes manifest YAML file. A before-and-after example can be seen below:
=== "(Original) docker-compose.yml"

    ``` yaml
    version: "2.1"
    services:
      ntfy:
        image: binwiederhier/ntfy
        container_name: ntfy
        command:
          - serve
        environment:
          - NTFY_ATTACHMENT_CACHE_DIR=/var/lib/ntfy/attachments
          - NTFY_BASE_URL=https://ntfy.bunny-lab.io
          - TZ=America/Denver # optional: Change to your desired timezone
        #user: UID:GID # optional: Set custom user/group or uid/gid
        volumes:
          - /srv/containers/ntfy/cache:/var/cache/ntfy
          - /srv/containers/ntfy/etc:/etc/ntfy
        ports:
          - 80:80
        restart: always
        networks:
          docker_network:
            ipv4_address: 192.168.5.45

    networks:
      default:
        external:
          name: docker_network
      docker_network:
        external: true
    ```

=== "(Converted) ntfy-k8s.yaml"

    ``` yaml
    ---
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
        kompose.version: 1.37.0 (fb0539e64)
      labels:
        io.kompose.service: ntfy
      name: ntfy
    spec:
      ports:
        - name: "80"
          port: 80
          targetPort: 80
      selector:
        io.kompose.service: ntfy

    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      annotations:
        kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
        kompose.version: 1.37.0 (fb0539e64)
      labels:
        io.kompose.service: ntfy
      name: ntfy
    spec:
      replicas: 1
      selector:
        matchLabels:
          io.kompose.service: ntfy
      strategy:
        type: Recreate
      template:
        metadata:
          annotations:
            kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe --file ntfy-k8s.yaml convert --stdout
            kompose.version: 1.37.0 (fb0539e64)
          labels:
            io.kompose.service: ntfy
        spec:
          containers:
            - args:
                - serve
              env:
                - name: NTFY_ATTACHMENT_CACHE_DIR
                  value: /var/lib/ntfy/attachments
                - name: NTFY_BASE_URL
                  value: https://ntfy.bunny-lab.io
                - name: TZ
                  value: America/Denver
              image: binwiederhier/ntfy
              name: ntfy
              ports:
                - containerPort: 80
                  protocol: TCP
              volumeMounts:
                - mountPath: /var/cache/ntfy
                  name: ntfy-claim0
                - mountPath: /etc/ntfy
                  name: ntfy-claim1
          restartPolicy: Always
          volumes:
            - name: ntfy-claim0
              persistentVolumeClaim:
                claimName: ntfy-claim0
            - name: ntfy-claim1
              persistentVolumeClaim:
                claimName: ntfy-claim1

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        io.kompose.service: ntfy-claim0
      name: ntfy-claim0
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      labels:
        io.kompose.service: ntfy-claim1
      name: ntfy-claim1
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
    ```
## Deploy Workload into Rancher RKE2 Cluster
At this point, you need to import the YAML file you created into the Kubernetes cluster. This will occur in four sequential stages:

- Setting up a "**Project**" to logically organize your containers
- Setting up a "**Namespace**" for your container to isolate it from other containers in your Kubernetes cluster
- Importing the YAML file into the aforementioned namespace
- Configuring Ingress to allow external access to the container / service stack

### Create a Project
The purpose of the project is to logically group your services together. This can be something like `Home Automation`, `Log Analysis Systems`, `Network Tools`, etc. You can do this by logging into your Rancher RKE2 cluster (e.g. https://rke2-cluster.bunny-lab.io). The project name is unique to Rancher, used purely for organizational purposes, and does not affect the namespaces / containers in any way.

- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces > "Create Project"**
  - **Name**: <Friendly Name> (e.g. `Home Automation`)
  - **Description**: <Useful Description for the Group of Services> (e.g. `Various services that automate things within Bunny Lab`)
  - Click the "**Create**" button
### Create a Namespace within the Project
At this point, we need to create a namespace. This isolates the networking, credentials, secrets, and storage between services/stacks. It ensures that if someone exploits one of your services, they will not be able to move laterally into another service within the same Kubernetes cluster.

- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces > <ProjectName> > "Create Namespace"**
- The namespace should be named based on its operational context, such as `prod-ntfy` or `dev-ntfy`.
### Import Converted YAML Manifest into Namespace
At this point, we can proceed to import the YAML file we generated at the beginning of this document.

- Navigate to: **Clusters > `local` > Cluster > Projects/Namespaces**
- At the top-right of the screen is an upload / up-arrow button with tooltip text stating "Import YAML" > Click on this button
- Click the "**Read from File**" button
- Navigate to your `ntfy-k8s.yaml` file (the name will differ based on your converted file) > then click the "**Open**" button
- On the top-right of the dialog box is a "**Default Namespace**" dropdown menu, select the `prod-ntfy` namespace we created earlier
- Click the blue "**Import**" button at the bottom of the dialog box
!!! warning "Be Patient"
    This part of the process can take a while depending on the container stack and the complexity of the service. Kubernetes has to download the container images and deploy them into newly spun-up pods. Be patient, click on the `prod-ntfy` namespace, and look at the "**Workloads**" tab. Once the "ntfy" service exists and is Active, you can move on to the next step.

### Configuring Ingress
This final step within Kubernetes itself involves reconfiguring the service to listen via a "NodePort" instead of "ClusterIP". Don't worry, you do not have to mess with the ports that the container uses; this change is entirely within Kubernetes itself and does not alter the original `docker-compose.yml` ports of the container(s) you imported.

- Navigate to: **Clusters > `local` > Service Discovery > Services > ntfy**
- On the top-right, click the blue "**Show Configuration**" button
- On the bottom-right, click the blue "**Edit Config**" button
- On the bottom-right, click the "**Edit as YAML**" button
- Within the YAML editor, locate the `spec:` section and its `type:` subsection. You will see `type: ClusterIP` > Change that to `type: NodePort`
- On the bottom-right, click the blue "**Save**" button and wait for the process to finish
- On the new page that appears, click on the `ntfy` service again
- Click on the "**Ports**" tab
- You will see a table column labeled "Node Port" with a number in the 30,000s such as `30996`. This will be important later.
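If you prefer a command line over the Rancher UI, the same type change can be made by editing the Service manifest locally and re-applying it. The sketch below rehearses the edit on a scratch copy first; the `kubectl` step at the end is an assumption that you have CLI access to the cluster and is left commented out.

```shell
# Rehearse the ClusterIP -> NodePort change on a scratch copy of the Service
# spec before touching the cluster (file path and minimal spec are illustrative).
cat > /tmp/ntfy-svc.yaml <<'EOF'
spec:
  type: ClusterIP
EOF
sed -i 's/type: ClusterIP/type: NodePort/' /tmp/ntfy-svc.yaml
cat /tmp/ntfy-svc.yaml
# kubectl -n prod-ntfy apply -f /tmp/ntfy-svc.yaml   # hypothetical re-apply step
```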
!!! success "Verifying Access Before Configuring Reverse Proxy"
    At this point, you will want to verify that you can access the service via the cluster node IP addresses, such as the examples seen below. All of the cluster nodes should route the traffic to the container's service, and they will be used for load-balancing later in the reverse proxy configuration file.

    - http://192.168.3.69:30996
    - http://192.168.3.70:30996
    - http://192.168.3.71:30996
    - http://192.168.3.72:30996
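A quick way to sweep all four nodes at once is a small shell loop. The IPs and NodePort below are the examples from this document; substitute your own values.

```shell
# Probe each cluster node's NodePort and record the result to a log file.
# IP addresses and port are the illustrative values used in this document.
port=30996
log=/tmp/nodeport-probe.log
: > "$log"
for ip in 192.168.3.69 192.168.3.70 192.168.3.71 192.168.3.72; do
  if curl -fsS -m 2 -o /dev/null "http://$ip:$port" 2>/dev/null; then
    echo "$ip:$port reachable" >> "$log"
  else
    echo "$ip:$port unreachable" >> "$log"
  fi
done
cat "$log"
```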
## Configuring Reverse Proxy
If you were able to successfully verify access to the service by talking to it directly via one of the cluster node IP addresses and its NodePort number, you can proceed to creating a reverse proxy configuration file for the service. This will be very similar to the original `docker-compose.yml` version of the reverse proxy configuration file, but with additional IP addresses to load-balance across the Kubernetes cluster nodes.

!!! info "Section Considerations"
    This section of the document does not (*currently*) cover the process of setting up health checks to ensure that the load-balanced server destinations in the reverse proxy are online before redirecting traffic to them. This is on my to-do list of things to implement to further harden the deployment process.

    This section also does not cover the process of setting up a reverse proxy. If you want to follow along with this document, you can deploy a Traefik reverse proxy via the [Traefik](../../../../deployments/services/edge/traefik.md) deployment documentation.

With the above considerations in mind, we just need to make some small changes to the existing Traefik configuration file so that it load-balances across every node of the cluster, ensuring high-availability functions as expected.

=== "(Original) ntfy.bunny-lab.io.yml"

    ``` yaml
    http:
      routers:
        ntfy:
          entryPoints:
            - websecure
          tls:
            certResolver: letsencrypt
          service: ntfy
          rule: Host(`ntfy.bunny-lab.io`)

      services:
        ntfy:
          loadBalancer:
            passHostHeader: true
            servers:
              - url: http://192.168.5.45:80
    ```

=== "(Updated) ntfy.bunny-lab.io.yml"

    ``` yaml
    http:
      routers:
        ntfy:
          entryPoints:
            - websecure
          tls:
            certResolver: letsencrypt
          service: ntfy
          rule: Host(`ntfy.bunny-lab.io`)

      services:
        ntfy:
          loadBalancer:
            passHostHeader: true
            servers:
              - url: http://192.168.3.69:30996
              - url: http://192.168.3.70:30996
              - url: http://192.168.3.71:30996
              - url: http://192.168.3.72:30996
    ```
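On the health-check to-do item mentioned earlier in this section: Traefik's file provider does support basic per-service health checks on the `loadBalancer` block. The fragment below is an unverified sketch, and the `/v1/health` path is an assumption about ntfy's health endpoint; check both against your Traefik version before relying on it.

``` yaml
# Sketch only: health-check keys on the loadBalancer, not yet verified in this
# environment. The path below assumes ntfy exposes /v1/health.
services:
  ntfy:
    loadBalancer:
      passHostHeader: true
      healthCheck:
        path: /v1/health
        interval: "10s"
        timeout: "3s"
      servers:
        - url: http://192.168.3.69:30996
```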
!!! success "Verify Access via Reverse Proxy"
    If everything worked, you should be able to access the service at https://ntfy.bunny-lab.io. If one of the cluster nodes goes offline, Rancher will automatically shift the load to another cluster node, which will take over the web request.
---
tags:
  - Documentation
---

**Purpose**: If you run an environment with multiple Hyper-V Failover Clusters that replicate to one another via a `Hyper-V Replica Broker` role installed on a host within the Failover Cluster, sometimes a GuestVM will fail to replicate itself to the replica cluster, and in those cases it may not be able to recover on its own. This guide outlines the process of rebuilding replication for GuestVMs on a one-by-one basis.

!!! note "Assumptions"
    This guide assumes you have two Hyper-V Failover Clusters. For the sake of the guide, we will refer to the Production cluster as `CLUSTER-01` and the Replication cluster as `CLUSTER-02`. This guide also assumes that replication was set up beforehand, and does not include instructions on how to deploy a Replica Broker (at this time).

## Production Cluster - CLUSTER-01
### Locate the GuestVM
You need to start by locating the GuestVM in the production cluster, CLUSTER-01. You will know you found the VM if its "Replication Health" is either `Unhealthy`, `Warning`, or `Critical`.

### Remove Replication from GuestVM
- Within a node of the Hyper-V Failover Cluster Manager
  - Right-Click the GuestVM
  - Navigate to "**Replication > Remove Replication**"
  - Confirm the removal by clicking the "**Yes**" button. You will know replication was removed when the "Replication State" of the GuestVM is `Not enabled`
## Replication Cluster - CLUSTER-02
### Note the storage GUID of the GuestVM in the replication cluster
- Within a node of the replication cluster's Hyper-V Failover Cluster Manager
  - Right-Click the same GuestVM and click "**Manage...**" `This will open Hyper-V Manager`
  - Right-Click the GuestVM and click "**Settings...**"
  - Navigate to "**SCSI Controller**"
  - Click on one of the Virtual Disks attached to the replica VM, and note the full folder path for later. e.g. `C:\ClusterStorage\Volume1\HYPER-V REPLICA\VIRTUAL HARD DISKS\020C9A30-EB02-41F3-8D8B-3561C4521182`

!!! warning "Noting the GUID of the GuestVM"
    You need to note the folder location so you have the GUID. Without the GUID, cleaning up the old storage associated with the GuestVM replica files will be much more difficult and time-consuming. Note it down somewhere safe, and reference it later in this guide.
### Delete the GuestVM from the Replication Cluster
Now that you have noted the GUID of the GuestVM's storage folder, we can safely move on to removing the GuestVM from the replication cluster.

- Within a node of the replication cluster's Hyper-V Failover Cluster Manager
  - Right-Click the GuestVM
  - Navigate to "**Replication > Remove Replication**"
  - Confirm the removal by clicking the "**Yes**" button. You will know replication was removed when the "Replication State" of the GuestVM is `Not enabled`
  - Right-Click the GuestVM (again) `You will see that "Enable Replication" is now an option, indicating it was successfully removed.`

!!! note "Replica Checkpoint Merges"
    When you removed replication, there may have been replication checkpoints that automatically try to merge together with a `Merge in Progress` status. Just let this finish before moving forward.

- Within the same node of the replication cluster's Hyper-V Failover Cluster Manager `Switch back from Hyper-V Manager`
  - Right-Click the GuestVM and click "**Remove**"
  - Confirm the action by clicking the "**Yes**" button

### Delete the GuestVM manually from Hyper-V Manager on all replication cluster hosts
At this point, we need to remove the GuestVM from all of the servers in the cluster; removing it from the Hyper-V Failover Cluster did not remove it from the cluster's nodes. We can automate part of this work by opening Hyper-V Manager on the same failover node we have been working on thus far, and connecting the rest of the replication nodes to it. That gives us one place to manage all of the nodes, avoiding hopping between servers.
- Open Hyper-V Manager
  - Right-Click "Hyper-V Manager" on the left-hand navigation menu
  - Click "Connect to Server..."
  - Type the names of every node in the replication cluster to connect to each of them, repeating the two steps above for every node
- Remove the GuestVM from the node it appears on
  - On one of the replication cluster nodes, you will see the GuestVM listed. Right-Click the GuestVM and select "**Delete**"

### Delete the GuestVM's replicated VHDX storage from replication ClusterStorage
Now we need to clean up the storage left behind by the replication cluster.

- Within a node of the replication cluster
  - Navigate to `C:\ClusterStorage\Volume1\HYPER-V REPLICA\VIRTUAL HARD DISKS`
  - Delete the entire GUID folder noted in the previous steps. `e.g. 020C9A30-EB02-41F3-8D8B-3561C4521182`
## Production Cluster - CLUSTER-01
### Re-Enable Replication on GuestVM in CLUSTER-01 (Production Cluster)
At this point, we have disabled replication for the GuestVM and cleaned up traces of it in the replication cluster. Now we need to re-enable replication on the GuestVM back in the production cluster.

- Within a node of the production Hyper-V Failover Cluster Manager
  - Right-Click the GuestVM
  - Navigate to "**Replication > Enable Replication...**"
  - Click "Next"
  - For the "**Replica Server**", enter the name of the Hyper-V Replica Broker role in the (replication cluster's) Failover Cluster. `e.g. CLUSTER-02-REPL`, then click "Next"
  - Click the "Select Certificate" button, since the Broker was configured with certificate-based authentication instead of Kerberos (in this example environment). It will prompt you to accept the certificate by clicking "OK". (e.g. `HV Replica Root CA`), then click "Next"
  - Make sure every drive you want replicated is checked, then click "Next"
  - Replication Frequency: `5 Minutes`, then click "Next"
  - Additional Recovery Points: `Maintain only the latest recovery point`, then click "Next"
  - Initial Replication Method: `Send initial copy over the network`
  - Schedule Initial Replication: `Start replication immediately`
  - Click "Next"
  - Click "Finish"

!!! success "Replication Enabled"
    If everything was successful, you will see a dialog box named "Enable replication for `<GuestVM>`" with a message similar to the following: "Replica virtual machine `<GuestVM>` was successfully created on the specified Replica server `<Node-in-Replication-Cluster>`."

At this point, you can click "Close" to finish the process. Under the GuestVM details, you will see "Replication State": `Initial Replication in Progress`.
## Purpose
If you have a GuestVM that will not stop gracefully, either because the Hyper-V host is goofed-up or because the VMMS service won't allow you to restart it, you can perform a hail-mary and forcefully stop the GuestVM's Hyper-V worker process.

!!! warning "May Cause GuestVM to be Inconsistent"
    This is meant as a last resort when there are no other options on the table. You may end up corrupting the GuestVM by doing this.

### Get the VMID of the GuestVM
```powershell
Get-VM SERVER-01 | Select VMName, VMId

# Example Output
# VMName    VMId
# ------    ------------------------------------
# SERVER-01 3e4b6f91-6c6c-4075-9b7e-389d46315074
```

### Locate the Process ID
Now you need to hunt down the process ID associated with the GuestVM.
```powershell
Get-CimInstance Win32_Process -Filter "Name='vmwp.exe'" |
    Where-Object { $_.CommandLine -match "3e4b6f91-6c6c-4075-9b7e-389d46315074" } |
    Select-Object ProcessId, CommandLine

# Example Output
# ProcessId CommandLine
# --------- ---------------------------------------------------------
# 12488     "C:\Windows\System32\vmwp.exe" -vmid 3e4b6f91-6c6c-4075-9b7e-389d46315074
```

### Terminate Process
Lastly, terminate the process by its ID.
```powershell
Stop-Process -Id 12488 -Force
```
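The three steps above can also be chained into a single pass. This is a sketch using the same example VM name; it has not been tested here, so verify it against a non-critical VM first.

```powershell
# Sketch: resolve the VMId, find its matching vmwp.exe worker process, and
# kill it in one pipeline. "SERVER-01" is the example VM name from above.
$vmId = (Get-VM SERVER-01).VMId
Get-CimInstance Win32_Process -Filter "Name='vmwp.exe'" |
    Where-Object { $_.CommandLine -match $vmId } |
    ForEach-Object { Stop-Process -Id $_.ProcessId -Force }
```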
---
tags:
  - Kerberos
---

**Purpose**:
You may find that you want to live-migrate GuestVMs between Hyper-V hosts that are not clustered as a Hyper-V Failover Cluster; without additional configuration, you will run into permission issues. One way to work around this is to use CredSSP as the authentication mechanism, which is not ideal but useful in a pinch, or you can use Kerberos-based authentication.

This document will cover both scenarios.

=== "Kerberos Authentication (*Preferred*)"

    - Log into a domain controller that both Hyper-V hosts are capable of communicating with
    - Open "**Server Manager > Tools > Active Directory Users & Computers**"
    - Locate the computer objects representing both of the Hyper-V servers and repeat the steps below for each Hyper-V computer object:
      - Right-Click > "**Properties**"
      - Click on the "**Delegation**" tab
      - Select the radio button for "**Trust this computer for delegation to specified services only**"
      - Ensure that "**Use Kerberos only**" is selected
      - Click on the "**Add**" button
      - Click the "**Users or Computers...**" button
      - Within the object search field, type in the name of the Hyper-V server you want to delegate access to (this will be the opposite host, e.g. VIRT-NODE-02; then repeat these steps later to delegate access for VIRT-NODE-01, etc)
      - You will see a list of services that you can allow delegation to; add the following services:
        - `cisvc`
        - `mcsvc`
        - `cifs`
        - `Virtual Machine Migration Service`
        - `Microsoft Virtualization Console`
      - Click the "**Apply**" button, then click the "**OK**" button to finalize these changes
    - Repeat the above steps for the opposite Hyper-V host, so that both hosts are delegated to each other
      - e.g. `VIRT-NODE-01 <---(delegation)---> VIRT-NODE-02`
=== "CredSSP Authentication"

    - Log into both Hyper-V hosts as the same administrative user, preferably a domain administrator
    - From the Hyper-V host currently running the GuestVM that needs to be migrated, open Hyper-V Manager and Right-Click > "**Move**" the GuestVM
    - Select the destination by providing the fully-qualified domain name of the destination server (or, in some cases, the shorthand hostname of the destination server)
    - It should begin the migration process

    **Note**: Do not perform a "Pull" from the destination. You always want to "Push" the VM to its destination; a "Pull" will generally fail due to the way that CredSSP works in this context.
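For the Kerberos route, delegation alone may not be enough: each Hyper-V host also has to be told to use Kerberos for live migrations. A short PowerShell sketch, run on each host, might look like the following; this is an assumption based on the standard Hyper-V module rather than something verified in this environment.

```powershell
# Sketch: enable live migration on this host and switch its migration
# authentication to Kerberos. Run on both Hyper-V hosts; verify first.
Enable-VMMigration
Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
```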
---
tags:
  - Proxmox
---

**Purpose**: The purpose of this document is to outline common maintenance tasks that you may need to run in your Proxmox cluster.

## Delete Node from Cluster
Sometimes you may need to delete a node from the cluster if you have re-built it, or had issues and needed to destroy it. In these instances, you would run the following command (assuming you have a 3-node quorum in your cluster):
```sh
pvecm delnode proxmox-node-01
```
---
tags:
  - Proxmox
---

## Purpose
Sometimes, in very specific situations, you will find that an LVM volume group just won't come online in ProxmoxVE. If this happens, you can run the following commands (replacing the placeholder names) to manually bring the storage online.

```sh
lvchange -an local-vm-storage/local-vm-storage
lvchange -an local-vm-storage/local-vm-storage_tmeta
lvchange -an local-vm-storage/local-vm-storage_tdata
vgchange -ay local-vm-storage
```

!!! info "Be Patient"
    It can take some time for everything to come online.

!!! success
    If you see something like `6 logical volume(s) in volume group "local-vm-storage" now active`, then you successfully brought the volume online.
---
tags:
  - Proxmox
---

## Purpose
There are a few steps you have to take when upgrading ProxmoxVE from 8.4.1+ to 9.0+. The process is fairly straightforward, so just follow the instructions below.

!!! info "GuestVM Assumptions"
    It is assumed that if you are running a ProxmoxVE cluster, you will migrate all GuestVMs to another cluster node. If this is a standalone ProxmoxVE server, you will shut down all GuestVMs safely before proceeding.

!!! warning "Perform `pve8to9` Readiness Check"
    It is critical that you run the `pve8to9` command to ensure that your ProxmoxVE server meets all of the requirements and doesn't have any failures or potentially server-breaking warnings. If the `pve8to9` command is not found, run `apt update && apt dist-upgrade` in the shell, then try again. Warnings should be addressed ad-hoc, but *CPU Microcode warnings can be safely ignored*.

**Example pve8to9 Summary Output**:
```sh
= SUMMARY =

TOTAL:    48
PASSED:   39
SKIPPED:   8
WARNINGS:  1
FAILURES:  0
```

### Update Repositories from `bookworm` to `trixie`
```sh
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/pve-install-repo.list
apt update
```
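If you want to sanity-check what the substitution will do before editing the real apt sources, you can rehearse it on a throwaway file first (the repository line below is illustrative):

```shell
# Rehearse the bookworm -> trixie substitution on a scratch file before
# running it against the real apt source files.
demo=/tmp/sources.list.demo
echo "deb http://deb.debian.org/debian bookworm main contrib" > "$demo"
sed -i 's/bookworm/trixie/g' "$demo"
cat "$demo"
```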
### Upgrade to ProxmoxVE 9.0
!!! warning "Run Upgrade Commands in iLO/iDRAC/IPMI"
    At this point, it is very likely that an SSH session would be unexpectedly terminated during the upgrade. You absolutely want to use a local or remote console to the server to run the commands below, both to ensure you maintain access to the console and to catch any issues that arise during POST after the reboot.

```sh
apt dist-upgrade -y
reboot
```

!!! note "Disable `pve-enterprise` Repository"
    At this point, the ProxmoxVE server should be running on v9.0+. You will want to disable the `pve-enterprise` repository, as it will goof up future updates if you don't disable it.