Restructured Documentation
106
Docker & Kubernetes/Custom Containers/Git Repo Updater.md
Normal file
@ -0,0 +1,106 @@
**Purpose**: Docker container running Alpine Linux that automates and improves upon much of the script described in the [Git Repo Updater](https://docs.bunny-lab.io/Scripts/Bash/Git%20Repo%20Updater) document. It offers the additional benefit of checking for updates every 5 seconds instead of every 60, accepts environment variables for credentials and notification settings, and can monitor any number of repositories.

### Deployment
You can find the current up-to-date Gitea repository that includes the `docker-compose.yml` and `.env` files you need to deploy everything [here](https://git.bunny-lab.io/container-registry/-/packages/container/git-repo-updater/latest).

```jsx title="docker-compose.yml"
version: '3.3'
services:
  git-repo-updater:
    privileged: true
    container_name: git-repo-updater
    env_file:
      - stack.env
    image: git.bunny-lab.io/container-registry/git-repo-updater:latest
    volumes:
      - /srv/containers:/srv/containers
      - /srv/containers/git-repo-updater/Repo_Cache:/root/Repo_Cache
    restart: always
```

```jsx title=".env"
# Gitea Credentials
GIT_USERNAME=nicole.rappe
GIT_PASSWORD=USE-AN-APP-PASSWORD

# NTFY Push Notification Server URL
NTFY_URL=https://ntfy.cyberstrawberry.net/git-repo-updater

# Repository/Destination Pairs (Add as Many as Needed)
REPO_01="https://${GIT_USERNAME}:${GIT_PASSWORD}@git.bunny-lab.io/bunny-lab/docs.git,/srv/containers/material-mkdocs/docs/docs"
REPO_02="https://${GIT_USERNAME}:${GIT_PASSWORD}@git.bunny-lab.io/GitOps/servers.bunny-lab.io.git,/srv/containers/homepage-docker"
```
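With both files saved alongside each other, deployment is a standard Compose bring-up; a minimal sketch (it assumes you rename `.env` to `stack.env` to match the `env_file` entry above, or adjust that entry to suit your setup):

``` sh
# Bring the watcher up and follow its activity
docker compose up -d
docker logs -f git-repo-updater
```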
### Build / Development
If you want to learn how the container was assembled, the related build files are located [here](https://git.cyberstrawberry.net/container-registry/git-repo-updater).

```jsx title="Dockerfile"
# Use Alpine as the base image of the container
FROM alpine:latest

# Install necessary packages
RUN apk --no-cache add git curl rsync

# Add script
COPY repo_watcher.sh /repo_watcher.sh
RUN chmod +x /repo_watcher.sh

# Create directory to store repositories
RUN mkdir -p /root/Repo_Cache

# Start script (Alpine uses /bin/sh instead of /bin/bash)
CMD ["/bin/sh", "-c", "/repo_watcher.sh"]
```

```jsx title="repo_watcher.sh"
#!/bin/sh

# Function to process each repo-destination pair
process_repo() {
    FULL_REPO_URL=$1
    DESTINATION=$2

    # Extract the URL without credentials for logging and notifications
    CLEAN_REPO_URL=$(echo "$FULL_REPO_URL" | sed 's/https:\/\/[^@]*@/https:\/\//')

    # Directory to hold the repository locally
    REPO_DIR="/root/Repo_Cache/$(basename $CLEAN_REPO_URL .git)"

    # Clone the repo if it doesn't exist, or navigate to it if it does
    if [ ! -d "$REPO_DIR" ]; then
        curl -d "Cloning: $CLEAN_REPO_URL" $NTFY_URL
        git clone "$FULL_REPO_URL" "$REPO_DIR" > /dev/null 2>&1
    fi
    cd "$REPO_DIR" || exit

    # Fetch the latest changes
    git fetch origin main > /dev/null 2>&1

    # Check if the local repository is behind the remote
    LOCAL=$(git rev-parse @)
    REMOTE=$(git rev-parse @{u})

    if [ "$LOCAL" != "$REMOTE" ]; then
        curl -d "Updating: $CLEAN_REPO_URL" $NTFY_URL
        git pull origin main > /dev/null 2>&1
        rsync -av --delete --exclude '.git/' ./ "$DESTINATION" > /dev/null 2>&1
    fi
}

# Main loop
while true; do
    # Iterate over each environment variable matching 'REPO_[0-9]+'
    env | grep '^REPO_[0-9]\+=' | while IFS='=' read -r name value; do
        # Split the value by comma and read into separate variables
        OLD_IFS="$IFS"   # Save the original IFS
        IFS=','          # Set IFS to comma for splitting
        set -- $value    # Set positional parameters ($1, $2, ...)
        REPO_URL="$1"    # Assign first parameter to REPO_URL
        DESTINATION="$2" # Assign second parameter to DESTINATION
        IFS="$OLD_IFS"   # Restore original IFS

        process_repo "$REPO_URL" "$DESTINATION"
    done

    # Wait for 5 seconds before the next iteration
    sleep 5
done
```
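To rebuild the image yourself from the two files above, a standard build-and-push works; the tag below mirrors the image name used in the compose file, and the registry is otherwise an assumption about your environment:

``` sh
# Build and (optionally) push the watcher image; adjust the tag to your registry
docker build -t git.bunny-lab.io/container-registry/git-repo-updater:latest .
docker push git.bunny-lab.io/container-registry/git-repo-updater:latest
```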
70
Docker & Kubernetes/Docker/Docker Compose/ActivePieces.md
Normal file
@ -0,0 +1,70 @@
**Purpose**: Self-hosted open-source no-code business automation tool.

```jsx title="docker-compose.yml"
version: '3.0'
services:
  activepieces:
    image: activepieces/activepieces:0.3.11
    container_name: activepieces
    restart: unless-stopped
    privileged: true
    ports:
      - '8080:80'
    environment:
      - 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
      - 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
      - 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
    env_file: stack.env
    depends_on:
      - postgres
      - redis
    networks:
      docker_network:
        ipv4_address: 192.168.5.62
  postgres:
    image: 'postgres:14.4'
    container_name: postgres
    restart: unless-stopped
    environment:
      - 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
      - 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
      - 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
    volumes:
      - '/srv/containers/activepieces/postgresql:/var/lib/postgresql/data'
    networks:
      docker_network:
        ipv4_address: 192.168.5.61
  redis:
    image: 'redis:7.0.7'
    container_name: redis
    restart: unless-stopped
    volumes:
      - '/srv/containers/activepieces/redis:/data'
    networks:
      docker_network:
        ipv4_address: 192.168.5.60
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
AP_ENGINE_EXECUTABLE_PATH=dist/packages/engine/main.js
AP_ENCRYPTION_KEY=e81f8754faa04acaa7b13caa5d2c6a5a
AP_JWT_SECRET=REDACTED #BE SURE TO SET THIS WITH A VALID JWT SECRET > REFER TO OFFICIAL DOCUMENTATION
AP_ENVIRONMENT=prod
AP_FRONTEND_URL=https://ap.cyberstrawberry.net
AP_NODE_EXECUTABLE_PATH=/usr/local/bin/node
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=192.168.5.61
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=REDACTED #USE A SECURE SHORT PASSWORD > ENSURE IT'S NOT TOO LONG FOR POSTGRESQL
AP_REDIS_HOST=redis
AP_REDIS_PORT=6379
AP_SANDBOX_RUN_TIME_SECONDS=600
AP_TELEMETRY_ENABLED=true
```
30
Docker & Kubernetes/Docker/Docker Compose/Adguard-Home.md
Normal file
@ -0,0 +1,30 @@
**Purpose**: AdGuard Home is a network-wide software for blocking ads & tracking. After you set it up, it will cover ALL your home devices, and you don’t need any client-side software for that. With the rise of Internet-Of-Things and connected devices, it becomes more and more important to be able to control your whole network.

```jsx title="docker-compose.yml"
version: '3'

services:
  app:
    image: adguard/adguardhome
    ports:
      - 3000:3000
      - 53:53
      - 80:80
    volumes:
      - /srv/containers/adguard_home/workingdir:/opt/adguardhome/work
      - /srv/containers/adguard_home/config:/opt/adguardhome/conf
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.189
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
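Before pointing clients at it, you can confirm the resolver and the first-run setup wizard are both answering; a quick sketch using the static IP from the compose file:

``` sh
# Query the AdGuard Home resolver directly
nslookup example.com 192.168.5.189
# The initial setup wizard listens on port 3000
curl -I http://192.168.5.189:3000
```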
@ -0,0 +1,86 @@
**Purpose**: HTML5-based Remote Access Broker for SSH, RDP, and VNC. Useful for remote access into an environment.

## Docker Configuration
```jsx title="docker-compose.yml"
version: '3'

services:
  app:
    image: jasonbean/guacamole
    ports:
      - 8080:8080
    volumes:
      - /srv/containers/guacamole:/config
    environment:
      - OPT_MYSQL=Y
      - OPT_MYSQL_EXTENSION=N
      - OPT_SQLSERVER=N
      - OPT_LDAP=N
      - OPT_DUO=N
      - OPT_CAS=N
      - OPT_TOTP=Y
      - OPT_QUICKCONNECT=N
      - OPT_HEADER=N
      - OPT_SAML=N
      - PUID=99
      - PGID=100
      - TZ=America/Denver
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.43

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
N/A
```

## Reverse Proxy Configuration

=== "Traefik"

    ``` yaml
    http:
      routers:
        apache-guacamole:
          entryPoints:
            - websecure
          tls:
            certResolver: letsencrypt
          service: apache-guacamole
          rule: Host(`remote.bunny-lab.io`)

      services:
        apache-guacamole:
          loadBalancer:
            servers:
              - url: http://192.168.5.43:8080
            passHostHeader: true
    ```

=== "NGINX"

    ``` yaml
    server {
        listen 443 ssl;
        server_name remote.bunny-lab.io;
        client_max_body_size 0;
        ssl on;
        location / {
            proxy_pass http://192.168.5.43:8080;
            proxy_buffering off;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $http_connection;
            access_log off;
        }
    }
    ```
45
Docker & Kubernetes/Docker/Docker Compose/Authelia.md
Normal file
@ -0,0 +1,45 @@
**Purpose**: Authelia is an open-source authentication and authorization server and portal fulfilling the identity and access management (IAM) role of information security by providing multi-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion for common reverse proxies.

```jsx title="docker-compose.yml"
services:
  authelia:
    image: authelia/authelia
    container_name: authelia
    volumes:
      - /mnt/authelia/config:/config
    networks:
      docker_network:
        ipv4_address: 192.168.5.159
    expose:
      - 9091
    restart: unless-stopped
    healthcheck:
      disable: true
    environment:
      - TZ=America/Denver

  redis:
    image: redis:alpine
    container_name: redis
    volumes:
      - /mnt/authelia/redis:/data
    networks:
      docker_network:
        ipv4_address: 192.168.5.158
    expose:
      - 6379
    restart: unless-stopped
    environment:
      - TZ=America/Denver

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
59
Docker & Kubernetes/Docker/Docker Compose/ChangeDetection.md
Normal file
@ -0,0 +1,59 @@
**Purpose**: Detect website content changes and perform meaningful actions - trigger notifications via Discord, Email, Slack, Telegram, API calls, and many more.

## Docker Configuration
```jsx title="docker-compose.yml"
version: "3.8"
services:
  app:
    image: dgtlmoon/changedetection.io
    container_name: changedetection.io
    environment:
      - TZ=America/Denver
    volumes:
      - /srv/containers/changedetection/datastore:/datastore
    ports:
      - 5000:5000
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.changedetection.rule=Host(`changedetection.bunny-lab.io`)"
      - "traefik.http.routers.changedetection.entrypoints=websecure"
      - "traefik.http.routers.changedetection.tls.certresolver=letsencrypt"
      - "traefik.http.services.changedetection.loadbalancer.server.port=5000"
    networks:
      docker_network:
        ipv4_address: 192.168.5.49

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
N/A
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    changedetection:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: changedetection
      rule: Host(`changedetection.bunny-lab.io`)

  services:
    changedetection:
      loadBalancer:
        servers:
          - url: http://192.168.5.49:5000
        passHostHeader: true
```
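To test the Traefik route before public DNS points at it, you can pin the hostname to your Traefik host with curl; a sketch (the 192.168.5.29 Traefik address is an assumption for your environment):

``` sh
# Replace 192.168.5.29 with your Traefik host's IP; expect an HTTP 200
curl -skI --resolve changedetection.bunny-lab.io:443:192.168.5.29 \
  https://changedetection.bunny-lab.io | head -n 1
```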
28
Docker & Kubernetes/Docker/Docker Compose/CyberChef.md
Normal file
@ -0,0 +1,28 @@
**Purpose**: The Cyber Swiss Army Knife - a web app for encryption, encoding, compression and data analysis.

```jsx title="docker-compose.yml"
version: "3.8"
services:
  app:
    image: mpepping/cyberchef:latest
    container_name: cyberchef
    environment:
      - TZ=America/Denver
    ports:
      - 8000:8000
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.55

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
N/A
```
59
Docker & Kubernetes/Docker/Docker Compose/Dashy.md
Normal file
@ -0,0 +1,59 @@
**Purpose**: A self-hostable personal dashboard built for you. Includes status-checking, widgets, themes, icon packs, a UI editor, and tons more!

```jsx title="docker-compose.yml"
version: "3.8"
services:
  dashy:
    container_name: Dashy

    # Pull latest image from DockerHub
    image: lissy93/dashy

    # Set port that web service will be served on. Keep container port as 80
    ports:
      - 4000:80

    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashy.rule=Host(`dashboard.cyberstrawberry.net`)"
      - "traefik.http.routers.dashy.entrypoints=websecure"
      - "traefik.http.routers.dashy.tls.certresolver=myresolver"
      - "traefik.http.services.dashy.loadbalancer.server.port=80"

    # Set any environment variables
    environment:
      - NODE_ENV=production
      - UID=1000
      - GID=1000

    # Pass in your config file below, by specifying the path on your host machine
    volumes:
      - /srv/Containers/Dashy/conf.yml:/app/public/conf.yml
      - /srv/Containers/Dashy/item-icons:/app/public/item-icons

    # Specify restart policy
    restart: unless-stopped

    # Configure healthchecks
    healthcheck:
      test: ['CMD', 'node', '/app/services/healthcheck']
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s

    # Connect container to docker_network
    networks:
      docker_network:
        ipv4_address: 192.168.5.57
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
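Because the compose file defines a healthcheck, you can ask Docker for the container's health state directly; for example:

``` sh
# Prints starting / healthy / unhealthy based on the healthcheck above
docker inspect --format '{{.State.Health.Status}}' Dashy
```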
@ -0,0 +1,31 @@
**Purpose**: PLACEHOLDER

## Docker Configuration
```jsx title="docker-compose.yml"
PLACEHOLDER
```

```jsx title=".env"
PLACEHOLDER
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    PLACEHOLDER:
      entryPoints:
        - websecure
      tls:
        certResolver: myresolver
      service: PLACEHOLDER
      rule: Host(`PLACEHOLDER.bunny-lab.io`)

  services:
    PLACEHOLDER:
      loadBalancer:
        servers:
          - url: http://PLACEHOLDER:80
        passHostHeader: true
```
34
Docker & Kubernetes/Docker/Docker Compose/Docusaurus.md
Normal file
@ -0,0 +1,34 @@
**Purpose**: An optimized site generator in React. Docusaurus helps you to move fast and write content. Build documentation websites, blogs, marketing pages, and more.

```jsx title="docker-compose.yml"
version: "3"

services:
  docusaurus:
    image: awesometic/docusaurus
    container_name: docusaurus
    environment:
      - TARGET_UID=1000
      - TARGET_GID=1000
      - AUTO_UPDATE=true
      - WEBSITE_NAME=docusaurus
      - TEMPLATE=classic
      - TZ=America/Denver
    restart: always
    volumes:
      - /srv/containers/docusaurus:/docusaurus
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "80:80"
    networks:
      docker_network:
        ipv4_address: 192.168.5.72
networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
49
Docker & Kubernetes/Docker/Docker Compose/Frigate.md
Normal file
@ -0,0 +1,49 @@
**Purpose**: A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.

```jsx title="docker-compose.yml"
version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: blakeblackshear/frigate:stable
    shm_size: "256mb" # update for your cameras based on calculation above
    # devices:
    #   - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
    #   - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
    #   - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/1TB_STORAGE/frigate/config.yml:/config/config.yml:ro
      - /mnt/1TB_STORAGE/frigate/media:/media/frigate
      - type: tmpfs # Optional: 4GB of memory, reduces SSD/SD card wear
        target: /tmp/cache
        tmpfs:
          size: 4000000000
    ports:
      - "5000:5000"
      - "1935:1935" # RTMP feeds
    environment:
      FRIGATE_RTSP_PASSWORD: ${FRIGATE_RTSP_PASSWORD}
    networks:
      docker_network:
        ipv4_address: 192.168.5.201

  mqtt:
    container_name: mqtt
    image: eclipse-mosquitto:1.6
    ports:
      - "1883:1883"
    networks:
      docker_network:
        ipv4_address: 192.168.5.202

networks:
  docker_network:
    external: true
```

```jsx title=".env"
FRIGATE_RTSP_PASSWORD=SomethingSecure101
```
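Frigate publishes stats and detection events to the bundled Mosquitto broker, so subscribing to its topic tree is a quick health check; a sketch (`mosquitto_sub` ships with the mosquitto-clients package, and `frigate/#` matches Frigate's default topic prefix):

``` sh
# Subscribe to all Frigate MQTT topics; stats/events should scroll by
mosquitto_sub -h 192.168.5.202 -p 1883 -t 'frigate/#' -v
```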
81
Docker & Kubernetes/Docker/Docker Compose/Gitea.md
Normal file
@ -0,0 +1,81 @@
**Purpose**: Gitea is a painless, self-hosted, all-in-one software development service. It includes Git hosting, code review, team collaboration, package registry, and CI/CD, similar to GitHub, Bitbucket, and GitLab. Gitea was originally forked from Gogs, and almost all of the code has since been changed.

## Docker Configuration
```jsx title="docker-compose.yml"
version: "3"

services:
  server:
    image: gitea/gitea:latest
    container_name: gitea
    privileged: true
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - TZ=America/Denver
    restart: always
    volumes:
      - /srv/containers/gitea:/data
      # - /etc/timezone:/etc/timezone:ro
      # - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"
    networks:
      docker_network:
        ipv4_address: 192.168.5.70
    depends_on:
      - postgres
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitea.rule=Host(`git.bunny-lab.io`)"
      - "traefik.http.routers.gitea.entrypoints=websecure"
      - "traefik.http.routers.gitea.tls.certresolver=letsencrypt"
      - "traefik.http.services.gitea.loadbalancer.server.port=3000"

  postgres:
    image: postgres:12-alpine
    ports:
      - 5432:5432
    volumes:
      - /srv/containers/gitea/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=gitea
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - TZ=America/Denver
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.71

networks:
  docker_network:
    external: true
```

```jsx title=".env"
POSTGRES_PASSWORD=SomethingSecure
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    gitea:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: gitea
      rule: Host(`git.bunny-lab.io`)

  services:
    gitea:
      loadBalancer:
        servers:
          - url: http://192.168.5.70:3000
        passHostHeader: true
```
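Note that SSH is remapped to port 222 on the host, so Git-over-SSH operations must name the port explicitly; a sketch (the repository path is just an example):

``` sh
# Clone over SSH through the remapped port (222 on the host -> 22 in the container)
git clone ssh://git@git.bunny-lab.io:222/bunny-lab/docs.git
```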
37
Docker & Kubernetes/Docker/Docker Compose/HomeAssistant.md
Normal file
@ -0,0 +1,37 @@
**Purpose**: Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts.

```jsx title="docker-compose.yml"
version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    environment:
      - TZ=America/Denver
    volumes:
      - /srv/containers/Home-Assistant-Core:/config
      - /etc/localtime:/etc/localtime:ro
    restart: always
    privileged: true
    ports:
      - 8123:8123
    networks:
      docker_network:
        ipv4_address: 192.168.5.252
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homeassistant.rule=Host(`automation.cyberstrawberry.net`)"
      - "traefik.http.routers.homeassistant.entrypoints=websecure"
      - "traefik.http.routers.homeassistant.tls.certresolver=myresolver"
      - "traefik.http.services.homeassistant.loadbalancer.server.port=8123"
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
41
Docker & Kubernetes/Docker/Docker Compose/Homepage-Docker.md
Normal file
@ -0,0 +1,41 @@
**Purpose**: A highly customizable homepage (or startpage / application dashboard) with Docker and service API integrations.

```jsx title="docker-compose.yml"
version: '3.8'
services:
  homepage:
    image: ghcr.io/benphelps/homepage:latest
    container_name: homepage
    volumes:
      - /srv/containers/homepage-docker:/config
      - /srv/containers/homepage-docker/icons:/app/public/icons
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
      - 443:443
      - 3000:3000
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Denver
    dns:
      - 192.168.3.10
      - 192.168.3.11
    restart: unless-stopped
    extra_hosts:
      - "rancher.cyberstrawberry.net:192.168.3.21"
    networks:
      docker_network:
        ipv4_address: 192.168.5.44

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
26
Docker & Kubernetes/Docker/Docker Compose/IT-Tools.md
Normal file
@ -0,0 +1,26 @@
**Purpose**: Collection of handy online tools for developers, with great UX.

```jsx title="docker-compose.yml"
version: "3"

services:
  server:
    image: corentinth/it-tools:latest
    container_name: it-tools
    environment:
      - TZ=America/Denver
    restart: always
    ports:
      - "80:80"
    networks:
      docker_network:
        ipv4_address: 192.168.5.16

networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
43
Docker & Kubernetes/Docker/Docker Compose/Kopia.md
Normal file
@ -0,0 +1,43 @@
**Purpose**: Cross-platform backup tool for Windows, macOS & Linux with fast, incremental backups, client-side end-to-end encryption, compression and data deduplication. CLI and GUI included.

```jsx title="docker-compose.yml"
version: '3.7'
services:
  kopia:
    image: kopia/kopia:latest
    hostname: kopia-backup
    user: root
    restart: always
    ports:
      - 51515:51515
    environment:
      - KOPIA_PASSWORD=${KOPIA_ENCRYPTION_PASSWORD}
      - TZ=America/Denver
    privileged: true
    volumes:
      - /srv/containers/kopia/config:/app/config
      - /srv/containers/kopia/cache:/app/cache
      - /srv/containers/kopia/logs:/app/logs
      - /srv:/srv
      - /usr/share/zoneinfo:/usr/share/zoneinfo
    entrypoint: ["/bin/kopia", "server", "start", "--insecure", "--timezone=America/Denver", "--address=0.0.0.0:51515", "--override-username=${KOPIA_SERVER_USERNAME}", "--server-username=${KOPIA_SERVER_USERNAME}", "--server-password=${KOPIA_SERVER_PASSWORD}", "--disable-csrf-token-checks"]

    networks:
      docker_network:
        ipv4_address: 192.168.5.14
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```
:::note Credentials:
Your username will be `kopia@kopia-backup` and the password will be the value you set for `--server-password` in the entrypoint section of the compose file. The `KOPIA_PASSWORD` is used by the backup repository, such as Backblaze B2, to encrypt/decrypt the backed-up data, and must be updated in the compose file if the repository is changed or updated.
:::

```jsx title=".env"
KOPIA_ENCRYPTION_PASSWORD=PasswordUsedToEncryptDataOnBackblazeB2
KOPIA_SERVER_PASSWORD=ThisIsUsedToLogIntoKopiaWebUI
KOPIA_SERVER_USERNAME=kopia@kopia-backup
```
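Since the server starts with `--insecure`, the UI and API answer over plain HTTP on port 51515; a quick reachability sketch using the credentials from the note above (the use of HTTP basic auth here is an assumption based on the `--server-username`/`--server-password` flags):

``` sh
# Expect an HTTP response once the server is up; credentials come from the .env values
curl -u "kopia@kopia-backup:ThisIsUsedToLogIntoKopiaWebUI" -I http://192.168.5.14:51515/
```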
191
Docker & Kubernetes/Docker/Docker Compose/Material MkDocs.md
Normal file
@ -0,0 +1,191 @@
**Purpose**: Documentation that simply works. Write your documentation in Markdown and create a professional static site for your Open Source or commercial project in minutes – searchable, customizable, more than 60 languages, for all devices.

!!! note
    This is best deployed in tandem with the [Git Repo Updater](https://docs.bunny-lab.io/Containers/Docker/Docker%20Compose/Custom%20Containers/Git%20Repo%20Updater/) container in its own stack. Utilizing this will allow you to push commits to a repository and immediately (within 5 seconds) push changes into MkDocs without needing SSH/Portainer access to the server hosting MkDocs. If you don't have a GitHub account, consider deploying a [Gitea](https://docs.bunny-lab.io/Containers/Docker/Docker%20Compose/Gitea/) container to host your own code repository! This all assumes you have already deployed [Docker and Portainer](https://docs.bunny-lab.io/Containers/Portainer/Deploy%20Portainer/).

## Documentation / Pull Sequence
``` mermaid
sequenceDiagram
    participant Gitea
    participant Git_Repo_Updater as Git-Repo-Updater
    participant MkDocs
    participant NTFY

    loop Every 5 seconds
        Git_Repo_Updater->>Gitea: Check for changes in repository
        alt Changes Detected
            Gitea->>Git_Repo_Updater: Notify change
            Git_Repo_Updater->>NTFY: Send change notification
            Git_Repo_Updater->>Gitea: Download data from repository
            Git_Repo_Updater->>MkDocs: Copy data to MkDocs
            MkDocs->>MkDocs: Reload and render webpages
        end
    end
```

## Deploy Material MkDocs
```jsx title="docker-compose.yml"
version: '3'

services:
  mkdocs:
    container_name: mkdocs
    image: squidfunk/mkdocs-material
    restart: always
    environment:
      - TZ=America/Denver
    ports:
      - "8000:8000"
    volumes:
      - /srv/containers/material-mkdocs/docs:/docs
    networks:
      docker_network:
        ipv4_address: 192.168.5.76
networks:
  docker_network:
    external: true
```

```jsx title=".env"
N/A
```

## Config Example
When you deploy MkDocs, you will need to give it a configuration that tells MkDocs how to structure itself. The configuration below is what I used in my deployment. This file sits one folder level higher than the `/docs` folder that holds the documentation of the website.
```jsx title="/srv/containers/material-mkdocs/docs/mkdocs.yml"
# Project information
site_name: Homelab Documentation
site_url: https://docs.bunny-lab.io
site_author: Nicole Rappe
site_description: >-
  Bunny Lab Server, Script, and Container Documentation

# Configuration
theme:
  name: material
  custom_dir: material/overrides
  features:
    - announce.dismiss
    - content.action.edit
    - content.action.view
    - content.code.annotate
    - content.code.copy
    - content.code.select
    - content.tabs.link
    - content.tooltips
    # - header.autohide
    - navigation.expand
    # - navigation.footer
    - navigation.indexes
    - navigation.instant
    - navigation.instant.prefetch
    - navigation.instant.progress
    - navigation.prune
    - navigation.sections
    - navigation.tabs
    - navigation.tabs.sticky
    - navigation.top
    - navigation.tracking
    - search.highlight
    - search.share
    - search.suggest
    - toc.follow
    # - toc.integrate ## If this is enabled, the TOC will appear on the left navigation menu.
  palette:
    - media: "(prefers-color-scheme)"
      toggle:
        icon: material/link
        name: Switch to light mode
    - media: "(prefers-color-scheme: light)"
      scheme: default
      primary: deep purple
      accent: deep purple
      toggle:
        icon: material/toggle-switch
        name: Switch to dark mode
    - media: "(prefers-color-scheme: dark)"
      scheme: slate
      primary: black
      accent: deep purple
      toggle:
        icon: material/toggle-switch-off
        name: Switch to system preference
  font:
    text: Roboto
    code: Roboto Mono
  favicon: assets/favicon.png
  icon:
    logo: logo

# Plugins
plugins:
  - search:
      separator: '[\s\u200b\-_,:!=\[\]()"`/]+|\.(?!\d)|&[lg]t;|(?!\b)(?=[A-Z][a-z])'
  - minify:
      minify_html: true

# Hooks
hooks:
  - material/overrides/hooks/shortcodes.py
  - material/overrides/hooks/translations.py

# Additional configuration
extra:
  status:
    new: Recently added
    deprecated: Deprecated

# Extensions
markdown_extensions:
  - abbr
  - admonition
  - attr_list
  - def_list
  - footnotes
  - md_in_html
  - toc:
      permalink: true
      toc_depth: 3
  - pymdownx.arithmatex:
      generic: true
  - pymdownx.betterem:
      smart_enable: all
  - pymdownx.caret
  - pymdownx.details
  - pymdownx.emoji:
      emoji_generator: !!python/name:material.extensions.emoji.to_svg
      emoji_index: !!python/name:material.extensions.emoji.twemoji
  - pymdownx.highlight:
      anchor_linenums: true
      line_spans: __span
      pygments_lang_class: true
  - pymdownx.inlinehilite
  - pymdownx.keys
  - pymdownx.magiclink:
      normalize_issue_symbols: true
      repo_url_shorthand: true
      user: squidfunk
      repo: mkdocs-material
  - pymdownx.mark
  - pymdownx.smartsymbols
  - pymdownx.snippets:
      auto_append:
        - includes/mkdocs.md
  - pymdownx.superfences:
      custom_fences:
        - name: mermaid
          class: mermaid
          format: !!python/name:pymdownx.superfences.fence_code_format
  - pymdownx.tabbed:
      alternate_style: true
      combine_header_slug: true
      slugify: !!python/object/apply:pymdownx.slugs.slugify
        kwds:
          case: lower
  - pymdownx.tasklist:
      custom_checkbox: true
  - pymdownx.tilde
```

## Cleaning up
When the server is deployed, it will come with a bunch of example documentation that tells you how to use it. You will want to go into the `/docs` folder and delete everything except `assets/favicon.png`, `schema.json`, and `/schema`. These files are necessary to allow MkDocs to automatically detect and structure the documentation based on the file/folder structure under `/docs`.
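A hedged sketch of that cleanup from the Docker host (the path follows the volume mount above; it keeps the whole `assets` folder for simplicity, so review before running anything destructive):

``` sh
cd /srv/containers/material-mkdocs/docs/docs
# Remove the stock documentation but keep the files MkDocs still needs
find . -mindepth 1 \
  ! -path './assets*' ! -path './schema*' ! -name 'schema.json' \
  -delete
```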
34
Docker & Kubernetes/Docker/Docker Compose/NGINX.md
Normal file
@ -0,0 +1,34 @@
**Purpose**: NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more.

```jsx title="docker-compose.yml"
---
version: "2.1"
services:
  nginx:
    image: lscr.io/linuxserver/nginx:latest
    container_name: nginx
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Denver
    volumes:
      - /srv/containers/nginx-portfolio-website:/config
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.12

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
64
Docker & Kubernetes/Docker/Docker Compose/Nextcloud.md
Normal file
@ -0,0 +1,64 @@
**Purpose**: Deploy a Nextcloud and PostgreSQL database together.

```jsx title="docker-compose.yml"
version: "2.1"
services:
  app:
    image: nextcloud:apache
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`files.bunny-lab.io`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
      - "traefik.http.services.nextcloud.loadbalancer.server.port=80"
    environment:
      - TZ=${TZ}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST=${POSTGRES_HOST}
      - OVERWRITEPROTOCOL=https
      - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER}
      - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
      - NEXTCLOUD_TRUSTED_DOMAINS=${NEXTCLOUD_TRUSTED_DOMAINS}
    volumes:
      - /srv/containers/nextcloud/html:/var/www/html
    ports:
      - 443:443
      - 80:80
    restart: always
    depends_on:
      - db
    networks:
      docker_network:
        ipv4_address: 192.168.5.17
  db:
    image: postgres:12-alpine
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - /srv/containers/nextcloud/db:/var/lib/postgresql/data
    ports:
      - 5432:5432
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.18

networks:
  docker_network:
    external: true
```

```jsx title=".env"
TZ=America/Denver
POSTGRES_PASSWORD=SomeSecurePassword
POSTGRES_USER=ncadmin
POSTGRES_HOST=192.168.5.18
POSTGRES_DB=nextcloud
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=SomeSuperSecurePassword
NEXTCLOUD_TRUSTED_DOMAINS=cloud.bunny-lab.io
```
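After the first start, you can confirm the installation completed from the host with Nextcloud's built-in `occ` tool; a minimal sketch (the `app` service name and `www-data` user follow the compose file above):

``` sh
# Should report installed: true along with version info
docker compose exec -u www-data app php occ status
```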
45
Docker & Kubernetes/Docker/Docker Compose/Niltalk.md
Normal file
@ -0,0 +1,45 @@
**Purpose**: Niltalk is a web-based disposable chat server. It allows users to create password-protected, disposable, ephemeral chat rooms and invite peers to them.

```jsx title="docker-compose.yml"
version: "3.7"

services:
  redis:
    image: redis:alpine
    volumes:
      - /srv/niltalk
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.196

  niltalk:
    image: kailashnadh/niltalk:latest
    ports:
      - "9000:9000"
    depends_on:
      - redis
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.197
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.niltalk.rule=Host(`temp.cyberstrawberry.net`)"
      - "traefik.http.routers.niltalk.entrypoints=websecure"
      - "traefik.http.routers.niltalk.tls.certresolver=myresolver"
      - "traefik.http.services.niltalk.loadbalancer.server.port=9000"
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true

volumes:
  niltalk-data:
```

```jsx title=".env"
Not Applicable
```
29
Docker & Kubernetes/Docker/Docker Compose/Node-Red.md
Normal file
@ -0,0 +1,29 @@
**Purpose**: Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.

```jsx title="docker-compose.yml"
version: "3.7"

services:
  node-red:
    image: nodered/node-red:latest
    environment:
      - TZ=America/Denver
    ports:
      - "1880:1880"
    networks:
      docker_network:
        ipv4_address: 192.168.5.92
    volumes:
      - /srv/containers/node-red:/data
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
34
Docker & Kubernetes/Docker/Docker Compose/Ntfy.md
Normal file
@ -0,0 +1,34 @@
**Purpose**: ntfy (pronounced notify) is a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, and/or using a REST API. It's infinitely flexible, and 100% free software.

```jsx title="docker-compose.yml"
version: "2.1"
services:
  ntfy:
    image: binwiederhier/ntfy
    container_name: ntfy
    command:
      - serve
    environment:
      - TZ=America/Denver # optional: Change to your desired timezone
    #user: UID:GID # optional: Set custom user/group or uid/gid
    volumes:
      - /srv/containers/ntfy/cache:/var/cache/ntfy
      - /srv/containers/ntfy/etc:/etc/ntfy
    ports:
      - 80:80
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.45

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
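Since the compose file publishes the service on port 80, sending yourself a test notification is a one-liner; the topic name below is arbitrary:

``` sh
# Publish a message to the 'test' topic on this instance
curl -d "Hello from the homelab" http://192.168.5.45/test
# Subscribe to the same topic as a JSON stream from another shell
curl -s http://192.168.5.45/test/json
```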
63
Docker & Kubernetes/Docker/Docker Compose/OnlyOffice-ee.md
Normal file
@ -0,0 +1,63 @@
**Purpose**: ONLYOFFICE offers a secure online office suite highly compatible with MS Office formats. It is generally used with Nextcloud to edit documents directly within the web browser.

```jsx title="docker-compose.yml"
version: '3'

services:
  app:
    image: onlyoffice/documentserver-ee
    ports:
      - 80:80
      - 443:443
    volumes:
      - /srv/containers/onlyoffice/DocumentServer/logs:/var/log/onlyoffice
      - /srv/containers/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data
      - /srv/containers/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice
      - /srv/containers/onlyoffice/DocumentServer/db:/var/lib/postgresql
      - /srv/containers/onlyoffice/DocumentServer/fonts:/usr/share/fonts/truetype/custom
      - /srv/containers/onlyoffice/DocumentServer/forgotten:/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten
      - /srv/containers/onlyoffice/DocumentServer/rabbitmq:/var/lib/rabbitmq
      - /srv/containers/onlyoffice/DocumentServer/redis:/var/lib/redis
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.rule=Host(`office.cyberstrawberry.net`)"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.entrypoints=websecure"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.tls.certresolver=myresolver"
      - "traefik.http.services.cyberstrawberry-onlyoffice.loadbalancer.server.port=80"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.middlewares=onlyoffice-headers"
      - "traefik.http.middlewares.onlyoffice-headers.headers.customrequestheaders.X-Forwarded-Proto=https"
      #- "traefik.http.middlewares.onlyoffice-headers.headers.accessControlAllowOrigin=*"
    environment:
      - JWT_ENABLED=true
      - JWT_SECRET=REDACTED #SET THIS TO SOMETHING SECURE
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.143
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
:::tip
If you wish to use this in a non-commercial homelab environment without limits, [this script](https://wiki.muwahhid.ru/ru/Unraid/Docker/Onlyoffice-Document-Server) resets the trial endlessly without functionality limits.
```
docker stop office-document-server-ee
docker rm office-document-server-ee
rm -r /mnt/user/appdata/onlyoffice/DocumentServer
sleep 5
<USE A PORTAINER WEBHOOK TO RECREATE THE CONTAINER OR REFERENCE THE DOCKER RUN METHOD BELOW>
```

Docker Run Method:
```
docker run -d --name='office-document-server-ee' --net='bridge' -e TZ="Europe/Moscow" -e HOST_OS="Unraid" -e 'JWT_ENABLED'='true' -e 'JWT_SECRET'='mySecret' -p '8082:80/tcp' -p '4432:443/tcp' -v '/mnt/user/appdata/onlyoffice/DocumentServer/logs':'/var/log/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/data':'/var/www/onlyoffice/Data':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/lib':'/var/lib/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/db':'/var/lib/postgresql':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/fonts':'/usr/share/fonts/truetype/custom':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/forgotten':'/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/rabbitmq':'/var/lib/rabbitmq':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/redis':'/var/lib/redis':'rw' 'onlyoffice/documentserver-ee'
```
:::
56
Docker & Kubernetes/Docker/Docker Compose/Password Pusher.md
Normal file
@ -0,0 +1,56 @@
**Purpose**: An application to securely communicate passwords over the web. Passwords automatically expire after a certain number of views and/or amount of time has passed. Track who, what, and when.

## Docker Configuration
```jsx title="docker-compose.yml"
version: '3'

services:
  passwordpusher:
    image: docker.io/pglombardo/pwpush:release
    expose:
      - 5100
    restart: always
    environment:
      # Read the documentation on how to generate a master key, then put it below
      - PWPUSH_MASTER_KEY=${PWPUSH_MASTER_KEY}
    networks:
      docker_network:
        ipv4_address: 192.168.5.170
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.passwordpusher.rule=Host(`temp.bunny-lab.io`)"
      - "traefik.http.routers.passwordpusher.entrypoints=websecure"
      - "traefik.http.routers.passwordpusher.tls.certresolver=letsencrypt"
      - "traefik.http.services.passwordpusher.loadbalancer.server.port=5100"
networks:
  docker_network:
    external: true
```

```jsx title=".env"
PWPUSH_MASTER_KEY=<PASSWORD>
```

!!! note "PWPUSH_MASTER_KEY"
    Generate a master key by visiting the [official online key generator](https://pwpush.com/en/pages/generate_key).

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    password-pusher:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: password-pusher
      rule: Host(`temp.bunny-lab.io`)

  services:
    password-pusher:
      loadBalancer:
        servers:
          - url: http://192.168.5.170:5100
        passHostHeader: true
```
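Besides the web UI, Password Pusher also exposes a JSON API that is handy for scripting; a hedged sketch against this deployment (the endpoint and parameter names follow the upstream pwpush documentation):

``` sh
# Create a push that expires after 5 views or 3 days, whichever comes first
curl -X POST https://temp.bunny-lab.io/p.json \
  -d "password[payload]=SomethingSecret" \
  -d "password[expire_after_views]=5" \
  -d "password[expire_after_days]=3"
```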
41
Docker & Kubernetes/Docker/Docker Compose/Pi-Hole.md
Normal file
@ -0,0 +1,41 @@
**Purpose**: Pi-hole is a Linux network-level advertisement and Internet tracker blocking application which acts as a DNS sinkhole and optionally a DHCP server, intended for use on a private network.

```jsx title="docker-compose.yml"
version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "80:80/tcp"
    environment:
      TZ: 'America/Denver'
      WEBPASSWORD: 'REDACTED' # USE A SECURE PASSWORD HERE
    # Volumes store your data between container upgrades
    volumes:
      - /srv/containers/pihole/app:/etc/pihole
      - /srv/containers/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    # cap_add:
    #   - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.190
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
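To confirm the DNS sinkhole is working, query the container directly; a sketch (the blocked domain is just a common example, and results depend on your blocklists and blocking mode):

``` sh
# A blocked domain should resolve to 0.0.0.0 in the default blocking mode
dig @192.168.5.190 doubleclick.net +short
# An unblocked domain should resolve normally
dig @192.168.5.190 example.com +short
```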
93
Docker & Kubernetes/Docker/Docker Compose/RocketChat.md
Normal file
@ -0,0 +1,93 @@
**Purpose**: Deploy a RocketChat and MongoDB database together.

```jsx title="docker-compose.yml"
services:
  rocketchat:
    image: registry.rocket.chat/rocketchat/rocket.chat:${RELEASE:-latest}
    restart: always
    # labels:
    #   traefik.enable: "true"
    #   traefik.http.routers.rocketchat.rule: Host(`${DOMAIN:-}`)
    #   traefik.http.routers.rocketchat.tls: "true"
    #   traefik.http.routers.rocketchat.entrypoints: https
    #   traefik.http.routers.rocketchat.tls.certresolver: le
    environment:
      MONGO_URL: "${MONGO_URL:-\
        mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
        ${MONGODB_DATABASE:-rocketchat}?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
      MONGO_OPLOG_URL: "${MONGO_OPLOG_URL:-\
        mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
        local?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
      ROOT_URL: ${ROOT_URL:-http://localhost:${HOST_PORT:-3000}}
      PORT: ${PORT:-3000}
      DEPLOY_METHOD: docker
      DEPLOY_PLATFORM: ${DEPLOY_PLATFORM:-}
      REG_TOKEN: ${REG_TOKEN:-}
    depends_on:
      - rc_mongodb
    expose:
      - ${PORT:-3000}
    dns:
      - 1.1.1.1
      - 1.0.0.1
      - 8.8.8.8
      - 8.8.4.4
    ports:
      - "${BIND_IP:-0.0.0.0}:${HOST_PORT:-3000}:${PORT:-3000}"
    networks:
      docker_network:
        ipv4_address: 192.168.5.2

  rc_mongodb:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION:-5.0}
    restart: always
    volumes:
      - /srv/deeptree/rocket.chat/mongodb:/bitnami/mongodb
    environment:
      MONGODB_REPLICA_SET_MODE: primary
      MONGODB_REPLICA_SET_NAME: ${MONGODB_REPLICA_SET_NAME:-rs0}
      MONGODB_PORT_NUMBER: ${MONGODB_PORT_NUMBER:-27017}
      MONGODB_INITIAL_PRIMARY_HOST: ${MONGODB_INITIAL_PRIMARY_HOST:-rc_mongodb}
      MONGODB_INITIAL_PRIMARY_PORT_NUMBER: ${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}
      MONGODB_ADVERTISED_HOSTNAME: ${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}
      MONGODB_ENABLE_JOURNAL: ${MONGODB_ENABLE_JOURNAL:-true}
      ALLOW_EMPTY_PASSWORD: ${ALLOW_EMPTY_PASSWORD:-yes}
    networks:
      docker_network:
        ipv4_address: 192.168.5.3

networks:
  docker_network:
    external: true
```

```jsx title=".env"
TZ=America/Denver
RELEASE=6.3.0
PORT=3000 #Redundant - Can be Removed
MONGODB_VERSION=6.0
MONGODB_INITIAL_PRIMARY_HOST=rc_mongodb #Redundant - Can be Removed
MONGODB_ADVERTISED_HOSTNAME=rc_mongodb #Redundant - Can be Removed
```
## Reverse Proxy Configuration
```jsx title="nginx.conf"
# Rocket.Chat Server
server {
    listen 443 ssl;
    server_name rocketchat.domain.net;
    error_log /var/log/nginx/new_rocketchat_error.log;
    client_max_body_size 500M;
    location / {
        proxy_pass http://192.168.5.2:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Nginx-Proxy true;
        proxy_redirect off;
    }
}
```
51
Docker & Kubernetes/Docker/Docker Compose/SearX.md
Normal file
@ -0,0 +1,51 @@
**Purpose**: Deploys a SearX meta search engine server.

## Docker Configuration
```jsx title="docker-compose.yml"
version: '3'
services:
  searx:
    image: searx/searx:latest
    ports:
      - 8080:8080
    volumes:
      - /srv/containers/searx/:/etc/searx
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.searx.rule=Host(`searx.bunny-lab.io`)"
      - "traefik.http.routers.searx.entrypoints=websecure"
      - "traefik.http.routers.searx.tls.certresolver=letsencrypt"
      - "traefik.http.services.searx.loadbalancer.server.port=8080"
    networks:
      docker_network:
        ipv4_address: 192.168.5.124
networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    searx:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: searx
      rule: Host(`searx.bunny-lab.io`)

  services:
    searx:
      loadBalancer:
        servers:
          - url: http://192.168.5.124:8080
        passHostHeader: true
```
135
Docker & Kubernetes/Docker/Docker Compose/Snipe-IT.md
Normal file
@ -0,0 +1,135 @@
**Purpose**: A free open source IT asset/license management system.

!!! warning
    The Snipe-IT container will attempt to launch after the MariaDB container starts, but MariaDB takes a while to set itself up before it can accept connections; as a result, Snipe-IT will fail to initialize the database. Just wait about 30 seconds after deploying the stack, then restart the Snipe-IT container to initialize the database. You will know it worked if you see notes about data being `Migrated`.
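In practice that warning boils down to two commands; a hedged sketch using the service name from the compose file below:

``` sh
# Give MariaDB ~30 seconds to finish initializing, then restart Snipe-IT
sleep 30
docker compose restart snipeit
# Watch for "Migrated" lines confirming the database initialized
docker compose logs -f snipeit
```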
## Docker Configuration
```jsx title="docker-compose.yml"
version: '3.7'

services:
  snipeit:
    image: snipe/snipe-it
    ports:
      - "8000:80"
    depends_on:
      - db
    env_file:
      - stack.env
    volumes:
      - ${DATA_LOCATION}/snipeit:/var/lib/snipeit
    networks:
      docker_network:
        ipv4_address: 192.168.5.50

  redis:
    image: redis:6.2.5-buster
    ports:
      - "6379:6379"
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.51

  db:
    image: mariadb:10.5
    ports:
      - "3306:3306"
    env_file:
      - stack.env
    volumes:
      - ${DATA_LOCATION}/mariadb:/var/lib/mysql
    networks:
      docker_network:
        ipv4_address: 192.168.5.52

  mailhog:
    image: mailhog/mailhog:v1.0.1
    ports:
      # - 1025:1025
      - "8025:8025"
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.53

networks:
  docker_network:
    external: true
```

```jsx title=".env"
APP_ENV=production
APP_DEBUG=false
APP_KEY=base64:SomethingSecure
APP_URL=https://assets.bunny-lab.io
APP_TIMEZONE='America/Denver'
APP_LOCALE=en
MAX_RESULTS=500
PRIVATE_FILESYSTEM_DISK=local
PUBLIC_FILESYSTEM_DISK=local_public
DB_CONNECTION=mysql
DB_HOST=db
DB_DATABASE=snipedb
DB_USERNAME=snipeuser
DB_PASSWORD=SomethingSecure
DB_PREFIX=null
DB_DUMP_PATH='/usr/bin'
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
IMAGE_LIB=gd
MYSQL_DATABASE=snipedb
MYSQL_USER=snipeuser
MYSQL_PASSWORD=SomethingSecure
MYSQL_ROOT_PASSWORD=SomethingSecure
REDIS_HOST=redis
REDIS_PASSWORD=SomethingSecure
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=mail.deeptree.tech
MAIL_PORT=587
MAIL_USERNAME=assets@bunny-lab.io
MAIL_PASSWORD=SomethingSecure
MAIL_ENCRYPTION=starttls
MAIL_FROM_ADDR=assets@bunny-lab.io
MAIL_FROM_NAME='Bunny Lab Asset Management'
MAIL_REPLYTO_ADDR=assets@bunny-lab.io
MAIL_REPLYTO_NAME='Bunny Lab Asset Management'
MAIL_AUTO_EMBED_METHOD='attachment'
DATA_LOCATION=/srv/containers/snipe-it
APP_TRUSTED_PROXIES=192.168.5.29
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    assets:
      entryPoints:
        - websecure
      rule: Host(`assets.bunny-lab.io`)
      service: assets
      tls:
        certResolver: letsencrypt
      middlewares:
        - assets

  middlewares:
    assets:
      headers:
        customRequestHeaders:
          X-Forwarded-Proto: https
          X-Forwarded-Host: assets.bunny-lab.io
        customResponseHeaders:
          X-Custom-Header: CustomValue # Example of a static header

  services:
    assets:
      loadBalancer:
        servers:
          - url: http://192.168.5.50:80
        passHostHeader: true
```
|
Docker & Kubernetes/Docker/Docker Compose/Stirling-PDF.md
**Purpose**: This is a powerful, locally hosted, web-based PDF manipulation tool using Docker that allows you to perform various operations on PDF files, such as splitting, merging, converting, reorganizing, adding images, rotating, compressing, and more. This locally hosted web application started as a 100% ChatGPT-made application and has evolved to include a wide range of features to handle all your PDF needs.

## Docker Configuration
```jsx title="docker-compose.yml"
version: "3.8"
services:
  app:
    image: frooodle/s-pdf:latest
    container_name: stirling-pdf
    environment:
      - TZ=America/Denver
      - DOCKER_ENABLE_SECURITY=false
    volumes:
      - /srv/containers/stirling-pdf/datastore:/datastore
      - /srv/containers/stirling-pdf/trainingData:/usr/share/tesseract-ocr/5/tessdata # Required for extra OCR languages
      - /srv/containers/stirling-pdf/extraConfigs:/configs
      - /srv/containers/stirling-pdf/customFiles:/customFiles/
      - /srv/containers/stirling-pdf/logs:/logs/
    ports:
      - 8080:8080
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.stirling-pdf.rule=Host(`pdf.bunny-lab.io`)"
      - "traefik.http.routers.stirling-pdf.entrypoints=websecure"
      - "traefik.http.routers.stirling-pdf.tls.certresolver=letsencrypt"
      - "traefik.http.services.stirling-pdf.loadbalancer.server.port=8080"
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.54

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
N/A
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    stirling-pdf:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: stirling-pdf
      rule: Host(`pdf.bunny-lab.io`)

  services:
    stirling-pdf:
      loadBalancer:
        servers:
          - url: http://192.168.5.54:8080
        passHostHeader: true
```
Docker & Kubernetes/Docker/Docker Compose/Traefik.md
**Purpose**: Deploy a Traefik Reverse Proxy

```jsx title="docker-compose.yml"
version: "3.3"
services:
  traefik:
    image: "traefik:latest"
    restart: always
    container_name: "traefik"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    labels:
      - "traefik.http.routers.traefik-proxy.middlewares=my-buffering"
      - "traefik.http.middlewares.my-buffering.buffering.maxRequestBodyBytes=104857600"
      - "traefik.http.middlewares.my-buffering.buffering.maxResponseBodyBytes=104857600"
      - "traefik.http.middlewares.my-buffering.buffering.memRequestBodyBytes=2097152"
      - "traefik.http.middlewares.my-buffering.buffering.memResponseBodyBytes=2097152"
      - "traefik.http.middlewares.my-buffering.buffering.retryExpression=IsNetworkError() && Attempts() <= 2"
    command:
      # Globals
      - "--log.level=ERROR"
      - "--api.insecure=true"
      - "--global.sendAnonymousUsage=false"
      # Docker
      # - "--providers.docker=true"
      # - "--providers.docker.exposedbydefault=false"
      # File Provider
      - "--providers.file.directory=/etc/traefik/dynamic"
      - "--providers.file.watch=true"
      # Entrypoints
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure" # Redirect HTTP to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https" # Redirect HTTP to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.permanent=true" # Redirect HTTP to HTTPS
      # LetsEncrypt
      # - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
      - "--certificatesresolvers.letsencrypt.acme.dnschallenge=true" # TEMPORARY CHANGE
      - "--certificatesresolvers.letsencrypt.acme.dnschallenge.provider=cloudflare" # TEMPORARY CHANGE
      - "--certificatesresolvers.letsencrypt.acme.email=cyberstrawberry101@gmail.com"
      - "--certificatesresolvers.letsencrypt.acme.storage=/letsencrypt/acme.json"
    # labels:
    #   # API
    #   - "traefik.enable=true"
    #   # Global http --> https
    #   - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:[a-z-.]+}`)"
    #   - "traefik.http.routers.http-catchall.entrypoints=web"
    #   - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
    #   - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/srv/containers/traefik/letsencrypt:/letsencrypt"
      - "/srv/containers/traefik/config:/etc/traefik"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/srv/containers/traefik/cloudflare:/cloudflare"
    networks:
      docker_network:
        ipv4_address: 192.168.5.29
    environment:
      - CF_API_EMAIL=cyberstrawberry101@gmail.com
      - CF_API_KEY=REDACTED
    extra_hosts:
      - "flask.cyberstrawberry.local:192.168.3.21"
      - "searx.cyberstrawberry.local:192.168.3.21"
      - "heimdall.cyberstrawberry.local:192.168.3.21"
      - "status.cyberstrawberry.local:192.168.3.21"
      - "rancher.cyberstrawberry.local:192.168.3.21"
      - "trilium.blockaderunners.local:192.168.3.21"
      - "pw.cyberstrawberry.local:192.168.3.22"
      - "remote.cyberstrawberry.local:192.168.5.43"
      - "cluster-cloud.cyberstrawberry.local:192.168.3.22"
      - "searx.blockaderunners.local:192.168.3.22"
      - "searx.deeptree-labs.local:192.168.3.22"
      - "cyberstrawberry.local:192.168.3.22"
      - "storage.cyberstrawberry.local:192.168.3.22"
      - "cloud.cyberstrawberry.local:192.168.5.146"
      - "cloud.blockaderunners.local:192.168.5.90"
      - "docs.blockaderunners.local:192.168.5.212"
      - "status.blockaderunners.local:192.168.5.13"
      - "blockaderunners.local:192.168.5.219"
      - "office.cyberstrawberry.local:192.168.5.143"
      - "git.deeptree.local:192.168.5.166"
      - "pw.deeptree.local:192.168.5.170"
      - "status.deeptree.local:192.168.5.211"
      - "temp.cyberstrawberry.local:192.168.5.197"
      - "drop.cyberstrawberry.local:192.168.5.14"
      - "vault.cyberstrawberry.local:192.168.3.22"
      - "bitwarden.cyberstrawberry.local:192.168.5.141"
      - "chat.cyberstrawberry.local:192.168.3.22"
      - "trilium.cyberstrawberry.local:192.168.3.22"
      - "node-red.cyberstrawberry.local:192.168.3.21"
      - "homelab.cyberstrawberry.local:192.168.3.22"
      - "awx.cyberstrawberry.local:192.168.3.21"
      - "git.cyberstrawberry.local:192.168.3.21"
      - "lab.cyberstrawberry.local:192.168.5.44"
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
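Given the `--providers.file.directory=/etc/traefik/dynamic` flag and the `/srv/containers/traefik/config:/etc/traefik` volume above, the dynamic config files referenced throughout this documentation live on the host at `/srv/containers/traefik/config/dynamic/`. For example (the filename is arbitrary, one file per proxied service works well):
``` sh
# Traefik hot-reloads this file on save because of --providers.file.watch=true
nano /srv/containers/traefik/config/dynamic/searx.yml
```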
@ -0,0 +1,41 @@
**Purpose**: The UniFi® Controller is a wireless network management software solution from Ubiquiti Networks™. It allows you to manage multiple wireless networks using a web browser.

```jsx title="docker-compose.yml"
version: "2.1"
services:
  controller:
    image: lscr.io/linuxserver/unifi-controller:latest
    container_name: controller
    environment:
      - PUID=1000
      - PGID=1000
      #- MEM_LIMIT=1024 #optional
      #- MEM_STARTUP=1024 #optional
    volumes:
      - /srv/containers/unifi-controller:/config
    ports:
      - 8443:8443
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
      - 1900:1900/udp #optional
      - 8843:8843 #optional
      - 8880:8880 #optional
      - 6789:6789 #optional
      - 5514:5514/udp #optional
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.140
        # ipv4_address: 192.168.3.140
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Docker & Kubernetes/Docker/Docker Compose/UptimeKuma.md
**Purpose**: Deploy the Uptime Kuma uptime monitor to watch services in the homelab and send notifications to various services.

```jsx title="docker-compose.yml"
version: '3'
services:
  uptimekuma:
    image: louislam/uptime-kuma
    ports:
      - 3001:3001
    volumes:
      - /mnt/uptimekuma:/app/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Allow status page to exist within an iframe
      - UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=1
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.uptime-kuma.rule=Host(`status.cyberstrawberry.net`)"
      - "traefik.http.routers.uptime-kuma.entrypoints=websecure"
      - "traefik.http.routers.uptime-kuma.tls.certresolver=letsencrypt"
      - "traefik.http.services.uptime-kuma.loadbalancer.server.port=3001"
    networks:
      docker_network:
        ipv4_address: 192.168.5.211
networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Docker & Kubernetes/Docker/Docker Compose/VaultWarden.md
**Purpose**: Unofficial Bitwarden compatible server written in Rust, formerly known as bitwarden_rs.

```jsx title="docker-compose.yml"
---
version: "2.1"
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    environment:
      - TZ=America/Denver
      - INVITATIONS_ALLOWED=false
      - SIGNUPS_ALLOWED=false
      - WEBSOCKET_ENABLED=false
      - ADMIN_TOKEN=REDACTED # PUT A REALLY REALLY REALLY SECURE PASSWORD HERE
    volumes:
      - /srv/containers/vaultwarden:/data
    ports:
      - 80:80
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.15
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.bunny-vaultwarden.rule=Host(`vault.bunny-lab.io`)"
      - "traefik.http.routers.bunny-vaultwarden.entrypoints=websecure"
      - "traefik.http.routers.bunny-vaultwarden.tls.certresolver=letsencrypt"
      - "traefik.http.services.bunny-vaultwarden.loadbalancer.server.port=80"
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```
:::caution
It is **CRITICAL** that you never share the `ADMIN_TOKEN` with anyone. It allows you to log into the instance at https://vault.example.com/admin to add users, delete users, make changes system-wide, etc.
:::
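One quick way to generate a token strong enough for this field (a suggestion, not the only valid approach; any long random string works):
``` sh
# Generate 48 random bytes, base64-encoded, to paste into ADMIN_TOKEN
openssl rand -base64 48
```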
```jsx title=".env"
Not Applicable
```

## Traefik Reverse Proxy Configuration
If the container does not run on the same host as Traefik, you will need to manually add configuration to Traefik's dynamic config file, outlined below.
``` yaml
http:
  routers:
    bunny-vaultwarden:
      entryPoints:
        - websecure
      tls:
        certResolver: letsencrypt
      service: vaultwarden
      rule: Host(`vault.bunny-lab.io`)

  services:
    vaultwarden:
      loadBalancer:
        servers:
          - url: http://192.168.5.15:80
        passHostHeader: true
```
Docker & Kubernetes/Docker/Docker Compose/Wordpress.md
**Purpose**: At its core, WordPress is the simplest, most popular way to create your own website or blog. In fact, WordPress powers over 43.3% of all the websites on the Internet. Yes – more than two in five websites that you visit are likely powered by WordPress.

```jsx title="docker-compose.yml"
version: '3.7'
services:
  wordpress:
    image: wordpress:latest
    restart: always
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_HOST: 192.168.5.216
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - /srv/Containers/WordPress/Server:/var/www/html
    networks:
      docker_network:
        ipv4_address: 192.168.5.217
    depends_on:
      - db
  db:
    image: lscr.io/linuxserver/mariadb
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      REMOTE_SQL: http://URL1/your.sql,https://URL2/your.sql
    volumes:
      - /srv/Containers/WordPress/DB:/config
    networks:
      docker_network:
        ipv4_address: 192.168.5.216
networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
WORDPRESS_DB_PASSWORD=SecurePassword101
MYSQL_ROOT_PASSWORD=SecurePassword202
```
Docker & Kubernetes/Docker/Docker Networking.md
### Configure Docker Network
We want to use a dedicated subnet / network specifically for containers, so they don't trample over the **SERVER** and **LAN** networks. If you are unsure of the name of the network adapter, in this case `eth0`, just type `ip addr` in the terminal to list the network interfaces to locate it.
```
docker network create -d macvlan --subnet=192.168.5.0/24 --gateway=192.168.5.1 -o parent=eth0 docker_network
```

!!! note
    Be sure to replace `eth0` with the correct interface name using `ip addr` in the terminal. e.g. It may appear as something else like `ens18`, etc. If the interface doesn't exist, Docker will produce an error complaining about it.
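Once the network exists, containers can be attached to it with a static address, which is the pattern used by the compose files throughout this documentation. A minimal sketch (the service name and IP below are placeholders):
``` yaml
services:
  example-app: # placeholder service
    image: nginx:latest
    networks:
      docker_network:
        ipv4_address: 192.168.5.100 # pick an unused address in the 192.168.5.0/24 subnet

networks:
  docker_network:
    external: true # reuse the macvlan network created above
```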
@ -0,0 +1,2 @@
awx-operator
https://ansible.github.io/awx-operator/
Docker & Kubernetes/Helm Charts/AWX Operator/config.yml
AWX:
  enabled: true
  name: awx
  postgres:
    dbName: Unset
    enabled: false
    host: Unset
    password: Unset
    port: 5678
    sslmode: prefer
    type: unmanaged
    username: admin
  spec:
    admin_user: admin
    admin_email: cyberstrawberry101@gmail.com
    auto_upgrade: true
    hostname: awx.cyberstrawberry.net
    ingress_path: /
    ingress_path_type: Prefix
    ingress_type: ingress
    ipv6_disabled: true
    projects_persistence: true
    projects_storage_class: longhorn
    projects_storage_size: 32Gi
    task_privileged: true
global:
  cattle:
    systemProjectId: p-78f96
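This file is a values overlay for the AWX Operator Helm chart hosted at the repository noted above. A typical install might look like the following; the release and chart names are assumptions based on that repository's conventions:
``` sh
helm repo add awx-operator https://ansible.github.io/awx-operator/
helm repo update
helm install awx-operator awx-operator/awx-operator -n awx --create-namespace -f config.yml
```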
@ -0,0 +1,25 @@
```jsx title="krb5.conf"
[libdefaults]
  default_realm = MOONGATE.LOCAL
  dns_lookup_realm = true
  dns_lookup_kdc = true
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true
  default_ccache_name = KEYRING:persistent:%{uid}

[realms]
  MOONGATE.LOCAL = {
    kdc = NEXUS-DC-01.MOONGATE.LOCAL
    admin_server = NEXUS-DC-01.MOONGATE.LOCAL
  }

[domain_realm]
  .moongate.local = MOONGATE.LOCAL
  moongate.local = MOONGATE.LOCAL
```
Docker & Kubernetes/Helm Charts/AWX Operator/v1.3.0.txt
v1.3.0
Docker & Kubernetes/Helm Charts/Gitea/config.yml
affinity: {}
checkDeprecation: true
clusterDomain: cluster.local
containerSecurityContext: {}
dnsConfig: {}
extraContainerVolumeMounts: []
extraInitVolumeMounts: []
extraVolumeMounts: []
extraVolumes: []
gitea:
  additionalConfigFromEnvs:
    - name: ENV_TO_INI__SERVER__ROOT_URL
      value: https://git.cyberstrawberry.net
  additionalConfigSources: []
  admin:
    email: cyberstrawberry101@gmail.com
    existingSecret: null
    password: SUPER-SECRET-ADMIN-PASSWORD-THAT-NOONE-WILL-GUESS
    username: nicole.rappe
  config:
    APP_NAME: "CyberStrawberry"
  ldap: []
  livenessProbe:
    enabled: true
    failureThreshold: 10
    initialDelaySeconds: 200
    periodSeconds: 10
    successThreshold: 1
    tcpSocket:
      port: http
    timeoutSeconds: 1
  metrics:
    enabled: false
    serviceMonitor:
      enabled: false
  oauth: []
  podAnnotations: {}
  readinessProbe:
    enabled: true
    failureThreshold: 3
    initialDelaySeconds: 5
    periodSeconds: 10
    successThreshold: 1
    tcpSocket:
      port: http
    timeoutSeconds: 1
  ssh:
    logLevel: INFO
  startupProbe:
    enabled: false
    failureThreshold: 10
    initialDelaySeconds: 60
    periodSeconds: 10
    successThreshold: 1
    tcpSocket:
      port: http
    timeoutSeconds: 1
global:
  hostAliases: []
  imagePullSecrets: []
  imageRegistry: ''
  storageClass: longhorn
image:
  pullPolicy: Always
  registry: ''
  repository: gitea/gitea
  rootless: false
  tag: ''
imagePullSecrets: []
ingress:
  annotations: {}
  className: null
  enabled: false
  hosts:
    - host: git.cyberstrawberry.net
      paths:
        - path: /
          pathType: Prefix
  tls: []
initPreScript: ''
memcached:
  enabled: true
  service:
    ports:
      memcached: 11211
nodeSelector: {}
persistence:
  accessModes:
    - ReadWriteOnce
  annotations: {}
  enabled: true
  existingClaim: null
  labels: {}
  size: 32Gi
  storageClass: null
  subPath: null
podSecurityContext:
  fsGroup: 1000
postgresql:
  enabled: true
  global:
    postgresql:
      auth:
        database: gitea
        password: gitea
        username: gitea
      service:
        ports:
          postgresql: 5432
  primary:
    persistence:
      size: 32Gi
replicaCount: 1
resources: {}
schedulerName: ''
securityContext: {}
service:
  http:
    annotations: {}
    clusterIP: None
    externalIPs: null
    externalTrafficPolicy: null
    ipFamilies: null
    ipFamilyPolicy: null
    loadBalancerIP: null
    loadBalancerSourceRanges: []
    nodePort: null
    port: 3000
    type: ClusterIP
  ssh:
    annotations: {}
    clusterIP: None
    externalIPs: null
    externalTrafficPolicy: null
    hostPort: null
    ipFamilies: null
    ipFamilyPolicy: null
    loadBalancerIP: null
    loadBalancerSourceRanges: []
    nodePort: null
    port: 22
    type: ClusterIP
signing:
  enabled: false
  existingSecret: ''
  gpgHome: /data/git/.gnupg
  privateKey: ''
statefulset:
  annotations: {}
  env: []
  labels: {}
  terminationGracePeriodSeconds: 60
test:
  enabled: true
  image:
    name: busybox
    tag: latest
tolerations: []
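A sketch of applying this values file with the official Gitea Helm chart. The repository URL below is taken from Gitea's Helm documentation and may have moved since (dl.gitea.io vs dl.gitea.com), so verify it before relying on it:
``` sh
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
helm install gitea gitea-charts/gitea -f config.yml
```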
Docker & Kubernetes/Helm Charts/Nextcloud/config.yml
affinity: {}
cronjob:
  enabled: false
  lifecycle: {}
  resources: {}
  securityContext: {}
deploymentAnnotations: {}
deploymentLabels: {}
externalDatabase:
  database: nextcloud
  enabled: true
  existingSecret:
    enabled: false
  host: cluster-nextcloud-postgresql
  password: SecurePasswordGoesHere
  type: postgresql
  user: nextcloud
fullnameOverride: ''
hpa:
  cputhreshold: 60
  enabled: false
  maxPods: 10
  minPods: 1
image:
  pullPolicy: IfNotPresent
  repository: nextcloud
ingress:
  annotations: {}
  enabled: false
  labels: {}
  path: /
  pathType: Prefix
internalDatabase:
  enabled: false
  name: nextcloud
lifecycle: {}
livenessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
mariadb:
  architecture: standalone
  auth:
    database: nextcloud
    password: changeme
    username: nextcloud
  enabled: false
  primary:
    persistence:
      accessMode: ReadWriteOnce
      enabled: false
      size: 8Gi
metrics:
  enabled: false
  https: false
  image:
    pullPolicy: IfNotPresent
    repository: xperimental/nextcloud-exporter
    tag: 0.6.0
  replicaCount: 1
  service:
    annotations:
      prometheus.io/port: '9205'
      prometheus.io/scrape: 'true'
    labels: {}
    type: ClusterIP
  serviceMonitor:
    enabled: false
    interval: 30s
    jobLabel: ''
    labels: {}
    namespace: ''
    scrapeTimeout: ''
  timeout: 5s
  tlsSkipVerify: false
  token: ''
nameOverride: ''
nextcloud:
  configs: {}
  datadir: /var/www/html/data
  defaultConfigs:
    .htaccess: true
    apache-pretty-urls.config.php: true
    apcu.config.php: true
    apps.config.php: true
    autoconfig.php: true
    redis.config.php: true
    smtp.config.php: true
  existingSecret:
    enabled: false
  extraEnv: null
  extraInitContainers: []
  extraSidecarContainers: []
  extraVolumeMounts: null
  extraVolumes: null
  host: storage.cyberstrawberry.net
  mail:
    domain: domain.com
    enabled: false
    fromAddress: user
    smtp:
      authtype: LOGIN
      host: domain.com
      name: user
      password: pass
      port: 465
      secure: ssl
  password: SUPER-SECRET-PASSWORD-FOR-ADMIN
  persistence:
    subPath: null
  phpConfigs: {}
  podSecurityContext: {}
  securityContext: {}
  strategy:
    type: Recreate
    update: 0
  username: Nicole
nginx:
  config:
    default: true
  enabled: false
  image:
    pullPolicy: IfNotPresent
    repository: nginx
    tag: alpine
  resources: {}
  securityContext: {}
nodeSelector: {}
persistence:
  accessMode: ReadWriteOnce
  annotations: {}
  enabled: true
  nextcloudData:
    accessMode: ReadWriteOnce
    annotations: {}
    enabled: true
    size: 800Gi
    subPath: null
  size: 16Gi
phpClientHttpsFix:
  enabled: true
  protocol: https
podAnnotations: {}
postgresql:
  enabled: true
  global:
    postgresql:
      auth:
        database: nextcloud
        password: SUPER-SECRET-PASSWORD-FOR-DB
        username: nextcloud
  primary:
    persistence:
      enabled: true
rbac:
  enabled: false
  serviceaccount:
    annotations: {}
    create: true
    name: nextcloud-serviceaccount
readinessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
redis:
  auth:
    enabled: true
    password: changeme
  enabled: false
replicaCount: 1
resources: {}
securityContext: {}
service:
  loadBalancerIP: nil
  nodePort: nil
  port: 8080
  type: ClusterIP
startupProbe:
  enabled: false
  failureThreshold: 30
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
tolerations: []
global:
  cattle:
    systemProjectId: p-78f96
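A sketch of applying this values file with the community Nextcloud Helm chart; the repository URL is assumed from the chart's GitHub Pages and the release name is arbitrary:
``` sh
helm repo add nextcloud https://nextcloud.github.io/helm/
helm repo update
helm install nextcloud nextcloud/nextcloud -f config.yml
```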
Docker & Kubernetes/Servers/AWX/AWX (MiniKube).md
# Deploy AWX on Minikube Cluster
Minikube Cluster based deployment of Ansible AWX. (Ansible Tower)
:::note Prerequisites
This document assumes you are running **Ubuntu Server 20.04** or later.
:::

## Install Minikube Cluster
### Update the Ubuntu Server
```
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```

### Download and Install Minikube (Ubuntu Server)
Additional Documentation: https://minikube.sigs.k8s.io/docs/start/
```
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb

# Download Docker and Common Tools
sudo apt install docker.io nfs-common iptables nano htop -y

# Configure Docker User
sudo usermod -aG docker nicole
```
:::caution
Be sure to change the `nicole` username in the `sudo usermod -aG docker nicole` command to whatever your local username is.
:::
### Fully log out, then sign back in to the server
```
exit
```
### Validate that permissions allow you to run docker commands while non-root
```
docker ps
```

### Initialize Minikube Cluster
Additional Documentation: https://github.com/ansible/awx-operator
```
minikube start --driver=docker
minikube kubectl -- get nodes
minikube kubectl -- get pods -A
```

### Make sure Minikube Cluster Automatically Starts on Boot
```jsx title="/etc/systemd/system/minikube.service"
[Unit]
Description=Minikube service
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=nicole
ExecStart=/usr/bin/minikube start --driver=docker
ExecStop=/usr/bin/minikube stop

[Install]
WantedBy=multi-user.target
```
:::caution
Be sure to change the `nicole` username in the `User=nicole` line of the config to whatever your local username is.
:::
:::info
Append `--addons=ingress` to the `minikube start` command only if you need Minikube's ingress controller; it can be omitted if you plan on running AWX behind an existing reverse proxy using a "**NodePort**" connection.
:::
### Restart Service Daemon and Enable/Start Minikube Automatic Startup
```
sudo systemctl daemon-reload
sudo systemctl enable minikube
sudo systemctl start minikube
```

### Make command alias for `kubectl`
Be sure to add the following to the bottom of your existing profile file noted below.
```jsx title="~/.bashrc"
...
alias kubectl="minikube kubectl --"
```
:::tip
If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
:::

## Make AWX Operator Kustomization File:
Find the latest tag version here: https://github.com/ansible/awx-operator/releases
```jsx title="kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```jsx title="awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx

---
apiVersion: v1
kind: Service
metadata:
  name: awx-service
  namespace: awx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080 # Choose an available port in the range of 30000-32767
  selector:
    app.kubernetes.io/name: awx-web
```
### Apply Configuration File
Run from the same directory as the `kustomization.yml` file.
```
kubectl apply -k .
```
:::info
If you get any errors, especially ones relating to "CRD"s, wait 30 seconds, and try re-running the `kubectl apply -k .` command to fully apply the `awx.yml` configuration file to bootstrap the awx deployment.
:::
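If you would rather not guess at the timing, you can explicitly wait for the CRD to register before re-running the apply. The CRD name below is an assumption based on the `awx.ansible.com/v1beta1` API group used in `awx.yml`:
```
kubectl wait --for condition=established --timeout=60s crd/awxs.awx.ansible.com
kubectl apply -k .
```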
### View Logs / Track Deployment Progress
```
kubectl logs -n awx awx-operator-controller-manager -c awx-manager
```
### Get AWX WebUI Address
```
minikube service -n awx awx-service --url
```
### Get WebUI Password:
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
@ -0,0 +1,162 @@
**Purpose**:
Deploying a Rancher RKE2 Cluster-based Ansible AWX Operator server. This can scale to a larger, more enterprise environment if needed.

!!! note Prerequisites
    This document assumes you are running **Ubuntu Server 22.04** or later with at least 16GB of memory, 8 CPU cores, and 64GB of storage.

## Deploy Rancher RKE2 Cluster
You will need to deploy a [Rancher RKE2 Cluster](https://docs.bunny-lab.io/Containers/Kubernetes/Rancher%20RKE2/Rancher%20RKE2%20Cluster/) on an Ubuntu Server-based virtual machine. After this phase, you can focus on the Ansible AWX-specific deployment. A single ControlPlane node is all you need to set up AWX; additional infrastructure can be added after-the-fact.

!!! tip
    If this is a virtual machine, after deploying the RKE2 cluster and validating it functions, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something during deployment.

## Server Configuration
The AWX deployment will consist of 3 YAML files that configure the containers for AWX as well as the NGINX ingress networking side of things. You will need all of them in the same folder for the deployment to be successful. For the purpose of this example, we will put all of them into a folder located at `/awx`.

``` sh
# Make the deployment folder
mkdir -p /awx
cd /awx
```

We need to increase filesystem access limits.

Temporarily set the limits now:
``` sh
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
```

Permanently set the limits for later:
```jsx title="/etc/sysctl.conf"
# <End of File>
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

Apply the settings:
``` sh
sudo sysctl -p
```

### Create AWX Deployment Configuration Files
You will need to create these files all in the same directory using the content of the examples below. Be sure to replace values such as the `awx.bunny-lab.io` host value in the `ingress.yml` file with a hostname you can point a DNS server / record to.

=== "awx.yml"

    ```jsx title="/awx/awx.yml"
    apiVersion: awx.ansible.com/v1beta1
    kind: AWX
    metadata:
      name: awx
    spec:
      service_type: ClusterIP
    ```

=== "ingress.yml"

    ```jsx title="/awx/ingress.yml"
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: ingress
    spec:
      rules:
        - host: awx.bunny-lab.io
          http:
            paths:
              - pathType: Prefix
                path: "/"
                backend:
                  service:
                    name: awx-service
                    port:
                      number: 80
    ```

=== "kustomization.yml"

    ```jsx title="/awx/kustomization.yml"
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - github.com/ansible/awx-operator/config/default?ref=2.10.0
      - awx.yml
      - ingress.yml
    images:
      - name: quay.io/ansible/awx-operator
        newTag: 2.10.0
    namespace: awx
    ```

## Ensure the Kubernetes Cluster is Ready
Check that the status of the cluster is ready by running the following commands; it should appear similar to the [Rancher RKE2 Example](https://docs.bunny-lab.io/Containers/Kubernetes/Rancher%20RKE2/Rancher%20RKE2%20Cluster/#install-helm-rancher-certmanager-jetstack-rancher-and-longhorn):
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl get pods --all-namespaces
```

## Deploy AWX using Kustomize
Now it is time to tell Kubernetes to read the configuration files using Kustomize (*built-in to newer versions of Kubernetes*) to deploy AWX into the cluster.
!!! warning "Be Patient"
    The AWX deployment process can take a while. Use the commands in the [Troubleshooting](https://docs.bunny-lab.io/Containers/Kubernetes/Rancher%20RKE2/AWX%20Operator/Ansible%20AWX%20Operator/#troubleshooting) section if you want to track the progress after running the commands below.

``` sh
cd /awx
kubectl apply -k .
```

If you get an error that looks like the below, re-run the `kubectl apply -k .` command a second time after waiting about 10 seconds. The second time the error should be gone.
``` sh
error: resource mapping not found for name: "awx" namespace: "awx" from ".": no matches for kind "AWX" in version "awx.ansible.com/v1beta1"
ensure CRDs are installed first
```

To check on the progress of the deployment, you can run the following command: `kubectl get pods -n awx`
You will know that AWX is ready to be accessed in the next step if the output looks like below:
```
NAME                                               READY   STATUS    RESTARTS        AGE
awx-operator-controller-manager-7b9ccf9d4d-cnwhc   2/2     Running   2 (3m41s ago)   9m41s
awx-postgres-13-0                                  1/1     Running   0               6m12s
awx-task-7b5f8cf98c-rhrpd                          4/4     Running   0               4m46s
awx-web-6dbd7df9f7-kn8k2                           3/3     Running   0               93s
```

## Access the AWX WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible to the host's network (such as your personal computer) unless you set up a DNS record pointing to it. In the example above, you would have an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 Cluster host.
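For example, assuming the RKE2 host sits at `192.168.3.10` (an assumed address; substitute your own), a quick-and-dirty hosts-file entry for testing would look like this:
``` sh
# /etc/hosts entry pointing the AWX hostname at the RKE2 host (assumed IP)
192.168.3.10    awx.bunny-lab.io
```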
The RKE2 Cluster will translate `awx.bunny-lab.io` to the AWX web-service container(s) automatically. SSL certificates are not covered in this documentation, but suffice to say, they can be configured on another reverse proxy such as Traefik or via Cert-Manager / JetStack. The process of setting this up goes outside the scope of this document.

!!! success "Accessing the AWX WebUI"
    If you have gotten this far, you should now be able to access AWX via the WebUI and log in.

    - AWX WebUI: https://awx.bunny-lab.io

![AWX Login Page](awx.png)

You may see a prompt about "AWX is currently upgrading. This page will refresh when complete". Be patient, let it finish. When it's done, it will take you to a login page.
AWX will generate its own secure password the first time you set up AWX. The username is `admin`. You can run the following command to retrieve the password:
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```

## Change Admin Password
You will want to change the admin password straight-away. Use the following navigation structure to find where to change the password:
``` mermaid
graph LR
    A[AWX Dashboard] --> B[Access]
    B --> C[Users]
    C --> D[admin]
    D --> E[Edit]
```

## Troubleshooting
You may want to track the deployment process to verify that it is actually doing something. There are a few Kubernetes commands that can assist with this, listed below.

!!! failure "Nested Reverse Proxy Issues"
    My homelab environment primarily uses a Traefik reverse proxy to handle all communications, but AWX currently has issues running behind Traefik/NGINX, and documentation outlining how to fix this does not exist here yet. For the time being, when you create the DNS record, use an `A` record pointing directly to the IP address of the Virtual Machine running the Rancher / AWX Operator cluster.

### AWX-Manager Deployment Logs
You may want to track the internal logs of the `awx-manager` container, which is responsible for the majority of the automated deployment of AWX. You can do so by running the command below.
```
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
!!! note
    The `-6c58d59d97-qj2n2` noted at the end of the Kubernetes "Pod" mentioned in the command above is randomized. You will need to change it based on the name shown when running the `kubectl get pods -n awx` command.
@ -0,0 +1,68 @@
**Purpose**: Once AWX is deployed, you will want to connect Gitea at https://git.bunny-lab.io. The reason for this is so we can pull our playbooks, inventories, and templates automatically into AWX, making it more stateless overall and more resilient to potential failures of either AWX or the underlying Kubernetes Cluster hosting it.

## Obtain Gitea Token
You already have this documented in Vaultwarden's password notes for awx.bunny-lab.io, but in case it gets lost, go to the [Gitea Token Page](https://git.bunny-lab.io/user/settings/applications) to set up an application token with read-only access for AWX, with a descriptive name.

## Create Gitea Credentials
Before you move on and create the project, you need to associate the Gitea token with an AWX "Credential". Navigate to **Resources > Credentials > Add**

| **Field** | **Value** |
| :--- | :--- |
| Credential Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Organization | `Default` *(Click the Magnifying Lens)* |
| Credential Type | `Source Control` |
| Username | `Gitea Username` *(e.g. `nicole`)* |
| Password | `<Gitea Token>` |

## Create an AWX Project
In order to link AWX to Gitea, you have to connect the two of them together with an AWX "Project". Navigate to **Resources > Projects > Add**

**Project Variables**:

| **Field** | **Value** |
| :--- | :--- |
| Project Name | `Bunny-Lab` |
| Description | `Homelab Environment` |
| Organization | `Default` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source Control Type | `Git` |

**Gitea-specific Variables**:

| **Field** | **Value** |
| :--- | :--- |
| Source Control URL | `https://git.bunny-lab.io/GitOps/awx.bunny-lab.io.git` |
| Source Control Branch/Tag/Commit | `main` |
| Source Control Credential | `git.bunny-lab.io` *(Click the Magnifying Lens)* |

## Add Playbooks
AWX automatically imports any playbooks it finds in the project and makes them available for templates operating within the same project-space (e.g. "Bunny-Lab"). This means no special configuration is needed for the playbooks.

## Create an Inventory
You will want to associate an inventory with the Gitea project now. Navigate to **Resources > Inventories > Add**

| **Field** | **Value** |
| :--- | :--- |
| Inventory Name | `Homelab` |
| Description | `Homelab Inventory` |
| Organization | `Default` |

### Add Gitea Inventory Source
Now you will want to connect this inventory to the inventory file(s) hosted in the aforementioned Gitea repository. Navigate to **Resources > Inventories > Homelab > Sources > Add**

| **Field** | **Value** |
| :--- | :--- |
| Source Name | `git.bunny-lab.io` |
| Description | `Gitea` |
| Execution Environment | `AWX EE (latest)` *(Click the Magnifying Lens)* |
| Source | `Sourced from a Project` |
| Project | `Bunny-Lab` |
| Inventory File | `inventories/homelab.ini` |

Check the box at the bottom named "**Update on Launch**". This will pull the latest inventory each time a job is run. It may slightly slow down jobs, but it ensures that everything is updated every time a job is run. A sketch of what such an inventory file can look like follows below.
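The host names and addresses below are purely illustrative; the real `inventories/homelab.ini` lives in the Gitea repository referenced above and will differ. The WinRM variables mirror the ones used elsewhere in this documentation's host inventories:
``` ini
# inventories/homelab.ini -- hypothetical contents for illustration
[windows]
LAB-DC-01 ansible_host=192.168.3.25

[linux]
LAB-DOCKER-01 ansible_host=192.168.3.26

[windows:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```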
## Webhooks
Optionally, set up webhooks in Gitea to trigger inventory updates in AWX upon changes in the repository. This section is not documented yet, but will eventually be documented.
@ -0,0 +1,28 @@
# WinRM (Kerberos)
**Name**: "Kerberos WinRM"

```jsx title="Input Configuration"
fields:
  - id: username
    type: string
    label: Username
  - id: password
    type: string
    label: Password
    secret: true
  - id: krb_realm
    type: string
    label: Kerberos Realm (Domain)
required:
  - username
  - password
  - krb_realm
```

```jsx title="Injector Configuration"
extra_vars:
  ansible_user: '{{ username }}'
  ansible_password: '{{ password }}'
  ansible_winrm_transport: kerberos
  ansible_winrm_kerberos_realm: '{{ krb_realm }}'
```
@ -0,0 +1,36 @@
---
sidebar_position: 1
---
# AWX Credential Types
When interacting with devices via Ansible Playbooks, you need to provide the playbook with credentials to connect to the device with. Examples are domain credentials for Windows devices, and local sudo user credentials for Linux.

## Windows-based Credentials
### NTLM
NTLM-based authentication is not exactly the most secure method of remotely running playbooks on Windows devices, but it is still encrypted using SSL certificates created by the device itself when provisioned correctly to enable WinRM functionality.
```jsx title="(NTLM) nicole.rappe@MOONGATE.LOCAL"
Credential Type: Machine
Username: nicole.rappe@MOONGATE.LOCAL
Password: <Encrypted>
Privilege Escalation Method: runas
Privilege Escalation Username: nicole.rappe@MOONGATE.LOCAL
```
### Kerberos
Kerberos-based authentication is generally considered the most secure method of authentication with Windows devices, but can be trickier to set up since it requires additional setup inside of AWX in the cluster for it to function properly. At this time, there is no working Kerberos documentation.
```jsx title="(Kerberos WinRM) nicole.rappe"
Credential Type: Kerberos WinRM
Username: nicole.rappe
Password: <Encrypted>
Kerberos Realm (Domain): MOONGATE.LOCAL
```
## Linux-based Credentials
```jsx title="(LINUX) nicole"
Credential Type: Machine
Username: nicole
Password: <Encrypted>
Privilege Escalation Method: sudo
Privilege Escalation Username: root
```

:::note
`WinRM / Kerberos` based credentials do not currently work as-expected. At this time, use either `Linux` or `NTLM` based credentials.
:::
@ -0,0 +1,39 @@
# Host Inventories
When you are deploying playbooks, you target hosts that exist in "Inventories". These inventories consist of a list of hosts and their corresponding IP addresses, as well as any host-specific variables that may be necessary to declare to run the playbook.

```jsx title="(NTLM) MOON-HOST-01"
Name: (NTLM) MOON-HOST-01
Host(s): MOON-HOST-01 @ 192.168.3.4

Variables:
---
ansible_connection: winrm
ansible_winrm_kerberos_delegation: false
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```

```jsx title="(NTLM) CyberStrawberry - Windows Hosts"
Name: (NTLM) CyberStrawberry - Windows Hosts
Host(s): MOON-HOST-01 @ 192.168.3.4
Host(s): MOON-HOST-02 @ 192.168.3.5

Variables:
---
ansible_connection: winrm
ansible_winrm_kerberos_delegation: false
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```

```jsx title="(LINUX) Unsorted Devices"
Name: (LINUX) Unsorted Devices
Host(s): CLSTR-COMPUTE-01 @ 192.168.3.50
Host(s): CLSTR-COMPUTE-02 @ 192.168.3.51

Variables:
---
None
```
@ -0,0 +1,16 @@
# AWX Projects
When you want to run playbooks on host devices in your inventory files, you need to host the playbooks in a "Project". Projects can be as simple as a connection to Gitea/Github to store playbooks in a repository.

```jsx title="Ansible Playbooks (Gitea)"
Name: Ansible Playbooks (Gitea)
Source Control Type: Git
Source Control URL: https://git.cyberstrawberry.net/nicole.rappe/ansible.git
Source Control Credential: CyberStrawberry Gitea
```

```jsx title="Resources > Credentials > CyberStrawberry Gitea"
Name: CyberStrawberry Gitea
Credential Type: Source Control
Username: nicole.rappe
Password: <Encrypted> # If you use MFA on Gitea/Github, use an App Password instead for the project.
```
@ -0,0 +1,21 @@
# Templates
Templates are basically pre-constructed groups of devices, playbooks, and credentials that perform a specific kind of task against a predefined group of hosts or device inventory.

```jsx title="Deploy Hyper-V VM"
Name: Deploy Hyper-V VM
Inventory: (NTLM) MOON-HOST-01
Playbook: playbooks/Windows/Hyper-V/Deploy-VM.yml
Credentials: (NTLM) nicole.rappe@MOONGATE.local
Execution Environment: AWX EE (latest)
Project: Ansible Playbooks (Gitea)

Variables:
---
random_number: "{{ lookup('password', '/dev/null chars=digits length=4') }}"
random_letters: "{{ lookup('password', '/dev/null chars=ascii_uppercase length=4') }}"
vm_name: "NEXUS-TEST-{{ random_number }}{{ random_letters }}"
vm_memory: "8589934592" # Measured in Bytes (e.g. 8GB)
vm_storage: "68719476736" # Measured in Bytes (e.g. 64GB)
iso_path: "C:\\ubuntu-22.04-live-server-amd64.iso"
vm_folder: "C:\\Virtual Machines\\{{ vm_name_fact }}"
```
Docker & Kubernetes/Servers/AWX/AWX Operator/awx.png
Docker & Kubernetes/Servers/Docker/Portainer.md
### Update The Package Manager
We need to update the server before installing Docker.

=== "Ubuntu Server"

    ``` sh
    sudo apt update
    sudo apt upgrade -y
    ```

=== "Rocky Linux"

    ``` sh
    sudo dnf check-update
    ```

### Deploy Docker
Install Docker, then deploy Portainer.

Convenience Script:
```
curl -fsSL https://get.docker.com | sudo sh
```

Alternative Methods:

=== "Ubuntu Server"

    ``` sh
    sudo apt install docker.io -y
    docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest # (1)
    ```

    1. Be sure to set the `-v /srv/containers/portainer:/data` value to a safe place that gets backed up regularly.

=== "Rocky Linux"

    ``` sh
    sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    sudo dnf install docker-ce docker-ce-cli containerd.io
    sudo systemctl start docker
    sudo systemctl enable docker # (1)
    docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest # (2)
    ```

    1. This is needed to ensure that Docker starts automatically every time the server is turned on.
    2. Be sure to set the `-v /srv/containers/portainer:/data` value to a safe place that gets backed up regularly.

### Configure Docker Network
I highly recommend setting up a [Dedicated Docker MACVLAN Network](https://docs.bunny-lab.io/Containers/Docker/Docker%20Networking/). You can use it to keep your containers on their own subnet.

### Access Portainer WebUI
You will be able to access the Portainer WebUI at the following address: `https://<IP Address>:9443`
!!! warning
    You need to be quick, as there is a timeout period where you won't be able to onboard / provision Portainer and will be forced to restart its container. If this happens, you can find the container using `sudo docker container ls`, followed by `sudo docker restart <ID of Portainer Container>`.
Docker & Kubernetes/Servers/Kubernetes Clusters/K8S.md
|
||||
# Deploy Generic Kubernetes
|
||||
The instructions outlined below assume you are deploying the environment using Ansible Playbooks either via Ansible's CLI or AWX.
|
||||
|
||||
### Deploy K8S User
|
||||
```jsx title="01-deploy-k8s-user.yml"
|
||||
- hosts: 'controller-nodes, worker-nodes'
|
||||
become: yes
|
||||
|
||||
tasks:
|
||||
- name: create the k8sadmin user account
|
||||
user: name=k8sadmin append=yes state=present createhome=yes shell=/bin/bash
|
||||
|
||||
- name: allow 'k8sadmin' to use sudo without needing a password
|
||||
lineinfile:
|
||||
dest: /etc/sudoers
|
||||
line: 'k8sadmin ALL=(ALL) NOPASSWD: ALL'
|
||||
validate: 'visudo -cf %s'
|
||||
|
||||
- name: set up authorized keys for the k8sadmin user
|
||||
authorized_key: user=k8sadmin key="{{item}}"
|
||||
with_file:
|
||||
- ~/.ssh/id_rsa.pub
|
||||
```
|
||||
|
||||
### Install K8S
|
||||
```jsx title="02-install-k8s.yml"
|
||||
---
|
||||
- hosts: "controller-nodes, worker-nodes"
|
||||
remote_user: nicole
|
||||
become: yes
|
||||
become_method: sudo
|
||||
become_user: root
|
||||
gather_facts: yes
|
||||
connection: ssh
|
||||
|
||||
tasks:
|
||||
- name: Create containerd config file
|
||||
file:
|
||||
path: "/etc/modules-load.d/containerd.conf"
|
||||
state: "touch"
|
||||
|
||||
- name: Add conf for containerd
|
||||
blockinfile:
|
||||
path: "/etc/modules-load.d/containerd.conf"
|
||||
block: |
|
||||
overlay
|
||||
br_netfilter
|
||||
|
||||
- name: modprobe
|
||||
shell: |
|
||||
sudo modprobe overlay
|
||||
sudo modprobe br_netfilter
|
||||
|
||||
|
||||
- name: Set system configurations for Kubernetes networking
|
||||
file:
|
||||
path: "/etc/sysctl.d/99-kubernetes-cri.conf"
|
||||
state: "touch"
|
||||
|
||||
- name: Add conf for containerd
|
||||
blockinfile:
|
||||
path: "/etc/sysctl.d/99-kubernetes-cri.conf"
|
||||
block: |
|
||||
net.bridge.bridge-nf-call-iptables = 1
|
||||
net.ipv4.ip_forward = 1
|
||||
net.bridge.bridge-nf-call-ip6tables = 1
|
||||
|
||||
- name: Apply new settings
|
||||
command: sudo sysctl --system
|
||||
|
||||
- name: install containerd
|
||||
shell: |
|
||||
sudo apt-get update && sudo apt-get install -y containerd
|
||||
sudo mkdir -p /etc/containerd
|
||||
sudo containerd config default | sudo tee /etc/containerd/config.toml
|
||||
sudo systemctl restart containerd
|
||||
|
||||
- name: disable swap
|
||||
shell: |
|
||||
sudo swapoff -a
|
||||
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
|
||||
|
||||
- name: install and configure dependencies
|
||||
shell: |
|
||||
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
|
||||
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
|
||||
|
||||
- name: Create kubernetes repo file
|
||||
file:
|
||||
path: "/etc/apt/sources.list.d/kubernetes.list"
|
||||
state: "touch"
|
||||
|
||||
- name: Add K8s Source
|
||||
blockinfile:
|
||||
path: "/etc/apt/sources.list.d/kubernetes.list"
|
||||
block: |
|
||||
deb https://apt.kubernetes.io/ kubernetes-xenial main
|
||||
|
||||
- name: Install Kubernetes
|
||||
shell: |
|
||||
sudo apt-get update
|
||||
sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
|
||||
sudo apt-mark hold kubelet kubeadm kubectl
|
||||
```
|
||||
|
||||
### Configure ControlPlanes
```jsx title="03-configure-controllers.yml"
- hosts: controller-nodes
  become: yes

  tasks:
    - name: Initialize the K8S Cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: Create .kube directory
      become: yes
      become_user: k8sadmin
      file:
        path: /home/k8sadmin/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/k8sadmin/.kube/config
        remote_src: yes
        owner: k8sadmin

    - name: Install the Pod Network
      become: yes
      become_user: k8sadmin
      shell: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      args:
        chdir: $HOME

    - name: Get the token for joining the worker nodes
      become: yes
      become_user: k8sadmin
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command

    - name: Output Join Command to the Screen
      debug:
        msg: "{{ kubernetes_join_command.stdout }}"

    - name: Copy join command to local file.
      become: yes
      local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
```

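When this play completes, the join command is printed to the screen and also saved to `/tmp/kubernetes_join_command` on the Ansible host. A quick, hedged way to confirm the controller is healthy before joining workers:
``` sh
# Run on the controller as the k8sadmin user
kubectl get nodes
kubectl get pods -n kube-system

# Run on the Ansible host to inspect the captured join command
cat /tmp/kubernetes_join_command
```
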
### Join Worker Node(s)
```jsx title="04-join-worker-nodes.yml"
- hosts: worker-nodes
  become: yes
  gather_facts: yes

  tasks:
    - name: Copy join command from Ansible host to the worker nodes.
      become: yes
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777

    - name: Join the Worker nodes to the cluster.
      become: yes
      command: sh /tmp/kubernetes_join_command
      register: joined_or_not
```

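Freshly joined workers typically show nothing in the ROLES column. As an optional follow-up (not part of the playbook), you can verify and label them from the controller; the node names below come from the example inventory in the next section:
``` sh
# Verify every node registered with the cluster
kubectl get nodes -o wide

# Optionally label the workers so their ROLES column is populated
kubectl label node k8s-node-01 node-role.kubernetes.io/worker=worker
kubectl label node k8s-node-02 node-role.kubernetes.io/worker=worker
```
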
### Host Inventory File Template
```jsx title="hosts"
[controller-nodes]
k8s-ctrlr-01 ansible_host=192.168.3.6 ansible_user=nicole

[worker-nodes]
k8s-node-01 ansible_host=192.168.3.4 ansible_user=nicole
k8s-node-02 ansible_host=192.168.3.5 ansible_user=nicole

[all:vars]
ansible_become_user=root
ansible_become_method=sudo
```

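With the inventory saved as `hosts`, a minimal sketch of running the playbooks in order from the Ansible CLI looks like the following (file names as titled above; in AWX you would instead map each playbook to a Job Template and run them in the same order):
``` sh
ansible-playbook -i hosts 01-deploy-k8s-user.yml
ansible-playbook -i hosts 02-install-k8s.yml
ansible-playbook -i hosts 03-configure-controllers.yml
ansible-playbook -i hosts 04-join-worker-nodes.yml
```
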
221
Docker & Kubernetes/Servers/Kubernetes Clusters/Rancher RKE2.md
Normal file
@ -0,0 +1,221 @@
# Deploy RKE2 Cluster
Deploying a Rancher RKE2 cluster is fairly straightforward. Just run the commands in order, and pay attention to which steps apply to all machines in the cluster, to the controlplanes, and to the workers.

!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later.

## All Cluster Nodes
Assume all commands are run as root from this point forward (e.g. via `sudo su`).

### Run Updates
You will need to run these commands on every server that participates in the cluster, then reboot the server **PRIOR** to moving on to the next section.
``` sh
sudo apt update && sudo apt upgrade -y
sudo apt install nfs-common iptables nano htop -y
echo "Adding 15 Second Delay to Ensure Previous Commands finish running"
sleep 15
sudo apt autoremove -y
sudo reboot
```
!!! tip
    If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to roll back the server(s) if you accidentally misconfigure something.

## Initial ControlPlane Node
When you are starting a brand-new cluster, you need to create what is referred to as the "Initial ControlPlane". This node is responsible for bootstrapping the entire cluster together in the beginning, and will eventually assist in handling container workloads and orchestrating operations in the cluster.

!!! warning
    You only want to follow the instructions for the **initial** controlplane once. Running them on another machine to create additional controlplanes will cause the cluster to try to set up two different clusters, wreaking havoc. Instead, follow the instructions in the next section to add redundant controlplanes.

### Download & Run the Server Deployment Script
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
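By default the installer pulls the latest stable RKE2 release. If you need to pin a specific version, the install script honors the `INSTALL_RKE2_VERSION` environment variable; the version below is only an example:
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server INSTALL_RKE2_VERSION="v1.26.12+rke2r1" sh -
```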
### Enable & Configure Services
|
||||
``` sh
|
||||
# Start and Enable the Kubernetes Service
|
||||
systemctl enable rke2-server.service
|
||||
systemctl start rke2-server.service
|
||||
|
||||
# Symlink the Kubectl Management Command
|
||||
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl
|
||||
|
||||
# Temporarily Export the Kubeconfig to manage the cluster from CLI
|
||||
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
|
||||
|
||||
# Add a Delay to Allow Cluster to Finish Initializing / Get Ready
|
||||
echo "Adding 60 Second Delay to Ensure Cluster is Ready - Run (kubectl get node) if the server is still not ready to know when to proceed."
|
||||
sleep 60
|
||||
|
||||
# Check that the Cluster Node is Running and Ready
|
||||
kubectl get node
|
||||
```
|
||||
|
||||
!!! example
    When the cluster is ready, you should see something like this when you run `kubectl get node`:
    ```
    root@awx:/home/nicole# kubectl get node
    NAME   STATUS   ROLES                       AGE     VERSION
    awx    Ready    control-plane,etcd,master   3m21s   v1.26.12+rke2r1
    ```
    This may be a good point to step away for 5 minutes, get a cup of coffee, and come back so the cluster has a little extra time to be fully ready before moving on.

### Install Helm, Cert-Manager, Rancher, and Longhorn
``` sh
# Install Helm
curl -#L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add the Necessary Helm Repositories
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo add longhorn https://charts.longhorn.io
helm repo update

# Install the Cert-Manager CRDs
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml

# Install Cert-Manager via Helm (from the Jetstack repository)
helm upgrade -i cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace

# Install Rancher via Helm
helm upgrade -i rancher rancher-latest/rancher --create-namespace --namespace cattle-system --set hostname=rancher.bunny-lab.io --set bootstrapPassword=bootStrapAllTheThings --set replicas=1

# Install Longhorn via Helm
helm upgrade -i longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```

!!! example "Be Patient - Come back in 20 Minutes"
|
||||
Rancher is going to take a while to fully set itself up, things will appear broken. Depending on how many resources you gave the cluster, it may take longer or shorter. A good ballpark is giving it at least 20 minutes to deploy itself before attempting to log into the webUI at https://awx.bunny-lab.io.
|
||||
|
||||
If you want to keep an eye on the deployment progress, you need to run the following command: `kubectl get pods --all-namespaces`
|
||||
The output should look like how it does below:
|
||||
```
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE
|
||||
cattle-fleet-system fleet-controller-59cdb866d7-94r2q 1/1 Running 0 4m31s
|
||||
cattle-fleet-system gitjob-f497866f8-t726l 1/1 Running 0 4m31s
|
||||
cattle-provisioning-capi-system capi-controller-manager-6f87d6bd74-xx22v 1/1 Running 0 55s
|
||||
cattle-system helm-operation-28dcp 0/2 Completed 0 109s
|
||||
cattle-system helm-operation-f9qww 0/2 Completed 0 4m39s
|
||||
cattle-system helm-operation-ft8gq 0/2 Completed 0 26s
|
||||
cattle-system helm-operation-m27tq 0/2 Completed 0 61s
|
||||
cattle-system helm-operation-qrgj8 0/2 Completed 0 5m11s
|
||||
cattle-system rancher-64db9f48c-qm6v4 1/1 Running 3 (8m8s ago) 13m
|
||||
cattle-system rancher-webhook-65f5455d9c-tzbv4 1/1 Running 0 98s
|
||||
cert-manager cert-manager-55cf8685cb-86l4n 1/1 Running 0 14m
|
||||
cert-manager cert-manager-cainjector-fbd548cb8-9fgv4 1/1 Running 0 14m
|
||||
cert-manager cert-manager-webhook-655b4d58fb-s2cjh 1/1 Running 0 14m
|
||||
kube-system cloud-controller-manager-awx 1/1 Running 5 (3m37s ago) 19m
|
||||
kube-system etcd-awx 1/1 Running 0 19m
|
||||
kube-system helm-install-rke2-canal-q9vm6 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-coredns-q8w57 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-ingress-nginx-54vgk 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-metrics-server-87zhw 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-snapshot-controller-crd-q6bh6 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-snapshot-controller-tjk5f 0/1 Completed 0 19m
|
||||
kube-system helm-install-rke2-snapshot-validation-webhook-r9pcn 0/1 Completed 0 19m
|
||||
kube-system kube-apiserver-awx 1/1 Running 0 19m
|
||||
kube-system kube-controller-manager-awx 1/1 Running 5 (3m37s ago) 19m
|
||||
kube-system kube-proxy-awx 1/1 Running 0 19m
|
||||
kube-system kube-scheduler-awx 1/1 Running 5 (3m35s ago) 19m
|
||||
kube-system rke2-canal-gm45f 2/2 Running 0 19m
|
||||
kube-system rke2-coredns-rke2-coredns-565dfc7d75-qp64p 1/1 Running 0 19m
|
||||
kube-system rke2-coredns-rke2-coredns-autoscaler-6c48c95bf9-fclz5 1/1 Running 0 19m
|
||||
kube-system rke2-ingress-nginx-controller-lhjwq 1/1 Running 0 17m
|
||||
kube-system rke2-metrics-server-c9c78bd66-fnvx8 1/1 Running 0 18m
|
||||
kube-system rke2-snapshot-controller-6f7bbb497d-dw6v4 1/1 Running 4 (6m17s ago) 18m
|
||||
kube-system rke2-snapshot-validation-webhook-65b5675d5c-tdfcf 1/1 Running 0 18m
|
||||
longhorn-system csi-attacher-785fd6545b-6jfss 1/1 Running 1 (6m17s ago) 9m39s
|
||||
longhorn-system csi-attacher-785fd6545b-k7jdh 1/1 Running 0 9m39s
|
||||
longhorn-system csi-attacher-785fd6545b-rr6k4 1/1 Running 0 9m39s
|
||||
longhorn-system csi-provisioner-8658f9bd9c-58dc8 1/1 Running 0 9m38s
|
||||
longhorn-system csi-provisioner-8658f9bd9c-g8cv2 1/1 Running 0 9m38s
|
||||
longhorn-system csi-provisioner-8658f9bd9c-mbwh2 1/1 Running 0 9m38s
|
||||
longhorn-system csi-resizer-68c4c75bf5-d5vdd 1/1 Running 0 9m36s
|
||||
longhorn-system csi-resizer-68c4c75bf5-r96lf 1/1 Running 0 9m36s
|
||||
longhorn-system csi-resizer-68c4c75bf5-tnggs 1/1 Running 0 9m36s
|
||||
longhorn-system csi-snapshotter-7c466dd68f-5szxn 1/1 Running 0 9m30s
|
||||
longhorn-system csi-snapshotter-7c466dd68f-w96lw 1/1 Running 0 9m30s
|
||||
longhorn-system csi-snapshotter-7c466dd68f-xt42z 1/1 Running 0 9m30s
|
||||
longhorn-system engine-image-ei-68f17757-jn986 1/1 Running 0 10m
|
||||
longhorn-system instance-manager-fab02be089480f35c7b2288110eb9441 1/1 Running 0 10m
|
||||
longhorn-system longhorn-csi-plugin-5j77p 3/3 Running 0 9m30s
|
||||
longhorn-system longhorn-driver-deployer-75fff9c757-dps2j 1/1 Running 0 13m
|
||||
longhorn-system longhorn-manager-2vfr4 1/1 Running 4 (10m ago) 13m
|
||||
longhorn-system longhorn-ui-7dc586665c-hzt6k 1/1 Running 0 13m
|
||||
longhorn-system longhorn-ui-7dc586665c-lssfj 1/1 Running 0 13m
|
||||
```
|
||||
|
||||
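If you would rather block until Rancher reports itself fully rolled out instead of re-running the pod listing, one hedged alternative is:
``` sh
# Waits until the Rancher deployment in the cattle-system namespace finishes rolling out
kubectl -n cattle-system rollout status deploy/rancher
```
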
!!! note
    Be sure to write down the "*bootstrapPassword*" variable for when you log into Rancher later. In this example, the password is `bootStrapAllTheThings`.
    Also be sure to adjust the "*hostname*" variable to reflect the FQDN of the cluster. You can leave it default like this and change it upon first login if you want. This is important for the last step, where you adjust DNS. The example given is `rancher.bunny-lab.io`.

### Log into the webUI
At this point, you can log into the webUI at https://rancher.bunny-lab.io using the default `bootStrapAllTheThings` password, or whatever password you configured. You can change the password after logging in by navigating to **Home > Users & Authentication > "..." > Edit Config > "New Password" > Save**. From here, you can deploy more nodes, or deploy single-node workloads such as an [Ansible AWX Operator](https://docs.bunny-lab.io/Containers/Kubernetes/Rancher%20RKE2/AWX%20Operator/Ansible%20AWX%20Operator/).

### Rebooting the ControlNode
If you ever find yourself needing to reboot the ControlNode and still need to run kubectl CLI commands, you will need to run the command below to re-import the cluster credentials after every reboot. Reboots should take much less time to get the cluster ready again compared to the original deployment.
``` sh
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
```

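To avoid re-exporting the variable manually after every reboot, one option (assuming you administer the cluster as root) is to persist it in the shell profile:
``` sh
# Persist the kubeconfig path for future shell sessions
echo 'export KUBECONFIG=/etc/rancher/rke2/rke2.yaml' >> ~/.bashrc
source ~/.bashrc
```
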
## Create Additional ControlPlane Node(s)
This is the part where you can add controlplane nodes to give the RKE2 cluster additional redundancy. This is important for high-availability environments.

### Download the Server Deployment Script
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
### Configure and Connect to Initial ControlPlane Node
``` sh
# Symlink the Kubectl Management Command
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl

# Manually Create a Rancher-Kubernetes-Specific Config Directory
mkdir -p /etc/rancher/rke2/

# Inject the IP of the Initial ControlPlane Node into the Config File
echo "server: https://192.168.3.21:9345" > /etc/rancher/rke2/config.yaml

# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml

# Start and Enable the Kubernetes Service
systemctl enable rke2-server.service
systemctl start rke2-server.service
```
!!! note
    Be sure to change the IP address of the initial controlplane node provided in the example above to match your environment.

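Once the service finishes starting, the new node should appear alongside the initial controlplane; from either node:
``` sh
# Both nodes should eventually report Ready with control-plane,etcd,master roles
kubectl get node
```
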
## Add Worker Node(s)
Worker nodes are the bread-and-butter of a Kubernetes cluster. They handle running container workloads and can act as storage for the cluster (this can be configured to varying degrees based on your needs).

### Download & Run the Worker Deployment Script
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -
```
### Configure and Connect to RKE2 Cluster
``` sh
# Manually Create a Rancher-Kubernetes-Specific Config Directory
mkdir -p /etc/rancher/rke2/

# Inject the IP of the Initial ControlPlane Node into the Config File
echo "server: https://192.168.3.21:9345" > /etc/rancher/rke2/config.yaml

# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml

# Start and Enable the Kubernetes Agent Service
systemctl enable rke2-agent.service
systemctl start rke2-agent.service
```

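Worker nodes run the `rke2-agent` service and do not ship a local kubectl, so a hedged way to confirm a worker joined is to check the agent itself (and then run `kubectl get node` from a controlplane):
``` sh
# On the worker: confirm the agent service is active
systemctl status rke2-agent.service

# Follow the agent logs if the node never appears in the cluster
journalctl -u rke2-agent.service -f
```
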
## DNS Server Record
You will need to set up some kind of DNS record pointing the FQDN of the cluster (e.g. `rancher.bunny-lab.io`) at the IP address of the initial ControlPlane. This can be achieved in a number of ways, such as an entry in the Windows `HOSTS` file, an entry in Linux's `/etc/hosts` file, a Windows DNS Server "A" record, or an NGINX/Traefik reverse proxy.

Once you have added the DNS record, you should be able to access the login page for the Rancher RKE2 Kubernetes cluster. Use the `bootstrapPassword` mentioned previously to log in, then change it immediately from the user management area of Rancher.

| TYPE OF ACCESS | FQDN                           | IP ADDRESS   |
| -------------- | ------------------------------ | ------------ |
| HOST FILE      | rancher.bunny-lab.io           | 192.168.3.10 |
| REVERSE PROXY  | http://rancher.bunny-lab.io:80 | 192.168.5.29 |
| DNS RECORD     | A Record: rancher.bunny-lab.io | 192.168.3.10 |
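For the host-file approach in the table above, the record is a single line; on Linux the file is `/etc/hosts`, and on Windows it is `C:\Windows\System32\drivers\etc\hosts`:
``` sh
# Append a static record pointing the cluster FQDN at the initial controlplane
echo "192.168.3.10    rancher.bunny-lab.io" | sudo tee -a /etc/hosts
```
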
7
Docker & Kubernetes/index.md
Normal file
@ -0,0 +1,7 @@
# Container Orchestration
This section of the documentation covers concepts such as Docker, Rancher, Kubernetes, and OpenStack. Various sub-topics, such as deploying clusters that host AWX (Ansible), live in this section, while things like Ansible Playbooks live under the "Scripts" section instead.

!!! note
    This section assumes you have a baseline understanding of installing Ubuntu Server or Rocky Linux, and of following instructions to install Docker/Kubernetes to manage containers and clusters. If you don't understand how to deploy Linux as a physical server or virtual machine, research that first, then come back to this section.

Use the navigation tabs at the left of this section to browse the different container-based categories.