Initial Commit
Bringing Documentation into Gitea
**Purpose**: Self-hosted open-source no-code business automation tool.

```jsx title="docker-compose.yml"
version: '3.0'
services:
  activepieces:
    image: activepieces/activepieces:0.3.11
    container_name: activepieces
    restart: unless-stopped
    privileged: true
    ports:
      - '8080:80'
    environment:
      - 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
      - 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
      - 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
    env_file: stack.env
    depends_on:
      - postgres
      - redis
    networks:
      docker_network:
        ipv4_address: 192.168.5.62

  postgres:
    image: 'postgres:14.4'
    container_name: postgres
    restart: unless-stopped
    environment:
      - 'POSTGRES_DB=${AP_POSTGRES_DATABASE}'
      - 'POSTGRES_PASSWORD=${AP_POSTGRES_PASSWORD}'
      - 'POSTGRES_USER=${AP_POSTGRES_USERNAME}'
    volumes:
      - /srv/containers/activepieces/postgresql:/var/lib/postgresql/data
    networks:
      docker_network:
        ipv4_address: 192.168.5.61

  redis:
    image: 'redis:7.0.7'
    container_name: redis
    restart: unless-stopped
    volumes:
      - /srv/containers/activepieces/redis:/data
    networks:
      docker_network:
        ipv4_address: 192.168.5.60

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
AP_ENGINE_EXECUTABLE_PATH=dist/packages/engine/main.js
AP_ENCRYPTION_KEY=e81f8754faa04acaa7b13caa5d2c6a5a
AP_JWT_SECRET=REDACTED #BE SURE TO SET THIS TO A VALID JWT SECRET > REFER TO OFFICIAL DOCUMENTATION
AP_ENVIRONMENT=prod
AP_FRONTEND_URL=https://ap.cyberstrawberry.net
AP_NODE_EXECUTABLE_PATH=/usr/local/bin/node
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=192.168.5.61
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=REDACTED #USE A SECURE SHORT PASSWORD > ENSURE IT'S NOT TOO LONG FOR POSTGRESQL
AP_REDIS_HOST=redis
AP_REDIS_PORT=6379
AP_SANDBOX_RUN_TIME_SECONDS=600
AP_TELEMETRY_ENABLED=true
```
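If you need to generate fresh values for `AP_ENCRYPTION_KEY` and `AP_JWT_SECRET`, a quick sketch using `openssl` (an assumption on my part; the official Activepieces documentation is authoritative for the exact requirements):

```
# 32-character hex string, the same length as the sample AP_ENCRYPTION_KEY above
openssl rand -hex 16

# Longer random value suitable for use as a JWT secret
openssl rand -hex 32
```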
**Purpose**: AdGuard Home is a network-wide software for blocking ads & tracking. After you set it up, it will cover ALL your home devices, and you don’t need any client-side software for that. With the rise of Internet-Of-Things and connected devices, it becomes more and more important to be able to control your whole network.

```jsx title="docker-compose.yml"
version: '3'

services:
  app:
    image: adguard/adguardhome
    ports:
      - 3000:3000
      - 53:53
      - 80:80
    volumes:
      - /srv/containers/adguard_home/workingdir:/opt/adguardhome/work
      - /srv/containers/adguard_home/config:/opt/adguardhome/conf
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.189

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
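Two quick post-deployment checks, assuming the macvlan address from the compose file above: the first-run setup wizard listens on port 3000, and DNS should answer once setup completes.

```
# First-run setup wizard should respond over HTTP
curl -I http://192.168.5.189:3000

# Verify DNS answers once setup is finished
dig @192.168.5.189 example.com +short
```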
Container Documentation/Docker/Docker Compose/Authelia.md

**Purpose**: Authelia is an open-source authentication and authorization server and portal fulfilling the identity and access management (IAM) role of information security in providing multi-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion for common reverse proxies.

```jsx title="docker-compose.yml"
services:
  authelia:
    image: authelia/authelia
    container_name: authelia
    volumes:
      - /mnt/authelia/config:/config
    networks:
      docker_network:
        ipv4_address: 192.168.5.159
    expose:
      - 9091
    restart: unless-stopped
    healthcheck:
      disable: true
    environment:
      - TZ=America/Denver

  redis:
    image: redis:alpine
    container_name: redis
    volumes:
      - /mnt/authelia/redis:/data
    networks:
      docker_network:
        ipv4_address: 192.168.5.158
    expose:
      - 6379
    restart: unless-stopped
    environment:
      - TZ=America/Denver

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
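Since the compose file disables Docker's healthcheck, a manual liveness probe is handy. The Authelia releases I have used expose `/api/health` (verify against your version); it returns a small JSON status:

```
curl -s http://192.168.5.159:9091/api/health
```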
**Purpose**: Docker container running Ubuntu Minimal 22.04 that automates much of the script mentioned in the [Git Repo Updater](https://docs.cyberstrawberry.net/General%20Scripts/Bash/Git%20Repo%20Updater) document. It offers the additional benefit of checking for updates every 5 seconds instead of every 60 seconds. It also accepts environment variables to provide credentials and notification settings.

### Deployment
You can find the current up-to-date Gitea repository that includes the `docker-compose.yml` and `.env` files that you need to deploy everything [here](https://git.cyberstrawberry.net/container-registry/-/packages/container/git-repo-updater/latest).
```jsx title="docker-compose.yml"
version: '3.3'
services:
  git-repo-updater:
    privileged: true
    container_name: git-repo-updater
    environment:
      - REPO_URL=${REPO_URL}
      - COPY_DIR=${COPY_DIR}
      - NTFY_URL=${NTFY_URL}
      - GIT_USERNAME=${GIT_USERNAME}
      - GIT_PASSWORD=${GIT_PASSWORD}
      - TZ=America/Denver
    image: git.cyberstrawberry.net/container-registry/git-repo-updater:latest
    volumes:
      # This folder is where the repository will be downloaded and updated - it needs a unique folder name.
      - ${TEMP_DIR}:/root/Repo_Watcher/repo
      # This is where you want the git repository data to be copied to (e.g. a server's data folder)
      - ${DESTINATION}:/DATA
    restart: always
```

```jsx title=".env"
REPO_URL=https://git.cyberstrawberry.net/cyberstrawberry/placeholder.git
NTFY_URL=https://ntfy.cyberstrawberry.net/git-repo-updater
TEMP_DIR=/srv/containers/git-repo-updater/REPO-NAME
DESTINATION=/srv/containers/server-name/data
GIT_USERNAME=nicole.rappe
GIT_PASSWORD=USE-AN-APP-PASSWORD
```

### Build / Development
If you want to learn how the container was assembled, the related build files are located [here](https://git.cyberstrawberry.net/container-registry/git-repo-updater).
```jsx title="Dockerfile"
FROM ubuntu:latest

# Install necessary packages
RUN apt-get update && \
    apt-get install -y git curl rsync

# Add script
COPY repo_watcher.sh /root/Repo_Watcher/repo_watcher.sh

# Make script executable
RUN chmod +x /root/Repo_Watcher/repo_watcher.sh

# Start script
CMD ["/bin/bash", "-c", "/root/Repo_Watcher/repo_watcher.sh"]
```
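To rebuild and publish the image yourself, a sketch (the tag assumes the Gitea container registry referenced above):

```
docker build -t git.cyberstrawberry.net/container-registry/git-repo-updater:latest .
docker push git.cyberstrawberry.net/container-registry/git-repo-updater:latest
```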
```jsx title="repo_watcher.sh"
#!/bin/bash

while true; do
    # Set Git credentials
    git config --global credential.helper 'store --file /tmp/git-credentials'
    echo "url=$REPO_URL" > /tmp/git-credentials
    echo "username=$GIT_USERNAME" >> /tmp/git-credentials
    echo "password=$GIT_PASSWORD" >> /tmp/git-credentials

    # Navigate to the watcher directory
    cd /root/Repo_Watcher

    # Clone the repo if it doesn't exist
    if [ -z "$(find /root/Repo_Watcher/repo -maxdepth 1 -mindepth 1 -type f -o -type d 2>/dev/null)" ]; then
        curl -d "Repository $REPO_URL doesn't exist locally - Downloading..." "$NTFY_URL"
        echo "Repository $REPO_URL doesn't exist locally - Downloading..."
        git clone "$REPO_URL" repo
        cd repo
        rsync -av --delete --exclude '.git/' ./ /DATA
        cd ..  # Return to the watcher directory so the `cd repo` below succeeds
    fi

    cd repo

    # Fetch the latest changes
    git fetch origin main

    # Check if the local repository is behind the remote
    LOCAL=$(git rev-parse @)
    REMOTE=$(git rev-parse @{u})
    BASE=$(git merge-base @ @{u})

    if [ "$LOCAL" = "$REMOTE" ]; then
        echo "Repository Up-to-Date"
    else
        curl -d "$REPO_URL Automatically Pulling Updates..." "$NTFY_URL"
        echo "Pulling Updates from Repository..."
        git pull origin main
        rsync -av --delete --exclude '.git/' ./ /DATA
    fi

    # Wait for 5 seconds before the next iteration
    sleep 5

done
```
Container Documentation/Docker/Docker Compose/Dashy.md

**Purpose**: A self-hostable personal dashboard built for you. Includes status-checking, widgets, themes, icon packs, a UI editor and tons more!

```jsx title="docker-compose.yml"
version: "3.8"
services:
  dashy:
    container_name: Dashy

    # Pull latest image from DockerHub
    image: lissy93/dashy

    # Set port that web service will be served on. Keep container port as 80
    ports:
      - 4000:80

    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashy.rule=Host(`dashboard.cyberstrawberry.net`)"
      - "traefik.http.routers.dashy.entrypoints=websecure"
      - "traefik.http.routers.dashy.tls.certresolver=myresolver"
      - "traefik.http.services.dashy.loadbalancer.server.port=80"

    # Set any environmental variables
    environment:
      - NODE_ENV=production
      - UID=1000
      - GID=1000

    # Pass in your config file below, by specifying the path on your host machine
    volumes:
      - /srv/Containers/Dashy/conf.yml:/app/public/conf.yml
      - /srv/Containers/Dashy/item-icons:/app/public/item-icons

    # Specify restart policy
    restart: unless-stopped

    # Configure healthchecks
    healthcheck:
      test: ['CMD', 'node', '/app/services/healthcheck']
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s

    # Connect container to Docker_Network
    networks:
      docker_network:
        ipv4_address: 192.168.5.57

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
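Once the container has been up longer than the `start_period`, you can read the healthcheck result straight from Docker:

```
docker inspect --format '{{.State.Health.Status}}' Dashy
```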
Container Documentation/Docker/Docker Compose/Docusaurus.md

**Purpose**: An optimized site generator in React. Docusaurus helps you to move fast and write content. Build documentation websites, blogs, marketing pages, and more.

```jsx title="docker-compose.yml"
version: "3"

services:
  docusaurus:
    image: awesometic/docusaurus
    container_name: docusaurus
    environment:
      - TARGET_UID=1000
      - TARGET_GID=1000
      - AUTO_UPDATE=true
      - WEBSITE_NAME=docusaurus
      - TEMPLATE=classic
      - TZ=America/Denver
    restart: always
    volumes:
      - /srv/containers/docusaurus:/docusaurus
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "80:80"
    networks:
      docker_network:
        ipv4_address: 192.168.5.72

networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/Frigate.md

**Purpose**: A complete and local NVR designed for Home Assistant with AI object detection. Uses OpenCV and Tensorflow to perform realtime object detection locally for IP cameras.

```jsx title="docker-compose.yml"
version: "3.9"
services:
  frigate:
    container_name: frigate
    privileged: true # this may not be necessary for all setups
    restart: unless-stopped
    image: blakeblackshear/frigate:stable
    shm_size: "256mb" # update for your cameras based on the calculation below
    # devices:
    #   - /dev/bus/usb:/dev/bus/usb # passes the USB Coral, needs to be modified for other versions
    #   - /dev/apex_0:/dev/apex_0 # passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux
    #   - /dev/dri/renderD128 # for intel hwaccel, needs to be updated for your hardware
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /mnt/1TB_STORAGE/frigate/config.yml:/config/config.yml:ro
      - /mnt/1TB_STORAGE/frigate/media:/media/frigate
      - type: tmpfs # Optional: 4GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 4000000000
    ports:
      - "5000:5000"
      - "1935:1935" # RTMP feeds
    environment:
      FRIGATE_RTSP_PASSWORD: ${FRIGATE_RTSP_PASSWORD}
    networks:
      docker_network:
        ipv4_address: 192.168.5.201

  mqtt:
    container_name: mqtt
    image: eclipse-mosquitto:1.6
    ports:
      - "1883:1883"
    networks:
      docker_network:
        ipv4_address: 192.168.5.202

networks:
  docker_network:
    external: true
```

```jsx title=".env"
FRIGATE_RTSP_PASSWORD=SomethingSecure101
```
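The `shm_size` comment refers to Frigate's per-camera shared-memory estimate. A sketch of that calculation (an assumption based on the formula published in the Frigate docs for a single 1280x720 detect stream; confirm against the docs for your Frigate version):

```
awk 'BEGIN { w=1280; h=720; printf "%.2f MB per camera\n", (w*h*1.5*9 + 270480)/1048576 }'
```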
Container Documentation/Docker/Docker Compose/Gitea.md

**Purpose**: Gitea is a painless self-hosted all-in-one software development service: it includes Git hosting, code review, team collaboration, package registry and CI/CD. It is similar to GitHub, Bitbucket and GitLab. Gitea was forked from Gogs originally and almost all the code has been changed.

```jsx title="docker-compose.yml"
version: "3"

services:
  server:
    image: gitea/gitea:latest
    container_name: gitea
    privileged: true
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - TZ=America/Denver
    restart: always
    volumes:
      - /srv/containers/gitea:/data
      # - /etc/timezone:/etc/timezone:ro
      # - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
      - "222:22"
    networks:
      docker_network:
        ipv4_address: 192.168.5.70
    # labels:
    #   - "traefik.enable=true"
    #   - "traefik.http.routers.deeptree-gitea.rule=Host(`git.domain.xyz`)"
    #   - "traefik.http.routers.deeptree-gitea.entrypoints=websecure"
    #   - "traefik.http.routers.deeptree-gitea.tls.certresolver=myresolver"
    #   - "traefik.http.services.deeptree-gitea.loadbalancer.server.port=3000"
    depends_on:
      - postgres

  postgres:
    image: postgres:12-alpine
    ports:
      - 5432:5432
    volumes:
      - /srv/containers/gitea/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=gitea
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - TZ=America/Denver
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.71

networks:
  docker_network:
    external: true
```

```jsx title=".env"
POSTGRES_PASSWORD=SomethingSecure
```
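Because SSH is remapped to port 222 on the host, clones over SSH need the port spelled out. The repository path below is a placeholder:

```
git clone ssh://git@192.168.5.70:222/<org>/<repo>.git
```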
**Purpose**: Open source home automation that puts local control and privacy first. Powered by a worldwide community of tinkerers and DIY enthusiasts.

```jsx title="docker-compose.yml"
version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: "ghcr.io/home-assistant/home-assistant:stable"
    environment:
      - TZ=America/Denver
    volumes:
      - /srv/containers/Home-Assistant-Core:/config
      - /etc/localtime:/etc/localtime:ro
    restart: always
    privileged: true
    ports:
      - 8123:8123
    networks:
      docker_network:
        ipv4_address: 192.168.5.252
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homeassistant.rule=Host(`automation.cyberstrawberry.net`)"
      - "traefik.http.routers.homeassistant.entrypoints=websecure"
      - "traefik.http.routers.homeassistant.tls.certresolver=myresolver"
      - "traefik.http.services.homeassistant.loadbalancer.server.port=8123"

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
**Purpose**: A highly customizable homepage (or startpage / application dashboard) with Docker and service API integrations.

```jsx title="docker-compose.yml"
version: '3.8'
services:
  homepage:
    image: ghcr.io/benphelps/homepage:latest
    container_name: homepage
    volumes:
      - /srv/containers/homepage-docker:/config
      - /srv/containers/homepage-docker/icons:/app/public/icons
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
      - 443:443
      - 3000:3000
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Denver
    dns:
      - 192.168.3.10
      - 192.168.3.11
    restart: unless-stopped
    extra_hosts:
      - "rancher.cyberstrawberry.net:192.168.3.21"
    networks:
      docker_network:
        ipv4_address: 192.168.5.44

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/IT-Tools.md

**Purpose**: Collection of handy online tools for developers, with great UX.

```jsx title="docker-compose.yml"
version: "3"

services:
  server:
    image: corentinth/it-tools:latest
    container_name: it-tools
    environment:
      - TZ=America/Denver
    restart: always
    ports:
      - "80:80"
    networks:
      docker_network:
        ipv4_address: 192.168.5.16

networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/Kopia.md

**Purpose**: Cross-platform backup tool for Windows, macOS & Linux with fast, incremental backups, client-side end-to-end encryption, compression and data deduplication. CLI and GUI included.

```jsx title="docker-compose.yml"
version: '3.7'
services:
  kopia:
    image: kopia/kopia:latest
    hostname: kopia-backup
    user: root
    restart: always
    ports:
      - 51515:51515
    environment:
      - KOPIA_PASSWORD=${KOPIA_ENCRYPTION_PASSWORD}
      - TZ=America/Denver
    privileged: true
    volumes:
      - /srv/containers/kopia/config:/app/config
      - /srv/containers/kopia/cache:/app/cache
      - /srv/containers/kopia/logs:/app/logs
      - /srv:/srv
      - /usr/share/zoneinfo:/usr/share/zoneinfo
    entrypoint: ["/bin/kopia", "server", "start", "--insecure", "--timezone=America/Denver", "--address=0.0.0.0:51515", "--override-username=${KOPIA_SERVER_USERNAME}", "--server-username=${KOPIA_SERVER_USERNAME}", "--server-password=${KOPIA_SERVER_PASSWORD}", "--disable-csrf-token-checks"]
    networks:
      docker_network:
        ipv4_address: 192.168.5.14

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

:::note Credentials:
Your username will be `kopia@kopia-backup` and the password will be the value you set for `--server-password` in the entrypoint section of the compose file. The `KOPIA_PASSWORD` is used by the backup repository, such as Backblaze B2, to encrypt/decrypt the backed-up data, and must be updated in the compose file if the repository is changed / updated.
:::

```jsx title=".env"
KOPIA_ENCRYPTION_PASSWORD=PasswordUsedToEncryptDataOnBackblazeB2
KOPIA_SERVER_PASSWORD=ThisIsUsedToLogIntoKopiaWebUI
KOPIA_SERVER_USERNAME=kopia@kopia-backup
```
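Repository creation can also be done from the CLI instead of the web UI. A hedged sketch for Backblaze B2 (flag names per the Kopia CLI as I understand it; the container name and B2 values are placeholders, and `KOPIA_PASSWORD` from the compose file is used as the repository password):

```
docker exec -it <kopia-container> kopia repository create b2 \
  --bucket=<bucket-name> \
  --key-id=<application-key-id> \
  --key=<application-key>
```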
Container Documentation/Docker/Docker Compose/NGINX.md

**Purpose**: NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more.

```jsx title="docker-compose.yml"
---
version: "2.1"
services:
  nginx:
    image: lscr.io/linuxserver/nginx:latest
    container_name: nginx
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Denver
    volumes:
      - /srv/containers/nginx-portfolio-website:/config
    ports:
      - 80:80
      - 443:443
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.12

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/Nextcloud.md

**Purpose**: Deploy a Nextcloud and PostgreSQL database together.

```jsx title="docker-compose.yml"
version: "2.1"
services:
  app:
    image: nextcloud:apache
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`cloud.cyberstrawberry.net`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=letsencrypt"
      - "traefik.http.services.nextcloud.loadbalancer.server.port=80"
    environment:
      - TZ=${TZ}
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_HOST=${POSTGRES_HOST}
      - OVERWRITEPROTOCOL=https
      - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER}
      - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
      - NEXTCLOUD_TRUSTED_DOMAINS=${NEXTCLOUD_TRUSTED_DOMAINS}
    volumes:
      - /mnt/nextcloud/app:/config
      - /mnt/nextcloud/data:/data
    ports:
      - 443:443
      - 80:80
    restart: always
    depends_on:
      - db
    networks:
      docker_network:
        ipv4_address: 192.168.3.11

  db:
    image: postgres:12-alpine
    environment:
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - /mnt/nextcloud/db:/var/lib/postgresql/data
    ports:
      - 5432:5432
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.3.12

networks:
  docker_network:
    external: true
```

```jsx title=".env"
TZ=America/Denver
POSTGRES_PASSWORD=SomeSecurePassword
POSTGRES_USER=ncadmin
POSTGRES_HOST=192.168.3.10
POSTGRES_DB=nextcloud
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=SomeSuperSecurePassword
NEXTCLOUD_TRUSTED_DOMAINS=cloud.cyberstrawberry.net
```
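After the stack is up, you can confirm the trusted-domain configuration landed by querying it through `occ` (the container name is a placeholder; `-u www-data` matches the apache image's web user):

```
docker exec -u www-data <nextcloud-container> php occ config:system:get trusted_domains
```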
Container Documentation/Docker/Docker Compose/Niltalk.md

**Purpose**: Niltalk is a web based disposable chat server. It allows users to create password-protected, ephemeral chat rooms and invite peers to them.

```jsx title="docker-compose.yml"
version: "3.7"

services:
  redis:
    image: redis:alpine
    volumes:
      - /srv/niltalk
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.196

  niltalk:
    image: kailashnadh/niltalk:latest
    ports:
      - "9000:9000"
    depends_on:
      - redis
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.197
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.niltalk.rule=Host(`temp.cyberstrawberry.net`)"
      - "traefik.http.routers.niltalk.entrypoints=websecure"
      - "traefik.http.routers.niltalk.tls.certresolver=myresolver"
      - "traefik.http.services.niltalk.loadbalancer.server.port=9000"

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true

volumes:
  niltalk-data:
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/Node-Red.md

**Purpose**: Node-RED is a programming tool for wiring together hardware devices, APIs and online services in new and interesting ways.

```jsx title="docker-compose.yml"
version: "3.7"

services:
  node-red:
    image: nodered/node-red:latest
    environment:
      - TZ=America/Denver
    ports:
      - "1880:1880"
    networks:
      docker_network:
        ipv4_address: 192.168.5.92
    volumes:
      - /srv/containers/node-red:/data

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/Ntfy.md

**Purpose**: ntfy (pronounced notify) is a simple HTTP-based pub-sub notification service. It allows you to send notifications to your phone or desktop via scripts from any computer, and/or using a REST API. It's infinitely flexible, and 100% free software.

```jsx title="docker-compose.yml"
version: "2.1"
services:
  ntfy:
    image: binwiederhier/ntfy
    container_name: ntfy
    command:
      - serve
    environment:
      - TZ=America/Denver # optional: Change to your desired timezone
    #user: UID:GID # optional: Set custom user/group or uid/gid
    volumes:
      - /srv/containers/ntfy/cache:/var/cache/ntfy
      - /srv/containers/ntfy/etc:/etc/ntfy
    ports:
      - 80:80
    restart: unless-stopped
    networks:
      docker_network:
        ipv4_address: 192.168.5.45

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
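Publishing a notification is a single HTTP request, and a shell can subscribe to the same topic as a JSON stream. The topic name below is a placeholder:

```
# Publish a message to a topic
curl -d "Backup completed successfully" https://ntfy.cyberstrawberry.net/<topic>

# Subscribe to the same topic from a shell
curl -s https://ntfy.cyberstrawberry.net/<topic>/json
```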
**Purpose**: ONLYOFFICE offers a secure online office suite highly compatible with MS Office formats. Generally used with Nextcloud to edit documents directly within the web browser.

```jsx title="docker-compose.yml"
version: '3'

services:
  app:
    image: onlyoffice/documentserver-ee
    ports:
      - 80:80
      - 443:443
    volumes:
      - /srv/containers/onlyoffice/DocumentServer/logs:/var/log/onlyoffice
      - /srv/containers/onlyoffice/DocumentServer/data:/var/www/onlyoffice/Data
      - /srv/containers/onlyoffice/DocumentServer/lib:/var/lib/onlyoffice
      - /srv/containers/onlyoffice/DocumentServer/db:/var/lib/postgresql
      - /srv/containers/onlyoffice/DocumentServer/fonts:/usr/share/fonts/truetype/custom
      - /srv/containers/onlyoffice/DocumentServer/forgotten:/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten
      - /srv/containers/onlyoffice/DocumentServer/rabbitmq:/var/lib/rabbitmq
      - /srv/containers/onlyoffice/DocumentServer/redis:/var/lib/redis
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.rule=Host(`office.cyberstrawberry.net`)"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.entrypoints=websecure"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.tls.certresolver=myresolver"
      - "traefik.http.services.cyberstrawberry-onlyoffice.loadbalancer.server.port=80"
      - "traefik.http.routers.cyberstrawberry-onlyoffice.middlewares=onlyoffice-headers"
      - "traefik.http.middlewares.onlyoffice-headers.headers.customrequestheaders.X-Forwarded-Proto=https"
      #- "traefik.http.middlewares.onlyoffice-headers.headers.accessControlAllowOrigin=*"
    environment:
      - JWT_ENABLED=true
      - JWT_SECRET=REDACTED #SET THIS TO SOMETHING SECURE
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.143

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```

:::tip
If you wish to use this in a non-commercial homelab environment without limits, [this script](https://wiki.muwahhid.ru/ru/Unraid/Docker/Onlyoffice-Document-Server) resets the trial endlessly without functionality limits.
```
docker stop office-document-server-ee
docker rm office-document-server-ee
rm -r /mnt/user/appdata/onlyoffice/DocumentServer
sleep 5
<USE A PORTAINER WEBHOOK TO RECREATE THE CONTAINER OR REFERENCE THE DOCKER RUN METHOD BELOW>
```

Docker Run Method:
```
docker run -d --name='office-document-server-ee' --net='bridge' -e TZ="Europe/Moscow" -e HOST_OS="Unraid" -e 'JWT_ENABLED'='true' -e 'JWT_SECRET'='mySecret' -p '8082:80/tcp' -p '4432:443/tcp' -v '/mnt/user/appdata/onlyoffice/DocumentServer/logs':'/var/log/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/data':'/var/www/onlyoffice/Data':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/lib':'/var/lib/onlyoffice':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/db':'/var/lib/postgresql':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/fonts':'/usr/share/fonts/truetype/custom':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/forgotten':'/var/lib/onlyoffice/documentserver/App_Data/cache/files/forgotten':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/rabbitmq':'/var/lib/rabbitmq':'rw' -v '/mnt/user/appdata/onlyoffice/DocumentServer/redis':'/var/lib/redis':'rw' 'onlyoffice/documentserver-ee'
```
:::
**Purpose**: An application to securely communicate passwords over the web. Passwords automatically expire after a certain number of views and/or time has passed. Track who, what and when.

```jsx title="docker-compose.yml"
version: '3'

services:
  passwordpusher:
    image: pglombardo/pwpush-ephemeral:release
    expose:
      - 5100
    restart: always
    environment:
      # Read the documentation on how to generate a master key, then put it below
      - PWPUSH_MASTER_KEY=${PWPUSH_MASTER_KEY}
    networks:
      docker_network:
        ipv4_address: 192.168.5.170
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.passwordpusher.rule=Host(`pw.domain.com`)"
      - "traefik.http.routers.passwordpusher.entrypoints=websecure"
      - "traefik.http.routers.passwordpusher.tls.certresolver=letsencrypt"
      - "traefik.http.services.passwordpusher.loadbalancer.server.port=5100"

networks:
  docker_network:
    external: true
```

```jsx title=".env"
PWPUSH_MASTER_KEY=<PASSWORD> # Read the documentation on how to generate a master key, then put it here
```
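The Password Pusher documentation is the authoritative source for generating `PWPUSH_MASTER_KEY`; if you just need a sufficiently long random value, a generic approach (an assumption on my part, not the project's official method) is:

```
openssl rand -hex 64
```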
Container Documentation/Docker/Docker Compose/Pi-Hole.md

**Purpose**: Pi-hole is a Linux network-level advertisement and Internet tracker blocking application which acts as a DNS sinkhole and optionally a DHCP server, intended for use on a private network.

```jsx title="docker-compose.yml"
version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    # For DHCP it is recommended to remove these ports and instead add: network_mode: "host"
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp" # Only required if you are using Pi-hole as your DHCP server
      - "80:80/tcp"
    environment:
      TZ: 'America/Denver'
      WEBPASSWORD: 'REDACTED' #USE A SECURE PASSWORD HERE
    # Volumes store your data between container upgrades
    volumes:
      - /srv/containers/pihole/app:/etc/pihole
      - /srv/containers/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    # cap_add:
    #   - NET_ADMIN # Required if you are using Pi-hole as your DHCP server, else not needed
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.190

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
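Two quick post-deployment checks: confirm the sinkhole answers DNS, and reset the web password without editing the compose file (the `pihole -a -p` subcommand applies to the classic v5-era images; newer releases may differ):

```
# Verify DNS resolution against the container's address
dig @192.168.5.190 example.com +short

# Change the web interface password in place
docker exec -it pihole pihole -a -p
```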
Container Documentation/Docker/Docker Compose/RocketChat.md

**Purpose**: Deploy a RocketChat and MongoDB database together.

```jsx title="docker-compose.yml"
services:
  rocketchat:
    image: registry.rocket.chat/rocketchat/rocket.chat:${RELEASE:-latest}
    restart: always
    # labels:
    #   traefik.enable: "true"
    #   traefik.http.routers.rocketchat.rule: Host(`${DOMAIN:-}`)
    #   traefik.http.routers.rocketchat.tls: "true"
    #   traefik.http.routers.rocketchat.entrypoints: https
    #   traefik.http.routers.rocketchat.tls.certresolver: le
    environment:
      MONGO_URL: "${MONGO_URL:-\
        mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
        ${MONGODB_DATABASE:-rocketchat}?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
      MONGO_OPLOG_URL: "${MONGO_OPLOG_URL:\
        -mongodb://${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}:${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}/\
        local?replicaSet=${MONGODB_REPLICA_SET_NAME:-rs0}}"
      ROOT_URL: ${ROOT_URL:-http://localhost:${HOST_PORT:-3000}}
      PORT: ${PORT:-3000}
      DEPLOY_METHOD: docker
      DEPLOY_PLATFORM: ${DEPLOY_PLATFORM:-}
      REG_TOKEN: ${REG_TOKEN:-}
    depends_on:
      - rc_mongodb
    expose:
      - ${PORT:-3000}
    dns:
      - 1.1.1.1
      - 1.0.0.1
      - 8.8.8.8
      - 8.8.4.4
    ports:
      - "${BIND_IP:-0.0.0.0}:${HOST_PORT:-3000}:${PORT:-3000}"
    networks:
      docker_network:
        ipv4_address: 192.168.5.2

  rc_mongodb:
    image: docker.io/bitnami/mongodb:${MONGODB_VERSION:-5.0}
    restart: always
    volumes:
      - /srv/deeptree/rocket.chat/mongodb:/bitnami/mongodb
    environment:
      MONGODB_REPLICA_SET_MODE: primary
      MONGODB_REPLICA_SET_NAME: ${MONGODB_REPLICA_SET_NAME:-rs0}
      MONGODB_PORT_NUMBER: ${MONGODB_PORT_NUMBER:-27017}
      MONGODB_INITIAL_PRIMARY_HOST: ${MONGODB_INITIAL_PRIMARY_HOST:-rc_mongodb}
      MONGODB_INITIAL_PRIMARY_PORT_NUMBER: ${MONGODB_INITIAL_PRIMARY_PORT_NUMBER:-27017}
      MONGODB_ADVERTISED_HOSTNAME: ${MONGODB_ADVERTISED_HOSTNAME:-rc_mongodb}
      MONGODB_ENABLE_JOURNAL: ${MONGODB_ENABLE_JOURNAL:-true}
      ALLOW_EMPTY_PASSWORD: ${ALLOW_EMPTY_PASSWORD:-yes}
    networks:
      docker_network:
        ipv4_address: 192.168.5.3

networks:
  docker_network:
    external: true
```

```jsx title=".env"
TZ=America/Denver
RELEASE=6.3.0
PORT=3000 #Redundant - Can be Removed
MONGODB_VERSION=6.0
MONGODB_INITIAL_PRIMARY_HOST=rc_mongodb #Redundant - Can be Removed
MONGODB_ADVERTISED_HOSTNAME=rc_mongodb #Redundant - Can be Removed
```

## Reverse Proxy Configuration
```jsx title="nginx.conf"
# Rocket.Chat Server
server {
    listen 443 ssl;
    server_name rocketchat.domain.net;
    error_log /var/log/nginx/new_rocketchat_error.log;
    client_max_body_size 500M;

    location / {
        proxy_pass http://192.168.5.2:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Nginx-Proxy true;
        proxy_redirect off;
    }
}
```
Container Documentation/Docker/Docker Compose/SearX.md

**Purpose**: Deploys a SearX Meta Search Engine Server

```jsx title="docker-compose.yml"
version: '3'
services:
  searx:
    image: searx/searx:latest
    ports:
      - 8080:8080
    volumes:
      - /srv/containers/searx/:/etc/searx
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.searx.rule=Host(`searx.cyberstrawberry.net`)"
      - "traefik.http.routers.searx.entrypoints=websecure"
      - "traefik.http.routers.searx.tls.certresolver=letsencrypt"
      - "traefik.http.services.searx.loadbalancer.server.port=8080"
    networks:
      docker_network:
        ipv4_address: 192.168.5.124

networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/Snipe-IT.md

**Purpose**: A free open source IT asset/license management system.

:::warning
The Snipe-IT container will attempt to launch after the MariaDB container starts, but MariaDB takes a while to set itself up before it can accept connections; as a result, Snipe-IT will fail to initialize the database. Just wait about 30 seconds after deploying the stack, then restart the Snipe-IT container (as shown below) to initialize the database. You will know it worked if you see notes about data being `Migrated`.
:::
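A restart sketch, assuming the stack was deployed with Docker Compose v2 and the service name from the file below (in Portainer, restarting the container from the UI is equivalent):

```
docker compose restart snipeit
```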
```jsx title="docker-compose.yml"
version: '3.7'

services:
  snipeit:
    image: snipe/snipe-it
    ports:
      - "8000:80"
    depends_on:
      - db
    env_file:
      - stack.env
    volumes:
      - ${DATA_LOCATION}/snipeit:/var/lib/snipeit
    networks:
      docker_network:
        ipv4_address: 192.168.5.50

  redis:
    image: redis:6.2.5-buster
    ports:
      - "6379:6379"
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.51

  db:
    image: mariadb:10.5
    ports:
      - "3306:3306"
    env_file:
      - stack.env
    volumes:
      - ${DATA_LOCATION}/mariadb:/var/lib/mysql
    networks:
      docker_network:
        ipv4_address: 192.168.5.52

  mailhog:
    image: mailhog/mailhog:v1.0.1
    ports:
      # - 1025:1025
      - "8025:8025"
    env_file:
      - stack.env
    networks:
      docker_network:
        ipv4_address: 192.168.5.53

networks:
  docker_network:
    external: true
```

```jsx title=".env"
APP_ENV=production
APP_DEBUG=false
APP_KEY=base64:SomethingSecure
APP_URL=https://assets.bunny-lab.io
APP_TIMEZONE='America/Denver'
APP_LOCALE=en
MAX_RESULTS=500
PRIVATE_FILESYSTEM_DISK=local
PUBLIC_FILESYSTEM_DISK=local_public
DB_CONNECTION=mysql
DB_HOST=db
DB_DATABASE=snipedb
DB_USERNAME=snipeuser
DB_PASSWORD=SomethingSecure
DB_PREFIX=null
DB_DUMP_PATH='/usr/bin'
DB_CHARSET=utf8mb4
DB_COLLATION=utf8mb4_unicode_ci
IMAGE_LIB=gd
MYSQL_DATABASE=snipedb
MYSQL_USER=snipeuser
MYSQL_PASSWORD=SomethingSecure
MYSQL_ROOT_PASSWORD=SomethingSecure
REDIS_HOST=redis
REDIS_PASSWORD=SomethingSecure
REDIS_PORT=6379
MAIL_DRIVER=smtp
MAIL_HOST=mail.deeptree.tech
MAIL_PORT=587
MAIL_USERNAME=assets@bunny-lab.io
MAIL_PASSWORD=SomethingSecure
MAIL_ENCRYPTION=starttls
MAIL_FROM_ADDR=assets@bunny-lab.io
MAIL_FROM_NAME='Bunny Lab Asset Management'
MAIL_REPLYTO_ADDR=assets@bunny-lab.io
MAIL_REPLYTO_NAME='Bunny Lab Asset Management'
MAIL_AUTO_EMBED_METHOD='attachment'
DATA_LOCATION=/srv/containers/snipe-it
APP_TRUSTED_PROXIES=192.168.5.29
```
Container Documentation/Docker/Docker Compose/Traefik.md

**Purpose**: Deploy a Traefik Reverse Proxy

```jsx title="docker-compose.yml"
version: "3.3"
services:
  traefik:
    image: "traefik:latest"
    restart: always
    container_name: "traefik"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    labels:
      - "traefik.http.routers.traefik-proxy.middlewares=my-buffering"
      - "traefik.http.middlewares.my-buffering.buffering.maxRequestBodyBytes=104857600"
      - "traefik.http.middlewares.my-buffering.buffering.maxResponseBodyBytes=104857600"
      - "traefik.http.middlewares.my-buffering.buffering.memRequestBodyBytes=2097152"
      - "traefik.http.middlewares.my-buffering.buffering.memResponseBodyBytes=2097152"
      - "traefik.http.middlewares.my-buffering.buffering.retryExpression=IsNetworkError() && Attempts() <= 2"
    command:
      # Globals
      - "--log.level=ERROR"
      - "--api.insecure=true"
      - "--global.sendAnonymousUsage=false"
      # Docker
      # - "--providers.docker=true"
      # - "--providers.docker.exposedbydefault=false"
      # File Provider
      - "--providers.file.directory=/etc/traefik/dynamic"
      - "--providers.file.watch=true"
      # Entrypoints
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure" #Redirect HTTP to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https" #Redirect HTTP to HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.permanent=true" #Redirect HTTP to HTTPS
      # LetsEncrypt
      # - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.dnschallenge=true" #TEMPORARY CHANGE
      - "--certificatesresolvers.myresolver.acme.dnschallenge.provider=cloudflare" #TEMPORARY CHANGE
      - "--certificatesresolvers.myresolver.acme.email=cyberstrawberry101@gmail.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    # labels:
    #   # API
    #   - "traefik.enable=true"
    #   # Global http --> https
    #   - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:[a-z-.]+}`)"
    #   - "traefik.http.routers.http-catchall.entrypoints=web"
    #   - "traefik.http.routers.http-catchall.middlewares=redirect-to-https"
    #   - "traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https"
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - "/srv/containers/traefik/letsencrypt:/letsencrypt"
      - "/srv/containers/traefik/config:/etc/traefik"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "/srv/containers/traefik/cloudflare:/cloudflare"
    networks:
      docker_network:
        ipv4_address: 192.168.5.29
    environment:
      - CF_API_EMAIL=cyberstrawberry101@gmail.com
      - CF_API_KEY=REDACTED
    extra_hosts:
      - "flask.cyberstrawberry.local:192.168.3.21"
      - "searx.cyberstrawberry.local:192.168.3.21"
      - "heimdall.cyberstrawberry.local:192.168.3.21"
      - "status.cyberstrawberry.local:192.168.3.21"
      - "rancher.cyberstrawberry.local:192.168.3.21"
      - "trilium.blockaderunners.local:192.168.3.21"
      - "pw.cyberstrawberry.local:192.168.3.22"
      - "remote.cyberstrawberry.local:192.168.5.43"
      - "cluster-cloud.cyberstrawberry.local:192.168.3.22"
      - "searx.blockaderunners.local:192.168.3.22"
      - "searx.deeptree-labs.local:192.168.3.22"
      - "cyberstrawberry.local:192.168.3.22"
      - "storage.cyberstrawberry.local:192.168.3.22"
      - "cloud.cyberstrawberry.local:192.168.5.146"
      - "cloud.blockaderunners.local:192.168.5.90"
      - "docs.blockaderunners.local:192.168.5.212"
      - "status.blockaderunners.local:192.168.5.13"
      - "blockaderunners.local:192.168.5.219"
      - "office.cyberstrawberry.local:192.168.5.143"
      - "git.deeptree.local:192.168.5.166"
      - "pw.deeptree.local:192.168.5.170"
      - "status.deeptree.local:192.168.5.211"
      - "temp.cyberstrawberry.local:192.168.5.197"
      - "drop.cyberstrawberry.local:192.168.5.14"
      - "vault.cyberstrawberry.local:192.168.3.22"
      - "bitwarden.cyberstrawberry.local:192.168.5.141"
      - "chat.cyberstrawberry.local:192.168.3.22"
      - "trilium.cyberstrawberry.local:192.168.3.22"
      - "node-red.cyberstrawberry.local:192.168.3.21"
      - "homelab.cyberstrawberry.local:192.168.3.22"
      - "awx.cyberstrawberry.local:192.168.3.21"
      - "git.cyberstrawberry.local:192.168.3.21"
      - "lab.cyberstrawberry.local:192.168.5.44"

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
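Since the file provider watches `/etc/traefik/dynamic` (backed by `/srv/containers/traefik/config` on the host), routers and services are declared in YAML files dropped into that directory. A hypothetical example, where the hostname and backend address are placeholders:

```jsx title="config/dynamic/example.yml"
http:
  routers:
    example:
      rule: "Host(`example.cyberstrawberry.net`)"
      entryPoints:
        - websecure
      tls:
        certResolver: myresolver
      service: example
  services:
    example:
      loadBalancer:
        servers:
          - url: "http://192.168.5.50:80"
```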
**Purpose**: The UniFi® Controller is a wireless network management software solution from Ubiquiti Networks™. It allows you to manage multiple wireless networks using a web browser.

```jsx title="docker-compose.yml"
version: "2.1"
services:
  controller:
    image: lscr.io/linuxserver/unifi-controller:latest
    container_name: controller
    environment:
      - PUID=1000
      - PGID=1000
      #- MEM_LIMIT=1024 #optional
      #- MEM_STARTUP=1024 #optional
    volumes:
      - /srv/containers/unifi-controller:/config
    ports:
      - 8443:8443
      - 3478:3478/udp
      - 10001:10001/udp
      - 8080:8080
      - 1900:1900/udp #optional
      - 8843:8843 #optional
      - 8880:8880 #optional
      - 6789:6789 #optional
      - 5514:5514/udp #optional
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.140
        # ipv4_address: 192.168.3.140

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/UptimeKuma.md

**Purpose**: Deploy Uptime Kuma uptime monitor to monitor services in the homelab and send notifications to various services.

```jsx title="docker-compose.yml"
version: '3'
services:
  uptimekuma:
    image: louislam/uptime-kuma
    ports:
      - 3001:3001
    volumes:
      - /mnt/uptimekuma:/app/data
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Allow status page to exist within an iframe
      - UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=1
    restart: always
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.uptime-kuma.rule=Host(`status.cyberstrawberry.net`)"
      - "traefik.http.routers.uptime-kuma.entrypoints=websecure"
      - "traefik.http.routers.uptime-kuma.tls.certresolver=letsencrypt"
      - "traefik.http.services.uptime-kuma.loadbalancer.server.port=3001"
    networks:
      docker_network:
        ipv4_address: 192.168.5.211

networks:
  docker_network:
    external: true
```

```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/VaultWarden.md

**Purpose**: Unofficial Bitwarden compatible server written in Rust, formerly known as bitwarden_rs.

```jsx title="docker-compose.yml"
---
version: "2.1"
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    environment:
      - TZ=America/Denver
      - INVITATIONS_ALLOWED=false
      - SIGNUPS_ALLOWED=false
      - WEBSOCKET_ENABLED=false
      - ADMIN_TOKEN=REDACTED #PUT A REALLY REALLY REALLY SECURE PASSWORD HERE
    volumes:
      - /srv/containers/vaultwarden:/data
    ports:
      - 80:80
    restart: always
    networks:
      docker_network:
        ipv4_address: 192.168.5.15
    # labels:
    #   - "traefik.enable=true"
    #   - "traefik.http.routers.vaultwarden.rule=Host(`vault.grymmweeper.com`)"
    #   - "traefik.http.routers.vaultwarden.entrypoints=web"
    #   - "traefik.http.routers.vaultwarden.tls.certresolver=letsencrypt"
    #   - "traefik.http.services.vaultwarden.loadbalancer.server.port=80"

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

:::caution
It is **CRITICAL** that you never share the `ADMIN_TOKEN` with anyone. It allows you to log into the instance at https://vault.example.com/admin to add users, delete users, make changes system wide, etc.
:::
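One way to generate a sufficiently strong token (the exact length here is an assumption; any long random value works):

```
openssl rand -base64 48
```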
```jsx title=".env"
Not Applicable
```
Container Documentation/Docker/Docker Compose/Wordpress.md

**Purpose**: At its core, WordPress is the simplest, most popular way to create your own website or blog. In fact, WordPress powers over 43.3% of all the websites on the Internet. Yes – more than two in five websites that you visit are likely powered by WordPress.

```jsx title="docker-compose.yml"
version: '3.7'
services:
  wordpress:
    image: wordpress:latest
    restart: always
    ports:
      - 80:80
    environment:
      WORDPRESS_DB_HOST: 192.168.5.216
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: ${WORDPRESS_DB_PASSWORD}
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - /srv/Containers/WordPress/Server:/var/www/html
    networks:
      docker_network:
        ipv4_address: 192.168.5.217
    depends_on:
      - db

  db:
    image: lscr.io/linuxserver/mariadb
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: ${WORDPRESS_DB_PASSWORD} # Assumption: the app user's password must match WORDPRESS_DB_PASSWORD above
      REMOTE_SQL: http://URL1/your.sql,https://URL2/your.sql
    volumes:
      - /srv/Containers/WordPress/DB:/config
    networks:
      docker_network:
        ipv4_address: 192.168.5.216

networks:
  default:
    external:
      name: docker_network
  docker_network:
    external: true
```

```jsx title=".env"
WORDPRESS_DB_PASSWORD=SecurePassword101
MYSQL_ROOT_PASSWORD=SecurePassword202
```
Container Documentation/Docker/Docker Networking.md

### Configure Docker Network
We want to use a dedicated subnet / network specifically for containers, so they don't trample over the **SERVER** and **LAN** networks. If you are unsure of the name of the network adapter, in this case `eth0`, just type `ip addr` in the terminal to list the network interfaces to locate it.
```
docker network create -d macvlan --subnet=192.168.5.0/24 --gateway=192.168.5.1 -o parent=eth0 docker_network
```
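One caveat of macvlan: the Docker host itself cannot reach containers on `docker_network` directly. A common workaround is a macvlan "shim" interface on the host; this is a sketch, where the interface name, the spare address, and the container IP being routed are assumptions for your environment:

```
# Create a host-side macvlan interface bridged to the same parent NIC
sudo ip link add macvlan-shim link eth0 type macvlan mode bridge
sudo ip addr add 192.168.5.250/32 dev macvlan-shim   # pick an unused address in the subnet
sudo ip link set macvlan-shim up

# Route traffic for a specific container through the shim
sudo ip route add 192.168.5.190/32 dev macvlan-shim
```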
Container Documentation/Kubernetes/Deploy Generic K8S.md

|
||||
# Deploy Generic Kubernetes
|
||||
The instructions outlined below assume you are deploying the environment using Ansible Playbooks either via Ansible's CLI or AWX.
|
||||
|
||||
### Deploy K8S User
|
||||
```jsx title="01-deploy-k8s-user.yml"
|
||||
- hosts: 'controller-nodes, worker-nodes'
|
||||
become: yes
|
||||
|
||||
tasks:
|
||||
- name: create the k8sadmin user account
|
||||
user: name=k8sadmin append=yes state=present createhome=yes shell=/bin/bash
|
||||
|
||||
- name: allow 'k8sadmin' to use sudo without needing a password
|
||||
lineinfile:
|
||||
dest: /etc/sudoers
|
||||
line: 'k8sadmin ALL=(ALL) NOPASSWD: ALL'
|
||||
validate: 'visudo -cf %s'
|
||||
|
||||
- name: set up authorized keys for the k8sadmin user
|
||||
authorized_key: user=k8sadmin key="{{item}}"
|
||||
with_file:
|
||||
- ~/.ssh/id_rsa.pub
|
||||
```

### Install K8S

```jsx title="02-install-k8s.yml"
---
- hosts: "controller-nodes, worker-nodes"
  remote_user: nicole
  become: yes
  become_method: sudo
  become_user: root
  gather_facts: yes
  connection: ssh

  tasks:
    - name: Create containerd config file
      file:
        path: "/etc/modules-load.d/containerd.conf"
        state: "touch"

    - name: Add conf for containerd
      blockinfile:
        path: "/etc/modules-load.d/containerd.conf"
        block: |
          overlay
          br_netfilter

    - name: modprobe
      shell: |
        sudo modprobe overlay
        sudo modprobe br_netfilter

    - name: Set system configurations for Kubernetes networking
      file:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        state: "touch"

    - name: Add sysctl settings for Kubernetes networking
      blockinfile:
        path: "/etc/sysctl.d/99-kubernetes-cri.conf"
        block: |
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_forward = 1
          net.bridge.bridge-nf-call-ip6tables = 1

    - name: Apply new settings
      command: sudo sysctl --system

    - name: install containerd
      shell: |
        sudo apt-get update && sudo apt-get install -y containerd
        sudo mkdir -p /etc/containerd
        sudo containerd config default | sudo tee /etc/containerd/config.toml
        sudo systemctl restart containerd

    - name: disable swap
      shell: |
        sudo swapoff -a
        sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

    - name: install and configure dependencies
      shell: |
        sudo apt-get update && sudo apt-get install -y apt-transport-https curl
        curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

    - name: Create kubernetes repo file
      file:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        state: "touch"

    - name: Add K8s Source
      blockinfile:
        path: "/etc/apt/sources.list.d/kubernetes.list"
        block: |
          deb https://apt.kubernetes.io/ kubernetes-xenial main

    - name: Install Kubernetes
      shell: |
        sudo apt-get update
        sudo apt-get install -y kubelet=1.20.1-00 kubeadm=1.20.1-00 kubectl=1.20.1-00
        sudo apt-mark hold kubelet kubeadm kubectl
```
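After the play finishes, it may be worth confirming on any node that the packages were actually pinned and containerd is healthy (a minimal sketch of the checks):
```
# The pinned Kubernetes packages should be listed as held
apt-mark showhold         # expected: kubeadm, kubectl, kubelet

# Confirm the installed version and that containerd is running
kubeadm version -o short  # expected: v1.20.1
systemctl is-active containerd
```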

### Configure ControlPlanes

```jsx title="03-configure-controllers.yml"
- hosts: controller-nodes
  become: yes

  tasks:
    - name: Initialize the K8S Cluster
      shell: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        chdir: $HOME
        creates: cluster_initialized.txt

    - name: Create .kube directory
      become: yes
      become_user: k8sadmin
      file:
        path: /home/k8sadmin/.kube
        state: directory
        mode: 0755

    - name: Copy admin.conf to user's kube config
      copy:
        src: /etc/kubernetes/admin.conf
        dest: /home/k8sadmin/.kube/config
        remote_src: yes
        owner: k8sadmin

    - name: Install the Pod Network
      become: yes
      become_user: k8sadmin
      shell: kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
      args:
        chdir: $HOME

    - name: Get the token for joining the worker nodes
      become: yes
      become_user: k8sadmin
      shell: kubeadm token create --print-join-command
      register: kubernetes_join_command

    - name: Output Join Command to the Screen
      debug:
        msg: "{{ kubernetes_join_command.stdout }}"

    - name: Copy join command to local file.
      become: yes
      local_action: copy content="{{ kubernetes_join_command.stdout_lines[0] }}" dest="/tmp/kubernetes_join_command" mode=0777
```
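Once the controller play completes, the controlplane can be verified from the `k8sadmin` account on the controller node (a quick check; pod names will vary):
```
kubectl get nodes                # the controller should eventually report STATUS "Ready"
kubectl get pods -n kube-system  # the calico and core pods should settle into "Running"
```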

### Join Worker Node(s)

```jsx title="04-join-worker-nodes.yml"
- hosts: worker-nodes
  become: yes
  gather_facts: yes

  tasks:
    - name: Copy join command from Ansible host to the worker nodes.
      become: yes
      copy:
        src: /tmp/kubernetes_join_command
        dest: /tmp/kubernetes_join_command
        mode: 0777

    - name: Join the Worker nodes to the cluster.
      become: yes
      command: sh /tmp/kubernetes_join_command
      register: joined_or_not
```
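From the controller, the workers should appear shortly after joining (a minimal check):
```
kubectl get nodes -o wide   # k8s-node-01 and k8s-node-02 should join and move to "Ready"
```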

### Host Inventory File Template

```jsx title="hosts"
[controller-nodes]
k8s-ctrlr-01 ansible_host=192.168.3.6 ansible_user=nicole

[worker-nodes]
k8s-node-01 ansible_host=192.168.3.4 ansible_user=nicole
k8s-node-02 ansible_host=192.168.3.5 ansible_user=nicole

[all:vars]
ansible_become_user=root
ansible_become_method=sudo
```
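Before running any of the playbooks above, it can save time to verify that Ansible can actually reach every host in this inventory (a quick connectivity sketch):
```
ansible -i hosts all -m ping   # every host should respond with "pong"
```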
140
Container Documentation/Kubernetes/Minikube/Ansible Minikube.md
Normal file
@ -0,0 +1,140 @@
# AWX Operator on Minikube Cluster

Minikube Cluster based deployment of Ansible AWX. (Ansible Tower)

:::note Prerequisites
This document assumes you are running **Ubuntu Server 20.04** or later.
:::

## Install Minikube Cluster

### Update the Ubuntu Server
```
sudo apt update
sudo apt upgrade -y
sudo apt autoremove -y
```

### Download and Install Minikube (Ubuntu Server)
Additional Documentation: https://minikube.sigs.k8s.io/docs/start/
```
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb
sudo dpkg -i minikube_latest_amd64.deb

# Download Docker and Common Tools
sudo apt install docker.io nfs-common iptables nano htop -y

# Configure Docker User
sudo usermod -aG docker nicole
```
:::caution
Be sure to change the `nicole` username in the `sudo usermod -aG docker nicole` command to whatever your local username is.
:::
### Fully log out, then sign back in to the server
```
exit
```
### Validate that permissions allow you to run Docker commands while non-root
```
docker ps
```

### Initialize Minikube Cluster
Additional Documentation: https://github.com/ansible/awx-operator
```
minikube start --driver=docker
minikube kubectl -- get nodes
minikube kubectl -- get pods -A
```
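Before continuing, it is worth confirming the cluster itself is healthy (a minimal check):
```
minikube status   # host, kubelet, and apiserver should all report "Running"
```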

### Make sure Minikube Cluster Automatically Starts on Boot
```jsx title="/etc/systemd/system/minikube.service"
[Unit]
Description=Minikube service
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=nicole
ExecStart=/usr/bin/minikube start --driver=docker
ExecStop=/usr/bin/minikube stop

[Install]
WantedBy=multi-user.target
```
:::caution
Be sure to change the `nicole` username in the `User=nicole` line of the config to whatever your local username is.
:::
:::info
If you plan on running AWX behind an existing reverse proxy using a "**NodePort**" connection, you can leave the `ExecStart` line as-is; otherwise, append `--addons=ingress` to the `minikube start` command.
:::
### Restart Service Daemon and Enable/Start Minikube Automatic Startup
```
sudo systemctl daemon-reload
sudo systemctl enable minikube
sudo systemctl start minikube
```

### Make command alias for `kubectl`
Be sure to add the following to the bottom of your existing profile file noted below.
```jsx title="~/.bashrc"
...
alias kubectl="minikube kubectl --"
```
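Reload the profile so the alias takes effect in the current session (a small usage sketch):
```
source ~/.bashrc
kubectl get pods -A   # should now work without the "minikube kubectl --" prefix
```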
:::tip
If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
:::

## Make AWX Operator Kustomization File
Find the latest tag version here: https://github.com/ansible/awx-operator/releases
```jsx title="kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```jsx title="awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx

---
apiVersion: v1
kind: Service
metadata:
  name: awx-service
  namespace: awx
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080 # Choose an available port in the range of 30000-32767
  selector:
    app.kubernetes.io/name: awx-web
```
### Apply Configuration File
Run from the same directory as the `kustomization.yml` file.
```
kubectl apply -k .
```
:::info
If you get any errors, especially ones relating to "CRD"s, wait 30 seconds, and try re-running the `kubectl apply -k .` command to fully apply the `awx.yml` configuration file to bootstrap the AWX deployment.
:::

### View Logs / Track Deployment Progress
```
kubectl logs -n awx deployments/awx-operator-controller-manager -c awx-manager
```
### Get AWX WebUI Address
```
minikube service -n awx awx-service --url
```
### Get WebUI Password
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
@ -0,0 +1,2 @@
# Placeholder
Placeholder
@ -0,0 +1,32 @@
# 1-provision-vm.yml

```jsx title="1-provision-vm.yml"
---
- name: Ubuntu Server-Based Cluster Deployment
  hosts: all
  become: yes
  tasks:
    - name: Fetch updates
      apt:
        update_cache: yes

    - name: Install packages
      apt:
        name:
          - nfs-common
          - iptables
          - nano
          - htop
        state: present
        update_cache: yes

    - name: Upgrade all packages
      apt:
        upgrade: dist

    - name: Autoremove unused packages
      apt:
        autoremove: yes

    - name: Reboot the VM
      reboot:
```
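For reference, if these playbooks are run from the Ansible CLI rather than AWX, the overall sequence across this section would look something like the sketch below (the `inventory.ini` file name is an assumption; the remaining playbooks are introduced on the following pages):
```
ansible-playbook -i inventory.ini 1-provision-vm.yml
ansible-playbook -i inventory.ini 2-create-initial-controlplane.yml
ansible-playbook -i inventory.ini 3A-deploy-additional-controlplane.yml   # extra controlplanes
ansible-playbook -i inventory.ini 3B-deploy-worker-node.yml               # workers
```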
@ -0,0 +1,48 @@
# 2-create-initial-controlplane.yml

```jsx title="2-create-initial-controlplane.yml"
---
- name: Deploy Rancher on a Kubernetes cluster
  hosts: your_target_host
  become: true
  gather_facts: yes
  # An `export` inside a shell task only lasts for that single task, so the
  # kubeconfig is provided play-wide for every kubectl / helm call below.
  environment:
    KUBECONFIG: /etc/rancher/rke2/rke2.yaml
  tasks:
    - name: Download and install the RKE2 server deployment script
      ansible.builtin.shell: |
        curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -

    - name: Enable and start the RKE2 server service
      ansible.builtin.systemd:
        name: rke2-server
        enabled: yes
        state: started

    - name: Create symlink for kubectl
      # The command module does not expand $(...), so this runs through a shell
      ansible.builtin.shell: |
        ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl

    - name: Install Helm
      ansible.builtin.shell: |
        curl -#L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

    - name: Add Helm repos for Rancher and Jetstack
      ansible.builtin.shell: |
        helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
        helm repo add jetstack https://charts.jetstack.io

    - name: Install Cert-Manager CRDs
      ansible.builtin.shell: |
        kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml

    - name: Install Jetstack cert-manager via Helm
      ansible.builtin.shell: |
        helm upgrade -i cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace

    - name: Install Rancher via Helm
      ansible.builtin.shell: |
        helm upgrade -i rancher rancher-latest/rancher --create-namespace --namespace cattle-system --set hostname=rancher.cyberstrawberry.net --set bootstrapPassword=bootStrapAllTheThings --set replicas=1
```
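Once the play completes, Rancher should begin rolling out; a quick way to follow it on the node itself (a minimal sketch):
```
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml
kubectl -n cattle-system rollout status deploy/rancher   # blocks until Rancher reports Ready
```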
@ -0,0 +1,48 @@
# 3A-deploy-additional-controlplane.yml

```jsx title="3A-deploy-additional-controlplane.yml"
---
- name: RKE2 Kubernetes Cluster Deployment
  hosts: all
  become: yes
  tasks:
    - name: Download and install RKE2 server
      shell: "curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -"

    - name: Locate the kubectl binary bundled with RKE2
      shell: "find /var/lib/rancher/rke2/data/ -name kubectl"
      register: find_kubectl

    - name: Symlink the Kubectl Management Command
      command: "ln -s {{ find_kubectl.stdout }} /usr/local/bin/kubectl"
      args:
        creates: "/usr/local/bin/kubectl"

    - name: Create Rancher-Kubernetes-specific config directory
      file:
        path: "/etc/rancher/rke2/"
        state: directory

    - name: Inject IP of Primary Cluster Host (First Node) into Config File
      lineinfile:
        path: "/etc/rancher/rke2/config.yaml"
        line: "server: https://192.168.3.21:9345"
        create: yes

    - name: Get the node token from the first node in the cluster
      # Assumes the existing initial controlplane is in the `first_node` inventory group
      shell: "cat /var/lib/rancher/rke2/server/node-token"
      register: node_token
      run_once: true
      when: "'first_node' in group_names"

    - name: Inject the Primary Cluster Host trust token into the config file
      lineinfile:
        path: "/etc/rancher/rke2/config.yaml"
        line: "token: {{ node_token.stdout }}"

    - name: Enable and start the RKE2 server service
      systemd:
        name: rke2-server.service
        state: started
        enabled: yes
```
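After the play finishes, the new controlplane should appear alongside the first node (a quick check from any controlplane):
```
kubectl get nodes -o wide   # each controlplane should carry the control-plane,etcd,master roles
```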
@ -0,0 +1,38 @@
# 3B-deploy-worker-node.yml

```jsx title="3B-deploy-worker-node.yml"
---
- name: RKE2 Kubernetes Worker Node Deployment
  hosts: all
  become: yes
  tasks:
    - name: Download and install RKE2 agent
      shell: "curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -"

    - name: Create Rancher-Kubernetes-specific config directory
      file:
        path: "/etc/rancher/rke2/"
        state: directory

    - name: Inject IP of Primary Cluster Host (First Node) into Config File
      lineinfile:
        path: "/etc/rancher/rke2/config.yaml"
        line: "server: https://192.168.3.21:9345"
        create: yes

    - name: Get the node token from the first node in the cluster
      shell: "cat /var/lib/rancher/rke2/server/node-token"
      register: node_token
      run_once: true
      delegate_to: first_node_host   # must match the inventory name of the initial controlplane

    - name: Inject the Primary Cluster Host trust token into the config file
      lineinfile:
        path: "/etc/rancher/rke2/config.yaml"
        line: "token: {{ node_token.stdout }}"

    - name: Enable and start the RKE2 agent service
      systemd:
        name: rke2-agent.service
        state: started
        enabled: yes
```
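If a worker fails to appear in the cluster afterwards, the agent service logs on that node are the first place to look (a troubleshooting sketch):
```
systemctl status rke2-agent.service
journalctl -u rke2-agent.service -f   # follow the join process live
```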
@ -0,0 +1,28 @@
# WinRM (Kerberos)
**Name**: "Kerberos WinRM"

```jsx title="Input Configuration"
fields:
  - id: username
    type: string
    label: Username
  - id: password
    type: string
    label: Password
    secret: true
  - id: krb_realm
    type: string
    label: Kerberos Realm (Domain)
required:
  - username
  - password
  - krb_realm
```

```jsx title="Injector Configuration"
extra_vars:
  ansible_user: '{{ username }}'
  ansible_password: '{{ password }}'
  ansible_winrm_transport: kerberos
  ansible_winrm_kerberos_realm: '{{ krb_realm }}'
```
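Outside of AWX, the same Kerberos WinRM settings can be smoke-tested from any Ansible control node (a sketch; the host, realm, and `inventory.ini` file are assumptions, and the Kerberos client packages plus the `ansible.windows` collection must already be installed):
```
# Obtain a Kerberos ticket for the account
kinit nicole.rappe@MOONGATE.LOCAL

# Verify WinRM connectivity against a Windows host from the inventory
ansible MOON-HOST-01 -i inventory.ini -m ansible.windows.win_ping \
  -e ansible_connection=winrm -e ansible_winrm_transport=kerberos -e ansible_port=5986
```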
@ -0,0 +1,36 @@
---
sidebar_position: 1
---
# AWX Credential Types
When interacting with devices via Ansible Playbooks, you need to provide the playbook with credentials to connect to the device with. Examples are domain credentials for Windows devices, and local sudo user credentials for Linux.

## Windows-based Credentials
### NTLM
NTLM-based authentication is not the most secure method of remotely running playbooks on Windows devices, but the traffic is still encrypted using SSL certificates created by the device itself when it is provisioned correctly to enable WinRM functionality.
```jsx title="(NTLM) nicole.rappe@MOONGATE.LOCAL"
Credential Type: Machine
Username: nicole.rappe@MOONGATE.LOCAL
Password: <Encrypted>
Privilege Escalation Method: runas
Privilege Escalation Username: nicole.rappe@MOONGATE.LOCAL
```
### Kerberos
Kerberos-based authentication is generally considered the most secure method of authentication with Windows devices, but it can be trickier to set up, since it requires additional setup inside of AWX in the cluster for it to function properly. At this time, there is no working Kerberos documentation.
```jsx title="(Kerberos WinRM) nicole.rappe"
Credential Type: Kerberos WinRM
Username: nicole.rappe
Password: <Encrypted>
Kerberos Realm (Domain): MOONGATE.LOCAL
```
## Linux-based Credentials
```jsx title="(LINUX) nicole"
Credential Type: Machine
Username: nicole
Password: <Encrypted>
Privilege Escalation Method: sudo
Privilege Escalation Username: root
```

:::note
`WinRM / Kerberos` based credentials do not currently work as expected. At this time, use either `Linux` or `NTLM` based credentials.
:::
@ -0,0 +1,39 @@
# Host Inventories
When you are deploying playbooks, you target hosts that exist in "Inventories". These inventories consist of a list of hosts and their corresponding IP addresses, as well as any host-specific variables that may be necessary to declare to run the playbook.

```jsx title="(NTLM) MOON-HOST-01"
Name: (NTLM) MOON-HOST-01
Host(s): MOON-HOST-01 @ 192.168.3.4

Variables:
---
ansible_connection: winrm
ansible_winrm_kerberos_delegation: false
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```

```jsx title="(NTLM) CyberStrawberry - Windows Hosts"
Name: (NTLM) CyberStrawberry - Windows Hosts
Host(s): MOON-HOST-01 @ 192.168.3.4
Host(s): MOON-HOST-02 @ 192.168.3.5

Variables:
---
ansible_connection: winrm
ansible_winrm_kerberos_delegation: false
ansible_port: 5986
ansible_winrm_transport: ntlm
ansible_winrm_server_cert_validation: ignore
```

```jsx title="(LINUX) Unsorted Devices"
Name: (LINUX) Unsorted Devices
Host(s): CLSTR-COMPUTE-01 @ 192.168.3.50
Host(s): CLSTR-COMPUTE-02 @ 192.168.3.51

Variables:
---
None
```
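For reference, the same hosts and variables expressed as a plain Ansible inventory file would look something like this (a sketch for CLI use only; the group names are assumptions, since AWX stores inventories in its own database):
```
[windows]
MOON-HOST-01 ansible_host=192.168.3.4
MOON-HOST-02 ansible_host=192.168.3.5

[windows:vars]
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
```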
@ -0,0 +1,16 @@
# AWX Projects
When you want to run playbooks on host devices in your inventory files, you need to host the playbooks in a "Project". Projects can be as simple as a connection to Gitea/Github to store playbooks in a repository.

```jsx title="Ansible Playbooks (Gitea)"
Name: Ansible Playbooks (Gitea)
Source Control Type: Git
Source Control URL: https://git.cyberstrawberry.net/nicole.rappe/ansible.git
Source Control Credential: CyberStrawberry Gitea
```

```jsx title="Resources > Credentials > CyberStrawberry Gitea"
Name: CyberStrawberry Gitea
Credential Type: Source Control
Username: nicole.rappe
Password: <Encrypted> #If you use MFA on Gitea/Github, use an App Password instead for the project.
```
@ -0,0 +1,100 @@
---
sidebar_position: 1
---
# Deploy AWX on RKE2 Cluster
Deploying Ansible AWX on a Rancher RKE2 Cluster may be considered overkill for a homelab; however, the configuration seen below allows you to scale the cluster to your needs over time and gives you experience with a more enterprise-ready setup.

:::note Prerequisites
This document assumes you are running **Ubuntu Server 20.04** or later with at least 8GB of memory and 4 CPU cores.
:::
## Deploy Rancher RKE2 Cluster
You will need to follow the [Rancher RKE2 Cluster Deployment](https://docs.cyberstrawberry.net/Container%20Documentation/Kubernetes/Rancher%20RKE2/Rancher%20RKE2%20Cluster) guide in order to initially set up the cluster itself. After this phase, you can focus on the Ansible AWX-specific aspects of the deployment. If you are only deploying AWX in a small environment, a single ControlPlane node is all you need to set up AWX.
:::tip
If this is a virtual machine, after deploying the RKE2 cluster and validating it functions, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.
:::
## Server Configuration
The AWX deployment will consist of 3 YAML files that configure the containers for AWX as well as the NGINX ingress networking side of things. You will need all of them in the same folder for the deployment to be successful. For the purpose of this example, we will put all of them into a folder located at `/awx`.
### Make the deployment folder
```
mkdir -p /awx
cd /awx
```
### Run a command to adjust open file limits in Ubuntu Server (just-in-case)
```
ulimit -n 4096
```
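`ulimit` only affects the current shell session; if you want the higher limit to survive reboots, it can be persisted (an optional sketch, assuming the default `pam_limits` setup):
```
echo '* soft nofile 4096' | sudo tee -a /etc/security/limits.conf
echo '* hard nofile 4096' | sudo tee -a /etc/security/limits.conf
```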

### Create the AWX deployment configuration files
You will need to create these files all in the same directory using the content of the examples below. Be sure to replace values such as the `spec.host=ansible.cyberstrawberry.net` in the `awx-ingress.yml` file with a hostname you can point a DNS server / record to.
```jsx title="nano /awx/kustomization.yml"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/ansible/awx-operator/config/default?ref=2.4.0
  - awx.yml
  - awx-ingress.yml
images:
  - name: quay.io/ansible/awx-operator
    newTag: 2.4.0
namespace: awx
```
```jsx title="nano /awx/awx.yml"
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  service_type: ClusterIP
```
```jsx title="nano /awx/awx-ingress.yml"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awx-ingress
spec:
  rules:
    - host: ansible.cyberstrawberry.net
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: awx-service
                port:
                  number: 80
```
## Deploy AWX using Kustomize
Now it is time to tell Rancher / Kubernetes to read the configuration files using Kustomize (built into newer versions of Kubernetes) to deploy AWX into the cluster. Be sure that you are still in the `/awx` folder before running this command. **Be Patient:** The AWX deployment process can take a while. Go grab a cup of coffee and use the commands in the [Troubleshooting](https://docs.cyberstrawberry.net/Container%20Documentation/Kubernetes/Rancher%20RKE2/RKE2%20Ansible%20AWX#troubleshooting) section if you want to track the progress more directly.
```
kubectl apply -k .
```
:::caution
If you get any errors mentioning "**CRD**" in the output, re-run the `kubectl apply -k .` command a second time after waiting about 10 seconds. The second time, the error should be gone.
:::
## Access the WebUI behind Ingress Controller
After you have deployed AWX into the cluster, it will not be immediately accessible to the host's network (such as your home computer) unless you set up a DNS record pointing to it. In the example above, you would have an `A` or `CNAME` DNS record pointing to the internal IP address of the Rancher RKE2 Cluster host. The RKE2 Cluster will translate `ansible.cyberstrawberry.net` to the AWX web-service container(s) automatically. SSL certificates are not covered in this documentation, but suffice to say, they can be configured on another reverse proxy such as Traefik or via Cert-Manager / JetStack. The process of setting this up goes outside the scope of this document.
- AWX WebUI: https://ansible.cyberstrawberry.net
### Retrieving the Auto-Generated Admin Password
AWX will generate its own secure password the first time you set up AWX. This password is stored as a *secret* in Kubernetes. You can navigate to the WebUI of Rancher in the RKE2 Cluster as long as you have a DNS record matching the hostname you assigned to Rancher the first time you signed in.
- Rancher WebUI: https://awx-cluster.cyberstrawberry.net
- Alternatively, you can try running the following command to pull the admin password / secret automatically
```
kubectl get secret awx-admin-password -n awx -o jsonpath="{.data.password}" | base64 --decode ; echo
```
## Troubleshooting
You may want to track the deployment process to verify that it is actually doing something. There are a few Kubernetes commands that can assist with this, listed below.
### Show the container deployment progress for AWX
```
kubectl get pods -n awx
```
### AWX-Manager Deployment Logs
You may want to track the internal logs of the `awx-manager` container, which is responsible for the majority of the automated deployment of AWX. You can do so by running the command below.
```
kubectl logs -n awx awx-operator-controller-manager-6c58d59d97-qj2n2 -c awx-manager
```
:::note
The `-6c58d59d97-qj2n2` noted at the end of the Kubernetes "Pod" mentioned in the command above is randomized. You will need to change it based on the name shown when running the `kubectl get pods -n awx` command.
:::
@ -0,0 +1,21 @@
# Templates
Templates are pre-constructed combinations of inventory, playbook, and credentials that perform a specific kind of task against a predefined group of hosts or device inventory.

```jsx title="Deploy Hyper-V VM"
Name: Deploy Hyper-V VM
Inventory: (NTLM) MOON-HOST-01
Playbook: playbooks/Windows/Hyper-V/Deploy-VM.yml
Credentials: (NTLM) nicole.rappe@MOONGATE.local
Execution Environment: AWX EE (latest)
Project: Ansible Playbooks (Gitea)

Variables:
---
random_number: "{{ lookup('password', '/dev/null chars=digits length=4') }}"
random_letters: "{{ lookup('password', '/dev/null chars=ascii_uppercase length=4') }}"
vm_name: "NEXUS-TEST-{{ random_number }}{{ random_letters }}"
vm_memory: "8589934592" #Measured in Bytes (e.g. 8GB)
vm_storage: "68719476736" #Measured in Bytes (e.g. 64GB)
iso_path: "C:\\ubuntu-22.04-live-server-amd64.iso"
vm_folder: "C:\\Virtual Machines\\{{ vm_name_fact }}"
```
Binary file not shown. (Image, 122 KiB)
@ -0,0 +1,136 @@
# Deploy RKE2 Cluster
Deploying a Rancher RKE2 Cluster is fairly straightforward. Just run the commands in order and pay attention to which steps apply to all machines in the cluster, the controlplanes, and the workers.

!!! note "Prerequisites"
    This document assumes you are running **Ubuntu Server 20.04** or later.

## All Cluster Nodes
### Run Updates
You will need to run these commands on every server that participates in the cluster, then perform a reboot of the server **PRIOR** to moving onto the next section.
``` sh
sudo apt update && sudo apt upgrade -y
sudo apt install nfs-common iptables nano htop -y
```

### Reboot the Node
``` sh
sudo apt autoremove -y
sudo reboot
```
!!! tip
    If this is a virtual machine, now would be the best time to take a checkpoint / snapshot of the VM before moving forward, in case you need to perform rollbacks of the server(s) if you accidentally misconfigure something.

## Initial ControlPlane Node
When you are starting a brand new cluster, you need to create what is referred to as the "Initial ControlPlane". This node is responsible for bootstrapping the entire cluster together in the beginning, and will eventually assist in handling container workloads and orchestrating operations in the cluster.
!!! warning
    You only want to follow the instructions for the **initial** controlplane once. Running it on another machine to create additional controlplanes will cause the cluster to try to set up two different clusters, wreaking havoc. Instead, follow the instructions in the next section to add redundant controlplanes.

### Download and Run the Server Deployment Script
```
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
### Enable & Configure Services
``` sh
# Start and Enable the Kubernetes Service
systemctl enable rke2-server.service
systemctl start rke2-server.service

# Symlink the Kubectl Management Command
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl

# Temporarily Export the Kubeconfig to manage the cluster from CLI
export KUBECONFIG=/etc/rancher/rke2/rke2.yaml

# Check that the Cluster Node is Running and Ready
kubectl get node
```
### Install Helm, Cert-Manager, Jetstack, Rancher, and Longhorn
``` sh
# Install Helm
curl -#L https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Add the Necessary Helm Repositories
helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
helm repo add jetstack https://charts.jetstack.io
helm repo add longhorn https://charts.longhorn.io
helm repo update

# Install the Cert-Manager CRDs
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.6.1/cert-manager.crds.yaml

# Install Jetstack cert-manager via Helm
helm upgrade -i cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace

# Install Rancher via Helm
helm upgrade -i rancher rancher-latest/rancher --create-namespace --namespace cattle-system --set hostname=rancher.cyberstrawberry.net --set bootstrapPassword=bootStrapAllTheThings --set replicas=1

# Install Longhorn via Helm
helm upgrade -i longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```
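Before moving on, it is worth confirming that each Helm release actually deployed (a minimal check):
``` sh
helm ls -A            # cert-manager, rancher, and longhorn should all report "deployed"
kubectl get pods -A   # watch until the cattle-system and longhorn-system pods are Running
```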

!!! note
    Be sure to write down the "*bootstrapPassword*" variable for when you log into Rancher later. In this example, the password is `bootStrapAllTheThings`.
    Also be sure to adjust the "*hostname*" variable to reflect the FQDN of the cluster. This is important for the last step, where you adjust DNS. The example given is `rancher.cyberstrawberry.net`.

## Create Additional ControlPlane Node(s)
This is the part where you can add additional controlplane nodes to add redundancy to the RKE2 Cluster. This is important for high-availability environments.

### Download and Run the Server Deployment Script
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=server sh -
```
### Configure and Connect to Initial ControlPlane Node
``` sh
# Symlink the Kubectl Management Command
ln -s $(find /var/lib/rancher/rke2/data/ -name kubectl) /usr/local/bin/kubectl

# Manually Create a Rancher-Kubernetes-Specific Config File
mkdir -p /etc/rancher/rke2/

# Inject IP of Initial ControlPlane Node into Config File
echo "server: https://192.168.3.21:9345" > /etc/rancher/rke2/config.yaml

# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml

# Start and Enable the Kubernetes Service
systemctl enable rke2-server.service
systemctl start rke2-server.service
```
!!! note
    Be sure to change the IP address of the initial controlplane node provided in the example above to match your environment.

## Add Worker Node(s)
Worker nodes are the bread-and-butter of a Kubernetes cluster. They handle running container workloads, and act as storage for the cluster (this can be configured to varying degrees based on your needs).

### Download and Run the Worker Deployment Script
``` sh
curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent sh -
```
### Configure and Connect to RKE2 Cluster
``` sh
# Manually Create a Rancher-Kubernetes-Specific Config File
mkdir -p /etc/rancher/rke2/

# Inject IP of Initial ControlPlane Node into Config File
echo "server: https://192.168.3.21:9345" > /etc/rancher/rke2/config.yaml

# Inject the Initial ControlPlane Node trust token into the config file
# You can get the token by running the following command on the first node in the cluster: `cat /var/lib/rancher/rke2/server/node-token`
echo "token: K10aa0632863da4ae4e2ccede0ca6a179f510a0eee0d6d6eb53dca96050048f055e::server:3b130ceebfbb7ed851cd990fe55e6f3a" >> /etc/rancher/rke2/config.yaml

# Start and Enable the Kubernetes Service
systemctl enable rke2-agent.service
systemctl start rke2-agent.service
```

## DNS Server Record
You will need to set up some kind of DNS server record to point the FQDN of the cluster (e.g. `rancher.cyberstrawberry.net`) to the IP address of the Initial ControlPlane. This can be achieved in a number of ways, such as editing the Windows `HOSTS` file, Linux's `/etc/hosts` file, a Windows DNS Server "A" Record, or an NGINX/Traefik Reverse Proxy.

Once you have added the DNS record, you should be able to access the login page for the Rancher RKE2 Kubernetes cluster. Use the `bootstrapPassword` mentioned previously to log in, then change it immediately from the user management area of Rancher.

| TYPE OF ACCESS | FQDN                                  | IP ADDRESS   |
| -------------- | ------------------------------------- | ------------ |
| HOST FILE      | rancher.cyberstrawberry.net           | 192.168.3.21 |
| REVERSE PROXY  | http://rancher.cyberstrawberry.net:80 | 192.168.5.29 |
| DNS RECORD     | A Record: rancher.cyberstrawberry.net | 192.168.3.21 |
31
Container Documentation/Portainer/Deploy Portainer.md
Normal file
@ -0,0 +1,31 @@
### Update The Package Manager
We need to update the server before installing Docker.
```jsx title="Ubuntu Server"
sudo apt update
sudo apt upgrade
```
```jsx title="Rocky Linux"
sudo dnf check-update
```

### Deploy Docker
Install Docker, then deploy Portainer.
```jsx title="Ubuntu Server"
sudo apt install docker.io
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest
```
```jsx title="Rocky Linux"
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io
sudo systemctl start docker
sudo systemctl enable docker
docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /srv/containers/portainer:/data portainer/portainer-ee:latest
```
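Once the container is up, Portainer's web interface should answer on port 9443 (a quick check; replace the IP with your server's address):
```
docker ps --filter name=portainer   # the container should show STATUS "Up"
# Then browse to https://<server-ip>:9443 to finish the initial admin setup
```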

### Configure Docker Network
We want to use a dedicated subnet / network specifically for containers, so they don't trample over the **SERVER** and **LAN** networks. If you are unsure of the name of the network adapter, in this case `eth0`, just run `ip addr` in the terminal to list the network interfaces and locate it.
```
docker network create -d macvlan --subnet=192.168.5.0/24 --gateway=192.168.5.1 -o parent=eth0 docker_network
```