diff --git a/docs/12-self-hosting/01-overview.mdx b/docs/12-self-hosting/01-overview.mdx
new file mode 100644
index 00000000..4a072742
--- /dev/null
+++ b/docs/12-self-hosting/01-overview.mdx
@@ -0,0 +1,97 @@
+---
+toc_max_heading_level: 4
+---
+
+# About Self-Hosting
+
+Projects often encounter constraints or requirements which make free-tier hosted CI/CD runners
+insufficient for their needs. In these cases, hosting your own CI/CD runner can be a viable
+alternative to premium-tier services or subscriptions. Self-hosting may also provide access to
+resources that are simply not available on many CI/CD services, such as GPUs, faster drives, and
+newer CPU models.
+
+This guide covers basic methods for hosting CI/CD runners on Bare-Metal, on Virtual Machines, or
+using Cloud Runner. Containerized hosts are not discussed because of their inherent reliance on
+insecure practices such as
+[Docker-in-Docker](http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/),
+[Privileged Containers](https://www.trendmicro.com/en_th/research/19/l/why-running-a-privileged-container-in-docker-is-a-bad-idea.html)
+and the additional tooling required to mitigate those risks such as
+[Kaniko](https://github.com/GoogleContainerTools/kaniko) or
+[Kata Containers](https://katacontainers.io/).
+
+## 📚 Prerequisite Knowledge
+
+Users of this guide should already be familiar with the Linux command-line and shell scripting, and
+have a basic grasp of CI/CD concepts. If you are not yet familiar with these topics, the resources
+below are a good place to start your learning journey.
+
+- [Techworld with Nana](https://www.youtube.com/@TechWorldwithNana)
+- [DevOps Toolkit](https://www.youtube.com/@DevOpsToolkit)
+- [Introduction to Bash Scripting](https://itsfoss.com/bash-scripting-tutorial/)
+
+## 📋 Constraints
+
+There are many ways to self-host CI/CD runners, and which one is best for you will depend on your
+own situation and constraints. For the purpose of this guide we will make the following assumptions:
+
+- 💻 User already has their own hardware
+- 💸 Budget for new hardware, software, or services is $0
+- 🛠️ FOSS tools should be prioritized where possible
+- 📜 We define `Self-Hosting` in this context to refer to a user taking responsibility for the
+ operating-system level configuration and life-cycle-management of a given compute resource (metal,
+ on-prem, cloud VM, VPS etc...)
+
+## ⚠️ Security Disclaimer
+
+This guide strives to maintain a balance between convenience and security for the sake of usability.
+The examples included in this guide are intended for use with on-prem hardware without public IP
+addresses accessible from external networks. Security is a constantly moving target which requires
+continuous effort to maintain. Users should conduct their own security review before using the
+following techniques on production or public systems.
+
+## ⚡️ Power Costs
+
+Hosting your own runners also increases your power consumption. The impact will vary based on the
+hardware you use and energy prices in your area. Below are some useful resources for estimating the
+potential energy costs of self-hosting.
+
+- https://outervision.com/power-supply-calculator
+- https://energyusecalculator.com/electricity_computer.htm
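+
+As a back-of-the-envelope sketch, you can estimate the monthly cost of an always-on runner from its
+average power draw and your local electricity price. The wattage and price below are hypothetical
+example values - substitute your own measurements:
+
+```bash
+# Estimate the monthly energy cost of an always-on runner.
+# WATTS and PRICE_PER_KWH are example values - substitute your own.
+WATTS=120            # average power draw of the host
+HOURS=720            # hours in a 30-day month
+PRICE_PER_KWH=0.30   # local electricity price per kWh
+
+# kWh per month = (watts * hours) / 1000
+KWH=$(awk "BEGIN { printf \"%.1f\", ($WATTS * $HOURS) / 1000 }")
+COST=$(awk "BEGIN { printf \"%.2f\", $KWH * $PRICE_PER_KWH }")
+echo "~${KWH} kWh per month, costing ~${COST}"
+```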
+
+## 💻 System Requirements
+
+This guide is tested on devices which meet the following requirements:
+
+- x86_64 (amd64) processor
+- Ubuntu 22.04 LTS Server or Debian 12 Bookworm
+- Root access to the operating system
+- Network connectivity
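+
+The checks above can be scripted. This sketch (assuming a Debian or Ubuntu host) reports what it
+finds rather than failing hard:
+
+```bash
+# Quick sanity check against the requirements above (Linux hosts only).
+ARCH=$(uname -m)
+. /etc/os-release 2>/dev/null || true   # provides $ID and $VERSION_ID on Debian/Ubuntu
+
+echo "Architecture: ${ARCH}"
+echo "OS: ${ID:-unknown} ${VERSION_ID:-unknown}"
+
+# Root access: either we are root, or passwordless sudo is available.
+if [ "$(id -u)" -eq 0 ] || sudo -n true 2>/dev/null; then
+  echo "Root access: OK"
+else
+  echo "Root access: run privileged steps with sudo"
+fi
+```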
+
+## 📎 Quick Links
+
+### Host Creation
+
+"Host Creation" in this context is the process of installing an operating system onto a piece of
+physical hardware, or the creation and configuration of virtualised compute resources.
+
+- [Bare-Metal](./03-host-creation/02-bare-metal.mdx)
+- [Virtual Machines using Multipass](./03-host-creation/02-multipass.mdx)
+- [Virtual Machines using QEMU](./03-host-creation/03-QEMU/01-overview.mdx)
+
+### Host Provisioning
+
+"Provisioning" here refers to installing additional software onto your host and configuring it
+beyond the base operating-system install. Both manual and declarative workflows are supported.
+
+- [Manual Ubuntu 22.04 Setup](./04-host-provisioning/02-ubuntu-setup.mdx)
+- [Manual Debian 12 Setup](./04-host-provisioning/01-debian-setup.mdx)
+- [Declarative provisioning via Cloud-Init](./04-host-provisioning/03-cloud-init/01-about.mdx)
+
+### Runner Application Installation
+
+Once your host has been provisioned, you will then need to install the appropriate runner
+application. The guides below will walk you through that process.
+
+- [GitHub Actions](./05-runner-application-installation/02-github-actions.mdx)
+- [GitLab Pipelines](./05-runner-application-installation/01-gitlab-pipelines.mdx)
diff --git a/docs/12-self-hosting/02-host-types.mdx b/docs/12-self-hosting/02-host-types.mdx
new file mode 100644
index 00000000..4ab48255
--- /dev/null
+++ b/docs/12-self-hosting/02-host-types.mdx
@@ -0,0 +1,129 @@
+import Virtualisation from '/assets/images/Virtualization.drawio.png';
+import Metal from '/assets/images/Metal.drawio.png';
+import Docker from '/assets/images/DockerHost.drawio.png';
+import Kubernetes from '/assets/images/kubernetes.drawio.png';
+import Layers from '/assets/images/k8s-layers.drawio.png';
+import Layer0 from '/assets/images/k8s-layer0.drawio.png';
+import Layer1 from '/assets/images/k8s-layer1.drawio.png';
+import Layer2 from '/assets/images/k8s-layer2.drawio.png';
+import Layers01 from '/assets/images/k8s-layers01.drawio.png';
+import Layers012 from '/assets/images/k8s-layers012.drawio.png';
+
+# Types of Hosts
+
+## Bare-Metal
+
+"Bare Metal" means that your host OS is running directly on a piece of hardware without any
+virtualisation. This reduces the complexity of deployment at the cost of increased time and effort
+for re-provisioning the host.
+
+
+
+
+## Virtual Machines
+
+Virtual Machines are a software-defined layer of abstraction atop a Bare-Metal host which makes
+deployments more consistent and easier to manage declaratively. This greatly reduces the difficulty
+of re-deployment and creates the conditions required for securely running multiple guests within the
+same physical host. Virtual Machines can also be used to create hosts that run different operating
+systems (Windows, MacOS) or architectures (ARM) than the host machine. This added functionality
+comes at the cost of added complexity, a slight performance penalty, and you need to already have a
+Bare-Metal host on which to run the VMs.
+
+
+
+
+Additional Reading:
+
+- [A Study of Performance and Security Across the Virtualization Spectrum](https://repository.tudelft.nl/islandora/object/uuid:34b3732e-2960-4374-94a2-1c1b3f3c4bd5/datastream/OBJ/download) -
+ Vincent van Rijn
+- [Hyper-converged infrastructure](https://en.wikipedia.org/wiki/Hyper-converged_infrastructure) -
+ Wikipedia
+- [Rethinking the PC](https://www.computerworld.com/article/3518849/rethinking-the-pc-why-virtual-machines-should-replace-operating-systems.html) -
+ Rob Enderle
+
+## Containers
+
+Containers are built on 'cgroups' (control groups), a feature of the Linux kernel that limits,
+monitors, and isolates the resource usage of a collection of processes. This makes running
+containers on Linux a very lightweight form of virtualisation. On operating systems which do not use
+the Linux kernel, however, a Linux virtual machine or translation layer must be created to run
+containers. The manner in which each operating system resolves this issue varies greatly, as shown
+below. Because of this variance, the self-hosting documentation targets Linux as a means of avoiding
+excess complexity.
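+
+You can see cgroups at work on any modern Linux host. This sketch (assuming cgroup v2, the default
+on Debian 12 and Ubuntu 22.04) shows which cgroup confines the current shell and what limits apply
+to it:
+
+```bash
+# Print the cgroup confining the current shell (cgroup v2 layout).
+CGROUP=$(head -n1 /proc/self/cgroup 2>/dev/null | cut -d: -f3)
+echo "Current cgroup: ${CGROUP}"
+
+# Show resource limits on that cgroup, if the files are readable.
+for f in memory.max cpu.max; do
+  path="/sys/fs/cgroup${CGROUP}/${f}"
+  [ -r "$path" ] && echo "${f}: $(cat "$path")"
+done
+echo "Done"
+```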
+
+
+
+
+Additional Reading:
+
+- [The Mental Model Of Docker Container Shipping](https://bernhardwenzel.com/2022/the-mental-model-of-docker-container-shipping/) -
+ Bernhard Wenzel
+- [Why is Docker-in-Docker considered bad?](https://devops.stackexchange.com/questions/676/why-is-docker-in-docker-considered-bad)
+- [Why it is recommended to run only one process in a container?](https://devops.stackexchange.com/questions/447/why-it-is-recommended-to-run-only-one-process-in-a-container)
+
+## Kubernetes (Cloud Runner)
+
+Kubernetes is somewhat of a combination of all the other host types. Since it is an API, it must be
+installed on an existing host (called a "Node"), which is usually either a VM or a physical device.
+A Kubernetes "Cluster" is usually made up of 3 or more nodes - though you can have as few as one, or
+as many as 5,000 per cluster.
+
+
+
+
+
+Once installed, Kubernetes creates
+[standardised interfaces](https://matt-rickard.com/kubernetes-interfaces) to control the hardware &
+software components of the underlying nodes (networking, storage, GPUs, CPU cores etc...) as well as
+a distributed key-value store which facilitates communication between all nodes in the cluster.
+
+
+
+
+
+With the underlying hardware abstracted into a generic pool of resources, Kubernetes is then able to
+re-compose those assets into isolated environments called "Namespaces" where it deploys
+containerised workloads in groups called "Pods". This layer of Kubernetes is very similar to a
+typical container host but with many more features for multi-tenancy, security, and life-cycle
+management.
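+
+As a sketch of how those layers meet, the manifest below (all names are illustrative) defines a
+Namespace and a single Pod that the cluster would schedule onto one of its nodes:
+
+```yaml
+# A minimal namespace plus a single-container pod (illustrative names).
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: ci-runners
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example-runner
+  namespace: ci-runners
+spec:
+  containers:
+    - name: runner
+      image: ubuntu:22.04
+      command: ['sleep', 'infinity']
+      resources:
+        limits:
+          cpu: '2'
+          memory: 4Gi
+```
+
+Saved as `runner-pod.yaml`, this would be applied with `kubectl apply -f runner-pod.yaml`.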
+
+
+
+
+
+Additional Reading:
+
+- [Kubernetes Components](https://kubernetes.io/docs/concepts/overview/components/) - kubernetes.io
+- [A visual guide to Kubernetes networking fundamentals](https://opensource.com/article/22/6/kubernetes-networking-fundamentals) -
+ Nived Velayudhan
+- [Thinking about the complexity of the Kubernetes ecosystem](https://erkanerol.github.io/post/complexity-of-kubernetes/) -
+ Erkan Erol
+- [Ephemeral, Idempotent and Immutable Infrastructure ](https://cloudnativenow.com/topics/ephemeral-idempotent-and-immutable-infrastructure/) -
+ Marc Hornbeek
diff --git a/docs/12-self-hosting/03-host-creation/02-bare-metal.mdx b/docs/12-self-hosting/03-host-creation/02-bare-metal.mdx
new file mode 100644
index 00000000..9edfa5a4
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/02-bare-metal.mdx
@@ -0,0 +1,41 @@
+import Metal from '/assets/images/Metal.drawio.png';
+
+# Bare-Metal
+
+The Host is the computer that will execute the runner application. This can be a desktop computer,
+laptop, Virtual Machine, or VPS from a cloud provider.
+
+
+
+
+## If your host is a local machine:
+
+For a local machine you will need to perform a clean installation of the operating system. This
+means creating a bootable USB drive from an ISO file, booting the machine from the USB drive, and
+installing the OS. Links to download an official Live ISO file and installation guides are provided
+below. If you would like to create a custom ISO, try [PXEless](https://github.com/cloudymax/pxeless)
+or [Cubic](https://github.com/PJ-Singh-001/Cubic).
+
+### Ubuntu
+
+- Download the Ubuntu 22.04 LTS
+ [server installer](https://ftp.snt.utwente.nl/pub/os/linux/ubuntu-releases/22.04.3/ubuntu-22.04.3-live-server-amd64.iso)
+- [Guide: Install Ubuntu 22.04 LTS on a local machine](https://ostechnix.com/install-ubuntu-server/)
+
+### Debian
+
+- Download the Debian 12
+ [installation image](https://cdimage.debian.org/debian-cd/current/amd64/iso-dvd/debian-12.1.0-amd64-DVD-1.iso)
+- [Guide: Install Debian on a local system](https://www.linuxtechi.com/how-to-install-debian-11-bullseye/)
+
+## If your host is a virtual-machine:
+
+If you are using a VPS or VM, the OS should already be installed and an admin user should already
+exist. Follow the appropriate guide in the provisioning section for your operating system.
+
+- [Ubuntu 22.04](../04-host-provisioning/02-ubuntu-setup.mdx)
+- [Debian 12](../04-host-provisioning/01-debian-setup.mdx)
diff --git a/docs/12-self-hosting/03-host-creation/02-multipass.mdx b/docs/12-self-hosting/03-host-creation/02-multipass.mdx
new file mode 100644
index 00000000..2911b779
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/02-multipass.mdx
@@ -0,0 +1,152 @@
+---
+toc_max_heading_level: 4
+---
+
+# VMs with Multipass (Basic)
+
+Multipass is a lightweight Virtual Machine Manager for Linux, Windows, and MacOS. It's designed for
+developers who want to quickly create a fresh Ubuntu environment with a single command. It uses the
+native hypervisor for whichever platform it is installed on (KVM on Linux, Hyper-V on Windows, and
+HyperKit on MacOS) to run VMs with minimal overhead. It can also use VirtualBox on Windows and
+MacOS. The biggest limitation of Multipass is that it can only create Ubuntu VMs.
+
+- [Official Website](https://multipass.run/)
+- [Official Github Repo](https://github.com/canonical/multipass)
+
+## Installation
+
+To install Multipass on Linux, use the commands below.
+
+```bash
+sudo apt-get install snapd
+sudo snap install core
+sudo snap install multipass
+```
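+
+You can confirm the installation succeeded before moving on. On Linux the binary lands in
+`/snap/bin`, which may not be on your `PATH` yet:
+
+```bash
+# Check whether the multipass binary is reachable.
+if command -v multipass >/dev/null 2>&1 || [ -x /snap/bin/multipass ]; then
+  MP_STATUS="installed"
+else
+  MP_STATUS="missing"
+fi
+echo "multipass: ${MP_STATUS}"
+```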
+
+For installation on Windows and MacOS, refer to the official installation instructions:
+
+- [How to install Multipass on Windows](https://multipass.run/docs/installing-on-windows)
+- [How to install Multipass on MacOS](https://multipass.run/docs/installing-on-macos)
+
+## Creating a VM
+
+- Set values
+
+ ```bash
+ # The name of the Virtual Machine
+ export VM_NAME="gameci"
+
+ # The name of the user to create
+ export VM_USER="vmadmin"
+
+ # Number of CPU cores to allocate to the VM
+ export VM_CPUS="2"
+
+ # Amount of Disk Space to allocate to the VM.
+ # Cannot exceed available on host.
+ export VM_DISK="32G"
+
+ # Amount of RAM to allocate to the VM.
+ # Cannot exceed available RAM on host.
+ export VM_MEM="8G"
+
+  # Add the multipass binary's directory to the PATH on MacOS systems
+  export PATH="$PATH:/usr/local/bin"
+
+  # Add the multipass binary's directory to the PATH on Linux systems
+  export PATH="$PATH:/snap/bin"
+ ```
+
+- Create a password
+
+ ```bash
+ # Install the mkpasswd utility
+ sudo apt install -y whois
+
+  # Prompt for the password without echoing it
+  read -rs PW_STRING
+  export PASSWORD=$(mkpasswd -m sha-512 --rounds=4096 "$PW_STRING" -S "saltsaltlettuce")
+ ```
+
+- Create an ssh-key for authenticating with the VM
+
+ ```bash
+  ssh-keygen -C "$VM_USER" -f runner
+ ```
+
+- Add the public ssh-key and password to a cloud-init file
+
+ See the [cloud init](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) official
+ docs for more information about cloud-init. More advanced templates are available in the
+ [Host Provisioning](../04-host-provisioning/03-cloud-init.mdx) directory.
+
+ ```bash
+ VM_KEY=$(cat runner.pub)
+
+ cat << EOF > cloud-init.yaml
+ #cloud-config
+ groups:
+ - docker
+ users:
+ - default
+ - name: ${VM_USER}
+ sudo: ALL=(ALL) NOPASSWD:ALL
+ shell: /bin/bash
+ groups: docker, admin, sudo, users
+ no_ssh_fingerprints: true
+ lock_passwd: false
+ passwd: ${PASSWORD}
+ ssh-authorized-keys:
+ - ${VM_KEY}
+ packages:
+ - docker.io
+ EOF
+ ```
+
+- Start the VM
+
+ See the [multipass launch](https://multipass.run/docs/launch-command) command docs for more
+ information.
+
+ ```bash
+ export VERBOSITY="-vvvvvv"
+
+ /snap/bin/multipass launch --name $VM_NAME \
+ --cpus $VM_CPUS \
+ --disk $VM_DISK \
+ --mem $VM_MEM \
+ --cloud-init cloud-init.yaml \
+ $VERBOSITY
+ ```
+
+- Get the VM's IP address
+
+ ```bash
+  VM_IP=$(/snap/bin/multipass list | grep "${VM_NAME}" | awk '{print $3}')
+ ```
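+
+  The pipeline above grabs the third column of the `multipass list` table. Applied to a captured
+  sample line (the IP address below is a hypothetical example), the extraction works like this:
+
+  ```bash
+  # Sample `multipass list` output line (hypothetical IP address).
+  SAMPLE="gameci                  Running           10.49.144.5      Ubuntu 22.04 LTS"
+
+  # The same grep/awk pipeline as above, applied to the sample.
+  VM_IP=$(echo "$SAMPLE" | grep "gameci" | awk '{print $3}')
+  echo "$VM_IP"
+  ```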
+
+- Connect to the VM via ssh or cli
+
+ ssh:
+
+ ```bash
+ ssh -i runner $VM_USER@$VM_IP -o StrictHostKeyChecking=no -vvvv
+ ```
+
+ CLI:
+
+ ```bash
+ multipass shell $VM_NAME
+ ```
+
+- Install the runner application
+
+ - [GitHub Actions](../05-runner-application-installation/02-github-actions.mdx)
+ - [GitLab Pipelines](../05-runner-application-installation/01-gitlab-pipelines.mdx)
+
+## Cleanup
+
+```bash
+/snap/bin/multipass stop $VM_NAME
+/snap/bin/multipass delete $VM_NAME
+/snap/bin/multipass purge
+```
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/01-overview.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/01-overview.mdx
new file mode 100644
index 00000000..fa2f9ba5
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/01-overview.mdx
@@ -0,0 +1,72 @@
+---
+toc_max_heading_level: 4
+---
+
+# Install QEMU
+
+[QEMU](https://www.qemu.org/documentation/) (short for "Quick Emulator") is a generic and open
+source machine emulator and virtualiser.
+
+When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM
+board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very
+good performance.
+
+When used as a virtualiser, QEMU achieves near native performance by executing the guest code
+directly on the host CPU. QEMU supports virtualisation when executing under the Xen hypervisor or
+using the KVM kernel module in Linux. When using KVM, QEMU can virtualise x86, server and embedded
+PowerPC, 64-bit POWER, S390, 32-bit and 64-bit ARM, and MIPS guests.
+
+See also:
+[Virtualization and Hypervisors: Explaining QEMU, KVM, and Libvirt](https://sumit-ghosh.com/articles/virtualization-hypervisors-explaining-qemu-kvm-libvirt/)
+by Sumit Ghosh
+
+## Why use QEMU?
+
+- Like [ESXi](https://www.vmware.com/nl/products/esxi-and-esx.html), it's capable of
+  PCIe-passthrough for GPUs. This is in contrast with
+  [Firecracker](https://firecracker-microvm.github.io/) and
+  [VirtualBox](https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/guestadd-video.html),
+  which cannot.
+- Unlike ESXi, it's free.
+- When used with KVM, QEMU provides near-native levels of performance.
+- Can be used inside Kubernetes via [Kubevirt](https://kubevirt.io/)
+- It's fast - not quite as fast as [LXD](https://linuxcontainers.org/lxd/introduction/),
+  [Firecracker](https://firecracker-microvm.github.io/), or
+  [Cloud-Hypervisor](https://github.com/cloud-hypervisor/cloud-hypervisor) (formerly
+  [NEMU](https://github.com/intel/nemu)), but it's far more mature, has a larger community, and has
+  more documentation available.
+- Unlike [Multipass](https://multipass.run/docs) it can also create Windows and MacOS guests.
+- [Unlike Firecracker](https://github.com/firecracker-microvm/firecracker/issues/849#issuecomment-464731628),
+  it supports pinning memory addresses, and thus PCIe-passthrough. Firecracker cannot, because doing
+  so would break its core feature of over-subscription.
+- Can be run in a micro-vm configuration to achieve a smaller memory footprint (Inspired by
+ Firecracker).
+
+These qualities make QEMU well-suited for those seeking a highly-performant and fully-featured
+hypervisor.
+
+## Requirements
+
+- Linux host running Debian 12 or Ubuntu 22.04
+- VNC viewer software installed on the machine you will use to access the VM
+ - [TightVNC](https://www.tightvnc.com/download.php) (Windows)
+  - [Remmina](https://remmina.org/) (Linux)
+ - [TigerVNC](https://formulae.brew.sh/formula/tiger-vnc) (MacOS)
+
+## Installation
+
+- Install QEMU and its dependencies
+
+ ```bash
+ sudo apt-get install -y qemu-kvm \
+ bridge-utils \
+    virtinst \
+ ovmf \
+ qemu-utils \
+ cloud-image-utils \
+ tmux \
+ whois \
+ git \
+ jq \
+ git-extras \
+ guestfs-tools
+ ```
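+
+- Verify that hardware acceleration is available
+
+  QEMU falls back to slow software emulation when `/dev/kvm` is absent. This quick check (a sketch;
+  if it reports missing, enable VT-x/AMD-V in your BIOS) tells you which mode you will get:
+
+  ```bash
+  # /dev/kvm exists only when the KVM module is loaded and the CPU
+  # supports hardware virtualisation (VT-x or AMD-V).
+  if [ -e /dev/kvm ]; then
+    KVM_STATUS="available"
+  else
+    KVM_STATUS="missing"
+  fi
+  echo "KVM acceleration: ${KVM_STATUS}"
+  ```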
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/02-linux-cloudimage.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/02-linux-cloudimage.mdx
new file mode 100644
index 00000000..9b738ec0
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/02-linux-cloudimage.mdx
@@ -0,0 +1,199 @@
+import Vnc from '/assets/images/vnc-connection.png';
+import Ssh from '/assets/images/ssh-connection.png';
+
+# Linux (Cloud-Image)
+
+Cloud-Images are lightweight (usually under 700 MB) snapshots of a configured OS created by a
+publisher for use with public and private clouds. These images provide a way to repeatably create
+identical copies of a machine across platforms. Cloud-image based hosts are best suited for
+ephemeral or immutable workloads, where the machine is discarded after each run. We will use
+Cloud-Init to customize the cloud-image immediately upon booting, prior to user-space
+initialization.
+
+## Download a Cloud-Image
+
+- Choose a cloud-image to use as the base OS:
+
+ ```yaml
+ Debian:
+ 12: 'https://laotzu.ftp.acc.umu.se/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2'
+ Ubuntu:
+ jammy: 'https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img'
+ ```
+
+- Download the image with `wget`
+
+ ```bash
+ export CLOUD_IMAGE_URL="https://laotzu.ftp.acc.umu.se/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2"
+ export CLOUD_IMAGE_NAME=$(basename -- "$CLOUD_IMAGE_URL")
+ wget -c -O "$CLOUD_IMAGE_NAME" "$CLOUD_IMAGE_URL" -q --show-progress
+ ```
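+
+- (Optional) Verify the download
+
+  Publishers ship checksum lists alongside their images (for Debian cloud images, a `SHA512SUMS`
+  file in the same directory as the image). Verification uses `sha512sum -c`, demonstrated here on
+  a throwaway file so the mechanics are clear:
+
+  ```bash
+  # Create a demo file and a checksum list for it, then verify.
+  echo "demo content" > demo.img
+  sha512sum demo.img > SHA512SUMS.demo
+  RESULT=$(sha512sum -c SHA512SUMS.demo)
+  echo "$RESULT"
+  rm demo.img SHA512SUMS.demo
+
+  # For the real image, download the publisher's SHA512SUMS file into the
+  # same directory and run:
+  # grep "$CLOUD_IMAGE_NAME" SHA512SUMS | sha512sum -c -
+  ```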
+
+## VM Setup
+
+- Configure VM options
+
+ ```bash
+ # The name of the Virtual Machine
+ export VM_NAME="gameci"
+
+ # The name of the user to create
+ export VM_USER="vmadmin"
+
+ # Number of physical CPU cores to allocate to the VM
+ export PHYSICAL_CORES="2"
+
+ # Number of threads per core.
+ # Set this to `1` for CPUs that do not support hyper-threading
+ export THREADS="1"
+ export SMP=$(( $PHYSICAL_CORES * $THREADS ))
+
+ # Amount of Disk Space to allocate to the VM.
+ # Cannot exceed available on host.
+ export DISK_SIZE="32G"
+
+ # Amount of RAM to allocate to the VM.
+ # Cannot exceed available RAM on host.
+ export MEMORY="8G"
+
+ # IP address where host may be reached. Do not use `localhost`.
+ export HOST_ADDRESS="SOME IP HERE"
+
+ # Port used by SSH on the host
+ export HOST_SSH_PORT="22"
+
+ # Port to use when forwarding SSH to the VM
+ export VM_SSH_PORT="1234"
+
+ # Port number to expose on the host for VNC
+ export VNC_PORT="0"
+ ```
+
+### Credentials
+
+- Create an SSH Key
+
+ ```bash
+  yes | ssh-keygen -C "$VM_USER" \
+ -f runner \
+ -N '' \
+ -t rsa
+ ```
+
+- Create a password
+
+ ```bash
+ # Install the mkpasswd utility
+ sudo apt install -y whois
+
+  # Prompt for the password without echoing it
+  read -rs PW_STRING
+  export PASSWORD=$(mkpasswd -m sha-512 --rounds=4096 "$PW_STRING" -S "saltsaltlettuce")
+ ```
+
+### Cloud-init Config
+
+- Create a Cloud-Init file
+
+ See the [Cloud-Init](https://cloudinit.readthedocs.io/en/latest/topics/examples.html) official
+ docs for more information about Cloud-Init. More advanced templates are available in the
+ [Host Provisioning](../../04-host-provisioning/03-Cloud-Init.mdx) directory.
+
+ ```bash
+ VM_KEY=$(cat runner.pub)
+
+ /bin/cat << EOF > cloud-init.yaml
+ #cloud-config
+ hostname: runner
+ disable_root: false
+ network:
+ config: disabled
+ users:
+ - name: ${VM_USER}
+ groups: users, admin, sudo
+ sudo: ALL=(ALL) NOPASSWD:ALL
+ shell: /bin/bash
+ lock_passwd: false
+ passwd: ${PASSWORD}
+ ssh_authorized_keys:
+ - ${VM_KEY}
+ EOF
+ ```
+
+### Disks
+
+- Create a Cloud-Init disk
+
+ ```bash
+ cloud-localds seed.img cloud-init.yaml
+ ```
+
+- Create a virtual disk using the cloud-image as a read-only backing file.
+
+ ```bash
+ qemu-img create -b ${CLOUD_IMAGE_NAME} -f qcow2 \
+ -F qcow2 disk.qcow2 \
+ "$DISK_SIZE" 1> /dev/null
+ ```
+
+## Create the VM
+
+- New guest
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::"${VM_SSH_PORT}"-:"${HOST_SSH_PORT}" \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=disk,iothread=io \
+ -drive if=none,id=disk,cache=none,format=qcow2,aio=threads,file=disk.qcow2 \
+ -drive if=virtio,format=raw,file=seed.img,index=0,media=disk \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+- Boot existing guest
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::"${VM_SSH_PORT}"-:"${HOST_SSH_PORT}" \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=disk,iothread=io \
+ -drive if=none,id=disk,cache=none,format=qcow2,aio=threads,file=disk.qcow2 \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+## Connect to the VM
+
+- Connect over SSH
+
+ - Copy the ssh private key `runner` to the machine you wish to connect to the VM with.
+ - Connect to the VM using the format `ssh -i runner $VM_USER@$HOST_ADDRESS -p$VM_SSH_PORT`
+
+
+
+
+
+
+
+- Connect using VNC
+
+ In your VNC software use the address format `$HOST_ADDRESS:$VNC_PORT` to connect to the VM.
+
+
+
+
+
+
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/03-linux-liveiso.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/03-linux-liveiso.mdx
new file mode 100644
index 00000000..3ac3992f
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/03-linux-liveiso.mdx
@@ -0,0 +1,145 @@
+# Linux (Live-ISO)
+
+import Vnc from '/assets/images/vnc-connection.png';
+import Ssh from '/assets/images/ssh-connection.png';
+import Debian from '/assets/images/debian-grub.png';
+
+Live-ISO installers usually contain the full set of requirements for installing an operating system
+as well as extra content for optional features. These images are much heavier than cloud-images,
+generally 2-8 GB in size. Unlike cloud-images, Live-ISO installers can also be used to image
+physical machines and are well suited for long-lived virtual-private servers.
+
+## Download or create an ISO file:
+
+- For Ubuntu images, tools like [Cubic](https://github.com/PJ-Singh-001/Cubic) or
+ [PXEless](https://github.com/cloudymax/pxeless) can be used to create customized ISO installers.
+
+- When using Debian 12 as the source image, you may need to manually add a boot-entry to the
+ virtual-machine bios after installation. That process is shown in-detail here:
+ [proxmox.com/wiki/OVMF/UEFI_Boot_Entries](https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries)
+
+- Official Ubuntu and Debian images can be downloaded from:
+
+ ```yaml
+ Ubuntu:
+ - https://mirror.mijn.host/ubuntu-releases/22.04.3/ubuntu-22.04.3-live-server-amd64.iso
+ Debian12:
+ - https://cdimage.debian.org/debian-cd/current/amd64/iso-dvd/debian-12.1.0-amd64-DVD-1.iso
+ ```
+
+- Download the ISO file
+
+ ```bash
+ export IMAGE_URL="https://cdimage.debian.org/debian-cd/current/amd64/iso-dvd/debian-12.1.0-amd64-DVD-1.iso"
+ export IMAGE_NAME=$(basename -- "$IMAGE_URL")
+ wget -c -O "$IMAGE_NAME" "$IMAGE_URL" -q --show-progress
+ ```
+
+## VM Setup
+
+- Configure the Virtual Machine options
+
+ ```bash
+ # The name of the Virtual Machine
+ export VM_NAME="gameci"
+
+ # Number of physical CPU cores to allocate to the VM
+ export PHYSICAL_CORES="2"
+
+ # Number of threads per core.
+ # Set this to `1` for CPUs that do not support hyper-threading
+ export THREADS="1"
+ export SMP=$(( $PHYSICAL_CORES * $THREADS ))
+
+ # Amount of Disk Space to allocate to the VM.
+ # Cannot exceed available on host.
+ export DISK_SIZE="32G"
+
+ # Amount of RAM to allocate to the VM.
+ # Cannot exceed available RAM on host.
+ export MEMORY="8G"
+
+ # IP address where host may be reached. Do not use `localhost`.
+ export HOST_ADDRESS="SOME IP HERE"
+
+ # Port used by SSH on the host
+ export HOST_SSH_PORT="22"
+
+ # Port to use when forwarding SSH to the VM
+ export VM_SSH_PORT="1234"
+
+ # Port number to expose on the host for VNC
+ export VNC_PORT="0"
+ ```
+
+- Create an empty disk where the OS will be installed.
+
+ ```bash
+ qemu-img create -f qcow2 disk.qcow2 $DISK_SIZE &>/dev/null
+ ```
+
+## Create the VM
+
+- Create new guest:
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host,kvm="off",hv_vendor_id="null" \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -cdrom $IMAGE_NAME \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=disk,iothread=io \
+ -drive if=none,id=disk,cache=none,format=qcow2,aio=threads,file=disk.qcow2 \
+ -device intel-hda \
+ -device hda-duplex \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::"${VM_SSH_PORT}"-:"${HOST_SSH_PORT}" \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+- Boot existing guest:
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host,kvm="off",hv_vendor_id="null" \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=disk,iothread=io \
+ -drive if=none,id=disk,cache=none,format=qcow2,aio=threads,file=disk.qcow2 \
+ -device intel-hda \
+ -device hda-duplex \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::"${VM_SSH_PORT}"-:"${HOST_SSH_PORT}" \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+## Connect via VNC
+
+- In your VNC software, use the address format `$HOST_ADDRESS:$VNC_PORT` to connect to the VM.
+
+
+
+
+
+
+
+- Complete the installation
+
+  Follow the instructions for Debian/Ubuntu installation using the guides in the
+  [Bare-Metal](../02-bare-metal.mdx) section.
+
+
+
+
+
+
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/04-windows.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/04-windows.mdx
new file mode 100644
index 00000000..2c6ac1f0
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/04-windows.mdx
@@ -0,0 +1,456 @@
+import Vnc from '/assets/images/vnc-connection.png';
+import Virtio from '/assets/images/win10-virtio-drivers.png';
+import BootFromCd from '/assets/images/boot-from-cd.png';
+import BootFromCd2 from '/assets/images/boot-from-cd2.png';
+import RDP from '/assets/images/win10-rdp.png';
+import Install from '/assets/images/win10-install.png';
+import Lang from '/assets/images/win10-language.png';
+import Serial from '/assets/images/win10-serial.png';
+import Version from '/assets/images/win10-version.png';
+import Eula from '/assets/images/win10-eula.png';
+import CustomInstall from '/assets/images/win10-custom-install.png';
+import DiskSelect from '/assets/images/win10-disk-select.png';
+import DriverBrowse from '/assets/images/win10-driver-browse.png';
+import InstallDriver from '/assets/images/win10-install-driver.png';
+import VirtioDisk from '/assets/images/win10-driver-disk.png';
+import Viostor from '/assets/images/win10-viostor.png';
+import VirtioGpu from '/assets/images/win10-virtiogpu.png';
+import NetKvm from '/assets/images/win10-netkvm.png';
+import FormatDisk from '/assets/images/win10-format-disk.png';
+import Partition from '/assets/images/win10-partitions.png';
+import Installing from '/assets/images/win10-installing.png';
+import DiskmarkVirtio from '/assets/images/diskmark-virtio.png';
+import DiskmarkSata from '/assets/images/diskmark-sata.png';
+
+# Windows
+
+Windows VMs on QEMU + KVM work very well provided that you have tailored your VM to match your
+needs. There are multiple possible combinations of images, hardware types, and installation methods
+that each have their own benefits and drawbacks.
+
+## Choosing an Image
+
+Since Windows is proprietary software, downloading an ISO is not as straightforward as with Linux.
+
+- Server and Enterprise
+
+  Microsoft makes evaluation copies of its server and enterprise images available through their
+  Evaluation Center. You will need to enter your contact details to obtain the image. These
+  evaluation images are multi-edition and therefore do not easily work for automated installation.
+
+ - [Windows Server 2022](https://info.microsoft.com/ww-landing-windows-server-2022.html)
+
+- Home and Pro
+
+ You can also obtain an image for Windows 10 from Microsoft. This is another multi-edition image
+ that cannot be easily automated due to the need to manually select the version to install during
+ the setup process.
+
+ - [Windows 10 Multi-edition](https://www.microsoft.com/nl-nl/software-download/windows10ISO)
+
+- Alternative Sources for single-edition images
+
+  For those who need a single-edition ISO which can be fully automated, it is easier to use
+  third-party services such as [https://uupdump.net/](https://uupdump.net/) to obtain the ISO
+  image. There is a decent guide
+  [here](https://www.elevenforum.com/t/uup-dump-download-windows-insider-iso.344/) on how to use
+  the site.
+
+- Licenses
+
+ Be aware that all of the images above are unlicensed and it is up to you to obtain a valid
+ activation key for your installation. See
+ [this post](https://www.reddit.com/r/cheapwindowskeys/comments/wjvsae/cheap_windows_keys/) on
+ Reddit's /r/cheapwindowskeys for more information about how to acquire a license.
+
+- Rename your ISO file
+
+  Once you have downloaded your ISO image, rename it to `windows.iso` for compatibility with the
+  rest of the commands in this guide.
+
+## Choosing a disk type
+
+This guide covers two ways to create virtual disks: `Virtio` and `SATA`.
+
+- `Virtio` is more performant, but cannot be easily automated since the drivers must be manually
+ installed before Windows setup can start.
+
+
+
+
+
+
+- `SATA` drivers are supported natively by Windows and can easily be used for fully automated
+  installations. They are, however, slower than `Virtio` drives.
+
+
+
+
+
+
+ > Testing performed using Qcow2 disk format, Windows 10 Pro Guest OS, and a Samsung 990 Pro SSD.
+ > For additional performance-tuning advice, see the article
+ > [Improving the performance of a Windows Guest on KVM/QEMU](https://leduccc.medium.com/improving-the-performance-of-a-windows-10-guest-on-qemu-a5b3f54d9cf5).
+
+## Creating an Answer file
+
+You can skip the interactive steps of the Windows install process by using an answer file. These
+are `.xml` config files which store system configuration data, similar to Cloud-Init or preseed on
+Linux. Answer files can only provide full automation of the install process when using
+single-edition images. Multi-edition images will still require manual interaction to select a
+Windows version during setup.
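+
+To give a sense of the shape of these files, here is a heavily trimmed, illustrative skeleton only
+(not a working answer file; the real component elements and their attributes are omitted for
+brevity):
+
+```xml
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Illustrative skeleton only; generate a complete file with a generator. -->
+<unattend xmlns="urn:schemas-microsoft-com:unattend">
+  <settings pass="windowsPE">
+    <!-- Disk layout, language, edition, and EULA acceptance go here. -->
+  </settings>
+  <settings pass="oobeSystem">
+    <!-- User accounts and auto-logon go here. -->
+  </settings>
+</unattend>
+```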
+
+- You can use the website [Windows Answer File Generator](https://www.windowsafg.com/) to easily
+ create your own answer file.
+
+ Example of a pre-made answer file for Windows 10 with an auto-login admin account:
+
+ ```bash
+ wget -O ./config/autounattend.xml https://raw.githubusercontent.com/small-hack/smol-metal/main/autounattend.xml
+ ```
+
+- Answer files must be placed in the root of the installation medium. That's usually a USB drive,
+  but since we are using a virtual machine, we will use an ISO file instead.
+
+ ```bash
+ mkdir ./config
+ mv $YOUR_ANSWER_FILE ./config/
+ mkisofs -o config.iso -J -r config
+ ```
+
+## VM Setup
+
+- Configure the Virtual Machine options
+
+ ```bash
+ # The name of the Virtual Machine
+ export VM_NAME="gameci"
+
+ # Number of physical CPU cores to allocate to the VM
+ export PHYSICAL_CORES="2"
+
+ # Number of threads per core.
+ # Set this to `1` for CPUs that do not support hyper-threading
+ export THREADS="1"
+ export SMP=$(( $PHYSICAL_CORES * $THREADS ))
+
+ # Amount of Disk Space to allocate to the VM.
+  # Cannot exceed available disk space on the host.
+ export DISK_SIZE="32G"
+
+ # Amount of RAM to allocate to the VM.
+ # Cannot exceed available RAM on host.
+ export MEMORY="8G"
+
+ # IP address where host may be reached. Do not use `localhost`.
+ export HOST_ADDRESS="SOME IP HERE"
+
+ # Port number to expose on the host for VNC
+ export VNC_PORT="0"
+ ```
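+
+- Optionally, sanity-check the host before launching the VM. This is a hedged sketch: it recomputes
+  the vCPU count from the variables above (falling back to example values) and warns, rather than
+  fails, when the host looks too small or KVM is unavailable.
+
+  ```bash
+  # Fall back to example values when the variables above are not set.
+  PHYSICAL_CORES="${PHYSICAL_CORES:-2}"
+  THREADS="${THREADS:-1}"
+  SMP=$(( PHYSICAL_CORES * THREADS ))
+
+  # Warn when more vCPUs are requested than the host has.
+  AVAILABLE="$(nproc)"
+  if [ "$SMP" -gt "$AVAILABLE" ]; then
+    echo "warning: requested $SMP vCPUs but the host only has $AVAILABLE"
+  fi
+
+  # /dev/kvm only exists when the KVM kernel modules are loaded.
+  [ -e /dev/kvm ] || echo "warning: /dev/kvm not found; KVM acceleration unavailable"
+  echo "vCPUs requested: $SMP"
+  ```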
+
+- Download virtual-disk drivers
+
+ ```bash
+ wget -O "virtio-drivers.iso" "https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/virtio-win-0.1.240-1/virtio-win-0.1.240.iso"
+ ```
+
+- Create the virtual disk
+
+ ```bash
+ qemu-img create -f qcow2 disk.qcow2 $DISK_SIZE
+ ```
+
+## Creating the VM
+
+### Automated install With SATA Drive
+
+- Create new guest:
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host,kvm="off",hv_vendor_id="null" \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -drive id=disk0,if=none,cache=none,format=qcow2,file=disk.qcow2 \
+ -device ahci,id=ahci -device ide-hd,drive=disk0,bus=ahci.0 \
+ -drive file=windows.iso,index=0,media=cdrom \
+ -drive file=virtio-drivers.iso,index=2,media=cdrom \
+ -drive file=config.iso,index=1,media=cdrom \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::3389-:3389 \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+- Boot existing guest:
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host,kvm="off",hv_vendor_id="null" \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -drive id=disk0,if=none,cache=none,format=qcow2,file=disk.qcow2 \
+ -device ahci,id=ahci -device ide-hd,drive=disk0,bus=ahci.0 \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::3389-:3389 \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+### Manual Install With Virtio Drive
+
+- Create new guest:
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host,kvm="off",hv_vendor_id="null" \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=disk,iothread=io \
+ -drive if=none,id=disk,cache=none,format=qcow2,aio=threads,file=disk.qcow2 \
+ -drive file=windows.iso,index=1,media=cdrom \
+ -drive file=virtio-drivers.iso,index=2,media=cdrom \
+ -boot menu=on \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::3389-:3389 \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+- Boot existing guest:
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu host,kvm="off",hv_vendor_id="null" \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=disk,iothread=io \
+ -drive if=none,id=disk,cache=none,format=qcow2,aio=threads,file=disk.qcow2 \
+ -drive file=virtio-drivers.iso,index=2,media=cdrom \
+ -boot menu=on \
+ -bios /usr/share/ovmf/OVMF.fd \
+ -usbdevice tablet \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -netdev user,id=network,hostfwd=tcp::3389-:3389 \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+### Boot from CD
+
+- Multi-edition images will prompt the user to 'Press any key to boot CD or DVD' as shown below.
+  When you see this prompt, press Enter.
+
+
+
+
+
+
+
+- Other ISO images may not present the 'Press Enter' prompt on the serial console; instead, you
+  will need to press Enter after you see the following:
+
+
+
+
+
+
+
+- Connect to the VM using VNC
+
+ In your VNC software use the address format `$HOST_ADDRESS:$VNC_PORT` to connect to the VM.
+
+
+
+
+
+- Product Key
+
+ Enter your key or select "I don't have a product key" to skip.
+
+
+
+
+
+
+
+- Select Version
+
+  Choose a Windows version to install; using a 'Pro' version is advised.
+
+
+
+
+
+
+
+- Accept the EULA
+
+
+
+
+
+
+
+- Select 'Custom Install'
+
+
+
+
+
+
+
+### Install Drivers (Optional)
+
+- You should now see the disk selection screen without any available disk. This is because Windows
+ cannot use the virtio disks without adding drivers during the installation process.
+
+
+
+
+
+
+
+- To install the required drivers, select the 'Load Driver' button, then select 'Browse' to open a
+ file-explorer from which to choose the install media.
+
+
+
+
+
+
+
+- The drivers are located on disk drive 'E:', which should be named 'virtio-win-[version]'
+
+
+
+
+
+
+
+#### Viostor (Required)
+
+- Find and select the 'Viostor' directory, then choose the directory that corresponds to your
+ operating system version. Choose the 'amd64' directory inside and click 'OK'.
+
+
+
+
+
+
+
+ Click 'Next' on the following screen to install the driver.
+
+
+
+
+
+
+
+#### NetKvm (Required)
+
+- Repeat the same process from above to find and install the NetKVM driver
+
+
+
+
+
+
+
+#### Viogpudo (Optional)
+
+- This driver will add more screen resolution options to the VNC display. The installation process
+ is the same as the previous drivers.
+
+
+
+
+
+
+
+### Format and Partition Disk
+
+- With the drivers installed, we are now able to format and partition our virtual disk. Select
+  'Drive 0', then click 'New'.
+
+
+
+
+
+
+
+- Select the largest of the newly-created partitions, then click 'Next'.
+
+
+
+
+
+
+
+- The install process should now begin. From this point you may simply follow the directions
+  on-screen. When the system reboots, allow the 'Press any key to boot from CD' prompt to time out.
+ This avoids restarting the setup process.
+
+
+
+
+
+
+
+## RDP
+
+- Enable RDP (Optional)
+
+  If you would like a more performant remote desktop, consider using RDP instead of VNC. The
+  required port for RDP (`3389`) is already forwarded in the commands above; the guide below will
+  walk you through enabling RDP.
+
+ - https://www.helpwire.app/blog/how-to-allow-rdp-windows-10/
+
+ You will also need an RDP client to access the VM. See the official Microsoft recommended clients
+ list here:
+ https://learn.microsoft.com/en-us/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients
+
+- Connect to the VM using RDP (Optional)
+
+ To connect to the VM with your RDP client, use the host's IP address as the 'PC Name'.
+
+
+
+
+
+
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/05-macos.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/05-macos.mdx
new file mode 100644
index 00000000..86e4db0b
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/05-macos.mdx
@@ -0,0 +1,301 @@
+import Vnc from '/assets/images/vnc-connection.png';
+import Ssh from '/assets/images/ssh-connection.png';
+import InstallLocation from '/assets/images/macos-choose-install-location.png';
+import FormatDisk from '/assets/images/macos-format-disk.png';
+import QuitDisk from '/assets/images/macos-quit-disk-utility.png';
+import Reinstall from '/assets/images/macos-reinstall.png';
+import CreateAccount from '/assets/images/macos-create-account.png';
+import BootMedia from '/assets/images/macos-select-boot-media.png';
+import BootMedia2 from '/assets/images/macos-select-boot-media2.png';
+import DiskUtility from '/assets/images/macos-select-disk-utility.png';
+import InstallMedia from '/assets/images/macos-select-install-media.png';
+import MacosSsh from '/assets/images/macos-ssh.png';
+import RemoteLogin from '/assets/images/macos-remote-login.png';
+import Hackintosh from '/assets/images/macos-hackintosh.png';
+
+# MacOS
+
+For MacOS guests, we will utilize the open-source project
+[OSX-KVM](https://github.com/kholia/OSX-KVM), which will help you create a **Virtual Hackintosh**.
+Such a system can be used for a variety of purposes (e.g. software builds, testing, reversing
+work). However, such a system lacks graphical acceleration, a reliable sound sub-system, USB 3
+functionality, and other similar things. To enable these, take a look at the project's
+[notes](https://github.com/kholia/OSX-KVM/blob/master/notes.md). Older AMD CPUs are known to be
+problematic, but modern AMD Ryzen processors work just fine.
+
+## A note on the legality of Hackintosh systems
+
+From
+[Legality of Hackintoshing](https://dortania.github.io/OpenCore-Install-Guide/why-oc.html#legality-of-hackintoshing)
+by OpenCore:
+
+Hackintoshing sits in a legal grey area: while it does break the EULA, it is not expressly illegal
+as long as you abide by the following conditions:
+
+- You are downloading MacOS from Apple's servers directly
+- You are a non-profit organization, or using your hackintosh for educational and personal purposes.
+
+Users who plan to use their Hackintosh for professional or commercial purposes should refer to the
+[Psystar case](https://en.wikipedia.org/wiki/Psystar_Corporation) as well as their regional laws.
+This is not legal advice; consult a licensed attorney if you have further questions.
+
+## VM Setup
+
+- Clone the repo and cd into the new directory
+
+ ```bash
+ git clone --depth 1 --recursive https://github.com/kholia/OSX-KVM.git
+ cd OSX-KVM
+ ```
+
+- Configure the Virtual Machine options
+
+ ```bash
+ # The name of the Virtual Machine
+ export VM_NAME="gameci"
+
+ # Number of physical CPU cores to allocate to the VM
+ export PHYSICAL_CORES="2"
+
+ # Number of threads per core.
+ # Set this to `1` for CPUs that do not support hyper-threading
+ export THREADS="1"
+ export SMP=$(( $PHYSICAL_CORES * $THREADS ))
+
+  # MacOS uses much more disk space than Linux or Windows due to Xcode tools, etc.
+ # A minimum of 64G is advised.
+ export DISK_SIZE="64G"
+
+ # Amount of RAM to allocate to the VM.
+ # Cannot exceed available RAM on host.
+ export MEMORY="8G"
+
+ # IP address where host may be reached. Do not use `localhost`.
+ export HOST_ADDRESS="SOME IP HERE"
+
+ # Port used by SSH on the host
+ export HOST_SSH_PORT="22"
+
+ # Port to use when forwarding SSH to the VM
+ export VM_SSH_PORT="1234"
+
+ # Port number to expose on the host for VNC
+ export VNC_PORT="0"
+ ```
+
+## Download an Installer
+
+- Choose and download an installer using the included script
+
+ ```bash
+ ./fetch-macOS-v2.py
+ # 1. High Sierra (10.13)
+ # 2. Mojave (10.14)
+ # 3. Catalina (10.15)
+ # 4. Big Sur (11.7)
+ # 5. Monterey (12.6)
+ # 6. Ventura (13) - RECOMMENDED
+ # 7. Sonoma (14)
+  # Choose a product to download (1-7): 6
+ ```
+
+- Convert the downloaded BaseSystem.dmg file into the BaseSystem.img file.
+
+ ```bash
+ sudo apt-get install -y dmg2img && \
+ dmg2img -i BaseSystem.dmg BaseSystem.img
+ ```
+
+- Create a virtual disk image where MacOS will be installed.
+
+ ```bash
+ qemu-img create -f qcow2 mac_hdd_ng.img $DISK_SIZE &>/dev/null
+ ```
+
+## Create the VM
+
+- Create new guest
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -device usb-ehci,id=ehci \
+ -device nec-usb-xhci,id=xhci \
+ -global nec-usb-xhci.msi=off \
+ -device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" \
+ -drive if=pflash,format=raw,readonly=on,file="OVMF_CODE.fd" \
+ -drive if=pflash,format=raw,file="OVMF_VARS-1024x768.fd" \
+ -smbios type=2 \
+ -device ich9-intel-hda -device hda-duplex \
+ -device ich9-ahci,id=sata \
+ -drive id=OpenCoreBoot,if=none,snapshot=on,format=qcow2,file="OpenCore/OpenCore.qcow2" \
+ -device ide-hd,bus=sata.2,drive="OpenCoreBoot" \
+ -device ide-hd,bus=sata.3,drive="InstallMedia" \
+ -drive id=InstallMedia,if=none,file="BaseSystem.img",format=raw \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=MacHDD,iothread=io \
+ -drive id=MacHDD,if=none,cache=none,format=qcow2,aio=threads,file="mac_hdd_ng.img" \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -usbdevice tablet \
+ -device usb-kbd,bus=ehci.0 \
+ -netdev user,id=network,hostfwd=tcp::"${VM_SSH_PORT}"-:"${HOST_SSH_PORT}" \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+- Boot an existing guest
+
+ ```bash
+ sudo qemu-system-x86_64 \
+ -machine accel=kvm,type=q35 \
+ -cpu Penryn,kvm=on,vendor=GenuineIntel,+invtsc,vmware-cpuid-freq=on,+ssse3,+sse4.2,+popcnt,+avx,+aes,+xsave,+xsaveopt,check \
+ -smp $SMP,sockets=1,cores="$PHYSICAL_CORES",threads="$THREADS",maxcpus=$SMP \
+ -m "$MEMORY" \
+ -device usb-ehci,id=ehci \
+ -device nec-usb-xhci,id=xhci \
+ -global nec-usb-xhci.msi=off \
+ -device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" \
+ -drive if=pflash,format=raw,readonly=on,file="OVMF_CODE.fd" \
+ -drive if=pflash,format=raw,file="OVMF_VARS-1024x768.fd" \
+ -smbios type=2 \
+ -device ich9-intel-hda -device hda-duplex \
+ -device ich9-ahci,id=sata \
+ -drive id=OpenCoreBoot,if=none,snapshot=on,format=qcow2,file="OpenCore/OpenCore.qcow2" \
+ -device ide-hd,bus=sata.2,drive="OpenCoreBoot" \
+ -object iothread,id=io \
+ -device virtio-blk-pci,drive=MacHDD,iothread=io \
+ -drive id=MacHDD,if=none,cache=none,format=qcow2,aio=threads,file="mac_hdd_ng.img" \
+ -serial stdio -vga virtio -parallel none \
+ -device virtio-net-pci,netdev=network \
+ -usbdevice tablet \
+ -device usb-kbd,bus=ehci.0 \
+ -netdev user,id=network,hostfwd=tcp::"${VM_SSH_PORT}"-:"${HOST_SSH_PORT}" \
+ -vnc "$HOST_ADDRESS":"$VNC_PORT"
+ ```
+
+## Connect to the VM using VNC
+
+- In your VNC software use the address format `$HOST_ADDRESS:$VNC_PORT` to connect to the VM.
+
+
+
+
+
+
+
+## Format Virtual Drive
+
+- Select install media
+
+ Choose the 'macOS Base System' option using arrow-keys to select and enter to confirm.
+
+
+
+
+
+
+
+- Enter the disk utility
+
+
+
+
+
+
+
+- Format and rename the empty storage volumes
+
+ The disk should show up as `Apple Inc. VirtIO Block Media` in your Disk Utility window. Select the
+ `Erase` tool to reformat the volumes and change the name to `Macintosh HD`.
+
+
+
+
+
+
+
+- Quit the disk utility
+
+
+
+
+
+
+
+## Install MacOS
+
+- Choose the `Reinstall MacOS` option from the main menu
+
+
+
+
+
+
+
+- Choose your formatted volumes as the install location
+
+
+
+
+
+
+
+- Monitor the install process
+
+ The VM will reboot several times during the installation process. After the first reboot, you will
+ have a new option named 'MacOS Installer'. Select this option as your boot device.
+
+
+
+
+
+
+
+ When the first stage of the installation has completed, the 'MacOS Installer' option will change
+ to 'Macintosh HD' (or whatever you named your drive). Continue to select this option as the boot
+ device for all subsequent reboots.
+
+
+
+
+
+
+
+- Boot into MacOS and complete the account creation process
+
+ Finally, your VM should boot into the MacOS user setup screen and allow you to create your
+ account.
+
+
+
+
+
+
+
+## Enable Remote Login
+
+> Enabling remote access to your machine, especially on non-private networks is a security risk.
+> Securing MacOS is beyond the scope of this guide, but as a starting point users are advised to
+> disable password authentication for remote login as explained in the article
+> [Secure Your macOS Remote SSH Access by Disabling Password Login](https://medium.com/@stringmeteor/secure-your-macos-remote-ssh-access-by-disabling-password-access-68a92dd732d0).
+
+- Enable 'Remote Login' from the 'Sharing' menu in System Settings
+
+
+
+
+
+
+
+- Connect to the VM over SSH
+
+  Use the format `ssh $USERNAME@$HOST_ADDRESS -p $VM_SSH_PORT`, substituting the account name you
+  created during setup for `$USERNAME`.
+
+
+
+
+
+
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/06-configuration.mdx b/docs/12-self-hosting/03-host-creation/03-QEMU/06-configuration.mdx
new file mode 100644
index 00000000..f902a3a3
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/06-configuration.mdx
@@ -0,0 +1,416 @@
+# Advanced Configuration Options
+
+The following are advanced configuration options for QEMU Virtual Machines that users may find
+useful but carry significant risks for system corruption and data-loss. Many of the processes below
+are hardware-level options which will differ greatly across hardware types and vendors. Users are
+advised to proceed with caution and the understanding that support is provided on a 'best-effort'
+basis only.
+
+## Port forwarding
+
+By default, the provided QEMU commands create VMs inside a private virtual network using
+[SLIRP networking](https://wiki.qemu.org/Documentation/Networking). In this configuration, network
+traffic must be forwarded to the VM by opening ports on the host.
+
+Exposing your machine to public networks is a security risk. Users should take appropriate measures
+to secure any public-facing services.
+
+- Example where RDP is being forwarded through the host to the VM:
+
+ ```bash
+ -netdev user,id=network,hostfwd=tcp::3389-:3389
+ ```
+
+- If you need to forward more ports, you may do so by adding another `hostfwd` argument to the
+ `-netdev` entry. This may become inconvenient when many ports need to be opened.
+
+ ```bash
+ -netdev user,id=network,hostfwd=tcp::3389-:3389,hostfwd=tcp::22-:22,hostfwd=tcp::80-:80
+ ```
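+
+- If the list of ports grows, a small loop can assemble the `hostfwd` string instead of writing it
+  by hand. This is a hedged convenience sketch; the port list here is only an example.
+
+  ```bash
+  # Ports to forward 1:1 from host to guest (example values).
+  PORTS="22 80 3389"
+
+  # Build up the comma-separated hostfwd arguments.
+  FWD=""
+  for p in $PORTS; do
+    FWD="${FWD},hostfwd=tcp::${p}-:${p}"
+  done
+
+  # The assembled -netdev option, ready to paste into the QEMU command.
+  echo "-netdev user,id=network${FWD}"
+  ```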
+
+## Bridged Networks
+
+It is possible to create virtual machines on the host network, however it requires additional
+configuration of the host. This is accomplished by:
+
+1. Converting the Host's network device into a bridge.
+2. Creating a tap device controlled by the bridge.
+3. Allowing IP traffic forwarding over the bridge.
+4. Assigning the tap device to the virtual machine.
+
+Be aware that:
+
+- This process is NOT recommended for inexperienced users, as performing the steps incorrectly will
+  result in a complete loss of network connectivity to the host machine, which may require a
+  reboot, manual intervention, or even re-imaging the machine to fix.
+- Wireless network adapters CANNOT be used as bridge devices.
+- Many VPS and cloud providers such as AWS and Equinix configure their networking in such a manner
+  that this kind of bridged networking is impossible.
+
+Proceed at your own risk.
+
+### Required Info
+
+- Network adapter name
+
+ You will need to find the name of your primary network adapter. This is hard to automate because
+ the name will change based on vendor, type of network adapter, and number of PCIe devices attached
+ to the host. Use the command `ip a` to show the network devices for your machine:
+
+ ```bash
+  1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
+ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
+ inet 127.0.0.1/8 scope host lo
+ valid_lft forever preferred_lft forever
+ inet6 ::1/128 scope host
+ valid_lft forever preferred_lft forever
+  2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
+ link/ether 90:1b:0e:f3:86:e0 brd ff:ff:ff:ff:ff:ff
+ inet 192.168.50.101/32 scope global enp0s31f6
+ valid_lft forever preferred_lft forever
+ inet6 2a01:4f8:a0:3383::2/64 scope global
+ valid_lft forever preferred_lft forever
+ inet6 fe80::921b:eff:fef3:86e0/64 scope link
+ valid_lft forever preferred_lft forever
+ ```
+
+ In this example, the primary network adapter is `enp0s31f6`.
+
+- Network adapter's IP address
+
+ The IP address for the network adapter should be listed in the output of `ip a`. In the example
+ above it is `192.168.50.101/32`.
+
+- The default gateway
+
+  Use the command `route -n` to display the routing table. The default gateway should be the first
+  entry under the 'Gateway' column.
+
+ ```bash
+ Kernel IP routing table
+ Destination Gateway Genmask Flags Metric Ref Use Iface
+ 0.0.0.0 192.168.50.1 0.0.0.0 UG 0 0 0 enp0s31f6
+ 10.42.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
+ 10.42.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel-wg
+ 172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
+ ```
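+
+- Extracting the values programmatically
+
+  The same two values can be pulled out with `awk`. The sketch below parses a captured copy of the
+  default-route line from the example above, so the extraction logic can be seen in isolation; on a
+  real host you would pipe `route -n` (or `ip route`) into the same `awk` programs.
+
+  ```bash
+  # The default-route line from the example routing table above, as text.
+  route_line='0.0.0.0         192.168.50.1    0.0.0.0         UG    0      0        0 enp0s31f6'
+
+  # Field 2 is the gateway, field 8 the interface, on the 0.0.0.0 route.
+  DEFAULT_GATEWAY=$(echo "$route_line" | awk '$1 == "0.0.0.0" { print $2 }')
+  NETWORK_DEVICE=$(echo "$route_line" | awk '$1 == "0.0.0.0" { print $8 }')
+  echo "gateway=$DEFAULT_GATEWAY device=$NETWORK_DEVICE"
+  ```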
+
+### Run the Script
+
+Unless you have physical keyboard access to the host machine, you must run these steps as a single
+script; otherwise you will lose your connection when the network restarts partway through. The
+following commands must be run from the `root` user account.
+
+- Create the script
+
+ ```bash
+ export NETWORK_DEVICE=""
+ export IP_ADDRESS=""
+ export DEFAULT_GATEWAY=""
+ export TAP_NUMBER="0"
+
+ /bin/cat << EOF > bridge.sh
+ #!/bin/bash -
+
+ # Treat unset variables as an error
+ set -o nounset
+
+ # create a bridge
+ sudo ip link add br0 type bridge
+ sudo ip link set br0 up
+ sudo ip link set ${NETWORK_DEVICE} up
+
+ ##########################################################
+ # network will drop here unless next steps are automated #
+ ##########################################################
+
+ # add the real ethernet interface to the bridge
+ sudo ip link set ${NETWORK_DEVICE} master br0
+
+ # remove all ip assignments from real interface
+ sudo ip addr flush dev ${NETWORK_DEVICE}
+
+ # give the bridge the real interface's old IP
+ sudo ip addr add ${IP_ADDRESS}/24 brd + dev br0
+
+ # add the default GW
+ sudo ip route add default via ${DEFAULT_GATEWAY} dev br0
+
+ # add a tap device for the user
+ sudo ip tuntap add dev tap${TAP_NUMBER} mode tap user root
+ sudo ip link set dev tap${TAP_NUMBER} up
+
+  # attach the tap device to the bridge.
+ sudo ip link set tap${TAP_NUMBER} master br0
+
+ # Enable forwarding
+ iptables -F FORWARD
+ iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
+ sysctl -w net.ipv4.ip_forward=1
+
+ ##########################
+ # troubleshooting tips #
+ ##########################
+ #
+ # Show bridge status
+ # brctl show
+ #
+ # Show verbose of single item
+ # an active process must be attached to a device for it to not be "disabled"
+ # brctl showstp br0
+
+ EOF
+ ```
+
+- Run the script
+
+ ```bash
+ sudo bash ./bridge.sh
+ ```
+
+- Verify the new bridge using `brctl show` and `ip a`
+
+ ```bash
+ brctl show
+ bridge name bridge id STP enabled interfaces
+ br0 8000.e6f9f3d4d16b yes enp0s31f6
+ tap0
+ docker0 8000.02422e6c0493 no
+ ```
+
+### Adjust QEMU command
+
+Since our VM will be making DHCP requests, we should assign it a MAC address.
+
+- Use the following command to generate a MAC address, or make up your own:
+
+ ```bash
+ openssl rand -hex 6 | sed 's/\(..\)/\1:/g; s/:$//'
+ ```
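+
+- A fully random MAC can collide with a vendor-assigned address. One common convention, sketched
+  below, is to fix the first octet to `52` so the address falls in the locally administered,
+  unicast range:
+
+  ```bash
+  # 0x52 = locally administered + unicast; the remaining five octets are random.
+  MAC_ADDRESS="52:$(openssl rand -hex 5 | sed 's/\(..\)/\1:/g; s/:$//')"
+  echo "$MAC_ADDRESS"
+  ```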
+
+- Populate the values below with your own, then use them to replace the existing `-device` and
+ `-netdev` options in the QEMU command.
+
+ ```bash
+ -device virtio-net-pci,netdev=network,mac=$MAC_ADDRESS \
+ -netdev tap,id=network,ifname=tap$TAP_NUMBER,script=no,downscript=no \
+ ```
+
+## Static IP Assignment
+
+When using a bridged network it is also possible to assign an IP address to your VM prior to
+creation. To do this, add a `netplan` configuration file to your Cloud-Init file, as well as a
+command to apply the new configuration.
+
+You will need to know the VM's network adapter name ahead of time, which may take some trial and
+error. It is generally `enp0s2` unless extra PCI devices are attached to the VM, in which case the
+final character may be a higher or lower number. Additionally, you will need to know the IP
+addresses of the DNS name-server and the default gateway.
+
+- Example:
+
+ ```yaml
+ #cloud-config
+ hostname: ${VM_NAME}
+ fqdn: ${VM_NAME}
+ disable_root: false
+ network:
+ config: disabled
+ users:
+ - name: ${USERNAME}
+ groups: users, admin, docker, sudo
+ sudo: ALL=(ALL) NOPASSWD:ALL
+ shell: /bin/bash
+ lock_passwd: false
+ passwd: ${PASSWORD}
+ write_files:
+ - path: /etc/netplan/99-my-new-config.yaml
+ permissions: '0644'
+ content: |
+ network:
+ ethernets:
+ ${INTERFACE}:
+ dhcp4: no
+ dhcp6: no
+ addresses: [${DESIRED_IP_ADDRESS}/24]
+ routes:
+ - to: default
+ via: ${DEFAULT_GATEWAY}
+ mtu: 1500
+ nameservers:
+ addresses: [${DNS_SERVER_IP}]
+ renderer: networkd
+ version: 2
+ runcmd:
+ ##############################################
+ # Apply the new config and remove the old one
+ - /usr/sbin/netplan --debug generate
+ - /usr/sbin/netplan --debug apply
+ - rm /etc/netplan/50*
+ ##############################################
+ # Give the network adapter time to come online
+ - sleep 5
+ ```
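+
+- The `${...}` placeholders must be filled in before the file is handed to Cloud-Init. A hedged
+  sketch using `sed` on a stand-in two-line template (the values are examples only; substitute your
+  own network details):
+
+  ```bash
+  # Stand-in template holding two of the placeholders from the example above.
+  cat > user-data.tpl <<'EOF'
+        ${INTERFACE}:
+          addresses: [${DESIRED_IP_ADDRESS}/24]
+  EOF
+
+  # Hypothetical values; adjust to your network.
+  INTERFACE="enp0s2"
+  DESIRED_IP_ADDRESS="192.168.50.150"
+
+  # Substitute each placeholder literally and write the rendered file.
+  sed -e "s|\${INTERFACE}|${INTERFACE}|g" \
+      -e "s|\${DESIRED_IP_ADDRESS}|${DESIRED_IP_ADDRESS}|g" \
+      user-data.tpl > user-data
+  cat user-data
+  ```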
+
+## GPU Passthrough
+
+### Enabling IOMMU
+
+With QEMU, we are able to pass PCIe devices such as GPUs from the host to the guest using VFIO which
+can provide native-level performance. You will need to make sure that
+`Intel Virtualization Technology` and `Intel VT-d`, or `IOMMU` are enabled in your BIOS.
+Informatiweb.net has some good examples of this process in
+[this thread](https://us.informatiweb.net/tutorials/it/bios/enable-iommu-or-vt-d-in-your-bios.html).
+
+- Enable IOMMU
+
+ Enable IOMMU by changing the `GRUB_CMDLINE_LINUX_DEFAULT` line in your `/etc/default/grub` file to
+ the following:
+
+ ```bash
+ GRUB_CMDLINE_LINUX_DEFAULT="iommu=pt amd_iommu=on intel_iommu=on"
+ ```
+
+- Run `sudo update-grub`
+
+- Reboot the host
+
+### Verify IOMMU Compatibility
+
+Now that IOMMU is enabled we should be able to find devices in the `/sys/kernel/iommu_groups`
+directory. Use the following command to check whether IOMMU is working. If everything works, the
+output will be a number greater than 0. If the output is 0, your system either does not support
+IOMMU or it has not been enabled.
+
+```bash
+sudo find /sys/kernel/iommu_groups/ -type l | grep -c "/sys/kernel/iommu_groups/"
+```
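+
+To see which devices landed in which group, you can walk the same directory. This sketch degrades
+gracefully: on systems where IOMMU is off, the loop body never runs and only the summary line is
+printed.
+
+```bash
+GROUPS_FOUND=0
+for g in /sys/kernel/iommu_groups/*/; do
+  [ -d "$g" ] || continue          # glob unmatched: IOMMU disabled or unsupported
+  GROUPS_FOUND=$((GROUPS_FOUND + 1))
+  printf 'group %s: ' "$(basename "$g")"
+  ls "${g}devices"
+done
+echo "groups found: $GROUPS_FOUND"
+```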
+
+### Assign the `vfio-pci` driver
+
+To assign the correct driver to your GPU, we will need to install the `driverctl` utility.
+Afterwards, we will locate the PCI bus ID for our GPU, then use that information with `driverctl`
+to assign the `vfio-pci` driver.
+
+- Install `driverctl`
+
+ ```bash
+ sudo apt-get install -y driverctl
+ ```
+
+- Get the GPU PCI bus-ID
+
+ The ID you are looking for will be the first value from the left.
+
+ ```bash
+ export GPU_BRAND="nvidia"
+  lspci | grep -ai ${GPU_BRAND} | grep VGA
+ ```
+
+ Output:
+
+ ```bash
+  01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)
+ ```
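+
+  The `driverctl` commands further down expect the fully-qualified form with the `0000:` PCI domain
+  prefix. A sketch deriving it from a captured `lspci` line (the line below is a hypothetical
+  example; in practice you would use your real output):
+
+  ```bash
+  # Example lspci line captured as text.
+  lspci_line='01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1080] (rev a1)'
+
+  # The bus ID is the first whitespace-delimited field; prepend the PCI domain.
+  BUS_ID="0000:${lspci_line%% *}"
+  echo "$BUS_ID"
+  ```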
+
+- Assign the driver
+
+ Show available device by running the following command:
+
+ ```bash
+ sudo driverctl list-devices
+ ```
+
+ Output:
+
+ ```bash
+ 0000:00:00.0 skl_uncore
+ 0000:00:01.0 pcieport
+ 0000:00:14.0 xhci_hcd
+ 0000:00:14.2 intel_pch_thermal
+ 0000:00:16.0 (none)
+ 0000:00:17.0 ahci
+ 0000:00:1f.0 (none)
+ 0000:00:1f.2 (none)
+ 0000:00:1f.4 i801_smbus
+ 0000:00:1f.6 e1000e
+ 0000:01:00.0 nvidia
+ 0000:01:00.1 snd_hda_intel
+ ```
+
+  In the example above, we can see that the GPU's bus ID `01:00.0` is present in the list and that
+  the driver currently assigned is `nvidia`. We can also see a second function at the same address,
+  `01:00.1`, which is the GPU's audio device using the `snd_hda_intel` driver. In order for
+  passthrough to work properly, all functions at the same address must use the `vfio-pci` driver.
+
+ To change the drivers we will run the following:
+
+ ```bash
+ sudo driverctl set-override 0000:01:00.0 vfio-pci
+ sudo driverctl set-override 0000:01:00.1 vfio-pci
+ ```
+
+ Run `sudo driverctl list-devices` again and you should now see the vfio-pci driver in use.
+
+ Output:
+
+ ```bash
+ 0000:00:00.0 skl_uncore
+ 0000:00:01.0 pcieport
+ 0000:00:14.0 xhci_hcd
+ 0000:00:14.2 intel_pch_thermal
+ 0000:00:16.0 (none)
+ 0000:00:17.0 ahci
+ 0000:00:1f.0 (none)
+ 0000:00:1f.2 (none)
+ 0000:00:1f.4 i801_smbus
+ 0000:00:1f.6 e1000e
+ 0000:01:00.0 vfio-pci [*]
+ 0000:01:00.1 vfio-pci [*]
+ ```
+
+### Adjust the QEMU command
+
+You are now ready to pass the GPU to your VM by adding a new `-device` option to the QEMU command.
+
+> If you plan to use your VM for full-screen applications then you may need to disable the virtual
+> display so that apps will properly choose the display attached to the GPU instead of the RedHat
+> virtual display. Do this by changing `-vga virtio` to `-vga none`. You will need to use x11vnc,
+> xrdp, Sunshine, etc. to create a new remote display, as the default QEMU VNC server will now
+> display the QEMU console instead of a desktop.
+
+- Consumer GPUs
+
+ ```bash
+ export BUS_ID="1:00.0"
+ -device vfio-pci,host=${BUS_ID},multifunction=on,x-vga=on \
+ ```
+
+- Data Center GPUs with large amounts of VRAM
+
+ ```bash
+ export BUS_ID="1:00.0"
+ -device vfio-pci,host=${BUS_ID},multifunction=on -fw_cfg name=opt/ovmf/X-PciMmio64Mb,string=65536 \
+ ```
+
+- Nvidia vGPU devices
+
+ ```bash
+ export VGPU="/sys/bus/mdev/devices/$VGPU_UUID"
+ -device vfio-pci,sysfsdev=${VGPU} -uuid $(uuidgen)
+ ```
+
+### Additional Reading
+
+- [GPU Passthrough on a Dell Precision 7540 and other high end laptops](https://leduccc.medium.com/simple-dgpu-passthrough-on-a-dell-precision-7450-ebe65b2e648e) -
+ leduccc
+
+- [Comprehensive guide to performance optimizations for gaming on virtual machines with KVM/QEMU and PCI passthrough](https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/) -
+ Mathias Hüber
+
+- [gpu-virtualization-with-kvm-qemu](https://medium.com/@calerogers/gpu-virtualization-with-kvm-qemu-63ca98a6a172) -
+ Cale Rogers
+
+- [Faster Virtual Machines on Linux Hosts with GPU Acceleration](https://adamgradzki.com/2020/04/06/faster-virtual-machines-linux/) -
+ Adam Gradzki
+
+- [PCI VFIO options](https://www.kernel.org/doc/html/latest/driver-api/vfio-pci-device-specific-driver-acceptance.html?highlight=vfio%20pci)
diff --git a/docs/12-self-hosting/03-host-creation/03-QEMU/_category_.yaml b/docs/12-self-hosting/03-host-creation/03-QEMU/_category_.yaml
new file mode 100644
index 00000000..6ba8af3f
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/03-QEMU/_category_.yaml
@@ -0,0 +1,4 @@
+---
+label: VMs with QEMU (Advanced)
+collapsible: true
+collapsed: true
diff --git a/docs/12-self-hosting/03-host-creation/_category_.yaml b/docs/12-self-hosting/03-host-creation/_category_.yaml
new file mode 100644
index 00000000..b4dee2cf
--- /dev/null
+++ b/docs/12-self-hosting/03-host-creation/_category_.yaml
@@ -0,0 +1,4 @@
+---
+label: Host Creation
+collapsible: true
+collapsed: true
diff --git a/docs/12-self-hosting/04-host-provisioning/01-debian-setup.mdx b/docs/12-self-hosting/04-host-provisioning/01-debian-setup.mdx
new file mode 100644
index 00000000..149fab97
--- /dev/null
+++ b/docs/12-self-hosting/04-host-provisioning/01-debian-setup.mdx
@@ -0,0 +1,318 @@
+---
+toc_max_heading_level: 4
+---
+
+# Debian Machine Setup
+
+Steps for manual configuration and provisioning of Debian 12 server systems. These steps will also
+upgrade a Debian 11 system to Debian 12. This guide assumes and recommends that the user is
+starting from a fresh installation. If you are unfamiliar with the installation process for Debian,
+see the links below before progressing.
+
+- [How to Install Debian](https://www.linuxtechi.com/how-to-install-debian-11-bullseye/)
+
+- [Debian 12 ISO Image](https://cdimage.debian.org/cdimage/weekly-builds/amd64/iso-dvd/debian-testing-amd64-DVD-1.iso)
+
+## Base Packages
+
+- Log in to your host as the root user
+
+- Add the required apt sources for Debian 12 (Bookworm).
+
+ ```bash
+ cat << EOF > /etc/apt/sources.list
+ deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
+ deb-src http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
+
+ deb http://deb.debian.org/debian-security/ bookworm-security main contrib non-free
+ deb-src http://deb.debian.org/debian-security/ bookworm-security main contrib non-free
+
+ deb http://deb.debian.org/debian bookworm-updates main contrib non-free
+ deb-src http://deb.debian.org/debian bookworm-updates main contrib non-free
+ EOF
+ ```
+
+- Apply system updates and upgrades
+
+ Update the package list and upgrade system components prior to installing other software. Reboot
+ after the process completes.
+
+ ```bash
+ # run as root
+ apt-get update && \
+ apt-get upgrade -y && \
+ apt-get full-upgrade -y
+
+ reboot
+ ```
+
+- Install base utilities
+
+ The following is a curated set of base packages that are dependencies for steps later in the guide
+ and helpful in general.
+
+ ```bash
+ # Run as root
+ apt-get update && \
+ apt-get install -y wireguard \
+ ssh-import-id \
+ sudo \
+ curl \
+ tmux \
+ netplan.io \
+ apt-transport-https \
+ ca-certificates \
+ software-properties-common \
+ htop \
+ git-extras \
+ rsyslog \
+ fail2ban \
+ vim \
+ gpg \
+ open-iscsi \
+ nfs-common \
+ ncdu \
+ zip \
+ unzip \
+ iotop && \
+ sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && \
+ sudo chmod +x /usr/bin/yq && \
+ sudo systemctl enable fail2ban && \
+ sudo systemctl start fail2ban
+ ```
+
+## Set up the admin user
+
+- Create the user
+
+ ```bash
+ export NEW_USER=""
+ useradd -s /bin/bash -d /home/$NEW_USER/ -m -G sudo $NEW_USER
+ ```
+
+- Grant passwordless sudo permission
+
+ ```bash
+ echo "$NEW_USER ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+ ```
+
+- Import an ssh key
+
+  If you have a GitHub, GitLab, or Launchpad account, you can use `ssh-import-id` to install your
+  SSH public key onto your host.
+
+ - [Adding a new SSH key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account?platform=linux)
+  - [Use SSH keys to communicate with GitLab](https://docs.gitlab.com/ee/user/ssh.html)
+ - [launchpad.net](https://launchpad.net/)
+
+ If you do not have one of the above, you can always upload your key manually as described in
+ [SSH Copy ID for Copying SSH Keys to Servers](https://www.ssh.com/academy/ssh/copy-id) from
+ [ssh.com](https://www.ssh.com).
+
+ Example usage for `ssh-import-id`:
+
+ ```bash
+ # GitHub
+  sudo -u $NEW_USER ssh-import-id-gh <your-github-username>
+
+ # GitLab
+  URL="https://gitlab.exampledomain.com/%s.keys" sudo -u $NEW_USER ssh-import-id <your-gitlab-username>
+ ```
+
+- Add the user to relevant groups
+
+ ```bash
+ usermod -a -G kvm $NEW_USER
+ ```
+
+- Create a password for the user
+
+ ```bash
+ passwd $NEW_USER
+ ```
+
+## Install Docker
+
+- Download the docker GPG key
+
+ ```bash
+  curl -fsSL https://download.docker.com/linux/debian/gpg | \
+  sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+ ```
+
+- Add the apt package source
+
+ ```bash
+ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+ ```
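+
+  The command above writes a single line to `/etc/apt/sources.list.d/docker.list`. On an amd64
+  Debian 12 host the two sub-shells resolve as shown below (a sketch for illustration; run the
+  real command above rather than hard-coding the values):
+
+  ```bash
+  ARCH="amd64"        # value of: dpkg --print-architecture
+  CODENAME="bookworm" # value of: lsb_release -cs
+  echo "deb [arch=${ARCH} signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian ${CODENAME} stable"
+  ```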
+
+- Update the apt package list and install Docker
+
+ ```bash
+ sudo apt-get update && sudo apt-get install -y docker-ce
+ ```
+
+- Add the user to the docker group
+
+ ```bash
+ usermod -a -G docker $NEW_USER
+ ```
+
+## Install Docker Compose (Optional)
+
+The default apt package lists provide a very outdated version of docker compose. Below are the steps
+required to install a current version.
+
+- Find the latest version by visiting https://github.com/docker/compose/releases
+
+ ```bash
+ export COMPOSE_VERSION="2.17.3"
+ ```
+
+- Create a directory for the binary
+
+ ```bash
+ mkdir -p ~/.docker/cli-plugins/
+ ```
+
+- Download the binary
+
+ ```bash
+ curl -SL https://github.com/docker/compose/releases/download/v$COMPOSE_VERSION/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
+ ```
+
+- Make it executable
+
+ ```bash
+ chmod +x ~/.docker/cli-plugins/docker-compose
+ ```
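+
+- Optionally, derive the artifact name from your CPU architecture
+
+  The download URL above hard-codes the `x86_64` artifact. On other machines you can derive the
+  artifact name from `uname -m` before downloading (a sketch covering only the two most common
+  cases):
+
+  ```bash
+  export COMPOSE_VERSION="2.17.3"
+
+  # Map the kernel architecture to a compose release artifact name
+  case "$(uname -m)" in
+    x86_64)  ARTIFACT="docker-compose-linux-x86_64" ;;
+    aarch64) ARTIFACT="docker-compose-linux-aarch64" ;;
+    *) echo "No artifact mapping for $(uname -m)" ;;
+  esac
+
+  echo "https://github.com/docker/compose/releases/download/v${COMPOSE_VERSION}/${ARTIFACT}"
+  ```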
+
+## Install NVIDIA GPU Drivers (Optional)
+
+Debian's built-in driver installation process is more reliable than Ubuntu's, but readers are still
+advised to get their driver installer directly from NVIDIA. Instructions for apt installation are
+included for completeness.
+
+### Download and Install from Nvidia
+
+- Install required packages
+
+ ```bash
+ # Run as root
+ apt-get install -y gcc \
+ firmware-misc-nonfree \
+ linux-headers-amd64 \
+ linux-headers-`uname -r`
+ ```
+
+- Find your driver version and download the installer
+
+  You can find both gaming and data-center drivers using the Nvidia web tool. However, cloud and
+  VPS users will need to refer to their provider's documentation for installation instructions, as
+  many providers require the use of GRID drivers instead.
+
+ - Nvidia's web tool: https://www.nvidia.com/download/index.aspx.
+
+ Alternatively, you can download a specific driver version using curl:
+
+ ```bash
+ export DRIVER_VERSION=""
+
+ # GeForce Cards
+ curl --progress-bar -fL -O "https://us.download.nvidia.com/XFree86/Linux-x86_64/$DRIVER_VERSION/NVIDIA-Linux-x86_64-$DRIVER_VERSION.run"
+
+ # Datacenter Cards
+ curl --progress-bar -fL -O "https://us.download.nvidia.com/tesla/$DRIVER_VERSION/NVIDIA-Linux-x86_64-$DRIVER_VERSION.run"
+ ```
+
+- Run the Installer
+
+  The driver installer must be run from the system console; it cannot be run from within an
+  X-session. If you used a desktop image instead of a server image, you will need to disable your
+  session manager first.
+
+ ```bash
+  sudo bash NVIDIA-Linux-x86_64-*.run
+ ```
+
+### Download and Install from Apt
+
+Will not work for data-center cards or hosts which require GRID drivers.
+
+```bash
+sudo apt-get install nvidia-driver
+```
+
+## Install the Nvidia Container Toolkit (Optional)
+
+The Nvidia Container Toolkit allows containers to access the GPU resources of the underlying host
+machine. It requires that the GPU drivers are already installed on the host. See the official docs
+here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
+
+- Set your system distribution name to Debian 11 as workaround until Nvidia adds official Debian 12
+ support
+
+ ```bash
+ distribution=debian11
+ ```
+
+- Download the gpg key and add the repo to your apt sources
+
+ ```bash
+ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
+ && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
+ sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
+ sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
+ ```
+
+- Update apt packages and install the container toolkit
+
+ ```bash
+ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
+ ```
+
+- Set `nvidia` as the default container runtime
+
+ ```bash
+ sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
+ ```
+
+- Restart the docker service
+
+ ```bash
+ sudo systemctl restart docker
+ ```
+
+- Test that it is working
+
+ Run one of Nvidia's
+ [sample workloads](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html)
+ to test system functionality.
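+
+  For example, the following should print the same table on a working setup as running
+  `nvidia-smi` directly on the host (the CUDA image tag is an assumption; use any tag that is
+  compatible with your installed driver):
+
+  ```bash
+  sudo docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
+  ```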
+
+## Install the Prometheus metrics exporter (Optional)
+
+- Log in as, or assume the root user.
+
+- Download the metrics exporter application
+
+ ```bash
+  wget -O /opt/node_exporter-1.6.1.linux-amd64.tar.gz https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
+ ```
+
+- Extract the archive and copy into place
+
+ ```bash
+ tar -xvf /opt/node_exporter-1.6.1.linux-amd64.tar.gz -C /opt && \
+ rm /opt/node_exporter-1.6.1.linux-amd64.tar.gz && \
+ ln -s node_exporter-1.6.1.linux-amd64 /opt/node_exporter
+ ```
+
+- Download and create the system service
+
+ ```bash
+ wget https://raw.githubusercontent.com/small-hack/smol-metal/main/node-exporter.service && \
+ sudo mv node-exporter.service /etc/systemd/system/node-exporter.service && \
+ systemctl daemon-reload && \
+ systemctl enable node-exporter && \
+ systemctl restart node-exporter
+ ```
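+
+- Alternatively, write the unit file yourself
+
+  If you would rather not download a service file from a third-party repository, a minimal unit
+  definition looks like the following (a sketch; it runs the binary as root from
+  `/opt/node_exporter`, so adjust the path and add a `User=` line to taste):
+
+  ```bash
+  sudo tee /etc/systemd/system/node-exporter.service << 'EOF'
+  [Unit]
+  Description=Prometheus Node Exporter
+  After=network-online.target
+
+  [Service]
+  ExecStart=/opt/node_exporter/node_exporter
+  Restart=always
+
+  [Install]
+  WantedBy=multi-user.target
+  EOF
+  sudo systemctl daemon-reload && sudo systemctl enable --now node-exporter
+  ```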
diff --git a/docs/12-self-hosting/04-host-provisioning/02-ubuntu-setup.mdx b/docs/12-self-hosting/04-host-provisioning/02-ubuntu-setup.mdx
new file mode 100644
index 00000000..04e93cd7
--- /dev/null
+++ b/docs/12-self-hosting/04-host-provisioning/02-ubuntu-setup.mdx
@@ -0,0 +1,308 @@
+---
+toc_max_heading_level: 4
+---
+
+# Ubuntu Machine Setup
+
+Steps for manual configuration and provisioning of Ubuntu 22.04 server systems. This guide assumes
+and recommends that the user is starting from a fresh installation. If you are unfamiliar with the
+installation process for Ubuntu, see the link below before progressing.
+
+- [How to Install Ubuntu 22.04 LTS Server Edition](https://ostechnix.com/install-ubuntu-server/)
+
+## Base Packages
+
+- Log in to your host as the root user.
+
+- Apply system updates and upgrades
+
+  You need to update the package list and upgrade system components prior to installing other
+  software. Often this process results in updates that will require a system reboot.
+
+ ```bash
+ apt-get update && \
+ apt-get upgrade -y
+
+ reboot
+ ```
+
+- Install base utilities.
+
+ The following is a curated set of base packages that are dependencies for steps later in the guide
+ and helpful in general.
+
+ ```bash
+ apt-get update && \
+ apt-get install -y wireguard \
+ ssh-import-id \
+ sudo \
+ curl \
+ tmux \
+ netplan.io \
+ apt-transport-https \
+ ca-certificates \
+ software-properties-common \
+ htop \
+ git-extras \
+ rsyslog \
+ fail2ban \
+ vim \
+ gpg \
+ open-iscsi \
+ nfs-common \
+ ncdu \
+ zip \
+ unzip \
+ iotop && \
+ sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && \
+ sudo chmod +x /usr/bin/yq && \
+ sudo systemctl enable fail2ban && \
+ sudo systemctl start fail2ban
+ ```
+
+## Admin User Creation
+
+- Create the user
+
+ ```bash
+ export NEW_USER=""
+ useradd -s /bin/bash -d /home/$NEW_USER/ -m -G sudo $NEW_USER
+ ```
+
+- Grant passwordless sudo permission
+
+ ```bash
+ echo "$NEW_USER ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+ ```
+
+- Import an ssh key
+
+  If you have a GitHub, GitLab, or Launchpad account, you can use `ssh-import-id` to install your
+  SSH public key onto your host.
+
+ - [Adding a new SSH key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account?platform=linux)
+  - [Use SSH keys to communicate with GitLab](https://docs.gitlab.com/ee/user/ssh.html)
+ - [launchpad.net](https://launchpad.net/)
+
+ If you do not have one of the above, you can always upload your key manually as described in
+ [SSH Copy ID for Copying SSH Keys to Servers](https://www.ssh.com/academy/ssh/copy-id) from
+ [ssh.com](https://www.ssh.com).
+
+ Example usage for `ssh-import-id`:
+
+ ```bash
+ # GitHub
+  sudo -u $NEW_USER ssh-import-id-gh <your-github-username>
+
+ # GitLab
+  URL="https://gitlab.exampledomain.com/%s.keys" sudo -u $NEW_USER ssh-import-id <your-gitlab-username>
+ ```
+
+- Add the user to relevant groups
+
+ ```bash
+ usermod -a -G kvm $NEW_USER
+ ```
+
+- Create a password for the user
+
+ ```bash
+ passwd $NEW_USER
+ ```
+
+## Install Docker
+
+Setting up docker can get tricky because the version available from `apt` is usually out of date.
+
+- Download the docker gpg key
+
+ ```bash
+ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
+ sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+ ```
+
+- Add the apt package source
+
+ ```bash
+ echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+ ```
+
+- Update the apt package list and install Docker
+
+ ```bash
+ sudo apt-get update && sudo apt-get install -y docker-ce
+ ```
+
+- Add the user to the docker group
+
+ ```bash
+ usermod -a -G docker $NEW_USER
+ ```
+
+## Install Docker Compose (Optional)
+
+As with Docker, the default `apt` packages provide a very outdated version of docker compose. Below
+are the steps required to install a current version.
+
+- Find the latest version by visiting https://github.com/docker/compose/releases
+
+ ```bash
+ export COMPOSE_VERSION="2.17.3"
+ ```
+
+- Create a directory for the binary
+
+ ```bash
+ mkdir -p ~/.docker/cli-plugins/
+ ```
+
+- Download the binary
+
+ ```bash
+ curl -SL https://github.com/docker/compose/releases/download/v$COMPOSE_VERSION/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
+ ```
+
+- Make it executable
+
+ ```bash
+ chmod +x ~/.docker/cli-plugins/docker-compose
+ ```
+
+## Install NVIDIA GPU Drivers
+
+Ubuntu's built-in driver installation tool is unreliable. Instructions for its use are included
+below, but readers are advised to get their driver installer directly from Nvidia.
+
+### Download and Install from Nvidia
+
+- Download and install driver dependencies
+
+ ```bash
+ sudo apt-get install -y ubuntu-drivers-common \
+ linux-headers-generic \
+ gcc \
+ kmod \
+ make \
+ pkg-config \
+ libvulkan1
+ ```
+
+- Find your driver version and download the installer
+
+  You can find both gaming and data-center drivers using the Nvidia web tool. However, cloud and
+  VPS users will need to refer to their provider's documentation for installation instructions, as
+  many providers require the use of GRID drivers instead.
+
+ - Nvidia's web tool: https://www.nvidia.com/download/index.aspx.
+
+  Alternatively, you can download a specific driver version using curl:
+
+ ```bash
+ export DRIVER_VERSION=""
+
+ # GeForce Cards
+ curl --progress-bar -fL -O "https://us.download.nvidia.com/XFree86/Linux-x86_64/$DRIVER_VERSION/NVIDIA-Linux-x86_64-$DRIVER_VERSION.run"
+
+ # Datacenter Cards
+ curl --progress-bar -fL -O "https://us.download.nvidia.com/tesla/$DRIVER_VERSION/NVIDIA-Linux-x86_64-$DRIVER_VERSION.run"
+ ```
+
+- Run the Installer
+
+  The driver installer must be run from the system console; it cannot be run from within an
+  X-session. If you used a desktop image instead of a server image, you will need to disable your
+  session manager first.
+
+ ```bash
+  sudo bash NVIDIA-Linux-x86_64-*.run
+ ```
+
+### Download and Install from apt
+
+Automatic install is currently broken for multiple card-types, see
+[#1993019](https://bugs.launchpad.net/ubuntu/+source/ubuntu-drivers-common/+bug/1993019).
+
+- Automatic install (Broken)
+
+ ```bash
+ sudo ubuntu-drivers autoinstall
+ ```
+
+- Install specific driver version
+
+ ```bash
+ sudo ubuntu-drivers install nvidia:525
+ ```
+
+## Install the Nvidia Container Toolkit (Optional)
+
+The Nvidia Container Toolkit allows containers to access the GPU resources of the underlying host
+machine. It requires that the GPU drivers are already installed on the host. See the official docs
+here: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
+
+- Get your system distribution name
+
+ ```bash
+ distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
+ ```
+
+- Download the gpg key and add the repo to your apt sources
+
+ ```bash
+ curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
+ && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
+ sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
+ sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
+ ```
+
+- Update apt packages and install the container toolkit
+
+ ```bash
+ sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
+ ```
+
+- Set `nvidia` as the default container runtime
+
+ ```bash
+ sudo nvidia-ctk runtime configure --runtime=docker --set-as-default
+ ```
+
+- Restart the docker service
+
+ ```bash
+ sudo systemctl restart docker
+ ```
+
+- Test that it is working
+
+ Run one of Nvidia's
+ [sample workloads](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/sample-workload.html)
+ to verify functionality.
+
+## Install the Prometheus metrics exporter (Optional)
+
+- Log in as, or assume the root user.
+
+- Download the metrics exporter application
+
+ ```bash
+  wget -O /opt/node_exporter-1.6.1.linux-amd64.tar.gz https://github.com/prometheus/node_exporter/releases/download/v1.6.1/node_exporter-1.6.1.linux-amd64.tar.gz
+ ```
+
+- Extract the archive and copy into place
+
+ ```bash
+ tar -xvf /opt/node_exporter-1.6.1.linux-amd64.tar.gz -C /opt && \
+ rm /opt/node_exporter-1.6.1.linux-amd64.tar.gz && \
+ ln -s node_exporter-1.6.1.linux-amd64 /opt/node_exporter
+ ```
+
+- Download and create the system service
+
+ ```bash
+ wget https://raw.githubusercontent.com/small-hack/smol-metal/main/node-exporter.service && \
+ sudo mv node-exporter.service /etc/systemd/system/node-exporter.service && \
+ systemctl daemon-reload && \
+ systemctl enable node-exporter && \
+ systemctl restart node-exporter
+ ```
diff --git a/docs/12-self-hosting/04-host-provisioning/03-cloud-init.mdx b/docs/12-self-hosting/04-host-provisioning/03-cloud-init.mdx
new file mode 100644
index 00000000..40633061
--- /dev/null
+++ b/docs/12-self-hosting/04-host-provisioning/03-cloud-init.mdx
@@ -0,0 +1,205 @@
+# Declarative Provisioning using Cloud-Init
+
+Cloud-Init is a software package that automates the initialization of cloud instances during system
+boot and has become the industry standard solution for operating system customization. Cloud-Init is
+directly integrated into nearly every operating system which publishes a cloud-image, as well as
+supported by most cloud-providers and virtual machine managers. Cloud-Init can also provision
+bare-metal systems running Ubuntu.
+
+- Create custom Ubuntu installation images based on Cloud-Init using
+ [PXEless](https://github.com/cloudymax/pxeless)
+- Use Cloud-Init with Multipass or QEMU for automated provisioning
+- Use your Cloud-Init config to deploy VMs in the cloud
+
+Additional Cloud-Init resources:
+
+- [My Magical Adventure With Cloud-Init](https://christine.website/blog/cloud-init-2021-06-04) by
+ [Xe Iaso](https://xeiaso.net/)
+- [Cloud-Init Examples by Canonical](https://cloudinit.readthedocs.io/en/latest/reference/examples.html)
+- [Cloud-Init official GitHub repo](https://github.com/canonical/cloud-init)
+
+## Ubuntu Template
+
+This Cloud-Init template automates the non-optional provisioning steps from the
+[Ubuntu Machine Setup](./02-ubuntu-setup.mdx) guide.
+
+```bash
+/bin/cat << EOF > cloud-init.yaml
+#cloud-config
+disable_root: false
+network:
+ config: disabled
+users:
+ - name: ${VM_USER}
+ groups: users, admin, docker, sudo, kvm
+ sudo: ALL=(ALL) NOPASSWD:ALL
+ shell: /bin/bash
+ lock_passwd: false
+ passwd: ${PASSWORD}
+ ssh_authorized_keys:
+ - ${VM_KEY}
+package_update: true
+package_upgrade: true
+packages:
+ - wireguard
+ - ssh-import-id
+ - sudo
+ - curl
+ - tmux
+ - netplan.io
+ - apt-transport-https
+ - ca-certificates
+ - software-properties-common
+ - htop
+ - git-extras
+ - rsyslog
+ - fail2ban
+ - vim
+ - gpg
+ - open-iscsi
+ - nfs-common
+ - ncdu
+ - zip
+ - unzip
+ - iotop
+ - gcc
+ - ubuntu-drivers-common
+ - kmod
+ - make
+ - pkg-config
+ - libvulkan1
+runcmd:
+ #####################
+ # Install linux headers
+  - apt-get install -y linux-headers-generic
+ ######################
+ # Install YQ
+  - wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq
+ - chmod +x /usr/bin/yq
+ ######################
+ # Install Docker
+ - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+ - echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu jammy stable" | sudo tee /etc/apt/sources.list.d/docker.list
+ - sudo apt-get update
+ - sudo apt-get install -y docker-ce
+ ########################
+ # Install Docker Compose
+ - sudo -u ${VM_USER} -i mkdir -p /home/${VM_USER}/.docker/cli-plugins/
+ - sudo -u ${VM_USER} -i curl -SL https://github.com/docker/compose/releases/download/v2.17.3/docker-compose-linux-x86_64 -o /home/${VM_USER}/.docker/cli-plugins/docker-compose
+ - sudo chmod +x /home/${VM_USER}/.docker/cli-plugins/docker-compose
+ ########################
+ # Brew and Python3
+ - wget https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
+ - chmod +x /install.sh
+ - chmod 777 /install.sh
+ - sudo -u ${VM_USER} NONINTERACTIVE=1 /bin/bash /install.sh
+ - sudo -u ${VM_USER} /home/linuxbrew/.linuxbrew/bin/brew shellenv >> /home/${VM_USER}/.profile
+  - echo 'export PATH=/home/linuxbrew/.linuxbrew/opt/python@3.11/libexec/bin:\$PATH' >> /home/${VM_USER}/.profile
+ - sudo -u ${VM_USER} /home/linuxbrew/.linuxbrew/bin/brew install python@3.11
+ - sudo chown -R ${VM_USER}:${VM_USER} /home/linuxbrew
+ - sudo chown -R ${VM_USER}:${VM_USER} /home/${VM_USER}
+ ########################
+ # Start fail2ban
+ - sudo systemctl enable fail2ban
+ - sudo systemctl start fail2ban
+ - reboot
+EOF
+```
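+
+One quick way to exercise this template locally is with Multipass, which accepts a Cloud-Init file
+directly. The values below are assumptions: `PASSWORD` must hold a crypted hash (`mkpasswd` ships
+in the `whois` package), and the instance name and sizes are arbitrary.
+
+```bash
+export VM_USER="runner"
+export PASSWORD="$(mkpasswd -m sha-512)"
+export VM_KEY="$(cat ~/.ssh/id_ed25519.pub)"
+
+# Render the template from the heredoc above, then boot a VM with it
+multipass launch 22.04 --name ci-host --cpus 4 --memory 8G --disk 40G --cloud-init cloud-init.yaml
+```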
+
+## Debian Template
+
+This Cloud-Init template automates the non-optional provisioning steps from the
+[Debian Machine Setup](./01-debian-setup.mdx) guide.
+
+```bash
+/bin/cat << EOF > cloud-init.yaml
+#cloud-config
+disable_root: false
+network:
+ config: disabled
+groups:
+ - docker
+users:
+ - name: ${VM_USER}
+ groups: users, admin, docker, sudo, kvm
+ sudo: ALL=(ALL) NOPASSWD:ALL
+ shell: /bin/bash
+ lock_passwd: false
+ passwd: ${PASSWORD}
+ ssh_authorized_keys:
+ - ${VM_KEY}
+write_files:
+- path: /etc/apt/sources.list
+ content: |
+ deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
+ deb-src http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware
+
+ deb http://deb.debian.org/debian-security/ bookworm-security main contrib non-free
+ deb-src http://deb.debian.org/debian-security/ bookworm-security main contrib non-free
+
+ deb http://deb.debian.org/debian bookworm-updates main contrib non-free
+ deb-src http://deb.debian.org/debian bookworm-updates main contrib non-free
+package_update: true
+package_upgrade: true
+packages:
+ - wireguard
+ - ssh-import-id
+ - sudo
+ - curl
+ - tmux
+ - netplan.io
+ - apt-transport-https
+ - ca-certificates
+ - software-properties-common
+ - htop
+ - git-extras
+ - rsyslog
+ - fail2ban
+ - vim
+ - gpg
+ - open-iscsi
+ - nfs-common
+ - ncdu
+ - zip
+ - unzip
+ - iotop
+ - gcc
+ - firmware-misc-nonfree
+runcmd:
+ #####################
+ # Install linux headers
+  - apt-get install -y linux-headers-amd64 linux-headers-\$(uname -r)
+ ######################
+ # Install YQ
+  - wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq
+ - chmod +x /usr/bin/yq
+ ######################
+ # Install Docker
+  - curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
+ - echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
+ - sudo apt-get update
+ - sudo apt-get install -y docker-ce
+ ########################
+ # Install Docker Compose
+ - sudo -u ${VM_USER} -i mkdir -p /home/${VM_USER}/.docker/cli-plugins/
+ - sudo -u ${VM_USER} -i curl -SL https://github.com/docker/compose/releases/download/v2.17.3/docker-compose-linux-x86_64 -o /home/${VM_USER}/.docker/cli-plugins/docker-compose
+ - sudo chmod +x /home/${VM_USER}/.docker/cli-plugins/docker-compose
+ ########################
+ # Brew and Python3
+ - wget https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh
+ - chmod +x /install.sh
+ - chmod 777 /install.sh
+ - sudo -u ${VM_USER} NONINTERACTIVE=1 /bin/bash /install.sh
+ - sudo -u ${VM_USER} /home/linuxbrew/.linuxbrew/bin/brew shellenv >> /home/${VM_USER}/.profile
+  - echo 'export PATH=/home/linuxbrew/.linuxbrew/opt/python@3.11/libexec/bin:\$PATH' >> /home/${VM_USER}/.profile
+ - sudo -u ${VM_USER} /home/linuxbrew/.linuxbrew/bin/brew install python@3.11
+ - sudo chown -R ${VM_USER}:${VM_USER} /home/linuxbrew
+ - sudo chown -R ${VM_USER}:${VM_USER} /home/${VM_USER}
+ ########################
+ # Start fail2ban
+ - sudo systemctl enable fail2ban
+ - sudo systemctl start fail2ban
+ - reboot
+EOF
+```
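+
+Before booting a VM with either template, it is worth validating the rendered file. Recent
+cloud-init releases ship a schema checker that will flag indentation and key errors:
+
+```bash
+cloud-init schema --config-file cloud-init.yaml --annotate
+```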
diff --git a/docs/12-self-hosting/04-host-provisioning/_category_.yaml b/docs/12-self-hosting/04-host-provisioning/_category_.yaml
new file mode 100644
index 00000000..561e9c8b
--- /dev/null
+++ b/docs/12-self-hosting/04-host-provisioning/_category_.yaml
@@ -0,0 +1,4 @@
+---
+label: Host Provisioning
+collapsible: true
+collapsed: true
diff --git a/docs/12-self-hosting/05-runner-application-installation/01-gitlab-pipelines.mdx b/docs/12-self-hosting/05-runner-application-installation/01-gitlab-pipelines.mdx
new file mode 100644
index 00000000..443aced3
--- /dev/null
+++ b/docs/12-self-hosting/05-runner-application-installation/01-gitlab-pipelines.mdx
@@ -0,0 +1,225 @@
+---
+toc_max_heading_level: 4
+---
+
+import RepoPage from '/assets/images/gl-runner-repo-page.png';
+import CiMenu from '/assets/images/gl-runner-cicd-menu.png';
+import CiSettings from '/assets/images/gl-runner-cicd-settings.png';
+import NewRunner from '/assets/images/gl-runner-new-runner.png';
+import SelectOS from '/assets/images/gl-runner-select-os.png';
+import SelectTags from '/assets/images/gl-runner-tags.png';
+import CreateRunner from '/assets/images/gl-runner-create-runner.png';
+import RegisterRunner from '/assets/images/gl-runner-register-runner.png';
+import AccessTokenSelect from '/assets/images/gl-runner-access-token-select.png';
+import TokenScreen from '/assets/images/gl-runner-access-token-screen.png';
+import TokenName from '/assets/images/gl-runner-token-name.png';
+import TokenRole from '/assets/images/gl-runner-token-role.png';
+import TokenCreate from '/assets/images/gl-runner-token-create-button.png';
+import TokenComplete from '/assets/images/gl-runner-token-complete.png';
+
+# Gitlab Pipelines
+
+Gitlab pipeline runners support multiple installation and runtime configurations. This guide will
+create a very basic runner using the `shell` executor in order to avoid running docker-in-docker.
+It provides both manual and automated setup instructions; skip to the bottom of the page for the
+automation script.
+
+You may find more in-depth information regarding Gitlab Runners at the following links:
+
+- [Gitlab Runner](https://docs.gitlab.com/runner/)
+- [Install GitLab Runner](https://docs.gitlab.com/runner/install/)
+- [Configuring GitLab Runner](https://docs.gitlab.com/runner/configuration/)
+
+## Manual
+
+1. Download and install the Gitlab runner application on your host
+
+ ```bash
+ # Download the binary for your system
+ sudo curl -L --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64
+
+ # Give it permission to execute
+ sudo chmod +x /usr/local/bin/gitlab-runner
+
+ # Create a GitLab Runner user
+ sudo useradd --comment 'GitLab Runner' --create-home gitlab-runner --shell /bin/bash
+
+ # Install and run as a service
+ sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner
+ sudo gitlab-runner start
+ ```
+
+2. Log in to Gitlab and navigate to the repo you want to create a runner for
+
+
+
+
+
+
+
+3. From the menu on the left, select `Settings` then choose `CI/CD` from the pop-out list.
+
+
+
+
+
+
+
+4. Find the `Runners` section and click the `expand` button
+
+
+
+
+
+
+
+5. Select `New Project Runner`
+
+
+
+
+
+
+
+6. Choose `Linux` as the operating system
+
+
+
+
+
+
+
+7. Optionally, check the `Run untagged jobs` box if you would like your runner to be the default
+   for all jobs.
+
+
+
+
+
+
+
+8. Click the `Create Runner` button at the bottom of the page
+
+
+
+
+
+
+
+9. Follow the instructions provided to register your runner
+
+
+
+
+
+
+
+## Automated
+
+The following script will perform the same actions as described above automatically. This is useful
+for those who would prefer ephemeral runners or a declarative workflow. You will need to provide
+your own project access-token to the script as an input value. For more in-depth information see
+the following resources:
+
+- [Tutorial: Create, register, and run your own project runner](https://docs.gitlab.com/ee/tutorials/create_register_first_runner/)
+- [Tutorial: Automate runner creation and registration ](https://docs.gitlab.com/ee/tutorials/automate_runner_creation/index.html)
+- [Runner Executors](https://docs.gitlab.com/runner/executors/)
+
+1. From the main page of your Gitlab repository, select the `Settings` option from the menu on the
+ left.
+
+
+
+
+
+
+
+2. Select the `Access Tokens` menu
+
+
+
+
+
+
+
+3. Select `Add new token`
+
+
+
+
+
+
+
+4. Give the new token a name and expiration date
+
+
+
+
+
+
+
+5. Set the following role and scopes for the token:
+
+
+
+
+
+
+
+6. Click the `Create Token` button
+
+
+
+
+
+
+
+7. Save the token string somewhere secure
+
+
+
+
+
+
+
+8. Copy and paste the following into your terminal to create the script
+
+ ```bash
+ /usr/bin/cat << 'EOF' > runner.sh
+ #!/bin/bash
+
+ export DOWNLOAD_URL="https://gitlab-runner-downloads.s3.amazonaws.com/latest/deb/gitlab-runner_amd64.deb"
+
+ curl -LJO "${DOWNLOAD_URL}"
+ sudo dpkg -i gitlab-runner_amd64.deb
+
+ export GITLAB_URL=$1
+ export PROJECT_ID=$2
+ export GITLAB_TOKEN=$3
+
+ RETURN=$(curl --silent --request POST --url "$GITLAB_URL/api/v4/user/runners" \
+ --data "runner_type=project_type" \
+ --data "project_id=$PROJECT_ID" \
+ --data "description=gameci runner" \
+ --data "tag_list=" \
+ --header "PRIVATE-TOKEN: $GITLAB_TOKEN")
+
+ TOKEN=$(echo "$RETURN" | jq -r '.token')
+
+ # The shell executor runs jobs directly on the host, so no container image is needed
+ sudo gitlab-runner register \
+ --non-interactive \
+ --name "gameci-runner" \
+ --url "$GITLAB_URL" \
+ --token "$TOKEN" \
+ --executor "shell"
+
+ sudo usermod -aG docker gitlab-runner
+ EOF
+ ```
+
+9. Run the script, passing your GitLab URL, project ID, and access token as arguments:
+
+ ```bash
+ bash ./runner.sh https://gitlab.com <project-id> <access-token>
+ ```
diff --git a/docs/12-self-hosting/05-runner-application-installation/02-github-actions.mdx b/docs/12-self-hosting/05-runner-application-installation/02-github-actions.mdx
new file mode 100644
index 00000000..7aac9a6a
--- /dev/null
+++ b/docs/12-self-hosting/05-runner-application-installation/02-github-actions.mdx
@@ -0,0 +1,132 @@
+---
+toc_max_heading_level: 4
+---
+
+import RepoPage from '/assets/images/gh-runner-repo-page.png';
+import ActionsTab from '/assets/images/gh-runner-actions-tab.png';
+import NewRunner from '/assets/images/gh-runner-new-runner.png';
+import RunnerSettings from '/assets/images/gh-runner-settings.png';
+
+# GitHub Actions
+
+The GitHub Actions runner application supports many different installation and configuration
+options. This guide covers a basic setup that creates a single runner instance. Both manual and
+automated setup instructions are provided; skip to the bottom of the page for the automation script.
+
+For more in-depth guidance, please refer to the official documentation:
+
+- [About self-hosted runners](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners)
+- [Hosting your own runners](https://docs.github.com/en/actions/hosting-your-own-runners)
+- [Using self-hosted runners in a workflow](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/using-self-hosted-runners-in-a-workflow)
+
+## Manual Setup
+
+1. Log in to [GitHub](https://github.com/) and navigate to the repository you would like to set up
+   a runner for, then select the `Settings` tab from the top menu of the repository's page
+
+
+
+
+
+
+
+2. From the menu on the left, expand the `Actions` menu, then select the `Runners` option
+
+
+
+
+
+
+
+3. Click the `New self-hosted runner` button to bring up the runner installation instructions.
+
+
+
+
+
+
+
+4. Select the `Linux` runner option, then follow the on-screen instructions to download, install,
+ and configure the runner application.
+
+
+
+
+
+
+5. Update your workflow file to use your self-hosted runner by setting the `runs-on` value to
+   `self-hosted`, as shown in the example below.
+
+ ```yaml
+ name: learn-github-actions
+ on: [push]
+ jobs:
+ check-bats-version:
+ runs-on: self-hosted
+ steps:
+ - uses: actions/checkout@v4
+ - uses: actions/setup-node@v3
+ with:
+ node-version: '14'
+ - run: npm install -g bats
+ - run: bats -v
+ ```
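+
+   Self-hosted runners also receive default labels (`self-hosted`, the operating system, and the
+   architecture), so `runs-on` can target a specific combination of labels. For example, to run a
+   job only on self-hosted Linux x64 runners:
+
+   ```yaml
+   runs-on: [self-hosted, linux, x64]
+   ```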
+
+## Automated Setup
+
+The following script performs the same actions as described above automatically. This is useful
+for those who prefer ephemeral runners or a declarative workflow. You will need to provide your
+own access token to the script as an input value.
+
+1. Copy and paste the following into your terminal to create the script:
+
+ ```bash
+ /usr/bin/cat << 'EOF' > runner.sh
+ #!/bin/bash
+
+ # url for github api endpoint
+ base_api_url="https://api.github.com"
+
+ # Username or Org name
+ owner=$1
+
+ # Name of the repository to create a runner for
+ repo=$2
+
+ # Access token
+ token=$3
+
+ # Runner platform
+ runner_plat=linux
+
+ # Get an authorized registration token for your repo/org
+ export RUNNER_TOKEN=$(curl -s -X POST ${base_api_url}/repos/${owner}/${repo}/actions/runners/registration-token -H "accept: application/vnd.github+json" -H "authorization: token ${token}" | jq -r '.token')
+
+ # Find the latest version of the runner software
+ latest_version_label=$(curl -s -X GET 'https://api.github.com/repos/actions/runner/releases/latest' | jq -r '.tag_name')
+ latest_version="${latest_version_label:1}"
+
+ # Assemble the string-value for the runner application archive
+ runner_file="actions-runner-${runner_plat}-x64-${latest_version}.tar.gz"
+
+ # Assemble the download URL
+ runner_url="https://github.com/actions/runner/releases/download/${latest_version_label}/${runner_file}"
+
+ # Download and extract the archive
+ wget -O ${runner_file} ${runner_url}
+ tar xzf "./${runner_file}"
+
+ # Install and configure the application without prompting for user-input
+ ./config.sh --url https://github.com/${owner}/${repo} --token ${RUNNER_TOKEN} --unattended
+
+ sudo ./svc.sh install
+ sudo ./svc.sh start
+ sudo ./svc.sh status
+ EOF
+ ```
+
+2. Run the script, passing your GitHub user or organization name, repository name, and access
+   token as arguments:
+
+ ```bash
+ bash ./runner.sh <owner> <repo> <access-token>
+ ```
diff --git a/docs/12-self-hosting/05-runner-application-installation/_category_.yaml b/docs/12-self-hosting/05-runner-application-installation/_category_.yaml
new file mode 100644
index 00000000..cccf24d4
--- /dev/null
+++ b/docs/12-self-hosting/05-runner-application-installation/_category_.yaml
@@ -0,0 +1,4 @@
+---
+label: Runner Application Installation
+collapsible: true
+collapsed: true
diff --git a/docs/12-self-hosting/_category_.yaml b/docs/12-self-hosting/_category_.yaml
new file mode 100644
index 00000000..6a7cd1fc
--- /dev/null
+++ b/docs/12-self-hosting/_category_.yaml
@@ -0,0 +1,5 @@
+---
+position: 6.0
+label: Self-Hosting
+collapsible: true
+collapsed: true
diff --git a/static/assets/images/DockerHost.drawio.png b/static/assets/images/DockerHost.drawio.png
new file mode 100644
index 00000000..7cdd1359
Binary files /dev/null and b/static/assets/images/DockerHost.drawio.png differ
diff --git a/static/assets/images/DockerHost.drawio.svg b/static/assets/images/DockerHost.drawio.svg
index 757d0b68..cf138c1d 100644
--- a/static/assets/images/DockerHost.drawio.svg
+++ b/static/assets/images/DockerHost.drawio.svg
@@ -1,266 +1,4 @@
-
\ No newline at end of file
+
+
+
+
\ No newline at end of file
diff --git a/static/assets/images/Metal.drawio.png b/static/assets/images/Metal.drawio.png
new file mode 100644
index 00000000..c7e74e2d
Binary files /dev/null and b/static/assets/images/Metal.drawio.png differ
diff --git a/static/assets/images/Virtualization.drawio.png b/static/assets/images/Virtualization.drawio.png
new file mode 100644
index 00000000..362f9bf0
Binary files /dev/null and b/static/assets/images/Virtualization.drawio.png differ
diff --git a/static/assets/images/Virtualization.drawio.svg b/static/assets/images/Virtualization.drawio.svg
index 57ddeeb3..457b2add 100644
--- a/static/assets/images/Virtualization.drawio.svg
+++ b/static/assets/images/Virtualization.drawio.svg
@@ -1,324 +1,4 @@
-
\ No newline at end of file
+
+
+
+
\ No newline at end of file
diff --git a/static/assets/images/boot-from-cd.png b/static/assets/images/boot-from-cd.png
new file mode 100644
index 00000000..e999ef70
Binary files /dev/null and b/static/assets/images/boot-from-cd.png differ
diff --git a/static/assets/images/boot-from-cd2.png b/static/assets/images/boot-from-cd2.png
new file mode 100644
index 00000000..35e71f01
Binary files /dev/null and b/static/assets/images/boot-from-cd2.png differ
diff --git a/static/assets/images/debian-grub.png b/static/assets/images/debian-grub.png
new file mode 100644
index 00000000..977bed72
Binary files /dev/null and b/static/assets/images/debian-grub.png differ
diff --git a/static/assets/images/diskmark-sata.png b/static/assets/images/diskmark-sata.png
new file mode 100644
index 00000000..c3e6d4f0
Binary files /dev/null and b/static/assets/images/diskmark-sata.png differ
diff --git a/static/assets/images/diskmark-virtio.png b/static/assets/images/diskmark-virtio.png
new file mode 100644
index 00000000..7f0f14bd
Binary files /dev/null and b/static/assets/images/diskmark-virtio.png differ
diff --git a/static/assets/images/gh-runner-actions-tab.png b/static/assets/images/gh-runner-actions-tab.png
new file mode 100644
index 00000000..ca55002d
Binary files /dev/null and b/static/assets/images/gh-runner-actions-tab.png differ
diff --git a/static/assets/images/gh-runner-new-runner.png b/static/assets/images/gh-runner-new-runner.png
new file mode 100644
index 00000000..37043679
Binary files /dev/null and b/static/assets/images/gh-runner-new-runner.png differ
diff --git a/static/assets/images/gh-runner-repo-page.png b/static/assets/images/gh-runner-repo-page.png
new file mode 100644
index 00000000..0ac22eaa
Binary files /dev/null and b/static/assets/images/gh-runner-repo-page.png differ
diff --git a/static/assets/images/gh-runner-settings.png b/static/assets/images/gh-runner-settings.png
new file mode 100644
index 00000000..27dcc600
Binary files /dev/null and b/static/assets/images/gh-runner-settings.png differ
diff --git a/static/assets/images/gl-runner-access-token-screen.png b/static/assets/images/gl-runner-access-token-screen.png
new file mode 100644
index 00000000..75615d4d
Binary files /dev/null and b/static/assets/images/gl-runner-access-token-screen.png differ
diff --git a/static/assets/images/gl-runner-access-token-select.png b/static/assets/images/gl-runner-access-token-select.png
new file mode 100644
index 00000000..c3ca7003
Binary files /dev/null and b/static/assets/images/gl-runner-access-token-select.png differ
diff --git a/static/assets/images/gl-runner-cicd-menu.png b/static/assets/images/gl-runner-cicd-menu.png
new file mode 100644
index 00000000..2d66e0a1
Binary files /dev/null and b/static/assets/images/gl-runner-cicd-menu.png differ
diff --git a/static/assets/images/gl-runner-cicd-settings.png b/static/assets/images/gl-runner-cicd-settings.png
new file mode 100644
index 00000000..2e30103e
Binary files /dev/null and b/static/assets/images/gl-runner-cicd-settings.png differ
diff --git a/static/assets/images/gl-runner-create-runner.png b/static/assets/images/gl-runner-create-runner.png
new file mode 100644
index 00000000..cb4f6d8d
Binary files /dev/null and b/static/assets/images/gl-runner-create-runner.png differ
diff --git a/static/assets/images/gl-runner-new-runner.png b/static/assets/images/gl-runner-new-runner.png
new file mode 100644
index 00000000..e782b6d7
Binary files /dev/null and b/static/assets/images/gl-runner-new-runner.png differ
diff --git a/static/assets/images/gl-runner-register-runner.png b/static/assets/images/gl-runner-register-runner.png
new file mode 100644
index 00000000..be42be1a
Binary files /dev/null and b/static/assets/images/gl-runner-register-runner.png differ
diff --git a/static/assets/images/gl-runner-repo-page.png b/static/assets/images/gl-runner-repo-page.png
new file mode 100644
index 00000000..9e56c794
Binary files /dev/null and b/static/assets/images/gl-runner-repo-page.png differ
diff --git a/static/assets/images/gl-runner-select-os.png b/static/assets/images/gl-runner-select-os.png
new file mode 100644
index 00000000..0bec73d0
Binary files /dev/null and b/static/assets/images/gl-runner-select-os.png differ
diff --git a/static/assets/images/gl-runner-tags.png b/static/assets/images/gl-runner-tags.png
new file mode 100644
index 00000000..597874d5
Binary files /dev/null and b/static/assets/images/gl-runner-tags.png differ
diff --git a/static/assets/images/gl-runner-token-complete.png b/static/assets/images/gl-runner-token-complete.png
new file mode 100644
index 00000000..63d7a905
Binary files /dev/null and b/static/assets/images/gl-runner-token-complete.png differ
diff --git a/static/assets/images/gl-runner-token-create-button.png b/static/assets/images/gl-runner-token-create-button.png
new file mode 100644
index 00000000..ae98b5ef
Binary files /dev/null and b/static/assets/images/gl-runner-token-create-button.png differ
diff --git a/static/assets/images/gl-runner-token-name.png b/static/assets/images/gl-runner-token-name.png
new file mode 100644
index 00000000..dcc62701
Binary files /dev/null and b/static/assets/images/gl-runner-token-name.png differ
diff --git a/static/assets/images/gl-runner-token-role.png b/static/assets/images/gl-runner-token-role.png
new file mode 100644
index 00000000..07f7d70a
Binary files /dev/null and b/static/assets/images/gl-runner-token-role.png differ
diff --git a/static/assets/images/k8s-layer0.drawio.png b/static/assets/images/k8s-layer0.drawio.png
new file mode 100644
index 00000000..3ed14e13
Binary files /dev/null and b/static/assets/images/k8s-layer0.drawio.png differ
diff --git a/static/assets/images/k8s-layer1.drawio.png b/static/assets/images/k8s-layer1.drawio.png
new file mode 100644
index 00000000..c6abf52f
Binary files /dev/null and b/static/assets/images/k8s-layer1.drawio.png differ
diff --git a/static/assets/images/k8s-layer2.drawio.png b/static/assets/images/k8s-layer2.drawio.png
new file mode 100644
index 00000000..6a557011
Binary files /dev/null and b/static/assets/images/k8s-layer2.drawio.png differ
diff --git a/static/assets/images/k8s-layers.drawio.png b/static/assets/images/k8s-layers.drawio.png
new file mode 100644
index 00000000..1081f1c0
Binary files /dev/null and b/static/assets/images/k8s-layers.drawio.png differ
diff --git a/static/assets/images/k8s-layers01.drawio.png b/static/assets/images/k8s-layers01.drawio.png
new file mode 100644
index 00000000..5f0c80d4
Binary files /dev/null and b/static/assets/images/k8s-layers01.drawio.png differ
diff --git a/static/assets/images/k8s-layers012.drawio.png b/static/assets/images/k8s-layers012.drawio.png
new file mode 100644
index 00000000..dfd75fa2
Binary files /dev/null and b/static/assets/images/k8s-layers012.drawio.png differ
diff --git a/static/assets/images/kubernetes.drawio.png b/static/assets/images/kubernetes.drawio.png
new file mode 100644
index 00000000..5d54a32b
Binary files /dev/null and b/static/assets/images/kubernetes.drawio.png differ
diff --git a/static/assets/images/macos-choose-install-location.png b/static/assets/images/macos-choose-install-location.png
new file mode 100644
index 00000000..2d7f46e0
Binary files /dev/null and b/static/assets/images/macos-choose-install-location.png differ
diff --git a/static/assets/images/macos-create-account.png b/static/assets/images/macos-create-account.png
new file mode 100644
index 00000000..ddb2b9a7
Binary files /dev/null and b/static/assets/images/macos-create-account.png differ
diff --git a/static/assets/images/macos-format-disk.png b/static/assets/images/macos-format-disk.png
new file mode 100644
index 00000000..20f61aed
Binary files /dev/null and b/static/assets/images/macos-format-disk.png differ
diff --git a/static/assets/images/macos-hackintosh.png b/static/assets/images/macos-hackintosh.png
new file mode 100644
index 00000000..edb7a130
Binary files /dev/null and b/static/assets/images/macos-hackintosh.png differ
diff --git a/static/assets/images/macos-quit-disk-utility.png b/static/assets/images/macos-quit-disk-utility.png
new file mode 100644
index 00000000..5191e717
Binary files /dev/null and b/static/assets/images/macos-quit-disk-utility.png differ
diff --git a/static/assets/images/macos-reinstall.png b/static/assets/images/macos-reinstall.png
new file mode 100644
index 00000000..fbcfcd25
Binary files /dev/null and b/static/assets/images/macos-reinstall.png differ
diff --git a/static/assets/images/macos-remote-login.png b/static/assets/images/macos-remote-login.png
new file mode 100644
index 00000000..38b3426e
Binary files /dev/null and b/static/assets/images/macos-remote-login.png differ
diff --git a/static/assets/images/macos-select-boot-media.png b/static/assets/images/macos-select-boot-media.png
new file mode 100644
index 00000000..4f3a0f5b
Binary files /dev/null and b/static/assets/images/macos-select-boot-media.png differ
diff --git a/static/assets/images/macos-select-boot-media2.png b/static/assets/images/macos-select-boot-media2.png
new file mode 100644
index 00000000..42c01a8d
Binary files /dev/null and b/static/assets/images/macos-select-boot-media2.png differ
diff --git a/static/assets/images/macos-select-disk-utility.png b/static/assets/images/macos-select-disk-utility.png
new file mode 100644
index 00000000..3a6c9121
Binary files /dev/null and b/static/assets/images/macos-select-disk-utility.png differ
diff --git a/static/assets/images/macos-select-install-media.png b/static/assets/images/macos-select-install-media.png
new file mode 100644
index 00000000..6add41f3
Binary files /dev/null and b/static/assets/images/macos-select-install-media.png differ
diff --git a/static/assets/images/macos-ssh.png b/static/assets/images/macos-ssh.png
new file mode 100644
index 00000000..73588a6b
Binary files /dev/null and b/static/assets/images/macos-ssh.png differ
diff --git a/static/assets/images/qemu-logo.png b/static/assets/images/qemu-logo.png
new file mode 100644
index 00000000..10f9a76a
Binary files /dev/null and b/static/assets/images/qemu-logo.png differ
diff --git a/static/assets/images/ssh-connection.png b/static/assets/images/ssh-connection.png
new file mode 100644
index 00000000..6b5c33d4
Binary files /dev/null and b/static/assets/images/ssh-connection.png differ
diff --git a/static/assets/images/vnc-connection.png b/static/assets/images/vnc-connection.png
new file mode 100644
index 00000000..e886cf6e
Binary files /dev/null and b/static/assets/images/vnc-connection.png differ
diff --git a/static/assets/images/win10-custom-install.png b/static/assets/images/win10-custom-install.png
new file mode 100644
index 00000000..8d250871
Binary files /dev/null and b/static/assets/images/win10-custom-install.png differ
diff --git a/static/assets/images/win10-disk-select.png b/static/assets/images/win10-disk-select.png
new file mode 100644
index 00000000..b0e9c182
Binary files /dev/null and b/static/assets/images/win10-disk-select.png differ
diff --git a/static/assets/images/win10-driver-browse.png b/static/assets/images/win10-driver-browse.png
new file mode 100644
index 00000000..eb5be8d5
Binary files /dev/null and b/static/assets/images/win10-driver-browse.png differ
diff --git a/static/assets/images/win10-driver-disk.png b/static/assets/images/win10-driver-disk.png
new file mode 100644
index 00000000..8ef7e019
Binary files /dev/null and b/static/assets/images/win10-driver-disk.png differ
diff --git a/static/assets/images/win10-eula.png b/static/assets/images/win10-eula.png
new file mode 100644
index 00000000..75fd9873
Binary files /dev/null and b/static/assets/images/win10-eula.png differ
diff --git a/static/assets/images/win10-format-disk.png b/static/assets/images/win10-format-disk.png
new file mode 100644
index 00000000..00a577ee
Binary files /dev/null and b/static/assets/images/win10-format-disk.png differ
diff --git a/static/assets/images/win10-install-driver.png b/static/assets/images/win10-install-driver.png
new file mode 100644
index 00000000..5164a3a2
Binary files /dev/null and b/static/assets/images/win10-install-driver.png differ
diff --git a/static/assets/images/win10-install.png b/static/assets/images/win10-install.png
new file mode 100644
index 00000000..2f8c989e
Binary files /dev/null and b/static/assets/images/win10-install.png differ
diff --git a/static/assets/images/win10-installing.png b/static/assets/images/win10-installing.png
new file mode 100644
index 00000000..0abac4e3
Binary files /dev/null and b/static/assets/images/win10-installing.png differ
diff --git a/static/assets/images/win10-language.png b/static/assets/images/win10-language.png
new file mode 100644
index 00000000..39ea4c15
Binary files /dev/null and b/static/assets/images/win10-language.png differ
diff --git a/static/assets/images/win10-netkvm.png b/static/assets/images/win10-netkvm.png
new file mode 100644
index 00000000..e3f9fed6
Binary files /dev/null and b/static/assets/images/win10-netkvm.png differ
diff --git a/static/assets/images/win10-partitions.png b/static/assets/images/win10-partitions.png
new file mode 100644
index 00000000..2a295c33
Binary files /dev/null and b/static/assets/images/win10-partitions.png differ
diff --git a/static/assets/images/win10-rdp.png b/static/assets/images/win10-rdp.png
new file mode 100644
index 00000000..c2ce8a4b
Binary files /dev/null and b/static/assets/images/win10-rdp.png differ
diff --git a/static/assets/images/win10-serial.png b/static/assets/images/win10-serial.png
new file mode 100644
index 00000000..44ef6a15
Binary files /dev/null and b/static/assets/images/win10-serial.png differ
diff --git a/static/assets/images/win10-version.png b/static/assets/images/win10-version.png
new file mode 100644
index 00000000..33b01fae
Binary files /dev/null and b/static/assets/images/win10-version.png differ
diff --git a/static/assets/images/win10-viostor.png b/static/assets/images/win10-viostor.png
new file mode 100644
index 00000000..155b9a9f
Binary files /dev/null and b/static/assets/images/win10-viostor.png differ
diff --git a/static/assets/images/win10-virtio-drivers.png b/static/assets/images/win10-virtio-drivers.png
new file mode 100644
index 00000000..c842e15c
Binary files /dev/null and b/static/assets/images/win10-virtio-drivers.png differ
diff --git a/static/assets/images/win10-virtiogpu.png b/static/assets/images/win10-virtiogpu.png
new file mode 100644
index 00000000..57baf41b
Binary files /dev/null and b/static/assets/images/win10-virtiogpu.png differ