Move more stuff to kubernetes-deployments

Remove kubernetes stuff from readme
Pim Kunis 2024-09-07 21:58:27 +02:00
parent 8744db7f1f
commit ad4d78ed2a
6 changed files with 4 additions and 214 deletions


@@ -9,18 +9,14 @@ Nix definitions to configure our servers at home.
- [dns.nix](https://github.com/kirelagin/dns.nix): A Nix DSL for defining DNS zones
- [flake-utils](https://github.com/numtide/flake-utils): Handy utilities to develop Nix flakes
- [nixos-hardware](https://github.com/NixOS/nixos-hardware): Hardware-specific NixOS modules. Doing the heavy lifting for our Raspberry Pi
- [kubenix](https://kubenix.org/): declare and deploy Kubernetes resources using Nix
- [nixhelm](https://github.com/farcaller/nixhelm): Nix-digestible Helm charts
- [sops-nix](https://github.com/Mic92/sops-nix): Sops secret management for Nix
## NixOS
### Prerequisites
## Prerequisites
1. Install the Nix package manager or NixOS ([link](https://nixos.org/download))
2. Enable flake and nix commands ([link](https://nixos.wiki/wiki/Flakes#Enable_flakes_permanently_in_NixOS))
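For step 2 on a non-NixOS machine, this boils down to a one-liner (a minimal sketch; on NixOS you would instead set the equivalent `nix.settings.experimental-features` option in the system configuration):
```bash
# Enable the flakes and nix-command features for the current user.
mkdir -p ~/.config/nix
echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf
```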
### Bootstrapping
## Bootstrapping
We bootstrap our servers using [nixos-anywhere](https://github.com/nix-community/nixos-anywhere).
This reformats the hard disk of the server and installs a fresh NixOS.
@@ -32,37 +28,11 @@ Additionally, it deploys an age identity, which is later used for decrypting secrets.
2. Ensure you have root SSH access to the server.
3. Run nixos-anywhere: `nix run '.#bootstrap' <servername> <hostname>`
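Under the hood this wraps nixos-anywhere; the equivalent manual invocation looks roughly like this (a sketch only, the flake's bootstrap script may pass extra flags, e.g. for deploying the age identity):
```bash
# Bootstrap the NixOS configuration <servername> onto the machine at <hostname>.
# Placeholders are the same as in the command above.
nix run github:nix-community/nixos-anywhere -- \
  --flake ".#<servername>" \
  root@<hostname>
```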
### Deployment
## Deployment
To deploy all servers at once: `nix run 'nixpkgs#deploy-rs' -- '.#' -k`
To deploy only one server: `nix run 'nixpkgs#deploy-rs' -- -k --targets '.#<host>'`
## Kubernetes
### Prerequisites
To deploy to the Kubernetes cluster, first make sure you have an admin account on the cluster.
You can generate this using `nix run '.#gen-k3s-cert' <username> <servername> ~/.kube`, assuming you have SSH access to the master node.
This puts a private key, a signed certificate, and a kubeconfig in the given output directory.
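For example (the user and host names here are placeholders):
```bash
nix run '.#gen-k3s-cert' alice master.dmz ~/.kube
export KUBECONFIG=~/.kube/config   # the script writes a file named "config" to the output directory
kubectl get nodes                  # verify the new credentials work
```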
### Bootstrapping
We are now ready to deploy to the Kubernetes cluster.
Deployments are done through an experimental Kubernetes feature called [ApplySets](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-to-delete-objects).
Each ApplySet is responsible for a fixed set of resources within a namespace (an illustration of the underlying kubectl feature follows below).
If the cluster has not been initialized yet, we must bootstrap it first.
Run these deployments:
- `nix run '.#bootstrap-default'`
- `nix run '.#bootstrap-kube-system'`
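For reference, these deployments build on the kubectl ApplySet mechanism described above; a standalone invocation looks roughly like this (an illustration only, not how the flake calls it; the namespace and ApplySet name are placeholders):
```bash
# ApplySet pruning is an alpha kubectl feature and must be enabled explicitly.
export KUBECTL_APPLYSET=true
# Apply a directory of manifests and prune anything previously tracked by the
# "default" ApplySet that is no longer present in the manifests.
kubectl apply -n default --applyset=default --prune -f ./manifests/
```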
### Deployment
Now the cluster has been initialized and we can deploy applications.
To explore which applications we can deploy, run `nix flake show`.
Then, for each application, run `nix run '.#<application>'`.
Or, if you're lazy: `nix flake show --json | jq -r '.packages."x86_64-linux"|keys[]' | grep -- -deploy | xargs -I{} nix run ".#{}"`.
## Known bugs
### Rsync not available during bootstrap


@@ -1,75 +0,0 @@
# Longhorn notes
## Migration from NFS to Longhorn
1. Delete the workload, and delete the PV and PVC that use NFS.
2. Create Longhorn volumes as described below.
3. Copy NFS data from lewis.dmz to local disk.
4. Spin up a temporary pod and mount the Longhorn volume(s) in it:
```nix
{
  pods.testje.spec = {
    containers.testje = {
      image = "nginx";
      volumeMounts = [
        {
          name = "uploads";
          mountPath = "/hedgedoc/public/uploads";
        }
      ];
    };
    volumes = {
      uploads.persistentVolumeClaim.claimName = "hedgedoc-uploads";
    };
  };
}
```
5. Use `kubectl cp` to copy the data from the local disk to the pod (see the sketch after this list).
6. Delete the temporary pod.
7. Be sure to set the group ownership of the mount to the correct GID.
8. Create the workload with updated volume mounts.
9. Delete the data from local disk.
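A rough sketch of steps 3, 5 and 7, reusing the hedgedoc example above (the NFS path, namespace and GID are hypothetical):
```bash
# Step 3: copy the data from the NFS server to local disk (path is a placeholder).
rsync -avP root@lewis.dmz:/export/hedgedoc/uploads/ ./uploads/

# Step 5: copy the local data into the temporary pod's Longhorn mount.
kubectl cp ./uploads/. default/testje:/hedgedoc/public/uploads/

# Step 7 (done while the temporary pod still exists): fix the group ownership.
kubectl exec -n default testje -- chown -R root:1000 /hedgedoc/public/uploads
```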
## Creation of new Longhorn volumes
While it seems handy to use a K8s StorageClass for Longhorn, we do *not* want to use that.
If you use a StorageClass, a PV and Longhorn volume will be automatically provisioned.
These will have the name `pvc-<UID of PVC>`, where the UID of the PVC is random.
This makes it hard to restore a backup to a Longhorn volume with the correct name.
Instead, we want to manually create the Longhorn volumes via the web UI.
Then, we can create the PV and PVC as usual using our K8s provisioning tool (e.g. Kubectl/Kubenix).
Follow these actions to create a Volume:
1. Using the Longhorn web UI, create a new Longhorn volume, keeping the following in mind:
- The size can be somewhat larger than what we reasonably expect to use. We use storage over-provisioning, so the total size of all volumes may exceed the real disk size.
- The number of replicas should be 2.
2. Enable the "backup-nfs" recurring job for the Longhorn volume.
3. Disable the "default" recurring job group for the Longhorn volume.
4. Create the PV, PVC and workload as usual.
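As a sketch of step 4, a statically provisioned PV/PVC pair for a manually created Longhorn volume looks roughly like this (in this repo they are declared via Kubenix rather than applied by hand; the names, size and namespace follow the hedgedoc example above and are otherwise assumptions):
```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hedgedoc-uploads
spec:
  capacity:
    storage: 1Gi                     # should match the size chosen in the Longhorn UI
  accessModes: [ReadWriteOnce]
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: driver.longhorn.io
    volumeHandle: hedgedoc-uploads   # must equal the Longhorn volume name
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hedgedoc-uploads
  namespace: default
spec:
  accessModes: [ReadWriteOnce]
  storageClassName: ""               # empty: bind to the pre-created PV, no dynamic provisioning
  volumeName: hedgedoc-uploads
  resources:
    requests:
      storage: 1Gi
EOF
```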
## Disaster recovery using Longhorn backups
Backing up Longhorn volumes is very easy, but restoring them is trickier.
We consider here the case when all our machines are wiped, and all we have left is Longhorn backups.
To restore a backup, perform the following actions:
1. Restore the latest snapshot in the relevant Longhorn backup, keeping the following in mind:
- The name should remain the same (i.e. the one chosen at Longhorn volume creation).
- The number of replicas should be 2.
- Disable recurring jobs.
2. Enable the "backup-nfs" recurring job for the Longhorn volume.
3. Disable the "default" recurring job group for the Longhorn volume.
4. Create the PV, PVC and workload as usual.
## Recovering Longhorn volumes without a Kubernetes cluster
1. Navigate to the Longhorn backupstore location (`/mnt/longhorn/persistent/longhorn-backup/backupstore/volumes` for us).
2. Find the directory for the desired volume: `ls **/**`.
3. Determine the last backup for the volume: `cat volume.cfg | jq '.LastBackupName'`.
4. Find the blocks and the order that form the volume: `cat backups/<name>.cfg | jq '.Blocks'`.
5. Extract each block using lz4: `lz4 -d blocks/XX/YY/XXYY.blk block`.
6. Append the blocks to form the file system: `cat block1 block2 block3 > volume.img`
7. Lastly, we need to fix the size of the image: simply append zeros to the end until the file is long enough that `fsck.ext4` no longer complains (see the sketch after this list).
8. Mount the image: `mount -o loop volume.img /mnt/volume`.
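A rough sketch of steps 4-6 as a single loop (this assumes the `Blocks` array stores each block's checksum in a `BlockChecksum` field and lists the blocks in ascending offset order, and that the volume has no sparse holes; verify against your backup config before relying on it):
```bash
#!/usr/bin/env bash
set -euo pipefail

backup_cfg="backups/<name>.cfg"   # the backup found in step 3
out="volume.img"
: > "$out"

# Decompress every block and append it in the order listed in the backup config.
jq -r '.Blocks[].BlockChecksum' "$backup_cfg" | while read -r sum; do
  lz4 -dc "blocks/${sum:0:2}/${sum:2:2}/${sum}.blk" >> "$out"
done

# Step 7: grow the image with zeros until fsck.ext4 stops complaining, e.g.:
# truncate -s <expected size> "$out"
```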


@@ -1,11 +0,0 @@
# Media
[profilarr](https://github.com/Dictionarry-Hub/profilarr) was used to import the "1080p Transparent" quality profile to both Radarr and Sonarr.
Profilarr has some neat tools that magically apply custom formats and quality definitions.
As far as I understand, these are used to identify files that are high quality.
Profilarr can then also import a quality profile, which uses the aforementioned definitions to select torrents in my desired format.
In my case, I have chosen "1080p Transparent."
According to the [docs](https://selectarr.pages.dev/):
> Projected Size: 10 - 15gb
>
> Description: Prioritizes 1080p transparent releases. Lossy audio is allowed, and all upgrades are allowed. HDR is banned.


@@ -40,7 +40,7 @@
};
outputs =
inputs@{ self, nixpkgs, flake-utils, ... }:
inputs@{ nixpkgs, flake-utils, ... }:
flake-utils.lib.meld inputs [
./scripts
./checks.nix


@@ -21,12 +21,6 @@ in
scriptPath = ./bootstrap.sh;
};
packages.gen-k3s-cert = createScript {
name = "create-k3s-cert";
runtimeInputs = with pkgs; [ openssl coreutils openssh yq ];
scriptPath = ./gen-k3s-cert.sh;
};
packages.prefetch-container-images =
let
imagesJSON = builtins.toFile "images.json" (builtins.toJSON self.globals.images);


@@ -1,88 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
IFS=$'\n\t'
username="${1-}"
host="${2-}"
output_path="${3:-.}"
if [ -z "$username" ] || [ -z "$host" ]
then
echo "Usage: $0 USERNAME HOST [OUTPUTPATH]"
exit 1
fi
# Create a temporary directory
temp=$(mktemp -d)
# Function to cleanup temporary directory on exit
cleanup() {
rm -rf "$temp"
}
trap cleanup EXIT
echo Generating the private key
openssl genpkey -algorithm ed25519 -out "$temp/key.pem"
echo Generating the certificate request
openssl req -new -key "$temp/key.pem" -out "$temp/req.csr" -subj "/CN=$username"
echo Creating K8S CSR manifest
csr="$(cat "$temp/req.csr" | base64 | tr -d '\n')"
k8s_csr="apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
name: $username-csr
spec:
request: $csr
expirationSeconds: 307584000 # 10 years
signerName: kubernetes.io/kube-apiserver-client
usages:
- digital signature
- key encipherment
- client auth
"
echo Creating K8S CSR resource
ssh "root@$host" "echo \"$k8s_csr\" | k3s kubectl apply -f -"
echo Approving K8S CSR
ssh "root@$host" "k3s kubectl certificate approve $username-csr"
echo Retrieving approved certificate
encoded_cert="$(ssh root@"$host" "k3s kubectl get csr $username-csr -o jsonpath='{.status.certificate}'")"
echo Retrieving default K3S kubeconfig
base_kubeconfig="$(ssh root@"$host" "cat /etc/rancher/k3s/k3s.yaml")"
echo Getting certificate authority data from default kubeconfig
cert_authority_data="$(echo -n "$base_kubeconfig" | yq -r '.clusters[0].cluster."certificate-authority-data"')"
echo Generating final kubeconfig
result_kubeconfig="apiVersion: v1
clusters:
- cluster:
certificate-authority-data: $cert_authority_data
server: https://$host:6443
name: default
contexts:
- context:
cluster: default
user: $username
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: $username
user:
client-certificate: $username.crt
client-key: $username.key
"
echo Writing resulting files to "$output_path"
echo -n "$encoded_cert" | base64 -d > $output_path/$username.crt
echo -n "$result_kubeconfig" > $output_path/config
cp $temp/key.pem $output_path/$username.key