From 0cc17722cb190d606310369e08181300259d9179 Mon Sep 17 00:00:00 2001
From: Pim Kunis
Date: Sat, 7 Sep 2024 21:59:41 +0200
Subject: [PATCH] Move over stuff from nixos-servers

---
 README.md               |  35 ++++
 docs/longhorn.md        |  75 ++++++++
 docs/media.md           |  11 ++
 flake.lock              | 389 ++--------------------------------------
 flake.nix               |  83 +--------
 kubenix.nix             |  72 ++++++++
 scripts/default.nix     |  23 +++
 scripts/gen-k3s-cert.sh |  88 +++++++++
 8 files changed, 330 insertions(+), 446 deletions(-)
 create mode 100644 README.md
 create mode 100644 docs/longhorn.md
 create mode 100644 docs/media.md
 create mode 100644 kubenix.nix
 create mode 100644 scripts/default.nix
 create mode 100644 scripts/gen-k3s-cert.sh

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..b2c5d53
--- /dev/null
+++ b/README.md
@@ -0,0 +1,35 @@
+# Kubernetes deployments
+
+We use [Kubenix](https://kubenix.org/) to write Kubernetes deployments in Nix!
+
+## Acknowledgements
+
+- [dns.nix](https://github.com/kirelagin/dns.nix): A Nix DSL for defining DNS zones
+- [flake-utils](https://github.com/numtide/flake-utils): Handy utilities to develop Nix flakes
+- [kubenix](https://kubenix.org/): Declare and deploy Kubernetes resources using Nix
+- [nixhelm](https://github.com/farcaller/nixhelm): Nix-digestible Helm charts
+- [sops-nix](https://github.com/Mic92/sops-nix): Sops secret management for Nix
+
+## Prerequisites
+
+To deploy to the Kubernetes cluster, first make sure you have an admin account on the cluster.
+You can generate one using `nix run '.#gen-k3s-cert' -- <username> <host> ~/.kube`, assuming you have SSH access to the master node.
+This puts a private key, a signed certificate and a kubeconfig file in the given output directory.
+
+## Bootstrapping
+
+We are now ready to deploy to the Kubernetes cluster.
+Deployments are done through an experimental Kubernetes feature called [ApplySets](https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/#how-to-delete-objects).
+Each applyset is responsible for a fixed set of resources within a namespace.
+
+If the cluster has not been initialized yet, we must bootstrap it first.
+Run these deployments:
+- `nix run '.#bootstrap-default-deploy'`
+- `nix run '.#bootstrap-kube-system-deploy'`
+
+## Deployment
+
+Now that the cluster has been initialized, we can deploy applications.
+To explore which applications we can deploy, run `nix flake show`.
+Then, for each application, run `nix run '.#<application>-deploy'`.
+Or, if you're lazy: `nix flake show --json | jq -r '.packages."x86_64-linux"|keys[]' | grep -- -deploy | xargs -I{} nix run ".#{}"`.

diff --git a/docs/longhorn.md b/docs/longhorn.md
new file mode 100644
index 0000000..532a188
--- /dev/null
+++ b/docs/longhorn.md
@@ -0,0 +1,75 @@
+# Longhorn notes
+
+## Migration from NFS to Longhorn
+
+1. Delete the workload, and delete the PV and PVC using NFS.
+2. Create Longhorn volumes as described below.
+3. Copy NFS data from lewis.dmz to local disk.
+4. Spin up a temporary pod and mount the Longhorn volume(s) in it:
+   ```nix
+   {
+     pods.testje.spec = {
+       containers.testje = {
+         image = "nginx";
+
+         volumeMounts = [
+           {
+             name = "uploads";
+             mountPath = "/hedgedoc/public/uploads";
+           }
+         ];
+       };
+
+       volumes = {
+         uploads.persistentVolumeClaim.claimName = "hedgedoc-uploads";
+       };
+     };
+   }
+   ```
+5. Use `kubectl cp` to copy the data from the local disk to the pod.
+6. Delete the temporary pod.
+7. Be sure to set the group ownership of the mount to the correct GID.
+8. Create the workload with updated volume mounts.
+9. Delete the data from local disk.
+
+## Creation of new Longhorn volumes
+
+While it seems handy to use a K8s StorageClass for Longhorn, we do *not* want to use that.
+If you use a StorageClass, a PV and Longhorn volume will be automatically provisioned.
+These will have the name `pvc-<UID>`, where the UID of the PVC is random.
+This makes it hard to restore a backup to a Longhorn volume with the correct name.
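To make the alternative concrete: with a manually created Longhorn volume, the PV is provisioned statically and points at the volume by name through the Longhorn CSI driver. Below is a kubenix-style sketch; the `kubernetes.resources` option paths are assumed from kubenix's generated resource options, and `hedgedoc-uploads` matches the claim name used in the migration example above:

```nix
{
  kubernetes.resources = {
    # Statically provisioned PV that points at the manually created
    # Longhorn volume by name, via the Longhorn CSI driver.
    persistentVolumes.hedgedoc-uploads.spec = {
      accessModes = [ "ReadWriteOnce" ];
      capacity.storage = "1Gi";
      storageClassName = "";
      csi = {
        driver = "driver.longhorn.io";
        fsType = "ext4";
        volumeHandle = "hedgedoc-uploads"; # must equal the name chosen in the Longhorn UI
      };
    };

    # PVC bound explicitly to the PV above, so no StorageClass provisioner kicks in.
    persistentVolumeClaims.hedgedoc-uploads.spec = {
      accessModes = [ "ReadWriteOnce" ];
      resources.requests.storage = "1Gi";
      storageClassName = "";
      volumeName = "hedgedoc-uploads";
    };
  };
}
```

Because `volumeName` pins the PVC to this exact PV, the binding is deterministic and survives a restore of the identically named Longhorn volume.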
+
+Instead, we want to manually create the Longhorn volumes via the web UI.
+Then, we can create the PV and PVC as usual using our K8s provisioning tool (e.g. Kubectl/Kubenix).
+
+Follow these actions to create a volume:
+1. Using the Longhorn web UI, create a new Longhorn volume, keeping the following in mind:
+   - The size can be somewhat more than what we expect to reasonably use. We use storage-overprovisioning, so the total size of volumes can exceed real disk size.
+   - The number of replicas should be 2.
+2. Enable the "backup-nfs" recurring job for the Longhorn volume.
+3. Disable the "default" recurring job group for the Longhorn volume.
+4. Create the PV, PVC and workload as usual.
+
+## Disaster recovery using Longhorn backups
+
+Backing up Longhorn volumes is very easy, but restoring them is trickier.
+We consider here the case when all our machines are wiped, and all we have left is Longhorn backups.
+To restore a backup, perform the following actions:
+1. Restore the latest snapshot in the relevant Longhorn backup, keeping the following in mind:
+   - The name should remain the same (i.e. the one chosen at Longhorn volume creation).
+   - The number of replicas should be 2.
+   - Disable recurring jobs.
+2. Enable the "backup-nfs" recurring job for the Longhorn volume.
+3. Disable the "default" recurring job group for the Longhorn volume.
+4. Create the PV, PVC and workload as usual.
+
+## Recovering Longhorn volumes without a Kubernetes cluster
+
+1. Navigate to the Longhorn backupstore location (`/mnt/longhorn/persistent/longhorn-backup/backupstore/volumes` for us).
+2. Find the directory for the desired volume: `ls **/**`.
+3. Determine the last backup for the volume: `cat volume.cfg | jq '.LastBackupName'`.
+4. Find the blocks and the order that form the volume: `cat backups/.cfg | jq '.Blocks'`.
+5. Extract each block using lz4: `lz4 -d blocks/XX/YY/XXYY.blk block`.
+6. Append the blocks to form the file system: `cat block1 block2 block3 > volume.img`.
+7. Lastly, we need to fix the size of the image: simply append zeros to the end until the file is long enough that `fsck.ext4` no longer complains.
+8. Mount the image: `mount -o loop volume.img /mnt/volume`.

diff --git a/docs/media.md b/docs/media.md
new file mode 100644
index 0000000..34f0fc7
--- /dev/null
+++ b/docs/media.md
@@ -0,0 +1,11 @@
+# Media
+
+[profilarr](https://github.com/Dictionarry-Hub/profilarr) was used to import the "1080p Transparent" quality profile to both Radarr and Sonarr.
+Profilarr has some neat tools that magically apply custom formats and quality definitions.
+As far as I understand, these are used to identify files that are high quality.
+Profilarr can then also import a quality profile, which uses the aforementioned definitions to select torrents in my desired format.
+In my case, I have chosen "1080p Transparent."
+According to the [docs](https://selectarr.pages.dev/):
+> Projected Size: 10 - 15gb
+>
+> Description: Prioritizes 1080p transparent releases. Lossy audio is allowed, and all upgrades are allowed. HDR is banned.
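The block-reassembly and padding steps from `docs/longhorn.md` (steps 6 and 7 of the no-cluster recovery) can be sketched in shell. This is a minimal sketch with two fake 1 KiB blocks standing in for real `lz4 -d` output, and an assumed nominal volume size of 4096 bytes; `truncate -s` extends the file with zeros, which is exactly the "append zeros until `fsck.ext4` is happy" step:

```shell
# Sketch: reassemble a volume image from blocks and pad it to its nominal size.
# Assumption: real blocks come from `lz4 -d`; here we fake two 1 KiB blocks.
cd "$(mktemp -d)"
printf 'A%.0s' $(seq 1024) > block1   # 1024 bytes of 'A'
printf 'B%.0s' $(seq 1024) > block2   # 1024 bytes of 'B'
cat block1 block2 > volume.img        # concatenate in the order listed in the backup config
# Pad with zeros up to the volume's nominal size (assumed 4096 bytes here),
# so fsck.ext4 sees a device of the expected length.
truncate -s 4096 volume.img
stat -c %s volume.img                 # prints 4096
```

In a real recovery the target size is the Longhorn volume's configured size, not 4096; `truncate` only ever grows the file here, so existing data is untouched.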
diff --git a/flake.lock b/flake.lock index f46b1f9..01157e9 100644 --- a/flake.lock +++ b/flake.lock @@ -24,30 +24,6 @@ "url": "https://git.kun.is/home/blog-pim" } }, - "blog-pim_2": { - "inputs": { - "flutils": "flutils_2", - "nginx": "nginx_2", - "nixpkgs": [ - "servers", - "nixpkgs" - ] - }, - "locked": { - "lastModified": 1715503080, - "narHash": "sha256-/VnzHTpTq3u0z2Vgu/vKU0SHwOUIu8olHDORWT0IofM=", - "ref": "refs/heads/master", - "rev": "7296f7f5bf5f089a5137036dcbd8058cf3e4a9e5", - "revCount": 21, - "type": "git", - "url": "https://git.kun.is/home/blog-pim" - }, - "original": { - "rev": "7296f7f5bf5f089a5137036dcbd8058cf3e4a9e5", - "type": "git", - "url": "https://git.kun.is/home/blog-pim" - } - }, "deploy-rs": { "inputs": { "flake-compat": "flake-compat_2", @@ -165,22 +141,6 @@ } }, "flake-compat_3": { - "flake": false, - "locked": { - "lastModified": 1673956053, - "narHash": "sha256-4gtG9iQuiKITOjNQQeQIpoIB6b16fm+504Ch3sNKLd8=", - "owner": "edolstra", - "repo": "flake-compat", - "rev": "35bb57c0c8d8b62bbfd284272c928ceb64ddbde9", - "type": "github" - }, - "original": { - "owner": "edolstra", - "repo": "flake-compat", - "type": "github" - } - }, - "flake-compat_4": { "flake": false, "locked": { "lastModified": 1696426674, @@ -285,42 +245,7 @@ }, "flake-utils_5": { "inputs": { - "systems": "systems_8" - }, - "locked": { - "lastModified": 1710146030, - "narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=", - "owner": "numtide", - "repo": "flake-utils", - "rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a", - "type": "github" - }, - "original": { - "owner": "numtide", - "repo": "flake-utils", - "type": "github" - } - }, - "flake-utils_6": { - "inputs": { - "systems": "systems_10" - }, - "locked": { - "lastModified": 1710146030, - "narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=", - "owner": "numtide", - "repo": "flake-utils", - "rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a", - "type": "github" - }, - "original": { - "id": 
"flake-utils", - "type": "indirect" - } - }, - "flake-utils_7": { - "inputs": { - "systems": "systems_11" + "systems": "systems_7" }, "locked": { "lastModified": 1710146030, @@ -354,24 +279,6 @@ "type": "github" } }, - "flutils_2": { - "inputs": { - "systems": "systems_6" - }, - "locked": { - "lastModified": 1710146030, - "narHash": "sha256-SZ5L6eA7HJ/nmkzGG7/ISclqe6oZdOZTNoesiInkXPQ=", - "owner": "numtide", - "repo": "flake-utils", - "rev": "b1d9ab70662946ef0850d488da1c9019f3a9752a", - "type": "github" - }, - "original": { - "owner": "numtide", - "repo": "flake-utils", - "type": "github" - } - }, "haumea": { "inputs": { "nixpkgs": [ @@ -417,30 +324,6 @@ "type": "github" } }, - "kubenix_2": { - "inputs": { - "flake-compat": "flake-compat_3", - "nixpkgs": [ - "servers", - "nixpkgs-unstable" - ], - "systems": "systems_9", - "treefmt": "treefmt_2" - }, - "locked": { - "lastModified": 1717788185, - "narHash": "sha256-Uc6QSQqJa2lyv/1W4StwoKrjtq7cFjlKNhdrtanToGo=", - "owner": "pizzapim", - "repo": "kubenix", - "rev": "a9590abe23a2f7577bc3271d90955e9ccc2923fe", - "type": "github" - }, - "original": { - "owner": "pizzapim", - "repo": "kubenix", - "type": "github" - } - }, "nginx": { "flake": false, "locked": { @@ -457,22 +340,6 @@ "type": "github" } }, - "nginx_2": { - "flake": false, - "locked": { - "lastModified": 1713277799, - "narHash": "sha256-VNDzQvUGeh54F3s6SIq6lBrp4RatURzJoJqVorexttA=", - "owner": "nginx", - "repo": "nginx", - "rev": "d8a849ae3c99ee5ca82c9a06074761e937dac6d6", - "type": "github" - }, - "original": { - "owner": "nginx", - "repo": "nginx", - "type": "github" - } - }, "nix-github-actions": { "inputs": { "nixpkgs": [ @@ -495,29 +362,6 @@ "type": "github" } }, - "nix-github-actions_2": { - "inputs": { - "nixpkgs": [ - "servers", - "nixhelm", - "poetry2nix", - "nixpkgs" - ] - }, - "locked": { - "lastModified": 1703863825, - "narHash": "sha256-rXwqjtwiGKJheXB43ybM8NwWB8rO2dSRrEqes0S7F5Y=", - "owner": "nix-community", - "repo": "nix-github-actions", - 
"rev": "5163432afc817cf8bd1f031418d1869e4c9d5547", - "type": "github" - }, - "original": { - "owner": "nix-community", - "repo": "nix-github-actions", - "type": "github" - } - }, "nix-kube-generators": { "locked": { "lastModified": 1708155396, @@ -533,24 +377,9 @@ "type": "github" } }, - "nix-kube-generators_2": { - "locked": { - "lastModified": 1708155396, - "narHash": "sha256-A/BIeJjiRS7sBYP6tFJa/WHDPHe7DGTCkSEKXttYeAQ=", - "owner": "farcaller", - "repo": "nix-kube-generators", - "rev": "14dbd5e5b40615937900f71d9a9851b59b4d9a88", - "type": "github" - }, - "original": { - "owner": "farcaller", - "repo": "nix-kube-generators", - "type": "github" - } - }, "nix-snapshotter": { "inputs": { - "flake-compat": "flake-compat_4", + "flake-compat": "flake-compat_3", "flake-parts": "flake-parts", "nixpkgs": [ "servers", @@ -595,30 +424,6 @@ "type": "github" } }, - "nixhelm_2": { - "inputs": { - "flake-utils": "flake-utils_6", - "nix-kube-generators": "nix-kube-generators_2", - "nixpkgs": [ - "servers", - "nixpkgs" - ], - "poetry2nix": "poetry2nix_2" - }, - "locked": { - "lastModified": 1722301678, - "narHash": "sha256-dlsJGdLiXGgBSr/7Y+invyY/9+jJsFF6UkUpD7WMXRM=", - "owner": "farcaller", - "repo": "nixhelm", - "rev": "5a983d9da254b178ac5b689405fb5b179815ef91", - "type": "github" - }, - "original": { - "owner": "farcaller", - "repo": "nixhelm", - "type": "github" - } - }, "nixos-hardware": { "locked": { "lastModified": 1722332872, @@ -637,11 +442,11 @@ }, "nixpkgs": { "locked": { - "lastModified": 1725432240, - "narHash": "sha256-+yj+xgsfZaErbfYM3T+QvEE2hU7UuE+Jf0fJCJ8uPS0=", + "lastModified": 1725634671, + "narHash": "sha256-v3rIhsJBOMLR8e/RNWxr828tB+WywYIoajrZKFM+0Gg=", "owner": "nixos", "repo": "nixpkgs", - "rev": "ad416d066ca1222956472ab7d0555a6946746a80", + "rev": "574d1eac1c200690e27b8eb4e24887f8df7ac27c", "type": "github" }, "original": { @@ -699,22 +504,6 @@ "type": "github" } }, - "nixpkgs_3": { - "locked": { - "lastModified": 1722221733, - "narHash": 
"sha256-sga9SrrPb+pQJxG1ttJfMPheZvDOxApFfwXCFO0H9xw=", - "owner": "nixos", - "repo": "nixpkgs", - "rev": "12bf09802d77264e441f48e25459c10c93eada2e", - "type": "github" - }, - "original": { - "owner": "nixos", - "ref": "nixos-24.05", - "repo": "nixpkgs", - "type": "github" - } - }, "poetry2nix": { "inputs": { "flake-utils": "flake-utils_3", @@ -740,32 +529,6 @@ "type": "github" } }, - "poetry2nix_2": { - "inputs": { - "flake-utils": "flake-utils_7", - "nix-github-actions": "nix-github-actions_2", - "nixpkgs": [ - "servers", - "nixhelm", - "nixpkgs" - ], - "systems": "systems_12", - "treefmt-nix": "treefmt-nix_2" - }, - "locked": { - "lastModified": 1718285706, - "narHash": "sha256-DScsBM+kZvxOva7QegfdtleebMXh30XPxDQr/1IGKYo=", - "owner": "nix-community", - "repo": "poetry2nix", - "rev": "a5be1bbbe0af0266147a88e0ec43b18c722f2bb9", - "type": "github" - }, - "original": { - "owner": "nix-community", - "repo": "poetry2nix", - "type": "github" - } - }, "root": { "inputs": { "blog-pim": "blog-pim", @@ -779,28 +542,30 @@ }, "servers": { "inputs": { - "blog-pim": "blog-pim_2", "deploy-rs": "deploy-rs", "disko": "disko", "dns": "dns_2", "flake-utils": "flake-utils_5", - "kubenix": "kubenix_2", "nix-snapshotter": "nix-snapshotter", - "nixhelm": "nixhelm_2", "nixos-hardware": "nixos-hardware", - "nixpkgs": "nixpkgs_3", + "nixpkgs": [ + "nixpkgs" + ], "nixpkgs-unstable": "nixpkgs-unstable", "sops-nix": "sops-nix" }, "locked": { - "lastModified": 1725657303, - "narHash": "sha256-cKNKseXyMs/F0Ix2TlFktjmNQPIpDFI9UZIeS9Jgixo=", - "path": "/home/pim/git/nixos-servers", - "type": "path" + "lastModified": 1725739157, + "narHash": "sha256-80fEhMTITIQN8/8cyjlqI/PKBWQG2cl2R/VAhGy3l3o=", + "ref": "refs/heads/master", + "rev": "ad4d78ed2a8272e6474f4ed04c42ef75bd27da8b", + "revCount": 470, + "type": "git", + "url": "https://git.kun.is/home/nixos-servers" }, "original": { - "path": "/home/pim/git/nixos-servers", - "type": "path" + "type": "git", + "url": 
"https://git.kun.is/home/nixos-servers" } }, "sops-nix": { @@ -840,50 +605,6 @@ "type": "github" } }, - "systems_10": { - "locked": { - "lastModified": 1681028828, - "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", - "owner": "nix-systems", - "repo": "default", - "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", - "type": "github" - }, - "original": { - "owner": "nix-systems", - "repo": "default", - "type": "github" - } - }, - "systems_11": { - "locked": { - "lastModified": 1681028828, - "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", - "owner": "nix-systems", - "repo": "default", - "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", - "type": "github" - }, - "original": { - "owner": "nix-systems", - "repo": "default", - "type": "github" - } - }, - "systems_12": { - "locked": { - "lastModified": 1681028828, - "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", - "owner": "nix-systems", - "repo": "default", - "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", - "type": "github" - }, - "original": { - "id": "systems", - "type": "indirect" - } - }, "systems_2": { "locked": { "lastModified": 1681028828, @@ -972,35 +693,6 @@ "type": "github" } }, - "systems_8": { - "locked": { - "lastModified": 1681028828, - "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", - "owner": "nix-systems", - "repo": "default", - "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", - "type": "github" - }, - "original": { - "owner": "nix-systems", - "repo": "default", - "type": "github" - } - }, - "systems_9": { - "locked": { - "lastModified": 1681028828, - "narHash": "sha256-Vy1rq5AaRuLzOxct8nz4T6wlgyUR7zLU309k9mBC768=", - "owner": "nix-systems", - "repo": "default", - "rev": "da67096a3b9bf56a91d16901293e51ba5b49a27e", - "type": "github" - }, - "original": { - "id": "systems", - "type": "indirect" - } - }, "treefmt": { "inputs": { "nixpkgs": [ @@ -1044,54 +736,9 @@ "type": "github" } }, - "treefmt-nix_2": { - "inputs": { - 
"nixpkgs": [ - "servers", - "nixhelm", - "poetry2nix", - "nixpkgs" - ] - }, - "locked": { - "lastModified": 1717850719, - "narHash": "sha256-npYqVg+Wk4oxnWrnVG7416fpfrlRhp/lQ6wQ4DHI8YE=", - "owner": "numtide", - "repo": "treefmt-nix", - "rev": "4fc1c45a5f50169f9f29f6a98a438fb910b834ed", - "type": "github" - }, - "original": { - "owner": "numtide", - "repo": "treefmt-nix", - "type": "github" - } - }, - "treefmt_2": { - "inputs": { - "nixpkgs": [ - "servers", - "kubenix", - "nixpkgs" - ] - }, - "locked": { - "lastModified": 1688026376, - "narHash": "sha256-qJmkr9BWDpqblk4E9/rCsAEl39y2n4Ycw6KRopvpUcY=", - "owner": "numtide", - "repo": "treefmt-nix", - "rev": "df3f32b0cc253dfc7009b7317e8f0e7ccd70b1cf", - "type": "github" - }, - "original": { - "owner": "numtide", - "repo": "treefmt-nix", - "type": "github" - } - }, "utils": { "inputs": { - "systems": "systems_7" + "systems": "systems_6" }, "locked": { "lastModified": 1701680307, diff --git a/flake.nix b/flake.nix index 4dd1c74..4b0444c 100644 --- a/flake.nix +++ b/flake.nix @@ -30,82 +30,15 @@ }; servers = { - # url = "git+https://git.kun.is/home/nixos-servers"; - type = "path"; - path = "/home/pim/git/nixos-servers"; + url = "git+https://git.kun.is/home/nixos-servers"; + inputs = { + nixpkgs.follows = "nixpkgs"; + }; }; }; - outputs = inputs@{ self, servers, flutils, nixpkgs, kubenix, ... }: flutils.lib.eachDefaultSystem - (system: - let - pkgs = nixpkgs.legacyPackages.${system}; - deployScript = (pkgs.writeScriptBin "applyset-deploy.sh" (builtins.readFile ./applyset-deploy.sh)).overrideAttrs (old: { - buildCommand = "${old.buildCommand}\npatchShebangs $out"; - }); - - machines = servers.machines.${system}; - - mkKubernetes = name: module: namespace: (kubenix.evalModules.${system} { - specialArgs = { - inherit namespace system machines; - inherit (servers) globals; - inherit (inputs) nixhelm blog-pim dns; - }; - - module = { kubenix, ... 
}:
-        {
-          imports = [
-            kubenix.modules.k8s
-            kubenix.modules.helm
-            ./modules
-            module
-          ];
-
-          config = {
-            kubenix.project = name;
-            kubernetes.namespace = namespace;
-          };
-        };
-      }).config.kubernetes;
-
-      mkManifest = name: { module, namespace }: {
-        name = "${name}-manifest";
-        value = (mkKubernetes name module namespace).result;
-      };
-
-      mkDeployApp = name: { module, namespace }:
-        let
-          kubernetes = mkKubernetes name module namespace;
-          kubeconfig = kubernetes.kubeconfig or "";
-          result = kubernetes.result or "";
-
-          wrappedDeployScript = pkgs.symlinkJoin
-            {
-              name = "applyset-deploy.sh";
-              paths = [ deployScript pkgs.vals pkgs.kubectl ];
-              buildInputs = [ pkgs.makeWrapper ];
-              passthru.manifest = result;
-              meta.mainProgram = "applyset-deploy.sh";
-
-              postBuild = ''
-                wrapProgram $out/bin/applyset-deploy.sh \
-                  --suffix PATH : "$out/bin" \
-                  --run 'export KUBECONFIG=''${KUBECONFIG:-${toString kubeconfig}}' \
-                  --set MANIFEST '${result}' \
-                  --set APPLYSET 'applyset-${name}' \
-                  --set NAMESPACE '${namespace}'
-              '';
-            };
-        in
-        {
-          name = "${name}-deploy";
-          value = wrappedDeployScript;
-        };
-
-      deployments = import ./deployments.nix;
-    in
-    {
-      packages = pkgs.lib.mergeAttrs (pkgs.lib.mapAttrs' mkDeployApp deployments) (pkgs.lib.mapAttrs' mkManifest deployments);
-    });
+  outputs = inputs@{ flutils, ... }: flutils.lib.meld inputs [
+    ./kubenix.nix
+    ./scripts
+  ];
 }

diff --git a/kubenix.nix b/kubenix.nix
new file mode 100644
index 0000000..7b3a8bb
--- /dev/null
+++ b/kubenix.nix
@@ -0,0 +1,72 @@
+inputs@{ servers, flutils, nixpkgs, kubenix, ... }: flutils.lib.eachDefaultSystem
+  (system:
+    let
+      pkgs = nixpkgs.legacyPackages.${system};
+      deployScript = (pkgs.writeScriptBin "applyset-deploy.sh" (builtins.readFile ./applyset-deploy.sh)).overrideAttrs (old: {
+        buildCommand = "${old.buildCommand}\npatchShebangs $out";
+      });
+
+      machines = servers.machines.${system};
+
+      mkKubernetes = name: module: namespace: (kubenix.evalModules.${system} {
+        specialArgs = {
+          inherit namespace system machines;
+          inherit (servers) globals;
+          inherit (inputs) nixhelm blog-pim dns;
+        };
+
+        module = { kubenix, ... }:
+          {
+            imports = [
+              kubenix.modules.k8s
+              kubenix.modules.helm
+              ./modules
+              module
+            ];
+
+            config = {
+              kubenix.project = name;
+              kubernetes.namespace = namespace;
+            };
+          };
+      }).config.kubernetes;
+
+      mkManifest = name: { module, namespace }: {
+        name = "${name}-manifest";
+        value = (mkKubernetes name module namespace).result;
+      };
+
+      mkDeployApp = name: { module, namespace }:
+        let
+          kubernetes = mkKubernetes name module namespace;
+          kubeconfig = kubernetes.kubeconfig or "";
+          result = kubernetes.result or "";
+
+          wrappedDeployScript = pkgs.symlinkJoin
+            {
+              name = "applyset-deploy.sh";
+              paths = [ deployScript pkgs.vals pkgs.kubectl ];
+              buildInputs = [ pkgs.makeWrapper ];
+              passthru.manifest = result;
+              meta.mainProgram = "applyset-deploy.sh";
+
+              postBuild = ''
+                wrapProgram $out/bin/applyset-deploy.sh \
+                  --suffix PATH : "$out/bin" \
+                  --run 'export KUBECONFIG=''${KUBECONFIG:-${toString kubeconfig}}' \
+                  --set MANIFEST '${result}' \
+                  --set APPLYSET 'applyset-${name}' \
+                  --set NAMESPACE '${namespace}'
+              '';
+            };
+        in
+        {
+          name = "${name}-deploy";
+          value = wrappedDeployScript;
+        };
+
+      deployments = import ./deployments.nix;
+    in
+    {
+      packages = pkgs.lib.mergeAttrs (pkgs.lib.mapAttrs' mkDeployApp deployments) (pkgs.lib.mapAttrs' mkManifest deployments);
+    })

diff --git a/scripts/default.nix b/scripts/default.nix
new file mode 100644
index 0000000..90dcbe2
--- /dev/null
+++ b/scripts/default.nix
@@ -0,0 +1,23 @@
+{ nixpkgs, flutils, ... }: flutils.lib.eachDefaultSystem (system:
+let
+  pkgs = nixpkgs.legacyPackages.${system};
+  createScript = { name, runtimeInputs, scriptPath, extraWrapperFlags ? "", ... }:
+    let
+      script = (pkgs.writeScriptBin name (builtins.readFile scriptPath)).overrideAttrs (old: {
+        buildCommand = "${old.buildCommand}\n patchShebangs $out";
+      });
+    in
+    pkgs.symlinkJoin {
+      inherit name;
+      paths = [ script ] ++ runtimeInputs;
+      buildInputs = [ pkgs.makeWrapper ];
+      postBuild = "wrapProgram $out/bin/${name} --set PATH $out/bin ${extraWrapperFlags}";
+    };
+in
+{
+  packages.gen-k3s-cert = createScript {
+    name = "gen-k3s-cert";
+    runtimeInputs = with pkgs; [ openssl coreutils openssh yq ];
+    scriptPath = ./gen-k3s-cert.sh;
+  };
+})

diff --git a/scripts/gen-k3s-cert.sh b/scripts/gen-k3s-cert.sh
new file mode 100644
index 0000000..405f9f9
--- /dev/null
+++ b/scripts/gen-k3s-cert.sh
@@ -0,0 +1,88 @@
+#!/usr/bin/env bash
+
+set -euo pipefail
+IFS=$'\n\t'
+
+username="${1-}"
+host="${2-}"
+output_path="${3:-.}"
+
+if [ -z "$username" ] || [ -z "$host" ]; then
+  echo "Usage: $0 USERNAME HOST [OUTPUTPATH]"
+  exit 1
+fi
+
+# Create a temporary directory
+temp=$(mktemp -d)
+
+# Function to clean up the temporary directory on exit
+cleanup() {
+  rm -rf "$temp"
+}
+trap cleanup EXIT
+
+echo Generating the private key
+openssl genpkey -algorithm ed25519 -out "$temp/key.pem"
+
+echo Generating the certificate request
+openssl req -new -key "$temp/key.pem" -out "$temp/req.csr" -subj "/CN=$username"
+
+echo Creating K8S CSR manifest
+csr="$(base64 < "$temp/req.csr" | tr -d '\n')"
+k8s_csr="apiVersion: certificates.k8s.io/v1
+kind: CertificateSigningRequest
+metadata:
+  name: $username-csr
+spec:
+  request: $csr
+  expirationSeconds: 307584000 # 10 years
+  signerName: kubernetes.io/kube-apiserver-client
+  usages:
+  - digital signature
+  - key encipherment
+  - client auth
+"
+
+echo Creating K8S CSR resource
+ssh "root@$host" "echo \"$k8s_csr\" | k3s kubectl apply -f -"
+
+echo Approving K8S CSR
+ssh "root@$host" "k3s kubectl certificate approve $username-csr"
+
+echo Retrieving approved certificate
+encoded_cert="$(ssh "root@$host" "k3s kubectl get csr $username-csr -o jsonpath='{.status.certificate}'")"
+
+echo Retrieving default K3S kubeconfig
+base_kubeconfig="$(ssh "root@$host" "cat /etc/rancher/k3s/k3s.yaml")"
+
+echo Getting certificate authority data from default kubeconfig
+cert_authority_data="$(echo -n "$base_kubeconfig" | yq -r '.clusters[0].cluster."certificate-authority-data"')"
+
+echo Generating final kubeconfig
+result_kubeconfig="apiVersion: v1
+clusters:
+- cluster:
+    certificate-authority-data: $cert_authority_data
+    server: https://$host:6443
+  name: default
+contexts:
+- context:
+    cluster: default
+    user: $username
+  name: default
+current-context: default
+kind: Config
+preferences: {}
+users:
+- name: $username
+  user:
+    client-certificate: $username.crt
+    client-key: $username.key
+"
+
+echo Writing resulting files to "$output_path"
+echo -n "$encoded_cert" | base64 -d > "$output_path/$username.crt"
+echo -n "$result_kubeconfig" > "$output_path/config"
+cp "$temp/key.pem" "$output_path/$username.key"
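The OpenSSL portion of `gen-k3s-cert.sh` can be exercised standalone, without SSH access or a cluster. A minimal sketch with a throwaway username (`demo` is an arbitrary placeholder):

```shell
# Reproduce the key/CSR generation steps from gen-k3s-cert.sh with a dummy user.
username=demo
temp=$(mktemp -d)
openssl genpkey -algorithm ed25519 -out "$temp/key.pem"
openssl req -new -key "$temp/key.pem" -out "$temp/req.csr" -subj "/CN=$username"
# The CSR must be base64-encoded onto a single line before it can be embedded
# in the CertificateSigningRequest manifest's spec.request field:
csr="$(base64 < "$temp/req.csr" | tr -d '\n')"
# Sanity-check the subject the API server will see; the CN becomes the
# Kubernetes username, which RBAC bindings must reference.
openssl req -in "$temp/req.csr" -noout -subject
```

This is useful to verify that the locally installed OpenSSL supports Ed25519 keys before running the full script against the master node.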