Remove jekyll cache
This commit is contained in:
parent
03cf79e8f0
commit
e242401553
9 changed files with 0 additions and 851 deletions
|
@ -1 +0,0 @@
|
|||
I"ÿ{"source"=>"/home/pim/git/blog-pim", "destination"=>"/home/pim/git/blog-pim/_site", "collections_dir"=>"", "cache_dir"=>".jekyll-cache", "plugins_dir"=>"_plugins", "layouts_dir"=>"_layouts", "data_dir"=>"_data", "includes_dir"=>"_includes", "collections"=>{"posts"=>{"output"=>true, "permalink"=>"/:categories/:year/:month/:day/:title:output_ext"}}, "safe"=>false, "include"=>[".htaccess"], "exclude"=>[".sass-cache", ".jekyll-cache", "gemfiles", "Gemfile", "Gemfile.lock", "node_modules", "vendor/bundle/", "vendor/cache/", "vendor/gems/", "vendor/ruby/"], "keep_files"=>[".git", ".svn"], "encoding"=>"utf-8", "markdown_ext"=>"markdown,mkdown,mkdn,mkd,md", "strict_front_matter"=>false, "show_drafts"=>nil, "limit_posts"=>0, "future"=>false, "unpublished"=>false, "whitelist"=>[], "plugins"=>[], "markdown"=>"kramdown", "highlighter"=>"rouge", "lsi"=>false, "excerpt_separator"=>"\n\n", "incremental"=>false, "detach"=>false, "port"=>"4000", "host"=>"127.0.0.1", "baseurl"=>nil, "show_dir_listing"=>false, "permalink"=>"date", "paginate_path"=>"/page:num", "timezone"=>nil, "quiet"=>false, "verbose"=>false, "defaults"=>[], "liquid"=>{"error_mode"=>"warn", "strict_filters"=>false, "strict_variables"=>false}, "kramdown"=>{"auto_ids"=>true, "toc_levels"=>[1, 2, 3, 4, 5, 6], "entity_output"=>"as_char", "smart_quotes"=>"lsquo,rsquo,ldquo,rdquo", "input"=>"GFM", "hard_wrap"=>false, "guess_lang"=>true, "footnote_nr"=>1, "show_warnings"=>false}, "livereload_port"=>35729, "serving"=>true, "watch"=>true, "url"=>"http://localhost:4000"}:ET
|
|
@ -1,180 +0,0 @@
|
|||
<p>Recently, I deployed <a href="https://concourse-ci.org/">Concourse CI</a> because I wanted to get my feet wet with a CI/CD pipeline.
|
||||
However, I had a practical use case lying around for a long time: automatically compiling my static website and deploying it to my Docker Swarm.
|
||||
This took some time to get right, but the result works like a charm (<a href="https://git.kun.is/pim/static">source code</a>).</p>
|
||||
|
||||
<p>It’s comforting to know I don’t have to lift a finger and my website is automatically deployed.
|
||||
However, I would still like to receive some indication of what’s happening.
|
||||
And what better way to do that than using my <a href="https://github.com/caronc/apprise">Apprise</a> service to keep me up to date?
|
||||
There’s a little snag though: I could not find any Concourse resource that does this.
|
||||
That’s when I decided to just create it myself.</p>
|
||||
|
||||
<h1 id="the-plagiarism-hunt">The Plagiarism Hunt</h1>
|
||||
|
||||
<p>Like any good computer person, I am lazy.
|
||||
I’d rather just copy someone’s work, so that’s what I did.
|
||||
I found <a href="https://github.com/mockersf/concourse-slack-notifier">this</a> GitHub repository that does the same thing but for Slack notifications.
|
||||
For some reason it’s archived, but it seemed like it should work.
|
||||
I actually noticed lots of repositories for Concourse resource types are archived, so I’m not sure what’s going on there.</p>
|
||||
|
||||
<h1 id="getting-to-know-concourse">Getting to know Concourse</h1>
|
||||
|
||||
<p>Let’s first understand what we need to do to reach our end goal of sending Apprise notifications from Concourse.</p>
|
||||
|
||||
<p>A Concourse pipeline takes some inputs, performs some operations on them which result in some outputs.
|
||||
These inputs and outputs are called <em>resources</em> in Concourse.
|
||||
For example, a Git repository could be a resource.
|
||||
Each resource is an instance of a <em>resource type</em>.
|
||||
A resource type is therefore simply a blueprint that can create multiple resources.
|
||||
To continue the example, a resource type could be “Git repository”.</p>
|
||||
|
||||
<p>We therefore need to create our own resource type that can send Apprise notifications.
|
||||
A resource type is simply a container that includes three scripts:</p>
|
||||
<ul>
|
||||
<li><code class="language-plaintext highlighter-rouge">check</code>: check for a new version of a resource</li>
|
||||
<li><code class="language-plaintext highlighter-rouge">in</code>: retrieve a version of the resource</li>
|
||||
<li><code class="language-plaintext highlighter-rouge">out</code>: create a version of the resource</li>
|
||||
</ul>
|
||||
|
||||
<p>As Apprise notifications are basically fire-and-forget, we will only implement the <code class="language-plaintext highlighter-rouge">out</code> script.</p>
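<p>For completeness, the other two scripts can be no-op stubs. A minimal sketch of <code class="language-plaintext highlighter-rouge">check</code> (my assumption of what a fire-and-forget resource needs, not code taken from the repository) simply reports that there are no versions to discover:</p>

```shell
#!/bin/sh
# check: a notification resource never has new versions to report,
# so emit an empty JSON array and exit successfully.
versions='[]'
echo "$versions"
```

<p>An <code class="language-plaintext highlighter-rouge">in</code> stub would similarly just echo back the version it was asked for.</p>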
|
||||
|
||||
<h1 id="writing-the-out-script">Writing the <code class="language-plaintext highlighter-rouge">out</code> script</h1>
|
||||
|
||||
<p>The whole script can be found <a href="https://git.kun.is/pim/concourse-apprise-notifier/src/branch/master/out">here</a>, but I will explain the most important bits of it.
|
||||
Note that I only use Apprise’s persistent storage solution, and not its stateless solution.</p>
|
||||
|
||||
<p>Concourse provides us with the working directory, which we <code class="language-plaintext highlighter-rouge">cd</code> to:</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd</span> <span class="s2">"</span><span class="k">${</span><span class="nv">1</span><span class="k">}</span><span class="s2">"</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>We create a timestamp, formatted in JSON, which we will use for the resource’s new version later.
|
||||
Concourse requires us to set a version for the resource, but since Apprise notifications don’t have that, we use the timestamp:</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">timestamp</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-n</span> <span class="s2">"{version:{timestamp:</span><span class="se">\"</span><span class="si">$(</span><span class="nb">date</span> +%s<span class="si">)</span><span class="se">\"</span><span class="s2">}}"</span><span class="si">)</span><span class="s2">"</span>
|
||||
</code></pre></div></div>
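<p>Run in isolation (assuming <code class="language-plaintext highlighter-rouge">jq</code> and <code class="language-plaintext highlighter-rouge">date</code> are available), the same command produces a version object along the lines of <code class="language-plaintext highlighter-rouge">{"version":{"timestamp":"1700000000"}}</code>:</p>

```shell
# Wrap the current Unix time in the version object Concourse expects.
timestamp="$(jq -n "{version:{timestamp:\"$(date +%s)\"}}")"
echo "$timestamp"
```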
|
||||
|
||||
<p>First, some black magic Bash to redirect file descriptors.
|
||||
This is needed because Concourse reads the resource’s version from stdout: we save the original stdout as file descriptor 3 and redirect everything else to stderr, so that stray output from the script cannot corrupt the version JSON.
|
||||
After that, we create a temporary file holding the resource’s parameters.</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">exec </span>3>&1
|
||||
<span class="nb">exec </span>1>&2
|
||||
|
||||
<span class="nv">payload</span><span class="o">=</span><span class="si">$(</span><span class="nb">mktemp</span> /tmp/resource-in.XXXXXX<span class="si">)</span>
|
||||
<span class="nb">cat</span> <span class="o">></span> <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span> <&0
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>We then extract the individual parameters.
|
||||
The <code class="language-plaintext highlighter-rouge">source</code> key holds the configuration the resource was defined with in the pipeline, while the <code class="language-plaintext highlighter-rouge">params</code> key holds the parameters given to this particular <code class="language-plaintext highlighter-rouge">put</code> step.</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">apprise_host</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'.source.host'</span> < <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span>
|
||||
<span class="nv">apprise_key</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'.source.key'</span> < <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span>
|
||||
|
||||
<span class="nv">alert_body</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'.params.body'</span> < <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span>
|
||||
<span class="nv">alert_title</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'.params.title // null'</span> < <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span>
|
||||
<span class="nv">alert_type</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'.params.type // null'</span> < <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span>
|
||||
<span class="nv">alert_tag</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'.params.tag // null'</span> < <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span>
|
||||
<span class="nv">alert_format</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'.params.format // null'</span> < <span class="s2">"</span><span class="k">${</span><span class="nv">payload</span><span class="k">}</span><span class="s2">"</span><span class="si">)</span><span class="s2">"</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>We then escape each of the parameters as a JSON string:</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">alert_body</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span><span class="nb">eval</span> <span class="s2">"printf </span><span class="se">\"</span><span class="k">${</span><span class="nv">alert_body</span><span class="k">}</span><span class="se">\"</span><span class="s2">"</span> | jq <span class="nt">-R</span> <span class="nt">-s</span> .<span class="si">)</span><span class="s2">"</span>
|
||||
<span class="o">[</span> <span class="s2">"</span><span class="k">${</span><span class="nv">alert_title</span><span class="k">}</span><span class="s2">"</span> <span class="o">!=</span> <span class="s2">"null"</span> <span class="o">]</span> <span class="o">&&</span> <span class="nv">alert_title</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span><span class="nb">eval</span> <span class="s2">"printf </span><span class="se">\"</span><span class="k">${</span><span class="nv">alert_title</span><span class="k">}</span><span class="se">\"</span><span class="s2">"</span> | jq <span class="nt">-R</span> <span class="nt">-s</span> .<span class="si">)</span><span class="s2">"</span>
|
||||
<span class="o">[</span> <span class="s2">"</span><span class="k">${</span><span class="nv">alert_type</span><span class="k">}</span><span class="s2">"</span> <span class="o">!=</span> <span class="s2">"null"</span> <span class="o">]</span> <span class="o">&&</span> <span class="nv">alert_type</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span><span class="nb">eval</span> <span class="s2">"printf </span><span class="se">\"</span><span class="k">${</span><span class="nv">alert_type</span><span class="k">}</span><span class="se">\"</span><span class="s2">"</span> | jq <span class="nt">-R</span> <span class="nt">-s</span> .<span class="si">)</span><span class="s2">"</span>
|
||||
<span class="o">[</span> <span class="s2">"</span><span class="k">${</span><span class="nv">alert_tag</span><span class="k">}</span><span class="s2">"</span> <span class="o">!=</span> <span class="s2">"null"</span> <span class="o">]</span> <span class="o">&&</span> <span class="nv">alert_tag</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span><span class="nb">eval</span> <span class="s2">"printf </span><span class="se">\"</span><span class="k">${</span><span class="nv">alert_tag</span><span class="k">}</span><span class="se">\"</span><span class="s2">"</span> | jq <span class="nt">-R</span> <span class="nt">-s</span> .<span class="si">)</span><span class="s2">"</span>
|
||||
<span class="o">[</span> <span class="s2">"</span><span class="k">${</span><span class="nv">alert_format</span><span class="k">}</span><span class="s2">"</span> <span class="o">!=</span> <span class="s2">"null"</span> <span class="o">]</span> <span class="o">&&</span> <span class="nv">alert_format</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span><span class="nb">eval</span> <span class="s2">"printf </span><span class="se">\"</span><span class="k">${</span><span class="nv">alert_format</span><span class="k">}</span><span class="se">\"</span><span class="s2">"</span> | jq <span class="nt">-R</span> <span class="nt">-s</span> .<span class="si">)</span><span class="s2">"</span>
|
||||
</code></pre></div></div>
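<p>What the <code class="language-plaintext highlighter-rouge">jq -R -s .</code> idiom does is easiest to see in isolation (a standalone demo, assuming <code class="language-plaintext highlighter-rouge">jq</code> is installed):</p>

```shell
# -R reads raw text instead of JSON, -s slurps all input into one string;
# the output is that string as a single JSON string literal.
escaped="$(printf 'Static website deployed!' | jq -R -s .)"
echo "$escaped"   # "Static website deployed!"
```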
|
||||
|
||||
<p>Next, from the individual parameters we construct the final JSON message body we send to the Apprise endpoint.</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">body</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span><span class="nb">cat</span> <span class="o"><<</span><span class="no">EOF</span><span class="sh">
|
||||
{
|
||||
"body": </span><span class="k">${</span><span class="nv">alert_body</span><span class="k">}</span><span class="sh">,
|
||||
"title": </span><span class="k">${</span><span class="nv">alert_title</span><span class="k">}</span><span class="sh">,
|
||||
"type": </span><span class="k">${</span><span class="nv">alert_type</span><span class="k">}</span><span class="sh">,
|
||||
"tag": </span><span class="k">${</span><span class="nv">alert_tag</span><span class="k">}</span><span class="sh">,
|
||||
"format": </span><span class="k">${</span><span class="nv">alert_format</span><span class="k">}</span><span class="sh">
|
||||
}
|
||||
</span><span class="no">EOF
|
||||
</span><span class="si">)</span><span class="s2">"</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>Before we send it off, we compact the JSON and remove any values that are <code class="language-plaintext highlighter-rouge">null</code>:</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">compact_body</span><span class="o">=</span><span class="s2">"</span><span class="si">$(</span><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">body</span><span class="k">}</span><span class="s2">"</span> | jq <span class="nt">-c</span> <span class="s1">'.'</span><span class="si">)</span><span class="s2">"</span>
|
||||
<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$compact_body</span><span class="s2">"</span> | jq <span class="s1">'del(..|nulls)'</span> <span class="o">></span> /tmp/compact_body.json
|
||||
</code></pre></div></div>
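<p>The <code class="language-plaintext highlighter-rouge">del(..|nulls)</code> filter walks the whole document and drops every null value; a quick standalone demo (assuming <code class="language-plaintext highlighter-rouge">jq</code>):</p>

```shell
# Recursively delete all null-valued fields from a JSON object.
body='{"body":"hi","title":null,"tag":null}'
cleaned="$(echo "$body" | jq -c 'del(..|nulls)')"
echo "$cleaned"   # {"body":"hi"}
```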
|
||||
|
||||
<p>Here is the most important line, where we send the payload to the Apprise endpoint.
|
||||
It’s quite straightforward.</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl <span class="nt">-v</span> <span class="nt">-X</span> POST <span class="nt">-T</span> /tmp/compact_body.json <span class="nt">-H</span> <span class="s2">"Content-Type: application/json"</span> <span class="s2">"</span><span class="k">${</span><span class="nv">apprise_host</span><span class="k">}</span><span class="s2">/notify/</span><span class="k">${</span><span class="nv">apprise_key</span><span class="k">}</span><span class="s2">"</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>Finally, we print the timestamp (fake version) in order to appease the Concourse gods.</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">echo</span> <span class="s2">"</span><span class="k">${</span><span class="nv">timestamp</span><span class="k">}</span><span class="s2">"</span> <span class="o">></span>&3
|
||||
</code></pre></div></div>
|
||||
|
||||
<h1 id="building-the-container">Building the Container</h1>
|
||||
|
||||
<p>As said earlier, to actually use this script, we need to add it to an image.
|
||||
I won’t be explaining this whole process, but the source can be found <a href="https://git.kun.is/pim/concourse-apprise-notifier/src/branch/master/pipeline.yml">here</a>.
|
||||
The most important take-aways are these:</p>
|
||||
<ul>
|
||||
<li>Use <code class="language-plaintext highlighter-rouge">concourse/oci-build-task</code> to build an image from a Dockerfile.</li>
|
||||
<li>Use <code class="language-plaintext highlighter-rouge">registry-image</code> to push the image to an image registry.</li>
|
||||
</ul>
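<p>The Dockerfile itself is not shown in this post, but the essence is small: Concourse expects a resource type’s scripts under <code class="language-plaintext highlighter-rouge">/opt/resource</code>, and the image needs <code class="language-plaintext highlighter-rouge">bash</code>, <code class="language-plaintext highlighter-rouge">curl</code> and <code class="language-plaintext highlighter-rouge">jq</code> at runtime. A hypothetical sketch (the base image and package names are my assumptions, not the repository’s actual Dockerfile):</p>

```dockerfile
# Hypothetical sketch of a notifier resource-type image.
FROM alpine:3.19
# Runtime dependencies of the out script.
RUN apk add --no-cache bash curl jq
# Concourse invokes /opt/resource/{check,in,out} inside this container.
COPY check in out /opt/resource/
RUN chmod +x /opt/resource/check /opt/resource/in /opt/resource/out
```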
|
||||
|
||||
<h1 id="using-the-resource-type">Using the Resource Type</h1>
|
||||
|
||||
<p>Using our newly created resource type is surprisingly simple.
|
||||
I use it for the blog you are reading right now and the pipeline definition can be found <a href="https://git.kun.is/pim/static/src/branch/main/pipeline.yml">here</a>.
|
||||
Here we specify the resource type in a Concourse pipeline:</p>
|
||||
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">resource_types</span><span class="pi">:</span>
|
||||
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">apprise</span>
|
||||
<span class="na">type</span><span class="pi">:</span> <span class="s">registry-image</span>
|
||||
<span class="na">source</span><span class="pi">:</span>
|
||||
<span class="na">repository</span><span class="pi">:</span> <span class="s">git.kun.is/pim/concourse-apprise-notifier</span>
|
||||
<span class="na">tag</span><span class="pi">:</span> <span class="s2">"</span><span class="s">1.1.1"</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>We simply have to tell Concourse where to find the image, and which tag we want.
|
||||
Next, we instantiate the resource type to create a resource:</p>
|
||||
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">resources</span><span class="pi">:</span>
|
||||
<span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">apprise-notification</span>
|
||||
<span class="na">type</span><span class="pi">:</span> <span class="s">apprise</span>
|
||||
<span class="na">source</span><span class="pi">:</span>
|
||||
<span class="na">host</span><span class="pi">:</span> <span class="s">https://apprise.kun.is:444</span>
|
||||
<span class="na">key</span><span class="pi">:</span> <span class="s">concourse</span>
|
||||
<span class="na">icon</span><span class="pi">:</span> <span class="s">bell</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>We simply specify the host to send Apprise notifications to.
|
||||
Yeah, I even gave it a little bell because it’s cute.</p>
|
||||
|
||||
<p>All that’s left to do is actually send the notification.
|
||||
Let’s see how that is done:</p>
|
||||
<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">deploy-static-website</span>
|
||||
<span class="na">plan</span><span class="pi">:</span>
|
||||
<span class="pi">-</span> <span class="na">task</span><span class="pi">:</span> <span class="s">deploy-site</span>
|
||||
<span class="na">config</span><span class="pi">:</span> <span class="s">...</span>
|
||||
|
||||
<span class="na">on_success</span><span class="pi">:</span>
|
||||
<span class="err"> </span><span class="na">put</span><span class="pi">:</span> <span class="s">apprise-notification</span>
|
||||
<span class="na"> params</span><span class="pi">:</span>
|
||||
<span class="err"> </span> <span class="na">title</span><span class="pi">:</span> <span class="s2">"</span><span class="s">Static</span><span class="nv"> </span><span class="s">website</span><span class="nv"> </span><span class="s">deployed!"</span>
|
||||
<span class="err"> </span> <span class="na">body</span><span class="pi">:</span> <span class="s2">"</span><span class="s">New</span><span class="nv"> </span><span class="s">version:</span><span class="nv"> </span><span class="s">$(cat</span><span class="nv"> </span><span class="s">version/version)"</span>
|
||||
<span class="err"> </span><span class="na">no_get</span><span class="pi">:</span> <span class="no">true</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>As can be seen, the Apprise notification can be triggered when a task is executed successfully.
|
||||
We do this using the <code class="language-plaintext highlighter-rouge">put</code> step, which executes the <code class="language-plaintext highlighter-rouge">out</code> script under the hood.
|
||||
We set the notification’s title and body, and send it!
|
||||
The result can be seen below in my Ntfy app, to which Apprise forwards the message:
|
||||
<img src="ntfy.png" alt="picture showing my Ntfy app with the Apprise notification" /></p>
|
||||
|
||||
<p>And to finish this off, here is what it looks like in the Concourse web UI:
|
||||
<img src="pipeline.png" alt="the concourse web gui showing the pipeline of my static website including the the apprise notification resources" /></p>
|
||||
|
||||
<h1 id="conclusion">Conclusion</h1>
|
||||
|
||||
<p>Concourse’s way of representing everything as an image/container is really interesting in my opinion.
|
||||
A resource type is quite easily implemented as well, although Bash might not be the optimal way to do this.
|
||||
I’ve seen some people implement it in Rust, which might be a good excuse to finally learn that language :)</p>
|
||||
|
||||
<p>Apart from Apprise notifications, I’m planning on creating a resource type to deploy to a Docker swarm eventually.
|
||||
That seems a lot harder than simply sending notifications, though.</p>
|
||||
|
|
@ -1,170 +0,0 @@
|
|||
<p>Ever SSH’ed into a freshly installed server and gotten the following annoying message?</p>
|
||||
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The authenticity of host 'host.tld (1.2.3.4)' can't be established.
|
||||
ED25519 key fingerprint is SHA256:eUXGdm1YdsMAS7vkdx6dOJdOGHdem5gQp4tadCfdLB8.
|
||||
Are you sure you want to continue connecting (yes/no)?
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>Or even more annoying:</p>
|
||||
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
|
||||
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
|
||||
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
|
||||
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
|
||||
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
|
||||
It is also possible that a host key has just been changed.
|
||||
The fingerprint for the ED25519 key sent by the remote host is
|
||||
SHA256:eUXGdm1YdsMAS7vkdx6dOJdOGHdem5gQp4tadCfdLB8.
|
||||
Please contact your system administrator.
|
||||
Add correct host key in /home/user/.ssh/known_hosts to get rid of this message.
|
||||
Offending ED25519 key in /home/user/.ssh/known_hosts:3
|
||||
remove with:
|
||||
ssh-keygen -f "/etc/ssh/ssh_known_hosts" -R "1.2.3.4"
|
||||
ED25519 host key for 1.2.3.4 has changed and you have requested strict checking.
|
||||
Host key verification failed.
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>Could it be that the programmers at OpenSSH simply like to annoy us with these confusing messages?
|
||||
Maybe, but these warnings also serve as a way to notify users of a potential Man-in-the-Middle (MITM) attack.
|
||||
I won’t go into the details of this problem, but I refer you to <a href="https://blog.g3rt.nl/ssh-host-key-validation-strict-yet-user-friendly.html">this excellent blog post</a>.
|
||||
Instead, I would like to talk about ways to solve these annoying warnings.</p>
|
||||
|
||||
<p>One obvious solution is simply to add each host to your <code class="language-plaintext highlighter-rouge">known_hosts</code> file.
|
||||
This works okay when managing a handful of servers, but becomes unbearable when managing many servers.
|
||||
In my case, I wanted to quickly spin up virtual machines using Duncan Mac-Vicar’s <a href="https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs">Terraform Libvirt provider</a>, without having to accept their host key before connecting.
|
||||
The solution? Issuing SSH host certificates using an SSH certificate authority.</p>
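<p>On the client side, trusting such a CA later comes down to a single <code class="language-plaintext highlighter-rouge">@cert-authority</code> line in a <code class="language-plaintext highlighter-rouge">known_hosts</code> file (the host pattern and the shortened key below are illustrative):</p>

```text
# /etc/ssh/ssh_known_hosts on each client:
@cert-authority *.dmz ssh-ed25519 AAAA...
```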
|
||||
|
||||
<h2 id="ssh-certificate-authorities-vs-the-web">SSH Certificate Authorities vs. the Web</h2>
|
||||
|
||||
<p>The idea of an SSH certificate authority (CA) is quite easy to grasp, if you understand the web’s Public Key Infrastructure (PKI).
|
||||
Just like with the web, a trusted party can issue certificates that are offered when establishing a connection.
|
||||
The idea is that, just by trusting this party, you trust every certificate it issues.
|
||||
In the case of the web’s PKI, these trusted parties come bundled with, and are trusted by, <a href="https://wiki.mozilla.org/CA">your browser</a> or operating system.
|
||||
However, in the case of SSH, the trusted party is you! (Okay, you can also run and trust your own web certificate authority.)
|
||||
With this great power comes great responsibility, which we will abuse heavily in this article.</p>
|
||||
|
||||
<h2 id="ssh-certificate-authority-for-terraform">SSH Certificate Authority for Terraform</h2>
|
||||
|
||||
<p>So, let’s start with a plan.
|
||||
I want to spawn virtual machines with Terraform, which are automatically provisioned with an SSH host certificate issued by my CA.
|
||||
This CA will be another host on my private network, issuing certificates over SSH.</p>
|
||||
|
||||
<h3 id="fetching-the-ssh-host-certificate">Fetching the SSH Host Certificate</h3>
|
||||
|
||||
<p>First we generate an SSH key pair in Terraform.
|
||||
Below is the code for that:</p>
|
||||
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"tls_private_key"</span> <span class="s2">"debian"</span> <span class="p">{</span>
|
||||
<span class="nx">algorithm</span> <span class="p">=</span> <span class="s2">"ED25519"</span>
|
||||
<span class="p">}</span>
|
||||
|
||||
<span class="k">data</span> <span class="s2">"tls_public_key"</span> <span class="s2">"debian"</span> <span class="p">{</span>
|
||||
<span class="nx">private_key_pem</span> <span class="p">=</span> <span class="nx">tls_private_key</span><span class="p">.</span><span class="nx">debian</span><span class="p">.</span><span class="nx">private_key_pem</span>
|
||||
<span class="p">}</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>Now that we have an SSH key pair, we need some way for Terraform to communicate it to the CA.
|
||||
Luckily for us, Terraform can execute an arbitrary program using the <code class="language-plaintext highlighter-rouge">external</code> data source.
|
||||
We use it to call the script below:</p>
|
||||
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">data</span> <span class="s2">"external"</span> <span class="s2">"cert"</span> <span class="p">{</span>
|
||||
<span class="nx">program</span> <span class="p">=</span> <span class="p">[</span><span class="s2">"bash"</span><span class="p">,</span> <span class="s2">"</span><span class="k">${</span><span class="nx">path</span><span class="p">.</span><span class="k">module}</span><span class="s2">/get_cert.sh"</span><span class="p">]</span>
|
||||
|
||||
<span class="nx">query</span> <span class="p">=</span> <span class="p">{</span>
|
||||
<span class="nx">pubkey</span> <span class="p">=</span> <span class="nx">trimspace</span><span class="p">(</span><span class="k">data</span><span class="p">.</span><span class="nx">tls_public_key</span><span class="p">.</span><span class="nx">debian</span><span class="p">.</span><span class="nx">public_key_openssh</span><span class="p">)</span>
|
||||
<span class="nx">host</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">name</span>
|
||||
<span class="nx">cahost</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">ca_host</span>
|
||||
<span class="nx">cascript</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">ca_script</span>
|
||||
<span class="nx">cakey</span> <span class="p">=</span> <span class="kd">var</span><span class="p">.</span><span class="nx">ca_key</span>
|
||||
<span class="p">}</span>
|
||||
<span class="p">}</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>These query parameters will end up in the script’s stdin in JSON format.
|
||||
We can then read these parameters, and send them to the CA over SSH.
|
||||
The result must be in JSON format as well.</p>
|
||||
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
|
||||
<span class="nb">set</span> <span class="nt">-euo</span> pipefail
|
||||
<span class="nv">IFS</span><span class="o">=</span><span class="s1">$'</span><span class="se">\n\t</span><span class="s1">'</span>
|
||||
|
||||
<span class="c"># Read the query parameters</span>
|
||||
<span class="nb">eval</span> <span class="s2">"</span><span class="si">$(</span>jq <span class="nt">-r</span> <span class="s1">'@sh "PUBKEY=\(.pubkey) HOST=\(.host) CAHOST=\(.cahost) CASCRIPT=\(.cascript) CAKEY=\(.cakey)"'</span><span class="si">)</span><span class="s2">"</span>
|
||||
|
||||
<span class="c"># Fetch certificate from the CA</span>
|
||||
<span class="c"># Warning: extremely ugly code that I am too lazy to fix</span>
|
||||
<span class="nv">CERT</span><span class="o">=</span><span class="si">$(</span>ssh <span class="nt">-o</span> <span class="nv">ConnectTimeout</span><span class="o">=</span>3 <span class="nt">-o</span> <span class="nv">ConnectionAttempts</span><span class="o">=</span>1 root@<span class="nv">$CAHOST</span> <span class="s1">'"'</span><span class="s2">"</span><span class="nv">$CASCRIPT</span><span class="s2">"</span><span class="s1">'" host "'</span><span class="s2">"</span><span class="nv">$CAKEY</span><span class="s2">"</span><span class="s1">'" "'</span><span class="s2">"</span><span class="nv">$PUBKEY</span><span class="s2">"</span><span class="s1">'" "'</span><span class="s2">"</span><span class="nv">$HOST</span><span class="s2">"</span><span class="s1">'".dmz'</span><span class="si">)</span>
|
||||
|
||||
jq <span class="nt">-n</span> <span class="nt">--arg</span> cert <span class="s2">"</span><span class="nv">$CERT</span><span class="s2">"</span> <span class="s1">'{"cert":$cert}'</span>
|
||||
</code></pre></div></div>
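<p>The <code class="language-plaintext highlighter-rouge">@sh</code> construct on the first <code class="language-plaintext highlighter-rouge">jq</code> line deserves a note: it renders each JSON value as a shell-quoted word, so the <code class="language-plaintext highlighter-rouge">eval</code> can safely turn the query into variables. A standalone demo (assuming <code class="language-plaintext highlighter-rouge">jq</code>; the values are made up):</p>

```shell
# Turn a JSON object into shell variable assignments, as the script does.
json='{"pubkey":"ssh-ed25519 KEY","host":"vm1"}'
eval "$(echo "$json" | jq -r '@sh "PUBKEY=\(.pubkey) HOST=\(.host)"')"
echo "$PUBKEY"   # ssh-ed25519 KEY
echo "$HOST"     # vm1
```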
|
||||
|
||||
<p>We see that a script is called on the remote host, which issues the certificate.
This script is just a simple wrapper around <code class="language-plaintext highlighter-rouge">ssh-keygen</code>, shown below.</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="nb">set</span> <span class="nt">-euo</span> pipefail
<span class="nv">IFS</span><span class="o">=</span><span class="s1">$'</span><span class="se">\n\t</span><span class="s1">'</span>

host<span class="o">()</span> <span class="o">{</span>
<span class="nv">CAKEY</span><span class="o">=</span><span class="s2">"</span><span class="nv">$2</span><span class="s2">"</span>
<span class="nv">PUBKEY</span><span class="o">=</span><span class="s2">"</span><span class="nv">$3</span><span class="s2">"</span>
<span class="nv">HOST</span><span class="o">=</span><span class="s2">"</span><span class="nv">$4</span><span class="s2">"</span>

<span class="nb">echo</span> <span class="s2">"</span><span class="nv">$PUBKEY</span><span class="s2">"</span> <span class="o">></span> /root/ca/<span class="s2">"</span><span class="nv">$HOST</span><span class="s2">"</span>.pub
ssh-keygen <span class="nt">-h</span> <span class="nt">-s</span> /root/ca/keys/<span class="s2">"</span><span class="nv">$CAKEY</span><span class="s2">"</span> <span class="nt">-I</span> <span class="s2">"</span><span class="nv">$HOST</span><span class="s2">"</span> <span class="nt">-n</span> <span class="s2">"</span><span class="nv">$HOST</span><span class="s2">"</span> /root/ca/<span class="s2">"</span><span class="nv">$HOST</span><span class="s2">"</span>.pub
<span class="nb">cat</span> /root/ca/<span class="s2">"</span><span class="nv">$HOST</span><span class="s2">"</span><span class="nt">-cert</span>.pub
<span class="nb">rm</span> /root/ca/<span class="s2">"</span><span class="nv">$HOST</span><span class="s2">"</span><span class="k">*</span>.pub
<span class="o">}</span>

<span class="s2">"</span><span class="nv">$1</span><span class="s2">"</span> <span class="s2">"</span><span class="nv">$@</span><span class="s2">"</span>
</code></pre></div></div>
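<p>As an aside, the contract these scripts implement is Terraform's external program protocol: read one JSON object of query parameters on stdin, write one flat JSON object of string results on stdout. A minimal Python sketch of that contract follows; the function names and the stand-in certificate string are my own assumptions, not taken from the scripts above.</p>

```python
import io
import json

def run_external_program(handler, stdin, stdout):
    """Implements Terraform's 'external' data source protocol:
    read a JSON object of query parameters, write a JSON object of string results."""
    query = json.load(stdin)
    result = handler(query)  # must be a flat dict with string values
    json.dump(result, stdout)

def fetch_cert(query):
    # Hypothetical stand-in for the real SSH call to the CA host.
    return {"cert": "ssh-ed25519-cert-v01@openssh.com AAAA... (" + query["host"] + ")"}

# Simulate Terraform invoking the program:
stdin = io.StringIO('{"pubkey": "ssh-ed25519 AAAA...", "host": "maestro"}')
stdout = io.StringIO()
run_external_program(fetch_cert, stdin, stdout)
print(stdout.getvalue())
```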
<h3 id="appeasing-the-terraform-gods">Appeasing the Terraform Gods</h3>

<p>Nice, we can fetch the SSH host certificate from the CA.
We should just be able to use it, right?
We can, but it brings a big annoyance with it: Terraform will fetch a new certificate every time it is run.
This is because the <code class="language-plaintext highlighter-rouge">external</code> feature of Terraform is a data source.
If we were to use this data source for a Terraform resource, it would need to be updated every time we run Terraform.
I have not been able to find a way to avoid fetching the certificate on every run, short of writing my own resource provider, which I’d rather not do.
I have, however, found a way to hack around the issue.</p>
<p>The idea is as follows: we can use Terraform’s <code class="language-plaintext highlighter-rouge">ignore_changes</code> to, well, ignore any changes to a resource.
Unfortunately, we cannot use this for a <code class="language-plaintext highlighter-rouge">data</code> source, so we must create a glue <code class="language-plaintext highlighter-rouge">null_resource</code> that supports <code class="language-plaintext highlighter-rouge">ignore_changes</code>.
This is shown in the code snippet below.
We use the <code class="language-plaintext highlighter-rouge">triggers</code> property simply to copy the certificate in; we don’t use it for its original purpose.</p>
<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">resource</span> <span class="s2">"null_resource"</span> <span class="s2">"cert"</span> <span class="p">{</span>
<span class="nx">triggers</span> <span class="p">=</span> <span class="p">{</span>
<span class="nx">cert</span> <span class="p">=</span> <span class="k">data</span><span class="p">.</span><span class="nx">external</span><span class="p">.</span><span class="nx">cert</span><span class="p">.</span><span class="nx">result</span><span class="p">[</span><span class="s2">"cert"</span><span class="p">]</span>
<span class="p">}</span>

<span class="nx">lifecycle</span> <span class="p">{</span>
<span class="nx">ignore_changes</span> <span class="p">=</span> <span class="p">[</span>
<span class="nx">triggers</span>
<span class="p">]</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
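<p>For reference, the <code class="language-plaintext highlighter-rouge">data.external.cert</code> this refers to could be declared roughly like the sketch below; the program path, script name, and query values are assumptions for illustration, not taken from my actual repository.</p>

```terraform
data "external" "cert" {
  # Hypothetical wrapper script implementing the stdin/stdout JSON protocol
  program = ["bash", "${path.module}/fetch-cert.sh"]

  query = {
    pubkey   = "ssh-ed25519 AAAA..." # the VM's host public key
    host     = "maestro"
    cahost   = "ca.example"
    cascript = "/root/ca/ca.sh"
    cakey    = "host_ca"
  }
}
```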
<p>And voilà, we can now use <code class="language-plaintext highlighter-rouge">null_resource.cert.triggers["cert"]</code> as our certificate, which won’t trigger replacements in Terraform.</p>
<h3 id="setting-the-host-certificate-with-cloud-init">Setting the Host Certificate with Cloud-Init</h3>

<p>Terraform’s Libvirt provider has native support for Cloud-Init, which is very handy.
We can give the host certificate directly to Cloud-Init and place it on the virtual machine.
Inside the Cloud-Init configuration, we can set the <code class="language-plaintext highlighter-rouge">ssh_keys</code> property to do this:</p>
<div class="language-yml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">ssh_keys</span><span class="pi">:</span>
<span class="na">ed25519_private</span><span class="pi">:</span> <span class="pi">|</span>
<span class="s">${indent(4, private_key)}</span>
<span class="na">ed25519_certificate</span><span class="pi">:</span> <span class="s2">"</span><span class="s">${host_cert}"</span>
</code></pre></div></div>
<p>I hardcoded this to ED25519 keys, because this is all I use.</p>

<p>This works perfectly, and I never have to accept host certificates from virtual machines again.</p>

<h3 id="caveats">Caveats</h3>
<p>A sharp eye might have noticed that the lifecycle of these host certificates is severely lacking.
Namely, the deployed host certificates have no expiration date, nor is there a revocation mechanism.
There are ways to implement these, but for my home lab I did not deem this necessary at this point.
In a more professional environment, I would suggest using <a href="https://www.vaultproject.io/">Hashicorp’s Vault</a>.</p>
<p>This project did teach me about the limits and flexibility of Terraform, so all in all a success!
All code can be found on the git repository <a href="https://git.kun.is/home/tf-modules/src/branch/master/debian">here</a>.</p>
<p>I have been meaning to write about the current state of my home lab infrastructure for a while now.
Now that the most important parts are quite stable, I think the opportunity is ripe.
I expect this post to get quite long, so I might have to leave out some details along the way.</p>
<p>This post will be a starting point for future infrastructure snapshots which I can hopefully put out periodically.
That is, if there is enough worth talking about.</p>

<p>Keep an eye out for the <i class="fa-solid fa-code-branch"></i> icon, which links to the source code and configuration of anything mentioned.
Oh yeah, did I mention everything I do is open source?</p>
<h1 id="networking-and-infrastructure-overview">Networking and Infrastructure Overview</h1>

<h2 id="hardware-and-operating-systems">Hardware and Operating Systems</h2>
<p>Let’s start with the basics: what kind of hardware do I use for my home lab?
The most important servers are my three <a href="https://www.gigabyte.com/Mini-PcBarebone/GB-BLCE-4105-rev-10">Gigabyte Brix GB-BLCE-4105</a>.
Two of them have 16 GB of memory, and one has 8 GB.
I named these servers as follows:</p>
<ul>
<li><strong>Atlas</strong>: because this server was going to “lift” a lot of virtual machines.</li>
<li><strong>Lewis</strong>: we started out with a “Max” server named after the Formula 1 driver Max Verstappen, but it kind of became an unmanageable behemoth without infrastructure-as-code. Our second server we subsequently named Lewis after his colleague Lewis Hamilton. Note: people around me vetoed these names and I am no F1 fan!</li>
<li><strong>Jefke</strong>: it’s a funny Belgian name. That’s all.</li>
</ul>
<p>Here is a picture of them sitting in their cosy closet:</p>

<p><img src="servers.jpeg" alt="A picture of my servers." /></p>
<p>If you look to the left, you will also see a Raspberry Pi 4B.
I use this Pi for rudimentary monitoring of whether servers and services are running.
More on this in the relevant section below.
The Pi is called <strong>Iris</strong> because it’s a messenger for the other servers.</p>
<p>I used to run Ubuntu on these systems, but I have since migrated to Debian.
The main reasons were Canonical <a href="https://askubuntu.com/questions/1434512/how-to-get-rid-of-ubuntu-pro-advertisement-when-updating-apt">putting advertisements in my terminal</a> and pushing Snap, which has a <a href="https://hackaday.com/2020/06/24/whats-the-deal-with-snap-packages/">proprietary backend</a>.
Two of my servers run the newly released Debian Bookworm, while one still runs Debian Bullseye.</p>
<h2 id="networking">Networking</h2>

<p>For networking, I wanted hypervisors and virtual machines separated by VLANs for security reasons.
The following picture shows a simplified view of the VLANs present in my home lab:</p>

<p><img src="vlans.png" alt="Picture showing the VLANs in my home lab." /></p>
<p>All virtual machines are connected to a virtual bridge which tags network traffic with the DMZ VLAN.
The hypervisors VLAN is used for traffic to and from the hypervisors.
Devices in the hypervisors VLAN are allowed to connect to devices in the DMZ, but not vice versa.
The hypervisors are connected to a switch using a trunk link, which allows both DMZ and hypervisor traffic.</p>
<p>I realised the above design using ifupdown.
Below is the configuration for each hypervisor, which creates a new <code class="language-plaintext highlighter-rouge">enp3s0.30</code> interface with all DMZ traffic from the <code class="language-plaintext highlighter-rouge">enp3s0</code> interface <a href="https://git.kun.is/home/hypervisors/src/commit/71b96d462116e4160b6467533fc476f3deb9c306/ansible/dmz.conf.j2"><i class="fa-solid fa-code-branch"></i></a>.</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>auto enp3s0.30
iface enp3s0.30 inet manual
iface enp3s0.30 inet6 auto
    accept_ra 0
    dhcp 0
    request_prefix 0
    privext 0
    pre-up sysctl -w net/ipv6/conf/enp3s0.30/disable_ipv6=1
</code></pre></div></div>
<p>This configuration seems more complex than it actually is.
Most of it is there to make sure the interface is not assigned an IPv4/IPv6 address on the hypervisor host.
The magic <code class="language-plaintext highlighter-rouge">.30</code> at the end of the interface name tags this interface with VLAN ID 30 (the DMZ VLAN in my case).</p>

<p>Now that we have an interface tagged for the DMZ VLAN, we can create a bridge that future virtual machines can connect to:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>auto dmzbr
iface dmzbr inet manual
    bridge_ports enp3s0.30
    bridge_stp off
iface dmzbr inet6 auto
    accept_ra 0
    dhcp 0
    request_prefix 0
    privext 0
    pre-up sysctl -w net/ipv6/conf/dmzbr/disable_ipv6=1
</code></pre></div></div>
<p>Just like the previous config, this is quite bloated because I don’t want the interface to be assigned an IP address on the host.
Most importantly, the <code class="language-plaintext highlighter-rouge">bridge_ports enp3s0.30</code> line here makes this interface a virtual bridge for the <code class="language-plaintext highlighter-rouge">enp3s0.30</code> interface.</p>

<p>And voilà, we now have a virtual bridge on each machine, where only DMZ traffic will flow.
Here I verify whether this configuration works:</p>
<details>
<summary>Show</summary>

We can see that the two virtual interfaces are created, and are only assigned a MAC address and not an IP address:
```text
root@atlas:~# ip a show enp3s0.30
4: enp3s0.30@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master dmzbr state UP group default qlen 1000
    link/ether d8:5e:d3:4c:70:38 brd ff:ff:ff:ff:ff:ff
5: dmzbr: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:f7:1f:0f:ad:17 brd ff:ff:ff:ff:ff:ff
```

Pinging a VM from a hypervisor works:
```text
root@atlas:~# ping -c1 maestro.dmz
PING maestro.dmz (192.168.30.8) 56(84) bytes of data.
64 bytes from 192.168.30.8 (192.168.30.8): icmp_seq=1 ttl=63 time=0.457 ms
```

Pinging a hypervisor from a VM does not work:
```text
root@maestro:~# ping -c1 atlas.hyp
PING atlas.hyp (192.168.40.2) 56(84) bytes of data.

--- atlas.hyp ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
```
</details>
<h2 id="dns-and-dhcp">DNS and DHCP</h2>

<p>Now that we have a working DMZ network, let’s build on it to get DNS and DHCP working.
This will enable new virtual machines to obtain a static or dynamic IP address and register their hostnames in DNS.
This has actually been incredibly annoying, due to our friend <a href="https://en.wikipedia.org/wiki/Network_address_translation?useskin=vector">Network address translation (NAT)</a>.</p>
<details>
<summary>NAT recap</summary>

Network address translation (NAT) is a function of a router which allows multiple hosts to share a single IP address.
This is needed for IPv4, because IPv4 addresses are scarce and usually one household is only assigned a single IPv4 address.
This is one of the problems IPv6 attempts to solve (mainly by having so many IP addresses that they should never run out).
To solve the problem for IPv4, each host in a network is assigned a private IPv4 address, which can be reused for every network.

Then, the router must perform address translation.
It does this by keeping track of ports opened by hosts in its private network.
If a packet from the internet arrives at the router for such a port, it forwards this packet to the correct host.
</details>
<p>I would like to host my own DNS on a virtual machine (called <strong>hermes</strong>, more on VMs later) in the DMZ network.
This basically gives two problems:</p>
<ol>
<li>The upstream DNS server will refer to the public internet-accessible IP address of our DNS server.
This IP address has no meaning inside the private network due to NAT, and the router will reject the packet.</li>
<li>Our DNS resolves hosts to their public internet-accessible IP address.
This is similar to the previous problem, as the public IP address has no meaning.</li>
</ol>
<p>The first problem can be remediated by overriding the location of the DNS server for hosts inside the DMZ network.
This can be achieved on my router, which uses Unbound as its recursive DNS server:</p>

<p><img src="unbound_overrides.png" alt="Unbound overrides for kun.is and dmz domains." /></p>
<p>Any DNS requests to Unbound for domains in either <code class="language-plaintext highlighter-rouge">dmz</code> or <code class="language-plaintext highlighter-rouge">kun.is</code> will now be forwarded to <code class="language-plaintext highlighter-rouge">192.168.30.7</code> (port 5353).
This is the virtual machine hosting my DNS.</p>
<p>The second problem can be solved at the DNS server.
We need to do some magic overriding, which <a href="https://dnsmasq.org/docs/dnsmasq-man.html">dnsmasq</a> is perfect for <a href="https://git.kun.is/home/hermes/src/commit/488024a7725f2325b8992e7a386b4630023f1b52/ansible/roles/dnsmasq/files/dnsmasq.conf"><i class="fa-solid fa-code-branch"></i></a>:</p>

<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">alias</span>=<span class="m">84</span>.<span class="m">245</span>.<span class="m">14</span>.<span class="m">149</span>,<span class="m">192</span>.<span class="m">168</span>.<span class="m">30</span>.<span class="m">8</span>
<span class="n">server</span>=/<span class="n">kun</span>.<span class="n">is</span>/<span class="m">192</span>.<span class="m">168</span>.<span class="m">30</span>.<span class="m">7</span>
</code></pre></div></div>
<p>This always rewrites the public IPv4 address to the private one.
It also overrides the DNS server for <code class="language-plaintext highlighter-rouge">kun.is</code> to <code class="language-plaintext highlighter-rouge">192.168.30.7</code>.</p>

<p>Finally, behind the dnsmasq server, I run <a href="https://www.powerdns.com/">PowerDNS</a> as authoritative DNS server <a href="https://git.kun.is/home/hermes/src/branch/master/ansible/roles/powerdns"><i class="fa-solid fa-code-branch"></i></a>.
I like this DNS server because I can manage it with Terraform <a href="https://git.kun.is/home/hermes/src/commit/488024a7725f2325b8992e7a386b4630023f1b52/terraform/dns/kun_is.tf"><i class="fa-solid fa-code-branch"></i></a>.</p>
<p>Here is a small diagram showing my setup (my networking teacher would probably kill me for this):
<img src="nat.png" alt="Shitty diagram showing my DNS setup." /></p>
<h1 id="virtualization">Virtualization</h1>

<p>Now that we have laid out the basic networking, let’s talk virtualization.
Each of my servers is configured to run KVM virtual machines, orchestrated using Libvirt.
Configuration of the physical hypervisor servers, including KVM/Libvirt, is done using Ansible.
The VMs are spun up using Terraform and the <a href="https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs">dmacvicar/libvirt</a> Terraform provider.</p>
<p>This all isn’t too exciting, except that I created a Terraform module that abstracts the Terraform Libvirt provider for my specific scenario <a href="https://git.kun.is/home/tf-modules/src/commit/e77d62f4a2a0c3847ffef4434c50a0f40f1fa794/debian/main.tf"><i class="fa-solid fa-code-branch"></i></a>:</p>

<div class="language-terraform highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">module</span> <span class="s2">"maestro"</span> <span class="p">{</span>
<span class="nx">source</span> <span class="p">=</span> <span class="s2">"git::https://git.kun.is/home/tf-modules.git//debian"</span>
<span class="nx">name</span> <span class="p">=</span> <span class="s2">"maestro"</span>
<span class="nx">domain_name</span> <span class="p">=</span> <span class="s2">"tf-maestro"</span>
<span class="nx">memory</span> <span class="p">=</span> <span class="mi">10240</span>
<span class="nx">mac</span> <span class="p">=</span> <span class="s2">"CA:FE:C0:FF:EE:08"</span>
<span class="p">}</span>
</code></pre></div></div>
<p>This automatically creates a Debian virtual machine with the properties specified.
It also sets up certificate-based SSH authentication, which I talked about <a href="/src/ssh/terraform/ansible/2023/05/23/homebrew-ssh-ca.html">before</a>.</p>
<h1 id="clustering">Clustering</h1>

<p>With virtualization explained, let’s move up one level further.
Each of my three physical servers hosts a virtual machine running Docker, which together form a Docker Swarm.
I use Traefik as a reverse proxy which routes requests to the correct container.</p>
<p>All data is hosted on a single machine and made available to containers using NFS.
While this might not be very secure (NFS traffic is not encrypted and there is no proper authentication), it is quite fast.</p>
<p>As of today, I host the following services on my Docker Swarm <a href="https://git.kun.is/home/shoarma"><i class="fa-solid fa-code-branch"></i></a>:</p>
<ul>
<li><a href="https://forgejo.org/">Forgejo</a> as Git server</li>
<li><a href="https://www.freshrss.org/">FreshRSS</a> as RSS aggregator</li>
<li><a href="https://hedgedoc.org/">Hedgedoc</a> for Markdown note-taking</li>
<li><a href="https://inbucket.org/">Inbucket</a> for disposable email</li>
<li><a href="https://cyberchef.org/">Cyberchef</a> for the lulz</li>
<li><a href="https://kitchenowl.org/">Kitchenowl</a> for grocery lists</li>
<li><a href="https://joinmastodon.org/">Mastodon</a> for microblogging</li>
<li>A monitoring stack (read more below)</li>
<li><a href="https://nextcloud.com/">Nextcloud</a> for cloud storage</li>
<li><a href="https://pi-hole.net/">Pi-hole</a> to block advertisements</li>
<li><a href="https://radicale.org/v3.html">Radicale</a> for calendar and contacts sync</li>
<li><a href="https://www.seafile.com/en/home/">Seafile</a> for cloud storage and sync</li>
<li><a href="https://github.com/containrrr/shepherd">Shepherd</a> for automatic container updates</li>
<li><a href="https://nginx.org/en/">Nginx</a> hosting static content (like this page!)</li>
<li><a href="https://hub.docker.com/r/charypar/swarm-dashboard/#!">Docker Swarm dashboard</a></li>
<li><a href="https://syncthing.net/">Syncthing</a> for file sync</li>
</ul>
<h1 id="ci--cd">CI / CD</h1>

<p>For CI / CD, I run <a href="https://concourse-ci.org/">Concourse CI</a> in a separate VM.
This is needed because Concourse heavily uses containers to create reproducible builds.</p>

<p>Although I should probably use it for more, I currently use Concourse for three pipelines:</p>
<ul>
<li>A pipeline to build this static website and create a container image of it.
The image is then uploaded to the image registry of my Forgejo instance.
I love it when I can use stuff I previously built :)
The pipeline finally deploys this new image to the Docker Swarm <a href="https://git.kun.is/pim/static/src/commit/eee4f0c70af6f2a49fabb730df761baa6475db22/pipeline.yml"><i class="fa-solid fa-code-branch"></i></a>.</li>
<li>A pipeline to create a Concourse resource that sends Apprise alerts (Concourse-ception?) <a href="https://git.kun.is/pim/concourse-apprise-notifier/src/commit/b5d4413c1cd432bc856c45ec497a358aca1b8b21/pipeline.yml"><i class="fa-solid fa-code-branch"></i></a></li>
<li>A pipeline to build a custom Fluentd image with plugins installed <a href="https://git.kun.is/pim/fluentd"><i class="fa-solid fa-code-branch"></i></a></li>
</ul>
<h1 id="backups">Backups</h1>

<p>To create backups, I use <a href="https://www.borgbackup.org/">Borg</a>.
As I keep all data on one machine, this backup process is quite simple.
In fact, all this data is stored in a single Libvirt volume.
To configure Borg with a simple declarative script, I use <a href="https://torsion.org/borgmatic/">Borgmatic</a>.</p>

<p>In order to back up the data inside the Libvirt volume, I create a snapshot to a file.
Then I can mount this snapshot in my file system.
The files can then be backed up while the system is still running.
It is also possible to simply back up the Libvirt image, but this takes more time and storage <a href="https://git.kun.is/home/hypervisors/src/commit/71b96d462116e4160b6467533fc476f3deb9c306/ansible/roles/borg/backup.yml.j2"><i class="fa-solid fa-code-branch"></i></a>.</p>
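<p>To sketch what such a declarative Borgmatic setup can look like, here is a minimal, hypothetical configuration; the paths, repository location, and retention values are assumptions for illustration, not my actual settings.</p>

```yaml
# Hypothetical borgmatic config sketch (not the author's real setup).
source_directories:
    - /mnt/snapshot        # the mounted Libvirt volume snapshot

repositories:
    - path: ssh://backup@backup.example/./repo
      label: offsite

keep_daily: 7
keep_weekly: 4
keep_monthly: 6
```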
<h1 id="monitoring-and-alerting">Monitoring and Alerting</h1>

<p>The last topic I would like to talk about is monitoring and alerting.
This is something I’m still actively improving and only just set up properly.</p>
<h2 id="alerting">Alerting</h2>

<p>For alerting, I wanted something that runs entirely on my own infrastructure.
I settled on Apprise + Ntfy.</p>
<p><a href="https://github.com/caronc/apprise">Apprise</a> is a server that is able to send notifications to dozens of services.
For application developers, it is thus only necessary to implement the Apprise API to gain access to all these services.
The Apprise API itself is also very simple.
By using Apprise, I can also easily switch to another notification service later.
<a href="https://ntfy.sh/">Ntfy</a> is free software made for mobile push notifications.</p>
<p>I use this alerting system in quite a lot of places in my infrastructure, for example when creating backups.</p>
<h2 id="uptime-monitoring">Uptime Monitoring</h2>

<p>The first monitoring setup I created was using <a href="https://github.com/louislam/uptime-kuma">Uptime Kuma</a>.
Uptime Kuma periodically pings a service to see whether it is still running.
You can do a literal ping, test HTTP response codes, check database connectivity and much more.
I use it to check whether my services and VMs are online.
And the best part is, Uptime Kuma supports Apprise, so I get push notifications on my phone whenever something goes down!</p>
<h2 id="metrics-and-log-monitoring">Metrics and Log Monitoring</h2>

<p>A new monitoring system I am still in the process of deploying is focused on metrics and logs.
I plan on creating a separate blog post about this, so keep an eye out for that (for example using RSS :)).
Suffice it to say, it is no basic ELK stack!</p>
<h1 id="conclusion">Conclusion</h1>

<p>That’s it for now!
Hopefully I inspired someone to build something… or showed how not to :)</p>
<p>Previously, I have used <a href="https://github.com/prometheus/node_exporter">Prometheus’ node_exporter</a> to monitor the memory usage of my servers.
However, I am currently in the process of moving away from Prometheus to a new monitoring stack.
While I understand the advantages, I felt like Prometheus’ pull architecture does not scale nicely.
Every time I spin up a new machine, I would have to centrally change Prometheus’ configuration in order for it to query the new server.</p>
<p>In order to collect metrics from my servers, I am now using <a href="https://fluentbit.io/">Fluent Bit</a>.
I love Fluent Bit’s way of configuration, which I can easily express as code and automate, its focus on efficiency, and it being vendor agnostic.
However, I have stumbled upon one, in my opinion, big issue with Fluent Bit: its <code class="language-plaintext highlighter-rouge">mem</code> plugin to monitor memory usage is <em>completely</em> useless.
In this post I will go over the problem and my temporary solution.</p>
<h1 id="the-problem-with-fluent-bits-mem-plugin">The Problem with Fluent Bit’s <code class="language-plaintext highlighter-rouge">mem</code> Plugin</h1>

<p>As can be seen in <a href="https://docs.fluentbit.io/manual/pipeline/inputs/memory-metrics">the documentation</a>, Fluent Bit’s <code class="language-plaintext highlighter-rouge">mem</code> input plugin exposes a few self-explanatory metrics regarding memory usage: <code class="language-plaintext highlighter-rouge">Mem.total</code>, <code class="language-plaintext highlighter-rouge">Mem.used</code>, <code class="language-plaintext highlighter-rouge">Mem.free</code>, <code class="language-plaintext highlighter-rouge">Swap.total</code>, <code class="language-plaintext highlighter-rouge">Swap.used</code> and <code class="language-plaintext highlighter-rouge">Swap.free</code>.
The problem is that <code class="language-plaintext highlighter-rouge">Mem.used</code> and <code class="language-plaintext highlighter-rouge">Mem.free</code> do not accurately reflect the machine’s actual memory usage.
This is because these metrics include caches and buffers, which can be reclaimed by other processes if needed.
Most tools reporting memory usage therefore include an additional metric that specifies the memory <em>available</em> on the system.
For example, the command <code class="language-plaintext highlighter-rouge">free -m</code> reports the following data on my laptop:</p>
<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>               total        used        free      shared  buff/cache   available
Mem:           15864        3728        7334         518        5647       12136
Swap:           2383         663        1720
</code></pre></div></div>
<p>Notice that the <code class="language-plaintext highlighter-rouge">available</code> memory is more than the <code class="language-plaintext highlighter-rouge">free</code> memory.</p>

<p>While the issue is known (see <a href="https://github.com/fluent/fluent-bit/pull/3092">this</a> and <a href="https://github.com/fluent/fluent-bit/pull/5237">this</a> link), it is unfortunately not yet fixed.</p>
<h1 id="a-temporary-solution">A Temporary Solution</h1>

<p>The issues I linked previously provide stand-alone plugins that fix the problem, which will hopefully be merged into the official project at some point.
However, I didn’t want to install another plugin, so I used Fluent Bit’s <code class="language-plaintext highlighter-rouge">exec</code> input plugin and the <code class="language-plaintext highlighter-rouge">free</code> Linux command to query memory usage like so:</p>
<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[<span class="n">INPUT</span>]
<span class="n">Name</span> <span class="n">exec</span>
<span class="n">Tag</span> <span class="n">memory</span>
<span class="n">Command</span> <span class="n">free</span> -<span class="n">m</span> | <span class="n">tail</span> -<span class="m">2</span> | <span class="n">tr</span> <span class="s1">'\n'</span> <span class="s1">' '</span>
<span class="n">Interval_Sec</span> <span class="m">1</span>
</code></pre></div></div>

<p>To interpret the command’s output, I created the following filter:</p>
<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[<span class="n">FILTER</span>]
<span class="n">Name</span> <span class="n">parser</span>
<span class="n">Match</span> <span class="n">memory</span>
<span class="n">Key_Name</span> <span class="n">exec</span>
<span class="n">Parser</span> <span class="n">free</span>
</code></pre></div></div>

<p>Lastly, I created the following parser (warning: regex shitcode incoming):</p>
<div class="language-conf highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[<span class="n">PARSER</span>]
|
||||
<span class="n">Name</span> <span class="n">free</span>
|
||||
<span class="n">Format</span> <span class="n">regex</span>
|
||||
<span class="n">Regex</span> ^<span class="n">Mem</span>:\<span class="n">s</span>+(?<<span class="n">mem_total</span>>\<span class="n">d</span>+)\<span class="n">s</span>+(?<<span class="n">mem_used</span>>\<span class="n">d</span>+)\<span class="n">s</span>+(?<<span class="n">mem_free</span>>\<span class="n">d</span>+)\<span class="n">s</span>+(?<<span class="n">mem_shared</span>>\<span class="n">d</span>+)\<span class="n">s</span>+(?<<span class="n">mem_buff_cache</span>>\<span class="n">d</span>+)\<span class="n">s</span>+(?<<span class="n">mem_available</span>>\<span class="n">d</span>+) <span class="n">Swap</span>:\<span class="n">s</span>+(?<<span class="n">swap_total</span>>\<span class="n">d</span>+)\<span class="n">s</span>+(?<<span class="n">swap_used</span>>\<span class="n">d</span>+)\<span class="n">s</span>+(?<<span class="n">swap_free</span>>\<span class="n">d</span>+)
|
||||
<span class="n">Types</span> <span class="n">mem_total</span>:<span class="n">integer</span> <span class="n">mem_used</span>:<span class="n">integer</span> <span class="n">mem_free</span>:<span class="n">integer</span> <span class="n">mem_shared</span>:<span class="n">integer</span> <span class="n">mem_buff_cache</span>:<span class="n">integer</span> <span class="n">mem_available</span>:<span class="n">integer</span> <span class="n">swap_total</span>:<span class="n">integer</span> <span class="n">swap_used</span>:<span class="n">integer</span>
|
||||
</code></pre></div></div>
|
||||
|
||||
<p>With this configuration, you can use the <code class="language-plaintext highlighter-rouge">mem_available</code> metric to get accurate memory usage in Fluent Bit.</p>
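<p>To sanity-check that the <code class="language-plaintext highlighter-rouge">exec</code> input really produces the single line the parser’s regex expects, you can simulate the pipeline on canned <code class="language-plaintext highlighter-rouge">free -m</code> output (a sketch; the sample numbers are made up):</p>

```shell
# Simulate "free -m | tail -2 | tr '\n' ' '" on sample output,
# then check the collapsed line matches what the parser anchors on.
sample='              total        used        free      shared  buff/cache   available
Mem:           7851         663        5000         100        2188        6900
Swap:          2383         663        1720'
line=$(printf '%s\n' "$sample" | tail -2 | tr '\n' ' ')
printf '%s\n' "$line"
# Both the Mem: and Swap: fields now sit on one line, so the regex
# starting with ^Mem: and containing " Swap:" can match it.
echo "$line" | grep -Eq '^Mem:[[:space:]]+[0-9]+.*Swap:[[:space:]]+[0-9]+' && echo regex-compatible
```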
<h1 id="conclusion">Conclusion</h1>

<p>Let’s hope Fluent Bit’s <code class="language-plaintext highlighter-rouge">mem</code> input plugin is improved soon so that this hacky workaround is no longer needed.
I also intend to document my new monitoring pipeline, which at the moment consists of:</p>

<ul>
<li>Fluent Bit</li>
<li>Fluentd</li>
<li>Elasticsearch</li>
<li>Grafana</li>
</ul>
<p><em>See the <a href="#update">Update</a> at the end of the article.</em></p>

<p>A week ago, Hashicorp <a href="https://www.hashicorp.com/blog/hashicorp-adopts-business-source-license">announced</a> it would change the license on almost all its projects.
Unlike <a href="https://github.com/hashicorp/terraform/commit/ab411a1952f5b28e6c4bd73071194761da36a83f">their previous license</a>, which was the Mozilla Public License 2.0, their new license is no longer truly open source.
It is called the Business Source License™ and restricts use of their software by competitors.
In their own words:</p>

<blockquote>
<p>Vendors who provide competitive services built on our community products will no longer be able to incorporate future releases, bug fixes, or security patches contributed to our products.</p>
</blockquote>

<p>I found <a href="https://meshedinsights.com/2021/02/02/rights-ratchet/">a great article</a> by MeshedInsights that names this behaviour the “rights ratchet model”.
They describe a script start-ups follow to garner the interest of open source enthusiasts, only to eventually turn their back on them for profit.
The reason Hashicorp can do this is that contributors signed a contributor license agreement (CLA).
This agreement grants Hashicorp broad rights over contributors’ code, allowing them to change the license if they want to.</p>

<p>I find this action really regrettable because I like their products.
This sort of action is also why I wanted to avoid using an Elastic stack, which also had its <a href="https://www.elastic.co/pricing/faq/licensing">license changed</a>.<sup id="fnref:elastic" role="doc-noteref"><a href="#fn:elastic" class="footnote" rel="footnote">1</a></sup>
These companies do not respect their contributors or the genuinely open source software stack their products are built on (Golang, Linux, etc.).</p>
<h1 id="impact-on-my-home-lab">Impact on my Home Lab</h1>

<p>I am using Terraform in my home lab to manage several important things:</p>

<ul>
<li>Libvirt virtual machines</li>
<li>PowerDNS records</li>
<li>Elasticsearch configuration</li>
</ul>

<p>With Hashicorp’s anti-open-source move, I intend to move away from Terraform in the future.
While I will not use Hashicorp’s products for new personal projects, I will leave my current setup as-is for some time because there is no real need to migrate quickly.</p>
<p>I might also investigate some of Terraform’s competitors, like Pulumi.
Hopefully there is a project that respects open source which I can use in the future.</p>

<h1 id="update">Update</h1>

<p>A promising fork of Terraform has been announced, called <a href="https://opentf.org/announcement">OpenTF</a>.
They intend to become part of the Cloud Native Computing Foundation, which I think is a good effort because Terraform is so important for modern cloud infrastructure.</p>

<h1 id="footnotes">Footnotes</h1>

<div class="footnotes" role="doc-endnotes">
<ol>
<li id="fn:elastic" role="doc-endnote">
<p>While I am still using Elasticsearch, I don’t use the rest of the Elastic stack in order to prevent vendor lock-in. <a href="#fnref:elastic" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
</li>
</ol>
</div>
<p>When I was scaling up my home lab, I started thinking more about data management.
I hadn’t (and still haven’t) set up any form of network storage.
I have, however, set up a backup mechanism using <a href="https://borgbackup.readthedocs.io/en/stable/">Borg</a>.
Still, I want to operate lots of virtual machines, and backing up each one of them separately seemed excessive.
So I started thinking: what if I just let the host machines back up the data?
After all, the number of physical hosts in my home lab is unlikely to increase drastically.</p>

<h1 id="the-use-case-for-sharing-directories">The Use Case for Sharing Directories</h1>

<p>I started working out this idea further.
Without network storage, I needed a way for guest VMs to access the host’s disks.
There are two possibilities here: either expose a block device or share a file system.
Creating a whole virtual disk just for the data of some VMs seemed wasteful, and in my experience it also increases backup times dramatically.
I therefore searched for a way to mount a directory from the host OS on the guest VM.
This is when I stumbled upon <a href="https://rabexc.org/posts/p9-setup-in-libvirt">this blog</a> post talking about sharing directories with virtual machines.</p>

<h1 id="sharing-directories-with-virtio-9p">Sharing Directories with virtio-9p</h1>

<p>virtio-9p is a way to map a directory on the host OS to a special device on the virtual machine.
In <code class="language-plaintext highlighter-rouge">virt-manager</code>, it looks like the following:
<img src="virt-manager.png" alt="picture showing virt-manager configuration to map a directory to a VM" />
Under the hood, virtio-9p uses the 9pnet protocol.
Originally developed at Bell Labs, support for it is available in all modern Linux kernels.
If you share a directory with a VM, you can then mount it.
Below is an extract of my <code class="language-plaintext highlighter-rouge">/etc/fstab</code> to automatically mount the directory:</p>

<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>data /mnt/data 9p trans=virtio,rw 0 0
</code></pre></div></div>

<p>The first argument (<code class="language-plaintext highlighter-rouge">data</code>) refers to the name you gave the share on the host.
With the <code class="language-plaintext highlighter-rouge">trans</code> option, we specify that this is a virtio share.</p>
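<p>The parts of that fstab entry can be sketched out and checked before rebooting; the share name <code class="language-plaintext highlighter-rouge">data</code> and mount point <code class="language-plaintext highlighter-rouge">/mnt/data</code> below are taken from the entry above:</p>

```shell
# Assemble the fstab entry from its parts; "data" must match the
# target name configured for the share in virt-manager/libvirt.
share=data
mountpoint=/mnt/data
entry="$share $mountpoint 9p trans=virtio,rw 0 0"
printf '%s\n' "$entry"
# To mount immediately without a reboot (as root, inside the guest):
#   mount -t 9p -o trans=virtio,rw "$share" "$mountpoint"
```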
<h1 id="problems-with-virtio-9p">Problems with virtio-9p</h1>

<p>At first I had no problems with my setup, but I am now contemplating moving to a network-storage-based setup because of two problems.</p>

<p>The first problem is that some files suddenly changed ownership from <code class="language-plaintext highlighter-rouge">libvirt-qemu</code> to <code class="language-plaintext highlighter-rouge">root</code>.
If a file is owned by <code class="language-plaintext highlighter-rouge">root</code>, the guest OS can still see it, but cannot access it.
I am not entirely sure the problem lies with virtio, but I suspect it does.
For anyone experiencing this problem, I wrote a small shell script to revert ownership to the <code class="language-plaintext highlighter-rouge">libvirt-qemu</code> user:</p>

<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>find <span class="nt">-printf</span> <span class="s2">"%h/%f %u</span><span class="se">\n</span><span class="s2">"</span> | <span class="nb">grep </span>root | <span class="nb">cut</span> <span class="nt">-d</span> <span class="s1">' '</span> <span class="nt">-f1</span> | xargs <span class="nb">chown </span>libvirt-qemu:libvirt-qemu
</code></pre></div></div>

<p>(Note that this one-liner breaks on paths containing spaces; something like <code class="language-plaintext highlighter-rouge">find . -user root -exec chown libvirt-qemu:libvirt-qemu {} +</code> would be more robust.)</p>

<p>Another problem I have experienced is guests being unable to mount the directory at all.
I have only experienced this problem once, but it was highly annoying.
To fix it, I had to reboot the whole physical machine.</p>

<h1 id="alternatives">Alternatives</h1>

<p>virtio-9p seemed like a good idea, but as discussed, I had some problems with it.
It seems <a href="https://virtio-fs.gitlab.io/">virtiofs</a> might be an interesting alternative, as it is designed specifically for sharing directories with VMs.</p>

<p>As for me, I will probably finally look into deploying network storage, either with NFS or SSHFS.</p>
<p><a href="https://borgbackup.readthedocs.io/en/stable/">BorgBackup</a> and <a href="https://torsion.org/borgmatic/">Borgmatic</a> have been my go-to tools for backing up my home lab since I started creating backups.
Using <a href="https://wiki.archlinux.org/title/systemd/Timers">Systemd Timers</a>, I create a backup every night.
I also monitor successful execution of the backup process, in case an error occurs.
However, the way I had set this up meant I was not receiving notifications.
Even though it boils down to RTFM, I’d like to explain my error and how to handle errors correctly.</p>

<p>I was using the <code class="language-plaintext highlighter-rouge">on_error</code> option to handle errors, like so:</p>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">on_error</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s1">'</span><span class="s">apprise</span><span class="nv"> </span><span class="s">--body="Error</span><span class="nv"> </span><span class="s">while</span><span class="nv"> </span><span class="s">performing</span><span class="nv"> </span><span class="s">backup"</span><span class="nv"> </span><span class="s"><URL></span><span class="nv"> </span><span class="s">||</span><span class="nv"> </span><span class="s">true'</span>
</code></pre></div></div>

<p>However, <code class="language-plaintext highlighter-rouge">on_error</code> does not handle errors from the execution of the <code class="language-plaintext highlighter-rouge">before_everything</code> and <code class="language-plaintext highlighter-rouge">after_everything</code> hooks.
My solution was to move the error handling up into the Systemd service that calls Borgmatic.
This results in the following Systemd service:</p>
<div class="language-systemd highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">[Unit]</span>
<span class="nt">Description</span><span class="p">=</span>Backup data using Borgmatic
<span class="c"># Added</span>
<span class="nt">OnFailure</span><span class="p">=</span>backup-failure.service

<span class="k">[Service]</span>
<span class="nt">ExecStart</span><span class="p">=</span>/usr/bin/borgmatic --config /root/backup.yml
<span class="nt">Type</span><span class="p">=</span>oneshot
</code></pre></div></div>

<p>This handles any error, be it from Borgmatic’s hooks or from Borgmatic itself.
The <code class="language-plaintext highlighter-rouge">backup-failure</code> service is very simple and just calls Apprise to send a notification:</p>

<div class="language-systemd highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">[Unit]</span>
<span class="nt">Description</span><span class="p">=</span>Send backup failure notification

<span class="k">[Service]</span>
<span class="nt">Type</span><span class="p">=</span>oneshot
<span class="nt">ExecStart</span><span class="p">=</span>apprise --body="Failed to create backup!" <URL>

<span class="k">[Install]</span>
<span class="nt">WantedBy</span><span class="p">=</span>multi-user.target
</code></pre></div></div>
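<p>For completeness: a minimal Systemd timer to trigger the Borgmatic service nightly could look like the following sketch. The schedule and unit name are assumptions for illustration; the article does not show my actual timer:</p>

```systemd
[Unit]
Description=Nightly Borgmatic backup timer

[Timer]
# Run every night at 03:00; Persistent= catches up on runs missed
# while the machine was powered off.
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```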
<h1 id="the-aftermath-or-what-i-learned">The Aftermath (or what I learned)</h1>

<p>Because the error handling and alerting weren’t working properly, my backups didn’t succeed for two weeks straight.
And, of course, you only notice your backups aren’t working when you actually need them.
This is exactly what happened: my disk was full, and a MariaDB database crashed as a result.
In fact, the whole database seemed to be corrupt, and I find it worrying that MariaDB does not seem to be very resilient to failures (by comparison, a PostgreSQL database was able to recover automatically).
I then tried to recover the data using last night’s backup, only to find out there was no such backup.
Fortunately, I had other means to recover the data, so I incurred no data loss.</p>

<p>I already knew it is important to test backups, but I learned it is also important to test failure handling during backups!</p>