From 6952876401eaaf99406b4f42dbacbf0766daefb5 Mon Sep 17 00:00:00 2001
From: Pim Kunis
Date: Tue, 11 Apr 2023 20:26:46 +0000
Subject: [PATCH] Update 'README.md'

---
 README.md | 21 ++++++---------------
 1 file changed, 6 insertions(+), 15 deletions(-)

diff --git a/README.md b/README.md
index bb7964f..ddb18f9 100644
--- a/README.md
+++ b/README.md
@@ -1,22 +1,13 @@
 # Lewis
-The Ubuntu server used for backups
+Lewis is our server used for backups.
 
-## TODO: pull architecture
+## Architecture
 
-For security reasons, it's better to have a pull architecture for backups instead of push:
-If one server is compromised, the backup server can also be compromised.
-This is especially nasty in case of a ransomware attack.
-After researching, pull is best done using [sshfs](https://borgbackup.readthedocs.io/en/stable/deployment/pull-backup.html).
-NB: using sshfs with our current setup of hard-coded ssh keys seems badly scalable and annoying.
-A better solution could be to use ssh certificates.
-
-One negative side of pull, is that the backup server must now be aware of each server that has data to be backed up.
-I designed our network to be as distributed as possible, such that it is easy to introduce new machines.
-However, most machines that have data to be backed up will be virtual machines in the near future.
-This data can therefore be saved on separate virtual disks.
-The backup server only has to be aware of the physical servers running the hypervisors and back up these disks.
-Fortunately, the amount of physical servers we own is unlikely to change much.
+Backups are implemented in a pull fashion.
+A single Borg backup repository is maintained on this server.
+Servers in our network expose an SSHFS share with the files they wish to have backed up.
+Authentication of these SSHFS shares is done using SSH user certificates.
 
 ## TODO: off-site backups
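
Editor's note: the following is a minimal sketch of one pull-backup run as described in the added Architecture section. The hostname, share path, mount point, and repository location are illustrative assumptions, not taken from the actual setup.

```sh
#!/bin/sh
set -eu

# Assumed location of the single Borg repository on lewis (illustrative).
REPO=/srv/backup/borg
# Assumed server exposing an SSHFS share (illustrative hostname and path).
HOST=web01.example.internal
MOUNT=/mnt/backup/web01

mkdir -p "$MOUNT"

# Mount the remote share read-only. The underlying ssh call authenticates
# with an SSH user certificate if one sits next to the private key
# (e.g. ~/.ssh/id_ed25519-cert.pub).
sshfs -o ro,reconnect "backup@$HOST:/srv/backup-share" "$MOUNT"

# Pull the exposed files into the central Borg repository.
borg create --stats "$REPO::$HOST-$(date +%Y-%m-%d)" "$MOUNT"

fusermount -u "$MOUNT"
```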