  • As I said, “how to reproduce this in a home setup”.

    I’m running multiple machines, paid little for all of them, and they all run at pretty low power. I replicate stuff on a schedule, and I have a cloud backup I verify quarterly (rough sketch of that check at the end of this comment).

    If OP is thinking about how to ensure uptime (however they define it) and prevent downtime due to upgrades, then looking at how the enterprise world does things (the people who draw on research into this very subject from universities and organizations like Microsoft and Google) would be useful.

    Nowhere did I tell OP to do things this way, and I’d thank you to not make strawmen of my words.
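
    For anyone wondering what “verify quarterly” actually looks like, here’s a minimal sketch: restore a sample of the cloud backup somewhere, then checksum it against the live copy. The paths are placeholders, not my real layout.

    ```python
    # Rough sketch: spot-check a restored cloud backup against the live data.
    # Both paths are placeholders.
    import hashlib
    from pathlib import Path

    LIVE = Path("/tank/documents")          # the data you actually use
    RESTORED = Path("/tmp/backup-restore")  # a sample pulled back from the cloud

    def sha256(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    mismatches = []
    for restored_file in RESTORED.rglob("*"):
        if not restored_file.is_file():
            continue
        live_file = LIVE / restored_file.relative_to(RESTORED)
        if not live_file.exists() or sha256(live_file) != sha256(restored_file):
            mismatches.append(restored_file)

    print("backup check:", "OK" if not mismatches else f"{len(mismatches)} files differ")
    ```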


  • In the business world it’s pretty common to do staged or switchover upgrades: test the new version in a lab environment and iron out the install/config details, then upgrade a single production server and test it with a small group of users. Or build new servers with the new stuff and have a set of users run on them for a while; that way you can always just move those users back to a known-good server.

    How do you do this at home? VMs for lots of stuff, or duplicate hardware for NAS-type stuff (I’ve read of people running TrueNAS in a VM). There’s a rough sketch of the snapshot-then-upgrade routine at the end of this comment.

    To borrow from the preparedness community: if you have 1 you have none, and if you have 2 you have 1. As an example, the business world often runs mission-critical systems redundantly across regionally separate data centers, so a single storm won’t take them down. The question is how to reproduce that idea in a home lab environment (see the second sketch below).
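
    To make the VM angle concrete, here’s a rough sketch of the snapshot-then-upgrade routine, assuming a libvirt/KVM host with virsh available (Proxmox and friends have their own equivalents). The domain name, SSH target, and upgrade command are placeholders for whatever you actually run.

    ```python
    # Rough sketch: snapshot a VM, run the upgrade in it, roll back if it fails.
    # Assumes virsh (libvirt) on the PATH; domain name, SSH target, and the
    # upgrade command are all placeholders.
    import subprocess
    from datetime import datetime

    DOMAIN = "nextcloud-vm"  # hypothetical VM name
    SNAP = f"pre-upgrade-{datetime.now():%Y%m%d-%H%M}"

    def virsh(*args: str) -> None:
        subprocess.run(["virsh", *args], check=True)

    virsh("snapshot-create-as", DOMAIN, SNAP)  # known-good point to fall back to

    try:
        # Placeholder for the actual upgrade step: SSH into the guest, update
        # packages or pull new container images, then hit a health check.
        subprocess.run(
            ["ssh", "nextcloud-vm.lan",
             "sudo apt-get update && sudo apt-get -y full-upgrade"],
            check=True,
        )
    except subprocess.CalledProcessError:
        print("upgrade failed, rolling back to", SNAP)
        virsh("snapshot-revert", DOMAIN, SNAP)
    ```

    For a disruptive upgrade you’d probably shut the guest down and snapshot it offline first, but the shape is the same: known-good point, try the change, fall back if it breaks.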
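
    And for the “if you have 1 you have none” part, the home-scale version of “is my second copy actually alive” can be as simple as a scheduled script that checks that both boxes still answer and nags you when one doesn’t. Hostnames and ports here are made up.

    ```python
    # Rough sketch: confirm both the primary and the backup box are reachable,
    # so two copies don't quietly become one. Addresses/ports are made up.
    import socket

    REPLICAS = {
        "primary-nas": ("192.168.1.10", 22),
        "backup-nas": ("192.168.1.11", 22),
    }

    def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    down = [name for name, (host, port) in REPLICAS.items() if not reachable(host, port)]
    if down:
        print("WARNING: unreachable:", ", ".join(down))  # wire this into email/ntfy/whatever
    else:
        print("both replicas answering")
    ```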