I am finally making the push to self-host everything I can and move off as many cloud services as possible.
I have years of linux server admin experience so this is not a technical post, more of an attempt to get some crowd wisdom on a complex migration.
I have a plan and have identified the services I would like to implement. Take it as given that the hardware I have can handle all of this. But it is a lot, so it won't happen all at once.
I would appreciate thoughts on the order in which to implement services. Installation is only phase one; migrating existing data and shaking everything down to test stability is also time-consuming. So I'd welcome any insights, especially on services that might present extra challenges once I start adding my own data, or on dependencies I haven't thought of.
The list order is not significant yet, but I would like to have an incremental plan. Those marked with * are already running and hosting my data locally with no issues.
Thanks in advance.
Base system
- Proxmox VE 8.3
- ZFS for a Time Machine-like backup to a local HDD
- Docker VM with containers
- Home Assistant *
- ESPHome *
- Paperless-ngx *
- PhotoPrism
- Firefly III
- Jellyfin
- Gitea
- Authelia
- Vaultwarden
- Radicale
- Prometheus
- Grafana
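The Time Machine-like ZFS backup above could be a simple snapshot-and-send loop; a rough sketch, assuming hypothetical dataset names (`tank/data` on the main pool, `backup/data` on the local backup HDD):

```shell
#!/bin/sh
# Dataset names are placeholders -- adjust to your pools.
SRC=tank/data
DST=backup/data    # dataset on the local backup HDD
SNAP="$SRC@auto-$(date +%Y%m%d-%H%M)"

# Take a new snapshot of the source dataset
zfs snapshot "$SNAP"

# Find the newest snapshot already on the target, if any
LAST=$(zfs list -H -t snapshot -o name -s creation -r "$DST" | tail -1 | cut -d@ -f2)

if [ -n "$LAST" ]; then
    # Incremental send of everything since the last common snapshot
    zfs send -i "@$LAST" "$SNAP" | zfs receive -F "$DST"
else
    # First run: full send
    zfs send "$SNAP" | zfs receive "$DST"
fi
```

Run it from cron and you get browsable point-in-time snapshots on the backup disk; add a pruning step once snapshots pile up.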
I also started with a Docker host in Proxmox, but have since switched to k3s, as I find it reduces maintenance (mainly through FluxCD). But this is only an option if you want to learn k8s or already have experience with it.
If Proxmox runs on a consumer SSD, I would keep an eye on the SMART values, as in my case it wore the disk out quickly. I then bought second-hand enterprise SSDs and have had no more problems since. You could also move the write-intensive workloads elsewhere, or use an HDD for root if possible.
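Checking the wear values is a one-liner; the device name is an assumption and the exact attribute names vary by vendor:

```shell
# Inspect SMART wear indicators on the Proxmox boot disk (/dev/sda is a placeholder)
smartctl -a /dev/sda | grep -iE 'wear|written|percent.*used'
```

Watching "Percentage Used" / wear-leveling attributes over a few weeks tells you quickly whether Proxmox's logging and cluster services are eating the disk.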
I pass my storage controller directly through to the VM via PCI, as it makes backups via ZFS easier and let me avoid a speed bottleneck. That bottleneck was mainly caused by the (virtualized) firewall and the internal traffic routed through it: internal services could only talk to each other at a little over 1 Gbit/s, even though they were running on SSD and NVMe RAIDs.
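On Proxmox the passthrough itself is roughly the following; the VM ID and PCI address are placeholders, and IOMMU must be enabled in the BIOS and on the kernel command line first:

```shell
# Find the storage controller's PCI address
lspci | grep -i -e sas -e sata    # e.g. 01:00.0

# Hand the whole controller to VM 100 (ID and address are examples)
qm set 100 -hostpci0 0000:01:00.0
```

Once the VM owns the controller, the guest sees the raw disks and can run ZFS natively, which is what makes snapshot-based backups straightforward.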
I use SQLite databases when I can, because the backups are much easier and the speed feels faster in most cases. However, the file should ideally be local to the VM.
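Part of why the backups are easy: the SQLite CLI's `.backup` command takes a consistent copy even while an application has the database open. A small sketch with a made-up database file:

```shell
# Create a toy database (stand-in for an app's sqlite file)
sqlite3 app.db "CREATE TABLE notes(id INTEGER PRIMARY KEY, body TEXT);
                INSERT INTO notes(body) VALUES ('hello');"

# Online backup: safe even while the app holds the database open
sqlite3 app.db ".backup app-backup.db"

# Verify the copy is a readable database
sqlite3 app-backup.db "SELECT COUNT(*) FROM notes;"   # prints 1
```

Compare that with dumping and restoring a client/server database per service; for small self-hosted apps the single-file approach wins.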
Otherwise I would prioritize logging and alerts, but as an experienced server admin you have probably thought of that.
For Home Assistant, I use the installation script from here, it works flawlessly:
https://community-scripts.github.io/ProxmoxVE/scripts
This group took over the project after the main developer passed away. The scripts are quite easy to use and just need to be run from the Proxmox host shell (once you install one, you will know where it is) :)
Looks good, I use a lot of the stuff you plan to host.
Don’t forget about enabling infrastructure. Nearly everything needs a database, so get that figured out early on. An LDAP server is also helpful, even though you can just use the file backend of Authelia. Decide if you want to enable access from outside and choose a suitable reverse proxy with a solution for certificates, if you did not already do that.
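As one option for the reverse-proxy-plus-certificates piece, a minimal Caddy config covers both in a few lines; the hostname and upstream are made up, and `tls internal` uses Caddy's built-in CA for LAN-only setups (drop that line for public hostnames and Caddy fetches Let's Encrypt certificates automatically):

```
# Caddyfile sketch -- hostname and upstream are placeholders
gitea.home.internal {
    tls internal              # internal CA for LAN-only access
    reverse_proxy gitea:3000  # container name and port are assumptions
}
```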
If you plan to monitor the host itself, hosting Grafana on that same host gives you no benefit when it goes offline.
I'd get the LDAP server, the database and the reverse proxy running first. Afterwards, configure Authelia and try to implement authentication for a first project. Gitea/Forgejo is a good first one; you can set up OIDC or Remote-User authentication with it. Once you've got this down, the other projects are a breeze to set up.
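For the Gitea-via-OIDC step, the Authelia side is a client entry roughly like this; client ID, secret, and hostname are placeholders, and key names differ slightly between Authelia versions:

```
# Authelia configuration.yml fragment -- all values are placeholders
identity_providers:
  oidc:
    clients:
      - client_id: gitea
        client_secret: '<hashed secret>'
        redirect_uris:
          # Gitea's callback for an OAuth2 auth source named "authelia"
          - https://gitea.home.internal/user/oauth2/authelia/callback
        scopes:
          - openid
          - profile
          - email
```

On the Gitea side you then add an OAuth2 authentication source pointing at Authelia's discovery endpoint with the same client ID and secret.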
Best of luck with your migration.
LDAP server is also helpful, even though you can just use the file backend of Authelia.
Samba 4 AD was easy to set up and get replicating. Switch over as soon as you can.
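Provisioning the first Samba AD domain controller is a single command, and the replication mentioned comes from joining a second DC; the domain name is an example:

```shell
# First domain controller (prompts interactively for realm, admin password, etc.)
samba-tool domain provision --use-rfc2307 --interactive

# On a second box: join the existing domain as a DC and start replicating
samba-tool domain join home.internal DC -U Administrator
```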
Swap PhotoPrism for Immich. It's a lot better, imo.
Are both Immich and PhotoPrism container-dependent, or just Immich?
(If they fail 27002, they’re a hard no for me).
Authelia
Think about implementing this pretty early if your plan is to use it for your own services (which I'd assume).
You are correct that I will be using it only for internal authentication. I want to get away from my bad habit of reusing passwords on internal services, to reduce pwnage if Mr. Robot gets access ;)
Any experience with how Authelia interacts with Vaultwarden? They seem simpatico, but should I install them in tandem? Would that make anything easier?
No, but Vaultwarden is the one thing I don't even try to connect to authentik, so a breach of the auth password won't give away everything else.
May I ask why you'd want to self-host Bitwarden if the free hosted version is almost as good, aside from the few unimportant paid perks?
I’m not the guy you asked, but I self-host it because I like a couple of the features (like making an org for house stuff, and sharing that with certain family members), it’s really awesome for OTP as well. I honestly don’t know which features are the paid ones because I went straight to Vaultwarden as I knew I wanted it in house (physically) and Bitwarden didn’t offer that.
You can create (I think) one org under paid accounts as well and delegate access to specific collections between members.
My use case is home stuff I want access to from work (e.g. Jellyfin).
I don’t?
But you mentioned having Vaultwarden and not connecting it to authentik. So you basically have Bitwarden self-hosted.
Yes, but I don't plan to host Bitwarden. I was referring to OP's question regarding Vaultwarden + auth. Sorry, I think I can't follow you.
No, but Vaultwarden is the one thing I don’t even try to connect to authentik
Implying you have it deployed in active use, no?