Hi all!

I will soon acquire a pretty beefy unit compared to my current setup (a 3-node server, each node with 16 cores, 512 GB RAM and 32 TB of storage).

Currently I run TrueNAS and Proxmox on bare metal and most of my storage is made available to apps via SSHFS or NFS.

I recently started looking for “modern” distributed filesystems and found some interesting S3-like/compatible projects.

To name a few:

  • MinIO
  • SeaweedFS
  • Garage
  • GlusterFS

I like the idea of abstracting the filesystem to allow me to move data around, play with redundancy and balancing, etc.

My most important services are:

  • Plex (Media management/sharing)
  • Stash (Like Plex 🙃)
  • Nextcloud
  • Caddy with AdGuard Home and Unbound DNS
  • Most of the Arr suite
  • Git, Wiki, File/Link sharing services

As you can see, there is a lot of downloading/streaming/torrenting of files across services. The smaller services run in a Docker VM on Proxmox.

Currently things are messy due to the organic evolution of my setup, but since I will be starting fresh on brand-new metal, I am looking for suggestions on the pillars.

So far, I am considering setting up a Proxmox cluster across the 3 nodes, hosting VMs for the heavy stuff plus a Docker VM for the smaller services.

How do you see the file storage portion? Should I take a full or partial plunge into S3-compatible object storage? What architecture/tech would be interesting to experiment with?

Or should I stick with tried-and-true, boring solutions like NFS Shares?
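
For context, the kind of access I'm picturing is apps talking S3 over HTTP to a local endpoint instead of mounting a share. A rough sketch with the standard AWS CLI pointed at a self-hosted endpoint (the hostname, port, keys and bucket below are placeholders; MinIO/SeaweedFS/Garage would all expose this same kind of API):

    # point standard S3 tooling at the self-hosted endpoint
    aws configure set aws_access_key_id PLACEHOLDER_KEY
    aws configure set aws_secret_access_key PLACEHOLDER_SECRET
    aws --endpoint-url http://s3.lan:3900 s3 mb s3://media
    aws --endpoint-url http://s3.lan:3900 s3 cp ./movie.mkv s3://media/movies/movie.mkv
    aws --endpoint-url http://s3.lan:3900 s3 ls s3://media/movies/

My understanding is that something like Nextcloud can use S3 natively as primary storage, while Plex and the Arrs still expect a regular POSIX filesystem, which is part of why I'm unsure how far to take the plunge.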

Thank you for your suggestions!

  • forbiddenlake@lemmy.world · 2 days ago

    By default, NFS is unencrypted and unauthenticated, and permissions rely on user/group IDs the client can fake.

    That may or may not be a problem in practice; one should think about their personal threat model.

    Mine are read-only and unauthenticated because they’re just media files, but I did add (strictly unneeded) encryption via kTLS, since it wasn’t too hard to set up (I already had a valid certificate to reuse).
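
    Roughly what that looks like, if I remember right — it needs a recent kernel plus the tlshd daemon from ktls-utils running on both ends, and the paths/hostnames below are made up:

        # server side, /etc/exports: require TLS for this export
        /tank/media  *(ro,no_subtree_check,xprtsec=tls)

        # client side: mount with TLS; tlshd does the handshake using the host certificate
        mount -t nfs -o vers=4.2,xprtsec=tls nas.lan:/tank/media /mnt/media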

      • 486@lemmy.world · 1 day ago

        If someone compromises the host system you are in trouble.

        Not only the host. You also have to trust every client to behave: as @forbiddenlake already mentioned, NFS relies on IDs that clients can easily fake to pretend to be someone else. Without rolling out all the Kerberos stuff, there really is no security when it comes to NFS.
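
        For reference, the Kerberos route boils down to a sec= option on the export and mount, but with a full KDC and per-client keytabs behind it to make it actually work; roughly (host/path made up):

            # /etc/exports: require Kerberos with authentication, integrity and encryption
            /tank/data  *.lan(rw,sec=krb5p,no_subtree_check)

            # client
            mount -t nfs -o sec=krb5p nas.lan:/tank/data /mnt/data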

        • Possibly linux@lemmy.zip · 1 day ago

          You misunderstand. The hypervisor is the client; stuff higher in the stack only sees raw storage. (By hypervisors I also mean Docker and Kubernetes.) From a security perspective, you just set an IP allow list.
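
          Concretely, that just means the export is only reachable from the hypervisor's address in the first place, something like this in /etc/exports (IP and path made up):

              # only the hypervisor host may mount this export
              /tank/vmstore  10.0.0.10(rw,sync,no_subtree_check)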

          • 486@lemmy.world · 1 day ago

            Sure, if you have exactly one client that can access the server and you can ensure physical security of the actual network, I suppose it is fine. Still, those are some severe restrictions, and they show how limited the ancient NFS protocol is, even in version 4.