• 1 Post
  • 22 Comments
Joined 1 year ago
Cake day: June 16th, 2023



  • As someone who has owned enterprise servers for self-hosting, I agree with the previous comment that you should avoid owning one if you can. They might be cheap, but your long-term ownership costs are going to be higher. That’s because as the server breaks down, you’ll be competing with other people for a dwindling supply of compatible parts. Unlike consumer PCs, server hardware is incredibly vendor-locked. Hell, my last ProLiant would keep the fans ramped at 100% because I installed an HDD that the BIOS didn’t like. This was after I spent weeks tracking down a disk that would at least be recognized, and the only drives I could find were already heavily used.

    My latest server is built with consumer parts fit into a 2U rack case, and I sleep so much easier knowing I can replace any of the parts myself with brand new alternatives.

    Plus, as others have said, a 1U can be really loud. I don’t care about the sound of my gaming computer, but that PowerEdge was so obnoxious that, despite being in the basement, I had to smother it with blankets just so the fans didn’t annoy me when I was watching TV upstairs. I still have a 1U Dell PowerEdge, but I specifically sought out the generation that still lets you hack the fan speeds over IPMI. From all my research, no such hack exists for the ProLiant line.
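
    For reference, the fan hack I mean is a set of ipmitool raw commands against the iDRAC. This is a rough sketch from memory: the address and credentials are placeholders, and the raw bytes only work on certain older iDRAC generations (newer firmware removed them), so treat it as a starting point rather than gospel.

        # Disable the iDRAC's automatic fan control (older Dell generations only)
        ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x01 0x00

        # Pin all fans to roughly 20% duty cycle (0x14 = 20 decimal)
        ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x02 0xff 0x14

        # Hand control back to the iDRAC when you're done experimenting
        ipmitool -I lanplus -H 192.168.1.120 -U root -P calvin raw 0x30 0x30 0x01 0x01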




  • OneCardboardBox@lemmy.sdf.org to Selfhosted@lemmy.world · Server build for Family · 2 months ago

    I’d recommend BTRFS in RAID1 over hardware or mdadm raid. You get FS snapshotting as a feature, which would be nice before running a system update.
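
    A rough sketch of what that looks like, assuming two drives at /dev/sda and /dev/sdb and a mountpoint of /mnt/pool (all placeholders, adjust to your actual devices and paths):

        # Mirror both data and metadata across the two drives
        mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb
        mount /dev/sda /mnt/pool
        btrfs subvolume create /mnt/pool/data

        # Read-only snapshot before a system update; cheap and instant thanks to copy-on-write
        btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/data-pre-update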

    For disk drives, I’d recommend buying new if you can afford them. You should look into shucking: it’s where you buy an external drive and then remove (shuck) the HDD from inside. You can get enterprise-grade disks for cheaper than buying that same disk on its own. The website https://shucks.top tracks the price of various disk drives, letting you know when there are good deals.




  • For backup, maybe a Blu-ray drive? I think you would want something that can withstand the salty environment, and maybe resist water. Thing is, even with BDXL discs, you only get a capacity of 100GiB each, so that’s a lot of discs.

    What about an offsite backup? Your media library could live ashore (in a server at a friend’s house). You issue commands from your boat to download media, and then sync those files to your boat when it’s done. If you really need to recover from the backup, have your friend clone a disk and mail it to you.
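
    The sync itself could be as simple as an rsync pull over SSH whenever you have connectivity (the hostname and paths here are made up for the example):

        # Pull anything new from the server ashore; --partial lets a flaky
        # satellite/cell link resume interrupted transfers
        rsync -avz --partial --progress friend-server:/srv/media/ /mnt/boat-media/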

    Do you even need a backup? Would data redundancy be enough? Sure, if your boat catches fire and sinks, your movies are gone, but that’s probably the least of your problems. If you just want to make sure the salt and water don’t destroy your data, how about:

    1. A multi-disk filesystem which can tolerate at least 1 failure
    2. Regular scans for failure: BTRFS scrubs, for example (example commands below).
    3. Spare fresh disks kept in a salt- and water-resistant container (the original sealed packaging), so you can swap out any failing disk and replicate data from the remaining good drives.
    4. Documentation/practice to perform the aforementioned disk replacement, so you’re not googling manpages at sea.

    This would probably be cheapest and have the least complexity.
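
    For points 2 and 3, the commands are roughly as follows, assuming the pool is mounted at /mnt/pool and /dev/sdX is the failing drive (both placeholders):

        # Run and check a scrub; schedule this monthly with cron or a systemd timer
        btrfs scrub start /mnt/pool
        btrfs scrub status /mnt/pool

        # Per-device error counters; non-zero numbers mean a drive is going bad
        btrfs device stats /mnt/pool

        # Swap a failing drive for a fresh one while the filesystem stays online
        btrfs replace start /dev/sdX /dev/sdY /mnt/pool
        btrfs replace status /mnt/pool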




  • As others have said, a reverse proxy is what you need.

    However, I will also mention that another option called macvlan exists if you’re using containers like podman or docker. Setting up a macvlan network for your containers makes the ports exposed by your services appear to belong to a different machine, letting them use the same port numbers at the same time. As far as your LAN is concerned, a container on a macvlan network has its own IP, independent of the host’s IP.

    Macvlan is worth setting up if you plan to expose some of your services outside your local network, or if you want to run a service on a port that the host is already using (e.g. you want a container to act as DNS on port 53, but systemd-resolved is already listening on it on the host).

    You can set up port forwarding at your router to the containers that you want to publicly expose, and any other containers will be inaccessible. Meanwhile with just a reverse proxy, someone could try to send requests to any domain behind it, even if you don’t want to expose it.

    My network is set up such that:

    • Physical host has one IP address that’s only accessible over the LAN.
    • Containerized web services that I don’t want to expose publicly are behind a reverse proxy container that has its own IP on the macvlan.
    • Containerized web services that I do want to expose publicly have a separate reverse proxy container, which gets a different IP on the macvlan.
    • Router has ports 80 and 443 forwarding only to the IP address of my public proxy.
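
    As a sketch, creating that kind of macvlan network with Docker looks something like this (the subnet, gateway, parent interface, container names, and IPs are placeholders for whatever your LAN actually uses):

        # One macvlan network bridged onto the physical NIC (here eth0)
        docker network create -d macvlan \
          --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
          -o parent=eth0 lan_macvlan

        # Each proxy container gets its own LAN IP, independent of the host
        docker run -d --name proxy-internal --network lan_macvlan --ip 192.168.1.50 caddy
        docker run -d --name proxy-public   --network lan_macvlan --ip 192.168.1.51 caddy

    The router then forwards 80/443 only to the public proxy’s address, and everything else stays unreachable from outside.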





  • There’s a setting in sonarr/radarr, I think it’s called “Remote Path Mappings” or something. If the torrent container and sonarr mount the same data at different paths, you need to set this:

    Suppose:

    • Baremetal host has directory /mnt/myfiles

    • Your torrent container mounts /mnt/myfiles/torrent_downloads to /downloads

    • Your sonarr container mounts /mnt/myfiles/torrent_downloads to /data/torrent_downloads, and /mnt/myfiles/shows to /data/shows (for copying completed files)

    You need a path mapping to tell sonarr that the path inside the torrent container is different from the path where sonarr should look. The torrent client says “I have a new show to copy, it’s in /downloads”. Sonarr doesn’t have /downloads, but if you set up the path mapping, it knows that /downloads on the torrent client is actually equivalent to /data/torrent_downloads in sonarr. Thus, inside the sonarr container, it copies the file from /data/torrent_downloads to /data/shows.
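
    In docker terms, the mounts from that example would look something like this (the container names and the linuxserver images are just illustrative choices). The mapping itself is then added in sonarr under Settings → Download Clients → Remote Path Mappings, with remote path /downloads and local path /data/torrent_downloads:

        # Torrent client sees the downloads directory as /downloads
        docker run -d --name qbittorrent \
          -v /mnt/myfiles/torrent_downloads:/downloads \
          lscr.io/linuxserver/qbittorrent

        # Sonarr sees the same directory as /data/torrent_downloads, plus the library
        docker run -d --name sonarr \
          -v /mnt/myfiles/torrent_downloads:/data/torrent_downloads \
          -v /mnt/myfiles/shows:/data/shows \
          lscr.io/linuxserver/sonarr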


  • The user and group ID inside the container don’t have to match any user on your host machine. It’s possible that user:70 is configured as the user the container launches as, in which case you should set the ownership of the directory to match what the container expects.

    E.g. the container for my torrent client runs as user 700, group 700. My host machine does not have either of those IDs defined. My torrent directory must be chown’d to 700:700 or else the container can’t read/write torrents.
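
    Concretely, with the 700:700 example, that’s just the following (the path is a placeholder for wherever your torrent data actually lives):

        # Give the container's UID/GID ownership of the host directory
        sudo chown -R 700:700 /mnt/myfiles/torrent_downloads

        # Verify with numeric IDs, since the host has no name for UID 700
        ls -ln /mnt/myfiles/torrent_downloads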