• 1 Post
  • 43 Comments
Joined 2 years ago
Cake day: November 17th, 2022



  • pezhore@lemmy.ml to Selfhosted@lemmy.world · which git server for a company?
    1 month ago

    I’ll come out with an anti-recommendation: Don’t do GitLab.

    They used to be quite good, but lately (as in the past two years or so) they’ve been moving more and more features behind a licensing paywall.

    Now if your company wants to pay for GitLab, then maybe consider it? But I’d probably look at some of the other options people have mentioned in this thread.





  • This. My first serious network upgrade was splitting the router/firewall, WiFi, and switching functions out to a Ubiquiti EdgeRouter Lite, an 8-port Netgear managed switch, and a Ubiquiti AP Pro.

    It ended up costing about the same as a Nighthawk, but I had far better control over the firewall/NAT rules, and it made future upgrades less painful since I could swap out just the switching or just the WiFi.

    As a side note, nearly all of the WiFi routers I’ve come across can act as just an access point. My current setup uses an Orbi mesh WiFi system to get a decent signal to my attic bedroom.

  • I remember reading an interesting take on 20TB drives when they came out: the impact of a single drive failure skyrockets with higher-density drives.

    Back with 2TB drives, you could fit 60-70 Blu-ray rips. If that drive dies (without backups/RAID), you’ll be hurting, but not as badly as if you lose a full 20TB drive holding 600-700 rips. Plus, even with RAID, rebuild time increases with density; for a 20TB drive you could be waiting a week for a rebuild (rough numbers sketched below).

    I like the idea of higher density drives, but in my opinion they only really make sense in large drive arrays where you can spread the data over dozens and dozens of replicated drives.
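
    To put rough numbers on that, here’s a back-of-the-envelope sketch in Python. The ~30 GB per rip and the ~30 MB/s effective rebuild rate are just my own round-number assumptions (rebuilds on a busy array run far below raw disk speed), not figures from anywhere authoritative:

    ```python
    # Back-of-the-envelope numbers for drive density vs. failure impact.
    # Assumptions (illustrative only): ~30 GB per Blu-ray rip and a
    # sustained rebuild rate of ~30 MB/s on an array that's still serving I/O.
    RIP_SIZE_GB = 30
    REBUILD_MB_PER_S = 30

    def rips_per_drive(capacity_tb: float) -> int:
        """How many rips fit on a drive of the given capacity."""
        return int(capacity_tb * 1000 / RIP_SIZE_GB)

    def rebuild_days(capacity_tb: float) -> float:
        """Roughly how long a full rebuild of the drive would take."""
        seconds = capacity_tb * 1_000_000 / REBUILD_MB_PER_S
        return seconds / 86_400

    for tb in (2, 20):
        print(f"{tb} TB: ~{rips_per_drive(tb)} rips, rebuild ~{rebuild_days(tb):.1f} days")
    ```

    With those assumptions you get roughly 66 rips and a sub-day rebuild for a 2TB drive, versus ~666 rips and a week-plus rebuild for a 20TB drive.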



  • pezhore@lemmy.ml to Selfhosted@lemmy.world · Self Hosting Fail
    6 months ago

    I didn’t intend to use it on the chest freezer - it was mostly for the modem, but since I had spare battery capacity and outlets I thought what the heck.

    The power load is practically nothing until the freezer’s compressor cycles, and even then it’s fairly efficient; my current estimated runtime is about 18 hours, more than enough time to come up with an alternative if we lose power in a storm.
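
    For context, that runtime estimate is basically just usable battery energy divided by average load. A tiny illustrative calculation, with made-up placeholder numbers rather than my actual hardware specs:

    ```python
    # Rough UPS runtime estimate: usable battery energy divided by average load.
    # All numbers below are illustrative placeholders, not real measurements.
    BATTERY_WH = 900           # nominal battery capacity in watt-hours
    INVERTER_EFFICIENCY = 0.9  # fraction of stored energy that reaches the load
    LOAD_W = 45                # average draw of the modem plus a mostly-idle freezer

    runtime_hours = BATTERY_WH * INVERTER_EFFICIENCY / LOAD_W
    print(f"Estimated runtime: {runtime_hours:.0f} hours")  # -> about 18 hours
    ```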



  • pezhore@lemmy.ml to Selfhosted@lemmy.world · Self Hosting Fail
    6 months ago

    While I appreciate the sentiment, most traditional VMs do not like having their power killed (especially those on non-journaling file systems).

    Even crash-consistent applications can be impacted if the underlying host filesystem is affected by power loss.

    I do think that backups are a valid suggestion here, provided that the backup isn’t interrupted by a power surge or loss.


  • pezhore@lemmy.ml to Selfhosted@lemmy.world · Self Hosting Fail
    6 months ago

    I agree that 99.999% uptime is a pipe dream for most home labs, but I personally think a UPS is worth it, if only to give yourself the option to gracefully shut down systems in the event of a power outage.

    Eventually, I’ll get a working script that checks the battery backup for mains power loss and handles the graceful shutdown for me, but right now that extra 10-15 minutes of battery runtime is enough for a manual effort.
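
    Something along these lines is what I have in mind — a minimal Python sketch that assumes a NUT-managed UPS reachable via the `upsc` CLI as `ups@localhost`; the UPS name, poll interval, and shutdown command are placeholders to tune for your own setup:

    ```python
    #!/usr/bin/env python3
    """Poll a NUT-managed UPS and shut down gracefully on mains power loss.

    Assumes Network UPS Tools is installed and the UPS is reachable via
    `upsc` as "ups@localhost" (placeholder name); must run as root to
    actually invoke shutdown.
    """
    import subprocess
    import time

    UPS = "ups@localhost"   # NUT UPS identifier (placeholder)
    POLL_SECONDS = 30       # how often to check status

    def ups_status() -> str:
        """Return the ups.status string, e.g. 'OL', 'OB DISCHRG', or 'OB LB'."""
        return subprocess.run(
            ["upsc", UPS, "ups.status"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    def main() -> None:
        while True:
            status = ups_status()
            # 'OB' = on battery, 'LB' = low battery
            if "OB" in status and "LB" in status:
                subprocess.run(["shutdown", "-h", "now"], check=False)
                return
            time.sleep(POLL_SECONDS)

    if __name__ == "__main__":
        main()
    ```

    (In practice, NUT’s own upsmon daemon can handle the shutdown-on-low-battery part for you, so the “script” may end up being mostly configuration.)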