• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: June 29th, 2023



  • I actually agree. For the majority of sites and/or use cases, it probably is sufficient.

    Explaining properly why LE is generally problematic takes considerable depth of information that I’m just not able to relay easily right now. But consider this:

    LE is mostly a convenience. They save an operator $1 per month per certificate. For anyone with hosting costs beyond $1000, these are laughable savings. People who take TLS seriously often have more demands than “a padlock in the browser UI”. If a free service decides it no longer wants to run OCSP, that’s an annoying disruption that was entirely not worth the $1: https://www.abetterinternet.org/post/replacing-ocsp-with-crls/

    LE has no SLA. You have no guarantee that you will ever be able to renew your certificate again. Not a risk everyone should be taking.

    Who is paying for LE? If you’re not paying, how can you rely on the service to exist tomorrow?

    It wasn’t too long ago that people said “only some sites need HTTPS, HTTP is fine for most”. It never was, and people shouldn’t build anything relevant on “free” security today either.


  • gencha@lemm.ee to Selfhosted@lemmy.world · Paid SSL vs Letsencrypt · English · +2 / −16 · 6 days ago

    People with actually relevant use cases and a need for a reliable partner would never use LE. It’s a gimmick for hobbyists and people who suck at their job.

    If you have never revoked a certificate, you don’t really know what you’re doing. If you have never run into rate-limiting issues with LE that block a rollout, you don’t know what you’re doing.
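    For anyone who has never had to do it: revoking a certificate with certbot looks roughly like this (example.com is a placeholder, the reason code is optional):

    # revoke a certificate that certbot manages locally
    certbot revoke --cert-name example.com --reason keycompromise
    # then drop the lineage so certbot stops trying to renew it
    certbot delete --cert-name example.com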

    LE works until it doesn’t, and then it’s like every other free service on the internet: no guarantees. If your setup relies on the goodwill of a single entity handing out shit for free, it’s not a robust setup. If you rely on that entity to keep an OCSP responder alive for free so all your consumers can verify the validity of your certificate, that’s not great. And people do this to save their company $1 a month for the real thing? Even running the shitty certbot in compute has a larger cost. People are so blindly in love with this “free” garbage. The fanboys will never die off.






  • Bro, I’m an AWS Cloud Solution Architect and I seriously don’t know what you’re talking about. And no, when I’m wasting time on Lemmy, there is literally nothing better to do.

    AWS made S3. People built software to integrate S3 as a storage backend. Other people didn’t want to do AWS, and built single-node imitations of the S3 service. Now you use those services and think that is S3, while it is only a crude replica of what S3 really is. At this point the S3 API is redundant and you could just as well store your assets close to your application. You have no real, global S3 delivery service anyway. What’s the point?
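    To make that concrete: the only thing those imitations really give you is API compatibility, i.e. you can point the stock AWS tooling at a different endpoint (localhost:9000 stands in for a local minio here, and the bucket name is made up):

    # same client, same API calls, just a self-hosted endpoint
    aws s3 cp ./asset.png s3://my-bucket/asset.png --endpoint-url http://localhost:9000

    Everything that makes S3 actually S3, like the durability guarantees, IAM, lifecycle rules and global delivery, is not in that command.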

    Most people misuse AWS S3. Using stuff like minio is even more misguided.









  • gencha@lemm.ee to Selfhosted@lemmy.world · HTTPS on homelab (just locally) · English · +5 / −1 · 2 months ago

    I roll out Step CA to my workstation with an Ansible role. All other clients in the lab trust this CA and are allowed to request certificates for themselves through ACME, just like with Let’s Encrypt.

    All my services on all clients on the network are exposed through traefik, which also handles the ACME process.
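    If you want to reproduce that: step-ca exposes an ACME directory per provisioner, so any ACME client can talk to it. Roughly, with certbot (hostname, port, paths and provisioner name are made up, adjust to your setup):

    # the client has to trust the step-ca root certificate
    REQUESTS_CA_BUNDLE=/etc/step/root_ca.crt certbot certonly --standalone \
      -d service.lab.internal \
      --server https://ca.lab.internal:9000/acme/acme/directory

    Traefik just needs its ACME certificate resolver pointed at that same directory URL instead of the Let’s Encrypt one, and it needs to trust the same root.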

    When it comes to Jellyfin, this is entirely counter-productive. Your media server needs to be accessible to be useful. Jellyfin should be run with host networking to enable DLNA, which will never pass through TLS. Additionally, not all clients support custom CAs. Chromecast or the OS on a TV are prime candidates to break once you move your Jellyfin entirely behind a proxy with custom CA certificates. You can waste a lot of time on this and achieve very little. If you only use the web UI for Jellyfin, then you might not care, but I prefer to keep this service out of the fancy HTTPS setup.




  • Sharing the network space with another container is the way to go IMHO. I use podman and just run the main application in one container, and then another VPN-enabling container in the same pod, which is essentially what you’re achieving with the network_mode: container:foo directive.

    Ideally, exposing ports on the host node is not part of your design, so don’t have any --port directives at all. Your host should allow routing to the hosted containers and, thus, their exposed ports. If you run your workloads in a dedicated network, like 10.0.1.0/24, then the addresses assigned to your containers need to be reachable from wherever you consume the services, and you just reach all of their exposed ports directly. Ultimately, you then want to control port exposure through services like firewalld, but that can usually be delayed. Just remember that port forwarding is not a security mechanism, it’s a convenience mechanism.
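    A rough sketch of that layout with podman (names, subnet and image are placeholders):

    # dedicated, routable network for workloads; note the absence of any --publish/-p
    podman network create --subnet 10.0.1.0/24 lab-net
    podman pod create --name mypod --network lab-net
    podman run -d --pod mypod --name app docker.io/example/app
    # the container gets a 10.0.1.x address that is reached directly, given a route on the host/LAN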

    If you want DLNA, forget about running that workload in a “proper” container. For DLNA, you need the ability to open random UDP ports for communication with consuming devices on the LAN. This will always require host networking.
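    In podman terms that means something like this (media path and tag are placeholders):

    # host networking: Jellyfin can open arbitrary UDP ports for DLNA/SSDP discovery
    podman run -d --name jellyfin --network host \
      -v /srv/media:/media:ro \
      docker.io/jellyfin/jellyfin:latest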

    Your DLNA-enabled workloads, like Plex or Jellyfin, need a host networking container. Your services that require internet privacy, like qBittorrent, need their own dedicated pod on a dedicated network, with another container that controls their networking plane to redirect communication to the VPN. Ideally, all your manual configuration then ends up as a single directive in the WireGuard config like:

    PostUp = ip route add 192.168.1.0/24 via 192.168.19.1 dev eth0
    

    WireGuard will likely, by default, route all traffic through the wg0 device. You then just tell it that the LAN CIDR is reachable directly through eth0. This keeps the communication path to the VPN-secured container open after the VPN is up.
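    To check that the route actually landed, ip route get inside the pod’s namespace shows which device a LAN address would leave through (assuming iproute2 is in the image and “vpn” is what you named your VPN container):

    podman exec vpn ip route get 192.168.1.10
    # expected output ends in something like: via 192.168.19.1 dev eth0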