Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 202 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • We’ve been using vector rendering for decades, this isn’t new at all. This just makes it better because supposedly now it can be offloaded to the GPU.

    From the OS’s perspective it doesn’t care: it hands a rectangle to the application to render into along with some metadata like what scaling to render as. Then the application does what it needs to do to get the pixels in there.

    This would be handled entirely in Qt in this case, but any competing toolkit can implement something similar as well.


  • No, but it does show how much capitalism relies on the absolute exploitation of the labor market, and the double standard from the US in that regard. Free market good, but only when US companies are the ones fucking everyone over.

    • US companies buying cheap stuff from China and marking it up 500%: good, American values
    • China cuts out the middleman and sells the same product for the same price they would sell it to the reseller for: noooooo we can’t compete with that, China bad, it’s so unfair! Waaaaaaa

    At least the EU doesn’t constantly brag about muh freedom and how the free market is the best thing ever and you’re a commie if you don’t agree that capitalism is the best.


  • I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive equals better and more serious” thinking is unfortunately still quite present, especially in big corps. Big corps also seem to like the concept of having to prove yourself with a high price of entry; they just can’t believe a tiny company could possibly have a better product.

    That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on readonly diskless VMs at work because otherwise customers refuse to sign because “we have no security systems”. Everything has to be TLS encrypted, even if it goes to localhost. Box checkers vs common sense.


  • IMO that’s more of a problem with the industry not really caring to support lower specs, or generally not seeing the Deck as a real console or platform to target. People still make Switch games, and the damn thing was already outdated at launch and they even underclocked it for good measure.

    At 800p you’ve got to start thinking: is most of the detail those games compute even actually visible on screen? How many PCs does that make obsolete? If the Deck can’t run it at 800p, then even at 1080p you’re gonna need what, an RTX 2060 for the lowest settings on a PC?

    Some of the example titles don’t even sound like the kind of titles made to showcase what your 4090 can do, and those are games where you’d logically want as many people as possible to be able to play.



  • Neither do Google Trust Services or DigiCert. They’re all HTTP validation on Cloudflare, and we have Fortune 100 companies served with LetsEncrypt certs.

    I haven’t seen an EV cert in years, browsers stopped caring ages ago. It’s all been domain validated.

    LetsEncrypt publicly logs which IP requested a certificate, that’s a lot more than what regular CAs do.

    I guess that’s one more for the pile of reasons why everyone hates Zscaler.


  • That’s more of the general DevOps/server admin learning curve being steep than anything specific to Vaultwarden, to be fair.

    It looks a bit complicated at first as Docker isn’t a trivial abstraction, but it’s well worth it once it’s all set up and going. Each container is always the same, and always independent. Vaultwarden per se isn’t too bad to run without a container, but the same Docker setup can be used for, say, Jitsi, which is an absolute mess of components to install and get working, some Java stuff and all. But with Docker? Just docker compose up -d, wait a minute or two and it’s good to go, you just need to point your reverse proxy at it.
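
    As a rough idea of what that looks like, here’s a minimal compose sketch for Vaultwarden, assuming the official vaultwarden/server image and a local ./data directory for persistence (the port is just an example):

      # docker-compose.yml
      services:
        vaultwarden:
          image: vaultwarden/server:latest
          restart: unless-stopped
          volumes:
            - ./data:/data
          ports:
            - "127.0.0.1:8080:80"   # only reachable locally; the reverse proxy talks to this

    Then docker compose up -d in that directory and it’s running.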

    Why do you need a reverse proxy? Because it’s a centralized location where everything comes in: instead of having 10 different apps with their own certificates and ports, you have one proxy, one port, and a handful of certificates all managed together, so you don’t have to figure out how to make all those apps play together nicely.

    Caddy is fine, you don’t need NGINX if you use Caddy. There’s also Traefik, which lands in between Caddy and NGINX in ease of use, and there’s HAProxy. They all do the same fundamental thing: traffic comes in as HTTPS, the proxy reads the Host header from the request and sends it to the right container as plain HTTP. It doesn’t have to work that way specifically, but that’s the most common setup in self-hosting.
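
    For the Caddy route, the corresponding Caddyfile entry could look something like this sketch (vault.example.com is a placeholder; Caddy fetches the certificate automatically):

      vault.example.com {
          reverse_proxy 127.0.0.1:8080
      }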

    As for your backups, if you used a Docker compose file, the volume data should be in the same directory. But it’s probably using some sort of database, so you might want to look into doing periodic data exports instead: databases don’t like being backed up live, since the file is constantly being updated and you can’t really get a consistent snapshot of it in one go.
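
    For example, assuming the default SQLite backend Vaultwarden ships with, something along these lines gives you a consistent copy even while it’s running (paths are placeholders):

      sqlite3 ./data/db.sqlite3 ".backup './backups/db-$(date +%F).sqlite3'"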

    But yeah, try to think of it as an infrastructure investment that makes deploying more apps in the future a breeze. Want to add a NextCloud? Add another docker compose file and start it, Caddy picks it up automagically and boom, it’s live and good to go!

    Moving services to a new server is pretty easy too. Copy over your configs and composes, and volumes if applicable. Start them all, and they should come back in exactly the same state they were in on the other box. No services to install and configure, no repos to add, no distro to maintain. All built into the container by someone else so you don’t have to worry about any of it. Each update of the app brings with it the whole matching updated OS with the right packages in the right versions.
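
    In practice that move can be as simple as something like this (hostnames and paths are made up for illustration):

      # copy composes, configs and volume data, then bring the stack back up on the new box
      rsync -a ~/apps/ newserver:~/apps/
      ssh newserver 'cd ~/apps/vaultwarden && docker compose up -d'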

    As a DevOps engineer, I love the whole thing because I can have a Kubernetes cluster running on a whole rack, tell it “here are the apps I want you to run”, and it just figures itself out: it automatically balances the load, and if a server goes down the containers respawn on another one and keep going as if nothing happened. We don’t have to manually log into any of those servers to install services to run an app. More upfront work for minimal work afterwards.
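
    That “here are the apps I want you to run” part roughly corresponds to a manifest like this sketch (names and the image are placeholders):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: some-app
      spec:
        replicas: 3                 # the cluster decides which nodes run them
        selector:
          matchLabels:
            app: some-app
        template:
          metadata:
            labels:
              app: some-app
          spec:
            containers:
              - name: some-app
                image: nginx:stable   # stand-in for whatever you're actually deploying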



  • IMO the biggest attack vector there would be a Minecraft exploit like log4j, so the most important part to me would be making sure the game server is properly sandboxed, just in case. Start from the point of view that the attacker has breached Minecraft and has shell access as that user. What can they do from there? Ideally, nothing useful other than maybe running a crypto miner. Don’t reuse passwords, obviously.

    With systemd, I’d use the various Protect* directives like ProtectHome, ProtectSystem=full, or failing that, a container (Docker, Podman, LXC, manually, there’s options). Just a bare Alpine container with Java would be pretty ideal, as you can’t exploit sudo or some other SUID binaries if they don’t exist in the first place.
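
    As a sketch of the systemd route, something along these lines in a hypothetical minecraft.service (the directives are real, the user and paths are just examples):

      [Service]
      User=minecraft
      NoNewPrivileges=true
      PrivateTmp=true
      ProtectHome=true
      ProtectSystem=full
      ReadWritePaths=/srv/minecraft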

    That said the WireGuard solution is ideal because it limits potential attackers to people you handed a key, so at least you’d know who breached you.
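
    A rough sketch of the server-side wg0.conf, where each [Peer] block is one person you handed a key to (addresses and keys are placeholders):

      [Interface]
      Address = 10.8.0.1/24
      ListenPort = 51820
      PrivateKey = <server private key>

      [Peer]
      PublicKey = <friend's public key>
      AllowedIPs = 10.8.0.2/32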

    I’ve left Minecraft servers online and forgotten about them, and really nothing happened whatsoever.



  • If your stuff is all Docker then yeah, immutable makes sense as it makes the entire box declarative and immutable: you can get back the exact same operating Docker environment on the server, and then you can get back the exact same Docker workloads going with the Docker compose configurations.

    If you ever need to run stuff you’d run on Debian, you can just shove it in a Debian container.
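
    For example, something as simple as this gets you a throwaway Debian shell (the volume path is just an illustration):

      docker run -it --rm -v /srv/data:/mnt/data debian:stable bash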

    That said, if most of the stuff is containers, the risk of just the core Debian breaking is fairly low. Pick whatever is easiest for you to deal with based on your needs. Immutable distros have a bit of a learning curve.


  • What developers seem to forget is how much stuff they get in return from Valve:

    • Valve invests heavily into Linux gaming, because they know it’s how you keep all those games working in the long run. (Proton)
    • Valve takes care of download bandwidth forever.
    • Built-in streaming and remote play together
    • The Steam Deck and upcoming SteamOS for the console-like experience
    • Forums
    • The workshop and mods
    • Chat and social networking features
    • Steam Input, and my beloved Steam Controller
    • Achievements

    The Epic store is just, here’s the game files loaded with DRM, try to enjoy. Why even sell through a store and not provide a direct download at this point and get 100% of the sales?

    I only buy on Steam because I want Valve to have their 30% cut, because they invest it in the community for everyone’s benefit, including Epic’s games and customers who want to play on Linux or the Steam Deck. Epic would be perfectly happy with the subpar Windows handheld experience because “it’s how PC gaming works”. Proton is amazing: it even eliminates variance that would exist between real Windows machines, which results in more games just working right out of the box compared to Windows, especially very old ones.

    Between Epic and Valve, I’ll pick the one that makes gaming better for everyone.


  • I learned it accidentally while trying to get root on an encrypted dataset working with systemd init, without sd-zfs. This turns out to be how the zfs utility works internally to signal the driver “hey, it’s okay, I’m a ZFS utility, the user isn’t using mount directly”, and how you deal with mounting your root dataset to the temporary /sysroot while its mountpoint is set to /, while in the initramfs and before pivoting root.

    Obviously, don’t use that for anything other than recovering your data. If you want to use this array you should figure out the mountpoints properly so ZFS does it automatically. It shouldn’t break anything but it’s gross: either set mountpoint=legacy and use fstab, or set its mountpoint in ZFS and use zfs mount.
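
    A sketch of both options, using pool/dataset as a placeholder name:

      # option 1: legacy mountpoint, managed through fstab like any other filesystem
      zfs set mountpoint=legacy pool/dataset
      echo 'pool/dataset  /mnt/data  zfs  defaults  0  0' >> /etc/fstab

      # option 2: let ZFS own the mountpoint and mount it itself
      zfs set mountpoint=/mnt/data pool/dataset
      zfs mount pool/dataset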


  • The trick for this one is mount -t zfs -o zfsutil internal /mnt/some/path

    Assuming the root dataset is mountable. If the dataset has canmount=off set, it will refuse to mount.

    If it’s set to mountpoint=legacy then you don’t need -o zfsutil, but you still need to provide both the source and destination paths. Otherwise you’ll get the fstab error, because mount can’t figure out what to mount or where to mount it.


  • Yep there’s a reason I reached directly for that configuration. WireGuard uses UDP, that’s one of the first things that gets blocked.

    Turns out that’s also the kind of protocol corporate VPNs use, reusing port 443 over TCP. They call those “SSL VPNs”. They get to weed out all the commercial VPNs used to bypass their firewalls, as well as most torrent/game activity, while still mostly catering to their business guests.


  • Best bet is probably going to be something like OpenVPN on port 443 in TCP mode, which basically looks like regular HTTPS. It’s a hotel, I doubt they’re doing deep packet analysis to detect signs it’s OpenVPN. It’s easily detectable, but they’re unlikely to spend the money on that advanced a firewall.
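
    The client-side directives for that would be something along these lines (vpn.example.com is a placeholder), with the server listening with proto tcp on port 443:

      proto tcp
      remote vpn.example.com 443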

    My guess is they went for an allow list of ports rather than a block list, so it lets through DNS (53), HTTP (80), HTTPS (443), and probably also POP/IMAP/SMTP (110, 995, 143, 993, 465).




  • I wish Jitsi was actually good. It’s a pain in the ass to set up and I’ve yet to get anything more than maybe 480p out of it, across both Firefox and Chrome as well as the mobile apps on iOS and Android. It even reports a poor internet connection when the server is literally 5ms away over the Internet, so even if it has to fall back to routed traffic I’ve still got a full gigabit of connectivity between me and my server in a datacenter, which is way more than enough. None of the open instances I tried were any different either.

    It feels like a ridiculously overcomplicated WebRTC demo app, the end performance is essentially identical.


  • Separate components that do one thing, only that thing, and do it well are good. Extra containers are basically free.

    • The exporters provide the metrics. They can be standalone executables like the node exporter, or they can be built into apps themselves, since it’s just HTTP. It’s trivial to add metrics to just about anything without needing extra ports, and the protocol is easier and more efficient than SNMP.
    • Prometheus scrapes those metrics and stores them in its database (see the sketch after this list). In other apps, that’s the role something like PostgreSQL has: you don’t really use it directly, but it’s no less important.
    • Grafana is the frontend you slap in front of Prometheus to actually display your metrics.
    • Alertmanager looks at the metrics and sends alerts. It’s separate because if your Prometheus box goes down, how are you gonna be alerted of that?
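
    A minimal prometheus.yml sketch for that scraping step, with one job pointed at a node exporter (the target address is a placeholder):

      scrape_configs:
        - job_name: node
          static_configs:
            - targets: ['192.168.1.10:9100']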

    All 4 of those can be swapped with something else equivalent and it all still works. Don’t like the UI? Replace Grafana. Don’t like Prometheus? There’s VictoriaMetrics and InfluxDB.

    It looks silly at a small scale, but it scales up very well: a couple hundred VMs per Prometheus install, node exporters on every VM, and a single Grafana cluster to visualize the data for the whole infrastructure at once.

    That makes it all well liked in enterprise which means there are exporters for damn near anything (even the Lemmy server has a built-in exporter I can scrape with Prometheus), which in turn makes it the easy solution for self-hosters too, and here we are.

    I feel like it’s easier to set up than some of the all in one solutions I’ve used previously, despite being several components. They’re all components that basically just work out of the box.