

Alt account of @Badabinski
Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.


Yep, that’s my use-case. I am not interested in unlocking the door, only locking it.
I’ve been on a Lethal Company kick for a few months. I’ve been getting super involved in the “high quota” scene, where people try to get the highest quota possible. It involves a lot of crazy tech and good teamwork, and I’m really enjoying it.


Yep, taking some care early on can pay dividends down the road. The data structures you choose really matter, and YAGNI can stop you from going overboard with indirection and other shit. Premature optimization is bad, but there’s nothing wrong with writing performant software as long as it’s still comprehensible and extensible.


If that’s happening, then Anubis has worked. The point is to make it computationally expensive to access a webpage, because that’s a natural rate limiter. It kinda sounds like it needs to be made more computationally expensive, however.
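If it helps anyone picture the mechanism, here’s a toy hashcash-style sketch (just to illustrate the concept; this is not Anubis’s actual scheme or its parameters):

```python
# Toy hashcash-style proof-of-work, just to show the rate-limiting idea.
# The server hands out a random challenge plus a difficulty; the client
# must find a nonce whose SHA-256 hash starts with `difficulty` zero bits.
# Finding the nonce costs real CPU time; checking it is a single hash.
import hashlib
import itertools
import os

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: bytes, difficulty: int) -> int:
    """Client side: brute force a nonce (the expensive part)."""
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Server side: one hash, essentially free."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = os.urandom(16)
nonce = solve(challenge, difficulty=20)   # each extra bit doubles the work
assert verify(challenge, nonce, difficulty=20)
print(f"solved with nonce {nonce}")
```

Bumping the difficulty is the knob: each extra bit doubles the expected number of hashes the client has to grind through, while verification stays a single hash on the server.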
Do you have any sources for the 10x memory thing? I’ve seen people who have made memory usage claims, but I haven’t seen benchmarks demonstrating this.
EDIT: glibc-based images wouldn’t be using service managers either. PID 1 is your application.
EDIT: In response to this:
There’s a reason a huge portion of docker images are alpine-based.
After months of research, my company pushed thousands and thousands of containers away from Alpine for operational and performance reasons. You can get small images using glibc-based distros. Just look at Chainguard if you want an example. We saved money (many, many dollars a month) and had fewer tickets once we finished banning Alpine containers. I haven’t seen a compelling reason to switch back, and I just don’t see much to recommend Alpine outside of embedded systems where disk space is actually a problem. I’m not going to tell you that you’re wrong for using it, but my experience has basically been a series of events telling me to avoid it. Also, I fucking hate the person who decided it wasn’t going to do search domains properly or DNS over TCP.
Debian is superior for server tasks. musl is designed to optimize for smaller binaries on disk. Memory is a secondary goal, and CPU time is a non-goal. musl isn’t meant to be fast, it’s meant to be small and easily embedded. Those are great things if you need to run in a network- or disk-constrained environment, but for a server? Why waste CPU cycles using a libc that is, by design, less time efficient?
EDIT: I had to fight this fight at my job. We had hundreds of thousands of Alpine containers running, and switching them to glibc-based containers resulted in quantifiable cloud spend savings. I’m not saying musl (or Alpine) is bad, just that it’s horses for courses.
Is it? I thought the thing that musl optimized for was disk usage, not memory usage or CPU time. It’s been my experience that Alpine containers are worse than their glibc counterparts because glibc is damn good. It’s definitely faster in many cases. I think this is fixed now, but I remember when musl made the Python interpreter run like 50-100x slower.
EDIT: musl is good at what it tries to be good at. It’s not trying to be the fastest, it’s trying to be small on disk or over the network.
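If anyone wants to eyeball the difference on their own workload, a crude approach (not a rigorous benchmark; the script is just an allocation-heavy stand-in I made up) is to run the same script under a musl-based image like python:3.12-alpine and a glibc-based one like python:3.12-slim, then compare wall time:

```python
# bench.py - crude allocation/string churn; run it under a musl-based
# Python image and a glibc-based one and compare the elapsed time.
# Numbers vary a lot by workload; this is purely illustrative.
import time

def churn(n: int) -> int:
    total = 0
    for i in range(n):
        chunks = [str(j) for j in range(100)]   # lots of small allocations
        total += len(",".join(chunks)) + i % 7
    return total

start = time.perf_counter()
churn(200_000)
print(f"elapsed: {time.perf_counter() - start:.2f}s")
```

The gap you see will depend heavily on the workload, but allocation- and string-heavy code is where the libc difference tends to show up.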


True! I just wonder how much energy they’d realistically be able to store for a given amount of resources. Like, does this have the same issue as Lifted Weight Storage, where the energy density just doesn’t really make sense once you get right down to it? I don’t know the relevant math to determine how much water, and at what pressures, would be required to scale this up to the 500 MWh/1 GWh range. It might be perfectly fine.
EDIT: fuck man I’m not writing well today. edited to make me sound like less of a cretin
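For anyone who wants to sanity-check the scale anyway: if you assume the stored energy is roughly pressure times displaced volume (my assumption, not something from the article, and the pressures below are made-up round numbers), the arithmetic looks like this:

```python
# Back-of-the-envelope: stored energy ~= pressure * displaced volume.
# The 1 GWh target is from my comment above; the pressures are guesses.
TARGET_J = 1e9 * 3600  # 1 GWh in joules

for pressure_mpa in (5, 20, 50):
    volume_m3 = TARGET_J / (pressure_mpa * 1e6)
    pools = volume_m3 / 2500  # ~2,500 m^3 per Olympic pool, for scale
    print(f"{pressure_mpa:>2} MPa -> ~{volume_m3:,.0f} m^3 (~{pools:,.0f} Olympic pools)")
```

So hitting a 1 GWh target means moving somewhere between tens of thousands and hundreds of thousands of cubic meters of water, depending on the pressure the system can actually sustain.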


I wonder if this suffers from the same power density issue as most alternatives to pumped hydro systems. It’s REALLY hard to do better than megatons of water pumped 500 meters up a hill.
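Just to put rough numbers on that (plain E = m·g·h, nothing fancier):

```python
# E = m * g * h for the pumped hydro comparison:
# one megaton (1e9 kg, i.e. a million cubic meters) of water lifted 500 m.
mass_kg = 1e9
g = 9.81          # m/s^2
height_m = 500

energy_j = mass_kg * g * height_m
print(f"~{energy_j / 3.6e12:.2f} GWh per megaton at 500 m")  # about 1.4 GWh
```

Even pumped hydro only gets about 1.4 GWh out of a megaton of water and 500 m of head, so anything with a lower energy density needs correspondingly more mass or more height per MWh.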


I thought I remembered reading that saltwater electrolysis is far more efficient than freshwater electrolysis. It’s probably not orders of magnitude different, but I imagine it might help a bit.


Should have just used AGPL from the start, instead of falling back to this fucked up modified BSD license. It wouldn’t stop people from stripping the branding, but they’d have to release source code which would tell all users what they’re actually using.
Hard power cycling your AC unit is bad for it and may eventually kill it. The fan needs to run for a bit after the compressor turns off. This affects large ACs more than small ones, but it may cause damage after a while. If your AC unit has an RF remote, I’d recommend using something like a Broadlink unit to control it.
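If it helps, here’s roughly what the Broadlink route looks like with the python-broadlink library (a sketch, not drop-in code; discovery details, timing, and the RF-vs-IR learning flow depend on your device and library version):

```python
# Sketch with the python-broadlink library: learn the remote's "off"
# command once, then replay it instead of cutting power to the unit.
# This shows the IR learning flow; RF remotes use the library's
# frequency-sweep learning helpers, but replay works the same way.
import time
import broadlink

devices = broadlink.discover(timeout=5)  # find Broadlink units on the LAN
blaster = devices[0]                     # assuming the first hit is the blaster
blaster.auth()                           # required before sending commands

# One-time capture: put the blaster into learning mode, then point the AC
# remote at it and press the button you want to record.
blaster.enter_learning()
time.sleep(10)                           # time to actually press the button
packet = blaster.check_data()            # the captured command; save this

# From then on, replay the command instead of hard power cycling the AC.
blaster.send_data(packet)
```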
I would not recommend ThirdReality Zigbee smart plugs. Their firmware updates have been buggy far too often. Honestly, the only smart plugs I’ve been happy with are Z-Wave ones. Zooz ZEN04-LR and ZEN15-LR (for high current draw applications) plugs have been awesome for me. My hub is PoE, so I can easily stick it centrally in my home.


But k3s so niiiice.


“Get off vent, or I’ll have you bent.”
I wish those stupid videos weren’t the first thing my brain goes for when I see the word “Ventrilo.”
Proxmox HA cluster with a SAN. VM migrations go wheeeeeeeeee.
I’d just run Home Assistant on the mini PC. There are a boatload of add-ons that you can install which will allow you to make better use of the hardware.


WireGuard was written with the explicit goal of having sane, secure defaults. I totally feel you w.r.t. OpenVPN or IPsec, since it’s easy to do something wrong. WireGuard is much easier because it simply refuses to give you the choice to do things incorrectly.
W.r.t. the certificate thing, you could set up a reverse proxy and do HSTS to ensure nobody can load up a rogue CA on your devices. HSTS has the same issue that SSH has (trust on first use, or whatever it’s called), but you just need to make sure nobody is MITMing you for that first connection and then you’ll be good to go. This would let you use a self-signed certificate if you so desired.


For people like me who lack context:
Authelia is an open-source authentication and authorization server and portal fulfilling the identity and access management (IAM) role of information security in providing multi-factor authentication and single sign-on (SSO) for your applications via a web portal. It acts as a companion for common reverse proxies.
Open source can be enshittified. FOSS with many contributors should be basically proof against being fucked with.