While I don’t agree with your first point from my experience, the second one is very true. Especially for memory consumption, your typical Java app easily occupies five times as much as something more bare metal.
I’m okay with Steam because:
The Blizzard launcher is just for the few games Blizzard is selling. It asked me to go online way too often. Maybe every single time, I don’t remember exactly. It’s a bit like that Rockstar launcher. I don’t see any value in it besides auto updates.
You still have to install that annoying Blizzard launcher, I guess?
That wasn’t an easy game. But it didn’t require the accuracy today’s competitive FPS titles do. Even Duke Nukem 3D was pretty cool back then. It was super easy to hit your targets, though.
I’d prefer to be on the couch instead of at the desk, too. But an FPS with a controller is just worlds below mouse and keyboard.
I really want to enjoy games like Fallout or GTA on the Deck but compared to mouse/keyboard it’s just really bad. I cannot understand how so many people like to play games like CoD or Battlefield on consoles.
If people keep buying that crap, what’s going to stop the companies from doing it? I’m at a point where I don’t care anymore, to be honest. I have so many games in my library that I haven’t played yet, plus all the emulated stuff, that it’s going to be sufficient for the rest of my life. There will be really good AAA games by nice companies once in a while, like Baldur’s Gate 3 right now. Then there are really good indie games like Stardew Valley, Minecraft (in the early days), and so on.
Diablo 3. Not the one Blizzard released.
piss off their customers
At least for Reddit and Twitter, the users are not the actual customers. The ad companies are the customers.
There are tons of good indie games. And you don’t even need a 2000€ PC to play them.
I think the dependencies might actually be a problem for the “one binary fits all” solution. With a plain binary, the user is responsible for the external dependencies. If by any chance you’re using Arch, there’s a package in the AUR.
That’s actually not a bad idea. There are a few downsides to this, like the binary being quite big compared to the classical “one binary per architecture” style. I’ll give it a thought. The Docker image is pretty small btw ;).
Sorry for the double response… I got an error the first time I hit Submit.
My favorite feature of good old reddit (rip)! Makes me feel right at home.
This is the way! There’s a catch with swap files on encrypted disks and hibernation, but that’s quite a special case. Edit: forgot to mention zswap, which keeps a compressed cache of swap pages in RAM in front of the swap device.
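For reference, zswap is toggled via kernel parameters; a minimal config sketch, assuming GRUB (the compressor and pool-size values are just illustrative choices):

```shell
# /etc/default/grub — enable zswap at boot (adjust for your bootloader)
GRUB_CMDLINE_LINUX_DEFAULT="quiet zswap.enabled=1 zswap.compressor=zstd zswap.max_pool_percent=20"

# Then regenerate the GRUB config and reboot:
#   grub-mkconfig -o /boot/grub/grub.cfg
# The current state can be inspected at runtime:
#   grep -r . /sys/module/zswap/parameters/
```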
I tried it a few times, but it was so slow (even on a local network) that I ended up cancelling the transfer every single time. I prefer Syncthing, which does require some basic setup though.
Not necessarily. For networking, I wrote a bash script with just a few lines that creates a private network namespace, assigns it to a pod, and sets up the default routes. That script is run by a systemd user instance and has the suid flag set. One could argue that it’s not rootless because of that, but the elevated privileges are only needed at startup. No performance impact and very robust. A lot better than the Docker network bridges imho.
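The actual script isn’t shown here, so the following is only a minimal sketch of what such a netns-per-pod setup could look like; the namespace and interface names, the subnet, and the `run` helper are all assumptions:

```shell
#!/bin/sh
# Hypothetical sketch: give a pod its own network namespace via a veth
# pair plus a default route. DRY_RUN=1 (the default here) only prints
# the commands; run as root with DRY_RUN=0 to actually execute them.
set -eu

pod="${1:-demo}"
ns="pod-$pod"          # private network namespace for the pod (assumed naming)
veth_host="ve-$pod"    # host end of the veth pair (assumed naming)
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run ip netns add "$ns"
# veth pair: one end stays on the host, the peer moves into the namespace
run ip link add "$veth_host" type veth peer name eth0 netns "$ns"
run ip addr add 10.0.0.1/24 dev "$veth_host"   # illustrative subnet
run ip link set "$veth_host" up
run ip -n "$ns" addr add 10.0.0.2/24 dev eth0
run ip -n "$ns" link set eth0 up
run ip -n "$ns" link set lo up
# default route inside the namespace points at the host end of the veth
run ip -n "$ns" route add default via 10.0.0.1
```

Note this glosses over outbound connectivity: for the pod to reach beyond the host you’d additionally need IP forwarding and some NAT/masquerading rule on the host side.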
sftpgo is a nice project for hosting files in a secure way without too much hassle.