• 3 Posts
  • 56 Comments
Joined 11 months ago
Cake day: October 20th, 2023


  • More drives is always better. But you need to understand how you are making it better.

    https://en.wikipedia.org/wiki/Standard_RAID_levels is a good breakdown of the different RAID levels. Those differ slightly depending on whether you are doing “real”/hardware RAID or software RAID (e.g. ZFS), but the principle holds true and the rest is just googling the translation (for example, Unraid is effectively RAID 4 with some extra magic to better support mismatched drive sizes).

    That actually IS an important thing to understand early on. Because, depending on the RAID model you use, it might not be as easy as adding another drive. Have three 8 TB drives and want to add a 10 TB one? Those last 2 TB won’t be used until EVERY drive has at least 10 TB. There are ways to set this up in ZFS and Ceph and the like, but it can be a headache.
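    To make the mismatched-sizes point concrete, here is a rough sketch of the arithmetic in Python (just the capacity math for a generic single-parity array, not any particular RAID implementation):

```python
def usable_tb(drive_sizes_tb, parity_drives=1):
    """Rough usable capacity for a simple striped-parity array:
    every member contributes only as much as the smallest drive,
    and parity consumes one (or more) drives' worth of space."""
    smallest = min(drive_sizes_tb)
    data_drives = len(drive_sizes_tb) - parity_drives
    return smallest * data_drives

# Three 8 TB drives plus a new 10 TB drive, single parity:
print(usable_tb([8, 8, 8, 10]))    # 24 -- the extra 2 TB sits idle
# Only once every drive is at least 10 TB does the array grow:
print(usable_tb([10, 10, 10, 10]))  # 30
```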

    And the issue isn’t the Cloudflare tunnel. The issue is that you would have a publicly accessible service running on your network. If you use the Cloudflare access control thing (a login page before you can access the site) you mitigate a lot of that (while making it obnoxious for anything that uses an app…) but are still at the mercy of Cloudflare.

    And understand that these are all very popular tools for a reason. So they are also things hackers REALLY care about getting access to. Just look up all the MANY MANY MANY ransomware attacks that QNAP had (and the hilarity of QNAP silently re-enabling online services with firmware updates…). Because using a botnet to just scan a list of domains and subdomains is pretty trivial and more than pays for itself after one person pays the ransom.

    As for paying for that? I would NEVER pay for nextcloud. It is fairly shit software that is overkill for what people use it for (file syncing and document server) and dogshit for what it pretends to be (google docs+drive). If I am going that route, I’ll just use Google Docs or might even check out the Proton Docs I pay for alongside my email and VPN.

    But for something self hosted where the only data that matters is backed up to a completely different storage setup? I still don’t like it being “exposed” but it is REALLY nice to have a working shopping list and the like when I head to the store.


  • A LOT of questions there.

    Unraid vs Truenas vs Proxmox+Ceph vs Proxmox+ZFS for NAS: I am not sure if Unraid is ONLY a subscription these days (I think it was going that way?) but for a single machine NAS with a hodgepodge of drives, it is pretty much unbeatable.

    That said, it sounds like you are buying dedicated drives. There are a lot of arguments for not having large spinning disk drives (I think general wisdom is 12 TB is the biggest you should go for speed reasons?), but at 3x18 you aren’t going to really be upgrading any time soon. So Truenas or just a ZFS pool in Proxmox seems reasonable. Although, with only three drives you are in a weird spot regarding “raid” options. Seeing as I am already going to antagonize enough people by having an opinion, I’ll let someone else wage the holy war of RAID levels.

    I personally run Proxmox+Ceph across three machines (with one specifically set up to use Proxmox+ZFS+Ceph so I can take my essential data with me in an evacuation). It is overkill and Proxmox+ZFS is probably sufficient for your needs. The main difference is that your “NAS” is actually a mount that you expose via SMB and something like Cockpit. Apalrd did a REALLY good video on this that goes step by step and explains everything and it is well worth checking out https://www.youtube.com/watch?v=Hu3t8pcq8O0.

    Ceph is always the wrong decision. It is too slow for enterprise and too finicky for home use. That said, I use ceph and love it. Proxmox abstracts away most of the chaos but you still need to understand enough to set up pools and cephfs (at which point it is exactly like the zfs examples above).

    And I love that I can set redundancy settings for different pools (folders) of data. So my blu ray rips are pretty much YOLO with minimal redundancy. My personal documents have multiple full backups (and then get backed up to a different storage setup entirely). Just understand that you really need at least three nodes (“servers”) for that to make sense.

    But also? If you are expanding it is very possible to set up the ceph in parallel to your initial ZFS pool (using separate drives/OSDs), copy stuff over, and then cannibalize the old OSDs. Just understand that makes that initial upgrade more expensive because you need to be able to duplicate all of the data you care about.
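    For a feel of the capacity trade-off between pools, here is a back-of-the-envelope sketch in Python. The raw capacity and replication factors are made-up examples, and real Ceph accounting is messier (overhead, balancing, near-full ratios), but the basic math for a replicated pool is just raw space divided by the pool’s “size”:

```python
def replicated_usable_tb(raw_tb, size):
    """Usable space in a replicated pool: raw capacity divided by
    the replication factor ("size" in Ceph terms)."""
    return raw_tb / size

raw = 54  # hypothetical: 3 nodes x 18 TB

# YOLO media pool: size=2 survives one failure and keeps more space
print(replicated_usable_tb(raw, 2))  # 27.0
# Documents pool: size=3 tolerates more failures at a capacity cost
print(replicated_usable_tb(raw, 3))  # 18.0
```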

    I know some people want really fancy NASes with twenty million access methods. I want an SMB share that I can see when I am on my local network. So… barebones cockpit exposing an SMB share is nice. And I have syncthing set up to access the same share for the purpose of saves for video games and so forth.

    Unraid vs Truenas vs Proxmox for Services: Personally? I prefer to just use Proxmox to set up a crapton of containers/VMs. I used Unraid for years, but the vast majority of tutorials and wisdom out there assume something closer to Proxmox, and it is often a struggle to replicate that in the Unraid GUI (although I think Level1Techs have good resources on how to access the real interface, which is REALLY good?).

    And my general experience is that truenas is mostly a worst of all worlds in every aspect and is really just there if you want something but are afraid of/smart enough not to use proxmox like a sicko.

    Processor and Graphics: it really depends on what you are doing. For what you listed? Only frigate will really take advantage and I just bought a Coral accelerator which is a lot cheaper than a GPU and tends to outperform them for the kind of inference that Frigate does. There is an argument for having a proper GPU for transcoding in Plex but… I’ve never seen a point in that.

    That said: A buddy of mine does the whole vlogger thing and some day soon we are going to set up a contract for me to sit down and set her up an exporting box (with likely use as a streaming box). But I need to do more research on what she actually needs and how best to handle that and she needs to figure out her budget for both materials and my time (the latter likely just being another case where she pays for my vacation and I am her camera guy for like half of it). But we probably will grab a cheap intel gpu for that.

    External access: Don’t do it, that is a great way to get hacked.

    That out of the way. My nextcloud is exposed to the outside world via a cloudflare tunnel. It fills me with anxiety but as long as you regularly update everything it is “fine”.

    My plex? I have a lifetime plex pass so I just use their services to access it remotely. And I think I pay an annual fee for homeassistant because I genuinely want to support that project.

    Everything else? I used to use WireGuard (and OpenVPN before it) but actually switched to Tailscale. I like the control that the former provided but much prefer the model where I expose individual services (well, VMs). Because it is nice to have access to my Cockpit share when I want to grab a file in a hotel room. There is zero reason that anything needs access to my qBittorrent or Calibre or OPNsense setup. Let alone even seeing my desktop that I totally forgot to turn off.

    But the general idea I use for all my selfhosted services is: The vast majority of interactions should happen when I am at home on my home network. It is a special case if I ever need to access anything remotely and that is where tailscale comes in.

    Theoretically you can also do the same via wireguard and subnetting and vlans but I always found that to be a mess to provide access both locally and remotely and the end result is I get lazy. Also, Tailscale is just an app on basically any machine whereas wireguard tends to involve some commands or weird phone interactions.


  • Railjack itself is fun.

    Railjack in pubs is hell on earth. Because you can never be sure when it is safe to “abort mission” so you basically need to rush to Nav to go back to dojo after every single mission. Which often includes at least one person who wants to do another one and doesn’t realize WHY you go back to dojo during pubs. Also, when there is a Nightwave for it, you have the hosts who just leave party the moment the objectives are done which has like a 70% chance of putting the rest of the party into a nav-less railjack where they just have to quit out and lose all the rewards.

    Also, we are at the mercy of the host’s railjack. It has gotten better but you still get the occasional “meta” railjack where only the pilot seat has good guns and everyone else needs to just get in their archwings or do endless repairs and forging because the guns are worthless.

    Which… basically makes it the worst of all worlds in Warframe:

    • We have the host migration issues but they are almost guaranteed rather than occasional
    • We have the annoying delay between runs common to the open worlds
    • Much like Spy, bad matchmaking guarantees a loss or a miserable time

    Which is why I suspect that Railjack will always remain a thing that is there for story quest sequences and not much else.





  • I mean… I don’t really disagree in this specific context.

    I assume Fortnite has kernel level/rootkit anti-cheat. And Epic make massive amounts of cash from all the goku skins people buy. Unless they have the resources to test at least the major distros and keep aware of possible hacks/bypasses on that side it is just begging for exploits. And it is big enough that the moment one is identified EVERYBODY is grabbing an ubuntu live CD to get some goku dollars.

    I still think it is shit that they don’t directly support Linux with the EGS (especially since they distribute Unreal Engine and marketplace stuff via that). But for their “more revenue than the GDP of a small nation” live game? I get it.


    A buddy who works on one of the popular live games made the comparison to pokemon cards. Everyone thinks it is a great idea to show them off at school. Until the kid trips, they get scattered on the floor, and it is a god damned feeding frenzy of every single kid losing their minds to scramble and fight over that dog eared pikachu card.



  • I am not aware of any games having a problem with too many cores*. But most of those (from memory) seem like peak Pentium era games. For the sake of this explanation I will only focus on Intel because AMD was kind of a dumpster fire for the pertinent parts of this.

    Up until probably the late 00s/early 10s, the basic idea was that a computer processor should be really, really fast and powerful. Intel’s Pentium line was basically the peak of this for consumers. One core with little to no threading, but holy crap was it fast, and it had a lot of nice architectural features to make it faster. But once we hit the 4 GHz clock speed range, the technology required to go considerably faster started to get really messy (and started having to care about fundamental laws of physics…). And it was around this time that we started to see the rise of the “Core” line of processors. The idea being that rather than have one really powerful processor you would have 2 or 4 or 8 “kind of powerful” processors. Think “i4”, as it were. And now we are at the point where we have a bunch of really powerful processors and life is great.

    But the problem is that games (and most software outside of HPC) were very much written for those single powerful cores. So if Dawn of War ran great on a chonky 4 GHz Pentium, it didn’t have the logic to split that load across two or three cores of a 3 GHz i4. So you were effectively taking a game meant to run on one powerful CPU core and putting it on one weaker CPU core that also may have lower bandwidth to memory or be missing instructions that helped speed things up.

    To put it in video game (so really gun) terms: it is the difference between playing with a high powered DMR and going to a machine gun, but still treating it like it is semiauto.

    But the nice thing is that compatibility layers (whether it is settings in Windows or funkiness with wine/proton) can increasingly use common tricks to make a few threads of your latest AMD chip behave like a pretty chonky Pentium processor.
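    As a sketch of one such trick: on Linux you can pin a process to a subset of cores, which is roughly what those compatibility layers do for you. This is illustrative only (`os.sched_setaffinity` is a Linux-only API, hence the guard), not how Windows or Proton actually implement it:

```python
import os

def pin_to_cores(cores):
    """Restrict the current process to a subset of CPU cores -- the same
    basic trick used to make an old single-core game behave predictably
    on a many-core CPU. Linux-only API, hence the guard."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, set(cores))  # 0 = current process
        return sorted(os.sched_getaffinity(0))
    return None  # not supported on this platform

print(pin_to_cores({0}))  # typically [0] on Linux
```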

    *: Speculation as I am not aware of any games that did this but I have seen a lot of code that did it. A fundamental concept in parallel/multithreaded programming is the “parallel for”. Let’s say you have ten screws to tighten on your ikea furniture. The serial version of that is that you tighten each one, in order. The parallel version is that you have a second allen key and tell your buddy to do the five on that side while you do the five on this side. But a lot of junior programmers won’t constrain that parallel for. So there might be ten screws to tighten… and they have a crew of thirty people fighting over who gets to hold the allen key and who tightens what. So it ends up being a lot slower than if you just did it yourself.
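    A minimal Python sketch of the constrained version of that parallel for: cap the worker count at the core count instead of spawning a worker per screw. The function and numbers are made up for illustration:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def tighten(screw):
    """Stand-in for the per-item work."""
    return screw * screw

screws = list(range(10))

# The "crew of thirty fighting over the allen key" version would be
# max_workers=30 for 10 items. Instead, bound workers by the core count:
workers = min(len(screws), os.cpu_count() or 1)
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(tighten, screws))  # map preserves order

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```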



  • Denuvo is a lot more effective than that.

    But also? That kind of makes a huge difference. Story time (so obviously anecdotal and grain of salt and…)!

    Way back when there was this series called Mass Effect. All us PC gamers were pissed off that Bioware were traitors who had abandoned us and never wanted to play that xbox shit. Until there was a port of the game to PC and we all needed it in our veins.

    Mass Effect PC was one of the first (?) games with activation-based SecuROM as its DRM. And the Scene Group who cracked it first did a piss-poor job, and the game would crash once you finished the tutorial and got to the starmap. But, because it was the era of multiple Scene Groups vying for power, nobody wanted to be “second” to crack a game.

    So the various message boards were full of people complaining and eventually a good many of us pirates just drove down to Best Buy and bought the game because we needed it NOW!!!

    I want to say it was properly cracked within a week? But that was still, likely, very significant sales. Largely for the same reason that publishers/devs pay for day one influencer streams and the like. That is the peak of marketing and when you get all the impulse buys who didn’t pre-order.


  • Honestly? I think you are getting hung up on approaching this from the T9 perspective.

    Take a look at the mechanical keyboard community. Most people are sane and looking at TKL or even 60% layouts where most buttons people actually use on a keyboard are represented by dedicated keys. But there are some real sickos who go for fully minimalist layouts where they have closer to 20 or even 10 keys. And those are the layouts where you heavily rely on different layers, so that “A” might actually be “Button 1 with modifier X and Y” whereas “B” is “Button 1 with modifier X”. Basically people took the logic of Dvorak and went to an insane degree. It is terrifying and it is beautiful.
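    A toy model of how those layers work, with completely made-up key and modifier names: the same physical key emits different characters depending on which layer modifiers are held.

```python
# Made-up layered keymap: (physical key, held modifiers) -> character.
keymap = {
    ("key1", frozenset()):           "1",  # base layer
    ("key1", frozenset({"X"})):      "B",  # layer X
    ("key1", frozenset({"X", "Y"})): "A",  # layers X+Y
}

def press(key, held_modifiers=()):
    """Resolve a keypress through the active layer; None if unmapped."""
    return keymap.get((key, frozenset(held_modifiers)))

print(press("key1", {"X", "Y"}))  # A
print(press("key1", {"X"}))       # B
print(press("key1"))              # 1
```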

    And that has the same issues that mapping to a gamepad would. Some people are going to be able to internalize that in a timely fashion. Others are going to spend months using typing tools online to train themselves. And the rest of us are going to say “nope” and move on.

    In terms of how to have a better steam deck keyboard? I think there is a lot of room for someone to go full keeb-pill and take advantage of the physical buttons. I would 100% watch that youtube video, maybe throw a tip in a tip jar, and then have a new appreciation for the touchscreen keyboard the next time I have to enter my Warframe password. But I still think that if your game is heavily reliant on having a keyboard… it isn’t a Steam Deck game. And that is fine. I am not going to play DCS on my Steam Deck. One of these days I probably will futz around with streaming X4 though.


    T9, I think, is an example of how to map a large input space (a keyboard) onto a small device (a gamepad). But it mostly thrived as a way to convey meaning in a numeric string (1-800-COLLECT, for example). The people who actually used it for SMS/beepers were few and far between, and we were heavily reliant on auto-complete/predictive text during the flip-phone era. This is why it was the era of “oh, so-and-so texted me. Let me call them back.”
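    The vanity-number mapping is easy to sketch with the classic keypad letter groups (this is just the digit mapping T9 builds on, not T9’s predictive part):

```python
# Classic phone keypad letter groups, the basis of vanity numbers and T9.
KEYPAD = {
    "2": "ABC", "3": "DEF", "4": "GHI", "5": "JKL",
    "6": "MNO", "7": "PQRS", "8": "TUV", "9": "WXYZ",
}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def vanity_to_digits(word):
    """Map a vanity word onto its keypad digits."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.upper())

print(vanity_to_digits("COLLECT"))  # 2655328
```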

    Largely for the same reason that even a lot of “keeb” enthusiasts increasingly acknowledge that going below a 60/65% for a programmer or a 40% for a writer is… of very questionable utility. Some people have the brain pathways to learn completely different keyboard layouts and can keep 4 or 5 layers’ worth of keys in their heads and write straight-up Fortran with a 13-key keyboard, just like how some people can learn a new spoken/written language in a few weeks. Most people can’t, and they basically just “ruin” normal keyboards for themselves.

    As for hacking the gibson: Actually the vast majority of media depictions involve basically a keyboard/touch screen strapped to a wrist (that IS what a deck is). So if you really want your daily driver to be something with serious security issues due to how the lock screen is implemented… that is how you get your Count Zero on.

    But that really is the issue here. You and I are discussing how you would map an actual keyboard to the steam deck. That isn’t what is being discussed here (careful, you too will get blocked (OH NOES!!!) because you didn’t give proper respect to a blog post). This is mapping an RPG/roguelite from old school curses input to a gamepad/touch screen combo. Which, as I said, is a fundamentally “wrong” idea. In large part because the vikeys/vimkeys solution of a lot of classic roguelikes/lites was a way to provide gamepad like controls with just a keyboard.


    As an aside: My brain is blanking on it (it might actually have been the Steam Controller), but I remember an on screen keyboard that actually used analog sticks and felt like a weird hybrid of t9 and the god awful ps3/4 on screen keyboards. Something like you hit a button to bring up the keyboard and then move your analog stick toward a cluster (forget if they were nested or not) and buttons to pick the options. Was very much in that “This is cool as hell but my brain is not gonna learn it” category.

    As for the touch screen: I also really dislike it (hence hedging my comment above). But I do wish more games, particularly the complex ones, would take advantage of it. Let me tap the inventory bag to open up my inventory rather than switching to interface mode and sticking over to it or having a dedicated button that could be a different skill. Stardew Valley’s Steam Controller layout (I forget if it was official or community) was awesome in a similar way because you would get context touchpad menus to quickly navigate the interface. But the problem is that it becomes a device specific layout and gets almost no usage.


  • Mate. If all you want is an echo chamber then don’t post on a message board. That is a blog post with the comments turned off.

    And I did read your blog post. I didn’t watch your podcast so, deep apologies if that offended you somehow. And I still think you are doing what a lot of developers did during the 00s when “console ports” and “optional gamepads” were the big thing in PC dev space. You are trying to adapt an existing control scheme with radically different inputs rather than acknowledging what controls you actually need.

    That is WHY Caves of Qud is such an amazing steam deck experience. That is WHY stuff like Stardew Valley on the Steam Controller are still looked at so fondly. And that is why so many other games never “feel right”. Because devs are trying to map a gamepad to a keyboard (hello Dark Souls) or an analog stick to a mouse cursor (fuck you Bungie and Ubi) or a keyboard to a gamepad.

    Hell, we still see it with a lot of the CRPGs, strategy games, and RTSes that devs try to make work for a gamepad. Very few get it “right” because it is a really hard challenge. And why Dragon Age increasingly became basically a Divinity 2/Oblivion-style game rather than a “real” CRPG like DA:O was.




  • I mean, I love Deus Ex. But the story was very barebones and mostly read as a hodgepodge of all the conspiracy theories we had read on usenet or saw on xfiles.

    The real lasting legacy of that game is that it is arguably the first “immersive sim”. Games like System Shock and Thief had a lot of the elements but Deus Ex was the first time you really could approach the vast majority of the game with very different playstyles and were “equally” rewarded for each.

    As much as I love Nier Automata, I also would not say it is the GOAT game story. But Deus Ex definitely is not in that discussion.




  • To my understanding, Nintendo actively opposed doing so.

    But when Nintendo was “competing” with Sega/Sony, brick and mortar stores had a LOT more power. EBGames/Gamestop could basically do whatever they wanted because moving the Nintendo shelves to be behind the Sony shelves would lead to noticeable sales changes. So it was a lot more common for Toys R Us to run their own sales to move merchandise.

    But in the past twenty years or so, Nintendo have actively shitlisted anyone who puts a discount on their games. Amazon famously got shitlisted at least three or four times which led to a lot of weirdness in terms of what “editions” of a given game was available for purchase.


  • I just use a pretty generic z-wave plug and home assistant. In the past I did more complex setups that actually determine what process is spiking and so forth. But eventually realized that “this is doing a lot of compute…” is a catch all for a LOT of potential issues.

    And I guess I don’t understand what you mean by “shouldn’t be wireless”. It is inherently going to be wireless because you will be on your phone on the other side of the planet. If you genuinely suspect you will be vulnerable to attacks of this scale then you… probably have other things to worry about.

    But as a safety blanket?