i’m lizard 🦎

  • 1 Post
  • 23 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • If you’re a gamedev trying to make a decent mobile game, you’re competing on all the usual fronts like price and perceived quality, but competing for attention has gotten a whole lot harder when [arbitrary card game] has an hour of dailies, [arbitrary gacha game] always has a special campaign going and [arbitrary fake gambling game] is about to have its battle pass end while players are only halfway through it. And that demand on players’ time has gone up by so, so much over the past decade. It was never good, but it’s gotten absolutely egregious. At this point, even a generic snake clone will have a battle pass.

    Every person who ends up committed to a couple of those long-term-commitment games has much less time left for other games. And those games make a lot of money, which means they also end up with a hell of a marketing budget.





  • Storj is blockchain stuff with the storage and bandwidth provided by individual node operators. They’ve kinda tried to bury the blockchain angle and generally keep it out of their main signup/pricing/usage flow; customers pay in USD and never have to see any of it. But it’s still there in the background, and it’s still the main reward system for node operators.

    There are some clickwrapped T&Cs for operators that set minimum requirements, and they’ve made sure one node leaving doesn’t cause data loss, but I’d still be very wary of using them for anything irreplaceable. It only takes one crypto crash or the like for the whole thing to die out, and while they might end up suing some guys running an old NAS out of their garage, that’s not gonna get your data back.
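
    To make the “one node leaving doesn’t cause data loss” part concrete: Storj spreads erasure-coded shards of each file across many nodes. A toy sketch of the idea - the real system uses Reed-Solomon coding, while this uses a single XOR parity shard, which survives exactly one lost shard:

```python
# Toy erasure coding: k data shards plus one XOR parity shard.
# Storj actually uses Reed-Solomon across many nodes; this is just
# the simplest scheme that survives losing one shard ("node").
import functools
import operator

def encode(data: bytes, k: int) -> list:
    """Split data into k equal shards plus one XOR parity shard."""
    shard_len = -(-len(data) // k)            # ceiling division
    data = data.ljust(shard_len * k, b"\0")   # zero-pad to a multiple of k
    shards = [data[i * shard_len:(i + 1) * shard_len] for i in range(k)]
    parity = bytes(functools.reduce(operator.xor, col) for col in zip(*shards))
    return shards + [parity]

def recover(shards: list) -> list:
    """Rebuild at most one missing (None) shard by XOR-ing the survivors."""
    missing = [i for i, s in enumerate(shards) if s is None]
    assert len(missing) <= 1, "a single parity shard only covers one loss"
    if missing:
        survivors = [s for s in shards if s is not None]
        shards[missing[0]] = bytes(
            functools.reduce(operator.xor, col) for col in zip(*survivors)
        )
    return shards

shards = encode(b"some irreplaceable data", k=4)
shards[2] = None  # one node operator unplugs their garage NAS
assert b"".join(recover(shards)[:4]).rstrip(b"\0") == b"some irreplaceable data"
```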







  • Even if the source is decently preserved, the build environments usually are not. If they still have a machine in the exact state it was in when the game was finished, it might be as easy as Project -> Build, but… they almost certainly don’t. So that environment likely has to be rebuilt from scratch, and you’d be very lucky to find any kind of documentation on how things worked.

    Game studios tend to have it particularly bad because of how much binary-only engine/middleware code (standalone bits like Havok physics, Bink video, etc.) they used, how often the game’s data and code builds were mixed together in some way, and how in some cases the project was set up to build things like console releases at the same time. If you’ve lost the install files for your physics engine, you’re probably straight-up screwed.

    By the time you’ve figured all of that out, you can easily be hundreds of hours in, with tons of weird little issues that might require different people to solve. Some examples: you might need to build on Windows XP because no other OS runs all of the software used during the build; no sysadmin is going to be happy putting a WinXP machine on their network, so it has to stay offline; getting code onto that machine can be a pain depending on how Perforce or whatever is set up; and even things taken for granted, like a particular version of the DirectX 9 SDK, might be hard to find. Sometimes licensing/activation of tools used in the build process is an impossible-to-solve problem, because it needs a DRM dongle or activation server that no longer exists and the software was never publicly available, so there is no crack.
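
    If studios wanted to make this less painful, even a small preflight script checked in next to the source, pinning the exact tools and SDK files the build expects, would turn a lot of that archaeology into a checklist. A hypothetical sketch - every tool name, path and hash below is invented for illustration:

```python
# Hypothetical build-environment preflight check. All tool names,
# paths and hashes are invented for illustration; the point is that
# pinning them in the repo documents the build environment.
import hashlib
import shutil
import sys
from pathlib import Path

REQUIRED_TOOLS = ["p4", "msbuild"]  # e.g. Perforce client, MSBuild

REQUIRED_FILES = {
    # path -> expected SHA-256 ("…" = fill in when capturing the env)
    Path("C:/SDKs/dx9sdk_summer2004/Include/d3d9.h"): "…",
    Path("C:/Middleware/Havok/hk460/lib/hkBase.lib"): "…",
}

def main() -> int:
    ok = True
    for tool in REQUIRED_TOOLS:
        if shutil.which(tool) is None:
            print(f"MISSING TOOL: {tool}")
            ok = False
    for path, digest in REQUIRED_FILES.items():
        if not path.exists():
            print(f"MISSING FILE: {path}")
            ok = False
        elif digest != "…" and hashlib.sha256(path.read_bytes()).hexdigest() != digest:
            print(f"WRONG VERSION: {path}")
            ok = False
    return 0 if ok else 1

if __name__ == "__main__":
    sys.exit(main())
```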


  • If such a process existed, the entity in question would almost certainly end up being shut down by it, unless they found a funny technical loophole around it, in which case that would be a failure of the law that nobody should celebrate.

    But as it stands, that law and process do not exist; ISPs already can and will shut you down for things like downloading copyrighted content (with or without complaints from the copyright holder), tethering without approval, being a technical nuisance in the form of mass port scanning, hosting insecure services, and other such stuff. “Hosting a platform solely dedicated to harassment and stalking and ignoring abuse complaints about it” absolutely deserves to be on that list.


  • “If we don’t let the oppressors roam freely, they might try to oppress you” is not something I expected to read from the EFF today. But well, here we are.

    It has been standard internet behavior that if a platform does not respond properly to abuse complaints, you move up a layer until you find someone who is receptive. This has been standard operating procedure for more or less the entirety of the current millennium, and this article has done absolutely zero work to provide a good reason it should be otherwise, beyond bringing up generic “free speech” stuff.

    A problematic entity should not get a path out of that process just because the layer immediately above it actively chooses to disregard abuse complaints. You simply move up to the next layer. And this process simply must keep existing, because doing away with it would let people pull off all kinds of bad things: scams, spam, illegal activity and far more.
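
    For what it’s worth, “moving up a layer” isn’t arcane; the contacts are published. A minimal sketch that asks the public RDAP bootstrap service at rdap.org for the abuse mailbox of whatever network an IP belongs to, assuming the third-party requests package:

```python
# Find the published abuse contact for the network behind an IP via
# RDAP (the registries' structured successor to whois). rdap.org
# redirects to the right regional registry. Assumes `requests`.
import sys
import requests

def _walk(entities):
    """Registry entities can nest, so walk the whole tree."""
    for entity in entities:
        yield entity
        yield from _walk(entity.get("entities", []))

def abuse_emails(ip: str) -> list:
    resp = requests.get(f"https://rdap.org/ip/{ip}", timeout=10)
    resp.raise_for_status()
    emails = []
    for entity in _walk(resp.json().get("entities", [])):
        if "abuse" in entity.get("roles", []):
            # vCard properties are [name, params, type, value] lists.
            for prop in entity.get("vcardArray", [None, []])[1]:
                if prop[0] == "email":
                    emails.append(prop[3])
    return emails

if __name__ == "__main__":
    print(abuse_emails(sys.argv[1] if len(sys.argv) > 1 else "8.8.8.8"))
```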

    And if you abolish the non-legal form of that process? Well, there’s still a legal process - and as soon as someone who wants to censor minorities gets control over the legal process, they will simply change the rules in their favor, as has happened countless times in the past.



  • As pointed out, the DNS issue was fixed, and the other point about Python wheels has also been addressed; quite a good chunk of packages on PyPI have gained musl wheels in the past six months or so, including numpy and scipy. I’m also not certain the Go point is true; somewhere around half of the Go apps I’m running in containers are running on or were built on an Alpine base.
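
    The wheel part is easy to verify against PyPI’s JSON API, since musl-compatible wheels carry a musllinux platform tag. A quick sketch, assuming the third-party requests package:

```python
# List musl wheels in the latest release of a few packages, using
# PyPI's JSON API. musl wheels are tagged "musllinux" (PEP 656).
import requests

for pkg in ("numpy", "scipy"):
    info = requests.get(f"https://pypi.org/pypi/{pkg}/json", timeout=10).json()
    wheels = [f["filename"] for f in info["urls"] if "musllinux" in f["filename"]]
    print(pkg, "->", wheels or "no musl wheels in the latest release")
```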


  • The argument does exist. This article by PEN America is one of the most widely spread ones, and it largely misrepresents the situation. It’s based on a PopSci article with a similar headline, though the contents of that article tell a rather different story.

    Nothing really says out loud what’s going on: Republicans enacted an extremely vague book ban with an unrealistically short deadline as part of a bill (one that also does other things, like removing AIDS education), forcing schools to either throw out every book that might be vaguely suspect or resort to funny measures like this. This school’s use of ChatGPT was purely to save books that were on a human-assembled list of challenged books, reducing the negative effect of the book ban while staying potentially defensible in court (it remains to be seen how that’ll work out, but they made an “objective” process and stuck to it - that’s what matters to them).


  • Okay, the thing that really matters to me:

    “Frankly, we have more important things to do than spend a lot of time trying to figure out how to protect kids from books,” Exman tells PopSci via email. “At the same time, we do have a legal and ethical obligation to comply with the law. Our goal here really is a defensible process.”

    According to Exman, she and fellow administrators first compiled a master list of commonly challenged books, then removed all those challenged for reasons other than sexual content. For those titles within Mason City’s library collections, administrators asked ChatGPT the specific language of Iowa’s new law, “Does [book] contain a description or depiction of a sex act?”

    It really only got rid of things that would’ve otherwise had to go to begin with, while saving a few others.

    It feels closer to malicious compliance than to truly letting the AI decide the fate of things, and full, proper compliance within the 3 months they were given would’ve been nigh impossible. I suspect the lawmakers were hoping that such a small timeframe would make schools throw out everything vaguely suspect. This approach ultimately leaves more books accessible, which I consider a good end result, even if the process to get there is a little weird.
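
    In code terms, the screening step presumably amounted to something like the sketch below. The question wording is the one quoted from the PopSci article; the model choice, the yes/no parsing and the placeholder titles are my assumptions, using the openai package:

```python
# Sketch of the described screening process, not the district's
# actual tooling. Model, parsing and titles are assumptions; only
# the question wording comes from the article. Needs OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

def flagged(title: str) -> bool:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the article just says "ChatGPT"
        messages=[{
            "role": "user",
            "content": f"Does {title} contain a description or depiction "
                       "of a sex act?",
        }],
    )
    # Assumes a reply starting with "Yes"/"No"; real replies may ramble.
    return resp.choices[0].message.content.strip().lower().startswith("yes")

challenged = ["<challenged title 1>", "<challenged title 2>"]  # placeholders
print([t for t in challenged if flagged(t)])
```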



  • I do, and I can confirm there are no requests (except for robots.txt and the odd /favicon.ico). Google sorta respects robots.txt, but with a weird gotcha: they still put blocked URLs in search results, just with a useless description. Their suggested fix boils down to “don’t block us, let us crawl, and just tell us not to use the result (with a noindex tag), just trust us!”, when they could very easily change that behavior to make more sense. Not a single damn person who blocks Google in robots.txt wants to be indexed, and while their logic about password-protecting instead kind of makes sense, my concern isn’t security; it’s that I don’t like them (or Bing, or Yandex).

    Another gotcha I’ve seen linked is that their ad targeting bot for Google AdSense (a different crawler) doesn’t respect a wildcard (User-agent: *) exclusion, but that kind of makes sense, since it will only ever visit your site if you place AdSense ads on it.

    And I suppose they’ll train Bard on all the data they scraped, because of course they will. There’s probably no way to opt out of that without opting out of Google Search as well.
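
    For anyone wanting to see exactly what a blanket block looks like to a compliant parser, the standard library can evaluate robots.txt rules directly; a small sketch (example.com is a placeholder):

```python
# Evaluate robots.txt rules with the standard library. The rules
# mirror the kind of blanket Google/Bing/Yandex block discussed above.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Googlebot
User-agent: Bingbot
User-agent: YandexBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

for bot in ("Googlebot", "Bingbot", "YandexBot", "SomeOtherBot"):
    print(bot, "allowed:", parser.can_fetch(bot, "https://example.com/page"))
# Note the gotcha: a disallowed Googlebot won't crawl the page, but
# Google may still list the bare URL; their documented alternative
# (noindex) only works if you let them crawl.
```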