There’s a great talk here where he discusses uses of local models where I could see them actually being useful.
Hopefully we get there and this ridiculous 5000% markup on memory goes away.
There are a bunch of useful ways. For example, I was toying with a Minecraft server where people start a country, and a local LLM can come up with a country code in a few minutes based on their country’s name and the ISO 3166 standard.
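A minimal sketch of that idea, assuming a local model served by Ollama at its default endpoint (the model name `llama3`, the `TAKEN` set, and the function names here are all illustrative, not from any real server plugin):

```python
import json
import re
import urllib.request

# Stand-in for the full ISO 3166-1 alpha-2 list; a real server would load all of it.
TAKEN = {"US", "DE", "FR", "GB", "JP"}

def is_valid_code(code: str, taken: set[str] = TAKEN) -> bool:
    """A candidate must be exactly two uppercase ASCII letters and not collide
    with an existing (real or already-claimed) code."""
    return bool(re.fullmatch(r"[A-Z]{2}", code)) and code not in taken

def suggest_code(country_name: str) -> str:
    """Ask a locally running model for an ISO 3166-style alpha-2 code suggestion.
    Falls back to the first two letters of the name if the model's answer is invalid."""
    prompt = (
        f"Suggest a two-letter country code for a fictional country named "
        f"'{country_name}', in the style of ISO 3166-1 alpha-2. "
        f"Reply with only the two letters."
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default generate endpoint
        data=json.dumps({"model": "llama3", "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        candidate = json.load(resp)["response"].strip().upper()
    return candidate if is_valid_code(candidate) else country_name[:2].upper()
```

Since a small local model can return junk, the validation step matters more than the generation step; the LLM call is just a convenience on top of it.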
Canonical talking about UX AI?
LoL.
LMAO even.
It’s actually a really good talk from someone with decades of UX experience. The focus was more on innovation in UX (with the example that Microsoft got AI in Windows very… very wrong).
How’s that good news? It sounds like they are just double-dipping…
My only takeaway that could be seen as good news is that they at least expect consumers to have access to local computing power strong enough to run local AI, and that computing power is very likely in the form of GPUs that can also be used for PC gaming. Hopefully this means there’s still some focus on consumer GPUs somewhere out there rather than just selling them all to OpenAI.
Doesn’t really make much sense. I mean yeah, privacy and all of that, but think of the environmental impact of 1000 inefficient PCs vs. one efficient PC shared by 1000 people. Maybe open source models hosted by a community would be better.
Or better yet just forget about it entirely.
The only way I would be comfortable with AI is if I could craft it myself, run it locally, prevent it from feeding me bullshit results, keep it energy efficient, stop it from phoning home, and know it wasn’t built off of stolen data and didn’t send profit to big companies.
I mean… You can. You can train and run models yourself. Lots of people and orgs do.
Yeah, that’s true! I hope the world shifts towards that rather than what most people have been doing.
Fantastic. They’ll make US pay for it. There’s no way they don’t turn this into something more evil.