2.4 is the tipping point. Mark my words.
Any day now, it’s gonna be the year of the Linux handheld.
That’s what’s really confusing me: why add an expensive feature that obviously doesn’t work and, even in the best case, adds only minor improvements?
I mean, it’s not another option like with Bing. It’s the default. Every stupid little search will take up AI resources. For what? Market cap?
I could see those as an option for rural areas without much traffic. A full train might not be economical, but a small pod is. It could transport people to the closest proper train station where they can hop off.
But that would mean you’d have to maintain a ton of tracks for a handful of people.
How many companies need that kind of scale but aren’t able to provide it in-house for less money?
Everyone wants to be Netflix, but 99% of companies don’t need anywhere close to that amount of scalability. I’d argue a significant share of projects could run on a Raspberry Pi if they were engineered properly.
Have you considered something like Tailscale?
And how many buyers actually care about that?
I’m pretty sure institutional buyers define the market nowadays. Tons of regular people don’t even have laptops (or desktops, for that matter) anymore.
And what is the result? Either you check the sources to see whether they actually say what the agent claims, or you don’t check them, in which case the whole thing is useless since it might come up with garbage anyway.
I think you’re arguing on a different level than I am. I’m not interested in mitigations or workarounds. That’s fine for a specific use case, but I’m talking about the usage in principle. You inherently cannot trust an AI. It does hallucinate. And unless we get the “shroominess” down to an extremely low level, we can’t trust the system with anything important. It will always be just a small tool that needs professional supervision.
See, again, nitpicky details, even though we both know exactly what was meant.
Oh, I’m terribly sorry that I didn’t use the exact wording that the semantic overlord required for his incantations.
Let’s recap: you only read the title, which by definition does not contain all the information; you wrote an extremely arrogant and utterly unhelpful comment; when challenged, you answered with even more arrogance; and your only defense is nitpicky semantics, which, even taken at face value, doesn’t change the value of your comment at all.
You are not helping anyone. No, not even others.
Even agents suffer from the same problem stated above: you can’t trust them.
Compare it to a traditional SQL database. If the DB says that it saved a row, or that there are 40 rows in a table, then that’s true. Databases do have bugs, obviously, but in general you can trust them.
AI agents don’t have that level of reliability. They’ll happily tell you that the empty database has all the 509 entries you expect it to have. Sure, you can improve reliability, but you won’t get anywhere near the DB example.
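To make concrete what I mean by trust, here’s a minimal sketch using Python’s built-in sqlite3 (the table and the 40 rows are just made up for illustration): once the commit goes through, the count the DB reports is what’s actually stored, not a guess.

```python
import sqlite3

# In-memory DB purely for illustration; the table name and the 40 rows are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, payload TEXT)")

with conn:  # transaction: either all 40 inserts are committed, or none are
    conn.executemany(
        "INSERT INTO entries (payload) VALUES (?)",
        [(f"row {i}",) for i in range(40)],
    )

(count,) = conn.execute("SELECT COUNT(*) FROM entries").fetchone()
print(count)  # 40 -- the answer reflects what is actually stored, not a guess
```

An agent, by contrast, can report any number with the same confidence, whether or not the rows exist.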
And I think that’s what makes it so hard to extrapolate progress. AI fails miserably at absolutely basic tasks and doesn’t even see that it failed. Success seems more chance than science. That’s the opposite of how every technology before worked: simple problems first; once those are solved, you push toward the next challenge. AI, in contrast, is remarkably good at some highly complex tasks, but then fails at basic reasoning a minute later.
The closest one is about a transatlantic trip away.
It’s absolutely opaque to me: the non-big-name brands barely get any reliable reviews, and given the silicon lottery, I can’t tell if every chip is like the reviewed ones.
If I just happen to get the bad module that craps out after 6 months, the positive reviews are not that helpful.
Honestly, that is the typical self-righteous stackoverflow response that is helping no one.
You know exactly what I mean, you know exactly how to treat the question, but you chose to play captain obvious of the second arrogance division and posted this.
Of course devices will fail at some point, what are you even trying to add here?
The problem I see is mainly the divergence between hype and reality now, and a lack of a clear path forward.
Currently, AI is almost completely unable to work unsupervised. It fucks up constantly and is like a junior employee who sometimes shows up on acid. That’s cool and all, but has relatively little practical use. However, I also don’t see how this will improve over time. With computers or smartphones, you could see relatively early on what the potential was, and the progression was steady and could be extrapolated somewhat reliably. With AI that’s not possible. We have no idea whether the current architectures will hit a wall tomorrow and stop improving. It could become an asymptotic process, where we need massive increases for marginal gains.
Those two things combined mean we currently only have toys, and we don’t know if these will turn into tools anytime soon.
The clear answer is: don’t use Subversion. There’s really no reason not to use Git, since you can use Git just like Subversion if you want to.
There’s a lot of heat to sink.
Absolutely. I barely touch code anymore, but I talk about how to touch code a lot.
Not even “weird” shit, just variations of similar sentiments across various characters.
Like, you have a city with hundreds of people on the street; yesterday something noteworthy happened, and everyone has an opinion on it. Each NPC gets a bunch of parameters, some pre-defined, some random, and answers based on those.
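Roughly what I have in mind, as a hypothetical Python sketch: the event, the traits, and the generate() stand-in are all made up, and you’d plug in whatever local or hosted model you actually use.

```python
import random

# Everything here is a placeholder to show the idea, not a real game or model API.
EVENT = "the old clock tower collapsed yesterday"
MOODS = ["angry", "amused", "worried", "indifferent"]
LENGTHS = ["one short sentence", "two or three sentences"]

def npc_prompt(name: str, occupation: str) -> str:
    # Pre-defined parameters: name and occupation; random ones: mood and verbosity.
    mood = random.choice(MOODS)
    length = random.choice(LENGTHS)
    return (
        f"You are {name}, a {occupation} in a small city. "
        f"Yesterday, {EVENT}. You feel {mood} about it. "
        f"Give your opinion in {length}, staying in character."
    )

def generate(prompt: str) -> str:
    # Stand-in for whatever model would actually produce the line of dialogue.
    return f"[model output for: {prompt}]"

for name, job in [("Mara", "baker"), ("Ulf", "night watchman")]:
    print(generate(npc_prompt(name, job)))
```

Same event, same city, but every NPC reacts slightly differently just from the parameter mix.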
Yes, I forgot that, it was a long day.
I find it really weird that something as simple as the basic functionality of Nextcloud seemingly can’t be implemented in a stable and lightweight manner.
Nextcloud always seems one update away from self destruction and it prepares for that by hoarding all the resources it can get. It never feels fast or responsive. I just want a way to share files between my machines.
There are other solutions, I know, but they’re all terrible in their own way.