AggressivelyPassive

  • 10 Posts
  • 173 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • And what is the result? Either you have to check whether the sources really say what the agent claims they do, or you don’t check them, in which case the whole thing is useless, since the agent might come up with garbage anyway.

    I think you’re arguing on a different level than I am. I’m not interested in mitigations or workarounds. Those are fine for a specific use case, but I’m talking about the technology in principle. You inherently cannot trust an AI. It does hallucinate. And unless we get the “shroominess” down to an extremely low level, we can’t trust the system with anything important. It will always be just a small tool that needs professional supervision.



  • AggressivelyPassive@feddit.de (OP) to Selfhosted@lemmy.world · “Cheap, but reliable SSDs?”
    7 upvotes · 1 downvote · 5 months ago

    Oh, I’m terribly sorry that I didn’t use the exact wording that the semantic overlord required for his incantations.

    Let’s recap: you only read the title, which by definition does not contain all the information; you wrote an extremely arrogant and entirely unhelpful comment; when challenged, you answered with even more arrogance; and your only defense is nitpicky semantics, which, even taken at face value, does not change the value of your comment at all.

    You are not helping anyone. No, not even others.


  • Even agents suffer from the same problem described above: you can’t trust them.

    Compare it to a traditional SQL database. If the DB says that it saved a row, or that there are 40 rows in the table, then that’s true. Databases do have bugs, obviously, but in general you can trust them (a minimal sketch below makes this concrete).

    AI agents don’t have that level of reliability. They’ll happily tell you that the empty database contains all 509 entries you expect it to have. Sure, you can improve reliability, but you won’t get anywhere near the DB example.

    And I think that’s what makes it so hard to extrapolate progress. AI fails miserably at absolutely basic tasks and doesn’t even notice that it failed. Success seems to be more chance than science. That’s the opposite of how every previous technology worked: solve the simple problems first, and once those are solved, push towards the next challenge. AI, in contrast, is remarkably good at some highly complex tasks, but then fails at basic reasoning a minute later.
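
    To make that contrast concrete, here is a minimal sketch, assuming Python’s built-in sqlite3 as a stand-in for “a traditional SQL database” (the table name and row count are illustrative, not from the original comments). The count the database reports is a ground-truth fact about its own state; it cannot report 509 rows for a 40-row table the way a hallucinating model can.

        import sqlite3

        # In-memory database, purely for illustration.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE entries (id INTEGER PRIMARY KEY, payload TEXT)")
        conn.executemany(
            "INSERT INTO entries (payload) VALUES (?)",
            [(f"row {i}",) for i in range(40)],
        )
        conn.commit()

        # COUNT(*) reflects the table's actual state -- it cannot "hallucinate".
        (count,) = conn.execute("SELECT COUNT(*) FROM entries").fetchone()
        assert count == 40
        print(count)  # 40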


  • The problem I see is mainly the current divergence between hype and reality, and the lack of a clear path forward.

    Currently, AI is almost completely unable to work unsupervised. It fucks up constantly and is like a junior employee who sometimes shows up on acid. That’s cool and all, but it has relatively little practical use. However, I also don’t see how this will improve over time. With computers or smartphones, you could see relatively early on what the potential was, and the progression was steady and could be extrapolated somewhat reliably. With AI, that’s not possible. We have no idea whether the current architectures might hit a wall tomorrow and stop improving. It could become an asymptotic process, where we need massive increases in effort for marginal gains.

    Those two things combined mean that we currently only have toys, and we don’t know whether they will turn into tools anytime soon.