Formerly u/CanadaPlus101 on Reddit.

  • 2 Posts
  • 408 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • that you need to get conspiracy theorists to sit down and do the treatment. With their general level of paranoia around a) tech, b) science, and c) manipulation, that’s not likely to happen.

    You overestimate how hard it is to get a conspiracy theorist to click on something. I don’t know, it seems promising to me. I’m more worried that it could be used to sell things more nefarious than “climate change is real”.

    you need a level of “AI” that isn’t going to start hallucinating and reinforce the subjects’ conspiracy beliefs instead. Despite techbros’ hype of the technology, I’m not convinced we’re anywhere close.

    They used a purpose-fine-tuned GPT-4 model for this study, and it didn’t go off script in that way once. I bet you could make it do so if you really tried, but if you’re doing adversarial prompting then you’re not the target for this thing anyway.
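
    For a sense of what staying “on script” means in practice, here’s a minimal sketch using the OpenAI Python client. The fine-tuned model ID and the prompt wording are illustrative assumptions on my part, not the study’s actual configuration:

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical fine-tuned model ID; the study's real one isn't given here.
    MODEL = "ft:gpt-4:example-org::debunker-v1"

    def respond(claim: str) -> str:
        completion = client.chat.completions.create(
            model=MODEL,
            messages=[
                # The system prompt pins the model to its role: rebut with
                # evidence, admit uncertainty, never affirm the theory.
                {"role": "system", "content": (
                    "You are a patient fact-checker. Rebut the user's claim "
                    "with well-established evidence. If unsure, say so."
                )},
                {"role": "user", "content": claim},
            ],
            temperature=0.2,  # low temperature reduces off-script drift
        )
        return completion.choices[0].message.content
    ```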

  • But there isn’t any mechanism inherent in large language models (LLMs) that would seem to enable this and, if real, it would be completely unexplained.

    There’s no mechanism in LLMs that allows for anything. They’re black boxes. Everything we know about them is empirical.

    LLMs are not brains and do not meaningfully share any of the mechanisms that animals or people use to reason or think.

    It’s a lot like a brain. A small, unidirectional brain, but a brain.

    LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.

    I’ll bet you a month’s salary that this guy couldn’t explain said math to me. Somebody just told him this, and he’s extrapolated way more than he should from “math”.

    I could possibly implement one of these things from memory, given the weights. Definitely if I’m allowed a few reference checks.
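
    It’s less crazy than it sounds, because the core computation is compact. As a minimal sketch, here’s single-head causal self-attention in NumPy; the shapes and weight names are illustrative, and a real model stacks dozens of these plus embeddings and feed-forward layers:

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        # Subtract the row max before exponentiating, for numerical stability.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(x, Wq, Wk, Wv):
        """Single-head causal self-attention over a (seq_len, d_model) input x."""
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        # Scaled dot-product scores between every pair of positions.
        scores = q @ k.T / np.sqrt(k.shape[-1])
        # Causal mask: each position attends only to itself and earlier ones.
        scores[np.triu(np.ones(scores.shape, dtype=bool), k=1)] = -np.inf
        return softmax(scores) @ v

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 16))  # 5 tokens, 16-dim embeddings
    Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
    print(self_attention(x, Wq, Wk, Wv).shape)  # (5, 16)
    ```

    The weights themselves are the part nobody could reproduce from memory; the architecture around them fits on a page.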


    Okay, this article is pretty long, so I’m not going to read it all, but it’s not just in front of naive audiences that LLMs seem capable of complex tasks. Measured scientifically, there’s still a lot there. I get the sense the author’s conclusion was a motivated one.



  • While I think the basic idea of deliberately introducing friction is interesting, I’d say the philosophers cited are making what’s really a psychology claim, and so exceeding their qualifications, which irks me. The essay itself is philosophy, at least in the “design philosophy” sense.

    If you are designing friction in, how do you go about it without turning away users? BeReal is the first successful-ish example that comes to mind. Forcing you to post at an inconvenient time is arguably friction-y, but people sign up in that case because the friction is experienced socially, all at once, and it’s a statement against the atmosphere conventional social media creates. For more practical tools, that might be hard to replicate.