• dubyakay@lemmy.ca · 7 months ago

        Like the CEO of my company, who’s on the board of directors of other companies and shit. An LLM literally writes the shitty corporate talk scripts for him.

    • FiniteBanjo@lemmy.today · 7 months ago

      Yeah, I was gonna say: I would agree with “AI cannot do the job of people”, but that’s not what he’s saying. He’s trying to dismiss the idea that he will inevitably replace workers with AI, so that he can avoid backlash while slowly incorporating it into the company’s projects.

    • Vivendi@lemmy.zip · 7 months ago

      Stochastic plagiarism machines, AKA LLMs, won’t replace anyone.

      They already have a 52% failure rate, and that’s with top-of-the-line training data.
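
      For context on how a number like that is produced: studies in this vein typically run the model’s generated code against each task’s test cases and count the tasks that fail. A minimal sketch of such a harness in Python (generate_solution is a toy stub standing in for a real model call, and the tasks here are made up):

          # Minimal sketch of a pass/fail harness for LLM-generated code.
          def run_tests(solution_src, tests):
              """Exec generated code; check the solve() it defines on (args, expected) pairs."""
              namespace = {}
              try:
                  exec(solution_src, namespace)   # run the generated code
                  solve = namespace["solve"]      # tasks are assumed to define solve()
                  return all(solve(*args) == expected for args, expected in tests)
              except Exception:
                  return False                    # crashes and wrong output both count as failures

          def generate_solution(prompt):
              # Toy stub standing in for a real model call.
              return "def solve(x):\n    return x + 1"

          tasks = [                                            # made-up (prompt, tests) pairs
              ("increment a number", [((1,), 2), ((5,), 6)]),  # the stub passes this one
              ("double a number",    [((2,), 4), ((3,), 6)]),  # ...and fails this one
          ]

          failures = sum(not run_tests(generate_solution(p), t) for p, t in tasks)
          print(f"failure rate: {failures / len(tasks):.0%}")  # -> failure rate: 50%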

      • Nevoic@lemm.ee · 7 months ago

        In my line of work (programming) they absolutely do not have a 52% failure rate by any reasonable definition of the word “failure”. More than 9 times out of 10 they’ll produce code that’s at least junior level. It won’t be the best code, and sometimes it’ll have trivial mistakes in it, but junior developers do the same thing.

        The main issue is confidence: it’s essentially like having a junior developer who is way overconfident, at 1/1000th of the cost. This is extremely manageable, and June 2024 is not the end-all be-all of LLMs. Even if LLMs only got worse, and this is the literal peak, they would still reshape entire industries. Junior developers cannot find a job, and with the massive reduction in junior devs we’ll see a massive reduction in senior devs down the line.

        In the short term, the same quality of work will be done with far, far fewer programmers required. In 10-20 years’ time, if we get literally no progress in the field of LLMs or other model architectures, then yeah, it’s going to be fucked. If there is advancement to the degree of replacing senior developers, then humans won’t be required anyway, and we’re still fucked (assuming we still live in a capitalist society). In a proper society, less work would actually be a positive for humanity, but under capitalism, less work is an existential threat to our existence.

        • Dangerhart@lemm.ee · 7 months ago

          This is the exact opposite of my experience. We’ve been using codium in my org, and 9/10 times it’s garbage, and they will not allow anything that is not on-prem. I’m pretty consistently getting recommendations for methods that don’t exist, invalid class names, things that look like the wrong language, etc. To get the recommendations I have to cancel out of autocomplete, which is often much better. It seems like it can make up for someone who doesn’t have good workflows, shortcuts, and a premium IDE, but otherwise it’s been a waste of time and money.

        • Vivendi@lemmy.zip · 7 months ago

          There is literally a university study that proves over 50% failure on programming tasks. It’s not a rational model. Deal with it, get off the Kool-Aid, and move on.

          • Nevoic@lemm.ee · 7 months ago

            If you didn’t have an agenda/preconceived idea you wanted proven, you’d understand that a single study has never been used by any credible scientist to say anything is proven, ever.

            Only people who don’t understand how data works will say a single study from a single university proves anything, let alone anything about models with billions of parameters applied across a field as broad as “programming”.

            I could feed GPT “programming” tasks that I know it would fail on 100% of the time. I could also feed it “programming” tasks I know it would succeed on 100% of the time. If you think LLMs have nothing to offer programmers, you have no idea how to use them. I’ve been successfully using GPT4T for months now, and it’s been very good. It’s better in static environments where it can be fed compiler errors so it can fix itself continually (if you ever looked at more than a headline about GPT performance, you’d know there’s a substantial difference between zero-shot and 3-shot performance).
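
            A rough sketch of that compiler-feedback loop, in Python; call_llm is a hypothetical stand-in for a real model API, everything else is standard library:

                import subprocess

                MAX_REPAIRS = 2  # initial attempt plus 2 repairs, i.e. the "3-shot" setup

                def compile_check(source):
                    """Return the compiler's error output, or None if the code compiles."""
                    with open("candidate.py", "w") as f:
                        f.write(source)
                    result = subprocess.run(
                        ["python", "-m", "py_compile", "candidate.py"],
                        capture_output=True, text=True,
                    )
                    return result.stderr if result.returncode != 0 else None

                def generate_with_repair(task):
                    # call_llm() is a hypothetical stand-in for a real model API.
                    source = call_llm(f"Write Python code that does the following: {task}")
                    for _ in range(MAX_REPAIRS):
                        error = compile_check(source)
                        if error is None:
                            break  # it compiles; stop repairing
                        # Feed the compiler error back so the model can fix its own output.
                        source = call_llm(
                            f"This code:\n\n{source}\n\nfailed to compile with:\n\n{error}\n\nFix it."
                        )
                    return source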

            Bugs exist, but code heavily written by LLMs has not been proven to be any more or less buggy than code heavily written by junior devs. Our internal metrics have the two within a reasonable margin of error (senior+GPT recently beating out senior+junior, but it’s been flipping back and forth), and senior+GPT tickets get done much faster. The downside is that GPT doesn’t become a senior, where a junior does with years of training. Still, two years ago LLMs were at a 5th-grade coding level on average, and going from 5th grade to surpassing college level and matching junior output is a massive feat, even if some Luddites like yourself refuse to accept it.

  • alessandro@lemmy.ca (OP) · 7 months ago

    Take-Two CEO: The idea that AI will make everyone unemployed is ‘the stupidest thing I’ve ever heard’

    He doesn’t want everyone unemployed; he just wants everyone at Take-Two, except him, to be fired thanks to AI.

    If humans overall are unemployed, he’s got no one to sell to.

  • DebatableRaccoon@lemmy.ca · 7 months ago

    Nah, this evil overpaid bastard will have already fired everyone by then. Can we make this prick take the place of Johnny Klebitz, please?

      • Mikufan@ani.social · 7 months ago

        Most people won’t in the foreseeable future. LLMs are in a feedback loop, eating their own outputs, which makes them effectively worse at what they do over time. They’re also a gigantic risk factor.
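
        That “eat their own outputs” failure mode is easy to illustrate: if each generation trains only on a finite sample of the previous generation’s output, anything not sampled is lost for good, so diversity can only shrink. A toy sketch in Python (nothing to do with a real training run):

            import random

            # Toy model-collapse sketch: generation N+1 "trains" only on a finite
            # sample of generation N's output, so unsampled outputs are lost forever.
            random.seed(0)
            vocab = list(range(1000))  # generation 0 can produce 1000 distinct outputs

            for gen in range(1, 11):
                samples = [random.choice(vocab) for _ in range(500)]  # what gets scraped
                vocab = sorted(set(samples))                          # all the next gen ever sees
                print(f"gen {gen:2d}: {len(vocab)} distinct outputs left")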

  • ssm@lemmy.sdf.org · 7 months ago

    Wow, first time in my life I’ve agreed with a CEO, at least judging only from the headline.

  • Album@lemmy.ca · 7 months ago

    If by AI he means LLMs or generative AI, then sure. But LLMs and GenAI are not truly AI in the full sense of the term. They’re building blocks toward it; a mind is more complex. The singularity still approaches at blistering speed.

    Edit: downvotes don’t change the fact that these "AI"s are not intelligent. It’s a misnomer by ppl who want to sell you shit. https://bigthink.com/the-future/artificial-general-intelligence-true-ai/

    • Hawk@lemmy.dbzer0.com · 7 months ago

      I think they’re downvoting because, no, the singularity is not approaching at blistering speed.

      • Album@lemmy.ca · 7 months ago

        Ah yeah, I see your point, I suppose; I didn’t think that’s the line that would hang people up, though. By definition, the theory of the singularity is that it will be blisteringly quick (though “blistering” is my word for how quick). The whole concept is that the equivalent of the last 50 years of tech advance will be achieved in the next 25, and so on, to the point that it creates the singularity. I think we will see it in our lifetimes, and it’s going to be much closer than people are comfortable with.
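
        Taken at face value, that’s a geometric series: if every “50 years’ worth” of progress takes half the wall-clock time of the one before, then unboundedly many of them fit into finite time, which is the usual framing of the singularity:

            % 50 years' worth of progress in the next 25 years, the next batch in 12.5, and so on:
            T = 25 + 12.5 + 6.25 + \dots = \sum_{n=1}^{\infty} \frac{50}{2^n} = 50 \text{ years}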

    • sparkle@lemm.ee · 7 months ago

      Sounds like you’re mixing up AI with AGI and have no idea what you’re talking about, like 99% of the people on the internet who suddenly act like they’re data science experts. This article is just taking advantage of the fact that people like you don’t know what “AI” means, getting clicks by misdirecting you with improperly worded claims. “True AI” doesn’t mean anything.

      Also the term “AI” to describe complex algorithms existed long before the technology was ever in the capitalist market. You literally just completely made that part up. One of the guys that coined it (John McCarthy) was one of the most important computer scientists of all time, who was also a cognitive scientist, he’s the same guy who invented garbage collection and Lisp. One of the other guys to coin the term was Claude Shannon, who is widely considered the father of information theory and laid the foundation for the Information Age. The other people to participate in coining the term include the person who made the first assembler & designed the first mass-produced computer, and the guy who proposed the theory of bounded rationality. The guys who coined AI and founded/established the field were pretty much Turing’s successors, not people looking to “sell you shit”.

    • kakes@sh.itjust.works · 7 months ago

      You mean they aren’t Artificial General Intelligence. They definitely are AI.

      That said, I do think we’re like 2 or 3 paradigm shifts away from General AI.

    • Dr. Dabbles@lemmy.world · 7 months ago

      LLMs are parlour tricks that impress the gullible and easily confused. LLMs aren’t building blocks for anything but energy consumption and greed. “The singularity” isn’t coming; true intelligence isn’t coming. You’ve been lied to, and this is the fourth time in my career alone that this has had to be said.