• Nevoic@lemm.ee · 7 months ago

    If you didn’t have an agenda or a preconceived conclusion you wanted confirmed, you’d understand that no credible scientist has ever pointed to a single study and declared anything proven.

    Only people who don’t understand how data works would claim that a single study from a single university proves anything, let alone anything about a model with billions of parameters applied to a field as broad as “programming”.

    I could feed GPT “programming” tasks that I know it would fail 100% of the time, and I could feed it “programming” tasks that I know it would succeed on 100% of the time. If you think LLMs have nothing to offer programmers, you have no idea how to use them. I’ve been using GPT4T successfully for months, and it’s been very good. It does best in static environments where it can be fed compiler errors and fix its own output iteratively (if you ever read past the headline of a GPT performance study, you’d know there’s a substantial difference between zero-shot and 3-shot performance).
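
    For anyone wondering what that compile-and-fix loop actually looks like, here’s a rough sketch, not our actual tooling: the ask_llm wrapper, the gpt-4-turbo model name, the prompt wording, and the three-round cap are all illustrative assumptions.

        import subprocess
        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        def ask_llm(prompt: str) -> str:
            # Illustrative wrapper; swap in whatever chat API/model you actually use.
            response = client.chat.completions.create(
                model="gpt-4-turbo",
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content

        def compile_errors(path: str) -> str:
            # Returns stderr from a syntax check, or "" if the file is clean.
            # Swap the command for your real compiler or build step.
            result = subprocess.run(
                ["python", "-m", "py_compile", path],
                capture_output=True, text=True,
            )
            return "" if result.returncode == 0 else result.stderr

        def fix_until_clean(path: str, max_rounds: int = 3) -> bool:
            # Feed the current source plus the compiler output back to the model,
            # overwrite the file with its answer, and repeat until it compiles.
            for _ in range(max_rounds):
                errors = compile_errors(path)
                if not errors:
                    return True
                with open(path) as f:
                    source = f.read()
                fixed = ask_llm(
                    "Fix this code so it compiles. Return only the full corrected file.\n\n"
                    f"--- code ---\n{source}\n\n--- compiler errors ---\n{errors}"
                )
                with open(path, "w") as f:
                    f.write(fixed)
            return compile_errors(path) == ""

    That’s the loop I mean: each round gives the model concrete errors to react to instead of demanding it be right on the first try.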

    Bugs exist, but code written largely by LLMs has not been shown to be any more or less buggy than code written largely by junior devs. Our internal metrics have the two within any reasonable margin of error (senior+GPT recently beat out senior+junior, but it keeps flipping back and forth), and senior+GPT tickets get done much faster. The downside is that GPT doesn’t grow into a senior, whereas a junior does with years of training. Still, two years ago LLMs averaged roughly a 5th-grade coding level, and going from 5th grade to surpassing college level and matching junior output is a massive feat, even if some luddites like yourself refuse to accept it.