• BlackLaZoR@kbin.run
    4 months ago

    Unless you’re doing music or graphics design there’s no use case. And if you do, you probably have a high-end GPU anyway.

    • DarkThoughts@fedia.io
      4 months ago

      I could see a use for local text gen, but that apparently demands quite a bit more than what desktop PCs can offer if you want actually good results & speed. Generally though, I’d rather have separate expansion cards for this. Making it part of other processors is just going to increase their price, even for those who have no use for it.

      • BlackLaZoR@kbin.run
        4 months ago

        There are local models for text gen - not as good as ChatGPT, but at the same time they’re uncensored - so it may or may not be useful.

        • DarkThoughts@fedia.io
          4 months ago

          Yes, I know - that’s my point. But you need the right hardware to run those models at a usable speed. Waiting a minute to produce some vaguely relevant gibberish is not going to be of much use. You could also use generative text for other applications, such as video game NPCs - especially all those otherwise useless drones you see in a lot of open-world titles could gain a lot of depth.
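
          A rough way to see why desktop hardware struggles here: generating each token requires streaming essentially all of the model's weights through memory once, so decode speed is roughly bounded by memory bandwidth divided by model size. The sketch below uses illustrative, assumed numbers (not benchmarks) for a ~4 GB quantized model:

          ```python
          # Back-of-envelope bound on local LLM decode speed:
          # tokens/sec <= memory bandwidth / model size, since every
          # generated token reads all weights once. All figures below
          # are assumptions for illustration, not measured values.
          def max_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
              return bandwidth_gb_s / model_size_gb

          # Assumed ~4 GB quantized model (e.g. a 7B model at ~4-bit):
          ram_bound = max_tokens_per_sec(4.0, 80.0)     # assumed dual-channel DDR5 bandwidth
          vram_bound = max_tokens_per_sec(4.0, 1000.0)  # assumed high-end GPU VRAM bandwidth
          print(f"CPU/RAM bound: ~{ram_bound:.0f} tok/s, GPU bound: ~{vram_bound:.0f} tok/s")
          ```

          Under these assumptions the GPU's memory bandwidth advantage, not raw compute, is what makes the difference between usable and painfully slow local text gen.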