“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”

Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.

  • Mak'@pawb.social (+10/−1) · 2 hours ago

    Very bold move, in a tech climate in which CEOs declare generative AI to be the answer to everything, and in which shareholders expect line to go up faster…

    I half expect to next read an article about his ouster.

  • halcyoncmdr@lemmy.world (+139/−7) · 6 hours ago

    Correction, LLMs being used to automate shit doesn’t generate any value. The underlying AI technology is generating tons of value.

    AlphaFold 2 has advanced biochemistry research in protein folding by multiple decades in just a couple of years, taking us from 150,000 known protein structures to 200 million in a single year.

    • shaggyb@lemmy.world (+11) · 1 hour ago

      Well sure, but you’re forgetting that the federal government has pulled the rug out from under health research, and has therefore made it so there is no economic value in biochemistry.

    • DozensOfDonner@mander.xyz (+11/−4) · 3 hours ago

      Yeah, tbh, AI has been an insanely helpful tool in my analysis and writing. Never would I have been able to thoroughly investigate appropriate statistical tests on my own. After following the sources and double-checking, of course, but still, super helpful.

    • Mrkawfee@lemmy.world (+15/−4) · 4 hours ago

      Thanks. So the underlying architecture that powers LLMs has applications in things besides language generation, like protein folding and DNA sequencing.

        • dovah@lemmy.world (+5) · 54 minutes ago

          You are correct that AlphaFold is not an LLM, but both are possible because of the same breakthrough in deep learning, the transformer, and so they share similar architectural components.

        • SoftestSapphic@lemmy.world (+4/−12) · 3 hours ago

          A Large Language Model is basically a translator: all it did was bridge the gap between us speaking normally and a computer understanding what we are saying.

          The actual decisions all these “AI” programs make come from Machine Learning algorithms, and those algorithms have not fundamentally changed since we created them and started tweaking them in the 90s.

          “AI” is basically a marketing term that companies jumped on to generate hype because they made it so the ML programs could talk to you, but they’re not actually intelligent in the same sense people are, at least by the definitions set by computer scientists.
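The “unchanged since the 90s” loop this comment alludes to can be sketched as plain batch gradient descent on logistic regression. This is a minimal, illustrative example; the synthetic data, learning rate, and iteration count are made up for demonstration:

```python
import numpy as np

# Plain batch gradient descent on logistic regression -- the kind of
# classic ML loop that long predates the deep-learning era.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # linearly separable labels

w = np.zeros(2)     # weight vector, initialized at zero
lr = 0.5            # fixed learning rate
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))     # sigmoid predictions
    w -= lr * X.T @ (p - y) / len(y)       # gradient of the log-loss

acc = ((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(acc)
```

On this separable toy data the loop recovers a near-perfect linear decision boundary; everything in it (sigmoid, log-loss gradient, fixed-step descent) is textbook-era machinery.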

          • weker01@sh.itjust.works (+5) · 1 hour ago

            What algorithm are you referring to?

            The fundamental ideas (matrix multiplication plus a nonlinear function, deep learning, i.e. backpropagating derivatives, and gradient descent in general) may not have changed, but the actual algorithms sure have.

            For example, the transformer architecture (utilized by most modern models) based on multi-headed self-attention, optimizers like AdamW, and the whole idea of diffusion for image generation are, I would say, quite disruptive.

            Another point is that generative AI was always belittled in the research community until around 2015 (a subjective feeling; it would need a meta-study to confirm). The focus was mostly on classification, something not much talked about today in comparison.
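The multi-headed self-attention mentioned above reduces, per head, to scaled dot-product attention. A minimal single-head NumPy sketch (the shapes and random weights are made-up illustrations, not any real model’s parameters):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention: every token attends to all tokens.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # (seq, seq) attention weights
    return weights @ V                          # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))        # 4 token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))

out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

A multi-headed layer just runs several of these in parallel on smaller projections and concatenates the results; this mixing step is exactly what the pre-transformer “matrix multiply plus nonlinearity” recipe lacked.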

      • dustyData@lemmy.world (+4) · 2 hours ago

        AI is just what we call automation until marketing figures out a new way to sell the tech. LLMs are generative AI, hardly useful or valuable, but new and shiny, with a party trick that tickles the human brain in a way that makes people hand over their money. Machine learning and other forms of AI have been around longer, and most have value-generating applications, but they aren’t as fun to demonstrate, so they never got the traction LLMs have gathered.

      • Match!!@pawb.social (+6/−1) · 3 hours ago

        I’m afraid you’re going to have to learn about AI models besides LLMs

      • rockSlayer@lemmy.world (+7/−1) · 4 hours ago

        It’s always important to double-check the work of AI, but yeah, it excels at solving problems we’ve been using brute force on.

  • ToaLanjiao@lemmy.world (+26/−1) · 5 hours ago

    LLMs in non-specialized application areas basically reproduce search. In specialized fields, most do the work that automation, data analytics, pattern recognition, purpose-built algorithms, and brute force did before. And yet the companies charge n× the amount for what are essentially these very conventional approaches, plus statistics. Not surprising at all. Just in awe that the parallels to snake oil weren’t immediately obvious.

    • Arghblarg@lemmy.ca (+14/−5) · 5 hours ago

      I think AI is generating negative value … the huge power usage is akin to speculative blockchain currencies. Barring some biochemistry and other very, very specialized uses it hasn’t given anything other than, as you’ve said, plain-language search (with bonus hallucination bullshit, yay!) … snake oil, indeed.

      • themurphy@lemmy.ml (+5/−2) · 4 hours ago

        It’s a little more complicated than that, I think. LLMs and AI are not remotely the same, with very different use cases.

        I believe in AI for sure in some fields, but I understand the skepticism around LLMs.

        But the difference AI is already making in the medical industry and hospitals is no joke. X-ray scans and early detection of severe illness are the applications in use today, and they will save thousands of lives and millions of dollars/euros.

        My point is, it’s not that black and white.

  • Mrkawfee@lemmy.world (+8) · edited · 4 hours ago

    Is he saying it’s just LLMs that are generating no value?

    I wish reporters could be more specific with their terminology. They just add to the confusion.

    Edit: he’s talking about generative AI, of which LLMs are a subset.