I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. The fact that they often seem to is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
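
To make that last point concrete, here’s a minimal sketch of what “continuing a prompt with plausible text” looks like in practice. It uses GPT-2 via the Hugging Face transformers library purely as a small, illustrative stand-in - the model choice and the prompt are my assumptions, not a claim about how any production chatbot is configured:

```python
# Minimal sketch of next-token prediction - the core operation behind
# LLM text generation. GPT-2 stands in for much larger models here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The chemical symbol for gold is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    # Logits for the token that would come right after the prompt.
    logits = model(input_ids).logits[0, -1]

# The model's entire output is a probability distribution over its
# vocabulary; an "answer" is just a high-probability continuation.
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p.item():.3f}")
```

Whether the top-ranked continuation happens to be factually correct depends entirely on the patterns in the training data - which is exactly the point above.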

  • Sonotsugipaa@lemmy.dbzer0.com · 1 day ago

    In defense of people who say LLMs are not intelligent: they probably mean that LLMs are not sapient, and I think they’re loosely correct if you take the everyday word “intelligent” to mean something different from the term-of-art “Intelligence” in “Artificial Intelligence”.

    • ramble81@lemmy.zip · 21 hours ago

      I remember when “heuristics” were all the rage. Frankly, that’s what LLMs are: advanced heuristics. “Intelligence” is nothing more than marketing bingo.

    • YappyMonotheist@lemmy.world · 1 day ago (edited)

      ‘Intelligence’ requires understanding, and understanding requires awareness. This is not seen in anything called “AI”, not today at least, and maybe not ever. Again, why not use a different word, one that actually applies to these advanced calculators? Expecting the best of humanity, it may be the appeal of the added pizzazz, the excitement that comes with it, or simple semantic confusion… but looking at the people behind it all, it’s probably so the dummies get overly excited and buy stuff/make these bots integral parts of their lives. 🤷

      • Sonotsugipaa@lemmy.dbzer0.com · 23 hours ago

        The term “Artificial Intelligence” has been around for a long time; 25 years ago, AI was an acceptable name for NPC logic in videogames. Arguably that’s still the case, and personally I vastly prefer “Artificial Intelligence” to “Broad Simulation Of Common Sense Powered By Von Neumann Machines”.

        • XeroxCool@lemmy.world · 21 hours ago

          The overuse (and overtrust) of LLMs has made me feel ashamed to refer to video game NPCs as AI, and I hate that. There was nothing wrong with it. We all understood the AI’s abilities to be limited to specific functions. I loved when Forza Horizon introduced “drivatar” AI personalities modeled on actual players, mimicking how those players actually drive. Now it’s a vomit term for shady search engines and confused visualizers.

          • Sonotsugipaa@lemmy.dbzer0.com · 19 hours ago

            I don’t share the feeling. I’ll gladly tie an M$ shareholder to a chair, force them to watch me play Perfect Dark, and say “man, I love these AI settings, I wish they made AI like they used to”.

      • Perspectivist@feddit.uk (OP) · 21 hours ago

        “Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.

        You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.
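
        As a concrete illustration: a plain language model will typically assign a lower average next-token loss (i.e. higher probability) to a grammatical sentence than to a scrambled one, with no awareness anywhere in the loop. A minimal sketch, again using GPT-2 via Hugging Face transformers as an illustrative stand-in (the example sentences are my own):

        ```python
        # Sketch: score sentences by average next-token cross-entropy.
        # Lower loss means the model finds the word sequence more plausible.
        import torch
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        def avg_token_loss(sentence: str) -> float:
            ids = tokenizer.encode(sentence, return_tensors="pt")
            with torch.no_grad():
                # With labels=ids, the model returns the mean cross-entropy
                # of predicting each token from the tokens before it.
                return model(ids, labels=ids).loss.item()

        print(avg_token_loss("The cats are sleeping on the couch."))  # typically lower
        print(avg_token_loss("The cats is sleeps couch the on."))     # typically higher
        ```

        No inner experience is needed to read off that ranking - it falls straight out of the learned statistics.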

        • YappyMonotheist@lemmy.world · 21 hours ago

          Do you think “AI” KNOWS/UNDERSTANDS what a grammatically incorrect sentence is or what molecules even are? How?

          • BB84@mander.xyz · 16 hours ago (edited)

            Do most humans understand what molecules are? How?

            Everything I know about molecules I got from textbooks. Am I just regurgitating my “training data” without understanding? How does one really understand molecules?

          • Perspectivist@feddit.uk (OP) · 21 hours ago

            You’re moving the goalposts. First you claimed understanding requires awareness, now you’re asking whether an AI knows what a molecule is - as if that’s even the standard for functional intelligence.

            No, AI doesn’t “know” things the way a human does. But it can still reliably identify ungrammatical sentences or predict molecular interactions based on training data. If your definition of “understanding” requires some kind of inner experience or conscious grasp of meaning, then fine. But that’s a philosophical stance, not a technical one.

            The point is: you don’t need subjective awareness to model relationships in data and produce useful results. That’s what modern AI does, and that’s enough to call it intelligent in the functional sense - whether or not it “knows” anything in the way you’d like it to.

            • YappyMonotheist@lemmy.world · 21 hours ago

              Intelligence, as the word has always been used, requires awareness and understanding, not just spitting out data in response to input through a set of rules, however dynamic and complex the process might be. AI, as you just described it, does nothing fundamentally different from other computational tools: they speed up processes that can be calculated/algorithmically structured. I don’t see how that particularly makes “AI” deserving of the adjective ‘intelligent’; it seems more of a marketing term, the same way ‘smartphones’ were. The disagreement we’re having here is semantic…

              • SkyeStarfall@lemmy.blahaj.zone · 20 hours ago

                The funny thing is that the goalposts on what is and isn’t intelligent have always shifted in the AI world.

                Being good at chess used to be a symbol of high intelligence. Now? Computer software can beat the best chess players in a fraction of the time a human takes to think, 100% of the time, and we call that just an algorithm.

                This is not how intelligence has always been used. Moreover, we don’t even have a full understanding of what intelligence is.

                And as a final note, human brains are also computational “tools”. As far as we can tell, there’s nothing fundamentally different between a brain and a theoretical Turing machine.

                And in a way, isn’t what we “spit” out also data? Specifically data in the form of nerve output and all the internal processing that accompanies it?