I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. The fact that they often seem to is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
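To make that concrete, here’s a minimal sketch of the “continue the prompt with plausible text” idea using the Hugging Face transformers library. The model choice (gpt2) and the sampling settings are arbitrary stand-ins for illustration, not a claim about how ChatGPT or any particular product is configured:

```python
# Rough illustration: an LLM continues a prompt with statistically plausible
# text. The model (gpt2) and sampling settings here are arbitrary stand-ins.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation repeatable
generator = pipeline("text-generation", model="gpt2")

prompt = "The chess engine on the Atari was"
result = generator(prompt, max_new_tokens=30, do_sample=True)

# The output is a fluent continuation of the prompt - plausible-sounding
# text, with no guarantee that any of it is factually accurate.
print(result[0]["generated_text"])
```

The point of the sketch: nothing in that loop checks facts. The model simply picks likely next tokens, which is exactly why fluency and accuracy come apart.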

  • Perspectivist@feddit.uk (OP) · 1 day ago

    Consciousness - or “self-awareness” - has never been a requirement for something to qualify as artificial intelligence. It’s an important topic about AI, sure, but it’s a separate discussion entirely. You don’t need self-awareness to solve problems, learn patterns, or outperform humans at specific tasks - and that’s what intelligence, in this context, actually means.

    • Poxlox@lemmy.world · 18 hours ago

      That’s not quite right. Discussions of consciousness, mind, and reasoning are all relevant, and have been part of the philosophy of artificial intelligence for hundreds of years. You’re free to call it AI within your definitions, but those definitions are not exactly agreed upon - it depends, for example, on whether you subscribe to Alan Turing or John Searle.

    • Evil Edgelord@sh.itjust.works · 1 day ago

      It’s not really solving problems or learning patterns now, is it? I don’t see it getting past any captchas or answering health questions accurately, so we’re definitely not there.

      • Perspectivist@feddit.uk (OP) · 1 day ago

        If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass captchas - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.

        The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You’re blaming the hammer for not turning screws.