Look, I don’t believe that AGI is possible, or at least not within the next few decades. But I was thinking: if one came to be, how could we differentiate it from a Large Language Model (LLM) that has read every book ever written by humans?

Such an LLM would have “knowledge” of almost all human emotions and morals, and could even extrapolate from the past when situations are slightly changed. It would also be backed by pretty powerful infrastructure, so hallucinations might be largely eliminated and it could handle different contexts at the same time.

One might say it also has to have emotions to be considered an AGI, and that’s a valid point. But an LLM is capable of putting on a facade, at least in a conversation. So we might have a hard time telling whether the emotions are genuine or just text churned out by rules and algorithms.

In a purely TEXTUAL context, I feel it would be hard to tell them apart. What are your thoughts on this? BTW, this is a shower thought, so I might be wrong.

  • Naich@lemmings.world · 35 points · 1 day ago

    An LLM doesn’t understand the output it gives. It can’t understand what someone wants when they talk to it, and it can’t generate an original thought. It’s as far from actual intelligence as the autocomplete on your phone.
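    To make the autocomplete comparison concrete, here’s a toy next-word suggester in Python. It just counts which word follows which and always suggests the most frequent follower. A real LLM is a neural network rather than a count table and is vastly more capable, but the task it is trained on, predicting the next token, is the same. Everything in this sketch is invented for illustration:

    ```python
    from collections import Counter, defaultdict

    # Toy "phone autocomplete": tally which word follows which in some
    # text, then always suggest the most frequent follower.
    corpus = "the cat sat on the mat and the cat slept".split()

    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1  # "training" here is just counting pairs

    def suggest(word):
        """Return the most common word seen after `word`, if any."""
        counts = followers.get(word)
        return counts.most_common(1)[0][0] if counts else None

    print(suggest("the"))  # -> "cat" (follows "the" twice; "mat" only once)
    ```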

    • unwarlikeExtortion@lemmy.ml · 1 point · 3 hours ago

      Nor does it “read” its input. It doesn’t even process it.

      It’s built/tuned using it. Or, as AI tech bros would say, trained.
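      To see the “doesn’t read” point concretely: before anything reaches the model, a tokenizer chops the text into integer IDs, and everything after that is arithmetic on those numbers. A minimal Python sketch, with a tiny vocabulary invented purely for illustration:

      ```python
      # The model never sees "text": a tokenizer maps it to integer IDs
      # first, and everything downstream is arithmetic on those numbers.
      vocab = {"can": 0, "a": 1, "car": 2, "fly": 3, "?": 4}

      def tokenize(text):
          # Naive whitespace tokenizer; real ones use subword pieces.
          return [vocab[word] for word in text.lower().split()]

      print(tokenize("Can a car fly ?"))  # -> [0, 1, 2, 3, 4]
      ```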

      • Thorry@feddit.org · 6 points · 1 day ago

        Think of it this way:

        If I ask you, “Can a car fly?”, you might say, “Well, if you put wings on it or a rocket engine or something, maybe?” OK, I say, and I point at a car on the street and ask: “Do you think that specific car can fly?” You will probably say no.

        Why? Even though you might not fully understand how a car works and all the parts that go into it, you can easily tell it does not have any of the things it needs to fly.

        It’s the same with an LLM. We know what kinds of things are needed for true intelligence, and we can easily tell that an LLM does not have the parts required. So an LLM alone can never lead to AGI; more parts are needed. That holds even though we might not fully understand how the internals of an LLM function in specific cases, and might not know exactly which parts intelligence requires or how they work.

        A full understanding of all the parts isn’t required to discern large-scale capabilities.

      • TheJesusaurus@piefed.ca · 1 point · 24 hours ago

        How do you currently understand whether the thing you’re talking to possesses sentience or not?

        • Tracaine@lemmy.world · 1 point · 20 hours ago

          We don’t. Period. I could be looking you dead in the eye right now and have no objective way of knowing that you are sentient in the same way I am.

          • TheJesusaurus@piefed.ca · 2 points · 20 hours ago

            I didn’t ask how you know, but how you understand.

            Sure, you don’t know someone else is sapient. But you treat them as if they are.

        • CmdrShepard49@sh.itjust.works · 2 points · 23 hours ago

          Because, as far as we currently know, only humans have sentience. So if you’re talking to a human, you know it does, and if you’re talking to anything else, you know it doesn’t.

          • TheJesusaurus@piefed.ca · 2 points · 23 hours ago (edited)

            How do you know it’s not a dolphin in one of their hidden underwater dolphin tech cities?

            Literally more likely than a “take the average of the internet and put it in a blender” machine gaining a soul.

            • CmdrShepard49@sh.itjust.works · 2 points · 23 hours ago

              I’m talking about face-to-face. When you speak to someone online it becomes a lot blurrier, but I would err on the side of assuming it’s an LLM until proven otherwise.