• Showroom7561@lemmy.ca · +28 / -2 · 6 hours ago

    In a quite unexpected turn of events, it is claimed that OpenAI’s ChatGPT “got absolutely wrecked on the beginner level” while playing Atari Chess.

    Who the hell thought this was “unexpected”?

    What’s next? ChatGPT vs. Microwave to see which can make instant oatmeal the fastest? 😂

  • Lucy :3@feddit.org · +75 / -1 · 7 hours ago

    Anyone who genuinely believed a generic word auto-completer would beat a classic algorithm wherever one exists probably belongs in psychiatric care.

    • realitista@lemm.ee · +27 · 6 hours ago

      There are a lot of people out there who think LLMs are somehow reasoning. Even the “reasoning” models aren’t really doing it. It’s important to do demonstrations like this in the hope that the general public will understand the limitations of this tech.

      • Photuris@lemmy.ml · +9 / -7 · 6 hours ago

        But the general public (myself included) doesn’t really understand how our own reasoning happens.

        Does anyone, really? i.e., am I merely a meat computer that takes in massive amounts of input over a lifetime, builds internal models of the world, tests said models through trial-and-error, and outputs novel combinations of data when said combinations are useful for me in a given context in said world?

        Is what I do when I “reason” really all that different from what an LLM does, fundamentally? Do I do more than language prediction when I “think”? And if so, what is it?

        • realitista@lemm.ee · +8 · 5 hours ago

          This is definitely part of the issue; not sure why people are downvoting this. That’s also why tests like this are important: to illustrate that thinking, in the way we know it, isn’t happening in these models.

  • thefartographer@lemm.ee · +30 · 7 hours ago

    Atari game programmed to know chess moves: knight to B4

    Chat-GPT: many Redditors have credited Chesster A. Pawnington with inventing the game when he chased the queen across the palace before crushing the king with a castle tower. Then he became the king and created his own queen by playing “The Twist” and “Let’s Twist Again” at the same time.

  • Opinionhaver@feddit.uk · +24 / -4 · 7 hours ago

    Isn’t this kind of like ridiculing that same Atari for not being able to form coherent sentences? It’s not all that surprising that a system not designed to play chess loses to a system designed specifically for that purpose.

  • Arthur Besse@lemmy.ml · +16 / -1 · 6 hours ago (edited)

    This article buries the lede so much that many readers probably miss it completely: the important takeaway here, which is clearer in The Register’s version of the story, is that ChatGPT cannot actually play chess:

    “Despite being given a baseline board layout to identify pieces, ChatGPT confused rooks for bishops, missed pawn forks, and repeatedly lost track of where pieces were.”

    To actually use an LLM as a chess engine without the kind of manual intervention this person performed, you would need to combine it with other software that automatically asks it for a different next move every time it suggests an invalid one. And even then it would still mostly lose, even to chess engines much older than Atari’s Video Chess.

    edit: I see now that numerous people have done this; there are many websites where you can “play chess against ChatGPT” (which actually means: ChatGPT plus some other mechanism to enforce the rules). And if you know how to play chess, you should win easily :)
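    The “other mechanism to enforce the rules” is essentially a retry loop. A minimal sketch of the idea, where ask_llm_for_move and is_legal are hypothetical stand-ins (a real wrapper would call an LLM API and a chess library such as python-chess):

```python
import random

def ask_llm_for_move(board_state: str, rejected: list[str]) -> str:
    # Hypothetical stand-in for a real LLM call; the model may emit
    # illegal moves, which is exactly what the wrapper has to handle.
    candidates = ["e2e4", "e9e9", "d2d4"]
    remaining = [m for m in candidates if m not in rejected]
    return random.choice(remaining or candidates)

def is_legal(board_state: str, move: str) -> bool:
    # Stubbed legality check; a real wrapper would use a chess library here.
    return move in {"e2e4", "d2d4", "g1f3"}

def next_move(board_state: str, max_retries: int = 10) -> str:
    """Keep re-prompting until the model produces a legal move."""
    rejected: list[str] = []
    for _ in range(max_retries):
        move = ask_llm_for_move(board_state, rejected)
        if is_legal(board_state, move):
            return move
        rejected.append(move)  # tell the model what was refused
    raise RuntimeError("model never produced a legal move")
```

    With this in place the game always advances with legal moves; the quality of those moves is still the LLM’s problem, which is why such bots mostly lose anyway.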

    • MagicShel@lemmy.zip · +11 · 6 hours ago

      You probably could train an AI to play chess and win, but it wouldn’t be an LLM.

      In fact, let’s go see…

      • Stockfish: Open-source and regularly ranks at the top of computer chess tournaments. It uses advanced alpha-beta search and a neural network evaluation (NNUE).

      • Leela Chess Zero (Lc0): Inspired by DeepMind’s AlphaZero, it uses deep reinforcement learning and plays via a neural network with Monte Carlo tree search.

      • AlphaZero: Developed by DeepMind, it reached superhuman levels using reinforcement learning and defeated Stockfish in high-profile matches (though not under perfectly fair conditions).

      Hmm, neural networks and reinforcement learning. So: non-LLM AI.

      “you can play chess against something based on chatgpt, and if you’re any good at chess you can win”

      You don’t even have to be good. You can just flat out lie to ChatGPT because fiction and fact are intertwined in language.

      “You can’t put me in check because your queen can only move 1d6 squares in a single turn.”

  • Wytch@lemmy.zip · +25 / -1 · 7 hours ago

    This article makes ChatGPT sound like a deranged blowhard, blaming everything but its own ineptitude for its failure.

    So yeah, that tracks.

  • oce 🐆@jlai.lu · +17 / -4 · 7 hours ago

    A PE teacher got absolutely wrecked by a former Olympic sprinter at a sprint competition.

  • Chozo@fedia.io · +10 / -3 · 7 hours ago

    Well… yeah. That’s not what LLMs do. That’s like saying “a leafblower got absolutely wrecked by a 1998 Dodge Viper in a beginner’s drag race.” It’s only impressive if you don’t understand what a leafblower is.

    • misk@sopuli.xyz (OP) · +5 · 6 hours ago (edited)

      People write code with LLMs. A programming language is just a language specialised in precise logic, and that’s exactly what “AI” is advertised to be good at. How can it do one and not the other?

      • TimeSquirrel@kbin.melroy.org · +5 / -1 · 6 hours ago

        It’s not very good at it, though, if you’ve ever used it to code. It automates and eases a lot of mundane tasks, but it still requires a LOT of supervision and domain knowledge to keep it from going off the rails or hallucinating code that’s either full of bugs or will never work. It’s not a “prompt and forget” thing, not by a long shot. It’s just an easier way to steal code it picked up from Stack Overflow and GitHub.

        As a human, I will know to check how much data is going into a fixed-size buffer somewhere and to bail out of the code path if the input exceeds it. The LLM will have no qualms about putting buffer overflow vulnerabilities all over your shit, because it doesn’t care; it only wants to fulfill the prompt and get something that appears to work.
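        That bounds check, sketched in Python terms (in C it would guard a memcpy into a fixed-size array; the buffer size and names here are invented for illustration):

```python
# Refuse to write past the end of a fixed-size buffer instead of
# silently overflowing it, mirroring the manual check described above.
BUFFER_SIZE = 16

def write_to_buffer(buf: bytearray, data: bytes) -> None:
    if len(data) > len(buf):
        raise ValueError(
            f"input of {len(data)} bytes exceeds {len(buf)}-byte buffer"
        )
    buf[: len(data)] = data

buf = bytearray(BUFFER_SIZE)
write_to_buffer(buf, b"hello")        # fits, buffer is written
# write_to_buffer(buf, b"x" * 64)     # would raise instead of overflowing
```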

        • misk@sopuli.xyz (OP) · +5 · 6 hours ago

          I’m not saying it’s good at coding, I’m saying it’s specifically advertised as being very good at it.

      • MagicShel@lemmy.zip · +4 / -2 · 45 minutes ago (edited)

        “Precise logic” is specifically what AI is not any good at whatsoever.

        AI might be able to write a program that beats an A2600 in chess, but it should not be expected to win at chess itself.

        • misk@sopuli.xyz (OP) · +3 · 3 hours ago (edited)

          I shall await the moment when AI communicates that it can’t do something with the same confidence it shows when claiming it can, because right now it looks like that’s somehow my job.

          • MagicShel@lemmy.zip · +1 · 3 hours ago

            Yeah, LLMs seem pretty unlikely to do that, though if they figure it out that would be great. That’s just not their wheelhouse. You have to know enough about what you’re attempting to ask the right questions and recognize bad answers. The thing you’re trying to do needs to be within your reach without AI, or you are unlikely to be successful.

            I think the problem is more the over-promising what AI can do (or people who don’t understand it at all making assumptions because it sounds human-like).

  • Ace@feddit.uk · +6 / -3 · 6 hours ago (edited)

    machine designed to play chess beats machine not designed to play chess at chess!

    Fascinating news!

    Consider me successfully ragebaited into engaging. Why people are upvoting this drivel is beyond me.