I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
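
To make that concrete, here is a minimal sketch - assuming the Hugging Face transformers library and the small GPT-2 model, neither of which is mentioned above - of what “continue a prompt with plausible text” looks like in practice:

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# small GPT-2 model. The model's only objective is to continue the prompt
# with statistically plausible text - nothing in it rewards factual accuracy.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampling reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "The capital of Australia is"
outputs = generator(prompt, max_new_tokens=10, do_sample=True)
print(outputs[0]["generated_text"])
# The continuation is fluent either way; whether it is true is incidental.
```

Run it a few times without the seed and you get different, equally fluent continuations - fluency, not truth, is what the training objective selects for.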

  • Feyd@programming.dev · 18 hours ago

    Usually the reason we want people to stop calling LLMs AI is that a giant marketing machine has been constructed, designed to trick laymen (and succeeding at it) into believing that LLMs are adjacent to, and one tiny breakthrough away from becoming, AGI.

    From another angle, your statement that AI is not a specific term is correct. Why, then, should we keep using it in common parlance when it just serves to confuse laymen? Let’s just use the more specific terms.

  • Shanmugha@lemmy.world · 18 hours ago

    So… not intelligent. In the sense that when someone without enough knowledge of computers and/or LLMs hears “LLM is intelligent” and sees “an LLM tells me X”, they are likely to believe that X is true, and not without reason. This is exactly my main reason against the use of intelligence-related terms. When spoken by knowledgeable people who do know the difference - yeah, I am all for that. But first we need to cut the crap of advertisement and hype

    • Aceticon@lemmy.dbzer0.com · 13 hours ago

      “Intelligent” is itself a highly unspecific term which covers quite a lot of different things.

      What you’re thinking of is “reasoning” or “rationalizing”, and LLMs can’t do that at all.

      However, what LLMs (and most Machine Learning implementations) can do is “pattern matching”, which is also an element of intelligence: it’s what gives us and most animals the ability to recognize things such as food or predators without actually thinking about it. You just see, say, a cat, and you know without thinking that it’s a cat, even though cats don’t all look the same. In humans, it’s also what’s behind intuition.

      PS: Ever since they were invented over three decades ago, Neural Networks and other Machine Learning technologies have been very good at finding patterns in their training data - often better than humans.

      The evolution of the technology has added to it the capability of creating content which follows those patterns, giving us things like LLMs or image generation.

      However, what LLMs have made clear is that using patterns alone (plus a little randomness to vary the results) to generate textual content is not enough to create useful content beyond entertainment, and that’s exactly because LLMs can’t rationalize. The original pattern matching without the content generation, though, is still widely and very successfully used, in things from OCR to image recognition.
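
      To make that pattern-matching point concrete, here’s a toy sketch - assuming scikit-learn and its bundled handwritten-digit dataset, my choice rather than anything from this thread - of recognition learned purely from pixel patterns, OCR-style, with no reasoning anywhere:

      ```python
      # A toy sketch, assuming scikit-learn: pattern matching without reasoning.
      from sklearn.datasets import load_digits
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      digits = load_digits()  # 8x8 images of handwritten digits
      X_train, X_test, y_train, y_test = train_test_split(
          digits.data, digits.target, test_size=0.25, random_state=0
      )

      # A small neural network learns the pixel patterns for each digit...
      clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
      clf.fit(X_train, y_train)

      # ...and then recognizes digits it has never seen, without "thinking".
      print(f"accuracy: {clf.score(X_test, y_test):.2%}")
      ```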

    • Perspectivist@feddit.uk (OP) · 17 hours ago

      > So… not intelligent.

      But they are intelligent - just not in the way people tend to think.

      There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.

      • SanguinePar@lemmy.world · 17 hours ago

        > But they are intelligent - just not in the way people tend to think.

        Doesn’t that just degenerate into a debate over semantics though? I.e., what is “intelligence”?

        Not having a go, this is a good thread, and useful I think 👍

        • SkyeStarfall@lemmy.blahaj.zone · 15 hours ago

          Yes, and that has always been the debate

          But the short answer is that we don’t really have a good grasp of what intelligence is, so it is all semantics in the end

  • Sonotsugipaa@lemmy.dbzer0.com · 20 hours ago

    In defense of people who say LLMs are not intelligent: they probably mean to say they are not sapient, and I think they’re loosely correct if you consider the literal word “intelligent” to have a different meaning from the denotative “Intelligence” in the context of Artificial Intelligence.

    • ramble81@lemmy.zip · 17 hours ago

      I remember when “heuristics” were all the rage. Frankly, that’s what LLMs are: advanced heuristics. “Intelligence” is nothing more than marketing bingo.

    • YappyMonotheist@lemmy.world · 20 hours ago

      ‘Intelligence’ requires understanding, and understanding requires awareness. This is not seen in anything called “AI”, not today at least, and maybe not ever. Again, why not use a different word, one that actually applies to these advanced calculators? Expecting the best out of humanity, it may be the appeal of the added pizzazz and the excitement that comes with it, or simple semantic confusion… but seeing the people behind it all, it’s probably so the dummies get overly excited and buy stuff/make these bots integral parts of their lives. 🤷

      • Sonotsugipaa@lemmy.dbzer0.com · 18 hours ago

        The term “Artificial Intelligence” has been around for a long time; 25 years ago AI was an acceptable name for NPC logic in videogames. Arguably that’s still the case, and personally I vastly prefer “Artificial Intelligence” to “Broad Simulation Of Common Sense Powered By Von Neumann Machines”.

        • XeroxCool@lemmy.world · 16 hours ago

          The overuse (and overtrust) of LLMs has made me feel ashamed to reference video game NPCs as AI and I hate it. There was nothing wrong with it. We all understood the ability of the AI to be limited to specific functions. I loved when Forza Horizon introduced “drivatar” AI personalities of actual players, resembling their actual activities. Now it’s a vomit term for shady search engines and confused visualizers.

          • Sonotsugipaa@lemmy.dbzer0.com · 14 hours ago

            I don’t share the feeling. I’ll gladly tie an M$ shareholder to a chair, force them to watch me play Perfect Dark, and say “man I love these AI settings, I wish they made AI like they used to”.

      • Perspectivist@feddit.uk (OP) · 17 hours ago

        “Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.

        You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.
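
        For instance - a rough sketch, assuming PyTorch and the Hugging Face transformers library with the small GPT-2 model, my choices rather than anything named in this thread - a language model can flag the less grammatical of two sentences simply by comparing average token losses, with no awareness anywhere in the loop:

        ```python
        # A rough sketch, assuming PyTorch and Hugging Face transformers:
        # a lower average loss means the model finds the sentence more plausible.
        import torch
        from transformers import GPT2LMHeadModel, GPT2TokenizerFast

        tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()

        def avg_token_loss(sentence: str) -> float:
            enc = tokenizer(sentence, return_tensors="pt")
            with torch.no_grad():
                out = model(**enc, labels=enc["input_ids"])
            return out.loss.item()

        print(avg_token_loss("The cat sat on the mat."))  # lower loss
        print(avg_token_loss("Mat the on sat cat the."))  # higher loss
        ```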

        • YappyMonotheist@lemmy.world · 17 hours ago

          Do you think “AI” KNOWS/UNDERSTANDS what a grammatically incorrect sentence is or what molecules even are? How?

          • BB84@mander.xyz · 12 hours ago

            Do most humans understand what molecules are? How?

            Everything I know about molecules I got from textbooks. Am I just regurgitating my “training data” without understanding? How does one really understand molecules?

          • Perspectivist@feddit.uk (OP) · 16 hours ago

            You’re moving the goalposts. First you claimed understanding requires awareness, now you’re asking whether an AI knows what a molecule is - as if that’s even the standard for functional intelligence.

            No, AI doesn’t “know” things the way a human does. But it can still reliably identify ungrammatical sentences or predict molecular interactions based on training data. If your definition of “understanding” requires some kind of inner experience or conscious grasp of meaning, then fine. But that’s a philosophical stance, not a technical one.

            The point is: you don’t need subjective awareness to model relationships in data and produce useful results. That’s what modern AI does, and that’s enough to call it intelligent in the functional sense - whether or not it “knows” anything in the way you’d like it to.

            • YappyMonotheist@lemmy.world · 16 hours ago

              Intelligence, as the word has always been used, requires awareness and understanding, not just spitting out data after input - as dynamic and complex as the process might be - through a set of rules. AI, as you just described it, does nothing necessarily different from other computational tools: they speed up processes that can be calculated/algorithmically structured. I don’t see how that particularly makes “AI” deserving of the adjective ‘intelligent’; it seems more of a marketing term, the same way ‘smartphones’ were. The disagreement we’re having here is semantic…

              • SkyeStarfall@lemmy.blahaj.zone · 15 hours ago

                The funny thing is that the goalposts on what is/isn’t intelligent have always shifted in the AI world

                Being good at chess used to be a symbol of high intelligence. Now? Computer software can beat the best chess players, in a fraction of the time a human needs to think, 100% of the time - and we call that just an algorithm

                This is not how intelligence has always been used. Moreover, we don’t even have a full understanding of what intelligence is

                And as a final note, human brains are also computational “tools”. As far as we can tell, there’s nothing fundamentally different between a brain and a theoretical Turing machine

                And in a way, isn’t what we “spit” out also data? Specifically data in the form of nerve output and all the internal processing that accompanies it?

  • hendrik@palaver.p3x.de · 20 hours ago

    And “intelligence” itself isn’t very well defined either. So the only word that remains is “artificial”, and we can agree on that.

    I usually try to avoid the word “AI”. I’ll say “LLM” if I talk about chatbots, ChatGPT etc. Or I use the term “machine learning” when broadly speaking about the concept of computers learning and doing such things. It’s not exactly the same thing, though. But when reading other people’s texts I always think of LLMs when they say AI, because that’s currently what they mean almost every time. And AGI is more sci-fi as of now, so it needs some disclaimers and context anyway.

    • Perspectivist@feddit.uk (OP) · 20 hours ago

      In computer science, the term AI at its simplest just refers to a system capable of performing any cognitive task typically done by humans.

      That said, you’re right in the sense that when people say “AI” these days, they almost always mean generative AI - not AI in the broader sense.

      • hendrik@palaver.p3x.de · 20 hours ago

        Yeah, generative AI is a good point.

        I’m not sure about the computer scientists, though. It’s certainly not any task, that’d be AGI. And it’s not necessarily connected to humans either. Sure, they’re the prime example of intelligence (whatever it is). But I think a search engine is AI as well, depending how it’s laid out. And text to speech, old-school expert systems. A thermostat that controls your heating with a machine learning model might count as well, I’m not sure about that. And that’s not really like human cognitive tasks. Closer to curve fitting than anything else. The thermostat includes problem-solving, learning, perception, knowledge, and planning and decision making. But on the human intelligence score it wouldn’t even be a thing that compares.
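
        The curve-fitting remark is easy to picture - a minimal sketch with made-up readings, assuming only NumPy:

        ```python
        # A minimal sketch with made-up numbers, assuming only NumPy: the
        # "learning" thermostat is literally fitting a curve to past data.
        import numpy as np

        outdoor_temp = np.array([-10, -5, 0, 5, 10, 15, 20])           # °C, hypothetical log
        heating_power = np.array([5.8, 4.9, 4.0, 3.1, 2.2, 1.2, 0.3])  # kW needed

        # "Training": fit a straight line through the observations.
        predict = np.poly1d(np.polyfit(outdoor_temp, heating_power, deg=1))

        # "Decision making": evaluate the fitted curve at the current reading.
        print(f"At 2 °C, supply roughly {predict(2.0):.1f} kW")
        ```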

        • Perspectivist@feddit.uk (OP) · 20 hours ago

          > It’s certainly not any task, that’d be AGI.

          Any individual task I mean. Not every task.

          • hendrik@palaver.p3x.de · 18 hours ago

            Yeah, I’d say some select tasks. And it’s not really the entire distinction. I can do math equations with my cognitive capabilities. My pocket calculator can do the same, yet it’s not AI. So the definition has to be something else. And AI can do tasks I cannot do, like go through large amounts of data or find patterns a human cannot find. So it’s not really tied to specific things we do. It’s more a generalized form of intelligence, and I don’t think that’s well defined, or that humans are the comparison - they’re more a stand-in measurement scale. But I don’t think that’s what it’s about.

            Edit: And I’d question the entire usefulness of such a definition. ChatGPT can write very professional-looking text and things that pass as Wikipedia articles. A 5-year-old human can’t do that. However, the average 5yo can make a sandwich. Now try that with ChatGPT and tell me what that says about their intelligence. It doesn’t really work as a definition, because it’s too broad and ill-defined: humans can do a wide variety of tasks, and slight differences in focus change everything into its opposite.

            • Perspectivist@feddit.uk (OP) · 16 hours ago

              Most definitions are imperfect - that’s why I said the term AI, at its simplest, refers to a system capable of performing any cognitive task typically done by humans. Doing things faster, or even doing things humans can’t do at all, doesn’t conflict with that definition.

              Humans are unarguably generally intelligent, so it’s only natural that we use “human-level intelligence” as the benchmark when talking about general intelligence. But personally, I think that benchmark is a red herring. Even if an AI system isn’t any smarter than we are, its memory and processing capabilities would still be vastly superior. That alone would allow it to immediately surpass the “human-level” threshold and enter the realm of Artificial Superintelligence (ASI).

              As for something like making a sandwich - that’s a task for robotics, not AI. We’re talking about cognitive capabilities here.

              • hendrik@palaver.p3x.de · 14 hours ago

                Yeah, you’re right. I think we can circle back to your original post, which stated the term is unspecific. However, I don’t think that makes sense in computer science, or natural science in general. The way I learned it: you always start out with definitions - mathematical, concise and waterproof ones, because they need to be internally consistent, and you then base an entire building on top of them. That building just collapses if the foundation isn’t there, and maths starts to show weird quirks. So the computer scientists need a proper definition anyway. But that doesn’t stop us using the same word for a different, imperfect one in everyday talk. I think they’re not the same, though.

                I’m not sure about the robotics. Some people say intelligence is inherently linked to interacting with the real world, and that it isn’t a thing in isolation. So that would mean an AI would need to be able to manipulate the real world. You’re certainly right that this can be done without robotics, limited to text and pictures on a screen. But I think ultimately it’s the same thing. And multimodal models can in fact use almost the same mechanisms they use to process and manipulate image and text, and apply them to movements and navigating 3D space. I’d argue robotics is the other side of the same coin.

                And it’s similar for humans. I use the same brain and roughly similar mechanics whether I learn a natural science, learn dancing moves, or become a good basketball player. I’d argue those are manifestations of the same thing. They also require knowledge, decision making… And that’d make a professional dancer “intelligent” in a similar way. I’m not sure if that’s an accepted way to think of it, though.

  • Poxlox@lemmy.world · 14 hours ago

    There’s also a philosophical definition, which is hotly contested, so depending on your school of thought your belief about whether LLMs are AI can vary. Usually people take issue over questions like: does it have a mind, does it think, does it have consciousness?

  • YappyMonotheist@lemmy.world · 21 hours ago

    I still think intelligence is a marketing term or simply a misnomer. It’s basically an advanced calculator. Intelligence questions, creates rules from nothing, transforms raw data from reality into ideas, and has its own volition… And the same goes for a chess engine, of course; it’s just more visible there because it’s spitting out chess moves rather than text. Intelligence and consciousness don’t seem to be computational processes.

    • noma@lemmy.dbzer0.com · 20 hours ago

      I could follow everything you said up until the conclusion. If consciousness is not computational, then what is going on in our brains instead? I know of course that even neuroscientists don’t know exactly, but just in broad principle. I always thought our brains are still doing computation, just with a different method to computers. I don’t mean to be contrarian, I’m just genuinely curious what other kind of process could support consciousness.

      • YappyMonotheist@lemmy.world · 19 hours ago

        I’m not gonna claim to ‘know’ things here, and I’m too groggy to even attempt to give you a satisfying answer but: applied formal logic as seen in any machine based on logic gates is just an expression/replication of simplified thought and not a copy of our base mental processes. The mind understands truths that cannot even be formalized or derived, such as axiomatic truths. Even if something can be understood and predicted, it doesn’t mean the process could be written down in code. It certainly isn’t today…

        My understanding of the topic is closer to Roger Penrose’s postulates so please check this wiki page and maybe watch a couple of vids on the topic, I’m just a peasant with a hunch when it comes to “AI”. 🤷

    • Perspectivist@feddit.uk (OP) · 20 hours ago

      You’re describing intelligence more like a soul than a system - something that must question, create, and will things into existence. But that’s a human ideal, not a scientific definition. In practice, intelligence is the ability to solve problems, generalize across contexts, and adapt to novel inputs. LLMs and chess engines both do that - they just do it without a sense of self.

      A calculator doesn’t qualify because it runs “fixed code” with no learning or generalization. There’s no flexibility to it. It can’t adapt.
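
      The difference is easy to show in code - a toy sketch, assuming scikit-learn (the example is mine, not anything from the thread): the calculator’s rule is hard-wired, while the model has to infer the same rule from examples and can then generalize.

      ```python
      # A toy sketch, assuming scikit-learn: fixed code vs. a learned rule.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      def calculator_add(a: float, b: float) -> float:
          return a + b  # the rule is hard-wired; nothing is learned

      # The model is never told the rule - it infers it from examples.
      rng = np.random.default_rng(0)
      X = rng.uniform(-100, 100, size=(1000, 2))
      y = X.sum(axis=1)
      model = LinearRegression().fit(X, y)

      print(calculator_add(12, 30))        # 42, by definition
      print(model.predict([[12, 30]])[0])  # ~42, by generalization
      ```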

      • YappyMonotheist@lemmy.world · 20 hours ago

        Not just human but many other animals too - the only group of entities we have ever used the term ‘intelligence’ for. It could be an entirely physical process, sure (that doesn’t imply replication, but at least it holds a hopeful possibility). I’m not gonna lie and say I understand the ins and outs of these bots; I’m definitely more ignorant on the subject than not, but I don’t see how the word intelligence applies in earnest here. Handheld calculators are programmed to “solve problems” based on given rules too… dynamic code and other advances don’t change the fact that they’re the same logic-gate machine at their core. Having said that, I’m sure they have their uses (idk if they’re worth harming the planet for, with the amount of energy they consume!), I’m just not the biggest fan of the semantics.

  • Xaphanos@lemmy.world · 19 hours ago

    What would you call systems that are used for discovery of new drugs or treatments? For example, companies using “AI” for Parkinson’s research.

    • Perspectivist@feddit.uk (OP) · 18 hours ago

      Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.
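
      A schematic sketch of that branch - assuming scikit-learn, with random vectors standing in for molecular descriptors, since real pipelines are far more involved:

      ```python
      # A schematic sketch, assuming scikit-learn; the "descriptors" and
      # "affinities" below are random stand-ins, not real chemistry.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      descriptors = rng.random((200, 8))      # hypothetical molecular descriptors
      affinity = descriptors @ rng.random(8)  # hypothetical binding affinities

      # Pattern recognition and prediction - no language generation involved.
      model = RandomForestRegressor(random_state=0).fit(descriptors, affinity)
      candidate = rng.random((1, 8))
      print(f"predicted affinity: {model.predict(candidate)[0]:.3f}")
      ```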

        • Perspectivist@feddit.uk (OP) · 10 hours ago

          They’re generally just referred to as “deep learning” or “machine learning”. The models themselves usually have names of their own, such as AlphaFold, PathAI and Enlitic.

          • Xaphanos@lemmy.world · 9 hours ago

            Does that include systems used for “correlation science”? Things like “people that are left-handed and eat sardines are more likely to develop eyebrow cancer”. Also genetic correlations for odd things like musical talent?

            Edit: in other words, searches that look for correlations in hundreds of thousands of parameters.

  • ℍ𝕂-𝟞𝟝@sopuli.xyz · 12 hours ago

    AGI itself has been made up as a marketing term by LLM companies.

    Let’s not forget that the official definition of AGI is that it can make 200 billion dollars.

    • Perspectivist@feddit.uk (OP) · 11 hours ago

      The term AGI was first used in 1997 by Mark Avrum Gubrud in an article named ‘Nanotechnology and international security’:

      > By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

      • ℍ𝕂-𝟞𝟝@sopuli.xyz · 10 hours ago

        That is true, but it was a term narrowly and incoherently used by scientists. In fact, that one paper used it, and it took ten years for it to be picked up again, again by just a few academic papers. Even the academic community preferred terms like “strong AI” before the current hype.

        AGI was not a term that was used to refer to a well-understood concept; it had to be explained by each and every article that mentioned it, and it was not a general term with a strict meaning attached to it. It was brought to that level by Google/DeepMind employees two years ago, and then got to the point where every second Medium article is buzzwording around with it once it became a corporate target for OpenAI/Microsoft.

      • navatar@programming.dev · 14 hours ago

        This visual is a bit misleading. LLMs are not a subset of genAI, and they aren’t really comparable, because LLMs refer to a vague model type (usually transformers with hundreds of millions of parameters) while genAI is a buzzword for the task of language generation. LLMs can be fine-tuned for a variety of other tasks, like sequence and token classification (see the sketch below), and there are other model architectures that can do language generation.

        Unrelated, but it’s disappointing how marketing and hype lead to so much confusion and information muddying. Even Wikipedia declaratively states that the most capable LLMs are generative, which academically is simply not the case.

        Source: computational linguist who works on LLMs
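
        As a sketch of that non-generative side - assuming the Hugging Face transformers library and a publicly available DistilBERT checkpoint, my choice of example - here is a transformer language model with a classification head doing sequence classification rather than text generation:

        ```python
        # A minimal sketch, assuming Hugging Face transformers and a public
        # DistilBERT checkpoint: sequence classification, not text generation.
        from transformers import pipeline

        classifier = pipeline(
            "sentiment-analysis",
            model="distilbert-base-uncased-finetuned-sst-2-english",
        )
        print(classifier("This thread is actually pretty informative."))
        # [{'label': 'POSITIVE', 'score': 0.99...}]
        ```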

    • Perspectivist@feddit.uk (OP) · 20 hours ago

      Consciousness - or “self-awareness” - has never been a requirement for something to qualify as artificial intelligence. It’s an important topic about AI, sure, but it’s a separate discussion entirely. You don’t need self-awareness to solve problems, learn patterns, or outperform humans at specific tasks - and that’s what intelligence, in this context, actually means.

      • Poxlox@lemmy.world · 14 hours ago

        That’s not quite right - discussions of consciousness, mind, and reasoning are all relevant, and have been part of the philosophy of artificial intelligence for hundreds of years. You’re entitled to call it AI within your definitions, but those are not exactly agreed upon; it depends, for example, on whether you subscribe to Alan Turing or John Searle.

      • Evil Edgelord@sh.itjust.works · 20 hours ago

        It’s not really solving problems or learning patterns now, is it? I don’t see it getting past any captchas or answering health questions accurately, so we’re definitely not there.

        • Perspectivist@feddit.uk (OP) · 20 hours ago

          If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass captchas - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.

          The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You’re blaming the hammer for not turning screws.

  • chemical_cutthroat@lemmy.world · 21 hours ago

    When someone online claims that LLMs aren’t AI, my immediate response is to ask them to prove they are a real and intelligent life form. It turns out proving you are real is pretty damned hard when it boils down to it. LLMs may be narrow AI, but humans are pretty narrow in our thinking as well.

    I started a project back in January. It’s not ready for the public yet, but I’m planning for an early September release. Initially I don’t think it will be capable of much, but I’m going to be training it on various datasets in hopes that it is able to pick up on the basics fairly quickly. Over the next few years I’m aiming to train it on verbal communication and limited problem solving, as well as working on refining motor skills for interaction with its environment. After that, I’ll be handing it off regularly to professionals who have a lot more experience than me when it comes to training. Of course, I’ll still have my own input, but I’ll be relying a lot on the expertise of others for training data. It’s going to be a slow process, but my long-term goal is a worldwide release sometime in 2043, or maybe 2044, with some limited exposure before then. Of course, the training process never ends and new data is always becoming available, so I expect that to continue well beyond 2044.