I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.
Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.
That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).
One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.
What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
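To make the “statistical pattern machine” idea concrete, here’s a deliberately tiny sketch - not a real LLM, just bigram counting on a toy corpus - that continues a prompt with whichever word most often followed the previous one. Real models use learned neural networks over subword tokens, but the core idea of “predict a plausible continuation from patterns in the training text” is the same.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which word tends to follow
# which in a tiny "training corpus", then continue a prompt with the
# statistically most likely next words.
corpus = "the cat sat on the mat . the dog sat on the mat .".split()

# Bigram counts: for each word, how often does each other word come next?
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def continue_prompt(prompt, n_words=3):
    """Greedily append the most frequent next word, n_words times."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = following[words[-1]].most_common(1)
        if not candidates:  # last word never seen in the corpus: stop
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(continue_prompt("the dog sat"))  # → the dog sat on the mat
```

Notice there’s no fact lookup anywhere: the output only looks sensible because the training text contained sensible sentences - which is exactly the point about LLMs above.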
In computer science, the term AI at its simplest just refers to a system capable of performing any cognitive task typically done by humans.
That said, you’re right in the sense that when people say “AI” these days, they almost always mean generative AI - not AI in the broader sense.
Yeah, generative AI is a good point.
I’m not sure about the computer scientists, though. It’s certainly not any task - that’d be AGI. And it’s not necessarily tied to humans either. Sure, they’re the prime example of intelligence (whatever that is). But I think a search engine is AI as well, depending on how it’s built. And text-to-speech, old-school expert systems. A thermostat that controls your heating with a machine learning model might count as well - I’m not sure about that. And that’s not really like human cognitive tasks; it’s closer to curve fitting than anything else. The thermostat does involve problem-solving, learning, perception, knowledge, planning and decision-making. But on a human intelligence scale it wouldn’t even register.
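The “closer to curve fitting” point can be shown directly. This is a hypothetical toy, with made-up numbers: a “learning thermostat” that fits a straight line through past observations of outdoor temperature versus heating power, then “decides” how hard to heat - nothing but least-squares regression.

```python
# Hypothetical learning thermostat (toy data, not a real product):
# fit heating power as a linear function of outdoor temperature
# using ordinary least squares - i.e. plain curve fitting.
temps = [-5.0, 0.0, 5.0, 10.0, 15.0]  # outdoor temperature in °C
powers = [2.5, 2.0, 1.5, 1.0, 0.5]    # heating power in kW that was needed

n = len(temps)
mean_t = sum(temps) / n
mean_p = sum(powers) / n

# Least-squares slope and intercept for powers ≈ slope * temps + intercept
slope = sum((t - mean_t) * (p - mean_p) for t, p in zip(temps, powers)) \
    / sum((t - mean_t) ** 2 for t in temps)
intercept = mean_p - slope * mean_t

def predict_power(temp):
    """The thermostat's whole 'decision': evaluate the fitted line."""
    return slope * temp + intercept

print(round(predict_power(2.0), 2))  # → 1.8
```

Whether fitting and evaluating a line counts as “learning, planning and decision-making” or just as arithmetic is exactly the definitional ambiguity being discussed.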
Any individual task I mean. Not every task.
Yeah, I’d say some select tasks. And it’s not really the entire distinction either. I can do math equations with my cognitive capabilities. My pocket calculator can do the same, yet it’s not AI. So the definition has to be something else. And AI can do tasks I cannot do - like go through large amounts of data, or find patterns a human cannot find. So it’s not really tied to the specific things we do, but to some generalized form of intelligence, and I don’t think that’s well defined, or that humans are the comparison. They’re more of a stand-in measurement scale. But I don’t think that’s what it’s about.
Edit: And I’d question the usefulness of such a definition altogether. ChatGPT can write very professional-looking text, things that pass as Wikipedia articles. A 5-year-old human can’t do that. However, the average 5-year-old can make a sandwich - now try that with ChatGPT and tell me what that says about their intelligence. It doesn’t really work as a definition: it’s too broad and ill-defined, humans can do a wide variety of tasks, and a slight shift in focus turns everything into its opposite.
Most definitions are imperfect - that’s why I said the term AI, at its simplest, refers to a system capable of performing any cognitive task typically done by humans. Doing things faster, or even doing things humans can’t do at all, doesn’t conflict with that definition.
Humans are unarguably generally intelligent, so it’s only natural that we use “human-level intelligence” as the benchmark when talking about general intelligence. But personally, I think that benchmark is a red herring. Even if an AI system isn’t any smarter than we are, its memory and processing capabilities would still be vastly superior. That alone would allow it to immediately surpass the “human-level” threshold and enter the realm of Artificial Superintelligence (ASI).
As for something like making a sandwich - that’s a task for robotics, not AI. We’re talking about cognitive capabilities here.
Yeah, you’re right. I think we can circle back to your original post, which said the term is unspecific. I don’t think that works in computer science, though, or in natural science in general. The way I learned it: you always start out with definitions - mathematical, concise, waterproof ones - because they need to be internally consistent, and you then build an entire edifice on top of them. That just collapses if the foundation isn’t there, and the maths starts to show weird quirks. So the computer scientists need a proper definition anyway. But that doesn’t stop us from using the same word for a different, imperfect one in everyday talk. I just don’t think they’re the same.
I’m not sure about the robotics. Some people say intelligence is inherently linked to interacting with the real world - that it isn’t a thing in isolation. That would mean an AI needs to be able to manipulate the real world. You’re certainly right that this can be done without robotics, limited to text and pictures on a screen. But I think ultimately it’s the same thing. Multimodal models can in fact use almost the same mechanisms they use to process and manipulate images and text to control movement and navigate 3D space. I’d argue robotics is the other side of the same coin.
And it’s similar for humans. I use the same brain and roughly the same mechanisms whether I learn a natural science, learn dance moves, or become a good basketball player. I’d argue those are manifestations of the same thing - they also require knowledge, decision-making and so on. And that would make a professional dancer “intelligent” in a similar way. I’m not sure that’s an accepted way to think about it, though.