I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.
Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.
That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across many domains. This is called Artificial General Intelligence (AGI).
One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.
What they’re not designed to do is give factual answers. The fact that they often appear to is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
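To make that concrete, here is a deliberately tiny sketch of a “statistical pattern machine” in Python. It is nothing like a real LLM (which uses a neural network trained on enormous amounts of text), and the toy corpus is made up for illustration - but it shows the same basic objective: given a prompt, keep appending whatever tends to come next in the training data.

```python
import random
from collections import defaultdict

# Toy corpus - stands in for the vast amount of text an LLM is trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the ball ."
).split()

# Count which word tends to follow which (a simple bigram model).
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def continue_prompt(prompt, n_words=8):
    """Extend the prompt by repeatedly picking a statistically plausible next word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no pattern to continue from
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_prompt("the cat"))
# e.g. "the cat sat on the rug . the dog chased"
```

The continuation is fluent-looking because it mirrors patterns in the training text, not because the system “knows” anything about cats or rugs. Scale that idea up by many orders of magnitude and you get something much closer to an LLM - but the goal is still plausible continuation, not fact lookup.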
You’re describing intelligence more like a soul than a system - something that must question, create, and will things into existence. But that’s a human ideal, not a scientific definition. In practice, intelligence is the ability to solve problems, generalize across contexts, and adapt to novel inputs. LLMs and chess engines both do that - they just do it without a sense of self.
A calculator doesn’t qualify because it runs “fixed code” with no learning or generalization. There’s no flexibility to it. It can’t adapt.
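Here is a minimal sketch of that distinction, again with made-up data and function names. The “calculator” applies a rule its programmer wrote down in advance; the “learner” fits a rule from examples (a least-squares line, standing in for learning in general) and then applies it to inputs it was never shown.

```python
# A calculator applies a fixed rule the programmer specified in advance.
def calculator_add(a, b):
    return a + b  # behaviour is fully determined by the written code

# A (very) minimal learning system: fit a line to example data,
# then apply the fitted rule to inputs it has never seen.
examples = [(1, 3), (2, 5), (3, 7), (4, 9)]  # hidden rule: y = 2x + 1

n = len(examples)
mean_x = sum(x for x, _ in examples) / n
mean_y = sum(y for _, y in examples) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in examples) / \
        sum((x - mean_x) ** 2 for x, _ in examples)
intercept = mean_y - slope * mean_x

def learned_predict(x):
    return slope * x + intercept

print(calculator_add(2, 2))   # 4 - the same answer forever, by construction
print(learned_predict(10))    # ~21.0 - generalises from the examples it saw
```

The difference isn’t the hardware - both run on the same logic gates - it’s that in one case the rule is hand-written and in the other it’s derived from data and extends to new inputs.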
Not just humans but many other animals too - the only group of entities we’ve ever used the term ‘intelligence’ for. It could be an entirely physical process, sure (that doesn’t imply we can replicate it, but it at least holds a hopeful possibility). I’m not gonna lie and say I understand the ins and outs of these bots - I’m definitely more ignorant on the subject than not - but I don’t see how the word intelligence applies in earnest here. Handheld calculators are programmed to “solve problems” based on given rules too… dynamic code and other advances don’t change the fact that they’re the same logic-gate machine at their core. Having said that, I’m sure they have their uses (idk if they’re worth harming the planet for, given the amount of energy they consume!), I’m just not the biggest fan of the semantics.