I’ll start believing in AI when, and if, it’s able to eliminate error. When will AI be able to work out whether the training material it used is true, false, myth, or other narrative?
We tried to build systems that perform a kind of basic, rudimentary, extremely power-intensive and inefficient mimicry of how (we think maybe) brain cells work.
Then that system lies to us, makes epic bumbling mistakes, expresses itself with extreme overconfidence, and constantly, creatively misinterprets simple instructions. It recognizes patterns that aren’t there and regurgitates garbage information it picks up on the internet.
Hmmm… Actually, maybe we’re doing a pretty good job of making systems that work similarly to the way brain cells work…
I really hate this headline.
They aren’t wrong 70% of the time.
The study found that they successfully complete multi-step business tasks only 30% of the time. Those tasks were made up by the researchers to simulate an office environment.
The spread across different models is also massive, with some coming in at around 1% completion and others at over 30%.