I think the problem is that, while the model isn’t actually reasoning, it’s very good at convincing people that it is.
I see current LLMs kind of like an RPG character built with all ability points put into Charisma. They’re not that good at most tasks, but they’re so good at convincing people that users start to believe they’re doing a great job.