I await the day when AI communicates that it cannot do something with the same confidence it shows when claiming it can, because right now figuring that out seems to be my job.
Yeah, LLMs seem pretty unlikely to do that, though if they figure it out that would be great. It's just not their wheelhouse. You have to know enough about what you're attempting to ask the right questions and to recognize bad answers. The thing you're trying to do needs to be within your reach without AI, or you're unlikely to succeed.
I think the bigger problem is over-promising what AI can do (or people who don't understand it at all making assumptions because its output sounds human-like).
"Precise logic" is precisely what AI is not good at.
AI might be able to write a program that beats an A2600 in chess, but it should not be expected to win at chess itself.