Yeah, LLMs seem pretty unlikely to do that, though if they figure it out that would be great. That’s just not their wheelhouse. You have to know enough about what you’re attempting to ask the right questions and recognize bad answers. The thing you’re trying to do needs to be within your reach without AI, or you are unlikely to be successful.
I think the problem is more the over-promising of what AI can do (or people who don’t understand it at all making assumptions because it sounds human-like).