

Uh, no. If you want to be mad at something like that, look into how they're training models without a care for bias (or how they're adding in their own biases).
Hallucination is a completely different thing, and it's been mathematically shown to happen regardless of who or what made the model. Even if the model only knows about fluffy puppies and kitties, it will still hallucinate to some extent; it'll just be hallucinating fluffy puppies and kitties. Generation is sampling from a probability distribution at the end of the day, so there's always some chance of a fluent sequence that was never in the training data.
That isn't some conspiracy. Now, if you expected a model that's all fluffy kitties and puppies and you're mad because it starts spewing out hate speech - that's not hallucination. That's the training data.
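
To make the sampling point concrete, here's a toy sketch in Python. Everything in it is made up for illustration (the vocabulary, the probabilities - this isn't anyone's real model); it just shows that generation is sampling from a distribution, so even a "fluffy puppies and kitties" model can emit combinations it was never trained on:

```python
# Toy sketch: a next-token sampler whose entire "world" is fluffy puppies
# and kitties. Vocabulary and probabilities are invented for illustration.
import random

# Hypothetical learned probabilities for the token that follows "fluffy".
# Note that every token keeps *some* probability mass -- softmax outputs
# are never exactly zero, so unlikely continuations can always be sampled.
next_token_probs = {
    "puppies": 0.45,
    "kitties": 0.45,
    "and":     0.05,
    "purr":    0.03,
    "bark":    0.02,
}

def sample_next(probs):
    """Sample one token from the distribution -- this is all generation is."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

random.seed(0)
samples = [sample_next(next_token_probs) for _ in range(1000)]

# Even if "fluffy bark" never appeared in training, it still gets sampled
# sometimes, because any nonzero-probability token can come up.
print("'fluffy bark' sampled:", samples.count("bark"), "times out of 1000")
```

Run it and "bark" keeps showing up after "fluffy" even though it's the least likely option. That's the mechanical core of hallucination: fluent sampling, zero factual grounding.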
If you’re going to rage about something like that, you might as well rage about the correct thing.
I'm getting real tired of the "AI is the boogeyman" stuff. AI isn't bad. We've had AI models for over 20 years now, and they can be really helpful. The bias baked into them, and how they're implemented and trained, has always been the problem and will continue to be.











