Jefouree

The discoveries worth talking about each week.



Harvard DSR AI/ML

Why Vision-Language Models Still Hallucinate—Even When They're Confident


Imagine someone squinting at a photo and confidently describing something that isn't there. That's what a hallucinating vision-language model does: its language circuits drown out its visual ones, like a loud person dominating a conversation.

This means we're finally untangling *where* AI confidence breaks down, separating visual blind spots from reasoning errors—crucial for deploying these systems anywhere people's safety matters.
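The "language drowning out vision" failure can be sketched as a toy fusion of two score streams. Everything here is hypothetical (the captions, logit values, and weights are made up for illustration, not taken from any real model): when a strong language prior is combined with weak visual evidence, the fused distribution stays confident even though the image disagrees.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of logits.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Two hypothetical candidate captions for the same photo.
labels = ["a dog on the grass", "an empty lawn"]

# Language prior: dogs co-occur with lawns in training text,
# so the text-only stream strongly favors the first caption.
language_logits = [4.0, 1.0]

# Visual evidence: the photo actually shows no dog,
# but the visual signal is comparatively weak.
visual_logits = [0.5, 1.5]

def fused_confidence(w_lang, w_vis):
    # Simple illustrative fusion: weighted sum of the two streams.
    fused = [w_lang * l + w_vis * v
             for l, v in zip(language_logits, visual_logits)]
    return softmax(fused)

# With the language stream weighted heavily, the model "sees"
# the dog anyway, and reports high confidence in the wrong caption.
probs = fused_confidence(w_lang=1.0, w_vis=0.5)
print(labels[probs.index(max(probs))], round(max(probs), 2))
# → a dog on the grass 0.92
```

The point of the sketch is that confidence here measures agreement between the streams after fusion, not fidelity to the image, which is why separating the visual failure from the reasoning failure matters.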

