
> "hallucination" ... "behaviour that only sometimes resembles thinking"

I guess you'll find that if you limit the definition of thinking that much, most humans are not capable of thinking either.





You see, we are observing a clash in terminology here. Hallucination in humans is still thinking, just not typical thinking. So-called "hallucinations" in LLM programs are just noise output, garbage. This is why using anthropomorphic terms for programs is bad, just like "thinking" or "reasoning".

I think the answer is somewhere in the middle, not as restrictive as the parent, but also not as wide as AI companies want us to believe. My personal opinion is that hallucinations (random noise) are a fundamental building block of what makes human thinking and creativity possible, but we have additional modes of neuroprocessing layered on top, which filter and modify the underlying hallucinations so that they become directed at a purpose. We see what happens when those filters fail in some non-neurotypical individuals, due to a variety of causes. We also use tools to optimize that filter function further by externalizing it.

The flip side of this is that, fundamentally, I don't see a reason why machines could not gain the same filtering capabilities over time through adjustments to their architecture.


I have never in my life met a person who hallucinates in the way ChatGPT etc. do. If I did, I would probably assume they were deliberately lying, or very unwell.


