"Closed as per WP:SNOW. There is no indication whatsoever that there is consensus
to change the status of the BBC as a generally reliable source, neither based on the above discussion nor based on this RfC".
That's the fallacy of denying the antecedent. You are inferring, from the fact that airplanes really fly, that AIs really think, but that is not a logically valid inference.
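For reference, denying the antecedent is the schema "if P then Q; not P; therefore not Q", which is invalid. A minimal sketch in standard notation:

    % denying the antecedent (invalid): the conclusion does not follow
    (P \to Q),\ \lnot P\ \nvdash\ \lnot Q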
Observing a common (potential) failure mode is not equivalent to asserting a logical inference. It is only a fallacy if you argue "P, therefore C", which GP is not (at least to my eye) doing.
Thermometers and human brains are both mechanisms. Why would one be capable of measuring temperature and the other capable of learning abstract thought?
> If it turns out that LLMs don't model human brains well enough to qualify as "learning abstract thought" the way humans do, some future technology will do so. Human brains aren't magic, special or different.
Internal monologue is like a war correspondent's report of the daily battle. The journalist didn't plan or fight the battle; they just provided an after-the-fact description. Likewise, the brain's thinking, a highly parallelized process involving billions of neurons, is not done with words.
Play a little game of "what word will I think of next?" ... just let it happen. Those word choices are fed to the monologue; they aren't a product of it.
What does that have to do with the claim? It is very unlikely that 38% of Stanford students are actually disabled, and your success has nothing whatsoever to do with that.
I wrote some SNOBOL4 programs back in the day and met Ralph Griswold when he visited the UCLA Computer Club. Fun language with very interesting ideas. Looking into Unicon is on my list of things to do.
"Closed as per WP:SNOW. There is no indication whatsoever that there is consensus to change the status of the BBC as a generally reliable source, neither based on the above discussion nor based on this RfC".
Wikipedians know a troll when they see one.