A bit of an anecdote: last year I hung out with a bunch of old classmates I hadn't seen in quite a while. None of them works in tech.
To my surprise, all of them had ChatGPT installed on their phones.
And, unsurprisingly, none of them treated it like an actual intelligence. That makes me wonder where the people who think ChatGPT is sentient come from.
(It's a bit worrisome that several of them thought it worked "like Google Search and Google Translate combined", even though at the time ChatGPT couldn't do web search...!)
I think it's more than a few, and the number is still rising; therein lies the issue.
Which is why it's paramount to talk about this now, while we can still turn the tide. LLMs can be useful, but it's important to have the right mental model, understanding, expectations, and attitude towards them.
This is a No True Scotsman fallacy, and it's flatly wrong on the facts.
The rest of your comment echoes the famous (but apocryphal) Pauline Kael quote: "I can't believe Nixon won. I don't know anyone who voted for him."
Except that this is exactly what we're seeing with LLMs: people believing precisely that.