
LLMs are language models. We interact with them using language: all of that, but also only that. That doesn't mean they have "common sense", context, the same motivations, agency, or even reasoning like ours.

But since we interact with other people mostly through language, and since the start of the internet a lot of those interactions happen in a way similar to how we interact with AI, the difference is not so obvious. We keep falling for the Turing test here, mostly because that test is more about language than about intelligence.



> But as we interact with other people using mostly language,

Didn't they use to say that 90% of communication is non-verbal?

Look, that was a while ago, when people met IRL. So maybe not as true today.


"Language" is just the interface. What happens on the inside of LLMs is a lot weirder than that.


What matters is what happens on the outside. We don't know what happens on our inside (or the inside of others, at least); we only know the language and how it is used, and even the meanings don't have to be the same as long as they are used consistently. And you get that by construction. Does that mean intelligence, self-consciousness, a soul, or whatever? We only know that it walks like a duck and quacks like a duck.


But have you considered that humans really really want to feel like they're unique and special and exceptional?

If it walks like a duck and quacks like a duck, then it's not anything remotely resembling a duck, and is just a bag of statistics doing surface level duck imitation. According to... ducks, mostly.


Which arithmetic operation in an LLM is weird?


The fact that you can represent abstract thinking as a big old bag of matrix math sure is.
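For what it's worth, here's roughly what one step of that "bag of matrix math" looks like: a toy single-head attention layer in numpy. All the sizes and weight names here are made up for illustration, not taken from any real model.

    import numpy as np

    # Toy single-head self-attention: the "bag of matrix math" in question.
    # Dimensions and weights are random placeholders, purely illustrative.
    rng = np.random.default_rng(0)
    d_model, seq_len = 8, 4

    x = rng.normal(size=(seq_len, d_model))           # token embeddings
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))

    Q, K, V = x @ W_q, x @ W_k, x @ W_v               # three matrix multiplies
    scores = Q @ K.T / np.sqrt(d_model)               # another one
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    out = weights @ V                                 # and one more

    print(out.shape)  # (4, 8): same shape as the input, ready for the next layer

Stack a few hundred of these (plus the feed-forward layers) and that's the whole forward pass. Whether that counts as "mundane" is exactly the argument here.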


So it's not weird, it's actually very mundane.


If your takeaway from the LLM breakthrough is "abstract thinking is actually quite mundane", then at least you're heading in the right direction. Some people are straight up in denial.


You have no idea what abstract thinking actually is but you are convinced the illusion presented by an LLM is it. Your ontology is confused but I doubt you are going to figure out why b/c that would require some abstract thinking which you're convinced is no more special than matrix arithmetic.


If I wanted worthless pseudo-philosophical drivel, I'd ask GPT-2 for some. Otherwise? Functional similarity at this degree is more than good enough for me.

By now, I am fully convinced that this denial is "AI effect" in action. Which, in turn, is nothing but cope and seethe driven by human desire to remain Very Special.


Which matrix arithmetic did you perform to find that gold nugget of insight?


"Weirder" does not mean "more complex" or "more human-like".


And?


I feel like the interface in this case has caused us to fool ourselves into thinking there's more there than there is.

Before 2022 (most of history), if you had a long seemingly sensible conversation with something, you could safely assume this other party was a real thinking human mind.

It's like a duck call.

edit: I want to add that, because this is a neural net trained to output sensible text, language isn't just the interface.

Unlike a website, there's no separation between anything; with LLMs the back end and front end are all one blob.

edit2: seems I have upset the ducks that think the duck call is a real duck.



