> The fact is: something fundamental has changed that enables a computer to pretty effectively understand natural language.
I have commented elsewhere but this bears repeating
LLMs do not think, understand, reason, reflect, or comprehend, and they never shall.
If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then, once you had trained the model, you could use even more pen and paper to step through the correct prompts and arrive at the answer. All of this would be a completely mechanical process. That really does bear thinking about. The results LLMs are able to achieve are amazing, but let's not kid ourselves and start throwing around terms like AGI or emergence just yet. It makes a mechanical process seem magical (as do computers in general).
I should add that it also makes sense why it works so well: just look at the volume of human knowledge in the training data. It's the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language and intellect, that does the heavy lifting.
You could use a similar argument to claim that "brains do not think, understand, reason, reflect, comprehend and they never shall." After all, there's nothing in there but neurons, synapses and other biological gunk, if you look at it that way.
That argument does not follow. Brains do all of those things; at least I know mine does, because those are the most intimate experiences I have, more intimate even than my sense experiences. It's what led Descartes to exclaim 'cogito ergo sum', and Ibn Sina before him to note much the same.
Moreover, how the brain does what it does is an open academic question, and one of the most difficult; see the hard problem of consciousness, for example.
I can make certain determinate judgments about things: my digital thermometer does not think or understand when it tells me the temperature, something I myself would be unable to determine. My digital LLM does not think, for the same reason. Importantly, my pen-and-paper version of that very same LLM would not think either.