> ChatGPT is not "intelligence", so please don't call it "AI". I define "intelligence" as being capable of knowing or understanding, at least within some domain.
Great -- another "submarines can't swim" person. [EDIT2: Apparently this is not his position, although it's only clear in a different page he links to. See below.]
By this definition nothing is AI. Quite an ignorant stance for someone who used to work at an AI laboratory.
ETA:
> Please join me in spreading the word that people should not trust systems that mindlessly play with words to be correct in what those words mean.
Please join me in spreading the counterargument to this: The best way to predict a physical system is to have an accurate model of that system; the best way to predict what a human would write next is to have a model of the human mind.
"They work by predicting the next word" does not prove that they are not thinking.
EDIT2, cont'd: So, he clarifies his stance elsewhere [1]. His position appears to be:
1. Systems -- including both "classical AI" systems like chess engines and machine learning / deep learning systems -- can be said to have semantic understanding, even if they are not 100% correct, provided there has been some effort to "validate" the output: to correlate it with reality.
2. No effort has been made to validate the output of ChatGPT and other LLMs.
3. Therefore, ChatGPT and other LLMs have no semantic understanding.
#2 is not stated explicitly. However, he goes into quite a bit of detail to emphasize the validation part of #1, going so far as to say that completely inaccurate systems still count as "attempted artificial intelligence" because they "purport to understand". So the only way #3 makes any sense is if #2 holds as stated.
And #2 is simply and clearly false. The AI labs go to great lengths to increase the correlation between their models' output and the truth ("reducing hallucination"), and they have been making steady progress.
So to state it forwards:
1. According to [1], a system's output can reflect "real knowledge" and a "semantic understanding" -- and thus qualify as "AI" -- if someone "validate[s] the system by comparing its judgment against [ground truth]".
2. Significant effort has been put into validating ChatGPT, Claude, and other LLMs against ground truth.
3. So, ChatGPT has semantic understanding, and is thus AI.
Are you saying that LLMs _do_ have a model of the human mind in their weights?
Imagine you use an ARIMA model to forecast demand for your business or the economy or whatever. It's easy to say it doesn't have a "world model", in the sense that it can't predict things that are obvious only to someone who implicitly understands what the variables _mean_. But in what way is it different from an LLM?
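For concreteness, here's a rough sketch of that kind of forecast -- assuming the statsmodels library and a made-up demand series purely for illustration. The model only ever sees a sequence of numbers; nothing in it encodes what "demand" means:

    # Rough sketch: ARIMA forecast over a synthetic "demand" series.
    # Assumes numpy and statsmodels; the data is invented for illustration.
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    demand = 100 + np.cumsum(rng.normal(0, 5, size=120))  # fake monthly demand

    model = ARIMA(demand, order=(1, 1, 1))  # AR(1), one difference, MA(1)
    fitted = model.fit()
    print(fitted.forecast(steps=3))  # next three periods, from the numbers alone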
> Are you saying that LLMs _do_ have a model of the human mind in their weights?
On topics with "complicated disagreements", an important way of moving forward is to find small points where we can make progress.
There are a large number of otherwise intelligent people who think that "LLMs work by predicting the next word; therefore LLMs cannot think" is a valid argument; and since the premise is undoubtedly true, they take the conclusion to be undoubtedly true, and therefore see no need to consider any further arguments or evidence.
If I could do one thing, it would be to show people that this argument does not hold: a system which did think would do better at the "predict the next word" task than a system which did not think.
You have to come up with some other way to determine if a system is thinking or not.
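To make "predicting the next word" concrete, here's a minimal sketch of greedy next-token decoding -- assuming PyTorch, the Hugging Face transformers library, and the gpt2 checkpoint purely for illustration:

    # Minimal sketch of greedy next-token prediction with a causal LM.
    # Assumes torch, transformers, and the "gpt2" checkpoint, for illustration only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The best way to predict", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(10):
            logits = model(input_ids).logits   # scores over the whole vocabulary
            next_id = logits[0, -1].argmax()   # greedy: take the most likely token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

The loop only ever asks for the next token; whatever computation inside the model produces those scores -- thinking or not -- is invisible at this interface, which is why the objective alone settles nothing.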
When I studied computer science, the artificial intelligence practical courses involved things like building a line-follower robot or implementing an edge detection algorithm based on difference of Gaussians.
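For reference, the second exercise is only a few lines -- a rough sketch, assuming NumPy and SciPy, with a synthetic image standing in for real data:

    # Rough sketch of edge detection via difference of Gaussians.
    # Assumes numpy and scipy; the image is synthetic, for illustration only.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    image = np.zeros((64, 64))
    image[16:48, 16:48] = 1.0  # bright square on a dark background

    # Blur at two scales and subtract; the response peaks near edges.
    dog = gaussian_filter(image, sigma=1.0) - gaussian_filter(image, sigma=2.0)
    edges = np.abs(dog) > 0.1
    print(edges.sum(), "edge pixels")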
People call all kinds of things "AI", and I think it is fair to accept that other people draw the line somewhere else.
I think I'd define "classical" AI as any system where, rather than putting in an explicit algorithm, you give the computer a goal and have it "figure out" how to achieve that goal.
By that definition, SQL query planners, compiler optimizers, Google Maps routing algorithms, chess playing algorithms, and so on were all "AI". (In fact, I'm pretty sure SQLite's website refers to their query planner as an "AI" somewhere; by classical definitions this is correct.)
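A toy version of the "give it a goal, let it search" idea, in the spirit of a route planner -- the road network and distances here are made up purely for illustration:

    # Toy "classical AI as search": uniform-cost search over a tiny road graph.
    # The graph and distances are invented for illustration.
    from heapq import heappush, heappop

    roads = {                      # node -> list of (neighbor, distance)
        "A": [("B", 5), ("C", 2)],
        "B": [("D", 4)],
        "C": [("B", 1), ("D", 7)],
        "D": [],
    }

    def shortest_path(start, goal):
        frontier = [(0, start, [start])]   # (cost so far, node, path)
        seen = set()
        while frontier:
            cost, node, path = heappop(frontier)
            if node == goal:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for neighbor, dist in roads[node]:
                heappush(frontier, (cost + dist, neighbor, path + [neighbor]))

    print(shortest_path("A", "D"))  # (7, ['A', 'C', 'B', 'D'])

Nothing in there spells out the route; we only state the goal and a way to score progress, and the program finds the answer.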
But does an SQL query planner "understand" databases? Does Stockfish "understand" chess? Does Google Maps "understand" roads? I doubt even most AI proponents would say "yes". The computer does the searching and evaluation, but the models and evaluation functions are developed by humans, and stripped down to their bare essentials.
RMS might say yes; here's a passage from the linked page describing other systems as having knowledge and understanding:
> There are systems which use machine learning to recognize specific important patterns in data. Their output can reflect real knowledge (even if not with perfect accuracy)—for instance, whether an image of tissue from an organism shows a certain medical condition, whether an insect is a bee-eating Asian hornet, whether a toddler may be at risk of becoming autistic, or how well a certain art work matches some artist's style and habits. Scientists validate the system by comparing its judgment against experimental tests. That justifies referring to these systems as “artificial intelligence.”
Thanks -- that's not at all clear in this post (nor is it clear from the link text that its target would include a more complete description of his position).
I've updated my comment in response to this. Basically: It seems his key test is "Is someone validating the output, trying to steer it towards ground truth?" And since the answer re ChatGPT and Claude is clearly "yes", then ChatGPT clearly does count as an AI with semantic understanding, by his definition.
> I think it is fair to accept that other people draw the line somewhere else.
It's a pointless naming exercise, no better than me arguing that I'm going to stop calling it quicksort because sometimes it's not quick.
It's widely called this, and it's exactly in line with how the field uses the term. You can have your own definitions; it just makes talking to other people harder, because you're refusing to accept what certain words mean to others -- perhaps a fun problem, given that the overall complaint is about LLMs not understanding the meaning of words.
[1] https://www.gnu.org/philosophy/words-to-avoid.html#Artificia...