> Are you saying that LLMs _do_ have a model of the human mind in their weights?
On topics with "complicated disagreements", an important way of making progress is to find small points on which we can move forward.
There are a great many otherwise intelligent people who treat "LLMs work by predicting the next word; therefore LLMs cannot think" as a valid inference; and since the premise is undoubtedly true, they take the conclusion to be undoubtedly true, and so they feel no need to consider any further arguments or evidence.
If I could do one thing, it would be to show people that this inference does not hold: a system that did think would do better at the "predict the next word" task than a system that did not.
You have to come up with some other way to determine whether a system is thinking.
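To make the point concrete, here is a minimal sketch of what the "predict the next word" objective actually rewards, written as a toy PyTorch example. The vocabulary size, model shape, and random "sentence" are illustrative assumptions, not any real LLM: the loss only scores next-token predictions, so anything that improves those predictions is favored by it.

```python
# A minimal sketch of the "predict the next word" objective.
# Everything here (vocab size, model shape, random token ids) is an
# illustrative toy, not a real LLM; the model is just a bigram-style
# lookup so the example stays self-contained.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 100, 32
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),  # token id -> vector
    nn.Linear(d_model, vocab_size),     # vector -> logits over the next token
)

tokens = torch.randint(0, vocab_size, (1, 16))  # a toy "sentence" of token ids
logits = model(tokens[:, :-1])                  # predict token t+1 from token t
loss = F.cross_entropy(
    logits.reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())

# The objective only scores the predictions. Any internal process that
# improves them, including whatever we would be willing to call
# "thinking", lowers this loss; nothing in the loss rules thinking out.
```

The sketch's point is that the objective is agnostic about mechanism: it measures prediction quality and nothing else.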