
> He's famously a curmudgeon, not lazy. How would you expect him to respond?

Not lazily, clearly. You can argue he's not lazy, but this is a very lazy take on LLMs.

> Stallman's wider point (and I think it's safe to say this, considering it's one that he's been making for 40+ years) would be that debating the epistemology of closed-source flagship models is fruitless because... they're closed source.

You are making that point for him. He is not. He is actively making this fruitless argument.

> This criticism is so vague it becomes meaningless. No-one can respond to it because we don't know what you're citing exactly, but you're obviously right that the field is broad, older than most realise, and well-developed philosophically.

I don't get what you're missing here, then. It's a broad field, and LLMs clearly fall within it; you can only say they don't if you don't know the field's history, which in this case is either laziness or deliberate, given that RMS has worked in the field. I notice he conveniently classifies some work of his own kind in this field as "artificial intelligence" that somehow has understanding and knowledge.

> embracing them without skepticism in your work

That's not a point I'm arguing with.

> as we can prove that we think and reason (and I don't think I need to cite this).

Can we? In a way that lets us test whether another thing can? This is entirely distinct from everything else he's saying here, as the threshold for him is not "can think and reason like a person" but the barest version of knowledge or understanding, which he attributes to exceptionally simpler systems.



> Not lazily, clearly. You can argue he's not lazy, but this is a very lazy take on LLMs.

Feel free to check out a longer analysis [1] (which he also linked in the source).

> You are making that point for him. He is not. He is actively making this fruitless argument.

Are we reading the same thing? He wrote:

> Another reason to reject ChatGPT in particular is that users cannot get a copy of it. It is unreleased software -- users cannot get even an executable to run, let alone the source code. The only way to use it is by talking to a server which keeps users at arm's length.

...and you see no connection to his ethos? An opaque, nondeterministic model, trained on closed data, now being prepped (at the very least) to serve search ads [2] to users? I can't believe I need to state this, but he's the creator of the GNU GPL. Use your brain.

> I don't get what you're missing here, then. [...] I notice he conveniently classifies some work of his own kind in this field as "artificial intelligence" that somehow has understanding and knowledge.

You're not making an argument. How, directly and in plain language, is his opinion incorrect?

> Can we? In a way that lets us test whether another thing can [...] to exceptionally simpler systems.

Yes... it is one of very few foundational principles and the closest thing to a universally agreed idea. Are you actually trying to challenge 'cogito ergo sum'?

[1] https://www.gnu.org/philosophy/words-to-avoid.html#Artificia...

[2] https://x.com/btibor91/status/1994714152636690834


> ...and you see no connection to his ethos? An opaque, nondeterministic model, trained on closed data, now being prepped (at the very least) to serve search ads [2] to users? I can't believe I need to state this, but he's the creator of the GNU GPL. Use your brain.

You seem very confused about what I'm saying, so I will try again, despite your insult.

It is extremely clear why he would be against a closed-source thing, regardless of what it is. That is not in any sort of doubt.

He, however, is arguing about whether it knows and understands things.

When you said "debating the epistemology of closed-source flagship models is fruitless", I understood you to be talking about this, not about whether closed-source things are good or not. Otherwise, what did you mean by epistemology?

> Feel free to check out a longer analysis [1] (which he also linked in the source).

Yes, I quoted it to you already.

> You're not making an argument. How, directly and in plain language, is his opinion incorrect?

They are AI systems by long-standing use of the term within the field.

> Yes...

So we have a test for it?

> it is one of very few foundational principles and the closest thing to a universally agreed idea. Are you actually trying to challenge 'cogito ergo sum'?

That is not a test.

I'm also not sure why you included the words "to exceptionally simpler systems" after snipping out another part; that doesn't make a sentence that works at all, and it doesn't represent what I said there.


> You seem very confused about what I'm saying, so I will try again, despite your insult.

I'd call it an observation, but I'm willing to add that you are exhausting. Confusion (or, more likely, a vested interest) certainly reigns.

> It is extremely clear [...] Otherwise, what did you mean by epistemology?

We are talking about both because he makes both points. A) Stallman states it possesses inherently unreliable knowledge and judgment (hence gambling), and B) when someone is being imperious, there is a need to state the obvious to clarify their point. You understood correctly and seem more concerned with quibbling than discussion. Much in the same way as your persnickety condescension, I now wonder whether you know and understand things in real terms or are simply motivated by dunking on Stallman for some obscure reason.

> They are AI systems by long-standing use of the term within the field.

No. ChatGPT is not. It is marketed (being the operative term) as a wide solution, yet it is not one in the same way as an LLM purposefully geared (whatever the technique) towards a specific and defined task. Now we reach the wall of discussing a closed-source LLM, which was my point. What I said previously does not elide their abstract usefulness and obvious flaws. Clearly you're someone familiar with the field, so none of this should be controversial unless you're pawing at a discussion of the importance of free will.

> Yes, I quoted it to you already.

I'm aware. Your point?

> That is not a test.

The world wonders. Is this some sort of divine test of patience? Please provide an objective rubric for proving the existence of the mind. Until then, I'll stick with Descartes.

> I'm also not sure why you included the words "to exceptionally simpler systems" after snipping out another part; that doesn't make a sentence that works at all, and it doesn't represent what I said there.

Must I really explain the purpose of an ellipsis to you? We both know what you said.


> We are talking about both because he makes both points.

He does. I am talking about one of those points. The epistemology side is entirely unrelated to whether closed-source things are good or bad.

> A) Stallman states it possesses inherently unreliable knowledge and judgment (hence gambling)

He says it does not contain knowledge, not that it is simply unreliable. He happily says unreliable systems have knowledge.

> No. ChatGPT is not. It is marketed (being the operative term) as a wide solution,

Being a wide solution has nothing to do with things being called AI. This is true both in the field of AI and in how RMS defines it. You can argue that it doesn't meet some other bar you have; I don't really care. I was responding to his line and why it does not make sense.

> The world wonders. Is this some sort of divine test of patience? Please provide an objective rubric for proving the existence of the mind. Until then, I'll stick with Descartes.

You said we could prove we could think. I asked if we can do that *in a way that lets us test other things to see if they can think*, because I do not think such a test exists. You said yes, we do have a test. Now you're complaining that no such thing exists.

> Now we reach the wall of discussing a closed-source LLM, which was my point.

OK, but that has nothing to do with what I'm talking about (which should have been extremely clear from what I wrote originally, and from my explicit clarifications to you), so you can have that conversation with someone else.

> Must I really explain the purpose of an ellipsis to you? We both know what you said.

You need to explain how you're using it, because what you're doing to my sentences makes no sense. Here's what I said, and why, broken down for you:

> Can we? In a way that lets us test whether another thing can? This is entirely distinct from everything else he's saying here

The question of thinking is entirely distinct from what he is saying. It is not at all related; it is a side point I am responding to you about.

> as the threshold for him is not "can think and reason like a person"

This is key. We are not talking about comparisons to people. He does not talk about thinking or reasoning. He is talking about knowledge and understanding.

> but the barest version of knowledge or understanding

And not even a large amount; we are not comparing to humans, smart humans, animals, etc.

> which he attributes to exceptionally simpler systems.

He says trained YOLOv5 models have knowledge and are AI.

He says that XGBoost models have it and are AI.

He says that transformer-based and closed-source models are AI.


Let it go, man. I think that you are wilfully misinterpreting what both he and I are saying and being obtuse to boot. Whatever the case, we're clearly not going to convince one another.


Most of this is very simply telling you what he's said. I'd recommend you read the link you pointed me to, because it contains exactly what I've told you is in there. Closed-source transformer models, decision trees, and YOLO models have knowledge and are AI; ChatGPT, he argues, does not. That's not an argument I'm trying to convince you of; I'm just telling you exactly what he's written.



