Hacker News
[flagged] ChatGPT Is Still a Bullshit Machine (gizmodo.com)
31 points by 01-_- 4 months ago | hide | past | favorite | 20 comments


We already know the system is really bad at spelling. I have Claude configured to periodically remind me “By the way, I think there are ** n's in 'banana'”, so I don't forget what I am dealing with. It has never gotten this right.

But that doesn't mean that it is not extremely useful. It only means I shouldn't ask it to spell stuff.

If a human is unable to count the n's in 'banana' we expect them to be barely functional. Articles like this one try to draw the same inference about the LLM: it can't count 'n's, so it must not be able to do anything else either.

But it's a bad argument, and I'm tired of hearing it.


It's not so much that LLMs are bad at counting letters in words as it is that humans are good at it.

LLMs are also bad at many things that humans don't notice immediately.

That is a problem because it leads humans to trust LLMs with tasks at which LLMs currently are bad, such as picking stocks, screening job applicants, providing life advice...


The particular problem (and one that AI firms' marketing has actively leveraged and made worse[0]) is that the correlations between capacities that humans are used to from observing other humans do not hold for LLMs. So assumptions of the form "an LLM that can do X should also be able to do Y, because a human who can do X can also do Y" do not hold even as loose rules of thumb.

[0] e.g., by promoting AIs as having capacities equivalent to those of humans with various education levels, because they could pass tests that were part of the standards for, and correlate for humans with other abilities of, people with that educational background.


ChatGPT 5 just 'thought for a couple of seconds' and then output '2.'. Seems like we have to update our expectations as the technology improves.


You can tell gpt to write a program to count the n's in 'banana', and then run the program to find the answer, and it can do that.
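The kind of throwaway script this describes might look like the following (a minimal sketch; the actual code an LLM emits will vary):

```python
# Count occurrences of a letter in a word -- the sort of script an LLM
# can write and then execute via a code tool, sidestepping its
# token-level blindness to individual letters.
word = "banana"
letter = "n"
count = word.count(letter)
print(f"There are {count} {letter}'s in {word!r}")
# prints: There are 2 n's in 'banana'
```

The point is that the model doesn't need to count the letters itself; it only needs to produce correct code and hand the counting off to an interpreter.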


I don't disagree with your first point, that it's still extremely useful despite its flaws. I absolutely use it to build project outlines, write code snippets, etc.

Your overall conclusion though seems a little free of context. Average people (i.e. my mom googling something) absolutely do not have the wherewithal to keep track of the various pros and cons of the underlying system that generates the magical giant blue box at the top of their search that has all the answers. They are being deliberately duped by the salesmen-in-chief of these giant companies, as are all of their investors.


It's a reminder that LLMs are not reasoning machines. LLMs are very useful in many cases, but one should not treat them as if they can reason.


I can't understand why all the AI services are allowed to get away with modes such as "deep thinking" and "deep research".

OpenAI even claims "reasoning" is available.

> Built-in agents – deep research, ChatGPT agent, and Codex can reason across your documents, tools, and codebases to save you hours

https://openai.com/chatgpt/pricing/


Why are folks upvoting an article that's the equivalent of supermarket tabloid junk?

You just like the title?


This is one of the first (and nicest) editorials in a long line of "ChatGPT never delivered on its promises" pieces you will start seeing soon.


Yep.

I had been waiting for GPT-5 to hit my account and kept asking it which model it was; it was 4o until this morning.

Then this morning it said it was GPT-5 and asked whether I would like to code and design a stress test for it to compete against 4o. It kept insisting this was something I should do even though I didn't ask, then kept skirting around it when I told it to do it, before it realised it couldn't.


I don't think comparing an LLM to a calculator is necessarily apt. If anything, I'd say you can use these LLMs as a reflection of you. If you think Alabama has an R in it, then it's not the model's fault it tries to find an answer that matches your persistence, especially since I'm sure "alabamer" exists somewhere in its training set.


I'd personally liken it to expecting planes to fly like birds do.


Perhaps this is a good analogy, in which case I'd prefer they stop advertising it as a better/faster/cheaper bird. Speaking as a metaphorical bird, it clearly cannot do well what I do. It does do it poorly at a remarkable speed though.

So what is the software development task that this plane excels at? Other than bullshitting one's manager.


> Then it's not maths fault it tries to find an answer that matches your persistence, especially since I'm sure somewhere in its training set alabamer exists.

It is not supposed to find an answer that matches my persistence; it's supposed to tell the truth or admit that it does not know. And even if there is an "alabamer" in the training set, that is either something else, not a US state, or a misspelling; in neither case should it end up on the list.


No, it is supposed to find an answer that matches your persistence. That's what it does, and understanding that is the key to understanding its strengths and weaknesses. Otherwise you may just keep drinking the investors' kool-aid and pretend that it's a tool that's supposed to tell the truth. That's not what it does, that's not how it works, and it's a safe bet that's not how it's going to work in the foreseeable future.


No, it is supposed to tell the truth and that is what is advertised, matching your persistence is what it sometimes actually does. But people are using it because it sometimes tells the truth, not because it sometimes matches your persistence.


Then they're just confused by false marketing. LLMs predict plausible text, that's all they do. Anything else is a side effect.


When the marketing tells us it's like talking to a PhD in the relevant field on any topic, it's worth pointing out that's only true if the PhD in question has recently suffered severe head trauma.


It's unusual for such outlets to take jabs at prominent companies; normally they are much more lenient. Interesting.



