
It’s hard to see this article as being written in good faith. We’re at the point where we’re responding to low-quality LLM outputs with low-quality LLM retorts and voting both to the front page because of feelings.


I'm at the point now where I simply stop reading an article once it has too many red flags, something that is happening increasingly often.

I don't enjoy reading AI slop, but it feels worse when users of AI tools choose not to disclose that these articles were authored by Claude/ChatGPT/etc. Rather than being honest upfront, they hide this fact.


I added some sentences at the top, so it won't waste people's time:

Some parts of this article were refined with help from LLMs to improve clarity and technical accuracy. These are just personal notes, but I would really appreciate feedback: feel free to share your thoughts, open an issue, or send a pull request!

If you prefer to read only fully human-written articles, feel free to skip this one.


It clearly wasn't "refined" using LLMs when it contained commands that plainly don't work. Don't lie.


We've flagged it. Please don't waste our time in the future.


> but I would really appreciate feedback

very well

> Some parts of this article were refined with help from LLMs to improve clarity and technical accuracy

Perhaps you should stick to writing about things you can write with clarity and accuracy yourself instead of relying on an LLM to do it for you. Alternatively, properly cite and highlight from the outset which portions you used AI on/for; failure to do so reads at best as lazy slop and more often as intentional duplicity.


The entire GitHub organization looks to be AI slop books... why even do this?


As a fan and user of Zig I found the original post embarrassing, but chalked it up to the enthusiasm of a new user discovering the joy of something that clicked for them.

Taking offense to that enthusiasm and generating this weirdly defensive and uninformed take is something else, though.


Edit: Apologies, it looks like I misunderstood. Original response left below for posterity.

It's not "weirdly defensive and uninformed" to question the value of posting a bunch of inaccurate LLM slop, especially without any disclosures.

If you're pro-AI, you should be against this too, before these errors get used as training data.


I think you are misunderstanding; they are calling TFA a defensive and uninformed reply to the pro-Zig post from yesterday.


Ohhhh, my apologies, then.


I can see now how you would have read it that way, but yes: I meant this article is defensive for no reason while being uninformed.



