I'm at the point now where I simply stop reading an article once it has too many red flags, something that's happening increasingly often.
I don't enjoy reading AI slop, but it feels worse when users of AI tools choose not to disclose that the authors of these articles were Claude/ChatGPT/etc. Rather than being honest upfront, they hide this fact.
I added some sentences at the top so it won't waste people's time:
> Some parts of this article were refined with help from LLMs to improve clarity and technical accuracy. These are just personal notes, but I would really appreciate feedback: feel free to share your thoughts, open an issue, or send a pull request!
>
> If you prefer to read only fully human-written articles, feel free to skip this one.
> Some parts of this article were refined with help from LLMs to improve clarity and technical accuracy
Perhaps you should stick to writing about things you can cover with clarity and accuracy yourself, instead of relying on an LLM to do it for you. Alternatively, properly cite and highlight from the outset which portions you used AI for; failing to do so reads at best as lazy slop and more often as intentional duplicity.