
Isn't this more likely an example of LLM-crappification?

As in, the reason it isn't documented is that no one has a clue...



In that case, it should at least be documented that the process is driven by an LLM and hence it isn't known exactly how it makes decisions... but I guess that wouldn't look good to most users, so they just leave it out.


Writing that you don't have a clue why the software works (or doesn't) isn't very good marketing :)

Welcome to the stupid singularity.



