Hacker News

The thing is, good documentation captures the difference between what you expect and what something actually is. LLMs, as they exist now, are mass-producers of cliché. You can sense it in their prose, or in those inverted-puzzle evaluations [0]: they are statistical models picking the most likely continuation, without enough depth to subvert the average. Ask one to describe a piece of code and you get only a rephrasing of what the code already says.

- [0] https://github.com/cpldcpu/MisguidedAttention
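The "most likely option" behavior can be sketched as greedy decoding over a toy next-token distribution. This is a minimal illustration, not how any particular model is implemented; the tokens and scores below are made up:

```python
import math

def softmax(logits):
    # Turn raw scores into a probability distribution.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def greedy_pick(logits):
    # Greedy decoding: always emit the single most probable token,
    # which is exactly "picking the most likely option".
    probs = softmax(logits)
    return max(probs, key=probs.get)

# Hypothetical scores for the next token after "The code returns the":
logits = {"result": 3.2, "value": 2.9, "answer": 2.1, "moon": -1.0}
print(greedy_pick(logits))  # the statistically average continuation wins
```

Sampling with temperature softens this, but the mass still sits on the cliché continuations, so the average keeps winning.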


