Hacker News

One of my saved memories tells it to always give shorter, chat-like, to-the-point answers, and to give further detail only if prompted.


I've read from several supposed AI prompt-masters that this actually reduces output quality. I can't speak to the validity of those claims, though.


Forcing shorter answers will definitely reduce their quality. Every token an LLM generates is like a little bit of extra thinking time. Sometimes it needs to work up to an answer. If you end a response too quickly, such as by demanding one-word answers, it's much more likely to produce hallucinations.
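The "every token is extra thinking time" point can be sketched with a toy cost model (this is an illustration, not a real LLM): autoregressive decoding does one forward pass per generated token, so capping the response length directly caps how much compute the model spends before committing to an answer. The function names and the linear per-step cost here are assumptions for illustration only.

```python
# Toy model: each generated token costs one forward pass, and (roughly)
# attention cost grows with how much context the pass has to read.

def forward_pass_cost(context_len):
    # Simplified: cost proportional to current context length.
    return context_len

def total_compute(prompt_len, max_new_tokens):
    # Sum the cost of one forward pass per new token generated.
    return sum(forward_pass_cost(prompt_len + i) for i in range(max_new_tokens))

# Demanding a terse reply (few new tokens) vs. letting the model
# "work up to" an answer over many tokens:
terse = total_compute(prompt_len=50, max_new_tokens=5)
verbose = total_compute(prompt_len=50, max_new_tokens=100)
print(terse, verbose)  # the verbose run spends far more compute
```

The gap is the intuition behind chain-of-thought prompting: the extra tokens aren't just verbosity, they're additional passes through the model before the final answer token is emitted.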


Is this proven?


I know Andrej Karpathy mentions it in his YouTube series, so there's a good chance it's true.


It's certainly true anecdotally. I've seen it happen plenty of times myself, and I've seen it reported plenty of times.




