Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Wow! That's actually kind of disturbing.

LLMs have a real problem with not treating context differently from instructions. Because they intermingle the two they will always be vulnerable to this in some form.

 help



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: