Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

Is there any evidence we have the technical ability to put such ambiguous guardrails on LLMs?


No. We can add guardrails, but nothing that has been proven to work reliably.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: