
That might be smarter than we initially give it credit for. By leaving the “safer” (read: harder to get wrong) things to their own models and the more “creative” stuff to an explicit external model, they can shift blame: “Hey, we didn’t make up that information, we explicitly said that was ChatGPT.” I don’t think they’ll say it outright like that. Because they won’t have to.
