Hacker News

What a ludicrous reply, suggesting it should be "socially unacceptable" to believe the Paperclip Maximizer thought experiment might reveal a scenario that is bad for humans overall.


Of course it would be bad for humanity. “Short humanity and long paperclips”, in my reading, is pro-extinctionism. The specter of Daniel Faggella haunts this site and this industry.


> pro-extinctionism

I can only speculate, as I didn't write that post, but by my reading they were just stating their belief that AI is likely to lead to human extinction, not that they would welcome that outcome.



