What a ludicrous reply, to suggest it should be "socially unacceptable" to believe the Paperclip Maximizer thought experiment might reveal a scenario that is bad for humans overall.
Of course it would be bad for humanity. “Short humanity and long paperclips”, in my reading, is pro-extinctionism. The specter of Daniel Faggella haunts this site and this industry.
I can only speculate, as I didn't write that post, but by my reading they were just stating their belief that AI is likely to lead to human extinction, not that they were happy about that outcome.