
> What should kids be aiming for according to you? Computer Scientist? Biologist? Finance? Construction?

Can you sit down with an unfamiliar domain and develop enough genuine curiosity to get good at it, without a syllabus or a credential dangling in front of you?

The kids who'll do well in a world where the field-to-security mapping keeps shifting are the ones who can self-direct — not the ones who picked the right field in 2026.

Although full disclosure I'm short humans and very long paperclips.



> The kids who'll do well in a world where the field-to-security mapping keeps shifting are the ones who can self-direct — not the ones who picked the right field in 2026.

Agreed that if someone can self-direct and is capable, they’ll do better. Assuming two people who are similar in that regard, what professions might benefit from AI rather than be hurt by it?


> Can you sit down with an unfamiliar domain and develop enough genuine curiosity to get good at it, without a syllabus or a credential dangling in front of you?

Do I have faith that I'll be compensated according to my developed ability?

Looking broadly at the recent past, the correct answer seems to be "no".


> full disclosure I'm short humans and very long paperclips.

What does that mean in practice? Are there specific stock market bets you've made because of that world view?


It can be read in two different ways: massive techno-optimism, or massive techno-pessimism that accepts a reality where humans are increasingly less valuable.

In the first case, buy AI stocks. In the second case, build a bunker in the wilderness.


I don't think they're actually talking about the stock market; I read them as saying AI will destroy everything (a reference to Nick Bostrom's paperclip maximizer thought experiment).


> Although full disclosure I'm short humans and very long paperclips.

What a ludicrous world we live in where this is a socially acceptable view to hold.


What a ludicrous reply, to suggest it should be "socially unacceptable" to believe the Paperclip Maximizer thought experiment might reveal a scenario that is bad for humans overall.


Of course it would be bad for humanity. “Short humanity and long paperclips”, in my reading, is pro-extinctionism. The specter of Daniel Faggella haunts this site and this industry.


> pro-extinctionism

I can only speculate as I didn't write that post, but by my reading they were just stating their belief that AI is likely to lead to human extinction, not that they were happy about that outcome.


Reality doesn’t give a shit about your beliefs, as the saying goes.


If your model of reality includes imminent human extinction, you have some form of imperative to do something about that other than “ZOMG Claude Code”. YMMV


Are you saying that the comment _supported_ human extinction? I think they're just saying they think it's a likely outcome. It doesn't appear to be an endorsement.

Personally, I think there's a worryingly high chance that we or our kids will live in a dead, desertified, apocalyptic hellscape of a planet after we hit 5+ degrees of warming, but saying that doesn't mean I _want_ it to happen. In fact, I would prefer it not to!


Pro-extinctionism (in favor of some “greater consciousness” that spreads across the stars) is a nontrivial minority view among AI people, including some “AI safety” leaders.

One of the reasons that I’m slightly less worried about a climate apocalypse is that there isn’t an equivalent group of people that sees the “inevitability” and concludes that it must be a moral good for the planet to warm 5 degrees. I’d argue that multiple degrees of warming is more inevitable than paperclips, but there’s a serious global effort to mitigate and avoid it anyway!


Yeah it’s a big issue, I just don’t read the comment you’re objecting to as supporting that.

I mean I might think oil prices will go up but still choose not to buy oil stocks for moral reasons.


Given the OP’s general disposition towards AI in other comments, I’m not convinced. But I’m happy to admit that, absent proof, I was being uncharitable. If so, my bad.



