For some given task, perhaps; but the AI only consumes power while actively working. The human has to run 24/7 and also expends energy on useless organs like kidneys, gonads, hopes, and dreams.
Forms of retirement that don't have the force of law can be adjusted on the fly to match the available resources. When the government mandates that each elderly person be paid a fixed amount of resources yearly, it's possible for the young people producing those resources to be left with literally zero surplus. That can't happen under systems where the transfers are voluntary.
Of course talent plus effort beats either alone, but it seems strange to argue that there will be zero effect on the value of having just one of them. AI may not straightforwardly raise the talented lazy person above the hard-working grinder, but it seems likely to shift their relative positions in favor of talent.
What does it even mean to say "having just one of them"? I think the false dichotomy torpedoes any ability to predict the effect of new tools at all. There's already a world of difference between the janitor who never managed to learn to read but does his best to show up and clean the place as well as he can every day, and the middle-manager engineer with population-median math or engineering abilities but a 12-hour-day work ethic that has let him climb the ladder a bit. And the effect of the AI tools we're considering here is going to be MUCH larger on one than the other: it's gonna be worse for the smarter one, until the AIs are shoveling crap around with human-level dexterity. (Who knows, maybe that's next.)
Anyone you'd interact with in a job in an HN-adjacent field has already cleared several bars of "not actually that lazy in the big picture": they didn't flunk out of high school or college, and they didn't quit their office job to bum around... so at that point there's no black-and-white "it'll help you or hurt you" shortcut classification.
EDIT: here's a scenario where it's already harder to be lazy as a software engineer, without even invoking the "super AI" future: in the recent past, if you were quicker than your coworkers and lazy, you could fuck around for 3 hours, then knock something out in 1 hour and look just as productive as many of your coworkers, or more so. If everyone knows, even your boss, that it should actually only take 45 minutes of prompting and then reviewing code from the model, and they can trivially check that in the background themselves if they get suspicious, then you might be in trouble.
The "smart but lazy" person in an agentic AI workplace is the dude orchestrating a dozen models with a virtual scrum master. It's much more possible today to get a 40h work week's worth of work done in 4h than it ever has been before, because the gains that are possible with complex AI workflows are so massive, particularly if you craft workflows that match problems specifically. And because it's absolutely insane to do such a thing with modern tools and the lack of abstractions available to you, even insaner to expect people to do it, so you can't set proficiency targets on that rubric. You might have to actually work 40h at the onset, but I definitely work with someone who is considered a super hero for the amount of work they do, but I know they dick around and experiment all day every day, because all they do is churn Cursor credits into PRs through a series of insane agents. They're probably going to get a bonus for delivering an impossible project on time, as a matter of fact.
> Anyone you'd interact with in a job in an HN-adjacent field has already cleared several bars of "not actually that lazy in the big picture": they didn't flunk out of high school or college, and they didn't quit their office job to bum around... so at that point there's no black-and-white "it'll help you or hurt you" shortcut classification.
I'm clearly not talking about the _truly_ lazy people. I'm talking about classifications within the group of already successful creative/STEM professionals, the ones who are going to be maximally impacted by AI. Obviously you're not as lazy as you could be if you manage to have a 20-year software career, but that doesn't mean you aren't fundamentally lazy with a terrible work ethic; it just means you have a certain minimum standard you manage to hold yourself to. That's the person I'm talking about: the person who works twelve hours a day isn't going to be able to meaningfully distinguish themselves anymore. The quantity of their work becomes immaterial, so what matters is the quality, and the smarter, lazier dude is going to have better AI output because he has smarter inputs.
Humans, even pointy-haired ones, do have slightly larger brains than dogs precisely for the purpose of being able to form associations over longer timespans. That's a big part of what intelligence is.
To be fair, I could name managers who would do less damage if they were replaced by a dog. Managing upwards would be greatly eased with the application of dog treats.
I think the analogy with dogs is flawed. Association is merely correlative, not causal. It is irrational per se, because it does not concern itself with causes. And the more time passes, the more options there are with which to form associations. So it's not a question of brain size or anything that this might be standing in for.
Human beings can, with varying degrees of success, reflect on their own behavior. They can recognize how certain behaviors on their part might encourage certain responses among employees. More importantly, they can recognize which behaviors on their part are simply not good in the first place and learn to behave as they should. It's not a question of lining up dominoes so they fall a certain way. First and foremost, it's a question of justice and benevolence. Be just and be benevolent, not a domino-aligning psychopath.
They did, but not through any virtue or skill on her part. That was just the plain luck of Bitcoin happening to go way, way up before the bankruptcy estate paid out with what it had. The court that sentenced her couldn't have known that would happen, and even if it somehow had, it should not have taken that into account. You wouldn't advocate a lighter sentence for murder if the bullet had miraculously been struck by lightning on its way to the target's skull and thereby missed.
Doesn't Dune take place in ~22k AD? Wiki says "the Butlerian Jihad is a conflict taking place over 11,000 years in the future (and over 10,000 years before the events of Dune)." [0]
How many of them believe that copyright infringement and job loss are "major harms"? How many believe that data centers put a Great Lake through their cooling system daily? Polls like this are meaningless.
If it's a question of fairness, the guy you're replying to has a point. If it's a question of civilization... well, toll roads are kind of inextricable from civilized society.