
I think the analogy is directionally good, but it short-changes the abstract and recursive nature of software.

We were already writing code that was automating not only manual work but also simpler programs. LLMs essentially just move us one more (large) hop up the abstraction ladder. And yes I get that it’s a different type of hop (non-deterministic, extremely leaky, etc), but it’s still a hop.

So if the only thing you want to do is manually write code in the traditional way (perhaps with vim instead of IntelliJ) then yeah I think you’re cooked. On the other hand, if you are willing to work with LLM-assisted tooling and learn how to compensate for its shortcomings then I think you’ll have a bright future.


It'll never happen because it shines a light on uncomfortable facts that would risk far too much cognitive dissonance across the political spectrum. Please keep the discourse to identity politics, culture wars, the Epstein files, and large-scale, unprovoked acts of international warfare; those will all be much easier for us to talk about as a nation than what we should do about housing prices.

Yeah you forgot the aliens ;)

The graph does a really poor job supporting the conclusion, most obviously because it only goes back to 2016, the peak of boom times; it doesn't go anywhere near 2008, so why does the caption talk about that? This same graph going back to 1990 would be super eye-opening.

The other thing is that it's showing the first derivative, not absolute numbers, which is a very questionable way to derive "worst employment situation" in a field that has been on a world-changing boom over the last 50 years.


There's a zoomed-out version immediately below that tweet.

https://x.com/JosephPolitano/status/2029916369056079975


Ah, I can't see that because I'm not logged into Twitter. Doesn't quite pack the same punch, does it?

https://imgur.com/a/kB9CAKF via Imgur (though you get resizing; it's bigger on my screen)

The question I have about this data, though, is that it's showing the derivative: the change in hiring each year.

The dot-com crash is clear and very visible in there. The global financial crisis is also a dip in there (I'm saving this for when people compare the number of jobs lost to the dot-com crash).

From 2010 to 2020 there was fairly steady linear growth in employment. There was the dip in 2020, but 2020 to 2024 had a much higher peak. My "I want to know about the data" question is: is the area above +150k jobs from 2020 to 2024 greater than the area below 0 from 2024 to 2026?
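
That comparison is just integrating the chart on both sides. A minimal sketch of the arithmetic in Python, with made-up annual figures standing in for values read off the chart (every number below is a hypothetical placeholder, not data):

    # Annual net change in employment, in thousands of jobs.
    # Invented for illustration -- read the real values off the chart.
    net_hiring = {
        2020: -50, 2021: 300, 2022: 250, 2023: 180,
        2024: -40, 2025: -120,
    }

    # Area above +150k/yr during the 2020-2024 boom...
    boom_excess = sum(max(v - 150, 0)
                      for y, v in net_hiring.items() if 2020 <= y < 2024)

    # ...versus area below zero from 2024 to 2026.
    bust_deficit = sum(-min(v, 0)
                       for y, v in net_hiring.items() if 2024 <= y < 2026)

    print(f"hiring above +150k/yr, 2020-2024: {boom_excess}k jobs")
    print(f"net losses, 2024-2026:            {bust_deficit}k jobs")

If the first number comes out bigger, the recent losses are still just giving back the boom-era overshoot.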


The justification of better UX seems reasonable regardless of politics. I'd prefer not to have to log in on any HN link, and I also can't say I want to optimize for link health as this is super topical and I will never want to look at it again after this discussion is over.

I feel like this study is very naive about how corporate status and power work. Consider this part:

> Employees who are more likely to fall for corporate bullshit may help elevate the types of dysfunctional leaders who are more likely to use it

The rank and file don't elevate leaders; that's decided by higher-ups, and the higher you go, the more they care about actual non-bullshit results. Bullshit thrives because higher-level business strategy is actually hard and ambiguous, so there's a continuum: you're expected to at least say credible things, but they're couched in bullshit terminology to broaden the range of success you can claim and to leave room and plausible deniability for failures. Strong leaders are keenly aware of this nuance, and therefore leaders are judged on reputation and outcomes over time, because any given thing they say may be wrong, but the track record is undeniable. This is why you never hear a bad word about leaders while they're there; they're just fired (or more likely "resign") one day, seemingly out of the blue.

What this article misses is that to survive in a corporate environment, everyone needs to put up with and nod along to bullshit. Most of the time, whether it's right or wrong, and the level of bullshit, doesn't really matter to most employees; they're just incentivized to play along and not express negativity. Within the rank and file, obviously some are more susceptible to bullshit than others, but I don't think this study necessarily gets at that, as a lot of people will act agreeable just to survive in corporate life, and their disposition will be largely independent of their true understanding and feelings about whatever bullshit they're presented with day to day.


There’s also a compounding effect. Even though they tend to be a bit hand-wavy, you can use the word “synergy” or “paradigm” in a sentence and still have it confer some kind of meaning. However, as soon as you utter the phrase “synergistic paradigm” you are obviously and completely full of shit.

Also, a lot of corporate jargon does have specific connotations that let skilled communicators send a message that is seemingly polite but actually says something controversial, picked up only by those in the room savvy enough to understand. In skilled hands it’s very useful; in unskilled hands it’s complete gibberish. In many ways that’s a feature, as the clueless cargo-culters quickly out themselves, and then the smart leaders can use that knowledge to route around them or deploy them in non-harmful ways. All without any overt confrontation ever taking place.


I really don't get this point of view at all. I acknowledge that two years into my quarter century of experience, most of what I knew was easily replaceable by the AI of today. After two decades of experience, however, syntax and specific algorithm and language knowledge was perhaps 10% of my value, nowhere near the vast majority.

The idea that low-paid LLM wranglers are going to push out the experienced engineers just doesn't wash. What I think is much more likely to happen is the number of software engineers greatly reduces, but the remaining ones actually get paid more, because writing code is no longer the long pole, and having fewer minds designing the system at a high level will allow for more cohesive higher-level design, and less focus on local artisanal code quality.

To be honest, AI is just the catalyst and excuse for unwinding the overhiring that happened due to the gold rush of the last 20 years: the internet and smartphone revolutions, zero interest rates, and the pandemic effect.


> language knowledge was perhaps 10% of my value, nowhere near the vast majority.

Do you not see LLMs catching up with your experience fast?

You might not lose your job, but you'll definitely have to take a pay cut


> What I think is much more likely to happen is the number of software engineers greatly reduces, but the remaining ones actually get paid more.

You realize that this is contradictory, right? If the number of competitors remains the same, yet there are far fewer jobs, it's a buyer's market: companies have to offer very little to find someone desperate enough.

> It will allow for more cohesive higher-level design, and less focus on local artisanal code quality.

I don't buy this; LLM code is extremely bloated. It never reuses abstractions or comes up with novel designs to simplify systems. It can't say no, it just keeps bolting on code. In a very, very abstract sense you might be right, but that's outside the realm of engineering; that's product design.


You raise some good points about the economics, that's where I feel the least confident, but let me explain my reasoning.

Software has eaten the world, and thus the value of maintaining software has never been higher. Engineers are the people who understand how software works. Therefore, unless we move away from software, the value of software engineering remains high.

AI does not reduce software; it increases the amount of software, makes messier software, and generally increases the surface area of what needs to be maintained. I could be wrong, but as impressive as LLMs' language and code processing capabilities are, I believe there is a huge chasm between the human intent of systems and their implementation, one that will likely never be crossed and that only human engineers can actually bridge. And even if I'm wrong, there's another headwind: as Simon Willison has pointed out, you can't hold an LLM accountable, and therefore corporate leaders are very unlikely to put AI in any position of power, because all the experience and levers they have for control are based on millennia of evolution and a shared understanding of human experience; in short, they want a throat to choke.

The other factor is that while AI can clearly replace rote coding today, I think the demos oversell the utility of that software. Sure, it's fine to get started, but you quickly paint yourself into a corner if you attempt to run a business on that code over time, where UX cohesion, operational stability, and data integrity are paramount and not something that can be solved without a lot of knowledge and guardrails.

So net of all this, where I think we land is that a lot of jobs based purely on knowledge of one slow-changing system and specific code syntax will go away, but there will be engineers who maintain all the same code; they'll just cover more scope with LLM-assisted tools. You put your finger on something: I do believe this moves engineering closer to product design, but I still think there's a huge amount on the engineering side that LLMs won't be able to do any time soon (for both the technical and social reasons stated above), and ultimately I don't see the boundary the same way you do; as software engineers we have always had to justify our systems by their real-world interaction.


> Software is everywhere and thus the value of maintaining software and the value of software engineering remains high.

This is an unfinished argument. What if we get coding agents to maintain software? What if frequent rewriting becomes cheap enough? Something that costs a tenth or a hundredth of your salary doesn't have to be good to make for a good business decision. Why do you think every native application has been replaced by slop made of 10 layers of JS frameworks on top of Electron? Nothing matters as long as the product is cheap and fast to pump out, barely works on modern hardware, and makes dough.

> AI does not reduce software, it increases the amount of software.

There's not infinite demand for software. If AI inference costs take 50% of the prior payroll expenses while making a company twice as efficient, that means we need four times as much demand for software engineering at the same salary for everyone to keep their job. What new or improved subscription, app, website, device, or other software product does the world need right now? 99.9% of people use the same 5 apps. Most of their free time, attention, and disposable income has already been captured by trash that is unbeatable due to network effects. Are we all going to sell shitty LLM frontends to businesses until they notice they could have done the same thing themselves? There might be an explosion in new software, but no one will be there to care about using it.

> I believe there is a huge chasm that will likely never be crossed between the human intent of systems and their implementation that only human engineers can actually bridge.

Maybe, or the AI might just be missing context. Think of all the unwritten culture, practices, and conversations the LLM hasn't been made aware of.

> In short they want a throat to choke.

You're responsible for those under you anyway, so this doesn't help. Banking on those in charge being irrational forever, in a way that is bad for business and without them ever noticing, is a bad gamble.

> The other factor is that while AI can clearly replace rote coding today [...], X is not something that can be solved for without a lot of knowledge and guardrails.

I'm talking about the world the AI-maximalists predict is rapidly approaching, not where we are today. None of that knowledge and none of those guardrails are hard to grasp intellectually, compared to advanced mathematics for example. Put your institutional knowledge in a .md file and add another agent that enforces guardrails in a loop. The only way out I see is a situation where there are complex patterns that we intuitively grasp, but can't articulate. Patterns that somehow span too much data or don't have enough examples for LLMs to pick up on.
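
For what it's worth, that reviewer-in-a-loop setup is easy to sketch, which is exactly the point. Everything below is hypothetical: generate_patch stands in for a coding-agent call, review_patch for a second agent prompted with the guardrails, and GUARDRAILS.md for the institutional-knowledge file:

    from pathlib import Path

    # Hypothetical stand-in: in practice this would be an LLM call.
    def generate_patch(task: str, guardrails: str, feedback: str) -> str:
        return f"patch for {task} (feedback addressed: {feedback or 'none'})"

    # Toy reviewer: a real one would be a second LLM prompted with the rules.
    # Here each "- ..." bullet in the markdown file names a banned pattern.
    def review_patch(patch: str, guardrails: str) -> tuple[bool, str]:
        rules = [line[2:] for line in guardrails.splitlines()
                 if line.startswith("- ")]
        for rule in rules:
            if rule.lower() in patch.lower():
                return False, f"violates guardrail: {rule}"
        return True, ""

    def guarded_loop(task: str, max_rounds: int = 5) -> str | None:
        guardrails = Path("GUARDRAILS.md").read_text()  # institutional knowledge
        feedback = ""
        for _ in range(max_rounds):
            patch = generate_patch(task, guardrails, feedback)
            ok, feedback = review_patch(patch, guardrails)
            if ok:
                return patch  # reviewer signed off
        return None  # escalate to a human after repeated failures

The open question isn't whether you can build this loop; it's whether the reviewer actually catches the violations that matter.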

> There will be engineers who maintain all the same code, they'll just cover more scope with LLM assisted tools.

So fewer jobs with lesser qualifications?

> Ultimately I don't see the boundary the same way you do, as software engineers we have always had to justify our systems by their real world interaction.

I've seen the way engineers design products, and I like products designed by engineers, but no layperson does. Laypeople don't want power, privacy, or agency. They don't care about how things work, and they lie to themselves and others about what they really want. They don't want a native desktop app that streams high-quality audio from a self-hosted collection; they want a subscription that autoplays algorithmic slop through a React Native app on their iPhone. Do you really think you're better at appealing to/fleecing customers than people with actual UX, marketing, and behavioral psychology experience? This example only applies to mass-market software, but I'm sure it's not much different in other fields. Engineers keep thinking they could do everyone else's job, but they don't do so well in practice.


I'm sort of shocked at how little of my argument seemed to land with you in any way. I'm wondering how many cycles of software hype you've been through. Were you here for the PC revolution, the dot-com era, smartphone mass adoption?

There are a lot of what-ifs and worst-case scenarios in your reply that I simply don't find likely. I am not drinking the Kool-Aid from the AI maximalists or the doomers. I could be wrong, of course; no one can predict the future. But to me, the very real, novel, and broad utility of LLMs that we are just learning to harness, combined with the investment outlook, is leading to a mania that has people overestimating where things will land when the dust settles. If I'm wrong then I guess I'll join the disenfranchised masses picking up pitchforks, but I'm not going to waste time worrying about that until I see more evidence that it's actually going that badly.

So far what I see is that software engineers are the ones getting the most actual utility out of AI tooling. The reason is that it still requires precision of thought and specificity to get anything sustainable out of AI coding tools. Note this doesn't mean that engineers can design better apps than proper designers; rather, my point is that designers and other disciplines still cannot go much further than prototypes. They still need engineers to write the prompts, test the output, maintain the system, and debug things when they go wrong. I have worked long enough with large cross-functional teams to know that the vast majority of folks in non-engineering functions simply cannot get enough specificity and clarity in their requests to allow an LLM to turn them into a working system that holds up over time. They will hit a wall very quickly, where new features add bugs faster than they improve things, and the whole thing collapses under its own weight like a mansion of popsicle sticks.

And by the way, I don't consider AI-assisted coding to require less qualification than regular coding. Sure, you don't need to know as much syntax or as many algorithms, but you absolutely need data modeling, performance, reliability, debugging, consistency, and migration knowledge in order to use AI to contribute to any software that powers a real business. And yeah, you might need to develop your product and business sensibilities, but to me that's what's been happening throughout the history of computing. Wiring up ENIAC certainly required qualifications that were not needed for assembly programming, which in turn required certain things that C programmers did not need, and so forth; harnessing the increasing compute power and complexity required new qualifications. I don't think AI will ultimately be that different: it will change the way we work, but it doesn't replace what senior engineers do.


> What I think is much more likely to happen is the number of software engineers greatly reduces

So you just believe you'll be one of the ones left behind?

Best of luck to you


As much as I recognize that a truly talented product manager is worth their weight in gold, I'd say the average engineer would be much more capable of learning to be an average PM than vice versa.

A PM vibe coding a prototype for demonstration purposes? It might be a better use of a designer's or engineer's time, but okay, I could see it being valuable. A PM vibe coding something to ship to production? Your title is now engineer and you are responsible for your change; otherwise this is a direct path to destroying the quality of your product and the integrity of its data.


> I'd say the average engineer would be much more capable of learning to be an average PM than vice versa.

It's a completely different skillset. Practice shows that most engineers simply do not want to be PMs, or find that out only after making the change and regretting it.


I agree, but my point stands: even if they don’t want to, an engineer at least has the precision of thought to specify how a product could work. Many PMs simply don’t have this, so asking them to become vibe coders is a hopeless waste of time.

I agree, the thought of some PMs building an actual system is absurd. They do not understand the details necessary.

But quite a few developers I know would also be absolutely hopeless as PMs. No people skills, no interest in strategy or the long-term view, no desire to hear about end users.

PM = project manager in my world


I imagine you’re saying that as a software engineer :). As a manager of both software engineers and product managers, I think this view is a bit of a stretch.

Some software engineers would make good product managers, some product managers would make good software engineers, and the majority of both are best suited to their current job.


You're correct that I've spent most of my time as a software engineer or an engineering manager. I actually started my career as a web designer, though. I've also been effectively an active PM in roles where there was no formal product management function. I've been a co-founder of a two-person operation where I did all the product and tech, and my co-founder did all the biz and operations. At another startup I co-founded I was "CTO" but was effectively the number 2 in a 100-person company and had veto power over our first PM hire. I've also been part of a larger scaleup that was acquired in a scenario that left a number of folks orphaned, so for a while I was also managing a handful of designers, program managers, and an IT manager.

So yeah, I understand your point, and if I ran a cross-functional team like that I would hopefully hire well enough that I felt the same. So maybe to restate my thinking in a way that may be slightly less controversial: AI is eating a lot of the low-level mechanistic work that used to define being a software engineer; however, I never believed that was where the value was for engineers anyway. While some PMs are incredible and would no doubt be able to get good at vibe coding, the majority in my 25 years of experience do not have the patience to get to a precision of requirements, which is absolutely still required to get anything out of AI.


>I'd say the average engineer would be much more capable of learning to be an average PM than vice versa.

Software engineers love to imagine that they have the only job you can't train yourself to do quickly, all evidence to the contrary.


I don't know how that relates to what I said, and it's certainly not something I believe. Personally I don't think any form of knowledge work can be learned quickly in the 21st century, otherwise it would have been automated away already (well before the current wave of AI that further accelerates the trend). That goes for product management, design, operations, and strategy just as much as for software engineering, data science, and any other technical discipline.

It relates directly to what you said. You said you thought it would be pretty easy to teach an engineer to be a project manager and pretty hard to teach a project manager to be an engineer. That is exactly the kind of directionality I pointed out.

You're putting words in my mouth. Also, I was talking about Product Managers; there's a world of difference between that role and Project Manager (or Program Manager).

People absolutely do get promoted for simplicity. I've also seen promos rejected on over-engineering grounds. I get that the opposite can happen, and at the end of the day there is some subjectivity, and a good deal of optics involved, in the promotion process at any large company. But don't be demoralized by this! The minute you give up your engineering judgment and ethical principles in pursuit of some perceived political path to promotion, you immediately lose credibility with the most competent people, the ones you'd most want to work with and learn from. I assure you they are sprinkled throughout all kinds of organizations at all levels, so don't let the bozos define your worldview.

Sure, and in big companies there are plenty of places for low performers to survive by owning some very small, rigid scope that doesn’t require any real end-to-end thinking.

In my experience the distribution of engineer quality is even across companies, countries, ages, and any other dimension we can come up with. Certain big-scale skills can really only be practiced and honed at large tech companies, but it’s always a small minority that actually makes those things happen. A resume alone can be an extremely misleading signal.

