Hacker News | sdevonoes's comments

I wish that were also true for the case of Claude/Codex/etc

> Learn to code and then Google will surely hire you and pay you $250k right off the bat

Weird. In the EU, 99% of graduates didn't (and don't) have that in mind… A fresh CS graduate typically earns less than 40-50K (even less depending on the country).

So USA is now like the EU?


No, USA is not like the EU because everything still costs American prices.

And because the employed software engineers still make way, way more than that, but the number of unemployed who make $0 is increasing (and that set may soon be full of fresh graduates).

It has been for a while I suspect.

Agent bots are the new “TODO” list apps. Seems cool and all, but I wish I could see someone writing useful software with LLMs, at least once.

So much power in our hands, and soon another Facebook will appear built entirely by LLMs. What a fucking waste of time and money.

It’s getting tiring.


It’s sad not because of AI itself but because of the companies behind AI: we are now paying for every single line of code we produce. That sucks

Weird you got downvoted for that. This is exactly the thing which has been bothering me about all of this.

Pre-LLM there were paid products and licensed stuff, but for the most part you could code in any language using a free or community-edition IDE and mostly open toolchains. The total requirement for me as an individual to start using some language or stack is owning a computer and having internet access. Both provided by a stable market with consumer choice.

Post-LLM there is now this blackbox of a service which you depend on and for which someone is picking up a not-insignificant tab where the costs currently seem massively subsidised, and which is getting to be a requirement for your skill set. Open local models? Fine, but who is training them? How will those stay up-to-date?

Oh, and then there is the not-quite-insignificant ecological aspect and that bit where the powers-that-be seem to have collectively decided that copyright doesn't really apply here.


But aren’t companies enforcing AI usage? If not, wait for it.

Mine's tracking it complete with a leaderboard (LOL) and it's been suggested to me that it'd be in my best interest not to be too low on that list, so I suspect in the back half of the year some sterner conversations and/or pink-slips are going to be coming the way of those who've not caught on that they need to at least be sending some make-work crap to their LLMs every day, even if they immediately throw the output in the metaphorical garbage bin.

It's basically an even-more-ridiculous version of ranking programmers by lines-of-code/week.

What's especially comical is I've seen enormous gains in my (longish, at this point) career from learning other tools (e.g. expanding my familiarity with Unix or otherwise fairly common command line tools) and never, ever has anyone measured how much I'm using them, and never, ever has management become in any way involved in pushing them on me. It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week. WTF? That kind of thing should be leads' and seniors' business, to spread and encourage knowledge and appropriate tool use among themselves and with juniors, to the degree it should be anyone's business. Seems like yet another smell indicating that this whole LLM boom is built on shaky ground.


> It's like the CEO coming down to tell everyone they'll be making sure all the programmers are using regular expressions enough, and tracking time spent engaging with regular expressions, or they'll be counting how many breakpoints they're setting in their debuggers per week.

That's because they weren't sold regex as a service by a massive company, while also being reassured by everyone that any person not using at least one regular expression per line of code is effectively worthless and exposes their business to a threat of immediate obsolescence and destruction. They finally found a way to sell the same kind of FOMO to a majority of execs in the software industry.


What's stopping someone from just having the AI churn out garbage all day long? Or like, put your AI into plan mode with extra-high reasoning and have it churn for 10 minutes to make a microscopic change in some source file. Repeat ad infinitum.

> What's stopping someone from just having the AI churn out garbage all day long?

In my case it's morality.


Interesting consideration, 'mandates' and all. Definitely in camp 'toss the output', here. I think I'll see 'morality' leaving when $EMPLOYER fires 'professional discretion'... forcing usage and, ultimately, debasing the position.

edit: Peer said it well, IMO. The consequences aren't really yours. Also: something, something, Goodhart's Law.


I would argue that making the company experience the consequences of its choice of metrics / mandates is in fact a moral imperative.

> even if they immediately throw the output in the metaphorical garbage bin.

Gotta be careful if you do that tho; e.g. Copilot can monitor 'accept' rate, so at bare minimum you'd have to accept the changes, then immediately back them out...


In a couple years, we'll have office workspaces equipped with EEG helmets that you must wear while working, to measure your sentiment upon seeing LLM-generated code. The worst performers get the boot, so you better be happy!

If you use AI to back it out, sounds like you’ve found an infinite feedback loop for those metrics.

Did industrial psychology die out as a field? Why do we keep reinventing the wheel when it comes to perverse incentives? It’s like working on a scrum team where the big bosses expect the average velocity to go up every sprint, forever, but the engineers are the ones deciding the point totals on tickets.


I wonder if Copilot can write a commit and backout routine for them.

From a management perspective I would be highly skeptical of token leaderboards. You are incentivizing people to piss away company money for uncertain rewards.

I mean… throw some docs into the context window, see it explode. Repeat that a few times with some multi-step workflows. Presto, hundreds of dollars in “AI” spending accomplishing nothing. In olden days we’d just burn the cash in a waste paper basket.


My company doesn’t enforce AI usage but for those who choose to use it, every month they highlight the biggest users. It’s always non-tech people who absolutely don’t understand how LLMs work and just run a single chat for as long as possible before our system cuts them off and forces them into a new chat context.

"Can't fix stupid"

Vibe code a side project at work. I’m willing to bet the tools aren’t mapping the code contribution locations to business impact (hard problem).

Reviewing AI generated code at PR time is a bottleneck. It cancels most of the benefits senior leadership thinks AI offers (delivery speed).

There’s also this implicit imbalance engineers typically don’t like: it takes me 10 min to submit a complete feature thanks to Claude… but for the human reviewing my PR in a manual way it will take them 10-20 times that.

Edit: in the end, real engineers know that what takes effort is a) knowing what to build and why, and b) verifying that what was built is correct. Currently AI doesn’t help much with either of these two points.

The inbetweens are needed but they are a byproduct. Senior leadership doesn’t know this, though.


Indeed. My view as a CEO is, if you are still reviewing the code yourself then what use is it that you can produce a bunch of text at a faster rate?

I'd prefer people wrote good quality code and checked it as they went along... whilst allowing room for other stuff they didn't think of to come to the front. The production process of using LLMs is entirely different, in its current state I don't see the net benefit.

E.g. if you have a very crystalised vision of what you want, why would I want an engineer to use an LLM to write it, when the LLM can't do both raw production and review? Could this change? Sure. But there's no benefit for me personally to shift toward working that way now - I'd rather it came into existence first before I expose myself to incremental risk that affects business operations. I want a comprehensive solution.


You should lay off your engineering team and do it all in Lovable amigo.

Where are you CEO?

At a shitty company. The problem is - you cannot ship a large amount of code quickly in a perfect way. Positioning the problem as "what's the point of generating all this code so fast if I still need a warm body at the end making sure it's OK?" is hilarious.

Don't do that. Just ship it. Yes, good tests, linting, etc will help but if you really believe you don't need humans in the loop at all, at least for the time being, you are fucked.

But go ahead, buy the hype. Your agent swarm can build an operating system in 15 minutes and everything will just work. Cool.


Edit: I disagree with you, didn’t realize you weren’t OP

Also wow you gutted your original comment

This is what I don't understand about this policy. There's no way a senior has enough spare capacity to be the gatekeeper on every PR made by AI below them. So now we are just making it so the senior people use more AI to keep up, but now they're to blame for letting it happen.

It sounds like a piss poor deal for seniors unless senior engineer now means professional code reviewer.


That's Amazon in a nutshell though. Create conflicting metrics for performance, push credit up and responsibility down, punish everyone below you for not meeting the double standards.

> Create conflicting metrics for performance, push credit up and responsibility down, punish everyone below you for not meeting the double standards

This resonates with my experience.

The only thing you forgot is that you can also use the 12^H^H 14 leadership principles to argue whatever you want (and then the opposite of what you argued last month, still using the same leadership principles).


Got a project finished early? Well, you didn't insist on the highest standards. Made sure things were held to a high standard? Well, you weren't biased for action.

Were you a knowledge source for the entire team? Well, you weren't learning and being curious. Did you ask a lot of questions to learn everything? Well, then you weren't "are right a lot".

Did you think big and come up with an architecture that saved Amazon a lot of money? Then you weren't inventing and simplifying. Build something simple to get it out the door quick? Well, you weren't thinking big.

Did you act quickly without consulting others to fix an issue? Well you weren't earning trust. Did you consult people to make sure they were happy with the solution? Well you weren't biased for action.

That's just a few examples; there are so many more.


Very nice, I can imagine someone turning it into a little satirical webpage, which implements a kind of decision tree:

1. Choose from a set of challenge types (e.g. meeting a deadline, reliability)

2. Choose whether the challenge was "met" or "failed".

3. Choose whether you want to make the person look good or bad, by following/ignoring a principle.

4. Results: A list of relevant principles with short rationalizations.

I'm almost tempted to try, except perhaps I should treasure my ignorance.

If a tool like that gets popular enough that most employees are using it for office-politics, it might even start to deflate the whole Leadership Principles thing.
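The four-step decision tree described above is small enough to sketch. Here is a toy version in Python; the challenge types, principle names, and rationalizations are all invented for illustration (that the real list can be bent either way is the whole joke):

```python
# Toy "leadership principle rationalizer": for any (challenge, outcome, spin)
# combination, produce a principle plus a one-line rationalization.
# All entries below are made up for the sake of the satire.

RATIONALIZATIONS = {
    ("deadline", "met", "bad"): (
        "Insist on the Highest Standards",
        "Finishing early suggests the bar was set too low.",
    ),
    ("deadline", "met", "good"): (
        "Deliver Results",
        "Shipped on time despite ambiguity.",
    ),
    ("deadline", "failed", "bad"): (
        "Bias for Action",
        "Too much deliberation, not enough shipping.",
    ),
    ("deadline", "failed", "good"): (
        "Think Big",
        "Chose long-term scope over a rushed milestone.",
    ),
}

def rationalize(challenge: str, outcome: str, spin: str) -> str:
    """Steps 1-3 of the tree are the lookup key; step 4 is the result."""
    principle, reason = RATIONALIZATIONS[(challenge, outcome, spin)]
    return f"{principle}: {reason}"

# Same facts, opposite verdicts, depending only on the desired spin:
print(rationalize("deadline", "met", "good"))
print(rationalize("deadline", "met", "bad"))
```

The point the sketch makes concrete: the verdict is a function of the spin you choose, not of the facts, since every (challenge, outcome) pair has a principle available for either conclusion.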


I always received my accolades and never saw them twisted that way there, though. But that was during covid. And in Europe.

The key is to understand which LPs apply to your L+1 and which apply to you (hint: it's not the fun ones).

Most AI advocates I know believe this period of reviewing every line of code will come to an end when models improve. So there will be no bottleneck. We will simply test and ship, with AI doing all the code and review.

Possibly, but it doesn't make sense to restructure things in advance of that actually happening, particularly since there's no roadmap for getting there right now.

They are already at this point - they just think the world needs to catch up. They don’t review the code most of the time. They believe it’s just a matter of becoming comfortable with the idea you don’t write code. Seems plenty of startups in SV are also doing this.

Like the fake facebook that had a security hole so severe every single participant had their API keys exposed?

Yep, that’s the most prominent example of a system built without code review I can also reach for. Whether all such systems also suffer critical flaws is another question entirely. And whether that matters is a further unknown.

Surely they know all this. They're worried about AI code degrading codebase quality, so they're putting on the brakes.

> Senior leadership doesn’t know this, though.

Well, you'd think senior leadership should know how their business and their people work.


To be fair, senior engineering leads in the software world are like Voltaire's joke about the Holy Roman Empire: neither holy, nor Roman, nor an empire.

Despite the name, not a lot of seniority, leadership or engineering going around.



99% of the engineers out there are generic ones (including myself)… and most of us are working.


If you take away my AWS account and my ability to “add on to what Becky said” and “look at things from a 1000 foot view”, I am a “generic developer” and was one for 25 years.

That doesn’t have anything to do with the fact that “generic developers” are a dime a dozen and it’s hard to stand out from the crowd using an ATS. I just said I had the same issue when experimenting with ATS’s.


As a non-native English speaker: the British accent is harder to understand (I know there are many accents in the UK). The American accent is easier to understand. Idioms are equally hard to understand in both.

For example “bend over backwards”. I get the meaning, but my brain would never produce that phrase. I would say something like “adapts”, “compromises”, etc.


Do you think there is any benefit from learning idioms? When speaking a non-native language, I always struggle with wanting to sound like a native speaker, and never using idioms and jargon makes me feel like I'm setting myself apart. However, it's really hard to use them correctly when you don't speak the language natively.

The real litmus test is whether one would allow LLMs to determine a medical procedure without a human check. As of 2026, I wouldn’t. In the same sense, I prefer to work with engineers with tons of experience rather than fresh graduates using LLMs.


You think prompting is here to stay? SQL has survived a long time. Servlets haven’t. We moved from assembly to higher-level languages. Flash couldn’t make it. So I’m not sure how long we will be prompting. Sure, it looks great right now (just like Flash, servlets and assembly looked back then), but I think another technology will emerge that is perhaps based on prompts behind the curtains but doesn’t look like current prompting.

I would say prompting is not here to stay. It’s just temporary “tech”

