I've been working in AI for about a year, and I've been working as a web developer for about 20 years. It appears to me that everyone who thinks AI is going to handle development for companies (in the reasonably near future) has never held high-level engineering responsibility for a single project over a multi-year period. That is, they've never been exposed to how nuanced managing a software product is at a technical level. The bots can barely write code-bootcamp-level scripts. I'm not confident we will see AI solving the kinds of (coding) problems that engineering leaders handle over multi-quarter projects.
I’m an AppSec engineer and work with 300+ devs and software engineers. I’ve worked through the startup phase and two acquisitions. You’re spot on at this point in time. I’m on the team responsible for testing M365 Copilot before it rolls out to our org. A month ago I would have agreed with you 100%, but now I’m leaning toward there being a 50% chance of large-scale automation happening within 5 years.
What AI has been missing is the larger business context. It doesn’t know the politics behind why things are the way they are, why fixing an issue might cost the company 50k every minute, or that updating a library would break 15 business-critical products without proper coordination.
M365 Copilot is bridging that gap. Right now it’s dumb and only accesses what you can see in OneDrive and SharePoint. With plugins and connectors, it’s going to integrate into every development platform sooner or later.
I still think it’s some years out and will require a lot of human interaction before these generalized agents can be onboarded.
It’s a security nightmare for me. We basically just automated the recon for any attacker who has compromised a 365 account. In my opinion it’s moving too fast, even when it’s dumb as bricks and has the context of a 2-year-old.
I’ve been using it to compare static analysis findings, and M365 Copilot returns a lot of the same findings with mitigation suggestions. It’s still not 100%, but neither is any security testing.
I give it two years before the grunt work is fully automated
My theory is that they know it won’t work, but they’re scaring developers into using and improving AI and into taking pay cuts, so that they can eventually replace them. Every penny they can take from humans is being invested in AI research so they can eventually get rid of the humans.
The AGI that might eventually come won’t free humanity from labor, it will free the wealthy from having to manage and support laborers.
I think right now AI does enable code velocity, though the engineer still has to be an expert to direct the AI on what to write, look up error codes, understand the code well enough to make edits, ask follow-up questions, and so forth. The 'future promise' is having an agent-based model act as that person in the middle, e.g. Devin's feature release video. It's really hard to predict how far away that is from becoming good enough to work in the real world, though it certainly isn't here yet.
As someone who has worked in startup environments for 20 years, it's rather offensive that anyone in 2024 would claim that employees aren't taking risks.
In today's salary brackets, one could be comfortably making 220k as a staff dev at some huge healthcare/pharma tech firm. One may also feel the job is boring, soulless, and mostly uncomfortable. Then one may choose to work at an exciting AI startup for ~160k and suddenly find oneself way happier, growing more, and engaged. One just took ~60k worth of risk right there, not to mention that it could become $0 tomorrow, and one likely now has crap healthcare benefits, given the startup status.
That 60k could be a million dollars in ~15 years if invested wisely.
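For what it's worth, the arithmetic roughly checks out if you treat the ~60k as an annual salary gap that gets invested each year, rather than a one-time sum. A quick sketch using the ordinary-annuity future-value formula (the return rates here are illustrative assumptions, not the commenter's figures):

```python
def future_value(annual_contribution: float, rate: float, years: int) -> float:
    """Future value of investing a fixed amount at the end of each year,
    compounding annually at `rate` (ordinary annuity formula)."""
    return annual_contribution * ((1 + rate) ** years - 1) / rate

# Hypothetical scenario from the comment above: a $60k/year gap for 15 years.
for rate in (0.03, 0.05, 0.07):
    print(f"{rate:.0%}: ${future_value(60_000, rate, 15):,.0f}")
```

Even at a modest 3% return the annual stream crosses $1M; a single one-time $60k lump sum, by contrast, would need north of 20% annual returns over 15 years to get there.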
The hype train strikes yet again! In fellow consultancy circles, we were giving talks on exactly this back in '17 and '18, but the hype train was too strong then. It was even harder to convince this same crowd that SPAs are usually a bad idea, and that emerging technology like Hotwire or LiveView would soon eclipse the SPA-obsessed culture.

GraphQL is immensely powerful and useful for its exact use case, and extremely burdensome and expensive for any other use case. If you don't already know that use case, YAGNI. The same can be said for any hype train.

The problem with any kind of hype train in tech is that everyone looks for reasons to use whatever the new hyped thing is; rarely does anyone determine whether the new hyped thing actually fits their use case(s). This will continue as long as there is a dichotomy of culture between youths and those with lengthy experience. We old people will continue to point out exactly why this or that is pure hype, and will continue to be drowned out by the emotions that come along with participating in hype.
This reads like a junior developer's first foray into the concept of the test pyramid. Congrats, the author has learned that there is a forest beyond the trees.
I would guess for the same reason the government consistently buys $1000 hammers and $5000 toilet seats: to mask some military operation they'd rather not publicly report on.
We're pro-right-tool-for-the-job. The US stock market is already a free-money-generating machine. We need more money in the US stock market, not in B.S. toys for contrarian criminals.