Nobody can know they will need to lay off 10% multiple years from now. So many things can change between now and then.
For all Block knows, AI for coding kind of plateaus where it is now and there is a huge boom in software engineer hiring taking advantage of the new tech to produce even more/better features.
Part of the trouble for software companies is that AI hype is sucking up 99% of the investment in the space. You might have a solid but non-sexy software business and struggle to find the investment you need.
Counterpoint: why do current state-of-the-art generative AI companies, with the ability to use models the public can't even access, and the ability to burn tokens at cost, still pay for very expensive SaaS software?
That's really simple - actually writing the software has never really been the hard part of most SaaS apps. So long as you're moderately disciplined and organised, it's easy to build what most SaaS apps are, i.e. a CRUD app with a clever bit. The clever bit is the initial challenge that sets it apart from the rest, but encoding that in software has never really been that difficult.
Having the ideas necessary to know what to write is where practically all the value lies (caveat: there is value in doing the same thing as someone else but better, or cheaper). AI can help with that, but only insofar as telling you the basics or filling in the blanks if you're really stuck. It can't tell you the 'clever bit', because that is by definition new and interesting and doesn't appear in the training data.
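To make that concrete, here's a rough sketch of the shape I mean (FastAPI and the lead-scoring example are arbitrary choices of mine, not anything from a real product): the CRUD endpoints are boilerplate any tool or junior dev can produce; the only part worth anything is the one function that encodes the actual idea.

```python
# Minimal sketch of a "CRUD app with a clever bit" (FastAPI picked arbitrarily).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Lead(BaseModel):
    id: int
    name: str
    notes: str = ""

leads: dict[int, Lead] = {}  # in-memory store; a real app would use a database

# --- The CRUD part: pure boilerplate ---
@app.post("/leads")
def create_lead(lead: Lead) -> Lead:
    leads[lead.id] = lead
    return lead

@app.get("/leads/{lead_id}")
def read_lead(lead_id: int) -> Lead:
    if lead_id not in leads:
        raise HTTPException(status_code=404, detail="not found")
    return leads[lead_id]

# --- The "clever bit": the domain insight the whole product is sold on ---
@app.get("/leads/{lead_id}/score")
def score_lead(lead_id: int) -> dict:
    lead = read_lead(lead_id)
    # Stand-in heuristic; knowing what actually belongs here is the hard part.
    return {"score": min(100, 10 * len(lead.notes.split()))}
```

An LLM will happily generate the top two-thirds of that file all day; it's the last few lines that require knowing something the training data doesn't.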
What this means is that at some point Anthropic will be able to prompt Opus to clone Jira and never pay an Atlassian bill again. Opus just needs to figure out what Jira is first. It's not there yet.
> What this means is that at some point Anthropic will be able to prompt Opus to clone Jira and never pay an Atlassian bill again. Opus just needs to figure out what Jira is first. It's not there yet.
Bang on, and Jira is the perfect example! Because Jira isn't a bag of features: Jira is a list of features and the way they fit together (well or poorly, depending on your opinion).
That's the second-order product design that it's going to take next-gen coding AI workflows to automate. Mostly because that bit comes from user discovery, political arguments, sales prioritization, product vision, etc. It's a horrendous "art" of multi-variable zero-sum optimization.
When products get it right (early Slack) then it's invisible because "of course they made it do the thing I want to do."
When products get it wrong (MS Teams, Adobe Acrobat, Jira, HR platforms) then it's obvious features weren't composed well.
Expect there's more than one {user discovery} -> {product specification} AI startup out there, working on it in a hierarchical fashion with current AI now.
On top of that, it's one thing to write the code, whereas it's another to actually run that code with maximal reliability and minimal downtime. I'm sure LLMs can churn out Terraform all day long, but can they troubleshoot when something goes wrong (as is often the case)?
I would posit that another large factor is that "owning" the software comes with the long tail of edge cases, bugs, support, on-call, regulations, etc. that an established SaaS has learned from and iterated on for many years with many customers.
The vast majority of companies would (and should) rather let the SaaS vendor figure that out and focus on their actual company.
AI companies already know what they need; they're paying for it. It would make a great case study for them to make a list of all the external software they're using, list the features they use (or have the AI watch them for a week), and then prompt the AI to rewrite those in-house.
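As a sketch of what that kickoff step could look like (the tool inventory below is made up, the model name is a placeholder, and only the basic Anthropic SDK call shape is real):

```python
# Hypothetical sketch: turn a SaaS inventory into a "rebuild in-house" prompt.
import anthropic

# Made-up inventory; in practice you'd pull this from spend/usage tracking
# or from watching what people actually click on for a week.
inventory = {
    "issue tracker": ["boards", "sprints", "custom workflows", "reporting"],
    "team chat": ["channels", "threads", "search", "integrations"],
}

prompt = (
    "For each tool below, draft a spec and an implementation plan for an "
    "in-house replacement covering only the features we actually use:\n"
)
for tool, features in inventory.items():
    prompt += f"- {tool}: {', '.join(features)}\n"

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
reply = client.messages.create(
    model="MODEL_NAME",  # placeholder; substitute a real model id
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)
print(reply.content[0].text)
```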
This is what people don't get about what's coming, and it'll hit them like a ton of bricks. Beyond toy examples, software development was the scale-limiting factor for most of the field's history if you had domain expertise. Now we hear constantly that it doesn't matter, because of "muh experience" and the architecture, choices, tradeoffs, etc. for which you need seniority to operate an LLM efficiently (or at all). This is true, of course. What people don't seem to get is that that's what's coming next. Your experience won't mean crap anymore, and then the ride starts full blast.
Addendum to the counterpoint: why haven't those SotA gen-AI companies become the most productive software companies on earth and released better, cheaper competitors to all currently popular software?
People always gripe about the poor quality of software staples like Microsoft Office or GitHub or Slack. Why hasn't OpenAI or Anthropic released a superior office suite or code hosting platform or enterprise chat? It would be both a huge cash cow and the best possible advertising that AI-facilitated software development is truly the real deal 10x force multiplier they claim.
If someone invents a special shovel that can magically identify ore deposits 100% of the time, they aren't going to sell it with the rest of the shovelmongers. They're going to keep it to themselves and actually mine gold.
Because it's not their business to sell a chat app? "Our company is the frontier lab for AI models, oh and btw we also offer SlackClone, sign up for enterprise please." Their job is selling shovels: really good, increasingly expensive shovels that keep getting better, and letting others waste their time looking for gold.
But Google sells the productivity apps and also does the exact same things OpenAI does.
If their work on Gemini is this leading, world-class stuff, why aren't Google's software products suddenly getting better?
Was the most recent release of Android demonstrative of a significant uptick in product iteration? Shouldn’t we suddenly be seeing Android pulling far ahead of iOS in an unusually rapid fashion because Apple doesn’t have access to the same quality of shovels?
What about Microsoft Windows 11? Isn’t Microsoft a major OpenAI investor with full access to their latest and greatest?
Why aren’t we seeing release schedules accelerate or feature lists growing at a faster rate?
Supposedly we are selling a lot of shovels here but I don’t see a lot of holes being dug.
Android is a poor example here, especially with how more and more features are moved from the OS to Play services. Google ships plenty of features without even an OS update; that's how Android has always been. Even for the OS itself, Pixel feature drops happen every quarter. AOSP is only a base for others to build on anyway; have you seen how fast Samsung and others are pushing updates and an uncountable number of features? It's not comparable to iOS at all.
Not really, no. It's pretty much the same pace as before. I wanted to point out that Android is not playing catch-up to iOS in any way, in features or quality; it's the opposite. Your comment asked why Google isn't catching up to Apple with AI's help. iOS, meanwhile, has been regressing since 18 and is a mess now on 26.
Yes, to clarify, I'm not making any claim about Android versus Apple, which one is better, or who is catching up to whom. Which operating system is ahead or better is essentially irrelevant to the point I'm making.
My main claim revolves around your second sentence: Google is a primary source of AI research and has access to frontier models before all of its customers, especially competitors like Apple, who are clearly behind in the AI race and/or not participating in the same way.
In theory, if AI is transformational to developer velocity, Android and all other products under Google’s umbrella should be moving faster than competitors that don’t have early access and preferential wholesale cost AI infrastructure, and they should be clearly iterating faster and better than they did prior to ~2022-2024.
To me, the biggest argument for an AI bubble burst is that companies like Meta and Google won’t actually be able to show their prospective customers that their own workflows have benefitted. Google can’t say “we now ship major [Google Product] features n% faster/better” because there’s no evidence of it. They might make the claim but nobody will believe them.
Major corporations will try the products and start spending an extra $20-200 per engineer per month; they'll see productivity gains of <5%, maybe even see code quality drop, and then they'll decide the experiment was a bust.
But they are marketing their AI as replacing all software engineers. Their CEO can’t stop saying it. According to them the cost of producing software is now just the cost of tokens to generate it.
They have the special knowledge to leverage AI to clone (and even improve on) huge-revenue, high-margin businesses. If their claims about the abilities of LLMs are accurate, it would be foolish to just leave that on the table.
It would also prove the power of their LLM product as truly disruptive. It would be amazing marketing!
They care about money, and they're making tons of investor money doing what they're doing; there's no incentive to pivot if it would just turn investor money into consumer money.
Their business is making money. If they can build money printing machines, they're not going to refuse to use them because that's "not their business".
Do you really think they would be out donating trillions of dollars to other companies out of the goodness of their hearts, instead of just bankrupting everyone in the software industry if they could?
Huh? What kind of question is that? Who would waste the opportunity to win the AI race to become another Jira vendor? Everything has an opportunity cost. Didn't you learn that already?
Isn't that point kind of the counterpoint to the AI-first narrative?
With standard, human-driven operations, that's true about opportunity costs. But what we're told is that AI will replace humans, which essentially means the opportunity cost becomes cash only. Then the question of why an AI lab doesn't start a SaaS fully managed by AI becomes even more interesting. Maybe because it's not that simple. Hence it's not that easy at other companies either to just replace devs, engineers, and so on with AI.
Waste? They can become both an AI race winner AND a disruptive Jira vendor. Yet they don't. Why?
Being a successful Jira vendor would prove their point that software engineers are obsolete now. Why don't they do it already?
> Why hasn't OpenAI or Anthropic released a superior office suite or code hosting platform or enterprise chat?
My guess is twofold. One, they're specialized in AI. Two, building another Anthropic is a big moat, and they like to keep it big versus what you could build with it.
Why aren't we in the year of the Linux desktop? It's free and arguably close enough to, as good as, or better than Windows.
I think in the modern world people would absolutely sell the special shovel, because even if you have a better product that doesn't mean people are going to be using it. You need to have a much better product for a long time for that to happen. And being much better than the competition is hard.
Anthropic appears to have realized before OpenAI that code gen was an important enough market to specialize in.
For now though, building smarter models / general integration tooling is a better use of model companies' capital.
Once/if performance gains plateau, expect you'll see them pivot right quick to in-house software factories for easily cloneable, large TAM product spaces, then spin off those products if they're successful.
100% agreed. When/if that pivot happens will be the sign that gen-AI is truly disrupting the software market in a profound way. "You're using the model wrong/you're not using the latest model" is an oft-repeated argument against AI skeptics. Nobody knows how to use the latest models better than their developers.
Their costs are bound to compute anyway, and they don't mind huge compensation either - it's not much of a cost saving to rebuild an in-house Slack or whatever, even cheaply?
Why do you have to waste ultra-expensive engineers on it? You have agents. And verifying your product works as it is claimed should absolutely be part of your mission. How can you possibly claim that your models are revolutionising software development if you haven't even used them to revolutionise your own software development in-house? Not only that, it would produce a huge marketing coup that would immediately lead to a flood of enterprise spending if you could demonstrate that your agents actually do what you constantly claim them to do.
PS. If you're claiming that coding an application is ultra-expensive, you are already entering the argument on the side of the comment you're arguing against, which is making a counterpoint to the article, which claims in the first sentence:
> The math is simple: if it costs almost nothing to build an app, it costs almost nothing to clone an app. And if cloning is free, subscription pricing dies.
They did revolutionise software development in-house. Codex and Claude Code are both 90% agent-written these days, and they bring in billions of dollars of revenue.
Billions of dollars of revenue on trillions of dollars of investment is not a revolutionary feat. I promise you I could turn trillions into billions too.
Neither of those pieces of software is primarily responsible for the revenue, either. The actual models underlying them are, not the trivial CLI chat interface (which, despite being trivial software, still manages to be full of bugs that go unfixed for months). I also don't even think it's true that Codex is primarily agent-written: OpenAI specifically cited using Electron in their recent Codex desktop application for "agent orchestration" to save human developer time on porting it across platforms, which does not sound like a successful exercise in eating their own dog food.
If you have tools that allow superior efficiency, shouldn't you be hiring every engineer you can get your hands on, however expensive, and putting them to work producing massive amounts of products to out-compete everyone else in the world?
Shouldn't they be in a position to replace absolutely every other tech company? That's tens of trillions in valuation in a few short years.
What is the lifetime value of an individual pretraining run, and what is the cost to do it? Whether it is a net positive seems to still be an open question.
Actually there is a chart of answers to this question, because the frontier providers have been delivering new models for some time. The answer is that so far they have been net positive.
It does seem like things are moving very quickly, even deeper than what you're saying. Less than a year ago, LangChain, model fine-tuning, and RAG were the cutting edge and the "thing to do".
Now because of models improving, context sizes getting bigger, and commercial offerings improving I hardly hear about them.
This is very interesting to me. I’ve been working on a side project with interactive Python tutorials in the browser, and I’ve been somewhat discouraged recently by how LLMs have been changing the landscape.
It seems SEO for this sort of thing is dead, so another funnel/channel is needed. Also, CS enrollment seems to be down this past fall for the first time in a while (based on the CERP pulse survey).
But maybe there is still a market for that sort of educational content.