Coding agents seem the most likely to become the general-purpose agents everyone eventually uses for daily work. They have the most mature and comprehensive tool-use capability, especially on the filesystem, but also in opening browsers, searching the web, running programs (via the command line), etc. Their current limitations for widespread usage are UX and security, but at least for the latter, that's being worked on.
I just helped a non-technical friend install one of these coding agents, because it's the best way to use an AI model today that can do more than give him answers to questions.
When people talk about inflation, I don't think they're referring to just CPI, but asset inflation too. Things like equities, real estate, gold/silver/platinum, bitcoin, etc.
These have been outpacing CPI because they're levered by cheap debt, brought to you by central bank actions that keep rates low so governments can play the same levered games with their own runaway fiscal policies.
That's a lot of financial devices painted with a broad brush, and I think the charge that so many central banks have knuckled under to fiscal dominance simply doesn't hold up. The ones that have, we tend to hear about.
Because there's a lot one could write about each of equities, real estate, gold, silver, and platinum (which have very different industrial exposures), and bitcoin, all of which have many price drivers.
So let's try something more parsimonious: what do you make of the people, institutions, etc. that bid on short- and even long-dated sovereign debt around the globe, and arrive at a collectively discovered price of, say, 3.5% annualized for a one-month maturity? https://www.treasurydirect.gov/auctions/announcements-data-r...
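To make the arithmetic concrete, here's a rough sketch of how a discount price on a 4-week bill maps to that annualized figure (the price below is made up for illustration, not an actual auction result):

    # Hypothetical 4-week T-bill; the price is an assumed figure,
    # not a real auction result (those are on treasurydirect.gov).
    face = 100.00      # paid at maturity
    price = 99.7316    # assumed price per $100 of face value
    days = 28

    holding_return = face / price - 1        # ~0.27% over 28 days
    annualized = holding_return * 365 / days
    print(f"{annualized:.2%}")               # ~3.51%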
Cold/outbound sales is dying or mostly dead. SaaS platforms and “growth ops” that made it easy to set up sales sequences and build ICP (ideal customer profile) lists helped kill it. AI making it easy to personalize and do all the work has been the nail in the coffin.
This is because sales is a zero-sum game. When everyone can do something at scale, like send an email sequence, nobody wins. Now inboxes are flooded with spam that gets deleted, and phones go straight to voicemail because people have learned it’s not worth answering. You can try to build even bigger lists to capture the 0.01% that will respond, but that’s a shrinking game, and many B2B companies don’t have the market size for it.
Instead, for my company and others I know, we’ve returned to old-fashioned human relationships that don’t scale as easily: building partnerships, asking for warm introductions, conferences, networking, events. Hell, I even know someone who knocks on doors for B2B, and it works for them. People ignore spam from bots, but they’ll listen to real humans. They’ll read emails and take phone calls from people they know. It’s about trust now, not scale.
I’d still recommend learning closing and everything else needed once a deal is in your pipeline. A book like Founding Sales is good for that, if a bit dated now (skip the cold-sales material in the first half). Never Split the Difference for negotiation. For in-person sales, this is basically what anyone in partnerships or outside sales does; I don’t know of good resources on it, so I’m learning from friends who do it and old-fashioned searching.
Conferences are huge. We’ve done 5 tradeshow demos for our clients in the last 10 months.
I agree with you about it being zero-sum. Once you’ve seen the first hyper-personalised email that says they really like your commitment to X, the rest all look the same.
Useful technology can still create a bubble. The internet is useful, but the dotcom bubble still occurred. There are expectations about how much return the invested capital will see, and a growing opportunity cost if it doesn’t; that’s what creates concerns about a bubble. If the bubble bursts, the capital will go elsewhere, and then you’ll have an “AI winter” once again.
Malicious libraries will drive more code to be written by LLMs. So far, malicious libraries have typically been trivial ones. A WhatsApp API library is just on the edge of what can be vibe coded, and avoiding getting pwned may be a good enough tipping point to embrace NIH syndrome more and more, which I think would be a net negative for F/OSS.
The incentives are aligned with the AI model companies, which benefit from the extra tokens used to code things from scratch.
Security issues will simply move to LLM-related security holes.
The library in question is a malicious fork of a library which reverse engineered the undocumented WhatsApp Web API. Good luck making a slop generator reverse engineer an API.
I would wager LLMs in a good enough tool/eval loop would actually do pretty well at that task. But they may also be pretty good at just replicating existing libraries wholesale, sans the malicious bits.
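By a tool/eval loop I mean roughly this shape (a sketch only; llm_generate() is a stand-in for whatever model API you use, and the test suite, e.g. replaying recorded protocol traces against the candidate, is assumed to exist):

    # Sketch of a generate -> run -> feed-back loop. llm_generate()
    # and the tests/ suite are hypothetical stand-ins.
    import subprocess

    def llm_generate(prompt: str) -> str:
        """Placeholder for a call to whatever model/API you use."""
        raise NotImplementedError

    def eval_loop(spec: str, max_attempts: int = 10) -> str | None:
        feedback = ""
        for _ in range(max_attempts):
            code = llm_generate(spec + feedback)
            with open("candidate.py", "w") as f:
                f.write(code)
            # Run the candidate against recorded traces / unit tests.
            result = subprocess.run(
                ["python", "-m", "pytest", "tests/"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return code  # passes the eval; good enough
            feedback = "\n\nPrevious attempt failed:\n" + result.stdout[-2000:]
        return None

The model doesn't need to get the reverse engineering right in one shot; it just needs the loop to converge before the attempt budget runs out.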
That graph is not inflation-adjusted, and its own description basically says to avoid using it like this:
> Average prices are best used to measure the price level in a particular month, not to measure price change over time. It is more appropriate to use CPI index values for the particular item categories to measure price change.
I’m not doubting that (inflation-adjusted) energy prices have gone up, but this graph is a misleading way to show it.
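Deflating with the CPI index values, as the BLS note suggests, is a one-liner; the prices here are invented and the CPI figures approximate, just to show the effect:

    # Toy illustration: deflate nominal prices with CPI-U index values.
    # Prices are made up; CPI figures are approximate annual averages.
    nominal = {2015: 0.135, 2025: 0.185}   # $/kWh, invented
    cpi     = {2015: 237.0, 2025: 314.0}   # CPI-U, approximate

    base = 2025
    real = {yr: p * cpi[base] / cpi[yr] for yr, p in nominal.items()}
    # Nominal change: +37%. Real change: ~0.179 -> 0.185, about +3%.
    # An unadjusted series can wildly overstate the increase.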
“Agent skills” seems more like a pattern than something that needs a standard. It’s like announcing you’ve created a standard for “singletons” or “monads”
> As the verification process itself becomes automated, the challenge will move to correctly defining the specification: that is, how do you know that the properties that were proved are actually the properties that you cared about? Reading and writing such formal specifications still requires expertise and careful thought. But writing the spec is vastly easier and quicker than writing the proof by hand, so this is progress.
Proofs never took off because most software engineering moved away from waterfall development, not just because proofs are difficult. Long formal specifications were abandoned since often those who wrote them misunderstood what the user wanted or the user didn’t know what they wanted. Instead, agile development took over and software evolved more iteratively and rapidly to meet the user.
The author seems to make their prediction based on the flawed assumption that difficulty in writing proofs was the only reason we avoided them, when in reality the real challenge was understanding what the user actually wanted.
The thing is, if it takes say a year to go from a formal spec to a formally proven implementation and then the spec changes because there was a misunderstanding about the requirements, it's a completely broken process. If the same process now takes say a day or even a week instead, that becomes usable as a feedback loop and very much desirable. Sometimes a quantitative improvement leads to a qualitative change.
And yet code is being written and deployed to prod all the time, with many layers of tests. Formal specs can be used at least at all the same levels, but crucially also at the technical docs level. LLMs make writing them cheap. What’s not to like?
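To make the spec-vs-proof asymmetry concrete, a toy Lean 4 sketch (untested, and the names are mine): the spec is a one-liner, while the induction proof is the part you'd hand to an LLM or to automation.

    -- Spec: reversing a list preserves its length. Stating this is
    -- trivial; proving it is the part that takes the effort.
    def myReverse {α : Type} : List α → List α
      | []      => []
      | x :: xs => myReverse xs ++ [x]

    theorem myReverse_length {α : Type} (xs : List α) :
        (myReverse xs).length = xs.length := by
      induction xs with
      | nil => rfl
      | cons x xs ih => simp [myReverse, ih]

Scale that asymmetry up to real invariants (memory safety, protocol correctness) and cheap machine-written proofs start to look like just another layer of tests.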