The original talk by Ward Cunningham made this clear: a conscious choice is required for it to be considered debt. You are explicitly releasing something you KNOW is suboptimal, specifically because you want the benefits today and have anticipated the costs to pay it down later, as well as the interest payments should you choose not to. You're using financial leverage the same way you would when buying a house or car.
By that definition, we do this all the time. I'd wager every feature release has some degree of "oh we can address that later if/when this takes off".
If you accidentally introduce bugs or regressions that gum up the works, that's not "debt", that's a mistake. If you choose the wrong thing and realize too late, that's not "debt", it's just bad decision making. If you choose wrong and are disallowed from ever going back and cleaning up as agreed, that's not "debt", it's bad management.
We've got to stop using "tech debt" to mean "everything we don't like about software".
This x1000. The last 10 years in the software industry in particular seem full of meta-work. New frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. Ultimately so we can build... what exactly? Are these necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
Hard to shake the feeling that this looks like one big pyramid scheme. I strongly suspect that the vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It was, and is. But not universally.
If you formulate questions scientifically and use the answers to make decisions, that's engineering. I've seen it happen. It can happen with LLMs, under the proper guidance.
If you formulate questions based on vibes, ignore the answers, and do what the CEO says anyway, that's not engineering. Sadly, I've seen this happen far too often. And with this mindset comes the Claudiot mindset - information is ultimately useless so fake autogenerated content is just as valuable as real work.
* the ability to find essentially any information ever created by anyone anywhere at anytime,
* the ability to communicate with anyone on Earth over any distance instantaneously in audio, video, or text,
* the ability to order any product made anywhere and have it delivered to our door in a day or two,
* the ability to work with anyone across the world on shared tasks and projects, with no need for centralized offices for most knowledge work.
That was a massive undertaking with many permutations requiring lots of software written by lots of people.
But it's largely done now. Software consumes a significant fraction of all waking hours of almost everyone on Earth. New software mainly just competes with existing software to replace attention. There's not much room left to expand the market.
So it's difficult to see the value of LLMs that can generate even more software even faster. What value is left to provide for users?
LLMs themselves have the potential to offer staggering economic value, but only at huge social cost: replacing human labor on scales never seen before.
All of that to say, maybe this is why more time is being spent on meta-work today than on actual software engineering.
I have watched artists thoughtfully integrate digital lighting and the like at a scale I'd never seen before the LLMs rolled up and made it possible to get programs to work without knowing how to program.
The fundamental ceiling of what an LLM can do when connected to an IDE is incredible, and orders of magnitude higher than the limits of any no-code / low-code platform conceived thus far. "Democratizing" software, where the only limits are now your imagination, tenacity, and ability to keep the bots aligned with your vision, is allowing incredible things that wouldn't have happened otherwise, because you no longer strictly need to learn to program for a programming-involved art project to work out.
Should you learn how to code if you're doing stuff like that? Absolutely. But is it letting people who have no idea about computing dabble their feet in and do extremely impressive stuff for the low cost of $20/month? Also yes.
Now this is the right take. It's one thing for us to do navel-gazing into the recursive autonomous future; it's another to step back and see what Normal People can do, now that the walls are coming down around our profession. Creating new walls is probably not the answer! From the Cathedral and Bazaar, we now have an entire metaphorical city of development happening, by people who would not have thought it possible a few years ago.
I don't know what the future of my job holds other than what it always had: helping people who have good ideas to get them done properly.
The thing is though it all still feels so…rudderless/pointless sometimes?
When digital cameras came out, it democratized filmmaking immensely. But it wasn’t just people screwing around - amazing new works of art, received positively by audiences and critics alike, exploded in number. They wound up winning film fests, garnering millions of views (and fans) online, and even on big screens worldwide, almost immediately.
Where are the vibe coded apps that are actually good? Where are the new, innovative creations built by “normal” people? Because by now you’d think we’d see them. It’s all been parlor tricks, proofs of concept, and post mortems on how a bot ruined half a year’s work or whatever. The “good stuff” is still happening behind closed doors, led by experienced engineers on existing projects. It’s a productivity multiplier more than anything it seems, but it doesn’t seem useful as a tool for new people to make new things in any given space.
Vibe coding is actually "good" for small, bespoke things. The same way that Excel is "good" for small tasks, but bad for larger things. Too easy to make mistakes, too hard to maintain.
I could equally ask - where are all the Excel workbooks that are actually _good_? No-one needs to share their Excel workbooks. They don't need 10k github stars. They just achieve some small goal of the Excel user. These LLM agents just need to do what the user needs doing at any moment.
(Sometimes, that can be a small part of a larger job in software, or a series of small parts perhaps - but again you are going to see this "show up" as a part of people's workflow in maintaining enterprise software, which is what most programmers are employed to do; in other words, you won't directly see it at all. And no, digital cameras didn't change the field 18 months after the first somewhat-usable one was released - it took quite a while for the technology to become good enough and cheap enough to democratize filmmaking).
> maintaining enterprise software which is what most programmers are employed to do …
I hear little from those involved with enterprise or line-of-business applications discussing their findings. Forums like this are dominated by SAAS, tool makers, computer and data scientists, and infrastructure concerns.
Anyone using AI with large, complex business systems?
Totally agree. I see a lot of experimentation, initial exploration for an idea, etc. but the middle and end portions are never noteworthy except when it goes haywire and someone makes a blogpost about it.
Ideating is important but it is also very far from what is being promised. It’s also not that useful to the average person most of the time. If this is truly a revolutionary, must-have, daily-use technology, then by now we should have some idea of where it lives. But we don’t! The best and most consistent application so far is coding agents for coders. That’s great, but again, not the promise and very limited in scope.
Our sales and marketing have started making their own tools for themselves. This week. They actually launched a terminal.
They hit a wall with deployment, for now, but it’s amusing to watch.
And since I wouldn’t trust their stuff (or Claude’s) with a 10-mile long stick I strongly suggested we put it on Cloudflare behind eight layers of Access / Zero Trust. Easy deployment, and "solves" (if we can call it that) many of the security issues (or not; maybe I’m wrong).
I have found that LLM’s are fantastic for rewriting things in ways that get me to break through writer’s block. It’s great for just keeping me going when I can’t think of the next words, even if I just sit on it and come back later. In that way it helps me create. But this covers one major issue that affects my progress, it doesn’t like…do the job for me, if that makes sense. I throw out probably 80% of what the LLM spits out, but even just seeing what you don’t want can often help you decide what you do want.
> Where are the vibe coded apps that are actually good? Where are the new, innovative creations built by “normal” people? Because by now you’d think we’d see them.
They're busy using them. They're probably not GitHub users or HN readers. I've seen some really nice internal (business) apps made.
I mean, yeah. I've seen a network infrastructure monitoring system for an ISP, a router config generator tool, and a go-based BGP EVPN daemon in the past week. All are in production.
Sure but maybe we’re all better off spending more time going for walks, learning to cook, playing sports , talking to friends and family, participating in spiritual communities, and making love (to other people!)
This line of reasoning applies to nearly any way to spend time - "why are you playing videogames? Learn X instead!" or "why are you bothering with X when Y exists?" or "what, you don't know how to make sourdough? Silly goose!"
At the end of the day, we all have only finite time on this earth, and how one chooses to spend the meager time between eat, sleep, and fend for self is up to them. If a person is content to play sports in their free time, more power to them. If they want to play videogames, and find satisfaction in that, great! Broadly, I like to create. Most of my creations are engineering-adjacent much more than they are art. That's fine, and I'm happy. I do everything you named on that list in addition to building stuff.
While using AI, I have caused things to exist that I want, much faster than I could have otherwise. I know how to program, but I'm not very fast, and I have to have the docs open all the time because the things I want to do are so broad and varied: one week it's bash SLURM scripts, the next it's adding things to my k8s config, and the one after that it's something in Python, and I don't have enough brain cells to keep track of seven different languages well enough to not accidentally put semicolons at the end of my Python scripts or use the wrong syntax. But boy, at least I have a bunch of stuff that actually works, in the time frame and attention span that I have left after the rest of my life for that day occurs. It's not like I wasn't programming before AI - I've been doing bash scripts and Arduino stuff since middle school - but I have a lot more to show for the little free time I've had to work with in the last year or so.
And, for the people who don't really know how to code, the incredible power of their computers is now much closer to their fingertips and usable for more than Electron apps. Want to have a thing happen? Ask, Wait, Iterate. All for cheaper than fiverr, and you might learn a few things before you finish.
> The fundamental ceiling of what an LLM can do when connected to an IDE is incredible, and orders of magnitude higher than the limits of any no-code / low-code platform conceived thus far.
AI Agents can write and modify base Python / C++ / Rust / whatever pretty well, and thus users aren't limited by "sorry the building blocks only go together in this one particular manner".
It's like the difference between an EZ-Bake oven and a fully furnished kitchen. The EZ-Bake oven can get some stuff done, but its limits are much more severely obvious than the kitchen's, and the kitchen's first limiting factor in what can be produced is usually the human cooking in it.
Emacs can be configured with no code written by the user and Linux can be controlled with minimal user knowledge of the command line. Still some knowledge is necessary in most cases, but nowhere near what was required a handful of years back.
It will keep being true. A few months ago the bar was Sonnet 4.0 performance. Literally just a few months ago. Now we have open weights models that reach that level.
Yes, you can get books. I have hundreds of ebooks on my Kindle with pretty much any other book a moment's download away. Even LLMs can regurgitate 95% of Harry Potter with a single prompt.
The actual quote from 1884 seems to have been:
"The advancement of the arts, from year to year, taxes our credulity, and seems to presage the arrival of that period when human improvement must end." - Henry L. Ellsworth
Either way we have a lot of things but it's not quite STTNG yet. There's no limit to how much more we can do.
I suspect the layoffs are for financial reasons, not because software is "done".
It still takes incredible amounts of resources just to build and operate even modest piles of spaghetti. The industry is basically just layers of duct tape being applied all the time to hold things together. The average user can barely operate a computer. There's no consensus for handling identity or distributed computing. We still have a long way to go.
> So it's difficult to see the value of LLMs that can generate even more software even faster. What value is left to provide for users?
I know a half dozen people who've created working software in the past month to solve a problem nothing else solved as well as what they made themselves. Software developers have finally automated themselves out of a job.
(I still think it's interesting that this requires pre-existing languages, libraries, etc, so this might not work in the future. But at least for now, we now have "Visual Basic" without the need for the visual part)
I see the next really big task for software as the ability to separate the signal from the noise. Sifting the wheat from the chaff has gone from a 'nice to have' to 'rescue my sanity'.
Maybe agents and AI in general will help with that. Maybe it will just make the problem worse.
> So it's difficult to see the value of LLMs that can generate even more software even faster. What value is left to provide for users?
In the past two or three days I generated an interactive disk usage crawler tailored to my operating system and my needs. I have audited essentially none of the code, merely providing vision and detailed explanations of the user experience and algorithms that I want, and yet got back an interactive TUI application that does what I want with tons of tests and room to easily expand. I plan to audit the code more deeply soon to get it into a shape I'd be more comfortable open-sourcing. One thing agents suck at is meaningful DRY.
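The core of a disk usage crawler like the one described is small. Here's a minimal non-interactive sketch (standard library only; the TUI, tests, and OS-specific tailoring from the comment above are omitted, and the function names are invented):

```python
import os

def dir_size(path: str) -> int:
    """Recursively sum the sizes of all regular files under path."""
    total = 0
    with os.scandir(path) as entries:
        for entry in entries:
            if entry.is_symlink():
                continue  # skip symlinks to avoid cycles and double counting
            if entry.is_file():
                total += entry.stat().st_size
            elif entry.is_dir():
                total += dir_size(entry.path)
    return total

def largest_children(path: str) -> list[tuple[str, int]]:
    """Immediate children of path, largest first - the view a disk usage
    TUI would let you drill into."""
    sizes = []
    with os.scandir(path) as entries:
        for entry in entries:
            if entry.is_symlink():
                continue
            size = entry.stat().st_size if entry.is_file() else dir_size(entry.path)
            sizes.append((entry.name, size))
    return sorted(sizes, key=lambda t: t[1], reverse=True)
```

The interactive part is essentially a loop over `largest_children`, descending into whichever entry the user selects.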
A spreadsheet editor at most a couple of hundred MBs in size that can compete with Excel, for example, while also not eating RAM. The same goes for a new browser and a new browser engine; it's time for Chrome to have a real competitor, it has become a mess. I can think of other such examples, but these are the 2 biggest ones.
> The last 10 years in the software industry in particular seems full of meta-work. New frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. Ultimately so we can build... what exactly? Are these necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
The overwhelming majority of real jobs are not related to these things you read about on Hacker News.
I help a local group with resume reviews and job search advice. A common theme is that junior devs really want to do work in these new frameworks, tools, libraries, or other trending topics they've been reading about, but discover that the job market is much more boring. The jobs working on those fun and new topics are few and far between, generally reserved for the few developers who are willing to sacrifice a lot to work on them or very senior developers who are preferred for those jobs.
There’s a whole world out there that doesn’t seem to be addressed by the original comment. On one end of that scale you have things like bespoke software for small businesses, some niche inventory management solution that just sits quietly in the corner for years. On the other end, there’s the whole world of embedded software, game dev, design software, bespoke art pipeline tools…
It can seem that the majority of software in the world is about generating clicks and optimising engagement, but that’s just the very loud minority.
Not that you asked… But I would be happy with a junior position writing production C or ASM - but I assume that those sorts of positions are on the other end of the same boat. Who the hell has any use for an amateur dev with an autistic fascination and _zero_ practical experience?
Someone shared an article here recently espousing something along the lines of "home garden programming." I see software development moving in this direction, just like machining did: either in a space-age shop that looks more like a lab, with a five-axis "machining center," or in the garage with Grandpappy's clapped-out Atlas - and nothing in between.
> I strongly suspect that vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
Feels like there’s a counter to the frequent citation of Jevons paradox in there somewhere, in the context of LLM impact on the software dev market. Overestimation of external demand for software, or at least any that can be fulfilled by a human-in-the-loop / one-dev-to-many-users model? The end goal of LLMs feels like, in effect, the Last Framework, and the end of (money in) meta-engineering by devs for devs.
Amen. Now with all the agents and bots, I often pause and wonder — how much code is there left to write that we need AI as our saving grace? How many unsolved problems, underserved customers, unanswered questions actually justify the volume? Where did we all go wrong?
I think we have reached peak functionality in software, therefore the only place left to go was make the underlying code more complex, messy, and impossible for humans to read. /s
This is a good point. I've seen people with really complex AI setups (multiple agents collaborating for hours). But what are they building? Are they building a react app with an express backend? A next js app? Which itself is a layer on top of an abstraction?
I haven't tried this myself, but I'm curious whether an LLM could build a scalable, maintainable app that doesn't use a framework or external libraries. Could be dangerous due to lack of training data, but I think it's important to build stuff that people use, not stuff that people use to build stuff that people use to build stuff that....
Not that meta frameworks aren't valuable, but I think they're often solving the wrong problem.
When it comes time to debug would you rather ask questions about and dig through code in a popular open source library, or dig through code generated by an LLM specifically for your project?
The copout answer is it depends. I've debugged sloppy code in React both before and after LLMs were commonly used. I've also debugged very well-written custom frameworks before and after LLMs.
I think with proper guardrails and verification/validation, a custom framework could be easier to maintain than sloppy React code (or insert popular framework here).
My point is that as long as we keep the status quo of how software is built (using popular tools that make it fast and easy to build software, tools that were often unperformant even before LLMs), we'll keep heading down this path of trying to solve the problems of frameworks instead of directly solving the problems with our app.
You are going to allow a product from a company you have no reason to trust write important software for you and put it into production without checking the code to see what it does?
I agree with you, which makes me seem like the laggard at work. Devil's advocate is that AI-native development will use AI to ask these questions and such. So whether it's a framework or standard lib, def agree knowing your stuff is what matters, but the tools to demonstrate this knowledge are fast in flux.
Again, I am on the slow train. But this seems to be all I hear. "code optimized for humans" is marked for death.
had another thought on my drive just now. nextjs is really fantastic with LLM usage because there's so much body of work to source from. previously i found nextjs unbearable to work with, with its bespoke isomorphic APIs. too dense, too many nuances, too much across the stack.
with LLMs it spit it out amazingly fast. but does that make nextjs better or worse as a framework, if an LLM is a requirement in order to navigate its design paradigms?
> Are these tools necessary to build what we actually need?
I think the entire software industry has reached a saturation point. There's not really anything missing anymore. Existing tools do 99% of what we humans could need, so you're just getting recycled and regurgitated versions of existing tools... slap a different logo and a veneer on it, and its a product.
The tools are mostly there, but there is a lot of need. Quality can be much better. Quality is UI, reliability, security, and a bunch of other similar things I can't think of offhand.
We still don’t have truly transparent transference in locally-run software. Go anywhere in the world, and your locally running software tags along with precisely preserved state no matter what device you happen to be dragging along with you, with device-appropriate interfacing.
We still don’t have single source documentation with lineage all the way back to the code.
We still don’t treat introspection and observability as two sides of a troubleshooting coin (I think there are more “sides” but want to keep the example simple). We do not have the kind of introspection on modern hardware that Lisp Machines had, and SOTA observability conversations still revolve around sampling enough at the right places to make up for that.
We still don’t have coordination planes, databases, and systems in general capable of absorbing the volume of queries generated by LLMs. Even if the models themselves froze their progress as-is, they’re plenty sophisticated enough when deployed en masse to overwhelm existing data infrastructure.
The list is endless.
IMHO our software world has never been so fertile with possibilities.
> The last 10 years in the software industry in particular seems full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly?
Don't forget App Stores. Everyone's still trying to build app stores, even if they have nothing to sell in them.
It's almost as if every major company's actual product is their stock price. Every other thing they do is a side quest or some strategic thing they think might convince analysts to make their stock price to move.
Well that's the thing, AI can mean anyone with an idea can build it, but only the people that own stuff will be able to leverage that to own more stuff.
The legal doctrine that a company's primary responsibility is to maximize shareholder value dates from the 1970s. It started with Milton Friedman's 1970 essay in the NYTimes [1] and then gained a lot of currency throughout the 70s stagflation and economic malaise. The final death-knell of the corporation as a social enterprise came during the 1980s era of corporate raiders and PE buyouts.
Note that the system that came before it had problems too. In the 50s and 60s, the top marginal tax rate was about 90%, which meant that above a certain level it made almost no sense for a corporate executive to be paid more. This kept executive salaries to a reasonable multiple of employee salaries, but it meant that executives and high-ranking managers tended to pay themselves in perks. This was the "Mad Men" era of private jets, private company apartments, secretaries who were playthings, etc. Friedman's essay was basically arguing against this world of corporate unaccountability and corruption, where formal pay and compensation were reasonable, but informal perks and arrangements managed to privilege the people in power in a completely opaque, unaccountable way.
Turns out that power is a hell of a drug, and the people in power will always find ways to use that to enrich themselves regardless of what the laws and incentives are.
>> The last 10 years in the software industry in particular seems full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly? Are these tools necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
This is because all the low-hanging fruit has already been built. CRM. Invoicing. HR. Project/task management. And hundreds of others in various flavors.
It may exist (with a loose term of exist) but they are all mostly garbage. There's still plenty of opportunity to make non-garbage versions of things that already exist.
This is technically true but also a bit naive. Established incumbents are very difficult to dislodge with merely a better version of their products. This becomes more true the larger the product and the average customer size. A good example is QuickBooks, which is a really janky accounting/bookkeeping software that is almost universally hated, but newer and better solutions haven't been able to capture much market share from it.
It’s hard to actually build a better QuickBooks because to build a better QuickBooks you need 1000+ integrations that each took hundreds of man hours to build.
Most common mistake: Creating diagrams in a graphics program without semantics. Why not write your "diagram" in code or at least some structured format?
- LLMs can read and write it, it serves as valuable context
- Always up to date, rendered on demand
- source controllable
- editable (thus will actually get edited)
Most architectural diagrams I see are just images with no source or semantics. Strictly less useful and more prone to rot.
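As one possible shape of this (a hedged sketch; the component names are invented): keep the diagram as plain structured data in source control and render Graphviz DOT text on demand.

```python
# The "diagram" is structured data: components and their dependencies.
# (These component names are made up for the example.)
ARCHITECTURE = {
    "web": ["api"],
    "api": ["db", "cache"],
    "worker": ["db"],
}

def to_dot(graph: dict) -> str:
    """Render the dependency graph as Graphviz DOT text on demand."""
    lines = ["digraph architecture {"]
    for src, targets in sorted(graph.items()):
        for dst in targets:
            lines.append(f'    "{src}" -> "{dst}";')
    lines.append("}")
    return "\n".join(lines)
```

The data diffs cleanly, an LLM can read or edit it as context, and `dot -Tsvg` turns it into an image whenever one is actually needed.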
Easier and largely compatible with the rest of the world. Solving problems with "If we all switched to NixOS..." is a non-starter in most organizations.
My rule of thumb: keep a strict separation between my projects (which change constantly) and my operating system (which I set up once and periodically update). Any hard nix dependency inside the project is a failure of abstraction IMO. Collaborating with people on other operating systems isn't optional!
In practice this means using language-specific package management (uv, cargo, etc) and ignoring the nix way.
The default has been pay $x/month for every service. I've seen startups that require a dozen service accounts just to run the software, and dozens more to get onboarded org wide. One service for feature flags. One service for logs. One service for traces. One service for error handling. Another service for ticket tracking, which is completely separate from your planning, design, and CI services. Jesus. What do people hope to accomplish here besides just deferring blame?
Replacing SAAS isn't about building a replacement services 1:1. It's about figuring out what you actually needed in the first place! Often we only use a tiny fraction of what the full-blown SAAS offers. IOW it's about eliminating the service entirely and building something that fits your actual needs, rather than following what some VC thinks your needs are.
AI or not, the "build vs buy" pendulum is now swinging hard to build. And IMO that's a real opportunity to consolidate, trim some fat, and actually apply engineering practices rather than just blindly signing up for every SAAS that crosses your path.
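To illustrate the "tiny fraction" point: if all a team actually needs from a feature-flag SaaS is deterministic percentage rollouts, that core fits in a few lines. A hedged sketch, not a drop-in replacement for any particular service:

```python
import hashlib

def flag_enabled(flag: str, user_id: str, rollout_pct: int) -> bool:
    """Deterministic percentage rollout: a given user always gets the same
    answer for a given flag, and roughly rollout_pct percent of users see
    the feature."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_pct
```

Targeting rules, an audit trail, or a dashboard can be layered on later, if and when they turn out to be real needs rather than features on a pricing page.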
This fundamentally misunderstands a couple of things.
DIY software is "free" like a free yacht is free. It initially looks appealing, but there are a lot of expensive hidden costs, upkeep, pitfalls, and problems.
For one, this is a bad assumption:
> building something that fits your actual needs
Unless your business is very small and not growing, this is a moving target. Your needs are going to change as you grow and different groups in the org are going to have different needs.
You really don't want to be dicking around creating software that already exists instead of doing the shit that actually makes you money. Spending a few hundo thousand on a bunch of software is nothing, you spend that on one engineer.
You buy a SaaS product because you have a problem and want to throw money at someone else to deal with it.
Try CodeCompanion if you're using neovim. I have a keybind set up that takes the highlighted region, prepends some context which says roughly "if you see a TODO comment, do it; if you see a WTF comment, try to explain it", and presents an inline diff to accept/reject edits. It's great for tactical LLM use on small sections of code.
For strategic use on any larger codebase though, it's more productive to use something like plan mode in Claude code.
Considering LLMs are models of language, investing in the clarity of the written word pays off in spades.
I don't know whether "literate programming" per se is required. Good names, docstrings, type signatures, strategic comments re: "why", a good README, and thoughtfully-designed abstractions are enough to establish a solid pattern.
Going full "literate programming" may not be necessary. I'd maybe reframe it as a focus on communication. Notebooks, examples, scripts and such can go a long way to reinforcing the patterns.
Ultimately that's what it's about: establishing patterns for both your human readers and your LLMs to follow.
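A small example of the kind of pattern-setting being described (the function and the timeout rationale are hypothetical, but the ingredients are the ones listed above: a good name, types, a docstring, and a "why" comment):

```python
def retry_delays(attempts: int, base_seconds: float = 1.0) -> list[float]:
    """Exponential backoff schedule for a flaky downstream call.

    Returns one delay per attempt: base, 2*base, 4*base, ...
    """
    # Cap at 60s: retrying longer than that just stacks up stale requests.
    return [min(base_seconds * 2**i, 60.0) for i in range(attempts)]
```

A reader, human or LLM, can tell from the signature and docstring alone what this does and why the cap exists, without reading a single call site.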
Yeah, I think what is needed is somewhere between docstrings+strategic comments, and literate programming.
Basically, it's incredibly helpful to document the higher-level structure of the code, almost like extensive docstrings at the file level and subdirectory level and project level.
The problem is that major architectural concepts and decisions are often cross-cutting across files and directories, so those aren't always the right places. And there's also the question of what properly belongs in code files, vs. what belongs in design documents, and how to ensure they are kept in sync.
The question being - are LLMs 'good' at interpreting and making choices/decisions about data structures and relationships?
I do not write code for a living but I studied comp sci. My impression was always that the good software engineers did not worry about the code, not nearly as much as the data structures and so on.
The only use of code is to process data, aka information. And any knowledge worker knows that success in processing information mostly relies on how it's organized (try operating a library without an index).
Most of the time is spent researching what data is available and learning what data should be returned after the processing. Then you spend a bit of brain power to connect the two. The code is always trivial. I don't remember ever discussing code in the workplace since I started my career. It was always about plans (hypotheses), information (data inquiry), and specifications (especially when collaborating).
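The library-index analogy can be made concrete: once the data is organized, the code that remains really is trivial. A sketch with a hypothetical catalog:

```python
# Unorganized: finding a book means a linear scan over every record.
catalog = [
    {"isbn": "1111111111", "title": "Structure and Interpretation"},
    {"isbn": "2222222222", "title": "The Art of Computer Programming"},
]

def find_scan(isbn: str):
    for book in catalog:          # O(n) per query
        if book["isbn"] == isbn:
            return book
    return None

# Organized: build the index once; every lookup becomes a one-liner.
index = {book["isbn"]: book for book in catalog}

def find_indexed(isbn: str):
    return index.get(isbn)        # O(1) per query
```

The interesting decision was how to organize the data; both functions are throwaway once that choice is made.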
If the code is worrying you, it would be better to buy a book on whatever technology you're using and refresh your knowledge. I keep bookmarks in my web browser and have a few books on my shelf that I occasionally page through.
Wow, the world is getting much faster at exploiting CVEs
> 67.2% of exploited CVEs in 2026 are zero-days, up from 16.1% in 2018
But the exploit rate (the pct of all published CVEs that are actually exploited in the wild) has dropped from a high of 2.11% in 2021 to 0.64% in 2026. Meaning we're either getting worse at exploitation (not likely) or reporting more obscure, pragmatically not-really-an-issue issues that can't be replicated IRL.
So we're in a weird situation:
The vast majority (99.4%) of CVEs will never see the light of day as an actual attack. Lots of noise, and getting noisier.
But those that do will happen with increasing speed! So there are increased consequences for missing the signal.
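The two trends compound the triage problem. A back-of-the-envelope illustration (the yearly CVE volumes below are hypothetical; only the exploit-rate percentages come from the comment above):

```python
# Hypothetical published-CVE volumes, for illustration only.
published_2021 = 20_000
published_2026 = 40_000

# Exploit rates quoted above: 2.11% (2021) vs 0.64% (2026).
exploited_2021 = published_2021 * 0.0211
exploited_2026 = published_2026 * 0.0064

# Noise per signal: CVEs you must triage for each one that ever matters.
noise_per_signal_2021 = published_2021 / exploited_2021  # ~47
noise_per_signal_2026 = published_2026 / exploited_2026  # ~156
```

Even if publication volume merely doubles, the haystack per needle roughly triples, while the needles that do exist get exploited faster.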
The entire zeitgeist of software technology revolves around the assumption that making things efficient, easy, and quick is inherently good. Most people who are "sitting in front of rectangles, moving tiny rectangles" sometimes have grandiose notions of their work's importance: we're making X work better for the good of Y to enable Z. Abstract shit like that.
No man, you're just making X easier. If the world needs more X, fine. If not, woops.
The detachment from reality makes it all too easy to deceive yourself into thinking "hey this actually helps people".
> Most people who are "sitting in front of rectangles, moving tiny rectangles"
Hey dude these are my emotional support rectangles!
Truth is, anything can be meaningful. We make our own meaning and almost anything will do as long as you believe in it. If optimizing rectangles on the screen makes you happy, that’s great. If it doesn’t, find something else to do.
It’s really just because those of us choosing this profession are also very good at optimizing chosen metrics, but we don’t always ask whether they are good metrics and whether they become counterproductive past some point.
This is one of the reasons why I'm so disgusted by the mainstream voices around AI. As if I'm going to be "left behind" because my only priority isn't increasing shareholder value or building a saas that makes the world a worse place.
Requirements handed down - never seen it in 25 years. The requirements are always fluid, by definition. At best, you get a wish list which needs to be amended with reality. If you have completely static requirements, you don't need an engineer! You just do it. Engineering IS refining the requirements according to empirical data.
Once you have requirements that are correct (for all well-defined definitions of "correct"), the code implementation is so trivial that an LLM can do it :-)