You are way off: it's just about money. For a long time, making appliances was an OK business: you made good stuff, sold it, kept the factories running and people employed, earned decent margins, and there was still real progress and innovation to pursue.
Now that there is not much left to update or innovate on, and companies have already squeezed workers in Bangladesh to the max, the only remaining sources of "innovation" and additional money are "connected" and "ads".
I don't see any contradiction between the two takes; I suspect capital pressure will force us into an inhumane dystopia where baseline existence is miserable, and quiet rational thought is a luxury.
So what? The Rolls-Royce & Bentley L-series engine was made from 1959 to 2020, over 60 years (https://en.wikipedia.org/wiki/Rolls-Royce–Bentley_L-series_V...), and was only replaced because of mergers and changes in ownership, not because of any shortcoming in its capabilities. It did, of course, evolve considerably over that production span.
Say you are working on a banking system. You ship a login form, it is deployed, used by tons of people. Six months later you are mid-sprint on the final leg of a project that will hook your bank into the new FedNow system. There are dozens of departments working together to coordinate deploying this new setup as large amounts of money will be moved through it. You are elbows deep in the context of your part of this and the system cannot go live without it. Twice a day you are getting QA feedback and need to make prompt updates to your code so the work doesn’t stall.
This is when the report comes in that your login form update from six months ago does not work on mobile Opera if you disable JavaScript. The fix isn’t obvious and will require research, potentially many hours or even days of testing, and since it is a login form you will need the QA team to verify it after you find another developer on your team to do a code review for you.
What exactly would you do in this case? Pull resources from a major project that has the full attention of the C-suite to accommodate some tin-foil Luddite a few weeks sooner, or classify this as lower priority?
This is a great example... except I think the right answer to "what exactly would you do in this case?" doesn't support your argument.
I'd document that mobile Opera with Javascript disabled is an unsupported config, and ask a team to make a help center doc asking mobile Opera users to enable JS.
This is too logical, practical, and pragmatic. Which product owner/project manager would approve such a thing!?
Being able to think of simple, practical solutions like this is one of the hardest skills to develop as a team, IMO. Not everything needs to be perfect and not everything needs a product-level fix. Sometimes a "here's the workaround" is good enough, and if enough people complain or your metrics show user friction in some journey, then prioritize the fix.
GP's example is so niche that it isn't worth fixing without evidence that the impact is significant.
That is also a solution. But the part where you drop everything to immediately document this, and then involve someone else on the team to write more documentation is the exact constraint I was trying to demonstrate. This bug is out of your and your team’s current context. It is low priority. A workaround reply is appropriate here and may have already been sent to the customer by tech support but it is also entirely appropriate to wait a few weeks to complete even what you stated if it is going to affect the company’s bottom line to do it sooner.
I just checked and it's fixed now, but for a long time the "Shop Policies" section on Etsy shops had the text of first field misaligned[0]. That's the sort of thing that might get thrown into a fix week, but never actually prioritized outside of a "fix week" situation. (tbf, it also might just get noticed and fixed by an engineer randomly without prioritization.)
Yes exactly. Any non-critical, out-of-current-scope bug must be evaluated for whether it should interrupt the current work. Is it a priority? You cannot automate this process by saying “if is_bug: return {priority: IMMEDIATE}” as suggested by the quote about id above, because you will absolutely destroy any velocity. In fact, that quote seems to me to be talking about not committing new code with known bugs, not about dropping everything as soon as a non-critical bug is discovered in old code.
Instead you need to have a triage process and a planning process, which to some degree most software teams do. The problem is that most of these processes do not have a rigorous way of dealing with really old, low-priority bugs. A bug fix week is one option for addressing that need.
No. My argument is valid if you have deadlines and your resources are not infinite. Either you were the only one reporting bugs, in which case of course you could fix them as you found them because they were always in your current work context, or you had no deadlines and could afford to switch context without the inefficiency of it affecting anything.
In most situations you have users who also find bugs and report them when they want, not when you are ready for them.
You can even see that your argument does not apply generally by the fact that bugs exist in software for years. If your way was both more efficient AND more aligned with human nature then everyone would be working like this but clearly almost nobody can afford to drop everything to fix a user’s random low priority bug the minute it is reported.
Well, we clearly come from very different work methodologies.
You have deadlines, velocity is a goal rather than a measurement, and there are probably several other (IMHO) process mistakes. In such systems, doing what is best for the organization can often be bad for your personal career. Still, that's probably the norm in much of the industry.
My view is that having bugs is costly. They cause problems in development and alienate users. A bug-free code base is an incredible asset to have!
You say it's inefficient to "switch context" and fix a bug the moment you find it. There is some truth there, but... (1) there are ways to work without huge context load, (2) I don't have to fix the bug that very minute. Usually, I make a note and get to it the next day or so. Also (3) the average bug fix in a well structured and tested code base is usually pretty quick.
> If your way was both more efficient AND more aligned with human nature then everyone would be working like this
This assumes the software industry is really well organized. After 40 years of experience writing software, I find that just hilarious! Though I probably also thought that before I got involved with much better organizations.
I suspect we do. Though you misunderstood my comment about velocity: I was using it purely as a way to demonstrate that something measurable is affected by dropping everything to fix a bug. It sounds like you do wait for an opportune moment to fix a bug and do not make it a top priority after all, so I think you see the cost of interruptions.
But yes, I am aware of lots of parts of this industry where you do not need to rush a project no matter what. I have worked at places that had a breakneck velocity and at places where things were much more chill. I prefer the latter, but I can say that I still want to ship software, which means goals and deadlines. Bugs should be fixed ASAP, but priorities must also be respected.
After 20 years doing this as a career, I agree this industry is a bit of a mess :)
> If the normal process leaves things like this to "some other time", one should start by fixing the process.
Adding regular fixits is how they fix the normal process.
This addition recognizes that most bug fixes do not move the metrics that are used to prioritize work toward business goals. But maintaining software quality, and in particular preventing technical debt from piling up, are essential to any long-running software development process. If quality goes to crap, the project will eventually die. It will bog down under its own weight, and soon no good developer will want to work on it. And then the software project will never again be able to meet business goals.
So: the normal process now includes fixits, dedicated time to focus on fixing things that have no direct contribution to business goals but do, over the long term, determine whether any business goals get met.
It seems the pipe operator was introduced largely because PHP arrays and strings don't have "methods". You can't write something in "OOP" style: "some_string"->str_replace("some", "replacement")->strtoupper(). With PHP's procedural array/string functions, writing such chains is much bulkier. The pipe operator will somewhat reduce the boilerplate, but the native "OOP" style is still much better.
There is also a proposal for adding such "methods", but I don't remember the link.
I'm not a blind PHP hater, but it seems like PHP community members sometimes celebrate new PHP features when their equivalents have been there for many years in other programming languages. https://waspdev.com/articles/2025-06-12/my-honest-opinion-ab...
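For concreteness, here is a rough sketch of the three styles being compared. This assumes the PHP 8.5 `|>` pipe syntax and first-class callable syntax; the "OOP" variant is shown only in a comment, because it does not exist in PHP:

```php
<?php
// Procedural style: nested calls read inside-out.
$a = strtoupper(str_replace('some', 'replacement', 'some_string'));

// Pipe style (PHP 8.5+): reads left-to-right. str_replace() takes
// three arguments, so it needs a closure wrapper; single-argument
// functions like strtoupper() can be piped directly via `(...)`.
$b = 'some_string'
    |> fn(string $s): string => str_replace('some', 'replacement', $s)
    |> strtoupper(...);

// Hypothetical "OOP" style from the parent comment -- PHP strings
// have no methods, so this does NOT compile:
// 'some_string'->str_replace('some', 'replacement')->strtoupper();

var_dump($a === $b); // bool(true)
```

The pipe keeps the left-to-right reading order of method chaining without requiring strings to grow methods.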
No, the OOP style isn't better. The set of functions one can use in OOP is closed.
Imagine I want to add a custom string function for a feature which uPpErCaSeS every second letter, because I need that for some purpose: I can't do that in OOP style.
In OOP I could extend the string class, but most other parts of the code won't magically use my string type now.
Thus I have to create a free-standing function for this (which is probably better anyway, as I don't need the internal state of the object, so living outside it is good for encapsulation).
And thus my string function works differently from other string functions:
my_casing($string->trim())->substr(3);
(The example is of course nonsensical and could be reordered, but we are arguing about syntax.)
Having them all be simple functions makes it equal.
Of course there are alternative approaches. C++ has argued for years about "uniform call syntax", which would always allow "object style" function calls and could also find non-member functions whose first argument is of a compatible type, but such a thing requires stricter (or even static) typing, so it won't work in PHP.
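Notably, the pipe operator addresses exactly this asymmetry: a user-defined function chains the same way as a built-in. A rough sketch, assuming PHP 8.5+ pipe syntax, with `my_casing` being the hypothetical every-second-letter function from the comment above:

```php
<?php
// Hypothetical custom function: uPpErCaSe every second letter.
function my_casing(string $s): string {
    $out = '';
    foreach (str_split($s) as $i => $ch) {
        // Odd positions uppercased, even positions lowercased.
        $out .= ($i % 2 === 1) ? strtoupper($ch) : strtolower($ch);
    }
    return $out;
}

// Procedural style: my custom function wraps the chain awkwardly,
// and the whole thing reads inside-out.
$a = substr(my_casing(trim('  hello world  ')), 3);

// Pipe style: built-ins and my free function compose identically.
$b = '  hello world  '
    |> trim(...)
    |> my_casing(...)
    |> fn(string $s): string => substr($s, 3);

var_dump($a === $b); // both are "Lo wOrLd"
```

Because everything on the right of `|>` is just a callable, there is no longer a visible difference between "real" string functions and mine.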
Sadly, about 98% of real-world users are going to fall for scams, ransomware, and the like. They are not mentally challenged; there are just so many traps/fakes/tempting things that we as IT people are more aware of (though even we still fall for some).
We also can't count on every person being able to check every single thing they do: how do you check whether some food or drug you get is good or not? You can't, really; you have to trust someone who knows.
It’s a bit like the Elizabeth Warren toaster analogy. If you bought a toaster with shoddy wiring and it caught fire and burned down your house, everyone would blame the manufacturer and not sneer at you online for not learning electrical engineering and not checking the wiring yourself before using it.
It's more like if I buy a reliable toaster, but then buy bread that's secretly poisoned by its manufacturer and hurt myself. I'm not going to demand the toaster maker add a poison sensor to the toaster and say "how dare they not protect me!"
I don't buy this in the first place. It is reasonable to expect consumers to do some background research into the products they buy. In fact, it is the only way capitalism can function as a meritocracy.
Society should be more dangerous as a means to force people to learn more about technology they rely on.
As the family black sheep, whose statements are often regarded as the words of an emotionally unstable person, and who is familiar with quite a few instances of such people being correct and ignored because their inopportune (and completely preventable) trauma (if only someone had cared...) threatens the collective family bag/ego, I'm inclined to believe her.