It tracks with my experience in software quality engineering. Asked to find problems with something already working well in the field. Dutifully find bugs, etc. Get told it's already working, though, so nobody will change anything. In dysfunctional companies, which is probably most of them, quality engineering exists to cover asses, not to actually guide development.
It is not dysfunctional to ignore unreachable "bugs". A memory leak on a missile that will never matter, because the missile explodes long before that much time has passed, is not a bug.
It's a debt, though. People will forget it's there, and then at some point someone changes a counter from milliseconds to microseconds and the issue hits 1000 times sooner.
It's never right to leave structural issues in place, even if "they don't happen under normal conditions".
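To make the milliseconds-to-microseconds point concrete, here's a minimal C sketch (the 32-bit uptime counter is hypothetical, not taken from any real flight software): a uint32_t tick counter that wraps after roughly 50 days of millisecond ticks wraps after roughly 72 minutes once each tick becomes a microsecond.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit uptime counter: how long until it wraps,
 * depending on whether a tick is a millisecond or a microsecond. */
int main(void) {
    const double ms_per_hour = 3600.0 * 1000.0;
    const double us_per_hour = 3600.0 * 1000.0 * 1000.0;

    printf("wraps after %.1f hours with ms ticks\n", UINT32_MAX / ms_per_hour); /* ~1193 h (~49.7 days) */
    printf("wraps after %.1f hours with us ticks\n", UINT32_MAX / us_per_hour); /* ~1.2 h  (~72 minutes) */
    return 0;
}
```

The "it can't happen in flight" assumption quietly moves from days to minutes with a one-word unit change.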
In hard real-time software you have a performance budget; blow it and the missile fails.
It might be more maintainable to leak than to write elaborate destruction routines, because then the only cost you have to reason about is allocation.
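For what it's worth, here's a hypothetical sketch of that "never free" approach in C (the names and pool size are made up, not from any real system): a bump allocator over a fixed pool sized for the worst-case run, with no free routine at all. The only cost left to reason about is allocation, and exhausting the pool is a spec violation you can check for, rather than a gradual destruction-order bug.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical "never free" allocator: one statically sized pool, bumped
 * forward on each allocation and never reclaimed. Only viable when the
 * total allocated over the longest possible run fits in the pool. */
#define POOL_SIZE (64u * 1024u)   /* assumed worst-case mission budget */

static uint8_t pool[POOL_SIZE];
static size_t  pool_used;

void *mission_alloc(size_t n) {
    n = (n + 7u) & ~(size_t)7u;       /* keep 8-byte alignment */
    if (n > POOL_SIZE - pool_used)
        return NULL;                  /* budget exceeded: detectable at test time */
    void *p = &pool[pool_used];
    pool_used += n;
    return p;
}

/* Deliberately no mission_free(): memory is "reclaimed" by power-off. */
```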
Java has a no-op garbage collector (Epsilon GC) for the same reason. If your financial application really needs good performance at any cost and you don't want to rewrite it, you can throw money (i.e. RAM) at the problem to make it go away.
I don't think this argument makes sense. You wouldn't provision a 100 GB server for a service where 1 GB would do, just in case unexpected conditions come up. If the requirements change, then the setup can change; doing it "just because" is wasteful. "What if we forget" is not a valid argument for over-engineering and over-provisioning.
If a fix is relatively low cost and makes the software easier to modify in the future, it also makes it cheaper to change the requirements later. In aggregate, these fixes pay off.
If a missile clears the long series of hurdles and hoops built into modern Defence T&E procurement, it will only ever be considered out of spec once it fails.
A good portion of platforms will go into service, be used for a decade or longer, and not once have their design modified before going end of life and being replaced.
If you wanted to progressively iterate on or improve these platforms, then yes, continual updates and investment in eradicating tech debt would be well worth the cost.
If you're strapping explosives attached to a rocket engine to your vehicle and pointing it at someone, there is merit in knowing it will behave exactly the same way it has the past 1000 times.
Neither ethos of modifying a system is necessarily wrong, but you do have to choose which one you're going with and understand its merits and drawbacks.
Again, when you're building a missile, nobody should "forget" a detail.
The specification states very clearly: "this missile SHALL NOT have a run time before reboot greater than 36 hours (ref. donut_count.c:423, integer counter overflows)".
Seriously, there's a military standard for pop tarts, and they'd get rejected if they had an out-of-spec amount of frosting on top. It is not the software world you live in.
It's not that they never make mistakes; it's that an extraordinary amount of effort goes into not making them, and oftentimes things are done "wrong" on purpose because of tradeoffs that ordinary Silicon Valley software engineers have no context for.
The way it always seemed to go for me, when I was in that role: the product is already complete, development is done, you're handed whatever tests the disinterested developers care to give you, and you're told to make those tests presentable and robust and to increase coverage. Doing that inevitably uncovers issues, but nobody cares, because the thing is already done and working. So what was the point of any of it? The point was just to check off a box. At companies like this, the role is bullshit work.