This logic is both too broad and too rigid to be of much practical use[1]. It needs to be tightened to compare languages that are identical except for static type checks; otherwise the statically typed language could admit other kinds of errors (memory errors immediately come to mind) that many dynamic languages do not have, and you would need some way of weighing the relative cost to reliability of the different categories of errors.
Even if the two languages are identical except for the static types, it is clearly possible to write programs in the dynamic language that have no runtime type errors (I'll leave it as an exercise to the reader to prove this, but it is very clearly true), so there exist programs in any dynamic language that are just as reliable as their static counterparts.
[1] I also disagree with your definition of reliability but I'm granting it for the sake of discussion.
The claim was about reliability and lack of empirical evidence. Once framed that way, definitions matter. My argument is purely ceteris paribus: take a language, hold everything constant, and add strict static type checking. Once you do that, every other comparison disappears by definition. Same runtime, same semantics, same memory model, same expressiveness. The only remaining difference is the runtime error set.
Static typing rejects at compile time a strict subset of programs that would otherwise run and fail with runtime type errors. That is not an empirical claim; it follows directly from the definition of static typing. This is not hypothetical either. TypeScript vs JavaScript, or Python vs Python with a sound type checker, are real examples of exactly this transformation. The error profile is identical except the typed variant admits fewer runtime failures.
Pointing out that some dynamic programs have no runtime type errors does not contradict this. It only shows that individual programs can be equally reliable. The asymmetry is at the language level: it is impossible to deploy a program with runtime type errors in a sound statically typed language, while it is always possible in a dynamically typed one. That strictly reduces the space of possible runtime failures.
Redefining “reliability” does not change the result. Suppose reliability is expanded to include readability, maintainability, developer skill, team discipline, or development velocity. Those may matter in general, but they are not variables in this comparison. By construction, everything except typing is held constant. There is literally nothing else left to compare. All non-type-related factors are identical by assumption. What remains is exactly one difference: the presence or absence of runtime type errors. At that point, reliability reduces to failure count not as a philosophical choice, but because there is no other dimension remaining.
Between two otherwise identical systems, the one that can fail in fewer ways at runtime is more reliable. That conclusion is not empirical, sociological, or debatable. It follows directly from the setup.
Sometimes someone genuinely has a clear vision that is superior to the status quo and is capable of executing it, improving quality, performance and maintainability. The challenge is distinguishing these cases from the muddled abstractions that make everything worse. This argument feels a bit like "no one gets fired for buying IBM." Blanket advice like this is an invitation to shut down thinking and stymie innovation. At the same time, the author is not wrong that imposing a bad abstraction on an org is often disastrous. Use your powers of reason to distinguish the good and bad cases.
I had a roommate who failed out of college because he was addicted to EverQuest (yes, EverQuest, and yes, I am middle-aged). Your last paragraph is barely even hyperbolic. Do you think unemployed young men who live at home with their parents, do little to no physical activity, and spend most of their time playing video games and/or trolling on the internet are not stuck destroying their bodies (and minds) in a spiral of deadly addiction? Maybe you are a functional gamer, but there are many, many gamers who are not, and this technology is maybe a quasi-effective cope for our punishing society writ large, but from the outside, gaming addicts appear to be living sad and limited lives.
Or to put it more succinctly, would you want your obituary to lead with your Call of Duty prowess?
That quote is a category error. It’s about moral judgment of people, not epistemic evaluation of claims. I’m not condemning Le Guin, her character, or anyone who enjoys fiction. I’m saying a specific explanatory claim about how stories relate to truth and human cognition is false.
If “judge not” applied here, then no scientific criticism is permissible at all. You couldn’t say a theory is wrong, a model is flawed, or a claim is unsupported, because the critic is also imperfect. That standard would immediately end every serious discussion on HN.
Quoting scripture in response to an evolutionary and cognitive argument isn’t a rebuttal. It’s a frame shift from “is this claim true” to “are you allowed to say it.” That avoids engaging the substance entirely.
If you think the argument is wrong, point to the error. If not, appealing to moral humility doesn’t rescue a claim from being false.
The canonical Western text is Richard Wilhelm's German interpretation, translated into English by Cary Baynes. This site has the hexagram descriptions from that translation: https://www.iching.online/wilhelm.php
I recommend buying the book though. It is fascinating whether or not you buy into it.
In the medium to long term, if LLMs are unable to easily learn new languages and remap the knowledge they gained from training on different languages, then they will have failed in their mission of becoming a general intelligence.
> It clearly wasn’t ’that bad’ for most human history given how prevalent it was. But in the modern world we are trauma merchants.
What in the actual fuck kind of logic brought you here to that conclusion? I worry for you and whatever content you’ve consumed over the years that allowed you to build up this theory.
Well, I did a Psych minor for lols back in the day, and there's no doubt that almost all of current social psychology is made up of trauma merchants peddling their wares. 'Step 1: gaslight you into thinking you're fucked up. Step 2: convince you I'm the one who can help you' kind of shit. Psychologists obviously get upset by the suggestion.
But in terms of rape, as I said in my original comment it's just a suspicion. Happy to hear your counterarguments if you have any.
Was rape that common? Like in hunter gatherer times (most of human history) most mating would have been within the band. I don’t think intra-band mating would have been rapey, mostly. Incestuous by modern standards, for sure, but I don’t see why it would have been rapey. Inter-tribe conquest mating definitely happened, but was it really that common compared to the normal mode? It takes way more effort, at least.
My similarly controversial take is that modern rape is traumatic because rapists are no longer hanged in public. I think public hangings might have had an underappreciated healing effect on the psyche. Like if a guy who attacked you is still out there, he might attack you again. But if you saw him hang, you just might feel better.
It's been extremely prevalent. In terms of prehistory, we have lots of evidence that young women were almost always spared if one group massacred another, and we have genetic evidence that invariably the winning male bloodline would become predominant in any conquered group.
If you look at the Columbia link and do other research, it's pretty obvious that 'punishing' rapists has never really been about punishing them or giving women some kind of absolution. In the Code of Hammurabi and with the Jews, women who didn't scream so that others could hear were prosecuted for adultery or stoned lol. The idea of giving women the satisfaction of watching anything for their own benefit is a very modern notion and even now doesn't really exist anyway. That's just your personal fantasy. You can go back to the Assyrians to see that if, for example, you raped my virgin daughter, then I could legally rape your wife. It's mostly been a property or bloodline issue.

It's never been about the females, and that's another reason I think it's massively overblown in modern times. It's been normal human behaviour for millions of years. To put it another way, if you were a young 19-year-old female in a village that was being ransacked, say, 4000 years ago, you'd know what was going to happen to you if your males lost. I don't think it would have been that traumatising - the males in your village would have done it to the females in their village were the roles reversed. The 'trauma' is largely a modern phenomenon where everything has to be upsetting/triggering/trauma-inducing. Everybody has to be a victim these days.
As impressive as this analysis by the compiler is, I shudder to think how much time the compiler spends doing these kinds of optimizations. In my opinion, a better architecture would be for the compiler to provide a separate analysis tool that suggests source-level changes for these kinds of optimizations. It could alert you that the loop could be replaced with a popcount (and optionally make the textual replacement in the source code for you). Then you pay the cost of the optimization once and have the benefit of clarity about what your code _actually_ does at runtime, instead of the compiler transparently pulling the rug out from underneath you when run with optimizations.
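To make that concrete, here is a minimal C sketch of the kind of rewrite such a tool could suggest (function names are mine, purely for illustration): the classic bit-clearing loop that optimizers look for, next to the explicit builtin you would write instead.

    #include <stdint.h>

    /* The classic bit-counting loop. At -O2, clang (and recent gcc)
       can recognize this idiom and emit a single popcount instruction
       when the target supports one. */
    int count_bits_loop(uint32_t x) {
        int n = 0;
        while (x) {
            x &= x - 1;  /* clear the lowest set bit */
            n++;
        }
        return n;
    }

    /* The source-level replacement an analysis tool could suggest:
       state the intent directly via the compiler builtin. */
    int count_bits_builtin(uint32_t x) {
        return __builtin_popcount(x);
    }

After the rewrite, the source says popcount and the emitted code does popcount, with no silent transformation in between.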
Side note: many years ago I wrote the backend for a private global surveillance system that has almost surely tracked the physical location of anyone reading this. We could efficiently track how often a device had been seen at a location in the prior 64 (days|weeks|months) in just 192 bytes and use popcount to compute the value. I am not proud that I built this.
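For anyone wondering how that works, here is a deliberately simplified, hypothetical sketch of the general technique in C (the struct and names are mine; the real system's 192-byte record packed more than this): one bit per period per granularity, shifted on each rollover, counted with popcount.

    #include <stdint.h>

    /* One record per (device, location): a 64-bit presence bitmap for
       each granularity. Bit i means "seen i periods ago". */
    typedef struct {
        uint64_t days;
        uint64_t weeks;
        uint64_t months;
    } presence;

    /* Record an observation of the device at this location. */
    void mark_seen(presence *p) {
        p->days   |= 1ULL;
        p->weeks  |= 1ULL;
        p->months |= 1ULL;
    }

    /* Age the bitmaps at each day/week/month boundary. */
    void roll_day(presence *p)   { p->days   <<= 1; }
    void roll_week(presence *p)  { p->weeks  <<= 1; }
    void roll_month(presence *p) { p->months <<= 1; }

    /* "On how many of the last 64 days was this device seen here?"
       is a single popcount; likewise for weeks and months. */
    int days_seen(const presence *p)   { return __builtin_popcountll(p->days); }
    int weeks_seen(const presence *p)  { return __builtin_popcountll(p->weeks); }
    int months_seen(const presence *p) { return __builtin_popcountll(p->months); }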
I liked the first few paragraphs, but the article took a wrong turn for me when it pivoted to IR. Compiler IR need not be fiendishly complex, though modern compilers, particularly clang and gcc, can create this impression.
There are two nice benchmarks that I use to measure the complexity of a compiler: how long the compiler is, and how long it takes to self-compile. Clang and gcc are abysmal on both. In fact, I would argue that they fail the second benchmark entirely, because they are not capable of self-compilation: their build systems rely on external tooling. In other words, there is no implementation of gcc or clang that does not rely on an additional tool external to the compiler itself.
The main pedagogical value of these compilers is not as an exemplar, but rather as an antithesis.
If the boosters are correct about the trajectory of LLM performance, these objections do not hold.
Debugging machine code is only bad because of poor tooling. Surely, if vibe coding to machine code works, we should be able to vibe code better debuggers. Portability is a non-issue because the LLM would have full semantic knowledge of the problem and would generate optimal, or at least nearly optimal, machine code for any known machine. This would be better, faster, and cheaper than having the LLM target an intermediate language like C or Rust. Moreover, LLMs would have the ability to self-debug and fix their own bugs with minimal to no human intervention.
I don't think there is widespread understanding of how bloated and inefficient most real-world compilers (and build systems) are, burning huge amounts of unnecessary energy to translate high-level code, written by humans who have their own energy requirements, into machine code. It seems highly plausible to me that better LLMs could generate better machine code for less total energy expenditure (and, in theory, cost) than the human + compiler pair.
Of course I do not believe that any of the existing models are capable of doing this today, but I do not have enough expertise to make any claims for or against the possibility that the models can reach this level.