The compiler ensures that the code is valid, but what ensures that ‘// used a suboptimal sort because reasons’ is updated during a global refactor that changes the method? … some dude living in that module all day every day exercising monk-like discipline? That's undesirable for a few reasons, notably that such efforts routinely fail over time.
Module names, namespaces, and function names can lie too. But because they are actually used, their lies are made apparent, and when first fixed they are corrected wholesale and en masse. If right_pad() is updated so it’s actually left_pad(), it gets caught as an error source during implementation or as an independent naming issue in working code. If that misrepresentation is the source of an emergent error it will be visible and unavoidable in debugging if it’s in code, and the subsequent correction will be validated by the compiler (and therefore amenable to automated testing).
Lies in comments don’t reduce the potential for lies in code, but keeping inline comments minimal and focused on exceptional circumstances can meaningfully reduce the number of aggregate lies in a codebase.
> what ensures that ‘// used a suboptimal sort because reasons’ is updated during a global refactor that changes the method?
And for that matter, what ensures it is even correct the first time it is written?
(I think this is probably the far more common problem when I'm looking at a bug, newly discovered: the logic was broken on day 1, hasn't changed since; the comment, when there is one, is as wrong as the day it was written.)
An important addendum: code can sometimes, with a bit of extra thinking on the part of the reader, answer the 'why' question. But it's even harder for code to answer the 'why not' question. I.e. what other approaches did we try that didn't work? Or what business requirements preclude those other approaches?
> But it's even harder for code to answer the 'why not' question.
Great point. Well-placed documentation as to why an approach was not taken can be quite valuable.
For example, documenting that domain events are persisted in the same DB transaction as changes to corresponding entities and then picked up by a different workflow instead of being sent immediately after a commit.
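That pattern can be sketched concretely. The following is a minimal toy version using sqlite3, with a made-up schema (the `orders` and `outbox` tables and the `OrderPlaced` event are placeholders, not anyone's real design): the domain event is persisted in the same transaction as the entity change, and a separate workflow dispatches it later.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute(
    "CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, "
    "dispatched INTEGER DEFAULT 0)"
)

def place_order(order_id):
    # One transaction: the entity change and the event row commit together,
    # so there is no window where one exists without the other.
    with conn:
        conn.execute(
            "INSERT INTO orders (id, status) VALUES (?, 'placed')", (order_id,)
        )
        conn.execute(
            "INSERT INTO outbox (payload) VALUES (?)",
            (json.dumps({"event": "OrderPlaced", "order_id": order_id}),),
        )

def dispatch_pending():
    # A separate workflow picks events up after commit instead of
    # sending them inline with the original request.
    rows = conn.execute(
        "SELECT id, payload FROM outbox WHERE dispatched = 0"
    ).fetchall()
    for row_id, payload in rows:
        print("dispatching", payload)  # stand-in for sending to a broker
        conn.execute("UPDATE outbox SET dispatched = 1 WHERE id = ?", (row_id,))
    conn.commit()
```

A comment at the insertion point explaining *why* dispatch is deferred (crash safety, at-least-once delivery) is exactly the 'why not send immediately' documentation being discussed.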
I don't think this is enough to completely obsolete comments, but a good chunk of that information can be encoded in a VCS. It encodes all past approaches and also contains the reasoning, and the 'why not', in its annotations. You can also query this per line of your project.
Git history is incredibly important, yes, but also limited.
Practically, it only encodes information that made it into `main`, not what an author merely mulled over in their head, briefly prototyped, or ran an unrelated toy simulation on.
Yes, git ain't the only one, but apart from interface differences, they are pretty much equivalent in what they allow you to record in the history, I think?
Part of the problem here is that we use git for two only weakly correlated purposes:
- A history of the code
- Make nice and reviewable proposals for code changes ('Pull Request')
For the former, you want to be honest. For the latter, you want to present a polished 'lie'.
Not really. Launchpad.net does not have any public branches I could share atm as an example, but Bazaar (now Breezy) allowed having a nested "merge commit": your trunk would have "flattened" merge commits ("Merge branch foo"), and under each you could easily get to the individual commits by a developer ("Prototype", "Add test"...). It would really be shown as a tree, but the smartness was even richer.
This was made possible by using a DAG for commit storage and referencing, instead of relying on file contents and series of commits per reference. Merge behaviour was much smarter in case of diverging tips or criss-cross merges. But this ultimately was harder and slower to implement, and developers did not value it enough, instead accepting the Git trade-offs.
So you seamlessly did both with a different VCS without splitting those up: in a sense, computers and software worried about that for us.
You can select whether you want the diff to the first or the second parent, which is the difference between collapsing and expanding merges. You can also completely collapse merges by showing first-parent-history.
Or I do not understand what you mean by "the expected thing".
If you throw away commit messages, that is on you, it is not a limitation of Git. If I am cleaning up before merging, I'm maybe rephrasing things, but I am not throwing that information away. I regularly push branches under 'draft/...' or 'fail/...' to the central project repository.
The WIP commits I initially recorded also didn't necessarily exist as such in my file system and often don't really work completely, so I don't know why the commit after a rebase is any more a lie than the commit before the rebase.
It's a 'lie' in the sense that you are optimising for telling a convenient and easy to understand story for the reviewer where each commit works atomically.
The "honest" historical record of when I decided to use "git commit" while working on something is 100% useless for anyone but me (for me it's 90% useless).
git tracks revisions, not history of file changes.
You put past failed implementations in comments? That sounds like a nightmare. I'd rather only include a short description in the comment, which can then link to the older implementation if necessary.
But why would you ever put that into your VCS as opposed to code comments?
The VCS history has to be actively pulled up and reading through it is a slog, and history becomes exceptionally difficult to retrace in certain kinds of refactoring.
In contrast, code comments are exactly what you need and no more, you can't accidentally miss them, and you don't have to do extra work to find them.
I have never understood the idea of relying on code history instead of code comments. It seems like it's all downsides, zero upsides.
Because comments are a bad fit to encode the evolution of code. We implemented systems to do that for a reason.
> The VCS history has to be actively pulled up and reading through it is a slog
Yes, but it also allows you to query history, e.g. by function, which gets me to understanding much faster than wading through the current state and trying to piece information together from the status quo and comments.
> history becomes exceptionally difficult to retrace in certain kinds of refactoring.
True, but these refactorings also make it more difficult to understand other properties of code that still refers to the architecture pre-refactoring.
> I have never understood the idea of relying on code history instead of code comments. It seems like it's all downsides, zero upsides.
Comments are inherently linear with the code. That is sometimes what you need, but for complex behaviour you often want to comment things along another dimension, and that is what a VCS provides.
What I write is this:
/* This used to do X, but this causes Y and Z
and also conflicts with the FOO introduced
in 5d066d46a5541673d7059705ccaec8f086415102.
Therefore it now does BAR,
see c7124e6c1b247b5ec713c7fb8c53d1251f31a6af */
Both have their place. While I mostly agree with you, there's a clear example where git history is better: delete old or dead or unused code, rather than comment it out.
Agreed. Tests are documentation too. Tests are the "contract": "my code solves those issues. If you have to modify my tests, you have a different understanding than I had and should make sure it is what you want".
When I saw the title, I thought of Lambda Calculus[0] and SKI combinators[1]. Given that there are "only six useful colors", I wonder if M&Ms could be used to implement them.
Funny you mention that, because yes, a combinator-style encoding is probably a cleaner fit for the “only six colors” constraint than my stack machine. I hacked together a tiny SKI-flavored M&M reducer as a proof of concept: B=S, G=K, R=I, Y=(, O=), and N... is a free atom, so `B G G NNN` reduces to `a2`.
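A reducer like that can be sketched in a few dozen lines of Python. This is my own toy version, not the commenter's actual code: it uses the same color mapping (B=S, G=K, R=I, Y='(', O=')'), treats every other letter as a free atom, and simplifies to one letter per token.

```python
# Toy normal-order SKI reducer for the color encoding above.
COLORS = {"B": "S", "G": "K", "R": "I", "Y": "(", "O": ")"}

def tokenize(s):
    return [COLORS.get(c, c) for c in s if not c.isspace()]

def parse(tokens):
    """Build a left-associative application tree; applications are 2-tuples."""
    def expr(i):
        terms = []
        while i < len(tokens) and tokens[i] != ")":
            if tokens[i] == "(":
                t, i = expr(i + 1)
                i += 1                      # skip the closing ")"
            else:
                t, i = tokens[i], i + 1
            terms.append(t)
        node = terms[0]
        for t in terms[1:]:
            node = (node, t)
        return node, i
    return expr(0)[0]

def step(t):
    """One leftmost reduction step, or None if t is in normal form."""
    if isinstance(t, tuple):
        f, x = t
        if f == "I":                                   # I x -> x
            return x
        if isinstance(f, tuple) and f[0] == "K":       # K x y -> x
            return f[1]
        if (isinstance(f, tuple) and isinstance(f[0], tuple)
                and f[0][0] == "S"):                   # S x y z -> x z (y z)
            return ((f[0][1], x), (f[1], x))
        r = step(f)
        if r is not None:
            return (r, x)
        r = step(x)
        if r is not None:
            return (f, r)
    return None

def normalize(t, limit=10_000):
    while (r := step(t)) is not None:
        limit -= 1
        if limit == 0:
            raise RuntimeError("no normal form within limit")
        t = r
    return t

def show(t):
    if isinstance(t, tuple):
        f, x = t
        rhs = "(" + show(x) + ")" if isinstance(x, tuple) else show(x)
        return show(f) + " " + rhs
    return t
```

For example `BGGN` is `S K K N`, and since `S K K` is the identity, it reduces to the bare atom `N`.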
It is important to remember that clarifying the legal implications of "pledge" is entirely different than supporting and/or defending this instance of its usage.
One can do the former whilst repudiating the latter and remain logically consistent.
I'm not understanding why clarifying the legal implications is important if it's a smoke screen for everyone involved doing what they are going to do anyway. It seems more like a distraction away from the real problems.
Using Claude to provide a legal definition of "pledge" is unconvincing at best.
> What are the legal protections of a “pledge”?
To answer that question is to first agree upon the legal definition of "pledge":
pledge
v. to deposit personal property as security for a personal
loan of money. If the loan is not repaid when due, the
personal property pledged shall be forfeit to the lender.
The property is known as collateral. To pledge is the same
as to pawn. 2) to promise to do something.[0]
Without careful review of the document signed, it is impossible to verify which form of the above is applicable in this case.
> A pledge is a public commitment or statement of intent, not a binding legal contract.
This very well may be incorrect in this context and serves as an exemplar of why relying upon statistical document generation is not a recommended legal strategy.
No, this is not my goal. My goal was to illuminate that Claude is a product which produces the most statistically relevant content to a prompt submitted therein.
> I'm not sure why your failure to do so should be taken up with law.com?
The post to which I originally replied cited "Claude" as if it were an authoritative source. To which I disagreed and then provided a definition from law.com. Where is my failure?
> Law.com's first definition is inapplicable.
From the article:
The pledge includes a commitment by technology companies to
bring or buy electricity supplies for their datacenters,
either from new power plants or existing plants with
expanded output capacity. It also includes commitments from
big tech to pay for upgrades to power delivery systems and
to enter special electricity rate agreements with utilities.[0]
> That leaves us with the second definition, which says nothing about whether a pledge is legally binding.
To which I originally wrote:
Without careful review of the document signed, it is
impossible to verify which form of the above is applicable
in this case.
Said article is not about a loan backed by a security agreement. That eliminates law.com definition 1.
Law.com definition 2 is silent on whether pledges are binding.
Thus ended your research.
I don't know why you care if Claude.com is authoritative. Law.com isn't either, the authoritative legal references are paywalled. A law dictionary, as we've demonstrated by law.com's second definition's vagueness, isn't necessarily even the correct reference to consult.
Your failure, I suppose, is that you provided worse information than Claude. I suppose you should have typed "Don't cite Claude please" and moved on.
> Your answer is less useful and thought out than the Claude response.
"Less useful" is subjective and I shall not contend. "Less thought out" is laughable as I possess the ability to think and "Claude" does not.
> Claude actually answers the question in the context in which it's being asked.
The LLM-based service generated a statistically relevant document in response to the prompt given, which you, presumably a human, interpreted as something that "actually answers the question". This is otherwise known as anthropomorphism[0].
> We need key AI researchers at these companies to speak out ...
See this[0] article from Business Insider dated 2026-02-16 titled:
The art of the squeal
What we can learn from the flood of AI resignation letters
And containing:
This past week brought several additions to the annals of
"Why I quit this incredibly valuable company working on
bleeding-edge tech" letters, including from researchers at
xAI and an op-ed in The New York Times from a departing
OpenAI researcher. Perhaps the most unusual was by Mrinank
Sharma, who was put in charge of Anthropic's Safeguards
Research Team a year ago, and who announced his departure
from what is often considered the more safety-minded of the
leading AI startups.
> It's kind of like the small string optimization you see in C++ ...
Agreed. These types of optimizations can yield significant benefits and are often employed in language standard libraries. For example, the Scala standard library employs an analogous optimization in their Set[0] collection type.
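As a sketch of the idea (not Scala's actual implementation, which uses dedicated classes per small arity): keep tiny sets in a flat tuple, where a linear scan beats hash-table overhead, and only fall back to a real hash set past a threshold.

```python
class SmallSet:
    """Toy small-collection optimization: tuple up to THRESHOLD elements,
    then switch to a real hash set. Names and threshold are illustrative."""

    THRESHOLD = 4

    def __init__(self):
        self._small = ()   # linear scan is fine at this size, no hashing
        self._big = None   # becomes a set() once we outgrow the tuple

    def add(self, x):
        if self._big is not None:
            self._big.add(x)
        elif x not in self._small:
            if len(self._small) < self.THRESHOLD:
                self._small += (x,)
            else:
                self._big = set(self._small) | {x}

    def __contains__(self, x):
        return x in self._big if self._big is not None else x in self._small

    def __len__(self):
        return len(self._big) if self._big is not None else len(self._small)
```

The win is the same as with small strings: the common small case avoids an allocation-heavy general-purpose structure entirely.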
This is the recommendation I have heard peers, both technical and managerial, echo for years in one form or another:
4. Upskill professionally. We're not hiring code monkeys
for $200K-400K TC. We want Engineers who can communicate
business problems into technical requirements. This means
also understanding the industry your company is in, how to
manage up to leadership, and what are the revenue drivers
and cost centers of your employer. Learn how to make a
business case for technical issues. If you cannot
communicate why refactoring your codebase from Python to
Golang would positively impact topline metrics, no one will
prioritize it.
The above involves one thing people can possess which GenAI cannot: understanding stakeholder problems which need to be solved and then doing so.
You seem to have forgotten politics, since at the managerial level that is the most effective tool at hand. Engineers with their arguments and rhetoric be damned.
Engineers can make an argument stick if they can logically and coherently tie it to outcomes that can grow pipeline and/or revenue.
Most customers that matter to a business don't churn due to subpar user experience - discounting, roadmap, and dedicating a subset of engineering staff to handle bugs originating from a handful of the most important accounts is enough to prevent churn.
That said, this advice only really holds in the US (and that too in the major tech hubs). If you work in Western Europe you're shit out of luck as a SWE - management culture there just doesn't give a shit about software, because for most Western European businesses software is a cost center, not a revenue generator.
>> Attacks like this are not helped by the increasingly-common "curl | bash" installation instructions ...
> It's not really any different than downloading a binary from a website, which we've been doing for 30 years.
The two are very different, even though some ecosystems (such as PHP) have used the "curl | bash" idiom for about the same amount of time. Specifically, binary downloads from reputable sites have separately published hashes (MD5, SHA, etc.) to confirm what is being retrieved along with other mechanisms to certify the source of the binaries.
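For illustration, the verification step boils down to something like this (the file name is a placeholder, and the published digest must come from a channel you trust independently of the download itself):

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, published_digest):
    """Refuse to proceed unless the download matches the published hash."""
    if sha256_of(path) != published_digest:
        raise SystemExit(f"hash mismatch for {path}: refusing to run it")
```

The contrast with `curl | bash` is that the pipe executes whatever arrived, with no checkpoint where a digest comparison could even happen.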
Which is the reason why it's better to actually cryptographically sign the packages, and put a key in some trusted keystore, where it can actually be verified to belong to the real distributor, as well as proving that the key hasn't been changed in X amount of days/months/years.
Still doesn't address the fact that keys can be stolen, people can be tricked, and the gigantic all-consuming issue of people just being too lazy to go through with verifying anything in the first place. (Which is sadly not really a thing you can blame people for; it takes up time for no easily discernible reason other than the vague feeling of security, and I myself have skipped it many more times than I would like to admit...)
> If the attacker already controls the download link and has a valid https certificate, can't they just modify the published hash as well?
This implies an attacker controlling the server that holds the certificate's private key, or the private key otherwise being exfiltrated (likely in conjunction with a DNS poisoning attack). There is no way for a network client to defend against this type of TLS[0] compromise.
Comments only lie if they are allowed to become one.
Just like a method name can lie. Or a class name. Or ...