I think if you ask something generic like “shoes”, this could be true.
When I’ve worked with Claude on finding brands for fashion (e.g. here’s a small watchmaker I like, what are similar options?) it does research and picks great options. Some are big, others are small producers.
If you add to that the very broad limits of what the current administration considers "legal" (as in "pretty much anything we want to do"), I can understand feeling uneasy as a Google employee...
What does that mean? How does one come to a personal moral conclusion? Vibes?
(I take "moral framework" to mean a principled stance that gives objective grounding for a moral judgement. I agree that we can come to a moral judgement without putting it through a systematic and discursive defense, and I reject the notion that there are many moralities or that they are arbitrary, but it is also true that diverging conceptions of the basis of morality will frustrate agreement. Stopping at personal moral judgement does not lend itself to fruitful dialogue and understanding, as it constrains the domain of what is intersubjectively knowable.)
Things are hacked together and extremely difficult to change (without a pile of more hacks). Palantir is more interested in embedding itself deeper and manipulating RFPs than in helping orgs operate more effectively; they waaaaaay overpromise during sales and can't deliver; costs and timelines overrun by a lot; and they'll shift the goalposts by trying to sell the next Magic Fix before the first thing is finished (because they oversold/botched implementation) or has delivered value commensurate with its cost.
Perhaps. But they made $1.6 billion in net income in 2025, which, from a business perspective, makes them about $10.6 billion more competent than OpenAI.
You can view it however you want, but reality disagrees with you. Palantir's profit comes from real customers paying real money for their real products.
And it's hilarious that you would compare Palantir to a crypto pump-and-dump while claiming OpenAI creates more value and is more successful.
They had to negotiate away the non-profit structure of OpenAI. Sam used that as a marketing and recruiting tool, but it had outlived that and was only a problem from then on.
For OAI to be a purely capitalist venture, they had to rip that out. But since the non-profit owned control of the company, it had to get something for giving up those rights. This led to a huge negotiation and MSFT ended up with 27% of a company that doesn’t get kneecapped by an ethical board.
In reality, though, the board of both the non-profit and the for profit are nearly identical and beholden to Sam, post–failed coup.
People understand that everyone makes mistakes and firing anyone who does only leads to people prioritizing hiding their mistakes vs. fixing them.
It’s helpful, whenever you find yourself saying something like, “the only real explanation to me”, to think of a good faith version before assuming that the most cynical take is reality.
I think there are mistakes and then there are mistakes.
There is a point where the postmortem needs to stop being blameless.
Getting things like this wrong is an existential risk to an important institution. We can't be genuinely concerned about lost faith in institutions and also not hold them to the highest levels of accountability.
The issue is that these areas are optimized for efficiency, so we don't build spare capacity or the surrounding infrastructure for fallbacks, and relatively small-likelihood events carry tremendous risk-adjusted costs.
If you have one event with a 10% chance of throwing off the world's semiconductors, that's incredibly dangerous and worth talking about. If you have five such things (the quartz mine, bromine conversion, helium supply, etc.), there's still roughly a 59% chance (0.9^5) that none of those events lands, which means about a 41% chance that at least one does.
Even still, it’s worth raising alarm about each and every one of them, because a single failure causes so much collateral damage. But people assume if something didn’t happen, it wasn’t worth prepping for.
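The arithmetic above is easy to check directly. A minimal sketch, assuming five independent events, each with a 10% chance of occurring (the specific probabilities and independence are illustrative assumptions, not claims about the actual supply chains):

```python
# Probability that none of several independent low-likelihood events occurs,
# vs. the probability that at least one does.
p_event = 0.10   # assumed per-event probability (illustrative)
n_events = 5     # quartz mine, bromine conversion, helium supply, etc.

p_none = (1 - p_event) ** n_events   # 0.9^5, assuming independence
p_at_least_one = 1 - p_none

print(f"P(none occur)        = {p_none:.3f}")          # 0.590
print(f"P(at least one)      = {p_at_least_one:.3f}")  # 0.410
```

The point survives the rounding: even when each individual risk will most likely never materialize, stacking a handful of them makes "something goes wrong" closer to a coin flip than a fringe scenario.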