tyre's comments | Hacker News

I think if you ask something generic like “shoes”, this could be true.

When I’ve worked with Claude on finding brands for fashion (e.g. here’s a small watchmaker I like, what are similar options?) it does research and picks great options. Some are big, others are small producers.


It’s not, but legal is not the same as ethical.

For a long time, and probably still, it was legal for the US to torture enemy combatants. It was never ethical.


If you add to that the very broad limits of what the current administration considers "legal" (as in "pretty much anything we want to do"), I can understand feeling uneasy as a Google employee...

You’d need some shared ethical/moral framework to make that claim, which doesn’t really seem to exist anymore

You don't need a shared moral framework to come to a personal moral conclusion.

What does that mean? How does one come to a personal moral conclusion? Vibes?

(I take "moral framework" to mean a principled stance that gives objective grounding for a moral judgement. I agree that we can come to a moral judgement without putting it through a systematic and discursive defense, and I reject the notion that there are many moralities or that they are arbitrary, but it is also true that diverging conceptions of the basis of morality will frustrate agreement. Stopping at personal moral judgement does not lend itself to fruitful dialogue and understanding, as it constrains the domain of what is intersubjectively knowable.)


You can, but it’s easier to convince them to want something that they don’t need or is actively harmful.

People I’ve spoken to in DoD strongly disagree with you there.

Competent at doing the things the DoD ought to do? Or competent at getting paid to do things for the DoD?

What are their complaints?

Things are hacked together and extremely difficult to change (without a pile of more hacks); Palantir is more interested in embedding itself deeper and manipulating RFPs than in helping orgs operate more effectively; they waaaaaay overpromise during sales and can't deliver; costs and timelines overrun by a lot; and they'll shift the goalposts by trying to sell the next Magic Fix before the first thing is finished (because they oversold or botched the implementation) or has delivered value commensurate with its cost.

Perhaps. But they made $1.6 billion in net income in 2025, which, from a business perspective, makes them about $10.6 billion more competent than OpenAI.

We view competence differently. I value things outside of simply making money.

See: the crypto argument that it’s successful because number go up when it is almost entirely pump and dumps and money laundering.

I don’t view that as success, but people do.


You can view it however you want, but reality disagrees with you. Palantir's profit comes from real customers paying real money for their real products.

And it's hilarious that you would compare Palantir to a crypto pump-and-dump while claiming OpenAI creates more value and is more successful.


They had to negotiate away the non-profit structure of OpenAI. Sam used that as a marketing and recruiting tool, but it had outlived that purpose and was only a problem from then on.

For OAI to be a purely capitalist venture, they had to rip that out. But since the non-profit owned control of the company, it had to get something for giving up those rights. This led to a huge negotiation and MSFT ended up with 27% of a company that doesn’t get kneecapped by an ethical board.

In reality, though, the boards of both the non-profit and the for-profit are nearly identical and beholden to Sam, post-failed coup.


Or!

People understand that everyone makes mistakes, and firing anyone who does only leads to people prioritizing hiding their mistakes over fixing them.

It’s helpful, whenever you find yourself saying something like, “the only real explanation to me”, to think of a good faith version before assuming that the most cynical take is reality.


I think there are mistakes and then there are mistakes.

There is a point where the postmortem needs to stop being blameless.

Getting things like this wrong is an existential risk to an important institution. We can’t be genuinely concerned about lost faith in institutions and also not hold them to the highest levels of accountability.


A little “bank error in your favor” sitchu. We love to see it.

"See what?" --Gavin

Facebook was built before Claude Code existed.

I mean search engine results are pretty poor and have been for a long time. They reflect SEO, not credibility or quality.

LLMs have plenty of issues, but they’re relatively clean compared with what the future will look like.


The issue is that these areas are optimized for efficiency, so we don’t build capacity or the surrounding infrastructure for fallbacks, and relatively low-likelihood events have tremendous risk-adjusted costs.

If you have one event with a 10% chance of disrupting the world’s semiconductor supply, that’s incredibly dangerous and worth talking about. And if you have five such independent risks (the quartz mine, bromine conversion, helium supply, etc.), there is only about a 59% chance that none of those events lands; in other words, roughly a 40% chance that at least one does.
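The arithmetic above can be sketched as follows, assuming (as a simplification) that each risk is independent and has the same 10% chance of materializing:

```python
# Compounding-risk arithmetic: five independent low-probability events.
p_event = 0.10  # assumed 10% chance that any one risk materializes
n_risks = 5     # e.g. quartz mine, bromine conversion, helium supply, ...

p_none = (1 - p_event) ** n_risks  # probability that none of them lands
p_at_least_one = 1 - p_none        # probability that at least one lands

print(f"P(none): {p_none:.1%}")                  # ~59.0%
print(f"P(at least one): {p_at_least_one:.1%}")  # ~41.0%
```

Each individual risk looks comfortably unlikely, but across five of them the chance that at least one lands is roughly 40%, which is why each one is still worth prepping for.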

Even still, it’s worth raising alarm about each and every one of them, because a single failure causes so much collateral damage. But people assume if something didn’t happen, it wasn’t worth prepping for.

