Hacker News | cryptonector's comments

> In the example it seems pretty clear to me that:

> Mutex::new(AppConfig::default());

> ...is meant to be acquiring a mutex protecting some global config object, yes? That's what I'm calling a "global lock".

You could certainly have a global lock at the top-most level, but you're not required to. The example is just an example.
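The Rust line under discussion wraps a config value in a mutex, and the point is that nothing forces that mutex to be a global. A minimal sketch of the same distinction, transposed to Python's `threading.Lock` so it is runnable here (the `Server` class and its methods are hypothetical names, not from the original thread): each instance owns its own lock, so two instances never contend with each other.

```python
import threading
from dataclasses import dataclass

@dataclass
class AppConfig:
    verbose: bool = False

class Server:
    """The lock is owned by the instance, not by the module:
    two Server objects do not contend with each other."""
    def __init__(self) -> None:
        self._lock = threading.Lock()   # per-instance, not a global
        self._config = AppConfig()

    def set_verbose(self, value: bool) -> None:
        with self._lock:
            self._config.verbose = value

    def is_verbose(self) -> bool:
        with self._lock:
            return self._config.verbose

s1, s2 = Server(), Server()
s1.set_verbose(True)
print(s1.is_verbose(), s2.is_verbose())  # True False
```

A module-level `LOCK = threading.Lock()` would be the "global lock" version of the same thing; which one you want depends on whether the config itself is process-wide.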


Imagine the cost of a Shahed-type drone being as low as $5,000, or less. Imagine the cartels south of the U.S. having tens of thousands, maybe hundreds of thousands of them. It could get painful fast. That's one thing this war is showing.

Things are cheap and easy enough that this sort of state-of-the-art warfare is accessible to individuals.

It was obvious ten years ago that this was coming, but saying so just made you sound crazy.

I said a similar thing when the Nord Stream pipeline was popped. Basically anyone with, say, $400k to rent a boat and a work-class underwater ROV could have done it. Sure, pricey, but that's low enough that a single individual could have financed it.


Those supply chains are highly visible and relatively limited. Building a vast number of Shahed-level drones is going to be noticed long before you actually build them.

You mean more visible than a drug distribution network?

I dunno anyway. Buying a reel of a thousand of some component here and there wouldn't appear on anybody's radar. Maybe the motors would be a problem, but then, Mexico has plenty of capable motor factories.


Sure hope so. But what if it's a partnership with parts of the Mexican army or something?

If the cartels attack American civilian infrastructure with drones, the American public will support a full-on land invasion and annexation of Mexico if they're told that will make it stop.

This makes no sense. The only real dangers are religious nutters like Iran, the USA, and Israel. Everyone else just wants to make money.

So it's your argument that if the US, Iran, and Israel laid down arms the world would find an indefinite peace?


So what is the point of calling out three countries and saying they are the only real dangers?

Not only that, but it has to be tested from much higher altitudes in order to reach the much higher re-entry velocities it will see IRL. That makes testing Orion very expensive. Testing Crew Dragon was much, much cheaper.

Absolutely not. I will not even consider the word of an organization that has repeatedly failed to learn from its past mistakes. They need to demonstrate an ability to learn first, and to do so they need to take these concerns seriously. That means no astronauts on Artemis II.

Not every lawsuit that is heard by a court goes to trial.

Right. This was strongly implied by my comment.

Wow, that's horrible.

Claude handles human languages other than English just fine.

Specifically, the Y combinator enables recursion in a language that otherwise does not support recursion but does support closures.
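This can be sketched in Python, using the strict-evaluation variant (usually called the Z combinator, since Python is not lazy). Nothing below refers to itself by name; the recursion emerges purely from self-application of closures:

```python
# Z combinator: the strict-language variant of the Y combinator.
# Z = λf. (λx. f (λv. x x v)) (λx. f (λv. x x v))
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# An "open" factorial: it receives its recursive call as a parameter
# instead of naming itself.
fact_open = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)

fact = Z(fact_open)
print(fact(5))  # 120
```

The eta-expansion (`lambda v: x(x)(v)` rather than `x(x)`) is what keeps a strict language from looping forever while building the fixed point.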

> But again - the hard part is not cloning the product, it's stealing your customers.

Yes. A Red Hat, a Microsoft -- these companies have processes, organizational structure, politics, friction, etc. They might like your products, but replicating them might not be easy for reasons that have nothing to do with whether they are free to do it. Small shops with vision might well have a bright future, for a while, maybe.


> Especially for LLMs, they are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date is not in the training data, so you only have to be fast enough.

First of all: it's not as though no new LLMs are being trained. Of course they are.

Second: LLMs that learn on the fly are not far off, and since they can typically search the web via agents, they can effectively "learn" now. They can also learn (not as well) by writing notes into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions -- I've noticed this with Claude.

Third: already we see some AI companies wanting to train their models on your prompts. It's going to happen.

> The next thing is that we also have open source and open weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.

There's a pretty good chance that LLMs buff open source, yes.

> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.

> Why should this happen? The moment you make your idea public, anyone can build it. [...]

This was always the case, but now the cycle is faster. So if you must use an LLM, you might use one that you run on your own hardware -- then your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLM's) searches, and in some cases that will be enough for them to figure out what you're up to. Oh sure, maybe the Microsofts and Googles of the world will not be able to capitalize on the millions of interesting ideas floating about, but still: the moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.


Why would they buff open source? Why would Microsoft Copilot, with its insane costs, ever be used for that purpose?

Or the insane costs of any serious LLM -- how does Anthropic get a return on investment by improving FOSS?

The end state is a walled garden and technofeudalism.

