Oh man, I sure hope he disclosed that

For starters, money. There is no better value out there that I'm aware of than Claude Code Max. Claude Code also just works way better than Opencode, in my experience. Though I know there are those who have experienced the exact opposite.

I find Claude Code bloated and a bit clunky. Those same Claude models work better in Opencode, where I can also combine them with other providers.

The fact that Anthropic recently started blocking their coding plans from being used with other tools is telling. They're in the phase where they realize they can't compete in an open field and need to retreat behind their fortress gates and hope to endure a siege from stronger opponents.


> what if the current prices really are unsustainable and the thing goes 10x?

Where does this idea come from? We know how much it costs to run LLMs. It's not like we're waiting to find out. AI companies aren't losing money on API tokens. What could possibly happen to make prices go 10x when they're already running at a profit? Claude Max might be a different story, but AI is going to get cheaper to run. Not randomly 10x for the same models.


From what I've read, every major AI player is losing a lot of money on running LLMs, even just with inference. It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated), but if the screws start being turned on investment dollars, they not only have to raise the prices of their current offerings (a 2x increase wouldn't shock me), but some of them also need a massive influx of capital to handle things like datacenter build obligations (tens of billions of dollars). So I don't think it's crazy to think that prices might go up quite a bit. We've already seen waves of it, like last summer when Cursor suddenly became a lot more expensive (or less functional, depending on your perspective).

Dario Amodei has said that their models actually have a good return, even when accounting for training costs [0]. They lose money because of R&D, training the next bigger models, and I assume also investment in other areas like data centers.

Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply. All of this makes me believe them when they say they are profitable on API usage. Usage on the plans is a bit more unknown.

[0] https://youtu.be/GcqQ1ebBqkc?si=Vs2R4taIhj3uwIyj&t=1088


We can also look at inference costs at third-party providers.

Their whole company has to be profitable, or at least not run out of money/investors. If you have no cash, you can't just point to one part of your business as being profitable, given that that part will quickly become hopelessly out of date when other models overtake it.

Other models will only overtake as long as there is enough investor money or margins from inference for others to continue training bigger and bigger models.

We can see from inference costs at third party providers that the inference is profitable enough to sustain even third party providers of proprietary models that they are undoubtedly paying licensing/usage fees for, and so these models won't go away.
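
As a rough sanity check of that claim, here's the back-of-envelope math (every number below is a made-up assumption for illustration, not a real provider figure):

    # Hypothetical inference economics; all figures are assumptions.
    gpu_cost_per_hour = 2.00    # assumed GPU rental price, USD/hour
    tokens_per_second = 1000    # assumed aggregate throughput across batched requests

    cost_per_mtok = gpu_cost_per_hour / (tokens_per_second * 3600) * 1_000_000
    print(f"~${cost_per_mtok:.2f} per 1M tokens")  # ~$0.56 on these assumptions

    assumed_api_price = 5.00  # hypothetical listed price per 1M tokens
    print(f"implied gross margin: {(assumed_api_price - cost_per_mtok) / assumed_api_price:.0%}")  # ~89%

If the listed API price sits well above the implied serving cost, inference is plausibly profitable; if it sits below, someone is eating the difference.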


Yeah, that’s the whole game they’re playing. Compete until they can’t raise more and then they will start cutting costs and introducing new revenue sources like ads.

They spend money on growth and new models. At some point that will slow and then they’ll start to spend less on R&D and training. Competition means some may lose, but models will continue to be served.


> Sam Altman has made similar statements, and Chinese companies also often serve their models very cheaply.

Sam Altman got fired by his own board for dishonesty, and a lot of the original OpenAI people have left. I don't know the guy, but given his track record I'm not sure I'd just take his word for it.

As for Chinese models: https://www.wheresyoured.at/the-enshittifinancial-crisis/#th...

From the article:

> You’re probably gonna say at this point that Anthropic or OpenAI might go public, which will infuse capital into the system, and I want to give you a preview of what to look forward to, courtesy of AI labs MiniMax and Zhipu (as reported by The Information), which just filed to go public in Hong Kong.

> Anyway, I'm sure these numbers are great... oh my GOD!

> In the first half of this year, Zhipu had a net loss of $334 million on $27 million in revenue, and guess what, 85% of that revenue came from enterprise customers. Meanwhile, MiniMax made $53.4 million in revenue in the first nine months of the year, and burned $211 million to earn it.
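
Plugging in just the figures quoted above (treating "burned" as total spend, which is my assumption about the reporting):

    # Spend per revenue dollar, using only the numbers from the quote.
    zhipu_revenue, zhipu_net_loss = 27.0, 334.0   # USD millions, H1
    minimax_revenue, minimax_burn = 53.4, 211.0   # USD millions, first 9 months

    print(f"Zhipu: ~${(zhipu_revenue + zhipu_net_loss) / zhipu_revenue:.1f} spent per $1 earned")  # ~$13.4
    print(f"MiniMax: ~${minimax_burn / minimax_revenue:.1f} burned per $1 earned")                 # ~$4.0

Those ratios are the kind of thing public-market investors would have to swallow.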


This is my understanding as well. If GPT made money, wouldn't the companies that run it already be publicly traded?

Furthermore, the companies that are publicly traded show that, overall, these products are not economical. Meta and MSFT are great examples of this, though investors have recently appraised their results in opposite ways. Notably, OpenAI and MSFT are more closely linked than any other Mag7 company is with an AI startup.

https://www.forbes.com/sites/phoebeliu/2025/11/10/openai-spe...


Going public is not a trivial thing for a company to do. You may want to bring in additional facts to support your thesis.

Going public also brings with it a lot of pesky reporting requirements and challenges. If it weren't for the benefit of liquidity for shareholders, "nobody" would go public. If the bigger shareholders can get enough liquidity from private sales, or have a long enough time horizon, there's very little to be gained from going public.

> From what I've read, every major AI player is losing a lot of money on running LLMs, even just with inference.

> It's hard to say for sure because they don't publish the financials (or if they do, it tends to be obfuscated)

Yeah, exactly. So how the hell do the bloggers you read know AI players are losing money? Are they whistleblowers? Or are they pulling numbers out of their asses? Your choice.


Some of it's whistleblowers, some of it is pretty simple math and analysis. Some of it's just common sense. Constantly raising money isn't sustainable and just increases obligations dramatically. If these companies didn't need the cash to keep operating, they probably wouldn't be asking for tens of billions a year, because doing so creates profit expectations that simply can't be delivered on.

Sam Altman is on record saying that OpenAI is profitable on inference. He might be lying, but it seems an unlikely thing to lie about.

Where did you get this notion from? You must not be old enough to know how subscription services play out. Ask your parents about their internet or mobile bills. Or at the very least, check Azure's, AWS's, and Netflix's historical pricing.

Heck, we were spoiled by "memory is cheap," but here we are today wasting it at every turn while prices keep skyrocketing (PS: they ain't coming back down). If you can't see the shift toward forced subscriptions via technologies disguised as "security" (i.e. secure boot) and via monopolistic distribution (Apple, Google, Amazon) or the OEMs, you're running with blinders on. Computing's future, as it's heading, will be closed ecosystems that are subscription-serviced and mobile-only. They'll nickel-and-dime users for every nuanced freedom of expression they can.

Is it crazy to correlate the price of memory with our ability to run LLMs locally?


> Ask your parents about their internet or mobile bills. Or at the very least, check Azure's, AWS's, and Netflix's historical pricing.

None of these went 10x. Actually, for me the internet went to 0.0001~0.001x in terms of cost per bit. I lived through the dial-up era.
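
A quick back-of-envelope with assumed prices (~56 kbps dial-up at ~$20/month vs. ~500 Mbps broadband at ~$60/month; your numbers may differ):

    # Cost per bit, dial-up era vs. today. Prices/speeds are rough assumptions.
    dialup_mbps, dialup_price = 0.056, 20.0   # ~56 kbps at ~$20/month (assumed)
    fiber_mbps, fiber_price = 500.0, 60.0     # ~500 Mbps at ~$60/month (assumed)

    ratio = (fiber_price / fiber_mbps) / (dialup_price / dialup_mbps)
    print(f"cost per bit today is ~{ratio:.4f}x the dial-up era")  # ~0.0003x

Right in that 0.0001~0.001x range.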


Yes, they are very obviously suggesting it didn't happen. I have no idea why certain people on the left want to ignore what is happening in Iran, and even pretend that nothing problematic is happening there.

Nobody’s saying nothing problematic is happening in Iran.

What I’m saying is that a lot of people are extremely interested in seeing Iran fall and that Western media paid by Saudi Arabia has exactly zero credibility. So get better sources, that’s all.


That seems like circular logic.

You're saying parental responsibility should govern because TikTok is legal, while cigarettes require state intervention because they're illegal. But they are only illegal because we made them illegal (for minors). And isn't that exactly what is being discussed here?

For the sake of consistency, do you think cigarettes should be legal for minors if they have parental consent? If not, what is the distinction between TikTok and cigarettes that causes you to think the government should be involved in one but not the other?


What I am saying is that if you want to regulate social media companies, pass a law, don't punish companies for breaking a law that isn't on the books.

The harm from cigarette use is direct, and there is no level of cigarette use that can be considered safe and healthy. Additionally, it would be very difficult for parents to prevent their children from getting cigarettes if kids could walk into any convenience store and buy them. On the other side, social media use can be harmful, but it is possible to use social media in a healthy way.

I'm curious where it ends when you start banning kids from things that are only potentially addictive or harmful. Should parents be able to let their children watch TV, play video games, or have a phone or tablet?

What's the distinction between those things and social media for you?


Ah yes, Tim isn't running again because there is no truth to it. My god. Some of you are so obsessed with the "narrative" that you'll look at the sun and say it's night.

Their daycares, or their "daycares"? Not clear which one you mean.

I was not aware of that fake daycare propaganda until someone else exposed its meaning later in the thread.

As a parent, you should know that believing this obviously false propaganda requires both 1) a weird and overly specific interest in daycares, and 2) not enough normal, healthy exposure to kids to understand that daycares don't let weird freaks come inspect the children. Frankly, repeating this obvious lie gives off pedo vibes, and I would never let you near my children after hearing you gobble up that propaganda uncritically and then go so far as to spread it. Ick


Which is probably the easiest thing ever to prove, since people are openly trying to impede them.

There are already people on X who have infiltrated chats and posted screen captures. Getting the full content of the chats isn't going to be difficult. They have way too many people in them.

How is a Tesla a "statement car"? A Cybertruck, sure. But Teslas are as normal as anything else on the road nowadays.


Depends on the market. In Australia, Teslas are much pricier than the Chinese options (which are more the norm here). In my area, people who would probably have bought a Tesla are looking at BMW's range instead.

