zemvpferreira's comments | Hacker News

It’s worth noting that muscle is not all the same. If you’re just into bodybuilding then sure, proximity to failure is what matters. For athletics though, there still seems to be a big impact in the rep range you work in.

This. Muscles can be optimized for volume/endurance or power, or some balance between them. Taking legs as an example: Powerlifters obviously go for pure power, whereas runners need a bit of power but mostly endurance, whereas cyclists need more power than runners but more endurance than powerlifters.

All of these benefit from weight training, but depending on the sport, the programming will be very different.


I think I know where they're coming from as I used to have a similar wrong model. I thought strength = more muscle cells and endurance = just better heart/lungs to deliver oxygen and clear waste like CO2 and lactic acid.

Turns out muscle fibers mostly grow bigger rather than more numerous, and there are different fiber types (slow-twitch vs fast-twitch) that adapt based on how you train. So for the same muscle, an Ironman runner and a guy doing heavy low-rep squats will develop different fiber characteristics: you can't fully max out both.

I'm simplifying, but learning this changed a lot about how I understand exercise at the biological level.


I could never get into the New Yorker. It has always felt to me like every piece is deliberately drawn out. They take you to the precipice of something interesting only to pull back into an origin story, over and over again. I think it's the opposite of good writing: bloated, conceited, style over substance. It's not even meandering, it's just teasing. I'm sure it earned its place at the table long ago but the only part of it I can enjoy are the cartoons.

My biggest reading pleasure used to be the LRB but it was infected with the politics virus years ago. It used to be a place to learn minutiae through wonderful language and now it feels mostly like virtue signalling. I don't know where the best writing is these days but it sure as shit doesn't feel like it's in major print.


Boy are you going to bite your tongue on this one


For the upcoming foldable. Keeping the Air allows them to iteratively engineer the next foldable generation with lower risk and spread out the costs.


But it makes no sense. If they want to test for thinness, they've been doing that already with iPads.

Also look at the thinness and weight of the iPhone 6s and compare it to the Air. You will be surprised.

The main pain points of a foldable are — duh — the folding screen and the hinge. And neither is in the Air.


> The main pain points of a foldable are — duh — the folding screen and the hinge. And neither is in the Air.

The idea is that the folding phone would essentially be two Airs* with a hinge between them.

* possibly/probably thinner, but the Air serves as a “how thin can we make this since we need to improve our ability to make thin phones/components to accomplish a folding phone”. A sort of “you have to walk before you can run”-type thing. At least that’s how I see it.


Like I’ve said:

- iPad Pro: 5.1 mm
- iPhone Air: 5.6 mm

The Air is thicker than the iPad.


I’m the same age and also read C&H voraciously. Looking back I was (to a point) blueprinted on the kid, but mostly by virtue of being an only child, smart and alienated from most of my peers at school. I wish Susie had given me the time of day. Calvin wasn’t a role model, he was an accurate portrayal. (To a point)


GLP-1s should make you less concerned in that case; they’re poised to become extremely affordable very soon. Ending the obesity epidemic will do more to bridge the class divide than anything I can practically imagine. Not to mention the other compulsions these drugs help moderate: alcohol, tobacco, gambling, etc. It’s my best hope for worldwide quality-of-life improvement in the next 10 years.


> Ending the obesity epidemic will do more to bridge the class divide

My hope is the "waiting for the other shoe to drop" folks are just expressing sour grapes.

If it runs deeper and merges with the anti-vaxxers, we've got a behavioural problem fuelling a class divide. That is my fear.


I’ve thought about this a decent amount.

My opinion has shifted over the years. At first I also thought it was largely just sour grapes re: accessibility and fear of the unknown, but now I’m thinking that a large number of people are going to be so far deep into anti-GLP opinions and hot takes they can’t backtrack out of it. Much like political or social beliefs you make into your identity. Too embarrassing to admit you might be wrong.

I know you’re alluding to the same thing, it’s just interesting to me someone else in the world seems to share these thoughts. I also think it may really delineate a multi-generational class divide that is hard to break.

Or all the folks on GLP-1s will develop some rare form of cancer and die early leaving the world to the so-called haters.


I’m about to go to the cinema so I can’t find you references, but there’s a lot of anecdotal evidence at least of GLP-1s curbing all sorts of addictive behaviour. I personally started Mounjaro last week, and my coffee cravings have gone way, way down for the first time in my adult life.


Some white-collar professionals enjoy continuing their work past retirement age. It can be stimulating, high-leverage, and I have often seen them contributing at key moments without spending much time at the office. The accumulated wisdom and political capital of many decades at the wheel makes a difference. I've also seen blue-collar workers keep at it past retirement age because of their finances or some other compulsion despite arthritis, weakness, bad sight etc and rue every moment. Let's make sure we understand that not every craft is heaven and not every corporation is a hellscape.


I definitely would have kept my software engineering career longer if I could have found a decent job like I used to have. But what it means to be and what's expected of a professional software engineer today is so different from how I spent my career and how I like/need to work. So I've retired rather than continue fighting it.


There are plenty of companies around doing more or less what they did 20 years ago. Enterprise ERP systems are where you want to look.


There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.

OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.


> they’re using their equity to buy compute that is critical to improving their core technology

But we know that growth in the models is not exponential; it’s much closer to logarithmic. So they spend the same equity for ever-smaller results.

The ad spend was a merry-go-round; this is a flywheel where the turning grinds its gears until it’s a smooth burr. The math of the rising stock prices only begins to make sense if there is a possible breakthrough that changes the flywheel into a rocket. But as it stands, it’s like running a lemonade stand where you reinvest profits into lemons that give out less juice.
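The diminishing-returns point can be made concrete with a toy power-law curve in the style of scaling-law fits (the constants `a` and `alpha` below are invented purely for illustration, not measured values):

```python
# Toy sketch: scaling-law-style fits suggest loss falls roughly as a
# power law in compute, loss ~ a * C**(-alpha). Constants are made up.
a, alpha = 10.0, 0.05

def loss(compute: float) -> float:
    return a * compute ** (-alpha)

# Each successive 10x of compute buys a smaller absolute improvement:
gains = [loss(10 ** e) - loss(10 ** (e + 1)) for e in range(1, 5)]
print([round(g, 3) for g in gains])  # strictly shrinking gains
```

Under any such curve, equal multiplicative increases in spend buy strictly shrinking improvements, which is the "same equity, smaller results" complaint in miniature.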


There is something about an argument made almost entirely out of metaphors that amuses me to the point of not being able to take it seriously, even if I actually agree with it.


As much as I dislike metaphors, this sounded reasonable to me. Just don't go poking holes in the metaphor instead of the real argument.


Indeed, poking holes in the metaphor is like putting a pin in a balloon, rather than knocking it out of the park by addressing the real argument.


OpenAI invests heavily in integration with other products. If model development stalls, they just need to be no worse than other stalled models while using brand recognition and momentum to stay ahead in other areas.

In that sense it makes sense to keep spending billions even if model development is nearing diminishing returns: it forces the competition to do the same, and in that game victory belongs to the guy with deeper pockets.

Investors know that, too. A lot of startup business is a popularity contest: number one is more attractive for the sheer fact of being number one. If you’re a very rational investor and don’t believe in the product, you still have to play this game because others are playing it, which makes it true. The vortex will not stop unless limited partners start pushing back.


But, if model development stalls, and everyone else is stalled as well, then what happens to turn the current wildly-unprofitable industry into something that "it makes sense to keep spending billions" on?


I suspect if model development stalls we may start to see more incremental releases to models, perhaps with specific fixes or improvements, updates to a certain cutoff date, etc. So less fanfare, but still some progress. Worth spending billions on? Probably not, but the next best avenue would be to continue developing deeper and deeper LLM integrations to stay relevant and in the news.

The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.


If model development stalls, then the open weight free models will eventually totally catch up. The model itself will become a complete commodity.


It very well might. The ones with the smoothest integrations and applications will win.

This can go either way. For databases, open-source integration tools prevailed, and the commercial activity moved to hosting those tools.

But enterprise software integration might end up mostly proprietary.


Because they’re not that wildly unprofitable. Yes, obviously the companies spend a ton of money on training, but several have said that each model is independently “profitable” - the income from selling access to the model has overcome the costs of training it. It’s just that revenues haven’t overcome the cost of training the next one, which gets bigger every time.
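The structure of that claim can be sketched as toy unit economics (every dollar figure below is an invented placeholder, not a reported number):

```python
# Hypothetical per-generation economics, in $B. All numbers are made up
# purely to illustrate the shape of the argument, not actual financials.
train_cost = 1.0       # cost to train model N
revenue = 1.5          # lifetime income from selling access to model N
next_train_cost = 3.0  # cost to train model N+1, which grows each generation

standalone_profit = revenue - train_cost            # model N "pays for itself"
cash_flow = revenue - train_cost - next_train_cost  # but not for its successor

print(standalone_profit, cash_flow)
```

The point is that both can be true at once: each model positive in isolation, the overall cash flow negative because each successor costs more than its predecessor earned.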


> the income from selling access to the model has overcome the costs of training it.

Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.


I've also seen OpenAI and Anthropic say it's pretty close, at least. I'll try to follow up with a source.


The bigger threat is if their models "stall" while a new upstart discovers an even better model or training method.

What _could_ prevent this from happening is the lack of available data today — everybody and their dog is trying to keep crawlers off, or to make sure their data is no longer "safe"/"easy" to train with.


They can also buy out the startup or match the development by hiring more people. Their comp packages are very competitive.


There's at least one contributor here on HN that believes growth in models is strictly exponential: https://www.julian.ac/blog/2025/09/27/failing-to-understand-...


Yeah, except you can keep on squeezing these lemons for a long time before they run out of juice.

Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.

The models are already useful for many applications, and they are being integrated into more business and consumer products every day.

Adoption is what will turn the flywheel into a rocket.


Well, the thing is that that kind of hardware quickly decreases in value. It's not like the billions spent in past bubbles like the 2000s, when internet infrastructure (copper, fibre) was built, or even the 1950s, when transport infrastructure (roads) was built.


Data centers are massive infrastructural investments similar to roads and rails. They are not just a bunch of chips duct taped together, but large buildings with huge power and networking requirements.

Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.

All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.

They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.


The assumption is that they have a large moat.

If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.

This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.

OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.

The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.


> This will be true if (as I believe) AI will plateau as we run out of training data.

Why would they run out of training data? They needed external data to bootstrap, now it's going directly to them through chatgpt or codex.


As much as ChatGPT says I’m basically a genius for asking it for good vegan cake recipes, I don’t think that is providing it any data it doesn’t already have that makes it in any way better. Also, at this point the massive increases in data and computing power seem to bring ever-decreasing improvements (and sometimes outright decline), so it seems we are simply hitting a limit this kind of architecture can achieve no matter what you throw at it.


ChatGPT chat logs contain massive amounts of data teased out of people’s brains. But much of it is lore, biases, misconceptions, memes. There are nuggets of gold in there, but it’s not at all clear if there’s a good way to extract them, and until then chat logs will make things worse, not better.

I’m thinking they eventually figure out who the source of good data is for a given domain, maybe.

Even if that is solved, models are terrible at the long tail.


When I say models will plateau I don't mean there will be no progress. I mean progress will slow down since we'll be scraping the bottom of the barrel for training data. We might never quite run out but once we've sampled every novel, web site, scientific paper, chat log, broadcast transcript, and so on, we've exhausted the rich sources for easy gains.


Chat logs don’t run out. We may run out of novelty in those logs, at which point we may have run out of human knowledge.

Or not: there is still knowledge in people’s heads that has not bled into AI chats.

One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.


The necessity of higher quality data from vetted experts is why Mercor just raised at 10B


I’m afraid I don’t share your optimism. I think we are more or less seeing the limitations of the transformer architecture.


Apple’s new M5 can run models over 10B parameters, and if they give their new Studio next year enough juice, it can maybe run a 30B local model. How long until you can run a full GPT-5 on your laptop or home server with a few grand worth of hardware? And what is going to happen to all these GPU farms, since as I understand it they are fairly useless for anything else?


Quantized, a top-end Mac can run models up to about 200B parameters (with 128 GiB of unified RAM). They'll run a little slow, but they're usable.

This is a pricey machine, though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
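A back-of-the-envelope memory estimate shows why ~200B parameters fits in 128 GiB only when quantized (this sketch counts weights only and ignores KV cache and activation overhead, which add more on top):

```python
def model_memory_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory for a model, ignoring KV cache/activations."""
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

# 200B parameters at 4-bit quantization fits under 128 GiB...
print(round(model_memory_gib(200, 4), 1))   # ~93.1 GiB
# ...but the same model at 16-bit needs roughly 4x as much:
print(round(model_memory_gib(200, 16), 1))  # ~372.5 GiB
```

So the quantization is what makes the difference, not the parameter count alone: at full 16-bit precision the same model is far beyond any current unified-memory ceiling.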


They are pretty cheap compared to the _actual_ costs of GPU farms, or buying A100s, though. Of course not everybody will buy these machines, but not everybody really needs high-powered LLMs either. Probably a 13B Mistral can be trained to do your homework and pretend to be your girlfriend.


Very few people own top of the line Macs and most interactions are on phones these days. We are many generations of phones away from running GPT-5 on a phone without murdering your battery.

Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.


I think that, at best, that description boils down to Nvidia, Oracle, etc inventing fake wealth to build something and OpenAI building their own fake wealth by getting to use that new compute effectively for free.

There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.


The same way the stock market invents a trillion dollars of fake wealth on a strong up day?

That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.

It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.


The stock market isn't inventing money. Those investing in the stock market might be, those buying on leverage for example.

Capital markets weren't intended for round trip schemes. If a company on paper hands 100B to another company who gives it back to the first company, that money never existed and that is capital markets being defrauded rather than working as expected.


I think it's worse. The US market feels like a casino to me right now and grift is at an all time high. We're not getting good economic data, it's super unpredictable, and private equity is a disaster waiting to happen IMO. For sure there are smart people able to make money on the gamble, but it's not my jam.

I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.


More money is lost by bears fighting a bull market, than in actual bear market crashes.

I’ve made that mistake already.

I’m nervous about the economic data and the sky high valuations, but I’ll invest with the trend until the trend changes.


> It seems very bubbly to me, but not dotcom level bubbly.

Not? Money is thrown at people without anyone really looking at the details, everyone just trying to get in on the hype train? That's exactly what the dotcom bubble felt like.


Nvidia has a trailing P/E of 50. Cisco was at 200 at the height of the dotcom bubble.

Nowhere near that level. There’s real demand and real revenue this time.

It won’t grow as fast as investors expect, which makes it a bubble if I’m right about that. But not comparable to the dotcom bubble. Not yet anyway.


We shouldn't judge whether an indicator is stable or okay only by checking whether it's at its highest historical value.

P/E ratios of 50 make no sense; there is no justification for such a ratio. At best we can ignore the ratio and say P/E ratios are only useful in certain situations and this isn't one of them.

Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?


I’m not asking if it makes sense, I’m simply pointing out that by that measure this is much less extreme than 2000. As I stated, I think we’re in a bubble, so valuations won’t make much sense.

If you have a better measure, share it. I trust data more than your or my feelings on the matter.


Unless you have evidence that this measure of yours is a reliable predictor of how big a bubble is, it's on par with my gut feeling.


I sell you a cat for $1B and you sell me a dog for $1B and now we’re both billionaires! Whether the capital markets “want” that or not it’s still silly.


If we’re both willing to pay that in a free market economy, then we both leave the deal happy.

Things are worth what people are willing to pay for them. And that can change over time.

Sentiment matters more than fundamental value in the short term.

Long term, on a timescale of a decade or more, it’s different.


Both parties would need the $1B prior to the transaction for it to even potentially be meaningful, and still they just traded a cat for a dog and only paid each other on paper.

That ultimately wouldn't be a big deal if the paper valuation from the trade didn't matter. As it stands, though, both parties could log it as both revenue and expenses, and being public companies their valuation, and debt they can borrow against it, is based in part on revenue numbers. If the number was meaningless who cares, but the numbers aren't meaningless and at such a scale they can impact the entire economy.


> If we’re both willing to pay that in a free market economy

The thing is: you've paid nothing - all you did was trade pets and played an accounting trick to make them seem more valuable than they are.


Is that not fraud?


Yes, it is fraud. Round-tripping is fraud, whether the government is willing to prosecute it or not.


> OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.

I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.


Happy to have provided. I’m not an AI bull and not in any way invested in the U.S. economy besides a little money in funds, but I do try to think about the war of today vs the war of yesterday. Hopefully that’s always en vogue.


Eventually when ChatGPT replaces Google Search, they will run ads, and so have that whole revenue stream. Still isn't enough money to buy the trillions worth of infrastructure they want, but it might be enough to keep the lights on.


That's an insightful point! Making insightful points like that one is taxing on the brain, you should consider an electrolyte drink like Brawndo™ (it's got what plants crave) to keep yourself sharp!

Ugh I hate it so much, but you're right, it's coming.


One thing I've been contemplating lately is that from a business perspective, when your competitors expand their revenue avenues (generally through ads) you have three options: copy them to catch up, do nothing and perish, and lobby the government for increased consumer protections.

I've started to wonder why we see so few companies do this. It's always "evil company lobbying to harm its customers and the nation." Companies are made up of people, and for myself, if I were at a company I would be pushing to lobby on behalf of consumers, to keep a moral center and sleep at night. I am strongly for making money, but there are certain things I am not willing to do for it.

Targeted advertising is one of these things that I believe deserves to fully die. I have nothing against general analytics, nor gathering data about trends etc, but stalking every single person on the internet 24/7 is something people are put in jail for if they do it in person.


Why would ChatGPT replace google search when search also has AI? At best they'd steal some of Google's market share, which I'd imagine would decline with embedded ads.


Two reasons:

1) Google Search is now 99% crap that nobody wants, and even the AI answers are largely crap,

2) I believe somebody is going to eventually realize that search engines are stupid and improve on them. The whole idea of a single text box where you type some words and the search engine reads your mind to figure out the one thing you wanted, and then gives you one generic answer, is crap. We've just been blind to this because we don't see any other answer to realize we've been getting crap.

If I type in "when did MMS come out", Google will tell me when the candy product M&M's came out. But I wanted to know when the Multimedia Messaging Service was released. At some point somebody is going to realize that you can't actually tell what the hell the person wants from these simple queries alone. The computer needs to ask you questions to narrow down the field. That's sometimes what happens in ChatGPT, but it can be greatly improved with simple buttons/drop-downs/filters/etc. I think it'll also be improved by more dynamic and continuous voice input for context. (I notice Google Search now has audio input; I wonder if that came in after ChatGPT? Wayback Machine shows it starting in mid-2024) When they eventually implement all this, and people realize it's a million times better than what Google has, then Google will be playing catch-up.


Dotcom scams included "vendor financing", where telecom equipment providers invested in their customers who built infrastructure:

https://time.com/archive/6931645/how-the-once-luminous-lucen...

The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.


Wasn’t there also a bunch of telecom infrastructure created in the dot-com bubble, tangible products created, etc? Things like servers, telephone wires, underwater internet cables, tech-storefronts, internet satellites, etc.


So much fiber was run that in the US over 90% of it wasn't even used.


>they’re using their equity to buy compute that is critical to improving their core technology

That's only like 1/8th of the flywheel, though.


> There’s one key difference in my opinion

The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.


> Nvidia's stock rally leaves it with a LOT of room to fund big bets right now

And then what happens if the stock collapses?


Hence the emphasis on right now.


> critical to improving their core technology

It is at the very least highly debatable how much their core technology is improving from generation to generation despite the ballooning costs.


> I have some faith it could go another way.

I wonder how they felt during the .com era.


Yes, this time is different, trust big bro sama.


Heliotherapy is well overdue for a resurgence. One of my favourite YouTubers (Conquer Aging or Die Trying) has a great interview with a medical doctor about sunlight as a medical intervention. Well worth the watch:

https://m.youtube.com/watch?v=UF8UE6cJaWQ


That's Dr. Roger Seheult, MD, who hosts videos for the continuing-education provider MedCram (https://www.medcram.com/). They post lots of free videos on their YouTube channel (https://www.youtube.com/channel/UCG-iSMVtWbbwDDXgXXypARQ), and several of them cover the research behind heliotherapy.

