GDP adjustments are warranted, but the picture is starker than both estimates suggest.
The megaprojects of previous generations all had decades-long depreciation schedules. Many 50-100+ year old railways, bridges, tunnels, dams and other utilities are still in active use with only minimal maintenance.
Amortized year over year, the current spend would dwarf everything, given the reported depreciation schedule of 6(!) years for the GPUs, the largest line item.
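To make the amortization point concrete, here is a toy straight-line depreciation comparison. The capex figures are purely illustrative, not real numbers; the point is how much the schedule length alone changes the annual charge.

```python
# Toy comparison (illustrative numbers, not real figures): the same
# nominal capex produces very different annual charges depending on
# the depreciation schedule.

def annual_depreciation(capex: float, schedule_years: int) -> float:
    """Straight-line depreciation: equal charge every year."""
    return capex / schedule_years

dam = annual_depreciation(100e9, 50)   # e.g. a dam written off over 50 years
gpus = annual_depreciation(100e9, 6)   # GPUs at the reported 6-year schedule

print(f"dam:  ${dam / 1e9:.1f}B/yr")
print(f"gpus: ${gpus / 1e9:.1f}B/yr")
```

Same nominal spend, but the 6-year schedule produces an annual hit more than 8x larger than the 50-year one.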
The side effects of spending funds on these megaprojects are also something to consider. NASA spending has created a huge pile of technologies that we use day to day: https://en.wikipedia.org/wiki/NASA_spin-off_technologies.
The shovels and labour used to make those things were not depreciated.
The GPUs are the shovels, not the project. AI at any capability level will retain that capability forever. It only gets reduced in value by superior developments, which are built upon the technologies the previous generation developed.
Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
GPUs do have a use in warfare though. I mean, LLMs are basically offensive weapons disguised as software engineers.
Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn’t need to be maintainable, understandable, innovative or secure. But they excel at persisting until some arbitrary well defined condition is met, and it appears to be the case that “you gain entry to system X” works well as one of those conditions.
Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they’ll be rather effective.
FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.
They’re unclassified public cloud GPUs today, much the same as the massive industrial base of the United States was churning out harmless consumer widgets in 1939. Those widget makers happened to be reconfigurable into weapon makers, and so wartime production exploded from 2% to 40% of GDP in 5 years [1]. But the total industrial output of course didn’t expand by nearly that much.
I think it’s maybe plausible that private compute feels similar in the next do-or-die global war.
On the topic of warfare, wars are fought differently now. Compute will be mentioned in the same breath as total manufacturing output if a global war between superpowers erupts. In highly competitive industries this is already the case. Compute will be part of industrial mobilization in the same way that physical manufacturing or transportation capacity were mobilized in WWII. I’m not an expert on military computing but my intuition is that FLOPS are probably even more easily fungible into wartime compute than widget makers, and the US was able to go widgets->weapons on an unbelievable scale last time.
> Young children seem to pick it up with ease. It cannot be that hard
It is the other way around. Children can pick up many skills that adults struggle with, languages for example.
Plenty of research has shown that reduced brain plasticity correlates with slower learning and reduced ability as we age. Most breakthrough research happens at age 40 or younger, and chess grandmasters fade in skill after that. 25-40 is probably the age range with the optimal balance between knowledge/experience and learning ability for the best outcomes.
Real Madrid alone is worth more than $5B? Maybe you meant to say the league association is worth $5B? That seems too high; the association does not have large margins, they pass through most of their revenue to the clubs.
The last domestic TV deal they signed recently was worth $6B for 5 seasons or so, which is what you are proposing they buy.
In enterprise-value terms, $1B/year growing 6% YoY is worth a lot more than $5B.
In contrast, Cloudflare has $2.5B of revenue, albeit growing much faster, but also much smaller earnings and free cash flow, i.e. money they are not spending to make their current revenue.
They make about $25M a year in profit. Cloudflare actually loses a small amount of money on 2.5x the revenue. However, Cloudflare's market cap is about 100x that of RM's, and that's because they have a growing business in a growing industry and can easily become profitable when needed. That's probably not possible for RM and their very pricey lineup of players.
That is not the comparison to make, which is why I focused on the league's TV revenue, which is what is relevant. I only brought up Madrid's valuation to ask where LaLiga's $5B figure comes from.
Real Madrid owns the Bernabéu, a valuable piece of real estate in the heart of Madrid, and many other assets; the Real Madrid brand is very monetizable.
Sports teams have been consistently growing businesses in every major sport in both Europe and the US. Comparing a sports team and a SaaS company is hardly apples to apples, given their different asset, revenue, brand, monopoly and strategic profiles.
——
The risk piracy poses to the league is the value of the television deal. The buyer paying $1B/yr (DAZN) is the reason for this enforcement.
If Cloudflare wants to buy this problem away, that is what they would need: the $1B/yr deal growing 5-6% YoY, and to get into the streaming business.
Prime alone is expected to spend $4B on live sports rights this year. It is a very expensive space, with everyone from Apple to Google and Netflix to sovereign funds going deeper every year.
Streaming revenues otherwise aren't expected to grow massively, so this is the least risky content play: compared to investing in, say, 4-5 blockbuster movies or TV series, it is a far more predictable and consistent revenue stream.
DevOps engineers who didn't know cable management 101, or even what a cage nut is; who were amazed to see a small office running on 3 used Dell servers bought dirt cheap, shocked when booting them up sounded like an air raid, and thought hot swapping was just magic.
It has always been this way: back in the 80s-90s, programmers were shaking their heads when people stopped learning assembly and trusted compilers fully.
This is nothing new and hardly shocking: skills are only learnt if they are valuable; otherwise, one layer below always seems like magic.
> Anything new generated by an AI is public domain[1]
Language models do generate character-for-character copies of existing code they were trained on. The training corpus usually contains code that is source-available but not FOSS-licensed.
Generated does not automatically mean novel or "new" by the bar required for IP.
[1] Even this has not been definitively ruled on in courts or codified in IP law and treaties yet.
Only TCO matters; that is the efficiency you actually optimize for, i.e. dollars per mile[1], not miles per gallon.
If the car is going to spend days in the shop, requiring a replacement rental, because the model is difficult to service, and the service itself is not cheap, that can easily outweigh any marginal mpg gain.
Similarly, because servicing is expensive and time-consuming, you will likely skip service schedules; the engine then has a reduced life, or seizes up on the road and you need an expensive tow and rebuild, etc.
You are implicitly assuming none of these will change if maintenance becomes more difficult; that is not the case, though.
This is what OP is implying when he says a part with a regular maintenance schedule should be easily accessible.
[1] Of which fuel is only one part; substantial, yes, but not the only one.
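A minimal dollar-per-mile sketch of the argument above. All numbers are hypothetical and the line items are simplified; the point is only that service, downtime rentals and depreciation can swamp a marginal mpg gain.

```python
# Hypothetical TCO sketch: fuel is only one line item in $/mile.

def cost_per_mile(miles, fuel_price, mpg, annual_service,
                  annual_downtime_rental, annual_depreciation, years):
    fuel = miles * fuel_price / mpg
    other = years * (annual_service + annual_downtime_rental
                     + annual_depreciation)
    return (fuel + other) / miles

# efficient but hard to service vs. thirstier but cheap to own,
# over 60k miles in 5 years at $4/gal (all made-up figures)
hard = cost_per_mile(60_000, 4.0, 40, 1500, 600, 4000, 5)
easy = cost_per_mile(60_000, 4.0, 34, 600, 0, 3000, 5)
print(f"hard-to-service: ${hard:.2f}/mi, easy: ${easy:.2f}/mi")
```

With these (made-up) inputs, the 34 mpg car is cheaper per mile than the 40 mpg one, which is exactly the "mpg is not the metric" claim.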
> Only TCO matters; that is the efficiency you actually optimize for, i.e. dollars per mile[1], not miles per gallon.
You'd be surprised how few people actually consider TCO when looking at vehicles. The number of people driving Jeeps and Audis and similar vehicles that depreciate 60-70% in 5-6 years blows my mind; I just assume anyone driving a car like that hates money.
I bought a RAV4 for $32,000 in 2021, a co-worker of mine paid just over $60k for a Jeep Grand Cherokee 4xe the same year, and the model years are the same. 5 years later, my car is worth more than his (around 22k, his is 18-20k), he ate over $40,000 of depreciation in 5 years, that’s just insane to me.
I'm just gonna copy and paste a response to another similar comment:
The point I am making (obviously, I think) is that tradeoffs exist; even if you don't think the right decision was made, your view into the trade space is likely incomplete, or prioritizes something different than the engineers do.
Putting some random number of hypothetical mpg improvement was clearly a mistake, but I assumed people here would be able to get the point I was trying to make, instead of getting riled up about the relationship (or lack thereof) of oil filters and fuel efficiency.
I did read that before commenting, to be clear. The specific nature of your proposed optimization is not important, and I took your premise to be true, i.e. that it will improve fuel efficiency and therefore save some money.
—
In general, the point was that it is not about operational efficiency in ideal conditions alone; serviceability is an important component because it can add significantly to the overall cost of ownership, and individual car owners (compared to fleets) are typically worse at factoring this into their buying decisions.
——
It comes down to numbers: if the proposed change results in a 10% improvement, it is probably not worth it; if 10x, then definitely.
I.e. will the car become 22 MPGe or 200 MPGe? The larger the gain, the more trade-offs like serviceability or life expectancy can be sacrificed.
Hybrids cost more upfront (two sets of expensive components: transmission/motor plus engine/battery) but still work out if driven enough miles, as the gain in efficiency makes up for the upfront cost.
The exact number of miles is local to you and me; it depends on things like tax differences including tolls, gas prices, the MPGe difference, electricity prices, interest rates and the purchasing power of the currency, and the costs of other consumables like tires.
In addition to usage distribution aspects others called out .
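The hybrid break-even logic above is just one division. A sketch with hypothetical prices (the premium, fuel price and mpg figures are made up, and it ignores electricity costs, interest and the other local factors mentioned):

```python
# Break-even sketch: miles needed before fuel savings repay
# a hybrid's upfront premium (all inputs hypothetical).

def breakeven_miles(premium, fuel_price, mpg_base, mpg_hybrid):
    saving_per_mile = fuel_price / mpg_base - fuel_price / mpg_hybrid
    return premium / saving_per_mile

miles = breakeven_miles(premium=4000, fuel_price=4.0,
                        mpg_base=30, mpg_hybrid=50)
print(f"break-even at {miles:,.0f} miles")  # 75,000 miles
```

Change any input (gas at $8/gal, a smaller premium) and the break-even mileage moves dramatically, which is why the answer is so localized.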
That $1K is not actual cost, just API pricing being compared to subscription pricing. It is quite possible the API has large operating margins and, say, costs only $100 to deliver $1K worth of API credits.
Vibe coding in my opinion is analogous to say borrowing on a credit card to gamble on a startup.
Occasionally, IRL, you hear the feel-good story of how Fred Smith gambled the last $5,000 to save FedEx and so on, but most people with that mindset end up crashing out.
Vibe coding a product runs the risk of acquiring too much tech debt before the project is successful.
Product-market fit is very hard; you need to keep enough room for pivots. Changes in direction will always accumulate debt, even when the tech is well written. It is far more difficult when you accumulate debt quickly.
The counterpoint being that procrastinating, over-engineering prematurely, or building a lot of unrelated tooling and losing focus can also bring the product down quickly, or never let it start.
Being able to vibe code POCs etc. is a great tool if done in a controlled, limited, well-defined way.
Just as borrowing cash on your credit card is not always bad, it just usually is.
Maybe vibe coding gets so good that we completely trash what was written and build from zero.
I've seen that with badly written code as well; it was easier to rewrite than to fix the ungodly mess.
Yes, if you know what you want exactly it is not difficult to rewrite. Writing is the easiest part of coding.
The challenge is knowing exactly what is needed. No matter how bad the code, it is never easy to justify a rewrite.
In a large and old enough code base, documentation is always incomplete or incorrect; the code becomes the spec.
Tens or hundreds of thousands of hours would have been expended in making it "work". A refactor inevitably breaks things, because no single person can fully understand everything.
There is a reason "don't fix what isn't broken" is a well-known principle. It is the same reason we still have banking and aviation systems running mainframes and COBOL from the 70s.
A rewrite requires the same or likely more hours of testing and adoption, compressed into a typically much shorter span of time, to iron out the issues [1]. Few organizations, private or public, have the appetite to go through that pain, and if it is a money-facing or public-facing component, it is even harder to get buy-in from leadership, or even from the app's users.
---
[1] During the original deployment, the issues (bugs or feature gaps) would have been solved incrementally over many years or even decades. During the rewrite you don't have 10-20 years, so you not only have to expend the same or more hours, you have to do it much quicker as well.
Workloads emerge with higher capacity, not the other way around. Everything from lossless media to virtual-reality applications scales better with more available bandwidth.
An average AAA game is 100-200GB today. That is not by accident. On the best residential internet, a dedicated 1Gbps line, that is still a 30-minute download; for the average buyer it is easily a few hours.
A 2TB game today would be a 5-hour download on a 1Gbps connection and days for the median buyer. Game developers cannot contemplate a 2TB game if storage capacity, I/O performance, and bandwidth do not all support it.
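The download-time arithmetic behind those figures is simple (this ignores protocol overhead and server-side limits, so real times run a bit longer):

```python
# size in GB, link speed in Gbps; 8 bits per byte, 3600 s per hour
def download_hours(size_gb: float, link_gbps: float) -> float:
    return size_gb * 8 / link_gbps / 3600

print(f"200 GB @ 1 Gbps:   {download_hours(200, 1.0) * 60:.0f} min")
print(f"2 TB   @ 1 Gbps:   {download_hours(2000, 1.0):.1f} h")
print(f"2 TB   @ 100 Mbps: {download_hours(2000, 0.1):.0f} h")
```

So 200GB at 1Gbps is roughly 27 minutes, a 2TB game is about 4.4 hours on the same line, and nearly two days at 100Mbps.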
Hypothetically, if I could ship a 200TB game, I would probably pre-render most of the graphics at much higher resolutions and frame rates rather than compute them poorly on the GPU on the fly.
More fundamentally, we would lean toward less compute on the client and a more precomputed-assets-driven approach to applications. A good example from the tech world in the last decade is the switch to docker/container layers from distributing just source files or built packages. Typical docker images in the corporate world exceed 1GB; the source files actually being shipped are probably less than 10MB of that. We are trading size for better control. Prebuilt packages instead of source was the same trade-off in the 90s.
Depending on what is scarcer, you optimize for it. Single-threaded and even multi-threaded compute growth has been slowing down. Consumer internet bandwidth has no such physics limit the way processors do, so it is not a bad idea to optimize for delivering precomputed assets rather than relying on client-side compute.
I'll assume by "game servers" you mean "video game binary and asset distribution servers that support game stores like Steam and Epic and others".
When I paid Comcast for 1.5Gbit/s down, Steam would saturate that downlink with most games. I now pay for service that's no less than 100mbit symmetric, but is almost always something like 300->600mbit. Steam can -obviously- saturate that. Amusingly, the Epic Games Store (EGS) client cannot. Why?
Well, as far as I can tell, the problem is that -unlike the Steam client- the EGS client single-threads its downloads and does a lot of CPU-heavy work as part of those downloads. Back when I was running Windows, EGS game downloads absolutely pegged one of my 32 logical CPUs and left a ton of download bandwidth unused. In contrast, Steam sets like eight or sixteen of my logical CPUs at roughly half utilization and absolutely saturates my download bandwidth. So, yeah... if you're talking about downloads from video games stores it might be that whatever client your video game store uses sucks shit.
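A toy model of the bottleneck described above: if each download worker also has to do CPU-heavy work (decompression, verification), throughput is capped by whichever is smaller, the link or the workers' aggregate processing rate. The per-worker rate here is a made-up number, not a measurement of either client.

```python
# Hypothetical model: effective download throughput when each
# worker can only process a limited rate of compressed data.

def effective_mbps(link_mbps: float, per_worker_cpu_mbps: float,
                   workers: int) -> float:
    # capped either by the pipe or by total CPU-side processing rate
    return min(link_mbps, per_worker_cpu_mbps * workers)

# EGS-style: one worker pegging one core
print(effective_mbps(1500, 80, 1))   # 80 Mbps, link mostly idle
# Steam-style: 16 workers each at partial utilization
print(effective_mbps(1500, 80, 16))  # 1280 Mbps, near saturation
```

The model is crude, but it matches the observed symptom: a single-threaded CPU-bound client leaves a fast link mostly unused, while spreading the same work across many cores saturates it.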
OTOH, if you're talking about video game servers where people play games they've already installed with each other, unless those servers are squirting mods and other such custom resources at clients on initial connect, game servers usually need like hundreds of kbps at most. They're also often provisioned to trickle those distributed-on-initial-connect custom resources in an often-misguided attempt to not disturb the gameplay of currently-connected clients.
Game downloads, whether on a console or a PC, come from a CDN. The difference is that Steam has a lot of capacity. They can have millions of players all downloading the same game on the same day at gigabit speeds. Console makers invariably cheap out and cannot reach the same level of service.
Hell, it might be the case that console manufacturers are doing the same stupid shit that EGS is doing. Perhaps they wrote their download code back when 50mbit/s was a dreadfully fast download speed for the average USian to have and they haven't updated it since. (And why would they? What's a consumer's alternative other than "Pay 1k or more for a gaming machine that can run games delivered through Steam" or "Don't play video games"?)
You can still survive without using generative tools. Just not by writing CRUD apps.
There is plenty of code that requires proofs of correctness and solid guarantees, like in aviation or space and so on. Torvalds in a recent interview mentioned how little of the code he receives is generated, despite kernel code being readily available to train on.