But value alone is meaningless. The ultimate purpose for a producer in a market is to provide infinite value at no cost.
Of course that is a pipe dream, so it should provide the highest value for the lowest cost.
For those who want to argue it should be a balance: consider the opposite position. A producer should provide no value at infinite cost. In this case, everything withers. If party A and party B need each other's products to survive, they can do that when the value is infinite and the cost is zero, but not when the value is zero and the cost is infinite.
The last few decades have shown that giving the finger to the customer and going all in on shareholdermaxxing has had nothing but terrible effects, and is like sticking a spanner in the wheels of capitalism.
My proof-in-pudding test is still the fact that we haven't seen gigantic mass firings at tech companies, nor a massive acceleration on quality or breadth (not quantity!) of development.
Microsoft has been going heavy on AI for 1y+ now. But then they replace their cruddy native Windows Copilot application with an Electron one. If testing and development only have marginal cost now, why aren't they going all in on writing extremely performant, almost completely bug-free native applications everywhere?
And this repeats itself across all big tech and AI-hype companies. They all claim these supposedly earth-shattering gains in productivity, but then... there hasn't been anything to show for it in years? Despite that whole subset of tech, plus big tech, dropping trillions of dollars on it?
And then there is also the really uncomfortable question for all tech CEOs and managers: LLMs are better at 'fuzzy' things like writing specs or documentation than they are at writing code. And LLMs are supposedly godlike. Leadership is a fuzzy thing. At some point the chickens will come home to roost, and tech companies with LLM CEOs / managers and human developers, or even fully LLM-run companies, will outperform human-led / human-managed ones. The capital class will jeer about that for a while, but the cost of tokens will continue to drop to near zero. At that point, they're out of leverage too.
Your proof-in-pudding test seems to assume that AI is binary -- either it accelerates everyone's development 100x ("let's rewrite every app into bug-free native applications") or nothing ("there hasn't been anything to show for that in years"). I posit reality is somewhere in between the two.
Considering that we were promised things like "AI will replace nearly all devs" and "AI will give a 100x boost", it makes sense to question this.
After all, almost all hyped technology also lands "somewhere between the two" extremes of doing nothing it promises and doing all of it. The question is which end it's closer to.
LLMs are capable of searching information spaces and generating outputs that one can use to do their job.
But they're not taking anyone's job, ever. People are not bots; a lot of the work they do is tacit and goes well beyond the capabilities and abilities of LLMs.
Many tech firms are essentially mature and are currently using too much labour. This will lead to a natural cycle of layoffs if they cannot find projects to allocate the surplus labour to. This is normal and healthy - only a deluded economist believes in 'perfect' stuff.
> Someone in power doesn’t get to choose - the board of directors do. Whose job is to act in the best interest of shareholders.
Alas, shareholder value is a great ideal, but it tends to be honoured rather less strictly in practice.
You can also see this when sudden competition leads to rounds of efficiency improvements, cost cutting, and product enhancements: even without competition, a penny saved is a penny earned for shareholders. But only when fierce competition threatens to put managers' jobs at risk do they really kick into overdrive.
Since the majority shareholder(s) can decide to replace the board of directors, it’s not the board of directors who holds the (ultimate) power, it’s the majority shareholder(s).
> LLMs are better at 'fuzzy' things like writing specs or documentation than they are at writing code.
At least for writing specs, this is clearly not true. I am a startup founder/engineer who has written a lot of code, but I've written less and less code over the last couple of years and very little now. Even much of the code review can be delegated to frontier models now (if you know which ones to use for which purpose).
I still need to guide the models to write and revise specs a great deal. Current frontier LLMs are great at verifiable things (quite obvious to those who know how they're trained), including finding most bugs. They are still much less competent than expert humans at understanding many 'softer' aspects of business and user requirements.
> My proof-in-pudding test is still the fact that we haven't seen gigantic mass firings at tech companies
This assumes that companies will announce such mass firings (yeah, I'm aware of WARN Act); when in reality they will steadily let go of people for various reasons (including "performance").
From my (tech heavy) social circle, I have noticed an uptick in the number of people suddenly becoming unemployed.
For Jevons paradox to be a win-win, you need these 3 statements to be true:
1) Workers get more productive thanks to AI.
2) Higher worker productivity translates into lower prices.
3) Most importantly, consumer demand needs to explode in reaction to lower prices. And we're finding out in real time that the demand is inelastic.
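A toy calculation (made-up numbers, purely illustrative) of why point 3 is the load-bearing one: if productivity doubles but demand only grows 20%, total demand for labor falls.

```python
workers = 100            # baseline headcount (assumed)
output_per_worker = 1.0  # baseline productivity (assumed)
demand = workers * output_per_worker

new_output_per_worker = 2.0 * output_per_worker  # AI doubles productivity
new_demand = 1.2 * demand                        # inelastic demand: only +20%

workers_needed = new_demand / new_output_per_worker
print(workers_needed)  # 40 jobs gone despite everyone being "more productive"
```

For the headcount to stay flat, demand would have to grow at least as fast as productivity — the Jevons "win-win" case.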
Around 1900, 40% of American workers worked in agriculture. Today, it's < 2%.
Which is similar to what we see with coding: the increase in demand has not exploded enough to offset the jobs lost to each farmer being able to produce more food.
> Microsoft has been going heavy on AI for 1y+ now. But then they replace their cruddy native Windows Copilot application with an Electron one.
This.
Also, Microsoft is going heavy on AI, but it's primarily chatbot gimmicks they call Copilot agents, and they need to deeply integrate them with all their business products and have customers grant access to all their communications and business data just to give the chatbot something to work with. They go on and on in their AI tour with their examples of how a company can run on agents alone, and they tell everyone their job is obsoleted by agents, but they don't seem to dogfood any of their products.
I've been hit by that bug, although it only deletes mail AFAIK. There's a separate bug that completely corrupts the mail database on compaction, making Thunderbird lock up including for every future launch.
It's a beautiful open source effort, but products where bugs like that languish for 10-20 years just aren't reliable. I need my mail client to be reliable.
Yes, FUD and long-held myths can be found anywhere. But speaking as a staff member and someone who has seen first-hand user reports, here is some straight shooting:
* there are rare cases of a profile either misplaced (exists but not correctly pointed to) or gone - it is something which I understand Firefox people are working on (Thunderbird uses the Firefox profile system)
* there are extremely rare reports where prefs.js is corrupted
* there are no compact failures in current versions - there are no open bug reports for recent versions, so it has been totally obliterated by a rewrite and subsequent fixes. Most user reports of compact failure are attributed to other causes of folder corruption
* folder corruption can occur as easily from external sources as from product bugs.
Also, beware drawing broad conclusions about other users' experience from one's own personal experience. I have almost never experienced corruption - once in the last 10 years. But I am also using a Thunderbird profile that has gone through 5 different laptops and two different OSes, using daily builds, which is AMPLE opportunity to have had multiple catastrophic failures. But because I know other users' experiences, I consider myself lucky.
Although in recent years it looks like it turned from a bug about one specific (never resolved) issue to a more general troubleshooting session related to data loss issues.
What I don't understand is why the AV1 pool isn't activating their MAD clause.
Part of the idea with AV1 was that with the constituents also holding such a massive warchest of patents (plus big tech being richer than god), they would countersue and demolish anyone that tries to bully AV1 users. Which would act like deterrence.
Where is all that might? Was it all just saber rattling, and are they basically going to let the AVC / HEVC patent holders make a fool out of them?
You can set your ULA to something like "fddd:192:168::/48" and then on your vlan you prefix hint, say, "66". Now, any device on that vlan will be addressable by "fddd:192:168:66::$host". For example, your gateway ('router') for that vlan would be "fddd:192:168:66::1".
If you want to be really wonky, you can script DHCPv6 to statically assign ULA IPv6 leases that match the IPv4, and expire them when the IPv4 lease expires. But as said upthread, addressing hosts via IPv6 is the wrong way to go about it. On your lan, you really want to be doing ".local" / ".lan" / ".home".
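The IPv4-to-ULA mapping above can be sketched like this (hypothetical scheme: the dotted-quad digits are reused verbatim as hex groups, matching the `fddd:192:168` ULA from the parent comment — a readability convention, not a standard):

```python
import ipaddress

def ula_from_ipv4(ipv4: str) -> str:
    """Map 192.168.<vlan>.<host> to fddd:192:168:<vlan>::<host>.

    The decimal digits of the vlan and host are reused verbatim as hex
    groups, so the ULA is readable next to the IPv4 address it mirrors.
    """
    _, _, vlan, host = ipv4.split(".")
    addr = f"fddd:192:168:{vlan}::{host}"
    return str(ipaddress.IPv6Address(addr))  # validates and canonicalizes
```

So `ula_from_ipv4("192.168.66.10")` gives `"fddd:192:168:66::10"` — the DHCPv6 hook would hand that out as the static lease for the host that got `192.168.66.10` over IPv4.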
> addressing hosts via IPv6 is the wrong way to go about it. On your lan, you really want to be doing ".local" / ".lan" / ".home".
.local is fine as long as all the daemons work correctly, but AFAIK there's no way to have SLAAC and put hosts in "normal" internal DNS, so .lan/.home/.internal are probably out.
> On your lan, you really want to be doing ".local" / ".lan" / ".home".
The "official" is home.arpa according to RFC 8375 [1]:
Users and devices within a home network (hereafter referred to as
"homenet") require devices and services to be identified by names
that are unique within the boundaries of the homenet [RFC7368]. The
naming mechanism needs to function without configuration from the
user. While it may be possible for a name to be delegated by an ISP,
homenets must also function in the absence of such a delegation.
This document reserves the name 'home.arpa.' to serve as the default
name for this purpose, with a scope limited to each individual
homenet.
It may be the most officially-recommended for home use, but .internal is also officially endorsed for "private-use applications" (deciding the semantics of these is left as an exercise to the reader): https://en.wikipedia.org/wiki/.internal
".home" and ".lan" along with a bunch of other historic tlds are on the reserved list and cannot be registered.
Call techy people pathologically lazy but no one is going to switch to typing ".home.arpa" or ".internal". They should have stuck with the original proposal of making ".home" official, instead of sticking ".arpa" behind it. That immediately doomed the RFC.
I do it by abusing the static SLAAC address. I have a set of weird VMs that are cloned from a reference image, so no fixed config is allowed. I should probably just have used DHCPv6, but I started by trying SLAAC, and the static addresses were stable enough for my purposes, so it stuck.
How does that work? I initially assumed you meant you just statically assigned machines to addresses, which I think would work courtesy of collision avoidance (and the massive address space), but I can't see how that would work for VMs. Are you just letting VMs pick an IP at random and then having them never change it, at which point you manually add them to DNS?
Pretty much. A given MAC address assigned in the VM config maps directly to a static SLAAC address (the ones they recommend you not use), and those pre-known SLAAC addresses are in DNS. Like I said, I should probably use DHCPv6, but it was a personal experiment in cloning a VM for a sandbox execution environment, and those SLAAC addresses were stable enough for that. Every time it gets cloned with the same MAC address, it ends up with the same IPv6 address. Works for me: I don't have to faff around with DHCPv6, just put it in DNS. Time for a drink.
But the point is that that is the address you would put in DNS if you also wanted to use SLAAC. Most of the time, however, you will just set a manual address. And this was with OpenBSD, where when SLAAC is set up you get the SLAAC address and a temporary address. I don't really know what Linux does. Might have to try now.
Clarification for others: with privacy extensions disabled, SLAAC'd IPv6 addresses are deterministically generated from MAC addresses (modified EUI-64). There's also an in-between: stable privacy addresses (RFC 7217), which are stable per network by hashing.
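A minimal sketch of the modified EUI-64 derivation (RFC 4291): flip the universal/local bit of the MAC's first octet, then insert `ff:fe` in the middle. The ULA prefix is just the example from upthread.

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> str:
    """Deterministic SLAAC address via modified EUI-64 (no privacy extensions)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                              # flip the universal/local bit
    iid = octets[:3] + [0xFF, 0xFE] + octets[3:]   # insert ff:fe in the middle
    net = ipaddress.IPv6Network(prefix)
    return str(net[int.from_bytes(bytes(iid), "big")])
```

For example, `slaac_address("fddd:192:168:66::/64", "00:11:22:33:44:55")` yields `"fddd:192:168:66:211:22ff:fe33:4455"` — which is why a cloned VM with the same MAC keeps getting the same address.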
I wonder if there are low power Intel or AMD boards that accept DDR3. So many sticks of 2 / 4 / 8GB DDR3 inside laptops going into recycling or landfills which would do perfectly fine for low power purposes. Hell, performance for standard workloads scales with access times, not bandwidth, and DDR3 sits nicely at CAS8 1600MHz and CAS10 2133MHz..
For a second I hoped you were gonna comment on how LLMs are going to rot out our skillset and our brains. Like some people already complaining they "have to think" when ChatGPT or Claude or Grok is down.
The other day I was doing some programming without an LSP, and I felt lost without it. I was very familiar with the APIs I was using, but I couldn't remember the method names off the top of my head, so I had to reference docs extensively. I am reliant on LSP-powered tab completions to be productive, and my "memorizing API methods" skill has atrophied. But I'm not worried about this having some kind of impact on my brain health because not having to memorize API methods leaves more room for other things.
It's possible some people offload too much to LLMs but personally, my brain is still doing a lot of work even when I'm "vibecoding".
Ironically this is one of my main use cases for LLMs
“Can you give me an example of how to read a video file using the Win32 API like it’s 2004?” - me trying to diagnose a windows game crashing under wine
Exactly. I feel this is the strongest use case. I can get personalized digests of documentation for exactly what I'm building.
On the other hand, there's people that generate tokens to feed into a token generator that generates tokens which feeds its tokens to two other token generators which both use the tokens to generate two different categories of tokens for different tasks so that their tokens can be used by a "manager" token generator which generates tokens to...
This really makes me think of A Deepness in the Sky by Vernor Vinge. A loose prequel to A Fire Upon The Deep, and IMO actually the superior story. It is set in the far future of humanity.
In part of it, one group tries to take control of a huge ship from another group. They do this in part by trying to bypass all the cybersecurity. But in those far-future days, you don't interface with all the aeons of layers of command protocols anymore; you just query an AI who does it for you. So this group has a few tech guys who attempt the bypass by using the old command protocols directly (in a way, the same thing as the iOS exploit that used a vulnerability in a PostScript font library from the 90s).
Imagine being used to LLM prompting + responses, and suddenly you have to deal with something like
r=$(sed '/^```/d;/^#/d;s/^[[:space:]]*//;/^$/d' | head -1); [[ $r ]]
and generally obtuse terminal output and man pages.
:)
(offtopic: name your variables, don't do local x c r a;. Readability is king, and a few hundred thousand years from now some poor Qeng Ho fellow might thank his lucky stars you did).
I'm glad you guys at least went with Cloudflare. LMArena went with Google's reCAPTCHA, which is plain evil. It'll often gaslight you and pretend you failed a captcha identifying something as simple as fire hydrants. Another lovely trick is asking you to identify bridges or buses, when in actuality it also wants you to identify viaducts or semi-trucks.
It's better to do it from the source, obviously.