At various companies I've worked at, product managers, executives, and engineers have loved bandying about the idea of "building for nontechnical users" as a way to make their widgets more "friendly". But it's just another way to otherize and denigrate "those people" in the out-group. They might, through a metacognitive defect or simple sociopathy, actually believe they're "doing good" by considering the poor creatures' plight and making compassionate decisions on their behalf. But it's all crap. All they're actually doing is confirming their biases. LLMs are divine nectar to these people, an enshittification accelerant par excellence.
> The most successful societies have freedom, the rule of law, and allow violence only as a last necessity to restore freedom and the rule of law.
The ugly, uncomfortable part is that when a certain fraction of people decide violence is the answer, a tipping point is reached and that's what happens. Historically, people have reached that point en masse without a great deal of provocation. So for a society to remain successful--or to remain at all--it needs to prevent this tipping point from happening. Force alone can't do that.
This is why we need some kind of professional accountability for software engineers. This behavior is willful malpractice, and it only flies because they know they'll never face consequences when it goes wrong. Let's change that.
I was with you right up until the final paragraph, but this made me do a double take:
> OpenAI is too important to trust sama with.
...wat? They made a chat bot. How can that possibly be so existentially important? The concept of "importance" (and its cousin "danger") has no place in the realistic assessment of what OpenAI has accomplished. They haven't built anything dangerous, there is no "AI safety" problem, and nothing they've done so far is truly "important". They have built a chat bot which can do some neat tricks. Remains to be seen whether they'll improve it enough to stay solvent.
The whole "super serious what-ifs" game is just marketing.
Yeah, the whole fearmongering act is clearly just marketing at this point. Your LLM isn't going to suddenly gain sentience and destroy humanity if it has 10x more parameters or trains on 10x more reddit threads.
I'm not even sure we're any closer to AGI than we were before LLMs. It's getting more funding and research, but none of the research seems very innovative. And now it's probably much more difficult to get funding for anything that's not a transformer model.
> I'm not even sure we're any closer to AGI than we were before LLMs.
I mean this is very obviously untrue. It'd be like saying we aren't any closer to space flight after watching a demonstration of the Wright Flyer. Before 2022-2023, AI could barely write a coherent paragraph; now it can one-shot an entire letter, program, or blog post (even if it's full of LLM tropes).
Just because something is overhyped doesn't mean you have to be dismissive of it.
In hindsight there's an obvious evolutionary pathway from the Wright Flyer to Gemini/Apollo/Soyuz, but at the time, in 1903, there absolutely was not, and anyone telling you so would have been a crank of the highest degree. So it may turn out that LLMs have some place on the evolutionary path to AGI, or it could turn out they're a dead end like Cayley's ornithopters. Show me AGI first, then we can discuss whether LLMs had something to do with it.
In order to get to space, you must first be capable of flight through the atmosphere. That should have been apparent to anyone even then, because the atmosphere sits between the ground and space.
Regardless of whether spaceflight is still 1000 or 100 or 50 years away, you are still closer than you were before you demonstrated the ability to fly.
Or we could be stuck here for decades pending a breakthrough nobody alive today can even conceive of, or we could be compute limited by a half dozen orders of magnitude. Or it could happen next week. That's the nature of breakthroughs--you just can't have any idea when or how (or if) they'll happen.
It seems to me the answer is more "stay away from Bobs". If you don't get too tangled up with those guys, you won't get much on you when the shit hits the fan.
I guess a counterpoint might be Apple's "strategy". Scare quotes because I truly don't know if it was deliberate or just a happy accident. But somehow they've managed not to get so heavily exposed to the downside risk--if the wild claims about AI don't pan out, they're not going to lose very much compared with the other megacorps.
Apple's plan has been pretty obvious. They invested in small, locally running features that provide modest utility rather than massive hosted models that cost a fortune and aren't profitable.
There also doesn't seem to be much risk in falling behind. If you wait longer, you can skip buying the now-obsolete GPUs and training the now-obsolete models.