Photoshop is a good example -- not that I agree with everything in the app, but just designing all the interactions properly in Photoshop would take hundreds of hours (not to mention testing and figuring out the edge cases). If your goal is a 1-to-1 clone, why not use Krita or Photoshop? With an LLM you'll get "mostly there" after many, many hours of work, with lots of sharp edges. If all you need is a paint bucket, a basic brush / pencil, and save/load, OK, maybe you can one-shot it in a few hours... or just use Paint / Aseprite...
In this case, I see the author's point. The DJ isn't being advertised as "a narrow tool to select some random pop tunes". If an average person is told this is AI, sees a full text interface, and gets responses like "sure, I'll do what you asked" that appear to understand, then they expect it to do what it is asked.
We're told it's better than people at selecting songs (e.g. it has the combined wisdom of all music and music experts), so basic requests like "play the first movement of Beethoven's 7th" don't sound hard to an average person with limited or no musical expertise. If I said "please play the entire 7th symphony", and the tool responded with "sure, I'll play the whole thing", then proceeded to play the Beatles, I'd say that's a fair thing to point out as a shortcoming.
It's only obvious to tech people -- who understand that the technology has extreme limits, only works well in areas with abundant high-quality data and labels, and can't be expected to reason like a person in many cases -- that those limits are as obvious as the difference between a hammer and a screwdriver. And given how Spotify developed these models, they probably didn't intend to cover classical or test that area -- so it fails despite sounding confident.
But maybe we should stop advertising screwdrivers as universal intelligence? There's a lot of motte and bailey going on. When AI makes mistakes, it's "just tools, stop expecting intelligence." But when people question the AI hype, it's "humans make mistakes too, LLMs are truly reasoning and already better than most humans." And "the entire labor economy will be replaced, human DJs will cease to exist."
There's a simple solution. If medical malpractice happens, file a lawsuit against the LLM company. If a license is revoked as part of that finding, unfortunately that applies to the "doctor" (e.g. ChatGPT) as a whole.
Same for self-driving. Just hold each car liable like a normal driver, with the owning AI company bearing the liability. So after ~20 tickets and accidents in a week, and a few blocked ambulances, the only option is to revoke the driver's license (of which all the cars share one, as they have the same brain).
This would make AI companies more cautious, advertising only capabilities they actually have and can verify. They would be held to the standard of a human. I think that's reasonable (why replace humans if the outcome is worse, and why reduce protections for individuals?).
To make the analogy clearer: even if a telemedicine doc sees 10,000 patients a day all over the world, they would be held liable for any medical malpractice. If it's bad enough, their license would be revoked, regardless of how many patients they see or where. Same deal with AI / LLMs -- if ChatGPT is giving medical advice and it hurts someone, that's the same as a human doing so -- it's malpractice, and lawsuits can happen.
If they are somehow licensed, well, then that license can be revoked. We would revoke a human's license for a single offense in some cases; the same should apply to AI.
They can be continuously updated, assuming you re-run representative samples of the training set through them continuously. Unlike a mammal brain, which preserves the function of neurons unless they activate in a situation that causes a training signal, deep nets suffer catastrophic forgetting because update signals get scattered everywhere. So you couldn't have a model continuously learning about you in your pocket without spending tons of cycles "remembering" old examples. In fact, this is a major stumbling block in standard training: sampling is a huge problem. If you just iterate through the training corpus, the model will have forgotten most of the English material by the time you finish with the Chinese or Spanish. You have to constantly mix and balance training data due to this limitation.
The fundamental difference is that physical neurons have a discrete on/off activation, while digital "neurons" in a network are merely continuous differentiable operations. They also don't have a notion of spike-timing dependency to avoid overwriting activations that weren't related to an outcome. There are things like reward decay over time, but this applies to the signal at a very coarse level; updates are still scattered across almost the entire system with every training example.
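The constant mixing of old and new data described above is usually called rehearsal or experience replay. A minimal sketch of the sampling side in Python (the function name and parameters are my own, not from any particular library):

```python
import random

def replay_batches(new_examples, replay_buffer, batch_size=8, replay_frac=0.5):
    """Yield training batches that mix fresh examples with replayed old ones,
    so each gradient step also rehearses earlier data instead of drifting
    entirely toward the new task (the catastrophic-forgetting workaround)."""
    n_replay = int(batch_size * replay_frac)
    n_new = batch_size - n_replay
    for i in range(0, len(new_examples), n_new):
        fresh = new_examples[i:i + n_new]
        # Sample a handful of old examples to "remember" alongside the new ones.
        old = random.sample(replay_buffer, min(n_replay, len(replay_buffer)))
        batch = fresh + old
        random.shuffle(batch)
        yield batch
```

The cost is exactly the one complained about above: half of every batch is spent re-learning things the model already knew.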
I think one major draw of human-like form factors is the reuse of existing ecosystems and tools. If you have human-like grasping, you can reuse tools and utensils made for human hands; otherwise, you need custom attachments. If you have human-like legs, you can navigate stairs, wear pants for customization, and possibly operate a car or bike.
It's a bit like choosing JS or Python -- of course performance is inferior to a compiled language with highly tailored code, but they are flexible and have an ecosystem that might do 99% of the lifting for you.
But in isolation, I agree with your idea: specialized robots with forms fitted to a specific task will likely outperform a more generalized solution in that specific domain of behavior, while the more generalized will likely win on flexibility and reusability (e.g. being capable of reusing the human ecosystem).
I think it’s less about tools and more about the spaces that humans operate in.
You don’t need a human-like hand to hold a tool made for humans. As an extreme example, you can make a robot operate a power drill with a strap to hold it and a servo with a small bit of wood to operate the trigger mechanism.
But for a robot operating in a space made for humans there certainly are some physical requirements which are based on the human form: maximum volume and clearances, stairs, fragile fixtures that can’t be operated with too much force, etc.
Ever walk through some overcrowded antique shop where you need to twist and lean your body to avoid knocking into things?
This is exactly the example I used to show the ridiculousness of humanoid robots. You're not going to have a humanoid form climb into the driver seat of your car and act as your chauffeur. Your car will be the robot.
Just how ridiculous is a humanoid robot chauffeur? Should it know how to drive a stick shift? Should it use its superior ability to swivel its head around to check for cars in the blind spot?
Someday business schools will have a chapter in their product management course covering all the ways people slide ass-backwards into thinking a humanoid robot makes sense. Humanoid robots will always be crushed between the rock of low-value use cases below their price points and the hard place of high-value use cases that deserve tailored solutions.
There are a whole lot of tools intended for human use that I would use much more effectively if I could rotate my wrist repeatedly in the same direction.
Wrote this elsewhere, but I think it's worth considering a scenario like the book Daemon, rather than a "super-intelligence explosion" type scenario (which may be more like curing the common cold or achieving fusion than building a faster car).
All it really takes to do some kind of crazy world-dominating thing is some simple mechanisms and baseline intelligence, which the machines already possess. Using basic tactics like coercion, spoofing, threats, and financial leverage, an unsophisticated attacker could cause major damage.
For example, consider that Meta exec whose email was deleted. Imagine instead that one email contained a malicious prompt which the bot obeyed. That prompt simply emails everyone in her contacts list telling them to do something urgently (possibly prompting other bots that are reading those emails). You could pretty quickly cause a market crash, a nationwide panic, or maybe even an international conflict with no "super intelligence" needed -- just human negligence, short-sightedness, and laziness.
Examples would be things like claiming there's an incoming threat a CIA source warned about, or that everyone will be fired because Meta is going bankrupt, etc. It's very easy to craft a prompt like that and fire it off to all the execs you can find (or just send out random emails with plausible-sounding content). You only need to hit one target to potentially set off a cascade.
The book Daemon explored an interesting concept: an AI dominating and causing problems not through super-intelligence, but through simple mechanisms that already exist.
Like the executive whose emails were all deleted -- humans granting tons of control and access, and being extremely compliant to digital systems, is all it takes. Give an agent control of your bank account and your social media, and it already has all the movie scripts and mobster-movie themes it needs to exploit and blackmail you effectively with very rudimentary methods (threats, coercion, blackmail, etc.).
Just spoofing a simple email from the account it gained access to at the Meta exec's address (had it hit an email with an attack prompt) could have been enough to initiate something like this -- for example, by emailing everyone at the company and in her contacts with commands that would be picked up by other bots. No super-intelligence needed, just a good prompt and some human negligence.
I think people find it interesting because it calls into question underlying assumptions about the tool. What would you say the tool is for? Programming?
It seems like the tool's creators are claiming its function is "replace human intelligence", so if it can't recognize that a name is being repeated in a list, that might indicate a way we don't fully understand the tool, or that the tool's capabilities have been misrepresented.
The question people are wrestling with is whether "generate likely output tokens given an input token sequence" is equatable to actual intelligence, or only useful in very limited structured domains like coding and math.
Humans can't develop safety until there is enough blood in the streets. The only issue with AI is that this threshold may come at a point where it's too far gone to recover. We couldn't put in seatbelts until we were losing 40k people per year in car crashes. Unfortunately, it's just how we're wired. Those who are careful are outcompeted by the brash and the fast-moving, until the relative value of moving fast is removed; only then do we consider the value of making things safe. We didn't start with safe electricity; we started by killing lots of people and starting lots of fires. Many, many years later, we ended up with electrical codes and standards.
The AI proponents who originally spoke of safety did so because they are aware of the dangers. However they, like all of us, are not able to change human nature or society. Moloch will drag them into the most dangerous game or eliminate them from the competition. Only with time, death, and damage (and many lawsuits) will any measure of safety be gained. The righteous will say "see, we said AI was dangerous!" but that will be the only satisfaction they can have, many years after the damage is done.
If we want to speedrun safety, the only real mechanism is to make legal recourse more viable (e.g. a $1M penalty per copyright infringement, $100M per AI-related death, etc.). If that were the case, lawyers' self-interest and greed would compete with the self-interest and greed of the AI corps, balancing the risk (but there is no altruistic route to solving this).
Yes, it would probably have been better to have industrial evolution instead. Or are you arguing that all the countless deaths, maimings, child labor, 16-hour workdays, robber barons, black lung, radium jaw, and so on and so on were simply how it had to go? Or do you simply not care because all of that happened to other people?
Yeah, pretty much. A material emitting a previously unknown form of energy that turns out to be extremely harmful is really something you can only discover by trial and error. And what do you mean it's happening to other people? I am being exposed to all kinds of shit like PFAS and microplastics today. But it turns out that the technological progress outweighs all the environmental pollutants and accidents it took to get here, and we still live healthier and longer lives than we did before.
Not sure if that's true -- what are your reasons for believing that? Are you saying we couldn't have invented the machines we did if we had taken safety measures along the way (e.g. putting guards on machines that chopped off arms and legs)? Perhaps progress would have been slower -- since rather than just using the saw, you'd need a saw with a guard and an emergency switch -- but it seems like if humans were more circumspect, we would still have had the industrial revolution, just more deliberate and controlled. Agreed, it probably wouldn't have been "overnight factories in every city", but then again, we probably wouldn't have many of the externalities we're still learning about and paying for.
Is it possible to construct an ID from some kind of shared observable phenomenon? Given how time and distance distinguish things, would the observations always be unique? Like, only one person will ever simultaneously observe stars at certain positions, intensities, colors, etc. Similar to how I've heard some companies use lava lamps or other noisy processes to generate entropy.
I guess I'm wondering if there is a way to construct a universal coordinate frame for the whole universe. If so, then it's possible to trivially combine local time + x + y + z + salt to make unique IDs.
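Assuming such a universal frame existed, the mechanical part is easy: hash the coordinate tuple together with a salt. A toy sketch in Python (the function name and byte encoding are my own invention, not a standard):

```python
import hashlib
import os
import struct

def spacetime_id(t, x, y, z, salt=None):
    """Derive a 256-bit ID from a spacetime coordinate plus a salt.
    If (t, x, y, z) is unique to one observer, the digest is too,
    up to the negligible collision probability of SHA-256."""
    if salt is None:
        salt = os.urandom(16)  # extra entropy, like the lava-lamp trick
    # Pack the four coordinates as big-endian doubles, then append the salt.
    payload = struct.pack(">dddd", t, x, y, z) + salt
    return hashlib.sha256(payload).hexdigest()
```

The hard part is entirely the coordinate frame itself, not the hashing: two observers have to agree on what (t, x, y, z) means before the IDs are comparable.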