> I haven’t met anyone who doesn’t believe artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.
You’re trying to weigh in on this topic and you didn’t even _talk_ to a bear?
It's difficult to know what people really believe, especially after only a few minutes of discussion, but I would say most people I talk to don't believe AGI is even possible. And they probably think their life won't be changed much by LLMs, AI, etc.
I haven't heard a good argument for why AGI isn't already here. It has average humans beat and seems generally to be better-than-novice in any given field that requires intelligence. They play Go, they write music, they've read Shakespeare, they are better at empathy and conversation than most. What more are we asking AI to do? And can a normal human do it?
I think you should consider carefully whether AI is actually better at these things (especially any one given model at all of them), or if your ability to judge quality in these areas is flawed/limited.
So? Do I not count as a benchmark of basic intelligence now? I've got a bunch of tests and whatnot that suggest I'm reasonably above average at thinking. There is this fascinating trend where people would rather bump humans out of the naturally intelligent category than admit AIs are already at an AGI standard. If we're looking for intelligent conversation, AI is definitely above average.
Above-average intelligence isn't a high-quality standard. Intelligence is nowhere near sufficient to get to high quality on most things, as seen with the current generations of AGI models. People seem to be looking for signs of wild superintelligence, like being a polymath at the peak of human performance.
A lot of people who are also above average according to a bunch of tests disagree with you. Even if we take 'above average' on some tests to mean in every area--above average at literacy, above average at music, above average at empathy--it's still clear that many people have higher standards for these things than you. I'm not saying definitively that this means your standards are unreasonably easy to meet, but I do think it's important to think about it, rather than just assume that--because it impresses you--it must be impressive in general.
When AI surprises any one of us, it's a good idea to consider whether 'better than me at X' is the same as 'better than the average human at X', or even 'good at X'.
A major weak point for AIs is long-term tasks and agentic behavior, which is, as it turns out, its own realm of behavior that's hard to learn from text data, and also somewhat separate from g - the raw intelligence component.
An average human still has LLMs beat there, which might be distorting people's perceptions. But task-length horizons are going up, so that moat holding isn't a given at all.
I’d say an increasingly common strand is that the way LLMs work is so wildly different from how we humans operate that it is effectively an alien intelligence pretending to be human. We have never fully understood, and still don’t fully understand, why LLMs work the way they do.
I’m of the opinion that AGI is an anthropomorphizing of digital intelligence.
The irony is that as LLMs improve, they will both become better at “pretending” to be human, and even more alien in the way they work. This will become even more true once we allow LLMs to train themselves.
If that’s the case then I don’t think human criteria are really applicable here, except in an evaluation of how it relates to us. Perhaps your list is applicable to how LLMs compare to humans, but many think we need some new metrics for intelligence.
I would expect sufficient "General Intelligence" to be able to correct itself mid-process. I hear way too often that you need to restart something to get it to work. That doesn't sound sufficient for general intelligence to me. For that, you should be able to leave it running all the time and have it learn and progress at run-time.
We have a bunch of tools for specific tasks. Again, that doesn't sound general.
>What more are we asking AI to do? And can a normal human do it?
1. Learn/Improve yourself with each action you take
2. Create better editions/versions of yourself
3. Solve problems in areas you were not trained for, simply by trial and error, where you yourself decide whether what you are doing is correct or wrong
Compared to the average human? Yes. Most people are distressingly bad at empathy to the point where just repeating what they just heard back to an interlocutor in a stressful situation could be considered an advanced technique. The average standard of empathy isn't that far away from someone who sees beatings as a legitimate form of communication. Humans suck at empathy, especially outside a tight in-group. But even in-group they lack ability.
I am sorry for you. You must surround yourself with a lot of awful people. That is pretty sad to read. Get out of whatever you are stuck in, it can't be good for you.
The stats are something like 1 in 10 people experience domestic violence. Unless someone takes a vow of silence and goes to live in the wilderness there is no way to avoid awful people. They're just people.
The average standard is not high. I suppose an argument could be made that wife-beaters are actually just evil rather than low-empathy, but I think the point is still clear enough.
No, what I'm saying is that around 6-8 out of 10 people are worse at empathy than a chatbot, in my estimation. And even if that gets knocked down a little I still don't see how people would argue that humans have some unassailable edge. Chatbots are an AGI system. Especially the omni-models.
I don't know why you picked that particular example to make your point. I do notice, though, that you framed it in a pretty sexist way. You realize the dark figure of men getting abused by their wives is higher than what the media reports? In any case, my point is that violence in relationships happens both ways.
Why that confirms that humans in general are not capable of empathy is beyond me. My point still stands. You can't fix the whole world. BUT, you definitely can make sure you surround yourself with decent people, at least to a certain extent. I know the drill. I have a disability, and I had (and have) to deal with people treating me in a very inappropriate way. Patronisation, not being taken seriously, you name it, I know it. But that still didn't make me the frustrated kind of person you seem to be. You have a choice. Just drop toxic people and you will see that most humans can be pretty decent.
> You realize the dark figure of men getting abused by their wives is higher than what the media reports? In any case, my point is that violence in relationships happens both ways.
Yes. That is in fact pretty much exactly what I'm arguing. People are often horrible.
> BUT, you definitely can make sure you surround yourself with decent people...
People generally can't. Otherwise there'd be a lot more noticeable social stratification isolating abusive spouses instead of their being politely ignored. And if people could, you would - you note in the next sentence that you can't avoid being dealt with in an inappropriate way.
And you aren't even trying to identify people who are generally low empathy, you're just trying to find people who don't treat you badly.
> me the frustrated kind of person you seem to be.
The irony in a thread on empathy. What frustration? Being an enthusiastic human-observer isn't usually frustrating. Some days I suppose. But that sort of guess is the type of thing that AIs don't tend to do - they typically do focus rather carefully on the actual words used and ideas being expressed.
An AI (LLM) neither focuses on words nor on ideas. What you are promoting is plain escapism, which sounds rather unhealthy to me. To each their own. But really, get some help. There are ways, many ways, to deal with depression other than waiting for a digital god.
If you object to HN you didn't have to create an account. And I reckon even a sycophantic AI would still have managed more empathy in its response. They tend to be a bit wordy, and they attempt to actually engage with the substance of what people say too.
They didn't even mention HN. Are you saying the people you associate with are just on HN?
Don't spend all your time on HN or base your opinions of humanity on it. People on here are probably the least representative slice of society. That's not rejecting it, that's just common sense.
You've got about 200 ms of round-trip delay across your nervous system. Some modern AI robotics systems already have that beat, sensor data to actuator action.
Anyone who's trying to build universal AI-driven robots converges on architectures like that. Larger language-based models driving smaller "executive" models that operate in real time at a high frequency.
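Very roughly, that split can be sketched like this - a minimal, hypothetical Python sketch where plan_with_llm(), policy_step(), read_sensors(), and send_to_actuators() are stand-ins for a real planner model, a fast learned policy, and robot I/O, not any actual API:

```python
import time

PLANNER_PERIOD = 1.0   # seconds between slow planner updates (~1 Hz)
CONTROL_PERIOD = 0.01  # fast "executive" loop period (~100 Hz)

def plan_with_llm(observation):
    """Hypothetical large-model call: slow, returns a high-level subtask/goal."""
    return {"target": observation["target"]}

def policy_step(goal, observation):
    """Hypothetical small, fast policy mapping (goal, sensors) -> actuator command."""
    error = goal["target"] - observation["position"]
    return 0.5 * error  # toy proportional command

def read_sensors():
    """Hypothetical sensor read; a real system would query the robot here."""
    return {"position": 0.0, "target": 1.0}

def send_to_actuators(command):
    """Hypothetical actuator write."""
    pass

def control_loop(duration_s=5.0):
    goal, last_plan = None, 0.0
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        obs = read_sensors()
        now = time.monotonic()
        if goal is None or now - last_plan >= PLANNER_PERIOD:
            goal = plan_with_llm(obs)   # slow path: runs rarely
            last_plan = now
        send_to_actuators(policy_step(goal, obs))  # fast path: every tick
        time.sleep(CONTROL_PERIOD)
```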
Sure, I assume some sociopaths would have extremely high levels of cognitive empathy. It is really a question of semantics - but the issue is that I don't think the people arguing against AGI can define their terms at all without either the current models qualifying as AGI or falling into the classic Diogenes "Behold, a man!" problem of a definition that doesn't really capture anything useful - like intelligence. Traditionally the Turing test has been close to what people mean, but for obvious reasons nobody cares about it any more.
> artificial intelligence has the potential to be one of the biggest technological developments of all time, reshaping both daily life and the global economy.
This seems like a factually correct sentence. Emphasis on "potential".
You can be a bear and still think AI will be big one day. It's quite plausible that LLMs will remain limited and we don't find anything better for decades and the stocks crash. But saying AI will never be a big thing is just unrealistic.
I think we should split the definition somehow: on one side, what LLMs can do today (or in the next few years) and how big a thing that particular capability can be (a derivative of the capability); on the other, what some future AI could do and how big a thing that future capability could be.
I regularly see people who distinguish between current and future capabilities, but then still lump societal impact (how big a thing could be) into one projection.
The key bubble question is: if that future AI is sufficiently far away (for example, if there is a gap, a new "AI winter," for a few decades), then does the current capability justify the capital expenditures, and if not, by how much does it fall short?
Once upon a time in SF I was told that human-driven cars would be illegal, or too expensive to insure, by the end of the decade. That was last decade. The modern tech economy is all about bubbles built and sustained by hype people. Vertical farming. Pot replacing alcohol. Blockchains replacing lawyers. The metaverse replacing everything. Sure, we are in an AI bubble, but we also ride atop a dozen others.
AI data centers in space? In five years? Really? No fiber connections? Does any sane person actually believe this? No. But if that is what keeps the billions flowing upwards then who am I to judge.
Not just in SF. "Journalists" love to pick up these inflated futuristic projections and run with 'em, since they sound so cozy and generate clicks. I still remember the "Google Car" craze from the early 2010s. And if you tell people who read and believe this futuristic nonsense that it is inflated, you get pushback, because, yeah, why should a single person know better than an incentivized journalist...
I'm quite skeptical of the data centers in space claim, but I think a proof of concept can certainly be achieved in five years. I'm less convinced that we'll ever see widescale deployment of data center satellites.
And to be fair, I've read that Google's timelines for this project extend far beyond a 5 year horizon. I think it's a rational research direction for them, since it gets people excited and historically many space-related innovations have been repurposed to benefit other industries. Best case scenario would be that research done in support of this data centers in space project leads to innovations that can be applied towards normal data centers.
Someone can build a server in space, pairing a puny underpowered rack with a handful of servers to a ginormous football-field-sized solar panel, plus a heat radiator, plus a heavy-as-hell insulated battery to survive passing through the planet's shadow for tens of minutes every orbit. We can do that from existing components and launch on existing rockets, no problem.
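Back-of-envelope for that battery, under assumed numbers (a ~10 kW rack, a worst-case LEO eclipse of roughly 35 minutes, cells around 150 Wh/kg):

```python
# All figures are assumptions for illustration, not a real design.
rack_power_kw = 10.0              # assumed draw of a "puny" rack
eclipse_hours = 35 / 60           # roughly the worst-case LEO shadow pass
cell_specific_energy_wh_kg = 150  # rough figure for packaged Li-ion

energy_kwh = rack_power_kw * eclipse_hours                        # ~5.8 kWh per eclipse
battery_mass_kg = energy_kwh * 1000 / cell_specific_energy_wh_kg  # ~39 kg of cells

print(f"{energy_kwh:.1f} kWh per eclipse, ~{battery_mass_kg:.0f} kg of cells")
```

At data-center power levels the same arithmetic just scales the mass linearly.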
Why though?
Why would anyone need a server in space in the first place? What is the benefit of that location that justifies a cost an order of magnitude higher (or more) than a warehouse anywhere on the planet?
Try asking for a 24/7 multi-gig data connection to a space server. Space suddenly doesn't seem so big once you start playing around with RF allocations.
Do data centers on Earth have no employees present, and none who ever come on site for the life of the data center? Prove that out on earth and I will start to believe your space data center.
AI is changing the world and has changed the world already.
See, AI is a field... and it's also a buzzword: once a technology passes out of fashion and becomes part of the fabric of computing, it is no longer called AI in the public imagination. GOFAI techniques, like rules engines and propositional-logic inference, were certainly considered AI in the 1970s and 1980s and are still used; they're just no longer called that.
The statistical methods behind machine learning, transformers, and LLMs are certainly game changers for the field. Whether they will usher in a revolutionary new economy, or simply be accepted as sometimes-useful computation techniques as their limitations and the boundaries of their benefits become more widely known, remains to be seen but I think it will be closer to the latter than the former.
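For what it's worth, the "rules engine" end of GOFAI can be as simple as forward chaining over propositional facts - a toy, illustrative sketch (real 1970s-80s expert systems were far more elaborate):

```python
# Each rule: (set of premise facts, conclusion fact).
rules = [
    ({"has_fever", "has_cough"}, "maybe_flu"),
    ({"maybe_flu", "is_winter"}, "recommend_rest"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises are all known until nothing new is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "is_winter"}, rules))
# derives both "maybe_flu" and "recommend_rest"
```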
What's the alternative? Is there literally any AI tech more promising and disruptive than LLMs? Or should we buy into that "it's not ackhtually AI" meme?
Those are LLMs with an extra modality bolted to them.
Which is good - that it works this well speaks of the generality of autoregressive transformers, and the "reasoning over image data" progress with things like Qwen3-VL is very impressive. It's a good capability to have. But it's not a separate thing from the LLM breakthrough at all.
Even the more specialized real time robotics AIs often have a bag of transformers backed by an actual LLM.
I don't think that's fair; one of the most significant criticisms of the AI industry is the number of misleading claims made by its spokespeople, which has had a significant effect on public perception. The parent comment is a relevant expression of that.
If I'm being fucking honest, then this generation of LLMs might already beat most humans on raw intelligence, AI progress shows no signs of stopping, and "it's not actually thinking" is just another "AI effect" cope that humans come up with to feel more important and more exceptional.
...AI is currently the subject of great enthusiasm. If that enthusiasm doesn’t produce a bubble conforming to the historical pattern, that will be a first.