We should ask: when will AI make a discovery on its own? For instance, computers should be able to understand numbers, and run analysis on numbers. Computers have complete access to every fact that humans know about numbers. So numbers should be the first place we expect to see genuine innovation from AI. This is a simple test for the moment AI is able to make original contributions to our society: when can AI come up with a new thesis about numbers and then build an original proof, something that can be published in a major, peer-reviewed math journal?
Until AI can do that, we have to admit that it's not really aware or sentient or any of the other more ambitious things that have recently been claimed for it.
Can AI teach us anything new about the pattern of prime numbers?
Can AI develop an original proof for the shape of shadows in high dimensional spaces?
Can AI creatively prove a new limit to mathematics?
There are two researchers in AI who deserve more attention: Kenneth O. Stanley and Joel Lehman. They wrote a great book, Why Greatness Cannot Be Planned, which looks at the limits of utility functions and explains the importance of novelty. As an antidote to some of the hype around AI, I strongly recommend it: https://www.amazon.com/Why-Greatness-Cannot-Planned-Objectiv...
>This is a simple test for the moment AI is able to make original contributions to our society: when can AI come up with a new thesis about numbers and then build an original proof, something that can be published in a major, peer-reviewed math journal?
>Until AI can do that, we have to admit that it's not really aware or sentient or any of the other more ambitious things that have recently been claimed for it.
We have to admit no such thing; that is an absurdly high bar. The vast majority of humanity has not produced an original mathematical proof worthy of being published in a peer-reviewed math journal, and realistically it isn't possible for the vast majority of humanity to do so. Nevertheless, we are essentially all sentient/aware. "If it can't generate new and novel math that can pass peer review, it's not aware or sentient" is moving the goalposts so far and so fast that it should be giving you windburn.
The difference being, probably the vast majority of humans _could_ publish in a journal if they devoted their life to it 100%.
Additionally, the AI does not get bored or frustrated, which is probably one of the biggest impediments (other than money) that most people would face in such an endeavour.
If the AI had to do this while also doing all the other things humans do at the same time and constrained to the power of a human brain, then yes it would be unrealistic.
>The difference being, probably the vast majority of humans _could_ publish in a journal if they devoted their life to it 100%.
I believe that, yes. That's why I said "realistically": the vast majority of humans currently in existence actually cannot do it in a reasonable timeframe (keep in mind ChatGPT has been around for 8 months), no matter how much you incentivise them. And maybe, if there were 7 billion AIs on the planet, 0.00174533469% of them could produce a publishable paper on mathematical theory over an average 60-80 year lifespan - I don't believe it, but we have nowhere near enough knowledge about current AI systems to say for sure right now.
My point isn't that an AI couldn't generate a novel mathematical proof - eventually I'm certain one could, and we should definitely work towards it. My point is that it is absolutely absurd to say that an AI isn't intelligent if it can't generate a novel mathematical proof, because if that standard was applied to humans it would mean 99.9982546653% of us aren't intelligent.
Yes, but that's not the question either. ChatGPT can probably publish in a journal already. The question is whether it can produce impactful work - and it's very unclear whether most people could do that, even working on it 100%.
So what if you can't either. You as a human at least possess the self-direction and innate will to point your own actions and thoughts in some direction. Does AI? No, it doesn't. It does literally nothing unless human-set parameters direct it to. For now, regardless of technical ability, that puts AI far from anything that can easily be defined as sentient.
We evolved visual and hand dexterity over considerably more than two million years (more like hundreds of millions - I don't care to go find out when hands first evolved, but the neural crest was 550 million years ago), and we need many, many more than "only few examples" to be able to draw hands, let alone a sentence. This is something you could only possibly say if you have never tried to draw a realistic hand. There is a reason that, long before AI image generation was publicly talked about, many artists joked about how they could draw everything except hands correctly. Hands are particularly difficult to draw. If anything, it is genuinely interesting, and maybe worthy of research, why hands are so hard for us to draw, why they are so hard for these DL networks to draw, and whether the reasons are related.
I am of the opinion that AI is neither truly artificial in nature nor intelligent, in the way that we imagine intelligence.
But AI is capable of doing the things you mentioned, perhaps not on that scale just yet, but certainly in principle.
The reason being that the transformers inside LLMs are really just an engine for parsing human intelligence.

As the engine improves, it will appear "smarter", but it is still just parsing its way through the n-dimensional memetic matrix that is human language and culture... just like we do.
Unless there exists a superintelligence expressed in that data set, AI will not express superintelligence in the way we would expect.
AI does express superintelligence, though: in its ability to carry on coherent conversations with thousands of people simultaneously on a diverse range of subjects, and to create documents and code at the speed of conversation.

Right now it is hobbled by the limitations of the parsing engine and by its inability to aggregate new knowledge, but those things are improving and being worked on, just not ready for public access yet.
The problem with LLMs, which are really impressive, is that they have too many parameters. This makes them fragile, as they lack regularization.
IMHO, a very interesting research route is to combine small mechanistic models with big networks such as LLMs, where the latter play the role of intuition.
Research by Tenenbaum from MIT, and other similar groups, is heading in this direction.
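To make the regularization point concrete, here is a minimal sketch in Python using ridge regression as a stand-in (the data, the polynomial degree, and the penalty value are all illustrative; LLM training is vastly more involved):

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 10)
    y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(10)

    # Over-parameterized design matrix: degree-9 polynomial, 10 points.
    X = np.vander(x, 10, increasing=True)

    def fit(lam):
        # Closed-form ridge solution: (X^T X + lam * I)^(-1) X^T y
        return np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)

    for lam in (0.0, 1e-3):
        w = fit(lam)
        print(f"lambda={lam}: max |weight| = {np.max(np.abs(w)):.1f}")

Without the penalty term, the weights blow up to chase the noise exactly; with even a small penalty they stay moderate and the fit is far less fragile.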
The "inteligence" level is extremely hard to gauge overall since humans typically have a very deep understanding of a handful of topics, with lots of repetition and knowledge distillation while learning.
Whereas current LLM training focuses on getting as much diverse data as possible and training for very few epochs, so the inteligence we get is wider than any human could ever hope to achieve, but is also shallow in its understanding of each topic. Combining all fields of knowledge together does have its benefits in solving certain problems though.
High schoolers come up with new ideas all the time. AI just produces generic scenarios that are the mathematical average of what you'd expect. It's really reductionist to say people are that; it betrays a certain cynicism about humanity.
Those new ideas are hardly ever better than a string of words probabilistically drawn from a bag. Saying that AI can't think the same things we do betrays a kind of human exceptionalism that's been a part of AI criticism forever. The bar is always one step higher, like some kind of mathematical function that approaches infinity as capabilities improve.
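For what it's worth, "probabilistically drawn from a bag" can be made literal in a few lines of Python (the toy corpus here is made up, and real LLMs condition on context rather than sampling each word independently):

    import random

    # Unigram "bag of words": each word drawn independently by frequency.
    bag = {"the": 10, "cat": 3, "sat": 2, "on": 4, "mat": 2, "ideas": 1}
    words, weights = zip(*bag.items())
    print(" ".join(random.choices(words, weights=weights, k=8)))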
So are you then saying that nothing humans do is derivative?
How many books and movies are just retelling one of a handful of original stories in a different way?
That's all AI does: it builds upon existing works, just like we do.
It wrote a very accurate outline for an entire book about LangChain, a tool it knew nothing about. I only fed it very basic info, not the whole docs, clarified a few things it got wrong, etc...
Many of the claims against it are that it can't do some thing that "some" group of humans can do, even if it's something many other humans also can't do.
I guarantee you, it can write a better proof or get closer than I ever would.
IMHO one of the things it really lacks is a purpose or drive. Without a desire and without rewards for learning shit, it'll only learn the basics it needs to formulate an answer.
Curiosity, intrigue, desire - even if we could only fake them - might lead to some interesting things. So would adding senses and multi-modal capabilities.
AI already does 'innovative' work in the art field. It makes new images, new things that have not been digitally painted before. I think that making new proofs or new other intellectual things is something that can be solved just by making better models.
Most humans are not intelligent by the GP's proposed measure. Whatever happened to the Turing test? Its core concept was the holy grail: the inability to know whether you were speaking to a human on the other side of your text messages was an unimaginable apex. We have now not only conquered that summit but blown right past it.
I've started to think of LLMs as not so much AI as collective intelligence. An LLM aggregates a huge amount of human-generated information and thinking into one convenient semi-intelligent entity, without doing much really original thinking of its own (so far).
But this alone is potentially profound. Better ways to be collectively smarter could itself accelerate change. Vernor Vinge's famous essay "The Coming Technological Singularity" wasn't just about AI; he also suggested collective intelligence as a way the singularity could happen.
I have a feeling that these ad-hoc bars for AI to clear are extremely similar to Plato's definition of a human ("a featherless biped"), in that they look for what features a human/intelligence has, but don't incorporate the flip-side.
Hence the large number of Diogenian refutations, including plucked chickens, chess computers, visual classifiers, generative AIs, LLMs, and now, I guess, proof generators (which already exist in some form or another) that could loudly proclaim "behold, a human!".
Unless we rigidly define what intelligence actually is, how can we even hope to correctly identify one?
If the word "intelligence" is too overloaded or obscure, it is best to replace it with component words, which are often new coinages themselves. Intelligence can redefine itself.
I have a theory that there is a kind of dual-think going on around AI 'hallucination'. Specifically that the only meaningful difference between imagination and what people are calling hallucination is whether or not the outcome is useful.
Complete lay-person viewpoint here of course, outside of toying with some neural networks back in the day.
Most hallucinations I've seen "make sense" or "look right". I guess that's a certain type of creativity. And it's not like common sense ideas have never been profitable...
I think the difference is more to do with the fact that 'hallucination' is passed off as reality (whether it's ChatGPT confidently telling you that Abraham Lincoln had a MySpace page, or that weird guy on the train telling you that there are spiders in the seat cushions).
People are usually able to distinguish between their imagined scenarios and the real world.
> computers [AI] should be able to understand numbers, and run analysis on numbers.
That's not how any of this works!
"Human brains are made of neurons, so humans must be experts on neurons."
Large Language Models are all notoriously bad at simple arithmetic ("numbers") for the same reasons humans are. We cheat and use calculators to increase our numeracy, but LLMs are trained on human text, not the method used to generate that text.
They can see (and learn from) the output we've generated from calculators, but they can't see the step-by-step process for multiplying and adding numbers that the calculators use internally. Even if they could see those steps and learn from them, the resulting efficiency would be hideously bad, and the error rate unacceptably high. Adding up the numbers of just a small spreadsheet would cost about $1 if run through GPT-4, but a tiny fraction of a cent if run through Excel.
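As a rough sanity check on that cost claim, here is a back-of-envelope calculation (the per-token prices are GPT-4's published 8K-context rates at the time of writing; the spreadsheet size and tokens-per-cell figures are guesses):

    # Assumed pricing: ~$0.03 per 1K prompt tokens, ~$0.06 per 1K completion tokens.
    cells = 1000                # a "small" spreadsheet
    tokens_per_cell = 8         # digits plus separators, rough guess
    prompt_tokens = cells * tokens_per_cell
    completion_tokens = 500     # room for step-by-step addition

    cost = prompt_tokens / 1000 * 0.03 + completion_tokens / 1000 * 0.06
    print(f"~${cost:.2f} per attempt")  # a few retries gets you near $1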
There have been attempts at giving LLMs access to calculator plugins such as Wolfram Alpha, but it's early days and the LLMs are worse at using such tools than people are.
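The plugin pattern itself is simple; the hard part is getting the model to emit the call reliably. A minimal sketch of the host side (the JSON format and function names here are hypothetical, not any particular vendor's API):

    import json, operator

    OPS = {"add": operator.add, "mul": operator.mul}

    def run_tool_call(raw: str) -> str:
        """Execute a model-emitted call like {"op": "add", "args": [2, 3]}
        with real arithmetic, and return the result to splice back into
        the conversation."""
        call = json.loads(raw)
        result = OPS[call["op"]](*call["args"])
        return json.dumps({"result": result})

    print(run_tool_call('{"op": "add", "args": [13554, 28819]}'))
    # -> {"result": 42373}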
I'm not an expert on this by any means, but from what I've read about the four-color theorem, essentially all of the insight in the proof came from human mathematicians. Mathematicians figured out an ingenious way to reduce the problem to carrying out a large number of tedious computations, and the computer was used only to carry out those computations and thereby complete the proof. This seems quite different from a computer coming up with a proof by itself.
I recall that DeepMind's AI (AlphaDev) discovered faster sorting algorithms for short sequences. Sorting is one of the most "trafficked" areas of CS research, so I would say it's a true discovery.
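For context, what AlphaDev optimized were fixed compare-exchange sequences ("sorting networks") for very short inputs, at the assembly level. A Python rendering of the 3-element case, purely for illustration - the actual speedups only show up in machine code:

    def sort3(a, b, c):
        # Fixed schedule of compare-exchanges; no loops, no
        # data-dependent control flow beyond the swaps themselves.
        if b < a: a, b = b, a
        if c < b: b, c = c, b
        if b < a: a, b = b, a
        return a, b, c

    print(sort3(3, 1, 2))  # (1, 2, 3)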
Certainly computer-aided proofs have been a thing for a while; I wonder where one draws the line between "the AI made a new proof" and "we build a system that proved X"? Which I guess really gets at the more basic question "what is an AI"?
I think one line is: you ask the computer about a problem in a declarative way (e.g. with Z3) and the computer comes up with a solution, or a description of where it is blocked along with potential areas of attack.
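A toy version of that declarative workflow, using Z3's Python bindings (pip install z3-solver; the constraints are made up for illustration):

    from z3 import Int, Solver, sat

    x, y = Int("x"), Int("y")
    s = Solver()
    # State *what* we want, not how to find it: two integers with
    # sum 10 and product 21.
    s.add(x + y == 10, x * y == 21, x <= y)

    if s.check() == sat:
        print(s.model())          # e.g. [x = 3, y = 7]
    else:
        print("unsat: no such integers exist")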
It doesn't even have to discover anything new to be a compelling proof of concept. All it has to do is discover something we already know without having been fed the answer in some way.
Today's AIs can't do this, because the entire basis for their intelligence is having been fed all the answers humanity has, and regurgitating those back to us in a somewhat more flexible and adaptive way than a search engine.