
Is Wang even able to achieve superintelligence? Is anyone? I'm unable to make sense of Wang's compensation package. What actual, practical skills does he bring to the table? Is this all a stunt to drive Meta's stock value?


The way it sounds, Zuckerberg believes that they can, or at the very least has people around him telling him that they can. But Zuckerberg also thought that the Metaverse would be a thing.

LeCun obviously thinks otherwise and believes that LLMs are a dead end, and he might be right. The trouble with LLMs is that most people don't really understand how they work. They seem smart, but they are not; they are really just good at appearing to be smart. But that may have created the illusion, in the minds of many people including Zuckerberg, that true artificial intelligence is much closer than it really is. And obviously, there now exists an entire industry that relies on that idea to raise further funding.

As for Wang, he's not an AI researcher per se, he basically built a data sweatshop. But he apparently is a good manager who knows how to get projects done. Maybe the hope is that giving him as many resources as possible will allow him to work his magic and get their superintelligence project on track.


Wang is a networking machine and has connected with everyone in the industry. Likely was brought in as a recruiting leader. Mark being Mark, though, doesn’t understand the value of vision and figured getting big names in the same room was better than actually having a plan.


Your last sentence suggests that he deliberately chose not to create a vision and a plan.

If, for whatever reason, you don't have a vision and a plan, hiring big names to help kickstart that process seems like a way better next step than "do nothing".


Wang was not Zuck's first choice. Zuck couldn't get the top talent he wanted, so he got Wang. Unfortunately, Wang is not technical; he excels at managing the labeling company and being the top provider of such services.

That's why I also think the hiring angle makes sense. It would actually be astonishing if he could turn technical and compete with the leaders at OAI/Anthropic.


How to draw an owl:

1. Hire an artist.

2. Draw the rest of the fucking owl.


3. Scribble over the draft from the artist. Tell them what is wrong and why. Repeat a few times.

4. In frustration, use some AI tool to generate a couple of drafts that are close to what you want and hand them to the artist.

5. Hire a new artist after the first one quits because you don't respect the creative process.

6. Dig deeper into a variety of AI image-generating tools to get really close to what you want, but not quite get there.

7. Hire someone from Fiverr to tweak it in Photoshop because the artists, both bio and non-bio, have burned through your available cash and time.

8. Settle for the least bad of the lot because you have to ship and accept you will never get the image you have in your head.


You’re right – the way I phrased it assumes “having a plan” is a possibility for him. It isn’t. The best he was ever going to do was get talent in the room, make a Thinking Machines knockoff blog post with some hand-wavy word salad, and stand around until they do something useful.


> they are really just good at appearing to be smart.

In other words, functionally speaking, for many purposes, they are smart.

This is obvious in coding in particular, where with relatively minimal guidance, LLMs outperform most human developers in many significant respects. Saying that they’re “not smart” seems more like an attempt to claim specialness for your own intelligence than a useful assessment of LLM capabilities.


> They seem smart, but they are not; they are really just good at appearing to be smart

There are too many different ways to measure intelligence.

Speed, matching, discovery, memory, etc.

We can combine those levers infinitely to create/justify "smart". Are they dumb? Absolutely, but are they smart? Very much so. You can be both at the same time.

Maybe you meant genius? Because that standard is quite high and there's no way they're genius today.


They're neither smart nor dumb and I think that trying to measure them along that scale is a fool's errand. They're combinatorial regurgitation machines. The fact that we keep pointing to that as an approximation of intelligence says more about us than it, namely that we don't understand intelligence and that we look for ourselves in other things to define intelligence. This is why when experts use these things within their domain of expertise they're underwhelmed, but when used outside of those domains they become halfway useful.

Trying to create new terminology ("genius", "superintelligence", etc.) seems to only shift goal posts and define new ways of approximation.

Personally, I'll believe a system is intelligent when it presents something genuinely novel and challenges our understanding of the world as we know it (not as I personally know it, because I don't have the corpus of the internet in my head).


> You can be both at the same time.

Smart and dumb are opposites. So this seems dubious. You can have access to a large base of trivial knowledge (mostly in a single language), as LLMs do, but have absolutely no intelligence, as LLMs demonstrate.

You can be dumb yet good at Jeopardy. This is no dichotomy.


> Are they dumb? Absolutely, but are they smart? Very much so. You can be both at the same time.

This has to be bait


> They seem smart, but they are not; they are really just good at appearing to be smart.

Can you give an example of the difference between these two things?


Imagine an actor playing a character who speaks a language that the actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to just memorise their lines and drill their delivery without actually understanding the content. Let's assume they are doing a pretty convincing job too. Now, the audience watching these scenes may think that the actor actually speaks the language, but in reality they are just mimicking.

This is essentially what an LLM is. It is good at mimicking, reproducing and recombining the things it was trained on. But it has no creativity to go beyond this, and it doesn't even possess true reasoning, which is why it ends up making mistakes that are immediately obvious to a human observer yet invisible to the LLM itself, because it is just mimicking.


> Imagine an actor playing a character who speaks a language that the actor does not speak. Due to a lack of time, the actor decides against actually learning the language and instead opts to just memorise their lines and drill their delivery without actually understanding the content.

Now imagine that, during the interval, you approach the actor backstage and initiate a conversation in that language. His responses are always grammatical, always relevant to what you said modulo ambiguity, largely coherent, and accurate more often than not. You'll quickly realise that 'actor who merely memorized lines in a language he doesn't speak' does not describe this person.


You've missed the point of the example; of course it's not the exact same thing. With regard to LLMs, the biggest difference is that it's a regression against the world's knowledge, like an actor who memorized every question that happens to have an answer written down in history. If you give him a novel question, he'll look at similar questions and just hallucinate a mashup of the answers hoping it makes sense, even though he has no idea what he's telling you. That's why LLMs do things like make up nonsensical API calls when writing code that seem right but have no basis in reality. It has no idea what it's doing; it's just trying to regress code in its knowledge base to match your query.


I don't think I missed the point; my point is that LLMs do something more complex and far more effective than memorise->regurgitate, and so the original analogy doesn't shed any light. This actor has read billions of plays and learned many of the underlying patterns, which allows him to come up with novel and (often) sensible responses when he is forced to improvise.


> LLMs do something more complex and far more effective than memorise->regurgitate

They literally do not, what are you talking about?


What kind of training data do you suppose contains an answer to "how to build a submarine out of spaghetti on Mars" ? What do you think memorization means?

https://chatgpt.com/s/t_6942e03a42b481919092d4751e3d808e


You are describing Searle's "Chinese Room argument"[1] to some extent.

It's been discussed a lot recently, but anyone who has interacted with LLMs at a deeper level will tell you that there is something there; not sure if you'd call it "intelligence" or what. There is plenty of evidence to the contrary too. I guess this is a long-winded way of saying "we don't really know what's going on"...

[1] https://plato.stanford.edu/entries/chinese-room/


If an LLM was intelligent, wouldn't it get bored?


Why should it?


1. I would argue that an actor performing in this way does actually understand what his character means

2. Why doesn't this apply to you from my perspective?


Being able to learn to play the Moonlight Sonata vs. being able to compose it. Being able to write a video game vs. being able to write a video game that sells. Being able to recite Newton's equations vs. being able to discover the acceleration of gravity on Earth.


So if an LLM could do any of those things you would consider it very smart?


Wisdom vs knowledge, where the word "knowledge" is doing a lot of work. LLMs don't "know" anything, they predict the next token that has the aesthetics of a response the prompter wants.


I suspect a lot of people, but especially nerdy folks, might mix up knowledge and intelligence, because they've been told "you know so much stuff, you are very smart!"

And so when they interact with a bot that knows everything, they associate it with smart.

Plus we anthropomorphise a lot.

Is Wikipedia "smart"?


What is the definition of intelligence?


Ability to create an internal model of the world and run simulations/predictions on it in order to optimize the actions that lead to a goal. Bigger, more detailed models and more accurate prediction power are more intelligent.


How do you know if something is creating an internal model of the world?


Look at the physical implementation of how it computes.


So you are making the determination based on the method, not on the outcome.


Did I ever promise otherwise? Intelligence is inherently computational, and needs a physical substrate. You can understand it both by interacting with the black box and by opening up the box.

Definitely not _only_ knowledge.


Right, so a dictionary isn't intelligent. Is a dog intelligent?


It doesn't seem obvious to me that predicting the token that answers a question would require anything less than actually coming up with that answer by some other method.


Hallucinating things that don't exist?


Imagination?


I think these are clearly two different words that mean different things.


Yet they are correlated, and partly conflated.

What are the differences between a person that is smart and an LLM that seems smart but isn't?


The ability to generate novel ideas.


What's your definition of a novel idea? How do you measure that?

I've had a successful 15+ year career as a SWE so far. I don't think I've had a single idea so novel that today's LLMs could not have come up with it.


I've had plenty. Independent discovery is a real thing, especially with juniors.


Well that's not true - see the Terry Tao article on using AlphaEvolve to discover new proofs.

Additionally, generating "novel ideas" isn't part of what most smart people do, so why would it be a requirement for AI?


How many people generate novel ideas? When I look around at work, most people basically operate like an LLM. They see what’s being done by others and emulate it.


In my experience, discernment and good judgment. The "generating ideas" capabilities are good. The text summarization capabilities are great. However, when it comes to making reasoned choices, it seems to lose all ability, and even worse, it will sound grossly overconfident or sycophantic or both.


The LLM is not a person.


it's in the eye of the beholder


Humans aren't smart, they are really just good at appearing to be smart.

Prove me wrong.


You'll just claim we only "appeared" to prove you wrong ;)


If you don’t think humans are smart, then what living creature qualifies as smart to you? Or do you think humans created the word but it describes nothing that actually exists in the real world?


I think most things humans do are reflexive, type one "thinking" that AIs do just as well as humans.

I think our type two reasoning is roughly comparable to LLM reasoning when it is within the LLM reinforcement learning distribution.

I think some humans are smarter than LLMs out-of-distribution, but only when we think carefully, and in many cases LLMs perform better than many humans even then.


You didn’t answer my question


That's because it's reductionist and I reject the supposition.

I think humans are smart. I also think AI is smart.


Your original comment was:

“Humans aren't smart, they are really just good at appearing to be smart. Prove me wrong.”


Wang is able to accurately gauge Zuck's intelligence.


If Zuck throws $2-$4Bn towards a bunch of AI “superstars” and that’s enough to convince the market that Meta is now a serious AI company, it will translate into hundreds of billions in market cap increases.

Seems like a great bang for the buck.


Oracle also briefly convinced the market it was a serious AI company and received a market cap increase. Until it evaporated.


Wang never led a frontier lab. He founded a company that uses low-paid human intelligence to label training data. But clearly he is as slick a schmoozer as Sam Altman to have taken in a seasoned operator like Zuckerberg.


> What actual, practical skills does he bring to the table?

This hot dog, this no hot dog.



