
I can see where Sutskever is coming from.

We are in a situation where the hardware is probably sufficient for AI to do as well as humans, but in terms of thinking things over, coming to understand the world, and developing original insights about it, LLMs aren't good, probably because of the algorithm.

To get something good at thinking and understanding, you may be better off rebuilding the basic algorithm rather than tinkering with LLMs to meet customer demands.

I mean, the basic LLM recipe — take an array of a few billion parameters, feed in all the text on the internet, use matrix multiplication and gradient descent to adjust the parameters, then use the result to predict more text and expect the thing to be smart — is a bit of a bodge. It's surprising it works as well as it does.
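For what it's worth, that whole recipe really does shrink down to something tiny. Here's a toy sketch of it — a character-level bigram model instead of a transformer, with a made-up corpus, sizes, and learning rate, but the same loop: an array of parameters, gradient updates from text, then next-token prediction.

```python
import numpy as np

text = "the cat sat on the mat. the cat ate. "
vocab = sorted(set(text))
stoi = {c: i for i, c in enumerate(vocab)}
V = len(vocab)

# "Array of parameters": one logit per (current char, next char) pair.
W = np.zeros((V, V))

xs = np.array([stoi[c] for c in text[:-1]])  # current tokens
ys = np.array([stoi[c] for c in text[1:]])   # next tokens to predict

def loss(W):
    """Mean cross-entropy of predicting ys from xs, plus the softmax probs."""
    logits = W[xs]
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(p[np.arange(len(ys)), ys]).mean(), p

for _ in range(200):
    l, p = loss(W)
    # Gradient of mean cross-entropy w.r.t. the logits: (probs - one_hot) / N.
    grad_logits = p.copy()
    grad_logits[np.arange(len(ys)), ys] -= 1.0
    grad_logits /= len(ys)
    gW = np.zeros_like(W)
    np.add.at(gW, xs, grad_logits)  # scatter gradients back to parameter rows
    W -= 10.0 * gW                  # plain gradient descent

# "Predict more text": greedy next char after 'h' is now learned from data.
print(vocab[int(np.argmax(W[stoi["h"]]))])  # prints "e" ('h' is always followed by 'e' here)
```

Obviously the gap between this and a frontier model is enormous, but the point stands: the core loop is "adjust numbers so the text gets more likely," and everything else is scale and architecture.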




