
The answer is we are in the Pentium II 33 MHz stage of AI chips.

So, plan accordingly for what future demand may look like.



The Pentium II was 233 MHz, not 33, but sure. The problem is, if that metaphor holds, we're so vastly far away from "the future" that it's impossible to predict what it looks like. Just for example, NVIDIA's chips kinda suck for inference: they're vastly overprovisioned for it.

It's like the folks who were saying "the internet is growing rapidly, Cisco powers the internet, therefore Cisco will grow as rapidly as the internet" in the late 90s. Oops.


I think the challenge is to determine whether AI applications will be as generally useful to society as general-purpose PCs and smartphones, or more narrowly focused, like crypto applications.


I can think of hundreds of use cases for LLMs. But all my ideas are hindered by inference cost, context size, and LLM accuracy.

I assume all 3 will rapidly improve.


The answer is yes.



