
Yes. But I was responding to "Apple probably realised far too late". I think they were in fact way ahead of everyone else, it's just that the hardware of 2017 can't keep up with the demands of today.


It was specifically the LLM stuff: their neural engines were never designed for running LLMs, or at least that's what I suspect. The question is whether the new neural engines in the A17 Pro and M4 actually have the features required to run LLMs or not.


> I think they were in fact way ahead of everyone else,

This would be a lot easier to argue if they hadn't gimped their Neural Engine by only allowing it to run CoreML models. Nobody in the industry uses or cares about CoreML, even now. Back then, in 2017, it was still underpowered hardware that would obviously be outshined by a GPU compute shader.
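
To make that concrete, here's roughly the hoop you're forced through, as a minimal sketch assuming PyTorch plus Apple's coremltools package (the toy network and file name are just placeholders): the only sanctioned route to the Neural Engine is converting your model to Core ML and then asking for the NE as a preference.

    import torch
    import coremltools as ct

    # Toy stand-in network; the point is the conversion step, not the model.
    model = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU()).eval()
    example = torch.rand(1, 128)
    traced = torch.jit.trace(model, example)

    # Convert to Core ML and *request* the Neural Engine. There is no way to
    # force it: compute_units is a preference, and any op the NE can't handle
    # silently falls back to CPU/GPU.
    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(name="x", shape=example.shape)],
        compute_units=ct.ComputeUnit.CPU_AND_NE,
        convert_to="mlprogram",
    )
    mlmodel.save("toy.mlpackage")

And even after all that, you only find out whether it actually ran on the Neural Engine by profiling in Xcode.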

I think Apple would be ahead of everyone else if they did what Nvidia did: combine the Neural Engine and GPU, then tie the two together with a composition layer. Instead they have a bunch of disconnected software and hardware libraries; you really can't blame anyone for trying to avoid iOS as an AI client.



