
FWIW - I used to do research in this area - PINNs are a terribly overhyped idea.

See for example https://www.nature.com/articles/s42256-024-00897-5

Classical solvers are very, very good at solving PDEs. In contrast, PINNs solve PDEs by... training a neural network. Not once, producing something that can be reused later, but every single time you solve a new PDE!

You can vary this idea to try to fix it, but it's still really hard to make it better than any classical method.

As such, the main use cases for PINNs -- they do have them! -- are solving awkward stuff like high-dimensional PDEs or nonlocal operators. Here it's not that the PINNs got any better; it's just that all the classical solvers fall off a cliff.

---

Importantly -- none of the above applies to stuff like neural differential equations or neural closure models. These are genuinely really cool and have wide-ranging applications! The difference is that PINNs are numerical solvers, whilst NDEs/NCMs are techniques for modelling data.

/rant ;)



I concur. As a postdoc for many years adjacent to this work, I was similarly unimpressed.

The best part about PINNs is that since there are so many parameters to tune, you can get several papers out of the same problem. Then these researchers get more publications, hence better job prospects, and go on to promote PINNs even more. Eventually they’ll move on, but not before having sucked the air out of more promising research directions.

—a jaded academic


I believe a lot of this hype is attributable to Karniadakis, and to how bad many of the existing methods in engineering are. The methods coming out of CRUNCH (PINNs chief among them) seem more intelligent in comparison, even when they aren't, since engineers are happy to accept a pure brute-force solution to inverse or model-selection problems as "innovative", haha.


The general rule of thumb is that whatever Karniadakis proposes doesn't actually work outside of his benchmarks. PINNs don't really work, and _his flavor_ of neural operators doesn't really work either.

PINNs have serious problems with how the "PDE component" of the loss function needs to be posed, and outside of throwing tons of (often Chinese) PhD students and postdocs at them, they usually don't work for real problems. This is mostly owed to the instabilities of higher-order automatic derivatives, at which point PINN people go through a cascade of alternative approaches to obtain those higher-order derivatives. But these are all just hacks.
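To make the "PDE component" concrete, here is a minimal sketch of a PINN residual loss for the 1D Poisson equation u''(x) = f(x), with f chosen so the true solution is sin(pi*x). This is written in PyTorch; the network and function names are illustrative, not taken from any PINN library. Note the nested autograd calls with create_graph=True required for the second derivative, which is where the higher-order-derivative machinery (and its instabilities) enters:

```python
import math
import torch

torch.manual_seed(0)

# A small MLP standing in for the PINN's trial solution u_theta(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def pde_residual_loss(net, x):
    """Mean squared residual of u''(x) - f(x) for f(x) = -pi^2 sin(pi x)."""
    x = x.clone().requires_grad_(True)
    u = net(x)
    # create_graph=True keeps the computation graph so we can differentiate
    # again, and later backprop the loss through both derivative passes.
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    f = -(math.pi ** 2) * torch.sin(math.pi * x)
    return ((d2u - f) ** 2).mean()

x = torch.linspace(0.0, 1.0, 64).reshape(-1, 1)
loss = pde_residual_loss(net, x)
loss.backward()  # a new PDE or domain means re-running this whole training loop
```

Minimising this loss over collocation points is the entire "solver"; change the PDE, the forcing term, or the domain, and the optimisation starts over from scratch.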


I love Karniadakis's energy. I invited him to give a talk in my research center, and his talk was fun and really targeted at physicists who understand numerical computing. He gave a good sell and was highly opinionated, which was super welcome. His main argument was that these are just other ways to arrive at an optimisation problem, and that they work very quickly with only a bit of data. (I'm sure he would correct me greatly at this point; I'm not an expert on this topic.) But he knew the field very well and talked at length about the differences between one iterative method he developed and the method that Yao Lai at Stanford developed. Her work was on my mind because she had spoken at an AI conference I organised in Oslo. I liked that he seemed willing to disagree with people, simply because he believed he was correct.

Edit: this is the Yao Lai paper I'm talking about:

https://www.sciencedirect.com/science/article/pii/S002199912...


What do you do now?




