

The joke is that the Nobel prize in chemistry is often awarded to non-chemists.


The first five titles in the Touhou series only run on the PC-98. The sixth title is the first one to run on Windows.

The settings, backgrounds and characters were also heavily revamped. The first five titles have settings so different that their stories are rarely considered canon now.


Kuramoto-Sivashinsky is another fun equation with a biharmonic operator.
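
For reference, the usual 1D form is (the fourth-order derivative term is the biharmonic operator in one dimension):

    \partial_t u + u\,\partial_x u + \partial_x^2 u + \partial_x^4 u = 0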


Turbulence is unsolved because Navier-Stokes is unsolved.


Can you not tell it's sarcasm?


Which of course is brought to you by Stephen Wolfram, president, CEO and founder of Wolfram Research Inc., designer of Wolfram Language™ which is available in Wolfram Mathematica™.


If he had had twins he could have raised them from birth to outdo each other. Alpha-Wolfram


"Buy my book!"


Counterexample: MATLAB. Proprietary software, proprietary language, yet it remains unrivalled in control engineering, DSP, communication systems and so on.

I genuinely hope that Octave can take off and trade blows with MATLAB face to face, but I'm not sure that will ever happen, simply because of the amount of money MathWorks is pumping into developing toolboxes.


> yet it remains unrivalled in control engineering

No, its usage is just declining more slowly in those areas, because people don't want unnecessary change introducing unnecessary defects, especially since safety and reliability matter a lot for some equipment, and also because engineers would rather spend their time on something other than learning new languages and frameworks.

In 20 years Matlab will be as legacy as Fortran is now, still some left but mostly forgotten.


> as legacy as Fortran is now

What are you talking about? There's a lot of numerical code written (and actively developed) in Fortran today. If you are anywhere near numerical mathematics, you are running algorithms written in Fortran that somebody is maintaining.


'Lots' and 'legacy' are compatible.


> In 20 years Matlab will be as legacy as Fortran is now

Not if MathWorks keeps offering discounts to universities for putting MATLAB in their courses; gotta get those undergrads "hooked" early, before they graduate.


I've been involved in some hires of new graduates in both engineering and physics. What I've noticed is that students are acutely aware of the job market for programmers, and to a somewhat lesser extent, the relative status of "hardware" and "software." They're differentiating themselves into programmers and non-programmers. I've observed that anybody who can program well enough to do it for money will eventually be doing so.

They're all exposed to Matlab (it's on every resume), but that could range from actually knowing how to program, to having been given some pre-written scripts to run in a class.

However, the ones who are inclined to program want to learn a language that they perceive to be relevant to the software development job market. Some of them have gone so far as to take a handful of CS courses and are as up to date on good coding practices as the CS majors themselves. This even includes some students in traditionally non-programming fields such as biology and chemistry.

Remember that it's usually easier to learn your second language, so if a student has the itch to program, there's a pretty good chance that they will have learned Python on the side by the time they graduate.


Programming languages are not heroin... people want tools offering certain features; they use whatever has them, and it's not too hard to learn a new one.

Matlab-class languages take less than a week for anyone smart to learn and become usably productive in, and then it's a smooth learning curve up. They're not Scala or Haskell or enterprise Java frameworks...

When Julia gets all the advantages currently in the Python ecosystem (and it's just a matter of time), it's game over. People use Matlab instead of Python because Python is weird and slow at a lot of linear algebra stuff... AI/ML people are OK with Python because they rarely write low-level numerics code, and when they do, it has to run on stuff other than regular CPUs, so it's C anyway.

If Matlab loses (fairly), we all win. But Mathematica/Wolfram is a different thing... there's all the symbolic computing stuff, and the idea of integrating access to a general real-world-knowledge DB into the language itself, that will take decades to re-invent...


Not really the reality, imho. I was in a curriculum where mostly Matlab was taught 10 years ago, while I was already programming on the side in Python/R. Most of the students realized after their master's that neither academia nor private companies were using Matlab, and because programming was not the focus, they had a really hard time transferring their knowledge to any other programming language. Not SW/SE/Dev here, just scientific/engineering cross-over students.

After students raised their voices, the faculty finally switched from Matlab to Python for those courses, because what matters is the number of hires after the master's, not getting a deal on software nobody uses besides two old professors.

And in general, the more academia moves away from proprietary solutions (be it STATA, ArcGIS, Matlab, SAS, etc.), the more education will move away from them as well. (Belgian university)


Given that Fortran is one of the few languages with first-class support for GPGPU development, including graphical debuggers and IDEs, that's not bad.


> as legacy as Fortran is now

Disappointingly, because modern Fortran is a nice language for numerical computing.


Compared to C, sure it is. Probably compared to C++ too, since there's less opportunity for obfuscation. Compared to anything else, probably not.


The issue is that most such codes are written in C and C++. Also, anything else usually lacks native multidimensional arrays, element-wise array operations, and parallel programming facilities, let alone raw performance.


> In 20 years Matlab will be as legacy as Fortran is now, still some left but mostly forgotten.

This is a joke, right?

In many academic disciplines that involve numerical work, the amount of Fortran code in use today greatly exceeds that of any of the rivals.

Professors were forced to use Fortran by their advisors when they were in school, so most of their code is in it. These professors are not going to allow their students to reinvent any wheels. And the cycle continues. To give you an idea of how extreme this phenomenon is - almost all the Fortran code out there in academia is still in Fortran 77.


No way, Simulink is way too popular for that


The theorem you are looking for is Shannon's Source Coding Theorem[1]. It basically states that no encoding scheme can losslessly compress data beyond the given data's Shannon entropy.

[1]: https://en.m.wikipedia.org/wiki/Shannon's_source_coding_theo...
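
To make the bound concrete, here's a minimal sketch (my own toy example, not the theorem's formal statement) that computes the empirical Shannon entropy of a byte string, treating the bytes as i.i.d. draws from their empirical distribution; no lossless code can beat that many bits per symbol on average:

    import math
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        """Empirical Shannon entropy of the byte distribution, in bits per byte."""
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    msg = b"aaaaaaaabbbbccdd"              # 8 a's, 4 b's, 2 c's, 2 d's
    h = entropy_bits_per_byte(msg)         # 1.75 bits/byte for this distribution
    print(h, "bits/byte; average lossless floor ~", h * len(msg) / 8, "bytes")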


Amazing, thank you, it had to be something like that. It makes much more sense as an encoding limitation than as something as abstract as the "shortest possible program" via Kolmogorov complexity. I was wondering whether KC may just be an inconsistently defined thought experiment compared to concrete ideas from information theory, and if it's not, how much it would necessarily have in common with the coding theorem, since how are they not talking about the same thing?


What you're after is "minimum description length": choose some representation, e.g. gzip, and try to find the shortest encoding of our input in that representation.

The simplest representation is just raw data. There's only one way to represent our input in this format: just write it out verbatim. Hence its minimum description length is identical to the input size.

A more powerful representation is run-length encoding, where runs of identical symbols can be stored as the symbol and the number of repetitions; everything else is stored raw. Note that there are multiple ways to represent an input in this format: e.g. "50 zeros" is the same as "25 zeros 25 zeros", "00000 45 zeros", etc., yet the latter aren't minimum descriptions.
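
For instance, a minimal sketch of a greedy toy run-length encoder (a hypothetical format, not any real compressor) that produces the minimum description by always extending the current run as far as possible:

    def rle_encode(s: str) -> list[tuple[str, int]]:
        """Greedy run-length encoding: one (symbol, count) pair per maximal run."""
        runs = []
        for ch in s:
            if runs and runs[-1][0] == ch:
                runs[-1] = (ch, runs[-1][1] + 1)   # extend the current run
            else:
                runs.append((ch, 1))               # start a new run
        return runs

    print(rle_encode("0" * 50))   # [('0', 50)] -- the minimum description
    # A sloppier encoder could emit [('0', 25), ('0', 25)]: same decoded
    # output, but a longer description.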

A more powerful representation is to use back-references: this lets us compress runs of multiple repeated symbols (run-length encoding only handles runs with 1 repeated symbol), and re-use commonly occurring 'phrases'. This is essentially how LZ works. Again, there are multiple ways to represent an input in this format; finding a minimum description is incredibly expensive; most compressors don't bother, and instead have a configurable 'effort' (e.g. 1 to 9).
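
As a toy illustration of back-references (a minimal sketch, not the real LZ77 bit layout): a token is either a literal symbol or an (offset, length) pair pointing back into the already-decoded output.

    def lz_decode(tokens):
        """Decode a toy LZ stream: each token is either a literal character
        or an (offset, length) back-reference into the output so far."""
        out = []
        for tok in tokens:
            if isinstance(tok, str):
                out.append(tok)                    # literal
            else:
                offset, length = tok
                for _ in range(length):
                    out.append(out[-offset])       # copy from earlier output
        return "".join(out)

    # "abcabcabc" as 3 literals plus one back-reference instead of 9 literals:
    print(lz_decode(["a", "b", "c", (3, 6)]))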

Adding more power to a representation makes it able to compress more complex patterns, but also makes it harder to find a minimum description. The most powerful representation is a Universal Turing Machine, which can represent any computable pattern; its minimum description length is the Kolmogorov complexity, and finding it is uncomputable in general.

Note that both Shannon information and algorithmic information (i.e. Kolmogorov complexity) can be gamed if we focus on a single message; for example, I can define an encoding like this:

- To encode a snapshot of Wikipedia as of 2022-06-30T14:00:00Z, output a single `1`

- To encode any other data, run it through `zip` and prepend a `0`

- To decode data which begins with a `1`, output a snapshot of Wikipedia as of 2022-06-30T14:00:00Z

- To decode data which begins with a `0`, send everything after that `0` through `unzip`
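
A minimal sketch of that gamed codec (hypothetical code; WIKI_SNAPSHOT stands in for the fixed Wikipedia dump, and zlib stands in for zip):

    import zlib

    # The entire snapshot is baked into the codec itself, so sending it costs "nothing".
    WIKI_SNAPSHOT = b"...entire Wikipedia dump as of 2022-06-30T14:00:00Z..."

    def encode(data: bytes) -> bytes:
        if data == WIKI_SNAPSHOT:
            return b"\x01"                     # the whole snapshot encodes to one byte
        return b"\x00" + zlib.compress(data)   # anything else: flag byte + compressed data

    def decode(msg: bytes) -> bytes:
        if msg[:1] == b"\x01":
            return WIKI_SNAPSHOT
        return zlib.decompress(msg[1:])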

Hence we need to focus on the distribution of possible messages, and the big-O behaviour after many messages have been sent, rather than individual messages themselves.

Shannon information is essentially a frequentist approach, based on counting and expectations. Algorithmic information is essentially Bayesian, where the choice of Universal Turing Machine defines a prior, the bits of a message (AKA a program) are evidence, and decoding that message (AKA running the program) updates that prior.


So rewarding, thank you! Naively, doesn't the expectation in the frequentist approach function as a prior in the Bayesian case? It's like there is a logical homomorphism between the two concepts of KC and SI.


In a sense, Kolmogorov complexity and Shannon entropy can basically be considered equivalent concepts: they both define the minimum amount of information required to fully define a given piece of data.


I'm afraid there really aren't any hobbyist simulators out there. A lot of these arrangements rely on the intrinsic characteristics and geometry of the MOSFETs used.

