
Richard Hamming had a pithy line about this (I believe this is the origin of the phrase "In computer science, we stand on each other's feet"):

Indeed, one of my major complaints about the computer field is that whereas Newton could say, "If I have seen a little farther than others, it is because I have stood on the shoulders of giants," I am forced to say, "Today we stand on each other's feet." Perhaps the central problem we face in all of computer science is how we are to get to the situation where we build on top of the work of others rather than redoing so much of it in a trivially different way. Science is supposed to be cumulative, not almost endless duplication of the same kind of things.



There's just as much wasted effort, not-invented-here attitude, and standing on each other's feet in the hard sciences as there is in computer science. The reason it's so much more visible in this field is that the financial barrier to entry is incredibly low – all you need is a cheap computer, which you already have anyway. If doing physics experiments were as cheap as programming, a lot more people would build fusion devices for fun in their garage [1]. They wouldn't advance actual fusion research, nor would they hinder it, and yet some distinguished physicists would lament that "in physics, we stand on each other's feet".

[1] https://en.wikipedia.org/wiki/Fusor


So once Newton figured out the equation for gravity, Einstein should have left it well enough alone? Science rewrites itself all the time, too. That's the point.

What kills me about programming is the ugliness. In natural science, there's an "I know it when I see it" factor of essential simplicity and beauty. We may not know what to work towards, but we know when we've found it. It's hard to improve on F=ma, and it looks it.

In programming, we just keep endlessly tweaking. (Recently I commented here about a non-software job where I had no deadline, and the programmers here were legitimately concerned that lacking a deadline would cause me to "gold-plate" it.) Writing software, I'm having to use a language with 100 reserved words. There's no essential beauty here. It's a million little committee decisions, layered on the legacy systems of today. I guarantee that in 10 years it's going to be replaced by another language which has 100 reserved words, which will make slightly different design decisions, based on the legacy systems of that time.

Programming language design today is almost a random walk. I don't see it converging at all. Everybody takes a slightly different starting point, and then adds all the features from every other popular language. There's no science happening here.


Einstein should not have spent his time endlessly reordering and rephrasing the three laws of motion in pursuit of elegance. He should not have tirelessly advocated for replacing F=ma with F/m=a. And he didn't.


I agree. Our industry is subject to poorly-vetted fads. For example, a few years ago everyone was talking about functional programming and lambdas: yet another round of the newest "magic Legos". Most of the examples given to justify them were either unrealistic lab toys or things that could have been done with OOP if the language's OOP engine were better. Lambdas were essentially patching bad OOP, i.e., attaching behavior to objects. Now readers have to deal with two paradigms and syntaxes, FP and OOP, whereas if you fixed the lame OOP, they'd only have to deal with one. Sorry, the language did NOT need lambdas.
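One way to see the "lambdas are patching OOP" claim above is that a lambda is essentially shorthand for a one-method object. A minimal Python sketch (all names here are illustrative, not from any real codebase):

```python
# FP style: pass behavior in as a plain function (here, a lambda).
def apply_discount(prices, rule):
    return [rule(p) for p in prices]

half_off = lambda p: p * 0.5

# OOP style: the same behavior "attached to an object" as a
# one-method strategy class.
class HalfOff:
    def __call__(self, p):
        return p * 0.5

prices = [10.0, 20.0]
# Both paradigms express the identical abstraction.
assert apply_discount(prices, half_off) == apply_discount(prices, HalfOff())
```

Whether a language needs both spellings of this idea is exactly the dispute in this thread.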


Or, new languages and language features just steal and adapt from Common Lisp (it's okay, that was just a joke, geeze). I never felt the availability of nearly any programming paradigm ever marred my experience in working in Lisp or made the language any less coherent.

If a fad is just a rediscovery (renewed interest in) some programming paradigm, they will all be familiar to you when they come back around. Hopefully your language incorporates it gracefully. I'm a little circumspect about how Java's lambdas turned out, but I'm not certain it could have actually been otherwise.

So, I don't think it is the inclusion of multiple programming paradigms in your language that is troubling, but rather how it incorporates them. One of the first pieces of advice (and very good advice) in Effective C++ is something to the effect of a) acknowledge that C++ is really a federation of smaller languages, and b) at the outset of a project, explicitly decide which parts the project will use, and which parts it won't.


Lisp is arguably too abstract for rank-and-file use. For more on this hypothesis, search below (Ctrl F) for "Domain-specific languages tend to herd people into certain styles and idioms, making cross-staff reading easier, even if it's more typing. Standardization often trumps linguistic parsimony in real-world work."


Domain-specific languages are everywhere. If you've ever worked in the Java enterprise domain, just look at the zillions of XML and other configuration languages. Take any 1MLOC Java project and it will have all kinds of languages and extensions.

From my personal experience, very descriptive Lisp programs are actually quite easy to read – but they can be harder to maintain, because of this meta-level.

Lisp has a bunch of 'problems':

1) it has this meta-level where code is data and where programs transform code. This adds complexity and increases the distance between the executing code and the written code. The machine may transform a statement before executing it; the new code will then be executed and can itself be transformed again.

2) the code as data feature adds a layer of confusion: what is code and what is data exactly when?

3) the amount of programmer freedom makes it possible to write extremely hard-to-understand code. In particular, the code might only be understandable while it is running (because then introspection and reflection can be used).

4) much of Lisp was developed at a time when more people knew how to use it. A lot of that practical knowledge is lost and thus it's difficult to educate new programmers. In the 'open source'/'free software' domain SBCL

OTOH, the fear of application/domain-specific constructs is overblown. Groups sometimes report that Lisp code for large applications is much smaller and more readable than the equivalent, say, C++. If the code is full of low-level operators, repeated code, etc., then the usual answers are configuration systems, extensive meta-architecture, added languages, an added scripting level, code generators, lots of manual labor, user-interface-level automation tools, ... and this is no better, or even worse, than Lisp-level code generation/transformation.
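Point 1 above – the machine transforming a statement before executing it – has a rough Python analogue in the standard `ast` module, which can serve as a sketch of the idea (this only illustrates code-as-data; it is not how Lisp macros actually work):

```python
import ast

# "Code is data": parse source text into a tree, rewrite the tree,
# then compile and execute the rewritten code. The executing code
# differs from the written code.
source = "result = 2 + 3"
tree = ast.parse(source)

class SwapAddToMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Mult()  # rewrite + into *
        return node

new_tree = ast.fix_missing_locations(SwapAddToMul().visit(tree))
namespace = {}
exec(compile(new_tree, "<ast>", "exec"), namespace)
print(namespace["result"])  # prints 6, not 5: 2 + 3 was rewritten to 2 * 3
```

The "what is code and what is data, exactly when?" confusion from point 2 is visible even here: `source` is data right up until `exec` runs it.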


>Science is supposed to be cumulative, not almost endless duplication of the same kind of things.

Yes, but inventing new languages and building libraries for them has nothing to do with science. Nor is science even a motivator for these endeavours.

We're not talking computer science here. We're talking implementation details.


Welp, people can choose what they want to work on. You could invest your time in building something new (computer science?) or working on reimplementing things in the One True Language (implementation details).


The industry doesn't seem to perfect any one thing; instead it flits off to the next shiny fad. For example, both OOP and FP offer (or can offer [1]) "abstraction". One can perfect their skills in OOP to solve similar abstraction problems that somebody who perfected their FP skills can solve in FP. If the industry stuck with one or the other, people could use it more effectively through sheer experience (including language improvements). But mixing them together in random and differing ways confuses more than it helps. (I'm talking about the average programmer here, not Sheldon Coopers.) It's collective Attention Deficit Disorder.

[1] Languages can and do implement one or the other poorly.
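The claim that OOP and FP can each deliver the same abstraction can be made concrete with a small Python sketch: the same pipeline built by composing functions and by composing objects (all class and function names here are invented for illustration):

```python
# FP style: a pipeline is just function composition.
def compose(f, g):
    return lambda x: g(f(x))

fp_pipeline = compose(lambda x: x + 1, lambda x: x * 2)

# OOP style: the same pipeline as objects behind a common interface.
class Step:
    def run(self, x):
        raise NotImplementedError

class AddOne(Step):
    def run(self, x):
        return x + 1

class Double(Step):
    def run(self, x):
        return x * 2

class Pipeline(Step):
    def __init__(self, *steps):
        self.steps = steps
    def run(self, x):
        for step in self.steps:
            x = step.run(x)
        return x

oop_pipeline = Pipeline(AddOne(), Double())

# Same abstraction, two paradigms: (5 + 1) * 2 either way.
assert fp_pipeline(5) == oop_pipeline.run(5) == 12
```

The commenter's point is that mastering either style solves the problem; it's switching between them mid-codebase that costs the average programmer.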


Yeah, I don't really share Hamming's perspective there at all.




