I also disagree with the premise of this article, and I do not think the argument presented here is compelling at all. The motivations that spur people to do work are complex and diverse, and this article seems to pretend otherwise. For example:
> but it's common for new languages to trigger reimplementation of, e.g., container data structures, HTTP clients, and random number generators. If the new language did not exist, that effort could have been spent on improving existing libraries or some other useful endeavour.
Yes, it could have, but would it? Speaking as someone who has done some of this reimplementation work in a newer language, I can unequivocally say that it would not in my case. I'm just one data point, but I don't think I'm unusual. There is an aspect of greenfield development that appeals to me. It is a rare opportunity to execute a vision with a lot of freedom. There is also the benefit of the new forms of expression that new tools give you. I'm painting with broad strokes here, but I'm generally a believer in the idea that tools themselves can both limit and empower the expression of ideas.
If it weren't for the new language, I surely would have done something else with my time. I don't know what it would be, but I know it would not be trying to expend the social capital required to make small incremental improvements to existing software using tools that I find too limited. As another commenter mentioned, I might have just watched more Netflix.
Speaking as someone who was the "re-implementee" in some cases (i.e., ideas from our mixed C/C++ regex library Hyperscan were borrowed by your regex library in Rust), I'd say things aren't 100% simple here.
I think there is a strong tendency of people programming in new languages to be overly convinced of the elegance and superiority of their new language as a result of (a) getting to rewrite software without the burden of those fussy, irritating existing users and (b) getting to 'rebuild' software without the rather exhausting process of feeling your way to an algorithm or system that never existed before.
There's absolutely nothing wrong with either of these things, but sometimes I think people working in hip new languages tend to mistake the elegance and superiority of their new language for the advantages conferred by a "rework". I bet I could build a pretty elegant regular expression system in C/C++ if I got to start from scratch, too!
This isn't a statement about one language being better than another, and I'm aware of how vexed a problem it is to compare new languages with old. If you built a pile of nonesuch systems in your new language people would complain that they can't compare these systems with stuff that they're familiar with. :-)
I'd just call for a bit of caution in assessing new languages being pressed into service for old, familiar tasks.
> There's absolutely nothing wrong with either of these things
Sure. But note the OP used the word "irresponsible," so I'd say you're actually disagreeing with the OP, and not me. :-)
> I think there is a strong tendency of people programming in new languages to be overly convinced of the elegance and superiority of their new language as a result of (a) getting to rewrite software without the burden of those fussy, irritating existing users and (b) getting to 'rebuild' software without the rather exhausting process of feeling your way to an algorithm or system that never existed before.
Yes. But there must also be room for folks to seriously argue the benefits of one tool over another without simultaneously being dismissed as the zealotry of the newly converted, which is a feeling that I am all too familiar with!
Richard Hamming had a pithy line about this (I believe this is the origin of the phrase "In computer science, we stand on each other's feet"):
Indeed, one of my major complaints about the computer field is that whereas Newton could say, "If I have seen a little farther than others, it is because I have stood on the shoulders of giants," I am forced to say, "Today we stand on each other's feet." Perhaps the central problem we face in all of computer science is how we are to get to the situation where we build on top of the work of others rather than redoing so much of it in a trivially different way. Science is supposed to be cumulative, not almost endless duplication of the same kind of things.
There's just as much wasted effort, not-invented-here attitude, and standing on each other's feet in the hard sciences as there is in computer science. The reason it's so much more visible in this field is that the financial barrier to entry is so incredibly low – all you need is a cheap computer, which you already have anyway. If doing physics experiments were as cheap as programming, a lot more people would build fusion devices for fun in their garage [1]. They also wouldn't advance actual fusion research, nor would they hinder it, and yet some distinguished physicists would lament that "in physics, we stand on each other's feet".
So once Newton figured out the equation for gravity, Einstein should have left it well enough alone? Science rewrites itself all the time, too. That's the point.
What kills me about programming is the ugliness. In natural science, there's an "I know it when I see it" factor of essential simplicity and beauty. We may not know what to work towards, but we know when we've found it. It's hard to improve on F=ma, and it looks it.
In programming, we just keep endlessly tweaking. (Recently I commented here about a non-software job where I had no deadline, and the programmers here were legitimately concerned that lacking a deadline would cause me to "gold-plate" it.) Writing software, I'm having to use a language with 100 reserved words. There's no essential beauty here. It's a million little committee decisions, layered on the legacy systems of today. I guarantee that in 10 years it's going to be replaced by another language which has 100 reserved words, which will make slightly different design decisions, based on the legacy systems of that time.
Programming language design today is almost a random walk. I don't see it converging at all. Everybody takes a slightly different starting point, and then adds all the features from every other popular language. There's no science happening here.
Einstein should not have spent his time endlessly reordering and rephrasing the three laws of motion in pursuit of elegance. He should not have tirelessly advocated for replacing F=ma with F/m=a. And he didn't.
I agree. Our industry is subject to poorly-vetted fads. For example, a few years ago everyone was talking about functional programming and lambdas: yet another round of the newest "magic Legos". Most of the examples given to justify them were either unrealistic in the real world ("lab toys") or could have been done with OOP if the language's OOP engine were better. Lambdas were essentially patching bad OOP, i.e., attaching behavior to objects. Now readers have to deal with two paradigms and syntaxes, FP and OOP, whereas if you fixed the lame OOP, they'd only have to deal with one. Sorry, the language did NOT need lambdas.
Or, new languages and language features just steal and adapt from Common Lisp (it's okay, that was just a joke, geeze). I never felt the availability of nearly any programming paradigm ever marred my experience in working in Lisp or made the language any less coherent.
If a fad is just a rediscovery (renewed interest in) some programming paradigm, they will all be familiar to you when they come back around. Hopefully your language incorporates it gracefully. I'm a little circumspect about how Java's lambdas turned out, but I'm not certain it could have actually been otherwise.
So, I don't think it is the inclusion of multiple programming paradigms in your language that is troubling, but rather how it incorporates them. One of the first pieces of advice (and very good advice) in Effective C++ is something to the effect of a) acknowledge that C++ is really a federation of smaller languages, and b) at the outset of a project, explicitly decide which parts the project will use, and which parts it won't.
Lisp is arguably too abstract for rank-and-file use. For more on this hypothesis, search below (Ctrl F) for "Domain-specific languages tend to herd people into certain styles and idioms, making cross-staff reading easier, even if it's more typing. Standardization often trumps linguistic parsimony in real-world work."
Domain specific languages are everywhere. If you ever worked in the Java Enterprise Domain: just see the zillions of XML or other configuration languages. Take any 1MLOC Java project and it will have all kinds of languages and extensions.
From my personal view, very descriptive Lisp programs are actually quite easy to read - but they can be harder to maintain: there is this meta-level.
Lisp has a bunch of 'problems':
1) it has this meta-level where code is data and where programs transform code. This adds complexity and increases the distance between the executing code and the written code. The machine may transform a statement before executing it; the new code will then be executed and may itself be transformed in turn.
2) the code as data feature adds a layer of confusion: what is code and what is data exactly when?
3) the amount of programmer freedom makes it possible to write extremely hard-to-understand code. In particular, the code might only be understandable while it is running (because only then can introspection and reflection be used).
4) much of Lisp was developed at a time when more people knew how to use it. A lot of that practical knowledge is lost and thus it's difficult to educate new programmers. In the 'open source'/'free software' domain SBCL
OTOH, the fear of application/domain-specific constructs is overblown. Groups sometimes report that Lisp code for large applications is much smaller and more readable than the equivalent in, say, C++. If the code is full of low-level operators, repeated code, etc., then the usual answers are configuration systems, extensive meta-architecture, added languages, an added scripting level, code generators, lots of manual labor, user-interface-level automation tools, ... and this is no better, and often worse, than Lisp-level code generation/transformation.
Welp, people can choose what they want to work on. You could invest your time in building something new (computer science?) or working on reimplementing things in the One True Language (implementation details).
The industry doesn't seem to perfect any one thing; instead it flickers off to the next shiny fad. For example, both OOP and FP offer (or can offer [1]) "abstraction". One can perfect their skills in OOP to solve similar abstraction problems that somebody who perfected their FP skills can solve in FP. If the industry stuck with one or the other, people could use it more effectively through sheer experience (including language improvements). But mixing them together in random and different ways just confuses more than it helps. (I'm talking average programmer here, not Sheldon Coopers.) It's collective Attention Deficit Disorder.
[1] Languages can and do implement one or the other poorly.
"It's easier to birth a baby than to resurrect the dead."
There's the weight of an existing community that also might make "improvements" more difficult. It's the same reason people leave large companies to go start smaller companies. Could they have stayed and made that company better? Maybe. Maybe not.
There’s a low-hanging fruit element to new languages as well. Reimplementing is a lot easier than improving, since improving a well understood algorithm requires invention and redoing something only requires research.
Definitely. Also, there can be flaws in the actual implementation of older tools as well. I'm going to avoid diving into specifics because it would be a distraction, but I know more than a few older tools that would benefit from parallelism. They were written in a time where parallelism wasn't as ubiquitously supported as it is today, and thus, the entire implementation would need to be carefully rethought, refactored and possibly rewritten to grow that benefit. We're talking shared global mutable state, everywhere.
Even if you had the person with the expertise, time and motivation to do the work to improve the implementation, you still aren't even guaranteed that the enterprise will succeed. You'll need immense social capital to even convince others that it's actually the right thing to do. Just think of all the new bugs that will be introduced. The angry users. The headache. The added maintenance burden and the pressure from trying to fix the new bugs quickly. It's an absolute nightmare and it's why it doesn't happen, IMO. It's also why starting anew is so much easier and so much more enticing. When you don't have any users, that's when you can really innovate.
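To make the shared-global-state problem concrete, here is a minimal Java sketch (the class and method names are mine, not from any tool discussed in this thread). The first counter is the pattern older single-threaded tools rely on everywhere; the second shows the kind of state a parallel rewrite forces you to adopt:

```java
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

class Counters {
    // The old pattern: a bare global counter, safe only because the
    // original program was single-threaded.
    static long plainCount = 0;

    static void plainIncrementAll(int n) {
        for (int i = 0; i < n; i++) {
            plainCount++; // read-modify-write: a data race if threads share it
        }
    }

    // After the rethink: state designed to tolerate concurrent writers.
    static final LongAdder safeCount = new LongAdder();

    static void safeIncrementAll(int n) {
        // Safe to parallelize because each increment is atomic.
        IntStream.range(0, n).parallel().forEach(i -> safeCount.increment());
    }
}
```

The hard part isn't this ten-line transformation; it's that a real old tool has hundreds of `plainCount`-style globals threaded through every function, so the rewrite touches everything at once.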
"They were written in a time where parallelism wasn't as ubiquitously supported as it is today, and thus, the entire implementation would need to be carefully rethought, refactored and possibly rewritten to grow that benefit. We're talking shared global mutable state, everywhere."
I keep a mental list of "things that are effectively impossible to backport onto an existing large source code base", which I really need to keep better track of. (By large, think developer-centuries at a minimum.) But it includes: parallelism/threads/concurrency, adding an external API, transactional correctness, ripping pieces apart and separating them by a network, and transitioning to a fundamentally different data store with different guarantees.
All of these have been done, of course, so they aren't literally "impossible". It's just that once you have a source code base with hundreds or thousands of developer-years in it, the amount of effort required and the resulting changes are so extensive that it's not entirely unreasonable to call the result a different code base. You certainly wouldn't end up with anything where you could code new functionality that works against both the old and new codebase without changes, unless you also made that a priority in its own right and put a huge additional amount of work into it. (See, for instance, the amount of work PyPy has put into supporting CPython modules. Again, it's possible, barely, but it was on its own terms a very significant amount of work.) So there's a fairly real sense in which you didn't put these features into the code base; you created a new one with a clear parent line of development, but it is no longer meaningfully the same.
Each of these things, and all the other ones I've forgotten, can be the basis of a new programming language, because at scale it's actually much easier to write a new language like, say, Erlang, and populate it with libraries, and proceed to use it to write tons of multithreaded, Erlang-robust project code than it is to write the same amount of multithreaded, Erlang-robust project code with C. However much work it was to create Erlang libraries, it's dwarfed by orders of magnitude by the amount of Erlang code using it. It's easier to write Rust and populate it with libraries and use it for projects than it is to write the same amount of safe project code with C. Etc.
I'm sorry, but the need for parallelism is exaggerated; it's mostly a fad. It greatly complicates language design and debugging, so it should be added and used with care. Systems software (OSes, networking, database engines, etc.) does indeed need good support for it, but the majority of domain applications only need basic support for parallelism (unless you are doing something silly, like reinventing a database). If language designers don't learn to say no to fads, the language becomes a bloated mess that only a mother can love. Let other suckers test fads for viability, and only add features that are relevant and time-tested for the target audience.
Parallelism is certainly not a fad. We have such different conceptions of the reality around us that it's not even worth arguing with you (in particular, after seeing your other comment about the Evils of Lambdas). I'll happily play your role of the "sucker."
Moreover, you missed my point, unless you're genuinely making an argument for radical Luddism.
He said it's mostly a fad, not entirely. Obviously parallelism is frequently very useful.
I think what he was getting at is that the idea of languages integrating parallelism such that almost all code would be parallel all the time has proven to be a dead end. The functional language research world promised for a long time that the main benefit of FP was automatic parallelism (no mutable state, you see). But it never really happened.
And in the more mainstream world, Java 8 invested heavily in parallel streams, but I've never seen an actual parallel stream in real code. I see threads all the time, despite everyone agreeing how awful they are. Parallel streams? Auto-parallelised FP code? No, doesn't happen. The level of parallelism is too small for programmers to think about. Most datasets are too small and most loops too unimportant to even take the risk of a bug creeping in through attempting to use parallelism, regardless of how convenient.
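For illustration, here is roughly what that rarely-seen Java parallel stream looks like (class and method names are mine). The point is that the parallel version is a one-word change, yet for the small datasets most loops process, the fork/join overhead usually swamps any speedup, so nobody bothers:

```java
import java.util.stream.LongStream;

class SumDemo {
    // Sequential: what nearly all real code looks like.
    static long sumSequential(long n) {
        return LongStream.rangeClosed(1, n).sum();
    }

    // Parallel: a one-word change that splits the work across the
    // common fork/join pool. Rarely worth it for small n.
    static long sumParallel(long n) {
        return LongStream.rangeClosed(1, n).parallel().sum();
    }
}
```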
> I think what he was getting at is that the idea of languages integrating parallelism such that almost all code would be parallel all the time has proven to be a dead end.
But that's obviously not what I was talking about. I wasn't even talking about parallelism in the context of language design. If an old tool is written in C and uses shared global mutable state so extensively as to make a performance boost from parallelism so impractical as to require a rewrite, then that's exactly the thing stopping people from "just" redirecting their efforts to improving old software. Because sometimes small incremental improvements aren't enough, and getting the buy-in to do a full rethink of an old tool takes tremendous social capital. And this is just about improving the old tool using its existing language---this isn't about going and using a new language with better support for parallelism. (Which, by the way, comes in many flavors.)
At no point did I talk about fancy parallelism related language features. At no point did I say that we should just go and add parallelism to an old program because that's what all the cool kids are doing. I framed it as a specific solution to a specific problem: making the old program faster. People like faster programs and speed ain't a fad. If you want to bring in examples of adding parallelism that don't improve performance, then great, but leave me out of it, because at no point did I advocate that.
I think your interpretation of tabtab's comment is a real reach, but whatever. Particularly given their other comments denouncing the evils of lambdas. Really? Give me a break.
I've been in a good many debates about the practicality of lambdas and parallelism for typical application development[1], and I stand by my claim. My opponents failed to give sufficient practical examples that stood up to scrutiny. Granted, some of it is subjective, but the fact that it's subjective means it's not a necessity.
True, sometimes it's good to have more than one way to do something, but sometimes it also creates a bigger and unnecessary learning curve. In the cases I looked at, making better OOP would result in a simpler language than adding lambdas, at least in my judgement. I'd be glad to debate it further, but this is not the best forum for such.
[1] Some argue the same languages should be used for systems software and applications development. I don't agree.
I see your debates and raise you practice. I've done plenty of typical application development that makes practical use of lambdas and parallelism for great benefit.
I'm not saying they "don't work", only that other features/designs can often do them also without using/adding new constructs: parsimony of language features. Can you suggest a forum to continue such debates? If not, I'll see what I can find. (My past favorite forums died.)
You've pretty drastically changed topics. I have no interest in a language design debate with someone who thinks of lambdas as "magic Legos." As I said before, our conceptions of reality are so drastically different that it would be a gratuitous waste of my time.
You've gone from "parallelism is a fad" to "let's subjectively evaluate language design based on my ideas of parsimony." Parallelism isn't some kind of language feature I'm touting. It's an implementation tactic that can be used to make certain programs faster. Before such tactics were common, it was easy to write programs that were blissfully unaware of it in such a way that it is difficult to bolt on later. I used parallelism as one possible example of something that would be incredibly difficult to add to older programs to combat the notion that we should stop re-inventing the world and instead improve older stuff.
Like most things in this world, parallelism can be misused. This is hardly interesting and really doesn't need to be pointed out. Moreover, the degree to which parallelism can be wielded effectively will, in part, depend on your tools. Some tools make parallelism easier to use or reason about than other tools do. With that said, even that was immaterial to my point, because I wasn't doing a comparative analysis of parallelism across programming languages. I was just using it as an example of a concrete improvement that one could make to older programs that is often not practical to do, precisely because they are older programs that are widely used. Parallelism isn't the only example of this kind of improvement; jerf provided other examples in this thread. But parallelism is an easy one to grasp because most programmers with a few years of experience can appreciate what it's like to refactor a large program that is deeply coupled to unsynchronized shared global mutable state to a program that isn't---which is generally a good idea if you want to add parallelism to it.
Your initial response to this was:
> I'm sorry, but the need for parallelism is exaggerated; it's mostly a fad.
But this is completely pointless. That parallelism is a useful way to make a program faster is taken as axiomatic in my comment. There is enough of my code (and others) out in the wild that uses parallelism to achieve some concrete measurable improvement that I feel it is obviously true and really doesn't need any further explanation. But here you are, nitpicking at an axiom with a bunch of nonsense about "fads," and completely missing my point in the process.
Re: "let's subjectively evaluate language design based on my ideas of parsimony." -- I didn't claim that. I don't know where you got it. Note that "less code to do X" is often the claim made, and "code size" is probably the most objective metric available to compare techniques (but still imperfect). "Number of lines needing changes per change request X" is another. Other claim types such as "elegant" is often in the eye of the beholder. There is no universally accepted ruler (metric) for "elegant".
And I'm not against SOME parallelism support in app languages. It's just my experience that if you "need" to do it A LOT in applications, you are probably doing something wrong.
Re: "Like most things in this world, parallelism can be misused. This is hardly interesting and really doesn't need to be pointed out." -- Fads tend to greatly increase the % of misuse.
And lambdas and OOP do fight over similar territory, or at least can if the language has certain features. For example, Java coders may say, "I need lambdas here because OOP can't do what I need." But it turns out Java's OOP can't do what's needed, not something inherent in OOP.
Anyhow, English is often not sufficient by itself to clarify such debates; we'd need specific code samples and scenarios to explore.
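Since specific code samples were requested, here is a minimal Java sketch (the names are mine) of why lambdas and OOP contest the same territory: in Java, a lambda is just a terser instance of a single-abstract-method interface, so the two features express the same thing.

```java
import java.util.function.IntUnaryOperator;

class Territory {
    // "OOP style": behavior attached via an object implementing an interface.
    static final IntUnaryOperator doubleItObj = new IntUnaryOperator() {
        @Override public int applyAsInt(int x) { return x * 2; }
    };

    // "FP style": identical behavior as a lambda. The compiler targets the
    // same single-method interface, which is why the features overlap.
    static final IntUnaryOperator doubleItFn = x -> x * 2;
}
```

Whether the shorter form justifies a second syntax for the same idea is exactly the parsimony question under debate here.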
> And I'm not against SOME parallelism support in app languages.
Once again, you said:
> I'm sorry, but the need for parallelism is exaggerated; it's mostly a fad.
So, what's your point? Every time someone talks about parallelism---whether it's legitimate or not---you see it as your duty to pontificate on the "fad" that is parallelism? You've completely and utterly missed my point, which had nothing at all to do with debating the legitimate use of parallelism in a specific case. That there are legitimate uses of parallelism is enough to make my point.
> It's just my experience that if you "need" to do it A LOT in applications, you are probably doing something wrong.
Again, so what? This has literally nothing to do with anything I've said in this thread.
So, umm, thanks for derailing the conversation into your own pet cause?
> And lambdas and OOP do fight over similar territory, or at least can if the language has certain features. For example, Java coders may say, "I need lambdas here because OOP can't do what I need." But it turns out Java's OOP can't do what's needed, not something inherent in OOP.
Again, so what? This seems like a useless academic point. At no point did I invite such navel gazing.
Re: "That there are legitimate uses of parallelism is enough to make my point." -- Yes, there are niches that need such; I don't dispute that. "Faddism" is a matter of degree. Every big language shoehorning in lots of X because niche Y needs X is probably a case of faddism. Again, we'd probably have to examine specific instances. English by itself won't settle this.
There are a couple of problems with your perspective:
1. Even if re-implementation of libraries in new languages were free due to an excitement factor, real software is not built this way. New languages become old, and software needs maintenance. So what you get for free (supposing you're right) is not worth very much and not what counts, anyway. Most of the cost and value is in prolonged maintenance, which has to be done for multiple implementations in parallel.
2. It's reasonable to assume that work is never really free. But if we're hypothesizing about alternate realities, you may as well imagine that instead of new languages something else is invented to motivate you to do free work.
I don't really appreciate the debate tactic of pretending that I made a stronger claim than I actually did. In particular, I didn't claim or imply that re-implementation of things was "free." It is much easier to dismiss that claim as ridiculous than to dismiss the more measured perspective that I actually expressed.
> real software is not built this way
This is a bullshit say-nothing phrase. You're using "real software" to presumably denote some dichotomy of software that doesn't exist. If you asked 10 people to define what "real software" actually meant, you'd get 10 different definitions. They might agree on the extremes; but there's a whole mess of crap in the middle that we could endlessly debate about.
> New languages become old, and software needs maintenance. So what you get for free (supposing you're right) is not worth very much and not what counts, anyway. Most of the cost and value is in prolonged maintenance, which has to be done for multiple implementations in parallel.
This isn't clearly true at all.
I mean, if you're going to talk about maintenance, then you also need to consider the impact of improved tools on the costs of maintenance. If a tool makes maintenance easier or less costly, then it's pretty hard to say anything definitive about long term costs. New tools can and do impact the usage of old tools, which in turn can have a direct impact on long term maintenance costs.
> It's reasonable to assume that work is never really free. But if we're hypothesizing about alternate realities, you may as well imagine that instead of new languages something else is invented to motivate you to do free work.
Absolutely! But that's not the argument I'm countering. I'm specifically countering the notion that new language development can be "irresponsible" because it "could" divert work away from other projects. What I'm saying is that it is not necessarily true at all.
There is a more measured argument to be made here, but the OP did not make it.
New ideas should be born and die in accordance with the resources of folks willing to keep them alive. We don't need some grand notion of "responsibility" to control the ebb and flow of ideas.
> You're using "real software" to presumably denote some dichotomy of software that doesn't exist.
I think that what I meant was clear from context: software that has a long-term, significant impact.
> then you also need to consider the impact of improved tools on the costs of maintenance
Sure, but however much maintenance costs, it still needs to be done. It cannot be carried long by a novelty factor.
> New ideas should be born and die in accordance with the resources of folks willing to keep them alive. We don't need some grand notion of "responsibility" to control the ebb and flow of ideas.
I agree. Anyone should work on whatever they find interesting, and that (normally) doesn't make the choice irresponsible. But I still think that empirically new languages, on average, increasingly cost more and contribute less.
Most new things suck. But every now & then something new AND cool comes along, making all of the old stuff less relevant.
For me, Elixir was yet another language to learn, and was just a reimplementation of another language. But when I used it, I was like "whoa, this is sweet!"
You gotta build something new when you can't find something old that does what you wanna do.
My point has nothing at all to do with what sucks and what's cool, but with the fact that transitioning to new languages does have a non-trivial cost, for individual organizations as well as for the industry as a whole. Moreover, productivity gains from new languages seem to have plateaued a couple of decades ago (except in some niches, such as systems programming, that have seen little change in the past 20-30 years). I.e., we no longer see the same productivity boosts as we did going from Assembly -> FORTRAN, from FORTRAN -> C, or from C -> Java (and even then, the gains were smaller with each jump). Fred Brooks predicted in the '80s that this would happen, and reality has shown even smaller gains than he predicted. Part of the reason for the growing fragmentation is precisely that no language has stood out as a clear leader.
> but with the fact that transitioning to new languages does have a non-trivial cost
But this is completely uninteresting. Nobody has claimed that transitioning to a new programming language doesn't have a non-trivial cost.
You're countering a non-existent argument. I mentioned this in my initial reply to you: you're debating a point that is easy to refute. I don't know why, or where the source of confusion is, but you are.
> Moreover, productivity gains from new languages seem to have plateaued a couple of decades ago (except from some niches, such as system programming, that have seen little change in the past 20-30 years). I.e., we no longer see the same productivity boosts as we did going from Assembly -> FORTRAN, from FORTRAN -> C, or from C -> Java (and even then, the gains were smaller with each jump). Fred Brooks predicted in the '80s this would happen, and reality has shown even smaller gains than he had predicted.
This just sounds like a lot of opinion expressed as if it were a fact.
> Nobody has claimed that transitioning to a new programming languages doesn't have a non-trivial cost.
I was replying to the argument that the work is not at the expense of anything else because it wouldn't happen otherwise.
> This just sounds like a lot of opinion expressed as if it were a fact.
I think it's pretty well supported. The cost of developing software from scratch has not decreased significantly since ~2000 (there have been gains due to the availability of lots of open source libraries). In the mid eighties, Brooks claimed that no single language improvement would give us a 10x productivity gain over a single decade. While at the time his prediction was seen as overly pessimistic, it's been three decades and we still haven't gained a 10x boost with all improvements combined (barring, maybe, the availability of open source libraries, which has nothing to do with language improvements). You can see clearly that companies are not migrating en masse to some new language (as they did with assembly->C or C->Java/C#). Instead, many languages are being used; none so far seems to have a definitive advantage that encourages the majority of the industry to migrate. You do see plenty of claims, but no major bottom-line impact that causes decision-makers to say, "we must use that!" as was the case with previous generations, where, BTW, significant advantages became apparent quite quickly.
> This just sounds like a lot of opinion expressed as if it were a fact
It's arguably overstated, but I'm curious if you would argue that there are languages created in the past 15 years that create as big of a productivity boost as Assembly -> C or C -> something with a GC?
I would not argue that, no. I could relate my own personal experiences over the last 15 years, but they wouldn't generalize and might not even be coherent themselves (because I am human and therefore haven't remained immutable).
FWIW an example of behaviour I would consider irresponsible would be if a large company liked Rust but didn't like not controlling it, so decided to clone or fork Rust and use their resources to popularize that (instead of just putting those resources into Rust itself).
> I don't really appreciate the debate tactic of pretending that I made a stronger claim than I actually did.
Ahem.
> I'm specifically countering the notion that new language development can be "irresponsible" because it "could" divert work away from other projects. What I'm saying is that it is not necessarily true at all.
I (OP) agree that it is "not necessarily true", and my post deliberately made the very weak claim that "in some cases" "it could be irresponsible". I don't see how I could have made the argument "more measured" without eliding it completely.
Your OP makes several generalizations, and I disagree with almost every single one of them. So no, from my perspective, your position really isn't measured at all.
> I don't see how I could have made the argument "more measured" without eliding it completely.
Honestly I would just ignore this person @burntsushi .
I actually KNOW who you are from your really consistently excellent work on useful libraries, whereas I have no idea who "pron" is and I suspect he is a troll
pron isn't a troll. They comment pretty frequently on HN and other places. I generally disagree with their perspective on software development; but we likely have very different experiences.
And the flaw with your perspective is that if we all followed this advice, we might have a few highly polished bullet proof C libraries to use, but we'd still be stuck writing C and following all of its paradigms.
Sure, but you have to have the bad attempts to learn things that don't work or we won't end up with the good ones. We fall into a local maximum instead of exploring farther and potentially going higher. Every dead end is another bit of the map explored when it comes to creating expressive languages.
Now, is there an argument we'd be better off doing that exploration in a space designed to make DSLs easier than in building entirely new languages? That I dunno but it is worth considering.
you just gave me a great idea for a new language. Well, actually a big library of ASM code that every other language could write wrappers to, to get low-level gains on all the boilerplate. Then we could all focus on making that really fast.
The Truffle VM stuff takes an interesting approach by (afaik) executing the language's getters/setters and then having a common data format (I presume?). What if someone could make an LLVM version of that?
there is always a myth that someone with absolute control of resources could more efficiently re-allocate those efforts.
But first you have to obtain absolute top down control in order to do so.
Very similar to how command-and-control political schemes could in theory be more efficient than messy democracies, but those messy democracies, while inefficient, produce variety that guards against the much larger errors of mis-allocation, ineptitude, or corruption.
Also, the person with the vision of absolute top-down control typically selects their own vision as the correct one: "if only we implemented my vision, since it is the correct one."
If we did what the OP suggests, we'd be communicating with CORBA across the net and have ten times as many artifacts such as "car" and "cdr".
It sounds interesting, but I don't really agree with the idea that we can analogize software development with socio-political organization. There are certainly a lot of interesting similarities, but I would be super careful about drawing any conclusions from it.
The cost of mindshare. Everyone grouping up on a major programming language or framework. It's a bit irritating when people often choose their language or framework because of the size of the community rather than actually evaluating alternative languages or frameworks first.
Take React for example. There are some great alternatives to it, especially inspired by react but simpler & cleaner DSLs. But people just keep with React anyways because everyone is already there.
I used react for 4 years. I don't see the point in using something that is 10% different.
Then you need to hire engineers, and most of them know React better than some obscure library.
React has lower chance to be discontinued.
React works well. I'd rather focus on building new products than changing tech. Of course, during the jQuery era I hated it all the time and was looking for alternatives, but React is exactly how I think about UI: components (HTML, CSS, JS all in one file), not templates.
I'm pretty efficient at react and I can't justify using something else.
React recently decreased in size... To 10 megabytes. This could be helped by the use of CDN and caching, but react discourages this in favor of webpack bundles.
I can't speak to React, but I definitely see this happening with Java. To the point where I've watched people make conscious decisions to the effect of, "Sure, I do think Kotlin is a superior language that will allow us to work better and faster, but everyone already knows Java, and we feel more comfortable sticking with the mainstream."
:shrug: I like to be a bit more flexible than that, but I suppose there is something to be said for consistency.
It's probably a small disagreement in your case. Language decisions are hard to undo. JetBrains doesn't (yet) have the reputation of long term support for developer platforms that Sun, Microsoft, the open source community etc have. Also judging the ROI is hard - the decision makers who will be held accountable don't benefit from the use of the better tool.
That's true but the learning curve difference between React and VueJS isn't anywhere near as significant as the learning curve difference between JavaScript and Elixir for example.
Programming languages have a lot of nuances and patterns of use which can take years to fully absorb. Angular, VueJS and React share many similar patterns and best-practices.
How many of the new languages are supported by a fresh new wave of engineers who want to learn things?
To become a good engineer I had to go through implementing my own container types, play with my own little databases and network libraries, implement a build system, ... I published a few of those experiments in Ruby, the new/hip language at the time. Picking a fresh language seems the obvious playground to do this, it's where you can leave your mark.
My impression was that nodejs developers' age distribution was quite young as well when it started.
My point is that it's necessary to introduce new languages once in a while to get good engineers :)
At one company I worked for, the guy in charge of ops had put a whole lot of work into standardizing the production environment and tooling, and getting it all tuned just so.
A couple of years later, he lamented that new hires were achieving deep competence more slowly, and weren't reaching the same level of competence that previous team members used to. It wasn't a big deal most of the time, but they made a lot of mistakes when rolling out new components, and tended to get caught flat-footed whenever something went sideways. He suspected that the problem was that, in its current state, the new system worked so smoothly that they could get away with understanding things at a very superficial level, and that hindered their learning.
That was my first thought when I read the post: He's discounting the fact that by doing this, many programmers are gaining in expertise. It's not a given that those programmers would grow their skill set as much otherwise. Advancing the state of the art in a well established language requires a lot more skills than implementing simple libraries for a new language.
IMHO, the author is completely wrong, and the sentence "C is the desert island language" is closer to reality (see http://www-cs-students.stanford.edu/~blynn/c/intro.html). The fact that such a poor language remains the single sane choice for building the Linux kernel is proof of the lack of alternatives. C++ was already a mess in '98. Python started out simple but has added more and more complex syntax. Java is in the hands of Satan, Ada is too verbose and nobody takes the effort to learn it, and Haskell and Scala are niche languages that are not so easy to learn...
IMHO, there is a need for a polyvalent language that would enable development of Linux kernel (low level and performance), that would be easy to learn and that would enable quick development (dynamic typing, syntactic sugar, ...).
ML family languages are already the "polyvalent languages" you describe and have been a better choice than C for at least 20 years now. If your goal is seeing adoption of such a language, your time would be better spent fixing the things that lead you to dismiss Haskell and Scala - any new language would certainly be even more of a "niche language" than they are.
> there is a need for a polyvalent language that would enable development of Linux kernel (low level and performance), that would be easy to learn and that would enable quick development
That's a pretty tall order for a single language to support all those desires. But Rust seems to at least aim for that space (it doesn't check all your boxes though).
I'm not sure how old that article is, but parts of the gcc suite have been rewritten in a subset of C++ (https://lwn.net/Articles/542457/), starting over 10 years ago.
Yes, I know that. C++ is also very much used for game development. C++ allows you to build very impressive abstractions that are very handy. The main drawback is that this flexibility has a huge impact on the language: when you switch between two projects written in C++, it is almost as if they were using a different language. I understood that this language was a mess in '98, when I bought the standard and read it completely. In professional projects, I always try to avoid it in favor of Java, despite all the years I have spent learning and using C++.
Re: C++ allows to build very impressive abstractions that are very handy. The main drawback is that this flexibility has a huge impact on the language. When you switch between two projects written in C++, it is almost as if they were using a different language.
This has always been the two-edged sword of high abstraction: people use high abstraction to invent idioms that match their OWN head, and other heads have difficulty reading it. I've been in long debates with Lisp fans over the commercial practicality of Lisp. Lisp and its variations have had almost 60 years to "catch on" in the mainstream, and keep failing, staying a niche. If you keep losing beauty pageants for 60 years, it's time to admit you may be ugly. Just because YOU like it and/or YOU can read a certain style does NOT mean others can.
Domain-specific languages tend to herd people into certain styles and idioms, making cross-staff reading easier, even if it's more typing. Standardization often trumps linguistic parsimony in real-world work. I'm just the messenger. (I'm talking general domains here, not necessarily specific industries.)
My own experience with C++ is mainly limited to game development since the base classes in UE4 are written in and can be extended with C++ (as is most of the engine itself).
Unreal has a crazy bolted on reflection system implemented with its own pre-processor to work around the lack of reflection in C++. Otherwise C++ makes sense there I think because (a) performance is extremely good which is important for games, 3D games in particular; and (b) games are the type of program which can often benefit from OO design and inheritance.
I'm a long time C casual who only recently made the leap to C++, but I know what you mean when you say two C++ codebases can have vastly different styles. Of course that is true of many languages but it does appear to be amplified with C++.
> When you switch between two projects written in C++, it is almost as if they were using a different language.
I strongly disagree. This may have been the case 20 years ago, but most current projects use C++, not some weird -fno-rtti -no-stdinc -fno-whatever sub-language. Maybe they don't use all the features of the standard library, but which project in Java or Python would?
Well, to me it's like the way Shakespeare, e e cummings, and Kool Keith all write in the same language.
Context switching between working on work codebase, Chromium code base, and various third party C++ codebases is a headache as much for their basic differences in coding style (use of whitespace, capitalization, naming, etc.) as for their wildly varying build/metabuild processes.
And of course, each implements and then extensively uses their own library features like reference-counted pointers, managed GC pointers, collection data structures, iterator-like abstractions, etc.
Boost? I'm sure there's good reason for the -- sui generis -- build system. Always been too underwater to spend much time wondering about why that's a yak I'm shaving.
Not easy, because you want the best of all worlds: safe(1), low level(2), efficient(3), easy to learn(4) and easy to implement (5)? All of that seems quite incompatible to me. And C remains the language of choice, because it fulfills all the needs except safety.
Dynamic typing and garbage collection go directly against the "low level and performant" goals. You can avoid GC with clever lifetime rules (Rust) but that loses "easy to learn".
There's no particular reason you have to use C to write Linux. Linux is written in C because when the project started, that was the most reasonable choice. It persists today because of inertia and because introducing something better, like C++, would cause big flamewars.
People have written perfectly functional and competent kernels in C#, Java and C++ after all. There's no technical reason why you can't do it.
I really believe that it is better to have many small 'throw-away' programming languages that can interoperate with each other, than having a few huge languages that are isolated in their own ecosystem.
A language shouldn't be complex nor should it take more than a few days to learn, and it should be easy to abandon when another language is better suited for a problem.
What actually makes many languages useful is their standard libraries and library ecosystem, and the two always get mixed up. When people talk about how great language X is, in most cases they mean how feature-rich or easy to use the standard library of that language is.
With few exceptions, those libraries shouldn't be tied to a specific language. Let me access your library written in language X from my language Y.
Let me easily create projects that are made of different languages, for each part of the project the language that fits this part best.
Function calls between languages are typically expensive and difficult to implement. Each language generally has its own expectations about the layout of data in memory, which means that for language A to call a function written in language B, it needs to set up a block of memory in the layout expected by language B. This normally involves a lot of copying, which adds overhead to the function call.
There also needs to be some code that does this copying, and this code needs to be implemented for every pair of languages. Alternatively, all the languages can agree on a common interface level (C, JVM, .NET, etc.), but in this case most languages won't get a function interface that feels native to the language.
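The "common interface level" option described above is what C-ABI FFI looks like in practice. As a minimal sketch (Python's `ctypes` calling `sqrt` from the system C math library; the `"libm.so.6"` fallback is a glibc/Linux assumption), note that the caller has to declare the C-side memory layout by hand, and that arguments are marshalled (copied) across the boundary on every call:

```python
import ctypes
import ctypes.util

# Load the system C math library through the C ABI -- the "common
# interface level" most languages can target. Library lookup is
# platform-specific; the "libm.so.6" fallback assumes glibc/Linux.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# The caller must describe the C-side types by hand; ctypes then
# converts (copies) each Python float into a C double on every call.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # → 1.4142135623730951
```

This also illustrates why such interfaces rarely feel native to the host language: the `argtypes`/`restype` declarations mirror C's type system, not Python's.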
PHP took the approach of implementing many of its libraries using existing C libraries. They may change the parameters and packaging, but they are only reinventing the interfaces, not the implementations. It's one of the reasons for PHP's early success, despite it being a somewhat clunky language.
Therefore, if anyone out there wants to invent Yet Another Language, try to piggyback on existing C libraries to speed up library creation.
Agreed, but I think those problems should be solved, the same way LLVM solved the N:M frontend:backend problem :)
It's very likely that interacting with such a generic library interface doesn't feel native to the language, but I think it doesn't have to be a monstrosity like DOM or CORBA.
IMHO these problems are fundamentally unsolvable, often because the data models are incompatible: the fundamental behavior and benefits of language A rely on its data structures having trait X, while those of language B rely on its data structures not having trait X. So a library written in one language can't just accept nontrivial data from the other; it can't be allowed to operate directly on the other language's data in memory without at least copying it, and often requires a performance-killing transformation of the whole data.
You see that in all kinds of interfaces across language boundaries. Memory layouts, thread safety, (im)mutability, structure ownership or reference counting, handles to external/OS resources, etc, etc.
Every language is an abstraction that relies on certain assumptions, and those particular assumptions are what make the language good for its niche. Different languages rely on different, often incompatible assumptions. It's possible to build wrappers that ensure the assumptions of the other language are met, so that interaction with native data structures is possible and convenient (e.g. as NumPy does for Python), but that wrapper needs to be different for different languages and assumptions; it can't be the same library, since it needs to behave differently in other languages to meet their expectations.
Truffle solves these problems. Accessing foreign data layouts results in the access patterns of the foreign language being inlined and compiled into the accessing language. In other words data adaptation is done just-in-time and at the read site, rather than by copying entire data structures ahead of time.
Well, for an interesting definition of "solves". As far as I can tell, it effectively introduces a new (implementation) language, the Truffle language, and then adds implementations of language front-ends that compile to this Truffle language.
Which can be functionally adequate for some, maybe many use-cases, but is not the same as actual interoperability.
I wouldn't describe it like that. There's no super-language. That's not how Truffle works at all. Rather, Truffle is a way to express language semantics through writing interpreters, such that interpreted ASTs are then compiled into native code blocks.
I disagree. They can do whatever they want because it's passion driven.
If people find the new language wonderful, they'll choose to spend their time there and create their own community around it.
It's their free time and they have every right to choose how to spend it.
Also, with RPC, Apache Arrow, etc., there will always be people out there who will bridge communities.
Also, is this really a problem? Programming-language fragmentation? I've seen front-end JavaScript framework fragmentation everywhere, and people are fine with it. And the solution on the horizon is standardization, such as Web Components (via the W3C). These frameworks converge toward Web Components, and now it's getting standardized. I'd argue that because of the fragmentation we actually know what we want to standardize, and the fragmentation also pushed for it.
If there weren't any fragmentation, I'm not entirely sure Web Components would ever have been standardized.
And that's fine too, it's perfectly ok to disagree with the choices others make. But that's as far as it goes, you wouldn't like someone else telling you how to spend your time without getting anything in return either.
I'd bet most developers would be fine with supporting and incrementally improving if they made enough money from that to do what they really want to do on the side.
I would take a minute or two to reflect on all the time and energy that hundreds of thousands of individuals give away for free. It's easy to forget how far we've come by standing on their shoulders.
I think this could be mitigated in some cases with better tooling. Many programming languages just make it needlessly difficult to interact with anything that's not part of their ecosystem.
Check out parcel [0], a web application bundler. It has built-in support for lots of different assets [1], with two notable inclusions being ReasonML and Rust! In this blog post [2] they highlight how easy it is to import Rust code from JavaScript.
Another neat example is Objective-C bridging on macOS [3]. The code usually doesn't end up looking very pretty, and it can be brittle at times, but with JavaScriptObjC you can interact directly with all the native APIs using JavaScript. Here's a blog post [4] showing how to write a native app on macOS using JavaScriptObjC.
That reflects rather well how I feel about the recent DSL craze, especially in the Ruby community. It's also how I feel about libraries and frameworks.
Programming languages probably suffer the least from this phenomenon due to how difficult it is to create a complete language, with compilers and all.
In contrast, languages that just extend others often do benefit from existing communities. Take moonscript¹ for example, it's just a new syntax for Lua, so you can use all the existing libraries at no cost. Or take Terra², which can make use of all the C libraries out there and the Lua libraries at the same time.
Is the DSL craze really new? Look at a lot of the early history of Unix and you'll see sh, sed, awk, tex and probably a heap of others that didn't survive. Not to mention lex/yacc, two DSLs to make DSLs. The idea of specialized languages for specialized tasks has been around for decades.
Most of "the DSL craze" in Ruby is just defining methods. It's rarely new languages, but simply a matter of taking full advantage of the language to let you talk about your domain in a more natural way. It's not so much a "DSL craze" in Ruby as writing idiomatic Ruby well, the same way as e.g. Smalltalk or Lisps also tend to make you create your own vocabulary that is close to indistinguishable from the language itself (more so than in Ruby, in fact, where the block syntax and control-structure syntax differ).
An example that comes up often for me is EDSLs for database queries. I do see the value of them, but on the other hand they cause you to move away from a "lingua franca" of database querying to a programming-environment-specific one. What probably makes sense is 1) evolving SQL or SQL tooling to support more features that people think they need (an example that comes to mind is type-safe queries) and 2) better support for binding these queries to different languages.
External DSLs (even with "cross-language bindings") are much worse to work with than internal ones, and SQL is no exception. IMO what's really needed is a willingness for databases to step away from SQL and expose an API that's more friendly to modern programming languages (or better yet, make the database embeddable as a library rather than a framework you have to build your application into). The EDSLs help a little, but as long as they're obliged to compile into SQL strings there's a limit to how effective they can be.
Not only that, we have the Java and .NET runtimes, both of which can be used from many languages. I'd say, along with C, that makes three widely adopted approaches to sharing libraries between languages.
Furthermore, C libraries don't even have to be written in C. It's far from unheard-of to use a more powerful language like a lisp or Ocaml to write a C library.
> widely adopted approach to sharing libraries across languages
For platform libraries we do; it's called FFI. Your language can use openssl or libxml or whatever, so can mine. It is more general than COM; COM is a specific ABI (that we can target with FFI).
Language-specific libraries are tied to language semantics, because language-specific objects pass across language-specific calling boundaries. We don't think about using Python dictionaries from Guile Scheme and such.
I understand the concern but it assumes that open-source contributions are a zero-sum game, which I don't think they are. New languages often encourage users to become contributors where they might have remained users in the old language [citation needed]
Not to mention legacy inertia: things get harder to change in existing libraries when they have loads of users, even if the change makes sense in a vacuum.
a) Your advice on how I spend my free time is not welcome,
b) The same ideas that apply to science apply to FLOSS code. You can do and share your research.
And I completely disagree with the idea that by publishing some code I make a commitment to maintain that code. That's written in most (all?) FLOSS licenses: this software comes with no warranty.
"However, I hope people consider carefully the social costs of creating a new programming language especially if it becomes popular, and understand that in some cases creating a popular new language could actually be irresponsible."
Passion isn't fungible, and there is zero guarantee that the person who creates an interesting new language wouldn't have chosen to binge-watch Netflix instead.
Furthermore without a crystal ball it's difficult to separate ahead of time which efforts will move us forward in some small or large ways.
> it's difficult to separate ahead of time which efforts will move us forward in some small or large ways.
That's the point: most efforts do not move us forward but instead move us backwards because fewer people are working on the things that matter.
It's a solid argument and it applies to far more of the open source community than just to programming languages, in fact programming languages are the smaller part of the issue, but the argument still applies.
> The entire premise of this post is flawed.
I don't think so. I would not dismiss an article without at least making a genuine effort to understand the point it makes. The fact is that it takes effort to launch a new programming language beyond just writing some code, and the long-term commitment should be there if you are going to let other people run their production systems on what you throw into the world.
This doesn't mean you don't get to scratch your itch, it means that once your programming language gains adoption beyond some people playing around with it you can't just walk off and say that it isn't your problem.
> If you adopted a language that is not mature for production, then you knew the risk when you made that choice.
True, but at the same time I can't help doubting that a few hundred programming languages are all equally useful and all equally deserving of their continued existence.
> Innovation in PL happens with these small languages, and it's a good thing.
I do not see the author as having any beef whatsoever with 'small languages used for innovation', I think he has a beef with small languages that pretend to have staying power behind them when in fact they do not.
Quite a bit of this stuff is people experimenting and making a 'me-too' version of something, and I have done this many times myself. But once you release a thing like that into the world you should be ready to step up and commit to it. If not then you might as well keep it to yourself.
The best test of staying power would be longevity and engagement by a large community. If a new language lacks either it is obviously far more likely to cease to exist a few years from now.
It's hard to pick unproven future winners; fortunately, you don't have to. Use what you know works.
> The fact is that it takes effort to launch a new programming language beyond just writing some code and the long term commitment should be there if you are going to let other people run their production systems on what you throw into the world.
From my admittedly limited knowledge JavaScript seems to contradict this claim.
I don't agree with the article either, one could make the same argument for every piece of new code written.
>I don't agree with the article either, one could make the same argument for every piece of new code written.
No, you can't. The issue is that languages generally have their own runtime and object-linkage system. If you create a new one that doesn't play nicely with the C linkage/runtime, the JVM, or .NET, and doesn't compile down to JavaScript (or wasm), then you are creating an entirely new platform, and this creates the fragmentation.
For example, Python and Rust play nicely with C linkage. Scala runs on the JVM. These play nicely with other systems. You can have Python code that calls Rust code that calls C code and it all fits together. But you may struggle to have Python code calling Lua code calling Julia code.
"every piece of code written" doesn't have this problem at all. If you write something in an existing language then you are contributing to that ecosystem instead of making a new ecosystem. And if you are making a new ecosystem, for pity's sake please consider making that ecosystem play nicely with at least one existing ecosystem.
JavaScript is a pretty good example of a language that overshot its goal and that would have benefited from evaluating the cost of its release. It was put together in a hurry, without much in terms of forethought and we're paying the price every day.
> I don't agree with the article either, one could make the same argument for every piece of new code written.
So you do agree. Yes, you could make the same argument for every piece of code, and that's precisely the point; the author chose to use programming languages, where the problem is the least visible. But that does not invalidate the point, it strengthens it.
>it means that once your programming language gains adoption beyond some people playing around with it you can't just walk off and say that it isn't your problem
Assuming that you don't work for them and that you don't have a contract for support with them, then why can't you just walk away?
In fact I'd go further: it would be healthy if people that needed support for open-source got used to the idea of paying for it.
A. An unpaid programmer choosing to walk away from their open-source project.
B. The world's richest companies running their production systems on open-source, but failing to adequately fund development.
Here's the best example: OpenSSL and HeartBleed up until 2014 [0].
> Tech giants, chastened by Heartbleed, finally agree to fund OpenSSL
>
> IBM, Intel, Microsoft, Facebook, Google, and others pledge millions to open source.
>
> ...
>
> Steve Marquess wrote in a blog post last week that OpenSSL typically receives about $2,000 in donations a year and has just one employee who works full time on the open source code. Given that, perhaps we shouldn't be surprised by the existence of Heartbleed, a security flaw in OpenSSL that can expose user passwords and the private encryption keys needed to protect websites.
I introduced 'B', but not as the second arm of a binary choice. Note that lots of the companies that deploy open source also contribute to that open source, in fact, if you removed such paid contributions to the open source world then you likely would not have much to run in the first place.
But that doesn't mean that there isn't a cost to fielding a new programming language due to fragmentation, which is the subject of the article.
There very clearly are two parties in a binary situation, and it is implicit in what you wrote:
A. The unpaid programmer who created the new language. Apparently, because she/he negligently "let other people run their production systems" on the new language, somehow she/he has a moral obligation to support this commercial use of the new language until her/his dying day.
B. The people and companies (in fact mainly companies) using the new programming language in production who are not just "some people playing around". These entities presumably chose to use some hot new language in production, because it might solve some business problem. Apparently, they don't have any moral responsibility whatsoever to have a plan in place for what to do if the apparently critical individual who created the new language walks away - for example contracting or employing either the original programmer, or suitable substitutes perhaps.
>if you removed such paid contributions to the open source world then you likely would not have much to run in the first place
That's very true, but also completely irrelevant to the case under discussion. We're talking about new programming languages created by unpaid programmers, and still critically dependent on unpaid programmers.
>But that doesn't mean that there isn't a cost to fielding a new programming language due to fragmentation
Sure there is cost - I don't disagree. Though, there is also a loss-of-innovation cost of not creating new programming languages.
My objection is that both you and the author of the article are being coercive, invoking morality in an attempt to control unpaid programmers who are not obligated to you, or to society, and certainly not obligated at all to people/companies that chose freely to use their work in production.
The argument that people are somehow obligated by what I perceive as the group's interests (but are really my interests) is one I see regarding Linux distributions.
Supposedly, [insert distro here] which I use would be better off if it had more labor. Therefore people should all focus their efforts on fewer distros, including the one I use.
Therefore people are harming the ecosystem by making yet another Ubuntu derivative with a slightly different graphical environment/default set of packages/settings.
Scratch the surface and they are annoyed by things like hardware support or long-unfixed bugs/feature requests, because the guys making Ubuntu knock-offs are totally going to be capable AND willing to learn how to hack on drivers for [insert hardware here] or fix [insert project here] before they rewrite it again or add features that the underlying project has marked wontfix.
> This doesn't mean you don't get to scratch your itch, it means that once your programming language gains adoption beyond some people playing around with it you can't just walk off and say that it isn't your problem.
Why not? You display a lack of perspective similar to the author of the article: you confuse what would be convenient and useful for you personally with a universal good.
It would be good for you if people who gave you free tools acquired a commitment to continue to support and grow those tools without a contract or cash, but there is no particular reason this ought to be so. Even if you want to pay, there is no reason anyone has to be willing to take your cash if they aren't inclined to sell you their labor.
Try going to Walmart and seeing if you can buy the cash register.
Similarly, the original author believes that if more people just worked on the fewer projects that he prefers, those projects would be better. This ignores the fact that historical progress, including the projects he prefers, has benefited from a chaotic environment where people work on what they, not the author, prefer.
> I would not dismiss an article without at least trying to make a genuine effort to understand the point an article makes.
And likewise, you didn't seem to understand michaelmrose's points and your reply didn't address his specific criticisms of the article and how it has 2 flaws.
To me, your restatement has repeated the same 2 flaws:
>most efforts do not move us forward but instead move us backwards because fewer people are working on the things that matter. It's a solid argument
It's not a solid argument and the 2 flaws in O'Callahan's essay are:
(1) Post hoc analysis flaw: Judging what moves us "forward" vs "backwards" is post hoc analysis. It requires a time machine. In reality, we always live in the present moment, and therefore, we can't know what inconsequential things today will eventually be a net benefit to society tomorrow.
(2) Fungible & zero-sum flaw: the assumption that human interest in any given activity is fungible and therefore zero-sum, such that it supposedly takes away from other, more worthy things.
To apply the flawed thinking to the C Language, we would have to transport ourselves back to 1972 and O'Callahan would then criticize Dennis Ritchie for "inventing yet another language and fragmenting the landscape". Instead, D.R. should have focused his effort on languages that already existed like Cobol (1959) and Lisp (1958). Well, if Dennis Ritchie was not intellectually interested in enhancing the older ecosystems of Cobol and Lisp, it's a moot point.
Imagine taking all those "Show HN" posts of new programming languages[1] and lecturing people that they should have focused their energy on existing "more important" things such as the broken C code in Heartbleed OpenSSL. That advice would be naive about human nature. Many people aren't interested in working on the OpenSSL codebase.
If one believes that "video games" are a waste of time, it's flawed to assume that a programmer's 1000 hours on a "useless" game engine could have been redirected on developing algorithms for DNA analysis or writing code for a government healthcare portal to help the nation's citizens. Motivations matter.
Similar arguments can be made for scientists choosing to work on particular problems in physics, math, and engineering. E.g. it was a "waste of time" for Andrew Wiles to work on Fermat's Last Theorem when "P=NP" is the "more important" problem. Well, he worked on it because Fermat's problem had fascinated him since he was a little boy, and "P!=NP" didn't grip him in the same passionate way.
> and the long term commitment should be there if you [...] , it means that once your programming language gains adoption beyond some people playing around with it you can't just walk off and say that it isn't your problem.
I just wanted to note that this particular opinion of "language inventor's ongoing commitment required" was not stated in O'Callahan's essay.
To be the devil's advocate (as in I don't really hold this position but would like to see it debated): Maybe we would be better off if people watched Netflix instead.
No. Languages are also, of course, languages, i.e., notation.
If every chemist changed notation every 3 years, there's an off chance that one might develop something better than Hill notation, but it would also make it unbearable to try to read the literature. I don't think it's controversial to suggest that this would be a net loss for the field.
Computer science is young enough that we don't have any universal notation yet, but I've learned a couple dozen programming languages and my conclusion is that maybe 1 new language per decade is worth using. We're churning notation way faster than is productive.
I disagree with your statement. It would in fact be: scientists create unnecessary new notations, and that causes a lot of fragmentation in the scientific community. They are all trying to do one thing, which is science. On the other hand, I would agree that some new notation could create new ways to see things that were not possible with the old notation. I think in the long run we're OK with many programming languages. Whatever is useful will survive and what isn't will die out.
Programming languages and libraries are not universally expressive or useful and are not of universally good quality. New independent languages and reimplementations are necessary to better express problems and therefore reduce defects and maintenance costs. They are also necessary for anything of quality to emerge, for all the right people to make all the right choices. Incidentally this is one of the reasons that prevents large projects from achieving good quality and large organizations from producing quality software.
When exactly have diversity and choice become problems?
It’s a blessing that so many new languages are popping up. Those that have a reason to exist, will succeed, and the time spent on them will be an investment; those that will fail will do so for a reason. And that’s ok. Some people will have wasted their time. But not the community as a whole.
Failures are part of evolution. Thanks to failures we have Rust and Julia and Nim and who knows how many other new interesting languages.
With time, languages become obsolete, so on one side they try to "upgrade" their syntax, semantics, and internals, while on the other side they keep backward compatibility.
Adding features on top of something that was not designed for them will simply create a huge mess to work with.
And arguably the oldest languages are the ones where you most need to follow "best practices", where there are thousands of ways to achieve the same goal, etc...
It is not bad, but we really need newer languages to move the field forward. Most of them will fail, some of them will fail while having used a great amount of resources, but some will eventually succeed and move the field forward.
I don't think languages progress evolutionarily like species. None will probably succeed outright, but we will have a lot more variety of languages for anything we can think of.
It is not just creating a new language. In a company environment you see a huge cost when people want to start using a new language or/and framework.
It comes down to values and what push people forward.
I am not saying one language should be enough. I am saying, for example, that even for a company that already has a library of more than a few components built in React and a medium codebase using it, starting to use Vue will add a quite large cost. Even if a framework is 20% better in some metrics, having to rewrite some reusable components (assuming they were written well) in another framework is going to sink a bunch of time. There is a learning curve that people need to go through. You hardly write the most awesome code the first time you use a framework. There are bugs, and there may not be a lot of documentation or open source libraries. From an engineering perspective it is going to be fun and you are going to learn something new, but from a business perspective it is rarely a good decision.
Same thing is probably true when teams in company with a large Java codebase want to switch to Scala.
On the other hand, when you have a 10x better framework or technology, you are better off paying attention and making sure you plan a transition sooner rather than later. For example, I think GraphQL is an order of magnitude better than REST when you consider a whole end-to-end system.
Engineers, especially the smart ones, get bored quickly. Inventing a new largely adopted language or framework is an irresistible calling from inside. Exploring new territories is also another source of growth. It is all about values and the way we go about satisfying them.
My "side" take on this is that as long as reimplementation is considered an urgent activity with each new language, we may be privileging languages that are actually better for rebuilding things that are already well understood over languages that are better for "feeling our way through problems".
I'm not 100% sure of this point, but it feels at least possible that features that make a language good for reimplementing a problem with the structure known in advance might be awkward for exploratory work. I've built a bunch of stuff recently where I had NFI what I was doing as I went along and had to tear up vast tracts of the data structures as I went; it was good to have the fairly loosey-goosey typing of a mix of C/C++ and (effectively) asm to do it (while still being able to see performance levels of aggressive optimization, which was also part of the goal).
Vendorization. For every language out there there is a consulting firm or corporation selling tooling and support. This allows business strategies aimed at market capture and monopolization.
To business, you’re not a general purpose plumber, you’re just a specially trained installer of a particular brand of water boiler.
This is a consequence, not a cause. With a few exceptions (ex: Java), successful languages are designed with a use case. C is for writing UNIX, Rust for Firefox, PHP for generating personal home pages and JS was Netscape's solution for dynamic webpages, Go is well suited to what Google is doing.
Authors of these languages want them to be successful because the more popular a language is, the more people will help them on their project. They may even give out free tooling and support for that reason. Consulting firms usually come later, when the language is already popular.
I suspect the Razor engine/language is a case of vendorization. (Razor is a markup templating language which is part of MS Visual Studio.) I suspect MS promoted it because it complicates IDE design (parsing) so as to keep other IDE's out of their turf.
The prior templating syntax ("<%...%>") was simple to learn, simple to use, easy to read, easy to debug, and easy to parse (per IDE vendor). Razor didn't add much in practical features to justify the huge leap in complexity it caused.
It may save 2% of keystrokes but complicates syntax and parsing by a factor of roughly 30. It would have to save at least 25% of keystrokes to justify 30x the complexity, in my opinion. Let alone the debugging and parsing-error headaches. Bad deal.
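The "easy to parse" claim about the old delimiter syntax can be illustrated with a small sketch. This is a hypothetical Python example, not anything from Visual Studio or ASP.NET, and the template text is made up; it only shows that `<%...%>`-style code regions can be located with a single regex, which is the property that made the old syntax friendly to third-party IDEs.

```python
import re

# Code regions in a "<%...%>"-style template can be found with one
# non-greedy regex; no knowledge of the embedded language is needed.
CODE_BLOCK = re.compile(r"<%(.*?)%>", re.DOTALL)

# Hypothetical template text for illustration.
template = "<ul><% for item in items %><li><%= item %></li><% end %></ul>"

print(CODE_BLOCK.findall(template))
# → [' for item in items ', '= item ', ' end ']
```

Razor, by contrast, switches from markup to code at a bare `@` whose extent depends on the C# expression grammar, so an editor needs something much closer to a full C# parser just to find where code ends and markup resumes.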
Are you saying that the original creators of languages are motivated to a large extent by the thought of distant future consulting money? Because that does not ring true at all.
It's hard to do, but I would like to see more overhauls of existing languages. I wrote a relevant comment about the approach of Checked C vs. Rust here:
https://news.ycombinator.com/item?id=17944310 (tl;dr Software gets rewritten much less frequently than you think. Rust is going to add to the existing ecosystem in C++ and C, not replace it.)
Facebook's Hack is another example of this approach -- an overhaul of PHP. Although I guess it led to 2 languages and not one -- it's not replacing PHP! Like I said, it's hard :)
Most new languages come with an improvement, usually of the non-trivial sort and grow communities and survive on their own merit. Usually, the benefit of a new language exceeds the cost... or it simply gets no traction.
It's annoying how new languages sometimes become popular in spite of not adding any value to the development process.
I think part of the problem is that new languages open up a new market, which incentivises developers to create open source libraries that will become the foundation of the new ecosystem; it's a sure way to become a 'rockstar developer'; if the language succeeds, you succeed with it; and the odds of success are much higher than having to compete with other libraries and frameworks within a well established ecosystem.
He's right about the cost, but it's surely smaller than the cost of bloated protocols, specifications, and reference implementations. Nobody except large corporations is able to implement a competitive web browser, for example, regardless of the language. New languages would be much less of an issue with clean API designs and specifications.
The opinions expressed in this blog post are one of those uninteresting, infinitely parroted idioms that people like to assert to seem smart and pragmatic, but are completely idiotic upon further reflection.
It's similar to "If you rent you are throwing your money away" completely ignoring the realities of how DUMB that statement is. Paying for a house which you must mortgage, pay taxes on, and maintain, but cannot rent out or sell is much more expensive than renting someone else's house, and historically the stock market gives greater yields for many investors.
First of all, we have not yet written the perfect programming language, and every single language written has pros and cons for various different tasks.
Choosing the right language can lead to writing abstractions that make you ABSURDLY more productive than alternative choices. Write RabbitMQ in Javascript instead of Erlang and tell me we only need one language to rule them all.
Off the top of my head, I can think of 5 languages that have MASSIVELY improved the software ecosystem over the last 10 years: Golang, Rust, Elixir, Typescript, Elm.
> However, I hope people consider carefully the social costs of creating a new programming language especially if it becomes popular, and understand that in some cases creating a popular new language could actually be irresponsible.
Unsurprisingly, a self-proclaimed Christian programmer takes the moral high ground and asserts that writing a new programming language out of passion, love, or an actual need for a new language to allow cleaner abstractions for your particular use case can POSSIBLY be unethical.
I have a different suggestion: Consider the social costs of parroting idiotic drivel you heard one time and trying to pass it off as actually interesting insight.
I think this is one of the best things about Clojure and similar "hosted" languages...leveraging preexisting huge ecosystems. You really do (as much as one could reasonably expect) get to have your cake and eat it too.
It's a very good question. However, thinking about it, the conclusion should probably be that the cost of continuing to use the same language while there exists another language more suitable to attack your problem is higher.
Fragmentation yields a distribution of effort which yields theoretically inferior products, compared to the theoretical output of all the individual parts working towards the same goal. (Debatable, but we'll assume it.)
Centralization yields a lack of competition yields a lack of a drive to improve inferior products yields stagnation and disenfranchisement. (Also debatable, but probably safe to assume here as well.)
Startup costs in this area (basic compiler stuff, usually handled by llvm and other varied metacompiler frameworks nowadays) become vanishingly small with time, while the long tail becomes ever larger (libraries, tooling, ecosystem goodness, all are expected now).
Given these two observations, when a language becomes popular despite starting in a fragmented ecosystem, it slowly grows, starts to take advantage of network effects, and gains the benefits of growing centralization as it becomes "the de facto choice" in some area (usually at the cost of other players in the space, be they large or small). Eventually, it stagnates, parts of the community become disenfranchised and go and spawn their own languages and variants (taking ideas from their origin, along with their grievances and ideas from other areas with them), and the process begins anew. However, because of the growing long tail, each time this cycle takes place, the time between expansion and explosion takes longer and longer.
At least, that's how I've been led to believe systems like these tend to work.
It seems like the call to be _mindful_ of what you're doing when you make a language is sensible; if you contribute to the cycle, you're contributing to what will eventually be 2 years of long-tail work for what will be the de facto norm for developer tooling 40 years from now; but that's a terrible way to look at it. Wouldn't you rather get in closer to the ground floor and be part of the first iterations that set the standards for the iterations which come after? Isn't thinking of it any other way just being defeatist? (Since it amounts to concluding that future generations of developers must be better at this than you, and so are worth more early-cycle-iteration time.) So what if it's indulging the ego some; if that's the cost of improvement, then so be it. The languages we have today wouldn't have been made without their predecessors, and the languages of tomorrow won't exist without the ones we have today. Is it also equally possible that deciding _not_ to make some new language is potentially delaying the progress of the programming language field as a whole? Is making that value judgement in the purview of anything other than hindsight? I can't begin to imagine in what ways future languages may improve life as a developer, but if they're anything like the stark contrasts we've seen recently in certain PL areas (i.e., systems languages), it's sure to be exciting.
Points to the author, just because one can create doesn't mean one should. Should there not be more intentional thought in creating and creations ramifications? "Now I am become Death, the destroyer of worlds."