The fact that discussions over syntax generate pages and pages of furious bickering, while discussions over semantics (which is what actually matters) get a shrug, is the ultimate example of bikeshedding in PL design. Honestly, syntax just doesn't matter. Yes, particular poor choices can impede usability, but that's not applicable here, and ultimately after two minutes or so of learning the new syntax there's no difference between this and "await foo".
> while discussions over semantics (which is what actually matters) get a shrug, is the ultimate example of bikeshedding in PL design
Worth noting that in this case the discussion over semantics has been ongoing nonstop since around 2013: it's such a big feature that it took a lot of trial and experimentation to figure out, and even then only after being broken up into several large sub-tasks, each of which had to be discussed individually as well. See the epic comment thread on the Pin stabilization tracking issue as an example of just one of these semantic discussions: https://github.com/rust-lang/rust/issues/49150
Syntax may be irrelevant at the end of the day, but nice syntax can make a big difference in usability imo. I'm not following the Rust example, but discussions of it remind me of discussions about UFCS (Universal Function Call Syntax). That's where `foo(a, b)` can be rewritten as `a.foo(b)`. That may seem minor, but look at this code:
half_square = divide(square(a), 2)
In comparison to:
half_square = a.square().divide(2)
Many find the latter much more pleasant to read, and things like this can make a big difference in how fun I find it to use a language.
I agree with you. I find that in the first example the order of operations is much more explicit at a glance than in the second, but I could also get used to the second without much effort. I think it all comes down to what your background is.
Me too, but I suspect that's because the first is the pattern used in nearly every language I've spent extended time with since I was a kid (30 years ago).
I don't think either is better, but one is definitely more familiar.
Generally I try not to care about syntax too much. The exception is putting $ on the front of a variable; that annoys me not so much because PHP did it, but because not everyone did, so every time I switch from PHP to TypeScript I end up putting a $ on something at least once a day. The weird part is that I write idiomatic code in both, so my brain knows it has shifted context, but I still put the $.
let $foo = bar;
Just looks wrong.
Weirder still when I run into code written by people who do that on purpose.
I think you can skip the empty () in D. The syntax becomes much more readable when you don't have to jump back out to the outer levels of function calls. Thankfully pipes exist in functional languages, which makes it feel just right.
I left the `()` because in some imperative languages there's a difference between `() -> a` and `a`, which makes `a.b` and `a.b()` different. In Haskell there's `>>>` and `&` in base, which I use all the time for this sort of workflow:
find_half_square = square >>> (flip divide) 2
This creates a function which takes a value, performs `square`, then performs `(flip divide) 2`. It's written in pointfree style, which means the function's input is never written in the definition of the function, which I find to be extremely aesthetically pleasing (as it allows me to focus on the composition of functions, without thinking about passing arguments around). This is one situation where changes in syntax allow you to reason about programs quite differently.
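For anyone who wants to try it, here's a minimal, self-contained version of that workflow; the square/divide helpers are hypothetical, just mirroring the pseudocode upthread, with (>>>) coming from Control.Arrow and (&) from Data.Function:

import Control.Arrow ((>>>))  -- left-to-right function composition
import Data.Function ((&))    -- reverse application, i.e. a "pipe"

-- Hypothetical helpers mirroring the pseudocode upthread.
square :: Double -> Double
square x = x * x

divide :: Double -> Double -> Double
divide x y = x / y

-- Pointfree: the argument never appears on the left-hand side.
find_half_square :: Double -> Double
find_half_square = square >>> flip divide 2

main :: IO ()
main = do
  print (find_half_square 4)          -- 8.0
  print (4 & square & flip divide 2)  -- 8.0, same pipeline written with (&)

The (&) line reads almost exactly like the `a.square().divide(2)` example further up the thread.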
Whatever pleasantness may come from that or any other specific example is totally outweighed by the constant overhead of having to remember that there are N ways to do 1 thing. As a primary rule: the best programming grammar has the fewest such ambiguities, ideally zero.
Folks who really believe that program in lambda calculus.
In practice, syntax is the UI for a programming language. And just as with any UI, there's a bunch of tradeoffs between how easy the language is to use for a newbie (who knows nothing), how easy it is for a casual user (who knows a few core constructs but has to look up advanced functionality), how productive it is for an expert (who has a vast working memory of functionality), and how powerful it is (in terms of which constructs can even be represented). Different users will have different opinions on which tradeoffs are justified, largely depending on where they fall on this continuum.
On one side of the (practical) continuum, you have languages like COBOL, BASIC, PHP, and HyperCard, which are explicitly designed to seem familiar to people who know other non-programming technologies. On another, you have languages like Scheme, C, Go, and Java, which have a small set of broadly applicable core concepts but require some verbosity to express lots of common patterns. And on a third, you have languages like C++ and Perl, where experts can express very powerful programs without a whole lot of typing, but which can be rather impenetrable to people who haven't spent years mastering them.
I think this is a fair summary, but I also think that C++ and Perl can pretty fairly be judged as failures, precisely in the ways you enumerate.
Put another way, it is not true that powerful languages must necessarily be impenetrable.
Put still another way, the amount of typing you do (within certain bounds of reason) is an almost totally irrelevant metric when judging a programming language.
This is a strong assertion with basically no backing, and all modern languages have some level of what you call ambiguity, so this isn't true in practice either.
What's a big difference for you got to do with the design of a language used by many others? What makes your experience more important than the experiences of someone with different qualifications?
Why is optimizing for your tastes the correct thing to optimize for?
If it's a big difference to me, it would be surprising if it was not a big difference to some portion of people (in either direction, I wouldn't be surprised if most people hate it). Since it makes a big difference to some portion of people, it is worth discussing. My intention with my comment was to make the argument that spending a long time discussing syntax can be a useful and productive thing for language designers to do.
If you're going to make that argument, you should probably be less vague. It makes a "big difference" to "some portion of people" that the Earth be thought flat, but you can't justify wasting a "long time" arguing about it.
Language designers don't need vague personal opinions masquerading as 'useful' facts, intentionally sculpted for inappropriate generalization. They need to know what the purpose of the syntax is and what the expectations of the language/user base are. Beyond that, you're just contributing noise.
And given that this criticism is largely against bikeshedding, it is interesting that you would try and justify it with statements of self-importance; bikeshedding largely happens because involving people in a decision leads them to massively over-value the importance of the decision simply because they are a part of it. You may be an expert on your own opinions, but if you're not an expert on their relevance, then you're not helping.
If syntax didn't matter we wouldn't see conformity to a set few styles, with outliers having a far more difficult time gaining traction.
Syntax very obviously matters.
Besides, futures are very old in Rust at this point (relative to the language's age) and had been discussed years ago. The reason people aren't seeing those discussions is that they happened a long time ago, whereas the syntax discussion happened a month ago.
I think this also highlights the pitfalls of being community-centric. If Rust were run more like Chromium or something, this sort of thing would just cease to be an issue. Major kudos to the Rust team for managing it really well.
There was not just disagreement within the community, but also within the language team. Any other language is going to have the same problem at times (except for those that are a one-man show). The only difference is that you see how the sausage gets made. (Also, electronic communication is less efficient than in-person.)
Oh, there was lots of discussion over semantics, with people who worked on implementing await/async in other languages (e.g. language creators). The problem/solution space was a lot smaller, though.
Also, there is a notable usability difference wrt reading the flow of lines, chaining, and parentheses. Learnability/weirdness was a syntax consideration, but not the only one.
While I do have an opinion on the syntax, I could never amass the time necessary to weigh the positives and negatives of the semantics issues. I reckon that's the reality for many.
Lazy vs strict semantics are completely orthogonal to the syntax [0]. (foo a b c) looks the same in a lazy language as it does in a strict language but oh boy are the results surprisingly different if you don't already know what you're in for.
0. Though some would argue that making them completely orthogonal to the point where the user can't tell the difference is a horrible design decision.
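To make that concrete, here's a small Haskell sketch where nothing in the application syntax tells you evaluation is lazy; a strict language evaluating the same shape of code would never terminate, because nums is infinite:

-- nums is an infinite list; laziness is the only reason this terminates.
nums :: [Integer]
nums = [1 ..]

first_five_squares :: [Integer]
first_five_squares = take 5 (map (\x -> x * x) nums)

main :: IO ()
main = print first_five_squares  -- [1,4,9,16,25]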
> Lazy vs strict semantics are completely orthogonal to the syntax [0]
Alternatively, the syntax in Haskell just lends itself to lazy evaluation, and requires explicit annotation syntax to be strict. Contrast that with Python, whose syntax makes strict evaluation the easier default to express.
If all languages can express, with some effort, the same exact semantics as any other language, then the only difference is syntax.
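For what it's worth, the "explicit annotation syntax" here refers to things like seq, ($!), and bang patterns. A minimal sketch (the sum function is just illustrative):

{-# LANGUAGE BangPatterns #-}

-- Haskell is lazy unless you ask otherwise; the bang pattern and ($!)
-- below are the explicit opt-ins to strict evaluation.
strict_sum :: [Int] -> Int
strict_sum = go 0
  where
    go !acc []       = acc              -- !acc forces the accumulator each step
    go !acc (x : xs) = go (acc + x) xs

force_then_apply :: Int -> Int
force_then_apply n = succ $! (n * n)    -- ($!) evaluates n * n before calling succ

main :: IO ()
main = do
  print (strict_sum [1 .. 1000000])     -- 500000500000
  print (force_then_apply 7)            -- 50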
I'd submit the key difference in Haskell syntax is actually currying, not laziness. Haskell syntax privileges currying, and an executed function is just a curried function that has all of its parameters. By contrast, currying in languages with Algol-descended syntax always requires more rigamarole. It's possible, of course, in a lot of them, but it's harder than just a function call missing some of its arguments.
> By contrast, currying in languages with Algol-descended syntax always requires more rigamarole. It's possible, of course, in a lot of them, but it's harder than just a function call missing some of its arguments.
"What would be so hard about divided_by_2 = divide_by(, 2)"
Well, first of all, it doesn't work right now, which is a problem for what I was trying to say.
Secondly, if you want to curry things, you'll end up with
four_arg_func(1, 2)(3)(4)
at a bare minimum as a "curried application", where in Haskell it's just
four_arg_func 1 2 3 4
Several Algol-family languages have other issues, such as dealing with optional arguments: in a curryable Python ("Python!curryable"), is the result of
def f(a, b, c=10):
    return a + b + c

f(1, 2)
a function that accepts one more parameter for c, or the number 13? (Even in a dynamic language you ought to think twice before trying to return some sort of "quantum superposition" of those two things!) You'll need to add some syntax that specifies the answer to that question, and now Python!curryable is getting away from just "incompletely applying the function" as it is in Haskell, and is becoming a Thing you have to Do. (Probably by calling https://docs.python.org/2/library/functools.html#functools.p... .)
Haskell hacks that away by making it so functions take a fixed number of parameters, in a fixed order, and there is no such thing as default parameters to a function. A non-Haskell programmer may feel this is not a trade worth making.
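For comparison, here's a sketch of what that looks like on the Haskell side (four_arg_func is the same hypothetical function from above): every application is partial until the last argument arrives, and there's no default-argument syntax to muddy the question.

-- Fixed arity, fixed order, no defaults: the trade described above.
four_arg_func :: Int -> Int -> Int -> Int -> Int
four_arg_func a b c d = a * 1000 + b * 100 + c * 10 + d

-- Stopping early is all there is to "currying"; no extra call syntax.
partially_applied :: Int -> Int -> Int
partially_applied = four_arg_func 1 2

main :: IO ()
main = do
  print (four_arg_func 1 2 3 4)    -- 1234: full application
  print (partially_applied 3 4)    -- 1234: same call, supplied in two steps
  print ((four_arg_func 1) 2 3 4)  -- 1234: full application is just complete
                                   -- partial application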
Your closure at the end is what it tends to really look like. You'll note that most non-functional code doesn't really do that sort of thing very often, unless you're unlucky enough to stumble across a codebase written by someone trying to write Haskell-in-Python or something.
My point is that the "it's harder" is why syntax is the differentiator.
The idea that syntax isn't a huge differentiator for languages is insane to me. Yes, small syntactic changes like "fun" vs "fn" may not matter overall, but obviously people choose languages based on what they can easily express by typing source code.
Arguably, the difference between await syntaxes is closer to "fn vs fun" than "lazy vs strict", but I think that there's a lot of context that pushes it closer to the latter (we're talking about how a fundamental control flow primitive is implemented, and this will impact future control flow primitives).