What a milestone! This was the most impressive work in open source decision-making that I have ever followed (in my limited experience). People have contributed thousands of comments discussing this for over a year. It has even caused the team to rethink how discussions like this should be guided in the future beyond comments on GitHub issues. I cannot imagine how difficult it has been to manage it all, and I applaud and thank the Rust team for their hard work in fielding comments, weighing trade-offs, and ultimately making a decision in order to move the project forward.
> This was the most impressive work in open source decision-making that I have ever followed...
You'd be blown away by Linux kernel development then. It is multiple orders of magnitude beyond the await syntax bikeshedding by any measure. /u/Perceptes puts it better than I ever could in this top-voted comment on /r/rust [1]:
"...I personally don't care at all about the await syntax, and have been very unhappy with that bikeshed being such a focus of time and discussion. The real problems I have with the unstable async system are how difficult it is to reason about. Compiler errors are still largely indecipherable. impl Trait didn't really help here. Documentation is still sparse, so I spend a lot of time banging my head against the wall, trying many formulations of my programs to get just the right incantation of futures wrapped in futures wrapped in futures to get things to compile, only to get page-long type mismatch errors that are near impossible to read, and multiple closures/functions deep of nested futures being returned where any of them could be the culprit. I'm also very frustrated by the lack of progress on "abstract types" that are required for using async functions (and returning impl Trait) from trait methods. Traits are the cornerstone of Rust's type system and yet I haven't seen any activity on this for months. And now we're talking stabilization with this glaring hole. Async streams are another thing I've seen almost no progress on, aside from a blog post from withoutboats on how to deal with the ? operator within async for loops..."
The nuances in this case went well beyond what I was expecting - it's actually more than just syntax, this heavily crosses over into semantics. It's been a very impressive process, and I was surprised at the power of the solution they landed on.
- Rust has to find a syntax that can work with different underlying libraries (Rust can compile to lots of different targets, some of which can accommodate larger libraries than others)
- The abstraction can't incur undue overhead (callback-based solutions may cause allocations that are hard to keep under control and will split the language if it spreads too much)
- The syntax should be pleasant to write for at least the majority of people (modeling your entire application as a state machine around async computation is pretty unpleasant)
- The compiler has to be able to give helpful error messages (this was a problem early on I think)
- Language authors had to think about how the borrow checker would work across async points, and whether to invest in changing the language to make that possible.
- and so on and so on
I really loved this talk by Without Boats (the author of the friendly article) going over all this stuff; it really gave me a good sense of what they were up against: https://www.youtube.com/watch?v=skos4B5x7qE
I'm not going to stop using Rust, but I find this very disappointing.
One thing I strongly disagree with: this article said other notations would require Rust users to learn more notation. This still requires learning more notation; it's just easy to not even be aware it exists and think a struct had a member called await instead...
I come from the Node world (I'm not really familiar with Rust at all), where the syntax is "await foo()", and to be honest I really like the .await syntax. First, as others have stated it makes the operator precedence clearer in my opinion. But more importantly, in my brain it makes it feel like await is a "member operator" of Promises (or I guess futures in the Rust case), so I can think "Function returns a promise, and then "calling" await on it yields and then returns the resolved object when done".
My understanding of the issue (which is shallow at best) is that it's a battle between a prefixing keyword or sigil (which has the problem of causing the reader to scan back and forth in the statement when chaining multiple items, which sucks) and a postfixing keyword or sigil, with dot notation for a keyword to help parsing (and everyone seems to dislike the overloading of the dot notation for this). Ignoring sigils, I think the chosen solution is the lesser of two evils.
That said, I think a postfixing sigil would have been much better. It seems an important enough thing to denote that a bit of special syntax is called for, and the more unique the better.
This is a great example of why syntax and semantics can't be separated. This works great in GC'd languages, but in Rust, these two things may not be equivalent. With the distinction between owned and temporary values, this may change the lifetime of what stuff is in scope and when. This is reduced a bit with non-lexical lifetimes, but it's not, strictly speaking, actually equivalent.
I think this is a good example to show people who are on the fence about postfix then. I haven't felt strongly about postfix, but this is an eye opener for me.
Yeah, I think the trick is whether the theory and the practice end up being the same. And usually it's the other way around: spreading it out over multiple lines makes it work, whereas the one-liner may not. For example:
    fn main() {
        let world = gives_string().split(" ").next();
        println!("{:?}", world);
    }

    fn gives_string() -> String {
        String::from("hello world")
    }
This will fail to compile because the String is a temporary, and we're trying to keep a reference into it (via split), but it would be deallocated at the end of the statement, which would be a use-after-free. This, however, compiles:
    fn main() {
        let world = gives_string();
        let world = world.split(" ").next();
        println!("{:?}", world);
    }

    fn gives_string() -> String {
        String::from("hello world")
    }
We're shadowing 'world', but the underlying String now lives to the end of main, so everything works; no more use-after-free.
I think before I'd want a good real-world example of where doing the multi-line thing goes wrong before I'd want to make an argument that this is why postfix is better.
The compiler is smart enough to deduce lifetimes — that’s why it can throw a compile error — but fixing them (auto-keeping temporaries) might introduce extra memory consumption/leaks that the programmer did not intend. Requiring the programmer to explicitly assign a temporary to a variable makes the programmer’s intent more clear.
A comparison: in C++, the behavior of auto-extending the lifetime of const references to temporaries (but not non-const references) is considered a wart in the language design. (Really, you should just assign a temporary to a value because the compiler can elide the copy: https://abseil.io/tips/101 .)
One could argue that the compiler could transform this for you. But systems languages are also about control. For someone versed in the way things are supposed to work, an owned value living longer than it should is surprising.
I totally get where you're coming from, and even in non-GC'd languages such as C++ lifetimes are extended (or not...) when you break down an expression.
However, I also strongly prefer to give these subexpressions names. That helps tremendously when debugging, logging, or just discussing the code with colleagues. These subexpressions do exist, and they are a natural unit to reason about, especially when the program is not doing what it should. In those cases (and not only those) you really want to give them a canonical name and not just "line 42".
Maybe there should be a syntactical equivalent for named subexpressions.
There are a few different ways I feel about this at the same time:
- On the one hand, I agree that "interesting" operations really should be on their own lines. I think I'm going to strongly prefer to keep it to one `await` per line for the foreseeable future.
- On the other hand, I know that many programmers will cram a lot of operations into one line if they can, and when that inevitably happens it would be nice for it to be readable.
- And maybe in the long run, it's possible that a big, mature async ecosystem might make a lot of these operations so commonplace that they aren't "interesting" anymore. Maybe in that world we'll be glad to have a syntax that makes it easy to chain things together.
> That said, I think a postfixing sigil would have been much better. It seems an important enough thing to denote that a bit of special syntax is called for
They can still introduce a sigil later, if the .await syntax turns out to be common enough in code that the long keyword hinders readability. This is how it was done with the try!(...) feature, which now uses ? as a sigil. But as the article points out, pure sigils are a scarce resource so .await might well be the best way to go anyway.
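For reference, a quick sketch of that evolution (the old try! macro still exists but is deprecated in favor of ?; the file name is just a placeholder):

    use std::fs::File;
    use std::io;

    fn open_config() -> Result<File, io::Error> {
        // Older style: the try! macro returned early from the function on Err.
        // let f = try!(File::open("config.toml"));
        // Current style: the ? operator does the same thing as a postfix sigil.
        let f = File::open("config.toml")?;
        Ok(f)
    }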
Yeah this argument doesn't hold water for me either. Learning new notation versus learning a special-cased exception to the normal semantics of an existing piece of notation feel like really comparable levels of cognitive overhead to me.
I don't think this is necessarily a bad choice overall but I find this particular line of reasoning a bit specious.
Speaking as a member of the Rust language team: we specifically evaluated all syntax proposals on the assumption that many people may end up reading them without syntax highlighting. People read code in many places outside of a programmer's text editor or a code-highlighting web page, including logs, diffs, and emails.
The parent commenter does not suggest that this syntax was chosen based on assuming users don't have syntax highlighting, only that they did take into consideration that not all users have syntax highlighting.
I understand that, but how much into consideration? Let's just make up some numbers, if 1% of reading of Rust code takes place in environments without code highlight, then it should have been 1% of the consideration. Is that the case?
I don't think you should weight it like that. If you have multiple proposed syntaxes, and both are equally effective with code highlighting but one is moderately ineffective without highlighting, then it makes sense to prefer the one that stays effective without it. The weighting controls how much of an effectiveness loss in the highlighted case you would be willing to trade off against effectiveness improvements in the non-highlighted case.
It sounds like you're agreeing with me. If two options are indeed essentially equal on all fronts, then one minor use case can be enough to push it towards a winner.
I find the syntax highlighting argument odd. It's like arguing a road surface is acceptable because the dangerous conditions it causes are mitigated by traction control.
I find the don't-consider-syntax-highlighting-at-all argument odd. It's like arguing traffic lights shouldn't use colors to indicate stop-and-go because some people either cannot perceive the color difference or choose to wear glasses that negate the color difference.
(Analogies here suck. Instead, consider that the vast majority of programmers read code that has been syntax highlighted. To completely discount that experience at all would be odd.)
> It's like arguing traffic lights shouldn't use colors to indicate stop-and-go because some people either cannot perceive the color difference or choose to wear glasses that negate the color difference.
There is a reason that we still have three separate lights instead of one (that is, there are more reasons than just inertia), and that's because some people can't tell the color differences. Specifically, my father can't easily tell the color difference between green and yellow. I've had friends that also had this problem. It's not uncommon.
> To completely discount that experience at all would be odd
I'm not saying to discount it entirely, but consider it in context. I think it's a somewhat elitist argument, because it assumes everyone both has syntax highlighting set up and that their highlighting will work as well as yours.
I accept that syntax highlighting helps, and is a positive if it can be applied well to a solution, I just think it's far from sufficient.
Put another way, an optional, environment specific configuration setting is not sufficient to offset the negative aspects of an official language level feature, if we're able to weigh things prior to being implemented.
If Rust were required (or even officially expected) to be written in an environment where that was always available and easily configurable, I would think differently. E.g. if we're talking about Visual Basic, or Smalltalk with its IDE (IIRC, not that I have experience with it), then I would think differently. But as long as the Rust compiler is expected to take in files of text and not some rich format that includes extra metadata, I don't think language design choices should weigh non-text considerations too heavily.
Which is exactly what has been done. So I don't understand what you're complaining about. The rest of your comment is a giant straw man. I've never heard of or seen implied that anyone in any decision making position in the Rust project is "assuming everyone has syntax highlighting set up."
When you make ridiculous assumptions about the decision procedures of other people, then it's easy to derive ridiculous conclusions. But reality is always more nuanced than that: https://news.ycombinator.com/item?id=20031706
I never made an argument about the Rust project assuming that. I'm talking about people using the availability of syntax highlighting to dismiss a negative aspect of a possible choice. I was making that argument in general, but spurred by a specific comment here. I used the context of the Rust community discussion as an example, but never did I assume Rust was using this as a metric (just noting that it has been put forth in arguments from the community).
My argument at this point (as illustrated by the Rust discussion) is simply:
- Rust/rustc takes in unicode text as source
- Since it has no knowledge at the source level of syntax highlighting, using that to mitigate the downside of a language level syntax for a feature is problematic
- We as the public should keep that in mind when discussing the relative merits of one possible implementation or another of a feature.
That's not denigrating or assuming Rust actually did this, it's a note about the community level discussion and how some people approached it, as evidenced by a very specific comment in this thread, and how I thought it had some problems when applied to language level decisions.
All someone said was "Await is a keyword, syntax highlighters will most likely paint it differently." Which is a perfectly cromulent thing to say, and isn't contradictory of anything you're saying. Now if someone said, "it's impossible to read the await keyword due to its placement, but since everyone uses syntax highlighting, it will be okay since it will be colored differently," then you'd have a point.
I guess I'm just so tired of folks piping up with the "not everyone uses syntax highlighting" crap almost every single time anyone even hints at the notion that syntax highlighting can help a particular piece of syntax. I guess you're probably tired of the opposite.
> All someone said was "Await is a keyword, syntax highlighters will most likely paint it differently."
Which was in response to someone's criticism regarding .await notation that "it's easy to not even be aware it exists and think a struct had a member called await instead..."
> Now if someone said, "it's impossible to read the await keyword due to its placement, but since everyone uses syntax highlighting, it will be okay since it will be colored differently,"
That's how I interpreted it based on it being a reply to that exact criticism.
> I guess I'm just so tired of folks piping up with the "not everyone uses syntax highlighting" crap almost every single time anyone even hints at the notion that syntax highlighting can help a particular piece of syntax. I guess you're probably tired of the opposite.
I understand! As I noted earlier, I'm for syntax highlighting in general. If I were forced to never use syntax highlighting for programming again, I might consider a career change and only program on things I really care about instead of to pay the bills. It's because of this extreme distaste for how annoying it is without syntax highlighting that I'm extremely averse to making it any worse than it already is, because there have been a few times where I've been forced to endure it.
For the record, I'm fine with the currently accepted await syntax. While it's not what I would consider the ideal outcome (a postfixed special sigil/character), it's at least postfixed. While prefixing await looks prettier in the singular case, it makes any sort of chaining cumbersome and error-prone to parse out by eye, and as I've gone to pains to represent here, when it comes to functionality versus prettiness, I err heavily on the side of functionality (where functionality includes safety and a premium on not making complex things harder to deal with than they need to be). My comments are really a tangent on the submission topic and not meant to be applied directly to the specific solution Rust has gone with (there's a reason I waited until quite deep in the thread to use it specifically as an example). That is, .await is a perfectly acceptable outcome in my eyes without the need to justify it through syntax highlighting.
I think it's totally reasonable to judge a road surface based on how effective it is, in practice, with the kinds of traffic that will be travelling on it. Traction control is a part of that landscape. Designing with awareness of syntax highlighting makes sense. But we should also keep an awareness that it won't always be there.
> I think it's totally reasonable to judge a road surface based on how effective it is, in practice, with the kinds of traffic that will be travelling on it.
Yes, and if we outlawed the use of cars without traction control on certain roads, then I would be fine with designing with the assumption traction control will be on. But as long as you allow older cars without traction control to still drive on the road, you are making the roads less safe for a certain percentage or class of drivers. I view that as different from just making the road safer for some.
> But we should also keep an awareness that it won't always be there.
Exactly. And this is why I think it doesn't make sense as a mitigation to a language level feature which will always be there.
Specifically, I think syntax highlighting is a useful feature and a plus for a comparison, I just don't think it works as a mitigation for a negative for something that pervasive.
All other things being equal, syntax highlighting abilities might push me towards one solution over another, but they won't make me ignore a problem I perceive with a solution.
I'm not sure if you read me as slightly more critical than I had intended or if I'm reading you as slightly more defensive than you'd intended, but I think we more-or-less agree: readability is important both with and without syntax highlighting, and tradeoffs should be considered in that light.
A nitpick:
> But as long as you allow older cars without traction control to still drive on the road, you are making the roads less safe for a certain percentage or class of drivers.
Probably. But if a sufficient fraction of cars have traction control, and we sufficiently improve the performance of traction controlled tires at sufficiently small damage to the performance of other tires, the reduction in the threat of being hit by cars with traction control will outweigh the increase in risk of losing control for those cars without, even before trading off risk between cars (which is certainly more complicated).
I don't think there's a direct mapping back to syntax and highlighting.
> I'm not sure ... or if I'm reading you as slightly more defensive than you'd intended
That one. :)
> But if a sufficient fraction of cars have traction control, and we sufficiently improve the performance of traction controlled tires at sufficiently small damage to the performance of other tires
That's a slightly different scenario, where the change is to increase safety. Then it is, as you say, a trade-off. It could just as easily be a trade-off because material is cheaper though, in which case different calculations are important.
Also, it's important to be careful not to exclude important information from the calculation even if it's about a relative safety increase. For example, if you increase the safety for 90% of the people but decrease it for 10%, and that 10% is mostly comprised of a population that is unable to switch and benefit (e.g. poor people with little choice in the vehicle they drive, as they drive what's available and cheap), you might not only be forcing that risk on an already captive population, but they might also be a population that is resistant to change that would mitigate this danger (they can't afford new cars and they can't afford better cars). Conversely, shifting risk to the wealthier (more elastic) part of the market might yield more of a net reduction in risk as they are capable and willing to respond to the increased risk (or more likely, increased annoyance).
> I don't think there's a direct mapping back to syntax and highlighting.
There isn't, we're getting into the weeds, but conceptually I think there are some interesting points that apply at least partially. For example, not all changes are felt and valued by the populations they affect similarly. E.g. not relying on syntax highlighting for a language feature does not affect a person that uses syntax highlighting that much one way or another (it will likely still be highlighted in some way), but it can impact those that don't or can't fairly heavily. If I'm stuck trying to review some bug while on vacation through some crappy online git interface that isn't highlighting code well, or maybe even at all, I'm going to curse the heavens if there's a feature that's easily missed or hard to follow in monochromatic output if it's causing me problems in my stressful moment.
Perhaps that's my sysadmin history coming through, but I want stuff to be simple, reproducible, and obvious. Complexity, fragility and obscurity are the enemy, and I fight them wherever I see them. ;)
It cannot exist as a macro or function, because it transforms the code in ways they can’t. Macros can transform code, but the final output isn’t something representable in stable Rust, and so it would not compile post-expansion.
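To make that concrete, here's a rough, hypothetical sketch of the kind of expansion an await point involves (simplified, not the real compiler output; poll_in_current_task is a stand-in name for pinning plus Future::poll with the current task context). The yield that suspends the whole enclosing function is exactly the part that stable Rust cannot express, so no user-written macro could produce it:

    // hypothetical expansion of `fut.await` inside an async fn
    loop {
        match poll_in_current_task(&mut fut) {
            Poll::Ready(value) => break value,
            Poll::Pending => yield, // suspend the enclosing async fn (unstable generator feature)
        }
    }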
It seems there's a conflict between the logical flow of prefixes going right to left and postfixes going left to right. I wonder if there's any value (for other future languages) in having syntax to swap between postfix and prefix for convenience.
> “Dot keyword:” In the previous post, a sketch was made of an idea in which certain keywords could be postfixed or infixed using the dot operator. This idea is only a sketch, and is not implied or guaranteed by the decision we’ve made here.
Would the macro case be addressed by also supporting a prefix keyword? The post that states macros can't work does not cover this. From what I can tell:
`foo.await!()` would just expand to `(await foo())`, and anyone could write their own `my_await!` macro that works similarly.
I too think there could be promise in someday exploring that space, but for the moment my enthusiasm for hypothetical postfix macros (which have yet to ever be formally proposed) is somewhat dampened by the realization that `foo.bar.qux.qaz.await!()` would need to expand to `await { foo.bar.qux.qaz }`, which makes me consider how uncomfortable such macros would be to parse (the saving grace of "normal" macro calls being the fact that both the reader and the compiler know exactly where the span of a macro begins and ends by looking merely at the balanced delimiters).
IMO, if the goal were ultimate language consistency then that would involve both adding an `await` keyword (with mandatory braces, like `if` and the others) and a postfix macro for the cases where it looks nicer. However, when considered by itself, there's no denying that `foo.await?.bar` appears nicer than `foo.await!()?.bar`, especially if there is no guarantee that postfix macros will ever become a thing.
(Though in honesty I think all these proposals are just trending towards justifying an F#-style pipeline operator.)
> `foo.bar.qux.qaz.await!()` would need to expand to `await { foo.bar.qux.qaz }`
Interesting point, thank you. I hadn't seen this previously mentioned, and it's definitely a reasonable argument.
> there's no denying that `foo.await?.bar` appears nicer than `foo.await!()?.bar`, especially if there is no guarantee that postfix macros will ever become a thing.
Agreed, for sure. I'm honestly just very concerned about these features because I see them as stepping stones to others. The path from postfix macros seems much brighter than the path from postfix keywords.
I think I agree with you about an await block being a good idea.
Thanks for the response, I think this is the first meaningful response to the prefix await + postfix macro that I've read.
No problem. :) I was actually in the same camp as you until, in the wake of boats' prior post on await syntax, I sat down and got well into writing an RFC for postfix macros until I stumbled upon the parsing concern and shelved it under the category of "not nearly as trivial as I thought it would be".
Thanks, I haven't seen that RFC before, but reading it now it seems it would be insufficient to support this use case without also supporting `let x: impl Future`, since the RFC deliberately chooses to expand to a temporary binding for `self`.
It would also appear to be deficient for the same parsing reason I mentioned, i.e. that you need some way to tell whether `2 + 2.bar!()` should expand to `2 + bar!(2)` or `bar!(2 + 2)`; the RFC appears to choose the latter, whereas a hypothetical `await!()` would want the former. This problem is called out in the RFC:
"Rather than this minimal approach, we could define a full postfix macro system that allows processing the preceding expression without evaluation. This would require specifying how much of the preceding expression to process unevaluated, including chains of such macros. Furthermore, unlike existing macros, which wrap around the expression whose evaluation they modify, if a postfix macro could arbitrarily control the evaluation of the method chain it postfixed, such a macro could change the interpretation of an arbitrarily long expression that it appears at the end of, which has the potential to create significantly more confusion when reading the code."
As far as I can see, the RFC doesn't mention precedence at all, but I think it's safe to assume that it's meant to be the same as method calls. So `2 + 2.bar!()` would expand into something like `2 + bar!(2)` (modulo the temporary binding for the receiver), not `bar!(2 + 2)`.
Yeah, I was assuming postfix macro + prefix await was accepted, specifically to address the "it can't be a macro" argument in the initial post from boats.
Similarly, postfix keywords are not something that exists in Rust; await is the only accepted one.
Postfix macros are a great feature in general, if accepted later. This would open the door for them, and you'd get things like:
"{} + {}".format!("foo", "bar")
and other niceties. It would have also solved the try macro's issues, like:
result.try!()
etc. There are plenty of use cases where postfix macros are awesome.
Macros are already a known control flow mechanism, so a postfix macro should be very easy for both new and old rust users to understand.
It isn't a fake field access, so it's already a step ahead of the competition.
And you get prefix await, which every other language uses, and this addresses the singular argument against macros being used (which is that the macro could not be implemented by users), though I never felt that it was a strong argument anyway.
I don't want to go back and forth discussing it. It isn't happening. I was just curious if the prefix await would have addressed that one argument.
I do, especially in terms of writing. It's quite annoying to 'wrap' things, in my opinion. This is why chaining is desired to begin with - people consistently prefer appending new code to wrapping existing code.
> Similarly with `result.try!()`, that does not look as nice to me as `result?`.
I also prefer ?. 'try' is so common it's worth optimizing down to a single character.
My point is to compare to the prefix try, not the question mark, as a motivator for where postfix macros are a reasonable concept.
> let result = sendRequest().await!()?.getBody().await!()?.Root;
It's an additional 3 characters per 'await' vs the other syntax, which I think is fine - a small price to pay for a syntax that makes sense. If await were that common, I would once again think a sigil is the way to go, but I don't believe that await justifies that level of optimization at this point.
It's not just learning the new notation. It's also that you probably can't override it with your own implementation (useful perhaps in embedded environments).
I don't see why you'd need to override await, because unlike in other languages Rust doesn't implicitly insert machinery for facilitating task execution. You need to provide your own task executor, which is the part that embedded environments would want to (and will be able to) have control over.
All control flow mechanisms except await have their own statement: if, for, while, match. How come await is different?
The dot await syntax might hide an await within an expression, as "just another method". You'll need to hunt it down.
I feel await has costs and consequences that need to be explicit. The dot syntax does the opposite, and therefore I feel Rust might be making a mistake of a lifetime (pun intended) on this one.
> All control flow mechanisms except await have their own statement: if, for, while, match. How come await is different? The dot await syntax might hide an await within an expression, as "just another method". You'll need to hunt it down.
Not quite; Rust is an expression-favoring language, so `if`, `match`, and even the `loop` keyword can all appear deep within expressions (the first two do so somewhat commonly; the latter is comparatively rare). In addition, the error-handling operator `?` is also used within expressions. (And while we're on the topic, let's remember that even C, a statement-heavy language, also has `?` as a control-flow operator that is not in statement position.)
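A small illustration of keywords living inside expressions (the values here are arbitrary):

    fn main() {
        let n = 3;
        // `if` and `match` are expressions, so they can sit deep inside other expressions:
        let label = format!("{} is {}", n, if n % 2 == 0 { "even" } else { "odd" });
        let size = match n { 0 => "zero", 1..=9 => "small", _ => "big" }.len();
        println!("{} ({} letters)", label, size);
    }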
I read your parent as suggesting that the line with `let` was made inside of another `async fn`. You're correct that you can only .await inside of one. You shouldn't be downvoted.
Note "async fn main" on line 15, and the ".await"s on 21 and 24.
Now, if you read my other comment about executors, this is using a crate called "runtime", which handles submitting it for you, see the `#[]` bit on line 14. Regardless, the syntax would look like this even without this crate; it just removes some boilerplate around the executor and submitting your future to it.
A bit off-topic: Is there any theoretical reason you need async / await syntax at all?
(It's certainly desirable for performance and compatibility to avoid making all subroutines into coroutines, so I understand why most languages have done this.)
And restricting it to the case where coroutines have a single return...
Subroutines are naturally coroutines that don't yield. And it seems like the question of whether it's a subroutine or a coroutine shouldn't be something the programmer needs to worry about.
What I've seen in coroutine libraries is the main reason we need await seems to be for cases when we don't want to await a result.
If you need an unawaited invocation to pass to a routine like gather, you do this:
    my_results = await gather(run_job(x) for x in work)
But, even if we can't infer from the type of gather that it must accept a future, a function can have a method that asks it to return a future:
    my_results = gather(run_job.future(x) for x in work)
You'd often want to dispense with gather. If I have these statements:

    alpha = run_a()
    beta = run_b()
    gamma = run_c()
    return alpha + beta + gamma

In most cases, e.g. if these were requests going over the network, we'd prefer to schedule all three at once. If we want to sequence side-effects, then explicit await makes sense.
> Is there any theoretical reason you need async / await syntax at all? [...] That makes the most concurrent option easiest, as opposed to the clunky idiom of "gathering" many results.
Futures / async-await is an idiom that makes async code easier to reason about, not something that provides any new theoretical foundation or any functionality that wasn't possible before. Like a type system, it's strictly for safety & convenience.
Whether & where you wait on any individual async request is largely separate from the await syntax. If you can start 3 async requests in parallel, and await on the group rather than each request, you probably should.
That said, the idiom of gathering results sequentially is guaranteed to work and be safe, where hidden dependencies and hard to identify race conditions have plagued attempts to make parallel async code since the beginning of time. The syntax of futures in various languages is making async and parallel code easier and safer for most programmers to use without landing in quicksand.
> Futures / async-await is an idiom that makes async code easier to reason about, not something that provides any new theoretical foundation or any functionality that wasn't possible before.
So, this is true in GC'd languages, but if you see my link below, in Rust, async/await does let you write code that was previously impossible. This is because the compiler can't understand lifetimes in the way that the code is written with raw futures.
In theory, someday, if and when generators are stabilized, this will be true again, but instead with the caveat of "async/await doesn't give you any new thing that wasn't possible before, except that it can do it with safe code, rather than unsafe code."
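To make that concrete, here is a minimal sketch (both helper async fns are stand-ins for real async I/O): the borrow of contents is held across the second .await, which is the kind of self-referential state that hand-written combinator chains couldn't express in safe code.

    async fn fetch_file(_path: &str) -> String {
        String::from("hello\nworld") // stand-in for real async I/O
    }

    async fn log_line(_line: &str) {
        // stand-in for a real async logging call
    }

    async fn read_first_line(path: &str) -> String {
        let contents = fetch_file(path).await;
        let first = contents.lines().next().unwrap_or(""); // borrows `contents`...
        log_line(first).await; // ...and that borrow lives across this await point
        first.to_string()
    }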
I meant my comment a bit more broadly & abstractly than that. You can write code in a new way with await, and it does help you with how to factor & manage the code. But it does not fundamentally let you make parallel requests that you couldn't make before, somehow. The code you needed before await might be ugly and stateful and not idiomatic, but it was still possible to make requests and wait for them, before await arrived.
What we have now with await does change the game, it makes async code appear synchronous, and the advantages of that change should not be underestimated. I think futures & await are huge and positive changes for every language they're showing up in, and they do allow for new kinds of programming, I'm just answering the parent comment's question about whether they're strictly necessary in theory. The answer is that they're not.
Okay, yeah, that is fair. I have written enough raw futures in anger to be like "it's not practically possible", but you're also right that it is actually practically possible, regardless.
Oh, practical is also separate from my answer, since my reading of the question was strictly "theoretical". ;)
Maybe about a decade ago I wrote a raw futures system for async asset loading off DVDs on the Nintendo Wii in C++, and it had to play nice with the real-time rendering system. I didn't stand back far enough to abstract it the way futures work today, and as a result it was an absolutely awful experience. It was so difficult to understand and debug, I dug myself an enormous hole. Having used futures in JavaScript and Scala and other languages since, I'm certain having real futures would have saved me months of crunching.
I would agree that await syntax brings a huge degree of practicality to what was previously theoretically possible.
Thanks for the link, I'm definitely interested in how it's done without a runtime. I know in theory a coroutine can compile down to a gory switch statement and some state, but doing anything useful with it gets hairy, I'm sure.
So, you do need to include some code to drive the futures. The key is that this is a library in Rust, rather than built into the language, and so Rust programs that don't use async do not pay the cost.
And yeah, that's sort of how it works under the covers :)
I think (but I cannot find a reference to now) one of the proposed syntaxes was that awaits were immediate and implicit in async functions, and that if you didn't want that, you could opt-out and get futures normally. Loosely,
    async fn foo() {
        let result = blocking_req(); // awaiting immediately here.
        async {
            let future = blocking_req(); // not awaited.
            // can do more complex stuff w/ the future
        }
    }
IDK what happened to it, and it is a bit more complex IMO than what the Rust folks went with. However, I don't think it helps your gather case.
For gather, I think the problem w/ your proposed
    return alpha + beta + gamma
is when does that happen? If I do
    let some = alpha + beta;
    // < ... code ... >
    let more = some + gamma;
what happens / when are things evaluated?
At any rate, with the syntax as it is, I think your gather example is essentially¹:
    alpha.join3(beta, gamma).await
or
    Future::join3(alpha, beta, gamma).await
(Perhaps minus the `.await`, depending on when you need/want the result.)
There's no operator overloading, which I'm fine w/, as there is a need to choose between, e.g., select or join.
¹but I'm new w/ the async stuff, so if I'm wrong I'm assuming that Cunningham's Law will kick in and someone will correct me.
One of the goals behind it was to align async control flow with sync/thread-based control flow, to make it more intuitively obvious when things would run concurrently and when they would not. That is, calling a function, even if async, would immediately run it to completion, while wrapping it up in a closure or async block that gets spawned would run it concurrently.
This is especially relevant to Rust, which departs from other implementations of async/await in that merely constructing a future is insufficient to start it running, which is why you need that `join3` call to explicitly introduce concurrency.
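A minimal sketch of that laziness, assuming the futures 0.3 crate for join3 (you would still need an executor, such as block_on, to drive the outer future):

    use futures::future; // assumes the `futures` 0.3 crate

    async fn fetch(id: u32) -> u32 {
        id * 10 // stand-in for a real async request
    }

    async fn run() -> u32 {
        // Constructing the futures does not start them...
        let a = fetch(1);
        let b = fetch(2);
        let c = fetch(3);
        // ...only awaiting the joined future drives all three, interleaved:
        let (x, y, z) = future::join3(a, b, c).await;
        x + y + z
    }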
Under the "implicit await" proposal, your first example would look like this:
    async fn foo() {
        let result = blocking_req(); // await immediately
        let future = async { blocking_req() }; // think of async{} like a lambda function
        // can do more complex stuff with the future, like:
        // future.await()
        // Future::join(future, other_future)
        // tokio::spawn(future)
    }
The thread-based equivalent looks very similar:
    fn foo() {
        let result = blocking_req(); // block the current thread immediately
        let closure = || { blocking_req() }; // don't run this yet
        // can do more complex stuff with the closure, like:
        // closure()
        // rayon::join(closure, other_closure)
        // thread::spawn(closure)
    }
The async block syntax would be nice, and probably far more explicit.
I think with the `return alpha + beta + gamma` the idea is that within a scope, everything is done as late as possible, but you can't leave a scope without all results.
Thus, if the code is expanded as:
    alpha = run_a()
    beta = run_b()
    gamma = run_c()
    a_plus_b = plus(alpha, beta)
    sum = plus(a_plus_b, gamma)
    return sum
In that case, the compiler has to determine a partial ordering: run_a, run_b, and run_c have no dependencies between them and could be scheduled concurrently, but the first plus must wait for alpha and beta, and the second for a_plus_b and gamma.
Indeed, I think going forward we will see high-level languages that paper over the distinction between synchronous and asynchronous functions. However for Rust specifically, the performance implications you note are of particular concern, while the remark "it seems like the question of whether it's a subroutine or a coroutine shouldn't be something the programmer needs to worry about" is debatable when considering languages that aim to sit at the same level of the stack as C.
I should revise that: "programmers aren't very good at deciding whether a function should be a subroutine or coroutine."
Specifically, the problem is that making the decision happen requires refactoring and that can make a small change into a big one. My contention is that for some languages (probably not Rust) it would be beneficial if this didn't impact the code structure heavily.
I've seen smart people simply avoid trying to use asynchronous code in Javascript because it "poisons" the whole stack. Part of this was because it was based on promises, and it meant going through a bunch of logic and refactoring. Arguing against my case, that did result in a slightly nicer structure overall.
Hopefully having proper async / await notation avoids 99% of that mess in Rust by making it a matter of decorating code with the appropriate keywords and being a bit more thoughtful about structure.
And you're obviously right that Rust needs to have sane performance characteristics, let alone being able to export C compatible bindings.
Question for people more familiar with async/await semantics, how do you typically control what thread the async procedure runs on? Having spent a lot of time now in rxJava and really getting into the power of stream processing and composition, it feels like async/await is almost too simplistic.
async/await produces a Future. In order for that Future to execute, you place it on an executor. It won't start executing until you do so. Which thread it runs on, and how, is all the executor's job. So the answer is basically "pick the executor which has the semantics you want".
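A minimal sketch of that, using the futures crate's single-threaded block_on executor (a multi-threaded runtime such as tokio would make different scheduling choices):

    use futures::executor::block_on; // assumes the `futures` 0.3 crate

    async fn answer() -> u32 {
        42
    }

    fn main() {
        let fut = answer();    // nothing runs yet; this only builds the Future
        let n = block_on(fut); // the chosen executor decides where and how it runs
        println!("{}", n);
    }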
I never liked this behavior in Java. Or rather, I wasn't sure it was right or wrong and then I found code where people returned a future without starting it, causing the caller to wait forever in some situations.
Also if you use multiple modules with their own executors, they don't compose well. You can easily get oversubscription of CPUs.
Javascript's promises are self starting, which I would say is an improvement but the total lack of back pressure is anywhere from galling to highly problematic depending on the problem domain.
Goldilocks solution might be to have default executor behavior with an obvious and straightforward way to override it.
In .NET it defaults to scheduling continuations in the "main" thread as much as possible, with the easiest escape valve being Task<T>.ConfigureAwait(false) that tells it that it can finish continuations wherever they land in the ThreadPool. This is why you can often find a lot of (partly mistaken) advice in .NET to always use ConfigureAwait(false) for "performance" which as often as not leads to people rediscovering the hard way why the easy default is continuations on Main/UI threads. There actually is a more complex TaskScheduler mechanism underlying that big ugly valve, but TaskScheduler is not quite as featureful or easy to control as the RX-equivalent scheduler.
(I heard some rumblings that the .NET team was considering in the long run to better unify async/await TaskScheduler scheduling with the Scheduler model of RX.NET, but I don't know if anything has progressed yet along those lines.)
Python mostly requires manual management of event loops for async/await coroutine scheduling. Most Python code, even/especially using async/await is still generally single-threaded (due to artifacts like the GIL [Global Interpreter Lock] that are getting better with each passing version of Python), so more complicated scheduling is both generally not necessary and also left as an exercise to the user.
ECMAScript 2017 (JavaScript) async/await is also mostly running in single threaded event loops (though not manually managed like Python; generally browser UI event loop managed).
In C#, your tasks are submitted to the ambient task scheduler. In ASP.NET, this is (I believe) the main thread pool that handles requests. The only time I've ever had to care where something was running is when adding response headers because that's available as a thread local API that can't happen on another thread, but it's generally ended up as a "it hurts when I do this" kind of situation.
In C#, this is defined by the current thread's synchronization context.
If you're writing a console application, the main thread doesn't have one. When an async function wants to resume, it will usually resume on some thread pool's thread, potentially a different one each time. It can be any other thread too; the runtime just resumes running the function on the same thread which completed the await.
But if you're writing a GUI app and launching an async function from its GUI thread, that thread has a current sync.context. When the function will want to resume running or fails, the runtime will resume/raise exception on the same thread where it started. More precisely, the runtime delegates the decision to the sync.context, and the contexts set by GUI frameworks choose to resume on the GUI thread.
This may cause some funny deadlock bugs, but in most cases works surprisingly well in practice.
Also it's easy to implement custom synchronization contexts if needed.
Even there, you don't care which particular thread you're running on, just that you're NOT on one particular thread. You still pass it to an executor that has its own thread pool and shouldn't be terribly concerned with the particular thread.
Java parallel streams use a global thread pool by default. The problem with this is that if you mix IO bound code and CPU bound code in the same thread pool the IO bound code will block the CPU bound code. If you don't explicitly choose the thread pool your code runs on then you will suffer from impossible to debug performance problems.
I'm not familiar with the syntax, but I'm interested to know how much of this was a religious battle and how much of it results in meaningful complications for user code down the line. I've heard really good things about the Rust community (like nothing bad at all), until maybe two weeks ago when somebody mentioned this stuff. I'm a really big fan of not having religious battles.
Is this syntax for native coroutines? Can it be combined with existing user and stdlib syntax? What pathways for syntax development does this decision cut off entirely, and pathways does it leave open? What did the Rust community / maintainers learn from this debate, and how can the Rust community avoid such battles in the future?
> Due to the connotations of "religious battle", it's hard to get a good answer here. Nobody wants to be painted in this light.
Haven't we all found ourselves lined up in one of a pair of camps over some heated dispute about a bit of minutiae? The whole time you know it's a bit silly, but not entirely, and you feel compelled to continue to argue.
Using a term like "religious battles" with tongue firmly in cheek is recognizing that we're prone to such things because we're human beings, by putting a lampshade on it.
This gets trickier when you have a community where some people practice religion in their daily life, and some people don't, and where the interactions between the two aren't always fun. It's not that the subject needs to be totally forbidden of course, but when it's just as easy to use different metaphors, we might as well.
@steveklabnik My apologies if I appeared to make light of the Rust community’s contributions. My definition of religious war is butting heads about problems that don’t have a clear right answer (tabs/spaces). If there are new things considered by the community that make the discussion worthwhile then it’s absolutely worth having; but if it causes people to burn out on the community down the line then maybe more moderation would be helpful. I have no experience leading anything like Rust, but I talked to contributors who burned out on Python and various libraries and it’s sad to see, so ideally Rust would forge a better path.
> My apologies if I appeared to make light of the Rust community’s contributions.
Not at all!
> My definition of religious war is butting heads about problems that don’t have a clear right answer (tabs/spaces).
So this is where we get into the meat of it. I personally do believe that a lot of the discussion was around things that don't have a clear right answer. But (and the post mentions this a bit), that doesn't mean that it's inherently religious, it means that different people value different aspects of the solution differently. The big difference between this discussion and the tabs/spaces discussion is that each person can make their own call with tabs and spaces, but this syntax affects every user of Rust. That means we can't just say "well this is a religious debate, do whatever" because then the feature does not exist.
> If there are new things considered by the community that make the discussion worthwhile then it’s absolutely worth having; but if it causes people to burn out on the community down the line then maybe more moderation would be helpful.
Yes, it is a fine line. There's a lot to it, but it is something that people are thinking about, for sure.
> I’m not sure whether the debate was taking all that into account
I can assure you that the overall development of the feature took every possible thing into account. Part of the reason that this was so syntax-focused is that the team spent multiple years working on the semantics. The syntax was all that was left.
I am not 100% sure what you mean with the blog post link, as I'm not really a Python person, so I can't quite grok what you're talking about, exactly. If you can elaborate a bit I can too!
Maybe a better way I could phrase my concern would be whether the benefit to having the debate outweighs the cost, and what metrics Rust might keep track of that inform what the cost may be at every stage of the discussion.
The debate will always make sense for some time, but after a while it may block discussion of other features the Rust team might like to have for a roadmap milestone. Maybe having two forks with a tiny subset of async/await implemented (with something like tracer bullet development) would give stakeholders useful information on what the implementation cost would look like, what testing coverage overhead would be, and other metrics that balance the theoretical with the practical.
As to the statement that async/await was a smaller part of grokking concurrency in a language, maybe a better blog post might be this one:
Some helpful background: Python has the notion of a global interpreter lock (GIL), which prevents concurrent CPU-bound tasks in threads from being effectively parallelized in Python. As a result, most concurrent Python work is I/O-bound, or calls some C/Fortran library where the GIL does not block CPU-bound tasks, or uses a process pool. I don't think this should affect the discussion, because the concurrency issues he talks about don't care about what resource a task is bound by; they may just be more evident with I/O-bound tasks or provide greater perspective.
So `njs` (a Python core developer) went and developed an async I/O library called `trio`, which has the concept of "nurseries": containers for dynamically spawned tasks. Exiting a nursery blocks until the tasks within it have exited, so that the concurrent tasks inside can appear as a single black box to the outside world. I believe he was saying that the async/await feature in Python should be combined with other control flow primitives, because otherwise creating tasks that outlive the parent, error propagation, and resource cleanup may result in spaghetti behavior and spaghetti code. He also claims that clean control flow enables new features that assume said control flow, which makes development more powerful. This goes back to my assertion that async/await isn't the end-all/be-all in terms of making concurrent code grokkable.
One quote from that blog post that mentions Rust and thread correctness (I'm not sure if it's outdated):
"""
Go statements break error handling. Like we discussed above, modern languages provide powerful tools like exceptions to help us make sure that errors are detected and propagated to the right place. But these tools depend on having a reliable concept of "the current code's caller". As soon as you spawn a task or register a callback, that concept is broken. As a result, every mainstream concurrency framework I know of simply gives up. If an error occurs in a background task, and you don't handle it manually, then the runtime just... drops it on the floor and crosses its fingers that it wasn't too important. If you're lucky it might print something on the console. (The only other software I've used that thinks "print something and keep going" is a good error handling strategy is grotty old Fortran libraries, but here we are.) Even Rust – the language voted Most Obsessed With Threading Correctness by its high school class – is guilty of this. If a background thread panics, Rust discards the error and hopes for the best.
"""
> My impression of async/await is that it’s one part of making concurrency grokkable, and I’m not sure whether the debate was taking all that into account
That's a different question than "where does the await go" and it was certainly taken into account.
I am amazed at how well documented and transparent the whole process was. They knew the decision would be difficult to make and that there would always be people who wouldn't like the final syntax, so they made sure to state their reasoning and trade-offs clearly. Fantastic job. Can't wait for August. Thanks!
Nice! That was quicker than I expected. How far along are they with the implementation? Not using Rust, but following progress because it is on Hacker News so much.
Why does it take so long for Swift to have this? Swift + iOS developers would really, really benefit from it. And there are loads of iOS developers compared to Rust developers.
Fantastic! I'm extremely excited with this update on the Rust async/await syntax! I have been following the blog posts and the issue tracker for a long time and couldn't wait till this day arrived!
1. Interacts poorly with the ? operator for propagating errors. You'd need to do (await foo())?
2. Chains poorly. `await foo().bar().baz()`
- a. What are you awaiting? foo() or foo().bar().baz()
- b. What if you wanted to await foo().bar()? (await foo().bar()).baz() is messy.
- c. What if you wanted to chain promises? Like httpRequest.send().body() . In many libraries such as Python's requests, the first returns a future for a response with headers and only a later call waits until the body is returned. `await (await httpRequest.send()).body()`
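For comparison, a sketch of how that chained-futures case reads with the accepted postfix syntax (the HTTP client API here is hypothetical, mirroring the identifiers from point c):

    // prefix form from point c above:
    let body = await (await httpRequest.send()).body();
    // accepted postfix form:
    let body = httpRequest.send().await.body().await;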
None of the above. It's compiler magic. It's a keyword that says 'await on this awaitable object' (currently only a Future can be await-ed).
You can read the OP which contains links to more blog posts on the reasoning. I think this is a bad decision precisely because of the questions you're asking. At least making it a 'magic method' (i.e., Future::await(...) or future.await()) with no implementation would have been better in basically every way. A 'dot' operator already exists and this just makes it confusing.
I've been planning to try out Rust for a side project, and this decision is so principle-of-most-surprise that it's got me reconsidering, wondering what other unpleasant weirdness lurks in the language. Looking down their other options in one of the linked articles, they seem to reject a couple fine ones outright and then went with one I'd have rejected outright as being too hostile to anyone who's not quite familiar with the manual, having memorized exceptions to ordinary behavior like this. I'm struggling to understand why anyone liked this one at all, let alone enough to push it to the top of the pile.
OK, if the exceptions are quite exceptional and hoping, as you note, this doesn't represent a trend, I'll keep it in consideration. Thanks for the insight.
To reinforce what others are saying, this is the only Rust decision that gives me even a moment's pause. Everything else is pretty pleasant. Honestly, I hope that this backfires and gets fixed in a future Rust Edition. It's a very silly thing.
Disappointed we didn't get something like a magic method (e.g., lang item) (which isn't without precedent [1] [2]) so we could have had `Future::await(fut)` or `fut.await()`.
`fut.await()` looks too magical. There's no hint in that expression that it's anything other than a method call. You could argue the same about `fut.await`, except in this case `await` is a keyword and thus the expression `x.await` is known to be an await expression without knowing anything about `x`, whereas `fut.await()` requires knowing the type of `fut` to understand that the method call `.await()` resolves to the lang item. Also, since it's a lang item, you could write your own nostd library that picks a different name for the same lang item and cause even more confusion.
If we're gonna have await share syntax with some other language feature, method syntax is IMO better than field syntax: it semantically blocks the function until the callee is complete, it evaluates to a temporary instead of an lvalue, and it can unwind the caller (via cancellation, like a panic).
You could implement that syntax and still make `await` a keyword, without doing the usual method lookup and without letting no_std libraries change the name, just like `.await`; that's an orthogonal concern.
This just then boils down to saying "the await keyword should have required parens after it", which is a different argument than saying "it should be a lang item".
hell, I'd support that. A function/method 'does a thing' while field access 'reads a thing'. Rust doesn't have implicit getters and setters like Python might have. It doesn't LOOK like `fut.await` should/could DO anything, but it can.
Ah, but isn't the whole point of an `async` function that it syntactically erases the notion of blocking on an operation? If you ignore the fact that it blocks the async function (and yields control back to the executor), then `foo.await` isn't really "doing" anything at all, it's just handing you back the future's output.
Having said that, I'd have to assume that `fut.await` does consume the future, which normal field access doesn't do. Personally, I'm ok with the tradeoff that says "this keyword looks like field access but it consumes the receiver, but as a result we don't have these extra parens all over the place on a keyword", especially because going the other way introduces other costs (e.g. it looks like a method call and yet can't be referred to as Future::await).
It doesn't do anything locally but it does (as part of going back to the executor) let arbitrary other code run, which is basically what a method call does.
Then you CAN do `fut.await()` or `Future::await(fut)`, etc. Assuming this wouldn't be impossible for some other reason. And it makes it clear that `await` is magic, not something that a new programmer just can't find the definition of or assumes is the result of a macro or something.
> we don't have these extra parens all over the place on a keyword
Like `fut.await().await().await()` vs. `fut.await.await.await`? Both look equally stupid, and I feel like the existing Future APIs already discourage these kinds of chains. But it's been a while since I played with them.
Namely, await isn't a matter of just inserting some compiler-defined behavior at a specific point. It requires rewriting the control flow of the surrounding function. The await! macro did this by invoking a magical unstable keyword `yield`, but even that still has to be done within the current function (e.g. the implementation of await! must be a macro, not a function).
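As a rough illustration of that rewriting, here is a hand-written approximation of the state machine the compiler generates for a single-await async fn. The names `DoubleFuture` and `inner` are illustrative only; the real generated type is anonymous and handles pinning and multiple suspension points.

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Roughly what `async fn double(fut: F) -> u64 { fut.await * 2 }` expands to.
struct DoubleFuture<F> {
    inner: F,
}

impl<F: Future<Output = u64> + Unpin> Future for DoubleFuture<F> {
    type Output = u64;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u64> {
        // The `.await` point becomes a place where this poll can return
        // Pending and be resumed later by the executor, which is why await
        // cannot be an ordinary function called from within the current body.
        match Pin::new(&mut self.inner).poll(cx) {
            Poll::Ready(n) => Poll::Ready(n * 2),
            Poll::Pending => Poll::Pending,
        }
    }
}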
I mean, that just means the compiler needs to ignore the `Future::await()` function scope and rewrite the context above. It's already being magical. A slight indirection in implementation would be worth the pedagogical clarity, I think.
> There's no hint in that expression that it's anything other than a method call.
Compared with `fut.await`...
> There's no hint in that expression that it's anything other than a field access.
At least there is precedent for functions having magical compiler behavior for 'doing stuff', instead of overloading field access, which is one of the simplest operations you can have.
> ... requires knowing the type of `fut` ...
tbh, you don't need to know how await works. It goes and does stuff and returns a value. If you need to know what it does, you're looking up the types and seeing, 'oh, it's a future' and looking at that.
Contrast with the field-access version, where you don't even know it's something else entirely.
I guess I just don't buy that it can't be a method. Like, formally, yeah, it isn't a well-defined, safe Rust method. But make the impl a compiler-provided lang item.
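Something along these lines is presumably what's being asked for. This is a hypothetical sketch of the proposal, not real Rust: the raw identifier is only there because `await` is a keyword in the 2018 edition, and the body would be supplied by the compiler rather than written in Rust.

// Hypothetical: a compiler-implemented "magic method" on the Future trait.
trait Future {
    type Output;

    // No Rust body; the compiler would treat a call to this lang item as a
    // suspension point and rewrite the calling async fn accordingly.
    fn r#await(self) -> Self::Output
    where
        Self: Sized;
}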
Now it's a method defined on the `Future` trait, it's discoverable, it's clearly magic, it's ergonomic... In my opinion, it beats the field-access option on every conceivable metric.
If it's a method, you should be able to create a vector of futures and `.map(Future::await)`, or even `.map(Future::await as FnMut(_) -> _)`. Clearly this could never work: since `await` is implemented by transforming the containing function into a state machine, your suggestion would imply an API that's impossible to provide.
I think you fundamentally misunderstand async/await. `vec![fut1, fut2, fut3].into_iter().map(|fut| fut.await).collect()` will not do what you might expect, although I haven't bothered to check the RFC to see whether it will be allowed at all. Await must have compiler support and cannot just be some function you call.
I am certainly being sloppy, though I think "fundamentally misunderstanding it" is a bit of an exaggeration. I know you can't await in a non-async function, so, at the very least, you'd need an async closure (I don't recall the exact syntax for those).
But I notice that I expect `await` to be a blocking, non-async function. Which feels strange, in that the lambda in the first block needs to be async but the function itself doesn't.
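For reference, a sketch of what awaiting a whole collection actually ends up looking like, assuming the `futures` crate's `join_all` rather than a standalone `Future::await` function passed to `map`:

use futures::future::join_all;
use std::future::Future;

async fn sum_all(futs: Vec<impl Future<Output = u64>>) -> u64 {
    // The `.await` lives inside this async fn; the compiler turns the whole
    // function into a state machine around that suspension point.
    join_all(futs).await.into_iter().sum()
}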
> Await must have compiler support and cannot just be some function you call.
I'm not saying it doesn't need compiler support. I explicitly state that it does - I just want the magic to be a little more explicit. Moreover, I believe that most programmers think of functions as 'encapsulated code', something that does a thing. In this sense, it matches what await does better than syntax that looks like it's accessing a field. Admittedly, in the purest sense of encapsulation, this method 'escapes' that encapsulation, but I don't consider that terribly important.
> At least there is precedent for functions having magical compiler behavior for 'doing stuff', instead of overloading field access, which is one of the simplest operations you can have.
Except we don't actually have precedent for "I'm going to declare a function, but it's not actually implemented in Rust; instead it's a cue to the compiler to do other magical stuff." Lang items are in fact the opposite; they're telling the compiler "here's the implementation for an extension point you've defined." For example, the "panic_impl" lang item is a function that's invoked when code panics. The closest I can think of to what you're referring to is the fact that the type identified as "owned_box" has special dereferencing behavior, but that can be modeled as the compiler providing an implementation for an un-namable trait on the owned_box item, where the trait represents the new deref behavior (just as Deref and DerefMut represent the existing deref behaviors).
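For comparison, this is roughly what using an existing lang item looks like from the user's side in a no_std crate (a minimal sketch; a real embedded crate would do something more useful than loop forever): the crate supplies the body, and the compiler inserts calls to it whenever code panics.

#![no_std]

use core::panic::PanicInfo;

// Stable surface for the panic lang item: we provide the implementation,
// the compiler provides the call sites.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}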
> tbh, you don't need to know how await works.
You need to know the type in order to determine if `fut.await()` is invoking this magical semantics-altering compiler behavior or if it's just invoking a method that happens to be named `await` (given that the lang_item proposal means await is no longer a keyword).
> Contrast with the field access one and you don't even know its something else entirely.
Except you do, because await is a keyword. It's impossible in Rust 2018 to have a field named await, and so any expression that looks like `x.await` is guaranteed to be an invocation of the await behavior.
> I guess I just don't buy that it can't be a method.
It cannot be a method that is implemented as a call to an intrinsic. The await keyword requires rewriting the function that invokes it.
The fact that discussions over syntax generate pages and pages of furious bickering, while discussions over semantics (which is what actually matters) get a shrug, is the ultimate example of bikeshedding in PL design. Honestly, syntax just doesn't matter. Yes, particular poor choices can impede usability, but that's not applicable here, and ultimately after two minutes or so of learning the new syntax there's no difference between this and "await foo".
> while discussions over semantics (which is what actually matters) get a shrug, is the ultimate example of bikeshedding in PL design
Worth noting that in this case the discussion over semantics has been ongoing and nonstop since around 2013, since it's such a big feature that it took a lot of trial and experimentation to figure out, and only then after being broken up into several large sub-tasks which all had to be individually discussed as well; see the epic comment thread on the Pin stabilization tracking issue as an example of just one semantic discussion towards this end: https://github.com/rust-lang/rust/issues/49150
Syntax may be irrelevant at the end of the day, but nice syntax can make a big difference in usability, imo. I'm not following the Rust example closely, but discussions of it remind me of discussions about UFCS (Uniform Function Call Syntax). That's where `foo(a, b)` can be rewritten as `a.foo(b)`. That may seem minor, but look at this code:
half_square = divide(square(a), 2)
In comparison to:
half_square = a.square().divide(2)
Many find the latter much more pleasant to read, and things like this can make a big difference in how fun I find it to use a language.
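Rust itself already allows both spellings for the same call, which is roughly the UFCS idea being described. Here, real std methods (`powi` and `sqrt` on `f64`) stand in for the hypothetical `square` and `divide`:

fn main() {
    let a: f64 = 2.0;
    let nested = f64::sqrt(f64::powi(a, 2)); // divide(square(a), 2) style
    let chained = a.powi(2).sqrt();          // a.square().divide(2) style
    assert_eq!(nested, chained);
}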
I agree with you. I find that in the first example the order of operation is much more explicit at a glance than the second, but I could also get used to the second without much effort. I think it all comes down to what your background is.
Me too, but I suspect that's because the first is the pattern used in nearly every language I've spent extended time with since I was a kid (30 years ago).
I don't think either is better but one is definitely familiar.
Generally I try not to care about syntax too much. The exception is putting $ on the front of a variable: it annoys me not so much because one language did it, but because not every language did, so every time I switch from PHP to TypeScript I end up typing a $ at least once a day. The weird part is that I write idiomatic code in both, so my brain knows it has shifted context, but I still put the $.
let $foo = bar;
Just looks wrong.
Weirder still when I run into code written by people who do that on purpose.
I think you can skip the empty () in D. The syntax becomes much more readable when you don't have to jump back out to the outer levels of nested function calls. Thankfully pipes exist in functional languages, which makes it feel just right.
I left the `()` because in some imperative languages there's a difference between `() -> a` and `a`, which makes `a.b` and `a.b()` different. In Haskell there's `>>>` and `&` in base, which I use all the time for this sort of workflow:
find_half_square = square >>> (flip divide) 2
This creates a function which takes a value, performs `square`, then performs `(flip divide) 2`. It's written in pointfree style, which means the function's input is never written in the definition of the function, which I find to be extremely aesthetically pleasing (as it allows me to focus on the composition of functions, without thinking about passing arguments around). This is one situation where changes in syntax allow you to reason about programs quite differently.
Whatever pleasantness may come from that or any other specific example is totally outweighed by the constant overhead of having to remember that there are N ways to do 1 thing. As a primary rule: the best programming grammar has the fewest such ambiguities, ideally zero.
Folks who really believe that program in lambda calculus.
In practice, syntax is the UI for a programming language. And just as with any UI, there's a bunch of tradeoffs between how easy the language is to use for a newbie (who knows nothing), how easy it is for casual user (who knows a few core constructs but has to lookup advanced functionality), how productive it is for an expert (who has a vast working memory of functionality), and how powerful it is (in terms of which constructs can even be represented). Different users will have different opinions on which tradeoffs are justified, largely depending on where they fall on this continuum.
On one side of the (practical) continuum, you have languages like COBOL, BASIC, PHP, and Hypercard, which are explicitly designed to seem familiar to people who know other non-programming technologies. On another, you have languages like Scheme, C, Go, and Java, which have a small set of broadly-applicable core concepts but require some verboseness to express lots of common patterns. And on the third, you have languages like C++ and Perl where experts can express very powerful programs without a whole lot of typing, but which can be rather impenetrable to people who haven't spent years mastering them.
I think this is a fair summary, but I also think that C++ and Perl can pretty fairly be judged as failures, for the reasons (or in the ways) you enumerate.
Put another way, it is not true that powerful languages must necessarily be impenetrable.
Put still another way, the amount of typing you do (within certain bounds of reason) is an almost totally irrelevant metric when judging a programming language.
This is a strong assertion with basically no backing, and all modern languages have some level of what you call ambiguity, so this isn't true in practice either.
What's a big difference for you got to do with the design of a language used by many others? What makes your experience more important than the experiences of someone with different qualifications?
Why is optimizing for your tastes the correct thing to optimize for?
If it's a big difference to me, it would be surprising if it was not a big difference to some portion of people (in either direction, I wouldn't be surprised if most people hate it). Since it makes a big difference to some portion of people, it is worth discussing. My intention with my comment was to make the argument that spending a long time discussing syntax can be a useful and productive thing for language designers to do.
If you're going to make that argument, you should probably be less vague. It makes a "big impact" to "some portion of people" that the Earth be thought flat, but you can't justify wasting a "long time" arguing about it.
Language designers don't need vague personal opinions masquerading as 'useful' facts, intentionally sculpted for inappropriate generalization. They need to know what the purpose of the syntax is and what the expectations of the language/user base are. Beyond that, you're just contributing noise.
And given that this criticism is largely against bikeshedding, it is interesting that you would try and justify it with statements of self-importance; bikeshedding largely happens because involving people in a decision leads them to massively over-value the importance of the decision simply because they are a part of it. You may be an expert on your own opinions, but if you're not an expert on their relevance, then you're not helping.
If syntax didn't matter we wouldn't see conformity to a set few styles, with outliers having a far more difficult time gaining traction.
Syntax very obviously matters.
Besides, futures are very old in Rust at this point (relative to the language's age) and were discussed years ago. The reason people aren't seeing those discussions is that they happened a long time ago, while the syntax discussion happened a month ago.
I think this also highlights the pitfalls of being community-centric. If Rust were run more like Chromium or something, this sort of thing would simply cease to be an issue. Major kudos to the Rust team for managing it really well.
There was not just disagreement within the community, but also within the language team. Any other language is going to have the same problem at times (except for those that are a one-man show). The only difference is that you see how the sausage gets made. (Also, electronic communication is less efficient than in-person.)
Oh, there was lots of discussion over semantics, including with people who had worked on implementing async/await in other languages (e.g. the creators of those languages). The problem/solution space was a lot smaller, though.
Also, there is a notable usability difference with respect to reading the flow of lines, chaining, and parentheses. Learnability/weirdness was a syntax consideration, but not the only one.
While I do have an opinion on the syntax I could never amass the time necessary to weigh the positives/negatives of semantics issues. I reckon that's the reality for many.
Lazy vs strict semantics are completely orthogonal to the syntax [0]. (foo a b c) looks the same in a lazy language as it does in a strict language but oh boy are the results surprisingly different if you don't already know what you're in for.
0. Though some would argue that making them completely orthogonal to the point where the user can't tell the difference is a horrible design decision.
> Lazy vs strict semantics are completely orthogonal to the syntax [0]
Alternatively, the syntax in Haskell just lends itself to lazy evaluation, and requires explicit annotation syntax to be strict. Contrasted with Python, which has a syntax that makes strict evaluation an easier default to express.
If all languages can express, with some effort, the same exact semantics as any other language, then the only difference is syntax.
I'd submit the key difference in Haskell syntax is actually currying, not laziness. Haskell syntax privileges currying, and an executed function is just a curried function that has all of its parameters. By contrast, currying in languages with Algol-descended syntax always requires more rigamarole. It's possible, of course, in a lot of them, but it's harder than just a function call missing some of its arguments.
> By contrast, currying in languages with Algol-descended syntax always requires more rigamarole. It's possible, of course, in a lot of them, but it's harder than just a function call missing some of its arguments.
"What would be so hard about divided_by_2 = divide_by(, 2)"
Well, first of all, it doesn't work right now, which is a problem for what I was trying to say.
Secondly, if you want to curry things, you'll end up with
four_arg_func(1, 2)(3)(4)
at a bare minimum as a "curried application", where in Haskell it's just
four_arg_func 1 2 3 4
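For a concrete flavor of that extra rigamarole, here is a sketch of hand-rolled partial application in Rust (all names illustrative):

fn four_arg_func(a: i32, b: i32, c: i32, d: i32) -> i32 {
    a + b + c + d
}

fn main() {
    // The Haskell spelling is just `four_arg_func 1 2`; here the partial
    // application has to be built by hand out of nested closures.
    let partially_applied = |c: i32| move |d: i32| four_arg_func(1, 2, c, d);
    assert_eq!(partially_applied(3)(4), 10);
}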
Several Algol languages have other issues, such as dealing with optional arguments. In a hypothetical curryable Python (call it Python!curryable), is the result of
def f(a, b, c = 10):
return a + b + c
f(1, 2)
a function that accepts one more parameter for c, or the number 13? (Even in a dynamic language you ought to think twice before trying to return some sort of "quantum superposition" of those two things!) You'd need to add some syntax to specify the answer to that question, and now Python!curryable is getting away from just "incompletely applying the function" as it is in Haskell, and becomes a Thing you have to Do. (Probably by calling https://docs.python.org/2/library/functools.html#functools.p... .)
Haskell hacks that away by making it so functions take a fixed number of parameters, in a fixed order, and there is no such thing as default parameters to a function. A non-Haskell programmer may feel this is not a trade worth making.
Your closure at the end is what it tends to really look like. You'll note that most non-functional code doesn't really do that sort of thing very often, unless you're unlucky enough to stumble across a codebase written by someone trying to write Haskell-in-Python or something.
My point is that the "it's harder" is why syntax is the differentiator.
The idea that syntax isn't a huge differentiator for languages is insane to me. Yes, small syntactic changes like "fun" vs "fn" may not matter overall, but obviously people choose languages based on what they can easily express by typing sourcecode.
Arguably, the difference between await syntaxes is closer to "fn vs fun" than "lazy vs strict", but I think that there's a lot of context that pushes it closer to the latter (we're talking about how a fundamental control flow primitive is implemented, and this will impact future control flow primitives).