I think this is partially true, but it's more nuanced than just saying that Rust's std lib is lacking.
Compared to Go and C#, Rust's std lib is mostly lacking:
- a powerful http lib
- serialization
But Rust's approach (no runtime, no GC, no reflection) makes it very hard to provide those libraries.
Within these constraints, some high-quality solutions emerged (Tokio, Serde), but they pioneered novel approaches that would have been hard to try in the std lib.
The whole async ecosystem still has a beta vibe, giving the feeling of programming in a different language.
Procedural macros are often synonymous with slow compile times and code bloat.
But what we gained is fewer runtime errors, more efficiency, and a more robust language.
TLDR: trade-offs everywhere; it is unfair to compare to Go/C#, as they are languages with a different set of constraints.
Having some of those listed libraries in the stdlib and then not being able to change the API or the implementation is what killed modern C++ adoption (along with the language being a patchwork on top of C).
As some of the previous commenters said, when you focus your language on making it easy to write a specific type of program, you make trade-offs that can trap you in those constraints: having a runtime, a garbage collector, and a set of APIs that are ingrained in the stdlib.
Rust isn't like that. As a systems programmer I want none of them. Rust is a systems programming language. I wouldn't use Rust if it had a bloated stdlib. I am very happy about its stdlib. Being able to swap out the regex, datetime, arg-parsing and encoding libraries is a feature. I can choose memory-heavy or CPU-heavy implementations. I can optimize for code size or performance, or sometimes neither/both.
If the trade-offs were made to appease easy (web/app) development, it wouldn't be a systems programming language for me, one where I can use the same async concepts on a Linux system and an embedded MCU. Rust's design enables that; no other language's design (even C++'s) does.
If a web developer wants to use a systems programming language, the harder-to-program language is their trade-off. Type safety similar to Rust's is available in Kotlin or Swift.
Dependency bloat is indeed a problem, and easy inclusion of dependencies is a contributing factor. It can be mitigated by making dependencies and features granular. If the libraries don't provide the granularity you want, you need to change libraries, audit the source, or contribute. No free lunch.
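To make that concrete, here's a rough sketch of what granular dependencies look like in a Cargo.toml (the tokio and serde feature names shown are real examples of this pattern, but which features you actually need depends on your project):

    [dependencies]
    # Opt out of default features and pull in only what you use;
    # everything else is neither compiled nor audited nor shipped.
    tokio = { version = "1", default-features = false, features = ["rt", "net"] }
    serde = { version = "1", default-features = false, features = ["derive"] }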
Yeah I’ve encountered the benefit of this approach recently when writing WASM binaries for the web, where binary size becomes something we want to optimize for.
The de facto standard regex library (which is excellent!) brings in nearly 2 MB of additional content for correct Unicode operations and other purposes. The same author also makes regex-lite, though, which did everything we needed, with the same interface, in a much smaller package. It made it trivial to toss the functionality we needed behind a trait and choose a regex library appropriately in different portions of our stack.
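For anyone curious what that seam can look like, here's a minimal sketch (the trait and type names are ours, not the original code): the rest of the stack talks to a small trait, and each build picks whichever regex crate fits its size budget.

    // Both the full `regex` crate and `regex-lite` expose the same
    // `is_match` call, so each wrapper is a one-liner.
    trait PatternMatcher {
        fn is_match(&self, haystack: &str) -> bool;
    }

    struct FullRegex(regex::Regex);
    impl PatternMatcher for FullRegex {
        fn is_match(&self, haystack: &str) -> bool {
            self.0.is_match(haystack)
        }
    }

    struct LiteRegex(regex_lite::Regex);
    impl PatternMatcher for LiteRegex {
        fn is_match(&self, haystack: &str) -> bool {
            self.0.is_match(haystack)
        }
    }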
Indeed. However, you need to recognize that having those features in the stdlib creates a huge bias against swapping them out. How many people in Java actually use DB APIs other than JDBC? How many alternative JSON encoding libraries are out there for Go? How about async runtimes, can you replace Go's easily?
> Procedural macros are often synonymous with slow compile times and code bloat.
In theory they should reduce it, because you wouldn't write a proc macro to generate code you don't need…right? How much coding time do you save with macros compared to implementing the code manually?
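For a rough sense of the time saved: with serde, the derive is one attribute on the struct, while the manual equivalent means writing out the Serializer calls by hand for every type (this sketch follows serde's documented Serialize API; the Point struct is just an example):

    use serde::ser::{Serialize, SerializeStruct, Serializer};

    // With the proc macro, #[derive(Serialize)] on the struct is all you write.
    // Without it, every struct needs a manual impl like this:
    struct Point {
        x: i32,
        y: i32,
    }

    impl Serialize for Point {
        fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
        where
            S: Serializer,
        {
            let mut state = serializer.serialize_struct("Point", 2)?;
            state.serialize_field("x", &self.x)?;
            state.serialize_field("y", &self.y)?;
            state.end()
        }
    }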
To be fair, I think Rust has a very healthy selection of options for both, with Serde and Reqwest/Hyper being the de facto standards.
Rust has other challenges it needs to overcome, but this isn't one of them.
I'd put Go behind both C#/F# and Rust in this area. It has spartan tooling in odd areas it's expected to be strong at, like gRPC, and the serialization story in Go is quite a bit more painful and bare-bones compared to what you get out of System.Text.Json and Serde.
The difference is especially stark with regex, where Go ships with a slow engine (because it does not allow writing sufficiently fast code in this area at this moment), whereas both Rust and C# have top-of-the-line implementations that beat every other engine save for Intel Hyperscan[0].
> (because it does not allow writing sufficiently fast code in this area at this moment)
I don't think that's why. Or at least, I don't think it's straightforward to draw that conclusion yet. I don't see any reason why the lazy DFA in RE2 or the Rust regex crate couldn't be ported to Go[1] and dramatically speed things up. Indeed, it has been done[2], but it was never pushed over the finish line. My guess is it would make Go's regexp engine a fair bit more competitive in some cases. And aside from that, there are tons of literal optimizations that could still be done that don't really have much to do with Go the language.
Could a regexp engine written in Go be as fast, or nearly as fast? Probably not, because of the language. But I think "implementation quality" is a far bigger determinant in explaining the current gap.