The GPU driver for Apple Silicon is written in Rust, and the author stated it would have been much more difficult to implement in C. It isn't upstreamed yet.
"""
Normally, when you write a brand new kernel driver as complicated as this one, trying to go from simple demo apps to a full desktop with multiple apps using the GPU concurrently ends up triggering all sorts of race conditions, memory leaks, use-after-free issues, and all kinds of badness.
But all that just… didn’t happen! I only had to fix a few logic bugs and one issue in the core of the memory management code, and then everything else just worked stably! Rust is truly magical! Its safety features mean that the design of the driver is guaranteed to be thread-safe and memory-safe as long as there are no issues in the few unsafe sections. It really guides you towards not just safe but good design.
"""
> the whole thing seems kinda cute but like, shouldn't this experiment in programming language co-development be taking place somewhere other than the source tree for the world's most important piece of software?
Rustlang doesn't aim to address race conditions. Sounds to me like overly "cautious" inefficient code you can write in any language. Think using `std::shared_ptr` for everything in C++, perchance…?
The comment probably refers to data races over memory access, which are prevented by usage of `Send` and `Sync` traits, rather than more general race conditions.
I see, but that's not the point of my comment. I don't know rustlang, perhaps I could address that if someone translated the rust-specific parlance to more generally accepted terms.
I'm not sure I understand the point of your comment at all.
Rust does, successfully, guarantee the lack of data races. It also guarantees the lack of memory-unsafety resulting from race conditions in general (which to be fair largely just means "it guarantees a lack of data races", though it does also include things like "race conditions won't result in a use after free or an out of bounds memory access").
If by address it you mean "show how C/C++ does this"... they don't and this is well known.
If by address it you mean "prove that rust doesn't do what it says it does"... at that point you're inviting someone to teach you the details of how rust works down to the nitty gritty in an HN comment. You'd be much better off finding and reading the relevant materials on the internet than someone's offhand attempt at recreating them on HN.
The point of my comment is that in my experience, incompetently written, overly-cautious code tends to be more safe at the expense of maintainability and/or performance.
Sadly, I don't know rustlang, so I can't tell if the inability to describe its features in more commonly used terms is due to incompetence or the features being irrelevant to this discussion (see the title of the thread).
The thing is you aren't really asking about a "feature" of rust (as the word is used in the title of the thread), unless that feature is "the absence of data races" or "memory safety" which I think are both well defined terms† and which rust has. Rather you're asking how those features were implemented, and the answer is through a coherent design across all the different features of rust that maintains the properties.
As near as I can tell, to give you the answer you're looking for I'd have to explain the majority of rust to you: how traits work, and auto traits, and unsafe trait impls, and ownership, and the borrow checker, and (for it to make sense as a practical thing) interior mutability. Then I could point you at the standard library concepts of Send and Sync which someone mentioned above and they would actually make sense, and then I could give some examples of how everything comes together to enable memory-safe, efficient, and ergonomic threading primitives.
But this would no longer be a discussion about a rust language feature, but a tutorial on rust in general. Because to properly understand how the primitives that allow rust to build safe abstractions work, you need to understand most of rust.
Send and Sync (mentioned upthread), while being useful search terms, are some of the last things in a reasonable rust curriculum, not the first. I could quickly explain them to someone who already knew rust and hadn't used them (or threads) at all, because they're simple once you have the foundation of "how the rest of rust works". Skipping the foundation doesn't make sense.
† "Memory safety" was admittedly possibly popularized by rust, but is equivalent to "the absence of undefined behaviour" which should be understandable to any C programmer.
> The point of my comment is that in my experience, incompetently written, overly-cautious code tends to be more safe at the expense of maintainability and/or performance
Well, yes, but that's the whole value of Rust: you don't need those overly-cautious defensive constructs (at least not to prevent data races), because the language prevents data races for you automatically.
Safe Rust does. To what extent Rust interfaces that wrap kernel APIs will achieve safety for the drivers that make use of them remains to be seen. I think they will indeed do this to some degree, but I have some doubts whether the effort and overhead are worth it. IMHO all these resources would be better invested elsewhere.
That's kinda the problem: there are concepts in rust that don't have equivalents in other common languages. In this case, rust's type system models data-race safety: it prevents data races at compile time in a way you can't in C or C++. It will refuse (with a compile-time error) to give you mutable access to a value across threads unless that access is synchronized (atomics, locks, etc.).
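To make that concrete, here's a minimal sketch (a toy counter, nothing to do with the driver): the unsynchronized version is rejected at compile time, while routing the same data through `Arc<Mutex<_>>` compiles and is data-race free.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // let mut count = 0;
    // thread::spawn(|| count += 1); // rejected: a mutable borrow can't be
    //                               // captured and sent to another thread

    let count = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let count = Arc::clone(&count);
            // Compiles because Arc<Mutex<i32>> is Send + Sync.
            thread::spawn(move || *count.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*count.lock().unwrap(), 4);
}
```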
And from what I can see, rustlang mutability is also a type system construct? I.e. it assumes that all other code is Rust for the purpose of those checks?
> rustlang mutability is also a type system construct?
Yes
> I.e. it assumes that all other code is Rust for the purpose of those checks?
Not exactly; it merely assumes that you upheld the documented invariants when you wrote code to call/be-called-from other languages. For example, if I have an `extern "C" fn foo(x: &mut i32)`, then:
- x points to a properly aligned, properly allocated i32 (not to null, not into the middle of an unallocated page somewhere)
- The only way that memory will be accessed for the duration of the call to `foo` is via `x`. Which is to say that other parts of the system won't be writing to `x` or making assumptions about what value is stored in its memory until the function call returns (rust is, in principle, permitted to store some temporary value in `x`'s memory even if the code never touches x beyond being passed it, so long as when `foo` returns the memory contains what it is supposed to). Note that this implies that a pointer to the same memory isn't also being passed to rust some other way (e.g. through a static which doesn't have a locked lock around it)
- foo will be called via the standard "C" calling convention (on x86_64 Linux this for instance means that the stack pointer must be 16-byte aligned at the call, which is the type of constraint that is very easy to violate from assembly and next to impossible to violate from C code).
That it's up to the programmer to verify the invariants is why FFI code is considered "unsafe" in rust - programmer error can result in unsoundness. But if you, the programmer, are confident you have upheld the invariants you still get the guarantees about the broader system.
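For the other direction (Rust calling out), here's a minimal sketch of how that responsibility shows up in the code; `bump` is a made-up C function, and this assumes its object file is linked in:

```rust
// Declarations in an extern block are taken on trust; nothing here is
// checked against the actual C implementation.
extern "C" {
    fn bump(x: &mut i32);
}

fn main() {
    let mut value = 41;
    // SAFETY: `value` is a live, properly aligned i32 that nothing else can
    // access for the duration of the call, and `bump` really is a C function
    // with this signature and calling convention. The compiler can't verify
    // any of this, which is exactly why the call needs an `unsafe` block.
    unsafe { bump(&mut value) };
    println!("{value}");
}
```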
Rust is generally all about local reasoning. It doesn't actually care very much what the rest of the system is, so long as it called us following the agreed-upon contract. It just has a much more explicit definition of what that contract is than C.
Also we can (in 2024 Edition) say we're vouching for an FFI function as safe to call, avoiding the need for a thin safe Rust wrapper which just passes through. We do still need the unsafe keyword to introduce the FFI function name, but by marking it safe all the actual callers don't care it wasn't written in Rust.
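Roughly, that 2024-edition affordance looks like this (using libc's `abs` as a stand-in): the `extern` block itself now carries the `unsafe` keyword, but an item inside it can be marked `safe`, and then ordinary safe code can call it with no `unsafe` block at the call site.

```rust
// Rust 2024 edition syntax: the block is `unsafe extern`, and we vouch for
// individual items by marking them `safe`.
unsafe extern "C" {
    // Our promise, not the compiler's: C's abs() is sound for any i32.
    safe fn abs(input: i32) -> i32;
}

fn main() {
    println!("{}", abs(-3)); // no unsafe needed here
}
```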
This is fairly narrow: often C functions aren't actually safe. For example, they take a pointer and it must be valid, which isn't inherently safe, or they have requirements about the relative values of parameters or the state of the wider system which can't be checked by Rust; again, unsafe. But there are cases where this affordance is a nice improvement.
Also "safe" and "unsafe" have very specific meanings, not the more widely used meanings. It's not inherently dangerous to call unsafe code that is well written, it's really more a statement about who is taking responsibility for the behavior, the writer or the compiler.
I like the term "checked" and "unchecked" better but not enough to actually lobby to change them, and as a term of art they're fine.
Yes. Just like C++ "const" is a type system construct that assumes all other code is C++ (or at least cooperates with the C++ code by not going around changing random bytes).
As far as I can tell, ANY guarantee provided by ANY language is "just a language construct" that fails if we assume there is other code executing which is ill-behaved.
a data race is a specific kind of race condition; it's not rust parlance, but that specificity comes up a lot in rust discussions because that's part of the value
> since Rust is not the only language susceptible to data races.
The point is rather that it's not. The "trait send sync things" specify whether a value of the type is allowed to be moved or borrowed, respectively, across thread boundaries.
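A small illustration of those traits doing their job (a toy example, not kernel code): `Arc` is Send + Sync, so it may cross a thread boundary, while the non-atomic `Rc` is neither, and the compiler refuses to let it go.

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<Vec<i32>> is Send, so moving a clone into another thread compiles.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = thread::spawn({
        let shared = Arc::clone(&shared);
        move || shared.len()
    });
    assert_eq!(handle.join().unwrap(), 3);

    // Rc is !Send: uncommenting the spawn below fails with
    // error[E0277]: `Rc<Vec<i32>>` cannot be sent between threads safely.
    let local = Rc::new(vec![1, 2, 3]);
    println!("{}", local.len());
    // thread::spawn(move || local.len());
}
```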
I mean, reliably tracking ownership and therefore knowing that e.g. an aliased write must complete before a read is surely helpful?
It won't prevent all races, but it might help avoid mistakes in a few of 'em. And concurrency is such a pain; any such machine-checked guarantees are probably nice to have for those dealing with 'em - caveat being that I'm not such a person.
Heh. This is such a C++ thing to say: “I want to do the right thing, but then my code is slow.” I know, I used to write video games in C++. So I feel your pain.
I can only tell you: open your mind. Is Rust just a fad? The latest cool new shiny, espoused only by amateurs who don’t have a real job? Or is it something radically different? Go dig into Rust. Compile it down to assembly and see what it generates. Get frustrated by the borrow checker rules until you have the epiphany. Write some unsafe code and learn what “unsafe” really means. Form your own opinion.
> Is rust going to synchronize shared memory access for me?
Much better than that. (safe) Rust is going to complain that you can't write the unsynchronized nonsense you were probably going to write, shortcutting the step where in production everything gets corrupted and you spend six months trying to reproduce and debug your mistake...
> aren't they just annotations? proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
Spatial memory safety is easy, just check the bounds before indexing an array. Temporal memory safety is easy, just free memory only after you've finished using it, and not too early or too late. As you say, thread safety is easy.
Except we have loads of empirical evidence--from widespread failures of software--that it's not easy in practice. Especially in large codebases, remembering the remote conditions you need to uphold to maintain memory safety and thread safety can be difficult. I've written loads of code that created issues like "oops, I forgot to account for the possibility that someone might use this notification to immediately tell me to shut down."
What these annotations provide is a way to have the compiler bop you in the head when you accidentally screw something up, in the same way the compiler bops you in the head if you fucked up a type or the name of something. And my experience is that many people do go through a phase with the borrow checker where they complain about it being incorrect, only to later discover that it was correct, and the pattern they thought was safe wasn't.
Proper use of lock ordering is reasonably difficult in a large, deeply connected codebase like a kernel.
Rust has real improvements here, like this example from the Fuchsia team of enforcing lock ordering at compile time [0]. This is technically possible in C++ as well (see Alon Wolf's metaprogramming), but it's truly dark magic to do so.
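For flavour, here's a toy version of the general idea (deliberately not the Fuchsia library's API): encode the ordering in the signatures, so the second lock can only be acquired by presenting a guard that proves the first one is already held.

```rust
use std::sync::{Mutex, MutexGuard};

struct State {
    a: Mutex<u32>,
    b: Mutex<u32>,
}

impl State {
    fn lock_a(&self) -> MutexGuard<'_, u32> {
        self.a.lock().unwrap()
    }

    // The `_proof` parameter is the whole trick: you can't call this without
    // already holding A, so acquiring B before A won't compile (as long as B
    // is only ever taken through this method).
    fn lock_b<'s>(&'s self, _proof: &MutexGuard<'s, u32>) -> MutexGuard<'s, u32> {
        self.b.lock().unwrap()
    }
}

fn main() {
    let s = State { a: Mutex::new(1), b: Mutex::new(2) };
    let ga = s.lock_a();
    let gb = s.lock_b(&ga); // fine: A, then B
    println!("{} {}", *ga, *gb);
}
```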
The lifetimes it implements are the now-unused lexical lifetimes of early Rust. Modern Rust uses non-lexical lifetimes, which accept a larger set of valid programs, and the work on Polonius will allow still more legal programs that lexical and non-lexical lifetimes can't. Additionally, the "borrow checker" they implement is RefCell, which isn't the Rust borrow checker at all but an escape hatch to do limited single-threaded borrow checking at runtime (and which the library won't notice if you use in multiple threads, whereas Rust won't let you).
Given how the committee works and the direction they insist on taking, C++ will never ever become a safe language.
Oh, and to add on: in C++ there's no borrow checker and no language guarantees that exploit UB the way Rust does with ownership. What does it matter if two parts of a single-threaded program have simultaneous mutable references to something? It's not a safety or correctness issue, as there's no risk of triggering UB and no ill-formed program that could be generated that way. IMHO a RefCell equivalent in C++ is utterly pointless.
Bit of a fun fact, but as one of the linked articles states the C++ committee doesn't seem to be a fan of stateful metaprogramming so its status is somewhat unclear. From Core Working Group issue 2118:
> Defining a friend function in a template, then referencing that function later provides a means of capturing and retrieving metaprogramming state. This technique is arcane and should be made ill-formed.
> Notes from the May, 2015 meeting:
> CWG agreed that such techniques should be ill-formed, although the mechanism for prohibiting them is as yet undetermined.
"Just" annotations... that are automatically added (in the vast majority of cases) and enforced by the compiler.
> proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
Yes, like how avoiding type confusion/OOB/use-after-free/etc. "just require[s] a little bit of discipline and consistency"?
The point of offloading these kinds of things onto the compiler/language is precisely so that you have something watching your back if/when your discipline and consistency slips, especially when dealing with larger/more complex systems/teams. Most of us are only human, after all.
> how well does it all hold up when you have teamwork and everything isn't strictly adherent to one specific philosophy.
Again, part of the point is that Send/Sync are virtually always handled by the compiler, so teamwork and philosophy generally aren't in the picture in the first place. Consider it an extension of your "regular" strong static type system checks (e.g., can't pass object of type A to a function that expects an unrelated object of type B) to cross-thread concerns.
> aren't they just annotations? proper use of mutexes and lock ordering aren't that hard, they just require a little bit of discipline and consistency.
No, they are not. You also don't need mutex ordering as much since Mutexes in Rust are a container type. You can only get ahold of the inside value as a reference when calling the lock method.
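For anyone who hasn't seen the API, a tiny example of what "container type" means here (plain std `Mutex`, nothing driver-specific): the data lives inside the mutex and is only reachable through the guard returned by `lock()`, which releases the lock when it's dropped.

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0u32);
    {
        // The only path to the data is through the guard, so "forgot to take
        // the lock before touching it" isn't expressible.
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // guard dropped here => lock released
    println!("{}", counter.lock().unwrap());
}
```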
> You also don't need mutex ordering as much since Mutexes in Rust are a container type. You can only get ahold of the inside value as a reference when calling the lock method.
Mutex as a container has no bearing on lock ordering problems (deadlock).
> What does rust have to do with thread safety and race conditions? Is rust going to synchronize shared memory access for me?
Rust’s strict ownership model enforces more correct handling of data that is shared or sent across threads.
> Speaking seriously, they surely meant data races, right? If so, what's preventing me from using C++ atomics to achieve the same thing?
C++ is not used in the Linux kernel.
You can write safe code in C++ or C if everything is attended to carefully and no mistakes are made by you or future maintainers who modify code. The benefit of Rust is that the compiler enforces it at a language level so you don’t have to rely on everyone touching the code avoiding mistakes or the disallowed behavior.
Rust's design eliminates data races completely. It also makes it much easier to write thread safe code from the start. Race conditions are possible but generally less of a thing compared to C++ (at least that's what I think).
Nothing is preventing you from writing correct C++ code. Rust is strictly less powerful (in terms of possible programs) than C++. The problem with C++ is that the easiest way to do anything is often the wrong way to do it. You might not even realize you are sharing a variable across threads and that it needs to be atomic.
> What does rust have to do with thread safety and race conditions? Is rust going to synchronize shared memory access for me?
Well, pretty close to that, actually! Rust will statically prevent you from accessing the same data from different threads concurrently without using a lock or atomic.
> what's preventing me from using C++ atomics to achieve the same thing
Say you have a C++ class with a `frobFoo()` method. Is it okay to call `frobFoo` from multiple threads at once? Maybe, maybe not -- if it's not documented (or if you don't trust the documentation), you will have to read the entire implementation to answer that.
Now take a Rust type with a `fn frob_foo(&mut self)` method. Is `frob_foo` okay to call from multiple threads at once? No, and the language will automatically make it impossible to do so.
If we had `&self` instead of `&mut self`, then it might be okay, you can discover whether it's okay by pure local reasoning (looking at the traits implemented by Foo, not the implementation), and if it's not then the language will again automatically prevent you from doing so (and also prevent the function from doing anything that would make it unsafe).
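A minimal sketch of that point (a hypothetical `Foo`, just to show the signatures doing the work): the `&mut self` method can never be called concurrently, and the `&self` method may be called from several threads only because `Foo` happens to be Sync.

```rust
use std::sync::Mutex;
use std::thread;

struct Foo {
    count: u64,
    shared: Mutex<u64>,
}

impl Foo {
    // Exclusive access: the borrow checker never allows two simultaneous calls.
    fn frob_foo(&mut self) {
        self.count += 1;
    }

    // Shared access: callable concurrently, but only because Foo is Sync
    // (the state it mutates sits behind a Mutex).
    fn frob_foo_shared(&self) {
        *self.shared.lock().unwrap() += 1;
    }
}

fn main() {
    let mut foo = Foo { count: 0, shared: Mutex::new(0) };
    foo.frob_foo();
    thread::scope(|s| {
        // Sharing &foo across threads compiles because Foo: Sync; if `shared`
        // were a Cell<u64> instead, these spawns would be compile errors.
        s.spawn(|| foo.frob_foo_shared());
        s.spawn(|| foo.frob_foo_shared());
        // s.spawn(|| foo.frob_foo()); // error: cannot borrow `foo` as mutable
    });
    assert_eq!(foo.count, 1);
    assert_eq!(*foo.shared.lock().unwrap(), 2);
}
```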
i don't really care for mindless appeals to authority. make your own arguments and defend them or don't bother.
this gpu driver looks pretty cool though. looks like there's much more to the rust compatibility layer in the asahi tree and it is pretty cool that they were able to ship so quickly. i'd be curious how kernel rust compares to user space rust with respect to bloat. (user rust is pretty bad in that regard, imo)
Mindless appeal to authority? I don't think that's how the fallacy really works. It's pretty much the authority that seems to disagree with your sentiment, that is if we can agree that Torvalds still knows what he's doing. Him not sharing your skepticism is a valid argument. The point being that instead of giving weight to our distant feelings, maybe we could just pause and be more curious as to why someone with much closer involvement would not share them. Why should we care more about the opinions of randos on hn?
To be fair, giving some credibility to the highly competent BDFL of Linux, who has listened to a bunch of highly competent maintainers, isn't mindless.
Unless you have a specific falsifiable claim that is being challenged or defended, it's not at all a fallacy to assume expert opinions are implicitly correct. It's just wisdom and good sense, even if it's not useful to the debate you want to have.
Not every mention of an authority's opinion needs to be interpreted as an "appeal to authority". In this case I think they're just trying to give you perspective, not use Torvalds' opinion as words from god.
It’s very intellectually lazy of you not to be curious about why the creator and decades long, knowledgeable guardian of Linux has the opposite opinion as you, all because you read the Wikipedia about logical fallacies one time.
Also the guy that created "the world's most important piece of software", as you put it. The authority on the exact thing you raised concern about is the single most relevant authority one can cite.
> Surely it's better to cite the authority's reasons as to why they think this way than just to cite the authority itself
Why? When disagreeing with an authority, you want the audience to pay closer attention to your arguments as you demonstrate why the authority has it wrong. When you're just sharing distant and likely under-informed opinions with no arguments to back them up, it's not up to other people to do homework to show you why you're wrong. Appeal to authority is a legit call to a fallacy only when people give next to no consideration to your arguments, focusing instead on the opposing party's stature.
So rather than pointing to experts who're in the best position to know, you'd prefer bad rephrasing and armchair experts? Do you 'do your own research' too?
""" Normally, when you write a brand new kernel driver as complicated as this one, trying to go from simple demo apps to a full desktop with multiple apps using the GPU concurrently ends up triggering all sorts of race conditions, memory leaks, use-after-free issues, and all kinds of badness.
But all that just… didn’t happen! I only had to fix a few logic bugs and one issue in the core of the memory management code, and then everything else just worked stably! Rust is truly magical! Its safety features mean that the design of the driver is guaranteed to be thread-safe and memory-safe as long as there are no issues in the few unsafe sections. It really guides you towards not just safe but good design. """
https://asahilinux.org/2022/11/tales-of-the-m1-gpu/
> the whole thing seems kinda cute but like, shouldn't this experiment in programming language co-development be taking place somewhere other than the source tree for the world's most important piece of software?
Torvalds seems to disagree with you.