As always, improving accessibility for humans makes automation more effective. If the humans need to remember a PhD's worth of source code/documentation to contribute effectively, your codebase stinks.
People at my company have started writing docs specifically for claude. They're quite useful for me too, but it's kinda disappointing they never wrote these docs for their colleagues.
I recently saw this with the Logseq API: the published API was an auto-generated stub. So I tried grepping the source code for the function and found detailed documentation written for Claude. So I guess one benefit of all of this is that it's making people actually document things, and maybe plan a little bit before implementing.
As someone who has written many docs, it's because 99% won't read it (rightfully so if it's verbose). You can turn that doc into a skill in a repo and Claude will read it every time it's needed.
The LLM hype train has me reflecting on what a spoiled existence working in a ‘proper’ language provides though…
React devs, JS devs, front-end devs working on large sites and frameworks might be triggering tens of files to be brought into context. What an OCaml dev can bring in through a 5 line union type can look very different in less token-efficient and terse languages.
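To put a number on it: the kind of 5-line sum type the parent is talking about (sketched here as a Rust enum rather than an OCaml variant, with made-up names, since the shape is the same) carries a whole domain model in a handful of tokens:

    // Illustrative only: a tiny sum type that fully enumerates the states
    // a job can be in, instead of classes and constants spread across files.
    enum JobState {
        Queued,
        Running { worker_id: u32 },
        Finished { exit_code: i32 },
        Failed { error: String },
    }

In a sprawling JS/React codebase the equivalent information is often implicit across reducers, prop shapes and string constants, so the agent has to pull far more into context to learn the same thing.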
We'll be much closer to a greenhouse earth than a glacial earth if we get that 4°C warming, so the distinction is more academic than practical in most contexts. What's a century here or there in geologic time?
The Cambrian and Eocene reached around +14°C compared to today[1]. Two of the warmest periods in Earth's history, granted. But life thrived. Governments, private property ownership, civilization: not as battle-tested.
Our bodies won't be able to handle a temperature regime that hot overall. The factor to research is the wet-bulb temperature effect. Basically, our bodies are like sports cars that need constant cooling, and keeping the body cool is a challenge under high humidity with temperatures near body temperature.
UNIVERSITY PARK, Pa. — As climate change nudges the global temperature higher, there is rising interest in the maximum environmental conditions like heat and humidity to which humans can adapt. New Penn State research found that in humid climates, that temperature may be lower than previously thought.
It has been widely believed that a 35°C wet-bulb temperature (equal to 95°F at 100% humidity or 115°F at 50% humidity) was the maximum a human could endure before they could no longer adequately regulate their body temperature, which would potentially cause heat stroke or death over a prolonged exposure.
Wet-bulb temperature is read by a thermometer with a wet wick over its bulb and is affected by humidity and air movement. It represents a humid temperature at which the air is saturated and holds as much moisture as it can in the form of water vapor; a person’s sweat will not evaporate at that skin temperature.
But in their new study, the researchers found that the actual maximum wet-bulb temperature is lower — about 31°C wet-bulb or 87°F at 100% humidity — even for young, healthy subjects. The temperature for older populations, who are more vulnerable to heat, is likely even lower.
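For anyone who wants to play with the numbers: at 100% humidity the wet-bulb temperature equals the air temperature (nothing can evaporate), which is why 35°C wet-bulb corresponds to 95°F there. For other humidities there is an empirical fit by Stull (2011), quoted here from memory and only valid roughly for 5-99% RH and -20 to 50°C, so treat it as a sketch rather than gospel:

    Tw ≈ T*atan(0.151977*sqrt(RH + 8.313659)) + atan(T + RH)
         - atan(RH - 1.676331) + 0.00391838*RH^1.5*atan(0.023101*RH)
         - 4.686035

with T in °C, RH in percent, and atan in radians. Plugging in T = 46°C and RH = 50% gives roughly 36°C wet-bulb, which lines up with the 115°F-at-50%-humidity figure above.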
It's a problem anywhere that temperatures reach that high. Higher latitudes have colder climates. Hence, not a problem. If it becomes a problem, people move toward the poles. No longer a problem.
Earth would have to experience more than +35 to +50°C of warming for the poles to be uninhabitable due to heat.
Yes, polar regions are reliably colder than equatorial regions. Lytton, BC hit the temperature you cite for one day, on Tuesday, June 29, 2021. That's a sign of warming, and we should expect more warm days than in the past at any given latitude. But it is not evidence against the general case that polar regions have colder climates than equatorial regions.
This explains something about why I've never understood people casually mentioning 40°C+ temps; 34°C in Hong Kong with no breeze is about as much as I can handle.
No reason why not. It would push human habitable zones into the high mid-latitudes and subpolar regions though. 55–65° N/S would be closest to comfortable temperatures. So, northern Canada and Russia, Greenland, Antarctica.
The mad rush to get there would likely extract a heavy toll.
The main problem is agriculture. If rain patterns get severely disrupted in most of the world's current breadbaskets, it takes time to increase production in areas that may now have a more favourable climate. During that time, lots of people would starve.
Rain patterns and extreme weather events are the things to really worry about. Temperature changes alone can be mostly dealt with by planting different crops.
No doubt the transition period would likely involve more death than most catastrophes in history. In part because there are simply more people. Available sunlight is also less nearer the poles, which already affects agriculture in places like Greenland. Crops would shift. We'd be more dependent on energy and supplemental light for certain crops. Adjustment would be difficult. But quite a bit of land would still be habitable.
Interesting. Paying close attention to geopolitics lately, it kind of seems like we're already in a slow-motion mad rush to own these places. Remember when Trump almost invaded Greenland?
From what I read recently (and I don't remember where it was), the current thinking is that it wasn't oxygen levels or temperatures, but the lack of predators that let dragonflies grow that big. A big dragonfly is much slower and an easier target. So unless you get rid of birds, you won't have giant dragonflies.
You need high oxygen content in the air though. Insect-style circulatory systems aren't efficient enough to get oxygen to the cells unless the air has a super high concentration of oxygen to begin with.
Basically like how, when people can't breathe well, you put them on oxygen to keep them alive, only in that case the bottleneck is getting oxygen into the blood rather than into the body.
Assume that there will be a mass extinction event somewhere in the next 1000 years - meteor, WW3, whatever. If you'd then play a timelapse of earth, you'd see it on fire, cooling down, oceans forming, greenery forming, continental drift, north/south poles icing over and clearing, snowball (?) earth a few times, then in a short blip the rise and fall of humanity, then uh. more of the same. Geological (and universal) time scales are mind blowing.
In the US, a vehicle with an outstanding recall technically isn't roadworthy, though consumer level enforcement of this is non-existent in practice. It's mostly enforced on dealers, who can't sell a vehicle with active recalls. The only way I can imagine it mattering to a consumer is if they sold it.
Doesn't being legally non-roadworthy only apply to NHTSA safety recalls, while there are other types of recalls for noncompliance, as well as voluntary manufacturer recalls?
Having worked (on the vehicle registration system) for a state agency that is a combination "department of motor vehicles" plus "highway department", I can see a case being made that since your vehicle does not meet NHTSA/DOT standards, it isn't roadworthy and the best you could get would be a SALVAGE title. Which would require expensive inspections if you try to sell it or register it.
In Europe, car manufacturers have to show that their cars meet safety standards. In the US, car manufacturers only have to say/certify that their cars meet safety standards. This is the huge sticking point for Trump's attempt to force EU countries to accept cars that have not been proved to meet safety standards (it is portrayed as "unfair/uneven trade barriers" in the US media).
Not disagreeing with you in general, but as another datapoint, in MN a vehicle only needs a theft inspection (no charge) to clear a Salvage title. DMV explicitly states that it's not a safety inspection. They really only care that you didn't repair it with stolen parts. IME you show them receipts for parts you bought, and the inspection is over in less than 5 minutes.
How would they do that? I'm sure you can buy some sort of aerospace component that has the signal integrity to do radios, but it sounds expensive. There's a reason these kinds of components (e.g. muxes) aren't usually physical disconnections.
Automotive power relays are at least a thing, but they're expensive consumables that have significant power draw.
In either case they would have had to add the components at design time and do the physical validation/testing, not ship it as a software update.
You're right, I didn't read that properly. Okay, then that actually makes sense if that's a (relatively) deterministic way to work out whether openclaw is used.
If I had to guess, this is a continuation of Anthropic's ongoing war to make third party tool usage go through per-token billing, specifically against a tool called Hermes-agent [0]. You can easily imagine why Anthropic might want to silently re-bill a customer violating their silly ToS restrictions and how an LLM told to implement the feature might arrive at this "solution".
Someone once coined a related term, "disassembler rage". It's the idea that every mistake looks amateur when examined closely enough. It comes from people sitting in a disassembler and raging at the high-level programmers who had the gall to e.g. use conditionals instead of a switch statement inside a function call a hundred frames deep.
We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.
Thing is, these tools are so critical that even one error may cause systems to be compromised; rewriting them should never be taken lightly.
(Actually, ideally there would be formal verification tools that can accurately test for all of the issues found in this review/audit, like the very timing-specific path changes, but that's a codebase on its own.)
Is formal verification able to find most of these issues? I'm no expert on formal analysis, but I suspect most systems are not able to handle many of these errors. It seems more likely that the system will assume the file doesn't change between two syscalls, which seems to be the majority of issues here. Modeling that possibility alone makes the formal model much harder to build.
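To make that concrete, the failure mode being described is the classic check-then-use race. A minimal sketch (in Rust, but not actual uutils code; the function and the size check are invented):

    use std::fs;

    fn remove_if_small(path: &str) -> std::io::Result<()> {
        // Step 1: inspect the file.
        let meta = fs::metadata(path)?;
        if meta.len() < 1024 {
            // Step 2: act on it. Between the metadata() call above and this
            // remove_file() call, another process can replace the path with a
            // different file, so what gets deleted is not the file whose size
            // we checked. A formal model that assumes both syscalls see "the
            // same file" will never surface that race.
            fs::remove_file(path)?;
        }
        Ok(())
    }

Once the model has to allow arbitrary interleavings from other processes between every pair of syscalls, the state space (and the modelling effort) explodes, which is what I mean by "much harder to build".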
When I read the article I came away with the impression that shipping bugs this severe in a rewrite of utils used by hundreds of millions of people daily (hourly?) isn’t ok. I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.
Cloudflare crashed a chunk of the internet with a Rust app a month or so ago, deploying a bad config file iirc.
Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.
I think that legitimate real-world issues in Rust code should be talked about more often. Right now the language enjoys a reputation that is essentially misleading marketing. It isn't possible to create a programming language that doesn't allow bugs to happen (even with formal verification you can still prove correctness based on a wrong set of assumptions). This weird, kind of religious belief that Rust leads to magically, completely bug-free programs needs to be countered and brought in touch with reality IMO.
Nobody believes Rust programs are bug-free, though. Rust never promised that. It doesn't even promise memory safety; it only promises memory safety if you restrict yourself to safe APIs, which simply isn't always possible.
Or... the NSA wants you to think that the NSA wants you to think that the NSA believes that Rust is a memory-safe language, so that everyone who distrusts the NSA keeps using C.
Is it possible you’ve misunderstood what Rust promises?
> It isn't possible to create a programing language that doesn't allow bugs to happen
Yes, that’s true. No one doubts this. Except you seem to think that Rust promises no bugs at all? I don’t know where you got this impression from, but it is incorrect.
Rust promises that certain kinds of bugs like use-after-free are much, much less likely. It eliminates some kinds of bugs, not all bugs altogether. It’s possible that you’ve read the claim on kinds of bugs, and misinterpreted it as all bugs.
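To be concrete about what "certain kinds of bugs" means, here's a toy example (nothing to do with the article's code). The commented-out line is the classic use-after-free shape, and rustc refuses to compile the program if you put it back in:

    fn main() {
        let data = vec![1, 2, 3];
        let first = &data[0]; // borrow into the vec's buffer
        // drop(data);        // freeing the buffer while `first` still points
        //                    // into it is rejected at compile time (E0505)
        println!("first element: {first}");
    }

The C equivalent (free the buffer, keep dereferencing the pointer) compiles without complaint; that's the class of bug the claim is about, not logic bugs in general.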
On the other hand, there are too many less-experienced Rust fans who do claim that "Rust" promises this, that any project that does not use Rust is doomed, and that any of the existing decades-old software projects should be rewritten in Rust to decrease the chances that they have bugs.
What is described in TFA is not surprising at all, because it is exactly what has been predicted about this and other similar projects.
Anyone who desires to rewrite any old project in Rust should certainly do it. It will be at least a good learning experience, and whenever an ancient project is rewritten from scratch, current knowledge should enable the creation of something better than the original.
Nonetheless, the rewriters should never claim that what they have just produced currently has fewer bugs than the original, because neither they nor Rust can guarantee this; only long experience with using the rewritten application can.
Such rewritten software packages should remain for years as optional alternatives to the originals. Any aggressive push to substitute the originals immediately is just stupid (and yes, I have seen people trying to promote this).
Moreover, someone who proposes substituting something as basic as coreutils must first present to the world the results of a huge set of correctness tests and performance benchmarks comparing the old package with the new one, before the substitution idea is even put forward.
Where are these rust fans? Are they in the room with us right now?
You’ve constructed a strawman with no basis in reality.
You know what actual Rust fans sound like? They sound like Matthias Endler, who wrote the article we're discussing. Matthias hosts a popular podcast, Rust in Production, where he talks with people about the sharp edges and difficulties they experienced using Rust.
A true Rust advocate like him writes articles titled “Bugs Rust Won’t Catch”.
> Such rewritten software packages should remain for years as optional alternatives to the originals.
> must first present to the world the results of a huge set of correctness tests and performance benchmarks
Yeah, you can see those in https://github.com/uutils/coreutils. This project has also worked with GNU coreutils maintainers to add more tests over time. Check out the graph where the total number of tests increases over time.
> before the substitution idea is even put forward
I partly agree. But notice that these CVEs come from a thorough security audit paid for by Canonical. Canonical is paying for it because they have a plan to substitute in the immediate future.
Without a plan to substitute it’s hard to advocate for funding. Without funding it’s hard to find and fix these issues. With these issues unfixed it’s hard to plan to substitute.
Those Rust fans exist on almost all Internet forums that I have seen, including on HN.
I do not care about what they say, so I have not made a list with links to what they have posted. But even just on HN, I have certainly seen well over a hundred such postings, more likely several hundred, even on threads that had no close relationship with Rust, so there was no reason to discuss Rust at all.
Since Sun's shameless promotion of Java with false claims during the last years of the previous century, no other programming language has been the subject of such a hype campaign.
I think that this is sad. Rust has introduced a few valid innovations and it is a decent programming language. Despite this, whenever someone starts mentioning Rust, my first reaction is to distrust whatever is said, until proven otherwise, because I have seen far too many ridiculous claims about Rust.
Could you find one such person on this thread? Someone making ridiculous claims about what Rust offers.
I’ll tell you what I think you’ve seen - there are hundreds of threads where you’ve seen people claim they’ve seen this everywhere. That gives you the impression that it is universal.
The comment you linked says something specific about a specific kind of bug being eliminated - memory safety bugs. And they’re not making a claim, they’re repeating the evidence gathered from the Android codebase. So that’s a fact, memory safety bugs truly did not appear in the Rust parts of Android.
The comment you linked is not claiming Rust code is bug-free. That’s a strawman I’ve seen many, many times. Haters will claim that this happens all the time, but all I see are examples of the haters claiming this. You had to go back 5 months and still couldn’t find anything similar to the strawman.
The only language I've ever seen users make that claim for is Haskell. Rust users have never made the claim, but I've seen it a lot from advocates who appear to find "hello world" a complex, hard-to-write program.
I understand the (narrow) hard guarantees that Rust gives. But there are people in the wider community who think that the guarantees are much, much broader. This is a pretty widespread misconception that should be rectified.
I have never seen a comment claiming that Rust leads to magically completely bug free programs.
Could you please link one? Because I doubt it exists, or if it does, it is probably on some obscure website or downvoted to oblivion.
On the other hand, I see comments in every Rust thread that are basically restatements of yours attacking a strawman.
The reality: Rust does not prevent all bugs. In fact, it doesn't even prevent any bugs. What it actually does is make a certain particularly common and dangerous class of bugs much more difficult to write.
The "elimination of bugs" is not synonymous with "the elimination of all bugs". The way you're presenting it, any single bug in a rewrite would be grounds to consider the the entire endeavor a failure, which is a ridiculous standard.
There are plenty of strong arguments to be made against rewriting something in Rust, but this is a pretty weak one.
Because the bugs were caused by programmer error, not anything inherent to Rust. It was more notable due to Cloudflare being a critical dependency for half the internet, but that particular issue could've happened in any language.
This kind of melodramatic reaction to rust code is fatiguing, honestly. Rust does not bill itself as some programming panacea or as a bug free language, and neither do any of the people I know using it. That's a strawman that just won't go away.
Rust applies constraints regarding memory use and that nearly eliminates a class of bugs, provided safe usage. And that's compelling to enough people that it warrants migration from other languages that don't focus on memory safety. Bugs introduced during a rewrite aren't notable. It happens, they get fixed, life moves on.
> caused by programmer error, not anything inherent to Rust
Your argument does not work as praise for Rust, because the bugs in any program are caused by programmer errors, except in the very rare cases where there are bugs in the compiler toolchain, which are caused by the errors of other programmers.
The bugs in a C or C++ program are also caused by programmer errors; they are not inherent to C/C++. It is rather trivial to write C/C++ carefully, in order to make any out-of-bounds access, numeric overflow, use-after-free, etc. impossible.
The problem is that many programmers are careless, especially when they might be pressed by tight time schedules, so they make some of these mistakes. For the mass production of software, it is good to use more strict programming languages, including Rust, where the compiler catches as many errors as possible, instead of relying on better programmers.
The Cloudflare bug was the equivalent of an uncaught exception caused by a malformed config file. There's no recovery from a malformed config file; the software couldn't possibly have done its job. What's salient is that they were using an alternative to exceptions, because people were told exceptions were error-prone and that using this thing instead would make it easier to write bug-free code. But don't do the equivalent of not catching them!
And then, it turned out to not really be any better than exceptions.
Most Rust evangelism is like this. "In Rust you do X and this makes your code have fewer bugs!" Well, no, it doesn't. Manually propagating errors still makes the program crash, requires more typing, and doesn't emit a stack trace.
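For readers who didn't follow the incident, the shape of the thing being discussed is roughly this; a simplified sketch, not Cloudflare's actual code, with the config type and the limit invented:

    use std::fs;

    struct Config { features: Vec<String> }

    fn parse(text: &str) -> Result<Config, String> {
        let features: Vec<String> = text.lines().map(|l| l.to_string()).collect();
        if features.len() > 200 {
            return Err(format!("too many features: {}", features.len()));
        }
        Ok(Config { features })
    }

    fn main() {
        let text = fs::read_to_string("features.conf").unwrap(); // panic if missing
        let cfg = parse(&text).unwrap(); // panic on an oversized/malformed file,
                                         // which is morally an uncaught exception
        println!("loaded {} features", cfg.features.len());
    }

Whether you call it an unhandled Err or an uncaught exception, the process dies either way; the only real question is whether anyone thought about the failure path at all.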
That was why I brought it up. I wasn't trying to be snarky or haughty. Thank you for filling in the gaps, I should have done that instead of the 1-liner.
I didn't downvote, but I feel the last two points show a lack of nuance. It's saying "Rust doesn't prevent 100% of the bugs, like all other programming languages", while failing to acknowledge that if a programming language prevents entire classes of bugs, it's a very significant improvement.
Nobody disputes that Rust is one of the programming languages that prevent several classes of frequent bugs, which is a valuable feature when compared with C/C++, even if that is a very low bar.
What many do not accept among the claims of the Rust fans is that rewriting a mature and very big codebase from another language into Rust is likely to reduce the number of bugs of that codebase.
For some buggier codebases, a rewrite in Rust or any other safer language may indeed help, but I agree with the opinion expressed by many other people that in most cases a rewrite from scratch is much more likely to have bugs, regardless of the programming language it is written in.
If someone has the time to do it, a rewrite is useful in most cases, but it should be expected that it will take a lot of time after the project is complete before it has as few bugs as mature projects.
As other people have mentioned, the goal of uutils was not "let's reduce bugs in coreutils by rewriting it in Rust", it was "it's 2013 and here's a pre-1.0 language that looks neat and claims to be a credible replacement for C, let's test that hypothesis by porting coreutils, giving us an excuse to learn and play with a new language in the process". It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.
Whether or not it was wise for Canonical to attempt to then take that codebase and uplift it into Ubuntu is a different story altogether, but one that has no bearing on the motivations of the people behind the original port itself.
You can see an alternative approach with the authors of sudo-rs. Rather than porting all of userspace to Rust for fun, they identified a single component of a particularly security-critical nature (sudo), and then further justified their rewrite by removing legacy features, thereby producing an overall simpler tool with less surface area to attack in the first place. It was not "we're going to rewrite sudo in Rust so it has fewer bugs", it was "we're going to rewrite sudo with the goal of having fewer bugs, and as one subcomponent of that, we're going to use Rust". And of course sudo-rs has had fresh bugs of its own, as any rewrite will. But the mere existence of bugs does not invalidate their hypothesis, which is that a conscientious rewrite of a tool can result in fewer bugs overall.
But are the current uutils developers the same as the 2013 developers? At least based on GitHub's graphs, that's not the case (it looks fairly bimodal to me), and so it wouldn't be unreasonable to treat the 2013-era project differently to the 2020-era project. So judging the 2020-era project for its current and ongoing failures does not seem unreasonable.
Similarly, sudo-rs dropping "legacy" features leaves a bad taste in my mouth. Multiple privilege-escalation tools already exist (doas being the first that comes to mind), and doing something better without claiming the name "sudo" (instead providing a compat mode, a la podman for docker) would seem to me a better long-term path than causing more breakage (and, as shown by uutils, breakage in "core" utils can very easily lead to security issues).
I personally find uutils' lack of care concerning because I've been writing (as a very low-priority side project) a network utility in Rust, and while it isn't aiming to be a drop-in rewrite of anything, I would much rather not attract the same drama.
doas and sudo-rs occupy different niches, specifically doas aims for extreme minimalism and deliberately sacrifices even more compatibility than sudo-rs, which represents a middle ground.
> It seems worth emphasizing that its creation was neither ideologically motivated nor part of some nefarious GPL-erasure scheme, it was just some people hacking on a codebase for fun.
What the motivation and intent was in 2013 is not necessarily relevant to what the motivation and intent is now.
It's even less relevant to what the effect is: the goal may be to replace $FOO software with $BAR software, but as things stand right now $FOO is "GPL" and $BAR is "MIT".
So, yeah, I don't want them to succeed at their primary goal, because that replaces pro-user software with pro-business software.
No, once you have an MIT-licensed codebase without a copyright assignment scheme, you no longer have the freedom to relicense it at will. You could attempt to have a mixed-license codebase, which is supported by the GPL, and specify that all new contributions must accept the GPL, but this is tantamount to an incompatible fork of the project from the perspective of any downstream users, and anyone who insists on contributing code under the GPL has the freedom to perform this fork themselves.
This is simply false. You can accept GPL contributions and clearly indicate the names of the contributors as required by MIT. There is no "incompatible fork".
No, GPL and MIT have significantly different compliance requirements. You cannot suddenly begin shipping code with stricter compliance requirements to downstream users without potentially exposing them to legal liability.
Most consumer platforms only allow up to 128/256GB of RAM. If you want more, you likely need a data centre platform. This is again a mismatch between where companies think consumers are and where they actually are.
I think e.g. AMD missed the boat with the 9950x3d2 by limiting the memory controller. If it were possible to pair it with 1TB of consumer DDR5 RAM, that would be something to write home about.
Some people, including myself, loathe Nvidia with the fiery burning passion of a thousand suns, and will put up with whatever nonsense is necessary to run without them.
Can you provide a different source on that? The govcloud page you've linked says "operated by US citizens", not "built by US citizens". I'd be pretty surprised if they did the latter. Standard practice as I understand it is to simply run the standard software in a separate environment. A recent ProPublica report [0] pointed out that Microsoft was hiring citizens to escort the actual engineers who aren't citizens, for example.