
I feel like it's worthless to keep up with Zig until they reach 1.0.

That thing, right here, is probably going to be rewritten 5 times and what not.

If you are actively using Zig (for some reason?), I guess it's great news, but for the Grand Majority of the devs in here, it's like an announcement that it's raining in Kuldîga...

So m'yeah. I was following Zig for a while, but I just don't think I am going to see a 1.0 release in my lifetime.


IME Zig's breaking changes are quite manageable for a lot of application types, since most of the breakage these days happens in the stdlib and not in the language. And if you just want to read and write files, the high-level file I/O interfaces are nearly identical; they just moved to a different namespace and now require a std.Io pointer to be passed in.

And tbh, I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements. When updating a 3rd-party dependency to a new major version it's also expected that the code needs to be fixed (except in Zig those breaking changes are in the minor versions, but for 0.x that's also expected).

I actually hope that even after 1.x, Zig will have a strategy to keep the stdlib lean by aggressively removing deprecated interfaces (maybe via separate stdlib interface versions, e.g. `const std = @import("std/v1");`); those versions could be slim compatibility wrappers around a single core stdlib implementation.


> I take a 'living' language any day over a language that's ossified because of strict backward compatibility requirements

Maybe you would, but >95% of serious projects wouldn't. The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace, where low-level languages are commonly used, codebases typically last for over 30 years), and while such changes are manageable early on, they become less so over time.

You say "strict" as if it were out of some kind of stubborn principle, when in fact backward compatibility is one of the things people who write "serious" software want most. Backward compatibility is so popular that at some point it's hard to find any feature that is in high-enough demand to justify breaking it. Even in established languages there's always a group of people who want something badly enough that they don't mind breaking compatibility for it, but they're almost always a rather small minority. Furthermore, a good record of preserving compatibility in the past makes a language more attractive even for greenfield projects written by people who care about backward compatibility, who, in "serious" software, make up the majority. When you pick a language for such a project, the expectation of how the language will evolve over the next 20 years is a major concern on day one (a startup might not care, but most such software is not written by startups).


> The typical lifetime of a codebase intended for a lasting application is over 15 or 20 years (in industrial control or aerospace).

Either those applications are actively maintained, or they aren't. Part of the active maintenance is deciding whether to upgrade to a new compiler toolchain version (when in doubt: "never change a running system"); old compiler toolchains won't suddenly stop working.

FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial, depending on the complexity of the code base (especially when there's UB lurking in the code, or the code depends on specific compiler bugs being present); changing anything in a project setup always comes with risks attached.


> Part of the active maintenance is to decide whether to upgrade to a new compiler toolchain version

Of course, but you want to make that as easy as you can. Compatibility is never binary (which is why I hate semantic versioning), but you should strive for the greatest compatibility for the greatest portion of users.

> FWIW, trying to build a 20 or 30 year old C or C++ application in a modern compiler also isn't exactly trivial

I know that well (especially for C++; in C the situation is somewhat different), and the backward compatibility of C++ compilers leaves much to be desired.


You could fix versions, and probably should. However, willful disregard of prior interfaces encourages developers' code to follow suit.

It's not like Clojure or Common Lisp, where decades-old software still runs, mostly unmodified, the same today, any changes mainly being for code written for a different environment or even compiler implementation. This is largely because they take breaking user code way more seriously. A lot of code written in these languages seems to have a similar timelessness too. Software can be "done".


I would also add that Rust manages this very well. Editions let you do breaking changes without actually breaking any code, since any package (crate) needs to specify the edition it uses. So when in 30 years you're writing code in Rust 2055, you can still import a crate that hasn't been updated since 2015 :)

Unfortunately editions don't allow breaking changes in the standard library, because Rust code written in different editions must be allowed to interoperate freely even within a single build. The resulting constraint is roughly similar to that of never ever breaking ABI in C++.

> The resulting constraint is roughly similar to that of never ever breaking ABI in C++.

No, not even remotely. ABI-stability in C++ means that C++ is stuck with suboptimal implementations of stdlib functions, whereas Rust only stabilizes the exposed interface without stabilizing implementation details.

> Unfortunately editions don't allow breaking changes in the standard library

Surprisingly, this isn't true in practice either. The only thing that Rust needs to guarantee here is that once a specific symbol is exported from the stdlib, that symbol needs to be exported forever. But this still gives an immense amount of flexibility. For example, a new edition could "remove" a deprecated function by completely disallowing any use of a given symbol, while still allowing code on an older edition to access that symbol. Likewise, it's possible to "swap out" a deprecated item for a new item by atomically moving the deprecated item to a new namespace and making the existing item an alias to that new location, then in the new edition you can change the alias to point to the new item instead while leaving the old item accessible (people are exploring this possibility for making non-poisoning mutexes the default in the next edition).
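The "atomic move + alias" maneuver described above can be illustrated with ordinary modules (a toy sketch; `old_home`, `new_home`, and `Widget` are invented names, and real editions do this with compiler support rather than user-level re-exports):

```rust
// Sketch of "move the item, leave an alias at the old path".
mod stdlib_v1 {
    // The item's original namespace now only re-exports it...
    pub mod old_home {
        pub use super::new_home::Widget;
    }
    // ...after the item was moved to its new namespace.
    pub mod new_home {
        #[derive(Debug, PartialEq)]
        pub struct Widget(pub u32);
    }
}

fn main() {
    // Code written against the old path keeps compiling...
    let a = stdlib_v1::old_home::Widget(1);
    // ...while new code uses the new path; it is the same type.
    let b = stdlib_v1::new_home::Widget(1);
    assert_eq!(a, b);
    println!("same type via both paths");
}
```

A new edition could then flip which path is the "real" one and which is the alias, without breaking code pinned to the older edition.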


Only because Rust is a source-only language for distribution.

One business domain that Rust currently doesn't have an answer for is selling commercial SDKs with binary libraries, which is exactly the kind of customer that gets pissed off when C and C++ compilers break ABIs.

Microsoft mentions this in the adoption issues they are having with Rust (see talks from Victor Ciura), and while they can work around this with DLLs and COM/WinRT, it isn't optimal; after all, Rust's safety gets reduced to the OS ABI for DLLs and COM.


I'm not expecting to convince you of this position, but I find it to be a feature, not a bug, that Rust is inherently hostile to companies whose business models rely on tossing closed-source proprietary blobs over the wall. I'm fairly certain that Andrew Kelley would say the same thing about Zig. Give me the source or GTFO.

In the end it is a matter of which industries the Rust community sees as relevant to gain adoption, and which ones the community is happy that Rust will never take off.

Do you know one industry that likes very much tossing closed-source proprietary blobs over the wall?

Game studios, and everyone that works in the games industry providing tooling for AAA studios.


Tying yourself in a knot around ABI usually isn't worth it. You pick up to two: performance, ABI stability or adaptability.

And you can still have it internally, if your deps have sources, or by compiling artifacts that only allow a single Rust version (additional rules may apply).

There is work on Rust ABI (crabi), but there isn't a huge push for it.


> Game studios, and everyone that works in the games industry providing tooling for AAA studios.

You know what else is common in the games industry? C# and NDAs.

C# means that game development is no longer a C/C++ monoculture, and if someone can make their engine or middleware usable with C# through an API shim, Native AOT, or some other integration, there are similar paths forward for using Rust, Zig, or whatever else.

NDAs mean that making source available isn't as much of a concern. Quite a bit of the modern game development stack is actually source-available, especially when you're talking about game engines.


Do you know what C# has and Rust doesn't? A binary distribution package for libraries with a defined ABI.

100% agreed.

> I'm fairly certain that Andrew Kelley would say the same thing about Zig. Give me the source or GTFO.

Thus it will never even be considered outside the tech bubble.


Rust allows binary libraries with a C ABI. Having safety within any given module is still a big deal, and it's hard to guarantee safety across dynamic modules when the code that's actually loaded can be overridden by a separately-built version.
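A minimal sketch of that approach (the `rust_add` name is made up): export the function with an unmangled symbol and the C calling convention, and any C-ABI consumer can link against the compiled artifact:

```rust
// `extern "C"` fixes the calling convention; `#[no_mangle]` keeps the
// symbol name predictable, so C code (or a separately compiled Rust
// binary) can link against it via the stable C ABI.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // The function is, of course, also callable from Rust itself.
    println!("{}", rust_add(2, 3)); // prints 5
}
```

The trade-off is exactly the one described in the comment: only C-representable types cross the boundary, so Rust's richer guarantees stop at the interface.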

Compiler vendors are free to choose what ABI stability their C++ implementations provide.

The ISO C++ standard is silent on what the ABI actually looks like; the ABI not being broken in most C and C++ compilers is a consequence of those compilers' customers not being happy about breakages.


Yeah, but ABI stability isn't really just magic dust you sprinkle on your language/compiler to make it more stable.

It's a straitjacket that has application in a few select cases.

Things ABI prevents in C++:

- better shared_ptr

- adding UTF8 to regex

- int128_t standardisation

- make most of <cstring> constexpr

And so on: https://cor3ntin.github.io/posts/abi/

I get you might have particular criteria on this. But it's a feature that comes with huge, massive downsides.


> Compiler vendors are free to choose what ABI stability their C++ implementations provide.

In theory. In practice the standards committee, consisting of compiler vendors and some of their users, shape the standard, and thus the standard just so happens to conspire to avoid ABI breakages.

This is in part why Google bowed out of C++ standardization years ago.


I know, but still go try to push for ABI breaks on Android.

Sure, but considering that Zig is a modern C alternative, one should not and cannot afford to forget that C has been successful also because it stayed small and consistent for so long.

The entire C, C ABI and standard lib specs, combined, are probably fewer words than the Promise spec from ECMAScript 262.

A small language that stays consistent and predictable lets developers evolve it in best practices, patterns, design choices, tooling. C has achieved all that.

No evolving language has anywhere near that freedom.

I don't want an ever evolving Zig too for what is worth. And I like Zig.

I don't think any developer can resolve all of the design tensions a programming language has; you can't make it ergonomic on its own.

But a small, modern, stable C would still be welcome, besides Odin.


I'm pretty sure the point of aggressively evolving now is so that it basically won't have to evolve at some point in the future?

> The entire C, C ABI and standard lib specs, combined, are probably less words than the Promise spec from ECMAScript 262.

Not if you look at C23, include all the compiler extensions devs keep thinking are part of ISO C, and count one "C ABI" per existing OS written in C.


C23 did not increase the spec a lot. The core language spec is also very small. Extensions are also often relatively simple, but most importantly, one does not have to use them. The ABI is usually not per OS but per architecture.

Besides Odin? Does Odin give you most of this?

I really love Zig the language, but I'm distancing myself from the stdlib. I dislike the breakage, but I also started questioning the quality of the code recently. I was working on an alternative I/O framework for Zig over the last few months, and I was finding many problems that eventually led to me trying not to depend on the stdlib at all. Even in the code announced here, the context switching assembly is wrong: it doesn't mark all necessary registers as clobbered. I mentioned this several times to the guys. The fact that it's still unchanged just shows me a lack of testing.

It sounds like Zig would benefit from someone like you on the inside, as a member or active contributor, reviewing and participating in the development of the standard library.

Zig is one of my favorite new languages, I really like the cross-compiler too. I'm not a regular user yet but I'm hopeful for its long-term success as a language and ecosystem. It's still early days, beta/dev level instability is expected, and even fundamental changes in design. I think community input and feedback can be particularly valuable at this stage.


I've realized that Zig is a language in which people can write programs in vastly different styles. And these are not really compatible. This is not unlike C++, for example. I learned Zig in my own bubble, just using my previous programming knowledge, not relying on existing Zig code much. If I saw Zig's own code at the early stages, I'd probably not pick the language, purely on the style of huge inlined switches and nested conditions all over the place.

I don't think the core team accepts LLM-generated code in the std.

I'm confused. The register clobbering is an issue in the compiler, not in the stdlib implementation, right? Or are you saying the stdlib has inline assembly in these IO implementations somewhere? I couldn't find it and I can't think why you'd need it.

If it's a compiler frontend -> LLVM interaction bug, I think you are commenting in the wrong spot: it should go in a separate issue, not in the PR about the io_uring backend. Also, interaction bugs where a compiler frontend triggers a bug in LLVM aren't uncommon, since Rust was the first major frontend other than clang to exercise those code paths. Indeed, the (your?) fix in LLVM for this issue mentions Rust is impacted too.

I agree with the higher level points about stability and I don’t like Zig not being a safe language in this day and age, but I think your criticism about quality is a bit harsh if your source of this complaint is that they haven’t put a workaround for an LLVM bug.


There is the one issue which I fixed in LLVM, but it should be fixed in Zig as well, because the clobber list in Zig is typed and gives you the false impression that adding x30 there is valid. But there is also another issue: x18 is a general purpose register outside of Darwin and Windows and needs to be marked as clobbered on other systems. And yes, look at the linked changes: the stdlib has inline assembly for the context switching.

To each his own, but while I can certainly understand the hesitancy of an architect to pick Zig for a project that is projected to hit 100k+ lines of code, I really think you're missing out. There is a business case to using Zig today.

True in general, but in the cloud especially, saving server resources can make a significant impact on the bottom line. There are not nearly enough performance engineers who understand how to take inefficient systems and make improvements to move towards theoretical maximum efficiency. When the system is written in an inefficient language like Python or Node, fundamentally, you have no choice but to start moving the hotpath behind FFI and drop down to a systems language. At that point your choices are basically C, C++, Rust, or Zig. Of the four choices, Zig today is already the simplest to learn, with fewer footguns, easier to work with, easier to read and write, and easier to test. And you're not going to write 100k LOC of optimized hotpath code. And when you understand the cost savings involved in reducing your compute needs, sometimes by more than 90%, by getting the hotpath optimized, you understand that there is very much indeed a business case for learning Zig today.


As a counter argument to this. I was able to replicate the subset of zig that I wanted, using c23. And in the end I have absolute stability unless I break things to “improve”.

Personally, it is a huge pain to rewrite things and update dependencies because the code I am depending on is moving out from under me. I also found this to be a big problem in Rust.

And another huge upside is you have access to best of everything. As an example, I am heavily using fuzz testing and I can very easily use honggfuzz which is the best fuzzer according to all research I could find, and also according to my experience so far.

From this perspective, it doesn’t make sense to use zig over c for professional work. If I am writing a lot of code then I don’t want to rewrite it. If am writing a very small amount of code with no dependencies, then it doesn’t matter what I use and this is the only case where I think zig might make sense.


To add another point to this: whatever people write online isn't correct all the time. I was thinking zig compiles super fast, but found that c with a good build system and well-split header/implementation files is basically instant to compile. You can use ThinLTO with a cache to have instant recompilation for release builds.

Real example: I had to wait some seconds to compile and run benchmarks for a library and it re-compiles instantly (<100ms) with c.

Zig does have a single compilation unit, and that might have some advantages, but in practice it is a hard disadvantage. And I never saw anyone point this out online.

For people like me building something from scratch, I would really recommend learning c with the Modern C book and trying to do it with c.


Also, I was thinking that breakage doesn't matter that much, but my opinion changed very quickly around 10k lines of code. At some point I really stopped caring about every piece and just wanted to forget about it and move on.

>with fewer footguns, easier to work with, easier to read and write, and easier to test.

With the exception of fewer footguns, for which Rust definitely takes the cake with Zig in second, I'd say Zig is in last place on all of these. This really screams that you aren't aware of the C/C++ testing/tooling ecosystem.

I say this as a fan of Zig, by the way.


> ...in the cloud especially, saving server resources can make a significant impact on the bottom line. There are not nearly enough performance engineers who understand how to take inefficient systems and make improvements to move towards theoretical maximum efficiency.

That's a very good point, actually. However...

> with fewer footguns

..the Crab People[0] would definitely quibble with that particular claim of yours.

[0] https://en.wikipedia.org/wiki/Crab_People of course.


I would quibble with all of the claims, other than easier to learn.

I really see no advantage for Zig over Rust after you get past those first two weeks.


Coming from Go, I'm really disappointed in Rust compiler times. I realize they're comparable to C++, and you can structure your crates to minimize compile times, but I don't care. I want instant compilation.

Zig is trying to get me instant compilation and I see that as a huge advantage for Zig (even past the first 2 weeks).

I'll probably stick with Rust as my "low level language" due to its safety, type system, maturity, library ecosystem, and career opportunities.

But I remain jealous of Zig's willingness to do extreme things to make compilation faster.


On any Go production projects I worked on or near, the incremental compile time was slower than C++ and Rust.

A full build was definitely much faster, but not as useful. Especially when using a build system with shared networked caching (Bazel for example).

Yes those projects were a bloated mess, as it always seems to be.


Re: slower incremental compile times - not my experience, but interesting data point. I'll keep a look out for this.

The key with c++ is to keep coding while compiling. Otherwise... yeah, you're blocked.

The key with C++ is to learn how to use the build system, make use of binary libraries, and if one can afford to use the very latest compiler versions, modules.

And avoid header-only libraries; C++ isn't a scripting language.


Eh, I'd say that Rust has a different set of footguns. You're correct that you won't run into use-after-free footguns, but Rust doesn't protect you from memory leaks, unsafe code is still unsafe, and the borrow checker and Rust's language complexity are their own kind of footguns.
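For reference, the classic example for the leak point is an `Rc` reference cycle: leaking is considered safe in Rust, so this compiles and never frees the nodes (a minimal sketch; `Node` and `make_cycle` are made-up names):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can point at another node, making a reference cycle possible.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

// Build the cycle a -> b -> a and return `a`.
fn make_cycle() -> Rc<Node> {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b);
    a
}

fn main() {
    let a = make_cycle();
    // `a` is kept alive by `b`, and `b` by `a`: the strong count stays
    // at 2, so when both handles go out of scope neither is ever freed.
    println!("strong count: {}", Rc::strong_count(&a)); // prints 2
}
```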

But I digress. I was thinking of Zig in comparison to C when I wrote that. I don't have a problem conceding that point, but I still believe the overall argument is correct to point to Zig specifically in the case of writing code to optimize a hotpath behind FFI; it is much easier to get to more optimal code and cross-compilation is easier to boot (i.e. to support Darwin/AppleSilicon for dev laptops, and both Linux/x64 and Linux/arm64 for cloud servers).


> but Rust doesn't protect you from memory leaks

In theory no. In practice it really does.

> unsafe code is still unsafe

Ok, but most rust code is not unsafe while all zig code is unsafe.

> and the borrow checker and Rust's language complexity are their own kind of footguns

Please elaborate. They are something to learn, but I don't see the footgun. A footgun is a surprising defect that's pointed at your foot and easy to trigger (i.e. doing something wrong and your foot blows off). I can't think how the borrow checker causes that when it's the exact opposite: you can't ever create a footgun without doing unsafe, because it won't even compile.

> but I still believe the overall argument is correct to point to Zig specifically in the case of writing code to optimize a hotpath behind FFI; it is much easier to get to more optimal code and cross-compilation is easier to boot (i.e. to support Darwin/AppleSilicon for dev laptops, and both Linux/x64 and Linux/arm64 for cloud servers).

I agree cross compilation with zig is significantly easier, but Rust isn't that hard, especially with the cross-rs crate making it significantly simpler. On performance, Rust is going to be better: zig makes you choose between safety and performance, and even in unsafe mode there are various things that cause better codegen. For example, zig follows the C path of manual noalias annotations, which has been proven to be non-scalable and difficult to make operational. Rust does this for all variables automatically, because aliasing isn't allowed in the language.
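That last point can be sketched (the `saxpy` helper is just for illustration; the actual codegen benefit depends on the optimizer): in Rust, `&mut` is unique by construction, so the compiler can apply noalias automatically where C would need a manual `restrict`:

```rust
// Because `dst` and `src` are separate references and `&mut` is
// guaranteed unique, the compiler knows they cannot overlap and is
// free to vectorize without runtime overlap checks. The C equivalent
// would need `float *restrict dst, const float *restrict src`.
fn saxpy(dst: &mut [f32], src: &[f32], a: f32) {
    for (d, s) in dst.iter_mut().zip(src) {
        *d += a * *s;
    }
}

fn main() {
    let mut dst = vec![1.0_f32, 2.0];
    let src = vec![10.0_f32, 20.0];
    saxpy(&mut dst, &src, 2.0);
    println!("{:?}", dst); // [21.0, 42.0]
}
```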


> a footgun is a surprising defect that's pointed at your foot and easy to trigger

Close, but not the way I think of a footgun. A footgun is code that was written in a naive way, looks correct, submitted, and you find out after submitting it that it was erroneous. Good design makes it easy for people to do the right thing and difficult to do the wrong thing.

In Rust it is extremely easy to hit the borrow checker including for code which is otherwise safe and which you know is safe. You walk on eggshells around the borrow checker hoping that it won't fire and shoot you in the foot and force you to rewrite. It is not a runtime footgun, it is a devtime footgun.

Which, to be fair, is sometimes desired. When you have a 1m+ LOC codebase and dozens of junior engineers working on it and requirements for memory safety and low latency requirements. Fair enough trade-off in that case.

But in Zig, you can just call defer on a deinit function. Complexity is the eternal enemy, and this is just a much simpler approach. The price of that simplicity is that you need to behave like an adult, which if the codebase (hotpath optimization) is <1k LOC I think is eminently reasonable.


> A footgun is code that was written in a naive way, looks correct, submitted, and you find out after submitting it that it was erroneous.

You're contradicting yourself a bit here, I think. Erroneous code generally won't compile in Rust, whereas in Zig it will happily do so. Also, Zig has plenty of footguns (e.g. forgetting to call defer on a deinit, or misusing noalias, or having an out-of-bounds access result in memory corruption). IMHO the Zig footgun story with respect to UB is largely unchanged relative to C/C++. It's mildly better, but it's closer to C/C++ than to a safe language, and UB is a huge ass footgun in any moderate-complexity codebase.


> IMHO the zig footgun story with respect to UB behavior is largely unchanged relative to C/C++

The only major UB from C that zig doesn’t address is use after free afaik. How is that largely unchanged???

Just having an actual strong type system w/o the “billion dollar mistake” is a large change.


Depends how you compile it. If you're compiling ReleaseFast/ReleaseSmall, it's not very different from C (modulo, as you said, some language features that make it less likely you do it):

* Double free

* Out of bounds array access

* Dereferencing null pointers

* Misaligned pointer dereference

* Accessing uninitialized memory

* Signed integer overflow

* Accessing a union field for which the active tag is something else.


wow, what a list! all of these are statically analyzable using a slightly hacked zig compiler and a library!

https://github.com/ityonemo/clr

(Btw: you can't null pointer dereference in zig without using the navigation operator which will panic on null; you can't misalign a pointer unless you use @alignCast which will also create a panic)


Neat. Why isn’t this in the main compiler / will it be? I’m happy to retract my statement if this becomes actually how zig compiles but it’s not a serious thing as it’s more a PoC of what’s possible today and may break later

It will never be in the main compiler, since it was written by Claude. I think that's ok. The general concept is sound and won't break (modulo names of instructions changing etc). In fact it will get better. With the new io, concurrency checks will be possible

But also, there is no reason why it should have to be in the main compiler. I've architected it as a dlload plugin. It's even crazier! The output is a zig program which you must compile and run to get the final result.


I can also analyse C and C++ code for such issues, while continuing to use a mature language ecosystem.

If you can statically analyze c for memory safety, why did Pizlo bother building Fil-C?

Where did I write that static analysis was enough on its own?

Can you phrase that as a direct answer to my question? Trying to learn something here. Appreciate it!

In the sentence "I can also analyse C and C++ code for such issues, while continuing to use a mature language ecosystem", it is implied that there are many tools that perform analysis of C and C++ code.

Some of those tools are static, others are dynamic, some require a special build, others are hybrid, others exist on all modern IDEs.

So it can be a mix of lint, clang-tidy, VS analysis, CLion, ASan, UBSan, hardened runtimes, contracts (Frama-C), PVS-Studio, PurifyPlus, Insure++, ...


This is pretty close to saying Rust is not very different than C because it has the unsafe keyword. That is, either an ignorant (of Zig) or disingenuous statement.

To me the zig position is akin to saying that because ASan, TSan and UBSan exist, c++ is safe and you're just running optimized for performance.

If you believe I mischaracterized zig, please enlighten me as to what I got wrong specifically, rather than attacking me ad hominem.


I’m not going to write a detailed response to something that’s extremely close to what an LLM responds to “what UB does zig have?”

Arguing about whether certain static analysis should be opt in or opt out is just extremely uninteresting. It’s not like folks are auditing the unsafe blocks in their dependencies anyways.

If you want to talk about actual type system issues that’s more interesting.


So the Fermat defense? “I have the proof but the margin is too small”.

The proof is in the pudding. TigerBeetle, despite having a quite opinionated style, was still almost hit by UB and basically got lucky it wasn't a worse failure. By contrast, even though unsafe isn't audited in all dependencies, it does in practice seem to make UB extremely unlikely. And there's ongoing work in the ecosystem to create safe abstractions that move existing unsafe into well-tested and centralized code.


It is worse, because ASan, TSan and UBSan already have several years of production experience, in a mature ecosystem.

There is no point throwing it all away to get back to the starting line.


As an example to this, I was using polars in rust as a dependency in a fairly large project.

It has issues like panicking or segfaulting when using some data types (arrow array types) in the wrong place.

It is extremely difficult to write an arrow implementation in Rust.

It is much easier to do it in zig or c (without strict aliasing).

I also had the same experience with glommio in Rust.

Also, the binary that we produce compiles in several minutes and is above 30 MB. This is an insane amount of bloat. And unfortunately I don't think there is another feasible way of doing this kind of work in Rust, because it is so hard to write proper low level code.

I don't agree with noalias being bad, personally. I found it is the only way to do it. It is much harder to write code with pointers that implicitly alias, which c has by default and rust has as the only option. And you don't ever need to use noalias except in some rare places.

To make it clear, I mean the huge footgun in rust is producing a ton of bloat and subpar code, because you can't write much yourself and you end up depending on too many libraries.


> To make it clear, I mean the huge footgun in rust is producing a ton of bloat and subpar code because you can’t write much and you end up depending on too many libraries

Nothing is forcing you to do that other than it’s easy to add dependencies. I don’t see how zig is much different


I find it easier to write all the code I want in zig or c since it is easy to write low level code.

Hashmap is a good example to this. I was able to fairly easily port some parts of hashbrown to c but I’m pretty sure I can’t write that code in Rust in a reasonable amount of time.


Not the GP, but I've noticed that if you don't anticipate how you might need to mutate or share state in the future, you can have a "footgun" that forces large-scale code changes for relatively small feature-level changes, because of Rust's strictness. It's not a footgun in the sense that your code does what you don't expect; it's a footgun in that your maintenance burden and ability to change code are not what you expect (and it's easy to trigger). I'm sure if you are really expert with Rust, you see it coming and don't use patterns that will cause waves of changes (but if you're expert at any language you avoid its footguns).

That's not a footgun, and it happens in any language. I have not observed Rust code to be more prone to it; certainly less so than C++, for various reasons around build times and code organization.

It's possible to do memory safety analysis for zig. I think you could pretty easily add a noalias checker on top of this:

https://github.com/ityonemo/clr


> Of the four choices, Zig today is already simplest to learn,

Yes, with an almost complete lack of documentation and learning materials, it is definitely the easiest language to learn.


For reference, here's where Zig's documentation lives:

https://ziglang.org/learn/

I remember when learning Zig, the documentation for the language itself was extensive, complete, and easily greppable due to being all on one page.

The standard library was a lot less intuitive, but I suspect that has more to do with the amount of churn it's still going through.

The build system also needs more extensive documentation in the same way that the stdlib does, but it has a guide that got me reasonably far with what came out of the box.


People do underestimate how nice it is when the language reference or framework/tool documentation is all on one web page: I can easily print it to PDF and push it to my iPad for reading.

For what it's worth, Bun is written in Zig (https://bun.sh/). The language isn't exactly in an early stage.

Oh but it is.

Oh but it isn’t.

They just did a massive refactor that broke nearly 100% of existing code. Only an early language can do that.

What version are you referring to? I've had zero issues updating my Zig stuff to 0.15.2 with frontier-LLM assistance.

I’ll use Ghostty as an example because that’s the only software I use that I know is written in Zig. It’s also a moderately complex project not a toy project.

Its Zig 0.15 effort started in August and was only complete in October (see first PR at https://github.com/ghostty-org/ghostty/pull/8372). And many issues were encountered and solved along the way. And of course during all of this they also encountered an issue in Zig itself: https://github.com/ziglang/zig/issues/24627


The huge change that will be passing Io objects around like you have with Allocator.

0.16 changes things around dramatically.

Docs on this?

Here[1]. This mentions async, but it affects every single use of IO functions.

[1] https://kristoff.it/blog/zig-new-async-io/


Ah. Yeah.

My main Zig project will need over 1,000 edits to get it up there :O I've already had Claude spec out the changes required. I'll just have it or Codex or whatever fork itself into one agent per file and bang on it (for the stuff I can't regex myself) ;)

But the IO thing is frankly a good idea. I/O dependency injection, when I've used it in the past, has made testing quite a bit simpler (such as routing any I/O stream into a string to assert on) and the code much easier to reason about. The extra argument is a bit annoying, but that's the price of purity and it's worth it.
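The pattern described above can be sketched in a few lines. This is a hypothetical Python example purely to illustrate I/O dependency injection (it is not Zig's actual `std.Io` API; `greet` is a made-up function): the writer is passed in as an argument, so a test can route output into an in-memory string and assert on it.

```python
import io
import sys

def greet(out, name):
    # The caller injects the output stream instead of the function
    # printing directly -- the same idea as threading an Io value around.
    out.write(f"hello, {name}!\n")

# Production code passes the real stream:
greet(sys.stdout, "world")

# A test routes the same call into a string buffer and asserts on it:
buf = io.StringIO()
greet(buf, "zig")
assert buf.getvalue() == "hello, zig!\n"
```

The only cost is the extra parameter on every call, which is exactly the trade-off described above.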


Also anything that reads environment variables.

We (ZML) have been back to following Zig master since std.Io was introduced. It's not that bad, tbh. Also, most changes really feel like actual improvements to the language on a day-to-day basis.

No shame in waiting for 1.0. Especially if you want to read docs rather than the code itself.

Actually, reading the code instead of the documentation is one of the nice parts of Zig.

It is such a readable language that I found it easier learning the API than most languages.

Zig has this on its side. Reading the unit tests directly in the code gives, most of the time, a good example too.


People might be triggered by the word "worthless" but I totally get your point.

I hear great things about the language but only have so many hours in the day and so many usable neurons to spend in that day. Someday it would be nice to play with it.

The easiest way to embrace any new language is to have a compelling reason to use it. I've not hit that point yet.


I wouldn't have expected graphic sex slang to be acceptable as an HN user name.

This would translate roughly as "eats pussy", where "brouter" is a verb reserved for animals grazing on grass, implying a hefty bush.


You're assuming that 1.0 will bring about stability. For all we know version 1.0 could make way for version 2.0 soon after.

Though perhaps the Zig developers have promised this will not happen.


Please stop posting 0-information-content complaints.

> but for the Grand Majority of the devs in here, it's like an announcement that it's raining in Kuldîga...

Lol, I’ll borrow this.


I mean, you're right that so many of us still can't use the language yet, but I think we can still applaud progress towards major features while it's less than stable.

Kudos Zig contributors!


I'm so sorry to hear about your diagnosis, whatever it is :-P

This is a very strange take. Isn't every pre-1.0 piece of software like that? Heck, there are some that claim to be 1.0 but then take another 50 iterations, up to 1.51, before reaching what should have been 1.0 in the first place.

I am not understanding the point here: do people expect them to ship 1.0 before they know it is good or ready?

No wonder software quality has deteriorated rapidly in the past 20 years.


Pretty typical jaded HN comment there, chief. "This language's churn is more than I prefer -- why would anyone use it?" If you're not interested, just downvote and move on. Wondering out loud why anyone would actively use it ("for some reasons?") is a lame waste of bytes.

That comment you're complaining about is a useful signal for me who only watches zig from the far periphery. I feel like I'm getting good mileage out of it, just like I do from other, different ones. I'm glad it's in the mix.

An AI will be able to handle 95% of the breaking changes when updating your code.

No it won't.

LLMs are good at dealing with things they've seen before, not at novel things.

When novel things arise, you will either have to burn a shedload of tokens on "reasoning", hand-hold them (doing advanced find-and-replace in this example, where you have to be so precise and detailed about your language that it might be quicker to just make the changes yourself), or wait until the next trained model that has seen the new pattern emerges. Quite often, all of the above.


Apologies, but your information is either outdated from lack of experience with the latest frontier models, or you don't realize that 99.9% of the work you do is not novel in any capacity. Have you only used Copilot, or something? Because that's what it sounds like: the performance of the latest models (Opus 4.6 max-effort, gpt-5.3-Codex) is nothing short of astonishing.

Real-world example: Claude isn't familiar with the latest Zig, so I had it write a language guide for 0.15.2 (here: https://gist.github.com/pmarreck/44d95e869036027f9edf332ce9a...) which pointed out all the differences, and that's been extremely helpful in having me not even have to touch a line of code to do the updates.

On top of that, for any Zig dependency I pull in which is written to an earlier version, I have forked it and applied these updates correctly (or it has, under my guidance, really), 100% of the time.

On the off chance that guide is not in its context, it has seen the expected warning or error message, googled it, and made the correct fix 100% of the time. Which is exactly what a human would do.

Let's play the falsifiability game: Find me a real-world example of an upgrade to a newer API from the just-previous-to-that API that a modern LLM will fail to do correctly. Your choice of beer or coffee awaits you if you provide a link to it.


I’ve been making a project in Zig 0.16 with Claude as a learning experiment. It’s a fairly non-trivial project (a BitTorrent-compliant p2p downloader for model weights on top of Hugging Face Xet); whenever it doesn’t know the syntax or makes errors, it literally reads the standard library code to understand and fix it. The project works, too!

yeah, I’ve noticed it do that too, it literally goes to the source and for all intents and purposes, “figures it out”

that’s why it sounds to me like these people commenting haven’t even used these models yet.


> so I had it write a language guide for 0.15.2

Tbh, while it's impressive that it appears to work, that guide looks very tailored to the Zig stdlib subset used in your projects, and also looks like a lot more work than just fixing the errors manually ;) Even for a large code base, which would amortise the cost of the guide, I still wouldn't trust the automatic update without carefully reviewing each change.


Just have to wait a few months until a new model with updated pretrained knowledge comes out.

Or spend those few months doing the update :-)

Eh, I've had good luck with porting codebases to newer versions of Bevy by pointing CC to the migration guide, and that is harder to test than a language migration (as much of the changed behaviour would have been at runtime).

I still wouldn't want to deal with that much churn in my language, but I fully believe an agent could handle the majority of, if not all of, the migration between versions.



The kind of "pay me premium, I am giving you non-premium" type of fraud?

It just proves that there is not much of an improvement if they can get away with it, doesn't it? But hey, I am sure the benchmarks all say otherwise.


That is a good point. I wonder which model pricing was actually billed.

seems like marketing to me...

What's the link between your ex-coworker's bad code and vi again?

Dude, that's just a reason to scream at the clouds... Literally...

Why zulip instead of the good ol' IRC?

It has modern features. It stores message history. It has a fairly unique feature of letting you create ad-hoc "topics" (that go under a "Channel") that make it easier to manage the flood of conversation.

Channels + topics >>> just channels

Last I checked, IRC wasn't really mobile-friendly.

1: IRC loses all messages sent to you while you are not connected.

Not for years. If that is still the case for you, ask your server hosts to update to a version that supports IRCv3.


Know of a good IRCv3 client for Linux/Web/Android? And what are some good v3 servers these days, besides Libera?

I've set up Ergo and KiwiIRC before, and it seemed pretty cool. I even enabled the Jitsi plugin in KiwiIRC, so it was like an indie alternative to Slack or Teams. I turned it off years ago, but if I were to do that again, I might try with UnrealIRCd.

I am with you on that. HN is becoming a "14-year-old edgy mini-tech" Facebook.

"Microsoft bad, Linux good" kind of comments are all over the place. There are no in-depth discussions about projects anymore. Add the people linking their blogs only to sell you their services for an imaginary problem, and you get HN 2026.

Maybe it's time to find another tech medium. If you know one, I would be glad to hear about it.


Lobste.rs mostly are what HN used to be, with less focus on startup culture and more focus on hacking.


And with idiotic blocking of people based solely on the browser they use. They are small-minded people over at lobste.rs.


The Windows filesystem isn't slow per se; it's a "death by a thousand cuts" type of slowness.

https://github.com/Microsoft/WSL/issues/873#issuecomment-425...


Microsoft is a massive corporation with so many people, business units, and departments.

A comment like yours is just like saying: "I know a buggy open-source program, so why would I trust that other open-source project? The open-source community has burned all possible goodwill."


Except that a company, no matter how heterogeneous, has an overarching organization, whereas the open-source community doesn't.

There is no CEO of open source, there are no open-source shareholders, there are no open-source quarterly earnings reports, there are no open-source P&G policies (with or without stack ranking), and so on.

