> Comparing the comprehensive Win32 API reference against the incidentally documented Native APIs, it's clear which one Microsoft would prefer you use. The native API is treated as an implementation detail, whilst core parts of Windows' backwards compatibility strategy are implemented in the Windows subsystem.
> A general-purpose programming language and toolchain for maintaining robust, optimal, and reusable software.
Zig clearly doesn't actually care that much about building robust and reusable software if they're going to forgo Microsoft's decades-long backwards compatibility functionality for the dubious gains of using bare-metal APIs.
They dismiss the very real issues caused by their approach as problems for others to solve (Wine, antiviruses, users, even Microsoft). That's such a weird level of hubris.
I think the only place where avoiding win32 is desirable is to write drivers, but zig already has support for some level of bare-metal development and I'm sure a package can provide shims to all ntdll utilities for that use-case.
I think it's pretty clear that they're doing it because it's a more fun challenge. As a low-level developer myself, I agree that using the lowest-level API possible is fun, especially if it's poorly documented and you have to try to preemptively mitigate breakage! But this is no mentality to have when you're writing a language ecosystem...
The Zig maintainers clearly think that keeping up with the undocumented native API is less headache than using the documented but notoriously inelegant win32 API.
This might very well be a good idea. Microsoft is not going to change a vital piece of their OS just on a whim. I would wager that even if they wanted to, they would not be able to do so that easily. A large organization maintaining a large software with a billion users just does not move that fast.
Take a random Linux binary which does anything non-trivial (has a GUI, does system monitoring, etc.), try running it on a different distribution from 3 years earlier without a packaging system, and tell me how it goes.
What confuses me the most is that the kernel actually goes to great lengths not to break userspace, but if you rely on anything other than the kernel, stuff breaks all the time, and distributions never update a released version to a newer kernel; they just patch old kernels for years. So why do the kernel developers even bother?
Zig is proposing the opposite problem: future versions of Windows won't run even trivial Zig programs from today.
I can tell you that old Linux binaries run just fine on current distros.
Looking at how many times you repeated your misunderstanding in this thread it's clear that, not only do you not understand the solution, you don't understand the problem either.
> > Won't this get flagged by anti-virus scanners as suspicious?
> Unfortunately, yes. We consider this a problem for the anti-virus scanners to solve.
I don't think the anti-virus scanners consider Zig important enough, or even know about it. They will not be the ones experiencing problems. Having executables quarantined and similar problems will fall on Zig developers and users of their software. That seems like a major drawback for using Zig.
Yup. This sentiment expresses quite clearly how Zig has no significant understanding or interest in being a language used for widely distributed applications, like video games.
There's no way I can ship a binary that flags the scanners. This wouldn't be the first language I've avoided because it has this unfortunate behaviour.
And expecting virus scanner developers to relax their rules for Zig is a bit arrogant. Some virus scanners started flagging software built with Nim simply because Nim became popular with virus authors as a means to thwart scanners!
Yeah, I had this problem when shipping go binaries on Windows. Antivirus vendors really do not care that your program regularly shows up as a false positive due to their crappy heuristics, even if you have millions of users.
It was really bad a couple of years ago because anything wrapped in Inno Setup kept being flagged. Now maybe one or two vendors flag it; Bkav Pro and CrowdStrike Falcon are the usual culprits.
Antivirus is going to flag you no matter what if you're not a big-name developer with an expensive certificate. Even a "hello world" GUI program done with MSVC and Win32 gets called the Wacatac Trojan without one. We shouldn't let their incompetence dictate how our software works.
>> Microsoft are free to change the Native API at will, and you will be left holding both pieces when things break.
> [...] the worst case scenario is really mild: A new version of windows comes out, breaking ntdll compatibility. Zig project adds a fix to the std lib. Application developer recompiles their zig project from source, and ships an update to their users.
That assumes the application developer will continue to maintain it until the end of time.
Also, "the fix" would mean developers wanting to support earlier Windows versions would need to use an older std library? Or is the library going to have runtime checks to see what Windows build it's running on?
> Microsoft are free to change the Native API at will,...
But they won't, because if there is one thing Microsoft has always been extremely good at and cared about, it is backward compatibility. And changing the Native API would break a ton of existing software, because even though undocumented, it is very widely used.
They deprecate some methods (very rarely and reasonably) and add new enums or struct versions to existing ones, but never change existing semantics, let alone method signatures. As I said elsewhere, I invite you to find examples of actually destructive Native API changes.
Sounds like the iOS model: your app only exists as long as you are alive and able to pay $99/year. This mentality is a nightmare for software preservation.
> the worst case scenario is really mild: A new version of windows comes out, breaking ntdll compatibility. Zig project adds a fix to the std lib. Application developer recompiles their zig project from source, and ships an update to their users.
The ~only good thing that programmers have achieved in the past ~60 years has been Windows stability.
Creating a popular programming language and then making programs written in it not run on newer versions of Windows is just something else. I so hate this.
Was "robust, optimal and reusable" always "run an older Windows on your newer Windows to run Zig software"?
... this is just Linux binaries. It's humorous to me that we literally do exactly this, for Linux, with even less stability, but heaven forbid we do something approaching that on Windows despite the snobbery against Windows.
> Performance - using the native API bypasses the standard Windows API, thus removing a software layer, speeding things up.
But the article cites no benchmarks.
> Power - some capabilities are not provided by the standard Windows API, but are available with the native API.
Makes sense when you are doing something that needs that power, but that works better as an exception to preferring win32 than as a general reason to prefer native.
> Dependencies - using the native API removes dependencies on subsystem DLLs, creating potentially smaller, leaner executables.
Linking win32 is a minuscule cost. (unless you have a benchmark to show me...)
> Flexibility - in the early stages of Windows boot, native applications (those dependent on NtDll.dll only) can execute, while others cannot.
Is Zig being used for such applications? If so, why are the calls that the document says will be kept on win32 not an issue?
Go famously tried to bypass macOS's libc and directly use the underlying syscall ABI, which is unstable, and then a macOS update came out and broke everything, which taught them the error of their ways (https://github.com/golang/go/issues/17490). I wonder if this will happen to Zig too.
Anyone with some experience with native APIs knows that a standard library should never rely on unstable APIs. Ntdll is not "stable" in the sense that Microsoft can change it at any time, since they expect everyone to use kernel32. It's questionable that they referenced a random book on this topic claiming that ntdll is more performant than kernel32, which is doubtful. There are some specific cases where this is true (the NTFS stuff), but in general it's not, at least not to a significant degree. A standard library should never do this; it might break binaries for no reason other than making a cool blog post. I, as a developer, can choose to use ntdll, but a standard library never should.
The reason apps from Win 9x run on Windows 11 is that MS puts a ton of effort into explicitly supporting old apps. For popular apps that includes supporting undocumented APIs and even app-specific bug compatibility.
Putting it on app developers to account for infinite forward compatibility is not at all reasonable.
The best outcome would be if many Zig apps become popular enough that Windows is forced to maintain backward compatibility for ntdll. The API is clearly superior to win32, as many other developers have discovered and discussed before. It'd be nice to force MS to take low-level programming seriously instead of chasing AI slop.
Is there an official stance on whether ntdll is stable? Obviously they're not going to change things arbitrarily since applications depend on it, but I'm wondering if there is a guarantee like the linux syscall interface or how you can run a win32 application compiled in 2004 on Win11.
Indeed. Anything documented has a function wrapper. `NtCreateFile` is a function wrapper for the syscall number, so any user-mode code that has `NtCreateFile` instead of directly loading the syscall number 0x55 will be stable. The latter might not. In fact, it is not; the number has increased by 3 since Windows XP[1].
One could probably produce some sort of function pointer loader library with these tables, but at that point... Why not just use the documented APIs?
Only malware uses the system call numbers directly. Using the system call numbers directly is foolish if they're going to change and break your app. Just import and call a function that will perform the actual SYSENTER (or WOW64 context change).
NTDLL should be stable since it's well documented, and many functions redirect to Ntoskrnl.exe, and things like kernel level drivers call those functions. Those functions won't change without the drivers breaking.
Then there's "Win32u.dll". These correspond to API calls from User32.dll, Gdi32.dll, etc. This DLL didn't even exist during the Windows 2000-XP era. This stuff is not well documented, and I don't know if this is stable.
For people not familiar with Windows development, another name for the NT native API is "the API that pretty much every document on Windows programming tells you not to use". It's like coding to the Linux syscall interface instead of libc.
One thing that is amusing about the prevalence of advanced anti-cheat in Windows gaming is it's actually causing said API/ABIs to undergo ossification. A good data point is the invention of Syscall User Dispatch^1 on Linux which would allow a program to basically install a syscall handler when they originate from various regions of memory. I do not know how usable this is in practice, admittedly -- but I think the fact it was contributed at all speaks to the growing need.
With the crucial difference that Linux places high value on syscall interface binary compatibility, while the NT native API is not guaranteed to be stable in any way.
A bit more comparable is OpenBSD where applications are very much expected to only use libc wrappers, which threw a wrench into the works for the Go runtime.
Go backed out of their strategy on MacOS and started using libc (libsystem?), because when Apple says something is internal and may change without notice, they really mean it. It may be a better risk with Microsoft, but it’s still a risk.
I think they had to revert back to libc on macOS/iOS because those have syscall interfaces that truly are not stable (and golang found that out the hard way). I wonder if they had to do the same on BSDs because of syscall filtering.
Nope; in UNIX proper, syscalls and libc overlap. That is how C and UNIX evolved side by side; in a way, one could argue UNIX is C's runtime, and hence why most C deployments also expect some level of compatibility with UNIX/POSIX.
Linux is the exception offering its guts to userspace with guarantees of stability.
Funny that you would be arguing for that (unless I misunderstood the intention), given your many other posts about how C is a horrible broken unsafe language that should not be used by anyone ever. I tend to agree with that, btw, even if not so much with the "memory safety" hysteria.
Should every program, now and in the future, be forced to depend on libc, just because it's "grandfathered in"?
IMO, Linux is superior because you are in fact free to ignore libc, and directly interface with the kernel. Which is of course also written in C, but that's still one less layer of crud. Syscalls returning error codes directly instead of putting them into a thread-local variable would be one example of that.
Should a hypothetical future OS written in Rust (or Ada, ALGOL-68, BLISS, ...) implement its own libc and force userspace applications to go through it, just because that's "proper"?
In traditional UNIX, there is no libc per se, there is the stable OS API set of functions and that's it.
When C was standardised, a subset of the UNIX API became the ISO C standard library, aka libc. When it was proven that wasn't enough for portable C code, the remaining UNIX API surface became POSIX.
Outside UNIX, libc is indeed a separate thing, because many of those OSes aren't even written in C (just like the languages you list). On those OSes, libc ships with the C compiler, not the OS per se, as you can check by diving into VMS documentation from before it became OpenVMS, or IBM and Unisys systems, where libc is also irrelevant if you're using PL/I, NEWP, whatever.
Also on Windows, you are not supposed to depend on libc unless you are writing portable C code, there isn't one libc to start with. Just like everyone else, each compiler ships their own C runtime library, and nowadays there is also universal C runtime as well, plenty of libc choices.
If not writing portable C code, you aren't supposed to use memset(), rather FillMemory().
Same applies to other non-UNIX OSes, you would not be calling memset(), rather the OS API for such service.
I don't think GP is arguing that's the best way to design an OS, just that interfacing with non-Linux Unixes is best done via libc, because that's the stable public interface.
On Windows, the stability guarantees are opposite to that of Linux. The kernel ABI is not guaranteed to be stable, whereas the Win32 ABI is.
And frankly, the Windows way is better. On Linux, the 'ABI' for nearly all user-mode programs is not the kernel's ABI but rather glibc's (plus the variety of third-party libraries, because Win32 has a massive surface area and is an all-in-one API). Now, glibc's ABI constantly changes, so linking against a newer glibc (almost certainly the 'host' glibc, because it is almost impossible to supply a different 'target' glibc without Docker) will result in a program that doesn't run on older glibc. So much for Torvalds' 'don't break userspace'.
Not so for a program compiled for 'newer' Win32; all that matters are API compatibilities. If one only uses old-hat interfaces that are documented to be present on Windows 2000, one can write and compile one's code on Windows 11, and the executable will run on the former with no issues. And vice versa, actually.
It doesn't really matter if it's 'just a wrapper', because said wrapper provides an ABI. Even if the underlying Native API changes, the interface the wrapper presents to other compiled binaries won't. The latter will contain caller/callee register setup, type layouts, function arguments and more for that wrapper.
Cygwin is also 'just a wrapper' for the Native API and Win32, and look how drastically it changes the ABI of applications.
Let me narrow down the scope here. I am a Rust developer, developing software that will run on my Linux server. Why would I want to use libc? Why does Rust standard library use libc? Zig, for example, doesn't.
If you write your software only for yourself, do whatever you want, of course. If you want to share it with other people, artificially limiting it without a very good reason will make it less useful and popular.
Because that's the stable public interface provided by pretty much every OS except Linux. On Linux, if you don't want to depend on the OS-supplied libc, you can use musl.
For comparison, in Rust they track down the differences in which flags are ignored in a certain kind of `fcntl` syscall across all the architectures that have this C function, which includes Solaris, Mac OS, the BSDs, Linux, ...
This is so that this is correctly handled in Miri, which can then be used to run the test-suite of the OS-specific parts of the standard library, and observe if this uses unsupported features in some way. This ensures that the standard library relies on documented features and not on whatever happens to work right now.
Honestly, this sounds like a future headache that would otherwise go unnoticed unless the programmer is dealing with porting or binding over source code meant for older Windows systems to Zig (or supporting older systems in general). Eventually it might result in a bunch of people typing out blogposts venting their frustrations, and the creation of tutorials and shims for hooking to Win32 instead of the Zig standard library with varying results. Which is fine, I suppose. Legacy compiler targets are a thing.
This is already a problem with Linux binaries for systems that don't have a recent enough Glibc (unless the binaries themselves don't link to it and do syscalls directly).
The one thing that really benefits from using the NT Native API over Win32 is listing files in a directory. You get to use a 64KB buffer to receive directory listing results, while Win32 hands them back one at a time. A 64KB buffer means fewer system calls.
(Reading the MFT is still faster. Yes, you need admin for that)
Oh hey, this is exactly why I made node-windows-readdir-fast - especially with the way node works, this makes reading filenames, lengths, and times around 50x faster.
Windows only of course, but the concept is sound. Was also fun benchmarking to find out that parsing a binary stream was faster than creating a ton of objects through the node api (or json deserialization)
Why not use both DLLs? Prefer win32 wherever possible and use the lower level APIs only if absolutely necessary. Benchmark after you have figured this out. Performance is probably not a thing at this level of abstraction.
Here's one fun example from following development on Zulip: advapi.dll loads bcrypt.dll, which loads bcryptprimitives.dll. bcryptprimitives.dll runs an internal test suite every time it's loaded into any process. So if you can avoid loading advapi.dll, your process will start faster.
Why? Is there any realistic scenario where your cryptography libs worked correctly yesterday but the exact same ones will be buggy today? What would be wrong with them just running once per build instead?
Zig doesn't run any code from the dll that never gets loaded, of course. Why run tests for code that is never called? If another part of your app does load the dll, the tests will still run.
I don't know enough about Windows programming to give an opinion, but given the sentiment here, maybe they should let developers choose based on some comptime flag or something and maintain two versions.
This is a terrible idea! _Maybe_, _maybe_ using only the documented APIs with only the documented parameters.
Unfortunately it makes too many false assumptions about interoperability between Win32 and the underlying native API that aren't true.
For example (and the Go runtime does this, much to my chagrin), querying the OS version via the native API always gives you "accurate" version information without needing to link a manifest into your application. Unfortunately that lack of manifest will still cause many Win32 APIs above the native layer to drop into a compatibility mode, creating a fundamental inconsistency between what the application thinks the OS capabilities are versus which Win32 subsystem behaviours the OS thinks it should be offering.
> While this can happen, we have not (yet) been affected by any changes in the Win32 -> Native layers.
Frankly this is dumb. Zig hasn't been around long enough to have even seen any changes, so using this as a reason is just plain dumb.
The view that, if windows ever changes, the code must be recompiled is a naive view one would expect from a child, not from a group of experienced devs.