IMHO the security advantage of Wayland is mostly a myth, and probably the same is true regarding tearing. The latter is probably more an issue of drivers and defaults.
On my desktop computers and on most of my laptops I have never experienced tearing in X11, at least during the last 25 years, using mostly NVIDIA GPUs, but also Intel GPUs and AMD GPUs.
I have experienced tearing only once, on a laptop about 10 years ago that used NVIDIA Optimus, i.e. an NVIDIA GPU without direct video output, which routed its output through the Intel GPU. NVIDIA Optimus was a known source of problems on Linux. Unlike any discrete NVIDIA GPU, which always worked out of the box without any problems for me, that Optimus setup required fiddling with the settings for a couple of days until I had solved all the problems, including the tearing.
Perhaps Wayland never had tearing problems, but I have used X11 for several decades on a variety of desktops and laptops and tearing has almost never been a problem.
However, most of the time I have used only NVIDIA or Intel GPUs for display and it seems that most complaints about tearing have been about AMD. I have always used and I am still using AMD GPUs too, but I use those for computations, not connected to monitors, so I do not know if they could have tearing problems.
Most of systemd's critics are not people that just want to use another init system. They object to it on stupid philosophical grounds for which they should be shamed.
Most of systemd's proponents are not people that care about what service management system they use. They defend systemd despite their ignorance of both systemd and the proposed alternatives solely to feel part of the "in group" of people who moan about people who moan about systemd.
People who defend systemd do so because they want their system to work reliably and quickly. It does that. Before you say "so does sysvinit", no it does not. It was janky-but-workable on servers and desktops in the 90s, that basically never did anything except startup and shutdown. Most modern computers aren't like that.
More straw man arguments. You have just confirmed that you have _no idea_ what alternatives exist, that you have _no idea_ what systemd actually does, and you have _no idea_ what my actual stance is in this discussion.
Please don't insult me by insinuating that I think that sysvinit is anything other than a weird esoteric init program which has, in the past and on linux distros, been the supporting piece of a garbage heap of poorly written shell scripts (and which is currently on BSDs the supporting piece of a relatively okay designed heap of shell scripts which implement a silly service management model that I also don't like).
And X11 always had a mechanism for isolating clients as well, i.e. trusted and untrusted clients. Nobody used it because it was irrelevant before sandboxing.
It is definitely not a language that can be learned by reading blogs.
But the advice really applies to almost everything you do related to security, safety and reliability. In other languages you may have a panic in production or a supply chain issue.
I use "-fno-strict-overflow", so it shouldn't be UB in any case. I also basically never use signed integers, and I always use "checked" methods for arithmetic except in performance-critical loops.
Obviously, that should always be used, as should the compiler options for checking integer overflow and out-of-bounds accesses.
However, this kind of implicit conversion really must be forbidden by the standard, because the correct program source differs from the one the standard permits.
When you activate most compiler options that detect undefined behaviors, the correct program source remains the same, even if the compiler now implements a better behavior for the translated program than the minimal behavior specified by the standard.
That happens because most undefined behaviors are detected at run time. Incorrect implicit conversions, on the other hand, are a property of the source code that is always detectable during compilation, so such programs must be rejected.
Integer overflow and out-of-bounds accesses must be checked at runtime, which makes the program slower. -Wsign-conversion, by contrast, can be checked at compile time, perhaps with a few false positives where the numbers are "always" small enough.
Does it also complain when the assigned variable is big enough to avoid the problem? Does the compiler generate slower code with the explicit conversions?
It looks like a nice task to compile major projects with -Wsign-conversion and send PRs fixing the warnings. (Assuming there are only a few, say 5. Sending an uninvited PR with a thousand changes will make the maintainers unhappy.)
The standard will not forbid anything that breaks billions of lines of code that are still being used and maintained.
But it is easy enough to use modern tooling and coding styles to deal with signed overflow. Nowadays, silent unsigned wrap around causing logic errors is the more vexing issue, which indicates the undefined behavior actually helps rather than hurts when used with good tooling.
Silent unsigned wrap-around is caused by another mistake of the C language (and of all later languages inspired by C): there is only a single unsigned type.
The hardware of modern CPUs actually implements 5 distinct data types that must be declared as "unsigned" in C: non-negative integers, integer residues a.k.a. modular integers, bit strings, binary polynomials and binary polynomial residues.
A modern programming language should better have these 5 distinct types, but it must have at least distinct types for non-negative integers and for integer residues. There are several programming languages that provide at least this distinction. The other data types would be more difficult to support in a high-level language, as they use certain machine instructions that compilers typically do not know how to use.
The change in the C standard that made "unsigned" mean integer residue has left the language without any means to specify a data type for non-negative integers, which is extremely wrong, because more programs use "unsigned" for non-negative integers than for integer residues.
The hardware of most CPUs implements non-negative integers very well, so non-negative integer overflow is easily detected, but the current standard makes it impossible to use that hardware.
Yes, that's true, but the registers themselves are untyped, what modern CPUs really implement is multiple instruction semantics over the same bit-patterns. In short: same bits, five algebras! The algebras are given by different instructions (on the same bit patterns).
Here is an example, the bit pattern 1011:
• as a non-negative integer: 11. ISA operations: Arm UDIV, RISC-V DIVU, x86 DIV
• as an integer residue mod 16: the class [11] in Z/16Z. ISA operations: Arm ADD, RISC-V ADD/ADDI, x86 ADD
• as a bit string: bits 3, 1, and 0 are set. ISA operations: Arm EOR, RISC-V ANDI/ORI/XORI, x86 AND.
• as a binary polynomial: x^3 + x + 1. ISA operations: Arm PMULL, RISC-V clmul/clmulh/clmulr, x86 PCLMULQDQ
• as a binary polynomial residue modulo, say, x^4 + x + 1: the residue class of x^3 + x + 1 in GF(2)[x] / (x^4 + x + 1). ISA operations: Arm CRC32* / CRC32C*, x86 CRC32, RISC-V clmulr
And actually ... the floating-point numbers also have the same bit patterns, and could, in principle, reside in the same registers. On modern ISAs, floats are usually kept in a distinct register file.
You can use different functions in C on the bit patterns we call unsigned.
Yes, registers are untyped, just as memory is untyped; there is no difference, and this is a good thing.
If you had a data type with type tags, that still would not mean that the storage location for it is typed, it would only mean that you have implemented a union type.
Typed memory would mean to partition the memory into separate areas for integers, floating-point numbers, strings, etc., which makes no sense because you cannot predict the size of the storage area required for each data type.
In modern CPUs, the registers are typically partitioned by data type into only 3 or 4 sets: first the so-called general purpose registers, which are used for any kind of scalar data types except floating-point numbers, second a set of scalar floating-point registers, third a set of vector registers used for any kind of vector data types and in very recent CPUs there may be a fourth set of matrix registers, also used for many data types.
In most current CPUs, e.g. Intel/AMD x86-64 and ARM Aarch64, the scalar floating-point registers are aliased over the vector registers, so these 2 do not form separate register sets.
A finer form of typing for CPU registers is not useful, because it cannot be predicted how many registers of each type will be needed.
Therefore, as you say, the data type of an operation is encoded in the instruction and it is independent of the registers used for operands or results.
Moreover, there are several cases when the same instruction code can be used for multiple data types and the context determines which was the intended data type.
For instance, the same instruction for register addition can be used to add signed integers, non-negative integers and integer residues. The intended data types are distinguished by the following instructions. If the overflow flag is tested, it was an addition of signed integers. If the carry flag is tested, it was an addition of non-negative integers. If the flags are ignored, it was an addition of integer residues.
Another example is the bitwise addition modulo 2 (a.k.a. XOR), which, depending on the context, can be interpreted as addition of bit strings or as addition of binary polynomials.
Yet another example is a left rotation instruction, which can be interpreted as either a rotation of a bit string or as a multiplication by a power of 2 of an integer residue modulo 2^N-1 (this is less known than the fact that shift left is equivalent with a multiplication modulo 2^N).
While registers and even instruction encodings can be reused for multiple data types, which leads to significant hardware savings, any program, including the programs written in assembly language, should better define clearly and accurately the exact types of any variables, both to ensure that the program will be easily understood by maintainers and to enable the detection of bugs by program analysis.
The most frequent use of "unsigned" in C programs is for non-negative integers, despite the fact that the current standard specifies that operations with "unsigned" must be implemented as operations with integer residues. This obviously bad feature of the standard has the purpose of allowing lazy programmers to avoid the handling of exceptions, because operations with integer residues cannot generate exceptions. This laziness can frequently lead to bugs that go undetected, or are detected only after they have had serious consequences.
I believe that if one reserves "unsigned" to mean "non-negative integer", then one should use typedefs for different data types whenever "unsigned" is used for another data type, and that includes bit strings, which is probably the next most frequently used data type for which "unsigned" is used.
IBM PL/I, from which the C language has taken many keywords and symbols, including "&" and "|", had distinct types for integers and for bit strings, but C did not also take this feature.
There are even more algebras on the same bits, when you take signed integers into account, such as saturating arithmetic.
One interesting programming language construct that might be useful in this context is Opaque Type Synonyms, a refined form of C's typedef, which modern languages like Rust, Haskell, Go or Scala offer. This allows the programmer to take the same underlying type (e.g. int), give it different names, and define different algebras on each alias. The type system prevents the different aliases from accidentally flowing into each other. Of course that alone does not help to manage the profusion of algebras over the same bits. I think a better approach for a high-level programming language is to follow assembly and really use different names for different operations, e.g. not have + built in. Instead, use explicit names like add_uint32, add_polynomials_gf_2, add_satur_arith, etc. The user can then explicitly define (scoped) aliases for them, including +, as long as the type system can disambiguate the uses. The Sail DSL for ISA specification (https://github.com/rems-project/sail) does this, and it is nice.
Indeed, that is the standard approach. It is also how some of the aforementioned languages desugar opaque type synonyms during compilation. It has the slight disadvantage that we can no longer use variables like x in some situations, but need to use x._polynomials_gf_2 or whatever the structure's field name is. It is nice to avoid this boilerplate, which can become annoying quickly. Let the type-checker, not the human, do this work ...
> You do not need another language for this.
By the Church-Turing thesis you never need another language, but empirical practice has shown that the software engineering properties we see with real-world code and real-world programmers differ significantly between languages.
There are other languages such as Ada that allow you to more precisely specify such things. Before requesting many new types for C, one should clarify why those languages did not already replace C.
I agree though that using "unsigned" for non-negative integers is problematic and that there should be a way to specify non-negative integers. I would be fine with an attribute.
The problem is also that the standard committee is not the ruling body of the C language. It is the place where people come together to negotiate some minimal requirements. If you want something, you need to first convince the compilers vendors to implement it as an extension.
> which indicates the undefined behavior actually helps rather than hurts when used with good tooling
No, one doesn't need undefined behavior for that at all (which does hurt).
What actually helps is diagnosing the issue, just like one can diagnose the unsigned case just fine (which is not UB).
Instead, for this sort of thing, C could have "Erroneous Behavior", like Rust has (C++ also added it, recently).
Of course, existing ambiguous C code will remain tricky. What matters, after all, is having ways to express what we expect in the source code, so that a reader (whether tooling, humans or LLMs) can rely on that.
One doesn't need the undefined behavior, and that is not what I said; one could make it erroneous behavior. But with mainstream compilers being able to trap or wrap, there would be no practical difference except in marketing.
Funny. But I have to say the shaming of users who have different opinions or want to make different choices (the whole point of free software) is one of the saddest developments in the free software world, along with the push for BSD replacements for GPL components, the entanglement of software components in general, the breaking of compatibility, etc. No matter where you stand, the fact that it is becoming harder to choose the components of your system to your liking should give everybody pause. And if your argument involves the term "Boomer" because you prefer the new choice, you miss the point. Android should be a clear warning that we can lose freedoms again very quickly (if recent US politics is not already warning enough).
Sadly, everyone wants convenience. Nobody hates MS because they are bad; they hate them because they are inconvenient. People are missing the fact that Google is exactly where MS was in the 90s and is most definitely as bad, if not worse. I hate Android; sadly, Linux isn't looking too good right now on mobile.
Devs are missing the point with Linux on the phone. Get the phone part working first, lol, so that people have some incentive to carry the damned thing. Apps come later.
Over the decades I have used Neo Freerunner, Nokia N900 and now Librem 5. All of them were fully usable, though I'll admit the first one required quite some patience (similarly to the PinePhone these days I'd say).