It opens the door to tech-illiterate users being tricked into disabling security features, doesn’t it? Not saying I agree with it but I imagine that’s the motivation.
I’d argue they aren’t, but the number one threat actor in the privacy space is Google.
I occasionally have to use Chrome to test with. Can someone explain concisely how it manages Google logins? They clearly bolted login handling in at some low level to help violate privacy and/or shove dark patterns.
Also, the out of the box spam and dark patterns are over the top. It reminds me of Win 95 bundled software bullshit.
That’s to say nothing of their B-tier properties, like Google TV or the YouTube client:
When the kids use this garbage it’s all “Bruh, what is this screen?”, or “I swear I’m not touching the remote!”
(The official YouTube client now loses monitor sync(!!) as it rapid-cycles through ads on its own. I guess this is part of an apparent Google-run ad fraud campaign, since it routinely seems to think it ran > 5-10 ads to completion in ~15 seconds. We can't even see all the ads start, because each one bumps the monitor settings around, which has the effect of auto-muting.)
Video games are a subset of entertainment which is capped in TAM by the population the game reaches, the amount of money they're willing to spend per hour on average, and average number of hours they can devote to entertainment.
In other words, every dollar you make off a game is a dollar that wasn't spent on another game, or trip to the movies, or vacation. And every hour someone plays your game is an hour they didn't spend working, studying, sleeping, eating, or doing anything else in the attention economy.
What makes this different from other markets is that there is no value creation or new market you can create from the aether to generate 10x/100x/1000x growth. And there's no rising tide to lift your boat and your competitors - if you fall behind, you sink.
The only way to grow entertainment businesses by significant multiples is by increasing discretionary income, decreasing working hours, or growing the population with discretionary time and money. But those are societal-level problems that take governments and policy to solve, certainly not venture capital.
I shudder to think about the impact of concurrent data structures fsync'ing on every write because the programmer can't reason about whether the data is in memory, where a handful of atomic fences/barriers is enough to reason about the correctness of the operations, or on disk, where those operations simply do not exist.
Also linear regions make a ton of sense for disk, and not just for performance. WAL-based systems are the cornerstone of many databases and require the ability to reserve linear regions.
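To make the distinction concrete, here's roughly the pattern I mean, as a minimal C sketch (wal_reserve/wal_append are made-up names for illustration, not any particular database's API): reserving a linear region of the log only needs an atomic read-modify-write, which memory-ordering rules cover; making the bytes durable needs an explicit fdatasync, which no fence can substitute for.

    /* Minimal WAL-append sketch: reserve a contiguous region of the log
       atomically, write into it, then force it to stable storage.
       Illustrative only; real WALs batch/group-commit the syncs. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <unistd.h>

    struct wal {
        int              fd;    /* log file descriptor */
        _Atomic uint64_t tail;  /* next free byte in the log */
    };

    /* Reserve `len` contiguous bytes. Visibility to other threads is handled
       by the atomic RMW alone -- no durability is implied at this point. */
    static uint64_t wal_reserve(struct wal *w, uint64_t len) {
        return atomic_fetch_add(&w->tail, len);
    }

    /* Fill the reserved region, then make it durable. The fence-style
       reasoning that covers `tail` does nothing for the on-disk bytes;
       only the fdatasync makes the record survive a crash. */
    static int wal_append(struct wal *w, const void *rec, uint64_t len) {
        uint64_t off = wal_reserve(w, len);
        if (pwrite(w->fd, rec, len, (off_t)off) != (ssize_t)len)
            return -1;
        return fdatasync(w->fd);  /* commit point */
    }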
Linear regions are mostly a figment of the imagination in real life, but they are a convenient abstraction.
Linear regions are nearly impossible to guarantee, unless the underlying hardware has specific, controller-level provisions.
1) For RAM, the MMU obscures the physical address of a memory page, which can come from a completely separate memory bank. It is up to the virtual memory manager's implementation and heuristics to ensure contiguous allocation, coalesce unrelated free pages into a new, large allocation, or map in a free page from a «distant» location.
2) Disks (the spinning-rust variety) are not that different. A freed block can be provided from the start of the disk. However, sophisticated file systems like XFS or ZFS will do their best to allocate contiguous blocks.
3) Flash storage (SSDs, NVMe) simply «lies» about the physical blocks, and does so for several reasons (garbage collection and the transparent reallocation of ailing blocks, to name two). If I understand it correctly, the physical «block» numbers are hidden even from the flash storage controller and firmware themselves.
The only practical way I can think of to guarantee contiguous allocation of blocks unfortunately involves a conventional hard drive with a dedicated partition created just for the WAL. In fact, this is how Oracle installations used to work: they required a dedicated raw device to bypass both the VMM and the file system.
When RAM and disk(s) are logically the same concept, WAL can be treated as an object of the «WAL» type with certain properties specific to this object type only to support WAL peculiarities.
Ultimately everything is an abstraction. The point I'm making is that linear regions are a useful abstraction for both disk and memory, but that's not enough to unify them. Particularly in that memory cares about the visibility of writes to other processes/threads, whereas disk cares about the durability of those writes. This is an important distinction that programmers need to differentiate between for correctness.
Perhaps a WAL was a bad example. Ultimately you need the ability to atomically reserve a region of a certain capacity and then commit it durably (or roll back). Perhaps there are other abstractions that can do this, but with linear memory and disk regions it's exceedingly easy.
Personally I think file I/O should have an atomic CAS operation on a fixed maximum number of bytes (just like shared memory between threads and processes) but afaik there is no standard way to do that.
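One way to approximate it today (not a true CAS: it is only atomic with respect to other processes that take the same lock) is an advisory record lock around a read-compare-write. A rough POSIX sketch, with file_cas64 being a made-up helper name and error handling pared down:

    /* Approximate a compare-and-swap on an 8-byte cell in a file using a
       POSIX advisory record lock. Only atomic w.r.t. cooperating lockers;
       there is no standard hardware-style CAS for file bytes. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Returns 1 if swapped, 0 if the expected value didn't match, -1 on lock error. */
    static int file_cas64(int fd, off_t off, uint64_t expected, uint64_t desired) {
        struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                            .l_start = off, .l_len = sizeof(uint64_t) };
        if (fcntl(fd, F_SETLKW, &lk) == -1)
            return -1;                            /* lock the 8-byte cell */

        uint64_t cur;
        int ok = 0;
        if (pread(fd, &cur, sizeof cur, off) == (ssize_t)sizeof cur && cur == expected) {
            ok = pwrite(fd, &desired, sizeof desired, off) == (ssize_t)sizeof desired
                 && fdatasync(fd) == 0;           /* make the swap durable */
        }

        lk.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &lk);                  /* release the lock */
        return ok;
    }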
I do not share the view that the unification of RAM and disk requires or entails linear regions of memory. In fact, the unification reduces the question of «do I have a contiguous block of size N to do X» to a mere «do I have enough memory to do X?», commits and rollbacks inclusive.
The issue of durability, however, remains a valid concern in either scenario, but the responsibility to ensure durability is delegated to the hardware.
Furthermore, commits and rollbacks are not sensitive to memory linearity anyway; they are sensitive to the durability of the operation, and they may be sensitive to latency, although that is not a frequently occurring constraint. In the absence of a physical disk, commits/rollbacks can be implemented entirely in RAM today using software transactional memory (STM) – see the relevant Haskell library and the white paper on STM.
Lastly, when everything is an object in the system, the way the objects communicate with each other also changes from the traditional model of memory sharing to message passing, transactional outboxes, and similar, where the objects encapsulate the internal state without allowing other objects to access it – courtesy of the object-oriented address space protection, which is what the conversation initially started from.
Multiple versions of GTK or Qt can coexist on the same system. GTK2 is still packaged on most distros; GIMP, for example, only switched to GTK3 last year or so.
GTK's update schedule is very slow, and you can run multiple major versions of GTK on the same computer, so that's not the right argument. When people say GTK backwards compatibility is bad, they are referring in particular to its breaking changes between minor versions. It was common for themes and apps to break (or work differently) between minor versions of GTK+ 3, as deprecations were sometimes accompanied by the breaking of the deprecated code. (Anyway, before Wayland support became important, people stuck to GTK+ 2, which was simple, stable, and still supported at the time; and everyone had it installed alongside GTK+ 3.)
Breaking between major versions is annoying (2 to 3, 3 to 4), but for the most part it's renaming work and some slight API modifications, reminiscent of the Python 2 to 3 switch, and it has only happened twice since 2000.
The difference is that you can statically link GTK+, and it'll work. You can't statically link glibc, if you want to be able to resolve hostnames or users, because of NSS modules.
Not inherently, but static linking to glibc will not get you there without substantial additional effort, and static linking to a non-glibc C library will by default get you an absence of NSS.
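To make that concrete, this is the classic trap (a minimal sketch, nothing project-specific): even a fully statically linked glibc binary that calls getaddrinfo() will dlopen() the NSS modules (libnss_dns.so, libnss_files.so, ...) at runtime, and those have to match the glibc you built against.

    /* Hostname lookup is where "just statically link glibc" quietly breaks:
       getaddrinfo() consults NSS, which loads shared modules at runtime even
       from a -static binary, and they must match the build-time glibc. */
    #include <netdb.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = { .ai_family = AF_UNSPEC, .ai_socktype = SOCK_STREAM };
        struct addrinfo *res = NULL;
        int rc = getaddrinfo("example.org", "443", &hints, &res);  /* NSS kicks in here */
        if (rc != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
            return 1;
        }
        freeaddrinfo(res);
        return 0;
    }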
The problem is not the APIs, it's symbol versions. You will routinely get loader errors when running software compiled against a newer glibc than what a system provides, even if the caller does not use any "new" APIs.
glibc-based toolchains are ultimately missing a GLIBC_MIN_DEPLOYMENT_TARGET definition that gets passed to the linker so it knows which minimum version of glibc your software supports, similar to how Apple's toolchain lets you target older macOS from a newer toolchain.
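One partial workaround that exists today is per-symbol pinning with the GNU toolchain's .symver directive, which works but scales terribly: you have to know which symbols grew new versions and pin each one by hand. A hedged example (on x86-64 the baseline version for memcpy is GLIBC_2.2.5; the right string varies by symbol and architecture, so treat it purely as an illustration):

    /* Pin one symbol to an old glibc version so the binary doesn't demand a
       newer GLIBC_* than the target systems provide. Illustrative only; a
       real project would need this for every symbol that gained a new version. */
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    void copy_buf(void *dst, const void *src, unsigned long n) {
        memcpy(dst, src, n);  /* resolves against the old versioned symbol */
    }

Doing that for every affected symbol across a large dependency tree is exactly the toil a single deployment-target flag would remove.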
Yes, so that's why freezing the glibc symbol versions would help. If everybody uses the same version, you cannot get conflicts (at least after it has rippled through and everybody is on the same version). The downside is that we can't add anything new to glibc, but I'd say given all the trouble it produces, that's worth accepting. We can still add bugfixes and security fixes to glibc, we just don't change the APIs of the symbols.
It should not be necessary to freeze it. glibc is already extremely backwards compatible. The problem is people distributing programs that request the newest version even though they do not really require it, and this then fails on systems having an older version. At least this is my understanding.
The actual practical problem is not glibc but the constant GUI / desktop API changes.
Making an executable “request” older symbol versions is incredibly painful in practice. Basically every substantial piece of binary software either compiles against an ancient Debian sysroot (that has to have workarounds for the ancient part) or somehow uses a separate glibc copy from the base system (Flatpak, etc.). The first greatly complicates building the software, the second is recreating Windows’ infamous DLL hell.
Both are way more annoying than anything the platforms without symbol versioning suffer from because of its lack. I've never encountered anyone who has packaged binaries for both Linux and Windows (or macOS, or the BSDs) and missed anything about Linux userspace ABIs when working on another platform.
It has to be as ancient as the oldest glibc you want to support, usually a Red Hat release with a very old version and manual security backports. These can have nearly decade-old glibc versions, especially if you care about extended support contracts.
You generally have difficulty actually running contemporary build tools on such a thing, so the workaround is to use --sysroot against what is basically a chroot of the old distro, as if cross-compiling. But there are still workarounds needed if the version is old enough. Chrome has a shorter support window than some Linux binaries, but you can see the gymnastics they do to create their sysroot in some Python scripts in the chromium repo.
On Windows, you install the latest SDK and pass a target version flag when setting up the compiler environment. That’s it. macOS is similar.
If the problem is getting the build tools to work in your old chroot, then the problem is still "people distributing programs that request the newest version", i.e. the build tool developers / packagers. I generally do not have this problem, but I am a C programmer building computational tools.
Look, it’s not that complicated. If you just build your software with gcc or whatever in a docker container with pinned versions, put the binary on your website, and call it a day, 5 minutes later someone is going to complain it doesn’t work on their 3 year old Linux Mint install. The balkanization of Linux is undeniable at this point. If you want to fix this problem without breaking anything else, you have to jump through hoops (and glibc is far from the only culprit).
You can see what the best-in-class hoop jumping looks like in a bunch of open source projects that do binary releases — it’s nontrivial. Or you can see all the machinations that Flatpak goes through to get userspace Mesa drivers etc. working on a different glibc version than the base system. On every other major OS, including other free software ones, this isn’t a problem. Like at all. Windows’ infamous MSVC versioning is even mostly a non-issue at this point, and all you had to do before was bundle the right version in your installer. I’ll take a single compiler flag over the Linux mess every day of the week.
Do you distribute a commercial product to a large Linux userbase, without refusing to support anything that isn't Ubuntu LTS? I'm kind of doubting that, because everyone I know (myself included) who didn't go with a pure Electron app (which mostly solves this for you with its own build-process complexity) has wasted a bunch of time on this issue. Even statically linking with musl has its futziness, and that's literally impossible for many apps (e.g. anything that touches a GPU). The Linux ecosystem could make a few pretty minor attitude adjustments and improve things with almost no downside, but it won't. So the year of the Linux desktop remains elusive.
> The balkanization of Linux is undeniable at this point.
Again this same old FUD.
The situation would be no different if there was only a single distro - you would still need to build against the oldest version of glibc (and other dependencies) you want to support.
In principle you can patch your binary to accept the old local version, though I don't remember ever getting it to work right. Anyway, for the brave or foolhardy, here's the gist:
Oh, sure, rpath/runpath shenanigans will work in some situations but then you'll be tempted to make such shenanigans work in all situations and then the madness will get you...
To save everyone a click here are the first two bullet points from Exhibit A:
* If an executable has `RPATH` (a.k.a. `DT_RPATH`) set but a shared library that is a (direct or indirect(?)) dependency of that executable has `RUNPATH` (a.k.a. `DT_RUNPATH`) set then the executable's `RPATH` is ignored!
* This means a shared library dependency can "force" loading of an incompatible [(for the executable)] dependency version in certain situations. [...]
Further nuances regarding LD_LIBRARY_PATH can be found in Exhibit B but I can feel the madness clawing at me again so will stop here. :)
2. Replace libc.so with a fake library that exports the right version symbol, using a version script
e.g. version.map
GLIBC_2.29 {
global:
*;
};
With an empty fake_libc.c
`gcc -shared -fPIC -Wl,--version-script=version.map,-soname,libc.so.6 -o libc.so.6 fake_libc.c`
3. Hope that you can still point the symbols back to the real libc (either by writing a giant pile of dlsym C code, or some other way, I'm unclear on this part)
Ideally glibc would stop checking the version if it's not actually marked as needed by any symbol; I'm not sure why it doesn't (technically it's the same thing normally, so maybe performance?).
So you can do e.g. `patchelf --remove-needed-version libm.so.6 GLIBC_2.29 ./mybinary` instead of replacing glibc wholesale (step 2 and 3) and assuming all of used glibc by the executable is ABI compatible this will just work (it's worked for a small binary for me, YMMV).
People will complain that glibc doesn't implement what they want.
The solution is simply to build against the oldest glibc version you want to support - we should focus on making that simpler, ideally just a compiler flag.
We definitely can, because almost every other POSIX libc doesn’t have symbol versioning (or MSVC-style multi-version support). It’s not like the behavior of “open” changes radically all the time, and you need to know exactly what source symbol it linked against. It’s really just an artifact of decisions from decades ago, and the cure is way worse than the disease.
This isn't a problem in other languages because most other languages don't have strong, statically typed errors that need to compose across libraries. And those that do have the same problem.
The general argument against adding something to `std` is that once the API is stabilized, it's stabilized forever (or at least for an edition, but practically I don't think many APIs have been changed or broken across editions in std).
The aversion to dependencies is just something you have to get over in Rust imo. std is purposefully kept small and that's a good thing (although it's still bigger and better than C++, which is the chief language to compare against).