Even though I would like to tune in to the Docker bashing (in this case one can actually say with confidence: "Hitler was right" [1]), the fundamental architectural problem is on the OS side.
UNIX, and especially Linux, is a monolithic design. Even though such an OS is able to separate user processes from each other, all system parts run, by design, as one "big ball of mud", with "god-like" capabilities available to them by default. Sure, some internal "barriers" have been added, and per-process capability dropping has been retrofitted, but this is backwards from an architectural point of view. Cutting things into pieces after the fact is almost always far more complicated and awkward than designing things in a modular way from the get-go.
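To illustrate the retrofitted nature of this: a Linux process starts with whatever power its parent had and has to subtract privileges one by one. A minimal sketch of that subtraction (assuming Linux and the standard prctl(2)/capabilities(7) constants; dropping from the bounding set itself requires CAP_SETPCAP, so run it privileged):

    import ctypes, os

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    PR_CAPBSET_DROP = 24  # from <linux/prctl.h>
    CAP_NET_RAW = 13      # from <linux/capability.h>

    # Remove CAP_NET_RAW from this process's bounding set, so neither it
    # nor any child can ever regain raw-socket access. Note the direction:
    # the capability is there by default and must be explicitly taken away.
    if libc.prctl(PR_CAPBSET_DROP, CAP_NET_RAW, 0, 0, 0) != 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))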
This is relevant because virtualizing a modular OS is almost a no-brainer (conceptually): you just start additional instances of the required system servers / modules / whatever-you-call-those-parts. Virtualizing a monolith, by contrast, is like constructing a kind of Ouroboros: it needs to run itself (with an altered, usually constrained view of the 'outside' world) from inside of itself, and it can't just globally drop the "god-like capabilities" its execution context provides, the way it could for an external process. It needs to "hide or manipulate things in front of its own eyes" even though "it" has the "all-seeing eye". Or, to put it even more metaphorically: "A god tries to use his divine powers to constrain his omnipotence so he can lie to himself about the things he sees, without ever being able to look through this jugglery." Formulated like that, the architectural issue is obvious, I guess.
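That "hiding things in front of its own eyes" is exactly what Linux namespaces do in practice. A tiny sketch using the UTS namespace (assuming Linux, root or CAP_SYS_ADMIN, and Python 3.12+ for os.unshare; the hostname string is a placeholder):

    import os, socket

    print("hostname outside:", socket.gethostname())

    # Enter a fresh UTS namespace: the kernel now maintains a second,
    # private hostname just for this process, while the "real" one
    # keeps existing right next to it.
    os.unshare(os.CLONE_NEWUTS)
    socket.sethostname("pseudo-container")

    print("hostname inside: ", socket.gethostname())
    # Every other process still sees the old hostname; the monolith is
    # busy lying to one part of itself about its own state.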
I was thinking recently that all these layers of abstraction are crazy. It would be simpler to write a Python web server using MicroPython and deploy it to a server farm of ESP32 microcontrollers.
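For what it's worth, the MicroPython side of that is not even far-fetched. A minimal sketch for the ESP32 port (SSID, password, and the canned reply are placeholders):

    import network, socket

    # Join the WLAN first (MicroPython ESP32 port).
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    wlan.connect("my-ssid", "my-password")  # placeholder credentials
    while not wlan.isconnected():
        pass

    # Bare-bones HTTP server: read one request, send one canned response.
    addr = socket.getaddrinfo("0.0.0.0", 80)[0][-1]
    s = socket.socket()
    s.bind(addr)
    s.listen(1)
    while True:
        conn, _ = s.accept()
        conn.recv(1024)  # ignore the request contents
        conn.send(b"HTTP/1.0 200 OK\r\nContent-Type: text/plain\r\n\r\nhello from esp32\n")
        conn.close()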