You’re thinking of containerisation. Virtualisation does abstract away direct interfaces with the hardware. And some virtual machines are literal emulators.
Virtualisation shares some territory with emulation, but it essentially means passing CPU instructions straight to the CPU, without translating them from some other CPU's machine language.
The difference here is the level; in descending order:
* Containerisation emulates a userland environment and shares the host's OS/kernel interfaces.
* Virtualisation emulates hardware devices, but not CPU instructions; there are some "para-virt" providers (Xen) that will share even a kernel here, but this is not that.
* Emulation emulates an entire computer, including its CPU.
I don’t think you’ve read the comment chain correctly because you’re literally just repeating what I just said.
Though you make a distinction between virtualisation and emulation when in fact they can be the same thing (not always, but sometimes; it just depends on the problem you're trying to solve).
>> The whole point of virtualization is you're running as close as possible to directly on native hardware. That's literally what makes virtualization distinct from emulation for VMs.
Unlikely. It’s too recent for Wayback machine to cache.
Their post was ostensibly the same but much more vaguely worded. And if you say “virtualisation is about being as close to the hardware as possible”, without much more detail, to someone who talks about wanting to run a VM with a different guest CPU, then it’s understandable that people will assume the reply is mixing up virtualisation with containerisation, since there’s nothing in virtualisation that says you cannot emulate hardware in the guest. Containerisation, on the other hand, is very much intended to run natively on the hardware.
Whose post was edited? If they're referring to my original one, then there's only one possible edit I made: I recall making a comment today that was missing a word, but I updated it instantly, so there were no replies or any real likelihood anyone had read it prior to my fix. My "edit" that now has to be an additional comment is at https://news.ycombinator.com/item?id=39061507 - I'm curious if you agree with my justification for treating virtualization as meaning explicitly non-emulated these days. From your other comments it seems you do agree with me, but I'd like to know how you feel about the rationale.
In response to the editing comment made, my rule for editing or changing comments is that if it's not an instantaneous edit, then the edit should be marked. In response to this thread (the whole thing, not just the one leading down these branches) I did write a stupidly long addendum but spent so long trying to find old marketing material I then couldn't make the edit \o/
My more careful rule about editing is that if someone has replied to a comment, I won't change the text that they replied to, unless the confusion is something very simple like a missing "not" or something where the original text was clearly wrong, and I'll generally do some variation of " .... [edit: not] ...". Otherwise I try to do it as adding additional text with a note saying the new text was added.
I find silent edits incredibly annoying, as it means you can't tell if/what has changed in a comment you replied to, and it allows people to exhibit some really screwed-up behaviour. Basically along the same lines as what people brought up when Apple added editing/deleting to iMessage, where the original betas (I think) didn't show that edits/deletions had occurred, nor what had been changed. I don't think it's reasonable in this day and age for sites like HN not to provide an edit history for comments.
> I'm curious if you agree with my justification for treating virtualization as meaning explicitly non-emulated these days.
I often see people try to make a distinction between hardware virtualisation and hardware emulation, but the reality is they're just different sides of the same coin. It's like saying TypeScript isn't a compiled language because it doesn't produce assembly like C would. Sometimes you need to emulate the entire CPU. Other times the guest and host CPUs are the same and that CPU supports hardware-assisted virtualisation. But even AMD64 virtualisation solutions have to emulate some parts of the hardware stack (like the virtual network card), regardless of any paravirtualisation and other virt extensions provided by the CPU, GPU, and so on.
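That virtual network card point is easy to sketch. Below is a toy model of software device emulation; every name and register offset here is invented for illustration, but the shape is the same even in a hardware-assisted VM: the guest's CPU instructions run natively, yet its device accesses trap out and are handled entirely in hypervisor software.

```python
# Toy model of device emulation inside an otherwise hardware-assisted VM.
# All names (ToyNic, register offsets) are invented for illustration.

class ToyNic:
    """A minimal emulated NIC: the guest drives it via 'registers';
    the hypervisor implements the behaviour entirely in software."""

    REG_TX_BUF = 0x00   # guest writes the byte to transmit here
    REG_TX_GO  = 0x04   # writing 1 here 'sends' the buffered byte
    REG_STATUS = 0x08   # guest reads link status here

    def __init__(self):
        self.tx_buf = 0
        self.wire = []        # bytes that left the 'card'
        self.status = 0x1     # bit 0: link up

    def mmio_write(self, offset, value):
        # In a real VMM this is reached via a trap on the guest's MMIO access.
        if offset == self.REG_TX_BUF:
            self.tx_buf = value & 0xFF
        elif offset == self.REG_TX_GO and value == 1:
            self.wire.append(self.tx_buf)

    def mmio_read(self, offset):
        if offset == self.REG_STATUS:
            return self.status
        return 0

nic = ToyNic()
nic.mmio_write(ToyNic.REG_TX_BUF, 0x42)  # guest driver stages a byte
nic.mmio_write(ToyNic.REG_TX_GO, 1)      # guest driver kicks the transmit
print(nic.wire)                          # → [66], i.e. the 0x42 the guest staged
```

No emulated CPU anywhere in sight, yet this is still emulation; that's the overlap being argued about.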
To compound the confusing jargon: all emulators are virtual machines, but not all virtual machines are emulators.
The distinction between containerisation and virtualisation is a lot easier to describe. Rather than placing hardware gaps (whether those hardware gaps are defined in software or hardware) like you do with virtualisation, instead you have all code running natively with only your kernel for protection.
I guess you could draw some parallels between modern hypervisors and kernels, but even here, you wouldn't run multiple kernels on top of each other in a container, whereas you would run multiple kernels on top of a hypervisor. The layer of separation is a lot different.
This dichotomy of hardware assisted vs emulation is too simplistic though.
> Sometimes you need to emulate the entire CPU. Other times the guest and host CPUs are the same and that CPU supports hardware assisted virtualisation.
You don't need dedicated hardware assistance to do some amount of virtualization if you have an MMU and memory protection. This is how early VMware on x86 worked: it was not a full-blown emulator and did not need to emulate the entire CPU. Most guest code ran unaltered without emulation; only certain ring 0 code had to be emulated.
VMware ESX and Workstation ran usably well back in the x86 days; I mean, it was a viable product. It was not just an "emulator", in contrast to something like BOCHS or early qemu.
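The split being described ("most guest code ran unaltered, only certain ring 0 code had to be emulated") can be sketched with a toy instruction set. This illustrates classic trap-and-emulate; pre-VT-x VMware actually used dynamic binary translation for kernel code because classic x86 didn't trap cleanly on all sensitive instructions, but the division of labour is the same. All opcodes here are invented.

```python
# Sketch of the trap-and-emulate split: unprivileged 'instructions' run
# directly, while privileged ones fault to the monitor, which emulates them.
# The instruction set is invented purely for illustration.

PRIVILEGED = {"out", "hlt"}        # ops a guest may not execute natively

def run_guest(program):
    regs = {"acc": 0}
    io_log = []
    for op, arg in program:
        if op in PRIVILEGED:
            # 'Trap': the monitor emulates the privileged instruction.
            if op == "out":
                io_log.append(regs["acc"])
            elif op == "hlt":
                break
        else:
            # 'Native' execution: no monitor involvement needed.
            if op == "mov":
                regs["acc"] = arg
            elif op == "add":
                regs["acc"] += arg
    return regs, io_log

regs, io_log = run_guest([("mov", 5), ("add", 3), ("out", None), ("hlt", None)])
print(regs["acc"], io_log)   # → 8 [8]
```

The point: only two of the four instructions ever touched the monitor, which is why this is so much faster than emulating every instruction.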
I don't think that's an accurate categorization, really. Emulation is a technique that may be used in parts of the implementation of virtualization, but they're different concepts; one is not a subset of the other.
> All emulators are a virtual machine
No, I can emulate a single piece of hardware like a network adapter. I can emulate just a CPU core independent of a particular virtual environment.
> This is how early VMware on x86 worked, it was not a full blown emulator, it did not need to emulate the entire CPU.
Of course it didn't. But it did (and parts of it still does) emulate hardware. Which is my point.
> This dichotomy of hardware assisted vs emulation is too simplistic though.
Is it though? Or is the need to define different aspects of virtualisation as entirely different fields itself too simplistic? I'm acknowledging that there is shared heritage and commonality between principles -- even to the extent that some directly borrow from the other.
You say "dichotomy" to refer to my comments yet you're the one trying to divide a complex field into small pigeonholes without acknowledging that they're intermingled. This is what PR people do to sell products to customers, not engineers like us.
> VMware ESX and workstation ran usably well back in the x86 days
Workstation predates ESX by a few years and the 1.x versions of Workstation really didn't run well _at all_. I used it personally, it was a bloody cool tech demo and even back then I could see the potential, but it was far too slow to use for anything serious. Particularly when UNIX and BSDs were still in vogue (albeit barely) and they had excellent support for application sandboxing via containerisation.
> I don't think that's an accurate categorization, really. Emulation is a technique that may be used in parts of the implementation of virtualization,
So you agree with the technical description but not the categorisation?
> but they're different concepts, one is not a subset of the other.
That's like saying DocumentDB and Postgres aren't both databases because one is SQL and the other is not.
The concepts behind emulation and virtualisation are similar enough that I'd argue one is a subset of the other. What you're discussing is the implementation detail of one form of virtualisation -- bearing in mind that there are other ways to run a virtual machine which we also haven't covered here. You've even agreed yourself that a VM requires parts of the full machine to be emulated for it to work. So the only reason people don't (still) refer to emulation as a subset of virtualisation is marketing.
> It was not just an "emulator", in contrast to something like BOCHS or early qemu.
I really don't get what you're trying to say with "early qemu". qemu still does full emulation. It also supports virtualisation too. If anything, it's another example of the point that I'm making which is that hardware virtualisation and emulation are too frequently intermingled for it to be sensible claiming they're distinct categories of computing.
> No, I can emulate a single piece of hardware like a network adapter. I can emulate just a CPU core independent of a particular virtual environment.
And that's exactly why they're a subset of virtual machines. I know this isn't the common way to refer to a VM but I'd argue that an emulated hardware adapter is still a virtual machine because it takes input, returns output, and is sandboxed and self contained. In the pure mathematical sense, it is a virtual machine just like how some software runtimes are also classified as virtual machines.
I feel a lot of the issue here is down to businesses redefining common terms over the years to make their products seem extra special.
By "early qemu" I mean qemu prior to KVM and KQEMU (i.e. between 2003 and 2007), when it only did full CPU emulation.
> Workstation predates ESX by a few years and the 1.x versions of Workstation really didn't run well _at all_
It improved quickly. By 3.0 (early 2002) I was using a Windows 2000 desktop with Visual Studio on a Linux host as a daily-driver workstation. That was still on pre-VT-x x86.
I then used this on a Pentium M (no VT-x) Thinkpad for years.
> I know this isn't the common way to refer to a VM but I'd argue that an emulated hardware adapter is still a virtual machine ...
The thing is overall I agree with you, I think. I agree that these terms have been used in different ways over the years to discuss overlapping concepts. Maybe the only thing is I was pushing back on a notion of "one true" ontology, but maybe that was not really your point.
I also thought you were implying that system virtual machines for targets without hardware assistance (your other comment seems to suggest this) require full CPU emulation, but perhaps that was not your intent.
> I also thought you were implying that system virtual machines for targets without hardware assistance (your other comment seems to suggest this) require full CPU emulation, but perhaps that was not your intent.
In fairness, I did write my comment that way. So I can see why that's the conclusion you drew. Sorry for the confusion there.
1. containerization is often implemented these days in terms of partial virtualization, because the historical approaches, which were essentially a bunch of variations on chrooting, were not sufficiently isolated to form a security boundary for a multiuser "cloud" hosting service.
2. virtualization: as my update/comment up the thread said, the definition of virtualization as "guest OS code runs directly on the CPU" has been pretty much the standard definition for a couple of decades at this point. If you say you offer virtualization, but implement it under an academically "accurate" definition that allows emulation, I would imagine you would have difficulty finding a user who accepts that definition. Again, as I've stated elsewhere, the "hardware virtualization support" that CPUs have acquired since the 90s is essentially multi-level page tables and CPU mode options so that a virtual machine runtime doesn't need to rewrite kernel-mode code. It has not meaningfully impacted user-mode code at all.
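The "multi-level page tables" part can be sketched as a two-stage address walk. Everything below (addresses, table contents) is invented for illustration, and real EPT/NPT walks are multi-level within each stage, but the nesting is the point: the guest manages its own table, the hypervisor manages the second one, and the MMU composes them.

```python
# Toy model of nested (two-stage) address translation, the core of what
# hardware virtualization support (Intel EPT / AMD NPT) provides.
# All addresses and mappings are invented for illustration.

PAGE = 0x1000  # 4 KiB pages

# Stage 1: the guest's own page table (guest-virtual -> guest-physical)
guest_pt = {0x0000: 0x2000, 0x1000: 0x5000}

# Stage 2: the hypervisor's nested table (guest-physical -> host-physical)
nested_pt = {0x2000: 0x9000, 0x5000: 0x7000}

def translate(gva):
    """Walk both tables the way the MMU would, at page granularity."""
    page, off = gva & ~(PAGE - 1), gva & (PAGE - 1)
    gpa_page = guest_pt[page]       # miss here would fault to the guest kernel
    hpa_page = nested_pt[gpa_page]  # miss here would fault to the hypervisor
    return hpa_page + off

print(hex(translate(0x0042)))   # → 0x9042
print(hex(translate(0x1abc)))   # → 0x7abc
```

Because the hardware does this composition itself, user-mode guest code just runs; no rewriting required, which is exactly the claim above.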
It is not reasonable to reject the evolution of language when considering the meaning of a word, nor the context of the environment in which it is discussed. The fact that 30 years ago you could say an emulator was a virtual machine is not relevant today, when the terminology very clearly does not include ISA emulation. This is as true for "virtualization" as for any other tech terminology. For example, no one would accept me presenting a person good at maths as a "computer" either, despite that being what the word used to mean.
> containerization is often implemented these days in terms of partial virtualization, because the historical approaches, which were essentially a bunch of variations on chrooting, were not sufficiently isolated to form a security boundary for a multiuser "cloud" hosting service.
Containerisation has zero virtualisation. There's no virtual environment at all. It's just using kernel primitives to create security boundaries around native processes and syscalls.
You're also talking very Linux specific. Linux was late to the containerisation game. Like decades late. And frankly, I think FreeBSD and Solaris still have superior implementations too. Linux is getting there though.
> 2. virtualization: as my update/comment up the thread said, the definition of virtualization as "guest OS code runs directly on the CPU" has been pretty much the standard definition for a couple of decades at this point.
The issue I take with your comment here is that it implies containerisation doesn't run directly on the CPU, or that some forms of emulation cannot run directly on the CPU either -- sometimes the instruction sets are similar enough that you can dynamically translate the differences in real time.
The actual implementation of these things is, well, complicated. Making sweeping generalisations that one cannot do the other is naturally going to be inaccurate.
> If you say you offer virtualization, but you implement it using an academically "accurate" definition that allows emulation I would imagine that you would have difficulty finding a user that accepts that definition.
What you're talking about now is entirely product marketing. And that's what I take issue with when talking about these topics. Just because something is marketed as "emulation" or "virtualisation", it doesn't mean the disciplines of one cannot be a subset of the other.
> Again, as I've stated elsewhere "hardware virtualization support" that CPUs have acquired since the 90s is essentially multi-level page tables and cpu mode options so that a virtual machine runtime doesn't need to rewrite kernel mode code.
If you're talking about x86 (which I assume is the case, because that's when virtualisation became a commodity) then you're out by about a decade. It was around 2006 when Intel and AMD released their x86 virt extensions, and before then, VMware ran like shit on consumer hardware.
I was there, using VMware 1.0 in the late 90s / early 00s. I remember the excitement I had for x86 finally adding hardware assistance.
Sure, there were ways to get virtualised software to run natively on x86 before then. But it didn't work for privileged code. And as we all know, transitions in and out of ring 0 are expensive. So having that part of the process artificially slowed down killed VMware's performance for a lot of people.
This is also why paravirtualisation was such a popular concept back then. It allowed you to bypass those constraints.
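The paravirtualisation win can be sketched as a trap-count comparison (all names here invented): an emulated device costs the guest one expensive exit per register access, whereas a paravirtualised guest, knowing it's virtualised, makes a single explicit hypercall.

```python
# Toy contrast: emulated device access (many trapped register pokes)
# versus a paravirtualised hypercall (one explicit call). Names invented.

traps = {"count": 0}   # each increment models one expensive VM exit

def emulated_disk_write(block, data):
    # Full emulation: the guest driver pokes several device registers,
    # and every single access traps out to the hypervisor.
    for _ in ("set_block", "set_len", "copy_data", "go"):
        traps["count"] += 1
    return ("disk", block, data)

def hypercall_disk_write(block, data):
    # Paravirt: the guest driver asks the hypervisor directly, once.
    traps["count"] += 1
    return ("disk", block, data)

traps["count"] = 0
emulated_disk_write(7, b"x")
emulated_cost = traps["count"]

traps["count"] = 0
hypercall_disk_write(7, b"x")
paravirt_cost = traps["count"]

print(emulated_cost, paravirt_cost)   # → 4 1
```

Same end result on the virtual disk, a quarter of the exits; that was the whole pitch of Xen-era paravirt drivers.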
> It is not reasonable to reject the evolution of language when considering the meaning of a word, nor the context of the environment in which it is discussed.
But that's the problem here. The evolution is completely arbitrary. It's based on marketing and PR rather than technology. In technical terms, there are multiple different ways to virtualise hardware (even without discussing emulation). And in technical terms, full virtual environments are still dependent on parts of the machine being emulated. So it's not unreasonable to argue that emulation is a subset of virtualisation. It always used to be considered that way, and little has changed aside from how companies now market their software to management.
> For example, no one would accept me presenting a person good at maths as a "computer" either, despite that being what it used to mean.
That's probably not the best of examples because even in the 1800s a "computer" wasn't just someone who was good at maths. It was someone who computed mathematical problems. The term was used to define the machine (albeit a fleshy organic one) rather than the skill.
To that end, you do still sometimes see people refer to themselves as a computer if they're manually computing stuff. It's typically said in jest, but it does demonstrate that the term hasn't actually drifted as far from its original meaning as you claim.
There definitely are other examples of terms that have evolved and I'm all for languages evolving. Plenty of terms we use in tech are borrowed terms that have evolved. Like `file`, `post`, `terminal`, etc. But they still refer to the same specific properties of computing which have evolved with time.
The problem with this virtualisation vs emulation discussion is that those concepts haven't evolved in the same dramatic way. Methods of virtualisation have diversified, but virtualisation still relies on elements of emulation, and modern emulation borrows a lot from principles learned through hardware-assisted virtualisation. They're fields that are still heavily entwined. And "virtualisation" itself isn't a single method of implementation (neither is emulation, for that matter, but it's more so for virtualisation). So arguing that emulation isn't a subset of virtualisation is just bullshit marketing.
And that's why I'm unwilling to acknowledge this rebranding of the term. Once you start building virtual machines you end up with a real mix and match of paravirtualisation, hardware assisted virtualisation, emulation and so on and so forth, all components powering the same VM.
Note that these aren't necessarily layered: You can virtualize with emulation, but you can also emulate without virtualization, which is what e.g. Rosetta does on macOS, or QEMU's userland emulation mode.
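That user-mode flavour can be sketched too (toy ISA, invented for illustration): the foreign program's CPU instructions are interpreted, but its system calls are handed straight to the host kernel rather than to an emulated one. That's the Rosetta / `qemu-user` arrangement in miniature.

```python
# Sketch of user-mode emulation: interpret a toy foreign-ISA program,
# but pass its 'syscalls' through to the real host OS.
# The instruction set is invented for illustration.

import os

def emulate_user_program(program):
    """Run a toy foreign-ISA program; 'syscall' goes to the real host."""
    acc = 0
    for op, arg in program:
        if op == "mov":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "syscall" and arg == "getpid":
            acc = os.getpid()    # no emulated kernel: the host answers directly
    return acc

pid = emulate_user_program([("mov", 0), ("syscall", "getpid")])
print(pid == os.getpid())   # → True: the 'emulated' program saw the host's pid
```

No emulated devices, no second kernel, no virtual machine in the system-VM sense; just an emulated CPU in front of the host OS.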