The point about inducing segmentation faults is interesting, and it sounds like it could work as an implementation of the `hint_read` mechanism. I guess it would mostly be a question of how performant userfaultfd or SIGSEGV handling is. In any case it will be sub-optimal compared to having it in the kernel's own fault handler, since each userfaultfd read or SIGSEGV callback is already a user-kernel-user switch, and the handler still needs another system call to do the actual reads, and yet more system calls to mmap the bits of memory back in.
Presumably the fine-grained mmaps will be another source of overhead, since the kernel now has many small mappings to track. And each mmap is itself a system call: instead of a single fault or a single call to `readv`, you're making many `mmap` calls.
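To put the syscall count in concrete terms, here's a minimal sketch of the fine-grained-mapping approach, assuming golang.org/x/sys/unix; the window size and function name are made up:

```go
package sketch

import "golang.org/x/sys/unix"

const winSize = 1 << 20 // hypothetical 1 MiB windows, page-aligned

// mapWindows maps a file window-by-window. Each iteration is a full
// mmap system call and adds another mapping for the kernel to track.
func mapWindows(fd int, fileSize int64) ([][]byte, error) {
	var wins [][]byte
	for off := int64(0); off < fileSize; off += winSize {
		// One system call per window, versus one readv for all of them.
		// (The last window may extend past EOF; touching bytes beyond
		// the final file page would raise SIGBUS.)
		w, err := unix.Mmap(fd, off, winSize, unix.PROT_READ, unix.MAP_PRIVATE)
		if err != nil {
			return nil, err
		}
		wins = append(wins, w)
	}
	return wins, nil
}
```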
> I tried implementing your "hint_read" years ago in userspace in a search engine I wrote, by having a "readahead thread" read from pages before the main thread got to them.
Yeah, doing it in another thread will also have quite a bit of overhead. You need some sort of synchronisation with the other thread, and ultimately the "readahead" thread will need to induce the disk reads through something other than a page fault to achieve concurrent reads: within the readahead thread, the page faults are still synchronous, and it has no way of knowing where the future page faults will land.
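Here's a minimal sketch of that readahead-thread scheme in Go, assuming a file small enough to map in one piece; the file name and the synchronisation policy are hypothetical. It exhibits exactly the problem described above: each touch inside the readahead goroutine blocks on its own synchronous fault, so the disk reads never overlap with each other.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

const pageSize = 4096

func main() {
	f, err := os.Open("data.bin") // hypothetical input file
	if err != nil {
		panic(err)
	}
	defer f.Close()
	st, err := f.Stat()
	if err != nil {
		panic(err)
	}

	mm, err := unix.Mmap(int(f.Fd()), 0, int(st.Size()), unix.PROT_READ, unix.MAP_PRIVATE)
	if err != nil {
		panic(err)
	}
	defer unix.Munmap(mm)

	// "Readahead" goroutine: touch one byte per page. Each touch is a
	// synchronous major fault, so the reads happen one page at a time.
	done := make(chan struct{})
	go func() {
		defer close(done)
		var sink byte
		for off := 0; off < len(mm); off += pageSize {
			sink += mm[off]
		}
		_ = sink // result discarded; we only wanted the faults
	}()

	// Consumer: with luck, most pages are already resident by the time
	// we touch them (minor faults). Real code would need finer-grained
	// synchronisation than waiting for the whole file, which is exactly
	// the coordination overhead discussed above.
	<-done
	var sum byte
	for _, b := range mm {
		sum += b
	}
	fmt.Println("checksum:", sum)
}
```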
It might help to do `readv` into dummy buffers to force the kernel to load the pages from disk into memory, so that the subsequent page faults are minor instead of major. You're still not reducing the number of page faults, though, and the total number of mode switches actually goes up.
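Something like this, as a sketch assuming golang.org/x/sys/unix; the function name and page count are made up:

```go
package sketch

import "golang.org/x/sys/unix"

const pageSize = 4096

// prefetch pulls n pages starting at off into the page cache with one
// vectored read into throwaway buffers, so that later faults on a
// mapping of the same file are minor rather than major.
func prefetch(fd int, off int64, n int) error {
	scratch := make([]byte, n*pageSize)
	iovs := make([][]byte, n)
	for i := range iovs {
		iovs[i] = scratch[i*pageSize : (i+1)*pageSize]
	}
	// One extra mode switch, and the data is copied into buffers we
	// immediately throw away; every page still costs a (minor) fault
	// when the mapping is actually touched.
	_, err := unix.Preadv(fd, iovs, off)
	return err
}
```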
Anyway, all of these workarounds are very complicated and will certainly carry a lot more overhead than plain vectored IO, so I would recommend just doing that. The overall point is that mmap isn't friendly to concurrent reads from disk the way io_uring or `readv` is.
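For contrast, the recommended path looks roughly like this: skip the mapping entirely and read straight into the buffers the program will actually use. Again a sketch assuming golang.org/x/sys/unix; the fixed record layout is invented:

```go
package sketch

import "golang.org/x/sys/unix"

// readRecords fills count records of recSize bytes starting at file
// offset off with a single vectored read: one user-kernel-user round
// trip for all of them. (Real code would also handle short reads.)
func readRecords(fd int, off int64, recSize, count int) ([][]byte, error) {
	recs := make([][]byte, count)
	for i := range recs {
		recs[i] = make([]byte, recSize)
	}
	if _, err := unix.Preadv(fd, recs, off); err != nil {
		return nil, err
	}
	return recs, nil
}
```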
Major page faults are essentially synchronous read calls. Go read calls, on the other hand, go through the runtime, so while one goroutine blocks in a read the scheduler can keep running other goroutines; a major page fault stalls the OS thread without giving the scheduler any chance to intervene.
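A sketch of why that matters, with hypothetical offsets and chunk size: each goroutine below blocks in its own read syscall while the scheduler keeps the rest of the program running, so the reads overlap.

```go
package sketch

import (
	"io"
	"os"
	"sync"
)

// concurrentReads issues one ReadAt per offset, each from its own
// goroutine. Every goroutine blocks in a read syscall, but the Go
// scheduler hands the freed-up capacity to other goroutines; a major
// page fault would instead stall the OS thread with no such handoff.
func concurrentReads(f *os.File, offsets []int64, chunk int) error {
	var wg sync.WaitGroup
	errs := make(chan error, len(offsets))
	for _, off := range offsets {
		wg.Add(1)
		go func(off int64) {
			defer wg.Done()
			buf := make([]byte, chunk)
			if _, err := f.ReadAt(buf, off); err != nil && err != io.EOF {
				errs <- err
			}
		}(off)
	}
	wg.Wait()
	close(errs)
	return <-errs // nil unless some goroutine reported an error
}
```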
Fundamentally, the benchmarks in this repository are broken: in the mmap case they never read any of the data [0], so there are basically no page faults to measure. With a well-written program, there shouldn't be a reason for mmap to be faster than read-based IO, and vectored IO can obviously be faster in various cases.
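For illustration, a hedged sketch of what the mmap side of such a benchmark would need to do to be meaningful: actually touch the mapped bytes so the kernel has to fault the pages in. The names here are illustrative, not taken from the repository.

```go
package sketch

const pageSize = 4096

// touchAll reads one byte per page so every page of the mapping is
// actually faulted in; returning the sum keeps the loads live instead
// of letting them be discarded as dead code.
func touchAll(mm []byte) byte {
	var sum byte
	for i := 0; i < len(mm); i += pageSize {
		sum += mm[i]
	}
	return sum
}
```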
Inducing segmentation faults is literally how the kernel implements memory mapping, and virtual memory in general, by the way. From the CPU's perspective, that page is simply unmapped. The kernel gets its own equivalent of a SIGSEGV (the "page fault" exception is the kernel's SIGSEGV, and the interrupt mechanism is its signal delivery), checks its private tables, decides the page currently lives on disk, schedules a read from disk, does other work in the meantime, and once the read completes it returns from the interrupt.
(It does get even deeper than that: from the CPU's perspective, the interrupt is very brief, just long enough to note that it happened and to avoid switching back to the thread that page-faulted. The rest of the work I described, although logically an "interrupt" from the application's perspective, happens with the CPU's "am I handling an interrupt?" flag set to false. It's equivalent to writing a signal handler that sets a flag marking the thread as blocked, edits its own return address so it returns to the scheduler instead of to the interrupted code, and then calls sigreturn to exit the handler.)
munmap + signal handling is terrible, not least because you don't want to be fucking with the page table that way: an unmap involves a cross-CPU TLB shootdown, which is slow in a "make the entire machine slow" kind of way.
[0] E.g., see here, where the byte slice is assigned to `_` instead of being used: https://github.com/perbu/mmaps-in-go/blob/7e24f1542f28ef172b...