What's the hurry to optimize away the revalidation requests when the user clicks reload? Is it just beancounter mindset about saving a few "304 Not Modified" responses? In that case they shouldn't count the percentage of requests, but the percentage of bandwidth or CPU seconds. Tiny responses are much cheaper with HTTP/2, so be sure to benchmark with that.
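For anyone unfamiliar with what a revalidation request actually looks like, here's a minimal sketch of the round trip in question, assuming a server that sets an ETag (the URL is just a placeholder):

```python
import requests

url = "https://example.com/static/app.js"  # placeholder URL

# Initial fetch: the server returns the body plus a validator (ETag).
first = requests.get(url)
etag = first.headers.get("ETag")

# Revalidation: send the cached ETag back with If-None-Match; an
# unchanged resource gets a tiny 304 response with no body.
if etag:
    revalidated = requests.get(url, headers={"If-None-Match": etag})
    print(revalidated.status_code)  # 304 if unchanged, 200 otherwise
```

The whole argument is about whether that second, near-empty round trip is worth eliminating.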
I'd be happier if my browser made fewer requests, however small they may be. Even if they make up a tiny percentage of my internet traffic, it all adds up. Making computers do less unnecessary work is a good thing.
I'm not sure it's a good argument to pick the biggest companies and then tally up the effects of a micro-improvement. You could argue for all kinds of complexity-increasing changes resulting in 0.01% efficiency improvements this way.
Think dropped requests. If you're operating with 99% packet loss, like many people around the world, minimizing the absolute number of requests can have a dramatic impact on load times.
What do you mean? I've operated at 99% packet loss plenty of times in the rural parts of Vietnam. The key is you can't use any of the websites or apps that are popular in North America / Europe.
The likelihood of even opening a TCP connection is going to be very low. For step 1, stacks typically send 5 SYNs before giving up. So roughly 95% of your TCP connection attempts will slowly fail due to timeout without ever getting past step 1 of the 3-way handshake. To say nothing of actually transmitting or receiving any payload data successfully, which you would have to do many times over to load a web page.
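Back-of-the-envelope version of that 95% figure, assuming independent losses and the 5-SYN retry limit mentioned above:

```python
# With 99% packet loss and 5 SYN retries, the chance that every SYN
# is dropped (assuming independent losses) is 0.99 ** 5.
loss_rate = 0.99
syn_retries = 5

p_all_syns_lost = loss_rate ** syn_retries
print(f"{p_all_syns_lost:.3f}")  # ~0.951 -> ~95% of attempts time out
# on step 1 alone, before the SYN-ACK and final ACK even come into play.
```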