
>> 10Mbps is still way too high of a minimum. It's more than YouTube uses for full motion 4k.

> And 2-3 JPEGs per second won't even look good at 1Mbps.

Unqualified claims like these are utterly meaningless: it depends too much on exactly what you're doing, and some sorts of images compress much better than others. To put numbers on it, 2-3 JPEGs per second at 1Mbps is roughly 40-60KB per frame, which is plenty for some content and hopeless for other content.


> In modern times, the bandwidth saved by Nagle is rarely worth the latency cost.

I actually took some packet dumps and did the math on this once, assuming any two or more non-MTU-sized segments from the same flow within 10ms could have been combined (pretty conservative, IMO). The extra bandwidth cost of NODELAY amounted to just over 0.1% of the total AWS bandwidth bill, which, while negligible, was more than I expected.
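
For anyone following along, disabling Nagle is a one-liner; a minimal sketch of the standard setsockopt() call (nothing AWS-specific about it):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>
    #include <stdio.h>

    /* Disable Nagle on a connected TCP socket: small writes go out
       immediately instead of being coalesced into fewer segments. */
    static int set_nodelay(int fd)
    {
        int one = 1;

        if (setsockopt(fd, IPPROTO_TCP, TCP_NODELAY,
                       &one, sizeof(one)) < 0) {
            perror("setsockopt(TCP_NODELAY)");
            return -1;
        }
        return 0;
    }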


There's a reason nobody does this: RAM is expensive. Disabling overcommit on your typical server workload will waste a great deal of it. TFA completely ignores this.

This is one of those classic money vs idealism things. In my experience, the money always wins this particular argument: nobody is going to buy more RAM for you so you can do this.


Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.

The difference is that you'll fail allocations, where there's a reasonable interface for reporting errors, rather than failing at demand paging, when writing to previously unused pages, where there's no good interface.

Of course, there are many software patterns where excessive allocations are made without any intent of touching most of the pages; that's fine with overcommit, but it will lead to allocation failures when you disable overcommit.
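
To make the two failure modes concrete, a minimal sketch (assumes Linux; it deliberately allocates until failure, so don't run it anywhere you care about). With vm.overcommit_memory=2 the loop ends with malloc() returning NULL, which the program can handle; with overcommit enabled, malloc() keeps succeeding and the failure arrives later, at the memset(), via the OOM killer:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK (1UL << 30) /* 1 GiB */

    int main(void)
    {
        size_t total = 0;
        void *p;

        /* With overcommit disabled, this loop stops at the commit
           limit: malloc() returns NULL and we can report it cleanly. */
        while ((p = malloc(CHUNK))) {
            /* With overcommit enabled, the failure would instead
               happen here, at first touch, via the OOM killer. */
            memset(p, 0xff, CHUNK);
            total += CHUNK;
        }

        printf("allocation failed after %zu GiB\n", total >> 30);
        return 0;
    }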

Disabling overcommit does make fork in a large process tricky. I don't think the rant about redis in the article is totally on target: fork-to-persist is a pretty good solution, and copy-on-write is a reasonable cost to pay while dumping the data to disk; everything returns to normal when the dump is done. But without overcommit, fork doubles the memory commitment while the dump is running, which is likely to cause issues if redis is large relative to memory; that's worth checking for and warning about. The linked jemalloc issue seems like it could be problematic too, but I only skimmed it; that seems worth warning about as well.

For the fork path, it might be nice if you could request overcommit in certain circumstances... fork but only commit X% rather than the whole memory space.
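
The fork-doubling case is easy to demonstrate; a minimal sketch (assumes vm.overcommit_memory=2 and a buffer bigger than the remaining commit headroom; the 4GiB figure is arbitrary). The child never dirties a page, but strict accounting has to charge the whole COW image up front, so fork() itself fails with ENOMEM:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        /* Stand-in for redis's dataset: a big dirty heap. */
        size_t len = 4UL << 30; /* adjust to > half of RAM */
        char *data = malloc(len);

        if (!data)
            return 1;
        memset(data, 0x5a, len);

        /* Strict accounting must reserve another 4 GiB of commit
           for the COW copy, even though the child will never
           dirty a single page of it. */
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork"); /* ENOMEM under overcommit=2 */
            return 1;
        }
        if (pid == 0)
            _exit(0); /* child: touch nothing, just exit */

        wait(NULL);
        return 0;
    }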


You're correct it doesn't prefault the mappings, but that's irrelevant: it accounts them as allocated, and a later allocation which goes over the limit will immediately fail.

Remember, with overcommit=2 the limit is artificial, defined by the user via overcommit_ratio and user_reserve_kbytes. Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).
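
For concreteness, the limit under overcommit=2 works out to (per the kernel's overcommit-accounting doc, quoting from memory; vm.overcommit_kbytes, when nonzero, replaces the ratio term):

    CommitLimit = (MemTotal - Hugetlb) * overcommit_ratio / 100 + SwapTotal

The default overcommit_ratio is 50, so on a swapless box the stock strict-mode setting leaves half of RAM unallocatable; user_reserve_kbytes additionally holds a slice back from ordinary processes.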


> Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).

The RAM is not unusable, it will be used. Some portion of RAM may be unallocatable, but that doesn't mean it's wasted.

There's a tradeoff. With overcommit disabled, you will get allocation failure rather than OOM killer. But you'll likely get allocation failures at memory pressure below that needed to trigger the OOM killer. And if you're running a wide variety of software, you'll run into problems because overcommit is the mainstream default for Linux, so many things are only widely tested with it enabled.


> The RAM is not unusable, it will be used. Some portion of ram may be unallocatable

I think that's a meaningless distinction: if userspace can't allocate it, it is functionally wasted.

I completely agree with your second paragraph, but again, some portion of RAM obtainable with overcommit=0 will be unobtainable with overcommit=2.

Maybe a better way to say it is that a system with overcommit=2 will fail at a lower memory pressure than one with overcommit=0. Additional RAM would have to be added to the former system to successfully run the same workload. That additional RAM is the waste.


It's absolutely wasted if the apps on the server don't use the disk (the disk cache is pretty much the only thing that can use that reserved memory).

You can have a simple web server that took less than 100MB of RAM take gigabytes of commit, just because it forked a few COW'd worker processes.


If the overcommit ratio is 1, there is no portion rendered unusable? That seems to contradict your claim that it "necessarily" wastes RAM.

Read the comment again, that wasn't the only one I mentioned.

Please point out what you're talking about, because the comment is short and I read it fully multiple times now.

> Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.

Doesn't really change the point. The RAM might not be completely wasted, but given that nearly every app will over-allocate and just use the pooled memory, you will waste memory that could otherwise be used to run more stuff.

And it can be quite significant: it's pretty common for server apps to start a big process and then keep a pool of COW'd workers, one per connection, so an apache2 that commits maybe 60MB per worker is in the gigabytes range at very small pool sizes.

The blog is essentially a call to "let's make apps like we did in the DOS era", which is ridiculous.


If you have enough swap there's no waste.

Wasted swap is still waste, and the swapping costs cycles.

With overcommit off the swap isn't used; it's only necessary for accounting purposes. I agree that it's a waste of disk space.

How does disabling overcommit waste RAM?

Because userspace rarely actually faults in all the pages it allocates.

Surely the source of the waste here is the userspace program not using the memory it allocated, rather than whether or not the kernel overcommits memory. Attributing this to overcommit behavior is invalid.

The waste comes with an asterisk.

That "waste" (which overcommit turns into "not a waste") means you can do far fewer allocations: with overcommit you can just allocate a bunch of memory up front and use it gradually, instead of having to call malloc() every time you need a bit of memory and free() every time you're done with it.

You'd also increase memory fragmentation that way, possibly hurting performance.

It's also pretty much required for GCed languages to work sensibly.
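
That allocate-up-front pattern, as a minimal sketch (Linux-specific, assumes a 64-bit address space; with overcommit on, only the touched pages of the arena ever consume RAM, whereas overcommit=2 charges the whole reservation at mmap() time):

    #include <stdio.h>
    #include <sys/mman.h>

    #define ARENA_SIZE (1UL << 33) /* reserve 8 GiB of address space */

    int main(void)
    {
        /* One big anonymous mapping up front: a bump allocator can
           then hand out memory with no further syscalls, and pages
           fault in lazily as they're used. */
        char *arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (arena == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* "allocate" by bumping an offset into the arena */
        size_t used = 0;
        char *obj = arena + used;
        used += 4096;
        obj[0] = 1; /* first touch is what actually costs a page */

        printf("reserved %lu GiB, resident ~1 page\n", ARENA_SIZE >> 30);
        return 0;
    }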


Obviously. But all programs do that, and have done it forever; it's literally the very reason overcommit exists.

Only the poorly-written ones, which are unfortunately the majority of them.

Reading COW memory doesn't force a copy. "Unallocated" doesn't mean literally unused.

And even if it's not COW, there's nothing wrong or inefficient about opportunistically allocating pages ahead of time to avoid syscall latency. Or mmapping files and deciding halfway through you don't need the whole thing.

There are plenty of reasons overcommit is the default.
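
The mmap() case, as a minimal sketch (the file path is hypothetical; a read-only MAP_SHARED file mapping isn't charged against the commit limit, and only the pages you actually read ever fault in):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/big.dat", O_RDONLY); /* hypothetical file */
        struct stat st;

        if (fd < 0 || fstat(fd, &st) < 0)
            return 1;

        /* Map the whole file, but only ever look at the header:
           the untouched remainder never faults in. */
        const char *map = mmap(NULL, st.st_size, PROT_READ,
                               MAP_SHARED, fd, 0);
        if (map == MAP_FAILED)
            return 1;

        printf("first byte: %d of %lld total\n",
               map[0], (long long)st.st_size);

        munmap((void *)map, st.st_size);
        close(fd);
        return 0;
    }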


overcommit 0:

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 forks 32 workers.
- Apache2 takes 50MB + (per-worker vars * 32).

overcommit 2:

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 forks 32 workers.
- Apache2 takes 50MB + (50MB * 32) + (per-worker vars * 32).

Boom: now your simple apache server serving some static files can't fit on a 512MB VM, and needs in excess of 1.7GB of commit just to allocate.


If you pick "I no longer need this item" as the return reason, you are pretty much 100% guaranteed to get a full refund from Amazon scammers, in my experience. If you pick any reason suggesting the product is deficient, they'll fight you and waste your time, even if it's demonstrably true. Given that, I take the risk; it saves me money when the thing turns out to work.

> If you pick any reason suggesting the product is deficient, they'll fight you and waste your time, even if it's demonstrably true.

Presumably because if those returns were processed, it would give Amazon cause to take action against them.


Exactly. If amazon wants buyers to actually report scammers, they need to make it easier. I gave up on it long ago, it's not worth my time.

This classic paper from 1996 describes a simple unbounded MPMC queue: https://www.cs.rochester.edu/~scott/papers/1996_PODC_queues....
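
That's Michael & Scott's "Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms". The non-blocking algorithm needs care around memory reclamation, but the two-lock variant from the same paper is short enough to sketch from memory (C + pthreads; the trick is a dummy node, so enqueuers and dequeuers take different locks and never touch the same pointer):

    #include <pthread.h>
    #include <stdlib.h>

    struct node {
        struct node *next;
        void *value;
    };

    struct queue {
        struct node *head;          /* always points at the dummy */
        struct node *tail;
        pthread_mutex_t head_lock;  /* taken by dequeuers */
        pthread_mutex_t tail_lock;  /* taken by enqueuers */
    };

    void queue_init(struct queue *q)
    {
        struct node *dummy = calloc(1, sizeof(*dummy));

        q->head = q->tail = dummy;
        pthread_mutex_init(&q->head_lock, NULL);
        pthread_mutex_init(&q->tail_lock, NULL);
    }

    void enqueue(struct queue *q, void *value)
    {
        struct node *n = calloc(1, sizeof(*n));

        n->value = value;
        pthread_mutex_lock(&q->tail_lock);
        q->tail->next = n;  /* link in at the tail... */
        q->tail = n;        /* ...and swing the tail pointer */
        pthread_mutex_unlock(&q->tail_lock);
    }

    /* Returns NULL if the queue is empty. */
    void *dequeue(struct queue *q)
    {
        void *value = NULL;

        pthread_mutex_lock(&q->head_lock);
        struct node *old_dummy = q->head;
        struct node *first = old_dummy->next;
        if (first) {
            value = first->value;
            q->head = first;  /* first becomes the new dummy */
            free(old_dummy);
        }
        pthread_mutex_unlock(&q->head_lock);
        return value;
    }

With one lock per end, an enqueue and a dequeue can proceed concurrently; the dummy node is what keeps head and tail from aliasing when the queue is empty.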

No. Never accept responsibility without power. That is a lesson I wish I'd learned much earlier in my life.

This, and use power sparingly.

I love sim hijinks. It's possible to reliably land a 737 on the carrier in X-plane: just take off with 30min of fuel, drag it in with full flaps and high power, and set the parking brake before you touch down.

I wouldn't be surprised if it's possible in real life. The Navy tested a C-130 on the USS Forrestal and accomplished 21 landings. I'm sure a C-130 has better short-field performance than a 737, but they were also testing it with substantial cargo on board. Official figures for required runway distance for a 737 are far in excess of a carrier's deck length, of course, but those figures include weird things like "safety" that are not strictly required, and tend not to fully account for the 40+kt headwind you can get from a carrier steaming into the wind.

> The Navy tested a C-130 on the USS Forrestal and accomplished 21 landings.

My son plays DCS, and that game just got a C-130 module; he showed me YouTube vids of people landing on carriers. I had to think hard about whether that was an actual thing or not. Seems like a C-130 can land/take off pretty much anywhere, so why not a carrier?


Also, of course, the 737 would have to have a hook, which shortens the landing roll quite a bit (as well as the lifespan of the fuselage, I assume), as long as you catch the wire, that is. Wingspan-wise, this could also work. @RedBull, how bout it?!

Not necessarily. The C-130 didn't use one. That doesn't mean the 737 could get away with it, but with the carrier going max speed, a decent wind, the plane at min weight and speed, and touching down as early as possible, I wouldn't be surprised if the landing roll was shorter than the length of the deck.

Lawyers who make these decisions are so risk averse in my experience they'd still probably insist on it being nonmodifiable.

Sure, which is why right to repair laws are so important.

I understand this discussion as being about how society should deal with it, not how you could try to make the argument internal to a company.


Right, I'm saying I don't think codifying limitations on liability in law will be effective, because it probably wouldn't be absolute enough to satisfy the lawyers. You need a law that actually says "the user must be free to modify the device".

They don't seem to be too risk averse about misusing free software though.

The verbiage in the PR reminds me of a bit from The Night Watch [1]:

> [...] and at some point, you will have to decide whether serifs are daring statements of modernity, or tools of hegemonic oppression that implicitly support feudalism and illiteracy

[1] https://www.usenix.org/system/files/1311_05-08_mickens.pdf


This really mirrors my experience trying to get LLMs to clean up kernel driver code: they seem utterly incapable of simplifying things.
