
IPv6 changes far more than the address size.

Why mandate the use of Neighbor Discovery Protocol instead of the much simpler ARP?

Why change the rules for UDP checksums? The checksum field in UDP over IPv4 is optional. The checksum field in UDP over IPv6 is required. This is a major pain for protocols that change fields in transit, such as PTP.
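
Roughly, the receiver-side difference looks like this (a minimal sketch; the function name and structure are mine, not from any particular stack):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch only: how a receiving stack treats a zero UDP checksum.
     * RFC 768 (IPv4): checksum 0 means "sender did not compute one" -> accept.
     * RFC 8200 (IPv6): a zero checksum is invalid -> the datagram must be dropped. */
    static bool udp_checksum_acceptable(uint16_t checksum_field, bool is_ipv6)
    {
        if (checksum_field != 0)
            return true;      /* nonzero: verify it as usual (not shown) */
        return !is_ipv6;      /* zero: legal opt-out on IPv4, drop on IPv6 */
    }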

I could go on. There are important reasons for each of these decisions, but the fact is that every little change slows adoption. IETF could have stayed focused on solving address scarcity alone, but instead they chose to boil the ocean.



As you noted, there were important reasons for those changes – they even helpfully summarized them in a dedicated section of https://datatracker.ietf.org/doc/html/rfc4861#section-3.1; the checksum benefits are obvious – and none of them were major factors in the rollout delays.

The single biggest factor was that changing the header format broke every decoder in existence, and it took a long time to get all of that old hardware and software aged out of common use, since there wasn’t a legal or financial compulsion to do so. Nobody delayed migration because they liked supporting ARP+ICMP more or critically depended on being able to half-ass the implementation of an obscure time sync protocol - if you don’t update checksums, lots of things will stop your traffic even in an IPv4-only world. The main reason was that everyone had to replace their network infrastructure, review firewall rules, etc., and early adopters were only rewarded with more pain. Given how painful that has been, I sympathize with the people who said we should go to 128 bits because we never want to repeat the process.


As someone who has spent the last several years of my career implementing picosecond-precise time transfer using an "obscure time sync protocol", and who holds a few patents in this field, I’d ask you to check your dismissive attitude.

When you're working on an FPGA or an ASIC, everything about the UDP checksum is a total pain in the ass. It is entirely redundant with the MAC-layer checksum. The field comes before the contents it checks, and depends on the entirety of the message contents, which must be buffered in the meantime while 10+ Gbps of data continue to arrive. The logical thing to do is to disable it, which clients are explicitly required to accept under IPv4. There is no "half-assing" here, only a logical decision to avoid spending 16 kiB of precious SRAM on every network port. That is the reason why the product line in question doesn't support PTP over IPv6 and never will.
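
In software terms, the dependency looks roughly like this (a sketch only: pseudo-header handling is omitted and the names are illustrative):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch of the dependency described above: the 16-bit ones'-complement
     * checksum carried in the UDP header covers the entire payload, so a
     * transmitter cannot emit the header until every payload byte is known.
     * (IPv6 pseudo-header words omitted for brevity.) */
    static uint16_t internet_checksum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i + 1 < len; i += 2)
            sum += (uint32_t)((data[i] << 8) | data[i + 1]);
        if (len & 1)
            sum += (uint32_t)data[len - 1] << 8;   /* pad the odd trailing byte */
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);    /* fold end-around carries */
        return (uint16_t)~sum;
    }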


First, I’m not saying that it’s obscure to be dismissive but simply recognizing that the number of people who need to have picosecond precision is not a significant factor in global IPv6 adoption.

Second, while it’s certainly true that having to buffer packets to calculate the checksum is more expensive, that doesn’t mean the best option is to ignore concerns about data integrity, which was a far more frequent source of problems. If they hadn’t developed an encapsulation mechanism, using an alternate protocol like UDP-Lite would avoid the issue, and anyone needing extremely high precision already has to have tight enough control over their network to deploy it, since they’d need to avoid having random middleboxes interfering with timing.
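
For what it’s worth, on Linux the partial coverage is a couple of lines (a sketch following udplite(7); the fallback defines and the function name are mine, and error handling is omitted):

    #include <netinet/in.h>
    #include <sys/socket.h>

    #ifndef IPPROTO_UDPLITE
    #define IPPROTO_UDPLITE 136        /* per udplite(7) */
    #endif
    #ifndef UDPLITE_SEND_CSCOV
    #define UDPLITE_SEND_CSCOV 10      /* per udplite(7) */
    #endif

    /* Sketch: an IPv6 UDP-Lite socket whose checksum covers only the first
     * 8 bytes (the UDP-Lite header itself), per RFC 3828. */
    int open_udplite_minimal_coverage(void)
    {
        int fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDPLITE);
        int coverage = 8;              /* header-only checksum coverage */
        setsockopt(fd, IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV,
                   &coverage, sizeof(coverage));
        return fd;
    }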


Data integrity is ensured by the MAC layer. Ethernet's CRC32 is substantially stronger than the weak and unnecessary 16-bit IP-checksum in the UDP header. It is also infinitely easier to calculate in hardware, because it is placed where it belongs (i.e., after the end of protected data).
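
As a software model of that contrast (a sketch only; the real FCS is computed in gates, but the same reflected polynomial 0xEDB88320 applies):

    #include <stddef.h>
    #include <stdint.h>

    /* The CRC can be updated byte-by-byte as data streams past and the
     * result appended *after* the payload, so nothing has to be buffered.
     * Seed with 0 and chain calls for streaming use. */
    static uint32_t crc32_update(uint32_t crc, const uint8_t *data, size_t len)
    {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc >> 1) ^ ((crc & 1) ? 0xEDB88320u : 0u);
        }
        return ~crc;
    }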

I acknowledge that PTP is not that widespread, but this esoteric issue is emblematic of broader overreach with the IPv6 design. This decision is one of dozens (hundreds?) that are nice-to-have for many users, but catastrophically disruptive for others.

Such concerns are individually minor, but I assert that they collectively represent a significant barrier to adoption.


If you're running at 10 Gbps and you don't spare the memory to buffer a single packet, that's desire, not need.

Your expertise does not make you automatically right about every tradeoff.

Also why does on-the-fly editing for PTP packets in particular require your buffer to be bigger than a PTP packet? Aren't those small?


It's very much "need" in this case. This was considered at length.

To be clear, we are talking about exotic custom hardware that has little in common with the average x86/x64 desktop.

For something like a 24-port 10 GbE switch, the platform might have a gigabyte of off-chip DRAM, but only a megabyte of on-chip SRAM. An ask of 16 kiB of SRAM per port is 384 kiB across 24 ports, about 37% of that capacity, which is badly needed for other things.

The other complicating factor is that the PTP egress timestamp and update pipeline needs to be predictable down to the clock cycle, so DRAM isn't an option.

Most PTP packets are small, yes, but others have a lot of tags and metadata. They may also be tucked between other packets. To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

And yes, we did consider RFC 1141 and RFC 1624. We use those when we can, but unfortunately that's not possible in this case.
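
For anyone following along, the incremental update those RFCs describe is only a few operations (a sketch; the function name is mine):

    #include <stdint.h>

    /* RFC 1624, eqn. 3: when one 16-bit word of the datagram changes from
     * old_word to new_word, patch the checksum without re-reading the rest
     * of the packet: HC' = ~(~HC + ~m + m'), with end-around carry folds. */
    static uint16_t checksum_incremental_update(uint16_t old_checksum,
                                                uint16_t old_word,
                                                uint16_t new_word)
    {
        uint32_t sum = (uint16_t)~old_checksum;
        sum += (uint16_t)~old_word;
        sum += new_word;
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }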

Say what you will about the rest of IPv6, but I am particularly salty about the UDP checksum requirement.


> To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

Well, fully compliant except for IPv6. If you said no jumbo frames for PTP, or no jumbo frames for specifically IPv6 PTP, then the extra buffer for PTP checksums only needs 4% of your SRAM.

> They may also be tucked between other packets.

Does that matter? Let's say a particular PTP packet is 500 bytes. If there's a packet immediately after it, I would expect it to flow through the extra buffer like it's a 500 byte shift register.



