> One key reason for this uneven progress is the extension of IPv4’s lifespan through interim technologies like Network Address Translation (NAT) and IPv4 address transfers.

they completely ignore the actual problem with IPv6, which is that they didn't just extend IPv4 in a straightforward manner. they could have made the address fields 64 bits and been done with it. but, oh no, they had to make it the protocol for the ages.

it's completely analogous to Intel's failed Itanium vs. AMD's x86-64.



I've never seen anyone explain a "straightforward" way to extend the bits without having 90% of the same adoption difficulty. What's your idea, specifically?

Also, extension mechanisms like that already exist as part of IPv6.


The issue isn't that the data sent over the wire changed too much. Instead, the semantics changed too much.

Adopting IPv6 would ideally have been as simple as changing a socket definition and your address types. But so much of the semantics changed that it isn't that easy at all. It also prevented backwards compatibility.


IPv6 changes far more than the address size.

Why mandate the use of Neighbor Discovery Protocol instead of the much simpler ARP?

Why change the rules for UDP checksums? The checksum field in UDP over IPv4 is optional. The checksum field in UDP over IPv6 is required. This is a major pain for protocols that change fields in transit, such as PTP.
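
To make the difference concrete, here is a minimal sketch of what "required" means on the wire. Under IPv6 (RFC 8200) the UDP checksum must cover a pseudo-header containing both addresses, because the IPv6 header has no checksum of its own; the struct layout is from the RFC, but the function names below are illustrative, not from any particular stack:

    /* The IPv6 pseudo-header folded into the UDP checksum (RFC 8200).
     * Since the IPv6 header itself is not checksummed, a zero UDP
     * checksum is forbidden: this is the only cover the addresses get. */
    #include <stddef.h>
    #include <stdint.h>

    struct ipv6_pseudo_hdr {
        uint8_t  src[16];      /* source address */
        uint8_t  dst[16];      /* destination address */
        uint32_t payload_len;  /* upper-layer length, network byte order */
        uint8_t  zeros[3];
        uint8_t  next_header;  /* 17 for UDP */
    };

    /* RFC 1071 ones'-complement sum, accumulated over any buffer. */
    uint32_t csum_add(uint32_t sum, const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        for (; len > 1; p += 2, len -= 2)
            sum += ((uint32_t)p[0] << 8) | p[1];
        if (len)
            sum += (uint32_t)p[0] << 8;  /* odd trailing byte */
        return sum;
    }

    uint16_t csum_fold(uint32_t sum)
    {
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;  /* UDP transmits 0x0000 as 0xffff */
    }

The checksum is then csum_fold(csum_add(csum_add(0, &pseudo, sizeof pseudo), udp_segment, udp_len)). The point is that the sum depends on every payload byte, which is exactly the buffering problem discussed downthread.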

I could go on. There are important reasons for each of these decisions, but the fact is that every little change slows adoption. IETF could have stayed focused on solving address scarcity alone, but instead they chose to boil the ocean.


As you noted, there were important reasons for those changes – they even helpfully summarized them in a dedicated section of https://datatracker.ietf.org/doc/html/rfc4861#section-3.1; the checksum benefits are obvious – and none of them were major factors in the rollout delays.

The single biggest factor was that changing the header format broke every decoder in existence, and it took a long time to get all of that old hardware and software aged out of common use, since there wasn’t a legal or financial compulsion to do so. Nobody delayed migration because they liked supporting ARP+ICMP more or critically depended on being able to half-ass the implementation of an obscure time sync protocol - if you don’t update checksums, lots of things will stop your traffic even in an IPv4-only world. The main reason was that everyone had to replace their network infrastructure, review firewall rules, etc., and early adopters were only rewarded with more pain. Given how painful that has been, I sympathize with the people who said we should go to 128 bits because we never want to repeat the process.
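
To make "broke every decoder" concrete: the only field IPv4 and IPv6 share is the leading version nibble, so every parser ever written has to branch on it, and anything written before IPv6 simply drops version-6 packets. A hypothetical sketch (handle_ipv4/handle_ipv6 are placeholders, not a real API):

    #include <stddef.h>
    #include <stdint.h>

    int handle_ipv4(const uint8_t *pkt, size_t len);  /* hypothetical */
    int handle_ipv6(const uint8_t *pkt, size_t len);  /* hypothetical */

    int handle_ip(const uint8_t *pkt, size_t len)
    {
        if (len < 1)
            return -1;
        switch (pkt[0] >> 4) {                      /* version nibble */
        case 4:  return handle_ipv4(pkt, len);      /* variable 20-60 byte header */
        case 6:  return handle_ipv6(pkt, len);      /* fixed 40-byte header + extensions */
        default: return -1;                         /* unknown version: drop */
        }
    }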


As someone who has spent the last several years of my career implementing picosecond-precise time transfer using an "obscure time sync protocol", and holds a few patents in this field, kindly check your dismissive attitude.

When you're working on an FPGA or an ASIC, everything about the UDP checksum is a total pain in the ass. It is entirely redundant with the MAC-layer checksum. The field comes before the contents it checks, and depends on the entirety of the message contents, which must be buffered in the meantime while 10+ Gbps of data continue to arrive. The logical thing to do is to disable it, which clients are explicitly required to accept under IPv4. There is no "half-assing" here, only a logical decision to avoid spending 16 kiB of precious SRAM on every network port. That is the reason why the product line in question doesn't support PTP over IPv6 and never will.


First, I’m not saying that it’s obscure to be dismissive but simply recognizing that the number of people who need to have picosecond precision is not a significant factor in global IPv6 adoption.

Second, while it’s certainly true that having to buffer packets to calculate the checksum is more expensive, that doesn’t mean the best option is to ignore concerns about data integrity, which was a far more frequent source of problems. If they hadn’t developed an encapsulation mechanism, using an alternate protocol like UDP-Lite would have avoided the issue, and anyone needing extremely high precision already has to have tight enough control over their network to deploy it, since they’d need to avoid having random middleboxes interfering with timing.
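
For what it's worth, on Linux, UDP-Lite (RFC 3828) is nearly a drop-in change at the socket layer. A sketch, with the constants defined per udplite(7) in case older headers lack them:

    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef IPPROTO_UDPLITE
    #define IPPROTO_UDPLITE 136      /* value per udplite(7) */
    #endif
    #ifndef UDPLITE_SEND_CSCOV
    #define UDPLITE_SEND_CSCOV 10    /* value per udplite(7) */
    #endif

    /* Open a UDP-Lite socket whose checksum covers only the 8-byte
     * header, so later payload bytes can be rewritten in flight. */
    int open_udplite_v6(void)
    {
        int s = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDPLITE);
        int cov = 8;
        if (s >= 0)
            setsockopt(s, IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV,
                       &cov, sizeof cov);
        return s;
    }

The caveat, of course, is that every peer and middlebox on the path also has to speak UDP-Lite, which is why tight control over the network is a prerequisite.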


Data integrity is ensured by the MAC layer. Ethernet's CRC32 is substantially stronger than the weak and unnecessary 16-bit Internet checksum in the UDP header. It is also infinitely easier to calculate in hardware, because it is placed where it belongs (i.e., after the end of the protected data).

I acknowledge that PTP is not that widespread, but this esoteric issue is emblematic of broader overreach in the IPv6 design. This decision is one of dozens (hundreds?) that are nice-to-have for many users, but catastrophically disruptive for others.

Such concerns are individually minor, but I assert that they collectively represent a significant barrier to adoption.


If you're running at 10 Gbps and you don't spare the memory to buffer a single packet, that's desire, not need.

Your expertise does not make you automatically right about every tradeoff.

Also, why does on-the-fly editing of PTP packets in particular require your buffer to be bigger than a PTP packet? Aren't those small?


It's very much "need" in this case. This was considered at length.

To be clear, we are talking about exotic custom hardware that has little in common with the average x86/x64 desktop.

For something like a 24-port 10 GbE switch, the platform might have a gigabyte of off-chip DRAM, but only a megabyte of on-chip SRAM. An ask of 16 kiB of SRAM per port works out to 24 × 16 kiB = 384 kiB, about 37% of that capacity, which is badly needed for other things.

The other complicating factor is that the PTP egress timestamp and update pipeline needs to be predictable down to the clock cycle, so DRAM isn't an option.

Most PTP packets are small, yes, but others have a lot of tags and metadata. They may also be tucked between other packets. To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

And yes, we did consider RFC 1141 and RFC 1624. We use those when we can, but unfortunately that's not possible in this case.
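
For readers following along, the RFC 1624 incremental update is tiny, which is why it's so attractive when it applies. A sketch of its equation 3, HC' = ~(~HC + ~m + m'):

    #include <stdint.h>

    /* Recompute a ones'-complement checksum after one 16-bit field
     * changes from m_old to m_new, without re-reading the payload. */
    uint16_t csum_update(uint16_t hc, uint16_t m_old, uint16_t m_new)
    {
        uint32_t sum = (uint16_t)~hc;
        sum += (uint16_t)~m_old;
        sum += m_new;
        while (sum >> 16)                       /* fold the carries */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;
    }

It only helps when you still have the checksum in hand, though: as noted upthread, the UDP checksum is transmitted before the payload fields it covers, so in a cut-through pipeline it may already be on the wire by the time the field you want to rewrite arrives.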

Say what you will about the rest of IPv6, but I am particularly salty about the UDP checksum requirement.


> To be fully compliant, we have to handle the worst case, which means a full-size buffer for a jumbo frame.

Well, fully compliant except for IPv6. If you said no jumbo frames for PTP, or no jumbo frames for specifically IPv6 PTP, then the extra buffer for PTP checksums only needs 4% of your SRAM.

> They may also be tucked between other packets.

Does that matter? Let's say a particular PTP packet is 500 bytes. If there's a packet immediately after it, I would expect it to flow through the extra buffer like it's a 500 byte shift register.


not have 128-bit addresses, for one thing. 64 bits would have been fine. that was one of the biggest points of consternation: it's a huge hit for small packets.

so nat sucks. we needed something better. but instead of just extending to 64-bit src/dest addresses, aligning the fields, and dropping the checksum (or any straightforward extension like that), we got an entirely new protocol with new rules, nuances, and complexity. so people just said nope. if it had been just a superset of IP with a different packet format and wider fields, it would have been adopted widely 20 years ago.

this wasn't intended to be a contentious take, btw: i was genuinely surprised that the article ignored it. it was a very common feeling in the late 90s and 2000s when IPv6 was coming out: "over-engineered".


> What's your idea, specifically?

This is the problem. Lots of arm-chair protocol engineers claim it'd be easy if 'they did X'. Of course, these claims immediately fall apart under the barest of scrutiny, but they keep coming up.

Here is your challenge. Create a way to add this address space extension in a way that doesn't break backwards compatibility. Remember, you need to be specific about how you would add the change and how it would keep backwards compatibility.


> address space extension in a way that doesn't break backwards compatibility.

i didn't say it wouldn't break backward compatibility - you're moving the goal posts. what i said was "a superset of IP with a different packet format and wider fields"

> arm-chair protocol engineers

don't be condescending. i've likely been designing protocols for longer than you think.

> Remember, you need to be specific about how you would add the change and how it would keep backwards compatibility.

if all you had to do to deal with IPv6 was bigger addresses and a slightly different wire format, it wouldn't have had the same barrier to adoption. don't design an entirely different protocol. the wire format is the least of the problems.


>if all you had to do to deal with IPv6 was bigger addresses and a slightly different wire format,

This again. The biggest barrier to IPv6 adoption has always been the different wire format, regardless of the degree of difference.


I'm just a casual homelab guy. All my hardware now supports IPv6, but I'm not really using it, precisely because it is just so different from IPv4.

How you assign addresses is completely different. How you configure your firewall as a result is completely different. In fact, software support for the latter was one of the things I struggled with for years before having to change router software from pfSense to OpenWRT. Last I checked, pfSense still didn't have full support.

They changed the way you write the addresses, using the port separator as the group separator as well, leading to needing special software support for parsing IPv6 addresses. I know because I had to fix this in a few projects where we bothered to add IPv6 support, and that was the biggest PITA by far when adding IPv6 support; the rest was trivial.

Out of all the trouble I've had with IPv6, the wire format was the least problematic by far. All the wire format did was cause it to take some time to get IPv6-capable hardware.

But I've had that hardware for decades at this point. The thing keeping IPv6 back is all the other things they changed.


> They changed the way you write the addresses, using the port separator as the group separator as well, leading to needing special software support for parsing IPv6 addresses. I know because I had to fix this in a few projects where we bothered to add IPv6 support, and that was the biggest PITA by far when adding IPv6 support; the rest was trivial.

OMFG!

The hardest part of supporting IPv6 was fixing your address parsing? THAT!?

Here's my frustration. Everyone who doesn't understand the why of IPv6 always complains that the address format is such a huge problem and that is why the IPv6 deployment is so slow and hard. It's basically a shibboleth for poor understanding.

The reason why this isn't a good take is that IP address parsing is a standard function of every standard library on the planet. You dump in a string and they all figure it out, and spit back an object with everything you need. The reason you had so much trouble supporting it is that you weren't using the platform libraries. You hacked together some junk, probably a few broken regexes and some string concatenation. Your homebrew IP library was broken, and I guarantee you didn't handle all the IPv4 parsing rules correctly.
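
To be fair, in C the one piece the platform does not do for you is splitting host from port; everything after that is getaddrinfo, which shipped with Windows XP and every POSIX system of that era. A sketch (parse_endpoint is illustrative, not a library function; it modifies its input in place):

    #include <string.h>
    #ifdef _WIN32
    #include <ws2tcpip.h>   /* requires WSAStartup() first */
    #else
    #include <netdb.h>
    #include <sys/socket.h>
    #endif

    /* Accepts "192.0.2.1:443", "[2001:db8::1]:443", or a bare address. */
    int parse_endpoint(char *spec, struct addrinfo **out)
    {
        char *host = spec, *port = NULL, *end;
        if (*spec == '[') {                        /* RFC 2732 bracket form */
            host = spec + 1;
            end = strchr(host, ']');
            if (!end)
                return -1;
            *end = '\0';
            if (end[1] == ':')
                port = end + 2;
        } else {
            end = strrchr(spec, ':');
            if (end && strchr(spec, ':') == end) { /* exactly one ':' => v4:port */
                *end = '\0';
                port = end + 1;
            }                                      /* multiple ':' => bare IPv6 */
        }
        struct addrinfo hints = {0};
        hints.ai_flags = AI_NUMERICHOST;           /* parse only, no DNS */
        return getaddrinfo(host, port, &hints, out);
    }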


> The hardest part of supporting IPv6 was fixing your address parsing?

That was actually a non-trivial part of implementing IPv6. Sure, RFC 2732 had come out a few years earlier, but we weren't parsing URLs, so it was not clear whether it applied to our use case.

All the rest that was required for us to support IPv6 was quite trivial. This was the only thing we had to spend time on.

> The reason why this isn't a good take is that IP address parsing is a standard function of every standard library on the planet.

OK, I stand to be corrected; after all, none of us were network programming experts.

How do you parse an IPv6 address, including the port number if present, using Boost 1.35 or the C++03 STL? Note that it should run on Windows XP, as well as Linux and OS X of a similar era. Does your solution require the format specified in RFC 2732?

Anyway, my point still stands. The main friction in adopting IPv6 is not the wire format; it's everything else they changed.


So here is an exercise: go look at the structure of an IPv4 packet. It’s not complicated. Can you see where you can cram 32 additional bits? Or even 24? Because if there isn’t a place for them then you cannot possibly extend the IPv4 address space without breaking backwards compatibility. Anyone can do this exercise, and anyone who has an opinion should do this exercise.

Spoiler: you will come to the conclusion that you can’t find the additional bits. Your only option is to break compatibility and create a new packet header format. At that point you can choose literally any address size larger than 32 bits. 64 is good, but the cost of going to 128 is practically nothing, while giving you a lot more possibilities for what you can do with it.
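
For anyone who wants to do the exercise without opening RFC 791, here is the entire fixed IPv4 header as a C struct (layout only; a real parser would also handle byte order):

    #include <stdint.h>

    /* RFC 791: 20 bytes, every bit spoken for. The only variable room
     * is the options area, capped at 40 bytes by the 4-bit IHL field. */
    struct ipv4_hdr {
        uint8_t  ver_ihl;     /* 4-bit version (=4), 4-bit header length */
        uint8_t  tos;         /* type of service (now DSCP + ECN) */
        uint16_t total_len;   /* total packet length */
        uint16_t id;          /* fragmentation id */
        uint16_t flags_frag;  /* 3 flag bits, 13-bit fragment offset */
        uint8_t  ttl;         /* time to live */
        uint8_t  protocol;    /* 6 = TCP, 17 = UDP, ... */
        uint16_t checksum;    /* header checksum */
        uint32_t src;         /* 32-bit source address */
        uint32_t dst;         /* 32-bit destination address */
        /* 0-40 bytes of options follow, bounded by IHL */
    };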

Lastly, IPv6 fixes a lot of cruft from IPv4. It is a more streamlined protocol that is actually easier to work with than IPv4. The people who told you that IPv6 is overengineered didn’t have a better protocol as an alternative. Their point was that IPv4 is fine and we don’t need anything beyond what it provides, because a new protocol is scary and annoying to learn. Literally, mathematically, there is no alternative that solves address exhaustion in a backwards-compatible way. CGNAT is the overengineered hack, not IPv6.

I really hope you stop responding to people with nonsense before you look at the packet structure yourself.


what i said was "a superset of IP with a different packet format and wider fields"

well, yes obviously you need more bits. what you don't need is all the other changes.

> I really hope you stop responding to people with nonsense before you look at the packet structure yourself.

don't be condescending.


Again, look at the IP header format and see for yourself that there is no place to create wider fields. This is what everyone here has been trying to tell you in a myriad of ways, and you are not hearing it. There is no possible way to do what you are proposing. What you are saying is the definition of nonsense because there is no sense in it. You are arguing that an 18-wheeler should be able to fit inside the trunk of a car and getting upset when people tell you that it doesn’t fit.


A different address size is your main suggestion? Anything that isn't 32 bits is going to have the same problem.

> if it had been just a superset of IP with a different packet format and wider fields

It pretty much is...

Changes like DHCP are not the deciding factor.



