First, I’m not saying it’s obscure in order to be dismissive; I’m simply recognizing that the number of people who need picosecond precision is not a significant factor in global IPv6 adoption.
Second, while it’s certainly true that having to buffer packets to calculate the checksum is more expensive, that doesn’t mean the best option was to ignore concerns about data integrity, which was a far more frequent source of problems. Even if an encapsulation mechanism had never been developed, using an alternate protocol like UDP-Lite would have avoided the issue, and anyone needing extremely high precision already has to have tight enough control over their network to deploy it, since they’d need to keep random middleboxes from interfering with timing anyway.
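To make that concrete, here is a minimal sketch of what deploying UDP-Lite looks like on Linux (assuming a kernel built with UDP-Lite support; the destination address, the PTP event port, and the payload are purely illustrative). The key piece is the checksum-coverage socket option, which lets the sender exclude the payload from the checksum so hardware can rewrite a timestamp after the checksum field has already been emitted:

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    #ifndef IPPROTO_UDPLITE
    #define IPPROTO_UDPLITE 136     /* RFC 3828 protocol number */
    #endif
    #ifndef UDPLITE_SEND_CSCOV
    #define UDPLITE_SEND_CSCOV 10   /* Linux: sender checksum coverage */
    #endif

    int main(void) {
        int fd = socket(AF_INET6, SOCK_DGRAM, IPPROTO_UDPLITE);
        if (fd < 0) { perror("socket"); return 1; }

        /* Cover only the 8-byte UDP-Lite header; bytes after it (e.g., a
         * timestamp inserted by the NIC at transmit time) are excluded
         * from the checksum, so rewriting them cannot invalidate it. */
        int cscov = 8;
        if (setsockopt(fd, IPPROTO_UDPLITE, UDPLITE_SEND_CSCOV,
                       &cscov, sizeof cscov) < 0) {
            perror("setsockopt(UDPLITE_SEND_CSCOV)");
            close(fd);
            return 1;
        }

        /* Illustrative destination: PTP event messages use UDP port 319. */
        struct sockaddr_in6 dst = { .sin6_family = AF_INET6,
                                    .sin6_port   = htons(319) };
        inet_pton(AF_INET6, "::1", &dst.sin6_addr);

        const char payload[] = "timestamp-goes-here";
        if (sendto(fd, payload, sizeof payload, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0)
            perror("sendto");

        close(fd);
        return 0;
    }

Per RFC 3828, a coverage value of 8 means only the UDP-Lite header is checksummed, while 0 covers the entire datagram, so a sender can dial in exactly how much of the packet it wants protected.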
Data integrity is ensured by the MAC layer. Ethernet's CRC32 is substantially stronger than the weak and unnecessary 16-bit IP checksum in the UDP header. It is also infinitely easier to calculate in hardware, because it is placed where it belongs (i.e., after the end of the protected data).
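For reference, the checksum in question is the 16-bit one's-complement sum from RFC 1071, which UDP reuses (over IPv6 it also covers a pseudo-header and is mandatory per RFC 8200). A rough sketch of the algorithm, to illustrate the placement point: the result lands in a header transmitted before the data it protects, so the sender must buffer or pre-scan the datagram, while a trailing CRC can be folded in as bytes stream out:

    #include <stddef.h>
    #include <stdint.h>

    /* 16-bit one's-complement Internet checksum (RFC 1071). Over IPv6
     * the summed bytes include a pseudo-header plus the UDP header and
     * payload; the result is written into the UDP header, which precedes
     * the payload on the wire, hence the buffering requirement. */
    uint16_t inet_checksum(const uint8_t *data, size_t len) {
        uint32_t sum = 0;
        while (len > 1) {                   /* big-endian 16-bit words */
            sum += ((uint32_t)data[0] << 8) | data[1];
            data += 2;
            len -= 2;
        }
        if (len == 1)                       /* odd trailing byte, zero-padded */
            sum += (uint32_t)data[0] << 8;
        while (sum >> 16)                   /* fold carries back into 16 bits */
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)~sum;              /* one's complement of the sum */
    }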
I acknowledge that PTP is not that widespread, but this esoteric issue is emblematic of broader overreach in the IPv6 design. This decision is one of dozens (hundreds?) that are nice-to-have for many users but catastrophically disruptive for others.
Such concerns are individually minor, but I assert that they collectively represent a significant barrier to adoption.