> If there are more errors in a burst CRC will not catch it.
*Might* not catch it. Just as SHA-1 might not catch a 3-bit error, if those flips happen to land in just the right spots to produce a collision.
Consider a 1024-bit message where 1023 bits are in error. The CRC would trivially, and obviously, catch the error: 1023 is odd, and any CRC whose generator polynomial has an (x + 1) factor (as many standard CRCs do, e.g. CRC-16/CCITT) effectively carries a parity bit, so every odd number of bit flips is guaranteed to change the remainder.
In fact, there are all sorts of errors a CRC is *guaranteed* to catch. The burst error is the one most people care about, but odd-weight errors (ex: 511-bit or 4095-bit errors, thanks to that (x + 1) parity factor) are guaranteed to be caught as well.
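To make that concrete, here's a minimal sketch in Python (toy long-division code, not a production CRC; the `crc16` helper and the 0x11021 polynomial are my illustrative choices, not anything from the thread). The polynomial x^16 + x^12 + x^5 + 1 has an even number of terms, so it is divisible by (x + 1), which is exactly what makes a 1023-bit error detectable no matter where the flips land:

```python
import random

def crc16(bits, poly=0x11021):
    """Toy 16-bit CRC (x^16 + x^12 + x^5 + 1), by long division over GF(2)."""
    reg = 0
    for b in bits + [0] * 16:      # append 16 zero bits so every message bit is divided through
        reg = (reg << 1) | b
        if reg & 0x10000:          # top bit set -> XOR-subtract the generator
            reg ^= poly
    return reg & 0xFFFF

msg = [random.randint(0, 1) for _ in range(1024)]
good = crc16(msg)

# Flip 1023 of the 1024 bits: an odd number of flips means the error
# polynomial E(x) has E(1) = 1, so (x + 1) cannot divide it -- and then
# the full generator cannot divide it either, so the remainder must change.
bad = [b ^ 1 for b in msg[:1023]] + msg[1023:]
assert crc16(bad) != good
print("1023-bit (odd) error detected")
```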
---------
CRC is "so regular", it is very easy to prove the specific kinds of errors that can, or cannot, be caught. According to classical theory of communications, there are really only two errors to care about:
1. Random errors -- your methodology doesn't matter. A simple checksum, or even a plain XOR, is provably as good as anything else against independent random bit flips (!!), and indistinguishable from SHA-1: any n-bit check value misses a uniformly random corruption with probability about 2^-n.
2. Burst errors -- your connection goes dead for a time, so a run of consecutive bits is corrupted. Say an electrical storm, a static shock, or some other phenomenon kills the antenna / line / whatever for 10 microseconds on a 1 Mbaud connection: that's 10 bits gone in a row, a.k.a. a 10-bit burst error.
Because #1 doesn't care which methodology you use, it's all about optimizing against #2 -- and a CRC with a degree-r generator is guaranteed to catch every burst of length r or less, as the sketch below shows.
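Here's the same kind of toy sketch for the burst case (again using my illustrative `crc16` helper, a 16-bit CRC): flip bits only inside a window of at most 16 consecutive positions, and the check can never come out clean, because the error polynomial is x^i * B(x) with deg(B) < 16, and a degree-16 generator with a nonzero constant term cannot divide it.

```python
import random

def crc16(bits, poly=0x11021):
    """Same toy 16-bit CRC as above: long division over GF(2)."""
    reg = 0
    for b in bits + [0] * 16:
        reg = (reg << 1) | b
        if reg & 0x10000:
            reg ^= poly
    return reg & 0xFFFF

msg = [random.randint(0, 1) for _ in range(1024)]
good = crc16(msg)

# Corrupt a random contiguous window of at most 16 bits, 1000 times over.
# Every single burst is detected: the assert never fires.
for _ in range(1000):
    length = random.randint(1, 16)
    start = random.randrange(0, len(msg) - length + 1)
    bad = msg[:]
    bad[start] ^= 1                           # the burst starts with a real flip
    if length > 1:
        bad[start + length - 1] ^= 1          # ...and ends with one
        for i in range(start + 1, start + length - 1):
            bad[i] ^= random.randint(0, 1)    # interior bits are arbitrary
    assert crc16(bad) != good
print("all bursts of length <= 16 were detected")
```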