The primary and effective measure of data integrity is the bit-error ratio (BER). In practice, however, the question is how to verify a target BER without transmitting an infinite number of bits.
That can be done using the concept of statistical confidence levels (CLs). In statistical terms, the BER CL can be defined as the probability, based on E detected errors out of N transmitted bits, that the "true" BER is less than a specified ratio R.
For purposes of this definition, the true BER means the BER that would be measured if the number of transmitted bits were infinite. Mathematically, this can be expressed as:
CL = PROB[BER_T < R], given E and N
Here CL represents the BER confidence level, PROB[ ] denotes "the probability that," and BER_T is the true BER. Because CL is by definition a probability, its range of possible values is 0% to 100%. Once the CL has been computed, we can say that we have CL percent confidence that the true BER is less than R. Another interpretation is this: if we were to repeatedly transmit the same number of bits N through the system and count the number of detected errors E on each repetition, we would expect the resulting BER estimate E/N to be less than R for CL percent of the repeated tests.
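The CL defined above can be computed directly. The following is a minimal sketch (function and parameter names are illustrative) that uses the Poisson approximation to the binomial distribution, which is accurate in the usual regime of very small BER and very large N:

```python
from math import exp, factorial

def ber_confidence_level(errors, bits, target_ber):
    """Probability that the true BER is below target_ber, given
    `errors` detected errors in `bits` transmitted bits.
    Uses the Poisson approximation to the binomial distribution."""
    m = bits * target_ber  # expected number of errors at the target BER
    # P(observing <= errors) if the true BER equaled target_ber
    poisson_cdf = exp(-m) * sum(m**k / factorial(k)
                                for k in range(errors + 1))
    return 1.0 - poisson_cdf

# Example: 3e12 bits transmitted error-free against a target BER of 1e-12
cl = ber_confidence_level(errors=0, bits=3e12, target_ber=1e-12)
```

In this example m = N x BER = 3, so CL = 1 - e^-3, about 95%: transmitting three times the reciprocal of the target BER with no errors gives roughly 95% confidence.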
As interesting as that equation is, what we really want to know is how to turn it around so that we can calculate how many bits must be transmitted to demonstrate a target BER at a given CL.
To do that, we make use of statistical methods involving the binomial distribution function and the Poisson theorem. The resulting equation, with the sum taken over k = 0 to E, is:

N = (1/BER) x [ -ln(1 - CL) + ln( SUM_k (N x BER)^k / k! ) ]

Note that N appears on both sides, so for E > 0 the equation must be solved numerically; for E = 0 the sum is 1 and it reduces to N = -ln(1 - CL)/BER.
If we normalize N to the BER (that is, plot the product N x BER), we can illustrate the relationship between the number of bits that must be transmitted and the CL for zero, one, and two bit errors, as in the figure below: