AR# 66799

BER measurement and test time

Description

The primary and most effective measure of data integrity is the bit error ratio (BER). This Answer Record covers the following:

  • What is the BER?
  • How to understand BER and confidence level (CL)
  • How long does it take to measure?

Solution

A digital communications system's BER can be defined as the estimated probability that any bit transmitted through the system will be received in error (for example, a transmitted "1" received as a "0" and vice versa).

In practical tests, the BER is measured by transmitting a finite number of bits through the system and counting the number of bit errors received. The ratio of the number of bits received erroneously to the total number of bits transmitted is the BER.
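For example, if 3 bit errors are counted after 10^12 transmitted bits, the measured BER is 3/10^12 = 3x10^-12.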

The quality of the BER estimate improves as the total number of transmitted bits increases. In the limit, as the number of transmitted bits approaches infinity, the BER becomes a perfect estimate of the true error probability. Because practical BER testing requires a finite test time, we need a way to determine how many transmitted bits are sufficient for the desired estimate quality.

That can be done using the concept of statistical confidence levels (CLs). In statistical terms, the BER CL can be defined as the probability, based on E detected errors out of N transmitted bits, that the "true" BER is less than a specified ratio R.

For purposes of this definition, the true BER means the BER that would be measured if the number of transmitted bits were infinite. Mathematically, that can be expressed as:

CL = PROB[BER_T < R], given E and N

Here CL represents the BER confidence level, PROB[ ] indicates "probability that," and BER_T is the true BER. Because CL is by definition a probability, the range of possible values is 0% to 100%. Once the CL has been computed, we can say that we have CL percent confidence that the true BER is less than R. Another interpretation is that if we were to repeatedly transmit the same number of bits N through the system and count the number of detected errors E each time we repeated the test, we would expect the resulting BER estimate E/N to be less than R for CL percent of the repeated tests.

As interesting as the equation is, what we really want is to turn it around so that we can calculate how many bits must be transmitted to achieve a desired CL.

To do that, we make use of statistical methods involving the binomial distribution function and the Poisson theorem. The resulting equation is:

CL = 1 - e^(-N x R) x SUM[k=0..E] (N x R)^k / k!

or, solved for the number of bits that must be transmitted:

N = (1/R) x ( -ln(1 - CL) + ln( SUM[k=0..E] (N x R)^k / k! ) )

Here R is the specified BER threshold, E represents the total number of errors detected, SUM[k=0..E] denotes the sum over k from 0 to E, and ln[ ] is the natural logarithm.

When no errors are detected (i.e., E = 0), the sum equals 1, the second term in this equation is zero, and the solution simplifies to N = -ln(1 - CL)/R. When E is not zero, the equation can still be solved numerically (for example, by using a computer spreadsheet or a short script), because N appears on both sides.
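To make the numerical solution concrete, here is a minimal Python sketch (illustrative only, not part of this Answer Record; the function names are our own). It computes the CL for a given bit count and solves for the required number of bits by fixed-point iteration:

import math

def confidence_level(n_bits, ber, errors):
    # CL = 1 - exp(-N*R) * SUM[k=0..E] (N*R)^k / k!
    m = n_bits * ber  # expected number of errors at the threshold BER
    partial = sum(m ** k / math.factorial(k) for k in range(errors + 1))
    return 1.0 - math.exp(-m) * partial

def bits_required(cl, ber, errors, iterations=100):
    # Solve N = (1/R) * (-ln(1 - CL) + ln(SUM[k=0..E] (N*R)^k / k!)).
    # N appears on both sides when E > 0, so iterate to a fixed point.
    n = -math.log(1.0 - cl) / ber  # exact closed form for E = 0
    for _ in range(iterations):
        m = n * ber
        partial = sum(m ** k / math.factorial(k) for k in range(errors + 1))
        n = (-math.log(1.0 - cl) + math.log(partial)) / ber
    return n

print(confidence_level(3e12, 1e-12, 0))  # ~0.95 (the 5-minute example below)
for e in (0, 1, 2):
    print(e, bits_required(0.95, 1e-12, e) * 1e-12)  # N*R: 3.00, 4.74, 6.30

For CL = 95% and R = 10^-12, the iteration yields normalized bit counts N x R of approximately 3.00, 4.74, and 6.30 for E = 0, 1, and 2, respectively.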

If we plot the equation, we can illustrate the relationship between the number of bits that must be transmitted (normalized to the BER, that is, N x R) and the CL for zero, one, and two bit errors, as in the figure below:

[Figure: normalized bit count N x R versus confidence level for E = 0, 1, and 2]
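For reference, solving the equation numerically gives the following normalized bit counts N x R (computed from the equation above; these values are not listed in the original figure):

CL    E = 0   E = 1   E = 2
90%   2.30    3.89    5.32
95%   3.00    4.74    6.30
99%   4.61    6.64    8.41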
As an example, a 5-minute BER test at 10 Gb/s transmits 3 trillion (3x10^12) bits. If no errors are observed, this test ensures that the BER is less than 10^-12 with 95% confidence.

A longer 30-minute test transmits 1.8x10^13 bits and, if no errors are observed, ensures that the BER is less than 10^-12 with approximately 99.999998% confidence.
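Both figures follow from the E = 0 form of the equation: the 5-minute test transmits N = 10^10 bits/s x 300 s = 3x10^12 bits, so N x R = 3x10^12 x 10^-12 = 3 and CL = 1 - e^-3 ≈ 95.0%; the 30-minute test transmits 1.8x10^13 bits, so N x R = 18 and CL = 1 - e^-18 ≈ 99.999998%.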

For more information, refer to the Lightwave article on BER testing and confidence levels.
Date Created: 03/10/2016
Last Updated: 04/15/2016
Status: Active
Type: General Article
Devices
  • FPGA Device Families
Tools
  • Vivado Design Suite