What Good is BER (Bit Error Rate)?


It has been a long time since I posted last, but I received an email from a gentleman in New Zealand and his questions prompted me to begin where I left off: “What Good IS BER?”

His letter is reprinted below, with permission. My response follows.

From Bonner:

Hi

As a retiree I run a small antenna business in Katikati, New Zealand, and
am very interested in picture quality. I have observed that not all
pictures on our COFDM digital system have the same degree of clarity.

Most technical papers would have you believe that a digital picture
stays the same quality right down to where the picture pixelates. In my
opinion there is picture degrading before you get to the point of
pixelation. I believe it is due to the level of BER. I was pleased to
find this article at
http://www.wowvision.tv/signal_strength_meters_BER.htm which led me to
you.

I believe the following story will help with my explanation.

Katikati is a rural town and is served by two digital transmitter points,
and we are a fringe to deep-fringe area. Typical signal level is 35 to 42
dB microvolts. If you are lucky you may get a job where you have 50 dB
microvolts. The most common digital TV sold in our area is a Samsung
Series 6 50 inch, of which I have one. On purchasing one I connected it up
to our UHF antenna and got a good picture, not as sharp as I would like and
with only the occasional pixelation on our HD and SD digital channels.

It always annoyed me that I would go and install a new antenna for people
who had just purchased a new Samsung 50 inch Series 6 TV and they would
get a sharper, more detailed picture than what we have. Most annoying; I
wondered if I had purchased a lemon. Anyhow, the other day a local
street light went faulty and started causing a lot of digital
interference every time the street lights came on. So I wondered whether
replacing my 15-year-old UHF antenna with a new Wisi phased-array
antenna would improve matters. See:
http://www.wisi.de/cgi-bin/online_katalog.pl?prod_id=36 This antenna is
the best antenna I have ever come across, and it is used by
TX Australia as their reference antenna. I replaced the antenna and
masthead amplifier and what a difference. I now had the same sharp
picture that I had got for so many of my customers, and I had reduced the
faulty street light interference.

As you will appreciate, it has strengthened my belief that poor bit error
rate does degrade a digital TV picture, and the only article I could find
on the internet is this one:
http://www.wowvision.tv/signal_strength_meters_BER.htm

Your comments would be appreciated.

Attached is an antenna test I did of 5 different UHF antennas available
here in New Zealand

Thanks

Bonner Martin
Katikati
New Zealand

Dear Bonner,

I am pleased to receive your correspondence! I have written some other things at DTVUSA
Forum, but had left off at this part: “What good is BER?” (I will be posting
this as an article there, or on my blog, but I would like to use your letter, all or in
part, if it is O.K. with you.)

You will be happy to know that your observations are correct. Digital picture quality
does indeed vary!!! But the most likely culprits are plain ole’ signal strength and noise, or more
accurately, the signal-to-noise ratio (SNR).

We have been led to believe that digital is “All or nothing” but nothing is further from
the truth. Picture quality DOES vary. We are being deceived in hopes that digital
technology (in the form of more efficient FEC algorithms) will AGAIN be able to produce a
non-variable picture. (Although digital SD wasn’t actually “all or nothing”, it appeared
to us as though it was.)

When it comes to the reason for the variation, you are very close! BER is a relative
digital value that is determined by SNR. BER reflects the SNR level: a given SNR
produces a corresponding BER. BER is the number of bits in error over time, or the
rate of errors. SNR is the cause; BER is the result.

You actually need to look no further than the analog measurements that you are already
familiar with to diagnose and repair any deficiency. The place to look is the signal-to-noise ratio (SNR). SNR is STILL the way to achieving high quality viewing!
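To put a number on this, here is a rough sketch (my illustration, not from the original exchange) of the textbook SNR-to-BER relationship for simple BPSK modulation in white noise. Real COFDM and 8-VSB systems use different modulation and coding, so the exact figures differ, but the shape is the same: a few dB of SNR moves the raw error rate by orders of magnitude.

```python
import math

def ber_bpsk(ebn0_db):
    """Theoretical raw BER for BPSK in white noise: 0.5 * erfc(sqrt(Eb/N0)).
    This is the textbook curve, not the coded COFDM/8-VSB curve."""
    ebn0 = 10 ** (ebn0_db / 10)          # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))

# A few dB of SNR moves the error rate by orders of magnitude:
for snr in (4, 8, 12):
    print(f"Eb/N0 {snr:2d} dB -> raw BER {ber_bpsk(snr):.1e}")
```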

Let me ask you this:

If 10⁻⁶ BER is the suggested minimum benchmark for good HD quality, and 10⁻⁶ BER equates
to one visible artifact per hour, then how long would you have to leave your meter
attached to get an accurate reading? (An accurate reading that is not approximated or interpolated…)

You would have to watch that meter for ONE HOUR. Anything less will give you interpolated
results. Digital processing itself relies on interpolated data. (This is why higher BER
equals lower picture quality: the more corrupted data that has to be covered, concealed,
or recreated, the more of the final information will be approximated from nearby or
similar data. The more uncorrectable errors, the further your result will be from the
original.)

I would NEVER use BER in the installation process! In my opinion, though, there are two
valid and good uses of BER:

First, while you are watching television, you can determine the most crucial benchmark of
BER. If you observe one visible error per hour while watching, then you are at 10⁻⁶ BER.
This is the “minimum” and starting point of “acceptable quality” HD. If you observe two
or more visible errors per hour, you are at a worse BER. Signal degradation is
considered noticeable from 10⁻⁶ BER down toward signal loss; at BERs better than 10⁻⁶,
the average individual will notice no picture quality differences. This is the point the
powers that be have determined to be where the most visible degradation occurs, or where
it will be noticed. (For “high quality” HD, the BER benchmark is 10⁻¹⁰!)

The second good use of BER is in troubleshooting. If a BER meter is built into your
TV or STB, it will keep a long-term record of performance that gives you a more
accurate BER reference for judging the level of performance. But to affect BER in a
positive way, there are only two things you can do: increase signal or decrease noise.
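Those two levers move the same number, because SNR is simply the signal-to-noise ratio expressed in decibels. A quick sketch with illustrative voltages (the specific microvolt figures are made up for the example): doubling the signal voltage and halving the noise voltage each buy the same 6 dB.

```python
import math

def snr_db(signal_uv, noise_uv):
    """SNR in dB from signal and noise voltages (microvolts): 20*log10(S/N)."""
    return 20 * math.log10(signal_uv / noise_uv)

base        = snr_db(400, 20)   # 26.0 dB
more_signal = snr_db(800, 20)   # doubling the signal voltage: +6 dB
less_noise  = snr_db(400, 10)   # halving the noise voltage: also +6 dB
```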

Attention to detail is very important in HDTV! Every detail neglected is a detail lost
in the picture. All the details of proper connections and terminations, grounding,
excess cable, and so on are back on the table, and NOW they will cause problems if
ignored. Since we are now pushing digital to its limits, we need to maximize these
systems to get the best quality.

It is unfortunate (and intentional) that installers in the States are not taught any of
this. They are taught the hard line of “all or nothing” and are TOLD that “picture
quality doesn’t vary”. They are taught that “as long as you have lock” everything’s O.K.
Installers and industry individuals who have been taught by the industry will constantly
say that signal won’t affect picture quality, but it does (and ALWAYS HAS; we just didn’t
have any other picture quality to compare to, and comparing to analog makes it all look
great!). Nothing has changed in the physics of signal transmission and reception.

Regarding Samsung TVs, I have said numerous times that Samsung is a television I would
avoid (especially the low-end products, which are difficult to calibrate correctly). The
second reason I would avoid them is that their FEC is not as good as others’, and the
picture quality readily shows this in the “low-to-reasonable” signal strength range. (I
don’t normally include this in my recommendations because of the distractions it would
cause, as some people would desire to argue the point.)

I see that you have also experienced visible interference from known outside sources! COOL!
Lights with ballasts (fluorescent and compact fluorescent, metal halide, high-pressure
sodium, etc.) emit a LARGE amount of EMI/RF radiation that will affect systems. To tie
this back to what was written above: as you increased your signal strength, you reduced
the appearance of a portion of the noise. You increase the SNR by increasing signal
strength. Signal overcomes noise.

Remember this:
Poor BER does not CAUSE picture degradation; it is a measure of the rate of
errors. Poor SNR causes picture degradation, and SNR is what determines BER.

I am happy to write to you and I will be available to answer any more questions you might
have. God bless you this day!

13 Comments
  1. MrPogi says

    Jeff,
    Thank you for pointing this out. The “all or nothing” view seems to have taken hold, and we need to shake it. Noise and low signal are my main causes of poor reception. Sometimes it is easy to fix: simply moving, fixing, or replacing the device causing interference, or replacing a poorly performing preamp. As noted, a higher-gain antenna can improve signal, but only if it doesn’t also increase the signal from the interfering device. Other times, as in the case of a device external to us, or not under our control, it can be almost impossible to cure. In the case of a streetlight, as mentioned, it falls to the local government, which can be quite unresponsive to this problem. There is also a lot of EMI emitted by motor vehicles, particularly motorcycles, four-wheelers, and construction vehicles… not to mention lawnmowers!

  2. n2rj says

    Call me a skeptic, but I want to see actual visual proof of this phenomenon.

    I do really believe that digital is “all or nothing” and that any errors would show up as crude errors. From my own observation of cable vs. OTA signals I don’t see a difference, even when I take still frames and compare them side by side.

    But if someone can show me visual proof it would be most appreciated.

    Also, the article implies that error correction produces an approximation, i.e. an imperfect result. If this is the case then fault tolerance technologies like RAID would not be able to guarantee data integrity. From my extensive work with storage technology I know that this is not really the case.

    1. highdefjeff says

      Dear Ryan,

      Between data storage and broadcast, there are two different methods of data transfer. One type is “lossless” data transfer where there is NO loss and the result is an exact duplicate of the original. In lossless data transfer, there are methods of resending the compromised or corrupted data, so that none is lost.

      But when we are dealing with broadcast media, we use “lossy” data transfer. Broadcast media lives by the saying “the show must go on”. Since none of us would be happy to stare at a still picture while some missing information is re-sent, the receiver has to do the best with what it has to work with. Because it is a “one-shot deal”, information is sent with some redundancy in hierarchical layers. Forward Error Concealment/Correction algorithms use the digital data stream to replace or “recreate”, “imitate”, or interpolate (mathematically guess) the missing bits.
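      To illustrate the “do the best with what it has” idea, here is a toy concealment sketch (my illustration, not any real decoder’s algorithm): a lost sample is filled in with the average of its neighbors, producing a plausible value but not the original one. Real decoders use far more sophisticated spatial and temporal concealment, but the principle is the same.

```python
def conceal(samples, lost_index):
    """Toy error concealment: fill a lost sample with the average of its
    neighbors. The result is a plausible guess, not the original value."""
    return (samples[lost_index - 1] + samples[lost_index + 1]) / 2

original = [10, 12, 17, 14, 11]
guess = conceal(original, 2)   # 13.0 -- close to, but not, the true 17
```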

      When it comes to actual visual proof, I don’t believe that I can help you. You confess NOT seeing a difference between OTA and cable. It could be that they are the same quality in your case, but I suspect that your eyes do not have the ability to see fine grain detail. We do not all have the same eyes; there are physical differences related to fine grain detail, and vertical and horizontal perceptions, as well. Some people will never see these differences and you may be one of those people.

  3. n2rj says

    Maybe one day I should do a bit by bit comparison. Our cable company claims that they pass on the feeds “as is” but I also have access to FiOS and also other feeds, or I could simply use a variable attenuator.

    As for my eyesight, it probably is bad or maybe not. I do wear glasses and I do clean them with microfiber and ROR. 🙂

    That said:

    Forward Error Concealment/Correction algorithms use the digital data stream to replace or “recreate”, “imitate”, or interpolate (mathematically guess) the missing bits.

    I don’t think it is a “guess” at all. It is a recalculation. If the bits aren’t correctly recalculated/reconstructed then the packet is discarded. Up to 10 bytes per payload packet can be reconstructed per the Reed-Solomon algorithm. Otherwise fuhgeddaboudit.

    But like I said, without visual proof, I’m a non believer, sorry.
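    This point, that correction within capacity is exact, can be demonstrated with even the simplest code. The sketch below uses a toy 3x repetition code with majority voting (not the RS(207,187) Reed-Solomon code that ATSC actually uses): one corrupted copy is recovered exactly, while two corrupted copies of the same bit defeat the code, leaving the receiver to discard or conceal.

```python
def encode(bits, r=3):
    """Toy FEC: repeat each bit r times (not ATSC's Reed-Solomon)."""
    return [b for bit in bits for b in [bit] * r]

def decode(coded, r=3):
    """Majority vote over each group of r copies."""
    return [1 if sum(coded[i:i + r]) > r // 2 else 0
            for i in range(0, len(coded), r)]

data = [1, 0, 1, 1]
coded = encode(data)
coded[0] ^= 1                 # one corrupted copy: still decodes exactly
assert decode(coded) == data  # within capacity, correction is exact
```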

    1. highdefjeff says

      Maybe one day I should do a bit by bit comparison. Our cable company claims that they pass on the feeds “as is” but I also have access to FiOS and also other feeds, or I could simply use a variable attenuator.

      As for my eyesight, it probably is bad or maybe not. I do wear glasses and I do clean them with microfiber and ROR. 🙂

      That said:

      I don’t think it is a “guess” at all. It is a recalculation. If the bits aren’t correctly recalculated/reconstructed then the packet is discarded. Up to 10 bytes per payload packet can be reconstructed per the Reed-Solomon algorithm. Otherwise fuhgeddaboudit.

      But like I said, without visual proof, I’m a non believer, sorry.

      I am well aware that there is a world of skeptics regarding picture quality. (I have been attacked by non-believers since breaking the digital picture quality story in 2006) That is precisely the reason that I used the observations of another fringe installer to set the stage for this article.

      I appreciate your comments, n2rj, and I appreciate that you and I can respectfully disagree.

      But, you need not take my word for it. The “clear” answer to blurry digital pictures is found in the Scalable Video Coding extension of H.264.

      Please review the two previous articles Examining MPEG 2, 4, and FEC… starting here:

      http://www.dtvusaforum.com/content/128-examining-mpeg-2-4-fec-part-1.html

      These articles form a foundation upon which I am writing.

      Regarding eyesight:

      It is not the prescription strength of eyeglasses that determines whether or not fine detail is recognized. I have eyesight that is 20/800-blind without glasses (corrected with coke bottle glasses). The fine grain detail perception comes from receptors that are found in the retina. Not everyone has physical make-up to discern fine grain.

      I am an individual with Asperger’s syndrome (one of the autistic spectrum disorders). It is noted that many Asperger’s individuals have heightened attention to detail, with increased visual and/or audio sensitivity. This explained to me why I’ve been able to see this. It also explains why there is a group of people in the calibration industry who are referred to as having “Golden Eyes”. These are certainly some of the individuals who have the “eyes to see”.

      Thank you for your comments!

  4. IDRick says

    I would NEVER use BER in the installation process!

    Interesting article and interesting comment above. Why would you ignore any measure of BER? My converter box reports both signal strength and signal quality. My antenna is mounted in the attic. During installation testing, there were spots in the attic with similar signal strengths but varying signal quality. I chose the spot with 100% quality.

    1. highdefjeff says

      I would NEVER use BER in the installation process!

      Interesting article and interesting comment above. Why would you ignore any measure of BER? My converter box reports both signal strength and signal quality. My antenna is mounted in the attic. During installation testing, there were spots in the attic with similar signal strengths but varying signal quality. I chose the spot with 100% quality.

      You make a good point that I forgot to include in the article. SOME signal quality scales “top out” at 100%. This “100%” signal quality can be very deceiving. Some register 100% when the BER reaches 10⁻⁶ (the minimum accepted standard for viewing good quality HDTV). Any improvement beyond 10⁻⁶ will continue to read as 100%, though performance and picture quality can still improve. I have attached an appropriate graph. This graph illustrates the “cliff effect”. As I have previously mentioned, there is no “cliff” at all. The actual performance graph is a waterfall.

      I choose to ignore BER because of its digital nature. The delay in receiving a digital measure comes from the time it takes for conversion from analog to digital, and from the time needed for the mathematical calculations. This data is not as precise as analog data. Since BER is DETERMINED by SNR (which is analog signal quality), I only use BER in the ways described in the article. Fix your SNR and you will have fixed your BER.

      1. IDRick says

        You make a good point that I forgot to include in the article. SOME signal quality scales “top out” at 100%. This “100%” signal quality can be very deceiving. Some register 100% when the BER reaches 10⁻⁶ (the minimum accepted standard for viewing good quality HDTV). Any improvement beyond 10⁻⁶ will continue to read as 100%, though performance and picture quality can still improve. I have attached an appropriate graph. This graph illustrates the “cliff effect”. As I have previously mentioned, there is no “cliff” at all. The actual performance graph is a waterfall.

        I choose to ignore BER because of its digital nature. The delay in receiving a digital measure comes from the time it takes for conversion from analog to digital, and from the time needed for the mathematical calculations. This data is not as precise as analog data. Since BER is DETERMINED by SNR (which is analog signal quality), I only use BER in the ways described in the article. Fix your SNR and you will have fixed your BER.

        The main point was signal strength and quality can and do vary by antenna mount location. Evaluating both measures is a very important consideration when selecting the final antenna mount location. Ignoring effect of mount location on signal quality is not a good idea, at least IMO. Since you are heavily invested in your view on BER effects on signal quality, it seemed reasonable that you would also consider BER effects on antenna mount location and support measuring signal quality during antenna installation.

  5. Fringe Reception says

    Jeff,

    I have to add your thread (paraphrased) … if the viewers don’t say WOW … something is wrong!

    Neighbors and friends have visited here over the last 18+? months since we went to dedicated digital OTA and I have lost count of the “wow’s” we have had and it must have to do with the lack of bit errors received here.

    JEFF! NEW QUESTION FOR YOU:

    My 75-mile-distant KVOS-12 (35) in Bellingham, WA has always (VHF and now UHF) had a better-balanced audio stream than ALL other OTA channels we have ever received: they believe a bass component should not be attenuated from their audio. Is this a function of their ‘studio’ to make their audio correct and clear as a bell? Can a station ‘dial in’ their transmitted audio frequency response range/balancing?

    My point: I think I already know the answer, but the followup Q is important: why limit audio bandwidth since it is so incredibly narrow-banded compared to a television signal? Is this one way how multiple sub-channels have “the room” to ‘be’? KVOS has no subs …

    Jim

    1. highdefjeff says

      Jeff,

      I have to add your thread (paraphrased) … if the viewers don’t say WOW … something IS wrong!

      Neighbors and friends have visited here over the last 18+? months since we went to dedicated digital OTA and I have lost count of the “wow’s” we have had and it must have to do with the lack of bit errors received here.

      BINGO!

      JEFF! NEW QUESTION FOR YOU:

      My 75-mile-distant KVOS-12 (35) in Bellingham, WA has always (VHF and now UHF) had a better-balanced audio stream than ALL other OTA channels we have ever received: they believe a bass component should not be attenuated from their audio. Is this a function of their ‘studio’ to make their audio correct and clear as a bell? Can a station ‘dial in’ their transmitted audio frequency response range/balancing?

      My point: I think I already know the answer, but the followup Q is important: why limit audio bandwidth since it is so incredibly narrow-banded compared to a television signal? Is this one way how multiple sub-channels have “the room” to ‘be’? KVOS has no subs …

      Jim

      Dear Jim,

      You and I both know that “if your HDTV doesn’t make you say “WOW!”…” then something is wrong! :applause:

      The higher the bit errors, or the rate of bit errors (BER), the worse your DTV performance will be. Whether it is loss of signal, lip sync, picture quality, or DVR function, something will suffer as the rate of errors rises.

      You asked some questions that “maybe” should be answered by a studio engineer, but I will give you my thoughts. You ask:

      Is this a function of their ‘studio’ to make their audio correct and clear as a bell? (Yes)

      Can a station ‘dial-in’ their transmitted audio frequency response range/balancing? (Yes)

      and,

      why limit audio bandwidth since it is so incredibly narrow-banded compared to a television signal? (Don’t know the answer for this question, but I would suggest that $Money$ is somewhere in the picture) Is this one way how multiple sub-channels have “the room” to ‘be’? (No, the reason that multiple sub-channels can exist is a direct result of compression technology.)

      All stations have the same bandwidth to work with, AND the same science to work with. I believe that if we rule out reception problems, the only difference is the skill of the engineers at the studio, and the equipment that they have to work with.

      I said that these questions “might” be better asked of a studio engineer. Because of the lack of updated training and because of the varying amount of understanding that I have witnessed by many who claim to be engineers, I would expect that you might get different answers depending on the ability and knowledge of the one you ask.

      I suspect that the engineers at KVOS have spent their time optimizing the stream that they do have and I would ask THEIR engineer why they have such good audio.

      God bless,
      Highdefjeff

  6. highdefjeff says

    Found this article today. It helps to explain what goes on.

    I believe that this article will shed some light on the answers you are looking for…

    Technical paper finds HD channel sharing to be viable in achieving FCC spectrum goals

  7. n2rj says

    Like I said, I need visual proof.

    If a (fine) difference in quality exists, it should be visible and demonstrable.

    Otherwise what I say stands – any errors will show up as crude errors, not as subtle errors.

    As for Scalable Video Coding, that is an entirely different animal and not a standard extension of MPEG2 which is used in US OTA broadcasting. It is a standard extension of H.264/AVC but we do not use that here for OTA. It is used in DVB but I am not 100% sure if SVC is implemented there.

    But anyway, by all means, if you believe that there is gradual degradation in ATSC OTA MPEG-2 signals, you are more than welcome to show me visual proof. It should not be hard at all.

    1. n2rj says

      My final post on this, and (hopefully) we’ll put this to bed.

      There is NO WAY that a subtle/gradual degradation of quality can happen with an ATSC MPEG-2 transport stream, because it is protected by CRC error detection.

      For reference, check out the chapter on MPEG transport basics (chapter 3) of Data Broadcasting: Understanding the ATSC Data Broadcast Standard.

      Available on google books:

      Data broadcasting: understanding the … – Google Books
