Interpreting Bit Error Rate (BER)
The bit error rate (BER) of a digital system is the number of bit errors divided by the total number of bits transferred over a given interval. BER is the digital expression of signal quality: it depends directly on the signal-to-noise ratio (SNR). (You may want to read "Signal Quality and the RF Front End" if you haven't.) http://www.dtvusaforum.com/dtv-hdtv-reception-antenna-discussion/15390-signal-quality-rf-front-end.html
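As a minimal illustration of that definition (the function name here is my own, purely for the example):

```python
# BER is simply bit errors divided by total bits transferred.
def bit_error_rate(errors: int, total_bits: int) -> float:
    """Fraction of transferred bits that arrived in error."""
    return errors / total_bits

# e.g. 3 bad bits out of one million transferred:
print(bit_error_rate(3, 1_000_000))  # 3e-06
```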
How does BER relate to a signal meter?
To understand BER, let's put it in the context of a signal quality meter. There are several BER benchmarks associated with levels of performance.
A system can get lock on a signal with a bit error rate (BER) as poor as 10^-1. This is the lowest signal-quality meter reading at which you can still have reception. In most cases you can watch a program "as long as you have lock". With my digital converter box (which has both strength and quality readings) this is around 58-60 in signal strength and 17-19 in signal quality. Every day for a couple of years I've watched programming at this level of signal. I choose to view at this level because this is where I get to see the widest variety, and largest number, of compression artifacts.
As we will see, digital isn’t “all-or-nothing” and lock is enough for a picture, but not enough for great picture quality. If you think just having lock gives you great picture, then here is some information that you might be interested in knowing.
The forward error correction (FEC) applied to the MPEG transport stream doesn't even start working until the BER reaches 10^-4! Or conversely, FEC stops working when the signal degrades past the 10^-4 BER point. Lock can and does occur at signal levels with insufficient information to even get the FEC to work. If you are watching a program that is very close to lock, the error correction isn't even working. (As a matter of fact, the misnamed "digital cliff" (more later) is caused by this FEC threshold. The "fast drop" comes with the sudden loss of the correction benefits as signal fluctuations push the BER across the 10^-4 threshold where FEC stops working.)
It is obvious that not everyone will see a problem at low signal levels. Some will have compromised picture quality but won't notice. Cool! They don't have to worry about picture quality concerns at all (save money on calibration, too). Others with eyes to see may notice a blurry or grainy picture, and other compression artifacts. Still others may not see a visible deficiency in the picture itself; the degraded performance may show up instead as loss of signal, pixelation, timer issues, lip sync problems, DVR malfunctions, and an array of compression artifacts. The bottom line is that errors increase as SNR decreases, and signal quality is the determining factor in every area of performance, from stability and function to picture quality.
Another notable BER benchmark is 10^-6. A bit error rate worse than 10^-6 is the point where the average person perceives a degraded picture. 10^-6 BER is also the quasi error free (QEF) point, described as one visible error per hour. If there HAS to be a minimum to achieve (versus maximizing your signal at the highest level obtainable, which should be the goal), 10^-6 is that minimum for "good quality" viewing, the quasi error free point. It is at this point that FEC has enough good information from the signal to give its best guesses about the data that is corrupted or missing.
With Dish Network, I believe the QEF point of 10^-6 to be about 66 on the meter, including some headroom to stay above 10^-6. When watching Dish programming received at 55 on the signal meter, it is clear that this is short of the 10^-6 goal, as evidenced by the far greater number of errors seen. Remember that 10^-6 is characterized as ONE visible artifact per hour.
MPEG error correction is in full swing with a 10^-6 BER input, and with all of its tools and tricks, the output "resembles" that of a much lower (higher quality) BER. At a BER of 10^-6 there are still a lot of errors, but the forward error correction (FEC) compensates for, covers, and hides the errors quite well at 10^-6 and better.
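To put those benchmarks in perspective, here is a quick back-of-envelope sketch. It assumes the approximate ATSC payload rate of 19.39 Mbit/s; the numbers are illustrative, not measurements:

```python
ATSC_RATE_BPS = 19.39e6  # approximate ATSC 8VSB payload bit rate

def raw_errors_per_second(ber: float, rate_bps: float = ATSC_RATE_BPS) -> float:
    """Expected number of bit errors arriving each second, before any correction."""
    return ber * rate_bps

print(raw_errors_per_second(1e-6))   # ~19.4 raw errors every second
print(raw_errors_per_second(1e-10))  # ~0.002 per second, a handful per hour
```

Even at the QEF point, the FEC is quietly cleaning up roughly twenty raw errors every second, which is why the output only shows about one visible artifact per hour.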
It is not until we've moved further up the signal quality performance scale, to a BER of 10^-10, that we reach the benchmark labeled "high quality video". Here is where "WOW!" is actually found! For the most discriminating eye, and for the highest quality picture with trouble-free performance, this is really where we want to be! Beyond a BER of 10^-10, the top end of the scale is between 10^-12 and 10^-13. In this range there is too much signal for the digital signal processor; it becomes overloaded, resulting in pixelation and loss of signal similar to the bottom of the scale.
Here's the thing… digital performance is not actually "all-or-nothing". If digital performance truly were "all-or-nothing", a graph of it would be a straight line. A straight horizontal line on a graph represents "no change", or a constant value, which is exactly what we would expect in an "all-or-nothing" scenario.
When we actually take a look at the digital performance graphs, the first thing we see is that we are not dealing with a straight line of constant performance, but a curved line denoting variable performance. While the “digital cliff” idea only hints at the inaccuracy of the “All-or-nothing” fable, it has been known for quite some time that there isn’t even a digital “cliff”, but rather a digital waterfall. It seems to me that no one wants to tell us about it.
Here’s a quote from an article written in 2002, How Forward Error Correction Works
“Represented graphically, the general error-performance characteristics of most digital communication systems have a waterfall-shaped appearance. System performance improves (i.e., bit-error rate decreases) as the signal-to-noise ratio increases.”
Here is the link: How Forward Error-Correcting Codes Work
I found the graph in this article to be in a strange orientation, versus what I would call a typical graph. I am accustomed to viewing a graph that increases in performance when read from left to right. This graph presents the digital waterfall, but the way it is oriented, you might think, as I did at first, that the "water" is flowing "down", from left to right, but that is not correct. For the "waterfall" to be accurately reflected as flowing "down" (as water does) requires a different orientation.
To view the graph in a more sensible fashion (reading left to right, low performance to high performance), I have included the graph with its new orientation.
This graph gives clear evidence of the BER to SNR relationship and now, with different orientation, represents a performance graph from “nothing” (bottom left), to higher performance as we read to the right.
This graph represents only the portion that used to be called the “digital cliff” but is more accurately called the digital waterfall.
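The waterfall shape is easy to reproduce. The sketch below uses the textbook BPSK-in-AWGN formula, BER = ½·erfc(√(Eb/N0)). BPSK is not the modulation 8VSB uses, so this is only a stand-in, but the curve has the same character: small SNR gains buy huge BER improvements.

```python
import math

def bpsk_ber(ebn0_db: float) -> float:
    """Theoretical BPSK bit error rate in white noise -- a stand-in
    that shows the waterfall shape of BER vs. SNR."""
    ebn0 = 10 ** (ebn0_db / 10)  # convert dB to a linear power ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))

for db in (0, 2, 4, 6, 8, 10):
    print(f"Eb/N0 = {db:2d} dB -> BER = {bpsk_ber(db):.2e}")
```

Notice how each couple of dB of SNR drops the BER by an order of magnitude or more; that steep slope is the "waterfall".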
I’ll leave you with a couple more graphs.
Note: You might notice that the graph in the article only represents BER in the range of 10^-1 to 10^-6, or so. It is quite common for the BER/performance graphs that I have found to include only portions of the total curve. Common among them are graphs that stop at 10^-4 or 10^-6. These stopping points are common because 10^-4 is where you should reach for FEC to begin working, and 10^-6 is where you should be to begin watching really good (but not "high quality") video.
Thanks for the article Jeff! I have a question: are there processors that can handle better than 10^-12 to 10^-13 BER? Is there any advantage to being above a BER of 10^-10?
You're welcome, my pleasure.
It is currently not possible to achieve a better BER than that. It is in this higher range that the upper limits are determined. (Actually, after telling you that the digital cliff is a misnomer, there is a digital cliff, and it is at the top of the performance spectrum.) Digital systems fail in the presence of too much signal. Here at the top end of the scale the picture will pixelate and lose signal altogether, similar to bottom-end performance.
The only advantage to being above 10^-10 is headroom, so as to not fall below 10^-10. Very few people would ever notice any difference in PQ or performance due to any normal fluctuations of signal while in this range.
Interesting point, the NTIA set the following standard for the converter boxes:
"Equipment shall achieve a bit error rate (BER) in the transport stream of no worse than 3×10^-6 for input RF signal levels directly to the tuner from -83 dBm to -5 dBm over the tuning range. Subjective video/audio assessment methodologies could be used to comply with the bit error rate requirement. Test conditions are for a single RF channel input with no noise or channel impairment. Refer to ATSC A/74 Section 4.1 for further guidance. (Note the upper limit specified here is different than that in A/74 4.1)."
"Subjective evaluation methodologies use the human visual and auditory systems as the primary measuring "instrument." These methods may incorporate viewing active video and audio segments to evaluate the performance as perceived by a human observer. For subjective measurement, the use of an expert viewer is recommended. The viewer shall observe the video and listen to the audio for at least 20 seconds in order to determine Threshold of Visibility (TOV) and Threshold of Audibility (TOA). Subjective evaluation of TOV should correspond with achievement of transport stream error rate not greater than a BER of 3×10^-6. If there is disagreement over TOV performance evaluation, it will be resolved with a measurement of actual BER."
See: http://www.ntia.doc.gov/dtvcoupon/dtvmanufacturers.pdf
FWIW, it would help if hdjeff were to use the proper exponential notation relating to BER calculations.
BER is properly denoted by scientific notation of the form 1×10^-n, where n is an integer that typically ranges from 1 to 12. A "good" BER is usually less than 1×10^-6. Keep in mind that the better the signal quality is, the greater is the ABSOLUTE VALUE of the exponent. That is, a BER of 1×10^-9 is 1,000 times better than 1×10^-6, which is 100 times better than 1×10^-4, etc.
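The "times better" arithmetic is just a ratio of the two rates; a trivial sketch:

```python
def times_better(ber_worse: float, ber_better: float) -> float:
    """How many times fewer errors the better BER represents."""
    return ber_worse / ber_better

print(times_better(1e-6, 1e-9))  # roughly 1000x: 10^-9 vs 10^-6
print(times_better(1e-4, 1e-6))  # roughly 100x:  10^-6 vs 10^-4
```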
Jeff,
I understand that the error rate does vary over the spectrum. But I've used pad attenuators to reduce my signal down to the dropout point. Signal quality, as measured by the converter box, does not change until I'm within 5 dB of the dropout point. My computer capture card starts having problems recording in this range, but the picture on the TV still has the wow factor (no different than at full received signal). It doesn't start breaking up until 1 or 2 dB above the complete dropout point. I am using LG tuners, which are known to be very good. Perhaps that is the difference?
Thanks,
Rick
Wow, very informative and possibly a little technical for a lot of the newbies, but that is powerful stuff. You made my point 1000 times over that signal quality is part of the battle, and that the consumer/viewer has no idea that all of this can affect their reception of a TV signal.
A lot of people are still stuck with an analog mindset regarding reception of digital TV signals in regards to antenna gain, and amplifiers etc., and they still think of brute force amplification as the cure to reception problems. I am not being critical of them, but only trying to reinforce the signal quality concept as also being considered when trying to advise users on a proper antenna installation for their area, especially if they have one or two problem channels when most of the others come in with no problem.
I have been amazed at just how critical antenna aiming is in relation to signal quality in general. Does everyone now need a bit stream analyzer to aim and place DTV antennas? There are now basically spectrum analyzers on a chip set that could be integrated into DTV receivers to help in aiming for the highest quality signal and in diagnosing reception problems.
The biggest issue I have in my area is that I live too close to the transmitters to adequately test an antenna for fringe-area reception characteristics; virtually every antenna I build cannot be "receptionally" (I made up a new word here) challenged at my location. I have to drive about 75 miles to get into a weak-signal area, and that is not practical for all of the antennas I have built with gas prices being what they are, even though I drive a company vehicle full time with them paying the freight on gas.
I can literally pick up most of the transmitters in this area at my house with a correctly sized piece of wire on the center conductor alone, and that does little to challenge a new antenna design that I may want to try, and I have many of them to test. Maybe I need to set aside a sort of field day, load up all of my antennas and test gear, and take that 75-mile drive to see if my antennas work as well in the fringe as I think they will.
The Aussies, with their longer experience in OTA reception with their DVB-T have said that you shouldn’t hire an antenna installer if he doesn’t have a BER meter to aim an antenna. Horizon makes one for use in the UK and Australia, but it doesn’t do 8VSB.
Bad reception areas to digital ? – DTV Forum Australia – Australia’s Leading Digital TV and AV Forum post #11
Get The Best Reception – DTV Forum Australia – Australia’s Leading Digital TV and AV Forum 366, 369
I too didn’t fully appreciate the importance of signal quality for digital signals until one day I was aiming my 4-bay 4221 antenna at CH41.
I had originally aimed the antenna with my Sadelco signal level meter for max signal and also had my Apex DT502 CECB connected with a splitter.
When I was testing the DT502 with my CM4221 antenna on my marginal signal (13.1 on RF41, since moved down to CH13), I got:
Signal Quality 60%
Signal Strength 55%
I had aimed the antenna with my SLM, but when I rotated the 4221 slightly to the right I got:
Signal Quality 100%
Signal Strength 56%
Note the BIG change in signal quality with only a slight change in signal strength.
It seems that the signal quality indication is a more sensitive aiming tool than signal strength, because it shows the increase in BER from multipath reflections. In my situation the BER is affected by the weak signal, the fixed multipath reflections, and the changing multipath reflections from traffic in front of the antenna (which shows the need for the new ATSC M/H standard).
With strong signals it is necessary to use an attenuator to reduce the signal to fully challenge the FEC with a weak signal, multipath problems, and ambient noise to be near the “cliff” where the signal quality indicator is most sensitive. Obviously, a spectrum analyzer would be a better tool to see the shape of the signal as affected by multipath, but it costs a lot more than a CECB.
That’s a good idea since the tuner is already there, but the marketing department probably wouldn’t go for it because of the extra expense.
The signal diagnostics screen of my Sony Bravia seems to give information that would help to optimize antenna aim:
http://www.avsforum.com/avs-vb/showpost.php?p=17539658&postcount=10649
A 75 ohm variable attenuator would be worth a try to challenge your antenna in a strong signal area to test the margin to dropout. That’s what I use; see the link below.
“If you can not measure it, you can not improve it.”
Lord Kelvin, 1883
http://www.megalithia.com/elect/aerialsite/dttpoorman.html
Thanks for the updated information, and for bolstering the theory that signal quality is sometimes more important than signal strength. It is amazing that the best aiming of an antenna IS NOT ALWAYS in the direction of the strongest signal level, and also to see the difference in signal quality between different antennas looking at the same signal.
We use a variable attenuator at our transmitter sites because we take a forward power sample directly from the RF system, and if the RF is not attenuated, we would burn up an Agilent (Expensive) DTV power meter in a flash.
That would be a good way to judge the RF potential of my antennas in a low signal environment without burning a lot of gas finding a true fringe signal.
As for the built-in spectrum analyzer idea, I am talking mainly about high-end sets such as Sony and Samsung, where a couple of hundred dollars more is not all that big a deal. When you say Sony, you are automatically saying expensive, so what's a few hundred dollars more among friends? Besides that, Sony's economy probably needs stimulating too!!
Thanks for the suggestion !!
Thanks, I needed that!
That is one reason why I suggested that you use a variable attenuator.
The other reason is that even though the gain is knocked down, the pattern of the antenna is maintained (horizontal and vertical beamwidth, nulls, and front-to-back ratio). It is important to maintain the pattern of the antenna when you are comparing it to another antenna in a difficult reception location, because some antennas are better than others at handling multipath problems. A good example would be the CM4221 vs. the 4228. Even though the 4228 doesn't deliver anywhere near the theoretical 3 dB improvement in gain, its much narrower horizontal beamwidth is better at handling some multipath situations.
The test setup that I had in mind was to compare two antennas not only for gain but for ability to handle multipath. It would consist of each antenna connected to its own step attenuator (at least 1 dB per step or better) and then going to the two inputs of an A/B switch. The common output of the switch would go to a splitter that would feed a signal level meter and a CECB or TV tuner. The SLM would give the difference in gain of the antennas and would provide its reading of BER and MER if available (like Trip’s impressive new Sencore SLM1456CM that he mentions here and here).
Because some tuners are better able to handle errors than others, the tuner itself would show the actual ability to decode the signal near the cliff because we don’t watch TV on a BER meter we watch it on a monitor. The attenuators would be adjusted so that each antenna would decode the signal equally as well with increasing attenuation to the same point at the cliff, and the difference between the attenuator settings would be the reception figure of merit for the comparison taking into consideration gain and pattern.
You could use only one attenuator and put the A/B switch before it, but that might be a little more tedious because the attenuator would have to be adjusted for each antenna when going back and forth. Meanwhile, the signal level of the OTA signal might be changing. It would then be more difficult to make the comparison of equal reception ability when constantly changing attenuation and some tuners suffer from a recovery hysteresis effect in that they need much more signal to recover lock than what they needed to maintain it.
YES! That’s what I’m saying and this applies to satellite also. A small increase in signal strength makes a great increase in SNR (signal quality) which is also a dramatic decrease in BER.
Remember that the converse is true. A slight misalignment of an antenna (dish) can cause poor reception, function, and picture quality.
Thanks for your support of my comments.
Actually, I have linked to or quoted YOU as an authority many times; just a few of them:
AVS Forum – View Single Post – Old TV field strength meter any good?
AVS Forum – View Single Post – Official TV Fool forum
AVS Forum – View Single Post – Old TV field strength meter any good?
Thank you rabbit. I appreciate it!
———————————————————–
FOX TV,
I wonder if you could use a loop antenna turned to the null, to simulate a fringe location. You might have to add some attenuation, but it might work. I’m within less than a mile of an antenna ‘farm’ and the loop I built pictured below is very directional, but I think it suffers from the balun and the connecting twin-lead being unshielded: I need to ‘hide’ it inside a project box, grounded to the coax shield (connector) and try it again.
Jim
You do have a point as Loop antennas are highly directional, and I have several of them lying around. Your antenna looks nice, and I love Lexan as an insulator. I do have a few questions about the loop itself. Did you fabricate it, or is it an antenna that may have come with a TV set? The reason I ask is because there seems to be several versions of the standard loop that came with TV sets depending on when it was made. I have seen both in the past.
If it is very old, it may be cut for the old UHF band up to channel 83. If it is newer, it may be cut for channels up to 69, but in either case, it may not be optimized for the current band up to channel 51. If you made it yourself based on formulas, then forget what I said, but I normally cut my UHF antenna elements at channel 33 which is the approximate center of the current band at 584-590 MHz. I normally use 587 MHz as the BUILD frequency of my UHF antenna elements.
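For reference, the common half-wave cut formula (468 divided by the frequency in MHz gives the length in feet) applied to that 587 MHz build frequency looks like this. Treat it as a rule of thumb only, since element diameter and end effects shift the ideal length a bit:

```python
def half_wave_length_inches(freq_mhz: float) -> float:
    """Classic rule of thumb: 468 / f(MHz) = half-wave element length in feet."""
    return 468.0 / freq_mhz * 12.0  # convert feet to inches

print(f"{half_wave_length_inches(587.0):.2f} in")  # about 9.6 inches
```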
I don't think the exposed balun would cause an issue, as it is a very inefficient antenna itself, but the length of exposed twin lead may have an effect on impedance. Can you possibly shorten it some? The project box may help too, and you could even try painting the inside of it with aluminum or metal-based paint as an internal reflector to try to shield the twin lead from RF energy.
You will never know until you try is my attitude on antenna experimenting. Besides, what could be better than watching TV? Finding better ways to receive it !! 😀
Keep thinking about things and new ideas to try will come along. My new favorite trick is a product called “Liquid Tape” used to seal “Everything” on my feed lines or baluns.:usa2:
FOX_TV,
The loop I used was included with an older TV, so it's likely not ideally sized, but it's certainly directional and has the anticipated null. This one was the type with a plastic base and a stub that 'plugs' into the top of the TV set. I cut that off, squared the remaining plastic base, and you saw the result. Since it has a 'square' base I also tried it horizontally, but its 'behavior' was unpredictable and it did not act as a loop at all (or maybe that's how a horizontal loop acts, but not what I wanted!). It took a few minutes to build.
Had it done anything beyond what other antennas here already do, my thought was to slim the plastic support and create an antenna with very low wind-loading and a very low ‘neighbor-annoyance’ factor!
Jim :behindsofa:
—————————
Here are some interesting discoveries about how different antennas interpret the same signal with drastically different results in terms of signal quality.
I sat down and tested 6 different antennas, on a transport stream analyzer with 5 being UHF only, and the other being a VHF / UHF combo, brand unknown. The combo was saved from the trash man, and revived to live again receiving DTV signals. When this antenna was made, personal computers did not even exist, so this will help bolster the concept that “Your existing antenna will work for DTV”, even if it is 20 or more years old.
The antennas range from a combo mentioned above, to several home made knockoffs with some original modifications made by me, and some original designs I also built.
Antennas listed here were all tested on channel 20 for station WWCW and this is the most challenging signal I have to test UHF antennas with. Here is the link to the TV FOOL data plot. Look closely at the TV Fool data, as this is far from a perfect reception scenario with low signals and a 2 edge obstruction path. In addition, the C2 Knockoffs and the “dish Type” antennas were tested indoors.
TV Fool
Tested Antenna list below
1. VHF / UHF combo, brand unknown mounted at 25 feet on the chimney
2. Blonder Tongue UHF yagi, model unknown and cut for channel 20 mounted at 30 feet on the chimney
3. A Clear Stream C2 Knockoff with 300 ohm balun type feed point.
4. The same Clear Stream C2 Knockoff with an F type feed point configuration
5. A "reflector type", or dish antenna, that was actually made from a parabolic light reflector taken from a torchiere-style lamp that was destined for the landfill. It uses a single tapered-element design like the Clear Stream C1.
6. A double stacked 2 element bowtie design that is installed in a challenging place to receive signals from this transmitter at 25 feet elevation. This antenna is used for a different transmitter site, thus it is mounted in a bad place behind my house to receive these signals from the channel 20 transmitter.
I am posting 6 screen shots taken from a Sencore DTU-236 transport stream analyzer. These analyzers gather a lot of data, so I will start out by posting the shots of the bit error readings only, to show the difference in signal quality between the various antennas listed above.
I have one that I cannot upload due to the 5 image limit. The one that cannot be posted is antenna number 5, the “Dish” type antenna that actually started out as a Joke.
Hello Foxtv,
Nice work! 🙂
Could I ask a favor? I’m not an engineer and do not understand the output displayed in the jpg files. Could you please define the terms and what they mean?
Thanks!
Rick
But I heard a rumor you did have a train and a cap?
LOL :brick:
LOL! It’s true! I’m a model railroader! 😀
Sure, I can define the terms, and elaborate briefly on what they mean, and even how some of it works.
1. Pre-FEC BER. Pre Forward Error Correction Bit Error Rate. This represents the bit error rate of the data stream before any error correction routines are performed on the signal.
2. Post-FEC BER. Post Forward Error Correction Bit Error Rate: the bit error rate of the data stream after error correction routines are performed on the signal. Some data errors can be recovered by these routines.
3. PER. Packet Error Rate. Digital TV data is sent in "packets" instead of one continuous stream of data; it is actually a continuous stream of packets, and this is a measure of how many packets actually contain errors.
4. ErrorSec. The rate of all errors detected, per second.
5. BurstES. An error burst is a continuous sequence of data symbols in which the first and last symbols are in error and there is no contiguous sub-sequence of correctly received symbols within the burst. If the first or last symbols were still intact, the data errors may still be recoverable.
6. PN-23. A pseudorandom (PN) test sequence used to check for transmission problems in a digital data transmission system such as the DTV data stream. It is a relevant measurement, but not always monitored by some test equipment.
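As a rough illustration of how BER and PER relate, here is a sketch assuming independent bit errors and the standard 188-byte MPEG transport packet. Real errors are often bursty (see BurstES above), so this is only a first approximation:

```python
PACKET_BITS = 188 * 8  # one MPEG-TS packet is 188 bytes = 1504 bits

def per_from_ber(ber: float, bits: int = PACKET_BITS) -> float:
    """Probability that at least one bit in a packet is in error,
    assuming bit errors strike independently."""
    return 1.0 - (1.0 - ber) ** bits

print(per_from_ber(1e-4))  # roughly 0.14: about 1 packet in 7 is hit
print(per_from_ber(1e-6))  # roughly 0.0015
```

Notice how a BER that sounds tiny still touches a surprising fraction of packets, which is why the pre-FEC readings look so busy compared to the post-FEC ones.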
As some of you probably know, DTV signals are similar in nature to a streaming video you may see on the internet. The difference between the two is that computers on the internet can do a form of error checking similar to this simplified explanation below.
Let's say that you click on a link to a YouTube video. Your computer sends this request to the YouTube server. The server sends the first data packet containing X number of data bits, along with information about that data that basically says, "I sent you 20 data bits; did you receive 20 data bits?" Your computer answers, "Yes, I got 20 data bits," so the data is processed as it is sent, because the error checking on both ends confirmed that 20 data bits were sent and received.
If your computer reports that errors exist, the server simply re-sends the corrupt data. This is one reason that streaming video on the internet will sometimes freeze up: your computer is actually waiting on correct, error-free data, and the delay may be caused by actual data corruption, a computer with a slow response time, or even a slow connection.
You can now equate this internet scenario to the DTV signal drop outs, blocking, or the dreaded “No Signal” logo we all know so well in DTV reception, as these two situations are now similar in nature, except that the DTV system cannot request a re-send of the data as your computer can.
This is a method of error correction, or more accurately data error detection, used to ensure that the data sent is the same as the data received. This is simplified, basic error detection over a two-way data path. Since DTV signals are a one-way data path, this type of detection and correction obviously cannot be used in digital TV broadcasting.
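A toy sketch of that two-way idea (the names and the deliberately crude checksum are my own, purely for illustration): the sender transmits data plus a check value, and the receiver recomputes the check and asks for a re-send on mismatch. Broadcast DTV has no return channel, so it must lean on FEC instead.

```python
import random

def checksum(bits):
    """Deliberately crude check value; real systems use CRCs."""
    return sum(bits) % 256

def noisy_channel(bits, flip_prob):
    """Flip each bit with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def send_with_arq(bits, flip_prob, max_tries=10):
    """Re-send until the received checksum matches the transmitted one."""
    sent_check = checksum(bits)  # assume the check value itself arrives intact
    for attempt in range(1, max_tries + 1):
        received = noisy_channel(bits, flip_prob)
        if checksum(received) == sent_check:
            return received, attempt
    return None, max_tries

random.seed(1)
data = [random.randint(0, 1) for _ in range(20)]
rx, tries = send_with_arq(data, flip_prob=0.05)
print(f"delivered after {tries} attempt(s)")
```

When the re-send option doesn't exist, the only alternative is to send enough redundant information up front that the receiver can repair errors on its own, which is exactly what FEC does.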
Since DTV signals are a one way data path, every method possible is used to correct any missing or corrupt data before it is transmitted. That is the reason that all of the above processes are needed to ensure that an accurate as possible data stream leaves the transmitter. It is now the job of the “Receive System” at the viewers end to try and recover the same data that was transmitted.
All of this processing is handled by a device in the transmitter known as the "exciter", which is basically a computer/amplifier combination device that takes the incoming data from the studio and converts it to a low-level electrical signal that contains the digital data.
It then sends this data stream through another, higher-power pre-amplifier that drives the final high-power amplification system, which at my station is an IOT, or "Inductive Output Tube": the final, high-power stage of the transmitter. It then sends this high-power signal containing the digital data to the antenna for radiation into the atmosphere.
As you can see, this is a complete “transmitting system”, but at your end, you also require a complete “Receive System” to recover this data. This “Receive System” starts at the driven element of your antenna, and ends with the picture and sound you see and hear, and encompasses everything in between the antenna element and the picture and sound.
This is only the transmitter-related portion of how it all works, and there are many more complications and data corruption points that the data stream is subjected to before it gets to the actual transmitter. Knowing how many processes and computing devices it takes to complete the entire chain that makes up the DTV signal, it is an actual "Wonder of the World" that it works as well as it does.
There have been many dictionary-sized texts and reference books written on this part of the signal chain.