 
MPEG Information
What is MPEG Digital TV?
Digital Video Broadcasting

                   Introduction
                    At the start of the satellite broadcast era, all radio and television programs were transmitted
                    in an analogue format. Only in the last few years have broadcasters started to transmit in a
                    digital format, made possible by the establishment of digital transmission standards. The
                    advantage of digital data transfer is the high quality of the picture and the low losses
                    involved. Also, thanks to the compression techniques used, more programs can be
                    transmitted over the available distribution channels.
                   Digital television systems
                    Without compression, the amount of data in digital television is very high. For digital
                    television, the following sampling frequencies are used according to ITU-R
                    recommendation 601: 13.5MHz for the luminance signal (Y) and 6.75MHz for each of the
                    two color difference signals. Using an 8-bit quantising method, we end up with a bitstream
                    of 216Mbit/s. The bandwidth required for a signal like this is so large that even a satellite
                    transponder cannot cope with it. The technique used for digitising and compressing digital
                    broadcasts has been developed by the Moving Picture Experts Group (MPEG). Digital
                    Video Broadcasting (DVB) in Europe uses the MPEG-2 format. Using this format and a
                    modem, a variety of extra services becomes available, such as extensive interactive
                    services.
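The 216Mbit/s figure follows directly from the quoted sample rates. A short Python sketch (variable names are my own) verifies the arithmetic:

```python
# Uncompressed ITU-R 601 bit rate, using the figures quoted above.
Y_RATE = 13.5e6   # luminance (Y) sampling frequency in Hz
C_RATE = 6.75e6   # sampling frequency of EACH colour difference signal in Hz
BITS = 8          # 8-bit quantising

bitrate = (Y_RATE + 2 * C_RATE) * BITS   # bits per second
print(f"{bitrate / 1e6:.0f} Mbit/s")     # 216 Mbit/s
```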
                    Compression converts the analogue video signal into a digital signal with a bitstream varying
                    between 2 and 15Mbit/s (MPEG-2 video). The audio signal is compressed to between 32 and
                    448Kbit/s. Multiplexing (MPEG-2 systems) combines video and audio, and can also combine
                    multiple AV signals in a single Transport Stream (TS). Modulation provides a transparent
                    bitstream of, say, 38.1Mbit/s in a single 8MHz channel. For cable systems QAM is used; for
                    satellite transponders QPSK is used. Thanks to compression, it is now possible to transport
                    more than a single channel over either a cable channel or a satellite transponder.
                    Within DVB, the scrambling of the signals is standardised. Various Conditional Access
                    (CA) systems are offered; their use enables services like pay-TV and pay-per-view.
                    The DVB Service Information (SI) offers the possibility to add special information to the
                    datastream describing the contents of the transmitted programs. It enables the set-top box
                    to configure itself and aids the viewer in finding TV or radio programs.
                   Digital signal processing
                    Audio and video signals are essentially analogue signals, that is, signals whose shape,
                    amplitude and frequency change continuously. Until recently, processing of these signals
                    could only be accomplished in an analogue way. The characteristics of analogue signals
                    are affected whenever they are processed in electronic circuits: at every processing stage,
                    the quality of the signal degrades. Just think of copying videotapes. A copy of a copy
                    possesses less quality than the original tape.
                    By applying digital techniques, these disadvantages have been eliminated. Using digital
                    techniques, we can keep the quality and level of the processed signals constant. A
                    digitised signal can be displayed, transported and processed completely free of distortion.
                    A digital signal no longer consists of a continuously varying signal but of several individual
                    values. Every momentary signal value is represented by a digital code. We speak of digital
                    signal processing when all binary representations of the original signal are processed
                    following the rules of digital technique.
                    Source coding of high-quality picture and sound signals plays an important role. One of the
                    reasons is the high transfer bandwidth and enormous storage capacity needed for linearly
                    coded signals.
                   Digitising
                    Digitising means the conversion of an analogue signal into a digital signal consisting of
                    zeroes and ones. Converting analogue signals to digital (A/D conversion for short) is done
                    in three steps :
                        Sampling
                        Quantising
                        Conversion to digital numbers
                    To convert a digital signal back into its analogue format, all three stages of the process are
                    carried out in reverse order. One then uses a D/A converter.
                    Sampling means that the source signal is sliced into equal sections over a time period of
                    one second. The audio on a Compact Disc, for example, is divided into 44100 sections per
                    second. Therefore, the sampling frequency used is 44.1KHz.
                   For a video signal, the signal is divided into 13500000 sections per second. This explains the
                   sampling frequency of 13.5MHz. For this process, a couple of conditions apply. The most
                   important being that the sampling frequency has to be at least twice the highest frequency
                   present in the source signal.
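The sampling condition mentioned above can be sketched as a small helper function (the function name is my own):

```python
def min_sampling_rate(highest_freq_hz: float) -> float:
    """Sampling theorem: sample at least twice the highest frequency present."""
    return 2.0 * highest_freq_hz

# CD audio must capture up to about 20KHz, so at least 40KHz is needed;
# the 44.1KHz rate leaves some headroom:
print(min_sampling_rate(20e3))   # 40000.0
# A 5MHz video signal needs at least 10MHz; 13.5MHz is used in practice:
print(min_sampling_rate(5e6))    # 10000000.0
```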
                    Quantising means that every section is measured against a scale that determines the
                    amplitude of the signal. This scale is expressed in bits. For video, an 8-bit quantising
                    method is used. Using 8 bits, a total of 256 different levels can be represented. This is
                    more than sufficient for the human eye, since we can only distinguish about 200 levels of
                    luminance anyway.
                    For audio, normally 16 bits are used. Using 16 bits, a total of 65536 different sound levels
                    are possible. In other words, the resolution is 65536, which is more than sufficient for the
                    human ear.
                    Conversion to digital numbers means that a measured audio value of, say, 32767 is not
                    represented as a decimal number but as a binary value of ones and zeroes (in this
                    example 0111111111111111).
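The relation between quantising bits, the number of levels and the binary codes can be checked directly:

```python
# The number of quantising levels for n bits is 2 to the power n.
assert 2 ** 8 == 256         # video: 8-bit quantising
assert 2 ** 16 == 65536      # audio: 16-bit quantising

# A sampled value is stored as a binary code; the 16-bit value 32767,
# for example, becomes:
code = format(32767, "016b")
print(code)                  # 0111111111111111
```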
                    Following this, the digital signal is coded and given an error-correcting code to enable us to
                    correct errors at a later stage. The amount of data is given as the number of bits per
                    second. A digital signal contains considerably more information than an analogue signal.
                    To be able to store all this information, data compression is used.
                   DIGITIZING AND COMPRESSING AUDIO
                   Introduction
                   To digitise audio signals, a couple of different sampling frequencies (depending on the
                   application) are used : 32KHz, 44.1KHz or 48KHz. The quantising scheme is normally 16
                   bits. On an audio CD, all information is registered and therefore a lot of bits are used. This
                   form of coding is called linear coding.
                    By using compression techniques, the amount of data can be strongly reduced. These
                    forms of data reduction are used in, for example:
                        Digital Compact Cassette (DCC)
                        Mini Disc
                        Digital Audio Broadcasting (DAB)
                        Astra Digital Radio (ADR)
                        Digital Video Broadcast (DVB)
                        Digital Video Disc (DVD)
                   Source coding
                    The MPEG systems committee determines the norm for combining a large number of
                    coded audio and video signals in a single datastream. This norm guarantees the
                    synchronisation of audio and video and enables the transfer of the combined information
                    over various digital media. After having tested various applications, the MPEG experts
                    established the audio coding algorithm. Depending on the application, a total of three
                    layers with increasing complexity and reduction capacity can be used. For a recording that
                    cannot be distinguished from the original, this comes to :
                         Layer 1, 2x 192Kbit/s
                         Layer 2, 2x 128Kbit/s
                         Layer 3, 2x 64Kbit/s
                    Important for an economical use of the available bits is the source coding of the signal.
                    Source coding determines the number of bits that are created after the A/D conversion. As
                    an example: by sampling a stereo audio signal with a sampling frequency of 44.1KHz,
                    using 16 bits per sample, an audio stream of 44100 x 16 x 2 = 1411200 bits/s is generated.
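The product works out as follows (the exact figure is 1411200 bits/s, roughly 1.4Mbit/s):

```python
SAMPLE_RATE = 44_100   # samples per second (44.1KHz)
BITS = 16              # bits per sample
CHANNELS = 2           # stereo: left and right

bitrate = SAMPLE_RATE * BITS * CHANNELS
print(bitrate)         # 1411200 bits/s, roughly 1.4Mbit/s
```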
                    By using intelligent algorithms which take the properties of the human ear into account, the
                    amount of data can be strongly reduced. Bits can be saved by removing redundancy, or by
                    simply throwing away irrelevant parts of the signal. By irrelevant, we mean those parts that
                    the human ear does not use. In other words, frequencies that are outside our hearing
                    range do not have to be registered.
                    Apart from this, there is another important dynamic effect: the phenomenon that a very
                    loud tone masks a weaker tone so that it can no longer be heard. This is a very complex
                    psycho-acoustic effect. By leaving out this masked information, the total sound impression
                    is not affected. To calculate which parts of an audio signal can be heard and which cannot,
                    the signal is divided into subbands. For Musicam, for instance, the digital audio signal is
                    divided into 32 subbands of equal width. In the coder, 12 subsequent samples of each
                    subband are combined into a block to calculate the masking effect.
                    Every subband is allocated a number of bits, depending on the need. This way, no more
                    bits than strictly necessary are used, and the accuracy is kept as high as possible. Using
                    this method, it is possible to reduce the 1.4Mbit/s bitstream of a CD to just 200Kbit/s.
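The reduction from the linear CD rate to the 200Kbit/s quoted above works out to roughly a factor of seven:

```python
linear_rate = 44_100 * 16 * 2    # linear CD coding, about 1.4Mbit/s
coded_rate = 200_000             # subband-coded rate quoted above, bits/s

print(round(linear_rate / coded_rate, 1))   # 7.1, i.e. about a factor 7 saved
```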
                    The most important data-reduction and coding methods for recordings are MUSICAM
                    (Masking Pattern Universal Subband Integrated Coding And Multiplexing) and PASC
                    (Precision Adaptive Subband Coding). Musicam is used in DVB, DAB and ADR, whilst
                    PASC is used on DCC.
                   DIGITIZING VIDEO IMAGES
                   Introduction
                    Digitising a video image is far more complex than digitising an audio signal. Because of
                    the far higher frequencies used in television, the datarate is much higher: for a video
                    image, it is about 100 times what is needed for audio.
                    Television pictures consist of lines. In Europe, we use 25 frames per second, each frame
                    consisting of 625 lines. At 25 frames per second, the human eye experiences the frame
                    changes as flicker. For this reason, interlacing is used: every 625-line frame is divided into
                    two equal 312.5-line fields. The first field carries the even lines, the second carries the odd
                    lines. Together, the two fields form the complete image again.
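The interlacing scheme can be sketched with a toy example: a six-line list stands in for the 625-line picture, and the two halves are split off and re-interleaved.

```python
# A toy 6-line "frame"; each string stands in for one scan line of the
# 625-line picture.
frame = ["L0", "L1", "L2", "L3", "L4", "L5"]

field_a = frame[0::2]   # every other line, starting at line 0
field_b = frame[1::2]   # the remaining lines

# Interleaving the two fields restores the complete frame:
rebuilt = [line for pair in zip(field_a, field_b) for line in pair]
print(rebuilt == frame)   # True
```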
                    To get the proper definition, the television signal should have a certain bandwidth. A
                    complete picture then consists of 530 x 400 = 212000 pixels, requiring a bandwidth of
                    about 5MHz. This applies to a black-and-white picture.
                    For a color picture, another calculation applies. A PAL color picture is made up of 768 x
                    576 pixels for a complete frame.
                   Color television
                    Color television uses the primary colors RED, GREEN and BLUE. By adding these primary
                    colors, other colors, including white, can be constructed. These three colors are not
                    transmitted individually by a television transmitter, but as a luminance signal (Y) and the
                    color difference signals R-Y and B-Y. The color difference signals R-Y and B-Y are also
                    called the U and V signals once adapted in amplitude.
                    The required bandwidth for the color signals is far less than for the Y signal. For TV
                    signals, the ratio between the luminance (Y) and color signals is given as 4:1:1; the
                    required bandwidth of each color signal is four times less than that of the luminance signal.
                    In professional studios, a ratio of 4:2:2 is normally used, but due to the limited bandwidth
                    of a television transmitter, it cannot be broadcast in this format.
                   When we want to digitise such a signal, we can only do it on a component level, which
                   means that the video signal has to be split up.
                   For a sampling frequency of 13.5MHz and an eight bit quantising scheme, we get the
                   following video bitstream :
                         Y signal : 13.5MHz x 8 bit = 108Mbit/s
                         R-Y signal : 3.375MHz x 8 bit = 27Mbit/s
                         B-Y signal : 3.375MHz x 8 bit = 27Mbit/s
                    which totals 162Mbit/s. To transmit this amount of data, a very high bandwidth is required.
                    Where the bitstream for CD audio was 1.4Mbit/s, for a video signal it is about 100x higher.
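The per-component figures and the 162Mbit/s total can be verified with a short sketch:

```python
BITS = 8
sample_rates_hz = {
    "Y":   13.5e6,    # luminance
    "R-Y": 3.375e6,   # colour difference signals
    "B-Y": 3.375e6,
}

rates = {name: hz * BITS for name, hz in sample_rates_hz.items()}
total = sum(rates.values())
print(f"{total / 1e6:.0f} Mbit/s")   # 162 Mbit/s
```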
                   MPEG VIDEO COMPRESSION
                   Introduction
                    When digital source coding is to be used in television systems, some international
                    agreements have to be made. This applies not only to video, but also to audio, the
                    multiplexing of video and audio, and other signals like teletext.
                    Between 1988 and 1994, an international standard was agreed on by MPEG, a subgroup
                    of ISO and IEC.
                   The goal of MPEG was :
                       1. To produce a worldwide standard for video and audio coding, with options for various
                          applications
                       2. To define transmission specifics that can be used for all media, including
                          transmission and recording
                       3. To define a compliance procedure by which systems can be evaluated
                       4. To define a datastructure that can be used to develop encoders and decoders
                    The first standard, agreed on in 1992, was MPEG-1, used for computers and CD-ROM.
                    The datastream here is 1.15Mbit/s. Picture quality is comparable to VHS recorders.
                    In November 1994, MPEG-2 was established. It not only enables datastreams of up to
                    100Mbit/s but also creates the possibility of carrying multiple programs in one single
                    datastream.
                    MPEG-2 is the basis for the Digital Video Disc and Digital Video Broadcasting. For the
                    European market, the DVB project has established almost the complete system for the
                    new generation of digital TV on both cable and satellite. This standard not only allows data
                    to be transported more efficiently, but also enables various new services to be exploited.
                    DVB has standardised the scrambling of the signals and can add special information about
                    things like program contents, transmission path etc. This not only allows the set-top box to
                    configure itself but also aids the viewer in finding programs.
                   MPEG-1
                         Name : ISO/IEC 11172
                         Bit rate : Usually 1.5Mbit/s
                         Video : Resolution CIF (352 pixels * 288 lines * 25Hz)
                        Audio : 64Kbit/s to 384 Kbit/s (Musicam)
                        Systems : Multiplexing 1 * video + stereo audio + data
                        Applications : CDI and computer games
                   MPEG-2
                         Name : ISO/IEC 13818
                        Bit rate : Usually 2 - 15Mbit/s
                        Video : Resolution ITU-R 601 (720 pixels * 576 lines * 25Hz) and HDTV
                        Audio : 64Kbit/s to 384Kbit/s, stereo + surround (5 channels)
                        Systems : Multiplexing video, audio, data, conditional access, multiple video signals,
                        each with their own timebase
                        Applications : Digital TV
                    To reduce the number of bits in a digital TV system, reduction and compression
                    techniques have been developed. We speak of compression when the picture can be
                    perfectly reconstructed in the decoder; reduction will always show a difference between the
                    original and the decoded image.
                   Compression techniques
                    By exploiting the properties of the human senses, the eyes and ears, it is possible to apply
                    a data reduction that is hardly visible and hence acceptable for certain applications.
                    The basis is formed by leaving out information that cannot be registered by the eye
                    (irrelevance reduction). In addition, the same information does not always have to be
                    retransmitted when the picture contents have not changed; this is called redundancy
                    reduction.
                    In MPEG video compression, multiple methods are used to reduce the number of bits, like :
                         Compression based on the Discrete Cosine Transform (DCT)
                         Segmentation, splitting the image into blocks
                         Motion compensation
                         Temporal prediction and interpolation
                    Adjacent pictures in a television signal are pretty much the same, and every picture is built
                    out of two fields which carry much the same information. This is what we call redundant
                    information. There are several forms of redundancy :
                        Spatial redundancy
                        Temporal redundancy
                        Static redundancy
                    To exploit this redundancy, a number of agreements have been made within MPEG. An
                    MPEG decoder has various picture memories in which different frames are stored during
                    decoding and from which the original picture can be reconstructed.
                   This way, the bandwidth necessary for a single analogue television channel can now contain
                   5 - 7 digital television channels using the MPEG-2 data compression. This technique allows a
                   4-hour movie to be recorded on a double sided Digital Video Disc (DVD).
                   ADR / DMX DIGITAL RADIO
                    Digital Music Express (DMX) is a digitally coded radio service complying with the Astra
                    Digital Radio (ADR) specifications. The technique is designed so that the standard 180KHz
                    subcarrier spacing used for satellite audio can be retained. This enables the simultaneous
                    transmission of both analogue and digital audio, to assist in a smooth transition from
                    analogue to digital. Per transponder, a maximum of 12 radio programmes can be carried
                    'behind' the television program, or a total of 48 radio programmes can be transmitted on a
                    single transponder.
                    Importantly, a current analogue system needs two carriers for the left and right channels of
                    a stereo transmission, in contrast to ADR and DMX, where only a single carrier is needed.
                    First, the left and right audio signals are digitised with a sampling frequency of 48KHz at
                    16 bits, resulting in an audio stream of 1.536Mbit/s. This has to be reduced by a factor of 8
                    to fit into the narrow-band transmitter signal, which is accomplished using the MUSICAM
                    encoding technique.
                    After the MUSICAM encoder, extra data like RDS and programme information is added.
                    The encoder output delivers a datastream of 192Kbit/s.
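The 1.536Mbit/s input rate and the factor-8 reduction to 192Kbit/s check out as follows:

```python
SAMPLE_RATE = 48_000   # Hz
BITS = 16              # bits per sample
CHANNELS = 2           # left and right

linear = SAMPLE_RATE * BITS * CHANNELS   # bits per second before coding
coded = 192_000                          # MUSICAM encoder output, bits/s

print(linear)            # 1536000, i.e. 1.536Mbit/s
print(linear // coded)   # 8, the factor-8 reduction mentioned above
```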
 

                                      All Rights Reserved.