Data Compression
CONTENTS
Introduction; What and When; Some Questions; Uses of Data Compression; Major Steps; Types of Data Compression; Disadvantages; Conclusion
OBJECTIVES:
The objectives are: to store data in a compressed form in which its quality is not degraded, and to transfer the compressed data easily and efficiently so that it can be read back using a decompression technique.
Even if the storage and transmission problems of digital video were overcome, the processing power needed to manage such volumes of data would make the receiver hardware very expensive.
USES OF DATA COMPRESSION:
More and more data is being stored electronically. Digital video libraries, for example, contain vast amounts of data, and compression allows cost-effective storage of that data.
New technology has allowed the possibility of interactive digital television, and the demand is for high-quality transmissions, a wide selection of programs to choose from, and inexpensive hardware. But for digital television to be a success, it must use data compression. Data compression reduces the number of bits required to represent or transmit information.
MAJOR STEPS
o Preparation: includes analog-to-digital conversion and generating an appropriate digital representation of the information. An image, for example, is divided into blocks of 8×8 pixels and represented by a fixed number of bits per pixel.
o Processing: the first stage of the compression process, which makes use of sophisticated algorithms.
o Quantization: processes the result of the previous step. It specifies the granularity of the mapping of real numbers onto integers, and so results in a reduction of precision.
o Entropy encoding: the last step. It compresses a sequential digital data stream without loss; for example, a sequence of zeros can be compressed by specifying the number of occurrences (see the run-length sketch after this list).
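To make the entropy-encoding step concrete, here is a minimal run-length encoding sketch in Python (the language and the function name rle_encode are illustrative choices, not from the original text). It replaces each run of identical symbols with a (symbol, count) pair:

    def rle_encode(bits):
        # Walk the string and collapse each run of identical
        # symbols into a (symbol, run-length) pair.
        out = []
        i = 0
        while i < len(bits):
            j = i
            while j < len(bits) and bits[j] == bits[i]:
                j += 1
            out.append((bits[i], j - i))
            i = j
        return out

    print(rle_encode("000001100000000"))  # [('0', 5), ('1', 2), ('0', 8)]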
TYPES OF DATA COMPRESSION
Entropy encoding -- lossless. The data is treated as a simple digital sequence and its semantics are ignored.
Source encoding -- lossy. Takes the semantics of the data into account; the amount of compression depends on the data contents.
Hybrid encoding -- a combination of entropy and source encoding. Most multimedia systems use hybrid schemes.
Relative Encoding:
Relative encoding is a transmission technique that attempts to improve efficiency by transmitting the difference between each value and its predecessor, in place of the value itself. Thus the values 1 5 1 0 6 4 3 3 0 0 3 would be transmitted as 1 +4 -4 -1 +6 -2 -1 +0 -3 +0 +3.
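A minimal sketch of relative (delta) encoding using the example values above (Python and the function names are illustrative assumptions):

    def delta_encode(values):
        # Send the first value, then each successive difference.
        return values[:1] + [b - a for a, b in zip(values, values[1:])]

    def delta_decode(deltas):
        # Rebuild the stream by accumulating the differences.
        out = deltas[:1]
        for d in deltas[1:]:
            out.append(out[-1] + d)
        return out

    vals = [1, 5, 1, 0, 6, 4, 3, 3, 0, 0, 3]
    print(delta_encode(vals))   # [1, 4, -4, -1, 6, -2, -1, 0, -3, 0, 3]
    print(delta_decode(delta_encode(vals)) == vals)  # True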
Huffman Coding:
Huffman coding is a popular compression technique that assigns variable length codes (VLCs) to symbols, so that the most frequently occurring symbols have the shortest codes. The total number of bits required to transmit the data can then be considerably less than with a fixed length representation, and the technique is particularly effective where the data are dominated by a small number of symbols. On decompression the symbols are reassigned their original fixed length codes. When used to compress text, for example, variable length codes are used in place of ASCII codes, and the most common characters, usually space, e, and t, are assigned the shortest codes.
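The sketch below builds a Huffman code table in Python by repeatedly merging the two least frequent entries (a standard construction; the helper name huffman_codes is my own):

    import heapq
    from collections import Counter

    def huffman_codes(text):
        # Heap entries: (frequency, tiebreak, {symbol: partial code}).
        freq = Counter(text)
        heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            # Merge the two least frequent subtrees; prefix their
            # codes with '0' and '1' respectively.
            f1, _, c1 = heapq.heappop(heap)
            f2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in c1.items()}
            merged.update({s: "1" + c for s, c in c2.items()})
            heapq.heappush(heap, (f1 + f2, count, merged))
            count += 1
        return heap[0][2]

    codes = huffman_codes("this is an example")
    # Frequent symbols (like the space) get the shortest codes.
    print(sorted(codes.items(), key=lambda kv: len(kv[1]))[:3])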
Arithmetic Coding:
Arithmetic coding is usually more efficient than the more popular Huffman technique, but it is also more complex.
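As an illustration of the idea, the toy encoder below represents a whole message as a single number inside a successively narrowed interval (exact fractions are used to sidestep floating-point issues; the symbol probabilities and names are assumptions for the demo, not part of the original text):

    from fractions import Fraction

    def ranges(probs):
        # Cumulative interval [lo, hi) for each symbol.
        cum, start = {}, Fraction(0)
        for sym, p in probs.items():
            cum[sym] = (start, start + p)
            start += p
        return cum

    def ac_encode(message, probs):
        low, high = Fraction(0), Fraction(1)
        cum = ranges(probs)
        for sym in message:
            span = high - low
            lo, hi = cum[sym]
            low, high = low + span * lo, low + span * hi
        return (low + high) / 2   # any value in [low, high) will do

    def ac_decode(value, length, probs):
        cum, out = ranges(probs), []
        for _ in range(length):
            for sym, (lo, hi) in cum.items():
                if lo <= value < hi:
                    out.append(sym)
                    value = (value - lo) / (hi - lo)
                    break
        return "".join(out)

    probs = {"a": Fraction(1, 2), "b": Fraction(1, 4), "c": Fraction(1, 4)}
    code = ac_encode("abca", probs)
    print(code, ac_decode(code, 4, probs))   # 45/128 abca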
INTRAFRAME COMPRESSION:
Intraframe compression is compression applied to still images, such as photographs and diagrams. Intraframe compression techniques can be applied to individual frames of a video sequence.
LZ-77 ENCODING
Good as they are, Huffman and arithmetic coding are not perfect for encoding text because they do not capture the higher-order relationships between words and phrases. There is a simple, clever, and effective approach to compressing text known as "LZ-77", which uses the redundant nature of text to provide compression.
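A bare-bones LZ-77 sketch: each step emits an (offset, length, next character) triple that copies a match from earlier in the stream (the window size and function names are illustrative choices):

    def lz77_encode(data, window=255):
        i, out = 0, []
        while i < len(data):
            # Find the longest match starting in the sliding window.
            best_len, best_off = 0, 0
            for j in range(max(0, i - window), i):
                k = 0
                while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                    k += 1
                if k > best_len:
                    best_len, best_off = k, i - j
            out.append((best_off, best_len, data[i + best_len]))
            i += best_len + 1
        return out

    def lz77_decode(triples):
        buf = []
        for off, length, ch in triples:
            for _ in range(length):
                buf.append(buf[-off])   # copies may overlap themselves
            buf.append(ch)
        return "".join(buf)

    s = "abcabcabcd"
    t = lz77_encode(s)
    print(t, lz77_decode(t) == s)   # [..., (3, 6, 'd')] True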
VECTOR QUANTIZATION:
Vector quantization is a more complex form of quantization that first divides the input data stream into blocks. A pre-defined table contains a set of patterns for blocks, and each block is coded using the pattern from the table that is most similar. If the number of quantization levels (i.e. blocks in the table) is very small, the compression will be lossy. Because images often contain many repeated sections, vector quantization can be quite successful for image compression.
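A small vector quantization sketch: the stream is split into fixed-size blocks and each block is replaced by the index of the closest codebook pattern (the codebook contents here are made up for the demo):

    def vq_encode(pixels, codebook, block=4):
        indices = []
        for i in range(0, len(pixels), block):
            chunk = pixels[i:i + block]
            # Index of the codebook pattern with least squared error.
            best = min(range(len(codebook)),
                       key=lambda k: sum((a - b) ** 2
                                         for a, b in zip(chunk, codebook[k])))
            indices.append(best)
        return indices

    def vq_decode(indices, codebook):
        return [v for k in indices for v in codebook[k]]

    codebook = [[0, 0, 0, 0], [255, 255, 255, 255], [0, 0, 255, 255]]
    data = [3, 1, 0, 2, 250, 251, 255, 254, 0, 5, 250, 249]
    idx = vq_encode(data, codebook)
    print(idx)                       # [0, 1, 2]
    print(vq_decode(idx, codebook))  # lossy: close to, not equal to, data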
COARSE QUANTIZATION:
It is similar to sub-sampling in that information is discarded, but the compression is accomplished by reducing the number of bits used to describe each pixel, rather than by reducing the number of pixels. Each pixel is reassigned an alternative value, and the number of alternative values is less than that in the original image. Quantization where the number of ranges is small is known as coarse quantization.
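A one-function sketch of coarse quantization, mapping 8-bit pixels down to 3 bits and back (the bit widths are chosen only for illustration):

    def coarse_quantize(pixel, bits=3):
        # Keep only the top `bits` bits, then reconstruct the pixel
        # as the centre of its range: 2**bits levels instead of 256.
        shift = 8 - bits
        level = pixel >> shift
        return (level << shift) | (1 << (shift - 1))

    print([coarse_quantize(p) for p in (0, 37, 200, 255)])
    # [16, 48, 208, 240] -- precision is lost, bits are saved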
INTERFRAME COMPRESSION:
Interframe compression is compression applied to a sequence of video frames, rather than a single image. In general, relatively little changes from one video frame to the next. Interframe compression exploits the similarities between successive frames, to reduce the volume of data required to describe the sequence.
Common interframe techniques include difference coding and block-based difference coding, in which only the parts of a frame that change are transmitted; a minimal difference-coding sketch follows.
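The sketch below stores only the pixels that change noticeably between two frames (the threshold makes it lossy; names and values are illustrative assumptions):

    def frame_difference(prev, curr, threshold=4):
        # Keep (position, new value) only where the change is large.
        return [(i, c) for i, (p, c) in enumerate(zip(prev, curr))
                if abs(c - p) > threshold]

    def apply_difference(prev, changes):
        frame = list(prev)
        for i, v in changes:
            frame[i] = v
        return frame

    prev = [10, 10, 10, 200, 200, 10]
    curr = [10, 12, 10, 120, 200, 10]
    changes = frame_difference(prev, curr)
    print(changes)                          # [(3, 120)]
    print(apply_difference(prev, changes))  # small changes are dropped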
LOSSY COMPRESSION
Data is lost, but not too much. It applies to audio, video, still images, medical images, and photographs, where compression ratios of 10:1 often yield quite acceptable results. Major techniques include vector quantization, block transforms, and the standards JPEG, JPEG 2000, and MPEG (1, 2, 4, 7); a small block-transform sketch follows.
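To hint at how block transforms support lossy compression, the sketch below applies a 1-D DCT-II to an 8-pixel block, coarsely quantizes the coefficients, and inverts the transform; the reconstruction is close to, but not exactly, the original (the sample block and rounding step are illustrative):

    import math

    def dct(block):
        # 1-D DCT-II: concentrates the block's energy in few coefficients.
        N = len(block)
        return [sum(x * math.cos(math.pi * (n + 0.5) * k / N)
                    for n, x in enumerate(block)) for k in range(N)]

    def idct(coeffs):
        N = len(coeffs)
        return [coeffs[0] / N + (2.0 / N) *
                sum(coeffs[k] * math.cos(math.pi * (n + 0.5) * k / N)
                    for k in range(1, N))
                for n in range(N)]

    block = [52, 55, 61, 66, 70, 61, 64, 73]
    coeffs = [round(c / 10) * 10 for c in dct(block)]  # coarse quantization
    print([round(x, 1) for x in idct(coeffs)])         # near the original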
DISADVANTAGES
Several techniques exist by which data can be compressed efficiently, but with some of them there is a chance of losing data.
CONCLUSION:
Improving television with larger screens and better resolution requires a huge increase in transmission bit rates. The bit rates are, however, limited by the available broadcast spectrum or network connection. The only recourse is lossy image compression, most commonly JPEG or MPEG-2: lossy by name and lossy by nature. The more an image is compressed using lossy methods, the worse the image quality. Because random noise in the images is interpreted as movement, any increase in noise increases the transmission bandwidth; the cleaner the images and the lower the noise, the lower the average transmission bandwidth. It is therefore cheaper to transmit high quality images.