Presentation On Image Compression
CONTENTS :
Definition
Classification of Image Data
Need for Compression
Compression Algorithms
Lossy & Lossless Compression
JPEG Image
DCT & DFT
Encoding Process
DCT Encoding
Decoding Process
Conclusion
DEFINITION
Compression is a reversible conversion of data to a format that requires fewer bits, usually performed so that the data can be stored or transmitted more efficiently. It minimizes the size in bytes of a graphic file without degrading the quality of the image to an unacceptable level. Image compression studies the techniques that reduce the amount of data needed to describe the information content of the image.
COMPRESSION ALGORITHMS :
The image compression algorithms can be divided into two branches:
Lossless algorithms: There is no information loss, and the image can be reconstructed exactly the same as the original.
Lossy algorithms: Some information is discarded, so the reconstructed image is only an approximation of the original, in exchange for higher compression ratios.
Figure: an image of a cat compressed with successively more lossy compression ratios from right to left.
Figure: an image with lossless compression where the size is reduced from 83,261 to 1,523 bytes.
1. PNG (Portable Network Graphics): works well on screenshots, e.g., of movies, games, or similar content.
2. JPEG (Joint Photographic Experts Group): works with color and grayscale images, e.g., satellite, medical, ...
3. GIF (Graphics Interchange Format)
JPEG
(Intraframe coding)
First-generation JPEG uses DCT + run-length + Huffman entropy coding.
Second-generation JPEG (JPEG 2000) uses a wavelet transform + bit-plane coding. It exhibits a more graceful degradation of quality as the bit usage decreases, by using transforms with a larger spatial extent for the lower-frequency coefficients and by using overlapping transform basis functions.
The coefficients of the DCT are real-valued, whereas those of the DFT are complex-valued.
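As a quick illustration (a minimal sketch assuming NumPy and SciPy are available; the sample values are arbitrary), the DCT of a real signal is real-valued while its DFT is complex-valued:

# Minimal sketch: compare DCT and DFT output types for one row of pixel samples.
import numpy as np
from scipy.fft import dct, fft

x = np.array([52.0, 55.0, 61.0, 66.0, 70.0, 61.0, 64.0, 73.0])  # arbitrary samples

dct_coeffs = dct(x, type=2, norm='ortho')   # real-valued coefficients
dft_coeffs = fft(x)                         # complex-valued coefficients

print(dct_coeffs.dtype)  # float64
print(dft_coeffs.dtype)  # complex128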
DISCRETE COSINE TRANSFORM (DCT)
The source image samples are grouped into 8x8 blocks, shifted from unsigned integers to signed integers, and input to the DCT. The following equation is the idealized mathematical definition of the 8x8 forward DCT (FDCT):

F(u,v) = \frac{1}{4} C(u) C(v) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y) \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16}

where C(u), C(v) = \frac{1}{\sqrt{2}} for u, v = 0; otherwise C(u), C(v) = 1.
The DCT transforms an 8x8 block of input values into a linear combination of 64 patterns. The patterns are referred to as the two-dimensional DCT basis functions, and the output values are referred to as transform coefficients.
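A direct transcription of this definition might look as follows (a minimal sketch assuming NumPy; fdct_8x8 is an illustrative name, and a practical encoder would use a fast factored DCT rather than this quadruple loop):

import numpy as np

def fdct_8x8(block):
    """Forward 2-D DCT of an 8x8 block of level-shifted samples f(x, y)."""
    F = np.zeros((8, 8))
    for u in range(8):
        for v in range(8):
            cu = 1.0 / np.sqrt(2.0) if u == 0 else 1.0
            cv = 1.0 / np.sqrt(2.0) if v == 0 else 1.0
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += block[x, y] * np.cos((2 * x + 1) * u * np.pi / 16) \
                                     * np.cos((2 * y + 1) * v * np.pi / 16)
            F[u, v] = 0.25 * cu * cv * s
    return F

# Example: level-shift an 8x8 block of 8-bit samples and transform it.
samples = np.random.randint(0, 256, size=(8, 8)).astype(float)
coeffs = fdct_8x8(samples - 128)   # shift unsigned [0, 255] to signed [-128, 127]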
DCT PROCESSING
The DCT is related to the Discrete Fourier Transform (DFT). Some simple intuition for DCT-based compression can be obtained by viewing the FDCT as a harmonic analyzer and the IDCT as a harmonic synthesizer. Each 8x8 block of source image samples is effectively a 64-point discrete signal which is a function of the two spatial dimensions x and y.
The FDCT takes such a signal as its input and decomposes it into 64 orthogonal basis signals. Each contains one of the 64 unique two-dimensional (2D) spatial frequencies which comprise the input signal's spectrum.
The output of the FDCT is the set of 64 basis-signal amplitudes, or DCT coefficients, whose values are uniquely determined by the particular 64-point input signal.
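The 64 basis signals can be written out explicitly (a minimal sketch assuming NumPy; dct_basis is an illustrative helper, not part of the standard):

import numpy as np

def dct_basis(u, v):
    """Return the 8x8 basis pattern for the spatial frequency pair (u, v)."""
    cu = 1.0 / np.sqrt(2.0) if u == 0 else 1.0
    cv = 1.0 / np.sqrt(2.0) if v == 0 else 1.0
    x = np.arange(8).reshape(8, 1)
    y = np.arange(8).reshape(1, 8)
    return 0.25 * cu * cv * np.cos((2 * x + 1) * u * np.pi / 16) \
                          * np.cos((2 * y + 1) * v * np.pi / 16)

# Each 8x8 input block is the sum of these 64 patterns weighted by its DCT coefficients.
basis = [[dct_basis(u, v) for v in range(8)] for u in range(8)]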
QUANTIZATION
To achieve further compression, each of the 64 DCT coefficients is uniformly quantized in conjunction with a 64-element quantization table, which is specified by the application. The purpose of quantization is to discard information which is not visually significant. Because quantization is a many-to-one mapping, it is fundamentally lossy; moreover, it is the principal source of lossiness in a DCT-based encoder. Quantization is defined as the division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer, as in the following equation:

F^{Q}(u,v) = \mathrm{round}\left(\frac{F(u,v)}{Q(u,v)}\right)

Each quantization step size should ideally be chosen as the perceptual threshold, to compress the image as much as possible without visible artifacts. It is also a function of the source image characteristics, display characteristics, and viewing distance.
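A possible sketch of this step (assuming NumPy; the table shown is the example luminance quantization table from Annex K of the JPEG specification, used here purely for illustration, since applications may specify their own):

import numpy as np

Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(F, Q=Q_LUMA):
    """Divide each DCT coefficient by its quantizer step size and round."""
    return np.rint(F / Q).astype(int)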
ENTROPY CODING
The final processing step of the encoder is entropy coding. This step achieves additional compression losslessly by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. It is useful to consider entropy coding as a two-step process. The first step converts the zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. The second step converts the symbols to a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols depend on both the DCT-based mode of operation and the entropy coding method.
The outputs of DPCM (Differential Pulse Code Modulation, applied to the DC coefficients) and "zig-zag" scanning (applied to the AC coefficients) can now be entropy coded separately, encoding the quantized DCT coefficients more compactly based on their statistical characteristics.
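A possible sketch of the first, symbol-forming step (assuming NumPy; to_symbols is an illustrative name, and the final Huffman or arithmetic coding of the symbols into a bitstream is omitted):

import numpy as np

# Zig-zag index order for an 8x8 block (low frequencies first).
ZIGZAG = sorted(((u, v) for u in range(8) for v in range(8)),
                key=lambda p: (p[0] + p[1], p[1] if (p[0] + p[1]) % 2 == 0 else p[0]))

def to_symbols(q_block, prev_dc):
    """Turn one quantized 8x8 block into a DC difference and AC run-length symbols."""
    dc_diff = int(q_block[0, 0]) - prev_dc            # DPCM on the DC coefficient
    ac = [int(q_block[u, v]) for (u, v) in ZIGZAG[1:]]  # AC coefficients in zig-zag order
    symbols, run = [], 0
    for c in ac:                                       # (run of zeros, value) pairs
        if c == 0:
            run += 1
        else:
            symbols.append((run, c))
            run = 0
    symbols.append('EOB')                              # end-of-block marker
    return dc_diff, symbols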
DEQUANTIZATION: The next step is to dequantize the output of entropy decoding, returning the result to a representation appropriate for input to the IDCT. The equation is as follows:

F'(u,v) = F^{Q}(u,v) \times Q(u,v)

IDCT: The IDCT takes the 64 dequantized DCT coefficients and reconstructs a 64-point output image signal by summing the basis signals.
JPEG does not specify a unique IDCT algorithm in its standard either. The mathematical definition of the 8x8 IDCT is as follows:

f(x,y) = \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7} C(u) C(v) F(u,v) \cos\frac{(2x+1)u\pi}{16} \cos\frac{(2y+1)v\pi}{16}

where C(u), C(v) = \frac{1}{\sqrt{2}} for u, v = 0; otherwise C(u), C(v) = 1.
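A possible sketch of the decoder side (assuming NumPy; dequantize and idct_8x8 are illustrative names, and a practical decoder would use a fast factored IDCT):

import numpy as np

def dequantize(FQ, Q):
    """Multiply each quantized coefficient by its quantizer step size."""
    return FQ * Q

def idct_8x8(F):
    """Inverse 2-D DCT: sum the 64 basis signals weighted by the coefficients."""
    f = np.zeros((8, 8))
    for x in range(8):
        for y in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    cu = 1.0 / np.sqrt(2.0) if u == 0 else 1.0
                    cv = 1.0 / np.sqrt(2.0) if v == 0 else 1.0
                    s += cu * cv * F[u, v] * np.cos((2 * x + 1) * u * np.pi / 16) \
                                           * np.cos((2 * y + 1) * v * np.pi / 16)
            f[x, y] = 0.25 * s
    return f

# Example (FQ = quantized coefficients, Q = quantization table):
# reconstructed = np.clip(idct_8x8(dequantize(FQ, Q)) + 128, 0, 255)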
PERFORMANCE EVALUATION
The performance of an image compression technique must be evaluated considering several different aspects:
Compression efficiency (compression ratio/factor, bit rate);
Image quality (distortion measure);
Computational cost;
Transmission time.
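These quantities can be computed as follows (a minimal sketch assuming NumPy; PSNR is used here as one simple distortion measure):

import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    """Compression ratio: original size divided by compressed size."""
    return original_bytes / compressed_bytes

def bit_rate(compressed_bytes, width, height):
    """Bit rate in bits per pixel of the compressed representation."""
    return 8.0 * compressed_bytes / (width * height)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between original and reconstructed images."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)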
CONCLUSION :