Synopsis
AIM:
An actual image consumes a large amount of memory for its storage, the main reason being data redundancy. The main objective of this project is to reduce the amount of repeated data and thereby the amount of space required to store the image. The compression transformation is applied prior to the storage or transmission of the image; some time later, the compressed image is decompressed to reconstruct the original image or an approximation of it. The Image Compression System reduces the size of an input image by compressing it using lossless techniques, RLE (Run-Length Encoding) and LZW (Lempel-Ziv-Welch) coding, and the compressed image is then decompressed to reconstruct the original image.
INTRODUCTION:
Image compression addresses the problem of reducing the amount of data required to represent a digital image. The underlying basis of the reduction process is the removal of redundant data. From a mathematical viewpoint, this amounts to transforming a 2-D pixel array into a statistically uncorrelated data set. The transformation is applied prior to the storage or transmission of the image; later, the compressed image is decompressed to reconstruct the original image or an approximation of it. Image compression is recognized as an enabling technology: it is a natural technology for handling the increased spatial resolutions of today's imaging sensors and evolving broadcast television standards. In short, an ever-expanding number of applications depend on the effective manipulation, storage, and transmission of binary, gray-scale, and color images.
DESCRIPTION:
A compression system consists of two distinct structural blocks: an encoder and a decoder. An input image f(x, y) is fed into the encoder, which creates a set of symbols from the input data. After transmission over the channel, the encoded representation is fed to the decoder, where a reconstructed output image ^f(x, y) is generated. In general, ^f(x, y) may or may not be an exact replica of f(x, y). If it is, the system is error free or information preserving; if not, some level of distortion is present in the reconstructed image.

Both the encoder and decoder consist of two relatively independent functions or sub-blocks. The encoder is made up of a source encoder, which removes input redundancies, and a channel encoder, which increases the noise immunity of the source encoder's output. Conversely, the decoder includes a channel decoder followed by a source decoder. If the channel between the encoder and the decoder is noise free (not prone to error), the channel encoder and decoder are omitted, and the general encoder and decoder reduce to the source encoder and decoder.

The two categories of image file compression are lossy and lossless. A lossy compression algorithm takes advantage of the limitations of the human visual system and discards information that would not be perceived by the eye. The loss of information is tolerable and in many cases goes unnoticed; file size shrinks as compression is increased, but at some point image deterioration becomes noticeable. Lossless algorithms compress the image data with no loss in image quality, but this results in larger files than lossy algorithms produce. Lossless compression is a technique that does not lose any data in the compression process; it "packs" data into a smaller file size by using a kind of internal shorthand to signify redundant data.
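The encoder/decoder round trip described above can be sketched in a few lines. This is only an illustration of the structure, assuming a noise-free channel (so the channel encoder and decoder are omitted); the standard-library zlib compressor stands in for the source encoder, and the toy one-dimensional byte string stands in for the image f(x, y):

```python
import zlib

def encoder(image_bytes: bytes) -> bytes:
    # Source encoder: removes input redundancies.
    return zlib.compress(image_bytes)

def decoder(payload: bytes) -> bytes:
    # Source decoder: inverts the source encoder exactly.
    return zlib.decompress(payload)

# Toy "image" with long runs of identical values (highly redundant).
f = bytes([10] * 100 + [200] * 100)

f_hat = decoder(encoder(f))
assert f_hat == f  # lossless: ^f(x, y) is an exact replica of f(x, y)
```

Because the scheme is lossless, the reconstruction is exact; a lossy source encoder would make the final assertion fail while still producing a visually acceptable ^f(x, y).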
If an original file is 1.5 MB (megabytes), lossless compression can reduce it to about half that size, depending on the type of file being compressed. This makes lossless compression convenient for transferring files across the Internet, as smaller files transfer faster; it is also handy for storing files, as they take up less room. When an image is losslessly compressed, repetition and predictability are used to represent all the information using less memory.
Lossless techniques: Run-length encoding (RLE) is a very simple form of data compression in which runs of data (that is, sequences in which the same data value occurs in many consecutive data elements) are stored as a single data value and count, rather than as the original run. It is most useful on data that contains many such runs; for example, simple graphic images such as icons and line drawings. Lempel-Ziv-Welch (LZW) coding assigns fixed-length code words to variable-length sequences of source symbols, but requires no a priori knowledge of the probability of occurrence of the symbols to be coded. LZW coding is conceptually very simple; a unique feature is that the coding dictionary, or code book, is created while the data are being encoded. An LZW decoder builds an identical dictionary as it decodes the encoded data stream.