Matlab Code For Image Compression Using SPIHT Algorithm
SPIHT Algorithm using Matlab
There are several different ways in which image files can be compressed. For Internet use, the two
most common compressed graphic image formats are the JPEG format and the GIF format. The
JPEG method is more often used for photographs, while the GIF method is commonly used for line
art and other images in which geometric shapes are relatively simple.
Other techniques for image compression include the use of fractals and wavelets. These methods
have not gained widespread acceptance for use on the Internet as of this writing. However, both
methods offer promise because they offer higher compression ratios than the JPEG or GIF methods
for some types of images. Another new method that may in time replace the GIF format is the PNG
format.
https://www.pantechsolutions.net/image-processing-projects/matlab-code-for-image-compression-using-spiht-algorithm 2/19
4/11/2018 Matlab Code for Image Compression using SPIHT Algorithm
Compressing an image is significantly different than compressing raw binary data. Of course,
general-purpose compression programs can be used to compress images, but the result is less than
optimal. This is because images have certain statistical properties, which can be exploited by
encoders specifically designed for them. Also, some of the finer details in the image can be
sacrificed for the sake of saving a little more bandwidth or storage space. This also means that
lossy compression techniques can be used in this area.
A text file or program can be compressed without the introduction of errors, but only up to a certain
extent. This is called lossless compression. Beyond this point, errors are introduced. In text and
program files, it is crucial that compression be lossless because a single error can seriously
damage the meaning of a text file, or cause a program not to run. In image compression, a small
loss in quality is usually not noticeable. There is no "critical point" up to which compression works
perfectly, but beyond which it becomes impossible. When there is some tolerance for loss, the
compression factor can be greater than it can when there is no loss tolerance. For this reason,
graphic images can be compressed more than text files or programs.
In a distributed environment large image files remain a major bottleneck within systems.
Compression is an important component of the solutions available for creating file sizes of
manageable and transmittable dimensions. Increasing the bandwidth is another method, but the
cost sometimes makes this a less attractive solution.
The figures in Table 1 show the qualitative transition from simple text to full-motion video data and
the disk space, transmission bandwidth, and transmission time needed to store and transmit such
uncompressed data.
Table.1 Multimedia data types and uncompressed storage space, transmission bandwidth,
and transmission time required. The prefix kilo- denotes a factor of 1000 rather than 1024.
(B for bytes. The table body listed several still-image types at various resolutions and 1 min of video at 30 frames/sec; the numeric entries are not reproduced here.)
The examples above clearly illustrate the need for sufficient storage space, large transmission
bandwidth, and long transmission time for image, audio, and video data. At the present state of
technology, the only solution is to compress multimedia data before its storage and transmission,
and decompress it at the receiver for playback. For example, with a compression ratio of 32:1, the
space, bandwidth, and transmission time requirements can be reduced by a factor of 32, with
acceptable quality.
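The arithmetic behind that claim is direct. A small Python sketch (the image size and link speed below are illustrative choices, not figures from Table 1):

```python
def compressed_requirements(width, height, bytes_per_pixel, ratio, link_bps):
    """Storage (bytes) and transmission time (s), raw vs. compressed."""
    raw_bytes = width * height * bytes_per_pixel
    comp_bytes = raw_bytes / ratio
    raw_time = raw_bytes * 8 / link_bps      # 8 bits per byte
    comp_time = comp_bytes * 8 / link_bps
    return raw_bytes, comp_bytes, raw_time, comp_time

# A hypothetical 512 x 512 greyscale image (1 byte/pixel) over a 64 kbit/s
# link, compressed 32:1 as in the text: 262144 -> 8192 bytes, ~32.8 s -> ~1.0 s.
raw_b, comp_b, raw_t, comp_t = compressed_requirements(512, 512, 1, 32, 64000)
```

Every requirement scales down by the same ratio, which is exactly the "factor of 32" statement above.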
Image compression addresses the problem of reducing the amount of data required to represent a
digital image. The underlying basis of the reduction process is the removal of redundant data. From
a mathematical viewpoint, this amounts to transforming a 2-D pixel array into a statistically
uncorrelated data set. The transformation is applied prior to storage and transmission of the image.
The compressed image is decompressed at some later time, to reconstruct the original image or an
approximation to it.
Image compression research aims at reducing the number of bits needed to represent an image by
removing the spatial and spectral redundancies as much as possible.
(a) Lossless vs. Lossy compression: In lossless compression schemes, the reconstructed image,
after compression, is numerically identical to the original image. However, lossless compression can
only achieve a modest amount of compression. An image reconstructed following lossy
compression contains degradation relative to the original. Often this is because the compression
scheme completely discards redundant information. However, lossy schemes are capable of
achieving much higher compression. Under normal viewing conditions, no visible loss is perceived
(visually lossless).
The information loss in lossy coding comes from quantization of the data. Quantization can be
described as the process of sorting the data into different bins and representing each bin with a
single value. The value selected to represent a bin is called the reconstruction value. Every item in a
bin has the same reconstruction value, which leads to information loss (unless the quantization is so
fine that every item gets its own bin).
(b) Predictive vs. Transform coding: In predictive coding, information already sent or available is
used to predict future values, and the difference is coded. Since this is done in the image or spatial
domain, it is relatively simple to implement and is readily adapted to local image characteristics.
Differential Pulse Code Modulation (DPCM) is one particular example of predictive coding.
Transform coding, on the other hand, first transforms the image from its spatial domain
representation to a different type of representation using some well-known transform and then
codes the transformed values (coefficients). This method provides greater data compression
compared to predictive methods, although at the expense of greater computation.
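The predictive idea can be made concrete in a few lines. A minimal lossless DPCM sketch in Python (illustrative; practical DPCM coders also quantize the residuals):

```python
def dpcm_encode(samples):
    """Code each sample as the difference from the previous one."""
    diffs, prev = [], 0
    for s in samples:
        diffs.append(s - prev)   # residual; small for smooth signals
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Invert the prediction: running sum of the residuals."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

row = [100, 102, 103, 103, 101, 98]      # a smooth image row (hypothetical)
residuals = dpcm_encode(row)             # [100, 2, 1, 0, -2, -3]: mostly small
assert dpcm_decode(residuals) == row     # lossless round trip
```

The residuals cluster near zero, so a subsequent entropy coder spends far fewer bits on them than on the raw samples.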
Quantizer:
A quantizer simply reduces the number of bits needed to store the transformed coefficients by
reducing the precision of those values. Since this is a many-to-one mapping, it is a lossy process
and is the main source of compression in an encoder. Quantization can be performed on each
individual coefficient, which is known as Scalar Quantization (SQ). Quantization can also be
performed on a group of coefficients together, and this is known as Vector Quantization (VQ). Both
uniform and non-uniform quantizers can be used depending on the problem at hand.
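For example, a uniform scalar quantizer maps each coefficient to the nearest multiple of a step size; the rounding is the many-to-one (lossy) step. A Python sketch with a hypothetical step size of 8:

```python
def sq_quantize(coeffs, step):
    """Uniform scalar quantization: one integer index per coefficient
    (a many-to-one mapping, hence lossy)."""
    return [round(c / step) for c in coeffs]

def sq_dequantize(indices, step):
    """Each index maps back to a single reconstruction value."""
    return [i * step for i in indices]

coeffs = [3.2, -14.7, 40.1, 7.9]
idx = sq_quantize(coeffs, step=8)     # [0, -2, 5, 1]
recon = sq_dequantize(idx, step=8)    # [0, -16, 40, 8]: precision reduced
```

The reconstruction error is bounded by half the step size, so the step directly trades rate against distortion.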
Entropy Encoder:
An entropy encoder further compresses the quantized values losslessly to give better overall
compression. It uses a model to accurately determine the probabilities for each quantized value and
produces an appropriate code based on these probabilities so that the resultant output code stream
will be smaller than the input stream. The most commonly used entropy encoders are the Huffman
encoder and the arithmetic encoder, although for applications requiring fast execution, simple run-
length encoding (RLE) has proven very effective.
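Run-length encoding itself is simple enough to state in full. A Python sketch (the (value, run-length) pair output is one common convention):

```python
def rle_encode(symbols):
    """Run-length encoding: collapse each run into a (value, length) pair."""
    runs = []
    for s in symbols:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([s, 1])       # start a new run
    return [tuple(r) for r in runs]

# Long runs of identical symbols (common in quantized coefficient data)
# collapse to a few pairs:
assert rle_encode([0, 0, 0, 0, 1, 1, 0]) == [(0, 4), (1, 2), (0, 1)]
```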
It is important to note that a properly designed quantizer and entropy encoder are absolutely
necessary along with optimum signal transformation to get the best possible compression.
Wavelet Transform
One of the most important characteristics of wavelet transform is multiresolution decomposition. An
image decomposed by wavelet transform can be reconstructed with desired resolution. In the
thesis, a face image is first transformed to the wavelet domain using pyramid decomposition [16].
Fig. 3.1 shows a four-level wavelet decomposition. In our experiments, an image is decomposed
using the Daubechies orthogonal filter of
length 8 up to five decomposition levels [17]. The procedure goes like this. A low pass filter and a
high pass filter are chosen, such that they exactly halve the frequency range between themselves.
This filter pair is called the Analysis Filter pair. First, the low pass filter is applied for each row of
data, thereby getting the low frequency components of the row. But since the LPF is a half band
filter, the output data contains frequencies only in the first half of the original frequency range. So,
by Shannon's Sampling Theorem, they can be subsampled by two, so that the output data now
contains only half the original number of samples. Now, the high pass filter is applied for the same
row of data, and similarly the high pass components are separated, and placed by the side of the
low pass components. This procedure is done for all rows.
Next, the filtering is done for each column of the intermediate data. The resulting two-dimensional
array of coefficients contains four bands of data, each labelled as LL (low-low), HL (high-low), LH
(low-high) and HH (high-high). The LL band can be decomposed once again in the same manner,
thereby producing even more subbands. This can be done up to any level, thereby resulting in a
pyramidal decomposition as shown below.
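The row-then-column filtering above can be sketched with the simplest analysis filter pair, the Haar averages and differences (the article uses a length-8 Daubechies filter; Haar is substituted here only to keep the example short):

```python
def haar_1d(x):
    """One level of a Haar analysis filter pair with subsampling by two:
    pairwise averages (low-pass half) then pairwise differences (high-pass half)."""
    low = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    high = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return low + high

def haar_2d_one_level(img):
    """Filter every row, then every column, giving the LL/HL/LH/HH layout."""
    rows = [haar_1d(r) for r in img]                        # each row -> [L | H]
    cols = [haar_1d([rows[i][j] for i in range(len(rows))])
            for j in range(len(rows[0]))]                   # each column -> [L | H]
    # transpose back so the result is indexed [row][col]
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(rows))]

img = [[10, 12, 8, 6],
       [14, 16, 4, 2],
       [20, 20, 0, 0],
       [20, 20, 0, 0]]
out = haar_2d_one_level(img)
# Top-left 2 x 2 quadrant of `out` is the LL band: [[13.0, 5.0], [20.0, 0.0]],
# i.e. the averages of each 2 x 2 block of the input.
```

The LL quadrant is a coarse half-resolution version of the image; the other three quadrants are the HL, LH, and HH detail bands, and the LL quadrant is what gets decomposed again at the next level.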
As mentioned above, the LL band at the highest level can be classified as most important, and the
other 'detail' bands can be classified as of lesser importance, with the degree of importance
decreasing from the top of the pyramid to the bands at the bottom.
Reconstruction (the inverse transform) uses a matching Synthesis Filter pair. The filtering
procedure is just the opposite: we start from the topmost level, apply the filters column-wise first
and then row-wise, and proceed to the next level, until we reach the first level.
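As an illustration with the Haar filter pair (the simplest case; the article's Daubechies-8 synthesis is analogous but longer), one inverse filtering step recovers each original pair from its average and difference:

```python
def haar_1d_inverse(y):
    """Invert one Haar analysis step: y holds the low-pass half
    followed by the high-pass half."""
    n = len(y) // 2
    out = []
    for i in range(n):
        low, high = y[i], y[n + i]
        out += [low + high, low - high]   # recover the original sample pair
    return out

# [low, high] = [11, -1] came from the pair (10, 12): 11 + (-1) = 10, 11 - (-1) = 12
assert haar_1d_inverse([11, -1]) == [10, 12]
```

Applying this column-wise and then row-wise at each level, from the top of the pyramid down, undoes the forward decomposition exactly.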
SPIHT offers several desirable properties: good image quality, progressive image transmission,
optional lossless compression, exact rate control, and fast encoding/decoding. Each of these
properties is discussed below. Note that different compression methods were developed specifically
to achieve at least one of those objectives. What makes SPIHT really outstanding is that it yields all
those qualities simultaneously.
Image Quality
SPIHT wins in the test of finding the minimum rate required to obtain a reproduction
indistinguishable from the original. The SPIHT advantage is even more pronounced in encoding
color images, because the bits are allocated automatically for local optimality among the color
components, unlike other algorithms that encode the color components separately based on global
statistics of the individual components. We can see that visually lossless color compression is
obtained with some images at compression ratios of 100:1 to 200:1.
Progressive Image Transmission
SPIHT is a method that was designed for optimal progressive transmission (and still beats most non-
progressive methods!). It does so by producing a fully embedded coded file, in such a way that at any
moment the quality of the displayed image is the best available for the number of bits received up to
that moment. SPIHT is therefore very useful for applications where the user can quickly inspect the
image and decide whether it should really be downloaded, is good enough to be saved, or needs
refinement.
Let's see how this abstract definition is used in practice. Suppose you need to compress an image
for three remote users. Each one has different image-quality requirements, and you find that those
qualities can be obtained with the image compressed to at least 8 KB, 30 KB, and 80 KB,
respectively. If you use a non-embedded encoder (like JPEG), then to save transmission costs (or
time) you must prepare one file for each user. On the other hand, if you use an embedded encoder
(like SPIHT), you can compress the image to a single 80 KB file and then send the first 8 KB of
the file to the first user, the first 30 KB to the second user, and the whole file to the third user.
Surprisingly, with SPIHT all three users would get (for the same file size) an image quality
comparable or superior to the most sophisticated non-embedded encoders available today. SPIHT
achieves this feat by optimizing the embedded coding process and always coding the most
important information first.
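The embedding can be demonstrated with the simplest possible bit-plane stream: most significant planes first, so any prefix decodes to a coarser version of the same data (a Python toy, not the SPIHT bitstream format):

```python
def bitplane_encode(values, nplanes):
    """Emit bits most-significant plane first, so the stream's prefix
    always carries the most important information."""
    stream = []
    for p in range(nplanes - 1, -1, -1):
        for v in values:
            stream.append((v >> p) & 1)
    return stream

def bitplane_decode(stream, nvalues, nplanes):
    """Rebuild from any prefix of the stream; missing bits are simply zero."""
    vals = [0] * nvalues
    for k, bit in enumerate(stream):
        p = nplanes - 1 - k // nvalues      # bit plane this bit belongs to
        vals[k % nvalues] |= bit << p
    return vals

vals = [13, 2, 7, 9]                         # each fits in 4 bit planes
full = bitplane_encode(vals, 4)
assert bitplane_decode(full, 4, 4) == vals   # all 16 bits: exact recovery
approx = bitplane_decode(full[:8], 4, 4)     # first 8 bits only
# approx == [12, 0, 4, 8]: a coarse version, refined as more bits arrive
```

One encoded file serves every user: each receives a prefix sized to their quality target, just as in the three-user example above.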
Lossless Compression
SPIHT codes the individual bits of the image wavelet transform coefficients following a bit-plane
sequence. Thus, it is capable of recovering the image perfectly (every single bit of it) by coding all
bits of the transform. However, the wavelet transform yields perfect reconstruction only if its
numbers are stored as infinite-precision numbers. In practice it is frequently possible to recover the
image perfectly using rounding after recovery, but this is not the most efficient approach.
A codec that uses a reversible integer transform to yield efficient progressive transmission up to
lossless recovery is part of the SPIHT family. A surprising result obtained with this codec is that for lossless
compression it is as efficient as the most effective lossless encoders (lossless JPEG is definitely
not among them). In other words, the property that SPIHT yields progressive transmission with
practically no penalty in compression efficiency applies to lossless compression too.
Rate Control
Almost all image compression methods developed so far do not have precise rate control. For some
methods you specify a target rate, and the program tries to give something that is not too far from
what you wanted. For others you specify a "quality factor" and wait to see if the size of the file fits
your needs.
The embedded property of SPIHT allows exact bit rate control, without any penalty in performance
(no bits wasted with padding, or whatever). The same property also allows exact mean squared-
error (MSE) distortion control. Even though the MSE is not the best measure of image quality, it is
far superior to other criteria used for quality specification.
Encoding/Decoding Speed
A straightforward consequence of SPIHT's simplicity is its high coding/decoding speed.
The SPIHT algorithm is nearly symmetric, i.e., the time to encode is nearly equal to the time to
decode. (Complex compression algorithms tend to have encoding times much larger than the
decoding times.)
Hierarchical Tree
In this subsection, we will describe the proposed algorithm to code the wavelet coefficients. In
general, a wavelet decomposed image typically has non-uniform distribution of energy within and
across subbands. This motivates us to partition each subband into different regions depending on
their significance and then assign different quantization levels to these regions.
The proposed coding algorithm is based on the set partitioning in hierarchical trees (SPIHT)
algorithm [18], which is an elegant bit-plane encoding method that generates M embedded bit
sequences through M stages of successive quantization. Let s(0), s(1), ..., s(M-1) denote the
encoder's output bit sequence of each stage. These bit sequences are ordered in such a way that
s(0) consists of the most significant bits, s(1) consists of the next most significant bits, and so on.
The SPIHT algorithm forms a hierarchical quadtree data structure over the wavelet-transformed
coefficients. A root node together with its descendants is referred to as a spatial
orientation tree (SOT) (see Fig. 3.2). The tree is defined in such a way that each node has either no
offspring (the leaves) or four offspring, which form a group of 2 x 2 adjacent pixels. The pixels in the
LL subband of the highest decomposition level are the tree roots and are also grouped into 2 x 2
adjacent pixels. However, the upper-left pixel in each such 2 x 2 group (as shown in Fig. 3.2) has no
descendants; each of the other three pixels has four children.
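Under the coordinate convention commonly used with SPIHT (assumed here; the exact mapping depends on how the subbands are indexed), the four offspring of a node at (i, j) are the 2 x 2 block at doubled coordinates:

```python
def offspring(i, j):
    """O(i, j): the four children of node (i, j) under the usual SPIHT
    convention -- the 2 x 2 block at double the row/column coordinates."""
    return [(2 * i, 2 * j), (2 * i, 2 * j + 1),
            (2 * i + 1, 2 * j), (2 * i + 1, 2 * j + 1)]

assert offspring(1, 2) == [(2, 4), (2, 5), (3, 4), (3, 5)]
```

Iterating this mapping generates all descendants of a root, which is how the sets D(i, j) are enumerated in practice.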
For the convenience of illustrating the real implementation of SPIHT, the following sets of
coordinates are defined: O(i, j), the set of offspring of node (i, j); D(i, j), the set of all descendants
of (i, j); and L(i, j) = D(i, j) - O(i, j), the descendants excluding the direct offspring. Sn(T) indicates
the significance of a set of coordinates T with respect to the threshold T(n) used in the nth stage:
Sn(T) = 1 if some coefficient in T has magnitude at least T(n), and 0 otherwise. A detailed
description of the SPIHT coding algorithm is given as follows.
Initially, T(0) is set to 2^(M-1), where M is selected such that the largest coefficient magnitude, say
cmax, satisfies 2^(M-1) <= cmax < 2^M. The encoding is progressive in coefficient magnitude,
successively using the sequence of thresholds T(n) = 2^(M-1-n), n = 0, 1, ..., M-1. Since the
thresholds are powers of two, the encoding method can be regarded as "bit-plane" encoding of the
wavelet coefficients. At stage n, all coefficients with magnitudes between T(n) and 2T(n) are
identified as "significant," and their positions and sign bits are encoded. This process is called a
sorting pass. Then, every coefficient with magnitude at least 2T(n) is "refined" by encoding the nth most
significant bit. This is called a refinement pass. The encoding of significant coefficient position and
the scanning of the significant coefficients for refinement are efficiently accomplished through using
the following three lists: the list of significant pixels (LSP), the list of insignificant pixels (LIP), and
the list of insignificant sets (LIS). Each entry in the LIP and LSP represents an individual pixel,
identified by its coordinates (i, j). In the LIS, each entry represents either the set D(i, j) or L(i, j);
an LIS entry is regarded as of type A if it represents D(i, j) and of type B if it represents L(i, j).
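The threshold schedule and the significance test Sn can be written down directly from the definitions above (a Python sketch for integer coefficients):

```python
def initial_plane(coeffs):
    """Smallest M with 2^(M-1) <= max|c| < 2^M (for integer coefficients)."""
    cmax = max(abs(c) for c in coeffs)
    return cmax.bit_length()

def thresholds(M):
    """T(n) = 2^(M-1-n) for n = 0, 1, ..., M-1."""
    return [2 ** (M - 1 - n) for n in range(M)]

def significant(coeffs, coords, T):
    """S_n of a set: 1 if any coefficient in the set reaches the threshold."""
    return 1 if any(abs(coeffs[c]) >= T for c in coords) else 0

c = {(0, 0): 26, (0, 1): -7, (1, 0): 3, (1, 1): 0}
M = initial_plane(c.values())             # cmax = 26, so M = 5 (16 <= 26 < 32)
assert thresholds(M) == [16, 8, 4, 2, 1]
assert significant(c, [(0, 1), (1, 0)], 16) == 0   # whole set insignificant
assert significant(c, [(0, 0), (0, 1)], 16) == 1
```

A single significance bit covering an entire (insignificant) set is what makes the zerotree-style coding cheap: large regions of small coefficients cost one bit per pass.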
1) Initialization: output M; set n = 0 and the LSP empty; add the coordinates (i, j) of all tree roots
to the LIP, and those with descendants to the LIS as type A entries.
2) Sorting pass:
2.1) for each entry (i, j) in the LIP do: output Sn(i, j); if Sn(i, j) = 1, then move (i, j) to the LSP
and output the sign of ci,j.
2.2) for each entry (i, j) in the LIS do:
2.2.1) if the entry is of type A, then output Sn(D(i, j)); if Sn(D(i, j)) = 1, then for each (k, l) in
O(i, j): output Sn(k, l); if Sn(k, l) = 1, add (k, l) to the LSP and output the sign of ck,l, otherwise
add (k, l) to the end of the LIP; afterwards, if L(i, j) is non-empty, move (i, j) to the end of the LIS
as an entry of type B and go to Step 2.2.2), otherwise remove (i, j) from the LIS.
2.2.2) if the entry is of type B, then output Sn(L(i, j)); if Sn(L(i, j)) = 1, then add each (k, l) in
O(i, j) to the end of the LIS as an entry of type A and remove (i, j) from the LIS.
3) Refinement pass: for each entry (i, j) in the LSP, except those included in the last sorting pass
(i.e., with the same n), output the nth most significant bit of |ci,j|.
4) Quantization-step update: increment n by 1 and, if n < M, go to Step 2).
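The interplay of sorting and refinement passes can be sketched on a flat coefficient list. This toy version drops the LIP/LIS tree machinery (which exists only to code significant positions cheaply) and simply emits explicit (kind, index, bit) messages:

```python
def encode_passes(coeffs, M):
    """Bit-plane passes: at stage n, newly significant coefficients are
    located (sorting pass), then previously significant ones each get one
    more magnitude bit (refinement pass)."""
    LSP, out = [], []
    for n in range(M):
        T = 2 ** (M - 1 - n)
        newly = []
        for i, c in enumerate(coeffs):           # sorting pass
            if i not in LSP and T <= abs(c) < 2 * T:
                out.append(('sig', i, 0 if c >= 0 else 1))   # position + sign
                newly.append(i)
        for i in LSP:                            # refinement pass (excludes `newly`)
            out.append(('ref', i, (abs(coeffs[i]) >> (M - 1 - n)) & 1))
        LSP += newly
    return out

msgs = encode_passes([18, -5, 9, 1], M=5)
# Stage n=0 (T=16): index 0 becomes significant; later stages refine it
# while indices 2, 1, and 3 join at T=8, T=4, and T=1 respectively.
```

Truncating `msgs` at any point leaves a decodable stream: exactly the embedded, progressive behaviour described earlier.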
Conclusion
A major practical concern for wavelet image coders is their memory utilization. To the best of our
knowledge, our work is the first to propose a detailed implementation of a low-memory wavelet
image coder. It offers a significant advantage by making a wavelet coder attractive both in terms of
speed and memory needs. Further improvements of our system, especially in terms of speed, can
be achieved by introducing a lattice factorization of the wavelet kernel or by using lifting steps. This
will reduce the computational complexity and complement the memory reductions described in this
work.
References
[1] C. Chrysafis and A. Ortega, "Line Based Reduced Memory Wavelet Image Compression," in
Proc. IEEE Data Compression Conference, Snowbird, Utah, pp. 398-407, 1998.
[2] W. Pennebaker and J. Mitchell, JPEG Still Image Data Compression Standard. Van Nostrand
Reinhold, 1994.
[3] D. Lee, "New work item proposal: JPEG2000 image coding system," ISO/IEC JTC1/SC29/WG1
N390, 1996.
[4] J. M. Shapiro, "Embedded Image Coding Using Zerotrees of Wavelet Coefficients," IEEE Trans.
Signal Processing, vol. 41, pp. 3445-3462, December 1993.
[5] H. Danyali and A. Mertins, "Highly Scalable Image Compression Based on SPIHT for Network
Applications."
[8] A. M. Tekalp, Digital Video Processing. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[9] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image coding using wavelet transform,"
IEEE Trans. Image Processing, vol. 1, pp. 205-220, Apr. 1992.
[10] D. Taubman and A. Zakhor, "Multirate 3-D subband coding of video," IEEE Trans. Image
Processing, vol. 3, pp. 572-588, Sept. 1994.