Unit 5: Data Compression
Syllabus
Storage space, coding requirements, source, entropy and hybrid coding,
lossy sequential DCT-based mode, expanded lossy DCT-based mode,
JPEG and MPEG compression.
Storage space
Multimedia systems handle large volumes of data from various sources like images,
videos, and audio. Without compression, the storage and transmission of this data
would be inefficient. Data compression addresses this challenge by reducing the size of
the files while attempting to maintain quality levels suited to their use. Here's an in-depth exploration:
1. Multimedia Storage Space characteristics
Multimedia files, including images, audio, and video, require significant storage
due to their high data intensity:
•Raw formats (e.g., uncompressed video or audio) can take up a lot of space.
•Storage size is measured in bytes and depends on:
•Resolution (images/videos)
•Sampling rate and bit depth (audio)
•Frame rate (video)
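The storage factors above can be turned into a direct size estimate. A minimal sketch in Python; the resolutions, rates, and durations used below are illustrative assumptions, not values from the text:

```python
# Raw (uncompressed) storage estimates for video and audio.

def video_bytes(width, height, bits_per_pixel, fps, seconds):
    """Raw video size in bytes: resolution x color depth x frame rate x duration."""
    return width * height * bits_per_pixel // 8 * fps * seconds

def audio_bytes(sample_rate, bit_depth, channels, seconds):
    """Raw audio size in bytes: sampling rate x bit depth x channels x duration."""
    return sample_rate * bit_depth // 8 * channels * seconds

# One minute of 640x480, 24-bit, 25 fps video:
v = video_bytes(640, 480, 24, 25, 60)   # 1,382,400,000 bytes (~1.38 GB)
# One minute of CD audio (44.1 kHz, 16-bit, stereo):
a = audio_bytes(44100, 16, 2, 60)       # 10,584,000 bytes (~10.6 MB)
```

Numbers like these are why a low-bandwidth network cannot carry raw video in real time.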
2. Data Compression Overview
Data compression reduces the storage size of multimedia data by:
•Exploiting redundancy:
• Spatial Redundancy: Similarities in neighboring pixels of an image.
• Temporal Redundancy: Similarities between consecutive frames in
video.
• Psychoacoustic Redundancy: Removing audio frequencies inaudible to
humans.
•Encoding efficiently: Using fewer bits to represent repetitive patterns or
essential data.
• Data compression implies sending or storing a smaller number of bits.
Although many methods are used for this purpose, in general these methods
can be divided into two broad categories: lossless and lossy methods.
3. Types of Compression
Lossless Compression
•Preserves all original data; exact reconstruction is possible.
•Common for applications requiring perfect fidelity (e.g., medical imaging, archives).
•Examples:
• Images: PNG, BMP.
• Audio: FLAC.
• Video: HuffYUV.
Techniques:
•Run-Length Encoding (RLE): Compresses sequences of identical values.
•Huffman Coding: Assigns shorter codes to frequently occurring data.
•Lempel-Ziv (LZ): Finds repeating patterns and stores references instead of actual data.
Compression Ratio: Typically lower compared to lossy methods (e.g., 2:1 to 3:1).
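As an illustration of the simplest of these techniques, here is a minimal run-length encoder and decoder in Python. The (symbol, count) tuple representation is an assumption made for readability; real formats pack runs into bytes:

```python
def rle_encode(data):
    """Replace each run of identical symbols with a (symbol, count) pair."""
    runs = []
    for sym in data:
        if runs and runs[-1][0] == sym:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([sym, 1])     # start a new run
    return [(s, c) for s, c in runs]

def rle_decode(pairs):
    """Expand (symbol, count) pairs back into the original sequence."""
    return [s for s, c in pairs for _ in range(c)]

encoded = rle_encode("AAAABBBCCD")   # [('A', 4), ('B', 3), ('C', 2), ('D', 1)]
```

Decoding reproduces the input exactly, which is what makes the method lossless.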
Lossy Compression
•Removes non-essential or perceptually irrelevant data.
•Often used for consumer-level multimedia (streaming, web applications).
•Examples:
• Images: JPEG.
• Audio: MP3, AAC.
• Video: H.264, HEVC.
Techniques:
•Transform Coding (e.g., DCT - Discrete Cosine Transform):
• Converts data into frequency components.
• Keeps low-frequency components (important for quality) and discards high-frequency ones.
• JPEG, MPEG, MP4
•Quantization:
• Groups similar values to reduce precision, lowering data requirements.
•Motion Compensation (for videos):
• Stores differences between frames instead of complete frames.
Compression Ratio: Much higher than lossless methods (e.g., 10:1 to 100:1), with quality loss depending on
the compression level.
Need for data compression
Data compression is the process of converting an input data stream (the source, or original raw data) into another data stream of smaller size. Media compression is used to save
compressed image, audio, and video files. Examples of compressed media formats include
JPEG images, MP3 audio, and MPEG video files.
Low network bandwidth does not allow real-time video transmission without compression.
Lossy vs. lossless compression (continued):
5. Lossy compression has more data holding capacity; lossless compression has less data holding capacity.
6. Algorithms used in lossy compression: Transform coding, Discrete Cosine Transform, Discrete Wavelet Transform. Algorithms used in lossless compression: Run-Length Encoding, Huffman coding, Arithmetic coding.
Coding Requirements in Multimedia Data Compression
Multimedia data compression involves reducing file sizes for storage and transmission while
maintaining quality. Coding requirements depend on the type of multimedia and compression
method. Here's a concise summary:
1. General Skills and Knowledge
•Mathematics: Linear algebra, Fourier/DCT transforms, and information theory.
•Programming Languages: Python, C/C++, Java, or MATLAB.
•Key Algorithms:
•Lossless: Huffman Coding, Run-Length Encoding (RLE).
•Lossy: Discrete Cosine Transform (DCT), Quantization, Entropy Coding.
2. Tools and Libraries
•Python: SciPy, NumPy, OpenCV for prototyping.
•C/C++: Libx264/x265 for high-performance compression.
•Multimedia Frameworks: FFmpeg, GStreamer for video/audio encoding.
3. Multimedia-Specific Requirements
Image Compression:
• Lossless: PNG (RLE, Huffman Coding).
• Lossy: JPEG (DCT, Quantization).
Audio Compression:
• Lossless: FLAC (Huffman Coding).
• Lossy: MP3/AAC (Psychoacoustic models, MDCT).
Video Compression:
• Standards: H.264/AVC, HEVC/H.265.
• Techniques: Motion Estimation, Block-based encoding.
4. Testing and Metrics
•PSNR: Evaluates compression quality.
•SSIM: Measures perceptual similarity.
•Compression Ratio: Efficiency metric.
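A sketch of two of these metrics in plain Python, operating on flat pixel lists for simplicity (real implementations work on 2-D images; SSIM is omitted because it needs windowed statistics):

```python
import math

def psnr(original, reconstructed, max_val=255):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")   # identical signals
    return 10 * math.log10(max_val ** 2 / mse)

def compression_ratio(raw_bytes, compressed_bytes):
    """Efficiency metric: original size divided by compressed size."""
    return raw_bytes / compressed_bytes

# Two nearly identical 4-pixel signals give a high PSNR (~51.1 dB):
quality = psnr([52, 55, 61, 66], [52, 54, 61, 65])
```

Higher PSNR means the reconstruction is closer to the original; lossy codecs trade PSNR against compression ratio.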
5. Hardware and Optimization
• CPUs/GPUs: Necessary for real-time compression.
• Optimization: Balance storage savings and quality using configurable parameters like compression ratios.
1. Entropy Encoding
Concept:
• Entropy encoding is a lossless compression technique that reduces data size by exploiting
statistical properties of the data.
• It assigns shorter codes to more frequent elements and longer codes to less frequent ones.
Key Techniques:
• Huffman Coding: Uses a binary tree to create prefix-free codes based on the frequency of
symbols.
• Arithmetic Coding: Encodes entire messages as a single number between 0 and 1, using
probabilities of symbols.
Characteristics:
• Independent of the data type.
• Relies on the probability distribution of symbols.
Use in Multimedia:
• Often used as a final stage in compression algorithms (e.g., JPEG, MP3, MPEG).
Example:
• In JPEG compression, after transforming and quantizing image data, entropy encoding (e.g.,
Huffman coding) compresses the data.
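A compact Huffman-code builder, assuming Python's standard `heapq`. Tie-breaking (and therefore the exact codewords) is implementation-dependent, but the code lengths always follow the symbol frequencies:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build prefix-free Huffman codes: repeatedly merge the two least
    frequent subtrees, prefixing '0'/'1' to the codes inside each."""
    heap = [[w, [sym, ""]] for sym, w in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # lightest subtree gets prefix '0'
        hi = heapq.heappop(heap)   # next lightest gets prefix '1'
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heapq.heappop(heap)[1:])

codes = huffman_codes("aaaabbc")
# 'a' (frequency 4) gets a 1-bit code; 'b' and 'c' get 2-bit codes,
# so "aaaabbc" costs 10 bits instead of 7 x 8 = 56 uncompressed.
```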
2. Source Encoding
Concept:
• Source encoding reduces data by removing redundancy or irrelevance, typically in the
context of the specific data type (e.g., audio, video, image).
• It can be lossless (no information loss) or lossy (some information is discarded to achieve
higher compression).
Key Techniques:
• Lossless: Run-Length Encoding (RLE), Differential Encoding.
• Lossy: Transform coding (e.g., Discrete Cosine Transform in JPEG), predictive coding.
Characteristics:
• Tailored to the nature of the data (e.g., removing imperceptible sounds in audio or
redundant pixels in images).
Use in Multimedia:
• Forms the basis of most multimedia compression standards.
• Example: Removing imperceptible frequencies in audio (MP3) or using motion estimation
in video (MPEG).
Example:
• In JPEG compression, source encoding involves using the DCT to transform spatial pixel
data into frequency components.
3. Hybrid Encoding
Concept:
• Combines both source encoding and entropy encoding for efficient compression.
• Source encoding reduces the data size by removing redundancies and irrelevancies, while entropy
encoding optimally represents the remaining data.
How It Works:
• Source Encoding: The raw data is pre-processed to remove unnecessary information (e.g., transform,
quantization).
• Entropy Encoding: The processed data is encoded using lossless techniques like Huffman or arithmetic
coding.
Characteristics:
• Exploits the strengths of both techniques for maximum compression efficiency.
• Balances compression ratio and quality, especially in lossy systems.
Use in Multimedia:
• Widely used in standard multimedia compression schemes like JPEG, MPEG, and H.264.
Example:
• JPEG:
• Transform coding (DCT) and quantization (source encoding).
• Huffman coding (entropy encoding).
• MPEG:
• Motion compensation and prediction (source encoding).
• Entropy encoding for residual data.
In the context of multimedia, entropy refers to the measurement of information content,
randomness, or uncertainty in multimedia data, such as images, videos, audio, and text. It
plays a significant role in areas like compression, transmission, encryption, and quality
assessment.
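The entropy of a symbol stream can be computed directly from Shannon's formula H = -Σ p_i log2(p_i); a minimal sketch:

```python
import math
from collections import Counter

def entropy(symbols):
    """Shannon entropy in bits/symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform 4-symbol source carries 2 bits/symbol of information;
# a skewed source carries less, and is therefore more compressible.
print(entropy("ABCD"))   # 2.0
print(entropy("AAAB"))   # ~0.811
```

Entropy is the lower bound on the average bits/symbol any lossless code can achieve.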
Lossy sequential DCT-based mode
• The term lossy sequential DCT-based mode typically refers to a method of encoding or
compressing data, often in the context of image or video compression, using the
Discrete Cosine Transform (DCT).
• This mode is used for transforming data into a frequency domain and selectively
discarding certain parts of the information (lossy compression), while attempting to
preserve perceptible quality.
Let's break it down:
1. Discrete Cosine Transform (DCT):
DCT is a mathematical transformation used in signal processing, particularly in image and
video compression. It converts a signal from the time or spatial domain to the frequency
domain. The DCT is often used because it tends to concentrate energy into a small number
of coefficients, allowing for efficient data compression.
In the context of images or video, DCT is applied to small blocks (e.g., 8x8 or 16x16 pixels)
of the image. The result is a set of DCT coefficients, which represent the image in terms of
its frequency components.
2. Lossy Compression:
• In lossy compression, some information from the original data is discarded during
compression. This is done to reduce the file size and improve efficiency. The key is
to discard less perceptible data, which humans are less likely to notice.
• In the context of DCT, this often means quantizing the DCT coefficients (reducing
precision), which leads to a loss of some high-frequency components that typically
correspond to details and noise in the image.
3. Sequential Mode:
• In sequential mode, data is processed in a linear or ordered manner, often with
the compression algorithm applied to the data in a step-by-step sequence.
• In the case of DCT-based compression, this could refer to processing each block of
an image one after the other.
4. How it Works:
In a typical lossy DCT-based compression scheme (like JPEG), the following steps are
often followed:
• Divide the image into blocks (e.g., 8x8).
• Apply the DCT to each block, transforming the pixel values into frequency
coefficients.
• Quantize the DCT coefficients to reduce precision, discarding high-frequency
components that are less perceptible to the human eye.
• Encode the coefficients (usually with run-length encoding or Huffman coding) to
further compress the data.
• Store the compressed data.
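The FDCT step of this pipeline can be sketched in pure Python from the standard 8x8 DCT-II formula. This is a reference implementation for clarity; real codecs use fast factorized transforms:

```python
import math

def fdct_8x8(block):
    """Reference forward 2-D DCT of one 8x8 block (the JPEG FDCT formula):
    S_uv = 1/4 C(u) C(v) sum_xy s_xy cos((2x+1)u*pi/16) cos((2y+1)v*pi/16),
    with C(0) = 1/sqrt(2) and C(k) = 1 otherwise."""
    def c(k):
        return 1 / math.sqrt(2) if k == 0 else 1.0
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# A flat block (one predominant color) concentrates all its energy in the
# DC coefficient; every AC coefficient is ~0 and quantizes to nothing.
flat = [[10] * 8 for _ in range(8)]
coeffs = fdct_8x8(flat)   # coeffs[0][0] == 80.0, all others ~0
```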
5. Lossy DCT in Practice:
• JPEG: JPEG uses DCT-based compression in a lossy fashion. The image is divided
into blocks, the DCT is applied, and the resulting coefficients are quantized before
encoding.
• Video Compression (e.g., MPEG): Similar techniques are used in video codecs,
where both spatial (DCT) and temporal (motion compensation) compression are
combined to reduce file size.
6. Lossy Sequential DCT-based Mode:
• When referring to a lossy sequential DCT-based mode, it may refer to the process
of compressing an image or video using DCT in a sequential manner (block-by-
block) while using lossy techniques like quantization to reduce the data size.
• This is a common approach in formats like JPEG, where the image is processed
sequentially, applying DCT to each block, and discarding less important frequency
components.
Original Image --> [Divide into Blocks (e.g., 8x8)] --> [Apply DCT (frequency domain)] --> [Quantize Coefficients (lossy step)]
Compressed Data --> [Decode Data] --> [Inverse Quantize] --> [Inverse DCT] --> Reconstructed Image (visual artifacts possible)
Advantages:
• High Compression Ratios: The ability to reduce file size significantly while maintaining
acceptable quality is one of the key benefits of this method.
• Efficient for Natural Images: DCT-based compression is particularly well-suited for
natural images and video content, where high-frequency details can often be
discarded without significantly affecting perceptual quality.
Disadvantages:
• Loss of Quality: The loss of data leads to a reduction in image or video quality,
particularly noticeable at higher compression levels.
• Block Artifacts: In block-based DCT compression (e.g., JPEG), visible artifacts can
appear at lower bit rates, such as blocking effects, banding, or ringing around edges.
Expanded lossy DCT-based mode
• The expanded lossy DCT-based mode extends the baseline lossy sequential DCT pipeline with additional stages and options.
• It covers a series of advanced stages and optimizations built around the same core steps of quantization, entropy encoding, and post-processing. The following expands the earlier explanation to cover these more advanced aspects and their theoretical foundations.
Compressed Data --> [Entropy Decoding] --> [Dequantization] --> [Inverse DCT (IDCT)] --> Reconstructed Image (approximation)
JPEG
• JPEG is an image compression standard that was developed by the Joint Photographic Experts Group.
• It is a lossy image compression method. It employs a transform coding method using the DCT (Discrete Cosine Transform).
[Block diagram: DCT --> Quantization --> Zig-Zag scan --> DPCM (DC coefficients) / RLC (AC coefficients) --> Entropy Coding --> Compressed Image]
Media Characteristics
• Audio: MPEG Coding
• Images: JPEG Coding background
• Video: MPEG-2 Principles
• 3D Models: Progressive Representations
• 3D Motions: Data Representation
B. Prabhakaran 53
MPEG Audio Features
• MPEG Audio Layers
• Layer 1 allows for a maximal bit rate of 448 Kbits/second.
• Layer 2 allows for 384 Kbits/second
• Layer 3 for 320 Kbits/second.
• All samples are 16 bits.
• Sampling frequencies: CD (compact disc) digital audio (44.1
kHz) and digital audio tapes (48 kHz); Sampling at 32 kHz is
also available.
MPEG Audio Coding…
• Uncompressed audio transformed into 32 non-interleaved sub-bands using Fast
Fourier Transform (FFT).
• Amplitude and noise level of signal in each sub-band determined using a psycho-
acoustic model.
• Next, quantization of the transformed signals.
• MPEG audio layers 1 and 2 are PCM-encoded. Layer 3: Huffman coded.
• Types of channels
• Single channel, two independent channels, or one stereo channel.
• Stereo channels: processed independently or jointly. Joint stereo exploits redundancy of both
channels
MPEG-2 Audio
• Different channels:
• 5 full bandwidth channels (left, right, center, and two surround
channels).
• Additional low frequency enhancement (LFE) channel.
• Up to seven commentary or multilingual channels.
• Sampling rates defined for MPEG-2 audio include 16 kHz, 22.05 kHz, and 24 kHz.
JPEG Compression
• Modes of compression
• Lossy Sequential DCT-based Mode: also known as
baseline process; Needs to be supported by every JPEG
implementation.
• Expanded Lossy DCT-based Mode: Provides an
additional set of further enhancements to the baseline
mode.
• Lossless Mode: Allows perfect reconstruction of the
original image; lower compression ratio.
• Hierarchical Mode: consists of images with different
resolutions generated using the methods described
above.
Comparison of JPEG and MPEG
Image Preparation
• The source image can have up to 255 components (instead of only the three components Y, U, and V).
• E.g., the components of an image can be the color components (red R, green G, blue B) or the luminance and chrominance components (luminance Y, chrominance U and V).
• Each component of the image may have the same or
different resolution, in terms of the number of pixels in the
horizontal and vertical axis.
Image Preparation …
• Each pixel is represented by p bits, with values from 0 to 2^p - 1.
• The value of p depends on the mode of compression.
• Lossy modes use either 8 or 12 bits per pixel.
• Lossless modes use 2 up to 12 bits per pixel.
• All pixels of all components within the same image are coded with the same number of bits.
• An application can use a different number of bits per pixel, provided it applies a suitable transformation of the image to one of the well-defined bit depths in the JPEG standard.
Non-Interleaved Ordering
Interleaving of Components ..
• MCU1 consists of regions R1 in components C1 and C2.
• Data units within one region are ordered as in the earlier
way: left-to-right and top-to-bottom.
• MCU1 in the example figure:
• C1_00 C1_01 C1_10 C1_11 C2_00 C2_01
• MCU2:
• C1_02 C1_03 C1_12 C1_13 C2_02 C2_03, and so on.
Lossy Sequential Mode
• The value of each pixel is shifted into the range -128 to 127, with zero as the center, before the Discrete Cosine Transform (DCT) is applied.
• For a block of 8 x 8 pixels, the shifted values are represented by S_xy, 0 ≤ x ≤ 7, 0 ≤ y ≤ 7.
• Each block is transformed using the Forward DCT (FDCT):
• S_uv = (1/4) C(u) C(v) Σ_x Σ_y S_xy cos((2x+1)uπ/16) cos((2y+1)vπ/16), where C(0) = 1/√2 and C(k) = 1 for k > 0.
• This FDCT transformation has to be done 64 times per block, resulting in 64 coefficients S_uv per block.
• The cosine expressions depend on x, y, u, and v, but not on S_xy; the computation can be optimized to take advantage of this fact.
Lossy Sequential Mode ..
• The FDCT maps values from the spatial domain to the frequency domain.
• Each coefficient S_uv can be regarded as a two-dimensional frequency.
• Coefficient S_00, the DC-coefficient, corresponds to the lowest frequency in both dimensions. It also describes the fundamental color of the 8 x 8 pixel block.
• The rest of the coefficients are known as AC-coefficients.
Quantization Phase
• FDCT coefficients in a block may have low or zero
values, if the block has only one predominant color.
• Entropy encoding is used for further compression
• Each of the 64 coefficient values is scaled by a factor Q, the quantization factor.
• E.g., the quantized value of the DC-coefficient, SQ_00, is: SQ_00 = S_00 / Q.
• In most cases, quantization is not done in a uniform
manner.
• Low frequencies of the FDCT coefficients describe the boundaries
among regions in the image being compressed.
• If low frequency coefficients are quantized in a very coarse
manner (i.e., with high values of Q), boundaries in the
reconstructed image may not be as sharp.
Quantization Phase
• Low frequency coefficients quantized in finer manner (i.e.,
with lower values of Q) than higher frequency ones.
• Table with 64 entries used for representing the values of
the quantization factor Q, for each of the 64 FDCT
coefficients.
• Quantization of each coefficient is: SQ_uv = S_uv / Q_uv, where Q_uv is the quantization factor for the (u,v)th coefficient.
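A minimal sketch of table-driven quantization and its inverse. The 2x2 coefficient and table values below are illustrative assumptions, and rounding to the nearest integer is assumed, as in typical implementations:

```python
def quantize(coeffs, qtable):
    """Quantize each DCT coefficient: SQ_uv = round(S_uv / Q_uv)."""
    return [[round(s / q) for s, q in zip(srow, qrow)]
            for srow, qrow in zip(coeffs, qtable)]

def dequantize(qcoeffs, qtable):
    """Decoder-side inverse: R_uv = SQ_uv x Q_uv."""
    return [[v * q for v, q in zip(vrow, qrow)]
            for vrow, qrow in zip(qcoeffs, qtable)]

# Illustrative top-left 2x2 corner of a DCT block and its table:
coeffs = [[-415.0, -30.0], [-22.0, 56.0]]
qtable = [[16, 11], [12, 14]]     # smaller Q = finer quantization
sq = quantize(coeffs, qtable)     # [[-26, -3], [-2, 4]]
```

Small quantized integers (many of them zero in real blocks) are what makes the later entropy-encoding stage effective.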
Entropy Encoding
• Lower-frequency AC-coefficients have higher values than higher-frequency ones (which are usually very small or zero).
• Hence, zig-zag ordering of AC-coefficients produces a
sequence where similar values will be together.
• Such a sequence is highly suitable for efficient entropy
encoding.
• Next step: run-length encoding of zero values. JPEG
specifies Huffman and Arithmetic encoding. (Arithmetic
encoding is protected by a patent).
• For the lossy JPEG mode (i.e., the baseline process), only
Huffman encoding is allowed to be used.
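The zig-zag scan can be expressed as a sort by anti-diagonal, alternating direction on odd and even diagonals. A small sketch, shown on a 3x3 block for readability:

```python
def zigzag(block):
    """Read an n x n block in zig-zag order: sort positions by anti-diagonal
    (u+v), alternating direction so similar frequencies stay adjacent."""
    n = len(block)
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return [block[u][v] for u, v in order]

# On a 3x3 block whose entries are numbered in zig-zag order already,
# the scan returns them sorted:
print(zigzag([[1, 2, 6],
              [3, 5, 7],
              [4, 8, 9]]))   # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

On quantized DCT blocks this ordering groups the trailing zeros together, which is exactly what run-length encoding exploits.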
Image Reconstruction
• First, the Huffman- or Arithmetic-coded data is decoded.
• Dequantization is then performed: R_uv = SQ_uv x Q_uv. The decoder must use the same table as the one used in the quantization process.
• Dequantized DCT coefficients are then subject to IDCT
(Inverse DCT).
• If FDCT and IDCT can determine the values with full
precision, reconstruction can be lossless (assuming lossless
quantization).
• However, precision is restricted and hence the
reconstruction process is lossy.
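A one-coefficient round trip makes the restricted precision concrete: the rounding in quantization is exactly the information the decoder cannot recover (the values below are illustrative):

```python
def quant(s, q):
    """Encoder side: SQ = round(S / Q) (the lossy step)."""
    return round(s / q)

def dequant(sq, q):
    """Decoder side: R = SQ x Q."""
    return sq * q

# The rounding error is unrecoverable: -415.37 comes back as -416.
s, q = -415.37, 16
restored = dequant(quant(s, q), q)   # -416
```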
Expanded Lossy Mode
• Progressive representation realized by expansion of
quantization.
• Expansion done by addition of an output buffer to the
quantizer, storing all coefficients of the quantized DCT.
• Encoding process follows either:
• The encoder processes the low-frequency DCT-coefficients successively (low frequencies describe border outlines). Hence, encoding low-frequency coefficients first lets the decoder reconstruct the boundaries of the various objects progressively.
• Another approach: use all the DCT-coefficients in the encoding
process but single bits are differentiated according to their
significance (i.e., most significant bit first and then the least
significant bits are encoded).
Progressive Spectral Selection
• DCT coefficients are grouped into several spectral bands.
• Low-frequency DCT coefficients sent first, and then higher-
frequency coefficients. E.g.,
• Band 1: DC coefficients only
• Band 2: AC1 and AC2 coefficients
• Band 3: AC3, AC4, AC5, and AC6 coefficients
• Band 4: AC7 … AC63 coefficients
[Figure: spectral selection scans, sending the full bit range (bits n-1 to 0) of band 1, then band 2, band 3, and band 4]
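The band split above can be sketched directly on a zig-zag-ordered coefficient list. The band boundaries follow the example bands; entropy coding of each scan is omitted:

```python
def spectral_bands(zz):
    """Split one block's 64 zig-zag-ordered coefficients into the four
    example bands: DC; AC1-AC2; AC3-AC6; AC7-AC63."""
    bounds = [(0, 0), (1, 2), (3, 6), (7, 63)]
    return [zz[lo:hi + 1] for lo, hi in bounds]

zz = list(range(64))          # stand-in for a zig-zag coefficient sequence
bands = spectral_bands(zz)    # band sizes: 1, 2, 4, 57
```

Each band becomes one scan, so the decoder can render a coarse image from the first scans and refine it as later scans arrive.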
Progressive Successive Approximation
• All DCT coefficients sent first with lower precision.
• Then refined in later scans. E.g.,
• Band 1: All DCT coefficients divided by 4.
• Band 2: All DCT coefficients divided by 2
• Band 3: All DCT coefficients at full resolution
[Figure: successive approximation scans over all coefficients DC, AC1 ... AC63, sending higher-order bits (bit n-1 downward) first and refining toward bit 0]
Combined Progressive …
• Combines both spectral & successive approximations
• SCAN 1: DC band 1; Scan 2: AC band 1
• Scan 3: AC b2; Scan 4: AC b3; Scan 5: AC b4;
• Scan 6: AC b5; Scan 7: DC b2; Scan 8: AC b6
[Figure: combined progressive scans over both the coefficient bands (DC, AC b1 ... AC b6) and the bit planes (bits n-1 down to 0)]
Lossless Mode
Lossless Mode..
• 8 possible predictor values are defined for each pixel based on the values of the adjacent pixels (A to the left of X, B above X, C above-left of X).
• Table of predictors for pixel X:
• 0: No prediction          1: X = A
• 2: X = B                  3: X = C
• 4: X = A + B - C          5: X = A + (B - C) / 2
• 6: X = B + (A - C) / 2    7: X = (A + B) / 2
• For each pixel, the number of the chosen predictor as well as the difference between the prediction and the actual value are entropy encoded.
[Figure: neighborhood of pixel X, with C and B in the row above and A immediately to its left]
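The predictor table can be written as a small lookup. Integer division is an assumption here; the standard operates on integer pixel values:

```python
def predict(a, b, c, selector):
    """The eight lossless-mode predictors for pixel X, given its neighbors
    A (left), B (above), C (above-left). Integer division is assumed."""
    return [0, a, b, c,
            a + b - c,
            a + (b - c) // 2,
            b + (a - c) // 2,
            (a + b) // 2][selector]

# On a smooth gradient, predictor 4 (X = A + B - C) predicts exactly,
# so only the predictor number and a zero difference are entropy encoded:
a, b, c, actual = 100, 102, 101, 101
residual = actual - predict(a, b, c, 4)   # 0
```

Small residuals dominate in natural images, which is why entropy coding the differences beats coding the raw pixel values.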
Hierarchical JPEG
• Progressive JPEG at multiple resolutions
[Figure: pyramid of the same image at resolutions 1, 2, up to resolution Rn]
Video Compression
• Asymmetric Applications
• The compression process is performed only once, at the time of storage. E.g., on-demand servers (such as Video-on-Demand and News-on-Demand) and electronic publishing (travel guides, shopping guides, and educational materials).
• Symmetric Applications
• Equal use of compression and decompression process.
• E.g., information generated through video cameras or by editing
pre-recorded material.
• Video conferencing, video telephone applications involve
generation, compression, and decompression of information
generated through video cameras.
• Desktop video publishing applications require edit operations on
pre-recorded material.
Desirable Features for Video Compression
• Random Access
• Fast Forward / Rewind
• Reverse Playback
• Audio-Visual Synchronization
• Robustness to Errors
• Coding / Decoding Delay
• Editability
• Format Flexibility
• Cost Tradeoffs
MPEG Standard
• MPEG-Video: compression of video signals at about 1.5
Mbits per second.
• MPEG-Audio: compression of digital audio signal at the
rates of 64, 128, and 192 kbps per channel.
• MPEG-System deals with the issues relating to audio-visual
synchronization.
• Also handles multiplexing of multiple compressed audio
and video bit streams.
MPEG Video
• Primary aim of MPEG-Video is to compress a video signal
to a bit rate of about 1.5 Mbits/s with an acceptable
quality.
• MPEG is often termed a generic standard, implying that it
is independent of a particular application.
• MPEG benefited from the following existing standards:
• JPEG
• H.261: this standard was already available during MPEG standardization. The MPEG technique is more advanced than H.261.
MPEG Video
• Two nearly conflicting requirements.
• Random access requirements for MPEG video are best satisfied
with pure intra-frame coding.
• High compression rates are not possible unless a fair amount of
inter-frame coding is done.
• Intra-frame coding: targets spatial redundancy reduction.
• Inter-frame coding: targets temporal redundancy
reduction.
• Delicate balance between inter- and intra-frame coding.
Temporal Redundancy Reduction
• Temporal redundancies present in video when subsequent
frames carry similar but slightly varying content.
• E.g., video frames of a person walking in a street show a gradual
variation in the contents based on the walking speed of the
person.
• The most widely used technique for achieving temporal redundancy reduction is motion compensation.
• Motion information comprises the amplitude and the direction of displacement of the contents.
• MPEG uses block-based motion compensation technique.
Temporal Redundancy Reduction
• Significant cost associated with motion information coding.
• Hence, 16 x 16 pixel blocks, called macro-blocks, are chosen as the motion compensation units.
• Two types of motion compensation are applied over these
macro-blocks.
• Causal predictors (Predictive coding): i.e., generate the contents
of a subsequent frame based on the motion information and the
contents of the current one.
• Non-causal predictors (interpolative coding): Frame coded based
on both a previous and a successive frame.
Interpolative Coding
• Advantages of interpolative coding:
• Compression obtained using interpolative coding is
very high.
• Results in better noise reduction: the coded block is
based on both a past and a future frame.
• Helps in efficiently coding new blocks (i.e., blocks that are not present in the previous frame) in the frame to be coded: new blocks may be properly predicted from the future frame.
Predictive Coding
MPEG Frame Sequence
Motion Estimation
[Figure: blocks A, B, and C of the current frame matched against regions of the previous and future frames]
Motion Estimation …
• MPEG does not specify the motion estimation technique.
• Block-matching techniques are likely to be used. Goal:
• Estimate the motion of an n x m block in the present frame with respect to a previous or a future frame.
• The block is compared with a corresponding block within a search area G of size (m + 2p) x (n + 2p) in the previous/future frame.
• Typical values: n = m = 16 (16 x 16 pixels) and parameter p = 6.
[Figure: an m x n block inside its search area G, which extends the block by p pixels on each side]
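A brute-force block-matching sketch using the sum of absolute differences (SAD) as the cost. Both SAD and the toy 6x6 frame are assumptions for illustration, since MPEG leaves the estimation technique open:

```python
def sad(block_a, block_b):
    """Sum of absolute differences: a common block-matching cost."""
    return sum(abs(p - q) for ra, rb in zip(block_a, block_b)
               for p, q in zip(ra, rb))

def best_match(ref, block, top, left, p):
    """Exhaustive (brute-force) search over every displacement within
    +/- p pixels of (top, left) in the reference frame.
    Returns (cost, dy, dx) for the cheapest candidate block."""
    n = len(block)
    best = None
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= len(ref) - n and 0 <= x <= len(ref[0]) - n:
                cand = [row[x:x + n] for row in ref[y:y + n]]
                cost = sad(block, cand)
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best

# A bright 2x2 patch moves one pixel down-right between frames;
# the search recovers the motion vector (dy, dx) = (1, 1) at zero cost.
ref = [[0] * 6 for _ in range(6)]
for yy in (3, 4):
    for xx in (3, 4):
        ref[yy][xx] = 9
cur_block = [[9, 9], [9, 9]]                   # block at (2, 2) in the current frame
motion = best_match(ref, cur_block, 2, 2, 2)   # (0, 1, 1)
```

The faster strategies listed below (3-step, logarithmic, etc.) evaluate only a subset of these candidate displacements.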
Block Matching Approaches
• Exhaustive Search or brute force
• 3-step search
• 2-D Logarithmic search
• Conjugate direction search
• Parallel hierarchical 1-D search
• Modified pixel-difference classification
MPEG Layers
• Sequence Layer: context unit
• Group of Pictures: random access unit
• Picture Layer: primary coding unit
• Slice Layer: resynchronization unit
• Macro-blocks: motion compensation unit
• Blocks: DCT unit