Image Processing Subject
Unit – 1
Two Marks:
2. Define Pixel or Picture Elements: A pixel (short for "picture element") is the
smallest unit of a digital image. It is a single point in an image's grid, and its color or
intensity value represents the visual information at that specific location.
Fundamental Steps in Digital Image Processing:
1. Image Acquisition
2. Preprocessing
3. Enhancement
4. Restoration
5. Compression
6. Transformation
7. Segmentation
8. Representation & Description
9. Recognition & Interpretation
6. Difference between Rods and Cones: Rods and cones are the photoreceptor cells of
the human retina. Rods are highly sensitive to low light levels and provide scotopic
(dim-light), black-and-white vision with low resolution. Cones work best at higher
light levels, provide photopic color vision with high resolution, and are concentrated
in the fovea.
12. Zooming and Shrinking: Zooming is the process of enlarging an image, while
shrinking is the process of reducing its size. Various algorithms are used for these
operations, such as interpolation for zooming and downsampling for shrinking.
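As a minimal sketch (assuming NumPy arrays as images and integer scale factors), nearest-neighbor interpolation for zooming and simple decimation for shrinking can be written as:

```python
import numpy as np

def zoom_nearest(img, factor):
    """Enlarge by an integer factor using nearest-neighbor interpolation:
    each pixel is replicated into a factor x factor block."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def shrink(img, factor):
    """Reduce by an integer factor via simple downsampling:
    keep every factor-th pixel along each dimension."""
    return img[::factor, ::factor]

img = np.array([[10, 20],
                [30, 40]], dtype=np.uint8)
big = zoom_nearest(img, 2)   # 4x4 image of replicated pixels
small = shrink(big, 2)       # recovers the original 2x2 image
```

Smoother zooms use bilinear or bicubic interpolation, and careful shrinking low-pass filters first to avoid aliasing; the versions above are the simplest possible forms.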
13. Moiré Pattern: A moiré pattern is an unwanted interference pattern that appears
when two regular grids or patterns overlap at a slight angle or with slightly different
spacing. In digital imaging it typically arises when fine periodic detail in a scene
interacts with the sensor's sampling grid (a form of aliasing).
14. False Contouring: False contouring is an artifact caused by quantizing an image
with an insufficient number of gray levels. Smooth intensity gradients break up into
visible, ridge-like boundaries (false contours) that were not present in the original
scene.
16. Checkerboard Effect: The checkerboard effect is an artifact that appears when an
image of insufficient spatial resolution is enlarged by pixel replication: each original
pixel becomes a visible square block, giving the image a checkerboard-like
appearance.
Big Question:
1. Steps in DIP: Digital Image Processing (DIP) is carried out as a sequence of
fundamental steps:
• Image Acquisition: Capturing an image with a sensor (camera, scanner) and
converting it to digital form.
• Preprocessing: Basic operations (e.g., noise reduction, resizing) that prepare
the image for later stages.
• Enhancement: Improving the subjective appearance of the image or
highlighting features of interest.
• Restoration: Correcting known degradations using mathematical models of
the degradation process.
• Compression: Reducing the storage or bandwidth needed to represent the
image.
• Transformation: Converting the image to another domain (e.g., the frequency
domain) where some operations are simpler.
• Segmentation: Partitioning the image into its constituent regions or objects.
• Representation & Description: Converting segmented regions into a form
suitable for computer processing and extracting descriptive features.
• Recognition & Interpretation: Assigning labels to objects and attaching
meaning to the recognized objects.
Unit – 2
Two Marks:
10. Derivative Filter Types: Derivative filters highlight edges and rapid changes in
intensity within an image. Examples include the Roberts cross, Prewitt, and Sobel
operators.
11. Roberts Cross, Prewitt & Sobel Operators: These are gradient-based edge
detection operators that detect edges by approximating the gradient magnitude
and direction from neighboring pixel intensities. They differ in the size and
weighting of their kernels: Roberts uses 2×2 diagonal kernels, while Prewitt and
Sobel use 3×3 kernels, with Sobel giving extra weight to the center row and column.
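The Sobel operator can be sketched as follows. This is a minimal NumPy illustration using a naive correlation loop; the kernel values are the standard 3×3 Sobel masks, and the test image is a simple vertical step edge:

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate2d(img, kernel):
    """Naive 'valid' 2-D correlation, kept explicit for illustration."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical step edge: the gradient magnitude peaks at the edge columns.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
gx = correlate2d(img, SOBEL_X)        # responds to the vertical edge
gy = correlate2d(img, SOBEL_Y)        # zero: no horizontal edges here
magnitude = np.hypot(gx, gy)
```

Edge direction can likewise be obtained as `np.arctan2(gy, gx)`; in practice the magnitude image is thresholded to produce a binary edge map.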
13. High Boost Filtering: High boost filtering enhances the high-frequency
components of an image to improve its sharpness. It's achieved by combining the
original image with a scaled version of its high-pass filtered version.
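A minimal sketch of high boost filtering, assuming a simple box blur as the low-pass step and an illustrative boost factor k (k = 1 reduces to plain unsharp masking):

```python
import numpy as np

def box_blur(img, size=3):
    """Simple box blur with edge-replication padding."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i+size, j:j+size].mean()
    return out

def high_boost(img, k=1.5):
    """High boost filtering: the original plus k times the high-pass
    detail (the 'unsharp mask')."""
    mask = img.astype(float) - box_blur(img)   # high-frequency component
    return img + k * mask

img = np.array([[0, 0, 0, 0],
                [0, 9, 9, 0],
                [0, 9, 9, 0],
                [0, 0, 0, 0]], dtype=float)
sharpened = high_boost(img, k=1.5)   # bright block is boosted above 9
```

In practice the result is clipped back to the valid intensity range (e.g., [0, 255]) after boosting.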
Big Question:
Spatial Smoothing Filters:
• Linear Spatial Smoothing Filter: These filters, like the Gaussian filter, operate
by taking the weighted average of pixel intensities within a local
neighborhood. The weights are determined by a kernel (also called a mask or
filter matrix). The Gaussian filter is an example of a linear spatial filter used for
smoothing, where the weights are determined by the Gaussian distribution.
• Non-Linear Spatial Smoothing Filter: Filters like the median filter and the
bilateral filter are non-linear spatial filters used for smoothing. The median
filter replaces the center pixel value with the median value of the surrounding
pixels, which is effective at removing impulse noise. The bilateral filter
considers both spatial and intensity differences to preserve edges while
reducing noise.
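The median filter's impulse-noise removal can be illustrated with a small NumPy sketch; the edge handling (replication padding) and window size are illustrative choices:

```python
import numpy as np

def median_filter(img, size=3):
    """Replace each pixel with the median of its size x size
    neighborhood, padding edges by replication."""
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i+size, j:j+size])
    return out

# A flat image corrupted by a single impulse ('salt') pixel.
img = np.full((5, 5), 50, dtype=np.uint8)
img[2, 2] = 255                      # impulse noise
clean = median_filter(img)           # the impulse is removed entirely
```

Unlike a mean filter, the median ignores the outlier value completely, which is why it removes salt-and-pepper noise without smearing it into neighboring pixels.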
4. Spatial Sharpening Filter: Spatial sharpening filters enhance edges and fine
details in an image. The Laplacian filter is a common example of a spatial sharpening
filter. It calculates the second derivative of the image and accentuates areas with
rapid intensity changes, which correspond to edges. However, using the Laplacian
filter alone can amplify noise, so techniques like unsharp masking or high boost
filtering are often used to achieve better results.
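A minimal sketch of Laplacian sharpening, assuming the common 4-neighbor kernel with a negative center, so the sharpened image is g = f − ∇²f:

```python
import numpy as np

# 4-neighbor Laplacian kernel with a negative center.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_sharpen(img):
    """Sharpen by subtracting the Laplacian (second derivative):
    g = f - lap(f), valid for a kernel with a negative center."""
    pad = np.pad(img.astype(float), 1, mode='edge')
    lap = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            lap[i, j] = np.sum(pad[i:i+3, j:j+3] * LAPLACIAN)
    return img - lap

img = np.array([[10, 10, 10],
                [10, 80, 10],
                [10, 10, 10]], dtype=float)
sharp = laplacian_sharpen(img)   # the bright spot is strongly accentuated
```

The output can go negative or exceed the intensity range, so practical implementations clip or rescale the result; this is also where the noise amplification mentioned above shows up.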
Unit – 3
Two Marks:
6. Blind Image Restoration: Blind image restoration aims to recover an image from
its degraded version without prior knowledge of the degradation process. This is a
challenging problem since the degradation function and noise characteristics are
unknown. Various algorithms and techniques are used, often involving assumptions
about the image and noise statistics.
7. How Edges Are Detected in an Image? Edges in an image can be detected using
techniques like gradient-based methods. These methods calculate the gradient of
the image intensity and identify regions where the gradient magnitude is high. The
edges correspond to rapid changes in intensity, and the gradient provides
information about the direction and strength of these changes.
9. LoG / Mexican Hat Function: The Laplacian of Gaussian (LoG) filter, also known
as the Mexican Hat filter, is a second-order edge detection filter. It is obtained by
convolving the image with the Laplacian of a Gaussian function. The LoG filter
highlights edges by detecting zero crossings.
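The LoG function can be sampled directly. This sketch assumes the standard form LoG(x, y) = −(1/(πσ⁴))(1 − r²/(2σ²))e^(−r²/(2σ²)); the kernel size and σ are illustrative values:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Sample the Laplacian-of-Gaussian (Mexican hat) function on a
    size x size grid and shift it so it sums to zero (no response to
    flat regions)."""
    half = size // 2
    y, x = np.mgrid[-half:half+1, -half:half+1]
    r2 = x**2 + y**2
    norm = -1.0 / (np.pi * sigma**4)
    k = norm * (1 - r2 / (2 * sigma**2)) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

k = log_kernel()
# Mexican-hat shape: negative central lobe, positive surround.
center = k[4, 4]
surround = k[4, 0]
```

Sign conventions vary between references (some use the negated kernel); either way, convolving with this kernel and locating zero crossings in the output yields the edge map.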
• Region Growing: Starting from a seed point, pixels are added to a region if
they meet certain criteria, such as having similar intensity values to the seed.
• Splitting and Merging: This is an iterative approach where regions are split
into smaller regions if they do not meet certain criteria. Conversely,
neighboring regions with similar properties are merged together.
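Region growing as described above can be sketched as a breadth-first flood fill; the 4-connectivity and the intensity-tolerance criterion are illustrative choices:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`: a pixel joins if it is 4-connected to
    the region and its intensity is within `tol` of the seed's."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = int(img[seed])
    region[seed] = True
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
            if (0 <= ni < h and 0 <= nj < w and not region[ni, nj]
                    and abs(int(img[ni, nj]) - seed_val) <= tol):
                region[ni, nj] = True
                queue.append((ni, nj))
    return region

img = np.array([[100, 102,  10],
                [101,  99,  12],
                [ 10,  11, 103]], dtype=np.uint8)
mask = region_grow(img, seed=(0, 0), tol=5)
# The pixel at (2, 2) is close in intensity but not connected,
# so it is excluded from the grown region.
```

Note the criterion here compares against the seed value; variants compare against the running mean of the region instead, which adapts better to gradual intensity drift.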
14. Markers: Markers are used in watershed segmentation to define initial regions or
seeds from which the segmentation process starts. Markers guide the segmentation
algorithm by indicating which areas belong to different objects or regions.
2. Spatial Restoration Filters:
• Mean Filter: A spatial restoration filter that replaces the center pixel value
with the average value of the pixel intensities within a local neighborhood. It's
effective at removing uniform noise but can blur edges.
• Order Statistics Filter: A type of filter that uses order statistics (e.g., median)
of pixel intensities within a neighborhood to restore an image. It's particularly
good at removing impulse noise.
• Adaptive Filter: These filters adjust their weights according to the local
characteristics of the image. Adaptive filters are useful when the noise
characteristics vary across the image.
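The adaptive idea can be sketched with the classic adaptive local noise-reduction filter, f̂ = g − (σ_η²/σ_L²)(g − m_L), with the variance ratio clipped at 1; the window size and parameter values below are illustrative:

```python
import numpy as np

def adaptive_local_filter(img, noise_var, size=3):
    """Adaptive local noise-reduction filter:
    f = g - (noise_var / local_var) * (g - local_mean),
    with the ratio clipped at 1, so flat regions get full averaging
    while high-variance regions (edges) are left nearly untouched."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i+size, j:j+size]
            m, v = window.mean(), window.var()
            ratio = 1.0 if v == 0 else min(noise_var / v, 1.0)
            out[i, j] = img[i, j] - ratio * (img[i, j] - m)
    return out

# Flat region: the filter behaves like a full mean filter.
flat = np.full((4, 4), 5.0)
smoothed = adaptive_local_filter(flat, noise_var=2.0)

# Strong edge: local variance dwarfs the noise variance,
# so the edge pixels are barely changed.
edge = np.array([[0.0, 0.0, 100.0, 100.0]] * 4)
filtered = adaptive_local_filter(edge, noise_var=1.0)
```

The filter needs an estimate of the global noise variance σ_η²; in practice this is measured from a flat patch of the degraded image.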
3. Frequency Restoration Filters: Band Reject, Band Pass, Notch, and Optimum
Notch: These filters operate in the Fourier domain and are mainly used to remove
periodic noise.
• Band Reject Filter: Attenuates frequencies within a ring (band) around the
origin of the spectrum; useful when periodic noise is concentrated in a known
frequency band.
• Band Pass Filter: The complement of the band reject filter; it passes only that
band and is often used to isolate and visualize the noise pattern itself.
• Notch Filter: Rejects (or passes) frequencies in small neighborhoods around
specified points in the spectrum, so individual interference spikes can be
targeted.
• Optimum Notch Filter: Estimates the interference pattern with a notch pass
filter and subtracts an adaptively weighted version of it, choosing the weight
that minimizes the local variance of the restored image.
2. Probability Density Function for the Rayleigh Noise Model: A simple Rayleigh
noise model has the PDF p(x) = (x/σ²) e^(−x²/(2σ²)) for x ≥ 0, and p(x) = 0
otherwise, where x represents the noise magnitude, and σ is the scale parameter
that controls the spread of the distribution.
3. Probability Density Function for the Erlang Noise Model: The Erlang distribution
is often used to model more complex types of noise that may be influenced by
multiple factors. Its probability density function (PDF) is
p(z) = (a^b z^(b−1) / (b−1)!) e^(−az) for z ≥ 0, and p(z) = 0 for z < 0,
where a > 0 and b is a positive integer; the mean is b/a and the variance is b/a².
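Assuming the standard Erlang form p(z) = a^b z^(b−1) e^(−az)/(b−1)!, the PDF can be checked numerically; the parameter values a and b below are illustrative:

```python
import numpy as np
from math import factorial

def erlang_pdf(z, a, b):
    """Erlang (gamma with integer shape b) PDF:
    p(z) = a^b z^(b-1) e^(-a z) / (b-1)! for z >= 0, else 0."""
    z = np.asarray(z, dtype=float)
    p = (a**b) * z**(b - 1) * np.exp(-a * z) / factorial(b - 1)
    return np.where(z >= 0, p, 0.0)

a, b = 2.0, 3                     # rate and (integer) shape parameters
z = np.linspace(0.0, 20.0, 20001)
dz = z[1] - z[0]
p = erlang_pdf(z, a, b)
total = float(np.sum(p) * dz)     # numerical integral, close to 1
mean = float(np.sum(z * p) * dz)  # close to b/a = 1.5
```

The numerical integral confirming total probability 1 and mean b/a is a quick sanity check that the reconstructed formula is internally consistent.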