Unit 2 - Merged-Lecture Slides
• where ‘s’ is the output pixel value and ‘r’ is the input pixel value
Gray scale transformation – Piecewise linear transformation
• Two types of piecewise linear transformation:
• Contrast stretching: brightness values between p1 = (r1, s1) and p2 = (r2, s2) are enhanced in accordance with a piecewise linear function
• The transform 𝑇 is
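The exact expression for T is not reproduced in this text. As an illustration only, a minimal NumPy sketch of a typical piecewise linear contrast stretch between (r1, s1) and (r2, s2) could look like this (the function name and the assumption 0 < r1 < r2 < r_max are mine, not from the slide):

```python
import numpy as np

def contrast_stretch(img, r1, s1, r2, s2, r_max=255):
    # Piecewise linear stretch; assumes 0 < r1 < r2 < r_max.
    img = img.astype(np.float64)
    out = np.empty_like(img)
    low, mid, high = img < r1, (img >= r1) & (img <= r2), img > r2
    out[low] = img[low] * s1 / r1                                    # compress dark range
    out[mid] = s1 + (img[mid] - r1) * (s2 - s1) / (r2 - r1)          # stretch middle range
    out[high] = s2 + (img[high] - r2) * (r_max - s2) / (r_max - r2)  # compress bright range
    return np.clip(out, 0, r_max).astype(np.uint8)
```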
Histogram equalization for contrast enhancement
• The integral in the pixel brightness transform equation is called the
cumulative histogram
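To illustrate how the cumulative histogram is used, here is a minimal NumPy sketch of histogram equalization, assuming an 8-bit (uint8) grayscale image stored as a 2-D array (the function name is illustrative):

```python
import numpy as np

def equalize_histogram(img, levels=256):
    # Histogram of the input gray levels.
    hist, _ = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    # Cumulative histogram: the discrete form of the integral in the transform.
    cum_hist = hist.cumsum()
    # Look-up table mapping each gray level to its equalized value.
    lut = np.round((levels - 1) * cum_hist / cum_hist[-1]).astype(np.uint8)
    return lut[img]
```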
• Change of scale by factors a and b along the x and y axes:
  $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$
• Skewing by the angle $\phi$:
  $\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} 1 & \tan\phi \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix}$
• Determinant of the transformation matrix is |J| = 1
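To make the matrix forms concrete, a small NumPy sketch that applies the scaling and skewing matrices to a 2-D point (the values of a, b and φ below are arbitrary examples):

```python
import numpy as np

a, b, phi = 2.0, 0.5, np.deg2rad(30)     # illustrative parameters
scale = np.array([[a, 0.0],
                  [0.0, b]])             # change-of-scale matrix
skew = np.array([[1.0, np.tan(phi)],
                 [0.0, 1.0]])            # skewing matrix

p = np.array([3.0, 4.0])                 # input coordinates (x, y)
print(scale @ p)                         # scaled coordinates (x', y')
print(skew @ p)                          # skewed coordinates (x', y')
print(np.linalg.det(skew))               # determinant |J| = 1.0
```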
Geometric transformations – Bilinear transform
• Bilinear transform – transforms input coordinates 𝑥, 𝑦 to output
coordinates 𝑥′, 𝑦′ as follows:
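The equation itself is not reproduced in this text; for reference, the usual bilinear transform uses four coefficients per output coordinate (the symbols a0–a3 and b0–b3 follow the common textbook convention and are not taken from the slide):

```latex
x' = a_0 + a_1 x + a_2 y + a_3 x y, \qquad
y' = b_0 + b_1 x + b_2 y + b_3 x y
```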
• Bicubic interpolation
Brightness interpolation – Nearest neighbor interpolation
• Assigns to the point 𝑥, 𝑦 the brightness value of the
nearest point 𝑔 in the discrete raster
• Solid lines show raster of original input image
• Dashed lines represent the raster of input image obtained
from inverse transformation of output image to input image
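A minimal NumPy sketch of nearest-neighbor brightness interpolation: each output pixel takes the value of the raster point closest to its inverse-transformed coordinates (function and argument names are illustrative):

```python
import numpy as np

def nearest_neighbor(src, xs, ys):
    # xs, ys: real-valued input-raster coordinates obtained by inverse-
    # transforming each output pixel position back into the input image.
    cols = np.clip(np.round(xs).astype(int), 0, src.shape[1] - 1)
    rows = np.clip(np.round(ys).astype(int), 0, src.shape[0] - 1)
    # Brightness of the nearest discrete raster point.
    return src[rows, cols]
```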
[Figure: averaging of multiple noisy images – each output pixel is the average of the corresponding input-image pixels; smoothing without blurring is achieved because the zero-mean noise term averages to zero]
Image smoothing – averaging
• In most situations, multiple images of the same scene are NOT available – only one noise-corrupted image is available
• In such cases, averaging is performed over a local neighborhood
• For a 3 x 3 neighborhood, the averaging mask ‘h’ is the 3 x 3 matrix with every entry equal to 1/9
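A minimal sketch of local averaging with this mask, using SciPy's convolution (the image here is a random placeholder):

```python
import numpy as np
from scipy.ndimage import convolve

h = np.ones((3, 3)) / 9.0                    # 3 x 3 averaging mask 'h'
img = np.random.rand(64, 64)                 # placeholder noisy image
smoothed = convolve(img, h, mode='nearest')  # average over each local neighborhood
```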
Impulse noise is significantly reduced with median filtering
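For comparison, a short median-filtering sketch using SciPy (the 3 x 3 window size and the synthetic salt noise are illustrative choices):

```python
import numpy as np
from scipy.ndimage import median_filter

img = np.random.rand(64, 64)                 # placeholder image
img[np.random.rand(64, 64) < 0.05] = 1.0     # add impulse (salt) noise
denoised = median_filter(img, size=3)        # 3 x 3 median filter suppresses impulses
```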
Edge detectors
• Collection of local preprocessing methods used
to locate changes in intensity function – edges
are pixels where brightness changes rapidly
• Edges are important for image perception
• Edge is described as the gradient of the image
function g(x,y)
• Edge is a vector property – has magnitude and
direction
• Edge magnitude is the magnitude of the
gradient
• Edge direction is the gradient direction rotated by −90°, i.e. the edge direction is perpendicular to the gradient direction
Edge detectors – gradient magnitude and direction
• Partial derivatives are used for gradient since image function g(x,y)
depends on two variables
• When only the magnitude of the gradient is required (and not the
direction), a linear differential operator called Laplacian is used
Roberts operator is sensitive to noise as very few pixels are used to approximate the gradient
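For reference, a sketch of the Roberts operator using its two 2 x 2 masks; the magnitude is approximated here as the sum of the absolute mask responses (a common convention, which may differ from the slides):

```python
import numpy as np
from scipy.ndimage import convolve

h1 = np.array([[1.0, 0.0], [0.0, -1.0]])   # Roberts mask along one diagonal
h2 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # Roberts mask along the other diagonal

img = np.random.rand(64, 64)               # placeholder image
g1 = convolve(img, h1)
g2 = convolve(img, h2)
edge_magnitude = np.abs(g1) + np.abs(g2)   # approximate gradient magnitude
```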
Edge detectors – Gradient operators
• Laplace operator ∇2 – approximates the second derivative and
computes the edge magnitude only
• 3 x 3 convolution mask is often used
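One frequently used 3 x 3 Laplacian mask is the 4-neighborhood version sketched below (an 8-neighborhood variant also exists); this is a generic example rather than the specific mask shown on the slide:

```python
import numpy as np
from scipy.ndimage import convolve

laplace_4 = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])           # 4-neighborhood Laplacian mask

img = np.random.rand(64, 64)                       # placeholder image
edge_magnitude = np.abs(convolve(img, laplace_4))  # second-derivative edge magnitude
```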
[Figure: the position of an edge in the image function corresponds to an extremum of the first derivative and to a zero crossing of the second derivative]
• Therefore, edges can be located at the zero crossings of the second derivative of the image function
Zero crossings of the second derivative
• A normalizing multiplicative coefficient ‘c’ is introduced in the
expression for ∇2G to get the convolution mask of the LoG operator
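A sketch of sampling ∇²G to build a LoG convolution mask; instead of an explicit coefficient c, the mask is normalized here so its elements sum to zero (one common convention, used only as an illustration):

```python
import numpy as np

def log_mask(size, sigma):
    # Sample the Laplacian of Gaussian on a size x size grid centred at (0, 0).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x.astype(float) ** 2 + y.astype(float) ** 2
    mask = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    # Force a zero sum so flat image regions give zero response.
    return mask - mask.mean()

h = log_mask(9, sigma=1.4)   # 9 x 9 LoG mask (size and sigma are illustrative)
```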
[Figure: convolution masks Gx and Gy]
• Magnitude of the gradient is $\sqrt{G_x^2 + G_y^2}$ and its direction is $n = \tan^{-1}(G_y / G_x)$
Canny edge detection
• Estimating the gradient direction $n = \tan^{-1}(G_y / G_x)$ is analytically equivalent to computing $\mathbf{n} = \nabla(G \ast f)\,/\,\lvert\nabla(G \ast f)\rvert$ (G the Gaussian, f the image), where the ∇ operator denotes the first derivative
If the gradient magnitude at a pixel is:
• Above High, declare it an edge pixel
• Below Low, declare it a non-edge pixel
• Between Low and High, consider its neighbors and declare it an edge pixel if it is connected to another edge pixel directly or via other pixels with gradients between Low and High
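A rough sketch of the hysteresis step described above, using SciPy's connected-component labeling; the thresholds Low and High are assumed to be given, and 8-connectivity is an illustrative choice:

```python
import numpy as np
from scipy.ndimage import label

def hysteresis(grad_mag, low, high):
    strong = grad_mag >= high                        # definite edge pixels
    candidate = grad_mag >= low                      # strong + weak edge pixels
    # Label connected groups of candidate pixels (8-connectivity).
    labels, n = label(candidate, structure=np.ones((3, 3)))
    # Keep only groups that contain at least one strong pixel.
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                                  # background stays non-edge
    return keep[labels]
```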
Parametric edge models
• Discrete image intensity function 𝑔𝑠(𝑙∆𝑥, 𝑘∆𝑦) is a sampled and noisy
approximation of the continuous image intensity function 𝑓(𝑥, 𝑦)
• 𝑓(𝑥, 𝑦) is not known but can be estimated from 𝑔𝑠(𝑙∆𝑥, 𝑘∆𝑦)
• Modeling the discrete image as a continuous function 𝑓(𝑥, 𝑦) requires higher-order functions in x and y – hence it is practically impossible to represent the discrete image function using a single continuous function
• Solution: piecewise continuous functions called facets are used to
represent a pixel and its neighborhood – such a representation is
called facet model
Parametric edge models
• Applications of facet model
• Peak noise removal
• Segmentation into constant gray-level regions
• Gradient edge detection and zero-crossing edge detection
• Line detection and corner detection
• Edge detectors based on parametric models are more precise than convolution-based edge detectors
• Types of facet models:
• Flat facet model – uses piecewise constant functions
• Linear (sloped) facet model – uses piecewise linear functions
• Quadratic facet model – uses piecewise quadratic functions (of 2 variables: x
and y)
• Cubic facet model – uses piecewise cubic functions (of 2 variables: x and y)
• Example of a cubic facet model (the polynomial form and a fitting sketch follow below)
• Coefficients c1, c2, … can be estimated using the least squares method or singular value decomposition (SVD)
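A hedged sketch of the cubic facet model and its least-squares fit over a single neighborhood; the bivariate cubic basis below is the standard ten-term polynomial, which may be ordered differently on the slide:

```python
import numpy as np

def fit_cubic_facet(patch):
    """Fit f(x, y) = c1 + c2*x + c3*y + c4*x**2 + c5*x*y + c6*y**2
                    + c7*x**3 + c8*x**2*y + c9*x*y**2 + c10*y**3
    to a square pixel neighborhood centred at (0, 0)."""
    half = patch.shape[0] // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    # Design matrix: one column per basis function.
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2,
                         x**3, x**2*y, x*y**2, y**3])
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
    return coeffs          # estimated c1 ... c10

c = fit_cubic_facet(np.random.rand(5, 5))   # fit a random 5 x 5 neighborhood
```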
Edges in multispectral images
[Figure: low-pass, high-pass, and band-pass filtered versions of an image]
Local preprocessing in frequency domain – Homomorphic filtering
• Homomorphic filtering is used to remove multiplicative noise
• It improves image contrast and normalizes image intensity across the
image
• Homomorphic filtering procedure:
• Factorize the image function: 𝑓(𝑥, 𝑦) = 𝑖(𝑥, 𝑦) · 𝑟(𝑥, 𝑦)
• Apply a logarithmic transform: 𝑧(𝑥, 𝑦) = log 𝑓(𝑥, 𝑦) = log 𝑖(𝑥, 𝑦) + log 𝑟(𝑥, 𝑦)
• Apply the Fourier transform: 𝑍(𝑢, 𝑣) = 𝐼(𝑢, 𝑣) + 𝑅(𝑢, 𝑣)
• Details of tunnel surface at the top and right are visible after
homomorphic filtering
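A rough end-to-end sketch of the procedure outlined above; the Gaussian-shaped high-frequency emphasis filter and the parameters gamma_l, gamma_h and d0 are illustrative choices, not values from the slides:

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, d0=30.0):
    rows, cols = img.shape
    z = np.log1p(img.astype(float))                   # log: i(x,y)*r(x,y) -> log i + log r
    Z = np.fft.fftshift(np.fft.fft2(z))               # Fourier transform, DC at centre
    # High-frequency emphasis: attenuate illumination (low f), boost reflectance (high f).
    u, v = np.mgrid[-(rows // 2):rows - rows // 2, -(cols // 2):cols - cols // 2]
    d2 = u.astype(float) ** 2 + v.astype(float) ** 2
    H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2)))
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))  # back to the spatial domain
    return np.expm1(s)                                  # invert the logarithm
```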
Lecture 12
Line detection, Image restoration – Inverse filtering, Wiener filtering