Adv DIP Third Lecture

Chapter 3

Intensity Transformations
and Spatial Filtering

Remember?

Our Objective

Image Enhancement

• Enhancement is the process of manipulating an image so that the result is more suitable than the original for a specific application.
• Very subjective
• Application dependent
• The viewer is the ultimate judge
Classification of Image Enhancement

• Enhancement in the spatial domain (Chapter 3)
  The spatial domain is the plane in which a digital image is defined by the spatial coordinates of its pixels.

• Enhancement in the frequency domain (Chapter 4)
  In the frequency domain a digital image is described by its decomposition into the spatial frequencies that participate in its formation.
Classification of Image Enhancement

• Enhancement in the spatial domain
  – The section of the real plane spanned by the coordinates of an image is called the spatial domain.
  – Spatial-domain methods manipulate the image on a pixel-by-pixel basis.
How Does It Work?

Enhancement in Spatial Domain

• Spatial domain processes can be denoted by the expression

  g(x, y) = T[f(x, y)]

• f(x, y) is the input image, g(x, y) is the output image, and T is an operator defined over a neighborhood of the point (x, y).
Enhancement in Spatial Domain

[Figure: a 3×3 neighborhood centered at (x, y) in the input image f(x, y) is mapped to the enhanced image g(x, y) = T[f(x, y)].]
Enhancement in Spatial Domain

• The neighborhood can be as small as a single pixel.

[Figure: the same mapping with a 1×1 neighborhood at (x, y).]
Enhancement in Spatial Domain

• The neighborhood (sub-image) rolls over the entire image, each time centered on a different pixel.
• For any specific location (x, y), the value of the output image g at those coordinates equals the result of applying T to the neighborhood with origin at (x, y).

[Figure: the neighborhood slides across the input image f(x, y), producing the enhanced image g(x, y) = T[f(x, y)].]
Spatial Filtering

• This procedure is called spatial filtering.
• The neighborhood, along with a predefined operation, is called a spatial filter.
• A spatial filter is also known as a spatial mask, kernel, template, or window; "spatial mask" is the most popular term.
Intensity Transformation Function

• If the neighborhood is of size 1 × 1, g depends only on the value of f at the single point (x, y), and T becomes an intensity (gray-level) transformation function of the form

  s = T(r)
Contrast Stretching

• Input gray level: r
• Produces an image of higher contrast than the original.
• Values of r lower than a threshold m are compressed by the transformation function into a narrow range of s, toward black.
• Values of r above m are stretched toward white.
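A minimal MATLAB sketch of contrast stretching, assuming the Image Processing Toolbox; stretchlim picks the input limits and imadjust maps them onto the full output range. The image file name and the saturation fraction are illustrative choices, not from the slides.

% Contrast stretching sketch: map a chosen input range onto the full output range.
I = imread('pout.tif');                 % any low-contrast grayscale image
lims = stretchlim(I, 0.01);             % clip 1% of pixels at each tail
J = imadjust(I, lims, [0 1]);           % stretch the remaining range
imshowpair(I, J, 'montage');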
Thresholding

• Thresholding is a special case of contrast stretching.
• Produces a binary image.
• Useful in image segmentation.

[Figure: thresholding transformation, output s versus input r.]
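A short MATLAB sketch of global thresholding, again assuming the Image Processing Toolbox; graythresh chooses the threshold automatically (Otsu's method), standing in for the manually chosen m on the slide.

% Thresholding sketch: produce a binary image from a gray-level image.
I  = imread('coins.png');               % illustrative grayscale image
T  = graythresh(I);                     % Otsu threshold, normalized to [0, 1]
BW = imbinarize(I, T);                  % s = 1 where r/255 >= T, else 0
imshowpair(I, BW, 'montage');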
Bit-plane Slicing

What is this all about?

• The four higher-order bit planes, especially the top two, contain a significant amount of the visually significant data.
• The lower-order planes contribute the more subtle intensity details in the image.

[Figure: original image and its eight bit planes.]
Bit-plane Slicing

• Some of the bit-plane images have a black border and some have a white border. Look at the gray border of the original image: its decimal value is 194.
• The binary representation of 194 is 11000010, so the border appears white in the planes whose bit is 1 (planes 8, 7 and 2) and black in the planes whose bit is 0.

[Figure: original image with the gray border highlighted, alongside the corresponding bit planes.]
Bit-plane Slicing

• Useful for analyzing the relative importance of each bit in an image.
• Helps determine whether the number of bits used to quantize the image is adequate.
• Useful in image compression.
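A minimal MATLAB sketch of bit-plane slicing using bitget; the loop, variable names and the reconstruction at the end are illustrative.

% Bit-plane slicing sketch: extract the 8 bit planes of an 8-bit image.
I = imread('cameraman.tif');            % 8-bit grayscale image
planes = cell(1, 8);
for b = 1:8
    planes{b} = logical(bitget(I, b));  % plane 1 = LSB, plane 8 = MSB
end
montage(planes, 'Size', [2 4]);         % display all eight bit planes

% Reconstruction from the two most significant planes (compression idea):
approx = uint8(planes{8})*128 + uint8(planes{7})*64;
figure, imshow(approx);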
Bit-plane Slicing

Image compression example:

[Figure: reconstructions using bit planes 8 & 7; bit planes 8, 7 & 6; and bit planes 8, 7, 6 & 5, compared with the original image.]


Image Enhancement using Histogram Processing
Image Histogram

An image (4 × 4, gray levels 0–9):

9 8 9 8
2 7 4 7
6 4 6 1
4 0 7 4

Histogram h(rk) = nk (count of pixels at each gray level):

rk:    0  1  2  3  4  5  6  7  8  9
h(rk): 1  1  1  0  4  0  2  3  2  2

[Figure: bar chart of the histogram.]
Normalized Image Histogram

The same 4 × 4 image:

9 8 9 8
2 7 4 7
6 4 6 1
4 0 7 4

Normalized histogram (fill in the values):

rk:    0  1  2  3  4  5  6  7  8  9
p(rk): ?  ?  ?  0  ?  0  ?  ?  ?  ?
Normalized Image Histogram

p(rk) = nk / n,  where nk is the number of pixels with gray level rk and n is the total number of pixels.

• The normalized histogram behaves like a probability distribution.
• p(rk) is the probability that a pixel has gray level rk.


Normalized Image Histogram

For the same 4 × 4 image (n = 16):

rk:    0     1     2     3  4    5  6    7     8    9
p(rk): 1/16  1/16  1/16  0  1/4  0  1/8  3/16  1/8  1/8
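A quick MATLAB check of the normalized histogram above; histcounts is used with bin edges chosen so that there is one bin per gray level 0–9. The variable names are illustrative.

% Normalized histogram of the 4x4 example image (gray levels 0..9).
A = uint8([9 8 9 8; 2 7 4 7; 6 4 6 1; 4 0 7 4]);
counts = histcounts(A(:), -0.5:1:9.5);   % counts for levels 0..9
p = counts / numel(A);                   % normalized histogram p(rk) = nk/n
disp(p)                                  % 0.0625 0.0625 0.0625 0 0.25 0 ...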


Image Histogram

[Figures: histograms of a dark image, a bright image, and a low-contrast image.]

• Dark images: histogram bins concentrated at the low end.
• Bright images: histogram bins concentrated at the high end.
• Low-contrast images: bins confined to a narrow range.
• High-contrast images: bins spread over the entire range.

[Figures: example dark, bright, low-contrast, and high-contrast images with their histograms.]

Image Histogram

• An image whose pixels tend to occupy the entire range of possible intensity levels will have an appearance of high contrast.
Image Histogram

• In MATLAB, a normalized histogram of a data vector S can be plotted with histogram(S,'Normalization','probability').
Histogram Equalization

[Figure: transformation s = T(r); input gray levels r on the horizontal axis and output gray levels s on the vertical axis, both in the range [0, L−1].]
Histogram Equalization

• Let r and s be discrete intensity values in [0, L−1], with

  s = T(r),  0 ≤ r ≤ L−1

• Assume two conditions hold:
  1. T(r) is single-valued and monotonically increasing in 0 ≤ r ≤ L−1
  2. 0 ≤ T(r) ≤ L−1 for 0 ≤ r ≤ L−1

Condition 1 guarantees that the ordering of intensity values is preserved from input to output, preventing artifacts created by reversals of intensity.
Histogram Equalization

Condition 2 guarantees that the range of output intensities is the same as that of the input.
Histogram Equalization

• Let
  – pr(r): the normalized histogram (intensity PDF) of the input image
  – ps(s): the normalized histogram (intensity PDF) of the output image
Histogram Equalization

• Objective: find a transformation T such that ps(s) of the transformed image is uniform.

[Figure: an arbitrary input histogram pr(r) is mapped by T to an approximately uniform output histogram ps(s).]
Histogram Equalization
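The discrete equalization mapping itself is not spelled out on these slides; for an L-level image it is s_k = T(r_k) = (L − 1) Σ (j = 0..k) p_r(r_j), i.e. a scaled cumulative histogram. A minimal MATLAB sketch, assuming an 8-bit image, computes this directly and compares it with the built-in histeq; the image name is illustrative.

% Histogram equalization sketch for an 8-bit grayscale image.
I = imread('pout.tif');
L = 256;
p = imhist(I) / numel(I);               % normalized histogram p_r(r_k)
s = uint8((L-1) * cumsum(p));           % s_k = (L-1) * sum_{j<=k} p_r(r_j)
J = s(double(I) + 1);                   % apply the mapping as a lookup table
K = histeq(I, L);                       % built-in equalization for comparison
montage({I, J, K});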
Local Histogram Processing

• The previous method is global: pixels are modified by a transformation function based on the intensity distribution of the entire image.
• In some cases it is necessary to enhance details over small areas of an image.
• The solution is to devise transformation functions based on the intensity distribution in a neighborhood of every pixel.
Local Histogram Processing

• Define a neighborhood and move its center from pixel to pixel.
• At each location, compute the histogram of the points in the neighborhood.
• Apply histogram equalization to map the center pixel (a sketch follows below).
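A minimal MATLAB sketch of 3 × 3 local histogram equalization, looping over every pixel as described above; this is slow but mirrors the method directly. The padding choice and variable names are illustrative, and MATLAB's adapthisteq offers a related (tile-based CLAHE) alternative.

% Per-pixel 3x3 local histogram equalization of an 8-bit image.
I = imread('cameraman.tif');
[M, N] = size(I);
Ipad = padarray(I, [1 1], 'replicate');     % handle borders by replication
g = zeros(M, N, 'uint8');
for x = 1:M
    for y = 1:N
        nb  = Ipad(x:x+2, y:y+2);           % 3x3 neighborhood
        h   = histcounts(nb(:), 0:256);     % local histogram (256 bins)
        cdf = cumsum(h) / numel(nb);        % local cumulative distribution
        g(x,y) = uint8(255 * cdf(double(I(x,y)) + 1));  % equalize center pixel
    end
end
figure, imshowpair(I, g, 'montage');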
Local Histogram Processing

[Figure: original image, result of global histogram equalization, and result of 3 × 3 local histogram equalization.]

• Global histogram equalization increases the noise.
• Local histogram equalization reveals finer details than global histogram equalization.
• The intensity values of the objects were too close to those of the large squares, and their sizes too small, to influence global histogram equalization.
Spatial Filtering

• Recall: the neighborhood (sub-image) rolls over the entire image, each time centered on a different pixel.
• For any specific location (x, y), the value of the output image g at those coordinates equals the result of applying T to the neighborhood with origin at (x, y).

[Figure: the neighborhood slides across the input image f(x, y), producing the enhanced image g(x, y) = T[f(x, y)].]
Spatial Filtering

• A spatial filter consists of
  – a neighborhood, and
  – a predefined operation.
• Filtering creates a new pixel whose coordinates equal those of the center of the neighborhood and whose value is the result of the filtering operation.

Spatial Filtering

• Linear spatial filter: if the operation performed on the image pixels is linear, the filter is called a linear spatial filter.
• Non-linear spatial filter: if the operation performed on the image pixels is non-linear, the filter is called a non-linear spatial filter.
Spatial Filtering

• A mask (filter, template, kernel, or window) defines the neighborhood.
• The mask size is usually m × n with

  m = 2a + 1,  n = 2b + 1

  where a and b are positive integers (i.e., odd dimensions).

Let’s take a look at how it works.

Spatial Filtering

Mask coefficients, showing their coordinate arrangement (3 × 3 mask w):

w(-1,-1)  w(-1,0)  w(-1,1)
w(0,-1)   w(0,0)   w(0,1)
w(1,-1)   w(1,0)   w(1,1)

Pixels of the image section under the mask, centered at the pixel under consideration (x, y):

f(x-1,y-1)  f(x-1,y)  f(x-1,y+1)
f(x,y-1)    f(x,y)    f(x,y+1)
f(x+1,y-1)  f(x+1,y)  f(x+1,y+1)

[Figure: the image f(x, y) with its origin at the top-left corner and the mask positioned over the pixel under consideration.]


Spatial Filtering

Response of the filter at point (x, y):

R = w(-1,-1) f(x-1, y-1) + w(-1,0) f(x-1, y) + ... + w(1,0) f(x+1, y) + w(1,1) f(x+1, y+1)

i.e., the sum of the products of the mask coefficients with the corresponding image pixels under the mask.


Spatial Filtering

A more general expression for the response:

g(x, y) = Σ (s = -a..a) Σ (t = -b..b) w(s, t) f(x + s, y + t)

where the m × n mask has m = 2a + 1 and n = 2b + 1. This sum-of-products operation is often loosely called "convolution", and the mask a "convolution mask"; the distinction between correlation and convolution is made below.
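A short MATLAB sketch of this sum-of-products filtering with imfilter, which computes correlation by default and convolution (mask rotated by 180°) when asked; the mask values are illustrative.

% Applying a 3x3 mask by correlation and by convolution.
I = im2double(imread('cameraman.tif'));
w = [1 2 1; 2 4 2; 1 2 1] / 16;           % symmetric weighted-average mask
gc = imfilter(I, w, 'replicate');         % correlation (imfilter default)
gv = imfilter(I, w, 'replicate', 'conv'); % convolution (mask rotated 180 deg)
% For a symmetric mask the two results are identical:
max(abs(gc(:) - gv(:)))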
Spatial Correlation and Convolution

Spatial Correlation

Correlate the 1-D sequence 0 0 0 1 0 0 0 0 (a discrete unit impulse) with a mask: slide the mask across the sequence and take the sum of products at each position.

[Figures: step-by-step positions of the mask during correlation.]
Spatial Convolution

Now convolve 0 0 0 1 0 0 0 0 with 1 2 3 2 8.

Steps (a sketch follows below):
• Reverse 1 2 3 2 8 to obtain 8 2 3 2 1.
• Then apply correlation with the reversed mask, as in the previous slides.
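A quick MATLAB check of these steps; conv implements convolution directly, and the 'same' option keeps the output the length of the input.

% 1-D convolution of a unit impulse with the mask 1 2 3 2 8.
f = [0 0 0 1 0 0 0 0];                   % discrete unit impulse
w = [1 2 3 2 8];                         % mask
full_conv = conv(f, w)                   % full convolution (length 12)
same_conv = conv(f, w, 'same')           % cropped to the length of f
% Convolving with an impulse reproduces a copy of the (unflipped) mask
% centered at the impulse location.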
Spatial Convolution

[Figures: the zero-padded sequence with the mask at its starting position, after one shift, after four shifts, and at its final position.]


2D Correlation and Convolution

An image (6 × 5):

240 225 210 190 160
220 195 170 140 115
190 160 145 125 100
150 130 115  95  75
110  85  65  50  40
 70  60  45  30  15

Mask (3 × 3 averaging):

1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9

Exercise: apply the mask and calculate the first row of the output (a sketch follows below).
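A MATLAB sketch for checking the exercise; imfilter with zero padding (its default) gives the border behaviour most hand calculations assume, and the rounding is only for readability.

% 3x3 averaging of the example image (zero padding at the borders).
A = [240 225 210 190 160
     220 195 170 140 115
     190 160 145 125 100
     150 130 115  95  75
     110  85  65  50  40
      70  60  45  30  15];
h = ones(3) / 9;                         % 3x3 averaging mask
B = imfilter(A, h);                      % correlation with zero padding
disp(round(B(1,:)))                      % first row of the filtered image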


Issues in Spatial Filtering

• When the mask gets close to the image border, some rows/columns of the mask fall outside the image.

• Ways out:
  – Ignore the outside rows/columns.
  – Padding:
    • zero padding
    • mirror padding
Smoothing Spatial Filters

• Smoothing filters are used for
  – blurring
  – noise reduction
  – reducing sharp transitions
  – finding large objects while ignoring small details


Smoothing Linear Filters

• The output of a smoothing linear spatial filter is simply the (weighted) average of the pixels contained in the neighborhood.
• Also known as averaging filters or lowpass filters.
Smoothing Linear Filters

Box filter (3 × 3): all coefficients equal.

R = (1/9) Σ (i = 1..9) zi

        1 1 1
(1/9) × 1 1 1
        1 1 1

Weighted-average filter (3 × 3): gives more importance to the center pixel and its 4-neighbors.

R = Σ (i = 1..9) wi zi / Σ (i = 1..9) wi   (here Σ wi = 16)

         1 2 1
(1/16) × 2 4 2
         1 2 1

A spatial averaging filter in which all coefficients are equal is called a box filter.
Smoothing Linear Filters

clc;
clear; close all;
I = imread('cameraman.tif');             % built-in 8-bit test image
figure, imshow(I);

h = fspecial('average', 5);              % 5x5 averaging (box) filter; h = fspecial('average',hsize)
localMean = imfilter(I, h, 'replicate'); % replicate border pixels when padding
imshowpair(I, localMean, 'montage');     % original vs smoothed
Smoothing Linear Filters

Test image used in the following examples:
• Black squares of sizes 3, 5, 9, 15, 25, 35 and 45 pixels
• Borders 25 pixels apart
• Circles of 25 pixels, gray levels 0%, 20%, ..., 100%
• Small letters "a": 10, 12 and 24 pixels
• Large letter "a": 60 pixels
• Bars: 5 × 100 pixels, separated by 20 pixels
• Noisy boxes: 50 × 120 pixels


Smoothing Linear Filters

[Figure: the original image and the results of applying averaging filters of sizes 3 × 3, 5 × 5, 9 × 9, 15 × 15 and 25 × 25.]
Smoothing Linear Filters

• Blurring helps to find the major objects.
• It removes fine details.
• The original image contains many small objects.

[Figure: original image, result of smoothing with a 15 × 15 mask, and result of thresholding after smoothing.]
Smoothing Non-linear Filters

clc;
clear; close all;
I = imread('cameraman.tif');
figure, imshow(I);

h = fspecial('average', 5);              % 5x5 averaging filter
localMean = imfilter(I, h, 'replicate'); % smooth the image
imshowpair(I, localMean, 'montage');
T  = graythresh(localMean);              % Otsu threshold of the smoothed image
BW = im2bw(localMean, T);                % threshold after smoothing
figure, imshow(BW);
Smoothing Non-linear Filters

Order-Statistic (Nonlinear) Filters

• Nonlinear filtering.
• The response is based on ordering (ranking) the pixels in the neighborhood.
• Example: the median filter.


Median Filter

Advantages
• For certain types of random noise, median filters provide excellent noise reduction.
• Considerably less blurring than linear smoothing filters of similar size.
• Particularly effective in the presence of impulse noise, also called salt-and-pepper noise.
Median Filter

• Policy:
  – Sort the values of the pixels enclosed by the mask.
  – Select the median as the output pixel value.

• Example (3 × 3 mask), image pixel values under the mask:

  10  20  20
  20 100  20
  25  20  15

  – Sorted values: 10, 15, 20, 20, 20, 20, 20, 25, 100
  – The median is 20, so the output pixel value is 20.
Median Filter

[Figure: a noisy image, noise reduction by an averaging filter, and noise reduction by a median filter.]
Median Filter

clc;
clear; close all;
I = imread('cameraman.tif');
figure, imshow(I);

J = medfilt2(I, [5 5]);                  % 5x5 median filtering; J = medfilt2(I,[m n])
figure, imshow(J);
Median Filter

clc;
clear; close all;
I = imread('cameraman.tif');
figure, imshow(I);

J = imnoise(I, 'salt & pepper', 0.02);   % add 2% salt-and-pepper noise
figure, imshow(J);
K = medfilt2(J, [5 5]);                  % remove the impulse noise with a 5x5 median filter
figure, imshow(K);


Gaussian Filter

clc;
clear; close all;
I = imread('cameraman.tif');
figure, imshow(I);

J = imnoise(I, 'salt & pepper', 0.1);    % add 10% salt-and-pepper noise
figure, imshow(J);
K = medfilt2(J, [5 5]);                  % median filter, for comparison
figure, imshow(K);

h = fspecial('gaussian', 5, 0.5);        % 5x5 Gaussian kernel, sigma = 0.5
G = imfilter(J, h, 'replicate');         % Gaussian smoothing of the noisy image
figure, imshow(G);
Histogram processing

[Figures: worked examples of histogram processing.]
Sharpening Spatial Filters

The concept of the sharpening filter

• Purpose: to highlight fine detail in an image, or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition.

Blurring vs Sharpening
• Blurring/smoothing is done in the spatial domain by averaging pixels over a neighborhood; it is a process of integration.
• Sharpening is the inverse process: it finds differences within the neighborhood, and is done by spatial differentiation.
Sharpening Spatial Filters

• Sharpening filters highlight transitions in intensity.
• Very useful for highlighting fine details.
• Help to remove blurring.

Smoothing vs Sharpening

Smoothing        Sharpening
Averaging        Difference
Integration      Differentiation
Image Sharpening

Sharpening → Difference → Differentiation

• The strength of a derivative's response is proportional to the degree of discontinuity in the image.
• Sharp changes (noise points, edges, lines, gray ramps) are easily detected.
Criteria for Optimal Edge Detection
Image Sharpening

Derivative operator

• The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied.

• Image differentiation
  – enhances edges and other discontinuities (including noise), and
  – deemphasizes areas with slowly varying gray-level values.
Derivatives and Noise

Image Sharpening

• First- and second-order derivatives will be used.
• Behavior to check:
  – in areas of constant gray level
  – at the onset and end of discontinuities (ramp and step)
  – along a gray-level ramp
Properties of Derivatives

• The first-order derivative is
  – 0 in areas of constant gray level
  – nonzero at the onset of a step or ramp
  – nonzero along a ramp

• The second-order derivative is
  – 0 in areas of constant gray level
  – nonzero at the onset and end of a step or ramp
  – 0 along a ramp
Digital 1st Order Derivative

∂f/∂x = (change of f) / (change of x)
      = [f(x + 1) − f(x)] / [(x + 1) − x]
      = f(x + 1) − f(x)
Digital 2nd Order Derivative

∂²f/∂x² = (change of ∂f/∂x) / (change of x)
        = [f(x + 1) − f(x)] − [f(x) − f(x − 1)]
        = f(x + 1) + f(x − 1) − 2 f(x)
1st Order & 2nd Order Derivative

Test signal containing:
• a gray-level ramp (smooth transition between white and black)
• an isolated noise point
• a line
• an edge (step)

[Figure: 1-D scan line showing the ramp, the isolated noise point, the line and the edge.]


1st Order & 2nd Order Derivative

∂f/∂x = f(x + 1) − f(x)
∂²f/∂x² = f(x + 1) + f(x − 1) − 2 f(x)

A quick numeric check follows below.
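A small MATLAB sketch that evaluates these two differences on an illustrative 1-D scan line (the values are made up, chosen to contain a ramp, an isolated noise point and a step):

% First and second differences of a 1-D scan line.
f  = [6 6 6 5 4 3 2 1 1 1 1 8 1 1 1 1 1 6 6 6];   % ramp, noise point, step
d1 = diff(f);                    % f(x+1) - f(x)
d2 = diff(f, 2);                 % f(x+1) + f(x-1) - 2*f(x)
disp(d1)   % nonzero along the ramp and at every abrupt change
disp(d2)   % zero along the ramp, double response at the noise point and step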
1st Order & 2nd Order Derivative

[Figure: values of the scan line together with its first and second derivatives at each point.]
1st Order & 2nd Order Derivative

• At constant gray level: both the first and second derivatives are 0.
• At a step or ramp: the first derivative is nonzero at the onset; the second derivative is nonzero at both the onset and the end.
• Along a ramp: the first derivative is nonzero; the second derivative is 0.
1st Order & 2nd Order Derivative

• The first derivative is nonzero along the ramp, whereas the second derivative produces a double edge one pixel thick, separated by zeros; the second-order derivative has the zero-crossing property.
• The second-order derivative enhances fine detail much better than the first derivative.
The Laplacian

• We are interested in isotropic filters.
• Isotropic: having a physical property that has the same value when measured in different directions.
• Isotropic filters are rotation invariant: rotating the image and then applying the filter gives the same result as applying the filter to the image and then rotating the result.
The Laplacian

• The simplest isotropic derivative operator is the Laplacian (a second-order derivative), which, for a function f(x, y) of two variables, is defined as

  ∇²f = ∂²f/∂x² + ∂²f/∂y²

• Because derivatives of any order are linear operations, the Laplacian is a linear operator.


The Laplacian

∇²f = ∂²f/∂x² + ∂²f/∂y²

We know that

∂²f/∂x² = f(x + 1, y) + f(x − 1, y) − 2 f(x, y)

and

∂²f/∂y² = f(x, y + 1) + f(x, y − 1) − 2 f(x, y)
The Laplacian

Substituting the two discrete second derivatives into ∇²f = ∂²f/∂x² + ∂²f/∂y² gives

∇²f = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4 f(x, y)
The Laplacian

∇²f = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4 f(x, y)

This corresponds to the mask

0  1  0
1 -4  1
0  1  0

which gives an isotropic result for rotations in increments of 90°.

When the diagonal directions are incorporated, the mask

1  1  1
1 -8  1
1  1  1

gives an isotropic result for rotations in increments of 45°. The other two implementations are the same masks with the signs reversed (positive center coefficient); a sketch follows below.
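A minimal MATLAB sketch of Laplacian filtering with the 90°-isotropic mask above; fspecial('laplacian', 0) produces the same kernel. Converting to double keeps the negative responses that an integer image type would clip. The image name is illustrative.

% Laplacian of an image using the [0 1 0; 1 -4 1; 0 1 0] mask.
I = im2double(imread('moon.tif'));        % any grayscale image
w = [0 1 0; 1 -4 1; 0 1 0];               % Laplacian mask (90-degree isotropy)
% w = fspecial('laplacian', 0);           % equivalent built-in kernel
lap = imfilter(I, w, 'replicate');        % contains negative values
imshow(lap, []);                          % rescale for display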
Properties of the Laplacian

• Because the Laplacian is a derivative operator, it highlights intensity discontinuities in an image.
• It deemphasizes regions with slowly varying intensity levels.
• It produces images that have grayish edge lines and other discontinuities.
Way out from the side effects of the Laplacian

• Add or subtract the Laplacian image to/from the original image.
• This recovers the background while preserving the sharpening effect.

g(x, y) = f(x, y) − ∇²f(x, y)   if the center coefficient of the mask is negative
g(x, y) = f(x, y) + ∇²f(x, y)   if the center coefficient of the mask is positive
Effect of the Laplacian Operator

[Figure: result of applying the Laplacian mask.]

A large section of the image is black because the Laplacian contains both positive and negative values, and all negative values are clipped to 0 by the display. The image needs to be scaled to [0, L − 1].
Effect of the Laplacian Operator

[Figure: scaled Laplacian image. The grayish appearance is typical of Laplacian images that have been scaled properly.]

[Figure: Laplacian image added to the original image (sharpened result).]
Simplification of Laplacian Sharpening

• The original Laplacian sharpening requires two passes (compute the Laplacian, then add/subtract it).
• It can be reduced to a single pass.

We know

g(x, y) = f(x, y) − ∇²f(x, y)   if the center coefficient of the mask is negative
g(x, y) = f(x, y) + ∇²f(x, y)   if the center coefficient of the mask is positive

and

∇²f = f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4 f(x, y)

Therefore,

g(x, y) = f(x, y) − [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1) − 4 f(x, y)]
        = 5 f(x, y) − [f(x + 1, y) + f(x − 1, y) + f(x, y + 1) + f(x, y − 1)]
Simplification of Laplacian Sharpening

The single-pass result corresponds to the composite mask

 0  -1   0
-1   5  -1
 0  -1   0

applied directly to the image; a MATLAB sketch follows below.
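A minimal MATLAB sketch of single-pass Laplacian sharpening with this composite mask; the image name is illustrative.

% Single-pass Laplacian sharpening: g = 5*f - (sum of the 4-neighbors).
I = im2double(imread('moon.tif'));
w = [0 -1 0; -1 5 -1; 0 -1 0];           % composite sharpening mask
g = imfilter(I, w, 'replicate');
imshowpair(I, g, 'montage');             % original vs sharpened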
Unsharp Masking & Highboost Filtering

Unsharp Masking

• A sharpened image can be obtained by subtracting a smoothed (blurred) version of an image from the original image. This is called unsharp masking.

Steps:
• Blur the original image.
• Subtract the blurred image from the original (the result is called the mask).
• Add the mask to the original.


Unsharp Masking & Highboost Filtering

gmask(x, y) = f(x, y) − f̄(x, y),   where f̄(x, y) is the blurred (average) version of the image

g(x, y) = f(x, y) + k · gmask(x, y)

When k = 1 we get unsharp masking, and when k > 1 the process is called highboost filtering (a MATLAB sketch follows below).
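A minimal MATLAB sketch of unsharp masking and highboost filtering following these equations; the blur size, the value of k and the image name are illustrative choices. (MATLAB also provides a built-in imsharpen that implements a Gaussian-based variant.)

% Unsharp masking (k = 1) and highboost filtering (k > 1).
I = im2double(imread('moon.tif'));
h = fspecial('average', 5);               % blurring filter
fbar  = imfilter(I, h, 'replicate');      % blurred version of the image
gmask = I - fbar;                         % the (unsharp) mask
unsharp   = I + 1.0 * gmask;              % k = 1: unsharp masking
highboost = I + 4.5 * gmask;              % k > 1: highboost filtering
montage({I, unsharp, highboost});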
Unsharp Masking & Highboost Filtering

[Figure: original image, sharp image, unsharp masking result, and highboost filtering result.]
What have we learned so far?

• Thresholding
• Linear
• Bit-plane slicing
• Histogram equalization
• Smoothing filter (average filter)
• Median filter
• Laplacian operator
• Unsharp masking / highboost filtering
Combining Spatial Filters

• A single approach often cannot achieve good enhancement.
• Example: a nuclear body-scan image.
• Objective: enhance the image by sharpening to bring out the fine details.
• Challenges:
  – noise
  – narrow dynamic range

• Use the Laplacian to highlight fine detail.

Combining Spatial Filters

[Figure: original image and the result after applying the Laplacian.]

[Figure: original image, the Laplacian result, and the Laplacian added to the original image.]
Reference

Sections 3.1, 3.2, 3.3 (3.3.1, 3.3.3), 3.4 (3.4.1, 3.4.2), 3.5, 3.6, 3.7
(only the parts taught in class)
THANK YOU!!!
