Digital Image Processing (Image Enhancement)
1. Linear spatial filtering: the response is a sum of products of the filter coefficients and the corresponding image pixels. For a 3x3 mask,
R = w(-1,-1) f(x-1,y-1) + w(-1,0) f(x-1,y) + ... + w(0,0) f(x,y) + ... + w(1,0) f(x+1,y) + ... + w(1,1) f(x+1,y+1)
or, in simplified notation,
R = w1 z1 + w2 z2 + ... + w9 z9 = Σ (i = 1 to 9) wi zi
2. Nonlinear spatial filters also operate on neighbourhoods, but they do not explicitly use coefficients in a sum-of-products manner.
12. What are image negatives?
The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, given by the expression
s = L - 1 - r
where s is the output pixel value and r is the input pixel value.
13. Differentiate between Correlation and Convolution with specific reference to an image and a filter mask.
Correlation is the process of moving the filter mask over the image and computing the sum of products at each location. Convolution is the same operation, except that the mask is first rotated by 180°; as a result, convolution in the spatial domain corresponds to multiplication in the frequency domain.
The correlation of two continuous functions f(x) and g(x) is defined by
f(x) ∘ g(x) = ∫ f*(α) g(x + α) dα, the integral taken over all α.
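As an illustration (not part of the original notes), here is a minimal NumPy sketch of the difference for an image and a mask: convolution is simply correlation with the mask rotated by 180°. The 5x5 test image, the 3x3 mask, and the helper names are placeholders of mine.

```python
import numpy as np

def correlate2d(f, w):
    """Slide mask w over image f (zero padding) and take sums of products."""
    m, n = w.shape
    pm, pn = m // 2, n // 2
    fp = np.pad(f, ((pm, pm), (pn, pn)))              # zero-pad the borders
    out = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return out

def convolve2d(f, w):
    """Convolution = correlation with the mask rotated by 180 degrees."""
    return correlate2d(f, np.rot90(w, 2))

f = np.arange(25, dtype=float).reshape(5, 5)          # toy 5x5 image
w = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)  # symmetric mask
# For a symmetric mask the two results coincide; for an asymmetric mask they differ.
print(np.allclose(correlate2d(f, w), convolve2d(f, w)))
```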
14. Define derivative filter.
For a function f(x, y), the gradient of f at coordinates (x, y) is defined as the vector
∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T
A derivative (sharpening) filter approximates such derivatives with a mask; examples are the Roberts, Prewitt, and Sobel operators.
15. What is the principal difficulty with the smoothing method with reference to edges and sharp details?
Smoothing by neighbourhood averaging blurs edges and other sharp details, because edges and fine detail are sharp gray-level transitions and averaging attenuates them. Related techniques reduce this difficulty:
Median filtering is a powerful smoothing technique that does not blur the edges significantly.
Max/min filtering replaces the candidate pixel with the maximum or minimum value of the neighbourhood gray levels.
Shrinking and expansion are useful operations, especially in two-tone images.
16. What is the basic characteristic of high pass filter mask coefficients?
A high pass filter mask has a positive coefficient at its centre and negative coefficients in its outer neighbourhood, with the coefficients summing to zero. Areas of constant or slowly varying gray level therefore give a response near zero, while sharp transitions (edges and fine detail) are emphasized.
17. What is the effect of averaging with reference to detail in an image?
Neighbourhood averaging reduces sharp gray-level transitions, so it suppresses noise but also blurs edges and fine detail. Averaging several registered images of the same scene, on the other hand, reduces noise without sacrificing detail; an important application is astronomy, where imaging with very low light levels is routine and sensor noise frequently renders single images virtually useless for analysis.
18. Outline a simple procedure to produce an enhanced image using a Fourier transform and a filter transfer function.
Frequency domain techniques are based on modifying the Fourier transform of an image:
1. Compute F(u, v), the Fourier transform of the input image f(x, y).
2. Multiply F(u, v) by a filter transfer function H(u, v): G(u, v) = H(u, v) F(u, v).
3. Take the inverse Fourier transform of G(u, v) to obtain the enhanced image g(x, y).
A sketch of this procedure is given below.
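A minimal NumPy sketch of the three-step procedure, assuming a Gaussian lowpass transfer function and a random placeholder image purely for illustration (the helper name frequency_filter is mine):

```python
import numpy as np

def frequency_filter(f, H):
    """Enhance image f by modifying its Fourier transform with H(u,v)."""
    F = np.fft.fftshift(np.fft.fft2(f))                 # 1) Fourier transform, centred
    G = H * F                                           # 2) multiply by the transfer function
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))   # 3) inverse transform

M, N = 128, 128
f = np.random.rand(M, N)                                # placeholder image
u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing="ij")
D = np.sqrt(u**2 + v**2)                                # distance from the centre
H = np.exp(-(D**2) / (2 * 30.0**2))                     # Gaussian LPF, D0 = 30 (illustrative)
g = frequency_filter(f, H)                              # smoothed output image
```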
19. How can blurring or smoothing process be explained in the frequency domain?
In the frequency domain, blurring (smoothing) corresponds to attenuating the high-frequency content of the image's Fourier transform, i.e. low pass filtering. Smoothing filters are used for blurring and for noise reduction:
Blurring is used for removal of small details prior to object extraction and for bridging small gaps in lines or curves.
Smoothing linear filters (averaging filters) replace each pixel value by the average of the gray levels in the neighbourhood defined by the filter mask; they have the undesirable side effect of blurring edges.
20. How can image sharpening be achieved by a high pass
filtering process in the frequency domain?
Image sharpening deals with enhancing detail information in
an image.
The detail information is typically contained in the high spatial
frequency components of the image. Therefore, most of the
techniques contain some form of high pass filtering.
High pass filtering can be done in both the spatial and frequency domains:
Spatial domain: convolution with a high pass mask (e.g., an enhancement filter).
Frequency domain: multiplication by a high pass transfer function (see the sketch below).
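A small sketch of the frequency-domain route, under the assumption that the highpass transfer function is built as 1 minus a Gaussian lowpass (one common choice, not the only one) and using a random placeholder image:

```python
import numpy as np

M, N = 128, 128
f = np.random.rand(M, N)                                    # placeholder image
u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing="ij")
D = np.sqrt(u**2 + v**2)
H_lp = np.exp(-(D**2) / (2 * 30.0**2))                      # Gaussian lowpass
H_hp = 1.0 - H_lp                                           # highpass = 1 - lowpass
F = np.fft.fftshift(np.fft.fft2(f))
detail = np.real(np.fft.ifft2(np.fft.ifftshift(H_hp * F)))  # high-frequency detail
sharpened = f + detail                                      # add detail back to sharpen
```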
21. What is homomorphic filtering?
Homomorphic filtering is a generalized technique for
signal and image processing, involving a nonlinear mapping to
a different domain in which linear filter techniques are applied,
followed by mapping back to the original domain.
22. Write the applications of sharpening filters.
The applications of sharpening filters are as follows:
i. Electronic printing, medical imaging, and industrial inspection.
ii. Autonomous target detection in smart weapons.
23. What do you mean by point processing?
Enhancement techniques in which the new value of a pixel depends only on the gray level at that point (and not on its neighbours) are referred to as point processing.
24. Define high boost filter.
The high boost filtered image is defined as
HBF = A (original image) - lowpass(original image)
    = (A - 1) (original image) + [original image - lowpass(original image)]
    = (A - 1) (original image) + HPF
where A ≥ 1 is the boost factor and HPF is the high pass filtered image.
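A minimal NumPy sketch of high boost filtering, under the assumption that the lowpass image is produced by a simple 3x3 averaging mask; the boost factor and helper names are illustrative choices of mine:

```python
import numpy as np

def box_lowpass(f, k=3):
    """k x k averaging (lowpass) filter with zero padding."""
    p = k // 2
    fp = np.pad(f, p)
    out = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = fp[x:x + k, y:y + k].mean()
    return out

def high_boost(f, A=1.5):
    """HBF = A*(original) - lowpass = (A-1)*(original) + highpass."""
    lp = box_lowpass(f)
    hp = f - lp                      # highpass (unsharp mask) component
    return (A - 1) * f + hp          # equivalently A*f - lp

img = np.random.rand(32, 32) * 255   # placeholder image
boosted = high_boost(img, A=1.7)
```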
25. Name the different types of derivative filters.
The different types of derivative filters are:
i. Prewitt operators
ii. Roberts cross-gradient operators
iii. Sobel operators
Twelve mark Questions
1. What is image enhancement? Explain Contrast stretching and
compression of dynamic range.
Image enhancement is a technique for processing an image so that the result is more suitable than the original image for a specific application.
Suitability is judged by each application: a method that is quite useful for enhancing one image may not be the best approach for enhancing other images.
Image enhancement is widely used in computer graphics and is one of the subareas of image processing.
Enhancement approaches:
1. Spatial domain 2. Frequency domain
1)Spatial Domain : (image plane)
Techniques are based on direct manipulation of pixels in
an image.
2)Frequency Domain :
Techniques are based on modifying the Fourier transform
of an image.
There are some enhancement techniques based on various
combinations of methods from these two categories.
Contrast Stretching
Low contrast images occur often due to poor or
nonuniform lighting conditions or due to nonlinearity or small
dynamic range of the image sensor.
Expands the range of intensity levels in an image so that
it spans the full intensity range of the recording medium or
display device.
The figure shows a typical contrast stretching transformation
FIG : Contrast Stretching Transformation
For α > 1 the dark region is stretched, for β > 1 the mid region is stretched, and for γ > 1 the bright region is stretched; typical breakpoints are a = L/3 and b = 2L/3.
The transformation can be expressed as
v = α u for 0 ≤ u < a
v = β (u − a) + v_a for a ≤ u < b
v = γ (u − b) + v_b for b ≤ u < L
where u and v are the input and output gray levels, v_a = α a and v_b = v_a + β (b − a).
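A minimal NumPy sketch of this piecewise-linear stretch; the breakpoints a = L/3, b = 2L/3 and the slopes α, β, γ used in the call are illustrative values, not prescribed by the notes:

```python
import numpy as np

def contrast_stretch(u, a, b, alpha, beta, gamma):
    """Piecewise-linear contrast stretch:
       v = alpha*u                for 0 <= u < a
       v = beta*(u - a) + v_a     for a <= u < b
       v = gamma*(u - b) + v_b    for b <= u < L
    """
    u = u.astype(float)
    v_a = alpha * a
    v_b = v_a + beta * (b - a)
    return np.where(u < a, alpha * u,
           np.where(u < b, beta * (u - a) + v_a,
                    gamma * (u - b) + v_b))

L = 256
img = np.random.randint(0, L, (64, 64))                 # placeholder 8-bit image
# Illustrative slopes: stretch the mid range (beta > 1), compress the ends.
out = contrast_stretch(img, a=L // 3, b=2 * L // 3, alpha=0.5, beta=2.0, gamma=0.5)
```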
Example:
Consider an 8-level 64 x 64 image with gray values (0, 1, ..., 7). The normalized gray values are (0, 1/7, 2/7, ..., 1). The normalized histogram is given below:
NB: The gray values in the output are also (0, 1/7, 2/7, ..., 1).
Notice that there are only five distinct gray levels (1/7, 3/7, 5/7, 6/7, 1) in the output image. We will relabel them as (s0, s1, ..., s4).
With this transformation, the output image will have the histogram shown.
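Since the example's histogram table did not survive extraction, here is a generic NumPy sketch of histogram equalization for an 8-level image; the random placeholder image stands in for the 64 x 64 example:

```python
import numpy as np

def histogram_equalize(img, L=8):
    """Map gray level r to s = (L-1) * CDF(r), then round to the nearest level."""
    hist = np.bincount(img.ravel(), minlength=L)      # histogram of the input
    p = hist / img.size                               # normalized histogram (pdf)
    cdf = np.cumsum(p)                                # cumulative distribution
    s = np.round((L - 1) * cdf).astype(int)           # equalization mapping
    return s[img]                                     # apply as a lookup table

img = np.random.randint(0, 8, (64, 64))               # placeholder 8-level image
eq = histogram_equalize(img, L=8)
```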
Histogram Specification (Histogram Matching)
Histogram equalization yields an image whose pixels are (in
theory) uniformly distributed among all gray levels.
Sometimes, this may not be desirable. Instead, we may want a
transformation that yields an output image with a pre-specified
histogram. This technique is called histogram specification.
Given Information
(1) The input image, from which we can compute its histogram.
(2) The desired histogram.
Goal
Derive a point operation, H(r), that maps the input
image into an output image that has the user-specified
histogram.
Again, we will assume, for the moment, continuous-gray
values.
Approach of derivation
Step1: Equalize the levels of the original image
Step2: Specify the desired pdf and obtain the transformation
function
Step3: Apply the inverse transformation function to the levels
obtained in step 1
Histogram equalization has the disadvantage that it can generate only one type of output image.
With histogram specification, we can specify the shape of the histogram that we wish the output image to have.
It doesn't have to be a uniform histogram.
Consider the continuous domain.
Let p_r(r) denote the continuous probability density function of the gray levels of the input image, r.
Let p_z(z) denote the desired (specified) continuous probability density function of the gray levels of the output image, z.
Let s be a random variable with the property (histogram equalization)
s = T(r) = ∫₀^r p_r(w) dw
where w is a dummy variable of integration.
Next, we define a random variable z with the property (histogram equalization)
G(z) = ∫₀^z p_z(t) dt = s
where t is a dummy variable of integration.
Thus, s = T(r) = G(z).
Therefore, z must satisfy the condition z = G⁻¹(s) = G⁻¹[T(r)].
Assuming G⁻¹ exists and satisfies conditions (a) and (b), we can map an input gray level r to an output gray level z.
Procedure Conclusion:
1. Obtain the transformation function T(r) by calculating the
histogram equalization of the input image
s = T(r) = ∫₀^r p_r(w) dw
2. Obtain the transformation function G(z) by calculating
histogram equalization of the desired density function
G(z) = ∫₀^z p_z(t) dt = s
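A minimal NumPy sketch of this two-step procedure for discrete images, where G⁻¹ is approximated by a nearest-value lookup; the target pdf shown is an arbitrary example and the helper name is mine:

```python
import numpy as np

def histogram_specify(img, target_pdf, L=256):
    """Map input levels through T(r) (equalize) and then through G^-1."""
    p_r = np.bincount(img.ravel(), minlength=L) / img.size
    T = np.cumsum(p_r)                      # s = T(r): CDF of the input image
    G = np.cumsum(target_pdf)               # G(z): CDF of the desired pdf
    # z = G^-1(T(r)): for each input level pick the z whose G(z) is closest to T(r)
    z_of_r = np.array([np.argmin(np.abs(G - s)) for s in T])
    return z_of_r[img]

img = np.random.randint(0, 256, (64, 64))                   # placeholder image
target = np.linspace(1, 2, 256); target /= target.sum()     # example desired pdf
out = histogram_specify(img, target)
```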
Image Averaging
A noisy image is modelled as the original image plus additive noise:
g_i(x, y) = f(x, y) + n_i(x, y)
Assume the noise n(x, y) is white noise with zero mean and variance σ²_n(x, y) = E{n(x, y)²}, uncorrelated between images.
If a set of M registered noisy images is averaged,
g_ave(x, y) = (1/M) Σ (i = 1 to M) g_i(x, y)
then
E{g_ave(x, y)} = f(x, y)
i.e. the expected value of g (the output after averaging) equals the original image f(x, y), and the noise variance in the average image is
σ²_g_ave(x, y) = E{ [ (1/M) Σ (i = 1 to M) n_i(x, y) ]² } = (1/M²) Σ (i = 1 to M) E{ n_i(x, y)² } = (1/M) σ²_n(x, y)
As M increases, the variability (noise) of the pixel value at each location (x, y) decreases.
Note: the images g_i(x, y) (noisy images) must be registered (aligned) in order to avoid the introduction of blurring and other artifacts in the output image.
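A small simulation sketch (not from the notes) that illustrates the 1/M reduction of noise variance when M registered noisy images are averaged; the flat reference image, σ = 0.5 and M = 100 are placeholder choices:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.zeros((64, 64))                          # noise-free reference image
sigma = 0.5
M = 100                                         # number of noisy observations
# Registered noisy images g_i = f + n_i, with n_i ~ N(0, sigma^2)
noisy = [f + rng.normal(0.0, sigma, f.shape) for _ in range(M)]
g_ave = np.mean(noisy, axis=0)                  # averaged image
print(np.var(noisy[0]))                         # close to sigma^2
print(np.var(g_ave))                            # close to sigma^2 / M
```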
5. What are smoothing filters? Explain low pass spatial filtering
and median filtering.
Smoothing is fundamentally a low pass operation in the
frequency domain.
Spatial Filtering
Spatial filters are designed to highlight or suppress specific features in an image based on their spatial frequency.
Filtering is performed using convolution windows and is used to enhance the appearance of an image; it is based on the concept of image texture.
The filter is also called a mask, kernel, template, or window.
The values in a filter subimage are referred to as coefficients, rather than pixels.
Our focus will be on masks of odd sizes, e.g., 3x3, 5x5, etc.
Spatial Filtering Process
Simply move the filter mask from point to point in the image.
At each point (x, y), the response of the filter is calculated using a predefined relationship. For linear spatial filtering, the response is the sum of products of the filter coefficients and the corresponding image pixels:
R = w1 z1 + w2 z2 + ... + w_mn z_mn = Σ (i = 1 to mn) wi zi
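A minimal NumPy sketch contrasting the two smoothing filters this question asks about: a 3x3 averaging (lowpass) filter, which blurs edges along with the noise, and a 3x3 median filter, an order-statistic filter that preserves edges better. The random image and helper name are placeholders of mine:

```python
import numpy as np

def spatial_filter(f, size=3, mode="mean"):
    """3x3 neighbourhood processing: 'mean' = averaging lowpass, 'median' = order statistic."""
    p = size // 2
    fp = np.pad(f, p, mode="edge")               # replicate the borders
    out = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            w = fp[x:x + size, y:y + size]
            out[x, y] = w.mean() if mode == "mean" else np.median(w)
    return out

img = np.random.randint(0, 256, (32, 32)).astype(float)   # placeholder image
smoothed = spatial_filter(img, mode="mean")       # blurs edges along with the noise
despeckled = spatial_filter(img, mode="median")   # removes impulses, preserves edges better
```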
The Laplacian is a second-order derivative operator. In discrete form, the second derivative in the x-direction is
∂²f/∂x² = f(x + 1) + f(x − 1) − 2 f(x)
and the two-dimensional Laplacian is
∇²f = ∂²f(x, y)/∂x² + ∂²f(x, y)/∂y²
Effect of Laplacian Operator
as it is a derivative operator,
it highlights gray-level discontinuities in an image
it deemphasizes regions with slowly varying gray levels
tends to produce images that have
grayish edge lines and other discontinuities, all
superimposed on a dark,
featureless background.
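A minimal sketch of Laplacian sharpening under these definitions; the neighbour-sum implementation below is equivalent to convolving with the usual 3x3 mask having −4 at the centre, and the random image is a placeholder:

```python
import numpy as np

def laplacian(f):
    """f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y) at every pixel."""
    fp = np.pad(f, 1, mode="edge")
    return fp[2:, 1:-1] + fp[:-2, 1:-1] + fp[1:-1, 2:] + fp[1:-1, :-2] - 4 * f

img = np.random.rand(32, 32)          # placeholder image
sharpened = img - laplacian(img)      # g = f - Laplacian(f) (centre coefficient is negative)
```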
The gradient of an image f(x, y) at location (x, y) is the vector
∇f = [Gx, Gy]^T = [∂f/∂x, ∂f/∂y]^T
The gradient vector points in the direction of the maximum rate of change of f at (x, y).
In edge detection an important quantity is the magnitude of this vector (gradient), denoted ∇f:
∇f = mag(∇f) = [Gx² + Gy²]^(1/2)
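A small sketch computing Gx, Gy and the gradient magnitude with Sobel masks (one common sign/orientation convention); the random image is a placeholder, and the sum of absolute values is included as the usual cheaper approximation to the magnitude:

```python
import numpy as np

# Sobel masks approximating Gx and Gy
sobel_x = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)
sobel_y = sobel_x.T

def correlate(f, w):
    """3x3 sum-of-products at every pixel, with replicated borders."""
    fp = np.pad(f, 1, mode="edge")
    out = np.zeros_like(f, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            out[x, y] = np.sum(w * fp[x:x + 3, y:y + 3])
    return out

img = np.random.rand(32, 32)                       # placeholder image
Gx, Gy = correlate(img, sobel_x), correlate(img, sobel_y)
grad_mag = np.sqrt(Gx**2 + Gy**2)                  # [Gx^2 + Gy^2]^(1/2)
approx = np.abs(Gx) + np.abs(Gy)                   # common cheaper approximation
```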
7. Explain low pass filtering in frequency domain. Differentiate
between using ideal filter and Butterworth filter for low pass
filtering.
The basic model for filtering in the frequency domain
G(u, v) = H(u, v) F(u, v)
Where,
F(u,v): the Fourier transform of the image to be
smoothed
H(u,v): a filter transfer function
Smoothing is fundamentally a lowpass operation in the
frequency domain.
There are several standard forms of lowpass filters (LPF).
Ideal lowpass filter
Butterworth lowpass filter
Gaussian lowpass filter
Ideal Lowpass Filters (ILPFs)
The simplest lowpass filter is a filter that cuts off all high-frequency components of the Fourier transform that are at a distance greater than a specified distance D0 from the origin of the transform.
The transfer function of an ideal lowpass filter
H(u, v) = 1 if D(u, v) ≤ D0
H(u, v) = 0 if D(u, v) > D0
where D(u, v) is the distance from point (u, v) to the centre of the frequency rectangle (M/2, N/2):
D(u, v) = [(u − M/2)² + (v − N/2)²]^(1/2)
Fig: a) Perspective plot of an ideal low pass filter transfer
function
Fig : b) Filter displayed as an image
Fig : c) Filter radial cross section
The ILPF is a nonphysical filter: it cannot be realized with electronic components and is not very practical.
The drawback of this filter function is a ringing effect that occurs along the edges of the filtered spatial-domain image.
Butterworth low pass filter
The BLPF may be viewed as a transition between ILPF and
GLPF, BLPF of order 2 is a good compromise between effective
low pass filtering and acceptable ringing characteristics.
The transfer function of a Butterworth lowpass filter of order n, with cutoff frequency at distance D0 from the origin, is defined as:
H(u, v) = 1 / [1 + (D(u, v) / D0)^(2n)]
Fig: a) Perspective plot of a Butterworth low pass filter
transfer function
Fig : b) Filter displayed as an image
Fig : Filter radial cross sections for orders 1 through 4
Smooth transfer function, no sharp discontinuity, no clear
cutoff frequency.
The vertical edges and sharp corners of Ideal low pass filter
are non-realizable in the physical world. Although we can
emulate these filter masks with a computer, side effects such
as blurring and ringing become apparent.
The BLPF does not have a sharp discontinuity that establishes a clear cutoff between passed and filtered frequencies.
H(u, v) = 0.5 (down 50% from its maximum value of 1) when D(u, v) = D0.
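A minimal NumPy sketch that builds both transfer functions from the expressions above and applies them to a placeholder image; the ideal filter's hard cutoff produces ringing, while the order-2 Butterworth filter rolls off smoothly (D0 = 30 is illustrative):

```python
import numpy as np

def distance_grid(M, N):
    """D(u,v): distance of each frequency sample from the centre (M/2, N/2)."""
    u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing="ij")
    return np.sqrt(u**2 + v**2)

def ideal_lpf(M, N, D0):
    return (distance_grid(M, N) <= D0).astype(float)        # 1 inside D0, 0 outside

def butterworth_lpf(M, N, D0, n=2):
    D = distance_grid(M, N)
    return 1.0 / (1.0 + (D / D0) ** (2 * n))                # H = 0.5 at D = D0

M, N, D0 = 128, 128, 30
f = np.random.rand(M, N)                                    # placeholder image
F = np.fft.fftshift(np.fft.fft2(f))
g_ideal = np.real(np.fft.ifft2(np.fft.ifftshift(ideal_lpf(M, N, D0) * F)))        # shows ringing
g_butter = np.real(np.fft.ifft2(np.fft.ifftshift(butterworth_lpf(M, N, D0) * F))) # smoother result
```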
What is homomorphic filtering? Explain.
Homomorphic filtering is a generalized technique for
signal and image processing, involving a nonlinear mapping to
a different domain in which linear filter techniques are applied,
followed by mapping back to the original domain.
A digital image is created from an optical image, which consists of two primary components:
The lighting component
The reflectance component
The lighting component results from the lighting condition
present when the image is captured.
Can change as the lighting condition change.
The reflectance component results from the way the objects in
the image reflect light.
Determined by the intrinsic properties of the object itself.
Normally do not change.
In many applications, it is useful to enhance the reflectance
component, while reducing the contribution from the lighting
component.
Homomorphic filtering is a frequency domain filtering process
that compresses the brightness (from the lighting condition)
while enhancing the contrast (from the reflectance properties
of the object).
The homomorphic filtering process consists of five steps:
A natural log transform (base e)
The Fourier transform
Filtering
The inverse Fourier transform
The inverse log function (exponential)
A simple image model
f(x,y): the intensity is called the gray level for
monochrome image
f(x, y) = i(x, y) · r(x, y)
0 < i(x, y) < ∞, the illumination
0 < r(x, y) < 1, the reflectance
f(x, y) = i(x, y) r(x, y)
z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)
F{z(x, y)} = F{ln i(x, y)} + F{ln r(x, y)}
Z(u, v) = F_i(u, v) + F_r(u, v)
S(u, v) = H(u, v) Z(u, v) = H(u, v) F_i(u, v) + H(u, v) F_r(u, v)
s(x, y) = F⁻¹{S(u, v)} = i′(x, y) + r′(x, y)
g(x, y) = exp[s(x, y)] = exp[i′(x, y)] exp[r′(x, y)] = i₀(x, y) r₀(x, y)
Fig : Homomorphic filtering approach for image
enhancement
The illumination component
Slow spatial variations
Low frequency
The reflectance component
Vary abruptly, particularly at the junctions of
dissimilar objects
High frequency
Homomorphic filters
Affect low and high frequencies differently
Compress the low frequency dynamic range
Enhance the contrast in high frequency
Fig : Cross section of a circularly symmetric filter function.
D(u,v) is the distance from the origin of the centered
transform
The filter is specified by γL < 1 and γH > 1, with
H(u, v) = (γH − γL) [1 − e^(−c D²(u, v) / D0²)] + γL
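A minimal NumPy sketch of the five-step homomorphic pipeline using the transfer function above; the parameter values (γL = 0.5, γH = 2.0, c = 1, D0 = 30), the placeholder image, and the function name are illustrative choices of mine:

```python
import numpy as np

def homomorphic(f, gamma_L=0.5, gamma_H=2.0, c=1.0, D0=30.0):
    """ln -> FFT -> high-emphasis filter -> inverse FFT -> exp."""
    M, N = f.shape
    z = np.log(f + 1.0)                                   # 1) natural log (avoid log(0))
    Z = np.fft.fftshift(np.fft.fft2(z))                   # 2) Fourier transform
    u, v = np.meshgrid(np.arange(M) - M / 2, np.arange(N) - N / 2, indexing="ij")
    D2 = u**2 + v**2
    H = (gamma_H - gamma_L) * (1 - np.exp(-c * D2 / D0**2)) + gamma_L
    S = H * Z                                             # 3) filtering
    s = np.real(np.fft.ifft2(np.fft.ifftshift(S)))        # 4) inverse Fourier transform
    return np.exp(s) - 1.0                                # 5) exponential (undo the log)

img = np.random.rand(128, 128) * 255 + 1                  # placeholder image
out = homomorphic(img)
```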
Explain with necessary diagrams how Histogram modeling
techniques modify an image?
Histogram
Useful to graphically represent the distribution of pixel values
in a histogram.
The histogram of an image represents the relative frequency
of occurrence of the various grey levels in the image.
Plots the number of pixels in the image (vertical axis) with a
particular brightness value (horizontal axis).
Histogram modeling is the basis for numerous powerful spatial
domain processing techniques, especially for image
enhancement.
Histogram Processing
Basis for numerous spatial domain processing techniques
Used effectively for image enhancement
Information inherent in histograms is also useful in image
compression and segmentation
Histogram & Image Contrast
Dark Image
Components of histogram are concentrated on the low
side of the gray scale.
Bright Image
Components of histogram are concentrated on the high
side of the gray scale.
Low-contrast Image
Histogram is narrow and centred towards the middle of
the gray scale.
High-contrast Image
Histogram covers a broad range of the gray scale and the
distribution of pixels is not too far from uniform, with very
few vertical lines being much higher than others
We consider the gray values in the input image and output
image as random variables in the interval [0, 1].
Let p_in(r) and p_out(s) denote the probability densities of the gray values in the input and output images.
If p_in(r) and T(r) are known, and r = T⁻¹(s) satisfies condition 1, we can write (a result from probability theory):
p_out(s) = [ p_in(r) |dr/ds| ] evaluated at r = T⁻¹(s)
Fig: Histogram modification implemented as a point transformation v = f(u) followed by a uniform quantizer producing the output levels v′.
Approach of derivation
Step1: Equalize the levels of the original image
Step2: Specify the desired pdf and obtain the transformation
function
Step3: Apply the inverse transformation function to the levels
obtained in step 1
Procedure Conclusion:
1. Obtain the transformation function T(r) by calculating the
histogram equalization of the input image.
s = T(r) = ∫₀^r p_r(w) dw
2. Obtain the transformation function G(z) by calculating
histogram equalization of the desired density function.
G(z) = ∫₀^z p_z(t) dt = s