UNIT-2
Intensity Transformations & Spatial Filtering, Filtering in the Frequency Domain
Fig: A 3×3 neighborhood about a point (x, y) in an image in the spatial domain.
The point (x, y) is an arbitrary location in the image and the small region shown
containing the point is a neighborhood of (x, y). The neighborhood is rectangular, centered on
(x, y) and much smaller in size than the image.
The process consists of moving the origin of the neighborhood from pixel to pixel and
applying the operator T to the pixels in the neighborhood to yield the output at that location.
Thus for any specific location (x, y) the value of the output image g at those coordinates is
equal to the result of applying T to the neighborhood with origin at (x, y) in f. This procedure
is called spatial filtering, in which the neighborhood, along with a predefined operation, is
called a spatial filter. The smallest possible neighborhood is of size 1×1. In this case, g
depends only on the value of f at a single point (x, y) and T becomes an intensity
transformation or gray level mapping of the form
s = T(r)
where s and r are variables representing the intensities of g and f, respectively, at any point (x, y).
Applying the transformation T(r) to every pixel of f to generate the corresponding pixels in g can,
for example, produce an image of higher contrast than the original by darkening the levels below
some value m and brightening the levels above m. This is known as
contrast stretching: as shown in Fig.(a), the values of r below m are compressed by the transformation
function into a narrow range of s, toward black. The opposite effect takes place for values of
r above m. In the limiting case shown in Fig.(b), T(r) produces a two-level (binary) image. A
mapping of this form is called a thresholding function. Hence the enhancement at any point
in an image depends only on the gray level at that point, and the techniques in this category
are referred to as point processing.
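As a rough illustration (not from the text), the following NumPy sketch applies the limiting thresholding case of s = T(r) described above; the image array img and the threshold m are assumed placeholders.

import numpy as np

# Assumed 8-bit grayscale image; any 2-D uint8 array works here.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
m = 128  # assumed threshold value

# Point processing: each output pixel depends only on the input pixel at (x, y).
binary = np.where(img < m, 0, 255).astype(np.uint8)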
Fig: Some basic Intensity transformation functions used for image enhancement.
Image Negatives
The negative of an image with gray levels in the range [0, L-1] is obtained by using
the negative transformation, which is given by the expression
s = L - 1 - r
Reversing the intensity levels of an image in this manner produces the equivalent of a
photographic negative. This type of processing is particularly suited for enhancing white or
gray detail embedded in dark regions of an image, especially when the black areas are
dominant in size.
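A one-line NumPy sketch of the negative transformation for an 8-bit image (L = 256); img is a placeholder array.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input
L = 256
negative = (L - 1 - img.astype(np.int32)).astype(np.uint8)  # s = L - 1 - r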
Fig: (a) Original digital mammogram. (b) Negative image obtained using the negative
transformation
Log Transformations
The general form of the log transformation is
s = c log(1 + r)
where c is a constant, and it is assumed that r ≥ 0. The shape of the log curve shows that this
transformation maps a narrow range of low gray-level values in the input image into a wider
range of output levels. The opposite is true of higher values of input levels. We would use a
transformation of this type to expand the values of dark pixels in an image while compressing
the higher-level values. The opposite is true of the inverse log transformation. An important
characteristic of the log transformation is that it compresses the dynamic range
of images with large variations in pixel values. A classical application of the log transformation is
the display of Fourier spectra, whose dynamic range is otherwise too large for typical displays.
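A hedged NumPy sketch of the log transformation applied to a Fourier spectrum; choosing c to rescale the output to [0, 255] is one common convention, assumed here rather than prescribed by the text.

import numpy as np

img = np.random.rand(64, 64)                          # assumed input image
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))  # Fourier spectrum |F(u, v)|

# s = c log(1 + r); pick c so the output spans the 8-bit range.
c = 255.0 / np.log(1.0 + spectrum.max())
s = (c * np.log(1.0 + spectrum)).astype(np.uint8)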
Fig: (a) Fourier spectrum. (b) Result of applying the log transformation
Power-Law Transformations
Power-law transformations have the basic form
s = c r^γ
where c and γ are positive constants. Plots of s versus r for various values
of γ are shown in the following figure.
Fig: Plots of the equation s = c r^γ for various values of γ (c = 1 in all cases).
The curves generated with values of γ > 1 have exactly the opposite effect as those
generated with values of γ < 1. The transformation reduces to the identity when c = γ = 1.
Power-law transformations are used in a variety of devices for image capture, printing, and
display. The exponent in the power-law equation is referred to as gamma, and the procedure
used to correct the power-law response of such devices is called gamma correction.
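A minimal gamma-correction sketch, assuming intensities are first normalized to [0, 1]; c = 1 and the gamma value are illustrative.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input
gamma = 0.4                      # γ < 1 brightens dark regions (illustrative value)
r = img / 255.0                  # normalize r to [0, 1]
s = np.power(r, gamma)           # s = c * r**gamma with c = 1
out = (255 * s).astype(np.uint8)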
Piecewise-Linear Transformation Functions
The principal advantage of piecewise-linear functions is that they can be
arbitrarily complex; their drawback is that their specification requires considerably more user
input. These transformations are of three types.
Contrast Stretching:
Contrast stretching is a process that expands the range of intensity values in an image
in order to utilize the full dynamic range. It is one of the simplest piecewise-linear
functions. Low-contrast images can result from poor illumination. The following figure
shows a typical transformation used for contrast stretching.
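One way to sketch a piecewise-linear contrast stretch is with np.interp, mapping control points (r1, s1) and (r2, s2) as in the typical transformation just described; the control-point values here are assumptions.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input

# Assumed control points: (r1, s1) and (r2, s2) define the piecewise-linear map.
r1, s1, r2, s2 = 70, 20, 180, 235
out = np.interp(img, [0, r1, r2, 255], [0, s1, s2, 255]).astype(np.uint8)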
Intensity-Level Slicing:
The process of highlighting a specific range of intensities in an image is known as
intensity-level slicing. Two basic approaches can be adopted for intensity-level
slicing.
One approach is to display a high value for all gray levels in the range of interest and
a low value for all other intensities. This transformation produces a binary image.
The second approach brightens the desired range of gray
levels but preserves the background and all other gray levels in the image unchanged.
Fig: (a) The transformation highlights range [A, B] of gray levels and reduces all others to a constant
level. (b) The transformation highlights range [A, B] but preserves all other levels.
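A short NumPy sketch of both slicing approaches for an assumed range of interest [A, B]; the highlight value and range are placeholders.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input
A, B = 100, 160  # assumed range of interest

in_range = (img >= A) & (img <= B)

# Approach 1: binary image -- high value inside [A, B], low value elsewhere.
binary = np.where(in_range, 255, 0).astype(np.uint8)

# Approach 2: brighten [A, B], leave all other levels unchanged.
highlighted = img.copy()
highlighted[in_range] = 255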
Bit-Plane Slicing:
Instead of highlighting gray-level ranges, highlighting the contribution made to total
image appearance by specific bits might be desired. Suppose that each pixel in an image is
represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from
bit-plane 0 for the least significant bit to bit plane 7 for the most significant bit. In terms of 8-
bit bytes, plane 0 contains all the lowest order bits in the bytes comprising the pixels in the
image and plane 7 contains all the high-order bits. The following figure shows the various bit
planes for an image.
The higher-order bits contain the majority of the visually significant data, while the other
bit planes contribute the more subtle details in the image. Separating a digital image into its bit
planes is useful for analyzing the relative importance of each bit of the image, a
process that aids in determining the adequacy of the number of bits used to quantize each
pixel.
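A sketch of bit-plane slicing with bitwise operations; plane k of an 8-bit image is obtained by masking bit k, and the reconstruction from the top two planes illustrates their visual dominance.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input

# planes[k] is 1 where bit k of the pixel is set (k = 0 is the LSB, k = 7 the MSB).
planes = [(img >> k) & 1 for k in range(8)]

# Reconstruct from the two highest-order planes only.
approx = ((planes[7] << 7) | (planes[6] << 6)).astype(np.uint8)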
Histogram Processing
The histogram of a digital image with intensity levels in the range [0, L-1] is the discrete function
h(rk) = nk, for k = 0, 1, 2, ..., L-1
where rk is the kth intensity value and nk is the number of pixels in the image with intensity rk.
Histograms are the basis for numerous spatial domain processing techniques.
Histogram manipulation can be used effectively for image enhancement and also is quite
useful in other image processing applications, such as image compression and segmentation.
Histograms are simple to calculate in software and also lend themselves to economic
hardware implementations, thus making them a popular tool for real-time image processing.
The shape of a histogram reveals the overall character of an image. Generally,
images are classified as follows:
Dark images: The components of the histogram are concentrated on the low side of
the intensity scale.
Bright images: The components of the histogram are biased towards the high side of
the intensity scale.
Low contrast: An image with low contrast has a histogram that will be narrow and
will be centered toward the middle of the gray scale.
High contrast: The components of histogram in the high-contrast image cover a
broad range of the gray scale.
Fig: Dark, light, low contrast, high contrast images, and their corresponding histograms.
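A minimal sketch that computes an image histogram with np.bincount, from which the dark/bright/low-contrast/high-contrast character described above can be read off; img is a placeholder.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input
L = 256

h = np.bincount(img.ravel(), minlength=L)  # h[r_k] = n_k
p = h / img.size                           # normalized histogram p_r(r_k) = n_k / MN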
Histogram Equalization
Let the variable r represent the gray levels of the image to be enhanced, with r in the
range [0, L-1], r = 0 representing black and r = L-1 representing white. We consider
transformations of the form
s = T(r), 0 ≤ r ≤ L-1
which produce an output intensity level s for every pixel in the input image having
intensity r. Assume that the transformation function T(r) satisfies the following conditions:
(a) T(r) is single-valued and monotonically increasing function in the interval 0 ≤ r ≤ L-1
(b) 0 ≤T( r) ≤ L-1 for 0 ≤ r ≤ L-1
A transformation function of particular importance in image processing has the form
s = T(r) = (L-1) ∫0^r pr(w) dw
where pr(r) is the probability density function of the input gray levels and w is a dummy
variable of integration.
It can be shown that the resulting ps(s) is a uniform probability density function, independent of the form of pr(r).
For discrete values we deal with probabilities and summations instead of probability density
functions and integrals. The probability of occurrence of gray level rk in an image is
approximated by
pr(rk) = nk / MN, k = 0, 1, 2, ..., L-1
Where MN is the total number of pixels in the image, nk is the number of pixels that
have gray level rk and L is the number of possible intensity levels in the image. The discrete
version of the transformation function given is
sk = T(rk) = (L-1) Σj=0..k pr(rj) = ((L-1)/MN) Σj=0..k nj, k = 0, 1, 2, ..., L-1
Thus, a processed (output) image is obtained by mapping each pixel with level rk in
the input image into a corresponding pixel with level sk in the output image. Hence the
transformation is called histogram equalization or histogram linearization.
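A hedged NumPy sketch of discrete histogram equalization implementing sk = (L-1) Σ pr(rj); variable names are placeholders.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input
L = 256

hist = np.bincount(img.ravel(), minlength=L)
p = hist / img.size                           # p_r(r_k) = n_k / MN
cdf = np.cumsum(p)                            # running sum of probabilities
s = np.round((L - 1) * cdf).astype(np.uint8)  # s_k = (L-1) * sum_j p_r(r_j)

equalized = s[img]                            # map each pixel r_k -> s_k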
Histogram Matching
Histogram equalization automatically determines a transformation function that seeks
to produce an output image that has a uniform histogram. This is a good approach because
the results from this technique are predictable and the method is simple to implement.
However, in some applications enhancement based on a uniform histogram is not the best
approach. In particular, it is useful sometimes to be able to specify the shape of the histogram
that we wish the processed image to have. The method used to generate a processed image
that has a specified histogram is called histogram matching or histogram specification.
Let r and z denote the gray levels of the input and output (processed) images,
respectively, considered as continuous random variables, and let pr(r) and pz(z) denote their
corresponding continuous probability density functions. We can estimate pr(r) from the given
input image, while pz(z) is the specified
probability density function that we wish the output image to have. Let s be a random
variable with the property,
s = T(r) = (L-1) ∫0^r pr(w) dw
where w is a dummy variable of integration. Next, define a random variable z with the property
G(z) = (L-1) ∫0^z pz(t) dt = s
where t is a dummy variable of integration. It follows from these two equations that G(z) = T(r)
and, therefore, that z must satisfy the condition
z = G-1(s) = G-1[T(r)]
The transformation T(r) can be obtained once pr(r) has been estimated from the input
image. Similarly, the transformation function G(z) can be obtained when pz(z) is given.
Assuming that G-1 exists, an image with a specified probability density function
can be obtained from an input image by the following procedure: estimate pr(r) and obtain T(r);
obtain G(z) from the specified pz(z); then
obtain the output image by applying z = G-1(s) = G-1[T(r)] to all the pixels in the
input image. The result of this procedure will be an image whose gray levels, z, have
the specified probability density function pz(z).
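A hedged sketch of discrete histogram specification following this procedure; the specified histogram p_z below is an arbitrary placeholder, and a nearest-value lookup stands in for G-1.

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input
L = 256

p_r = np.bincount(img.ravel(), minlength=L) / img.size
p_z = np.linspace(1, 2, L); p_z /= p_z.sum()   # assumed specified histogram

T = np.round((L - 1) * np.cumsum(p_r))         # s = T(r)
G = np.round((L - 1) * np.cumsum(p_z))         # G(z)

# z = G^{-1}(s): for each s, find the z whose G(z) is closest to s.
inv_G = np.array([np.argmin(np.abs(G - s)) for s in T]).astype(np.uint8)
matched = inv_G[img]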
Local Histogram Processing
The histogram processing methods are global, in the sense that pixels are modified by
a transformation function based on the intensity distribution of an entire image. Although this
global approach is suitable for overall enhancement, there are cases in which it is necessary to
enhance details over small areas in an image. The number of pixels in these areas may have
negligible influence on the computation of a global transformation whose shape does not
necessarily guarantee the desired local enhancement. The solution is to devise transformation
functions based on the intensity distribution or other properties in the neighborhood of every
pixel in the image.
The histogram processing techniques previously described are easily adaptable to
local enhancement. The procedure is to define a square or rectangular neighborhood and
move the center of this area from pixel to pixel. At each location, the histogram of the points
in the neighborhood is computed and either a histogram equalization or histogram
specification transformation function is obtained. This function is finally used to map the
intensity of the pixel centered in the neighborhood. The center of the neighborhood region is
then moved to an adjacent pixel location and the procedure is repeated. Since only one new
row or column of the neighborhood changes during a pixel-to-pixel translation of the region,
updating the histogram obtained in the previous location with the new data introduced at each
motion step is possible. This approach has obvious advantages over repeatedly computing the
histogram over all pixels in the neighborhood region each time the region is moved one pixel
location. Another approach sometimes used to reduce computation is to utilize
nonoverlapping regions, but this method usually produces an undesirable checkerboard effect.
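As a rough, unoptimized sketch of the local method (recomputing the neighborhood histogram at every pixel rather than using the faster sliding update mentioned above), assuming a square neighborhood and an 8-bit image:

import numpy as np

def local_hist_eq(img, half=1, L=256):
    # Equalize each pixel using the histogram of its (2*half+1)^2 neighborhood.
    padded = np.pad(img, half, mode='reflect')
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            nb = padded[x:x + 2*half + 1, y:y + 2*half + 1]
            cdf = np.cumsum(np.bincount(nb.ravel(), minlength=L)) / nb.size
            out[x, y] = np.uint8(round((L - 1) * cdf[img[x, y]]))
    return out

img = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)  # assumed input
result = local_hist_eq(img)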
2.4 Enhancement Using Arithmetic/Logic Operations
Arithmetic/logic operations involving images are performed on a pixel-by-pixel basis
between two or more images. For example, subtraction of two images results in a new
image whose pixel at coordinates (x, y) is the difference between the pixels in that same
location in the two images being subtracted. Depending on the hardware and/or software
being used, the actual mechanics of implementing arithmetic/logic operations can be done
sequentially, one pixel at a time, or in parallel, where all operations are performed
simultaneously.
Image Averaging:
An important application of arithmetic operations is image averaging for noise reduction.
Consider a noisy image g(x, y) formed by the addition of zero-mean uncorrelated noise η(x, y)
to an original image f(x, y). If {gi(x, y)} is a set of K such noisy images, their average is
ḡ(x, y) = (1/K) Σi=1..K gi(x, y)
As K increases, the variability (noise) of the pixel values at each location (x, y)
decreases. Because E{ḡ(x, y)} = f(x, y), this means that ḡ(x, y) approaches f(x, y) as the
number of noisy images used in the averaging process increases. The images gi(x, y) must be
registered (aligned) in order to avoid the introduction of blurring and other artifacts in the
output image.
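A sketch of noise reduction by averaging K registered noisy images; the scene, the noise model, and K are synthetic placeholders.

import numpy as np

f = np.random.rand(64, 64)  # assumed noise-free image (placeholder)
K = 50

# K registered noisy images g_i = f + eta_i with zero-mean Gaussian noise.
noisy = [f + np.random.normal(0, 0.1, f.shape) for _ in range(K)]

g_bar = np.mean(noisy, axis=0)  # g_bar approaches f as K grows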
Fig: The mechanics of linear spatial filtering using a 3×3 filter mask.
For the 3×3 mask shown in the figure, the result (or response), g(x, y), of linear
filtering with the filter mask at a point (x, y) in the image is
g(x, y) = w(-1,-1)f(x-1, y-1) + w(-1,0)f(x-1, y) + ... + w(0,0)f(x, y) + ... + w(1,1)f(x+1, y+1)
which is the sum of products of the mask coefficients with the corresponding pixels
directly under the mask. Observe that the coefficient w (0, 0) coincides with image value f(x,
y), indicating that the mask is centered at (x, y) when the computation of the sum of products
takes place. For a mask of size m×n, we assume that m=2a+1 and n=2b+1, where a and b are
nonnegative integers.
In general, linear filtering of an image f of size M×N with a filter mask of size m×n is
given by the expression
g(x, y) = Σs=-a..a Σt=-b..b w(s, t) f(x+s, y+t)
where x and y are varied so that each pixel in w visits every pixel in f.
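A direct loop-based sketch of this expression with zero padding at the border; real implementations would use an optimized library routine, and the image and mask here are placeholders.

import numpy as np

def linear_filter(f, w):
    # g(x, y) = sum_s sum_t w(s, t) f(x+s, y+t), with zero padding at the border.
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)))
    g = np.zeros_like(f, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + 2*a + 1, y:y + 2*b + 1])
    return g

f = np.random.rand(32, 32)        # assumed image
w = np.ones((3, 3)) / 9.0         # 3x3 box mask
g = linear_filter(f, w)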
Spatial Correlation and Convolution
Correlation is the process of moving a filter mask over the image and computing the
sum of products at each location. The mechanism of convolution is the same except that the
filter is first rotated by 180°. The difference between correlation and convolution can be
explained with a 1-D example as follows.
The correlation of a filter w(x, y) of size m×n with an image f(x, y) is given by the equation
w(x, y) ☆ f(x, y) = Σs=-a..a Σt=-b..b w(s, t) f(x+s, y+t)
The convolution of a filter w(x, y) of size m×n with an image f(x, y) is given by the equation
w(x, y) * f(x, y) = Σs=-a..a Σt=-b..b w(s, t) f(x-s, y-t)
where the minus signs on the right flip f (i.e., rotate it by 180°).
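A 1-D NumPy sketch of the distinction: convolution equals correlation with the filter rotated by 180°, and an impulse input makes the flip visible.

import numpy as np

f = np.array([0, 0, 0, 1, 0, 0, 0, 0], dtype=float)  # 1-D "image" with one impulse
w = np.array([1, 2, 3], dtype=float)                 # 1-D filter

corr = np.correlate(f, w, mode='same')      # slides w over f, no rotation
conv = np.convolve(f, w, mode='same')       # same, but with w rotated by 180°

# Correlating with the flipped filter reproduces the convolution result.
assert np.allclose(conv, np.correlate(f, w[::-1], mode='same'))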
Smoothing Spatial Filters
The response of a smoothing (averaging) linear filter is simply the average of the pixels in the
neighborhood of the filter mask. For a 3×3 mask with all coefficients equal to 1/9, the response
is the average of the gray levels of the pixels in the 3×3 neighborhood defined by
the mask. An m×n mask would have a normalizing constant equal to 1/mn. A spatial
averaging filter in which all coefficients are equal is called a box filter.
The second mask yields a so-called weighted average, a term used to indicate that pixels are
multiplied by different coefficients. In this mask the pixel at the center of the mask is
multiplied by a higher value than any other, thus giving this pixel more importance in the
calculation of the average. The other pixels are inversely weighted as a function of their
distance from the center of the mask. The diagonal terms are further away from the center
than the orthogonal neighbors and, thus, are weighted less than these immediate neighbors of
the center pixel.
The general implementation for filtering an M×N image with a weighted averaging
filter of size m×n (m and n odd) is given by the expression
g(x, y) = [Σs=-a..a Σt=-b..b w(s, t) f(x+s, y+t)] / [Σs=-a..a Σt=-b..b w(s, t)]
The complete filtered image is obtained by applying the above equation for x = 0, 1, 2,
..., M-1 and y = 0, 1, 2, ..., N-1. The denominator is simply the sum of the mask
coefficients and, therefore, it is a constant that needs to be computed only once. This scale
factor is applied to all the pixels of the output image after the filtering process is completed.
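A sketch of box and weighted-average smoothing; scipy.ndimage.correlate is used for the sum of products, and the 3×3 weighted mask shown is the common 1-2-1 pattern, assumed here rather than taken from the missing figure.

import numpy as np
from scipy.ndimage import correlate

img = np.random.rand(64, 64)                      # assumed image

box = np.ones((3, 3)) / 9.0                       # box filter (all coefficients equal)
weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=float)
weighted /= weighted.sum()                        # divide by the sum of coefficients (16)

smooth_box = correlate(img, box, mode='reflect')
smooth_weighted = correlate(img, weighted, mode='reflect')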
Order-Statistics Filters
Order-statistics filters are nonlinear spatial filters whose response is based on ordering
(ranking) the pixels contained in the image area encompassed by the filter, and then replacing
the value of the center pixel with the value determined by the ranking result. The best-known
example in this category is the “Median filter”. It replaces the value of a pixel by the median
of the gray levels in the neighborhood of that pixel. Median filters are quite popular because
they provide excellent noise-reduction capabilities. They are particularly effective in the presence
of impulse noise, also called salt-and-pepper noise because of its appearance as white and black
dots superimposed on an image.
In order to perform median filtering at a point in an image, we first sort the values of
the pixel in question and its neighbors, determine their median, and assign this value to that
pixel. For example, in a 3×3 neighborhood the median is the 5th largest value, in a 5×5
neighborhood the 13th largest value, and so on. When several values in a neighborhood are
the same, all equal values are grouped. For example, suppose that a 3×3 neighborhood has
values (10, 20, 20, 20, 15, 20, 20, 25, 100). These values are sorted as (10, 15, 20, 20, 20, 20,
20, 25, 100), which results in a median of 20. Thus, the principal function of median filters is
to force points with distinct gray levels to be more like their neighbors.
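A sketch of 3×3 median filtering on synthetic salt-and-pepper noise, using scipy.ndimage.median_filter for the ranking-and-replacement step; the noise fractions are illustrative.

import numpy as np
from scipy.ndimage import median_filter

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # assumed input

# Simulate salt-and-pepper noise on a few random pixels.
noisy = img.copy()
idx = np.random.rand(*img.shape)
noisy[idx < 0.02] = 0      # pepper
noisy[idx > 0.98] = 255    # salt

denoised = median_filter(noisy, size=3)  # each pixel -> median of its 3x3 neighborhood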
The median represents the 50th percentile of a ranked set of numbers, but the ranking
lends itself to many other possibilities. For example, the 100th percentile filter is called the
max filter and is useful for finding the brightest points in an image. The 0th percentile filter is
the min filter, used for the opposite purpose.
2.7 Sharpening Spatial Filters
The principal objective of sharpening is to highlight fine detail in an image or to
enhance detail that has been blurred, either in error or as a natural effect of a particular
method of image acquisition. It includes applications ranging from electronic printing and
medical imaging to industrial inspection and autonomous guidance in military systems.
As smoothing can be achieved by integration, sharpening can be achieved by spatial
differentiation. The strength of the response of a derivative operator is proportional to the degree
of discontinuity of the image at the point at which the operator is applied. Thus image
differentiation enhances edges and other discontinuities and deemphasizes areas with
slowly varying gray levels.
The derivatives of a digital function are defined in terms of differences. There are
various ways to define these differences. A basic definition of the first-order derivative of a
one-dimensional image f(x) is the difference
∂f/∂x = f(x+1) - f(x)
A first-order derivative must satisfy the following properties:
Must be zero in the areas of constant gray-level values.
Must be nonzero at the onset of a gray-level step or ramp.
Must be nonzero along ramps.
A second-order derivative must satisfy the following properties:
Must be zero in the areas of constant gray-level values.
Must be nonzero at the onset and end of a gray-level step or ramp.
Must be zero along ramps of constant slope.
The second-order derivatives in image processing are implemented using the
Laplacian operator. The Laplacian of an image f(x, y) is defined as
∇²f = ∂²f/∂x² + ∂²f/∂y²
In the x-direction the discrete second derivative is
∂²f/∂x² = f(x+1, y) + f(x-1, y) - 2f(x, y)
and similarly in the y-direction it is
∂²f/∂y² = f(x, y+1) + f(x, y-1) - 2f(x, y)
Then
∇²f = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4f(x, y)
This equation can be implemented using the mask shown in the following figure, which
gives an isotropic result for rotations in increments of 90°. The diagonal directions can be
incorporated in the definition of the digital Laplacian by adding two more terms to the above
equation, one for each of the two diagonal directions. The form of each new term is the same
but the coordinates are along the diagonals. Since each diagonal term also contains a –2f(x, y)
term, the total subtracted from the difference terms now would be –8f(x, y). The mask used to
implement this new definition is shown in the figure. This mask yields isotropic results for
increments of 45°. The other two masks are also used frequently in practice.
Fig.(a) Filter mask used to implement the digital Laplacian (b) Mask used to implement an
extension of this equation that includes the diagonal neighbors. (c)&(d) Two other
Implementations of the Laplacian.
The Laplacian with the negative sign gives equivalent results. Because the Laplacian
is a derivative operator, it highlights gray-level discontinuities in an image and deemphasizes
regions with slowly varying gray levels. This will tend to produce images that have grayish
edge lines and other discontinuities, all superimposed on a dark, featureless background.
Background features can be “recovered” while still preserving the sharpening effect of the
Laplacian operation simply by adding the original and Laplacian images. If the definition
used has a negative center coefficient, then we subtract, rather than add, the Laplacian image
to obtain a sharpened result. Thus, the basic way in which we use the Laplacian for
image enhancement is
g(x, y) = f(x, y) + c ∇²f(x, y)
where c = -1 if the center coefficient of the Laplacian mask is negative and c = 1 if it is
positive.
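A sketch of Laplacian sharpening using the 4-neighbor mask from figure (a); because its center coefficient is negative, the Laplacian is subtracted (c = -1). The clipping convention is an assumption.

import numpy as np
from scipy.ndimage import correlate

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8).astype(float)

lap_mask = np.array([[ 0,  1, 0],
                     [ 1, -4, 1],
                     [ 0,  1, 0]], dtype=float)

lap = correlate(img, lap_mask, mode='reflect')   # del^2 f
g = np.clip(img - lap, 0, 255).astype(np.uint8)  # g = f + c*del^2 f with c = -1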
A process that has been used for many years in the publishing industry to sharpen
images consists of subtracting a blurred version of an image from the original image. This
process, called unsharp masking, consists of the following steps:
Blur the original image.
Subtract the blurred image from the original.
Add the mask to the original.
Let fblur(x, y) denote the blurred image; unsharp masking is then expressed as
gmask(x, y) = f(x, y) - fblur(x, y)
Then we add a weighted portion of the mask to the original image
g(x, y) = f(x, y) + k * gmask(x, y)
where k is a weighting coefficient.
When k = 1, the process is standard unsharp masking.
When k > 1, the process is referred to as high-boost filtering.
When k < 1, it de-emphasizes the contribution of the unsharp mask.
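A sketch of unsharp masking and high-boost filtering; the Gaussian blur, its sigma, and the value of k are illustrative choices, not prescribed by the text.

import numpy as np
from scipy.ndimage import gaussian_filter

f = np.random.rand(64, 64)                 # assumed image
blurred = gaussian_filter(f, sigma=2.0)    # step 1: blur the original

gmask = f - blurred                        # step 2: subtract blur from original
k = 1.5                                    # k > 1 -> high-boost filtering
g = f + k * gmask                          # step 3: add weighted mask to original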
The first derivatives in image processing are implemented using the magnitude of the
gradient. The gradient of f at coordinates (x, y) is defined as the two-dimensional column
vector
∇f = [gx, gy]T = [∂f/∂x, ∂f/∂y]T
and its magnitude is
M(x, y) = mag(∇f) = [gx² + gy²]^(1/2)
The components of the gradient vector itself are linear operators, but the magnitude of
this vector obviously is not because of the squaring and square root. The computational
burden of implementing the above equation over an entire image is not trivial, and it is
common practice to approximate the magnitude of the gradient by using absolute values
instead of squares and square roots:
M(x, y) ≈ |gx| + |gy|
This equation is simpler to compute and it still preserves relative changes in gray
levels, but the isotropic feature property is lost in general. However, as in the case of the
Laplacian, the isotropic properties of the digital gradient are preserved only for a limited
number of rotational increments that depend on the masks used to approximate the
derivatives. As it turns out, the most popular masks used to approximate the gradient give the
same result only for vertical and horizontal edges and thus the isotropic properties of the
gradient are preserved only for multiples of 90°.
Let us denote the intensities of image points in a 3×3 region shown in figure (a). For
example, the center point, z5 , denotes f(x, y), z1 denotes f(x-1, y-1), and so on. The simplest
approximations to a first-order derivative that satisfy the conditions stated are gx= (z8-z5) and
gy = (z6 - z5). Two other definitions, proposed by Roberts [1965] in the early development of
digital image processing, use cross differences:
gx = (z9 - z5) and gy = (z8 - z6)
These equations can be implemented with the two masks shown in figures (b) and
(c). These masks are referred to as the Roberts cross-gradient operators. Masks of even size
are awkward to implement. The smallest filter mask in which we are interested is of size 3×3.
An approximation using absolute values, still at point z5, but using a 3×3 mask, is
M(x, y) ≈ |(z7 + 2z8 + z9) - (z1 + 2z2 + z3)| + |(z3 + 2z6 + z9) - (z1 + 2z4 + z7)|
These equations can be implemented using the masks shown in figure (d) and (e). The
difference between the third and first rows of the 3×3 image region approximates the
derivative in the x-direction, and the difference between the third and first columns
approximates the derivative in the y-direction. The masks shown in figure (d) and (e) are
referred to as the Sobel operators. The magnitude of the gradient obtained with these masks is
M(x, y) ≈ |gx| + |gy|
where gx and gy are the responses of the two Sobel masks.
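A sketch of the Sobel gradient using the absolute-value approximation above; mask orientation conventions vary, and the ones below follow the row/column differences just described.

import numpy as np
from scipy.ndimage import correlate

f = np.random.rand(64, 64)  # assumed image

sobel_x = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)  # third row minus first row
sobel_y = sobel_x.T                              # third column minus first column

gx = correlate(f, sobel_x, mode='reflect')
gy = correlate(f, sobel_y, mode='reflect')
M = np.abs(gx) + np.abs(gy)                      # gradient magnitude approximation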
2.8. Preliminary Concepts: The Impulse and Its Sifting Property
The sifting property of an impulse located at an arbitrary point t0, denoted by δ(t - t0), is
∫ f(t) δ(t - t0) dt = f(t0)
where the integral is taken over all t; it yields the value of the function f(t) at the location
of the impulse. Let x represent a discrete variable; the unit discrete impulse δ(x) is defined as
δ(x) = 1 if x = 0, and δ(x) = 0 otherwise
and its sifting property is Σx f(x) δ(x - x0) = f(x0).
Convolution:
The convolution of two continuous functions, f(t) and h(t), of one continuous variable,
t, is defined as
f(t) ⋆ h(t) = ∫ f(τ) h(t - τ) dτ
where the integral is taken over all τ.
Sampling:
A continuous function f(t) is sampled at uniform intervals ΔT of the independent variable by
multiplying it by a train of impulses sΔT(t), giving
f̃(t) = f(t) sΔT(t)
and
f̃(t) = Σn f(t) δ(t - nΔT)
where f̃(t) denotes the sampled function and each component of this summation is
an impulse weighted by the value of f(t) at the location of the impulse. The value of each
sample is then given by the "strength" of the weighted impulse, which we obtain by
integration. That is, the value, fk, of an arbitrary sample in the sequence is given by
fk = ∫ f(t) δ(t - kΔT) dt = f(kΔT)
Fig: (a) A continuous function. (b) Train of impulses used to model the sampling process.(c)
Sampled function formed as the product of (a) and (b). (d) Sample values obtained by
integration and using the sifting property of the impulse.
The Fourier transform of the sampled function f̃(t) is given by
F̃(µ) = (1/ΔT) Σn F(µ - n/ΔT)
The summation in the last line shows that the Fourier transform of the sampled
function is an infinite, periodic sequence of copies of the transform of the original,
continuous function. The separation between copies is determined by the value of 1/ΔT.
The quantity 1/ΔT is the sampling rate used to generate the sampled function. If the
sampling rate is high enough to provide sufficient separation between the periods and thus
preserve the integrity of F(µ), the function is said to be over-sampled. If the sampling rate is
just enough to preserve F(µ), the function is critically sampled. If the sampling rate is below
the minimum required to maintain distinct copies of F(µ), and thus fails to preserve the
original transform, the function is under-sampled.
Fig. (a) Fourier transform of a band-limited function. (b)-(d) Transforms of the corresponding
sampled function under the conditions of over-sampling, critically sampling, and under-
sampling, respectively.
Sampling Theorem:
A function f(t) whose Fourier transform is zero for values of frequencies outside a
finite interval [-µmax, µmax] about the origin is called a band-limited function. We can recover
f(t) from its sampled version if we can isolate a copy of F(µ) from the periodic sequence of
copies of this function contained in F̃(µ). F̃(µ) is a continuous, periodic function with period
1/ΔT. This implies that we can recover f(t) from that single period by using the inverse
Fourier transform. Extracting from F̃(µ) a single period that is equal to F(µ) is possible if the
separation between copies is sufficient, which requires
1/ΔT > 2µmax
That is, a band-limited function can be recovered completely from a set of its samples if the
samples are acquired at a rate exceeding twice the highest frequency content of the function.
This result is known as the sampling theorem.
Fig. (a) Transform of a band-limited function. (b) Transform resulting from critically
sampling the same function.
2.9. Extension to Functions of Two Variables
The 2-D Impulse and Its Sifting Property:
The impulse, δ(t, z), of two continuous variables, t and z, is defined as in the 1-D case, and
its sifting property is
∫∫ f(t, z) δ(t - t0, z - z0) dt dz = f(t0, z0)
Let f(t, z) be a continuous function of two continuous variables, t and z. The two-
dimensional, continuous Fourier transform pair is given by the expressions
F(µ, ν) = ∫∫ f(t, z) e^(-j2π(µt + νz)) dt dz
and
f(t, z) = ∫∫ F(µ, ν) e^(j2π(µt + νz)) dµ dν
Two-dimensional sampling is accomplished with a 2-D impulse train,
sΔTΔZ(t, z) = Σm Σn δ(t - mΔT, z - nΔZ)
where ΔT and ΔZ are the separations between samples along the t- and z-axes of the
continuous function f(t, z). The function f(t, z) is said to be band-limited if its Fourier
transform is zero outside a rectangle established by the intervals [-µmax, µmax] and
[-νmax, νmax]; that is,
F(µ, ν) = 0 for |µ| ≥ µmax and |ν| ≥ νmax
The 2-D transform is generally complex, F(u, v) = R(u, v) + jI(u, v),
where R(u, v) and I(u, v) are the real and imaginary parts of F(u, v). Standard properties of
the Fourier transform, such as linearity, translation, periodicity, and conjugate symmetry,
follow from these definitions.
2.10. The Basics of Filtering in the Frequency Domain
Filtering an image f(x, y) of size M×N in the frequency domain consists of the following steps:
1. Obtain the padding parameters P and Q; typically P = 2M and Q = 2N.
2. Form a padded image fp(x, y) of size P×Q by appending the necessary number of zeros to f(x, y).
3. Multiply fp(x, y) by (-1)^(x+y) to center its transform.
4. Compute the DFT, F(u, v), of the image from step 3.
5. Generate a real, symmetric filter function H(u, v) of size P×Q with center at (P/2, Q/2), and
form the product G(u, v) = H(u, v)F(u, v).
6. Obtain the processed image gp(x, y) = {real[IDFT[G(u, v)]]}(-1)^(x+y).
7. Obtain the final processed result, g(x, y), by extracting the M×N region from the top,
left quadrant of gp(x, y).
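A NumPy sketch of these seven steps, with a Gaussian low pass stand-in for H(u, v); the image, padding choice, and D0 are assumptions.

import numpy as np

f = np.random.rand(64, 64)                    # assumed M x N input image
M, N = f.shape
P, Q = 2 * M, 2 * N                           # step 1: padding parameters

fp = np.zeros((P, Q)); fp[:M, :N] = f         # step 2: zero-padded image
x, y = np.meshgrid(np.arange(P), np.arange(Q), indexing='ij')
fp = fp * (-1.0) ** (x + y)                   # step 3: center the transform

F = np.fft.fft2(fp)                           # step 4: DFT

u, v = x, y                                   # reuse the index grids
D2 = (u - P / 2) ** 2 + (v - Q / 2) ** 2      # squared distance from the center
D0 = 30.0                                     # assumed cutoff frequency
H = np.exp(-D2 / (2 * D0 ** 2))               # step 5: Gaussian LPF
G = H * F                                     # ... and the product G = H*F

gp = np.real(np.fft.ifft2(G)) * (-1.0) ** (x + y)  # step 6
g = gp[:M, :N]                                # step 7: extract top-left M x N region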
2.11. Image Smoothing using Frequency Domain Filters:
Smoothing is achieved in the frequency domain by attenuating a specified
range of high-frequency components in the transform of a given image.
Ideal Low pass Filter
The ideal low pass filter (ILPF) passes without attenuation all frequencies within a circle
of radius D0 from the origin and “cuts off” all frequencies outside this circle. Its 2-D
transfer function is
H(u, v) = 1 if D(u, v) ≤ D0, and H(u, v) = 0 if D(u, v) > D0
where D(u, v) is the distance between a point (u, v) and the center of the frequency
rectangle:
D(u, v) = [(u - P/2)² + (v - Q/2)²]^(1/2)
Fig: 3.3 (a) Perspective plot of an ideal Low pass Filter transfer function. (b) Filter displayed
as an image. (c) Filter Radial cross section.
The point of transition between H(u,v) = 1 and H(u,v) = 0 is called the cutoff
frequency. The sharp cutoff of an ILPF cannot be realized with electronic
components, and it produces a ringing effect in which a series of lines of decreasing intensity
lies parallel to the edges. To avoid this ringing effect, Gaussian low-pass or Butterworth
low-pass filters are preferred.
Butterworth Low pass Filter:
The transfer function of a Butterworth low pass filter (BLPF) of order n, with cutoff frequency
at distance D0 from the origin, is defined as
H(u, v) = 1 / [1 + (D(u, v) / D0)^(2n)]
As n increases, the filter becomes sharper, with increased ringing in the spatial domain.
For n = 1 the BLPF produces no ringing; for n = 2 ringing is present but imperceptible.
Fig: 3.4 (a) Perspective plot of a Butterworth Low pass Filter transfer function. (b) Filter
displayed as an image. (c) Filter Radial cross section of orders through n=1 to 4.
Gaussian Low pass Filter:
The transfer function of a 2-D Gaussian low pass filter (GLPF) is defined as
H(u, v) = e^(-D²(u, v) / 2D0²)
The GLPF transfer function is controlled by the value of the cutoff frequency D0.
The advantage of the Gaussian filter is that it never produces ringing.
Fig. (a) Perspective plot of a Gaussian Low pass Filter transfer function. (b) Filter
displayed as an image. (c) Filter Radial cross section for various values of D0.
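A sketch constructing all three low pass transfer functions on a shared distance grid; P, Q, D0, and n are illustrative values.

import numpy as np

P, Q, D0, n = 128, 128, 20.0, 2               # assumed sizes and parameters
u, v = np.meshgrid(np.arange(P), np.arange(Q), indexing='ij')
D = np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)

H_ideal = (D <= D0).astype(float)             # ILPF: 1 inside the circle, 0 outside
H_butter = 1.0 / (1.0 + (D / D0) ** (2 * n))  # BLPF of order n
H_gauss = np.exp(-D ** 2 / (2 * D0 ** 2))     # GLPF: never rings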
2.12. Image Sharpening using Frequency Domain Filters:
Sharpening is achieved in the frequency domain by attenuating low-frequency components
without disturbing the high-frequency content of the transform. A high pass filter is obtained
from a given low pass filter using the relation
HHP(u, v) = 1 - HLP(u, v)
Ideal High pass Filter:
The IHPF is the opposite of the ILPF in the sense that it sets to zero all frequencies
inside a circle of radius D0 while passing, without attenuation, all frequencies outside the
circle:
H(u, v) = 0 if D(u, v) ≤ D0, and H(u, v) = 1 if D(u, v) > D0
Butterworth High pass Filter:
The transfer function of a Butterworth high pass filter (BHPF) of order n with cutoff frequency
D0 is
H(u, v) = 1 / [1 + (D0 / D(u, v))^(2n)]
The order n determines the sharpness of the cutoff and the amount of ringing.
The transition into higher values of cutoff frequencies is much smoother with the BHPF.
Gaussian High pass Filter:
The transfer function of a Gaussian high pass filter (GHPF) is
H(u, v) = 1 - e^(-D²(u, v) / 2D0²)
The results obtained with the GHPF are more gradual than with the IHPF and BHPF
filters. Even the filtering of smaller objects and thin bars is cleaner with the Gaussian
filter.
Fig. Perspective plot, Filter displayed as an image and Filter Radial cross section.
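Since each high pass transfer function is one minus its low pass counterpart, the three HPFs can be sketched the same way; the small guard on D avoids division by zero at the center, and all parameters are again assumptions.

import numpy as np

P, Q, D0, n = 128, 128, 20.0, 2
u, v = np.meshgrid(np.arange(P), np.arange(Q), indexing='ij')
D = np.sqrt((u - P / 2) ** 2 + (v - Q / 2) ** 2)

H_ihpf = (D > D0).astype(float)                                # IHPF
H_bhpf = 1.0 / (1.0 + (D0 / np.maximum(D, 1e-8)) ** (2 * n))   # BHPF, D = 0 guarded
H_ghpf = 1.0 - np.exp(-D ** 2 / (2 * D0 ** 2))                 # GHPF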
2.13. The Laplacian in the Frequency Domain
The Laplacian of an image f(x, y) is defined as
∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²
We know that the Fourier transform of a derivative satisfies
ℑ[d^n f/dx^n] = (j2πu)^n F(u)
Then, applying this property in two dimensions,
ℑ[∇²f(x, y)] = -4π²(u² + v²) F(u, v)
Hence the Laplacian can be implemented in the frequency domain by using the filter
H(u, v) = -4π²(u² + v²)
In all filtering operations, the assumption is that the origin of F(u, v) has been
centered by performing the operation f(x, y)(-1)^(x+y) prior to taking the transform of the
image. If f (and F) are of size M×N, this operation shifts the center of the transform so that
(u, v) = (0, 0) is at point (M/2, N/2) in the frequency rectangle. As before, the center of the
filter function also needs to be shifted:
H(u, v) = -4π²[(u - M/2)² + (v - N/2)²]
Conversely, computing the Laplacian in the spatial domain and computing the Fourier
transform of the result is equivalent to multiplying F(u, v) by H(u, v). We express this dual
relationship in Fourier-transform-pair notation as
∇²f(x, y) ⟺ -4π²(u² + v²) F(u, v)
The enhanced image g(x, y) can be obtained by subtracting the Laplacian from the
original image:
g(x, y) = f(x, y) - ∇²f(x, y)
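A sketch of this frequency-domain Laplacian sharpening, using np.fft.fftshift in place of the (-1)^(x+y) pre-multiplication; normalizing the Laplacian before subtraction is an assumed convention to keep the scales comparable.

import numpy as np

f = np.random.rand(64, 64)                    # assumed image, values in [0, 1]
M, N = f.shape

F = np.fft.fftshift(np.fft.fft2(f))           # centered transform
u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
H = -4 * np.pi ** 2 * ((u - M / 2) ** 2 + (v - N / 2) ** 2)

lap = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
lap = lap / np.abs(lap).max()                 # normalize the Laplacian's scale
g = np.clip(f - lap, 0, 1)                    # g = f - del^2 f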
2.14. Homomorphic Filtering
An image f(x, y) can be expressed as the product of its illumination and reflectance components:
f(x, y) = i(x, y) r(x, y)
Because the Fourier transform of a product is not the product of the transforms, we first take
the logarithm,
z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y)
so that
Z(u, v) = Fi(u, v) + Fr(u, v)
where Fi(u, v) and Fr(u, v) are the Fourier transforms of ln i(x, y) and ln r(x, y),
respectively. If we process Z(u, v) by means of a filter function H(u, v), then
S(u, v) = H(u, v)Z(u, v) = H(u, v)Fi(u, v) + H(u, v)Fr(u, v)
and the filtered result in the spatial domain is s(x, y) = IDFT[S(u, v)].
Finally, since z(x, y) was formed by taking the logarithm of the original image f(x, y),
the inverse operation (exponentiation) yields the desired enhanced image:
g(x, y) = e^s(x, y) = i0(x, y) r0(x, y)
where i0(x, y) and r0(x, y) are the illumination and reflectance components of the output
image. The filtering approach is summarized in the figure.
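A sketch of the homomorphic pipeline (log, DFT, filter, inverse DFT, exponentiate); the Gaussian high-frequency-emphasis filter and its parameters gammaL, gammaH, and D0 are typical but assumed choices.

import numpy as np

f = np.random.rand(64, 64) + 0.01             # assumed image, strictly positive
M, N = f.shape

z = np.log(f)                                  # z = ln f = ln i + ln r
Z = np.fft.fftshift(np.fft.fft2(z))

u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')
D2 = (u - M / 2) ** 2 + (v - N / 2) ** 2
gammaL, gammaH, D0 = 0.5, 2.0, 20.0            # assumed filter parameters
H = gammaL + (gammaH - gammaL) * (1 - np.exp(-D2 / (2 * D0 ** 2)))

s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
g = np.exp(s)                                  # invert the logarithm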
Band Reject Filters
A Butterworth band reject filter of order n is defined as
H(u, v) = 1 / [1 + (D(u, v) W / (D²(u, v) - D0²))^(2n)]
where W is the width of the band, D(u, v) is the distance from the center of the
filter, D0 is the cutoff frequency, and n is the order of the Butterworth filter. Band reject
filters are very effective in removing periodic noise, and the ringing effect is normally small.
A band pass filter is obtained from the band reject filter as
HBP(u, v) = 1 - HBR(u, v)
Fig: Band Reject Filter and its corresponding Band Pass Filter
Notch Filters
A notch filter rejects (or passes) frequencies in a predefined neighborhood about the
center of the frequency rectangle. It is constructed as a product of high pass filters whose
centers have been translated to the centers of the notches. The general form of a notch reject
filter containing Q notch pairs is
HNR(u, v) = Πk=1..Q Hk(u, v) H-k(u, v)
where Hk(u, v) and H-k(u, v) are high pass filters whose centers are at (uk, vk) and
(-uk, -vk), respectively. These centers are specified with respect to the center of the frequency
rectangle, (M/2, N/2). The distance computations for each filter are therefore
Dk(u, v) = [(u - M/2 - uk)² + (v - N/2 - vk)²]^(1/2)
and
D-k(u, v) = [(u - M/2 + uk)² + (v - N/2 + vk)²]^(1/2)
A notch pass filter (NP) is obtained from a notch reject filter (NR) using
HNP(u, v) = 1 - HNR(u, v)
For example, a Butterworth notch reject filter of order n containing three notch pairs is
HNR(u, v) = Πk=1..3 [1 / (1 + (D0k/Dk(u, v))^(2n))] [1 / (1 + (D0k/D-k(u, v))^(2n))]
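A sketch of a Butterworth notch reject filter assembled as a product of translated high pass pairs per the formulas above; the notch centers, D0, and n are placeholders.

import numpy as np

M, N, D0, n = 128, 128, 10.0, 2                # assumed sizes and parameters
u, v = np.meshgrid(np.arange(M), np.arange(N), indexing='ij')

centers = [(20, 30), (-15, 40), (35, -25)]     # assumed (uk, vk) notch centers
H_nr = np.ones((M, N))
for uk, vk in centers:
    Dk = np.sqrt((u - M / 2 - uk) ** 2 + (v - N / 2 - vk) ** 2)
    Dmk = np.sqrt((u - M / 2 + uk) ** 2 + (v - N / 2 + vk) ** 2)
    # Product of a Butterworth high pass pair centered at (uk, vk) and (-uk, -vk).
    H_nr *= 1.0 / (1.0 + (D0 / np.maximum(Dk, 1e-8)) ** (2 * n))
    H_nr *= 1.0 / (1.0 + (D0 / np.maximum(Dmk, 1e-8)) ** (2 * n))

H_np = 1.0 - H_nr                              # corresponding notch pass filter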
PREVIOUS QUESTIONS
1. What is meant by image enhancement? Explain the various approaches used in image
enhancement.
2. a) Explain the Gray level transformation. Give the applications.
b) Compare frequency domain methods and spatial domain methods used in image
enhancement
3. Explain in detail about histogram processing.
4. What is meant by Histogram Equalization? Explain.
5. Explain how Fourier transforms are useful in digital image processing?
6. What is meant by Enhancement by point processing? Explain.