
DIGITAL IMAGE PROCESSING

INTENSITY TRANSFORMATIONS AND SPATIAL FILTERING:


Some Basic Intensity Transformation Functions, Histogram Processing, Fundamentals of Spatial
Filtering, Smoothing (Lowpass) Spatial Filters, Sharpening (Highpass) Spatial Filters, Highpass,
Bandreject, and Bandpass Filters from Lowpass Filters, Combining Spatial Enhancement
Methods (Textbook 1: Chapter 3, pages 122 to 191)

1. What is spatial domain?


The term spatial domain refers to the image plane itself, and image processing methods in this
category are based on direct manipulation of pixels in an image.
Two principal categories of spatial processing are intensity transformations and spatial
filtering.
Intensity transformations operate on single pixels of an image for tasks such as contrast
manipulation and image thresholding.
Spatial filtering performs operations on the neighborhood of every pixel in an image.
Examples of spatial filtering include image smoothing and sharpening.

THE BASICS OF INTENSITY TRANSFORMATIONS AND SPATIAL FILTERING


2. Explain the basics of intensity transformation.
The spatial domain processes we discuss in this chapter are based on the expression
g(x, y) = T[ f (x, y)]
where f (x, y) is an input image, g(x, y) is the output image, and
T is an operator on f defined over a neighborhood of point (x, y).
The operator can be applied to the pixels of a single image (our principal focus in this chapter)
or to the pixels of a set of images, such as performing the elementwise sum of a sequence of
images for noise reduction, as discussed in Section 2.6. Figure 3.1 shows the basic
implementation of Eq. on a single image.
3. What is spatial filtering (neighborhood processing)? Explain with a neat diagram.
The point (x0 , y0 ) shown is an arbitrary location in the image, and the small region shown is
a neighborhood of (x0 , y0 ), as explained in Section 2.6. Typically, the neighborhood is
rectangular, centered on (x0 , y0 ), and much smaller in size than the image. The process that
Fig. 3.1 illustrates consists of moving the center of the neighborhood from pixel to pixel and
applying the operator T to the pixels in the neighborhood to yield an output value at that
location. Thus, for any specific location (x0 , y0 ),

the value of the output image g at those coordinates is equal to the result of applying T to the
neighborhood with origin at (x0, y0) in f. For example, suppose that the neighborhood is a
square of size 3 × 3 and that operator T is defined as “compute the average intensity of the
pixels in the neighborhood.” Consider an arbitrary location in an image, say (100,150). The
result at that location in the output image, g(100,150), is the sum of f (100,150) and its 8-
neighbors, divided by 9. The center of the neighborhood is then moved to the next adjacent
location and the procedure is repeated to generate the next value of the output image g.
Typically, the process starts at the top left of the input image and proceeds pixel by pixel in a
horizontal (vertical) scan, one row (column) at a time. We will discuss this type of
neighborhood processing beginning in Section 3.4.
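A minimal Python/NumPy sketch of this neighborhood-processing idea (an illustrative example, not from the textbook): T is defined as the 3 × 3 average, and the center of the neighborhood is moved over every pixel of a hypothetical test image f.

import numpy as np

def neighborhood_average(f):
    # Replicate the border so every pixel, including edge pixels, has a full 3x3 neighborhood.
    fp = np.pad(f.astype(float), 1, mode='edge')
    g = np.zeros(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            # T = "compute the average intensity of the pixels in the neighborhood"
            g[x, y] = fp[x:x + 3, y:y + 3].mean()
    return g

f = np.random.randint(0, 256, (256, 256))   # hypothetical 8-bit test image
g = neighborhood_average(f)                 # g[100, 150] is the mean of f[100, 150] and its 8 neighbors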

The smallest possible neighborhood is of size 1 × 1. In this case, g depends only on the value
of f at a single point (x, y) and T in Eq. (3-1) becomes an intensity (also called a gray-level, or
mapping) transformation function of the form
s = T(r)
where, for simplicity in notation, we use s and r to denote, respectively, the intensity of g and f at any
point (x, y). For example, if T(r) has the form in Fig. 3.2(a), the result of applying the
transformation to every pixel in f to generate the corresponding pixels in g would be to produce
an image of higher contrast than the original, by darkening the intensity levels below k and
brightening the levels above k. In this technique, sometimes called contrast stretching (see
Section 3.2), values of r lower than k reduce (darken) the values of s, toward black. The
opposite is true for values of r higher than k. Observe how an intensity value r0 is mapped to
obtain the corresponding value s0. In the limiting case shown in Fig. 3.2(b), T(r) produces a
two-level (binary) image. A mapping of this form is called a thresholding function. Some fairly
simple yet powerful processing approaches can be formulated with intensity transformation
functions. In this chapter, we use intensity transformations principally for image enhancement.
Approaches whose results depend only on the intensity at a point sometimes are called point
processing techniques, as opposed to the neighborhood processing techniques discussed in the
previous paragraph.

4. Explain intensity transformation (point processing) and its methods. Mention the
applications of intensity transformation.
Approaches whose results depend only on the intensity at a point sometimes are called point-
processing techniques.
Contrast manipulation and image thresholding are two types of intensity transformation (point
processing) methods.
Spatial filtering, by contrast, operates on a neighborhood defined by a kernel, for example a 3 × 3 array:
| 1 0 1 |
| 0 1 0 |
| 1 0 1 |
Other terms used for such a kernel are:
• Spatial mask
• Kernel
• Template
• Window
The smallest possible neighborhood is of size 1 × 1.
In this case, g depends only on the value of f at a single point (x, y) and T in Eq.
g(x, y) = T[ f (x, y)]
becomes an intensity (also called a gray-level, or mapping) transformation function of the form
s = T(r)
where, for simplicity in notation, we use s and r to denote, respectively, the intensity of g and
f at any point (x, y).

Applications:
• Enhancement
• Segmentation
• Contrast stretching function
• Thresholding function

5. Mention 3 main types of transformation functions.


three basic types of functions used frequently in image processing:
• linear (negative and identity transformations),
• logarithmic (log and inverse-log transformations), and
• power-law (nth power and nth root transformations).
The identity function is the trivial case in which the input and output intensities are identical.

6. Explain image negatives with examples.

• The negative of an image with intensity levels in the range [0, L − 1] is obtained
by using the negative transformation function shown in intensity transformation Fig.
which has the form:
s = L − 1 − r
• Reversing the intensity levels of a digital image in this manner produces the equivalent of
a photographic negative.
• This type of processing is used, for example, in enhancing white or gray detail embedded
in dark regions of an image, especially when the black areas are dominant in size. The
original image in the textbook example is a digital mammogram showing a small lesion.
• Despite the fact that the visual content is the same in both images, some viewers find it
easier to analyze the fine details of the breast tissue using the negative image.
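A short illustrative sketch (assuming an 8-bit image held in a NumPy array) of the negative transformation s = L − 1 − r:

import numpy as np

def negative(f, L=256):
    # s = L - 1 - r applied to every pixel; for L = 256 this is simply 255 - r
    return (L - 1) - f

f = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # hypothetical test image
print(negative(f))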

7. What are identity functions?

➢ The identity transformation is considered an essential process in creating a reusable
transformation library.
➢ By creating a library of variations of the base identity transformation, a variety of data
transformation filters can be easily maintained.
8. Explain in detail 3 main types of transformation functions with a neat diagram?
• linear (negative and identity transformations),
• logarithmic (log and inverse-log transformations), and
• power-law (nth power and nth root transformations).
1. The negative of an image with intensity levels in the range [0, L − 1] is obtained
by using the negative transformation function shown in intensity transformation Fig.
which has the form:
s = L − 1 − r
Reversing the intensity levels of a digital image in this manner produces the equivalent of a
photographic negative.
The identity transformation is considered an essential process in creating a reusable
transformation library.
By creating a library of variations of the base identity transformation, a variety of data
transformation filters can be easily maintained.
2. logarithmic (log and inverse-log transformations),

Inverse log transformation is used in image processing to expand the values of light-level
pixels while compressing the darker-level values.
It's the opposite of the log transformation, which maps low-intensity values to higher-
intensity values.
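An illustrative sketch of the log and inverse-log transformations. The scaling constant c below is one common choice (chosen so the output spans [0, L − 1]); it is an assumption, not something fixed by the notes.

import numpy as np

def log_transform(f, L=256):
    # s = c * log(1 + r); c scales log(1 + (L-1)) up to L-1, expanding dark values
    c = (L - 1) / np.log(L)
    return c * np.log1p(f.astype(float))

def inverse_log_transform(f, L=256):
    # s = exp(r / c) - 1, the inverse of the mapping above; expands light values
    c = (L - 1) / np.log(L)
    return np.expm1(f.astype(float) / c)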

3. power-law (nth power and nth root transformations)


Power Law Transformation, also known as Gamma Correction, is a technique used in image
processing to adjust the brightness and contrast of an image by applying a non-linear
mapping of pixel intensities

Displaying images accurately: display devices respond to intensity according to a power law,
so gamma correction is important for correctly displaying images on computer monitors and
television screens.
Improving image quality: Power law transformation is a popular approach for improving
the aesthetic appeal and diagnostic usefulness of images.
Enhancing images with poor contrast: Power law transformation is especially useful for
images with poor contrast,
Improving visibility of details in medical imaging: Power law transformation can be used to
improve the visibility of details in medical imaging.
Power-law transformations are nonlinear transformations that apply a non-linear mapping of
pixel intensities. For s = c r^γ (with r normalized to [0, 1]), the effect depends on the value of gamma:
• Gamma < 1: Brightens the image (expands dark intensity values)
• Gamma > 1: Darkens the image (compresses bright intensity values)
• Gamma = 1: Has no effect on the image
By convention, the exponent in a power-law equation is referred to as gamma
The process used to correct these power-law response phenomena is called gamma correction
or gamma encoding.
Contrast enhancement using power-law intensity transformations.
Figure 3.9(a) shows the opposite problem of that presented in Fig. 3.8(a). The image to be
processed now has a washed-out appearance, indicating that a compression of intensity levels
is desirable. This can be accomplished with
the equation s = c r^γ,
using values of γ greater than 1. The results of processing Fig. 3.9(a) with γ = 3.0, 4.0, and
5.0 are shown in Figs. 3.9(b) through (d), respectively. Suitable results were obtained using
gamma values of 3.0 and 4.0.
The airport runways near the middle of the image appear clearer in Fig. 3.9(d) than in any of
the other three images.
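A minimal sketch of the power-law (gamma) transformation s = c r^γ. Normalizing the intensities to [0, 1] before applying the power is an implementation choice, not stated in the notes.

import numpy as np

def gamma_transform(f, gamma, c=1.0, L=256):
    r = f.astype(float) / (L - 1)          # normalize intensities to [0, 1]
    s = c * np.power(r, gamma)             # s = c * r**gamma
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

washed_out = np.random.randint(100, 220, (8, 8), dtype=np.uint8)   # hypothetical washed-out image
darker = gamma_transform(washed_out, gamma=3.0)    # gamma > 1 compresses (darkens) bright values
brighter = gamma_transform(washed_out, gamma=0.4)  # gamma < 1 expands (brightens) dark values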
9. What is Piecewise Linear Transformation Functions? Explain 3 types of Piecewise
Linear Transformation Functions.
Piece-wise Linear Transformation is a type of gray-level transformation that is used for
image enhancement.
It is a spatial domain method.
It is used for the manipulation of an image so that the result is more suitable than the original
for a specific application.
a. Contrast Stretching
i. Poor illumination
ii. Lack of dynamic range of image sensor
iii. Wrong setting of lens aperture
b. Intensity Level Slicing
c. Bit-Plane Slicing
a. Contrast Stretching:

Low-contrast images can result from poor illumination, lack of dynamic range in the
imaging sensor, or even the wrong setting of a lens aperture during image acquisition.
Contrast stretching expands the range of intensity levels in an image so that it spans the
ideal full intensity range of the recording medium or display device.
Figure 3.10(a) shows a typical transformation used for contrast stretching.
The locations of points (r1,s1) and (r2,s2) control the shape of the transformation function.
If r1 = s1 and r2 = s2 the transformation is a linear function that produces no changes in
intensity.
If r1 = r2, s1 = 0, and s2 = L − 1, the transformation becomes a thresholding function that
creates a binary image [see Fig. 3.2(b)].
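A sketch of contrast stretching with control points (r1, s1) and (r2, s2); np.interp builds the three linear segments (0, 0)–(r1, s1)–(r2, s2)–(L − 1, L − 1). The control-point values below are arbitrary examples.

import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    # piecewise-linear mapping through (0,0), (r1,s1), (r2,s2), (L-1,L-1);
    # this sketch assumes 0 < r1 < r2 < L-1 so the breakpoints are strictly increasing
    s = np.interp(f.astype(float), [0, r1, r2, L - 1], [0, s1, s2, L - 1])
    return s.astype(np.uint8)

f = np.random.randint(60, 180, (8, 8), dtype=np.uint8)    # hypothetical low-contrast image
g = contrast_stretch(f, r1=60, s1=20, r2=180, s2=235)     # expands the occupied range toward [0, 255]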
b. Intensity-level slicing.

• There are applications in which it is of interest to highlight a specific range of intensities
in an image.
• Some of these applications include enhancing features in satellite imagery, such as
masses of water, and enhancing flaws in X-ray images.
• The method, called intensity-level slicing, can be implemented in several ways, but
most are variations of two basic themes.
• One approach is to display in one value (say, white) all the values in the range of interest
and in another (say, black) all other intensities.
• This transformation, shown in Fig. 3.11(a), produces a binary image.
• The second approach, based on the transformation in Fig. 3.11(b), brightens (or
darkens) the desired range of intensities, but leaves all other intensity levels in the image
unchanged.
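An illustrative sketch of both intensity-level slicing approaches described above (the range limits and highlight value are hypothetical):

import numpy as np

def slice_binary(f, low, high, L=256):
    # Fig. 3.11(a) style: one value (white) for the range of interest, another (black) elsewhere
    return np.where((f >= low) & (f <= high), L - 1, 0).astype(np.uint8)

def slice_highlight(f, low, high, value=210):
    # Fig. 3.11(b) style: brighten the desired range, leave all other intensities unchanged
    g = f.copy()
    g[(f >= low) & (f <= high)] = value
    return g

f = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
binary = slice_binary(f, low=150, high=200)
highlighted = slice_highlight(f, low=150, high=200)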

c. Bit-Plane Slicing

Pixel values are integers composed of bits. For example, values in a 256-level grayscale
image are composed of 8 bits (one byte). Instead of highlighting intensity-level ranges, as in
the previous section, we could highlight the contribution made to total image appearance by specific bits.
As Fig. 3.13 illustrates, an 8-bit image may be considered as being composed of eight
one-bit planes, with plane 1 containing the lowest-order bit of all pixels in the image, and
plane 8 all the highest-order bits. Figure 3.14(a) shows an 8-bit grayscale image and Figs.
3.14(b) through (i) are its eight one-bit planes, with Fig. 3.14(b) corresponding to the
highest-order bit.

Observe that the four higher-order bit planes, especially the first two, contain a significant
amount of the visually-significant data. The lower-order planes contribute to more subtle
intensity details in the image. The original image has a gray border whose intensity is
194.
Bit Plane Decomposition & Pixel Representation:
• Each pixel in an 8-bit grayscale image can be represented by 8 binary values
corresponding to different bit planes (e.g., binary 11000010 for decimal 194).
• The 8th bit plane (most significant bit) binary image is created by thresholding the
input image:
o 0 for pixel values between 0 and 127
o 1 for values between 128 and 255.
Image Reconstruction & Compression:
• Reconstruction Process: Each one-bit plane numbered n is multiplied by the constant 2^(n−1)
and the weighted planes are summed to create a grayscale image. Example:
o Plane 8 × 128 + Plane 7 × 64 = Fig. 3.15(a) (flat background with only 4
intensity levels).
• Adding more planes improves detail but may introduce false contouring (e.g., Fig.
3.15(b)).
• Using the top 4 bit planes provides sufficient quality, requiring 50% less storage
compared to the original image. This technique is useful for image compression.
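A sketch of bit-plane slicing and reconstruction for an 8-bit image. Here planes are indexed 0–7, so plane n carries weight 2^n; the notes number them 1–8 with weight 2^(n−1), which is the same decomposition.

import numpy as np

def bit_planes(f):
    # plane n is 1 wherever bit n of the pixel value is set
    return [((f >> n) & 1).astype(np.uint8) for n in range(8)]

def reconstruct(planes, which=(7, 6, 5, 4)):
    # multiply each selected plane by its weight and sum; the top four planes give a good approximation
    return sum(planes[n].astype(np.uint16) * (1 << n) for n in which).astype(np.uint8)

f = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
planes = bit_planes(f)
msb_image = planes[7]          # the thresholded image: 0 for values 0-127, 1 for values 128-255
approx = reconstruct(planes)   # rebuilt from the top four bit planes only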
HISTOGRAM PROCESSING
10. Explain Histogram processing with mathematical equations?
• Let rk , for k = 0,1, 2,…,L − 1, denote the intensities of an L-level digital image, f (x,
y).
• The unnormalized histogram of f is defined as
h(rk) = nk, for k = 0, 1, 2, …, L − 1
• where nk is the number of pixels in f with intensity rk, and the subdivisions of the
intensity scale are called histogram bins.
• Similarly, the normalized histogram of f is defined as
p(rk) = h(rk)/MN = nk/MN
• where, as usual, M and N are the number of image rows and columns, respectively.
• The sum of p (rk) for all values of k is always 1.
• Histograms are simple to compute and are also suitable for fast hardware
implementations, thus making histogram-based techniques a popular tool for real-time
image processing.
• Histogram shape is related to image appearance.
• For example, Fig. 3.16 shows images with four basic intensity characteristics: dark,
light, low contrast, and high contrast; the image histograms are also shown.
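A minimal sketch computing the unnormalized and normalized histograms defined above for a hypothetical 8-bit image:

import numpy as np

def histograms(f, L=256):
    h = np.bincount(f.ravel(), minlength=L)   # h(rk) = nk, the count of pixels with intensity rk
    p = h / f.size                            # p(rk) = nk / MN, the normalized histogram
    return h, p

f = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
h, p = histograms(f)
assert abs(p.sum() - 1.0) < 1e-12             # the sum of p(rk) over all k is 1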

• Assuming initially continuous intensity values, let the variable r denote the intensities
of an image to be processed.
• As usual, we assume that r is in the range [0,L − 1], with r = 0 representing black and
r = L − 1 representing white.
• For r satisfying these conditions, we focus attention on transformations (intensity
mappings) of the form
s = T(r), 0 ≤ r ≤ L − 1
• that produce an output intensity value, s, for a given intensity value r in the input
image.
• We assume that
• (a) T(r) is a monotonic increasing function in the interval 0 ≤ r ≤ L − 1; and
• (b) 0 ≤ T(r) ≤ L − 1 for 0 ≤ r ≤ L − 1.
• In some formulations we use the inverse transformation
r = T⁻¹(s), 0 ≤ s ≤ L − 1
• in which case we change condition (a) to: (a’) T(r) is a strictly monotonic increasing
function in the interval 0 ≤ r ≤ L − 1.
• Figure 3.17(a) shows a function that satisfies conditions (a) and (b).
• Here, we see that it is possible for multiple input values to map to a single output
value and still satisfy these two conditions.
• That is, a monotonic transformation function performs a one-to-one or many-to-one
mapping. This is perfectly fine when mapping from r to s.
11. Differentiate between histogram equalization and histogram specification.
Histogram equalization and histogram specification are both image-processing techniques
that alter the distribution of an image's pixel values:
• Histogram equalization
• Adjusts pixel values based on an image's intensity histogram to create a flat histogram
with a uniform distribution of intensities. This technique enhances image details by
using the full dynamic range.
• Histogram specification
• Transforms an image's histogram to match another image's histogram. This technique
involves calculating the original image's histogram, then mapping pixel values from
the original image to the new values.
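A sketch of histogram equalization using the discrete transformation s_k = (L − 1) Σ_{j≤k} p(rj) (a standard formulation; the rounding step is an implementation choice):

import numpy as np

def equalize(f, L=256):
    p = np.bincount(f.ravel(), minlength=L) / f.size   # normalized histogram p(rk)
    cdf = np.cumsum(p)                                 # running sum of p(rj) for j <= k
    T = np.round((L - 1) * cdf).astype(np.uint8)       # monotonic mapping s = T(r)
    return T[f]                                        # apply the mapping as a lookup table

dark = np.random.randint(0, 100, (32, 32), dtype=np.uint8)   # hypothetical dark, low-contrast image
equalized = equalize(dark)                                   # spreads intensities over the full range

Histogram specification would add one further step: map each equalized value to the intensity whose specified (target) cumulative histogram best matches it.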
Comparison between histogram equalization and histogram specification.
Figure 3.23(a) shows an image of the Mars moon, Phobos, taken by NASA’s Mars Global
Surveyor.
Figure 3.23(b) shows the histogram of Fig. 3.23(a).
The image is dominated by large, dark areas, resulting in a histogram characterized by a large
concentration of pixels in the dark end of the gray scale.
At first glance, one might conclude that histogram equalization would be a good approach to
enhance this image, so that details in the dark areas become more visible. It is demonstrated
in the following discussion that this is not so.
12. Contrast the two levels of histogram processing.
The two levels of histogram processing:
• At the global level, the histogram of the entire image is processed, whereas at the
local level, the given image is subdivided and the histograms of the subdivisions (or
subimages) are manipulated individually.
• This example enhances an image with low contrast, using local histogram
equalization, which spreads out the most frequent intensity values in an image.
• The equalized image has a roughly linear cumulative distribution function for each
pixel neighborhood.
• The local version of the histogram equalization emphasized every local gray level
variations.
• These algorithms can be used on both 2D and 3D images.

LOCAL HISTOGRAM PROCESSING


• The histogram processing methods discussed thus far are global, in the sense that
pixels are modified by a transformation function based on the intensity distribution of
an entire image.
• This global approach is suitable for overall enhancement, but generally fails when the
objective is to enhance details over small areas in an image.
• This is because the number of pixels in small areas have negligible influence on the
computation of global transformations.
• The solution is to devise transformation functions based on the intensity distribution
of pixel neighborhoods.
• The histogram processing techniques previously described can be adapted to local
enhancement.
• The procedure is to define a neighborhood and move its center from pixel to pixel in
a horizontal or vertical direction.
• At each location, the histogram of the points in the neighborhood is computed, and
either a histogram equalization or histogram specification transformation function is
obtained.
• This function is used to map the intensity of the pixel centered in the neighborhood.
• The center of the neighborhood is then moved to an adjacent pixel location and the
procedure is repeated.
• Because only one row or column of the neighborhood changes in a one-pixel
translation of the neighborhood, updating the histogram obtained in the previous
location with the new data introduced at each motion step is possible.

FUNDAMENTALS OF SPATIAL FILTERING


13. Explain the fundamentals of spatial filtering with a neat diagram.
• The name filter is borrowed from frequency domain processing.
• Where “filtering” refers to passing, modifying, or rejecting specified frequency
components of an image.
• For example, a filter that passes low frequencies is called a lowpass filter.
• The net effect produced by a lowpass filter is to smooth an image by blurring it.
• We can accomplish similar smoothing directly on the image itself by using spatial
filters.
• Spatial filtering modifies an image by replacing the value of each pixel by a function
of the values of the pixel and its neighbors.
• If the operation performed on the image pixels is linear, then the filter is called a
linear spatial filter. Otherwise, the filter is a nonlinear spatial filter.
• A linear spatial filter performs a sum-of-products operation between an image f and a
filter kernel, w.
• The kernel is an array whose size defines the neighborhood of operation, and whose
coefficients determine the nature of the filter.
• Other terms used to refer to a spatial filter kernel are mask, template, and window. We
use the term filter kernel or simply kernel.
• At any point (x, y) in the image, the response, g(x, y), of the filter is the sum of
products of the kernel coefficients and the image pixels encompassed by the kernel:
g(x, y) = Σ_s Σ_t w(s, t) f(x + s, y + t), for s = −a, …, a and t = −b, …, b,
where, for a kernel of size m × n, a = (m − 1)/2 and b = (n − 1)/2.
14. Contrast between convolution and correlation, with an example for a 1-D signal.
• Spatial Filtering
– Correlation
• The process of moving the filter mask over the image and computing the sum of products at
every location.
• Linear spatial filtering (correlation) of an image of size M × N with a kernel of size m × n is
given by the expression above: g(x, y) = Σ_s Σ_t w(s, t) f(x + s, y + t).
– Convolution
• Same as correlation except the filter is first rotated by 180°.
• Thus, when the values of a kernel are symmetric about its center, correlation and
convolution yield the same result.
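A short 1-D example contrasting correlation and convolution on a discrete unit impulse (illustrative values; the kernel is deliberately asymmetric so the difference is visible):

import numpy as np

f = np.array([0, 0, 0, 1, 0, 0, 0])       # discrete unit impulse
w = np.array([1, 2, 3])                    # asymmetric kernel

corr = np.correlate(f, w, mode='same')     # slide w over f, sum of products at each position
conv = np.convolve(f, w, mode='same')      # same, except w is rotated by 180 degrees first

print(corr)   # [0 0 3 2 1 0 0]  -> a rotated copy of w appears at the impulse
print(conv)   # [0 0 1 2 3 0 0]  -> an exact copy of w appears at the impulse
# For a symmetric kernel (e.g., [1, 2, 1]) the two results would be identical.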
15. Explain SPATIAL CORRELATION AND CONVOLUTION for 2D image
• For a kernel of size m × n, we pad the image with a minimum of (m − 1)/2 rows of 0’s at
the top and bottom and (n − 1)/2 columns of 0’s on the left and right. In this case, m and n
are equal to 3, so we pad f with one row of 0’s above and below and one column of 0’s to
the left and right, as Fig. 3(b) shows.
• Figure (c) shows the initial position of the kernel for performing correlation, and
• Fig. (d) shows the final result after the center of w visits every pixel in f, computing a
sum of products at each location.
• As before, the result is a copy of the kernel, rotated by 180°.
• For convolution, we pre-rotate the kernel as before and repeat the sliding sum of products.
Figures (f) through (h) show the result.
• You see again that convolution of a function with an impulse copies the function to the
location of the impulse. As noted earlier, correlation and convolution yield the same
result if the kernel values are symmetric about the center

SOME IMPORTANT COMPARISONS BETWEEN FILTERING IN THE SPATIAL AND FREQUENCY DOMAINS

16. Explain the differences between filtering in the spatial and frequency domain.
• The tie between spatial- and frequency-domain processing is the Fourier transform.
• We use the Fourier transform from the spatial to the frequency domain.
• To return to the spatial domain we use the inverse Fourier transform.
• The focus here is on two fundamental properties relating to the spatial and frequency
domains:
• Convolution, which is the basis for filtering in the spatial domain, is equivalent
to multiplication in the frequency domain, and vice versa.
• An impulse of strength A in the spatial domain is a constant of value A in the
frequency domain, and vice versa.

• For simplicity, consider a 1-D function (such as an intensity scan line through an
image) and suppose that we want to eliminate all its frequencies above a cutoff value,
u0 , while “passing” all frequencies below that value.
Figure 3.32(a) shows a frequency-domain filter function for doing this.

• The term filter transfer function is used to denote filter functions in the frequency
domain—this is analogous to our use of the term “filter kernel” in the spatial domain.
Appropriately, the function in Fig. 3.32(a) is called a lowpass filter transfer function.

SMOOTHING (LOWPASS) SPATIAL FILTERS


17. Explain lowpass (smoothing) spatial filters.
• Smoothing (also called averaging) spatial filters are used to reduce sharp transitions in
intensity.
• Because random noise typically consists of sharp transitions in intensity, an obvious
application of smoothing is noise reduction.
• BOX FILTER KERNELS
• The simplest, separable lowpass filter kernel is the box kernel, whose coefficients
have the same value (typically 1).
• The name “box kernel” comes from a constant kernel resembling a box when viewed
in 3-D. We showed a 3 × 3 box filter in Fig. 3.31(a).
• An m × n box filter is an m × n array of 1’s, with a normalizing constant in front,
whose value is 1 divided by the sum of the values of the coefficients (i.e., 1/mn when
all the coefficients are 1’s).
• First, the average value of an area of constant intensity would equal that intensity in
the filtered image, as it should.
• Second, normalizing the kernel in this way prevents introducing a bias during
filtering; that is, the sum of the pixels in the original and filtered images will be the
same.
• Because in a box kernel all rows and columns are identical, the rank of these kernels
is 1, which, as we discussed earlier, means that they are separable.
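A sketch of an m × n box kernel and a plain zero-padded correlation loop applying it (a didactic implementation, not an optimized one):

import numpy as np

def box_kernel(m, n):
    # m*n coefficients of value 1, normalized by 1/(m*n) so they sum to 1 (no bias introduced)
    return np.ones((m, n)) / (m * n)

def filter2d(f, w):
    # zero-pad by (m-1)/2 rows and (n-1)/2 columns, then compute the sum of products at every pixel
    m, n = w.shape
    fp = np.pad(f.astype(float), ((m // 2, m // 2), (n // 2, n // 2)))
    g = np.empty(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + m, y:y + n])
    return g

f = np.random.randint(0, 256, (32, 32))
smoothed = filter2d(f, box_kernel(3, 3))   # 3x3 box filtering blurs (smooths) the image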
18. What are lowpass Gaussian filter kernels?
• The kernels of choice in applications such as those just mentioned are circularly
symmetric (also called isotropic, meaning their response is independent of orientation).
• As it turns out, Gaussian kernels of the form w(s, t) = K exp[−(s² + t²)/(2σ²)] are the only
circularly symmetric kernels that are also separable.
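A sketch of a lowpass Gaussian kernel built from the form above, constructed as an outer product to make the separability explicit (the size and sigma values are arbitrary examples):

import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - (size - 1) / 2    # sample positions s (or t) centered on 0
    g1 = np.exp(-ax**2 / (2 * sigma**2))     # 1-D Gaussian
    w = np.outer(g1, g1)                     # separable: w(s,t) = g1(s)*g1(t), i.e. exp(-(s^2+t^2)/(2*sigma^2))
    return w / w.sum()                       # normalize so the coefficients sum to 1

w = gaussian_kernel(size=5, sigma=1.0)       # circularly symmetric 5x5 lowpass kernel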
19. What are highpass (sharpening) spatial filters?
• Sharpening highlights transitions in intensity.
• Uses of image sharpening range from electronic printing and medical imaging to
industrial inspection and autonomous guidance in military systems.
• Sharpening can be accomplished by spatial differentiation

• There are various ways to define these differences. However, we require that any
definition we use for a first derivative:
• Must be zero in areas of constant intensity.
• Must be nonzero at the onset of an intensity step or ramp.
• Must be nonzero along intensity ramps.
• Similarly, any definition of a second derivative
• Must be zero in areas of constant intensity.
• Must be nonzero at the onset and end of an intensity step or ramp.
• Must be zero along intensity ramps.

A basic definition of the first-order derivative of a one-dimensional function f(x) is the
difference
∂f/∂x = f(x + 1) − f(x)

We define the second-order derivative of f(x) as the difference
∂²f/∂x² = f(x + 1) + f(x − 1) − 2f(x)

20. Explain the first- and second-order derivatives of a digital function with equations
and diagram?
• These two equations satisfy the conditions stated above, as we illustrate in the figure,
• where we also examine the similarities and differences between first- and second order
derivatives of a digital function.
• The values denoted by the small squares in Fig. (a) are the intensity values along a
horizontal intensity profile (the dashed line connecting the squares is included to aid
visualization).
• As Fig. (a) shows, the scan line contains three sections of constant intensity, an intensity
ramp, and an intensity step. The circles indicate the onset or end of intensity transitions.
• The actual numerical values of the scan line are shown inside the small boxes in Fig. (b).
• The first- and second-order derivatives, computed using the two preceding definitions, are
shown below the scan line values in Fig. (b), and are plotted in Fig. (c)
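A small numeric sketch of these two derivatives on a made-up scan line containing constant sections, a ramp, and a step (the values are illustrative, not the textbook's):

import numpy as np

scan = np.array([6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 1, 6, 6, 6], dtype=float)  # constant, ramp, constant, step

first = scan[1:] - scan[:-1]                      # f(x+1) - f(x)
second = scan[2:] + scan[:-2] - 2 * scan[1:-1]    # f(x+1) + f(x-1) - 2 f(x)

# first:  zero in constant areas, nonzero all along the ramp and at the step
# second: zero in constant areas and along the ramp, nonzero only at the onset/end of the ramp and at the step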
21. Explain four broad categories of Spatial and frequency-domain linear filters.
1. low-pass filters
2. High-pass filters
3. Band-pass filters
4. Band-reject filters
1. Figure (a) shows the transfer function of a 1-D ideal lowpass filter in the frequency
domain [the same function as in Fig. 3.32(a)].
2. We know from earlier discussions in this chapter that lowpass filters attenuate or delete
high frequencies, while passing low frequencies.
3. A high-pass filter behaves in exactly the opposite manner.
4. As Fig. (b) shows, a high-pass filter deletes or attenuates all frequencies below a cut-
off value, u0 , and passes all frequencies above this value.
5. Comparing Figs. (a) and (b), we see that a high-pass filter transfer function is obtained
by subtracting a lowpass function from 1.
6. This operation is in the frequency domain
7. Thus, we obtain a high-pass filter kernel in the spatial domain by subtracting a lowpass
filter kernel from a unit impulse with the same center as the kernel.
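A short sketch of point 7: building a highpass kernel in the spatial domain by subtracting a lowpass (box) kernel from a unit impulse with the same center:

import numpy as np

delta = np.zeros((3, 3))
delta[1, 1] = 1.0                       # unit impulse centered on the kernel

lowpass = np.ones((3, 3)) / 9.0         # 3x3 box lowpass kernel
highpass = delta - lowpass              # highpass kernel = unit impulse - lowpass kernel

print(highpass)                         # center coefficient 8/9, all others -1/9; coefficients sum to 0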
COMBINING SPATIAL ENHANCEMENT METHODS
1. With a few exceptions, such as combining blurring with thresholding, we have
focused attention thus far on individual spatial-domain processing approaches.
2. The image in Fig. (a) is a nuclear whole body bone scan, used to detect diseases such
as bone infections and tumors.
3. Our objective is to enhance this image by sharpening it and by bringing out more of
the skeletal detail.
4. Figure (b) shows the Laplacian of the original image, obtained using the Laplacian kernel.
5. We can obtain a sharpened image at this point simply by adding Figs. (a) and (b), as shown
in Fig. (c).
6. Figure (d) shows the Sobel gradient of the original image.
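A sketch of the first two combination steps (Laplacian sharpening plus the Sobel gradient), using commonly used Laplacian and Sobel kernels as stand-ins for the ones in the textbook figures:

import numpy as np

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)      # Laplacian kernel (negative center coefficient)
sobel_x = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)      # Sobel kernel for one derivative direction
sobel_y = sobel_x.T

def correlate(f, w):
    # zero-padded 3x3 correlation, same idea as the box-filter sketch above
    fp = np.pad(f.astype(float), 1)
    return np.array([[np.sum(w * fp[x:x + 3, y:y + 3])
                      for y in range(f.shape[1])] for x in range(f.shape[0])])

f = np.random.randint(0, 256, (64, 64)).astype(float)     # hypothetical bone-scan-like image
lap = correlate(f, laplacian)
sharpened = f - lap                                        # subtract because the kernel's center is negative
gradient = np.abs(correlate(f, sobel_x)) + np.abs(correlate(f, sobel_y))   # approximate Sobel gradient magnitude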
