
Digital Image Processing

Module IV
Rameela Ravindran K
• Image restoration - model of Image degradation/restoration
process - noise models – inverse filtering - least mean square
filtering - constrained least mean square filtering
• Edge detection - thresholding - region based segmentation -
boundary representation.
The purpose of image restoration
• "compensate for" or "undo" defects which degrade an image.
• Degradation may be due to motion blur, noise, and camera
misfocus.
• motion blur - very good estimate of the actual blurring function
helps undo the blur to restore the original image.
• image corrupted by noise - compensate for the degradation it
caused.
Degradation Model

• The block diagram of the general degradation model is:
  f(x, y) → blurring function b (degradation H) → add noise η → g(x, y)
• where g is the corrupted image obtained by passing the original
  image f through a low pass filter (blurring function) b and
  adding noise to it. We present four different ways of restoring
  the image.
• Degradation function along with some additive noise operates
on f(x, y) to produce degraded image g(x, y)
• Given g(x, y), some knowledge about the degradation function
H and additive noise η(x, y), objective of restoration is to obtain
estimate f’(x, y) of the original image.
• If H is a linear, position-invariant process, then the degraded image in
  the spatial domain is given by:
      g(x, y) = h(x, y) * f(x, y) + η(x, y)
• h(x, y) = spatial representation of the degradation function H
• * indicates convolution
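To make the model concrete, the following is a minimal sketch (not from the slides) that simulates g(x, y) = h(x, y) * f(x, y) + η(x, y) with an assumed Gaussian blur kernel and additive Gaussian noise; the kernel size, sigma, and noise level are illustrative choices.

```python
# Hedged sketch of the degradation model: Gaussian blur h plus additive Gaussian noise.
# Kernel size/sigma and noise level are illustrative assumptions, not values from the slides.
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=2.0):
    """Normalized 2-D Gaussian blur kernel h(x, y)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()

def degrade(f, sigma_blur=2.0, noise_std=10.0):
    """g(x, y) = h(x, y) * f(x, y) + eta(x, y)."""
    h = gaussian_kernel(sigma=sigma_blur)
    blurred = convolve2d(f, h, mode="same", boundary="symm")
    eta = np.random.normal(0.0, noise_std, f.shape)   # additive noise term
    return blurred + eta

f = np.zeros((64, 64)); f[24:40, 24:40] = 255.0       # toy "original" image
g = degrade(f)                                         # degraded observation
```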
Noise models
• The noise model is chosen based on an understanding of the physics of the
  sources of noise.
• Common models: Gaussian, Rayleigh, Gamma (Erlang), Exponential, Uniform,
  and Impulse noise.
Noise probability density functions

• Noise values are treated as random variables.
• Noise cannot be predicted exactly, but it can be described approximately in a
  statistical way using its probability density function (PDF).
Gaussian noise
• Mathematical tractability in spatial and frequency
domains
• Used frequently in practice
• Electronic circuit noise and sensor noise

      p(z) = \frac{1}{\sqrt{2\pi}\,\sigma} e^{-(z-\mu)^2 / 2\sigma^2}

  where μ is the mean intensity and σ² is the variance.

  Note: \int_{-\infty}^{\infty} p(z)\,dz = 1

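A minimal sketch (assuming 8-bit intensities and example values of μ and σ) of corrupting an image with samples drawn from this Gaussian PDF:

```python
# Hedged sketch: add zero-mean Gaussian noise to an image; mu and sigma are example values.
import numpy as np

def add_gaussian_noise(image, mu=0.0, sigma=20.0):
    noise = np.random.normal(mu, sigma, image.shape)   # samples from p(z)
    return np.clip(image.astype(float) + noise, 0, 255)
```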
Rayleigh noise
2 ( z  a ) 2 / b
 ( z  a )e for z  a
p( z )   b
0 for z  a
• The mean and variance of this density are given by

b( 4   )
  a  b / 4 and  2 
4
• a and b can be obtained through mean and variance
Erlang (Gamma) noise
      p(z) = \begin{cases} \frac{a^b z^{b-1}}{(b-1)!}\, e^{-az} & \text{for } z \ge 0 \\ 0 & \text{for } z < 0 \end{cases}

• The mean and variance of this density are given by

      \mu = \frac{b}{a} \quad \text{and} \quad \sigma^2 = \frac{b}{a^2}

• a and b can be obtained through the mean and variance
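Since the slides note that a and b can be obtained from the mean and variance, the following sketch simply inverts the Rayleigh and Erlang relations above; the synthetic test data and parameter values are only illustrative.

```python
# Hedged sketch: estimate the PDF parameters a and b from a measured mean and variance
# by inverting the Rayleigh and Erlang relations given above.
import numpy as np

def rayleigh_params(mean, var):
    b = 4.0 * var / (4.0 - np.pi)          # from sigma^2 = b(4 - pi)/4
    a = mean - np.sqrt(np.pi * b / 4.0)    # from mu = a + sqrt(pi*b/4)
    return a, b

def erlang_params(mean, var):
    a = mean / var                         # from mu = b/a and sigma^2 = b/a^2
    b = mean * mean / var
    return a, b

samples = np.random.gamma(shape=4.0, scale=0.5, size=10_000)   # synthetic Erlang data (a=2, b=4)
print(erlang_params(samples.mean(), samples.var()))            # approximately (2.0, 4.0)
```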
Exponential noise
      p(z) = \begin{cases} a e^{-az} & \text{for } z \ge 0 \\ 0 & \text{for } z < 0 \end{cases}

• The mean and variance of this density are given by

      \mu = \frac{1}{a} \quad \text{and} \quad \sigma^2 = \frac{1}{a^2}
Uniform noise

 1
 if a  z  b
p( z )   b  a
 0 otherwise

ab
Mean: 
2
(b  a ) 2
Variance: 2 
12
Impulse (salt-and-pepper) noise

      p(z) = \begin{cases} P_a & \text{for } z = a \\ P_b & \text{for } z = b \\ 0 & \text{otherwise} \end{cases}

If either P_a or P_b is zero, the impulse noise is called unipolar.
Otherwise, it is called bipolar (salt-and-pepper).
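A minimal sketch of bipolar impulse noise, assuming 8-bit images and example probabilities for P_a and P_b:

```python
# Hedged sketch of bipolar impulse (salt-and-pepper) noise; pa, pb, a, b are example values.
import numpy as np

def add_salt_pepper(image, pa=0.05, pb=0.05, a=0, b=255):
    out = image.copy()
    r = np.random.rand(*image.shape)
    out[r < pa] = a           # "pepper": pixels forced to the low value a with probability Pa
    out[r > 1 - pb] = b       # "salt": pixels forced to the high value b with probability Pb
    return out
```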
Restoration Techniques
I. Inverse filter
• assumes a known blurring function.
• restoration is good when noise is not present and not so good when it is.
II. Wiener filtering
• provides the optimal trade-off between de-noising and inverse filtering.
• the result is in general better than with straight inverse filtering.
III. Wavelet restoration
• uses wavelet-based algorithms to restore the image.
IV. Blind deconvolution
• we do not have any information about the blurring function or the additive
  noise.
• very hard.
Inverse Filters
• inverse filter is a form of high pass filter
• inverse filtering responds very badly to any noise that is present
in the image because noise tends to be high frequency.
• two methods of inverse filtering - a thresholding method and an
iterative method.
Thresholding
A blurred image is given by

      g = f * b

where f is the original image, b is some kind of low pass filter (the blurring
function) and g is our blurred image. To get back the original image, we have
to undo the convolution, i.e. apply some kind of high pass (inverse) filter. In
the frequency domain this means dividing G(u, v) by B(u, v); the thresholding
method only performs this division where |B(u, v)| is above a small threshold,
so that noise is not amplified where B(u, v) is close to zero.
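A minimal sketch of this thresholded (pseudo-)inverse filter in the frequency domain, assuming the blur kernel is known; the threshold value is illustrative, and kernel centring / boundary effects are ignored for simplicity.

```python
# Hedged sketch of thresholded inverse filtering: divide by B(u, v) only where it is
# not close to zero, so that noise is not amplified. eps is an illustrative threshold.
import numpy as np

def inverse_filter(g, h, eps=0.1):
    G = np.fft.fft2(g)
    B = np.fft.fft2(h, s=g.shape)                 # blur kernel zero-padded to the image size
    B_safe = np.where(np.abs(B) < eps, eps, B)    # threshold small |B(u, v)| values
    return np.real(np.fft.ifft2(G / B_safe))
```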

[Figure: sample pattern images and their histograms after adding Gaussian,
Rayleigh, and Gamma noise, each characterized by its mean and variance]
Periodic Noise
• usually present due to electrical or electromechanical
interference during the image acquisition process
Filters
• Inverse filtering is a restoration technique for
  deconvolution, i.e., when the image is blurred by a known
  lowpass filter, it is possible to recover the image by inverse
  filtering or generalized inverse filtering.
• However, inverse filtering is very sensitive to additive noise.
• Wiener filtering minimizes the overall mean square error in the process of
  inverse filtering and noise smoothing. It is a linear estimation of the
  original image, and the approach is based on a stochastic framework. The
  orthogonality principle implies that the Wiener filter in the Fourier domain
  can be expressed as follows:

      W(u, v) = \frac{H^*(u, v)\, S_{xx}(u, v)}{|H(u, v)|^2\, S_{xx}(u, v) + S_{\eta\eta}(u, v)}

  where S_{xx}(u, v) and S_{\eta\eta}(u, v) are respectively the power spectra
  of the original image and the additive noise, and H(u, v) is the blurring
  filter. The Wiener filter has two separate parts, an inverse filtering part
  and a noise smoothing part. It not only performs the deconvolution by inverse
  filtering (highpass filtering) but also removes the noise with a compression
  operation (lowpass filtering).
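A minimal sketch of Wiener deconvolution under the common simplification that the noise-to-signal power ratio S_ηη / S_xx is replaced by a constant K; K is an assumed, illustrative value.

```python
# Hedged sketch of the Wiener filter with a constant noise-to-signal ratio K
# standing in for S_eta_eta / S_xx (rarely known exactly in practice).
import numpy as np

def wiener_filter(g, h, K=0.01):
    G = np.fft.fft2(g)
    H = np.fft.fft2(h, s=g.shape)
    W = np.conj(H) / (np.abs(H)**2 + K)    # H* / (|H|^2 + S_eta/S_xx)
    return np.real(np.fft.ifft2(W * G))
```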
Image segmentation
Goal: partition the image into its constituent objects
Approaches
• Discontinuity: detect abrupt changes in gray levels →
  edge detection
• Similarity: group pixels based on their similarity with
  respect to a predefined criterion → region-based
  processing
• Feature extraction
• Region growing
• Feature clustering/classification
Boundary based segmentation (edge detection)
• Local discontinuities in image intensity fall into three categories:
points, lines, and edges.
Point and line detection
• The most common way to look for an arbitrary image pattern (e.g., a
  point, line, or edge) is to convolve the image with a mask of size N1 ×
  N2 (e.g., 3 × 3, 5 × 5). The size of the mask and its content depend
  on the type of object to be detected.
Point detection mask
• A point is detected at the position on which the mask is centered if |D| > T, where
  T is a non-negative threshold and
  D is a similarity measure (the mask response) between the image and the template
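A minimal sketch of point detection using the standard 3×3 Laplacian-type point mask; the threshold T is an example value.

```python
# Hedged sketch: convolve with the standard 3x3 point-detection mask and keep
# locations where the absolute response exceeds the threshold T (example value).
import numpy as np
from scipy.signal import convolve2d

POINT_MASK = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]])

def detect_points(image, T=200):
    D = convolve2d(image.astype(float), POINT_MASK, mode="same")   # mask response D
    return np.abs(D) > T                                           # |D| > T
```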
Line detection masks
• Masks are defined for four orientations: horizontal, +45°, vertical, and -45°.
Edge Detection
• Definition of edges
• Edges are significant local changes of intensity in an image.
• Edges typically occur on the boundary between two different
regions in an image.
• Goal of edge detection
• Produce a line drawing of a scene from an image of that
scene.
• Important features can be extracted from the edges of an
image (e.g., corners, lines, curves).
• These features are used by higher-level computer vision
algorithms (e.g., recognition).
• Various physical events cause intensity changes.
• Geometric events
• object boundary (discontinuity in depth and/or surface color and
texture)
• surface boundary (discontinuity in surface orientation and/or
surface color and texture)
• Non-geometric events
• specularity (direct reflection of light, such as a mirror)
• shadows (from other objects or from the same object)
• inter-reflection
What do we detect?
Depending on the impulse response of the filter, we can detect
different types of gray level discontinuities
• Isolated points (pixels)
• Lines with a predefined slope
• Generic contours
• Edge detection implies the evaluation of the local gradient and
corresponds to a (directional) derivative
Edge descriptors

• Edge normal: unit vector in the direction of maximum intensity change.
• Edge direction: unit vector perpendicular to the edge normal.
• Edge position or center: the image position at which the edge is
located.
• Edge strength: related to the local image contrast along the
normal.
• Edges can be modeled according to their intensity profiles.
• Step edge: the image intensity abruptly changes from one value
  on one side of the discontinuity to a different value on the
  opposite side.
• Ramp edge: a step edge where the intensity change is not
  instantaneous but occurs over a finite distance.
• Ridge edge: the image intensity abruptly changes value but
  then returns to the starting value within some short distance
  (usually generated by lines).
• Roof edge: a ridge edge where the intensity change is not
  instantaneous but occurs over a finite distance (usually generated
  by the intersection of surfaces).
The four steps of edge detection
• Smoothing: suppress as much noise as possible, without destroying
the true edges.
• Enhancement: apply a filter to enhance the quality of the edges in the
image (sharpening).
• Detection: determine which edge pixels should be discarded as noise
and which should be retained (usually, thresholding provides the
criterion used for detection).
• Localization: determine the exact location of an edge (sub-pixel
resolution might be required for some applications, that is, estimate
the location of an edge to better than the spacing between pixels).
Edge thinning and linking are usually required in this step.
Methods of Edge Detection
• Gradient methods (1st order derivatives)
• Local maxima and minima using first derivative in
the image
• Compute gradient magnitude horizontally and
vertically
• Zero crossing methods (Second order derivatives)
• Locate zeros in the second derivative of an image
• Laplacian of an image
Gradient based edge detection
• Best suited to abrupt discontinuities
• Performs better on images with little noise
• Magnitude of the gradient – strength of the edge
• Direction of the gradient – perpendicular to the edge direction
• |G| = \sqrt{G_x^2 + G_y^2} \approx |G_x| + |G_y|
Gradient based Edge Detection
• Roberts Edge Detector.
• Prewitt Edge Detector.
• Sobel Edge Detector.
• Canny Edge Detector.
Roberts
• 2×2 convolution masks:

      Gx =  1   0        Gy =  0  -1
            0  -1              1   0

• Differences are computed at the interpolated points (i + ½, j + ½)
• Strongest response to edges oriented at 45°
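A minimal sketch of the Roberts cross operator with the 2×2 masks above (SciPy's convolve2d is used here purely for convenience):

```python
# Hedged sketch of the Roberts cross operator using the 2x2 masks shown above.
import numpy as np
from scipy.signal import convolve2d

ROBERTS_GX = np.array([[1, 0], [0, -1]])
ROBERTS_GY = np.array([[0, -1], [1, 0]])

def roberts_edges(image):
    gx = convolve2d(image.astype(float), ROBERTS_GX, mode="same")
    gy = convolve2d(image.astype(float), ROBERTS_GY, mode="same")
    return np.abs(gx) + np.abs(gy)     # |G| approximated by |Gx| + |Gy|
```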
Prewitt
• Convolution masks:

      Gx = -1  0  1       Gy =  1   1   1
           -1  0  1             0   0   0
           -1  0  1            -1  -1  -1

• The differences are calculated at the centre pixel of the mask
Sobel
• Convolution masks:

      Gx = -1  0  1       Gy =  1   2   1
           -2  0  2             0   0   0
           -1  0  1            -1  -2  -1

• Same as Prewitt except that the centre row/column is weighted by 2, which
  gives better noise smoothing
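A minimal sketch of Sobel edge detection with the masks above; the magnitude threshold is an illustrative value.

```python
# Hedged sketch of Sobel edge detection: gradient magnitude and direction, then a threshold.
import numpy as np
from scipy.signal import convolve2d

SOBEL_GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_GY = np.array([[ 1, 2, 1], [ 0, 0, 0], [-1, -2, -1]])

def sobel_edges(image, T=100):
    gx = convolve2d(image.astype(float), SOBEL_GX, mode="same")
    gy = convolve2d(image.astype(float), SOBEL_GY, mode="same")
    magnitude = np.hypot(gx, gy)        # |G| = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)      # gradient direction (normal to the edge)
    return magnitude > T, direction
```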
Laplacian Filter Mask
• The Laplacian plays a secondary role in edge
detection due to the following shortcomings:
• is unacceptably sensitive to noise (second derivative),
• produces double edges,
• unable to detect edge direction.
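For comparison, a minimal sketch of the Laplacian response using the common 4-neighbour mask (one of several standard Laplacian masks); edges would then be located at zero crossings of this response.

```python
# Hedged sketch: Laplacian response with the common 4-neighbour mask.
import numpy as np
from scipy.signal import convolve2d

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])

def laplacian_response(image):
    return convolve2d(image.astype(float), LAPLACIAN, mode="same")
```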
Prewitt vs. Sobel
• Prewitt: simpler to implement; gives isotropic results only for vertical and
  horizontal edges.
• Sobel: better noise suppression (smoothing); gives isotropic results only for
  vertical and horizontal edges; preferred because noise suppression is an
  important issue when dealing with derivatives.

The coefficients of each mask sum to zero, which gives
a response of zero over areas of constant intensity.
Boundary representation
• Boundary following
• Chain code (see the sketch below)
• Polygon
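As a small illustration, a hedged sketch of an 8-directional (Freeman) chain code for a boundary that has already been ordered by a boundary-following step; the direction numbering used here is one common convention.

```python
# Hedged sketch of an 8-directional chain code; assumes the boundary is an ordered list of
# 8-connected (x, y) points. Direction convention: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    return [DIRS[(x1 - x0, y1 - y0)]
            for (x0, y0), (x1, y1) in zip(boundary, boundary[1:])]

# Example: a unit square traversed counter-clockwise.
print(chain_code([(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]))   # -> [0, 2, 4, 6]
```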
