Unit 3 (Slides)

DIGITAL IMAGE PROCESSING - 2

Lecture 24: Image Segmentation

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Unit 3: Image Segmentation

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2
Last Session

• Grayscale Morphology Operations


• Erosion
• Dilation
• Opening
• Closing

3
DIGITAL IMAGE PROCESSING - 2
Image Segmentation

• Image segmentation is an essential preliminary step in most automatic pictorial pattern recognition and scene analysis algorithms.

• Segmentation subdivides an image into its constituent regions or objects.

• Image segmentation algorithms are generally based on one of two basic properties of intensity values: similarity and discontinuity.
4
DIGITAL IMAGE PROCESSING - 2
Image Segmentation

• Segmentation techniques are either region-based or edge-based.

• Region-based techniques rely on common patterns in


intensity values within a cluster of neighboring pixels.
• The cluster is referred to as the region, and the goal of
the segmentation algorithm is to group regions
according to their anatomical or functional roles

• Edge-based techniques rely on discontinuities in image


values between distinct regions, and the goal of the
segmentation algorithm is to accurately demarcate the
boundary separating these regions
5
DIGITAL IMAGE PROCESSING - 2
Image Segmentation

To segment the image, we assign one level (say, white) to the pixels on or inside the boundary, and another level (e.g., black) to all points exterior to the boundary.

[Figures: image of a constant intensity region; boundary based on intensity discontinuities; result of segmentation. Image of a texture region; result of intensity-discontinuity computations based on region properties; result of segmentation.]
6
DIGITAL IMAGE PROCESSING - 2
Image Segmentation

• Two types
• Local Segmentation
• Deals with sub-images, which are small windows of the whole image
• The number of pixels available is much lower than in global segmentation
• Global Segmentation
• Concerned with segmenting the whole image
• Deals with segments consisting of a relatively large number of pixels, hence the estimated parameter values are more robust

7
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques


• Three Approaches
• Points
• Lines
• Edges

8
DIGITAL IMAGE PROCESSING - 2
Edge-Based Segmentation
• Deals with detection of sharp, local changes in intensity
• Three basic types of gray-level discontinuities in a digital image
are:
• Isolated Points
• Foreground (background) pixel surrounded by background (foreground) pixels
• Lines
• Thin edge segment in which intensity of the background on
either side of the line is either much higher or much lower
than intensity of the line pixels
• Edges
• Edges: Set of connected edge pixels
• Edge Pixels are pixels at which intensity of image changes
abruptly
9
DIGITAL IMAGE PROCESSING - 2
Isolated Point, Line and Edge

10
DIGITAL IMAGE PROCESSING - 2
Basic Concept

• The most common way to look for discontinuities is to run a


mask through the image.
• The response of the filter at the center point of the kernel is

  R = w1 z1 + w2 z2 + ... + w9 z9 = sum of wk zk for k = 1, ..., 9

  w1 w2 w3
  w4 w5 w6
  w7 w8 w9

  Point Detection Mask

11
DIGITAL IMAGE PROCESSING - 2
Next Session

• Segmentation
• Edge Approach

12
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

13
DIGITAL IMAGE PROCESSING - 2

Lecture 25: Image Segmentation

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Unit 3: Image Segmentation

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2
Last Session

• Image segmentation
• Types of segmentation
• Local
• Global
• Types of segmentation techniques
• Edge approach
• Boundary approach
• Region approach

3
DIGITAL IMAGE PROCESSING - 2
Today’s Session

• Types of segmentation techniques


• Edge approach
• Boundary approach
• Region approach

4
DIGITAL IMAGE PROCESSING - 2
Image Segmentation

• Two types
• Local Segmentation
• Deals with sub-images, which are small windows of the whole image

• The number of pixels available is much lower than in global segmentation

• Global Segmentation
• Concerned with segmenting the whole image

• Deals with segments consisting of a relatively large number of pixels

5
DIGITAL IMAGE PROCESSING - 2
Image Segmentation Techniques

• Segmentation techniques are either region-based or edge-based.

• Region-based techniques rely on common patterns in


intensity values within a cluster of neighboring pixels.
• The cluster is referred to as the region, and the goal of
the segmentation algorithm is to group regions
according to their anatomical or functional roles

• Edge-based techniques rely on discontinuities in image


values between distinct regions, and the goal of the
segmentation algorithm is to accurately demarcate the
boundary separating these regions
6
DIGITAL IMAGE PROCESSING - 2
Isolated Point, Line and Edge

7
DIGITAL IMAGE PROCESSING - 2
Derivatives of digital functions/images

• Derivatives of a digital function are defined in terms of finite


differences.
• Approximation used for First derivative:
• must be zero in areas of constant intensity
• must be nonzero at the onset of an intensity step or ramp; and
• must be nonzero at points along an intensity ramp
• Approximation used for second derivatives
• must be zero in areas of constant intensity
• must be nonzero at the onset and end of an intensity step or ramp
• must be zero along intensity ramps
• Because we are dealing with digital quantities whose values
are finite, the maximum possible intensity change is also finite,
and the shortest distance over which a change can occur is
between adjacent pixels 8
DIGITAL IMAGE PROCESSING - 2
First Order Derivatives of digital functions/images
• Using the Taylor series expansion, we obtain approximations to the first-order derivative.
• The first-order derivative at an arbitrary point x of a one-dimensional function f(x) is obtained by expanding the function into a Taylor series about x:

  f(x + Δx) = Σ (n = 0 to ∞) [(Δx)^n / n!] ∂^n f/∂x^n, evaluated at x

• where Δx is the separation between samples of f.
• Here this separation is measured in pixel units.
• Thus, following the convention in the book, Δx = -1 for the sample preceding x and Δx = 1 for the sample following x.
9
DIGITAL IMAGE PROCESSING - 2
First Order Derivatives of digital functions/images

• The first-order derivative at an arbitrary point x of a one-dimensional function f(x) is given by

  ∂f/∂x = f(x + 1) - f(x)   for Δx = 1 (forward difference)

  ∂f/∂x = f(x) - f(x - 1)   for Δx = -1 (backward difference)

10
DIGITAL IMAGE PROCESSING - 2
Derivatives using approximations to Taylor Series

• We compute intensity differences using just a few terms of the


Taylor series.
• For first-order derivatives we use only the linear terms, and we can form differences in one of three ways:
• Forward difference (keeping only linear terms):

  ∂f/∂x = f(x + 1) - f(x)

• Backward difference:

  ∂f/∂x = f(x) - f(x - 1)

• Central difference (obtained from the Taylor series by subtracting the backward-difference expansion from the forward-difference expansion):

  ∂f/∂x = [f(x + 1) - f(x - 1)] / 2
11
DIGITAL IMAGE PROCESSING - 2
Derivatives using approximations to Taylor Series

• In general, the more terms we use from the Taylor series to


represent a derivative, the more accurate the approximation
will be
• It turns out that central differences have a lower error for the same number of points.
• For this reason, derivatives are usually expressed as central differences.
• The second-order derivative based on a central difference, ∂²f(x)/∂x², is obtained by adding the backward- and forward-difference expansions (neglecting higher-order terms):

  ∂²f/∂x² = f(x + 1) - 2f(x) + f(x - 1)

  (a small numerical sketch follows below)
12
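To make the difference formulas above concrete, here is a minimal NumPy sketch; the 1-D intensity profile is invented for illustration:

```python
import numpy as np

# Invented 1-D intensity profile: flat, ramp down, flat, then a step up.
f = np.array([6., 6., 6., 5., 4., 3., 2., 1., 1., 1., 6., 6.])

forward = f[1:] - f[:-1]            # f(x+1) - f(x), attributed to x
backward = f[1:] - f[:-1]           # f(x) - f(x-1), attributed to x+1
central = (f[2:] - f[:-2]) / 2.0    # [f(x+1) - f(x-1)] / 2

# Second-order central difference: f(x+1) - 2 f(x) + f(x-1).
second = f[2:] - 2.0 * f[1:-1] + f[:-2]

print(central)  # zero on the flats, -1 in the ramp interior
print(second)   # nonzero only at the onset/end of the ramp and at the step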
DIGITAL IMAGE PROCESSING - 2
Derivatives in digital image

• Determine the first- and second-order derivatives of the following image.

[Figure: horizontal intensity profile that includes the isolated point indicated by the arrow; subsampled profile.]
13
DIGITAL IMAGE PROCESSING - 2
Derivatives in digital image

• The transition in the ramp spans four pixels, the noise point is a single pixel, the line is three pixels thick, and the transition of the step edge takes place between adjacent pixels.
• The number of intensity levels was limited to eight for simplicity. 14
DIGITAL IMAGE PROCESSING - 2
Properties of first and second Derivatives in digital image

• Initially, the first-order derivative is nonzero at the onset and along


the entire intensity ramp, while the second-order derivative is
nonzero only at the onset and end of the ramp.
• Because the edges of digital images resemble this type of transition,
we conclude that first-order derivatives produce “thick” edges, and
second-order derivatives much thinner ones.
15
DIGITAL IMAGE PROCESSING - 2
Properties of first and second Derivatives in digital image

• Next we encounter the isolated noise point.


• Here, the magnitude of the response at the point is much stronger for the
second than for the first-order derivative.
• A second-order derivative is much more aggressive than a first-order
derivative in enhancing sharp changes.
• Thus, we can expect second-order derivatives to enhance fine detail
(including noise) much more than first-order derivatives. 16
DIGITAL IMAGE PROCESSING - 2
Properties of first and second Derivatives in digital image

• The line in this example is rather thin, so it too is fine detail, and we see
again that the second derivative has a larger magnitude.
• At both the ramp and step edges the second derivative has opposite signs
(negative to positive or positive to negative) as it transitions into and out
of an edge. This “double-edge” effect is an important characteristic that
can be used to locate edges
• As we move into the edge, the sign of the second derivative is also used to determine whether an edge is a transition from light to dark (negative second derivative) or from dark to light (positive second derivative). 17
DIGITAL IMAGE PROCESSING - 2
Summary of first and second Derivatives in digital image

• First-order derivatives generally produce thicker edges

• Second-order derivatives have a stronger response to


fine detail, such as thin lines, isolated points, and noise.

• Second-order derivatives produce a double-edge


response at ramp and step transitions in intensity

• The sign of the second derivative can be used to


determine whether a transition into an edge is from
light to dark or dark to light.

18
DIGITAL IMAGE PROCESSING - 2
Basic Concept

• The approach of choice for computing first and second


derivatives at every pixel location in an image is to use spatial
convolution.
• For the 3 × 3 filter kernel shown, the procedure is to compute
the sum of products of the kernel coefficients with the
intensity values in the region encompassed by the kernel
• That is, the response of the filter at the center point of the kernel is

  R = w1 z1 + w2 z2 + ... + w9 z9 = sum of wk zk for k = 1, ..., 9

  where zk is the intensity of the pixel whose spatial location corresponds to the location of the kth kernel coefficient. 19
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques


• Three Approaches

20
DIGITAL IMAGE PROCESSING - 2
Edge-Based Segmentation
• Deals with detection of sharp, local changes in intensity
• Three basic types of gray-level discontinuities in a digital image
are:
• Isolated Points
• Foreground (background) pixel surrounded by background (foreground) pixels
• Lines
• Thin edge segment in which intensity of the background on
either side of the line is either much higher or much lower
than intensity of the line pixels
• Edges
• Edges: Set of connected edge pixels
• Edge Pixels are pixels at which intensity of image changes
abruptly
21
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point

• Point Detection:
• Point detection should be based on the second derivative, using the Laplacian:

  ∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²

  where the partial derivatives are computed using the second-order finite differences

  ∂²f/∂x² = f(x + 1, y) + f(x - 1, y) - 2f(x, y)
  ∂²f/∂y² = f(x, y + 1) + f(x, y - 1) - 2f(x, y)

• The Laplacian is then

  ∇²f(x, y) = f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)

• This expression, extended to include the diagonal neighbors, can be implemented using the Laplacian kernel

  -1 -1 -1
  -1  8 -1
  -1 -1 -1
22
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point

• A point has been detected at a location (x, y) on which the


kernel is centered if the absolute value of the response of the
filter at that point exceeds a specified threshold.
• Such points are labeled 1 and all others are labeled 0 in the
output image, thus producing a binary image
• Use the expression

  g(x, y) = 1 if |Z(x, y)| > T, and g(x, y) = 0 otherwise

  where g(x, y) is the output image, T is a nonnegative threshold, and Z(x, y) is the filter response given above.

23
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point

• This formulation simply measures the weighted differences


between a pixel and its 8-neighbors

• An isolated point will be quite different from its surroundings, and hence is easily detectable by this mask (a sketch follows below).

• Note that, as usual for a derivative kernel, the coefficients sum


to zero, indicating that the filter response will be zero in areas of
constant intensity

24
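A minimal sketch of this point detector, assuming NumPy and SciPy are available; the function name and threshold value are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

def detect_points(image, T):
    """Isolated-point detection with the 8-neighbor Laplacian kernel."""
    kernel = np.array([[-1., -1., -1.],
                       [-1.,  8., -1.],
                       [-1., -1., -1.]])
    Z = convolve(image.astype(float), kernel, mode='reflect')
    # g(x, y) = 1 where |Z(x, y)| > T, and 0 elsewhere (binary output image).
    return (np.abs(Z) > T).astype(np.uint8)

# Example: a single bright pixel on a flat background is the only detection.
img = np.zeros((9, 9)); img[4, 4] = 100
print(detect_points(img, T=400).sum())  # -> 1
```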
DIGITAL IMAGE PROCESSING - 2
Next Session

• Image Segmentation Cont..


• Point detection
• Line detection

25
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

26
DIGITAL IMAGE PROCESSING - 2

U3 Lec 26-27: Image Segmentation

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques


• Three Approaches

3
DIGITAL IMAGE PROCESSING - 2
Edge-Based Segmentation
• Deals with detection of sharp, local changes in intensity
• Three basic types of gray-level discontinuities in a digital image
are:
• Isolated Points
• Foreground (background) pixel surrounded by background (foreground) pixels
• Lines
• Thin edge segment in which intensity of the background on
either side of the line is either much higher or much lower
than intensity of the line pixels
• Edges
• Edges: Set of connected edge pixels
• Edge Pixels are pixels at which intensity of image changes
abruptly
4
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point

• Point Detection:
• Point detection is based on the second derivative, using the Laplacian:

  ∇²f(x, y) = ∂²f/∂x² + ∂²f/∂y²

  where the partial derivatives are computed using the second-order finite differences

  ∂²f/∂x² = f(x + 1, y) + f(x - 1, y) - 2f(x, y)
  ∂²f/∂y² = f(x, y + 1) + f(x, y - 1) - 2f(x, y)

• The Laplacian is then

  ∇²f(x, y) = f(x + 1, y) + f(x - 1, y) + f(x, y + 1) + f(x, y - 1) - 4f(x, y)

• This expression, extended to include the diagonal neighbors, can be implemented using the Laplacian kernel

  -1 -1 -1
  -1  8 -1
  -1 -1 -1
5
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point

• A point has been detected at a location (x, y) on which the


kernel is centered if the absolute value of the response of the
filter at that point exceeds a specified threshold.
• Such points are labeled 1 and all others are labeled 0 in the
output image, thus producing a binary image
• Use the expression

  g(x, y) = 1 if |Z(x, y)| > T, and g(x, y) = 0 otherwise

  where g(x, y) is the output image, T is a nonnegative threshold, and Z(x, y) is the filter response given above.

6
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point

• This formulation simply measures the weighted differences


between a pixel and its 8-neighbors

• An isolated point will be quite different from its surroundings, and hence is easily detectable by this mask.

• Note that, as usual for a derivative kernel, the coefficients sum


to zero, indicating that the filter response will be zero in areas of
constant intensity

7
DIGITAL IMAGE PROCESSING - 2
Point Detection Example

  -1 -1 -1
  -1  8 -1
  -1 -1 -1

  Laplacian kernel used for point detection

[Figures: X-ray image of a turbine blade with a porosity manifested by a single black pixel; result of convolving the kernel with the image; result of thresholding, where a single point is detected (shown enlarged at the tip of the arrow).]
8
DIGITAL IMAGE PROCESSING - 2
Edge-Based Segmentation

• Deals with detection of sharp, local changes in intensity


• Three basic types of gray-level discontinuities in a digital image are:
• Isolated Points
• Foreground (background) pixel surrounded by background (foreground) pixels
• Lines
• Thin edge segment in which intensity of the background on either
side of the line is either much higher or much lower than intensity
of the line pixels
• Edges
• Edges: Set of connected edge pixels
• Edge Pixels are pixels at which intensity of image changes abruptly

9
DIGITAL IMAGE PROCESSING - 2
Coordinate Convention in Digital Image

Coordinate convention used to represent


digital images. Because coordinate values
are integers, there is a one-to-one
correspondence between x and y and the
rows (r) and columns (c) of a matrix. 10
DIGITAL IMAGE PROCESSING - 2
Line Detection

• Consider the kernels shown below. Detection angles are with respect to the axis system, where positive angles are measured counterclockwise with respect to the (vertical) x-axis.

  Horizontal      +45°           Vertical       -45°
  -1 -1 -1        2 -1 -1        -1  2 -1       -1 -1  2
   2  2  2       -1  2 -1        -1  2 -1       -1  2 -1
  -1 -1 -1       -1 -1  2         2 -1 -1        2 -1 -1

  Line detection kernels.

• The first kernel responds more strongly to lines oriented horizontally.
• The maximum response results when the line passes through the middle row of the kernel.
• The second kernel responds best to lines oriented at +45°, and so on.
11
DIGITAL IMAGE PROCESSING - 2
Line Detection Procedure

• Let Z1, Z2, Z3, and Z4 be the responses to the four kernels, from left to right, when each kernel is run individually through the image.
• If at a certain point Zk > Zj for all j ≠ k, that point is said to be more likely associated with a line in the direction of kernel k.
• For example, if at a point in the image Z1 > Zj for j = 2, 3, 4, that point is said to be more likely associated with a horizontal line.

12
DIGITAL IMAGE PROCESSING - 2
Line Detection Procedure

• If we are interested in detecting all the lines in an image in the


direction defined by a given kernel, we simply run the kernel
through the image and threshold the absolute value of the
result

• The nonzero points remaining after thresholding are the strongest responses, which, for lines one pixel thick, correspond closest to the direction defined by the kernel (a sketch follows below).

13
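A sketch of this procedure in NumPy/SciPy; the kernel values follow the standard line-detection kernels shown above, and the function names are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

KERNELS = {
    'horizontal': np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),
    '+45':        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),
    'vertical':   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),
    '-45':        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),
}

def strongest_direction(image):
    """Return, per pixel, the index of the kernel with the largest response."""
    f = image.astype(float)
    responses = np.stack([convolve(f, k) for k in KERNELS.values()])
    return responses.argmax(axis=0)   # Z_k > Z_j for all j != k

def lines_in_direction(image, direction, T):
    """Threshold |response| of one kernel to keep only the strongest lines."""
    R = convolve(image.astype(float), KERNELS[direction])
    return (np.abs(R) > T).astype(np.uint8)
```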
DIGITAL IMAGE PROCESSING - 2
Line Detection Example

• Determine lines that are one pixel thick and oriented at +45°.
• To get a one-pixel-thick line we use a threshold equal to (maximum response value - 1), i.e., 254.
14
DIGITAL IMAGE PROCESSING - 2
Line Detection Example

• We use the +45° line-detection kernel.

[Figure: the result of filtering the image with this kernel; shades darker than the gray background correspond to negative values.]
15

DIGITAL IMAGE PROCESSING - 2
Line Detection Example

[Figure: the result of filtering the image with this kernel.]
• There are two principal segments in the image oriented in the +45° direction, one at the top left and one at the bottom right.
• The straight line segment in the third figure is brighter than the segment in the second figure, because the line segment at the bottom right of the given figure is one pixel thick, while the one at the top left is not.
• The kernel is “tuned” to detect one-pixel-thick lines in the +45° direction, so we expect its response to be stronger when such lines are detected. 16
DIGITAL IMAGE PROCESSING - 2
Line Detection Example

• Next we apply the threshold T = 254 to keep only the strongest responses.

[Figures: the result of filtering the image with this kernel; the image with all negative values set to zero; all points (in white) whose values satisfy g > T, where g is the image in the second figure and T = 254.]

17
DIGITAL IMAGE PROCESSING - 2
Observations

• The isolated points in the figure are points that also had similarly strong responses to the kernel.
• In the original image, these points and their immediate neighbors are oriented in such a way that the kernel produced a maximum response at those locations.
• These isolated points can be detected using the kernel for point detection and then deleted, or they can be deleted using morphological operators.

18
DIGITAL IMAGE PROCESSING - 2
Edge-Based Segmentation

• Deals with detection of sharp, local changes in intensity


• Three basic types of gray-level discontinuities in a digital image are:
• Isolated Points
• Foreground (background) pixel surrounded by background (foreground) pixels
• Lines
• Thin edge segment in which intensity of the background on either
side of the line is either much higher or much lower than intensity
of the line pixels
• Edges
• Edges: Set of connected edge pixels
• Edge Pixels are pixels at which intensity of image changes abruptly

19
DIGITAL IMAGE PROCESSING - 2
Edge Detection

• Edge Detection
• We use edge models
• Ideally, edges should be one pixel thin
• In practice, they are blurred and noisy
• Edge models are classified according to their intensity
profiles.

A step edge is characterized by a transition


between two intensity levels occurring
ideally over the distance of one pixel
Digital step edges are used frequently as edge
Step Edge models in algorithm development

20
DIGITAL IMAGE PROCESSING - 2
Edge Models

Edges that are blurred and noisy, with the degree of


blurring determined principally by limitations in the
focusing mechanism (e.g., lenses in the case of
optical images), and the noise level determined
principally by the electronic components of the
imaging system.
Ramp Edge
• The slope of the ramp is inversely proportional to the
degree to which the edge is blurred

In this model, we no longer have a single “edge


point” along the profile.
Instead, an edge point now is any point contained in
the ramp, and an edge segment would then be a set
of such points that are connected.
21
DIGITAL IMAGE PROCESSING - 2
Edge Models

Roof edges are models of lines through a region, with the


base (width) of the edge being determined by the
thickness and sharpness of the line.
• In the limit, when its base is one pixel wide, a roof
edge is nothing more than a one-pixel-thick line
running through a region in an image.
Roof Edge

Roof edges arise, for example, in range imaging, when thin


objects (such as pipes) are closer to the sensor than the
background (such as walls).
The pipes appear brighter and thus create an image similar to
the model shown.
Other areas in which roof edges appear routinely are in the
digitization of line drawings and also in satellite images, where
thin features, such as roads, can be modeled by this type of
22
edge.
DIGITAL IMAGE PROCESSING - 2
Edge Detection

The profiles are from dark to light, in the areas


enclosed by the small circles.
The ramp and step profiles span 9 pixels and 2
pixels, respectively. The base of the roof edge is 3
pixels.

A 1508 x 1970 image showing (zoomed) actual ramp


(bottom, left), step (top, right), and roof edge profiles.

23
DIGITAL IMAGE PROCESSING - 2
Edge Detection

• Need to develop algorithms to detect these edges


• The performance of these algorithms will depend on the
differences between actual edges and the models used in
developing the algorithms
• Consider the following image and its first and second
derivatives calculated at the edge

24
DIGITAL IMAGE PROCESSING - 2
Edge Detection

Detail near the edge, showing a horizontal intensity 25


profile, and its first and second derivatives.
DIGITAL IMAGE PROCESSING - 2
Edge Detection

• Edge Detection
• The magnitude of the first derivative can be used to detect the presence of an edge.
• The slope of the ramp is inversely proportional to the degree of blurring.
• The thickness of the edge is determined by the length of the ramp.
• Sign change of the second derivative.
• Observations:
• Second derivative produces two values for an edge
(undesirable).
• Its zero crossings may be used to locate the centers of thick
edges.

26
DIGITAL IMAGE PROCESSING - 2
Edge Profiles

• Profiles with embedded


noise

8-bit images with values in the range [0, 255], and


intensity profiles of a ramp edge corrupted by Gaussian
noise of zero mean and standard deviations of 0.0, 0.1,
1.0, and 10.0 intensity levels, respectively.
27
DIGITAL IMAGE PROCESSING - 2
Edge Profiles

• Profiles with embedded noise

8-bit images with values in the range [0, 255], and


First-derivative images and intensity profiles of images
corrupted by Gaussian noise of zero mean and standard
deviations of 0.0, 0.1, 1.0, and 10.0 intensity levels,
respectively.

28
DIGITAL IMAGE PROCESSING - 2
Edge Profiles

• Profiles with embedded noise


• Observations:
• Second derivative is more
sensitive to noise.
• Its zero crossings may be
used to locate the centers
of thick edges.

8-bit images with values in the range [0, 255], and


Second-derivative images and intensity profiles of images
corrupted by Gaussian noise of zero mean and standard
deviations of 0.0, 0.1, 1.0, and 10.0 intensity levels,
respectively. 29
DIGITAL IMAGE PROCESSING - 2
Basic Edge Detection

• Detecting changes in intensity for the purpose of finding edges


can be accomplished using first- or second-order derivatives.
Image Gradient and its Properties
Gradient operation
• The tool of choice for finding edge strength and direction at an arbitrary location (x, y) of an image, f, is the gradient, denoted by ∇f and defined as the vector

  ∇f = [gx, gy]ᵀ = [∂f/∂x, ∂f/∂y]ᵀ

• We can use this to find the magnitude and direction angle.
• This vector has the well-known property that it points in the direction of the maximum rate of change of f at (x, y).
30
DIGITAL IMAGE PROCESSING - 2
Basic Edge Detection

• The magnitude is given by:

  M(x, y) = ||∇f|| = sqrt(gx² + gy²)

• The direction of the gradient vector is given by:

  α(x, y) = tan⁻¹(gy / gx)

• Angles are measured in the counterclockwise direction with respect to the x-axis.
• α(x, y) is also an image of the same size as f, created by the elementwise division of gy by gx over all applicable values of x and y (a sketch follows below). 31
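A minimal sketch of these computations, using SciPy's built-in Sobel kernels to obtain gx and gy; the axis assignment is an assumption and may need swapping for the book's coordinate convention:

```python
import numpy as np
from scipy.ndimage import sobel

def gradient(image):
    f = image.astype(float)
    gx = sobel(f, axis=0)                    # derivative along rows
    gy = sobel(f, axis=1)                    # derivative along columns
    magnitude = np.hypot(gx, gy)             # elementwise sqrt(gx**2 + gy**2)
    angle = np.degrees(np.arctan2(gy, gx))   # counterclockwise from the x-axis
    return magnitude, angle
```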
DIGITAL IMAGE PROCESSING - 2
Example

• Determine the derivatives in the x and y directions using a 3 × 3 neighborhood centered at a point:

  ∂f/∂x = -2 and ∂f/∂y = 2
32
DIGITAL IMAGE PROCESSING - 2
Example

• Consider an image containing straight edge segment:

• Objective is to find strength and direction of edge at the


highlighted point
• Assume shaded pixel is 0 and white is 1

33
DIGITAL IMAGE PROCESSING - 2
Example

• Then gx = ∂f/∂x = -2 and gy = ∂f/∂y = 2.
• From this we get

  ||∇f|| = sqrt(gx² + gy²) = sqrt(8) = 2√2

34
DIGITAL IMAGE PROCESSING - 2
Detection of Edge Direction

• The direction of the gradient vector at the same point is

  α = tan⁻¹(gy / gx) = -45°

• This is the same as 135° measured in the positive (counterclockwise) direction with respect to the x-axis.
35
DIGITAL IMAGE PROCESSING - 2
Detection of Edge Direction

• The direction of the edge at a point is orthogonal to the gradient vector at that point.
• Hence the direction angle of the edge is

  α - 90° = 135° - 90° = 45°

• All edge points in this figure have the same gradient, so the entire edge segment has the same direction.
• The gradient vector is sometimes called the edge normal.
36
DIGITAL IMAGE PROCESSING - 2
Edge Linking and Boundary Detection

Local Processing
The two principal properties used for establishing similarity of
edge pixels in this kind of analysis are :
1. The strength of the response of the gradient operator used to produce the edge pixel

2. The direction of the gradient vector

37
DIGITAL IMAGE PROCESSING - 2
Edge Linking and Boundary Detection

• An edge pixel with coordinates (x0, y0) in the neighborhood of (x, y) is similar in magnitude to the pixel at (x, y) if

  |M(x0, y0) - M(x, y)| ≤ E

  where E is a positive threshold.

• An edge pixel with coordinates (x0, y0) in the neighborhood of (x, y) has an angle similar to the pixel at (x, y) if

  |α(x0, y0) - α(x, y)| ≤ A

  where A is a positive angle threshold.

• The direction of the edge at (x, y) is perpendicular to the direction of the gradient vector at that point.
38
DIGITAL IMAGE PROCESSING - 2
Edge Linking and Boundary Detection

• A pixel with coordinates (x0,y0) is considered to be linked to the


pixel at (x,y) if both magnitude and angle criteria are satisfied

• This process is repeated for every edge pixel

• This is a very computationally expensive process

39
DIGITAL IMAGE PROCESSING - 2
Alternate Method

• Simple procedure for real time applications is:


• Compute gradient magnitude and angle arrays of the input
image f(x,y)
• Form a binary image g(x, y) whose value at any point (x, y) is given by

  g(x, y) = 1 if M(x, y) > TM AND α(x, y) ∈ [A - TA, A + TA], and 0 otherwise

  where TM is a threshold, A is a specified angle direction, and ±TA defines a band of acceptable directions about A.

40
DIGITAL IMAGE PROCESSING - 2

• Scan the rows of g and fill (set to 1) all gaps (sets of zeros) in each row that do not exceed a specified length, L.
• A gap is bounded at both ends by one or more 1's.
• To detect gaps in any other direction θ, rotate g by this angle, apply the horizontal scanning procedure, and rotate the result back by -θ.
• This algorithm is more practical and gives good results (a sketch follows below).

41
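A sketch of this real-time linking procedure; the function name and the gap-filling loop are illustrative, and only the horizontal scan is shown:

```python
import numpy as np

def link_edges(magnitude, angle, TM, A, TA, L):
    """Form the binary image g and bridge short horizontal gaps."""
    # g(x, y) = 1 if M(x, y) > TM and the angle lies within A +/- TA.
    g = ((magnitude > TM) & (np.abs(angle - A) <= TA)).astype(np.uint8)
    # Fill runs of zeros no longer than L that are bounded by 1s in each row.
    for row in g:
        ones = np.flatnonzero(row)
        for a, b in zip(ones[:-1], ones[1:]):
            gap = b - a - 1
            if 0 < gap <= L:
                row[a:b] = 1
    return g
```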
DIGITAL IMAGE PROCESSING - 2
Edge Linking and Boundary Detection: Example

• Find rectangular shapes similar to license plate

42
DIGITAL IMAGE PROCESSING - 2
Steps Involved

• Detect strong horizontal and vertical edges


• Find gradients
• Connect edge points
• Check horizontal-vertical proportion

43
DIGITAL IMAGE PROCESSING - 2
Edge Linking and Boundary Detection

44
DIGITAL IMAGE PROCESSING - 2
Next Session

• Edge linking Cont..


• Thresholding

45
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

46
DIGITAL IMAGE PROCESSING - 2

U3 Lec28: Image Segmentation


Dr. Shruthi M.L.J.
Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2
Last Session

•Segmentation
•Line and edge detection
•Edge Linking

3
DIGITAL IMAGE PROCESSING - 2
This Session

• Segmentation
• Gradient Operators
• Thresholding

4
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• Obtaining the gradient of an image requires computing the


partial derivatives ∂f /∂x and ∂f/∂y at every pixel location in
the image.
• For the gradient, we typically use a forward or centered finite difference.
• Using forward differences we obtain

  gx = ∂f/∂x = f(x + 1, y) - f(x, y), implemented with the 1-D kernel [-1 1]ᵀ

  gy = ∂f/∂y = f(x, y + 1) - f(x, y), implemented with the 1-D kernel [-1 1]
5
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• When diagonal edge direction is of interest, we need 2-D


kernels:
• Roberts cross-gradient operators are based on implementing the diagonal differences

  gx = z9 - z5 and gy = z8 - z6

  implemented with the 2 × 2 kernels

  -1 0      0 -1
   0 1      1  0
6
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• Kernels of size 2 × 2 are simple conceptually, but they are not


as useful for computing edge direction as kernels that are
symmetric about their centers
• The smallest of which are of size 3 × 3.
• These kernels take into account the nature of the data on
opposite sides of the center point, and thus carry more
information regarding the direction of an edge
• The simplest digital approximations to the partial derivatives
using kernels of size 3 × 3 are given by

  gx = ∂f/∂x = (z7 + z8 + z9) - (z1 + z2 + z3); approximates the derivative in the x-direction

  gy = ∂f/∂y = (z3 + z6 + z9) - (z1 + z4 + z7); approximates the derivative in the y-direction

7
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• The kernels used to implement these equations are

  -1 -1 -1      -1 0 1
   0  0  0      -1 0 1
   1  1  1      -1 0 1

Prewitt operators [1970]
8
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• A slight variation of the preceding two equations uses a weight of 2 in the center coefficient:

  gx = (z7 + 2z8 + z9) - (z1 + 2z2 + z3)
  gy = (z3 + 2z6 + z9) - (z1 + 2z4 + z7)

  -1 -2 -1      -1 0 1
   0  0  0      -2 0 2
   1  2  1      -1 0 1

Sobel operators [1970]

9
DIGITAL IMAGE PROCESSING - 2
Comparison of Operators

• The Prewitt kernels are simpler to implement than the Sobel


kernels

• Sobel kernels have better noise-suppression (smoothing) characteristics, and hence are preferred.

• Noise suppression is an important issue when dealing with


derivatives

10
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• Any of the pairs of kernels are convolved with an image to


obtain the gradient components gx and gy at every pixel
location.

• These two partial derivative arrays are then used to estimate


edge strength and direction.

• The 3 × 3 kernels exhibit their strongest response


predominantly for vertical and horizontal edges.

11
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• Obtaining the magnitude of the gradient requires the


computations of squares and square roots

• Not always desirable because of the computational burden


required by squares and square roots

• An approach used frequently is to approximate the magnitude of the gradient by absolute values:

  M(x, y) ≈ |gx| + |gy|

  (a sketch combining the kernels above with this approximation follows below)

12
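The Prewitt and Sobel kernels, together with the absolute-value approximation, in a short NumPy/SciPy sketch; the `edge_map` helper is illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

PREWITT_X = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], float)
PREWITT_Y = PREWITT_X.T
SOBEL_X = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float)
SOBEL_Y = SOBEL_X.T

def edge_map(image, kx=SOBEL_X, ky=SOBEL_Y):
    """Gradient magnitude approximated by |gx| + |gy|."""
    f = image.astype(float)
    gx = convolve(f, kx)   # response to the x-derivative kernel
    gy = convolve(f, ky)   # response to the y-derivative kernel
    return np.abs(gx) + np.abs(gy)
```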
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

• The Kirsch compass kernels (Kirsch [1971]) are designed to detect


edge magnitude and direction (angle) in all eight compass directions

• Instead of computing the magnitude using the equations, Kirsch’s


approach was to determine the edge magnitude by convolving an
image with all eight kernels and assign the edge magnitude at a point
as the response of the kernel that gave strongest convolution value at
that point.

• The edge angle at that point is then the direction associated with
that kernel

13
DIGITAL IMAGE PROCESSING - 2
Gradient Operators

If the strongest value at a point in the image


resulted from using the north (N) kernel, the edge
magnitude at that point would be assigned the
response of that kernel, and the direction would
be 0°

Kirsch compass kernels.


The edge direction of strongest response of each kernel is labeled
below it. 14
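A sketch of Kirsch's scheme: build the eight compass kernels by rotating the ring of outer coefficients, then assign each pixel the maximum response. The ordering of the kernels, and hence the `45 * index` direction labeling, is an assumption for illustration:

```python
import numpy as np
from scipy.ndimage import convolve

# Outer ring of a 3x3 kernel, walked clockwise from the top-left corner.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
BASE = [5, 5, 5, -3, -3, -3, -3, -3]   # assumed north kernel: 5s on top row

def kirsch_kernels():
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(RING, np.roll(BASE, shift)):
            k[r, c] = v
        kernels.append(k)
    return kernels

def kirsch_edges(image):
    f = image.astype(float)
    responses = np.stack([convolve(f, k) for k in kirsch_kernels()])
    magnitude = responses.max(axis=0)            # strongest compass response
    direction = responses.argmax(axis=0) * 45    # assumed 45-degree steps
    return magnitude, direction
```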
DIGITAL IMAGE PROCESSING - 2
Example

[Figures: image of size 834 × 1114 pixels, with intensity values scaled to the range [0, 1]; the component of the gradient in the x-direction, obtained using the Sobel kernel; the component in the y-direction; the gradient image.]

15
DIGITAL IMAGE PROCESSING - 2
Observations

• This example illustrates the Sobel absolute-value responses of the two components of the gradient, |gx| and |gy|, as well as the gradient image formed from their sum.
• The directionality of the horizontal and vertical components of the gradient
is evident in Figs. (b) and (c)
• The roof tile, horizontal brick joints and the horizontal segments of the
windows are very strong compared to other edges.
• In contrast, Fig. (c) favors features such as the vertical components of the
façade and windows.
• It is common terminology to use the term edge map when referring to an
image whose principal features are edges, such as gradient magnitude
images

16
DIGITAL IMAGE PROCESSING - 2
Combining gradient with smoothing

• Edge detection can be made more selective by smoothing the


image prior to computing the gradient

Same sequence as in
earlier figure, but the
original image is smoothed
using a 5 x 5 averaging
kernel prior to edge
detection.

17
DIGITAL IMAGE PROCESSING - 2
Combining gradient with thresholding

• Another approach aimed at achieving the same objective is to


threshold the gradient image.

[Figures: result of thresholding the gradient of the original image; result of thresholding the gradient of the smoothed image.]

18
DIGITAL IMAGE PROCESSING - 2
Canny Edge Detector

• Canny edge detector (Canny [1986]) is more complex but its


performance is superior in general to most edge detectors
• Canny’s approach is based on three basic objectives:
1. Low error rate:
• All edges should be found, and there should be no
spurious responses.
2. Edge points should be well localized:
• The edges located must be as close as possible to the
true edges.
• That is, the distance between a point marked as an edge by the
detector and the center of the true edge should be minimum

19
DIGITAL IMAGE PROCESSING - 2
Canny Edge Detector

3. Single edge point response:


• The detector should return only one point for each true
edge point
• That is, the number of local maxima around the true edge
should be minimum.
• This means that the detector should not identify multiple edge
pixels where only a single edge point exists

• The essence of Canny’s work was in expressing the preceding


three criteria mathematically, and then attempting to find
optimal solutions to these formulations
• as achieving all three simultaneously is not possible

20
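For reference, a minimal usage sketch of Canny's detector via OpenCV's implementation; the file names, kernel size, and threshold values are illustrative:

```python
import cv2

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)       # hypothetical input file
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)               # smooth before differentiating
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)  # hysteresis thresholds
cv2.imwrite('edges.png', edges)
```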
DIGITAL IMAGE PROCESSING - 2
Next Session

• Edge linking using Hough Transform


• Thresholding

21
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

22
DIGITAL IMAGE PROCESSING - 2

U3 Lec 29: Image Segmentation


Dr. Shruthi M.L.J.
Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2
Last Session

• Segmentation
• Gradient Operators
• Thresholding

3
DIGITAL IMAGE PROCESSING - 2
This Session

• Segmentation
• Edge Linking via Hough transform

4
DIGITAL IMAGE PROCESSING - 2
Canny Edge Detector

• Canny edge detector (Canny [1986]) is more complex but its


performance is superior in general to most edge detectors
• Canny’s approach is based on three basic objectives:
1. Low error rate:
• All edges should be found, and there should be no
spurious responses.
2. Edge points should be well localized:
• The edges located must be as close as possible to the
true edges.
• That is, the distance between a point marked as an edge by the
detector and the center of the true edge should be minimum

5
DIGITAL IMAGE PROCESSING - 2
Canny Edge Detector

3. Single edge point response:


• The detector should return only one point for each true
edge point
• That is, the number of local maxima around the true edge
should be minimum.
• This means that the detector should not identify multiple edge
pixels where only a single edge point exists

• The essence of Canny’s work was in expressing the preceding


three criteria mathematically, and then attempting to find
optimal solutions to these formulations
• as achieving all three simultaneously is not possible

6
DIGITAL IMAGE PROCESSING - 2
Global Processing

• Previous methods are applicable in situations where


knowledge about pixels belonging to individual objects is
available
• Does not work in unstructured environments in which all we
have is an edge map and no knowledge about where objects of
interest might be
• Here all pixels are candidates for linking, and thus have to be accepted or eliminated based on predefined global properties
• We need to develop approach based on whether sets of pixels
lie on curves of a specified shape.
• Once detected, these curves form the edges or region
boundaries of interest.
7
DIGITAL IMAGE PROCESSING - 2
Global Processing

• Global Processing via the Hough Transform [1962]


• Determine if points lie on a curve of specified shape
• Unlike local analysis we now consider global relationships
between pixels
• Consider a point (xi, yi) and the general slope-intercept equation of a straight line, yi = a xi + b, with slope a and intercept b.
• By varying a and b we get infinitely many lines passing through (xi, yi).
8
DIGITAL IMAGE PROCESSING - 2
Global Processing using Hough Transform

• Now write the equation with respect to the ab-plane (parameter space) as b = -xi a + yi.
• For the fixed pair (xi, yi), this is the equation of a single line in parameter space: a point in the xy-plane maps to a line in the ab-plane.

9
DIGITAL IMAGE PROCESSING - 2
Global Processing using Hough Transform

• Now consider a second point (xj, yj). Its line in parameter space intersects the line associated with (xi, yi) at some point (a′, b′).
• All points on the same line in the xy-plane have parameter-space lines that intersect at the same point (a′, b′).
• In principle, the parameter-space lines corresponding to all points (xk, yk) in the xy-plane could be plotted, and the principal lines in that plane could be found by identifying points in parameter space where large numbers of parameter-space lines intersect.
10
DIGITAL IMAGE PROCESSING - 2
Computational Aspects of Hough Transform

• Subdivide the parametric space into accumulator cells


• The cell at (i,j) with accumulator values A(i, j)
corresponds to (ai,bj)
• Initialize A(i, j) = 0

• For every point (xk, yk), vary a from cell to cell and solve for b using:

  b = -xk a + yk

• If a choice of ap results in solution bq, increment the accumulator: A(p, q) = A(p, q) + 1

11
DIGITAL IMAGE PROCESSING - 2
Computational Aspects of Hough Transform

• At the end of the procedure, a value of Q in A(i,j)


corresponds to Q points in the xy-plane lying on the line

• K different increments of a generate K different values of b; for n image points, the method involves nK computations (linear complexity).

• The number of subdivisions of the ab-plane determines the accuracy of the collinearity of these points.

12
DIGITAL IMAGE PROCESSING - 2
Hough Transform using Normal Representation

• Drawback of this method:


• Slope approached infinity as line approaches vertical
• Computationally very expensive
• Hence we use the normal representation of a line:

  x cos θ + y sin θ = ρ

[Figure: (ρ, θ) parameterization of a line in the xy-plane.]
13
DIGITAL IMAGE PROCESSING - 2
Hough Transform using Normal Representation

• Instead of straight lines, there are sinusoidal curves in the


parameter space
• The number of intersecting sinusoids is accumulated and then
the value Q in the accumulator A(i,j) shows the number of
collinear points lying on a line

[Figures: sinusoidal curves in the ρθ-plane, where the point of intersection (ρ′, θ′) corresponds to the line passing through points (xi, yi) and (xj, yj) in the xy-plane; division of the ρθ-plane into accumulator cells.]
14
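A sketch of the accumulator procedure using the normal representation x cos θ + y sin θ = ρ; the resolutions and array layout are illustrative choices:

```python
import numpy as np

def hough_accumulate(edge_map, d_theta=np.pi / 180, d_rho=1.0):
    """Vote in the rho-theta plane for every pixel in a binary edge map."""
    rows, cols = edge_map.shape
    diag = int(np.ceil(np.hypot(rows, cols)))          # largest possible |rho|
    thetas = np.arange(-np.pi / 2, np.pi / 2, d_theta)
    rhos = np.arange(-diag, diag + d_rho, d_rho)
    A = np.zeros((len(rhos), len(thetas)), dtype=np.int64)
    ys, xs = np.nonzero(edge_map)
    for x, y in zip(xs, ys):
        # rho = x cos(theta) + y sin(theta) for every quantized theta
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.clip(np.round((rho + diag) / d_rho).astype(int), 0, len(rhos) - 1)
        A[idx, np.arange(len(thetas))] += 1
    return A, rhos, thetas

# Cells with the highest counts correspond to the most collinear points.
```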
DIGITAL IMAGE PROCESSING - 2
Example

• Consider an image of size M x M (M=101) with 5 labeled


points

Image of size 101 x 101 pixels,


containing five white points
(four in the corners and one
in the center)
15
DIGITAL IMAGE PROCESSING - 2
Example

• Map these points onto the ρθ-plane using subdivisions of one unit for the ρ and θ axes.

• The range of θ values is ±90°, and the range of ρ values is ±√2 M (about ±142 for M = 101).

[Figure: corresponding parameter space, with the sinusoids of the five points labeled 1 through 5, their intersections labeled A and B, and boundary points labeled Q, R, and S.]
16
DIGITAL IMAGE PROCESSING - 2
Example

• Each curve has a different sinusoidal shape.
• The horizontal line resulting from the mapping of point 1 is a sinusoid of zero amplitude.
• The points labeled A and B illustrate the colinearity detection property of the Hough transform.
• The location of point A indicates that points 1, 3, and 5 lie on a straight line passing through the origin (ρ = 0) and oriented at -45°.
• Similarly, the curves intersecting at point B in parameter space indicate that points 2, 3, and 4 lie on a straight line oriented at 45°, whose distance from the origin is ρ = 71. 17
DIGITAL IMAGE PROCESSING - 2
Example

• Finally, the points labeled Q, R, and S in the figure illustrate the fact that the Hough transform exhibits a reflective adjacency relationship at the right and left edges of the parameter space.
• This property is the result of the manner in which ρ and θ change sign at the ±90° boundaries.

18
DIGITAL IMAGE PROCESSING - 2
Edge Linking using Hough Transform

Steps involved in using the Hough transform for edge linking:
1. Obtain a binary edge map.
2. Specify subdivisions in the ρθ-plane.
3. Examine the counts of the accumulator cells for high pixel concentrations.
4. Examine the relationship (principally for continuity) between pixels in a chosen cell.

19
DIGITAL IMAGE PROCESSING - 2
Edge Linking Using Hough Transform

• Continuity in this case usually is based on computing the


distance between disconnected pixels corresponding to a
given accumulator cell.
• A gap in a line associated with a given cell is bridged if the
length of the gap is less than a specified threshold.
• Being able to group lines based on direction is a global
concept applicable over the entire image, requiring only that
we examine pixels associated with specific accumulator cells

20
DIGITAL IMAGE PROCESSING - 2
Example: Edge Linking Using Hough Transform

• Use the Hough transform to extract the two edges defining


the principal runway in the aerial image of an airport
(applications involving autonomous air navigation)

21
DIGITAL IMAGE PROCESSING - 2
Edge Linking Using Hough Transform

Step 1: Obtain the edge map.

[Figures: edge map obtained using Canny's algorithm; Hough parameter space, obtained using 1° increments for θ and one-pixel increments for ρ (the boxes highlight the points associated with long vertical lines).]

22
DIGITAL IMAGE PROCESSING - 2
Edge Linking Using Hough Transform

• The runway of interest is oriented approximately 1° off the


north direction
• So we select the cells corresponding to ±90° and containing
the highest count because the runways are the longest lines
oriented in these directions.
• The small boxes on the edges of Fig below highlight these
cells.

23
DIGITAL IMAGE PROCESSING - 2
Edge Linking Using Hough Transform

• The Hough transform exhibits adjacency at the edges.


• Another way of interpreting this property is that a line
oriented at +90° and a line oriented at −90° are equivalent
(i.e., they are both vertical)

Lines in the image plane corresponding to the points highlighted by the


boxes.
24
DIGITAL IMAGE PROCESSING - 2
Edge Linking Using Hough Transform

• The lines are then superimposed on the original image.


• The lines were obtained by joining all gaps not exceeding 20%
(approximately 100 pixels) of the image height.
• These lines clearly correspond to the edges of the runway of
interest.

Lines superimposed on the original image


25
DIGITAL IMAGE PROCESSING - 2
Observation
• Note that the only information needed to solve this problem was the
orientation of the runway and the observer’s position relative to it.
• In other words, a vehicle navigating autonomously would know that if the
runway of interest faces north, and the vehicle’s direction of travel also is
north, the runway should appear vertically in the image.
• Other relative orientations are handled in a similar manner.
• The orientations of runways throughout the world are available in flight
charts, and the direction of travel is easily obtainable using GPS (Global
Positioning System) information.
• This information also could be used to compute the distance between the
vehicle and the runway, thus allowing estimates of parameters such as
expected length of lines relative to image size

26
DIGITAL IMAGE PROCESSING - 2
Next Session

• Segmentation by Thresholding

27
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

28
DIGITAL IMAGE PROCESSING - 2

U3 Lec 30: Image Segmentation


Dr. Shruthi M.L.J.
Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2
Last Session

• Segmentation
• Edge Linking via Hough transform

3
DIGITAL IMAGE PROCESSING - 2
This Session

• Segmentation using Thresholding

4
DIGITAL IMAGE PROCESSING - 2
Segmentation using Thresholding

• Thresholding enjoys a central position in applications of image segmentation due to its intuitive properties, simplicity of implementation, and computational speed.

5
DIGITAL IMAGE PROCESSING - 2
Segmentation using Thresholding

• Objective: To segment object from background


• Assume one light object and dark background
• Obtain the histogram
• Histogram dominant modes: two or more
• Threshold and thresholding operation

• Object if f(x,y)>T and background if f(x,y) <T (light object on


dark background)

6
DIGITAL IMAGE PROCESSING - 2
Segmentation using Thresholding

• If there are 3 dominant modes (like 2 light objects and dark


background) : Multilevel thresholding
• A point (x,y) belongs to background if f(x,y)<T1
• To one object if T1 <f(x,y) ≤ T2
• To the other object if f(x,y) >T2

7
DIGITAL IMAGE PROCESSING - 2
Segmentation using Thresholding

• The segmented image is given by:

  g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 if f(x, y) ≤ T
8
DIGITAL IMAGE PROCESSING - 2
Thresholding

Threshold:
• If T is a constant applicable over an entire image, it is referred
to as global thresholding
• When the value of T changes over an image, we use the term
variable thresholding.
• The terms local or regional thresholding are used
sometimes to denote variable thresholding in which the
value of T at any point (x,y) in an image depends on
properties of a neighborhood of (x,y) (for example, the
average intensity of the pixels in the neighborhood)
• If it depends on spatial coordinates x and y as well it is called
dynamic or adaptive

9
DIGITAL IMAGE PROCESSING - 2
Role of illumination

• Illumination
• The image f(x, y) is the product of illumination and reflectance: f(x, y) = i(x, y) r(x, y)

• Reflectance depends on the nature of the objects and background

• Poor (nonuniform) illumination can impede segmentation

10
DIGITAL IMAGE PROCESSING - 2
Role of illumination
• Consider a reflectance function of a noisy image and its
histogram
• Its histogram is bi-modal and can be partitioned easily:
Global T

Noisy image Corresponding histograms

11
DIGITAL IMAGE PROCESSING - 2
Role of illumination
• Now Multiplying the reflectance by illumination(non uniform)
yields the image f(x,y)=i(x,y)r(x,y)

12
DIGITAL IMAGE PROCESSING - 2
Role of illumination

• The deep valley between peaks was corrupted to the point


where separation of the modes without additional
processing is not possible
• Similar results would be obtained if the illumination were perfectly uniform but the reflectance of the image was not, as a result, for example, of natural reflectivity variations in the surface of objects and/or background.
• Illumination and reflectance play a central role in the
success of image segmentation using thresholding or other
segmentation techniques.
• Therefore, controlling these factors when possible should
be the first step considered in the solution of a
segmentation problem.

13
DIGITAL IMAGE PROCESSING - 2
Role of illumination

• There are three basic approaches to the problem when control


over these factors is not possible.
• The first is to correct the shading pattern directly.
• For example, nonuniform (but fixed) illumination can be
corrected by multiplying the image by the inverse of the
pattern, which can be obtained by imaging a flat surface of
constant intensity.
• The second is to attempt to correct the global shading pattern
via processing using, for example, the top-hat transformation
• The third approach is to “work around” nonuniformities using variable thresholding

14
DIGITAL IMAGE PROCESSING - 2
Solution to the Illumination problem

• When the original valley is completely eliminated, segmentation using a single threshold becomes impossible.
• Solution: take the logarithm of f(x, y) = i(x, y) r(x, y):

  z(x, y) = ln f(x, y) = ln i(x, y) + ln r(x, y) = i′(x, y) + r′(x, y)

  where i′ and r′ are independent random variables.
• The degree of distortion depends on the broadness of the histogram of i′(x, y), which in turn depends on the nonuniformity of the illumination function.

15
DIGITAL IMAGE PROCESSING - 2
Solution to the Illumination problem
• The final histogram is the result of convolving the histogram of the log-reflectance function with that of the log-illumination function.
• If i(x, y) were constant, i′(x, y) would also be constant and its histogram would be a simple spike (like an impulse).
• The convolution of this spike with the histogram of r′(x, y) would leave the basic shape of the latter unchanged.
• But if i′(x, y) has a broader histogram (due to nonuniform illumination), the convolution smears the histogram of r′(x, y), yielding a histogram for z(x, y) whose shape can be quite different from that of r′(x, y).

16
DIGITAL IMAGE PROCESSING - 2
Solution
• When access to the illumination source is available, a solution to compensate for nonuniformity is to project the illumination pattern onto a constant white reflective surface.
• This yields an image g(x, y) = k i(x, y), where k is a constant that depends on the surface.
• Then, for any image f(x, y) = i(x, y) r(x, y) obtained with the same illumination pattern, we can form the normalized function

  h(x, y) = f(x, y) / g(x, y) = r(x, y) / k

• Thus, if r(x, y) can be segmented using a threshold T, then h(x, y) can be thresholded using T/k (a sketch follows below).

17
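A sketch of this normalization, assuming both images are available as NumPy arrays; the function name and the small eps guard are illustrative:

```python
import numpy as np

def normalize_illumination(f, g, eps=1e-6):
    """h = f / g = r / k: divides out the projected illumination pattern.

    f: image acquired under the nonuniform illumination, f = i * r
    g: image of a constant white surface under the same lighting, g = k * i
    """
    return f.astype(float) / (g.astype(float) + eps)   # eps avoids divide-by-zero

# If r(x, y) is segmentable with threshold T, threshold h(x, y) with T / k.
```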
DIGITAL IMAGE PROCESSING - 2
Role of Noise in Image Thresholding
• Consider a synthetic image without noise and with additive white Gaussian noise (AWGN), together with the corresponding histograms.

[Figures: noiseless 8-bit image; image with AWGN of mean 0 and standard deviation of 10 intensity levels; image with AWGN of mean 0 and standard deviation of 50 intensity levels; corresponding histograms.]

18
DIGITAL IMAGE PROCESSING - 2
Role of Noise in Image Thresholding

• Segmenting noiseless image into two regions is a trivial task:


we just select a threshold anywhere between the two modes.
• In the second image the modes are broader but their
separation is enough so that the depth of the valley between
them is sufficient to make the modes easy to separate.
• A threshold placed midway between the two peaks would
do the job.
• The situation in third case is much more serious, as there is no
way to differentiate between the two modes without
additional processing

19
DIGITAL IMAGE PROCESSING - 2
Next Session

• Optimum Global Thresholding


• Region Based Segmentation

20
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

21
DIGITAL IMAGE PROCESSING - 2

U3 Lec 31- 32: Image Segmentation


Dr. Shruthi M.L.J.
Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2
Last Session

• Segmentation
• Edge Linking via Hough transform
• Using Thresholding
• Role of Illumination
• Role of Noise

3
DIGITAL IMAGE PROCESSING - 2
This Session

• Segmentation using Thresholding


• Role of Noise Cont..
• Region based Segmentation

4
DIGITAL IMAGE PROCESSING - 2
Role of Noise in Image Thresholding
• Consider a synthetic image without noise and with additive white Gaussian noise (AWGN), together with the corresponding histograms.

[Figures: noiseless 8-bit image; image with AWGN of mean 0 and standard deviation of 10 intensity levels; image with AWGN of mean 0 and standard deviation of 50 intensity levels; corresponding histograms.]

5
DIGITAL IMAGE PROCESSING - 2
Role of Noise in Image Thresholding

• Segmenting noiseless image into two regions is a trivial task:


we just select a threshold anywhere between the two modes.
• In the second image the modes are broader but their
separation is enough so that the depth of the valley between
them is sufficient to make the modes easy to separate.
• A threshold placed midway between the two peaks would
do the job.
• The situation in third case is much more serious, as there is no
way to differentiate between the two modes without
additional processing

6
DIGITAL IMAGE PROCESSING - 2
Basic Global Thresholding

• Success of this method depends on how well the histogram


can be partitioned
• Consider the image shown and its histogram

7
DIGITAL IMAGE PROCESSING - 2
Basic Global Thresholding

• We can use a threshold T midway between the maximum and


minimum gray levels
• Any pixel ≤ T can be labeled black (0) and any pixel with gray
level > T can be labeled white(255)

8
DIGITAL IMAGE PROCESSING - 2
Basic Global Thresholding

• Appropriate for industrial inspection applications


with controllable illumination
• The threshold can be obtained automatically using following
algorithm
• Select an initial estimate for T
• Segment image with initial T into regions G1 and G2
• Compute the average gray level values m1 and m2 in
regions G1 and G2
• Compute new threshold T=0.5 (m1 + m2)
• Repeat until the difference in T between successive iterations is smaller than a predefined parameter T0 (a sketch follows below)

9
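A minimal sketch of this iterative algorithm; it assumes both groups G1 and G2 remain nonempty at every iteration:

```python
import numpy as np

def iterative_global_threshold(image, T0=0.5):
    """Iterate T = (m1 + m2) / 2 until successive estimates differ by < T0."""
    f = image.astype(float)
    T = f.mean()                     # initial estimate: average gray level
    while True:
        m1 = f[f > T].mean()         # mean of group G1 (values above T)
        m2 = f[f <= T].mean()        # mean of group G2 (values at or below T)
        T_new = 0.5 * (m1 + m2)
        if abs(T_new - T) < T0:
            return T_new
        T = T_new
```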
DIGITAL IMAGE PROCESSING - 2
Criteria for Initial Threshold

• When the object and background occupy comparable areas in


image initial T can be average gray level of the image

• When objects are small compared to background (or vice-


versa), then one group will dominate the histogram

• Initial threshold can be midway between maximum and


minimum gray level values in such cases

10
DIGITAL IMAGE PROCESSING - 2
Example
• Consider the image and its histogram

Noisy fingerprint Histogram Segmented result using a global threshold


• Iterative algorithm was used for thresholding with initial T as
average of the gray levels
• After 3 iterations threshold was 125.4 11
DIGITAL IMAGE PROCESSING - 2
Example

Noisy fingerprint Segmented result using a global threshold

• Due to clear separation of modes in the histogram, the


segmentation between object and background was perfect. 12
DIGITAL IMAGE PROCESSING - 2
Global Thresholding

Due to the clear separation of the modes in the histogram, the segmentation between object and background was perfect.

[Figures: noisy image; its histogram (intensity axis from 0 to 255); result obtained using Otsu's method.] 13
DIGITAL IMAGE PROCESSING - 2
Basic Adaptive Thresholding

• If the histogram can’t be partitioned effectively (due to some


imaging factors like uneven illumination) adaptive thresholding
can be used
• Here the image is divided into subimages, and different (local) thresholds are used for each subimage (a sketch follows below)
• Since the threshold for each pixel depends on the location of the pixel, the thresholding is adaptive

14
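A sketch of this subdivision scheme; the block size, the variance limit of 75 taken from the example that follows, and the per-block rules are illustrative assumptions:

```python
import numpy as np

def adaptive_threshold(image, block=64, var_limit=75.0):
    """Threshold each subimage with its own local threshold."""
    f = image.astype(float)
    out = np.zeros(f.shape, dtype=np.uint8)
    for r in range(0, f.shape[0], block):
        for c in range(0, f.shape[1], block):
            sub = f[r:r + block, c:c + block]
            if sub.var() <= var_limit:
                # Low variance: assume no object/background boundary here,
                # so the whole block is assigned to a single class.
                out[r:r + block, c:c + block] = 1 if sub.mean() > f.mean() else 0
            else:
                T = 0.5 * (sub.max() + sub.min())   # midway initial threshold
                out[r:r + block, c:c + block] = sub > T
    return out
```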
DIGITAL IMAGE PROCESSING - 2
Example

• Consider the image with uneven illumination and its histogram

15
DIGITAL IMAGE PROCESSING - 2
Example

• Result of global thresholding, taking the threshold at the valley of the histogram
• Subdivide the image into four equal subimages

16
DIGITAL IMAGE PROCESSING - 2
Example

• All subimages that did not contain a boundary between object and background had variance less than 75
• These can be segmented using a single threshold
• Those with boundaries had variances greater than 100
• These were segmented using iterative thresholding, with the initial T midway between the maximum and minimum gray levels
• The figure shows the result of using the adaptive threshold

17
DIGITAL IMAGE PROCESSING - 2
Example

• Two subimages, in which the boundary between object and background was small and dark, did not give a good result

18
DIGITAL IMAGE PROCESSING - 2
Example

• Divide the failed subimage into much smaller subimages and use local thresholds again
• Each smaller subimage has a bimodal histogram and is hence easily segmentable

19
DIGITAL IMAGE PROCESSING - 2
Global Thresholding

• There are various methods to improve segmentation using global thresholding by selecting an optimum threshold:

• Otsu’s method (statistical approach)

• Image smoothing

20
DIGITAL IMAGE PROCESSING - 2
Optimal Global Thresholding

• A method for estimating thresholds that produce the minimum average segmentation error
• In an image, let z denote gray-level values, which can be viewed as random quantities
• Their histogram may be considered an estimate of their probability density function (PDF), p(z)
• If the image has two gray-level regions, then the overall density function is the sum, or mixture, of two densities

21
DIGITAL IMAGE PROCESSING - 2
Optimal Global Thresholding

• If the form of the densities is known or assumed, it is possible to determine an optimal threshold (in terms of minimum error) for segmenting the image into two distinct regions
• Consider the PDFs of two regions: the object (smaller) and the background (larger)

22
DIGITAL IMAGE PROCESSING - 2
Optimal Global Thresholding

• The mixture probability density is

p(z) = P1 p1(z) + P2 p2(z), with P1 + P2 = 1

where P1 and P2 are the probabilities of occurrence of the two classes of pixels
• Select the value of T that minimizes the average error in
making the decision that a given pixel belongs to an object or
to the background
• The probability that a random variable lies in an interval is the integral of its PDF over that interval

23
DIGITAL IMAGE PROCESSING - 2
Optimal Global Thresholding

• The probability of erroneously classifying a background point as an object point is

E1(T) = ∫_{−∞}^{T} p2(z) dz

• The probability of erroneously classifying an object point as background is

E2(T) = ∫_{T}^{∞} p1(z) dz

24
DIGITAL IMAGE PROCESSING - 2
Optimal Global Thresholding

• Then the overall probability of error is

E(T) = P2 E1(T) + P1 E2(T)

• E1 and E2 are weighted by the probabilities of occurrence of background and object pixels, respectively
• To find the threshold for which the error is minimum:
• Differentiate the error equation with respect to T, equate it to zero, and solve for T
• We get

P1 p1(T) = P2 p2(T)

25
DIGITAL IMAGE PROCESSING - 2
Optimal Global Thresholding

• This equation is solved to obtain the optimum value of T
• If P1 = P2, the optimum threshold is where the curves p1(z) and p2(z) intersect (a closed-form special case is sketched below)
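For the special case in which p1(z) and p2(z) are assumed Gaussian with means μ1 and μ2 and a common standard deviation σ, solving P1 p1(T) = P2 p2(T) gives the standard closed form

T = (μ1 + μ2)/2 + [σ² / (μ1 − μ2)] ln(P2 / P1)

which reduces to the midpoint of the means when P1 = P2. A one-line sketch:

```python
import numpy as np

def optimal_threshold_gaussian(mu1, mu2, sigma, P1, P2):
    """Closed-form solution of P1*p1(T) = P2*p2(T) for two Gaussian
    densities sharing one sigma; equals (mu1 + mu2)/2 when P1 == P2."""
    return 0.5 * (mu1 + mu2) + (sigma**2 / (mu1 - mu2)) * np.log(P2 / P1)
```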

26
DIGITAL IMAGE PROCESSING - 2
Thresholding based on several variables
• Thresholds based on several variables
• Color or multispectral histograms
• Thresholding is based on finding clusters in multi-dimensional space
• Example: face detection
• Different color models
• Hue and saturation can be used instead of RGB (a toy sketch follows)
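A toy sketch of thresholding in hue/saturation space instead of RGB (the ranges below are illustrative placeholders, not calibrated skin-tone values):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def hue_sat_mask(rgb, h_range=(0.0, 0.1), s_range=(0.2, 0.7)):
    """Return a boolean mask of pixels whose hue and saturation fall
    inside the given ranges. rgb: float array in [0, 1], shape (H, W, 3)."""
    hsv = rgb_to_hsv(rgb)
    h, s = hsv[..., 0], hsv[..., 1]
    return ((h >= h_range[0]) & (h <= h_range[1]) &
            (s >= s_range[0]) & (s <= s_range[1]))
```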

27
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques


• Three Approaches

28
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation

• Segmentation: Partition an image into regions


• In all previous methods:
• Segmentation was based on finding boundaries between
regions based on discontinuities in gray levels
• Based on thresholds using distribution of pixel properties
like gray level values or color
• We can also segment an image by finding the regions directly: region-based segmentation

29
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation

• Basic formulation
• Every pixel must be in a region
• Points in a region (Ri ) must be connected
• Regions must be disjoint

• Define a logical predicate Q(Ri) that holds for the pixels of a single region and fails for the union of adjacent regions, i.e., Q(Ri) = TRUE, and Q(Ri ∪ Rj) = FALSE for adjacent regions Ri and Rj

30
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation using Region Growing

• Region growing
• Group pixels from sub-regions to larger regions based on
predefined criteria for growth.

• Basic Approach:
• Start from a set of ’seed’ pixels and append pixels
with similar properties
• Selection of similarity criteria: color, descriptors (gray
level + moments)
• Stopping rule

31
DIGITAL IMAGE PROCESSING - 2
Challenges in Region Growing
• Challenges in region based segmentation:
• The selection of similarity criteria depends not only on
the problem under consideration, but also on the type of
image data available (color/monochrome image)
• Descriptors alone can yield misleading results if
connectivity properties are not used in the region-
growing process
• Visualize a random arrangement of pixels that have three
distinct intensity values.
• Grouping pixels with the same intensity value to form a
“region,” without paying attention to connectivity, would yield a
segmentation result that is meaningless
• Formulation of a stopping rule.
• Region growth should stop when no more pixels satisfy the
criteria for inclusion in that region
32
DIGITAL IMAGE PROCESSING - 2
Challenges in Region Growing

• Additional criteria that can increase the power of a region-growing algorithm use the concepts of size, likeness between a candidate pixel and the pixels grown so far (such as comparing the intensity of a candidate with the average intensity of the grown region), and the shape of the region being grown

• The use of these types of descriptors is based on the assumption that a model of the expected results is at least partially available

33
DIGITAL IMAGE PROCESSING - 2
Algorithm for Region Growing

• Let f(x, y) denote an input image;
S(x, y) denote a seed array containing 1’s at the locations of seed points and 0’s elsewhere;
and Q denote a predicate to be applied at each location (x, y).
Arrays f and S are assumed to be of the same size.

34
DIGITAL IMAGE PROCESSING - 2
Algorithm for Region Growing
• A basic region-growing algorithm based on 8-connectivity may be
stated as follows:
1. Find all connected components in S(x, y) and reduce each
connected component to one pixel; label all such pixels found as 1.
All other pixels in S are labeled 0.
2. Form an image fQ such that, at each point (x, y), fQ(x, y) = 1 if the input image satisfies the given predicate, Q, at those coordinates, and fQ(x, y) = 0 otherwise.
3. Let g be an image formed by appending to each seed point in S all
the 1-valued points in fQ that are 8-connected to that seed point.

4. Label each connected component in g with a different region label (e.g., integers or letters).
This is the segmented image obtained by region growing (a compact sketch follows).

35
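A compact sketch of this algorithm using NumPy and SciPy (assumed available). Reducing each seed component to a single point does not change which fQ components are kept, so that step is folded into the connectivity test; the threshold value and the use of the mean seed intensity as the reference are illustrative choices:

```python
import numpy as np
from scipy import ndimage as ndi

def region_grow(f, S, T=68):
    """8-connected region growing: f is the grayscale image, S a
    boolean seed array, and Q the predicate |f - seed value| <= T."""
    eight = np.ones((3, 3), dtype=int)           # 8-connectivity SE
    seed_val = f[S].mean()                       # reference gray level
    # Step 2: fQ(x, y) = 1 where the predicate Q holds
    fQ = np.abs(f.astype(float) - seed_val) <= T
    # Step 3: keep the connected components of fQ that contain a seed
    labQ, _ = ndi.label(fQ, structure=eight)
    keep = np.unique(labQ[S & fQ])
    g = np.isin(labQ, keep[keep > 0])
    # Step 4: label each connected component with a different label
    regions, _ = ndi.label(g, structure=eight)
    return regions
```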
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation Example

Consider an 8-bit X-ray image of a weld (the horizontal


dark region) containing several cracks and porosities (the
bright regions running horizontally through the center of
the image).
Segment the defective weld regions using region growing

The first step is to determine the seed points.


Figure: X-ray image of a weld with a crack that is to be segmented.

We expect the regions containing these types of defects to be significantly brighter than other parts of the X-ray image, as cracks and porosities attenuate X-rays considerably less than solid welds.

We can extract the seed points by thresholding the original image, using a threshold set at a high percentile (99.9) of the intensity values, which in this case is 254.
36
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation

Figure: X-ray image of the weld with a crack to be segmented; its histogram; and the seed image (99.9% of the maximum value in the initial image). Note that the crack pixels are missing from the seed image.

The weld is very bright. The predicate used for region growing compares the absolute difference between a seed point and a pixel to a threshold: if the difference is below the threshold, we accept the pixel as crack.
37
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation

• Next, we have to specify a predicate.


• In this example, we are interested in appending to each
seed all the pixels that
(a) are 8-connected to that seed, and
(b) are “similar” to it.
Using absolute intensity differences as a measure of similarity, the predicate applied at each location (x, y) is

Q: TRUE if |f(x, y) − seed value| ≤ T; FALSE otherwise

38
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation

Figure: Final seed image obtained using morphological erosion; absolute value of the difference between the seed value (255) and the original image; histogram of that difference image; difference image thresholded using dual thresholds; difference image thresholded with the smallest of the dual thresholds; and the segmentation result obtained by region growing.

39
DIGITAL IMAGE PROCESSING - 2
Region Based Segmentation

• The region that resulted from growing the previous seeds
• Superimposing the boundaries on the original image shows that the defective welds were segmented with acceptable accuracy

40
DIGITAL IMAGE PROCESSING - 2
Region Splitting and Merging

• Region growing grows regions from seed points

• An alternative is to subdivide an image initially into a set of disjoint regions, and then merge and/or split the regions in an attempt to satisfy the conditions of segmentation
Basic Approach:
• Let R represent the entire image region and select a predicate Q.
• One approach for segmenting R is to subdivide it successively into
smaller and smaller quadrant regions so that, for any region Ri ,
Q(Ri ) = TRUE

41
DIGITAL IMAGE PROCESSING - 2
Region Splitting and Merging
• We start with the entire region, R
• If Q(R) = FALSE, we divide the image into quadrants.
• If Q is FALSE for any quadrant, we subdivide that quadrant into sub-
quadrants, and so on.
• This splitting technique has a convenient representation in the form
of so-called quadtrees; that is, trees in which each node has exactly
four descendants

In this case, only R4 was subdivided further.

Figure: Partitioned image and the corresponding quadtree; R represents the entire image region.

42
DIGITAL IMAGE PROCESSING - 2
Region Splitting and Merging

• If only splitting is used, the final partition normally contains adjacent regions with identical properties.

• This drawback can be remedied by allowing merging as well as splitting.

• Satisfying the constraints of segmentation requires merging only adjacent regions whose combined pixels satisfy the predicate Q. That is, two adjacent regions Rj and Rk are merged only if

Q(Rj ∪ Rk) = TRUE

43
DIGITAL IMAGE PROCESSING - 2
Procedure for Region Splitting and Merging

1. Split into four disjoint quadrants any region Ri for which

Q(Ri ) = FALSE.

2. When no further splitting is possible, merge any adjacent regions Rj and Rk for which

Q(Rj ∪ Rk) = TRUE.

3. Stop when no further merging is possible (a splitting sketch follows).
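
A minimal splitting-only sketch in Python (the merging pass and the quadtree bookkeeping are omitted for brevity; the predicate shown is an illustrative placeholder of the form used in the next example):

```python
import numpy as np

def quadtree_split(f, Q, min_size=16):
    """Recursively split any region R for which Q(R) is FALSE, down to
    min_size; mark the pixels of regions that satisfy Q."""
    out = np.zeros(f.shape, dtype=bool)

    def split(y0, y1, x0, x1):
        R = f[y0:y1, x0:x1]
        if Q(R):
            out[y0:y1, x0:x1] = True            # Q(R) = TRUE
        elif min(y1 - y0, x1 - x0) > min_size:  # Q(R) = FALSE: split
            ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
            split(y0, ym, x0, xm); split(y0, ym, xm, x1)
            split(ym, y1, x0, xm); split(ym, y1, xm, x1)

    split(0, f.shape[0], 0, f.shape[1])
    return out

# Illustrative predicate: high local standard deviation, mid-range mean
Q = lambda R: R.std() > 10 and 0 < R.mean() < 125
```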

44
DIGITAL IMAGE PROCESSING - 2
Example: Region Splitting and Merging

• Consider a 566 × 566 X-ray image of the Cygnus Loop supernova, taken in the X-ray band by NASA’s Hubble Telescope

• The objective is to segment (extract from the image) the “ring” of less dense matter surrounding the dense inner region
45
DIGITAL IMAGE PROCESSING - 2
Example: Region Splitting and Merging
Observations:
• Data in this region has a random nature, indicating that its
standard deviation should be greater than the standard deviation
of the background (which is near 0) and of the large central
region, which is smooth
• The mean value (average intensity) of a region containing data
from the outer ring should be greater than the mean of the
darker background and less than the mean of the lighter central
region
• Thus, we should be able to segment the region of interest using the following predicate:

Q: TRUE if σ > a AND 0 < m < b; FALSE otherwise

where σ and m are the standard deviation and mean of the region being processed, and a and b are nonnegative constants.
46
DIGITAL IMAGE PROCESSING - 2
Example: Region Splitting and Merging
Observations:
• Analysis of several regions in the outer area of interest revealed
that the mean intensity of pixels in those regions did not exceed
125, and the standard deviation was always greater than 10
• Using these values for a and b (a = 10, b = 125), the minimum size allowed for the quadregions was varied from 32 to 8.

Figure: Results with the smallest allowed quadregion sizes of 32 × 32, 16 × 16, and 8 × 8.

47
DIGITAL IMAGE PROCESSING - 2
Example: Region Splitting and Merging
• The pixels in a quadregion that satisfied the predicate were set
to white; all others in that region were set to black.
• The best result in terms of capturing the shape of the outer
region was obtained using quadregions of size 16 × 16.
• The small black squares are quadregions of size 8 × 8 whose
pixels did not satisfy the predicate.
• Using smaller quadregions would result in increasing numbers of
such black regions. Using regions larger than the one illustrated
here would result in a more “block-like” segmentation

48
DIGITAL IMAGE PROCESSING - 2
Example: Region Splitting and Merging
• Note that in all cases the segmented region (white pixels) was a
connected region that completely separates the inner, smoother
region from the background.
• Thus, the segmentation effectively partitioned the image into
three distinct areas that correspond to the three principal
features in the image: background, a dense region, and a sparse
region.
• Using a suitable mask the white region can be separated

49
DIGITAL IMAGE PROCESSING - 2
Region Segmentation Using K-Means Clustering

• This is a classical approach based on seeking clusters in data, related to such variables as intensity and color
• Basic Idea:
• To partition a set, Q, of observations into a specified number, k, of
clusters.
• In k-means clustering, each observation is assigned to the cluster
with the nearest mean and each mean is called the prototype of its
cluster.
• A k-means algorithm is an iterative procedure that successively
refines the means until convergence is achieved

50
DIGITAL IMAGE PROCESSING - 2
Region Segmentation Using K-Means Clustering

• Let {z1, z2, ..., zQ} be a set of vector observations (samples).

• These vectors have the form z = [z1, z2, ..., zn]^T

• In image segmentation, each component of a vector z represents a numerical pixel attribute
• For example, if segmentation is based on just grayscale intensity, then z is a scalar representing the intensity of a pixel
• If we are segmenting RGB color images, z is typically a 3-D vector, each component of which is the intensity of a pixel in one of the three primary color images

51
DIGITAL IMAGE PROCESSING - 2
Region Segmentation Using K-Means Clustering

• The objective of k-means clustering is to partition the set Q of observations into k (k ≤ Q) disjoint cluster sets C = {C1, C2, ..., Ck} so that the following criterion of optimality is satisfied:

arg min over C of Σ_{i=1}^{k} Σ_{z ∈ Ci} ||z − mi||²

where mi is the mean vector (or centroid) of the samples in set Ci, and ||·|| is the vector norm of the argument
• This equation says that we are interested in finding the sets
C = {C1 , C2 , ... , Ck } such that the sum of the distances from each
point in a set to the mean of that set is minimum.
• Finding this minimum is an NP-hard problem for which no practical
solution is known
• In practice we use the “standard” k-means algorithm, based on the Euclidean distance, to find the clusters (a minimal sketch follows)
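A minimal sketch of the standard algorithm applied to grayscale intensities (k, the iteration cap, and the random initialization are illustrative choices):

```python
import numpy as np

def kmeans_segment(img, k=3, iters=20, seed=0):
    """Assign each pixel to the cluster with the nearest mean,
    then recompute the means; repeat until convergence."""
    z = img.reshape(-1, 1).astype(float)
    rng = np.random.default_rng(seed)
    means = z[rng.choice(len(z), size=k, replace=False)]  # prototypes
    for _ in range(iters):
        d = np.abs(z - means.T)          # distance of each z to each mean
        assign = d.argmin(axis=1)        # nearest-mean assignment
        new = np.array([z[assign == i].mean(axis=0) if np.any(assign == i)
                        else means[i] for i in range(k)])
        if np.allclose(new, means):
            break                        # converged
        means = new
    return assign.reshape(img.shape), means.ravel()
```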
52
DIGITAL IMAGE PROCESSING - 2
Next Session

• Segmentation using watersheds

53
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

54
DIGITAL IMAGE PROCESSING - 2

U3 Lec 33: Image Segmentation


Dr. Shruthi M.L.J.
Department of Electronics &
Communication Engineering

1
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques

Dr. Shruthi M.L.J.


Department of Electronics & Communication Engineering
2
DIGITAL IMAGE PROCESSING - 2
Last Session

• Segmentation using Thresholding


• Role of Noise (contd.)
• Region based Segmentation
• Region growing, splitting, merging
• Segmentation using clustering

3
DIGITAL IMAGE PROCESSING - 2
This Session

• Segmentation using Morphological watershed

4
DIGITAL IMAGE PROCESSING - 2

Image Segmentation Techniques


• Three Approaches

5
DIGITAL IMAGE PROCESSING - 2
Segmentation Using Morphological Watersheds

• We studied segmentation using 3 principal concepts:


• Edge detection
• Thresholding and
• Region extraction
• Each of these approaches was found to have advantages and
disadvantages
• Advantages: Speed in the case of global thresholding
• Disadvantages: The need for post-processing, such as edge linking, in edge-based segmentation

6
DIGITAL IMAGE PROCESSING - 2
Segmentation Using Morphological Watersheds

• Segmentation by watersheds embodies many of the concepts of the


other three approaches

• And often produces more stable segmentation results, including


connected segmentation boundaries

• This approach also provides a simple framework for incorporating


knowledge-based constraints

7
DIGITAL IMAGE PROCESSING - 2
Segmentation Using Morphological Watersheds

Concept: Visualize an image topographically in 3-D


• The two spatial coordinates and the intensity
• We consider three types of points
• Points belonging to a regional minimum
• Points at which a drop of water would certainly fall to a single regional minimum (the catchment basin or watershed of that minimum)
• Points at which the water would be equally likely to fall to more than one regional minimum (crest lines, divide lines, or watershed lines)
Objective: The principal objective of segmentation algorithms
based on these concepts is to find the watershed lines

8
DIGITAL IMAGE PROCESSING - 2
Segmentation Using Morphological Watersheds

• Objective: To find the watershed lines


• Consider an example
• Topographic representation: the height is proportional to the image intensity.
• Backsides of structures are shaded for better visualization.

Figure: Original image and its topographic view. Only the background is black; the basin on the left is slightly lighter than black.

9
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

• Before flooding:
• To prevent water from spilling through the image borders,
we consider that the image is surrounded by dams of
height greater than the maximum image intensity.

10
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

Regional minima

• Topographic representation of the image.


• A hole is punched in each regional minimum (dark areas), and the topography is flooded by water (at a uniform rate) from below through the holes.
11
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

• First stage of flooding.


• The water has covered the areas corresponding to the dark background.
• Next stages of flooding.
• The water has risen into the other
catchment basin.
• Further flooding.
• The water has risen into the third
catchment basin.

12
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

• Further flooding.
• The water from the left basin overflowed into the right basin.
• A short dam is constructed to prevent the water from merging.

• Further flooding.
• The effect is more pronounced.
• The first dam is now longer, and new dams are created.

13
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

• The process continues until the maximum level of flooding is


reached.
• The final dams correspond to the watershed lines, which are the result of the segmentation.

Final watershed lines


superimposed on the
image.

• Important: The watershed lines form connected paths, so we get continuous segment boundaries
14
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

• Summary:
• A hole is punched in each regional minimum and the topography is
flooded by water from below through the holes.
• When the rising water is about to merge in catchment basins, a dam
is built to prevent merging.
• There will be a stage where only the tops of the dams will be visible.
• These continuous and connected boundaries are the result of the segmentation. In practice, library implementations of this flooding are used (a sketch follows).

15
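A minimal end-to-end sketch using SciPy and scikit-image (assumed available); the synthetic blobs and the 0.7 marker threshold are illustrative:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Synthetic binary image: two overlapping disks
yy, xx = np.mgrid[0:100, 0:100]
blobs = ((xx - 35)**2 + (yy - 50)**2 < 20**2) | \
        ((xx - 65)**2 + (yy - 50)**2 < 20**2)

# Flood the negated distance transform from one marker per blob
dist = ndi.distance_transform_edt(blobs)
markers, _ = ndi.label(dist > 0.7 * dist.max())
labels = watershed(-dist, markers, mask=blobs, watershed_line=True)
# Pixels labeled 0 inside the mask trace the watershed (dam) line
```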
DIGITAL IMAGE PROCESSING - 2
Dam Construction

• Dam construction is based on binary images, which are members of the 2-D integer space Z²
• The simplest way to construct dams separating sets of binary points is to use morphological dilation

Figure: Two partially flooded catchment basins at stage n − 1 of flooding; flooding at stage n, showing that water has spilled between the basins.

The water has spilled from one basin into the other; therefore, a dam must be built to keep this from happening.
16
DIGITAL IMAGE PROCESSING - 2
Dam Construction Steps
Figure: Flooding step n − 1 and flooding step n, showing the catchment basins Cn−1(M1) and Cn−1(M2).

• let M1 and M2 denote the sets of coordinates of points in two regional minima.
• Then let the set of coordinates of points in the catchment basin associated with these
two minima at stage n − 1 of flooding be denoted by Cn−1(M1) and Cn−1(M2), respectively.
These are the two gray regions
• If we continue flooding, then we will have one connected component.
• This indicates that a dam must be constructed.
• Let q be the merged connected component if we perform flooding at step n.
17
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

• Each of the connected components is dilated by the SE shown (a 3 × 3 array of 1’s), subject to:
• The center of the SE has to be contained in q.
• The dilation cannot be performed on points that would cause the
sets being dilated to merge.

18
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

Conditions
1. Center of SE in q.
2. No dilation if merging.

• In the first dilation, condition 1 was satisfied by every point, and condition 2 did not apply to any point.
• In the second dilation, several points failed condition 1 while meeting condition 2 (the points along the broken perimeter).

19
DIGITAL IMAGE PROCESSING - 2
Morphological Watersheds (cont.)

Conditions
1. Center of SE in q.
2. No dilation if merging.

• The only points in q that satisfied both conditions form the 1-pixel-thick path.
• This is the dam at step n of the flooding process (a conditional-dilation sketch follows).
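A conditional-dilation sketch of one dam-building pass (a didactic per-pixel loop, not an efficient implementation; the function name is hypothetical):

```python
import numpy as np

def dilate_with_dam(labels, q):
    """One conditional-dilation pass. labels: int array with 0 for
    unflooded pixels and 1..K for basins; q: boolean mask of the merged
    connected component at stage n. A pixel joins a basin only if it
    lies inside q (condition 1) and touches exactly one basin
    (condition 2); pixels touching two or more basins become dam points.
    Repeating the pass completes the flooding stage."""
    h, w = labels.shape
    out = labels.copy()
    dam = np.zeros_like(q, dtype=bool)
    for y in range(h):
        for x in range(w):
            if labels[y, x] != 0 or not q[y, x]:
                continue                         # only grow into q
            neigh = labels[max(0, y-1):y+2, max(0, x-1):x+2]
            basins = np.unique(neigh[neigh > 0])
            if basins.size == 1:
                out[y, x] = basins[0]            # safe to dilate
            elif basins.size > 1:
                dam[y, x] = True                 # would merge: dam point
    return out, dam
```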
20
DIGITAL IMAGE PROCESSING - 2
Summary of Segmentation Techniques

• Point detection
• Line and Edge detection and linking
• Segmentation by thresholding
• Region based segmentation
• Segmentation using Morphological watersheds

21
DIGITAL IMAGE PROCESSING - 2
Next Session

• Introduction to Wavelet transforms

22
THANK YOU

Dr. Shruthi M.L.J.


Department of Electronics &
Communication Engineering

shruthimlj@pes.edu
+91 8147883012

23
