Unit 3 (Slides) PDF
DIGITAL IMAGE PROCESSING - 2
Image Segmentation
• Two types
• Local Segmentation
• Deals with sub-images which are small windows of the whole
image
• Number of pixels available is much lower than in global segmentation
• Global Segmentation
• Concerned with segmenting the whole image
• Deals with segments consisting of a relatively large number of pixels, hence estimated parameter values are more robust
Edge-Based Segmentation
• Deals with detection of sharp, local changes in intensity
• Three basic types of gray-level discontinuities in a digital image
are:
• Isolated Points
• Foreground (background) pixel surrounded by background (foreground) pixels
• Lines
• Thin edge segment in which the intensity of the background on either side of the line is either much higher or much lower than the intensity of the line pixels
• Edges
• Edges: Set of connected edge pixels
• Edge Pixels are pixels at which intensity of image changes
abruptly
Isolated Point, Line and Edge
Basic Concept
w1 w2 w3
w4 w5 w6
w7 w8 w9
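The mask-response concept can be sketched in code — a minimal illustration (the weights and intensities below are hypothetical, chosen only to show the computation):

```python
# Response of a 3x3 mask: R = w1*z1 + w2*z2 + ... + w9*z9, where
# z1..z9 are the image intensities under the mask weights w1..w9.
def mask_response(weights, z):
    # weights and z are length-9 lists ordered w1..w9 and z1..z9
    return sum(w * v for w, v in zip(weights, z))

# identity mask: all weight on the center w5, so R equals z5
print(mask_response([0, 0, 0, 0, 1, 0, 0, 0, 0], [5, 3, 8, 1, 7, 2, 9, 4, 6]))  # → 7
```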
Next Session
• Segmentation
• Edge Approach
THANK YOU
shruthimlj@pes.edu
+91 8147883012
• Image segmentation
• Types of segmentation
• Local
• Global
• Types of segmentation techniques
• Edge approach
• Boundary approach
• Region approach
Today’s Session
Image Segmentation
• Two types
• Local Segmentation
• Deals with sub-images which are small windows of the whole image
• Global Segmentation
• Concerned with segmenting the whole image
Image Segmentation Techniques
Derivatives of digital functions/images
• First-order derivative: ∂f/∂x = f(x+1) − f(x)
• Second-order derivative: ∂²f/∂x² = f(x+1) + f(x−1) − 2f(x)
Derivatives using approximations to Taylor Series
• Forward Difference: ∂f/∂x = f(x+1) − f(x)
• Backward Difference: ∂f/∂x = f(x) − f(x−1)
• Central Difference: ∂f/∂x = [f(x+1) − f(x−1)]/2
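These difference approximations can be verified numerically; a minimal sketch (the profile values are illustrative, not the slide's figure):

```python
# First and second discrete derivatives of a 1-D intensity profile,
# using f(x+1) - f(x) and f(x+1) + f(x-1) - 2*f(x).
def first_derivative(f):
    return [f[x + 1] - f[x] for x in range(len(f) - 1)]

def second_derivative(f):
    # defined only for interior points
    return [f[x + 1] + f[x - 1] - 2 * f[x] for x in range(1, len(f) - 1)]

# a ramp down followed by a step up
profile = [6, 6, 6, 5, 4, 3, 2, 1, 1, 1, 6, 6]
print(first_derivative(profile))
print(second_derivative(profile))
```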
Derivatives in digital image
• Determine the first- and second-order derivatives of the following image
(figure: subsampled intensity profile of the image)
Derivatives in digital image
• The transition in the ramp spans four pixels, the noise point is a single pixel, the line is three pixels thick, and the transition of the step edge takes place between adjacent pixels.
• The number of intensity levels was limited to eight for simplicity.
Properties of first and second Derivatives in digital image
• The line in this example is rather thin, so it too is fine detail, and we see
again that the second derivative has a larger magnitude.
• At both the ramp and step edges the second derivative has opposite signs
(negative to positive or positive to negative) as it transitions into and out
of an edge. This “double-edge” effect is an important characteristic that
can be used to locate edges
• As we move into the edge, the sign of the second derivative is also used to determine whether an edge is a transition from light to dark (negative second derivative) or from dark to light (positive second derivative)
Summary of first and second Derivatives in digital image
Detection of Isolated Point
• Point Detection:
• Point detection should be based on the second derivative using
the Laplacian:
-1 -1 -1
-1  8 -1
-1 -1 -1
Next Session
25
THANK YOU
shruthimlj@pes.edu
+91 8147883012
26
DIGITAL IMAGE PROCESSING - 2
1
DIGITAL IMAGE PROCESSING - 2
3
DIGITAL IMAGE PROCESSING - 2
Edge-Based Segmentation
• Deals with detection of sharp, local changes in intensity
• Three basic types of gray-level discontinuities in a digital image
are:
• Isolated Points
• Foreground (background) pixel surrounded by
background (Foreground) pixels
• Lines
• Thin edge segment in which intensity of the background on
either side of the line is either much higher or much lower
than intensity of the line pixels
• Edges
• Edges: Set of connected edge pixels
• Edge Pixels are pixels at which intensity of image changes
abruptly
4
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point
• Point Detection:
• Point detection is based on the second derivative using the
Laplacian:
1 8 1
1 1 1 5
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point
6
DIGITAL IMAGE PROCESSING - 2
Detection of Isolated Point
7
DIGITAL IMAGE PROCESSING - 2
Point Detection Example
Laplacian kernel used for point detection:
-1 -1 -1
-1  8 -1
-1 -1 -1
(figures) X-ray image of a turbine blade with a porosity manifested by a single black pixel (shown enlarged at the tip of the arrow); the result of convolving the kernel with the image; and the result of thresholding, where the single point is detected.
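A minimal sketch of this point-detection scheme (the 5×5 test image and threshold T are made up for illustration):

```python
# Convolve the point-detection Laplacian with the image and keep
# responses whose absolute value exceeds a threshold T.
KERNEL = [[-1, -1, -1],
          [-1,  8, -1],
          [-1, -1, -1]]

def detect_points(img, T):
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):          # border skipped for simplicity
        for j in range(1, cols - 1):
            r = sum(KERNEL[a][b] * img[i - 1 + a][j - 1 + b]
                    for a in range(3) for b in range(3))
            out[i][j] = 1 if abs(r) > T else 0
    return out

# uniform background of 2 with one isolated bright point at the center
img = [[2] * 5 for _ in range(5)]
img[2][2] = 9
print(detect_points(img, T=20))  # only the center pixel is flagged
```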
Edge-Based Segmentation
Coordinate Convention in Digital Image
Line Detection Procedure
• Let Z1, Z2, Z3 and Z4 be the responses to the four masks, from left to right
• Let the four masks be run individually through the image
• If at a certain point Zk > Zj for all j ≠ k, that point is said to be more likely associated with a line in the direction of kernel k
• For example, if at a point in the image Z1 > Zj for j = 2, 3, 4, that point is said to be more likely associated with a horizontal line
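The selection rule can be sketched as follows (these are the standard one-pixel-thick line-detection kernels; the ±45° labels depend on the image coordinate convention, so treat the diagonal naming as illustrative):

```python
# Compute the four responses Z1..Z4 at a 3x3 neighborhood and pick
# the direction whose kernel gives the largest absolute response.
KERNELS = {
    "horizontal": [[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]],
    "+45":        [[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]],
    "vertical":   [[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]],
    "-45":        [[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]],
}

def line_direction(patch):
    responses = {
        name: abs(sum(k[a][b] * patch[a][b]
                      for a in range(3) for b in range(3)))
        for name, k in KERNELS.items()
    }
    return max(responses, key=responses.get)

# one-pixel-thick horizontal line on a dark background
patch = [[0, 0, 0],
         [5, 5, 5],
         [0, 0, 0]]
print(line_direction(patch))  # → horizontal
```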
Line Detection Example
• The result of filtering the image with this kernel: there are two principal segments in the image oriented in the +45° direction, one at the top left and one at the bottom right.
• The straight-line segment in the third figure is brighter than the segment in the second figure because the line segment at the bottom right of the given figure is one pixel thick, while the one at the top left is not.
• The kernel is "tuned" to detect one-pixel-thick lines in the +45° direction, so we expect its response to be stronger when such lines are detected.
Observations
• The isolated points in the figure are points that also had similarly strong responses to the kernel.
Edge-Based Segmentation
Edge Detection
• Edge Detection
• We use edge models
• Ideally, edges should be one pixel thin
• In practice, they are blurred and noisy
• Edge models are classified according to their intensity
profiles.
Edge Models
Edge Detection
• Edge Detection
• Magnitude of the first derivative.
• Slope of the first derivative is inversely proportional to the degree of blurring
• Thickness of an edge is determined by the length of the ramp
• Sign change of the second derivative.
• Observations:
• Second derivative produces two values for an edge
(undesirable).
• Its zero crossings may be used to locate the centers of thick
edges.
Edge Profiles
∂f/∂x = −2 and ∂f/∂y = 2
Example
Example
• Then the gradient magnitude is ∇f = √((−2)² + 2²) = 2√2
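Worked numerically (a small sketch using the values above):

```python
# Gradient magnitude and angle from the partial derivatives
# df/dx = -2 and df/dy = 2.
import math

gx, gy = -2.0, 2.0
magnitude = math.hypot(gx, gy)            # sqrt(gx^2 + gy^2) = 2*sqrt(2)
angle = math.degrees(math.atan2(gy, gx))  # direction of the gradient vector
print(round(magnitude, 4), round(angle, 1))  # → 2.8284 135.0
```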
Detection of Edge Direction
Edge Linking and Boundary Detection
Local Processing
The two principal properties used for establishing similarity of edge pixels in this kind of analysis are:
1. The strength (magnitude) of the response of the gradient operator used to produce the edge pixel
2. The direction of the gradient vector
Alternate Method
• Scan the rows of g and fill (set to 1) all gaps (runs of zeros) in each row that do not exceed a specified length, L
• A gap is bounded at both ends by one or more 1s
• To detect gaps in any other direction θ, rotate g by this angle and apply the horizontal scanning procedure; then rotate the result back by −θ
• This algorithm is practical and gives good results
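The row-scanning step can be sketched like this (a minimal illustration on a single binary row):

```python
# Fill runs of zeros no longer than L that are bounded by 1s on both
# sides, as in the horizontal scanning step.
def fill_gaps(row, L):
    out = row[:]
    i = 0
    while i < len(out):
        if out[i] == 0:
            j = i
            while j < len(out) and out[j] == 0:
                j += 1
            # the gap must be bounded by 1s at both ends and short enough
            if i > 0 and j < len(out) and (j - i) <= L:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

print(fill_gaps([1, 0, 0, 1, 0, 0, 0, 0, 1], L=2))  # → [1, 1, 1, 1, 0, 0, 0, 0, 1]
```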
Edge Linking and Boundary Detection: Example
Next Session
• Segmentation
• Line and edge detection
• Edge Linking
This Session
• Segmentation
• Gradient Operators
• Thresholding
Gradient Operators
(figure: 1-D kernels used to approximate the two gradient components)
Comparison of Operators
Gradient Operators
• The edge angle at that point is then the direction associated with
that kernel
Gradient Operators
(figures) Image of size 834 × 1114 pixels, with intensity values scaled to the range [0, 1]; the components of the gradient in the x- and y-directions, obtained using the Sobel kernels; and the gradient image.
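A minimal sketch of the Sobel computation (the tiny test image is made up; the gradient image is formed as |gx| + |gy|, as in the figure):

```python
# Sobel kernels and the gradient image |gx| + |gy|.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # responds to vertical edges
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # responds to horizontal edges

def conv3(img, k, i, j):
    return sum(k[a][b] * img[i - 1 + a][j - 1 + b]
               for a in range(3) for b in range(3))

def gradient_image(img):
    rows, cols = len(img), len(img[0])
    g = [[0] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            g[i][j] = abs(conv3(img, SOBEL_X, i, j)) + abs(conv3(img, SOBEL_Y, i, j))
    return g

# a vertical step edge: dark left half, bright right half
img = [[0, 0, 10, 10]] * 4
print(gradient_image(img))
```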
DIGITAL IMAGE PROCESSING - 2
Observations
• This example illustrates the absolute value response of the two Sobel gradient components, |gx| and |gy|, as well as the gradient image formed from the sum of these two components
• The directionality of the horizontal and vertical components of the gradient
is evident in Figs. (b) and (c)
• The roof tile, horizontal brick joints and the horizontal segments of the
windows are very strong compared to other edges.
• In contrast, Fig. (c) favors features such as the vertical components of the
façade and windows.
• It is common terminology to use the term edge map when referring to an
image whose principal features are edges, such as gradient magnitude
images
Combining gradient with smoothing
Same sequence as in
earlier figure, but the
original image is smoothed
using a 5 x 5 averaging
kernel prior to edge
detection.
Combining gradient with thresholding
Canny Edge Detector
• Smooth the input image with a Gaussian filter
• Compute the gradient magnitude and angle images
• Apply nonmaxima suppression to the gradient magnitude image
• Use double (hysteresis) thresholding and connectivity analysis to detect and link edges
Next Session
• Segmentation
• Gradient Operators
• Thresholding
This Session
• Segmentation
• Edge Linking via Hough transform
Global Processing
• A line through a point (xi, yi) can be written in slope-intercept form as yi = a·xi + b, with slope a and intercept b
Global Processing using Hough Transform
Computational Aspects of Hough Transform
• For every point (xk, yk), vary a from cell to cell and solve for b using: b = yk − a·xk
Hough Transform using Normal Representation
• Normal representation of a line: ρ = x cos θ + y sin θ
(figure: image points and their corresponding sinusoidal curves in the (ρ, θ) parameter space)
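Accumulation in the (ρ, θ) parameter space can be sketched as follows (a minimal illustration; the angle sampling and rounding of ρ are arbitrary choices):

```python
# Each edge point (x, y) votes for every (rho, theta) cell on its
# sinusoid rho = x*cos(theta) + y*sin(theta); collinear points vote
# for a common cell.
import math

def hough_votes(points, thetas_deg):
    votes = {}                            # (rho, theta) -> count
    for x, y in points:
        for t in thetas_deg:
            rad = math.radians(t)
            rho = round(x * math.cos(rad) + y * math.sin(rad))
            votes[(rho, t)] = votes.get((rho, t), 0) + 1
    return votes

# three collinear points on the vertical line x = 2
votes = hough_votes([(2, 0), (2, 1), (2, 2)], thetas_deg=range(-90, 91, 45))
best = max(votes, key=votes.get)
print(best, votes[best])  # the cell (rho=2, theta=0) collects all three votes
```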
Edge Linking using Hough Transform
Next Session
• Segmentation by Thresholding
• Segmentation
• Edge Linking via Hough transform
This Session
• Segmentation by Thresholding
Segmentation using Thresholding
Thresholding
Threshold:
• If T is a constant applicable over an entire image, it is referred
to as global thresholding
• When the value of T changes over an image, we use the term
variable thresholding.
• The terms local or regional thresholding are used
sometimes to denote variable thresholding in which the
value of T at any point (x,y) in an image depends on
properties of a neighborhood of (x,y) (for example, the
average intensity of the pixels in the neighborhood)
• If T depends on the spatial coordinates (x, y) as well, it is called dynamic or adaptive thresholding
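Global thresholding itself is one line; a minimal sketch (the image values are made up):

```python
# Segmented image: 1 where f(x, y) > T, 0 otherwise (global T).
def threshold(img, T):
    return [[1 if p > T else 0 for p in row] for row in img]

img = [[10, 200, 30],
       [220, 40, 210]]
print(threshold(img, T=128))  # → [[0, 1, 0], [1, 0, 1]]
```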
Role of illumination
• Illumination
• The image f(x,y) is the product of illumination and reflectance: f(x,y) = i(x,y) r(x,y)
Role of illumination
• Consider a reflectance function of a noisy image and its
histogram
• Its histogram is bi-modal and can be partitioned easily:
(figure: bimodal histogram partitioned by a global threshold T)
Role of illumination
• Multiplying the reflectance by a non-uniform illumination yields the image f(x,y) = i(x,y) r(x,y)
Solution to the Illumination problem
• Taking logarithms, z(x,y) = ln f(x,y) = i′(x,y) + r′(x,y); the final histogram is the result of convolving the histograms of the log-illumination i′(x,y) and log-reflectance r′(x,y) functions
• If i(x,y) were constant, i′(x,y) would also be constant and its histogram would be a simple spike (like an impulse)
• The convolution of this spike with the histogram of r′(x,y) would leave its basic shape unchanged
• But if i′(x,y) has a broader histogram (due to non-uniform illumination), the convolution smears the histogram of r′(x,y), yielding a histogram of z(x,y) whose shape differs from that of r′(x,y)
Solution
• When access to the illumination source is available, a solution to compensate for non-uniformity is to project the illumination pattern onto a constant white reflective surface
• This yields an image g(x,y) = k·i(x,y), where k is a constant that depends on the surface
• Then, for any image f(x,y) = i(x,y) r(x,y) obtained with the same illumination pattern, we form the normalized function h(x,y) = f(x,y)/g(x,y) = r(x,y)/k
• Thus, if r(x,y) can be segmented using a threshold T, then h(x,y) can be thresholded using T/k
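Worked numerically (tiny 2×2 arrays with made-up values for k, i and r):

```python
# With g(x,y) = k*i(x,y) measured from a white surface, the ratio
# h = f/g recovers r/k, independent of the illumination pattern.
k = 0.8
i = [[1.0, 0.5], [0.5, 0.25]]             # non-uniform illumination
r = [[0.9, 0.9], [0.1, 0.1]]              # reflectance: bright top, dark bottom

f = [[i[y][x] * r[y][x] for x in range(2)] for y in range(2)]
g = [[k * i[y][x] for x in range(2)] for y in range(2)]
h = [[f[y][x] / g[y][x] for x in range(2)] for y in range(2)]
print(h)  # every entry equals r(x,y)/k, so a threshold T on r becomes T/k on h
```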
Role of Noise in Image Thresholding
• Consider a synthetic image without noise and with noise
(AWGN) and their corresponding histograms
(figures) Noiseless 8-bit image; image with AWGN of mean 0 and SD of 10 intensity levels; image with AWGN of mean 0 and SD of 50 intensity levels; and the corresponding histograms.
Next Session
• Segmentation
• Edge Linking via Hough transform
• Using Thresholding
• Role of Illumination
• Role of Noise
This Session
• Global Thresholding
• Region-Based Segmentation
Basic Global Thresholding
1. Select an initial estimate for the global threshold, T
2. Segment the image using T into group G1 (pixels with intensity > T) and group G2 (pixels with intensity ≤ T)
3. Compute the mean intensities m1 and m2 of the pixels in G1 and G2
4. Compute a new threshold value: T = (m1 + m2)/2
5. Repeat Steps 2 through 4 until the change in T between successive iterations is smaller than a predefined value
Criteria for Initial Threshold
• The initial threshold must be chosen between the minimum and maximum intensity levels in the image; the average image intensity is a good choice
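The iterative procedure can be sketched in a few lines (the sample intensities are made up; eps plays the role of the stopping value ΔT):

```python
# Basic global thresholding: split about T, recompute T as the
# midpoint of the two group means, stop when T changes by < eps.
def basic_global_threshold(pixels, eps=0.5):
    T = sum(pixels) / len(pixels)         # initial estimate: the mean
    while True:
        g1 = [p for p in pixels if p > T]
        g2 = [p for p in pixels if p <= T]
        if not g1 or not g2:
            return T
        new_T = 0.5 * (sum(g1) / len(g1) + sum(g2) / len(g2))
        if abs(new_T - T) < eps:
            return new_T
        T = new_T

# bimodal sample: dark background near 20, bright object near 200
pixels = [18, 20, 22, 19, 21, 198, 200, 202, 199]
print(basic_global_threshold(pixels))  # lands between the two modes
```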
Example
• Consider the image and its histogram
Global Thresholding
• Image smoothing
Optimal Global Thresholding
• Mixture probability density: p(z) = P1 p1(z) + P2 p2(z), where P1 + P2 = 1
• The optimal threshold T satisfies: P1 p1(T) = P2 p2(T)
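For two Gaussian densities with equal variance this condition has a closed-form solution; a small worked sketch (the means, SD and priors are made-up values):

```python
# Solving P1*p1(T) = P2*p2(T) for equal-variance Gaussians gives
#   T = (m1 + m2)/2 + sigma^2 / (m1 - m2) * ln(P2 / P1)
import math

m1, m2 = 40.0, 180.0                      # class means
sigma = 15.0                              # common standard deviation
P1, P2 = 0.5, 0.5                         # equal priors

T = (m1 + m2) / 2 + sigma ** 2 / (m1 - m2) * math.log(P2 / P1)
print(T)  # → 110.0 (with equal priors the log term vanishes)
```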
Thresholding based on several variables
• Thresholds based on several variables
• Color or multispectral histograms
• Thresholding is based on finding clusters in multidimensional space
• Example: face detection
• Different color models
• Hue and saturation instead of RGB
Region Based Segmentation
• Basic formulation
• Every pixel must be in a region
• Points in a region (Ri ) must be connected
• Regions must be disjoint
Region Based Segmentation using Region Growing
• Region growing
• Group pixels from sub-regions to larger regions based on
predefined criteria for growth.
• Basic Approach:
• Start from a set of ’seed’ pixels and append pixels
with similar properties
• Selection of similarity criteria: color, descriptors (gray
level + moments)
• Stopping rule
Challenges in Region Growing
• Challenges in region based segmentation:
• The selection of similarity criteria depends not only on
the problem under consideration, but also on the type of
image data available (color/monochrome image)
• Descriptors alone can yield misleading results if connectivity properties are not used in the region-growing process
• Visualize a random arrangement of pixels that have three
distinct intensity values.
• Grouping pixels with the same intensity value to form a
“region,” without paying attention to connectivity, would yield a
segmentation result that is meaningless
• Formulation of a stopping rule.
• Region growth should stop when no more pixels satisfy the
criteria for inclusion in that region
Algorithm for Region Growing
• A basic region-growing algorithm based on 8-connectivity may be
stated as follows:
1. Find all connected components in S(x, y) and reduce each
connected component to one pixel; label all such pixels found as 1.
All other pixels in S are labeled 0.
2. Form an image fQ such that, at each point (x, y), fQ(x, y) = 1 if the
input image satisfies a given predicate, Q, at those coordinates,
and fQ( x, y) = 0 otherwise.
3. Let g be an image formed by appending to each seed point in S all
the 1-valued points in fQ that are 8-connected to that seed point.
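The steps above can be sketched as follows — a simplified single-seed version (the predicate Q used here, an absolute intensity difference from the seed, is an assumed example):

```python
# Region growing with 8-connectivity from one seed pixel.
from collections import deque

def region_grow(img, seed, tol):
    rows, cols = len(img), len(img[0])
    region = [[0] * cols for _ in range(rows)]
    si, sj = seed
    region[si][sj] = 1
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        for di in (-1, 0, 1):             # 8-connected neighbors
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < rows and 0 <= nj < cols
                        and not region[ni][nj]
                        and abs(img[ni][nj] - img[si][sj]) <= tol):
                    region[ni][nj] = 1
                    queue.append((ni, nj))
    return region

img = [[9, 9, 1, 1],
       [9, 9, 1, 1],
       [1, 1, 1, 1]]
print(region_grow(img, seed=(0, 0), tol=2))  # grows over the block of 9s only
```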
Region Splitting and Merging
Region Splitting and Merging
• We start with the entire region, R
• If Q(R) = FALSE, we divide the image into quadrants.
• If Q is FALSE for any quadrant, we subdivide that quadrant into sub-
quadrants, and so on.
• This splitting technique has a convenient representation in the form
of so-called quadtrees; that is, trees in which each node has exactly
four descendants
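The splitting step and its quadtree can be sketched like this (the predicate Q used here — all pixels in the region equal — is an assumed example):

```python
# Recursively quarter a square region while Q(R) is FALSE; the
# recursion mirrors a quadtree in which each split node has exactly
# four descendants.
def split(img, r0, c0, size, leaves):
    vals = {img[r][c] for r in range(r0, r0 + size)
                      for c in range(c0, c0 + size)}
    if len(vals) == 1 or size == 1:       # Q(R) is TRUE, or R is one pixel
        leaves.append((r0, c0, size))
        return
    h = size // 2
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        split(img, r0 + dr, c0 + dc, h, leaves)

img = [[1, 1, 0, 0],
       [1, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
leaves = []
split(img, 0, 0, 4, leaves)
print(leaves)  # → [(0, 0, 2), (0, 2, 2), (2, 0, 2), (2, 2, 2)]
```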
Procedure for Region Splitting and Merging
1. Split into four disjoint quadrants any region Ri for which Q(Ri) = FALSE
2. When no further splitting is possible, merge any adjacent regions Rj and Rk for which Q(Rj ∪ Rk) = TRUE
3. Stop when no further merging is possible
Example: Region Splitting and Merging
Observations:
• Data in this region has a random nature, indicating that its
standard deviation should be greater than the standard deviation
of the background (which is near 0) and of the large central
region, which is smooth
• The mean value (average intensity) of a region containing data
from the outer ring should be greater than the mean of the
darker background and less than the mean of the lighter central
region
• Thus, we should be able to segment the region of interest using the following predicate:
Q(R): TRUE if σR > a AND 0 < mR < b, FALSE otherwise
where σR and mR are the standard deviation and mean of the region being processed, and a and b are nonnegative constants.
Example: Region Splitting and Merging
Observations:
• Analysis of several regions in the outer area of interest revealed
that the mean intensity of pixels in those regions did not exceed
125, and the standard deviation was always greater than 10
• With these values for a and b, the minimum size allowed for the quadregions was varied from 32 to 8
Example: Region Splitting and Merging
• Note that in all cases the segmented region (white pixels) was a
connected region that completely separates the inner, smoother
region from the background.
• Thus, the segmentation effectively partitioned the image into
three distinct areas that correspond to the three principal
features in the image: background, a dense region, and a sparse
region.
• Using a suitable mask, the white region can be separated
Region Segmentation Using K-Means Clustering
Region Segmentation Using K-Means Clustering
• The clustering criterion: find the sets C = {C1, ..., Ck} minimizing Σi Σz∈Ci ‖z − mi‖², where mi is the mean vector (or centroid) of the samples in set Ci and ‖·‖ is the vector norm of the argument
• This equation says that we are interested in finding the sets
C = {C1 , C2 , ... , Ck } such that the sum of the distances from each
point in a set to the mean of that set is minimum.
• Finding this minimum is an NP-hard problem for which no practical
solution is known
• We use the "standard" k-means algorithm, based on the Euclidean distance, to find the clusters
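The standard k-means loop can be sketched on scalar intensities (the samples and initial means are made up; real use would cluster pixel vectors):

```python
# Assign each sample to the nearest mean, recompute the means,
# repeat; this is the standard (Lloyd's) k-means iteration.
def kmeans_1d(samples, means, iters=10):
    means = list(means)
    for _ in range(iters):
        clusters = [[] for _ in means]
        for s in samples:                 # assignment step
            idx = min(range(len(means)), key=lambda i: abs(s - means[i]))
            clusters[idx].append(s)
        for i, c in enumerate(clusters):  # update step
            if c:
                means[i] = sum(c) / len(c)
    return means

# intensities drawn from two groups (dark ~10, bright ~100)
samples = [8, 10, 12, 9, 98, 100, 102, 99]
print(kmeans_1d(samples, means=[0, 128]))  # → [9.75, 99.75]
```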
Next Session
This Session
• Segmentation Using Morphological Watersheds
Segmentation Using Morphological Watersheds
Morphological Watersheds (cont.)
• Before flooding:
• To prevent water from spilling through the image borders,
we consider that the image is surrounded by dams of
height greater than the maximum image intensity.
Morphological Watersheds (cont.)
(figure: regional minima of the topographic surface)
Morphological Watersheds (cont.)
• Further flooding: the water from the left basin overflows into the right basin, so a short dam is constructed to prevent the water from merging.
• Still further flooding: the effect is more pronounced, the first dam is now longer, and new dams are created.
Morphological Watersheds (cont.)
• Summary:
• A hole is punched in each regional minimum and the topography is
flooded by water from below through the holes.
• When the rising water is about to merge in catchment basins, a dam
is built to prevent merging.
• There will be a stage where only the tops of the dams will be visible.
• These continuous and connected boundaries are the result of the segmentation.
Dam Construction
• Let M1 and M2 denote the sets of coordinates of points in two regional minima.
• Then let the set of coordinates of points in the catchment basin associated with these
two minima at stage n − 1 of flooding be denoted by Cn−1(M1) and Cn−1(M2), respectively.
These are the two gray regions
• If we continue flooding, then we will have one connected component.
• This indicates that a dam must be constructed.
• Let q be the merged connected component if we perform flooding at step n.
Morphological Watersheds (cont.)
Conditions:
1. The center of the structuring element (SE) must be in q.
2. The dilation cannot cause the sets being dilated to merge.
• The only points in q that satisfy both conditions form the one-pixel-thick path, which is the dam at step n of the flooding process.
Summary of Segmentation Techniques
• Point detection
• Line and Edge detection and linking
• Segmentation by thresholding
• Region based segmentation
• Segmentation using Morphological watersheds
Next Session