
APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY

SIXTH SEMESTER B.TECH DEGREE MODEL EXAM

Course Code: EC370
Course Name: Digital Image Processing (EC)
Max. Marks: 100    Duration: 3 Hrs
PART A
Answer two questions (question 1 is compulsory)

1. A. Explain the various basic relationships between pixels. (8)
   B. What is the need for image transform? Explain. (4)
   C. Define Walsh Transform and write its properties. (3)
2. A. Explain the following terms:
      (i) Adjacency (ii) Connectivity (iii) Regions (iv) Boundaries (8)
   B. Explain the basic concepts of sampling and quantization in the generation of a digital image. (7)
3. A. Explain the following two properties of 2D-DFT:
      (i) Convolution (ii) Correlation (6)
   B. Find the Haar transformation matrix for N = 8. (9)

PART B

Answer two questions (question 4 is compulsory)
4. A. What are the advantages of filtering in the frequency domain? (4)

B. Describe histogram processing in color images. (7)
C. Write the difference between image restoration and image enhancement. (4)
5. A. With an example, explain the concept of histogram equalization. (6)
B. Explain color image sharpening. (3)
C. Explain intensity slicing and write its applications. (6)
6. A. Write a short note on the Lagrange multiplier. (7)
B. Explain image restoration using inverse filtering. Write the drawbacks of
this method. (8)
PART C
Answer two questions (question 7 is compulsory)
7. A. What is meant by image segmentation? Write its use in image processing. (4)
B. Explain edge detection using the gradient operator. (7)
C. With an example, explain wavelet coding. (9)
8. A. Explain edge linking using the Hough transform. (7)
B. Discuss region-based segmentation. (8)
C. Explain the effect of noise in edge detection. (5)
9. A. What is image compression? Why is it needed? Explain. (6)
B. Write a short note on Image Compression Standards. (7)
C. Write a short note on Wavelet Transforms. (7)



APJ ABDUL KALAM TECHNOLOGICAL UNIVERSITY
SIXTH SEMESTER B.TECH DEGREE MODEL EXAM
Course Code: EC370
Course Name: Digital Image Processing (EC)

Answer Key
Max. Marks: 100    Duration: 3 Hrs
PART A
Answer two questions (question 1 is compulsory)
1. A. (Neighbors of a Pixel 3M, Adjacency 3M, Connectivity 2M)
Neighbors of a Pixel
1. N4(p): 4-neighbors of p.
• Any pixel p(x, y) has two vertical and two horizontal neighbors, given by (x+1, y), (x-1, y), (x, y+1), (x, y-1).
• This set of pixels is called the 4-neighbors of p and is denoted by N4(p).
• Each of them is at a unit distance from p.
2. ND(p): diagonal neighbors of p.
• The four diagonal neighbors of p have coordinates (x+1, y+1), (x+1, y-1), (x-1, y+1), (x-1, y-1); this set is denoted by ND(p).
• Each of them is at a Euclidean distance of 1.414 from p.
3. N8(p): 8-neighbors of p.
• N4(p) and ND(p) together are called the 8-neighbors of p, denoted by N8(p).
• N8 = N4 ∪ ND
• Some of the points in N4, ND and N8 may fall outside the image when p lies on the border of the image.

(x-1, y-1)   (x-1, y)   (x-1, y+1)
(x, y-1)     p(x, y)    (x, y+1)
(x+1, y-1)   (x+1, y)   (x+1, y+1)
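As an illustration of these definitions, the following is a minimal Python/NumPy-style sketch (the helper names n4, nd, n8 and inside are illustrative, not from the syllabus) that lists N4(p), ND(p) and N8(p) and discards neighbors that fall outside the image border.

def n4(x, y):
    # horizontal and vertical neighbors of p(x, y)
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    # diagonal neighbors of p(x, y)
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    # N8 = N4 U ND
    return n4(x, y) + nd(x, y)

def inside(coords, rows, cols):
    # drop points that fall outside the image when p lies on the border
    return [(i, j) for (i, j) in coords if 0 <= i < rows and 0 <= j < cols]

print(inside(n8(0, 0), rows=4, cols=4))   # corner pixel: only 3 of the 8 neighbors survive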
Adjacency
• Two pixels are connected if they are neighbors and their gray levels satisfy some specified criterion of similarity.
• For example, in a binary image two pixels are connected if they are 4-neighbors and have the same value (0/1).
• Let V be a set of intensity values used to define adjacency and connectivity.
• In a binary image V = {1}, if we are referring to adjacency of pixels with value 1.
• In a grayscale image the idea is the same, but V typically contains more elements, for example V = {180, 181, 182, ..., 200}.
• If the possible intensity values are 0 to 255, V could be any subset of these 256 values.
Types of adjacency
1. 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
2. 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
3. M-adjacency (mixed): Two pixels p and q with values from V are m-adjacent if q is in N4(p), or q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V (no intersection).
• Mixed adjacency is a modification of 8-adjacency introduced to eliminate the ambiguities that often arise when 8-adjacency is used (it eliminates multiple-path connections).
• Pixel arrangement as shown in the figure for V = {1}.
Connectivity
• Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them.
• Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.
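A minimal sketch of how connectivity can be checked in code, assuming a binary image stored as a NumPy array and V = {1}: two pixels are connected if a breadth-first search over 8-adjacent pixels with values in V can reach one from the other. The function name connected is illustrative only.

import numpy as np
from collections import deque

def connected(img, p, q, v={1}):
    # Breadth-first search from p over 8-adjacent pixels whose values are in v.
    rows, cols = img.shape
    if img[p] not in v or img[q] not in v:
        return False
    seen, frontier = {p}, deque([p])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == q:
            return True
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nb = (x + dx, y + dy)
                if (dx, dy) != (0, 0) and 0 <= nb[0] < rows and 0 <= nb[1] < cols \
                        and nb not in seen and img[nb] in v:
                    seen.add(nb)
                    frontier.append(nb)
    return False

img = np.array([[1, 0, 0],
                [0, 1, 0],
                [0, 0, 1]])
print(connected(img, (0, 0), (2, 2)))   # True: a diagonal path exists under 8-adjacency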
1. B. (Need For Image Transforms 4M)
Need For Image Transforms
● Better image processing
❖ Take into account long-range correlations in space.
❖ Give conceptual insight into spatial-frequency information: what it means to be "smooth", "moderate change", "fast change", ...
❖ Used for denoising, enhancement, restoration, ...
● Fast computation:
❖ Convolution in the spatial domain becomes multiplication in the transform domain (illustrated numerically after this list).
● Alternative representation and sensing
❖ Obtain transformed data as measurements in radiology images (medical and astrophysics) and inverse-transform to recover the image.
● Efficient storage and transmission
❖ Energy compaction
❖ Pick a few "representatives" (basis)
❖ Just store/send the "contribution" from each basis
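The "convolution vs. multiplication" point above can be checked numerically. This is a sketch assuming NumPy is available; it verifies the 2-D DFT convolution theorem, i.e. circular convolution in the spatial domain equals point-wise multiplication of the DFTs.

import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))          # "image"
h = rng.random((8, 8))          # "filter" (same size; circular convolution)

# Spatial-domain circular convolution, computed directly (slow, just for checking).
g_direct = np.zeros_like(f)
for x in range(8):
    for y in range(8):
        for m in range(8):
            for n in range(8):
                g_direct[x, y] += f[m, n] * h[(x - m) % 8, (y - n) % 8]

# Frequency domain: multiply the 2-D DFTs and invert.
g_fft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))

print(np.allclose(g_direct, g_fft))   # True: the two results agree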

1. C. (Definition 1.5M, any three properties 1.5M)

2. A. (Adjacency 2M, Connectivity 2M, Regions 2M, Boundaries 2M)

2. B (Image Sampling 5M, Quantization 2M)
Image sampling and Quantization


3. A. (3 Marks Each)
3. B. (9M)
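For 3.B, one common recursive construction of the orthonormal Haar transformation matrix (a sketch, not necessarily the derivation expected in the answer script) is H_1 = [1] and H_2N = (1/sqrt(2)) * [ H_N kron (1, 1) ; I_N kron (1, -1) ]; applying it with N = 8 gives the required 8 x 8 matrix.

import numpy as np

def haar_matrix(n):
    # Recursive construction of the normalized Haar transform matrix (n a power of 2).
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                    # averaging (scaling) rows
    bottom = np.kron(np.eye(n // 2), [1.0, -1.0])   # differencing (wavelet) rows
    return np.vstack([top, bottom]) / np.sqrt(2.0)

H8 = haar_matrix(8)
print(np.round(H8, 3))
print(np.allclose(H8 @ H8.T, np.eye(8)))   # rows are orthonormal: H H^T = I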



Part B

4. A. (any 4 advantages, 1M each)

4. B. (7M)

4. C. (2 Marks Each)


1. Image enhancement is largely a subjective process: it is a heuristic procedure designed to manipulate an image in order to achieve results that are pleasing to the viewer. Image restoration, on the other hand, involves formulating a criterion of goodness that will yield an optimal estimate of the desired result.

2. In image enhancement the degradation is not usually modeled. Image restoration attempts to reconstruct or recover an image that has been degraded by using prior knowledge of the degradation. That is, restoration techniques try to model the degradation and apply the inverse process in order to recover the original image.

5. A. (6M)


5. C. (6M)
Intensity Level Slicing:
Intensity-level slicing is used for images whose objects of interest occupy gray levels lying between the background intensity and that of other objects; the intensity of that range can be emphasized or reduced by changing the intensity levels in the band, and this process of changing intensity levels is done with the help of intensity-level slicing. It is expressed as a piecewise transformation that maps the selected band of gray levels to one output value and either maps the remaining levels to another value or leaves them unchanged.

A related operation is bit-plane slicing: when an image is uniformly quantized, the n-th most significant bit can be extracted and displayed.
Let u = k1·2^(B-1) + k2·2^(B-2) + ... + k(B-1)·2 + kB.
Then the output is expressed in terms of the selected bit kn.
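A minimal sketch of both operations, assuming an 8-bit grayscale image stored as a NumPy array (the band [100, 150] and the output levels are arbitrary choices for illustration):

import numpy as np

img = np.arange(256, dtype=np.uint8).reshape(16, 16)   # toy 8-bit image

# Intensity-level slicing: highlight gray levels in [low, high], map the rest to 0 or keep them.
low, high = 100, 150
sliced_binary = np.where((img >= low) & (img <= high), 255, 0).astype(np.uint8)      # rest to 0
sliced_preserve = np.where((img >= low) & (img <= high), 255, img).astype(np.uint8)  # rest unchanged

# Bit-plane slicing: extract the n-th most significant bit of u = k1*2^(B-1) + ... + kB.
B, n = 8, 1                               # n = 1 -> most significant bit
bit_plane = (img >> (B - n)) & 1          # values 0/1; multiply by 255 to display

print(int((sliced_binary == 255).sum()), int(bit_plane.sum()))   # 51 pixels in the band, 128 pixels with MSB = 1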

6. A. (7M)

6. B. (8M)
In inverse filtering the restored image is obtained by dividing the transform of the degraded image by the degradation function, F^(u, v) = G(u, v)/H(u, v). The main drawback is that where H(u, v) has zero or very small values, the division amplifies the noise term and dominates the estimate. One way to get around this is to limit the filter to frequencies near the origin, since H(0, 0)
is usually the highest value of H(u, v) in the frequency domain. Thus, by limiting the analysis to
frequencies near the origin, the probability of encountering zero values is reduced.
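A sketch of inverse filtering with a radial cutoff around the origin, assuming the degradation H(u, v) is known; the Gaussian low-pass blur used here is purely an assumed example, and the function name inverse_filter is illustrative.

import numpy as np

def inverse_filter(g, H, radius):
    # Divide by H(u, v) only near the DC term, where |H| is large; pass the rest through unchanged.
    M, N = g.shape
    G = np.fft.fft2(g)
    u = np.fft.fftfreq(M)[:, None]
    v = np.fft.fftfreq(N)[None, :]
    near_origin = np.sqrt(u**2 + v**2) <= radius
    F_hat = np.where(near_origin, G / (H + 1e-12), G)   # small epsilon guards exact zeros
    return np.real(np.fft.ifft2(F_hat))

# Assumed degradation: Gaussian low-pass H(u, v).
M = N = 64
u = np.fft.fftfreq(M)[:, None]
v = np.fft.fftfreq(N)[None, :]
H = np.exp(-(u**2 + v**2) / (2 * 0.05**2))

f = np.zeros((M, N)); f[24:40, 24:40] = 1.0            # simple test image
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H))          # degraded (blurred) image
restored = inverse_filter(g, H, radius=0.15)
print(np.abs(restored - f).mean() < np.abs(g - f).mean())   # the restored image should be closer to f than g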
Part C
7. A.
Image Segmentation
Digital image processing is the use of computer algorithms to perform image processing on digital images. Image segmentation is an important and challenging process in image processing. Image segmentation is used to partition an image into meaningful parts having similar features and properties. The main aim of segmentation is simplification, i.e. representing an image in a meaningful and easily analyzable way. Image segmentation is a necessary first step in image analysis. The goal of image segmentation is to divide an image into several parts/segments having similar features or attributes.
For example, suppose we are interested in detecting the movement of vehicles on a busy road, and the image we are given is an aerial image taken either from a satellite or from a helicopter. In this case our interest is to detect the moving vehicles on the road. The first level of segmentation, or the first level of subdivision, should be to extract the road from the aerial image. Once we identify the road, we go for further analysis of the road so that we can identify every individual vehicle on it, and once we have identified the vehicles, we can go for vehicle motion analysis.
The level of segmentation or subdivision is therefore application dependent.
The basic applications of image segmentation are content-based image retrieval, medical imaging, object detection and recognition tasks, automatic traffic control systems, video surveillance, etc. Image segmentation can be classified into two basic types: local segmentation (concerned with a specific part or region of the image) and global segmentation (concerned with segmenting the whole image, consisting of a large number of pixels).
If R represents an image, then image segmentation can be defined mathematically as the division of R into sub-regions R1, R2, ..., Rn such that [12]:
a) Ri is a connected set, i = 1, 2, ..., n.
b) Ri ∩ Rj = Ø for all i and j, i ≠ j.
c) Q(Ri) = TRUE for i = 1, 2, ..., n.
d) Q(Ri ∪ Rj) = FALSE for adjacent regions Ri and Rj, where Q(Rk) is a logical predicate.
The image segmentation technique in which each pixel is assigned to the correct object segment is called 'perfect segmentation'. It is only a theoretical concept and cannot be achieved, because of the following reasons:
(a) Pixels belonging to the same object may be classified as belonging to different segments, so a single object may be represented by two or more segments. This is called oversegmentation. [2][11][12]
(b) Pixels belonging to different objects may be classified as belonging to the same object, so a single segment may contain several objects. This is called undersegmentation. [13]

Image segmentation approaches can be categorized into two types based on the properties of the image.
A. Discontinuity detection based approach. In this approach an image is partitioned into regions based on abrupt changes in intensity. Edge detection based segmentation falls in this category: edges formed due to intensity discontinuities are detected and linked to form the boundaries of regions [1].
B. Similarity detection based approach. In this approach an image is segmented into regions based on similarity. The techniques that fall under this approach are thresholding, region growing, and region splitting and merging; all of these divide the image into regions having similar sets of pixels. Clustering techniques also use this methodology, dividing the image into a set of clusters having similar features based on some predefined criteria [1][2]. In other words, image segmentation can be approached from three perspectives: the region approach, the edge approach, and data clustering. The region approach falls under similarity detection, edge and boundary detection fall under discontinuity detection, and clustering techniques are also under similarity detection.
DISCONTINUITY BASED APPROACH
As already noted, in the discontinuity based image segmentation approach our interest is mainly to identify:

1. ​Isolated points (Point detection)


2. ​Edge detection
3. ​Line detection
7. B.
​EDGE DETECTION
Edge detection techniques are well-developed techniques of image processing in their own right. Edge-based segmentation methods are based on the rapid change of intensity value in an image, because a single intensity value does not provide good information about edges. Edge detection techniques locate the edges where either the first derivative of intensity is greater than a particular threshold or the second derivative has zero crossings. In edge-based segmentation methods, first the edges are detected and then connected together to form the object boundaries that segment the required regions. The two basic edge-based segmentation methods are gray-histogram and gradient-based methods. To detect the edges, one of the basic edge detection operators such as the Sobel operator, the Canny operator or the Roberts operator can be used. The result of these methods is basically a binary image. These are structural techniques based on discontinuity detection.
Edge detection is the most commonly used approach for detecting discontinuities in an image. An edge is nothing but the boundary between two regions having distinct intensity (gray) levels.
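A sketch of gradient-based edge detection with the Sobel operators, assuming SciPy is available for the 2-D convolution; the gradient magnitude is thresholded to give a binary edge map, and the test image is an assumed synthetic step edge.

import numpy as np
from scipy.signal import convolve2d

# Sobel masks approximating the partial derivatives along the two axes.
Kx = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]])
Ky = Kx.T

img = np.zeros((64, 64)); img[:, 32:] = 255.0      # vertical step edge as a test image

gx = convolve2d(img, Kx, mode='same', boundary='symm')
gy = convolve2d(img, Ky, mode='same', boundary='symm')
magnitude = np.hypot(gx, gy)                       # |grad f| = sqrt(gx^2 + gy^2)

edges = magnitude > 0.5 * magnitude.max()          # simple global threshold
print(np.argwhere(edges[0]).ravel())               # edge pixels cluster around column 32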

7. C.
Wavelet Coding:
Wavelet coding is based on the idea that the coefficients of a transform that decorrelates the pixels of an image can be coded more efficiently than the original pixels themselves. If the transform's basis functions, in this case wavelets, pack most of the important visual information into a small number of coefficients, the remaining coefficients can be quantized coarsely or truncated to zero with little image distortion. In a typical wavelet coding system, to encode a 2^J x 2^J image, an analyzing wavelet, Ψ, and a minimum decomposition level, J - P, are selected and used to compute the image's discrete wavelet transform. If the wavelet has a complementary scaling function φ, the fast wavelet transform can be used. In either case, the computed transform converts a large portion of the original image to horizontal, vertical, and diagonal decomposition coefficients with zero mean and Laplacian-like distributions.
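A minimal sketch of the coding idea, assuming the PyWavelets package (pywt) is available: compute a multi-level DWT, zero out the small detail coefficients (a crude stand-in for coarse quantization), and reconstruct with little distortion. The threshold of 20 and the synthetic ramp image are arbitrary assumptions for illustration.

import numpy as np
import pywt

img = np.add.outer(np.arange(64.0), np.arange(64.0))   # smooth synthetic image

# Two-level 2-D DWT with the Haar wavelet.
coeffs = pywt.wavedec2(img, 'haar', level=2)

# Keep the approximation sub-band, hard-threshold the detail sub-bands.
thresh = 20.0
coeffs_kept = [coeffs[0]] + [
    tuple(pywt.threshold(d, thresh, mode='hard') for d in detail)
    for detail in coeffs[1:]
]

reconstructed = pywt.waverec2(coeffs_kept, 'haar')
kept = sum(np.count_nonzero(d) for detail in coeffs_kept[1:] for d in detail)
total = sum(d.size for detail in coeffs[1:] for d in detail)
print(kept / total, np.abs(reconstructed - img).mean())   # fraction of detail coefficients kept vs. mean error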



8. A. (7M)

1. Global Processing Approach (Hough Transform)

The Hough transform is a technique which can be used to isolate features of a particular shape within an image. The Hough transform is most commonly used for the detection of regular curves such as lines, circles, ellipses, etc. The main advantage of the Hough transform technique is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise.
Using the normal representation ρ = x cos θ + y sin θ, it maps a straight line y = mx + c in the Cartesian coordinate system into a single point in the (ρ, θ) plane. A point (x, y) in the Cartesian plane, through which infinitely many lines pass, maps to a single sinusoidal curve in the (ρ, θ) plane.
That is, we attempt to link edge pixels that lie on specified curves.

Brute force method: When the specified curve is a straight line, the line between each pair of edge pixels in the image is considered. The distance between every other edge pixel and the line in question is then calculated. When the distance is less than a specified threshold, the pixel is considered to be part of the line.

When different values of a and b are considered, the equation yi = a·xi + b gives all possible lines through the point (xi, yi).
The equation b = -xi·a + yi gives one line in the ab-plane for a specific (xi, yi).
When another point (xj, yj) is considered, b = -xj·a + yj represents another line in the ab-plane.
Suppose that these two lines intersect at the point (a', b'); then y = a'x + b' represents the line in the xy-plane on which both (xi, yi) and (xj, yj) lie.
Since a computer can only deal with a finite number of straight lines, we subdivide the parameter space ab into a finite number of accumulator cells.
To deal with the problem that the slope a becomes unbounded for near-vertical lines, we instead represent a straight line as
x cos θ + y sin θ = ρ
so that (a, b) → (ρ, θ)
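A sketch of the accumulator-cell voting described above, assuming a small binary edge image; each edge pixel votes along its sinusoid in the quantized (ρ, θ) plane and the accumulator peak identifies the dominant line. The function name hough_lines and the cell counts are illustrative choices.

import numpy as np

def hough_lines(edge_img, n_theta=180, n_rho=200):
    rows, cols = edge_img.shape
    diag = int(np.ceil(np.hypot(rows, cols)))
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)                    # x = column, y = row
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            rho = x * np.cos(theta) + y * np.sin(theta)
            r_idx = np.argmin(np.abs(rhos - rho))    # nearest accumulator cell
            acc[r_idx, t_idx] += 1
    return acc, rhos, thetas

# Edge pixels lying on the vertical line x = 5.
edges = np.zeros((20, 20), dtype=np.uint8)
edges[:, 5] = 1
acc, rhos, thetas = hough_lines(edges)
r_idx, t_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(round(rhos[r_idx], 1), round(np.degrees(thetas[t_idx]), 1))   # approx rho = 5, theta = 0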

8. B.
REGION BASED SEGMENTATION
Region-based segmentation is the partitioning of an image into similar/homogeneous areas of connected pixels through the application of homogeneity/similarity criteria among candidate sets of pixels. Each of the pixels in a region is similar with respect to some characteristic or computed property such as colour, intensity and/or texture. Failure to adjust the homogeneity/similarity criteria accordingly will produce undesirable results. The following are some of them:

● The segmented region might be smaller or larger than the actual

● Over or under-segmentation of the image (arising of pseudo objects or missing objects)

● Fragmentation
Region growing is a simple region-based image segmentation method. It is also classified as a pixel-based
image segmentation method since it involves the selection of initial seed points. This approach to
segmentation examines neighboring pixels of initial “seed points” and determines whether the pixel
neighbors should be added to the region. The process is iterated on, in the same manner as general data
clustering algorithms. The fundamental drawback of histogram-based region detection is that histograms
provide no spatial information (only the distribution of gray levels).
1. Region Growing Technique

Region-growing approaches exploit the important fact that pixels which are close together have similar gray values. The region growing approach is the opposite of the split-and-merge approach.

● An initial set of small areas is iteratively merged according to similarity constraints.

● Start by choosing an arbitrary seed pixel and compare it with neighbouring pixels.

● The region is grown from the seed pixel by adding in neighbouring pixels that are similar, increasing the size of the region.

● When the growth of one region stops, we simply choose another seed pixel which does not yet belong to any region and start again.

● This whole process is continued until all pixels belong to some region.
Region growing methods often give very good segmentations that correspond well to the observed edges.
Basic concept of seed points:
The first step in region growing is to select a set of seed points. Seed point selection is based on some user
criterion (for example, pixels in a certain gray-level range, pixels evenly spaced on a grid, etc.). The
initial region begins as the exact location of these seeds. The regions are then grown from these seed
points to adjacent points depending on a region membership criterion. The criterion could be, for
example, pixel intensity, gray level texture, or color. Since the regions are grown on the basis of the
criterion, the image information itself is important. For example, if the criterion were a pixel intensity threshold value, knowledge of the histogram of the image would be of use, as one could use it to determine a suitable threshold value for the region membership criterion.
Some important issues: We can conclude several important issues about region growing:
1. The suitable selection of seed points is important.
2. More information about the image is better.
3. The choice of the minimum area threshold.
4. The choice of the similarity threshold value.
Advantages and Disadvantages of Region Growing:
We briefly summarize the advantages and disadvantages of region growing.
Advantages:
1. Region growing methods can correctly separate regions that have the same properties we define.
2. Region growing methods can provide good segmentation results for images that have clear edges.
3. The concept is simple. We only need a small number of seed points to represent the property we want, and then grow the region.
4. We can determine the seed points and the criteria we want to apply.
5. We can choose multiple criteria at the same time.
6. It performs well with respect to noise.
Disadvantages:
1. The computation is expensive, in both time and power.
2. Noise or variation of intensity may result in holes or oversegmentation.
3. This method may not distinguish the shading of real images.
The noise problem can be reduced by using a mask to filter out holes and outliers, so in practice the problem of noise is largely eliminated.
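A minimal sketch of seeded region growing as described above, assuming a grayscale NumPy image, a single seed, and a simple similarity criterion (absolute difference from the seed intensity below a threshold). The function name region_grow and the threshold value are illustrative.

import numpy as np
from collections import deque

def region_grow(img, seed, thresh=10):
    # Grow a region from 'seed' by adding 8-connected neighbours whose intensity
    # differs from the seed intensity by less than 'thresh'.
    rows, cols = img.shape
    region = np.zeros((rows, cols), dtype=bool)
    region[seed] = True
    frontier = deque([seed])
    seed_val = float(img[seed])
    while frontier:
        x, y = frontier.popleft()
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (0 <= nx < rows and 0 <= ny < cols and not region[nx, ny]
                        and abs(float(img[nx, ny]) - seed_val) < thresh):
                    region[nx, ny] = True
                    frontier.append((nx, ny))
    return region

img = np.full((10, 10), 50.0); img[2:6, 2:6] = 200.0    # bright square on a dark background
print(region_grow(img, seed=(3, 3)).sum())               # 16: the 4x4 bright square is recovered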
Region Growing Methods:



There are a few main points that are important to consider when trying to segment an image. You must
have regions that are disjoint because a single point cannot be contained in two different regions. The
regions must span the entire image because each point has to belong to one region or another. To get
regions at all, you must define some property that will be true for each region that you define. To ensure
that the regions are well defined and that they are indeed regions themselves and not several regions
together or just a fraction of a single region, that property cannot be true for any combination of two or
more regions. If these criteria are met, then the image is truly segmented into regions. Two different region determination techniques are discussed here: one that focuses on edge detection as its main determination characteristic and another that uses region growing to locate separate areas of the image.

Region Split and Merge Method

This method, proposed by B. Penetal [92], works on the basis of a quadtree, with the main objective of distinguishing the homogeneity of the image. In this method the entire image is considered as one single region, which is then divided into four different quadrants based on certain predefined criteria. The figure illustrates the method.

The split and merge method is the opposite of region growing. This technique works on the complete image. Region splitting is a top-down approach: it starts with the complete image and splits it up such that the segmented slices are more homogeneous than the whole. Splitting alone is insufficient for sensible segmentation, as it severely limits the shapes of segments. Hence, a merging phase after the splitting is always desirable, which is termed the split-and-merge algorithm. Any region can be split into sub-regions, and the appropriate regions can be merged into a region. Rather than choosing kernel points, the user can divide an image into a set of arbitrary unconnected regions and then integrate the regions [2]-[3] in an attempt to produce a reasonable image segmentation. Region splitting and merging is usually implemented using a quadtree data structure.
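A sketch of the splitting half of this idea, assuming a quadtree split driven by a simple homogeneity predicate (intensity range within the block below a tolerance); the merge phase, which recombines adjacent homogeneous leaves, is omitted for brevity, and the tolerance and block sizes are arbitrary assumptions.

import numpy as np

def quadtree_split(img, x, y, size, tol=10, min_size=2, leaves=None):
    # Recursively split the square block (x, y, size) until Q(R) = TRUE,
    # where Q(R) checks that max - min intensity within the block is below 'tol'.
    if leaves is None:
        leaves = []
    block = img[x:x + size, y:y + size]
    if block.max() - block.min() <= tol or size <= min_size:
        leaves.append((x, y, size))
    else:
        half = size // 2
        for dx in (0, half):
            for dy in (0, half):
                quadtree_split(img, x + dx, y + dy, half, tol, min_size, leaves)
    return leaves

img = np.zeros((16, 16)); img[:8, :8] = 255.0           # one bright quadrant
leaves = quadtree_split(img, 0, 0, 16)
print(len(leaves), leaves)                               # the image splits into 4 homogeneous blocks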
8. C.
● Presence of noise.
If the image is noisy, then after the edge detection operation either the boundary points will not be continuous, or there may be some spurious edge points which are not actually edge points on the boundary of any of the regions.
To tackle this problem, we have to go for linking of the edge points, so that after linking we get a meaningful description of the boundary of a particular segment.
There are mainly two approaches for the edge linking operation: the first is the local processing approach and the second is the global processing approach.
9. A

The term data compression refers to the process of reducing the amount of data required to represent a given quantity of information. A clear distinction must be made between data and information; they are not synonymous. In fact, data are the means by which information is conveyed, and various amounts of data may be used to represent the same amount of information. This might be the case, for example, if a long-winded individual and someone who is short and to the point were to relate the same story. Here, the information of interest is the story; words are the data used to relate the information. If the two individuals use a different number of words to tell the same basic story, two different versions of the story are created, and at least one includes nonessential data, that is, data (or words) that either provide no relevant information or simply restate what is already known. It is thus said to contain data redundancy. Data redundancy is a central issue in digital image compression. It is not an abstract concept but a mathematically quantifiable entity. If n1 and n2 denote the number of information-carrying units in two data sets that represent the same information, the relative data redundancy RD of the first data set (the one characterized by n1) can be defined as

RD = 1 - 1/CR

where CR, commonly called the compression ratio, is

CR = n1 / n2

In digital image compression, three basic data redundancies can be identified and exploited: coding redundancy, interpixel redundancy, and psychovisual redundancy. Data compression is achieved when one or more of these redundancies are reduced or eliminated.
Coding Redundancy
Interpixel Redundancy
Psychovisual Redundancy
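A worked numerical sketch of the two definitions above, CR = n1/n2 and RD = 1 - 1/CR; the figures used here are arbitrary examples, not taken from the question paper.

def relative_redundancy(n1, n2):
    # CR = n1 / n2 (compression ratio), RD = 1 - 1/CR (relative data redundancy)
    cr = n1 / n2
    return cr, 1.0 - 1.0 / cr

# Example: a 256 x 256, 8-bit image (524288 bits) coded with 131072 bits.
cr, rd = relative_redundancy(256 * 256 * 8, 131072)
print(cr, rd)   # CR = 4.0, RD = 0.75 -> 75% of the data in the first set is redundant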