Ch07_Color Image Processing

Chapter Seven discusses color image processing, highlighting the fundamentals of color, including hue, brightness, and saturation, as well as the motivation for using color in image processing. It covers various color models such as RGB, CMY, CMYK, and HSI, and explains the basics of full-color image processing and morphological image processing techniques. Key operations like dilation, erosion, opening, and closing are also introduced as essential tools for manipulating image shapes and structures.


Chapter Seven

Color Image Processing

Color Fundamentals

 Color is the aspect of things that is caused by different qualities of light being reflected or emitted by them.
 The characteristics used to distinguish one color from another are:
 Hue – refers to the primary colors – red, green, and blue – and the secondary colors – magenta (red plus blue), cyan (green plus blue), and yellow (red plus green)
 Brightness – the amount of intensity
 Saturation – the degree of purity of a hue
…cont’d

 Motivation to use color in image processing:
 Color is a powerful descriptor that simplifies object identification and extraction from a scene.
 Humans can perceive thousands of color shades, as opposed to only about two dozen shades of gray.
 Color image processing is divided into two major areas:
 Full-color processing – images are acquired and processed in full color
…cont’d

 Pseudo-color processing – images are by nature grayscale and are converted to color images for visualization purposes.
 In 1666, Isaac Newton discovered that when a beam of sunlight passes through a glass prism, the emerging beam of light is split into a spectrum of colors ranging from violet at one end to red at the other.
…cont’d

 Three basic quantities used to describe the quality of a chromatic light source are:
 Radiance - the total amount of energy that flows from the light source
 Luminance - a measure of the amount of energy that an observer perceives from a light source, and
 Brightness - a subjective descriptor that is practically unmeasurable.
…cont’d

 The human eye sees colors as variable combinations of the primary colors: red (R), green (G), and blue (B).
 The primary colors can be added together to produce the secondary colors of light: magenta (red plus blue), cyan (green plus blue), and yellow (red plus green).
 Mixing the three primaries, or a secondary with its opposite primary color, in the right intensities produces white light.
…cont’d

 Hue and saturation taken together are called chromaticity.
 A color may be characterized by its brightness and chromaticity.
 The amounts of red, green, and blue needed to form any particular color are called the tristimulus values, and are denoted X, Y, and Z, respectively.
 A color is then specified by its trichromatic coefficients, defined as

x = X / (X + Y + Z),  y = Y / (X + Y + Z),  z = Z / (X + Y + Z)

so that x + y + z = 1.
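These definitions can be checked with a minimal Python sketch (the tristimulus values below are made up for illustration):

```python
# Minimal sketch: trichromatic coefficients from tristimulus values X, Y, Z.
# The sample values are illustrative, not measured data.
def trichromatic_coefficients(X, Y, Z):
    total = X + Y + Z
    return X / total, Y / total, Z / total

x, y, z = trichromatic_coefficients(20.0, 30.0, 50.0)
print(x, y, z)       # 0.2 0.3 0.5
print(x + y + z)     # the coefficients always sum to 1
```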
7.2 Color Models (color space or
color system)
 A color model aims to facilitate the specification of colors in some standard way.
 In digital image processing, commonly used
color models are:
 RGB (Red, Green, Blue)
 CMY (Cyan, Magenta, Yellow)
 CMYK (Cyan, Magenta, Yellow, Black)
 HSI (Hue, Saturation, Intensity)

The RGB Model

 Each color appears in its primary color components red, green, and blue.
 This model is based on a Cartesian coordinate system.
 All color values R, G, and B are normalized in the range [0, 1].
 Alternatively, each of R, G, and B can be represented with values from 0 to 255.
 Each RGB color image consists of three component images, one for each primary color.
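A small NumPy sketch of this structure (the 2×2 image below is hypothetical):

```python
import numpy as np

# Sketch: an RGB image as three component images (hypothetical 2x2 image).
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# One grayscale component image per primary color.
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]
print(red.shape)                         # (2, 2)

# Normalizing component values from [0, 255] to [0, 1]:
rgb_norm = rgb.astype(np.float64) / 255.0
print(rgb_norm.min(), rgb_norm.max())    # 0.0 1.0
```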
…cont’d

 The number of bits used to represent each pixel in RGB space is called the pixel depth.
 A 24-bit RGB image (8 bits per component) is often referred to as a full-color image.
CMY Model

 This model is made up of the secondary colors of light – cyan, magenta, and yellow.
 Light reflected from a surface coated with pure cyan does not contain red.
 Similarly, pure magenta does not reflect green, and pure yellow does not reflect blue.
 Therefore,

C = 1 - R,  M = 1 - G,  Y = 1 - B

(assuming all color values are normalized to the range [0, 1]).
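A minimal NumPy sketch of this relationship, with values normalized to [0, 1]:

```python
import numpy as np

# Sketch: CMY from normalized RGB via [C, M, Y] = [1, 1, 1] - [R, G, B].
def rgb_to_cmy(rgb):
    """rgb: array-like with values in [0, 1]; returns CMY in [0, 1]."""
    return 1.0 - np.asarray(rgb, dtype=np.float64)

pixel = [1.0, 0.0, 0.0]          # pure red
print(rgb_to_cmy(pixel))         # [0. 1. 1.]: no cyan, full magenta and yellow
```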
CMYK Model

 The CMYK color space is a variation on the CMY model.
 It adds black (K).
 When equal components of cyan, magenta, and yellow inks are mixed, the result is usually a dark brown, not black.
 Adding black ink solves this problem.
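One common CMY → CMYK conversion can be sketched as follows; the exact undercolor-removal formulas vary between printing systems, so this is illustrative only:

```python
# Sketch of a common CMY -> CMYK conversion: pull the shared gray component
# out of the colored inks and print it as black (K) instead.
def cmy_to_cmyk(c, m, y):
    k = min(c, m, y)
    if k == 1.0:                  # pure black: no colored ink needed at all
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

print(cmy_to_cmyk(1.0, 1.0, 1.0))   # (0.0, 0.0, 0.0, 1.0): black ink replaces the muddy mix
```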
HSI Model

 When humans view a color object, we describe it by its hue, saturation, and brightness.
 Hue is a color attribute that describes a pure color (pure yellow, orange, or red), whereas saturation gives a measure of the degree to which a pure color is diluted by white light.
 Brightness is a subjective descriptor that is practically impossible to measure.
…cont’d

 The HSI (hue, saturation, intensity) color model decouples the intensity component from the color-carrying information (hue and saturation) in a color image.
 As a result, the HSI model is a useful tool for developing image processing algorithms.
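The standard geometric RGB → HSI conversion formulas (not shown on these slides) can be sketched for a single pixel; H is returned in degrees:

```python
import math

# Sketch: RGB (values in [0, 1]) to HSI for a single pixel, using the
# standard geometric formulas; hue is measured in degrees from red.
def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = math.degrees(math.acos(num / den)) if den != 0 else 0.0
    h = theta if b <= g else 360.0 - theta
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red: hue 0, fully saturated
```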
7.3 Basics of Full-Color Image
Processing
 Full-color image processing approaches fall into two major categories:
 Processing each grayscale component image individually, then forming a composite color image from the individually processed components.
 Processing color pixels directly.
 Full-color images have at least three components, and hence color pixels are vectors.
…cont’d

 For example, in the RGB system, each color point can be interpreted as a vector extending from the origin to that point in the RGB coordinate system.
 Let c represent an arbitrary vector in RGB color space:

c = [cR, cG, cB]^T = [R, G, B]^T

 The components of c are the RGB components of a color image at a point.
…cont’d

 The colors of the pixels in an image are a function of spatial coordinates (x, y), written using the notation

c(x, y) = [cR(x, y), cG(x, y), cB(x, y)]^T = [R(x, y), G(x, y), B(x, y)]^T

 For an image of size M×N, there are MN such vectors, c(x, y), for x = 0, 1, 2, …, M - 1 and y = 0, 1, 2, …, N - 1.
…cont’d

 The results of individual color component processing are not always equivalent to direct processing in color vector space.
 In such cases we must use approaches that process the elements of color points directly.
 We use the terms vectors, points, and voxels interchangeably when referring to images composed of more than one 2-D image.
…cont’d

 In order for per-component-image and vector-based processing to be equivalent, two conditions have to be satisfied:
 first, the process has to be applicable to both vectors and scalars;
 second, the operation on each component of a vector (i.e., each voxel) must be independent of the other components.
…cont’d

 Spatial neighborhood processing of grayscale and full-color images:
(a) a 2-D neighborhood in a grayscale image; (b) the corresponding 3-D neighborhood of voxels in an RGB color image.
…cont’d
 In (a), averaging would be done by summing
the intensities of all the pixels in the 2-D
neighborhood, then dividing the result by the
total number of pixels in the neighborhood.
 In (b), averaging would be done by summing
all the voxels in the 3-D neighborhood, then
dividing the result by the total number of
voxels in the neighborhood.
 Each of the three components of the average voxel is the average of the pixels in the corresponding single-image neighborhood centered on that location.
…cont’d

 But the same result is obtained if the averaging is done on the pixels of each component image independently, and the three results are then combined into a single average voxel.
 Thus, spatial neighborhood averaging can be carried out per component image or directly on RGB image voxels.
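This equivalence is easy to verify numerically; a sketch with a random 3×3 RGB neighborhood:

```python
import numpy as np

# Sketch: spatial neighborhood averaging on an RGB neighborhood gives the
# same result whether done per component image or on the voxels directly.
rng = np.random.default_rng(0)
neighborhood = rng.random((3, 3, 3))          # 3x3 spatial window, 3 channels

# Vector (voxel) averaging: mean over all spatial positions at once.
voxel_mean = neighborhood.reshape(-1, 3).mean(axis=0)

# Per-component averaging: mean of each channel's 3x3 window independently.
per_channel = np.array([neighborhood[..., c].mean() for c in range(3)])

print(np.allclose(voxel_mean, per_channel))   # True: the two orders agree
```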
7.4 Morphological image
processing
 Mathematical morphology is a tool for extracting image components that are useful in the representation and description of region shape, such as boundaries, skeletons, and the convex hull.
 In image processing, we use morphology with two types of sets of pixels: objects and structuring elements (SEs).
 Objects are defined as sets of foreground pixels.
…cont’d

 Structuring elements can be specified in terms of both foreground and background pixels.
 Because images are rectangular arrays and sets in general are of arbitrary shape, applications of morphology in image processing require that sets be embedded in rectangular arrays.
 In forming such arrays, we assign a background value to all pixels that are not members of object sets.
…cont’d

Left: objects represented as graphical sets. Center: objects embedded in a background to form a graphical image. Right: object and background digitized to form a digital image.
…cont’d

 Structuring elements are defined in the same manner.
 Difference between the way we represent digital images and digital structuring elements: in the image there is a border of background pixels surrounding the objects, while there is none in the structuring element.
…cont’d

 The reflection of a set (structuring element) B about its origin, denoted B̂, is defined as

B̂ = { w | w = -b, for b ∈ B }

 If B is a set of points in 2-D, then B̂ is the set of points in B whose (x, y) coordinates have been replaced by (-x, -y).
…cont’d

 Structuring elements and their reflections about the origin.
…cont’d

 The translation of a set B by point z = (z1, z2), denoted (B)z, is defined as

(B)z = { c | c = b + z, for b ∈ B }

 If B is a set of pixels in 2-D, then (B)z is the set of pixels in B whose (x, y) coordinates have been replaced by (x + z1, y + z2).
 For morphological image processing, we need a structuring element.
 It is similar to a mask used in spatial convolution.
 Morphological operations are defined for two images: the image being processed is the active image, and the second image is called the kernel or structuring element.
…cont’d

 Each structuring element has a prespecified shape, which is applied as a filter on the active image.
 The active image can be modified by masking it with structuring elements of different sizes and shapes.
 The basic operations in mathematical morphology are dilation and erosion. These two operations can be combined in sequence to develop other operations, such as opening and closing.
…cont’d

 Dilation operation:
 Given a set A and the structuring element B, the dilation of A with B is defined as:

A ⊕ B = { z | (B̂)z ∩ A ≠ ∅ }

 Generally the size of B (or B̂) is smaller than that of A. If B̂ is placed at the boundary of A, then the size of A increases to include the points of B̂. So all the points touching the boundary will be included because of the dilation.
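The definition can be implemented directly; a pure-NumPy sketch (assumes an odd-sized SE with its origin at the center):

```python
import numpy as np

# Sketch: binary dilation A ⊕ B straight from the definition -- mark z
# wherever the reflected, translated SE overlaps A. No SciPy needed.
def dilate(a, b):
    """a: 2-D binary array (object A); b: 2-D binary structuring element."""
    out = np.zeros_like(a)
    bh, bw = b.shape
    pad = np.pad(a, ((bh // 2,) * 2, (bw // 2,) * 2))   # background border
    b_hat = b[::-1, ::-1]                  # reflection of B about its origin
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            window = pad[i:i + bh, j:j + bw]
            out[i, j] = np.any(window & b_hat)          # (B̂)z overlaps A?
    return out

a = np.zeros((5, 5), dtype=int)
a[2, 2] = 1                                # single foreground pixel
se = np.ones((3, 3), dtype=int)
print(dilate(a, se).sum())                 # 9: the point grows to a 3x3 block
```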
…cont’d

 If there is a very small hole inside the object A, then this unfilled hole inside the object is filled up.
 Small disconnected regions outside the boundary may be connected by dilation.
 An irregular boundary may also be smoothed out by dilation. Dilation is translation invariant.
…cont’d

(a) Low-resolution text showing broken characters. (b) Structuring element. (c) Dilation of (a) by (b). Broken segments were joined.
…cont’d

 Erosion operation:
 The erosion of A with B is given by:

A ⊖ B = { z | (B)z ⊆ A }

 In this operation, the structuring element B should be completely inside the object A, and that is why the boundary pixels are not included.
 Two nearly connected regions will be separated by the erosion operation.
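Erosion can be sketched the same way: keep z only where the translated SE fits entirely inside A (pure NumPy, odd-sized SE with origin at the center):

```python
import numpy as np

# Sketch: binary erosion A ⊖ B from the definition -- a pixel survives only
# if the SE, centered there, is fully contained in the object.
def erode(a, b):
    """a: 2-D binary array (object A); b: 2-D binary structuring element."""
    out = np.zeros_like(a)
    bh, bw = b.shape
    pad = np.pad(a, ((bh // 2,) * 2, (bw // 2,) * 2))   # background border
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            window = pad[i:i + bh, j:j + bw]
            out[i, j] = np.all(window[b == 1])          # (B)z contained in A?
    return out

a = np.ones((5, 5), dtype=int)             # solid 5x5 object
se = np.ones((3, 3), dtype=int)
print(erode(a, se).sum())                  # 9: the boundary ring is stripped away
```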
…cont’d

 Any hole inside the object will be enlarged, and the boundary of an object may be smoothed by the erosion operation.
 Dilation and erosion are dual operations in the sense that

(A ⊖ B)^c = A^c ⊕ B̂

 Dilation expands the components of a set, and erosion shrinks them.
…cont’d

 Closing operation:
 The closing of set A by structuring element B, denoted A • B, is defined as

A • B = (A ⊕ B) ⊖ B

 Closing tends to smooth sections of contours; it generally fuses narrow breaks and long thin gulfs, eliminates small holes, and fills gaps in the contour.
…cont’d

(a) Image I, composed of set (object) A and background. (b) Structuring element B. (c) Translations of B such that B does not overlap any part of A. (A is shown dark for clarity.) (d) Closing of A by B.
…cont’d

 Opening operation:
 The opening of set A by structuring element B, denoted A ∘ B, is defined as

A ∘ B = (A ⊖ B) ⊕ B

 Opening generally smooths the contour of an object, breaks narrow isthmuses, and eliminates thin protrusions.

(a) Image I, composed of set (object) A and background. (b) Structuring element B. (c) Translations of B while being contained in A. (A is shown dark for clarity.) (d) Opening of A by B.
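Opening and closing can be built from the two basic operations; a pure-NumPy sketch assuming a symmetric, odd-sized SE (so that B̂ = B):

```python
import numpy as np

# Sketch: opening (erode then dilate) and closing (dilate then erode) built
# from one minimal helper; `fits` is np.all for erosion, np.any for dilation.
def _morph(a, b, fits):
    bh, bw = b.shape
    pad = np.pad(a, ((bh // 2,) * 2, (bw // 2,) * 2))
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = fits(pad[i:i + bh, j:j + bw][b == 1])
    return out

def opening(a, b):   # A ∘ B = (A ⊖ B) ⊕ B
    return _morph(_morph(a, b, np.all), b, np.any)

def closing(a, b):   # A • B = (A ⊕ B) ⊖ B
    return _morph(_morph(a, b, np.any), b, np.all)

a = np.zeros((7, 7), dtype=int)
a[2:5, 2:5] = 1          # 3x3 square object
a[0, 0] = 1              # isolated speck (thin protrusion)
se = np.ones((3, 3), dtype=int)
print(opening(a, se).sum())   # 9: the speck is eliminated, the square survives
```

Note that on a finite array the image border acts as background, so closing near the border behaves as if the image were surrounded by background pixels.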
