
SVCE TIRUPATI


COURSE MATERIAL

DIGITAL IMAGE PROCESSING


SUBJECT: DIGITAL IMAGE PROCESSING (EC20APE704)

UNIT: 5

COURSE: B.TECH

DEPARTMENT: ELECTRONICS AND COMMUNICATION ENGINEERING

SEMESTER: 4-1

PREPARED BY: B Chandrakala, Assistant Professor

REVISED BY: A Krishna Mohan, Assistant Professor

VERSION: 2

PREPARED / REVISED DATE: 23-08-2023

DIP-UNIT-V
BTECH_ECE-SEM 41
TABLE OF CONTENTS – UNIT 5
S. NO  CONTENTS
1      COURSE OBJECTIVES
2      PREREQUISITES
3      SYLLABUS
4      COURSE OUTCOMES
5      CO - PO/PSO MAPPING
6      LESSON PLAN
7      ACTIVITY BASED LEARNING
8      LECTURE NOTES
       5.1 Introduction
       5.2 Color Models
           5.2.1 RGB color model
           5.2.2 CMY and CMYK color model
           5.2.3 HSI color model
           5.2.4 YUV color model
       5.3 Color Transformations
           5.3.1 Formulation
           5.3.2 Color complements
       5.4 Color Slicing
       5.5 Tone and Color Corrections
       5.6 Smoothing and Sharpening
       5.7 Color Segmentation
9      PRACTICE QUIZ
10     ASSIGNMENTS
11     PART A QUESTIONS & ANSWERS (2 MARKS QUESTIONS)
12     PART B QUESTIONS
13     SUPPORTIVE ONLINE CERTIFICATION COURSES
14     REAL TIME APPLICATIONS
15     CONTENTS BEYOND THE SYLLABUS
16     PRESCRIBED TEXT BOOKS & REFERENCE BOOKS
17     MINI PROJECT SUGGESTION

Course Objectives
The objectives of this course are to
 Introduce the fundamentals of Image Processing.
 Expose various intensity transformations in spatial and frequency domains.
 Disseminate various segmentation techniques for images.
 Impart concepts of wavelets and various coding techniques for image compression.
 Teach various color models and introduce the concepts of color image segmentation.
1. Prerequisites
Students should have knowledge on
1. Preliminary Mathematics
2. Principles of Signals and Systems

2. Syllabus
UNIT V
Color Fundamentals, Color Models – RGB, YUV, HSI, Pseudo Color, Full Color image
processing, Color transformations – formulation, Color complements, Color slicing,
Tone and Color corrections, Color image smoothing and Sharpening.
3. Course outcomes
 Analyze various types of images mathematically.
 Compare image enhancement methods in spatial and frequency domains.
 Demonstrate various segmentation algorithms for given image.
 Justify DCT and wavelet transform techniques for image compression.
 Describe various color models for color image processing.
4. Co-PO / PSO Mapping
PO1 PO2 PO3 PO4 PO5 PO6 PO7 PO8 PO9 P10 PO11 PO12 PSO1 PSO2

CO1 3 3 2 3 3
CO2 3 3 2 2 3 3 3
CO3 3 3 3 2 2 3 3 3
CO4 3 3 3 3 2 2 3 3 3
CO5 3 3 2 3 3 3

1|D I P - U N I T - V

BTECH_ECE-SEM
BTECH_ECE-SEM 41
41
SVCE TIRUPATI
SVCE TIRUPATI
6. Lesson Plan

Lecture No.  Week  Topics to be covered         References
1            1     Introduction                 T1
2            1     Color Models                 T1, R1
3            1     RGB color model              T1, R1
4            1     HSI color model              T1, R1
5            1     YUV color model              T1, R1
6            1     Color Transformations        T1, R1
7            2     Formulation                  T1, R1
8            2     Color complements            T1, R1
9            2     Color Slicing                T1, R1
10           2     Tone and Color Corrections   T1, R1
11           2     Smoothing and Sharpening     T1, R1
12           2     Color Segmentation           T1, R1

7. Activity Based Learning


1. Observe digital image compression and redundancy applications used in
home appliances or electronic gadgets in the daily routine.
2. Analyze the distinct image formats and standards.

8. Lecture Notes
5.1 INTRODUCTION
Color is the perceptual sensation of light in the visible range incident upon the
retina. Color is the most visually striking feature of any image and has a
significant bearing on the scenic beauty of an image. To understand color, it is
necessary to understand the nature of light. Light exhibits a dual nature. Light
incident on the retina gives rise to electric impulses, which on reaching the brain
are translated into color. Different wavelengths of light are perceived as different
colors. However, not every wavelength can be perceived by the human eye. The
wavelengths between 380 nm and 780 nm form the visible spectrum.
Light and Color

The frequency of light determines the color. The amount of light determines the
intensity. The famous Einstein relation is given by

E = hν = hc/λ

where h is Planck's constant, ν is the frequency, c is the speed of light, and λ is
the wavelength.

As stated earlier, the visible spectrum is approximately between 400nm to 700 nm.
The human visual system perceives electromagnetic energy having wavelengths
in the range 400-700 nm as visible light. Lightness or brightness refers to the
amount of light a certain color reflects or transmits. Light that has a dominant
frequency or set of frequencies is called chromatic. Achromatic light has no color
and it contributes only to quantity or intensity. The intensity is determined by the
energy, whereas brightness is determined by the perception of the color; hence it
is psychological. Color depends primarily on the reflectance properties of an
object.

COLOR FORMATION

Basically there are three color formation processes

(i) Additive process

(ii) Subtractive process

(iii) Pigmentation

3|D I P - U N I T - V

BTECH_ECE-SEM
BTECH_ECE-SEM 41
41
SVCE TIRUPATI
SVCE TIRUPATI
Additive color formation: In additive color formation, the spectral distributions
corresponding to two or more light rays get added. The resulting color is the sum
of the number of photons in the same range present in the component colors.
Additive color formation is employed in TV monitors.

Subtractive color formation: Subtractive color formation occurs when the light is
passed or transmitted through a light filter. A light filter absorbs part of the
light that reaches it and transmits the rest. For example, a green filter lets through
the radiation in the green part of the spectrum, while radiation with other
wavelengths is blocked. Several filters can be used in series, the resulting color
being made up of those wavelengths that can go through all of them.
Subtractive color formation occurs when color slides are projected onto a screen.

Color formation by pigmentation: A pigment consists of colored particles in
suspension in a liquid. These particles can absorb or reflect the light that reaches
them. When a light ray reaches a surface covered with a pigment, it is scattered
by the particles, with successive and simultaneous events of reflection,
transmission and absorption. These events determine the nature (color) of the light
reflected by the surface. Color formation through pigmentation allows one to see
colors in paintings.

Color fundamentals

Colors are seen as variable combinations of the primary colors of light: Red (R),
Green (G), and Blue (B). The primary colors can be mixed to produce the secondary
colors: Magenta (Red+Blue), Cyan (Green+Blue), and Yellow (Red+Green). Mixing the
three primaries, or a secondary with its opposite primary color, produces white
light.

Fig 5.1. Primary and Secondary colors of light

RGB colors are used for color TVs, monitors, and video cameras.
However, the primary colors of pigments are cyan (C), magenta (M), and yellow
(Y), and the secondary colors are red, green, and blue. A proper combination of
the three pigment primaries, or a secondary with its opposite
primary, produces black. CMY colors are used for color printing.
Color characteristics
The characteristics used to distinguish one color from another are:

Brightness: means the amount of intensity (i.e. color level).

Hue: represents dominant color as perceived by an observer.

Saturation: refers to the amount of white light mixed with a hue.

5.2 Color Models


The purpose of a color model is to facilitate the specification of colors in some
standard way. A color model is a specification of a coordinate system and a
subspace within that system where each color is represented by a single point.
Color models most commonly used in image processing are:
 RGB model for color monitors and video cameras
 CMY and CMYK (cyan, magenta, yellow, black) models for color printing
 HSI (hue, saturation, intensity) model
5.2.1 The RGB color model:
In this model, each color appears in its primary colors red, green, and blue. This
model is based on a Cartesian coordinate system. The color subspace is the cube
shown in the figure below. The different colors in this model are points on or inside
the cube, and are defined by vectors extending from the origin.

Fig 5.2.RGB color model


All color values R, G, and B have been normalized in the range [0, 1].
However, we can also represent each of R, G, and B with values from 0 to 255. Each RGB color
image consists of three component images, one for each primary color as shown
in the figure below. These three images are combined on the screen to
produce a color image.
The total number of bits used to represent each pixel in RGB image is called pixel
depth. For example, in an RGB image if each of the red, green, and blue images
is an 8-bit image, the pixel depth of the RGB image is 24 bits. The figure below
shows the component images of an RGB image.

Fig 5.3. A Full color image and its RGB component images
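As a quick check of the pixel-depth arithmetic above, a minimal sketch in Python (the function names are illustrative, not from any library):

```python
def pixel_depth(bits_per_channel, num_channels=3):
    """Total bits used to represent one pixel of the image."""
    return bits_per_channel * num_channels

def num_colors(bits_per_channel, num_channels=3):
    """Number of distinct colors representable at that pixel depth."""
    return 2 ** pixel_depth(bits_per_channel, num_channels)

# An RGB image with 8-bit red, green, and blue planes has a 24-bit pixel depth
print(pixel_depth(8))   # 24
print(num_colors(8))    # 16777216 (about 16.7 million colors)
```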
5.2.2 The CMY and CMYK color model
Cyan, magenta, and yellow are the primary colors of pigments. Most printing
devices such as color printers and copiers require CMY data input or perform an
RGB to CMY conversion internally. Assuming color values normalized to [0, 1],
this conversion is performed using the equation

C = 1 − R,  M = 1 − G,  Y = 1 − B
5.2.3 The HSI color model


The RGB and CMY color models are not suited for describing colors in terms of
human interpretation. When we view a color object, we describe it by its hue,
saturation, and brightness (intensity). Hence the HSI color model has been
presented. The HSI model decouples the intensity component from the color-
carrying information (hue and saturation) in a color image. As a result, this model
is an ideal tool for developing color image processing algorithms.
The hue, saturation, and intensity values can be obtained from the RGB color
cube. That is, we can convert any RGB point to a corresponding point in the HSI
color model by working out the geometrical formulas.
Converting colors from RGB to HSI
The hue H is given by

H = θ if B ≤ G, and H = 360° − θ if B > G

where

θ = cos⁻¹( ½[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) )

The saturation is S = 1 − 3·min(R, G, B)/(R + G + B), and the intensity is
I = (R + G + B)/3.

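The geometric formulas for the RGB-to-HSI conversion can be sketched in plain Python, assuming RGB values normalized to [0, 1] and H returned in degrees (an illustration of the standard conversion, with the arccosine argument clamped for floating-point safety):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB in [0, 1] to (H in degrees, S, I)."""
    i = (r + g + b) / 3.0
    # Saturation: 1 - 3 * min(R, G, B) / (R + G + B), guarded for black
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        theta = 0.0  # achromatic: hue is undefined, use 0 by convention
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360.0 - theta
    return h, s, i
```

For pure red (1, 0, 0) this yields H = 0°, S = 1, and I = 1/3, as expected from the formulas.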

5.2.4 The YUV color model

The Y′UV model defines a color space in terms of one luma component (Y′) and
two chrominance (UV) components. The Y′ channel carries the black-and-white
data: if only the Y component is present and the U and V components are absent,
the image is grayscale.
The Y component can be calculated with the equation Y = 0.299R + 0.587G + 0.114B,
which is the commonly used grayscale formula. The color-difference components
U and V are derived from B − Y and R − Y, scaled in different proportions. The
YUV model is used in the PAL and SECAM composite color video standards. Earlier
black-and-white systems used only the luma (Y′) information.
Color information (U and V) was added separately via a sub-carrier, so that a
black-and-white receiver could still receive and display a color picture
transmission in its native black-and-white format.
The Digital Video Signal or YUV
YUV has basically three components: the luminance channel (Y), the color value
of the luminance deducted from the color red (R − Y), and the color value of the
luminance deducted from the color blue (B − Y). When digitized, these three
parameters of the component video signal are assigned a numeric value.
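These relations can be sketched as follows; the Y weights are the grayscale formula quoted above, while the 0.492 and 0.877 scale factors for U and V are the classic analog PAL weights, stated here as an assumption:

```python
def rgb_to_yuv(r, g, b):
    """Y'UV from normalized RGB: Y is the common luma (grayscale) formula;
    U and V are scaled B - Y and R - Y color differences."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)   # scale factors assumed from analog PAL
    v = 0.877 * (r - y)
    return y, u, v

# A gray pixel (equal R, G, B) has zero chrominance: only Y carries data
```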
5.3 COLOR TRANSFORMATIONS
The techniques described in this section, collectively called color transformations,
deal with processing the components of a color image within the context of a

single color model, as opposed to the conversion of those components between
models (like the RGB-to-HSI and HSI-to-RGB conversion transformations).
5.3.1 Formulation

As with the intensity transformation techniques, we model color transformations
using the expression

g(x, y) = T[f(x, y)]

where f(x, y) is a color input image, g(x, y) is the transformed or processed color
output image, and T is an operator on f over a spatial neighborhood of (x, y). The
principal difference between this equation and the one used in segmentation is
in its interpretation. The pixel values here are triplets or quartets (i.e., groups of
three or four values) from the color space chosen to represent the images.
Analogous to the approach used to introduce the basic intensity transformations,
we will restrict attention in this section to color transformations of the form

s_i = T_i(r_1, r_2, ..., r_n),  i = 1, 2, ..., n

where, for notational simplicity, r_i and s_i are variables denoting the color
components of f(x, y) and g(x, y) at any point (x, y), n is the number of color
components, and {T_1, T_2, ..., T_n} is a set of transformation or color mapping
functions that operate on r_i to produce s_i. Note that the n transformations T_i
combine to implement the single transformation function T. The color space chosen
to describe the pixels of f and g determines the value of n. If the RGB color space
is selected, for example, n = 3 and r_1, r_2, and r_3 denote the red, green, and
blue components of the input image, respectively. If the CMYK or HSI color space
is chosen, n = 4 or n = 3, respectively.
Suppose, for example, that we wish to modify the intensity of a full-color image
using

g(x, y) = k f(x, y)

where 0 < k < 1. In the HSI color space, this can be done with the simple
transformation

s_3 = k r_3

where s_1 = r_1 and s_2 = r_2; only the HSI intensity component r_3 is modified.
In the RGB color space, all three components must be transformed:

s_i = k r_i,  i = 1, 2, 3
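The two alternatives can be sketched side by side in plain Python (a minimal illustration with normalized components; the tuple pixel layouts are assumptions):

```python
def darken_rgb(pixel, k):
    """RGB: all three components are scaled (s_i = k * r_i, i = 1, 2, 3)."""
    r, g, b = pixel
    return (k * r, k * g, k * b)

def darken_hsi(pixel, k):
    """HSI: only the intensity component is scaled (s_3 = k * r_3)."""
    h, s, i = pixel
    return (h, s, k * i)

# Both darken the image; the HSI version touches one component instead of three
```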
5.3.2 Color Complements
The hues directly opposite one another on the color circle of Fig. 5.4 are called
complements. Our interest in complements stems from the fact that they are
analogous to gray-scale negatives (image negatives). As in the gray-scale case,
color complements are useful for enhancing detail that is embedded in dark
regions of a color image, particularly when the regions are dominant in size.

Fig 5.4. Complements on the color circle
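For RGB data, the simple per-component negative sketched below is the direct analogue of the gray-scale negative (a minimal illustration; an exact hue complement would be computed on the HSI color circle):

```python
def rgb_negative(pixel):
    """Per-component negative of a normalized RGB pixel."""
    return tuple(1.0 - c for c in pixel)

# Red maps to cyan, its complement on the color circle
print(rgb_negative((1.0, 0.0, 0.0)))   # (0.0, 1.0, 1.0)
```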

5.4 COLOR SLICING


Highlighting a specific range of colors in an image is useful for separating objects
from their surroundings. The basic idea is either to (1) display the colors of interest
so that they stand out from the background, or (2) use the region defined by the
colors as a mask for further processing. The most straightforward approach is to
extend the intensity slicing technique. Because a color pixel is an n-dimensional
quantity, however, the resulting color transformation functions are more
complicated than their gray-scale counterparts. In fact, the required
transformations are more complex than the color component transforms
considered thus far. This is because all practical color-slicing approaches require
each pixel's transformed color components to be a function of all n of the original
pixel's color components.
One of the simplest ways to "slice" a color image is to map the colors outside some
range of interest to a non-prominent neutral color. If the colors of interest are
enclosed by a cube (or hypercube for n > 3) of width W and centered at a
prototypical (e.g., average) color with components (a_1, a_2, ..., a_n), the
necessary set of transformations is

s_i = 0.5 if |r_j − a_j| > W/2 for any 1 ≤ j ≤ n; otherwise s_i = r_i,  i = 1, 2, ..., n

These transformations highlight the colors around the prototype by forcing all
other colors to the midpoint of the reference color space (an arbitrarily chosen
neutral point). For the RGB color space, for example, a suitable neutral point is
middle gray, or color (0.5, 0.5, 0.5). If a sphere is used to specify the colors of
interest, the above equation becomes

s_i = 0.5 if Σ_{j=1}^{n} (r_j − a_j)² > R_0²; otherwise s_i = r_i,  i = 1, 2, ..., n

Here, R_0 is the radius of the enclosing sphere (or hypersphere for n > 3) and
(a_1, a_2, ..., a_n) are the components of its center (i.e., the prototypical color).
Other useful variations of these equations include implementing multiple color
prototypes and reducing the intensity of the colors outside the region of interest,
rather than setting them to a neutral constant.
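Both the cube and the sphere variants can be sketched as simple per-pixel tests, assuming normalized components and a mid-gray neutral point of 0.5:

```python
def slice_cube(pixel, proto, width):
    """Keep colors inside a cube of side `width` centered on the prototype
    color; map everything else to neutral mid-gray."""
    if any(abs(r - a) > width / 2 for r, a in zip(pixel, proto)):
        return tuple(0.5 for _ in pixel)
    return pixel

def slice_sphere(pixel, proto, radius):
    """Same idea with a sphere of radius R_0 around the prototype color."""
    if sum((r - a) ** 2 for r, a in zip(pixel, proto)) > radius ** 2:
        return tuple(0.5 for _ in pixel)
    return pixel
```

Colors near the prototype pass through unchanged; all others collapse to (0.5, 0.5, 0.5).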

5.5 TONE AND COLOR CORRECTIONS


Color transformations can be performed on most desktop computers. In
conjunction with digital cameras, flatbed scanners, and inkjet printers, they turn a
personal computer into a digital darkroom, allowing tonal adjustments and color
corrections, the mainstays of high-end color reproduction systems, to be
performed without the need for traditionally outfitted wet processing (i.e.,
darkroom) facilities. Although tone and color corrections are useful in other areas
of imaging, the focus of the current discussion is on the most common uses: photo
enhancement and color reproduction.
The effectiveness of the transformations examined in this section is judged
ultimately in print. Because these transformations are developed, refined, and
evaluated on monitors, it is necessary to maintain a high degree of color
consistency between the monitors used and the eventual output devices. In

11|D I P - U N I T - V

BTECH_ECE-SEM
BTECH_ECE-SEM 41
41
SVCE TIRUPATI
SVCE TIRUPATI
fact, the colors of the monitor should accurately represent any digitally scanned
source images, as well as the final printed output. This is best accomplished with a
device-independent color model that relates the color gamuts of the monitors and
output devices, as well as any other devices being used, to one another. The
success of this approach is a function of the quality of the color profiles used to
map each device to the model, and of the model itself. The model of choice for
many color management systems (CMS) is the CIE Lab model, also called CIELAB
(CIE [1978], Robertson [1977]). The Lab color components are given by the
following equations:

L* = 116·h(Y/Y_W) − 16
a* = 500·[h(X/X_W) − h(Y/Y_W)]
b* = 200·[h(Y/Y_W) − h(Z/Z_W)]

where h(q) = q^(1/3) for q > 0.008856 and h(q) = 7.787q + 16/116 otherwise, and
X_W, Y_W, and Z_W are reference white tristimulus values, typically the white of a
perfectly reflecting diffuser under CIE standard D65 illumination (defined by
x = 0.3127 and y = 0.3290 in the CIE chromaticity diagram). The Lab color
space is colorimetric (i.e., colors perceived as matching are encoded identically),
perceptually uniform (i.e., color differences among various hues are perceived
uniformly; see the classic paper by MacAdam [1942]), and device independent.
While not a directly displayable format (conversion to another color space is
required), its gamut encompasses the entire visible spectrum and can accurately
represent the colors of any display, print, or input device. Like the HSI system, the
L*a*b* system is an excellent decoupler of intensity (represented by lightness L*)

and color (represented by a* for red minus green and b* for green minus blue),
making it useful in both image manipulation (tone and contrast editing) and
image compression applications.

The principal benefit of calibrated imaging systems is that they allow tonal and
color imbalances to be corrected interactively and independently, that is, in two
sequential operations. Before color irregularities, like over- and under-saturated
colors, are resolved, problems involving the image's tonal range are corrected.
The tonal range of an image, also called its key type, refers to its general
distribution of color intensities. Most of the information in high-key images is
concentrated at high (or light) intensities; the colors of low-key images are
located predominantly at low intensities; middle-key images lie in between. As in
the monochrome case, it is often desirable to distribute the intensities of a color
image equally between the highlights and the shadows. The following examples
demonstrate a variety of color transformations for the correction of tonal and
color imbalances.

5.6 Smoothing and Sharpening


The next step beyond transforming each pixel of a color image without regard to
its neighbors (as in the previous section) is to modify its value based on the
characteristics of the surrounding pixels. In this section, the basics of this type of
neighborhood processing are illustrated within the context of color image
smoothing and sharpening.

Color Image Smoothing

Gray-scale image smoothing can be viewed as a spatial filtering operation in
which the coefficients of the filtering mask have the same value. As the mask is
slid across the image to be smoothed, each pixel is replaced by the average of
the pixels in the neighborhood defined by the mask. This concept is easily
extended to the processing of full-color images. The principal difference is that
instead of scalar intensity values we must deal with component vectors of the
form c(x, y) = [R(x, y), G(x, y), B(x, y)]^T.

Let S_xy denote the set of coordinates defining a neighborhood centered at

(x, y) in an RGB color image. The average of the RGB component vectors in
this neighborhood is

c̄(x, y) = (1/K) Σ_{(s,t)∈S_xy} c(s, t)

where K is the number of pixels in the neighborhood. It follows from this equation
and the properties of vector addition that the average can be computed component
by component. We recognize the components of this vector as the scalar images that
would be obtained by independently smoothing each plane of the starting RGB image
using conventional gray-scale neighborhood processing. Thus, we conclude that
smoothing by neighborhood averaging can be carried out on a per-color-plane
basis. The result is the same as when the averaging is performed using
RGB color vectors.
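The equivalence claimed above, that averaging RGB vectors equals averaging each color plane independently, can be checked directly on a small neighborhood (the sample values are arbitrary):

```python
# A 3x3 neighborhood of (R, G, B) pixels with normalized values
neighborhood = [
    (0.2, 0.4, 0.6), (0.3, 0.4, 0.5), (0.1, 0.5, 0.7),
    (0.2, 0.3, 0.6), (0.4, 0.4, 0.4), (0.2, 0.5, 0.8),
    (0.3, 0.2, 0.6), (0.2, 0.4, 0.5), (0.1, 0.3, 0.7),
]
k = len(neighborhood)

# Averaging the pixels as RGB vectors...
vector_avg = tuple(sum(p[c] for p in neighborhood) / k for c in range(3))

# ...gives the same result as averaging each color plane on its own
plane_avg = tuple(sum(plane) / k for plane in zip(*neighborhood))

assert all(abs(v - p) < 1e-12 for v, p in zip(vector_avg, plane_avg))
```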

Color Image Sharpening

In this section we consider image sharpening using the Laplacian. From vector
analysis, we know that the Laplacian of a vector is defined as a vector whose
components are equal to the Laplacians of the individual scalar components of
the input vector. In the RGB color system, the Laplacian of the vector
c(x, y) = [R(x, y), G(x, y), B(x, y)]^T is

∇²c(x, y) = [∇²R(x, y), ∇²G(x, y), ∇²B(x, y)]^T

which, as in the previous section, tells us that we can compute the Laplacian of a
full-color image by computing the Laplacian of each component image separately,
and then combine the results to produce the sharpened full-color result. In the HSI
alternative, the result is generated by combining the Laplacian of the intensity
component with the unchanged hue and saturation components; the RGB and HSI
sharpened images generally differ slightly.
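A per-channel Laplacian sharpening step can be sketched for a single interior pixel (a 4-neighbor Laplacian on a 2-D list of RGB tuples; normalization and clipping are omitted for brevity):

```python
def laplacian(img, x, y):
    """4-neighbor Laplacian of each color plane at interior pixel (x, y)."""
    return tuple(
        img[y - 1][x][c] + img[y + 1][x][c]
        + img[y][x - 1][c] + img[y][x + 1][c]
        - 4 * img[y][x][c]
        for c in range(3)
    )

def sharpen_pixel(img, x, y):
    """Subtract the Laplacian channel by channel, as in gray-scale sharpening."""
    lap = laplacian(img, x, y)
    return tuple(img[y][x][c] - lap[c] for c in range(3))

# In a flat region the Laplacian is zero and the pixel is unchanged
```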

5.7 Image Segmentation Based on Color


Segmentation is a process that partitions an image into regions. We consider color
segmentation briefly here for the sake of continuity. You will have no difficulty
following the discussion.

Segmentation in HSI Color Space

If we wish to segment an image based on color, and, in addition, we want to


carry out the process on individual planes, it is natural to think first of the HSI space
because color is conveniently represented in the hue image. Typically, saturation
is used as a masking image in order to isolate further regions of interest in the hue
image. The intensity image is used less frequently for segmentation of color
images because it carries no color information. The following example is typical of
how segmentation is performed in the HSI color space.

Segmentation in RGB Vector Space

Although, as mentioned numerous times in this chapter, working in HSI space is
more intuitive, segmentation is one area in which better results generally are
obtained by using RGB color vectors. The approach is straightforward. Suppose
that the objective is to segment objects of a specified color range in an RGB
image. Given a set of sample color points representative of the colors of interest,
we obtain an estimate of the "average" color that we wish to segment. Let this
average color be denoted by the RGB vector a. The objective of segmentation
is to classify each RGB pixel in a given image as having a color in the specified
range or not. In order to perform this comparison, it is necessary to have a
measure of similarity. One of the simplest measures is the Euclidean distance. Let z
denote an arbitrary point in RGB space. We say that z is similar to a if the distance
between them is less than a specified threshold D_0. The Euclidean distance
between z and a is given by

D(z, a) = ||z − a|| = [(z_R − a_R)² + (z_G − a_G)² + (z_B − a_B)²]^(1/2)

where the subscripts R, G, and B denote the RGB components of vectors a and z.
The locus of points such that D(z, a) ≤ D_0 is a solid sphere of radius D_0, as
illustrated in Fig. 5.5(a). Points contained within the sphere satisfy the specified
color criterion; points outside the sphere do not. Coding these two sets of points in
the image with, say, black and white produces a binary segmented image. A
useful generalization of this distance measure is

D(z, a) = [(z − a)^T C^(−1) (z − a)]^(1/2)

Fig 5.5. Three approaches for enclosing data regions for RGB vector segmentation

where C is the covariance matrix of the samples representative of the color we
wish to segment. The locus of points such that D(z, a) ≤ D_0 describes a solid 3-D
elliptical body [Fig. 5.5(b)] with the important property that its principal axes are
oriented in the direction of maximum data spread. Segmentation is as described
in the preceding paragraph.

Because distances are positive and monotonic, we can work with the distance
squared instead, thus avoiding square root computations. However,
implementing this equation is computationally expensive for images of practical size, even if
the square roots are not computed. A compromise is to use a bounding box. In
this approach, the box is centered on a, and its dimensions along each of the
color axes are chosen proportional to the standard deviation of the samples along
each axis. Computation of the standard deviations is done only once, using
sample color data.

Given an arbitrary color point, we segment it by determining whether or not it is
on the surface of or inside the box, as with the distance formulations. However,
determining whether a color point is inside or outside a box is much simpler
computationally than testing against a spherical or elliptical enclosure. Note that
the preceding discussion is a generalization of the method introduced in
connection with color slicing.
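The sphere and bounding-box tests can be sketched as simple per-pixel classifiers (an illustration only; the covariance-based elliptical test would additionally require a matrix inverse and is omitted):

```python
def in_sphere(z, a, d0):
    """Euclidean test: is color z within distance D_0 of the average color a?
    Squared distances are compared, avoiding the square root."""
    return sum((zc - ac) ** 2 for zc, ac in zip(z, a)) <= d0 ** 2

def in_box(z, a, sigmas, k=1.25):
    """Bounding-box test: each component of z must lie within k standard
    deviations (an assumed proportionality factor) of the center a."""
    return all(abs(zc - ac) <= k * s for zc, ac, s in zip(z, a, sigmas))

# Classifying every pixel with either test yields a binary segmented image
```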

9. PRACTICE QUIZ

1) Color model is also named as


a) color space
b) color gap
c) color system
d) Color space & color system

2) How many bit RGB color image is represented by full-color image?


a) 32-bit RGB color image
b) 24-bit RGB color image
c) 16-bit RGB color image
d) 8-bit RGB color image
3) In an image, accentuating a specific range of colors is called
a) slicing
b) cutting
c) color slicing
d) color enhancement

4) RGB stands for


a) Red Grey Black
b) Red Green Blue
c) Red Grey Blue
d) Red Green Black

5) In c(x,y) the x,y are the


a) frequency variables
b) spatial variables
c) intensity variables
d) Both (a) and (b)

6) Green plus blue color produces


a) yellow
b) red
c) magenta
d) cyan

7) Three primary colors are


a) Red, cyan, blue
b) Red, green, blue
c) Red, white, black
d) Red, green, yellow

8) Transformation set is also called


a) Color mapping functions
b) Color space
c) chromaticity
d) both a and b


9) Hues opposite to each other are known as


a) edges
b) boundaries
c) complements
d) saturation

10) The color spectrum consists of_________ colors


a) 4
b) 6
c) 7
d) 8

11) Pseudo colors are also known as


a) primary colors
b) true colors
c) false colors
d) secondary colors

12) One that is not a color model is


a) RCB
b) CMYK
c) RGB
d) HIS

13) RGB colors in equal amount give


a) magenta color
b) yellow color
c) cyan color
d) white color

14) Full color image is a


a) 20 bit image
b) 28 bit image
c) 24 bit image
d) 32 bit image

15) Color transformation is modeled using


a) g(x,y) = [f(x,y)]
b) g(x,y) = T[f(x)]
c) g(x,y) = T[f(y)]
d) g(x,y) = T[f(x,y)]

16) Color transformation technique is used in


a) image processing
b) Sharpness
c) multi-image processing
d) multispectral image processing
17) HSI color model stands for
a) hue, system, intensity
b) high, saturation, intensity
c) hue, saturation, intensity
d) high, system, intensity

18) RGB space is also known as


a) pixel depth
b) color depth
c)coordinates
d) pixels

19) Intensity slicing is called


a) region slicing
b) image slicing
c) density slicing
d) Blurring

20) What is the equation used for calculating G value in terms of HSI components?
a) G=3I-(R+B)
b) G=3I+(R+B)
c) G=3I-(R-B)
d) G=2I-(R+B)

10. Assignments

S.No Question BL CO
1 Explain the color image fundamentals. 2 3
2 Explain the RGB color model. 2 3
3 Explain the HSI color model. 2 3
4 Explain how the HSI color model is converted into RGB color space. 2 3
5 Explain how the RGB color model is converted into HSI color space. 3 3
6 What is color slicing? Give the basic formulation. 3 3

11. Part A- Question & Answers

S.No Question & Answers BL CO
1 Draw the complement circle.
Ans. See Fig. 5.4 (complements on the color circle). 1 5
2 Give the basic formulation for color image smoothing.
Ans. See Section 5.6 (per-color-plane neighborhood averaging). 1 5
3 Give the basic formulation for color image sharpening.
Ans. See Section 5.6 (per-component Laplacian). 1 5
4 Give the basic formulation for Tone and Color Corrections.
Ans. See Section 5.5 (CIELAB component equations). 1 5


12. Part B- Questions

S.No Question BL CO
1 Explain the color image fundamentals. 2 5
2 Explain the RGB color model. 2 5
3 Explain the HSI color model. 1 5
4 Explain how the HSI color model is converted into RGB color space. 2 5
5 Explain how the RGB color model is converted into HSI color space. 2 5
6 What is color slicing? Give the basic formulation. 3 5
7 Give the basic formulation for color image smoothing, sharpening and segmentation. 3 5

13. Supportive Online Certification Courses


1. Digital Image Processing, by Prof. P. K. Biswas, IIT Kharagpur, on NPTEL – 4 weeks.
2. Fundamentals of Digital Image Processing, by Prof. Aggelos K. Katsaggelos,
Northwestern University, on Coursera – 12 weeks.

14. Real Time Applications


S.No Application CO
1 What Is Hue And Saturation? 1
Answer: Hue is a color attribute that describes a pure color, whereas
saturation gives a measure of the degree to which a pure color is diluted
by white light.
2 What Is Luminance? 1
Answer: Luminance, measured in lumens (lm), gives a measure of the amount of
energy an observer perceives from a light source.

3 List The Hardware Oriented Color Models? 1


RGB model, CMY model, YIQ model, HSI model

4 What is the need for color in image processing? 1


In automated image analysis, color is a powerful descriptor that simplifies
object identification and extraction. In addition, the human eye can distinguish
thousands of color shades and intensities, but only about 20-30 shades of gray.
Hence, the use of color in image processing is very effective.
5 Define psycho visual redundancy? 1
In normal visual processing, certain information has less importance than
other information. Such information is said to be psychovisually redundant.

15. Contents Beyond the Syllabus


1.RGB color merging
This can be used to merge red, green and/or blue channel images or image stacks.
It reduces 16-bit images to 8 bits (based on the current brightness and contrast
values) and then generates a 24-bit RGB image. An alternative to the normal
red-green merge is to merge the images based on cyan and magenta, cyan and
yellow, or any other color combination. This can aid visualization of
colocalization, due to our poor perception of red and green colors.

2 Noise in color images


The noise content of a color image has the same characteristics in each color
channel. However, different noise levels are more likely to be caused by differences
in the relative strength of illumination available to each of the color channels. For
example, the use of a red filter in a CCD camera will reduce the strength of
illumination available to the red sensor.

3. Color image compression


Since the number of bits used to represent color is typically three or four times
greater than the number employed in the representation of gray levels, data
compression plays a central role in the storage and transmission of color images.
Compression is the process of reducing or eliminating redundant and/or irrelevant
data.

16. Prescribed Text Books & Reference Books


Text Books:
1. R. C. Gonzalez & R. E. Woods, “Digital Image Processing”, Addison-Wesley/Pearson
Education, 3rd Edition, 2010.
2. A. K. Jain, “Fundamentals of Digital Image Processing”, PHI.

References:
1. S. Jayaraman, S. Esakkirajan, T. Veerakumar, “Digital Image Processing”, Tata
McGraw Hill.
2. William K. Pratt, “Digital Image Processing”, John Wiley, 3rd Edition, 2004.
3. Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, “Digital Image
Processing Using MATLAB”, Tata McGraw Hill, 2010.

17. Mini Project Suggestion

1. Multiple Color Detection in Real-Time using Python-OpenCV


Some real-world applications:
In self-driving cars, to detect traffic signals.
In some industrial robots, to perform pick-and-place tasks that separate
differently colored objects.
This is an implementation of detecting multiple colors (here, only red, green and
blue colors have been considered) in real-time using the Python programming
language.
For further reference visit

https://www.geeksforgeeks.org/multiple-color-detection-in-real-time-using-
python-opencv/
