
Digital Image Processing: Introduction
6CS3-01: Digital Image Processing
SYLLABUS

1. Introduction: Objective, scope and outcome of the course. [01 hour]
2. Introduction to Image Processing: Digital image representation, sampling & quantization, steps in image processing, image acquisition, colour image representation. [04 hours]
3. Image Transformation & Filtering: Intensity transform functions, histogram processing, spatial filtering, Fourier transform and its properties, frequency domain filters, colour models, pseudo colouring, colour transforms, basics of wavelet transforms. [06 hours]
4. Image Restoration: Image degradation and restoration process, noise models, noise filters, degradation function, inverse filtering, homomorphic filtering. [07 hours]
5. Image Compression: Coding redundancy, interpixel redundancy, psychovisual redundancy, Huffman coding, arithmetic coding, lossy compression techniques, JPEG compression. [05 hours]
6. Image Segmentation & Representation: Point, line and edge detection, thresholding, edge and boundary linking, Hough transforms, region-based segmentation, boundary representation, boundary descriptors. [05 hours]

Total: 28 hours
COURSE OUTCOMES

After completion of the course the student will be able to:

CO1: Define the basics of images and the different steps involved in digital image processing. [Define, Bloom's Level 1]
CO2: Explain the concepts within image processing and provide detailed insights into image processing steps. [Explain, Level 2]
CO3: Apply various image processing techniques to propose optimized solutions for enhancement, restoration, compression and segmentation tasks in diverse scenarios. [Apply, Level 3]
CO4: Demonstrate a comprehensive understanding and practical application of image processing principles. [Demonstrate, Level 3]
CO5: Analyze and evaluate advanced concepts in image processing and solve complex problems related to image processing. [Analyze & Evaluate, Level 4]
What is Image Processing?

Digital Image Processing involves analyzing and manipulating images digitally via computer, both to make them more informative for human interpretation and to process pictorial information for tasks such as efficient storage, fast transmission, and extraction of pictorial data.

It is used in various domains such as automation, medicine, remote sensing, and more.
What are Digital Images?

An image is a two-dimensional rectilinear grid of pixels that gives a pictorial representation of something.

An image can also be defined as a two-dimensional function f(x,y), where x and y are spatial coordinates, and the value of f at any pair of coordinates (x,y) is called the intensity of the image at that point. When x, y, and the intensity values of f are all finite, discrete quantities, the image is called a digital image.
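As a rough sketch of this definition (NumPy, with made-up sample values), a digital image can be stored as a 2-D array whose entry at coordinates (x, y) is the intensity f(x, y):

```python
import numpy as np

# A digital image as a discrete function f(x, y): finite coordinates,
# finite intensity values (here 8-bit grey levels in 0-255).
f = np.array([[10, 10, 16, 28],
              [ 9,  6, 26, 37],
              [15, 25, 13, 22],
              [32, 15, 87, 39]], dtype=np.uint8)

x, y = 2, 3                # a pair of spatial coordinates
print(f[x, y])             # the intensity f(2, 3) -> 22
```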
What is Digital Image Processing?

Digital Image Processing is the analysis and manipulation of images via a digital computer to extract information. In a broader sense, one can say it is the processing of two-dimensional data.

Digital Image Processing focuses on two major tasks: improvement of pictorial representation for human interpretation, and processing of image data for storage, transmission, and representation for autonomous machine perception.
Origin of Digital Image Processing

Digital Image Processing was first used in the newspaper industry, when images were sent by submarine cable between London and New York.

In the early 1920s, the Bartlane cable picture transmission system was introduced, which reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours.

The birth of Digital Image Processing proper came in the 1960s.

The first picture of the moon was transmitted by the U.S. spacecraft Ranger 7 to the Jet Propulsion Laboratory in 1964, and it served as the basis of future image processing.

In the 1970s, image processing came into use in medical imaging, astronomy, and remote sensing.
Applications of Digital Image Processing

Image enhancement: Digital Image Processing analyzes and manipulates an image to make it more informative for human interpretation and more suitable for representation.

Automatic inspection: In a bottling plant, Digital Image Processing checks the quality of the product and detects empty or partially filled bottles.

Medical visualization: In medicine, Digital Image Processing helps to enhance the quality of images and analyze medical images such as X-rays, MRI, and CT scans. It also assists in diagnosis and treatment planning.

Biometrics: Digital Image Processing is used in security systems to perform tasks such as fingerprint recognition, facial recognition, and object tracking.

Remote sensing: Digital Image Processing is essential in remote sensing; it helps to analyze satellite and aerial images, which supports agriculture, environmental monitoring, and disaster management.

Video processing: These techniques can detect and track moving objects in video sequences.
Fundamental Steps in Digital Image Processing

[Block diagram: starting from the problem domain, Image Acquisition feeds Image Enhancement, Image Restoration, Colour Image Processing, Wavelets & Multiresolution Processing, Image Compression and Morphological Processing, whose outputs generally are images; these are followed by Segmentation, Representation & Description and Object Recognition, whose outputs generally are image attributes. A Knowledge Base interacts with all stages.]
Step 1: Image Acquisition
The image is captured by a sensor (e.g., a camera) and digitized, if the output of the camera or sensor is not already in digital form, using an analogue-to-digital converter.
Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable than the original for a specific application.

The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image.
Step 3: Image Restoration
Restoration also improves the appearance of an image, but restoration techniques tend to be based on mathematical or probabilistic models of image degradation.

Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.
Step 4: Colour Image Processing
Uses the colour of an image to extract features of interest, and covers colour modelling and processing in a digital domain.
Step 5: Wavelets
Wavelets are the foundation for representing images in various degrees of resolution. They are used for image data compression, where images are subdivided into smaller regions.
Step 6: Compression
Techniques for reducing the storage required to
save an image or the bandwidth required to
transmit it.
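As a toy illustration of the idea, and not one of the techniques named in the syllabus, run-length encoding shrinks storage by replacing runs of identical pixel values with (value, count) pairs; a minimal Python sketch:

```python
def run_length_encode(pixels):
    """Encode a sequence of pixel values as [value, run length] pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            encoded[-1][1] += 1          # extend the current run
        else:
            encoded.append([p, 1])       # start a new run
    return encoded

row = [0, 0, 0, 0, 255, 255, 0, 0]
print(run_length_encode(row))            # [[0, 4], [255, 2], [0, 2]]
```

Run-length coding is lossless; lossy techniques such as JPEG, covered later, trade some fidelity for much higher compression.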
Step 7: Morphological Processing
Tools for extracting image components that are useful in the representation and description of shape.
This step marks a transition from processes that output images to processes that output image attributes.
Step 8: Image Segmentation
Segmentation procedures partition an image into its
constituent parts or objects.
Important Tip: The more accurate the segmentation, the
more likely recognition is to succeed.
Step 9: Representation and Description
- Representation: Decide whether the data should be represented as a boundary or as a complete region. This almost always follows the output of a segmentation stage.
- Boundary representation: Focuses on external shape characteristics, such as corners and inflections.
- Region representation: Focuses on internal properties, such as texture or skeletal shape.

Representation transforms raw data into a form suitable for subsequent computer processing. Description deals with extracting attributes that result in some quantitative information of interest or are basic for differentiating one class of objects from another.
Step 10: Object Recognition
Recognition is the process that assigns a label, such as "vehicle", to an object based on the information provided by its descriptors.
Components of an Image Processing System

[Block diagram: a typical general-purpose DIP system, with the computer at the centre connected to image sensors, specialized image processing hardware, image processing software, mass storage, image displays, hardcopy devices and a network; the image sensors face the problem domain.]
Components of an Image Processing
System
1. Image Sensors
Two elements are required to acquire digital
images. The first is the physical device that is
sensitive to the energy radiated by the object
we wish to image (Sensor). The second,
called a digitizer, is a device for converting
the output of the physical sensing device into
digital form.
Components of an Image Processing
System
2. Specialized Image Processing Hardware
Usually consists of the digitizer, mentioned before, plus
hardware that performs other primitive operations, such as an
arithmetic logic unit (ALU), which performs arithmetic and
logical operations in parallel on entire images.

This type of hardware sometimes is called a front-end subsystem, and its most distinguishing characteristic is speed.
Components of an Image Processing System
3. Computer
The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer.
Components of an Image Processing System
4. Image Processing Software
Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules.
Components of an Image Processing System
5. Mass Storage Capability
Mass storage capability is a must in image processing applications. An image of size 1024 × 1024 pixels, with 8 bits per pixel, requires one megabyte of storage space if the image is not compressed.

Digital storage for image processing applications falls into three principal categories:
1. Short-term storage for use during processing
2. On-line storage for relatively fast recall
3. Archival storage, characterized by infrequent access
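The one-megabyte figure is simple arithmetic, assuming 8 bits (1 byte) per pixel:

1024 × 1024 pixels × 1 byte/pixel = 1,048,576 bytes = 1 MB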
Components of an Image Processing
System
6. Image Displays
The displays in use today are mainly color (preferably
flat screen) TV monitors. Monitors are driven by the
outputs of the image and graphics display cards that
are an integral part of a computer system.
Components of an Image Processing System
7. Hardcopy Devices
Used for recording images; they include laser printers, film cameras, heat-sensitive devices, inkjet units and digital units, such as optical and CD-ROM disks.
Components of an Image Processing System
8. Networking
Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.

In dedicated networks, this typically is not a problem, but communications with remote sites via the internet are not always as efficient.
Sampling, Quantisation And Resolution

In the following slides we will consider what is involved in capturing a digital image of a real-world scene:
– Image sensing and representation
– Sampling and quantisation
– Resolution
Image Sampling And Quantisation

A digital sensor can only measure a limited number of samples at a discrete set of energy levels.

Sampling is the digitisation of the coordinate values; quantisation is the process of converting a continuous analogue signal into a digital representation of this signal.

[Images taken from Gonzalez & Woods, Digital Image Processing (2002)]

Image Sampling And Quantisation (cont…)

Remember that a digital image is always only an approximation of a real-world scene.
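A minimal sketch of both steps (NumPy, with an assumed synthetic scene standing in for a real sensor signal):

```python
import numpy as np

def scene(x, y):
    """A continuous 'scene': intensity varies smoothly with position."""
    return 0.5 * (1 + np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y))

# Sampling: evaluate the scene only at an M x N grid of discrete coordinates.
M, N = 64, 64
xs = np.linspace(0.0, 1.0, N)
ys = np.linspace(0.0, 1.0, M)
samples = scene(xs[None, :], ys[:, None])   # values still continuous in [0, 1]

# Quantisation: map each continuous sample to one of 2**k discrete levels.
k = 8                                       # bits per pixel
levels = 2 ** k
digital = np.round(samples * (levels - 1)).astype(np.uint8)

print(digital.shape, digital.dtype)         # (64, 64) uint8
```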


Light And The Electromagnetic Spectrum

Light is just a particular part of the


electromagnetic spectrum that can be sensed by
the human eye
The electromagnetic spectrum is split up
according to the wavelengths of different forms
of energy
Reflected Light

The colours that we perceive are determined by the nature of the light reflected from an object.

For example, if white light is shone onto a green object, most wavelengths are absorbed, while green light is reflected from the object.

[Figure: white light striking a green object; all colours except green are absorbed.]
Image Representation

Before we discuss image acquisition, recall that a digital image is composed of M rows and N columns of pixels, each storing a value.

Pixel values are most often grey levels in the range 0–255 (black–white).

We will see later on that images can easily be represented as matrices, indexed as f(row, col).

[Figure: pixel grid with row and col axes. Image taken from Gonzalez & Woods, Digital Image Processing (2002)]
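A small sketch of the matrix view (NumPy, made-up grey levels); note that the first index selects the row and the second the column, matching f(row, col):

```python
import numpy as np

# An M x N image as a matrix of grey levels in the range 0-255.
img = np.array([[  0,  64, 128, 192],
                [ 32,  96, 160, 224],
                [ 64, 128, 192, 255]], dtype=np.uint8)

M, N = img.shape               # 3 rows, 4 columns
print(img[1, 3])               # f(row=1, col=3) -> 224
```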
Image Acquisition

Images are typically generated by illuminating a scene and absorbing the energy reflected by the objects in that scene.

Typical notions of illumination and scene can be way off:
• X-rays of a skeleton
• Ultrasound of an unborn baby
• Electro-microscopic images of molecules

[Images taken from Gonzalez & Woods, Digital Image Processing (2002)]
Image Sensing

Incoming energy lands on a sensor material responsive to that type of energy, and this generates a voltage.

Collections of sensors are arranged to capture images:
• Single imaging sensor
• Line of image sensors
• Array of image sensors

[Images taken from Gonzalez & Woods, Digital Image Processing (2002)]
Idea behind image sensing
• The idea is simple: Incoming energy is
transformed into a voltage by the combination
of input electrical power and sensor material
that is responsive to the particular type of
energy being detected.
• The output voltage waveform is the
response of the sensor(s), and a digital
quantity is obtained from each sensor by
digitizing its response.
Image Sensors

• Single sensor
• Line sensor
• Array sensor

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)

Image Sensors: Single Sensor

[Figure: components of a single imaging sensor. Image from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.]
Single Sensor
The figure shows the components of a single sensor. Perhaps the most familiar sensor of this type is the photodiode, which is constructed of silicon materials and whose output voltage waveform is proportional to light.

The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor favors light in the green band of the color spectrum. As a consequence, the sensor output will be stronger for green light than for other components in the visible spectrum.

In order to generate a 2-D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged.
Image Sensors: Line Sensor

• Fingerprint sweep sensor
• Computerized Axial Tomography (CAT)

(Images from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.)
Line Sensor
A geometry that is used much more frequently than single sensors consists of an in-line arrangement of sensors in the form of a sensor strip.

The strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction.

In-line sensors are used routinely in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged.

Sensor strips mounted in a ring configuration are used in medical and industrial imaging to obtain cross-sectional ("slice") images of 3-D objects.

Examples: X-ray computed tomography, MRI scans.
ARRAY SENSOR
Numerous electromagnetic and some ultrasonic sensing devices frequently are arranged in an array format. This is also the predominant arrangement found in digital cameras.

The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor.

Its key advantage is that a complete image can be obtained by focusing the energy pattern onto the surface of the array.

Energy from an illumination source is reflected from a scene element; the imaging system (the array sensor) collects the incoming energy and focuses it onto an image plane.

ARRAY SENSORS CONTD…
[Figure: the digital image acquisition process with an array sensor.]
Fundamentals of Digital Images

[Figure: the image "After snow storm" shown as a function f(x,y), with the origin at the top-left corner.]

• An image: a multidimensional function of spatial coordinates.
• Spatial coordinate: (x,y) for the 2-D case such as a photograph, (x,y,z) for the 3-D case such as CT scan images, (x,y,t) for movies.
• The function f may represent intensity (for monochrome images), color (for color images) or other associated values.
Conventional Coordinate for Image Representation

[Figure from Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, 2nd Edition.]
Image Types: Binary Image

Binary image or black and white image.
Each pixel contains one bit:
  1 represents white
  0 represents black

Binary data:
  0 0 0 0
  0 0 0 0
  1 1 1 1
  1 1 1 1
Digital Image Types: Intensity Image (Grayscale Image)

Intensity image or monochrome image: each pixel corresponds to light intensity, normally represented in gray scale (gray levels).

Gray scale values:
  10 10 16 28
   9  6 26 37
  15 25 13 22
  32 15 87 39
Digital Image Types: RGB Image

Color image or RGB image: each pixel contains a vector representing red, green and blue components.

RGB components:
[Figure: three matrices of sample values, one each for the R, G and B components.]
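A small sketch of the per-pixel colour vector (NumPy, made-up values): an RGB image is an M × N × 3 array, and indexing one pixel yields its (R, G, B) components:

```python
import numpy as np

# A 2 x 2 RGB image: each pixel holds a 3-vector of (R, G, B) components.
rgb = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)

print(rgb.shape)               # (2, 2, 3)
print(rgb[0, 0])               # pixel at row 0, col 0 -> [255 0 0], pure red
red = rgb[:, :, 0]             # the R component of every pixel
```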
Color Imaging
Introduction
We'll look at color image processing, covering:
– Color fundamentals
– Color models
Color Fundamentals

In 1666, Sir Isaac Newton discovered that when a beam of sunlight passes through a glass prism, the emerging beam is split into a spectrum of colors.

[Figure from Gonzalez & Woods, Digital Image Processing (2002)]
Color Fundamentals (cont…)

The colors that humans and most animals perceive in an object are determined by the nature of the light reflected from the object.

For example, green objects reflect light with wavelengths primarily in the range of 500–570 nm while absorbing most of the energy at other wavelengths.

[Figure: white light striking a green object; colors outside the green band are absorbed.]
Color Fundamentals (cont…)

Chromatic light spans the electromagnetic spectrum from approximately 400 to 700 nm.

As we mentioned before, human color vision is achieved through 6 to 7 million cones in each eye.
Color Fundamentals (cont…)
3 basic qualities are used to describe the quality
of a chromatic light source:
– Radiance: the total amount of energy that flows
from the light source (measured in watts)
– Luminance: the amount of energy an observer
perceives from the light source
(measured in lumens)
• Note we can have high radiance, but low
luminance
– Brightness: a subjective (practically
unmeasurable) notion that embodies the intensity
of light
Describing Chromatic lights
• Radiance (watt):
– Total amount of energy flow from the light source.
• Luminance (lumens, lm):
– measure of amount of energy an observer perceives from
a light source. It varies based on distance from the
source, wavelength, etc.
• Brightness:
– a subjective descriptor, describing color sensation.
Primary Colors

• Primary colors of light (additive):
– Red (700 nm): about 65% of cones are most sensitive to red light.
– Green (546.1 nm): about 33% of cones are most sensitive to green light.
– Blue (435.8 nm): about 2% of cones are most sensitive to blue light.

• Mixing of R, G, B may NOT generate ALL visible colors.
Primary and Secondary Colors of Lights and
Pigments
• Primary colors of
pigment (subtractive):
– magenta,
– cyan, and
– yellow.
Color Models

• RGB color model: monitors, video
• CMY (CMYK) color model: printing
• HSI color model: close to the human visual system (HVS)
RGB Color Model

• R, G, B along 3 axes, each ranging in [0, 1]
• Gray scale lies along the diagonal
• If each component is quantized into 256 levels [0:255], the total number of different colors that can be produced is (2^8)^3 = 2^24 = 16,777,216 colors.
• RGB safe colors: quantize each component into 6 levels from 0 to 255.

[Figures: the 24-bit RGB color cube and the RGB safe-color cube.]
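Both counts above, plus the safe-colour quantization, can be checked with a few lines (a sketch in Python; the six levels 0, 51, 102, 153, 204, 255 are the standard safe-colour values):

```python
import numpy as np

print((2 ** 8) ** 3)           # 16777216 distinct 24-bit RGB colours
print(6 ** 3)                  # 216 colours in the safe-colour cube

def to_safe(rgb):
    """Snap an (R, G, B) triple to the nearest safe-colour level per component."""
    rgb = np.asarray(rgb, dtype=float)
    return (np.round(rgb / 51) * 51).astype(np.uint8)

print(to_safe([123, 200, 10])) # [102 204   0]
```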
Color Models
From the previous discussion it should be
obvious that there are different ways to model
color
We will consider two very popular models used
in color image processing:
– RGB (Red Green Blue)
– HSI (Hue Saturation Intensity)
RGB
In the RGB model each color appears in its primary spectral components of red, green and blue.
The model is based on a Cartesian coordinate system:
– Red, green and blue are at 3 corners
– Cyan, magenta and yellow are at the three other corners
– Black is at the origin
– White is the corner furthest from the origin
– Different colors are points on or inside the cube, represented by RGB vectors
RGB (cont…)

[Figure: the RGB color cube. Image taken from Gonzalez & Woods, Digital Image Processing (2002)]
The HSI Color Model
RGB is useful for hardware implementations and
is serendipitously related to the way in which
the human visual system works
However, RGB is not a particularly intuitive way
in which to describe colors
Rather when people describe colors they tend to
use hue, saturation and brightness
RGB is great for color generation, but HSI is
great for color description
The HSI Color Model (cont…)
The HSI model uses three measures to describe
colors:
– Hue: A color attribute that describes a pure color
(pure yellow, orange or red)
– Saturation: Gives a measure of how much a pure
color is diluted with white light
– Intensity: Brightness is nearly impossible to
measure because it is so subjective. Instead we
use intensity. Intensity is the same
achromatic notion that we have seen in grey
level images
HSI, Intensity & RGB
Intensity can be extracted from RGB images –
which is not surprising if we stop to think about
it
Remember the diagonal on the RGB color cube
that we saw previously ran from black to white
Now consider if we stand this cube on the black
vertex and position the white vertex directly
above it
HSI, Intensity & RGB (cont…)

Now the intensity component of any color can be determined by passing a plane perpendicular to the intensity axis and containing the color point. The intersection of the plane with the intensity axis gives us the intensity component of the color.

[Figure from Gonzalez & Woods, Digital Image Processing (2002)]
HSI, Hue & RGB

In a similar way we can extract the hue from the RGB color cube.

Consider a plane defined by the three points cyan, black and white. All points contained in this plane must have the same hue (cyan), as black and white cannot contribute hue information to a color.

[Figure from Gonzalez & Woods, Digital Image Processing (2002)]
HSI Color Model

• Hue: an attribute describing pure color.
• Saturation: the degree to which a pure color is diluted by white light.
• HSI model: hue and saturation lie in a plane perpendicular to the intensity axis.
Color Coordinate Transform

• RGB → CMY:

$$\begin{bmatrix} C \\ M \\ Y \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

• RGB → HSI:

$$H = \begin{cases} \theta & \text{if } B \le G \\ 360^\circ - \theta & \text{if } B > G \end{cases}, \qquad \theta = \cos^{-1}\left\{ \frac{\tfrac{1}{2}\left[(R-G) + (R-B)\right]}{\left[(R-G)^2 + (R-B)(G-B)\right]^{1/2}} \right\}$$

$$S = 1 - \frac{3\min(R, G, B)}{R + G + B}, \qquad I = \frac{R + G + B}{3}$$

• HSI → RGB (for the RG sector, 0° ≤ H < 120°; the other sectors are analogous):

$$B = I(1 - S), \qquad R = I\left[1 + \frac{S \cos H}{\cos(60^\circ - H)}\right], \qquad G = 3I - (R + B)$$
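As a cross-check on the RGB → HSI formulas above, a minimal Python sketch (components assumed normalized to [0, 1]; H is returned in degrees, and the cosine argument is clamped to guard against floating-point round-off):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert one RGB pixel, components in [0, 1], to (H, S, I)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0.0 else 1.0 - min(r, g, b) / i   # = 1 - 3*min(R,G,B)/(R+G+B)
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:                  # achromatic pixel: hue is undefined, use 0
        h = 0.0
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))    # pure red -> (0.0, 1.0, 0.333...)
```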
