
Chapter 1: Digital Image Fundamentals


1. What is Digital Image Processing?
© Dr. Dafda
Contents:
1. What is an Image and what is Digital Image?
2. What are types of Digital Images?
3. What is Image Processing?
4. Types of Image Processing
5. What is Digital Image Processing?
6. Types of Digital Image Processing
7. Applications of Digital Image Processing
1. What is an Image and what is Digital Image?
• An image is a pictorial representation of someone or something.

• As the saying goes, “A picture is worth a thousand words”.


1. What is an Image and what is Digital Image?…

• An image is a two-dimensional function f(x,y), where x and y are the spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point.

• If x, y and the amplitude values of f are all finite, discrete quantities, we call the image a digital image. A digital image is composed of a finite number of elements called pixels, each of which has a particular location and value.
1. What is an Image and what is Digital Image?…

[Figure: example images shown as 2-D functions, with sample values f(x0,y0) and f(x1,y1)]
1. What is an Image and what is Digital Image?…

Consider the following image (2724 × 2336 pixels) as a 2-D function, i.e. a matrix with rows and columns.

Pixel intensity value at pixel location (1,1): f(1,1) = 33

In 8-bit representation, pixel intensity values range between 0 (black) and 255 (white).

A 6 × 6 block of intensities (rows 645-650, columns 1323-1328):
f(645:650, 1323:1328) =
83 82 132 132 131 130
82 82 132 132 131 130
82 82 132 132 131 130
82 82 132 132 132 131
80 79 133 133 132 131
80 79 133 133 132 131

f(2724, 2336) = 83
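
As a quick illustration (not from the slides), here is a minimal Python/NumPy sketch of treating a grayscale image as a 2-D array and inspecting pixel intensities; the random array is a stand-in for real image data, and note that NumPy indexing is 0-based while the slide uses 1-based (MATLAB-style) indexing:

import numpy as np

# Stand-in for a 2724 x 2336 grayscale image; real data would come from a file.
rows, cols = 2724, 2336
f = np.random.randint(0, 256, size=(rows, cols), dtype=np.uint8)

print(f[0, 0])                  # intensity at the first pixel, cf. f(1,1) above
print(f[644:650, 1322:1328])    # a 6x6 block, cf. f(645:650, 1323:1328) above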
1. What is an Image and what is Digital Image?…

[Figure: a single pixel magnified]

• Remember, digitization implies that a digital image is an approximation of a real scene.
1. What is an Image and what is Digital Image?…

[Figure: conventional coordinate system used for image representation]
2. What are types of Digital Images?

• The common types of digital images are binary (black-and-white), grayscale, and colour images (indexed or RGB).
3. What is Image Processing?
• Image processing is the manipulation of images by the brain or by a computer.
4. Types of Image Processing
Image processing is of two types:
(1) Analog image processing, e.g. processing a CRT TV image.
(2) Digital image processing, e.g. processing an image using a PC or smartphone.
5. What is Digital Image Processing?

• Digital Image Processing is a method of performing operations on a digital image in order to obtain an enhanced image or to extract useful information from it.

• It is a type of signal processing in which the input is an image and the output may be an image or characteristics/features associated with that image.

• The motivations behind DIP are:
(1) Improvement of pictorial information for human interpretation.
(2) Processing of image data for storage, transmission and representation for machine perception.
6. Types of Digital Image Processing

(1) Low-level processing: Primitive operations such as noise reduction, image sharpening and enhancement. Both input and output are images.

(2) Mid-level processing: Image segmentation, classification of individual objects, etc. Here the inputs are images, but the outputs are attributes extracted from those images, e.g. the edges of an image (see the sketch after this list).

(3) High-level processing: Involves making sense of recognized objects and performing functions associated with vision, e.g. automatic character recognition, military target recognition and autonomous navigation.
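
A hedged sketch (in plain NumPy, not from the slides) contrasting a low-level operation, which maps an image to an image, with a mid-level operation, which maps an image to attributes; the mean filter and edge threshold are arbitrary illustrative choices:

import numpy as np

img = np.random.rand(128, 128)   # stand-in grayscale image

# Low-level: 3x3 mean filter (noise reduction) -- image in, image out.
smooth = sum(np.roll(np.roll(img, dy, axis=0), dx, axis=1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0

# Mid-level: a crude edge map -- image in, attributes (edge pixels) out.
gy, gx = np.gradient(smooth)
edges = np.hypot(gx, gy) > 0.1   # boolean mask; threshold chosen arbitrarily
print(edges.sum(), "edge pixels found")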
7. Applications of Digital Image Processing

1. Biometrics
2. Vehicle number-plate detection
3. Content-based image retrieval (CBIR)
4. Steganography
5. Medical imaging
6. Object recognition
7. Image enhancement and noise removal
Thank You
2. Human Visual System and Elements of DIP
© Dr. Dafda
Contents:
1. Structure of Human Eye
2. Realization of the blind spot
3. Contrast sensitivity
4. Brightness adaptation and discrimination
5. Simultaneous contrast
6. Optical Illusions
7. Elements of Digital Image Processing System
1. Structure of Human Eye
• The lens focuses the light reflected from different objects onto the retina, which is composed of photoreceptors: rods and cones.

• Nerve fibres from the retina leave the eyeball through the optic nerve bundle.
2. Realization of the blind spot
• Draw the two letters ‘D’ and ‘A’ on a piece of paper, around 3 inches apart.

• Close your right eye and focus on the letter ‘A’. Slowly move the paper closer to your face.

• At one particular distance the letter ‘D’ will disappear. This is due to the blind spot.

D                                   A


3. Contrast sensitivity
• The response of the human eye to changes in the intensity of illumination is non-linear.
• The ratio ΔIc/I is called the Weber ratio, where I is the background illumination and ΔIc is the smallest perceivable increment over that background.
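As a hedged worked example (the 2% figure is a typical textbook value, not stated in these slides): if the Weber ratio is about 0.02 over the usable range, then against a background of I = 100 units the just-noticeable increment is ΔIc ≈ 0.02 × 100 = 2 units, whereas against I = 1000 units it grows to about 20 units.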
4. Brightness adaptation and discrimination
• The dynamic range of the human eye is enormous, but at any given point in time the eye can observe only a small range of illuminations.
• This phenomenon is called brightness adaptation.

[Figure: range of intensities to which the human visual system can adapt]

• The human visual system can perceive approximately 10^10 different light intensity levels, but at any one time we can hardly discriminate between 40-50 shades, due to brightness adaptation. Also, the perceived intensity of a region is related to the light intensities of the regions surrounding it.

[Figure: Mach bands]
5. Simultaneous contrast
• Each of the small squares has the same intensity, but because the surrounding grey level of each bigger square is different, the small squares do not appear equally bright.
• Hence the intensities we perceive are not the absolute values.
6. Optical Illusions

• Our visual system plays a lot of interesting tricks on us.
7. Elements of Digital Image Processing System
An image processing system is the combination of the different elements involved in digital image processing. It consists of the following components:

• Image Sensors: With reference to sensing, two elements are required to acquire a digital image. The first is a physical device that is sensitive to the energy radiated by the object we wish to image (e.g. a camera); the second is specialized image processing hardware.

• Image Processing Hardware: Dedicated hardware used to process the signals obtained from the image sensors.

• Computer: The computer in an image processing system is a general-purpose computer of the kind we use in daily life.

• Image Processing Software: Software that includes all the mechanisms and algorithms used in the image processing system.

• Mass Storage: Stores the pixels of the images during processing.

• Hard Copy Device: Once the image is processed, it is stored in a hard copy device, which can be a pen drive or any external ROM device.

• Image Display: The monitor or display screen that displays the processed images.

• Network: The connection between all of the above elements of the image processing system.
Thank You
3. Fundamental steps in Digital Image Processing
© Dr. Dafda
Fundamental steps in Digital Image Processing

Knowledge Base: The knowledge about a problem domain is coded into an image processing system. The knowledge base controls the interaction between the different modules of the system.

Image Acquisition: In this step the image is captured by a camera and digitized if it is not already in digital form.

Image Enhancement: The process of manipulating an image so that the result is more suitable than the original for a specific application.

Image Restoration: The process of recovering an image that has been degraded. It uses mathematical or probabilistic models of the degradation.

Morphological Processing: Tools for extracting image components that are useful in the representation and description of shape.

Image Segmentation: Here the computer tries to separate objects from the rest of the image.

Object Recognition: A process that assigns a label (e.g. “display board”) to an object based on its description.

Representation and Description: Representation deals with converting the data into a form suitable for computer processing; description deals with extracting features.

Image Compression: Techniques for reducing the storage space required to save an image, or the bandwidth required to transmit it.

Colour Image Processing: The processing of coloured images, either as indexed images or as RGB images.
Thank You
4. Image Sensing and Acquisition
© Dr. Dafda
• Image sensing is done by a sensor, which converts optical energy into electrical energy, which is then digitized into an image.

• Image acquisition involves three components:
(1) Illumination
(2) Optical system (lens system)
(3) Sensor system

• The three principal sensor arrangements used to transform illumination energy into digital images are:
(1) Single sensor
(2) Line (strip) sensor
(3) Array sensor

[Figure: single sensor, line sensor and array sensor arrangements]

Image Acquisition using a Single Sensor:
• The best example of a single sensor is the photodiode, constructed of silicon, whose output voltage waveform is proportional to the incident light.
• The use of a filter in front of a sensor improves selectivity. For example, a green (pass) filter in front of a light sensor passes only green light.
• In order to generate a 2-D image using a single sensor, there must be relative displacement in both the x- and y-directions between the sensor and the area to be imaged.
Image Acquisition Using Sensor Strips:
• The sensor strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction.
• This arrangement is used in most flat-bed scanners.
• Sensing devices with 4000 or more in-line sensors are possible.
Image Acquisition Using a Circular Sensor Strip:
• A sensor strip mounted in a ring configuration is used in X-ray, medical and industrial imaging to obtain cross-sectional (slice) images of 3-D objects.
• A rotating X-ray source provides illumination, and the portion of the sensors opposite the source collects the X-ray energy that passes through the object.
Image Acquisition using an Array Sensor:
• The typical array sensor is a CCD array, which can be manufactured with 4000 × 4000 elements or more.
• The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor.

[Figure: CCD KAF-3200E from Kodak (2184 × 1472 pixels, pixel size 6.8 μm)]
A Simple Image Formation Model:
• An image is defined by a two-dimensional function f(x,y). The value or amplitude of f at spatial coordinates (x,y) is a positive scalar quantity:
0 < f(x,y) < ∞

• The function f(x,y) may be characterized by two components: (1) the amount of source illumination incident on the scene being viewed, i(x,y), and (2) the amount of illumination reflected by the objects in the scene, r(x,y):
f(x,y) = i(x,y) r(x,y), where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1

• The gray level l lies in the interval [0, L−1], where l = 0 indicates black and l = L−1 indicates white. All the intermediate values are shades of gray varying from black to white.
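
A minimal Python/NumPy sketch of this formation model (the illumination and reflectance patterns are synthetic stand-ins chosen only to satisfy the stated bounds):

import numpy as np

x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))

i = 200.0 * (0.5 + 0.5 * x)    # illumination: 0 < i(x,y) < infinity
r = np.clip(y, 0.05, 0.95)     # reflectance: 0 < r(x,y) < 1
f = i * r                      # formed image: a positive scalar at each (x,y)

# Scale to L = 256 discrete gray levels l in [0, L-1]: 0 is black, 255 is white.
L = 256
l = np.round(f / f.max() * (L - 1)).astype(np.uint8)
print(l.min(), l.max())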
Thank You
5. Relationship between Pixels: Neighbourhood, Adjacency and Distance Measures in DIP
© Dr. Dafda
Basic relationships between pixels (Neighborhood):

• Any image is denoted by a function f(x,y), and the image is composed of many pixels. For a pixel p at coordinates (x,y):
• N4(p) = {(x+1, y), (x−1, y), (x, y+1), (x, y−1)}, the 4-neighbours of p.
• ND(p) = {(x+1, y+1), (x+1, y−1), (x−1, y+1), (x−1, y−1)}, the diagonal neighbours of p.
• N8(p) = N4(p) ∪ ND(p), the 8-neighbours of p. A small sketch of these definitions follows.
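
A small Python sketch of these neighbourhood definitions; coordinates are plain (x, y) tuples and no bounds checking is done at image borders:

def n4(x, y):
    """4-neighbours N4(p) of pixel p = (x, y)."""
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

def nd(x, y):
    """Diagonal neighbours ND(p)."""
    return [(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)]

def n8(x, y):
    """8-neighbours N8(p): the union of N4(p) and ND(p)."""
    return n4(x, y) + nd(x, y)

print(n4(2, 3))   # [(3, 3), (1, 3), (2, 4), (2, 2)]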
Basic relationships between pixels (Adjacency):
• Connectivity between pixels is a fundamental concept that simplifies the definition of numerous digital image concepts, such as regions and boundaries. Let V be the set of intensity values used to define adjacency.
(a) 4-adjacency: Two pixels p and q with values from V are 4-adjacent if q is in the set N4(p).
(b) 8-adjacency: Two pixels p and q with values from V are 8-adjacent if q is in the set N8(p).
(c) m-adjacency (mixed adjacency): Two pixels p and q with values from V are m-adjacent if (i) q is in N4(p), or (ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose values are from V.

Example grids (a binary grid with V = {1}, and a gray-level grid with V = {0,1,2,…,10}):

V = {1}      V = {0,1,2,…,10}
0 1 0 1      66   9  90   7
0 0 1 0      92 166   6  19
0 0 1 0      110 66   5  77
1 0 0 0      8   80 133  70
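
Reusing the n4/n8 helpers from the sketch above, a hedged illustration of 4- and 8-adjacency on the binary grid; storing the grid as a {(x, y): value} dictionary is purely for brevity:

def adjacent4(img, p, q, V):
    # p and q are 4-adjacent if both values are in V and q is in N4(p)
    return img[p] in V and img[q] in V and q in n4(*p)

def adjacent8(img, p, q, V):
    # p and q are 8-adjacent if both values are in V and q is in N8(p)
    return img[p] in V and img[q] in V and q in n8(*p)

# The binary example grid above, with x as column and y as row:
grid = {(x, y): v
        for y, row in enumerate([[0, 1, 0, 1],
                                 [0, 0, 1, 0],
                                 [0, 0, 1, 0],
                                 [1, 0, 0, 0]])
        for x, v in enumerate(row)}
print(adjacent8(grid, (1, 0), (2, 1), V={1}))   # True: diagonal 1-pixels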
Distance measures between Pixels:
• For pixels p, q and z with coordinates (x,y), (s,t) and (u,v) respectively, D is a distance function or metric.
(a) Euclidean distance: De(p,q) = [(x − s)² + (y − t)²]^(1/2)
(b) City-block distance: D4(p,q) = |x − s| + |y − t|
(c) Chessboard distance: D8(p,q) = max(|x − s|, |y − t|)

[Figure: pixels p(x,y), q(s,t) and z(u,v), illustrating the D4 and D8 distances]
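
These three metrics transcribe directly into Python; a quick sketch with a worked pair of points:

import math

def d_e(p, q):   # Euclidean distance
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4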


Thank You
6. Image Sampling and Quantization
© Dr. Dafda
• Sampling and quantization are the two important processes used to convert a continuous analog image into a digital image.

• Image sampling refers to the discretization of the spatial coordinates, whereas quantization refers to the discretization of the gray-level (amplitude) values.

• Given a continuous image f(x,y), digitizing the coordinate values is called sampling, and digitizing the amplitude (intensity) values is called quantization. A minimal sketch of both steps follows.
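
A Python/NumPy sketch of sampling and quantization on a synthetic continuous image (the sinusoidal pattern and the grid size are illustrative choices, not from the slides):

import numpy as np

def f(x, y):
    # Synthetic continuous image with amplitudes in [0, 1]
    return 0.5 + 0.5 * np.sin(10 * x) * np.cos(10 * y)

# Sampling: evaluate f on a discrete M x N grid of spatial coordinates.
M, N = 64, 64
xs, ys = np.meshgrid(np.linspace(0, 1, N), np.linspace(0, 1, M))
sampled = f(xs, ys)

# Quantization: map the continuous amplitudes to k-bit integer gray levels.
k = 8
L = 2 ** k
digital = np.round(sampled * (L - 1)).astype(np.uint8)
print(digital.shape, digital.min(), digital.max())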
Representing Digital Image
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point.

[Figure: coordinate convention used to represent digital images; the boxes inside the image represent pixels]
If k is the number of bits per pixel, then the number of gray levels, L, is an integer power of 2:
L = 2^k
When an image has 2^k gray levels, it is common practice to refer to it as a “k-bit image”. For example, an image with 256 possible gray-level values is called an 8-bit image.
Therefore, the number of bits required to store a digitized image of size M × N is
b = M × N × k

Example:
How much storage capacity is required to store an image of size 1024 × 768 with 256 gray levels?
Since 2^8 = 256, it is an 8-bit image.
Hence the storage capacity required is b = M × N × k = 1024 × 768 × 8 = 6,291,456 bits = 786,432 bytes = 786.432 kB.
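
A quick check of this arithmetic in Python:

M, N, levels = 1024, 768, 256
k = levels.bit_length() - 1   # 256 = 2**8, so k = 8
bits = M * N * k
print(bits, bits // 8)        # 6291456 bits, 786432 bytes (786.432 kB)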
Thank You
