
EE-333 Digital Image Processing

Image Processing Fundamentals

Dr Tahir Nawaz
Website: https://www.tahirnawaz.com
Email: tahir.nawaz@ceme.nust.edu.pk
Human visual perception
• Before embarking on our journey of learning image processing, it is
very important to understand how human perception works.

• This is because the underlying principle of how image processing
works is based on human visual perception.

• So, without understanding the human vision system (the eye), we
can't really think of image processing!
Human visual perception
Structure of an eye
• Let's “extract” and “disassemble” an eye!
Human visual perception
Structure of an eye
• Simplified view of the cross-section of an eye
Human visual perception
Structure of an eye
• A simplified midline view of an eye
Human visual perception
Structure of an eye
• How does an eye image an object?
Human visual perception
Structure of an eye
• How does an eye focus?
Human visual perception
From Eye to the Brain
• Once an image is focused on the sensitive part of the retina, energy
in the light that makes up that image creates an electrical signal.
• Nerve impulses can then carry information about that image to the
brain through the optic nerve.
Human visual perception
From Eye to the Brain
• The brain interprets the image orientation correctly
Human visual perception
Photo-receptors: Cones and Rods
• The retina contains specialized cells, (photo)receptors, that convert
light into electrical signals
• Rods → black & white (gray) vision in low light (night)
• Cones → color vision in bright light (day)
Human visual perception
Photo-receptors: Cones and Rods
• Cones (6 to 7 million in number) are the cells responsible for
daylight vision, and they come in three different kinds, each
responding to a different wavelength of light: one responds to red
light, one to green light, and one to blue light. It is these cones that
allow us to see things in color.
• Rods (75 to 150 million in number), on the other hand, are
responsible for night vision. They are sensitive to light but not to
wavelength (color) information. In darkness the cones do not
function at all, so we need rods in order to see things, even if only
in shades of gray.
Human visual perception
Photo-receptors: Cones and Rods
• The figure shows that rods and cones are distributed (largely)
around the fovea (the central portion of the retina) in a radially
symmetric manner, except at the ‘blind spot’ (where there are no
receptors)
Human visual perception
Brightness adaptation and discrimination
• The range of light intensity levels to which the human visual system
can adapt is enormous – on the order of 10^10 – thus exhibiting a very
impressive dynamic range

• However, the visual system cannot operate over this dynamic range
simultaneously; rather, at any one time humans can only
discriminate between a much smaller number of intensity levels.

• This is known as “brightness adaptation”.


Human visual perception
Brightness adaptation and discrimination
• A related phenomenon to highlight is that the visual system tends
to undershoot or overshoot around the boundary regions of
different intensities.
• See the following figure, in which the intensity of each stripe is
constant, but we perceive a brightness pattern that is strongly
scalloped, especially near the boundaries
Human visual perception
Brightness adaptation and discrimination
• Another phenomenon that demonstrates that a region’s perceived
brightness does not simply depend on its intensity is called
“simultaneous contrast”.
• See the following figure, in which all center squares have exactly
the same intensity. However, they appear to become darker as the
background gets lighter.
Human visual perception
Brightness adaptation and discrimination
• Other examples of human perception phenomena are “optical
illusions”, in which the eye fills in non-existing information or
wrongly perceives geometrical properties of objects.
Digital images
• A digital image is a grid of squares, each of which contains a
specific color
• Each square is called a pixel, which is the basic element of an image
Digital images
Digital image representation
Digital images
Common image formats
• 1 sample per point (Grayscale image)
• 3 samples per point (Color image containing Red, Green, and Blue
channels)
Digital images
• Color images have 3 (intensity) values per pixel; monochrome or
grayscale images have 1 (intensity) value per pixel.
Digital images
• So, a digital image is a set of pixels (picture elements, pels)
• Pixel means
– pixel coordinate
– pixel value
– or both
• Both coordinates and value are discrete
Digital images
Pixel
• p = (r,c) is the pixel location indexed by row, r, and column, c
• I(p) = I(r,c) is the value of the pixel at location p
• If I(p) is a single number then I is called a monochrome or (more
commonly) a grayscale image
• If I(p) contains a list of numbers then I has multiple bands or
channels and is commonly called a color image.
Digital images
Pixel
Image sensing and acquisition
• The types of images we are interested in here are generated by the
combination of an “illumination” source and the reflection or
absorption of energy from the source by the elements of the
“scene” being imaged.

• The illumination may originate from a source of electromagnetic
energy such as a radar, visible, infrared, or X-ray imaging system, or
ultrasound, or even a computer-generated illumination pattern.

• Depending on the nature of the source, illumination energy is
reflected from, or transmitted through, objects.
Image sensing and acquisition
Three principal sensor arrangements for image acquisition

Image acquisition
using a single sensor

Image acquisition
using a sensor strip

Image acquisition
using a sensor array
Image sensing and acquisition
Digital image acquisition process

An example of the digital image acquisition process


Image sensing and acquisition
A simple image formation model
f(x,y) = i(x,y) r(x,y)

f(x,y): intensity at the point (x,y)

i(x,y): illumination at the point (x,y)
(the amount of source illumination incident on the scene)

r(x,y): reflectance/transmissivity at the point (x,y)
(the amount of illumination reflected/transmitted by the object)

where 0 < i(x,y) < ∞ and 0 < r(x,y) < 1
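A minimal NumPy sketch of this model (the array size and the illumination/reflectance values are illustrative assumptions):

import numpy as np

def form_image(i, r):
    """f(x,y) = i(x,y) * r(x,y), element-wise over the image grid."""
    assert np.all(i > 0), "illumination must satisfy 0 < i(x,y) < inf"
    assert np.all((r > 0) & (r < 1)), "reflectance must satisfy 0 < r(x,y) < 1"
    return i * r

i = np.full((4, 4), 100.0)                 # uniform source illumination
r = np.full((4, 4), 0.1); r[:, 2:] = 0.9   # dark left half, bright right half
f = form_image(i, r)                       # resulting intensities: 10 and 90
print(f)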


Image sampling and quantization
• Sampling:
– Digitization of the spatial coordinates (x,y)
– Commonly used numbers of samples (resolution):
• Digital still cameras: 640 × 480, 1024 × 1024, 4064 × 2704
• Digital video cameras: 640 × 480 at 30 frames/second (fps)

• Quantization:
– Digitization in amplitude (also known as gray-level quantization)
– 8-bit quantization: 2^8 = 256 gray levels (0: black, 255: white)
– 1-bit quantization: 2 gray levels (0: black, 1: white) – binary
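A short sketch of k-bit quantization (the ramp input and helper name are illustrative assumptions):

import numpy as np

def quantize(image, k):
    """Quantize a float image in [0, 1] to 2**k discrete gray levels."""
    levels = 2 ** k
    return np.floor(image * levels).clip(0, levels - 1).astype(np.uint8)

gradient = np.linspace(0.0, 1.0, 8)  # a continuous-valued intensity ramp
print(quantize(gradient, 8))         # 8 bits: 256 levels, values 0 ... 255
print(quantize(gradient, 1))         # 1 bit: binary, values 0 or 1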
Image sampling and quantization
Image sampling and quantization

Left: Continuous image projected onto a sensor array


Right: Results of image sampling and quantization
Image sampling and quantization
Representation of a digital image

f(x,y) =
\begin{bmatrix}
f(0,0) & f(0,1) & \cdots & f(0,N-1) \\
f(1,0) & f(1,1) & \cdots & f(1,N-1) \\
\vdots & \vdots & \ddots & \vdots \\
f(M-1,0) & f(M-1,1) & \cdots & f(M-1,N-1)
\end{bmatrix}

A =
\begin{bmatrix}
a_{0,0} & a_{0,1} & \cdots & a_{0,N-1} \\
a_{1,0} & a_{1,1} & \cdots & a_{1,N-1} \\
\vdots & \vdots & \ddots & \vdots \\
a_{M-1,0} & a_{M-1,1} & \cdots & a_{M-1,N-1}
\end{bmatrix}
Image sampling and quantization
Representation of a digital image

• Discrete intensity interval [0, L-1], L = 2^k,
where k is the number of bits

• The number of bits (b), i.e., the image size, required to store an
M × N digitized image:
b = M × N × k
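• For example, a 1024 × 1024 image with k = 8 bits requires
b = 1024 × 1024 × 8 = 8,388,608 bits (i.e., 1,048,576 bytes = 1 MB)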
Image sampling and quantization
Spatial resolution
• The smallest discernible detail in an image; it often refers to the
number of pixels in an image
Image sampling and quantization
Spatial resolution
Image sampling and quantization
Intensity resolution
• The smallest discernible change in intensity level; it refers to the
number of intensity levels used to represent the image
• The more intensity levels used, the finer the level of detail in an
image
• Intensity resolution is usually given in terms of the number of bits
used to store each intensity level
Image sampling and quantization
Intensity resolution
Image sampling and quantization
Intensity resolution
Relationship between pixels
Neighborhood
• The neighbors of a pixel are the pixels that are adjacent to an
identified pixel
Relationship between pixels
Neighborhood
• 4-neighbors of a pixel
Relationship between pixels
Neighborhood
• Diagonal neighbors of a pixel
Relationship between pixels
Neighborhood
• 8-neighbors of a pixel
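A small sketch of these three neighborhoods as coordinate offsets (the helper name and image shape are illustrative assumptions):

# Offsets (dr, dc) of the 4-, diagonal, and 8-neighbors of a pixel p = (r, c)
N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
ND = [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # the four diagonal neighbors
N8 = N4 + ND                               # union: all eight neighbors

def neighbors(p, offsets, shape):
    """Neighbors of p that fall inside an image of the given (rows, cols)."""
    r, c = p
    return [(r + dr, c + dc) for dr, dc in offsets
            if 0 <= r + dr < shape[0] and 0 <= c + dc < shape[1]]

print(neighbors((0, 0), N8, (3, 3)))  # a corner pixel keeps only 3 neighbors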
Relationship between pixels
Adjacency
• Let V be the set of intensity values used to define the criterion for
similarity

• 4-adjacency: Two pixels p and q with values from V are 4-adjacent
if q is in the set N4(p).

• 8-adjacency: Two pixels p and q with values from V are 8-adjacent
if q is in the set N8(p).

• m-adjacency: Two pixels p and q with values from V are m-adjacent if
(i) q is in the set N4(p), or
(ii) q is in the set ND(p) and the set N4(p) ∩ N4(q) has no pixels
whose values are from V.
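A self-contained sketch of this m-adjacency test (the image values and function names are illustrative assumptions):

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]
ND = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def _nbrs(p, offsets):
    """Coordinate set obtained by applying the offsets to p (bounds ignored)."""
    return {(p[0] + dr, p[1] + dc) for dr, dc in offsets}

def m_adjacent(img, p, q, V):
    """True if pixels p and q, both with values from V, are m-adjacent."""
    if img[p[0]][p[1]] not in V or img[q[0]][q[1]] not in V:
        return False
    if q in _nbrs(p, N4):                        # condition (i): q in N4(p)
        return True
    # condition (ii): q in ND(p) and N4(p) ∩ N4(q) has no pixels from V
    common = [(r, c) for r, c in _nbrs(p, N4) & _nbrs(q, N4)
              if 0 <= r < len(img) and 0 <= c < len(img[0])]
    return q in _nbrs(p, ND) and all(img[r][c] not in V for r, c in common)

# Example with V = {1, 2} on a tiny image (values chosen for illustration)
img = [[0, 1, 1],
       [0, 2, 0],
       [0, 0, 1]]
print(m_adjacent(img, (0, 1), (1, 1), {1, 2}))  # True: q is in N4(p)
print(m_adjacent(img, (1, 1), (2, 2), {1, 2}))  # True: diagonal, condition (ii)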
Relationship between pixels
Adjacency

V = {1, 2}
Relationship between pixels
Path
• A (digital) path (or curve) from pixel p with coordinates (x0, y0) to
pixel q with coordinates (xn, yn) is a sequence of distinct pixels with
coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n.

• Here n is the length of the path.

• If (x0, y0) = (xn, yn), the path is a closed path.

• We can define 4-, 8-, and m-paths based on the type of adjacency
used.
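A short breadth-first-search sketch that finds such a path under a chosen adjacency (the function name, image, and V are illustrative assumptions; an m-path would need the m-adjacency test above instead of plain offsets):

from collections import deque

N4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # use the 8-neighbor offsets for an 8-path

def find_path(img, p, q, V, offsets):
    """Shortest sequence of adjacent pixels with values in V from p to q."""
    rows, cols = len(img), len(img[0])
    prev = {p: None}
    frontier = deque([p])
    while frontier:
        cur = frontier.popleft()
        if cur == q:                       # reached q: walk back through predecessors
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in offsets:
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and nxt not in prev and img[nxt[0]][nxt[1]] in V):
                prev[nxt] = cur
                frontier.append(nxt)
    return None                            # no path under this adjacency

img = [[1, 0, 1],
       [1, 1, 1],
       [0, 0, 1]]
print(find_path(img, (0, 0), (2, 2), {1}, N4))  # a 4-path of length n = 4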
Relationship between pixels
Path
Relationship between pixels
Path
Relationship between pixels
Path
Relationship between pixels
Distance metrics
• Given pixels p, q, and z with coordinates (x, y), (s, t), and (u, v)
respectively, the distance function D has the following properties:

• D(p, q) ≥ 0, and
• D(p, q) = 0 iff p = q, and
• D(p, q) = D(q, p), and
• D(p, z) ≤ D(p, q) + D(q, z)
Relationship between pixels
Different types of distance metrics
• City block distance (D4 distance)

• Chessboard distance (D8 distance)

• Euclidean distance
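A minimal sketch of the three metrics (the function names are illustrative assumptions):

import math

def d4(p, q):
    """City-block (D4) distance: |x1 - x2| + |y1 - y2|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard (D8) distance: max(|x1 - x2|, |y1 - y2|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def de(p, q):
    """Euclidean distance: sqrt((x1 - x2)^2 + (y1 - y2)^2)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (2, 3), (5, 7)
print(d4(p, q), d8(p, q), de(p, q))  # 7 4 5.0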
Acknowledgement/References
• “Digital Image Processing”, Rafael C. Gonzalez & Richard E. Woods,
Addison-Wesley, 2002
• “Statistical Pattern Recognition: A Review”, A. K. Jain et al., PAMI (22),
2000
• Pattern Recognition and Analysis Course, A. K. Jain, MSU
• “Pattern Classification”, Duda et al., John Wiley & Sons
• “Machine Vision: Automated Visual Inspection and Robot Vision”,
David Vernon, Prentice Hall, 1991
• www.eu.aibo.com/
• “Advances in Human Computer Interaction”, Shane Pinder, InTech,
Austria, October 2008
• https://www.cs.nmt.edu/~ip/lectures.html
