
Fundamentals of Image Processing
Lect # 2

OUTLINE:
• Types of images
• Digital imaging systems
Types of images
 Binary image
◦ Two-valued [0, 1] image
◦ Example: used where only information about shape or outline is required
 Grayscale image
◦ Monochrome images where the number of bits defines the number of gray levels (e.g., an 8-bit image has 256 gray levels)
 Color image
◦ 3-band monochrome image where each band corresponds to a different color (Red, Green, Blue); 8 bits per plane => 24 bits per pixel
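The three image types above can be illustrated as arrays; the shapes, dtype and values below are assumptions chosen for this minimal NumPy sketch:

```python
import numpy as np

# Illustrative representation of the three image types described above.
binary = np.array([[0, 1], [1, 0]], dtype=np.uint8)   # two-valued [0, 1] image
gray   = np.zeros((2, 2), dtype=np.uint8)             # 8-bit monochrome image
color  = np.zeros((2, 2, 3), dtype=np.uint8)          # 3 bands (R, G, B), 8 bits each

print(np.iinfo(gray.dtype).max + 1)   # 8 bits -> 256 gray levels
print(color.shape[2] * 8)             # 3 planes x 8 bits -> 24 bits per pixel
```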
Types of images
 Types of images:
◦ Bitmap or raster image: a 2-D function
◦ Vector image: stores lines, curves and shapes using key points
 The 5 common formats of digital image files:
 TIFF
 JPEG
 GIF
 PNG
 Raw Image Files
File formats: full form, properties and usage

TIFF (.tiff or .tif): Tagged Image File Format
◦ Property: uncompressed, so files contain a lot of detailed image data
◦ Usage: photo software (such as Photoshop) and page-layout software (such as Quark and InDesign)

JPEG (.jpeg or .jpg): Joint Photographic Experts Group
◦ Property: uses lossy compression; bad for line drawings, logos or graphics, as compression makes lines look jagged
◦ Usage: photographs on the web; used by digital cameras

GIF (.gif): Graphic Interchange Format
◦ Property: uses lossless compression but has a limited color range
◦ Usage: suitable for the web but not for printing; never used for photography, due to the limited number of colors; can also be used for animations

PNG (.png): Portable Network Graphics
◦ Property: allows a full range of color and better compression
◦ Usage: created as an open format to replace GIF; used for web images, not for print; not for photographs, due to large file size; better than JPEG for images containing some text or line art

Raw image files
◦ Property: contain unprocessed data from a digital camera (usually); each camera company has its own proprietary format
◦ Usage: usually converted to TIFF before editing and color-correcting
How does the human eye work?
 The basic principle followed by cameras is taken from the way the human eye works.

[Figure: light from the object passes through the eye to the retina]

The cornea and lens serve to refract light and form an image on the retina.
Quantitative relationship between object and image size
 Let f be the focal length of the lens assembly,
 d_i be the distance of the image from the lens,
 d_o be the distance of the object from the lens.
Then, from the lens equation, we have
1/f = 1/d_i + 1/d_o
If the heights of the object and image are h_o and h_i respectively, the relationship is
h_i / h_o = -(d_i / d_o)
Numerical:
 Assume that the cornea-lens system has a focal length of 1.80 cm (0.0180 m). Determine the image size and image location of a 6-foot-tall man (1.83 m) who is standing approximately 10 feet away (3.05 m).

 ANSWER: -1.09 cm tall (inverted) at approx. 1.81 cm from the lens
Dependence of h_image and d_image on d_object (focal length fixed at 1.8 cm)

Object distance   Image distance   Image height
1.00 m            1.83 cm          3.35 cm
3.05 m            1.81 cm          1.09 cm
100 m             1.80 cm          0.0329 cm
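The numbers above follow from the lens equation and the magnification relation given earlier; the snippet below is a worked check (variable names are chosen for illustration):

```python
# Reproduce the table from 1/f = 1/d_i + 1/d_o and h_i/h_o = -(d_i/d_o).
f = 0.018    # focal length in metres (1.80 cm)
h_o = 1.83   # object height in metres (6-foot man)

for d_o in (1.00, 3.05, 100.0):
    d_i = 1.0 / (1.0 / f - 1.0 / d_o)   # solve the lens equation for image distance
    h_i = -h_o * d_i / d_o              # negative sign: the image is inverted
    print(f"d_o = {d_o:6.2f} m -> d_i = {d_i * 100:.2f} cm, h_i = {h_i * 100:.3f} cm")
```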
Thin lens assumption
The thin-lens assumption treats the lens as if it had no thickness, but this isn't true...

[Figure: object, lens, focal point and film]

By adding more elements to the lens, the distance at which a scene is in focus can be made roughly planar.
Depth of field

[Figure: aperture and film, comparing f/5.6 with f/32]

 Changing the aperture size affects depth of field
◦ A smaller aperture increases the range over which the object is approximately in focus
Image formation in a digital camera
Camera sensors
Sensors consist of pixels; the main component of a pixel is the photodiode, which converts the energy of the incoming photons (light energy) into an electrical charge.

The electrical charge is then converted to a voltage, which is amplified to a level at which it can be processed further by the analog-to-digital converter (ADC).

Thus, a discretized amplitude is obtained for each discretized location!
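The read-out chain above ends with the ADC mapping a continuous voltage to a discrete code. A minimal sketch of that quantization step, assuming an 8-bit ADC and a 1.0 V full-scale voltage (both illustrative):

```python
# Sketch of ADC quantization: a continuous pixel voltage becomes one of
# 2**bits discrete codes. The bit depth and full-scale voltage are assumed.
def adc_quantize(voltage, full_scale=1.0, bits=8):
    levels = 2 ** bits                                      # 256 levels for 8 bits
    code = int(voltage / full_scale * (levels - 1) + 0.5)   # round to nearest code
    return max(0, min(levels - 1, code))                    # clamp to valid range

print(adc_quantize(0.0))   # darkest pixel  -> 0
print(adc_quantize(0.5))   # mid-grey       -> 128
print(adc_quantize(1.2))   # over-exposed   -> clamped to 255
```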
Colored images
 Photodiodes are monochrome
devices. They are unable to tell the
difference between different
wavelengths of light (color).
 A mosaic pattern of color filters, called a Color Filter Array, is positioned on top of the sensor to filter out the red, green, and blue components of the light falling onto it.
 Missing information in each color
layer (beige squares) has to be
estimated by the software in the
camera.
 These approximations reduce
sharpness and introduce
inaccuracies called demosaic
artifacts.
White balance & Bayer’s
interpolation
 Step 1: Perform a white balance operation
A white object will have equal values of reflectivity for each primary color:
R = G = B
The color channel that has the highest mean is set as the target, and the remaining two channels are scaled with a gain multiplier to match. For example, if the green channel has the highest mean, gain 'a' is applied to red and gain 'b' is applied to blue:
G' = aR' = bB'
The "White Patch" method attempts to locate the objects that are truly white within the scene, by assuming that white pixels are also the brightest (I = R + G + B). Then only the top percentage of pixels by intensity are included in the calculation of the means, while excluding any pixels that have a saturated channel.
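Step 1 can be sketched as below. This simplified version uses the means of all pixels rather than only the top-percentage brightest ones, and the array shape and function name are assumptions:

```python
import numpy as np

# Sketch of white balancing: the channel with the highest mean sets the
# target, and the other two channels are scaled up to match it.
def white_balance(img):
    """img: float array of shape (H, W, 3) holding the R, G, B planes."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel means
    target = means.max()                      # brightest channel is the target
    gains = target / means                    # gain 1.0 for the target channel
    return np.clip(img * gains, 0.0, 1.0)

# Example: a scene with a blue cast gets its R and G channels boosted.
img = np.dstack([np.full((4, 4), 0.3),    # R
                 np.full((4, 4), 0.4),    # G
                 np.full((4, 4), 0.6)])   # B
balanced = white_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0))   # all three means now equal
```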
Effect of white balance
White balance & Bayer’s
interpolation
 Interpolating red and blue components

[Figure: Bayer pattern with B(x,y), R(x,y) and G(x,y) samples around location (x,y)]
White balance & Bayer’s
interpolation
 Interpolating green components

[Figure: four neighbouring samples R1, R2, R3, R4 around location (x,y)]
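The green interpolation step can be sketched as a four-neighbour average; the RGGB layout and the function name below are assumptions for the example:

```python
import numpy as np

# Sketch of Bayer green interpolation: at a red or blue site, the four
# horizontal/vertical neighbours are green sites, so the missing green
# value is estimated as their average.
def interpolate_green_at(mosaic, x, y):
    """Estimate green at (x, y), a red or blue site, from its 4 neighbours."""
    total = (mosaic[y - 1, x] + mosaic[y + 1, x] +
             mosaic[y, x - 1] + mosaic[y, x + 1])
    return total / 4.0

# Tiny example: in an RGGB mosaic whose green sites all hold 0.5, the
# estimate at the red site (2, 2) recovers that constant.
mosaic = np.zeros((5, 5))
mosaic[0::2, 1::2] = 0.5   # green sites on the red rows
mosaic[1::2, 0::2] = 0.5   # green sites on the blue rows
print(interpolate_green_at(mosaic, 2, 2))   # -> 0.5
```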