
Introduction to Digital Image Processing


What is Digital Image Processing?

 Image:
An image may be defined as a two-dimensional function, f(x,y), where x and y are spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity, or gray level, of the image at that point. When x, y, and the amplitude values of f are all finite, discrete quantities, we call the image a digital image.

 Digital Image Processing (DIP):
It refers to the processing of digital images by means of a digital computer.

Image Processing, Image Analysis and Computer Vision

 Broadly, image processing consists of three levels of processes: low, middle, and high.

 Low-level processes involve operations whose inputs and outputs are both images. For example, image enhancement operations such as taking the negative of an image or smoothing an image fall in this category. These operations are the mainstay (backbone) of DIP.

 Mid-level processes are characterized by the fact that their inputs are generally images, but their outputs are attributes extracted from those images, e.g. edges, contours, and the identity of individual objects.

Mid-level processing involves tasks such as:

 Segmentation (partitioning an image into regions or objects),
 Description of those objects (in terms of extracted features) to reduce them to a form suitable for computer processing, and
 Classification (recognition) of individual objects.
 High-level processing involves "making sense" of an ensemble (group) of recognized objects and performing the cognitive functions normally associated with vision.
 There is very little difference between DIP, Image Analysis, and Computer Vision.
 DIP covers both low- and mid-level processing, up to the recognition of individual regions or objects in the image.
 Image Analysis deals with the knowledge-extraction part, i.e., whether the recognized objects in the image make any sense (yield any useful knowledge) or not.
 Computer Vision then uses this knowledge to make decisions and take any necessary actions.
 The ultimate goal of Computer Vision is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs.
Difference between Image Processing and Computer Graphics

 Computer Graphics: Computer Graphics is a branch of computer science that deals with the creation, modification, and manipulation of images with the help of a digital computer.

 The key element that distinguishes DIP from Computer Graphics is that image processing generally begins with existing images, while the basic operation in Computer Graphics is to create images. While each of these two fields has its own focus and strengths, they also overlap and complement each other.
Fundamental Steps in Digital Image Processing

The fundamental steps are the following:

1. Image Acquisition
2. Image Filtering and Enhancement
3. Image Restoration
4. Color Image Processing
5. Wavelets and Multiresolution Processing
6. Image Compression
7. Morphological Processing
8. Image Segmentation
9. Image Representation and Description
10. Object Recognition
Key Stages in Digital Image Processing

[Block diagram: starting from the problem domain, Image Acquisition leads through Image Enhancement, Image Restoration, Color Image Processing, Image Compression, Morphological Processing, Segmentation, Representation & Description, and finally Object Recognition.]
Fundamental Steps in Digital Image Processing

1. Image Acquisition: This step pertains to acquiring an image in a form amenable to digital processing. Whatever the process of acquiring an image, it will finally be in a digital form which can be stored in a system as a file. Note that acquisition could be as simple as being given an image that is already in digital form.

2. Image Filtering and Enhancement: This is the simplest and most appealing area of digital image processing. It is the process of manipulating an image so that the result is more suitable than the original for a specific application. The idea behind enhancement techniques is to bring out details that are obscured, or simply to highlight certain features of interest in the image, for example by increasing the contrast of an image. The measure of "goodness" or "badness" is subjective, depending upon the psychophysical response of the human visual system.
3. Image Restoration: This also deals with improving the appearance of an image. However, unlike enhancement, which is subjective (it depends upon the perception of an individual), image restoration is objective. Restoration techniques tend to be based on mathematical or probabilistic models of image degradation. This approach usually involves formulating a criterion of goodness (e.g., PSNR) that will yield an optimal estimate of the desired result.

4. Color Image Processing: Color image processing is used to extract features of interest from color images. It is gaining importance because of the increasing use of color images over the Internet.

5. Wavelets and Multiresolution Processing: Wavelets are used to represent images at various degrees of resolution. They are also used in image compression and for pyramidal representations in which images are successively subdivided into smaller regions.
6. Image Compression: Deals with reducing the storage requirements of an image. The use of images in information technology has increased manifold over the last few years, especially with the intensive use of the Internet, and the size of an image affects the quality of its transmission over the Internet.

7. Morphological Processing: Deals with tools for extracting image components that are useful in the representation and description of region shape, such as boundaries, skeletons, and the convex hull. It also includes techniques for pre- and post-processing, such as morphological filtering, thinning, and pruning.

8. Image Segmentation: Deals with procedures for partitioning an image into its constituent parts. For example, a scanned page may be split into photographs, text, tables, and columns. In general, autonomous segmentation is one of the most difficult tasks in digital image processing.
9. Image Representation and Description: Representation deals with extracting attributes that yield some quantitative information of interest or are basic for differentiating one class of objects from another. This stage usually processes the output of the segmentation stage. Description deals with extracting attributes that are helpful in identifying an object.

10. Object Recognition: Recognition is the process of assigning a label to an object in the image (based on its descriptors) after matching it against known objects, for example deciding whether an image, or part of it, contains a human face or not.
Key Stages in Digital Image Processing: Examples

[A sequence of slides shows example images for each stage — Image Acquisition, Image Enhancement, Image Restoration, Morphological Processing, Segmentation, Object Recognition, Representation & Description, Image Compression, and Colour Image Processing — with the key-stages block diagram repeated on each slide. The example images are taken from Gonzalez & Woods, Digital Image Processing (2002).]
Components of an Image Processing System

An image processing system consists of a set of hardware devices and software components. The major components are:

1. Image Sensors and Digitizers
2. Specialized Image Processing Hardware or Computer
3. Image Processing Software
4. Mass Storage
5. Image Display and Hardcopy Devices
6. Image Transmission Systems
1. Image Sensors and Digitizer: An image sensor is a physical device that is sensitive to the light radiated by the object we wish to image. A digitizer is a device that converts the analog output of an image sensor into digital form. For example, in a digital video camera the sensors produce an electrical output proportional to light intensity, and the digitizer converts these outputs to digital data.

2. Specialized Image Processing Hardware or Computer: Although a general-purpose computer can perform operations on an image once the related software is run on it, specialized hardware can perform some of the primitive operations very fast. For example, specialized hardware can be used to average image pixels immediately after digitizing an image for the purpose of noise reduction. The computer performs the general-purpose operations.
3. Image Processing Software: Software is required for image processing operations. For example, image segmentation software can segment an image into its various components, and an OCR system can convert a document image into editable text, separating out the various components of the document such as text, tables, figures, images, etc.

4. Mass Storage: Images require very large amounts of storage. For example, an image of size 1024×1024 capable of displaying 256 gray levels requires 1 MB of space. Storage may be divided into three categories:
 Short-term storage: for use during processing (computer memory (RAM) and video memory for image display).
 On-line storage: for relatively fast recall (normally a hard disk or CDs).
 Archival storage: for backup purposes (offline storage of many images), such as magnetic tapes and optical disks.
5. Image Display and Hardcopy Devices: The display is normally a video monitor or computer screen. Hardcopy devices include laser and inkjet printers and plotters.

6. Image Transmission Systems: Image transmission has become an integral part of image processing, as images are frequently exchanged over the Internet for processing. The key consideration in image transmission is bandwidth.
What is a Digital Image?

Functions can be categorized as:
• Continuous: both domain and range are continuous.
• Discrete: the domain is discrete.
• Digital: both domain and range are discrete.
• In the case of digital images, the discrete domain and range are achieved through sampling and quantization.
What is an Image?

• An image is basically a continuous function of two variables, f(x,y), where x, y are coordinates in a plane.
• The image function values correspond to brightness at image points.
• f(x,y) must be non-zero and finite:
0 < f(x,y) < ∞
f(x,y) = i(x,y) r(x,y)
where
i(x,y) is the amount of source illumination incident on the scene being viewed, and 0 < i(x,y) < ∞;
r(x,y) is the fraction of illumination reflected by the objects in the scene, and 0 < r(x,y) < 1.
Image Sampling and Quantization

 A digital image is always only an approximation of a real-world scene.

 Digitizing the coordinate values is called sampling.

 Digitizing the amplitude values is called quantization.
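The two digitization steps can be illustrated with a minimal pure-Python sketch (not from the slides; the function names and the 1-D signal are illustrative — real images are 2-D):

```python
import math

def sample(f, width, step):
    """Sampling: take values of a continuous function f at discrete coordinates."""
    return [f(x * step) for x in range(width)]

def quantize(samples, levels, lo=0.0, hi=1.0):
    """Quantization: map each amplitude in [lo, hi] to one of `levels` gray levels."""
    q = []
    for s in samples:
        # scale to [0, levels-1] and round to the nearest level
        k = int((s - lo) / (hi - lo) * (levels - 1) + 0.5)
        q.append(max(0, min(levels - 1, k)))
    return q

# A smooth intensity profile sampled at 8 points, then quantized to 4 levels (2 bits)
profile = lambda x: 0.5 + 0.5 * math.sin(x)
samples = sample(profile, 8, 0.5)
digital = quantize(samples, 4)
```

Sampling fixes the spatial resolution; quantization fixes the intensity-level resolution.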
Types of Images

 There are basically three types of images:
1. Binary Image
2. Gray-Scale Image
3. Color Image

1. Binary Image: A binary image has two levels: 0 (black) and 1 (white). Only one bit is required to represent a pixel. Thus, if the size of a binary image is M×N, then M×N bits are needed to represent the image.

2. Gray-Scale Image: A gray-scale image has 256 gray levels, 0 to 255. A pixel is represented by 8 bits, or one byte. Thus a gray-scale image of size M×N will need M×N bytes.

3. Color Image: A color image has more than one component. Two representations, RGB and CMY, are the most common: RGB is used for video display devices and CMY for hardcopy devices. Each component requires one byte per pixel, so a color pixel requires 3 bytes of storage. The total number of colors is 256×256×256 = 2^24 ≈ 16.7 million. A color image of size M×N will require 3×M×N bytes of storage.
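The storage figures above follow directly from the bits per pixel of each image type; a small sketch (the function name is illustrative):

```python
def storage_bytes(width, height, image_type):
    """Uncompressed storage for an image: 1 bpp binary, 8 bpp gray, 24 bpp color."""
    bits_per_pixel = {"binary": 1, "gray": 8, "color": 24}[image_type]
    return width * height * bits_per_pixel // 8

# A 1024x1024 gray-scale image needs 1 MB; the same size in color needs 3 MB.
```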
Spatial Resolution

 Spatial Resolution: It represents the number of pixels in an image. It is given by

    No. of pixels along width × No. of pixels along height

 Two other units express spatial resolution as the number of pixels per unit length: dots per inch (dpi) and pixels per inch (ppi).
 The PPI of a mobile device or any screen can be computed as

    PPI = √(w² + h²) / d

where
d is the diagonal size of the screen (in inches), and
w × h is the maximum resolution supported by the screen (in pixels).
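The PPI formula is just the pixel count along the diagonal divided by the diagonal length; as a sketch:

```python
import math

def ppi(w, h, d):
    """Pixels per inch of a screen with resolution w x h and diagonal size d inches."""
    return math.sqrt(w * w + h * h) / d

# e.g. a 5.8-inch screen at 1125x2436 works out to roughly 463 PPI
```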
 Bit depth: The number of bits required to store a pixel of an image.

 Aspect Ratio: The ratio of an image's width to its height, measured in units of length or number of pixels, is referred to as its aspect ratio. A 1024×768 image has an aspect ratio of 4:3.
[Example images taken from Gonzalez & Woods, Digital Image Processing (2002).]
Spatial Resolution (cont…)

Typical effects of reducing spatial resolution. Images shown at
(a) 1250 ppi,
(b) 300 ppi,
(c) 150 ppi, and
(d) 72 ppi.
Intensity Level Resolution

Intensity level resolution refers to the number of intensity levels used to represent the image.
– The more intensity levels used, the finer the level of detail discernible in an image.
– Intensity level resolution is usually given in terms of the number of bits used to store each intensity level.

Number of Bits | Number of Intensity Levels | Examples
1              | 2                          | 0, 1
2              | 4                          | 00, 01, 10, 11
4              | 16                         | 0000, 0101, 1111
8              | 256                        | 00110011, 01010101
16             | 65,536                     | 1010101010101010
Intensity Level Resolution (cont…)

[The same image shown at 256 grey levels (8 bits per pixel), 128 (7 bpp), 64 (6 bpp), 32 (5 bpp), 16 (4 bpp), 8 (3 bpp), 4 (2 bpp), and 2 grey levels (1 bpp). Images taken from Gonzalez & Woods, Digital Image Processing (2002).]
Some Basic Relationships between Pixels

• A pixel p at location (x, y) has 8 neighbors. These comprise four neighbors (horizontal and vertical) at the locations:
(x+1, y), (x−1, y), (x, y+1) and (x, y−1)
• and four diagonal neighbors at the locations:
(x+1, y+1), (x−1, y−1), (x−1, y+1) and (x+1, y−1).
• N4(p) → the 4-neighbors of pixel p
• ND(p) → the 4 diagonal neighbors of pixel p
• N8(p) → the 8-neighbors of pixel p, N8(p) = N4(p) ∪ ND(p)

8-neighborhood of pixel (x, y):

(x−1, y−1)  (x−1, y)  (x−1, y+1)
(x, y−1)    (x, y)    (x, y+1)
(x+1, y−1)  (x+1, y)  (x+1, y+1)
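The three neighborhoods translate directly into code; a minimal sketch (function names are illustrative):

```python
def n4(x, y):
    """4-neighbors (horizontal and vertical) of pixel (x, y)."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """4 diagonal neighbors of pixel (x, y)."""
    return {(x + 1, y + 1), (x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1)}

def n8(x, y):
    """8-neighbors: N8(p) = N4(p) union ND(p)."""
    return n4(x, y) | nd(x, y)
```

Note that for a pixel on the image border some of these coordinates fall outside the image; a real implementation would filter them against the image bounds.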
Adjacency

• Let V be the set of gray-level values used to define adjacency.
• In a binary image, V = {1} if we are referring to adjacency of pixels with value 1.
• In a gray-scale image the idea is the same, but the set V typically contains more elements. For example, for adjacency of pixels with possible gray-level values in the range 0 to 255, V could be any subset of these 256 values.

The three types of adjacency:

• 4-Adjacency: Two pixels p and q with values from V are adjacent if q is in the set N4(p). (Here V is the set of gray values V = {v1, v2, …, vn}, where n is less than or equal to the total number of gray levels.)

p and q are 4-adjacent if q ∈ N4(p), and the values of p and q are in V
• 8-Adjacency: Two pixels p and q with values from V are adjacent if q is in the set N8(p).

p and q are 8-adjacent if q ∈ N8(p), and the values of p and q are in V

where N8(p) = N4(p) ∪ ND(p).

• m-Adjacency (Mixed Adjacency): Two pixels p and q with values from V are m-adjacent if

q is in N4(p), or
q is in ND(p) and the set N4(p) ∩ N4(q) has no pixel whose value is from V.
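The three definitions can be collected into one predicate; a sketch, assuming the image is stored as a dict mapping (row, col) to a gray value (coordinates outside the image read as None):

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1)}

def adjacent(img, p, q, V, kind="m"):
    """True if pixels p and q (both with values in V) are 4-, 8-, or m-adjacent."""
    if img.get(p) not in V or img.get(q) not in V:
        return False
    if kind == "4":
        return q in n4(p)
    if kind == "8":
        return q in n4(p) | nd(p)
    # m-adjacency: q in N4(p), or q in ND(p) and N4(p) & N4(q) holds no V-pixel
    if q in n4(p):
        return True
    return q in nd(p) and not any(img.get(r) in V for r in n4(p) & n4(q))

# Three pixels of value 1 in an L-shape: (0,0) and (1,1) are 8-adjacent but not
# m-adjacent, because their common 4-neighbor (0,1) also has a value from V.
img = {(0, 0): 1, (0, 1): 1, (1, 1): 1}
```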
• Mixed adjacency is a modification of 8-adjacency. It is used to eliminate the ambiguities that often arise when 8-adjacency is used. Two image subsets S1 and S2 are adjacent if some pixel in S1 is adjacent to some pixel in S2.

[(a) A pixel arrangement; (b) the pixels that are 8-adjacent to the central pixel; (c) m-adjacency. The top three pixels in (b) show multiple (ambiguous) 8-adjacency, as indicated by dashed lines. The ambiguity is removed by using m-adjacency in (c).]
Connectivity

• A (digital) path or curve from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates

(x0, y0), (x1, y1), …, (xn, yn)

where (x0, y0) = (x, y) and (xn, yn) = (s, t), and pixels (xi, yi) and (xi−1, yi−1) are adjacent for 1 ≤ i ≤ n. In this case, n is the length of the path. If (x0, y0) = (xn, yn), the path is a closed path. We can define 4-, 8-, or m-paths depending upon the type of adjacency specified.

• Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S.

• For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. If S has only one connected component, then S is called a connected set.
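A connected component can be extracted by a breadth-first search over the chosen adjacency; a sketch using 8-adjacency, with the pixel subset S given as a set of coordinates:

```python
from collections import deque

def n8(p):
    """The 8 neighbors of pixel p = (x, y)."""
    x, y = p
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def connected_component(pixels, p):
    """All pixels of the set reachable from p by an 8-path lying inside the set."""
    seen = {p}
    queue = deque([p])
    while queue:
        cur = queue.popleft()
        for q in n8(cur):
            if q in pixels and q not in seen:
                seen.add(q)
                queue.append(q)
    return seen

# Two diagonal pixels are 8-connected; a far-away pixel forms its own component.
S = {(0, 0), (1, 1), (5, 5)}
```

Swapping `n8` for a 4- or m-adjacency function yields 4- or m-connectivity instead.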
Distance Measures

• For pixels p, q, and z, with coordinates (x, y), (s, t), and (v, w) respectively, D is a distance function or metric if

 D(p, q) ≥ 0 (D(p, q) = 0 iff p = q)
 D(p, q) = D(q, p)
 D(p, z) ≤ D(p, q) + D(q, z)

1. The Euclidean distance between p and q is defined as:

De(p, q) = √((x − s)² + (y − t)²)

For this distance measure, the pixels having a distance less than or equal to some value r from (x, y) are the points contained in a disk of radius r centered at (x, y).
2. The D4 distance (also called city-block distance) between p and q is defined as:

D4(p, q) = |x − s| + |y − t|

In this case, the pixels with D4 distance from (x, y) less than or equal to some value r form a diamond centered at (x, y). For example, the pixels with D4 distance ≤ 2 from (x, y) (the center point) form the following:

      2
   2  1  2
2  1  0  1  2
   2  1  2
      2

The pixels with D4 = 1 are the 4-neighbors of (x, y).
3. The D8 distance (also called chessboard distance) between p and q is defined as:

D8(p, q) = max(|x − s|, |y − t|)

In this case, the pixels with D8 distance from (x, y) less than or equal to some value r form a square centered at (x, y). For example, the pixels with D8 distance ≤ 2 from (x, y) (the center point) form the following:

2  2  2  2  2
2  1  1  1  2
2  1  0  1  2
2  1  1  1  2
2  2  2  2  2

• The pixels with D8 = 1 are the 8-neighbors of (x, y).
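The three coordinate-based measures are one-liners; a sketch (function names are illustrative):

```python
import math

def d_e(p, q):
    """Euclidean distance between pixels p = (x, y) and q = (s, t)."""
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def d4(p, q):
    """City-block distance: |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance: max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

# For a diagonal neighbor the three measures disagree:
# De = sqrt(2), D4 = 2, D8 = 1
```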
4. The Dm distance:
• The D4 and D8 distances between p and q are independent of any paths that might exist between the points, because these distances involve only the coordinates of the points.
• The Dm distance between two points is defined as the length of the shortest m-path between the points.
• In this case the distance between two pixels depends on the values of the pixels along the path, as well as the values of their neighbors.
Example

 Let V = {1} and p = p2 = p4 = 1.
 Assume pixel p is the source and pixel p4 the destination, with the arrangement (p in the bottom-left corner):

   p3 p4
p1 p2
p

Case 1: If p1 = p3 = 0,
m-path: p → p2 → p4, so Dm = 2.

Case 2: If p1 = 1 and p3 = 0,
m-path: p → p1 → p2 → p4, so Dm = 3.

Case 3: If p1 = 0 and p3 = 1,
m-path: p → p2 → p3 → p4, so Dm = 3.

Case 4: If p1 = p3 = 1,
m-path: p → p1 → p2 → p3 → p4, so Dm = 4.
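Since Dm is a shortest-path length, it can be computed with a breadth-first search over m-adjacency. The sketch below reproduces the four cases above; the coordinates (row, col) assigned to p, p1…p4 are my reconstruction of the slide's pixel arrangement:

```python
from collections import deque

def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1)}

def m_adjacent(img, p, q, V):
    """m-adjacency: q in N4(p), or q in ND(p) with no V-pixel in N4(p) & N4(q)."""
    if q in n4(p):
        return True
    return q in nd(p) and not any(img.get(r) in V for r in n4(p) & n4(q))

def dm(img, src, dst, V):
    """Length of the shortest m-path from src to dst (None if no path exists)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return dist[cur]
        for q in img:
            if img[q] in V and q not in dist and m_adjacent(img, cur, q, V):
                dist[q] = dist[cur] + 1
                queue.append(q)
    return None

def grid(p1, p3):
    # (row, col) coordinates: p=(2,0), p1=(1,0), p2=(1,1), p3=(0,1), p4=(0,2)
    return {(2, 0): 1, (1, 0): p1, (1, 1): 1, (0, 1): p3, (0, 2): 1}
```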
Image Processing Operations

Array Operation vs Matrix Operation

Linear vs Non-Linear Operation
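The two distinctions can be made concrete with a small pure-Python sketch (names are illustrative). An array operation acts on corresponding pixel pairs, as opposed to a matrix product; and an operator H is linear if H(a·f1 + b·f2) = a·H(f1) + b·H(f2) — the sum of an image's pixels passes this test, while the maximum does not:

```python
def elementwise(a, b, op):
    """Array operation: apply op to corresponding pixel pairs."""
    return [[op(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def scale(img, c):
    return [[c * x for x in row] for row in img]

def add(a, b):
    return elementwise(a, b, lambda x, y: x + y)

def sum_op(img):          # a linear operator
    return sum(sum(row) for row in img)

def max_op(img):          # a non-linear operator
    return max(max(row) for row in img)

f1 = [[0, -1], [2, 3]]
f2 = [[4, 1], [-2, 0]]
combo = add(scale(f1, 2), scale(f2, 3))   # 2*f1 + 3*f2

# sum passes the linearity test; max fails it
```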
Arithmetic Operations

 Arithmetic operations are point operations applied between two images of equal size.
 They include the following operations:
 Image Addition
 Image Subtraction
 Image Multiplication
 Image Division
Arithmetic Operations: Image Addition

 The output image is formed by the addition of two input images, or the addition of a constant K to an image:
g(x, y) = f1(x, y) + f2(x, y)
OR
g(x, y) = f1(x, y) + K
Applications
 To increase the brightness of an image.
The brightness of an image is its average pixel intensity. If a positive or negative constant is added to all the pixels of an image, the average intensity increases or decreases accordingly.
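A minimal sketch of both forms; clipping the result to the valid gray-level range is an implementation detail not on the slide, but it keeps the output a valid 8-bit image:

```python
def add_constant(img, k, max_level=255):
    """Brighten (k > 0) or darken (k < 0), clipping to the valid gray-level range."""
    return [[min(max_level, max(0, p + k)) for p in row] for row in img]

def add_images(f1, f2, max_level=255):
    """Pixel-wise sum of two equal-size images, clipped to the valid range."""
    return [[min(max_level, a + b) for a, b in zip(r1, r2)]
            for r1, r2 in zip(f1, f2)]

img = [[10, 250], [100, 0]]
```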
[Example: adding K = 50 to every pixel of the original image produces a brighter output image. A second example adds a noise image f2(x, y) to the original image f1(x, y) to give the output g(x, y).]
Arithmetic Operations: Image Subtraction

 The output image is formed by the subtraction of two input images, or the subtraction of a constant K from an image:
g(x, y) = f1(x, y) − f2(x, y)
OR
g(x, y) = f1(x, y) − K
Applications
 To decrease the brightness of an image.
 Motion or change detection in video sequences.
[Example: subtracting K = 50 from every pixel darkens the image. A second example subtracts two consecutive video frames, f1(x, y) and f2(x, y); the output g(x, y) highlights the regions that changed between the frames.]
Arithmetic Operations: Image Multiplication

 The output image is formed by the point-wise (pixel-by-pixel) multiplication of two input images, or the multiplication of an image by a constant K:
g(x, y) = f1(x, y) × f2(x, y)
OR
g(x, y) = f1(x, y) × K
Applications
 To increase the contrast of an image.
 To extract an area of interest from an image.
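The region-of-interest application multiplies the image by a binary mask, which zeroes everything outside the masked area; a sketch:

```python
def multiply(f1, f2):
    """Pixel-wise product; with a binary mask this extracts a region of interest."""
    return [[a * b for a, b in zip(r1, r2)] for r1, r2 in zip(f1, f2)]

image = [[12, 34], [56, 78]]
mask  = [[1, 0], [0, 1]]   # 1 marks the area of interest

roi = multiply(image, mask)
```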
[Example: multiplying every pixel by K = 2 increases the contrast. A second example multiplies the original image f1(x, y) by a binary mask f2(x, y); the output g(x, y) retains only the region of interest.]
Arithmetic Operations: Image Division

 The output image is formed by the point-wise division of two input images, or the division of an image by a constant K:
g(x, y) = f1(x, y) / f2(x, y)
OR
g(x, y) = f1(x, y) / K
Applications
 To decrease the contrast of an image.
[Example: dividing every pixel of the original image by K = 2 produces a lower-contrast output image.]
Arithmetic Operations: Image (Alpha) Blending

 This operation blends two images of the same size to yield a resultant image. Mathematically:
g(x, y) = α × f1(x, y) + (1 − α) × f2(x, y)

 α is called the blending ratio; it determines the influence of each image on the resultant image.
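A direct transcription of the blending formula as a sketch:

```python
def blend(f1, f2, alpha):
    """g(x,y) = alpha*f1(x,y) + (1-alpha)*f2(x,y); alpha in [0,1] weights each image."""
    return [[alpha * a + (1 - alpha) * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(f1, f2)]

# alpha = 1 gives f1 back, alpha = 0 gives f2, alpha = 0.5 averages the two
```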
[Two examples blend a pair of images f1(x, y) and f2(x, y) into a single output image g(x, y).]
Logical Operations

 Logical operations are point operations applied between two (binary) images of equal size.
 They include the following operations:
 OR/UNION (∪)
 AND/INTERSECTION (∩)
 NOT/COMPLEMENT
 These operators are widely used in morphological image processing, such as image thinning and thickening, boundary extraction, extraction of connected components, etc.
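On binary images the three operators act pixel-by-pixel; a sketch:

```python
def logical(a, b, op):
    """Apply a boolean operation pixel-by-pixel to two binary images."""
    return [[op(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def complement(a):
    """NOT: invert every pixel of a binary image."""
    return [[1 - x for x in row] for row in a]

A = [[1, 1], [0, 0]]
B = [[1, 0], [1, 0]]

union        = logical(A, B, lambda x, y: x | y)   # OR
intersection = logical(A, B, lambda x, y: x & y)   # AND
```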
Image Processing Operations

Divided into three categories:

 Point Processing Operations
Output(x, y) ∝ Input(x0, y0)

 Local Operations
Output(x, y) ∝ Neighborhood of Input(x0, y0)

 Global Operations
Output(x, y) ∝ All pixels of the input image
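One small example of each category, as an illustrative sketch (the function names are mine):

```python
def point_negative(img, L=256):
    """Point operation: each output pixel depends only on the input pixel at (x, y)."""
    return [[L - 1 - p for p in row] for row in img]

def local_mean3(img):
    """Local operation: each output pixel depends on a 3x3 neighborhood (borders kept)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = sum(img[i + di][j + dj]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)) // 9
    return out

def global_mean(img):
    """Global operation: the output depends on every pixel of the input image."""
    return sum(map(sum, img)) / (len(img) * len(img[0]))
```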
Image Interpolation

• Image interpolation is a basic tool used extensively in zooming, shrinking, rotating, and geometric corrections.

• Interpolation is the process of using known data to estimate values at unknown locations.

• Many interpolation schemes are available; however, we will study two basic schemes:
 Nearest-Neighbor Interpolation
 Bilinear Interpolation

• For a given image, if the scaling factor is s:
s > 1 → image zooming
s < 1 → image shrinking
s = 1 → no change in size
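The two schemes can be sketched in a few lines (illustrative, without the edge-handling refinements a production resizer would need). Nearest-neighbor copies the closest known pixel; bilinear weights the four surrounding pixels by their distance to the queried location:

```python
def nearest_neighbor_zoom(img, s):
    """Resize by scaling factor s: each output pixel copies the nearest input pixel."""
    h, w = len(img), len(img[0])
    H, W = int(h * s), int(w * s)
    return [[img[min(h - 1, int(i / s))][min(w - 1, int(j / s))]
             for j in range(W)] for i in range(H)]

def bilinear_at(img, x, y):
    """Bilinear interpolation of the value at fractional coordinates (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img) - 1)
    y1 = min(y0 + 1, len(img[0]) - 1)
    a, b = x - x0, y - y0
    return ((1 - a) * (1 - b) * img[x0][y0] + (1 - a) * b * img[x0][y1]
            + a * (1 - b) * img[x1][y0] + a * b * img[x1][y1])

img = [[0, 100], [100, 200]]
```

Nearest-neighbor is cheap but blocky; bilinear gives smoother results at slightly higher cost.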
Vector and Matrix Operations
Image Transforms
Probabilistic Methods

• Probability finds its way into image processing in a number of ways. The simplest is when we treat intensity values as random quantities.
• Let zk, k = 0, 1, 2, …, L−1, denote the values of all possible intensities in an M×N digital image. The probability p(zk) of intensity level zk occurring in a given image is

p(zk) = nk / MN

where nk is the number of times that intensity zk occurs in the image.

• Once we have p(zk), we can determine a number of important image characteristics, e.g. the mean intensity and the variance.
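The histogram probabilities, mean, and variance follow directly from the definition above; a sketch (the function name is illustrative):

```python
def intensity_stats(img, L=256):
    """Histogram probabilities p(z_k) = n_k / MN, then mean and variance from them."""
    M, N = len(img), len(img[0])
    counts = [0] * L
    for row in img:
        for z in row:
            counts[z] += 1
    p = [n / (M * N) for n in counts]
    mean = sum(z * p[z] for z in range(L))
    var = sum((z - mean) ** 2 * p[z] for z in range(L))
    return p, mean, var

# Half the pixels at level 0 and half at level 4: mean 2, variance 4
img = [[0, 0], [4, 4]]
p, mean, var = intensity_stats(img, L=8)
```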
