


Introduction

(Main Concepts)

Digital Image Processing


An image may be defined as a two-dimensional function f(x,y), where
x and y are spatial (plane) coordinates, and the amplitude of f at any pair of
coordinates (x,y) is called the intensity or grey level of the image at that
point. When x, y, and the amplitude values of f are all finite, discrete
quantities, we call the image a digital image. The field of digital image
processing refers to processing digital images, serving two principal
application areas:

1- Improving the visual appearance of images to a human viewer, and

2- Processing of scene data for autonomous machine perception.
Area of Applications

Image processing is used in many areas, as indicated in Figure (1). For
example, commercial silk screening now uses image processing algorithms
to manipulate images, to separate colors, and to produce the masks used in
the screening process.

Figure (1) Areas of Applications of Image Processing: entertainment, silk
screening, education, medical, industry, archeology, remote sensing, space,
business & finance, military, and videoconferencing.

Components of an Image Processing System


Sampling and quantization
An image may be continuous with respect to the x- and y-coordinates,
and also in amplitude. To convert such a continuous image to digital
form, we have to sample the function in both coordinates and in amplitude.
Digitizing the coordinate values is called sampling; digitizing the
amplitude values is called quantization.

We can express the result of sampling and quantization as a matrix of
M rows and N columns. The coordinate values (x,y) become discrete
(integer) values, and each amplitude becomes a discrete grey level L.

So, we can express the digital image in the following matrix form:

f(x,y) = | a(0,0)     a(0,1)     ...  a(0,N-1)   |
         | a(1,0)     a(1,1)     ...  a(1,N-1)   |
         | ...        ...        ...  ...        |
         | a(M-1,0)   a(M-1,1)   ...  a(M-1,N-1) |      [1]

where a(i,j) = f(x=i, y=j) = L(i,j), which is called an image element value or
pixel value.

This digitization process requires decisions about the values of M and N, and
about the number, L, of discrete grey levels allowed for each pixel. M and N
must be positive integers due to processing, storage, and sampling
hardware considerations, and the number of grey levels is typically an integer
power of 2:

L  2n [2]

where n is the number of bits in the binary representation of the brightness
levels.

When n > 1 we speak of a grey-level image;

when n = 1 we speak of a binary image, having just two grey levels, "black"
and "white" or "0" and "1".

The discrete values are equally spaced integers in the
interval [0, L-1]. This range is called the dynamic range of an image.
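The mapping from a continuous amplitude to one of the L = 2^n discrete levels in [0, L-1] can be sketched as below; this is a minimal illustration assuming amplitudes are normalized floats in [0.0, 1.0], and the name `quantize` is ours, not a standard function.

```python
def quantize(value, n_bits):
    """Map a continuous amplitude in [0.0, 1.0] to one of L = 2**n_bits
    equally spaced integer grey levels in the interval [0, L-1]."""
    L = 2 ** n_bits
    level = int(value * L)       # uniform binning into L levels
    return min(level, L - 1)     # clamp value 1.0 into the top level

# With n = 8 the dynamic range is [0, 255]; with n = 1 the image is binary.
print(quantize(0.0, 8))   # darkest level: 0
print(quantize(1.0, 8))   # brightest level: 255
print(quantize(0.6, 1))   # binary image: level 1 ("white")
```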

Spatial and grey level resolution


Spatial resolution is the smallest discernible detail in an image, while
grey level resolution refers to the smallest discernible change in grey level.

Multi-Spectral data (multi-dimensional)

In some applications, each pixel has several values, recorded in
different regions of the electromagnetic spectrum; images with this variety
of pixel values are called multi-spectral images.

Image storage
The number of bits required to store a digitized image is:

b  MxNxkxd [3]

where M x N is the image size,

k is the number of bits per pixel, and

d is the dimensionality of the image (number of recorded bands).
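Equation [3] is a direct multiplication; a small sketch (the function name `storage_bits` is illustrative):

```python
def storage_bits(M, N, k, d=1):
    """Bits needed to store an M x N image with k bits per pixel
    and d recorded bands, per equation [3]: b = M * N * k * d."""
    return M * N * k * d

# A 512 x 512 single-band grey-level image with 8 bits per pixel:
print(storage_bits(512, 512, 8))      # 2097152 bits = 256 KiB

# The same image with d = 3 bands triples the storage:
print(storage_bits(512, 512, 8, 3))   # 6291456 bits
```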

Zooming and Shrinking


Zooming may be viewed as oversampling, while shrinking may be
viewed as undersampling. The only difference between these operations and
sampling and quantization is that zooming and shrinking are
applied to a discrete image. Zooming requires two steps:

1- the creation of new pixel locations, and

2- the assignment of grey levels to those new locations.

A simple and widely used technique for assigning a grey level value to each new
pixel is nearest neighbour interpolation, which assigns the value of
the closest pixel in the original image. Pixel replication is a special case of
nearest neighbour interpolation that arises when the size of an image is increased
an integer number of times.
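The two steps above can be sketched as follows; this is a minimal illustration using plain lists of lists, and the function name `zoom_nearest` is ours:

```python
def zoom_nearest(image, factor):
    """Zoom a 2-D image (list of lists) by a positive factor.
    Step 1 creates the new pixel grid; step 2 assigns each new pixel
    the value of the closest pixel in the original image."""
    M, N = len(image), len(image[0])
    out_M, out_N = int(M * factor), int(N * factor)   # step 1: new locations
    return [[image[int(i / factor)][int(j / factor)]  # step 2: nearest pixel
             for j in range(out_N)]
            for i in range(out_M)]

# With an integer factor this reduces to pixel replication:
small = [[1, 2],
         [3, 4]]
print(zoom_nearest(small, 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```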

Another, more sophisticated way of accomplishing grey level assignment is
bilinear interpolation, which uses the four nearest neighbors of a point. Let
(x', y') denote the coordinates of a point in the zoomed image, and let
g(x', y') denote the grey level assigned to it. For bilinear interpolation, the
assigned grey level is given by:

g(x', y') = ax' + by' + cx'y' + d      [4]

where a, b, c, and d are determined from the four equations in four unknowns
that can be written using the four nearest neighbors of the point (x', y').

It is possible to use more than four neighboring pixels for interpolation, which
implies fitting the points with a more complex surface. This method gives
smoother results.
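A sketch of bilinear interpolation, written as the usual weighted sum over the four corners of the unit cell containing (x', y'); expanding that sum recovers the form ax' + by' + cx'y' + d of equation [4]. The function name `bilinear` is illustrative.

```python
def bilinear(image, x, y):
    """Grey level at a non-integer point (x, y), interpolated from its
    four nearest neighbours in `image` (a list of lists, image[row][col])."""
    x0, y0 = int(x), int(y)                       # top-left corner of the cell
    x1 = min(x0 + 1, len(image) - 1)              # clamp at the image border
    y1 = min(y0 + 1, len(image[0]) - 1)
    s, t = x - x0, y - y0                         # fractional offsets in the cell
    # Weighted sum of the four corners (equivalent to solving the
    # four-equation system of eq. [4] on the unit cell):
    return ((1 - s) * (1 - t) * image[x0][y0]
            + s * (1 - t) * image[x1][y0]
            + (1 - s) * t * image[x0][y1]
            + s * t * image[x1][y1])

img = [[0, 10],
       [20, 30]]
print(bilinear(img, 0.5, 0.5))   # midpoint of the four corners: 15.0
```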

Image shrinking is done in a similar way to zooming. Row and column
deletion is performed as the opposite operation of pixel replication. If
shrinking by a non-integer factor, we can use the zooming grid analogy
with nearest neighbour or bilinear interpolation of the grey levels, and then
shrink the grid back to its specified size. Sometimes we need to blur an image
slightly before shrinking it, to reduce what is called the aliasing effect.

Aliasing effect: if the sampling rate is less than twice the highest
frequency in the image (the Nyquist rate), the sampled image is corrupted by
frequencies from adjacent periods of the spectrum.
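The blur-then-delete procedure can be sketched as below, assuming a simple 3x3 mean filter as the slight blur; the helper names `box_blur` and `shrink` are ours.

```python
def box_blur(image):
    """3x3 mean filter (border pixels average the available neighbours):
    a slight blur applied before shrinking to reduce aliasing."""
    M, N = len(image), len(image[0])
    out = [[0.0] * N for _ in range(M)]
    for i in range(M):
        for j in range(N):
            vals = [image[u][v]
                    for u in range(max(0, i - 1), min(M, i + 2))
                    for v in range(max(0, j - 1), min(N, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def shrink(image, factor):
    """Shrink by an integer factor via row and column deletion."""
    return [row[::factor] for row in image[::factor]]

# Blur first, then keep every 2nd row and column:
img = [[float((i + j) % 2) for j in range(4)] for i in range(4)]
half = shrink(box_blur(img), 2)
print(len(half), len(half[0]))   # 2 2
```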
Some relationships between pixels:

Adjacency

4-adjacency: Two pixels p, q with values from V are 4-adjacent if
q is in the set N4(p).

8-adjacency: Two pixels p, q with values from V are 8-adjacent if
q is in the set N8(p).

m-adjacency (mixed adjacency): Two pixels p, q with values
from V are m-adjacent if
i) q is in the set N4(p), or
ii) q is in ND(p) and the set N4(p) ∩ N4(q) has no pixels whose
values are from V.
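The three adjacency tests can be sketched as predicates on (row, col) pixel coordinates; the helper names (`n4`, `nd`, `n8`, `m_adjacent`, `value`) are illustrative, not standard functions.

```python
def n4(p):
    """4-neighbours of pixel p = (row, col)."""
    r, c = p
    return {(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)}

def nd(p):
    """Diagonal neighbours ND(p)."""
    r, c = p
    return {(r - 1, c - 1), (r - 1, c + 1), (r + 1, c - 1), (r + 1, c + 1)}

def n8(p):
    """8-neighbours: the union of N4(p) and ND(p)."""
    return n4(p) | nd(p)

def m_adjacent(p, q, V, value):
    """p and q (with values in V) are m-adjacent if q is in N4(p), or
    q is in ND(p) and no pixel of N4(p) ∩ N4(q) has a value in V."""
    if value(p) not in V or value(q) not in V:
        return False
    if q in n4(p):
        return True
    return q in nd(p) and not any(value(r) in V for r in n4(p) & n4(q))

# Tiny binary image with V = {1}:
img = [[0, 1],
       [1, 1]]

def value(pt):
    r, c = pt
    return img[r][c] if 0 <= r < 2 and 0 <= c < 2 else None

V = {1}
print(m_adjacent((0, 1), (1, 1), V, value))  # 4-adjacent: True
print(m_adjacent((0, 1), (1, 0), V, value))  # diagonal, blocked by (1,1): False
```

m-adjacency removes the ambiguity of 8-adjacency: the diagonal pair above is not m-adjacent because they are already connected through the shared 4-neighbour (1,1).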
