
Computer and Information Engineering Department

DIP Laboratory

LAB 2: Intensity Transformation Functions.

Objective:
To implement different types of gray-level transformation functions for a grayscale image.

Introduction:

The term spatial domain refers to the image plane itself, and methods in this category are based on direct manipulation of the pixels in an image. We focus attention on two important categories of spatial domain processing:

1- Intensity (or gray-level) transformations.

2- Spatial filtering (neighborhood processing, or spatial convolution).

The spatial domain processes discussed are denoted by

g(x,y) = T[f(x,y)]

where

f(x,y) is the input image,

g(x,y) is the output image, and

T is an operator on f, defined over a specified neighborhood about point (x,y).

The principal approach for defining spatial neighborhoods about a point (x,y) is to use a square or rectangular region centered at (x,y).

Intensity Transformation Functions:

The simplest form of the transformation T is when the neighborhood is of size 1×1 (a single pixel). In this case the value of g at (x,y) depends only on the intensity of f at that point, and T becomes an intensity transformation function.

Because they depend only on intensity values, intensity transformation functions are frequently written in the simplified form

s = T(r)

where r denotes the intensity of f and s the intensity of g at the corresponding point (x,y) in the images.

Types of Intensity Transformation Functions:

1. Image Negative Transformation:

The negative of an image with grey levels in the range [0, L-1] is obtained
by using the negative transformation function which is given by the
expression:

s=L-1-r
This type of processing is particularly suited for enhancing white or gray
details embedded in dark regions of an image, especially when the black
areas are dominant in size.

• Implement a function for the evaluation of the image negative for the image cameraman.tif.
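A minimal sketch of such a function is given below; the function name img_negative is only a suggestion, and the input is assumed to be a uint8 grayscale image.

function g = img_negative(f)
% IMG_NEGATIVE Negative transformation s = L-1-r (illustrative sketch).
% Working in double in the range [0,1], the negative is simply 1 - r.
    f = im2double(f);      % scale intensities to [0,1]
    g = 1 - f;             % equivalent to s = L-1-r for integer images
    g = im2uint8(g);       % return to uint8 for display
end

% Example usage:
% f = imread('cameraman.tif');
% g = img_negative(f);
% figure, imshow(f), figure, imshow(g)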

2. Log Transformations:

Logarithmic transformations are implemented using the expression:

s = C log(1 + r)

where C is a constant.

One of the principal uses of log transformations is to compress dynamic range.

When performing a logarithmic transformation, it is often desirable to bring the resulting compressed values back to the full range of the display. For 8-bit images, the easiest way to do this in MATLAB is with the statement:

Gs = im2uint8(mat2gray(g));

Function mat2gray brings the values to the range [0,1] and im2uint8 brings them to the range [0,255].

• Implement a function for the evaluation of the log transformation for the image cameraman.tif.
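A possible sketch, with the function name log_transform and the constant C passed as an argument (both my own choices):

function gs = log_transform(f, C)
% LOG_TRANSFORM Log transformation s = C*log(1+r) (illustrative sketch).
    g  = C * log(1 + im2double(f));   % compress the dynamic range
    gs = im2uint8(mat2gray(g));       % rescale the result to [0,255]
end

% Example usage:
% f  = imread('cameraman.tif');
% gs = log_transform(f, 1);
% imshow(gs)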

3. Power-Law Transformations:

Power-law transformations have the basic form:

s = C r^γ

where C and γ are positive constants. This is sometimes written as:

s = C (r + ε)^γ

to account for an offset.

A variety of devices used for image capture, printing, and display respond according to a power law; the exponent in the power-law equation is referred to as gamma. The process used to correct this power-law response phenomenon is called gamma correction.

• Implement a function for the evaluation of the power-law transformation for the image cameraman.tif.
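One way to sketch this (the function name power_law and the sample value gamma = 0.4 are assumptions, not part of the lab):

function gs = power_law(f, C, gamma)
% POWER_LAW Power-law transformation s = C*r^gamma (illustrative sketch).
    g  = C * im2double(f) .^ gamma;   % element-wise power of the [0,1] intensities
    gs = im2uint8(mat2gray(g));       % bring the result back to the display range
end

% Example usage:
% f  = imread('cameraman.tif');
% gs = power_law(f, 1, 0.4);          % gamma < 1 brightens dark regions
% imshow(gs)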

Function imadjust

Function imadjust is a standard MATLAB function for intensity transformations of grayscale images. It has the syntax

g = imadjust(f, [low_in high_in], [low_out high_out], gamma);

As illustrated in Fig.1, this function maps the intensity values in image f to new values in g, such that values between low_in and high_in map to
values between low_out and high_out. Values below low_in and above
high_in are clipped; that is, values below low_in map to low_out, and
those above high_in map to high_out. The input image can be of class
uint8, uint16,or double, and the output image has the same class as the
input. All inputs to function imadjust, other than f, are specified as values
between 0 and 1, regardless of the class of f. If f is of class uint8, imadjust
multiplies the values supplied by 255 to determine the actual values to use;
if f is of class uint16, the values are multiplied by 65535. Using the empty
matrix ([ ]) for[low_in high_in] or for [low_out high_out] results in
the default values [0 1]. If high_out is less than low_out , the output
intensity is reversed.
Parameter gamma specifies the shape of the curve that maps the intensity values in f to create g. If gamma is less than 1, the mapping is weighted toward higher (brighter) output values, as Fig.1(a) shows. If gamma is greater than 1, the mapping is weighted toward lower (darker) output values. If gamma is omitted from the function arguments, it defaults to 1 (linear mapping).


Figure 1: The various mappings available in the imadjust function.
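For illustration, a few typical calls to imadjust (the parameter values are only examples):

f  = imread('cameraman.tif');
g1 = imadjust(f, [0 1], [1 0]);       % reversed output range: photographic negative
g2 = imadjust(f, [0.3 0.7], []);      % stretch mid-range intensities to the full [0 1] range
g3 = imadjust(f, [], [], 2);          % gamma > 1: mapping weighted toward darker values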


4. Contrast-Stretching Transformations:

The function shown in Fig.2(a) is called a contrast-stretching transformation function. It compresses the input levels lower than m into a narrow range of dark levels in the output image; similarly, it compresses the values above m into a narrow band of light levels in the output. The result is an image of higher contrast.

In fact, in the limiting case shown in Fig.2(b), the output is a binary image. This limiting function is called a thresholding function, which is a simple tool used for image segmentation.

The function shown in Fig.2(a) has the form:

s = 1 / (1 + (m/r)^E)

where r represents the intensities of the input image, s the corresponding intensity values in the output image, and E controls the slope of the function.


Figure 2: (a) Contrast-stretching transformation, (b) Thresholding transformation.
• Implement a function for the evaluation of the contrast-stretching transformation for the image cameraman.tif.
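A minimal sketch, assuming the form s = 1/(1 + (m/r)^E) given above; the function name contrast_stretch and the example values m = 0.5, E = 4 are my own:

function gs = contrast_stretch(f, m, E)
% CONTRAST_STRETCH Contrast-stretching transformation (illustrative sketch).
    r  = im2double(f) + eps;          % eps avoids division by zero when r = 0
    g  = 1 ./ (1 + (m ./ r) .^ E);    % s = 1/(1 + (m/r)^E)
    gs = im2uint8(mat2gray(g));
end

% Example usage:
% f  = imread('cameraman.tif');
% gs = contrast_stretch(f, 0.5, 4);   % larger E approaches a thresholding function
% imshow(gs)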

5. Piecewise Linear Stretching Function:

We can easily write our own function to perform piecewise linear stretching, as shown in Fig.3. To do this, we will make use of the find function to find the pixel values in the image between ai and ai+1. Since the line between the coordinates (ai, bi) and (ai+1, bi+1) has the equation

s = bi + ((bi+1 - bi) / (ai+1 - ai)) (r - ai),   for ai ≤ r ≤ ai+1,

we can create a function to do the piecewise linear stretching.

Figure 3: Piecewise linear stretching function.

• Implement a function for the evaluation of the piecewise linear stretching transformation for the image cameraman.tif.
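A sketch using the find function as described above; the function name piecewise_stretch and the breakpoint vectors a and b (with intensities scaled to [0,1]) are assumptions:

function g = piecewise_stretch(f, a, b)
% PIECEWISE_STRETCH Piecewise linear stretching (illustrative sketch).
% a and b hold the breakpoints (a(i), b(i)) of the mapping, with a(1) = 0 and a(end) = 1.
    r = im2double(f);
    g = zeros(size(r));
    for i = 1:length(a)-1
        idx = find(r >= a(i) & r <= a(i+1));   % pixels falling in [a(i), a(i+1)]
        % line through (a(i), b(i)) and (a(i+1), b(i+1))
        g(idx) = b(i) + (b(i+1) - b(i)) / (a(i+1) - a(i)) * (r(idx) - a(i));
    end
    g = im2uint8(g);
end

% Example usage:
% f = imread('cameraman.tif');
% g = piecewise_stretch(f, [0 0.3 0.7 1], [0 0.1 0.9 1]);
% imshow(g)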
Bit planes:

Grayscale images can be transformed into a sequence of binary images by breaking them up into their bit-planes. If we consider the grey value of each pixel of an 8-bit image as an 8-bit binary word, then the 0th
bit plane consists of the last bit of each grey value. Since this bit has the
least effect in terms of the magnitude of the value, it is called the least
significant bit, and the plane consisting of those bits the least significant
bit plane. Similarly the 7th bit plane consists of the first bit in each value.
This bit has the greatest effect in terms of the magnitude of the value, so it
is called the most significant bit, and the plane consisting of those bits the
most significant bit plane. If we take a grayscale image, we start by
making it a matrix of type double; this means we can perform arithmetic
on the values. Then we isolate the bit planes by simply dividing the matrix
by successive powers of 2, throwing away the remainder, and seeing if the
final bit is 0 or 1. We can do this with the mod function.

• Write a MATLAB function called bitpl that extracts a given bit plane from a grayscale image.
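One possible sketch, following the mod approach described above (the second argument n, which selects the plane, is my own choice of interface):

function bp = bitpl(f, n)
% BITPL Extract bit plane n (0 = least significant, 7 = most significant)
% from an 8-bit grayscale image (illustrative sketch).
    d  = double(f);                   % allow arithmetic on the pixel values
    bp = mod(floor(d / 2^n), 2);      % divide by 2^n, discard the fraction, keep the last bit
end

% Example usage:
% f = imread('cameraman.tif');
% imshow(bitpl(f, 7))                 % most significant bit plane
% imshow(bitpl(f, 0))                 % least significant bit plane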
