
Unit-2 Image Processing


Unit-II

Image Enhancement
Image enhancement refers to the process of highlighting certain information in an image, as well as weakening or removing unnecessary information according to specific needs: for example, eliminating noise, revealing blurred details, and adjusting levels to highlight the features of an image.

Image enhancement techniques can be divided into two broad categories:

• Spatial domain - the image is represented as an array of pixels indexed by spatial coordinates at a particular resolution, and spatial domain methods perform enhancement operations on those pixels directly.
• Frequency domain - enhancement obtained by applying the Fourier Transform to the spatial domain. In the frequency domain, pixels are operated on in groups and indirectly.

This chapter discusses the image enhancement techniques implemented in the spatial domain.

Types of Spatial Domain Techniques

There are two types of spatial domain operators:

• Point operation (intensity transformation) - Point operations apply the same transformation to every pixel of a grayscale image. The transformation is based only on the original pixel value and is independent of its location and of neighbouring pixels (a minimal sketch follows this list).
• Spatial filter (or mask, kernel) - The output value depends on the value of f(x, y) and the values in its neighbourhood.
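
As a minimal sketch of a point operation, here is the classic image negative in Python with NumPy; the array `img` is a hypothetical 8-bit grayscale image, not data from this unit:

```python
import numpy as np

# Hypothetical 8-bit grayscale image (values 0-255).
img = np.array([[ 10,  50, 200],
                [120, 255,   0]], dtype=np.uint8)

# Point operation: each output pixel depends only on the
# corresponding input pixel, never on its neighbours.
negative = 255 - img
print(negative)
```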

Useful examples of image enhancement

Here are some examples of image enhancement:

• Smoothing and sharpening
• Noise removal
• Deblurring
• Contrast adjustment
• Brightening an image
• Grayscale image histogram equalization

Grayscale Threshold Transform

The Grayscale Threshold Transform converts a grayscale image into a black-and-white binary image. The user specifies a value that acts as a dividing line: if the Gray value of a pixel is smaller than the dividing line, the intensity of the pixel is set to 0; otherwise it is set to 255. The value of the dividing line is called the threshold. The grayscale threshold transform is often referred to as thresholding, or binarization.
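
A minimal sketch of thresholding in Python with NumPy; the sample image and the threshold value of 128 are hypothetical, not values from the text:

```python
import numpy as np

def threshold(img, t):
    """Grayscale threshold transform (binarization): pixels whose
    Gray value is below the dividing line t become 0, all others 255."""
    return np.where(img < t, 0, 255).astype(np.uint8)

# Hypothetical example with threshold t = 128.
img = np.array([[ 10, 200],
                [130,  90]], dtype=np.uint8)
print(threshold(img, 128))   # [[  0 255]
                             #  [255   0]]
```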

Contrast-Stretching Transformation

The goal of the contrast-stretching transformation is to enhance the contrast between different parts of an image; that is, it enhances the Gray contrast in the areas of interest and suppresses the Gray contrast in the areas that are not of interest.

Below are two possible functions:

1. Power-Law Contrast-Stretching Transformation

If T(r) has the form shown in the figure, the effect of applying the transformation to every pixel is an image with higher contrast than the original, obtained by:

• Darkening the levels below k in the original image
• Brightening the levels above k in the original image
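
The exact curve T(r) from the figure is not reproduced here; a minimal sketch of one common power-law form, s = c * r^gamma on intensities normalized to [0, 1], is shown below (the constants c and gamma are free parameters tuned per image):

```python
import numpy as np

def power_law(img, gamma, c=1.0):
    """Power-law transform s = c * r**gamma on intensities normalized
    to [0, 1]: gamma > 1 darkens the low levels, gamma < 1 brightens them."""
    r = img.astype(np.float64) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(s * 255.0, 0, 255).astype(np.uint8)
```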

2. Linear Contrast-Stretching Transformation


Points (r1, s1) and (r2, s2) control the shape of the transformation. The selection of control points depends on the type of image and varies from one image to another. If r1 = s1 and r2 = s2, the transformation is the identity and does not affect the image. Otherwise, we can calculate the intensity of the output pixel, given an input pixel of intensity x, as follows:

• For 0 <= x <= r1:  output = (s1/r1) * x
• For r1 < x <= r2:  output = ((s2 - s1)/(r2 - r1)) * (x - r1) + s1
• For r2 < x <= L-1: output = ((L-1 - s2)/(L-1 - r2)) * (x - r2) + s2
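
A minimal sketch of this piecewise mapping in Python with NumPy, transcribing the three cases above; it assumes 0 < r1 < r2 < L-1 so that no denominator is zero:

```python
import numpy as np

def linear_stretch(img, r1, s1, r2, s2, L=256):
    """Piecewise linear contrast stretching with control points
    (r1, s1) and (r2, s2); assumes 0 < r1 < r2 < L-1."""
    x = img.astype(np.float64)
    out = np.empty_like(x)

    low, mid, high = x <= r1, (x > r1) & (x <= r2), x > r2
    out[low]  = (s1 / r1) * x[low]
    out[mid]  = ((s2 - s1) / (r2 - r1)) * (x[mid] - r1) + s1
    out[high] = ((L - 1 - s2) / (L - 1 - r2)) * (x[high] - r2) + s2
    return np.clip(out, 0, L - 1).astype(np.uint8)
```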

Bit-plane Slicing

As discussed previously, each pixel of a grayscale image is stored as an 8-bit byte. The intensity spans from 0 to 255, which is '00000000' to '11111111' in binary.

Bit-plane slicing refers to the process of slicing an 8-bit image into 8 binary planes, one per bit.

The leftmost bit (the 8th bit from right to left) carries the most weight, so it is called the Most Significant Bit (MSB). The rightmost bit (the 1st bit from right to left) carries the least weight, so it is called the Least Significant Bit (LSB).
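
A minimal sketch of bit-plane slicing using NumPy bit operations; the single-pixel image (value 170, binary '10101010') is a hypothetical example:

```python
import numpy as np

def bit_planes(img):
    """Slice an 8-bit grayscale image into its 8 binary planes.
    Index 0 is the 1st bit (LSB), index 7 the 8th bit (MSB)."""
    return [(img >> k) & 1 for k in range(8)]

# Hypothetical example: 170 is '10101010' in binary.
img = np.array([[170]], dtype=np.uint8)
for k, plane in enumerate(bit_planes(img)):
    print(f"bit {k + 1}: {plane[0, 0]}")   # bits 1..8, LSB first
```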

Advanced: Histogram Equalization

Histogram equalization, also known as grayscale equalization, is a practical histogram correction technique.

It refers to a transformation after which the output image has approximately the same number of pixels at each Gray level, i.e., the histogram of the output is uniformly distributed. In the equalized image, the pixels occupy as many Gray levels as possible and are evenly distributed. Such an image therefore has a higher contrast ratio and a larger dynamic range.

This method boosts the global contrast of an image, making its details more visible.

The general histogram equalization formula is:

h(v) = round( (CDF(v) - CDF_min) / (M * N - CDF_min) * (L - 1) )

where:

• CDF refers to the cumulative distribution function of the image histogram, and CDF_min is its smallest non-zero value
• L is the number of Gray levels (typically 256)
• M is the image width and N is the image height
• h(v) is the equalized value for input Gray level v
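
A minimal sketch implementing this formula with NumPy; it assumes an 8-bit input image that is not perfectly constant (otherwise M * N - CDF_min would be zero):

```python
import numpy as np

def equalize(img, L=256):
    """Histogram equalization via
    h(v) = round((CDF(v) - CDF_min) / (M*N - CDF_min) * (L - 1))."""
    hist = np.bincount(img.ravel(), minlength=L)   # per-level counts
    cdf = hist.cumsum()                            # cumulative distribution
    cdf_min = cdf[cdf > 0][0]                      # smallest non-zero CDF value
    h = np.round((cdf - cdf_min) / (img.size - cdf_min) * (L - 1))
    h = np.clip(h, 0, L - 1).astype(np.uint8)      # lookup table h(v)
    return h[img]                                  # apply to every pixel
```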

(Figures: the image before and after histogram equalization.)

A General Concept

Spatial domain enhancement is based on the pixels in a small range (a neighbourhood). This means the transformed intensity is determined by the Gray values of the points within the neighbourhood, and thus spatial domain enhancement is also called neighbourhood operation or neighbourhood processing.

A digital image can be viewed as a two-dimensional function f(x, y), where the x-y plane indicates the spatial position information, called the spatial domain. Filtering operations based on the x-y space neighbourhood are called spatial domain filtering.

The filtering process moves the filter point by point over the image function f(x, y) so that the centre of the filter coincides with the point (x, y). At each point (x, y), the filter's response is calculated from the specific contents of the filter through a predefined relationship called the 'template'.

If the operation on the neighbourhood pixels is linear, the process is called 'linear spatial domain filtering'; otherwise, it is called 'nonlinear spatial domain filtering'. Figure 2.3.1 shows the process of spatial filtering with a 3 × 3 template (also known as a filter, kernel, or window).

The coefficients of the filter in linear spatial filtering give a weighting pattern. For example, for Figure 2.3.1, the response 'R' to the template is:

R = w(-1, -1) * f(x-1, y-1) + w(-1, 0) * f(x-1, y) + … + w(0, 0) * f(x, y) + … + w(1, 0) * f(x+1, y) + w(1, 1) * f(x+1, y+1)

In mathematics, this amounts to an element-wise matrix multiplication followed by a sum. For a filter of size (2a+1) × (2b+1), the output response can be calculated with the following function:

g(x, y) = Σ (s = -a to a) Σ (t = -b to b) w(s, t) * f(x+s, y+t)
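
A minimal sketch of this computation in Python with NumPy; the replicate border padding is an assumption borrowed from the smoothing discussion later in this unit, and the loop form is kept deliberately close to the formula rather than optimized:

```python
import numpy as np

def spatial_filter(f, w):
    """Linear spatial filtering of image f with a template w of size
    (2a+1) x (2b+1): at each (x, y), the response is the sum of the
    element-wise product of w and the neighbourhood of f."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)), mode="edge")
    g = np.zeros(f.shape, dtype=np.float64)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + 2*a + 1, y:y + 2*b + 1])
    return g
```
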
In the following, we will take a look at the filters for image smoothing and sharpening.

Smoothing Filters
Image smoothing is a digital image processing technique that reduces and suppresses image noise. In the spatial domain, neighbourhood averaging can generally be used to achieve this purpose. Commonly seen smoothing filters include average smoothing, Gaussian smoothing, and adaptive smoothing.

Average Smoothing

First, let's take a look at the smoothing filter in its simplest form, the average template, and its implementation:

w = (1/9) ×  1  1  1
             1  1  1
             1  1  1

The points in the 3 × 3 neighbourhood centred on the point (x, y) are all involved in determining the pixel value at (x, y) in the new image 'g'. All coefficients being 1 means that they contribute the same weight in the process of calculating the g(x, y) value. The factor 1/9 ensures that the sum of the entire template's elements is 1, which keeps the new image in the same grayscale range as the original image (e.g., [0, 255]). Such a 'w' is called an average template.

How does it work?

In general, the intensity values of adjacent pixels are similar, while noise causes grayscale jumps at noise points. However, it is reasonable to assume that occasional noise does not change the local continuity of an image. Take the image below as an example: there are two dark points in the bright area.

For the borders, we can add padding using the 'replicate' approach. When smoothing the image with a 3 × 3 average template, the resulting image is the following.

The two noise points are replaced with the average of their surrounding points. This process of reducing the influence of noise is called smoothing or blurring.
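
A minimal sketch reproducing this experiment, reusing the spatial_filter sketch from the previous section; the 5 × 5 bright image with two dark points is a hypothetical stand-in for the figure:

```python
import numpy as np

# 3 x 3 average template: equal coefficients summing to 1.
w_avg = np.ones((3, 3)) / 9.0

# Hypothetical bright area (200) with two dark noise points (0).
f = np.full((5, 5), 200, dtype=np.uint8)
f[1, 1] = 0
f[3, 3] = 0

g = spatial_filter(f, w_avg)          # sketch from the filtering section
print(np.round(g).astype(np.uint8))   # the dark points are pulled toward 200
```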

Gaussian Smoothing

Average smoothing treats all the pixels in the neighbourhood the same. To reduce the blur introduced in the smoothing process and obtain a more natural smoothing effect, it is natural to increase the weight of the template's centre point and reduce the weight of distant points, so that the new centre point intensity is closer to that of its nearest neighbours. The Gaussian template is based on this consideration.

The commonly used 3 × 3 Gaussian template is shown below.
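
The source figure is not reproduced here; the 3 × 3 Gaussian template most commonly quoted is the 1-2-1 binomial approximation below, with the centre point weighted most heavily and the coefficients summing to 1:

```python
import numpy as np

# Commonly used 3 x 3 Gaussian template: the centre carries the
# largest weight, and the factor 1/16 makes the coefficients sum to 1.
w_gauss = np.array([[1, 2, 1],
                    [2, 4, 2],
                    [1, 2, 1]]) / 16.0

# It is applied exactly like the average template, e.g. with the
# earlier spatial_filter sketch: g = spatial_filter(f, w_gauss)
```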


Adaptive Smoothing

The average template blurs the image while eliminating the noise. The Gaussian template does a better job, but blurring is still inevitable, as it is rooted in the mechanism itself. A more desirable approach is selective smoothing, that is, smoothing only in the noise areas and not smoothing in the noise-free areas. This potentially minimizes the influence of the blur. It is called adaptive filtering.

So how do we determine whether a local area contains noise and needs to be smoothed? The answer lies in the nature of noise, that is, local continuity. The presence of noise causes a grayscale jump at the noise point, producing a large grayscale span. Therefore, either of the following can be used as the criterion:

1. The difference between the maximum intensity and the minimum intensity of a local area is greater than a certain threshold T, i.e., max(R) - min(R) > T, where R represents the local area.
2. The variance is greater than a certain threshold T, i.e., D(R) > T, where D(R) represents the variance of the pixels in the area R.
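
A minimal sketch of the two criteria in Python with NumPy; the threshold T and the choice between the criteria are left to the caller, since the text fixes neither:

```python
import numpy as np

def needs_smoothing(region, T, criterion="range"):
    """Return True if the local area `region` should be smoothed.
    'range':    max(R) - min(R) > T
    'variance': D(R) > T, where D(R) is the variance of the pixels."""
    if criterion == "range":
        return float(region.max()) - float(region.min()) > T
    return region.var() > T
```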

Others

There are some other approaches to smoothing, such as the median filter and the adaptive median filter.

Sharpening Filters
Image sharpening filters highlight edges by removing blur. They enhance the grayscale transitions of an image, which is the opposite of image smoothing. The arithmetic operators of smoothing and sharpening also testify to this fact: while linear smoothing is based on a weighted summation or integral operation over the neighbourhood, sharpening is based on the derivative (gradient) or finite difference.

Distinguishing noise from edges still matters in sharpening. The difference is that in smoothing we try to smooth out the noise and ignore edges, while in sharpening we try to enhance the edges and ignore noise.

Some applications where sharpening filters are used are:

• Medical image visualization
• Photo enhancement
• Industrial defect detection
• Autonomous guidance in military systems

There are a couple of filters that can be used for sharpening. In this article, we will introduce one of the most popular: the Laplace operator, which is based on the second-order derivative.

The corresponding filter template is as follows (the standard 4-neighbour Laplacian):

w1 =   0   1   0
       1  -4   1
       0   1   0

With the sharpening enhancement, two numbers with the same absolute value represent the same response, so w1 is equivalent to the following template w2:

w2 =   0  -1   0
      -1   4  -1
       0  -1   0

Taking a further look at the structure of the Laplacian template, we see that the template is isotropic under 90-degree rotation. The Laplace operator performs equally well for edges in the horizontal and the vertical direction, thus avoiding the hassle of having to filter twice.
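
A minimal sketch of Laplacian sharpening using the positive-centre template w2 and the spatial_filter sketch from the filtering section; adding the response back to the image is one common way to apply the operator, not necessarily the only one intended here:

```python
import numpy as np

# 4-neighbour Laplacian template with a positive centre (w2).
w2 = np.array([[ 0, -1,  0],
               [-1,  4, -1],
               [ 0, -1,  0]], dtype=np.float64)

def laplacian_sharpen(f):
    """Sharpen by adding the Laplacian response back to the image:
    g(x, y) = f(x, y) + R(x, y), clipped to [0, 255]."""
    lap = spatial_filter(f, w2)   # sketch from the filtering section
    return np.clip(f.astype(np.float64) + lap, 0, 255).astype(np.uint8)
```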
