Intro To Biomedical Signal Processing
In this chapter:
Different types of signals are defined.
The fundamental concepts of signal transformation and processing are
presented while avoiding detailed mathematical formulations.
WHAT IS A ONE-DIMENSIONAL SIGNAL?
Analog Signals.
Discrete Signals.
Digital Signals.
Analog Signals:
Both the time and amplitude axes are continuous.
At any given real value of time "t," the amplitude "g(t)" can take any
value belonging to a continuous interval of real numbers.
Discrete Signals: g(n) = g(nTs)
A discrete signal is the sampled version of an analog signal: the
amplitude axis is continuous but the time axis is discrete.
Measurements of the quantity are available only at certain specific
times, typically multiples of the sampling period "Ts."
Nyquist theorem: gives an upper limit on the size of the sampling
period Ts (equivalently, the sampling rate 1/Ts must exceed twice the
highest frequency present in the signal). Respecting this limit guarantees
that the discrete signal contains all the information of the original
analog signal.
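The sampling relation g(n) = g(nTs) and the Nyquist condition can be sketched in a few lines. The signal, frequency, and sampling rate below are illustrative choices, not values from the text: a 5 Hz sine stands in for the analog signal, sampled at 50 Hz.

```python
import math

# Hypothetical example: a 5 Hz sine standing in for the analog signal g(t).
f_signal = 5.0            # highest frequency present in the signal (Hz)
Ts = 1.0 / 50.0           # sampling period; sampling rate fs = 1/Ts = 50 Hz

# Discrete signal g(n) = g(n*Ts): sample the sine only at multiples of Ts.
g = [math.sin(2 * math.pi * f_signal * n * Ts) for n in range(25)]

# Nyquist condition: fs must exceed twice the highest frequency (50 > 2 * 5).
fs = 1.0 / Ts
```

Here fs = 50 Hz comfortably exceeds 2 × 5 Hz, so the discrete samples retain all the information in the original sine.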
PREFERENCE OF DIGITAL SIGNALS OVER ANALOG
SIGNALS.
For example, one can measure and sample a quantity such as temperature
only at certain times. The times at which the temperature is sampled are
often multiples of a certain sampling period "Ts."
The discrete signal can be easily stored, while the analog signal
needs a large amount of storage space. It is also evident that
signals of smaller size are easier to process.
Digital Signals:
Both the time and amplitude axes are discrete.
A digital signal is defined only at certain times, and its amplitude at
each sample can take only one of a fixed finite set of values.
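The step from a discrete to a digital signal is amplitude quantization. The sketch below quantizes the sampled sine from before to a finite set of levels; the 256-level (8-bit) grid over [-1, 1] is an illustrative choice, not a value from the text.

```python
import math

# Discrete signal with continuous amplitudes (sampled 5 Hz sine, Ts = 1/50 s).
Ts = 1.0 / 50.0
g = [math.sin(2 * math.pi * 5.0 * n * Ts) for n in range(25)]

# Quantize each amplitude onto a fixed finite set of levels --
# here 256 levels spanning [-1, 1], an illustrative 8-bit choice.
levels = 256
step = 2.0 / (levels - 1)
g_digital = [round((x + 1.0) / step) * step - 1.0 for x in g]

# g_digital is now a digital signal: discrete in both time and amplitude,
# with quantization error at most step / 2 per sample.
```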
PROCESSING AND TRANSFORMATION OF
SIGNALS
Image capturing:
In medical imaging, sensors of different physical properties of materials
(including light intensity and color) are employed to record functional
information about the tissue under study.
Image representation:
Images are visually represented as digital images, which are either
gray-level images or color images.
In a gray-level image: the light intensity or brightness of an object
shown at coordinates (x, y) of the image is represented by a number called
the "gray level."
Points that are partially bright and partially dark receive a gray-level
value between 0 and the maximum value of brightness.
The most popular gray-level ranges used in typical images are 0–255,
0–511, and 0–1023.
The gray levels are almost always set to be nonnegative integers, which
saves a lot of digital storage space.
The wider the range of the gray level, the better the resolution that is
achieved.
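The ranges quoted above correspond to storing each gray level in a fixed number of bits: a b-bit nonnegative integer can represent values 0 through 2^b − 1. A minimal sketch of this relation:

```python
def gray_level_max(bits: int) -> int:
    """Largest gray level representable with the given number of bits."""
    return 2 ** bits - 1

# The ranges quoted in the text correspond to 8-, 9-, and 10-bit storage.
ranges = {bits: gray_level_max(bits) for bits in (8, 9, 10)}
```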
Color images: the "red green blue" or "RGB" standard. RGB is based on
the principle that each color is a combination of the three primary colors:
red, green, and blue.
The screen provides three dots for every pixel: one red dot, one green dot,
and one blue dot. This means that in color images for every coordinate (x,
y), three numbers are provided.
This in turn means that the image itself is represented by three 2-D signals,
gR(x, y), gG(x, y), and gB(x, y), each representing the intensity of one
primary color.
As a result, every one of the 2-D signals (for one color) can be treated as
one separate image and processed by the same image processing
methods designed for gray-level images.
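Splitting an RGB image into the three 2-D signals gR(x, y), gG(x, y), and gB(x, y) can be sketched as below. The tiny 2×2 image and its 8-bit pixel values are hypothetical, chosen only for illustration.

```python
# A hypothetical 2x2 RGB image as nested lists: image[y][x] = (R, G, B),
# with illustrative 8-bit intensities.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (128, 128, 128)],
]

# Split into three 2-D signals gR, gG, gB, one per primary color.
gR = [[px[0] for px in row] for row in image]
gG = [[px[1] for px in row] for row in image]
gB = [[px[2] for px in row] for row in image]

# Each channel is now a separate 2-D array that can be processed
# with the same methods used for gray-level images.
```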
Image histogram:
Assume that the gray levels of all pixels in an image belong to the interval
[0, G−1]. If "r" represents the gray level of a pixel of the image, then 0 ≤ r ≤ G−1.
Now, for every value of r, calculate the normalized frequency p(r):
count the number of pixels in the image whose gray level equals r and
call it n(r), then divide by the total number of pixels in the image, n.
That is, p(r) = n(r)/n.
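The normalized histogram p(r) = n(r)/n can be computed directly from these definitions. The 2×3 image and the choice G = 4 below are illustrative, not from the text:

```python
# Normalized histogram p(r) = n(r) / n for a tiny hypothetical image
# with G = 4 gray levels (values 0..3).
G = 4
image = [
    [0, 1, 1],
    [2, 1, 3],
]

pixels = [r for row in image for r in row]   # flatten to a pixel list
n = len(pixels)                               # total number of pixels
p = [pixels.count(r) / n for r in range(G)]   # p(r) = n(r) / n

# p sums to 1: it is a discrete probability distribution of gray levels.
```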
EXAMPLE: