Real time mass flow rate measurement using multiple fan beam optical
tomography
R. Abdul Rahim a,∗, L.C. Leong a, K.S. Chan a, M.H. Rahiman b, J.F. Pang a
a Process Tomography Research Group (PROTOM), Department of Control and Instrumentation Engineering, Faculty of Electrical Engineering,
Universiti Teknologi Malaysia, 81310 UTM Skudai, Johor, Malaysia
b Department of Mechatronics, Universiti Malaysia Perlis, Malaysia
Abstract
This paper presents the implementation of a multiple fan beam projection technique using optical fibre sensors for a tomography system. From the dynamic experiment on solid/gas flow using plastic beads in a gravity flow rig, the designed optical fibre sensors are reliable in measuring mass flow rates below 40% of flow. Another important matter discussed is the image processing rate (IPR). Generally, the applied image reconstruction algorithms, the construction of the sensor and the designed software are considered reliable and suitable for performing real-time image reconstruction and mass flow rate measurements.
© 2007, ISA. Published by Elsevier Ltd. All rights reserved.
Keywords: Optical tomography; Fan beam; Mass flowrate; Projection geometry; Sensors mapping
4. Hardware construction
Table 1. Configuration of transmitters and corresponding receivers.
One frame of light emission has Txn, Tx8+n, Tx16+n and Tx24+n (taking n as the respective projection number, ranging from 0 to 7) transmitting light at the same time.
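As a small illustration of this firing pattern (a sketch, not the authors' code), the loop below lists which four transmitters emit together in each of the eight frames:

    #include <cstdio>

    // Illustrative only: in the 4-projection scheme described above, frame n
    // (n = 0..7) fires transmitters Tx n, Tx n+8, Tx n+16 and Tx n+24 at the
    // same time, so all 32 transmitters are covered in eight frames.
    int main() {
        for (int n = 0; n < 8; ++n) {
            std::printf("frame %d: Tx%d, Tx%d, Tx%d, Tx%d\n",
                        n, n, n + 8, n + 16, n + 24);
        }
        return 0;
    }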
The projection geometry of the sensors determines the relationship between the sensor arrangement and its mathematical modelling. In this research, thirty-two pairs of optical sensors are employed: thirty-two infrared transmitters and thirty-two photodiode receivers, placed alternately. Each optical sensor is coupled through a separate optical fibre to the periphery of the sensor fixture. The diameter of the sensor fixture which mounts the fibre optics is 100 mm, so its circumference is 314.2 mm.

By distributing the fibre optics evenly around the circumference of the fixture, as shown in Fig. 6, the angle between each emitter and its adjacent receiver, viewed from the centre of the circle, is 5.625° (dimensions are not to scale in the figure).

The optical fibre sensor arrangement in Fig. 7 is seen from the top view. The sensor fixture is designed to be bigger than the pipeline used. Tx and Rx represent the respective transmitters and receivers, which are arranged alternately in the clockwise direction, starting from Tx0 and ending at Rx31. The 80 mm dimension is the diameter of the investigated pipe, and the red gridlines are the 32 × 32 mapping resolution applied in solving the forward and inverse problems.
To simplify the mapping of the optical fibre sensors onto the two-dimensional image plane, it is assumed that the diameters of both types of optical fibre sensor (transmitters and receivers) are the same, namely 2.20 mm. This assumption is made because the fibre optic's inner core is 1 mm and, after lensing, the surface of the inner core has an approximate diameter of 2.20 mm. A two-dimensional image plane made up of 640 × 640 pixels, shown in Fig. 8, serves as the mapping platform for the optical fibre sensors.

The mapping of the sensors onto the image plane is important in solving the forward problem. Through Visual C++ programming, the image plane is set to 640 × 640 pixels using two-dimensional Cartesian coordinates. The top-left corner coordinate is (0, 0) and the bottom-right corner coordinate is (640, 640), as shown in Fig. 8. To map the sensors, labeled P0–P63, calculations are done by dividing the circle into four quadrants as presented in Fig. 8. For all the quadrants in Fig. 9, sensors P0–P16 can be mapped by using the drawings and equations in Table 2, whereby:

Pn.x = the x coordinate of the nth sensor.
Pn.y = the y coordinate of the nth sensor.
r = the radius of the circle, which is 320 pixels.
θ = the angle between an emitter and its adjacent receiver viewed from the centre of the circle, which is 5.625°.

By using the mapping method above, the computer-generated coordinates of all the sensors are tabulated in Table 3.

The 640 × 640 pixel plane is divided into a resolution of 40 × 40 rectangles, so each rectangle contains a total of 256 pixels. By definition, a pixel is the smallest unit of an image, while resolution refers to the clarity or sharpness of an image, given by the number of pixels per square inch on a computer-generated display. When the transmitters and receivers are distributed evenly on the periphery of the sensor fixture, the equispaced interval [5] between each transmitter and the subsequent receiver is 2.71 mm, obtained from Eq. (1):

E.S = (C − 2.20n) / n        (1)

whereby:
E.S = the equispaced interval (mm).
C = the circumference of the sensor fixture (mm).
n = the total number of optical fibre sensors, which is 64.

Table 2
Mapping sensors to the image plane using Cartesian coordinates

Q2:  Pn.x = r + r sin θ,   Pn.y = r + r cos θ
Q3:  Pn.x = r − r sin θ,   Pn.y = r + r cos θ
Q4:  Pn.x = r − r sin θ,   Pn.y = r − r cos θ

These predetermined geometrical coordinates and rectangles will be useful in sensor modelling and in generating the sensitivity maps.
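As an illustration of this mapping (a minimal sketch, not the authors' program), the following code places the 64 sensor points on the image plane using a single clockwise angle measured from the top of the plane instead of the per-quadrant form of Table 2. With r = 320 pixels and a 5.625° step it reproduces the coordinates of Table 3 to within a pixel; the exact rounding used at the plane boundary is not stated in the text.

    #include <cmath>
    #include <cstdio>

    // Minimal sketch: image-plane coordinates of sensors P0..P63, assuming the
    // points lie on a circle of radius r = 320 pixels centred at (320, 320) and
    // are separated by the 5.625 degree interval given in the text.
    // Eq. (1) check: E.S = (C - 2.20n)/n = (314.2 - 140.8)/64 = 2.71 mm.
    int main() {
        const double pi   = 3.14159265358979323846;
        const double r    = 320.0;               // circle radius in pixels
        const double step = 5.625 * pi / 180.0;  // angular interval in radians
        for (int n = 0; n < 64; ++n) {
            double theta = (n + 1) * step;       // clockwise angle from the top of the plane
            int x = static_cast<int>(std::lround(r + r * std::sin(theta)));
            int y = static_cast<int>(std::lround(r - r * std::cos(theta)));
            std::printf("P%d = (%d, %d)\n", n, x, y);
        }
        return 0;
    }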
4.3. Sensitivity maps

From the fan beam projection properties [1], the fan beam projection can be seen as a point source of radiation that emanates as a fan-shaped beam, covering many directions at different angles. For a given source and detector combination, functions are calculated that describe the sensitivity of each source–measurement pair to changes in the optical properties within each pixel of the model [10]. Thus, when a projection beam is mapped onto the two-dimensional image plane, each light beam spreads across the rectangles with different weights, as illustrated in Fig. 10.

The solution of the forward problem generates a series of sensitivity maps, and the inversion of these sensitivity matrices provides a reconstruction of the optical properties [10]. Basically, the number of generated sensitivity maps depends on the number of projections of the sensors. Whether the 2-projection or the 4-projection method is applied, each light beam must be sampled individually. Therefore, for the thirty-two transmitters and the corresponding six receivers that receive light per emission of each transmitter, the total number of projections is 192, which means that 192 sensitivity maps are generated as well. Before these sensitivity maps can be used for image reconstruction, they must first be normalized.

Normalization of the sensitivity maps is needed because there is no simple linear relationship between the weight distributions in a rectangle for different light projections. For example, rectangle (25, 30) might have a certain percentage of light passing through it for the beam from transmitter 10 to receiver 25, while the beam emitted from transmitter 0 to receiver 16 might pass through rectangle (25, 30) with a different weight distribution. If other light beams pass through the same rectangle as well, its weight distribution becomes complicated. Thus, it is necessary to build a general relationship between the rectangles and all the projection beams that pass through them. A simple approach used in this research is to normalize the rectangle values in a similar manner to normalizing pixel values in ECT, so that a rectangle takes the values zero and one when it contains the lowest and highest weight distributions respectively.

As a reference for mapping the sensitivity maps onto the two-dimensional image plane, Fig. 8 is used. On the 640 × 640 pixel plane, gridlines divide the pixels into 40 × 40 rectangles, where each rectangle contains 16 × 16 pixels. When a light projection is mapped onto the rectangles, the pixels in a particular rectangle are marked either '1' (covered by the light beam) or '0' (not covered by the light beam). The total number of '1' pixels is counted for each rectangle and represents the total weight of that rectangle.

In image reconstruction, the actual area of flow extends from pixel 64 to pixel 575 on both the x-axis and the y-axis. The image plane is therefore reconfigured so that all calculations and scanning of pixels are done only for the actual flow area, to avoid wasting time and resources on calculating and scanning the whole 640 × 640 pixels.
Table 3
The full coordinates of all the sensors from P0–P63
Sensor Coordinate Sensor Coordinate Sensor Coordinate Sensor Coordinate
P0 (351, 2) P16 (638, 351) P32 (289, 638) P48 (2, 289)
P1 (382, 7) P17 (633, 382) P33 (258, 633) P49 (7, 258)
P2 (412, 14) P18 (626, 412) P34 (228, 626) P50 (14, 228)
P3 (442, 25) P19 (615, 442) P35 (198, 615) P51 (25, 198)
P4 (470, 38) P20 (602, 470) P36 (170, 602) P52 (38, 170)
P5 (497, 54) P21 (586, 497) P37 (143, 586) P53 (54, 143)
P6 (523, 73) P22 (567, 523) P38 (117, 567) P54 (73, 117)
P7 (546, 94) P23 (546, 546) P39 (94, 546) P55 (94, 94)
P8 (567, 117) P24 (523, 567) P40 (73, 523) P56 (117, 73)
P9 (586, 143) P25 (497, 586) P41 (54, 497) P57 (143, 54)
P10 (602, 170) P26 (470, 602) P42 (38, 470) P58 (170, 38)
P11 (615, 198) P27 (442, 615) P43 (25, 442) P59 (198, 25)
P12 (626, 228) P28 (412, 626) P44 (14, 412) P60 (228, 14)
P13 (633, 258) P29 (382, 633) P45 (7, 382) P61 (258, 7)
P14 (638, 289) P30 (351, 638) P46 (2, 351) P62 (289, 2)
P15 (639, 320) P31 (320, 639) P47 (1, 320) P63 (320, 1)
The resolution of the actual image plane thus becomes 32 × 32 in image reconstruction, which contains 512 × 512 pixels. This is explained graphically in Fig. 11. After the actual flow area has been identified, the sensitivity maps are generated using the Visual C++ programming language as shown in Eq. (2):

S_{i,j}(x, y) = \sum_{a=0}^{511} \sum_{b=0}^{511} P_{x,y}(a, b)        (2)

where P_{x,y}(a, b) = 1 if the pixel is black (changed) and P_{x,y}(a, b) = 0 if the pixel is white (unchanged).
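To make Eq. (2) concrete, the sketch below (an illustration, not the authors' code) counts the '1' (black) pixels of one binary beam image inside each 16 × 16 block of the 512 × 512 flow area, producing the 32 × 32 rectangle weights of a single sensitivity map.

    #include <array>
    #include <cstdio>

    // Sketch of Eq. (2): beam[a][b] is 1 (black/changed) where the light beam
    // covers the pixel and 0 (white/unchanged) elsewhere.  Each 16 x 16 block
    // of pixels is one of the 32 x 32 rectangles, and the rectangle weight
    // S(x, y) is the number of covered pixels inside it.
    using BeamImage      = std::array<std::array<unsigned char, 512>, 512>;
    using SensitivityMap = std::array<std::array<int, 32>, 32>;

    SensitivityMap computeSensitivityMap(const BeamImage& beam) {
        SensitivityMap S{};                      // all weights start at zero
        for (int a = 0; a < 512; ++a)
            for (int b = 0; b < 512; ++b)
                if (beam[a][b] == 1)
                    ++S[a / 16][b / 16];         // rectangle (x, y) = (a/16, b/16)
        return S;
    }

    int main() {
        static BeamImage beam{};                 // all zeros; a real beam mask would be rasterized here
        beam[100][200] = 1;                      // mark one covered pixel as a toy example
        SensitivityMap S = computeSensitivityMap(beam);
        std::printf("weight of rectangle (6, 12) = %d\n", S[6][12]);
        return 0;
    }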
Next, all 192 sensitivity maps are summed and normalized using the following two approaches:

1. Summing the maps according to the weights of the same rectangle across all projections and normalizing the maps (rectangle-based normalization).
2. Summing the maps based on the weights of all the rectangles for each individual light projection and normalizing the maps (projection-based normalization).

The rectangle-based normalization totals up the black pixels from all the different projections in the same rectangle using Eq. (3) and normalizes the maps according to Eq. (4), whereby:

T1(x, y) = the total or sum of the same element in rectangle xy obtained from the 192 sensitivity maps.
S_{i,j}(x, y) = the 192 sensitivity maps.
N1_{i,j}(x, y) = the normalized sensitivity maps for the light beams of all Tx (0 ≤ i < 32) to Rx (0 ≤ j < 6) using the rectangle-based normalization.

Meanwhile, in the projection-based normalization, the total weight of the pixels in all the rectangles of each light projection is summed using Eq. (5) and then normalized as stated in Eq. (6):

T2(i, j) = \sum_{x=0}^{32} \sum_{y=0}^{32} S_{i,j}(x, y)        (5)

N2_{i,j}(x, y) = S_{i,j}(x, y) / T2(i, j),   for T2(i, j) > 0, 0 ≤ i < 32, 0 ≤ j < 6        (6)

whereby:

T2(i, j) = the total of the pixel weights of all rectangles in an individual light projection.
S_{i,j}(x, y) = the 192 sensitivity maps.
N2_{i,j}(x, y) = the normalized sensitivity maps for the light beams of all Tx to Rx using the projection-based normalization.

The normalized maps generated by Eqs. (4) and (6) each comprise a total of 192 maps.
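Eqs. (3) and (4) are not reproduced here, but from the definitions of T1 and N1 the rectangle-based normalization can be read as dividing each rectangle weight by the total weight accumulated in that rectangle over all 192 projections, while Eq. (6) divides by the total weight of the projection's own map. The sketch below implements both readings; it is an interpretation of the definitions above, not the authors' code.

    #include <vector>

    // Sketch of the two normalizations, assuming maps[p][x][y] holds the 192
    // sensitivity maps (p = projection index, 32 x 32 rectangle weights each).
    using Maps = std::vector<std::vector<std::vector<double>>>;  // [192][32][32]

    // Rectangle-based (reading of Eqs. (3)-(4)): divide each weight by the
    // total weight of that rectangle over all projections.
    Maps normalizeRectangleBased(const Maps& maps) {
        Maps out = maps;
        for (int x = 0; x < 32; ++x)
            for (int y = 0; y < 32; ++y) {
                double t1 = 0.0;                           // T1(x, y)
                for (const auto& m : maps) t1 += m[x][y];
                if (t1 > 0.0)
                    for (auto& m : out) m[x][y] /= t1;     // N1 = S / T1
            }
        return out;
    }

    // Projection-based (Eqs. (5)-(6)): divide each map by its own total weight.
    Maps normalizeProjectionBased(const Maps& maps) {
        Maps out = maps;
        for (auto& m : out) {
            double t2 = 0.0;                               // T2(i, j)
            for (const auto& row : m) for (double w : row) t2 += w;
            if (t2 > 0.0)
                for (auto& row : m) for (double& w : row) w /= t2;  // N2 = S / T2
        }
        return out;
    }

A caller would fill the 192 × 32 × 32 maps structure with the rectangle weights of Eq. (2) and select whichever normalization is required.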
5. Results and discussions

5.1. Mass flow rate results

Mass flow rate (MFR) can be defined as the mass, in grams or kilograms, that flows past a given cross-sectional area per second (units of g/s or kg/s). For a tomographic system with two layers of sensors (upstream and downstream), the mass flow rate can be obtained by multiplying the density of the flowing material (g/m³) by the velocity of the flow (m/s) and by the cross-sectional area of the pipe through which the objects flow (m²), where the velocity of the flow is determined by cross-correlating the concentration profiles of the upstream and downstream sensors. However, for the single layer of sensors implemented using the optical fibre fan beam method in this research, the velocity of the flowing object cannot be determined by the cross-correlation method.
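In symbols, the two-layer relation just described is

    \dot{m} = \rho \, v \, A

with \rho the material density (g/m³), v the cross-correlated flow velocity (m/s) and A the pipe cross-sectional area (m²), giving \dot{m} in g/s; as noted, this route is not available to the single-layer sensor used here.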
Instead, an alternative approach introduced by Ruzairi [8] is taken to obtain the mass flow rate for the solid/gas flow. First, the solid material for the flow system must be determined, because different solid materials have different densities and sizes and therefore yield different flow properties. Second, using the selected solid material, the specified flow rig must be calibrated. This results in a relation between the concentration measurements and the manually obtained mass flow rates; the manually obtained mass flow rates are the actual mass flow rates of the flowing system.

Instead of the gravity conveying system with a rotary screw feeder, a range of filters is designed and used. This is done to solve the problems highlighted by Chan and Ruzairi [3],
whereby the resulting flow from the rotary screw feeder can be non-uniform. The main aim of investigating the MFR in this research is to provide an estimate of the range of mass flow over which the optical fibre sensors perform reliably. The flow diagram of the calibration process is shown in Fig. 12, while the experimental properties of the dynamic flow system are identified as follows:
Table 4. Percentage of flow obtained from concentration profiles.

Table 6. Investigation of the image processing rate using different computer speeds.
By applying the calculated K values from the calibration graph into both image reconstruction algorithms, the new linear equations for the graphs are presented in Eq. (9):

y_{LBP} = (26.000 × x_{LBP}) × 0.897        (9a)

y_{10th Ie} = (24.211 × x_{10th Ie}) × 0.961        (9b)

Overall, the conclusion that can be drawn is that, for the plastic bead flow using the gravity flow rig on an experimental scale, the optical fibre sensors implemented in this research are suitable for measuring mass flow rates in the range of zero to not more than 40% of flow. The slope ratio, r, obtained proves that the iterative method results in a better image for low-density flows.

5.3. Image processing rate (IPR)

The image processing rate is a comparative measurement of the time taken to process and display the image using different reconstruction algorithms. Based on the optimization concept, a programmer should use less code to accomplish the application and spend the minimum time executing the written program. In terms of optimizing the execution of the image reconstruction and the display of the image as bitmaps, a suitable timer and a suitable image display method are essential. In this research, the Device Dependent Bitmap (DDB) method is used to draw the tomogram because it is considered the fastest way to construct and display an image compared with the other methods [7]. A DDB is controlled by the device driver of the display card and can only be displayed on a device with the correct configuration. Compared with the Device Independent Bitmap (DIB), the DDB saves processing time because a DIB has to be converted to a DDB before the image can be displayed on screen.
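A minimal Win32 sketch of the DDB approach just described is shown below (illustrative only: the grey-level colour mapping and the 16-pixel cell size are assumptions, and the paper's actual drawing routine is not reproduced here).

    #include <windows.h>

    // Sketch: render a 32 x 32 tomogram into a device-dependent bitmap (DDB)
    // selected into a memory DC, then copy it to the window with one BitBlt.
    void drawTomogram(HWND hwnd, const double conc[32][32]) {
        HDC     hdcWin = GetDC(hwnd);
        HDC     hdcMem = CreateCompatibleDC(hdcWin);
        HBITMAP ddb    = CreateCompatibleBitmap(hdcWin, 512, 512);
        HGDIOBJ old    = SelectObject(hdcMem, ddb);

        for (int x = 0; x < 32; ++x) {
            for (int y = 0; y < 32; ++y) {
                int g = static_cast<int>(255.0 * conc[x][y]);   // assumed 0..1 concentration
                HBRUSH brush = CreateSolidBrush(RGB(g, g, g));
                RECT cell = { x * 16, y * 16, (x + 1) * 16, (y + 1) * 16 };
                FillRect(hdcMem, &cell, brush);
                DeleteObject(brush);
            }
        }
        BitBlt(hdcWin, 0, 0, 512, 512, hdcMem, 0, 0, SRCCOPY);  // display the DDB

        SelectObject(hdcMem, old);
        DeleteObject(ddb);
        DeleteDC(hdcMem);
        ReleaseDC(hwnd, hdcWin);
    }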
Next is the concern of using a suitable timer to measure the time needed to process and display the image. From the research done by Pang [7] on the timing functions found in the Win32 library, the high performance timer is the only timer able to measure a minimum time resolution of 1 µs, while the rest of the timers, such as GetTickCount, clock, timeGetTime and ftime, can only count a minimum time resolution of 1 ms. Pang [7] also ran an experiment to find the standard deviation of the timers by measuring a routine's execution time over 20 sets of data. The results showed that the high performance timer has the lowest standard deviation, which means that it has the best repeatability and the highest precision. Thus, based on this deduction, the high performance timer is applied in this research.
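In the Win32 library the high performance timer is accessed through QueryPerformanceFrequency and QueryPerformanceCounter; a minimal sketch of timing one processing-and-display cycle is shown below, where reconstructAndDisplay() is a hypothetical placeholder for the actual reconstruction and DDB drawing code.

    #include <windows.h>
    #include <cstdio>

    // Sketch: time one reconstruction/display cycle with the Win32 high
    // performance timer (microsecond-level resolution, as used for the IPR
    // measurements in this work).
    void reconstructAndDisplay() { /* reconstruction + DDB drawing would go here */ }

    int main() {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);        // counter ticks per second
        QueryPerformanceCounter(&t0);

        reconstructAndDisplay();

        QueryPerformanceCounter(&t1);
        double seconds = static_cast<double>(t1.QuadPart - t0.QuadPart) /
                         static_cast<double>(freq.QuadPart);
        std::printf("frame time: %.6f s  (IPR = %.2f frames/s)\n",
                    seconds, seconds > 0.0 ? 1.0 / seconds : 0.0);
        return 0;
    }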
Although the functions above are selected to execute the image processing and display in the most optimized manner, it is found that the speed of the image processing rate depends on the processing speed of the computer used. The image processing rate for image reconstruction using the LBP and iterative algorithms is therefore investigated using computers with different speed specifications, and the result of the investigation is tabulated in Table 6.

During the investigation of the image processing rate, the same GUI program is run on four different computers with different processor speeds. It is noticed that the IPR increases with the processor speed, as shown in Table 6. The ISA slot availability is also checked for each computer, because the Keithley DAS1802HC data acquisition card available in the laboratory requires an ISA slot for communication. Recent Intel Pentium IV computers have only PCI slots and are therefore not suitable for a real-time data acquisition process, although they have the potential to achieve a higher image processing rate. Instead, the Intel Pentium III 933 MHz computer is used because it has the fastest processor among the computers with ISA slots. In conclusion, with the same program, different computer speeds result in different image processing rates.

6. Conclusions

From the dynamic experiment of solid/gas flow using plastic beads in a gravity flow rig, the designed optical fibre sensors are reliable in measuring the mass flow rate below 40% of flow.

Another important matter that has been discussed is the image processing rate (IPR). The IPR depends on the optimization of the functions used to process the image with the different image reconstruction algorithms and to display the tomogram GUI. This research has opted for the most optimized options: the DDB display method and the high performance timer, which is able to measure a minimum time resolution of 1 µs. The only factor limiting the IPR in this research is that the fastest available processor in a computer with an ISA slot is the Intel Pentium III 933 MHz, which is slow compared with the Intel Pentium IV processors. The choice of computer is limited because the Keithley DAS1802HC requires communication through an ISA slot and only the Intel Pentium III computers have this feature. Generally, the applied image reconstruction algorithms, the construction of the sensor and the designed software are considered reliable and suitable to
perform real-time image reconstruction and mass flow rate measurements with an overall accuracy of about 85%.

References

[1] Abdul Rahim R, Chan KS, Pang JF, Leong L. A hardware development for optical tomography system using switch mode fan beam projection. Sensors & Actuators A: Physical 2005;120(1):277–90.
[2] Chan Kok San. Real time image reconstruction for fan beam optical tomography system. M.Sc. thesis. Universiti Teknologi Malaysia; 2002.
[3] Chan KS, Ruzairi AR. Tomographic imaging of pneumatic conveyor using optical sensor. In: 2nd world engineering congress. 2002.
[4] Green RG, Horbury NM, Abdul Rahim R, Dickin FJ, Naylor BD, Pridmore TP. Optical fibre sensors for process tomography. Measurement Science & Technology 1995;6:1699–704.
[5] Kak AC, Slaney M. Principles of computerized tomographic imaging. Electronic ed. New York: IEEE Press; 1999.
[6] Leong Lai Chen. Implementation of multiple fan beam projection technique in optical fibre process tomography. M.Sc. thesis. Universiti Teknologi Malaysia; 2005.
[7] Pang Jon Fea, Ruzairi Abdul Rahim, Chan Kok San. Infrared tomography sensor configuration using 4 parallel beam projections. In: 3rd international symposium on process tomography. 2004. p. 48–51.
[8] Ruzairi Abdul Rahim. A tomography imaging system for pneumatic conveyors using optical fibres. Ph.D. thesis. Sheffield Hallam University; 1996.
[9] Ibrahim Sallehuddin. Measurement of gas bubbles in a vertical water column using optical tomography. Ph.D. thesis. Sheffield Hallam University; 2000.
[10] Xu H, Dehghani H, Pogue BW, Springett R, Paulsen KD, Dunn JF. Near-infrared imaging in the small animal brain: optimization of fiber positions. Journal of Biomedical Optics 2003;8(1):102–10.

R. Abdul Rahim received the B.Eng. degree with Honours in Electronic System and Control Engineering from Sheffield City Polytechnic in 1992. He received his Ph.D. in Instrumentation & Electronics Engineering from Sheffield Hallam University in 1996. Currently he is Deputy Dean (Corporate Services) of the Research Management Centre, Universiti Teknologi Malaysia. His current research interests are process tomography and process measurement.