

Development and Field Testing of a Time-Synchronized System for Multi-Point Displacement Calculation Using Low-Cost Wireless Vision-Based Sensors

Darragh Lydon, Myra Lydon, Jesús Martínez del Rincón, Susan E. Taylor, Desmond Robinson, Eugene O'Brien, and F. Necati Catbas

Abstract— This paper presents a contactless multi-point displacement measurement system using multiple synchronized wireless cameras. Our system makes use of computer vision techniques to perform displacement calculations, which can be used to provide valuable insight into the structural condition and service behavior of bridges under live loading. The system outlined in this paper provides a low-cost, durable solution which is rapidly deployable in the field. The architecture of this system can be expanded to include up to ten wireless vision sensors, addressing the limitation of existing solutions, which are limited in scope by their inability to reliably track multiple points on medium- and long-span bridge structures. Our multi-sensor approach facilitates multi-point displacement measurement and additional vision sensors for vehicle identification and tracking that could be used to accurately relate the bridge displacement response to the load type in the time domain. The performance of the system was validated in a series of controlled laboratory tests. This paper will significantly advance current vision-based structural health monitoring systems, which can be cost prohibitive, and provides a rapid method of obtaining data which accurately relates to measured bridge deflections.

Index Terms— Computer vision, feature extraction, HD video, image motion analysis, image processing, motion estimation.

Manuscript received April 9, 2018; revised July 3, 2018; accepted July 4, 2018. Date of publication July 6, 2018; date of current version November 13, 2018. This work was supported by the U.S.-Ireland Collaborative Project between Queen's University Belfast, University College Dublin, and the University of Central Florida under Grant USI0067. The associate editor coordinating the review of this paper and approving it for publication was Prof. Kazuaki Sawada. (Corresponding author: Darragh Lydon.)

D. Lydon, M. Lydon, S. E. Taylor, and D. Robinson are with the School of Natural and Built Environment, Queen's University Belfast, Belfast BT9 5AG, U.K. (e-mail: dlydon01@qub.ac.uk; m.lydon@qub.ac.uk; s.e.taylor@qub.ac.uk; des.robinson@qub.ac.uk).

J. M. del Rincón is with the School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast BT9 5BN, U.K. (e-mail: j.martinez-del-rincon@qub.ac.uk).

E. O'Brien is with the School of Civil Engineering, University College Dublin, Dublin D04 K3H4, Ireland (e-mail: eugene.obrien@ucd.ie).

F. N. Catbas is with the Civil, Environmental and Construction Engineering Department, University of Central Florida, Orlando, FL 32816 USA (e-mail: catbas@ucf.edu).

Digital Object Identifier 10.1109/JSEN.2018.2853646

I. INTRODUCTION

FACILITATING over 90% of motorized passenger travel and 65% of domestic freight, the road network is the most popular means of transport in the United Kingdom (UK). Currently, UK transport infrastructure is rated as second worst among the G7 countries, and there is a bridge maintenance backlog valued at £3.9bn. According to the 2017 Infrastructure Report Card, the corresponding figure in the USA is $123bn, resulting in 188 million daily trips across structurally deficient bridges [1]. In the UK, the budget for core bridge maintenance has been reduced by up to 40% in recent years [2]. This budgetary shortfall means that cost-effective and accurate structural information on bridge condition is becoming increasingly important. According to the literature [3], the prevalent method for bridge monitoring continues to be visual inspection, which can be highly subjective and can differ depending on climatic conditions. That study goes into further detail on the efficacy of routine and in-depth inspections and determines that most inspection teams fail to determine bridge condition accurately. A recent study has indicated that bridge inspections vary in quality and are not always carried out by a senior engineer, with many companies outsourcing the inspections to untrained individuals [4].

Structural Health Monitoring (SHM) systems provide a valuable alternative to traditional inspections and overcome many of the previous limitations. SHM can provide an unbiased means of determining the true state of our ageing infrastructure. Sensor systems are used to monitor bridge deterioration and provide real information on the capacity of individual structures, hence extending the safe working life of bridges and improving safety. Monitoring the displacement of a structure under live loading provides valuable insight into the structural behavior and can provide an accurate descriptor of bridge condition. However, to monitor deterioration over time it is vital that the cause of displacement is also understood. Relating real-time displacement along the span of a bridge to load type and location provides an opportunity to accurately identify localized damage within the structure.

Displacement can be measured using traditional sensors such as LVDTs. These instruments require contact with the bridge structure to obtain measurements, as well as an independent and rigid support system, which can be difficult in many field applications. Accelerometers provide a promising alternative. The drawback with the usage of accelerometers is that they can be vulnerable to numerical error from double integration and initial condition analysis [5].

Laser vibrometers can provide an accurate measurement at a single monitoring location, with the disadvantage of not providing the flexibility of measurement available in vision systems, as they must remain fixed at a single point throughout the measurement. Global Positioning Systems (GPS) can also be used for displacement calculation, but their accuracy is not comparable to that of other systems: the majority of commercial systems are only capable of centimeter-level resolution, far from sub-millimeter accuracy [6]. Traditional sensors also face challenges in evaluating the displacement of a structure as a reaction to live loading, or in being accurately synchronized to this loading, due to sensor setup and the internal mechanics of LVDTs or slide-wire potentiometers.

II. COMPUTER VISION BASED SENSORS FOR STRUCTURAL HEALTH MONITORING

There are numerous examples in the literature of the efficacy of Computer Vision as a tool for SHM. In [7], Feng et al. developed a low-cost contactless system for monitoring the displacement of a bridge structure, where results comparable to LVDT were obtained at a distance of approximately 30 m. These readings are useful for displacement calculation, but the requirement of having a laptop computer connected to the camera used for obtaining video images is restrictive in rural field applications. The capability of a Computer Vision system to monitor movement of a stadium structure under severe crowd loading has been proven, indicating its suitability for monitoring structures under dynamic loading [8]. There are additional examples of vision-based displacement calculation in [9] and [10].

The work discussed above all concerns single-point displacement calculation, which is unsuitable for the monitoring of long-span bridges when a deflection profile is to be determined. There are two approaches to multi-point displacement measurement using cameras. The first approach is demonstrated in [11]. This research involves selecting multiple displacement points in the viewpoint and using them to calculate multiple displacements, with the drawback of decreasing the resolution of the system as less detail of the points is available, especially for medium to long infrastructure. The concept of multiple points from a single camera's viewpoint has been expanded upon using a high-resolution camera [12]. However, since no traditional sensors were used as a means of comparison in this research, it is difficult to verify the real accuracy of the system when deployed in the field. The other approach to multi-point displacement calculation involves the use of multiple synchronized cameras. Early work in this area was carried out in [13], where multiple personal computers (PCs) were used to control camcorders in a master-slave relationship. This system is based on estimating the time lag between master and slave computers and is also dependent on having the cameras controlled by a PC at all times, which are severe constraints for practical applications. This research is built upon in [14], with a more advanced version of the synchronization system being used. The inherent disadvantage of a frame grabber and PC being required to connect the cameras limits the scope of the system for field deployments, due to power consumption and difficulties with cabling cameras to the computers.

Previous research has shown that Computer Vision is viable as a method of displacement calculation. The increase in resolution of small, durable action cameras has led to their usage as a means of displacement calculation, as shown in [15]. Previous work in this area by the authors of this study has proven that our system is viable for usage in the laboratory and in field trials, with accurate results obtained compared to traditional sensors. The results obtained in field trials in that study were a correlation coefficient (CC) of 0.952053 and a Root Mean Square Error (RMSE) of 0.0314 when comparing vision-based results to traditional displacement sensors.

This paper aims to expand on that work by implementing this algorithm in tandem with multiple time-synchronized high-resolution action cameras. This provides greater flexibility in monitoring locations on bridge structures, and allows the development of a time-synchronized, portable, wireless, easy-to-use Computer Vision system for SHM which has been validated in laboratory trials and field experiments.

III. DEVELOPMENT OF A WIRELESS VISION BASED SENSOR NETWORK FOR STRUCTURAL ASSESSMENT

As previously mentioned, vision-based sensors provide a completely contactless means of displacement measurement of structures. In many cases the vision sensor is built into a specialist camera and controlled by use of a laptop computer and frame grabber apparatus. These systems can be cost prohibitive and require onerous site set-up and wiring arrangements. This research has been based on the development of a low-cost and easy-to-deploy system using the commercially available action cameras commonly known as "GoPros" [16]. In general vision-based monitoring, a camera is set up on a tripod at a stationary location in sight of the bridge. The vision sensor is used to record a series of images of a structural element of the bridge under live loading, usually at a minimum frame rate of 25 frames per second (fps). A significant advantage of the system is its ability to measure displacement at any location along the span of the bridge from one stationary camera location. The purpose of multi-point measurement is to provide more accurate information on bridge condition. A greater number of data points enables a more detailed assessment of the bridge behavior under live loading due to the creation of multiple influence lines or an influence surface. A change in the behavior under certain load types can then be used to detect and localize damage in the structure.

A. Hardware Configuration

1) Camera Modification: GoPro vision sensors provide a low-cost, high-resolution (up to 4K) solution for the capture of data. Additionally, their portability and wireless functionality for camera control offer a significant advantage. These cameras are resistant to adverse environmental conditions such as rain, making them practical for long-term deployment in the field. The disadvantage of using GoPros for bridge monitoring is that the standard GoPro lens has a limited focal length, rendering them less suitable for accuracy over long-distance monitoring of structures. Research was therefore carried out into potential modifications to the camera to add long-distance monitoring capability.


A solution was found using a modification kit for the GoPro: Ribcage [17]. The Ribcage adds functionality for the attachment of C- or F-mount zoom lenses to a GoPro, allowing usage of the GoPro as a long-distance monitoring tool. A Computar 1/2" 25-135mm F1.8 C-mount lens [18] was attached to the GoPro for the testing detailed in the following sections. This lens was chosen because it can be fitted directly to a tripod while attached to the GoPro, allowing for stable mounting of the camera in laboratory and field trials. The hardware configuration for the test is shown in Fig 1, with an example of the camera mounting configuration shown in Fig 2. The GoPro was controlled during testing using the Capture app [19] for smartphones provided by GoPro, with footage saved to microSD cards for later transfer to a PC for post processing.

Fig. 1. Hardware specification, from left to right: GoPro Hero 4 Black, Ribcage lens modification and Computar 1/2" 25-135mm F1.8 C-mount lens.

Fig. 2. Camera mounting configuration for laboratory and field trials.

2) Synchronization Hardware: The system chosen to provide time synchronization for the GoPro systems is known as Syncbac [20]. This GoPro accessory attaches to the extension port of the GoPro and embeds timecode data into each frame recorded by the camera. Analysis of this metadata allows for synchronization of the recordings obtained by the system, using a solution developed by the authors in C++ in Microsoft Visual Studio. The Syncbac sends live timecode data via Radio Frequency (RF), with a range of 30-60m. There is also functionality for units to be initially synchronized with a master unit before handling timecode insertion without any additional information being provided. The range of the system can also be extended to 150-180m by use of a Pulse unit [21], which allows for a greater deployment range in addition to wireless control of all units via the Blink Hub app [22]. The Syncbac system also allows for wireless remote control of the cameras via a PC/smartphone app, meaning the cameras can be placed in areas not traditionally available for bridge monitoring using Computer Vision.
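To make the alignment step concrete, the sketch below illustrates the kind of offset calculation that embedded timecode enables. It is a minimal illustration, not the authors' C++ tool: it assumes each recording's first-frame timecode has already been extracted by some SMPTE timecode parser and converted to a frame count at the shared frame rate.

```python
# Minimal sketch of frame-level alignment from embedded timecode.
# Assumes the first-frame timecode of each recording has been parsed
# into a frame count on the shared timecode (the authors use a C++
# tool for the metadata extraction itself).

def head_offsets(start_a: int, start_b: int) -> tuple:
    """Frames to drop from the head of each recording so that both
    sequences begin on the same embedded timecode."""
    common_start = max(start_a, start_b)
    return common_start - start_a, common_start - start_b

# Camera A starts at frame 1200 of the shared timecode, camera B at
# frame 1175: drop 0 frames from A and 25 frames (1 s at 25 fps) from B.
print(head_offsets(1200, 1175))  # -> (0, 25)
```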
B. Software and Algorithm Development for Wireless Vision-Based Sensors

A requirement of all vision-based SHM systems is intensive post processing of the captured images into accurate bridge displacement responses [23]. For the system developed in this research, feature-based tracking was selected because, when paired with a reliable feature extraction technique, it is more robust and reliable than Digital Image Correlation (DIC) approaches, with similar precision [24]. The processing framework is composed of three main blocks, as shown in Fig 3.

Fig. 3. Block diagram of algorithm design.

1) Camera Calibration: Camera calibration is a method of determining the intrinsic and extrinsic parameters of the camera used to record the structure's motion, both to remove lens distortion effects and to provide a scaling factor for the conversion from pixel units to engineering units. The method used to remove lens distortion in this study was based on that proposed by Bouguet [25], where a series of images of a checkerboard or similar pattern is used to obtain the lens distortion of a camera at the desired focal length.
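As a concrete illustration of this step, the sketch below runs a Bouguet-style checkerboard calibration with OpenCV rather than the MATLAB toolbox cited above; the board geometry and file paths are assumptions, not values from the study.

```python
# Hedged sketch: checkerboard calibration and undistortion in OpenCV,
# standing in for Bouguet's MATLAB toolbox. Board size and file names
# are placeholders.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner-corner grid of the assumed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration/*.png"):  # shot at the test focal length
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic matrix K and lens distortion coefficients for this setting
_, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Each video frame is undistorted before features are extracted
frame = cv2.imread("frame_0001.png")
undistorted = cv2.undistort(frame, K, dist)
```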
There are a variety of approaches used to determine the scaling factor for converting pixels to physical distance. In [26], a pre-testing calibration method is demonstrated. This involves setting up the camera in the laboratory in an identical manner to that of the field test to be carried out; that is, the same monitoring distance, focal length, angle, etc. The camera is calibrated using the checkerboard pattern, and these variables are used to remove lens distortion and provide a scaling factor for the videos captured in the field trials. This is expressed by the following formula:

$$SF = \frac{d}{D} = \frac{f}{p \times Z} \quad \left[\frac{\text{pixel}}{\text{mm}}\right] \tag{1}$$

where $SF$ is the scaling factor ratio, $d$ is a distance on the image, $D$ is a world distance, $f$ is the focal length of the camera, $p$ is the unit length of the camera sensor (mm/pixel) and $Z$ is the distance from the camera to the monitoring location. The scaling factor can also be determined from:

$$SF = \frac{D_{Known}}{I_{Known}} \tag{2}$$

where $D_{Known}$ is the known physical length on the object surface and $I_{Known}$ is the corresponding pixel length on the image plane.
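A minimal numerical sketch of the two estimates follows. Note that Eq. (1) as written yields pixels per millimetre while Eq. (2) yields millimetres per pixel, so one convention is the reciprocal of the other; the example values below are illustrative, not the paper's calibration data.

```python
# Both scaling-factor estimates, with the unit conventions noted above.
# All numbers here are made up for illustration.

def scale_factor_intrinsic(f_mm: float, p_mm_per_px: float, z_mm: float) -> float:
    """Eq. (1): SF = f / (p * Z), in pixels per millimetre."""
    return f_mm / (p_mm_per_px * z_mm)

def scale_factor_correspondence(d_known_mm: float, i_known_px: float) -> float:
    """Eq. (2): millimetres per pixel from a known length on the
    structure and its measured length in the image."""
    return d_known_mm / i_known_px

# A 600 mm feature spanning 1015 px gives ~0.591 mm/px, the same order
# as the midspan ratio later reported in the field test (0.5911 mm/pixel).
print(scale_factor_correspondence(600.0, 1015.0))
```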


Fig. 4. Features identified using feature detector.

2) Feature Extraction: This is the process of extracting and detecting salient features from the images of the object to be tracked. Examples of these could be corners, rivets or natural decay in a bridge structure. Processing time can be minimized by only searching for features inside a Region of Interest (ROI). The process selected for use in the algorithm was SURF [27], a robust and computationally inexpensive extension of SIFT [28]. The key points provided by SURF are scale and rotation invariant and are detected using a Haar wavelet approximation of the blob detector based on the Hessian determinant (a blob being a region in an image that differs in properties, such as brightness or color, from surrounding regions). These approximations are used in combination with integral images (the sum of pixel values in the image) to encode the distribution of pixel intensity values in the neighborhood of the detected feature. The natural features detected in the laboratory tests are shown in Fig 4. These features consisted of irregularities in the surface of the beam that could be easily identified and tracked through the video.
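The sketch below shows this step with OpenCV's SURF implementation, which lives in the opencv-contrib `xfeatures2d` module and requires a build with the non-free algorithms enabled; the ROI coordinates and Hessian threshold are placeholders rather than the study's settings.

```python
# Hedged sketch: SURF keypoint detection restricted to an ROI mask.
# ROI coordinates and the Hessian threshold are illustrative only.
import cv2
import numpy as np

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

# Search only inside the region around the tracked element, which cuts
# processing time as described above.
x, y, w, h = 400, 300, 200, 150
mask = np.zeros_like(frame)
mask[y:y + h, x:x + w] = 255

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
keypoints = surf.detect(frame, mask)
points = cv2.KeyPoint_convert(keypoints)  # N x 2 float array for tracking
```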
3) Feature Tracking: On detection of the points, they must be tracked through subsequent frames to filter outliers and improve the dynamic estimation of displacement. Careful application of threshold values must be maintained during this process, as features may become occluded or vary during the progression of a video. The system in this research makes use of a Kanade-Lucas-Tomasi (KLT) tracker [29] to determine the movement of the detected features. This method takes the points detected by the feature extractor and uses them for initialization. The system removes outliers using the statistically robust M-estimator SAmple Consensus (MSAC) algorithm [30], a variant of the RANSAC algorithm. The MSAC algorithm scores inliers according to the fitness of the model and uses this, together with a user-specified re-projection error distance, to minimize the usage of outliers in the displacement calculation. Any features that do not meet these thresholds are rejected, with the inliers then tracked in the next video frame using the KLT algorithm. The displacement of the object is measured in pixels by calculating the relative movement between frames of the centroid of a matrix containing the extracted features. While it is possible to use SURF for feature tracking as well as extraction, the results gathered in preliminary trials were not as accurate or stable relative to the reference sensors as those obtained using SURF and KLT in tandem, with undesirable variance in the number of features detected and tracked throughout a video compared to the tandem SURF/KLT approach. Results from a laboratory investigation of this are shown in Fig 5 and Fig 6.

Fig. 5. Retained features comparison for SURF vs KLT-SURF.

Fig. 6. Accuracy comparison for SURF vs KLT-SURF.

The pixel movement is converted to engineering units using Eq. (2). This continues until all frames of the video have been processed.
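Continuing the previous sketch, the loop below tracks the SURF points with KLT optical flow and rejects outliers with a robust geometric fit. OpenCV exposes RANSAC rather than MSAC for this fit, so `cv2.RANSAC` is used here as the closest stand-in; the threshold, file names and scaling factor are assumptions.

```python
# Hedged sketch: KLT tracking of the SURF points with robust outlier
# rejection, then centroid displacement in mm via the Eq. (2) factor.
import cv2
import numpy as np

cap = cv2.VideoCapture("node4.mp4")  # placeholder recording
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
prev_pts = points.reshape(-1, 1, 2).astype(np.float32)  # from the SURF step

centroids = [prev_pts.reshape(-1, 2).mean(axis=0)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good_prev = prev_pts[status.flatten() == 1]
    good_next = next_pts[status.flatten() == 1]
    # Drop points inconsistent with the dominant motion (re-projection
    # error above 3 px); MSAC would score the inliers slightly differently.
    _, inliers = cv2.estimateAffinePartial2D(
        good_prev, good_next, method=cv2.RANSAC, ransacReprojThreshold=3.0)
    kept = good_next[inliers.flatten() == 1]
    centroids.append(kept.reshape(-1, 2).mean(axis=0))
    prev_gray, prev_pts = gray, kept.reshape(-1, 1, 2)

# Vertical displacement in mm using an assumed 0.5911 mm/pixel factor
disp_mm = (np.array(centroids)[:, 1] - centroids[0][1]) * 0.5911
```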
IV. EXPERIMENTAL PROGRAMME

The aim of the experimental work was to conduct a series of sequential tests to establish the accuracy of the timecode synchronization between the vision sensors, since perfect synchronization is required for a full characterization of the deflection pattern.

A. Test Series 1 – Timecode Testing

The system for synchronizing the vision sensors involves attaching the Syncbac accessory to each GoPro. The accessory embeds timecode data in each video, and this metadata is then used to synchronize the videos so that they start at the same time. This is accurate to frame level at 30 fps, as timecode data for synchronization is embedded in each frame. This was tested using the website currentmillis.com, which displays the time elapsed in milliseconds since 1/1/1970. The cameras were placed 20 m apart and set to record this website displayed on separate computers.


When the recordings were analyzed and combined, it was confirmed that each frame matched successfully. An example frame is shown in Fig 7. The test was carried out over a span of 45 minutes and the synchronization of the cameras was consistent throughout the video analysis, with perfect synchronization still obtained after the 45 min.

Fig. 7. Readings from initial time synchronization trial, confirming accurate recording of UNIX time.

B. Test Series 2 – Accuracy of Synchronized Vision Sensors for Displacement Measurement

The accuracy of the hardware system and associated post processing techniques was developed and validated through a laboratory experimental program. This involved tracking the displacement of a simply supported 178mm×102mm×19mm Universal Beam with a clear span of 5.3 m. A centrally applied static load of 3255 N was applied to induce displacement along the span of the beam. The beam was split into 9 elements of equal length along the span to situate the loading and sensing points. The nodes connecting the elements were numbered 1-10 consecutively from left to right; hence the load was applied midway between nodes 5 and 6.

1) Sensor Configuration and Data Acquisition: Linear Variable Displacement Transducers (LVDTs) were used to validate the camera measurements at the monitoring locations used by the camera along the span. The LVDT sensors were configured to measure static displacements at each of the nodes monitored by the camera. A Datataker DT800 logger was used to acquire the readings at discrete times during the test, corresponding to times when the beam was loaded and unloaded. A fiber optic sensor (FOS) was also used at a single node (Node 4) to validate the accuracy of the deflections determined from the camera readings. This sensor was used because it provides a high level of accuracy compared to the LVDT. The FOS was positioned to record continuous displacement measurements at a rate of 25 Hz. The wavelength shift data associated with the FOS was recorded and converted to displacement using an approach described previously [31], where a Fabry-Perot filter was used in tandem with a photodiode.

Two GoPros, set to record continuously during loading and unloading at a frame rate of 25 fps, were used to monitor the beam. The GoPros were modified and fitted with C-mount and F-mount lenses to reduce lens distortion and provide greater flexibility in terms of monitoring distance. One camera was set to monitor node 3 and the second to monitor node 4. The readings taken at each node were converted from pixels to mm using the scaling factor of Equation (2).

The results from the synchronization trial for nodes 3 and 4 are shown in Fig 8. The results show good agreement between the camera system and the LVDT/FOS sensors, with our vision system outperforming the LVDT sensors at the point where the FOS was available.

Fig. 8. Vision, FOS and LVDT sensor displacement results for test series 2.

To allow for accurate comparison between the displacement results calculated from the vision-based sensors and the validation sensors, the root mean square error (RMSE) is presented in Table I. The correlation coefficient between the FOS and the vision results at Node 4 is 0.894 for this test.

TABLE I. RESULTS FROM GOPRO VS FOS VS LVDT DISPLACEMENT MONITORING

The results confirm successful synchronization of the two cameras, providing confidence in this method for future deployment. It is believed by the authors that the higher RMSE in the LVDT comparison at Node 4 is due to the lower resolution of the LVDT versus the FOS. The low error results from the RMSE comparison validate the use of the GoPro camera in laboratory trials.
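For reference, the two agreement metrics reported here and in the following trials can be computed as below; this is a minimal sketch assuming the vision and reference series are already synchronized and sampled at a common rate.

```python
# Minimal sketch of the agreement metrics used in Tables I and II:
# RMSE and the correlation coefficient between a vision-based series
# and a reference (FOS or LVDT) series of equal length.
import numpy as np

def rmse(vision: np.ndarray, reference: np.ndarray) -> float:
    return float(np.sqrt(np.mean((vision - reference) ** 2)))

def correlation(vision: np.ndarray, reference: np.ndarray) -> float:
    return float(np.corrcoef(vision, reference)[0, 1])
```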
2) Scaling Factor Determination Trial: A supplementary laboratory trial was carried out comparing the dimension correspondences (DC), Catbas-Khuc (CK) and Bouguet (B) methods of scaling factor determination; the results are shown in Fig 9. The correlation coefficient for each method was 1 when compared to the FOS; the Root Mean Square Error (RMSE) compared to the FOS is shown in Table II. The superior accuracy gained from the use of the dimension correspondences method (Equation (2)) resulted in its selection for use in this study.


Fig. 9. Scaling factor determination trial.

TABLE II. RESULTS FROM SCALING FACTOR DETERMINATION TRIAL

3) Multiple Cameras at Single Monitoring Location Trial: An additional trial was carried out in the laboratory in which two GoPro cameras, set to 25 fps, were used to monitor the movement of a single target attached to a displacement testing apparatus while it was displaced manually. The cameras were placed 4.3 m from the monitoring location, and pixel-to-mm conversion was carried out using the dimension correspondences method of Eq. (2). Verification of the measured displacement was provided by the FOS, recording at 25 Hz. The setup for this trial is shown in Fig 10, with the results shown in Fig 11 and summarized in Table III. This trial provided confidence in the capabilities of the system for measuring displacement from multiple synchronized cameras.

Fig. 10. Setup of displacement apparatus for multiple cameras at single monitoring location trial.

Fig. 11. Results of multiple cameras at single node trial.

TABLE III. RESULTS: MULTIPLE CAMERAS AT A SINGLE MONITORING LOCATION

4) Stationary Location Accuracy Trial: An additional metric for determining the quality of a displacement monitoring system is stability. To test this, a supplementary trial was carried out in which one of the modified action cameras recorded images of a stationary portion of the testing setup, to determine whether the algorithm experienced any drift away from true values over a longer monitoring time than in previous tests. The results are shown in Fig 12. As can be seen from the results, there is a drift of only 2 pixels in a 4K video over the course of the trial in the Y-axis, with a drift of 0.5 pixels in the X-axis.

C. Test Series 3 – Validation of Different Types of Vision-Based Sensors

The purpose of this test series was to validate results using other camera types, thereby ensuring greater monitoring flexibility while also justifying the use of the GoPro as the choice of sensor. For this, the Blink Hub app provided with the Syncbac was used for camera control.


Fig. 12. Results from stability trial.

A WiFi-enabled smart device can be paired with the Pulse device and used to display the timecode in use by the devices. This reference can then be displayed in the field of view of any camera type and used as a means of synchronizing videos. The centrally loaded beam from test series 2 was loaded and unloaded as before, and a selection of nodes was monitored as detailed below. For this test, a Nikon D810 camera with an 80-400mm zoom lens and a resolution of 1080p was used in addition to the Syncbac-enabled GoPro.

1) Sensor Configuration and Data Acquisition: For this test series, the reference timecode was obtained by manually reading the images (a text recognition algorithm could be used to automate this process in future). In this case the FOS was used as the sole means of validating the vision-based displacement. The cameras were both placed at a monitoring distance of 4.2 m and targeted the monitoring location at Node 7. Lenses and zoom levels were chosen to be as similar as possible, with small differences due to availability and manufacturers, yielding the pixel-to-millimeter conversion factors shown in Table IV. An image from the D810 camera used in the test is shown in Fig 13.

TABLE IV. MONITORING LOCATIONS FOR TEST SERIES 3

Fig. 13. D810 image from test 3.

The results are shown in Fig 14 and Table V. The data confirms that there is good agreement between the camera systems and the FOS.

Fig. 14. Results from test 3 run 1.

TABLE V. RESULTS: TEST SERIES 3 AT NODE 7

This test has proven the effectiveness of the solution with multiple camera types, which increases the flexibility of our system to different sensors according to the user's preferences and availability. The superior results obtained with the modified action camera system (approx. cost ~£500) vs the D810 (approx. cost ~£2500) also validate the use of the low-cost action camera as a means of gathering accurate deflection data.

V. FIELD TESTING OF SYSTEM

On completion of the algorithm development and laboratory testing, a field test was carried out to determine the system's suitability for measuring bridge displacement in real scenarios and for performing a more complex analysis, such as accurately identifying vehicles and identifying the pattern of displacement along the span of the bridge. A single-lane 30 m span steel truss bridge, illustrated in Fig 15, was selected as a suitable structure to test this system. 'Verners' Bridge is on the Tamnamore Road in Co. Tyrone, Northern Ireland. This road provides access to a busy industrial estate and is therefore frequently used by heavy goods vehicles (HGVs). Additionally, traffic on the bridge is controlled by a traffic light system which only allows a single lane of traffic in one direction at any one time, thus removing the complication of multiple simultaneous events on the bridge (a situation being addressed in ongoing research). Two GoPro cameras were used in this field test to measure displacement at mid- and ¾-span, with a third GoPro used to identify the vehicles causing the deflection.


Fig. 15. Side elevation of Verners Bridge. (Image taken from location of cameras monitoring deflection.)

A. Sensor Configuration and Data Interpretation

Bridge displacements were monitored using two GoPros mounted on a single tripod on the north-west river bank at a monitoring distance of 22 m. The cameras were adapted as described in Section III. The focal length of both lenses was set to 135mm, with a wide field-of-view setting selected on the GoPros. Footage was captured at a framerate of 25 fps. Natural image features at midspan (Fig 16) and ¾-span (Fig 17) were targeted for monitoring. A pixel-mm ratio of 0.5911 mm/pixel was defined at midspan, with a ratio of 0.6891 mm/pixel determined at ¾-span, allowing for sub-millimeter measurement of displacement on the bridge span at both locations. The lens used for this study can lock the focal length in place, which meant the calibration parameters were preserved for removing lens distortion after calibration. The camera-to-target distance and angle were measured using a laser distometer, and the calibration step was performed in identical circumstances (camera distance, focal length, etc.) to the field trial, on the premises of QUB.

Fig. 16. Side elevation of Verners Bridge at midspan showing image features. 1) is the distance in engineering units, with 2) the distance in pixels on captured footage. (Image taken from location of camera monitoring deflection.)

Fig. 17. Side elevation of Verners Bridge at ¾-span showing image features. 1) is the distance in engineering units, with 2) the distance in pixels on captured footage. (Image taken from location of camera monitoring deflection.)

Fig 18, Fig 19 and Fig 20 provide a sample of the data collected at this site. This confirms accurate camera synchronization and clear identification of displacements at both the mid- and ¾-span.


Fig. 18. Vehicle 1 crossing Verners Bridge & results.

Fig. 19. Vehicle 2 crossing Verners Bridge & results.

Fig. 20. Vehicle 3 crossing Verners Bridge & results.

The additional level of noise in the results of Fig 19 is due to multiple cars following close behind the vehicle being tracked.

As previously described, to assess the structural condition of the bridge it is important to relate the measured displacements to the corresponding imposed traffic loading. Therefore, the accurate synchronization of the third camera, allowing for successful traffic identification, was a key feature of this system. In each case the vehicle was easily identified, and an image has been provided along with the displacement data. In this initial field trial, the vehicles were manually identified. Work is currently in progress to develop a deep learning system for autonomous vehicle classification. This system would be able to identify and locate the axle spacings of vehicles crossing the bridge, allowing for calculation of local and global responses of the bridge to crossing vehicles.

VI. DISCUSSION AND CONCLUSIONS

A sequential series of tests has been carried out to validate our fully synchronized wireless vision sensor monitoring system. Each of the tests was designed to build upon the results of previous work, and facilitated the development of an accurate algorithm for determining displacements from video footage.

A review of the existing literature highlights the need for a precise, fully wireless monitoring system which could be rapidly deployed on a bridge of medium to long span. The system presented overcomes previous limitations in terms of cost and power consumption, as well as in the size of the monitored infrastructure, due to the use of multiple vision sensors. The results from the initial synchronization trial are shown to be repeatable in the field, and successful millisecond timecode synchronization was consistently obtained. Test Series 2 offered a robust testing programme which provided confidence in the displacement calculation algorithm used in the post processing of the vision sensor data. In comparison to a FOS, displacement from the vision sensor repeatedly correlated with loading pattern and displacement magnitude. Significant advantages of the vision sensor over the FOS include, but are not limited to, the contactless nature of measurement, no requirement for a power supply on site, and cost: a single FOS costs five times that of a single vision sensor.

Test Series 3 confirms that the system can be adapted to include multiple camera types. This key feature would be particularly useful for incorporating existing camera networks into the SHM system presented here.

In summary, the work carried out in the experimental trials gave confidence in the accuracy of the system. This allowed for rapid deployment on site and minimized the equipment needed for site measurement.

ACKNOWLEDGMENTS

The authors wish to express their gratitude for the financial support received from Invest Northern Ireland, the USA National Science Foundation and Science Foundation Ireland towards this investigation under the US-Ireland Partnership Scheme. They also gratefully acknowledge Transport NI and the Department of Infrastructure NI for their support.

REFERENCES

[1] Report Card for America's Infrastructure, Amer. Soc. Civil Eng., Reston, VA, USA, 2017, pp. 1–74. [Online]. Available: https://www.infrastructurereportcard.org/cat-item/bridges/


[2] RAC Foundation. Council Road Bridge Maintenance in Great Britain. Accessed: Mar. 14, 2018. [Online]. Available: https://www.racfoundation.org/media-centre/road-bridge-maintenance-2400-council-bridges-sub-standard-press-release
[3] B. A. Graybeal, B. M. Phares, D. D. Rolander, M. Moore, and G. Washer, "Visual inspection of highway bridges," J. Nondestruct. Eval., vol. 21, no. 3, pp. 67–83, 2002, doi: 10.1023/A:1022508121821.
[4] J. Bennetts, P. Vardanega, C. Taylor, and S. Denton, "Bridge data—What do we collect and how do we use it?" in Proc. Int. Conf. Smart Infrastruct. Construct., 2016, pp. 27–29, doi: 10.1680/tfitsi.61279.531.
[5] K.-T. Park, S.-H. Kim, H.-S. Park, and K.-W. Lee, "The determination of bridge displacement using measured acceleration," Eng. Struct., vol. 27, no. 3, pp. 371–378, Feb. 2005, doi: 10.1016/J.ENGSTRUCT.2004.10.013.
[6] S. B. Im, S. Hurlebaus, and Y. J. Kang, "Summary review of GPS technology for structural health monitoring," J. Struct. Eng., vol. 139, no. 10, pp. 1653–1664, 2013, doi: 10.1061/(ASCE)ST.1943-541X.0000475.
[7] M. Q. Feng, Y. Fukuda, D. Feng, and M. Mizuta, "Nontarget vision sensor for remote measurement of bridge dynamic response," J. Bridge Eng., vol. 20, no. 12, p. 4015023, 2015, doi: 10.1061/(ASCE)BE.1943-5592.0000747.
[8] O. Celik, C.-Z. Dong, and F. N. Catbas, "A computer vision approach for the load time history estimation of lively individuals and crowds," Comput. Struct., vol. 200, pp. 32–52, Apr. 2018, doi: 10.1016/J.COMPSTRUC.2018.02.001.
[9] M.-H. Shih and W.-P. Sung, "Developing dynamic digital image techniques with continuous parameters to detect structural damage," Sci. World J., vol. 2013, Jun. 2013, Art. no. 453468, doi: 10.1155/2013/453468.
[10] J.-W. Park, J.-J. Lee, H.-J. Jung, and H. Myung, "Vision-based displacement measurement method for high-rise building structures using partitioning approach," NDT&E Int., vol. 43, no. 7, pp. 642–647, 2010, doi: 10.1016/j.ndteint.2010.06.009.
[11] S.-W. Kim and N.-S. Kim, "Multi-point displacement response measurement of civil infrastructures using digital image processing," Procedia Eng., vol. 14, pp. 195–203, Jan. 2011, doi: 10.1016/j.proeng.2011.07.023.
[12] D. Feng and M. Q. Feng, "Experimental validation of cost-effective vision-based structural health monitoring," Mech. Syst. Signal Process., vol. 88, pp. 199–211, May 2017, doi: 10.1016/j.ymssp.2016.11.021.
[13] Y. Fukuda, M. Q. Feng, and M. Shinozuka, "Cost-effective vision-based system for monitoring dynamic response of civil engineering structures," Struct. Control Health Monit., vol. 17, no. 8, pp. 918–936, 2010, doi: 10.1002/stc.360.
[14] H.-N. Ho, J.-H. Lee, Y.-S. Park, and J.-J. Lee, "A synchronized multipoint vision-based system for displacement measurement of civil infrastructures," Sci. World J., vol. 2012, pp. 1–9, Aug. 2012, doi: 10.1100/2012/519146.
[15] H. Yoon, H. Elanwar, H. Choi, M. Golparvar-Fard, and B. F. Spencer, "Target-free approach for vision-based structural system identification using consumer-grade cameras," Struct. Control Health Monit., vol. 23, no. 12, pp. 1405–1416, 2016, doi: 10.1002/stc.1850.
[16] GoPro. (2016). GoPro—Refurbished HERO4 Black 4K Ultra HD Waterproof Camera. Accessed: Jan. 19, 2018. [Online]. Available: https://shop.gopro.com/EMEA/refurbished/refurbished-hero4-black/CHDNH-B11.html
[17] Back-Bone. (2016). Ribcage AIR HERO4 Mod Kit Bundle | BACK-BONE. Accessed: Jan. 19, 2018. [Online]. Available: https://www.back-bone.ca/product/ribcage-air-hero4-mod-kit/
[18] Computar. (2016). Product Details for Computar Lens Model No. E5Z2518C-MP. Accessed: Jan. 19, 2018. [Online]. Available: https://computar.com/product/1115/E5Z2518C-MP
[19] GoPro App—Desktop + Mobile—Capture, Create + Share. Accessed: Jan. 19, 2018. [Online]. Available: https://shop.gopro.com/EMEA/softwareandapp/
[20] Timecode Systems. SyncBac Pro | Timecode Systems. Accessed: Mar. 9, 2018. [Online]. Available: https://www.timecodesystems.com/syncbac-pro/
[21] Timecode Systems. Pulse—Sync & Control | Timecode Systems. Accessed: Mar. 9, 2018. [Online]. Available: https://www.timecodesystems.com/products-home/pulse/
[22] Timecode Systems. BLINK Hub—Free Sync & Control App | Timecode Systems. Accessed: Mar. 9, 2018. [Online]. Available: https://www.timecodesystems.com/products-home/blink-hub-timecode-app/
[23] V. Argyriou, J. M. Del Rincón, B. Villarini, and A. Roche, Image, Video & 3D Data Registration: Medical, Satellite & Video Processing Applications With Quality Metrics. Chichester, U.K.: Wiley, 2015, doi: 10.1002/9781118702451.
[24] G. Hong and Y. Zhang, "Combination of feature-based and area-based image registration technique for high resolution remote sensing image," in Proc. IEEE Int. Geosci. Remote Sens. Symp. (IGARSS), Jul. 2007, pp. 377–380, doi: 10.1109/IGARSS.2007.4422809.
[25] J. Bouguet. (2015). Camera Calibration Toolbox for MATLAB. [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib_doc/
[26] T. Khuc and F. N. Catbas, "Computer vision-based displacement and vibration monitoring without using physical target on structures," Struct. Infrastruct. Eng., vol. 13, no. 4, pp. 505–516, 2017, doi: 10.1080/15732479.2016.1164729.
[27] H. Bay, T. Tuytelaars, and L. Van Gool. SURF: Speeded Up Robust Features. Accessed: Dec. 13, 2017. [Online]. Available: http://www.vision.ee.ethz.ch/~surf/eccv06.pdf
[28] D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. 7th IEEE Int. Conf. Comput. Vis., vol. 2, Sep. 1999, pp. 1150–1157, doi: 10.1109/ICCV.1999.790410.
[29] C. Tomasi and T. Kanade, "Detection and tracking of point features," School Comput. Sci., Carnegie Mellon Univ., Pittsburgh, PA, USA, Tech. Rep. 91-132, 1991. [Online]. Available: http://www.lira.dist.unige.it/teaching/SINA/slides-current/tomasi-kanade-techreport-1991.pdf
[30] P. H. S. Torr and A. Zisserman, "MLESAC: A new robust estimator with application to estimating image geometry," Comput. Vis. Image Understand., vol. 78, no. 1, pp. 138–156, 2000, doi: 10.1006/cviu.1999.0832.
[31] M. Lydon et al., "Development of a bridge weigh-in-motion sensor: Performance comparison using fiber optic and electric resistance strain sensor systems," IEEE Sensors J., vol. 14, no. 12, pp. 4284–4296, Dec. 2014, doi: 10.1109/JSEN.2014.2332874.

Darragh Lydon received the degree in computer games development from the University of Ulster in 2011. He is currently pursuing the Ph.D. degree in bridge monitoring using computer vision methods at Queen's University Belfast.

Myra Lydon received the Ph.D. degree in bridge weigh-in-motion and structural health monitoring from Queen's University Belfast. She is currently a Post-Doctoral Researcher at Queen's University Belfast. She has experience in structural health monitoring and commercial design. She recently received the RAEng Fellowship Award for Excellence in Engineering.

Jesús Martínez del Rincón received the B.Sc. degree in telecommunication engineering and the Ph.D. degree in computer vision from the University of Zaragoza, Spain, in 2003 and 2008, respectively. He is currently a Lecturer at Queen's University Belfast, U.K. His research interests include video surveillance, human pose estimation, and machine learning.


Susan E. Taylor is currently a Professor of Structural Engineering and the Dean of Research at Queen's University Belfast, Belfast, U.K.

Eugene O'Brien received the Ph.D. degree. He was with industry for five years before becoming a Lecturer in 1990 at Trinity College Dublin. Since 1998, he has been a Full Professor of Civil Engineering at University College Dublin. As well as his academic work, he is involved in the commercialization of research as a Director of a small consulting firm. He has published over 100 journal papers and two books. He was the Founding President of the International Society for Weigh-in-Motion.

Desmond Robinson is currently a Senior Lecturer in Civil Engineering with expertise in structural health monitoring and numerical modeling using nonlinear finite-element analysis.

F. Necati Catbas is an Educator and a Researcher. He is currently serving as a Full Professor at the University of Central Florida. He teaches undergraduate and graduate level courses in the area of structural engineering, bridge engineering, structural dynamics, finite-element analysis, structural health monitoring, and advanced engineering topics. His research interests span a variety of topics, including the development, integration, and implementation of sensing, information, modeling, and simulation technologies, parametric and nonparametric structural identification, and image-based technologies for structures, such as bridges, buildings, aerospace structures and components, lifelines, and stadium structures. Dr. Catbas is the Founding Director of Civil Infrastructure Technologies for Resilience and Safety. He is an elected Fellow of the American Society of Civil Engineers and the Structural Engineering Institute.
