3 - 2018 - MSC - Development of Framework For Automated Progress Monitoring of Construction Projects
A Project Work
BACHELOR OF TECHNOLOGY
in
CIVIL ENGINEERING
&
MASTER OF TECHNOLOGY
in
INFRASTRUCTURAL CIVIL ENGINEERING
by
AKASH PUSHKAR
(CE13B070)
I take this opportunity to express my sincere and profound sense of gratitude to my project
guide Dr. Koshy Varghese for giving me valuable guidance and encouragement during this
work. It is through his guidance that the project has gained structure. His foresight and
expertise have helped me make the right choices in the project. I am thoroughly indebted to
him for the amount of time he has spent in reviewing my analyses and updates. I consider
it a privilege to have worked under his guidance, and I feel it a great pleasure and honor to
be associated with him.
I am personally indebted to Dr. Koshy Varghese for inspiring me, for teaching me more than
academics, and for giving me perspective on lateral thinking. He has not only helped me to learn
the technical aspects of the project but also the way of doing research. It is because of the
pleasant experience of working with him that I am motivated to pursue research as my
career path.
I owe my sincere thanks to the faculty of the BTCM Division for their encouragement. The
best was made available for successfully completing the project work. I am thankful to Prof.
K Ramamurthy, Head of Department and Prof. Shiva Nagendra, EWRE Division for
providing me facility for conducting experiments. The project would not have been possible
without their support. I am thankful to Prof. Radhakrishna G. Pillai for support in setting up
the laboratory in the EWRE division. I would also like to extend my thanks to the staff of the BTCM
Division.
I would like to thank Madhu, who helped in driving the project. I thank my friends Akash
Garg, Manish, Shivam, Divyansh, Ravi, Shubham, Shikhar, Manna and Vijayalaxmi for their
constant support and help, for being there for me in good and bad times, and for being more
like a family to me. Finally, I thank my family; without their love, blessings and constant
support I would not have been able to reach this stage of my career.
- Akash Pushkar
ABSTRACT
Automated progress monitoring of construction projects has gained considerable interest
among researchers in the field of civil engineering. It is done by comparing as-built and
as-planned 3D point cloud models. Advancements in the fields of photogrammetry and computer
vision have made 3D reconstruction of buildings easy and affordable. But the high variability
of construction sites, in terms of lighting conditions, material appearance, etc., and error-
prone data collection techniques tend to make the reconstructed 3D model an erroneous and
incorrect representation of the actual site. This eventually affects the result of the progress
estimation step. To overcome these limitations, this study presents a novel approach that
adds a processing step to the reconstruction as compared to the traditional approach. In the proposed
method, the first step is to obtain an as-built 3D model of the construction site using 3D
scanning techniques or photogrammetry in the form of point cloud data. In the second step,
the model is passed through a pre-trained machine learning binary classification model for
identifying and removing erroneous data points in the captured point cloud. Erroneous points
are removed by identifying the correct building points. This processed as-built model is
compared with an as-planned model for progress estimation. Based on this, experiments are
carried out using a commercially available stereo vision camera for 3D reconstruction.
Moreover, this study works towards standardizing the methodology of data collection at a
construction site.
LIST OF FIGURES
Figure 1 Overview of integrated RFID and TLS system (Valero et al. 2012) ............................ 29
Figure 2 Principle of Laser scanning (Abellán et al. 2009) ....................................................... 31
Figure 3 3D reconstruction from 2D images ............................................................................ 33
Figure 4 Point cloud model using laser scan and photogrammetry (Shanbari et al. 2016) ....34
Figure 5 (a) Local representation (surface normal) (Rusinkiewicz 2004)(b) Global
representation (spherical harmonics) (Kazhdan et al. 2003) .................................................. 39
Figure 6 (a) Input mesh (b) library of objects to be recognized from (c) objects recognized . 40
Figure 7 Recognition using context (a) input point cloud, (b) Semantic network (c)
semantically segmented point cloud (d) object recognized point cloud (Tang et al. 2010) ... 42
Figure 8 Feature extracted from the point cloud (a) Normals (b) fast feature point histogram
descriptor ................................................................................................................................. 51
Figure 9 Linear SVM for binary classification ........................................................................... 54
Figure 10 SVM for linearly inseparable data............................................................................ 54
Figure 11 Proposed Methodology ........................................................................................... 58
Figure 12 Final as-planned 3D model ...................................................................................... 62
Figure 13 Data collection semi-automatic robot ..................................................................... 64
Figure 14 Zed camera used for data collection ....................................................................... 66
Figure 15 Different frames of video showing data collection ................................................. 70
Figure 16 As-planned model showing sections at which light intensity is measured ............. 71
Figure 17 various wall configuration used for validation of proposed methodology
representing various stage of construction of masonry wall .................................................. 73
Figure 18 Hue and Saturation plot for two datasets, collected under different conditions ... 76
Figure 19 Impact of light conditions on 3D reconstruction (a,b) Low light (c) High light........ 77
Figure 20 Variation of dimensional error with lighting condition and distance of object ...... 78
Figure 21 Sample as-built model configuration used as training dataset ............................... 78
Figure 22 Performance evaluation of SVM classifier ............................................................... 79
Figure 23 Performance evaluation of SVM classifier for different models – representing
various stages of masonry wall configuration ......................................................................... 80
Figure 24 Performance evaluation of the proposed methodology (built-up area percentage,
as-built area as reference) ....................................................................................................... 81
Figure 25 Performance evaluation of the proposed methodology (built-up area percentage,
as-planned area as reference) ................................................................................................. 82
Figure 26 Variation of percentage build for various methods and different configurations .. 84
Figure 27 Missing points in the data collected (a) Holes or pocket of no points in the as-built
model (b) Sides of the columns not registered in the 3D reconstruction ............................... 85
LIST OF TABLES
Table 1 Summary of present-day data acquisition technologies ............................................ 35
Table 2 List of experimental variables ..................................................................................... 67
Table 3 Variables considered and total iterations for each ..................................................... 70
Table 4 Value of the light intensity (lux) on the wall at different sections ............................. 72
Table 5 Summary of the lighting conditions (All values are in lux).......................................... 72
Table 6 Percentage build for various methods and different configurations ......................... 83
ABBREVIATION AND KEYWORDS
DoF Degrees of freedom
CHAPTER 1. INTRODUCTION
The present-day construction industry is highly dynamic. Owing to the high variability in
technology and changes in user requirements, construction projects have become complex and
difficult. A number of variables have been identified as critical for the success of a
construction project. These factors have been divided into five different categories: human-
related factors, project-related factors, project procedures, project management actions and
the external environment.
Different researchers have identified project management action as a key factor for the
success of a project (Jaselskis and Ashley 1991). Research suggests that better management
tools can help to improve management by assisting infrastructure managers. Project
management actions include decision-making effectiveness, monitoring, project organization
structure, and planning and scheduling. Among these, monitoring has evolved as a critical
success factor for the successful execution and completion of a construction project.
1.1 BACKGROUND
According to the Project Management Body of Knowledge, monitoring and control “consists
of those processes required to track, review, and orchestrate the progress and performance of
a project; identify any areas in which changes to the plan are required; and initiate the
corresponding changes”. This involves inspection of the progress of the work and its
comparison against the plan, traditionally involving visual inspection and human judgment.
Construction companies consider the monitoring of the work in progress as one of the most
tedious and challenging problems faced by project teams. A significant share of work hours
is spent in measuring, recording and analyzing the progress of work. These high
human dependencies make existing monitoring methods slow and inaccurate, reducing the
effectiveness of control over schedule and cost. In building construction, the average duration
of any construction activity typically ranges in days. However, the average frequency of
manual data collection and reporting is monthly. These poor monitoring techniques are
regarded as one of the reasons for cost and time overruns in construction projects (Zhang et
al. 2009). Thus, the importance of accurate and efficient progress monitoring has been
reiterated by several researchers. The time taken for identification of inconsistency between
the as-built and as-planned model is proportional to the cost overrun and the increased
difficulty in implementing corrective measures.
In the past decade, numerous construction companies and academicians have started
implementing digital imaging and photography for assisting the task of visual inspection
(Tsai et al. 2007). The onsite collection of images enables inspection from the confines of the
office, thereby reducing the need for frequent on-site inspections. Although digital images
help in project monitoring, most of the interpretation from these images is left to the project
managers. Thus, there is a need to develop automated techniques for analysis and
interpretation of the collected visual data.
Computer vision is the field of study that aims to provide computers with the
functionality of human vision, thereby identifying and interpreting different components
captured in a digital image. Similarly, another relevant field of study is photogrammetry,
which involves reconstruction of 3D models from 2D images. These two fields together can
help to automate the task of visual inspection. It should be noted here that the applications of
computer vision and photogrammetry techniques are limited to visual tasks only, and thus
restricted to visually evident stages of construction. This is because of the nature of the
underlying data that is used in both the aforementioned fields of study. Despite the
above-mentioned limitations, these fields of study can be used in partially automating the
process of progress monitoring, increasing the efficiency of infrastructure project managers.
Recent advancements in the field of reality capture technologies, like 3D imaging, laser
scanning, in-situ sensing equipment, onboard instrumentation and electronic tagging, have
made data acquisition possible for automated progress estimation. Some construction
companies have begun to develop and implement technology for automated data collection,
such as RFID tags, laser scanners, and audio and video technologies (Navon and Sacks 2007).
Cloud-based computing, machine learning and computer vision have made analysis of the
acquired data efficient. While the impacts of these advancements are compelling, numerous
challenges continue to persist (De Marco et al. 2009)(Lee et al. 2016). These challenges
prevent them from maturing into technologies that could be deployed to an on-going
construction site for monitoring without human intervention. It can be alleged that there are
no practices which offer fully automated progress monitoring. Nevertheless, several
technologies have been tested by researchers and practitioners to enable better monitoring.
One noteworthy example is Building Information Modelling (BIM). In the last decade, many
commercially available inspection software packages have been developed, such as Autodesk
BIM 360, xBIM, Field 3D, LATISTA, etc. These packages are very helpful from the point of
view of document management, but inspection is still manual, requiring the infrastructure
manager to navigate around the BIM model while inspecting the construction visually at the
same time. Thus, automated progress monitoring systems are the need of the hour.
Automated progress monitoring can be divided into four steps: (1) data acquisition, which is
capturing a digital representation of as-built scenes; (2) information retrieval, which refers to
extraction of useful information from the data collected without loss of any information
required for accurate progress estimation; (3) progress estimation, which is a comparison
between the as-built model and the as-planned model in order to determine the state of
progress; and (4) reporting, which is presenting the estimated progress to the decision-makers.
The goal of the study is to improve the result of the progress estimate of a construction
project by refining the reconstructed as-built model before comparison. The study is done
considering the high variability in site conditions and data collection techniques; the method
should be robust to these variations for any given site. In addition to this, the study also aims
at developing a methodology for effective and efficient data collection from a construction
site, thereby standardizing the data collection process.
It should be noted that the 3D reconstructed point cloud model is obtained using a
commercially available stereo vision camera. The process of 3D registration of images to
produce a point cloud is known to be error-prone, producing erroneous points with no
correspondence in the actual site (Fathi and Brilakis 2016)(McCoy et al. 2014). Thus, data
pre-processing becomes of paramount importance, involving outlier removal and noise
filtering. But these pre-processing methods are preliminary in nature, and erroneous points
continue to exist in the point cloud data. Therefore, there is a need to identify these error
points.
In this study, we work to improve the results of the progress estimate of construction,
calculated by comparing the as-built model, obtained using a commercially available stereo
vision camera, and the as-planned model. The research employs machine learning techniques
for processing and refining the 3D point cloud model before the progress estimation is made.
The processing involves detection of the masonry followed by its distinction from erroneous
points. This is done to obtain a more accurate representation of the captured scene as
compared to the raw output of 3D reconstruction. Subsequently, this processed as-built model
is used for progress estimation by comparing it with the as-planned model.
This research proposes a novel method to classify between normal and erroneous points
using machine learning techniques. The “normal” class represents points which correspond
to constructed parts in the 3D point cloud model, whereas the “erroneous” class represents
points which have no correspondence or correspond to parts other than constructed elements.
In this method, a supervised binary classifier is built by training it over the collected data.
This trained classifier is used for identifying masonry points and thereby removing erroneous
points from the point cloud, eventually producing a better progress estimate.
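The classification step above can be sketched with scikit-learn. The synthetic 4-D feature vectors, the two cluster locations and the RBF-kernel choice are illustrative assumptions standing in for the hand-labelled point-cloud features actually used in the study:

```python
# Sketch of the normal-vs-erroneous binary classifier; features and labels
# here are synthetic stand-ins, not the thesis dataset.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical 4-D feature vectors: "normal" masonry points cluster in one
# region of feature space, "erroneous" points in another.
normal = rng.normal(loc=0.0, scale=0.5, size=(200, 4))
erroneous = rng.normal(loc=2.0, scale=0.5, size=(200, 4))
X = np.vstack([normal, erroneous])
y = np.array([0] * 200 + [1] * 200)          # 0 = normal, 1 = erroneous

clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # supervised binary classifier
clf.fit(X, y)

# Keep only points predicted "normal" -- the cleaned as-built cloud.
pred = clf.predict(X)
cleaned = X[pred == 0]
print(cleaned.shape)
```

In practice the classifier would be trained once on labelled data from sample configurations, then applied to every new as-built point cloud before the progress comparison.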
The study is conducted for the masonry activity of construction, though it can be easily
extended to other construction activities. This extension is limited to activities that are
visually evident. The data, that is, 2D images or 3D point clouds, used for analysis is
obtained using imaging-based devices. Ideally, the developed system should perform
independently of the source or technology used for collection of data. Thus, the performance
of the system should be the same as that in the case of imaging devices when other devices,
like laser scanners, are used. Since laser scanners are very costly and require special expertise
to operate, we have confined this study to 2D images (with depth maps) and photogrammetric
point cloud data.
The thesis is organized into seven chapters, each of which is further divided into sections. A
brief explanation of the various chapters in this thesis is given below.
· Chapter 1 Introduction
This chapter establishes the importance of construction progress monitoring and briefly
explains the aim of the study.
· Chapter 2 Literature Review
This chapter reviews literature on data acquisition, 3D reconstruction from a stereo pair of
2D images and data retrieval from 3D point clouds. It also elaborates on Scan-to-BIM
registration and the comparison of two point clouds. Finally, it summarizes the research gaps
established in the literature.
· Chapter 3 Problem Statement and Methodology
This chapter discusses the aims and objectives of the study. It further discusses the
methodology proposed in order to achieve these aims and objectives. Since this is a very new
area of research, the chapter also establishes the problem statement being tackled in the
study.
· Chapter 4 Implementation
This chapter deals with the implementation of the proposed framework using a laboratory
testbed setup. The chapter is divided into sections, the first of which covers design.
· Chapter 5 Results and Discussion
This chapter states the various results obtained in the study and their implications. It also
discusses the analytical results of the performance of the proposed methodology. Besides
this, various insights drawn from the data are discussed in detail in the following sub-sections.
The results are divided into three categories.
· Chapter 6 Summary and Conclusions
This chapter summarizes the study and the results of the proposed methodology. It
documents the conclusions drawn from the study and the contributions to the field of study.
Finally, future scope and work are discussed and various possible research paths are
identified.
CHAPTER 2. LITERATURE REVIEW
2.1 INTRODUCTION
This chapter reviews the literature to build on the understanding of the concepts developed in
Chapter 1. In the following sections, the various technologies used for data acquisition, and
the numerous algorithms and techniques used in the analysis of the acquired data, are
discussed. It also discusses techniques of 3D reconstruction, that is, 3D point generation from
2D images. In addition to this, a subsection elaborates on the research regarding the
comparison of as-built and as-planned 3D Building Information Models (BIM). And finally,
research gaps found in the literature are highlighted.
Practitioners and academicians have developed and implemented various technologies for the
collection of data from a construction site. These technologies range from simple barcodes to
highly complex imaging devices. In the recent past, laser scanning and photogrammetry have
become popular techniques for obtaining construction data. These techniques collect data in
the form of 3D point clouds, which have emerged as an important format after the advent of
3D BIM models in the construction process. Radio Frequency Identification (RFID) is
another widely used technology; the retailing, and transportation and logistics industries rely
on its capability to identify tagged objects without requiring physical contact, line-of-sight,
or clean environments. Thus, the
construction industry exploited this idea for inventory management and on-site material
tracking (Kasim et al. 2012; Song et al. 2006). The data collected using RFID is then
integrated with the BIM model (Wang and Love 2012). Using this, each component of the
building can be allotted a status - ordered, identified, checked, installed, delivered, protected,
snagged, fixed and completed (Froese 2010). This technology enables the project manager to
retrieve information by a simple scan of the RFID tag using a smartphone or a tablet. A
monitoring system has been developed for concrete pouring, storing and converting the data
collected during the process into information for quality and productivity control (Moon and
Yang 2010).
The construction industry increasingly uses prefabricated components for raising more and
more buildings. The fabrication and installation of these
components can be managed and inspected by the means of RFID systems (Yin et al. 2009).
Active RFID tags are used for location of the objects and users as well as mapping of the
environment. RFID has been able to solve the problem of Indoor Location Sensing (ILS).
RFID based ILS solution is broadly classified into following categories, depending upon the
type of ILS method and possible scale of deployment of the solution: LANDMARC-based
solutions, triangulation-based ILS solutions, and building or zone level ILS solutions (Li and
Becerik-Gerber 2011).
A combination of RFID technology and terrestrial laser scanners (TLS) has been used to
generate 3D models of different objects and indoor spaces. In recent years, the problem of
generation of synthetic 3D models of different facilities has been tackled by many authors.
However, the point clouds produced by TLS carry no semantic information about the points,
therefore providing a hindrance to the analysis of this data. In order to assist the analysis,
TLS systems are integrated with RFID systems. This is demonstrated in Figure 1.
GIS is a computer-based system to collect, integrate, manipulate, analyze, store, and display
data in a spatially referenced environment, thereby assisting in analyzing data visually and
observing trends, patterns, and relationships that cannot be captured in tabular or
written form. On the other hand, GPS is a satellite-based navigation system made up of a
network of approximately 24 satellites, which were placed into orbit by the U.S.A. Each
satellite circles the earth twice a day in a very precise orbit and transmits signal information
to earth, where GPS receivers take this information and use triangulation to calculate the
user’s exact location.
In the construction industry, GPS and GIS technologies, independently and in integration,
have been implemented in the areas of job site monitoring, onsite construction, site layout,
and development, etc. These help project managers and decision-makers to analyze
information regarding real-time locations of vehicles, status reports, drive speed and heading
information, navigation assistance, and collection of navigation routes. Despite the benefits,
this technology has many limitations. Firstly, GPS requires a clear ‘line of sight’ between the
orbiting satellite and the antenna of the receiver. Therefore, it cannot be used indoors, as
anything that can block sunlight is known to block GPS signals. Secondly, the problem of
multipath error arises when the signal is reflected from nearby objects. Thus, applications of
GPS and GIS are limited when it comes to indoor environments.
Laser scanning is one of the popular present-day methods of data acquisition for the purpose
of progress monitoring. It can generate 3D models of the building at any stage of the
construction process. The data collected is accurate and consumes less time, in comparison
to manual measurements and traditional surveying methods. The model accuracy is known to
have a low variance of 1/4” (6 mm) or less (Shanbari et al. 2016).
The laser scanning technologies are based on two operating principles: time of flight, and
phase shift (or phase comparison) of the laser pulse. In time of flight (ToF) scanning, the 3D
coordinates of the surrounding environment are calculated based on the time taken by the
laser pulse to return to the emitting device after being reflected from the object surface.
Using ToF scanning alongside current algorithms, the data is rendered at an accuracy with a
standard deviation in the range of millimeters. On the other hand, in phase comparison laser
scanning, the device emits a laser pulse after modulation by a harmonic wave. On the arrival
of the modulated beam after reflection from the object surface, the phase difference between
the outgoing and incoming pulse is calculated. This phase difference, in turn, gives the 3D
coordinates of the surface. Phase comparison scanning has a higher accuracy (Böhler and
Marbs 2002).
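The two ranging principles can be illustrated numerically. The pulse timing and modulation frequency below are assumed values chosen only for illustration:

```python
# Worked example of the two laser ranging principles described above.
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Time of flight: the pulse travels to the surface and back."""
    return C * round_trip_time_s / 2.0

def phase_shift_distance(phase_rad, modulation_freq_hz):
    """Phase comparison: range from the phase lag of the modulated beam.
    Unambiguous only within half the modulation wavelength."""
    wavelength = C / modulation_freq_hz
    return (phase_rad / (2.0 * math.pi)) * wavelength / 2.0

# A pulse returning after 200 ns corresponds to a range of about 30 m.
d_tof = tof_distance(200e-9)
# A pi/2 phase lag at 10 MHz modulation corresponds to about 3.75 m.
d_phase = phase_shift_distance(math.pi / 2.0, 10e6)
print(round(d_tof, 3), round(d_phase, 3))
```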
Figure 2 Principle of Laser scanning (Abellán et al. 2009)
In the construction industry, laser scanning has been successfully implemented to develop 3D
point cloud models. Automated progress control systems have been developed requiring
minimum human intervention and input (Zhang and Arditi 2013). Despite the high accuracy
of the laser scanner, its use is limited because of the high cost of the scanners, high
maintenance cost and the requirement of a skilled or trained user for operating the scanner.
Laser-scanned data has discontinuities of spatial data and requires mixed-pixel restoration
(Klein et al. 2012). In addition to this, slow warm-up time and the regular need for sensor
calibration put laser scanners at a disadvantage. Photogrammetry, in contrast, offers a
low-cost and low-skill-intensive solution to the problem of data acquisition in the form of the
3D point cloud.
Photogrammetry deals with 3D reconstruction from two or more images, which involves
identifying key points in images; selecting the common points; calculating the camera
parameters, camera positions and distortions; and finally reconstructing the 3D model by
intersecting the key feature points and adding the information from different images into one
model (Mikhail et al. 2001). Advancements in the fields of image processing and computer
vision have made this process increasingly automatic.
The stitching of common feature points can be achieved through different levels of
automation. Manual stitching requires fewer images but needs a priori knowledge of the
scene. On the other hand, automated stitching reduces human intervention but introduces
stitching errors and noise (Remondino and El-Hakim 2006). After the features are described
and stitched together between images, the camera parameters, position and viewing direction
are calculated based on the location of feature points in 3D space. The bundle adjustment
method is employed for simultaneous optimization of the calculated structure and camera
poses. Once the camera positions are calibrated for all images, the 3D coordinates of any
point are obtained by triangulation of the same key point in two images obtained from
different viewing angles (Klein et al. 2012). Figure 3 maps the flowchart for 3D
reconstruction from 2D images.
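The final triangulation step can be sketched as a linear (DLT) solve. The camera matrices below are idealized assumptions (identity intrinsics, a 1 m baseline), not values calibrated from real images:

```python
# Sketch of linear (DLT) two-view triangulation: recover a 3D point from
# its projections in two views with known projection matrices.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Intersect two viewing rays by homogeneous least squares."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Camera 1 at the origin, camera 2 shifted 1 m along x, both looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# A point 5 m ahead projects to (0, 0) in view 1 and (-0.2, 0) in view 2.
X = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))
print(X)
```

Bundle adjustment then refines many such points and the camera poses jointly; the linear solve above only provides the initial estimate.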
Figure 3 3D reconstruction from 2D images
The quality of photogrammetric reconstruction is sensitive to site conditions. In the case of
automated stitching, the result is affected by dynamic scenery, moving objects and
illumination (Sabry 2000). Other challenges include the lack of corresponding pairs of feature
points for stitching 2D images. This is because under-construction buildings usually have
uniform texture and appearance, and thus it is difficult to visually recognize features in a
given image. Moreover, urban construction sites are surrounded by buildings and cluttered
from inside, which prevents capturing the required amount of data. In addition, though the
camera has the drawback of lower geometric accuracy, its flexibility of use and
cost-effectiveness make it a favorable choice. Bohn & Teizer have explored the advantages
and challenges of camera-based progress monitoring (Bohn and Teizer 2010). The efficacy of
stereo vision cameras in obtaining as-built point cloud models using 2D images and depth
maps has been studied and proved by various researchers (Brilakis et al. 2011; Fathi and
Brilakis 2011; Son and Kim 2010). The problem of the lack of feature points is solved by
populating the 3D space with unique visual markers, thereby artificially increasing the
number of key feature points. Occlusion and limited-view limitations are overcome by
introducing planar constraints to the reconstruction.
Figure 4 Point cloud model using laser scan and photogrammetry (Shanbari et al. 2016)
Videogrammetry reconstructs 3D coordinates using two or more frames of a video. The input
is video captured over the scene, effectively capturing a large number of frames in a small
time span, stored sequentially. The sequential nature of video facilitates the location and
matching of key feature points (Zhu and Brilakis 2009). Thus, the entire reconstruction from
video requires minimal human intervention, largely automating the process (Serby et al.
2004; Tissainayagam and Suter 2005). Technology for the generation of 3D point cloud data
from videos is in an early stage of development. The results are susceptible to any changes in
illumination or abrupt motion of the camcorder, corrupting the features extracted and leading
to failure of stitching of two frames (Remondino and El-Hakim 2006). Table 1 summarizes
the attributes of laser scanning, photogrammetry and videogrammetry.
The 3D point cloud obtained from laser scanning or photogrammetry needs to be processed
before it is compared with an as-planned BIM model. The 3D point cloud contains points
scattered in an unordered manner and does not contain any object-oriented information, as
BIM models do. Therefore, these point clouds need to be processed for object segmentation
and detection (Kopsida et al. 2015). This section discusses techniques of processing digital
images for 3D reconstruction, identification of construction objects and materials, and further
analysis.
The reconstructed models are highly corrupted with outliers and missing values because of
the imperfect conditions in which scanning is done, for instance, motion of the object,
multiple reflections, object occlusion, etc. Therefore, the output of the data acquisition step,
that is, the reconstructed point cloud model, needs to be processed and analyzed before being
fed to the next step of information retrieval and progress estimation. The standard tasks
involved in pre-processing of point cloud data are (1) outlier removal, (2) handling missing
values and (3) reducing noise in (or smoothening) the data (Pătrăucean et al. 2015). For large
point clouds, downsampling through voxelization is done in order to reduce the
computational time and cost.
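The voxelization step mentioned above can be sketched in a few lines of numpy. The voxel size and the random cloud are illustrative; real pipelines typically use a point cloud library for this:

```python
# Minimal sketch of voxel-grid downsampling: points are bucketed into cubic
# voxels and each occupied voxel is replaced by the centroid of its points.
import numpy as np

def voxel_downsample(points, voxel_size):
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average each group.
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.uniform(0.0, 1.0, size=(10_000, 3))     # dense synthetic cloud
small = voxel_downsample(cloud, voxel_size=0.2)     # 0.2 m voxels
print(cloud.shape, "->", small.shape)
```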
Outlier removal for point clouds is a non-trivial task, owing among other reasons to the
irregular and varying point density (Wang et al. 2013). Outlier removal methods in the
literature are mostly based on local properties of points, popularly calculated using
density-based approaches (Breunig et al. 2000) and distance-based approaches (Knorr et al.
2000). Some of the local statistics used are local point density, nearest neighbor distance,
eigenvalues of the local covariance matrix, etc. (Papadimitriou et al. 2003). Wang et al.
proposed a connectivity-based and clustering approach for the detection of outliers (Wang et
al. 2013). Similarly, the point cloud is processed for noise reduction.
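A minimal sketch of the nearest-neighbor-distance statistic mentioned above: a point whose mean distance to its k nearest neighbors is far above the global average is flagged as an outlier. The values of k and the threshold multiplier are illustrative assumptions:

```python
# Distance-based outlier removal using the mean k-nearest-neighbor distance.
import numpy as np

def remove_outliers(points, k=8, std_ratio=2.0):
    # Pairwise distances (fine for small clouds; use a KD-tree for large ones).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)   # skip column 0 (self-distance)
    threshold = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn < threshold]

rng = np.random.default_rng(7)
wall = rng.normal(0.0, 0.01, size=(300, 3))            # dense cluster
stray = np.array([[5.0, 5.0, 5.0], [-4.0, 6.0, 2.0]])  # two erroneous points
cloud = np.vstack([wall, stray])
clean = remove_outliers(cloud)
print(cloud.shape[0], "->", clean.shape[0])
```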
The processed as-built point cloud is to be compared with the as-planned BIM model for the
progress estimate. There are two ways to do the comparison: (1) convert the point cloud
model into a BIM model (involving semantic segmentation and object recognition) and
compare, or (2) convert the as-planned model to point cloud format and compare. In this
study, the latter is adopted for progress estimation.
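The adopted comparison, with the as-planned model sampled into points and matched against the as-built cloud, can be sketched as follows. The wall geometry and the 5 cm tolerance are illustrative assumptions:

```python
# Sketch of point-cloud-to-point-cloud progress estimation: a planned point
# counts as built when an as-built point lies within a tolerance of it.
import numpy as np

def percent_built(as_planned, as_built, tol=0.05):
    d = np.linalg.norm(as_planned[:, None, :] - as_built[None, :, :],
                       axis=2).min(axis=1)
    return 100.0 * float((d <= tol).mean())

# As-planned: a full 2 m x 1 m wall face sampled on a grid (y = 0 plane).
xs, zs = np.meshgrid(np.linspace(0, 2, 21), np.linspace(0, 1, 11))
planned = np.column_stack([xs.ravel(), np.zeros(xs.size), zs.ravel()])

# As-built: only the lower half of the wall has been constructed so far.
built = planned[planned[:, 2] <= 0.5]

print(round(percent_built(planned, built), 1))
```

Real clouds need a KD-tree for the nearest-neighbor search and a registration step to align the two models first.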
For the creation of as-built BIM model from the unstructured 3D point cloud, three types of
information are needed: (1) Information about the shape of the object, (2) Information about
the identity of the object, (3) Information about the relationship between objects (Tang et al.
2010). The following subsections discuss different algorithms from the field of computer
vision and photogrammetry for extraction of aforementioned knowledge from point cloud
data.
2.3.2.1 Shape representation
Campbell et. al. discuss shape representation vividly in general (Campbell and Flynn 2001).
In the context of BIM models, shape representation can be categorized into three different
types:
Explicit (boundary) representation describes the boundaries of the object along with the
relationship between these boundaries (Baumgart 1972). Though explicit representations can
represent the geometry with the accuracy required for as-built modeling, they do not perform
well in automatic segmentation and recognition of building elements. Hence, for practical
usage in BIM modeling, implicit shape representation is preferred. Implicit representation is
done by means of features, which are either derived from the data itself or from a library of
shapes. Most of the implicit representations are non-parametric (Tang et al. 2010).
In the parametric model, the shape of an object is described by a small number of parameters. For example, a column on site that is cuboidal in shape is represented by its length and breadth, its axis, its start point and its endpoint. On the other hand, non-parametric shape representations do not require any such predefined set of parameters.
As the name suggests, a global shape representation describes the object's entire shape. For example, in order to represent an object, a histogram of the surface normals could be used (Horn and Ikeuchi 1984). Some of the other global features which can be used are shape distributions (Osada et al. 2002), spherical harmonics (Jia et al. 2007), etc.
For the application of global features, the algorithm requires the object to be segmented from the background and observed from all sides. This condition is rarely met in 3D point data acquired from a construction site; therefore, the application of global features is highly limited in construction.
On the other hand, a local representation describes a small part of the object. The most common types of local representations are in the form of the surface normal and surface curvature. For example, in the case of a wall, since it is a planar surface, the surface normal would be the same throughout, and therefore points with similar surface normal values can be grouped together.
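The wall example can be made concrete with a small sketch: a per-point normal is estimated as the smallest-eigenvalue eigenvector of the neighbourhood covariance matrix, and points whose normals agree are grouped together. The grid size and `k` are illustrative values.

```python
import numpy as np

def estimate_normal(points, idx, k=10):
    """Normal of point `idx` = eigenvector of the smallest eigenvalue of
    the covariance matrix of its k-nearest-neighbour patch."""
    d = np.linalg.norm(points - points[idx], axis=1)
    patch = points[np.argsort(d)[:k]]
    cov = np.cov(patch.T)
    eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvec[:, 0]                   # smallest-eigenvalue direction

# Noiseless vertical wall patch in the x-z plane: normals should be +/- y.
xs, zs = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
wall = np.column_stack([xs.ravel(), np.zeros(100), zs.ravel()])
normals = np.array([estimate_normal(wall, i) for i in range(len(wall))])

# Group points whose normals align with the wall's dominant normal (+/- y).
aligned = np.abs(normals @ np.array([0.0, 1.0, 0.0])) > 0.99
```

On this idealized wall every point ends up in the same group; on real scans, a tolerance on the normal direction and a connectivity check would separate wall, floor, and ceiling planes.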
Whether implicit or explicit, these shapes are to be related for the creation of the as-built BIM model. Moreover, these relationships help to identify the classes of these shapes, for example, whether a cuboidal shape is a column or a beam (Nüchter and Hertzberg 2008). The spatial relationships between the geometric shapes, relevant to BIM, can be categorized in the following manner:
After the point cloud is segmented into different geometric shapes and relationships are established between these shapes, we need to classify these shapes into different classes of construction elements. This upgrades, for instance, a cuboid in the point cloud to a wall in the as-built BIM model. The approaches employed for object recognition in point clouds are: recognizing using object instances, recognizing using object classes, recognizing using context, and recognizing using the as-planned model.
‘Recognition using instances’ is one of the most employed and most successful techniques for object recognition. The entire recognition process can be divided into three steps. Firstly, shape descriptors are computed for the various possible objects and stored in the form of a library, which helps while classifying an object in a 3D scene. Secondly, in the point cloud, a shape descriptor is calculated for the geometric shape in question and then compared to the shape descriptors computed in step 1. The most similar shape descriptor is chosen as the class of the object. Thirdly, the two shapes, one from the point cloud and the other from the shape library, are aligned and validated using the percentage of overlap. This technique is useful in the recognition of objects throughout the scene whose shapes are known a priori. Examples of such objects include machinery in process plants, pipes, valves, and I-beams (shown in Figure 6).
Figure 6 (a) Input mesh (b) library of objects to be recognized from (c) objects recognized
Recognition using instances has a limitation in that it cannot handle shape variability within an object class. Therefore, recognition using classes is used. This is done by using shape descriptors which represent a whole class instead of a single instance of the object class. A global descriptor method is used for the recognition of object classes. This is similar to recognition using instances, the only difference being that the geometric object segmented in the point cloud is compared to all instances in a particular class (Kazhdan et al. 2003; Shilane et al. 2004).
Though object recognition has seen rapid development in recent decades, research on the recognition of BIM objects like walls, windows, doors, etc., is still at an elementary stage. Algorithms for BIM object recognition typically segment the scene into planar or curved regions and then classify the segments using the features derived from them.
One of the challenges observed in the recognition of BIM elements in point cloud data is the lack of distinctiveness of these elements. For example, on what parameters should one differentiate between a column and a beam? Both are cuboidal in shape with planar surfaces. In order to overcome this problem, researchers have developed techniques that exploit information about the spatial relationships between objects, thereby providing an additional layer to the process of object recognition. These techniques are implemented by generating and assigning semantic information to each of the geometric shapes in the point cloud (Cantzler et al. 2002). Thus, instead of vanilla segmentation, the point cloud is segmented semantically using these techniques. The semantic information is stored as rules, such as "the floor is always perpendicular to the wall and door and parallel to the ceiling". This process integrates the domain knowledge with the semantically segmented point cloud.
(d) Object-recognized point cloud (Tang et al. 2010)
Another approach to object recognition uses information from the as-planned BIM model. The as-planned model is aligned with the point cloud, guiding the process of assigning points to specific objects (Bosche and Haas 2008; Yue et al. 2007). Taken together, the methods discussed in sections 2.3.2.1 to 2.3.2.3 help in the creation of the as-built BIM model, with knowledge of the construction objects represented in the point cloud.
In general, object recognition and as-built modeling are challenging for various reasons: (a) variation in the appearance of objects in different environments, (b) non-distinctiveness between different object models, and (c) lack of texture in indoor objects (Yang et al.).
After the retrieval of information from the point cloud and generation of an as-built model, the next step is to compare it with the as-planned model and estimate the amount of progress. This is done to determine the present state of work and update the schedule with information like ‘the project is behind schedule’, ‘the project is ahead of schedule’, etc. The as-planned model used for comparison with the as-built model is the 4D BIM. The process of comparing the two models can be divided into two steps: (1) BIM registration, (2) progress estimation.
The process of registration involves the alignment of the as-built and as-planned models. Researchers have done this alignment manually (Memon et al. 2006) and semi-automatically (Bosché 2010; Golparvar-fard et al. 2011). In the semi-automated method, the user selects a set of corresponding points in both models, which is used to map the models together. Alignment can also be done using the models' main axes and their volumes, since the main axis describes the whole volume. Once the two models are closely aligned, the next step is to estimate the progress.
After BIM registration, progress is computed by verifying the presence of an element at the spatial location corresponding to the as-planned model. This is aggregated to calculate the present progress (amount of construction completed). This is computed in the following ways:
(1) The 3D space is discretized into equally sized voxels. This is followed by a comparison of each voxel in the as-built model with the corresponding voxel in the as-planned model, and finally the labeling of each voxel as ‘built’ or ‘not built’. For a voxel to be labeled as ‘built’, it should have a corresponding built voxel in the as-planned model, and the number of points in the as-built voxel should be above a predefined threshold. This predefined threshold can be either static or dynamic; Golparvar-Fard developed a dynamic-threshold support vector machine classifier performing with an accuracy of 82.89%.
(2) If objects are recognized in the point cloud, these recognized objects are compared with the expected objects in the as-planned model. The number of successful comparisons, that is, the element being found in both models, is cumulatively added to get the number of objects created.
Besides comparing entire objects, the comparison can also be done for partially completed objects, in which case the binary values in the above equation are changed to a percentage. The percentage here defines the percent of the element that has been built.
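A minimal sketch of the first (voxel-based) approach described above, with a static point-count threshold; the voxel size and `min_points` values are hypothetical, not those used in the cited studies.

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map each point to an integer voxel index and count points per voxel."""
    idx = np.floor(points / voxel_size).astype(int)
    keys, counts = np.unique(idx, axis=0, return_counts=True)
    return {tuple(k): c for k, c in zip(keys, counts)}

def label_built(as_planned, as_built, voxel_size=0.1, min_points=3):
    """A planned voxel is 'built' if the as-built cloud has at least
    `min_points` points in it (static threshold; hypothetical value)."""
    planned = voxelize(as_planned, voxel_size)
    built = voxelize(as_built, voxel_size)
    return {v: built.get(v, 0) >= min_points for v in planned}

planned = np.array([[0.05, 0.05, 0.05], [0.15, 0.05, 0.05]])  # two voxels
scanned = np.array([[0.04, 0.06, 0.05]] * 4)                  # dense in voxel 0 only
labels = label_built(planned, scanned)
```

A dynamic threshold, as in Golparvar-Fard's classifier, would replace the fixed `min_points` with a value learned per voxel from features such as expected point density and occlusion.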
2.5 SUMMARY
This chapter has extensively discussed the work done in the field of automated progress monitoring in recent decades. It focuses on work related to photogrammetry and applied machine learning. In addition, various techniques for the analysis of point cloud data were elaborated. The entire literature review is divided into three broad categories: data acquisition, data processing, and progress estimation. Several techniques can be used for data acquisition, including RFID tags, GPS, laser scanners, photogrammetry, and videogrammetry. The result of the last three is in the form of a point cloud, which represents the geometrical and material properties of the scanned object. The point cloud stores information in the form of shape representations, relationship models, and objects. Processing converts the point cloud from a group of points to an as-built model in which each point has an object-oriented meaning. This as-built model is used for progress estimation. Progress is estimated by comparing the as-built model with the as-planned model, both corresponding to the same stage of construction. In order to compare the two models, both are registered together using coarse and fine registration, which can be done automatically, semi-automatically, or manually.
3.1 INTRODUCTION
This chapter discusses the aims and objectives of the study. It further discusses the methodology proposed in order to achieve these aims and objectives. Since this is a relatively new area of research, the chapter also establishes the problem statement being tackled in the study.
The need for automated progress monitoring was established in the previous chapter. In order to perform automated progress monitoring, 3D reconstruction of the as-built state of construction is essential. The two popular techniques for 3D reconstruction are laser scanning and photogrammetry.
The aim of the study is to develop a methodology for automated progress monitoring of a construction project using stereo vision technology for data collection and photogrammetry for 3D reconstruction. The objectives are:
· Train a classifier using the Support Vector Machine (SVM) algorithm for point-wise binary classification of the point cloud.
· Standardize the methodology for data collection through the ZED stereo camera for the least erroneous 3D reconstruction of the as-built model, through a testbed set up in the lab (controlled environment). This is also relevant for developing a standard dataset.
3.4 METHODOLOGY
In this research, we focus on improving the as-built model reconstruction accuracy. In addition to data pre-processing, the point cloud is subjected to supervised binary classification for construction material detection. This detection method has a two-fold advantage: first, it eliminates the need to manually assign threshold values, which requires expertise; second, it enhances the ability to scale the method, as it can easily be re-implemented after re-training. The proposed methodology, excluding data acquisition and pre-processing (outlier removal and noise smoothing), can be divided into four components: (1) data generation, (2) feature engineering, (3) masonry recognition using machine-learning-based supervised classification, and (4) progress estimation and evaluation. Each of the above components is discussed in the following sections.
This is the first step in the proposed framework. The input to this step is point cloud data and the corresponding BIM model. The point cloud is annotated using the BIM model as ground truth. Instead of following the common strategy for point cloud labeling, which involves over-segmenting the data and then assigning labels to segments, the data is annotated point by point: (1) to avoid the segmentation step, and (2) to prevent the classifier from learning hand-crafted rules of segmentation.
3D annotation is done by manually comparing the point cloud model with the BIM model. The two models are overlapped and annotation is done view by view. A view is fixed, and in this view, all the points lying within the boundary of the BIM model, with a small tolerance value, are marked as normal, while the rest of the points are marked as erroneous. This process is repeated iteratively for different views until all possible views are covered. The problem with this way of annotation is that the same points appear in different views, appearing erroneous in one view and normal in another. To tackle this problem, a point is marked normal if it appears normal from all possible views and erroneous if it appears erroneous from any possible view. Eventually, the point cloud is separated and labeled into two classes: ‘normal’ and ‘erroneous’.
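The all-views rule can be expressed compactly: a point's final label is the logical AND of its per-view verdicts. A toy sketch with hypothetical verdicts:

```python
import numpy as np

# Per-view verdicts for 5 points across 3 views (True = looked normal).
# A point is 'normal' only if it looked normal from every view; a single
# erroneous verdict anywhere marks it erroneous.
view_verdicts = np.array([
    [True,  True,  True,  False, True],   # view 1
    [True,  True,  False, False, True],   # view 2
    [True,  True,  True,  True,  True],   # view 3
])
final_label = view_verdicts.all(axis=0)   # normal iff normal in all views
```

Points 3 and 4 (zero-indexed 2 and 3) end up erroneous because at least one view flagged them.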
The generated point clouds have no meaning to a computer, as they are only a group of points spread across three-dimensional space. Therefore, these point clouds need to be converted into a form that computers can understand. In this regard, feature engineering is done on the point cloud. Features are chosen such that they distinctively represent the underlying distribution (in this case, a masonry wall). These features are used for classifier training and masonry recognition. These steps are discussed in detail in the following sections.
The performance of any machine learning algorithm is only as good as the ability of the features (or parameters) used to distinctly define the underlying distribution. In this regard, we attempt to learn a list of features which can be used to identify points corresponding to masonry in a point cloud. Rashidi et al. identified three categories of construction materials depending upon appearance and color features (Rashidi et al. 2016). Thus, discriminating features are learned for masonry data through an iterative process of feature extraction and feature selection. The types of features learned are spatial and color features, which are discussed below:
These features are used to model the geometrical properties of masonry data, like angles, distances, and angular variations. In a 3D point cloud, these features for a point in space are calculated by considering the spatial arrangement of data points in its neighborhood. The neighborhood can be assumed to be spherical (Lee and Schenk 2002) or cylindrical (Filin and Pfeifer 2005) with a fixed radius. The neighborhood can also be selected based on the k nearest neighbors, depending on the 3D distance from the query point, where k ∈ N (Linsen and Prautzsch 2001). For classification, spatial features were calculated for a fixed-radius spherical neighborhood, in order to obtain appropriate and uniform features across different datasets. The spatial features of neighboring points are known to be correlated; therefore, in order to capture this correlation, apart from individual features, some features are obtained which take this interdependency into consideration. One such feature is the Fast Point Feature Histogram (FPFH) descriptor.
The FPFH descriptor is calculated in two steps: (1) for every point p, angular relationships are calculated between the query point and each of its neighbors; the resulting histogram is called the Simplified Point Feature Histogram (SPFH). (2) For each query point, the SPFHs of its k nearest neighbors are combined with its own SPFH, weighted by the distance to each neighbor, to form the FPFH.
Figure 8 Features extracted from the point cloud (a) Normals (b) Fast Point Feature Histogram descriptor
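The second (weighting) step can be sketched under the standard FPFH formulation, assuming the SPFH histograms have already been computed; the toy histograms and distances below are made up for illustration.

```python
import numpy as np

def fpfh_from_spfh(spfh_query, spfh_neighbors, distances):
    """Step 2 of FPFH: combine the query point's SPFH with its k
    neighbours' SPFHs, each down-weighted by its distance w_i:
    FPFH(p) = SPFH(p) + (1/k) * sum_i (1/w_i) * SPFH(p_i)."""
    k = len(distances)
    weighted = sum(s / w for s, w in zip(spfh_neighbors, distances)) / k
    return spfh_query + weighted

# Toy 4-bin SPFH histograms for a query point and two neighbours.
spfh_q = np.array([1.0, 0.0, 0.0, 1.0])
spfh_n = [np.array([0.0, 2.0, 0.0, 0.0]), np.array([0.0, 0.0, 4.0, 0.0])]
dists = [1.0, 2.0]
fpfh = fpfh_from_spfh(spfh_q, spfh_n, dists)
```

The down-weighting means distant neighbours contribute less, which is what keeps the descriptor local while still encoding neighbourhood context.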
For the recognition of masonry, color values are a very effective feature, given masonry's distinctive bright red color. Color serves as an effective indicator for differentiating masonry from the rest of the objects found on a construction site. However, it is worth noting that though the color value for a given material does not vary drastically within a given point cloud, it might vary significantly when the same material is compared across two different point clouds. The color values are obtained in the RGB space. Though RGB is the most widely used color space, RGB values are susceptible to high variation when the same object is exposed to different illumination conditions. Since lighting conditions on site vary, dealing with these variations is highly important to produce effective results. Therefore, in this study, we have transformed the color space from RGB to HSI. It is compelling to make this transformation because: (1) it is very intuitive, as it is similar to the way in which color is perceived by humans, and (2) it separates the chrominance (color) and luminance (intensity) components. Hue (H) and Saturation (S) correspond to the color component of the RGB space, whereas Intensity (I) represents the illuminance-dependent part. Therefore, hue and saturation provide good parameters for separating masonry points from the rest.
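The textbook RGB-to-HSI conversion can be sketched as follows (channel values assumed in [0, 1]; this is the standard formula, not necessarily the exact implementation used in the study).

```python
import math

def rgb_to_hsi(r, g, b):
    """Standard RGB -> HSI conversion. H in degrees, S and I in [0, 1]."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den)))) if den else 0.0
    h = theta if b <= g else 360.0 - theta
    return h, s, i

# Pure red: hue 0 degrees, full saturation, intensity 1/3.
h, s, i = rgb_to_hsi(1.0, 0.0, 0.0)
```

A darker or brighter red changes only I, while H and S stay nearly constant, which is exactly the illumination invariance the text argues for.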
For the recognition of masonry in the point cloud, a pre-trained binary classifier is used. An SVM classifier is trained on a training dataset generated using the method explained in the data generation step. Instead of a voxel-based approach, a point-based approach is used for training. Despite the computationally costly and time-consuming nature of point-wise training, it is acceptable because: (1) it helps in improving recall values while detecting masonry points, and (2) training is a one-time process, to be done during the setting up of the system. Hence, the selection of the approach was based on accuracy rather than computational cost.
Since the classification boundary was non-linear, a Radial Basis Function (RBF) kernel was used. The parameters C and γ are to be determined for the RBF-kernel SVM model: C is the regularization parameter and γ is the kernel parameter (Son et al. 2012). In order to validate the results obtained using the training set, an independent validation set was used. An SVM works by finding the optimal separating hyperplane. In other words, given labeled training data (supervised learning), the algorithm outputs an optimal hyperplane which categorizes new examples. In two-dimensional space, this hyperplane is a line dividing the plane into two parts, with each class lying on one side. But the data usually (as in our case) do not have linearly separable boundaries. Therefore, we need to transform the data so that they have a linear boundary. This is done by kernels. The original space is called the input space, and the new space is called the feature space. The figure below shows a pictorial representation of this transformation of the data from the input space to the feature space.
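A minimal sketch of an RBF-kernel SVM on a toy, non-linearly separable (XOR-like) dataset using scikit-learn; the C and γ values here are illustrative, whereas the study tunes them on a validation set.

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style data: no straight line separates the two classes, so the
# RBF kernel implicitly maps the points into a space where one does.
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([0, 0, 1, 1])

# C (regularization) and gamma (kernel width) are illustrative values.
clf = SVC(kernel="rbf", C=10.0, gamma=1.0)
clf.fit(X, y)
pred = clf.predict(X)
```

A linear kernel would fail on this data; the RBF kernel makes all four training points separable, which is the behaviour the paragraph above describes.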
The trained classifier is used to detect masonry points in the post-processed point cloud. Features are extracted from the input 3D point cloud data (different from the data used for classifier training), based on which classification is done. The point cloud is separated and classified into two classes: “normal” and “erroneous”. The points classified as “erroneous” are removed from the point cloud, as they do not have any correspondence in the real world; they were mostly created due to inaccuracies in the data collection method and in the 3D reconstruction from the 2D images. The point cloud obtained after removing the erroneous points correctly represents the underlying real-world object.
Note: It must be noted here that the points which are not captured during data collection are still lost and are not added by the proposed method. The method only removes the erroneous points.
The modified as-built model, that is, the model obtained after classification of points and removal of the erroneous ones, is used for the estimation of progress. For the estimation of progress, first, Scan-to-BIM registration is done; second, a voxel-based comparison is done. The system is validated for different stages of construction of a masonry wall. The following steps explain the process in detail.
This is the step in which the as-built model and the as-planned model are overlapped. The coarse registration is done manually by selecting four common points in both models. This was done using the CloudCompare software. Once both point clouds are aligned, the ICP algorithm is used to attain fine registration, with the maximum root mean square error (RMSE) used as the convergence criterion.
After the Scan-to-BIM registration, the 3D space containing both models is discretized into voxels. For better results, the voxel size is kept small enough that there is at most one point per voxel. Though this is computationally expensive, it has been followed for the purpose of this study. Once we have the discretized models, each voxel is marked as ‘built’ or ‘not built’. Voxels which are occupied by one or more points from both the as-planned and as-built models are classified as ‘built’. A voxel is classified as ‘not built’ if it contains points from the as-planned model but not from the as-built model.
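The built/not-built comparison described above can be sketched as set operations on occupied voxel indices; the coordinates and the voxel size below are made-up illustrative values.

```python
import numpy as np

def occupied_voxels(points, voxel_size):
    """Set of integer voxel indices occupied by at least one point."""
    return {tuple(v) for v in np.floor(points / voxel_size).astype(int)}

# As-planned wall spans four voxels; the as-built scan covers three.
voxel = 0.05  # small enough for roughly one point per voxel
planned_pts = np.array([[0.01, 0, 0], [0.06, 0, 0], [0.11, 0, 0], [0.16, 0, 0]])
built_pts = np.array([[0.012, 0, 0], [0.058, 0, 0], [0.112, 0, 0]])

planned = occupied_voxels(planned_pts, voxel)
built_scan = occupied_voxels(built_pts, voxel)

built = planned & built_scan        # occupied in both models -> 'built'
not_built = planned - built_scan    # planned only -> 'not built'
percentage_build = 100.0 * len(built) / len(planned)
```

Here three of the four planned voxels are matched by the scan, so the wall registers as 75% built.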
3.10 EVALUATION
This section discusses the metrics used for the performance evaluation of the methods proposed in the study. The method is evaluated in two steps: (1) the performance of the classification performed by the SVM, and (2) the performance of the system on the final value of the progress estimate. The metrics used to measure these are discussed below:
The test dataset was used for evaluating the performance of the SVM classifier. The test dataset was created from the 4 million points collected across 27 models. These points were divided in the ratio 80:20, with the 20% share of the points used as test data. The division of the data points was done using stratified sampling.
For evaluating the performance of the system, two popular measures, precision and recall, are used:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
where TP (true positives) is the number of points correctly predicted as belonging to the normal class, FP (false positives) is the number of points predicted as normal but actually erroneous, and FN (false negatives) is the number of points predicted as erroneous but actually normal.
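These definitions translate directly into code; the toy labels below are made up for illustration.

```python
def precision_recall(y_true, y_pred, positive="normal"):
    """Precision and recall for the 'normal' (positive) class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

truth = ["normal", "normal", "normal", "erroneous", "erroneous"]
pred = ["normal", "normal", "erroneous", "normal", "erroneous"]
p, r = precision_recall(truth, pred)
```

With two of three normal points recovered and one false alarm, both precision and recall come out to 2/3 on this toy example.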
The metrics used for measuring the progress estimate are the ‘built-up area error percentage’ and the ‘percentage build’.
The built-up area error percentage is calculated from the ratio of the area estimated using the proposed methodology to the actual area of the model at that stage of construction. The actual area can be calculated from either the as-planned model or the as-built model; therefore, we have two error percentages of the built-up area. Both areas are calculated for the same stage of construction.
The percentage build is calculated as the ratio of the total area of the structure completed at a given time in the as-built model to the total surface area in the as-planned model. The result produced using the proposed method is validated by comparing the value with a human-produced value. (Note: The human-produced value is obtained by manually calculating the value from the as-built model and the as-planned model, so that the computer and the human both have access to the same data. This is justified because the entire study is about classifying the erroneous points that appear in the point cloud, and not about the points which do not appear.)
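One plausible reading of the two metrics as formulas; the areas used here are hypothetical numbers, not measurements from the study.

```python
def built_up_area_error_pct(estimated_area, actual_area):
    """Error between estimated and actual built-up area, as a percentage
    of the actual area (one plausible reading of the metric)."""
    return 100.0 * abs(estimated_area - actual_area) / actual_area

def percentage_build(as_built_area, as_planned_area):
    """Share of the planned surface completed at a given time."""
    return 100.0 * as_built_area / as_planned_area

# Hypothetical stage: 7.6 m^2 estimated vs 8.0 m^2 actual, out of a
# planned 20 m^2 wall.
err = built_up_area_error_pct(7.6, 8.0)
pct = percentage_build(8.0, 20.0)
```

As the text notes, `actual_area` can be taken from either the as-planned or the as-built model, giving the two variants of the error percentage.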
3.11 SUMMARY
This chapter discussed the aims and objectives of the study and the methodology proposed in order to achieve these objectives. The main objective of the study is to train a classifier for automated progress monitoring of a construction project. In order to do this, masonry data is collected. Data collection is discussed in detail in the next chapter. The data collected is cleaned, pre-processed and labelled manually in order to prepare it for implementing a supervised learning algorithm. But the point cloud cannot be used directly; features extracted from it are used for training an SVM classifier, and the trained classifier is used for masonry recognition later. Once the point cloud model is processed by the SVM, it is passed on for progress estimation, which follows a voxel-based approach. Finally, the metrics to evaluate the performance of the system were discussed: the SVM classifier is evaluated using precision and recall, while the progress estimate is evaluated by the metrics defined as ‘built-up area error percentage’ and ‘percentage build’.
CHAPTER 4. IMPLEMENTATION USING TEST BED SETUP
4.1 INTRODUCTION
This chapter deals with the implementation of the proposed framework using a laboratory testbed setup. The chapter is divided into the following sections: (1) design of the 3D/4D model, (2) automated data acquisition, and (3) progress estimation. Each of these topics is elaborated in the following sections.
A 3D model was developed for the testbed setup, and the activities of the project were defined. A work schedule of these activities was then linked with the 3D model, and a 4D model was obtained.
The 3D model was developed for masonry wall construction. Since only one model was to be built, it was designed to represent the variety of wall configurations found at a construction site. Therefore, a U-shaped wall configuration was developed, with a column at one corner and no column at the other corner, and a column in the middle of the wall and at one of the ends. Moreover, two differently sized openings were left in the wall, representative of a door and a window.
A semi-automatic robotic system with 5 degrees of freedom (DOF) was fabricated for data collection. This was done in order to simulate the motion of a moving robot (or rover), proposed as one of the ways of automated data collection at construction sites. The 5 DOF of the system are:
1. Axis 1: This axis moves the robot in the plane parallel to the floor (x-axis).
2. Axis 2: This axis moves the robot in the plane parallel to the floor, perpendicular to axis 1 (y-axis). The motion along this axis is similar to that of axis 1.
3. Axis 3: This axis moves the camera mounting station along the vertical, perpendicular to the ground (z-axis). The motion along this axis is automatic, with an allowable stroke length of 2 m (allowable camera height of 0.5 m to 1.7 m from the ground).
4. Axis 4: This axis controls the pitch of the camera mounting station and rotates the camera in the planes perpendicular to the plane of axis 1 and axis 2 (pitch of the camera). The motion along this axis is manual, with an allowable range of -90 to +90 degrees.
5. Axis 5: This axis rotates the camera mounting system in the plane parallel to the plane of axis 1 and axis 2 (yaw of the camera). The allowable movement along this axis is -45 to +45 degrees.
These are the minimum number of DOF necessary for capturing any construction site, given the geometrical constraints of the structure and the maneuvering constraints of the robotic system.
Data is collected with predefined and controlled parameter values. One of the parameters studied in the research is distance; therefore, for one iteration of data capture, the distance from the wall needs to be kept constant. As a result, path planning becomes an important aspect of the data collection. Another significant aspect, crucial to the path planning, is the field of view (FOV) of the camera. The path is defined in 3D space such that every visible part is captured and there is minimal rescanning of any area of construction. Rescanning is avoided as it creates multiple depth maps of the same surface, thereby affecting the 3D point model generated.
Stereo vision cameras are a passive capture technology which works by comparing two different images of the same area taken by two cameras placed a few inches apart. An algorithm is used to identify common key points in the two images. The distance in pixels between these key points (the disparity) is computed. This distance is then used to estimate the depth of the object from the camera. In this study, the ZED camera is used for data collection. The low cost of the camera, priced at USD 449.00, makes the technology worth exploring.
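The disparity-to-depth step rests on the standard pinhole stereo relation, depth = f·B/d. The 120 mm baseline below matches the ZED spec quoted in this chapter; the focal length in pixels is an illustrative value, not the camera's calibrated one.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo relation: depth = f * B / d, with f in pixels,
    the baseline B in metres, and the disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

f_px = 1400.0      # illustrative focal length in pixels (assumption)
baseline = 0.120   # ZED baseline: 120 mm
z = depth_from_disparity(f_px, baseline, disparity_px=84.0)
```

The inverse relationship explains why depth accuracy degrades with distance: at long range the disparity shrinks toward zero, so a one-pixel matching error causes a large depth error.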
The ZED camera is lightweight and low cost, and still has high-quality output. The two cameras each have 4,416 x 1,242-pixel sensors in a 16 x 9 widescreen format. The optics allow a 110-degree field of view. The cameras are spaced 120 mm (4.7 inches) apart, which, with the dense pixel video, gives usable stereo depth information from 1.5 to 20 m (5-65 ft). The unit itself measures 175 x 30 x 33 mm (6.89 x 1.18 x 1.3’’) and weighs 159 g (0.35 lb). The figure below shows the ZED camera.
Figure 14 ZED camera used for data collection
ZED Explorer records the video in Stereolabs' SVO format. This video is used for generating a point cloud of the object using the ZedFu API. ZedFu uses videogrammetry and visual odometry techniques for point generation from the video.
This section discusses the experiment carried out in a controlled environment for data collection. In addition, it deals with identifying the dominant features for recognition of masonry in a point cloud. The objectives of the experiment are:
· To determine the distinctly representative features for masonry data and assess their effectiveness for recognition of the masonry wall.
· To study the impact of different parameters, which are known to vary during reality capture at a construction site.
4.3.4.1 Experimental Variables
In this experiment, the possible set of variables involved in the data collection process was identified. Variables were identified from the domains of stereo reconstruction, camera (internal and external) parameters, construction site conditions, and the properties (geometrical and visual) of construction elements. The variables identified were further subdivided into the following categories depending upon the scope of work: independent, controlled, and dependent variables.
Independent variables are those that are being studied in the experiment. These variables are changed to see if they cause any effect on the result. Controlled variables are variables which are kept constant throughout the experiment by controlling them. Ideally, everything other than the independent variables should be controlled as much as possible; in our study, we identified the most prominent variables, which are most susceptible to change, and kept them unchanged. Dependent variables are what we record to see whether they are affected by the changes in the independent variables.
The different variables considered in the study are listed in the table below:
The independent variables were identified considering the conditions of data recording, which depend on the condition of the 3D space (Geusebroek et al. 2005). Each independent variable selected for the study, along with its corresponding rationale, is discussed below:
· Light Intensity
It is the amount of light reflected from the surface of the object to be captured. This variable captures the variability in construction sites in terms of illumination. The ZED camera is based on passive RGB stereo vision. It works similarly to the way human eyes work and captures depth from the RGB images, unlike traditional depth cameras/sensors which are based on infrared (IR) technology. The lenses of the ZED camera have a built-in filter which blocks IR light from the object from reaching the sensors. Thus, the result of the depth map estimation is affected by the illumination of the object, as the appearance (that is, the RGB values) of an image is known to be affected by the light intensity under which the object is seen (Jacobs et al. 1998; Ng et al. 2013).
· Viewing Direction
Viewing direction can be defined as the combination of yaw, pitch, and roll. In the case of a lens/camera, since the lens is circular in shape, the roll angle does not change image properties except for the area captured. Therefore, the viewing direction can be reduced to yaw and pitch. Yaw is defined as the angle which the camera makes with the plane parallel to the plane of the object, that is, the wall (the plane perpendicular to the plane containing axis 1 and axis 2, as defined in subsection 4.3.1). Similarly, pitch is the angle which the camera makes with the plane parallel to the object, that is, the floor in our study (the plane containing axis 1 and axis 2). Several datasets have been collected for studying images in the past: SOIL-47, the Surrey Object Image Library; the Columbia Object Image Libraries COIL-20 (Nene et al. 1996a) and COIL-100 (Nene et al. 1996b); and The Amsterdam Library of Object Images (Geusebroek et al. 2005) are some of the successful datasets which reiterate the importance of varied viewing directions.
· Distance from the wall: It is defined as the perpendicular distance of the camera from the wall. Various values of distance are used in data collection so as to capture the variation in image quality (and therefore in the subsequent point cloud), as the effective resolution (and therefore the point cloud density) decreases with increasing distance, and vice versa. Since the properties extracted from the point cloud depend on the local point density, capturing this variation is important.
The controlled variables are kept constant throughout the experiment despite the fact that they are bound to change across construction sites. These controlled variables can be divided into two categories: category 1, which does not affect the local features of the point cloud, and category 2, which does. Category 1 includes the bond type, the shape of the bricks, and the dimensions of the bricks, as changes in these parameters do not affect the local features of the wall. It must be noted here that local features depend upon the size of the neighborhood selected; if a large neighborhood is used, these parameters start affecting the local features. For this study, a small neighborhood size is used so as to overcome this limitation. Moreover, small sizes are justified as the point cloud obtained is densely populated and has enough points to represent the underlying real-world properties. On the other hand, category 2 includes the type of bricks, the color of the bricks, etc., which must be studied when developing an exhaustive dataset. Here, however, the study has been scoped to masonry, and the methodology can be extended to other materials in future work.
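The local features referred to above can be computed from the eigenvalues of the covariance of a small neighbourhood around each point. A minimal sketch, assuming a k-nearest-neighbour neighbourhood and the standard eigenvalue-based descriptors (the neighbourhood size and descriptor names are illustrative choices in the spirit of Weinmann et al. 2015, not the thesis implementation):

```python
import numpy as np

def local_features(points, index, k=10):
    """Eigenvalue-based local features of the k-nearest neighbourhood
    of one point in an (n, 3) array of XYZ coordinates."""
    d = np.linalg.norm(points - points[index], axis=1)
    nbrs = points[np.argsort(d)[:k]]  # k nearest neighbours (incl. the point itself)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(np.cov(nbrs.T)), reverse=True)
    return {
        "linearity": (l1 - l2) / l1,    # high for edge-like neighbourhoods
        "planarity": (l2 - l3) / l1,    # high for wall-like (planar) neighbourhoods
        "sphericity": l3 / l1,          # high for volumetric scatter/noise
    }
```

For points sampled from a flat wall, planarity dominates and sphericity is near zero, which is why a small neighbourhood keeps these features insensitive to bond type or brick dimensions.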
4.3.4.2 Procedure
The data is collected by varying each of the independent variables over different values. The number of iterations and the values selected for the experiment are given in the table below.
built, only a rotational sweep is done, and for both yaw conditions the same result is used.
and a condition in which the lighting device is mounted on the data acquisition system. The profile of the high lighting condition is given in the figure and table below:
The value of the light intensity at the intersection of the different heights of the wall and the above sections is given in table 3. These values are measured in lux and represent the lighting profile across the wall.
[Table 3: light intensity (lux) measured at different heights from the floor, indexed by the number of bricks: 20, 17, 16, 8, 0]
Similarly, the light intensity profile was recorded for the directed lighting and low lighting conditions.
The collected dataset is in the .svo format. Using ZedFu, the proprietary software of Stereolabs, point cloud data is obtained from these files. Features are extracted from these point clouds as explained in sub-section 3.3. Simultaneously, the points are manually classified as normal and erroneous (as discussed in sub-section 3.1). The features extracted are used to train the SVM classifier.
This section briefly discusses the dataset used for evaluating the performance of the developed methodology. It uses percentage build as the metric. Eight different stages of the construction of a masonry wall were used for the evaluation of the system. The eight models representing these stages were captured, amassing the data used to validate the classifier. A pictorial representation of the models in the validation dataset is shown in the figure below.
4.5 SUMMARY
This chapter discusses the implementation of the proposed methodology. It elaborates on the data acquisition process and the analysis of the data collected. First, the design of the 3D as-planned model is discussed, and the rationale behind the chosen wall configuration is explained. The next subsection details the automated data acquisition process: the data collection system, the path planning for data collection, and the stereo vision technology used. The Zed stereo vision camera is used for data collection, and ZedFu, the proprietary software of Stereolabs, is used for 3D reconstruction. The chapter also describes the dataset generated for masonry representation. This dataset is used for training the SVM classifier to recognize masonry in a given point cloud, and the procedure followed to generate it is discussed in detail. Once the data is generated, information retrieval techniques are applied: spatial and color features are extracted from the point cloud data. Finally, percentage build is calculated as the progress metric.
5.1 INTRODUCTION
This chapter states the various results obtained in the study and their implications. It also discusses the analytical results of the performance of the proposed methodology. Besides this, various insights drawn from the data are discussed in detail in the following sub-sections. The results are divided into three categories: (1) performance of data acquisition and 3D reconstruction, (2) performance of the SVM classifier, and (3) performance of the progress estimation. All these results are discussed in detail in the following subsections.
This section discusses the performance of the data acquisition system and of 3D reconstruction under different real-world conditions. It also offers insight into the importance of the different features extracted from the point cloud dataset. The following sections elaborate on the results obtained.
Color features are observed to be dominant in the classification. This result is in coherence with the literature (Son and Kim 2010). The high variability of color makes the use of machine learning important, as a manually chosen threshold value can produce erroneous results. Besides, the color distributions of normal and erroneous points are not linearly separable.
Figure 18 Hue and Saturation plot for two datasets, collected under different conditions
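The hue and saturation values plotted above can be derived from the per-point RGB colors. A minimal sketch using Python's standard-library colorsys module (the example brick-red RGB triple is illustrative):

```python
import colorsys

def rgb_to_hsv_features(rgb):
    """Map an (R, G, B) triple in 0-255 to (hue, saturation, value) in 0-1."""
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)

# a typical brick-red point maps to a near-zero hue and a high saturation
h, s, v = rgb_to_hsv_features((178, 34, 34))
```

Working in HSV rather than RGB keeps the hue of the bricks comparatively stable under the lighting variations discussed above, while intensity changes are absorbed by the value channel.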
The ZED SDK produces a 3D point cloud and depth map of the scanned object through stereo rectification and an optimized stereo algorithm. The reconstruction is largely affected by the lighting conditions. From the experiment, it can be concluded that the reconstruction is poor in low lighting. Poor lighting reduces the accuracy of registration of consecutive frames; thus distortions and holes appear in the reconstructed model. Although these also appear in the case of bright light, they occur to a much smaller extent than in low light. These results are similar to results reported in the literature (Golparvar-Fard et al. 2011). This variation is quantified below.
The dimensional error of the scanned model varies with the distance of the capture device from the object. As the distance increases, the resolution of the captured image decreases, reducing the quality of image registration between consecutive frames. This eventually leads to error in the 3D reconstruction. The error was quantified using a metric known as dimensional error, defined as the ratio of the difference between the dimensions of the as-planned and as-built models to the dimension of the as-planned model. It also varies with changes in the lighting conditions. The variation of the dimensional error with different distance values and lighting conditions is shown below.
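The dimensional error defined above reduces to a one-line computation; a minimal sketch (the example wall dimensions are illustrative, not measured values from the thesis):

```python
def dimensional_error(as_planned_dim, as_built_dim):
    """Ratio of the dimension difference to the as-planned dimension;
    positive when the reconstruction underestimates the dimension."""
    return (as_planned_dim - as_built_dim) / as_planned_dim

# e.g. a 3.00 m as-planned wall length reconstructed as 2.91 m -> 3% error
err = dimensional_error(3.00, 2.91)
```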
The SVM classifier is trained using a training sample of nearly 3.2 million points, capturing variation across the domain of lighting conditions and data collection methods (viewing angle and distance from the object). A sample configuration is shown in the figure below. About 20% of the total data points collected (that is, approximately 0.8 million points) were used for validation. The evaluation metrics used were precision and recall (the details of the test data and evaluation metrics have been discussed in section 3).
Since the color features have been determined to be the defining features for a masonry wall, the SVM classifier was trained on two different feature spaces: (1) the spatial + color feature space and (2) the color feature space alone. The color space used in both cases was HSV. The precision and recall values for both cases are shown in the graphs below.
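A minimal sketch of how such a point classifier can be trained and scored with scikit-learn. The synthetic per-point features (three spatial plus three HSV values) and the two cluster centres are illustrative assumptions standing in for the thesis data, not the actual training set:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# synthetic stand-ins: [x, y, z, hue, saturation, value] per point
normal = rng.normal([0.0, 0.0, 0.0, 0.02, 0.80, 0.60], 0.1, size=(n, 6))
erroneous = rng.normal([0.0, 0.0, 0.0, 0.50, 0.30, 0.40], 0.1, size=(n, 6))
X = np.vstack([normal, erroneous])
y = np.array([0] * n + [1] * n)  # 0 = normal point, 1 = erroneous point

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)       # SVM on spatial + color features
pred = clf.predict(X_te)
precision = precision_score(y_te, pred)
recall = recall_score(y_te, pred)
```

The same precision/recall computation on a held-out 20% split mirrors the validation procedure described above.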
As the “spatial + color” classifier performs better, it is used for the remaining steps of the entire progress monitoring system. (Note: wherever the SVM classifier is mentioned hereafter, it refers to the “spatial + color” classifier.)
For the validation of the performance of the SVM classifier, the validation dataset is used. The validation dataset was collected and generated as discussed in section 4.4. The evaluation metrics, precision and recall, were calculated for all the models. These results are presented below.
The progress estimate was calculated using the validation dataset (the attributes of this dataset have been discussed in the previous sections). The metrics used for evaluating the performance of the system were the built-up area error percentage and the percentage build. Figures 24 and 25 plot the performance of the proposed methodology. Figure 24 compares the error percentage of the built-up area for the eight different wall configurations.
The error percentages in the figure range from -3% to 12%. A negative value means that the proposed methodology overestimates the built-up area, that is, it is not able to remove all erroneous points. This is attributed to the precision and recall values of the system; if these values are improved, the result will move closer to zero. Positive values indicate that the proposed methodology underestimated the built-up area, which is attributed to the fact that the 3D reconstruction is imperfect and leaves missing points in the reconstructed point cloud. These missing points cause underestimation, as the proposed method only operates on erroneous (that is, incorrectly registered) points and does nothing to populate the point cloud with the missing information. Therefore, there is a need to develop methodologies to perform filling operations on point clouds. One example of missing points is shown in figure 27.
For all wall configurations, the error percentage is calculated by considering the as-planned area as the reference.
The built-area error percentage is always positive here, which indicates that the area calculated using the proposed methodology is always less than the area of the as-planned model. This happens because of the same problem in all the models: the sides of the columns are not captured in the point cloud, leading to an inherent underestimation of the built-up area. This could be overcome by capturing every corner of the columns during data collection, but doing so was avoided because rescanning the same surfaces produces a poor-quality point cloud. This result highlights the need to recognize objects in the point cloud and plan the scan accordingly.
The percentage build for the different wall configurations is tabulated below. It is calculated for the as-planned model, using the manual method, using Scan-to-BIM without the SVM classifier in the process, and using the proposed methodology (that is, with the SVM classifier in the process).
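The two evaluation metrics can be sketched as follows. This is one plausible formulation consistent with the text (the sign convention follows the discussion above, where negative values indicate overestimation); the example areas are illustrative:

```python
def percentage_build(built_area, as_planned_area):
    """Share of the as-planned wall area that has been built, in percent."""
    return 100.0 * built_area / as_planned_area

def built_area_error(estimated_area, reference_area, as_planned_area):
    """Built-up area error as a percent of the as-planned area; negative
    values mean the estimate exceeds the reference (overestimation)."""
    return 100.0 * (reference_area - estimated_area) / as_planned_area

# e.g. 7.5 m^2 built of a 10 m^2 as-planned wall -> 75% build;
# an 8.0 m^2 estimate against a 7.5 m^2 reference -> -5% (overestimate)
pb = percentage_build(7.5, 10.0)
err = built_area_error(8.0, 7.5, 10.0)
```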
[Table 5: Built-up area and percentage build for each wall configuration, computed from the as-planned model, by the manual method, by Scan-to-BIM without the SVM classifier, and by the proposed methodology]
Figure 26 Variation of percentage build for various methods and different configurations
The results in table 5 and figure 26 are explained as follows. The as-planned model always has a larger percentage build than the other methods because the side surfaces of the columns are missing in the as-built model; this reduces the percentage build of every method calculated from the as-built model. In the cases where the “without SVM classifier” method overestimates relative to the manual method, the proposed methodology reduces the estimate and brings it closer to the correct value, for example, in configurations 2, 5, and 6. Sometimes, however, it reduces the estimate to well below the correct value. This seemingly poor performance is caused by the missing data in the as-built model, not by the proposed methodology itself; if this data could be repopulated, the values would move toward the correct value.
Figure 27 Missing points in the data collected (a) Holes or pocket of no points in the as-
built model (b) Sides of the columns not registered in the 3D reconstruction
5.5 SUMMARY
This chapter discusses the results produced by the proposed methodology in comparison to some of the existing methodologies and to manual means. The data was collected under different conditions of lighting, viewing direction, etc., and the impact of these variables on the quality of the data collected was studied. It was established that color is dominant when dealing with masonry data, owing to its vibrant red color. The impact of lighting conditions on the 3D reconstruction was also studied, establishing that poor lighting has a negative impact on the precision and recall. Finally, the performance of the progress estimation step was evaluated for the different construction stages of a masonry wall. The proposed methodology is found to perform better than the regular method but needs to be coupled with methodologies for filling missing points in point cloud data for better results. Conclusively, the proposed methodology improves upon the performance of the existing methods and therefore provides a unique and new approach to automated progress monitoring.
6.1 INTRODUCTION
This chapter summarizes the study and the results of the proposed methodology. It documents the conclusions drawn from the study and its contributions to the field. Finally, the future scope of the work is discussed and various possible research paths are identified.
6.2 SUMMARY
This study was focused broadly on two aspects of automated progress monitoring: firstly, the collection of a standardized dataset for benchmarking, and secondly, progress estimation of the construction project, implementing the result of the first aim in the process. The study developed a methodology for collecting data from the site for benchmarking. The standardized dataset was used for training the supervised classifier. Stereo vision was used for data acquisition and 3D reconstruction.
6.3 CONCLUSION
The study has presented a methodology for automated progress monitoring of a construction
project. It has also established a structured approach for collection of the standardized dataset
for benchmarking. Some of the noteworthy results of the study are listed below:
· Features for distinctively recognizing masonry in point cloud data were explored, and finally, color and spatial features were found to be the defining features.
· Using the spatial and color features, a supervised SVM classifier was trained for masonry recognition. The classifier performed with precision and recall values of 81% and 83%, respectively.
· A standardized dataset containing 4 million points spread across 27 models was generated.
· The trained classifier was used to process the raw point cloud and obtain meaningful as-built data.
· The as-built and as-planned models were used to calculate the progress estimate. The built-up area was calculated by the system with an error ranging from -3.45% to +11.80%. The error in the build percentage ranged from -1.84% to +4.49%.
· Despite poor performance for certain configurations, the methodology, if coupled with techniques for repopulating missing points, can produce better results.
The Scan to BIM registration process was manual, which has scope to be automated.
6.4 CONTRIBUTION
The contribution of the study can be divided into three fields of study:
machine learning before the final result of reconstruction. This helps to remove the erroneous points in the point cloud which otherwise appear due to the influence of factors like lighting conditions, viewing angle, and the distance of the camera from the object.
system with the implementation of supervised machine learning for the creation of as-
built data.
6.5 FUTURE WORKS
This research was focused on the erroneous points which appear during 3D reconstruction. During the study, however, it was realized that, in order to extract better results from the system, methodologies are also needed for repopulating points which are lost during data collection. This is necessary because the result is affected by missing data in a point cloud, since missing regions are considered not built by the system. In this study, the construction of a masonry wall was used for implementation and evaluation of the proposed methodology. In future, this must be extended to other construction materials, like concrete, timber, etc., and to other structures, like beams, ceilings, floors, etc.
Moreover, from the machine learning perspective, better-performing algorithms like Markov random fields, neural networks, etc. need to be tested and evaluated for the same task. A comparative study of these algorithms will help to choose the best-performing classifier for the system. In addition, for better performance of the system, other feature spaces like texture should be explored alongside the spatial and color feature spaces.
Although stereo vision-based imaging was employed for the study, the proposed methodology is expected to perform equally well irrespective of the technique used for data collection, as long as the input is a 3D point cloud with color features. Moreover, detection of construction material can help in the case of occluded scenes, where the building is partially hidden.
Abellán, A., Jaboyedoff, M., Oppikofer, T., and Vilaplana, J. M. (2009). “Detection of
millimetric deformation using a terrestrial laser scanner: Experiment and application to a
rockfall event.” Natural Hazards and Earth System Science, 9(2), 365–372.
Barnard, K., Martin, L., Funt, B., and Coath, A. (2002). “A Data Set for Colour Research.”
Color Research and Application, 27(3), 147–151.
Baumgart, B. G. (1972). “Winged Edge Polyhedron Representation.” National Technical
Information Service, (October).
Böhler, W., and Marbs, A. (2002). “3D scanning instruments.” Proceedings of the CIPA WG
6 International Workshop on Scanning for Cultural Heritage Recording, 9–18.
Bohn, J. S., and Teizer, J. (2010). “Benefits and Barriers of Construction Project Monitoring
Using High-Resolution Automated Cameras.” Journal of Construction Engineering and
Management, 136(June, 2010), 632–640.
Bosché, F. (2010). “Automated recognition of 3D CAD model objects in laser scans and
calculation of as-built dimensions for dimensional compliance control in construction.”
Advanced Engineering Informatics, Elsevier Ltd, 24(1), 107–118.
Bosche, F., and Haas, C. T. (2008). “Automated retrieval of 3D CAD model objects in
construction range images.” Automation in Construction, 17(4), 499–512.
Breunig, M. M., Kriegel, H.-P., Ng, R. T., and Sander, J. (2000). “LOF: Identifying Density-Based Local Outliers.” ACM SIGMOD Record, 29(2), 93–104.
Brilakis, I., Fathi, H., and Rashidi, A. (2011). “Progressive 3D reconstruction of
infrastructure with videogrammetry.” Automation in Construction, Elsevier B.V., 20(7),
884–895.
Campbell, R. J., and Flynn, P. J. (2001). “A survey of free-form object representation and
recognition techniques.” Computer Vision and Image Understanding, 81(2), 166–210.
Cantzler, H., Cantzler, H., Fisher, R., Fisher, R., Devy, M., and Devy, M. (2002). “Improving
architectural 3D reconstruction by plane and edge constraining.” Proc. British Machine
Vision Conf., Cardiff, 43–52.
Chan, A. P. C., Scott, D., and Chan, A. P. L. (2004). “Factors Affecting the Success of a
Construction Project.” Journal of Construction Engineering and Management, 130(1),
153–155.
Fan, T.-J., Medioni, G., and Nevatia, R. (1989). “Recognizing 3-D objects using surface
descriptions.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 11(11),
1140–1157.
Fathi, H., and Brilakis, I. (2011). “Automated sparse 3D point cloud generation of
infrastructure using its distinctive visual features.” Advanced Engineering Informatics,
Elsevier Ltd, 25(4), 760–770.
Fathi, H., and Brilakis, I. (2016). “Multistep Explicit Stereo Camera Calibration Approach to
Improve Euclidean Accuracy of Large-Scale 3D Reconstruction.” Journal of Computing
in Civil Engineering, 30(1), 04014120.
Filin, S., and Pfeifer, N. (2005). “Neighborhood Systems for Airborne Laser Data.”
Photogrammetric Engineering & Remote Sensing, 71(6), 743–755.
Froese, T. M. (2010). “The impact of emerging information technology on project
management for construction.” Automation in Construction, Elsevier B.V., 19(5), 531–
538.
Geusebroek, J. M., Burghouts, G. J., and Smeulders, A. W. M. (2005). “The Amsterdam
library of object images.” International Journal of Computer Vision, 61(1), 103–112.
Golparvar-Fard, M., Bohn, J., Teizer, J., Savarese, S., and Peña-Mora, F. (2011). “Evaluation
of image-based modeling and laser scanning accuracy for emerging automated
performance monitoring techniques.” Automation in Construction, Elsevier B.V., 20(8),
1143–1155.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2011). “Integrated Sequential As-Built and As-Planned Representation with D4AR Tools in Support of Decision-Making Tasks in the AEC/FM Industry.” Journal of Construction Engineering and Management, American Society of Civil Engineers, 137(December), 1099–1116.
Golparvar-Fard, M., Peña-Mora, F., and Savarese, S. (2015). “Automated Progress
Monitoring Using Unordered Daily Construction Photographs and IFC-Based Building
Information Models.” Journal of Computing in Civil Engineering, 29(1), 04014025.
Hackel, T., Savinov, N., Ladicky, L., Wegner, J. D., Schindler, K., and Pollefeys, M. (2017).
“Semantic3D.Net: a New Large-Scale Point Cloud Classification Benchmark.” ISPRS
Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences,
4(1W1), 91–98.
Horn, B. K. P., and Ikeuchi, K. (1984). “The Mechanical Manipulation of Randomly
Oriented Parts.” Scientific American - SCI AMER, 251(2), 100–111.
Jacobs, D. W., Belhumeur, P. N., and Basri, R. (1998). “Comparing images under variable
illumination.” Proceedings of the IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, 610–617.
Jaselskis, E. J., and Ashley, D. B. (1991). “Optimal Allocation of Project Management
Resources for Achieving Success.” Journal of Construction Engineering and
Management, 117(2), 321–340.
Jia, J., Qin, Z., and Lu, J. (2007). “Stratified helix information of medial-axis-points
matching for 3D model retrieval.” Proceedings of the international workshop on
Workshop on multimedia information retrieval - MIR ’07, 169.
Kakumanu, P., Makrogiannis, S., and Bourbakis, N. (2007). “A survey of skin-color
modeling and detection methods.” Pattern Recognition, 40(3), 1106–1122.
Kasim, N., Shamsuddin, A., Zainal, R., and Kamarudin, N. C. (2012). “Implementation of
RFID technology for real-time materials tracking process in construction projects.”
CHUSER 2012 - 2012 IEEE Colloquium on Humanities, Science and Engineering
Research, (Chuser), 699–703.
Kazhdan, M., Funkhouser, T., and Rusinkiewicz, S. (2003). “Rotation Invariant Spherical
Harmonic Representation of 3D Shape Descriptors.” Image Rochester NY, 43, 156–164.
Kemper, A., and Wallrath, M. (1987). “An analysis of geometric modeling in database
systems.” ACM Computing Surveys, ACM, 19(1), 47–91.
Klein, L., Li, N., and Becerik-Gerber, B. (2012). “Imaged-based verification of as-built
documentation of operational buildings.” Automation in Construction, Elsevier B.V.,
21(1), 161–171.
Knorr, E. M., Ng, R. T., and Tucakov, V. (2000). “Distance-based outliers: algorithms and
applications.” The VLDB Journal The International Journal on Very Large Data Bases,
8(3–4), 237–253.
Kopsida, M., Brilakis, I., and Vela, P. (2015). “A Review of Automated Construction
Progress and Inspection Methods.” Proceedings of the 32nd CIB W78 Conference on
Construction IT, (January), 421–431.
Kuiaski, D., Neto, H. V., Borba, G., and Gamba, H. (2009). “A study of the effect of
illumination conditions and color spaces on skin segmentation.” Proceedings of
SIBGRAPI 2009 - 22nd Brazilian Symposium on Computer Graphics and Image
Processing, 245–252.
Lange, A. F., and Gilbert, C. (1999). “Using GPS for GIS data capture.” Geographical
information systems: Principles, techniques, management and applications.
Lee, I., and Schenk, T. (2002). “Perceptual organization of 3D surface points.” International
Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences,
34(3A), 193–198.
Lee, S., Asce, M., Choe, S., Fang, Y., Akhavian, R., Asce, S. M., Hwang, S., Asce, A. M.,
Leite, F., Cho, Y., Behzadan, A. H., Lee, S., Choe, S., Fang, Y., Akhavian, R., and
Hwang, S. (2016). “Visualization , Information Modeling , and Simulation : Grand
Challenges in the Construction Industry.” Journal of Computing in Civil Engineering,
30(6), 1–16.
Li, H., Chen, Z., Yong, L., and Kong, S. C. W. (2005). “Application of integrated GPS and
GIS technology for reducing construction waste and improving construction efficiency.”
Automation in Construction, 14(3), 323–331.
Li, N., and Becerik-Gerber, B. (2011). “Performance-based evaluation of RFID-based indoor
location sensing solutions for the built environment.” Advanced Engineering
Informatics, Elsevier Ltd, 25(3), 535–546.
Lin, W.-C., and Chen, T.-W. (1988). “CSG-based object recognition using range images.”
9th International Conference on Pattern Recognition, IEEE Comput. Soc. Press, 99–
103.
Linsen, L., and Prautzsch, H. (2001). “Local Versus Global Triangulations.” Eurographics.
De Marco, A., Briccarello, D., and Rafele, C. (2009). “Cost and Schedule Monitoring of
Industrial Building Projects: Case Study.” Journal of Construction Engineering and
Management, 135(9), 853–862.
McCoy, A. P., Golparvar-Fard, M., and Rigby, E. T. (2014). “Reducing Barriers to Remote
Project Planning: Comparison of Low-Tech Site Capture Approaches and Image-Based
3D Reconstruction.” Journal of Architectural Engineering, 20(1), 05013002.
Memon, Z. A., Majid, M. Z. A., and Mustaffar, M. (2006). “Investigating the Issue of Project
Progress Performance and Proposing a Prototype Software: A Case Study of Malaysian
Construction Industry.” Joint International Conference on Computing, Decision Making
in Civil and Building Engineering, 14–16.
Mikhail, E. M., Bethel, J. S., and McGlone, J. C. (2001). Introduction to Modern Photogrammetry. Wiley, New York; Chichester.
Moon, S., and Yang, B. (2010). “Effective Monitoring of the Concrete Pouring Operation in
an RFID-Based Environment.” Journal of Computing in Civil Engineering, 24(1), 108–
116.
Navon, R., and Sacks, R. (2007). “Assessing research issues in Automated Project
Performance Control (APPC).” Automation in Construction, 16(4), 474–484.
Navon, R., and Shpatnitsky, Y. (2005). “Field Experiments in Automated Monitoring of
Road Construction.” Journal of Construction Engineering & Management, 131(4), 487–
493.
Nene, S., Nayar, S., and Murase, H. (1996a). “Columbia Object Image Library (COIL-20).”
Technical Report, 95, 223–303.
Nene, S., Nayar, S., and Murase, H. (1996b). “Columbia Object Image Library (COIL-100).”
Technical Report, 95, 223–303.
Ng, H., Chen, I., Liao, H., and Engineering, I. (2013). “An Illumination Invariant Image
Descriptor for Color Image Matching.” 25, 306–311.
Nüchter, A., and Hertzberg, J. (2008). “Towards semantic maps for mobile robots.” Robotics
and Autonomous Systems, Elsevier B.V., 56(11), 915–926.
Osada, R., Funkhouser, T., Chazelle, B., and Dobkin, D. (2002). “Shape distributions.” ACM
Transactions on Graphics, 21(4), 807–832.
Papadimitriou, S., Kitagawa, H., Gibbons, P. B., and Faloutsos, C. (2003). “LOCI: Fast
outlier detection using the local correlation integral.” Proceedings - International
Conference on Data Engineering, 315–326.
Pătrăucean, V., Armeni, I., Nahangi, M., Yeung, J., Brilakis, I., and Haas, C. (2015). “State
of research in automatic as-built modelling.” Advanced Engineering Informatics, 29(2),
162–171.
Rashidi, A., Sigari, M. H., Maghiar, M., and Citrin, D. (2016). “An analogy between various
machine-learning techniques for detecting construction materials in digital images.”
KSCE Journal of Civil Engineering, 20(4), 1178–1188.
Remondino, F., and El-Hakim, S. (2006). “Image-Based 3D Modelling : a Review.” The
Photogrammetric Record 21(115): 269–291, 21(September), 269–291.
Rusinkiewicz, S. (2004). “Estimating Curvatures and Their Derivatives on Triangle Meshes.”
Symposium on 3D Data Processing, Visualization, and Transmission, September 2004,
1–8.
Rusu, R. B., Marton, Z. C., Blodow, N., Dolha, M., and Beetz, M. (2008). “Towards 3D
Point cloud based object maps for household environments.” Robotics and Autonomous
Systems, Elsevier B.V., 56(11), 927–941.
Sabry. (2000). “3D Modeling of Complex Environments.” 4309, 1–12.
Serby, D., Meier, E. K. K., and Van Gool, L. (2004). “Probabilistic object tracking using
multiple features.” Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th
International Conference on, 184–187.
Shanbari, H. A., Blinn, N. M., Raymond Issa, R., and Professor, H. (2016). “Laser Scanning
Technology and Bim in Construction Management Education.” Journal of Information
Technology in Construction, 21(21), 204–217.
Shilane, P., Min, P., Kazhdan, M., and Funkhouser, T. (2004). “The Princeton Shape
Benchmark.” Proceedings - Shape Modeling International SMI 2004, IEEE, 167–178.
Son, H., and Kim, C. (2010). “3D structural component recognition and modeling method
using color and 3D data for construction progress monitoring.” Automation in
Construction, Elsevier B.V., 19(7), 844–854.
Son, H., Kim, C., and Kim, C. (2012). “Automated Color Model – Based Concrete Detection
in Construction-Site Images by Using Machine Learning Algorithms.” Journal of
Computing in Civil Engineering, 26(June), 421–433.
Song, J., Haas, C. T., and Caldas, C. H. (2006). “Tracking the Location of Materials on
Construction Job Sites.” Journal of Construction Engineering and Management, 132(9),
911–918.
Tang, P., Huber, D., Akinci, B., Lipman, R., and Lytle, A. (2010). “Automatic reconstruction
of as-built building information models from laser-scanned point clouds: A review of
related techniques.” Automation in Construction, Elsevier B.V., 19(7), 829–843.
Tissainayagam, P., and Suter, D. (2005). “Object tracking in image sequences using point
features.” Pattern Recognition, 38(1), 105–113.
Trucco, E., and Fisher, B. (1995). “Experiments in Curvature-Based Segmentation of Range
Data.” IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(2), 177–
182.
Tsai, M. K., Yang, J. Bin, and Lin, C. Y. (2007). “Synchronization-based model for
improving on-site data collection performance.” Automation in Construction, 16(3),
323–335.
Turkan, Y., Bosche, F., Haas, C. T., and Haas, R. (2012). “Automated progress tracking
using 4D schedule and 3D sensing technologies.” Automation in Construction, Elsevier
B.V., 22, 414–421.
Valero, E., Adan, A., and Cerrada, C. (2012). “Automatic construction of 3D basic-semantic
models of inhabited interiors using laser scanners and RFID sensors.” Sensors
(Switzerland), 12(5), 5705–5724.
Wang, J., Xu, K., Liu, L., Cao, J., Liu, S., Yu, Z., and Gu, X. D. (2013). “Consolidation of
low-quality point clouds from outdoor scenes.” Eurographics Symposium on Geometry
Processing, 32(5), 207–216.
Wang, X., and Love, P. E. D. (2012). “BIM + AR: Onsite information sharing and
communication via advanced visualization.” Proceedings of the 2012 IEEE 16th
International Conference on Computer Supported Cooperative Work in Design,
CSCWD 2012, 850–855.
Weinmann, M., Schmidt, A., Mallet, C., Hinz, S., Rottensteiner, F., and Jutzi, B. (2015).
“Contextual Classification of Point Cloud Data By Exploiting Individual 3D
Neighbourhoods.” ISPRS Annals of Photogrammetry, Remote Sensing and Spatial
Information Sciences, II-3/W4, 271–278.
Yang, X., and Tian, Y. (2010). “Robust door detection in unfamiliar environments by
combining edge and corner features.” 2010 IEEE Computer Society Conference on
Computer Vision and Pattern Recognition - Workshops, CVPRW 2010, 57–64.
Yin, S. Y. L., Tserng, H. P., Wang, J. C., and Tsai, S. C. (2009). “Developing a precast
production management system using RFID technology.” Automation in Construction,
Elsevier B.V., 18(5), 677–691.
Yue, K., Huber, D., Akinci, B., and Krishnamurti, R. (2007). “The ASDMCon project: The
challenge of detecting defects on construction sites.” Proceedings - Third International
Symposium on 3D Data Processing, Visualization, and Transmission, 3DPVT 2006,
1048–1055.
Zhang, C., and Arditi, D. (2013). “Automated progress control using laser scanning
technology.” Automation in Construction, Elsevier B.V., 36, 108–116.
Zhang, X., Bakis, N., Lukins, T. C., Ibrahim, Y. M., Wu, S., Kagioglou, M., Aouad, G.,
Kaka, A. P., and Trucco, E. (2009). “Automating progress measurement of construction
projects.” Automation in Construction, Elsevier B.V., 18(3), 294–301.
Zhu, Z., and Brilakis, I. (2009). “Comparison of Optical Sensor-Based Spatial Data
Collection Techniques for Civil Infrastructure Modeling.” Journal of Computing in Civil
Engineering, 23(3), 170–177.