Artificial intelligence exploitation in facility management using deep learning
Mohamed Marzouk and Mohamed Zaher
Department of Structural Engineering, Cairo University, Giza, Egypt
Received 4 December 2019
Revised 15 February 2020
Accepted 19 April 2020
Abstract
Purpose – This paper aims to apply a methodology that is capable of classifying and localizing mechanical, electrical and plumbing (MEP) elements to assist facility managers. Furthermore, it helps reduce the technical complexity and sophistication of different systems for the facility management (FM) team.
Design/methodology/approach – This research exploits artificial intelligence (AI) in FM operations by proposing a new system that uses a deep learning pre-trained model for transfer learning. The model can identify new MEP elements through image classification with a deep convolutional neural network, using a support vector machine (SVM) technique under supervised learning. In addition, an expert system is developed and integrated with an Android application into the proposed system to identify the required maintenance for the identified elements. The FM team can reach the identified assets with Bluetooth tracker devices to perform the required maintenance.
Findings – The proposed system aids facility managers in their tasks and decreases the maintenance costs of facilities by maintaining, upgrading and operating assets cost-effectively.
Research limitations/implications – The paper considers three fire protection systems for proactive maintenance, whereas other structural or architectural systems can also significantly affect the level of service and incur expensive repairs and maintenance. Also, the proposed system relies on different platforms that need to be consolidated for facility technician and manager end-users. Therefore, the authors will consider these limitations and expand the study as a case study in future work.
Originality/value – This paper helps, in a proactive manner, to reduce the lack of knowledge of the maintenance required for MEP elements, which leads to a lower life cycle cost. These MEP elements account for a large share of the operation and maintenance costs of building facilities.
Keywords Deep learning, Supervised learning, Expert system, Facilities management,
Artificial intelligence, Neural networks
Paper type Research paper
1. Introduction
Facility management (FM) gained profound importance because of the increasing
complexity of different systems and the cost of operation and maintenance. Moreover, FM
cost is realized to be greater than the initial cost of construction (Becerik-Gerber et al., 2012). The International Facility Management Association (IFMA, 2020) defines FM as
“a profession that encompasses multiple disciplines to ensure functionality, comfort, safety
and efficiency of the built environment by integrating people, place, process and
technology.” Facility managers require access to extensive information for the facilities they
maintain (Perez et al., 2019; Irizarry et al., 2013; Rankohi and Waugh, 2013). Frequently,
facility managers are required to link assets to text-based information (Irizarry et al., 2013).
FM's manual, classical operation of data collection is error-prone, time-consuming, must be operated by experts and is therefore expensive (Kiziltas and Akinci, 2005; Turkan et al., 2012; Bae et al., 2013).
Artificial intelligence (AI) is defined as "the study of techniques for solving exponentially hard problems in polynomial time by exploiting knowledge about a problem domain" (Rich and Knight, 1992). AI has been making headlines because of its developments in many fields, such as problem-solving and planning, expert systems, natural language processing, robotics, computer vision, machine learning, genetic algorithms and neural
networks (Krishnamoorthy and Rajeev, 1996). Deep learning is a subfield of machine learning that uses a deep neural network to make predictions, whilst machine learning is a data analytics technique, part of AI, that focuses on automatic learning from data to solve problems. The term "deep" usually refers to the number of hidden layers in the deep neural network, as traditional neural networks contain only a few layers (Mosavi et al., 2019). The main difference between machine learning ("shallow learning") and deep learning is that traditional machine learning has to figure out some metrics or features of images that can be used to classify them. Those features are calculated for image sets and then used to train a classifier. Finally, that classifier can be used to make a prediction on a new image by calculating its features first and then passing them to the classifier. Winkler and Le (2017) describe the difference between deep and shallow neural networks and compare their abilities. On the other hand, deep learning performs end-to-end learning, where the images are the inputs and both the features and the classification are learned directly from the images. Deep
learning shows extremely high accuracy results in many areas such as speech recognition,
natural language processing and computer vision (Najafabadi et al., 2015). Moreover, under
the umbrella of AI, machine learning, decision-making and reasoning under uncertainty,
Bayesian networks have been applied in automated learning and solving AI problems
(Wiegerinck et al., 2013; Korb and Nicholson, 2010). Bayesian networks have many
applications in economy, biology and medicine (Pérez–Ariza et al., 2012).
The immediate access of information assists facility managers in avoiding mistaken
decisions made in the absence of information along with minimizing the time and personnel
required for retrieving information (Ergen et al., 2007). Mechanical, electrical and plumbing
(MEP) engineering may include more than 10 subsystems, where each system consists of
many complex components (Hu et al., 2016); as the number of subsystems increases, facility technicians may not be able to classify MEP assets. MEP operation and maintenance costs can reach up to 60% of the total cost (Teicholz, 2004). If facility managers do not carry out scheduled or regular maintenance, most maintenance becomes reactive, which may lead to sudden failure in operation and an increase in the life cycle cost (LCC) (Lavy, 2008). Generally, a facility may contain the same MEP element in different locations, such as fire extinguishers, air conditioners and electrical panel boards. Locating facility components, such as equipment and materials, is essential for both proactive and reactive maintenance. Facility component localization is a time-consuming, labor-intensive and repetitive task for FM personnel, especially for outsourced FM (Becerik-Gerber et al., 2012).
In light of the above, this paper raises some research questions derived from the problem, which the proposed system answers to assist facility managers and technical staff in their tasks. The questions are as follows:
2. Literature review
Based on the reviewed literature, a research gap was found regarding research focusing on assisting facility managers using convolutional neural networks (CNNs) and AI, which stimulated the authors to exploit AI in FM operations through deep learning using pre-trained models for transfer learning. Facility managers have large quantities of data that require the employment of numerous staff members, which drives up the operational expenditure. However, AI can be used to harness the available data, whereas the capacity of humans for processing data is often limited (Atkin and Bildsten, 2017). Fang et al. (2019) used a machine learning approach to assist facility managers in performing text clustering and classification
automatically. Finally, deep learning has gained profound attention because of its remarkable progress, which outperforms human understanding (Russakovsky et al., 2015). Deep learning is a machine learning technique that teaches computers to learn from images, text or sound. Machine learning is divided into supervised learning and unsupervised learning. Supervised learning is, in turn, divided into two subfields: classification and regression. Classification outputs discrete (categorical) response values, while regression outputs continuous response values. Mahfouz and Kandil (2012) compared three machine learning algorithms: support vector machine (SVM), naïve Bayesian inductive learning and neural networks. It was concluded that SVM, which is used in this research, was the most accurate machine learning algorithm in prediction.
During the past decade, neural networks have been responsible for recent AI breakthroughs in applications of computer vision techniques and image processing, which have made impressive progress in the architecture, engineering, construction and facility management (AEC/FM) industry. Al-Mahasneh et al. (2017) discussed the development of neural network applications and their current trends in their review paper. Computer vision is a field that overlaps with AI, image processing, machine learning, deep learning, image recognition and many other fields. Many computer vision studies have been conducted on object recognition, material recognition, damage detection and progress monitoring, which serve the AEC/FM industry.
Figure 1. Proposed research methodology
performing transfer learning to suit the research problem. Then, the results of deep learning are evaluated for the ability to classify facility objects. However, identified objects exist on different floors or in different spaces within the facility, so Phase 2 continues by proposing a method that identifies specific object locations in a facility. Then, an expert system (ES) and an Android application are developed to identify the required maintenance, simulating intelligence in decision-making to perform the required maintenance.
To perform transfer learning, three components are required to be created, as follows:
(1) An array of layers representing the network architecture, which is created by modifying existing pre-trained networks such as AlexNet (Russakovsky et al., 2015), GoogLeNet (Szegedy et al., 2015) and ResNet (He et al., 2016).
(2) Images with known labels to be used as training data, which are typically provided as a datastore.
(3) A variable containing the options that control the behavior of the training algorithm.
These three components are provided as inputs to the "trainNetwork" function in MATLAB, which returns the trained network as output. Most of the pre-trained network's layers are convolution, pooling and ReLU layers. These take the original input image and extract various features that can then be used for classification. The pre-trained CNN considered in this research is AlexNet, which was designed by Krizhevsky et al. (2017) and has been trained on approximately 1.2 million images from the ImageNet data set (http://image-net.org/index). AlexNet relies on a supervised SVM technique. The model has 23 layers and can classify images into one of 1,000 predetermined object categories, such as projector, laptop, printer, desktop computer, electric fan, table lamp, screen, computer keyboard and many other items that may exist in facilities.
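As an illustration of this step, a minimal MATLAB sketch (assuming the Deep Learning Toolbox and the AlexNet support package are installed; the image file name is hypothetical) that loads the pre-trained network and classifies a single facility image against the original 1,000 ImageNet categories could look like the following:

    % Load the pre-trained AlexNet network (requires the AlexNet support package)
    net = alexnet;

    % Inspect the layer array: layer 1 is the 227-by-227-by-3 image input layer,
    % layer 23 is the 1,000-neuron fully connected layer and layer 25 is the output layer
    net.Layers

    % Classify one facility image against the original 1,000 ImageNet categories
    img = imresize(imread('sample_projector.jpg'), [227 227]);   % hypothetical image file
    label = classify(net, img)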
4.1.1 Creating a datastore. A datastore is a MATLAB variable that acts as a reference to a data source, such as a folder of image files. A datastore can be used to import the data when needed. A set of images for the fire protection systems, Class 3, in the proposed classification, namely, FM200, fire extinguisher and fire hose cabinet, has been downloaded from the internet from different sources to be trained, validated and tested in this study. As the size of the data set increases, the deep learning network continues to improve. Moreover, the behavior of the network is learned from the data used: two networks can have the same architecture but behave differently if they are trained using different data sets. However, the downloaded images have different sizes, whereas the AlexNet network expects input images of size 227 × 227 × 3, where MATLAB represents a red, green and blue image as an m-by-n-by-3 array and m-by-n is the corresponding image pixel height and width. So, all input images are resized using the "imresize" function in MATLAB. When training a network, known labels for the training images shall be provided to perform supervised learning. So, images of fire protection systems are collected in a folder containing three subfolders, each of which contains a set of clear images of one type of fire protection system, together with imperfect images that may have odd angles or be poorly framed or cropped. The name of the subfolder can, therefore, be used in MATLAB to provide the labels needed for training.
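A minimal sketch of this datastore creation, assuming the images are stored in a folder named 'fireProtection' with subfolders 'FM200', 'FireExtinguisher' and 'FireHoseCabinet' (folder names are illustrative, not taken from the paper), could be:

    % Create a datastore that labels each image by the name of its subfolder
    imds = imageDatastore('fireProtection', ...
        'IncludeSubfolders', true, ...
        'LabelSource', 'foldernames');

    % Resize every image on read to the 227-by-227-by-3 input size expected by AlexNet
    % (assumes the downloaded images are RGB)
    imds.ReadFcn = @(file) imresize(imread(file), [227 227]);

    % Check how many labeled images exist per class
    countEachLabel(imds)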
4.1.2 Performing transfer learning. In this section, the transfer learning algorithm, training options and training sets are illustrated. Object recognition using deep learning can be performed through pre-trained deep learning models or by training a new model from scratch, which requires large quantities of training data, network architecture design and much more time. In this paper, it was decided to use a pre-trained model for transfer learning rather than starting over from scratch, to overcome this limitation.
Training involves applying an algorithm that iteratively improves the network's ability to correctly identify the training images. This algorithm can be fine-tuned with many parameters, such as how many training images to use, the training algorithm used, the learning rate and so forth. A feed-forward neural network is represented in MATLAB as an array of layers, which makes it easy to index into the layers of a network and change them: a new layer is created, then the layer array that represents the network is indexed into and the chosen layer is overwritten with the newly created layer, where the goal of transfer learning is to fine-tune an existing network. The 23rd layer of AlexNet is a fully connected layer with 1,000 neurons. This takes the extracted features from the previous layers and maps them to the 1,000 output classes. The next layer, the softmax layer, turns the raw values for the 1,000 classes into normalized scores so that, roughly speaking, each value can be interpreted as the network's prediction of the probability that the image belongs to that class. The last layer then takes these probabilities and returns the most likely class as the network's output. Typically, when performing transfer learning, these last few layers are modified to suit the research problem of classifying the fire protection systems. In this way, the network will have the same feature extraction behavior as the pre-trained network but has not yet been trained to map these features to image classes. When the network is trained with new data, it will learn to map and refine the feature extraction to be more specific.
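A sketch of this layer replacement, assuming the layer indexing described above (layer 23 the fully connected layer and layer 25 the classification output layer) and continuing from the earlier sketch, might be:

    % Copy the pre-trained layer array and overwrite the last layers
    layers = net.Layers;

    % Replace the 1,000-neuron fully connected layer with a three-class layer
    layers(23) = fullyConnectedLayer(3);

    % Replace the output layer with a blank one; the three class labels
    % are determined from the training data labels during training
    layers(25) = classificationLayer;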
A set of training options has to be specified when a network is trained. As with any indexed assignment in MATLAB, these steps can be combined into one line. Currently, the output layer still uses the 1,000 labels for the AlexNet classes. This will cause a problem when information is passed from the new (three-class) fully connected layer that has been created. To fix this, the output layer is replaced with a new blank output layer; the three classes will be determined from the training data labels during training. A common problem with all machine learning algorithms for supervised learning is over-fitting. This is when the model classifies the training data accurately but performs much worse on the test data, which means the network has learned the details of the training data set rather than the general patterns. To avoid this, some of the data shall be used for validation. The data set is split into randomly chosen images for training and testing, where the training set is 80% of the total number of images and the remaining 20% of images are used for testing. When performing transfer learning, the solver (optimizer) name shall be specified as an input argument. The stochastic gradient descent with momentum (SGDM) optimizer is selected to reduce the oscillation toward the optimum. The learning rate controls how aggressively the algorithm changes the network weights; it is specified through the "InitialLearnRate" name-value pair argument, which is set to 0.001, where the default value is 0.01. However, if the selected training options are not adequate, some of the training options are typically adjusted and retraining is performed.
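Putting these pieces together, a hedged end-to-end sketch of the training step (variable names continue from the earlier sketches; option values other than the 0.001 learning rate and the 80/20 split are illustrative) could be:

    % Split the datastore: 80% of the images per class for training, 20% for testing
    [trainImds, testImds] = splitEachLabel(imds, 0.8, 'randomized');

    % Training options: SGDM solver with the reduced initial learning rate of 0.001
    options = trainingOptions('sgdm', ...
        'InitialLearnRate', 0.001, ...
        'MaxEpochs', 20, ...              % illustrative value
        'MiniBatchSize', 16, ...          % illustrative value
        'Plots', 'training-progress');    % shows the training loss plot

    % Fine-tune the modified AlexNet layers on the fire protection images
    trainedNet = trainNetwork(trainImds, layers, options);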
4.2 Location identification
Several tracking systems, such as global positioning system (GPS), Bluetooth and radio-frequency identification (RFID), are available nowadays. This paper adopted Bluetooth trackers, as they suit the localization of indoor objects. Many Bluetooth tracking devices are commercially available, such as TrackR Bravo (TrackR, 2020) and Tile Pro (Tile, 2020). The TrackR Bravo Bluetooth tracking device is attached to the most important facility assets to allow facility managers to track the exact location of the classified objects, as many assets may be identical. So, locating the correct object is crucial to avoid unnecessary maintenance for the previously identified objects. TrackR provides a free app for the Bluetooth tracking devices, which are attached to the assets using a 3M adhesive patch and have a Bluetooth range of up to 100 feet. As such, the mobile device can locate the asset by producing a notification with a volume of up to 90 dB from the tracking device. Figure 2 illustrates, on the right, the Bluetooth tracking device that is attached to the identified assets. On the left of the figure, a screenshot from the TrackR mobile application shows the FM200 that requires maintenance.
Figure 2. Bluetooth tracking
ES-Builder features a decision tree modeling process for developing the logic of the expert system.
4.3.2 Android application development. This section describes the newly developed Android application, namely, "FM Expert", developed for this research through MIT App Inventor (MIT, 2020), a cloud-based tool that is used to build the proposed Android application in a web browser through the Java programming language. The tool is divided into a group of blocks that have functions and a design interface for application design to ease use by end-users. A friendly user interface is crucial to assist facility managers and the involved team in their tasks. The operating system selected for the mobile application was Android, as the Android operating system accounts for 85.9% of worldwide smartphone sales to end-users, whilst iOS accounts for 14.0% and other operating systems for 0.1% (Gartner, 2018).
The home screen of the developed application "FM Expert" is designed to be user-friendly and easy to use. As shown on the left of Figure 3, the application allows users to select among the different fire protection systems that have been classified. On the right of Figure 3, proactive maintenance is illustrated for the firefighting extinguisher. The rationale behind developing this Android application is that the developed expert system requires a computer, which may not be accessible when needed, whilst the application is accessible at any time, as technicians are constantly moving through facilities holding their mobile devices.
Figure 3. Developed "FM Expert" Android application
5. Results and discussion
This section interprets the results of the conducted research, where the research hypothesis postulated that the proposed system can assist facility managers in their tasks. This research proposed an MEP elemental classification for educational facilities to categorize these elements into three levels. Then, a proof-of-concept case study was conducted on three fire protection systems, Class 3, using 81 images as a data set to perform transfer learning on the AlexNet network. The training was completed in 8:27 min. The left side of Figure 4 depicts a plot of the training loss, where the number of iterations is shown on the x-axis and the losses are shown on the y-axis. The loss is a measure of how far the network is from a perfect prediction, totaled over the set of training images, and should decrease toward zero as the training proceeds.
It is also important to investigate how the network performs on the different image classes. Are misclassifications randomly distributed, or are there particular classes that are difficult for the network? Are there classes that the network disproportionately confuses? The "confusionchart" function calculates and displays the confusion matrix for the predicted classifications. The right side of Figure 4 depicts the confusion matrix of the trained network, revealing that out of seven training images of FM-200, seven images are classified accurately. The (a, b) element of the confusion matrix is a count of how many images from Class a the network predicted to be in Class b. As such, diagonal elements represent correct classifications and off-diagonal elements represent misclassifications.
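The evaluation described above can be reproduced with a short MATLAB sketch such as the following (variable names continue from the earlier sketches):

    % Classify the held-out test images with the fine-tuned network
    predictedLabels = classify(trainedNet, testImds);

    % Overall accuracy: the fraction of test images classified correctly
    accuracy = mean(predictedLabels == testImds.Labels)

    % Confusion matrix: diagonal entries are correct classifications,
    % off-diagonal entries are misclassifications
    confusionchart(testImds.Labels, predictedLabels);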
In light of the above, it can be concluded that Phase 1 of the proposed methodology achieves high accuracy in deep learning, as evaluated using a confusion matrix and a plot of training loss indicating that the model is trained adequately. In Phase 2, location identification and maintenance information retrieval were tested, and the results suggest that the approach can be used broadly by feeding the expert system and the Android application with more knowledge about the required maintenance, improving facility managers' performance with a lower LCC.
6. Conclusions
This research contributes to assisting facility managers, who are increasingly searching for effective and rapid means to identify facility assets in order to maintain the desired level of service in a timely and proactive manner prior to sudden failure or expensive repairs or replacement. It can be argued that the research hypothesis is validated through the paper's results, whereby AI and its subfields can be exploited in FM to assist facility managers.
Figure 4. Trained network performance
Compared with traditional methods for this task, the proposed system provides immediate information access, along with minimizing the time and labor of the process and cutting down the LCC. This paper raises some questions in the introduction section and attempts to answer them through the proposed system that exploits AI in FM. After answering the research questions, it can be concluded that the proposed system assists facility managers in their tasks and decreases operation and maintenance costs through identifying and performing the proactive regular maintenance of the classified assets. As such, sudden failure in operation and costly reactive maintenance can be avoided.
The paper presented a deep convolutional neural network (DCNN)-based method that uses the pre-trained network AlexNet for object classification from a set of images, along with localization of objects using a Bluetooth tracking device. Hence, facility managers can access the proposed expert system and Android application to retrieve the required maintenance. The pre-trained network AlexNet can already classify many objects that exist in a facility; however, this pre-trained network was fine-tuned to learn a new set of fire protection system images as a prototype to suit the research problem.
In this work, the authors performed transfer learning on the AlexNet network using images of different fire protection systems, Class 3, as the data set. The network's accuracy was evaluated, showing outstanding results. This paper addressed the localization problem by attaching Bluetooth tracking devices to the identified objects, which produce notifications to guide technicians to them. The study covers one more aspect of retrieving information through two different methods to get the right information for the right location: the authors developed an expert system along with an Android application to show the periodically required maintenance for MEP elements, maintaining the desired level of service for different facility assets.
This paper has some limitations, such as considering only three elements of the Level 3 class of the customized MEP elemental classification. Also, the proposed system relies on different platforms for identifying proactive maintenance for the classified elements. These limitations will be addressed in future work to make this approach more practicable by considering reactive maintenance in an integrated mobile platform. This integration will allow users to classify objects in real time without using computers and to access the required information directly, such as drawings, specifications and manuals, as per the available documents and the required maintenance. Finally, future work will validate the approach through a case study and expert interviews.
References
Akram, M., Abdul Rahman, I. and Memon, I. (2014), “A review on the expert system and its
applications in civil engineering”, International Journal of Civil Engineering and Built
Environment, Vol. 1 No. 1, pp. 24-29.
Al-Mahasneh, A.J., Anavatti, S.G. and Garratt, M.A. (2017), The Development of Neural Networks
Applications from Perceptron to Deep Learning, IEEE, Surabaya, Indonesia.
Atkin, B. and Bildsten, L. (2017), “A future for facility management”, Construction Innovation, Vol. 17
No. 2, pp. 116-124.
Azar, E.R. and McCabe, B. (2012), “Automated visual recognition of dump trucks in construction
videos”, Journal of Computing in Civil Engineering, Vol. 26 No. 6, p. 26.
Bae, H., Golparvar-Fard, M. and White, J. (2013), “High-precision vision-based mobile augmented
reality system for context-aware architectural, engineering, construction and facility
management (AEC/FM) applications”, Visualization in Engineering, Vol. 1 No. 1.
Becerik-Gerber, B., Jazizadeh, F., Li, N. and Calis, G. (2012), “Application areas and data requirements
for BIM-enabled facilities management”, Journal of Construction Engineering and Management,
Vol. 138 No. 3, pp. 431-442.
Berrais, A. and Watson, A. (1993), “Expert systems for seismic engineering: the state-of-the-art”,
Engineering Structures, Vol. 15 No. 3, pp. 146-154.
Bortolini, R. and Forcada, N. (2019), “Analysis of building maintenance requests using a text mining
approach: building services evaluation”, Building Research and Information, Vol. 48 No. 2,
pp. 1-11.
Cha, Y.-J., Choi, W. and Büyüköztürk, O. (2017), “Deep learning-based crack damage detection using
convolutional neural networks”, Computer-Aided Civil and Infrastructure Engineering, Vol. 32
No. 5, pp. 361-378.
Cha, Y.-J., Choi, W., Suh, G., Mahmoudkhani, S. and Büyüköztürk, O. (2017), “Autonomous structural
visual inspection using region-based deep learning for detecting multiple damage types”,
Computer-Aided Civil and Infrastructure Engineering, Vol. 33 No. 9, pp. 731-747.
Chen, F.-C. and Jahanshahi, M.R. (2017), “NB-CNN: deep learning-based crack detection using
convolutional neural network and naïve Bayes data fusion”, IEEE Transactions on Industrial
Electronics, Vol. 65 No. 5, pp. 4392-4400.
Dimitrov, A. and Golparvar-Fard, M. (2014), “Vision-based material recognition for automated
monitoring of construction progress and generating building information modeling from
unordered site image collections”, Advanced Engineering Informatics, Vol. 28 No. 1,
pp. 37-49.
Dizaji, M.S. and Harris, D.K. (2019), 3D InspectionNet: A Deep 3D Convolutional Neural Networks Based
Approach for 3D Defect Detection on Concrete Columns, SPIE. Digital Library, Denver, CO.
El-Fiqi, H., Wang, M., Salimi, N., Kasmarik, K., Barlow, M. and Abbass, H. (2018), Convolution Neural
Networks for Person Identification and Verification Using Steady-State Visual Evoked Potential,
IEEE, Miyazaki, Japan.
Ergen, E., Akinci, B. and Sacks, R. (2007), “Life-cycle data management of engineered-to-order
components using radio frequency identification”, Advanced Engineering Informatics, Vol. 21
No. 4, pp. 356-366.
Fang, Z., Pitt, M. and Hanna, S. (2019), Machine Learning in Facilities and Asset Management, Pacific
Rim Real Estate Society (PRRES), Melbourne, Australia.
Gartner (2018), available at: www.gartner.com/en/newsroom/press-releases/2018-02-22-gartner-says-
worldwide-sales-of-smartphones-recorded-first-ever-decline-during-the-fourth-quarter-of-2017
(accessed February 2020).
Hamledari, H., McCabe, B. and Davari, S. (2017), “Automated computer vision-based detection of
components of under-construction indoor partitions”, Automation in Construction, Vol. 74,
pp. 78-94.
Han, K.K. and Golparvar-Fard, M. (2015), “Appearance-based material classification for monitoring of
operation-level construction progress using 4D BIM and site photologs”, Automation in
Construction, Vol. 53, pp. 44-57.
He, K., Zhang, X., Ren, S. and Sun, J. (2016), Deep Residual Learning for Image Recognition, IEEE, Las
Vegas, NV, pp. 770-778.
Hu, Z.-Z., Zhang, J.P., Yu, F.Q., Tian, P.L. and Xiang, X.S. (2016), “Construction and facility
management of large MEP projects using a multi-scale building information model”, Advances
in Engineering Software, Vol. 100, pp. 215-230.
Hui, L., Park, M.-W. and Brilakis, I. (2015), “Automated brick counting for façade construction progress
estimation”, Journal of Computing in Civil Engineering, Vol. 29 No. 6, pp. 1-11.
IFMA (2020), About IFMA, available at: www.ifma.org/about/what-is-facility-management
Irizarry, J., Gheisari, M., Williams, G. and Walker, B.N. (2013), “InfoSPOT: a mobile augmented reality method for accessing building information through a situation awareness approach”, Automation in Construction, Vol. 33, pp. 11-23.
Ismail, N., Ismail, A. and Rahmat, R. (2009), “An overview of expert systems in pavement management”, European Journal of Scientific Research, Vol. 30 No. 1, pp. 99-111.
Jang, Y., Ahn, Y. and Kim, H.Y. (2019), “Estimating compressive strength of concrete using deep
convolutional neural networks with digital microscope images”, Journal of Computing in Civil
Engineering, Vol. 33 No. 3, pp. 1-11.
Kaetzel, L.J. and Clifton, J.R. (1995), “Expert/knowledge-based systems for materials in the
construction industry: state-of-the-art report”, Materials and Structures, Vol. 28 No. 3,
pp. 160-174.
Kim, H., Kim, H., Won Hong, Y. and Byun, H. (2018), “Detecting construction equipment using a region-
based fully convolutional network and transfer learning”, Journal of Computing in Civil
Engineering, Vol. 32 No. 2, pp. 1-15.
Kiziltas, S. and Akinci, B. (2005), The Need for Prompt Schedule Update by Utilizing Reality Capture
Technologies: A Case Study, American Society of Civil Engineers, San Diego, CA, pp. 1-10.
Korb, K.B. and Nicholson, A.E. (2010), Bayesian Artificial Intelligence, 2nd ed., CRC Press.
Krishnamoorthy, C. and Rajeev, S. (1996), Artificial Intelligence and Expert Systems for Engineers, CRC
Press, FL.
Krizhevsky, A., Sutskever, I. and Hinton, G. (2017), “ImageNet classification with deep convolutional
neural networks”, Communications of the ACM, Vol. 60 No. 6, pp. 84-90.
Lavy, S. (2008), “Facility management practices in higher education buildings: a case study”, Journal of
Facilities Management, Vol. 6 No. 4, pp. 303-315.
Lin, Y-Z., Nie, Z-h. and Ma, H-W. (2017), “Structural damage detection with automatic feature-
extraction through deep learning”, Computer-Aided Civil and Infrastructure Engineering, Vol. 32
No. 12, pp. 1025-1046.
Luo, H., Xiong, C., Fang, W., Love, P.E., Zhang, B. and Ouyang, X. (2018), “Convolutional neural
networks: computer vision-based workforce activity assessment in construction”, Automation in
Construction, Vol. 94, pp. 282-289.
Mcgoo (2020), “ES-Builder web”, available at: www.mcgoo.com.au/esbuilder/
Mahfouz, T. and Kandil, A. (2012), “Litigation outcome prediction of differing site condition dispute
through machine learning models”, Journal of Computing in Civil Engineering, Vol. 26 No. 3,
pp. 298-308.
Marzouk, M. and Zaher, M. (2015), Tracking Construction Projects Progress Using Mobile Hand-Held
Devices, Canadian Society for Civil Engineering, Vancouver, BC, pp. 1-9.
MIT (2020), “MIT app inventor”, available at http://appinventor.mit.edu/
Mohammed, A., Ambak, K., Mosa, A. and Syamsunur, D. (2019), “Expert system in engineering
transportation: a review”, Journal of Engineering Science and Technolgy, Vol. 14 No. 1,
pp. 229-252.
Mosavi, A., Ardabili, S.F. and Várkonyi-Koczy, A.R. (2019), “List of deep learning models”, Preprints.
Najafabadi, M.M., Villanustre, F., Khoshgoftaar, T.M., Seliya, N., Wald, R. and Muharemagic, E. (2015),
“Deep learning applications and challenges in big data analytics”, Journal of Big Data, Vol. 2
No. 1, pp. 1-21.
Nash, W., Drummond, T. and Birbilis, N. (2019), Deep Learning AI for Corrosion Detection, NACE
International, Nashville, TN.
Nishikawa, T., Yoshida, J., Sugiyama, T. and Fujino, Y. (2012), “Concrete crack detection by multiple
sequential image filtering”, Computer-Aided Civil and Infrastructure Engineering, Vol. 27,
pp. 24-47.
Pérez-Ariza, C.B., Nicholson, A.E., Korb, K.B., Mascaro, S. and Hu, C.H. (2012), Causal Discovery of
Dynamic Bayesian Networks BT – AI 2012: Advances in Artificial Intelligence, Springer, Berlin
Heidelberg, pp. 902-913.
Perez, H., Tah, J.H.M. and Mosavi, A. (2019), “Deep learning for detecting building defects using
convolutional neural networks”, Sensors, Vol. 19 No. 16, pp. 1-22.
Rankohi, S. and Waugh, L. (2013), “Review and analysis of augmented reality literature for the
construction industry”, Visualization in Engineering, Vol. 1 No. 1, p. 9.
Rashidi, A., Sigari, M.H., Maghiar, M. and Citrin, D. (2015), “An analogy between various machine-
learning techniques for detecting construction materials in digital images”, KSCE Journal of
Civil Engineering, Vol. 20 No. 4, pp. 1178-1188.
Rich, E. and Knight, K. (1992), Artificial Intelligence, McGraw Hill, New York, NY.
Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A.,
Bernstein, M. and Berg, A.C. (2015), “ImageNet large scale visual recognition challenge”,
International Journal of Computer Vision, Vol. 115 No. 3, pp. 211-252.
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V. and
Rabinovich, A. (2015), Going Deeper with Convolutions, IEEE, Boston, MA, pp. 1-9.
Teicholz, E. (2004), “Bridging the AEC/FM technology gap”, IFMA Facility Management Journal,
pp. 1-8.
Tile (2020), available at: www.thetileapp.com/en-eu/
TrackR (2020), available at: www.thetrackr.com
Turkan, Y., Bosche, F., Haas, C. and Haas, R. (2012), “Automated progress tracking using a 4D schedule
and 3D sensing technologies”, Automation in Construction, Vol. 22, pp. 414-421.
Wiegerinck, W., Burgers, W. and Kappen, B. (2013), “Bayesian networks, introduction and practical
applications BT”, in Bianchini, M., Maggini, M. and Jain, L.C. (Eds), Handbook on Neural
Information Processing, Springer, Berlin Heidelberg, pp. 401-431.
Winkler, D.A. and Le, T.C. (2017), “Performance of deep and shallow neural networks, the universal
approximation theorem, activity cliffs, and QSAR”, Molecular Informatics, Vol. 36 Nos 1/2,
pp. 1-6.
Yang, J., Arif, O., Vela, P.A., Teizer, J. and Shi, Z. (2010), “Tracking multiple workers on construction
sites using video cameras”, Advanced Engineering Informatics, Vol. 24 No. 4, pp. 428-434.
Yeum, C.M. and Dyke, S.J. (2015), “Vision-based automated crack detection for bridge inspection”,
Computer-Aided Civil and Infrastructure Engineering, Vol. 30 No. 10, pp. 759-770.
Zalama, E., Gomez-García-Bermejo, J., Medina, R. and Llamas, J. (2014), “Road crack detection using
visual features extracted by Gabor filters”, Computer-Aided Civil and Infrastructure
Engineering, Vol. 29 No. 5, pp. 342-358.
Zhang, A., Wang, K.C., Li, B., Yang, E., Dai, X., Peng, Y., Fei, Y., Liu, Y., Li, J.Q. and Chen, C. (2017),
“Automated pixel-level pavement crack detection on 3D asphalt surfaces using a deep-learning
network”, Computer-Aided Civil and Infrastructure Engineering, Vol. 32 No. 10, pp. 805-819.
Zhu, Z., German, S. and Brilakis, I. (2010), “Detection of large-scale concrete columns for automated
bridge inspection”, Automation in Construction, Vol. 19 No. 8, pp. 1047-1055.
Corresponding author
Mohamed Zaher can be contacted at: eng.mohamed.zaher@gmail.com