1. Introduction
Structures are frequently subject to damage, which is a major factor in structural failure. Unexpected structural failure can have severe consequences for society, in addition to loss of life and economic ruin. Early damage detection is therefore essential for avoiding sudden and disastrous failures of structural systems. Non-destructive testing (NDT) comprises various methods for assessing the integrity of a tested structure. NDT techniques can provide sufficient information about the structure’s state and identify the shape, size, and location of various flaws [1]. Structural health monitoring (SHM), a branch of NDT, is applied to evaluate the condition of structures while they are in service [2,3]. In recent years, significant attention has been devoted to identifying structural damage at an early stage to prevent the unexpected breakdown of structural elements, and numerous strategies and tools for investigating structures and recognizing damage have been developed in the fields of civil and mechanical engineering [4,5]. SHM is an emerging approach that aims to give an early warning of damage initiation and to identify structural issues such as fatigue damage [6,7]. The SHM methodology often entails the installation of sensors to gather relevant data and support judgments regarding the dependability and remaining life of the structure [8,9]. Researchers have shown tremendous interest in guided waves, one of the most appealing techniques for health monitoring of large structures [10]. Lamb waves can propagate over considerable distances in metallic and composite materials; as a result, a large region can be inspected promptly [11]. The estimation of delamination size was examined experimentally using Lamb waves, a network of piezoelectric wafer active sensors (PWASs), and a developed imaging technique; this methodology was found to predict the delamination size and shape precisely [12]. Many projects have been undertaken to use scattered guided Lamb waves to measure structural flaws [13]. Numerous researchers have recently investigated the potential of PWASs for SHM purposes such as assessing the integrity of metallic and composite plates and localizing impacts on aerospace structures; these sensors can be utilized as both passive and active sensing devices [14,15,16].
Composite materials are used extensively in the aeronautical industry because of their high specific strength and stiffness [17]. The severity of damage in composite materials is higher than that in metallic ones. Numerous damage types can exist in composite materials, including fiber breakage and delamination. The most frequent and critical type of failure for aerospace composite structures is delamination, which develops without any obvious surface damage, is challenging to spot visually, and is difficult to detect efficiently [18,19]. Different techniques, including Lamb wave methods, acoustic emission, and electromechanical impedance measurement, have been utilized for the detection and identification of structural delamination [20,21,22]. Lamb wave modes can interrogate the whole laminate thickness, allowing internal as well as external damage to be found [11]. The scanning laser Doppler vibrometer (SLDV) has recently become widely used for acquiring precise wavefield data over the scanned area owing to its accurate surface velocity measurement over a spatially dense grid [23]. The analysis of the data acquired from the investigated structure is an essential step in evaluating the structure’s integrity and identifying damage. Several data analysis approaches have been applied to evaluate structural defects, including wavefield and wavenumber processing techniques [24,25]. A wavefield imaging approach was used to locate simulated delamination in a composite plate [26].
Artificial intelligence (AI) is a broad scientific concept that aims to mimic human reasoning by applying AI-based models to machines; AI techniques involve problem-solving and decision-making tasks [27]. AI-based techniques are used to develop rule-based systems, ontologies, and inference engines that analyze data and make decisions, and SHM techniques are examples of AI-based systems that read data to make decisions. AI has made a significant impact on fields such as voice recognition, autonomous vehicles, precision healthcare, and disease identification, and its ability to handle enormous amounts of data has been demonstrated efficiently in different systems [27,28,29]. Machine learning (ML) techniques form part of the AI field. ML algorithms enable machines to learn statistical patterns from large amounts of data. These algorithms and models are used with NDT and SHM systems to identify abnormal patterns or to predict a specific class based on knowledge-based data; they are also utilized to extract features from input samples [28]. ML algorithms fall into three broad approaches: supervised, unsupervised, and semi-supervised models. Supervised models classify input observations into one of a set of labeled classes; a large number of data samples is used to train the models, and unseen data examples are then used to test and validate them. Examples of supervised ML algorithms used in SHM systems are decision trees, random forests, and support vector machines (SVMs). The k-means algorithm is a common unsupervised model used to cluster input samples into a set number of groups. The state-of-the-art ML techniques are deep learning (DL) algorithms [27], which are based on artificial neural networks. Over the past decades, considerable progress has been achieved in the field of computer vision, addressing increasingly complicated visual patterns such as the detection of people, cars, and animals. The convolutional neural network (CNN) is one of the best-performing and most common algorithms used in computer vision. It generally has a typical structure consisting of stacked convolutional layers, often followed by contrast normalization and max pooling, and one or more fully connected layers [29]. Many studies have used CNN methods to address structural damage localization and identification [29,30]. CNN methods have been utilized successfully to classify images with very high accuracy; they can recognize complicated classification boundaries and extract characteristics that differentiate between various problem factors [28]. CNN models are artificial neural networks with one or more convolutional layers that are utilized for the processing, categorization, and segmentation of images [31]. They have achieved ground-breaking results in a variety of feature-recognition fields, and their most beneficial feature is that they minimize the number of artificial neural network (ANN) parameters [2]. CNN methods have recently emerged as the most popular form of deep learning because of their capacity to learn directly from raw signals in a sizable dataset [31]. Using complex filters and feature mappings, CNN models have been applied to pixel-level classification inside an image to locate and identify objects of interest [32]. Recent developments in computer vision have raised awareness of such NDT and SHM technologies as some of the most powerful techniques available. Classification, object localization, and pixel-level segmentation are common ML-based recognition techniques for structural damage; for example, collected data have been classified as damaged or non-damaged using a proposed CNN based on a sliding window approach [33,34,35]. A new DL method combined with artificial neural networks was used to extract the features of guided waves measured during fatigue tests on composite plates; a laser technique provided the dataset by scanning the test area after it was excited with guided waves by sensors installed on the test plate, and the results confirmed the capability of the proposed system for identifying plate damage [36]. A review study presented the progress in using ML methods for predicting the structural integrity and fracture of 3D-printed components [37]. Recent developments in sensor technology have increased the use of data-driven systems for SHM; as advanced sensing techniques and DL become linked, the number of required sensors is expected to decrease, bringing down the cost of SHM while increasing its quality [38,39].
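As an illustration of this typical layer arrangement, the following minimal sketch in Python with PyTorch stacks convolution and max pooling blocks and ends with fully connected layers; the layer sizes and the four-class output are illustrative assumptions and do not correspond to any specific network cited above.

```python
import torch
import torch.nn as nn

class MinimalCNN(nn.Module):
    """Illustrative CNN: stacked convolution/pooling blocks followed by fully connected layers."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolutional feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),                             # spatial down-sampling
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 56 * 56, 64),                 # assumes 224 x 224 input images
            nn.ReLU(),
            nn.Linear(64, num_classes),                  # one output score per class
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example: a single 224 x 224 RGB image produces one score per class.
logits = MinimalCNN()(torch.randn(1, 3, 224, 224))
```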
Four proposed supervised CNNs with different receptive field sizes were examined for damage classification. The results show that these methods can classify datasets of cropped images taken from 3D pavement images with an accuracy of 94%, and they also show an inverse relation between the training time and the size of the receptive field [40]. A pre-trained CNN has recently been used to estimate crack length in metallic structures from acoustic emission (AE) signals without prior knowledge of the AE history [41]. CNNs have been widely applied to detecting and classifying various types of damage in composite structures. A new system was presented for identifying the location and size of impact events on a composite panel; it combines a CNN-based metamodel with a network of passive sensors installed on the tested structure to receive the impact signals, and it detected impact events with an accuracy of 95% [42]. An interesting CNN-based approach was proposed for differentiating between pristine and damaged cases and for predicting and classifying 12 different forms of delamination in composite laminate structures. The structural transient responses were converted to spectrograms using the short-time Fourier transform (STFT) and used as input data to the pre-trained CNN; the confusion matrix obtained for evaluating the CNN indicated an accuracy of 90.1% [43]. Damage in CFRP composite structures can be detected under data scarcity using a proposed transfer learning methodology to train an existing CNN, and structural vibration data were used to confirm the proposed method [44]. A new fully convolutional network (FCN) model with semantic segmentation was presented for identifying the delamination shape, size, and location in composite structures. Numerical and experimental studies of guided wave propagation and interaction with a single delamination were carried out; for each damage case, one image was prepared by applying the RMS to the full-time wavefield, and this RMS image was used as input to the proposed FCN, which segmented it into pristine and damaged parts to indicate the damage information. The results confirmed the capability of the FCN model for delamination identification compared with the traditional wavenumber filtering method [45]. An improved Global Convolutional Network (GCN) was adopted to characterize CFRP plate delamination using a public dataset of wavefield images of guided wave propagation and interaction with damage. The input data can be either a single image determined with the RMS technique or a 3D wavefield animation; these two data types were obtained at three different resolutions and used as input to different improved networks, and the results confirmed the capability of the GCN to identify the damage location and shape precisely even with a low-resolution grid [46]. A new CNN-based semantic segmentation method was developed in conjunction with the SLDV technique to identify structural delamination: the time sequence of wavefield images of Lamb wave propagation and interaction with delamination was used directly as input data to the novel DL model, and the results reveal the capability of this model to map the damage [47]. The CNN can also be used with acoustic steady-state excitation spatial spectroscopy to identify the damage location, size, and shape by predicting the plate thickness at each pixel; the results indicate the ability of the proposed CNN to precisely predict the plate thickness in zones where the dispersion of Lamb waves is complex [48].
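As a minimal sketch of the RMS step described in [45], assuming the full-time wavefield is stored as a NumPy array with time as the first axis (the array shape below is an illustrative placeholder, not the data format of that study):

```python
import numpy as np

# wavefield: array of shape (n_time, n_y, n_x) holding the measured out-of-plane response
wavefield = np.random.randn(512, 128, 128)  # placeholder data for illustration

# Root-mean-square over the full time history collapses the 3D wavefield into a single
# 2D image in which damaged regions appear as concentrations of wave energy.
rms_image = np.sqrt(np.mean(wavefield**2, axis=0))
```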
Automating the detection and quantification of structural damage such as delamination is a necessary step for preventing catastrophic events and planning maintenance. Based on the literature review, there is limited research on using CNN methods to detect the delamination depth across the composite plate thickness. The contribution of the present paper is the development of an automated system capable of distinguishing between pristine and delaminated structures and of classifying three classes of delamination with various depths. This system involves a proposed CNN model based on the Lamb wave technique. The three datasets used independently as input data to the CNN model consist of numerical wavefield images, experimental wavefield images, and experimental wavenumber spectrum images. The results of the proposed CNN model for the three datasets were validated using the GoogLeNet CNN.
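For illustration only, the sketch below shows how a pre-trained GoogLeNet can be adapted to such a four-class problem (pristine plus three delamination depths) in Python with torchvision; the framework, folder layout, and settings are assumptions made for this sketch and are not the implementation used in this paper.

```python
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Pre-trained GoogLeNet with its final layer replaced for four output classes.
net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 4)

# Hypothetical folder layout: one sub-folder per class of 224 x 224 images.
preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("dataset/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
```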
4. Datasets for CNN Models
All data-driven approaches in SHM require data as a fundamental component. With the emergence of DL-based techniques that are capable of handling and analyzing massive volumes of data, the importance of reliable data for SHM systems has become clear [53].
In this work, there are three different datasets consisting of images prepared from numerical and experimental work. From the numerical work, the wavefield images are time series images of guided waves propagating and interacting with delamination. We prepared a MATLAB code to enhance the resolution of the images and to crop and resize them to 224 × 224 pixels. The 1004 numerical images were prepared at 224 × 224 pixels before being used as input data to the designed CNN and the existing GoogLeNet CNN. These images were classified into four classes with 251 images each. The first class contains wavefield images of the pristine case, while the second, third, and fourth classes contain wavefield images of the delamination C, B, and A cases, respectively, as demonstrated in Figure 7. For the three delamination classes, the scattered, reflected, and trapped waves represent the variations between the images within a set of input data. The characteristics of these waves depend on the delamination depth; consequently, each class of wavefield images has different features. All the images of the three delamination classes show either the S0 mode (faster mode) or the A0 mode (slower mode) interacting with the delamination. Two steps were adopted to obtain images containing the wavefield–delamination interaction. In the first step, the PWAS transducer was installed at an appropriate distance from the scanning area to ensure that the S0 mode could reach all the spatial points of the scanning area and interact with the delamination. In the second step, the first wavefield image of each class was taken at the time step when the S0 mode reaches all the scanning points.
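The image preparation was carried out with a MATLAB code; the following Python sketch illustrates an equivalent crop-and-resize step, where the directory names and the square crop window are hypothetical.

```python
from pathlib import Path
from PIL import Image

# Hypothetical locations; the actual preparation was performed with a MATLAB code.
src_dir = Path("wavefield_frames/delamination_A")
dst_dir = Path("dataset/train/delamination_A")
dst_dir.mkdir(parents=True, exist_ok=True)

for frame in sorted(src_dir.glob("*.png")):
    img = Image.open(frame)
    side = min(img.size)
    img = img.crop((0, 0, side, side))   # illustrative square crop of the scan area
    img = img.resize((224, 224))         # match the 224 x 224 CNN input size
    img.save(dst_dir / frame.name)
```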
For the experimental work implemented with the SLDV, there are two datasets: wavefield images and wavenumber spectrum images. The time–space wavefield dataset includes 2804 time series images distributed equally over four classes (701 images per class), as shown in Figure 8. More experimental wavefield images were used than numerical wavefield images because the time step (time interval) of the numerical guided wave signal is larger than that of the experimental work for the same signal duration. Running the numerical simulation with the experimental time step would require a computer with high specifications and long processing times; for this reason, we adopted a time interval for the numerical simulation that makes the number of numerical wavefield images smaller than the number of experimental wavefield images. The wavenumber spectrum image dataset, obtained by processing the time–space wavefield of each scanning line, includes 480 images distributed equally over four classes (120 images per class), as shown in Figure 9. Each wavenumber spectrum captures the information of the full-time measured signals at the spatial points of one scan line. These images were adjusted to the 224 × 224 pixel standard and classified separately into four classes before being fed into the input layer of the proposed CNN model and the GoogLeNet CNN.
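A minimal sketch of such line-by-line processing is shown below, assuming a standard two-dimensional Fourier transform from the time–space domain to the frequency–wavenumber domain; the sampling values and array shape are illustrative assumptions rather than the actual SLDV settings.

```python
import numpy as np

# u(t, x): time-space wavefield of one scanning line (rows: time samples, columns: scan points).
dt, dx = 1.0e-7, 1.0e-3            # illustrative time step [s] and spatial step [m]
u = np.random.randn(1024, 256)     # placeholder line-scan data

# A 2D FFT maps the time-space data to the frequency-wavenumber domain.
spectrum = np.fft.fftshift(np.abs(np.fft.fft2(u)))
freqs = np.fft.fftshift(np.fft.fftfreq(u.shape[0], d=dt))        # frequency axis [Hz]
wavenumbers = np.fft.fftshift(np.fft.fftfreq(u.shape[1], d=dx))  # wavenumber axis [1/m]
```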