1. Introduction
Artificial intelligence (AI) applications have become increasingly prevalent across diverse sectors, providing substantial gains in efficiency and accuracy. Although traditional iterative techniques can be effective, they often demand considerable computational resources and time, and their reliability may be compromised without a robust control system—posing significant challenges for data validation. In contrast, AI offers a more agile and efficient alternative, making it especially valuable for complex problems where conventional methods frequently encounter limitations [
1].
In materials science, development processes are typically trial-based and require extensive experimentation, which is both costly and resource-intensive. Implementing AI systems offers an opportunity to streamline these processes by reducing the reliance on physical testing. AI’s minimal computational cost, self-regulating capabilities, and rapid decision-making render it well-suited for trial-based approaches in material development [
2,
3]. Vasudevan et al. [
4] underscored the progress AI has stimulated in this field, illustrating how integrating computational models, large-scale experiments, and imaging techniques can expedite the innovation of novel material systems. Accordingly, materials science appears to be moving toward a data-driven future, in which AI significantly enhances the pace of understanding and discovering new materials.
Among various AI methodologies, Artificial Neural Networks (ANNs) have emerged as a particularly powerful tool due to their improved computational efficiency and capacity to provide fast, reliable predictions. ANNs have been widely utilized in multiple aspects of material development [
5,
6,
7,
8,
9]. For instance, Kang et al. [
5] applied backpropagation neural networks to predict toughness values in structural steels based on tensile test data, examining the influence of crack plane orientation and temperature. Similarly, Guan et al. [
6] harnessed an ANN to estimate fracture toughness in Nb–silicide composites by inputting both microstructural characteristics and material composition. In another example, Zakaulla et al. [
7] developed a multi-layer perceptron network to model the mechanical properties of PEEK/Carbon/Titanium composites. Genel [
8] and Kazi et al. [
9] also demonstrated that ANNs can accurately predict load and displacement from experimental data.
In this study, Artificial Neural Networks (ANNs) were chosen for optimization due to their demonstrated effectiveness in capturing complex, non-linear relationships—particularly for multi-dimensional force–displacement data in Mode I delamination. While certain traditional methods (e.g., gradient-based or heuristic techniques) can involve specific assumptions or parameter constraints, ANNs offer flexibility by adapting to diverse experimental data without requiring extensive prior knowledge. They also scale well, allowing for architectures (e.g., hidden layer size) to be tailored to the problem at hand and enabling computational efficiency once trained, which can facilitate rapid iterative design. Moreover, existing research suggests that ANNs—especially those employing the Levenberg–Marquardt (LM) algorithm—can, under suitable conditions, achieve high accuracy and convergence speed for experimental data modeling [
10,
11,
12]. Their robust tolerance to noise enhances reliability even in the face of experimental inconsistencies, making them particularly well-suited for modeling composite material behavior. Collectively, these advantages help minimize computational time and maximize prediction accuracy, thereby highlighting ANNs as a strong candidate for optimizing delamination predictions in composite materials.
The aerospace and automotive industries heavily rely on lightweight, high-strength composite materials; however, their susceptibility to delamination—especially under impact loads—remains a primary challenge. This failure mode can significantly affect overall strength and fatigue life. Theoretical frameworks, such as Classical Laminate Theory (CLT) and First-Order Shear Deformation Theory (FSDT), guide the design of laminate structures to mitigate delamination risks. Delamination failures are commonly classified into Mode I (caused by normal or opening loads) and Mode II (driven by interlaminar shear stresses) [
13]. ANNs have shown promise in predicting the size, location, and shape of delamination, thus contributing to a deeper understanding of composite failure [
14,
15,
16,
17,
18,
19]. For example, Bortoluzzi et al. [
17] illustrated the efficacy of using an ANN trained via the Levenberg–Marquardt algorithm to predict Mode II interlaminar fracture toughness in z-pinned composites. Likewise, Sharma et al. [
19] trained and validated an ANN model using impact test data to estimate the crack initiation toughness in polymer composites. Allegri [
18] compared a semi-empirical power-law model with a single hidden-layer ANN model for predicting fatigue delamination growth, noting that the ANN achieved high accuracy despite its relatively simple structure.
Building on these findings, the present study employs force–time data from Mode I delamination tests as inputs to train and validate an ANN model, with displacement values serving as the predicted output. Three algorithms—Scaled Conjugate Gradient (SCG), Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton, and Levenberg–Marquardt (LM)—are compared in terms of both predictive accuracy and computational efficiency. Additionally, the number of neurons in a single hidden layer is varied from 10 to 100 to optimize network performance. Ultimately, this research aims to generate artificial experimental data by modeling displacement values based on force–time inputs, facilitating the estimation of composite fracture toughness and enabling validation against real-world experimental outcomes.
Recent studies have demonstrated the application of ANNs in composite materials research, addressing various challenges associated with delamination prediction. For instance, Sreekanth et al. [
16] utilized vibration signals to locate and quantify delamination in glass fiber-reinforced polymer (GFRP) plates, employing a supervised feed-forward ANN. This approach showed accurate predictions but was limited by its dependency on pre-selected signal features, which might not generalize well across varying composite systems. Similarly, Oliver et al. [
20] integrated frequency shifts as inputs to an ANN for detecting and quantifying damage in composite-laminated plates, achieving high diagnostic accuracy yet focusing primarily on damage identification rather than mechanical properties. In a more advanced methodology, Rautela et al. [
21] explored unsupervised feature learning through deep convolutional autoencoders to identify anomalies in composite panels. Their work highlighted the potential of deep learning approaches for complex data patterns but faced computational challenges associated with large-scale training datasets.
The present study uniquely contributes to this field by systematically comparing three ANN algorithms—Scaled Conjugate Gradient (SCG), Broyden–Fletcher–Goldfarb–Shanno (BFGS), and Levenberg–Marquardt (LM)—specifically for Mode I delamination toughness prediction. Unlike previous works, this approach focuses on balancing prediction accuracy and computational efficiency. The LM algorithm, optimized with 50 neurons in a single hidden layer, emerged as the most effective, achieving prediction accuracies of 99.6% for force and 99.3% for displacement. These results surpass the accuracy metrics reported in recent works and provide a robust alternative for resource-efficient data-driven material design. Additionally, while Mukherjee and Routroy [
22] demonstrated the superiority of the LM algorithm in minimizing errors for industrial processes, the present study advances this understanding by applying the LM algorithm specifically to composite material delamination prediction. This highlights its adaptability and effectiveness in a domain with intricate mechanical behaviors. Moreover, the systematic analysis of computational cost versus accuracy trade-offs presented in this study offers a novel perspective for ANN model optimization, which is less explored in the existing literature.
3. ANN Model and Training
In this study, Artificial Neural Networks (ANNs) were employed to predict Mode I delamination behavior in composite materials. The training process utilized three optimization algorithms—Scaled Conjugate Gradient (SCG), Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton, and Levenberg–Marquardt (LM)—implemented within a backpropagation-based neural network containing a single hidden layer. This setup allowed for efficient and accurate modeling of the experimental results.
Figure 2 illustrates the architecture of the designed ANN. The input layer contains six neurons, corresponding to the time and force values measured in three independent experiments. The output layer predicts the displacement values. A single hidden layer processes the inputs through weights and biases, which are optimized during the training phase. This configuration—encompassing training and validation steps—was implemented to ensure accurate predictions of the experimental outcomes.
In this study, time and force were used as input variables, while displacement was the predicted output. To ensure effective training and validation, data from the first experiment was randomly split into two subsets: half was used for training the ANN model, and the other half was used for validation. The displacement values from the remaining two experiments were then used as an independent test set, allowing the model to generalize to unseen datasets. This approach ensured that the model was tested on entirely independent data, enhancing its predictive reliability.
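As an illustration of this partitioning, the following minimal sketch (assuming NumPy arrays with columns for time, force, and displacement; the array names and the simplified two-input layout are hypothetical, not taken from the experimental pipeline) shows how the first experiment is halved into training and validation sets while the remaining experiments are held out for testing:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def build_sets(exp1, exp2, exp3):
    """exp1..exp3: arrays of shape (n, 3) with columns [time, force, displacement]."""
    # Randomly halve experiment 1: one half trains the ANN, the other validates it.
    idx = rng.permutation(len(exp1))
    half = len(exp1) // 2
    train, valid = exp1[idx[:half]], exp1[idx[half:]]
    # Experiments 2 and 3 remain fully unseen and serve as the independent test set.
    test = np.vstack([exp2, exp3])
    split = lambda d: (d[:, :2], d[:, 2])   # inputs: time, force; output: displacement
    return split(train), split(valid), split(test)
```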
The fundamental principle of the network is given by Equation (4), where o denotes the predicted displacement (output), i the inputs (time and force), w the weight, and b the bias. For each input i in the hidden layer, the network assigns corresponding values of w and b, which are subsequently optimized by the selected training algorithm. During each training epoch, these parameters were adjusted to minimize the error between predicted and actual displacement values, quantified by the Mean Squared Error (MSE), thereby enhancing the model’s accuracy.
The performance criterion for stopping iterations is the Mean Squared Error (MSE), set to 10⁻¹⁰, as formulated in Equation (5). Here, p denotes the predicted output, and n represents the number of samples in the dataset [26].
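In code form, the single-hidden-layer mapping of Equation (4) and the MSE criterion of Equation (5) can be sketched as follows; the tanh activation, the random weight initialization, and the layer sizes are assumptions made only for illustration:

```python
import numpy as np

def forward(X, W1, b1, W2, b2):
    """Single hidden layer: output = W2 * f(W1 * input + b1) + b2 (spirit of Equation (4))."""
    hidden = np.tanh(X @ W1 + b1)        # hidden activations (activation choice assumed)
    return hidden @ W2 + b2              # linear output: predicted displacement

def mse(pred, target):
    """Mean squared error, Equation (5): (1/n) * sum((p - t)^2)."""
    return np.mean((pred - target) ** 2)

# illustrative shapes: 6 inputs, 50 hidden neurons, 1 output
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(6, 50)) * 0.1, np.zeros(50)
W2, b2 = rng.normal(size=(50, 1)) * 0.1, np.zeros(1)
X = rng.random((8, 6))                   # 8 dummy samples of [time, force] from 3 experiments
print(mse(forward(X, W1, b1, W2, b2), np.zeros((8, 1))))
```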
3.1. Scaled Conjugate Gradient
The scaled conjugate gradient (SCG) algorithm differs from other conjugate gradient methods by eliminating the need for a line search at each iteration. This design reduces computational complexity, enhancing efficiency. In the SCG algorithm, the step size is determined through a quadratic approximation of the error criterion, minimizing dependency on user-defined parameters. These attributes make SCG an effective optimization technique for neural network training, particularly in applications requiring faster convergence [
27,
28].
The step size αk in the SCG algorithm is defined in Equation (6). It is computed from the current weight vector, the global error function, a set of non-zero weight vectors, and the quadratic approximation of the error function, with the parameters μk and δk determining the step size.
The comparison parameter Δk is a measure of how closely the quadratic approximation matches the global error function. If Δk ≥ 0, the error decreases, ensuring convergence [20].
3.2. BFGS Quasi-Newton
The BFGS (Broyden–Fletcher–Goldfarb–Shanno) algorithm is a quasi-Newton method designed to optimize neural network training. It utilizes a second-order Taylor series approximation for error minimization, as shown in Equation (8). The Hessian matrix (H) is iteratively updated during the process to improve convergence.
The step size is defined based on differences in gradient values between iterations, ensuring stability and efficiency in the optimization process. This step size adjustment, along with the matrix update rule, is represented by Equation (9).
The optimization process begins with an identity approximation of the Hessian matrix, expressed as Equation (10). The algorithm then refines the matrix during each iteration, ensuring accurate and efficient convergence through line search methods [
22,
29,
30].
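The iterative refinement from an identity start (Equation (10)) can be illustrated with the standard BFGS rank-two update of the inverse-Hessian approximation; this is one common realization of the method, sketched here as an assumption rather than the exact form used in this study:

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS rank-two update of the inverse-Hessian approximation H.

    s = w_{k+1} - w_k (parameter step), y = g_{k+1} - g_k (gradient difference).
    """
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

# usage sketch: start from the identity and refine at each iteration
H = np.eye(3)
s = np.array([0.10, -0.05, 0.02])   # illustrative step
y = np.array([0.40, -0.10, 0.05])   # illustrative gradient change
H = bfgs_update(H, s, y)
```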
3.3. Levenberg–Marquardt
The Levenberg–Marquardt (LM) algorithm addresses non-linear least squares problems by combining Gauss–Newton and gradient descent methods [10,12]. This combination improves adaptability and reduces error rates, making LM suitable for high-accuracy neural network training. The gradient descent and Gauss–Newton steps are represented in Equations (11) and (12), respectively.
A damping parameter (λ) is introduced in Equation (13) to control the optimization process, ensuring stable convergence. Both α in Equation (12) and λ in Equation (13) serve similar roles within the algorithm. The value of λ is crucial and requires iterative adjustment to determine its optimal value. The LM algorithm thus combines both update rules into a single structure in which hLM denotes the LM perturbation (step), and its second-order term corresponds to the Gauss–Newton approximation of Equation (12).
In scenarios involving multiple inputs, as in this study, the Jacobian matrix (J) must be updated at each step. By applying Broyden’s rank-1 approach, the current Jacobian is renewed at every iteration into an updated matrix of first derivatives, Jnew. This ability to update the Jacobian matrix dynamically is a key differentiator of the LM method: it mitigates the convergence issues that may arise from using a fixed matrix, ensuring reliable performance across diverse datasets [11].
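A minimal sketch of one LM iteration, with a simple damping schedule and a Broyden rank-1 Jacobian refresh, is given below; the damping factor, the decision rule, and the sign conventions are illustrative assumptions rather than the exact implementation used here:

```python
import numpy as np

def lm_step(J, r, lam):
    """One Levenberg-Marquardt step: solve (J^T J + lam * I) h = J^T r.

    J: Jacobian of the model output w.r.t. the parameters,
    r: residual vector (measured minus predicted), lam: damping parameter.
    """
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ r)

def adapt_damping(lam, improved, factor=10.0):
    """Simple schedule: shrink lam after a successful step, grow it otherwise."""
    return lam / factor if improved else lam * factor

def broyden_rank1(J, dx, dy):
    """Broyden rank-1 refresh of J between full recomputations.

    dx: parameter change, dy: observed change in model output for that step.
    """
    return J + np.outer(dy - J @ dx, dx) / (dx @ dx)

# usage sketch with illustrative shapes (20 residuals, 3 parameters)
rng = np.random.default_rng(3)
J = rng.normal(size=(20, 3))
r = rng.normal(size=20)
h = lm_step(J, r, lam=1e-2)
```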
3.4. Normalization of ANN Parameters
A dataset containing 2700 data points for each input was created based on experimental results. To improve training efficiency, the average values from the experiments were used as the output data. The dataset was normalized using min-max normalization, scaling all data to the range [0, 1], as expressed in Equation (15). This normalization step ensured that all variables were scaled uniformly, reducing the impact of varying magnitudes and facilitating efficient optimization during training.
Here, inorm represents the normalized data. Normalization was crucial to ensure consistent network performance and achieve reliable results [31].
The min-max normalization method was selected in this study due to its simplicity and effectiveness in scaling all input variables to a uniform range [0, 1]. This approach ensures that the relative relationships among features are preserved, which is crucial for the stability and convergence of ANN training. Other normalization methods, such as z-score or logarithmic transformations, were not applied as they are more suited to datasets with large variance or Gaussian-distributed features, which were not characteristic of the current dataset. Future work could include comparative analyses of these methods to further validate the advantages of min-max normalization for ANN-based delamination predictions.
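Equation (15) corresponds to the familiar min-max transform, sketched below with purely illustrative values:

```python
import numpy as np

def minmax_normalize(x):
    """Scale a 1-D array to [0, 1] as in Equation (15): (x - min) / (max - min)."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# illustrative values only; the real arrays hold 2700 experimental points each
time_s  = minmax_normalize([0.0, 150.0, 300.0, 450.0])
force_n = minmax_normalize([0.0, 45.0, 60.0, 38.0])
disp_mm = minmax_normalize([0.0, 10.0, 25.0, 40.0])
```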
4. Results and Discussions
Mode I delamination experiments provide the force–time data needed to calculate the interlaminar delamination toughness. The same force–time data also serve as the input to the ANN algorithm, while displacement is the predicted output used for validation.
4.1. Experimental Results
As depicted in
Figure 3, the delamination length increases in tandem with the opening displacement of the edges. In Mode I delamination experiments, the applied force similarly rises as the edges are displaced. However, once delamination initiates, the force drops owing to the release of stored energy.
Figure 4 presents the experimental interlaminar delamination toughness values at various opening displacements, alongside the corresponding ANN predictions. At 300 s, for instance, a 10 mm edge opening displacement produces a delamination length of 54 mm (see
Figure 3). As the opening displacement grows, the delamination length extends further, reaching approximately 120 mm at a displacement of 40 mm (see
Figure 3b). This finding aligns with earlier observations by Roy et al. [
32] and Soyugüzel et al. [
24], who noted that delamination toughness increases with delamination length after its initiation. In the present study, the average interlaminar delamination toughness is 586.5 J/m² at initiation and 821.7 J/m² during propagation, as illustrated in
Figure 4.
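Although the data-reduction scheme is not restated here, the modified beam theory expression of ASTM D5528 illustrates, as an assumption for the sketch below, how GI follows from force, opening displacement, delamination length, and specimen width; the numbers are placeholders, not the measured values:

```python
def mode_i_toughness_mbt(P, delta, a, b, delta_corr=0.0):
    """Mode I strain energy release rate via modified beam theory (ASTM D5528):
    G_I = 3 * P * delta / (2 * b * (a + |delta_corr|)).

    P: load [N], delta: opening displacement [m], a: delamination length [m],
    b: specimen width [m], delta_corr: crack-length correction [m] (assumed 0 here).
    """
    return 3.0 * P * delta / (2.0 * b * (a + abs(delta_corr)))

# illustrative numbers only
G_I = mode_i_toughness_mbt(P=60.0, delta=0.010, a=0.054, b=0.025)
print(f"G_I = {G_I:.1f} J/m^2")
```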
The model’s accuracy was evaluated by comparing the predicted displacement values with the actual displacement data obtained from three independent experiments. The average force and time measurements were used as training inputs to ensure a balanced representation of the experimental conditions. The error rates for each model were then determined by examining the discrepancies between the predicted outputs and the experimentally derived averages, as illustrated in
Figure 5. This approach enabled a robust validation of the network’s ability to generalize and reliably forecast outcomes based on the training data.
4.2. Algorithm Comparison
Evaluating and comparing algorithms is paramount in computational research, as it underpins the selection, optimization, and comprehension of various problem-solving strategies. Such comparative analyses play a pivotal role in advancing computational techniques and technologies. For instance, Arora and Saha [
26] compared the LM algorithm with Bayesian Regularization (BR) and Resilient Backpropagation (RP) across seven distinct datasets, identifying strengths and limitations in each approach. Babani et al. [
27] compared the SCG and RP algorithms, noting that SCG delivered more stable performance in terms of Mean Squared Error (MSE) and required less training time, while the LM algorithm showed moderate performance relative to these methods. In another study, Mukherjee and Routroy [
22] found that the LM algorithm produced fewer errors than the Quasi-Newton algorithm during backpropagation in industrial grinding processes, employing a dataset similar to the one used in the present research.
Performance criteria such as Mean Squared Error (MSE) and true percent error (TPE) generally indicate the overall quality of an artificial neural network. Consequently, the chosen algorithm must meet user-defined thresholds to be deemed suitable. In this section, the performances of the SCG, BFGS, and LM algorithms are compared on the basis of these metrics. The true percent errors for force and displacement are presented to validate the performance plots.
During experimentation, force data exhibited greater incremental changes than time data, while displacement values, used as the predicted output, showed consistent variation across measurements. Even after normalization, the displacement data retain similar magnitudes of variation, which contribute to higher error values, as seen in
Table 1 and
Table 2. The comparative analysis was conducted using a single hidden layer with 100 neurons, where 80% of the dataset was allocated for training. A cross-validation approach was applied to improve accuracy, ensuring that the test and validation error curves track one another closely, thereby confirming the consistency of the training curve.
For all three algorithms, error peaks are most noticeable within the first 200 data points. This tendency can be attributed to fluctuations in experimental measurements, particularly in the displacement dataset. As noted by Karalar et al. [
33], the initial segments in experimental data often exhibit higher error values due to the inherent nature of measurement techniques. Consequently, the error at the outset tends to exceed that observed in the middle or latter portions of the dataset, a trend also apparent in the present study.
Figure 6a illustrates the performance of the SCG algorithm on the training set (Test 1), showing that its mean squared error (MSE) reaches a minimum of approximately 10⁻⁴ at the 1298th epoch. One notable advantage of SCG is its speed, requiring only 0.0061 s per epoch, which is noticeably faster than the other algorithms tested (Table 1). In contrast, Figure 6b depicts the BFGS algorithm’s results, achieving an MSE of around 10⁻⁵ at the 1038th epoch, but requiring 0.0751 s per epoch (Table 1).
Despite the relatively rapid convergence of SCG and the lower MSE of BFGS, additional analysis reveals that the Levenberg–Marquardt (LM) method exhibits more accurate predictions for Test 1. This outcome appears to stem from LM’s ability to fit the underlying force–deformation relationship more precisely when training data points are randomly selected without manual intervention. Meanwhile, in Tests 2 and 3, BFGS demonstrates slightly higher accuracy than LM, although the difference is marginal (see Figure 6).
These observations are further corroborated by
Figure 7,
Figure 8 and
Figure 9, which show that while BFGS and LM deliver comparable performance in the remaining portion of Test 1 (
Figure 7), LM’s displacement estimates in
Figure 8 and
Figure 9 align almost perfectly with the true data points, underscoring LM’s strong predictive capabilities across different segments of the dataset.
Taken together, these results point to LM providing an overall more successful prediction process than BFGS, particularly for Test 1, while BFGS maintains a slight edge in Tests 2 and 3. SCG remains the fastest approach but displays higher errors relative to LM and BFGS.
4.3. Comparison of Hidden Layer Neuron Number
In
Section 4.2, the LM method was highlighted for its superior performance, making it a central focus of this study. To further optimize the network’s architecture, a comprehensive evaluation was conducted by varying the number of neurons in a single hidden layer from 10 to 100. The goal was to identify the most efficient configuration based on two primary metrics: percent error (as depicted in
Figure 10) and the time required per epoch.
Initial tests using networks with smaller configurations—specifically 10, 20, 30, and 40 neurons—revealed significant limitations. These setups failed to converge effectively and exhibited poor validation performance, resulting in inadequate generalization. Consequently, they were deemed unsuitable and excluded from further analysis. This outcome underscores the importance of selecting an appropriate network size to ensure stability and convergence, particularly when employing the LM method. The remaining configurations, which incorporated larger neuron counts, successfully balanced accuracy and computational efficiency, thereby offering valuable insights into the optimal network architecture for delamination prediction.
As depicted in
Table 2, the error observed in networks featuring a hidden layer of 50 neurons remains consistent with those employing higher neuron counts. Specifically, configurations with 60, 70, and even 100 neurons demonstrate a similar error rate of approximately 1%—the highest error recorded for the LM method. This consistency across different configurations is noteworthy, as all tested hidden layers successfully converged and completed 100 validation checks without issue. Nonetheless, when evaluating performance at various hidden layer sizes, the required convergence time emerges as a critical factor. Among the tested configurations, the model with 50 neurons not only maintains high accuracy but also achieves convergence in the shortest total time (00:36), corresponding to 0.0825 s per epoch. This balance between accuracy and computational efficiency underscores the effectiveness of the 50-neuron hidden layer, marking it as the optimal choice for this study.
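A sweep of this kind can be scripted as in the sketch below; scikit-learn’s MLPRegressor with its L-BFGS solver stands in for the LM trainer (which is not available in that library), and the data are random placeholders, so the errors and timings are not those reported in Table 2:

```python
import time
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# X (n_samples, n_inputs) and y (n_samples,) stand in for the normalized
# time/force inputs and displacement output; random data are used here.
rng = np.random.default_rng(2)
X, y = rng.random((2700, 6)), rng.random(2700)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.8, random_state=0)

for n_hidden in range(10, 101, 10):      # sweep 10 to 100 neurons in one hidden layer
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), solver="lbfgs",
                       max_iter=2000, random_state=0)
    t0 = time.perf_counter()
    net.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    err = np.mean(np.abs(net.predict(X_te) - y_te))   # crude error proxy
    print(f"{n_hidden:3d} neurons: error={err:.4f}, fit time={elapsed:.2f}s")
```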
The selected LM model closely aligns with the experimental results (
Figure 7,
Figure 8 and
Figure 9), underscoring its strong predictive capability for new experimental datasets. This artificial model holds considerable promise for forecasting delamination properties, potentially reducing the need for physical testing on similar specimens. While the LM algorithm’s curve nearly replicates the experimental data, the SCG algorithm exhibits modest deviations. In light of the discussions in
Section 4.1 and
Section 4.2, the backpropagation-based LM algorithm offers substantial advantages over the other two evaluated methods. Moreover, previous studies by Mukherjee and Routroy [
22] and Soepangkat et al. [
34] reinforce these findings, consistently demonstrating the LM algorithm’s superior performance.
However, it is worth noting that, despite the LM algorithm’s clear superiority in most metrics, the SCG method—which shows the lowest overall accuracy—can outperform LM in certain aspects, such as faster time per epoch, as also noted by Rosales et al. [
35]. Nevertheless, the accuracy of the selected model, reaching approximately 99.5%, underscores its reliability and applicability in predictive delamination studies.
The conclusion that 50 neurons provide the most effective configuration is grounded in a detailed analysis of the trade-off between accuracy and computational efficiency. As illustrated in
Table 2, this configuration consistently yielded low error rates and faster convergence compared to other tested neuron counts. However, it is important to note that these experiments were performed under controlled conditions and with a relatively limited dataset.
To further ensure the robustness of this finding, cross-validation was employed using an 80/20 train–test split, repeated over multiple iterations. The results demonstrated that the 50-neuron model sustained stable performance across various runs, exhibiting only minimal fluctuations in error rates. This outcome underscores the consistency of the chosen configuration under the current experimental parameters.
In order to ensure that the dataset accurately represents real-world delamination scenarios, the experimental setup and sample preparation procedures adhered to established ASTM D3171 and D5528 standards, which simulate the most prevalent failure modes observed in aerospace and automotive applications. Although only three unidirectional carbon fiber composite samples were tested, a total of 2700 data points were generated under consistent loading and environmental conditions, providing a diverse yet coherent foundation for training, validation, and testing. This approach enabled an evaluation of the model’s performance on unseen data, revealing robust predictive accuracy and confirming the dataset’s utility in capturing essential force–displacement relationships. Furthermore, the Mode I delamination tests conducted at room temperature and under displacement control reflect common industry practices, bolstering confidence in the practical relevance and generalizability of the results.
While the present dataset and experimental framework proved sufficient for optimizing the network structure in this particular context, future research could broaden the scope by incorporating more diverse datasets and variable experimental conditions. Such expansions would help to validate and refine the optimal neuron count for a wider range of applications.
Figure 10 illustrates the training performance of the Levenberg–Marquardt (LM) method. Across most data points, the error ranges between 0% and 0.10%, demonstrating a high degree of accuracy and robust convergence. Nevertheless, certain points show transient spikes in the error, which could be attributed to outliers or brief mismatches between the model’s predictions and the underlying force–deformation relationships. Overall, the LM algorithm maintains a favorable balance between speed and accuracy under the current experimental conditions. Future work could explore refined data preprocessing and further hyperparameter optimization to mitigate these localized error peaks. These results suggest that the error patterns exhibit a symmetry-like distribution, contributing to the algorithm’s robust convergence and predictive accuracy.
5. Conclusions
This study investigated the effectiveness of three Artificial Neural Network (ANN) algorithms—Scaled Conjugate Gradient (SCG), Broyden–Fletcher–Goldfarb–Shanno (BFGS) Quasi-Newton, and Levenberg–Marquardt (LM)—in predicting Mode I delamination behavior in composite materials through Double Cantilever Beam (DCB) tests. Among the evaluated algorithms, the LM method achieved the highest accuracy, with prediction rates of 99.6% for force and 99.3% for displacement. Its optimal configuration, featuring 50 neurons in the hidden layer, surpassed both SCG and BFGS in terms of precision and reliability. Although SCG operated at a faster pace, its error rate of 8.6% greatly limited its accuracy. The BFGS algorithm offered a more balanced trade-off between speed and error but remained less precise than LM.
Additionally, this study leveraged a robust dataset of 2700 data points drawn from three separate samples, all prepared and tested under standardized protocols (ASTM D3171, D5528). These conditions effectively capture typical delamination scenarios in aerospace and automotive composites, thereby enhancing the model’s generalization capability. The Levenberg–Marquardt algorithm’s ability to minimize training and validation errors, even with this limited but representative dataset, further underscores its potential for various delamination applications.
The LM algorithm’s robust performance holds considerable implications for industries such as aerospace and automotive, where composite materials are critical for high-strength, low-weight applications. By reducing reliance on extensive physical testing, ANN-driven predictions can significantly cut both development time and resource expenditure, enabling more rapid iterations and optimization of composite structures. Consequently, the LM algorithm’s predictive capabilities present valuable opportunities to enhance and streamline material design processes. The structured patterns in the experimental data contribute to the predictive accuracy and robustness of the ANN models, reflecting symmetry-like properties.
The results highlight the potential of ANNs to minimize experimental efforts in material design by providing accurate and resource-efficient predictions. The proposed framework offers a balanced approach, combining precision and computational efficiency compared to traditional methods. Future work could explore advanced ANN architectures or hybrid AI models to further enhance prediction capabilities and broaden the applicability of this methodology to other failure modes or material systems.
Future investigations should focus on further improving ANN architectures and exploring advanced training methods to enhance both predictive accuracy and computational efficiency. Techniques such as reinforcement learning or genetic algorithms could offer additional optimization benefits. Moreover, extending this approach to predict other failure modes or delamination behaviors would broaden its applicability and provide deeper insights into the performance of composite materials in diverse conditions.
In conclusion, the LM-based ANN model demonstrated high predictive accuracy and practical utility in Mode I delamination prediction, marking a significant step forward in material design optimization. Its capacity to save time and cost makes it a vital asset for material design and testing processes, particularly in sectors where stringent performance standards for composite materials are paramount.
Future research could expand on the findings of this study by exploring alternative network architectures, such as multi-layer or convolutional neural networks, and assessing their performance in conditions involving high noise levels or limited data availability. An additional avenue involves systematically evaluating the robustness of ANN models by introducing controlled noise into the dataset and employing advanced preprocessing techniques—such as smoothing filters, dropout, or weight decay. These directions would provide a broader understanding of ANN capabilities in composite material prediction, enhance their reliability, and further optimize their practical implementation in real-world engineering contexts.