Article

CCFD: Efficient Credit Card Fraud Detection Using Meta-Heuristic Techniques and Machine Learning Algorithms

1 Department of Cyber Security, College of Engineering and Information Technology, Buraydah Private Colleges, Buraydah 51418, Saudi Arabia
2 Faculty of Computers and Information, Kafrelsheikh University, Kafrelsheikh 33516, Egypt
3 Department of Management Information Systems, School of Business, King Faisal University, Alhufof 31982, Saudi Arabia
4 Faculty of Specific Education, Kafrelsheikh University, Kafrelsheikh 33511, Egypt
5 College of Computing and Information Technology, Arab Academy for Science, Technology, and Maritime Transport, Cairo 2033, Egypt
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(14), 2250; https://doi.org/10.3390/math12142250
Submission received: 8 June 2024 / Revised: 16 July 2024 / Accepted: 17 July 2024 / Published: 19 July 2024
(This article belongs to the Special Issue Evolutionary Computation for Deep Learning and Machine Learning)

Abstract

This study addresses the critical challenge of data imbalance in credit card fraud detection (CCFD), a significant impediment to accurate and reliable fraud prediction models. Fraud detection (FD) is a complex problem due to the constantly evolving tactics of fraudsters and the rarity of fraudulent transactions compared to legitimate ones. Efficiently detecting fraud is crucial to minimize financial losses and ensure secure transactions. By developing a framework that transitions from imbalanced to balanced data, the research enhances the performance and reliability of FD mechanisms. Meta-heuristic optimization (MHO) techniques were applied to a dataset from Kaggle’s CCF benchmark datasets, which contains transactions made by European credit-cardholders, and were evaluated for their capability to pinpoint the smallest, most relevant set of features and for their impact on prediction accuracy, fitness values, the number of selected features, and computational time. The study evaluates the effectiveness of 15 MHO techniques, utilizing 9 transfer functions (TFs), in identifying the most relevant subset of features for fraud prediction. Two machine learning (ML) classifiers, random forest (RF) and support vector machine (SVM), are used to evaluate the impact of the chosen features on predictive accuracy. The results indicate a substantial improvement in model efficiency, achieving a classification accuracy of up to 97% and reducing the feature size by up to 90%. These findings underscore the critical role of feature selection in optimizing fraud detection systems (FDSs) and adapting to the challenges posed by data imbalance. This research also highlights how machine learning continues to evolve, revolutionizing FDSs with innovative solutions that deliver significantly enhanced capabilities.

1. Introduction

Electronic commerce (E-commerce) is the revolutionary method of buying and selling goods and services over the internet. This evolution in traditional business models has made it possible for consumers to shop from the comfort of their homes [1,2]. E-commerce platforms offer several advantages, such as speeding up procurement processes, reducing costs, improving customer convenience, enabling easy comparison of products and prices, quickly adjusting to market changes, and offering various payment options. These features have significantly supported economic stability, particularly when government restrictions and stay-at-home orders were enforced during pandemics like COVID-19 [3,4]. eMarketer reports indicated that worldwide e-commerce sales grew by 27.6% in 2020 and were forecast to grow by an additional 14.3% in 2021, approaching 5 trillion (https://www.insiderintelligence.com/content/worldwide-ecommerce-will-approach-5-trillion-this-year (accessed on 16 July 2024)).
Before the 2008–2009 financial crisis, e-commerce revenue had been growing substantially, by 15% to 25% annually. However, due to the economic downturn, growth plummeted to around 3% in 2009. Post-2009, e-commerce growth bounced back, climbing over 10% each year, a figure significantly exceeding that of overall sales. In 2020, the COVID-19 pandemic, despite its negative impact on digital travel sales, sparked a notable rise in e-commerce revenue. This was driven by a shift in consumer behavior towards online shopping, a trend that has persisted beyond the pandemic [5].
The COVID-19 pandemic resulted in a surge in demand for goods, including necessities. This trend has also led to an increased use of digital payment options, which in turn has fueled financial fraud. As e-commerce continues to evolve, businesses of all sizes have started accepting credit card (CC) payments. However, this uptick in CC use, especially for online purchases, has given fraudsters avenues to exploit and steal consumer CC details [6].
Financial fraud significantly impacts academic, commercial, and regulatory spheres, presenting a significant challenge for service providers and their clientele. This critical issue permeates the financial sector, affecting daily economic transactions worldwide. It involves the unauthorized use of assets or funds for personal gain, undermining confidence in financial institutions and leading to higher living costs. Financial fraud encompasses various activities, including falsifying financial statements, telecommunications fraud, deceptive insurance practices, and misconduct in the markets. The widespread consequences of fraudulent actions have led to significant disruptions in the global financial system, hastening the shift towards digital financial services and introducing new challenges [7,8].
Between 2000 and 2015, there was a substantial surge in the amount of money lost to fraud related to credit and debit cards. Carta et al. [9] emphasize that while illegal transactions and counterfeit CCs make up 10–15% of all frauds, they account for an overwhelming 75–80% of financial fraud damages. This has increased private and public investment in developing more advanced systems to detect fraudulent activities. The significant monetary transactions in E-commerce make it a prime target for fraud, leading to potentially substantial economic losses. A report from Juniper Research shows an alarming increase in fraud-related financial losses, soaring from 17.5 billion in 2020 to a staggering 20 billion in 2021. This underscores the urgent imperative for financial institutions to bolster their CCFD measures without delay [10].
Fraud is an unauthorized action marked by deceit. Credit card fraud (CCF) involves illegally obtaining a cardholder’s details through phone calls, text messages, or online hacking to carry out unauthorized transactions. These fraudulent acts typically involve software controlled by the fraudster. The process of detecting CCF starts when a customer makes a purchase using their card information; this transaction must be verified to ensure its authenticity [10,11,12].
Statistics reveal that in the fourth quarter of 2020, Visa and Mastercard distributed 2287 million CCs globally, as shown in Figure 1a,b. Visa was responsible for issuing 1131 million cards, while Mastercard issued 1156 million. This data highlights the growing ease and popularity of card-based transactions among consumers. However, it also indicates a potential risk, as this significant volume of transactions attracts fraudsters looking to exploit card users [13].
Research efforts are centered on creating detection systems that utilize ML, data mining (DM), and deep learning (DL) approaches. These systems analyze transactions to distinguish between legitimate and deceptive transactions. As fraudulent transactions increasingly mimic legitimate ones, the challenge of detecting CCF grows. This necessitates the adoption of more sophisticated fraud detection (FD) technologies by CC companies. An effective and accurate FDS for real-time fraud identification can generally be divided into two main types: misuse detection and anomaly detection systems [11].
Financial entities issuing CCs or overseeing online payments must implement automatic FD mechanisms. This practice not only cuts down on financial losses but also boosts the confidence of their customers. Thanks to the advent of artificial intelligence and big data, there are now innovative opportunities to employ sophisticated ML algorithms for identifying fraudulent activities [14]. The current FD technologies leverage sophisticated DM, ML, and DL techniques to achieve high efficiency. These systems use a binary classification approach, utilizing datasets with labeled transactions—identifying them as normal or fraudulent. The model created through this process can determine new transactions’ legitimacy. However, employing classification methods to identify fraudulent activities comes with its own set of challenges [15,16].
Automated mechanisms for identifying fraud play a crucial role for entities engaged in providing CC services or managing online payments, as they aid in reducing financial losses and enhancing trust among consumers. The advent of big data and advancements in artificial intelligence have paved the way for using complex ML models to spot fraudulent activities. Bao and colleagues (2022) [14] have demonstrated that the most recent advancements in fraud detection systems (FDSs), which utilize sophisticated DM, ML, and DL techniques, are highly efficient. The present study explores CCF, a critical type of banking fraud. CCs, a significant form of online payment globally, have made electronic transactions easier but have also resulted in increased fraud perpetrated by cybercriminals. The illegal utilization of CC systems or data, frequently without the card owner’s awareness, is an escalating issue affecting many banks and financial institutions worldwide.
Feature selection (FS) stands as a crucial preprocessing step aimed at addressing the issue of irrelevant features detrimentally affecting the performance of ML models. This process involves pinpointing and removing unnecessary attributes to decrease the dimensionality of the feature sets without compromising the accuracy of the model’s performance. Numerous strategies have been devised for classifying datasets, among which meta-heuristic optimization algorithms (MHOA) have stood out for their proficiency in solving a broad spectrum of optimization challenges [17].
MHOA stands for optimization techniques that aim to identify the best or nearly the best solutions to various optimization challenges. These methods are not dependent on derivatives, highlighting their ease of use, versatility, and ability to avoid getting stuck at local optima. MHOAs employ a stochastic approach, starting their optimization journey with solutions generated at random, in contrast to gradient search methods that rely on calculating derivatives within the search space. Their straightforwardness and simplicity, rooted in fundamental concepts and ease of implementation, render these algorithms adaptable and straightforward to tailor to specific issues. A distinctive feature of MHOAs is their superior ability to avoid early convergence. Thanks to their stochastic nature, they can effectively operate as a black box, evading local optima and extensively probing the search space [18,19,20,21,22,23].
Employing MHOAs to manage FS dilemmas can significantly mitigate obstacles during data analytics. FS plays a pivotal role in identifying pertinent features within imbalanced datasets, especially before the classification of CCF instances in large datasets. The key advantages of FS encompass easier data understanding, decreased time required for training, and addressing high dimensionality concerns. Bio-inspired algorithms, which excel in solving intricate and combinatorial challenges, have successfully identified CCF.

1.1. Motivations

This study addresses the challenge of data imbalance in CCFD by implementing advanced meta-heuristic optimization (MHO) techniques to refine classifier performance. Random forest (RF) and support vector machine (SVM) classifiers are utilized, leveraging their proven effectiveness across various domains [24,25,26]. The core of the experimentation involved 135 variants, obtained by pairing the MHO techniques with 9 typical S-shaped and V-shaped transfer functions designed to enhance their performance.
These variants were assessed using CCF benchmark datasets from the Kaggle repository, with RF and SVM classifiers acting as fitness evaluators. This robust evaluation pinpointed the sailfish optimizer (SFO) as a standout among 15 renowned MHO algorithms, including brown-bear optimization (BBO), African vultures optimization (AVO), and others. SFO was particularly notable for its ability to reduce feature size by up to 90% while achieving classification accuracy as high as 97%.
The results of this investigation are critical for improving the security of CC transactions, thus enhancing e-commerce reliability worldwide. Utilizing the European credit cardholders dataset from Kaggle, the study demonstrated the algorithm’s capability to identify critical features accurately and reliably. Each algorithm was tested through 30 separate executions using the RF and SVM classifiers, ensuring the consistency and reliability of the model in detecting CCF. This research provides a significant foundation for future enhancements in FD technologies.

1.2. Contributions

This study offers significant advancements in addressing the challenges of data imbalance in CCFD, as outlined in the following contributions:
  • Employing data from the European cardholder dataset on Kaggle, this work evaluates the performance of various techniques to address data imbalance. These techniques are assessed based on mean classification accuracy, the number of selected features, fitness values, and computational time.
  • Implementation and testing of 15 MHO algorithms, each enhanced by nine different transfer functions, are conducted to identify the most significant features for managing data imbalance within the dataset.
  • Evaluation of two machine learning techniques, random forest (RF) and support vector machine (SVM), assessing their effectiveness on features selected by the MHO algorithms to validate robustness in improving model performance under conditions of data imbalance.
  • Document significant improvements in classification accuracy, notably with the sailfish optimizer combined with the random forest (SFO-RF) approach, achieving up to 97% accuracy. This highlights the effectiveness of the proposed methods in overcoming the challenges posed by imbalanced datasets.

1.3. Structure

The remainder of the paper is organized as follows: Section 2 reviews current research on FD. Section 3 presents the proposed techniques. Section 4 discusses the findings and analysis of experimental results. Finally, Section 5 concludes the paper, outlining implications derived from the results.

2. Related Work

Many research studies have recently emerged, conducting reviews on existing strategies for FD and prevention, as noted in the scholarly works. This segment highlights various academic studies centered on FD, focusing on those addressing the issue within the framework of class imbalance challenges. A wide range of methods has been utilized to uncover fraudulent financial transactions. To effectively review the most relevant literature in this area, it is helpful to categorize the critical methodologies into several distinct groups, such as ML, strategies for detecting CCF, ensemble methods, feature ranking techniques, and methods for user authentication.
Zojaji et al. [27] have organized the methods for detecting fraud in CC transactions into two primary categories: supervised and unsupervised. They provided a comprehensive classification of the techniques mentioned in the studies, focusing on the categories and the applied datasets. However, they did not suggest any new methods themselves. On the other hand, Adewumi et al. [28] reviewed detecting fraud in online CC transactions with methods inspired by nature using ML. Their review spanned studies from 1997 to 2016, focusing primarily on the essential techniques and algorithms that emerged between 2010 and 2015 without delving into the methodologies. Similarly, Chilaka et al. [29] investigated methods for FD in CCs within the e-banking industry, focusing on pertinent studies from 2014 until 2019 to summarize the approaches taken. They concentrated on solutions that emphasized a quick, efficient response. However, their review was not conducted systematically and did not include a classification system.
Khalid et al. [30] have developed an ensemble approach using various ML algorithms to improve CCFD. This method leverages different algorithms’ strengths to enhance the precision and dependability of FD mechanisms. Through detailed experiments and analysis, the authors illustrate how their method can effectively tackle the difficulties of identifying fraudulent activities in CC transactions.
Abdul et al. [31] introduced a federated learning (FL) framework tailored explicitly for detecting CCF, including methods for balancing data to enhance effectiveness. FL allows for training models on multiple decentralized data sources without the need to gather the data in a single location, thereby protecting data privacy. The researchers applied methodologies to overcome the challenge of uneven data distribution, a common issue in FD data. Their experiments and analysis show that their method not only maintains privacy but also significantly increases FD’s accuracy by addressing the data imbalance.
Chen et al. [32] have proposed a method for CCFD that leverages sampling alongside self-supervised learning approaches. Intelligent sampling is used to refine the choice of samples for training, thereby enhancing the training process’s efficiency. Meanwhile, self-supervised learning is applied to extract valuable features from data that have not been labeled, which is crucial for identifying illegal transactions. Through the research, the team has shown that their methodology improves the accuracy of FD, reduces the computational effort required, and lessens the dependency on labeled datasets.
Taha et al. [33] developed an intelligent technique for identifying fraud in CC transactions using a refined version of the light gradient boosting machine (LightGBM) algorithm. LightGBM is known for its speed and precision in processing vast amounts of data, and the authors further enhance its effectiveness by fine-tuning its settings. Their research and testing reveal that this optimized method significantly improves the accuracy of FD while keeping the computational demands manageable. This strategy presents a viable option for banks and other financial entities aiming to upgrade their systems to detect fraudulent activities.
In their work, Rawashdeh et al. [34] developed an effective technique for identifying CCF, employing a combination of evolutionary algorithms for selecting the best features and random weight networks for classification. This approach aims to enhance FD precision through careful choice of the most significant features and to optimize the network weights while minimizing the effort consumed. The effectiveness of this method in accurately spotting CCF is demonstrated through various tests and assessments, highlighting its contribution to financial security.
Kennedy et al. [35] outlined the issue of significant imbalance in the CCFD dataset by introducing a method for synthesizing class labels. This imbalance often results in models that are biased and ineffective at identifying fraudulent transactions. To address this, the team created synthetic examples of the less represented class, effectively evening out the dataset. They applied ML algorithms to these balanced datasets to train their models. Through experimental analysis, they showed that their method significantly enhances the performance of systems that detect CCF, making a noteworthy contribution to the FD domain.
Aziz et al. [36] delved into several DM approaches to identify CCF, concentrating on a range of ML tactics, including RF, SVM, hybrid methods, decision tree (DT), and DL. These techniques were employed to detect common patterns in consumer behavior from historical data. Their analysis of ML strategies revealed notable disparities across various studies and pointed out potential future directions for investigation.
Singh et al. [7] introduced a model for CCFD that integrates a two-stage process involving SVM and an optimization technique inspired by firefly behavior. Initially, the model employs the firefly technique alongside the CfsSubsetEval technique to refine the FS. Subsequently, an SVM classifier is utilized to build the CCFD model in the second stage. This approach achieved a classification with 85.65% accuracy in 591 transactions.
Nguyen et al. [37] utilized a DL strategy that incorporates long short-term memory (LSTM) and convolutional neural network techniques to successfully identify CCF on a variety of card types, including European, small, and tall, across various datasets. To combat the challenge of imbalanced classes, the research employed sampling methods, which, while reducing efficacy on unseen data, enhanced performance on familiar samples. The study demonstrates the proposed DL methods’ capability to detect CCF in real-world applications, outperforming traditional ML models. Among all tested algorithms, the LSTM model with 50 units was highlighted for its superior performance, achieving an F1 score of 84.85%.
Ahmed et al. [38] explored using FS to identify intrusions in wireless sensor networks. They utilized particle swarm optimization (PSO) and principal component analysis for this purpose, and they also compared its effectiveness with that of the genetic algorithm (GA). Rtayli et al. [39] developed an advanced method for identifying CC risk, employing algorithms such as RF and SVM to detect fraud.
Misra et al. [40] and Schlör et al. [41] balanced datasets for detection models by applying under-sampling methods. A significant drawback of under-sampling is its propensity to exclude valuable instances from the training dataset, potentially diminishing detection accuracy. Conversely, isolation techniques have been utilized to approximate data distribution and construct a model with a diverse mixture of components. Such strategies for detecting outliers have proven effective in identifying fraud, as shown by Buschjäger et al. [42]. However, comprehensive evaluations of recent ML algorithms that leverage under-sampling to mitigate imbalance are notably absent from the research, and hybrid semi-supervised methods, which merge supervised learning with unsupervised outlier detection, remain significantly underutilized, as does the assessment of FDSs.
Hajek et al. (2022) [43] focused on creating FDSs utilizing XGBoost, which also evaluates the financial implications of such systems. This system underwent extensive testing on a dataset of over 6 million mobile transactions. To determine their model’s effectiveness, they compared the proposed model to other ML strategies designed for managing imbalanced datasets and identifying outliers. Their research showed that a semi-supervised ensemble model, combining unsupervised outlier detection techniques with an XGBoost technique, surpassed the performance of other models regarding standard classification metrics. The most substantial cost reduction was achieved by integrating random under-sampling with the XGBoost approach.
Krim et al. [44] described an autoencoder as a particular kind of neural network that learns to encode its input and then decode (reconstruct) it. This approach includes the specialized training of autoencoders on data points that are not anomalous. It depends on evaluating reconstruction errors to identify anomalies classified as either ’fraud’ or ’no fraud.’ This suggests that in situations not previously encountered by the system, there is a greater chance of detecting anomalies [2]. A slight rise beyond the maximum limit is typically marked as unusual. This method has been utilized in scenarios involving an autoencoder-based framework for identifying anomalies. Within the realm of ML, a generative adversarial network (GAN) consists of two neural networks that compete to improve each other’s predictive abilities. Mainly unsupervised, GANs are trained through an adversarial zero-sum game.
These investigations utilized various ML techniques, such as SVM, DT, RF, NB, LR, LightGBM, and MLP. Moreover, firefly optimization, SMO, and hybrid sampling were employed. The findings are predominantly shown through accuracy (ACC), showcasing high levels of success in numerous cases. For example, Singh et al. reached an accuracy level of 85.65%, whereas Balogun et al. recorded accuracies of 97.50% with SVM and 98.60% with RF. Although various methods are used across these studies, a significant number have proven to be effective in detecting fraud.
The comprehensive analysis of the existing literature underlines a crucial realization: the landscape of FD, especially within the realm of CC transactions, is rapidly evolving in response to the equally dynamic tactics of fraudsters. The shift from traditional statistical methods to more intricate approaches, such as ML and MHO, marks a significant turning point in the battle against financial fraud. These advancements are not merely incremental; they are pivotal in addressing the persistent challenge of imbalanced data, which significantly undermines the effectiveness of detection systems. As such, the field stands on the brink of a new era in FD, where deploying sophisticated algorithms could redefine security standards in digital financial transactions. The ongoing innovation in this sector is vital, promising not only to enhance accuracy but also to fortify the resilience of economic systems against cyber-fraud threats.

3. Proposed Model

The diagram depicts a comprehensive workflow for FS in the context of FD using MHO algorithms. FS is a critical preprocessing step aimed at reducing the dimensionality of high-dimensional datasets by eliminating irrelevant and redundant features, thereby enhancing the performance of ML models.

To address the class imbalance, we adopted the under-sampling technique. This involved randomly sampling an equal number of instances from the minority class (fraudulent transactions) and the majority class (non-fraudulent transactions). By equalizing the distribution of classes in the dataset, we aimed to enhance the performance of classification models in detecting fraudulent activities. The dataset was partitioned into subsets based on the class label, segregating fraud and non-fraud transactions. Then, an equivalent number of instances (492 instances) were randomly sampled from the majority class to match the minority class. The sampled subsets from both classes were combined to form a balanced dataset, which was shuffled to introduce randomness and prevent bias in subsequent analyses.

The FS phase involves the application of different well-known MHO algorithms, including brown-bear optimization (BBO), African vultures optimization (AVO), Aquila optimization (AO), sparrow search algorithm (SSA), artificial bee colony (ABC), particle swarm optimization (PSO), bat algorithm (BA), grey wolf optimization (GWO), whale optimization algorithm (WOA), grasshopper optimization algorithm (GOA), sailfish optimizer (SFO), Harris hawks optimization (HHO), bird swarm algorithm (BSA), atom search optimization (ASO), and Henry gas solubility optimization (HGSO). These algorithms are designed to efficiently search for optimal feature subsets while considering the combinatorial nature of the problem and the exponential increase in computational time with problem complexity. The MHO techniques are evaluated against nine common S-shaped and V-shaped TFs to produce multiple variants. These variants are assessed using random forest (RF) and support vector machine (SVM) classifiers as fitness evaluators. Finally, the performance of the top-performing variants for each classifier is compared across the 15 MHO algorithms. This comparative analysis provides insights into the effectiveness and robustness of the MHO algorithms for FS in FD applications, as shown in Figure 2.

3.1. Data Preprocessing

The preprocessing steps involve addressing class imbalance and preparing the dataset for feature selection. An under-sampling technique is employed to handle the severe class imbalance in the dataset, where an equal number of instances from both fraudulent and non-fraudulent classes are randomly sampled, resulting in a balanced dataset. This ensures that the models are trained on representative data from both classes. Additionally, the dataset is partitioned based on the class label, and the sampled subsets are combined and shuffled to introduce randomness. Following the preprocessing steps, a balanced dataset was obtained, comprising 984 instances (492 frauds and 492 non-frauds), wherein the occurrences of both fraudulent and non-fraudulent transactions are approximately equal, as in Table 1. This balanced dataset is the foundation for subsequent analyses, which include feature engineering, model training, and evaluation.
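A minimal sketch of this under-sampling step is shown below, assuming the data are loaded as a pandas DataFrame whose class label column is named Class (as in the Kaggle dataset described in Section 4.1); the random seed and function name are illustrative, not the exact implementation used in this study.

```python
import pandas as pd

def balance_by_undersampling(df: pd.DataFrame, label_col: str = "Class",
                             seed: int = 42) -> pd.DataFrame:
    """Randomly under-sample the majority class so both classes have equal size."""
    fraud = df[df[label_col] == 1]       # minority class (492 instances)
    non_fraud = df[df[label_col] == 0]   # majority class

    # Sample as many majority instances as there are minority instances.
    non_fraud_sampled = non_fraud.sample(n=len(fraud), random_state=seed)

    # Combine both subsets and shuffle to remove any ordering bias.
    balanced = pd.concat([fraud, non_fraud_sampled])
    return balanced.sample(frac=1.0, random_state=seed).reset_index(drop=True)

# Example usage (hypothetical file name):
# balanced_df = balance_by_undersampling(pd.read_csv("creditcard.csv"))
```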

3.2. MHO Algorithms for FS

The advantage of using MHO algorithms lies in their ability to pinpoint the key features within corpus data. In our recent research, we employed 15 distinct MHO algorithms to determine the crucial features needed for the accurate prediction of CCF and to identify which features, when eliminated, could improve or maintain the system’s predictive performance at its peak. A brief description of each algorithm is provided below.
  • Brown-Bear Optimization (BBO): The BBO algorithm is an MHO approach to optimization, first presented in paper [18]. This method is unique because it is inspired by the specific abilities of brown bears to distinguish and sniff out scents, a trait not seen in other bear species. These special abilities have been translated into mathematical models to build the BBO technique, effectively replicating the natural behaviors of bears. Brown bears showcase significant intelligence through their ability to differentiate between various smells, using their sense of smell as a critical form of communication. They demonstrate pedal scent differentiation throughout their territories, each group displaying unique behaviors. These include specific walking patterns, deliberate stepping, and manipulation of their feet on the ground, all contributing to their ability to distinguish scents. Moreover, brown bears exhibit a sniffing pedal differentiation behavior, where group members prefer to engage in sniffing activities. The efficiency of the optimization algorithm is dependent on its ability to exploit and explore. The BBO algorithm’s exploitation aspect is inspired by the behavior of differentiating scents through pedals. In contrast, its exploration aspect is akin to the act of sniffing out differences in pedal scents.
  • African Vultures Optimization (AVO): AVO is a novel swarm-based optimization technique [45] inspired by hunting African vultures, who scavenge for weak animals and carcasses. These birds exhibit diverse traits and are classified into three groups based on strength, with the strongest having the highest chances of securing food. Vultures employ rotational flight to cover vast distances and locate food sources, often using aggressive tactics to access prey. The AVOA mimics these behaviors to optimize search processes in various problem-solving scenarios [46].
  • Aquila Optimization (AO): AO is introduced [47] as a novel MHO algorithm inspired by the hunting strategies and behaviors of the majestic Aquila genus, which includes eagles known for their keen vision, agility, and efficiency in capturing prey. The algorithm adapts these characteristics into a computational framework for solving the optimization. By mimicking the efficient hunting techniques of eagles, the Aquila optimizer aims to offer a powerful and practical approach to optimization tasks, potentially outperforming existing algorithms in terms of convergence speed, solution quality, and robustness across various problem domains.
  • Sparrow Search Algorithm (SSA): SSA [48] is an MHO method inspired by the social behavior and interactions of bird swarms, particularly sparrows. Sparrows, found globally and often living near human habitats, are omnivorous birds known for feeding on weed or grain seeds. They exhibit intelligence and memory, employing anti-predation and foraging behaviors. Captive sparrows are categorized into producers, actively seeking food sources, and scroungers, who obtain food from producers. Sparrows flexibly switch between these roles using similar foraging strategies. In SSA, each sparrow monitors the behavior of its neighbors, with attackers competing for high food intake for the flock. Sparrows utilize different foraging strategies to optimize energy utilization and increase food intake, with scrawny sparrows benefiting from these strategies. Sparrows in the search space are vulnerable to predator attacks and must seek safer locations. Sparrows exhibit natural curiosity and vigilance, emitting warning chirps to alert the group of danger, prompting them to fly away from potential threats. Based on the behaviors observed in sparrows, a mathematical model is formulated to construct the SSA algorithm, which leverages these principles to optimize search processes in various problem-solving scenarios [49].
  • Artificial Bee Colony (ABC): The ABC algorithm was proposed by Karaboga in 2005 [50], modeling the foraging behavior of a bee colony. Inspired by the intelligent foraging behaviors of honey bees, the ABC algorithm’s search process comprises three primary phases: dispatching forager bees to assess nectar quantity, sharing information with onlooker bees, and deploying scout bees to explore potential new food sources. This algorithm is part of a broader trend of algorithms inspired by insect colonies’ foraging behavior under the “survival of the fittest” rule. It boasts easy implementation, minimal control parameters, and robust stability [51].
  • Particle Swarm Optimization (PSO): PSO is an effective and straightforward optimization technique inspired by the social behavior of animals like birds and fish. It has been widely applied across numerous fields, such as ML, image processing, data mining, robotics, etc. PSO was initially introduced by Eberhart and Kennedy in 1995 [52], drawing on models that mimic the collective behavior observed in natural species. As a result, PSO has found application across a broad range of industries for tackling various optimization challenges [53].
  • Bat Algorithm (BA): BA, a meta-heuristic approach inspired by the echolocation of bats, was introduced by Yang in 2010 [54]. This method draws inspiration from the echolocation behavior of microbats, which is characterized by varying pulse emission rates and loudness. Moreover, it incorporates principles of swarm intelligence (SI), influenced by observations of bats. Typically, bats utilize short, intense sound pulses during nocturnal hunts to locate obstacles or prey through the echoes these pulses generate. Additionally, the unique auditory system of bats enables them to ascertain the size and position of objects [55].
  • Grey Wolf Optimization (GWO): The GWO algorithm was introduced in 2014 [56] and has since become one of the most used algorithms based on SI. The GWO algorithm’s inspiration comes from grey wolves’ natural hunting behavior, which efficiently tracks and captures their prey. The algorithm mimics the social hierarchy within a wolf pack to assign various roles during the optimization process. These roles are categorized into four groups: alpha, beta, delta, and omega, representing the ranked candidate solutions maintained during the search [57].
  • Whale Optimization Algorithm (WOA): Mirjalili and Lewis developed WOA [58]. It was designed to tackle numerical optimization challenges. It incorporates three distinct mechanisms inspired by the feeding strategies of humpback whales: prey detection, prey encirclement, and bubble-net hunting. WOA aims to pinpoint the optimal solution for specific optimization issues by deploying a group of search agents. What sets WOA apart from similar algorithms are the unique rules it applies to enhance potential solutions at every step of optimization. Mimicking the predatory tactics of humpback whales, WOA zeroes in on and captures prey using a method referred to as bubble-net feeding [59].
  • Grasshopper Optimization Algorithm (GOA): Saremi et al. [60] introduced GOA, which draws inspiration from the natural foraging and swarming behavior of grasshoppers. This algorithm stands out due to its adaptive mechanism, balancing the exploration and exploitation processes. Due to these features, GOA has the potential to navigate the complexities of multi-objective search spaces more efficiently than other strategies. Moreover, it boasts a lower computational complexity than many current optimization methods [61].
  • Sailfish Optimizer (SFO): SFO [62] is a meta-heuristic algorithm that mimics the hunting behavior of sailfish preying on sardines. This hunting strategy aids predators in conserving energy. The algorithm features two populations: sailfish and sardines. The sailfish represent candidate solutions, with their positions in the search space corresponding to problem variables. SFO aims to randomize the movement of both the sailfish and sardines. Sailfish are dispersed throughout the search area, while the positioning of sardines assists in locating the best solution within the search space [63].
  • Harris Hawks Optimization (HHO): HHO is a population-based optimization method inspired by the cooperative hunting behavior of Harris hawks in nature. Heidari et al. [64] introduced HHO to simulate the dynamic teamwork and hunting strategies of these hawks, which include techniques such as tracing, encircling, approaching, and attacking prey. In this model, the hawks’ pursuit efforts represent agents navigating the search area, with the prey representing the optimal solution. HHO effectively addresses various real-world optimization challenges and can handle discrete and continuous domains. It can explore uncharted search spaces and achieve high-quality solutions, making it suitable for tasks requiring optimal parameter extraction. Overall, HHO demonstrates promising performance and offers a novel approach to solving optimization problems inspired by nature’s cooperative behaviors [65].
  • Bird Swarm Algorithm (BSA): Meng et al. [66] unveiled a novel MHO strategy known as BSA for tackling continuous optimization issues. This approach is inspired by SI, which originates from the collective behaviors and interactions observed in bird swarms. By emulating the search for food, vigilance, and flight patterns of birds, BSA effectively leverages SI drawn from these avian swarms to address various optimization challenges [67].
  • Atom Search Optimization (ASO): ASO is presented as an optimization method inspired by molecular dynamics. In this approach, the search space is navigated by the position of atoms, each representing a potential solution, evaluated based on its mass or “heaviness” [68]. The interaction between atoms is determined by their proximity, leading to either attraction or repulsion. This dynamic causes lighter atoms to move towards the heavier ones. Additionally, heavier atoms have a slower movement, so they are more efficient in thoroughly searching local areas for improved solutions. On the other hand, the rapid movement of lighter atoms enables them to explore new and broader areas of the search space more effectively.
  • Henry Gas Solubility Optimization (HGSO): Hashim et al. [69] introduced the HGSO algorithm in 2019. HGSO is a variant of the MHO algorithms, drawing inspiration from Henry’s law to mimic the behavior of gas particles [70]. It employs gas clustering behavior to effectively balance exploitation and exploration within the search space, thus mitigating the risk of converging to local optima [71].

3.3. ML Techniques

This section outlines the ML classifiers employed in the research to evaluate the subsets of selected features in terms of classification accuracy and fitness values; a brief usage sketch follows the descriptions below.
  • Random Forest (RF): In 2001, Breiman [72] introduced the RF algorithm, which aggregates the outcomes of multiple decision trees during training to derive the mode of the classes or the average prediction of the individual trees. It deliberately selects samples and predictors randomly, and the optimized hyperparameter values have a substantial impact on enhancing prediction accuracy [73]. The fundamental concept of the RF algorithm revolves around the bootstrap sampling method, which randomly and repeatedly generates N samples from the training dataset. It builds a robust classifier by integrating weak decision trees. Two thirds of the original data are used for each training set and one third for the test set. The features used to split the decision trees are selected randomly, and RF helps relieve overfitting problems by choosing inputs and predictors randomly [74]. A strength of RF is that the learning effect of the integrated ensemble is regularly greater than the sum of the learning effects of its parts. RF is highly versatile, excelling with large volumes of data, effectively estimating missing values, and balancing errors in unbalanced sets. It accurately identifies the most critical features in classification and can be utilized with other datasets for classification and regression tasks. Furthermore, it enhances prediction accuracy without significantly increasing computation and demonstrates strong performance in predicting stock prices. Its wide-ranging applications include bioinformatics, data mining, big data, and various other domains [75].
  • Support Vector Machine (SVM)
    Vapnik and Cortes [76] first proposed SVM for classifying linear and non-linear problems. It is a well-known supervised binary classifier that builds a model by grouping data into two classes. It plots a hyperplane boundary (a decision surface) to separate the training dataset in the input space by maximizing the margin between positive and negative examples [77]. Extensive research has thoroughly explored SVM’s ability to optimize performance by adapting the training classifier or reducing the training set size. SVM has crucial applications in various fields, including ML, statistics, object recognition, text categorization, speaker identification, and health care [78].
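The sketch below illustrates how either classifier can act as a fitness evaluator for a candidate feature subset, scoring a binary feature mask by hold-out accuracy; the scikit-learn hyperparameters and the 80/20 split are placeholder assumptions rather than the exact settings used in this study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def evaluate_subset(X: np.ndarray, y: np.ndarray, mask: np.ndarray,
                    classifier: str = "rf", seed: int = 0) -> float:
    """Return the hold-out accuracy obtained when training only on the selected features."""
    if mask.sum() == 0:                 # an empty subset cannot be evaluated
        return 0.0
    X_sel = X[:, mask.astype(bool)]     # keep only the features flagged by the binary mask

    X_tr, X_te, y_tr, y_te = train_test_split(
        X_sel, y, test_size=0.2, stratify=y, random_state=seed)

    model = (RandomForestClassifier(n_estimators=100, random_state=seed)
             if classifier == "rf" else SVC(kernel="rbf"))
    model.fit(X_tr, y_tr)
    return accuracy_score(y_te, model.predict(X_te))
```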

3.4. Transfer Functions (TFs)

As the final solutions acquired through the utilized MHO techniques comprise continuous values, MHO techniques cannot address an FS problem directly. Thus, it becomes essential to employ a mapping (transfer) function to convert these continuous values into binary 0s or 1s. Transfer functions (TFs) [79] dictate the rate of change in the decision variable values from 0 to 1 and vice versa. Two common families of transfer functions used for this purpose are S-shaped functions, so named because their graphical representation resembles the letter ’S’, and V-shaped transfer functions. S-shaped functions are used to map continuous values to probabilities, which can then be converted into binary values through a thresholding process. The output of an S-shaped transfer function lies between 0 and 1, representing the probability of including a feature. V-shaped transfer functions, on the other hand, are characterized by their ’V’-shaped graphical representation. These functions are also used to map continuous values to binary decisions but follow a mathematical approach different from that of S-shaped functions. The output of a V-shaped transfer function is used to determine the likelihood of a feature’s inclusion based on a different set of criteria.
When selecting a TF for the conversion of continuous to binary values, several considerations must be taken into account from the MHO techniques perspective, as follows:
  • The range of values from a TF should be between 0 and 1, representing the probability of a feature-changing state.
  • If the evaluation metric for the feature indicates suboptimal performance, the TF should show a higher probability of changing the current state in the next iteration.
  • When a feature is considered optimal, the TF should have a low probability of changing its current state.
  • The probability generated by the TF should rise as the evaluation metric approaches a threshold value. This enables less optimal features to have a higher likelihood of changing their state, which helps move towards more optimal solutions in subsequent iterations.
  • The probability derived from a TF must decrease as the evaluation metric moves away from the threshold value.
These concepts demonstrate the high capability of TFs to convert the continuous search process into a binary one for each $x_{i,j}$, using Equation (1):

$$x_{i,j}^{\mathrm{bin}}(t+1) =
\begin{cases}
\begin{cases}
0 & \text{if } rand < TF\!\left(x_{i,j}(t+1)\right)\\
1 & \text{if } rand \geq TF\!\left(x_{i,j}(t+1)\right)
\end{cases} & \text{if TF is S-shaped},\\[1ex]
\begin{cases}
\neg\, x_{i,j}^{\mathrm{bin}}(t) & \text{if } rand < TF\!\left(x_{i,j}(t+1)\right)\\
x_{i,j}^{\mathrm{bin}}(t) & \text{if } rand \geq TF\!\left(x_{i,j}(t+1)\right)
\end{cases} & \text{if TF is V-shaped},
\end{cases} \tag{1}$$

where $x_{i,j}^{\mathrm{bin}}(t+1)$ represents the j-th dimension of the i-th individual at the current iteration $t+1$, $rand$ is a number selected randomly from within the range $[0,1]$, and $TF\!\left(x_{i,j}(t+1)\right)$ is the probability value obtained when applying a given TF to every j-th component’s continuous value of agent i. It is clear from Equation (1) that we have two cases: (i) if the TF is S-shaped, then if $rand$ is less than the probability returned by the involved TF, the j-th dimension of the original individual is set to 0; otherwise, it is set to 1; and (ii) if the TF is V-shaped, then if $rand$ is less than the probability returned by the involved TF, the j-th dimension is negated; otherwise, it remains unchanged. Thus, continuous variables are successfully mapped into binary by using the S-shaped and V-shaped TFs and Equation (1).
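As a hedged illustration of Equation (1), the sketch below uses the sigmoid as an example S-shaped TF and |tanh| as an example V-shaped TF; these two formulas are common stand-ins and are not necessarily among the nine TFs listed in Table 2.

```python
import numpy as np

rng = np.random.default_rng(0)

def s_shaped(x):            # example S-shaped TF: sigmoid 1 / (1 + e^{-x})
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):            # example V-shaped TF: |tanh(x)|
    return np.abs(np.tanh(x))

def binarize(x_new, x_old_bin, tf, shape):
    """Apply Equation (1): map a continuous position to a binary one."""
    prob = tf(x_new)
    rand = rng.random(x_new.shape)
    if shape == "S":
        # S-shaped rule: the new bit is 0 when rand < TF, otherwise 1.
        return np.where(rand < prob, 0, 1)
    # V-shaped rule: the old bit is flipped when rand < TF, otherwise kept.
    return np.where(rand < prob, 1 - x_old_bin, x_old_bin)
```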
Table 2 reports the families of TFs, while Figure 3 exhibits the two families, divided into S-shaped and V-shaped TFs. Here, it should be noted that the proposed MHO techniques were evaluated based on the nine TFs whose mathematical expressions are shown in Table 2.

3.5. Sampling Technique

The dataset used in this study contains 284,807 transactions with a severe class imbalance, where only 492 instances are fraudulent. Under-sampling was chosen because it creates a balanced dataset by reducing the majority-class instances without introducing synthetic data points. This methodological approach aligns with our aim of accurately capturing the inherent characteristics of both fraudulent and non-fraudulent transactions. Given its computational efficiency and simplicity of implementation, under-sampling is suitable for handling large-scale datasets like ours. It allows us to focus on the intrinsic patterns within the data without the added complexity of generating synthetic instances, which may not accurately represent real-world fraud scenarios. Under-sampling is one of several techniques data scientists can use to extract more accurate information from originally unbalanced datasets [82]. The steps taken for the under-sampling process are as follows:
  • Random Sampling: Instances from the majority class (non-fraudulent transactions) were randomly sampled to match the instances in the minority class (fraudulent transactions), resulting in a balanced dataset.
  • Data Partitioning: The dataset was partitioned based on the class label, ensuring equal representation from both classes.
  • Combining and Shuffling: The sampled subsets were then combined and shuffled to introduce randomness and prevent ordering bias.
While reducing the dataset to 984 instances (492 per class) from the original 284,807 transactions does involve a reduction in the overall data volume, our approach ensured that this reduction was performed in a manner that preserved the integrity and representativeness of the data:
  • We employed stratified sampling to ensure that the under-sampling process maintained a proportional representation of both fraud and non-fraud within the reduced dataset. This approach helps mitigate the risk of losing critical information that may be present in the minority class.
  • Post-sampling, rigorous feature engineering and feature selection techniques (MHO) were applied to maximize the relevance and quality of the retained information. By focusing on the most discriminative features, we aimed to capture and leverage the essential characteristics that differentiate fraudulent from non-fraudulent transactions.

4. Experimental Methodology

This section presents the performance evaluation results of the 15 MHO techniques and their variants with the ML classifiers RF and SVM based on nine TFs. The experimental dataset was downloaded from the CCF online dataset on the Kaggle ML repository. The parameters of the utilized MHO methods and ML classifiers are defined in Section 4.2. Section 4.3 describes the performance metrics used. The experimental results are discussed in Section 4.4 and Section 4.5. The convergence curves are shown in Section 4.7.

4.1. Dataset Description

To assess the robustness of the MHO techniques with nine different TFs (five S-shaped and four V-shaped functions), the Kaggle dataset was considered in this research. It can be downloaded directly from the Kaggle repository (https://www.kaggle.com/datasets/mlg-ulb/creditcardfraud (accessed on 16 July 2024)). This dataset was used to create a framework for CCFD, from which the relevant independent input features and the target output label are extracted and utilized to detect fraud. The dataset contains transactions made by credit-cardholders in September 2013, primarily from European cardholders. It includes 284,807 transactions over two days, of which only 492 transactions are recorded as fraud. This creates a severe class imbalance, with fraudulent transactions accounting for only 0.172% of all transactions. The dataset consists of numerical input variables resulting from a principal component analysis (PCA) transformation. Due to confidentiality constraints, the original features and additional background information about the data are unavailable. The features V1 through V28 represent the principal components obtained through PCA, while ’Time’ and ’Amount’ are the only features not subjected to PCA transformation. ’Time’ denotes the elapsed time in seconds between each transaction and the first transaction in the dataset, while ’Amount’ denotes the monetary value of each transaction. The target class takes two values: 1 for fraudulent transactions and 0 for non-fraudulent transactions (Table 3). We explored the ML techniques on this binary classification dataset using the Python programming language.
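A minimal sketch of loading this dataset and verifying the imbalance described above, assuming the CSV file from the Kaggle repository has been downloaded locally as creditcard.csv:

```python
import pandas as pd

# Load the Kaggle credit card fraud dataset (columns: Time, V1-V28, Amount, Class).
df = pd.read_csv("creditcard.csv")

counts = df["Class"].value_counts()
fraud_ratio = counts[1] / len(df)

print(f"Total transactions: {len(df)}")          # expected: 284,807
print(f"Fraudulent transactions: {counts[1]}")   # expected: 492
print(f"Fraud ratio: {fraud_ratio:.3%}")         # expected: about 0.172%
```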

4.2. Parameter Settings

Several MHO algorithms based on two ML classifiers were evaluated utilizing 9 TFs. These MHO algorithms include BBO, AVO, AO, SSA, ABC, PSO, BA, GWO, WOA, GOA, SFO, HHO, BSA, ASO, and HGSO. Each technique underwent thirty experiments on the utilized dataset due to the stochastic character of the meta-heuristic techniques. We documented evaluation metrics based on average results to make a fair comparison between the different methods. We assigned a population size of 10 and a maximum of 100 iterations to all techniques. The problem size is given by the number of features in the benchmark, and the search domain is set to [−1, 1], allowing exploration within a constrained space.
Our framework used a ten-fold cross-validation method to ensure the reliability of the outcomes. This approach involves randomly dividing the benchmark into training and testing subsets. The training subset comprises 80% of the data and is used to train the machine learning model, while the test subset evaluates the selected attributes. Each method’s configurations and parameter values were based on their initial versions and information from their primary publications, as introduced in Table 4. Our computing environment utilized Python 3.10, an Intel Core i7 processor, 16 GB of RAM, and an NVIDIA GTX 1050i GPU.
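For clarity, the shared settings described above can be gathered in a single configuration object; the sketch below is our reading of this section, with the per-algorithm parameters of Table 4 omitted and the attribute names chosen purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    """Shared settings applied to every MHO technique (Section 4.2)."""
    population_size: int = 10            # agents per MHO technique
    max_iterations: int = 100            # iterations per run
    independent_runs: int = 30           # repetitions to average out stochasticity
    search_domain: tuple = (-1.0, 1.0)   # continuous search range per dimension
    train_fraction: float = 0.8          # share of data used for training
    classifiers: tuple = ("RF", "SVM")   # fitness evaluators

config = ExperimentConfig()
```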

4.3. Performance Metrics

The experiments on the benchmark data are repeated 30 times to compare the effectiveness of the 15 MHO algorithms with the RF and SVM classifiers. The following evaluation metrics were used for the FS methodology; a brief computational sketch follows the list.
  • The average accuracy ($\mathrm{AVG}_{ACC}$) is evaluated by executing the method for 30 runs and calculating the percentage of correct data classification. The accuracy is determined using the following equation:
    $$\mathrm{AVG}_{ACC} = \frac{1}{30}\sum_{k=1}^{30}\frac{1}{m}\sum_{r=1}^{m}\mathrm{match}\!\left(PL_{r}, AL_{r}\right), \tag{2}$$
    where m represents the number of samples in the test subset, and $PL_{r}$ and $AL_{r}$, respectively, indicate the predicted and reference class labels for sample r. The comparison function $\mathrm{match}(PL_{r}, AL_{r})$ determines the matching between the predicted and the reference label: if they match, $\mathrm{match}(PL_{r}, AL_{r})$ equals 1; otherwise, it equals 0.
  • The average fitness ($\mathrm{AVG}_{Fit}$) is evaluated by implementing the approach in 30 individual trials; the fitness jointly reflects decreasing the number of selected attributes and increasing the accuracy rate. It is crucial to note that the lowest value represents the best result:
    $$\mathrm{AVG}_{Fit} = \frac{1}{30}\sum_{k=1}^{30} f_{*}^{(k)}, \tag{3}$$
    where $f_{*}^{(k)}$ is the optimal fitness value obtained in the k-th run.
  • The average number of features chosen ($\mathrm{AVG}_{Features}$) is obtained by running the methodology 30 independent times and is represented as
    $$\mathrm{AVG}_{Features} = \frac{1}{30}\sum_{k=1}^{30}\frac{\left|d_{*}^{(k)}\right|}{\left|D\right|}, \tag{4}$$
    where $\left|d_{*}^{(k)}\right|$ is the number of features chosen in the optimal solution of the k-th run, and $\left|D\right|$ is the total number of attributes in the utilized dataset.
  • The average computational time (T) shows the execution time in seconds for each algorithm validated over 30 different runs and is represented as
    $$T = \frac{1}{N}\sum_{i=1}^{N} RunTime_{i}, \tag{5}$$
    where N is the number of runs, and $RunTime_{i}$ is the computational time in seconds at run i.
  • Standard Deviation (STDE): The average results from the thirty runs of the algorithm on the used dataset are assessed for stability as
    $$\mathrm{STDE} = \sqrt{\frac{1}{29}\sum_{k=1}^{30}\left(Y_{*}^{(k)} - \mathrm{Mean}_{Y}\right)^{2}}, \tag{6}$$
    where Y is the metric to be assessed, $Y_{*}^{(k)}$ is the value of metric Y in the k-th run, and $\mathrm{Mean}_{Y}$ is the average of the metric over the 30 independent runs.
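A minimal sketch of how these metrics could be aggregated from the per-run records, assuming each run stores its test accuracy, best fitness value, selected-feature count, and runtime; the variable and function names are illustrative only.

```python
import numpy as np

def summarize_runs(accuracies, fitnesses, n_selected, runtimes, n_total_features):
    """Aggregate 30 independent runs into the evaluation metrics of Section 4.3."""
    accuracies = np.asarray(accuracies, dtype=float)
    fitnesses = np.asarray(fitnesses, dtype=float)
    n_selected = np.asarray(n_selected, dtype=float)
    runtimes = np.asarray(runtimes, dtype=float)

    return {
        "AVG_ACC": accuracies.mean(),                            # Equation (2)
        "AVG_Fit": fitnesses.mean(),                             # Equation (3), lower is better
        "AVG_Features": (n_selected / n_total_features).mean(),  # Equation (4)
        "T": runtimes.mean(),                                    # Equation (5)
        "STDE_ACC": accuracies.std(ddof=1),                      # Equation (6), sample std
    }
```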
In the following sections, we examine the analytical results, highlighting the most promising outcomes in bold.

4.4. Comparisons Based on RF Classifier Using CCF Dataset

In this section, we compare the performance of the fifteen MHO techniques based on the RF classifier. The evaluation considers the average classification accuracy, the average number of selected features, the average fitness, and the average computational time. The aim is to assess the impact of the MHO algorithms in choosing the most relevant features.
Firstly, the classification accuracy is estimated using the original RF classifier, without MHO techniques, on the full set of features in the CCF dataset; the original RF classifier achieved a classification accuracy of 0.9328. On the other hand, Table 5 shows the performance analysis of the RF classifier with 15 MHO techniques for the different TFs based on the average classification accuracy, to evaluate the impact of the MHO algorithms on the utilized dataset. Remarkably, SFO-RF ranked first with all TFs except SV4 (SV1 achieved 0.9762, SV1C achieved 0.9754, SV2 achieved 0.9773, SV3 achieved 0.9770, VV1 achieved 0.9779, VV2 achieved 0.9768, VV3 achieved 0.9756, and VV4 achieved 0.9765). BBO-RF ranked first regarding SV4 by obtaining a classification accuracy of 0.9754.
Secondly, Table 6 shows the performance analysis of the RF classifier with the 15 MHO techniques under the different TFs in terms of the average number of selected features. It is remarkable that ( A V G F e a t u r e s ) based on PSO-RF for S V 1 selected 5.0667 features, WOA-RF for S V 1 C selected 9.8667 features, SFO-RF for S V 2 selected 6.8667 features, SFO-RF for S V 3 selected 7.1333 features, PSO-RF for S V 4 selected 4.6000 features, AO-RF for V V 1 selected 11.3667 features, HHO-RF for V V 2 selected 11.8000 features, AO-RF and SSA-RF for V V 3 selected 11.8333 features, and AVO-RF for V V 4 selected 11.5667 features. Overall, PSO-RF produced the smallest feature subsets, selecting as few as 4.6000 features on average with S V 4 .
Thirdly, Table 7 shows the performance analysis of the RF classifier with 15 MHO techniques in terms of different TFs based on the average classification fitness to evaluate the impact of the MHO algorithms on the utilized dataset. Remarkably, A V G F i t based on the SFO-RF method ranked first with all TFs except V V 4 ( S V 1 achieved 0.0262, S V 1 C achieved 0.0290, S V 2 achieved 0.0248, S V 3 achieved 0.0252, S V 4 achieved 0.0281, V V 1 achieved 0.0262, V V 2 achieved 0.0275, and V V 3 achieved 0.0282). BBO-RF ranked first regarding V V 4 .
Finally, Table 8 shows the performance analysis of the RF classifier with the 15 MHO techniques under the different TFs in terms of average computational time. Remarkably, the BA-RF method ranked first with five TFs based on A V G C o m p u t a t i o n a l T i m e ( S V 1 took 19,623 ms, S V 1 C took 16,105 ms, S V 2 took 16,243 ms, S V 3 took 20,211 ms, and V V 4 took 17,714 ms). The AO-RF method ranked first for S V 4 and V V 1 , the AVO-RF method for V V 2 , and the HHO-RF method for V V 3 .
In summary, the results show that the SFO-RF method achieves the best average accuracy and fitness for 8 of 9 TFs and the best feature size for 2 of 9 TFs; the PSO-RF and AO-RF methods also achieve the best feature size for 2 of 9 TFs each. For the average computational time, the BA-RF method achieved the best result for 5 of 9 TFs, while the AO-RF method achieved the best result for 2 of 9 TFs.
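For reference, the fitness driving these comparisons is, in typical wrapper-based FS, a weighted combination of the classification error of the induced classifier and the fraction of selected features. The sketch below assumes the common weighting α = 0.99 and scikit-learn's RandomForestClassifier; the exact weighting and classifier settings used in this study may differ.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def rf_fitness(mask, X_train, y_train, X_test, y_test, alpha=0.99):
    """Lower is better: alpha * classification error + (1 - alpha) * feature ratio.
    `mask` is a binary vector over the dataset's attributes; X_* are NumPy arrays."""
    if mask.sum() == 0:
        return 1.0                      # penalize empty feature subsets
    cols = np.flatnonzero(mask)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train[:, cols], y_train)
    error = 1.0 - accuracy_score(y_test, clf.predict(X_test[:, cols]))
    return alpha * error + (1.0 - alpha) * (mask.sum() / mask.size)
```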

4.5. Comparisons Based on SVM Classifier Using CCF Dataset

In this subsection, we compare the performance of the SVM classifier with 15 MHO techniques based on the average classification accuracy, the average number of selected features, the average fitness values, and the average computational time to evaluate the impact of the MHO algorithms in improving classification accuracy and choosing the most appropriate features.
Firstly, the classification accuracy of the original SVM classifier (before FS), using the full feature set of the CCF dataset, is 0.5378. Table 9, on the other hand, shows the performance analysis of the SVM classifier with the fifteen MHO techniques under the different TFs in terms of average classification accuracy. Remarkably, A V G A C C based on the SFO-SVM method ranked first for four TFs ( S V 1 achieved 0.9406, S V 2 achieved 0.9412, S V 3 achieved 0.9401, and S V 4 achieved 0.9412). The AO-SVM method ranked first with S V 1 C by achieving an accuracy of 0.9328. BBO-SVM ranked first with V V 1 by achieving an accuracy of 0.9347. BBO-SVM, AVO-SVM, and AO-SVM ranked first for V V 2 by achieving an accuracy of 0.9339. BBO-SVM ranked first for V V 3 by achieving an accuracy of 0.9347. Finally, AO-SVM ranked first for V V 4 by achieving an accuracy of 0.9339.
Secondly, Table 10 shows the performance analysis of the SVM classifier with the 15 MHO techniques under the different TFs in terms of the average number of selected features. It is remarkable that A V G F e a t u r e s based on AO-SVM ranked first for five TFs ( S V 1 selected 1.5333 features, S V 2 selected 1.3667 features, S V 3 selected 1.5 features, V V 1 selected 4.6333 features, and V V 3 selected 5 features). SSA-SVM ranked first for two TFs ( S V 1 C selected 1.9 features, and S V 4 selected 1.5333 features). BBO-SVM ranked first for V V 2 by selecting 5.2 features. Finally, BBO-SVM and AVO-SVM ranked first for V V 4 by selecting five features.
Thirdly, Table 11 shows the performance analysis of the SVM classifier with the 15 MHO techniques under the different TFs in terms of average classification fitness. Remarkably, ( A V G F i t ) based on SFO-SVM ranked first for 4 TFs ( S V 1 achieved a fitness value of 0.0595, S V 2 achieved 0.0591, S V 3 achieved 0.0605, and S V 4 achieved 0.0590). AVO-SVM ranked first with S V 1 C by achieving a fitness value of 0.0636. BBO-SVM ranked first with V V 1 , V V 2 , and V V 3 by achieving fitness values of 0.0664, 0.0672, and 0.0665, respectively. Finally, AO-SVM ranked first for V V 4 by achieving a fitness value of 0.0672.
Finally, Table 12 shows the performance analysis of the SVM classifier with 15 MHO techniques regarding different TFs based on the average computational time to evaluate the impact of the MHO algorithms on the utilized dataset.
Remarkably, regarding A V G C o m p u t a t i o n a l T i m e , the HHO-SVM method ranked first for five TFs ( S V 2 took 9258 ms, V V 1 took 6164 ms, V V 2 took 6059 ms, V V 3 took 6221 ms, and V V 4 took 5757 ms). PSO-SVM ranked first with S V 1 by taking 6395 ms and with S V 4 by taking 5365 ms. WOA-SVM ranked first with S V 1 C by taking 7940 ms. Finally, the AVO-SVM method ranked first with S V 3 by taking 8993 ms.
In summary, the results show that the SFO-SVM method achieves the best average accuracy and the best average fitness for 4 of 9 TFs each, while the BBO-SVM and AO-SVM methods achieve the best accuracy for 3 of 9 TFs each. For the average feature size, AO-SVM performs best for 5 of 9 TFs, and BBO-SVM and SSA-SVM for 2 of 9 TFs each. For the average fitness, BBO-SVM performs best for 3 of 9 TFs. HHO-SVM reaches the best computational time for 5 of 9 TFs, while the PSO-SVM method achieves the best result for 2 of 9 TFs.
Overall, the SFO method achieves the best average accuracy and fitness results for both classifiers, RF and SVM, under most TFs.
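To illustrate how the TFs enter these pipelines, the sketch below shows the standard S-shaped and V-shaped binarization rules from the binary swarm-optimization literature [79,80,81]: an S-shaped TF sets each bit to 1 with probability TF(x), while a V-shaped TF flips the previous bit with probability TF(x). This is a generic sketch; the per-optimizer update equations used in this study may differ in detail.
```python
import numpy as np

rng = np.random.default_rng(42)

def s_shaped(x):                       # Sv1: sigmoid transfer function
    return 1.0 / (1.0 + np.exp(-x))

def v_shaped(x):                       # Vv1: |tanh(x)| transfer function
    return np.abs(np.tanh(x))

def binarize_s(position):
    """S-shaped rule: each bit becomes 1 with probability TF(x)."""
    return (rng.random(position.shape) < s_shaped(position)).astype(int)

def binarize_v(position, previous_bits):
    """V-shaped rule: flip the corresponding previous bit with probability TF(x)."""
    flip = rng.random(position.shape) < v_shaped(position)
    return np.where(flip, 1 - previous_bits, previous_bits)

# Example: map one agent's continuous position over the 30 CCF attributes
pos = rng.normal(size=30)
mask_s = binarize_s(pos)
mask_v = binarize_v(pos, previous_bits=rng.integers(0, 2, size=30))
```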

4.6. Comparing with Other Studies That Utilized the Credit European Cardholders Dataset

In this subsection, a detailed comparison of various research efforts on the same CCF Kaggle dataset is presented. Table 13 illustrates the impact of different methodologies on classification accuracy. Notably, our research (CCFD) distinguishes itself by applying the MHO (SFO) feature selection technique, achieving accuracy rates of 97.79% with the RF classifier and 94.12% with the SVM classifier, employing under-sampling to mitigate data imbalance. This improvement over earlier studies, such as the 2018 research by Lakshmi and Selvani, which achieved 95.50% accuracy using RF and oversampling, and the 2023 studies by Mniai et al., who utilized varied feature selection methods but attained lower accuracies, highlights the effectiveness of MHO (SFO). Our findings suggest that advanced feature selection techniques like MHO (SFO) can significantly boost machine learning performance, providing a robust approach for managing intricate datasets.

4.7. Convergence Investigation

This section examines the convergence of the 15 MHO techniques under four configurations (RF with S-shaped TFs, RF with V-shaped TFs, SVM with S-shaped TFs, and SVM with V-shaped TFs) for the FS strategy on the European credit-cardholders dataset, as shown in Figures 4, 5, 6 and 7, respectively. These curves indicate that the SFO-RF algorithm demonstrated superior convergence over the dataset compared to its peers, assessed under the same population size and iteration count.
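For context, convergence curves such as those in Figures 4–7 are produced by recording the best fitness found so far at every iteration of each optimizer. The skeleton below sketches this bookkeeping for a generic binary MHO run with the population size (10) and iteration budget (100) listed in Table 4; the position-update step is a simplified placeholder rather than any of the fifteen optimizers studied.
```python
import numpy as np

def run_binary_mho(fitness_fn, n_features, n_agents=10, n_iter=100, seed=0):
    """Generic binary MHO loop that records the convergence curve
    (best-so-far fitness per iteration); lower fitness is better."""
    rng = np.random.default_rng(seed)
    population = rng.integers(0, 2, size=(n_agents, n_features))
    scores = np.array([fitness_fn(agent) for agent in population])
    best = population[scores.argmin()].copy()
    best_score = scores.min()
    curve = []
    for _ in range(n_iter):
        for i in range(n_agents):
            # Placeholder update: copy random bits from the current best solution
            take = rng.random(n_features) < 0.1
            population[i] = np.where(take, best, population[i])
            scores[i] = fitness_fn(population[i])
        if scores.min() < best_score:
            best, best_score = population[scores.argmin()].copy(), scores.min()
        curve.append(best_score)        # one point of the convergence curve
    return best, best_score, curve
```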

4.8. Evaluating Model Robustness and Generalization

Detecting fraud in financial transactions is a critical task that demands reliable and adaptable predictive models. In this study, we assessed the robustness and generalization ability of our chosen model for fraud detection. After comprehensive experimentation, we selected the best-performing model in terms of accuracy, which was trained on a balanced dataset: a random forest classifier combined with the V V 1 V-shaped transfer function and the sailfish optimizer. To evaluate its effectiveness, we tested the model on an imbalanced test set comprising 20% of the original dataset of 284,807 samples. The model achieved an accuracy of 97.14% on this imbalanced test set, demonstrating its robustness and ability to generalize to real-world data distributions.
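A minimal sketch of this evaluation protocol is given below, assuming the standard Kaggle file name creditcard.csv and its "Class" label column; for brevity it trains on all 30 attributes and omits the SFO-selected feature subset, so it illustrates the data handling rather than reproducing the reported 97.14%.
```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("creditcard.csv")   # 284,807 transactions, 'Class' = 1 for fraud

# Hold out 20% of the original, imbalanced data for the robustness test
train_df, test_df = train_test_split(df, test_size=0.20, stratify=df["Class"], random_state=0)

# Random under-sampling of the majority class to build the balanced training set
fraud = train_df[train_df["Class"] == 1]
legit = train_df[train_df["Class"] == 0].sample(n=len(fraud), random_state=0)
balanced = pd.concat([fraud, legit]).sample(frac=1.0, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(balanced.drop(columns="Class"), balanced["Class"])

pred = clf.predict(test_df.drop(columns="Class"))
print("Accuracy on the imbalanced hold-out:", accuracy_score(test_df["Class"], pred))
```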

5. Conclusions

This study addressed the significant challenge of data imbalance, which impedes the efficiency of CCFD models, by creating a model that enhances CC security and supports the advancement of E-commerce. The model is structured into four phases: data preprocessing, FS, applying ML classifiers, and evaluating these classifiers. The dataset was collected from a Kaggle repository, and the optimal feature set was selected using 15 MHO algorithms employing 9 TFs. Two ML classifiers, RF and SVM, were employed to compare the outcomes on the full and selected feature sets. The results revealed an improvement in accuracy of 4% for RF and 41% for SVM when using the selected features over the full set of features. SFO-RF with V V 1 s h a p e d ranked first regarding classification accuracy by achieving 97.79%. AO-SVM with S V 3 s h a p e d selected the fewest features (1.5 on average) while obtaining an accuracy of 93.53%. SFO-RF with S V 2 s h a p e d ranked first regarding classification error with a fitness value of 0.0248. PSO-SVM with S V 4 s h a p e d ranked first regarding computational time by recording 5365 ms. Therefore, SFO-RF is the overall best method, achieving the greatest classification accuracy with V V 1 and the lowest fitness value with S V 2 . Future research should explore integrating ML with various MHO techniques and additional classifiers, such as deep learning, to further validate the efficiency of MHO methods in feature selection for classification tasks. Future work should also explore hybrid models that combine the strengths of different techniques, such as integrating under-sampling with over-sampling strategies like SMOTE; a small sketch of such a hybrid resampling pipeline follows below. Hybrid models offer a promising avenue to further enhance the robustness and performance of FDSs. By leveraging the complementary nature of various sampling and modeling techniques, we anticipate achieving improved accuracy in identifying fraudulent transactions while maintaining computational efficiency and interpretability.
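As a pointer toward that future direction, the sketch below combines SMOTE over-sampling with random under-sampling using the imbalanced-learn library; the resampling ratios are purely illustrative and are not results of this study.
```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

# Hybrid resampling: first oversample the fraud class to 10% of the majority,
# then undersample the majority down to a 2:1 ratio, then fit the classifier.
hybrid = Pipeline(steps=[
    ("smote", SMOTE(sampling_strategy=0.10, random_state=0)),
    ("under", RandomUnderSampler(sampling_strategy=0.50, random_state=0)),
    ("model", RandomForestClassifier(n_estimators=100, random_state=0)),
])
# Usage (X_train, y_train, X_test are placeholders):
# hybrid.fit(X_train, y_train)
# y_pred = hybrid.predict(X_test)
```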

Author Contributions

A.A.A. conceived the idea for the CCFD method, designed the experimental framework, and analyzed the results. F.A.M. contributed to designing and implementing the meta-heuristic optimization techniques and data analysis, providing insights into feature selection. D.T.M. worked on the technical aspects of data processing, assisted in interpreting the data, and provided critical feedback on the methodology. S.E.S. participated in the design and execution of the study and contributed to the critical review and editing of the manuscript. All authors collectively contributed to the writing and revision of the manuscript, provided critical feedback on the methodology, and approved the final version of the paper for submission. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia, under Project Grant KFU241160.

Data Availability Statement

The data and code used in this study can be found at https://github.com/FMaghraby/CCFD (accessed on 16 July 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

BBO	Brown-bear optimization
AVO	African vultures optimization
AO	Aquila optimization
SSA	Sparrow search algorithm
ABC	Artificial bee colony
PSO	Particle swarm optimization
BA	Bat algorithm
GWO	Grey wolf optimization
WOA	Whale optimization algorithm
GOA	Grasshopper optimization algorithm
SFO	Sailfish optimizer
HHO	Harris hawks optimization
BSA	Bird swarm algorithm
ASO	Atom search optimization
HGSO	Henry gas solubility optimization
RF	Random forest
SVM	Support vector machine
TFs	Transfer functions
AVG_ACC	Average accuracy
STDE	Standard deviation
CCF	Credit card fraud
DM	Data mining
DL	Deep learning
CCFD	Credit card fraud detection
FD	Fraud detection
CC	Credit card
FDSs	Fraud detection systems
FS	Feature selection
MHOA	Meta-heuristic optimization algorithm
ML	Machine learning

References

  1. Song, Y.; Escobar, O.; Arzubiaga, U.; De Massis, A. The digital transformation of a traditional market into an entrepreneurial ecosystem. Rev. Manag. Sci. 2022, 16, 65–88. [Google Scholar] [CrossRef]
  2. Lucas, Y.; Jurgovsky, J. Credit card fraud detection using machine learning: A survey. arXiv 2020, arXiv:2010.06479. [Google Scholar]
  3. Liu, Y.; Gao, W.; Hua, R.; Chen, H. Decomposition and measurement of economic effects of E-commerce based on static feder model and improved dynamic feder model. In Proceedings of the 2021 2nd International Conference on E-Commerce and Internet Technology (ECIT), Hangzhou, China, 5–7 March 2021; pp. 213–217. [Google Scholar]
  4. Tran, L.T.T. Managing the effectiveness of e-commerce platforms in a pandemic. J. Retail. Consum. Serv. 2021, 58, 102287. [Google Scholar] [CrossRef]
  5. Laudon, K.C.; Laudon, J.P. Management Information Systems: Managing the Digital Firm, 17th ed.; Pearson Educación: London, UK, 2023. [Google Scholar]
  6. Fanai, H.; Abbasimehr, H. A novel combined approach based on deep Autoencoder and deep classifiers for credit card fraud detection. Expert Syst. Appl. 2023, 217, 119562. [Google Scholar] [CrossRef]
  7. Singh, A.; Jain, A.; Biable, S.E. Financial Fraud Detection Approach Based on Firefly Optimization Algorithm and Support Vector Machine. Appl. Comput. Intell. Soft Comput. 2022, 2022, 1468015. [Google Scholar] [CrossRef]
  8. Wahid, A.; Msahli, M.; Bifet, A.; Memmi, G. NFA: A neural factorization autoencoder based online telephony fraud detection. Digit. Commun. Netw. 2023, 10, 158–167. [Google Scholar] [CrossRef]
  9. Carta, S.; Fenu, G.; Recupero, D.R.; Saia, R. Fraud detection for E-commerce transactions by employing a prudential Multiple Consensus model. J. Inf. Secur. Appl. 2019, 46, 13–22. [Google Scholar] [CrossRef]
  10. Rodrigues, V.F.; Policarpo, L.M.; da Silveira, D.E.; da Rosa Righi, R.; da Costa, C.A.; Barbosa, J.L.V.; Antunes, R.S.; Scorsatto, R.; Arcot, T. Fraud detection and prevention in e-commerce: A systematic literature review. Electron. Commer. Res. Appl. 2022, 7, 101207. [Google Scholar] [CrossRef]
  11. Alamri, M.; Ykhlef, M. Survey of Credit Card Anomaly and Fraud Detection Using Sampling Techniques. Electronics 2022, 11, 4003. [Google Scholar] [CrossRef]
  12. Asha, R.; KR, S.K. Credit card fraud detection using artificial neural network. Glob. Transit. Proc. 2021, 2, 35–41. [Google Scholar]
  13. Bin Sulaiman, R.; Schetinin, V.; Sant, P. Review of machine learning approach on credit card fraud detection. Hum.-Centric Intell. Syst. 2022, 2, 55–68. [Google Scholar] [CrossRef]
  14. Bao, Y.; Hilary, G.; Ke, B. Artificial intelligence and fraud detection. In Innovative Technology at the Interface of Finance and Operations; Springer: Berlin/Heidelberg, Germany, 2022; Volume I, pp. 223–247. [Google Scholar]
  15. Nandi, A.K.; Randhawa, K.K.; Chua, H.S.; Seera, M.; Lim, C.P. Credit card fraud detection using a hierarchical behavior-knowledge space model. PLoS ONE 2022, 17, e0260579. [Google Scholar] [CrossRef] [PubMed]
  16. Agarwal, A.; Ratha, N.K. Black-Box Adversarial Entry in Finance through Credit Card Fraud Detection. In Proceedings of the CIKM Workshops, Gold Coast, QLD, Australia, 1–5 November 2021. [Google Scholar]
  17. Faris, H.; Mafarja, M.M.; Heidari, A.A.; Aljarah, I.; Ala’m, A.Z.; Mirjalili, S.; Fujita, H. An efficient binary salp swarm algorithm with crossover scheme for feature selection problems. Knowl.-Based Syst. 2018, 154, 43–67. [Google Scholar] [CrossRef]
  18. Prakash, T.; Singh, P.P.; Singh, V.P.; Singh, S.N. A Novel Brown-bear Optimization Algorithm for Solving Economic Dispatch Problem. In Advanced Control & Optimization Paradigms for Energy System Operation and Management; River Publishers: Aalborg, Denmark, 2023; pp. 137–164. [Google Scholar]
  19. Cartella, F.; Anunciacao, O.; Funabiki, Y.; Yamaguchi, D.; Akishita, T.; Elshocht, O. Adversarial attacks for tabular data: Application to fraud detection and imbalanced data. arXiv 2021, arXiv:2101.08030. [Google Scholar]
  20. Beheshti, Z.; Shamsuddin, S.M.H. A review of population-based meta-heuristic algorithms. Int. J. Adv. Soft Comput. Appl 2013, 5, 18298676. [Google Scholar]
  21. Agrawal, P.; Abutarboush, H.F.; Ganesh, T.; Mohamed, A.W. Metaheuristic algorithms on feature selection: A survey of one decade of research (2009–2019). IEEE Access 2021, 9, 26766–26791. [Google Scholar] [CrossRef]
  22. Abualigah, L.; Diabat, A.; Geem, Z.W. A comprehensive survey of the harmony search algorithm in clustering applications. Appl. Sci. 2020, 10, 3827. [Google Scholar] [CrossRef]
  23. Salcedo-Sanz, S. Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures. Phys. Rep. 2016, 655, 1–70. [Google Scholar] [CrossRef]
  24. Palimkar, P.; Shaw, R.N.; Ghosh, A. Machine learning technique to prognosis diabetes disease: Random forest classifier approach. In Proceedings of the Advanced Computing and Intelligent Technologies: Proceedings of ICACIT 2021; Springer: Berlin/Heidelberg, Germany, 2022; pp. 219–244. [Google Scholar]
  25. Phan, T.N.; Kuch, V.; Lehnert, L.W. Land cover classification using Google Earth Engine and random forest classifier—The role of image composition. Remote Sens. 2020, 12, 2411. [Google Scholar] [CrossRef]
  26. Pisner, D.A.; Schnyer, D.M. Support vector machine. In Machine Learning; Elsevier: Amsterdam, The Netherlands, 2020; pp. 101–121. [Google Scholar]
  27. Zojaji, Z.; Atani, R.E.; Monadjemi, A.H. A survey of credit card fraud detection techniques: Data and technique oriented perspective. arXiv 2016, arXiv:1611.06439. [Google Scholar]
  28. Adewumi, A.O.; Akinyelu, A.A. A survey of machine-learning and nature-inspired based credit card fraud detection techniques. Int. J. Syst. Assur. Eng. Manag. 2017, 8, 937–953. [Google Scholar] [CrossRef]
  29. Chilaka, U.; Chukwudebe, G.; Bashiru, A. A review of credit card fraud detection techniques in electronic finance and banking. Conic Res. Eng. J. 2019, 3, 456–467. [Google Scholar]
  30. Khalid, A.R.; Owoh, N.; Uthmani, O.; Ashawa, M.; Osamor, J.; Adejoh, J. Enhancing credit card fraud detection: An ensemble machine learning approach. Big Data Cogn. Comput. 2024, 8, 6. [Google Scholar] [CrossRef]
  31. Abdul Salam, M.; Fouad, K.M.; Elbably, D.L.; Elsayed, S.M. Federated learning model for credit card fraud detection with data balancing techniques. Neural Comput. Appl. 2024, 36, 6231–6256. [Google Scholar] [CrossRef]
  32. Chen, C.T.; Lee, C.; Huang, S.H.; Peng, W.C. Credit Card Fraud Detection via Intelligent Sampling and Self-supervised Learning. Acm Trans. Intell. Syst. Technol. 2024, 15, 1–29. [Google Scholar] [CrossRef]
  33. Taha, A.A.; Malebary, S.J. An intelligent approach to credit card fraud detection using an optimized light gradient boosting machine. IEEE Access 2020, 8, 25579–25587. [Google Scholar] [CrossRef]
  34. Rawashdeh, E.; Al-Ramahi, N.; Ahmad, H.; Zaghloul, R. Efficient credit card fraud detection using evolutionary hybrid feature selection and random weight networks. Int. J. Data Netw. Sci. 2024, 8, 463–472. [Google Scholar] [CrossRef]
  35. Kennedy, R.K.; Villanustre, F.; Khoshgoftaar, T.M.; Salekshahrezaee, Z. Synthesizing class labels for highly imbalanced credit card fraud detection data. J. Big Data 2024, 11, 38. [Google Scholar] [CrossRef]
  36. Aziz, A.; Ghous, H. Fraudulent transactions detection in credit card by using data mining methods: A review. Int. J. Sci. Prog. Res. (IJSPR) 2021, 79, 31–48. [Google Scholar]
  37. Nguyen, T.T.; Tahir, H.; Abdelrazek, M.; Babar, A. Deep learning methods for credit card fraud detection. arXiv 2020, arXiv:2012.03754. [Google Scholar]
  38. Ahmad, I. Feature selection using particle swarm optimization in intrusion detection. Int. J. Distrib. Sens. Netw. 2015, 11, 806954. [Google Scholar] [CrossRef]
  39. Rtayli, N.; Enneya, N. Selection features and support vector machine for credit card risk identification. Procedia Manuf. 2020, 46, 941–948. [Google Scholar] [CrossRef]
  40. Misra, S.; Thakur, S.; Ghosh, M.; Saha, S.K. An autoencoder based model for detecting fraudulent credit card transaction. Procedia Comput. Sci. 2020, 167, 254–262. [Google Scholar] [CrossRef]
  41. Schlör, D.; Ring, M.; Krause, A.; Hotho, A. Financial fraud detection with improved neural arithmetic logic units. In Proceedings of the Mining Data for Financial Applications: 5th ECML PKDD Workshop, MIDAS 2020, Ghent, Belgium, 18 September 2020; Revised Selected Papers 5. Springer: Berlin/Heidelberg, Germany, 2021; pp. 40–54. [Google Scholar]
  42. Buschjäger, S.; Honysz, P.J.; Morik, K. Randomized outlier detection with trees. Int. J. Data Sci. Anal. 2022, 13, 91–104. [Google Scholar] [CrossRef]
  43. Hajek, P.; Abedin, M.Z.; Sivarajah, U. Fraud detection in mobile payment systems using an XGBoost-based framework. Inf. Syst. Front. 2022, 25, 1985–2003. [Google Scholar] [CrossRef] [PubMed]
  44. Kim, J.; Kim, H.J.; Kim, H. Fraud detection for job placement using hierarchical clusters-based deep neural networks. Appl. Intell. 2019, 49, 2842–2861. [Google Scholar] [CrossRef]
  45. Abdollahzadeh, B.; Gharehchopogh, F.S.; Mirjalili, S. African vultures optimization algorithm: A new nature-inspired metaheuristic algorithm for global optimization problems. Comput. Ind. Eng. 2021, 158, 107408. [Google Scholar] [CrossRef]
  46. Abuelrub, A.; Awwad, B. An improved binary African vultures optimization approach to solve the UC problem for power systems. Results Eng. 2023, 19, 101354. [Google Scholar] [CrossRef]
  47. Abualigah, L.; Yousri, D.; Abd Elaziz, M.; Ewees, A.A.; Al-Qaness, M.A.; Gandomi, A.H. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 2021, 157, 107250. [Google Scholar] [CrossRef]
  48. Xue, J.; Shen, B. A novel swarm intelligence optimization approach: Sparrow search algorithm. Syst. Sci. Control Eng. 2020, 8, 22–34. [Google Scholar] [CrossRef]
  49. Gad, A.G.; Sallam, K.M.; Chakrabortty, R.K.; Ryan, M.J.; Abohany, A.A. An improved binary sparrow search algorithm for feature selection in data classification. Neural Comput. Appl. 2022, 34, 15705–15752. [Google Scholar] [CrossRef]
  50. Karaboga, D. An Idea Based on Honey Bee Swarm for Numerical Optimization; Technical Report, Technical Report-tr06; Erciyes University, Engineering Faculty, Computer: Kayseri, Turkey, 2005. [Google Scholar]
  51. Li, P.; Zhang, Y.; Gu, J.; Duan, S. Prediction of compressive strength of concrete based on improved artificial bee colony-multilayer perceptron algorithm. Sci. Rep. 2024, 14, 6414. [Google Scholar] [CrossRef] [PubMed]
  52. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the Sixth International Symposium on Micro Machine and Human Science, MHS’95, Nagoya, Japan, 4–6 October 1995; pp. 39–43. [Google Scholar]
  53. Gad, A.G. Particle swarm optimization algorithm and its applications: A systematic review. Arch. Comput. Methods Eng. 2022, 29, 2531–2561. [Google Scholar] [CrossRef]
  54. Yang, X.S. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative Strategies for Optimization (NICSO 2010); Springer: Berlin/Heidelberg, Germany, 2010; pp. 65–74. [Google Scholar]
  55. Agarwal, T.; Kumar, V. A systematic review on bat algorithm: Theoretical foundation, variants, and applications. Arch. Comput. Methods Eng. 2021, 29, 2707–2736. [Google Scholar] [CrossRef]
  56. Zorarpacı, E.; Özel, S.A. A hybrid approach of differential evolution and artificial bee colony for feature selection. Expert Syst. Appl. 2016, 62, 91–103. [Google Scholar] [CrossRef]
  57. Sharma, I.; Kumar, V.; Sharma, S. A comprehensive survey on grey wolf optimization. Recent Adv. Comput. Sci. Commun. 2022, 15, 323–333. [Google Scholar]
  58. Mirjalili, S.; Lewis, A. The whale optimization algorithm. Adv. Eng. Softw. 2016, 95, 51–67. [Google Scholar] [CrossRef]
  59. Rana, N.; Latiff, M.S.A.; Abdulhamid, S.M.; Chiroma, H. Whale optimization algorithm: A systematic review of contemporary applications, modifications and developments. Neural Comput. Appl. 2020, 32, 16245–16277. [Google Scholar] [CrossRef]
  60. Saremi, S.; Mirjalili, S.; Lewis, A. Grasshopper optimisation algorithm: Theory and application. Adv. Eng. Softw. 2017, 105, 30–47. [Google Scholar] [CrossRef]
  61. Meraihi, Y.; Gabis, A.B.; Mirjalili, S.; Ramdane-Cherif, A. Grasshopper optimization algorithm: Theory, variants, and applications. IEEE Access 2021, 9, 50001–50024. [Google Scholar] [CrossRef]
  62. Shadravan, S.; Naji, H.R.; Bardsiri, V.K. The Sailfish Optimizer: A novel nature-inspired metaheuristic algorithm for solving constrained engineering optimization problems. Eng. Appl. Artif. Intell. 2019, 80, 20–34. [Google Scholar] [CrossRef]
  63. Ghosh, K.K.; Ahmed, S.; Singh, P.K.; Geem, Z.W.; Sarkar, R. Improved binary sailfish optimizer based on adaptive β-hill climbing for feature selection. IEEE Access 2020, 8, 83548–83560. [Google Scholar] [CrossRef]
  64. Heidari, A.A.; Mirjalili, S.; Faris, H.; Aljarah, I.; Mafarja, M.; Chen, H. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 2019, 97, 849–872. [Google Scholar] [CrossRef]
  65. Alabool, H.M.; Alarabiat, D.; Abualigah, L.; Heidari, A.A. Harris hawks optimization: A comprehensive review of recent variants and applications. Neural Comput. Appl. 2021, 33, 8939–8980. [Google Scholar] [CrossRef]
  66. Meng, X.B.; Gao, X.Z.; Lu, L.; Liu, Y.; Zhang, H. A new bio-inspired optimisation algorithm: Bird Swarm Algorithm. J. Exp. Theor. Artif. Intell. 2016, 28, 673–687. [Google Scholar] [CrossRef]
  67. Varol Altay, E.; Alatas, B. Bird swarm algorithms with chaotic mapping. Artif. Intell. Rev. 2020, 53, 1373–1414. [Google Scholar] [CrossRef]
  68. Zhao, W.; Wang, L.; Zhang, Z. Atom search optimization and its application to solve a hydrogeologic parameter estimation problem. Knowl.-Based Syst. 2019, 163, 283–304. [Google Scholar] [CrossRef]
  69. Hashim, F.A.; Houssein, E.H.; Mabrouk, M.S.; Al-Atabany, W.; Mirjalili, S. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 2019, 101, 646–667. [Google Scholar] [CrossRef]
  70. Mosa, D.T.; Mahmoud, A.; Zaki, J.; Sorour, S.E.; El-Sappagh, S.; Abuhmed, T. Henry gas solubility optimization double machine learning classifier for neurosurgical patients. PLoS ONE 2023, 18, e0285455. [Google Scholar] [CrossRef]
  71. Hussien, R.M.; Abohany, A.A.; Moustafa, N.; Sallam, K.M. An improved Henry gas optimization algorithm for joint mining decision and resource allocation in a MEC-enabled blockchain networks. Neural Comput. Appl. 2023, 35, 18665–18680. [Google Scholar] [CrossRef]
  72. Zaki, M.J.; Meira, W. Data Mining and Analysis: Fundamental Concepts and Algorithms; Cambridge University Press: Cambridge, UK, 2014. [Google Scholar]
  73. Xiong, Z.; Sun, X.; Sang, J.; Wei, X. Modify the accuracy of MODIS PWV in China: A performance comparison using random forest, generalized regression neural network and back-propagation neural network. Remote Sens. 2021, 13, 2215. [Google Scholar] [CrossRef]
  74. Zhang, L.; Li, X.; Zheng, D.; Zhang, K.; Ma, Q.; Zhao, Y.; Ge, Y. Merging multiple satellite-based precipitation products and gauge observations using a novel double machine learning approach. J. Hydrol. 2021, 594, 125969. [Google Scholar] [CrossRef]
  75. Sadorsky, P. A random forests approach to predicting clean energy stock prices. J. Risk Financ. Manag. 2021, 14, 48. [Google Scholar] [CrossRef]
  76. Vapnik, V. The Nature of Statistical Learning Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  77. Huang, W.; Liu, H.; Zhang, Y.; Mi, R.; Tong, C.; Xiao, W.; Shuai, B. Railway dangerous goods transportation system risk identification: Comparisons among SVM, PSO-SVM, GA-SVM and GS-SVM. Appl. Soft Comput. 2021, 109, 107541. [Google Scholar] [CrossRef]
  78. Ding, C.; Bao, T.Y.; Huang, H.L. Quantum-inspired support vector machine. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 7210–7222. [Google Scholar] [CrossRef]
  79. Mirjalili, S.; Lewis, A. S-shaped versus V-shaped transfer functions for binary particle swarm optimization. Swarm Evol. Comput. 2013, 9, 1–14. [Google Scholar] [CrossRef]
  80. Kennedy, J.; Eberhart, R.C. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics, Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997; Volume 5, pp. 4104–4108. [Google Scholar]
  81. Rashedi, E.; Nezamabadi-Pour, H.; Saryazdi, S. BGSA: Binary gravitational search algorithm. Nat. Comput. 2010, 9, 727–745. [Google Scholar] [CrossRef]
  82. Mniai, A.; Tarik, M.; Jebari, K. A Novel Framework for Credit Card Fraud Detection. IEEE Access 2023, 99, 112776–112786. [Google Scholar] [CrossRef]
  83. Lakshmi, S.; Kavilla, S.D. Machine learning for credit card fraud detection system. Int. J. Appl. Eng. Res. 2018, 13, 16819–16824. [Google Scholar]
  84. Almazroi, A.A.; Ayub, N. Online Payment Fraud Detection Model Using Machine Learning Techniques. IEEE Access 2023, 11, 137188–137203. [Google Scholar] [CrossRef]
Figure 1. Amount of Master and Visa CCs issued worldwide [13].
Figure 2. Framework of CCFD Model.
Figure 3. Families of transfer functions.
Figure 4. Convergence curve of 15 MHO techniques based on the RF classifier with S s h a p e d .
Figure 5. Convergence curve of 15 MHO techniques based on the RF classifier with V s h a p e d .
Figure 6. Convergence curve of 15 MHO techniques based on the SVM classifier with S s h a p e d .
Figure 7. Convergence curve of 15 MHO techniques based on the SVM classifier with V s h a p e d .
Table 1. Original and balanced dataset.
Original dataset: 30 attributes, 284,807 total transactions, 492 fraud instances, 284,315 non-fraud instances, fraud ratio 0.172%.
Balanced dataset: 30 attributes, 984 total transactions, 492 fraud instances, 492 non-fraud instances, fraud ratio 50%.
Table 2. S-shaped and V-shaped TFs families.
S-shaped family:
Sv1: TF(x) = 1 / (1 + exp(−x)) [80]
Sv1c: TF(x) = 1 / (1 + exp(x))
Sv2: TF(x) = 1 / (1 + exp(−x/2))
Sv3: TF(x) = 1 / (1 + exp(−x/3))
Sv4: TF(x) = 1 / (1 + exp(−2x))
V-shaped family:
Vv1: TF(x) = |tanh(x)| [81]
Vv2: TF(x) = |erf((√π/2)·x)|
Vv3: TF(x) = |x| / √(1 + x²)
Vv4: TF(x) = |(2/π)·arctan((π/2)·x)|
Table 3. CCF dataset description.
1. Time: number of seconds elapsed between the current transaction and the first transaction in the dataset.
2. V1, V2, …, V28: attributes resulting from PCA dimensionality reduction, applied to protect user identities and sensitive features.
3. Amount: transaction amount.
4. Class label: binary class label, 1 for fraudulent and 0 for non-fraudulent transactions.
Table 4. Parameter configurations of all methods.
Methods / Parameters
All methods: The number of runs = 30
The number of allowed Iterations G m a x = 100
The size of population M = 10
Dimensionality D = The attributes number in the utilized dataset
BBO [18]: X_LB is the lower bound of the parameters
X_UB is the upper bound of the parameters
AVO [45] L 1 = 0.7
L 2 = 0.2
w = 2
P 1 = 0.6
P 2 = 0.6
P 3 = 0.5
AO: Scale factor whose value is defined by the scale of the problem, s = 0.01
β = 1.5
Number of search cycles r1 = 10
U = 0.00565
ω = 0.005
Adjustment parameters for the exploitation stage: α = 0.1 and δ = 0.1
Aquila's random movements G1 ∈ [−1, 1]
Flying slope of Aquila G2 ∈ [2, 0]
SSA: Scroungers' number SD = 0.1 * N
Producers' number PD = 0.2 * N
The iterations’ number in LSA = 20
Safety threshold S T = 0.8
ABC: Number of employed bees = 16
Number of scout bees = 3
Number of onlooker bees = 4
PSO: Inertia weight ω_max = 0.9, ω_min = 0.4
Acceleration coefficients c 2 = c 1 = 1.2
BA: Loudness A = 0.8
Lower and upper pulse frequencies = 0 , 10
Pulse emission rate r = 0.95
GWO: a is linearly reduced from 2 to 0
WOA: a is linearly reduced from 2 to 0
b = 1.0
p = 0.5
GOA: C_min = 0.00004 and C_max = 1
SFO: Ratio between sardines and sailfish pp = 0.1
ε = 0.0001
A = 1
HHO: Rabbit energy E ∈ [−1, 1]
BSA: Frequency of flight ff = 10
Followed coefficient f l = 0.5
Effect on birds’ vigilance behaviors a 1 = a 2 = 1.0
Acceleration coefficients c 1 = c 2 = 1.5
Probability of foraging for food p = 0.8
ASO: Depth weight α = 50
Multiplier weight β = 0.2
HGSO: Number of clusters = 2
l 1 = 5 E 03 , l 2 = 1 E + 02 , and  l 3 = 1 E 02
α = β = 0.1 and K = 1.0
Table 5. Assessment of the impact of 9 TFs on 15 MHO algorithms with RF in terms of ( A V G A C C ).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG0.97420.97000.96890.96920.97560.96750.96580.97170.97060.97030.97620.96890.97000.97000.9625SFO
STDE0.00370.00470.00440.00450.00250.00470.00610.00460.00420.00420.00440.00440.00470.00600.0042
S V 1 C AVG0.97340.97170.96810.96950.97420.96890.96550.97170.96970.97200.97540.97030.97140.96890.9641SFO
STDE0.00440.00510.00400.00460.00370.00440.00550.00550.00510.00500.00300.00520.00470.00490.0053
S V 2 AVG0.97450.97090.97030.97060.97560.96920.96550.97110.97030.97000.97730.96890.97110.97250.9636SFO
STDE0.00340.00470.00520.00420.00450.00400.00630.00470.00470.00560.00390.00390.00520.00650.0050
S V 3 AVG0.97450.97060.96830.97060.97480.97140.96640.97090.96890.97230.97700.96920.97170.97170.9655SFO
STDE0.00460.00420.00360.00470.00380.00470.00530.00470.00390.00580.00370.00400.00400.00550.0059
S V 4 AVG0.97540.97060.97000.97060.97480.96440.96160.97140.96950.97140.97420.96860.97230.96610.9625BBO
STDE0.00370.00560.00520.00470.00310.00470.00770.00410.00550.00600.00480.00370.00490.00460.0052
V S h a p e d
V V 1 AVG0.97680.97200.97030.97110.97450.96750.96080.97340.96970.97090.97790.97000.97170.95940.9630SFO
STDE0.00360.00500.00600.00420.00460.00520.00660.00490.00600.00470.00400.00560.00510.00440.0051
V V 2 AVG0.97560.97250.97110.97230.97510.96720.96110.97370.96970.97030.97680.96950.96950.96130.9641SFO
STDE0.00250.00430.00640.00440.00510.00550.00980.00520.00510.00520.00360.00550.00460.00670.0057
V V 3 AVG0.97420.97250.97110.97310.97510.96920.96190.97340.97110.97340.97560.96860.97170.96410.9619SFO
STDE0.00300.00530.00470.00460.00460.00660.00640.00540.00470.00540.00250.00430.00460.00680.0047
V V 4 AVG0.97650.97340.97170.97140.97450.96830.96360.97090.97030.97310.97650.97110.97170.96410.9622SFO
STDE0.00500.00490.00460.00470.00400.00520.00630.00520.00470.00550.00340.00640.00460.00530.0052
Bold values indicate the highest results obtained.
Table 6. Assessment of the impact of 9 TFs on 15 MHO algorithms with RF in terms of ( A V G F e a t u r e s ).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG12.533310.200011.26679.166712.53335.066710.800010.866710.900013.56677.733312.333312.866712.900012.5000PSO
STDE2.30553.93623.90674.18792.01221.86073.84193.25305.87282.55193.99112.64992.62971.44571.7464
S V 1 C AVG12.400011.000012.100010.366712.666720.433313.833311.06679.866712.566713.300012.700013.000012.766713.2667WOA
STDE2.44404.33592.83253.73712.13443.61193.36732.64495.60791.76414.32172.42421.63302.67932.0645
S V 2 AVG12.766710.466710.933310.833312.26677.933312.866712.000010.500013.26676.866713.200012.066712.233314.0000SFO
STDE2.78913.54714.09823.82171.71142.67003.47122.63314.66732.12812.18682.93712.33711.99472.2211
S V 3 AVG12.466710.600010.466711.133312.83339.000012.766712.433311.600013.86677.133313.166712.866712.500013.5667SFO
STDE2.59143.60193.50943.75712.42332.40832.65433.01863.85232.72932.90672.45062.57851.94512.4723
S V 4 AVG12.33338.233311.766710.566712.50004.600011.633310.26679.633313.50007.466711.966712.566712.766713.2000PSO
STDE2.03853.40283.37335.28951.99581.81844.25433.31605.52262.26202.99702.94942.10842.60362.6758
V S h a p e d
V V 1 AVG12.133312.733311.366712.666714.733313.633313.366712.766712.500013.133312.566712.433313.166713.666714.7333AO
STDE2.39072.61962.04102.10292.65752.70162.70162.61642.86072.34852.67932.81292.33932.41292.7921
V V 2 AVG12.800012.733312.000013.233313.966713.500013.533312.466713.300012.733313.000011.800012.700013.866715.5000HHO
STDE2.62552.48912.38052.20132.15232.20232.87212.36272.43792.76812.23612.53512.38262.06132.5265
V V 3 AVG12.133312.100011.833311.833313.266714.133314.100013.100012.166713.300011.866712.333313.366713.966714.8667AO, SSA
STDE2.10922.16562.91072.53091.65192.66752.34312.76102.58312.64762.20202.71213.20922.44242.5263
V V 4 AVG12.033311.566712.900012.233313.533312.933313.733312.200012.466712.733312.166712.500012.466713.333316.0667AVO
STDE2.33072.01143.09142.21642.89522.46222.83942.40002.72932.27942.80572.09362.32002.58632.6949
Bold values indicate the highest results obtained.
Table 7. Assessment of the impact of 9 TFs on 15 MHO algorithms with RF in terms of ( A V G F i t ).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG0.02980.03320.03470.03370.02840.03390.03760.03180.03290.03410.02620.03500.03410.03410.0415SFO
STDE0.00360.00470.00440.00440.00250.00440.00590.00430.00470.00380.00410.00400.00430.00590.0041
S V 1 C AVG0.03060.03180.03580.03380.02990.03780.03890.03180.03340.03210.02900.03380.03280.03520.0401SFO
STDE0.00410.00520.00380.00470.00350.00440.00540.00550.00540.00480.00320.00500.00430.00490.0051
S V 2 AVG0.02960.03240.03320.03290.02840.03320.03850.03270.03300.03420.02480.03530.03270.03140.0409SFO
STDE0.00310.00460.00450.00400.00440.00350.00600.00470.00480.00510.00380.00390.00500.00620.0050
S V 3 AVG0.02950.03280.03490.03300.02940.03140.03770.03310.03480.03220.02520.03500.03240.03230.0388SFO
STDE0.00410.00420.00330.00420.00370.00450.00530.00410.00340.00540.00360.00390.00380.00530.0057
S V 4 AVG0.02870.03200.03370.03280.02930.03680.04200.03180.03350.03290.02810.03520.03180.03800.0417SFO
STDE0.00370.00560.00500.00480.00290.00450.00810.00400.00580.00570.00410.00330.00480.00450.0051
V S h a p e d
V V 1 AVG0.02720.03210.03330.03290.03030.03690.04340.03070.03430.03340.02620.03400.03250.04490.0417SFO
STDE0.00330.00510.00570.00400.00430.00510.00660.00450.00570.00410.00350.00520.00480.00420.0048
V V 2 AVG0.02850.03160.03270.03200.02950.03710.04320.03040.03450.03380.02750.03430.03460.04310.0408SFO
STDE0.00260.00390.00620.00420.00480.00510.00940.00480.00490.00490.00350.00550.00450.00650.0054
V V 3 AVG0.02970.03130.03260.03070.02930.03540.04260.03090.03280.03090.02820.03530.03260.04030.0428SFO
STDE0.00290.00530.00460.00440.00450.00640.00630.00540.00460.00510.00230.00430.00420.00650.0043
V V 4 AVG0.02740.03030.03250.03250.02990.03580.04080.03300.03370.03100.02750.03290.03230.04010.0430BBO
STDE0.00460.00480.00440.00450.00390.00490.00610.00490.00430.00540.00280.00620.00440.00530.0048
Bold values indicate the highest results obtained.
Table 8. Assessment of the impact of 9 TFs on 15 MHO algorithms with RF in terms of ( A V G C o m p u t a t i o n a l T i m e ) (in milliseconds (ms)).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG319,22021,843956,66958,51585,62228,29919,62320,69926,47721,784782,37674,87942,68953,65838,196BA
STDE72,184126935,039,076191,11320,898749458692655525620461,218,471294,58086,490114,40438,296
S V 1 C AVG199,98323,69124,49039,82973,27220,47316,10520,32521,75624,084204,48417,54120,04320,28522,354BA
STDE445,5313449331179,66610,46919893331754529091178649261234293
S V 2 AVG60,45520,42619,027814,391230,97220,91516,24320,70423,88125,771379,60327,81558,31334,13631,841BA
STDE6952108224,211,908618,51458290459339362290463,8968153158,02680962876
S V 3 AVG124,53325,84223,65238,542169,18523,80720,21178,90459,18322,017467,192269,571425,44923,32033,734BA
STDE249,8003275289064,281479,10642103237205,230140,3603101,111,5411,215,8451,479,72996748086
S V 4 AVG72,69420,43816,09532,900180,39520,31616,61430,665131,32822,963142,75517,31520,40019,85323,586AO
STDE15,5013028372055412,7912982160843,904491,9931571132,943107512473611872
V S h a p e d
V V 1 AVG67,89122,23322,01923,87590,64529,26723,80234,42794,25223,8841,102,16025,43061,50527,78379,981AO
STDE943918921961306158,871428128387587275,78935443,465,1913020175,57811,834199,837
V V 2 AVG59,64319,00319,31820,72357,761996,07522,53885,12036,687113,4101,310,59920,95822,49128,99927,587AVO
STDE46642892601512493,987,94125,247181,02251,720463,8685,396,0173363227064105708
V V 3 AVG65,06121,28921,74423,42862,9782,269,89318,406131,92625,651160,8812,638,87617,66720,50820,64522,698HHO
STDE243831552630048458,022,7052668391,7302099469,0099,852,159600321380699
V V 4 AVG105,24028,34373,36734,158107,950709,97617,71421,33692,96722,817251,30218,82820,85322,9901,556,733BA
STDE138,2286919181,029588913,4543,650,5731704518253,812954171,34392839544865,193,120
Bold values indicate the highest results obtained.
Table 9. Assessment of the impact of 9 TFs on 15 MHO algorithms with SVM in terms of ( A V G A C C ).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG0.93330.93450.93500.93470.93330.93750.93220.93390.93530.93310.94060.93330.93280.92940.9317SFO
STDE0.00210.00340.00370.00360.00210.00420.00300.00290.00390.00150.00210.002100.00410.0029
S V 1 C AVG0.93360.93640.93280.93560.93360.93030.93220.93470.93560.93280.93360.93330.93310.93050.9311AO
STDE0.00250.004200.00400.00250.00390.00210.00360.004000.00250.00210.00150.00370.0034
S V 2 AVG0.93310.93560.93530.93420.93310.93640.93190.93310.93470.93280.94120.93310.93280.93050.9317SFO
STDE0.00150.00400.00390.00310.00150.00420.00400.00150.0036000.001500.00370.0029
S V 3 AVG0.93280.93450.93530.93390.93330.93420.93190.93280.93450.93280.94010.93310.93280.93190.9311SFO
STDE00.00340.00390.00290.00210.00310.003300.003400.00290.001500.00250.0034
S V 4 AVG0.93330.93530.93420.93530.93330.93590.93310.93500.93610.93280.94120.93330.93280.92970.9314SFO
STDE0.00210.00390.00310.00390.00210.00400.00260.00370.00410.002200.002100.00400.0031
V S h a p e d
V V 1 AVG0.93470.93390.93280.93390.93280.93110.92830.93310.93250.93280.93390.93360.93310.92720.9283BBO
STDE0.00360.002900.002900.00340.00420.00150.001500.00290.00250.00150.00450.0042
V V 2 AVG0.93390.93390.93390.93310.93310.93080.92970.93330.93250.93280.93360.93310.93310.92720.9289BBO, AVO, AO
STDE0.00290.00290.00290.00150.00150.00360.00400.00210.001500.00250.00150.00150.00450.0042
V V 3 AVG0.93470.93310.93420.93360.93280.93220.92860.93330.93280.93280.93360.93280.93330.92720.9328BBO
STDE0.00360.00150.00310.002500.00210.00420.00210.002200.002500.00210.00400.0041
V V 4 AVG0.93330.93360.93390.93310.93310.93190.92970.93360.93280.93280.93330.93280.93280.92750.9258AO
STDE0.00210.00250.00290.00150.00150.00250.00400.0025000.0021000.00400.0031
Bold values indicate the highest results obtained.
Table 10. Assessment of the impact of 9 TFs on 15 MHO algorithms with SVM in terms of ( A V G F e a t u r e s ).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG7.46671.86671.53331.73337.50001.90005.60004.40002.63338.53332.00007.56678.96679.733310.9667AO
STDE1.11751.58610.95681.63161.64820.90744.12802.37491.32871.66800.36512.44522.10532.59402.3872
S V 1 C AVG6.96672.00008.23331.90007.166713.30008.90004.53332.00008.43333.16676.40008.200010.200011.1000SSA
STDE1.85261.21112.60361.10601.31872.47863.81532.10921.39041.54242.49112.20001.66132.15101.5351
S V 2 AVG7.40002.86671.36672.36677.43333.16675.16677.16674.10009.03332.40007.66678.60009.766710.9333AO
STDE1.08321.94480.60461.40201.52061.75283.65221.82732.97041.44880.84061.66001.22752.12421.8607
S V 3 AVG7.46673.26671.50003.43337.63334.66676.90007.26674.53338.63333.30008.36678.76679.366710.7333AO
STDE0.92142.29400.76382.56491.32871.59863.83281.54782.99701.35361.86461.66301.70651.53801.7876
S V 4 AVG7.60002.23331.66671.53336.60001.60003.83332.86672.06679.40002.33335.50008.466710.433311.1000SSA
STDE1.54061.76411.07500.84591.42830.84063.18421.92761.71142.23010.64982.72951.99561.64692.4269
V S h a p e d
V V 1 AVG5.16675.93334.63335.56679.266711.400010.73336.80008.76678.90007.23336.96678.133311.666711.9000AO
STDE1.75281.45910.83601.45331.73081.66532.44861.72052.07661.92091.58502.41502.32002.02212.3714
V V 2 AVG5.20005.86675.23335.30009.300011.200010.80006.96678.73339.10007.13336.43338.100011.666711.9333BBO
STDE1.51441.74611.68691.59481.73492.16642.31521.68291.96532.25611.52171.47611.88591.67992.3228
V V 3 AVG5.33335.46675.00005.70008.033310.73339.70006.60008.36678.36676.53336.03338.033311.266711.7333AO
STDE2.19601.49961.87971.34541.40201.59021.73491.80002.16771.30341.20371.51621.68292.61961.9653
V V 4 AVG5.00005.00005.06675.86678.400010.366710.86676.53338.53338.16676.96676.50007.900010.766711.4000BBO, AVO
STDE1.26491.34161.96531.80251.14312.15232.66751.72691.99561.77171.66301.76541.42242.49911.6042
Bold values indicate the highest results obtained.
Table 11. Assessment of the impact of 9 TFs on 15 MHO algorithms with SVM in terms of ( A V G F i t ).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG0.06860.06550.06490.06520.06860.06250.06900.06700.06500.06920.05950.06860.06960.07320.0714SFO
STDE0.00190.00320.00350.00310.00160.00390.00370.00310.00370.00160.00200.00220.00070.00450.0025
S V 1 C AVG0.06810.06360.06940.06440.06820.07360.07020.06620.06450.06950.06680.06820.06910.07230.0720AVO
STDE0.00220.00390.00090.00370.00220.00380.00270.00380.00380.00050.00210.00230.00160.00380.0031
S V 2 AVG0.06880.06480.06450.06600.06880.06400.06920.06870.06600.06970.05910.06890.06950.07210.0714SFO
STDE0.00140.00400.00360.00290.00120.00400.00470.00130.00370.00050.00030.00160.00040.00380.0027
S V 3 AVG0.06910.06600.06460.06660.06860.06680.06980.06910.06650.06950.06050.06920.06960.07060.0719SFO
STDE0.00030.00350.00360.00270.00180.00290.00400.00050.00330.00050.00260.00170.00060.00240.0031
S V 4 AVG0.06860.06480.06570.06460.06830.06410.06760.06530.06390.06980.05900.06790.06950.07320.0718SFO
STDE0.00180.00360.00300.00360.00200.00380.00320.00360.00370.00220.00020.00240.00070.00400.0029
V S h a p e d
V V 1 AVG0.06640.06750.06820.06740.06980.07210.07470.06860.06990.06960.06790.06810.06910.07610.0751BBO
STDE0.00330.00270.00030.00270.00060.00330.00410.00130.00180.00070.00230.00230.00170.00440.0038
V V 2 AVG0.06720.06750.06720.06810.06950.07240.07330.06840.06980.06970.06820.06850.06910.07610.0746BBO
STDE0.00270.00260.00240.00140.00160.00350.00420.00220.00150.00080.00220.00170.00170.00440.0038
V V 3 AVG0.06650.06820.06690.06770.06930.07080.07410.06830.06940.06940.06800.06860.06880.07600.0756BBO
STDE0.00320.00120.00290.00250.00050.00210.00420.00210.00210.00040.00230.00050.00210.00410.0038
V V 4 AVG0.06770.06740.06720.06830.06920.071000.07340.06800.06950.06940.06840.06880.06930.07550.0774AO
STDE0.00190.00250.00270.00170.00140.00240.00400.00240.00070.00060.00160.00060.00050.00370.0030
Bold values indicate the highest results obtained.
Table 12. Assessment of the impact of 9 TFs on 15 MHO algorithms based on SVM in terms of average computational time (in milliseconds (ms)).
Metric | BBO | AVO | AO | SSA | ABC | PSO | BA | GWO | WOA | GOA | SFO | HHO | BSA | ASO | HGSO | Winner
S S h a p e d
S V 1 AVG28,97975766793913229,619639586507793804010,63468,89388669173836211,120PSO
STDE161587246471415375711331963942741167044511811407382
S V 1 C AVG28,961835212,587916430,12312,70884458008.9667794011,275685,09930,22013,432985617,557WOA
STDE17775603047561581110511741074124910282,896,82797,500487216807035
S V 2 AVG44,77911,95110,34612,56243,85813,723113,59216,10911,86215,043316,007925810,046124,44512,331HHO
STDE6882217013732916146344637513,40528,64324443179637,806526855612,6221657
S V 3 AVG31,6528993946611,83836,42910,25857,08311,038163,72114,6921,257,44913,23312,75710,34014,005AVO
STDE12858011362340867452594180,4281223410,10629765,486,94712936201871546
S V 4 AVG30,52377866795961728,089536591327588796910,65751,32293028565877611,442PSO
STDE4659154568368324621043128612689641759128655415221861451
V S h a p e d
V V 1 AVG25,897820710,04411,464615,26112,89211,086889211,38912,154129,52561649705543,87611,687HHO
STDE206110006026362,204,34517349631736171915852181753418312,829,738309
V V 2 AVG26,0089518973611,03334,84412,84610,996885211,26912,179185,3196059973914,90011,669HHO
STDE2304265542364322631777769150417231651277,3004991830629361
V V 3 AVG77,7368405197,42423,81793,18126,76017,35113,85211,46614,751272,414622121,47017,76311,517HHO
STDE156,7631227573,17911,23338,73613,692525314,53519593924659,99196746,7553137353
V V 4 AVG317,211596,14217,78310,940187,13612,47711,095822311,16811,388929,0995757977614,57511,274HHO
STDE893,8412,389,68330,533729808,15616916661555175211623,428,2644591359567325
Bold values indicate the highest results obtained.
Table 13. Accuracy of different research for the CCF dataset.
Year | Author | Accuracy | Methodology | Feature Selection Technique | Sampling Technique
2018 | Lakshmi and Selvani [83] | 0.9550 | RF | - | Oversampling
2023 | Almazroi and Ayub [84] | 0.5600 | SVM | - | SMOTE
2023 | Mniai et al. [82] | 0.8900 | RF | - | Undersampling
2023 | Mniai et al. [82] | 0.8300 | SVM | - | Undersampling
2023 | Mniai et al. [82] | 0.9100 | RF | Filter (Rank Order Filter) | Undersampling
2023 | Mniai et al. [82] | 0.8300 | SVM | Filter (Rank Order Filter) | Undersampling
2023 | Mniai et al. [82] | 0.8800 | RF | Wrapper (Recursive Feature Elimination) | Undersampling
2023 | Mniai et al. [82] | 0.8200 | SVM | Wrapper (Recursive Feature Elimination) | Undersampling
2023 | Mniai et al. [82] | 0.8900 | RF | Embedded | Undersampling
2023 | Mniai et al. [82] | 0.8300 | SVM | Embedded | Undersampling
2024 | CCFD (Our Research) | 0.9779 | RF | MHO (SFO) | Undersampling
2024 | CCFD (Our Research) | 0.9412 | SVM | MHO (SFO) | Undersampling
Bold values indicate the highest results obtained.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
