Search Results (4,446)

Search Parameters:
Keywords = computing cloud

32 pages, 8824 KiB  
Article
Trust Management and Resource Optimization in Edge and Fog Computing Using the CyberGuard Framework
by Ahmed M. Alwakeel and Abdulrahman K. Alnaim
Sensors 2024, 24(13), 4308; https://doi.org/10.3390/s24134308 - 2 Jul 2024
Viewed by 151
Abstract
The growing importance of edge and fog computing in the modern IT infrastructure is driven by the rise of decentralized applications. However, resource allocation within these frameworks is challenging due to varying device capabilities and dynamic network conditions. Conventional approaches often result in poor resource use and slowed advancements. This study presents a novel strategy for enhancing resource allocation in edge and fog computing by integrating machine learning with the blockchain for reliable trust management. Our proposed framework, called CyberGuard, leverages the blockchain’s inherent immutability and decentralization to establish a trustworthy and transparent network for monitoring and verifying edge and fog computing transactions. CyberGuard combines the Trust2Vec model with conventional machine-learning models like SVM, KNN, and random forests, creating a robust mechanism for assessing trust and security risks. Through detailed optimization and case studies, CyberGuard demonstrates significant improvements in resource allocation efficiency and overall system performance in real-world scenarios. Our results highlight CyberGuard’s effectiveness, evidenced by a remarkable accuracy, precision, recall, and F1-score of 98.18%, showcasing the transformative potential of our comprehensive approach in edge and fog computing environments. Full article
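A rough illustration of the last stage described above — combining SVM, KNN, and random forest assessments of trust — is sketched below. It is only a soft-voting ensemble over placeholder features standing in for Trust2Vec embeddings; the data, labels, and parameters are hypothetical, not the CyberGuard implementation.

```python
# Hypothetical sketch: soft-voting ensemble over SVM, KNN, and random forest,
# operating on precomputed trust-embedding features (stand-ins for Trust2Vec output).
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))                  # placeholder node trust embeddings
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # placeholder trust labels (1 = trusted)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),          # probability=True enables soft voting
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
print(classification_report(y_te, ensemble.predict(X_te)))
```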
18 pages, 2032 KiB  
Article
Receptive Field Space for Point Cloud Analysis
by Zhongbin Jiang, Hai Tao and Ye Liu
Sensors 2024, 24(13), 4274; https://doi.org/10.3390/s24134274 - 1 Jul 2024
Viewed by 189
Abstract
Similar to convolutional neural networks for image processing, existing analysis methods for 3D point clouds often require the designation of a local neighborhood to describe the local features of the point cloud. This local neighborhood is typically manually specified, which makes it impossible for the network to dynamically adjust the receptive field’s range. If the range is too large, it tends to overlook local details, and if it is too small, it cannot establish global dependencies. To address this issue, we introduce in this paper a new concept: receptive field space (RFS). With a minor computational cost, we extract features from multiple consecutive receptive field ranges to form this new receptive field space. On this basis, we further propose a receptive field space attention mechanism, enabling the network to adaptively select the most effective receptive field range from RFS, thus equipping the network with the ability to adjust granularity adaptively. Our approach achieved state-of-the-art performance in both point cloud classification, with an overall accuracy (OA) of 94.2%, and part segmentation, achieving an mIoU of 86.0%, demonstrating the effectiveness of our method. Full article
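To make the receptive field space idea more tangible, here is a minimal forward-pass sketch, assuming per-point features have already been extracted at several consecutive neighborhood sizes; the scoring head, shapes, and data are illustrative, not the authors' architecture.

```python
# Minimal numpy sketch of a "receptive field space" (RFS) attention forward pass.
# Features from K consecutive receptive-field ranges are stacked and combined
# with per-point softmax attention weights; all shapes and parameters are illustrative.
import numpy as np

N, K, C = 1024, 4, 64             # points, receptive-field ranges, channels
rng = np.random.default_rng(0)
rfs = rng.normal(size=(N, K, C))  # features extracted at K neighborhood sizes

W = rng.normal(scale=0.1, size=(C, 1))                # tiny scoring head (would be learned)
scores = (rfs @ W).squeeze(-1)                        # (N, K): one score per range per point
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)         # softmax over the K ranges

fused = (weights[..., None] * rfs).sum(axis=1)        # (N, C) adaptively fused features
print(fused.shape, weights[0])  # each point ends up with its own range weighting
```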
31 pages, 1675 KiB  
Review
A Review of Edge Computing Technology and Its Applications in Power Systems
by Shiyang Liang, Shuangshuang Jin and Yousu Chen
Energies 2024, 17(13), 3230; https://doi.org/10.3390/en17133230 - 1 Jul 2024
Viewed by 272
Abstract
Recent advancements in network-connected devices have led to a rapid increase in the deployment of smart devices and enhanced grid connectivity, resulting in a surge in data generation and expanded deployment to the edge of systems. Classic cloud computing infrastructures are increasingly challenged by the demands for large bandwidth, low latency, fast response speed, and strong security. Therefore, edge computing has emerged as a critical technology to address these challenges, gaining widespread adoption across various sectors. This paper introduces the advent and capabilities of edge computing, reviews its state-of-the-art architectural advancements, and explores its communication techniques. A comprehensive analysis of edge computing technologies is also presented. Furthermore, this paper highlights the transformative role of edge computing in various areas, particularly emphasizing its role in power systems. It summarizes architecture-oriented edge computing applications in power systems, such as power system monitoring, smart meter management, data collection and analysis, and resource management. Additionally, the paper discusses the future opportunities of edge computing in enhancing power system applications. Full article
(This article belongs to the Section F: Electrical Engineering)
28 pages, 2021 KiB  
Article
Towards Sustainable Cloud Computing: Load Balancing with Nature-Inspired Meta-Heuristic Algorithms
by Peiyu Li, Hui Wang, Guo Tian and Zhihui Fan
Electronics 2024, 13(13), 2578; https://doi.org/10.3390/electronics13132578 - 30 Jun 2024
Viewed by 308
Abstract
Cloud computing is considered suitable for organizations thanks to its flexibility and the provision of digital services via the Internet. The cloud provides nearly limitless computing resources on demand without any upfront costs or long-term contracts, enabling organizations to meet their computing needs more economically. Furthermore, cloud computing provides higher security, scalability, and reliability levels than traditional computing solutions. The efficiency of the platform affects factors such as Quality of Service (QoS), congestion, lifetime, energy consumption, dependability, and scalability. Load balancing refers to managing traffic flow to spread it across several channels. Asymmetric network traffic results in increased traffic processing, more congestion on specific routes, and fewer packets delivered. This paper focuses on analyzing the use of meta-heuristic algorithms based on the principles of natural selection to address load imbalance in cloud systems. To this end, it offers a detailed literature review on the essential meta-heuristic algorithms for load balancing in cloud computing. The study also assesses and analyses meta-heuristic algorithm performance in load balancing, as revealed by past studies, experiments, and case studies. Key performance indicators encompass response time, throughput, resource utilization, and scalability, and they are used to assess how these algorithms impact load-balancing efficiency. Full article
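As a toy illustration of the nature-inspired approach this review surveys (not any specific algorithm from the reviewed studies), the sketch below evolves task-to-VM assignments with a simple genetic algorithm that minimizes makespan; all sizes and parameters are invented.

```python
# Toy genetic algorithm for cloud load balancing: evolve task->VM assignments
# to minimize makespan (completion time of the most loaded VM). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_vms = 60, 8
task_len = rng.uniform(1, 10, n_tasks)      # task lengths (arbitrary units)
vm_speed = rng.uniform(1, 3, n_vms)         # VM processing speeds

def makespan(assign):
    load = np.zeros(n_vms)
    np.add.at(load, assign, task_len)       # total work assigned to each VM
    return (load / vm_speed).max()

pop = rng.integers(0, n_vms, size=(40, n_tasks))        # initial random population
for gen in range(200):
    fitness = np.array([makespan(ind) for ind in pop])
    pop = pop[np.argsort(fitness)]                       # elitist sort (lower is better)
    children = []
    while len(children) < len(pop) // 2:
        p1, p2 = pop[rng.integers(0, 10, 2)]             # parents from the best 10
        cut = rng.integers(1, n_tasks)
        child = np.concatenate([p1[:cut], p2[cut:]])     # one-point crossover
        mut = rng.random(n_tasks) < 0.05                 # 5% mutation rate
        child[mut] = rng.integers(0, n_vms, mut.sum())
        children.append(child)
    pop[len(pop) // 2:] = np.array(children)             # replace the weaker half

print("best makespan:", makespan(pop[0]))
```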
16 pages, 1888 KiB  
Article
Edge Computing-Enabled Secure Forecasting Nationwide Industry PM2.5 with LLM in the Heterogeneous Network
by Changkui Yin, Yingchi Mao, Zhenyuan He, Meng Chen, Xiaoming He and Yi Rong
Electronics 2024, 13(13), 2581; https://doi.org/10.3390/electronics13132581 - 30 Jun 2024
Viewed by 253
Abstract
The heterogeneous network formed by the deployment and interconnection of various network devices (e.g., sensors) has attracted widespread attention. PM2.5 forecasting over the entire industrial region throughout mainland China is an important application of heterogeneous networks and is of great significance for factory management, human health, and travel. In recent times, Large Language Models (LLMs) have exhibited notable capability in time series prediction. However, applying existing LLMs to forecast nationwide industry PM2.5 encounters two issues. First, most LLM-based models use centralized training, which requires uploading large amounts of data from sensors to a central cloud. This entire transmission process can lead to security risks of data leakage. Second, LLMs fail to extract spatiotemporal correlations in the nationwide sensor network (heterogeneous network). To tackle these issues, we present a novel framework entitled Spatio-Temporal Large Language Model with Edge Computing Servers (STLLM-ECS) to securely predict nationwide industry PM2.5 in China. In particular, we initially partition the entire sensor network, located in the national industrial region, into several subgraphs. Each subgraph is allocated an edge computing server (ECS) for training and inference, avoiding the security risks caused by data transmission. Additionally, a novel LLM-based approach named Spatio-Temporal Large Language Model (STLLM) is developed to extract spatiotemporal correlations and infer prediction sequences. Experimental results prove the effectiveness of our proposed model. Full article
(This article belongs to the Special Issue Network Security Management in Heterogeneous Networks)
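The partitioning step described above — splitting the nationwide sensor network into subgraphs, each served by its own edge computing server — might look roughly like the sketch below, with k-means on sensor coordinates used only as a stand-in for whatever partitioning scheme the paper actually applies.

```python
# Hypothetical sketch: split a sensor network into k subgraphs and assign each
# subgraph to one edge computing server. K-means on coordinates is a stand-in
# for the actual (unspecified here) graph-partitioning scheme.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(500, 2))   # 500 sensor locations (km, made up)

k = 8                                           # number of edge computing servers
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)

subgraphs = {s: np.flatnonzero(labels == s) for s in range(k)}
for server, sensors in subgraphs.items():
    # Each edge server would train and infer only on its own subgraph's data,
    # so raw sensor readings never leave the local region.
    print(f"edge server {server}: {len(sensors)} sensors")
```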
17 pages, 2133 KiB  
Article
Network Slicing in 6G: A Strategic Framework for IoT in Smart Cities
by Ahmed M. Alwakeel and Abdulrahman K. Alnaim
Sensors 2024, 24(13), 4254; https://doi.org/10.3390/s24134254 - 30 Jun 2024
Viewed by 215
Abstract
The emergence of 6G communication technologies brings both opportunities and challenges for the Internet of Things (IoT) in smart cities. In this paper, we introduce an advanced network slicing framework designed to meet the complex demands of 6G smart cities’ IoT deployments. The framework development follows a detailed methodology that encompasses requirement analysis, metric formulation, constraint specification, objective setting, mathematical modeling, configuration optimization, performance evaluation, parameter tuning, and validation of the final design. Our evaluations demonstrate the framework’s high efficiency, evidenced by low round-trip time (RTT), minimal packet loss, increased availability, and enhanced throughput. Notably, the framework scales effectively, managing multiple connections simultaneously without compromising resource efficiency. Enhanced security is achieved through robust features such as 256-bit encryption and a high rate of authentication success. The discussion elaborates on these findings, underscoring the framework’s impressive performance, scalability, and security capabilities. Full article
(This article belongs to the Special Issue Edge Computing in Internet of Things Applications)
16 pages, 4124 KiB  
Article
IoT-Based Heartbeat Rate-Monitoring Device Powered by Harvested Kinetic Energy
by Olivier Djakou Nekui, Wei Wang, Cheng Liu, Zhixia Wang and Bei Ding
Sensors 2024, 24(13), 4249; https://doi.org/10.3390/s24134249 - 29 Jun 2024
Viewed by 396
Abstract
Remote patient-monitoring systems are helpful since they can provide timely and effective healthcare facilities. Such online telemedicine is usually achieved with the help of sophisticated and advanced wearable sensor technologies. Modern wearable connected devices enable the monitoring of vital sign parameters such as heart rate variability (HRV), electrocardiogram (ECG), blood pressure (BLP), respiratory rate, and body temperature. The ubiquitous problem of wearable devices is their power demand for signal transmission; such devices require frequent battery charging, which causes serious limitations to the continuous monitoring of vital data. To overcome this, the current study provides a primary report on collecting kinetic energy from daily human activities for monitoring vital human signs. The harvested energy is used to sustain the battery autonomy of wearable devices, which allows for a longer monitoring time of vital data. This study proposes a novel type of stress- or exercise-monitoring ECG device based on a microcontroller (PIC18F4550) and a Wi-Fi device (ESP8266), which is cost-effective and enables real-time monitoring of heart rate in the cloud during normal daily activities. In order to achieve both portability and maximum power, the harvester has a small structure and low friction. Neodymium magnets were chosen for their high magnetic strength, versatility, and compact size. Due to the non-linear magnetic force interaction of the magnets, the non-linear part of the dynamic equation has an inverse quadratic form. Electromechanical damping is considered in this study, and the quadratic non-linearity is approximated using a Maclaurin expansion, which enables us to find the law of motion for general case studies using classical methods for dynamic equations and the suitable parameters for the harvester. The oscillations are enabled by applying an initial force, and there is a loss of energy due to the electromechanical damping. A typical numerical application is computed with Matlab 2015 software, and an ODE45 solver is used to verify the accuracy of the method. Full article
(This article belongs to the Section Wearables)
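The dynamics described above (a damped oscillator with an inverse-quadratic magnetic force term, solved with ODE45 in Matlab) can be reproduced in spirit with scipy's solve_ivp; the parameter values and exact force law below are illustrative, not the paper's.

```python
# Rough numerical sketch of a damped spring-mass oscillator with an inverse-quadratic
# magnetic repulsion term, analogous to solving the harvester dynamics with ODE45.
# All parameter values and the exact force law are illustrative, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 0.02, 0.05, 40.0      # mass (kg), damping (N·s/m), stiffness (N/m)
A, d = 1e-6, 0.01               # magnetic force coefficient and magnet gap (m)

def rhs(t, y):
    x, v = y
    f_mag = -A / (d - x) ** 2    # repulsion from a magnet fixed at x = d pushes the mass away
    return [v, (-c * v - k * x + f_mag) / m]

# Oscillation started with an initial displacement ("initial force"), then decaying
# due to the electromechanical damping term.
sol = solve_ivp(rhs, (0.0, 2.0), y0=[0.002, 0.0], max_step=1e-3)
print("final displacement (m):", sol.y[0, -1])
```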
21 pages, 658 KiB  
Review
A Review of Key Technologies for Environment Sensing in Driverless Vehicles
by Yuansheng Huo and Chengwei Zhang
World Electr. Veh. J. 2024, 15(7), 290; https://doi.org/10.3390/wevj15070290 - 29 Jun 2024
Viewed by 144
Abstract
Environment perception technology is the most important part of driverless technology, and driverless vehicles rely on perception feedback to realize decision planning and control. This paper summarizes the most promising technology methods in the field of perception, namely visual perception technology, radar perception technology, state perception technology, and information fusion technology. At the current stage of development in the field, progress in perception technology comes mainly from innovation in information fusion and the optimization of algorithms, while multimodal perception and deep learning are becoming popular. In the future, the field can be transformed by intelligent sensors, which can promote edge computing and cloud collaboration, improve system data-processing capacity, and reduce the burden of data transmission. As driverless vehicles are a future development trend, the corresponding technology will become a research hotspot. Full article
20 pages, 4735 KiB  
Article
Elevating Smart Manufacturing with a Unified Predictive Maintenance Platform: The Synergy Between Data Warehousing, Apache Spark, and Machine Learning
by Nai-Jing Su, Shi-Feng Huang and Chuan-Jun Su
Sensors 2024, 24(13), 4237; https://doi.org/10.3390/s24134237 - 29 Jun 2024
Viewed by 211
Abstract
The transition to smart manufacturing introduces heightened complexity in regard to the machinery and equipment used within modern collaborative manufacturing landscapes, presenting significant risks associated with equipment failures. The core ambition of smart manufacturing is to elevate automation through the integration of state-of-the-art technologies, including artificial intelligence (AI), the Internet of Things (IoT), machine-to-machine (M2M) communication, cloud technology, and expansive big data analytics. This technological evolution underscores the necessity for advanced predictive maintenance strategies that proactively detect equipment anomalies before they escalate into costly downtime. Addressing this need, our research presents an end-to-end platform that merges the organizational capabilities of data warehousing with the computational efficiency of Apache Spark. This system adeptly manages voluminous time-series sensor data, leverages big data analytics for the seamless creation of machine learning models, and utilizes an Apache Spark-powered engine for the instantaneous processing of streaming data for fault detection. This comprehensive platform exemplifies a significant leap forward in smart manufacturing, offering a proactive maintenance model that enhances operational reliability and sustainability in the digital manufacturing era. Full article
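A loose, minimal sketch of the streaming half of such a platform is given below, using Spark Structured Streaming's built-in rate source as a stand-in for real sensor telemetry and a fixed threshold as a stand-in for a trained fault-detection model.

```python
# Minimal PySpark Structured Streaming sketch: ingest a stream of "sensor" values
# and flag readings above a threshold. The rate source and the static threshold
# are placeholders for real telemetry and a trained anomaly-detection model.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pdm-sketch").getOrCreate()

readings = (
    spark.readStream.format("rate").option("rowsPerSecond", 10).load()
    # "value" is a monotonically increasing long; derive a fake vibration-like signal from it
    .withColumn("vibration", (F.col("value") % 100).cast("double") / 10.0)
    .withColumn("is_fault", F.col("vibration") > 9.0)   # placeholder "model"
)

query = (
    readings.writeStream
    .outputMode("append")
    .format("console")
    .option("truncate", False)
    .start()
)
query.awaitTermination(30)   # run for ~30 seconds in this sketch
query.stop()
spark.stop()
```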
16 pages, 5994 KiB  
Article
Low-Cost Imaging to Quantify Germination Rate and Seedling Vigor across Lettuce Cultivars
by Mark Iradukunda, Marc W. van Iersel, Lynne Seymour, Guoyu Lu and Rhuanito Soranz Ferrarezi
Sensors 2024, 24(13), 4225; https://doi.org/10.3390/s24134225 - 29 Jun 2024
Viewed by 304
Abstract
The survival and growth of young plants hinge on various factors, such as seed quality and environmental conditions. Assessing seedling potential/vigor for a robust crop yield is crucial but often resource-intensive. This study explores cost-effective imaging techniques for rapid evaluation of seedling vigor, offering a practical solution to a common problem in agricultural research. In the first phase, nine lettuce (Lactuca sativa) cultivars were sown in trays and monitored using chlorophyll fluorescence imaging thrice weekly for two weeks. The second phase involved integrating embedded computers equipped with cameras for phenotyping. These systems captured and analyzed images four times daily, covering the entire growth cycle from seeding to harvest for four specific cultivars. All resulting data were promptly uploaded to the cloud, allowing for remote access and providing real-time information on plant performance. Results consistently showed the ‘Muir’ cultivar to have a larger canopy size and better germination, though ‘Sparx’ and ‘Crispino’ surpassed it in final dry weight. A non-linear model accurately predicted lettuce plant weight using seedling canopy size in the first study. The second study improved prediction accuracy with a sigmoidal growth curve from multiple harvests (R2 = 0.88, RMSE = 0.27, p < 0.001). Utilizing embedded computers in controlled environments offers efficient plant monitoring, provided there is a uniform canopy structure and minimal plant overlap. Full article
(This article belongs to the Section Sensing and Imaging)
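The sigmoidal growth-curve fit mentioned above reduces to a standard non-linear regression; a bare-bones version with a logistic form and synthetic data (not the study's measurements) could be:

```python
# Bare-bones sigmoidal (logistic) growth-curve fit, as used to predict plant weight
# from repeated canopy measurements. Data and parameter values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """K = asymptotic weight, r = growth rate, t0 = inflection time."""
    return K / (1.0 + np.exp(-r * (t - t0)))

days = np.arange(0, 35)
rng = np.random.default_rng(0)
weight = logistic(days, K=12.0, r=0.3, t0=20.0) + rng.normal(0, 0.3, days.size)

(K, r, t0), _ = curve_fit(logistic, days, weight, p0=[10.0, 0.2, 15.0])
pred = logistic(days, K, r, t0)
r2 = 1 - np.sum((weight - pred) ** 2) / np.sum((weight - weight.mean()) ** 2)
print(f"K={K:.2f}, r={r:.3f}, t0={t0:.1f}, R^2={r2:.3f}")
```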
29 pages, 10154 KiB  
Article
Developing a Semi-Automated Near-Coastal, Water Quality-Retrieval Process from Global Multi-Spectral Data: South-Eastern Australia
by Avik Nandy, Stuart Phinn, Alistair Grinham and Simon Albert
Remote Sens. 2024, 16(13), 2389; https://doi.org/10.3390/rs16132389 - 28 Jun 2024
Viewed by 271
Abstract
The estimation of water quality properties through satellite remote sensing relies on (1) the optical characteristics of the water body, (2) the resolutions (spatial, spectral, radiometric and temporal) of the sensor, and (3) the algorithm(s) applied. More than 80% of global water bodies fall under Case I (open ocean) waters, dominated by scattering and absorption associated with phytoplankton in the water column. Globally, previous studies show significant correlations between satellite-based retrieval methods and field measurements of absorbing and scattering constituents, while limited research has appeared for Australian coastal water bodies. This study presents a methodology to extract chlorophyll-a properties from surface waters in near-coastal environments, within 2 km of the coastline, in Tasmania, south-eastern Australia. We use general-purpose, global, long-time-series, multi-spectral satellite data, as opposed to ocean colour-specific sensor data. This approach may offer globally applicable tools for combining global satellite image archives with in situ field sensors for water quality monitoring. To enable applications from local to global scales, a cloud-based geospatial analysis workflow was developed and tested on several sites. This work represents the initial stage in developing a semi-automated near-coastal water-quality workflow using easily accessed, fully corrected global multi-spectral datasets alongside large-scale computation and delivery capabilities. Our results indicated a strong correlation between the in situ chlorophyll concentration data and blue-green band ratios from the multi-spectral sensor. In line with published research, environment-specific empirical models exhibited the highest correlations between in situ and satellite measurements, underscoring the importance of tailoring models to specific coastal waters. Our findings may provide the basis for developing this workflow for other sites in Australia. We acknowledge that, for general-purpose multi-spectral data such as the Sentinel-2 and Landsat series, the corrections and algorithms may not be as accurate and precise as those of ocean colour satellites. However, these data are more readily accessible, have true global coverage with historic archives, and their regular global collection will continue for at least 10 years into the future. Regardless of sensor specifications, the retrieval method relies on localised algorithm calibration and validation using in situ measurements, which demonstrates close-to-realistic outputs. We hope this approach enables future applications to also consider these globally accessible and regularly updated datasets that are suited to coastal environments. Full article
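The core retrieval step described above — an empirical relationship between in situ chlorophyll-a and a blue/green band ratio — amounts to a simple regression. The sketch below uses synthetic reflectances and a log-log linear form that is common for such ratio algorithms, though not necessarily the exact model used here.

```python
# Simple empirical band-ratio retrieval sketch: regress log(chlorophyll-a) against
# the log of a blue/green reflectance ratio. Reflectances and coefficients are synthetic;
# the paper's actual model form and bands may differ.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
chl = rng.uniform(0.2, 8.0, 80)                                      # in situ chl-a (mg m^-3)
blue_green_ratio = 1.8 * chl ** -0.35 * rng.normal(1.0, 0.05, 80)    # fake Rrs(blue)/Rrs(green)

fit = linregress(np.log10(blue_green_ratio), np.log10(chl))
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}, r={fit.rvalue:.2f}")

def chl_from_ratio(ratio):
    """Apply the fitted empirical model to new satellite-derived band ratios."""
    return 10 ** (fit.intercept + fit.slope * np.log10(ratio))

print("chl-a estimate for ratio 1.2:", round(float(chl_from_ratio(1.2)), 2), "mg m^-3")
```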
20 pages, 4187 KiB  
Article
A Neural-Network-Based Cost-Effective Method for Initial Weld Point Extraction from 2D Images
by Miguel-Angel Lopez-Fuster, Arturo Morgado-Estevez, Ignacio Diaz-Cano and Francisco J. Badesa
Machines 2024, 12(7), 447; https://doi.org/10.3390/machines12070447 - 28 Jun 2024
Viewed by 184
Abstract
This paper presents a novel approach for extracting 3D weld point information using a two-stage deep learning pipeline based on readily available 2D RGB cameras. Our method utilizes YOLOv8s for object detection, specifically targeting vertices, followed by semantic segmentation for precise pixel localization. This pipeline addresses the challenges posed by low-contrast images and complex geometries, significantly reducing costs compared with traditional 3D-based solutions. We demonstrated the effectiveness of our approach through a comparison with a 3D-point-cloud-based method, showcasing the potential for improved speed and efficiency. This research advances the field of automated welding by providing a cost-effective and versatile solution for extracting key information from 2D images. Full article
(This article belongs to the Special Issue Intelligent Welding)
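The two-stage pipeline described above (YOLOv8s detection of vertices, then segmentation for pixel-level localization) might be strung together roughly as follows; the weight files, thresholds, and crop-then-segment flow are assumptions for illustration, not the authors' exact configuration.

```python
# Rough two-stage sketch: detect weld-vertex regions with YOLOv8s, then run a
# segmentation model on each crop for pixel-level localization. Weight files,
# thresholds, and the crop-then-segment flow are assumptions for illustration.
import cv2
from ultralytics import YOLO

detector = YOLO("yolov8s.pt")         # stage 1: vertex/ROI detection (custom-trained in practice)
segmenter = YOLO("yolov8s-seg.pt")    # stage 2: segmentation for precise pixel localization

image = cv2.imread("weld_sample.jpg")                 # hypothetical input image
detections = detector(image, conf=0.25)[0]

for box in detections.boxes.xyxy.cpu().numpy().astype(int):
    x1, y1, x2, y2 = box
    crop = image[y1:y2, x1:x2]
    seg = segmenter(crop)[0]
    if seg.masks is not None:
        # Mask coordinates (in crop space) would then be mapped back to the full image
        # and, with camera calibration, lifted to 3D weld points.
        print("vertex crop", (x1, y1, x2, y2), "-> mask polygons:", len(seg.masks.xy))
```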
30 pages, 959 KiB  
Article
Last Word in Last-Mile Logistics: A Novel Hybrid Multi-Criteria Decision-Making Model for Ranking Industry 4.0 Technologies
by Miloš Veljović, Snežana Tadić and Mladen Krstić
Mathematics 2024, 12(13), 2010; https://doi.org/10.3390/math12132010 - 28 Jun 2024
Viewed by 180
Abstract
The complexity, increasing number and volume of flows, and challenges of last-mile logistics (LML) motivate or compel companies, authorities, and the entire community to think about ways to increase efficiency, reliability, and profits, reduce costs, reduce negative environmental impacts, etc. These objectives can be met by applying Industry 4.0 (I4.0) technologies, but the key question is which one. To solve this task, this paper used an innovative method that combines the fuzzy analytic network process (fuzzy ANP) and the fuzzy axial-distance-based aggregated measurement (fuzzy ADAM) method. The first was used for determining criteria weights and the second for selecting the best variant. The best solution is e/m-marketplaces, followed by cloud-computing-supported management and control systems and blockchain. These results indicate that widely adopted and implemented technologies are suitable for last-mile logistics, while newer technologies already producing significant results have serious potential for further development in this area. The main novelties and contributions of this paper are the definition of a new methodology based on multi-criteria decision-making (MCDM) methods, as well as its application for ranking I4.0 technologies for LML. Full article
(This article belongs to the Special Issue Multi-criteria Optimization Models and Methods for Smart Cities)
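For flavor, a generic fuzzy MCDM ranking step is sketched below using triangular fuzzy numbers and a weighted sum with centroid defuzzification; this is deliberately much simpler than the fuzzy ANP/ADAM combination the paper actually proposes, and all values are invented.

```python
# Generic triangular-fuzzy weighted-sum ranking sketch (NOT the fuzzy ANP/ADAM method):
# each score and weight is a triangular fuzzy number (l, m, u); alternatives are ranked
# by the centroid of their aggregated fuzzy score. All values below are invented.
import numpy as np

# criteria weights as triangular fuzzy numbers (l, m, u)
weights = np.array([(0.2, 0.3, 0.4), (0.3, 0.4, 0.5), (0.2, 0.3, 0.4)])

# fuzzy performance of 3 alternatives on the 3 criteria (rows: alternatives)
scores = np.array([
    [(5, 7, 9), (3, 5, 7), (7, 9, 9)],   # e.g. e/m-marketplaces
    [(5, 7, 9), (5, 7, 9), (3, 5, 7)],   # e.g. cloud-supported management and control systems
    [(3, 5, 7), (5, 7, 9), (5, 7, 9)],   # e.g. blockchain
])

# fuzzy multiplication/addition done component-wise (a common approximation for positive
# triangular numbers), then centroid defuzzification (l + m + u) / 3
aggregated = (scores * weights[None, :, :]).sum(axis=1)
crisp = aggregated.mean(axis=1)
for rank, idx in enumerate(np.argsort(-crisp), start=1):
    print(f"rank {rank}: alternative {idx} (crisp score {crisp[idx]:.2f})")
```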
18 pages, 10562 KiB  
Article
Motor PHM on Edge Computing with Anomaly Detection and Fault Severity Estimation through Compressed Data Using PCA and Autoencoder
by Jong Hyun Choi, Sung Kyu Jang, Woon Hyung Cho, Seokbae Moon and Hyeongkeun Kim
Mach. Learn. Knowl. Extr. 2024, 6(3), 1466-1483; https://doi.org/10.3390/make6030069 - 28 Jun 2024
Viewed by 252
Abstract
Motors are essential for manufacturing industries, but wear can cause unexpected failures. Prognostics and health management (PHM) for motors is critical in manufacturing sites. In particular, data-driven PHM using deep learning methods has gained popularity because it reduces the need for domain expertise. However, the massive amount of data poses challenges to traditional cloud-based PHM, making edge computing a promising solution. This study proposes a novel approach to motor PHM on edge devices. Our approach integrates principal component analysis (PCA) and an autoencoder (AE) encoder, achieving effective data compression while preserving fault detection and severity estimation integrity. The compressed data are visualized using t-SNE, and their ability to retain information is assessed through clustering performance metrics. The proposed method is tested on a custom-made experimental platform dataset, demonstrating robustness across various fault scenarios and providing valuable insights for practical applications in manufacturing. Full article
(This article belongs to the Section Data)
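A hedged sketch of the compression idea — PCA followed by a small autoencoder whose bottleneck output is the compressed representation — is given below; the dimensions, architecture, and data are invented, not the paper's configuration.

```python
# Hedged sketch of two-stage compression for motor PHM data: PCA first, then a small
# autoencoder whose bottleneck is the transmitted/stored representation.
# Dimensions, architecture, and the synthetic data are illustrative only.
import numpy as np
import torch
from torch import nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
signals = rng.normal(size=(2000, 256)).astype(np.float32)   # fake vibration/current features

pca = PCA(n_components=32)
reduced = pca.fit_transform(signals).astype(np.float32)     # 256 -> 32 dims

class AE(nn.Module):
    def __init__(self, d_in=32, d_code=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 16), nn.ReLU(), nn.Linear(16, d_code))
        self.decoder = nn.Sequential(nn.Linear(d_code, 16), nn.ReLU(), nn.Linear(16, d_in))
    def forward(self, x):
        return self.decoder(self.encoder(x))

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
x = torch.from_numpy(reduced)
for epoch in range(50):                                      # short reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(x), x)
    loss.backward()
    opt.step()

codes = ae.encoder(x).detach().numpy()                       # 32 -> 8 dims, kept on the edge device
print("compressed shape:", codes.shape, "reconstruction MSE:", float(loss))
```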
12 pages, 2966 KiB  
Article
Integrating PointNet-Based Model and Machine Learning Algorithms for Classification of Rupture Status of IAs
by Yilu Shou, Zhenpeng Chen, Pujie Feng, Yanan Wei, Beier Qi, Ruijuan Dong, Hongyu Yu and Haiyun Li
Bioengineering 2024, 11(7), 660; https://doi.org/10.3390/bioengineering11070660 - 28 Jun 2024
Viewed by 251
Abstract
Background: The rupture of intracranial aneurysms (IAs) would result in subarachnoid hemorrhage with high mortality and disability. Predicting the risk of IAs rupture remains a challenge. Methods: This paper proposed an effective method for classifying IAs rupture status by integrating a PointNet-based model and machine learning algorithms. First, medical image segmentation and reconstruction algorithms were applied to 3D Digital Subtraction Angiography (DSA) imaging data to construct three-dimensional IAs geometric models. Geometrical parameters of IAs were then acquired using Geomagic, followed by the computation of hemodynamic clouds and hemodynamic parameters using Computational Fluid Dynamics (CFD). A PointNet-based model was developed to extract different dimensional hemodynamic cloud features. Finally, five types of machine learning algorithms were applied on geometrical parameters, hemodynamic parameters, and hemodynamic cloud features to classify and recognize IAs rupture status. The classification performance of different dimensional hemodynamic cloud features was also compared. Results: The 16-, 32-, 64-, and 1024-dimensional hemodynamic cloud features were extracted with the PointNet-based model, respectively, and the four types of cloud features in combination with the geometrical parameters and hemodynamic parameters were respectively applied to classify the rupture status of IAs. The best classification outcomes were achieved in the case of 16-dimensional hemodynamic cloud features, the accuracy of XGBoost, CatBoost, SVM, LightGBM, and LR algorithms was 0.887, 0.857, 0.854, 0.857, and 0.908, respectively, and the AUCs were 0.917, 0.934, 0.946, 0.920, and 0.944. In contrast, when only utilizing geometrical parameters and hemodynamic parameters, the accuracies were 0.836, 0.816, 0.826, 0.832, and 0.885, respectively, with AUC values of 0.908, 0.922, 0.930, 0.884, and 0.921. Conclusion: In this paper, classification models for IAs rupture status were constructed by integrating a PointNet-based model and machine learning algorithms. Experiments demonstrated that hemodynamic cloud features had a certain contribution weight to the classification of IAs rupture status. When 16-dimensional hemodynamic cloud features were added to the morphological and hemodynamic features, the models achieved the highest classification accuracies and AUCs. Our models and algorithms would provide valuable insights for the clinical diagnosis and treatment of IAs. Full article
(This article belongs to the Section Biomedical Engineering and Biomaterials)
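The final classification stage described above (concatenating geometric, hemodynamic, and learned point-cloud features, then comparing classifiers on accuracy and AUC) reduces to something like the following, with synthetic features and sklearn models standing in for the XGBoost/CatBoost/LightGBM lineup.

```python
# Sketch of the final stage: concatenate geometric + hemodynamic + learned point-cloud
# features and compare classifiers on accuracy and AUC. Features and labels are synthetic,
# and sklearn models stand in for the paper's XGBoost/CatBoost/LightGBM/SVM/LR lineup.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 300
geometric = rng.normal(size=(n, 10))      # e.g. aspect ratio, size ratio, ...
hemodynamic = rng.normal(size=(n, 6))     # e.g. wall shear stress statistics
cloud_feats = rng.normal(size=(n, 16))    # 16-dim PointNet-style hemodynamic cloud features
X = np.hstack([geometric, hemodynamic, cloud_feats])
y = rng.integers(0, 2, n)                 # rupture status labels (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)
models = {
    "GBDT": GradientBoostingClassifier(random_state=0),
    "SVM": SVC(probability=True),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    print(f"{name}: acc={accuracy_score(y_te, model.predict(X_te)):.3f}, "
          f"AUC={roc_auc_score(y_te, proba):.3f}")
```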