
Search Results (427)

Search Parameters:
Keywords = DBSCAN

22 pages, 840 KiB  
Article
A Model to Analyze Industrial Clusters to Measure Land Use Efficiency in China
by Yanzhe Cui, Yingnan Niu, Yawen Ren, Shiyi Zhang and Lindan Zhao
Land 2024, 13(7), 1070; https://doi.org/10.3390/land13071070 - 16 Jul 2024
Viewed by 95
Abstract
An understanding of how land use efficiency and industrial clusters interact helps one to make informed decisions that balance economic benefits with sustainable urban development. The emergence of industrial clusters is a result of market behavior, while the determination of administrative boundaries is a result of government behavior. When these two are not consistent, it can lead to distortions in the allocation of land resources. However, current research on industrial development and land use efficiency is based on agglomeration within administrative regions rather than on industrial clusters. This study addresses this gap by identifying industrial clusters based on the spatial distribution of enterprises and analyzing their impact on land use efficiency. This study uses the density-based spatial clustering of applications with noise (DBSCAN) algorithm to identify industrial clusters, the convex hull algorithm to study their morphology, and spatial econometrics to measure the relationship between land use efficiency and the scale of industrial clusters. The results indicate the following: (1) the density of manufacturing industry (MI) clusters is significantly higher than that of information technology industry (ITI) clusters, and larger industrial clusters tend to be more circular in shape; (2) there is a positive correlation between the scale of industrial clusters and land use efficiency, and industrial clusters with varying levels of land use efficiency are interspersed throughout; (3) significant differences exist between the boundaries of industrial clusters and administrative regions, which could lead to biases when analyzing land use efficiency based on administrative regions. This study provides theoretical support for government policies on improving land use efficiency in China.
(This article belongs to the Section Land Socio-Economic and Political Issues)
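The DBSCAN step described in this abstract, grouping enterprise locations into density-connected clusters and discarding sparse points as noise, can be sketched in a few lines. The following is a minimal, illustrative pure-Python version with the usual `eps` and `min_pts` parameters, not the authors' implementation:

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: returns one label per point (-1 = noise).

    points: list of coordinate tuples; eps: neighborhood radius;
    min_pts: neighbors (incl. the point itself) needed for a core point.
    """
    n = len(points)
    labels = [None] * n  # None = not yet visited
    neighbors = [[j for j in range(n) if dist(points[i], points[j]) <= eps]
                 for i in range(n)]
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbors[i]) < min_pts:
            labels[i] = -1  # provisional noise; may later become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(neighbors[i])
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise upgraded to border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:
                seeds.extend(neighbors[j])  # j is core: keep expanding
    return labels
```

Two tight groups of points come out as two clusters, while an isolated point keeps the label -1; the convex hull of each resulting cluster could then be taken to study its morphology, as the paper does.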
15 pages, 5365 KiB  
Article
Extraction of Arbors from Terrestrial Laser Scanning Data Based on Trunk Axis Fitting
by Song Liu, Yuncheng Deng, Jianpeng Zhang, Jinliang Wang and Di Duan
Forests 2024, 15(7), 1217; https://doi.org/10.3390/f15071217 - 13 Jul 2024
Viewed by 302
Abstract
Accurate arbor extraction is an important element of forest surveys. However, the presence of shrubs can interfere with the extraction of arbors. Addressing the issues of low accuracy and weak generalizability in existing Terrestrial Laser Scanning (TLS) arbor point cloud extraction methods, this study proposes a trunk axis fitting (TAF) method for arbor extraction. After separating the point cloud data into upper and lower parts, slicing, clustering, fitting circles, obtaining the main central axis, and filtering by distance, the canopy point clouds are merged with the extracted trunk point clouds to precisely separate arbors and shrubs. The advantage of the TAF method proposed in this study is that it is not affected by point cloud density or the degree of trunk curvature. This study focuses on a natural forest plot in Shangri-La City, Yunnan Province, and a plantation plot in Kunming City, using manually extracted data from a standardized dataset of samples to test the accuracy of the TAF method and validate the feasibility of the proposed method. The results showed that the TAF method proposed in this study has high extraction accuracy. It can effectively avoid the problem of trunk point cloud loss caused by tree growth curvature. The experimental accuracy for both plots reached over 99%. This study can provide certain technical support for arbor parameter extraction and scientific guidance for forest resource investigation and forest management decision-making.
(This article belongs to the Special Issue Airborne and Terrestrial Laser Scanning in Forests)

28 pages, 14496 KiB  
Article
An Optimal Denoising Method for Spaceborne Photon-Counting LiDAR Based on a Multiscale Quadtree
by Baichuan Zhang, Yanxiong Liu, Zhipeng Dong, Jie Li, Yilan Chen, Qiuhua Tang, Guoan Huang and Junlin Tao
Remote Sens. 2024, 16(13), 2475; https://doi.org/10.3390/rs16132475 - 5 Jul 2024
Viewed by 510
Abstract
Ice, Cloud, and Land Elevation Satellite-2 (ICESat-2) has excellent potential for obtaining water depth information around islands and reefs. Combining the density-based spatial clustering of applications with noise algorithm (DBSCAN) and multiscale quadtree analysis, we propose a new photon-counting lidar denoising method to discard the large amount of noise in ICESat-2 data. First, the kernel density estimation (KDE) is used to preprocess the point cloud data, and a threshold is set to remove the noise photons on the sea surface. Next, the DBSCAN algorithm is used to preliminarily remove underwater noise photons. Then, the quadtree segmentation and Otsu algorithm are used for fine denoising to extract accurate bottom signal photons. Based on ICESat-2 photon-counting data from six typical islands and reefs worldwide, the proposed method outperforms other algorithms in terms of denoising effect. Compared to in situ data, the determination coefficient (R2) reaches 94.59%, and the root mean square error (RMSE) is 1.01 m. The proposed method can extract accurate underwater terrain information, laying a foundation for offshore bathymetry.
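The Otsu step used above for fine denoising selects the threshold that maximizes the between-class variance of a histogram. A minimal, illustrative sketch (not the authors' code), applied here to generic 1-D samples rather than photon elevations:

```python
def otsu_threshold(values, bins=256):
    """Otsu's method: return the split value that maximizes the
    between-class variance of the histogram of `values`."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0  # guard against constant input
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_bin, best_var = 0, -1.0
    for t in range(bins):
        w0 += hist[t]            # weight of the class below the split
        if w0 == 0:
            continue
        w1 = total - w0          # weight of the class above the split
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_bin = var_between, t
    return lo + (best_bin + 1) * width  # threshold as a value, not a bin
```

On clearly bimodal data the returned threshold falls between the two modes, which is what lets the paper separate signal photons from residual noise after the quadtree segmentation.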

19 pages, 5361 KiB  
Article
Research on Resident Behavioral Activities Based on Social Media Data: A Case Study of Four Typical Communities in Beijing
by Zhiyuan Ou, Bingqing Wang, Bin Meng, Changsheng Shi and Dongsheng Zhan
Information 2024, 15(7), 392; https://doi.org/10.3390/info15070392 - 5 Jul 2024
Viewed by 367
Abstract
With the support of big data mining techniques, social media data containing location information and rich semantic text can be used to construct large-scale daily activity OD flows for urban populations, providing new data resources and research perspectives for studying urban spatiotemporal structures. This paper employs the ST-DBSCAN algorithm to identify the residential locations of Weibo users in four communities and then uses the BERT model for activity-type classification of Weibo texts. Combined with the TF-IDF method, the results are analyzed from three aspects: temporal features, spatial features, and semantic features. The research findings indicate: (1) Spatially, residents’ daily activities are mainly centered around their residential locations, but there are significant differences in the radius and direction of activity among residents of different communities; (2) In the temporal dimension, the activity intensities of residents from different communities exhibit uniformity during different time periods on weekdays and weekends; (3) Based on semantic analysis, the differences in activities and venue choices among residents of different communities are deeply influenced by the comprehensive characteristics of the communities. This study explores methods for OD information mining based on social media data, which is of great significance for expanding methods of mining residents’ spatiotemporal behavior characteristics and for enriching research on configuring public service facilities based on community residents’ activity spaces and facility demands.
(This article belongs to the Special Issue Big Data Analytics in Smart Cities)

16 pages, 1949 KiB  
Article
Anomaly Detection Based on GCNs and DBSCAN in a Large-Scale Graph
by Christopher Retiti Diop Emane, Sangho Song, Hyeonbyeong Lee, Dojin Choi, Jongtae Lim, Kyoungsoo Bok and Jaesoo Yoo
Electronics 2024, 13(13), 2625; https://doi.org/10.3390/electronics13132625 - 4 Jul 2024
Viewed by 470
Abstract
Anomaly detection is critical across domains, from cybersecurity to fraud prevention. Graphs, adept at modeling intricate relationships, offer a flexible framework for capturing complex data structures. This paper proposes a novel anomaly detection approach, combining Graph Convolutional Networks (GCNs) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). GCNs, specialized deep learning models for graph data, extract meaningful node and edge representations by incorporating graph topology and attribute information. This facilitates learning expressive node embeddings capturing local and global structural patterns. For anomaly detection, DBSCAN, a density-based clustering algorithm effective in identifying clusters of varying densities amidst noise, is employed. By defining a minimum distance threshold and a minimum number of points within that distance, DBSCAN proficiently distinguishes normal graph elements from anomalies. Our approach involves training a GCN model on a labeled graph dataset, generating appropriately labeled node embeddings. These embeddings serve as input to DBSCAN, identifying clusters and isolating anomalies as noise points. The evaluation on benchmark datasets highlights the superior performance of our approach in anomaly detection compared to traditional methods. The fusion of GCNs and DBSCAN demonstrates a significant potential for accurate and efficient anomaly detection in graphs. This research contributes to advancing graph-based anomaly detection, with promising applications in domains where safeguarding data integrity and security is paramount.
(This article belongs to the Special Issue Advances in Data Science: Methods, Systems, and Applications)
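The anomaly criterion this approach inherits from DBSCAN can be distilled as follows: a point is noise if it is neither a core point (at least `min_pts` neighbors within `eps`) nor within `eps` of one. An illustrative sketch over 2-D points standing in for the learned node embeddings (toy data, not the paper's pipeline):

```python
from math import dist

def dbscan_noise(points, eps, min_pts):
    """Indices of DBSCAN noise points: points that are not core
    (fewer than min_pts neighbors within eps, incl. themselves) and
    are not within eps of any core point (i.e. not border points)."""
    n = len(points)
    counts = [sum(dist(points[i], points[j]) <= eps for j in range(n))
              for i in range(n)]
    core = [c >= min_pts for c in counts]
    return [i for i in range(n) if not core[i] and not any(
        core[j] and dist(points[i], points[j]) <= eps for j in range(n))]
```

Embeddings of normal nodes form dense clusters and survive; the isolated embedding is returned as an anomaly, mirroring how the paper treats DBSCAN noise points.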

13 pages, 2905 KiB  
Article
Deep Learning and Face Recognition: Face Recognition Approach Based on the DS-CDCN Algorithm
by Nan Deng, Zhengguang Xu, Xiuyun Li, Chenxuan Gao and Xue Wang
Appl. Sci. 2024, 14(13), 5739; https://doi.org/10.3390/app14135739 - 1 Jul 2024
Viewed by 380
Abstract
To enhance the performance and reliability of face recognition based on deep learning technology, this study utilizes the density-based spatial clustering of applications with noise (DBSCAN) algorithm to cluster a large-scale face image dataset, resulting in a self-constructed dataset. A deep separable center differential convolutional network (DS-CDCN) algorithm is utilized for face recognition. The impact of convolutional parameters on the algorithm’s performance is verified through ablation experiments on the convolutional parameters. The study found that the DBSCAN algorithm resulted in time savings of 43.66% and 51.22% compared to the K-means clustering algorithm and the hierarchical clustering algorithm, respectively, when analyzing 8000 images. Additionally, the DS-CDCN algorithm had a lower average classification error rate compared to the other two algorithms, with reductions of 2.49% and 17.01%, respectively. According to the experimental investigation, the DS-CDCN technique is an advanced method for identifying the faces of people of different races. It can provide efficient and accurate services for the face recognition needs of various races.
(This article belongs to the Special Issue Recent Applications of Artificial Intelligence for Bioinformatics)

24 pages, 2084 KiB  
Article
Graph-Based Hotspot Detection of Socio-Economic Data Using Rough-Set
by Mohd Shamsh Tabarej, Sonajharia Minz, Anwar Ahamed Shaikh, Mohammed Shuaib, Fathe Jeribi and Shadab Alam
Mathematics 2024, 12(13), 2031; https://doi.org/10.3390/math12132031 - 29 Jun 2024
Viewed by 308
Abstract
The term hotspot refers to a location or an area where the occurrence of a particular phenomenon, event, or activity is significantly higher than in the surrounding areas. Existing statistical methods do not work well on discrete data and can identify false hotspots. This paper proposes a novel graph-based hotspot detection using rough set (GBHSDRS) algorithm for detecting hotspots. This algorithm works well with discrete spatial vector data. Furthermore, it removes false hotspots by testing the statistical significance of the identified hotspots. Rough set theory is applied to the graph of the spatial polygon data, and the nodes are divided into lower, boundary, and negative regions. The candidate hotspots belong to the lower region of the set, and boundary value analysis ensures the identification of hotspots if any are present in the dataset. The p-value is used to assess the statistical significance of the hotspots. The algorithm is tested on 1991 socioeconomic data on medical facilities in Uttar Pradesh (UP). The average gain in density and Hotspot Prediction Accuracy Index (HAPI) of the detected hotspots is 26.54% and 23.41%, respectively. An average runtime reduction of 27.73% is achieved compared to all other methods on the socioeconomic data.

14 pages, 6881 KiB  
Article
A Tree Segmentation Algorithm for Airborne Light Detection and Ranging Data Based on Graph Theory and Clustering
by Jakub Seidl, Michal Kačmařík and Martin Klimánek
Forests 2024, 15(7), 1111; https://doi.org/10.3390/f15071111 - 27 Jun 2024
Viewed by 375
Abstract
This paper presents a single tree segmentation method applied to 3D point cloud data acquired with a LiDAR scanner mounted on an unmanned aerial vehicle (UAV). The method itself is based on clustering methods and graph theory and uses only the spatial properties of points. Firstly, the point cloud is reduced to clusters with DBSCAN. Those clusters are connected into a 3D graph, and then graph partitioning and further refinements are applied to obtain the final segments. Multiple datasets were acquired for two test sites in the Czech Republic, which are covered by commercial forest, to evaluate the influence of laser scanning parameters and forest characteristics on segmentation results. The accuracy of segmentation was compared with manual labels collected on top of the orthophoto image and reached between 82 and 93% depending on the test site and laser scanning parameters. Additionally, an area-based approach was employed for validation using field-measured data, where the distribution of tree heights in plots was analyzed.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)

20 pages, 995 KiB  
Article
Leveraging Sports Analytics and Association Rule Mining to Uncover Recovery and Economic Impacts in NBA Basketball
by Vangelis Sarlis, George Papageorgiou and Christos Tjortjis
Data 2024, 9(7), 83; https://doi.org/10.3390/data9070083 - 24 Jun 2024
Viewed by 587
Abstract
This study examines the multifaceted field of injuries and their impacts on performance in the National Basketball Association (NBA), leveraging a blend of Data Science, Data Mining, and Sports Analytics. Our research is driven by three pivotal questions: Firstly, we explore how Association Rule Mining can elucidate the complex interplay between players’ salaries, physical attributes, and health conditions and their influence on team performance, including team losses and recovery times. Secondly, we investigate the relationship between players’ recovery times and their teams’ financial performance, probing interdependencies with players’ salaries and career trajectories. Lastly, we examine how insights gleaned from Data Mining and Sports Analytics on player recovery times and financial influence can inform strategic financial management and salary negotiations in basketball. Harnessing extensive datasets detailing player demographics, injuries, and contracts, we employ advanced analytic techniques to categorize injuries and transform contract data into a format conducive to deep analytical scrutiny. Our anomaly detection methodologies, an ensemble combination of DBSCAN, isolation forest, and Z-score algorithms, spotlight patterns and outliers in recovery times, unveiling the intricate dance between player health, performance, and financial outcomes. This nuanced understanding emphasizes the economic stakes of sports injuries. The findings of this study provide a rich, data-driven foundation for teams and stakeholders, advocating for more effective injury management and strategic planning. By addressing these research questions, our work not only contributes to the academic discourse in Sports Analytics but also offers practical frameworks for enhancing player welfare and team financial health, thereby shaping the future of strategic decisions in professional sports.
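Of the three detectors in the ensemble above, the Z-score component is the simplest: an observation is flagged when it lies more than a chosen number of standard deviations from the mean. A minimal sketch with illustrative recovery-time data (not the study's dataset):

```python
from statistics import mean, stdev

def zscore_outliers(values, z_thresh=3.0):
    """Return (outlier_indices, z_scores): indices of values whose
    absolute z-score exceeds z_thresh."""
    mu, sigma = mean(values), stdev(values)
    zs = [(v - mu) / sigma for v in values]
    return [i for i, z in enumerate(zs) if abs(z) > z_thresh], zs
```

One caveat worth noting: in a small sample, a single extreme point inflates the standard deviation and caps its own |z| near (n-1)/sqrt(n), so a lower threshold (or an ensemble with density-based detectors, as in the paper) is needed for short series.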

19 pages, 6475 KiB  
Article
Data Clustering Utilization Technologies Using Medians of Current Values for Improving Arc Sensing in Unstructured Environments
by Hee-Jun Kim, Jeong-Ho Kim, Shin-Nyeong Heo, Do-Hyung Jeon and Won-Suk Kim
Sensors 2024, 24(13), 4075; https://doi.org/10.3390/s24134075 - 23 Jun 2024
Viewed by 332
Abstract
In the shipbuilding industry, welding automation using welding robots often relies on arc-sensing techniques due to spatial limitations. However, the reliability of the feedback current value, the core sensing data, is reduced when welding target workpieces have significant curvature or gaps between curved workpieces due to the control of the short-circuit transition, leading to seam-tracking failure and subsequent damage to the workpieces. To address these problems, this study proposes a new algorithm, MBSC (median-based spatial clustering), based on the DBSCAN (density-based spatial clustering of applications with noise) clustering algorithm. By performing clustering based on the median value of the data in each weaving area and considering the characteristics of the feedback current data, the proposed technique utilizes detected outliers to enhance seam-tracking accuracy and responsiveness in unstructured and challenging welding environments. The effectiveness of the proposed technique was verified through actual welding experiments in a yard environment.

18 pages, 7510 KiB  
Article
An Individual Tree Detection and Segmentation Method from TLS and MLS Point Clouds Based on Improved Seed Points
by Qiuji Chen, Hao Luo, Yan Cheng, Mimi Xie and Dandan Nan
Forests 2024, 15(7), 1083; https://doi.org/10.3390/f15071083 - 22 Jun 2024
Viewed by 403
Abstract
Individual Tree Detection and Segmentation (ITDS) is a key step in accurately extracting forest structural parameters from LiDAR (Light Detection and Ranging) data. However, most ITDS algorithms face challenges with over-segmentation, under-segmentation, and the omission of small trees in high-density forests. In this study, we developed a bottom–up framework for ITDS based on seed points. The proposed method is based on density-based spatial clustering of applications with noise (DBSCAN) to initially detect the trunks and filter the clusters by a set threshold. Then, the K-Nearest Neighbor (KNN) algorithm is used to reclassify the non-core clustered point cloud after threshold filtering. Furthermore, the Random Sample Consensus (RANSAC) cylinder fitting algorithm is used to correct the trunk detection results. Finally, we calculate the centroid of the trunk point clouds as seed points to achieve individual tree segmentation (ITS). In this paper, we use terrestrial laser scanning (TLS) data from natural forests in Germany and mobile laser scanning (MLS) data from planted forests in China to explore the effects of seed points on the accuracy of ITS methods; we then evaluate the efficiency of the method from three aspects: trunk detection, overall segmentation and small tree segmentation. We show the following: (1) the proposed method addresses the issues of missing segmentation and misrecognition of DBSCAN in trunk detection. Compared to using DBSCAN directly, recall (r), precision (p), and F-score (F) increased by 6.0%, 6.5%, and 0.07, respectively; (2) seed points significantly improved the accuracy of ITS methods; (3) the proposed ITDS framework achieved overall r, p, and F of 95.2%, 97.4%, and 0.96, respectively. This work demonstrates excellent accuracy in high-density forests and is able to accurately segment small trees under tall trees.
(This article belongs to the Special Issue Panoptic Segmentation of Tree Scenes from Mobile LiDAR Data)
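The KNN reclassification step mentioned above, assigning leftover points to the majority class of their nearest labeled neighbors, can be sketched as follows (illustrative, with 2-D points standing in for point-cloud coordinates; the paper works in 3-D):

```python
from collections import Counter
from math import dist

def knn_reclassify(labeled, unlabeled, k=3):
    """Give each unlabeled point the majority label among its k nearest
    labeled neighbors (Euclidean distance).

    labeled: list of (point, label) pairs; unlabeled: list of points.
    Returns one label per unlabeled point.
    """
    out = []
    for p in unlabeled:
        nearest = sorted(labeled, key=lambda pl: dist(p, pl[0]))[:k]
        out.append(Counter(lbl for _, lbl in nearest).most_common(1)[0][0])
    return out
```

A point near one trunk cluster inherits that cluster's label, which is how non-core points filtered out by the DBSCAN threshold can be folded back into the detected trunks.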

23 pages, 801 KiB  
Article
Data-Driven Occupancy Profile Identification and Application to the Ventilation Schedule in a School Building
by Kristina Vassiljeva, Margarita Matson, Andrea Ferrantelli, Eduard Petlenkov, Martin Thalfeldt and Juri Belikov
Energies 2024, 17(13), 3080; https://doi.org/10.3390/en17133080 - 21 Jun 2024
Viewed by 410
Abstract
Facing the current sustainability challenges requires reduction in building stock energy usage towards achieving the European Green Deal targets. This can be accomplished by adopting techniques such as fault detection and diagnosis and efficiency optimization. Taking an Estonian school as a case study, an occupancy-based algorithm for scheduling ventilation operations in buildings is here developed starting only from energy use data. The aim is optimizing the system’s operation according to occupancy profiles while maintaining a comfortable indoor climate. By relying only on electricity meters without using carbon dioxide or occupancy sensors, we use the historical data of a school to develop a DBSCAN-based clustering algorithm that generates consumption profiles. A novel occupancy estimation algorithm, based on threshold and time-series methods, then creates 12 occupancy schedules that are either based on classical detection with an on-off method or on occupancy estimation for demand-controlled ventilation. We find that the latter replaces the 60% capacity of current on-off schedules by 30% or even 0%, with energy savings ranging from 3.5% to 66.4%. The corresponding costs are reduced from 18.1% up to 62.6%, while still complying with current national regulations for indoor air quality. Remarkably, our method can immediately be extended to other countries, as it relies only on occupancy schedules that ignore weather and other location-specific factors.
(This article belongs to the Section G: Energy and Buildings)
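The threshold-based occupancy estimation described above can be reduced to a toy sketch: estimate a night-time baseline load from electricity readings alone and mark an hour as occupied when its load clearly exceeds that baseline. The night-hour window and margin below are illustrative assumptions, not the paper's tuned values:

```python
def occupancy_schedule(hourly_kw, night_hours=(0, 1, 2, 3, 4), margin=1.25):
    """Binary occupancy flag per hour, estimated from electricity use
    alone: baseline = mean night-time load; an hour counts as occupied
    when its load exceeds baseline * margin."""
    baseline = sum(hourly_kw[h] for h in night_hours) / len(night_hours)
    return [int(kw > baseline * margin) for kw in hourly_kw]
```

For a school-day profile (low load overnight, high load during lessons) this yields an on-off schedule that a demand-controlled ventilation system could follow.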

22 pages, 49542 KiB  
Article
A Robust Target Detection Algorithm Based on the Fusion of Frequency-Modulated Continuous Wave Radar and a Monocular Camera
by Yanqiu Yang, Xianpeng Wang, Xiaoqin Wu, Xiang Lan, Ting Su and Yuehao Guo
Remote Sens. 2024, 16(12), 2225; https://doi.org/10.3390/rs16122225 - 19 Jun 2024
Viewed by 343
Abstract
Decision-level information fusion methods using radar and vision usually suffer from low target matching success rates and imprecise multi-target detection accuracy. Therefore, a robust target detection algorithm based on the fusion of frequency-modulated continuous wave (FMCW) radar and a monocular camera is proposed to address these issues in this paper. Firstly, a lane detection algorithm is used to process the image to obtain lane information. Then, two-dimensional fast Fourier transform (2D-FFT), constant false alarm rate (CFAR), and density-based spatial clustering of applications with noise (DBSCAN) are used to process the radar data. Furthermore, the YOLOv5 algorithm is used to process the image. In addition, the lane lines are utilized to filter out the interference targets from outside lanes. Finally, multi-sensor information fusion is performed for targets in the same lane. Experiments show that the balanced score of the proposed algorithm can reach 0.98, which indicates that it has low false and missed detections. Additionally, the balanced score is almost unchanged in different environments, proving that the algorithm is robust.
(This article belongs to the Special Issue Remote Sensing: 15th Anniversary)

22 pages, 6892 KiB  
Article
Research on Clustering-Based Fault Diagnosis during ROV Hovering Control
by Jung-Hyeun Park, Hyunjoon Cho, Sang-Min Gil, Ki-Beom Choo, Myungjun Kim, Jiafeng Huang, Dongwook Jung, ChiUng Yun and Hyeung-Sik Choi
Appl. Sci. 2024, 14(12), 5235; https://doi.org/10.3390/app14125235 - 17 Jun 2024
Viewed by 365
Abstract
The objective of this study was to perform fault diagnosis (FD) specific to various faults that can occur in the thrusters of remotely operated vehicles (ROVs) during hovering control. Underwater thrusters are predominantly utilized as propulsion systems in the majority of ROVs and are essential components for implementing motions such as trajectory tracking and hovering. Faults in the underwater thrusters can limit the operational capabilities of ROVs, leading to permanent damage. Therefore, this study focused on FD for faults frequently caused by external factors such as entanglement with floating debris and propeller breakage. For diagnosing faults, a data-based technique that identifies patterns according to data characteristics was utilized. To imitate the fault situations, data for normal, breakage, and entangled conditions were acquired, and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) was employed to differentiate between these fault conditions. The proposed methodology was validated by configuring an ROV and conducting experiments in an engineering water tank to verify the FD performance.

37 pages, 29588 KiB  
Article
Pixel-MPS: Stochastic Embedding and Density-Based Clustering of Image Patterns for Pixel-Based Multiple-Point Geostatistical Simulation
by Adel Asadi and Snehamoy Chatterjee
Geosciences 2024, 14(6), 162; https://doi.org/10.3390/geosciences14060162 - 12 Jun 2024
Viewed by 546
Abstract
Multiple-point geostatistics (MPS) is an established tool for the uncertainty quantification of Earth systems modeling, particularly when dealing with the complexity and heterogeneity of geological data. This study presents a novel pixel-based MPS method for modeling spatial data using advanced machine-learning algorithms. Pixel-based multiple-point simulation implies the sequential modeling of individual points on the simulation grid, one at a time, by borrowing spatial information from the training image and honoring the conditioning data points. The developed methodology is based on the mapping of the training image patterns database using the t-Distributed Stochastic Neighbor Embedding (t-SNE) algorithm for dimensionality reduction, and the clustering of patterns by applying the Density-based Spatial Clustering of Applications with Noise (DBSCAN) algorithm, as an efficient unsupervised classification technique. For the automation, optimization, and input parameter tuning, multiple stages are implemented, including entropy-based determination of the template size and a k-nearest neighbors search for clustering parameter selection, to ensure the proposed method does not require the user’s interference. The proposed model is validated using synthetic two- and three-dimensional datasets, both for conditional and unconditional simulations, and runtime information is provided. Finally, the method is applied to a case study gold mine for stochastic orebody modeling. To demonstrate the computational efficiency and accuracy of the proposed method, a two-dimensional training image with 101 by 101 pixels is simulated for 100 conditional realizations in 453 s (~4.5 s per realization) using only 361 hard data points (~3.5% of the simulation grid), and the resulting average simulation has a good visual match and only an 11.8% pixel-wise mismatch with the training image.
