
Search Results (7,517)

Search Parameters:
Keywords = automated processing

20 pages, 812 KiB  
Article
End-to-End Framework for Identifying Vulnerabilities of Operational Technology Protocols and Their Implementations in Industrial IoT
by Matthew Boeding, Michael Hempel and Hamid Sharif
Future Internet 2025, 17(1), 34; https://doi.org/10.3390/fi17010034 (registering DOI) - 14 Jan 2025
Abstract
The convergence of IT and OT networks has gained significant attention in recent years, facilitated by the increase in distributed computing capabilities, the widespread deployment of Internet of Things devices, and the adoption of the Industrial Internet of Things. This convergence has led to a drastic increase in external access capabilities to previously air-gapped industrial systems for process control and monitoring. To meet the need for remote access to system information, protocols designed for the OT space were extended to allow IT networked communications. However, OT protocols often lack the rigorous cybersecurity capabilities that have become a critical characteristic of IT protocols. Furthermore, OT protocol implementations on individual devices can vary in performance, requiring a comprehensive evaluation of a device's reliability and capabilities before installation into a critical infrastructure production network. In this paper, the authors define a framework for identifying vulnerabilities within these protocols and their on-device implementations, utilizing formal modeling, hardware-in-the-loop-driven network emulation, and fully virtual network scenario simulation. Initially, protocol specifications are modeled to identify any vulnerable states within the protocol, leveraging the Construction and Analysis of Distributed Processes (CADP) software (version 2022-d “Kista”, developed by Inria, the French Institute for Research in Computer Science and Automation). 
Device characteristics are then extracted through automated real-time network emulation tests built on the OMNET++ framework, and all measured device characteristics are then used as a virtual device representation for network simulation tests within the OMNET++ software (version 6.0.1, a public-source, open-architecture framework initially developed by OpenSim Limited in Budapest, Hungary) to verify the presence of any potential vulnerabilities identified in the formal modeling stage. With this framework, the authors have thus defined an end-to-end process to identify and verify the presence and impact of potential vulnerabilities within a protocol, as shown by the presented results. Furthermore, the framework can test protocol compliance, performance, and security in a controlled environment, addressing cybersecurity concerns before devices are deployed in live production networks. Full article
Show Figures

Figure 1
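The formal-modeling stage described in this abstract searches a protocol's state space for vulnerable states. As a toy illustration of that idea (the protocol, states, and pre-authentication flaw below are invented for illustration; the paper itself uses the CADP toolbox), a reachability check might look like:

```python
from collections import deque

# Toy protocol state machine: (state, message) -> next state.
# States, messages, and the "write before auth" flaw are hypothetical.
TRANSITIONS = {
    ("IDLE", "connect"): "CONNECTED",
    ("CONNECTED", "auth"): "AUTHENTICATED",
    ("CONNECTED", "write"): "WRITE_ACCEPTED",   # flaw: write allowed pre-auth
    ("AUTHENTICATED", "write"): "WRITE_ACCEPTED",
}

def reachable_states(start="IDLE"):
    """Breadth-first exploration of every state reachable from `start`."""
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        for (src, _msg), dst in TRANSITIONS.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen

def unauthenticated_write_reachable():
    """Check whether WRITE_ACCEPTED is reachable without ever authenticating."""
    # Explore the product space of (protocol state, authenticated-yet? flag).
    seen = {("IDLE", False)}
    queue = deque(seen)
    while queue:
        state, authed = queue.popleft()
        for (src, _msg), dst in TRANSITIONS.items():
            if src != state:
                continue
            nxt = (dst, authed or dst == "AUTHENTICATED")
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return ("WRITE_ACCEPTED", False) in seen
```

On this toy machine the check flags the vulnerable state, which is the kind of finding the paper then verifies against real device behavior in emulation and simulation.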

19 pages, 3272 KiB  
Article
A Systematic Method Combining Rotated Convolution and State Space Augmented Transformer for Digitizing and Classifying Paper ECGs
by Xiang Wang and Jie Yang
Symmetry 2025, 17(1), 120; https://doi.org/10.3390/sym17010120 - 14 Jan 2025
Viewed by 112
Abstract
Billions of paper Electrocardiograms (ECGs) are recorded annually worldwide, particularly in the Global South. Manual review of this massive dataset is time-consuming and inefficient. Accurate digital reconstruction of these records is essential for efficient cardiac disease diagnosis. This paper proposes a systematic framework for digitizing paper ECGs with 12 symmetrically distributed leads and identifying abnormal samples. This method consists of three main components. First, we introduce an adaptive rotated convolution network to detect the positions of lead waveforms. By exploiting the symmetric distribution of 12 leads, a novel loss is proposed to improve the detection model’s performance. Second, image processing techniques, including denoising and connected component analysis, are employed to digitize ECG waveforms. Finally, we propose a transformer-based classification method combined with a state space model. Our process is evaluated on a large synthetic dataset, including ECG images characterized by rotations, noise, and creases. The results demonstrate that the proposed detection method can effectively reconstruct paper ECGs, achieving an 11% improvement in SNR compared to the baseline. Moreover, our classification model exhibits slightly higher performance than other counterparts. The proposed approach offers a promising solution for the automated analysis of paper ECGs, supporting clinical decision-making. Full article
Show Figures

Figure 1
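Connected component analysis, one of the digitization steps named in the abstract above, can be sketched minimally in pure Python on a binary grid (real pipelines operate on denoised scan images; this stand-in only shows the labeling idea):

```python
def connected_components(grid):
    """Label 4-connected foreground components in a binary grid.

    Returns a dict mapping component label -> list of (row, col) pixels.
    A minimal stand-in for the connected-component step used when tracing
    waveform pixels in a binarized ECG scan.
    """
    rows, cols = len(grid), len(grid[0])
    components, seen, label = {}, set(), 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and (r, c) not in seen:
                label += 1
                stack, pixels = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                components[label] = pixels
    return components
```

In a digitization pipeline, small components are typically discarded as noise and the largest per lead is kept as the waveform trace.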

13 pages, 1625 KiB  
Article
MetaboLabPy—An Open-Source Software Package for Metabolomics NMR Data Processing and Metabolic Tracer Data Analysis
by Christian Ludwig
Metabolites 2025, 15(1), 48; https://doi.org/10.3390/metabo15010048 - 14 Jan 2025
Viewed by 134
Abstract
Introduction: NMR spectroscopy is a powerful technique for studying metabolism, either in metabolomics settings or through tracing with stable isotope-enriched metabolic precursors. MetaboLabPy (version 0.9.66) is a free and open-source software package used to process 1D- and 2D-NMR spectra. The software implements a complete workflow for NMR data pre-processing to prepare a series of 1D-NMR spectra for multivariate statistical data analysis. This includes a choice of algorithms for automated phase correction, segmental alignment, spectral scaling, variance stabilisation, export to various software platforms, and analysis of metabolic tracing data. The software has an integrated help system with tutorials that demonstrate standard workflows and explain the capabilities of MetaboLabPy. Materials and Methods: The software is implemented in Python and uses numerous Python toolboxes, such as numpy, scipy, and pandas. The software is organised into three packages: metabolabpy, qtmetabolabpy, and metabolabpytools. The metabolabpy package contains classes to handle NMR data and all the numerical routines necessary to process and pre-process 1D NMR data and perform multiplet analysis on 2D 1H,13C-HSQC NMR data. The qtmetabolabpy package contains routines related to the graphical user interface; PySide6 is used to produce a modern and user-friendly interface. The metabolabpytools package contains routines that are not specific to NMR data handling, for example routines to derive isotopomer distributions from the combination of NMR multiplet and GC-MS data. A deep-learning approach for the latter is currently under development. Results: MetaboLabPy is available via the Python Package Index or via GitHub. Full article
(This article belongs to the Special Issue Open-Source Software in Metabolomics)
Show Figures

Figure 1
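Among the pre-processing steps listed above is automated phase correction. A toy zero-order version can be sketched as follows; the grid-search criterion below (maximise the summed real part) is a crude stand-in for illustration, not MetaboLabPy's actual algorithm:

```python
import cmath

def phase_correct(spectrum, phi0):
    """Zero-order phase correction: rotate every complex spectral point
    by exp(-i * phi0)."""
    rot = cmath.exp(-1j * phi0)
    return [pt * rot for pt in spectrum]

def autophase_zero_order(spectrum, steps=360):
    """Grid-search the phi0 that maximises the summed real part of the
    spectrum -- a crude automated criterion used here only to illustrate
    the idea of automated phasing."""
    best_phi, best_score = 0.0, float("-inf")
    for k in range(steps):
        phi = 2 * cmath.pi * k / steps
        score = sum(pt.real for pt in phase_correct(spectrum, phi))
        if score > best_score:
            best_phi, best_score = phi, score
    return best_phi
```

Applied to a spectrum rotated by a constant phase error, the search recovers that phase to within the grid resolution and the corrected points become almost purely real.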

30 pages, 4049 KiB  
Article
A Contactless Multi-Modal Sensing Approach for Material Assessment and Recovery in Building Deconstruction
by Sophia Cabral, Mikita Klimenka, Fopefoluwa Bademosi, Damon Lau, Stefanie Pender, Lorenzo Villaggi, James Stoddart, James Donnelly, Peter Storey and David Benjamin
Sustainability 2025, 17(2), 585; https://doi.org/10.3390/su17020585 - 14 Jan 2025
Viewed by 235
Abstract
As material scarcity and environmental concerns grow, material reuse and waste reduction are gaining attention based on their potential to reduce carbon emissions and promote net-zero buildings. This study develops an innovative approach that combines multi-modal sensing technologies with machine learning to enable contactless assessment of in situ building materials for reuse potential. By integrating thermal imaging, red, green, and blue (RGB) cameras, as well as depth sensors, the system analyzes material conditions and reveals hidden geometries within existing buildings. This approach enhances material understanding by analyzing existing materials, including their compositions, histories, and assemblies. A case study on drywall deconstruction demonstrates that these technologies can effectively guide the deconstruction process, potentially reducing material costs and carbon emissions significantly. The findings highlight feasible scenarios for drywall reuse and offer insights into improving existing deconstruction techniques through automated feedback and visualization of cut lines and fastener positions. This research indicates that contactless assessment and automated deconstruction methods are technically viable, economically advantageous, and environmentally beneficial. Serving as an initial step toward novel methods to view and classify existing building materials, this study lays a foundation for future research, promoting sustainable construction practices that optimize material reuse and reduce negative environmental impact. Full article
(This article belongs to the Special Issue A Circular Economy for a Cleaner Built Environment)
Show Figures

Graphical abstract

28 pages, 10488 KiB  
Article
Design and Testing of a Whole-Row Top-Loosening Stem-Clamping Seedling Extraction Device for Hole Tray Seedlings
by Zehui Peng, Fazhan Yang, Yuhuan Li, Xiang Li, Baogang Li and Guoli Xu
Agriculture 2025, 15(2), 165; https://doi.org/10.3390/agriculture15020165 - 13 Jan 2025
Viewed by 305
Abstract
A combined seedling extraction device was developed that operates by first top loosening and then clamping the stem in order to solve the current issues with automated transplanting technology, such as low seedling extraction efficiency and a high rate of substrate loss. The pepper plug tray seedlings were selected as the experimental subjects for testing the mechanical properties of the stems. The tensile and compressive mechanical properties of the stems were obtained, and the kinematic model of the seedling spacing process and the mechanical model of the seedling clamping process were established. Key parameters of the seedling extraction device were analyzed and calculated, and an automated seedling extraction system was constructed. Using substrate moisture content, seedling age, and extraction frequency as experimental factors, orthogonal tests were conducted. Through variance analysis and 3D response surface analysis, the optimal rounded parameter values were determined: 48% substrate moisture content, 38-day-old seedlings, and a seedling extraction frequency of 60 plants/min. Under these conditions, the seedling extraction success rate was 94.44%, the substrate loss rate was 6.07%, and the seedling damage rate was 4.17%, meeting the requirements for automated seedling extraction. Full article
(This article belongs to the Section Agricultural Technology)
Show Figures

Figure 1

28 pages, 618 KiB  
Article
CodeContrast: A Contrastive Learning Approach for Generating Coherent Programming Exercises
by Nicolás Torres
Educ. Sci. 2025, 15(1), 80; https://doi.org/10.3390/educsci15010080 - 13 Jan 2025
Viewed by 362
Abstract
Generating high-quality programming exercises with well-aligned problem descriptions, test cases, and code solutions is crucial for computer science education. However, current methods often lack coherence among these components, reducing their educational value. We present CodeContrast, a novel generative model that uses contrastive learning to map programming problems, test cases, and solutions into a shared feature space. By minimizing the distance between matched components and maximizing it for non-matched ones, CodeContrast learns the intricate relationships necessary to generate coherent programming exercises. Our model architecture includes three encoder networks for problem descriptions, test cases, and solutions. During training, CodeContrast processes positive triplets (matching problem, test case, solution) and negative triplets (non-matching combinations) and uses a contrastive loss to position positive triplets close in the feature space while separating negative ones. Comprehensive evaluations of CodeContrast—through automatic metrics, expert ratings, and student studies—demonstrate its effectiveness. Results show high code correctness (92.3% of test cases passed), strong problem–solution alignment (BLEU score up to 0.826), and robust test case coverage (85.7% statement coverage). Expert feedback and student performance further support the pedagogical value of these generated exercises, with students performing comparably to those using manually curated content. CodeContrast advances the automated generation of high-quality programming exercises, capturing relationships among programming components to enhance educational content and improve the learning experience for students and instructors. Full article
Show Figures

Figure 1
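The contrastive objective described above — pull matched components together in the feature space, push non-matched ones apart — is commonly written as a margin-based loss over embedding distances. A generic sketch (not the paper's exact loss or encoder networks):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def contrastive_loss(anchor, positive, negative, margin=1.0):
    """Triplet-style contrastive objective: the loss is zero once the
    matched (positive) embedding is at least `margin` closer to the
    anchor than the non-matched (negative) embedding."""
    return max(0.0,
               euclidean(anchor, positive)
               - euclidean(anchor, negative) + margin)
```

Minimising this over many (problem, test case, solution) triplets is what positions coherent exercise components near each other in the shared space.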

28 pages, 36223 KiB  
Article
Victim Verification with the Use of Deep Metric Learning in DVI System Supported by Mobile Application
by Zbigniew Piotrowski, Marta Bistroń, Gabriel Jekateryńczuk, Paweł Kaczmarek and Dymitr Pietrow
Appl. Sci. 2025, 15(2), 727; https://doi.org/10.3390/app15020727 - 13 Jan 2025
Viewed by 249
Abstract
This paper presents the design of a system to support the identification of victims of disasters and terrorist attacks. The system, called ID Victim (IDV), is a web application using a mobile app and data server. The DVI (Disaster Victim Identification) procedure, an international standard developed by Interpol, is used. The purpose of the IDV system is to facilitate and expedite the process of determining victims’ identities. A neural identification module was developed and trained on approximately 13,000 images from the LFW dataset and fine-tuned using 400 simulated PostMortem (PM) and AnteMortem (AM) images. Postmortem data include photographs of victims while antemortem data consist of pre-disaster photos of potential victims. The module generates a hypothesis, linking PM to AM, which is then verified. The module achieved test identification accuracy of up to 60% for 25 sample PM and AM sets. The system partially automates photo comparisons by DVI teams, improving efficiency, reducing identification time, and limiting the exposure of operators to graphic images. Implementing the system as a mobile application accelerates the process by enabling direct data entry during victim examinations on-site. Full article
(This article belongs to the Special Issue Advanced Pattern Recognition & Computer Vision)
Show Figures

Figure 1

19 pages, 4254 KiB  
Article
Water Treatment Technologies: Development of a Test Bench for Optimizing Flocculation-Thickening Processes in Laboratory Applications
by Amine Ennawaoui, Mohammed Badr Rachidi, Nasr Guennouni, Ilyass Mousaid, Mohamed Amine Daoud, Hicham Mastouri, Chouaib Ennawaoui, Younes Chhiti and Oussama Laayati
Processes 2025, 13(1), 198; https://doi.org/10.3390/pr13010198 - 12 Jan 2025
Viewed by 663
Abstract
This study introduces an automated test bench designed to optimize flocculation-thickening processes in the wastewater treatment industry. Addressing current challenges in operational efficiency and cost reduction, the test bench employs Model-Based Systems Engineering (MBSE) principles, leveraging SysML modeling within the CESAM framework. By integrating advanced technologies, including PLC programming and a closed-loop control system, this bench provides precise and efficient testing under varying operational conditions. Economic implications are explored, demonstrating the cost-effectiveness of optimized flocculation processes, which reduce chemical use and operational expenditures while enhancing water clarity and sludge management. The system’s 3D modeling enables detailed simulations, aiding in both research and pedagogical applications. This platform highlights the potential of MBSE in creating scalable, robust solutions that contribute to sustainable water management. Full article
(This article belongs to the Section Environmental and Green Processes)
Show Figures

Figure 1

17 pages, 6994 KiB  
Article
MicroVi: A Cost-Effective Microscopy Solution for Yeast Cell Detection and Count in Wine Value Chain
by Ismael Benito-Altamirano, Sergio Moreno, David M. Vaz-Romero, Anna Puig-Pujol, Gemma Roca-Domènech, Joan Canals, Anna Vilà, Joan Daniel Prades and Ángel Diéguez
Biosensors 2025, 15(1), 40; https://doi.org/10.3390/bios15010040 - 12 Jan 2025
Viewed by 503
Abstract
In recent years, the wine industry has been researching how to improve wine quality along the production value chain. In this scenario, we present a new tool, MicroVi, a cost-effective chip-sized microscopy solution to detect and count yeast cells in wine samples. We demonstrate that this novel microscopy setup can measure the same type of samples as an optical microscopy system, but with smaller equipment and an automated cell-count configuration. The technology relies on state-of-the-art computer vision pipelines to post-process the images and count the cells. A typical pipeline consists of normalization, feature extraction (i.e., SIFT), image composition (to increase both resolution and scanning area), holographic reconstruction, and particle counting (i.e., the Hough transform). MicroVi achieved a 2.19 µm resolution by properly resolving the G7.6 features of the USAF Resolving Power Test Target 1951. Additionally, we achieved a successful calibration of cell counts for Saccharomyces cerevisiae. We compared our direct results with our current optical setup, achieving a linear calibration for measurements ranging from 0.5 to 50 million cells per milliliter. Furthermore, other yeast cells, such as Brettanomyces bruxellensis, and bacteria, like Lactobacillus plantarum, were qualitatively resolved with our MicroVi microscope, confirming the system's reliability for consistent microbial assessment. Full article
(This article belongs to the Special Issue Trends in Optical Biosensing and Bioimaging)
Show Figures

Figure 1
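The quoted 2.19 µm figure is consistent with the standard USAF-1951 target geometry, in which group G, element E resolves 2**(G + (E - 1)/6) line pairs per millimeter. A quick check using that textbook formula (this is not code from the paper):

```python
def usaf_resolution_um(group, element):
    """Line width (in micrometers) of a USAF-1951 target element.

    Resolution in line pairs per mm is 2**(group + (element - 1) / 6);
    one line is half a line pair, so its width is 500 / lp_per_mm um.
    """
    lp_per_mm = 2 ** (group + (element - 1) / 6)
    return 500.0 / lp_per_mm
```

Evaluating `usaf_resolution_um(7, 6)` gives roughly 2.19, matching the reported resolution for the G7.6 features.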

18 pages, 1257 KiB  
Article
Multi-Person Localization Based on a Thermopile Array Sensor with Machine Learning and a Generative Data Model
by Stefan Klir, Julian Lerch, Simon Benkner and Tran Quoc Khanh
Sensors 2025, 25(2), 419; https://doi.org/10.3390/s25020419 - 12 Jan 2025
Viewed by 295
Abstract
Thermopile sensor arrays provide a sufficient counterbalance between person detection and localization while preserving privacy through low resolution. The latter is especially important in the context of smart building automation applications. Current research has shown that two machine learning-based algorithms are particularly prominent for general object detection: You Only Look Once (YOLOv5) and the Detection Transformer (DETR). Over the course of this paper, both algorithms are adapted to localize people in 32 × 32-pixel thermal array images. The loss in precision caused by the sparse amount of labeled data was counteracted with a novel image generator (IIG), which creates synthetic thermal frames from the available labeled data. Multiple robustness tests were performed during the evaluation process to determine the overall usability of both algorithms as well as the benefit of the image generator. Both algorithms provide a high mean average precision (mAP) exceeding 98%. They also prove robust against disturbances such as warm air streams, solar radiation, replacement of the sensor with one of the same type, new persons, cold objects, movement along the image frame border, and people standing still. However, precision decreases for persons wearing thick layers of clothing, such as winter clothing, or in scenarios where the number of persons present exceeds the number the algorithm was trained on. In summary, both algorithms are suitable for detection and localization purposes, although YOLOv5m has the advantage in real-time image processing capabilities, accompanied by a smaller model size and slightly higher precision. Full article
Show Figures

Figure 1
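Detection scores such as the >98% mAP reported above are built on intersection-over-union (IoU) matching between predicted and ground-truth boxes. A minimal IoU helper (a generic definition, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    IoU is the overlap measure used to decide whether a predicted box
    counts as a true positive when computing mAP.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (clamped to zero area if boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.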

26 pages, 2636 KiB  
Article
A Complete Coverage Path Planning Algorithm for Lawn Mowing Robots Based on Deep Reinforcement Learning
by Ying Chen, Zhe-Ming Lu, Jia-Lin Cui, Hao Luo and Yang-Ming Zheng
Sensors 2025, 25(2), 416; https://doi.org/10.3390/s25020416 - 12 Jan 2025
Viewed by 248
Abstract
This paper introduces Re-DQN, a deep reinforcement learning-based algorithm for comprehensive coverage path planning in lawn mowing robots. In the fields of smart homes and agricultural automation, lawn mowing robots are rapidly gaining popularity to reduce the demand for manual labor. The algorithm introduces a new exploration mechanism, combined with an intrinsic reward function based on state novelty and a dynamic input structure, effectively enhancing the robot’s adaptability and path optimization capabilities in dynamic environments. In particular, Re-DQN improves the stability of the training process through a dynamic incentive layer and achieves more comprehensive area coverage and shorter planning times in high-dimensional continuous state spaces. Simulation results show that Re-DQN outperforms the other algorithms in terms of performance, convergence speed, and stability, making it a robust solution for comprehensive coverage path planning. Future work will focus on testing and optimizing Re-DQN in more complex environments and exploring its application in multi-robot systems to enhance collaboration and communication. Full article
(This article belongs to the Topic Advances in Mobile Robotics Navigation, 2nd Volume)
Show Figures

Figure 1
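The "intrinsic reward function based on state novelty" mentioned above is often realized as a count-based bonus that decays as a state is revisited. A minimal sketch of one such bonus (illustrative only; the abstract does not give Re-DQN's exact formulation):

```python
import math

def intrinsic_reward(state, visit_counts, beta=1.0):
    """Count-based novelty bonus: beta / sqrt(n(state)), where n(state)
    is how many times the state has been visited so far. New states earn
    the full bonus; repeated visits earn progressively less, encouraging
    the agent to keep covering new ground."""
    visit_counts[state] = visit_counts.get(state, 0) + 1
    return beta / math.sqrt(visit_counts[state])
```

In a coverage task this bonus is added to the environment reward, so cells the mower has not yet visited look more attractive to the learned policy.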

26 pages, 3092 KiB  
Article
Study on the Adhesion Properties of Raw Coal and Adhesive Clogging Characteristics of Underground Coal Bunkers
by Chongyang Jiang, Lianguo Wang, Zhiyuan Pan, Jiaxing Guo and Shuai Wang
Appl. Sci. 2025, 15(2), 684; https://doi.org/10.3390/app15020684 - 12 Jan 2025
Viewed by 250
Abstract
The automation and continuous operation of coal production are fundamental to the construction of high-yielding and efficient mines. Underground coal bunkers, serving as the pivotal link between various production and transportation segments, are vital for the seamless operation of mines. Nonetheless, the adhesive properties of raw coal can lead to increasingly severe issues, such as the adhesive clogging of coal bunkers. To address this issue, this paper first employs a self-designed raw coal shear testing apparatus to conduct experiments under varying conditions of shear interface, raw coal moisture content, and compaction force, obtaining the adhesion behavior characteristics and adhesion parameter variation patterns of raw coal at coal-coal and coal-wall interfaces under these influencing factors. Subsequently, leveraging the adhesion property parameters of raw coal and the engineering conditions of the 1011 roadway coal bunker in Taoyuan Coal Mine II, a numerical model of coal bunker discharge using irregular particles was developed with the PFC2D numerical simulation software. From this model, we obtained the influence of factors such as the coal bunker convergence angle, coal storage height, and coal moisture content on the coal particle flow pattern, bunker wall pressure, and the distribution characteristics of adhesive clogging during discharge, thereby revealing the mechanisms underlying the adhesive clogging phenomenon. The findings offer significant insights for optimizing solutions to adhesive clogging in underground coal bunkers and ensuring their safe and efficient operation. Full article
19 pages, 6391 KiB  
Article
Automated Tree Detection Using Image Processing and Multisource Data
by Grzegorz Dziczkowski, Barbara Probierz, Przemysław Juszczuk, Piotr Stefański, Tomasz Jach, Szymon Głowania and Jan Kozak
Appl. Sci. 2025, 15(2), 667; https://doi.org/10.3390/app15020667 - 11 Jan 2025
Viewed by 305
Abstract
This paper presents a method for the automatic detection and assessment of trees and tree-covered areas in Katowice, the capital of the Upper Silesian Industrial Region in southern Poland. The proposed approach utilizes satellite imagery and height maps, employing image-processing techniques and integrating data from various sources. We developed a data pipeline for gathering and pre-processing information, including vegetation data and numerical land-cover models, which were used to derive a new method for tree detection. Our findings confirm that automatic tree detection can significantly enhance the efficiency of urban tree management processes, contributing to the creation of greener and more resident-friendly cities. Full article
(This article belongs to the Special Issue Urban Geospatial Analytics Based on Big Data)
Show Figures

Figure 1
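Vegetation indices computed from satellite bands are the usual starting point for separating trees from other land cover; the best known is NDVI, (NIR - Red) / (NIR + Red). A minimal sketch (the 0.4 threshold is an illustrative assumption, not a value from the paper):

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index for one pixel:
    (NIR - Red) / (NIR + Red). Values near +1 suggest dense vegetation;
    `eps` guards against division by zero on dark pixels."""
    return (nir - red) / (nir + red + eps)

def vegetation_mask(nir_band, red_band, threshold=0.4):
    """Boolean mask of likely vegetation pixels from NIR and red bands.

    The threshold is an illustrative choice; a real pipeline would tune
    it and combine the mask with height data to isolate trees."""
    return [[ndvi(n, r) > threshold for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
```

Combining such a mask with a height map, as the paper does with numerical land-cover models, lets low vegetation be filtered out so that only tree canopies remain.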

28 pages, 15369 KiB  
Article
Improvement of the Reliability of Urban Park Location Results Through the Use of Fuzzy Logic Theory
by Beata Calka, Katarzyna Siok, Marta Szostak, Elzbieta Bielecka, Tomasz Kogut and Mohamed Zhran
Sustainability 2025, 17(2), 521; https://doi.org/10.3390/su17020521 - 10 Jan 2025
Viewed by 669
Abstract
Green areas, thanks to their relatively unified natural systems, play several key roles. They contribute to the proper functioning and sustainable development of cities and also determine the quality of life for their inhabitants. As a result, urban planners and policy-makers frequently aim to maximize the benefits of green spaces by creating various programs and strategies focused on green infrastructure development, such as the Green City initiative. One of the objectives of this program is to create new urban parks. This research focuses on developing a new method for selecting sites for urban parks, taking into account factors related to the environment, accessibility, and human activity. The research was carried out for the area of Ciechanów city. To make the city areas more attractive to residents, the authorities aim to increase green spaces and also revitalize the existing greenery. The combination of the Fuzzy AHP method and fuzzy set theory (selecting appropriate fuzzy membership for each factor), along with the use of large and diverse geospatial datasets, minimized subjectivity in prioritizing criteria and allowed for a fully automated analysis process. Among the factors analyzed, land use emerged as the most significant, followed by the normalized difference vegetation index (NDVI) and proximity to surface water. The results indicated that 16% of the area was deemed highly suitable for urban park development, while 15% was considered unsuitable. One-at-a-time (OAT) sensitivity analysis, based on changes in the weight of the land-use factor, revealed that a 75% reduction in weight resulted in a nearly 57.2% decrease in unsuitable areas, while a 75% increase in weight led to a 40% expansion of the most suitable locations. The potential park locations were compared with a heat map of urban activity in the city. 
The developed method contributes to the discourse on the transparency of location decisions and the validity of the criteria used, promoting sustainable urban development that provides residents with access to active recreation. Full article
Show Figures

Figure 1
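The combination of fuzzy memberships with AHP-derived weights described above amounts to scoring each candidate cell as a weighted sum of fuzzified factor values. A minimal sketch (the membership shape, factor names, and weights below are illustrative, not the paper's choices):

```python
def linear_membership(x, lo, hi):
    """Linear fuzzy membership: 0 below `lo`, 1 above `hi`, linear in
    between. One common membership shape; the paper selects an
    appropriate shape per factor."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def suitability(factors, weights):
    """Weighted overlay of fuzzified factor scores for one cell.

    `weights` would come from Fuzzy AHP and sum to 1; both the factor
    names and the weights here are hypothetical."""
    return sum(weights[name] * score for name, score in factors.items())
```

Cells whose combined score exceeds a chosen cut-off would then form the "highly suitable" class reported in the results.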

21 pages, 4087 KiB  
Article
Enhanced Bug Priority Prediction via Priority-Sensitive Long Short-Term Memory–Attention Mechanism
by Geunseok Yang, Jinfeng Ji and Jaehee Kim
Appl. Sci. 2025, 15(2), 633; https://doi.org/10.3390/app15020633 - 10 Jan 2025
Viewed by 272
Abstract
The rapid expansion of software applications has led to an increase in the frequency of bugs, which are typically reported through user-submitted bug reports. Developers prioritize these reports based on severity and project schedules. However, the manual process of assigning bug priorities is [...] Read more.
The rapid expansion of software applications has led to an increase in the frequency of bugs, which are typically reported through user-submitted bug reports. Developers prioritize these reports based on severity and project schedules. However, the manual process of assigning bug priorities is time-consuming and prone to inconsistencies. To address these limitations, this study presents a Priority-Sensitive LSTM–Attention mechanism for automating bug priority prediction. The proposed approach extracts features such as product and component details from bug repositories and preprocesses the data to ensure consistency. Priority-based feature selection is applied to align the input data with the task of bug prioritization. These features are processed through a Long Short-Term Memory (LSTM) network to capture sequential dependencies, and the outputs are further refined using an Attention mechanism to focus on the most relevant information for prediction. The effectiveness of the proposed model was evaluated using datasets from the Eclipse and Mozilla open-source projects. Compared to baseline models such as Naïve Bayes, Random Forest, Decision Tree, SVM, CNN, LSTM, and CNN-LSTM, the proposed model achieved a superior performance. It recorded an accuracy of 93.00% for Eclipse and 84.11% for Mozilla, representing improvements of 31.11% and 40.39%, respectively, over the baseline models. Statistical verification confirmed that these performance gains were significant. This study distinguishes itself by integrating priority-based feature selection with a hybrid LSTM–Attention architecture, which enhances prediction accuracy and robustness compared to existing methods. The results demonstrate the potential of this approach to streamline bug prioritization, improve project management efficiency, and assist developers in resolving high-priority issues. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
Show Figures

Figure 1
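The attention step described above — refining LSTM outputs by focusing on the most relevant positions — can be sketched as generic dot-product attention pooling (illustrative of the mechanism only, not the paper's architecture):

```python
import math

def attention_pool(hidden_states, query):
    """Dot-product attention over a sequence of hidden vectors.

    Scores each hidden state against the query, softmaxes the scores into
    weights, and returns the weighted sum (the pooled context vector)
    along with the weights themselves.
    """
    scores = [sum(q * h for q, h in zip(query, state))
              for state in hidden_states]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    pooled = [sum(w * state[i] for w, state in zip(weights, hidden_states))
              for i in range(dim)]
    return pooled, weights
```

The pooled vector, dominated by the highest-scoring positions, is what a downstream classifier would consume to predict the bug's priority.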
