Information, Volume 15, Issue 7 (July 2024) – 56 articles

Cover Story: FEINT is an automated framework that facilitates modular composition and customization of FPGA designs. It is architected as a “template” insertion tool, driven by a user-provided configuration script, that introduces dynamic design features as plugins at different stages of the FPGA design process to support rapid prototyping, composition-based design evolution, and system customization. For example, FEINT can help insert defensive monitoring, adversarial Trojans, and plugin-based functionality enhancements. FEINT is scalable, future-proof, and cross-platform, without dependence on vendor-specific file formats, ensuring compatibility across FPGA families and tool versions and integrability with commercial tools. FEINT’s effectiveness is demonstrated using several template/module scenarios from designer, defender, and attacker perspectives.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
17 pages, 1746 KiB  
Article
Examining the Roles, Sentiments, and Discourse of European Interest Groups in the Ukrainian War through X (Twitter)
by Aritz Gorostiza-Cerviño, Álvaro Serna-Ortega, Andrea Moreno-Cabanillas, Ana Almansa-Martínez and Antonio Castillo-Esparcia
Information 2024, 15(7), 422; https://doi.org/10.3390/info15070422 - 22 Jul 2024
Viewed by 825
Abstract
This research focuses on examining the responses of interest groups listed in the European Transparency Register to the ongoing Russia–Ukraine war. Its aim is to investigate the nuanced reactions of 2579 commercial and business associations and 2957 companies and groups to the recent conflict, as expressed through their X (Twitter) activities. Utilizing advanced text mining, NLP, and LDA techniques, this study conducts a comprehensive analysis encompassing language dynamics, thematic shifts, sentiment variations, and activity levels exhibited by these entities both before and after the outbreak of the war. The results obtained reflect a gradual decrease in negative emotions regarding the conflict over time. Likewise, multiple forms of outside lobbying are identified in the communication strategies of interest groups. All in all, this empirical inquiry into how interest groups adapt their messaging in response to complex geopolitical events holds the potential to provide invaluable insights into the multifaceted role of lobbying in shaping public policies. Full article
(This article belongs to the Special Issue Information Processing in Multimedia Applications)
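A minimal sketch of the topic-modelling step described above, using scikit-learn's LDA on a few illustrative tweets; the example texts, topic count, and vectorizer settings are assumptions for demonstration, not the authors' pipeline, which additionally covers sentiment and activity analysis.

# Illustrative LDA topic extraction over toy tweets (not the study's data or code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "We stand with Ukraine and support humanitarian aid",
    "Sanctions will raise energy prices for European industry",
    "Our association calls for peace negotiations and stability",
    "Supply chains are disrupted by the ongoing conflict",
]

# Bag-of-words features; a real pipeline would add language filtering and lemmatisation.
vectorizer = CountVectorizer(stop_words="english", max_features=1000)
X = vectorizer.fit_transform(tweets)

# Fit a small LDA model and print the top terms per topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"Topic {topic_idx}: {', '.join(top)}")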

15 pages, 473 KiB  
Article
Semi-Supervised Learning for Multi-View Data Classification and Visualization
by Najmeh Ziraki, Alireza Bosaghzadeh and Fadi Dornaika
Information 2024, 15(7), 421; https://doi.org/10.3390/info15070421 - 22 Jul 2024
Cited by 1 | Viewed by 986
Abstract
Data visualization has several advantages, such as representing vast amounts of data and visually demonstrating patterns within it. Manifold learning methods help us estimate lower-dimensional representations of data, thereby enabling more effective visualizations. In data analysis, relying on a single view can often lead to misleading conclusions due to its limited perspective. Hence, leveraging multiple views simultaneously and interactively can mitigate this risk and enhance performance by exploiting diverse information sources. Additionally, incorporating different views concurrently during the graph construction process using an interactive visualization approach has improved overall performance. In this paper, we introduce a novel algorithm for joint consistent graph construction and label estimation. Our method simultaneously constructs a unified graph and predicts the labels of unlabeled samples. Furthermore, the proposed approach estimates a projection matrix that enables the prediction of labels for unseen samples. Moreover, it incorporates the information in the label space to further enhance the accuracy. In addition, it merges the information in different views along with the labels to construct a consensus graph. Experimental results conducted on various image databases demonstrate the superiority of our fusion approach compared to using a single view or other fusion algorithms. This highlights the effectiveness of leveraging multiple views and simultaneously constructing a unified graph for improved performance in data classification and visualization tasks in semi-supervised contexts. Full article
(This article belongs to the Special Issue Interactive Visualizations: Design, Technologies and Applications)
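A minimal baseline sketch of graph-based semi-supervised classification in the same spirit, using scikit-learn's LabelSpreading on naively concatenated "views" of the digits data; this is an illustrative stand-in, not the paper's joint graph-construction and label-estimation algorithm.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)
view1, view2 = X[:, :32], X[:, 32:]        # pretend the pixel halves are two views
X_fused = np.hstack([view1, view2])        # naive fusion by concatenation

# Hide most labels to simulate the semi-supervised setting (-1 = unlabeled).
rng = np.random.default_rng(0)
y_train = y.copy()
unlabeled = rng.random(len(y)) > 0.1
y_train[unlabeled] = -1

model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(X_fused, y_train)
accuracy = (model.transduction_[unlabeled] == y[unlabeled]).mean()
print(f"Transductive accuracy on unlabeled samples: {accuracy:.3f}")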

20 pages, 2522 KiB  
Article
Machine Learning-Driven Detection of Cross-Site Scripting Attacks
by Rahmah Alhamyani and Majid Alshammari
Information 2024, 15(7), 420; https://doi.org/10.3390/info15070420 - 20 Jul 2024
Viewed by 1580
Abstract
The ever-growing web application landscape, fueled by technological advancements, introduces new vulnerabilities to cyberattacks. Cross-site scripting (XSS) attacks pose a significant threat, exploiting the difficulty of distinguishing between benign and malicious scripts within web applications. Traditional detection methods struggle with high false-positive (FP) and false-negative (FN) rates. This research proposes a novel machine learning (ML)-based approach for robust XSS attack detection. We evaluate various models including Random Forest (RF), Logistic Regression (LR), Support Vector Machines (SVMs), Decision Trees (DTs), Extreme Gradient Boosting (XGBoost), Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNNs), Artificial Neural Networks (ANNs), and ensemble learning. The models are trained on a real-world dataset categorized into benign and malicious traffic, incorporating feature selection methods like Information Gain (IG) and Analysis of Variance (ANOVA) for optimal performance. Our findings reveal exceptional accuracy, with the RF model achieving 99.78% and ensemble models exceeding 99.64%. These results surpass existing methods, demonstrating the effectiveness of the proposed approach in securing web applications while minimizing FPs and FNs. This research offers a significant contribution to the field of web application security by providing a highly accurate and robust ML-based solution for XSS attack detection. Full article
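A minimal sketch of an ML-based XSS classifier with feature selection in the spirit of the pipeline above; the toy script snippets, character n-gram features, and parameter choices are illustrative assumptions, not the authors' dataset or configuration (f_classif stands in for the ANOVA selection mentioned in the abstract).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

samples = [
    "<script>alert('xss')</script>",
    "<img src=x onerror=alert(1)>",
    "<p>Welcome back, user!</p>",
    "<a href='/profile'>View profile</a>",
] * 25
labels = [1, 1, 0, 0] * 25                 # 1 = malicious, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(
    samples, labels, test_size=0.3, random_state=42, stratify=labels)

# Character n-grams capture script fragments; SelectKBest keeps the strongest features.
pipeline = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    SelectKBest(f_classif, k=50),
    RandomForestClassifier(n_estimators=200, random_state=42),
)
pipeline.fit(X_train, y_train)
print(f"Held-out accuracy: {pipeline.score(X_test, y_test):.3f}")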

30 pages, 12265 KiB  
Article
Toward Robust Arabic AI-Generated Text Detection: Tackling Diacritics Challenges
by Hamed Alshammari and Khaled Elleithy
Information 2024, 15(7), 419; https://doi.org/10.3390/info15070419 - 19 Jul 2024
Viewed by 1379
Abstract
Current AI detection systems often struggle to distinguish between Arabic human-written text (HWT) and AI-generated text (AIGT) due to the small marks present above and below the Arabic text called diacritics. This study introduces robust Arabic text detection models using Transformer-based pre-trained models, specifically AraELECTRA, AraBERT, XLM-R, and mBERT. Our primary goal is to detect AIGTs in essays and overcome the challenges posed by the diacritics that usually appear in Arabic religious texts. We created several novel datasets with diacritized and non-diacritized texts comprising up to 9666 HWT and AIGT training examples. We aimed to assess the robustness and effectiveness of the detection models on out-of-domain (OOD) datasets to assess their generalizability. Our detection models trained on diacritized examples achieved up to 98.4% accuracy compared to GPTZero’s 62.7% on the AIRABIC benchmark dataset. Our experiments reveal that, while including diacritics in training enhances the recognition of the diacritized HWTs, duplicating examples with and without diacritics is inefficient despite the high accuracy achieved. Applying a dediacritization filter during evaluation significantly improved model performance, achieving optimal performance compared to both GPTZero and the detection models trained on diacritized examples but evaluated without dediacritization. Although our focus was on Arabic due to its writing challenges, our detector architecture is adaptable to any language. Full article
(This article belongs to the Section Artificial Intelligence)
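A minimal sketch of the kind of dediacritization filter referred to above; the Unicode ranges cover the common Arabic combining marks, and the example phrase is illustrative. This is not the authors' exact preprocessing code.

import re

# Arabic harakat and related marks: fathatan through sukun (U+064B-U+0652),
# the superscript alef (U+0670), and Quranic annotation signs (U+06D6-U+06ED).
DIACRITICS = re.compile(r"[\u064B-\u0652\u0670\u06D6-\u06ED]")

def dediacritize(text: str) -> str:
    """Strip Arabic diacritics while leaving the base letters untouched."""
    return DIACRITICS.sub("", text)

diacritized = "بِسْمِ اللَّهِ الرَّحْمَٰنِ الرَّحِيمِ"
print(dediacritize(diacritized))   # prints the same phrase without the harakat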

13 pages, 2487 KiB  
Article
SiamSMN: Siamese Cross-Modality Fusion Network for Object Tracking
by Shuo Han, Lisha Gao, Yue Wu, Tian Wei, Manyu Wang and Xu Cheng
Information 2024, 15(7), 418; https://doi.org/10.3390/info15070418 - 19 Jul 2024
Viewed by 804
Abstract
Existing Siamese trackers have achieved increasingly successful results in visual object tracking. However, the interactive fusion among multi-layer similarity maps after cross-correlation has not been fully studied in previous Siamese network-based methods. To address this issue, we propose a novel Siamese network for visual object tracking, named SiamSMN, which consists of a feature extraction network, a multi-scale fusion module, and a prediction head. First, the feature extraction network is used to extract the features of the template image and the search image, which are then combined by a depth-wise cross-correlation operation to produce multiple similarity feature maps. Second, we propose an effective multi-scale fusion module that can extract global context information for object search and learn the interdependencies between multi-level similarity maps. In addition, to further improve tracking accuracy, we design a learnable prediction head module to generate a boundary point for each side based on the coarse bounding box, which solves the problem of inconsistent classification and regression during tracking. Extensive experiments on four public benchmarks demonstrate that the proposed tracker achieves competitive performance compared with other state-of-the-art trackers. Full article
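A minimal PyTorch sketch of the depth-wise cross-correlation operation mentioned above, a standard Siamese-tracking building block; the tensor shapes are illustrative assumptions rather than SiamSMN's actual configuration.

import torch
import torch.nn.functional as F

def xcorr_depthwise(search: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
    """Correlate each search-feature channel with the matching template channel."""
    b, c = template.size(0), template.size(1)
    search = search.reshape(1, b * c, search.size(2), search.size(3))
    kernel = template.reshape(b * c, 1, template.size(2), template.size(3))
    out = F.conv2d(search, kernel, groups=b * c)          # per-channel correlation
    return out.reshape(b, c, out.size(2), out.size(3))    # multi-channel similarity maps

# Example: 256-channel template features (7x7) slid over search features (31x31).
template_feat = torch.randn(2, 256, 7, 7)
search_feat = torch.randn(2, 256, 31, 31)
print(xcorr_depthwise(search_feat, template_feat).shape)  # torch.Size([2, 256, 25, 25])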

15 pages, 3305 KiB  
Article
Bend-Net: Bending Loss Regularized Multitask Learning Network for Nuclei Segmentation in Histopathology Images
by Haotian Wang, Aleksandar Vakanski, Changfa Shi and Min Xian
Information 2024, 15(7), 417; https://doi.org/10.3390/info15070417 - 18 Jul 2024
Viewed by 840
Abstract
Separating overlapped nuclei is a significant challenge in histopathology image analysis. Recently published approaches have achieved promising overall performance on nuclei segmentation; however, their performance on separating overlapped nuclei is limited. To address this issue, we propose a novel multitask learning network with a bending loss regularizer to separate overlapped nuclei accurately. The newly proposed multitask learning architecture enhances generalization by learning shared representation from the following three tasks: instance segmentation, nuclei distance map prediction, and overlapped nuclei distance map prediction. The proposed bending loss defines high penalties to concave contour points with large curvatures, and small penalties are applied to convex contour points with small curvatures. Minimizing the bending loss avoids generating contours that encompass multiple nuclei. In addition, two new quantitative metrics, the Aggregated Jaccard Index of overlapped nuclei (AJIO) and the accuracy of overlapped nuclei (ACCO), have been designed to evaluate overlapped nuclei segmentation. We validate the proposed approach on the CoNSeP and MoNuSegv1 data sets using the following seven quantitative metrics: Aggregate Jaccard Index, Dice, Segmentation Quality, Recognition Quality, Panoptic Quality, AJIO, and ACCO. Extensive experiments demonstrate that the proposed Bend-Net outperforms eight state-of-the-art approaches. Full article

16 pages, 256 KiB  
Article
Higher Education Students’ Perceptions of GenAI Tools for Learning
by Wajeeh Daher and Asma Hussein
Information 2024, 15(7), 416; https://doi.org/10.3390/info15070416 - 18 Jul 2024
Viewed by 1541
Abstract
Students’ perceptions of tools with which they learn affect the outcomes of this learning. GenAI tools are new tools that have promise for students’ learning, especially higher education students. Examining students’ perceptions of GenAI tools as learning tools can help instructors better plan activities that utilize these tools in the higher education context. The present research considers four components of students’ perceptions of GenAI tools: efficiency, interaction, affect, and intention. To triangulate data, it combines the quantitative and the qualitative methodologies, by using a questionnaire and by conducting interviews. A total of 153 higher education students responded to the questionnaire, while 10 higher education students participated in the interview. The research results indicated that the means of affect, interaction, and efficiency were significantly medium, while the mean of intention was significantly high. The research findings showed that in efficiency, affect, and intention, male students had significantly higher perceptions of AI tools than female students, but in the interaction component, the two genders did not differ significantly. Moreover, the degree affected only the perception of interaction of higher education students, where the mean value of interaction was significantly different between B.A. and Ph.D. students in favor of Ph.D. students. Moreover, medium-technology-knowledge and high-technology-knowledge students differed significantly in their perceptions of working with AI tools in the interaction component only, where this difference was in favor of the high-technology-knowledge students. Furthermore, AI knowledge significantly affected efficiency, interaction, and affect of higher education students, where they were higher in favor of high-AI-knowledge students over low-AI-knowledge students, as well as in favor of medium-AI-knowledge students over low-AI-knowledge students. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)
18 pages, 1793 KiB  
Article
An Open Data-Based Omnichannel Approach for Personalized Healthcare
by Ailton Moreira and Manuel Filipe Santos
Information 2024, 15(7), 415; https://doi.org/10.3390/info15070415 - 18 Jul 2024
Viewed by 841
Abstract
Currently, telemedicine and telehealth have grown, prompting healthcare institutions to seek innovative ways to incorporate them into their services. Challenges such as resource allocation, system integration, and data compatibility persist in healthcare. Utilizing an open data approach in a versatile mobile platform holds great promise for addressing these challenges. This research focuses on adopting such an approach for a mobile platform catering to personalized care services. It aims to bridge identified gaps in healthcare, including fragmented communication channels and limited real-time data access, through an open data approach. This study builds upon previous research in omnichannel healthcare using prototyping to design a mobile companion for personalized care. By combining an omnichannel mobile companion with open data principles, this research successfully tackles key healthcare gaps, enhancing patient-centered care and improving data accessibility and integration. The strategy proves effective despite encountering challenges, although additional issues in personalized care services warrant further exploration and consideration. Full article
(This article belongs to the Special Issue Information Systems in Healthcare)

16 pages, 9223 KiB  
Article
NATCA YOLO-Based Small Object Detection for Aerial Images
by Yicheng Zhu, Zhenhua Ai, Jinqiang Yan, Silong Li, Guowei Yang and Teng Yu
Information 2024, 15(7), 414; https://doi.org/10.3390/info15070414 - 18 Jul 2024
Viewed by 1044
Abstract
The object detection model in UAV aerial image scenes faces challenges such as significant scale changes of certain objects and the presence of complex backgrounds. This paper aims to address the detection of small objects in aerial images using NATCA (neighborhood attention Transformer coordinate attention) YOLO. Specifically, the feature extraction network incorporates a neighborhood attention transformer (NAT) into the last layer to capture global context information and extract diverse features. Additionally, the feature fusion network (Neck) incorporates a coordinate attention (CA) module to capture channel information and longer-range positional information. Furthermore, the activation function in the original convolutional block is replaced with Meta-ACON. The NAT serves as the prediction layer in the new network, which is evaluated using the VisDrone2019-DET object detection dataset as a benchmark, and tested on the VisDrone2019-DET-test-dev dataset. To assess the performance of the NATCA YOLO model in detecting small objects in aerial images, other detection networks, such as Faster R-CNN, RetinaNet, and SSD, are employed for comparison on the test set. The results demonstrate that the NATCA YOLO detection achieves an average accuracy of 42%, which is a 2.9% improvement compared to the state-of-the-art detection network TPH-YOLOv5. Full article

16 pages, 3196 KiB  
Article
The Physics of Preference: Unravelling Imprecision of Human Preferences through Magnetisation Dynamics
by Ivan S. Maksymov and Ganna Pogrebna
Information 2024, 15(7), 413; https://doi.org/10.3390/info15070413 - 18 Jul 2024
Cited by 1 | Viewed by 881
Abstract
Paradoxical decision-making behaviours such as preference reversal often arise from imprecise or noisy human preferences. Harnessing the physical principle of magnetisation reversal in ferromagnetic nanostructures, we developed a model that closely reflects human decision-making dynamics. Tested against a spectrum of psychological data, our model adeptly captures the complexities inherent in individual choices. This blend of physics and psychology paves the way for fresh perspectives on understanding the imprecision of human decision-making processes, extending the reach of the current classical and quantum physical models of human behaviour and decision making. Full article

18 pages, 1368 KiB  
Article
Exploring the Factors in the Discontinuation of a Talent Pool Information System: A Case Study of an EduTech Startup in Indonesia
by Sabila Nurwardani, Ailsa Zayyan, Endah Fuji Astuti and Panca O. Hadi Putra
Information 2024, 15(7), 412; https://doi.org/10.3390/info15070412 - 17 Jul 2024
Viewed by 2033
Abstract
This research was conducted to determine the reasons behind users’ discontinuation of talent pool information system use. A qualitative approach was chosen to explore these factors in depth. Respondents were selected using purposive sampling techniques, and the data collection process was carried out through semi-structured interviews. The thematic analysis method was then applied to the transcripts of the interviews with the users. Based on the qualitative methodology employed, we found seven factors behind users’ discontinuation of the use of the studied information system. The seven factors were grouped based on two dimensions, namely, experiential factors and external factors. Poor system quality, informational issues, interface issues, and unfamiliarity with the system influenced the experiential factors. On the other hand, the external factors were influenced by workforce needs, talent mismatches, and a lack of socialization. This research offers a novel, in-depth analysis of the factors that cause users to stop using information systems based on direct experience from users. In addition, the results of this study will be used as feedback companies can use to improve their systems. Full article
(This article belongs to the Special Issue Fundamental Problems of Information Studies)

18 pages, 3838 KiB  
Article
DPP: A Novel Disease Progression Prediction Method for Ginkgo Leaf Disease Based on Image Sequences
by Shubao Yao, Jianhui Lin and Hao Bai
Information 2024, 15(7), 411; https://doi.org/10.3390/info15070411 - 16 Jul 2024
Viewed by 827
Abstract
Ginkgo leaf disease poses a grave threat to Ginkgo biloba. The current management of Ginkgo leaf disease lacks precision guidance and intelligent technologies. To provide precision guidance for disease management and to evaluate the effectiveness of the implemented measures, the present study proposes a novel disease progression prediction (DPP) method for Ginkgo leaf blight with a multi-level feature translation architecture and enhanced spatiotemporal attention module (eSTA). The proposed DPP method is capable of capturing key spatiotemporal dependencies of disease symptoms at various feature levels. Experiments demonstrated that the DPP method achieves state-of-the-art prediction performance in disease progression prediction. Compared to the top-performing spatiotemporal predictive learning method (SimVP + TAU), our method significantly reduced the mean absolute error (MAE) by 19.95% and the mean square error (MSE) by 25.35%. Moreover, it achieved a higher structure similarity index measure (SSIM) of 0.970 and superior peak signal-to-noise ratio (PSNR) of 37.746 dB. The proposed method can accurately forecast the progression of Ginkgo leaf blight to a large extent, which is expected to provide valuable insights for precision and intelligent disease management. Additionally, this study presents a novel perspective for the extensive research on plant disease prediction. Full article

44 pages, 1569 KiB  
Review
Digital Educational Tools for Undergraduate Nursing Education: A Review of Serious Games, Gamified Applications and Non-Gamified Virtual Reality Simulations/Tools for Nursing Students
by Vasiliki Eirini Chatzea, Ilias Logothetis, Michail Kalogiannakis, Michael Rovithis and Nikolas Vidakis
Information 2024, 15(7), 410; https://doi.org/10.3390/info15070410 - 15 Jul 2024
Viewed by 1888
Abstract
Educational technology has advanced tremendously in recent years, with several major developments becoming available in healthcare professionals’ education, including nursing. Furthermore, the COVID-19 pandemic resulted in obligatory physical distancing, which forced an accelerated digital transformation of teaching tools. This review aimed to summarize all the available digital tools for nursing undergraduate education developed from 2019 to 2023. A robust search algorithm was implemented in the Scopus database, resulting in 1592 publications. Overall, 266 relevant studies were identified enrolling more than 22,500 undergraduate nursing students. Upon excluding multiple publications on the same digital tool, studies were categorized into three broad groups: serious games (28.0%), gamified applications (34.5%), and VR simulations and other non-gamified digital interventions (37.5%). Digital tools’ learning activity type (categories = 8), geographical distribution (countries = 34), educational subjects (themes = 12), and inclusion within a curriculum course (n = 108), were also explored. Findings indicate that digital educational tools are an emerging field identified as a potential pedagogical strategy aiming to transform nursing education. This review highlights the latest advances in the field, providing useful insights that could inspire countries and universities which have not yet incorporated digital educational tools in their nursing curriculum, to invest in their implementation. Full article

14 pages, 2421 KiB  
Article
Optimization of Memristor Crossbar’s Mapping Using Lagrange Multiplier Method and Genetic Algorithm for Reducing Crossbar’s Area and Delay Time
by Seung-Myeong Cho, Rina Yoon, Ilpyeong Yoon, Jihwan Moon, Seokjin Oh and Kyeong-Sik Min
Information 2024, 15(7), 409; https://doi.org/10.3390/info15070409 - 15 Jul 2024
Cited by 1 | Viewed by 903
Abstract
Memristor crossbars offer promising low-power and parallel processing capabilities, making them efficient for implementing convolutional neural networks (CNNs) in terms of delay time, area, etc. However, mapping large CNN models like ResNet-18, ResNet-34, VGG-Net, etc., onto memristor crossbars is challenging due to the line resistance problem limiting crossbar size. This necessitates partitioning full-image convolution into sub-image convolution. To do so, an optimized mapping of memristor crossbars should be considered to divide full-image convolution into multiple crossbars. With limited crossbar resources, especially in edge devices, it is crucial to optimize the crossbar allocation per layer to minimize the hardware resources in terms of crossbar area, delay time, and area–delay product. This paper explores three optimization scenarios: (1) optimizing total delay time under a crossbar’s area constraint, (2) optimizing total crossbar area with a crossbar’s delay time constraint, and (3) optimizing a crossbar’s area–delay-time product without constraints. The Lagrange multiplier method is employed for the constrained cases 1 and 2. For the unconstrained case 3, a genetic algorithm (GA) is used to optimize the area–delay-time product. Simulation results demonstrate that the optimization can provide significant improvements over the unoptimized results. When VGG-Net is simulated, the optimization shows about a 20% reduction in delay time for case 1 and a 22% area reduction for case 2. Case 3 highlights the benefits of optimizing the crossbar utilization ratio for minimizing the area–delay-time product. The proposed optimization strategies can substantially enhance the neural network performance of memristor crossbar-based processing-in-memory architectures, especially for resource-constrained edge computing platforms. Full article
(This article belongs to the Special Issue Neuromorphic Engineering and Machine Learning)
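A toy sketch of the unconstrained case (3), searching per-layer crossbar allocations with a small genetic-algorithm loop; the cost model, layer sizes, and crossbar limits below are hypothetical placeholders, not the paper's model or results.

import random

random.seed(0)
LAYER_ROWS = [1152, 2304, 4608, 4608]   # hypothetical mapping workload per layer
MAX_XBARS = 16                          # hypothetical crossbars available per layer

def cost(alloc):
    """Area ~ total crossbars used; delay ~ serialised work per layer (illustrative only)."""
    area = sum(alloc)
    delay = sum(rows / xbars for rows, xbars in zip(LAYER_ROWS, alloc))
    return area * delay

def mutate(alloc):
    child = list(alloc)
    i = random.randrange(len(child))
    child[i] = min(MAX_XBARS, max(1, child[i] + random.choice((-1, 1))))
    return child

# Simple elitist loop: keep the best half, refill by mutating survivors.
population = [[random.randint(1, MAX_XBARS) for _ in LAYER_ROWS] for _ in range(20)]
for _ in range(200):
    population.sort(key=cost)
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(10)]

best = min(population, key=cost)
print("best allocation:", best, "area-delay product:", round(cost(best), 1))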

14 pages, 12531 KiB  
Article
Application of Attention-Enhanced 1D-CNN Algorithm in Hyperspectral Image and Spectral Fusion Detection of Moisture Content in Orah Mandarin (Citrus reticulata Blanco)
by Weiqi Li, Yifan Wang, Yue Yu and Jie Liu
Information 2024, 15(7), 408; https://doi.org/10.3390/info15070408 - 14 Jul 2024
Viewed by 951
Abstract
A method fusing spectral and image information with a one-dimensional convolutional neural network (1D-CNN) for the detection of moisture content in Orah mandarin (Citrus reticulata Blanco) was proposed. The 1D-CNN models integrated with three different attention modules (SEAM, ECAM, CBAM) and machine learning models were applied to the individual spectra and the fused information, bypassing the traditional feature extraction stage. Additionally, the dimensionality reduction of hyperspectral images and extraction of one-dimensional color and textural features from the reduced images were performed, thus avoiding the large parameter volumes and efficiency decline inherent in the direct modeling of two-dimensional images. The results indicated that the 1D-CNN models with integrated attention modules exhibited clear advantages over machine learning models in handling multi-source information. The optimal machine learning model was determined to be the random forest (RF) model under the fused information, with a correlation coefficient (R) of 0.8770 and a root mean square error (RMSE) of 0.0188 on the prediction set. The CBAM-1D-CNN model under the fused information exhibited the best performance, with an R of 0.9172 and an RMSE of 0.0149 on the prediction set. The 1D-CNN models utilizing fused information exhibited superior performance compared to the single spectrum, and the 1D-CNN models with fused information based on SEAM, ECAM, and CBAM improved Rp by 4.54%, 0.18%, and 10.19%, respectively, compared to the spectrum alone, with the RMSEP decreased by 11.70%, 14.06%, and 31.02%, respectively. The proposed approach of attention-integrated 1D-CNN can obtain excellent regression results using only one-dimensional data and without feature pre-extraction, reducing the complexity of the models, simplifying the calculation process, and making it promising for practical application. Full article
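A minimal PyTorch sketch of a squeeze-and-excitation (SE) style channel-attention block inside a 1D CNN, in the spirit of the SEAM variant above; the layer sizes, spectral length, and regression head are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class SEBlock1D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, spectral length)
        weights = self.fc(x.mean(dim=-1))    # squeeze over the spectral axis
        return x * weights.unsqueeze(-1)     # re-weight each channel

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
    SEBlock1D(16),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 1),                        # regression head (moisture content)
)

spectra = torch.randn(4, 1, 512)             # 4 samples, 512 spectral bands
print(model(spectra).shape)                  # torch.Size([4, 1])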

15 pages, 1736 KiB  
Article
Multi-Level Attention with 2D Table-Filling for Joint Entity-Relation Extraction
by Zhenyu Zhang, Lin Shi, Yang Yuan, Huanyue Zhou and Shoukun Xu
Information 2024, 15(7), 407; https://doi.org/10.3390/info15070407 - 14 Jul 2024
Viewed by 678
Abstract
Joint entity-relation extraction is a fundamental task in the construction of large-scale knowledge graphs. This task relies not only on the semantics of the text span but also on its intricate connections, including classification and structural details that most previous models overlook. In this paper, we propose the incorporation of this information into the learning process. Specifically, we design a novel two-dimensional word-pair tagging method to define the task of entity and relation extraction. This allows type markers to focus on text tokens, gathering information for their corresponding spans. Additionally, we introduce a multi-level attention neural network to enhance its capacity to perceive structure-aware features. Our experiments show that our approach can overcome the limitations of earlier tagging methods and yield more accurate results. We evaluate our model using three different datasets: SciERC, ADE, and CoNLL04. Our model demonstrates competitive performance compared to the state-of-the-art, surpassing other approaches across the majority of evaluated metrics. Full article

15 pages, 1422 KiB  
Article
Integrating Change Management with a Knowledge Management Framework: A Methodological Proposal
by Bernal Picado Argüello and Vicente González-Prida
Information 2024, 15(7), 406; https://doi.org/10.3390/info15070406 - 13 Jul 2024
Viewed by 1494
Abstract
This study proposes the integration of change management with a knowledge management framework to address knowledge retention and successful change management in the context of Industry 5.0. Using the ADKAR model, it is suggested to implement strategies for training and user acceptance testing. The research highlights the importance of applying the human capital life cycle in knowledge and change management, demonstrating the effectiveness of this approach in adapting to Industry 5.0. The methodology includes a review of the state of the art in intangible asset management, change management models, and the integration of change and knowledge management. In addition, a case study is presented in a food production company that validates the effectiveness of the ADKAR model in implementing digital technologies, improving process efficiency and increasing employee acceptance of new technologies. The results show a significant improvement in process efficiency and a reduction in resistance to change. The originality of the study lies in the combination of the ADKAR model with intangible asset and knowledge management, providing a holistic solution for change management in the Industry 5.0 era. Future implications suggest the need to explore the applicability of the ADKAR model in different industries and cultures, as well as its long-term effects on organisational sustainability and innovation. This comprehensive approach can serve as a guide for other organisations seeking to implement successful digital transformations. Full article
(This article belongs to the Special Issue Feature Papers in Information in 2024–2025)

12 pages, 1668 KiB  
Article
Bridging Artificial Intelligence and Neurological Signals (BRAINS): A Novel Framework for Electroencephalogram-Based Image Generation
by Mateo Sokač, Leo Mršić, Mislav Balković and Maja Brkljačić
Information 2024, 15(7), 405; https://doi.org/10.3390/info15070405 - 12 Jul 2024
Viewed by 1482
Abstract
Recent advancements in cognitive neuroscience, particularly in electroencephalogram (EEG) signal processing, image generation, and brain–computer interfaces (BCIs), have opened up new avenues for research. This study introduces a novel framework, Bridging Artificial Intelligence and Neurological Signals (BRAINS), which leverages the power of artificial intelligence (AI) to extract meaningful information from EEG signals and generate images. The BRAINS framework addresses the limitations of traditional EEG analysis techniques, which struggle with nonstationary signals, spectral estimation, and noise sensitivity. Instead, BRAINS employs Long Short-Term Memory (LSTM) networks and contrastive learning, which effectively handle time-series EEG data and recognize intrinsic connections and patterns. The study utilizes the MNIST dataset of handwritten digits as stimuli in EEG experiments, allowing for diverse yet controlled stimuli. The data collected are then processed through an LSTM-based network, employing contrastive learning and extracting complex features from EEG data. These features are fed into an image generator model, producing images as close to the original stimuli as possible. This study demonstrates the potential of integrating AI and EEG technology, offering promising implications for the future of brain–computer interfaces. Full article
(This article belongs to the Special Issue Signal Processing Based on Machine Learning Techniques)

15 pages, 2488 KiB  
Article
Extended Isolation Forest for Intrusion Detection in Zeek Data
by Fariha Moomtaheen, Sikha S. Bagui, Subhash C. Bagui and Dustin Mink
Information 2024, 15(7), 404; https://doi.org/10.3390/info15070404 - 12 Jul 2024
Viewed by 810
Abstract
The novelty of this paper is in determining and using hyperparameters to improve the Extended Isolation Forest (EIF) algorithm, a relatively new algorithm, to detect malicious activities in network traffic. The EIF algorithm is a variation of the Isolation Forest algorithm, known for its efficacy in detecting anomalies in high-dimensional data. Our research assesses the performance of the EIF model on a newly created dataset composed of Zeek Connection Logs, UWF-ZeekDataFall22. To handle the enormous volume of data involved in this research, the Hadoop Distributed File System (HDFS) is employed for efficient and fault-tolerant storage, and the Apache Spark framework, a powerful open-source Big Data analytics platform, is utilized for machine learning (ML) tasks. The best results for the EIF algorithm came from the 0-extension level. We received an accuracy of 82.3% for the Resource Development tactic, 82.21% for the Reconnaissance tactic, and 78.3% for the Discovery tactic. Full article
(This article belongs to the Special Issue Intrusion Detection Systems in IoT Networks)
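A minimal sketch of isolation-forest anomaly scoring on synthetic flow-like records; scikit-learn's IsolationForest corresponds to an Extended Isolation Forest at extension level 0, the setting reported best above. The features, scales, and contamination rate are illustrative assumptions, not the UWF-ZeekDataFall22 pipeline.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: duration (s), bytes sent, bytes received -- stand-ins for Zeek conn.log fields.
normal = rng.normal(loc=[1.0, 500, 800], scale=[0.5, 100, 150], size=(500, 3))
attacks = rng.normal(loc=[30.0, 50_000, 10], scale=[5.0, 5_000, 5], size=(20, 3))
X = np.vstack([normal, attacks])

forest = IsolationForest(n_estimators=200, contamination=0.04, random_state=0)
labels = forest.fit_predict(X)               # -1 = anomaly, 1 = normal
flagged = np.flatnonzero(labels == -1)
print(f"{flagged.size} records flagged; the last 20 rows are the injected attacks.")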

16 pages, 3249 KiB  
Article
Explainable Artificial Intelligence and Deep Learning Methods for the Detection of Sickle Cell by Capturing the Digital Images of Blood Smears
by Neelankit Gautam Goswami, Niranjana Sampathila, Giliyar Muralidhar Bairy, Anushree Goswami, Dhruva Darshan Brp Siddarama and Sushma Belurkar
Information 2024, 15(7), 403; https://doi.org/10.3390/info15070403 - 12 Jul 2024
Viewed by 1045
Abstract
A digital microscope plays a crucial role in the better and faster diagnosis of an abnormality using various techniques. There has been significant development in this domain of digital pathology. Sickle cell disease (SCD) is a genetic disorder that affects hemoglobin in red blood cells. The traditional method for diagnosing sickle cell disease involves preparing a glass slide and viewing the slide using the eyepiece of a manual microscope. The entire process thus becomes very tedious and time consuming. This paper proposes a semi-automated system that can capture images based on a predefined program. It has an XY stage for moving the slide horizontally or vertically and a Z stage for focus adjustments. The case study taken here is of SCD. The proposed hardware captures images of SCD slides, which are then classified as sickle cell or normal using deep learning models such as Darknet-19, ResNet50, ResNet18, ResNet101, and GoogleNet. The tested models demonstrated strong performance, with most achieving high metrics across different configurations, averaging around 97%. In the future, this semi-automated system will benefit pathologists and can be used in rural areas, where pathologists are in short supply. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Health)
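A minimal PyTorch sketch of the kind of CNN classification stage described above: a ResNet-18 re-headed for a two-class sickle-cell vs. normal decision and trained for one illustrative step on random tensors standing in for smear images. Dataset handling, pretrained weights, and hyperparameters are assumptions, not the authors' setup.

import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                              # optionally load pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)          # sickle cell vs. normal

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)                   # placeholder batch of smear crops
targets = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
print(f"loss after one step: {loss.item():.4f}")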

8 pages, 224 KiB  
Editorial
Editorial to the Special Issue “Systems Engineering and Knowledge Management”
by Vladimír Bureš
Information 2024, 15(7), 402; https://doi.org/10.3390/info15070402 - 12 Jul 2024
Viewed by 821
Abstract
The International Council on Systems Engineering, the leading authority in the realm of systems engineering (SE), defines this field of study as a transdisciplinary and integrative approach to enabling the realization of the entire life cycle of any engineered system [...] Full article
(This article belongs to the Special Issue Systems Engineering and Knowledge Management)
16 pages, 1203 KiB  
Article
Defining Nodes and Edges in Other Languages in Cognitive Network Science—Moving beyond Single-Layer Networks
by Michael S. Vitevitch, Alysia E. Martinez and Riley England
Information 2024, 15(7), 401; https://doi.org/10.3390/info15070401 - 12 Jul 2024
Viewed by 1086
Abstract
Cognitive network science has increased our understanding of how the mental lexicon is structured and how that structure at the micro-, meso-, and macro-levels influences language and cognitive processes. Most of the research using this approach has used single-layer networks of English words. We consider two fundamental concepts in network science—nodes and connections (or edges)—in the context of two lesser-studied languages (American Sign Language and Kaqchikel) to see if a single-layer network can model phonological similarities among words in each of those languages. The analyses of those single-layer networks revealed several differences in network architecture that may challenge the cognitive network approach. We discuss several directions for future research using different network architectures that could address these challenges and also increase our understanding of how language processing might vary across languages. Such work would also provide a common framework for research in the language sciences, despite the variation among human languages. The methodological and theoretical tools of network science may also make it easier to integrate research of various language processes, such as typical and delayed development, acquired disorders, and the interaction of phonological and semantic information. Finally, coupling the cognitive network science approach with investigations of languages other than English might further advance our understanding of cognitive processing in general. Full article

15 pages, 1030 KiB  
Article
Compact and Low-Latency FPGA-Based Number Theoretic Transform Architecture for CRYSTALS Kyber Postquantum Cryptography Scheme
by Binh Kieu-Do-Nguyen, Nguyen The Binh, Cuong Pham-Quoc, Huynh Phuc Nghi, Ngoc-Thinh Tran, Trong-Thuc Hoang and Cong-Kha Pham
Information 2024, 15(7), 400; https://doi.org/10.3390/info15070400 - 11 Jul 2024
Viewed by 714
Abstract
In the modern era of the Internet of Things (IoT), especially with the rapid development of quantum computers, the implementation of postquantum cryptography algorithms in numerous terminals allows them to defend against potential future quantum attack threats. Lattice-based cryptography can withstand quantum computing attacks, making it a viable substitute for the currently prevalent classical public-key cryptography techniques. However, the algorithm’s significant time complexity places a substantial computational burden on the already resource-limited chip in the IoT terminal. In lattice-based cryptography algorithms, the polynomial multiplication on the finite field is well known as the most time-consuming process. Therefore, investigations into efficient methods for calculating polynomial multiplication are essential for adopting these quantum-resistant lattice-based algorithms on a low-profile IoT terminal. Number theoretic transform (NTT), a variant of the fast Fourier transform (FFT), is a technique widely employed to accelerate polynomial multiplication on the finite field to achieve a subquadratic time complexity. This study presents an efficient FPGA-based implementation of the number theoretic transform for CRYSTALS Kyber, a lattice-based public-key cryptography algorithm. Our hybrid design, which supports both forward and inverse NTT, is able to run at high frequencies of up to 417 MHz on a low-profile Artix-7 XC7A100T and achieve a low latency of 1.10 μs while achieving state-of-the-art hardware efficiency, consuming only 541 LUTs, 680 FFs, and four 18 Kb BRAMs. This is made possible by the newly proposed multilevel pipeline butterfly unit architecture in combination with an effective coefficient accessing pattern. Full article
(This article belongs to the Special Issue Software Engineering and Green Software)
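For readers unfamiliar with the transform itself, below is a toy software sketch of a radix-2 NTT over Kyber's modulus q = 3329 using 17, a primitive 256-th root of unity. This is a plain cyclic NTT for illustration only; Kyber specifies an incomplete negacyclic variant, and the paper's contribution is the FPGA butterfly pipeline rather than any software code.

Q, N, ROOT = 3329, 256, 17    # Kyber modulus; 17 is a primitive 256-th root of unity mod 3329

def ntt(coeffs, root=ROOT):
    a = list(coeffs)
    # Bit-reversal permutation.
    j = 0
    for i in range(1, N):
        bit = N >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    # Cooley-Tukey butterflies.
    length = 2
    while length <= N:
        w_len = pow(root, N // length, Q)
        for start in range(0, N, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % Q
                a[k], a[k + length // 2] = (u + v) % Q, (u - v) % Q
                w = w * w_len % Q
        length <<= 1
    return a

def intt(coeffs):
    # Modular inverses via Fermat's little theorem (Q is prime).
    inv_n = pow(N, Q - 2, Q)
    inv_root = pow(ROOT, Q - 2, Q)
    return [x * inv_n % Q for x in ntt(coeffs, root=inv_root)]

poly = list(range(N))
assert intt(ntt(poly)) == poly
print("NTT round-trip OK")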

20 pages, 6753 KiB  
Article
Rolling Bearing Fault Diagnosis Based on CNN-LSTM with FFT and SVD
by Muzi Xu, Qianqian Yu, Shichao Chen and Jianhui Lin
Information 2024, 15(7), 399; https://doi.org/10.3390/info15070399 - 11 Jul 2024
Viewed by 808
Abstract
In the industrial sector, accurate fault identification is paramount for ensuring both safety and economic efficiency throughout the production process. However, due to constraints imposed by actual working conditions, the motor state features collected are often limited in number and singular in nature. Consequently, extending and extracting these features pose significant challenges in fault diagnosis. To address this issue and strike a balance between model complexity and diagnostic accuracy, this paper introduces a novel motor fault diagnostic model termed FSCL (Fourier Singular Value Decomposition combined with Long and Short-Term Memory networks). The FSCL model integrates traditional signal analysis algorithms with deep learning techniques to automate feature extraction. This hybrid approach innovatively enhances fault detection by describing, extracting, encoding, and mapping features during offline training. Empirical evaluations against various state-of-the-art techniques such as Bayesian Optimization and Extreme Gradient Boosting Tree (BOA-XGBoost), Whale Optimization Algorithm and Support Vector Machine (WOA-SVM), Short-Time Fourier Transform and Convolutional Neural Networks (STFT-CNNs), and Variational Modal Decomposition-Multi Scale Fuzzy Entropy-Probabilistic Neural Network (VMD-MFE-PNN) demonstrate the superior performance of the FSCL model. Validation using the Case Western Reserve University dataset (CWRU) confirms the efficacy of the proposed technique, achieving an impressive accuracy of 99.32%. Moreover, the model exhibits robustness against noise, maintaining an average precision of 98.88% and demonstrating recall and F1 scores ranging from 99.00% to 99.89%. Even under conditions of severe noise interference, the FSCL model consistently achieves high accuracy in recognizing the motor’s operational state. This study underscores the FSCL model as a promising approach for enhancing motor fault diagnosis in industrial settings, leveraging the synergistic benefits of traditional signal analysis and deep learning methodologies. Full article
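A minimal NumPy sketch of the Fourier + SVD feature-extraction idea named in the FSCL acronym above; the synthetic vibration signal, window sizes, and number of retained singular values are illustrative assumptions, and the downstream CNN-LSTM classifier is omitted.

import numpy as np

rng = np.random.default_rng(0)
fs = 12_000                                    # sampling rate (Hz), illustrative
t = np.arange(0, 1.0, 1 / fs)
# Synthetic "bearing" signal: a 157 Hz fault tone buried in noise.
signal = np.sin(2 * np.pi * 157 * t) + 0.5 * rng.standard_normal(t.size)

# 1) Frequency-domain representation: magnitude spectra of windowed segments.
segments = signal.reshape(10, 1200)            # ten windows of 1200 samples each
spectra = np.abs(np.fft.rfft(segments, axis=1))

# 2) SVD of the stacked spectra; leading singular values serve as compact,
#    noise-robust features for a downstream classifier.
_, s, _ = np.linalg.svd(spectra, full_matrices=False)
print("leading singular values:", np.round(s[:5], 2))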

20 pages, 1505 KiB  
Article
Optimizing Tourism Accommodation Offers by Integrating Language Models and Knowledge Graph Technologies
by Andrea Cadeddu, Alessandro Chessa, Vincenzo De Leo, Gianni Fenu, Enrico Motta, Francesco Osborne, Diego Reforgiato Recupero, Angelo Salatino and Luca Secchi
Information 2024, 15(7), 398; https://doi.org/10.3390/info15070398 - 10 Jul 2024
Cited by 1 | Viewed by 1174
Abstract
Online platforms have become the primary means for travellers to search, compare, and book accommodations for their trips. Consequently, online platforms and revenue managers must acquire a comprehensive understanding of these dynamics to formulate competitive and appealing offerings. Recent advancements in natural language processing, specifically through the development of large language models, have demonstrated significant progress in capturing the intricate nuances of human language. On the other hand, knowledge graphs have emerged as potent instruments for representing and organizing structured information. Nevertheless, effectively integrating these two powerful technologies remains an ongoing challenge. This paper presents an innovative deep learning methodology that combines large language models with domain-specific knowledge graphs for the classification of tourism offers. The main objective of our system is to assist revenue managers in the following two fundamental dimensions: (i) comprehending the market positioning of their accommodation offerings, taking into consideration factors such as accommodation price and availability, together with user reviews and demand, and (ii) optimizing the presentation and characteristics of the offerings themselves, with the intention of improving their overall appeal. For this purpose, we developed a domain knowledge graph covering a variety of information about accommodations and implemented targeted feature engineering techniques to enhance the information representation within a large language model. To evaluate the effectiveness of our approach, we conducted a comparative analysis against alternative methods on four datasets about accommodation offers in London. The proposed solution obtained excellent results, significantly outperforming alternative methods. Full article

20 pages, 5600 KiB  
Article
Spatial Analysis of Advanced Air Mobility in Rural Healthcare Logistics
by Raj Bridgelall
Information 2024, 15(7), 397; https://doi.org/10.3390/info15070397 - 10 Jul 2024
Viewed by 948
Abstract
The transportation of patients in emergency medical situations, particularly in rural areas, often faces significant challenges due to long travel distances and limited access to healthcare facilities. These challenges can result in critical delays in medical care, adversely affecting patient outcomes. Addressing this issue is essential for improving survival rates and health outcomes in underserved regions. This study explored the potential of advanced air mobility to enhance emergency medical services by reducing patient transport times through the strategic placement of vertiports. Using North Dakota as a case study, the research developed a GIS-based optimization workflow to identify optimal vertiport locations that maximize time savings. The study highlighted the benefits of strategic vertiport placement at existing airports and hospital heliports to minimize community disruption and leverage underutilized infrastructure. A key finding was that the optimized mixed-mode routes could reduce patient transport times by up to 21.8 min compared with drive-only routes, significantly impacting emergency response efficiency. Additionally, the study revealed that more than 45% of the populated areas experienced reduced ground travel times due to the integration of vertiports, highlighting the strategic importance of vertiport placement in optimizing emergency medical services. The research also demonstrated the replicability of the GIS-based optimization model for other regions, offering valuable insights for policymakers and stakeholders in enhancing EMS through advanced air mobility solutions. Full article
(This article belongs to the Special Issue Editorial Board Members’ Collection Series: "Information Processes")

29 pages, 4704 KiB  
Article
Virtual Journeys, Real Engagement: Analyzing User Experience on a Virtual Travel Social Platform
by Ana-Karina Nazare, Alin Moldoveanu and Florica Moldoveanu
Information 2024, 15(7), 396; https://doi.org/10.3390/info15070396 - 8 Jul 2024
Viewed by 808
Abstract
A sustainable smart tourism ecosystem relies on building digital networks that link tourists to destinations. This study explores the potential of web and immersive technologies, specifically the Virtual Romania (VRRO) platform, in enhancing sustainable tourism by redirecting tourist traffic to lesser-known destinations and [...] Read more.
A sustainable smart tourism ecosystem relies on building digital networks that link tourists to destinations. This study explores the potential of web and immersive technologies, specifically the Virtual Romania (VRRO) platform, in enhancing sustainable tourism by redirecting tourist traffic to lesser-known destinations and boosting user engagement through interactive experiences. Our research examines how virtual tourism platforms (VTPs), which include web-based and immersive technologies, support sustainable tourism, complement physical visits, influence user engagement, and foster community building through social features and user-generated content (UGC). An empirical analysis of the VRRO platform reveals high user engagement levels, attributed to its intuitive design and interactive features, regardless of the users’ technological familiarity. Our findings also highlight the necessity for ongoing enhancements to maintain user satisfaction. In conclusion, VRRO demonstrates how accessible and innovative technologies in tourism can modernize travel experiences and contribute to the evolution of the broader tourism ecosystem by supporting sustainable practices and fostering community engagement. Full article
Show Figures

Graphical abstract

17 pages, 1286 KiB  
Article
FEINT: Automated Framework for Efficient INsertion of Templates/Trojans into FPGAs
by Virinchi Roy Surabhi, Rajat Sadhukhan, Md Raz, Hammond Pearce, Prashanth Krishnamurthy, Joshua Trujillo, Ramesh Karri and Farshad Khorrami
Information 2024, 15(7), 395; https://doi.org/10.3390/info15070395 - 8 Jul 2024
Viewed by 779
Abstract
Field-Programmable Gate Arrays (FPGAs) play a significant and evolving role in various industries and applications in the current technological landscape. They are widely known for their flexibility, rapid prototyping, reconfigurability, and design development features. FPGA designs are often constructed as compositions of interconnected [...] Read more.
Field-Programmable Gate Arrays (FPGAs) play a significant and evolving role in various industries and applications in the current technological landscape. They are widely known for their flexibility, rapid prototyping, reconfigurability, and design development features. FPGA designs are often constructed as compositions of interconnected modules that implement the various features/functionalities required in an application. This work develops a novel tool, FEINT, which facilitates this module composition process and automates the design-level modifications required when introducing new modules into an existing design. The proposed methodology is architected as a “template” insertion tool that operates based on a user-provided configuration script to introduce dynamic design features as plugins at different stages of the FPGA design process to facilitate rapid prototyping, composition-based design evolution, and system customization. FEINT can be useful in applications where designers need to tailor system behavior without requiring expert FPGA programming skills or significant manual effort. For example, FEINT can help insert defensive monitoring, adversarial Trojan, and plugin-based functionality enhancement features. FEINT is scalable, future-proof, and cross-platform without dependence on vendor-specific file formats, thus ensuring compatibility across FPGA families and tool versions as well as integrability with commercial tools. To assess FEINT’s effectiveness, our tests covered the injection of various types of templates/modules into FPGA designs. For example, in the Trojan insertion context, our tests considered diverse Trojan behaviors and triggers, including key leakage and denial of service Trojans. We evaluated FEINT’s applicability to complex designs by creating an FPGA design that features a MicroBlaze soft-core processor connected to an AES accelerator via an AXI bus interface. FEINT can successfully and efficiently insert various templates into this design at different FPGA design stages. Full article
(This article belongs to the Special Issue Hardware Security and Trust)
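To make the template-insertion idea concrete, here is a hypothetical, heavily simplified sketch of a configuration-driven splice of a plugin module into an HDL design. The configuration keys, net names, and string-based insertion are invented for illustration and are not FEINT's actual interface.

# Hypothetical illustration (not FEINT's interface): a configuration-script-driven splice
# of a template module instantiation into an existing design. All names are invented.
config = {
    "template": "monitor_counter",                     # plugin module to insert
    "stage": "post-synthesis",                         # targeted FPGA design stage
    "attach_nets": {"clk": "sys_clk", "trigger": "aes_start", "out": "status_led"},
}

def render_instance(cfg):
    # Build a Verilog-style instantiation wiring the template's ports to existing design nets.
    ports = ", ".join(f".{formal}({actual})" for formal, actual in cfg["attach_nets"].items())
    return f"{cfg['template']} u_{cfg['template']} ({ports});"

def insert_template(design_text, cfg):
    # Naive splice for illustration: place the instantiation just before the top module's endmodule.
    return design_text.replace("endmodule", f"  {render_instance(cfg)}\nendmodule", 1)

top_module = (
    "module top(input sys_clk, input aes_start, output status_led);\n"
    "  // existing logic\n"
    "endmodule\n"
)
print(insert_template(top_module, config))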
Show Figures

Graphical abstract

15 pages, 465 KiB  
Article
Optimized Ensemble Learning Approach with Explainable AI for Improved Heart Disease Prediction
by Ibomoiye Domor Mienye and Nobert Jere
Information 2024, 15(7), 394; https://doi.org/10.3390/info15070394 - 8 Jul 2024
Cited by 3 | Viewed by 1999
Abstract
Recent advances in machine learning (ML) have shown great promise in detecting heart disease. However, to ensure the clinical adoption of ML models, they must not only be generalizable and robust but also transparent and explainable. Therefore, this research introduces an approach that [...] Read more.
Recent advances in machine learning (ML) have shown great promise in detecting heart disease. However, to ensure the clinical adoption of ML models, they must not only be generalizable and robust but also transparent and explainable. Therefore, this research introduces an approach that integrates the robustness of ensemble learning algorithms with the precision of Bayesian optimization for hyperparameter tuning and the interpretability offered by Shapley additive explanations (SHAP). The ensemble classifiers considered include adaptive boosting (AdaBoost), random forest, and extreme gradient boosting (XGBoost). The experimental results on the Cleveland and Framingham datasets demonstrate that the optimized XGBoost model achieved the highest performance, with specificity and sensitivity values of 0.971 and 0.989 on the Cleveland dataset and 0.921 and 0.975 on the Framingham dataset, respectively. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Health)
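A minimal sketch of the pipeline this abstract describes might tune XGBoost with Bayesian search and then explain the fitted model with SHAP. The CSV path, label column, and search ranges below are placeholders, not the study's exact configuration.

# Minimal sketch, assuming a CSV export of a heart-disease dataset with a binary
# "target" column: Bayesian hyperparameter tuning of XGBoost followed by SHAP explanations.
import pandas as pd
import shap
from skopt import BayesSearchCV
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("cleveland.csv")                     # placeholder path
X, y = df.drop(columns=["target"]), df["target"]      # assumed label column
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

search = BayesSearchCV(
    XGBClassifier(eval_metric="logloss"),
    {
        "n_estimators": (100, 600),
        "max_depth": (2, 8),
        "learning_rate": (0.01, 0.3, "log-uniform"),
    },
    n_iter=25, cv=5, scoring="roc_auc", random_state=42,
)
search.fit(X_tr, y_tr)
print("Held-out ROC AUC:", search.score(X_te, y_te))

# SHAP values indicate which clinical features drive individual predictions.
explainer = shap.TreeExplainer(search.best_estimator_)
shap.summary_plot(explainer.shap_values(X_te), X_te)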
Show Figures

Figure 1

14 pages, 1052 KiB  
Article
The Effect of Training Data Size on Disaster Classification from Twitter
by Dimitrios Effrosynidis, Georgios Sylaios and Avi Arampatzis
Information 2024, 15(7), 393; https://doi.org/10.3390/info15070393 - 8 Jul 2024
Viewed by 863
Abstract
In the realm of disaster-related tweet classification, this study presents a comprehensive analysis of various machine learning algorithms, shedding light on crucial factors influencing algorithm performance. The exceptional efficacy of simpler models is attributed to the quality and size of the dataset, enabling [...] Read more.
In the realm of disaster-related tweet classification, this study presents a comprehensive analysis of various machine learning algorithms, shedding light on crucial factors influencing algorithm performance. The exceptional efficacy of simpler models is attributed to the quality and size of the dataset, enabling them to discern meaningful patterns. Complex models, while powerful, are time-consuming and prone to overfitting, particularly with smaller or noisier datasets. Hyperparameter tuning, notably through Bayesian optimization, emerges as a pivotal tool for enhancing the performance of simpler models. A practical guideline for algorithm selection based on dataset size is proposed: Bernoulli Naive Bayes for datasets below 5000 tweets and Logistic Regression for datasets exceeding 5000 tweets. Notably, Logistic Regression shines at 20,000 tweets, delivering an impressive combination of performance, speed, and interpretability. A further improvement of 0.5% is achieved by applying ensemble and stacking methods. Full article
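The size-based guideline can be expressed as a small selection helper, sketched below with assumed vectorizer settings and toy data rather than the study's exact configuration.

# Toy sketch of the guideline from the abstract: Bernoulli Naive Bayes under ~5000 tweets,
# Logistic Regression above that. Vectorizer choices and the example tweets are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.pipeline import make_pipeline

def build_classifier(n_tweets):
    # Pick the model family based on corpus size, per the proposed guideline.
    model = BernoulliNB() if n_tweets < 5000 else LogisticRegression(max_iter=1000)
    return make_pipeline(TfidfVectorizer(binary=(n_tweets < 5000)), model)

tweets = ["flood waters rising downtown", "great concert last night"]
labels = [1, 0]  # 1 = disaster-related, 0 = unrelated
clf = build_classifier(len(tweets))
clf.fit(tweets, labels)
print(clf.predict(["earthquake reported near the coast"]))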
Show Figures

Figure 1
