Next Issue: Volume 3, June
Previous Issue: Volume 2, December

AI, Volume 3, Issue 1 (March 2022) – 14 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
Review
Systematic Review of Computer Vision Semantic Analysis in Socially Assistive Robotics
AI 2022, 3(1), 229-249; https://doi.org/10.3390/ai3010014 - 17 Mar 2022
Viewed by 871
Abstract
The simultaneous surges in research on socially assistive robotics and on computer vision can be seen as a result of the shifting and growing needs of our global population, especially the demand for social care from an expanding population in need of assistance. The merging of these fields creates demand for more complex and autonomous solutions, which often struggle with the lack of contextual understanding of tasks that semantic analysis can provide, as well as with hardware limitations. Solving these issues can provide more comfortable and safer environments for the individuals most in need. This work aimed to understand the current scope of science at the intersection of computer vision and semantic analysis in lightweight models for robotic assistance. We therefore present a systematic review of visual semantics works concerned with assistive robotics, and we discuss the trends and possible research gaps in those fields. We detail our research protocol, present the state of the art and future trends, and answer five pertinent research questions. Out of 459 articles, 22 works matching the defined scope were selected, rated on 8 quality criteria relevant to our search, and discussed in depth. Our results point to an emerging field of research with challenging gaps to be explored by the academic community. Data on the databases searched, the years of publication, and the methods and datasets discussed are presented. We observe two main trends in current visual semantic analysis methods: first, an abstraction of contextual data to enable an automated understanding of tasks; second, a clearer formalization of model compaction metrics. Full article
(This article belongs to the Topic Artificial Intelligence (AI) in Medical Imaging)

Article
Rule-Enhanced Active Learning for Semi-Automated Weak Supervision
AI 2022, 3(1), 211-228; https://doi.org/10.3390/ai3010013 - 16 Mar 2022
Viewed by 745
Abstract
A major bottleneck preventing the extension of deep learning systems to new domains is the prohibitive cost of acquiring sufficient training labels. Alternatives such as weak supervision, active learning, and fine-tuning of pretrained models reduce this burden but require substantial human input to select a highly informative subset of instances or to curate labeling functions. REGAL (Rule-Enhanced Generative Active Learning) is an improved framework for weakly supervised text classification that performs active learning over labeling functions rather than individual instances. REGAL interactively creates high-quality labeling patterns from raw text, enabling a single annotator to accurately label an entire dataset after initialization with three keywords for each class. Experiments demonstrate that REGAL extracts up to 3 times as many high-accuracy labeling functions from text as current state-of-the-art methods for interactive weak supervision, enabling REGAL to dramatically reduce the annotation burden of writing labeling functions for weak supervision. Statistical analysis reveals that REGAL performs as well as or significantly better than interactive weak supervision on five of six commonly used natural language processing (NLP) baseline datasets. Full article
(This article belongs to the Topic Methods for Data Labelling for Intelligent Systems)
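For readers unfamiliar with labeling functions, the toy sketch below shows the general idea of keyword-seeded weak supervision (class keywords vote on each document and the votes are aggregated). The keywords, labels, and majority-vote aggregation are illustrative assumptions, not REGAL's actual interactive procedure.

```python
# Minimal illustration of keyword-seeded labeling functions for weak supervision.
# This is NOT the REGAL implementation; the keywords, labels, and majority-vote
# aggregation are hypothetical stand-ins for the general idea of labeling a
# corpus from a handful of class keywords.

from collections import Counter

SEED_KEYWORDS = {            # hypothetical: three keywords per class
    "sports":   ["game", "team", "score"],
    "politics": ["election", "senate", "policy"],
}

def make_keyword_lf(label, keyword):
    """Return a labeling function that votes `label` if `keyword` occurs."""
    def lf(text):
        return label if keyword in text.lower() else None   # None = abstain
    return lf

labeling_functions = [
    make_keyword_lf(label, kw)
    for label, kws in SEED_KEYWORDS.items()
    for kw in kws
]

def weak_label(text):
    """Aggregate non-abstaining votes by simple majority."""
    votes = [v for v in (lf(text) for lf in labeling_functions) if v is not None]
    return Counter(votes).most_common(1)[0][0] if votes else None

print(weak_label("The team won the game with a late score"))     # sports
print(weak_label("The senate debated the new election policy"))  # politics
```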

Article
Abstract Reservoir Computing
AI 2022, 3(1), 194-210; https://doi.org/10.3390/ai3010012 - 10 Mar 2022
Viewed by 701
Abstract
Noise of any kind can be an issue when translating results from simulations to the real world. We suddenly have to deal with building tolerances, faulty sensors, or just noisy sensor readings. This is especially evident in systems with many free parameters, such as the ones used in physical reservoir computing. By abstracting away these kinds of noise sources using intervals, we derive a regularized training regime for reservoir computing that uses sets of possible reservoir states. Numerical simulations are used to show the effectiveness of our approach against different sources of error that can appear in real-world scenarios and to compare it with standard approaches. Our results support the application of interval arithmetic to improve the robustness of mass-spring networks trained in simulations. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
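The toy sketch below illustrates the flavor of the approach: a readout for a random reservoir is trained not only on the observed states but also on the endpoints of an interval around each state (state ± eps), approximating "sets of possible reservoir states" with simple augmentation. The reservoir size, interval width, and ridge parameter are assumptions; the paper's actual interval-arithmetic training is not reproduced.

```python
# Rough sketch of interval-style regularization of a reservoir readout.
# The +/- eps endpoint augmentation below is an assumption standing in for
# true interval arithmetic; it is not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
N, T, eps, ridge = 100, 500, 0.05, 1e-4

# Random echo-state reservoir driven by a sine input; target = next input value.
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.uniform(-0.5, 0.5, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

u = np.sin(0.2 * np.arange(T + 1))
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x
targets = u[1:T + 1]

# "Interval" augmentation: also train on the endpoints state - eps and state + eps.
X = np.vstack([states, states - eps, states + eps])
y = np.concatenate([targets, targets, targets])

# Ridge-regression readout trained on the augmented state set.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

# Evaluate on noisy states to mimic real-world sensor noise.
noisy = states + rng.normal(0, eps, size=states.shape)
print("test RMSE:", np.sqrt(np.mean((noisy @ W_out - targets) ** 2)))
```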

Article
Weight-Quantized SqueezeNet for Resource-Constrained Robot Vacuums for Indoor Obstacle Classification
AI 2022, 3(1), 180-193; https://doi.org/10.3390/ai3010011 - 09 Mar 2022
Cited by 2 | Viewed by 764
Abstract
With the rapid development of artificial intelligence (AI) theory, particularly deep learning neural networks, robot vacuums equipped with AI power can automatically clean indoor floors by using intelligent programming and vacuuming services. To date, several deep AI models have been proposed to distinguish indoor objects between cleanable litter and noncleanable hazardous obstacles. Unfortunately, these existing deep AI models focus entirely on the accuracy enhancement of object classification, and little effort has been made to minimize the memory size and implementation cost of AI models. As a result, these existing deep AI models require far more memory space than a typical robot vacuum can provide. To address this shortcoming, this paper aims to study and find an efficient deep AI model that can achieve a good balance between classification accuracy and memory usage (i.e., implementation cost). In this work, we propose a weight-quantized SqueezeNet model for robot vacuums. This model can distinguish indoor cleanable litter from noncleanable hazardous obstacles based on images or video captured by robot vacuums. Furthermore, we collect videos and pictures captured by the built-in cameras of robot vacuums and use them to construct a diverse dataset. The dataset contains 20,000 images with a ground-view perspective of dining rooms, kitchens and living rooms for various houses under different lighting conditions. Experimental results show that the proposed deep AI model can achieve comparable object classification accuracy of around 93% while reducing memory usage by at least 22.5 times. More importantly, the memory footprint required by our AI model is only 0.8 MB, indicating that this model can run smoothly on resource-constrained robot vacuums, where low-end processors or microcontrollers are dedicated to running AI algorithms. Full article
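The sketch below gives a back-of-the-envelope sense of where 8-bit weight quantization saves memory on SqueezeNet. The symmetric per-tensor int8 scheme and the two-class head are assumptions for illustration, not the authors' quantization pipeline.

```python
# Illustrative weight quantization of SqueezeNet to int8 (assumed scheme, not
# the paper's pipeline), showing the rough factor-of-4 storage reduction.

import torch
from torchvision.models import squeezenet1_1

# Two classes assumed: cleanable litter vs. noncleanable hazardous obstacle.
model = squeezenet1_1(num_classes=2)

fp32_bytes, int8_bytes = 0, 0
for name, p in model.named_parameters():
    fp32_bytes += p.numel() * 4                        # float32 storage
    # Hypothetical symmetric per-tensor quantization to int8.
    scale = p.detach().abs().max().clamp(min=1e-8) / 127.0
    q = torch.clamp(torch.round(p.detach() / scale), -128, 127).to(torch.int8)
    int8_bytes += q.numel() + 4                        # int8 weights + one float32 scale

print(f"float32 weights: {fp32_bytes / 1e6:.2f} MB")
print(f"int8 weights:    {int8_bytes / 1e6:.2f} MB (~4x smaller)")
```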

Article
DeepSleep 2.0: Automated Sleep Arousal Segmentation via Deep Learning
AI 2022, 3(1), 164-179; https://doi.org/10.3390/ai3010010 - 01 Mar 2022
Viewed by 1506
Abstract
DeepSleep 2.0 is a compact version of DeepSleep, a state-of-the-art, U-Net-inspired, fully convolutional deep neural network, which achieved the highest unofficial score in the 2018 PhysioNet Computing Challenge. The proposed network architecture has a compact encoder/decoder structure containing only 740,551 trainable parameters. The input to the network is a full-length multichannel polysomnographic recording signal. The network has been designed and optimized to efficiently predict nonapnea sleep arousals on held-out test data at a 5 ms resolution level, while not compromising the prediction accuracy. When compared to DeepSleep, the experimental results in terms of gross area under the precision–recall curve (AUPRC) and gross area under the receiver operating characteristic curve (AUROC) suggest that a lightweight architecture can achieve similar prediction performance at a lower computational cost. Full article
(This article belongs to the Section Medical & Healthcare AI)
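A toy 1-D fully convolutional encoder/decoder in the same spirit (dense per-time-step arousal probabilities from a multichannel signal) is sketched below; the channel counts, depth, and input dimensions are illustrative assumptions, not the DeepSleep 2.0 architecture.

```python
# Toy 1-D encoder/decoder producing a dense arousal probability per time step.
# Channel counts and depth are illustrative assumptions, not DeepSleep 2.0.

import torch
import torch.nn as nn

class TinySleepSegmenter(nn.Module):
    def __init__(self, in_channels=13):         # e.g. 13 polysomnography channels (assumed)
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),                     # downsample x4
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="nearest"),
            nn.Conv1d(32, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=1),     # per-time-step arousal logit
        )

    def forward(self, x):                        # x: (batch, channels, time)
        return self.dec(self.enc(x))             # (batch, 1, time)

signal = torch.randn(1, 13, 4096)                # fake multichannel recording
probs = torch.sigmoid(TinySleepSegmenter()(signal))
print(probs.shape)                               # torch.Size([1, 1, 4096])
```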

Article
An Artificial Neural Network-Based Approach for Predicting the COVID-19 Daily Effective Reproduction Number Rt in Italy
AI 2022, 3(1), 146-163; https://doi.org/10.3390/ai3010009 - 26 Feb 2022
Viewed by 1073
Abstract
Since December 2019, the novel coronavirus disease (COVID-19) has had a considerable impact on the health and socio-economic fabric of Italy. The effective reproduction number Rt is one of the most representative indicators of the contagion status as it reports the number of new infections caused by an infected subject in a partially immunized population. The task of predicting Rt values forward in time is challenging and, historically, it has been addressed by exploiting compartmental models or statistical frameworks. The present study proposes an Artificial Neural Networks-based approach to predict the Rt temporal trend at a daily resolution. For each Italian region and autonomous province, 21 daily COVID-19 indicators were exploited for the 7-day ahead prediction of the Rt trend by means of different neural network architectures, i.e., Feed Forward, Mono-Dimensional Convolutional, and Long Short-Term Memory. Focusing on Lombardy, which is one of the most affected regions, the predictions proved to be very accurate, with a minimum Root Mean Squared Error (RMSE) ranging from 0.035 at day t + 1 to 0.106 at day t + 7. Overall, the results show that it is possible to obtain accurate forecasts in Italy at a daily temporal resolution instead of the weekly resolution characterizing the official Rt data. Full article
(This article belongs to the Section Medical & Healthcare AI)
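As a rough illustration of one of the architectures mentioned (an LSTM mapping a window of the 21 daily indicators to the next 7 days of Rt), the sketch below trains on synthetic data; the window length, hidden size, and data are assumptions, not the study's configuration.

```python
# Sketch of an LSTM for 7-day-ahead Rt prediction from 21 daily indicators.
# Window length, hidden size, and the synthetic data are assumptions.

import torch
import torch.nn as nn

WINDOW, N_FEATURES, HORIZON = 14, 21, 7

class RtLSTM(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, HORIZON)   # Rt at t+1 ... t+7

    def forward(self, x):                        # x: (batch, WINDOW, N_FEATURES)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                  # (batch, HORIZON)

model = RtLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-in for per-region indicator windows and future Rt values.
x = torch.randn(32, WINDOW, N_FEATURES)
y = torch.rand(32, HORIZON) + 0.5

for _ in range(5):                               # a few training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("RMSE:", loss.sqrt().item())
```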

Article
Client Selection in Federated Learning under Imperfections in Environment
AI 2022, 3(1), 124-145; https://doi.org/10.3390/ai3010008 - 25 Feb 2022
Viewed by 859
Abstract
Federated learning promises an elegant solution for learning global models across distributed and privacy-protected datasets. However, challenges related to skewed data distribution, limited computational and communication resources, data poisoning, and free-riding clients affect the performance of federated learning. Selection of the best clients for each round of learning is critical in alleviating these problems. We propose a novel sampling method named the irrelevance sampling technique. Our method is founded on a novel irrelevance score that condenses the client characteristics into a single floating-point value, which elegantly classifies each client into one of three sign-defined pools for easy sampling. It is a computationally inexpensive, intuitive and privacy-preserving sampling technique that selects a subset of clients based on the quality and quantity of data on edge devices. It achieves 50–80% faster convergence even under highly skewed data distributions and in the presence of free riders arising from a lack of data or severe class imbalance, under both Independent and Identically Distributed (IID) and non-IID conditions. It also shows good performance on practical application datasets. Full article
(This article belongs to the Section AI Systems: Theory and Applications)
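The sketch below only illustrates the sign-based pooling idea: a single per-client score whose sign routes the client into one of three pools. The placeholder_score function is hypothetical; the paper's actual irrelevance score is not reproduced here.

```python
# Sign-based client pooling sketch. placeholder_score is a hypothetical
# stand-in for the paper's irrelevance score (data volume and class balance,
# penalized for suspected free riding); it is not the authors' definition.

import random

def placeholder_score(client):
    """Positive = useful, near zero = uncertain, negative = likely free rider."""
    if client["n_samples"] < 10:                            # almost no data
        return -1.0
    balance = 1.0 - abs(client["class_ratio"] - 0.5) * 2    # 1 = balanced, 0 = one class
    return (client["n_samples"] / 1000.0) * balance - 0.1

def pool(score, tol=0.05):
    if score > tol:
        return "select"
    if score < -tol:
        return "exclude"
    return "maybe"

random.seed(1)
clients = [{"n_samples": random.randint(0, 2000),
            "class_ratio": random.random()} for _ in range(8)]

for i, c in enumerate(clients):
    s = placeholder_score(c)
    print(f"client {i}: n={c['n_samples']:4d} ratio={c['class_ratio']:.2f} "
          f"score={s:+.2f} -> {pool(s)}")
```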

Article
Evolution towards Smart and Software-Defined Internet of Things
AI 2022, 3(1), 100-123; https://doi.org/10.3390/ai3010007 - 21 Feb 2022
Cited by 1 | Viewed by 995
Abstract
The Internet of Things (IoT) is a mesh network of interconnected objects with unique identifiers that can transmit data and communicate with one another without the need for human intervention. The IoT has brought the future closer to us. It has opened up new and vast domains for connecting not only people, but also all kinds of simple objects and phenomena all around us. With billions of heterogeneous devices connected to the Internet, the network architecture must evolve to accommodate the expected increase in data generation while also improving the security and efficiency of connectivity. Traditional IoT architectures are primitive and incapable of extending functionality and productivity to the IoT infrastructure’s desired levels. Software-Defined Networking (SDN) and virtualization are two promising technologies for cost-effectively handling the scale and versatility required for IoT. In this paper, we discuss traditional IoT networks and the need for SDN and Network Function Virtualization (NFV), followed by an analysis of SDN and NFV solutions for implementing IoT in various ways. Full article
(This article belongs to the Special Issue Feature Papers for AI)

Systematic Review
Hydropower Operation Optimization Using Machine Learning: A Systematic Review
AI 2022, 3(1), 78-99; https://doi.org/10.3390/ai3010006 - 11 Feb 2022
Cited by 1 | Viewed by 1035
Abstract
The optimal dispatch of hydropower plants consists of the challenge of taking advantage of both the available head and river flows. Despite the objective of delivering the maximum power to the grid, some variables are uncertain, dynamic, non-linear, and non-parametric. Nevertheless, with the evolution of computer science, some models may help hydropower generating players maximize the power production of hydropower plants. Over the years, several studies have explored Machine Learning (ML) techniques to optimize hydropower plant dispatch, applied in the pre-operation, real-time and post-operation phases. Hence, this work consists of a systematic review analyzing how ML models are being used to optimize energy production from hydropower plants. The analysis focused on criteria that affect energy generation forecasts, operating policies, and performance evaluation. Our discussion covers ML techniques, schedule forecasts, river systems, and ML applications for hydropower optimization. The results showed that ML techniques have been applied most often to river flow forecasting and reservoir operation optimization. The long-term scheduling horizon is the most common application in the analyzed studies, and supervised learning was the most widely applied ML segment. Despite being a widely explored theme, new areas present opportunities for disruptive research, such as real-time schedule forecasting, run-of-river system optimization and low-head hydropower plant operation. Full article
(This article belongs to the Section AI Systems: Theory and Applications)

Article
Situational Awareness: Techniques, Challenges, and Prospects
AI 2022, 3(1), 55-77; https://doi.org/10.3390/ai3010005 - 29 Jan 2022
Viewed by 1478
Abstract
Situational awareness (SA) is defined as the perception of entities in the environment, comprehension of their meaning, and projection of their status in the near future. From an Air Force perspective, SA refers to the capability to comprehend and project the current and future disposition of red and blue aircraft and surface threats within an airspace. In this article, we propose a model for SA and dynamic decision-making that incorporates artificial intelligence and dynamic data-driven application systems to adapt measurements and resources in accordance with changing situations. We discuss the measurement of SA and the challenges associated with its quantification. We then elaborate on a plethora of techniques and technologies that help improve SA, ranging from different modes of intelligence gathering to artificial intelligence to automated vision systems. We then present different application domains of SA, including the battlefield, gray zone warfare, military and air bases, homeland security and defense, and critical infrastructure. Finally, we conclude the article with insights into the future of SA. Full article

Editorial
Acknowledgment to Reviewers of AI in 2021
AI 2022, 3(1), 53-54; https://doi.org/10.3390/ai3010004 - 28 Jan 2022
Viewed by 697
Abstract
Rigorous peer reviews are the basis of high-quality academic publishing [...] Full article
Article
Multi-CartoonGAN with Conditional Adaptive Instance-Layer Normalization for Conditional Artistic Face Translation
AI 2022, 3(1), 37-52; https://doi.org/10.3390/ai3010003 - 24 Jan 2022
Viewed by 1113
Abstract
In CycleGAN, an image-to-image translation architecture was established without the use of paired datasets by employing both adversarial and cycle consistency loss. The success of CycleGAN was followed by numerous studies that proposed new translation models. For example, StarGAN works as a multi-domain translation model based on a single generator–discriminator pair, while U-GAT-IT aims to close the large face-to-anime translation gap by adapting its original normalization to the process. However, constructing robust and conditional translation models requires tradeoffs when the computational costs of training on graphic processing units (GPUs) are considered. This is because, if designers attempt to implement conditional models with complex convolutional neural network (CNN) layers and normalization functions, the GPUs will need to secure large amounts of memory when the model begins training. This study aims to resolve this tradeoff issue via the development of Multi-CartoonGAN, which is an improved CartoonGAN architecture that can output conditional translated images and adapt to large feature gap translations between the source and target domains. To accomplish this, Multi-CartoonGAN reduces the computational cost by using a pretrained VGGNet to calculate the consistency loss instead of reusing the generator. Additionally, we report on the development of the conditional adaptive layer-instance normalization (CAdaLIN) process for use with our model to make it robust to unique feature translations. We performed extensive experiments using Multi-CartoonGAN to translate real-world face images into three different artistic styles: portrait, anime, and caricature. An analysis of the visualized translated images and GPU computation comparison shows that our model is capable of performing translations with unique style features that follow the conditional inputs and at a reduced GPU computational cost during training. Full article
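For orientation, the sketch below implements an AdaLIN-style block (a learned blend of instance and layer normalization) whose affine parameters are produced from a style-condition vector; the exact conditioning used by CAdaLIN in the paper may differ, so treat this as an illustrative assumption.

```python
# AdaLIN-style normalization with condition-dependent affine parameters.
# The conditioning scheme here is an assumption, not the paper's CAdaLIN.

import torch
import torch.nn as nn

class ConditionalAdaLIN(nn.Module):
    def __init__(self, channels, cond_dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.rho = nn.Parameter(torch.full((1, channels, 1, 1), 0.9))
        self.to_affine = nn.Linear(cond_dim, channels * 2)     # -> gamma, beta

    def forward(self, x, cond):                    # x: (B, C, H, W), cond: (B, cond_dim)
        # Instance-norm statistics (per sample, per channel).
        in_mean = x.mean(dim=(2, 3), keepdim=True)
        in_var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_in = (x - in_mean) / torch.sqrt(in_var + self.eps)
        # Layer-norm statistics (per sample, over all channels and positions).
        ln_mean = x.mean(dim=(1, 2, 3), keepdim=True)
        ln_var = x.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        x_ln = (x - ln_mean) / torch.sqrt(ln_var + self.eps)
        # Blend the two, then apply condition-dependent scale and shift.
        rho = self.rho.clamp(0.0, 1.0)
        mixed = rho * x_in + (1.0 - rho) * x_ln
        gamma, beta = self.to_affine(cond).chunk(2, dim=1)
        return mixed * gamma[:, :, None, None] + beta[:, :, None, None]

feat = torch.randn(2, 64, 32, 32)
style = torch.eye(3)[[0, 2]]                       # one-hot domain labels, e.g. portrait, caricature
print(ConditionalAdaLIN(64, 3)(feat, style).shape) # torch.Size([2, 64, 32, 32])
```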

Article
Cyberattack and Fraud Detection Using Ensemble Stacking
AI 2022, 3(1), 22-36; https://doi.org/10.3390/ai3010002 - 18 Jan 2022
Cited by 1 | Viewed by 1034
Abstract
Smart devices are used in the era of the Internet of Things (IoT) to provide efficient and reliable access to services. IoT technology can recognize comprehensive information, reliably deliver information, and intelligently process that information. Modern industrial systems have become increasingly dependent on data networks, control systems, and sensors. The number of IoT devices and the protocols they use has increased, which has led to an increase in attacks. Global operations can be disrupted, and substantial economic losses can be incurred due to these attacks. Cyberattacks have been detected using various techniques, such as deep learning and machine learning. In this paper, we propose an ensemble stacking method to effectively reveal cyberattacks in the IoT with high performance. Experiments were conducted on three different datasets: the credit card, NSL-KDD, and UNSW datasets. The proposed stacked ensemble classifier outperformed the individual base model classifiers. Full article
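A compact example of stacked ensembling with scikit-learn's StackingClassifier is shown below on a synthetic imbalanced dataset standing in for the fraud/intrusion data; the base learners and meta-learner are illustrative choices, not necessarily those used in the paper.

```python
# Stacked ensemble on a synthetic stand-in for fraud/attack records.
# Base learners and meta-learner are illustrative, not the paper's exact setup.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Imbalanced binary data: ~10% "attack/fraud" class.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
for name, est in stack.named_estimators_.items():
    print(f"{name} alone:", est.score(X_te, y_te))
```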

Article
DPDRC, a Novel Machine Learning Method about the Decision Process for Dimensionality Reduction before Clustering
AI 2022, 3(1), 1-21; https://doi.org/10.3390/ai3010001 - 29 Dec 2021
Viewed by 870
Abstract
This paper examines the critical decision process of reducing the dimensionality of a dataset before applying a clustering algorithm. It is always a challenge to choose between extracting or selecting features. Evaluating the importance of the features is not straightforward, since the most popular methods for doing so are usually intended for supervised learning. This paper proposes a novel method called “Decision Process for Dimensionality Reduction before Clustering” (DPDRC). It chooses the best dimensionality reduction method (selection or extraction) according to the data scientist’s parameters and the profile of the data, aiming to apply a clustering process at the end. It uses a Feature Ranking Process Based on Silhouette Decomposition (FRSD) algorithm, a Principal Component Analysis (PCA) algorithm, and a K-means algorithm along with its metric, the Silhouette Index (SI). This paper presents five scenarios based on different parameters. This research also aims to discuss the impacts, advantages, and disadvantages of each choice that can be made in this unsupervised learning process. Full article
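The sketch below shows the kind of comparison DPDRC automates: reduce the data either by selecting a feature subset or by PCA extraction, cluster with K-means, and let the Silhouette Index arbitrate. The variance-based selection step is a crude stand-in for FRSD, which is not reproduced here.

```python
# Selection vs. extraction before clustering, arbitrated by the silhouette score.
# The variance-based selection is a hypothetical stand-in for FRSD.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score

# Four well-separated clusters in 10 dimensions; the last 5 features are
# replaced with pure noise so that dimensionality reduction has something to discard.
X, _ = make_blobs(n_samples=600, n_features=10, centers=4, random_state=0)
X[:, 5:] = np.random.default_rng(0).normal(size=(600, 5))

k, d = 4, 3
# Option 1: feature *selection* (keep the d highest-variance features).
selected = X[:, np.argsort(X.var(axis=0))[::-1][:d]]
# Option 2: feature *extraction* with PCA.
extracted = PCA(n_components=d, random_state=0).fit_transform(X)

# The Silhouette Index arbitrates between the two reductions.
for name, Xr in [("selection", selected), ("extraction (PCA)", extracted)]:
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xr)
    print(f"{name}: silhouette = {silhouette_score(Xr, labels):.3f}")
```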
