 
 

Topic Editors

Department of Electronic Engineering, National Formosa University, Yunlin City 632, Taiwan
Director of the Cognitions Humaine et Artificielle Laboratory, University Paris 8, 93526 Saint-Denis, France
Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
Department of Recreation and Health Care Management, Chia Nan University of Pharmacy & Science, Tainan City 71710, Taiwan
Department of Digital Media Design, National Yunlin University of Science and Technology, Yunlin 640, Taiwan
Department of Electrical Engineering, Lunghwa University of Science and Technology, Taoyuan 333, Taiwan

Electronic Communications, IOT and Big Data

Abstract submission deadline
closed (30 September 2023)
Manuscript submission deadline
closed (30 November 2023)
Viewed by
84521

Topic Information

Dear Colleagues,

The 2nd IEEE International Conference on Electronic Communications, Internet of Things and Big Data (ICEIB 2022) will be held in Hsinchu, Taiwan, from 15 to 17 July 2022 (http://www.iceib.asia/). It will provide a communication platform for high-tech professionals and researchers working on electronic communications, the Internet of Things and big data. The booming economic development in Asia, and especially the region's advances in electronic communications, the Internet of Things and big data, has attracted great attention from universities, research institutions and many companies. This conference will focus on research with innovative ideas or results and practical applications. Topics of interest include, but are not limited to, the following:

I. Big Data and Cloud Computing:

1) Models and algorithms of big data;

2) Architecture of big data;

3) Big data management;

4) Big data analysis and processing;      

5) Security and privacy of big data;  

6) Big data in smart cities; 

7) Search, mining and visualization of big data;  

8) Technologies, services and application of big data; 

9) Edge computing;

10) Architectures and systems of cloud computing;  

11) Models, simulations, designs and paradigms of cloud computing;  

12) Management and operations of cloud computing;

13) Technologies, services and applications of cloud computing;

14) Dynamic resource supply and consumption;  

15) Management and analysis of geospatial big data;

16) UAV oblique photography and ground 3D real scene modeling;

17) Aerial photography of UAV loaded with multispectral sensors.

II. Technologies and Application of Artificial Intelligence:

1) Basic theory and application of Artificial Intelligence;

2) Knowledge science and knowledge engineering;    

3) Machine learning and data mining;  

4) Machine perception and virtual reality;    

5) Natural language processing and understanding;  

6) Neural networks and deep learning;   

7) Pattern recognition theory and application;   

8) Rough set and soft computing;

9) Biometric identification;  

10) Computer vision and image processing;  

11) Evolutionary calculation;

12) Information retrieval and web search;  

13) Intelligent planning and scheduling;

14) Intelligent control;

15) Classification and change detection of remote sensing images or aerial images.   

III. Robotics Science and Engineering:   

1) Robot control;  

2) Mobile robotics;  

3) Intelligent elderly-care robots;

4) Mobile sensor networks;   

5) Perception systems;     

6) Micro robots and micro-manipulation;      

7) Visual servoing;

8) Search, rescue and field robotics;     

9) Robot sensing and data fusion;      

10) Indoor localization and navigation;    

11) Dexterous manipulation;     

12) Medical robots and bio-robotics;      

13) Human centered systems;      

14) Space and underwater robots;     

15) Tele-robotics.

IV. Internet of Things and Sensor Technology:

1) Technology architecture of Internet of Things;

2) Sensors in Internet of Things;

3) Perception technology of Internet of Things information;

4) Multi-terminal cooperative control and intelligent Internet of Things terminals;

5) Multi-network resource sharing in the environment of Internet of Things;  

6) Heterogeneous fusion and multi-domain collaboration in the Internet of Things environment;

7) SDN and intelligent service network;

8) Technology and its application in the Internet of Things;  

9) Cloud computing and big data in the Internet of Things;  

10) Information analysis and processing of the Internet of Things; 

11) CPS technology and intelligent information system;  

12) Internet of Things technology standard;

13) Internet of Things information security;

14) Narrowband Internet of Things (NB-IoT);

15) Smart cities;

16) Smart farming;

17) Smart grids;  

18) Digital health/telehealth/telemedicine.

V. Special Session: Intelligent Big Data Analysis and Applications

1) Big data and its application;

2) Data mining and its application;

3) Cloud computing and its application;

4) Deep learning and its application;

5) Fuzzy theory and its application;

6) Evolutionary computing and its application.

Prof. Dr. Teen-Hang Meen
Prof. Dr. Charles Tijus
Prof. Dr. Cheng-Chien Kuo
Prof. Dr. Kuei-Shu Hsu
Prof. Dr. Kuo-Kuang Fan
Prof. Dr. Jih-Fu Tu
Topic Editors

Keywords

  • electronic communications
  • Internet of Things
  • big data
  • robotics science
  • sensor technology

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Applied Sciences (applsci) | 2.5 | 5.3 | 2011 | 17.8 days | CHF 2400
Big Data and Cognitive Computing (BDCC) | 3.7 | 7.1 | 2017 | 18 days | CHF 1800
Computers (computers) | 2.6 | 5.4 | 2012 | 17.2 days | CHF 1800
Electronics (electronics) | 2.6 | 5.3 | 2012 | 16.8 days | CHF 2400
Journal of Sensor and Actuator Networks (jsan) | 3.3 | 7.9 | 2012 | 22.6 days | CHF 2000
Inventions (inventions) | 2.1 | 4.8 | 2016 | 21.2 days | CHF 1800
Technologies (technologies) | 4.2 | 6.7 | 2013 | 24.6 days | CHF 1600
Telecom (telecom) | 2.1 | 4.8 | 2020 | 22.7 days | CHF 1200

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of the following benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (38 papers)

31 pages, 4597 KiB  
Review
Emerging Industrial Internet of Things Open-Source Platforms and Applications in Diverse Sectors
by Eyuel Debebe Ayele, Stylianos Gavriel, Javier Ferreira Gonzalez, Wouter B. Teeuw, Panayiotis Philimis and Ghayoor Gillani
Telecom 2024, 5(2), 369-399; https://doi.org/10.3390/telecom5020019 - 14 May 2024
Cited by 1 | Viewed by 2182
Abstract
Revolutionary advances in technology have been seen in many industries, with the IIoT being a prime example. The IIoT creates a network of interconnected devices, allowing smooth communication and interoperability in industrial settings. This not only boosts efficiency, productivity, and safety but also provides transformative solutions for various sectors. This research looks into open-source IIoT and edge platforms that are applicable to a range of applications with the aim of finding and developing high-potential solutions. It highlights the effect of open-source IIoT and edge computing platforms on traditional IIoT applications, showing how these platforms make development and deployment processes easier. Popular open-source IIoT platforms include DeviceHive and Thingsboard, while EdgeX Foundry is a key platform for edge computing, allowing IIoT applications to be deployed closer to data sources, thus reducing latency and conserving bandwidth. This study seeks to identify potential future domains for the implementation of IIoT solutions using these open-source platforms. Additionally, each sector is evaluated based on various criteria, such as development requirement analyses, market demand projections, the examination of leading companies and emerging startups in each domain, and the application of the International Patent Classification (IPC) scheme for in-depth sector analysis. Full article

27 pages, 1011 KiB  
Article
TinyGS vs. SatNOGS: A Comparative Analysis of Open-Source Satellite Ground Station Networks
by João Sá Gomes and Alexandre Ferreira da Silva
Telecom 2024, 5(1), 228-254; https://doi.org/10.3390/telecom5010012 - 7 Mar 2024
Viewed by 3509
Abstract
In recent years, two of the largest open-source ground station (GS) networks capable of enabling Earth–satellite communication have emerged: TinyGS and SatNOGS. These open-source projects enable anyone to build their own GS inexpensively and easily, integrate into a GS network, and receive data from satellites listed in the database. Additionally, they enable satellite developers to add satellites to the databases of these projects and take advantage of this GS network to receive data from the satellites. This article introduces the TinyGS and SatNOGS projects and conducts a comparative analysis between them. Generally, the TinyGS project seems to have a simpler implementation as well as lower associated costs. In a deeper analysis, it was observed that, on 29 July 2023, the TinyGS project had a higher number of online GSs and a more favorable geographic distribution. On the other hand, the SatNOGS project managed to communicate with and decode a larger number of satellites up to 29 July 2023. Additionally, in both projects, it was noted that frequencies between 436 and 437 MHz had the highest number of satellites with decoded data. Ultimately, the choice between these projects depends on critical parameters defined by the reader. Full article

38 pages, 4085 KiB  
Review
Reliability of LoRaWAN Communications in Mining Environments: A Survey on Challenges and Design Requirements
by Sonile K. Musonda, Musa Ndiaye, Hastings M. Libati and Adnan M. Abu-Mahfouz
J. Sens. Actuator Netw. 2024, 13(1), 16; https://doi.org/10.3390/jsan13010016 - 9 Feb 2024
Cited by 4 | Viewed by 3211
Abstract
While a robust and reliable communication network for monitoring the mining environment in a timely manner to take care of people, the planet Earth and profits is key, the mining environment is very challenging in terms of achieving reliable wireless transmission. This survey therefore investigates the reliability of LoRaWAN communication in the mining environment, identifying the challenges and design requirements. Bearing in mind that LoRaWAN is an IoT communication technology that has not yet been fully deployed in mining, the survey incorporates an investigation of LoRaWAN and other mining IoT communication technologies to determine their records of reliability, strengths and weaknesses and applications in mining. This aspect of the survey gives insight into the requirements of future mining IoT communication technologies and where LoRaWAN can be deployed in both underground and surface mining. Specific questions that the survey addresses are: (1) What is the record of reliability of LoRaWAN in mining environments? (2) What contributions have been made with regard to LoRa/LoRaWAN communication in general towards improving reliability? (3) What are the challenges and design requirements of LoRaWAN reliability in mining environments? (4) What research opportunities exist for achieving LoRaWAN communication in mining environments? In addition to recommending open research opportunities, the lessons learnt from the survey are also outlined. Full article

22 pages, 7142 KiB  
Article
A Novel Alternating μ-Law Companding Algorithm for PAPR Reduction in OFDM Systems
by Yung-Ping Tu, Zi-Teng Zhan and Yung-Fa Huang
Electronics 2024, 13(4), 694; https://doi.org/10.3390/electronics13040694 - 8 Feb 2024
Viewed by 1145
Abstract
Orthogonal frequency division multiplexing (OFDM) inherits multi-carrier systems' inevitable high peak-to-average power ratio (PAPR) problem. In this paper, a novel alternating companding technique is proposed to mitigate high PAPR. In sequential μ-law companding (SULC), tones are companded one at a time until a lower PAPR is reached, so only some of the tones need companding. The SULC scheme's PAPR and bit error rate (BER) performance is balanced and improved; however, its computational complexity is still too high for practical implementation. Therefore, this study sorted the transmission signals according to their amplitudes. Then, all the tones are divided into two groups by estimating the rough companding amount (around 54% of the subcarriers), applying traditional parallel companding to the first group and only partial μ-law companding to the second. This alternating μ-law companding (AULC) is proposed to improve the PAPR performance and simultaneously reduce complexity. Simulation results show that the proposed AULC method appreciably reduces the PAPR by about 5 dB (around 45%) compared with the original μ-law companding at a complementary cumulative distribution function (CCDF) of 10⁻⁴. Moreover, it requires only moderate complexity to outperform the other companding schemes without sacrificing BER performance in OFDM systems. Full article
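
As background, the sketch below shows how PAPR is measured and what a classic μ-law compressor does to an OFDM symbol's envelope. It illustrates the textbook baseline only, not the authors' SULC/AULC grouping, and the subcarrier count and μ value are arbitrary choices.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def mu_law_compand(x, mu=8.0):
    """Classic mu-law compressor applied to the signal envelope;
    the phase of each sample is preserved."""
    v = np.abs(x).max()                       # normalization level
    mag = v * np.log1p(mu * np.abs(x) / v) / np.log1p(mu)
    return mag * np.exp(1j * np.angle(x))

# Toy OFDM symbol: 64 random QPSK subcarriers mapped to the time domain.
rng = np.random.default_rng(0)
sym = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
x = np.fft.ifft(sym)

print(f"PAPR before companding: {papr_db(x):.2f} dB")
print(f"PAPR after companding:  {papr_db(mu_law_compand(x)):.2f} dB")
```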

14 pages, 5787 KiB  
Article
Ka-Band Miniaturized 90 nm Complementary Metal Oxide Semiconductor Wideband Rat-Race Coupler Using Left-Handed and Right-Handed Transmission Lines
by Je-Yao Chang, Tsu-Yu Lo, Pin-Yen Chen, Tan-Zhi Wei, Shih-Ping Huang, Wei-Ting Tsai, Chong-Yi Liou and Shau-Gang Mao
Electronics 2024, 13(2), 417; https://doi.org/10.3390/electronics13020417 - 19 Jan 2024
Viewed by 1093
Abstract
The traditional rat-race coupler comprises a quarter-wavelength transmission line and a three-quarter-wavelength transmission line. In this design, the slow-wave structure transmission line is employed to replace the conventional quarter-wavelength transmission line, and the three-quarter-wavelength transmission line is substituted with the left-handed transmission line. By using the TSMC CMOS 90 nm fabrication process, a circuit is created with a chip size of 300 μm × 200 μm, corresponding to the electrical size at 39 GHz of 0.039λ0 × 0.026λ0. The measured results demonstrate that the operating bandwidth is 35 to 43 GHz, with an amplitude imbalance of around 0 dB, a phase error within 1°, a return loss of less than 26 dB, and an isolation better than 40 dB. Full article
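
The quoted electrical size follows from the free-space wavelength at 39 GHz; as a quick arithmetic check:

```latex
\lambda_0 = \frac{c}{f} = \frac{3\times 10^{8}\,\mathrm{m/s}}{39\,\mathrm{GHz}} \approx 7.69\,\mathrm{mm},
\qquad
\frac{300\,\mu\mathrm{m}}{7.69\,\mathrm{mm}} \approx 0.039\,\lambda_0,
\quad
\frac{200\,\mu\mathrm{m}}{7.69\,\mathrm{mm}} \approx 0.026\,\lambda_0 .
```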

24 pages, 518 KiB  
Article
On Maximizing the Probability of Achieving Deadlines in Communication Networks
by Benjamin Becker, Christian Oberli, Tobias Meuser and Ralf Steinmetz
J. Sens. Actuator Netw. 2024, 13(1), 9; https://doi.org/10.3390/jsan13010009 - 18 Jan 2024
Cited by 1 | Viewed by 1607
Abstract
We consider the problem of meeting deadline constraints in wireless communication networks. Fulfilling deadlines depends heavily on the routing algorithm used. We study this dependence generically for a broad class of routing algorithms. For analyzing the impact of routing decisions on deadline fulfillment, we adopt a stochastic model from operations research to capture the source-to-destination delay distribution and the corresponding probability of successfully delivering data before a given deadline. Based on this model, we propose a decentralized algorithm that operates locally at each node and exchanges information solely with direct neighbors in order to determine the probabilities of achieving deadlines. A modified version of the algorithm also improves routing tables iteratively to progressively increase the deadline achievement probabilities. This modified algorithm is shown to deliver routing tables that maximize the deadline achievement probabilities for all nodes in a given network. We tested the approach by simulation and compared it with routing strategies based on established metrics, specifically the average delay, minimum hop count, and expected transmission count. Our evaluations encompass different channel quality and small-scale fading conditions, as well as various traffic load scenarios. Notably, our solution consistently outperforms the other approaches in all tested scenarios. Full article
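
The core modeling idea can be sketched briefly: if per-hop delays are independent with known discrete distributions, the source-to-destination delay distribution is their convolution, and the deadline-achievement probability is the mass at or below the deadline. The snippet below is a minimal illustration under that assumption; the paper's decentralized, neighbors-only computation and routing-table improvement are not reproduced here.

```python
import numpy as np

def deadline_probability(hop_pmfs, deadline):
    """P(sum of per-hop delays <= deadline).

    hop_pmfs: list of 1-D arrays, where hop_pmfs[i][d] is the probability
    that hop i takes d time slots. The end-to-end delay distribution is
    the convolution of the per-hop distributions.
    """
    dist = np.array([1.0])                    # zero delay with probability 1
    for pmf in hop_pmfs:
        dist = np.convolve(dist, pmf)
    return dist[: deadline + 1].sum()

# Two hops: hop 1 takes 1 or 2 slots, hop 2 takes 1, 2 or 3 slots.
hop1 = np.array([0.0, 0.7, 0.3])              # index = delay in slots
hop2 = np.array([0.0, 0.5, 0.3, 0.2])
print(deadline_probability([hop1, hop2], deadline=3))   # 0.71
```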

28 pages, 1136 KiB  
Review
A Survey of Incremental Deep Learning for Defect Detection in Manufacturing
by Reenu Mohandas, Mark Southern, Eoin O’Connell and Martin Hayes
Big Data Cogn. Comput. 2024, 8(1), 7; https://doi.org/10.3390/bdcc8010007 - 5 Jan 2024
Cited by 2 | Viewed by 3198
Abstract
Deep learning based visual cognition has greatly improved the accuracy of defect detection, reducing processing times and increasing product throughput across a variety of manufacturing use cases. There is however a continuing need for rigorous procedures to dynamically update model-based detection methods that use sequential streaming during the training phase. This paper reviews how new process, training or validation information is rigorously incorporated in real time when detection exceptions arise during inspection. In particular, consideration is given to how new tasks, classes or decision pathways are added to existing models or datasets in a controlled fashion. An analysis of studies from the incremental learning literature is presented, where the emphasis is on the mitigation of process complexity challenges such as, catastrophic forgetting. Further, practical implementation issues that are known to affect the complexity of deep learning model architecture, including memory allocation for incoming sequential data or incremental learning accuracy, is considered. The paper highlights case study results and methods that have been used to successfully mitigate such real-time manufacturing challenges. Full article
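
One family of mitigations that such surveys cover is rehearsal: a bounded memory of past examples is replayed alongside incoming data so that earlier tasks are not forgotten. The sketch below is a generic, framework-agnostic illustration using reservoir sampling, not a method attributed to any surveyed paper.

```python
import random

class ReplayBuffer:
    """Bounded memory of past examples for rehearsal-based incremental
    learning; reservoir sampling keeps the memory a uniform sample of
    everything seen so far."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.memory = []
        self.seen = 0

    def add(self, example):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            j = random.randrange(self.seen)   # reservoir sampling step
            if j < self.capacity:
                self.memory[j] = example

    def rehearsal_batch(self, new_batch, k):
        """Mix the incoming batch with up to k replayed old examples."""
        replay = random.sample(self.memory, min(k, len(self.memory)))
        return list(new_batch) + replay
```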

18 pages, 4678 KiB  
Article
Analyzing Distance-Based Registration with Two Location Areas: A Semi-Markov Process Approach
by Jang-Hyun Baek
Electronics 2024, 13(1), 233; https://doi.org/10.3390/electronics13010233 - 4 Jan 2024
Viewed by 982
Abstract
In order to connect an incoming call to the user equipment (UE) in a mobile communication network, the location information of the UE must be always kept in the network database. Therefore, the efficiency of the location registration method of reporting new location information to a mobile communication network whenever the location information of the UE changes directly affects the performance of the radio channel, which is a limited resource in a mobile communication network. This study deals with distance-based registration (DBR). DBR does not cause the ping-pong phenomenon known to be a main problem in zone-based registration. It shows good performance when assuming a random walk mobility model. To improve the performance of the original DBR with one location area (1D), a DBR with two location areas (2D) was proposed. It is known that 2D is better than 1D in most cases. However, unlike 1D, an accurate mathematical model for 2D has not been presented in previous studies, raising questions about whether an accurate performance comparison has been performed. In this study, we present an accurate mathematical model based on the semi-Markov process for performance analysis of 2D. We compared performances of 1D and 2D using the proposed mathematical model. Various numerical results showed that 2D with two-step paging was superior to 1D in most cases. However, when simultaneous paging was applied to 2D, 1D was better than 2D in most cases. In real situations, optimal performance can be achieved by reflecting the network situation in real time and dynamically changing the operating method using a better-performing model among these two methods. Full article
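
To make the registration/paging trade-off concrete, the toy random-walk simulation below models a single-location-area DBR scheme: the UE registers whenever it wanders a threshold distance from its last registered cell, and each incoming call pages every cell within that distance, so a larger threshold trades fewer registrations for a larger paging area. The square grid, Chebyshev distance, and parameter values are illustrative assumptions, not the paper's semi-Markov model.

```python
import random

def simulate_dbr(threshold, steps, call_prob, seed=1):
    """Count registrations and paged cells for distance-based registration
    on a square grid under a random walk."""
    rng = random.Random(seed)
    x = y = cx = cy = 0                       # current cell / last registered cell
    registrations = paged_cells = 0
    paging_area = (2 * threshold - 1) ** 2    # cells within Chebyshev distance
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        if max(abs(x - cx), abs(y - cy)) >= threshold:
            cx, cy = x, y                     # report the new location
            registrations += 1
        if rng.random() < call_prob:          # incoming call: page the area
            paged_cells += paging_area
    return registrations, paged_cells

for d in (2, 3, 4):
    print(d, simulate_dbr(d, steps=10_000, call_prob=0.05))
```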

17 pages, 7134 KiB  
Article
Performance Analysis of Two-Zone-Based Registration System with Timer in Wireless Communication Networks
by Hee-Seon Jang and Jang-Hyun Baek
Electronics 2024, 13(1), 160; https://doi.org/10.3390/electronics13010160 - 29 Dec 2023
Viewed by 670
Abstract
Numerous studies have been conducted on wireless communication networks to reduce the costs associated with location registration and paging traffic caused by the movement of user equipment (UE). Among them, a zone-based registration scheme is commonly used due to its convenience of implementation. In a zone-based scheme, a set of non-overlapping cells are managed as a single zone called a registration area (RA). The UE requests registration each time it enters a new RA. The most significant drawback of the 1Z system (an RA consisting of one zone) is the degradation in the service quality due to the registration traffic that frequently occurs at the RA boundaries. To overcome this drawback, a 2Z system that can manage two zones with one RA has been proposed. However, in the 2Z system, the paging costs for the UE that has returned to the previous zone increase, which can significantly degrade the performance compared to the 1Z system if the call-to-mobility ratio (CMR) is large or if the probability of returning to the previous zone is small. In this study, a new 2Z_Timer scheme is proposed to enhance the performance of the 2Z system. This method involves initiating a Timer for the previously visited zone when the UE enters a new zone, making it possible to retain the information of the previous zone for a specified threshold period. Simulations were conducted using flowchart-based RAPTOR software to compare its performance to those of the 1Z and 2Z systems. The results showed that the 2Z_Timer system effectively reduced the paging costs, even when the CMR was high or the probability of returning to the previous zone was low. Numerical results for various Timer thresholds showed that the 2Z_Timer system could lead to cost reductions of 10.6% and 28.6% compared to the 1Z and 2Z systems, respectively. Full article

20 pages, 3293 KiB  
Article
Deep Learning-Based Hip X-ray Image Analysis for Predicting Osteoporosis
by Shang-Wen Feng, Szu-Yin Lin, Yi-Hung Chiang, Meng-Han Lu and Yu-Hsiang Chao
Appl. Sci. 2024, 14(1), 133; https://doi.org/10.3390/app14010133 - 22 Dec 2023
Cited by 5 | Viewed by 1933
Abstract
Osteoporosis is a common problem in orthopedic medicine, and it has become an important medical issue in orthopedics as Taiwan is gradually becoming an aging society. In the diagnosis of osteoporosis, the bone mineral density (BMD) derived from dual-energy X-ray absorptiometry (DXA) is the main criterion for orthopedic diagnosis of osteoporosis, but due to the high cost of this equipment and the lower penetration rate of the equipment compared to the X-ray images, the problem of osteoporosis has not been effectively solved for many people who suffer from osteoporosis. At present, in clinical diagnosis, doctors are not yet able to accurately interpret X-ray images for osteoporosis manually and must rely on the data obtained from DXA. In recent years, with the continuous development of artificial intelligence, especially in the fields of machine learning and deep learning, significant progress has been made in image recognition. Therefore, it is worthwhile to revisit the question of whether it is possible to use a convolutional neural network model to read a hip X-ray image and then predict the patient’s BMD. In this study, we proposed a hip X-ray image segmentation model and a hip X-ray image recognition classification model. First, we used the U-Net model as a framework to segment the femoral neck, greater trochanter, Ward’s triangle, and the total hip in the hip X-ray images. We then performed image matting and data augmentation. Finally, we constructed a predictive model for osteoporosis using deep learning algorithms. In the segmentation experiments, we used intersection over union (IoU) as the evaluation metric for image segmentation, and both the U-Net model and the U-Net++ model achieved segmentation results greater than or equal to 0.5. In the classification experiments, using the T-score as the classification basis, the total hip using the DenseNet121 model has the highest accuracy of 74%. Full article
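
For reference, the IoU metric used in the segmentation experiments is the ratio of the overlap between the predicted and ground-truth masks to their union; a minimal implementation:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0                  # both masks empty: define IoU as perfect
    return np.logical_and(pred, target).sum() / union

# Two toy 4x4 masks sharing 1 of 7 marked pixels -> IoU = 1/7
a = np.zeros((4, 4), dtype=int); a[0:2, 0:2] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 1:3] = 1
print(iou(a, b))
```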

22 pages, 3185 KiB  
Article
Improving the IoT Attack Classification Mechanism with Data Augmentation for Generative Adversarial Networks
by Hung-Chi Chu and Yu-Jhe Lin
Appl. Sci. 2023, 13(23), 12592; https://doi.org/10.3390/app132312592 - 22 Nov 2023
Viewed by 1776
Abstract
The development of IoT technology has made various IoT applications and services widely used. Because IoT devices have weak information security protection capabilities, they are easy targets for cyber attacks. Therefore, this study proposes MLP-based IoT attack classification with data augmentation for GANs. In situations where the overall classification performance is satisfactory but the performance of a specific class is poor, GANs are employed as a data augmentation mechanism for that class to enhance its classification performance. The experimental results indicate that regardless of whether the training dataset is BoT-IoT or TON-IOT, the proposed method significantly improves the classification performance of classes with insufficient training data when using the data augmentation mechanism with GANs. Furthermore, the classification accuracy, precision, recall, and F1-score performance all exceed 90%. Full article

27 pages, 6742 KiB  
Article
Application of G.hn Broadband Powerline Communication for Industrial Control Using COTS Components
by Kilian Brunner, Stephen Dominiak and Martin Ostertag
Technologies 2023, 11(6), 160; https://doi.org/10.3390/technologies11060160 - 10 Nov 2023
Viewed by 2634
Abstract
Broadband powerline communication is a technology developed mainly with consumer applications and bulk data transmission in mind. Typical use cases include file download, streaming, or last-mile internet access for residential buildings. Applications gaining momentum are smart metering and grid automation, where response time requirements are relatively moderate compared to industrial (real-time) control. This work investigates to which extent G.hn technology, with existing, commercial off-the-shelf components, can be used for real-time control applications. Maximum packet rate and latency statistics are investigated for different G.hn profiles and MAC algorithms. An elevator control system serves as an example application to define the latency and throughput requirements. The results show that G.hn is a feasible technology candidate for industrial IoT-type applications if certain boundary conditions can be ensured. Full article

20 pages, 2366 KiB  
Review
AI-Powered Intelligent Seaport Mobility: Enhancing Container Drayage Efficiency through Computer Vision and Deep Learning
by Hoon Lee, Indranath Chatterjee and Gyusung Cho
Appl. Sci. 2023, 13(22), 12214; https://doi.org/10.3390/app132212214 - 10 Nov 2023
Cited by 4 | Viewed by 1867
Abstract
The rapid urbanization phenomenon has introduced multifaceted challenges across various domains, including housing, transportation, education, health, and the economy. This necessitates a significant transformation of seaport operations in order to optimize smart mobility and facilitate the evolution of intelligent cities. This conceptual paper presents a novel mathematical framework rooted in deep learning techniques. Our innovative model accurately identifies parking spaces and lanes in seaport environments based on crane positions, utilizing live Closed-Circuit Television (CCTV) camera data for real-time monitoring and efficient parking space allocation. Through a comprehensive literature review, we explore the advantages of merging artificial intelligence (AI) and computer vision (CV) technologies in parking facility management. Our framework focuses on enhancing container drayage efficiency within seaports, emphasizing improved traffic management, optimizing parking space allocation, and streamlining container movement. The insights from our study provide a foundation that could have potential implications for real-world applications. By integrating cutting-edge technologies, our proposed framework not only enhances the efficiency of seaport operations, but also lays the foundation for sustainable and intelligent seaport systems. It signifies a significant leap toward the realization of intelligent seaport operations, contributing profoundly to the advancement of urban logistics and transportation networks. Future research endeavors will concentrate on the practical implementation and validation of this pioneering mathematical framework in real-world seaport environments. Additionally, our work emphasizes the crucial need to explore further applications of AI and CV technologies in seaport logistics, adapting the framework to address the evolving urbanization and transportation challenges. These efforts will foster continuous advancements in the field, shaping the future of intelligent seaport operations. Full article

19 pages, 4273 KiB  
Article
Construction of an Online Cloud Platform for Zhuang Speech Recognition and Translation with Edge-Computing-Based Deep Learning Algorithm
by Zeping Fan, Min Huang, Xuejun Zhang, Rongqi Liu, Xinyi Lyu, Taisen Duan, Zhaohui Bu and Jianghua Liang
Appl. Sci. 2023, 13(22), 12184; https://doi.org/10.3390/app132212184 - 9 Nov 2023
Viewed by 1285
Abstract
The Zhuang ethnic minority in China possesses its own ethnic language and no ethnic script. Cultural exchange and transmission encounter hurdles as the Zhuang rely exclusively on oral communication. An online cloud-based platform was required to enhance linguistic communication. First, a database of 200 h of annotated Zhuang speech was created by collecting standard Zhuang speeches and improving database quality by removing transcription inconsistencies and text normalization. Second, SAformerNet, a more efficient and accurate transformer-based automatic speech recognition (ASR) network, is achieved by inserting additional downsampling modules. Subsequently, a Neural Machine Translation (NMT) model for translating Zhuang into other languages is constructed by fine-tuning the BART model and corpus filtering strategy. Finally, for the network’s responsiveness to real-world needs, edge-computing techniques are applied to relieve network bandwidth pressure. An edge-computing private cloud system based on FPGA acceleration is proposed to improve model operation efficiency. Experiments show that the most critical metric of the system, model accuracy, is above 93%, and inference time is reduced by 29%. The computational delay for multi-head self-attention (MHSA) and feed-forward network (FFN) modules has been reduced by 7.1 and 1.9 times, respectively, and terminal response time is accelerated by 20% on average. Generally, the scheme provides a prototype tool for small-scale Zhuang remote natural language tasks in mountainous areas. Full article

13 pages, 2493 KiB  
Article
Authority Transfer According to a Driver Intervention Intention Considering Coexistence of Communication Delay
by Taeyoon Lim, Myeonghwan Hwang, Eugene Kim and Hyunrok Cha
Computers 2023, 12(11), 228; https://doi.org/10.3390/computers12110228 - 8 Nov 2023
Viewed by 1634
Abstract
Recently, interest in and research on autonomous driving technology have grown rapidly. However, proving the safety of autonomous vehicles and commercializing them remain key challenges. According to a report on self-driving released by the California Department of Motor Vehicles, it is still hard to say that self-driving technology is highly reliable. Until fully autonomous driving is realized, transferring authority to humans is necessary to ensure the safety of autonomous driving. Several technologies, such as teleoperation and haptic-based approaches, are being developed based on human-machine interaction systems. This study deals with teleoperation and presents a way to switch control from autonomous vehicles to remote drivers. While there are many studies on how to perform teleoperation, few deal with the communication delays that occur when switching control. Such delays are inevitable, and the potential risks and accidents associated with their magnitude cannot be ignored. This study examines compensation for communication latency during remote control attempts and determines the acceptable level of latency for enabling remote operations. In addition, it supplements the safety and reliability of autonomous vehicles through research that reduces the size of communication delays when attempting teleoperation, which is expected to prevent human and material damage in actual accident situations. Full article

25 pages, 1144 KiB  
Article
Enhancing the Fault Tolerance of a Multi-Layered IoT Network through Rectangular and Interstitial Mesh in the Gateway Layer
by Sastry Kodanda Rama Jammalamadaka, Bhupati Chokara, Sasi Bhanu Jammalamadaka, Balakrishna Kamesh Duvvuri and Rajarao Budaraju
J. Sens. Actuator Netw. 2023, 12(5), 76; https://doi.org/10.3390/jsan12050076 - 16 Oct 2023
Cited by 1 | Viewed by 1877
Abstract
Most IoT systems designed for the implementation of mission-critical systems are multi-layered. Much of the computing is done in the service and gateway layers. The gateway layer connects the internal section of the IoT to the cloud through the Internet. The failure of any node between the servers and the gateways will isolate the entire network, leading to zero tolerance. The service and gateway layers must be connected using networking topologies to yield 100% fault tolerance. The empirical formulation of the model chosen to connect the service’s servers to the gateways through routers is required to compute the fault tolerance of the network. A rectangular and interstitial mesh have been proposed in this paper to connect the service servers to the gateways through the servers, which yields 0.999 fault tolerance of the IoT network. Also provided is an empirical approach to computing the IoT network’s fault tolerance. A rectangular and interstitial mesh have been implemented in the network’s gateway layer, increasing the IoT network’s ability to tolerate faults by 11%. Full article
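
The headline figure is consistent with the standard parallel-redundancy formula, under the usual assumption of independent path failures. For example, with three redundant server-to-gateway paths each available with probability 0.9 (illustrative numbers, not taken from the paper):

```latex
R = 1 - \prod_{i=1}^{n} \bigl(1 - R_i\bigr), \qquad 1 - (1 - 0.9)^{3} = 0.999 .
```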

12 pages, 3576 KiB  
Communication
Design of a Broadband Transition from a Coaxial Cable to a Reduced-Height Rectangular Waveguide
by Bayarsaikhan Dansran, Songyuan Xu, Jiwon Heo, Chan-Soo Lee and Bierng-Chearl Ahn
Appl. Sci. 2023, 13(20), 11265; https://doi.org/10.3390/app132011265 - 13 Oct 2023
Cited by 1 | Viewed by 2076
Abstract
For miniaturization, rectangular waveguides with a reduced height are often required, along with a coaxial transition for signal launching. We present a simulation-based design of a broadband transition from a coaxial cable to a rectangular waveguide with the height(b)-to-width(a) ratio b/a ranging from 0.125 to 0.375. The proposed transition consists of a coaxial probe with a cylindrical head or a disk and two symmetrically placed tuning posts. To extend the operating frequency range, three sections of the rectangular waveguide are employed with properly chosen dimensions. Design examples are presented for the WR75 waveguide transition with a b/a of 0.125, 0.25, and 0.375, having a bandwidth of 83.4%, 92.7%, and 84.4%, respectively. Compared with previous works, our design offers the largest bandwidth in a right-angle coaxial-to-rectangular waveguide transition employing the aforementioned structure. Full article

13 pages, 2108 KiB  
Article
Automatic Sleep Stage Classification Using a Taguchi-Based Multiscale Convolutional Compensatory Fuzzy Neural Network
by Chun-Jung Lin, Cheng-Jian Lin and Xue-Qian Lin
Appl. Sci. 2023, 13(18), 10442; https://doi.org/10.3390/app131810442 - 18 Sep 2023
Cited by 3 | Viewed by 1036
Abstract
Current methods for sleep stage detection rely on sensors to collect physiological data. These methods are inaccurate and take up considerable medical resources. Thus, in this study, we propose a Taguchi-based multiscale convolutional compensatory fuzzy neural network (T-MCCFNN) model to automatically detect and classify sleep stages. In the proposed T-MCCFNN model, multiscale convolution kernels extract features of the input electroencephalogram signal and a compensatory fuzzy neural network is used in place of a traditional fully connected network as a classifier to improve the convergence rate during learning and to reduce the number of model parameters required. Due to the complexity of general deep learning networks, trial and error methods are often used to determine their parameters. However, this method is very time-consuming. Therefore, this study uses the Taguchi method instead, where the optimal parameter combination is identified over a minimal number of experiments. We use the Sleep-EDF database to evaluate the proposed model. The results indicate that the proposed T-MCCFNN sleep stage classification accuracy is 85.3%, which is superior to methods proposed by other scholars. Full article

23 pages, 4041 KiB  
Article
An Excess Kurtosis People Counting System Based on 1DCNN-LSTM Using Impulse Radio Ultra-Wide Band Radar Signals
by Jinlong Zhang, Xiaochao Dang and Zhanjun Hao
Electronics 2023, 12(17), 3581; https://doi.org/10.3390/electronics12173581 - 24 Aug 2023
Cited by 1 | Viewed by 1296
Abstract
As the Artificial Intelligence of Things (AIOT) and ubiquitous sensing technologies have been leaping forward, numerous scholars have placed a greater focus on the use of Impulse Radio Ultra-Wide Band (IR-UWB) radar signals for Region of Interest (ROI) population estimation. To address the problem concerning the fact that existing algorithms or models cannot accurately detect the number of people counted in ROI from low signal-to-noise ratio (SNR) received signals, an effective 1DCNN-LSTM model was proposed in this study to accurately detect the number of targets even in low-SNR environments with considerable people. First, human-induced excess kurtosis was detected by setting a threshold using the optimized CLEAN algorithm. Next, the preprocessed IR-UWB radar signal pulses were bundled into frames, and the resulting peaks were grouped to develop feature vectors. Subsequently, the sample set was trained based on the 1DCNN-LSTM algorithm neural network structure. In this study, the IR-UWB radar signal data were acquired from different real environments with different numbers of subjects (0–10). As indicated by the experimental results, the average accuracy of the proposed 1DCNN-LSTM model for the recognition of people counting reached 86.66% at ROI. In general, a high-accuracy, low-complexity, and high-robustness solution in IR-UWB radar people counting was presented in this study. Full article
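
As context, excess kurtosis measures how heavy-tailed a sample distribution is relative to a Gaussian (for which it is zero), which is why a short human-induced echo stands out in an otherwise noise-like radar frame. Below is a minimal sketch on synthetic data; the frame length, pulse position, and amplitude are arbitrary, and this is not the authors' optimized CLEAN pipeline.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
noise = rng.normal(size=4096)        # empty-channel radar frame
echo = noise.copy()
echo[2000:2008] += 4.0               # short human-induced pulse

# scipy's kurtosis() returns *excess* kurtosis (Fisher definition),
# E[(x - mu)^4] / sigma^4 - 3: near 0 for pure Gaussian noise and
# clearly positive for a heavy-tailed frame containing an echo.
print(kurtosis(noise), kurtosis(echo))
```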

20 pages, 3962 KiB  
Article
A Wasserstein Generative Adversarial Network–Gradient Penalty-Based Model with Imbalanced Data Enhancement for Network Intrusion Detection
by Gwo-Chuan Lee, Jyun-Hong Li and Zi-Yang Li
Appl. Sci. 2023, 13(14), 8132; https://doi.org/10.3390/app13148132 - 12 Jul 2023
Cited by 4 | Viewed by 1665
Abstract
In today’s network intrusion detection systems (NIDS), certain types of network attack packets are sparse compared to regular network packets, making them challenging to collect, and resulting in significant data imbalances in public NIDS datasets. With respect to attack types with rare data, it is difficult to classify them, even by using various algorithms such as machine learning and deep learning. To address this issue, this study proposes a data augmentation technique based on the WGAN-GP model to enhance the recognition accuracy of sparse attacks in network intrusion detection. The enhanced performance of the WGAN-GP model on sparse attack classes is validated by evaluating three sparse data generation methods, namely Gaussian noise, WGAN-GP, and SMOTE, using the NSL-KDD dataset. Additionally, machine learning algorithms, including KNN, SVM, random forest, and XGBoost, as well as neural network models such as multilayer perceptual neural networks (MLP) and convolutional neural networks (CNN), are applied to classify the enhanced NSL-KDD dataset. Experimental results revealed that the WGAN-GP generation model is the most effective for detecting sparse data probes. Furthermore, a two-stage fine-tuning algorithm based on the WGAN-GP model is developed, fine-tuning the classification algorithms and model parameters to optimize the recognition accuracy of the sparse data probes. The final experimental results demonstrate that the MLP classifier significantly increases the accuracy rate from 74% to 80% after fine tuning, surpassing all other classifiers. The proposed method exhibits a 10%, 7%, and 13% improvement over untuned Gaussian noise enhancement, untuned SMOTE enhancement, and no enhancement. Full article
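
For reference, the WGAN-GP critic objective (Gulrajani et al., 2017) augments the Wasserstein loss with a gradient penalty that keeps the critic approximately 1-Lipschitz:

```latex
L = \mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\bigl[D(\tilde{x})\bigr]
  - \mathbb{E}_{x\sim\mathbb{P}_r}\bigl[D(x)\bigr]
  + \lambda\,\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}
    \Bigl[\bigl(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\bigr)^{2}\Bigr],
```

where the penalty points x̂ are sampled uniformly along straight lines between real and generated samples, and the penalty weight λ is typically set to 10.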

17 pages, 4314 KiB  
Article
Characteristic-Mode-Analysis-Based Compact Vase-Shaped Two-Element UWB MIMO Antenna Using a Unique DGS for Wireless Communication
by Subhash Bodaguru Kempanna, Rajashekhar C. Biradar, Praveen Kumar, Pradeep Kumar, Sameena Pathan and Tanweer Ali
J. Sens. Actuator Netw. 2023, 12(3), 47; https://doi.org/10.3390/jsan12030047 - 15 Jun 2023
Cited by 3 | Viewed by 1753
Abstract
The modern electronic device antenna poses challenges regarding broader bandwidth and isolation due to its multiple features and seamless user experience. A compact vase-shaped two-port ultrawideband (UWB) antenna is presented in this work. A circular monopole antenna is modified by embedding the multiple curved segments onto the radiator and rectangular slotted ground plane to develop impedance matching in the broader bandwidth from 4 to 12.1 GHz. The UWB monopole antenna is recreated horizontally with a separation of less than a quarter wavelength of 0.13 λ (λ computed at 4 GHz) to create a UWB multiple input and multiple output (MIMO) antenna with a geometry of 20 × 29 × 1.6 mm3. The isolation in the UWB MIMO antenna is enhanced by inserting an inverted pendulum-shaped parasitic element on the ground plane. This modified ground plane acts as a decoupling structure and provides isolation below 21 dB across the 5–13.5 GHz operating frequency. The proposed UWB MIMO antenna’s significant modes and their contribution to antenna radiation are analyzed by characteristic mode analysis. Further, the proposed antenna is investigated for MIMO diversity features, and its values are found to be ECC < 0.002, DG ≈ 10 dB, TARC < −10 dB, CCL < 0.3 bps/Hz, and MEG < −3 dB. The proposed antenna’s time domain characteristics in different antenna orientations show a group delay of less than 1 ns and a fidelity factor larger than 0.9. Full article
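
For reference, the envelope correlation coefficient (ECC) reported above is commonly computed from the measured S-parameters of a two-port antenna (assuming low antenna losses):

```latex
\mathrm{ECC} =
\frac{\bigl|S_{11}^{*}S_{12} + S_{21}^{*}S_{22}\bigr|^{2}}
     {\bigl(1 - |S_{11}|^{2} - |S_{21}|^{2}\bigr)\bigl(1 - |S_{22}|^{2} - |S_{12}|^{2}\bigr)} .
```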

20 pages, 8458 KiB  
Article
Resource Allocation and Trajectory Optimization in OTFS-Based UAV-Assisted Mobile Edge Computing
by Wei Li, Yan Guo, Ning Li, Hao Yuan and Cuntao Liu
Electronics 2023, 12(10), 2212; https://doi.org/10.3390/electronics12102212 - 12 May 2023
Cited by 2 | Viewed by 1685
Abstract
Mobile edge computing (MEC) powered by unmanned aerial vehicles (UAVs), with the advantages of flexible deployment and wide coverage, is a promising technology to solve computationally intensive communication problems. In this paper, an orthogonal time frequency space (OTFS)-based UAV-assisted MEC system is studied, in which OTFS technology is used to mitigate the Doppler effect in UAV high-speed mobile communication. The weighted total energy consumption of the system is minimized by jointly optimizing the time division, CPU frequency allocation, transmit power allocation and flight trajectory while considering Doppler compensation. Thus, the resultant problem is a challenging nonconvex problem. We propose a joint algorithm that combines the benefits of the atomic orbital search (AOS) algorithm and convex optimization. Firstly, an improved AOS algorithm is proposed to swiftly obtain the time slot allocation and high-quality solution of the UAV optimal path. Secondly, the optimal solution for the CPU frequency and transmit power allocation is found by using Lagrangian duality and the first-order Taylor formula. Finally, the optimal solution of the original problem is iteratively obtained. The simulation results show that the weighted total energy consumption of the OTFS-based system decreases by 13.6% compared with the orthogonal frequency division multiplexing (OFDM)-based system. The weighted total energy consumption of the proposed algorithm decreases by 11.7% and 26.7% compared with convex optimization and heuristic algorithms, respectively. Full article

20 pages, 6059 KiB  
Article
Characteristics Mode Analysis-Inspired Compact UWB Antenna with WLAN and X-Band Notch Features for Wireless Applications
by Praveen Kumar, Manohara Pai MM, Pradeep Kumar, Tanweer Ali, M. Gulam Nabi Alsath and Vidhyashree Suresh
J. Sens. Actuator Netw. 2023, 12(3), 37; https://doi.org/10.3390/jsan12030037 - 23 Apr 2023
Cited by 8 | Viewed by 2494
Abstract
A compact circular structured monopole antenna for ultrawideband (UWB) and UWB dual-band notch applications is designed and fabricated on an FR4 substrate. The UWB antenna has a hybrid configuration of the circle and three ellipses as the radiating plane and less than a quarter-lowered ground plane. The overall dimensions of the projected antennas are 16 × 11 × 1.6 mm3, having a −10 dB impedance bandwidth of 113% (3.7–13.3 GHz). Further, two frequency band notches were created using two inverted U-shaped slots on the radiator. These slots notch the frequency band from 5–5.6 GHz and 7.3–8.3 GHz, covering IEEE 802.11, Wi-Fi, WLAN, and the entire X-band satellite communication. A comprehensive frequency and time domain analysis is performed to validate the projected antenna design’s effectiveness. In addition, a circuit model of the projected antenna design is built, and its performance is evaluated. Furthermore, unlike the traditional technique, which uses the simulated surface current distribution to verify functioning, characteristic mode analysis (CMA) is used to provide deeper insight into distinct modes on the antenna. Full article

30 pages, 15900 KiB  
Article
A Quadruple Notch UWB Antenna with Decagonal Radiator and Sierpinski Square Fractal Slots
by Om Prakash Kumar, Pramod Kumar, Tanweer Ali, Pradeep Kumar and Subhash B. K
J. Sens. Actuator Netw. 2023, 12(2), 24; https://doi.org/10.3390/jsan12020024 - 14 Mar 2023
Cited by 7 | Viewed by 1798
Abstract
A novel quadruple-notch UWB (ultrawideband) antenna for wireless applications is presented. The antenna consists of a decagonal-shaped radiating part with Sierpinski square fractal slots up to iteration 3. The ground part is truncated and loaded with stubs and slots. Each individual stub at the ground plane creates/controls a particular notch band. Initially, a UWB antenna is designed with the help of truncation at the ground plane. Miniaturization in this design is achieved with the help of Sierpinski square fractal slots. Additionally, these slots help improve the UWB impedance bandwidth. This design is then extended to achieve a quadruple notch by loading the ground with various rectangular-shaped stubs. The final antenna shows the UWB range from 4.21 to 13.92 GHz and notch frequencies at 5.02 GHz (C-band), 7.8 GHz (satellite band), 9.03, and 10.86 GHz (X-band). The simulated and measured results are nearly identical, which shows the efficacy of the proposed design. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
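The "Sierpinski square fractal slots up to iteration 3" can be visualized with a short script: at each iteration every filled cell is subdivided into nine and the centre is removed. The sketch below generates the binary slot mask only; cell size, scaling, and placement on the decagonal radiator follow the paper's geometry and are not modelled here.

import numpy as np

def sierpinski_carpet(iterations):
    """Binary mask of a Sierpinski square pattern; 1 = metal, 0 = slot."""
    grid = np.ones((1, 1), dtype=np.uint8)
    for _ in range(iterations):
        n = grid.shape[0]
        nxt = np.ones((3 * n, 3 * n), dtype=np.uint8)
        for r in range(3):
            for c in range(3):
                block = np.zeros_like(grid) if (r == 1 and c == 1) else grid
                nxt[r * n:(r + 1) * n, c * n:(c + 1) * n] = block
        grid = nxt
    return grid

mask = sierpinski_carpet(3)        # iteration-3 pattern, 27 x 27 cells
print(mask.shape, mask.mean())     # remaining metal fraction: (8/9)**3 ~ 0.70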

15 pages, 5320 KiB  
Article
Automated and Optimized Regression Model for UWB Antenna Design
by Sameena Pathan, Praveen Kumar, Tanweer Ali and Pradeep Kumar
J. Sens. Actuator Netw. 2023, 12(2), 23; https://doi.org/10.3390/jsan12020023 - 10 Mar 2023
Cited by 4 | Viewed by 2384
Abstract
Antenna design involves continuously optimizing antenna parameters to meet the desired requirements. Since the process is manual, laborious, and time-consuming, a surrogate model based on machine learning provides an effective solution. The conventional approach for selecting antenna parameters is mapped to a regression problem that predicts antenna performance in terms of S-parameters. In this regard, a heuristic approach is employed using an optimized random forest model. The design parameters are obtained from an ultrawideband (UWB) antenna simulated using the high-frequency structure simulator (HFSS). The designed antenna is an embedded structure consisting of a circular monopole with a rectangle. The ground plane of the proposed antenna is reduced to realize a wider impedance bandwidth; the lowered ground plane creates a new current path that perturbs the uniform current distribution and helps achieve the wider bandwidth. Initially, the data were preprocessed and feature extraction was performed using additive regression. Ten different regression models with optimized parameters were then compared to determine the best values for antenna design. The proposed method was evaluated by splitting the dataset into train and test data in a 60:40 ratio and by employing a ten-fold cross-validation scheme. A correlation coefficient of 0.99 was obtained with the optimized random forest model. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
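A minimal scikit-learn sketch of the surrogate-modelling loop described above follows, with synthetic data standing in for the HFSS sweep; the feature count, grid values, and target function are illustrative assumptions, not the paper's dataset.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 5))            # 5 geometric design parameters
y = np.sin(X @ rng.uniform(1, 3, 5))      # stand-in for simulated S11 values

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=10,                                # ten-fold cross-validation, as above
)
search.fit(X_tr, y_tr)                    # tune the random forest surrogate
r = np.corrcoef(search.predict(X_te), y_te)[0, 1]
print(f"correlation coefficient on the 40% test split: {r:.3f}")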

11 pages, 4322 KiB  
Article
Application of Somatosensory Computer Game for Nutrition Education in Preschool Children
by Ing-Chau Chang and Chin-En Yen
Computers 2023, 12(1), 20; https://doi.org/10.3390/computers12010020 - 16 Jan 2023
Cited by 2 | Viewed by 2734
Abstract
With the popularization of technological products, people's everyday lives are now full of 3C (computer, communication, and consumer electronics) products, and children have gradually become acquainted with these technologies. In recent years, more somatosensory games have been introduced alongside the development of new media puzzle games for children. Several studies have shown that somatosensory games can improve children's physical, brain, and sensory-integration development, promote parent–child and peer interactions, and enhance children's attention and cooperation in play. The purpose of this study is to assess the effect of integrating somatosensory computer games into early childhood nutrition education. The subjects were 15 preschool children (aged 5–6 years) from a preschool in Taichung City, Taiwan. We used the somatosensory game "Arno's Fruit and Vegetable Journey" as an intervention tool for early childhood nutrition education; the game was produced with the Scratch software combined with Rabboni sensors. The educational intervention was carried out for one hour a week over two consecutive weeks. We used questionnaires and nutrition knowledge learning sheets to evaluate the children's nutrition knowledge, learning status, and satisfaction in the first and second weeks of the study. The results showed no statistically significant differences in the children's game scores, game times, or nutritional knowledge scores before and after the intervention. Most of the preschool children highly enjoyed the somatosensory game educational activities. We also identify some problems in the somatosensory game teaching activities, which can serve as a reference for future research on designing somatosensory games for preschool children and somatosensory game-based education. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)

16 pages, 7207 KiB  
Article
Research on Multi-Agent D2D Communication Resource Allocation Algorithm Based on A2C
by Xinzhou Li, Guifen Chen, Guowei Wu, Zhiyao Sun and Guangjiao Chen
Electronics 2023, 12(2), 360; https://doi.org/10.3390/electronics12020360 - 10 Jan 2023
Cited by 13 | Viewed by 2585
Abstract
Device-to-device (D2D) communication technology is a main component of future communication systems and greatly improves the utilization of spectrum resources. However, in a network where D2D users multiplex the cellular spectrum, interference between communication links is serious and system performance degrades. Traditional resource allocation schemes need a large amount of channel information to deal with interference in the system and suffer from weak dynamic resource allocation capability and low system throughput. To address this challenge, this paper proposes a multi-agent D2D communication resource allocation algorithm based on Advantage Actor Critic (A2C). First, a multi-D2D cellular communication system model based on A2C is established; then the parameters of the actor network and the critic network are updated; finally, the resource allocation scheme for D2D users is output dynamically and adaptively. The simulation results show that, compared with DQN (deep Q-network) and MAAC (multi-agent actor–critic), the average throughput of the system is improved by 26% and 12.5%, respectively. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
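For orientation, a single A2C update step is sketched below in PyTorch. The state/action sizes, reward, and two-layer networks are placeholder assumptions; the paper's actual architecture, reward design, and D2D channel environment are not reproduced.

import torch
import torch.nn as nn

n_states, n_actions = 16, 8               # assumed dimensions
actor = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, n_actions))
critic = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

state = torch.randn(n_states)             # placeholder channel observation
next_state, reward, gamma = torch.randn(n_states), torch.tensor(1.0), 0.99

dist = torch.distributions.Categorical(logits=actor(state))
action = dist.sample()                    # resource block chosen by this agent

# Advantage = TD error; the critic bootstraps the value of the next state.
with torch.no_grad():
    target = reward + gamma * critic(next_state)
advantage = target - critic(state)

actor_loss = -dist.log_prob(action) * advantage.detach()
critic_loss = advantage.pow(2)
opt.zero_grad()
(actor_loss + critic_loss).backward()     # joint actor/critic parameter update
opt.step()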

14 pages, 1042 KiB  
Article
Photoplethysmography Data Reduction Using Truncated Singular Value Decomposition and Internet of Things Computing
by Abdulrahman B. Abdelaziz, Mohammad A. Rahimi, Muhammad R. Alrabeiah, Ahmed B. Ibrahim, Ahmed S. Almaiman, Amr M. Ragheb and Saleh A. Alshebeili
Electronics 2023, 12(1), 220; https://doi.org/10.3390/electronics12010220 - 2 Jan 2023
Cited by 1 | Viewed by 2099
Abstract
Biometric-based identity authentication is integral to modern-day technologies. From smart phones, personal computers, and tablets to security checkpoints, they all utilize a form of identity check based on methods such as face recognition and fingerprint verification. Photoplethysmography (PPG) is another form of biometric-based authentication that has recently been gaining momentum, because it is effective and easy to implement. This paper considers a cloud-based system model for PPG authentication, where the PPG signals of various individuals are collected with distributed sensors and communicated to the cloud for authentication. Such a model incurs large signal traffic, especially in crowded places such as airport security checkpoints. This motivates the need for a compression–decompression scheme (or a Codec for short). The Codec is required to reduce the data traffic by compressing each PPG signal before it is communicated, i.e., encoding the signal right after it comes off the sensor and before it is sent to the cloud to be reconstructed (i.e., decoded). Therefore, the Codec has two system requirements to meet: (i) produce high-fidelity signal reconstruction; and (ii) have a computationally lightweight encoder. Both requirements are met by the Codec proposed in this paper, which is designed using truncated singular value decomposition (T-SVD). The proposed Codec is developed and tested using a publicly available dataset of PPG signals collected from multiple individuals, namely the CapnoBase dataset. It achieves a 95% compression ratio and a 99% coefficient of determination, delivering high-fidelity reconstruction while producing highly compressed signals. Those compressed signals also do not require heavy computations to produce: an implementation of the encoder on a single-board computer averages 300 milliseconds per signal on a Raspberry Pi 3, which is enough time to encode a PPG signal prior to transmission to the cloud. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
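The encoder/decoder split described above can be sketched in a few lines of NumPy: a truncated SVD basis is learned offline, the sensor-side encoder is a single matrix multiply, and the cloud decoder reconstructs from k coefficients. Synthetic sinusoids stand in for the CapnoBase recordings; the signal length and truncation rank are illustrative assumptions chosen to mirror the 95% compression ratio.

import numpy as np

rng = np.random.default_rng(1)
n_samples, k = 1000, 50                   # samples per signal, kept coefficients
t = np.linspace(0, 10, n_samples)
train = np.stack([np.sin(2 * np.pi * (1 + 0.1 * i) * t) for i in range(100)])
train += 0.01 * rng.standard_normal(train.shape)

_, _, Vt = np.linalg.svd(train, full_matrices=False)   # offline basis learning
basis = Vt[:k]                            # truncated right singular vectors

x = train[0]
code = basis @ x                          # encoder: k numbers instead of n_samples
x_hat = basis.T @ code                    # decoder (cloud side)

cr = 1 - k / n_samples
r2 = 1 - np.sum((x - x_hat) ** 2) / np.sum((x - x.mean()) ** 2)
print(f"compression ratio {cr:.0%}, coefficient of determination {r2:.3f}")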

12 pages, 447 KiB  
Article
A Low-Latency Fair-Arbiter Architecture for Network-on-Chip Switches
by Jifeng Luo, Wenqi Wu, Qianjian Xing, Meiting Xue, Feng Yu and Zhenguo Ma
Appl. Sci. 2022, 12(23), 12458; https://doi.org/10.3390/app122312458 - 6 Dec 2022
Cited by 2 | Viewed by 2997
Abstract
As semiconductor technology evolves, computing platforms attempt to integrate hundreds of processing cores and associated interconnects into a single chip, and network-on-chip (NoC) technology has been widely used as the on-chip data exchange fabric in recent years. As the core element of the NoC, the round-robin arbiter provides fair and fast arbitration, which is essential to ensure the high performance of each module on the chip. In this paper, we propose a low-latency fair switch arbiter (FSA) architecture based on a tree-structured search algorithm. The FSA uses a feedback-based parallel priority update mechanism to complete arbitration within the leaf nodes and a lock-based round-robin search algorithm to guarantee global fairness. To reduce latency, the FSA keeps the lock structure only at the leaf nodes so that the complexity of the critical path does not increase. Meanwhile, the FSA achieves a critical path with only O(log₄ N) delay by using four input nodes in parallel. According to the synthesis results, the latency of the proposed circuit is on average 22.2% better than existing fair structures and 8.1% better than the fastest arbiter. The proposed architecture is well suited for high-speed network-on-chip switches and scales well to switches with large numbers of ports. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
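As a behavioural reference (a software model only, not the paper's tree-structured RTL), round-robin arbitration can be expressed as a rotating-priority search: the grant pointer advances past the last winner, so every requester is served in bounded time.

class RoundRobinArbiter:
    """Software model of rotating-priority (round-robin) arbitration."""

    def __init__(self, n_ports):
        self.n = n_ports
        self.ptr = 0                      # port currently holding highest priority

    def arbitrate(self, requests):
        for off in range(self.n):
            port = (self.ptr + off) % self.n
            if requests[port]:
                self.ptr = (port + 1) % self.n   # rotate priority past the winner
                return port
        return None                       # no requests this cycle

arb = RoundRobinArbiter(4)
for _ in range(3):
    print(arb.arbitrate([True, False, True, True]))   # grants 0, 2, 3 in turn

The paper's contribution is realizing this fairness with an O(log₄ N) hardware critical path, which the linear search above does not attempt to model.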

18 pages, 7061 KiB  
Article
An Explainable and Lightweight Deep Convolutional Neural Network for Quality Detection of Green Coffee Beans
by Chih-Hsien Hsia, Yi-Hsuan Lee and Chin-Feng Lai
Appl. Sci. 2022, 12(21), 10966; https://doi.org/10.3390/app122110966 - 29 Oct 2022
Cited by 9 | Viewed by 2963
Abstract
In recent years, the demand for coffee has increased tremendously. During production, green coffee beans are traditionally screened manually for defective beans before being packed; however, this method is not only time-consuming but also increases the rate of human error due to fatigue. Therefore, this paper proposes a lightweight deep convolutional neural network (LDCNN) for a green coffee bean quality detection system, combining depthwise separable convolution (DSC), squeeze-and-excite (SE) blocks, skip blocks, and other components. To offset the training difficulties caused by the lightweight model's small parameter count, rectified Adam (RA), lookahead (LA), and gradient centralization (GC) were included to improve efficiency, and the model was deployed in an embedded system. Finally, the local interpretable model-agnostic explanations (LIME) model was employed to explain the model's predictions. The experimental results indicate that the model reaches an accuracy rate of 98.38% and an F1 score of 98.24% when detecting the quality of green coffee beans, achieving higher accuracy with lower computing time and fewer parameters. Moreover, the interpretability analysis verified that the lightweight model is reliable, giving screening personnel a basis for understanding its judgments and thereby improving the classification and prediction of the model. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
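Two of the building blocks named above, depthwise separable convolution and the squeeze-and-excite block, are standard components; a minimal PyTorch sketch follows, with illustrative channel counts and reduction ratio rather than the LDCNN's actual topology.

import torch
import torch.nn as nn

class DSConv(nn.Module):
    """Depthwise separable convolution: per-channel 3x3, then 1x1 mixing."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SEBlock(nn.Module):
    """Squeeze-and-excite: global pooling drives channel re-weighting."""
    def __init__(self, c, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(c, c // r), nn.ReLU(),
            nn.Linear(c // r, c), nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze: global average pool
        return x * w[:, :, None, None]            # excite: scale each channel

x = torch.randn(1, 32, 64, 64)
y = SEBlock(32)(DSConv(32, 32)(x))
print(y.shape)                                    # torch.Size([1, 32, 64, 64])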

22 pages, 10829 KiB  
Article
XOR-Based Meaningful (n, n) Visual Multi-Secrets Sharing Schemes
by Sheng-Yao Huang, An-hui Lo and Justie Su-Tzu Juan
Appl. Sci. 2022, 12(20), 10368; https://doi.org/10.3390/app122010368 - 14 Oct 2022
Cited by 7 | Viewed by 1687
Abstract
The basic visual cryptography (VC) model was proposed by Naor and Shamir in 1994. The secret image is encrypted into pieces, called shares, and can be viewed by collecting and directly stacking these shares. Many related studies were subsequently proposed. The most recent advancement, XOR-based VC, can address OR-based VC's poor quality of the restored image while keeping hardware costs low. Sharing multiple secret images simultaneously reduces computational costs, while designing the shares as meaningful, unrelated images helps avoid attacks and makes the shares easier to manage; both have been topics of interest to many researchers in recent years. This study proposes XOR-based VCSs that simultaneously encrypt several secret images and make each share individually meaningful. Theoretical analysis and experimental results show that our methods are secure and effective. Compared with previous schemes, our scheme offers more capabilities. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
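The underlying (n, n) XOR sharing principle is easy to state: n − 1 shares are uniformly random, and the last share is the secret XORed with all of them. The sketch below shows only this baseline; making each share a meaningful image, and sharing several secrets at once, are the paper's contributions and need additional masking not modelled here.

import numpy as np

rng = np.random.default_rng(7)

def share(secret, n):
    """Split a secret array into n shares whose XOR equals the secret."""
    shares = [rng.integers(0, 256, secret.shape, dtype=np.uint8)
              for _ in range(n - 1)]
    last = secret.copy()
    for s in shares:
        last ^= s                         # accumulate the random masks
    return shares + [last]

def recover(shares):
    """Stack all shares with XOR to restore the secret."""
    out = np.zeros_like(shares[0])
    for s in shares:
        out ^= s
    return out

secret = rng.integers(0, 256, (4, 4), dtype=np.uint8)
assert np.array_equal(recover(share(secret, 4)), secret)   # lossless recovery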

11 pages, 3736 KiB  
Communication
Applying Natural Language Processing and TRIZ Evolutionary Trends to Patent Recommendations for Product Design
by Tien-Lun Liu, Ling-Hsiang Hsieh and Kuan-Chun Huang
Appl. Sci. 2022, 12(19), 10105; https://doi.org/10.3390/app121910105 - 8 Oct 2022
Cited by 3 | Viewed by 2483
Abstract
Traditional TRIZ theory provides methods and processes for the systematic analysis of engineering problems, which can improve the efficiency of problem solving. However, the quality of the resulting solutions is not guaranteed and depends on the user's profession and experience. Therefore, this study proposes a methodology that applies the evolutionary benefits in the 37 trend lines developed by TRIZ researchers to intelligently screen patents relevant to the content of a product design, so that problem-solving efficiency and product design quality can be improved more effectively. First, a patent database is used as the training dataset: words and sentences in the patent documents are analyzed through natural language processing to obtain keywords that may be related to evolutionary benefits. Using word vectors trained by Doc2vec, semantic similarity can be calculated to obtain the relationship between patent text and each evolutionary benefit. Second, the goals of a product development project can be related to the evolutionary benefits, and applicable patent recommendations can then be provided. The proposed methodology thus serves as an intelligent design assistant that enhances the product development process and problem solving. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
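The Doc2vec similarity step can be sketched with gensim (4.x API); the two toy documents and the query below stand in for a real patent corpus and an evolutionary-benefit description.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

docs = [
    "a foldable hinge that reduces device volume during transport",
    "a modular casing enabling segmentation of the product structure",
]
corpus = [TaggedDocument(simple_preprocess(d), [i]) for i, d in enumerate(docs)]
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

query = simple_preprocess("trend toward increased segmentation")
vec = model.infer_vector(query)               # embed the evolutionary-benefit text
print(model.dv.most_similar([vec], topn=2))   # patents ranked by cosine similarity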

16 pages, 7456 KiB  
Article
XOR-Based (n, n) Visual Cryptography Schemes for Grayscale or Color Images with Meaningful Shares
by Yu-Hong Chen and Justie Su-Tzu Juan
Appl. Sci. 2022, 12(19), 10096; https://doi.org/10.3390/app121910096 - 8 Oct 2022
Cited by 4 | Viewed by 3549
Abstract
XOR-based Visual Cryptography Scheme (XOR-based VCS) is a method of secret image sharing. The principle of XOR-based VCS is to encrypt a secret image into several encrypted images, called shares. No information about the secret can be obtained from any of the shares, and after applying the logical XOR operation to stack these shares, the original secret image can be recovered. In this paper, we present a new XOR-based VCS for grayscale or color secret images. This scheme encrypts the secret grayscale (or color) image into n meaningful grayscale (or color) shares, which can incorporate n different cover images. After stacking the n shares using the XOR operation, the original secret image can be completely restored. Both the theoretical proof and experimental results show that our method is accurate and efficient. To the best of our knowledge, ours is the only scheme that currently provides this functionality for grayscale and color secret images. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)

18 pages, 3396 KiB  
Article
Development of a Machine Learning-Based Framework for Predicting Vessel Size Based on Container Capacity
by Indranath Chatterjee and Gyusung Cho
Appl. Sci. 2022, 12(19), 9999; https://doi.org/10.3390/app12199999 - 5 Oct 2022
Cited by 3 | Viewed by 1687
Abstract
Ports are important hubs in logistics and supply chain systems, where the majority of the available data is still not being fully exploited. Container throughput, measured in twenty-foot equivalent units (TEU), reflects the amount of work a port performs and its ability to handle containers at minimal cost. This throughput capacity is the most important component of the scale of services, which is a crucial factor in selecting port terminals. At a port container terminal, an appropriate number of available quay cranes must be allocated to the berth before container ships arrive. Predicting the size of a ship is therefore especially important for calculating the number of quay cranes that should be allocated to ships that will eventually dock at the terminal. Machine learning techniques are flexible tools for unlocking the value of the data. In this paper, we use neighborhood component analysis for feature selection and state-of-the-art machine learning algorithms for multiclass classification, proposing a novel two-stage approach for estimating and predicting vessel size based on container capacity. Our approach revealed seven unique features of port data that are essential parameters for identifying vessel size. We obtained the highest average classification accuracy, 97.6%, with the linear support vector machine classifier. This study paves a new direction for research in port logistics incorporating machine learning. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
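A minimal scikit-learn sketch of the two-stage idea, neighborhood component analysis followed by a linear SVM, is shown below; synthetic data replaces the port dataset, and the feature and class counts are illustrative assumptions.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic stand-in: 12 raw port features, 4 vessel-size classes.
X, y = make_classification(n_samples=600, n_features=12, n_informative=7,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

clf = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=7, random_state=0),  # stage 1
    LinearSVC(),                                                     # stage 2
)
print("10-fold CV accuracy:", cross_val_score(clf, X, y, cv=10).mean())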

13 pages, 3345 KiB  
Article
A Novel Mechanical Fault Diagnosis Based on Transfer Learning with Probability Confidence Convolutional Neural Network Model
by Hsiao-Mei Lin, Ching-Yuan Lin, Chun-Hung Wang and Ming-Jong Tsai
Appl. Sci. 2022, 12(19), 9670; https://doi.org/10.3390/app12199670 - 26 Sep 2022
Cited by 1 | Viewed by 2339
Abstract
For fault diagnosis, convolutional neural networks (CNN) have served as a data-driven method to identify mechanical fault features in vibration signals. However, because CNNs identify unknown fault categories ineffectively and inaccurately, we propose a model based on transfer learning with a probability confidence CNN (TPCCNN) to model the fault features of rotating machinery for fault diagnosis. TPCCNN includes three major modules: (1) feature engineering, which performs a series of data pre-processing and feature extraction steps; (2) transfer of learned features between heterogeneous datasets, so that models generalize better in training and the time for modeling and parameter tuning is reduced; and (3) a PCCNN model that classifies known and unknown fault categories. In addition to addressing imbalanced sample sizes, TPCCNN self-learns and retrains by iterating unknown classes back into the original model. The model is verified on the open-source CWRU and Ottawa datasets. The experimental results show that feature transfer across heterogeneous datasets achieves average accuracy rates of 99.2% and 93.8% for known and unknown categories, respectively, proving TPCCNN effective in training with heterogeneous datasets. Likewise, similar feature sets can be applied to reduce the training time of predictive models by 34% and 68%. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
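The probability-confidence element can be illustrated with a simple rule: accept a softmax prediction only if its confidence clears a threshold, otherwise flag the sample as a candidate unknown fault class for retraining. The threshold below is an illustrative assumption, not the paper's calibrated value.

import numpy as np

def classify_with_confidence(logits, threshold=0.9):
    """Return (class, confidence), or ('unknown', confidence) if uncertain."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # numerically stable softmax
    if probs.max() < threshold:
        return "unknown", probs.max()          # defer: candidate new fault class
    return int(probs.argmax()), probs.max()

print(classify_with_confidence(np.array([4.0, 0.5, 0.2])))  # confident -> class 0
print(classify_with_confidence(np.array([1.1, 1.0, 0.9])))  # ambiguous -> unknown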

15 pages, 4125 KiB  
Article
Design, Analysis, and Simulation of 60 GHz Millimeter Wave MIMO Microstrip Antennas
by Juan Carlos Martínez Quintero, Edith Paola Estupiñán Cuesta and Gabriel Leonardo Escobar Quiroga
J. Sens. Actuator Netw. 2022, 11(4), 59; https://doi.org/10.3390/jsan11040059 - 24 Sep 2022
Cited by 3 | Viewed by 3462
Abstract
This article comparatively shows the evolution of the parameters of three types of MIMO microstrip antenna arrays as the number of ports is gradually increased up to 32. The three arrays have a 1 × 2 configuration at each port and differ in geometry or type of coupling as follows: square patch with quarter-wave coupling (Antenna I), square patch with inset feed (Antenna II), and circular patch with quarter-wave coupling (Antenna III). The arrays were designed and simulated to operate in the millimeter-wave band, specifically at 60 GHz, for use in wireless technologies such as IEEE 802.11ad. A rapid prototyping method was formulated to increase the number of elements in the array, obtaining layout dimensions and placement coordinates in short periods of time. The simulation was conducted with the ADS software, and gain, directivity, return loss, bandwidth, beamwidth, and efficiency were evaluated. For the 32-port arrays, Antenna III obtained the lowest return loss at −42.988 dB, more than 19 dB lower than the others. Antenna III also obtained the highest gain, 24.541 dBi, with an efficiency of 66%. Antenna II achieved a better efficiency of 71.03%, but with a gain more than 2 dB below that of Antenna III. Antenna I obtained the best bandwidth. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
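For context, a first-pass rectangular patch at 60 GHz can be sized with the standard transmission-line model, as sketched below; the substrate parameters are assumed, and the paper's final array elements were tuned in ADS rather than taken from these closed forms.

import math

c = 3e8
f, eps_r, h = 60e9, 2.2, 0.127e-3        # assumed low-loss substrate, 0.127 mm

W = c / (2 * f) * math.sqrt(2 / (eps_r + 1))                 # patch width
eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 * h / W)
dL = 0.412 * h * ((eps_eff + 0.3) * (W / h + 0.264)) \
     / ((eps_eff - 0.258) * (W / h + 0.8))                   # fringing extension
L = c / (2 * f * math.sqrt(eps_eff)) - 2 * dL                # patch length

print(f"W = {W * 1e3:.3f} mm, L = {L * 1e3:.3f} mm")         # ~1.98 x 1.61 mm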

30 pages, 1145 KiB  
Article
A Holistic Scalability Strategy for Time Series Databases Following Cascading Polyglot Persistence
by Carlos Garcia Calatrava, Yolanda Becerra Fontal and Fernando M. Cucchietti
Big Data Cogn. Comput. 2022, 6(3), 86; https://doi.org/10.3390/bdcc6030086 - 18 Aug 2022
Cited by 2 | Viewed by 2372
Abstract
Time series databases aim to handle large amounts of data quickly, both when introducing new data to the system and when retrieving it later. However, depending on the scenario in which these databases participate, reducing the number of requested resources becomes a further requirement. NagareDB and its Cascading Polyglot Persistence approach were born with this goal: not just to provide a fast time series solution, but also to strike a good cost-efficiency balance. Although they provided outstanding results, they lacked a natural way of scaling out in a cluster fashion; consequently, monolithic deployments could extract the maximum value from the solution, but distributed ones had to rely on general scalability approaches. In this research, we propose a holistic approach specially tailored for databases following Cascading Polyglot Persistence to further maximize their inherent resource-saving goals. The proposed approach reduced the cluster size by 33% in a setup with just three ingestion nodes and by up to 50% in a setup with 10 ingestion nodes. Moreover, the evaluation shows that our scaling method provides efficient cluster growth, offering scaling efficiency greater than 85% of theoretically perfect scaling, while also ensuring data safety via data replication. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)

11 pages, 607 KiB  
Article
Energy Efficient Hybrid Relay-IRS-Aided Wireless IoT Network for 6G Communications
by Shaik Rajak, Inbarasan Muniraj, Karthikeyan Elumalai, A. S. M. Sanwar Hosen, In-Ho Ra and Sunil Chinnadurai
Electronics 2022, 11(12), 1900; https://doi.org/10.3390/electronics11121900 - 16 Jun 2022
Cited by 8 | Viewed by 2859
Abstract
Intelligent Reflecting Surfaces (IRS) have been recognized as a highly energy-efficient solution for future fast-growing 6G communication systems, reflecting the incident signal towards the receiver. Large numbers of Internet of Things (IoT) devices are distributed randomly to serve users while providing a high data rate, seamless data transfer, and Quality of Service (QoS). The major challenge in satisfying these requirements is the energy consumed by the IoT network. Hence, in this paper, we examine the energy efficiency (EE) of a hybrid relay-IRS-aided wireless IoT network for 6G communications. In our analysis, we study the EE performance of IRS-aided and DF relay-aided IoT networks separately, as well as the hybrid relay-IRS-aided IoT network. Our numerical results show that the hybrid relay-IRS-aided system achieves better EE than both the conventional relay-aided and the IRS-aided IoT networks. Furthermore, we find that multiple IRS blocks can outperform the relay in the high-SNR regime, which results in lower hardware costs and reduced power consumption. Full article
(This article belongs to the Topic Electronic Communications, IOT and Big Data)
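The qualitative comparison can be reproduced with a back-of-envelope model: EE = rate / total power, using the textbook N² passive array gain for the IRS and a half-duplex rate penalty for the DF relay. All link-budget numbers below are illustrative assumptions, not the paper's simulation parameters.

import math

B, P_tx, snr0 = 1e6, 1.0, 1e-3            # bandwidth, tx power, per-element SNR

def ee_irs(n_elements, p_static=0.01):
    """EE of an IRS link: rate grows with the N^2 passive beamforming gain."""
    rate = B * math.log2(1 + snr0 * n_elements ** 2)
    return rate / (P_tx + n_elements * p_static)

def ee_relay(p_relay=1.0, relay_gain=100):
    """EE of a DF relay link: half-duplex halves the achievable rate."""
    rate = 0.5 * B * math.log2(1 + relay_gain * snr0)
    return rate / (P_tx + p_relay)

for n in (50, 100, 200):
    print(f"IRS, N={n}: EE = {ee_irs(n) / 1e6:.2f} Mbit/J")
print(f"DF relay:    EE = {ee_relay() / 1e6:.2f} Mbit/J")

With these assumed numbers the IRS overtakes the relay as N grows, mirroring the high-SNR observation reported above.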
