


Search Results (181)

Search Parameters:
Keywords = server less

23 pages, 1068 KiB  
Article
Utilization of a Lightweight 3D U-Net Model for Reducing Execution Time of Numerical Weather Prediction Models
by Hyesung Park and Sungwook Chung
Atmosphere 2025, 16(1), 60; https://doi.org/10.3390/atmos16010060 - 8 Jan 2025
Viewed by 427
Abstract
Conventional weather forecasting relies on numerical weather prediction (NWP), which solves atmospheric equations using numerical methods. The Korea Meteorological Administration (KMA) adopted the Met Office Global Seasonal Forecasting System version 6 (GloSea6) NWP model from the UK and runs it on a supercomputer. However, due to high task demands, the limited resources of the supercomputer have caused job queue delays. To address this, the KMA developed a low-resolution version, Low GloSea6, for smaller-scale servers at universities and research institutions. Despite its ability to run on less powerful servers, Low GloSea6 still requires significant computational resources like those of high-performance computing (HPC) clusters. We integrated deep learning with Low GloSea6 to reduce execution time and improve meteorological research efficiency. Through profiling, we confirmed that deep learning models can be integrated without altering the original configuration of Low GloSea6 or complicating physical interpretation. The profiling identified “tri_sor.F90” as the main CPU time hotspot. By combining the biconjugate gradient stabilized (BiCGStab) method, used for solving the Helmholtz problem, with a deep learning model, we reduced unnecessary hotspot calls, shortening execution time. We also propose a convolutional block attention module-based Half-UNet (CH-UNet), a lightweight 3D-based U-Net architecture, for faster deep-learning computations. In experiments, CH-UNet showed 10.24% lower RMSE than Half-UNet, which has fewer FLOPs. Integrating CH-UNet into Low GloSea6 reduced execution time by up to 71 s per timestep, averaging a 2.6% reduction compared to the original Low GloSea6, and 6.8% compared to using Half-UNet. This demonstrates that CH-UNet, with balanced FLOPs and high predictive accuracy, offers more significant execution time reductions than models with fewer FLOPs. Full article
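The hotspot the authors accelerate implements the BiCGStab solve for the Helmholtz problem. As a rough illustration of what that iteration does, here is a generic textbook BiCGStab on a toy 1D Helmholtz-style tridiagonal system (this is an illustrative sketch, not the GloSea6 Fortran code):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matvec(A, x):
    return [dot(row, x) for row in A]

def bicgstab(A, b, tol=1e-10, max_iter=200):
    """Unpreconditioned BiCGStab for A x = b (dense lists, zero initial guess)."""
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    r_hat = r[:]                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for _ in range(max_iter):
        rho_new = dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:    # converged on the half step
            return [xi + alpha * pi for xi, pi in zip(x, p)]
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            return x
        rho = rho_new
    return x

# Toy 1D Helmholtz-style operator: (2 + k^2) u_i - u_{i-1} - u_{i+1} = f_i
n, k2 = 8, 0.5
A = [[(2.0 + k2) if i == j else (-1.0 if abs(i - j) == 1 else 0.0)
     for j in range(n)] for i in range(n)]
f = [1.0] * n
u = bicgstab(A, f)
```

The paper's contribution is to replace calls into this kind of solver loop with a learned surrogate (CH-UNet) so that fewer hotspot invocations are needed per timestep.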

22 pages, 1666 KiB  
Article
CoAP/DTLS Protocols in IoT Based on Blockchain Light Certificate
by David Khoury, Samir Haddad, Patrick Sondi, Patrick Balian, Hassan Harb, Kassem Danach, Joseph Merhej and Jinane Sayah
IoT 2025, 6(1), 4; https://doi.org/10.3390/iot6010004 - 2 Jan 2025
Viewed by 598
Abstract
The Internet of Things (IoT) is expanding rapidly, but the security of IoT devices remains a noteworthy concern due to resource limitations and existing security conventions. This research investigates and proposes the use of a Light certificate with the Constrained Application Protocol (CoAP) instead of the X509 certificate based on traditional PKI/CA. We start by analyzing the impediments of current CoAP security over DTLS with the certificate mode based on CA root in constrained IoT devices and suggest the implementation of LightCert4IoT for CoAP over DTLS. The paper also describes a new modified DTLS handshake protocol for IoT devices, with application-server certificate authentication verification that relies on a blockchain, without the complication of the signed certificate and certificate chain. This approach streamlines the DTLS handshake process and reduces cryptographic overhead, making it particularly suitable for resource-constrained environments. Our proposed solution leverages blockchain to reinforce IoT device security through immutable device identities, secure device registration, and data integrity. The LightCert4IoT is smaller in size and requires less power consumption. Continuous research and advancement are pivotal to balancing security and effectiveness. This paper examines security challenges and demonstrates the effectiveness of the proposed solutions in guaranteeing the security of IoT networks by applying LightCert4IoT and using CoAP over DTLS with a new security mode based on blockchain. Full article
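The blockchain-backed certificate check described here can be sketched, very loosely, as a registry that stores only a hash of each device's light certificate; the verifier recomputes the hash of the presented certificate and compares. All class and method names below are hypothetical, and a dictionary stands in for the chain:

```python
import hashlib

class LightCertRegistry:
    """Toy stand-in for an on-chain registry: maps a device ID to the
    SHA-256 hash of its light certificate (public key + metadata)."""

    def __init__(self):
        self.chain = {}  # device_id -> hex digest (immutable in a real chain)

    def register(self, device_id, cert_bytes):
        self.chain[device_id] = hashlib.sha256(cert_bytes).hexdigest()

    def verify(self, device_id, presented_cert):
        """Handshake-side check: hash the presented certificate and
        compare against the registered digest. No CA chain is walked."""
        expected = self.chain.get(device_id)
        return (expected is not None and
                expected == hashlib.sha256(presented_cert).hexdigest())
```

The point of such a scheme is that the constrained device never parses or validates an X.509 chain; one hash comparison against the registry replaces signature verification up to a CA root.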

19 pages, 553 KiB  
Article
ORNIC: A High-Performance RDMA NIC with Out-of-Order Packet Direct Write Method for Multipath Transmission
by Jiandong Ma, Zhichuan Guo, Yipeng Pan, Mengting Zhang, Zhixiang Zhao, Zezheng Sun and Yiwei Chang
Electronics 2025, 14(1), 88; https://doi.org/10.3390/electronics14010088 - 28 Dec 2024
Viewed by 657
Abstract
Remote Direct Memory Access (RDMA) technology provides a low-latency, high-bandwidth, and CPU-bypassed method for data transmission between servers. Recent works have proved that multipath transmission, especially packet spraying, can avoid network congestion, achieve load balancing, and improve overall performance in data center networks (DCNs). Multipath transmission can result in out-of-order (OOO) packet delivery. However, existing RDMA transport protocols, such as RDMA over Converged Ethernet version 2 (RoCEv2), are designed for handling sequential packets, limiting their ability to support multipath transmission. To address this issue, in this study, we propose ORNIC, a high-performance RDMA Network Interface Card (NIC) with out-of-order packet direct write method for multipath transmission. ORNIC supports both in-order and out-of-order packet reception. The payload of OOO packets is written directly to user memory without reordering. The write address is embedded in the packets only when necessary. A bitmap is used to check data integrity and detect packet loss. We redesign the bitmap structure into an array of bitmap blocks that support dynamic allocation. Once a bitmap block is full, it is marked and can be freed in advance. We implement ORNIC on a Xilinx U200 FPGA (Field-Programmable Gate Array), which consumes less than 15% of hardware resources. ORNIC can achieve 95 Gbps RDMA throughput, which is nearly 2.5 times that of MP-RDMA. When handling OOO packets, ORNIC’s performance is virtually unaffected, while the performance of Xilinx ERNIC and Mellanox CX-5 drops below 1 Gbps. Moreover, compared with MELO and LEFT, our bitmap has higher performance and lower bitmap block usage. Full article
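The dynamically allocated bitmap-block idea can be modeled in a few lines: sequence numbers map to fixed-size blocks that are allocated on first use and freed as soon as every bit is set. This is a toy Python model of the bookkeeping, not the FPGA design; the block size and API are assumptions:

```python
BLOCK_BITS = 64  # assumed block width; the paper's hardware parameter may differ

class BitmapTracker:
    """Tracks received packet sequence numbers in dynamically allocated
    bitmap blocks. A block is freed as soon as all of its bits are set."""

    def __init__(self):
        self.blocks = {}        # block index -> int used as a bitmap
        self.completed = set()  # block indices already marked full and freed

    def mark(self, seq):
        blk, bit = divmod(seq, BLOCK_BITS)
        if blk in self.completed:
            return  # duplicate of an already-completed block
        word = self.blocks.get(blk, 0) | (1 << bit)
        if word == (1 << BLOCK_BITS) - 1:   # block full: free it in advance
            self.blocks.pop(blk, None)
            self.completed.add(blk)
        else:
            self.blocks[blk] = word

    def missing(self, blk):
        """Sequence numbers not yet seen in a still-allocated block
        (used to check data integrity / detect packet loss)."""
        if blk in self.completed:
            return []
        word = self.blocks.get(blk, 0)
        return [blk * BLOCK_BITS + i
                for i in range(BLOCK_BITS) if not word >> i & 1]
```

Freeing a full block early is what keeps the structure small under packet spraying: only blocks with outstanding gaps stay resident.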
(This article belongs to the Topic Advanced Integrated Circuit Design and Application)

22 pages, 739 KiB  
Article
Forward and Backward Private Searchable Encryption for Cloud-Assisted Industrial IoT
by Tianqi Peng, Bei Gong, Shanshan Tu, Abdallah Namoun, Sami Alshmrany, Muhammad Waqas, Hisham Alasmary and Sheng Chen
Sensors 2024, 24(23), 7597; https://doi.org/10.3390/s24237597 - 28 Nov 2024
Viewed by 828
Abstract
In the cloud-assisted industrial Internet of Things (IIoT), since the cloud server is not always trusted, the leakage of data privacy becomes a critical problem. Dynamic symmetric searchable encryption (DSSE) allows for the secure retrieval of outsourced data stored on cloud servers while ensuring data privacy. Forward privacy and backward privacy are necessary security requirements for DSSE. However, most existing schemes either trade the server’s large storage overhead for forward privacy or trade efficiency/overhead for weak backward privacy. These schemes cannot fully meet the security requirements of cloud-assisted IIoT systems. We propose a fast and strongly secure DSSE scheme called Veruna to address these limitations. To this end, we design a new state chain structure, which can not only ensure forward privacy with less server-side storage overhead but also achieve strong backward privacy with only a few cryptographic operations on the server. Security analysis proves that our scheme possesses forward privacy and Type-II backward privacy. Compared with many state-of-the-art schemes, our scheme has an advantage in search and update performance. The high efficiency and robust security make Veruna an ideal scheme for deployment in cloud-assisted IIoT systems. Full article
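The state-chain idea behind forward privacy can be illustrated with a toy index in which each update derives a fresh random state and the server entry points back to the previous state: an old search token cannot match future entries, because future tokens depend on states the server has never seen. This is a minimal sketch for intuition only (stored identifiers are left unencrypted for brevity; it is not Veruna's actual construction):

```python
import hashlib
import hmac
import os

def prf(key, msg):
    """Keyed PRF modeled as HMAC-SHA256."""
    return hmac.new(key, msg, hashlib.sha256).digest()

class StateChainIndex:
    """Toy forward-private index: the client keeps the latest state per
    keyword; each server entry stores a pointer to the previous state."""

    def __init__(self):
        self.key = os.urandom(32)   # client-side PRF key
        self.states = {}            # client: keyword -> latest state
        self.store = {}             # server: token -> (doc id, previous state)

    def update(self, keyword, doc_id):
        prev = self.states.get(keyword)
        new_state = os.urandom(16)          # fresh, unlinkable state
        token = prf(self.key, new_state)
        self.store[token] = (doc_id, prev)  # chain link to older entries
        self.states[keyword] = new_state

    def search(self, keyword):
        """Walk the chain from the newest state back to the first update."""
        results, state = [], self.states.get(keyword)
        while state is not None:
            doc_id, state = self.store[prf(self.key, state)]
            results.append(doc_id)
        return results
```

Because each update's token is derived from a state the server only learns at search time, an adversarial server cannot test whether a new update matches a previously issued search token, which is the essence of forward privacy.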
(This article belongs to the Section Internet of Things)

22 pages, 3568 KiB  
Article
Aniline Derivatives Containing 1-Substituted 1,2,3-Triazole System as Potential Drug Candidates: Pharmacokinetic Profile Prediction, Lipophilicity Analysis Using Experimental and In Silico Studies
by Elwira Chrobak, Katarzyna Bober-Majnusz, Mirosław Wyszomirski and Andrzej Zięba
Pharmaceuticals 2024, 17(11), 1476; https://doi.org/10.3390/ph17111476 - 2 Nov 2024
Viewed by 989
Abstract
Background: The triazole ring is an attractive structural unit in medicinal chemistry, and chemical compounds containing this type of system in their structure exhibit a wide spectrum of biological activity. They are used in the development of new pharmaceuticals. One of the basic parameters considered in the initial phase of designing potential drugs is lipophilicity, which affects the bioavailability and pharmacokinetics of drugs. Methods: The study aimed to assess the lipophilicity of fifteen new triazole derivatives of aniline using reversed-phase thin-layer chromatography (RP-TLC) and free web servers. Based on in silico methods, the drug-likeness and pharmacokinetic profile (ADMET) of the synthesized molecules were assessed. Results: A relationship was observed between the structure of the title compounds, including the position of substitution in the aniline ring, and the experimental values of lipophilicity parameters (logPTLC). Most of the algorithms used to determine theoretical logP values showed less sensitivity to structural differences of the tested molecules. All obtained derivatives satisfy the drug-likeness rules formulated by Lipinski, Ghose and Veber. Moreover, in silico analysis of the ADME profile showed favorable values of parameters related to absorption. Full article
(This article belongs to the Section Medicinal Chemistry)

14 pages, 1311 KiB  
Article
Decision Transformer-Based Efficient Data Offloading in LEO-IoT
by Pengcheng Xia, Mengfei Zang, Jie Zhao, Ting Ma, Jie Zhang, Changxu Ni, Jun Li and Yiyang Ni
Entropy 2024, 26(10), 846; https://doi.org/10.3390/e26100846 - 7 Oct 2024
Viewed by 846
Abstract
Recently, the Internet of Things (IoT) has witnessed rapid development. However, the scarcity of computing resources on the ground has constrained the application scenarios of IoT. Low Earth Orbit (LEO) satellites have drawn people’s attention due to their broader coverage and shorter transmission delay. They are capable of offloading more IoT computing tasks to mobile edge computing (MEC) servers with lower latency in order to address the issue of scarce computing resources on the ground. Nevertheless, it is highly challenging to share bandwidth and power resources among multiple IoT devices and LEO satellites. In this paper, we explore the efficient data offloading mechanism in the LEO satellite-based IoT (LEO-IoT), where LEO satellites forward data from the terrestrial to the MEC servers. Specifically, by optimally selecting the forwarding LEO satellite for each IoT task and allocating communication resources, we aim to minimize the data offloading latency and energy consumption. Particularly, we employ the state-of-the-art Decision Transformer (DT) to solve this optimization problem. We initially obtain a pre-trained DT through training on a specific task. Subsequently, the pre-trained DT is fine-tuned by acquiring a small quantity of data under the new task, enabling it to converge rapidly, with less training time and superior performance. Numerical simulation results demonstrate that in contrast to the classical reinforcement learning approach (Proximal Policy Optimization), the convergence speed of DT can be increased by up to three times, and the performance can be improved by up to 30%. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)

17 pages, 1907 KiB  
Article
In Silico Analysis of the Molecular Interaction between Anthocyanase, Peroxidase and Polyphenol Oxidase with Anthocyanins Found in Cranberries
by Victoria Araya, Marcell Gatica, Elena Uribe and Juan Román
Int. J. Mol. Sci. 2024, 25(19), 10437; https://doi.org/10.3390/ijms251910437 - 27 Sep 2024
Cited by 1 | Viewed by 1121
Abstract
Anthocyanins are bioactive compounds responsible for various physiological processes in plants and provide characteristic colors to fruits and flowers. Their biosynthetic pathway is well understood; however, the enzymatic degradation mechanism is less explored. Anthocyanase (β-glucosidase (BGL)), peroxidase (POD), and polyphenol oxidase (PPO) are enzymes involved in degrading anthocyanins in plants such as petunias, eggplants, and Sicilian oranges. The aim of this work was to investigate the physicochemical interactions between these enzymes and the identified anthocyanins (via UPLC-MS/MS) in cranberry (Vaccinium macrocarpon) through molecular docking to identify the residues likely involved in anthocyanin degradation. Three-dimensional models were constructed using the AlphaFold2 server based on consensus sequences specific to each enzyme. The models with the highest confidence scores (pLDDT) were selected, with BGL, POD, and PPO achieving scores of 87.6, 94.8, and 84.1, respectively. These models were then refined using molecular dynamics for 100 ns. Additionally, UPLC-MS/MS analysis identified various flavonoids in cranberries, including cyanidin, delphinidin, procyanidin B2 and B4, petunidin, pelargonidin, peonidin, and malvidin, providing important experimental data to support the study. Molecular docking simulations revealed the most stable interactions between anthocyanase and the anthocyanins cyanidin 3-arabinoside and cyanidin 3-glucoside, with a favorable ΔG of interaction between −9.3 and −9.2 kcal/mol. This study contributes to proposing a degradation mechanism and seeking inhibitors to prevent fruit discoloration. Full article

22 pages, 383 KiB  
Article
Quadratic p-Median Problem: A Bender’s Decomposition and a Meta-Heuristic Local-Based Approach
by Pablo Adasme, Andrés Viveros and Ali Dehghan Firoozabadi
Symmetry 2024, 16(9), 1114; https://doi.org/10.3390/sym16091114 - 27 Aug 2024
Viewed by 720
Abstract
In this paper, the quadratic p-median optimization problem is discussed, where the goal is to connect users to a selected group of facilities (emergency services, telecommunications servers, healthcare facilities) at the lowest possible cost. The problem is aimed at minimizing the cost of connecting these selected facilities. The costs are symmetric, meaning connecting two different points is the same in both directions. This problem extends the traditional p-median problem, a combinatorial problem used in various fields like facility location, network design, transportation, supply chain networks, emergency services, healthcare, and education planning. Surprisingly, the quadratic version has not been thoroughly considered in the literature. The paper highlights the formulation of two mixed-integer quadratic programming models to find optimal solutions to this problem. One model is a classic formulation, and the other is based on set cover theory. Linear versions and Bender’s decomposition formulations for each model are also derived. A Bender’s decomposition is solved using an algorithm that adds constraints during each iteration to improve the solution. Lazy constraints in the Gurobi solver’s branch and cut algorithm are dynamically added whenever a mixed-integer programming solution is found. Additionally, an efficient local search meta-heuristic is proposed that usually finds optimal solutions for tested instances. Challenging instances with up to 60 facilities and 2000 users are successfully solved. Our results show that Bender’s models with lazy constraints are the most effective for Euclidean and random test cases, achieving optimal solutions in less CPU time. The meta-heuristic also finds near-optimal solutions rapidly for these cases. Full article
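A first-improvement swap local search of the kind the abstract mentions might look like the sketch below, with the quadratic term modeled as a symmetric pairwise connection cost between selected medians. This is an illustrative toy, not the authors' meta-heuristic; the cost model and neighborhood are assumptions:

```python
import itertools
import random

def cost(medians, d_user, d_fac):
    """Assignment cost of each user to its nearest open median, plus the
    symmetric pairwise (quadratic) cost of connecting selected medians."""
    assign = sum(min(d_user[u][m] for m in medians)
                 for u in range(len(d_user)))
    quad = sum(d_fac[a][b]
               for a, b in itertools.combinations(sorted(medians), 2))
    return assign + quad

def swap_local_search(p, d_user, d_fac, seed=0):
    """First-improvement swap neighborhood: exchange one selected median
    for one closed facility while the objective keeps decreasing."""
    rng = random.Random(seed)
    current = set(rng.sample(range(len(d_fac)), p))
    best = cost(current, d_user, d_fac)
    improved = True
    while improved:
        improved = False
        for out in list(current):
            for inn in range(len(d_fac)):
                if inn in current:
                    continue
                cand = (current - {out}) | {inn}
                c = cost(cand, d_user, d_fac)
                if c < best:
                    current, best, improved = cand, c, True
                    break
            if improved:
                break
    return current, best
```

On a tiny line instance (facilities and users at positions 0..3, p = 2) this neighborhood reaches the global optimum {1, 2} from any start; on large instances it only guarantees a local optimum, which is why the paper pairs it with exact Benders-based models.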
(This article belongs to the Section Computer)

14 pages, 730 KiB  
Article
Fully Scalable Fuzzy Neural Network for Data Processing
by Łukasz Apiecionek
Sensors 2024, 24(16), 5169; https://doi.org/10.3390/s24165169 - 10 Aug 2024
Cited by 1 | Viewed by 1001
Abstract
The primary objective of the research presented in this article is to introduce an artificial neural network that demands less computational power than a conventional deep neural network. The development of this ANN was achieved through the application of Ordered Fuzzy Numbers (OFNs). In the context of Industry 4.0, there are numerous applications where this solution could be utilized for data processing. It allows the deployment of Artificial Intelligence at the network edge on small devices, eliminating the need to transfer large amounts of data to a cloud server for analysis. Such networks will be easier to implement in small-scale solutions, like those for the Internet of Things, in the future. This paper presents test results where a real system was monitored, and anomalies were detected and predicted. Full article

30 pages, 4245 KiB  
Article
Evolving High-Performance Computing Data Centers with Kubernetes, Performance Analysis, and Dynamic Workload Placement Based on Machine Learning Scheduling
by Vedran Dakić, Mario Kovač and Jurica Slovinac
Electronics 2024, 13(13), 2651; https://doi.org/10.3390/electronics13132651 - 5 Jul 2024
Cited by 6 | Viewed by 113496
Abstract
In the past twenty years, the IT industry has moved away from using physical servers for workload management to workloads consolidated via virtualization and, in the next iteration, further consolidated into containers. Later, container workloads based on Docker and Podman were orchestrated via Kubernetes or OpenShift. On the other hand, high-performance computing (HPC) environments have been lagging in this process, as much work is still needed to figure out how to apply containerization platforms for HPC. Containers have many advantages, as they tend to have less overhead while providing flexibility, modularity, and maintenance benefits. This makes them well-suited for tasks requiring a lot of computing power that are latency- or bandwidth-sensitive. But they are complex to manage, and many daily operations are based on command-line procedures that take years to master. This paper proposes a different architecture based on seamless hardware integration and a user-friendly UI (User Interface). It also offers dynamic workload placement based on real-time performance analysis and prediction and Machine Learning-based scheduling. This solves a prevalent issue in Kubernetes: the suboptimal placement of workloads without needing individual workload schedulers, as they are challenging to write and require much time to debug and test properly. It also enables us to focus on one of the key HPC issues: energy efficiency. Furthermore, the application we developed that implements this architecture helps with the Kubernetes installation process, which is fully automated, no matter which hardware platform we use: x86, ARM, and soon, RISC-V. The results we achieved using this architecture and application are very promising in two areas: the speed of workload scheduling and the placement of workloads on the correct node. Full article
(This article belongs to the Section Computer Science & Engineering)

21 pages, 1402 KiB  
Article
Latency-Sensitive Function Placement among Heterogeneous Nodes in Serverless Computing
by Urooba Shahid, Ghufran Ahmed, Shahbaz Siddiqui, Junaid Shuja and Abdullateef Oluwagbemiga Balogun
Sensors 2024, 24(13), 4195; https://doi.org/10.3390/s24134195 - 27 Jun 2024
Cited by 1 | Viewed by 1287
Abstract
Function as a Service (FaaS) is highly beneficial to smart city infrastructure due to its flexibility, efficiency, and adaptability, specifically for integration in the digital landscape. FaaS has a serverless setup, which means that an organization no longer has to worry about specific infrastructure management tasks; the developers can focus on how to deploy and create code efficiently. Since FaaS aligns well with the IoT, it easily integrates with IoT devices, thereby making it possible to perform event-based actions and real-time computations. In our research, we offer a likelihood-based adaptive machine learning model for identifying the right placement for a function. We employ the XGBoost regressor to estimate the execution time of each function and utilize the decision tree regressor to predict network latency. By encompassing factors like network delay, arrival computation, and resource demand, the machine learning model eases the selection of a placement. In our emulation, we use Docker containers, focusing on serverless node type, serverless node variety, function location, deadlines, and edge-cloud topology. Thus, the primary objectives are to meet deadlines and enhance the use of available resources, and from this, we can see that effective utilization of resources leads to enhanced deadline compliance. Full article
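Once per-node execution time and network latency have been predicted (by the XGBoost and decision tree regressors, respectively), the placement step reduces to scoring candidate nodes against the deadline. A hedged sketch with the two predictors abstracted as callables; all names and the tie-breaking rule are assumptions, not the paper's exact algorithm:

```python
def choose_placement(nodes, predict_exec, predict_latency, deadline):
    """Pick the node with the lowest predicted completion time
    (execution + network latency), preferring nodes that meet the deadline.

    nodes           : list of dicts, each with at least a "name" key
    predict_exec    : node -> predicted execution time (stand-in for XGBoost)
    predict_latency : node -> predicted network latency (stand-in for the
                      decision tree regressor)
    """
    scored = []
    for node in nodes:
        total = predict_exec(node) + predict_latency(node)
        # Sort key: deadline misses last, then lowest total time.
        scored.append((total > deadline, total, node["name"]))
    scored.sort()
    misses_deadline, _, name = scored[0]
    return name, not misses_deadline
```

A fast edge node with low latency but slow execution can therefore lose to a distant cloud node whose combined predicted time is smaller, which is the trade-off the abstract describes.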

16 pages, 331 KiB  
Article
Verifiable Additive Homomorphic Secret Sharing with Dynamic Aggregation Support
by Sinan Wang, Changgen Peng, Xinxin Deng, Zongfeng Peng and Qihong Chen
Electronics 2024, 13(12), 2378; https://doi.org/10.3390/electronics13122378 - 18 Jun 2024
Viewed by 841
Abstract
(n,m,t)-Homomorphic Secret Sharing (HSS) allows n clients to secretly share data with m servers, which compute a function f homomorphically on the received shares while keeping the input data private from any collection of t colluding servers. In Verifiable Homomorphic Secret Sharing (VHSS), if partially colluding malicious servers submit erroneous computation results to the client, those results are rejected by the client. In traditional static homomorphic secret sharing schemes, once a secret share of the raw data is assigned to a group of servers, all servers in the group must participate in the computation, which means that the computation has to be restarted once some servers fail to perform the task. To solve this problem, we propose the first dynamic homomorphic secret sharing scheme for additive computation. In our scheme, once some servers fail, there is no need to redo the secret sharing; only the index set of servers that perform the computation needs to be reissued. Our structure assigns more of the computation to the servers, which is very useful in real scenarios. In addition, we propose dynamic verifiable homomorphic secret sharing schemes based on the above schemes, which have less computational overhead than existing schemes, although we sacrifice the public verifiability property. Finally, we give a detailed correctness, security, and verifiability analysis of the two proposed schemes and provide theoretical and experimental evaluations of the computational overhead. Full article
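The additive, server-side aggregation at the core of such schemes can be sketched as plain additive secret sharing over a prime field: each client splits its input into m shares that sum to the input, each server adds up the shares it holds, and the sum of the server outputs reconstructs the sum of all inputs. Verification and the dynamic index set are omitted; this is a generic textbook construction, not the proposed scheme:

```python
import random

P = 2**61 - 1  # prime modulus for the share arithmetic (an assumption)

def share(secret, m, rng):
    """Split `secret` into m additive shares that sum to secret mod P."""
    shares = [rng.randrange(P) for _ in range(m - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def server_aggregate(all_shares, j):
    """Server j locally adds its share of every client's input.
    No single server learns anything about an individual secret."""
    return sum(shares[j] for shares in all_shares) % P

def reconstruct(server_outputs):
    """Client sums the per-server outputs to get the sum of all secrets."""
    return sum(server_outputs) % P
```

Any m - 1 servers see only uniformly random field elements, which is what makes the aggregation privacy-preserving; the paper's contribution layers verifiability and dynamic server selection on top of this additive core.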
(This article belongs to the Special Issue Digital Security and Privacy Protection: Trends and Applications)

21 pages, 3414 KiB  
Article
Pedestrian Abnormal Behavior Detection System Using Edge–Server Architecture for Large–Scale CCTV Environments
by Jinha Song and Jongho Nang
Appl. Sci. 2024, 14(11), 4615; https://doi.org/10.3390/app14114615 - 27 May 2024
Viewed by 1100
Abstract
As the deployment of CCTV cameras for safety continues to increase, the monitoring workload has significantly exceeded the capacity of the current workforce. To overcome this problem, intelligent CCTV technologies and server-efficient deep learning analysis models are being developed. However, real-world applications exhibit performance degradation due to environmental changes and limited server processing capacity for multiple CCTVs. This study proposes a real-time pedestrian anomaly detection system with an edge–server structure that ensures efficiency and scalability. In the proposed system, the pedestrian abnormal behavior detection model analyzed by the edge uses a rule-based mechanism that can detect anomalies frequently, albeit less accurately, with high recall. The server uses a deep learning-based model with high precision because it analyzes only the sections detected by the edge. The proposed system was applied to an experimental environment using 20 video streams, 18 edge devices, and 3 servers equipped with 2 GPUs as a substitute for real CCTV. Pedestrian abnormal behavior was included in each video stream to conduct experiments in real-time processing and compare the abnormal behavior detection performance between the case with the edge and server alone and that with the edge and server in combination. Through these experiments, we verified that 20 video streams can be processed with 18 edges and 3 GPU servers, which confirms the scalability of the proposed system according to the number of events per hour and the event duration. We also demonstrate that the pedestrian anomaly detection model with the edge and server is more efficient and scalable than the models with these components alone. The linkage of the edge and server can reduce the false detection rate and provide a more accurate analysis. This research contributes to the development of control systems in urban safety and public security by proposing an efficient and scalable analysis system for large-scale CCTV environments. Full article
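The two-stage division of labor (a cheap high-recall rule at the edge, the expensive high-precision model only on flagged segments) can be sketched as a simple filter chain. The motion-based rule, threshold, and field names below are hypothetical placeholders for the paper's actual detectors:

```python
def edge_rule(segment, threshold=0.3):
    """Cheap, high-recall rule run on the edge device: flag any video
    segment with an unusual motion score (hypothetical feature)."""
    return segment["motion"] > threshold

def pipeline(segments, server_model):
    """Only segments flagged by the edge rule are forwarded to the
    expensive, high-precision server model; the rest never leave the edge."""
    flagged = [s for s in segments if edge_rule(s)]
    return [s["id"] for s in flagged if server_model(s)]
```

The efficiency claim follows directly from this shape: server GPU time scales with the (small) number of flagged segments rather than with the total number of streams.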

26 pages, 8814 KiB  
Article
Network Pharmacology, Molecular Docking, and Molecular Dynamics Simulation Analysis Reveal Insights into the Molecular Mechanism of Cordia myxa in the Treatment of Liver Cancer
by Li Li, Alaulddin Hazim Mohammed, Nazar Aziz Auda, Sarah Mohammed Saeed Alsallameh, Norah A. Albekairi, Ziyad Tariq Muhseen and Christopher J. Butch
Biology 2024, 13(5), 315; https://doi.org/10.3390/biology13050315 - 1 May 2024
Cited by 3 | Viewed by 3978
Abstract
Traditional treatments of cancer face various challenges, including toxicity, medication resistance, and financial burdens. Bioactive phytochemicals employed in complementary alternative medicine, on the other hand, have recently gained interest due to their ability to modulate a wide range of molecular pathways while being less harmful. We therefore used a network pharmacology approach to study the possible regulatory mechanisms of the active constituents of Cordia myxa in the treatment of liver cancer (LC). Active constituents were retrieved from the IMPPAT database and a literature review, and their targets from the STITCH and Swiss Target Prediction databases. LC-related targets were retrieved from expression datasets (GSE39791, GSE76427, GSE22058, GSE87630, and GSE112790) in the Gene Expression Omnibus (GEO). The DAVID Gene Ontology (GO) database was used to annotate target proteins, while the Kyoto Encyclopedia of Genes and Genomes (KEGG) was used to analyze signaling pathway enrichment. STRING and Cytoscape were used to create protein–protein interaction (PPI) networks, and the degree-scoring algorithm of CytoHubba was used to identify hub genes. The GEPIA2 server was used for survival analysis, and PyRx for molecular docking. Survival and network analyses revealed that five genes, heat shock protein 90 AA1 (HSP90AA1), estrogen receptor 1 (ESR1), cytochrome P450 3A4 (CYP3A4), cyclin-dependent kinase 1 (CDK1), and matrix metalloproteinase-9 (MMP9), are linked with the survival of LC patients. Finally, we conclude that four highly active ingredients, namely cosmosiin, rosmarinic acid, quercetin, and rubinin, influence the expression of HSP90AA1, which may serve as a potential therapeutic target for LC. These results were further validated by molecular dynamics simulation, which showed the complexes to be highly stable.
The residues of the targeted protein remained highly stable, except for the N-terminal domain, without affecting drug binding. This integrated network pharmacology and docking study demonstrated that C. myxa has a promising preventive effect on LC by acting on cancer-related signaling pathways. Full article
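The hub-gene step described above (CytoHubba's degree-scoring method) amounts to ranking PPI-network nodes by how many direct interaction partners they have. A minimal sketch in plain Python, using a hypothetical toy edge list rather than the study's actual STRING export:

```python
from collections import Counter

def hub_genes(ppi_edges, top_k=5):
    """Rank nodes of a protein-protein interaction (PPI) network by degree
    (number of direct interaction partners) and return the top_k hubs,
    mirroring CytoHubba's 'Degree' scoring method."""
    degree = Counter()
    for a, b in ppi_edges:
        degree[a] += 1
        degree[b] += 1
    return [gene for gene, _ in degree.most_common(top_k)]

# Hypothetical toy edge list for illustration only; a real analysis would
# load the STRING interaction export for the LC-related targets.
edges = [
    ("HSP90AA1", "ESR1"), ("HSP90AA1", "CDK1"), ("HSP90AA1", "MMP9"),
    ("ESR1", "CYP3A4"), ("CDK1", "MMP9"), ("HSP90AA1", "CYP3A4"),
]
print(hub_genes(edges, top_k=3))  # HSP90AA1 ranks first (degree 4)
```

In this toy network HSP90AA1 has the most partners and so tops the ranking, which is the same logic by which the study's five hub genes were surfaced.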
20 pages, 873 KiB  
Article
Asynchronous Privacy-Preservation Federated Learning Method for Mobile Edge Network in Industrial Internet of Things Ecosystem
by John Owoicho Odeh, Xiaolong Yang, Cosmas Ifeanyi Nwakanma and Sahraoui Dhelim
Electronics 2024, 13(9), 1610; https://doi.org/10.3390/electronics13091610 - 23 Apr 2024
Cited by 2 | Viewed by 1488
Abstract
The typical industrial Internet of Things (IIoT) network system relies on real-time data uploads for timely processing. However, device heterogeneity, high network latency, or a malicious central server during transmission creates a propensity for privacy leakage or loss of model accuracy. Federated learning comes in handy here: the edge server requires less time and enables local data processing, reducing the data-upload delay, and it allows neighboring edge nodes to share data while maintaining data privacy and confidentiality. This can be challenged, however, by a network disruption that takes edge nodes or sensors offline or alters the learning process, exposing the already transmitted model to a malicious server that eavesdrops on the channel, intercepts the model in transit, and gleans its information, compromising the privacy of the model within the network. To mitigate this, this paper proposes asynchronous privacy-preservation federated learning for mobile edge networks in the IIoT ecosystem (APPFL-MEN), which incorporates the iteration model design update strategy (IMDUS) scheme, enabling the edge server to share more real-time model updates with online nodes and fewer with offline nodes, without exposing the data to a malicious node or an attacker. In addition, it adopts a double-weight modification strategy during communication between the edge node and the edge server or gateway for an enhanced model training process. Furthermore, it incorporates a convergence-boosting process, resulting in a less error-prone, more secure global model. The performance evaluation, with numerical results, shows good accuracy, efficiency, and lower bandwidth usage by APPFL-MEN while preserving model privacy compared with state-of-the-art methods. Full article
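The core asynchronous idea, giving fresh updates from online nodes more influence on the global model than stale updates from offline nodes, can be sketched as a staleness-weighted server-side blend. The decay rule `base_lr / (1 + staleness)` below is a generic stand-in, not the paper's actual IMDUS or double-weight formula, and the function name is hypothetical:

```python
def async_update(global_model, client_model, staleness, base_lr=0.5):
    """One asynchronous federated-learning step on the server: blend a
    client's model into the global model with a weight that decays as the
    client's update grows stale. Online nodes (staleness 0) count most;
    long-offline nodes contribute little."""
    weight = base_lr / (1.0 + staleness)
    return [
        (1.0 - weight) * g + weight * c
        for g, c in zip(global_model, client_model)
    ]

# A fresh (online) client moves the global model five times further
# than one whose update is four rounds stale.
global_model = [0.0, 0.0]
fresh = async_update(global_model, [1.0, 1.0], staleness=0)  # weight 0.5
stale = async_update(global_model, [1.0, 1.0], staleness=4)  # weight 0.1
print(fresh, stale)
```

The design point this illustrates is that the server never blocks waiting for offline nodes; it simply discounts their contributions, which is how an asynchronous scheme keeps training moving while limiting the damage a delayed (or intercepted and replayed) model can do.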
(This article belongs to the Section Networks)