Search Results (58)

Search Parameters:
Keywords = virtual machine migration

15 pages, 2730 KiB  
Article
Deep Learning for Network Intrusion Detection in Virtual Networks
by Daniel Spiekermann, Tobias Eggendorfer and Jörg Keller
Electronics 2024, 13(18), 3617; https://doi.org/10.3390/electronics13183617 - 11 Sep 2024
Viewed by 1426
Abstract
As organizations increasingly adopt virtualized environments for enhanced flexibility and scalability, securing virtual networks has become a critical part of current infrastructures. This research paper addresses the challenges related to intrusion detection in virtual networks, with a focus on various deep learning techniques. Since physical networks do not use encapsulation, but virtual networks do, packet analysis based on rules or machine learning outcomes for physical networks cannot be transferred directly to virtual environments. Encapsulation methods in current virtual networks include VXLAN (Virtual Extensible LAN), an EVPN (Ethernet Virtual Private Network), and NVGRE (Network Virtualization using Generic Routing Encapsulation). This paper analyzes the performance and effectiveness of network intrusion detection in virtual networks. It delves into challenges inherent in virtual network intrusion detection with deep learning, including issues such as traffic encapsulation, VM migration, and changing network internals inside the infrastructure. Experiments on detection performance demonstrate the differences between intrusion detection in virtual and physical networks. Full article
(This article belongs to the Special Issue Network Intrusion Detection Using Deep Learning)

17 pages, 576 KiB  
Article
Elevating Security in Migration: An Enhanced Trusted Execution Environment-Based Generic Virtual Remote Attestation Scheme
by Jie Yuan, Yinghua Shen, Rui Xu, Xinghai Wei and Dongxiao Liu
Information 2024, 15(8), 470; https://doi.org/10.3390/info15080470 - 7 Aug 2024
Viewed by 1369
Abstract
Cloud computing, as the most widely applied and prominent domain of distributed systems, has brought numerous advantages to users, including high resource sharing efficiency, strong availability, and excellent scalability. However, the complexity of cloud computing environments also introduces various risks and challenges. In the current landscape with numerous cloud service providers and diverse hardware configurations in cloud environments, addressing challenges such as establishing trust chains, achieving general-purpose virtual remote attestation, and ensuring secure virtual machine migration becomes a crucial issue that traditional remote attestation architectures cannot adequately handle. Confronted with these issues in a heterogeneous multi-cloud environment, we present a targeted solution—a secure migration-enabled generic virtual remote attestation architecture based on improved TEE. We introduce a hardware trusted module to establish and bind with a Virtual Root of Trust (VRoT), addressing the challenge of trust chain establishment. Simultaneously, our architecture utilizes the VRoT within TEE to realize a general-purpose virtual remote attestation solution across heterogeneous hardware configurations. Furthermore, we design a controller deployed in the trusted domain to verify migration conditions, facilitate key exchange, and manage the migration process, ensuring the security and integrity of virtual machine migration. Lastly, we conduct rigorous experiments to measure the overhead and performance of our proposed remote attestation scheme and virtual machine secure migration process. The results unequivocally demonstrate that our architecture provides better generality and migration security with only marginal overhead compared to other traditional remote attestation solutions. Full article

15 pages, 1744 KiB  
Article
Machine Learning to Estimate Workload and Balance Resources with Live Migration and VM Placement
by Taufik Hidayat, Kalamullah Ramli, Nadia Thereza, Amarudin Daulay, Rushendra Rushendra and Rahutomo Mahardiko
Informatics 2024, 11(3), 50; https://doi.org/10.3390/informatics11030050 - 19 Jul 2024
Cited by 1 | Viewed by 1867
Abstract
Currently, utilizing virtualization technology in data centers often imposes an increasing burden on the host machine (HM), leading to a decline in VM performance. To address this issue, live virtual migration (LVM) is employed to alleviate the load on the VM. This study introduces a hybrid machine learning model designed to estimate the live (pre-copy) migration of virtual machines within the data center. The proposed model integrates Markov Decision Process (MDP), genetic algorithm (GA), and random forest (RF) algorithms to forecast the prioritized movement of virtual machines and identify the optimal host machine target. The hybrid models achieve a 99% accuracy rate with quicker training times compared to previous studies that utilized K-nearest neighbor, decision tree classification, support vector machines, logistic regression, and neural networks. The authors recommend further exploration of a deep learning approach (DL) to address other data center performance issues. This paper outlines promising strategies for enhancing virtual machine migration in data centers. The hybrid models demonstrate high accuracy and faster training times than previous research, indicating the potential for optimizing virtual machine placement and minimizing downtime. The authors emphasize the significance of considering data center performance and propose further investigation. Moreover, it would be beneficial to delve into the practical implementation and dissemination of the proposed model in real-world data centers. Full article
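
The MDP/GA/RF pipeline itself is not reproduced in the abstract; as a minimal, hedged sketch of the decision such a model ultimately feeds, the snippet below picks a VM to migrate off an overloaded host machine and a least-loaded feasible target. The threshold, host names, and predicted loads are all illustrative assumptions, not values from the paper.

```python
OVERLOAD_THRESHOLD = 0.80  # assumed utilization ceiling (hypothetical)

def pick_migration(hosts, vms):
    """hosts: {host: utilization}; vms: {vm: (host, predicted_load)}.
    Returns (vm, source_host, target_host) or None if nothing to do."""
    overloaded = [h for h, u in hosts.items() if u > OVERLOAD_THRESHOLD]
    if not overloaded:
        return None
    src = max(overloaded, key=hosts.get)            # most loaded host first
    candidates = [(vm, load) for vm, (h, load) in vms.items() if h == src]
    vm, load = max(candidates, key=lambda c: c[1])  # largest predicted load
    targets = {h: u for h, u in hosts.items()
               if h != src and u + load <= OVERLOAD_THRESHOLD}
    if not targets:
        return None
    dst = min(targets, key=targets.get)             # least-loaded feasible host
    return vm, src, dst

hosts = {"hm1": 0.92, "hm2": 0.40, "hm3": 0.55}
vms = {"vm1": ("hm1", 0.35), "vm2": ("hm1", 0.30), "vm3": ("hm2", 0.20)}
print(pick_migration(hosts, vms))  # ('vm1', 'hm1', 'hm2')
```

A real implementation would replace the `max`/`min` heuristics with the learned priority scores the paper describes.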

20 pages, 501 KiB  
Article
Efficient Resource Management in Cloud Environments: A Modified Feeding Birds Algorithm for VM Consolidation
by Deafallah Alsadie and Musleh Alsulami
Mathematics 2024, 12(12), 1845; https://doi.org/10.3390/math12121845 - 13 Jun 2024
Cited by 1 | Viewed by 747
Abstract
Cloud data centers play a vital role in modern computing infrastructure, offering scalable resources for diverse applications. However, managing costs and resources efficiently in these centers has become a crucial concern due to the exponential growth of cloud computing. User applications exhibit complex behavior, leading to fluctuations in system performance and increased power usage. To tackle these obstacles, we introduce the Modified Feeding Birds Algorithm (ModAFBA) as an innovative solution for virtual machine (VM) consolidation in cloud environments. The primary objective is to enhance resource management and operational efficiency in cloud data centers. ModAFBA incorporates adaptive position update rules and strategies specifically designed to minimize VM migrations, addressing the unique challenges of VM consolidation. The experimental findings demonstrated substantial improvements in key performance metrics. Specifically, the ModAFBA method exhibited significant enhancements in energy usage, SLA compliance, and the number of VM migrations compared to benchmark algorithms such as TOPSIS, SVMP, and PVMP methods. Notably, the ModAFBA method achieved reductions in energy usage of 49.16%, 55.76%, and 65.13% compared to the TOPSIS, SVMP, and PVMP methods, respectively. Moreover, the ModAFBA method resulted in decreases of around 83.80%, 22.65%, and 89.82% in the quantity of VM migrations in contrast to the aforementioned benchmark techniques. The results demonstrate that ModAFBA outperforms these benchmarks by significantly reducing energy consumption, operational costs, and SLA violations. These findings highlight the effectiveness of ModAFBA in optimizing VM placement and consolidation, offering a robust and scalable approach to improving the performance and sustainability of cloud data centers. Full article

20 pages, 4067 KiB  
Article
Toward Optimal Virtualization: An Updated Comparative Analysis of Docker and LXD Container Technologies
by Daniel Silva, João Rafael and Alexandre Fonte
Computers 2024, 13(4), 94; https://doi.org/10.3390/computers13040094 - 9 Apr 2024
Viewed by 3642
Abstract
Traditional hypervisor-assisted virtualization is a leading virtualization technology in data centers, providing cost savings (CapEx and OpEx), high availability, and disaster recovery. However, its inherent overhead may hinder performance and seems not to scale or be flexible enough for certain applications, such as microservices, where deploying an application using a virtual machine is a longer and more resource-intensive process. Container-based virtualization, notably Docker, has received attention as an alternative that also facilitates continuous integration/continuous deployment (CI/CD). Meanwhile, LXD has reactivated interest in Linux LXC containers, providing unique operations, including live migration and full OS emulation. A careful analysis of both options is crucial for organizations to decide which best suits their needs. This study revisits key concepts about containers, exposes the advantages and limitations of each container technology, and provides an up-to-date performance comparison between both types of containers (applicational vs. system). Using extensive benchmarks and well-known workload metrics such as CPU scores, disk speed, and network throughput, we assess their performance and quantify their virtualization overhead. Our results show a clear overall trend toward meritorious performance and the maturity of both technologies (Docker and LXD), with low overhead and scalable performance. Notably, LXD shows greater stability, with more consistent performance. Full article

20 pages, 847 KiB  
Article
Queuing Model with Customer Class Movement across Server Groups for Analyzing Virtual Machine Migration in Cloud Computing
by Anna Kushchazli, Anastasia Safargalieva, Irina Kochetkova and Andrey Gorshenin
Mathematics 2024, 12(3), 468; https://doi.org/10.3390/math12030468 - 1 Feb 2024
Cited by 3 | Viewed by 1544
Abstract
The advancement of cloud computing technologies has positioned virtual machine (VM) migration as a critical area of research, essential for optimizing resource management, bolstering fault tolerance, and ensuring uninterrupted service delivery. This paper offers an exhaustive analysis of VM migration processes within cloud infrastructures, examining various migration types, server load assessment methods, VM selection strategies, ideal migration timing, and target server determination criteria. We introduce a queuing theory-based model to scrutinize VM migration dynamics between servers in a cloud environment. By reinterpreting resource-centric migration mechanisms into a task-processing paradigm, we accommodate the stochastic nature of resource demands, characterized by random task arrivals and variable processing times. The model is specifically tailored to scenarios with two servers and three VMs. Through numerical examples, we elucidate several performance metrics: task blocking probability, average tasks processed by VMs, and average tasks managed by servers. Additionally, we examine the influence of task arrival rates and average task duration on these performance measures. Full article
(This article belongs to the Special Issue Modeling and Analysis of Queuing Systems)
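
The paper's two-server, three-VM model is richer than any one formula, but its headline metric, task blocking probability, can be illustrated on the classic M/M/c/c loss system using the Erlang-B recursion. This is a hedged, simplified stand-in, not the paper's actual model.

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability B(c, a) for an M/M/c/c loss queue,
    where a = arrival rate / service rate (offered load in Erlangs)."""
    b = 1.0  # B(0, a) = 1: with zero servers every task is blocked
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Two servers, arrival rate equal to the mean service rate (a = 1 Erlang):
print(round(erlang_b(2, 1.0), 4))  # prints 0.2
```

Higher arrival rates or longer average task durations raise the offered load and hence the blocking probability, mirroring the sensitivity analysis the abstract describes.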

12 pages, 1257 KiB  
Proceeding Paper
Sustainable Power Prediction and Demand for Hyperscale Datacenters in India
by Ashok Pomnar, Anand Singh Rajawat, Nisha S. Tatkar and Pawan Bhaladhare
Eng. Proc. 2023, 59(1), 124; https://doi.org/10.3390/engproc2023059124 - 27 Dec 2023
Viewed by 1587
Abstract
Data localization, data explosion, data security, data protection, and data acceleration are important driving forces in India’s datacenter revolution, which has raised a demand for datacenter expansion in the country. In addition, the pandemic has pushed the need for technology adoption, digitization across industries, and migration to cloud-based services across the globe. The launch of 5G services, digital payments, big data analytics, smartphone usage, digital data access, IoT services, and other technologies like AI (artificial intelligence), AR (augmented reality), ML (machine learning), 5G, VR (virtual reality), and Blockchain have been a strong driving force for datacenter investments in India. However, the rapid expansion of these datacenters presents unique challenges, particularly in predicting and managing their power requirements. This paper focuses on understanding the power prediction and demand aspects specific to hyperscale datacenters in India. The study aims to analyze historical power consumption data from existing hyperscale datacenters in India and develop predictive models to estimate future power requirements. Factors such as server density, workload patterns, cooling systems, and energy-efficient technologies will be considered in the analysis. Datacenters negatively impact the environment because of their large power consumption, accounting for about 2% of global greenhouse gas emissions. Given the increasing cost of power, datacenter operators are naturally encouraged to save energy, as power is a major datacenter operational expenditure. Additionally, this research will explore the impact of renewable energy integration, backup power solutions, and demand–response mechanisms to optimize energy usage and reduce reliance on conventional power sources.
Many datacenter providers globally have started using power from renewable energy like solar and wind energy through Power Purchase Agreements (PPA) to reduce these carbon footprints and work towards a sustainable environment. In addition, today’s datacenter industry constantly looks for ways to become more energy-efficient through real innovation to reduce its carbon footprint. Full article
(This article belongs to the Proceedings of Eng. Proc., 2023, RAiSE-2023)

16 pages, 2871 KiB  
Article
Service Reliability Based on Fault Prediction and Container Migration in Edge Computing
by Lizhao Liu, Longyu Kang, Xiaocui Li and Zhangbing Zhou
Appl. Sci. 2023, 13(23), 12865; https://doi.org/10.3390/app132312865 - 30 Nov 2023
Cited by 2 | Viewed by 1146
Abstract
With improvements in the computing capability of edge devices and the emergence of edge computing, an increasing number of services are being deployed on the edge side, and container-based virtualization is used to deploy services to improve resource utilization. This has led to challenges in reliability because services deployed on edge nodes are prone to failure owing to hardware faults and a lack of technical support. To solve this reliability problem, we propose a solution based on fault prediction combined with container migration to address the service failure problem caused by node failure. This approach comprises two major steps: fault prediction and container migration. Fault prediction collects the log of services on edge nodes and uses these data to conduct time-sequence modeling. Machine-learning algorithms are chosen to predict faults on the edge. Container migration is modeled as an optimization problem. A migration node selection approach based on a genetic algorithm is proposed to determine the most suitable migration target to migrate container services on the device and ensure the reliability of the services. Simulation results show that the proposed approach can effectively predict device faults and migrate services based on the optimal container migration strategy to avoid service failures deployed on edge devices and ensure service reliability. Full article
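
The paper's trained models and genetic algorithm are not reproduced here; as a toy, hedged stand-in for the fault-prediction step that triggers migration, the snippet below flags a node when its latest error count jumps well above a short moving-average baseline. The window size and factor are illustrative assumptions.

```python
def predict_fault(error_counts, window=3, factor=2.0):
    """Flag a node as likely failing when the newest error count exceeds
    `factor` times the moving average of the preceding `window` samples."""
    if len(error_counts) <= window:
        return False  # not enough history to establish a baseline
    baseline = sum(error_counts[-window - 1:-1]) / window
    return error_counts[-1] > factor * max(baseline, 1e-9)

healthy = [2, 3, 2, 3, 2]   # illustrative per-interval error counts
failing = [2, 3, 2, 3, 20]
print(predict_fault(healthy), predict_fault(failing))  # False True
```

In the paper's design, a positive prediction would then feed the GA-based selection of a migration target node.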

23 pages, 2852 KiB  
Article
Urban Advanced Mobility Dependability: A Model-Based Quantification on Vehicular Ad Hoc Networks with Virtual Machine Migration
by Luis Guilherme Silva, Israel Cardoso, Carlos Brito, Vandirleya Barbosa, Bruno Nogueira, Eunmi Choi, Tuan Anh Nguyen, Dugki Min, Jae Woo Lee and Francisco Airton Silva
Sensors 2023, 23(23), 9485; https://doi.org/10.3390/s23239485 - 28 Nov 2023
Cited by 5 | Viewed by 1218
Abstract
In the rapidly evolving urban advanced mobility (UAM) sphere, Vehicular Ad Hoc Networks (VANETs) are crucial for robust communication and operational efficiency in future urban environments. This paper quantifies VANETs to improve their reliability and availability, essential for integrating UAM into urban infrastructures. It proposes a novel Stochastic Petri Nets (SPN) method for evaluating VANET-based Vehicle Communication and Control (VCC) architectures, crucial given the dynamic demands of UAM. The SPN model, incorporating virtual machine (VM) migration and Edge Computing, addresses VANET integration challenges with Edge Computing. It uses stochastic elements to mirror VANET scenarios, enhancing network robustness and dependability, vital for the operational integrity of UAM. Case studies using this model offer insights into system availability and reliability, guiding VANET optimizations for UAM. The paper also applies a Design of Experiments (DoE) approach for a sensitivity analysis of SPN components, identifying key parameters affecting system availability. This is critical for refining the model for UAM efficiency. This research is significant for monitoring UAM systems in future cities, presenting a cost-effective framework over traditional methods and advancing VANET reliability and availability in urban mobility contexts. Full article
(This article belongs to the Special Issue Vehicular Sensing for Improved Urban Mobility)

17 pages, 3884 KiB  
Article
Exploring Performance Degradation in Virtual Machines Sharing a Cloud Server
by Hamza Ahmed, Hassan Jamil Syed, Amin Sadiq, Ashraf Osman Ibrahim, Manar Alohaly and Muna Elsadig
Appl. Sci. 2023, 13(16), 9224; https://doi.org/10.3390/app13169224 - 14 Aug 2023
Cited by 1 | Viewed by 2122
Abstract
Cloud computing has become a leading technology for IT infrastructure, with many companies migrating their services to cloud servers in recent years. As cloud services continue to expand, the issue of cloud monitoring has become increasingly important. One important metric to monitor is CPU steal time, which measures the amount of time a virtual CPU waits for the actual CPU. In this study, we focus on the impact of CPU steal time on virtual machine performance and the potential problems that can arise. We implement our work using an OpenStack-based cloud environment and investigate intrusive and non-intrusive monitoring methods. Our analysis provides insights into the importance of CPU steal time monitoring and its impact on cloud performance. Full article
(This article belongs to the Special Issue Edge and Cloud Computing Systems and Applications)
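
On Linux, the steal-time metric this paper monitors is the eighth value on the aggregate "cpu" line of /proc/stat; the steal percentage between two samples is the steal delta over the total-jiffies delta. The sketch below computes it from two made-up sample strings (the numbers are illustrative, not from the paper).

```python
def parse_cpu_line(line):
    """Return (steal jiffies, total jiffies) from a /proc/stat 'cpu' line.
    Field order: user nice system idle iowait irq softirq steal guest guest_nice."""
    vals = [int(v) for v in line.split()[1:]]
    return vals[7], sum(vals)

def steal_percent(sample1, sample2):
    s1, t1 = parse_cpu_line(sample1)
    s2, t2 = parse_cpu_line(sample2)
    return 100.0 * (s2 - s1) / (t2 - t1)

before = "cpu 100 0 50 800 10 0 0 40 0 0"    # hypothetical first sample
after_ = "cpu 180 0 90 1480 20 0 0 130 0 0"  # hypothetical later sample
print(steal_percent(before, after_))  # 10.0
```

A sustained high steal percentage indicates the virtual CPU is waiting on the physical CPU, the performance degradation signal the study investigates.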

24 pages, 1244 KiB  
Article
Deep Reinforcement Learning for Workload Prediction in Federated Cloud Environments
by Zaakki Ahamed, Maher Khemakhem, Fathy Eassa, Fawaz Alsolami, Abdullah Basuhail and Kamal Jambi
Sensors 2023, 23(15), 6911; https://doi.org/10.3390/s23156911 - 3 Aug 2023
Cited by 4 | Viewed by 2016
Abstract
The Federated Cloud Computing (FCC) paradigm provides scalability advantages to Cloud Service Providers (CSP) in preserving their Service Level Agreement (SLA) as opposed to single Data Centers (DC). However, existing research has primarily focused on Virtual Machine (VM) placement, with less emphasis on energy efficiency and SLA adherence. In this paper, we propose a novel solution, Federated Cloud Workload Prediction with Deep Q-Learning (FEDQWP). Our solution addresses the complex VM placement problem, energy efficiency, and SLA preservation, making it comprehensive and beneficial for CSPs. By leveraging the capabilities of deep learning, our FEDQWP model extracts underlying patterns and optimizes resource allocation. Real-world workloads are extensively evaluated to demonstrate the efficacy of our approach compared to existing solutions. The results show that our DQL model outperforms other algorithms in terms of CPU utilization, migration time, finished tasks, energy consumption, and SLA violations. Specifically, our Q-learning model achieves efficient CPU utilization with a median value of 29.02, completes migrations in an average of 0.31 units, finishes an average of 699 tasks, consumes the least energy with an average of 1.85 kWh, and exhibits the lowest number of SLA violations with an average of 0.03 violations proportionally. These quantitative results highlight the superiority of our proposed method in optimizing performance in FCC environments. Full article
(This article belongs to the Section Communications)
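
FEDQWP's network architecture and state encoding are not given in the abstract; as a hedged sketch, the snippet below shows only the core tabular Q-learning update that deep Q-learning approximates with a neural network, on a toy placement decision. State names, actions, and hyperparameters are all illustrative assumptions.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.5, 0.9  # assumed learning rate and discount factor

def q_update(q, state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma*max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values(), default=0.0)
    q[state][action] += ALPHA * (reward + GAMMA * best_next - q[state][action])

q = defaultdict(dict)
q["low_load"] = {"dc1": 0.0, "dc2": 0.0}   # toy states: federated DC load levels
q["high_load"] = {"dc1": 0.0, "dc2": 0.0}  # toy actions: which DC receives the VM

# Placing on dc1 under low load met the SLA: reward +1.
q_update(q, "low_load", "dc1", 1.0, "high_load")
print(q["low_load"]["dc1"])  # 0.5
```

In the paper's setting the reward would encode the energy/SLA trade-off, and the table would be replaced by the deep Q-network.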

18 pages, 1055 KiB  
Article
Algorithmic Approach to Virtual Machine Migration in Cloud Computing with Updated SESA Algorithm
by Amandeep Kaur, Saurabh Kumar, Deepali Gupta, Yasir Hamid, Monia Hamdi, Amel Ksibi, Hela Elmannai and Shilpa Saini
Sensors 2023, 23(13), 6117; https://doi.org/10.3390/s23136117 - 3 Jul 2023
Cited by 24 | Viewed by 3049
Abstract
Cloud computing plays an important role in every IT sector. Many tech giants such as Google, Microsoft, and Facebook are deploying their data centres around the world to provide computation and storage services. Customers either submit their jobs directly or use brokers to submit jobs to the cloud centres. A primary aim is to reduce overall power consumption, which was ignored in the early days of cloud development. This was due to the performance expectations from cloud servers, as they were supposed to provide all the services through their service layers: IaaS, PaaS, and SaaS. Over time, researchers introduced new terminology and algorithmic architectures for reducing power consumption and improving sustainability, along with other algorithmic approaches such as statistically oriented learning and bioinspired algorithms. This paper takes an in-depth look at multiple approaches to migration among virtual machines and identifies various issues in existing approaches. The proposed work utilizes elastic scheduling inspired by the smart elastic scheduling algorithm (SESA) to develop a more energy-efficient VM allocation and migration algorithm. The proposed work uses cosine similarity and bandwidth utilization as additional utilities to improve the current performance in terms of QoS. The proposed work is evaluated for overall power consumption and service level agreement violation (SLA-V) and is compared with related state-of-the-art techniques. An algorithm is also proposed to solve problems found during the survey. Full article
(This article belongs to the Section Intelligent Sensors)

19 pages, 2870 KiB  
Article
Agent-Based Virtual Machine Migration for Load Balancing and Co-Resident Attack in Cloud Computing
by Biao Xu and Minyan Lu
Appl. Sci. 2023, 13(6), 3703; https://doi.org/10.3390/app13063703 - 14 Mar 2023
Cited by 4 | Viewed by 2016
Abstract
The majority of cloud computing consists of servers with different configurations which host several virtual machines (VMs) with changing resource demands. Additionally, co-located VMs are vulnerable to co-resident attacks (CRA) in a networked environment. These two issues may cause uneven resource usage within the server and attacks on the service, leading to performance and security degradation. This paper presents an Agent-based VM migration solution that can balance the burden on commercially diverse servers and avoid potential co-resident attacks by utilizing VM live migrations. The Agent’s policies include the following: (i) a heuristic migration optimization policy to select the VMs to be migrated and the matching hosts; (ii) a migration trigger policy to determine whether the host needs to relocate the VMs; (iii) an acceptance policy to decide if the migration request should be accepted; and (iv) a balancer heuristic policy to make the initial VM allocation. The experiments and analyses demonstrate that the Agents can mitigate CRA in a distributed way to mitigate the associated risks while achieving acceptable load balancing performance. Full article
(This article belongs to the Special Issue Security in Cloud Computing, Big Data and Internet of Things)
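
The abstract lists four Agent policies without detailing them; as a hedged sketch, the snippet below mocks up two of them: a migration trigger (host overloaded for several consecutive samples) and an acceptance check (the target must stay under threshold and, as one simple co-resident-attack mitigation, host only one tenant). Thresholds, the consecutive-sample rule, and tenant labels are illustrative assumptions, not the paper's heuristics.

```python
THRESHOLD, CONSECUTIVE = 0.85, 3  # assumed policy parameters (hypothetical)

def should_trigger(utilization_history):
    """Trigger policy: relocate VMs only after sustained overload,
    avoiding migrations caused by brief load spikes."""
    recent = utilization_history[-CONSECUTIVE:]
    return len(recent) == CONSECUTIVE and all(u > THRESHOLD for u in recent)

def should_accept(host_util, vm_load, vm_tenant, resident_tenants):
    """Acceptance policy: the incoming VM must fit under the threshold and
    must not be co-located with other tenants' VMs (CRA mitigation)."""
    fits = host_util + vm_load <= THRESHOLD
    isolated = all(t == vm_tenant for t in resident_tenants)
    return fits and isolated

print(should_trigger([0.5, 0.9, 0.9, 0.9]))             # True
print(should_accept(0.4, 0.2, "tenantA", {"tenantA"}))  # True
print(should_accept(0.4, 0.2, "tenantA", {"tenantB"}))  # False
```

Single-tenant hosting is a deliberately strict CRA defence; the paper's Agents trade this isolation off against load-balancing performance.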

23 pages, 1189 KiB  
Article
An Efficient Virtual Machine Consolidation Algorithm for Cloud Computing
by Ling Yuan, Zhenjiang Wang, Ping Sun and Yinzhen Wei
Entropy 2023, 25(2), 351; https://doi.org/10.3390/e25020351 - 14 Feb 2023
Cited by 3 | Viewed by 1998
Abstract
With the rapid development of integration in blockchain and IoT, virtual machine consolidation (VMC) has become a heated topic because it can effectively improve the energy efficiency and service quality of cloud computing in the blockchain. Current VMC algorithms are not effective enough because they do not treat the load of the virtual machine (VM) as a time series to be analyzed. Therefore, we proposed a VMC algorithm based on load forecasting to improve efficiency. First, we proposed a migration VM selection strategy based on load increment prediction, called LIP. Combined with the current load and load increment, this strategy can effectively improve the accuracy of selecting VMs from overloaded physical machines (PMs). Then, we proposed a VM migration point selection strategy based on load sequence prediction, called SIR. We merged VMs with complementary load series into the same PM, effectively improving the stability of the PM load, thereby reducing service level agreement violations (SLAV) and the number of VM migrations caused by resource competition on the PM. Finally, we proposed an improved VMC algorithm based on the load predictions of LIP and SIR. The experimental results show that our VMC algorithm can effectively improve energy efficiency. Full article
(This article belongs to the Special Issue Information Security and Privacy: From IoT to IoV)
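
The SIR strategy's sequence-prediction model is not reproducible from the abstract; as a hedged illustration of its core idea, plain Pearson correlation can stand in as the complementarity measure: VMs whose load series are strongly negatively correlated flatten each other's combined load when consolidated onto the same PM. The load values below are made up.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length load series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

day_peaking   = [0.2, 0.8, 0.9, 0.3]  # illustrative hourly CPU loads
night_peaking = [0.9, 0.3, 0.2, 0.8]

r = pearson(day_peaking, night_peaking)
print(round(r, 3))  # strongly negative: a good consolidation pair
```

A consolidation pass could greedily pair the most negatively correlated VMs first, subject to the PM's capacity constraint.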

25 pages, 4383 KiB  
Article
Chemokine Receptors—Structure-Based Virtual Screening Assisted by Machine Learning
by Paulina Dragan, Matthew Merski, Szymon Wiśniewski, Swapnil Ganesh Sanmukh and Dorota Latek
Pharmaceutics 2023, 15(2), 516; https://doi.org/10.3390/pharmaceutics15020516 - 3 Feb 2023
Cited by 5 | Viewed by 2834
Abstract
Chemokines modulate the immune response by regulating the migration of immune cells. They are also known to participate in such processes as cell–cell adhesion, allograft rejection, and angiogenesis. Chemokines interact with two different subfamilies of G protein-coupled receptors: conventional chemokine receptors and atypical chemokine receptors. Here, we focused on the former, which has been linked to many inflammatory diseases, including multiple sclerosis, asthma, nephritis, and rheumatoid arthritis. Available crystal and cryo-EM structures and homology models of six chemokine receptors (CCR1 to CCR6) were described and tested in terms of their usefulness in structure-based drug design. As a result of structure-based virtual screening for CCR2 and CCR3, several new active compounds were proposed. Known inhibitors of CCR1 to CCR6, acquired from ChEMBL, were used as training sets for two machine learning algorithms in ligand-based drug design. Performance of LightGBM was compared with a sequential Keras/TensorFlow model of neural network for these diverse datasets. A combination of structure-based virtual screening with machine learning allowed us to propose several active ligands for CCR2 and CCR3, with two distinct compounds predicted as CCR3 actives by all three tested methods: Glide, Keras/TensorFlow NN, and LightGBM. In addition, the performance of these three methods in the prediction of the CCR2/CCR3 receptor subtype selectivity was assessed. Full article
(This article belongs to the Special Issue Recent Advances in Antiviral Drug Development)