Article

Many-Objective Quantum-Inspired Particle Swarm Optimization Algorithm for Placement of Virtual Machines in Smart Computing Cloud

Faculty of Mathematics and Computer Science, Warsaw University of Technology, 00-662 Warsaw, Poland
Entropy 2022, 24(1), 58; https://doi.org/10.3390/e24010058
Submission received: 19 November 2021 / Revised: 20 December 2021 / Accepted: 23 December 2021 / Published: 28 December 2021
(This article belongs to the Topic Quantum Information and Quantum Computing)

Abstract

Particle swarm optimization (PSO) is an effective metaheuristic that can determine Pareto-optimal solutions. We propose an extended PSO that introduces quantum gates in order to ensure the diversity of particle populations searching for efficient alternatives. The quality of solutions was verified on the problem of resource assignment in the computing cloud to improve the live migration of virtual machines. We consider the multi-criteria optimization problem of deep learning-based models embedded into virtual machines. Computing clouds with deep learning agents can support several areas such as education, smart cities or the economy. Because deep learning agents require substantial computer resources, seven criteria are studied: the electric power of hosts, the reliability of the cloud, the CPU workload of the bottleneck host, the communication capacity of the critical node, the free RAM capacity of the most loaded memory, the free disk capacity of the busiest storage, and the overall computer cost. Quantum gates modify the accepted position for the current location of a particle. To verify the above concept, various simulations have been carried out on a laboratory cloud based on the OpenStack platform. Numerical experiments have confirmed that the many-objective quantum-inspired particle swarm optimization algorithm provides better solutions than the other metaheuristics.

Graphical Abstract

1. Introduction

An approach based on many-objective decision-making can be developed for smart computer infrastructures in some crucial domains such as education, health care, public transport and urban planning. If the number of criteria is greater than three, we consider the many-criteria optimization problem as a special case of the multi-criteria optimization problem. However, more importantly, as the number of criteria increases, there is an explosion in the number of Pareto-optimal solutions. We show an example where the number of efficient evaluations is five for two criteria and then two hundred for four criteria. So, how many will there be for seven criteria and more? In this paper, we explain this phenomenon of a sudden explosion of the number of Pareto-optimal solutions. Consequently, much more memory should be allocated to the Pareto solution archive, and a much larger population size should be assumed in evolutionary algorithms, particle swarm optimization (PSO) or ant colony optimization (ACO).
Because several criteria characterize complex systems such as computing clouds, the placement of virtual machines in smart computing clouds is formulated as a many-objective optimization problem. In the multi-task training model, the selected deep learning models (DLs) can be retrained at the central hosts cyclically, and then their copies migrate as virtual machines (VMs) to the edge of the computing cloud. Besides this, a host failure forces the virtual machines to be moved immediately from that computer to the most appropriate servers. This smart computing cloud should be supported by the teleportation of virtual machines via the Internet of Things (IoT) to optimize several criteria such as energy consumption, the workload of the bottleneck computer, the cost of hardware, the reliability of the cloud and others. Some criteria can be constrained because of the dramatic increase in the number of wireless devices, which introduces several additional bounds [1].
Recent advances in cloud computing, deep learning and Big Data have introduced data-driven solutions into platforms based on OpenStack, which enables the construction of private computing clouds with their own intelligent services, for example, for education at universities or for the smart city. On the other hand, commercial public clouds such as Microsoft Azure and Amazon Elastic Compute Cloud immediately deliver advanced services based on artificial intelligence and quantum computing. In both cases, efficient placement of VMs and management of computer resources is one of the particularly important issues [2]. An adequate resource assignment is a crucial challenge for the optimization of the teleportation of virtual machines by a cloud hypervisor that allocates computer resources to virtual machines with requirements for central processing units (CPUs), graphical processing units (GPUs), random-access memory (RAM) and disk storage. Live migration is very effective because it involves transferring VMs without shutting down client systems and without extensive administrative work [3].
Teleportation of a virtual machine can change the workload of the bottleneck computer or the workload of the bottleneck transmission node. Besides this, the energy consumption by the set of cloud hosts depends on the virtual machine assignment. Some VMs can be moved to less energy-consuming hosts, and others can be allocated to database servers to accelerate operations on their data. Therefore, the problem of cloud resource assignment should be formulated as a multi-objective optimization issue with selected criteria. We present a new many-objective problem of virtual machine placement with seven criteria: the electric power of hosts, the reliability of the cloud, the CPU workload of the bottleneck host, the communication capacity of the critical node, the free RAM capacity of the most loaded memory, the free disk capacity of the busiest storage, and the overall computer cost. This is a significant extension of the current formulation of this issue with four criteria [4].
Metaheuristics such as genetic algorithms, differential evolution, harmony search, bee colony optimization, particle swarm optimization and ant colony algorithms can be used for solving multi-criteria optimization problems to find a representation of Pareto-optimal solutions [5]. Based on many numerical experiments, we propose the Many-objective Quantum-inspired Particle Swarm Optimization Algorithm (MQPSO) with additional improvements based on quantum gates. Not only do high-quality solutions justify this approach, but decision-making in computing clouds also requires quantum-inspired intelligent software to handle some dynamic situations. Furthermore, PSOs have already been applied to support several key tasks in computing clouds.
The goal of this paper is to study MQPSO for the optimization of VM placement with respect to the seven selected criteria subject to computer resource constraints. The main contributions are the formulation of the many-objective optimization problem with seven crucial criteria for the management of virtual machines, and the MQPSO algorithm that can be used to determine Pareto-optimal solutions for other many-objective optimization problems, too.
The rest of this paper is organized as follows. Related work is described in Section 2. Then, a teleportation model of virtual machines is characterized in Section 3. Next, Section 4 presents the many-objective optimization problem formulation, and then Section 5 describes MQPSO with quantum gates for finding Pareto-optimal solutions. Finally, some numerical experiments are presented in Section 6, just before Section 7 with conclusions.

2. Related Work

We consider studies on the placement of virtual machines in the computing cloud, and then we analyze quantum-inspired PSOs [6]. An overview of virtual machine placement schemes in cloud computing is presented in [7]. An energy-efficient algorithm for live migration of virtual machines may reduce the wastage of power by initiating a sleep mode of idle hosts [8]. To avoid overloaded servers, the cloud workload optimizer analyzes the load on physical machines and determines an optimal placement with respect to energy consumption by an ant colony optimization algorithm. A local migration agent based on an optimal solution selects appropriate physical servers for VMs. Finally, the migration orchestrator moves the VMs to the hosts, and an energy manager initiates a sleep mode for idle hosts to save energy. In addition, efficient management of resources is recommended for a green (or energy-aware) cloud with parallel machine learning [9].
Live migration of virtual machines involves a very small (almost zero) downtime, on the order of milliseconds [1]. This live migration is carried out without disrupting any active network connections, because the original VM keeps running while the VM is moved to the target host. There are other benefits associated with migrating VMs. For instance, the placement of VMs can be a solution for the minimization of the processor usage of hosts or for the minimization of their Input/Output usage, subject to keeping the downtime of virtual machines at zero [10]. To increase the throughput of the system, VMs are supposed to be distributed to the servers in proportion to their computing Input/Output capacity. Migrations of virtual machines can be conducted by the management services of software platforms such as Red Hat Cluster Suite to create load-balancing cluster services [10].
We recommend OpenStack as suitable cloud software for supporting the live migration of VMs. For instance, it was demonstrated by using high-speed network interfaces with a transmission capacity of 10 Gb/s [11]. Our experimental cloud GUT-WUT confirms the adequate capabilities of OpenStack for the teleportation of VMs.
The influence of both the size of the transferred virtual machine and the bandwidth of the network on the required teleportation time is crucial [9]. We can save energy by well-organized management of these resources in cloud data centers. Besides this, adequate placement of VMs in datacenters may diminish the number of VM migrations, which is another optimization criterion [12]. If the smart VMs are going to migrate, some of them may be unable to obtain the destination hosts due to the limited resources. To avoid the above situations, Wang et al. formulated another NP-hard optimization problem, which both maximizes the revenue of successfully placing the VMs and minimizes the total number of migrations subject to the resource constraints of hosts. Regarding the complexity proof, the above problem is equivalent to the shortest path problem [9].
Deep reinforcement learning was developed for the multi-objective placement of virtual machines in cloud datacenters [13]. Because high packing factors could lead to performance and security issues, for example, virtual machines can compete for hardware resources or collude to leak data, a multi-objective approach was introduced to find optimal placement strategies considering different goals, such as the impact of hardware outages, the power required by the datacenter, and the performance perceived by users. Placement strategies are found by using a deep reinforcement learning framework to select the best placement heuristic. The proposed algorithm outperforms bin packing heuristics [13].
Task scheduling and allocation of cloud resources impact the performance of applications [14]. Customer satisfaction, resource utilization and better performance are crucial for service providers. A multi-cloud environment may significantly reduce the cost, makespan, delay, waiting time and response time to avoid customer dissatisfaction and to improve the quality of services. A multi-swarm optimization model can be used for multi-cloud scheduling to improve the quality of services [14].
Green strategies for networking and computing inside data centers, such as server consolidation or energy-aware routing, should not negatively impact the quality and service level agreements expected from network operators [15]. Therefore, robust strategies can place virtual network functions to save energy and to increase the protection level against resource demand uncertainty. The proposed model explicitly provides for robustness to unknown or imprecisely formulated resource demand variations, powers down unused routers, switch ports and servers, and calculates the energy-optimal virtual network function placement and network embedding, also considering latency constraints on the service chains [15]. A fast robust optimization-based heuristic for the deployment of green virtual network functions was studied in [16].
Because a service function chain specifies a sequence of virtual network functions for user traffic to realize a network service, the problem of orchestrating this chain is crucial [17]. The deadline-aware co-located and geo-distributed orchestration with demand uncertainty can be formulated as an optimization problem that considers the end-to-end delay in service chains by modeling queueing and processing delays. The proposed algorithm improved the performance in terms of the ability to cope with demand fluctuations, scalability and relative performance against other recent algorithms [17].
Another aspect of virtualization is the nonlinear optimization problem of mapping virtual links to physical network paths under the condition that bandwidth demands of virtual links are uncertain [18]. To realize virtual links with predictable performance, mapping is required to guarantee a bound on the congestion probability of the physical paths that embed the virtual links.
The second part of this section is about some new approaches to PSO algorithms. Although quantum-inspired PSO algorithms have not been used to optimize the migration of virtual machines in the computing cloud so far, it is worth discussing them due to the PSO algorithm with quantum gates proposed in this article. The quantum-behaved particle swarm optimization (QPSO) algorithm may control placement problems in software-defined networking to make computer networks agile and flexible [19]. To meet the requirements of users and conquer the physical limitations of networks, it is necessary to design an efficient controller placement mechanism, which is defined as an optimization problem to determine the proper positions and number of controllers. A particle is a vector for placing controllers at each switch. Besides this, the algorithm is characterized by quantum behavior with no quantum gates. The QPSO algorithm demonstrates a fast convergence rate but a limited global search ability. Simulation results show that QPSO achieves better performance in several instances of multi-controller placement problems [19].
An enhanced quantum-behaved particle swarm optimization (e-QPSO) algorithm improves the exploration and exploitation properties of the original QPSO for function optimization. It is based on an adaptive balance between the personal best and the global best positions using the parameter alpha. Besides this, it keeps the balance between diversification and intensification using the parameter gamma. In addition, a percentage of the worst-performing population is re-initialized to prevent particles from being stuck at local optima. The results of e-QPSO outperform twelve QPSO variants, two adaptive variants of PSO, as well as nine well-known evolutionary algorithms on 59 benchmark instances [6].
A self-organizing quantum-inspired particle swarm optimization algorithm (MMO_SO_QPSO) is another version of PSO for solving multimodal multi-objective optimization problems (MMOPs) [20]. It should find all equivalent Pareto optimal solutions and maintain a balance between the convergence and diversity of solutions in both decision space and objective space. In the algorithm, a self-organizing map is used to find the best neighbor leader of particles, and then a special zone searching method is adopted to update the position of particles and locate equivalent Pareto optimal solutions in decision space. Quantum behaviors of particles are not described as position vectors and velocity vectors, but they are replaced by the wave function. To maintain diversity and convergence of Pareto optimal solutions, a special archive mechanism based on the maximum-minimum distance among solutions is developed. Some outstanding Pareto optimal solutions are maintained in another archive. In addition, a performance indicator estimates the similarity between obtained Pareto optimal solutions and true efficient alternatives. Experimental results demonstrate the superior performance of MMO_SO_QPSO for solving MMOPs [20].
It is worth noting that the MMO_SO_QPSO algorithm is one of the most advanced PSO algorithms inspired by quantum particle behavior. For this reason, it is the foundation on which our algorithm MQPSO is based; MQPSO additionally takes into account quantum gates and the many-criteria selection procedure.
Besides this, there are several interesting works that have dealt with hybrid metaheuristics, which could provide an effective introduction to advanced metaheuristics in general [21], in combination with exact approaches applied in network design [22] and improved particle swarm optimization [23].

3. Live Migration of Intelligent Virtual Machines

We can identify the following system elements that are relevant for the modeling. Let $V$ be the number of virtual machines that are trained in the cloud on $I$ physical machines. Moreover, let the virtual machines be denoted as $\alpha_{1}, \ldots, \alpha_{v}, \ldots, \alpha_{V}$. Each $\alpha_{v}$ can work at the host assigned to one of the nodes $N_{1}, \ldots, N_{i}, \ldots, N_{I}$ in the cloud. Let $\beta_{1}, \ldots, \beta_{j}, \ldots, \beta_{J}$ be the possible types of physical machines providing a set of resources to VMs. Exactly one physical machine $\beta_{j}$ can be assigned to each node $N_{i}$.
Virtual machine migration involves changing the host on the source node to the host on the target node, which is a key decision in the optimization problem under consideration. Destination nodes of the VMs after migrations can be modeled by $X^{\alpha} = [x^{\alpha}_{vi}]_{V \times I}$, where [4]:
$$x^{\alpha}_{vi} = \begin{cases} 1 & \text{if } \alpha_{v} \text{ is assigned to the node } N_{i}, \\ 0 & \text{in the other case,} \end{cases} \quad v = \overline{1,V},\ i = \overline{1,I} \quad (1)$$
To find adequate amounts of resources, we consider the assignment of hosts, which is the other important decision supporting the placement of VMs. Capacities of resources in the cloud can be adapted to the needs of virtual machines by designating an appropriate matrix $X^{\beta} = [x^{\beta}_{ij}]_{I \times J}$, where:
$$x^{\beta}_{ij} = \begin{cases} 1 & \text{if } \beta_{j} \text{ is assigned to the node } N_{i}, \\ 0 & \text{in the other case,} \end{cases} \quad i = \overline{1,I},\ j = \overline{1,J} \quad (2)$$
Resources provided by physical machines are characterized by six vectors: the vector of electric power consumption $\varepsilon = [\varepsilon_{1}, \ldots, \varepsilon_{j}, \ldots, \varepsilon_{J}]$ (W), the vector of RAM capacities $ram = [ram_{1}, \ldots, ram_{j}, \ldots, ram_{J}]$ (GB), the vector $\nu = [\nu_{1}, \ldots, \nu_{j}, \ldots, \nu_{J}]$ of the numbers of the preferred computers, and the vector of disk storage capacities $hdd = [hdd_{1}, \ldots, hdd_{j}, \ldots, hdd_{J}]$ (TB). In addition, the vector $\xi = [\xi_{1}, \ldots, \xi_{j}, \ldots, \xi_{J}]$ ($) characterizes the costs of using the selected physical machines. Let the computer $\beta_{j}$ fail independently, with an exponentially distributed time to failure with rate $\theta_{j}$. The sixth vector is the vector of reliability rates $\theta = [\theta_{1}, \ldots, \theta_{j}, \ldots, \theta_{J}]$.
Moreover, four characteristics are related to virtual machines. Let $T = [t_{vj}]_{V \times J}$ be the matrix of cumulative times of VM runs on different types of computers (s), and $\tau = [\tau_{vuik}]_{V \times V \times I \times I}$ the matrix of total communication times between agent pairs located at different nodes (s). In addition, the memory requirements of VMs are described by the vector $r = [r_{1}, \ldots, r_{v}, \ldots, r_{V}]$ of reserved RAM (GB) and the vector $h = [h_{1}, \ldots, h_{v}, \ldots, h_{V}]$ of required disk storage (TB).
The pair of binary matrices $x = (X^{\alpha}, X^{\beta})$ gathers the decision variables that can minimize four criteria and maximize three criteria characterizing the computing cloud. Let $\kappa^{RAM}_{min}$ be the free RAM capacity of the bottleneck computer with respect to RAM (GB) and $\kappa^{disc}_{min}$ the free disk storage capacity of the bottleneck computer with respect to disk storage (TB) [4]. Let $R$ denote the reliability of the cloud, which should be maximized, too. Moreover, we minimize $E$, the electric power of the cloud (W). Let $\tilde{Z}_{max}$ be the communication workload of the bottleneck node (s), another criterion for minimization. Moreover, $\Xi$, the cost of hosts (money unit, i.e., $), is a criterion for minimization. Let $\hat{Z}_{max}$ be the processing workload of the bottleneck CPU (s), which is the fourth criterion for minimization.
Live migration of virtual machines may change the values of the above criteria. Therefore, we should find the set of Pareto-optimal solutions. On the other hand, the decision-maker can choose a compromise solution from this set.
Intelligent virtual machines contain some pre-trained domain models, based mainly on convolutional neural network (CNN) and long short-term memory (LSTM) ANNs [24,25]. Several virtual machines can run on a host, but they can migrate from the source computer to the destination according to the Pareto-optimal solution obtained by the MQPSO. For instance, an accuracy of 99% can be achieved within approximately one minute of training the CNN on the virtual machine Linux Fedora Server (CPU Intel Core i7, 2.7 GHz, RAM 16 GB) using the German Traffic Sign Detection Benchmark dataset. The CNN identifies the traffic signs represented by a matrix with 28 × 28 selected features. The dataset is divided into 570 training signs and 330 test items [24]. In this case, live migration of this virtual machine is unlikely to be recommended, unless there is a host failure [26]. Retraining is very fast, and then a trained model can be sent to nodes on the Internet of Things [27].
However, a virtual machine needs additional resources in the following situation, which is very common in intelligent clouds [28]. Much more time is required to train the long short-term memory artificial neural network for video classification. The LSTM can detect some dangerous situations from web cameras monitoring city areas, using the Cityscapes Dataset with 25,000 stereo videos of street scenes from 50 cities [29]. After 70 h of CPU elapsed time, the LSTM-based model achieved an average accuracy of 54.7%, which is rather unacceptable. The LSTM was implemented in the Matlab R2021b environment. Therefore, the supervised learning process should be accelerated by using GPUs, and the virtual machine with the LSTM is supposed to be moved to another server.
Live migration of the virtual machine to a more powerful workstation with GPUs can be carried out by using the HTTPS and WebSocket protocols. Besides this, micro-services exchange data in the JSON format, which was verified in the experimental computing cloud GUT-WUT (Gdańsk University of Technology and Warsaw University of Technology). This cloud is based on the OpenStack software platform that supports VM teleportation [4].
Another deep learning model hosted on a virtual machine may be trained on the Human Motion DataBase (HMDB) with 6849 clips divided into 51 action categories, each containing a minimum of 101 clips [30]. For instance, the trained LSTM can detect undesirable situations in the city, such as smoking and drinking in forbidden areas or pedestrians falling on the ground. The quick and correct classifications allow counteracting many extreme situations on city streets [31]. Teleportation of the virtual machine with the LSTM trained on the HMDB can provide a sufficient amount of cloud resources to train this model with high accuracy.

4. Many-Objective Optimization Problem

There are upper constraints that have to be respected to guarantee project requirements, such as the maximum computer load $\hat{Z}_{sup}$ or an upper limit of node transmission $\tilde{Z}_{sup}$. To save energy, let $\varepsilon_{max}$ be the limit of electric power for the cloud. A budget constraint for the project can be denoted as $\xi_{max}$. Furthermore, there are minimal requirements for the three maximized metrics: $R_{min}$ is the minimum reliability for the hosts used by virtual machines, $\kappa^{disc}_{inf}$ is the minimum required free disk storage capacity in the bottleneck computer (TB), and $\kappa^{RAM}_{inf}$ is the minimum required free RAM capacity in the bottleneck computer (GB). We can establish the constraints, as below:
$$E(x) \le \varepsilon_{max} \quad (3)$$
$$\kappa^{RAM}_{min}(x) \ge \kappa^{RAM}_{inf} \quad (4)$$
$$\kappa^{disc}_{min}(x) \ge \kappa^{disc}_{inf} \quad (5)$$
$$R(x) \ge R_{min} \quad (6)$$
$$\hat{Z}_{max}(x) \le \hat{Z}_{sup} \quad (7)$$
$$\tilde{Z}_{max}(x) \le \tilde{Z}_{sup} \quad (8)$$
$$\Xi(x) \le \xi_{max} \quad (9)$$
If we rebuild the cloud, some hosts can be removed from it, but the others can be left to cooperate with the new servers. Therefore, from the current set of computers $C_{now}$ we can determine the set of preferred computer types $B_{now}$. Let $J_{now}$ be the set of indexes of the preferred computer types for the current infrastructure. Moreover, we are going to buy or rent new hosts. Let $J_{new}$ be the set of indexes of the new computer types, and let $J = J_{now} \cup J_{new}$, where $J = \{1, \ldots, j, \ldots, J\}$. If we consider the new servers, we can buy 0, 1, 2 or more hosts of the same kind. However, we are supposed to respect the assumed number of computers $\nu_{j}$ of the $j$th type from $J_{now}$.
In consequence, the following constraint is introduced:
$$\sum_{i=1}^{I} x^{\beta}_{ij} = \nu_{j}, \quad j \in J_{now} \quad (10)$$
The reliability R is defined, as below [4]:
$$R(x) = \prod_{v=1}^{V} \sum_{i=1}^{I} \sum_{j=1}^{J} e^{-\theta_{j} t_{vj}}\, x^{\alpha}_{vi}\, x^{\beta}_{ij} \quad (11)$$
To calculate the workload of the bottleneck computer, we introduce two formulas. If computer $\beta_{j}$ is located in the cloud node no. $i$, the decision variable $x^{\beta}_{ij}$ is equal to 1. If virtual machine no. $v$ is included in this cluster, the decision variable $x^{\alpha}_{vi}$ is equal to 1, too. The term $t_{vj} x^{\alpha}_{vi} x^{\beta}_{ij}$ is the cumulative time of the $v$th virtual machine run on computer $\beta_{j}$ (s). Now, we can calculate the workload of the bottleneck computer, as follows [4]:
$$\hat{Z}_{max}(x) = \max_{i=\overline{1,I}} \left\{ \sum_{j=1}^{J} \sum_{v=1}^{V} t_{vj}\, x^{\alpha}_{vi}\, x^{\beta}_{ij} \right\} \quad (12)$$
Similarly, we can determine the transmission workload of the bottleneck node by the following formula:
$$\tilde{Z}_{max}(x) = \max_{i=\overline{1,I}} \left\{ \sum_{v=1}^{V} \sum_{\substack{u=1 \\ u \ne v}}^{V} \sum_{\substack{k=1 \\ k \ne i}}^{I} \tau_{vuik}\, x^{\alpha}_{vi}\, x^{\alpha}_{uk} \right\} \quad (13)$$
If computer $\beta_{j}$ is located in the cloud node no. $i$, the electric consumption per time unit in this node is equal to $\varepsilon_{j} x^{\beta}_{ij}$, and all nodes in this cloud require electric power, as below:
$$E(x) = \sum_{i=1}^{I} \sum_{j=1}^{J} \varepsilon_{j}\, x^{\beta}_{ij} \quad (14)$$
Also, we can calculate the total cost of computers, as follows:
$$\Xi(x) = \sum_{i=1}^{I} \sum_{j=1}^{J} \xi_{j}\, x^{\beta}_{ij} \quad (15)$$
Because the cluster of agents requires the RAM capacity $\sum_{v=1}^{V} r_{v} x^{\alpha}_{vi}$ in node no. $i$, the minimal free capacity of RAM memory for a bottleneck host can be calculated, as follows:
$$\kappa^{RAM}_{min}(x) = \min_{i=\overline{1,I}} \left\{ \sum_{j=1}^{J} ram_{j}\, x^{\beta}_{ij} - \sum_{v=1}^{V} r_{v}\, x^{\alpha}_{vi} \right\} \quad (16)$$
Likewise, we can identify the bottleneck computer regarding free disk storage in the cloud:
$$\kappa^{disc}_{min}(x) = \min_{i=\overline{1,I}} \left\{ \sum_{j=1}^{J} hdd_{j}\, x^{\beta}_{ij} - \sum_{v=1}^{V} h_{v}\, x^{\alpha}_{vi} \right\} \quad (17)$$
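As an illustration of how the criteria (11)–(17) can be evaluated in practice, the following Python sketch computes all seven values for given binary assignment matrices. The instance data and sizes are purely illustrative assumptions (they are not the Benchmark855 data), and the reliability follows the product form of Equation (11).

```python
import numpy as np

# Illustrative instance (assumed data, not Benchmark855)
V, I, J = 4, 3, 2                        # virtual machines, nodes, host types
rng = np.random.default_rng(0)
eps   = np.array([350.0, 480.0])         # electric power of host types (W)
xi    = np.array([900.0, 1500.0])        # cost of host types ($)
ram   = np.array([16.0, 32.0])           # RAM capacity of host types (GB)
hdd   = np.array([1.0, 2.0])             # disk capacity of host types (TB)
theta = np.array([1e-6, 2e-6])           # failure rates of host types
r     = rng.uniform(1, 4, V)             # RAM required by VMs (GB)
h     = rng.uniform(0.05, 0.2, V)        # disk required by VMs (TB)
T     = rng.uniform(10, 100, (V, J))     # run times of VMs on host types (s)
tau   = rng.uniform(0, 5, (V, V, I, I))  # communication times between VM pairs (s)

# Example decisions: X_alpha places VMs on nodes, X_beta assigns one host type per node
X_alpha = np.zeros((V, I), dtype=int); X_alpha[[0, 1, 2, 3], [0, 1, 2, 0]] = 1
X_beta  = np.zeros((I, J), dtype=int); X_beta[[0, 1, 2], [0, 1, 1]] = 1

def criteria(X_alpha, X_beta):
    vm_node   = X_alpha.argmax(axis=1)            # node index of each VM
    node_type = X_beta.argmax(axis=1)             # host type index of each node
    # (12) CPU workload of the bottleneck node
    Z_hat = max(sum(T[v, node_type[i]] for v in range(V) if vm_node[v] == i)
                for i in range(I))
    # (13) transmission workload of the bottleneck node
    Z_tilde = max(sum(tau[v, u, i, vm_node[u]]
                      for v in range(V) if vm_node[v] == i
                      for u in range(V) if u != v and vm_node[u] != i)
                  for i in range(I))
    E  = eps[node_type].sum()                     # (14) electric power of the cloud
    Xi = xi[node_type].sum()                      # (15) cost of hosts
    # (16)-(17) free RAM / disk capacity in the bottleneck node
    k_ram  = min(ram[node_type[i]] - r[vm_node == i].sum() for i in range(I))
    k_disc = min(hdd[node_type[i]] - h[vm_node == i].sum() for i in range(I))
    # (11) reliability of the cloud (product over VMs)
    R = np.prod([np.exp(-theta[node_type[vm_node[v]]] * T[v, node_type[vm_node[v]]])
                 for v in range(V)])
    return Z_hat, Z_tilde, E, Xi, k_ram, k_disc, R

print(criteria(X_alpha, X_beta))
```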
Based on the model, we can formulate the many-criteria optimization problem, as follows.
Given:
numbers $V$, $I$, $J$, $\varepsilon_{max}$, $R_{min}$, $\kappa^{disc}_{inf}$, $\kappa^{RAM}_{inf}$, $\hat{Z}_{sup}$, $\tilde{Z}_{sup}$, $\xi_{max}$,
vectors $\varepsilon$, $ram$, $hdd$, $\xi$, $\nu$, $\theta$, $r$, $h$,
matrices $T$, $\tau$.
Find the representation of Pareto-optimal solutions $X^{Pareto}$ for an ordered tuple:
$$(X,\ F,\ \preceq) \quad (18)$$
where:
(1)
$X$—the set of admissible solutions $x$ that satisfy requirements (3)–(10) and the formal constraints, as below:
$$\sum_{i=1}^{I} x^{\alpha}_{vi} = 1, \quad v = \overline{1,V}, \quad (19)$$
$$\sum_{j=1}^{J} x^{\beta}_{ij} = 1, \quad i = \overline{1,I}, \quad (20)$$
(2)
$F$—the vector of seven minimized partial criteria
$$F: X \rightarrow \mathbb{R}^{7} \quad (21)$$
$$F(x) = [\hat{Z}_{max}(x),\ \tilde{Z}_{max}(x),\ E(x),\ \Xi(x),\ -\kappa^{RAM}_{min}(x),\ -\kappa^{disc}_{min}(x),\ -R(x)] \quad (22)$$
$\mathbb{R}$—the set of real numbers
(3)
$\preceq$—the domination relation in $\mathbb{R}^{7}$
$$\preceq\ = \{(a, b) \in Y \times Y \mid a_{n} \le b_{n},\ n = \overline{1,7}\}, \quad Y = F(X) \quad (23)$$
The number of admissible solutions $x = (X^{\alpha}, X^{\beta})$ is no greater than $2^{(V+J)I}$. If the binary encoding of solutions is substituted by the integer encoding, the upper limit of the number of admissible solutions is $I^{V} J^{I}$. In the integer encoding, we introduce the modified decision variables: $X^{\alpha}_{v} = i$ if $x^{\alpha}_{vi} = 1$, and $X^{\beta}_{i} = j$ if $x^{\beta}_{ij} = 1$. In such a way, the formal constraints (19) and (20) are satisfied. Let $n = VIJ$. The upper limit of the number of admissible solutions increases in a non-polynomial way due to $O(n^{n})$. We can prove the following Lemma.
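The two encodings and the resulting search space sizes can be sketched as follows; the Benchmark855 dimensions are used only to reproduce the orders of magnitude quoted later, and the helper names are illustrative assumptions.

```python
import math
import numpy as np

def to_integer(X_alpha, X_beta):
    """Binary matrices -> integer vectors: X_v^alpha = i and X_i^beta = j."""
    return X_alpha.argmax(axis=1), X_beta.argmax(axis=1)

def to_binary(vm_node, node_type, I, J):
    """Integer vectors -> binary matrices; constraints (19)-(20) hold by construction."""
    V = len(vm_node)
    X_alpha = np.zeros((V, I), dtype=int); X_alpha[np.arange(V), vm_node] = 1
    X_beta  = np.zeros((I, J), dtype=int); X_beta[np.arange(I), node_type] = 1
    return X_alpha, X_beta

V, I, J = 45, 15, 12                      # Benchmark855 dimensions
print(f"binary encoding:  about 10^{(V + J) * I * math.log10(2):.0f} candidate matrices")
print(f"integer encoding: about 10^{V * math.log10(I) + I * math.log10(J):.0f} candidate vectors")
```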
Lemma 1.
If the number of nodes I ≥ 4, or I ≥ 2 and memory resources are limited, then the many-criteria combinatorial optimization problem (18)–(23) is NP-hard.
Proof. 
We will show that a known NP-hard problem is polynomially reducible to the problem (18)–(23). Let some assumptions be made to transform the formulated problem (18)–(23) into another known NP-hard issue. If J = 1, then we consider the placement of V virtual machines on I computers. Moreover, let the resource constraints be relaxed and only one objective function $\hat{Z}_{max}$ be considered. Then, this case of the problem (18)–(23) is equivalent to a task assignment problem without memory limits [32]. If I ≥ 4, it was proved that the task assignment problem without constraints is NP-hard for the minimization of the total cost [32]. On the other hand, if I ≥ 2 and memory resources are limited, then the minimization of the total cost for task assignment is NP-hard, too [32]. Each hierarchical solution related to the minimization of $\hat{Z}_{max}$ is a Pareto-optimal solution [4]. Because the problem (18)–(23) with one criterion is NP-hard, the extended issue with seven criteria is NP-hard, which ends the proof. □
Note that the problem of finding a minimum feasible assignment in some cases is equivalent to a knapsack problem, and hence is an NP-hard problem. Consider the I-node graph, in which every node is of degree 2 and the source and the sink both have degree (I − 2). The weight $w_{i}$ of the node $N_{i}$ corresponds to the weight of the ith item in the knapsack problem. A feasible cut of the task assignment graph corresponds to a subset of items whose weights do not exceed the knapsack constraint weight w. A minimum feasible cut corresponds to a knapsack packing of maximum value [32].
Solving problem (18)–(23) by an enumerative algorithm is ineffective for the large search space with $I^{V} J^{I}$ elements. Let us consider the instance of the problem (18)–(23) with 855 decision variables. In the experimental instance called Benchmark855 (https://www.researchgate.net/publication/341480343_Benchmark_855, accessed: 19 November 2021), an algorithm determines a set of Pareto-optimal solutions for 45 virtual machines, 15 communication nodes and 12 types of servers. A search space contains 8.2 × 10^38 elements, and 2 × 10^7 solutions are evaluated during 3 min by an enumerative algorithm implemented in Java on a Dell E5640 dual-processor machine under Linux CentOS. It confirms that there is no practical chance of finding the Pareto-optimal solutions in a systematic enumerative way. Besides this, 2 × 10^7 independent probabilistic trials are unlikely to ensure a high-quality alternative.
There are exact solvers like the multi-criteria branch and bound method [33] or the ε-constraint method [34], but they usually produce poor-quality solutions within a limited computation time. Metaheuristics find much better results than exact methods for many instances of different NP-hard multi-criteria optimization problems [21].

5. Many-Objective Particle Swarm Optimization with Quantum Gates MQPSO

We simulated the teleportation of virtual machines in the cloud GUT-WUT based on OpenStack that uses hosts from two universities [4]. Algorithm 1 shows pseudocode visualizing how the various steps of the general many-objective particle swarm optimization algorithm with quantum gates (MQPSO) are adapted to the specific features of the VM placement problem (18)–(23). The algorithm is based on Hadamard gates and rotation gates. The Hadamard gate converts a qubit of a quantum register Q into a superposition of the two basis states |0⟩ and |1⟩, as follows [35]:
Algorithm 1 Multi-objective Quantum-inspired PSO
1:Set input data, A(t) := Ø; t := 0
2:Initialize quantum register Q(t) with M qubits by the set of Hadamard gates
3:while (not termination condition) do
4: create P(t) by observing the state of Q(t)
5: determine new positions and velocities of particles, and then create B(t)
6: find Fonseca-Fleming ranks for an extended archive C(t) = A(t)∪B(t)∪P(t)
7: calculate crowding distances, fitness and then sort particles in C(t)
8: form A(t) of Pareto-optimal solutions from the sorted set C(t)
9: a tournament selection of an angle rotation matrix M_θ based on rating R(M_θ)
10: mutate the selected matrix M_θ with the rate pm
11: modify Q(t) using the rotation gates
12: t := t + 1
13:end while
$$H|q\rangle = \begin{cases} \dfrac{|0\rangle + |1\rangle}{\sqrt{2}} & \text{for the basis state } |0\rangle \\[6pt] \dfrac{|0\rangle - |1\rangle}{\sqrt{2}} & \text{for the basis state } |1\rangle \end{cases} \quad (24)$$
The Hadamard gate is a single-qubit operation based on a 90° rotation around the y-axis followed by a 180° rotation around the x-axis. If we use Dirac notation for the description of the qubit state $Q_{m} = \alpha_{m}|0\rangle + \beta_{m}|1\rangle$, the quantum register can be represented by the matrix, as follows [6]:
$$Q = \begin{bmatrix} |\alpha_{1}| & \cdots & |\alpha_{m}| & \cdots & |\alpha_{M}| \\ |\beta_{1}| & \cdots & |\beta_{m}| & \cdots & |\beta_{M}| \end{bmatrix} \quad (25)$$
The procedure of random selection of decision values is involved with a chromosome matrix. If the decision variable $x_{m}$ is characterized by a pair of complex numbers $(\alpha_{m}, \beta_{m})$, it is equal to 0 with the probability $|\alpha_{m}|^{2}$ and to 1 with $|\beta_{m}|^{2}$. Alternatively, the state of the $m$th qubit can be represented as a point on the 3D Bloch sphere, as follows [35]:
$$|Q_{m}\rangle = \cos\frac{\theta_{m}}{2}\,|0\rangle + e^{i\phi_{m}} \sin\frac{\theta_{m}}{2}\,|1\rangle, \quad m = \overline{1,M}, \quad (26)$$
where $0 \le \theta_{m} \le \pi$ and $0 \le \phi_{m} \le 2\pi$.
In the 3D Bloch sphere representation, the Hadamard gate can be implemented by several rotations to achieve the desired point determined by a pair of angles $(\theta_{m}, \phi_{m})$, that is, an equal superposition of the two basis states. The two angles $\theta_{m}$ and $\phi_{m}$ determine the localization of the qubit on the Bloch sphere. The North Pole represents the state |0⟩, the South Pole represents the state |1⟩, and the points on the equator represent all equal superpositions of |0⟩ and |1⟩. Thus, in this version of the quantum-inspired algorithm, there are M Bloch spheres with the quantum gene states.
Therefore, the Hadamard gate can be modeled as the matrix operation, as below:
$$H = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \quad (27)$$
The Hadamard gate can be implemented by the Pauli gates. The Pauli-X gate ($P_X$) is a single-qubit rotation through π radians around the x-axis. On the other hand, the Pauli-Y gate ($P_Y$) is a single-qubit rotation through π radians around the y-axis. From (27), we get the following (up to a global phase factor):
$$H = P_{X} \cdot P_{Y}^{\frac{1}{2}} = P_{Y}^{-\frac{1}{2}} \cdot P_{X}, \quad (28)$$
where:
  • $P_{X} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$,
  • $P_{Y} = \begin{bmatrix} 0 & -i \\ i & 0 \end{bmatrix}$,
  • $i$—the imaginary unit of a complex number.
For the Pauli-Z gate ($P_Z$), which is a single-qubit rotation through π radians around the z-axis, we have, again up to a global phase factor:
$$H = P_{Z} \cdot P_{Y}^{-\frac{1}{2}} = P_{Y}^{\frac{1}{2}} \cdot P_{Z}, \quad (29)$$
where $P_{Z} = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$.
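The relations (27)–(29) can be checked numerically. The short sketch below is an addition of ours (not part of the original algorithm): it uses the principal matrix square root and compares both sides up to the global phase factor e^{iπ/4}.

```python
import numpy as np
from scipy.linalg import sqrtm

P_X = np.array([[0, 1], [1, 0]], dtype=complex)      # Pauli-X
P_Y = np.array([[0, -1j], [1j, 0]])                   # Pauli-Y
P_Z = np.array([[1, 0], [0, -1]], dtype=complex)      # Pauli-Z
H   = np.array([[1, 1], [1, -1]]) / np.sqrt(2)        # Hadamard, Equation (27)

Y_half     = sqrtm(P_Y)                               # principal square root P_Y^{1/2}
Y_neg_half = np.linalg.inv(Y_half)                    # P_Y^{-1/2}
phase      = np.exp(1j * np.pi / 4)                   # global phase factor

print(np.allclose(P_X @ Y_half, phase * H))           # Equation (28), left form
print(np.allclose(Y_neg_half @ P_X, H / phase))       # Equation (28), right form
print(np.allclose(P_Z @ Y_neg_half, H / phase))       # Equation (29), left form
print(np.allclose(Y_half @ P_Z, phase * H))           # Equation (29), right form
```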
The initial step of MQPSO (Algorithm 1, step 1) is to enter data such as $V$, $I$, $J$, $\varepsilon_{max}$, $R_{min}$, $\kappa^{disc}_{inf}$, $\kappa^{RAM}_{inf}$, $\hat{Z}_{sup}$, $\tilde{Z}_{sup}$, $\xi_{max}$, $\varepsilon$, $ram$, $hdd$, $\xi$, $\nu$, $\theta$, $r$, $h$, $T$, $\tau$. Then, the counter of the main loop is set to 0 (t := 0).
The quantum register Q(t) consists of M qubits (Algorithm 1, step 2). The M Hadamard gates are applied concurrently. We consider (V + I) blocks of qubits representing the decision variables $X^{\alpha}_{v}$, $v = \overline{1,V}$, and $X^{\beta}_{i}$, $i = \overline{1,I}$. If there are λ quantum bits for encoding a decision $X^{\alpha}_{v}$ and μ quantum bits for encoding $X^{\beta}_{i}$, there are M = λV + μI quantum bits for the register Q. To minimize the size of the quantum register, we determine the numbers of qubits as $\lambda = \lceil \log_{2}(I+1) \rceil$ and $\mu = \lceil \log_{2}(J+1) \rceil$. Each qubit has an index within this register, starting at index 0 and counting up by 1. Besides this, we use λ qubits (instead of λV) for determining the placement of virtual machines because of a key advantage of the quantum register: it can process $2^{\lambda}$ virtual machine placements concurrently. We can create digital decision variables by using a roulette wheel due to the given probability distribution after the measurement of the quantum register (Figure 1). Similarly, we reduce the number of qubits from μI to μ, allowing for the allocation of appropriate hosts to VM clusters. Therefore, the quantum register consists of M = λ + μ qubits only.
An outcome of measuring is saved into a binary measurement register BMQ with the same number of entries as the qubit register. The declared binary states of BMQ entries are 0 or 1. When a qubit of the register Q is measured a second time, the corresponding bit in the binary register is overwritten by the new measurement bit, even when the measurement is done on a basis different from the basis used for an earlier measurement. In this case, the selection of the x-basis, y-basis or z-basis does not allow the storage of the previously measured bit in the register BMQ. The most recent qubit measurement introduces a change to the associated binary bit of the measurement register. The quantum register Q can be measured with respect to the z-basis of each qubit. Figure 1 shows an example of a probability distribution for the placement of the vth virtual machine. This distribution is important to generate digital positions of particles.
An initial population P(t) of L particles $p^{x}(t) = (x(t), v(t))$ is created by measuring the state of the register Q at the iteration t (Algorithm 1, step 4). The current position at the iteration t is encoded as $x(t) = [X^{\alpha}_{1}(t), \ldots, X^{\alpha}_{v}(t), \ldots, X^{\alpha}_{V}(t), X^{\beta}_{1}(t), \ldots, X^{\beta}_{i}(t), \ldots, X^{\beta}_{I}(t)]$. Besides, the velocity $0 \le v(t) \le v_{max}$ of this particle has V + I coordinates, too. Therefore, the digital particle $p^{x}(t)$ has 2(V + I) coordinates. Placements of virtual machines are randomly selected V times by the roulette wheel constructed from the probability distribution provided by the quantum register Q (Figure 1). Also, the hosts with adequate resources for the clusters of virtual machines are randomly chosen I times by the roulette wheel related to measuring the other part of the quantum register Q. Besides this, the velocity vector v of this digital particle is created by generating V + I values with $0 \le v_{m}(t) \le v_{max}$. Based on the quantum register Q, L digital positions of particles can be created to establish the initial population P(t = 0), where $p^{x}(t) = (x(t), v(t))$ and L is the size of the population.
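The following sketch illustrates steps 2 and 4 of Algorithm 1 under simplifying assumptions of ours: the register of M = λ + μ qubits is initialized by Hadamard gates only (rotations are applied later), qubits are measured independently, and the resulting probability distributions drive a roulette wheel that draws node and host-type indices, as in Figure 1. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
V, I, J = 45, 15, 12
lam = int(np.ceil(np.log2(I + 1)))      # qubits encoding a node index
mu  = int(np.ceil(np.log2(J + 1)))      # qubits encoding a host-type index
M   = lam + mu                           # size of the quantum register Q

# Hadamard-initialized amplitudes: |alpha_m| = |beta_m| = 1/sqrt(2) for every qubit
Q = np.full((2, M), 1.0 / np.sqrt(2))

def measured_distribution(Q_block, n_values):
    """Probability of each encoded integer value (1..n_values) for a block of
    independent qubits; out-of-range codes are discarded and the distribution
    is renormalized, which plays the role of the roulette wheel in Figure 1."""
    n_qubits = Q_block.shape[1]
    p_one = Q_block[1] ** 2                       # P(qubit measured as 1)
    probs = np.zeros(n_values)
    for code in range(2 ** n_qubits):
        bits = [(code >> b) & 1 for b in range(n_qubits)]
        p = np.prod([p_one[m] if bits[m] else 1 - p_one[m] for m in range(n_qubits)])
        if 1 <= code <= n_values:
            probs[code - 1] += p
    return probs / probs.sum()

node_probs = measured_distribution(Q[:, :lam], I)
host_probs = measured_distribution(Q[:, lam:], J)

# One digital particle: V node indices, I host-type indices, and a velocity vector
v_max = 1.0
position = np.concatenate([rng.choice(I, size=V, p=node_probs),
                           rng.choice(J, size=I, p=host_probs)])
velocity = rng.uniform(0.0, v_max, V + I)
particle = (position, velocity)
```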
The population P(t) enables the designation of an offspring population B(t) in accordance with the rules of canonical PSO algorithms (Algorithm 1, step 5). The new position is calculated by adding three vectors to the current position x(t). The first vector is the difference between the best position pbest of this particle found in the past and the current position; it is multiplied by a random number r1 from the interval [0; 1] and by the given coefficient c1. The second vector is the difference between the best position gbest of the neighborhood and the current position; it is multiplied by a random number r2 from the interval [0; 1] and by the given coefficient c2. The third vector is the difference between the velocity and the current position; it is multiplied by a random number r0 from the interval [0; 1] and by the given coefficient c0 (Figure 2).
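A minimal sketch of this position update (Algorithm 1, step 5), assuming real-valued arithmetic that is afterwards rounded and clipped to admissible node and host-type indices; the coefficients c0 = 1, c1 = c2 = 2 follow Section 7, while the rounding strategy and helper names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
c0, c1, c2 = 1.0, 2.0, 2.0

def move_particle(x, v, pbest, gbest, upper_bound):
    """New position = current position + three weighted difference vectors,
    as described in the text; the result is mapped back to integer indices."""
    r0, r1, r2 = rng.random(3)
    step = (c1 * r1 * (pbest - x)       # attraction to the particle's own best position
            + c2 * r2 * (gbest - x)     # attraction to the best neighbourhood position
            + c0 * r0 * (v - x))        # velocity-related component from the text
    x_new = np.clip(np.rint(x + step), 0, upper_bound).astype(int)
    return x_new

V, I, J = 45, 15, 12
upper = np.concatenate([np.full(V, I - 1), np.full(I, J - 1)])   # index bounds
x     = rng.integers(0, I, V + I)
v     = rng.uniform(0.0, 1.0, V + I)
pbest = x.copy()
gbest = np.minimum(rng.integers(0, I, V + I), upper)
x_new = move_particle(x, v, pbest, gbest, upper)
```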
An extended archive C(t) is the union of three sets of particles A(t)∪B(t)∪P(t) (Algorithm 1, step 6). We compare particles from the extended archive C(t). Criteria values of particles are calculated, followed by Fonseca–Fleming ranks [36]. A rank r(x) of solution x is the number of solutions from C(t) that dominate x.
The next step of the algorithm MQPSO is the calculation of crowding distances and fitness values, and then sorting the particles in C(t) (Algorithm 1, step 7). Each particle is characterized by a crowding distance to determine its fitness and to distinguish solutions with the same rank [37]. Sorted particles with the highest fitness values are qualified to an archive A(t) of non-dominated solutions with their criteria values (Algorithm 1, step 8).
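Steps 6–8 can be sketched as below, assuming the criteria vectors have already been converted to an all-minimized form; the Fonseca–Fleming rank counts dominating solutions and the crowding distance is the usual NSGA-II-style estimate. The data and function names are illustrative.

```python
import numpy as np

def dominates(a, b):
    """True if evaluation a dominates b (all criteria minimized)."""
    return np.all(a <= b) and np.any(a < b)

def fonseca_fleming_ranks(F):
    """Rank r(x) = number of solutions in the archive that dominate x."""
    n = len(F)
    return np.array([sum(dominates(F[j], F[i]) for j in range(n) if j != i)
                     for i in range(n)])

def crowding_distances(F):
    """Crowding distance per solution (larger means less crowded)."""
    F = np.asarray(F, dtype=float)
    n, m = F.shape
    d = np.zeros(n)
    for k in range(m):
        order = np.argsort(F[:, k])
        span = F[order[-1], k] - F[order[0], k] or 1.0
        d[order[0]] = d[order[-1]] = np.inf      # boundary solutions are kept
        for pos in range(1, n - 1):
            d[order[pos]] += (F[order[pos + 1], k] - F[order[pos - 1], k]) / span
    return d

# Toy archive of evaluations (minimized criteria); rank 0 means non-dominated
F = np.array([[1.0, 5.0], [2.0, 4.0], [3.0, 3.0], [2.5, 4.5], [4.0, 4.0]])
ranks = fonseca_fleming_ranks(F)
crowd = crowding_distances(F)
archive = [F[i] for i in np.argsort(ranks) if ranks[i] == 0]
```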
An important step of the algorithm is using three rotation gates to modify the quantum register Q (Algorithm 1, steps 9–11). The $R_{x}$ gate is a single-qubit rotation through the angle $\theta_{x}$ (radians) around the x-axis. Similarly, the $R_{y}$ gate is a rotation through the angle $\theta_{y}$ around the y-axis. A rotation through $\theta_{z}$ around the z-axis is the $R_{z}$ gate. The adequate matrix operations can be written, as follows [35]:
$$R_{x}(\theta_{x}) = \begin{bmatrix} \cos\frac{\theta_{x}}{2} & -i \sin\frac{\theta_{x}}{2} \\ -i \sin\frac{\theta_{x}}{2} & \cos\frac{\theta_{x}}{2} \end{bmatrix} \quad (30)$$
$$R_{y}(\theta_{y}) = \begin{bmatrix} \cos\frac{\theta_{y}}{2} & -\sin\frac{\theta_{y}}{2} \\ \sin\frac{\theta_{y}}{2} & \cos\frac{\theta_{y}}{2} \end{bmatrix} \quad (31)$$
$$R_{z}(\theta_{z}) = \begin{bmatrix} e^{-i\frac{\theta_{z}}{2}} & 0 \\ 0 & e^{i\frac{\theta_{z}}{2}} \end{bmatrix} \quad (32)$$
Figure 3 shows the quantum gates for finding the correction of a new particle position. It determines the new assignment of the vth virtual machine to a host. There are Hadamard gates and three rotation gates that determine the host number. The rotation angles $\theta_{x}$, $\theta_{y}$, $\theta_{z}$ for each qubit $m = \overline{1,M}$ play an important role. A matrix of angles $M_{\theta}$ can be characterized, as below:
$$M_{\theta} = \begin{bmatrix} \theta_{x1} & \cdots & \theta_{xm} & \cdots & \theta_{xM} \\ \theta_{y1} & \cdots & \theta_{ym} & \cdots & \theta_{yM} \\ \theta_{z1} & \cdots & \theta_{zm} & \cdots & \theta_{zM} \end{bmatrix} \quad (33)$$
Initially, the angles are determined randomly. However, preference should be given to modifications of the quantum register that cause greater effects in the set of designated non-dominated solutions in the archive. For this reason, we rate each rotation angle matrix by the number of efficient solutions in the archive that were determined using this matrix. A matrix $M_{\theta}$ with a higher rating $R(M_{\theta})$ is more likely to be used in the next iteration because of the tournament selection of the rotation angle matrix in conjunction with the roulette rule. Each angle of the matrix can be mutated at the rate pm; the mutation changes the angle by a random value drawn from a normal distribution with standard deviation σ.
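A sketch of steps 9–11 under our simplifying assumption that each qubit is represented by an amplitude pair (α_m, β_m) to which the composed rotation R_z R_y R_x from (30)–(32) is applied; the tournament selection of M_θ is omitted and only the Gaussian mutation of the angles (step 10) is shown. All names and rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def rx(t):  # rotation around the x-axis, Equation (30)
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):  # rotation around the y-axis, Equation (31)
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]])

def rz(t):  # rotation around the z-axis, Equation (32)
    return np.array([[np.exp(-1j * t / 2), 0], [0, np.exp(1j * t / 2)]])

def rotate_register(Q, M_theta):
    """Apply Rz(theta_z) Ry(theta_y) Rx(theta_x) to every qubit amplitude pair."""
    Q_new = Q.astype(complex).copy()
    for m in range(Q.shape[1]):
        tx, ty, tz = M_theta[:, m]
        Q_new[:, m] = rz(tz) @ ry(ty) @ rx(tx) @ Q_new[:, m]
    return Q_new

def mutate_angles(M_theta, pm=0.1, sigma=0.05):
    """Mutate each rotation angle with rate pm by Gaussian noise (step 10)."""
    mask = rng.random(M_theta.shape) < pm
    return M_theta + mask * rng.normal(0.0, sigma, M_theta.shape)

M = 8                                      # register size (assumption)
Q = np.full((2, M), 1 / np.sqrt(2))        # Hadamard-initialized amplitudes
M_theta = rng.uniform(-np.pi, np.pi, (3, M))
Q = rotate_register(Q, mutate_angles(M_theta))
probs_one = np.abs(Q[1]) ** 2              # measurement probabilities of state |1>
```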
Figure 4 shows the results of rotating the quantum register Q followed by measurement. The host most preferred by virtual machines is located at the sixth node. Placements at the 7th and the 15th nodes have much smaller chances of being selected, but they may still be chosen for several VMs.
To sum up, the population P(t) of L particles is created by observing the state of the quantum register Q(t) in the main loop of MQPSO. New positions and velocities of particles are generated, followed by creating a neighborhood B(t). Besides, values of criteria are calculated, and solutions are verified to check whether they satisfy the constraints. Then, we can find ranks of feasible solutions for an extended archive C(t) = A(t − 1)∪B(t)∪P(t). If a rank is equal to zero, a solution is non-dominated in the extended archive.
Only non-dominated solutions are accepted for the A(t) archive. If the number of Pareto-optimal solutions exceeds the archive size, a representation of them is qualified by means of the densification function. Solutions with ratings in less dense areas have a greater chance of qualifying for the archive. In the initial period of searching the space of feasible solutions, solutions with higher ranks, and even infeasible solutions, may be qualified based on the fitness function.
The algorithm ends the exploration of space when the time limit is exceeded (the number of particle population generations) or when there is no improvement over a given number of iterations.

6. Pareto-Optimal Solutions and Compromise Alternatives

Let $X^{Pareto}_{n}$ be a set of Pareto-optimal solutions for the many-objective problem of virtual machine placement $(X, F, \preceq)$ (18)–(23) with n criteria, where n = 2, 3, …, 7. The set X of admissible solutions is the same for each n. Because there are seven partial criteria $\hat{Z}_{max}$, $\tilde{Z}_{max}$, $\Xi$, $E$, $\kappa^{RAM}_{min}$, $\kappa^{disc}_{min}$, $R$, we can use the notation $F = [F_{1}, \ldots, F_{n}, \ldots, F_{N=7}]$. There are six sets of Pareto solutions: $X^{Pareto}_{2}, X^{Pareto}_{3}, \ldots, X^{Pareto}_{7}$ because n = 2, 3, …, 7. Also, there are six sets of evaluations $Y = F(X)$.
Besides this, the n-dimensional domination relation in $\mathbb{R}^{n}$, denoted as $\preceq_{n}$, can be defined, as below:
$$\preceq_{n}\ = \{(a, b) \in Y \times Y \mid a_{i} \le b_{i},\ i = \overline{1,n}\}, \quad Y = F(X) \subseteq \mathbb{R}^{n},\ n = \overline{2,7} \quad (34)$$
We can formally explain the growth of the number of Pareto-optimal solutions in many-objective optimization problems when partial criteria are added by the following theorem.
Theorem 1.
A set of Pareto solutions $X^{Pareto}_{n} \subseteq X$ for n (n ≥ 2) criteria in the many-criteria optimization problem of virtual machine placement $(X, F, \preceq)$ (18)–(23) is included in the set of Pareto solutions $X^{Pareto}_{n+k} \subseteq X$ for n + k criteria, k = 1, 2, …, N − n (n + k ≤ N), and the domination relation $\preceq_{n+k}$ in $\mathbb{R}^{n+k}$, which can be formulated, as below:
$$X^{Pareto}_{n} \subseteq X^{Pareto}_{n+k}, \quad k = 1, 2, \ldots, N - n \quad (35)$$
Proof. 
Let $X^{Pareto}_{2}$ be a non-empty set of Pareto solutions for two criteria $F_{1}$ and $F_{2}$. If we add the third criterion $F_{3}$, all solutions from $X^{Pareto}_{2}$ are still Pareto-optimal. Besides, there is no admissible solution $x \in X$, $x \notin X^{Pareto}_{2}$, that dominates any solution from the set $X^{Pareto}_{2}$ with respect to the three criteria $F_{1}, F_{2}, F_{3}$. On the other hand, another non-dominated solution $x \in X$, $x \notin X^{Pareto}_{2}$, may exist regarding a smaller value of $F_{3}(x)$. Therefore, $X^{Pareto}_{2} \subseteq X^{Pareto}_{3}$. Also, we can prove $X^{Pareto}_{3} \subseteq X^{Pareto}_{4}$, $X^{Pareto}_{4} \subseteq X^{Pareto}_{5}$ and so on.
We have shown that for every natural number k ≥ 1 the implication T(k) ⇒ T(k + 1) is true since the truth of its predecessor implies the truth of the successor. Since the assumptions of the mathematical induction rule are satisfied for this theorem, the formula (35) is true for every k ≥ 1, which ends the proof. □
It can happen that $X^{Pareto}_{n} = X^{Pareto}_{n+k}$, but this is extremely rare. Usually, along with additional criteria, the size of the Pareto set increases significantly in the many-criteria optimization problem of virtual machine placement $(X, F, \preceq)$ (18)–(23).
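A quick numerical illustration of Theorem 1 under the assumption of randomly generated evaluations (all criteria minimized, ties absent with probability one): the Pareto front computed on the first n coordinates is contained in the front computed on more coordinates, and its size grows quickly with n.

```python
import numpy as np

rng = np.random.default_rng(4)
Y = rng.random((500, 7))                       # 500 random 7D evaluations, all minimized

def pareto_front(Y):
    """Indices of non-dominated rows (all criteria minimized)."""
    front = set()
    for i, a in enumerate(Y):
        if not any(np.all(b <= a) and np.any(b < a)
                   for j, b in enumerate(Y) if j != i):
            front.add(i)
    return front

fronts = {n: pareto_front(Y[:, :n]) for n in range(2, 8)}
print({n: len(fronts[n]) for n in range(2, 8)})              # front size grows with n
print(all(fronts[n] <= fronts[n + 1] for n in range(2, 7)))  # inclusion (35) holds
```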
The algorithm MQPSO determined the compromise solution (Figure 5) characterized by the score $y^{p=2}$ = (1240; 25,952; 10,244; 11,630; 18; 191; 0.92) with the smallest Euclidean distance to the ideal point $y^{ideal}$ = (442; 25,221; 6942; 6750; 19; 195; 0.97) in the normalized criterion space $\bar{Y}$. Coordinates of the ideal point are calculated according to the following formulas:
$$y^{ideal}_{n} = \begin{cases} \min_{x \in X^{Pareto}_{7}} F_{n}(x), & n = \overline{1,4} \\ \max_{x \in X^{Pareto}_{7}} F_{n}(x), & n = \overline{5,7} \end{cases} \quad (36)$$
The nadir point $N^{*}$ is another characteristic point of the criterion space Y, and it is required to normalize the criterion space. Contrary to the ideal point, $N^{*}$ takes into account the worst values over the Pareto set $Y^{Pareto}_{7} = F(X^{Pareto}_{7})$ in terms of the preferences, as follows:
$$N^{*}_{n} = \begin{cases} \max_{x \in X^{Pareto}_{7}} F_{n}(x), & n = \overline{1,4} \\ \min_{x \in X^{Pareto}_{7}} F_{n}(x), & n = \overline{5,7} \end{cases} \quad (37)$$
Moreover, an anti-ideal point $P^{sup}$ may be used for the normalization of the criterion space. Coordinates of the anti-ideal point are calculated by the following formulas:
$$P^{sup}_{n} = \begin{cases} \max_{x \in X} F_{n}(x), & n = \overline{1,4} \\ \min_{x \in X} F_{n}(x), & n = \overline{5,7} \end{cases} \quad (38)$$
Because the algorithm determines the Pareto set of solutions $X^{Pareto}_{7}$ and its evaluation set $Y^{Pareto}_{7} = F(X^{Pareto}_{7})$, we can normalize the evaluation set $Y^{Pareto}_{7}$ into the 7D hypercube $\bar{Y}^{Pareto}_{7} = [0; 1]^{7}$, where the normalized ideal point is $\bar{y}^{ideal}$ = (0; 0; 0; 0; 1; 1; 1), as below:
$$\bar{y}_{n} = \begin{cases} \dfrac{F_{n}(x) - F^{ideal}_{n}}{N^{*}_{n} - F^{ideal}_{n}}, & n = \overline{1,4} \\[8pt] \dfrac{F_{n}(x) - N^{*}_{n}}{F^{ideal}_{n} - N^{*}_{n}}, & n = \overline{5,7} \end{cases} \quad (39)$$
The normalized nadir point $\bar{N}^{*}$ = (1; 1; 1; 1; 0; 0; 0) is characterized by the maximum Euclidean distance $\sqrt{7} \approx 2.65$ from the normalized ideal point. In the hypercube $\bar{Y}^{Pareto}_{7}$, a trade-off (compromise) placement of virtual machines $\omega^{p}$ can be selected from the Pareto-optimal set $X^{Pareto}_{7}$ due to the smallest value of the p-norm $L_{p}$, as follows:
$$L_{p}(\bar{y}^{p}) = \min_{\bar{y} \in \bar{Y}^{Pareto}_{7}} L_{p}(\bar{y}), \quad p = 1, 2, \ldots \quad (40)$$
where:
  • $\bar{y}^{p}$ is the normalized evaluation point of $y^{p} = F(\omega^{p}) \in Y^{Pareto}_{7}$,
  • $L_{p}(\bar{y}) = \lVert \bar{y} - \bar{y}^{ideal} \rVert_{p} = \left( \sum_{n=1}^{7} \lvert \bar{y}_{n} - \bar{y}^{ideal}_{n} \rvert^{p} \right)^{1/p}$.
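The normalization (36)–(39) and the compromise selection (40) can be summarized in the following sketch; the three 7D evaluations are invented for illustration, the first four criteria are minimized and the last three maximized, and the nadir point is used for normalization.

```python
import numpy as np

def compromise(Y_pareto, p=2):
    """Pick the Pareto evaluation closest to the ideal point in the
    normalized space [0,1]^7, using the nadir point for normalization."""
    Y = np.asarray(Y_pareto, dtype=float)
    ideal = np.concatenate([Y[:, :4].min(axis=0), Y[:, 4:].max(axis=0)])   # (36)
    nadir = np.concatenate([Y[:, :4].max(axis=0), Y[:, 4:].min(axis=0)])   # (37)
    Y_bar = np.empty_like(Y)
    Y_bar[:, :4] = (Y[:, :4] - ideal[:4]) / (nadir[:4] - ideal[:4])        # (39)
    Y_bar[:, 4:] = (Y[:, 4:] - nadir[4:]) / (ideal[4:] - nadir[4:])
    ideal_bar = np.array([0, 0, 0, 0, 1, 1, 1], dtype=float)
    if np.isinf(p):
        dist = np.abs(Y_bar - ideal_bar).max(axis=1)                       # (43)
    else:
        dist = (np.abs(Y_bar - ideal_bar) ** p).sum(axis=1) ** (1 / p)     # (40)
    return int(dist.argmin()), dist

# Three illustrative 7D evaluations (Z_hat, Z_tilde, E, Xi, k_RAM, k_disc, R)
Y = [[442, 26000, 7000, 6900, 15, 150, 0.95],
     [1240, 25952, 10244, 11630, 18, 191, 0.92],
     [2764, 25221, 6942, 6750, 19, 195, 0.97]]
best, distances = compromise(Y, p=2)
```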
Theorem 2.
For the given parameter p = 1, p = 2 or p → ∞, the normalization (39) and the domination relation in the many-criteria optimization problem of virtual machine placement $(X, F, \preceq)$ (18)–(23), the p-norm $L_{p}$ is a function of the solution x, as follows:
$$L_{1}(x) = \sum_{n=1}^{4} \frac{F_{n}(x) - F^{ideal}_{n}}{N^{*}_{n} - F^{ideal}_{n}} + \sum_{n=5}^{7} \frac{F^{ideal}_{n} - F_{n}(x)}{F^{ideal}_{n} - N^{*}_{n}} \quad (41)$$
$$L_{2}(x) = \sqrt{ \sum_{n=1}^{4} \left( \frac{F_{n}(x) - F^{ideal}_{n}}{N^{*}_{n} - F^{ideal}_{n}} \right)^{2} + \sum_{n=5}^{7} \left( \frac{F^{ideal}_{n} - F_{n}(x)}{F^{ideal}_{n} - N^{*}_{n}} \right)^{2} } \quad (42)$$
$$L_{\infty}(x) = \max\left\{ \max_{n=\overline{1,4}} \frac{F_{n}(x) - F^{ideal}_{n}}{N^{*}_{n} - F^{ideal}_{n}},\ \max_{n=\overline{5,7}} \frac{F^{ideal}_{n} - F_{n}(x)}{F^{ideal}_{n} - N^{*}_{n}} \right\} \quad (43)$$
Proof. 
Let p = 1. Then, $L_{1}(\bar{y}) = \lVert \bar{y} - \bar{y}^{ideal} \rVert_{1} = \sum_{n=1}^{7} \lvert \bar{y}_{n} - \bar{y}^{ideal}_{n} \rvert$. Because $\bar{y}^{ideal}$ = (0; 0; 0; 0; 1; 1; 1), we get $L_{1}(\bar{y}) = \sum_{n=1}^{4} \bar{y}_{n} + \sum_{n=5}^{7} (1 - \bar{y}_{n})$. We insert the right side of Equation (39) in place of $\bar{y}_{n}$, and we obtain Equation (41). We prove the correctness of formulas (42) and (43) in a similar way, which ends the proof. □

7. Numerical Experiments

In order to verify the quality of the mathematical model, the correctness of the formulated many-criteria optimization problem, as well as the quality of the developed algorithm, several multi-variant numerical experiments were carried out, and the designated compromise solutions were simulated in the GUT-WUT cloud computing environment. We consider four instances of the virtual machine placement problem: Benchmark90, Benchmark306, Benchmark855 and Benchmark1020, which are available at https://www.researchgate.net/profile/Piotr-Dryja (accessed: 19 November 2022). For example, Benchmark855 is characterized by 855 binary decision variables, and therefore the binary search space contains 2.4 × 10^257 items. There are 45 VMs, 15 nodes and 12 possible host types. Besides this, there are 60 integer decision variables and 1.3 × 10^69 possible solutions.
If we consider seven criteria, there are 21 pairs: $(\hat{Z}_{max}, \tilde{Z}_{max})$, $(\hat{Z}_{max}, \Xi)$, $(\hat{Z}_{max}, E)$, $(\hat{Z}_{max}, \kappa^{RAM}_{min})$, $(\hat{Z}_{max}, \kappa^{disc}_{min})$, $(\hat{Z}_{max}, R)$, $(\tilde{Z}_{max}, \Xi)$ and so on. Figure 5 shows three evaluations of Pareto-optimal solutions in the two-criteria space $(\hat{Z}_{max}, \tilde{Z}_{max})$. Points P1 = (410; 395,223), P2 = (448; 25,952) and P3 = (587; 25,221) are non-dominated due to $\hat{Z}_{max}$ and $\tilde{Z}_{max}$. However, the results of the experiments confirmed a significant increase in the number of Pareto-optimal solutions with the addition of further criteria. For instance, the other criteria $\Xi$, $E$, $\kappa^{RAM}_{min}$, $\kappa^{disc}_{min}$, $R$ significantly extended the set of Pareto solutions to a set {P1, P2, …, P200}. While these supplementary 197 points are dominated with respect to the two criteria $\hat{Z}_{max}$, $\tilde{Z}_{max}$, each new criterion usually adds Pareto-optimal solutions that are better than the points P1, P2, P3 with respect to this new metric. As a result, we can expect several Pareto-optimal solutions from which we can choose the compromise evaluation $y^{p=2}$ = (1240; 25,952; 10,244; 11,630; 18; 191; 0.92), where each evaluation of a solution is presented as y(x) = ($\hat{Z}_{max}(x)$, $\tilde{Z}_{max}(x)$, $\Xi(x)$, $E(x)$, $\kappa^{RAM}_{min}(x)$, $\kappa^{disc}_{min}(x)$, $R(x)$). The trade-off evaluation $y^{p=2} = F(x^{p=2})$ minimizes the Euclidean distance to the ideal point in the normalized space $\mathbb{R}^{7}$. On the other hand, P2 is the compromise point in the normalized space $\mathbb{R}^{2}$.
Figure 6 shows the compromise placement of virtual machines. The solution $x^{p=2}$ specifies 15 destinations for 45 virtual machines, where adequate resources are provided to efficiently run all tasks. There are three hosts DELL R520 E5640 v1 (Dell Inc., USA), four DELL R520 E5640 v2, four Infotronik ATX i5-4430 (Infotronik, Poland), two Infotronik ATX i7-4790, one Fujitsu Primergy RX300S8 (Fujitsu, Japan) and one IBM x3650 M4 (IBM, USA) allocated at the 15 nodes.
The 7D compromise evaluation $y^{p=2} = F(x^{p=2})$ is dominated by other solutions with respect to several pairs of criteria, but there is at least one pair of criteria for which it is non-dominated. Figure 7 shows Pareto evaluations found by MQPSO for the cut $(\Xi, E)$. In this case, the compromise point is dominated by seven evaluation points. However, $y^{p=2}$ is close to the Pareto front of this pair-of-criteria cut $(\Xi, E)$. A similar situation occurs in Figure 5, where the compromise score is dominated by 11 elements. On the other side, these evaluations are dominated by the compromise solution in Figure 6. In summary, the compromise solution is not dominated by other alternatives in the sense of the four criteria and, therefore, it is not dominated in the sense of the seven criteria either.
The 7D evaluation $y^{p=2}$ of the compromise solution was determined for the ideal point $y^{ideal}$ = (442; 25,221; 6942; 6750; 19; 195; 0.97) (Table 1). For the normalization of the criterion space, the nadir point N* = (2764; 49,346; 87,359; 20,740; 5.2; 21.5; 0.53) was used. Besides this, the anti-ideal point $y^{anti-ideal}$ = (3600; 50,000; 87,500; 22,000; 4; 7; 0.33) was calculated, which can be applied for an alternative normalization. The selection of a compromise solution is carried out in the normalized space by minimization of the p-norm $L_{p}$.
If we choose the nadir point N* for the normalization of the criterion space Y, the evaluation of the compromise solution is $y^{p=2}$ for p = 1 and p = 2 (No. 1 in Table 1). Without losing the generality of the considerations, Table 1 presents the best 20 solutions in the sense of $L_{2}$ and normalization using the nadir point. When analyzing the coordinates of the points closest to the ideal point, it can be noticed that in this case the “middle” values of the criteria are preferred instead of lexicographic solutions, which are characterized by the best value of a selected criterion. The most preferred values in each of the seven categories are marked in yellow (Table 1).
The undoubted advantage of the compromise solution is its full dominance over the other competitors with respect to the size of the disk storage reserve in the most critical host. In this respect, the remaining solutions are characterized by slightly worse values. Another advantage is the largest reserve of RAM memory; only solution No. 4 (Table 1) has the same value. Moreover, the compromise alternative has the shortest data transmission time through the busiest cloud transmission node. In this case, solutions No. 5, 9 and 18 are also characterized by an equally high quality of data transmission. Consuming more electricity than several other solutions is perhaps the biggest disadvantage of the compromise solution. However, the difference from the most energy-efficient placements of the VMs is not too large.
Solutions No. 2 and 3 are characterized by an electric power consumption lower by more than 2 kilowatts. Furthermore, they are not the best in terms of any criterion, but in terms of $L_{2}$ they are very close to the compromise solution.
If we choose the nadir point N* for the normalization, the evaluation of the compromise solution is $y^{p}$ for p → ∞ (No. 4 in Table 1). Table 2 presents the p-norm values for the best 20 Pareto-optimal VM placements sorted by $L_{2}$. Solution No. 4 differs from $x^{p=2}$ in that all criteria values are more balanced with respect to the ideal point coordinates. This is due to the greater consumption of electricity by solution No. 1, which causes its $L_{\infty}$ value to be 0.349. On the other hand, solution No. 4 is characterized by $L_{\infty}$ = 0.335.
If we select the anti-ideal point $y^{anti-ideal}$, the differences between the coordinate values of this point and the ideal point are greater than when the nadir point is taken into account. As a result, we are dealing with a completely different normalization. The specificity of this computational instance is such that the change of normalization has the greatest impact for the reliability of the cloud, because the distance to the ideal point coordinate is increased by 45.5%. On the other hand, the load of the bottleneck CPU host in the computing cloud is characterized by a 36% increase in the length of the value range. For the other five criteria, the impact on the normalization is below 10%.
However, the change of the normalization point did not result in any major changes of the compromise solutions. Solution No. 1 remained the compromise solution for p = 1 and for p = 2. On the other hand, a new compromise solution was identified for p → ∞ (Table 1, No. 6). In this case, Δȳ_1 decreased from 0.350 to 0.260, so L_∞ is now determined by Δȳ_4 = 0.290.
The decision-maker can choose one value of the parameter p. For p = 1, all criteria influence the compromise solution, but some of them can reach very poor values. For p = 2, the minimal Euclidean distance to the ideal point is favored. Finally, if p → ∞ is selected, all criteria achieve similarly good values not far from the ideal ones.
Another dilemma is the choice between the nadir point and the anti-ideal point for the normalization of the criterion space. The nadir point carries information about the range of the efficient solutions, whereas the anti-ideal point carries information about the range of the admissible set. Because compromise solutions are selected from the Pareto set, the nadir point is more suitable than the anti-ideal point for normalizing the criterion space.
We suggest selecting p = 2 together with the nadir point to determine the compromise solution from the set of Pareto-optimal elements when two or three criteria are considered. For many-objective analysis with seven criteria, an extended analysis is needed because compromise solutions are more sensitive to the parameter p and to the choice of the normalization point. In this way, a decision-maker can find trade-off solutions after introducing a limit on the size of the representation of Pareto-optimal solutions, which is specific to optimization problems with many criteria.
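Following this recommendation, the compromise placement can be picked as the element of the Pareto archive with the smallest nadir-normalized L_2 distance to the ideal point. A self-contained sketch is given below; compromise_index is a hypothetical helper and pareto_evals stands for an array of criteria evaluations of the archived solutions.

```python
import numpy as np

# Ideal and nadir points quoted in the text.
y_ideal = np.array([442, 25221, 6942, 6750, 19, 195, 0.97])
y_nadir = np.array([2764, 49346, 87359, 20740, 5.2, 21.5, 0.53])

def compromise_index(pareto_evals, p=2):
    # Nadir-normalized deviations of every archived evaluation from the ideal point.
    d = np.abs(pareto_evals - y_ideal) / np.abs(y_nadir - y_ideal)
    # L_p score per solution; the compromise solution minimizes this score.
    scores = np.max(d, axis=1) if np.isinf(p) else (d ** p).sum(axis=1) ** (1.0 / p)
    return int(np.argmin(scores))

# Example with the first two evaluations from Table 1:
pareto_evals = np.array([
    [1240.78, 25952.49, 10244.0, 11630.0, 18.0, 191.0, 0.92],
    [1404.84, 29973.30, 13455.0,  9500.0, 17.5, 190.0, 0.90],
])
print(compromise_index(pareto_evals))   # -> 0, i.e., solution No. 1
```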
An important experiment compares the quality of the solutions determined by the proposed method with those of other methods. Evaluations of Pareto-optimal placements of virtual machines are presented in Table 3; Benchmark855 was used for this purpose, too. We consider fifteen non-dominated solutions obtained by MQPSO, the Non-dominated Sorting Genetic Algorithm II (NSGA-II) [37], Multi-criteria Genetic Programming (MGP) [38], Multi-criteria Differential Evolution (MDE) [4] and Multi-criteria Harmony Search (MHS) [39].
To compare the 15 solutions provided by the five metaheuristics, a new ideal point y_ideal = (385; 21,931; 10,242; 9500; 18; 191; 0.92) was determined. In addition, the new nadir point N* = (1405; 36,187; 33,259; 14,838; 11; 112; 0.83) was used for normalization. When analyzing the computational load of each algorithm, it was assumed that the population consists of 100 particles (MQPSO), chromosomes (NSGA-II, MDE, MHS) or compact programs (MGP). The number of generated populations is limited to 10,000 and the maximum computation time to 30 min. In MQPSO, the coefficient values were as follows: c0 = 1, c1 = 2, c2 = 2, vmax = 1. In the differential evolution algorithm MDE, q = 0.9 and Cp = 0.4 were assumed; moreover, an additional mutation operator based on the multi-criteria tabu search algorithm [2] was used. In the harmony search algorithm MHS, the mutation rate pm was 0.1 and the crossover rate pc was 0.01. For MGP genetic programming, the maximum number of nodes in the program tree was 50, and the mutation and crossover rates were the same as for MHS.
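For orientation, a minimal sketch of the particle update with the coefficient values quoted above (c0 = 1, c1 = 2, c2 = 2, vmax = 1) is shown below. It covers only the classical PSO backbone with velocity clamping; the quantum-register operations of MQPSO (Hadamard initialization and rotation gates) described earlier are deliberately omitted, and update_particle is an illustrative helper rather than the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
C0, C1, C2, VMAX = 1.0, 2.0, 2.0, 1.0   # coefficient values quoted in the text

def update_particle(x, v, p_best, g_best):
    """One velocity/position update with velocity clamping at VMAX."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = C0 * v + C1 * r1 * (p_best - x) + C2 * r2 * (g_best - x)
    v_new = np.clip(v_new, -VMAX, VMAX)
    return x + v_new, v_new

# Example: one update of a 7-dimensional particle.
x, v = rng.random(7), np.zeros(7)
x, v = update_particle(x, v, p_best=rng.random(7), g_best=rng.random(7))
```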
An optimal swarm size is problem-dependent. With more particles in the swarm, the initial diversity is larger and a larger part of the search space is explored; on the other hand, more particles increase the computational complexity, and the PSO exploration approaches a parallel random search. We observed that with more particles, fewer swarm iterations were needed to reach Pareto-optimal solutions than with a smaller number of particles. Our experiments confirmed that MQPSO can find Pareto-optimal solutions with swarm sizes of 60 to 100 particles. Each run was repeated 10 times, and Table 3 lists the best solutions obtained with each algorithm. The p-norm values for the best 15 Pareto-optimal VM placements were sorted by L_2; the values for the other p-norms were calculated as well.
Based on the obtained data, it can be concluded that the MQPSO method outperforms the other methods in terms of the number of Pareto-optimal solutions among the first 12 placements. The three solutions closest to the ideal point are determined by MQPSO. An important argument is also that the average distance from the ideal point is the smallest for the efficient solutions provided by MQPSO: NSGA-II is the second-best metaheuristic with an average distance of 1.39 versus 1.16 achieved by MQPSO. If we consider the p-norm for p = 1, the compromise solution is the same, and the three solutions nearest to the ideal point are again produced by MQPSO. On the other hand, solution No. 10 provided by NSGA-II and solution No. 14 determined by MDE are very close to the compromise solution in terms of L_p. Moreover, the MQPSO algorithm has great potential to be extended in the near future, owing to the development of quantum computers and quantum algorithmic theory.
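The quoted average distances can be recomputed directly from the L_2 column of Table 3; the short check below (with the values copied from that column) yields 1.16 for MQPSO and 1.39 for NSGA-II.

```python
from statistics import mean

# L2 values per algorithm, taken from Table 3.
l2_by_algorithm = {
    "MQPSO":   [0.97, 1.18, 1.18, 1.31],
    "NSGA-II": [1.25, 1.28, 1.48, 1.56],
    "MDE":     [1.29, 1.30, 1.56, 1.68],
    "MHS":     [1.31, 1.70],
    "MGP":     [1.59],
}
for name, values in l2_by_algorithm.items():
    print(f"{name:8s} mean L2 = {mean(values):.2f}")
# MQPSO -> 1.16 and NSGA-II -> 1.39, matching the comparison in the text.
```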

8. Concluding Remarks

Smart education systems, intelligent health care and smart cities require deep learning models and efficient management of computer resources, which can be supported by the live migration of virtual machines. Because the formulated many-objective optimization problem is NP-hard, we proposed the many-objective PSO algorithm with quantum gates to provide Pareto-optimal placements of virtual machines in computing clouds. Efficient solutions determined by MQPSO are assessed against seven criteria: electric power of hosts, reliability of the cloud, workload of the bottleneck host, communication capacity of the critical node, free RAM capacity, free disc capacity and overall computer costs. Hadamard gates support forming an initial population in the quantum register by introducing a superposition of qubits, and rotation gates change the current state of the quantum register to explore the neighborhood of the current particle. Extensive numerical results from the experimental cloud based on the OpenStack platform showed that MQPSO is a very efficient tool for supporting the management of live migration in the computing cloud.
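As an illustration of these quantum-inspired mechanics, the sketch below shows one plausible realization under our own simplifying assumptions: qubit angles start in an equal superposition (the Hadamard step), rotation gates pull the amplitudes toward a guiding bit string such as the personal or global best, and a measurement samples a binary placement. The angle encoding, the rotation step delta and all helper names are illustrative and do not reproduce the exact MQPSO operators.

```python
import numpy as np

rng = np.random.default_rng(1)

def init_register(n_qubits):
    """Equal superposition: |alpha| = |beta| = 1/sqrt(2), i.e., theta = pi/4."""
    return np.full(n_qubits, np.pi / 4)

def rotate(theta, guide_bits, delta=0.05 * np.pi):
    """Rotate each qubit towards 1 (theta up) or 0 (theta down) by delta."""
    return np.clip(theta + np.where(guide_bits == 1, delta, -delta), 0.0, np.pi / 2)

def measure(theta):
    """Sample a bit string; P(bit = 1) = sin(theta)^2."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

theta = init_register(8)                      # 8 binary decision variables
guide = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # guiding bit string (e.g., a best particle)
for _ in range(10):
    theta = rotate(theta, guide)
placement_bits = measure(theta)               # candidate placement encoded as bits
```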
The cloud can share the workload, which also permits efficient training of machine learning algorithms. Solvers based on MQPSO can find the compromise solution for the parameter p = 2 from the set of Pareto-optimal alternatives, which serves as a recommendation for the teleportation of virtual machines. The experimental validation of the Pareto solutions shows that the cloud achieves higher performance than with solutions obtained by well-known algorithms such as genetic programming or differential evolution.
In future work, we are going to study other metaheuristics with quantum gates for the migration of virtual machines with an extended set of optimization criteria.

Funding

This research was partially funded by Warsaw University of Technology, Poland, grants RDN ITiT 504 04547 1120 and RW MiNI 504 04236 1120. The APC was funded by Multidisciplinary Digital Publishing Institute, Basel, Switzerland.

Data Availability Statement

Datasets Benchmark90, Benchmark306, Benchmark855 and Benchmark1020 are available at https://www.researchgate.net/profile/Piotr-Dryja (owners: Jerzy Balicki and Piotr Dryja) under a CC BY license. Cite: Balicki, J.; Dryja, P. Multi-objective tabu-based differential evolution for teleportation of smart virtual machines in private computing clouds. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; pp. 1904–1911.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Agarwal, A.; Raina, S. Live migration of virtual machines in cloud. Int. J. Sci. Res. Publ. 2012, 2, 1–5. [Google Scholar]
  2. Balicki, J. Tabu programming for multiobjective optimization problems. Int. J. Comp. Sci. Netw. Secur. 2007, 7, 44–50. [Google Scholar]
  3. Dhanoa, I.S.; Khurmi, S.S. Analyzing energy consumption during VM live migration. In Proceedings of the International Conference on Computing, Communication & Automation, Galgotias University, Greater Noida, India, 15–16 May 2015; pp. 584–588. [Google Scholar]
  4. Balicki, J.; Dryja, P. Multi-objective tabu-based differential evolution for teleportation of smart virtual machines in private computing clouds. In Proceedings of the 2021 IEEE Congress on Evolutionary Computation (CEC), Kraków, Poland, 28 June–1 July 2021; pp. 1904–1911. [Google Scholar]
  5. Liu, D.S.; Tan, K.C.; Huang, S.Y.; Goh, C.K.; Ho, W.K. On solving multiobjective bin packing problems using particle swarm optimisation. Eur. J. Oper. Res. 2008, 190, 357–382. [Google Scholar] [CrossRef]
  6. Agrawal, R.K.; Kaur, B.; Agarwal, P. Quantum inspired Particle Swarm Optimization with guided exploration for function optimization. Appl. Soft Comput. 2021, 102, 107122. [Google Scholar] [CrossRef]
  7. Masdari, M.; Nabavi, S.S.; Ahmadi, V. An overview of virtual machine placement schemes in cloud computing. J. Netw. Comp. Appl. 2016, 66, 106–127. [Google Scholar] [CrossRef]
  8. Wang, X.; Yuen, C.; Hassan, N.U.; Wang, W.; Chen, T. Migration-aware virtual machine placement for cloud data centers. In Proceedings of the Workshop on Cloud Computing Systems, Networks, and Applications, London, UK, 8–12 June 2015. [Google Scholar] [CrossRef]
  9. Sutar, S.G.; Mali, P.J.; More, A. Resource utilization enhancement through live virtual machine migration in cloud using ant colony optimization algorithm. Int. J. Speech Technol. 2020, 23, 79–85. [Google Scholar] [CrossRef]
  10. Kumar, R.; Prashar, T. Performance analysis of load balancing algorithms in cloud computing. Int. J. Comp. Appl. 2015, 120, 19–27. [Google Scholar] [CrossRef]
  11. Biswas, M.I.; Parr, G.; McClean, S.; Morrow, P.; Scotney, B. A practical evaluation in OpenStack live migration of VMs using 10 Gb/s interfaces. In Proceedings of the 2016 Symposium on Service-Oriented System Engineering, Oxford, UK, 29 March–2 April 2016; pp. 346–351. [Google Scholar]
  12. Mahmoudi, S.A.; Belarbi, M.A.; Mahmoudi, S.; Belalem, G.; Manneback, P. Multimedia processing using deep learning technologies, high-performance computing cloud resources, and Big Data volumes. Concurr. Comput. Pract. Exp. 2020, 32, e5699. [Google Scholar] [CrossRef]
  13. Caviglione, L.; Gaggero, M.; Paolucci, M.; Ronco, R. Deep reinforcement learning for multi-objective placement of virtual machines in cloud datacenters. Soft Comput. 2021, 25, 12569–12588. [Google Scholar] [CrossRef]
  14. Mohanraj, T.; Santhosh, R. Multi-swarm optimization model for multi-cloud scheduling for enhanced quality of services. Soft Comput. 2021, 1–11. [Google Scholar] [CrossRef]
  15. Marotta, A.; D’andreagiovanni, F.; Kassler, A.; Zola, E. On the energy cost of robustness for green virtual network function placement in 5G virtualized infrastructures. Comput. Netw. 2017, 125, 64–75. [Google Scholar] [CrossRef]
  16. Marotta, A.; Zola, E.; D’andreagiovanni, F.; Kassler, A. A fast robust optimization-based heuristic for the deployment of green virtual network functions. J. Netw. Comp. Appl. 2017, 95, 42–53. [Google Scholar] [CrossRef]
  17. Nguyen, M.; Dolati, M.; Ghaderi, M. Deadline-aware SFC orchestration under demand uncertainty. IEEE Trans. Netw. Serv. Manag. 2020, 17, 2275–2290. [Google Scholar] [CrossRef]
  18. Hosseini, F.; James, A.; Ghaderi, M. Probabilistic Virtual Link Embedding Under Demand Uncertainty. IEEE Trans. Netw. Serv. Manag. 2019, 16, 1552–1566. [Google Scholar] [CrossRef]
  19. Zhang, Q.; Li, H.; Liu, Y.; Ouyang, S.; Fang, C.; Mu, W.; Gao, H. A new quantum particle swarm optimization algorithm for controller placement problem in software-defined networking. Comput. Electr. Eng. 2021, 95, 107456. [Google Scholar] [CrossRef]
  20. Li, G.; Wang, W.; Zhang, W.; You, W.; Wu, F.; Tu, H. Handling multimodal multi-objective problems through self-organizing quantum-inspired particle swarm optimization. Inf. Sci. 2021, 577, 510–540. [Google Scholar] [CrossRef]
  21. Blum, C.; Puchinger, J.; Raidl, G.; Roli, A. Hybrid metaheuristics in combinatorial optimization: A survey. Appl. Soft Comput. 2011, 11, 4135–4151. [Google Scholar] [CrossRef] [Green Version]
  22. D’Andreagiovanni, F.; Krolikowski, J.; Pulaj, J. A fast hybrid primal heuristic for multiband robust capacitated network design with multiple time. Appl. Soft Comput. 2015, 26, 497–507. [Google Scholar] [CrossRef] [Green Version]
  23. Ghasemi, M.; Akbari, E.; Rahimnejad, A.; Razavi, S.E.; Ghavidel, S.; Li, L. Phasor particle swarm optimization: A simple and efficient variant of PSO. Soft Comput. 2019, 23, 9701–9718. [Google Scholar] [CrossRef]
  24. Houben, S.; Stallkamp, J.; Salmen, J.; Schlipsing, M.; Igel, C. Detection of traffic signs in real-world images: The German Traffic Sign Detection Benchmark. In Proceedings of the International Joint Conference on Neural Networks, Dallas, TX, USA, 4–9 August 2013; Volume 1288. [Google Scholar]
  25. Jiang, R.; Zhang, J.; Tang, Y.; Wang, C.; Feng, J. A collective intelligence based differential evolution algorithm for optimizing the structure and parameters of a neural network. IEEE Access 2020, 8, 69601–69614. [Google Scholar] [CrossRef]
  26. Cardoso, L.P.; Mattos, D.M.; Ferraz, L.H.G.; Duarte, O.C.M.; Pujolley, G. An efficient energy-aware mechanism for virtual machine migration. In Proceedings of the Global Information Infrastructure and Networking Symposium, Guadalajara, Mexico, 28–30 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1–6. [Google Scholar]
  27. Muhammed, A.S.; Ucuz, D. Comparison of the IoT platform vendors, Microsoft Azure, Amazon Web Services, and Google Cloud, from users’ perspectives. In Proceedings of the 8th International Symposium on Digital Forensics and Security (ISDFS), Beirut, Lebanon, 1–2 June 2020; pp. 1–4. [Google Scholar]
  28. Jin, N.; Wu, J.; Ma, X.; Yan, K.; Mo, Y. Multi-task learning model based on multi-scale CNN and LSTM for sentiment classification. IEEE Access 2020, 8, 77060–77072. [Google Scholar] [CrossRef]
  29. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes dataset for semantic urban scene understanding. In Proceedings of the conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1–12. [Google Scholar]
  30. Kuehne, H.; Jhuang, H.; Garrote, E.; Poggio, T.; Serre, T. HMDB: A large video database for human motion recognition. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2556–2563. [Google Scholar]
  31. Galligan, S.D.; O’Keeffe, J. Big Data Helps City of Dublin Improve Its Public Bus Transportation Network and Reduce Congestion; IBM Press: Armonk, NY, USA, 2013. [Google Scholar]
  32. Rao, G.S.; Stone, H.S.; Hu, T.C. Assignment of tasks in a distributed processor system with limited memory. IEEE Trans. Comput. 1979, 28, 291–299. [Google Scholar] [CrossRef]
  33. Florios, K.; Mavrotas, G.; Diakoulaki, D. Solving multiobjective, multiconstraint knapsack problems using mathematical programming and evolutionary algorithms. Eur. J. Oper. Res. 2010, 203, 14–21. [Google Scholar] [CrossRef]
  34. Mavrotas, G. Effective implementation of the ε-constraint method in multi-objective mathematical programming problems. Appl. Math. Comput. 2009, 213, 455–465. [Google Scholar] [CrossRef]
  35. QuTech. Quantum Inspire Home. Retrieved from Quantum Inspire. Available online: https://www.quantum-inspire.com/ (accessed on 22 September 2021).
  36. Fonseca, C.M.; Fleming, P.J. Genetic algorithms for multiobjective optimisation: Formulation discussion and generalization. In Proceedings of the 5th International Conference on Genetic Algorithms, Champaign, IL, USA, 1 June 1993; Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA, 1993; pp. 416–423. [Google Scholar]
  37. Deb, K.; Pratap, A.; Agarwal, S.; Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 2002, 6, 182–197. [Google Scholar] [CrossRef] [Green Version]
  38. Balicki, J.; Korłub, W.; Krawczyk, H.; Paluszak, J. Genetic programming with negative selection for volunteer computing system optimization. In Proceedings of the 6th International Conference on Human System Interaction, Sopot, Poland, 6–8 June 2013; pp. 271–278. [Google Scholar]
  39. Balicki, J.; Korłub, W.; Paluszak, J.; Tyszka, M. Harmony search for self-configuration of fault-tolerant and intelligent grids. In Computer Information Systems and Industrial Management, Proceeding of 15th IFIP TC8 International Conference, CISIM 2016, Vilnius, Lithuania, 14–16 September 2016; Saeed, K., Homenda, W., Eds.; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2016; Volume 9842, pp. 566–576. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Probability distribution of the virtual machine placement after quantum register measurement.
Figure 2. Determination of the new position of a digital particle.
Figure 3. Hadamard and rotation gates for updating the quantum register Q.
Figure 4. Distribution of the node selection probabilities after rotations of the quantum register.
Figure 5. Pareto-optimal evaluations of two criteria (Ẑmax, Z̃max) for Benchmark855.
Figure 6. The compromise placement of 45 VMs x_{p=2} for Benchmark855.
Figure 7. Pareto-optimal evaluations for two selected criteria Ξ and E.
Table 1. Criteria values for the best 20 Pareto-optimal solutions sorted by L_2 in the criterion space normalized by the nadir point.
No. | Ẑmax (s) | Z̃max (s) | Ξ ($) | E (watt) | κ_min^RAM (GB) | κ_min^disc (TB) | R
1 | 1240.78 | 25,952.49 | 10,244.00 | 11,630.00 | 18.0 | 191 | 0.92
2 | 1404.84 | 29,973.30 | 13,455.00 | 9500.00 | 17.5 | 190 | 0.90
3 | 1389.77 | 29,973.30 | 16,114.00 | 9525.00 | 17.0 | 185 | 0.91
4 | 1105.84 | 29,973.30 | 30,352.00 | 11,430.00 | 18.0 | 188 | 0.91
5 | 1240.78 | 25,952.49 | 24,412.00 | 11,625.00 | 16.0 | 180 | 0.92
6 | 1254.24 | 29,973.30 | 30,895.00 | 11,125.00 | 17.0 | 190 | 0.91
7 | 1061.17 | 29,973.30 | 20,240.00 | 12,760.00 | 17.0 | 165 | 0.88
8 | 1374.70 | 29,973.30 | 8796.00 | 11,530.00 | 15.0 | 187 | 0.88
9 | 1683.01 | 25,952.49 | 20,888.00 | 7825.00 | 14.0 | 188 | 0.87
10 | 1240.78 | 32,897.52 | 19,126.00 | 10,650.00 | 16.0 | 188 | 0.78
11 | 1733.44 | 29,973.30 | 15,623.00 | 8200.00 | 17.0 | 98 | 0.88
12 | 1547.61 | 29,973.30 | 8317.00 | 10,450.00 | 12.0 | 122 | 0.90
13 | 1683.01 | 29,973.30 | 9329.00 | 9650.00 | 14.0 | 132 | 0.77
14 | 1718.37 | 29,973.30 | 14,248.00 | 8250.00 | 11.0 | 123 | 0.88
15 | 1683.01 | 29,973.30 | 14,393.00 | 8550.00 | 12.0 | 122 | 0.77
16 | 1733.44 | 29,973.30 | 12,945.00 | 8450.00 | 15.0 | 99 | 0.75
17 | 1658.75 | 29,973.30 | 18,574.00 | 8825.00 | 8.0 | 155 | 0.97
18 | 1683.01 | 25,952.49 | 18,509.00 | 9500.00 | 9.0 | 77 | 0.89
19 | 1061.17 | 29,973.30 | 36,498.00 | 11,010.00 | 17.0 | 21 | 0.77
20 | 1130.09 | 34,725.16 | 29,234.00 | 10,055.00 | 12.0 | 98 | 0.59
Table 2. The p-norm values for the best 20 Pareto-optimal VM placements sorted by L_2.
No. | p = 1 (nadir) | p = 2 (nadir) | p → ∞ (nadir) | p = 1 (anti-ideal) | p = 2 (anti-ideal) | p → ∞ (anti-ideal)
1 | 0.973 | 0.511 | 0.349 | 0.810 | 0.424 | 0.320
2 | 1.186 | 0.542 | 0.415 | 0.994 | 0.438 | 0.305
3 | 1.256 | 0.548 | 0.408 | 1.068 | 0.450 | 0.300
4 | 1.358 | 0.585 | 0.335 | 1.197 | 0.524 | 0.307
5 | 1.358 | 0.596 | 0.348 | 1.177 | 0.516 | 0.320
6 | 1.467 | 0.623 | 0.350 | 1.287 | 0.549 | 0.297
7 | 1.581 | 0.644 | 0.430 | 1.381 | 0.566 | 0.394
8 | 1.504 | 0.667 | 0.402 | 1.273 | 0.562 | 0.313
9 | 1.445 | 0.712 | 0.534 | 1.193 | 0.572 | 0.393
10 | 1.782 | 0.745 | 0.432 | 1.504 | 0.615 | 0.310
11 | 1.873 | 0.864 | 0.559 | 1.593 | 0.727 | 0.516
12 | 2.042 | 0.892 | 0.507 | 1.766 | 0.774 | 0.467
13 | 2.148 | 0.915 | 0.534 | 1.785 | 0.741 | 0.393
14 | 2.144 | 0.954 | 0.580 | 1.842 | 0.818 | 0.533
15 | 2.335 | 0.995 | 0.534 | 1.963 | 0.825 | 0.467
16 | 2.292 | 1.004 | 0.556 | 1.908 | 0.820 | 0.511
17 | 2.042 | 1.022 | 0.797 | 1.804 | 0.899 | 0.733
18 | 2.492 | 1.169 | 0.725 | 2.166 | 1.031 | 0.667
19 | 2.738 | 1.253 | 1.000 | 2.405 | 1.122 | 0.926
20 | 3.134 | 1.301 | 0.864 | 2.671 | 1.075 | 0.594
Table 3. Criteria values for 15 Pareto-optimal solutions determined by five multi-objective metaheuristics.
No. | Algorithm | Ẑmax (s) | Z̃max (s) | Ξ ($) | E (watt) | κ_min^RAM (GB) | κ_min^disc (TB) | R | L1 | L2 | Lp
1 | MQPSO | 1241 | 25,952 | 10,244 | 11,630 | 18 | 191 | 0.92 | 1.52 | 0.97 | 0.84
2 | MQPSO | 1405 | 29,973 | 13,455 | 9500 | 18 | 190 | 0.90 | 2.01 | 1.18 | 1.00
3 | MQPSO | 1390 | 29,973 | 16,114 | 9525 | 17 | 185 | 0.91 | 2.14 | 1.18 | 0.99
4 | NSGA-II | 587 | 25,221 | 33,259 | 12,223 | 15 | 177 | 0.92 | 2.54 | 1.25 | 1.00
5 | NSGA-II | 442 | 25,221 | 12,133 | 12,163 | 17 | 112 | 0.87 | 2.57 | 1.28 | 1.00
6 | MDE | 442 | 29,973 | 15,351 | 12,675 | 16 | 152 | 0.84 | 2.89 | 1.29 | 0.89
7 | MDE | 581 | 25,221 | 15,292 | 11,598 | 11 | 149 | 0.89 | 2.90 | 1.30 | 1.00
8 | MHS | 442 | 25,952 | 18,427 | 12,786 | 17 | 112 | 0.89 | 2.79 | 1.31 | 1.00
9 | MQPSO | 1106 | 29,973 | 30,352 | 11,430 | 18 | 188 | 0.91 | 2.66 | 1.31 | 0.87
10 | NSGA-II | 435 | 29,973 | 28,293 | 12,322 | 12 | 156 | 0.90 | 3.45 | 1.48 | 0.86
11 | NSGA-II | 580 | 21,931 | 18,400 | 14,838 | 11 | 155 | 0.90 | 3.22 | 1.56 | 1.00
12 | MDE | 414 | 29,973 | 10,242 | 14,259 | 17 | 112 | 0.87 | 3.18 | 1.56 | 1.00
13 | MGP | 448 | 25,952 | 22,849 | 12,684 | 12 | 114 | 0.89 | 3.65 | 1.59 | 0.97
14 | MDE | 385 | 33,994 | 14,707 | 14,104 | 14 | 150 | 0.91 | 3.95 | 1.68 | 0.86
15 | MHS | 411 | 36,187 | 18,320 | 12,847 | 14 | 177 | 0.83 | 3.75 | 1.70 | 1.00
16 | nadir | 1405 | 36,187 | 33,259 | 14,838 | 11 | 112 | 0.83 | 7.00 | 2.64 | 1.00
17 | ideal | 385 | 21,931 | 10,242 | 9500 | 18 | 191 | 0.92 | 0.00 | 0.00 | 0.00
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
