

Server Supported by Cloud and GPS based on

Backpropagation
Ayad Ghany Ismaeel 1, Hanan M. Shukur 1, Subhi R. M. Zeebaree 2, Dilovan Haji 3, Rizgar R. Zebari 4, Mohammed A. M. Sadeeq 2, Ahmed Alkhayyat 5, Omar M. Ahmed 6, Adel Al-Zebari 2, Lailan M. Haji 7

1 Computer Engineering Dept., Al-Kitab University, Kirkuk, Iraq
2 Duhok Polytechnic University, Duhok, KRG-Iraq
3 Dept. of Mechanical Engineering, University of Zakho, KRG-Iraq
4 Computer Science Department, Nawroz University, KRG-Iraq
5 Technical Engineering Dept., Islamic University, Najaf, Iraq
6 Zakho Technical Institute, Duhok Polytechnic University, KRG-Iraq
7 Computer Science Dept., University of Zakho, KRG-Iraq

ayad.ghany@uoalkitab.edu.iq

Abstract. Cloud data centres consist of thousands of servers that provide services to clients. These servers should react to customers' demands instantly, without delay, and each server within the data centre supports another server in situations such as end-to-end delay or failure. However, if no server within a given data centre is idle (all have a high busy rate), then the source server (the server that needs support) cannot be assisted from inside its own data centre. This leads to a high load on the source server, delaying response and fault-recovery times and causing network traffic problems. This work proposes a technique/model that solves this problem by providing adequate support. The technique is built on a queue model and the NN Backpropagation algorithm. Applying it achieved a best validation performance (mean square error) of 0.0001.

Keywords: Cloud Computing, idle servers, steady-state queuing model, OPNET, datacenter,
GPS, Backpropagation

1. Introduction
Cloud computing is a server-based paradigm that allows clients to access a shared pool of resources
from a remote location. It offers clients a variety of benefits, including a pay-as-you-go strategy,
flexibility, and dynamic customization provided by virtualization. Cloud Computing grew and
developed due to the era of high-speed networks and low-cost technology [1]. Cloud Computing is
defined by the National Institute of Standards and Technology (NIST) as "a model for enabling
ubiquitous, convenient, on-demand network access to a shared pool of configurable computing
resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned
and released with minimal management effort or service provider interaction" [2]. Cloud computing is
a sort of Internet-based computing that intends to provide information and processing resources that may be shared among various devices, not just computers, depending on the needs of the users. It is an excellent example of providing on-demand, shared access to a flexible cluster of computing resources (for example, applications, servers, services, storage, and networks) that supports quick provisioning and low administration [3]. A cloud is a network about which we know little, perhaps one that we do not own, and which provides connectivity in its own way; this is why external networks such as the Internet have traditionally been depicted as a cloud. Resources can be stored in the cloud (i.e., in data centres), and the computer becomes merely an instrument for communicating with it: rather than developing applications that run on staff devices, applications are installed in the cloud, communicating across the network and storing software and files. This simplifies users' jobs and enables rapid reaction to their demands without requiring deep knowledge of, or prior experience with, the underlying technologies. Thus, cloud computing removes the need for IT managers and clients to buy and maintain IT infrastructure and applications. The cloud yields cost savings because users lease resources instead of owning them, reducing software costs [4]. In particular, companies that would normally require vast amounts of startup capital may need only a fraction of what was previously required to succeed. Cloud computing has become a key portion of the global digital economy due to its extensibility, flexibility, and reduced operating costs [5].
Examples of cloud computing that cover many aspects of everyday life include: e-mail services (Gmail, Yahoo, and Hotmail), cloud storage services (Xdrive, MediaMax, and Strongspace), cloud music services (Google Music, Amazon Cloud Player, and iTunes/iCloud), cloud applications (Google Docs and Photoshop Express), and cloud operating systems (Google Chrome OS). Cloud services are provided at three levels. With "SaaS" (Software as a Service), users access software applications on a pay-per-use basis instead of purchasing licensed programs, which can be quite costly. "PaaS" (Platform as a Service) provides an operating system, programming-language execution environment, database, and web server. "IaaS" (Infrastructure as a Service) denotes the provision of hardware, such as central processing units, memory, and storage, as a service; related offerings include storage-, database-, information-, process-, integration-, security-, management-, and testing-as-a-service. (Figure 1) illustrates these services. These services (SaaS, PaaS, and IaaS) should be available to clients in all conditions and with high quality, because the cloud is associated with enterprise and e-business applications that cannot be deferred without incurring considerable losses [6]. Online services should react to customers' demands instantly, without delay or postponement.
Figure 1 Cloud Services Types and Examples

Many servers work as a distributed system in a datacentre under a cloud, and each server supports another quickly and cost-effectively (at peak time, for backup, on failure, etc.) to provide users with services continuously. However, if no idle server exists in the data centre containing the source server (the server that needs support), a serious dilemma arises: it must be decided which server from outside the data centre (an external site) will be allocated to that source server, quickly and cost-effectively. Facing and solving this real problem requires a technique for assigning appropriate servers to support the cloud's servers, reacting to customers' service requests at minimum cost (the shortest path between the source server and the supporting servers) while also taking into account the allocation of idle servers (those with the highest %idle), i.e. the servers that are ready to support the source servers. As a result, client demands are not rejected or lost. BackPropagation is employed in this technique to predict effective server allocation at minimum cost (nearest distance).

2. Related works

Ayad Ghany Ismaeel [2012] proposed an effective technique for allocating servers to support the cloud by determining the available servers with a relatively high idle rate (i.e. low utilization) to support source servers that need support, using a queue model and the Haversine equation together with GIS and GPS techniques to select the idle server that is nearest, thereby achieving the lowest cost and best performance, i.e. effective cost [6].

Xiaoming Nan, Yifeng He, and Ling Guan [2014] described service operation in the multimedia cloud. The presented queuing model consists of three queuing systems: a schedule queue, a computation queue, and a transmission queue. Optimization methods were applied to study resource allocation problems with response-time and resource-cost minimization, using experimental Windows Azure parameters. The findings show that the proposed resource allocation schemes can optimally assign cloud resources for each service to achieve the shortest response time within a certain budget, or provide QoS provisioning at the lowest resource cost [7].
Yuan, X., Geyong Min, and others [2017] proposed a geo-distributed datacenter cloud system model, developing a resource-allocation framework for geo-distributed datacenter clouds to investigate the relationship between these clouds and physical networks. Queuing theory is used to develop an interaction model, and the resource allocation problem is formulated; the new dynamic resource allocation algorithm allocates system resources periodically according to Internet conditions and cloud operation states. The suggested algorithm assigns bandwidth resources along the paths of geo-distributed data centres, allowing content to be transported more rapidly and effectively. Furthermore, a bandwidth resource allocation method helps balance physical network links [8].
All the above-mentioned techniques solve various problems related to allocating servers to support the cloud: server selection (in Cloud Computing and CDNs), resource allocation with response-time and resource-cost minimization, and resource allocation across geo-distributed datacenter clouds and their physical networks. Still, none of the previously mentioned techniques employs an artificial neural network for allocation. This paper therefore proposes a technique to overcome that drawback, using a feedforward BackPropagation neural network to predict the best server selection/allocation (i.e. effective allocation based on distance and %idle) outside the data centre that contains the source server needing support, achieving the lowest cost and optimal performance according to the Artificial Neural Network (ANN) analysis. An ANN is an information-processing system inspired by biological neural networks. ANNs recognize familiar patterns and correlations in raw data, learning from them and modifying their outputs as needed. An ANN has a massive number of highly interconnected processing elements, akin to synapses, and is used to predict results and learn from a set of data. Queue theory is also employed to calculate the %idle of each supportive server.

3. Proposed Technique for Allocating Servers to Support the Cloud


The proposed technique for allocating servers to support the cloud is illustrated in the flowchart shown in (Figure 2). According to the flowchart, in the first stage the location (longitude and latitude) of each data centre (a group of servers) is determined using Google Maps (which supports locating each server by longitude and latitude), thereby giving the distance between the source server (which needs support) and the other supportive (idle) servers.
Figure 2 Flowchart of Proposed Technique for Allocating Server to Support Cloud.

The proposed technique involves two critical decisions to allocate the appropriate server(s) to support the source server within the cloud, as follows:

3.1. Steady State Queuing Model

A data network can be viewed as a queuing system in which packets arrive, wait in various queues, receive service at multiple locations, and then leave. Queues form in any telecommunication system whenever clients contend for limited resources, so the queueing process model (M/M/n) can be considered a fundamental tool of performance evaluation. In M/M/1, the two Ms denote Markov (Poisson) processes representing the arrival and departure of packets, and n = 1 denotes a single trunk (i.e. a single server). The M/M/1 queueing system is the simplest non-trivial queue: requests arrive at a rate λ (packets/sec), with independent, exponentially distributed interarrival delays. The service completion times (the time from packet entry to exit), with rate μ (mean 1/μ), are likewise described by a Poisson process, corresponding to an exponential distribution for the service time, as shown in (Figure 3). Stability requires λ < μ. There is an average of L units in the queue system at any one time, computed as follows [6]:

L = λ / (μ − λ)    (1)

The traffic intensity (busy rate/utilization) can be computed using the following equation [6]:

ρ = λ / μ    (2)

where ρ represents the traffic intensity of the queuing system, while [6]:

π0 = 1 − ρ    (3)

where π0 represents the %idle; this formula will be employed to calculate the idle rate of each server.

Figure 3 M/M/1 Queuing
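The three formulas above can be checked with a few lines of Python; the rates below are illustrative values, not measurements from the paper:

```python
# Hypothetical arrival and service rates for one server (packets/sec).
lam, mu = 0.8, 1.0          # stability requires lam < mu

L = lam / (mu - lam)        # Eq. (1): average number of packets in the system
rho = lam / mu              # Eq. (2): traffic intensity (fraction of time busy)
pi0 = 1 - rho               # Eq. (3): probability the server is idle (%idle)

print(L, rho, pi0)          # 4 packets on average; 80% busy, 20% idle
```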

3.2. BackPropagation

In a feedforward network, signals travel only one way, from input neurons toward output neurons, never returning to the source. The Neural Network (NN) topology comprises nodes and weighted connections, with each layer having its own weight matrix. In this work a multi-layer perceptron (MLP) with the feedforward BackPropagation (FFBP) model is used: there are no feedback connections, but errors are backpropagated during network training. A multilayer perceptron consists of three sections: input, hidden, and output layers, as illustrated in (Figure 4). The FFBP algorithm contains three key parts: the feedforward pass, error calculation, and weight updating. Errors are backpropagated during training, and the network weights are updated by the BackPropagation algorithm to obtain correct predictions; therefore, before being applied, the NN must be trained so as to minimize the prediction error. The FFBP network is trained and then used to predict the effective server allocation (nearest and highest %idle) to support the source server within the cloud.

Figure 4 BP network with three layers
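The FFBP training loop can be sketched in a few lines of NumPy. The block below trains a 2-input MLP with one hidden layer of 10 tansig (tanh) neurons, mirroring the network structure described above; the input columns (normalised %idle and distance) and the one-hot target marking the desired supportive server are hypothetical values for illustration, not the paper's MATLAB toolbox model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training columns: row 0 = normalised %idle, row 1 = normalised distance.
X = np.array([[0.42, 0.34, 0.71, 0.45, 0.41, 0.07],
              [0.60, 0.81, 0.25, 0.88, 0.71, 0.58]])
T = np.array([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])  # 1 marks the desired supportive server

# One hidden layer of 10 neurons with tansig (tanh) activations.
W1 = rng.normal(scale=0.5, size=(10, 2)); b1 = np.zeros((10, 1))
W2 = rng.normal(scale=0.5, size=(1, 10)); b2 = np.zeros((1, 1))

def forward(X):
    H = np.tanh(W1 @ X + b1)   # hidden layer (tansig)
    Y = np.tanh(W2 @ H + b2)   # output layer (tansig)
    return H, Y

lr, losses = 0.1, []
for _ in range(2000):
    H, Y = forward(X)
    E = Y - T
    losses.append(np.mean(E ** 2))      # MSE: the paper's performance metric
    dY = E * (1 - Y ** 2)               # backpropagate through the output tanh
    dH = (W2.T @ dY) * (1 - H ** 2)     # ... and through the hidden tanh
    W2 -= lr * dY @ H.T; b2 -= lr * dY.sum(axis=1, keepdims=True)
    W1 -= lr * dH @ X.T; b1 -= lr * dH.sum(axis=1, keepdims=True)

H, Y = forward(X)
print(round(losses[0], 4), round(losses[-1], 6), int(np.argmax(Y)))
```

The MSE falls sharply over training, and the largest output ends up at the server marked by the target, illustrating how the trained network comes to "select" the supportive server.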

4. Experimental Results
Software and technologies that must be available to implement and apply the proposed technique are:

• OPNET MODELER 14.5

• MATLAB R2015a

• Google Maps API

The proposed technique of the server supported by the cloud is implemented as follows:

4.1. OPNET MODELER 14.5

The OPNET software is used in the proposed work to create and analyse the M/M/1 queue design and to determine the idle percentage for each server in the cloud. The M/M/1 queue comprises several objects in the Node Editor, including two processors and a queue. Packet creation is handled by one processor (SOURCE), while the second processor acts as the SINK module. The NODE model is built by combining these three primary modules, connected via packet streams, as shown in (Figure 5): SOURCE (a packet source), QUEUE (the queuing system, consisting of an infinite buffer and a server), and SINK (which receives and discards packets forwarded from the QUEUE).

Figure.5: Creating NODE Model Using (OPNET Modeler14.5).

The following attributes of the SOURCE module are changed: the process model attribute is set to (simple source); the packet inter-arrival time attribute is given an exponential distribution with a mean of 1.0 (packet inter-arrival times are exponentially distributed with a mean of 1 sec), i.e. the source module generates packets at an exponential rate; and the packet size attribute is given an exponential distribution with a mean of 1024 (packet sizes are exponentially distributed with a mean of 1024 bits).
The QUEUE module's process model is set to (acb-fifo), with a service rate of 9600, where acb-fifo stands for: a, active queue (the queue acts as its own server); c, the module can concentrate multiple incoming packet streams into a single internal queuing resource; b, the service time is a function of the number of bits in the packet; and fifo, the first-in-first-out service ordering discipline. The SINK module's process model attribute is set to sink (Figure 6.a, 6.b, and 6.c).
a) Source Attributes b) Queue Attributes

c) SINK Attributes

Figure.6. Processor Model Attributes.

After creating the NODE model and network, the NODE statistics to be reported (i.e. Queue Size (packets) and Queue Delay (seconds)) are defined by double-clicking the network icon to display the node model, as shown in (Figure 7). The final step is to simulate the network using DES (Configure/Run Discrete Event Simulation) in the OPNET package with different Run Durations, applying these NODE models to six servers, numbered server No. 1 to server No. 6, with Run Durations of 5 hours, 1 day, 7 hours, 2 hours, 3 hours, and 0.5 hour respectively.
Figure 7: Queue Size ("time_average") and Queue Delay ("constant_shift") overlapped graph

From (Figure 7), the average number of packets, representing λ, can be computed, along with the queue delay, giving μ; equations (1) and (2) in (subsection 3.1) are then applied to obtain the traffic intensity and %idle values. These values are for server No. 5 with a Run Duration of 3 hours. Applying the same procedure to all other servers, (Figure 8) shows the %idle calculations for the six servers.

Figure.8: Calculations of (%idle) for the servers.
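Since OPNET is proprietary, the %idle figure for a node configured as above can be sanity-checked with a short, self-contained M/M/1 simulation in Python using the same SOURCE and QUEUE parameters (mean inter-arrival 1 s, mean packet size 1024 bits, service rate 9600 bits/s). This is an independent sketch, not the OPNET model itself:

```python
import random

random.seed(42)

lam = 1.0           # packets/sec: mean inter-arrival time 1 s (SOURCE module)
mean_bits = 1024.0  # mean packet size in bits (SOURCE module)
rate_bps = 9600.0   # service rate in bits/sec (QUEUE module)

n = 200_000
arrival = 0.0       # time of the current arrival
free_at = 0.0       # time the server next becomes free
busy = 0.0          # accumulated service (busy) time

for _ in range(n):
    arrival += random.expovariate(lam)                   # Poisson arrivals
    service = random.expovariate(rate_bps / mean_bits)   # exponential service time
    start = max(arrival, free_at)                        # wait if the server is busy
    free_at = start + service
    busy += service

idle_pct = 100.0 * (1.0 - busy / free_at)
print(round(idle_pct, 1))   # should sit near the analytic 100*(1 - 1024/9600) ≈ 89.3
```

The simulated idle percentage converges on the analytic value π0 = 1 − λ/μ from equation (3), which is the same quantity OPNET's statistics yield.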

4.2. BackPropagation Neural Network

In this case study, the MATLAB R2015a neural network toolbox is used to implement the feedforward neural network that selects the server with a high idle ratio and the nearest distance to support the source server within the cloud. First, the inputs and targets of the NN must be prepared: the NN inputs are two factors (%idle and distance in km), while the target is the selection of the server with a high idle ratio and the nearest distance. The toolbox is then used to create the neural network with one hidden layer; in this proposed approach the hidden layer consists of 10 neurons, the "TANSIG" transfer function is used in the hidden and output layers, and the feedforward BackPropagation (FFBP) model is applied, as shown in (Figure 9).
Figure.9: Creating NN with one hidden layer and 10 neurons

There are diverse backpropagation training algorithms with different capabilities, depending on the nature of the problem the network is designed to solve. With BackPropagation, errors are propagated backwards during network training. Figures 10 and 11 plot the crucial elements of this training: Figure 10 shows the regression, and Figure 11 shows the performance.
Figure 10: Training Network Regression.

Figure.11: Training Network Performance(MSE).

The Neural Network is trained with 20 servers (with different idle ratios and distances); (Figure 12) shows the inputs and the NN's predicted outputs for the 20 servers. The input consists of two rows: the first row represents the idle ratios, while the second row represents the distances between the source server and the supportive idle servers.
Figure.12: The inputs and NN predicted outputs for 20 servers

Suppose that various data centres (groups of servers) are distributed in the cloud (e.g. in Italy-Europe, Spain-Europe, Egypt-Africa, United Kingdom-Europe, Germany-Europe, and India-Asia). The technique of the server supported by the cloud is applied as follows:

4.2.1. Specify the source server. Assume server G is the source server (which currently requires assistance from another server) in a cloud data centre in Saudi Arabia-Asia, e.g. because this server has reached peak time (queue overflow). The Server ID (G) that requires assistance within the cloud and its Datacenter ID (Saudi Arabia-Asia) are specified; the coordinates (latitude, longitude) of the source server (G) are (23.88594, 45.079162).

4.2.2. Find the idle servers within the cloud. The second step is determining the %idle rate of the idle servers in the cloud's data centres that could support the source server (G) in the (Saudi Arabia-Asia) data centre. The values λ and μ, obtained by applying the steady-state M/M/1 queue model (as described in section 3.1) to each server, are used to find the traffic intensity ρ (busy rate) and then π0 (%idle), as shown in (Figure 8). To support the source server (G) in the (Saudi Arabia-Asia) data centre there are six idle servers, in (Italy-Europe, Spain-Europe, Egypt-Africa, United Kingdom-Europe, Germany-Europe, and India-Asia). These servers can support the source server (G), which currently needs support; (Google Maps) is employed to compute the distance between the source server (G) and the other supportive idle servers within the cloud, as illustrated in (Figure 13).

Figure 13: Idle servers in data centres all over the world within the cloud.
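The server-to-server distances can be reproduced with the Haversine formula (the paper obtains its distances from Google Maps). In the sketch below, the coordinates of the Egypt-Africa server are an assumption (Cairo is used for illustration); only the source server's coordinates come from the paper:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlam / 2) ** 2
    return 2 * R * asin(sqrt(a))

g = (23.88594, 45.079162)      # source server G (coordinates from the paper)
cairo = (30.0444, 31.2357)     # assumed location of the Egypt-Africa server
d = haversine_km(*g, *cairo)
print(round(d))                # on the order of 1500 km
```

The result is close to the 1471 km distance used for the Egypt-Africa server in the case study, with the difference attributable to the assumed coordinates.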

4.2.3. Select the closest (shortest-path) idle server. The Neural Network is employed to select the supportive idle server nearest to the source server specified in (subsection 4.2.1). The six servers referred to in (subsection 4.2.2) are taken as the sample, and their inputs are imported into the same NN: the idle ratios of the six servers are (42, 34, 71, 45, 41, 7) respectively, and the distances in km are (3605, 4885, 1471, 5272, 4238, 3502) respectively. The desired supportive idle server for this case study is server (C) in the Egypt-Africa data centre; the NN's predicted output is likewise the third server, which has an idle ratio of 71 and, at 1471 km, the shortest distance from the source server. As a result, the NN prediction matches the desired output (highest %idle/nearest), as is clear in (Figure 14), (Figure 15), and (Table 1).

Figure.14: NN prediction to select the nearest idle server.

Figure.15: The desired idle server selection, which is nearest to the source server(G).

Table 1: Comparison between the desired outputs and the NN's predicted outputs.

Server No.            1         2          3        4        5         6
%Idle                 42        34         71       45       41        7
Distance (km)         3605      4885       1471     5272     4238      3502
Target                0         0          1        0        0         0
NN predicted output   0.045847  0.0022216  0.98775  0.00858  0.015046  0.010443
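For intuition, the NN's selection in Table 1 can be mimicked by a simple hand-crafted score that rewards %idle and penalises distance. This is an illustrative heuristic, not the trained network:

```python
# Server data from Table 1.
idle = [42, 34, 71, 45, 41, 7]                   # %idle of each candidate server
dist = [3605, 4885, 1471, 5272, 4238, 3502]      # distance (km) to the source server

# Normalise each factor to [0, 1], reward idleness, penalise distance.
scores = [i / max(idle) - d / max(dist) for i, d in zip(idle, dist)]
best = scores.index(max(scores)) + 1             # 1-based server number
print(best)   # → 3, the same server the NN selects
```

Server 3 dominates on both factors (highest %idle, shortest distance), so any reasonable weighting of the two inputs agrees with the NN's prediction here.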

5. Discussion of the results

The crucial results of the proposed technique of the server supporting the cloud are compared with other techniques in (Table 2).

Table 2: Comparing the proposed technique of the server supporting the cloud with other techniques

Feature: Allocation of the idle server
• Proposed technique: queue theory for determining the %idle of each server, with the shortest distance between the idle server and the source server found using a Neural Network (BackPropagation algorithm).
• Ayad Ghany Ismaeel [6]: queue theory for determining the %idle of each server, with the shortest distance between the idle server and the source server found using the Haversine equation.
• Xiaoming Nan, Yifeng He, and Ling Guan [7]: queue theory only, in three different scenarios: single-service, multi-service, and priority-service.
• Yuan, X., Geyong Min, and others [8]: queue theory and a dynamic resource allocation algorithm that periodically allocates system resources according to Internet conditions and cloud operation states.

Feature: Employing GPS via Google Maps
• Proposed technique: GPS is used; it is a fast method to compute the distance between the source server and the idle server.
• Ayad Ghany Ismaeel [6]: GPS is used; it is a fast method to compute the distance between the source server and the idle server.
• Xiaoming Nan, Yifeng He, and Ling Guan [7]: not employed.
• Yuan, X., Geyong Min, and others [8]: not employed.

Feature: Results (supporting the source server by selecting the supportive server)
• Proposed technique: more effective, because besides the idle ratio, the distance (nearest) between the source server and the idle server is considered (i.e. effective cost).
• Ayad Ghany Ismaeel [6]: also based on the idle ratio and the (nearest) distance between the source server and the idle server.
• Xiaoming Nan, Yifeng He, and Ling Guan [7]: based on the idle server only.
• Yuan, X., Geyong Min, and others [8]: based on the idle server, Internet conditions, and cloud operation states.

Feature: Using a Neural Network
• Proposed technique: a feedforward BackPropagation NN is employed to select the server with the higher %idle and nearest distance.
• Ayad Ghany Ismaeel [6]: not used.
• Xiaoming Nan, Yifeng He, and Ling Guan [7]: not used.
• Yuan, X., Geyong Min, and others [8]: not used.

Feature: Reacting to support the cloud
• Proposed technique: higher relative to the other techniques, because a Neural Network is used to determine the nearest idle server.
• Ayad Ghany Ismaeel [6]: higher relative to the other techniques, because the Haversine equation is used to determine the nearest idle server.
• Xiaoming Nan, Yifeng He, and Ling Guan [7]: does not take into consideration the nearest distance between the idle server and the source server.
• Yuan, X., Geyong Min, and others [8]: does not take into consideration the nearest distance between the idle server and the source server.

6. Conclusion
• The proposed model of the server supported by the cloud was developed successfully using a feedforward BackPropagation (FFBP) neural network, obtaining an effective selection with minimum cost.
• It is an elastic technique: when there is no idle server at the site where the source server is located, the source server can easily get support from outside its data centre (i.e. another site) with an effective selection (the nearest server).
• The proposed model is the first prediction method using a neural network to predict the effective idle-server selection (highest %idle and nearest distance) to support the source server that needs support within the cloud.
• It is a cost-effective model because it takes into consideration the distance (nearest/shortest path) between the source server and the supportive idle server, not only the idle ratio, i.e. an economic technique.
• The proposed model suggests a general resource allocation method, i.e. it can be implemented for any resource allocation once λ (packets/sec) and the service rate μ are determined.

References
[1] M. Kumaresan and G. K. D. P. Venkatesan, "Enabling high performance computing in cloud computing environments," 2017 IEEE International Conference on Electrical, Instrumentation and Communication Engineering (ICEICE), Karur, Tamilnadu, India, pp. 1-6, 2017. doi: 10.1109/ICEICE.2017.8191887.
[2] P. Priyadarshinee, R. D. Raut, M. K. Jha, and B. B. Gardas, "Understanding and predicting the determinants of cloud computing adoption: A two staged hybrid SEM - Neural networks approach," Computers in Human Behavior, vol. 76, pp. 341-362, 2017.
[3] Kripa Sekaran and K. R. Kosala Devi, "SIQ algorithm for efficient load balancing in cloud," 2017 International Conference on Algorithms, Methodology, Models and Applications in Emerging Technologies (ICAMMAET), Chennai, India, pp. 1-5, 2017. doi: 10.1109/ICAMMAET.2017.8186673.
[4] S. Vakilinia, M. M. Ali, and D. Qiu, "Modeling of the resource allocation in cloud computing centers," Computer Networks, vol. 91, pp. 453-470, 2015.
[5] S. Mazumdar and M. Pranzo, "Power efficient server consolidation for Cloud data center," Future Generation Computer Systems, vol. 70, pp. 4-16, 2017.
[6] Ayad Ghany Ismaeel, "Effective Technique for Allocating Servers to Support Cloud using GPS and GIS," Science and Information Conference, pp. 934-939, 2013.
[7] Xiaoming Nan, Yifeng He, and Ling Guan, "Queueing model based resource optimization for multimedia cloud," Journal of Visual Communication and Image Representation, vol. 25, no. 5, pp. 928-942, 2014.
[8] X. Yuan, G. Min, L. T. Yang, Y. Ding, and Q. Fang, "A game theory-based dynamic resource allocation strategy in Geo-distributed Datacenter Clouds," Future Generation Computer Systems, vol. 76, pp. 63-72, 2017.
