IJCAC Editorial Board
Editors-in-Chief:
Shadi Aljawarneh, Isra U., Jordan
Hong Cai, IBM China Software Development Lab, China
Associate Editors:
Rajkumar Buyya, U. of Melbourne, Australia
Anna Goy, Università di Torino, Italy
Chun Ming Hu, BeiHang U., China
Maik A. Lindner, SAP Research, UK
Yuzhong Sun, Chinese Academy of Sciences, China
Rama Prasad V. Vaddella, Sree Vidyanikethan Engineering College, India
Qian Xiang Wang, Peking U., China
Song Wu, HuaZhong U. of Science and Technology, China
IGI Editorial:
Heather A. Probst, Director of Journal Publications
Jamie M. Wilson, Assistant Director of Journal Publications
Chris Hrobak, Journal Production Editor
Gregory Snader, Production and Graphics Assistant
Brittany Metzel, Production Assistant
International Editorial Review Board:
Faisal Alkhateeb, Yarmouk U., Jordan
Shadi Al-Masadeh, Applied Science U., Jordan
Juan Caceres, Telefónica Investigación y Desarrollo, Spain
Kamal Dahbur, NYIT, Jordan
Fabien Gandon, INRIA, France
Seny Kamara, Microsoft, USA
Saurabh Mukherjee, Banasthali U., India
Giovanna Petrone, Università degli Studi di Torino, Italy
Deshmukh Sudarshan, Indian Institute of Technology Madras, India
IGI Publishing
www.igi-global.com
Call for Articles
International Journal of Cloud Applications and Computing
An official publication of the Information Resources Management Association
The Editors-in-Chief of the International Journal of Cloud Applications and
Computing (IJCAC) would like to invite you to consider submitting a manuscript
for inclusion in this scholarly journal.
MISSION:
The main mission of the International Journal of Cloud Applications and Computing
(IJCAC) is to be the premier and authoritative source for the most innovative scholarly
and professional research and information pertaining to aspects of Cloud Applications and
Computing. IJCAC presents advancements in the state-of-the-art, standards, and practices
of Cloud Computing, in an effort to identify emerging trends that will ultimately define
the future of “the Cloud.” Topics such as Cloud Infrastructure Services, Cloud Platform
Services, Cloud Application Services (SaaS), Cloud Business Services, and Cloud Human
Services are discussed through original papers, review papers, technical reports, case
studies, and conference reports for reference use by academics and practitioners alike.
COVERAGE:
Topics to be discussed in this journal include (but are not limited to) the following:
• Application
• Architecture
• Business
• Cloud engineering
• Green technologies
• Management and optimization
• Technologies and services

ISSN 2156-1834
eISSN 2156-1826
Published quarterly
All inquiries should be emailed to:
Shadi Aljawarneh and Hong Cai
Editors-in-Chief
shadi.jawarneh@ipu.edu.jo
caihong@ieee.org
Ideas for special theme issues may be submitted to the Editors-in-Chief.
Please recommend this publication to your librarian. For a convenient
easy-to-use library recommendation form, please visit: http://www.igi-global.com/ijcac and click on the "Library Recommendation Form" link
along the right margin.
International Journal of Cloud Applications and Computing
April-June 2011, Vol. 1, No. 2
Table of Contents
Research Articles
1 Cloud Computing in Higher Education: Opportunities and Issues
P. Sasikala, Makhanlal Chaturvedi National University of Journalism and
Communication, India
14 Using Free Software for Elastic Web Hosting on a Private Cloud
Roland Kübert, University of Stuttgart, Germany
Gregory Katsaros, University of Stuttgart, Germany
29 Applying Security Policies in Small Business Utilizing Cloud
Computing Technologies
Louay Karadsheh, ECPI University, USA
Samer Alhawari, Applied Science Private University, Jordan
41 The Financial Clouds Review
Victor Chang, University of Southampton and University of Greenwich, UK
Chung-Sheng Li, IBM Thomas J. Watson Research Center, USA
David De Roure, University of Oxford, UK
Gary Wills, University of Southampton, UK
Robert John Walters, University of Southampton, UK
Clinton Chee, Commonwealth Bank, Australia
64 Cloud Security Engineering: Avoiding Security Threats the Right Way
Shadi Aljawarneh, Isra University, Jordan
2011
International Journal of Applied Industrial Engineering
International Journal of Art, Culture and Design Technologies
International Journal of Aviation Technology, Engineering and Management
International Journal of Biomaterials Research and Engineering
International Journal of Chemoinformatics and Chemical Engineering
International Journal of Cloud Applications and Computing
International Journal of Computer Vision and Image Processing
International Journal of Computer-Assisted Language Learning and Teaching
International Journal of Cyber Behavior, Psychology and Learning
International Journal of Cyber Ethics in Education
International Journal of Cyber Warfare and Terrorism
International Journal of Fuzzy System Applications
International Journal of Game-Based Learning
International Journal of Information Retrieval Research
International Journal of Intelligent Mechatronics and Robotics
International Journal of Interactive Communication Systems and Technologies
International Journal of Knowledge-Based Organizations
International Journal of Manufacturing, Materials, and Mechanical Engineering
International Journal of Measurement Technologies and Instrumentation Engineering
International Journal of Online Marketing
International Journal of Online Pedagogy and Course Design
International Journal of People-Oriented Programming
International Journal of Privacy and Health Information Management
International Journal of Public and Private Healthcare Management and Economics
International Journal of Quality Assurance in Engineering and Technology Education
International Journal of Signs and Semiotic Systems
International Journal of Social and Organizational Dynamics in IT
International Journal of Space Technology Management and Innovation
International Journal of Technology and Educational Marketing
International Journal of User-Driven Healthcare
International Journal of Wireless Networks and Broadband Technologies
For subscription information, please visit:
www.igi-global.com/journals
International Journal of Cloud Applications and Computing, 1(2), 1-13, April-June 2011 1
Cloud Computing in Higher Education: Opportunities and Issues

P. Sasikala, Makhanlal Chaturvedi National University of Journalism and Communication, India
Abstract
Cloud Computing promises novel and valuable capabilities for computer users and is being explored in all information technology dependent fields. However, the literature suffers from hype and divergent definitions and viewpoints. Cloud-powered higher education can gain significant flexibility and agility. Higher education policy makers must assume activist roles in the shift towards cloud computing. Classroom experiences show it is a better tool for teaching and collaboration. As it is an emerging service technology, there is a need for standardization of services and customized implementation. Its evolution can change the facets of rural education. It is important as a possible means of driving down the capital and total costs of IT. This paper examines and discusses the concept of Cloud Computing from the perspectives of diverse technologists, cloud standards, services available today, the status of the cloud particularly in higher education, and future implications.
Keywords:
Cloud Computing, Higher Education, Models, Opportunities, Standards
Introduction
DOI: 10.4018/ijcac.2011040101

The birth of the web and e-commerce has led to the networking of every human move, and thus personal lives started moving online. Today the Internet has become a platform to mobilize the entire human society. Enormous data must be processed every day, which requires a great deal of hardware and software at the individual level. This leads to high cost and an increase in pollution. To reduce cost and inculcate the green environment concept, attention is required to pool the data being accessed and to process it centrally. Hence, reshaping the data center and evolving new paradigms to perform large-scale distributed computing is the need of the hour (Magoules, Pan, Tan, & Kumar, 2009). An infrastructure for storage and computing on massive data, where you pay for what you use, has advanced into a realistic solution: centralize the data and carry out computation on supercomputers with unprecedented storage and computing capability. Gartner, Inc., defines the solution as Cloud Computing, a style of computing where massively scalable IT-enabled capabilities are delivered "as a service" to external customers using Internet technologies (Gartner, 2010).
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Cloud computing is an important development, on par with the shift from mainframe to client-server based computing.
McKinsey suggests that using clouds for
computing tasks promises a revolution in IT
similar to the internet and World Wide Web.
Burton Group concludes that IT is finally
catching up with the Internet by extending the
enterprise outside of the traditional data center
walls. Writers like Nicholas Carr argue that a so-called big switch is ahead, wherein a great many infrastructure, application, and support tasks now operated by enterprises will, in the future, be handled by very-large-scale, highly standardized counterpart activities delivered over the Internet. Cloud computing is also
potentially a much more environmentally
sustainable model for computing. The ability
to locate cloud resources anywhere frees up
providers to move operations closer to sources
of cheap and renewable energy.
As higher education faces budget restrictions and sustainability challenges, one
approach to relieve these pressures is cloud
computing. With cloud computing, the operation
of services moves “above the campus,” and an
institution saves the upfront costs of building
technology systems and instead pays only for
the services that are used. As capacity needs rise
and fall, and as new applications and services
become available, institutions can meet the
needs of their constituents quickly and cost-effectively. In some cases, a large university might become a provider of cloud services. More
often, individual campuses will obtain services
from the cloud. The trend toward greater use of
mobile devices also supports cloud computing
because it provides access to applications, storage, and other resources to users from nearly
any device. While cost savings and flexibility
are benefits to the use of cloud computing,
the downside of such service adoption could
include possible risks to privacy and security.
But ultimately cloud computing could provide
a means to stretch limited resources and make
them more useful, to more people, more of
the time. The growing breadth of institutional
sourcing options requires IT leaders to evaluate
more options and providers. As technologies
like virtualization and cloud computing assume important places within the IT landscape,
higher education leaders will need to consider
which institutional services they wish to leave
to consumer choice, which ones they wish to
source and administer “somewhere else,” and
which services they should operate centrally
or locally on campus. One important option
is the development of collaborative service
offerings among colleges and universities.
Yet, substantial challenges raise at least some
near-term concerns including risk, security, and
governance issues; uncertainty about return on
investment and service provider certification;
and questions regarding which business and
academic activities are best suited for the cloud.
The common perception of infrastructure
that must be bought, housed, and managed
has changed drastically. Institutions are now
seriously considering alternatives that treat
the infrastructure as a service rather than an
asset, and are not concerned about where the infrastructure is located and who manages it.
A key differentiating element of a successful
information technology is its ability to become
a true, valuable, and economical contributor
to cyber infrastructure (Foster & Kesselman,
2004). Cloud computing embraces cyber infrastructure, and builds upon decades of research
in virtualization, distributed computing, grid
computing, utility computing, and, more recently, networking, web and software services
(Cloud Portal, 2010). Cloud computing is a next
natural step of integration of current diverse
technologies and applications. The literature
asserts that cloud computing in higher education is different and it is important.
Information technologists are skeptical about hype. Most people in this segment have heard of, tried, or used service bureaus, application hosts, grids, and other sourcing techniques in higher education. But what is different about the cloud? The first key difference is its technical aspects. The maturity of standards throughout the stack, the widespread availability of high-performance network capacity, and virtualization technologies are combining to
enrich the sourcing options in higher education (Geelan, 2009). As service providers and users are quite different, the generation raised on broadband connections, Google search, and Facebook communities is likely to embrace the idea of cloud-based services in higher education. Such users, who are raising the sales of netbooks, are likely to move towards lower-cost lightweight computing, web-delivered services, and open-source operating systems and applications. According to Gartner, "The consumerization of IT is an ongoing process that further defines the reality that users are making consumer-oriented decisions before they make IT department-oriented decisions" (Gartner, 2010). The consumerization of IT, along with the emergence of SaaS and other web-based service options, will force its way into higher education. At the same time, the focus on managing IT costs and return on investment is driving commercial enterprises to move swiftly. The top two trends identified in a higher education software survey were SaaS and web services/SOA (INTEROP, 2008).
Finally, recognizing these technical, generational-consumer, and enterprise economic trends, developer communities and system integrators in higher education are shifting away from established software vendors, and the established vendors are working to "cloud-enable" their products (INTEROP, 2008). McKinsey & Company (2010) suggests that using clouds for computing tasks promises a revolution in IT similar to the web and e-commerce (Uptime Institute, 2009). Burton Group (2010) concludes that IT is finally catching up with the Internet by extending the enterprise outside of the traditional data center walls. According to Nicholas Carr, the "Big Switch" is ahead, wherein a great many infrastructure, application, and support tasks now operated by enterprises will be handled by very-large-scale, highly standardized counterpart activities delivered over the Internet (Carr, 2008). As cloud computing in higher education has become a central focus among researchers, we felt a dire need for a review of the topic (Rewatkar & Lanjewar, 2010). In this paper we attempt a thorough review of the literature on Cloud Computing and frame the roles that higher education might play in this emerging area of activity. The paper also explores what shape a higher education cloud might take and identifies opportunities and models.
Cloud Higher Education

Cloud Computing: The Concept and Definition
In recent times, the most discussed topic and the next anticipated revolutionary application is cloud computing and its utility, mainly in the higher education sector. However, a thorough search of the web portals about cloud computing in general, and in higher education more specifically, leaves us highly excited as well as equally confused (Google, 2010). These are common phenomena observed with things that are new, things that promise to transform, and things with ambiguous names. McKinsey & Company (2010) uncovered 22 distinct definitions of cloud computing from well-known experts. But one of the biggest problems we have in IT is the vagueness and lack of precision in all of our work around these complex topics. A better accepted definition of Cloud Computing is Gartner's (2010), which defines it as a style of computing where scalable and elastic IT capabilities are provided as a service to multiple customers using Internet technologies. This characterizes a model in which providers deliver a variety of IT-enabled capabilities to consumers. Cloud-based services can be exploited in a variety of ways to develop an application or a solution. Using cloud resources, one can rearrange and reduce the cost of IT solutions. Enterprises will act as cloud providers and deliver application, information, or business process services to customers and business partners.
According to NIST, cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction (National Institute of Standards and Technology, 2010). This is a clear definition: it is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, much like the electricity grid existing today. In general, the concept of cloud computing can incorporate various computer technologies, including web infrastructure, Web 2.0, and many other emerging technologies.
The key technological characteristics claimed for cloud computing in higher education are:
On-demand self-service: A consumer can unilaterally provision computing capabilities,
such as server time and network storage, as
needed automatically without requiring human interaction with each service provider.
Broad network access: Capabilities are available
over the network and accessed through
standard mechanisms that promote use by
heterogeneous thin or thick client platforms
(e.g., mobile phones, laptops, and PDAs).
Resource pooling: The provider’s computing
resources are pooled to serve multiple
consumers using a multi-tenant model, with
different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of
location independence in that the customer
generally has no control or knowledge over
the exact location of the provided resources
but may be able to specify location at a higher
level of abstraction (e.g., country, place, or
datacenter). Examples of resources include
storage, processing, memory, network
bandwidth, and virtual machines.
Rapid elasticity: Capabilities can be rapidly and
elastically provisioned, in some cases automatically, to quickly scale out and rapidly
released to quickly scale in. To the consumer,
the capabilities available for provisioning
often appear to be unlimited and can be
purchased in any quantity at any time.
Measured Service: Cloud systems automatically control and optimize resource use by
leveraging a metering capability at some
level of abstraction appropriate to the type
of service (e.g., storage, processing, bandwidth, and active user accounts). Resource
usage can be monitored, controlled, and reported, providing transparency for both the provider and the consumer of the utilized service.
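The rapid elasticity and measured service characteristics above can be sketched in code. The following is a toy illustration only, not any real provider's API; the class, parameter names, and capacity figures are all invented for the example.

```python
# Illustrative sketch (hypothetical names and thresholds): a toy autoscaler
# exhibiting rapid elasticity (capacity tracks demand each interval) and
# measured service (usage is metered per interval for billing).

class ElasticPool:
    def __init__(self, capacity_per_server=100, min_servers=1):
        self.capacity_per_server = capacity_per_server  # requests one server absorbs
        self.min_servers = min_servers
        self.servers = min_servers
        self.metered_server_hours = 0

    def tick(self, demand):
        """Scale out/in for one interval of `demand` requests, then meter usage."""
        needed = max(self.min_servers, -(-demand // self.capacity_per_server))  # ceil division
        self.servers = needed                       # rapid elasticity
        self.metered_server_hours += self.servers   # measured service: pay per use
        return self.servers

pool = ElasticPool()
scaling = [pool.tick(d) for d in [50, 250, 900, 120, 0]]
print(scaling)                    # servers allocated per interval: [1, 3, 9, 2, 1]
print(pool.metered_server_hours)  # total billable server-hours: 16
```

To the consumer, capacity appears unlimited; the meter, not an up-front purchase, determines the bill.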
McKinsey & Company (2010) presented a typology of software-as-a-service (SaaS), depicting it through three kinds of platforms:

• Delivery Platforms: managed hosting, contracting with hosting providers to host or manage an infrastructure (for example, IBM, OpSource), and cloud computing, using an on-demand cloud-based infrastructure to deploy an infrastructure or applications (for example, Amazon Elastic Cloud).
• Development Platforms: cloud computing using an on-demand cloud-based development environment to provide a general-purpose programming language (for example, Bungee Labs, Coghead).
• Application-Led Platforms: SaaS applications, using platforms of popular SaaS applications to develop and deploy applications (for example, Salesforce.com, NetSuite, Cisco-WebEx).
Cloud computing implies a service-oriented architecture, reduced information technology overhead for the end user, greater flexibility, reduced total cost of ownership, on-demand services, and many other things. As per Wikipedia (2010), cloud computing describes a new supplement, consumption, and delivery model for IT services based on the Internet, typically involving the provision of dynamically scalable and often virtualized resources as a service over the Internet. On-demand information technology services and products based on virtualized resources have been around for some time now (Averitt et al., 2007), but the term became popular in October 2007 when IBM and Google announced a collaboration in that domain (Bell, 2008; Bulkeley, 2007). This was followed by IBM's announcement of the "Blue Cloud" effort (Kirkpatrick, 2007). Since then, everyone has been talking about "Cloud Computing". Certainly, there are many ways to look at cloud computing, but the benefits need to be qualified in order to be quantified. Recently the iPhone has become very popular since it is, in essence, a cloud-computing-oriented device.
Cloud Computing Services in Higher Education
Cloud Computing Services in higher education vary depending on the service level exposed via the surrounding management layer. The service may be Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), or Data Storage as a Service (DaaS).
Cloud Software as a Service (SaaS): The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email). The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings. This could lead to the end of traditional, on-premises software. The functional interface supports end-user interaction with the application; management covers the application's functions; and metering and billing are based on the number of users. E.g., application services like Salesforce.com.
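The division of control described for SaaS, where the provider runs one application for everyone and consumers adjust only limited user-specific settings, can be sketched as follows. This is a toy model with invented names, not any real SaaS product's configuration API.

```python
# Illustrative sketch (hypothetical): a multi-tenant SaaS application where the
# only thing a consumer controls is a small allow-listed set of user settings.

class SaaSApp:
    DEFAULTS = {"language": "en", "theme": "light"}  # provider-controlled defaults

    def __init__(self):
        self.user_settings = {}  # per-user overrides: all the consumer may touch

    def configure(self, user, **settings):
        # Silently drop anything outside the allowed settings, mirroring the
        # "limited user-specific application configuration" boundary.
        allowed = {k: v for k, v in settings.items() if k in self.DEFAULTS}
        self.user_settings.setdefault(user, {}).update(allowed)

    def settings_for(self, user):
        return {**self.DEFAULTS, **self.user_settings.get(user, {})}

app = SaaSApp()
app.configure("alice", theme="dark", admin_access=True)  # unknown knob ignored
print(app.settings_for("alice"))  # {'language': 'en', 'theme': 'dark'}
print(app.settings_for("bob"))    # {'language': 'en', 'theme': 'light'}
```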
Cloud Platform as a Service (PaaS): The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure, including network, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations. This provides an independent platform or middleware as a service on which developers can build and deploy custom applications. Common solutions provided in this tier range from APIs and tools to database and business process management systems to security integration, allowing developers to build applications and run them on the infrastructure that the cloud vendor owns and maintains. Examples: Microsoft Windows Azure platform services, Google App Engine. The functional interface supports application development and deployment; management covers the deployment environment and scale-out of the application; and metering and billing are based on application QoS. E.g., application infrastructure services like Force.com.
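The PaaS split, where the consumer supplies only application code and the platform manages the runtime and scale-out, can be sketched as a toy. Everything here is hypothetical; no real PaaS exposes this interface.

```python
# Illustrative sketch (invented names): the consumer deploys application code;
# the platform controls the workers it runs on and fans requests across them.

class ToyPlatform:
    def __init__(self, workers=3):
        self.workers = workers  # platform-managed; the consumer has no control here
        self.app = None

    def deploy(self, app):
        """Accept consumer-created application code (any callable)."""
        self.app = app

    def handle(self, requests):
        # The platform spreads requests round-robin over its workers and runs
        # the deployed app on each; scale-out is invisible to the consumer.
        return [(i % self.workers, self.app(req)) for i, req in enumerate(requests)]

platform = ToyPlatform()
platform.deploy(lambda name: f"hello, {name}")
print(platform.handle(["ada", "alan", "grace", "edsger"]))
# [(0, 'hello, ada'), (1, 'hello, alan'), (2, 'hello, grace'), (0, 'hello, edsger')]
```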
Cloud Infrastructure as a Service (IaaS): The capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls). This primarily encompasses the hardware and technology for computing power, storage, operating systems, or other infrastructure, delivered as off-premises, on-demand services rather than as dedicated on-site resources. Because customers can pay for exactly the amount of service they use, as for electricity or water, this service is also called utility computing. Examples: Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and the Eucalyptus open-source cloud computing system. The functional interface provides virtual machines for hosting OS-based stacks; management covers the life cycle of guest machines; and metering and billing are based on infrastructure usage. E.g., system infrastructure services like VMware vCloud.
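The "manage the life cycle of guest machines" role described above can be sketched as a minimal in-memory control plane. This is a hypothetical toy throughout; it does not model the API of EC2, Eucalyptus, or any real provider.

```python
# Illustrative sketch (invented names, no real provider API): a toy IaaS
# control plane that provisions guest machines and tracks their life cycle.

import itertools

class ToyIaaS:
    def __init__(self):
        self._ids = itertools.count(1)
        self.instances = {}  # instance id -> life-cycle state

    def provision(self):
        """Provision a new guest machine; it starts in the 'running' state."""
        iid = f"i-{next(self._ids):04d}"
        self.instances[iid] = "running"
        return iid

    def stop(self, iid):
        # Only a running machine can be stopped.
        if self.instances.get(iid) == "running":
            self.instances[iid] = "stopped"

    def terminate(self, iid):
        self.instances[iid] = "terminated"

cloud = ToyIaaS()
a = cloud.provision()
b = cloud.provision()
cloud.stop(a)
cloud.terminate(b)
print(cloud.instances)  # {'i-0001': 'stopped', 'i-0002': 'terminated'}
```

A real IaaS adds images, networking, and firewalls around the same life-cycle core, and meters each state transition for billing.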
Cloud Data Storage as a Service (DaaS): The delivery of virtualized storage on demand. By abstracting data storage behind a set of service interfaces and delivering it on demand, a wide range of actual offerings and implementations are possible. The only type of storage excluded from this definition is that which is delivered not on demand but in fixed capacity increments. Storage as a Service is a business model in which a large company rents space in its storage infrastructure to a smaller company or individual. It is generally seen as a good alternative for a small or mid-sized business that lacks the capital budget and/or technical personnel to implement and maintain its own storage infrastructure. The functional interface consists of the data storage interfaces used by any of the other service types; management covers data requirements and storage usage.
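The idea of abstracting data storage behind a small set of service interfaces can be sketched as follows. The interface and names are hypothetical, loosely inspired by object stores like Amazon S3 rather than copied from any of them.

```python
# Illustrative sketch (hypothetical interface): storage behind put/get/delete
# operations, so the backing implementation (a dict here; disks or a cluster
# in a real offering) can change without affecting callers. Usage is metered
# on actual stored bytes, not fixed capacity increments.

class BucketStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data: bytes):
        self._objects[key] = data

    def get(self, key) -> bytes:
        return self._objects[key]

    def delete(self, key):
        self._objects.pop(key, None)

    def usage_bytes(self) -> int:
        # Metering hook for on-demand billing.
        return sum(len(v) for v in self._objects.values())

store = BucketStore()
store.put("notes/lecture1.txt", b"cloud computing 101")
print(store.get("notes/lecture1.txt"))  # b'cloud computing 101'
print(store.usage_bytes())              # 19
```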
Cloud Computing Models in Higher Education

The Cloud Computing models are categorized based on the targeted group using the cloud service. They can be grouped as:

Private cloud: The cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on premise or off premise (cloud enterprise owned or leased).

Community cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on premise or off premise.

Public cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services. The resources are dynamically provisioned on a fine-grained, self-service basis over the Internet, via web applications/web services, from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis.

Hybrid cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds). A hybrid cloud environment consisting of multiple internal and/or external providers "will be typical for most enterprises".

Perspectives of Cloud Computing in Higher Education

People may have different perspectives from different views. For example, from the view of the end user, the cloud computing service moves the application software and operating system from desktops to the cloud side, which allows users to plug in anytime from anywhere and utilize large-scale storage and computing resources. On the other hand, the cloud computing service provider may focus on how to distribute and schedule the computing resources. Enterprises will act as cloud providers and deliver application, information, or business process services to customers and business partners.

A user of the service doesn't necessarily care about how it is implemented, what technologies are used, or how it's managed; only that there is access to it at the level of reliability necessary to meet the application requirements. In essence this is distributed computing: an application is built using resources from multiple services, potentially from multiple locations. But the difference is that in distributed computing the endpoint to access the services has to be known to the user, whereas the cloud provides the user with available resources. Behind this service interface is usually a grid of computers to provide the resources. The grid is typically hosted by one company and consists of a homogeneous environment of hardware and software, making it easier to support and maintain. Once you start paying for the services and the resources utilized, it becomes utility computing. Cloud computing really is accessing resources and services needed to perform functions with dynamically changing needs. An application or service developer requests access from the cloud rather than a specific endpoint or named resource. The cloud manages multiple infrastructures across multiple organizations and consists of one or more frameworks overlaid on top of the infrastructures, tying them together.

The cloud is a virtualization of resources that maintains and manages itself. There are of course people to keep resources like hardware, operating systems, and networking in proper order. But from the perspective of a user or application developer, only the cloud is referenced.
Standards Required in Higher Education

In this section we explore the readiness of various standards, gaps, and opportunities for improvement in higher education. The standards must cover many areas, such as interoperability, security, portability, governance, risk management, and compliance.

The National Institute of Standards and Technology (NIST) (2010) in the USA has initiated activities to promote standards for cloud computing. To address the challenges and to enable cloud computing, several standards groups and industry consortia are developing specifications and test beds.

Some of the existing standards and test bed groups are:

• Cloud Security Alliance (CSA)
• Distributed Management Task Force (DMTF)
• Storage Networking Industry Association (SNIA)
• Open Grid Forum (OGF)
• Open Cloud Consortium (OCC)
• Organization for the Advancement of Structured Information Standards (OASIS)
• TM Forum
• Internet Engineering Task Force (IETF)
• International Telecommunication Union (ITU)
• European Telecommunications Standards Institute (ETSI)
• Object Management Group (OMG)
A cloud API, in turn, provides
either a functional interface or a management
interface (or both). Cloud management has
multiple aspects that can be standardized for
interoperability. Some possible standards are:
• Federated security (e.g., identity) across clouds
• Metadata and data exchanges among clouds
• Standards for moving applications between cloud platforms
• Standards for describing resource/performance capabilities and requirements
• Standardized outputs for monitoring, auditing, billing, reports, and notification for cloud applications and services
• Common representations (abstract, APIs, protocols) for interfacing to cloud resources
• Cloud-independent representation for policies and governance
• Portable tools for developing, deploying, and managing cloud applications and services
• Orchestration and middleware tools for creating composite applications across clouds
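To make the distinction between a functional and a management interface concrete, the following Python sketch models both as abstract interfaces implemented by a toy in-memory provider. All class and method names here are our own illustration; they do not correspond to any real cloud API or standard.

```python
from abc import ABC, abstractmethod

class FunctionalInterface(ABC):
    """Operations the hosted application itself uses (e.g., storage I/O)."""
    @abstractmethod
    def put_object(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get_object(self, key: str) -> bytes: ...

class ManagementInterface(ABC):
    """Operations an operator uses to provision and release resources."""
    @abstractmethod
    def provision(self, resource_description: dict) -> str: ...
    @abstractmethod
    def release(self, resource_id: str) -> None: ...

class InMemoryCloud(FunctionalInterface, ManagementInterface):
    """Toy provider exposing both interface kinds at once."""
    def __init__(self):
        self._objects, self._resources, self._next_id = {}, {}, 0

    def put_object(self, key, data):
        self._objects[key] = data

    def get_object(self, key):
        return self._objects[key]

    def provision(self, resource_description):
        # A vendor-neutral dict stands in for a standardized
        # resource/performance description.
        self._next_id += 1
        rid = f"res-{self._next_id}"
        self._resources[rid] = resource_description
        return rid

    def release(self, resource_id):
        del self._resources[resource_id]

cloud = InMemoryCloud()
rid = cloud.provision({"cpu_cores": 2, "memory_gb": 4})  # management call
cloud.put_object("report.txt", b"quarterly data")        # functional call
```

Standardizing the management half (provision/release and the resource description format) is what the interoperability efforts listed above target; the functional half varies more with the service offered.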
There is an urgent need to define minimal
standards to enable cloud integration and application and data portability. Specifications that would inhibit innovation should be avoided, and the different cloud models need to be addressed separately.
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
8 International Journal of Cloud Applications and Computing, 1(2), 1-13, April-June 2011
CLOUD COMPUTING IN INDIA
Indian businesses are definitely adopting cloud
computing, but it is still in a budding phase.
Decision makers have to understand the need
for IaaS, PaaS, and SaaS in their organizations
and then adopt public, private, or hybrid
clouds. Cloud vendors take India seriously,
as India has not yet hit saturation levels.
It is understood that TCS, Infosys, and Wipro,
amongst others, are taking steps towards making
cloud-based services available to their customers. With India poised to achieve massive
growth in cloud computing, mature markets in
the region are nurturing early adopters while
developing markets present many greenfield
opportunities for cloud vendors. We need to work on:
• Evaluating the business case for public, private, and hybrid cloud models;
• Developing an enterprise integration and migration strategy towards cloud provisioning;
• Optimising the management of virtualized environments and cloud implementation;
• Tracking developments in cloud security, governance, and standards; and
• Learning lessons from recent SaaS, PaaS, and IaaS implementations.
India is globally known for its strengths in innovation in IT services and associated models, and
cloud computing is an emerging opportunity in
this space. India has always been a playground
and a test bed for piloting strategic IT adoption
techniques. The Indian subcontinent is a unique
and potent geography for platform vendors: no
other geography gives a platform vendor
access to the whole ecosystem. This market has
huge, untapped potential at every level, be it
enterprise or public sector. Companies
such as Microsoft, IBM, Wipro, Infosys, and
TCS are busy assessing the opportunity and
creating the relevant service offerings.
OPPORTUNITIES
By 2030, India's population will be the largest
in the world, estimated at around 1.53 billion.
India's current population is about 1.15 billion,
and about 70% of it resides in rural areas
and villages. India thus has great potential to
become an economic as well as an IT superpower
(India Online, 2010). The Obama administration
recently termed India a great and emerging
global power; its global economic fortunes
and global ambitions also make it a potential power.
But the major hindrance in this direction is the
lack of infrastructure for developing
technical know-how amongst the people
living in rural areas and villages. With
the introduction of the cloud computing
paradigm, these problems can be largely eliminated, because it does not require end users
to have any infrastructure of their own: everything is delivered as a service (whether
Infrastructure as a Service (IaaS), Platform as
a Service (PaaS), or Software as a Service (SaaS))
on a pay-per-use basis (utility computing),
which makes it easier and cheaper
for the people living in rural areas to actively
involve themselves in the IT sector.
PRESENT STATUS: GOVERNMENT AND ENTERPRISES
Cloud computing holds the potential for the
Indian government to offer better services
while adding a green touch to its e-governance-enabled transformation; it promises to transform the functioning of governments. With www.apps.gov, the United States
administration has taken a definitive stride
towards infusing the cloud computing paradigm into its
enterprise architecture. The government (both
central and state) and the public sector
have to understand the benefits of the cloud correctly. By setting up a private cloud,
state governments can gain access to virtually
unlimited, centralized computing. Through this,
they can save costs by limiting the servers and
maintenance in local data centers. Cloud
printing offers qualitative advantages:
reduced worker frustration and productivity
loss from searching for enabled network
printers; increased productivity, especially for
mobile and remote workers; the ability to deliver
a file anywhere and print the latest version
at the last minute; and enabling non-employees to
print to selected corporate printers. Small and medium enterprises (SMEs) may use
public SaaS and public clouds and minimise the
growth of their data centres; large enterprise data
centres may evolve to act as private clouds;
large enterprises may also use hybrid cloud
infrastructure software to leverage both internal and public clouds; and public clouds may
adopt standards in order to run workloads from
competing hybrid cloud infrastructures.
CHALLENGES
To realise the full potential of cloud computing
and make it a mainstream member of the IT portfolio, several
challenges have to be met. Many challenges remain to be tackled, related to privacy
and security and compliance with the associated regulations, vendor lock-in and standards, interoperability, latency, and performance and reliability
concerns, besides supporting R&D and creating
specific test beds in public-private partnership.
Meeting them further enhances scientific and technological
knowledge of all the foundational elements
of cloud computing. A massive and important
transition needed at this hour is for academic
institutes to subscribe to cloud services that
provide student/teacher/parent collaboration. Cloud computing
could add a new dimension to India's ongoing
e-governance program. Certain preparatory
steps could be initiated by the Government of
India to launch cloud computing as a model for
e-governance programs. These are as follows:
• Set up a nodal agency for cloud computing;
• Create pilot solutions and demonstrate their success;
• Develop a legal framework and risk management program; and
• Create a solution portfolio for cloud migration.
State governments
and their departments are at varying levels of
e-governance maturity. As a result, citizens and
businesses receive varying degrees of accessibility
and quality of government services across India.
Cloud computing can ensure the reach
of citizen services in all states, irrespective of
their present e-governance readiness.
CLOUD IN HIGHER EDUCATION
IT is a critical component of modern higher
education. Despite miraculous improvements
in price and performance, total IT costs in
higher education seem destined to remain on
an upward trajectory, in part because of the
voracious demands of researchers for bandwidth and computing power and of students
for sound and video-intensive applications.
Equally to blame for higher education’s IT cost
management challenge may be higher education’s long tradition of building its own systems
and tendency to self-operate almost everything
related to IT. Growing external expectations
require higher education to sharpen its focus
on its core mission and competencies. The
unsustainable economics of higher education’s
traditional approaches to IT, increased expectations and scrutiny, and the growing complexity
of institutional operations and governance call
for a different modus operandi. So too does the
mass consumerization of services, for which
students and faculty are more likely to look
outside the institution to address their IT needs
and preferences. Cloud computing represents
a real opportunity to rethink and re-craft services in higher education. Among the greatest
benefits of scalable and elastic IT is the option
to pay only for what is used. Robust networks
coupled with virtualization technologies make
it less relevant where work happens or where data
is stored. Cloud computing allows the flexibility
for some enterprise activities to move above
campus to providers that are faster, cheaper,
or safer, and for some activities to move off the
institution's responsibility list to the "consumer"
cloud (below campus), while still other activities can remain in-house, including those that
differentiate and provide competitive advantage
to an institution.
The cloud is no longer just a concept. Commercial cloud computing already encompasses
an expanding array of on-demand, pay-as-you-go infrastructure, platform, application, and
software services that are growing in complexity
and capability. The flexibility the cloud offers,
coupled with mounting economic pressures
and the massive unbundling and commoditization taking place in IT and a variety
of other industries, is prompting higher education
leaders to consider new sourcing arrangements.
While some leaders are acclimating to this new
IT environment and testing the marketplace with
ventures ranging from computing cycles and
data storage to student e-mail, disaster recovery,
and virtual computing labs, most remain cautious
observers as they assess its potential impact.
The major hurdle for the development of IT-related education in rural areas is the lack
of institutes with proper infrastructure. To tap
the maximum potential of rural India, it is
very important that these IT institutes be
located in the rural areas themselves, with proper tools
such as suitable applications, infrastructure, and
development platforms. The difficulty lies in the
huge amount of money spent on buying software licenses and on setting up the infrastructure
required for computation, storage, etc.
The evolution of cloud computing can
change the face of rural areas through its
three fundamental concepts: IaaS, PaaS,
and SaaS. Expenses on software licenses, whether
for software development packages or
working platforms, can be reduced through
pay-per-use provisioning. Instead of setting up huge
and expensive infrastructure, such as high-speed
processing computers or huge data storage
devices, rural institutions can use these resources from
cloud providers.
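The economics behind this argument can be illustrated with a toy calculation. All figures below (license fee, hardware cost, hourly rate, usage hours) are hypothetical assumptions chosen purely for illustration, not data from this article.

```python
# Hypothetical comparison: owning software licenses and hardware outright
# versus renting the equivalent capability from a cloud provider per hour.

def upfront_cost(license_fee: float, hardware: float, years: int,
                 annual_maintenance: float) -> float:
    """Total cost of buying and maintaining software and infrastructure."""
    return license_fee + hardware + annual_maintenance * years

def pay_per_use_cost(hourly_rate: float, hours_per_month: float,
                     years: int) -> float:
    """Total cost of consuming the same capability as a cloud service."""
    return hourly_rate * hours_per_month * 12 * years

# Illustrative figures only (assumptions, not sourced data).
owned = upfront_cost(license_fee=2000, hardware=3000, years=3,
                     annual_maintenance=500)          # 6500
rented = pay_per_use_cost(hourly_rate=0.10, hours_per_month=160,
                          years=3)                    # 576.0
```

For light, intermittent use (about 40 hours a week in this sketch), pay-per-use is far cheaper than ownership; the break-even point shifts as utilization grows, which is why the argument applies most strongly to rural institutions starting from zero infrastructure.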
The cloud computing environment will
lead to better-skilled people in rural areas and
villages. These technocrats from the rural areas will involve themselves in the IT sector and
empower technology development; hence,
the maximum potential of rural India can be
realized. The spread of rural technologies will be
facilitated if they are also employment generators. This scenario will raise
the standard of living of rural people
and convert a simple, poor economy into a
modern, high-income economy. Economic
development is the social and technological
progress of a nation. Cloud computing may
give extensive growth to the economy of
rural areas by providing IT opportunities to
the people, leading to efficient business
management. The income from business
will in turn provide better funds for the development
of technical institutes in rural areas,
resulting in more technically skilled people.
For academia, cloud computing lets students,
faculty, staff, administrators, and other campus
users access file storage, e-mail, databases,
and other university applications anywhere,
on demand.
Cloud Classroom: Cloud computing also finds
applications in the classroom teaching-learning process. Tools like Google Docs,
Microsoft Office Live Workspace, and
Zoho Office Suite are already in use.
Online office suites like these typically
include word processor, spreadsheet, and
presentation functionality, which users
can employ to create and edit documents
completely online, with collaboration between geographically separated
users. Cloud computing may just be a
buzzword today, but classroom experience
with Google Docs has shown that it
offers a new and better tool for teaching
and collaboration. At the other end, IBM
and Google are each shelling out between
$20 million and $25 million to start college
programs focused on cloud computing.
The vision goes like this: run multiple
data centers in parallel and allow users to
share resources. Microsoft, Sun Microsystems, Hewlett-Packard, and others all
have a similar vision of computing in the
cloud. IBM and Google will at first offer
400 computers to teach cloud computing
techniques, and the duo plans to expand to
4,000. So far, six universities–the University
of Washington, Carnegie Mellon, MIT,
Stanford, the University of California at Berkeley, and the University of Maryland–are
participating (Young, 2008).
Cloud Library: These new services also aim
at increasing the value of the
subscriptions a library offers to its members.
The cloud eliminates many of the redundancies
inherent in current patterns of library
automation and allows libraries to take advantage of Web-scale efficiencies. Visits to
libraries, focus groups, and over a decade of
engagement in the library automation world
have convinced us that libraries require less
complexity in their management systems.
Libraries spend a great deal of time on
repetitive tasks, such as cataloging bestsellers, while neglecting the most valuable
aspects of their collections: the archives,
the rare items, the unique collections.
Libraries must transfer effort into higher-value activity and embrace the web as the
primary technology infrastructure. Some
are already using the cloud in the form of
Google Docs. Finally, cloud computing
will make the library available anywhere and anytime for the user. The cloud has emerged,
and libraries need to start thinking about
how they may need to adjust services in
order to adapt effectively to how users are
interacting with it.
BENEFITS OF CLOUD IN HIGHER EDUCATION
The prospect of a maturing cloud of on-demand
infrastructure, application, and support services
is important as a possible means of: driving down
the capital and total costs of IT in higher education; facilitating the transparent matching of IT
demand, costs, and funding; scaling IT; fostering
further IT standardization; accelerating time to
market by reducing IT supply bottlenecks; countering or channeling the ad hoc consumerization
of enterprise IT services; increasing access to
scarce IT talent; creating a pathway to a 24 × 7
× 365 environment; enabling the sourcing of cycles and storage powered by renewable energy;
increasing interoperability between disjointed
technologies between and within institutions;
and facilitating inter-institutional collaboration.
In 2009, the National Science Foundation (NSF),
USA, announced $5 million in grants to 14 leading
US universities through its Cluster Exploratory
(CluE) programme to participate in the IBM/
Google Cloud Computing University Initiative.
Indian universities should also be given such
grants to enable cloud computing in higher
education. This may lead to scarce
resources being used as services by institutions. A higher
education cloud might act as a repository for
modular courses that institutions can use or build
on, making it possible to reduce redundancies.
We need to come together in groups to optimize
our strengths, not simply to determine how
to bridge the gap. This is the right time for this
conversation, because of our need to take
advantage of emerging technologies to change
how we do business on campus; that includes looking at a higher education solution
for maximizing the benefits of a cloud computing shift
that is inevitable.
FUTURE OF CLOUD COMPUTING
Cloud computing became a significant technology trend in 2010, and there is widespread consensus amongst industry observers
that it is ready for noticeable deployment in
2011 and is expected to reshape IT processes
and IT marketplaces over the next three years. This
implies that in the near future there will be
a demand for professionals in this field.
As companies depend increasingly on
blogs, online document storage, and other web-based applications, enterprising youngsters
can set up businesses to help people
deploy these applications. Thus, while
bigger players like Amazon, Google,
IBM, Microsoft, and Yahoo will need such
professionals in the field of cloud computing,
the smaller players too will need fresh talent.
While these companies invest heavily to make
cloud computing mainstream, nimble
startups like Nivio rush to take advantage of
ever-cheaper cloud computing infrastructure to
deliver innovative applications. The undeniable
consensus is that cloud computing is going to
be with us for a number of years. A testament
to its strength, visible in the financial industry,
is its ability to work in every sector and
translate problems and obstacles into bridges
towards success.
The future will be filled with services, at both the
management level and the functional level. Users
will be at one end, service providers at the
other, and service managers, the middle-layer dealers, will help glue the two together. Despite
its possible security and privacy risks, cloud
computing has six main benefits of which the public
sector and government IT organizations are
certain to want to take advantage:
• Reduced Cost: Cloud technology is paid for incrementally, saving organizations money.
• Increased Storage: Organizations can store more data than on private computer systems.
• Highly Automated: No longer do IT personnel need to worry about keeping software up to date.
• Flexibility: Cloud computing offers much more flexibility than past computing methods.
• More Mobility: Employees can access information wherever they are, rather than having to remain at their desks.
• Allows IT to Shift Focus: No longer having to worry about constant server updates and other computing issues, government organizations will be free to concentrate on innovation.
A major decision facing any successful implementation of cloud technologies is whether
to use a solution provider's cloud or to bring the
cloud inside and oversee the process internally.
Many companies will re-label
their products as cloud computing, resulting in
a lot of marketing innovation on top of real innovation. A sudden transformational change is
poised to succeed where so many other attempts
to deliver on-demand computing to anyone
with a network connection have failed, so some
skepticism is warranted. Developing the business strategy and technical migration towards
cloud services is the order of the day. We can
alter the environment in which we operate and
shape the higher education cloud, but we
will begin to lose the ability to do so if we don't
start now.
CONCLUSION
Cloud computing overlaps some of the concepts of distributed, grid, and utility computing; however, it does have its own meaning
when used in the right context. The conceptual
overlap is partly due to changes in technology,
usage, and implementations over the years.
Cloud computing in higher education is built on
decades of research in virtualization, distributed computing, utility computing, and, more
recently, networking, web, and software services. It implies a service-oriented architecture,
reduced information technology overhead for
the end user, great flexibility, reduced total cost
of ownership, on-demand services, and many
other things. But cloud computing in higher
education has also become the new buzzword, driven
largely by marketing and service offerings from
big corporate players like Google, IBM, and
Amazon. As information becomes ever more
on-demand and mobile, cloud computing is
likely to grow in higher education. Is cloud
computing limited by the availability of the internet? Will cloud computing actually work for the
'unconnected'? These questions must be answered clearly. The clear concepts and definitions
of cloud computing given by experts have
paved the way for people to explore this giant
transition in higher education. The services
that the cloud will provide, the models
on which it can be deployed, and the different
requirements of users and service providers
will lead to a new era in higher education. Cloud
computing in India, and more particularly in
higher education, needs a shift to the
cloud; its future implications for
higher education are scalable and expansive,
and this cheap, utility-supplied computing will
ultimately change higher education and
society as profoundly as cheap electricity did in
the past. Ultimately, the cloud must help higher
education consolidate and collaborate.
REFERENCES
Averitt, S., Bugaev, M., Peeler, A., Schaffer, H.,
Sills, E., Stein, S., et al. (2007, May 7-8). The virtual
computing laboratory. In Proceedings of the International Conference on Virtual Computing Initiative,
Triangle Park, NC.
Bell, M. (2008). Introduction to service-oriented
modeling, service-oriented modeling: Service
analysis, design, and architecture. New York, NY:
John Wiley & Sons.
Bulkeley, W. M. (2007). IBM, Google, Universities
combine ‘Cloud’ forces. Retrieved from http://online.
wsj.com/article/SB119180611310551864.html
Burton Group. (2010). Comprehensive research
and advisory solution. Retrieved from http://www.
burtongroup.com/research/
Carr, N. (2008). The big switch: Rewiring the world,
from Edison to Google. New York, NY: Norton &
Company.
Cloud Portal. (2010). Cloud computing portal.
Retrieved from http://cloudcomputing.qrimp.com/
portal.aspx
Foster, I., & Kesselman, C. (2004). The grid 2:
Blueprint for a new computing infrastructure (2nd
ed.). San Francisco, CA: Morgan Kauffman.
Gartner. (2010). Gartner research. Retrieved from
http://blogs.gartner.com/
Gartner. (2010). Gartner’s expertise in a variety of
ways. Retrieved from http://www.gartner.com/
Geelan, J. (2009, May 18-19). Deploying virtualization in the enterprise. Paper presented at the
Virtualization Conference, Prague, Czech Republic.
Google. (2010). Indian version of this popular search
engine. Retrieved from http://www.google.co.in/
India Online. (2010). Population of India. Retrieved
from http://www.indiaonlinepages.com/population/
INTEROP. (2008). Enterprise software customer
survey. Retrieved from http://www.interop.com/
Kirkpatrick, M. (2007). IBM unveils Blue Cloud what data would you like to crunch? Retrieved from
http://www.readwriteweb.com/archives/ibm_unveils_blue_cloud_what_da.php
Magoules, F., Pan, J., Tan, K.-A., & Kumar, A.
(2009). Introduction to computing, numerical analysis and scientific computation series. Boca Raton,
FL: CRC Press.
Mckinsey & Company. (2010). Highlights and
features. Retrieved from http://www.mckinsey.com/
McKinsey & Company. (2010). Management
consulting & advising. Retrieved from http://www.
mckinsey.com/
National Institute of Standards and Technology.
(2010). NIST homepage. Retrieved from http://
www.nist.gov/
Rewatkar, L. R., & Lanjewar, U. A. (2010). Data
management in market-oriented cloud computing.
Advances in Computer Science and Technology,
3(2), 217–222.
Uptime Institute. (2009). Clearing the air on cloud
computing. Retrieved from http://uptimeinstitute.org
Wikipedia. (2010). Welcome to Wikipedia. Retrieved
from http://en.wikipedia.org/wiki/
Young, J. (2008). 3 ways that web-based computing will change colleges and challenge them. The
Chronicle of Higher Education, 55(10), 16.
14 International Journal of Cloud Applications and Computing, 1(2), 14-28, April-June 2011
Using Free Software for Elastic
Web Hosting on a Private Cloud
Roland Kübert, University of Stuttgart, Germany
Gregory Katsaros, University of Stuttgart, Germany
ABSTRACT
Even though public cloud providers already exist and offer computing and storage services, cloud computing is still a buzzword for scientists in various fields such as engineering, finance, and the social sciences. These
technologies are now mature enough to leave the experimental laboratory and be used in real-life
scenarios. To this end, the authors consider web hosting the prime example use case of cloud
computing. This paper presents the architectural approach as well as the technical solution for providing
elastic web hosting on a private cloud infrastructure using only free software. Through several available
software applications and tools, anyone can build their own private cloud on top of a local infrastructure and
benefit from the dynamism and scalability provided by the cloud approach.
Keywords: Cloud Computing, Distributed Computing, Elasticity, Free Software, Load Balancing, Web Hosting
INTRODUCTION
In the past years, cloud computing has evolved
to be one of the major trends in the computing
industry. Cloud computing is, basically, the
provisioning of IT resources on demand to
customers over some kind of network, most
probably the internet. Cloud computing is in
this sense an evolution of utility computing,
with the difference that in cloud computing one
does not demand infrastructure but higher-level
services (compute capacity, storage, software)
and does not need the knowledge to work with
infrastructure (Danielson, 2008). One of the
most prominent cloud offerings nowadays
is Amazon's Elastic Compute Cloud (EC2)
(Amazon, n. d.), which even predates the term
“cloud computing” (Figure 1) and can be seen
as the foundation of this type of computing.
DOI: 10.4018/ijcac.2011040102
Migrating to public clouds is often said to
lead to lower capital expenditure, as there is no
up-front cost for buying infrastructure, providing
floor space for it, and so on. Instead, the costs incurred
by cloud computing relate to operational expenditure, for example when using a cloud provider with a pay-as-you-go scheme. While it is
questionable whether the total costs of buying
and running a server versus buying capacity on
demand from a cloud provider can be compared
directly, the whole burden of
operating a data center and controlling and managing the infrastructure is removed if a cloud
provider is used (Golden, 2009).
Figure 1. Search volume of the terms “cloud computing” (blue) and “Amazon ec2” (red) according to Google Trends (Google, 2011a)
Exactly this lack of control over the infrastructure is what puts cloud users at risk.
Richard Stallman, president of the Free Software Foundation, coined the term "careless
computing", stating that users should keep
control over their own data rather than hand
it over to providers that move it to unknown
servers at unknown locations (Arthur, 2010).
Additionally, with public cloud providers, the
problem of vendor lock-in always exists. Vendor
lock-in “is the situation in which customers are
dependent on a single manufacturer or supplier
for some product (i.e., a good or service), or
products, and cannot move to another vendor
without substantial costs and/or inconvenience”
(The Linux Information Project, 2006). The
costs of lock-in to a customer can be severe
and include, amongst others, “a substantial
inconvenience and expense of converting data
to other formats” and “a lack of bargaining
ability to reduce prices and improve service”
(The Linux Information Project, 2006).
Besides the lack of control over the placement of one's own data and vendor lock-in, other
problems exist with public clouds, as with any
business offering: the provider might decide that it
is no longer interested in providing service to
a customer, thereby disrupting the client's business, at least temporarily. This has happened,
for example, to the non-profit organization
WikiLeaks (Gross, 2010; MacAskill, 2010).
The question is then: is there a viable alternative to public cloud providers which retains
some of the flexibility one gains by moving
to the cloud? Luckily, the answer to this question
is positive: yes, there is an alternative,
and it is building a private cloud. A private cloud
is essentially the same thing as a public cloud,
only hosted on a private network on one's own
physical infrastructure. Obviously, the host of
a private cloud has to take care of their own resources, which means that there are no up-front
CAPEX advantages. This, however, matters less
if a physical infrastructure already exists
and is – either totally or partially – changed into
a virtualized infrastructure. OPEX will probably be reduced by staying on one's private
cloud rather than going to a public cloud, with the
added advantage of having total control over
the physical infrastructure, thereby avoiding
the problems mentioned above. Private clouds
seem to attract more and more interest, as can
be seen in Figure 2.
But how can one turn private, non-virtualized physical resources on one's site into a
private cloud? This transition is not too difficult,
as we will demonstrate in this work. Building
up this knowledge may be an up-front investment but can be cheaper in the long run, all the
while retaining the advantages of a private cloud.
We will describe how one can turn an
existing in-house cluster of physical hosts into
a virtualized infrastructure, and will demonstrate
the usefulness of this by showing how a scalable
web hosting solution can be built on top of this
private cloud. We will describe an exemplary
existing infrastructure and the software components
we employ, and demonstrate that we arrive at
a solution that rivals public cloud offerings,
has further advantages, and uses free software
exclusively.
Figure 2. Search volume of the terms "public cloud" (red) and "private cloud" (blue) according to Google Trends (Google, 2011b)
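The elastic behavior such a solution needs can be reduced to a simple threshold rule: add a virtual host when average load is high, remove one when capacity sits idle. The following Python sketch is our own hypothetical illustration; the function name and the thresholds are assumptions, not the implementation described in this paper.

```python
# Hypothetical scale-up/scale-down rule for an elastic web hosting setup.
# Thresholds and limits are illustrative assumptions.

def scaling_decision(avg_cpu_load: float, active_hosts: int,
                     min_hosts: int = 1, max_hosts: int = 8,
                     upper: float = 0.75, lower: float = 0.25) -> int:
    """Return the new number of virtual hosts to run."""
    if avg_cpu_load > upper and active_hosts < max_hosts:
        return active_hosts + 1   # scale up: load spike detected
    if avg_cpu_load < lower and active_hosts > min_hosts:
        return active_hosts - 1   # scale down: excess capacity detected
    return active_hosts           # load within bounds: no change

print(scaling_decision(0.90, 2))  # 3 (scale up)
print(scaling_decision(0.10, 3))  # 2 (scale down)
print(scaling_decision(0.50, 2))  # 2 (steady state)
```

A monitoring loop would evaluate such a rule periodically and ask the virtualization layer to start or stop virtual hosts accordingly; the upper and lower bounds prevent oscillation near a single threshold.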
The methodology that we have followed is
to first explore the term "elastic web hosting" and its significance and characteristics.
The identified capabilities are then
transformed into architectural requirements, and
the high-level design of our approach, based
on the private cloud paradigm, is presented.
Further details of the realization of the
solution are provided through specifics
of the implementation and the software that
has been used, before the approach is validated
on a private cloud test-bed environment.
Finally, the last section summarizes our findings
and concludes our work.
Elastic Web Hosting
The term “elasticity” may have been used implicitly or explicitly before, but its mainstream
usage in computing stems from Amazon’s
product “Elastic Compute Cloud” (EC2). In
general, elasticity means "the ability to adapt" (Oxford Dictionaries, 2010b), that is, the fact of being "able to adjust to new conditions" (Oxford Dictionaries, 2010a). In the sense
of computing, this means that resources are
provisioned on demand. This feature is made
easy through the use of virtualization technology: users do not access physical hardware but
virtual hosts that seem like physical hosts and
can run anywhere on sufficient hardware. It is
understandable how the term elasticity came to be used to describe these techniques, as it has a connotation of dynamicity in contrast to the staticity of classical resource provisioning: if a user hosts a web site on a server and the capacity is maxed out, a new server needs to be bought, installed and configured. Elastic
solutions, however, provide infrastructure that
is not visible to the user and can scale depending on the current demand (of course, if the
real infrastructure’s capacity is maxed out, the
same problem occurs, but it is assumed that the
real infrastructure capacity is quite high). As
important as the notion of adapting to higher
capacity is the ability to scale down once excess
capacity is detected. Often, load spikes are short
and it would be too expensive to cater for high load all the time. Furthermore, this would in general result in low resource utilization.
Apart from the fact that the provisioning of bare virtual machines is a quite straightforward use case for elasticity, the same goes for web hosting: as web servers are normally stateless,
meaning that they store no data from one request to another, additional web servers can be
provided if an increasing number of requests
are incoming. If virtual machine images are prepared accordingly, for example with an installed web server and a deployed web site, a virtual machine can serve requests as soon as it is up and running and can be decommissioned once the load decreases again. In this paper
we will propose and present a solution based
on several free software tools that will support
us in realizing elastic web hosting on a private
cloud infrastructure.
ARCHITECTURE
In this section we will describe the architecture of the proposed solution using a three-step approach: initially we will briefly discuss private cloud architectures, starting from a generic point of view and finishing with the introduction of the elastic web hosting scenario into such a model; then we will elaborate on the role load balancing can play in the web hosting case; and finally we will augment the mechanism with dynamic VM allocation, which will realize the scalability of the proposed architecture.
Private Cloud: High-Level Architecture
As mentioned above, more and more people are
interested in building their own private cloud
infrastructure. As this solution gains ground,
several open source toolkits and APIs have
been developed that allow the management of
Virtual Machines (VMs) and generally realize the Infrastructure as a Service paradigm.
Eucalyptus (Baun & Kunze, 2009) is a solution developed at the University of California
that offers the ability to deploy and control
VM instances via different hypervisors (Xen
or KVM) over physical resources. Nimbus
(Freeman, LaBissoniere, Marshall, Bresnahan,
& Keahey, 2009) is similar to Eucalyptus and
provides the necessary interfaces to give users
control over VMs. It is, however, running on
top of a Globus Toolkit Java container (Foster,
2005). Another virtual infrastructure manager toolkit that allows one to build private or hybrid clouds is OpenNebula (Sotomayor, Montero, Llorente, & Foster, 2009), mainly supported by the University of Madrid and already used in several research initiatives (BonFIRE, 2010; Reservoir, 2010).
Regardless of the differences, all the aforementioned solutions share a common high-level architecture for implementing a private cloud. An abstract architectural approach of a
private cloud is shown in Figure 3.
The multilayered architecture consists of the physical (local) infrastructure; on top of that lies the hypervisor (Xen, KVM etc.), while above that layer the virtual management framework (OpenNebula, Eucalyptus etc.) is located. Finally, the latter exposes the proper interfaces for the users/administrators to access and control the private cloud. Based on this architectural model we can clearly identify two distinct infrastructures: the physical infrastructure and the virtual infrastructure (Figure 4). With the introduction of virtualization technologies, which have been exploited by cloud offerings, the consumer is no longer interested in the physical resources but in the VMs that have been instantiated. The physical infrastructure becomes transparent to the consumer, who now, through Infrastructure as a Service and on-demand computing, negotiates access to a virtual environment with certain specifications. This virtual infrastructure is dynamic, flexible and customizable according to the application or the service that every user wants to execute.
We shall not elaborate on the advantages and
disadvantages of cloud computing in general
as this is not the main focus of this paper.
Coming back to our scenario and according
to the private cloud architecture presented
above, elastic web hosting can be realized when
a web server container serves as the hosted
application in a VM running atop a cloud infrastructure. It is clear, though, that this case is not innovative or new as such, since similar services have been provided by public clouds (e.g. Amazon EC2) almost from the beginning of this cloud trend. The proposal that we elaborate in this paper is to apply this cloud-enabled web hosting to private cloud infrastructures using free software solutions and APIs, in order to populate the system with certain new functionalities and features that will realize elastic web hosting. In the next two sections we will analyze load balancing and scalability, two major capabilities of the proposed cloud-enabled, elastic web hosting.

Figure 3. General architecture of a private cloud
Load Balancing
Using load balancing techniques over web hosting services is a fairly old topic (Cherkasova,
1999). In our proposed approach we will be hosting web servers on a private cloud infrastructure, benefiting from the flexibility of virtualization and from controlling and managing our own private infrastructure. Furthermore, we have augmented the architecture with an additional control component: instead of having a single web server hosted in a VM, we have multiple web server containers running on several VM instances deployed within the cloud. The distribution of the incoming requests is managed by a front-end component that performs the load balancing. As shown in Figure 5, none of the available
web servers is directly accessible by the clients.
The load balancer will take incoming requests
on port 80 and distribute them equally towards
the web servers. If a web server is deployed or
shut down at run time, the load balancer needs
to be informed of this and needs to balance
requests accordingly.
The introduction of this component has a
very important impact on the reliability of the
provided web hosting service. By having control over the incoming requests through the load balancer, we can optimize our resource utilization and, overall, the quality of the provided service (Quality of Service, QoS). In addition, this structure helps us acquire increased capacity without investing in a single web server with extreme technical specifications. Furthermore, the proposed model realizes a fault-tolerant system: in case a web server is unavailable or shut down, the HTTP requests will continue to be served by the remaining web servers.
Implementing Scalability
The elasticity of the proposed solution is achieved by instantiating a new VM that will host an extra web server when the capacity of the current system reaches a limit. To this end, we extend the front-end control component with monitoring and VM management functionalities. Overall, the high-level operation of the system when a new incoming request arrives is described by the following pseudo code (Table 1).
Through this functionality we can guarantee that at all times the incoming requests will be served while at the same time achieving good resource utilization. This dynamicity allows us to utilize the free resources for other applications and activities rather than dedicating our whole infrastructure in a static way to the operation of the web server. This feature is the core concept of the elastic web hosting that we propose, which also fits well with the key principles of cloud computing.

Figure 4. Virtual and physical infrastructure

Table 1. Generic load balancing algorithm

if (web_server_capacity < limit) {
    forward(request)
} else {
    instantiate_new_VM()
    reset_load_balancer()
    forward(request)
}
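The pseudo code of Table 1 can be sketched as a small, self-contained Python program. The helpers `instantiate_new_vm`, `reset_load_balancer` and `forward` are illustrative placeholders for the OpenNebula, balance and web server interactions described in the Realization section, not actual APIs.

```python
# Illustrative sketch of the generic load balancing algorithm in Table 1.
# The helper functions are placeholders for the cloud middleware and load
# balancer interactions; they only record state for demonstration.

def instantiate_new_vm(state):
    state["servers"] += 1          # placeholder: deploy a VM via the middleware

def reset_load_balancer(state):
    pass                           # placeholder: register the new web server

def forward(state, request):
    state["log"].append(request)   # placeholder: hand the request to a server

def handle_request(state, request, limit):
    """Forward the request, scaling out first if capacity is exhausted."""
    if state["connections"] < limit * state["servers"]:
        forward(state, request)
    else:
        instantiate_new_vm(state)
        reset_load_balancer(state)
        forward(state, request)
    state["connections"] += 1

state = {"servers": 1, "connections": 0, "log": []}
for i in range(20):
    handle_request(state, f"req-{i}", limit=8)

print(state["servers"])  # → 3 (scaled out twice as the 8-connection limit was hit)
```

The per-server limit of 8 mirrors the `host::8` configuration of balance used later in the Realization section.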
REALIZATION
This section describes the realization of the
architecture described in the previous section.
All software used is free software according to
the definition of the Free Software Foundation
(Free Software Foundation, 2010). We describe the actual physical infrastructure that has been used to realize the architecture as well. Existing infrastructure will most probably differ at other sites, but presenting our infrastructure will nonetheless be helpful to others who want to implement a virtualized infrastructure for the same or a different use case.
Physical Infrastructure
The existing physical infrastructure is shown in Figure 6. As can be seen, the actual physical resources, which are used as compute nodes, run in an isolated network that can be accessed only via a central front-end node. Jobs submitted to this front-end node are distributed to the compute nodes via a resource manager. Users can access the front-end either from the internet or another internal network, provided they are allowed to do so.
It is pretty straightforward to augment this
physical infrastructure with a virtual infrastructure layer: the frontend node provides a virtual
infrastructure manager and uses some or all of
the physical hosts to deploy virtual machines
to. The situation presented in Figure 7 shows how the physical infrastructure can be partitioned in such a way that the original functionality – in our case, execution of computational jobs on the physical infrastructure – is still kept while allowing the parallel establishment of a virtualized infrastructure.

Figure 5. Load balancing on web hosting
It might, however, be beneficial if the
physical infrastructure is converted wholly to
a virtualized one. As can be seen from the
figure, this can be done step by step as more
and more of the physical hosts are prepared to
host virtual machines and the corresponding
virtual machines for execution of compute jobs
are prepared.
Operating System

CentOS (short for Community ENTerprise Operating System) is a GNU/Linux distribution based on the Red Hat Enterprise Linux (RHEL) distribution by Red Hat (CentOS Project). While RHEL is aimed at the commercial market, CentOS is free software. It is used on nearly 30% of all Linux web servers, making it the distribution used most often on web servers (Gelbman, 2010). CentOS also offers straightforward virtualization support out of the box: CentOS provides groups of packages that can be selected for installation, and one package group provides everything necessary for virtualization (both "full virtualization", where unmodified guest systems can be run but require special hardware support on the host, and "para-virtualization", where a special component, a hypervisor, is introduced between the hardware layer and guest systems).

The main reason for choosing CentOS was the out-of-the-box support for both fully virtualized and para-virtualized kernels. This, however, obviously only applies to CentOS as the operating system on the physical infrastructure. The fact that virtual machines running CentOS can be built very easily and that the process is very well documented (CentOS Wiki, 2009) led us to the decision to use CentOS as the operating system for the virtual machines as well. Other GNU/Linux or BSD distributions (for example FreeBSD (FreeBSD Wiki, 2010)) would probably be an equally good choice.

Cloud Middleware
For running a private cloud, different choices
of virtual infrastructure managers (VIM) exist:
Eucalyptus, Nimbus and OpenNebula, just to
name three popular ones. OpenStack, a VIM
developed by Rackspace and NASA (OpenStack, 2010), is another promising project that,
however, is still under heavy development, while the other three VIMs can already be taken into production use. There are distinct advantages
and disadvantages to each of these three VIMs
and the choice of which to use is not fixed. For
our case, as we are building a private cloud
around a central head node and have a small
group of machines which are, except for the
web traffic, only accessed by trusted users, we
decided to use OpenNebula (Sotomayor et al.,
2009). The latest version of OpenNebula, version 2.0, released in October 2010, provides support for Xen, KVM and VMware and adds an image repository for the management of VM images; its basic installation on the front-end node is easy and quite small. The configuration of the physical hosts which will run the VMs is minimal as well.

Figure 6. Existing, non-virtualized infrastructure
In summary, the solution we developed might also be achieved with Eucalyptus or Nimbus, but in our case OpenNebula fit well with the requirements we had. It would be
interesting to compare the same solution set up
with the other VIMs as well.
Load Balancer
As with the operating system and the virtual infrastructure manager, there are multiple choices
for load balancing. In our case, we do not have strong requirements on the load balancer: we want to run it on a dedicated VM, have it distribute incoming requests to multiple servers, and be able to query its current status and interact with it at run-time. With balance (Inlab Software
GmbH, 2010), we found a straightforward, easy
to use software for this. Balance is a small (about 2k lines of code) generic TCP proxy with round-robin load balancing and failover mechanisms.
It can easily be controlled at runtime through a
simple command line interface. Its simplicity
means one can build and deploy balance easily
and can get started with simple load balancing
tasks right away.
Setting up balance to distribute incoming requests on port 80 between two servers is as easy as the following command line:
# balance www host1 host2
Balance will run in the background and will balance incoming requests dynamically between hosts host1 and host2.

Figure 7. Parallel existing non-virtualized and virtualized infrastructure

For our demonstration setup, we decided to severely limit, without loss of generality, the number of connections that each server can take, in order to facilitate testing. This is easily achievable with balance, as it can be specified for each host how many connections that host can manage, as shown in the following command line (Walcott, 2005):
# balance www host1::8 host2::8
This command line specifies that both
host1 and host2 can manage 8 simultaneous
connections. The configuration of balance needs
to be in sync with the configuration of the web
servers, as it is of no use to tell balance that
each host can handle 250 requests if the web
server is configured to only allow 150 requests.
One can connect to a running instance and
get the connection status like this:
# balance http -i -c show
GRP Type  #  S    ip-address  port  c  totalc  maxc  sent  rcvd
0   RR    0  ENA  10.0.0.12   80    1  4       8     0     0
0   RR    1  ENA  10.0.0.13   80    0  4       8     0     0
Each line specifies a host; the "c" column specifies the number of current connections (1 to host 10.0.0.12 and none to 10.0.0.13), "totalc" specifies the total number of connections established so far, and "maxc" the maximum number of connections that a host can take.
Adding a host dynamically to the setup as
displayed with the show command above can
be achieved by the following commands:
# balance http -i -c "create 10.0.0.14 80"
# balance -i -c "enable 2"

Finally, a host can be disabled with the following command:

# balance -i -c "disable 2"
As can be seen, balance allows all the tasks necessary at runtime. It can be controlled either interactively – that is, by an administrator – or by a dedicated component using the commands specified above.
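Such a dedicated component can drive balance through this command line interface, for example via subprocess calls from Python. The sketch below only wraps the commands shown above; the argument layout mirrors those examples and is an assumption, not a complete description of the balance CLI.

```python
# Minimal wrapper around the balance command line interface, as a dedicated
# component might use it. Assumes the balance binary is on PATH and that a
# balance instance for the "http" group is already running (see text above).
import subprocess

def balance_argv(command, group="http"):
    """Build the argument vector for one interactive (-c) balance command."""
    return ["balance", group, "-i", "-c", command]

def balance_cmd(command, group="http"):
    """Send one command to the running balance instance, return its output."""
    return subprocess.run(balance_argv(command, group),
                          capture_output=True, text=True, check=True).stdout

def add_backend(ip, port=80, index=None):
    """Create a new backend host and, if its index is given, enable it."""
    balance_cmd(f"create {ip} {port}")
    if index is not None:
        balance_cmd(f"enable {index}")

def disable_backend(index):
    balance_cmd(f"disable {index}")

# Example (requires a running balance instance):
# add_backend("10.0.0.14", 80, index=2)
# print(balance_cmd("show"))
```

Separating `balance_argv` from `balance_cmd` keeps the command construction testable without a running balance process.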
HTTP SERVER
The sole purpose of the HTTP server we employ in this use case is to serve simple static web pages. We therefore have no special requirements on the HTTP server, and in fact users are free to choose from quite a number of popular HTTP servers that are free software: Apache HTTP Server, Lighttpd and nginx, to name just the most prominent ones. We decided to use the most popular of these, Apache HTTP Server (Netcraft, 2010), but any of the others would be fine as well.
Monitoring and Management
While elasticity is one major characteristic of
the proposed architecture, monitoring of the load
and management of the virtual infrastructure are
two very crucial operations of the system. To this
end, we have created a mechanism based on the
Nagios monitoring system (Nagios, 2009) along
with the OpenNebula cloud middleware that
automatically reserves a new virtual machine
and re-distributes the load to all running VMs.
The proposed mechanism consists of three
main components:
• Nagios API: this is an open source monitoring framework through which we can acquire the status of multiple hosts regarding their availability, load, memory and various other metrics. In our case, we have specifically implemented a Nagios service check to monitor the number of connections for the httpd service running on every host, a PING check for availability and a CPU load service check.
• Event Broker Module: this is a custom module written in C using the Nagios Event Broker API (NEB) (Ingraham, 2006). The operation of this component is based on a notification that Nagios generates every time a service check is applied. The module receives these notifications, checks them and forwards the information to the Load Manager.
• Load Manager: this component takes as input the monitoring data coming from the Nagios API regarding VM instances. In case a web server in a VM reaches a certain threshold regarding its capacity and/or other performance-oriented parameters (e.g. CPU load, memory), the Load Manager deploys a new virtual machine and reconfigures the Load Balancer to use an extra web server. To achieve this, it communicates with the VM Management (OpenNebula API) and the Load Balancer (balance interface).
The monitoring data (Figure 8) offered to the Load Manager is derived from the Load Balancer itself (e.g. the number of connections) and, as described before, from the Event Broker Module. While the monitored data are customizable (especially those from Nagios), the logic that the manager implements can vary based on the available and needed information. To this end, one might set multiple rules and respective actions to be applied using different input every time. This feature further extends the dynamicity and elasticity of the architecture.
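The decision logic of the Load Manager can be sketched as follows. The `deploy_vm` and `register_backend` callables stand in for the OpenNebula API and the balance interface respectively; the metric names mirror the Nagios checks described above, but the exact names and thresholds are illustrative assumptions.

```python
# Illustrative decision logic for the Load Manager. deploy_vm and
# register_backend are placeholders for the OpenNebula API and the balance
# interface; the metric dictionaries stand in for Nagios check results.

def scale_decision(metrics, conn_limit=100, cpu_limit=0.8):
    """Return True if any monitored web server VM crosses a threshold."""
    return any(m["connections"] >= conn_limit or m["cpu_load"] >= cpu_limit
               for m in metrics)

def load_manager_step(metrics, deploy_vm, register_backend):
    """One iteration: deploy and register an extra web server if needed."""
    if scale_decision(metrics):
        ip = deploy_vm()           # e.g. instantiate a VM and obtain its IP
        register_backend(ip)       # e.g. create/enable the backend in balance
        return ip
    return None

# Dry run with fake monitoring data: one VM at its connection limit.
deployed = []
ip = load_manager_step(
    [{"connections": 100, "cpu_load": 0.3}],
    deploy_vm=lambda: "10.0.0.14",
    register_backend=deployed.append,
)
print(ip, deployed)  # → 10.0.0.14 ['10.0.0.14']
```

Passing the two actions in as callables keeps the rule logic independent of the concrete middleware, matching the rule/action flexibility described above.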
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
24 International Journal of Cloud Applications and Computing, 1(2), 14-28, April-June 2011
LOAD CREATION SOFTWARE
In order to test our solution, we used the open
source tool curl-loader (Iakobashvili, 2007).
Curl-loader, written in C, can simulate client behavior for a number of protocols, for example HTTP(S)/FTP(S), and uses the client protocol stacks of libcurl (Stenberg, 2010).
Curl-loader is controlled by a configuration file. The configuration file we have used is listed below; it starts with 100 clients initially and adds one further client each second until 250 clients are running in total.
########### GENERAL SECTION ##################
BATCH_NAME=250-clients
CLIENTS_NUM_MAX=250
CLIENTS_NUM_START=100
CLIENTS_RAMPUP_INC=1
INTERFACE=eth0
NETMASK=255.255.0.0
IP_ADDR_MIN=192.168.1.1
IP_ADDR_MAX=192.168.53.255
CYCLES_NUM=1
URLS_NUM=1

########### URLs SECTION #######################
URL=http://balance-host/index.html
URL_SHORT_NAME="balance-index"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION=0
TIMER_AFTER_URL_SLEEP=0
Each client obtains one single file, index.html, using an HTTP GET request. This request is not time-limited.
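With the ramp-up parameters above, the number of active clients at any point follows directly from the configuration: 100 initial clients plus one per second, capped at 250, so the full load is reached after 150 seconds. A minimal sketch:

```python
# Number of simulated clients over time for the curl-loader configuration
# above: CLIENTS_NUM_START=100, CLIENTS_RAMPUP_INC=1 per second,
# CLIENTS_NUM_MAX=250.

def clients_at(t, start=100, inc=1, maximum=250):
    """Active clients t seconds after the start of the test."""
    return min(start + inc * t, maximum)

print(clients_at(0))    # → 100
print(clients_at(60))   # → 160
print(clients_at(150))  # → 250 (full load reached after 150 seconds)
```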
Validation and Testing
The implementation of the load balancing
system we have described above has been
validated using curl-loader, described in the
previous section, to simulate clients accessing
web servers. The simple solution we described is sufficient to adapt dynamically to incoming load, provisioning new virtual machines when more requests arrive than can currently be handled and shutting down VMs when excess capacity is provisioned. Our test-bed consisted of one front-end node and two workers with the following technical specifications (Table 2).
We used OpenNebula to start up 4 VMs on the workers, and each one of those instances hosted an Apache HTTP web server. The testing scenario that we executed was to use curl-loader to apply load on the web servers already started and force the manager to start a new VM when the monitored connections reached the defined limit of 100 connections.
For our tests, we used the SSH transfer
manager, which copies VM images to the
host on which they are to be executed using
the Secure Shell network protocol (Ylonen & Lonvick, 2006). For a VM image of approximately 2GB in size, transfer over a Gigabit Ethernet link using this transfer manager took around 90 seconds in our case.

Figure 8. Monitoring and management architecture

Table 2. Technical specification of the validation testbed

Front-end:
  CPU: 2x Intel(R) Xeon(TM) CPU 3.20GHz, 16KiB L1 cache, 1MiB L2 cache
  RAM: 8GiB system memory, composed of 4 x 2GiB DIMM DDR Synchronous 333 MHz (3.0 ns)
  Disk (SATA): 2 x 250GB HDS722525VLSA80, RAID 1 setup (mirrored)
  Network: 2x 82546GB Gigabit Ethernet Controller

Workers:
  CPU: 2x Intel(R) Xeon(TM) CPU 3.20GHz, 16KiB L1 cache, 1MiB L2 cache
  RAM: 8GiB system memory, composed of 4 x 2GiB DIMM DDR Synchronous 333 MHz (3.0 ns)
  Disk (SATA): 2 x 250GB HDS722525VLSA80, RAID 1 setup (mirrored)
  Network: 82546GB Gigabit Ethernet Controller

We decided to
use the SSH transfer manager as this requires no further setup except configuring SSH on both the front-end node and the "worker" nodes; using other transfer managers – OpenNebula also supports, for example, the Network File System (NFS) (Shepler et al., 2003) – might give different results. The encryption overhead imposed by SSH is probably the limiting factor here.
As the 90 seconds mentioned above are
the time needed for the pure transfer of the
VM image file, the time needed to boot up a
VM needs to be factored in as well. For our
2GB VM image, the boot up time was around
50 seconds, bringing the total time to deploy a
VM to 140 seconds.
This is quite a good time. We have previously mentioned that the threshold at which a new VM needs to be deployed must be lower than the maximum capacity at which the VMs can operate. Now it is immediately obvious why: assuming that a new connection arrives just when all current connections are in use, deploying a new VM only at that point means that for around 140 seconds no further connections can be accepted. Depending on the total number of connections allowed and the inter-arrival time of requests, the threshold can be set accordingly. Of course, this does not guarantee that every single connection will be served, as total contention situations might occur if the load increases suddenly. However, for slow rises in the number of connections this solution can be applied.
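The required headroom can be made concrete with a small calculation; the connection growth rate used here is an assumed input for illustration, not a value measured in our tests.

```python
# Illustrative calculation of the scale-out threshold: a new VM takes
# deploy_time seconds to become available, so the threshold must leave
# enough free connection slots to absorb the expected growth meanwhile.

def scale_out_threshold(max_conns, deploy_time, growth_rate):
    """Highest connection count at which deployment must be triggered."""
    headroom = deploy_time * growth_rate   # connections expected during deployment
    return max(0, max_conns - int(headroom))

# With the 100-connection limit and the measured 140 s deployment time, an
# assumed net growth of 0.2 connections per second requires triggering at 72:
print(scale_out_threshold(100, 140, 0.2))  # → 72
```

If the computed threshold reaches 0, the load rises too quickly for reactive deployment, which motivates the pause-based technique discussed next.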
A good technique that requires slightly more implementation effort in the Load Manager is to deploy VMs at a certain threshold but initially put them into a paused state. The VM will already consume allocated resources like RAM but will not be scheduled to run on the actual CPUs. When the load rises further, the VM can be unpaused and can immediately serve requests. The load manager then needs to take care of pausing and undeploying resources again. Depending on the situation, the algorithm for the load manager can be adapted; it can, for example, undeploy all VMs that have not been used for a certain time span, provided that the remaining capacity is still high enough, but leave a fixed number of VMs in paused mode in order to react quickly to rising load.
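This paused-pool technique can be sketched as a small pool manager; `resume_vm` and `pause_vm` are placeholders for the corresponding virtual infrastructure manager operations.

```python
# Sketch of the paused-VM pool described above: VMs are deployed early and
# paused; under rising load one is resumed immediately instead of waiting
# ~140 s for a fresh deployment. resume_vm/pause_vm are placeholders for
# the actual virtual infrastructure manager calls.

class PausedPool:
    def __init__(self, resume_vm, pause_vm):
        self.paused, self.running = [], []
        self.resume_vm, self.pause_vm = resume_vm, pause_vm

    def add_paused(self, vm):
        """Register a pre-deployed VM that sits in the paused reserve."""
        self.paused.append(vm)

    def scale_up(self):
        """Resume a pre-deployed VM if one is available, else return None."""
        if not self.paused:
            return None
        vm = self.paused.pop()
        self.resume_vm(vm)
        self.running.append(vm)
        return vm

    def scale_down(self, keep_paused=1):
        """Pause a running VM again, keeping at most keep_paused in reserve."""
        if self.running and len(self.paused) < keep_paused:
            vm = self.running.pop()
            self.pause_vm(vm)
            self.paused.append(vm)

pool = PausedPool(resume_vm=lambda vm: None, pause_vm=lambda vm: None)
pool.add_paused("vm-1")
print(pool.scale_up())  # → vm-1, available immediately
```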
CONCLUSION
In this paper we have discussed the advantages
of using a private cloud in contrast to a public
cloud. Furthermore, we have shown how this
private cloud can be implemented using only
free software. In this context, we presented how
we can apply and realize elastic web hosting
on a private cloud infrastructure using only
free software as well. Thereby, we relied on a
software stack that is completely free software,
starting from the operating system, up to the web
servers, monitoring software, load balancer and
virtual infrastructure manager. The solution we
developed has been validated through an elastic
web hosting use case on a small-scale test-bed
but the architecture is directly applicable to
larger-scale infrastructures. This is only one
usage of the virtualized infrastructure, which
can be used for various purposes. It shows,
however, how easy and efficient the setup and
operation of a private cloud is. The virtualized
infrastructure can co-exist with other uses of the physical infrastructure; we have shown how this can easily be achieved by partitioning the existing physical infrastructure into one part that is used physically and one that uses
virtualization. Resources for providing the virtualized infrastructure do not need to be very expensive; in our case, surplus nodes from an old compute cluster have been used. Due to their age, they might lack hardware virtualization support, but para-virtualization is of course possible. As many companies have previously acquired IT resources, there is no point in letting these go to waste by moving all services into a public cloud, and we have shown that they can easily be used as a basis for a virtualized infrastructure.
The advantage of this solution is that the complete infrastructure – physical and virtual – stays in-house but can, due to the ability of OpenNebula to manage hybrid clouds, even be augmented with public cloud resources if one so wishes. Having total control over the infrastructure is surely an advantage for an enterprise, an advantage which would partially be given up when using a hybrid model with a public and a private cloud at the same time. A public cloud can, however, be used if the existing physical infrastructure is no longer big enough, in order to bridge the gap from the time when this is realized until more physical resources can be acquired. Using a private cloud requires in-house knowledge of the employed techniques.
As we have shown, this know-how is not
difficult to acquire but it still has to be done.
However, even the pure usage of an external
cloud cannot be done without acquiring some
know-how, so the question is which solution is
more beneficial in the long-term and we think
that the answer to this question surely is the
in-house solution.
REFERENCES
Amazon. (n. d.). Amazon elastic compute cloud
(Amazon EC2). Retrieved from http://aws.amazon.
com/de/ec2/
Arthur, C. (2010). Google's ChromeOS means losing control of data, warns GNU founder Richard Stallman. Retrieved from http://www.guardian.co.uk/technology/blog/2010/dec/14/chrome-os-richard-stallman-warning
Baun, C., & Kunze, M. (2009). Building a private
cloud with Eucalyptus. In Proceedings of the IEEE
International Conference on E-Science Workshops
(pp. 33-38).
BonFIRE. (2010). Building service testbeds on FIRE. Retrieved from http://www.bonfire-project.eu/
CentOS Project. (2009). The community ENTerprise operating system. Retrieved from http://www.centos.org/
CentOS Wiki. (2009). Creating and installing a CentOS 5 domU instance. Retrieved from http://wiki.centos.org/HowTos
Cherkasova, L. (1999). FLEX: Design and management strategy for scalable web hosting service
(Tech. Rep. No. HPL 1999‐64R1). Palo Alto, CA:
Hewlett-Packard Laboratories.
Danielson, K. (2008). Distinguishing cloud computing from utility computing. Retrieved from http://
www.ebizq.net/blogs/saasweek/2008/03/distinguishing_cloud_computing/
Foster, I. T. (2005). Globus toolkit version 4:
Software for service-oriented systems. Journal of
Computer Science and Technology, 21(4), 513–520.
doi:10.1007/s11390-006-0513-y
Free Software Foundation. (2010). The free software
definition. Retrieved from http://www.gnu.org/
philosophy/free-sw.html
FreeBSD Wiki. (2010). FreeBSD/Xen: FreeBSD/
Xen port. Retrieved from http://wiki.freebsd.org/
FreeBSD/Xen
Freeman, T., LaBissoniere, D., Marshall, P., Bresnahan, J., & Keahey, K. (2009). Nimbus elastic
scaling in the clouds. Retrieved from http://www.
nimbusproject.org/files/epu_poster4.pdf
Gelbman, M. (2010). Highlights of web technology
surveys, July 2010: CentOS is now the most popular
Linux distribution on web servers. Retrieved from
http://w3techs.com/blog/entry/highlights_of_web_
technology_surveys_july_2010
Golden, B. (2009). Capex vs. Opex: Most people
miss the point about cloud economics. Retrieved
from http://www.cio.com/article/484429/Capex_
vs._Opex_Most_People_Miss_the_Point_About_
Cloud_Economics
Google. (2011a). Google trends: Cloud computing,
Amazon ec2. Retrieved from http://www.google.de/
trends?q=cloud+computing%2C+amazon+ec2&cta
b=0&geo=all&date=all
Google. (2011b). Google trends: Private cloud,
public cloud. Retrieved from http://www.google.de/
trends?q=private+cloud%2C+public+cloud
Gross, D. (2010).WikiLeaks cut off from Amazon
servers. Retrieved from http://articles.cnn.com/201012-01/us/wikileaks.amazon_1_julian-assangewikileaks-amazon-officials?_s=PM:US
Iakobashvili, R. M. M. (2007). Welcome to curl-loader. Retrieved from http://curl-loader.sourceforge.net/
Ingraham, R. W. (2006). The Nagios 2.X event
broker module API. Retrieved from http://nagios.
sourceforge.net/download/contrib/documentation/
misc/NEB%202x%20Module%20API.pdf
Inlab Software GmbH. (2010). Balance. Retrieved
from http://www.inlab.de/balance.html
Linux Information Project. (2006).Vendor lock-in
definition. Retrieved from http://www.linfo.org/
vendor_lockin.html
MacAskill, E. (2010). WikiLeaks website pulled by
Amazon after US political pressure. Retrieved from
http://www.guardian.co.uk/media/2010/dec/01/
wikileaks-website-cables-servers-amazon
Nagios. (2009). The industry standard in IT infrastructure monitoring. Retrieved from http://www.
nagios.org/
Netcraft. (2010). November 2010 web server
survey. Retrieved from http://news.netcraft.com/
archives/2010/11/05/november-2010-web-serversurvey.html
OpenStack. (2010). OpenStack open source cloud
computing software. Retrieved from http://www.
openstack.org/
Oxford Dictionaries. (2010a). Adaptable. Retrieved
from http://oxforddictionaries.com/view/entry/m_
en_gb0007570
Oxford Dictionaries. (2010b). Elasticity. Retrieved
from http://oxforddictionaries.com/view/entry/m_
en_gb0980800
Reservoir. (2010).Reservoir fp7. Retrieved from
http://62.149.240.97/
Shepler, S., Callaghan, B., Robinson, D., Thurlow,
R., Beame, C., Eisler, M., et al. (2003). Network
File System (NFS) version 4 protocol (request for
comments no. 3530): IETF. Retrieved from http://
www.ietf.org/rfc/rfc3530.txt
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
28 International Journal of Cloud Applications and Computing, 1(2), 14-28, April-June 2011
Sotomayor, B., Montero, R. S., Llorente, I. M., &
Foster, I. (2009). Virtual infrastructure management
in private and hybrid clouds. IEEE Internet Computing, 13, 14–22. doi:10.1109/MIC.2009.119
Stenberg, D. (2010). libcurl - the multiprotocol file
transfer library. Retrieved from http://curl.haxx.
se/libcurl/
Walcott, C. (2005). Taking a load off: Load balancing with balance. Retrieved from http://www.linux.
com/archive/feature/46735
Ylonen, T., & Lonvick, C. (2006). The Secure Shell
(SSH) protocol architecture (request for comments
no. 4251): IETF. Retrieved from http://www.ietf.
org/rfc/rfc4251.txt
International Journal of Cloud Applications and Computing, 1(2), 29-40, April-June 2011 29
Applying Security Policies in Small Business Utilizing Cloud Computing Technologies
Louay Karadsheh, ECPI University, USA
Samer Alhawari, Applied Science Private University, Jordan
ABSTRACT
Over a decade ago, cloud computing became an important topic for small and large businesses alike. The new
concept promises scalability, security, cost reduction, portability, and availability. Over the past several
years, there have been intensive discussions about the importance of cloud computing technologies.
Therefore, this paper reviews the transition from traditional computing to cloud computing
and its benefits for businesses, along with cloud computing architecture, the classification of cloud
computing services, and deployment models. Furthermore, this paper discusses the security policies and
types of internal risks that a small business might encounter when implementing cloud computing
technologies. It addresses initiatives for employing certain types of security policies in small businesses
that implement cloud computing technologies, encouraging small businesses to migrate to the cloud by
portraying what is needed to secure their infrastructure using traditional security policies, without the
complexity found in large corporations.
Keywords:
Cloud Computing, Cloud Computing Architecture, Deployment Models, Security Policy, Small
Business
1. INTRODUCTION
At this time, organizations expect to gain
increased competitiveness and the chance to
focus their efforts and resources on their
core competencies. Therefore, cloud computing
is defined as “a model for enabling convenient,
on-demand network access to a shared pool
of configurable computing resources (e.g.,
networks, servers, storage, applications, and
services) that can be rapidly provisioned and
released with minimal management effort or
DOI: 10.4018/ijcac.2011040103
service provider interaction” (Mell & Grance,
2010). Furthermore, cloud computing enables
dynamic provisioning of resources based on the
requirements of the user (Yogesh & Navonil,
2010). In a recent study, Srinivasan (2010)
noted that cloud computing is not a new
technology but a new way of delivering
technology: a new way of executing business applications that relies more
on a third party's infrastructure than on local infrastructure. In addition, the
implementation of cloud computing is definitely
accelerating (Cervone, 2010) and much of this is
being motivated by new business requirements
and enabled by information technology (IT).
Most importantly, Katharine and David
(2010) explained that, while widespread usage
of the cloud is not yet common, some governments
are taking steps to guarantee that
their information remains authentic and accessible. From this viewpoint, cloud computing is
increasingly being considered a technology
with the potential to change how the internet and information systems are presently
operated and used (Amir, 2010).
Lately, the governments of several countries
have realized the potential of cloud computing
in offering enhanced services to their citizens.
For example, the UK Government is developing a
secure cloud infrastructure called "G-Cloud"
for public sector bodies. More significantly, the
strategy will also provide some standardization
of capabilities to promote shared
services with accredited cloud service providers (Heath, 2010).
However, a client computer on the Internet
can communicate with many servers at the same
time, some of which may also be exchanging
information among themselves (Hayes, 2008).
Furthermore, the cost of this service can be
determined by several factors, such as hours
of usage, software type, and storage space
utilization (Srinivasan, 2010). This can be
translated into savings on software licenses,
support staff, maintenance, office space,
and utilities.
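As an illustration, the per-factor billing just described amounts to a simple calculation. The sketch below assumes purely hypothetical rates (they are not quotes from any provider); it only shows how hours of usage, software seats, and storage combine into a monthly bill.

```python
# Hypothetical pay-per-use cost model: hours of usage, software
# licensing, and storage each contribute to the monthly bill.
HOURLY_RATE = 0.10     # $ per server-hour (assumed)
SOFTWARE_RATE = 15.00  # $ per licensed seat per month (assumed)
STORAGE_RATE = 0.05    # $ per GB-month (assumed)

def monthly_cloud_cost(server_hours: float, seats: int, storage_gb: float) -> float:
    """Estimate a month's cloud bill from the three usage factors."""
    return (server_hours * HOURLY_RATE
            + seats * SOFTWARE_RATE
            + storage_gb * STORAGE_RATE)

# A small business running one server full time (730 h) with
# 5 software seats and 200 GB of storage:
cost = monthly_cloud_cost(730, 5, 200)
print(f"${cost:.2f}")  # 730*0.10 + 5*15.00 + 200*0.05 = $158.00
```

The point of the sketch is that the bill scales with actual usage, which is exactly where the savings over fixed in-house licenses, staff, and office space come from.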
Recently, Wittow and Buller (2010) stated
that the traditional computing model is based on
using hardware and software resources, which
required on-site computing power and disk
storage space, as well as the technical human
expertise necessary to implement, maintain and
secure those resources. Also, complicated and
expensive upgrade procedures were necessary
to take advantage of new developments and features available for software applications in the
traditional computing model (Wittow & Buller,
2010). In addition, upgraded software and/or
hardware often required upgrading licenses
and increasing backup and recovery capabilities to reduce the downtime that users would
experience should a software or hardware failure
occur. Furthermore, local administrators with
specialized, technical skill-sets were historically responsible for application and hardware
maintenance (Wittow & Buller, 2010). In addition, the “traditional model” often involved
managing a large hardware infrastructure with
dissimilar operating systems and applications
that required individual backups, monitoring
and software updates (Wittow & Buller, 2010).
The traditional computing model required companies (and individuals) to make a significant
financial commitment to set up software and
hardware resources, and these were frequently
difficult to expand when the needs of users
changed (Wittow & Buller, 2010).
Furthermore, for small businesses, cloud
computing can be a source of savings and reliability
as they rely increasingly on these technologies.
In fact, cloud technologies may be very suitable
for small businesses, since clouds offer technical
support and a lower cost of service. Hence, an
important feature of cloud computing is that it allows
rapid increases in capacity or capability without
the need to invest in additional infrastructure,
personnel, or software licensing (Wittow &
Buller, 2010). Furthermore, cloud computing
frees individuals and small businesses from
worries about quick obsolescence and a lack
of flexibility (Greengard, 2010). Therefore, a
small business will not need complicated
infrastructure such as servers and many managed switches.
Another advantage for small business is
the increasing popularity of internet notebooks,
or "netbooks" (Wittow & Buller, 2010). Netbooks are typically low-cost, lightweight laptop
computers with reduced hardware capacity and
processing power that are primarily designed
to provide the user with access to the Internet
(Wittow & Buller, 2010). In this respect, netbooks provide users with vast resources because
the cloud is fully accessible without requiring
users to make a substantial investment in local hardware (Wittow & Buller, 2010). The
virtually unlimited resources available in the
cloud make the local system’s limited hardware
capabilities irrelevant (Wittow & Buller, 2010).
Therefore, the user will not need to install Microsoft Office locally; accessing such an
application through a web browser uses far less local CPU
power and memory than running a full local
installation. In addressing
these issues, Yogesh and Navonil (2010) noted
that the high-speed communication networks
are essential for cloud computing. As a result,
cloud-based applications like Google Reader
and some office productivity tools have an
“offline” option. The purpose of this is to allow
users to continue with their task even when they
have intermittent access to the internet.
Additionally, risks can be categorized as
internal risks and external risks. External risks
arise between the customer and the cloud
provider and can range from resource exhaustion, isolation failure, interception of data
in transit, ineffective deletion of data, DoS,
loss of encryption keys, supply chain failure,
cloud provider acquisition, cloud service termination, and compliance challenges to lock-in and
loss of governance (European Network and
Information Security Agency, 2009). On the
other hand, internal risks apply to the premises of the customer only and can range from
malware, insiders, and social engineering to theft.
Furthermore, many small businesses do not have
adequate or existing security policies to minimize
risks. Also, many small businesses do not have
money to invest in security (Srinivasan, 2010).
Furthermore, security incidents in small businesses are usually handled by employees with
no expertise in security (Srinivasan, 2010).
With the cloud computing model, the design and
implementation of security policies might be
easier, especially for small businesses, than with the
traditional computing model.
It is evident from the prior analysis that
organizations increasingly turn to IT security
providers to address this issue. Organizations are generally concerned with external
security threats (such as viruses and hacking
attempts) (D'Arcy & Hovav, 2007). Moreover,
most research discusses the risks of using cloud
computing with cloud providers
without discussing the internal risks. Businesses should examine all risks associated with
implementing new technologies such as cloud
computing. In fact, a study by Vista Research
in 2002 estimated that 70% of security breaches
involving losses of more than $100,000 were
internal, often perpetrated by disgruntled
employees (Standage, 2002). Additionally,
Cervone (2010) noted that one of the important
advantages of cloud computing is the potential
cost savings that can be gained. Usually cloud
computing has little or no upfront capital costs.
For the most part, operational responsibilities
are shifted to the cloud provider, who is then
responsible for the on-going maintenance of
the hardware used by the cloud.
From this viewpoint, the authors have chosen
a selection of topics to discuss how to mitigate
risks inside a company implementing the cloud
computing technology model using security
policies. In essence, this paper examines the
reasons why certain security policies are needed
and how they fit into all elements of a small
business that utilizes the cloud computing technology model. Furthermore, the paper discusses
the possibilities of applying different types of
security policies to enhance the security of a small
business and reduce its risks to an acceptable
level. To achieve this goal, the authors
explore different security policies and how they
can be mapped to cloud computing implemented
within small businesses. The objective is to
help small businesses understand what is needed
to secure their internal infrastructure using
security policies.
The rest of the paper is organized as follows: the next section reviews relevant
literature; section three explains security policy
for small business using cloud computing and
proposes a model for the types of security
policy applicable to small business; finally, the
last section presents the overall conclusions and
areas for further research.
2. LITERATURE REVIEW

2.1. Types of Internal Risks
There is a need to provide secure and safe information systems through the use of firewalls, intrusion detection and prevention systems, encryption, and authentication; this section
therefore discusses different types of internal
risks that a small business might encounter. For
example, client-side infrastructure embodies a
vast array of vulnerabilities, particularly in the
case of consumer-oriented devices and software,
and even more so in the case of devices that
support user profiles such as the browser using
ActiveX or JavaScript (Clarke, 2010). Social
engineering is the practice of using deception
or persuasion to fraudulently obtain goods or
information, and the term is often used in relation to computer systems or the information
they contain (Twitchell, 2006). Malware is a
variety of forms of hostile, intrusive, or annoying software or program code and a pervasive
problem in distributed computer and network
systems (Cesare & Xiang, 2010).
Furthermore, internal theft of data is a
major concern for all organizations regardless
of the size of the business. In fact, employee
theft at small companies can have more serious
consequences because they do not have the
resources of their larger counterparts. Often a
lifetime of hard work can be lost because of a
single unscrupulous employee (Morris, n.d.).
Moreover, misuse of computer resources includes
surfing the internet or using the corporate email
system for non-business purposes, using chat
applications, downloading unauthorized
software, and accessing data without authorization.
2.2. Cloud Computing
As mentioned earlier in the introduction, in a cloud computing environment, the
organization running an application does not
typically own the physical hardware used for the
applications. In fact, when running applications
in the cloud, an organization does not usually
know exactly where the computation work of
the applications is being processed (Cervone,
2010). Therefore, cloud computing is the latest
technology being feted by the IT industry
as the next (potential) revolution to modify how
the internet and information systems work and
are used by the world at large (Amir, 2010).
As noted above, Cervone (2010) attributes
one of the important advantages of cloud computing
to its potential cost savings: there are usually
little or no upfront capital costs, and operational
responsibilities, including on-going hardware
maintenance, shift to the cloud provider.
Correspondingly, Mark-Shane (2009)
defines cloud computing as simply the sharing and use of applications and resources of a
network environment to get work done without
concern about ownership and management of
the network's resources and applications.
Consequently, Buyya, Yeo, and Venugopal (2008) define cloud computing
as a type of parallel and distributed
system consisting of a collection of interconnected and virtualized computers that are
dynamically provisioned and presented as one
or more unified computing resources, based on
service-level agreements established through
negotiation between the service provider and
consumers. From this viewpoint, cloud computing is a relatively novel distributed computing
technology that promises to provide services that
are scalable through on-demand provisioning
of computing resources (Weiss, 2007).
The National Institute of Standards and Technology (NIST), a US federal
agency that works with industry to develop
and apply technology, measurements, and standards, defines cloud computing by
portraying five essential characteristics (Mell
& Grance, 2010):
1. On-demand self-service. A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically, without requiring human interaction with each service's provider.
2. Broad network access. Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (for example, mobile phones, laptops, and PDAs).
3. Resource pooling. The provider's computing resources are pooled to serve multiple consumers using a multitenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
4. Rapid elasticity. Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in.
5. Measured service. Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (for example, storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
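In practice, rapid elasticity is a scaling decision driven by the metering that measured service provides. The sketch below is an illustrative toy rule, not any provider's API; the function name and thresholds are assumptions chosen for the example.

```python
def desired_instances(current: int, cpu_utilization: float,
                      low: float = 0.30, high: float = 0.70) -> int:
    """Toy autoscaling rule: add an instance when average CPU
    utilization is high, release one when it is low (never below 1)."""
    if cpu_utilization > high:
        return current + 1   # scale out
    if cpu_utilization < low and current > 1:
        return current - 1   # scale in
    return current           # steady state

print(desired_instances(2, 0.85))  # 3: load is high, scale out
print(desired_instances(3, 0.10))  # 2: load is low, scale in
```

Real cloud platforms apply rules of this shape automatically, which is what lets capacity track demand without human interaction.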
Furthermore, cloud computing services are
classified into three models, referred to
as the "SPI Model," where 'SPI' refers to Software, Platform, or Infrastructure (as a Service),
respectively (Cloud Security Alliance, 2009;
European Network and Information Security
Agency, 2009):

1. Software as a Service (SaaS): software offered by a third-party provider, available on demand, usually via the Internet, and configurable remotely. Examples include online word processing and spreadsheet tools, CRM services, and web content delivery services (Salesforce CRM, Google Docs, etc.).
2. Platform as a Service (PaaS): allows customers to develop new applications using APIs deployed and configurable remotely. The platforms offered include development tools, configuration management, and deployment platforms. Examples are Microsoft Azure, Force.com, and Google App Engine.
3. Infrastructure as a Service (IaaS): provides virtual machines and other abstracted hardware and operating systems, which may be controlled through a service API. Examples include Amazon EC2 and S3, Terremark Enterprise Cloud, Windows Live SkyDrive, and Rackspace Cloud.
As well, cloud computing deployments can
be divided into: 1) public: available publicly,
so any organization may subscribe; 2) private:
services built according to cloud computing
principles but accessible only within a private
network; 3) partner: cloud services offered by a
provider to a limited and well-defined number
of parties (European Network and Information
Security Agency, 2009).
2.3. Security Policies
Security at the application level covers various
aspects, including authentication, authorization, message integrity, confidentiality, and
operational defense (Kannammal & Iyengar,
2007). Also, the transmission and storage of
information in the digital form coupled with
the widespread propagation of networked
computers has created new concerns for policy
(Bronk, 2008). An essential business tool and
knowledge-sharing device, the networked computer is not without vulnerability, including the
disruption of service and the theft, manipulation,
and destruction of electronic data (Bronk, 2008).
Therefore, the development of the information
security policy is a critical activity (Kadam,
2007). Credibility of the entire information
security program of an organization depends
upon a well-drafted information security policy
(Kadam, 2007). Some authors have studied
the effectiveness of the information security
policy. The success of the policy is dependent
on the way the security contents are addressed
in the policy document and how the content is
communicated to users (Höne & Eloff, 2002).
In the security policy management, Huong
Ngo (1999) noted that the security policy is to
the security environment like the law is to a
legal system. Without a policy, security practices
will be developed without clear demarcation
of objectives and responsibility, leading to
increased weakness. Therefore, a policy is the
start of security management. To integrate all
related functions, the policy should be developed at both board and department levels, and
executed in conjunction with the IT department
and authorized users (Huong Ngo, 1999).
An information protection program should
be part of any organization's overall asset
protection program. Management is charged
to ensure that adequate controls are in place
to protect the assets of the enterprise. An
information security program that includes
policies, standards and procedures will allow
management to demonstrate a standard of care
(Peltier, 2004). Furthermore, management from
all communities of interest, including general
employees, IT personnel and information security specialist, must make policies the basis
for all information security planning, designing
and deployment (Whitman & Mattord, 2009).
Any quality security program begins and ends
with policy, and security policies are the least
expensive control to execute, but the most
difficult to implement properly (Whitman
& Mattord, 2009). Similarly, Swanson and
Guttman (1996) stated that policy is senior
management’s directives to create a computer
security program, establish its goals and assign
responsibilities. The term policy is also used to
refer to the specific security rules for particular systems. Additionally, policy may refer to
entirely different matters, such as the specific
managerial decisions setting an organization’s
e-mail privacy policy or fax security policy.
Clearly, the complexity of the issues involved
means that the size and shape of information security policies may vary widely from company
to company. This may depend on many factors,
including the size of the company, the sensitivity
of the business information it owns and deals
with in its marketplace, and the numbers and
types of information and computing systems
in use (Diver, 2007). Therefore, for a small
business, the complexity and number of
security policies needed are lower than for a large
business, especially if the small business implements
the cloud computing technology model.
The purpose of a security policy is to protect
people and information, set rules for expected
behavior by users, define and authorize the consequences of violations, minimize risks, and help
to track compliance with regulations (Diver,
2007). Additionally, there are three different
types of security policy: enterprise information
security policy (EISP), issue-specific security
policy (ISSP), and system-specific security policy
(SysSP) (Swanson & Guttman, 1996; Whitman
& Mattord, 2009):
EISP is a general security policy; it supports the mission, vision, and direction of the
organization and sets the strategic direction and
scope for all security efforts (Whitman & Mattord, 2009). Furthermore, EISP should: 1) Create
and define a computer security program; 2) Set
organizational strategic directions; 3) Assign
responsibilities and address compliance issues.
ISSP should: 1) Address specific areas of
technology such as e-mail usage, internet usage,
privacy, and corporate network usage; 2) Be
updated consistently as changes in technology take
place; 3) Contain an issue statement, applicability, roles and responsibilities, compliance, and a
point of contact.
SysSP should: 1) Focus on decisions taken
by management to protect a particular system;
for example, the extent to which individuals
will be held accountable for their actions on the
system should be explicitly stated; 2) Be made
by a management official, and the decision management
makes should be based on a technical
analysis; 3) Vary from system to system, because
each system needs defined security
objectives based on the system's operational
requirements, environment, and the manager's
acceptance of risk; 4) Be expressed as rules:
who (by job category, organizational placement,
or name) can do what (e.g., modify, delete) to
which specific classes and records of data, and
under what conditions. Furthermore, SysSP can
be divided into managerial guidance, created by
management to guide the implementation and
configuration of technology and the behavior of
people, and technical specifications, which are
an implementation of the managerial policy.

Figure 1. Typical network architecture for small business
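The rule form that SysSP prescribes, who can do what to which data under what conditions, can be sketched as a small access-control check. The roles, actions, and record classes below are hypothetical examples for illustration, not drawn from any standard.

```python
# Hypothetical system-specific rules: (role, action, record class).
# A request is allowed only if an explicit rule grants it.
RULES = {
    ("manager", "modify", "payroll"),
    ("manager", "read", "payroll"),
    ("clerk", "read", "payroll"),
}

def is_allowed(role: str, action: str, record_class: str) -> bool:
    """Evaluate a SysSP-style rule: who can do what to which data."""
    return (role, action, record_class) in RULES

print(is_allowed("clerk", "read", "payroll"))    # True
print(is_allowed("clerk", "modify", "payroll"))  # False: no rule grants it
```

Expressing the policy as explicit rules like this is also what makes individuals accountable for their actions on the system, since every permitted combination is written down.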
Hence, for a security policy to be effective,
it must be maintained, distributed, read,
understood, agreed to, and signed by employees,
and enforced by the organization (Whitman
& Mattord, 2009). The main focus of a small
business is the ISSP, with less emphasis on EISP
and even less on SysSP. Part of EISP can
be incorporated into ISSP, since a small business
does not have a complicated infrastructure.
3. SECURITY POLICY FOR SMALL BUSINESS USING CLOUD COMPUTING
Obviously, for a small business, relying on cloud
computing to execute business transactions
can be translated into savings in IT infrastructure, servers, labor, and complexity. Therefore,
a small business implementing cloud computing
will need fewer security policies than a large
enterprise needs to manage its network
infrastructure and servers, especially since
security policies are the least expensive
control to execute (Whitman & Mattord, 2009).
However, for this to be realized, a small
business infrastructure can consist of a thin-client
interface such as a web browser, running on a
thin-client computer, laptop, or desktop, plus a switch,
network cables, and an internet line, as presented in Figure 1.
The literature shows that many small businesses do not have the resources or capital to
develop in-house applications or hire technical staff.
Therefore, for a small business, SaaS might
be more feasible and easier to implement than any
other kind of cloud service. For example, if a
small business employs cloud SaaS, the
organization does not manage or control the
underlying cloud infrastructure, including
network, servers, operating systems, storage,
or even individual application capabilities, with
the possible exception of limited user specific
application configuration settings (Cloud Security Alliance, 2009). On the other hand, in
PaaS service model, the consumer does not
manage or control the underlying cloud infrastructure including network, servers, operating
systems, or storage, but has control over the
deployed applications and possibly application
hosting environment configurations (Cloud
Security Alliance, 2009); in this model, things
get more complicated for small businesses.
Also, for cloud IaaS, the consumer does not
manage or control the underlying cloud infrastructure but has control over operating systems,
storage, deployed applications, and possibly
limited control of select networking components
(e.g., host firewalls) (Cloud Security Alliance,
2009). This is considered the most complicated setup for a small business, since it
might require personnel with an information technology background.
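The division of control among the three service models discussed above can be summarized in a small sketch. The layer names follow the text; the table itself is an illustrative simplification (edge cases such as host firewalls under IaaS are omitted).

```python
# Who controls each layer under each service model, per the
# SaaS/PaaS/IaaS descriptions above ("customer" vs. "provider").
CONTROL = {
    "SaaS": {"network": "provider", "servers": "provider",
             "os": "provider", "storage": "provider",
             "application": "provider"},
    "PaaS": {"network": "provider", "servers": "provider",
             "os": "provider", "storage": "provider",
             "application": "customer"},
    "IaaS": {"network": "provider", "servers": "provider",
             "os": "customer", "storage": "customer",
             "application": "customer"},
}

def customer_managed(model: str) -> list:
    """Layers a small business must still manage (and secure) itself."""
    return [layer for layer, who in CONTROL[model].items()
            if who == "customer"]

print(customer_managed("SaaS"))  # []: everything rests with the provider
print(customer_managed("IaaS"))  # ['os', 'storage', 'application']
```

The fewer layers the customer manages, the fewer security policies it needs, which is why SaaS is the natural fit for a small business.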
Any small business utilizing SaaS to
conduct its day-to-day operations will need
certain kinds of security policies for using its
information systems. Many factors must be taken into account, including
audience type and the company's business and size
(Diver, 2007). The key to ensuring that a
company's security policy is useful and usable
is to develop a suite of policy documents that
match the audience and marry with existing
company policies. Policies must be usable,
workable and realistic (Diver, 2007). A small
business might not have the internal expertise
to create security policies; therefore, it is better
and feasible to consult a third party to study the
current infrastructure and requirements. The
successful deployment of a security policy is
closely related not only to the complexity of
the security requirements but also to the capabilities/functionalities of the security devices
(Preda et al., 2009).
In this regard, this paper proposes the authors' viewpoint on security policies for a small business considering the implementation of cloud computing technologies, which requires a smaller number of security policies, as proposed in Figure 2.
Based on the above figure, the following are recommended security policies that can be implemented for a small business, with a brief description of each. Some of these policies can be combined, or might not be applicable, for example where thin client computers are used rather than laptops or desktops:
Internet Usage Policy (IUP): it applies to all
internet users (individuals working for the
Company, including permanent full-time
and part-time employees, contract workers,
temporary agency workers, business partners, and vendors) who access the Internet
through the computing or networking resources. The Company’s Internet users are
expected to be familiar with and to comply
with this policy, and are also required to use
their common sense and exercise their good
judgment while using Internet services
(SANS, 2006). Since the internet might expose the company's systems to infection via viruses, spyware, adware or Trojans, the objective of the IUP is to protect internet resources from abuse by employees surfing the internet for non-business purposes or to obtain or view pornographic or unethical material. The IUP provides a clear distinction between personal use and work-related use. Therefore, the dependence of a small business on the internet to conduct its tasks via the cloud computing model requires implementing this policy to conserve internet line usage and to protect the assets.
Email Usage Policy (EUP): This policy covers
the appropriate use of any email sent from
a Company’s email address and applies to
all employees, vendors, and agents operating on behalf of the Company. The email
system shall not be used for the creation or distribution of any disruptive or offensive messages (SANS, 2006). This policy
protects against unauthorized email usage,
distribution of non-work related emails,
malware infections, and lost productivity.
Also, this policy prohibits employees from sending any confidential information by email to outside parties, and from sending pornographic jokes or stories, which might be considered sexual harassment. Furthermore, messages targeting employees on the basis of any protected classification, including race, gender, nationality, religion,
Figure 2. Proposed model for the types of security policy applicable to small business
and so forth, will be dealt with according to the harassment policy. Therefore, when a small business decides to host its email system in the clouds, the EUP can protect the small business's email assets.
System Usage Policy (SUP): it protects against unauthorized program installation, for example instant messaging or file-sharing software. Furthermore, the SUP specifies the restrictions on the use of your account or password (not to be given away) (Computer Technology Documentation Project, 2010). Information systems such as VoIP
phones, email, web, software, printers,
network, computers, computer accounts,
video system and smart phones are for use
of the employees to support the business.
Therefore, SUP helps to ensure the system
used to support the business is protected
against unauthorized activities.
Wireless Security Policy (WSP): The purpose
of this policy is to secure and protect the
information assets owned by the Company.
This policy applies to all wireless infrastructure devices that connect to a Company’s network or reside on a Company’s
site that provides wireless connectivity to
endpoint devices including, but not limited
to, laptops, desktops, cellular phones, and
personal digital assistants (PDAs) (SANS,
2006). The WSP protects against users attempting to modify or extend the network, efforts to break into or gain unauthorized access to the system, and the running of data packet collection programs. Therefore, the WSP ensures the protection of the company's
assets while using this technology, which reduces the installation of cables and switches and facilitates the implementation of the cloud computing model.
Physical Policy (PhyP): The purpose of this
document is to provide guidance for
Visitors to Company’s premises, as well as
for employees sponsoring Visitors to the
Company (SANS, 2006). Also, PhyP can
be used for securing network switches, access points and cables against unauthorized
usage. Therefore, PhyP protects company’s
assets against theft, modification and unauthorized usage.
Password Policies (PassP): This policy is to help
keep user accounts secure. It defines how
often users must change their passwords,
how long they must be, complexity rules
(types of characters used such as lower
case letters, uppercase letters, numbers, and
special characters), and other items (Computer Technology Documentation Project,
2010). Furthermore, the PassP protects against the sharing of passwords among employees and the disclosure of important passwords to unauthorized personnel.
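The complexity rules described above can be expressed as a simple automated check. The sketch below is illustrative only; the thresholds (a minimum length of 10 characters and a 90-day maximum password age) are assumed values that a small business would set in its own PassP, not figures from the cited sources.

```python
import re
from datetime import date, timedelta

# Illustrative policy parameters (assumed values, not from any cited standard)
MIN_LENGTH = 10
MAX_AGE_DAYS = 90

def check_password(password: str, last_changed: date) -> list:
    """Return a list of policy violations for a candidate password."""
    violations = []
    if len(password) < MIN_LENGTH:
        violations.append("too short")
    if not re.search(r"[a-z]", password):
        violations.append("needs a lowercase letter")
    if not re.search(r"[A-Z]", password):
        violations.append("needs an uppercase letter")
    if not re.search(r"[0-9]", password):
        violations.append("needs a digit")
    if not re.search(r"[^A-Za-z0-9]", password):
        violations.append("needs a special character")
    if date.today() - last_changed > timedelta(days=MAX_AGE_DAYS):
        violations.append("password expired; must be changed")
    return violations

print(check_password("Secret!2024pw", date.today()))  # [] -> compliant
```

A help desk or self-service portal could run such a check whenever an employee sets a new password, enforcing the PassP without requiring in-house security expertise.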
Proprietary Information Use (PIU): Acceptable
use of any proprietary information owned
by the company. It defines where it can be stored, where it may be taken, and how and where it can be transmitted (Computer Technology Documentation Project, 2010).
Security policies need supporting tools such as anti-virus software, web security, URL filtering, and data loss prevention. These tools add cost, but some cloud providers offer protection against viruses, spyware, botnets and thousands of other advanced internet-based threats at a low monthly cost, based on the SaaS service model, without the need to deploy and manage security appliances and PC-based anti-virus/firewall utilities (Moran, 2010). This works by simply pointing all internet traffic to a web security cloud application. This cloud application protects small
businesses against viruses, spyware, botnets
and thousands of other advanced internet based
threats. Furthermore, this cloud application provides a full suite of URL filtering capabilities.
Therefore, unwanted URL categories can be
blocked, while productivity draining categories
can be controlled. Moreover, overall internet
usage can be easily monitored with a simple
cloud based reporting portal.
The implementation of a security policy requires training and awareness sessions to ensure that employees understand their responsibilities toward the company's assets. Therefore, the implementation can only be done through a major drive to educate everyone. Furthermore, the right message should reach the right people. The training programs have to be designed keeping in mind the actual groups being addressed. The trainer has to use language that is easy for the audience, without technical jargon (Kadam, 2007). Moreover, it is recommended to cover only the policies relevant to each group, depending on their job role, with the possibility of customizing the training program (Kadam, 2007). Finally, each
policy document should be updated regularly.
At a minimum, an annual review strikes a good
balance, ensuring that policy does not become
out of date due to changes in technology or
implementation (SANS, 2006).
4. CONCLUSION
The cloud computing model is a relatively new topic which has not been adequately researched to date. Thus, this paper presents a suggestion for securing the internal infrastructure of a small business using the SaaS cloud computing service. To the best of our knowledge, this is the first theoretical study to comprehensively identify internal risks to small businesses using cloud computing. Most importantly, this research describes the benefit of security policies and the benefit of cloud computing for small businesses in terms of reduced cost and risk. The motivation for this will primarily be cost savings. The aim is
to encourage small businesses to migrate to the cloud computing model by portraying what is needed to secure their infrastructure using traditional
security policies without the complexity used
in large corporations.
This research suggests using specific applicable security policies to reduce the risks and encourage small businesses to migrate to cloud computing technologies.
The findings of this study can provide a foundation that can facilitate further, empirical study of cloud computing for small businesses to reduce cost and risk. This research is not intended to describe the process of designing a security policy; rather, its intention is to contribute to the understanding of the risks and security requirements of small businesses implementing, or planning to implement, the cloud computing model. The research has succeeded in proposing risk reduction techniques against internal risks for small businesses through the implementation of certain types of security policies.
The paper has confirmed the proposed model, which satisfies the research aim. The paper also revealed a considerable number of interesting issues that require future study, such as investigating and enhancing the predictive power of the model proposed in this research. One major direction for further research would be geared towards improving risk reduction through automated security policy creation based on small business requirements. Also, future research could be aimed at developing the appropriate security policies for small businesses with in-house technical capabilities using the PaaS cloud service model.
REFERENCES
Amir, M. S. (2010). It’s written in the cloud: the hype
and promise of cloud computing. Journal of Enterprise Information Management, 23(2), 131–134.
doi:10.1108/17410391011019732
Bronk, C. (2008). Hacking the nation-state: Security,
information technology and policies of assurance.
Information Security Journal: A Global Perspective,
17(3), 132-142.
Buyya, R., Yeo, C. S., & Venugopal, S. (2008).
Market-oriented cloud computing: Vision, hype, and
reality for delivering IT Services as computing utilities. In Proceedings of the 10th IEEE International
Conference on High Performance Computing and
Communications (p. 1).
Cervone, H. F. (2010). An overview of virtual and
cloud computing. OCLC Systems & Services, 26(3),
162–165. doi:10.1108/10650751011073607
Cesare, S., & Xiang, Y. (2010). Classification of malware using structured control flow. In Proceedings
of the Eighth Australasian Symposium on Parallel
and Distributed Computing, Brisbane, Australia.
(Vol. 107).
Clarke, R. (2010). User requirements for cloud
computing architecture. In Proceedings of the 10th
IEEE/ACM International Conference on Cluster,
Cloud and Grid Computing (pp. 623-630).
Cloud Computing World. (2010). Why your small
business needs cloud computing. Retrieved from
http://www.cloudcomputingworld.org/cloud-computing-for-businesses/why-your-small-business-needs-cloud-computing.html
Cloud Security Alliance. (2009). Security
guidance for critical areas of focus in cloud
computing V2.1. Retrieved from http://www.
privatecloud.com/2010/01/26/security-guidancefor-critical-areas-of-focus-in-cloud-computing-v21/?fbid=uCRTvs9w3Cs
Computer Technology Documentation Project.
(2010). Network and computer security tutorial
2010. Retrieved from http://www.comptechdoc.org/
D’Arcy, J., & Hovav, A. (2007). Deterring
internal information systems misuse. Communications of the ACM, 50(10), 113–117.
doi:10.1145/1290958.1290971
Diver, S. (2007). Information security policy – a
development guide for large and small companies
(p. 43). Reston, VA: SANS Institute.
European Network and Information Security Agency.
(2009). Cloud computing - benefits, risks and recommendations for information security. Retrieved
from http://itlaw.wikia.com/wiki/Cloud_Computing:_Benefits,_Risks,_and_Recommendations_for_
Information_Security
Greengard, S. (2010). Cloud computing and developing nations. Communications of the ACM, 53(5),
18–20. doi:10.1145/1735223.1735232
Hayes, B. (2008). Cloud computing. Communications of the ACM, 51(7), 9–11.
doi:10.1145/1364782.1364786
Heath, N. (2010). How cloud computing will
help government save taxpayer £3.2bn. Retrieved
from http://www.silicon.com/management/publicsector/2010/01/27/how-cloud-computing-will-helpgovernment-save-taxpayer-32bn-39745389/
Höne, K., & Eloff, J. H. O. (2002). Information security policy — what do international information security standards say? Computers & Security, 21(5), 382–475. doi:10.1016/S0167-4048(02)00504-7

Huong Ngo, H. (1999). Corporate system security: towards an integrated management approach. Information Management & Computer Security, 7(5), 217–222. doi:10.1108/09685229910292817

Kadam, A. W. (2007). Information security policy development and implementation. Information Systems Security, 16(5), 246–256. doi:10.1080/10658980701744861

Kannammal, A., & Iyengar, N. C. S. N. (2007). A model for mobile agent security in e-business applications. International Journal of Business and Information, 2(2), 185–198.

Katharine, S., & David, B. (2010). Current state of play: Records management and the cloud. Records Management Journal, 20(2), 217–225. doi:10.1108/09565691011064340

Mark-Shane, E. S. (2009). Cloud computing and collaboration. Library Hi Tech News, 26(9), 10–13. doi:10.1108/07419050911010741

Mell, P., & Grance, T. (2010). The NIST definition of cloud computing. Communications of the ACM, 53(6), 50.

Moran, J. (2010). ZScaler web security cloud for small business. Retrieved from http://www.smallbusinesscomputing.com/webmaster/article.php/3918716/ZScaler-Web-Security-Cloud-for-Small-Business.htm

Morris, M. (n.d.). Employee theft schemes. Retrieved from http://www.cowangunteski.com/documents/EmployeeTheftSchemes_001.pdf

Peltier, T. R. (2004). Developing an enterprisewide policy structure. Information Systems Security, 13(1), 44–50. doi:10.1201/1086/44119.13.1.20040301/80433.6

Preda, S., Cuppens, F., Cuppens-Boulahia, N., Garcia-Alfaro, J., Toutain, L., & Elrakaiby, Y. (2009). Semantic context aware security policy deployment. In Proceedings of the 4th International Symposium on Information, Computer, and Communications Security, Sydney, Australia.

SANS. (2006). Information security policy templates. Reston, VA: SANS Institute.

Srinivasan, M. (2010). Cloud security for small businesses. In Proceedings of the Allied Academies International Conference, Academy of Information & Management Sciences, 14, 72–73.

Standage, T. (2002). The weakest link. The Economist, 11–14.

Swanson, M., & Guttman, B. (1996). Generally accepted principles and practices for securing information technology systems. Retrieved from http://csrc.nist.gov/publications/nistpubs/800-14/800-14.pdf

Twitchell, D. P. (2006). Social engineering in information assurance curricula. In Proceedings of the 3rd Annual Conference on Information Security Curriculum Development, Kennesaw, GA.

Weiss, A. (2007). Computing in the clouds. netWorker, 11(4), 16–25. doi:10.1145/1327512.1327513

Whitman, M., & Mattord, H. (2009). Principles of information security (3rd ed.). Boston, MA: Course Technology.

Wittow, M. H., & Buller, D. J. (2010). Cloud computing: Emerging legal issues for access to data, anywhere, anytime. Journal of Internet Law, 14(1), 1–10.

Yogesh, K. D., & Navonil, M. (2010). It's unwritten in the Cloud: The technology enablers for realising the promise of cloud computing. Journal of Enterprise Information Management, 23(6), 673–679. doi:10.1108/17410391011088583

Zscaler. (2010). Zscaler web security cloud for small business. Retrieved from http://www.zscaler.com/pdf/brochures/ds_zscalerforsmb.pdf
International Journal of Cloud Applications and Computing, 1(2), 41-63, April-June 2011 41
The Financial Clouds Review
Victor Chang, University of Southampton and University of Greenwich, UK
Chung-Sheng Li, IBM Thomas J. Watson Research Center, USA
David De Roure, University of Oxford, UK
Gary Wills, University of Southampton, UK
Robert John Walters, University of Southampton, UK
Clinton Chee, Commonwealth Bank, Australia
ABSTRACT
This paper demonstrates financial enterprise portability, which involves moving entire application services
from desktops to clouds and between different clouds, and is transparent to users who can work as if on their
familiar systems. To demonstrate portability, reviews for several financial models are studied, where Monte
Carlo Methods (MCM) and Black Scholes Model (BSM) are chosen. A special technique in MCM, Least
Square Methods, is used to reduce errors while performing accurate calculations. Simulations for MCM are
performed on different types of Clouds. Benchmark and experimental results are presented for discussion. 3D Black Scholes is used to explain the impacts and added values for risk analysis. Implications for banking are also discussed, as well as ways to track risks in order to improve accuracy. A conceptual Cloud platform is used to explain the contributions in Financial Software as a Service (FSaaS) and the IBM Fine-Grained Security Framework. This study demonstrates the portability, speed, accuracy, and reliability of applications in the clouds, while demonstrating portability for FSaaS and the Cloud Computing Business Framework (CCBF).
Keywords: 3D Black Scholes, Black Scholes Model, Cloud Computing Business Framework, Enterprise Portability for Clouds, Financial Clouds, Least Square Methods, MATLAB and Mathematica Applications on Clouds, Monte Carlo Methods (MCM), Operational Risk
1. INTRODUCTION
The global economic downturn triggered by the finance sector is an interdisciplinary research question that experts from different sectors need to work on together. There are different interpretations of the cause of the problem. Firstly, Hamnett (2009) conducted a study to investigate the cause, and concluded
DOI: 10.4018/ijcac.2011040104
that unsustainable mortgage lending led to an out-of-control situation, and that the housing bubble and subsequent collapse were the result. Irresponsible mortgage lending was the cause of the Lehman Brothers collapse, which triggered the global financial crisis. Secondly, Lord Turner,
Chair of the Financial Services Authority (FSA),
is quoted as follows: “The problem, he said,
was that banks’ mathematical models assumed
a ‘normal’ or ‘Gaussian’ distribution of events,
represented by the bell curve, which dangerously underestimated the risk of something
going seriously wrong” (Financial Times,
2009). Thirdly, there are reports showing the
lack of regulations on financial practice. Currently there are remedies proposed by several
governments to improve on this (City A.M.,
2010). All of the above suggested possibilities contribute to the complexity that caused the global downturn. However, Cloud Computing (CC)
offers a good solution to deal with challenges
in risk analysis and financial modelling. The
use of Cloud resources can improve accuracy
of risk analysis, and knowledge sharing in an
open and professional platform (Chang, Wills,
& De Roure, 2010a, 2010c). Rationales are
explained as follows. The Clouds provide a
common platform to run different modelling
and simulations based on Gaussian and nonGaussian models, including less conventional
models. The Clouds offer distributed highperforming resources for experts in different
areas within and outside financial services to
study and review the modelling jointly, so that
other models with Monte Carlo Methods and
Black Scholes Models can be investigated and
results compared. The Clouds allow regulatory requirements to be handled with ease, while establishing and reinforcing security and regulation within the Cloud resources.
2. LITERATURE REVIEW
The literature review is presented as follows. Three
challenges in business context and Software
as a Service (SaaS) are explained. This paper
is focused on the third issue, enterprise portability, and how financial SaaS is achieved
with portability. Financial models with Monte
Carlo methods and Black Scholes models are
also explained.
2.1. Three Challenges in Business Context
There are three Cloud Computing problems
experienced in the current business context
(Chang, Wills, & De Roure, 2010b, 2010c).
Firstly, all cloud business models and frameworks proposed by several leading researchers are either qualitative (Briscoe & Marinos,
2009; Chou, 2009; Weinhardt et al., 2009;
Schubert, Jeffery, & Neidecker-Lutz, 2010) or
quantitative (Brandic et al., 2009; Buyya et al.,
2009; Patterson et al., 2009). Each framework
is self-contained, and not related to others’
work. There are few frameworks or models
which demonstrate linking both quantitative
and qualitative aspects, and when they do, the
work is still at an early stage.
Secondly, there is no accurate method for
analysing cloud business performance other
than the stock market. A drawback with the
stock market is that it is subject to accuracy and
reliability issues (Chang, Wills, & De Roure,
2010a, 2010c). There are researchers focusing
on business model classifications and justifications for which cloud business can be successful (Chou, 2009; Weinhardt et al., 2009). But
these business model classifications need more
cases to support them and more data modelling
to validate them for sustainability. Ideally, a
structured framework is required to review
cloud business performance and sustainability
in systematic ways.
Thirdly, communications between different
types of clouds from different vendors are often
difficult to implement. Often work-arounds
require writing additional layers of APIs, or an
interface or portal to allow communications.
This brings interesting research questions such
as portability, as portability of some applications
from desktop to cloud is challenging (Beaty et
al., 2009; Patterson et al., 2009). Portability
refers to moving enterprise applications and
services, and not just files or VMs, over clouds.
2.2. Financial Models
Gaussian-based mathematical models have been
frequently used in financial modelling (Birge
& Massart, 2001). As the FSA has pointed out,
many banks’ mathematical models assumed
a normal (Gaussian) distribution as an expected outcome, and might underestimate the risk of something going wrong. To address this,
other non-Gaussian financial models need
to be investigated, and it must be demonstrated how financial SaaS can be successfully calculated
and executed on Clouds. Based on various studies (Feiman & Cearley, 2009; Hull, 2009), one
model for pricing and one model for risk analysis
should be selected respectively. A number of
methods for calculating prices include Monte
Carlo Methods (MCM), Capital Asset Pricing
Models and Binomial Model. However, the
most commonly used method is MCM since
MCM is commonly used in stochastic and
probabilistic financial models, and provides data
for investors’ decision-making (Hull, 2009).
MCM is thus chosen for pricing. On the other
hand, methods such as Fourier series, stochastic
volatility and Black Scholes Model (BSM) are
used for volatility. As a mainstream option,
BSM is selected for risk analysis, since BSM
has finite difference equations to approximate
derivatives. Origins in literature and mathematical formulas in relation to MCM and BSM are
presented in the next two sections.
2.2.1. Monte Carlo Methods in Theory

Monte Carlo Simulation (MCS), which originated from mathematical Monte Carlo Methods, is a computational technique used for risk analysis and for calculating the probability of an event or investment outcome. MCS is based on probability distributions, so that uncertain variables can be described and simulated with controlled variables (Hull, 2009; Waters, 2008).
Originating from physics, Brownian motion of the underlying random variables influences the Black-Scholes models, where the stock price becomes:

dS = µS dt + σS dW_t    (1)

where W is a Brownian motion; the dW term here stands in for any and all sources of uncertainty in the price history of the stock. In a sampling path, the time interval from 0 to T is divided into M units of length δt, and the Brownian motion over each interval δt is approximated by a single normal variable of mean 0 and variance δt, leading to:

S(k δt) = S(0) exp( Σ_{i=1}^{k} [ (µ − σ²/2) δt + σ ε_i √δt ] )    (2)

for each k between 1 and M, where each ε_i is drawn from a standard normal distribution. If a derivative H pays the average value of S between 0 and T, then a sample path ω corresponds to a set {ε_1, ..., ε_M} and hence:

H(ω) = [1/(M + 1)] Σ_{k=0}^{M} S(k δt)    (3)

The Monte Carlo value of this derivative is obtained by generating N lots of M normal variables, creating N sample paths and so N values of H, and then taking the mean. The error has order e = O(N^(−1/2)) convergence in standard deviation, based on the central limit theorem.
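The procedure above can be sketched in a few lines of code: discretise the geometric Brownian motion of equation (2), average each path per equation (3), and take the mean over N paths. This is an illustrative implementation, not the one benchmarked in this paper; the parameter values (S0 = 100, µ = 0.05, σ = 0.2, T = 1, M = 50, N = 2000) are assumed for demonstration.

```python
import math
import random

def monte_carlo_average_price(S0, mu, sigma, T, M, N, seed=42):
    """Monte Carlo value of a derivative H paying the average of S over [0, T].

    Each path discretises geometric Brownian motion into M steps of length
    dt, per equation (2); H is the path average, per equation (3).
    """
    rng = random.Random(seed)
    dt = T / M
    drift = (mu - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(N):
        log_s = math.log(S0)
        path_sum = S0                      # k = 0 term of the average
        for _ in range(M):
            log_s += drift + vol * rng.gauss(0.0, 1.0)
            path_sum += math.exp(log_s)
        total += path_sum / (M + 1)        # H(omega) for this sample path
    return total / N                       # mean over the N paths

# Example run with assumed parameters
est = monte_carlo_average_price(S0=100.0, mu=0.05, sigma=0.2, T=1.0, M=50, N=2000)
print(round(est, 2))
```

By the O(N^(−1/2)) error estimate above, quadrupling N roughly halves the standard error of the estimate.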
2.2.2. Black Scholes Model (BSM)

The BSM is commonly used for financial markets and derivatives calculations. It is also an extension of Brownian motion. The BSM formula calculates the call and put prices of European options (Hull, 2009). The value of a call option under the BSM is:

C(S, t) = S N(d_1) − K e^(−r(T−t)) N(d_2)    (4)

where

d_1 = [ ln(S/K) + (r + σ²/2)(T − t) ] / ( σ √(T − t) )

and

d_2 = d_1 − σ √(T − t)

The price for the put option is:

P(S, t) = K e^(−r(T−t)) N(−d_2) − S N(−d_1)    (5)
For both formulas (Hull, 2009):

• N(·) is the cumulative distribution function of the standard normal distribution.
• T − t is the time to maturity.
• S is the spot price of the underlying asset.
• K is the strike price.
• r is the risk-free rate.
• σ is the volatility in the log-returns of the underlying asset.
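Equations (4) and (5) translate directly into code. The sketch below is illustrative; the example inputs (spot 100, strike 100, 5% rate, 20% volatility, one year to maturity) are assumed, and the put value is obtained via put-call parity rather than retyping the closed form.

```python
import math

def norm_cdf(x):
    """Standard normal CDF N(x), computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, r, sigma, T_minus_t):
    """European call value per equation (4)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T_minus_t) / (sigma * math.sqrt(T_minus_t))
    d2 = d1 - sigma * math.sqrt(T_minus_t)
    return S * norm_cdf(d1) - K * math.exp(-r * T_minus_t) * norm_cdf(d2)

def black_scholes_put(S, K, r, sigma, T_minus_t):
    """European put value, the counterpart of equation (5)."""
    call = black_scholes_call(S, K, r, sigma, T_minus_t)
    # put-call parity: P = C - S + K e^{-r(T-t)}
    return call - S + K * math.exp(-r * T_minus_t)

# Assumed example inputs: spot 100, strike 100, 5% rate, 20% vol, 1 year
print(round(black_scholes_call(100, 100, 0.05, 0.2, 1.0), 2))  # 10.45
print(round(black_scholes_put(100, 100, 0.05, 0.2, 1.0), 2))   # 5.57
```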
2.3. Least Square Methods (LSM) for Monte Carlo Simulations (MCS)

Variance Gamma Processes were used in our previous papers (Chang, Wills, & De Roure, 2010a, 2010c), and although they reduce errors while calculating pricing and risk analysis on Clouds, they can only go up to 20,000 simulations in one go before performance drops off. In addition, it takes approximately 10 seconds for
error correction due to stratification of sampling,
although it takes less than 1 second for 5,000
simulations per attempt for executing financial
applications with Octave 3.2.4 on Clouds. This leads us to investigate other methodologies that allow many more simulations to be executed in one go; in other words, improvements in performance on Clouds while maintaining the accuracy and quality of our simulations. Monte Carlo
Methods (MCM) are used in our simulations,
and this means other methods supporting MCM
are required to meet our objectives. Various
methods such as stochastic simulation, Terms
Structure Models (Piazzesi, 2010), Triangular
Methods (Mullen et al., 1988; Mullen & Ennis, 1991), and Least Square Methods (LSM)
are studied (Longstaff & Schwartz, 2001;
Moreno & Navas, 2001; Choudhury et al.,
2008). LSM is chosen because of the following advantages. Firstly, LSM provides a direct
method for problem solving, and is extremely
useful for linear regressions. LSM only needs
a short starting time, and is therefore a good
choice. Secondly, Terms Structure Models
and Triangular Methods are not necessarily
used in the Clouds. LSM can be used in the Clouds, because jobs in the Clouds that require heavy computation often need extensive resources and computational power to run.
LSM is suitable if a large problem is divided
into several sections where each section can be
calculated swiftly and independently. This also
allows improvements in efficiency.
Here is the explanation of the LSM. Given a data set (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) and a fitting curve f(x) with deviations d_1, d_2, ..., d_n from each data point, the least square method produces the best fitting curve with the following property:

Π = d_1² + d_2² + ... + d_n² = Σ_{i=1}^{n} [y_i − f(x_i)]²  is a minimum.    (6)

The least squares line method uses an equation f(x) = a + bx, which is a line graph and describes the trend of the raw data set (x_1, y_1), (x_2, y_2), ..., (x_n, y_n). Here n should be greater than or equal to 2 (n ≥ 2) in order to find the unknowns a and b. Setting the first derivatives of Π to zero gives the equations for the least squares line:

Σ y_i = a·n + b·Σ x_i
Σ x_i y_i = a·Σ x_i + b·Σ x_i²    (7)

The least squares parabola method uses an equation f(x) = a + bx + cx², which is a parabola graph. Here n should be greater than or equal to 3 (n ≥ 3) in order to find the unknowns a, b, and c. When you take the first derivatives of Π for the parabola, you will have:

Σ y_i = a·n + b·Σ x_i + c·Σ x_i²
Σ x_i y_i = a·Σ x_i + b·Σ x_i² + c·Σ x_i³
Σ x_i² y_i = a·Σ x_i² + b·Σ x_i³ + c·Σ x_i⁴    (8)
The LSM has been mathematically proven, and allows advanced calculations for complex systems. The LSM is the most suitable for a complex problem divided into several sections where each section runs its own calculations. These complex systems include robotics, financial modelling and medical engineering. Longstaff and Schwartz (2001) developed an algorithm based on LSM Monte Carlo simulations (MCS) to estimate best values precisely. Moreno and Navas (2001) adopted a similar approach, and demonstrated their algorithm and the robustness of LSM MCS for pricing American derivatives. Choudhury et al. (2008) used the approach presented by Longstaff and Schwartz, except that they focused on code algorithms and performance optimisation. These three papers have demonstrated how LSM can be used in financial computing to achieve accurate estimation and optimisation. Abdi (2009) demonstrates that LSM is very useful for regression and explains why LSM is popular and versatile for calculations. He also states that a drawback is that LSM does not cope well with extreme calculations, but such volatile calculations will be handled by 3D Black Scholes (Section 4).
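A minimal sketch of the Longstaff and Schwartz (2001) LSM MCS idea for an American put is shown below: simulate paths, then step backwards through the exercise dates, regressing discounted continuation values on a quadratic in the (normalized) stock price over in-the-money paths to decide early exercise. This simplified version is not the authors' benchmarked implementation, and all parameter values are assumed for illustration.

```python
import math
import random

def fit_quadratic(xs, ys):
    """Least squares fit of y = a + b*x + c*x^2 via the normal equations (8),
    solved with Cramer's rule."""
    n = len(xs)
    s1 = sum(xs)
    s2 = sum(x * x for x in xs)
    s3 = sum(x ** 3 for x in xs)
    s4 = sum(x ** 4 for x in xs)
    t0 = sum(ys)
    t1 = sum(x * y for x, y in zip(xs, ys))
    t2 = sum(x * x * y for x, y in zip(xs, ys))

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3([[n, s1, s2], [s1, s2, s3], [s2, s3, s4]])
    a = det3([[t0, s1, s2], [t1, s2, s3], [t2, s3, s4]]) / d
    b = det3([[n, t0, s2], [s1, t1, s3], [s2, t2, s4]]) / d
    c = det3([[n, s1, t0], [s1, s2, t1], [s2, s3, t2]]) / d
    return a, b, c

def lsm_american_put(S0, K, r, sigma, T, M=50, N=4000, seed=1):
    """Illustrative Least Squares Monte Carlo value of an American put."""
    rng = random.Random(seed)
    dt = T / M
    disc = math.exp(-r * dt)
    drift = (r - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)

    # Simulate N risk-neutral GBM paths with M steps each.
    paths = []
    for _ in range(N):
        s, path = S0, [S0]
        for _ in range(M):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            path.append(s)
        paths.append(path)

    # Cash flows at maturity, then backward induction over exercise dates.
    cash = [max(K - p[M], 0.0) for p in paths]
    for k in range(M - 1, 0, -1):
        cash = [c * disc for c in cash]          # discount one step back
        itm = [i for i in range(N) if K - paths[i][k] > 0.0]
        if len(itm) < 3:
            continue
        xs = [paths[i][k] / K for i in itm]      # normalized regressor
        ys = [cash[i] for i in itm]
        a, b, c = fit_quadratic(xs, ys)
        for i, u in zip(itm, xs):
            continuation = a + b * u + c * u * u
            exercise = K - paths[i][k]
            if exercise > continuation:
                cash[i] = exercise               # exercise early on this path
    return disc * sum(cash) / N                  # discount step 1 -> time 0

price = lsm_american_put(100.0, 110.0, 0.05, 0.2, 1.0)
print(round(price, 2))
```

Each of the N paths, and each per-step regression, can be computed swiftly and independently, which is the division-into-sections property that makes the LSM a good fit for the Clouds.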
2.4. The Cloud Computing Business Framework
To address the three challenges in the business context described earlier, the Cloud Computing Business Framework (CCBF) is proposed. The core concept of the CCBF is an improved version of Weinhardt et al.'s (2009) Cloud Business Model Framework (CBMF), in which they demonstrate how technical solutions and Business Models fit into their CBMF. The CCBF is proposed to
deal with four research problems:

1. Classification of business models, with consolidation and explanations of their strategic relations to IaaS, PaaS and SaaS.
2. Accurate measurement of cloud business performance and ROI.
3. Dealing with communications between desktops and clouds, and between different clouds offered by different vendors, with a focus on enterprise portability.
4. Providing linkage and relationships between different cloud research methodologies, and between IaaS, PaaS, SaaS and Business Models.
The Cloud Computing Business Framework is a highly structured conceptual and architectural framework that allows a series of conceptual methodologies to be applied and fitted into Cloud Architecture and Business Models. Based on the summary in Section 2.1, our research questions can be summed up as: (1) Classification; (2) Sustainability; (3) Portability and (4) Linkage. This paper focuses on the third research question, Portability, which is described as follows.
Portability: This refers to enterprise portability, which involves moving entire application services from desktops to clouds and between different clouds. For financial services and organisations that are not yet using clouds, portability involves considerable investment in terms of outsourcing, time and effort, including rewriting APIs, plus additional costs. This is regarded as a business challenge. Portability deals with IaaS, PaaS and SaaS. Examples in Grid, Health and Finance will be demonstrated. Financial SaaS (FSaaS) Portability is the focus of this paper.
2.5. Financial Software as a Service (FSaaS)
In relation to finance, portability is highly relevant. This is because a large number of financial applications are written for desktops. There are financial applications for Grid, but not all of them are portable onto clouds. Portability often requires rewrites of the software design and APIs to make them suitable for clouds. Apart from portability, factors such as the accuracy, speed, reliability and security of financial models moving from desktops to clouds must be taken into consideration. The
second problem related to finance is that there are few financial clouds, as described in the opening section. Salesforce offers CRM, but it is not directly related to financial modelling (FM). PayPal is a payment system and does not deal with financial modelling. Enterprise portability from desktops to clouds, and between different clouds, is useful for businesses and financial services, as they cannot afford to spend time and money migrating entire applications, API libraries and resources to clouds. Portability must be made as easy as possible. That said, there are further advantages in moving all applications and resources to clouds. These added values include the following benefits:
• The community cloud – this encourages groups of financial services to form an alliance to analyse complex problems.
• Risk reduction – financial computing results can be compared and jointly studied to reduce risks. This includes running other, less conventional models (non-Gaussian) to expose causes of errors and uncertainties. Excessive risk taking can be minimised with the aid of stricter regulations.
Financial Software as a Service (FSaaS) is the proposal for dealing with these two finance-specific problems. FSaaS is designed to improve the accuracy and quality of both pricing and risk analysis. This is essential because incorrect analysis or excessive risk taking might cause adverse impacts such as financial loss, severe damage to credibility, or a credit crunch. The research demonstration is at the SaaS level, which means it can calculate best prices or risks based on different values of volatility, maturity, risk-free rate and so forth in cloud applications. Different models for FSaaS are presented and explained from Section 2.3 onwards, in which Monte Carlo Methods (MCM) and Black Scholes Models (BSM) will be demonstrated as the core models used in FSaaS.
3. FSaaS Portability: Monte Carlo Simulations with Least Square Methods
This section describes how Financial SaaS portability on clouds can be achieved. This mainly involves Monte Carlo Methods (MCM) and the Black Scholes Model (BSM). Before describing how they work and how validation and experiments are done, current practice in Finance is presented as follows. Mathematical models such as MCM are used in the Risk Management area, where models are used to simulate the risk of exposure to various types of operational risks. Monte Carlo Simulations (MCS) in Commonwealth Bank Australia are written in Fortran and C#. Such simulations take several hours or over a day (Chang, Wills, & De Roure, 2010c). The results may be needed by the bank for the quarterly reporting period. Monte Carlo Methods (MCM) are suitable for calculating best prices for buying and selling, and provide data for investors' decision-making (Waters, 2008). MATLAB is used due to its ease of use and relatively good speed. When the volatility is known and provided, prices for buying and selling can be calculated. Chang, Wills, and De Roure (2010b, 2010c) have demonstrated examples of how to calculate both call and put prices, with their respective likely price, upper limit and lower limit.
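The call/put calculation with a likely price and upper/lower limits can be sketched as a plain Monte Carlo estimator. This is a hedged Python illustration, not the authors' MATLAB code; the parameter values follow Table 1 and the 95% limits come from the standard error of the sample:

```python
import numpy as np

def mc_call_put(S=100.0, X=100.0, T=1.0, r=0.04, v=0.2, nsim=20000, seed=0):
    """Monte Carlo estimates of European call/put prices, each returned
    as (likely price, lower limit, upper limit) at the 95% level."""
    rng = np.random.default_rng(seed)
    # simulate terminal stock prices under geometric Brownian motion
    ST = S * np.exp((r - 0.5 * v**2) * T + v * np.sqrt(T) * rng.standard_normal(nsim))
    disc = np.exp(-r * T)
    results = {}
    for name, payoff in (("call", np.maximum(ST - X, 0.0)),
                         ("put", np.maximum(X - ST, 0.0))):
        sample = disc * payoff
        mean = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(nsim)
        results[name] = (mean, mean - 1.96 * se, mean + 1.96 * se)
    return results
```

With these inputs the call price lands near 9.9 and the put near 6.0, bracketed by the upper and lower limits.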
3.1. Motivation for Using the Least Square Method
As discussed in Section 2.3, Variance-Gamma Processes (VGP) with Financial Clouds and FSaaS with error reductions are demonstrated by Chang, Wills, and De Roure (2010a, 2010b, 2010c). That approach has two drawbacks: (1) the program focuses on error correction, which takes time and seems to make the program slow to start; and (2) the optimum is 20,000 simulations per attempt. This is perhaps because of the high amount of memory required for VGP. Improvements are necessary, including the use of another
Table 1. The first part of coding algorithm for LSM

S=100; % underlying price
X=100; % strike
T=1; % maturity
r=0.04; % risk free rate
dividend=0;
v=0.2; % volatility
nsimulations=10000; % no. of simulations, which can be updated
nsteps=10; % 10 steps are taken; can be changed to 50, 100, 150 and 200 steps
CallPutFlag="p";
%%%%%%%%%%%%%%%%%%%%%%%%%
%AnalyAmerPrice=BjerkPrice(CallPutFlag,S,X,r,dividend,v,T)
r=r-dividend; % adjust risk free rate by the dividend yield
%AnalyEuropeanPrice=BlackScholesPrice(CallPutFlag,S,X,T,r,v)
if CallPutFlag=="c",
  z=1;
else
  z=-1;
end;
HPC language or a better method. Adopting a better methodology not only enhances performance but also resolves some aspects of the challenges. Barnard et al. (2003) demonstrate that having the right method is more important than using a particular language.

The Least Square Method (LSM) fits into the improvement plan with the following rationale. Firstly, LSM provides a quick execution time, more than 50% faster than VGP (as shown in Section 5). Secondly, it allows the number of simulations to be pushed to 100,000 in one go before encountering issues such as stability and performance. By offering these two distinct advantages over VGP, LSM is therefore a more suitable method for FSaaS to achieve speed, accuracy and performance. In addition, LSM has been extensively used in robotics and intelligent systems, where a major problem is divided into sections, and each section is performed with fast and accurate calculations.
3.2. Coding Algorithm for the Least Square Method
This section describes the coding algorithm for the Least Square Method. Table 1 shows the initial part of the code, where key figures such as maturity, volatility and risk-free rate are given. This allows us to calculate and track call prices as maturity, risk-free rate and volatility change. Similarly, we can modify our code to track volatility for risk analysis when other variables are changed.

Both American and European price methods are commonly used in Monte Carlo Simulations (Hull, 2009). It is an added value to calculate both prices in one go, so both options are included in our code.
The next step involves defining three important variables for both American and European options: cash flow from continuation (CC), cash flow from exercise (CE) and the exercise flag (EF), shown in Table 2. The 'for' loop starts the LSM process. Table 3 shows how the three variables CC, CE and EF are updated.

Table 4 shows the main body of the LSM calculations. The 'regrmat' matrix is used to perform regression of the continuation value. This value is calculated and fed into the 'ols' function, a built-in function offered by open-source Octave, to calculate the ordinary least squares estimate. The p value is the outcome of the 'ols' function, which is then used to determine the final values of CC, EF and CE. In MATLAB, the equivalent function is 'lscov'.

Table 5 shows the last part of the algorithm for the LSM. EF, calculated in Table 4, is used
Table 2. The second part of coding algorithm for the LSM

smat=zeros(nsimulations,nsteps);
CC=zeros(nsimulations,nsteps); % cash flow from continuation
CE=zeros(nsimulations,nsteps); % cash flow from exercise
EF=zeros(nsimulations,nsteps); % exercise flag
dt=T/(nsteps-1);
smat(:,1)=S;
drift=(r-v^2/2)*dt;
qrdt=v*dt^0.5;
for i=1:nsimulations,
  st=S;
  curtime=0;
  for k=2:nsteps,
    curtime=curtime+dt;
    st=st*exp(drift+qrdt*randn);
    smat(i,k)=st;
  end
end
Table 3. The third part of coding algorithm for the LSM
CC=smat*0; %cash flow from continuation
CE=smat*0; %cash flow from exercise
EF=smat*0; %Exercise flag
st=smat(:,nsteps);
CE(:,nsteps)=max(z*(st-X),0);
CC(:,nsteps)=CE(:,nsteps);
EF(:,nsteps)=(CE(:,nsteps)>0);
paramat=zeros(3,nsteps); %coefficient of basis functions
to decide the values of an important variable, 'payoff_sum', which is then used to calculate the best price for American and European options.

Upon running the MATLAB application, 'lsm', it calculates the best pricing values for American and European options. The following shows the outcome of executing the LSM code.
> lsm
MCAmericanPrice = 6.3168
MCEuropeanPrice = 5.9421
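For readers without Octave or MATLAB, the five-part listing in Tables 1 to 5 can be sketched as a single Python function. This is an unofficial translation under the same parameters; being a Monte Carlo method, it reproduces the prices above only approximately:

```python
import numpy as np

def lsm_put(S=100.0, X=100.0, T=1.0, r=0.04, v=0.2,
            nsim=10000, nsteps=10, seed=0):
    """Least Square Monte Carlo (Longstaff-Schwartz) for an American put,
    following Tables 1-5; returns (American price, European price)."""
    rng = np.random.default_rng(seed)
    dt = T / (nsteps - 1)                      # the listing uses nsteps-1 intervals
    drift = (r - 0.5 * v**2) * dt
    vol = v * np.sqrt(dt)
    # simulate stock price paths (Table 2)
    smat = np.empty((nsim, nsteps))
    smat[:, 0] = S
    for k in range(1, nsteps):
        smat[:, k] = smat[:, k - 1] * np.exp(drift + vol * rng.standard_normal(nsim))
    CE = np.maximum(X - smat, 0.0)             # cash flow from exercise (put)
    CC = np.zeros_like(smat)                   # cash flow from continuation
    EF = np.zeros(smat.shape, dtype=bool)      # exercise flag
    CC[:, -1] = CE[:, -1]
    EF[:, -1] = CE[:, -1] > 0
    # backward induction with quadratic-basis regression (Tables 3-4)
    for k in range(nsteps - 2, 0, -1):
        itm = CE[:, k] > 0                     # regress in-the-money paths only
        x = smat[itm, k]
        y = CC[itm, k + 1] * np.exp(-r * dt)   # discounted continuation value
        A = np.column_stack([np.ones_like(x), x, x ** 2])
        p, *_ = np.linalg.lstsq(A, y, rcond=None)
        exercise = np.zeros(nsim, dtype=bool)
        exercise[np.flatnonzero(itm)] = CE[itm, k] > A @ p
        EF[:, k] = exercise
        EF[exercise, k + 1:] = False           # cancel later exercise on these paths
        CC[:, k] = CC[:, k + 1] * np.exp(-r * dt)
        CC[exercise, k] = CE[exercise, k]
    # average the discounted cash flows at exercise (Table 5)
    disc = np.exp(-r * dt * np.arange(nsteps))
    american = float((EF * CE * disc).sum() / nsim)
    european = float(disc[-1] * CE[:, -1].mean())
    return american, european
```

A run with the Table 1 inputs gives prices near the 6.32/5.94 pair shown above, within Monte Carlo noise.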
4. A Particular FSaaS: The 3D Black Scholes Model by Mathematica
The Black Scholes Model (BSM) has been extensively used in financial modelling and optimisation. Chang, Wills and De Roure (2010a, 2010c) have demonstrated their Black Scholes MATLAB applications running on Clouds for risk analysis. Risk analysis is often presented as visualisation, which makes the analysis easier to read and understand. MATLAB is useful for calculation and 3D computation, but its 3D computational performance tends to be more time-consuming than Mathematica, which offers commands to compute 3D diagrams swiftly. For this reason, Mathematica is used as the platform for demonstration.

Miller (2009) explains how Mathematica can be used for BSM, and demonstrates that it is relatively complex to model BSM directly, so the Black Scholes formulas (BSF) are best expressed in terms of an auxiliary function. His rationale is that BSM is based on an
Table 4. The fourth part of coding algorithm for the LSM

for k=nsteps-1:-1:2,
  st=smat(:,k);
  CE(:,k)=max(z*(st-X),0);
  % Only the positive payoff points are input for regression
  idx=find(CE(:,k)>0);
  Xvec=smat(idx,k);
  Yvec=CC(idx,k+1)*exp(-r*dt);
  % Use regression - regress discounted continuation value at the
  % next time step on S variables at the current time step
  regrmat=[ones(size(Xvec,1),1),Xvec,Xvec.^2];
  p=ols(Yvec,regrmat); % p=lscov(regrmat,Yvec) for MATLAB
  CC(idx,k)=p(1)+p(2)*Xvec+p(3)*Xvec.^2;
  % If exercise value is more than continuation value, then
  % choose to exercise
  EF(idx,k)=CE(idx,k) > CC(idx,k);
  EF(find(EF(:,k)),k+1:nsteps)=0;
  paramat(:,k)=p;
  idx=find(EF(:,k) == 0);
  % No need to store regressed value of CC for next use
  CC(idx,k)=CC(idx,k+1)*exp(-r*dt);
  idx=find(EF(:,k) == 1);
  CC(idx,k)=CE(idx,k);
end
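The regression at the heart of Table 4 is ordinary least squares on a quadratic basis of the stock price. A minimal Python equivalent of the ols/lscov step, using hypothetical in-the-money prices and discounted continuation values for illustration:

```python
import numpy as np

# hypothetical in-the-money stock prices (Xvec) and discounted
# continuation values (Yvec) at one time step
Xvec = np.array([85.0, 90.0, 95.0, 99.0])
Yvec = np.array([14.8, 10.1, 5.9, 2.7])

# quadratic basis [1, S, S^2], as in regrmat
regrmat = np.column_stack([np.ones_like(Xvec), Xvec, Xvec ** 2])
p, *_ = np.linalg.lstsq(regrmat, Yvec, rcond=None)   # ols(Yvec, regrmat)

# regressed continuation value: CC = p(1) + p(2)*S + p(3)*S^2
CC = p[0] + p[1] * Xvec + p[2] * Xvec ** 2
```

The fitted CC values are then compared against immediate exercise values, exactly as in the EF update of Table 4.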
Table 5. The fifth part of coding algorithm for the LSM

payoff_sum=0;
for i=1:nsteps,
  idx=find(EF(:,i) == 1);
  st=smat(idx,i);
  payoffvec=exp(-r*(i-1)*dt)*max(z*(st-X),0);
  payoff_sum=payoff_sum+sum(payoffvec);
end
MCAmericanPrice=payoff_sum/nsimulations
st=smat(:,nsteps);
payoffvec=exp(-r*(nsteps-1)*dt)*max(z*(st-X),0);
payoff_sum=sum(payoffvec);
MCEuropeanPrice=payoff_sum/nsimulations
arbitrage argument in which any risk premium above the risk-free rate is cancelled out. Hence, both the BSF and the auxiliary function take the same five variables, as follows.

r = continuously compounded risk-free rate of return, e.g., the return on U.S. Treasury bills with very short maturities.
t = time (in years) until the expiration date.
p = current price of the stock.
k = exercise price of the option.
sd = volatility of the stock (standard deviation of the annual rate of return).

The first step is to define the auxiliary function, 'AuxBS', which is then used to define the Black Scholes function. The code algorithm and formulas are presented as follows:
AuxBS[p_,k_,sd_,r_,t_] = (Log[p/k]+r t)/(sd Sqrt[t])+.5 sd Sqrt[t]

This is equivalent to:

0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t])   (9)
Similarly, Black Scholes can be defined as:
BlackScholes[p_,k_,sd_,r_,t_] =
p Norm[AuxBS[p,k,sd,r,t]]- k Exp[-r t]
(Norm[AuxBS[p,k,sd,r,t]-sd Sqrt[t]])
The formula is:

-e^(-r t) k Norm[-0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t])] + p Norm[0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t])]   (10)

Here 'Norm' denotes the cumulative distribution function of the standard normal distribution. By using these two functions effectively, pricing and risks can be calculated and then presented in 3D Visualisation. The advantages are discussed in the next section.

4.1. 3D Black Scholes

Methods such as Fourier series, stochastic volatility and BSM are used for volatility. As a mainstream option, BSM is selected for risk analysis in this paper, since BSM has finite difference equations to approximate derivatives. Our previous papers (Chang, Wills, & De Roure, 2010a, 2010c) have demonstrated risk and pricing calculations based on the Black Scholes Model (BSM). Results are presented in numerical form, and occasionally require users and collaborators to visualise some scenarios of numerical computation in their minds. In other papers, Chang, Wills, and De Roure (2010b, 2010c) demonstrate that Cloud business performance can be presented by 3D Visualisation. Where computational applications can be presented using 3D Visualisation, this can improve usability and understanding (Pajorova & Hluchy, 2010). Currently the focus of MCM is to demonstrate portability on top of computational simulations and modelling in pricing on different Clouds, and this does not need results in 3D format. However, BSM is used to investigate risk. Risk can be difficult to measure accurately, and models may understate or miss areas and probabilities of risk. It is difficult to keep track of risks when extreme circumstances happen. The use of 3D Visualisation can help to expose any hidden errors or missing calculations. Thus, it improves the quality of risk analysis.

4.1.1. Scenarios in Risk Analysis with 3D Visualisation
This section describes some scenarios to calculate and present risks. The first scenario involves investigating profit/loss in relation to the put price. The call price (buying price) for a particular investment is 60 per stock. The put price (selling price) that gives zero profit/loss is 60. The risk-free rate, the guaranteed rate that will not incur loss, is between 0 and 0.5%. However, the profit and loss will vary due to the impact of volatility, which means a selling price between 50 and 60 will incur a different extent of loss. Similarly, selling prices between 60 and 70 will yield a different extent of profit. The intent is to find out the percentage of profit and loss for a massive sale, and the risk associated with it. Using the auxiliary and Black Scholes functions, the result can be computed in 3D swiftly and presented in Figure 1, which is similar to a 3D parabola.
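The auxiliary and Black Scholes functions used in these scenarios translate directly from the Mathematica definitions. A hedged Python sketch, with 'Norm' implemented as the standard normal CDF:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Cumulative standard normal distribution (the role of 'Norm' above)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def aux_bs(p, k, sd, r, t):
    """AuxBS: (Log[p/k] + r t)/(sd Sqrt[t]) + 0.5 sd Sqrt[t]."""
    return (log(p / k) + r * t) / (sd * sqrt(t)) + 0.5 * sd * sqrt(t)

def black_scholes(p, k, sd, r, t):
    """European call price: p Norm[AuxBS] - k e^(-r t) Norm[AuxBS - sd Sqrt[t]]."""
    d1 = aux_bs(p, k, sd, r, t)
    return p * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d1 - sd * sqrt(t))
```

Sweeping two of the five inputs over a grid of such evaluations produces exactly the 3D surfaces shown in Figures 1 and 2.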
The second scenario is to identify the best put price for a range of fluctuating volatilities. Volatility is used to quantify the risk of a financial instrument, and is subject to fluctuations that may result in different put prices. The volatility ranges between 0.20% and 0.40%, the best put price is between 6.5 and 9.2, and the risk-free rate is between 0 and 0.5%. The higher the risk, the higher the return. However, this situation is reversed when risk (volatility in this case) goes beyond the cut-off
volatility. Hence, the task is to keep track of the risk pattern, and to identify the cut-off point for volatility. Similarly, the auxiliary and Black Scholes functions are used to compute the 3D Visualisation swiftly, and the result is presented in Figure 2, which looks like an inverted V and shows the best put price is 9 when volatility is 0.30.

Figure 1. The 3D risk analysis to investigate volatile percentage of profits and loss

Figure 2. The 3D risk analysis to investigate the best put price in relation to fluctuating volatility
4.1.2. Delta and Theta: Scenarios in
Risk Analysis with 3D Visualisation
In BSM, the partial derivative of an option value with respect to the stock price is known as Delta. Hull (2009) and Miller (2009) assert that Delta is useful in risk measurement for an option because it indicates how much the price of an option will respond to a change in the price of the stock. Delta is a useful tool in risk management where a portfolio contains more than one option on the stock. The derivative function, D, is built into Mathematica. This greatly simplifies the coding for Delta, which can be presented as:
Delta[p_,k_,sd_,r_,t_] = D[BlackScholes[p,k,sd,r,t],p]

which corresponds to this formula:

(0.398942 e^(-(0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t]))^2/2))/(sd Sqrt[t]) - (0.398942 e^(-r t - (-0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t]))^2/2) k)/(p sd Sqrt[t]) + Norm[0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t])]   (11)
Figure 3. The 3D risk analysis to explore the percentage of loss and the best put price in relation to the impact of economic downturn
Delta computes positive derivatives in BSM; to obtain an inverted counterpart, the negative derivative with respect to time, a new function, Theta, is introduced.
Theta[p_,k_,sd_,r_,t_] = -D[BlackScholes[p,k,sd,r,t],t]

which corresponds to this formula:

0.398942 e^(-r t - (-0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t]))^2/2) k (r/(sd Sqrt[t]) - (0.25 sd)/Sqrt[t] - (r t + Log[p/k])/(2 sd t^(3/2))) - 0.398942 e^(-(0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t]))^2/2) p (r/(sd Sqrt[t]) + (0.25 sd)/Sqrt[t] - (r t + Log[p/k])/(2 sd t^(3/2))) - e^(-r t) k r Norm[-0.5 sd Sqrt[t] + (r t + Log[p/k])/(sd Sqrt[t])]   (12)
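Because Mathematica's D performs the differentiation symbolically, formulas (11) and (12) can be cross-checked numerically. A small Python sketch using central differences, with assumed parameter values and 'Norm' as the standard normal CDF:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes(p, k, sd, r, t):
    # call price via the auxiliary function d1 = AuxBS[p,k,sd,r,t]
    d1 = (log(p / k) + r * t) / (sd * sqrt(t)) + 0.5 * sd * sqrt(t)
    return p * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d1 - sd * sqrt(t))

def delta(p, k, sd, r, t, h=1e-5):
    """Numerical Delta = dV/dp, a stand-in for D[BlackScholes,p]."""
    return (black_scholes(p + h, k, sd, r, t)
            - black_scholes(p - h, k, sd, r, t)) / (2 * h)

def theta(p, k, sd, r, t, h=1e-5):
    """Numerical Theta = -dV/dt, a stand-in for -D[BlackScholes,t]."""
    return -(black_scholes(p, k, sd, r, t + h)
             - black_scholes(p, k, sd, r, t - h)) / (2 * h)
```

For a call, the three terms of (11) collapse to Norm[AuxBS[p,k,sd,r,t]], since the two density terms cancel analytically; the numerical derivative confirms this.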
The third scenario is to investigate the extent of loss in an organisation during the financial crisis between 2008 and 2009, and to identify which put prices (in relation to volatility) will incur the least extent of loss while keeping track of risks in 3D. This requires using the Theta function to present the risk and pricing in relation to high volatility. The put price is between 20 and 100, the percentage of loss is between -5% and -25%, and the risk-free rate is between 0 and 0.5%. In this case, the risk-free rate means the percentage of assistance this organisation can obtain. The Theta function is used to compute the 3D risk swiftly and to get the result in Figure 3. This shows the percentage of loss improves as the put prices are raised to approximately 55. However, at around 60, uncontrolled volatility (such as human speculation or natural disasters) takes hold and the percentage of loss drops sharply to -25%. The percentage of loss then recovers to -5%, before slowly falling again towards -25%. However, if the risk-free rate is improved up to 0.5%, the extent of loss is less, and stays nearly at -5%. This means a credit guarantee from somewhere may help this organisation suffer minimal impact from loss. However, this is just a computer simulation and does not reflect the real difficulty faced by this organisation. Even so, our FSaaS simulations can produce a range of likely outcomes, which are valuable to decision-makers.
5. Experiment and Benchmark in the Clouds
Monte Carlo Simulations with LSM can be
used for FSaaS on Public, Private and Hybrid
Clouds. This is further enhanced by the use of the open source package Octave 3.2.4, so that there is no need to write additional APIs to achieve enterprise portability. Applications written on the developer platform are portable and executable on different desktops and Clouds with different hardware and software requirements, and execute as if they were on the same platform.

3D Black Scholes has a fast execution time but only runs in Mathematica, which is not yet portable to different Clouds due to licensing issues; there is also no open source alternative to simplify the process of enterprise portability. At the time of writing, MATLAB licences on Private Clouds are still under development, and therefore results for MATLAB cover only a Private Cloud running in Virtual Machines. Chang, Wills and De Roure (2010a, 2010c) have demonstrated the same FSaaS application running with Octave and MATLAB on different Clouds, and their results show that execution on MATLAB is approximately five times quicker than on Octave, though MATLAB is more expensive and needs to deal with licensing issues regularly.
5.1. Experiments with Octave in Running the LSM on Different Clouds
The code written for LSM in Section 3.2 has been used for experimenting and benchmarking in the Clouds. 10,000 to 100,000 simulations (increasing by an additional 10,000 simulations each time) of Monte Carlo Methods (MCM) adopting LSM are performed, and the time taken on each of a desktop, private clouds and an Amazon EC2 public cloud is recorded and averaged over three attempts. Hardware specifications for the desktop, public cloud and private clouds are as follows.

The desktop has a 2.67 GHz Intel Xeon Quad Core and 4 GB of memory (800 MHz). One Amazon EC2 public cloud is used: a 64-bit Ubuntu 8.04 virtual server on a large resource instance with a dual core CPU at 2.33 GHz and 7.5 GB of memory. Two private clouds are set up. The first private cloud is hosted on a Windows virtual server, created by a VMware Server on top of a rack server, and its network is in a network-translated and secure domain. The virtual server has 2 cores of 2.67 GHz and 4 GB of memory at 800 MHz. The second private cloud is a 64-bit Windows server installed on a rack, with a 2.8 GHz Six Core Opteron and 16 GB of memory. All five settings have Octave 3.2.4 installed, an open source equivalent to MATLAB. The experiment began by running the FSaaS code (in Section 3.2) on the desktop, private clouds and public cloud, started one at a time. Three attempts for each set of simulations are made, and the result is the average of the three attempts. The benchmark is execution time, since it is a common benchmark used in several financial applications. Figure 4 shows the complete result of running the FSaaS code on different Clouds.
Figure 4 shows the execution time for the FSaaS application on the desktop, public cloud and two private clouds. The experiments confirm the following. Firstly, enterprise portability is achieved and the FSaaS application can be executed on different platforms. Secondly, the improved FSaaS application can run 100,000 simulations in one go on Clouds. Above 100,000 simulations in one go, however, factors such as performance and stability must be balanced before tuning up the capabilities of our FSaaS. The six-core rack server has the most advanced CPU, disk, memory, 64-bit operating system and networking hardware, and it is not surprising that it is always the quickest. Although the desktop has a similar hardware specification to the server, it comes out slowest in all experiments. The difference between the Public Cloud (large instance) and the Private Cloud (virtual server) is minimal. Although the large instance of the public cloud has the edge in hardware specification over the Virtual Private Cloud (VPC), the networking speed within the VPC is faster than the Public Cloud, and this explains the small difference between them.
Benchmark results show that pricing and risk analysis can be calculated rapidly with accurate outcomes. Portability is achieved with good, reliable performance in clouds. These experiments demonstrate portability, speed, accuracy and reliability from desktop to clouds. Figure 4 shows the benchmark graph.

Figure 4. Timing benchmark comparison for desktop, public cloud and two private clouds for time steps = 10
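The benchmarking protocol (execution time averaged over three attempts) can be expressed as a small harness. A Python sketch, where the lambda is a hypothetical stand-in workload for the FSaaS code:

```python
import time
import statistics

def benchmark(fn, attempts=3):
    """Average wall-clock execution time of fn over several attempts,
    mirroring the three-attempt averaging used in these experiments."""
    timings = []
    for _ in range(attempts):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings)

# stand-in workload in place of the FSaaS simulation
elapsed = benchmark(lambda: sum(i * i for i in range(100_000)))
```

The same harness, pointed at the Octave invocation, reproduces the timing methodology behind Figure 4.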
5.2. Experiments with MATLAB in Running the LSM on Desktop and One Private Cloud
MATLAB is used for high performance Cloud computation, since it allows faster calculations than Octave. The drawback of using MATLAB 2009 is licensing, which means all desktops and Cloud resources must be licensed prior to setting up experiments. For this reason, only the desktop and a Private Cloud (virtual machine) are used for these experiments. The use of MATLAB 2009 reduces the execution time for FSaaS, and also allows experiments to proceed with a higher number of time steps. The more time steps used, the more accurate the outcome, although higher numbers of time steps need more computing resources.
Five different sets of experiments are designed, and each set of experiments counts execution time from 10,000 to 100,000 simulations, as described in Section 5.1. The only difference is the time step. The first experiment sets the time step to 10, the second to 50, the third to 100, the fourth to 150 and, finally, the last experiment sets the time step to 200. The time step can be increased up to 1,000, but performance seems to drop off, particularly for experiments running high numbers of simulations. For this reason, the maximum time step in the experiments is limited to 200. Results for each set of experiments are recorded and shown in Figures 5, 6, 7, 8 and 9.
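The five experiment sets form a simple parameter grid. A sketch of the sweep, where run_lsm is a hypothetical placeholder for launching one MATLAB LSM run:

```python
# parameter grid for the experiments: five time-step settings,
# each swept from 10,000 to 100,000 simulations in steps of 10,000
time_steps = [10, 50, 100, 150, 200]
simulation_counts = list(range(10_000, 100_001, 10_000))

grid = [(ns, n) for ns in time_steps for n in simulation_counts]

def run_lsm(nsteps, nsimulations):
    """Hypothetical placeholder for one timed MATLAB LSM run."""
    return {"nsteps": nsteps, "nsimulations": nsimulations}

results = [run_lsm(ns, n) for ns, n in grid]
```

Each (time step, simulation count) pair corresponds to one data point in Figures 5 to 9.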
The results presented in Figures 5, 6, 7, 8 and 9 have the following implications. Firstly, the execution time and the number of simulations are directly proportional to each other: the higher the number of simulations to be computed, the longer the execution time on the desktop and Private Cloud. The linear relationship is less obvious when a lower time step is involved. This is likely because execution completes so quickly that the range of errors and uncertainties is higher. When the time step increases, it is easier to identify the linear relationship. This linear relationship also confirms what LSM suggests and recommends.

Figure 5. MATLAB timing benchmark for time step = 10
Figure 6. MATLAB timing benchmark for time step = 50
Figure 7. MATLAB timing benchmark for time step = 100
Figure 8. MATLAB timing benchmark for time step = 150
Figure 9. MATLAB timing benchmark for time step = 200
Secondly, MATLAB 2009 offers quick execution time for portability to the Cloud, and a significant time reduction is experienced. However, the licensing issue still prevents large-scale adoption across different Clouds. This means portability should be made as easy as possible, covering not only technical implementation but also licensing issues. However, this paper will not go into details about licensing.
6. A Conceptual Cloud Platform: Implementations and Work-in-Progress
As discussed in previous sections, the primary objective of optimal provisioning and runtime management of cloud infrastructures at the infrastructure, platform and software as a service levels is to optimise the delivery of the overall business outcome for the user. An improved business outcome in general refers to increased revenue, reduced cost, or both. Uncertainties
of outcome, measured in terms of variance, are
often regarded as negative impacts (or risk) and
must be accounted for in the pricing calculations
of the service delivery.
There are many types of risks that might
impact the variance of the business outcome –
including market risk, credit risk, liquidity risk,
legal/reputation risk and operational risk. (Risk
taxonomy was previously established in the
context of various banking regulations such as
Basel II.) Among these types of risks, operational
risk is considered to be most directly related
to the IT infrastructure as it might impact the
business through internal and external fraud,
workplace safety, business practice, damage to
physical assets, business disruption and system
failures, and execution delivery and process
management. In particular, over and under
capacity, general availability of the system,
failed transactions, loss of data due to virus or
intrusion, poor business decision due to poor
data, and failure of communication systems are
all considered as part of the business disruption
and system failures and need to be considered
as part of the operational risk.
Behaviour models of systems are often constructed to predict the likely outcome under different contexts and scenarios. Both analytical and simulation methodologies have been applied to these behaviour models to predict the likely outcomes, and our demonstrations with MCM and BSM present some of these predictive features. Maximising the outcome requires minimising the risk and cost, and maximising the performance.
In regard to all possible causes, "poor business decision due to poor data quality" is the one that we address. The FSaaS proposal can track and display risks in 3D Visualisation, so that no hidden areas or missing data are left uncovered by the simulations. Accurate results can be computed quickly for 100,000 simulations in one go, and this greatly helps directors to make the right business decisions.
Apart from the MCM and BSM simulations,
other technologies such as workflows are
used to present risks in business processes and
to help make the right business decision. This
includes risk tolerance, which is commonly
associated with the industry framework and
business processes and has to be established
top-down. Figure 10 shows a business process-based behaviour model of a typical e-commerce
operation. The customer interacts with the
web site through the web server to place a new
order or to initiate a return/exchange. Either
of the two scenarios requires interaction
with the customer order system and access to the customer records. A new order might
also involve preparing the billing and sending the
request to the warehouse for fulfillment. This
business process-based behaviour model clearly
illustrates the different types of operational risk
involved during the various stages of the business
process. In Figure 10, the types of operational
risk identified in the front-end part of the
business process include Business Reputation, Natural Disaster, System Failure/System
Capacity, Security, and other system failures
and security issues. Business Risk includes
Business Reputation and other system failures
and security issues.
6.1. Contributions from Southampton: The Financial Software as a Service (FSaaS)
Figure 11 shows a conceptual architecture based
on Operational Risk Exchange (www.orx.org),
which currently includes 53 banks from 18
countries for sharing the operational risk data
(177K loss incidents totalling 62B Euros of
loss as of the end of 2010), and
demonstrates how financial clouds can be
implemented successfully for aggregating and
sharing operational risk data. One of the main
contributions from the University of Southampton is the use of MCM (MATLAB) for pricing
and BSM (Mathematica) for risk analysis.
This cloud platform offers calculation for risk
modelling, fraud detection, pricing analysis
and a critical analysis with warning over risktaking. It reports back to participating banks and
bankers about their calculations, and provides
useful feedback for their potential investment.
Risk data computed by MCM, BSM and
other models can be
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
58 International Journal of Cloud Applications and Computing, 1(2), 41-63, April-June 2011
Figure 10. The operational risk and business risk analysis by workflow
Figure 11. A conceptual financial cloud platform [using orx.org as an example] and contributions from Southampton in relation to this platform
Figure 12. The IBM Fine-Grained Security Framework (Li, 2010)
simulated and shared within the secure platform
that offers anonymisation and data encryption.
It also allows bank clients to double-check
mortgage lending interest rates and calculations,
and whether they are fit for purpose. The platform
also works closely with regulations and risk
control, so that risks are managed and monitored
in the Financial Cloud platform. Our FSaaS is
one part of the platform (as indicated by the red
arrow) to demonstrate accuracy, performance
and enterprise portability over Clouds, and it is
not only conceptual but has been implemented.
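A minimal sketch of the anonymisation step such a platform could apply before loss records are pooled across member banks. The record fields and the platform-wide key are hypothetical; the actual ORX mechanisms are not described here.

```python
import hashlib, hmac, json

SECRET_SALT = b"platform-wide secret"   # hypothetical key held by the platform

def anonymise_record(record: dict, key: bytes = SECRET_SALT) -> dict:
    """Replace the identifying bank name with a keyed hash (HMAC-SHA256)
    so that loss data can be pooled without revealing which member bank
    reported it, while the same bank always maps to the same token."""
    token = hmac.new(key, record["bank"].encode(), hashlib.sha256).hexdigest()[:16]
    out = dict(record)
    out["bank"] = token
    return out

incident = {"bank": "Example Bank plc", "loss_eur": 250_000,
            "event_type": "business disruption and system failures"}
shared = anonymise_record(incident)
print(json.dumps(shared, indent=2))
```

A keyed hash (rather than a plain hash) prevents an outsider from recovering member identities by hashing a list of candidate bank names; transport encryption would be layered on top when the record leaves the bank.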
6.2. The IBM Fine-Grained Security Framework
Figure 12 shows the Fine-Grained Security
Framework currently being developed at the IBM
Research Division. The framework consists of
layers of security technologies to consolidate the
security infrastructure used by financial services.
In addition to traditional perimeter defence
mechanisms such as access control, intrusion
detection (IDS) and intrusion prevention (IPS),
this framework introduces perimeter defence
at a much finer granularity, such as a virtual machine, a database,
a JVM or a web service container.
Starting on the more traditional side, the
first layer of defence is access control
and firewalls, which allow access only to
authorised members. The second layer consists
of an Intrusion Detection System (IDS) and an
Intrusion Prevention System (IPS), which detect
attack, intrusion and penetration, and also provide
up-to-date technologies to prevent attacks such
as DoS, spoofing, port scanning, exploitation of
known vulnerabilities, pattern-based attacks,
parameter tampering, cross-site scripting, SQL
injection and cookie poisoning.
The novelty of the proposed fine-grained
approach lies in the additional protection it
imposes in terms of isolation management,
which enforces top-down policy-based security
management; integrity management, which
monitors the fine-grained entity and provides
early warning as soon as it starts to behave
abnormally; and end-to-end continuous
assurance, which includes investigation and
remediation after an abnormality is detected. This
environment intends to provide strong isolation
of guest environments in an infrastructure-
or platform-as-a-service environment and to
contain possibly subverted and malicious
hosts. Weak isolation can also be
provided when multiple guest environments
need to collaborate and work closely, such as
in a three-tier architecture among the web server,
application server and database environment.
Weak isolation usually focuses more on monitoring
and captures end-to-end provenance
so that investigation and remediation can be
greatly facilitated. Strong isolation and integrity
management are also required for the cloud
management infrastructure, as this is often
among the first vulnerabilities of the cloud
to be exposed. See Figure 12 for details.
7. Discussions

7.1. Variance in Volatility, Maturity and Risk-Free Rate
Calculating the impacts of volatility, maturity
and the risk-free rate is helpful for risk management. Our code in Section 3.2 can calculate
these three aspects, with the following observations.
Firstly, the higher the volatility, the higher
the call price, since greater uncertainty in the
underlying asset must be priced in.
Secondly, the longer the maturity, the
higher the call price, reflecting the greater
potential return of the asset before the end of life of a bond
or security. Thirdly, the higher the risk-free
rate, the higher the call price, as a high risk-free
rate reduces the present value of the exercise
price and boosts investors' confidence.
Both Monte Carlo Methods
and Black-Scholes models are able to calculate
these three aspects.
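These observations can be checked directly against the closed-form Black-Scholes price. The following is a minimal Python sketch (our demonstrations use MATLAB and Mathematica; the option parameters below are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

base = bs_call(100, 100, 1.0, 0.05, 0.2)
print(f"base call price            : {base:.4f}")
print(f"higher volatility (0.3)    : {bs_call(100, 100, 1.0, 0.05, 0.3):.4f}")
print(f"longer maturity (2y)       : {bs_call(100, 100, 2.0, 0.05, 0.2):.4f}")
print(f"higher risk-free rate (8%) : {bs_call(100, 100, 1.0, 0.08, 0.2):.4f}")
```

Each of the three varied inputs raises the call price above the base case, matching the three observations.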
7.2. Accuracy
Monte Carlo Simulations are suitable for analysing
pricing and provide reliable calculations to
several decimal places. In addition, the use
of LSM reduces errors and thus improves the
quality of calculation. New and existing ways
to improve error correction are under further
investigation while achieving enterprise SaaS
portability onto Clouds. In addition, the use of
3D Black-Scholes will ensure the accuracy and
quality of risk analysis. Risks can be quantified
and also presented in 3D visualisation, so that
they can be tracked and checked with ease.
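To make the role of LSM concrete, the following is a simplified Python sketch of the Longstaff-Schwartz least-squares Monte Carlo method for an American put (our actual implementation runs on MATLAB/Octave; the parameter set below is the classic example from Longstaff & Schwartz (2001), for which they report a value of about 4.472):

```python
import numpy as np

def lsm_american_put(S0, K, r, sigma, T, steps=50, paths=100_000, seed=42):
    """Least-squares Monte Carlo (Longstaff-Schwartz) for an American put."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # Simulate geometric Brownian motion paths at times dt, 2dt, ..., T.
    z = rng.standard_normal((paths, steps))
    increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(increments, axis=1))
    # Cashflows initialised to the payoff at maturity, then rolled back.
    cash = np.maximum(K - S[:, -1], 0.0)
    for t in range(steps - 2, -1, -1):
        cash *= disc                      # value cashflows at time step t
        itm = K - S[:, t] > 0             # regress only on in-the-money paths
        if not itm.any():
            continue
        X = S[itm, t]
        # Least-squares fit of continuation value on basis {1, S, S^2}.
        coeffs = np.polyfit(X, cash[itm], 2)
        continuation = np.polyval(coeffs, X)
        exercise = K - X
        ex_now = exercise > continuation  # early-exercise decision
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]
    return disc * cash.mean()             # discount final step back to time 0

price = lsm_american_put(36.0, 40.0, 0.06, 0.2, 1.0)
print(f"LSM American put ≈ {price:.3f}")
```

The 100,000 paths here mirror the “100,000 simulations in one go” reported for our experiments; the quadratic basis is the simplest common choice, and richer bases (e.g., Laguerre polynomials) reduce the regression error further.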
7.3. Implications for Banking
There are several implications for banking. Firstly,
security is a main concern: Cloud vendors
tend to mitigate this risk technically by
segregating different parts of the Clouds, but they
still need to convince clients about the locality
of their data, and about data protection and
security. Security concerns for banks using
Cloud Computing may be limited to cases where
data needs to be transferred (even for a moment)
to the cloud infrastructure. However, in certain
risk management simulations, such as those
involving Monte Carlo, the input data are usually
random values drawn from a statistical distribution
(instead of real client data), so these computations
can be performed on the cloud without security concerns.
Secondly, financial regulators are imposing
tighter risk management controls. Thus, financial
institutions are running more analytical
simulations to calculate the risks to client
organisations. This may present a greater
need for Cloud computation and resources.
Thirdly, portability of the Cloud can
imply letting clients install their own libraries.
Users who run MATLAB on the Cloud may
only need the MATLAB application script or
executable, and to install the MATLAB Runtime
once on the Cloud. For financial simulations
written in Fortran or C++, users may also need
mathematical libraries to be installed in the
Cloud. The Cloud must facilitate an easy way
to install and configure user-required libraries,
without the need to write additional APIs as
several current practices require.
Portability is important because the bank
personnel who run the simulations should be
able to install the necessary software infrastructure,
such as DLLs. One key benefit offered by
the Cloud is cost. In risk management, where
mathematical models are always changing
and becoming more advanced, the hardware
requirements change with them. Using a cloud
service such as FSaaS would reduce upgrade
costs: greater hardware requirements can
be met by upgrading the cloud subscription
to a higher level, instead of decommissioning
the company's own servers and replacing them
with new ones.
7.4. Enterprise Portability to the Clouds
Enterprise portability involves moving entire
application services from desktops to clouds
and between different Clouds, so that users need
not worry about complexity and can work as if on
their familiar systems. This paper demonstrates,
through financial clouds, that modelling and simulations
can take place on the Clouds, where users can
connect and compute. This has the following
advantages:
•	Performance and speed: Calculations can be completed in a short time.
•	Accuracy: The improved models based on LSM provide a more accurate range of prices compared to traditional computation under a normal distribution.
•	Usability: Users need not worry about complexity. This includes using an iPhone or other user-friendly resources to compute; however, this is not the focus of this paper.
However, the drawback of portability is
that additional APIs need to be written (Chang,
Wills, & De Roure, 2010c). Clouds must facilitate
an easy way to install and configure user-required
libraries, without the need to write additional
APIs as several current practices require. If writing
APIs is required for portability, an alternative
is to make the APIs as easy and user-friendly as
those of Facebook and the iPhone. In our demonstration,
there is no need to write additional APIs
to execute financial clouds.
7.5. Other Alternatives such as Parallel Computing
In parallel computing, one way to speed up is
to divide the data into chunks and compute
them on different machines. However, there is an
overhead in designing the problem (requiring
human design effort), and there is also machine
overhead in sending the chunks of data to different
machines and in having a host machine keep
track of them. In a cloud, this may involve sending
data to different parts of the cloud; depending
on how busy the cloud is, the waiting time may
exceed the time it actually takes to compute a
chunk of data.
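The chunking approach described above can be sketched as follows. This hypothetical Python example splits 100,000 Monte Carlo paths for a European call into four chunks and uses a local process pool to stand in for separate machines; the option parameters are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def price_chunk(args):
    """Run one chunk of Monte Carlo paths for a European call and
    return the sum of discounted payoffs plus the chunk size."""
    n_paths, seed = args
    S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2  # illustrative inputs
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)
    return payoff.sum(), n_paths

if __name__ == "__main__":
    chunks = [(25_000, seed) for seed in range(4)]   # 100,000 paths, 4 chunks
    with Pool(4) as pool:                 # overhead: spawning workers,
        results = pool.map(price_chunk, chunks)  # shipping chunks, gathering
    total, n = map(sum, zip(*results))
    print(f"MC call price from {n} paths: {total / n:.4f}")
```

The host keeps track of the chunk results and recombines them; whether the parallel version is actually faster depends on exactly the dispatch and queueing overheads discussed above.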
MCM is used for simulating losses due to
operational risks, and there are plans at the
Commonwealth Bank, Australia, to perform
experiments in parallel computing with virtual
machines, which have recently been set up.
8. Conclusion and Future Work
FSaaS, including MCM and BSM, is used to
demonstrate how portability, speed, accuracy
and reliability can be achieved for financial
enterprises on different Clouds. This fits the
third objective of the CCBF: to allow portability
on top of secure, fast, accurate and reliable clouds.
Financial SaaS provides a useful example of
pricing and risk analysis while maintaining a
high level of reliability and security.
Our research purpose is to port and test financial
applications on the Clouds and to ensure that
enterprise-level portability is workable, so that
users can work on Clouds as they do on their
desktops or familiar environments. Six areas of
discussion are presented to support our cases
and demonstration.
The benchmark is the execution time to
complete calculations after portability is
achieved. Timing is essential, since both speed
and accuracy are expected when using Financial
SaaS on Clouds. The LSM provides added
value and improvements: firstly, it has a short
start-up and execution time to complete pricing
calculations, and secondly, it allows 100,000
simulations in one go on different Clouds. This
confirms that enterprise portability can be delivered
with the LSM application on Octave 3.2.4 and
MATLAB 2009. Five sets of experiments running
the LSM in MATLAB were performed,
where the number of time steps was increased for
each set. The results confirm the linear relationship
and the fast execution time for up to
100,000 simulations in one go on the Private
Cloud. Portability should be made as easy as
possible, and involves not only technical implementation
but also licensing issues. In addition,
the 3D Black-Scholes presentation can enhance
the quality of risk analysis, since risks are not
easy to track down in real time. The 3D
Black-Scholes improves the risk analysis so that risks
can be presented in BSM formulas and are easier
to check and understand. Three different
scenarios of risk analysis are illustrated, and
3D simulations can provide a range of likely
outcomes, so that decision makers can avoid
potential pitfalls.
Implementations and work-in-progress for
a conceptual Cloud Platform have been demonstrated.
This includes the use of workflows to
present risks in business processes, including the
operational risk and business risk, so that risk
tolerance can be established and the analysis can
help in making the right decision. The contribution
from Southampton is the implementation of
FSaaS, which allows pricing calculations and risk
modelling to be computed quickly and accurately
to meet research and business demands.
The technical implementation of enterprise portability
also meets challenges in the business context:
reduced time and cost with better performance.
The IBM Fine-Grained Security Framework
provides a comprehensive model to consolidate
security, imposing additional protection
in terms of isolation management and integrity
management. This ensures that trading, transactions
and any finance-related activities on Clouds
are further protected and safeguarded.
Future work may include the following.
HPC languages such as Visual C++ and/or the
.NET Framework 3.5 (or 4.0) will be used in
the next stages. Other methods, such as parallelism
in MCM, are possible avenues for
further investigation. New error-correction
methods related to MCM will be investigated,
and any useful outcomes will be discussed in
future work. New techniques to improve the
current 3D Black-Scholes visualisation will
be investigated. There are plans to investigate
Financial SaaS and its enterprise portability
over clouds with the Commonwealth Bank Australia,
IBM US and other institutions, so that
better platforms, solutions and techniques may
be demonstrated. We hope to present different
perspectives, recommendations and solutions
for risk analysis, pricing calculations, security
and financial modelling on Clouds, and to deliver
improved prototypes, proofs of concept,
advanced simulations and visualisation.
Acknowledgment
We greatly thank Howard Lee, a researcher from
Deakin University in Australia, for his effort in
inspecting our code and improving its overall quality.
References
Abdi, H. (2009). The methods of least squares. Dallas,
TX: The University of Texas.
Assuncao, M. D., Costanzo, A., & Buyya, R. (2010).
A cost-benefit analysis of using cloud computing to
extend the capacity of clusters. Journal of Cluster Computing, 13, 335–347. doi:10.1007/s10586-010-0131-x
Barnard, K., Duygulu, P., Forsyth, D., De Freitas, N.,
Blei, D. M., & Jordan, M. I. (2003). Matching words
and pictures. Journal of Machine Learning Research,
1107–1135. doi:10.1162/153244303322533214
Beaty, K., Kochut, A., & Shaikh, H. (2009, May 23-29). Desktop to cloud transformation planning. In
Proceedings of the IEEE International Symposium
on Parallel and Distributed Processing, Rome, Italy
(pp. 1-8).
Birge, L., & Massart, P. (2001). Gaussian model selection. Journal of the European Mathematical Society,
3(3), 203–268. doi:10.1007/s100970100031
Brandic, I., Music, D., Leitner, P., & Dustdar, S. (2009,
August 25-28). VieSLAF framework: Enabling adaptive and versatile SLA-management. In Proceedings
of the 6th International Workshop on Grid Economics
and Business Models, Delft, The Netherlands.
Briscoe, G., & Marinos, A. (2009, June 1-3). Digital ecosystems in the clouds: Towards community
cloud computing. In Proceedings of the 3rd IEEE
International Conference on Digital Ecosystems and
Technologies, New York, NY (pp. 103-108).
Buyya, R., Yeo, C. S., Venugopal, S., Broberg, J., &
Brandic, I. (2009). Cloud computing and emerging
IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Journal of Future
Generation Computer Systems, 25(6), 559–616.
doi:10.1016/j.future.2008.12.001
Chang, V., Mills, H., & Newhouse, S. (2007, September). From open source to long-term sustainability:
Review of business models and case studies. Paper
presented at the UK e-Science All Hands Meeting,
Nottingham, UK.
Chang, V., Wills, G., & De Roure, D. (2010a, July
5-10). A review of cloud business models and sustainability. In Proceedings of the Third IEEE International Conference on Cloud Computing, Miami, FL.
Chang, V., Wills, G., & De Roure, D. (2010b). Case
studies and sustainability modelling presented by
cloud computing business framework. International
Journal of Web Services Research.
Chang, V., Wills, G., & De Roure, D. (2010c, September 13-16). Cloud business models and sustainability: Impacts for businesses and e-research. Paper
presented at the UK e-Science All Hands Meeting
Software Sustainability Workshop, Cardiff, UK.
Chang, V., Wills, G., De Roure, D., & Chee, C. (2010,
September 13-16). Investigating the cloud computing
business framework - modelling and benchmarking of
financial assets and job submissions in clouds. Paper
presented at the UK e-Science All Hands Meeting
on Research Clouds: Hype or Reality Workshop,
Cardiff, UK.
Chou, T. (2009). Seven clear business models. Active Book Press.
Choudhury, A. R., King, A., Kumar, S., & Sabharwal,
Y. (2008). Optimisations in financial engineering: The
least-squares Monte Carlo method of Longstaff and
Schwartz. In Proceedings of the IEEE International
Symposium on Parallel and Distributed Computing
(pp. 1-11).
City, A. M. (2010). Business with personality. Retrieved from http://www.cityam.com
Feiman, J., & Cearley, D. W. (2009). Economics of
the cloud: Business value assessments. Stamford,
CT: Gartner RAS Core Research.
Financial Times. (2009). Interview with Lord Turner,
Chair of Financial Services Authority. Retrieved from
http://www.ft.com/cms/s/0/d76d0250-9c1f-11dda42e-000077b07658.html#axzz1Iqssz7az
Hamnett, C. (2009). The madness of mortgage
lenders: Housing finance and the financial crisis.
London, UK: King’s College.
Hull, J. C. (2009). Options, futures, and other derivatives (7th ed.). Upper Saddle River, NJ: Pearson/
Prentice Hall.
Li, C. S. (2010, July 5-10). Cloud computing in an
outcome centric world. In Proceedings of the IEEE
International Conference on Cloud Computing,
Miami, FL.
Longstaff, F. A., & Schwartz, E. S. (2001). Valuing
American options by simulation: A simple least-squares approach. Review of Financial Studies, 14(1),
113–147. doi:10.1093/rfs/14.1.113
Millers, R. M. (2011). Option valuation. Niskayuna,
NY: Miller Risk Advisor.
Moreno, M., & Navas, J. F. (2001). On the robustness of least-square Monte Carlo (LSM) for pricing
American derivatives. Journal of Economic Literature Classification.
Mullen, K., & Ennis, D. M. (1991). A simple multivariate probabilistic model for preferential and
triadic choices. Journal of Psychometrika, 56(1),
69–75. doi:10.1007/BF02294586
Mullen, K., Ennis, D. M., de Doncker, E., & Kapenga,
J. A. (1988). Models for the duo-trio and triangular
methods. Journal of Bioethics, 44, 1169–1175.
Pajorova, E., & Hluchy, L. (2010, May 5-7). 3D
visualization the results of complicated grid and
cloud-based applications. In Proceedings of the 14th
International Conference on Intelligent Engineering
Systems, Las Palmas, Spain.
Patterson, D., Armbrust, M., Fox, A., Griffith, R.,
Joseph, A. D., Katz, R. H., et al. (2009). Above the
clouds: A Berkeley view of cloud computing (Tech.
Rep. No. UCB/EECS-2009-28). Berkeley, CA:
University of California.
Piazzesi, M. (2010). Affine term structure models.
Amsterdam, The Netherlands: Elsevier.
Schubert, L., Jeffery, K., & Neidecker-Lutz, B.
(2010). The future for cloud computing: Opportunities for European cloud computing beyond 2010
(public version 1.0). Retrieved from http://cordis.
europa.eu/fp7/ict/ssai/docs/cloud-report-final.pdf
Waters, D. (2008). Quantitative methods for business
(4th ed.). Upper Saddle River, NJ: Prentice Hall.
Weinhardt, C., Anandasivam, A., Blau, B., & Stößer, J. (2009). Business models in the service world.
IT Professional, 11(2). doi:10.1109/MITP.2009.21
Weinhardt, C., Anandasivam, A., Blau, B., Borissov,
N., Meinl, T., & Michalk, W. (2009). Cloud computing – a classification, business models, and research
directions. Journal of Business and Information
Systems Engineering, 1(5), 391–399. doi:10.1007/
s12599-009-0071-2
64 International Journal of Cloud Applications and Computing, 1(2), 64-70, April-June 2011
Cloud Security Engineering: Avoiding Security Threats the Right Way
Shadi Aljawarneh, Isra University, Jordan
Abstract
Information security is a key challenge in the Cloud because data are virtualized across different host
machines, hosted on the Web. The Cloud provides a channel to the service or platform in which it operates.
However, the owners of data will be worried because their data and software are not under their control.
In addition, the data owner may not know where the data is geographically located at any particular time.
So there is still a question mark over how data can be secure if the owner does not control its data
and software. Indeed, due to the lack of control over the Cloud infrastructure, the use of ad-hoc security tools
is not sufficient to protect the data in the Cloud; this paper discusses this security challenge. Furthermore,
a vision and strategy are proposed to mitigate or avoid these security threats in the Cloud. This broad vision
is based on software engineering principles to secure the Cloud applications and services. In this vision,
security is built into all phases of the Service Development Life Cycle (SDLC), Platform Development Life
Cycle (PDLC) or Infrastructure Development Life Cycle (IDLC).
Keywords: Cloud Computing, Distributed Computing, Infrastructure Development Life Cycle (IDLC), Platform Development Life Cycle (PDLC), Security, Service Development Life Cycle (SDLC), Web Service
Introduction
Due to the lack of control over the Cloud software,
platform and/or infrastructure, several researchers
have stated that security is a major challenge
in the Cloud. In Cloud computing, data
will be virtualized across different distributed
machines, hosted on the Web (Taylor, 2010;
Marchany, 2010). From a business perspective, the
cloud introduces a channel to the service or platform
in which it could operate (Taylor, 2010).
DOI: 10.4018/ijcac.2011040105

Thus, the security issue is the main risk
that the Cloud environment might face. This
risk comes from the lack of control over the
Cloud environment. A number of practitioners
have made this point. For example, Stallman
(Arthur, 2010) from the Free Software Foundation
called Cloud computing “careless computing”,
because Cloud customers do not
control their own data and software, there is
no monitoring of the Cloud providers, and
consequently the data owner may not know
where the data is geographically located
at any particular time.
Threats in Cloud computing might result
from the generic Cloud infrastructure,
which is available to the public while being owned
by an organization selling Cloud services
(Marchany, 2010; Chow et al., 2009).
In Cloud computing, software and its data
are created and managed virtually, away from their users,
Figure 1. Models of the Cloud environment, taken from Taylor (2010)
and might only be accessible via a certain cloud's
software, platform or infrastructure. As shown
in Figure 1, there are three Cloud models that
describe the Cloud architecture for applications
and services (Taylor, 2010; Marchany, 2010):
1.	The Software as a Service (SaaS) model: The Cloud user rents/uses software on a paid subscription (Pay-As-You-Go).
2.	The Platform as a Service (PaaS) model: The user rents a development environment for application developers.
3.	The Infrastructure as a Service (IaaS) model: The user uses the hardware infrastructure on a pay-per-use model, and the service can be expanded according to demand from customers.
In spite of this significant growth, little
attention has been given to the issue of Cloud
security, both in research and in practice. Today,
academia requires sharing, distributing, merging
and changing information, and linking applications
and other resources, within and among
organizations. Due to openness, virtualization,
distribution and interconnection, security becomes
a critical challenge in ensuring the integrity
and authenticity of digitized data (Cárdenas et
al., 2005; Wang et al., 2005).
The Cloud opts for a scalable architecture:
scalability means that hardware units can be
added to bring more resources to the Cloud
architecture (Taylor, 2010). However, this feature
trades off against security. Scalability makes the
Cloud environment easier to expose and increases
the number of criminals who may gain illegal
access to Cloud storage and Cloud Datacenters,
as illustrated in Figure 2.
Availability is another characteristic of the
Cloud: services, platforms and data can be
accessed at any time and place. The Cloud is
therefore a candidate for exposure to greater
security threats, particularly when it is based on
the Internet rather than on an organization's own
platform (Taylor, 2010).
Figure 2. Cloud computing security, taken from Marchany (2010)
Although security is a risk in the Cloud
environment, several companies now offer
Cloud services, including the Microsoft Azure
Services Platform, Amazon Web Services,
Google, and open source Cloud systems such
as the Sun Open Cloud Platform, for academic,
customer and administrative purposes (Taylor,
2010). Yet some organizations have not realized
the importance of security for Cloud
systems; these organizations have adopted
off-the-shelf security and protection tools to secure
their systems and platforms.
Related Work
In this section, only Amazon Web Services
(AWS) is discussed. Amazon uses
Cloud services to offer a number of
web services to customers.
Amazon constructed the AWS
platform to secure access to its
web services (Amazon, 2010). The AWS platform
introduces protection against traditional
security issues in the Cloud network.
Physical access to AWS Datacenters is
strictly controlled, both at the perimeter and at
building ingress points, by security experts
using Video Surveillance (VS), Intrusion Detection
Systems (IDS) and other electronic means.
Authorized staff must pass at least two phases
of authentication, a restricted number of times,
to access Amazon Web Services and AWS
Datacenters (Amazon, 2010).
Note that Amazon only grants restricted
Datacenter access and information to people
who have a legitimate business need for these
privileges. If the business need for these privileges
is revoked, access is stopped, even
if the individual continues to be an employee
of Amazon or Amazon Web Services
(Amazon, 2010).
However, one weakness of the AWS is
that the dynamic data it generates could be
intercepted and penetrated by hackers or
professional criminals.
Basically, there are six areas of security
vulnerability in cloud computing (Trusted
Computing Group, 2010): (a) data at end-to-end
points, (b) data in the communication channel,
(c) authentication, (d) separation between clients,
(e) legal issues and (f) incident response.
This article is organized as follows: first,
the Cloud threats and an overview of existing
cloud computing concerns are described; next,
the proposed vision and strategies that might
mitigate or avoid some of these concerns are
outlined; finally, conclusions and future work
are offered.
Cloud Threats
Security principles (such as data integrity
and confidentiality) in the Cloud environment
could be lost (Amazon, 2010). For
example, a criminal might penetrate a web
system in many ways (Snodgrass et al., 2004;
Provos et al., 2007). An insider adversary who
gains physical access to Datacenters is able to
destroy any type of static content in the root
of a web server. It is not only physical access
to the Datacenter that can corrupt data: malicious
web manipulation software can penetrate servers
and Datacenter machines, and once in place,
such software can monitor, intercept and tamper
with online transactions in a trusted organization.
The result typically gives a criminal full root
access to the Datacenter and the web server
application. Once such access has been
established, the integrity of any data or
software is in question.
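As a concrete illustration of how the integrity of static content can be monitored, the following minimal Python sketch compares cryptographic digests of files against a trusted baseline (the file names and contents are purely illustrative):

```python
import hashlib, tempfile
from pathlib import Path

def digest(path):
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(baseline, root):
    """Report files whose content no longer matches the trusted baseline."""
    return [name for name, expected in baseline.items()
            if digest(root / name) != expected]

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "index.html").write_text("<h1>welcome</h1>")
    baseline = {"index.html": digest(root / "index.html")}
    (root / "index.html").write_text("<h1>defaced</h1>")  # simulated tampering
    print(verify(baseline, root))  # -> ['index.html']
```

In practice the baseline itself must be stored out of reach of the adversary; an attacker with full root access could otherwise rewrite both the content and its recorded digest.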
There are several security products (such
as antivirus software, firewalls, gateways and
scanners) to secure Cloud systems, but they are
not sufficient because each has only a specific
purpose; hence, they are called ad-hoc security
tools. For example, network firewalls provide
protection at the host and network level
(Gehling et al., 2005). There are, however, five
reasons why these security defences alone cannot
secure the systems (Gehling et al., 2005):
•	They cannot stop malicious attacks that perform illegal transactions, because they are designed around vulnerability signatures and specific ports.
•	They cannot manage form operations, such as asking the user to submit certain information or validating false data, because they cannot distinguish between the original request-response conversation and a tampered one.
•	They do not track conversations and do not secure session information. For example, they cannot track when session information in cookies is exchanged over an HTTP request-response model.
•	They provide no protection against web application/service attacks, since these are launched on port 80 (the default for web sites), which has to remain open to allow normal business operations.
•	Previously, a firewall could assume that an adversary could only be on the outside. Now, with Cloud systems, an attack might originate from the inside as well, where a firewall can offer no protection.
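The session-tracking gap noted above must be closed at the application layer rather than at the network perimeter. As one hedged illustration (the key and helper names below are hypothetical, not from the article), a server can attach an HMAC tag to the session cookie so that any tampering with the exchanged value is detectable, something a port-filtering firewall cannot do:

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # hypothetical key; must never reach the client

def sign_session(session_id: str) -> str:
    """Attach an HMAC tag so tampering with the cookie value is detectable."""
    tag = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}.{tag}"

def verify_session(cookie: str) -> bool:
    """Recompute the tag; compare_digest avoids timing side channels."""
    try:
        session_id, tag = cookie.rsplit(".", 1)
    except ValueError:
        return False  # malformed cookie: no tag present
    expected = hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

The point of the sketch is architectural: because only the application holds the key and understands the request-response conversation, only the application can decide whether a returned cookie is the one it issued.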
Note that computer forensics classifies e-crime into three classes (Mohay et al., 2003): the computer is the target of the crime; data storage is created during the commission of a crime; or the computer is a tool or scheme used in performing a crime.
Figure 2 shows the data storage and Datacenters that are possible targets of criminals. In the terms of computer forensics, the distrusted servers and Datacenters are the target of the crime. Therefore, the question that should be answered is whether the data is safe and secure.
Data confidentiality might be exposed by either insider or outsider user threats (CPNI, 2010). For instance, insider threats might arise maliciously from a cloud operator/provider, a cloud customer, or a malicious third party. The threat of insiders accessing customer data held within the cloud is greater because each delivery model introduces the need for multiple classes of user:
•	SaaS – Cloud clients and administrators
•	PaaS – Application developers and testers
•	IaaS – Third-party consultants
A VISION AND STRATEGY TO MITIGATE OR AVOID CLOUD SECURITY CONCERNS
In this article, a vision is proposed to avoid the Cloud security threat at the SaaS level. Our vision is that SaaS is based on service-oriented architecture. A service is a standard approach to making a reusable component available and accessible across the web or another technology. Thus, service provision is independent of the application that uses the service.
In the article by Aljawarneh (2011), a case study describes a number of significant threat vulnerabilities that can be introduced during all phases of the software (service) development life cycle. For instance, a number of security vulnerabilities might occur at the requirements specification stage (Bono et al., 2005; Cappelli et al., 2006):
•	Failing to declare authentication and role-based access control requirements eased insider and outsider attacks.
•	Failing to declare security requirements on duties for automated business processes provided a simplified method of attack.
•	Failing to declare requirements for data integrity checks gave insiders the assurance that their actions would not be detected.
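To make the first requirement above concrete, a role-based access control requirement can be stated at specification time as a deny-by-default permission table. This is only a sketch; the role names echo the SaaS/PaaS/IaaS user classes listed earlier and are our own labels, not from the article:

```python
# Hypothetical permission table; roles echo the user classes named earlier.
ROLE_PERMISSIONS = {
    "cloud_admin": {"read", "write", "delete"},
    "app_developer": {"read", "write"},
    "third_party_consultant": {"read"},
}

def authorize(role: str, action: str) -> bool:
    """Deny by default: unknown roles and undeclared actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

Writing the table down during requirements specification, rather than improvising it during implementation, is precisely the gap the case study identifies: every role and every permitted action is declared before any code exists, so an undeclared access is an attack by definition.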
Existing Cloud services face security issues because security design is not integrated into the Cloud architecture development process (Glisson et al., 2005). Thus, organizations should pay more attention to insider threats to operational systems; it turns out that vulnerabilities can be introduced, accidentally or intentionally, throughout the development life cycle – during requirements definition, design, implementation, deployment, and maintenance (Cappelli et al., 2006). Once business leaders are aware of these, they can implement practices that will aid in mitigating these vulnerabilities.
As illustrated in Figure 3, security should be built into all steps of the service development process, identifying what a customer and an organization need at every stage of the software engineering life cycle. This proposed vision or strategy could help to detect threats and concerns at each stage, instead of handling them all at the implementation stage. Consequently, our vision and strategies might help Cloud developers, providers, and administrators to eliminate attacks, or mitigate them where possible, at the design stage rather than waiting for actual attacks to occur.
Figure 3. The proposed strategy
CONCLUSION
The Cloud faces security issues at the SaaS, PaaS, and IaaS models. One main reason is the lack of control over Cloud Datacenters and distributed servers; furthermore, security is not integrated into the service development process.
Indeed, traditional security tools alone will not solve current security issues, so it will be effective to incorporate a security component upfront into each phase of the Cloud system development methodology. In the next part of this article, we will propose a methodology that could help to mitigate the security concerns in the Cloud models.
REFERENCES
Aljawarneh, S. (2011). A web engineering security methodology for e-learning systems. Network Security, 2011(3), 12-16. doi:10.1016/S1353-4858(11)70026-5
Amazon. (2010). Amazon web services: Overview of security processes. Retrieved from awsmedia.s3.amazonaws.com/pdf/AWS_Security_Whitepaper.pdf
Arthur, C. (2010). Google's ChromeOS means losing control of data, warns GNU founder Richard Stallman. Retrieved from http://www.guardian.co.uk/technology/blog/2010/dec/14/chrome-os-richard-stallman-warning
Bono, S. C., Green, M., Stubblefield, A., Juels, A., Rubin, A. D., & Szydlo, M. (2005). Security analysis of a cryptographically-enabled RFID device. In Proceedings of the 14th Conference on USENIX Security, Berkeley, CA.
Cappelli, D. M., Trzeciak, R. F., & Moore, A. B. (2006). Insider threats in the SDLC: Lessons learned from actual incidents of fraud, theft of sensitive information, and IT sabotage. Pittsburgh, PA: Carnegie Mellon University.
Cárdenas, R. G., & Sanchez, E. (2005). Security challenges of distributed e-learning systems. In F. F. Ramos, V. A. Rosillo, & H. Unger (Eds.), Proceedings of the 5th International School and Symposium on Advanced Distributed Systems (LNCS 3563, pp. 538-544).
Chow, R., Golle, P., Jakobsson, M., Shi, E., Staddon, J., Masuoka, R., et al. (2009). Controlling data in the cloud: Outsourcing computation without outsourcing control. In Proceedings of the ACM Workshop on Cloud Computing Security (pp. 85-90). New York, NY: ACM Press.
CPNI. (2010). Information security briefing 01/2010: Cloud computing. Retrieved from www.cpni.gov.uk/Documents
Gehling, B., & Stankard, D. (2005). eCommerce security. In Proceedings of the Information Security Curriculum Development Conference, Kennesaw, GA (pp. 32-37). New York, NY: ACM Press.
Glisson, W., & Welland, R. (2005). Web development evolution: The assimilation of web engineering security. In Proceedings of the Third Latin American Web Congress (p. 49). Washington, DC: IEEE Computer Society.
Google. (2011b). Google trends: Private cloud, public cloud. Retrieved from http://www.google.de/trends?q=private+cloud%2C+public+cloud
Marchany, R. (2010). Cloud computing security issues: VA Tech IT security. Retrieved from http://www.issa-centralva.org
Mohay, G., Anderson, A., Collie, B., & del Vel, O. (2003). Computer and intrusion forensics (p. 9). Boston, MA: Artech House.
Provos, N., McNamee, D., Mavrommatis, P., Wang, K., & Modadugu, N. (2007). The ghost in the browser: Analysis of web-based malware. In Proceedings of the First Workshop on Hot Topics in Understanding Botnets, Berkeley, CA (p. 4).
Ramim, M., & Levy, Y. (2006). Securing e-learning systems: A case of insider cyber attacks and novice IT management in a small university. Journal of Cases on Information Technology, 8(4), 24-34. doi:10.4018/jcit.2006100103
Snodgrass, R. T., Yao, S. S., & Collberg, C. (2004). Tamper detection in audit logs. In Proceedings of the Thirtieth International Conference on Very Large Data Bases (pp. 504-515).
Taylor, M. (2010). Enterprise architecture – architectural strategies for cloud computing: Oracle. Retrieved from http://www.techrepublic.com/whitepapers/oracle-white-paper-in-enterprisearchitecture-architecture-strategies-for-cloudcomputing/2319999
Trusted Computing Group. (2010). Cloud computing and security – a natural match. Retrieved from http://www.infosec.co.uk/
Wang, H., Zhang, Y., & Cao, J. (2005). Effective collaboration with information sharing in virtual universities. IEEE Transactions, 21(6), 840-853.