The New Enterprise Data Center Technical White Paper
Abstract: This paper describes the technical view of the new enterprise data center through
a conceptual view of the components and key elements, a staged approach and a set of
patterns to guide data center transformation activities for the new enterprise data center.
Table of Contents
1.0 Introduction
1.1 Business Drivers for the Adoption of the New Enterprise Data Center
2.0 The New Enterprise Data Center “Evolution”
    Simplified stage
    Shared stage
    Dynamic stage
    Defining characteristics mapped to each stage of adoption
3.0 Characteristics of the New Enterprise Data Center
3.1 Highly Virtualized Resources
    Consolidation of resources
    Virtualization of resources
    Ensembles
3.2 Efficient, Green and Optimized Infrastructure and Facilities
3.3 Information Infrastructure
3.4 Security and Business Resiliency
    Security evolution
    Risk factors and resiliency
    Recovery planning and trusted virtual domains
3.5 Business-driven Service Management
    IT service management
    IT operational management
    IT process management
3.6 Service-Oriented IT Delivery
    Cloud computing
    Cloud computing evolution
    Service-oriented IT delivery transformation
4.0 Why IBM?
5.0 Next Steps
1.0 Introduction
Rather than simply comprising a cost to do business, information technology (IT) should
link with and complement business strategy. This requires an efficient, flexible and resil-
ient infrastructure that is primed to anticipate and respond rapidly to shifting business
requirements. These requirements have driven us to the evolution of a new data center
architecture—one that allows for massive scalability and dynamic responsiveness while
also providing an energy efficient and resilient infrastructure.
This paper presents a technical overview of the new enterprise data center, including a
description of its key characteristics, a description of the functions and capabilities of the
underlying architecture and a description of an evolutionary approach to implementation
through stages of adoption. The new enterprise data center strategy allows companies to
focus on the services provided by the infrastructure, rather than on the underlying
technology that enables these services. This yields a more productive and satisfied user
community, as well as better alignment between business priorities and information
technology investments.
1.1 Business Drivers for the Adoption of the New Enterprise Data Center
After years of working with thousands of clients in their data center transformations, IBM
has taken a holistic approach to the transformation of IT and has developed the new
enterprise data center—a vision and strategy for the future of enterprise computing. The
new enterprise data center enables you to leverage today’s best practices and technolo-
gies to better manage costs, improve operational performance and resiliency and quickly
respond to business needs. Its goal is to deliver the following:
Simplified (new infrastructure drives IT efficiency):
• Physical consolidation and optimization
• Virtualization of individual systems
• Systems, network and energy management

Shared (rapid deployment of services):
• Highly virtualized resource pools—“ensembles”
• Integrated information infrastructure
• Security and business resiliency
• Green by design

Dynamic (highly responsive and business driven):
• Virtualization of IT as a service—“cloud computing”
• Business-driven service management
• Service-oriented delivery of IT
Each of these stages is realized through the implementation of one or more architectural
patterns. Patterns are methods, approaches and best practices that, when implemented,
strive to attain a particular goal. The architectural patterns include:
• The consolidation pattern
• The virtualization pattern
• The flexible IT pattern
• The IT-as-a-service pattern
Aligned with the architectural patterns are a set of key characteristics, outlined below, that
serve as the modular building blocks in the evolution of a new enterprise data center.
The architectural patterns apply these characteristics to realize the proposed stages of
adoption in a progressive journey—an evolution rather than a revolution. These defining
characteristics include:
• Highly virtualized resources
• Efficient, green and optimized infrastructure and facilities
• Information infrastructure
• Security and business resiliency
• Business-driven service management
• Service-oriented IT delivery
The diagram below applies the key evolutionary characteristics to illustrate how the archi-
tectural patterns are associated and aligned with adoption stages in the journey to a new
enterprise data center.
[Diagram: the defining characteristics (highly virtualized resources, information
infrastructure, security and business resiliency, service-oriented delivery of IT) mapped
across the simplified, shared and dynamic adoption stages]
Simplified stage
The simplified stage addresses the complexity of server sprawl, including many data
center locations, disparate management tools and inconsistent processes by introducing
consolidation, virtualization and standardization.
This stage begins by consolidating IT assets and data center facilities and standardizing
management tools and processes. The consolidation pattern is used to help transform
the data center in the following areas:
• Server, storage and data center facility consolidation
• Network convergence
In summary, the simplified stage provides control over the entire IT infrastructure through
the reduction of complexity, while simultaneously producing cost savings, which can be
used to fund activities in later stages.
Shared stage
As enterprises implement activities in the simplified stage, the inefficiencies associated
with business unit-specific infrastructure designs become apparent. The shared stage
involves moving from organizational and technological silos to a shared services model.
This stage creates a shared IT infrastructure that can be provisioned and scaled rapidly
and efficiently. Organizations can create virtualized resource pools for server platforms,
storage systems, networks and applications, delivering IT and information to end users
in a more fluid way. Advanced virtualization patterns and increased automation are key
elements at this stage of IT transformation.
By centralizing policies and consolidating server, storage and network capacity across
the enterprise, IT organizations position themselves to balance infrastructure demands
across business units. Through the integration of infrastructure and the breaking down
of silos, a shared view of information access that is consistent and reliable across the
enterprise is achieved.
The virtualization pattern helps to scale up shared capacity through virtual infrastructure
deployment across the enterprise, thereby making more efficient use of the total capacity
available. Ensembles, which expand the boundaries of virtualization across homogeneous
resources to make them appear as one contiguous pool, will further help to share infra-
structure for effective utilization across the enterprise. This provides the flexibility required
to create logical resources of any size to meet application needs, as well as eliminates
inefficiencies caused by underutilization of infrastructure at the physical layer.
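As a rough illustration of this pooling idea, the sketch below carves logical resources of arbitrary size out of one contiguous pool spanning homogeneous hosts; the ResourcePool class and its interface are illustrative assumptions, not an IBM API.

```python
# Illustrative sketch: a contiguous capacity pool built from homogeneous
# hosts, from which logical resources of any size are carved, possibly
# spanning physical boundaries. The class and its API are assumptions.

class ResourcePool:
    def __init__(self, hosts):
        # hosts: mapping of host name -> free capacity in CPU units
        self.free = dict(hosts)

    @property
    def total_free(self):
        return sum(self.free.values())

    def allocate(self, size):
        """Carve a logical resource of `size` units, spanning hosts if needed."""
        if size > self.total_free:
            raise RuntimeError("pool exhausted")
        allocation, remaining = {}, size
        # fill from the emptiest-first is one policy; largest-first shown here
        for host in sorted(self.free, key=self.free.get, reverse=True):
            take = min(self.free[host], remaining)
            if take:
                allocation[host] = take
                self.free[host] -= take
                remaining -= take
            if remaining == 0:
                break
        return allocation

pool = ResourcePool({"hostA": 16, "hostB": 16, "hostC": 8})
vm = pool.allocate(24)   # a logical resource larger than any single host
```

Because the pool is presented as one contiguous capacity, the request for 24 units succeeds even though no single host could satisfy it.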
The flexible IT pattern drives automation across various service management functions,
providing visibility and control of the shared infrastructure performance and availabil-
ity at the application level, rather than at the physical device level. Basic automation
helps to reduce manual error-prone tasks, such as provisioning of infrastructure, and
increases human efficiency, allowing a smaller number of IT professionals to manage a
larger environment. Advanced automation implements autonomic technologies through a
policy-based orchestration to dynamically adjust resource capacity or to move workloads
to meet varying demands. Metering technologies track resource consumption, providing
visibility into the “cost of manufacturing” for IT-enabled business services. The features of
the flexible IT pattern significantly improve both business performance and the ability to
meet service levels, as well as further reduce the percentage of the IT budget consumed
by operational activities.
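The metering idea above can be sketched as follows; the unit rates, record format and service names are hypothetical, not IBM figures.

```python
# Illustrative metering sketch: track resource consumption per business
# service and derive a "cost of manufacturing" from assumed unit rates.

RATES = {"cpu_hours": 0.08, "gb_storage": 0.02, "gb_transfer": 0.01}  # $ per unit

def cost_of_service(usage_records):
    """Sum metered consumption records into a cost per service."""
    costs = {}
    for service, resource, amount in usage_records:
        costs[service] = costs.get(service, 0.0) + RATES[resource] * amount
    return costs

records = [
    ("payroll", "cpu_hours", 120),
    ("payroll", "gb_storage", 500),
    ("web_shop", "cpu_hours", 300),
    ("web_shop", "gb_transfer", 2000),
]
costs = cost_of_service(records)
```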
The shared stage also helps to enhance energy efficiencies through the implementation of
energy management software, which can intelligently monitor power and cooling across
the infrastructure and facility resources. Techniques such as hibernation turn off unneed-
ed power and cooling sources, or move underutilized resources to low-power states.
As flexibility and automation are increased across the shared infrastructure, security and
resiliency within the shared data center must also be enhanced. This stage expands
resiliency to include heterogeneous environments and should provide a central point of
management for multiple systems or applications. Monitoring and management should
be automated, and switching should include as little human intervention as possible. The
security requirements are addressed by implementing identity management solutions
and by applying network-level isolation techniques to prevent intrusions and to ensure
the integrity of applications and data on the shared infrastructure. Resiliency techniques,
such as active/active or active/standby configurations, can be implemented across a set
of infrastructure resources that support business-critical applications.
Dynamic stage
The dynamic stage leverages the principles of the service-oriented IT delivery model to
create and deliver IT as a set of services. At this stage, the IT-as-a-service pattern utilizes
the virtual infrastructure and automated service management capabilities established in
the prior stage to create a layer of abstraction between the IT service and the physical in-
frastructure. By hiding technology complexities and making IT available as a consumable
service that can be subscribed to, provisioned and managed throughout its life cycle, IT
becomes a contributor, rather than an inhibitor, to business innovation.
At this stage, the IT-as-a-service pattern models the business processes, identifying
opportunities to leverage technology for business benefits and composing them into a
set of service definitions. These service definitions are made available to end users in the
form of a service catalog. Each of the services included in the catalog are implemented
through a set of automated workflows designed to fulfill service expectations.
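The catalog-plus-workflow mechanism can be sketched as follows; the catalog entries and workflow step names are illustrative assumptions, not an IBM service catalog.

```python
# Illustrative sketch: service catalog entries mapped to automated
# fulfillment workflows. Entries and step names are hypothetical.

CATALOG = {
    "dev-test-environment": ["provision_vm", "install_middleware", "configure_network"],
    "additional-storage":   ["allocate_lun", "zone_fabric", "extend_filesystem"],
}

def fulfill(service_name):
    """Run each automated step of the workflow behind a catalog entry."""
    if service_name not in CATALOG:
        raise KeyError(f"not in service catalog: {service_name}")
    log = []
    for step in CATALOG[service_name]:
        log.append(f"{service_name}: {step} ... done")  # stand-in for real automation
    return log

audit = fulfill("dev-test-environment")
```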
The emergence of cloud computing, in which IT services are delivered over a network,
makes this stage more dynamic. Essentially, cloud-enabled infrastructures are built based
on a service-oriented delivery model and allow for massive scaling and rapid delivery of
IT services. This process can be further accelerated through the use of ensembles, en-
abling IT staff to view and access infrastructure pools as one contiguous capacity. These
dynamic characteristics encourage the adoption of cloud computing, which can deliver IT
services as true “cloud experiences” to users. The latest technology innovations facilitated
by cloud computing and ensembles will also help to accelerate dynamic stage adoption.
The rapid deployment of development and test environments to support the dynamic
release cycles of an application life cycle is an example of a simple IT service available at
this stage. These environments can be rapidly created and decommissioned, leveraging a
dynamic infrastructure that is truly economical, highly integrated, agile and responsive.
Another example is the massive scaling of production infrastructure to add capacity on the
fly, as demands continue to grow. The true dynamic nature at this stage will allow work-
loads to be moved across the infrastructure to eliminate the downtime for planned mainte-
nance, thus increasing the business continuity and SLAs for the IT services being delivered.
Defining characteristics mapped to each stage of adoption:

Highly virtualized resources
• Simplified: Physical consolidation and optimization (server, storage and data center
facility consolidation; network convergence); virtualization of individual systems (server,
storage, application, network and desktop virtualization)
• Shared: Highly virtualized “ensembles”; creation of a management layer to view the
entire shared capacity as a contiguous linear system
• Dynamic: Advanced virtualization that supports cloud computing
3.1 Highly Virtualized Resources
The sheer growth of deployed IT components, be they server, storage or networking entities,
coupled with the associated management costs and compounded by ever-increasing electricity
costs, makes this a challenging environment for data center managers and IT executives.
Consolidation of resources
As mentioned above, IT consolidation serves as an initial step toward achieving higher
levels of virtualization. Consolidation typically spans servers, storage, networks and data
center facilities, and leads to greater IT efficiency. Sweeping up distributed servers with
underutilized capacities and consolidating them into fewer, more efficient and better man-
aged servers offers significant benefits. It is not uncommon for distributed servers to run,
on average, at 10 percent of their full utilization capabilities. When you consider the fact
that many enterprises may have hundreds or thousands of servers deployed, it’s easy to
appreciate the value that server consolidation can provide.
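A back-of-envelope calculation makes that value concrete; the server counts and target utilization below are illustrative assumptions, not IBM benchmarks.

```python
# Back-of-envelope sketch of the consolidation arithmetic described above:
# servers running at ~10% utilization are packed onto fewer machines at a
# safer target utilization. The numbers are illustrative.
import math

def consolidated_server_count(n_servers, avg_util, target_util):
    """Servers needed to carry the same aggregate load at target utilization."""
    aggregate_load = n_servers * avg_util          # in "whole server" units
    return math.ceil(aggregate_load / target_util)

before = 1000                                      # distributed servers
after = consolidated_server_count(before, avg_util=0.10, target_util=0.60)
# 1000 servers at 10% load fit on far fewer machines run at 60%
```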
Lastly, any consolidation initiative must take into account a data center strategy and the
potential benefits associated with consolidating facilities. The same attributes of cost,
efficiencies and return on investment that apply to servers, storage and networks also
apply to data center facilities. In fact, they likely offer the most significant opportunities
for cost savings.
IBM offers the industry’s most scalable servers, as well as storage products that are ideal
platforms for consolidation. We have years of experience working with clients to address
their data center strategies for consolidation, and have harnessed unique intellectual
capital that is included in our technology services. Our products and services are tailored
to help clients assess, plan, design and implement the right consolidation solutions for
their needs.
Virtualization of resources
Virtualization is another important building block in the process of becoming a highly opti-
mized IT provider. IBM is no stranger to virtualization technologies, having built these capa-
bilities into its mainframe servers as early as the 1970s, and having continued to enhance
these capabilities and offer them across its complete server and storage product lines. This
practical experience has allowed IBM to offer valuable virtualization points of entry through
hardware logical partitioning; dynamic and scalable virtual guest hosting; virtualized I/O;
industry-leading, finer-grain virtualization; more comprehensive management capabilities;
and seamless integration with other virtualization layers that exist in the marketplace.
Industry analysts project that:
• The installed base of virtual machines will grow more than tenfold between 2007
and 2011.
• One out of every four x86 workloads deployed or redeployed during 2008 will be
installed in virtual machines, and by 2012, the majority of x86 server workloads
will be running in a virtual machine.4
IBM’s experience is based on more than 30,000 enterprise clients who have deployed
system-level virtualization capabilities with IBM offerings. Our IBM System x™ customers
are deploying over 1,000 virtual servers a day, and we have more than 2,000 customers
using storage virtualization. IBM typically expects that 80 percent of a client’s infrastruc-
ture can be virtualized.
So, how do you begin? While you can consolidate similar IT resources to gain simple
efficiencies, such as by moving a SAP R/3 distributed three-tier server environment into
a two-tier environment based on fewer yet more powerful servers, virtualization offers
additional capabilities that address inefficiencies across a more disparate set of envi-
ronments, while offering similar and compelling benefits. Using the SAP R/3 example,
virtualization might allow for additional application workloads to be executed on the same
servers to take advantage of available capacity. By consolidating multiple applications
onto a single server running multiple virtual servers, a significant reduction in the num-
ber of physical servers—and a corresponding improvement in server utilization rates—is
obtainable. With IBM Power Systems servers, fine-grain virtualization can support a
more effective allocation of resources by dynamically allowing for as little as one-tenth of
a processor to be virtualized.
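Fine-grain entitlement in tenths of a processor can be sketched as a simple admission check; the function, workload names and pool size below are hypothetical, not an IBM interface.

```python
# Illustrative sketch of fine-grain (micro-)partitioning: workloads request
# processor entitlements in tenths of a CPU, and the total packed onto a
# physical processor pool cannot exceed the pool's capacity.

def pack_entitlements(requests, pool_cpus):
    """Admit workloads (name, tenths-of-a-CPU) until the pool is full."""
    admitted, used_tenths = [], 0
    capacity_tenths = pool_cpus * 10
    for name, tenths in requests:
        if tenths < 1:
            raise ValueError("minimum entitlement is one tenth of a processor")
        if used_tenths + tenths <= capacity_tenths:
            admitted.append(name)
            used_tenths += tenths
    return admitted, used_tenths / 10  # CPUs actually committed

requests = [("lpar1", 5), ("lpar2", 14), ("lpar3", 1), ("lpar4", 25)]
admitted, committed = pack_entitlements(requests, pool_cpus=4)
```

Because entitlements are granted in tenths, the 4-processor pool carries three workloads that would otherwise each claim a whole CPU.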
A recent Forrester report indicates that 65 percent of enterprise decision makers expect
to use server virtualization by 2009.5 Not surprisingly, IBM’s market research confirms that
virtualization implementation continues across the industry. IBM’s strategy for the new
enterprise data center positions virtualization as a key element in assisting data center
managers seeking to simplify the integration of complex server, storage and networking
architectures. Virtualization removes barriers that inhibit the increased use of IT resourc-
es, maximizes the use of existing IT investments, helps improve productivity by fostering
an environment that supports composite applications and aligns the performance of the
IT infrastructure with an organization’s business goals.
Ensembles
Growing business demands are causing customers to look for ways to extend the char-
acteristics of virtualization that can scale up horizontally across a pool of similar systems.
This leads to the evolution of ensembles aimed at reducing complexity and management
overhead by creating large pools of like resources that are managed as one.
Ensembles simplify and improve the planning, deployment, configuration, operations and
management of the new enterprise data center. Ensembles can scale from a few to many
thousands of systems while maintaining management complexity and costs essentially
independent of their size and similar to those of a single system. Ensembles can be clas-
sified into four types:
An ensemble manager is a key component of all ensembles, and is responsible for the
systems management aspects of an ensemble, such as workload optimization, availability,
restart, recovery and ability to change software. It also has hardware resource management
responsibilities, including functions such as heat production and power consumption.
[Diagram: an ensemble, with applications (APP) running in virtual machines (LPAR/VM)
over multiple operating system instances, under a common management layer]
As described in the diagram above, an ensemble manager provides all of the external in-
terfaces and states of the ensemble, as well as the capabilities and behaviors inherent to
the ensemble. The ensemble manager will serve as an extension of IBM Systems Director
to manage multiple ensembles in an ecosystem. The ensemble manager provides an
aggregated view of how the ensemble is running (health monitoring, metering, utilization,
etc.), encapsulating the individual components that make up the ensemble and appear-
ing to the rest of the data center as a single, continually available, dynamically scalable
virtualized server.
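The aggregated view an ensemble manager presents can be sketched as follows; the node metrics, their format and the health policy are illustrative assumptions, not the IBM Systems Director interface.

```python
# Illustrative sketch: an ensemble manager rolling per-node health and
# utilization up into one aggregate view, so the ensemble appears to the
# rest of the data center as a single virtualized server.

def ensemble_view(nodes):
    """Aggregate individual node metrics into one ensemble-level view."""
    total = sum(n["capacity"] for n in nodes)
    used = sum(n["used"] for n in nodes)
    unhealthy = [n["name"] for n in nodes if not n["healthy"]]
    return {
        "capacity": total,
        "utilization": used / total,
        # a partially failed ensemble is degraded, not down
        "available": not unhealthy or len(unhealthy) < len(nodes),
        "unhealthy_nodes": unhealthy,
    }

nodes = [
    {"name": "n1", "capacity": 32, "used": 20, "healthy": True},
    {"name": "n2", "capacity": 32, "used": 8,  "healthy": True},
    {"name": "n3", "capacity": 32, "used": 0,  "healthy": False},
]
view = ensemble_view(nodes)
```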
3.2 Efficient, Green and Optimized Infrastructure and Facilities
To satisfy this robust demand for computing capacity, IT executives are increasing both
the numbers and the density of their servers and storage devices. IBM and consultant
studies project that the server installed base will increase by a factor of six between 2000
and 2010, while storage is expected to grow even more significantly.7
Rising energy costs affect businesses of all sizes. IBM surveyed more than 1,100 execu-
tives from small and mid-size businesses across ten markets in Europe, Asia and the
Americas. Nearly half of those surveyed reported that energy represented one of their
largest cost increases over the past two years.8
Increased demand for IT capacity to support business growth, increased energy use by
the data center, rising energy costs and environmental concerns are coming together to
define a new field of competition for the enterprise—data center energy efficiency. The
more energy efficient your data center, the more prepared your company will be to com-
pete in a business environment where energy is becoming increasingly expensive.
IBM Systems lead the way to higher efficiencies and improve the ratio of compute capac-
ity per kilowatt consumed. Further, IBM’s leadership in virtualization helps IBM servers
achieve a higher level of utilization, thus significantly reducing the number of servers
and, consequently, the space, power and cooling required. IBM systems also provide
technology that allows the power associated with a system to be monitored, managed
and capped.
According to Gartner, “Traditionally, the power required for non-IT equipment in the data
center (such as that for cooling, fans, pumps and UPS systems) represented on aver-
age about 60 percent of total annual energy consumption.”9 Based on the data center
energy-efficiency assessments that it conducts for clients around the world, IBM has
learned that it can implement effective solutions to reduce such high consumption by 15
to 40 percent annually.10 This means that the payback on investment can be achieved in
as little as two years, thereby covering the cost of the assessment in the first year.
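The payback arithmetic behind these figures can be sketched directly; the dollar amounts below are hypothetical.

```python
# Illustrative payback arithmetic for the efficiency figures quoted above:
# cutting a given annual energy bill by 15-40% against a one-time project
# cost. All dollar amounts are hypothetical.

def payback_years(annual_energy_cost, reduction_fraction, project_cost):
    """Years for accumulated energy savings to cover the project cost."""
    annual_savings = annual_energy_cost * reduction_fraction
    return project_cost / annual_savings

# A $2M annual energy bill cut by 25%, against a $1M project cost:
years = payback_years(2_000_000, 0.25, 1_000_000)
```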
The data center energy challenge affects both the physical data center and the IT
infrastructure. IBM assessments provide insight into not only the data center’s energy
efficiency, but also the potential for gaining efficiencies through server and storage consoli-
dation. These assessments of the current state of your data center can be compared to in-
dustry benchmarks, and they provide a fact-based business case for making improvements.
Compounding the financial challenges that occur when the rising demand for IT capac-
ity meets the rising cost of energy, data centers themselves are often out of sync with
the information technologies they support. A study conducted by Gartner found that “36
percent of respondents indicated that their organizations’ newest data centers are seven
or more years old.”11 By contrast, IBM’s client experience has indicated that the IT equip-
ment in those data centers typically turns over every two to four years. As a result, the
older data centers may not be able to power and cool the newer IT equipment—especially
blade servers—in an energy-efficient manner.
Cooling new IT equipment has become a major problem in many data centers. According
to Gartner, “High-density equipment, such as blade servers, demand enormous equip-
ment power and air conditioning power. Rack enclosures can accommodate 60 to 70
(1U) units, equating to 20,000 watts to 25,000 watts of power per rack. In addition, for
every watt of equipment power, there is a need for another 50 percent to 60 percent for
air conditioning equipment.”12 Innovative cooling technologies can help you beat the heat
in high-density computing facilities, and they can enable and accelerate the growth of IT
capacity by making it possible for the data center to increase its use of blade servers.
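The quoted rack figures can be reproduced with simple arithmetic; the per-server wattage assumed below is illustrative, chosen only to be consistent with the quote.

```python
# Checking the quoted figures: 60-70 1U servers per rack, plus another
# 50-60% of equipment power for air conditioning. Per-server wattage is
# an assumption consistent with the 20,000-25,000 W range quoted.

def rack_power_watts(servers_per_rack, watts_per_server, cooling_overhead):
    """Total rack power: IT equipment plus its cooling overhead."""
    it_power = servers_per_rack * watts_per_server
    return it_power, it_power * (1 + cooling_overhead)

it_only, with_cooling = rack_power_watts(70, 350, 0.55)
# ~24,500 W of equipment power grows to nearly 38,000 W with cooling
```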
IBM’s active energy management software includes a new monitoring and management
function that integrates geospatial visualization, device monitoring and the ability to
“take action” to manage energy efficiently. This enables administrators and/or autonomic
software to correlate IT equipment power/thermal events with IT performance events, and
to more quickly identify root cause problems and solutions. For example, from a single
interface, a system administrator can drill down into the power, temperature and perfor-
mance of a selected device and resolve a temperature-over-threshold alert by capping
or reducing the power consumed by the device, as long as a lower power state can still
achieve the defined service level agreement.
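The capping decision described above can be sketched as a policy check; the power states and performance figures below are hypothetical, not IBM hardware data.

```python
# Illustrative sketch of the capping decision: on a temperature-over-
# threshold alert, reduce the device's power state only if the lower
# state can still achieve the defined service level.

POWER_STATES = [(100, 1.00), (80, 0.85), (60, 0.65)]  # (% power, relative performance)

def resolve_thermal_alert(required_performance):
    """Pick the lowest power state that still achieves the SLA."""
    for power, performance in sorted(POWER_STATES):  # try lowest power first
        if performance >= required_performance:
            return power
    return 100  # no lower state meets the SLA; leave the device uncapped

cap = resolve_thermal_alert(required_performance=0.80)
```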
Autonomic management of the power consumption in the new enterprise data center is
not restricted to adjustments of the IT equipment. Government Computer News (GCN)
reported in June of 2007 that “instead of separate data, building, access, physical secu-
rity, elevator, HVAC, fire and energy systems—with separate control environments and
their own console and monitoring programs—the goal is to use IP to centrally manage
everything.”13 IBM management software will be a leader in integrating the automation
of building automation systems with IT infrastructures, providing a holistic solution for
managing energy consumption in the new enterprise data center.
IBM’s software balances and adjusts the workloads across a virtualized infrastructure,
aligns the power and cooling consumption with business processing requirements and
provides the means to fairly allocate energy costs to users based on the energy they con-
sume. As a result, energy demands are balanced to avoid high peak energy use and the
associated higher energy billing rates, while still meeting service level agreements aligned
with business priorities.
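The fair allocation of energy costs can be sketched as a proportional chargeback; the bill and consumption figures below are hypothetical.

```python
# Illustrative chargeback sketch: the period's energy bill is split across
# business units in proportion to their metered consumption.

def allocate_energy_costs(total_bill, consumption_kwh):
    """Split a bill proportionally to each user's metered kWh."""
    total_kwh = sum(consumption_kwh.values())
    return {user: total_bill * kwh / total_kwh
            for user, kwh in consumption_kwh.items()}

bill = allocate_energy_costs(12_000, {"finance": 3_000, "retail": 6_000, "hr": 1_000})
```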
By working with partners, IBM can provide its customers with comprehensive green data
center implementations that bridge modularized facilities’ building blocks, and provide en-
ergy-efficient IT and cooling solutions. These services help to unlock additional power and
cooling capacity by identifying and resolving problems with air management, utilization of
water cooling, advanced stored cooling technology and over-provisioned power budgets.
3.3 Information Infrastructure
According to an IDC report on data centers, the enterprise data center of the future will
be much more interested and focused on the business value of data and information
services, on leveraging storage infrastructure efficiency through policy-based software
and on true automation.14
The scale of data growth, regulations for data retention and formal compliance requirements
present new challenges. In the era of the new enterprise data center, the requirements for
information infrastructure will be identified early on, and will address both the selection of
specific virtualization technologies and the flexibility offered through service-oriented delivery
of IT. When properly mapped to the operational requirements, the information infrastructure
provides the enterprise with information on demand.
There are four basic roles of information infrastructure for which business requirements
must be understood: availability, security, retention and compliance.
The functional attributes of information infrastructure are key elements of new enterprise
data center (NEDC) operations, and will be planned as overlays on the physical
infrastructure. Integrated management, which combines physical management with
information management, will be a vital element of NEDC deployments.
3.4 Security and Business Resiliency
IBM customers and industry analysts continually identify security and resiliency as top
business requirements. The ability to provide a secure infrastructure that incorporates
recovery and resiliency at all layers, from hardware to the business as a whole, is a ne-
cessity in today’s global, 24/7 economy. Increasingly, customers, shareholders, regulatory
bodies, insurers and supply chains are driving the highest levels of availability, recovery
and security. This challenge is exacerbated by an increasing need to share infrastructure
resources among applications. With the shift from a customized IT environment to a
flexible, shared and highly optimized one, the trade-off between redundancy and optimal
sharing can be better managed in an integrated manner.
IT continues to face both internal and external security threats. In a highly shared environ-
ment, isolation management is a critical additional security requirement. Customers wish
to see familiar and intuitive notions of physical isolation mirrored in the shared environment.
As services and applications are provisioned in a shared, virtual environment, isolation
policies can be consistently managed through all layers of the IT stack.
Security evolution
Security requires many capabilities with which enterprise IT must be armed. Among
these, authentication, authorization and distribution of security policies are the
foundational capabilities.
Every user must be authenticated before using any of the services provided. If a user
does not belong to an authentication domain, the user’s identity must be verified through
ID federation.
After a user is authenticated, the user can access resources based on his or her creden-
tials. As multiple services and resources exist in every data center, access capability for
the user should be appropriately determined. This is not an easy task, as services are
often packed in ensembles or application images and may be ported anywhere, while
access decisions usually vary depending on the exact instance of the services. Naturally,
this configuration leads to the separation of access enforcement functions embedded in
the application image from access decision functions.
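That separation can be sketched with a policy decision point consulted by enforcement points embedded in each application instance; the class names, policies and identities below are illustrative assumptions, not an IBM design.

```python
# Illustrative sketch of separating access decisions from access
# enforcement: the enforcement point travels with the application image,
# while decisions come from a central service that knows the exact
# deployment instance.

class PolicyDecisionPoint:
    """Central service: decisions can vary per service instance."""
    def __init__(self, policies):
        self.policies = policies  # (user, action, instance) -> allow?

    def decide(self, user, action, instance):
        return self.policies.get((user, action, instance), False)

class EnforcementPoint:
    """Embedded in the application image; defers every decision to the PDP."""
    def __init__(self, pdp, instance):
        self.pdp, self.instance = pdp, instance

    def request(self, user, action):
        if not self.pdp.decide(user, action, self.instance):
            raise PermissionError(f"{user} may not {action} on {self.instance}")
        return "ok"

pdp = PolicyDecisionPoint({("alice", "read", "prod"): True})
prod_pep = EnforcementPoint(pdp, "prod")
test_pep = EnforcementPoint(pdp, "test")
```

The same image, ported to a different instance, yields different access decisions without any change to the embedded enforcement code.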
Through its service orientation and service management focus, the new enterprise data
center affords enterprises the opportunity to consider resiliency in an integrated manner.
Resiliency risks are complex, as they can be business-driven, data-driven or event-driv-
en. Business-driven risks include compliance, governance and mergers. Data-driven risks
include data corruption, viruses and system failures. Event-driven risks include natural
disasters, pandemics and fires. If not mitigated through careful management, these risks
will manifest themselves in the facilities, technologies, processes, applications and data,
organization and strategy. The following are key resiliency requirements:
IBM innovations that address isolation and security requirements are aimed at creating
both virtual appliances running on ensembles of resources and trusted virtual domains.
Virtual appliances are preassembled application (or application component) stacks that
can be replicated easily for both scalability and reliability. They run on ensembles that are
homogeneous nodes, meaning that the software versions are either identical or nearly
identical on all of the nodes. Virtual appliances have simplified capabilities for workload
optimization, availability, restart and recovery, as well as the ability to change software. This
allows for the development of application architectures that can easily take advantage of
the similarity of resources to build higher-level applications and business process continuity.
A trusted virtual domain is a virtual region whose components mutually trust one
another. IBM has extensive research activity in this area, and has designed many
advanced technologies to implement trusted virtual domains. A trusted virtual domain
can be created dynamically
based on request, or, alternatively, it can be a permanent subset of the new enterprise
data center. All components (server, virtual guest machine, connections and clients) are
mutually trusted by authenticating themselves. Further, it can be assumed that operations
within a trusted virtual domain are safe from malicious parties, as security protection
mechanisms are embedded in every key component of virtual environments.
3.5 Business-driven Service Management
As infrastructure becomes increasingly automated and autonomic, IT responsibilities will
shift from managing complex technical operations to managing complex service opera-
tions. Operational complexity, process compliance, speed of change and costs are
driving the need for business-driven service management.
According to a Gartner report, through 2012, only 5 percent of large enterprises will
achieve operational and infrastructural management excellence.16
The new enterprise data center helps businesses meet these challenges with an integrated
service management framework that enables the fusion of people, processes and tech-
nology. The new enterprise data center service management framework comprises:
• IT service management
• IT operational management
• IT process management
IT service management
The IBM IT Service Management Platform, as depicted in the diagram below, is an open,
standards-based platform for data, workflow and policy integration across IT manage-
ment processes. The platform includes automated, pre-configured and customizable
process workflows for the change and configuration management processes. The IBM
Change and Configuration Management Database (CCMDB) includes an open, federated
CMDB designed to automate process execution, simplify architectural complexity and
help reduce incident and problem management costs.
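The federation idea behind an open, federated CMDB can be illustrated with a short sketch. This is not the CCMDB schema or API: the source names, configuration-item fields and lookup mechanism are all invented for illustration; the point is that the CMDB keeps references and consults authoritative sources rather than copying every record.

```python
# Hedged sketch of a federated CMDB: instead of duplicating every
# configuration item (CI) into one database, the CMDB queries the
# authoritative management data repositories on demand and tags each
# result with its source of record.

class FederatedCMDB:
    def __init__(self):
        self._sources = {}                    # source name -> lookup callable

    def register_source(self, name, lookup):
        self._sources[name] = lookup

    def get_ci(self, ci_id):
        """Ask each federated source until one returns the configuration item."""
        for source, lookup in self._sources.items():
            ci = lookup(ci_id)
            if ci is not None:
                return {"source": source, **ci}
        return None


# Two illustrative repositories being federated.
server_inventory = {"srv-42": {"type": "server", "os": "Linux"}}
storage_inventory = {"lun-07": {"type": "volume", "size_gb": 500}}

cmdb = FederatedCMDB()
cmdb.register_source("servers", server_inventory.get)
cmdb.register_source("storage", storage_inventory.get)

print(cmdb.get_ci("lun-07"))
# {'source': 'storage', 'type': 'volume', 'size_gb': 500}
```

Keeping each repository authoritative for its own data is what lets a federated design simplify architectural complexity: there is no bulk synchronization to maintain, only lightweight lookups.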
[Figure: IBM IT Service Management Platform — a service management foundation
surrounded by service quality management, service request management, asset
management, service provisioning and service monitoring.]
IT operational management
The IBM IT Operational Management solution automates tasks to address application or
business service operational management challenges. This solution helps to optimize the
performance and availability of business-critical applications, along with supporting IT
infrastructure. It also helps to ensure the confidentiality and integrity of information
assets while protecting data and maximizing its utility and availability.
The IBM IT Operational Management solution can be grouped into several functional areas.
IT process management
The IBM IT Process Management solution, which employs innovative, self-managing
technology, automates tasks down to the execution layer. Because the solution uses
standards-based APIs, it is easily customizable and lets you standardize information
across tasks and tools for consistent policy administration, bridging organizational
silos and integrating IT management processes for rapid responsiveness and greater
flexibility.
The IBM IT Process Management solution also includes predefined processes based on
IBM’s best practices and extensive experience applying standards in customer
environments, including the Information Technology Infrastructure Library (ITIL), the
enhanced Telecom Operations Map (eTOM), Control Objectives for Information and Related
Technology (COBIT), Capability Maturity Model Integration (CMMI) and the Process
Reference Model for IT (PRM-IT).
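A task-based process workflow of the kind described above can be sketched as a small state machine, here modeled loosely on an ITIL-style change process. The states, transitions and change-record fields are illustrative assumptions, not the solution's actual workflow definitions; a real process engine would add approvals, policies and audit records.

```python
# Minimal sketch of a task-based process workflow: a change request may
# only move along predefined transitions, which is how a process engine
# enforces consistent policy across organizational silos.

ALLOWED = {
    "new": {"assessed"},
    "assessed": {"approved", "rejected"},
    "approved": {"implemented"},
    "implemented": {"closed"},
}

class ChangeRequest:
    def __init__(self, summary):
        self.summary = summary
        self.state = "new"
        self.history = ["new"]

    def advance(self, next_state):
        """Enforce the workflow: only predefined transitions are allowed."""
        if next_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {next_state}")
        self.state = next_state
        self.history.append(next_state)


cr = ChangeRequest("Patch hypervisor on ensemble A")
for step in ("assessed", "approved", "implemented", "closed"):
    cr.advance(step)
print(cr.history)   # ['new', 'assessed', 'approved', 'implemented', 'closed']
```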
In summation, the new enterprise data center service management framework brings
together comprehensive service management, operational management and process
management components to fuse people, processes and technology.
3.6 Service-oriented IT delivery
Cloud computing
Enterprises are beginning to look to turn-key models in an effort to offer new services and
run new applications and business processes in a flexible and expeditious manner. This
results in an emerging data center architecture that offers massive scale with simplified
administration and optimized operations. Cloud computing is a term used to describe
both a platform and a type of application. A cloud computing platform dynamically
provisions, configures, reconfigures and de-provisions servers as needed. Servers in the
cloud can be physical machines or virtual machines. Advanced clouds typically include
other computing resources, such as storage area networks (SANs), network equipment,
firewalls or other security devices.
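The provision/reconfigure/de-provision life cycle described above can be sketched in a few lines. This is a toy model under stated assumptions, not any vendor's API: the pool capacity, server names and configuration fields are invented for illustration.

```python
# Sketch of the cloud provisioning life cycle: servers are provisioned
# from a shared pool on demand, reconfigured in place (e.g. resized),
# and de-provisioned so their capacity returns to the pool.

class CloudPlatform:
    def __init__(self, capacity):
        self.capacity = capacity              # maximum concurrent servers
        self.servers = {}                     # name -> configuration dict

    def provision(self, name, cpus=1):
        if len(self.servers) >= self.capacity:
            raise RuntimeError("pool exhausted")
        self.servers[name] = {"cpus": cpus}

    def reconfigure(self, name, cpus):
        self.servers[name]["cpus"] = cpus     # e.g. resize a virtual machine

    def deprovision(self, name):
        self.servers.pop(name)                # capacity returns to the pool


cloud = CloudPlatform(capacity=2)
cloud.provision("web-1", cpus=2)
cloud.provision("web-2")
cloud.reconfigure("web-2", cpus=4)
cloud.deprovision("web-1")
print(cloud.servers)   # {'web-2': {'cpus': 4}}
```

The servers in such a pool can be physical or virtual machines, as the text notes; the life cycle is the same either way, which is what makes the model uniform for consumers.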
Cloud computing is generating new excitement in the information technology (IT) in-
dustry. If you can envision optimization around consolidation and virtualization of IT
resources, then you might view cloud computing as the ability to seamlessly connect
these capabilities across your entire enterprise. The cloud computing model is based on
a shared infrastructure in which large pools of systems are linked to provide dynamic and
cost-efficient IT services. The need for cloud computing is justified not only by the growth
of traditional IT environments, but also by dramatic growth in connected devices and
real-time data streams, as well as the adoption of service-oriented architecture and Web
2.0 applications, such as mash-ups, open collaboration, social networking and mobile
commerce. By delivering appropriate resources only when those resources are needed,
cloud computing has enabled teams and organizations to streamline lengthy procurement
processes and drive down overall costs.
Cloud computing forms the foundation for a new way of delivering IT capabilities, which
is similar to the way utility companies deliver water or electrical services. Consumers rely
on electric companies to provide electricity when and where it is needed; people generally
don’t worry about where the resource is generated. Likewise, a cloud computing plat-
form dynamically provisions, configures, reconfigures and de-provisions IT capability as
needed, transparently and seamlessly, thus allowing IT consumers to focus on their value
proposition. A cloud computing infrastructure conforms to a set of well-defined specifica-
tions for data, application and services life cycle management.
IBM has established the first cloud computing center for software companies in China.
The center will allow Chinese software companies to support their development activities
by providing them with the ability to tap into a virtual computing environment. In addition,
Google and IBM recently announced a partnership with six universities (Stanford University,
University of Washington, MIT, CMU, UC Berkeley, and University of Maryland) to foster
research and development in parallel programming techniques for cloud computing.
Internally, IBM Research has built and deployed the Research Computing Cloud in order
to offer cloud computing services to a broader community. To help clients take advantage
of cloud computing, IBM is also developing Blue Cloud, which includes a feature that
allows cloud applications to integrate with their existing IT infrastructures through SOA-
based Web services. The design of Blue Cloud is particularly focused on breakthroughs
in IT management simplification to ensure security, privacy and reliability, as well as high
utilization and efficiency.
This new model of IT delivery will help to deliver infrastructural services by:
Major industry analysts view service orientation as a key IT ingredient in the achievement
of business agility. IT virtualization and IT automation are two vital elements in the ability
of service orientation to deliver IT as a service. IT virtualization is viewed as a technological
aspect of service-oriented IT delivery, creating a pool of infrastructure resources, such as
computing power and data storage, that masks physical boundaries from
users. IT automation, on the other hand, is viewed as a way to better govern IT services,
enabling policy-based, service-oriented, dynamic management of underlying virtualized
resources.
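The two elements just described can be sketched together: virtualization presents many physical hosts as one pool, and automation applies a policy against the pool rather than against individual machines. Host names, capacities and the scale-out policy are illustrative assumptions, not a description of any product.

```python
# Sketch of virtualization plus policy-based automation: consumers request
# capacity from one logical pool, and a policy decides when the pool needs
# to grow, without anyone addressing individual hosts.

class ResourcePool:
    """Masks physical boundaries: consumers see one pool, not hosts."""
    def __init__(self, hosts):
        self.hosts = hosts                    # host name -> free CPU count

    def total_free(self):
        return sum(self.hosts.values())

    def allocate(self, cpus):
        """Satisfy a request from whichever hosts have spare capacity."""
        placement = {}
        for host, free in self.hosts.items():
            take = min(free, cpus - sum(placement.values()))
            if take:
                placement[host] = take
                self.hosts[host] -= take
        if sum(placement.values()) < cpus:
            raise RuntimeError("pool exhausted")
        return placement


def enforce_policy(pool, reserve=2):
    """Policy-based automation: keep at least `reserve` CPUs free."""
    return "scale-out" if pool.total_free() < reserve else "steady"


pool = ResourcePool({"host-a": 3, "host-b": 2})
print(pool.allocate(4))      # request spans hosts: {'host-a': 3, 'host-b': 1}
print(enforce_policy(pool))  # only 1 CPU left free -> 'scale-out'
```

Note that the caller of `allocate` never names a host; the placement across physical machines is the pool's concern, which is precisely the boundary-masking the text describes.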
[Figure: Service-oriented IT delivery foundation — management services and development
services sit above the infrastructure services layer, which comprises federated
management, operational data management, container services, optimization, server
services, storage services, network services, execution services and facility services.]
By leveraging IBM’s broad technology portfolio and deep solution integration expertise,
clients can build IT services on this foundation. These IT services can be built at different
layers of abstraction, ranging from simple infrastructure services, such as server services,
storage services and network services, to a more complex composition of these capabilities.
IT service management components represented on the right side of the figure above
help to create and manage the delivery of IT services. Service request management
consists of a front-end service catalog, making these services accessible to end users.
Process management contains a set of task-based workflows designed to orchestrate
the delivery of IT services. Federated management components discover asset data and
store resource configurations. During the run time, operational management compo-
nents take actions based on federated management data to proactively rebalance traffic,
resize virtual machines, move resources and schedule the provisioning/de-provisioning of
resources accordingly.
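The run-time loop just described can be sketched as a simple rebalancing pass: operational management reads monitoring data and resizes virtual machines accordingly. The utilization thresholds, VM records and doubling/halving rule are illustrative assumptions, not the behavior of any IBM component.

```python
# Sketch of run-time operational management: resize virtual machines
# based on observed utilization, growing hot VMs and reclaiming capacity
# from idle ones.

def rebalance(vms, high=0.8, low=0.2):
    """Return the actions taken; mutates each VM's CPU allocation."""
    actions = []
    for name, vm in vms.items():
        if vm["util"] > high:
            vm["cpus"] *= 2                   # scale up a hot VM
            actions.append(("grow", name))
        elif vm["util"] < low and vm["cpus"] > 1:
            vm["cpus"] //= 2                  # reclaim idle capacity
            actions.append(("shrink", name))
    return actions


vms = {
    "app-1": {"cpus": 2, "util": 0.95},       # overloaded
    "app-2": {"cpus": 4, "util": 0.10},       # mostly idle
    "app-3": {"cpus": 2, "util": 0.50},       # fine as-is
}
print(rebalance(vms))   # [('grow', 'app-1'), ('shrink', 'app-2')]
```

In the architecture above, the monitoring data would come from the federated management components, and the resize actions would be carried out through the provisioning services rather than by mutating records directly.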
4.0 Why IBM?
IBM software has the broadest array of service management software, dynamic middle-
ware and IT tools of any company in the world. IBM’s server and storage products and
services provide industry-leading technologies for virtualization, high-volume computing,
energy management, cooling and massive scalability.
Our combination of skills and technology is unmatched in the industry. Combine these
attributes with our advanced research capabilities and exceptional focus on customer
satisfaction, and you get the industry leader in data center transformations.
5.0 Next Steps
You should also talk to your IBM client representative about our new enterprise data cen-
ter assessment. For a limited time, IBM is offering a no-cost enterprise evaluation on a
nomination basis to help pinpoint problem areas in your data center and recommend a
roadmap for evolution.
Finally, be on the lookout for more new enterprise data center technical white papers.
IBM is producing a series of deep-dive technical papers on the key technologies outlined
in this overview.
Copyright IBM Corporation 2008
IBM Corporation
New Orchard Road
Armonk, NY 10504
Printed in the United States of America, 05/2008
All Rights Reserved
Adobe, the Adobe logo, PostScript and the PostScript logo are either
registered trademarks or trademarks of Adobe Systems Incorporated
in the United States, other countries, or both.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Cen-
trino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium and Pentium
are trademarks or registered trademarks of Intel Corporation or its
subsidiaries in the United States and other countries.
All statements regarding IBM’s future direction and intent are subject
to change or withdrawal without notice, and represent goals and
objectives only.
OIW03013-USEN-00