ibm.com/redbooks
Redpaper
International Technical Support Organization

The Green Data Center: Steps for the Journey

August 2008
REDP-4413-00
Note: Before using this information and the product it supports, read the information in Notices on page vii.
First Edition (August 2008). This document was created or updated on June 1, 2009.
© Copyright International Business Machines Corporation 2008. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
Contents
Notices
Trademarks

Preface
The team that wrote this paper
Become a published author
Comments welcome

Chapter 1. The benefits of a green data center
1.1 Overview
1.1.1 Managing the increasing cost of the energy
1.1.2 Running out of power capacity
1.1.3 Running out of cooling capacity
1.1.4 Running out of space
1.2 How energy is used in a data center
1.3 Environmental laws and the company image
1.4 What is happening around the world
1.5 The benefits
1.6 Next steps

Chapter 2. Developing a strategy
2.1 Assess the greenness of your data center
2.1.1 Calculate data center infrastructure efficiency
2.1.2 Important questions to consider
2.2 Strategy recommendations
2.3 A summary of best green practices

Chapter 3. Energy optimization with IT equipment
3.1 Energy flow in a computer system
3.1.1 How the electric power is used
3.1.2 What to do with the heat
3.2 Working with energy efficiency
3.3 System instrumentation for power management
3.3.1 IBM systems with embedded instrumentation
3.3.2 IBM intelligent power distribution units
3.4 Power management: The hardware side
3.4.1 IBM POWER6 EnergyScale
3.4.2 IBM BladeCenter features
3.4.3 System x features
3.4.4 ACPI and friendly power saving features
3.4.5 High input voltages
3.5 Power management: The software side
3.6 Consolidation and virtualization overview
3.6.1 Consolidation: A key in energy efficiency
3.6.2 Virtualization: The greenest of technologies
3.7 Server virtualization
3.7.1 Partitioning
3.7.2 Special virtualization features of IBM systems
3.7.3 Other virtualization techniques
3.8 Storage virtualization
3.8.1 IBM SAN Volume Controller
3.8.2 Virtual tapes
3.9 Client virtualization
3.10 Integration of energy and systems management
3.11 Where to start

Chapter 4. Site and facilities
4.1 Tips to start moving your data center towards green
4.1.1 Reducing power consumption with innovative technologies
4.1.2 Reducing cooling requirements
4.1.3 Improving physical infrastructure
4.2 Optimize your data center cooling
4.2.1 Manage airflow
4.2.2 Structured cable management
4.2.3 Recommendations for raised floor height
4.2.4 Data center insulation
4.2.5 Hot aisle and cold aisle configuration
4.3 Localized cooling equipment options
4.3.1 IBM Cool Blue rear door heat exchanger
4.3.2 Modular water unit
4.3.3 Enclosed rack cooling
4.3.4 Sidecar heat exchanger
4.4 Heating ventilation and air conditioning (HVAC)
4.4.1 Types of chillers
4.4.2 Variable-speed drive pumps
4.4.3 Air handling unit
4.4.4 Economizers to enable free cooling
4.5 Cool less, save more
4.6 Uninterruptible power supply
4.6.1 Components and types
4.6.2 Flywheel technology
4.7 Power
4.7.1 Utility power supply
4.7.2 Power factor correction
4.7.3 Power distribution and resistance
4.7.4 Intelligent power distribution unit
4.7.5 DC versus AC
4.7.6 On-site generation
4.8 Generators
4.8.1 Standby generator
4.8.2 On-site electrical generation
4.9 Recommendations for existing data centers
4.9.1 Investigate, consolidate, and replace
4.9.2 Investigate industry initiatives
4.10 Summary

Chapter 5. Who can help: IBM services and Business Partners
5.1 Services IBM can deliver
5.1.1 IBM Global Technology Services
5.1.2 IBM Asset Recovery Solutions
5.1.3 IBM financing solutions
5.1.4 IBM green data centers
5.2 IBM Business Partners

Chapter 6. Conclusion: Green is a journey, act now

Appendix A. Commitment to green from IBM: The past, present, and future
A.1 A history of leadership in helping the environment
A.2 Project Big Green
A.3 IBM internal efficiency results
A.4 Future directions

Related publications
IBM Redbooks publications
Online resources
Other publications
ASHRAE publications
How to get IBM Redbooks publications
Help from IBM
Notices
This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE: This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms.
You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml. The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:
AIX BladeCenter Cool Blue DB2 DPI EnergyScale FICON HiperSockets IBM Project Financing IBM iDataPlex Power Systems POWER6 POWER PR/SM Processor Resource/Systems Manager Redbooks Redbooks (logo) Redpaper System p5 System p System x System z10 System z Tivoli X-Architecture z/OS
The following terms are trademarks of other companies: AMD, AMD PowerNow!, the AMD Arrow logo, and combinations thereof, are trademarks of Advanced Micro Devices, Inc. InfiniBand, and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association. ITIL is a registered trademark, and a registered community trademark of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office. Oracle, JD Edwards, PeopleSoft, Siebel, and TopLink are registered trademarks of Oracle Corporation and/or its affiliates. VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions. Power Management, Solaris, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both. Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel SpeedStep, Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States, other countries, or both. UNIX is a registered trademark of The Open Group in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.
Preface
The information technology (IT) industry is a big user of resources as well as an enabler of efficiencies that reduce CO2 emissions. However, as companies continue to grow to meet the demands of their customers and as environmental concerns continue to be an issue, organizations are looking for ways to reduce corporate energy consumption and to become more environmentally responsible, that is, to become green. This IBM Redpaper publication can help your IT organization as it begins the journey to becoming a green data center. IBM wants to help others, particularly our clients, to chart a course to reap the benefits of lower costs and improved sustainability that running a green data center can provide. Understanding what is possible can speed your journey to an optimized green data center with sustainability designed into both the IT and facilities infrastructures. Although this paper is not all inclusive, it provides a quick start for going green in data centers. It also provides additional pointers and information. You can use this paper as a guide to becoming more energy efficient.
Thanks to the following people for their contributions to this project:

David F Anderson PE is a Green Architect from the IBM Briefing Center in Poughkeepsie supporting IBM worldwide. He is a licensed Professional Engineer with a B.S. from the United States Military Academy (West Point), an MBA from the University of Puget Sound, and a Masters in Engineering Science from Rensselaer Polytechnic Institute.

Don Roy is a Senior IT Consultant and Server Presenter for the IBM RTP Executive Briefing Center. Prior to joining the RTP Executive Briefing Center, Don was the World Wide Product Marketing Manager and Team Lead for System x high volume servers. In the Executive Briefing Center, Don provides specialized coverage for System x servers and enterprise solutions.

Dr. Roger Schmidt is a Distinguished Engineer, National Academy of Engineering Member, IBM Academy of Technology Member, and ASME Fellow. He has over 25 years of experience in engineering and engineering management in the thermal design of IBM large scale computers. He has led development teams in cooling mainframes, client/servers, parallel processors, and test equipment utilizing such cooling mediums as air, water, and refrigerants. He currently leads IBM's lab services team, providing customer support for power and cooling issues in data centers. Dr. Schmidt is a past Chair of the ASHRAE Technical Committee 9.9 (TC 9.9) on Mission Critical Facilities, Technology Spaces, and Electronic Equipment.
Comments welcome
Your comments are important to us! We want our papers to be as helpful as possible. Send us your comments about this paper or other IBM Redbooks in one of the following ways:
- Use the online Contact us review Redbooks form found at: ibm.com/redbooks
- Send your comments in an e-mail to: redbooks@us.ibm.com
- Mail your comments to: IBM Corporation, International Technical Support Organization, Dept. HYTD Mail Station P099, 2455 South Road, Poughkeepsie, NY 12601-5400
Chapter 1. The benefits of a green data center
1.1 Overview
The journey to a green data center has cost and sustainability rewards. Energy consumption and environmental concerns are becoming priorities because they can impede the ability to grow and respond to organizational IT needs. In September 2007, IBM and the Economist Intelligence Unit issued a report entitled IT and the environment: A new item on the CIO agenda? This report found that although most organizations say they are green organizations, many of them are not actually doing as much as they could. Two-thirds of the more than 200 executives polled said that their organizations have a board-level executive responsible for energy and the environment; however, only 45% of firms had a program in place to reduce their carbon footprint. In addition, of those that did have a carbon reduction strategy, the majority (52%) had no specific targets for it, although a small hard core (9%) aimed to be carbon-neutral by 2012.
Green awareness can become green action. Using green technologies to extend capabilities
while reducing costs and risks is more than smart business. Planning for energy efficiency and corporate responsibility go together, with positive results. The global economy has made the world more competitive and cooperative. As organizations strive to thrive in a constantly changing world, adapting for sustainability is a business imperative. Figure 1-1 lists multiple concerns that CIOs have as they look toward the future.
The report is available at: http://www-05.ibm.com/no/ibm/environment/pdf/grennit_oktober2007.pdf
Kenneth G. Brill, Data Center Energy Efficiency and Productivity, The Uptime Institute, 2007
[Figure: how energy is used, following the path from the data center (roughly 55% to power and cooling, 45% to the IT load) through server hardware (roughly 70% to components other than the processor, 30% to the processor) to server loads, which are typically about 80% idle with a 20% resource usage rate.]
Finally, companies should consider the use of IT resources in the data center. Commonly, servers are underutilized, yet they consume the same amount of energy as though they were running at 100%. A typical server utilization rate is 20%. Underutilized systems can be a big issue because a lot of energy is expended on non-business purposes, thus wasting a major investment. Virtualization and consolidation help utilize the entire capacity of your IT equipment. IBM System p and IBM System z offer integrated virtualization capabilities that are often more optimized than other servers. By using your IT equipment at its full capacity with consolidation and virtualization, you can achieve energy efficiency and maximize the return on investment. We discuss this in section 3.6, Consolidation and virtualization overview on page 27.
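To see what this underutilization costs, the following Python sketch estimates the share of a server's annual energy bill that goes to idle cycles when utilization is 20% but the power draw stays close to constant. The power draw, hours, and electricity price are assumed values for illustration only:

# Illustrative estimate of energy spent on idle cycles for an underutilized server.
# Assumptions (not from the paper): 500 W average draw, $0.10 per kWh.
AVG_POWER_W = 500          # assumed average server power draw
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10       # assumed electricity price, USD
UTILIZATION = 0.20         # typical server utilization cited in the text

annual_kwh = AVG_POWER_W * HOURS_PER_YEAR / 1000.0
annual_cost = annual_kwh * PRICE_PER_KWH

# If the power draw is nearly flat regardless of load, the share of energy that
# does no business work is roughly (1 - utilization).
idle_cost = annual_cost * (1 - UTILIZATION)

print(f"Annual energy: {annual_kwh:.0f} kWh, cost ${annual_cost:.0f}")
print(f"Spent while idle (approx.): ${idle_cost:.0f} per server")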
J. Koomey, Estimating Total Power Consumption by Servers in the US and the World, 15 Feb 2007
For more information, see the information provided by the U.S. EPA: http://www.epa.gov/climatechange/index.html
IT and the environment: A new item on the CIO's agenda?, IBM and The Economist Intelligence Unit Business Study Findings, at: http://www-05.ibm.com/no/ibm/environment/pdf/grennit_oktober2007.pdf
Green to Gold, p. 33, Daniel C. Esty and Andrew S. Winston, Yale University Press, 2006.
Figure 1-1 CIO concerns: from current pressures to desired outcomes

Financial
From: Rising global energy prices; squeeze on IT budgets; constraints on IT growth
To: Ability to accurately view baseline energy cost; cost savings from more efficient energy use; relaxed budgetary pressures to allow growth

Operational
From: High-density server systems; exploding power and cooling costs; aging data centers
To: More computing performance per kilowatt; a shift in the energy-to-cool versus energy-to-operate ratio; extended life of existing facilities

Environmental
Meaningful energy conservation and a reduced carbon footprint; improved public image; a positive contribution to the green movement that creates a good place to work
Chapter 2. Developing a strategy
A thoughtful green strategy can reduce operational costs and risks. Being green by design can also improve sustainability and corporate image. In this chapter, we discuss how to assess your data center and we provide strategy recommendations. We also provide an overview of best practices at the end of this chapter, and then develop these practices in the chapters that follow.
Power usage effectiveness (PUE) is the original metric; data center infrastructure efficiency (DCiE) was created to make it easier to see where a data center stands in terms of efficiency.
IT equipment power includes the load that is associated with all of your IT equipment (such as servers, storage, and network equipment) and supplemental equipment (such as keyboard, video, and mouse switches; monitors; and workstations or mobile computers) that are used to monitor or otherwise control the data center.

Total facility power includes IT equipment and everything that supports the IT equipment load, such as:
- Power delivery components such as uninterruptible power supply (UPS), switch gear, generators, power distribution units (PDUs), batteries, and distribution losses external to the IT equipment
- Cooling system components such as chillers, computer room air conditioning units (CRACs), direct expansion air handler (DX) units, pumps, and cooling towers
- Computer, network, and storage nodes
- The decreased efficiency of uninterruptible power supply (UPS) equipment when run at low loads
- Other miscellaneous component loads such as data center lighting
A DCiE value of 33% (equivalent to a PUE of 3.0) means that the IT equipment consumes 33% of the power in the data center. Thus, for every 100 dollars spent on energy, only 33 dollars actually go to IT equipment. Improvements in energy efficiency move DCiE toward 100%, the ideal value. IBM is now using the DCiE metric instead of PUE; however, PUE is still in use.
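The arithmetic behind these metrics is simple enough to script. The following Python sketch computes PUE and DCiE from measured power figures; the input values are assumptions chosen to reproduce the PUE 3.0 / DCiE 33% example above:

# PUE and DCiE from the definitions used in this chapter:
#   PUE  = total facility power / IT equipment power
#   DCiE = IT equipment power / total facility power (the inverse of PUE, as a percentage)
# The example values are illustrative assumptions.
total_facility_kw = 900.0   # power drawn by the whole data center (assumed)
it_equipment_kw = 300.0     # power delivered to servers, storage, and network gear (assumed)

pue = total_facility_kw / it_equipment_kw
dcie = it_equipment_kw / total_facility_kw * 100

print(f"PUE  = {pue:.2f}")     # 3.00
print(f"DCiE = {dcie:.0f}%")   # 33%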
[Figure: the PUE scale and the corresponding DCiE percentages; for example, a PUE of 3.57 corresponds to a DCiE of about 28%, a PUE of 3.0 to 33%, a PUE of 2.5 to 40%, and a PUE of 2.0 to 50%.]
Important: DCiE indicates the percentage of the energy that is used by IT equipment compared to the total energy drawn. It shows the efficiency of the facility infrastructure that supports your IT equipment. It does not provide information about the energy efficiency of the IT equipment itself (servers, storage, and network equipment) or whether the return on investment (ROI) of this equipment is maximized (few idle resources).
The facilities
The following set of questions pertains to your site and facilities:
- How and where is energy being used?
- What is the facility's current DCiE or PUE?
- Am I power- and performance-oriented or only performance-oriented?
- Do I invest in a new data center, or can I invest in the evolution of my current data center?
- Is the physical site of my data center adaptable to changes?
- Is my desired level of reliability driving the facility's energy consumption? How much idle capacity for redundancy or resilience exists? Is it enough or too much? Could I eliminate any equipment?
- What support equipment should I choose (uninterruptible power supply, flywheel, generators, power distribution, chillers, CRACs, and so forth)?
- What are the future trends? Is the facility's infrastructure adaptable to the power and cooling requirements of the next generation of hardware? For example, more IT equipment will be water-cooled in the future.
- Does the facility have problems of overheating? Of humidity?
- Can I use free cooling? (Refer to section 4.4.4, Economizers to enable free cooling on page 52.)
- Does power, cooling, or space impact operations today? Which will impact business growth in the future? Can I add future compute capacity within my energy envelope?
- Is my site infrastructure optimized with regard to the following categories?
  Airflow and heat removal
  Power distribution
  Cooling
  Lighting
  Monitoring and management
The IT equipment
The following set of questions pertains to IT equipment. It includes the design of the hardware but also the options for cooling, powering, and monitoring that exist at the rack level:
- Does the equipment use energy-efficient hardware? Does it use power-reduction features?
- Should I choose power and cooling options at the site and facilities level, or at the rack level?
- Does the hardware provide options for power, thermal, and usage resource monitoring?
- Do I monitor and control power consumption? How is billing of the power usage done? Who can help?
- Do I monitor utilization rates of my resources? What about real-time data and trends?
- How is billing done for the services that the infrastructure is providing? Who can help?
[Figure: assessment questions at a glance. For the IT equipment: consolidation? utilization rate? capping? For the facilities: old or expensive equipment? humidity? lighting? how is energy used? heat removal? space? airflow? DCiE? location? which tier am I? And overall: what do I plan, and who can help?]
Note: To continue the assessment on energy efficiency, access the free Web questionnaire from IBM at the following location:
http://www.ibm.com/systems/optimizeit/cost_efficiency/energy_efficiency/services.html
The output from the assessment is a listing of areas that deserve additional focus for better energy management. It also recommends IBM services that can be deployed to help improve the energy efficiency of your data center. IBM Global Technology Services also offers assessment services. You can find further details in Section 5.1, Services IBM can deliver on page 60.
Most of these practices are low to medium in complexity and cost, can be implemented in less than a year, and offer a high payback; virtualizing desktops is the main exception, with high complexity and cost and a one-to-three-year timeframe.

Strategy best practices:
- Have executive management approval, have a dedicated team, and involve everybody.
- Make plans that consider IT and site/facilities together.
- Begin with an assessment of the current situation.
- Make energy costs part of every business case.
- Consider the benefits offered by the New Enterprise Data Center (http://www.ibm.com/systems/optimizeit/datacenter).
- Share the costs with everybody.
For more information, see 2.2, Strategy recommendations on page 12.

IT equipment best practices:
- Only buy servers with virtualization and power management features. Lay the foundation for maximum flexibility and a sustained investment into the future.
- Enable share-everything architectures, such as the mainframe, to drive up utilization and reduce the need for systems.
- Install new servers as virtual instances for flexibility.
- Move from less efficient to more efficient hardware. Even better, consolidate old servers to virtual servers on efficient hardware. This reduces the energy, CO2, and even the space footprint of the data center.
- Identify systems with complementary loads and consolidate them. Partially utilized systems can often make one fully utilized system, reducing the energy, CO2, and space footprint.
- Manage power consumption of your IT systems. Adjust the energy consumption and heat load according to the workload, and reduce the energy and CO2 footprint (carbon dioxide emitted) of the data center.
- Measure thermal and power loads of your IT systems. Take control of the energy and CO2 footprint.
- Virtualize your storage if possible. This can have a high initial cost, but it is the foundation for flexibility and energy savings in the future; a sustained investment.
- Virtualize your desktops. The initial cost is high, but it reduces your total cost of ownership dramatically, reduces the energy and CO2 footprint of the site, and simplifies management; a sustained investment.
For more information, see 3.7, Server virtualization on page 29.

Site and facilities best practices:
- Manage airflow. To increase airflow efficiency, have a clear path for the cool air to travel under the raised floor and to get to the loaded areas. Above the raised floor, allow a path for the hot air to return back to the CRAC units.
- Arrange hot and cold aisles. The hot aisle and cold aisle configuration enables much better airflow management on the raised floor, for both hot and cold air.
- Localize cooling. Locate heat exchangers at the heat source, which lessens the need for a CRAC unit. This increases the efficiency of the remaining CRAC units and the capacity of the available cooling within the data center. These heat exchangers are all scalable. (See 4.3, Localized cooling equipment options on page 46.)
- Plan for water-cooling in your data center.
- Replace the oldest infrastructure equipment first, because it is likely to fail next and is less energy efficient; newer infrastructure products are more energy efficient. (See 4.1.3, Improving physical infrastructure on page 39, and 4.9, Recommendations for existing data centers on page 57.)
Figure 2-4 shows the strategy of moving towards having a green data center, with several recommended steps. It shows the need to coordinate all actions simultaneously with the IT infrastructure and with the people involved in the process.
Figure 2-4 The journey to a green data center, coordinating IT, facilities, and people through successive stages:
- Best practices: hot and cold aisles; improved-efficiency transformers, UPS, chillers, fans, and pumps; free cooling; flywheel technology; conservation techniques; infrastructure energy efficiency; improved airflow management.
- Physical consolidation: consolidate many centers into fewer; reduce infrastructure complexity; improve facilities management; reduce staffing requirements; improve business resilience (manage fewer things better); improve operational costs.
- Virtualization.
- Application integration: migrate many applications into fewer images; simplify the IT environment; reduce operations resources; improve application-specific monitoring and tuning.
- State-of-the-art: integrated power management; direct liquid cooling; combined heat and power.
Chapter 3. Energy optimization with IT equipment
Figure 3-1 Typical relative power consumption by component for typical systems
From left to right, Figure 3-1 shows typical power allocations for mainframe systems (such as the IBM System z10), high-end UNIX servers (such as the IBM POWER 595), high performance computing (HPC) servers (such as the IBM POWER 575), entry-level UNIX systems (such as the IBM POWER 520), and blade servers, which represent a power-efficient replacement for typical 1U servers. Because the processor accounts for approximately 20-30% of the power consumed by the mainframe but over 50% for the blade, we use different strategies for each when optimizing their energy effectiveness.
The red portions at the bottom of the vertical bars show the energy required to provide power to each system. Transforming AC power into DC loses some energy. The efficiency of a transformer (power supply) depends on the load, and the relationship is non-linear: the most efficient load is 50-75%. Efficiency drops dramatically below a 50% load, while it does not improve significantly at higher loads. The challenge is to balance the system so that each component can operate in the most efficient way.
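As a rough illustration of this balancing act, the following Python sketch checks whether each power supply in a redundant pair lands in the 50-75% load band described as most efficient. The ratings and system draw are assumed values, not measurements from any particular IBM system:

# Check whether power supplies operate in the 50-75% load band described as most efficient.
# Ratings and system draw are illustrative assumptions.
PSU_RATING_W = 900          # nameplate rating of each power supply (assumed)
ACTIVE_PSUS = 2             # e.g., a redundant pair sharing the load (assumed)
SYSTEM_DRAW_W = 1000        # measured DC load of the system (assumed)

load_per_psu = SYSTEM_DRAW_W / ACTIVE_PSUS / PSU_RATING_W
in_efficient_band = 0.50 <= load_per_psu <= 0.75

print(f"Load per power supply: {load_per_psu:.0%}")
print("Within the 50-75% band" if in_efficient_band else "Outside the 50-75% band")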
Figure 3-2 Heat flow path from transistor device to data center
Each watt saved in system power results in approximately another watt saved in heat load. These savings also have an effect on the uninterruptible power supply (UPS) and cooling. Therefore, reducing system power consumption pays back more than double, which is a big benefit when moving to a green data center. Air is a very inefficient cooling medium, so liquid cooling is increasing in popularity. Water is currently the most common liquid used for cooling. One liter of water can absorb about 4000 times more heat than the same volume of air. As more heat is generated in a smaller space, water-cooled systems seem inevitable in the near future. When planning new data centers or revamping current centers, consider that new IT equipment will undoubtedly require liquid cooling for efficiency.
Figure 3-3 Schematic cooling system overview of the POWER 575 system
The rear door heat exchanger (RDHX)

The rear door heat exchanger (RDHX) is described in Section 4.3.1, IBM Cool Blue rear door heat exchanger on page 47 in more detail. It is another device to help reduce the heat load in your data center. Any of these devices are helpful when there are single hotspot problems, or if the air-based cooling of the data center is at its limit. When the chilled water infrastructure is
in place, the RDHX is a very attractive solution because the dissipated heat now bypasses the CRAC unit and can be more efficiently absorbed by the chillers. The pictures in Figure 3-4 on page 21 were taken in a production data center. The rack houses nine System p5 550Q servers. The RDHX cools the 46.9 Celsius (114.8 Fahrenheit) air inside the rack to 25.1 C (77 F) at its outlet.
iDataPlex
System x takes a new approach to solving data center challenges through its latest innovation, iDataPlex. This is a data center solution for Web 2.0, HPC cluster, and corporate batch processing clients who are experiencing limitations of power, cooling, or physical space. iDataPlex servers help pack more processors into the same power and cooling envelope, better utilize floor space, and right-size data center design. With the iDataPlex solution, less power per processor means more processing capacity per kilowatt. The iDataPlex can run cooler to deliver greater reliability. iDataPlex offers flexibility at the rack level. It can be cabled either through the bottom, if it is set on a raised floor, or from the ceiling. Front-access cabling and Direct Dock Power allow you to quickly and easily make changes in networking, power connections, and storage. The rack supports multiple networking topologies including Ethernet, InfiniBand, and Fibre Channel. It is an excellent option for a green data center because it offers the following benefits:
- Efficient power and cooling
- Flexible, integrated configurations
- New levels of density
- Cost-optimized solution
- Single-point management
- Data center planning and power and cooling assessment
Key questions from the previous section are: What makes IT equipment energy efficient? How do systems compare to each other? One approach to answering these two questions is to examine the relationship between the power that is used and the workload that is processed. This comparison provides a handy concept of an energy-efficient IT system: choose equipment that is efficient, and when options exist, go with the system that is more efficient.

Note: The more energy efficient a system is, the more workload it can process with a certain power input, or the less power it needs for processing a certain workload. In other words, being energy efficient means that you can either decrease power and do the same work or increase workload with the same power. Most efficiency improvements result in a combination of the two. For example, we could assign the workload of computer A to another, more power-efficient computer B, and then switch off computer A, applying both alternatives at once.

Note: For IT equipment energy efficiency, we primarily look at the power-to-workload relationship. But there is another workload-related characteristic we do not want to miss: the workload-to-time relationship, better known as performance. Using physics, we can see our power-to-workload relationship as an analogy to mechanical work, so we view performance as analogous to mechanical power.

After each change, we want to know that we actually improved our overall energy balance. Obviously, this can only be achieved with a proper before-and-after evaluation. The very basic starting point and first step in all our efforts must therefore be to enable proper measurement.
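One simple way to make the power-to-workload relationship concrete is to compare systems by the work they complete per watt, as in the following Python sketch. The workload units and power figures are assumptions for illustration:

# Compare two systems by workload processed per watt of input power.
# "Workload" is any consistent unit of useful work (transactions, jobs, and so on);
# the numbers are illustrative assumptions.
def work_per_watt(workload_units: float, avg_power_w: float) -> float:
    """Energy efficiency expressed as units of work per watt of average power."""
    return workload_units / avg_power_w

computer_a = work_per_watt(workload_units=10_000, avg_power_w=800)   # assumed
computer_b = work_per_watt(workload_units=10_000, avg_power_w=450)   # assumed

print(f"Computer A: {computer_a:.1f} units/W")
print(f"Computer B: {computer_b:.1f} units/W")
# The more efficient system can either do the same work with less power,
# or more work with the same power -- the trade-off described in the text.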
The instrumentation for workload measurement is out of the scope of this document. Tracking the operating system's utilization and using accounting measurement tools is (or should be) an established practice for all data centers, whether going green or not.
Figure 3-5 How system instrumentation and Active Energy Manager work together (rack servers, service processors, management modules, and iPDUs connected over the network)
The following overview describes instrumentation in IBM systems:
- System z Gas Gauge: The System Activity Display within the Hardware Management Console shows the machine's current power and temperature information.
- EnergyScale technology in POWER6 processors, which are built into POWER blades and the Power System computers. Described in greater detail later, EnergyScale and Power Trending monitor the actual power and thermal load of the processor.
- System x servers and BladeCenter chassis provide their power and monitoring information to the Active Energy Manager product, also described later in more detail.
reduce peak energy consumption. For example, during low CPU utilization cycles in the night, the processor may run in Power Saver Mode. This feature resembles the Intel SpeedStep or AMD PowerNow! technologies.
- Power Capping: Enforces a specified power usage limit. This feature is handy when there are general power limitations, such as the maximum power available for a set of systems. However, it should not be used as a power-saving feature like Power Saver Mode, because it has a significant impact on performance.
- Processor Core Nap: Uses the IBM POWER6 processor low-power mode (called Nap) to reduce power consumption by turning off the clock for the core. Signalled by the operating system, the hypervisor controls when cores enter or leave Nap mode.
- EnergyScale for I/O: Enables Power Systems models to automatically power off pluggable PCI adapter slots. Slots are automatically powered off if they are empty, are not assigned to a partition, or their assigned partition is powered off. This saves approximately 14 W per slot.

The power management functions can be configured using the Advanced System Management Interface (ASMI), the Hardware Management Console (HMC), or Active Energy Manager. However, only Active Energy Manager allows configuration of Power Capping and evaluation of Power Trending. Note that all POWER6-based systems employ this technology.
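As a rough, illustrative use of the 14 W per slot figure quoted above, the following Python sketch estimates the yearly energy avoided by powering off unused slots; the slot count is an assumption:

# Rough yearly saving from automatically powering off unused PCI adapter slots,
# using the ~14 W per slot figure quoted in the text. The slot count is an assumption.
WATTS_PER_SLOT = 14
UNUSED_SLOTS = 6            # assumed number of empty or unassigned slots
HOURS_PER_YEAR = 24 * 365

kwh_saved = WATTS_PER_SLOT * UNUSED_SLOTS * HOURS_PER_YEAR / 1000.0
print(f"Approximate saving: {kwh_saved:.0f} kWh per year")   # ~736 kWh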
The IBM System x and BladeCenter Power Configurator can be downloaded from: http://www.ibm.com/systems/bladecenter/resources/powerconfig/
Figure 3-6 Optimize rack layout with the help of Active Energy Manager
When you have a power management infrastructure in place, you can take additional steps. For example, you can locate hot spots in your data center, perhaps supported by physical location information about your servers. Alternatively, after you know the hot spots, you can prevent them by provisioning your applications to cool servers. If your utility provider offers load dependent rates, or day and night rates, why not optimize your power consumption to these? Many options are available. However the first step is to build the infrastructure.
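For example, if your provider offers separate day and night rates, the potential saving from shifting deferrable batch work into the cheaper window can be estimated with a sketch like the following Python example; the tariffs and the amount of movable load are assumptions:

# Estimate the saving from shifting deferrable load into a cheaper night tariff.
# Tariffs and the amount of movable load are illustrative assumptions.
DAY_RATE = 0.14              # USD per kWh, assumed day tariff
NIGHT_RATE = 0.08            # USD per kWh, assumed night tariff
DEFERRABLE_KWH_PER_DAY = 60  # batch work that could run at night instead (assumed)

daily_saving = DEFERRABLE_KWH_PER_DAY * (DAY_RATE - NIGHT_RATE)
print(f"Saving: ${daily_saving:.2f} per day, about ${daily_saving * 365:.0f} per year")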
Figure 3-7 Consolidation of applications from underutilized servers (Systems 1 through 4, each 10% busy at 2 kW) to a single, more efficient server (70% busy at 4 kW)
As we indicated earlier, a decrease in overall power consumption is not the only factor. Hand in hand with the power reduction goes the same amount of heat-load reduction, a further saving for the infrastructure. This double reduction is the reason why consolidation is an enormous lever for moving to a green data center. However, a particular drawback of consolidation is that none of systems 1 through 4 is allowed to be down during the time that the respective applications are moving to the consolidated system. So, during that migration time, higher demands on resources might occur temporarily.
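Using the figures from Figure 3-7, the following Python sketch works through this double reduction: the IT power saved by the consolidation plus a comparable reduction in heat load that the cooling plant no longer has to remove. The one-to-one cooling factor is an assumption based on the earlier watt-for-watt observation:

# Power and heat-load saving for the consolidation shown in Figure 3-7:
# four servers at 10% busy and 2 kW each, replaced by one server at 70% busy and 4 kW.
BEFORE_KW = 4 * 2.0
AFTER_KW = 4.0
it_power_saved_kw = BEFORE_KW - AFTER_KW             # 4 kW less IT load

# Each watt of IT power removed also removes roughly a watt of heat the cooling
# plant must reject (see the earlier discussion); the exact ratio varies by site.
COOLING_FACTOR = 1.0                                 # assumed 1 W of cooling per W of IT load
total_saved_kw = it_power_saved_kw * (1 + COOLING_FACTOR)

hours = 24 * 365
print(f"IT power saved: {it_power_saved_kw:.1f} kW")
print(f"Total including cooling: {total_saved_kw:.1f} kW, "
      f"about {total_saved_kw * hours / 1000:.0f} MWh per year")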
communicate using the virtualization system's capabilities, often transferring in-memory data at enormous speed. Performance and energy efficiency increase because the network components are dropped. Once again, this reduces the need for site and facilities resources. Each of the separate systems has its own storage system, namely disks. The virtualized systems can now share the disks available to the virtualization system. By virtualizing its storage, the virtualization system can provide optimal disk capacity, in terms of energy efficiency, to the virtualized systems.
Figure 3-8 Virtualization allows us to consolidate systems the way they are: the applications from Systems 1 through 4 (each 10% busy at 2 kW) continue to run unchanged in Virtual Systems 1 through 4 on one physical server
3.7.1 Partitioning
Partitioning is sometimes confused with virtualization, but the partitioning feature is a tool that supports virtualization. Partitioning is the ability of a computer system to connect its pool of resources (CPU, memory, and I/O) together to form a single instance of a working computer or logical partition (LPAR). Many of these LPARs can be defined on a single machine, as long as resources are available. Of course, other restrictions apply, such as the total number of LPARs a machine can support. The power supplied to the existing physical
computer system is now used for all these logical systems, yet these logical systems operate completely independently from each other. Important: LPARs each work independently at the maximum performance of the underlying system. All partitions share the energy provided to the overall system. LPARs have been available on the IBM System z since the late 1980s and on System p since approximately 2000. Although the System z and System p partitioning features differ in their technical implementations, they both provide a way to divide up a physical system into several independent logical systems.
VMware ESX Server and Microsoft Virtual Server come with a hypervisor that is transparent to the virtual machine's operating system. These products fall into the full virtualization category. Their advantage is their transparency to the virtualized system. An application stack bound to a certain operating system can easily be virtualized, as long as the operating system is supported by the product. VMware offers a technology for moving servers called VMotion. By completely virtualizing servers, storage, and networking, an entire running virtual machine can be moved instantaneously from one server to another. VMware's VMFS cluster file system allows both the source and the target server to access the virtual machine files concurrently. The memory and execution state of a virtual machine can then be transmitted over a high speed network. The network is also virtualized by VMware ESX, so the virtual machine retains its network identity and connections, ensuring a seamless migration process. System p Live Partition Mobility offers a similar concept.

Xen uses either the paravirtualization approach (mentioned in 3.7.2, Special virtualization features of IBM systems on page 30), as the POWER architecture does, or full virtualization. In the partial approach (paravirtualization), virtualized operating systems should be virtual-aware. Xen, for example, requires virtual Linux systems to run a modified Linux kernel. Such an approach establishes restrictions on the usable operating systems. However, as long as they are hypervisor-aware, different operating systems with their application stacks can be active on one machine. In the full approach, the hardware must be virtual-aware, such as with Intel's Vanderpool or AMD's Pacifica technology. In this case, running unmodified guests on top of the Xen hypervisor is possible, gaining the speed of the hardware.

Another technique is operating system level virtualization. One operating system on a machine is capable of making virtual instances of itself available as a virtual system. Solaris containers (or zones) are an example of this technique. In contrast to the other techniques, all virtualized systems run on the same operating system level, which is the only operating system the machine provides. As mentioned in the introductory section, this can become a very limiting restriction, especially when consolidating different server generations. Often the application stack is heavily dependent on the particular operating system. We reach a dead end when we want to consolidate servers running different operating systems such as Windows and Linux.

Attention: In addition to the network virtualization products mentioned in this paper, a popular virtualization technique is to combine related applications on one central server or complex. This allows the networking between them to be done internally at computer speeds rather than network speeds, and it saves the cost of networking hardware and software.
storage. ILM is not in the scope of this paper; however, keep in mind that optimizing your storage landscape by adapting it to your actual needs can be a very green strategy.

Dynamic address translation (DAT) is the process of translating a virtual address during a storage reference into the corresponding real address. If the virtual address is already in main storage, the DAT process can be accelerated through the use of a translation lookaside buffer. If the virtual address is not in main storage, a page fault interrupt occurs, z/OS is notified, and the page is brought in from auxiliary storage. Looking at this process more closely reveals that the machine can present any one of a number of different types of faults. A fault of the appropriate type (region, segment, or page) is presented, depending on the point in the DAT structure at which invalid entries are found. The faults repeat down the DAT structure until, ultimately, a page fault is presented and the virtual page is brought into main storage, either for the first time (there is no copy on auxiliary storage) or by bringing the page in from auxiliary storage. DAT is implemented by both hardware and software through the use of page tables, segment tables, region tables, and translation lookaside buffers. DAT allows different address spaces to share the same program or other data that is read-only. This is because virtual addresses in different address spaces can be made to translate to the same frame of main storage. Otherwise, there would have to be many copies of the program or data, one for each address space.
Obviously, storage virtualization is another tool for consolidation. Underutilized disks can be virtualized and consolidated in a SAN, and the data can reside in more efficient storage pools. The SVC supports migration of data among the connected devices and to remote sites for redundancy or backup purposes. It can also help manage storage hierarchies, where low-activity or inactive data can be migrated to cheaper storage. The integrated cache, on the other hand, is able to improve the performance of lower-tier storage. The data migrations are managed transparently, so they do not interrupt the applications. The following points make the SVC an attractive tool for an energy-efficient storage strategy:
- Data migration from older to newer, more efficient systems can happen transparently.
- Tiered storage enables you to use media with a smaller energy footprint, while the SVC cache improves its performance.
- Consolidation of the systems' individual storage devices to virtual storage has the same effect of increasing storage utilization as is shown for server virtualization.

Storage virtualization requires more effort than server virtualization, often requiring us to rethink the existing storage landscape. During consolidation, large amounts of data must be moved from the old systems to the consolidated storage system. This can become a long task that requires detailed planning. However, when it is done, the effect can be enormous because storage can now be assigned to systems in the most flexible way.
environment, and produces noise. Desktop virtualization can improve the situation dramatically. The underlying principle of client virtualization is to replace the office workstation with a box that has a much smaller energy footprint. The needed computing power is moved into the data center. This does not sound excitingly new, remembering terminal-to-host applications or the X Window System, but those configurations also had their advantages, and today's virtualization techniques make this approach even more attractive. The benefits are many, and not only to the energy balance. Software deployment, for example, can become a mess if the desktop machine contains many historically grown software stacks. If we do not want to bother users by running updates during the day, machines can run overnight. An erroneously switched-off machine can make a whole deployment fail. Central machines reduce the risk and cost.

The three strategies for using virtualization in desktop consolidation are:
- Shared services: The client PC continues to run the full operating system and has access to remote service providers for special applications. Examples of this strategy are Pure Windows Terminal Service (WTS), Exceed on Demand, and Citrix Metaframe and Presentation Server.
- Virtual clients: The client PC is replaced by a thin client. The user's PC now runs as a virtual machine on a central server, while the thin client supports only appropriate protocols such as WTS, Windows, and so forth, but cannot run programs locally. The IBM virtual client solution is an example of the virtual client strategy. It is based on BladeCenter or System x servers. VMware virtualizes these physical servers into multiple virtual servers. Each of these servers can be assigned to a user (thin client). Connections between clients and servers are managed with the Leostream Virtual Desktop Connection Broker, which itself runs in a virtual machine. Provisioning of the virtual machines to the respective servers is supported with the IBM Virtualization Manager.
- Workstation blades: The client PC is replaced by a special device because advanced graphics requirements must be handled with special compression features or special hardware. There is a 1:1 assignment between the user machine and the central workstation blade. An IBM solution with workstation blades is the HC10 Blade Workstation and the CP20 Workstation Connection Device approach. The HC10 Blade Workstation serves as a replacement for desktop workstations and can take advantage of all BladeCenter features (as discussed in 3.4.2, IBM BladeCenter features on page 24), such as optimized power and cooling efficiency, or the ability of power management using AEM. On the desktop side, the CP20 Workstation Connection Device is the user interface. This device has no moving parts and serves as the remote KVM device for the HC10. It provides connections for a keyboard, video display, and mouse (KVM) and optionally other USB devices. To support advanced graphics applications, the video output from the HC10 is compressed before it is sent to the CP20 device. Connections and assignments between the CP20 and HC10 devices are maintained by the connection management software, also known as the connection broker. The connection broker assigns the CP20 to its corresponding HC10 using different modes:
  Fixed seating assigns a distinct CP20 to a specific HC10.
  Free seating assigns a distinct user to a specific HC10.
  Pooling, as a subset of free seating, assigns users to a pool of HC10s to which they are connected.
In contrast to the virtual client solution, the HC10 and CP20 solution aims at environments that use advanced graphics applications, such as CAD, geographic information systems, or trading floor systems. In these environments, the workstation is used for higher computing and graphics requirements than the typical office PC.
Note: When installing blade systems, pay attention to additional cooling requirements (hot spots) that are generated when multiple blades are placed in the same area.
Having this entry point into the Tivoli environment enables you to employ all the well-known features of IBM Tivoli Monitoring and the other tools with which it interacts. You can also add the performance aspect discussed in 3.2, Working with energy efficiency on page 21. Optimizing for power and performance might include the following scenarios:
- Reprovisioning a server to another rack in a cooler area of your data center, based on the machine's environmental temperature or the overall rack power consumption. On a temperature alert in ITM, you would trigger the reprovisioning in IBM Tivoli Provisioning Manager.
- Power capping a single server that has a temperature problem, perhaps because of an obstructed airflow, until the problem is solved on site.
- Feeding power, temperature, and CPU usage data into the IBM Tivoli Monitoring Warehouse. Using IBM Tivoli Usage and Accounting Manager, this data can be correlated with accounting data so that IT users are charged according to their CPU usage and the correlated power usage (see the sketch that follows).
The opportunities are many after the AEM data is available to the Tivoli environment. As energy management begins to play an important role, additional integration products from Tivoli are evolving. Because of the flexible nature of the Tivoli toolset, setup might be complex; IBM services can help you find the best-fit solution.
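As an illustration of the chargeback scenario above, the following sketch is hypothetical code, not part of any Tivoli product; the sample usage figures, the metered energy value, and the proportional cost-allocation rule are assumptions. It apportions a rack's measured energy to its users according to their share of CPU usage and prices it at a flat tariff.

# Hypothetical chargeback sketch: split measured rack energy across users
# in proportion to their CPU usage, then price it at a flat tariff.
RACK_ENERGY_KWH = 1200.0          # assumed metered energy for the period
TARIFF_PER_KWH = 0.10             # assumed price in dollars per kWh

cpu_hours_by_user = {             # assumed usage-accounting data
    "finance": 400.0,
    "web": 250.0,
    "batch": 350.0,
}

total_cpu_hours = sum(cpu_hours_by_user.values())
for user, cpu_hours in cpu_hours_by_user.items():
    share = cpu_hours / total_cpu_hours
    energy_kwh = share * RACK_ENERGY_KWH
    cost = energy_kwh * TARIFF_PER_KWH
    print(f"{user}: {energy_kwh:.0f} kWh, ${cost:.2f}")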
Chapter 4. Site and facilities
Companies such as APC and Emerson Network Power have worked as business partners with IBM to devise cooling solutions for energy efficient data centers.
room layout, and localized cooling all can increase energy efficiency with relatively low up-front investment.
Note: Thanks to Dr. Roger Schmidt. Parts of this information came from his presentation, CIO's guide to The green data center, which is located at:
http://www-07.ibm.com/systems/includes/content/optimiseit/pdf/CIO_Guide_to_Green_Data_Center.pdf
Figure 4-2 shows pillows used to seal the cable cutout. This solution dramatically reduces cool air escaping into unnecessary areas.
Clear under-floor obstructions. Excessive under-floor obstructions can lead to an increase in static pressure, and high static pressure can adversely affect the airflow under and above the raised floor. Remove under-floor obstructions such as:
- Unused cables and wiring
- Unused under-floor equipment or communication boxes
Figure 4-3 on page 42 shows excessive cables obstructing airflow.
Note: It is not always practical to move existing equipment; however, great efficiencies can be gained if doing so is possible. Figure 4-6 shows a hot and cold aisle configuration, as recommended by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).
Figure 4-7 shows the thermal flow benefits achieved by hot and cold aisles. The hotter the air returning to the CRAC unit, the greater the heat transfer that is achieved, which increases the efficiency of the CRAC unit.
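The reason a hotter return improves CRAC effectiveness follows from the sensible heat relationship Q = mass flow x specific heat x (return temperature - supply temperature). The short sketch below is an illustration only; the airflow and temperature values are assumed, not figures from this paper. It shows how the heat removed at a fixed airflow grows with the return-to-supply temperature difference.

# Sensible heat removed by a CRAC: Q = m_dot * cp * (T_return - T_supply)
# Values below are illustrative assumptions, not measurements from the paper.
AIR_DENSITY = 1.2          # kg/m^3, approximate for room-temperature air
CP_AIR = 1.005             # kJ/(kg*K), specific heat of air

def heat_removed_kw(airflow_m3_per_s: float, t_return_c: float, t_supply_c: float) -> float:
    """Heat removed (kW) for a given airflow and return/supply temperatures."""
    mass_flow = airflow_m3_per_s * AIR_DENSITY             # kg/s
    return mass_flow * CP_AIR * (t_return_c - t_supply_c)  # kJ/s == kW

# Same airflow, same supply temperature: a hotter return removes more heat.
print(heat_removed_kw(5.0, t_return_c=28.0, t_supply_c=18.0))  # ~60 kW
print(heat_removed_kw(5.0, t_return_c=35.0, t_supply_c=18.0))  # ~103 kW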
If flexibility exists, locate CRAC units facing the hot aisle rather than the cold aisles. With this layout, the under-floor velocity pressure is maximized in the cold aisles, compared with locating the CRAC at the end of the cold aisle. Strategically placed ceiling partitions can help prevent recirculation of the hot exhaust air into the inlets of the servers.
Important: Be aware of the local fire codes before placing these partitions.
Figure 4-9 on page 46 shows simple, effective options to create corridors for hot and cold air.
Figure 4-11 IBM Cool Blue rear door heat exchanger comparison
- Reciprocating compressors are the least efficient. They usually run at three stages: 33%, 66%, and 100%. When the load requirement is 25%, the additional 8% is produced and then discarded.
- Centrifugal compressors are more efficient because they have fewer moving parts.
- Screw-driven compressors are the most efficient. They are able to stage to the exact load that is required, from as little as 10% to 100%.
Note: Chillers that continually operate above 75% lose efficiency. So, if you are able to reduce the heat load within the data center, the chiller load benefits as well. For every watt you save in the data center, you will save 1.25 watts at the chiller.
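A quick calculation makes that chiller knock-on effect concrete. The sketch below applies the 1.25 watts-per-watt figure quoted in the note above; the 10 kW input is an illustrative value, not a figure from this paper.

# Knock-on savings at the chiller: the paper quotes 1.25 W saved at the
# chiller for every 1 W removed from the data center IT load.
CHILLER_FACTOR = 1.25

def savings_with_chiller_kw(it_load_reduction_kw: float) -> float:
    """Power saved: the IT reduction plus the induced chiller saving."""
    return it_load_reduction_kw * (1 + CHILLER_FACTOR)

# Illustrative example: removing 10 kW of IT load saves 22.5 kW in total.
print(savings_with_chiller_kw(10.0))  # 22.5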
A new technology chiller system can improve efficiency by up to 50%. New chiller plants also can be installed with variable-speed drives, reducing pumping system energy usage and allowing better integration of the liquid cooling system into the chilled water infrastructure. Water-side economizers, which use outside air to directly cool the chilled water, can further reduce the energy required to run the chillers.
Air-side economizers
Air-side economizers can be used as a free cooling unit. Depending on your location, however, these units work best where there is a constant supply of cool, clean air; consistency can also be maintained by running the units overnight. The outside air economizer draws outside air directly for use by the data center.
Attention: Be aware of the following items with air-side economizers:
- Temperature and humidity control is pivotal.
- Gaseous contamination can enter your data center and destroy equipment.
- Particle contamination can enter your data center.
Water-side economizers
Water-side economizers use cool outdoor air to generate chilled condenser water that is used to partially or fully meet the cooling demands of the facility. When the outside air is cool enough, the water-side economizer takes over part or all of the chiller load. This can result in a number of free cooling hours per day.
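To estimate how many free cooling hours a site might see, a simple hedged sketch follows. The hourly temperatures and the changeover threshold are invented illustrative values; a real study would use local climate data and the plant's actual changeover temperature. The sketch counts the hours in a day when outdoor air is cool enough for the economizer to carry the chiller load.

# Rough free-cooling estimate: count hours when the outdoor temperature is at
# or below an assumed economizer changeover threshold.
CHANGEOVER_C = 10.0  # assumed threshold; real plants set this from their design

# Illustrative 24 hourly outdoor temperatures (degrees Celsius), not real data.
hourly_temps_c = [6, 5, 5, 4, 4, 5, 6, 8, 10, 12, 14, 15,
                  16, 16, 15, 14, 12, 11, 10, 9, 8, 7, 7, 6]

free_cooling_hours = sum(1 for t in hourly_temps_c if t <= CHANGEOVER_C)
print(f"Estimated free cooling hours today: {free_cooling_hours} of 24")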
The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) has a Technical Committee 9.9 (TC 9.9). The members of this committee are currently evaluating moving the upper and lower temperature set points for data centers beyond the previous recommendations. Moving these set points, if feasible, would have multiple benefits:
- Allow better utilization of outside air.
- Reduce the direct load placed on the chillers by increasing the chilled water temperature.
IBM has increased the temperature in its own data centers from 20 degrees Celsius to 22 degrees Celsius (68 degrees Fahrenheit to 72 degrees Fahrenheit).
The main building blocks of a UPS include the rectifier, which converts incoming AC power to DC, and the power store, typically a battery.
Each UPS type caters to different applications. Understanding your equipment is important because one size does not fit all. The following types are the most common UPS units:
- Standby UPS: This type is commonly used for personal computers or single servers. The inverter starts only when the power fails. This design is very efficient and cheap.
- Line-interactive UPS: This type is commonly used for 5 to 15 servers. When the power fails, power flows from the battery through the UPS to the servers. The inverter is always connected to the output, which provides filtering compared with the standby UPS.
- Double-conversion UPS: This type is used for larger data centers. It recreates the AC power by using a rectifier to change from AC to DC and then an inverter to change back from DC to AC. This provides the best quality power and protection and is usually the most expensive option.
Although UPS units are typically rated in kilovolt-amperes (kVA), for example 330 kVA, the usable power is measured in watts. The usable output of the UPS unit depends on the power factor (PF) of the unit, as shown in the following examples:
- A 330 kVA UPS unit operating at a PF of 0.8 provides 264 kW of usable power.
- A 330 kVA UPS unit operating at a PF of 0.9 provides 297 kW of usable power.
As you can see, the unit with the higher PF rating provides more usable power.
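The relationship used in those examples is simply usable kW = kVA x PF. A minimal sketch follows; the helper function is illustrative, not from this paper, and it reproduces the two figures quoted above.

def usable_kw(rating_kva: float, power_factor: float) -> float:
    """Usable real power (kW) delivered by a UPS rated in kVA at a given PF."""
    return rating_kva * power_factor

# The two examples from the text:
print(usable_kw(330, 0.8))  # 264.0 kW
print(usable_kw(330, 0.9))  # 297.0 kW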
4.7 Power
This section discusses terms to understand when planning for more efficient power.
4.7.5 DC versus AC
Power supplies are now offered in a choice of direct current (DC) or alternating current (AC). As AC power supplies have been developed and the technology has moved to switch-mode designs, the efficiency gains in this area have been astounding. The result is a marginal difference between DC and AC, with DC ahead by only 5% to 7%. AC parts are widely available and cost considerably less than DC parts. Consider the following questions:
- Should we change our infrastructure for this improvement?
- Will AC computer power supplies continue to improve?
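To weigh the first question, it can help to translate the 5% to 7% efficiency gap into annual energy. The following sketch is a hypothetical back-of-the-envelope calculation; the 100 kW IT load, the specific efficiency values, and the electricity price are assumed values, not figures from this paper.

# Back-of-the-envelope: annual energy and cost difference implied by a small
# power-supply efficiency gap. All inputs below are illustrative assumptions.
IT_LOAD_KW = 100.0        # assumed continuous IT load served by the supplies
EFFICIENCY_AC = 0.90      # assumed AC supply efficiency
EFFICIENCY_DC = 0.95      # assumed DC supply efficiency (about 5% better)
PRICE_PER_KWH = 0.10      # assumed electricity price in dollars
HOURS_PER_YEAR = 8760

input_ac_kw = IT_LOAD_KW / EFFICIENCY_AC
input_dc_kw = IT_LOAD_KW / EFFICIENCY_DC
delta_kwh = (input_ac_kw - input_dc_kw) * HOURS_PER_YEAR
print(f"Annual difference: {delta_kwh:,.0f} kWh, about ${delta_kwh * PRICE_PER_KWH:,.0f}")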
4.8 Generators
The last infrastructure component that we discuss is generators.
center will not operate, and overheating of the data center is a certainty. Highly available power is a key to achieving a high-availability data center. A standby generator supplies power when the utility power supply is not available. Both the standby generator and the utility power supply are connected to an automatic transfer switch (ATS). When the utility power supply is not available, the generator starts automatically, taking approximately 40 seconds to assume the load. During this time, the UPS supports the data center load; without the UPS, the data facility loses power and the IT systems power down. Standby generators typically run on diesel fuel or natural gas and have a life span of 15 to 20 years. In areas that have a very good utility power supply, these units might not get much run time, so it is important to maintain and test them. When sizing these units, be sure to account for all the infrastructure that maintains the data center, including chillers, pumps, CRACs, UPS, AHUs, and other site infrastructure, as in the sizing sketch that follows.
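A minimal sizing sketch follows, assuming hypothetical load figures and a design margin; none of these numbers come from this paper. It simply totals the IT and facility loads the generator must carry and applies a headroom factor.

# Hypothetical generator sizing: sum every load the generator must carry,
# then add design headroom. All figures are illustrative assumptions.
loads_kw = {
    "IT load (via UPS)": 400.0,
    "Chillers and pumps": 250.0,
    "CRAC units and AHUs": 120.0,
    "UPS losses": 30.0,
    "Lighting and other site loads": 40.0,
}
HEADROOM = 1.25  # assumed design margin for starting surges and future growth

total_kw = sum(loads_kw.values())
print(f"Connected load: {total_kw:.0f} kW")
print(f"Suggested generator rating: {total_kw * HEADROOM:.0f} kW")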
ASHRAE is located on the Web at the following location:
http://www.ashrae.com/
The Uptime Institute offers information about, but is not limited to, the following areas:
- Data center availability and reliability
- Operating criteria for adding energy efficiency to a data center
The Uptime Institute is located on the Web at the following location:
http://www.uptimeinstitute.org/
4.10 Summary
You might believe you have outgrown your data center, but you probably have not. The inefficiencies of older design principles and technologies are the main things holding you back from improving the data center. As you implement best practice initiatives and upgrade your IT equipment, you will recover additional capacity for space, power, and cooling from your existing infrastructure.
Advances in IT systems technology are delivering more computing power for your dollar, but they are also stressing your power and cooling infrastructures and, in some cases, your local utility grid. With virtualization offering increased performance per watt of power and advanced thermal diagnostics delivering pinpoint control for your cooling infrastructure, you can regain control of your energy and cooling requirements.
The introduction of best practice initiatives to improve data center cooling can start overnight, with minimal up-front costs. The introduction of new equipment, however, does not happen overnight, and server migration adds a short-term increase to the data center power and cooling demands. So, implement the new and remove the old, and the benefits will start to show. As the power and cooling demands in the data center fall, the cascading effect on the site and facility infrastructure takes place immediately. As mentioned previously, if you remove 1 kW of power load from the data center, you remove an additional 1.35 kW of load from the site and facility infrastructure, for a total savings of 2.35 kW (see the sketch after this summary).
Planning for site and facility infrastructure upgrades takes time, sometimes 12 to 24 months to implement. So when designing a solution, do not simply look at what is on the market today; ask the vendors for their roadmaps, and then plan for the best, newest technology available.
In summary, review the following suggestions:
- Assess your environment, including the data center and facility.
- Improve cooling efficiencies by implementing best practice initiatives.
- Use less space through consolidation.
- Consider consolidating multiple data centers.
- Decrease the number of servers to reduce the heat load.
- Optimize your data center in conjunction with the site and facility infrastructure.
- Work towards energy efficiency for CO2 reductions.
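The 2.35 kW figure is simply the IT reduction plus the 1.35 facility factor quoted above. The sketch below applies that factor to a hypothetical decommissioning scenario; the server count and per-server wattage are invented illustrative values, not figures from this paper.

# Cascading facility savings: the paper states that removing 1 kW of IT load
# also removes 1.35 kW of site and facility load (2.35 kW total).
FACILITY_FACTOR = 1.35

def total_savings_kw(it_reduction_kw: float) -> float:
    """IT load removed plus the induced site and facility reduction."""
    return it_reduction_kw * (1 + FACILITY_FACTOR)

# Illustrative scenario: decommissioning 20 servers drawing 400 W each.
it_reduction_kw = 20 * 0.400
print(f"IT reduction: {it_reduction_kw:.1f} kW")
print(f"Total reduction including facility: {total_savings_kw(it_reduction_kw):.2f} kW")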
Chapter 5.
Diagnose
IBM can help assess the energy efficiency of your data center by using power management analysis and thermal analytics. The options of server and storage virtualization and consolidation are also studied. A fact-based business case is provided, and major opportunities for improvement can be detected, with reductions in energy costs of up to 40%. Actions at the thermal level can help eliminate hot spots (regions of high power density) and undesired intermixing of hot and cold air. Payback on the investment can be achieved in as little as two years, with the cost of the assessment covered in the first year.
For example, a major U.S. utility provider needed to support IT growth with its existing 400 square meter (4400 square foot) data center as well as demonstrate to its customer base that the company is a leader in energy efficiency. It asked IBM to conduct a comprehensive, fact-based analysis of its IT infrastructure. The IBM analysis evaluated cooling system components, electrical systems, and other building systems. It was then possible to provide a baseline metric for data center energy efficiency and deliver a roadmap of cost-justified recommendations. IBM established that if the company spent U.S. $18 000 once, it could save U.S. $23 000 annually by reducing its energy consumption by 46%, providing a payback in less than two years. It could go further and realize U.S. $100 000 in annual energy savings with upgrades to its UPS systems.
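The payback arithmetic in that example is straightforward. This small sketch is a hypothetical helper, reusing only the figures quoted above, and computes the simple payback period from a one-time cost and an annual saving.

def simple_payback_years(one_time_cost: float, annual_savings: float) -> float:
    """Simple payback period: years until cumulative savings cover the cost."""
    return one_time_cost / annual_savings

# Figures quoted in the utility-provider example above; the result is well
# inside the "less than two years" payback cited in the text.
print(f"{simple_payback_years(18_000, 23_000):.2f} years")  # about 0.78 years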
Build
IBM can provide expertise to customers, based on the experience of building 3 million square meters of data centers for clients worldwide. Planning, building, or upgrading a data center provides the perfect opportunity to rationalize the data center strategy as a way for the customer to gain substantial savings on capital, operations, and energy efficiency.
For example, Bryant University in Rhode Island needed to reduce costs and grow the capacity of its IT infrastructure to meet rising student enrollments and student expectations for IT services. The University's previous decentralized IT infrastructure was costly, inefficient, and increasingly unable to scale up to meet these growing demands. The University worked with IBM to consolidate and upgrade its IT operations. IBM helped Bryant design and build a centralized, 500 square-foot data center. The project consolidated 75 servers in three data centers down to three IBM BladeCenter platforms holding a total of 40 IBM System x and IBM System p servers. The new data center was implemented in half the total space required by the previous three data centers. This smaller footprint, coupled with energy efficient components, significantly reduced the university's energy costs, contributing to a 40% reduction in overhead costs. Table 5-2 shows examples of Build category offerings.
Table 5-2 Examples of Build category offerings from IBM
- IT facilities assessment, design and construction services: Helps you create stable, security-rich, energy efficient, future-ready data centers and enterprise command-center facilities. The end-to-end services include a review of your existing data center's reliability, points of failure, growth, floor space, and power and cooling needs.
- Server optimization and integration services: Helps you create a cost-effective, scalable, flexible, and resilient server infrastructure to support business applications using industry-leading practices based on IBM experience and intellectual capital.
See the Build section in The green data center: cutting energy costs to gain competitive advantage, April 2008, at:
http://www-935.ibm.com/services/us/cio/outsourcing/gdc-wp-gtw03020-usen-00-041508.pdf
See also The art of the possible: Rapidly deploying cost-effective, energy-efficient data centers, February 2008, at:
http://www.ibm.com/services/us/its/pdf/smdc-eb-sfe03001-usen-00-022708.pdf
Table 5-2 (continued): Additional Build category offerings from IBM include services that:
- Help you reduce complexity, optimize performance, and manage growth by creating cost-effective, highly utilized, scalable, and resilient storage infrastructures, and help maximize energy efficiency through consolidation, virtualization, and storage optimization, leading to a greener infrastructure.
- As a suite of technologies and services, enable the movement of stored data (typically included in energy assessment recommendations) in a nondisruptive manner, regardless of server platform or storage array vendor.
- Through specialized facilities services, help enable the integration of all building subsystems so that they can operate in a safe, efficient, and ecologically friendly environment. The system manages the air-conditioning, fire, and security systems of the building and can lead to significant energy savings.
Cool
One example of deploying cooling technology from IBM is at the Georgia Institute of Technology's Center for the Study of Systems Biology. The Center required supercomputing capabilities for protein structure simulations and other techniques supporting research of new drugs. Its supercomputer demanded the highest possible computational performance while generating significant heat output from its ultradense blade servers. By implementing a mix of advanced IBM cooling technologies, including an innovative rear door heat exchanger, the university was able to maintain computing performance while reducing air-conditioning requirements by 55%. The resulting energy savings helped cut operational costs by 10-15% and helped save an estimated U.S. $780 000 in data center costs. Table 5-3 shows examples of Cool category offerings.
Table 5-3 Examples of Cool category offerings from IBM
- Installation of Cool Blue Rear Door Heat eXchanger (RDHX): Provides a simple, cost-effective, energy efficient solution to solve hot spot problems within the data center. The overall approach is to provide and oversee a simple step-by-step process for implementation of the RDHX units.
- Data center infrastructure energy efficiency optimization: Allows clients to work one-on-one with IBM Data Center Services Power/Thermal Development Engineers to formulate a balanced plan to improve efficiency, reduce total cost of ownership, and maximize the aggregate IT equipment supported by an existing data center infrastructure.
- Server and storage power/cooling trends and data center best practices: Helps you understand the current and future power, cooling, and I/O demands that IT equipment places on your existing or planned data center infrastructure.
Note: IBM has four types of services to address facilities problems you might have:
- Data Center and Facilities Strategy Services
- IT Facilities Consolidation and Relocation Services
- IT Facilities Assessment, Design and Construction Services
- Specialized Facilities Services
See the Cool section in The green data center: cutting energy costs to gain competitive advantage, April 2008, at:
http://www-935.ibm.com/services/us/cio/outsourcing/gdc-wp-gtw03020-usen-00-041508.pdf
See also Georgia Tech implements a cool solution for green HPC with IBM, October 2007.
An example of that comes from the University of Pittsburgh Medical Center (UPMC), which is seeking to become a truly integrated, self-regulating health care system, using evidence-based medicine to produce superb clinical outcomes and lower costs. To support this goal, UPMC has been undergoing an IT service transformation program with help from IBM. Health Industry Insights, an IDC Company, has been tracking the project and reports that "UPMC is facing multiple challenges that translate into doing more with less." IDC noted such challenges as pressures to improve customer service, patient safety, and service quality while reducing care delivery costs. "In order to deliver highly integrated, efficient care in the face of this rapid growth and industry pressures, UPMC recognized that enterprise-wide IT systems, data integration, and platform standardization were crucial for its quality and business integration goals and to achieve the economies of scale expected to accrue from these acquisitions," IDC said. UPMC worked with IBM to virtualize its Wintel and UNIX systems and consolidate 1,000 physical servers down to 300 IBM servers. Storage also was reduced from 40 databases down to two centralized storage area network (SAN) arrays. IDC reports, "Our initial estimate that UPMC would avoid almost $20 million in server costs has grown to $30 million and is likely to exceed $40 million by the conclusion of the project in 2008." IBM services can help with assessments, methods, technology, and knowledge.
See the Virtualize and simplify section in The green data center: cutting energy costs to gain competitive advantage, April 2008, at:
http://www-935.ibm.com/services/us/cio/outsourcing/gdc-wp-gtw03020-usen-00-041508.pdf
Health Industry Insights, an IDC Company, Virtualization: Healthcare's Cure for the Common Cost, Part 2, Doc #HI209705, December 2007, at:
http://www-03.ibm.com/industries/healthcare/doc/content/resource/insight/3721514105.html
J. Nicholas Hoover, Data Center Best Practices, InformationWeek, March 3, 2008:
http://www.informationweek.com/management/showArticle.jhtml?articleID=206900660&pgno=1&queryText=
comply with applicable environmental regulations. Also, the customer will want a single-source provider instead of a patchwork of different services. IBM Global Financing (IGF) can meet these challenges and more. Asset Recovery Solutions, an offering from our Global Asset Recovery Services division, offers a suite of highly competitive solutions for disposing of IT equipment and IT-related equipment from IBM and other sources. The equipment can include hardware, fax machines, and printers. IBM methods provide for safe and proper processing of used equipment. IBM has worldwide product remarketing and logistics capabilities, complemented by state-of-the-art systems, and brings these best-of-breed capabilities to the customer, with the reach, resources, and knowledge to serve its customers anywhere in the country, providing real peace of mind. Figure 5-1 provides an overview of Asset Recovery Solutions.
To give you an idea of the magnitude of what can be achieved, review the following accomplishments from IBM:
- Processes an estimated 40 000 machines per week across 22 centers around the world
- Reused or resold almost 87% of the assets returned to its Remanufacturing Centers
- Processed over 108 million pounds of end-of-life products and product waste in 2006 and sent only 0.78% of that to landfill
- During a three-year period, took in and reused more than 1.9 million machines; generated billions in revenue; and processed over 147 000 metric tons of equipment, parts, and waste from materials and products, such as:
  - Steel: Over three times the amount used in the Eiffel Tower
  - Plastic: Over 22 railroad cars of condensed plastic
  - Paper: Enough bales to span the Golden Gate Bridge 23 times
  - Non-ferrous material: Equivalent to 150 18-wheeler trucks
For more information, see the following Web site:
http://ibm.com/financing/us/recovery
GE supplied the lighting, using energy-efficient compact fluorescent bulbs. See the company's Web pages about Ecomagination, containing products and practices that are both economical and ecological. IBM has joined forces with Neuwing Energy Ventures to enable organizations to receive Energy Efficiency Certificates (EEC) for projects that reduce energy use. ADC has an online brochure on green data center solutions.
Other organizations and concepts to watch include:
- Leadership in Energy and Environmental Design (LEED) is a voluntary, consensus-based national rating system for developing high-performance, sustainable buildings. Developed by the U.S. Green Building Council (USGBC), LEED addresses all building types and emphasizes state-of-the-art strategies for sustainable site development, water savings, energy efficiency, materials and resources selection, and indoor environmental quality. LEED is a practical rating tool for green building design and construction that provides immediate and measurable results for building owners and occupants. The Web site is:
http://www.usgbc.org/
- Energy Star is a joint program of the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE). Its purpose is to encourage efficient use of energy, thereby protecting our environment and saving us money. With the help of Energy Star, Americans have saved considerable amounts of energy, reducing greenhouse gas emissions and saving significantly on their energy bills. For business, Energy Star offers help with energy management, which can produce considerable savings for a company's bottom line and for the environment. Partnering with Energy Star helps a company set goals, track savings, and reward improvements. Top performing buildings can earn the ENERGY STAR, a mark of excellence for energy efficiency that is recognized nationally.
The following sources provide more information:
- GE Ecomagination: http://www.ge.com/innovation/eco/index.html
- For more information regarding EECs in data centers, contact David F. Anderson PE at dfa@us.ibm.com
- ADC green data center solutions brochure: http://www.adc.com/Library/Literature/106057AE.pdf
- Energy Star: http://www.energystar.gov/
Chapter 6.
This paper has highlighted green concepts, technologies, products, and services that can assist sustainability as data centers evolve by simplifying, sharing, and dynamically managing and using IT. Going green can enable organizational and competitive advantages. Many solutions and actions are available to assist data centers in their endeavors to become more green. By working simultaneously at the site, facilities, and IT equipment levels, you can achieve new green ways to run your data centers. Figure 6-1 illustrates the vision of the future from IBM, the New Enterprise Data Center. This vision supports being green by design, with energy efficiency as a strength.
Figure 6-1 The New Enterprise Data Center: An evolutionary new model for efficient IT delivery. The figure depicts green goals feeding the New Enterprise Data Center (simplified, shared, dynamic), built on an enterprise information architecture, security and business resilience, business-driven service management, highly virtualized resources, and an efficient and optimized IT infrastructure and facilities, with consolidation, virtualization, energy management, optimization, and cloud computing as enabling elements.
Going green is a collaborative experience. IBM welcomes ideas and suggestions on this topic. For information about how to contact IBM, see Comments welcome on page x. Becoming green is part of the journey towards a better future. Each step in the process brings more benefits:
- Develop a green strategy.
- Enable IT equipment to become more energy efficient with green technologies such as virtualization.
- Optimize data center power and cooling.
- Manage for sustainability using new measuring and management tools.
- Use IBM Services and partners to boost your green capabilities.
Keep in mind that a green data center is not a finite project with a destination at the end. Rather, a green data center is an evolving program and should be considered a journey that provides dividends of lower costs, improved sustainability, and a better public image.
Appendix A. Commitment to green from IBM: The past, present, and future
Related publications
We consider the publications listed in this section particularly suitable for a more detailed discussion of the topics covered in this paper.
Online resources
These Web sites are also relevant as further information sources:
- APC and APC MGE
  http://www.apc.com/
  http://www.apc-mge.com/
- American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE)
  http://www.ashrae.com/
- Anixter and Anixter cabling solution case studies
  http://www.anixter.com
  http://www.anixter.com/AXECOM/US.NSF/ProductsTechnology/SolutionsCaseStudiesOverview
- Assessment questionnaire from IBM
  http://www.ibm.com/systems/optimizeit/cost_efficiency/energy_efficiency/services.html
- Carbon Footprint, to determine the amount of carbon dioxide emitted
  http://www.carbonfootprint.com/
- Centrinet launches UK's first operational zero carbon data center with help from IBM
  http://www.ibm.com/software/success/cssdb.nsf/CS/JGIL-7BRTQ3?OpenDocument&Site=default&cty=en_us/
- Creating the Green Data Center, Simple Measures to Reduce Energy Consumption
  http://www.anixter.com/./AXECOM/AXEDocLib.nsf/(UnID)/9D5FF4F709EB68E9862573D7007788D2/$file/Energy_Awareness.pdf
- Data Center Best Practices, J. Nicholas Hoover, InformationWeek
  http://www.informationweek.com
- Eaton
  http://www.eaton.com/EatonCom/ProductsServices/index.htm
- Emerson Network Power
  http://www.liebert.com/
- Energy Star
  http://www.energystar.gov/
- Energy tips for small and medium businesses
  http://www-304.ibm.com/jct03004c/businesscenter/smb/us/en/contenttemplate/!!/gcl_xmlid=80670/
- Environmental Protection Agency
  http://www.epa.gov/climatechange/index.html
- GE lighting and Ecomagination
  http://www.ge.com/products_services/lighting.html
  http://www.ge.com/innovation/eco/index.html
- Georgia Tech implements a cool solution for green HPC with IBM; case study
  http://www-01.ibm.com/software/success/cssdb.nsf/cs/STRD-788DC6?OpenDocument&Site=gicss67educ&cty=en_us%20for%20green%20HPC%20with%20IBM
- Green Grid
  http://www.greengrid.com/
- How going green can improve day-to-day operations
  http://www.ibm.com/industries/government/doc/content/landing/3727822109.html?g_type=pspo
- IBM Asset Recovery Solutions
  http://ibm.com/financing/us/recovery
- IBM Global Financing
  http://www.ibm.com/financing
- IBM green data centers
  http://www.ibm.com/systems/optimizeit/cost_efficiency/energy_efficiency/
- IT and the environment: A new item on the CIO agenda?
  http://www-05.ibm.com/no/ibm/environment/pdf/grennit_oktober2007.pdf
- Kalbe Collaborates With IBM to Build Green Data Center and Reduce Energy Consumption
  http://www.ibm.com/press/us/en/pressrelease/23467.wss
- New Enterprise Data Center (NEDC)
  http://www.ibm.com/systems/optimizeit/datacenter
- SearchDataCenter.com from Uptime Institute, Inc.
  http://searchdatacenter.techtarget.com/?int=off&Offer=DCregtui208
- Service offerings from IBM
  http://www.ibm.com/services/us/index.wss/allservices/
- Site and facilities, servers, and storage
  http://www-935.ibm.com/services/us/index.wss
- The IBM System x and BladeCenter Power Configurator
  http://www.ibm.com/systems/bladecenter/resources/powerconfig/
- The art of the possible: Rapidly deploying cost-effective, energy-efficient data centers, February 2008
  http://www.ibm.com/services/us/its/pdf/smdc-eb-sfe03001-usen-00-022708.pdf
- The green data center: cutting energy costs for a powerful competitive advantage
  http://www-935.ibm.com/services/us/cio/outsourcing/gdc-wp-gtw03020-usen-00-041508.pdf
- U.S. Green Building Council (USGBC)
  http://www.usgbc.org/
- Uptime Institute
  http://www.uptimeinstitute.org/
- VMware VMotion technology
  http://www.vmware.com/products/vi/vc/vmotion.html
- Virtualization: Healthcare's Cure for the Common Cost? Part 2
  http://www-03.ibm.com/industries/healthcare/doc/content/resource/insight/3721514105.html
- White paper: The green data center
  http://www.ibm.com/industries/education/doc/content/resource/thought/2794854110.html
Other publications
These publications are also relevant as further information sources:
- Esty, Daniel, and Winston, Andrew S. Green to Gold: How Smart Companies Use Environmental Strategy to Innovate, Create Value, and Build Competitive Advantage. Yale University Press, 2006. ISBN-13: 9780300119978
- Hoover, J. Nicholas. Data Center Best Practices. InformationWeek, March 3, 2008
- Koomey, Jonathan. Estimating total power consumption by servers in the U.S. and the world. Oakland, CA: Analytics Press, 2007
ASHRAE publications
The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE) has a committee named Technical Committee 9.9 (TC 9.9) that is concerned with mission-critical facilities, technology spaces, and electronic equipment. TC 9.9 has a number of books available to help with data center energy efficiency and best practices.