Don't Let Busway Be The Weak Link in Your DC Electrical Safety Chain
An open channel busway allows free tap box placement, and integral coupling improves
safety. (Image: Anord Mardix)
Andy Banks, Director of Databar Sales (North America) for Anord Mardix, explores in
detail how the busway can efficiently and effectively distribute power throughout a data
center environment.
Andy Banks, Director of Databar Sales (North America) for Anord Mardix
As the conduit for high-density, flexible power distribution for many data center
applications, the busway needs to be a key element of any data center’s workplace
safety considerations. Electrical conductors and bus bars are covered in the
Occupational Safety and Health Administration (OSHA) standards at 29 CFR, including
Parts 1910.303, 1910.308, 1926.403 and 1926.408.
Busway has been defined by the National Electrical Manufacturers Association (NEMA)
as “a prefabricated electrical distribution system consisting of bus bars in a protective
enclosure, including straight lengths, fittings, devices, and accessories. Busway
includes bus bars, an insulating and/or support material, and a housing.”
Figure 1. A typical overhead busway. (Photo: Anord Mardix)
A typical joint pack for a traditional enclosed busway comprises:
2 bus connectors
2 housing couplers
24 screws
In addition, a specialist tool is often required to install these components onto the
busway. Installation time for each joint pack is typically up to five minutes, depending
on the busway manufacturer. In a data hall with 40’ x 40’ busway runs, that can equate
to two days of installation time. Mislay or mis-order any of those joint pack components
and much larger installation delays may result.
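The rough arithmetic behind a multi-day estimate can be sketched as below. The section length, run count, and five-minute joint time are illustrative assumptions, not figures from any specific manufacturer:

```python
SECTION_LENGTH_FT = 10   # assumed straight-section length
MINUTES_PER_JOINT = 5    # assumed upper-bound install time per joint pack

def joint_install_time(run_length_ft: float, runs: int) -> float:
    """Total joint-pack installation time in hours for a hall of equal runs."""
    joints_per_run = int(run_length_ft / SECTION_LENGTH_FT) - 1  # one joint between adjacent sections
    return joints_per_run * runs * MINUTES_PER_JOINT / 60

# A hypothetical hall with 48 forty-foot runs
print(f"{joint_install_time(40, 48):.1f} hours of joint-pack work")  # -> 12.0 hours of joint-pack work
```

At roughly a working day and a half of pure joint assembly before any mis-shipped parts are accounted for, the estimate in the text is easy to reproduce under similar assumptions.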
Furthermore, given the critical nature of this busway coupler, annual thermography
should be performed on each joint pack area to check for loose connections. Overall,
this traditional installation method is very labor intensive, time-consuming, and poses an
ongoing risk if there are any loose busbar connections.
Figure 2. A typical coupler for a traditional enclosed busway. (Anord Mardix)
In theory, open channel busway allows the power feed to the server rack or equipment
to be located directly above or adjacent to that equipment, for ease of identification.
However, it’s often not that simple in practice. By their very presence, the couplers form
a physical barrier that prevents plug-in units from being inserted along the busway's whole
length. In some cases, plug-in units must be placed some distance away from the coupler.
I refer to this distance along the open channel busway where plug-in units cannot be
installed as the “keep out area”, which can be significant. For example, one widely
installed busway from a well-known manufacturer has a keep out area of more than 21
inches—this means over 17% of the busway run is unusable for plug-in units.
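That percentage follows directly from the keep out area and the section length. The calculation below assumes 10-foot busway sections for illustration; actual section lengths vary by manufacturer:

```python
def keep_out_fraction(keep_out_in: float, section_length_ft: float) -> float:
    """Fraction of each busway section where plug-in units cannot be installed."""
    return keep_out_in / (section_length_ft * 12)  # 12 inches per foot

# The >21-inch keep out area cited above, against an assumed 10 ft section
print(f"{keep_out_fraction(21, 10):.1%}")  # -> 17.5%
```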
Other common systems also pose similar risks, which is why most busway
manufacturers include a warning such as: “DO NOT install plug-in units under load.
Make sure breakers are in the off position.” A caveat like this may limit a
manufacturer’s legal liability, but it is not an ideal solution for preventing electrical
accidents in a real-world data center operation.
In testing, busway with integral coupling units and their associated plug-in units has
successfully mitigated the effects of a potential arc flash, meeting the IEC/TR 61641
standard, with the plug-in unit breaker detecting and clearing faults in under 24.7 ms.
Conclusion
Busway continues to be the most efficient and effective way of distributing power
throughout a data center environment. By designing safety directly into the components
being installed on live busway, operators can minimize risks associated with busway
installation and maintenance and prevent electrical accidents.
An on-site solar energy array at the Google data center campus in Belgium. (Photo:
Google)
Google is using sophisticated new software tools to reduce the carbon impact of its
massive data center network, shifting large computing jobs to times where they can be
powered with renewable energy.
The Internet giant says this new “carbon intelligent” approach makes it easier for giant
data center operators to power servers using solar and wind energy, solving complex
problems in the availability of renewables. Google’s new approach uses a practice
called load shifting that schedules workloads to optimize their resource use.
“Shifting the timing of non-urgent compute tasks — like creating new filter features on
Google Photos, YouTube video processing, or adding new words to Google Translate
— helps reduce the electrical grid’s carbon footprint, getting us closer to 24×7 carbon-
free energy,” writes Ana Radovanovic, Google’s Technical Lead for Carbon-Intelligent
Computing, in a blog post.
Google said these changes require no additional computer hardware, and have no
impact on the performance of Google services like Search, Maps and YouTube that
have huge global audiences.
The announcement, timed to Earth Day 2020, underscores how the recent growth of
cloud computing has sharpened the focus on how data centers can retool the economy
for a sustainable future. As the COVID-19 pandemic shifts more essential activities
online, the carbon impact of the world’s IT infrastructure becomes even more critical in
addressing climate change.
Google isn’t alone in this effort. In January Microsoft announced plans to be carbon
negative by 2030, and begin tracking the climate impact of its vendors,
while Amazon and Apple have procured substantial amounts of renewable energy to
support their data centers. Multi-tenant data center developers like Switch, Digital
Realty, Aligned and Iron Mountain have also lined up green energy for their data center
clients.
“We designed and deployed this first-of-its-kind system for our hyperscale (very large)
data centers to shift the timing of many compute tasks to when low-carbon power
sources, like wind and solar, are most plentiful,” Radovanovic said.
How Google’s Carbon-Aware Computing Works
Google says the platform is now in use at every Google data center, and compares two
types of daily forecasts. One is provided by Tomorrow, a climate data firm known for
its Electricity Map, and predicts how the average hourly carbon intensity of the local
electrical grid will change over the course of a day. Google has also developed an
internal tool to predict the hourly power resources that a data center needs to carry out
its compute tasks during the same period.
The company then uses the datasets to optimize its operations on an hour-by-hour
basis, aligning compute tasks with times of low-carbon electricity supply. Here’s a
diagram of what this shift looks like in practice:
This chart shows how Google is shifting workloads throughout the day so its servers will use
more renewable energy. (Image: Google)
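As a rough illustration of the load-shifting idea, the sketch below greedily places deferrable one-hour tasks into the hours a carbon-intensity forecast ranks cleanest, subject to a per-hour capacity limit. This is a simplified toy model, not Google's actual scheduler:

```python
def schedule_deferrable(carbon_forecast, num_tasks, capacity_per_hour=2):
    """Greedily place single-hour deferrable tasks into the lowest-carbon hours.

    carbon_forecast: predicted grid carbon intensity (gCO2/kWh) per hour.
    Returns a dict mapping hour -> number of tasks scheduled in that hour.
    """
    # Rank hours from cleanest (lowest intensity) to dirtiest
    ranked = sorted(range(len(carbon_forecast)), key=lambda h: carbon_forecast[h])
    schedule = {h: 0 for h in range(len(carbon_forecast))}
    remaining = num_tasks
    for h in ranked:
        placed = min(capacity_per_hour, remaining)
        schedule[h] = placed
        remaining -= placed
        if remaining == 0:
            break
    return schedule

# Toy 6-hour forecast: midday solar makes hours 2 and 3 the cleanest
forecast = [450, 400, 180, 200, 380, 420]   # gCO2 per kWh
print(schedule_deferrable(forecast, num_tasks=5))
# -> {0: 0, 1: 0, 2: 2, 3: 2, 4: 1, 5: 0}
```

The real system optimizes against a per-hour power-demand forecast as well, but the basic shape is the same: batch work slides toward the hours when the grid is greenest.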
Google’s carbon-aware platform currently focuses on shifting tasks to different times of
the day within the same data center. The company also hopes to move workloads
between data centers to boost its use of renewables, a strategy that offers even greater
potential gains by shifting data center capacity to locations where green energy is more
plentiful, routing around utilities that are slow to adopt renewables.
“Our plan for the future is to shift load in both time and location to maximize the
reduction in grid-level CO2 emissions,” Google said.
Google said it will share its methodology and performance results with the industry in
upcoming research publications. “We hope that our findings inspire other organizations
to deploy their own versions of a carbon-intelligent platform, and together, we can
continue to encourage the growth of carbon-free electricity worldwide,” the company
said.
In the first phase, Google dramatically improved the efficiency of its data centers,
innovating in every aspect of operations, from the chips powering servers to the power
infrastructure and cooling systems. Google’s relentless focus on efficiency yielded huge
savings in electricity, slashing the amount of carbon needed to operate its Internet
business. In 2009 Google began sharing its best practices, allowing others in the
industry to improve their efficiency. Annual data center energy consumption increased by
90 percent from 2000 to 2005, but only by 4 percent from 2010 to 2014, a trend reinforced
by recent research.
In the second phase of its sustainability journey, Google’s data center team has focused
on procuring renewable energy to power its operations instead of electricity sources
based on coal. Google’s use of power purchase agreements (PPAs) for renewable
energy has been adopted by other cloud providers and data center REITs.
On Earth Day 2020, the data center industry is focusing on sustainability as never
before, with the executive suite and customers aligned on the importance of using
renewable energy to power digital infrastructure.
The most significant trend is that the demands for a greener cloud are coming from
customers, not just operators and Greenpeace. Over the last six months, Data Center
Frontier has tracked the conversation about sustainability across multiple industry
events, which has reflected a much more prominent role for the customer perspective.
“The green story of our data centers is becoming much more important for our
customers,” said Jaime Leverton, Chief Commercial Officer at eStruxture Data Centers.
“We’re seeing consumers and builders deciding to put sustainability first,” said Craig
Scroogie, CEO and Managing Director of NextDC, one of Australia’s leading data center
providers.
“Renewable energy is becoming a prerequisite,” said Gil Santaliz, the President and
CEO of the NJFX data center in Wall, N.J. “If you can’t get hold of renewable energy,
you’ll be at a disadvantage.”
Over the past several years, the hyperscalers have been joined in earnest by the largest
multi-tenant data center providers. Today Digital Realty announced a new 7.5-year
power and renewable energy credit agreement with Citi to supply wind energy for Digital
Realty’s 13 data centers in the Dallas area, equivalent to about 30 percent of the power
needs in the Dallas market. The deal provides Digital Realty with over 260,000
megawatt-hours of renewable energy annually from the Bearkat Wind Energy II project
in Glasscock County, Texas.
The wind power agreement builds on Digital Realty’s green strategy. In 2019, the
company announced an additional 50 megawatts of renewables to support its data
centers in Ashburn, Virginia and signed a green tariff agreement with Portland General
Electric to supply approximately 120,000 megawatt-hours annually to a new project in
Hillsboro, Oregon.
This attention will not go away, and some data center executives say the sector needs
to acknowledge its crucial role in the transition to a cleaner economy.
“Our whole industry ethos will be judged over the next five years,” said Bruno Lopez,
Group CEO and Director of ST Telemedia Global Data Centres, based in Singapore.
“We are becoming a lightning rod, and building data centers the same way is not an
option anymore. We have to build more sustainable solutions. Using renewables is
going to be the way forward.”
Some data center thought leaders say the industry needs to do a better job telling the
story of its role in the green power economy.
“On the sustainability side, the data center industry gets a lot of knocks,” said Lee
Kestler, the Chief Commercial Officer for Vantage Data Centers, which has made
sustainability a key theme in its recent data center designs. “We are driving a lot of
opportunities for investment in the renewable energy side.”
“Everyone’s seen the announcement Microsoft has made,” said Dana Adams, the Chief
Operating Officer of AirTrunk. “They will be pressing their supply chain to deliver on this.”
“We see an acute need to begin removing carbon from the atmosphere, which we
believe we can help catalyze through our investments,” said Microsoft President Brad
Smith. “We will achieve this through a portfolio of negative emission technologies (NET)
potentially including afforestation and reforestation, soil carbon sequestration, bioenergy
with carbon capture and storage (BECCS), and direct air capture. Given the current
state of technology and pricing, we will initially focus on nature-based solutions, with the
goal of shifting to technology-based solutions between now and 2050, when they
become more viable.”
How to Approach Your Data Center
Provider About Social & Environmental
Responsibility
BY SARAH RUBENOFF - MAY 18, 2020
Power and cooling equipment at an Iron Mountain data center in Manassas, Virginia.
(Photo: Rich Miller)
LinkedinTwitterFacebookSubscribe
When one thinks about sustainability and environmental responsibility, data centers
don’t always come to mind. They consume a huge amount of power, and as demand for
these facilities continues to grow, so do their energy requirements.
For many businesses, particularly amid a public increasingly concerned about climate
change and related natural disasters, these growing energy requirements can be a
liability, a new report from Iron Mountain states. That is especially true for businesses
that are data dense and/or derive much of their value from their data storage, findings,
and related reporting.
But it’s not all bad news. The report shares that, if willing to take a few steps, data
center decision makers can use their energy purchasing power to move data centers
and the data center industry toward a more sustainable future.
Green energy is not only good for the planet, it makes business sense. — Iron
Mountain
In particular, the new report explores three key questions for these decision makers to
ask their data center provider, explained in full in the new white paper.
Can we get credit for your green energy? Lastly, data center decision makers should
see if their business can reap any of the rewards if their data center provider does use
renewable energy. For example, Iron Mountain’s Green Power Pass is a data center
renewable energy solution that gives customers the ability to include the power they
consume at any Iron Mountain data center as green power in their CDP, RE100, GRI, or
other sustainability reporting, the white paper explains.
Ultimately, making good choices surrounding energy sourcing can help save money,
reduce risks, and contribute to solutions to today’s environmental and social
responsibility challenges.
During last fall’s wildfires in California, the largest electric utility provider in the state was
forced to shut off power for millions of customers. In early October, Pacific Gas and
Electric (PG&E) curtailed power to more than 30 counties in Central and Northern
California. California is prone to more wildfires, natural disasters, and inevitably, more
power shutdowns, making microgrids a critical part of the infrastructure to support future
operations.
This is why PG&E is planning to build 20 new microgrids near utility substations that
could be affected by future power shutoffs. Communities, cities, schools and universities
-and yes, data centers – are looking to microgrids to deploy more resilient power
solutions above and beyond generators and traditional backup solutions.
One of the nation’s largest microgrids helps power Alcatraz Island and its 1.5 million
annual visitors, saving more than 25,000 gallons of diesel a year and reducing
the island’s fuel consumption by more than 45% since 2012. How did the Texas A&M
RELLIS Campus, boasting a growing list of multimillion-dollar state and national
research facilities, testbeds, and proving grounds, deliver high availability power supply
for their mission? Microgrids.
In the traditional sense, a microgrid is a self-sufficient energy system capable of
serving a discrete geographic footprint, such as a college campus, hospital complex,
business center, or an entire neighborhood.
Here’s what’s changed: Microgrid architecture has advanced from merely delivering
power to doing so intelligently. Advanced microgrids are smart and leverage data-driven
solutions for software and their control plane.
Rob Thornton, president and CEO of the 105-year-old International District Energy
Association, often says that microgrids are “more than diesel generators with an
extension cord.” In other words, a microgrid is not just a backup generation mechanism
but should be a robust, 24/7/365 asset. Also, an advanced microgrid may provide grid
and energy management services.
A microgrid controller gives the microgrid its islanding capability as well as new, data-
driven capabilities. Also known as the central brain of the system, the controller can
manage the generators, batteries, and nearby building energy systems with a high
degree of sophistication. The controller orchestrates multiple resources to meet the
energy goals established by the microgrid’s customers by increasing or decreasing the
use of any of the microgrid’s resources – or combinations of resources. These types of
solutions can also create microgrid-as-a-service capabilities.
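A toy version of that orchestration logic might look like the following. The priority order (solar first, then battery, then generator) and the resource names are illustrative assumptions, not any vendor's actual control scheme:

```python
def dispatch(load_kw, solar_kw, battery_kw_available, gen_kw_max):
    """One dispatch interval: meet the site load from the cheapest/cleanest
    resource first, falling back to battery and then the generator.

    Returns the (solar, battery, generator) contributions in kW.
    """
    solar = min(load_kw, solar_kw)               # use all available solar first
    remaining = load_kw - solar
    battery = min(remaining, battery_kw_available)  # then battery headroom
    remaining -= battery
    generator = min(remaining, gen_kw_max)       # generator covers the rest
    remaining -= generator
    if remaining > 0:
        raise RuntimeError("Load exceeds available microgrid capacity")
    return solar, battery, generator

# Islanded site: 500 kW load, 300 kW of solar, 150 kW of battery headroom
print(dispatch(500, 300, 150, 400))   # -> (300, 150, 50)
```

A real controller re-runs a decision like this continuously, folding in forecasts, fuel costs, and grid-services commitments rather than a fixed priority list.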
The realm of power delivery in the data center and IT space continues to become more
interesting. Although power consumption is becoming more efficient, we definitely see
more compute instances deployed. These instances translate to edge computing,
remote locations, more distributed computing, and a broader ecosystem that will require
access to reliable and secure power solutions.
To shift your paradigm around microgrids and power delivery, start by asking some
essential questions.
If you’ve never looked at microgrids as a real option for your data center, enterprise, or
specific use-case, it might be an excellent moment to explore these solutions. These
systems are supporting major hyperscale data centers, critical healthcare facilities,
cities and towns, and even the island of Alcatraz.
Marc Bowman, Multi-Circuit Metering Systems (MCMS) Product General Manager for
Anord Mardix, delves into the factors that distinguish MCM systems.
According to the U.S. Department of Energy, “metering only at the site and building
level is often the cheapest option, however, it is generally insufficient when trying to
determine system and facility performance.” And if a data center (DC) has just site or
system level monitoring, or is relying on traditional branch-circuit monitoring to track
system performance, that leaves an information gap. Multi-Circuit Monitoring Systems
(MCMS) offer new solutions to close that gap.
The lack of integration and the limited features and capabilities of most currently
available monitoring hardware and software systems result in significant impacts on
day-to-day and year-to-year DC operations: lost efficiency, higher costs,
unnecessary power usage, and unplanned downtime.
Benefits of MCMS
In developing MCMS, engineers set out to create a “calculation engine” that goes
beyond off-the-shelf, pre-programmed chips. This innovation has allowed MCM systems
to become much more cost effective than their predecessors, while offering a richer set
of features and capabilities. By leveraging technology advancements, an MCMS offers
high-end performance at an affordable price. Among their many advantages:
One factor that distinguishes MCM systems is that they give DCs the ability to
integrate not only upstream and downstream power utilization data, but also to
incorporate additional analytics—such as harmonics, sag/swell, ITIC compliance and
waveform analysis—that go beyond traditional metering approaches. An advanced
MCMS approach makes use of the capabilities already built into MCMS meters and
then overlays sophisticated software and analytics on top to provide a robust view of the
entire electrical landscape of a data center.
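As a minimal sketch of how one of those analytics might work, the snippet below classifies per-cycle RMS voltage readings as sags or swells using the 90%/110% thresholds common in power-quality practice (per IEEE 1159 conventions). A production MCMS would add event duration, ITIC curve comparison, and waveform capture on top of this:

```python
NOMINAL_V = 480.0   # assumed nominal line voltage for this example

def classify_rms(rms_v, nominal=NOMINAL_V):
    """Label a single RMS voltage reading as 'sag', 'swell', or 'normal'."""
    pu = rms_v / nominal          # per-unit voltage
    if pu < 0.9:                  # below 90% of nominal -> sag
        return "sag"
    if pu > 1.1:                  # above 110% of nominal -> swell
        return "swell"
    return "normal"

readings = [478.2, 405.0, 481.1, 540.5]   # volts, per-cycle RMS
print([classify_rms(v) for v in readings])
# -> ['normal', 'sag', 'normal', 'swell']
```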
Marc Bowman is the Multi-Circuit Metering Systems (MCMS) Product General Manager
for Anord Mardix.
To learn more about MCMS, read the White Paper: Optimizing Data Center Power
Monitoring Systems with Advanced Multi-Circuit Monitoring Systems.
We conclude our Data Center Executive Roundtable today with a look at the data center
supply chain and how it is faring with the global business lockdowns during the COVID-
19 pandemic. We explore this topic with our panel of data center executives, including
John Sasser of Sabey Data Centers, Phillip Marangella of EdgeConneX, CoreSite’s
Steve Smith, Scott Walker from NTT Global Data Centers Americas, and Nancy Novak
from Compass Datacenters and Infrastructure Masons.
The conversation is moderated by Rich Miller, the founder and editor of Data Center
Frontier.
SCOTT WALKER, NTT Global Data Centers, Americas
Scott Walker: The good news for data centers is that most – if not all – of our supply
chain has been classified as essential businesses. As a result, the supply chain has
been minimally affected for us as we continue construction at our data center campus
sites in Chicago, Hillsboro (Portland), Silicon Valley, and Ashburn.
Early in the pandemic, there was some concern (and there still is to a limited degree)
that second- and third-tier vendors would not be able to provide components. But for the
most part, that concern has been mitigated.
We are, however, staying close to our providers to monitor any changes. The biggest
challenges we face are the social distancing restrictions put in place by each region’s
local Authority Having Jurisdiction (AHJ). These restrictions are forcing our contractors
to look at staffing in a different way, and could possibly lead to increased costs going
forward. Despite these challenges, we are on track to bring new capacity online later
this year at affordable price points for customers of all sizes.
NANCY NOVAK, Compass Datacenters and iMasons
Nancy Novak: Maintaining a level of development and construction to keep supply level
with demand is only going to be an increasing challenge. Right now, being able to
deliver a facility in six months is becoming more of a standard requirement, and I’m not
sure how much more time can be shaved from that timeframe.
The new challenge will be to maintain that level of velocity across multiple
locations here in the U.S. and internationally. Competing in this new landscape is
going to require providers to tighten their supply chains to ensure that critical
components, particularly long-lead parts and materials, are available in sync with the
required schedule.
However, the most important thing is for providers to ensure that they are successfully
leveraging technology (BIM, VR/AR, etc.) in both the design and construction phases
of their efforts. Capabilities like VR, for example, identify potential problems before they
are discovered on the job site, helping to ensure schedule integrity. Technology will also be
important in helping to provide the number of personnel required to support multiple
projects. Advances in exoskeleton functionality are just one area that enables women to
become more active participants on the jobsite, thereby helping to address the need
for an expanded labor pool.
STEVE SMITH, CoreSite
Steve Smith: With an unplanned, global health crisis like the COVID-19 pandemic,
supply chains across many industries are affected, including data center infrastructure.
If you think about it, all components of a data center are built off-site, delivered, and
then installed – there are many opportunities for the supply chain to break. Individual
parts of UPSs, CRAH units, and other machines are sometimes built in different
countries or states from where the final product is assembled. Then getting the unit to
and into the data center is a separate process. Coordinating the people, resources and
effort behind this process is complex and critical.
In the midst of the pandemic, CoreSite delivered its CH2 data center—the first purpose-
built data center in downtown Chicago—on schedule. Thankfully, when the pandemic
took effect in March, we were deep in the construction process and didn’t experience a
delay. During construction, and due to stay at home orders and social distancing
measures, the corporate team wasn’t able to visit the site as often as we typically do.
We completed inspections and commissioning virtually, and worked closely with our
general contractor to ensure proper installation of equipment.
We are expanding in Los Angeles with LA3 and the Bay Area with SV9 and don’t
anticipate supply chain delays at this time. We have standardized critical construction and
data center products, ensuring speed in delivering capacity to market. Additionally,
we’re able to move these assets, if needed, to accommodate a need elsewhere in our
portfolio.
PHILLIP MARANGELLA, EdgeConneX
Phillip Marangella: If there’s one thing COVID-19 has tested, it’s been supply chains
and data center management processes. Here, diversity in procurement coupled with
flexibility and agility in deployment capabilities is the solution to overcoming challenges.
Part of our strength as a company is the speed at which we can build our data center
facilities. On average, a build takes six to nine months. That is done, in part, by warehousing
key equipment so that we can quickly deploy gear where it is needed globally. We also
use multiple vendors and source from diverse locations so that we are not single-
threaded. As such, we have not experienced any impediments in our build times during
the pandemic.
We continue to communicate and collaborate with our vendors to ensure that all of our
current expansions and planned builds will be ready for service per our commitment to
our customers.
JOHN SASSER, Sabey Data Centers
As far as other requirements, we have seen some increased lead times, but for the
most part, our projects have not been greatly affected by supply chain issues. Still, we
will be focusing on ways that we can bring these more under our control, including
maintaining a larger inventory of materials on hand than has been our custom and
optimizing our procurement practices.
RECAP: Links to all the Roundtable articles and Executive Insights transcripts for
all our participants.
Keep pace with the fast-moving world of data centers and cloud computing by following
us on Twitter and Facebook, connecting with me on LinkedIn, and signing up for our
weekly newsletter using the form below: