
Don't Let Busway Be the Weak Link in Your DC Electrical Safety Chain



BY VOICES OF THE INDUSTRY - APRIL 13, 2020

An open channel busway allows free tap box placement, and integral coupling improves
safety. (Image: Anord Mardix)

Andy Banks, Director of Databar Sales (North America) for Anord Mardix, explores in detail how the busway can efficiently and effectively distribute power throughout a data center environment.
Andy Banks, Director of Databar Sales (North America) for Anord Mardix

Electrical safety is of paramount importance in the data center industry. Awareness of electrical incident risks and preventive procedures (such as lock-out/tag-out) has become well established in the last decade. Nevertheless, in 2018 there were 160 deaths from electrical accidents and more than 1,500 injuries in the U.S. alone. Most operators and technicians are highly aware of safety protocols when working around live panels, but one electrical component that also demands caution is the busway.

As the conduit for high-density, flexible power distribution for many data center
applications, the busway needs to be a key element of any data center’s workplace
safety considerations. Electrical conductors and bus bars are covered in the Occupational Safety and Health Administration (OSHA) standards in 29 CFR, including Parts 1910.303, 1910.308, 1926.403 and 1926.408.

Busway has been defined by the National Electrical Manufacturers Association (NEMA)
as “a prefabricated electrical distribution system consisting of bus bars in a protective
enclosure, including straight lengths, fittings, devices, and accessories. Busway
includes bus bars, an insulating and/or support material, and a housing.”
Figure 1. A typical overhead busway. (Photo: Anord Mardix)

Traditional Busway Practices & Challenges


Since its introduction within the automotive industry in the 1930s, busway power distribution has become widespread in data centers and industrial facilities. Busway distribution systems in general have some common features. One of those is the need to couple together multiple sections of bus bar, each up to 10-12 feet long, to form the
required length of busway. Traditionally, this coupling has been achieved by the addition
of a separate set of components, commonly referred to as the “joint pack”. The joint
pack typically comprises:

 2 bus connectors
 2 housing couplers
 24 screws

In addition, a specialist tool is often required to install these components onto the
busway. Installation time for each joint pack is typically up to five minutes, depending on the busway manufacturer. In a data hall with a 40' x 40' busway run layout, that can equate to two
days of installation time. Mislay or mis-order any of those joint pack components and
much larger installation delays may result.
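
As a rough illustration of that arithmetic, the sketch below estimates joint-pack labor from run length, section length, and per-joint time. The 10-foot section length and the five-minute figure are taken from the article; the run count and run lengths in the example are invented for illustration and are not the article's data hall.

import math

def joint_pack_install_minutes(total_run_ft: float,
                               section_ft: float = 10.0,
                               minutes_per_joint: float = 5.0) -> float:
    """Estimated minutes spent installing joint packs for a single busway run."""
    sections = math.ceil(total_run_ft / section_ft)
    joints = max(sections - 1, 0)   # one coupler between adjacent sections
    return joints * minutes_per_joint

# Example: twenty 80-ft runs (hypothetical layout, not the article's data hall)
runs = 20
minutes = runs * joint_pack_install_minutes(80)
print(f"{minutes / 60:.1f} labor-hours spent on joint packs alone")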

Furthermore, given the critical nature of this busway coupler, annual thermography
should be performed on each joint pack area to check for loose connections. Overall,
this traditional installation method is very labor intensive, time-consuming, and poses an
ongoing risk if there are any loose busbar connections.
Figure 2. A typical coupler for a traditional enclosed busway. (Anord Mardix)

Open Channel Busway Systems – A Partial Improvement

In recent years, the use of “open channel” busway has become commonplace. The
open channel construction allows the plug-in (or tap off) sub-distribution units to be
placed—theoretically—at any location along the busway. This flexible placement
enables the sub-feed to be positioned directly above or adjacent to its respective load,
which in a data center environment would generally be the server cabinet.

In theory, open channel busway allows the power feed to the server rack or equipment
to be located directly above or adjacent to that equipment, for ease of identification.
However, it's often not that simple in practice. By their very presence, the couplers form a physical barrier that prevents plug-in units from being inserted along the full length of the busway. In some cases, plug-in units must be placed some distance away from the coupler.

I refer to this distance along the open channel busway where plug-in units cannot be
installed as the “keep out area”, which can be significant.  For example, one widely
installed busway from a well-known manufacturer has a keep out area of more than 21
inches—this means over 17% of the busway run is unusable for plug-in units.
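
The quoted percentage follows directly from those two figures. The snippet below is a trivial worked check, assuming 10-foot (120-inch) sections, the low end of the 10-12 ft range quoted earlier, with one coupler per section; the 21-inch keep-out span comes from the example above.

# Quick check of the "keep out area" fraction quoted above.
# Assumes 10-ft (120-inch) busway sections and one coupler per section.

section_in = 10 * 12     # 120 inches per section
keep_out_in = 21         # unusable span per coupler, per the example above

unusable_fraction = keep_out_in / section_in
print(f"{unusable_fraction:.1%} of each section is unavailable for plug-in units")
# -> 17.5%, consistent with the "over 17%" figure above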

Busway Safety Considerations


Working on “live” electrical equipment always presents the risk of potential accident
from arc flash, arc blast, or electric shock. Unfortunately, working on branch circuit and
busway installations is one of the tasks that is often performed on an energized circuit,
based on the need to keep mission-critical components operational 24/7.
With open channel busway, this presents multiple risks, since the plug-in units must be connected to the bus-rail while it is live. While the technicians responsible for this
task should always follow NFPA and safety regulations and the specific site
requirements, the design of the plug-in unit can also help to significantly reduce the
risks posed to both personnel and to the continued operation of the busway system.

By designing safety directly into the components being installed on live busway, operators can minimize risks associated with busway installation and maintenance and prevent electrical accidents.
Busway manufacturers offer various solutions. For example, one manufacturer utilizes a
system of twisting the plug-in unit into the busway to both mechanically attach the unit
to the busway and to simultaneously make the electrical connections into the
conductors. The risks of arc flash or electric shock during this single-step method, however, are heightened by the physical force required and by the possibility of misaligning the plug-in unit's phase connections or connecting the plug-in unit while on load.

Other common systems also pose similar risks, which is why most busway
manufacturers include a warning such as "DO NOT install plug-in units under load. Make sure breakers are in the off position." Giving a caveat like this may limit a
manufacturer’s legal liability, but it is not an ideal solution for preventing electrical
accidents in a real-world data center operation.

A New Busway Approach: Integral Coupling


Integral coupling is a newer plug-in design that can be installed more safely. It includes interlocks at the tap-off boxes to ensure that:

1. Components CANNOT be installed in the incorrect rotation.
2. Components CANNOT be installed or removed with the breakers switched on (i.e., on load).
3. The breakers can ONLY be switched on when the tap-off box is correctly installed and grounded on the bus-rail, and the connections have been successfully engaged through a separate key operation.

In testing, busway with integral coupling and its associated plug-in units has successfully mitigated the effects of potential arc flash, meeting the requirements of IEC/TR 61641, with the plug-in unit breaker detecting and clearing faults in under 24.7 ms.
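
To make the interlock sequence concrete, here is a minimal logic sketch in Python. It is illustrative only, modeling the conditions listed above as guard checks; it is not Anord Mardix's actual mechanism or control firmware.

from dataclasses import dataclass

@dataclass
class TapOffBox:
    correct_rotation: bool = False    # box oriented correctly on the bus rail
    installed_grounded: bool = False  # mechanically seated and grounded
    key_engaged: bool = False         # connections engaged via separate key operation
    breaker_on: bool = False

    def install(self) -> None:
        # Interlock 1: cannot install in the incorrect rotation.
        if not self.correct_rotation:
            raise RuntimeError("Interlock: wrong rotation, box will not seat")
        # Interlock 2: cannot install (or remove) with the breaker switched on.
        if self.breaker_on:
            raise RuntimeError("Interlock: breaker must be off before installation")
        self.installed_grounded = True

    def switch_breaker_on(self) -> None:
        # Interlock 3: breaker closes only when the box is installed, grounded,
        # and the connections have been engaged through the key operation.
        if not (self.installed_grounded and self.key_engaged):
            raise RuntimeError("Interlock: box not fully engaged, breaker stays off")
        self.breaker_on = True

box = TapOffBox(correct_rotation=True)
box.install()
box.key_engaged = True
box.switch_breaker_on()   # succeeds only after the interlocks are satisfied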

Conclusion
Busway continues to be the most efficient and effective way of distributing power
throughout a data center environment. By designing safety directly into the components
being installed on live busway, operators can minimize risks associated with busway
installation and maintenance and prevent electrical accidents.

Google Shifting Server Workloads to Use More Renewable Energy

BY RICH MILLER - APRIL 22, 2020

An on-site solar energy array at the Google data center campus in Belgium. (Photo:
Google)

Google is using sophisticated new software tools to reduce the carbon impact of its
massive data center network, shifting large computing jobs to times when they can be
powered with renewable energy.

The Internet giant says this new "carbon intelligent" approach makes it easier for giant
data center operators to power servers using solar and wind energy, solving complex
problems in the availability of renewables. Google’s new approach uses a practice
called load shifting that schedules workloads to optimize their resource use.

“Shifting the timing of non-urgent compute tasks — like creating new filter features on
Google Photos, YouTube video processing, or adding new words to Google Translate
— helps reduce the electrical grid’s carbon footprint, getting us closer to 24×7 carbon-
free energy,” writes Ana Radovanovic, Google’s Technical Lead for Carbon-Intelligent
Computing, in a blog post.

Google said these changes require no additional computer hardware, and have no
impact on the performance of Google services like Search, Maps and YouTube that
have huge global audiences.

The announcement, timed to Earth Day 2020, underscores how the recent growth of
cloud computing has sharpened the focus on how data centers can retool the economy
for a sustainable future. As the COVID-19 pandemic shifts more essential activities
online, the carbon impact of the world’s IT infrastructure becomes even more critical in
addressing climate change.

Google isn’t alone in this effort. In January Microsoft announced plans to be carbon
negative by 2030, and begin tracking the climate impact of its vendors,
while Amazon and Apple have procured substantial amounts of renewable energy to
support their data centers. Multi-tenant data center developers like Switch, Digital
Realty, Aligned and Iron Mountain have also lined up green energy for their data center
clients.

Continued Leadership in Sustainability


Sustainability has been a huge priority for Google, which has been a leader in green
innovation in the industry. Google has matched its electricity consumption with
renewable energy purchases in each of the past three years, purchasing 1.1 gigawatts
in 2019. Most of that green energy goes to support Google’s massive network of data
centers, which power everything from YouTube videos to Gmail to every query you type
into the search field.
Today’s announcement brings Google closer to its goal of using renewable energy
to power every hour of operation of its data centers, around the clock and around the
globe.

The intermittent nature of renewable energy creates challenges in matching green power to IT operations around the clock. Solar power is only available during daylight hours. Wind energy can be used at night, but not when the wind dies down.

Google says its carbon-intelligent computing platform solves that problem by rescheduling workloads that are not time-sensitive. It can match workloads to solar power during the day, and wind energy in the evening, for example.


"We designed and deployed this first-of-its-kind system for our hyperscale (very large)
data centers to shift the timing of many compute tasks to when low-carbon power
sources, like wind and solar, are most plentiful,” Radovanovic said.
How Google’s Carbon-Aware Computing Works
Google says the platform is now in use at every Google data center, and compares two
types of daily forecasts. One is provided by Tomorrow, a climate data firm known for
its Electricity Map, and predicts how the average hourly carbon intensity of the local
electrical grid will change over the course of a day. Google has also developed an
internal tool to predict the hourly power resources that a data center needs to carry out
its compute tasks during the same period.

The company then uses the datasets to optimize its operations on an hour-by-hour
basis, aligning compute tasks with times of low-carbon electricity supply. Here’s a
diagram of what this shift looks like in practice:
This chart shows how Google is shifting workloads throughout the day so its servers will use
more renewable energy. (Image: Google)
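
The following is a simplified sketch of what that hour-by-hour alignment could look like in code: given an hourly carbon-intensity forecast and a pool of deferrable compute-hours, a greedy scheduler packs the flexible work into the cleanest hours, subject to a per-hour capacity cap. This is not Google's implementation, and the forecast values and capacity figures are invented for illustration.

# Simplified sketch of carbon-aware load shifting: deferrable compute is packed
# into the hours with the lowest forecast grid carbon intensity, subject to a
# per-hour capacity cap. Not Google's system; the figures are invented.

def schedule_flexible_load(carbon_forecast, flexible_hours, cap_per_hour):
    """carbon_forecast: list of 24 hourly gCO2/kWh values.
    flexible_hours: total deferrable compute-hours to place within the day.
    cap_per_hour: max flexible compute-hours any single hour can absorb.
    Returns a 24-entry list of compute-hours assigned to each hour."""
    plan = [0.0] * len(carbon_forecast)
    remaining = flexible_hours
    # Greedy: fill the cleanest hours first.
    for hour in sorted(range(len(carbon_forecast)), key=lambda h: carbon_forecast[h]):
        take = min(cap_per_hour, remaining)
        plan[hour] = take
        remaining -= take
        if remaining <= 0:
            break
    return plan

# Example with an invented forecast: lowest carbon intensity midday (solar),
# moderate overnight (wind), highest in the evening peak.
forecast = [420]*6 + [300]*4 + [180]*6 + [350]*4 + [460]*4
print(schedule_flexible_load(forecast, flexible_hours=40, cap_per_hour=6))
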
Google’s carbon-aware platform currently focuses on shifting tasks to different times of
the day within the same data center. The company also hopes to move workloads
between data centers to boost its use of renewables, a strategy that offers even greater
potential gains by shifting data center capacity to locations where green energy is more
plentiful, routing around utilities that are slow to adopt renewables.
“Our plan for the future is to shift load in both time and location to maximize the
reduction in grid-level CO2 emissions,” Google said.

Google said it will share its methodology and performance results with the industry in
upcoming research publications. “We hope that our findings inspire other organizations
to deploy their own versions of a carbon-intelligent platform, and together, we can
continue to encourage the growth of carbon-free electricity worldwide,” the company
said.

Encouraging Broader Adoption of Green Practices


Google has been a leader in two major phases of green innovation in the industry.

In the first phase, Google dramatically improved the efficiency of its data centers,
innovating in every aspect of operations, from the chips powering servers to the power
infrastructure and cooling systems. Google’s relentless focus on efficiency yielded huge
savings in electricity, slashing the amount of carbon needed to operate its Internet
business. In 2009 Google began sharing its best practices, allowing others in the
industry to improve their efficiency. Annual data center energy consumption increased by 90
percent from 2000 to 2005, but only by 4 percent from 2010 to 2014, a trend reinforced
by recent research.

In the second phase of its sustainability journey, Google’s data center team has focused
on procuring renewable energy to power its operations instead of electricity sources
based on coal. Google’s use of power purchase agreements (PPAs) for renewable
energy has been adopted by other cloud providers and data center REITs.

Data Centers Accelerate Their Focus on Sustainability, Green Power

BY RICH MILLER - APRIL 22, 2020
The Bearkat Wind Energy facility will provide energy for Digital Realty’s 13 data centers
in the Dallas region. (Photo: Digital Realty)


On Earth Day 2020, the data center industry is focusing on sustainability as never
before, with the executive suite and customers aligned on the importance of using
renewable energy to power digital infrastructure.

The rapid growth of cloud computing has sharpened the focus on the data center sector's role in retooling the economy for a sustainable future. As the largest cloud builders deepen their commitments to green energy, the supply chain will need to embrace sustainability in new ways.
“The data center industry underpins the growth of the digital economy, and we believe it
is critical for industry participants to recognize the importance of managing the
environmental impact of their digital infrastructure,” said William Stein, the CEO
of Digital Realty, which today announced a major purchase of wind energy to support its
data centers.

The most significant trend is that the demands for a greener cloud are coming from
customers, not just operators and Greenpeace. Over the last six months, Data Center
Frontier has tracked the conversation about sustainability across multiple industry
events, which has reflected a much more prominent role for the customer perspective.

“The green story of our data centers is becoming much more important for our
customers,” said Jaime Leverton, Chief Commercial Officer at eStruxture Data Centers.

"We're seeing consumers and builders deciding to put sustainability first," said Craig Scroggie, CEO and Managing Director of NextDC, one of Australia's leading data center
providers.

“Renewable energy is becoming a prerequisite,” said Gil Santaliz, the President and
CEO of the NJFX data center in Wall, N.J. “If you can’t get hold of renewable energy,
you’ll be at a disadvantage.”

Hyperscale Players Make Huge Green Power Deals


That kind of alignment signals the growing recognition of climate risk as a business
concern. Enterprise support for renewably-powered data centers has historically lagged
the commitments seen by major hyperscale operators, who have led the push to slash
dependence on fossil fuels. In 2019, the four largest hyperscale operators procured
3.76 gigawatts of renewable energy using power purchase agreements (PPAs).
Source: Renewable Energy Buyers Alliance

 In January Microsoft unveiled an aggressive plan to cut its carbon emissions in half by
2030, both for direct emissions and its entire supply and value chain. “By July of 2021, we
will begin to implement new procurement processes and tools to enable and incentivize
our suppliers to reduce their emissions,” the company said.
 Facebook was the single largest purchaser of renewable energy in 2019, lining up 1.54
gigawatts of wind and solar energy to support its data centers through PPAs. Procuring
local sources of renewable energy is now a key factor in how Facebook selects its new
data center campuses.
 Google continues to be one of the largest corporate purchasers of green energy, as well
as a pioneer in saving energy through data center efficiency. Google today unveiled a carbon-intelligent computing platform that can reduce the carbon impact of its massive data center network, shifting large computing jobs to times when they can be powered with renewable energy.
 Amazon Web Services, which has come under fire for lagging other cloud leaders on
renewables, has committed to meet the terms of the Paris Agreement 10 years early,
reaching net carbon zero operations by 2040. While Amazon did not provide details,
those goals cannot be met without a major acceleration of renewable energy purchases to
support Amazon Web Services.

Over the past several years, the hyperscalers have been joined in earnest by the largest
multi-tenant data center providers. Today Digital Realty announced a new 7.5-year
power and renewable energy credit agreement with Citi to supply wind energy for Digital
Realty’s 13 data centers in the Dallas area, equivalent to about 30 percent of the power
needs in the Dallas market. The deal provides Digital Realty with over 260,000
megawatt-hours of renewable energy annually from the Bearkat Wind Energy II project
in Glasscock County, Texas.
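
As a back-of-the-envelope check on those figures, spreading 260,000 MWh evenly across a year works out to roughly 30 MW of average supply, which at about 30 percent of demand implies an average Dallas portfolio load near 100 MW. The sketch below shows the arithmetic; it assumes flat annual averages, which real hourly delivery will not match.

# Back-of-the-envelope check on the wind PPA figures above.
# Assumes a flat annual average (8,760 hours/year); real delivery varies hourly.

annual_mwh = 260_000
hours_per_year = 8_760

avg_mw = annual_mwh / hours_per_year
print(f"~{avg_mw:.0f} MW average renewable supply")                      # ~30 MW
print(f"implied Dallas portfolio load ~{avg_mw / 0.30:.0f} MW average")  # if that is ~30%
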
The wind power agreement builds on Digital Realty’s green strategy. In 2019, the
company announced an additional 50 megawatts of renewables to support its data
centers in Ashburn, Virginia and signed a green tariff agreement with Portland General
Electric to supply approximately 120,000 megawatt-hours annually to a new project in
Hillsboro, Oregon.


Multi-tenant data center developers like Switch, Digital Realty, Aligned and Iron Mountain have also lined up green energy for their data center clients.

Embracing a Leadership Role in Sustainability


While hyperscale data center operators have been leaders in renewable energy
purchasing, they’ve also been under scrutiny from the environmental group
Greenpeace, which has noted the industry’s progress but continues to single out
providers and utilities that it believes are not keeping pace.

This attention will not go away, and some data center executives say the sector needs
to acknowledge its crucial role in the transition to a cleaner economy.

“Our whole industry ethos will be judged over the next five years,” said Bruno Lopez,
Group CEO and Director of ST Telemedia Global Data Centres, based in Singapore.
“We are becoming a lightning rod, and building data centers the same way is not an
option anymore. We have to build more sustainable solutions. Using renewables is
going to be the way forward.”

Some data center thought leaders say the industry needs to do a better job telling the
story of its role in the green power economy.

“On the sustainability side, the data center industry gets a lot of knocks,” said Lee
Kestler, the Chief Commercial Officer for Vantage Data Centers, which has made
sustainability a key theme in its recent data center designs. “We are driving a lot of
opportunities for investment in the renewable energy side.”

Now that scrutiny will also apply to the supply chain.

“Everyone’s seen the announcement Microsoft has made,” said Dana Adams, the Chief
Operating Officer of AirTrunk. "They will be pressing their supply chain to deliver on this."

Microsoft's commitment includes investing $1 billion in carbon remediation technologies over the next four years.

“We see an acute need to begin removing carbon from the atmosphere, which we
believe we can help catalyze through our investments,” said Microsoft President Brad
Smith. “We will achieve this through a portfolio of negative emission technologies (NET)
potentially including afforestation and reforestation, soil carbon sequestration, bioenergy
with carbon capture and storage (BECCs), and direct air capture. Given the current
state of technology and pricing, we will initially focus on nature-based solutions, with the
goal of shifting to technology-based solutions between now and 2050, when they
become more viable.”
How to Approach Your Data Center
Provider About Social & Environmental
Responsibility
BY SARAH RUBENOFF - MAY 18, 2020

Power and cooling equipment at an Iron Mountain data center in Manassas, Virginia.
(Photo: Rich Miller)

When one thinks about sustainability and environmental responsibility, data centers don't always come to mind. They consume a huge amount of power, and as demand for these facilities continues to grow, so do their energy requirements.

In a world and amid a public increasingly concerned about climate change and related natural disasters, a new report from Iron Mountain states that these growing energy requirements can be a liability for many businesses, especially those that are data dense and/or derive much of their value from their data storage, findings and related reporting.

As we head toward a more sustainable future, ignoring environmental impact is simply not an option, and brings risk. According to Iron Mountain, "That's especially true in the data center industry."

But it's not all bad news. The report shares that, if they are willing to take a few steps, data center decision makers can use their energy purchasing power to move their data centers and the data center industry toward a more sustainable future.

Green energy is not only good for the planet, it makes business sense. — Iron
Mountain
The new report explores three key questions for these decision makers to ask their data center provider, explained in full in the new white paper.

Do you publicly report your carbon footprint and energy usage? Transparency is key here. "If they know their carbon footprint and disclose their environmental impacts, they're probably doing a lot of things right," said Iron Mountain. "Putting real data in the public domain is a key differentiator between a good green talk track and real accountability."
Do you use renewable energy to power your data centers? Iron Mountain shares
using green energy to offset energy consumption “will go a long way toward meeting
your sustainability goals.” And today, renewable energy can often be the less expensive
option. 

Can we get credit for your green energy? Lastly, data center decision makers should
see if their business can reap any of the rewards if their data center provider does use
renewable energy. For example, Iron Mountain’s Green Power Pass is a data center
renewable energy solution that gives customers the ability to include the power they
consume at any Iron Mountain data center as green power in their CDP, RE100, GRI, or
other sustainability reporting, the white paper explains.

Ultimately, making good choices surrounding energy sourcing can help save money,
reduce risks, and contribute to solutions to today’s environmental and social
responsibility challenges.

Microgrids and Data Centers: A More Holistic Approach to Power Security

BY BILL KLEYMAN - JUNE 12, 2020
Image credit: Shutterstock - ESB Professional

During last fall’s wildfires in California, the largest electric utility provider in the state was
forced to shut off power for millions of customers. In early October, Pacific Gas and
Electric (PG&E), curtailed power to more than 30 counties in Central and Northern
California. California is prone to more wildfires, natural disasters, and inevitably, more
power shutdowns, making microgrids a critical part of the infrastructure to support future
operations.

This is why PG&E is planning to build 20 new microgrids near utility substations that
could be affected by future power shutoffs. Communities, cities, schools and universities
– and yes, data centers – are looking to microgrids to deploy more resilient power
solutions above and beyond generators and traditional backup solutions.

There’s More to Microgrids Than You Think


Microgrids can bring together multiple sources of energy, offering a holistic approach rather than reliance upon a single utility provider. Microgrid solutions are entering mainstream use-cases and are delivering some serious benefits to resiliency, uptime, and power delivery. And the use-cases just keep growing. Outside of the California wildfires, there are other use-cases as well.

One of the nation’s largest microgrids helps power Alcatraz Island and its 1.5 million
annual visitors, helping save more than 25,000 gallons of diesel a year, while reducing
the island’s fuel consumption by more than 45% since 2012. How did the Texas A&M
RELLIS Campus, boasting a growing list of multimillion-dollar state and national
research facilities, testbeds, and proving grounds, deliver high availability power supply
for their mission? Microgrids.

In a traditional sense, microgrids act as a self-sufficient energy system. And they are
capable of serving discrete geographic footprints. These locations and geographies
include college campuses, hospital complexes, business centers, or entire
neighborhoods.

Here’s what’s changed: Microgrid architecture has advanced from merely delivering
power to doing so intelligently. Advanced microgrids are smart and leverage data-driven
solutions for software and their control plane.

Rob Thornton, president and CEO of the 105-year-old International District Energy
Association, often says that microgrids are “more than diesel generators with an
extension cord.” In other words, a microgrid is not just a backup generation mechanism
but should be a robust, 24/7/365 asset. Also, an advanced microgrid may provide grid
and energy management services.

Consider this list of microgrid capabilities:

 Produce on-site generation, and, in some cases, thermal energy.
 Sell capacity, energy and ancillary services to the grid and participate in demand
response — activities that create a potential revenue stream for the asset owner.
 Optimize energy resources to priorities set by the host.
 Manage load to reduce energy waste and achieve superior efficiency.

A fundamental feature of a microgrid is its ability to island — meaning it can disconnect from the central grid and operate independently, and then reconnect and work in parallel with the grid. So, for example, whenever there is a significant storm or another natural weather event that potentially causes an outage on the power grid, the microgrid islands and activates its on-site power generators. When the power outage ends, the microgrid reconnects to the grid.

A microgrid controller gives the microgrid its islanding capability as well as new, data-
driven capabilities. Also known as the central brain of the system, the controller can
manage the generators, batteries, and nearby building energy systems with a high
degree of sophistication. The controller orchestrates multiple resources to meet the
energy goals established by the microgrid’s customers by increasing or decreasing the
use of any of the microgrid’s resources – or combinations of resources. These types of
solutions can also create microgrid-as-a-service capabilities.
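
A minimal sketch of that islanding decision loop is shown below. It is purely illustrative, with invented resource names and capacities, and it omits the protection coordination and resynchronization steps a real controller performs.

# Minimal sketch of a microgrid controller's islanding decision loop.
# Illustrative only; real controllers add protection coordination, resync, etc.

class MicrogridController:
    def __init__(self, resources):
        self.resources = resources   # e.g. {"solar": 2.0, "battery": 1.5, "genset": 3.0} in MW
        self.islanded = False

    def step(self, grid_healthy: bool, site_load_mw: float) -> dict:
        """Decide grid-connected vs. islanded operation and dispatch local resources."""
        if not grid_healthy and not self.islanded:
            self.islanded = True           # open the point of interconnection
        elif grid_healthy and self.islanded:
            self.islanded = False          # resynchronize and reconnect

        dispatch = {}
        if self.islanded:
            remaining = site_load_mw
            # Serve the site load from local resources in priority order.
            for name, capacity_mw in self.resources.items():
                supply = min(capacity_mw, remaining)
                dispatch[name] = supply
                remaining -= supply
        return {"islanded": self.islanded, "dispatch": dispatch}

controller = MicrogridController({"solar": 2.0, "battery": 1.5, "genset": 3.0})
print(controller.step(grid_healthy=False, site_load_mw=4.0))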

Microgrid-as-a-service delivers a fully managed, data-driven solution to help you with your power delivery requirements. Advanced data gathering from numerous operational microgrid deployments allows leading partners to make better decisions and proactively service units. This type of managed offering means customers never have to worry about their microgrid unit; it is serviced, monitored, and maintained by the microgrid provider.

3 Myths Regarding Microgrids and Data Centers


In researching microgrids and learning about their capabilities, I quickly ran into three
myths that people still hold about this piece of technology.

1. Microgrids are too expensive. Yes, there is an upfront cost of building a
microgrid, which will vary depending on your use-case and the scale of the
project. Some design costs are in the thousands of dollars, while more complex
systems may cost more than a few million dollars. However, look at it from a
healthcare data center perspective for a second. “The extreme case would be for
your medical device to stop working,” says Dave Carter, the managing research
engineer at the Schatz Energy Research Center and the lead technical engineer
on microgrid projects. “The value of the power that the microgrid can provide when
the rest of the county [in California] is de-energized is high.”
2. Microgrids are way too complicated and challenging to manage. Modern
microgrids are a lot smarter, automated, and data-driven than ever before. Plus,
the whole design around microgrid-as-a-service enables enterprises, healthcare
providers, cities, and even data center operators to focus on what they’re good at
and on their business requirements. Today, the microgrid is far easier to
manage, has more integration points with power solutions, and can significantly
improve resiliency.
3. Microgrids are basically the same as a generator. Microgrids are certainly not
the same as a traditional generator. First of all, if you have a diesel generator,
there is a chance that you might be limited in how much you can test it due to
environmental regulations. Secondly, microgrids can be wholly independent and
not rely on diesel fuel. Remember, they can source power from multiple locations.
Finally, you can absolutely use a generator alongside a microgrid. Here is a
specific example: a diesel-backed microgrid operated during Super
Bowl 50 in San Francisco, California. Using Tier-4 technology, the microgrid
powered Super Bowl City using renewable diesel fuel — as opposed to petroleum
diesel fuel. A big difference is that this fuel is not biodiesel. Instead, it’s Neste
renewable diesel, created from renewable raw materials, including any organic
biomass, such as vegetable oil.
Getting Started Means Asking the Right Questions            

The realm of power delivery in the data center and IT space continues to become more
interesting. Although power consumption is becoming more efficient, we definitely see
more compute instances deployed. These instances translate to edge computing,
remote locations, more distributed computing, and a larger ecosystem that will require
access to reliable and secure power solutions.

To shift your paradigm around microgrids and power delivery, start by asking some
essential questions:

 Is my power delivery as efficient as I need it to be?
 Am I worried about power outages?
 How much do I really trust my current generators?
 When was the last time I reviewed my power solution?

If you’ve never looked at microgrids as a real option for your data center, enterprise, or
specific use-case, it might be an excellent moment to explore these solutions. These
systems are supporting major hyperscale data centers, critical healthcare facilities,
cities and towns, and even the island of Alcatraz.


Advanced Multi-Circuit Metering Closes the Data Gap

BY VOICES OF THE INDUSTRY - JUNE 15, 2020
High-capacity power lines in Manassas support data center development in Prince William County. (Photo: Rich Miller)

Marc Bowman, Multi-Circuit Metering Systems (MCMS) Product General Manager for
Anord Mardix, delves into the factors that distinguish MCM systems.

According to the U.S. Department of Energy, “metering only at the site and building
level is often the cheapest option, however, it is generally insufficient when trying to
determine system and facility performance.” And if a data center (DC) has just site or
system level monitoring, or is relying on traditional branch-circuit monitoring to track
system performance, that leaves an information gap. Multi-Circuit Monitoring Systems
(MCMS) offer new solutions to close that gap.

To date, metering engineering specifications for mission-critical facilities have typically
focused on high-level power quality metering (PQM). For example, some DCs use
switchgear and high-level meters to measure overall power quality and waveform
capture. By contrast, branch circuit monitoring (BCM) has focused more on low-level
components, measuring watts and other granular data. The power of an MCMS is that it
can provide PQM-quality data at the branch circuit level. This is accomplished not by
simply adding more meters, but by adding the right, integrated metering systems.
Only by integrating both upstream data (site, building, switchboard level) and
downstream data (circuit, PDU and rack level) can operators have a full picture of the
power flow at a DC facility—and be able to detect and prevent potentially critical
downtime events before they occur. Traditional metering hardware and software
solutions have not bridged this information gap—and some vendors do not even
acknowledge that there is a need to address it. Both PQM and BCM level data are
invaluable, but when they are only available in isolation the resulting knowledge gap
leaves vulnerabilities in the power infrastructure.

The lack of integration and the limited features and capabilities of most currently
available monitoring hardware and software systems result in a significant impact on
day-to-day and year-to-year DC operations—in lost efficiency, higher costs,
unnecessary power usage, and unplanned downtime.

One factor that distinguishes MCM systems is that they give DCs the ability
to integrate not only upstream and downstream power utilization data, but to
also incorporate additional analytics—such as harmonics, sag/swell, ITIC
compliance and waveform analysis—that go beyond traditional metering
approaches.

Advanced Multi-Circuit Monitoring Systems


What’s needed in the industry now is smarter metering hardware that can integrate with
any legacy systems (using Serial or Ethernet communications), with easy-to-use
software for comprehensive monitoring, tracking, and predictive analysis. An MCMS
takes advantage of the last decade of technology advancements—such as smaller,
faster processors and chip sets—to create sophisticated metering products that
integrate monitoring and analysis capabilities. They can help DC operators not only
track electrical activity but increase efficiency and prevent adverse events.

Essentially, an MCMS replaces multiple single-point meters with one sophisticated
metering component. An MCMS comes with its own firmware that’s capable of
connecting with a full range of DC systems and protocols, making it an affordable,
“universal” meter that can be installed virtually anywhere—monitoring a UPS, PDU,
switchgear, racks in the white space, etc. This adaptability enables an MCMS to capture
power quality metrics at a granular level, while linking with other MCM components and
legacy meters facility-wide for integrated tracking and analysis.
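
To illustrate the "one meter, many circuits" idea, the sketch below models a single poll that returns per-circuit readings (volts, amps, watts plus power-quality metrics) under one device address. The structure and field names are hypothetical; this is not Anord Mardix's firmware or API.

# Hypothetical data model for a multi-circuit monitoring poll: one device
# reports granular readings for many circuits in a single pass.
# Field names and thresholds are invented for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class CircuitReading:
    circuit_id: int
    volts: float
    amps: float
    watts: float
    thd_percent: float          # harmonic distortion, a power-quality metric
    sag_swell_events: int = 0   # count since the last poll

@dataclass
class McmsPoll:
    device_ip: str                              # one IP address for the whole meter
    circuits: List[CircuitReading] = field(default_factory=list)

    def total_kw(self) -> float:
        return sum(c.watts for c in self.circuits) / 1000.0

    def flagged_circuits(self, thd_limit: float = 5.0) -> List[int]:
        """Circuits whose distortion or sag/swell activity merits a closer look."""
        return [c.circuit_id for c in self.circuits
                if c.thd_percent > thd_limit or c.sag_swell_events > 0]

poll = McmsPoll("10.0.0.21", [
    CircuitReading(1, 229.8, 12.4, 2830.0, 3.1),
    CircuitReading(2, 230.1, 18.9, 4310.0, 6.7, sag_swell_events=1),
])
print(poll.total_kw(), poll.flagged_circuits())
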
As a “smart” solution for DC power monitoring, the Anord Mardix MCMS is built around a
Core Module 96-circuit power monitor, measuring all circuits in switchgear, PDUs, RPPs, or
panelboards to provide critical, logged and real-time data on power consumption. No
additional software or integration is necessary, and it can be integrated into DCIM. (Graph:
Anord Mardix)

Benefits of MCMS
In developing MCMS, engineers set out to create a “calculation engine” that goes
beyond off-the-shelf, pre-programmed chips. This innovation has allowed MCM systems
to become much more cost effective than their predecessors, while offering a richer set
of features and capabilities. By leveraging technology advancements, an MCMS offers
high-end performance at an affordable price. Among their many advantages:

 MCMS systems are a “smart” solution, capable of simultaneously supporting
various protocols (e.g. BACnet, SNMP, MODBUS, and REST), which enables
easier integration into software systems without excessive costs of EGX /
conversion devices.
 MCMS systems provide more functionality than traditional meters, which typically
offer data on just Volts, Amps, and Watts. Now, a single MCMS meter can provide
that data plus metrics that previously had only been available by installing multiple
high-cost, single-point meters. The MCMS standard adds harmonics, sag/swell,
waveform capture, and even measures the presence of voltage at the circuit level.
 A user-friendly interface (built in HTML) for easy set up and customization.
Installing MCMS doesn’t require a service technician or vendor software support
to configure the system for the specific needs of any facility. The user interface
makes it quick and easy to get the metering installed and the system up and
running, for fast start-up and commissioning. It is also easy to make changes
when required (e.g., to swap out breakers).
 No overhead software or middleware is required; MCMS uses native Ethernet
and all protocols. The system can connect easily with any component that uses
standard protocols.
 MCMS hardware is “future-proof”—it can be designed with future operating
requirements in mind. The systems are modular and scalable to easily
accommodate evolving industry standards. For example, the MCMS uses a single
IP address, which can be used for more than just metering. Data from IO modules,
breakers, temperature, and humidity sensors can be consolidated. No separate
software system is required to monitor key metrics for PQM, BCM, temp/humidity,
and other auxiliary contacts.
 An MCMS has higher polling rates and larger data storage capacity, which
enables key data to be captured for later analysis.

One factor that distinguishes MCM systems is that they give DCs the ability to
integrate not only upstream and downstream power utilization data, but to also
incorporate additional analytics—such as harmonics, sag/swell, ITIC compliance and
waveform analysis—that go beyond traditional metering approaches. An Advanced
MCMS approach makes use of the capabilities already built in to MCMS meters and
then overlays sophisticated software and analytics on top to provide a robust view of the
entire electrical landscape of a data center.

Marc Bowman is the Multi-Circuit Metering Systems (MCMS) Product General Manager
for Anord Mardix. 

To learn more about MCMS, read the White Paper: Optimizing Data Center Power
Monitoring Systems with Advanced Multi-Circuit Monitoring Systems.

So Far, So Good: Pandemic Tests Resilience of Data Center Supply Chain

BY RICH MILLER - JUNE 25, 2020
A generator delivery in central London. Our Executive Roundtable panel explores the
status of the data center supply chain. (Photo: Fortrust)

We conclude our Data Center Executive Roundtable today with a look at the data center
supply chain and how it is faring with the global business lockdowns during the COVID-
19 pandemic. We explore this topic with our panel of data center executives, including
John Sasser of Sabey Data Centers, Phillip Marangella of EdgeConneX, CoreSite’s
Steve Smith, Scott Walker from NTT Global Data Centers Americas, and Nancy Novak
from Compass Datacenters and Infrastructure Masons.

The conversation is moderated by Rich Miller, the founder and editor of Data Center
Frontier.
SCOTT WALKER, NTT Global Data Centers, Americas

Scott Walker: The good news for data centers is that most – if not all – of our supply
chain has been classified as essential businesses. As a result, the supply chain has
been minimally affected for us as we continue construction at our data center campus
sites in Chicago, Hillsboro (Portland), Silicon Valley, and Ashburn.

Early in the pandemic, there was some concern (and there still is to a limited degree)
that second- and third-tier vendors would not be able to provide components. But for the
most part, that concern has been mitigated.

We are, however, staying close to our providers to monitor any changes. The biggest
challenges we face are the social distancing restrictions put in place by each region’s
local Authority Having Jurisdiction (AHJ). These restrictions are forcing our contractors
to look at staffing in a different way, and could possibly lead to increased costs going
forward. Despite these challenges, we are on track to bring new capacity online later
this year at affordable price points for customers of all sizes.
NANCY NOVAK, Compass Datacenters and iMasons

Nancy Novak: Maintaining a level of development and construction to keep supply level
with demand is only going to be an increasing challenge. Right now, being able to
deliver a facility in six months is becoming more of a standard requirement, and I’m not
sure how much more time can be shaved from that timeframe.

The new challenge will be to maintain that level of velocity across multiple locations here in the U.S. and internationally. In order to compete in this new landscape, providers will need to tighten their supply chains to ensure that critical components, particularly long-lead parts and materials, are available in sync with schedule requirements.

However, the most important area for providers is to ensure that they are successfully
leveraging technology (BIMs, VR/AR, etc.) in both the design and construction phases
of their efforts. Capabilities like VR, for example, identify potential problems before they
are discovered on the job site to help ensure schedule integrity. Technology will also be
important in helping to provide the number of personnel required to support multiple
projects. Advances in exoskeleton functionality are just one area that enables women to
become more active participants on the jobsite, thereby helping to address the needs
for an expanded labor pool.
STEVE SMITH, CoreSite

Steve Smith: With an unplanned, global health crisis like the COVID-19 pandemic,
supply chains across many industries are affected, including data center infrastructure.
If you think about it, all components of a data center are built off-site, delivered, and
then installed – there are many opportunities for the supply chain to break. Individual
parts of UPSs, CRAH units, and other machines are sometimes built in different countries or states from where the product will be manufactured. Then getting the unit to
and into the data center is a separate process. The number of people, resources and
effort to coordinate this process is complex and critical.

In the midst of the pandemic, CoreSite delivered its CH2 data center—the first purpose-
built data center in downtown Chicago—on schedule. Thankfully, when the pandemic
took effect in March, we were deep in the construction process and didn’t experience a
delay. During construction, and due to stay at home orders and social distancing
measures, the corporate team wasn’t able to visit the site as often as we typically do.
We completed inspections and commissioning virtually, and worked closely with our
general contractor to ensure proper installation of equipment.

We are expanding in Los Angeles with LA3 and the Bay Area with SV9 and don’t
anticipate supply chain delays at this time. We standardized critical construction and
data center products, ensuring speed in delivering capacity to market. Additionally,
we’re able to move these assets, if needed, to accommodate a need elsewhere in our
portfolio.
PHILLIP MARANGELLA, EdgeConneX

Phillip Marangella: If there’s one thing COVID-19 has tested, it’s been supply chains
and data center management processes. Here, diversity in procurement coupled with
flexibility and agility in deployment capabilities is the solution to overcoming challenges.

Part of our strength as a company is the speed at which we can build our data center
facilities. On average it is between 6-9 months. That is done, in part, by warehousing
key equipment so that we can quickly deploy gear where it is needed globally. We also
use multiple vendors and source from diverse locations so that we are not single-
threaded. As such, we have not experienced any impediments in our build times during
the pandemic.

We continue to communicate and collaborate with our vendors to ensure that all of our
current expansions and planned builds will be ready for service per our commitment to
our customers.
JOHN SASSER, Sabey Data Centers

John Sasser: Management and stewardship of power is a signature strength for Sabey, and we are continually looking for ways to increase efficiency.

As far as other requirements, we have seen some increased lead times, but for the
most part, our projects have not been greatly affected by supply chain issues. Still, we
will be focusing on ways that we can bring these more under our control, including maintaining a larger inventory of materials on hand than has been our custom and optimizing our procurement practices.

RECAP: Links to all the Roundtable articles and Executive Insights transcripts for
all our participants. 
