

8. Limitation
Because they sit outside the main building, some of the facility's modules are exposed to damage from a variety of sources, including precipitation, wind, vandalism, and animals. In addition, the modules cannot accommodate further data center growth, as they are built as constrained, self-contained units; an organization looking to expand its data center infrastructure would have to purchase larger modules. The limited space inside the modules may also create difficulties for maintenance personnel coming in to carry out repairs, since the modules are built to supply power and cooling to the data center rather than to provide a comfortable working space for data center staff.
9. Hardware Components
9.1 Server
A little about servers: the history of servers moves in parallel with the history of computer networks, which allow multiple computer systems to communicate with each other at the same time[ CITATION Adm18 \l 1033 ].

Figure 1: PRIMERGY BX400 S1

Managing large computing and storage requirements with limited resources, budgets and space is
a challenge. The Fujitsu Server PRIMERGY BX400 helps to solve it.  It is a fully-featured blade
system built from the ground up as a user-friendly and versatile IT infrastructure. Up to eight
server and storage blades are all packed into a surprisingly small enclosure that is as easy to
install and manage as it is to use[ CITATION fuj19 \l 1033 ].

Characteristics: Affordable, fully featured blade system built from the ground up to be user-friendly and versatile, saving time and costs for midsized companies
System Unit Type: 6 U chassis for 19-inch rack, or floorstand version
Weight: Rack: up to 98 kg / Floorstand: up to 112.5 kg
Front bays: 8 half-height bays for server or storage blades
Midplane: High-speed midplane with 3 fabrics
Rear bays: 4 x for connection blades, 4 x for PSU/fan modules
Management Blades: 1x hot-plug management blade as standard; redundant management blade as option
Fan Configuration: Up to 3 additional hot-plug, redundant fan modules
Power Supply Configuration: Up to 4x hot-plug power supply modules (1x as standard)
Operating buttons: On/off switch, ID button
Status LEDs: Power (amber/green), System status (orange/yellow), Identification (blue)
Service Display: ServerView Local Service Display for Blade (LSB)
Warranty Period: 3 years
Warranty Type: Onsite warranty
Recommended Service: 24x7, onsite response time 4 h (for locations outside of EMEIA, please contact your local Fujitsu partner)
Service Lifecycle: 5 years after end of product life

9.2 Switch

Figure 2: Huawei CloudEngine 6880-24S4Q2CQ-EI Switch

The Huawei CloudEngine 6880-24S4Q2CQ-EI is one of Huawei's next-generation high-performance, high-density, low-latency 10GE/25GE Ethernet switches for data centers and high-end campuses. It adopts an advanced hardware structure to provide high-density 10GE/25GE port access and supports 40GE/100GE uplink ports. It supports rich data center features and high-performance stacking, and the air duct direction can be flexibly selected[ CITATION yci18 \l 1033 ].

Huawei CloudEngine 6880-24S4Q2CQ-EI Switch Specifications

Downlink Ports: 24 x 10 GE SFP+
Uplink Ports: 4 x 40 GE QSFP+ and 2 x 40 GE/100 GE QSFP28
Switching Capacity: 1.2 Tbit/s
Forwarding Rate: 406 Mpps
Buffer: 16.5 MB
Reliability: Micro-segmentation, hardware-based BFD
SFC: IETF-defined NSH
Maximum Power Consumption: 224 W
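As a rough sanity check on how the headline switching capacity relates to the port configuration above, the short sketch below sums the port bandwidth and doubles it for full-duplex operation. The assumptions that the vendor counts capacity in full duplex and that the combo uplinks run at 100 GE are ours for illustration, not statements from the datasheet.

```python
# Rough sanity check: relate the advertised switching capacity to the port layout.
# Assumes full-duplex counting and 100 GE operation of the combo uplinks (illustrative assumptions).

downlink_gbps = 24 * 10          # 24 x 10 GE SFP+ downlinks
uplink_gbps = 4 * 40 + 2 * 100   # 4 x 40 GE QSFP+ plus 2 x 40/100 GE QSFP28, taken at 100 GE

total_port_bandwidth_gbps = downlink_gbps + uplink_gbps          # 600 Gbit/s one-way
switching_capacity_tbps = total_port_bandwidth_gbps * 2 / 1000   # full duplex -> 1.2 Tbit/s

print(f"Total port bandwidth: {total_port_bandwidth_gbps} Gbit/s")
print(f"Estimated switching capacity: {switching_capacity_tbps} Tbit/s")
```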

9.3 Router

Figure 3: HPE FlexNetwork MSR4000 Router Series

The HPE MSR4000 Router Series delivers high-performance large-branch routing with up to 36 Mpps in a cost-optimized form factor. Featuring integrated routing, switching, security, VPN, and SIP with no additional licensing, it lets you boost service delivery while simplifying the management of your corporate WAN. With the Open Application Platform module, the MSR4000 Router Series offers a wide range of virtualized applications. Its distributed, modular architecture and high reliability also strengthen the resiliency of large branches[ CITATION hpe19 \l 1033 ].

Differentiator: Modular, next-generation router; IPv6 and MPLS; up to 36 Mpps forwarding capacity; 28 Gbps of IPsec VPN encrypted throughput; 10 Gigabit SFP+ integrated; supports HPE Open Application Platform; for extra-large branch offices, headquarters, and campuses
Ports: (8) HMIM slots, (2) MPU (main processing unit) and (1) SPU (service processing unit) slots (maximum, depending on model and configuration)
Throughput: 36 Mpps (maximum, depending on model and configuration)
Routing table size: 1,000,000 entries (IPv4), 1,000,000 entries (IPv6) (maximum, depending on model and configuration)
Wireless capability: 3G, 4G LTE (depending on options and configuration)
PoE capability: IEEE 802.3at, 450 W (maximum, depending on model, options and configuration)
Input voltage: -36 to -75 VDC (depending on model and configuration)
Power Consumption: 300 W (maximum)


9.4 Cooler

Figure 4: CyberAir 3PRO DX

The CyberAir 3PRO DX is the result of more than three decades of experience with projects
around the world, and is the logical next step in the development of the successful CyberAir-3
series. To achieve maximum cooling capacity with a minimal footprint while promising you
maximum potential savings, these units are more adaptable than any other precision air
conditioning unit on the market[ CITATION stu19 \l 1033 ].

Technical Data

Cooling capacity (total): 20 – 150 kW
Airflow volume: 4,000 – 32,000 m³/h
Sizes: 6
Air conduction: Upflow; Downflow; Downflow with outlet front/back/down
10. Services
10.1 Software as a Service (SaaS)
SaaS is a cloud computing service model that allows software to be delivered over the internet. The major advantages of SaaS are that it is very often user-friendly and that it gives users internet access to popular commercial software. SaaS allows people to use cloud-based web applications.

In fact, email services such as Gmail and Hotmail are examples of cloud-based SaaS services.
Other examples of SaaS services are office tools (Office 365 and Google Docs), customer
relationship management software (Salesforce), event management software (Planning Pod), and
so on. SaaS services are usually available with a pay-as-you-go (subscription) pricing model. All software and hardware are provided and managed by the vendor, so you don't need to install or configure anything. The application is ready to go as soon as you get your login and password[ CITATION BGl19 \l 1033 ].

10.2 Platform as a Service (PaaS)


PaaS offers the benefits of SaaS with the added ability to develop software: not only is software delivered over the internet, but end users are also given access to a platform for creating it.

PaaS refers to cloud platforms that provide runtime environments for developing, testing, and
managing applications. Thanks to PaaS solutions, software developers can deploy applications,
from simple to sophisticated, without needing all the related infrastructure (servers, databases,
operating systems, development tools, etc.). Examples of PaaS services are Heroku and Google App Engine. PaaS is perfect for end users who need the ability to deploy software over the internet, but who also want a platform where a team can collaborate to develop or improve existing software[ CITATION BGl19 \l 1033 ].
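To make the PaaS model concrete, the sketch below shows the kind of minimal web application a developer might deploy to a platform such as Heroku or Google App Engine, which then supplies the runtime, routing, and scaling. The use of Flask and of a platform-provided PORT environment variable are illustrative assumptions, not part of this report's design.

```python
# Minimal web app of the kind a PaaS (e.g. Heroku or Google App Engine) can run.
# The platform provides the runtime and networking; the developer only supplies the code.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from a PaaS-hosted application!"

if __name__ == "__main__":
    # Some platforms (e.g. Heroku) inject the listening port via an environment variable.
    port = int(os.environ.get("PORT", 8080))
    app.run(host="0.0.0.0", port=port)
```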

10.3 Infrastructure as a Service (IaaS)


IaaS offers a complete suite of on-demand services to end users, including access to servers, storage, networking, and operating systems; it combines the benefits of SaaS and PaaS with access to the underlying systems themselves. IaaS services can be used for a variety of purposes, from hosting websites to analyzing
big data. Clients can install and use whatever operating systems and tools they like on the
infrastructure they get. Major IaaS providers include Amazon Web Services, Microsoft Azure,
and Google Compute Engine[ CITATION BGl19 \l 1033 ].
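As an illustration of how an IaaS offering is consumed, the sketch below requests a single virtual server through the AWS boto3 SDK. The region, machine image ID, and instance type are placeholders chosen for illustration; nothing here commits the proposed facility to a particular provider.

```python
# Illustrative IaaS usage: requesting a virtual server from a cloud provider's API.
# Placeholder values (region, AMI ID, instance type) would be chosen by the client.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder machine image (OS chosen by the client)
    InstanceType="t3.micro",          # placeholder instance size
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched instance: {instance_id}")
```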

10.4 Colocation
There are several reasons why a business may decide to use a colocation facility rather than constructing its own data center. While colocation facilities differ somewhat from facilities that do not provide colocation services, the key point is that running a data center is no longer only about managing the facility itself; it is also about advanced data center infrastructure management (DCIM) and about the data center operating system. Data centers have been, and continue to become, distributed entities within a business. When working with DCIM there are several key factors to consider, including:

• Colocation is not always the least expensive solution

• DCIM must include energy efficiency, asset visibility, and capacity planning (a small capacity-planning sketch follows this list)

• Proper planning and utilization of space and resources[ CITATION BGl19 \l 1033 ]
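As a minimal illustration of the capacity-planning factor above, the sketch below compares per-rack power draw against an assumed rack power budget; the rack names and wattages are invented for illustration only.

```python
# Toy DCIM-style capacity check: compare per-rack power draw against a rated budget.
# Rack names and wattages are illustrative, not measurements from the proposed facility.

RACK_CAPACITY_W = 5000  # assumed rated power budget per rack

racks = {
    "rack-A1": 3200,
    "rack-A2": 4700,
    "rack-B1": 5100,  # over budget
}

for name, draw_w in racks.items():
    utilisation = draw_w / RACK_CAPACITY_W
    status = "OK" if utilisation <= 1.0 else "OVER CAPACITY"
    print(f"{name}: {draw_w} W ({utilisation:.0%}) -> {status}")
```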
11. Architecture of DCI
Tier comparison (values given as Tier 1 / Tier 2 / Tier 3 / Tier 4):

Minimum capacity components to support the IT load: N / N+1 / N+1 / N after any failure
Distribution paths: 1 / 1 / 1 active and 1 alternative / 2 simultaneously active
Critical power distribution: 1 / 1 / 2 simultaneously active / 2 simultaneously active
Fault tolerance: No / No / No / Yes
Concurrent maintainability: No / No / Yes / Yes
Continuous cooling: No / No / No / Yes
Redundant backbone cabling: No / No / Yes / Yes
Redundant horizontal cabling: No / No / No / Optional
Raised floors: 12" / 18" / 30"-36" / 30"-36"
Availability: 99.671% / 99.741% / 99.982% / 99.995%
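The availability row above corresponds to the downtime figures quoted in the tier descriptions that follow. As a quick illustration, the sketch below converts each availability percentage into hours of allowable downtime per year, assuming a 365-day year.

```python
# Convert tier availability percentages into hours of allowable downtime per year.
HOURS_PER_YEAR = 365 * 24  # 8760 hours; leap years ignored for simplicity

tiers = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, availability in tiers.items():
    downtime_hours = HOURS_PER_YEAR * (1 - availability / 100)
    print(f"{tier}: {availability}% availability -> about {downtime_hours:.1f} h downtime per year")
```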

11.1 Tier 1 (Basic Capacity)


Tier 1 data centers go beyond staging your servers in a spare office or large closet inside a larger facility, but they sit in the lowest tier because of the level of redundancy and downtime they allow. A Tier 1 system is the simplest, in that it does not promise a particularly high maximum level of uptime, though this still tends to be about 99.671%. Because there is little to no redundancy built into the system, namely only one path for power and cooling equipment, there can be up to 28-29 hours of downtime per year. Tier 1 DCs need a dedicated space for all IT systems (a server room, which may or may not include a locked door); uninterruptible power supplies (UPSes) to condition incoming power and prevent spikes from damaging equipment; a controlled cooling environment that runs 24x7x365; and a generator to keep equipment running during an extended power outage[ CITATION Her17 \l 1033 ].

11.2 Tier 2 (Redundant Capacity)


A tier 2 data center incorporates all the characteristics of a tier 1 DC. It also contains some
partial redundancy in power and cooling components (the power and cooling systems are not
totally redundant). The next tier of data centers includes a slightly higher uptime: 99.741%. In
other words, there are no more than 22 hours of downtime per year. This is due to the fact that,
while these data centers still retain the single path model for power and cooling found in Tier 1
data centers, they do have some redundant components, such as backup cooling systems, backup
generators, etc. These are not completely redundant systems, but they do offer a level of
reliability not found in their Tier 1 counterparts. A tier 2 DC exceeds tier 1 requirements,
providing some additional insurance that power or cooling needs won’t shut down
processing[ CITATION Her17 \l 1033 ].

11.3 Tier 3 (Concurrently maintainable DC)


A tier 3 DC incorporates all the characteristics of tier 1 and tier 2 data centers. A tier 3 data
center also requires that any power and cooling equipment servicing the DC can be shut down
for maintenance without affecting your IT processing. Tier 3 has uptimes of around 99.982% (no
more than 1.6 hours of downtime per year). These increased uptimes are due to the more
sophisticated redundancy and infrastructure, which includes multiple power and cooling
distribution paths (so if one fails, there are others to fall back on). All IT equipment also has
multiple power sources in these data centers, and there are specific procedures in place to allow
maintenance and repairs to be done without shutting down the system. There is usually some sort
of power outage protection in place as well in Tier 3 facilities. All IT equipment must have dual
power supplies attached to different UPS units, such that a UPS unit can be taken off-line
without crashing servers or cutting off network connectivity. Redundant cooling systems must
also be in place so that if one cooling unit fails, the other one kicks in and continues to cool the
room. Tier 3 DCs are not fault tolerant, as they may still share components, such as utility company feeds and external cooling system components, that reside outside the data center[ CITATION Her17 \l 1033 ].
11.4 Tier 4 (Fault Tolerance)
A tier 4 DC incorporates all the capabilities found in tier 1, 2, and 3 DCs. In addition, all tier 4
power and cooling components are 2N fully redundant, meaning that all IT components are
serviced by two different utility power suppliers, two generators, two UPS systems, two power
distribution units (PDUs), and two different cooling systems powered (again) by different utility
power services. With a fully redundant infrastructure, a Tier 4 data center meets and exceeds all
of the requirements of the aforementioned three tiers. Not only do these data centers, preferred
by enterprise corporations, provide 99.995% uptime per year (less than 0.5 hours of downtime
per year), they also are complete with at least 96-hour power outage protection. The
redundancies built into Tier 4 data centers are made to ensure that the system can function
normally even if one or more pieces of equipment fail. Everything is redundant, including
generators, cooling units, power sources, and more, so that another system can immediately take
over in the event that another fails. Each power and cooling path is independent of the other (fully redundant). If any single power or cooling infrastructure component fails in a tier 4 DC, processing will continue without issue; IT processing can only be affected if components from two different electrical or cooling paths fail[ CITATION Her17 \l 1033 ].
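To illustrate why duplicating every path pushes availability up so sharply, the sketch below estimates the combined availability of a 2N pair, under the simplifying assumption that the two paths fail independently; the 99.9% single-path figure is purely illustrative.

```python
# Why 2N redundancy helps: with two independent paths, both must fail for an outage.
# Independence is a simplifying assumption; shared risks (site power, human error) reduce the benefit.

def parallel_availability(single: float, copies: int = 2) -> float:
    """Availability of N independent redundant copies of a component."""
    return 1 - (1 - single) ** copies

single_path = 0.999  # illustrative availability of one power or cooling path
redundant_pair = parallel_availability(single_path)

print(f"Single path:  {single_path:.4%}")
print(f"2N redundant: {redundant_pair:.6%}")  # roughly 99.9999 %
```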
12. Cooling System
Data centers are an integral part of our society, and those who work in the industry know that cooling has been, and will always be, an important part of this ecosystem. Computers in a data center work 24/7 at such tremendously high rates that they get exceptionally hot, so sophisticated cooling systems need to be applied for these computers to continue working without overheating.

Cooling systems have evolved over the years, and this section discusses data center cooling options, methods, and best practices, as well as new innovations that may be ahead in the coming years[ CITATION Isb18 \l 1033 ].

12.1 Closed-circuit air conditioning cooling

First of all, the temperature in the cold aisle will be maintained between 18 °C and 27 °C, and air humidity will be kept between 40% and 60% RH. Operating conditions with temperatures under 18 °C and high humidity, which may lead to condensation forming on IT devices, must always be avoided; to that end, a maximum temperature fluctuation of 5 °C per hour will also be enforced.
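One way to make this operating window concrete is as a simple monitoring check. The sketch below flags readings outside the 18-27 °C and 40-60% RH ranges and hourly swings above 5 °C; the sensor values in the example are invented for illustration.

```python
# Toy environmental check for the cold-aisle operating window described above.
# Sensor readings in the example calls are invented for illustration.

TEMP_RANGE_C = (18.0, 27.0)        # allowed cold-aisle temperature
HUMIDITY_RANGE_RH = (40.0, 60.0)   # allowed relative humidity, %
MAX_TEMP_SWING_C_PER_H = 5.0       # allowed temperature change per hour

def check_environment(temp_c, humidity_rh, temp_one_hour_ago_c):
    issues = []
    if not TEMP_RANGE_C[0] <= temp_c <= TEMP_RANGE_C[1]:
        issues.append(f"temperature {temp_c} C outside {TEMP_RANGE_C}")
    if not HUMIDITY_RANGE_RH[0] <= humidity_rh <= HUMIDITY_RANGE_RH[1]:
        issues.append(f"humidity {humidity_rh} %RH outside {HUMIDITY_RANGE_RH}")
    if abs(temp_c - temp_one_hour_ago_c) > MAX_TEMP_SWING_C_PER_H:
        issues.append("temperature changing faster than 5 C per hour")
    return issues or ["within operating window"]

print(check_environment(temp_c=25.0, humidity_rh=55.0, temp_one_hour_ago_c=22.0))
print(check_environment(temp_c=16.5, humidity_rh=65.0, temp_one_hour_ago_c=23.0))
```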
Closed-loop enclosure air conditioners are specifically designed to mount on to electronic
enclosures and remove heat without letting outside air into the sealed enclosure. This type of
cooling is typically used to cool electronic equipment housed inside a NEMA rated enclosure
which protects sensitive electronics from dust, splashing liquids and production residues. In the
closed-loop system, the heated enclosure air is drawn into the air conditioner by a powerful blower, heat and moisture are removed as the air passes through an evaporator coil, and the cooled air is forced back into the enclosure, maintaining the NEMA integrity of the enclosure[ CITATION ice19 \l 1033 ].

12.2 Direct-Cooling Principle - Water-Cooled Server Rack


Water is widely used to cool all kinds of machinery and industrial systems, but what about being
used to directly cool servers in a Data Center? Why isn't it a widespread solution?

The answer is pretty simple: Water + Electricity = Disaster.

Also known as liquid submersion cooling, it is the practice of submerging computer components
(or full servers) in a thermally, but not electrically, conductive liquid (dielectric coolant). Liquid
submersion is a routine method of cooling large power distribution components such as
transformers. Still rarely used for the cooling of IT Hardware, this method is slowly becoming
popular with innovative datacenters the world over. IT Hardware or servers cooled in this
manner don't require fans and the heat exchange between the warm coolant and cool water
circuit usually occurs through a heat exchanger (i.e. heater core or radiator). Some extreme
density supercomputers such as the Cray-2 and Cray T90 use large liquid-to-chilled liquid heat
exchangers for heat removal.
The liquid used must have sufficiently low electrical conductivity not to interfere with the
normal operation of the computer. If the liquid is somewhat electrically conductive, it may be
necessary to insulate certain parts of components susceptible to electromagnetic interference,
such as the CPU. For these reasons, it is preferred that the liquid be dielectric[ CITATION sub18
\l 1033 ].

12.3 Recommendation
Taking all of the above considerations into account, the proposed cooling system to be used by Panadox is a hot aisle containment system. It can be a more efficient approach than a cold aisle containment system because it permits higher working temperatures and extended chilled-water temperatures, which in turn bring increased economizer hours and significant electrical cost savings. Cooling set points can be raised while still maintaining a comfortable working temperature.
