PowerProtect DD and Data Domain Hardware Installation: Participant Guide
PARTICIPANT GUIDE
PowerProtect DD and Data Domain Hardware Installation-SSP
Preparing to Install
Systems Overview
Shown on the slide is the current Data Domain family. The Data Domain family
includes systems for small to large enterprise organizations.
The hardware models that ship with DD OS 7.0 are the DD3300, DD6900,
DD9400, and DD9900. The hardware models that shipped with DD OS 6.2 are the
DD3300, DD6300, DD6800, DD9300, and DD9800.
Basic Topology
Topology
Expansion Shelf
SAS
Data Domain
Controller
Serial
Fibre Channel Media Server
Expansion Shelf
Ethernet
Management
Third-Party
Switches
LAN Clients
Backup and
Archive Servers
The Data Domain system, which includes the controller and any additional expansion
shelves, is connected to storage applications using Ethernet or Fibre Channel.
Fibre Channel is used for VTL and DD Boost over Fibre Channel (FC). FC can also
be used for vDisk/ProtectPoint Block Services and anything that presents a target
LUN (logical unit number) on a Data Domain system. Ethernet is used for CIFS,
NFS, or DD Boost applications.
In the exploded view, the Data Domain controller is at the center of the topology
that is implemented through other connectivity and system configuration, including:
• Expansion shelves for extra storage, depending on the model and site
requirements
• Fibre Channel for VTL and other applications
• LAN environments for connectivity for Ethernet based data storage, for basic
data interactions, and for Ethernet-based system management
Installation Checklist
Before installing a Data Domain system, collect all necessary tools and
documentation for the specific system you are installing. Then, unpack and verify
the contents of the shipped components to ensure the system you received
matches what was ordered.
If a pre-configured system was ordered, the job of rack-mounting the hardware has
already been taken care of. If it is not a pre-configured system, you will need to
rack mount the hardware units and connect any expansion shelves to the
controller.
When the Data Domain system is racked and expansion shelves are attached,
connect it to the network using Ethernet or Fibre Channel. Also, attach a terminal to
the system and power up the Data Domain system to perform the initial setup and
verify the status of the hardware.
There are various tools, supplies, and cables that you need to use to install the
Data Domain hardware.
For initial network connectivity, an Ethernet cable is required. Use a null modem
cable or USB-to-DB9 serial male connector for initial connection.
Log into the system and run DD OS commands with your laptop. The
recommended terminal emulation program is SecureCRT®, configured with a
5,000 line or larger buffer. Any version of SecureCRT works. If SecureCRT is not
available, use PuTTY version 0.58 or later. A 2 GB or greater USB flash memory
drive is also recommended.
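As a sketch, the serial console can be opened from a laptop with any terminal emulator. The settings below assume the common Data Domain console defaults of 9600 baud, 8 data bits, no parity, 1 stop bit; the device name and COM port are examples, so confirm the values in the Installation and Setup Guide for your model.

```
# Open the serial console from a Linux/macOS laptop (device name will vary)
screen /dev/ttyUSB0 9600

# PuTTY equivalent: Connection type "Serial", Serial line COM3,
# Speed 9600, Data bits 8, Stop bits 1, Parity None, Flow control None
```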
• Wrist strap
• Screwdrivers
• Flashlight
• Wire cutters
• Pliers
• Tie wraps
• Velcro cable wraps
Site Requirements
The site should provide adequate space, power, air conditioning, temperature
control, ventilation, and airflow. The Data Domain appliance fits most
common datacenter racks.
For specific Data Domain Model information, see the Data Domain Installation and
Setup Guide, Data Domain Systems Rail Installation and System Mounting
Procedures, or Expansion Shelf Hardware Guide.
Most Data Domain systems support the hot addition of expansion shelves. Some
models support up to 24 shelves. These large capacity systems require additional
planning and resources for installation.
1. Rack
If necessary, installation may be split across more than one rack, but this
requires advance site-specific planning to determine appropriate SAS or
interconnect cable lengths. Whenever possible, grouping shelves in logical sets
within contiguous rack spaces simplifies the shelf-to-shelf SAS cabling.
The logical shelf spacing is divided into three or more shelf sets, labeled 1, 2,
3, and so forth.
2. Cables
3. Space
Plan ahead by leaving space in the rack for additional shelves based on the
maximum amount of expected storage. Leaving space enables simple,
predictable upgrades if the site expects further expansion of storage capacity
in the future.
Safety Requirements
Documentation Resources
Data Domain Documentation is your most important resource. Each Data Domain
model (controllers and expansion shelves) has specific documentation for installing
and configuring the Data Domain system. Before installing a particular Data
Domain model in the field, become familiar with the Data Domain Installation and
Setup Guide, Data Domain Systems Rail Installation and System Mounting
Procedures, Expansion Shelf Hardware Guide, and related documents for the
model system you are installing.
Installing Hardware
Data Domain platforms including controller and expansion shelves are shipped to
the customer site in one of two possible ways:
• The platform is preinstalled and cabled in Dell EMC 40 Rack Unit (RU) rack(s).
• Typically, AC power cables are already plugged into the power distribution
unit (PDU), and SAS cables are preconnected within each rack
• At the site, installers install the rack-to-rack cabling and perform some
adjustment to cabling within racks 2 to 5 as necessary.
• Units are shipped in separate boxes for installation into Dell EMC or third-party
rack(s)
Data Domain preconfigured racks are available for most currently shipping Data
Domain models. Depending on the model, the preconfigured rack can contain
ES30, ES40, or DS60, and FS15 and FS25 shelves. See the Dell EMC Data
Domain Hardware Features and Specifications Guide for detailed information,
including performance, capacity, and physical specifications about preconfigured
systems.
Rack 1 is the main rack with the controller and is installed in RU 13-16. Shelves are
loaded from bottom of the rack first with expansion shelves cabled in groups of 4.
For shelf counts not in increments of 4, the last group will contain fewer than 4
shelves.
Racks 2 through 5 contain expansion shelves only. Shelves are loaded from the
bottom of the rack first and connected in groups of 4. For shelf counts not in
increments of 4, the last group contains fewer than four shelves. There is
always a gap in RUs 13 through 16 for manufacturing economy of scale. Racks 2
through 5 are connected to rack 1 at the customer site.
Refer to the Dell EMC Data Domain Rail Kit for further rail kit information. For
specific model information, refer to the System Mounting Procedures Guide and the
Dell EMC Data Domain System Installation and Setup Guide.
AC Power Distribution
Ensure that the racked systems do not overload the power distribution system;
otherwise, the breakers will trip and shut down all the systems that are connected
to the power distribution system.
Pre-configured racks are shipped with one set of Power Distribution Panels (PDPs)
supporting four Power Distribution Units (PDUs). The second PDP set must be
used when power exceeds the power capacity of a single PDP. Also, an additional
Power Cord kit must be ordered and connected to building AC circuits. Using the
second set of PDPs doubles the power available to the rack.
To determine the amount of power that a rack requires, check the Dell EMC power
calculator (http://powercalculator.emc.com).
When you open the box containing the Data Domain system you should see a
screwdriver, product documentation, power cords, and a null modem cable.
You should also see the packaged rails for mounting the chassis, rail installation
instructions, and the Data Domain controller or expansion shelf.
1: Screwdriver
2: Documentation
4: Power cord
5: Rails
An accessory kit box includes several items that are needed to install the Data
Domain hardware, such as various cables, rail adapters, keys, a bezel and bezel
clips, and Velcro strips.
• Various cables
• Rail adapters
• Keys
• Bezel
• Bezel clips
• Velcro strips
An additional, extended length screwdriver may also be included for some specific
screw locations. A console null modem serial cable is included for serial console
connection.
Once onsite, verify the received Data Domain equipment against the order placed
by the customer. Ask the customer for the purchase order (P.O.) if it is not in the
shipping box.
Open the box and compare the components with the P.O.:
• System model
• Cards (HBA, NICs)
• System Cables
• Power Cables
• Licenses
In a shipment with multiple appliances, each appliance may have different licenses.
Install the correctly licensed appliance for its function. If for any reason the
equipment is not correct, immediately contact Dell EMC Support.
Types of Mounts
There are three possible rack hole types: round, square, or tapped.
The screws that you use to fasten the outer rails to the rack depend on which
type of rack is used.
Each screw type is clearly labeled in the equipment kit. Use the labeled
screws provided for the appropriate rack. Use the correct mount based on the unit
size, the provided rail types (sliding or nonsliding), and the chassis release mechanisms.
Cage Nuts
Cage nuts are only required when fastening a rail to a square holed rack. Cage
nuts are not required when using a round or tapped hole rack. Use a cage nut tool
to attach the cage nut to the rack post.
There are several basic steps common to all units when rack mounting Data
Domain Controllers. Prepare the rack holes working with the correct screws or
applying adapters for the correct type of hole.
Next, install the slide rails or the outer rails to the rack itself. Then attach rails to
each side of the controller. Once all rails are in place, mount the chassis, secure it
to the rack, and finally attach the front bezel.
Pull Handle
To help with the rack mounting process, some Data Domain systems have red D-
shaped pull handles. D-shaped pull handles are low profile, flip-down handles used
for sliding the system in and out of the rack. The handles retract against the front
face of the fan tray.
Expansion Shelves
Data Domain systems can use ES30, ES40, FS15, and DS60 expansion shelves to
add capacity. This table shows expansion shelves for models shipping with DD OS
6.2. For all legacy systems, consult the “EMC Data Domain ES30 Expansion Shelf
and FS15 SSD Shelf Hardware Guide” for specific information.
The ES30-SATA can accommodate fifteen 1, 2, or 3 TB drives and supports
the DD6800, DD9300, and DD9800. The ES30-SAS can accommodate fifteen 2 or
3 TB drives and supports the DD6300, DD6800, DD9300, and DD9800.
Both the ES30-SATA and ES30-SAS have one spare drive. ES30-SATA and
ES30-SAS shelves can be attached to the same head unit, but cannot be
combined in the same set.
The ES30-60 can accommodate fifteen 4 TB drives, and supports the DD6300,
DD6800, DD9300, and DD9800.
The ES40 can accommodate fifteen 4 or 8 TB drives, and supports the DD6300,
DD6800, DD9300, and DD9800.
The FS15 is a solid-state expansion shelf. The FS15 is used exclusively for the
metadata cache in the active or extended retention tiers of a Data Domain system.
It supports 2, 5, 8, or 15 800 GB SSD drives. When configured for high availability,
the DD6800 requires the FS15 to use two or five disks. DD9300 models require five
or eight disks. The DD9800 supports an FS15 shelf configured with eight disks
(256 GB) to 15 disks (768 GB).
The FS25 SSD shelf is similar to the FS15 in purpose and build. It contains either
10 or 15 4 TB SSD drives and is used only for metadata on the DD6300,
DD6800, and DD9300 in high availability configurations.
The DS60 (Dense Storage) shelf supports 3 TB and 4 TB SAS drives in 15-drive
increments, up to 60 drives per shelf. The DS60 is available for the DD6300, DD6800,
DD9300, and DD9800 systems.
                  ES30-SATA  ES30-SAS  ES30-60  ES40  FS15            FS25      DS60
Number of Drives  15         15        15       15    2, 5, 8, or 15  10 or 15  15
Spare Drives      1          1         2        1     1               1         1
Expansion shelves are connected to each other and to the Data Domain controller
with SAS (serial-attached SCSI) cables.
• One type of SAS cable has the same connector at both ends and is used to
connect ES30s to each other or to connect ES30s to controllers with SAS HBAs
• The other type of cable has a different connector on one end and is used to
connect ES30s to controllers that have SAS I/O modules
The connector on the ES30 is called mini-SAS. The I/O module connector is called
HD-mini-SAS. The cables with HD-mini-SAS at one end are available in 2M, 3M,
and 5M lengths. The cables with mini-SAS connectors at both ends are available in
1M, 2M, 3M, and 5M lengths.
The mini-SAS connectors are keyed and labeled with an identifying symbol: a dot
for the host port and a diamond for the expansion port.
1: ES30 Cables
The cable used to include the FS15 in a SAS chain is the mini-SAS type. The cable
is keyed and labeled with different host and expansion connectors in the SAS
chain. These cables are available in 1M, 2M, 3M, and 5M lengths.
The connectors are keyed and labeled with an identifying symbol: a dot for the host
port and a diamond for the expansion port. The expansion shelves are 3U in size
and the Data Domain controllers that support the ES30 shelf are either 2U or 4U.
When a 2U controller is mounted in a 4U gap, it can be mounted in any of the three
positions in that gap. For more information see the Data Domain System Hardware
Guide for your specific model.
There is a physical shelf count limit per SAS string. You cannot attach an FS15
shelf to a SAS string already containing a maximum number of shelves – 7 ES30s,
for instance. Attach it to a string with fewer than seven shelves. The FS15 is always
counted in the number of ES30 shelf maximums but since it is only used for
metadata, it does not affect capacity.
For more information on racking, cabling and configuring the FS15 expansion shelf,
see the Dell EMC Data Domain ES30 Expansion Shelf and FS15 SSD Hardware
Guide.
1: FS15 Cables
DS60 Cables
The DS60 shelves use cables with HD-mini-SAS connectors at both ends to
connect the shelves to the controllers.
• Use the 3M cable in the same rack either to connect to a controller or shelf-to-
shelf
• Use a 3M, 4M, or 5M cable when connecting a DS60 from one rack to another
• Use the 3M shelf-to-shelf cables to connect shelves to other shelves within a
shelf set in the same rack
• Use a 3M, 4M, or 5M cable to connect shelves to other shelves when the set
spans racks
• Special cables must be used when attaching an ES30 to a chain with a DS60.
The DS60 shelves are 5U and a Data Domain controller that supports DS60
shelves is 4U. For more information see the Data Domain System Hardware Guide
for your specific model. For more information on racking, cabling and configuring
the DS60 expansion shelf, see the Dell EMC Data Domain DS60 Expansion Shelf
Hardware Guide.
1: DS60 Cables
2: SAS Ports
4: SAS Ports
The Data Domain system rediscovers newly configured shelves after it restarts.
You can power down the system and recable shelves to any other position in a set,
or to another set. To take advantage of this flexibility, follow these rules before
making any cabling changes:
• Do not exceed the maximum shelf configuration values for your Data Domain
system
• For redundancy, the two connections from a Data Domain system to one or
more shelves must use ports on different SAS HBAs or I/O modules
• A Data Domain system cannot exceed its maximum raw external shelf capacity,
regardless of added shelf capacity
• If ES30 or ES40 SAS shelves are on the same chain as a DS60, the maximum
number of shelves on that chain is five
• ES30 SATA shelves must be on their own chain
• When used, an FS15 shelf does not count against the capacity total, but it is
counted for shelf limits
• When used, an FS25 shelf is cabled on a separate, private chain
Use the Installation and Setup Guide for your Data Domain system to minimize the
chance of a cabling mistake. For more information see the Data Domain System
Hardware Guide for your specific model.
Cabling Basics
Data Domain
Here is an example of a Data Domain controller with two expansion shelves. There
are some general cabling rules for connecting expansion shelves to a Data Domain
system:
• The Data Domain controller HBA port should always connect to the Host port of
an expansion shelf. In other words, the HOST port on the expansion shelf
connects upstream to the Data Domain Controller.
• The expansion port on the expansion shelf is used to connect downstream to
another expansion shelf.
• The expansion port on the last shelf should be empty. It does not connect back
to the Data Domain controller.
In this example:
• Cable 1 (C1) connects from Port a on the SAS controller in Slot 7 (right) to Port
A of storage controller S-B on the first shelf.
• Cable 2 (C2) connects from Port a on the SAS controller in Slot 3 (left) to Port A
of shelf controller S-A on the last shelf.
• Cable 3 (C3) connects from Port B of S-B on the first shelf to Port A of S-B on
the last shelf.
• Cable 4 (C4) connects from Port B of S-A on the last shelf to Port A of
S-A on the first shelf.
For more information see the Data Domain System Hardware Guide for your
specific model.
Cabling Order
For systems that do not come pre-configured, there is a recommended order for
the hot addition of expansion shelves over time.
• In steps 1, 2, 3, and 4 in the example, establish the first expansion shelf at the
bottom of each shelf set based on the system capacity. Position each shelf in
the rack according to the diagram shipped with the specific system. This
positioning establishes the full range of rack space required for future
expansion of capacity. It also allows for easy installation of extra shelves into
any shelf set: install the shelf and recable so that the B-side cable from the
controller is connected to the host port on the new shelf at the end of the
chain, and add interconnect cables between the two shelves.
• In steps 5, 6, 7, and 8 add additional shelves from the bottom up in each shelf
set.
• Continue to add shelves in steps 9, 10, and so forth, up to the maximum capacity
of the system. In this example, the system supports a maximum of 18 shelves
installed and positioned in two racks as shown.
When you unlock and remove the snap-on bezel from the front panel, the 15 disks
are visible. Disk numbers range from 1 to 15 as reported by system commands.
When facing the front of the panel, disk 1 is the leftmost disk and 15 is the far right
disk.
Indicators on the appliance will show disk slot numbering from 0 to 14, but the
software refers to logical numbering of 1-15.
1: Disk 1
2: Disk 15
The following diagram is a top view of the DS60 labeled drive locations.
Drives installed in the DS60 are only visible when the enclosure is pulled out of the
rack. To access the drives, pull the chassis forward from the rack and remove the
top cover. The drives are installed in packs of 15. Packs are color-coded
within the enclosure: purple is pack 1, yellow is pack 2, green is pack 3, and pink is
pack 4.
Slots are identified in rows of 12 (0-11). There is room in the DS60 for 60 drives,
or 4 packs, total. A pack must contain drives of the same size, but packs of different
drive sizes can be mixed within the DS60. For example, Pack 1 may contain fifteen
4 TB drives while Pack 2 may contain fifteen 3 TB drives.
Initial Configuration
Once the network information has been saved, additional information can continue
to be provided in the CLI or the GUI.
The Pre-Engagement Questionnaire (PEQ) serves as a shared document between
Dell EMC and authorized customers and partners. The spreadsheet is available for
download from the internal Dell EMC website: https://elabadvisor.psapps.emc.com.
The PEQ also contains important reference charts and checklists to help manage
the installation.
The serial number of the Data Domain system is in a different location on each
model. For some models, the system serial number is found on the center right
edge of the back panel.
For newer models, a serial number tag is located on the left side of the back panel.
Log in with the default username admin. The default password is the system serial
number printed on the product serial number tag (PSNT).
Emulator Settings
When the Data Domain system boots up for the first time, the CLI configuration
wizard script will start automatically. The script can also be started manually using
the command config setup.
The first prompt asks if you want to use the GUI wizard. The answer determines
whether the shortened version of the CLI wizard starts, followed by the System
Manager Configuration wizard, or whether the complete CLI wizard is used.
If the choice is yes, only the bare minimum configuration data is collected to
configure network access. The shortened CLI wizard prompts for data that was
collected in the system setup worksheet. At the end of the section, a prompt to
accept or reject the changes appears.
Once the configuration data has been saved, the wizard requests the user to
launch the System Manager Configuration Wizard to finalize the setup.
If the user declines to use the GUI wizard, the CLI wizard starts with the section for
license configuration. The CLI wizard continues to the network configuration
section, the file system configuration section, and the system configuration section.
Each section displays a summary and prompt to either accept or reject the changes
as it would in the shortened version.
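A shortened CLI wizard session might look like the following sketch. The exact prompts and defaults vary by DD OS version, and the hostname shown is a placeholder:

```
# Start the configuration wizard manually from the CLI
sysadmin@dd01# config setup

Do you want to use GUI wizard for initial configuration (yes|no) [no]: yes
Network Configuration
  Configure Network at this time (yes|no) [no]: yes
  Use DHCP for hostname and domainname (yes|no) [no]: no
  Enter the hostname: dd01.example.com
  ...
Do you want to save these settings (Save|Cancel|Retry): Save
```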
The System Manager GUI can be used to configure the same information. The GUI
is used once the initial configuration is completed from the CLI. Using a web
browser, open the Data Domain System Manager and find the wizard by selecting
Maintenance > System > Configure System.
The individual sections are listed on the left and the details of the sections are on
the right. Sections may be skipped, with the exception of the license configuration
section. Both configuration wizards will suggest a reboot when complete. If the time
zone was changed during the configuration then the reboot is mandatory.
For each expansion shelf installed in the rack, such as the ES30 or DS60, there
must also be a shelf capacity license. The license is specific to either active tier or
archive tier (extended retention) usage of the shelf. The Expanded-Storage license
may also be required depending on the Data Domain system.
For Data Domain systems running DD OS 6.1 or later, only the Electronic Licensing
Management System (ELMS) is used to manage all feature and capacity licenses
for a Data Domain system. ELMS on Data Domain systems uses one license file
per system. The license file contains a single license for all purchased features.
If you use the Data Domain Legacy Licensing, input the license key using the
System Manager. Go to Administration > Licenses > Add Licenses to enter a single
license key per line. Click Add when done. The CLI can also be used to add or
update licensed features on a Data Domain system.
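As a sketch, both legacy license keys and ELMS license files can be applied from the CLI. The key and filename below are placeholders; check the DD OS Command Reference Guide for the exact syntax on your release:

```
# Legacy licensing: add a single feature license key (placeholder key)
sysadmin@dd01# license add ABCD-EFGH-IJKL-MNOP

# ELMS licensing (DD OS 6.1 and later): apply the per-system license file
sysadmin@dd01# elicense update license.lic
```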
Once the licenses have been added, the expansion shelf enclosures must be
added to either the active or archive tier. This procedure is performed in the
System Manager GUI.
From the home screen, go to Hardware > Storage > Overview and click Configure.
Recently installed shelves appear in the Addable Storage section where they can
be added to the necessary tier.
In CLI, use the storage add tier command to add storage to either the active or
archive tier.
Display the RAID group information for the active and archive tiers of each shelf by
entering the command storage show all. The rest of the disks should report that
they are either available or spare disks.
In order for the file system to make use of the new space enter the CLI command #
filesys expand. Begin the file system operations with the CLI command #
filesys enable.
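From the CLI, adding a new shelf to the active tier and expanding the file system might look like the following sketch. The enclosure ID is an example, and the argument order of storage add may differ by DD OS release:

```
# Add enclosure 2 to the active tier (enclosure ID is an example)
sysadmin@dd01# storage add tier active enclosure 2

# Let the file system use the new space, then resume operations
sysadmin@dd01# filesys expand
sysadmin@dd01# filesys enable

# Confirm RAID group and disk assignment
sysadmin@dd01# storage show all
```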
Once the expansion shelf is installed and online, perform a few steps to verify the
state of the file system and disks. Check the status of the SAS HBA cards by
entering the command disk port show summary.
The output shows the port for each SAS connection and the online status. When
the shelves are connected, the output displays the connected enclosure IDs for
each port. The status of the connected shelves changes to online.
After adding expansion shelves, verify the state of the disks with the command disk
show state. See the legend in the command output for state definitions. Some disk
states include spare, available, unknown, and reconstructing. The progress and
time remaining are for disks that are in a reconstructing state.
For disks labeled "unknown" instead of "spare" in the output of the # disk show
state command, enter the # disk unfail command for each unknown disk.
For example, if disk 2.1 is labeled "unknown", enter the command: # disk
unfail 2.1.
Verify that the Data Domain system recognizes the shelves by entering the
command enclosure show summary.
This command shows each recognized enclosure ID, model number, serial
number, and slot capacity. The command also shows the state of the enclosure
and information about the shelf manufacturer.
Verify the state of the file system by entering the command # filesys
status. It should show as available and running. When a shelf is added to the file
system, run the command # filesys show space. This command shows
the total size, used, and available space for each file system resource. You can
also view data, metadata, and index usage.
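The verification steps above can be summarized as one command sequence; output is omitted here and will vary by system:

```
sysadmin@dd01# disk port show summary    # SAS HBA ports online, enclosure IDs
sysadmin@dd01# disk show state           # per-disk state (spare, available, ...)
sysadmin@dd01# enclosure show summary    # recognized shelves, model, serial
sysadmin@dd01# filesys status            # should report available and running
sysadmin@dd01# filesys show space        # size, used, available per resource
```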
Other Configuration
Additional Configuration
The DD3300, DD6900, DD9400, and DD9900 all support remote management
through iDRAC. iDRAC gives system administrators the ability to configure a
system as if they were at the local console.
All Data Domain systems can utilize two industry-standard specifications, IPMI
(Intelligent Platform Management Interface) and SOL (Serial over LAN), to enable
remote power and console management from a remote site.
IPMI and SOL access requires two systems: one initiator and one target. The
target is a Data Domain system; the initiator can be another Data Domain system
or any computer with IPMI tools installed.
To access one Data Domain system from another, you can use the System
Manager. Go to Maintenance > IPMI and click the Login to Remote System button.
Enter the IPMI IP address or DNS name, username, and password for an IPMI
user and click Connect.
To access a Data Domain system from a non-Data Domain system you can use
the ipmitool. ipmitool is an open source program for management of systems that
support IPMI v2.0. To enable a computer to be used as an initiator, locate a
compatible copy of the open source ipmitool, download and install it on the
computer.
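For instance, once ipmitool is installed on the initiator, remote power management and SOL console access take the following general form. The address, username, and password are placeholders for the target system's IPMI settings:

```
# Check power state of the target Data Domain system's BMC
ipmitool -I lanplus -H 192.0.2.10 -U ipmiuser -P <password> power status

# Power the target on or off remotely
ipmitool -I lanplus -H 192.0.2.10 -U ipmiuser -P <password> power on

# Open a Serial over LAN console session (exit with ~.)
ipmitool -I lanplus -H 192.0.2.10 -U ipmiuser -P <password> sol activate
```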
For DD3300, DD6900, DD9400, and DD9900 systems, set up iDRAC by attaching
an Ethernet cable from the dedicated iDRAC port on the back panel of the
system to your network.
1: iDRAC Port
When connected, you can use an HTML web interface to navigate to the default
iDRAC IP address 192.168.0.120.
All shipping Data Domain models have a dedicated IPMI management Ethernet
port. An Ethernet cable is connected to the dedicated Ethernet port and then to the
LAN.
The dedicated Ethernet port is configured with any available IPMI IP address.
Configure IPMI on a separate management network in case the data LAN goes
down. The separate management network should be used only for IPMI and SOL
access. The dedicated Ethernet port name is bmc0a. The other Ethernet ports like
eth0 and eth1 on the system are used only for data and normal operations.
Some older models with DD OS 5.7 may require sideband access for IPMI and
SOL. Sideband access uses one of the standard Ethernet ports. Each IPMI-
enabled Ethernet port is shared for data and normal operations, and for IPMI power
management and SOL access. The IPMI IP address must be on the same network
as the data access.
Shared port names are bmc-eth01 and bmc-eth02. To determine the specific location
of the Ethernet ports and whether out-of-band or sideband access is supported,
consult the EMC Data Domain System Hardware Overview document for information
on your specific model.
You can perform IPMI configuration in the System Manager. On the target Data
Domain system, go to Maintenance > IPMI, select the port you wish to enable, and
click the Enable button.
To configure the port, select the port and click the Configure button. In the
Configure Port dialog you can configure the port for either DHCP or a static IP
address, netmask and gateway.
Add IPMI users by clicking the Add button in the IPMI Users section. The IPMI
users are independent of other users on the Data Domain system. Usernames and
passwords that are used for IPMI users can be different from any other users who
are created on the system.
You can also configure the target Data Domain system using the CLI with the ipmi
config and ipmi user commands. See the EMC Data Domain Operating
System Command Reference Guide for complete information on using any Data
Domain CLI commands.
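On the target system, the CLI equivalent is roughly the following sketch. The port name and addresses are examples, and the exact syntax varies by DD OS release:

```
# Assign a static address to the IPMI management port (example values)
sysadmin@dd01# ipmi config bmc0a ipaddress 192.0.2.20 netmask 255.255.255.0 gateway 192.0.2.1

# Create an IPMI user (independent of DD OS users); a password prompt follows
sysadmin@dd01# ipmi user add ipmiuser

# Enable the port and verify the configuration
sysadmin@dd01# ipmi enable bmc0a
sysadmin@dd01# ipmi show config
```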
This demonstration shows how to configure IPMI and SOL to enable Data Domain
remote system management.
Configure iDRAC
The current Data Domain family supports the Integrated Dell Remote Access
Controller (iDRAC) with Lifecycle Controller to remotely power the system off or on.
To access iDRAC, connect an Ethernet cable from the iDRAC dedicated network
port on the rear panel of the system to your network.
To configure iDRAC, use an HTML web interface to navigate to the default iDRAC
IP address 192.168.0.120. Alternatively, you can connect directly to the iDRAC
direct USB port with a USB cable and a laptop.
Login
Log in with the default username: admin. The default password is the system serial
number printed on the product serial number tag (PSNT).
Change IP Settings
You can change the iDRAC IP address in iDRAC Settings | Connectivity | IPv4
Settings.
Dashboard
The iDRAC GUI dashboard displays key system and health information outside of
the Data Domain operating system. iDRAC monitors and reports health status for
components such as batteries, cooling, CPUs, and memory.
Power the system off with iDRAC by selecting Dashboard | Graceful Shutdown or
by using one of these selections:
DD OS Configuration
You can configure the Data Domain operating system using the # config
setup command in the CLI. You can also configure the system using the Data
Domain System Manager Configuration Wizard.
Autosupport reports and alert messages help solve Data Domain system problems.
Autosupport reports include output from Data Domain system commands and
entries from various log files. The extensive and detailed internal statistics and log
information in the autosupport aid Data Domain Support in identifying and
debugging system problems.
Autosupport reports are simple text logs sent by email. Autosupport report
distribution can be scheduled, with the default time being 6:00 a.m. During normal
operation, a Data Domain system may produce warnings or encounter failures
whereby the administrators must be informed immediately.
You can also configure Autosupport settings from the CLI using the autosupport
family of commands. To test Autosupport delivery you can use the autosupport test
command.
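For example, using the autosupport command family (the email address is a placeholder, and subcommand names may vary slightly by DD OS release):

```
# Review the current autosupport configuration and schedule
sysadmin@dd01# autosupport show all

# Send a test autosupport email to confirm delivery
sysadmin@dd01# autosupport test email admin@example.com
```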
Verify Hardware
When you install your Data Domain system, verify that you have the correct system
identifiers. Verify the model number, DD OS version, and serial number to ensure
that they match what was ordered.
To verify the system number, chassis number, and enclosure status, click Chassis.
These settings can also be identified through the CLI.
See the Dell EMC Data Domain Operating System Command Reference Guide for
details on using the commands referenced in this course.
When you expand the Active Tier section on the Overview tab, it displays
information about Disks in Use and Disks Not in Use.
Expand Addable Storage for details on optional enclosures that are available for
the system. In this example, there are no additional enclosures available.
Failed/Foreign/Absent Disks (Excluding System Disks) displays the disks that are
in a failed state. These disks cannot be added to the system's Active or Retention
tiers.
The Hardware Storage section under the Enclosures tab displays a table
summarizing the details of the enclosures connected to the system.
The Disks tab displays the Disk State table with information on each of the system
disks. You can filter the disks viewed to display all disks, disks in a specific tier, or
disks in a specific group.
Use the beacon feature to flash an LED on a hard drive. The beacon feature
matches the flashing LED to a disk displayed in the table.
Disk fail functionality enables you to manually set a disk to a failed state to force
reconstruction of the data stored on the disk. Disk unfail functionality takes a
disk in a failed state and returns it to operation.