Data Domain OS
Version 6.0
Administration Guide
302-003-094
REV. 04
Copyright 2010-2017 Dell Inc. or its subsidiaries. All rights reserved.
Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS-IS. DELL MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND
WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. USE, COPYING, AND DISTRIBUTION OF ANY DELL SOFTWARE DESCRIBED
IN THIS PUBLICATION REQUIRES AN APPLICABLE SOFTWARE LICENSE.
Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.
Published in the USA.
EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000 (in North America: 1-866-464-7381)
www.EMC.com
As part of an effort to improve its product lines, EMC periodically releases revisions of
its software and hardware. Therefore, some functions described in this document
might not be supported by all versions of the software or hardware currently in use.
The product release notes provide the most up-to-date information on product
features, software updates, software compatibility guides, and information about EMC
products, licensing, and service.
Contact your EMC technical support professional if a product does not function
properly or does not function as described in this document.
Note
This document was accurate at publication time. Go to EMC Online Support (https://
support.emc.com) to ensure that you are using the latest version of this document.
Purpose
This guide explains how to manage the EMC Data Domain systems with an emphasis
on procedures using the EMC Data Domain System Manager (DD System Manager), a
browser-based graphical user interface (GUI). If an important administrative task is
not supported in DD System Manager, the Command Line Interface (CLI) commands
are described.
Audience
This guide is for system administrators who are familiar with standard backup
software packages and general backup administration.
Related documentation
The following Data Domain system documents provide additional information:
l Installation and setup guide for your system, for example, EMC Data Domain
DD2500 Storage System, Installation and Setup Guide
l EMC Data Domain DD9500/DD9800 Hardware Overview and Installation Guide
l EMC Data Domain DD6300/DD6800/DD9300 Hardware Overview and Installation
Guide
l EMC Data Domain Operating System USB Installation Guide
l EMC Data Domain Operating System DVD Installation Guide
l EMC Data Domain Operating System Release Notes
l EMC Data Domain Operating System Initial Configuration Guide
l EMC Data Domain Product Security Guide
Note
A note identifies information that is incidental, but not essential, to the topic. Notes
can provide an explanation, a comment, reinforcement of a point in the text, or just a
related point.
Typographical conventions
EMC uses the following type style conventions in this document:
Table 1 Typography
Monospace italic   Highlights a variable name that must be replaced with a variable value.
Monospace bold     Indicates text for user input.
Technical support
Go to EMC Online Support and click Service Center. You will see several options
for contacting EMC Technical Support. Note that to open a service request, you
must have a valid support agreement. Contact your EMC sales representative for
details about obtaining a valid support agreement or with questions about your
account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and
overall quality of the user publications. Send your opinions of this document to:
DPAD.Doc.Feedback@emc.com.
l Revision history..................................................................................................20
l EMC Data Domain system overview................................................................... 21
l EMC Data Domain system features.................................................................... 21
l Storage environment integration........................................................................27
Revision history
The revision history lists the major changes to this document to support DD OS
Release 6.0.
03 (6.0.1), February 2017: This revision includes information about these new features:
l Amazon Web Services (AWS) Signature Version 4
request signing support for DD Cloud Tier
l Support for new AWS regions (The China region is
not supported.)
l Enhancements to DD Cloud Tier resiliency
l DD System Manager enhancements, including
support for DD Cloud Tier file recall
02 (6.0), October 2016: This revision includes information about these new features:
l Four new hardware models: DD6300, DD6800,
DD9300, and DD9800
l Metadata on Flash (MDoF)
l Instant access-instant restore
l Azure support for Data Domain Cloud Tier
01 (6.0), September 2016: This revision includes information about these new features:
l Data Domain Cloud Tier
l Minimally disruptive upgrade
l RPM signature verification
l Replication context scaling
l Directory-to-MTree replication migration
EMC Data Domain system overview
Systems consist of appliances that vary in storage capacity and data throughput.
Systems are typically configured with expansion enclosures that add storage space.
Data integrity
The DD OS Data Invulnerability Architecture protects against data loss from
hardware and software failures.
l When writing to disk, the DD OS creates and stores checksums and self-
describing metadata for all data received. After writing the data to disk, the DD OS
then recomputes and verifies the checksums and metadata.
l An append-only write policy guards against overwriting valid data.
l After a backup completes, a validation process examines what was written to disk
and verifies that all file segments are logically correct within the file system and
that the data is identical before and after writing to disk.
l In the background, the online verify operation continuously checks that data on
the disks is correct and unchanged since the earlier validation process.
l Storage in most Data Domain systems is set up in a double parity RAID 6
configuration (two parity drives). Additionally, most configurations include a hot
spare in each enclosure, except the DD1xx series systems, which use eight disks.
Each parity stripe uses block checksums to ensure that data is correct.
Checksums are constantly used during the online verify operation and while data is
read from the Data Domain system. With double parity, the system can fix
simultaneous errors on as many as two disks.
l To keep data synchronized during a hardware or power failure, the Data Domain
system uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An
NVRAM card with fully charged batteries (the typical state) can retain data for a
period of hours, which is determined by the hardware in use.
l When reading data back on a restore operation, the DD OS uses multiple layers of
consistency checks to verify that restored data is correct.
l When writing to SSD cache, the DD OS:
Data deduplication
DD OS data deduplication identifies redundant data during each backup and stores
unique data just once.
The storage of unique data is invisible to backup software and independent of data
format. Data can be structured, such as databases, or unstructured, such as text files.
Data can derive from file systems or from raw volumes.
Typical deduplication ratios are 20-to-1, on average, over many weeks. This ratio
assumes there are weekly full backups and daily incremental backups. A backup that
includes many duplicate or similar files (files copied several times with minor changes)
benefits the most from deduplication.
Depending on backup volume, size, retention period, and rate of change, the amount
of deduplication can vary. The best deduplication happens with backup volume sizes
of at least 10 MiB (MiB is the base 2 equivalent of MB).
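As a purely illustrative sketch (the client size, change rate, and retention below are assumptions, not measured results), the 20-to-1 figure can be sanity-checked with simple arithmetic:

   Assume a 1 TB client, weekly fulls, daily 1% (10 GB) incrementals, 16-week retention.
   Logical data written:   16 fulls x 1 TB          = 16.00 TB
                           96 incrementals x 10 GB  =  0.96 TB
                           total logical            ~ 17 TB
   Physical data stored:   ~1 TB of unique data plus changed segments ~ 0.85 TB
   Deduplication ratio:    17 TB / 0.85 TB          ~ 20:1

Actual ratios depend on data type, change rate, and retention, as described above.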
To take full advantage of multiple Data Domain systems, a site with more than one
Data Domain system must consistently back up the same client system or set of data
to the same Data Domain system. For example, if a full backup of all sales data goes
to Data Domain system A, maximum deduplication is achieved when the incremental
backups and future full backups for sales data also go to Data Domain system A.
Restore operations
File restore operations create little or no contention with backup or other restore
operations.
When backing up to disks on a Data Domain system, incremental backups are always
reliable and can be easily accessed. With tape backups, a restore operation may rely
on multiple tapes holding incremental backups. Also, the more incremental backups a
site stores on multiple tapes, the more time-consuming and risky the restore process
becomes; one bad tape can derail the entire restore.
Using a Data Domain system, you can perform full backups more frequently without
the penalty of storing redundant data. Unlike tape drive backups, multiple processes
can access a Data Domain system simultaneously. A Data Domain system allows your
site to offer safe, user-driven, single-file restore operations.
as a source for one or more pairs and a destination for one or more pairs. After
replication is started, the source system automatically sends any new backup data to
the destination system.
High Availability
The High Availability (HA) feature lets you configure two Data Domain systems as an
Active-Standby pair, providing redundancy in the event of a system failure. HA keeps
the active and standby systems in sync, so that if the active node fails due to
hardware or software issues, the standby node can take over services and continue
where the failing node left off.
The HA feature:
l Supports failover of backup, restore, replication and management services in a
two-node system. Automatic failover requires no user intervention.
l Provides a fully redundant design with no single point of failure within the system
when configured as recommended.
l Provides an Active-Standby system with no loss of performance on failover.
l Provides failover within 10 minutes for most operations. CIFS, DD VTL, and NDMP
must be restarted manually.
Note
The Hardware Overview and Installation Guides for the Data Domain systems that
support HA describe how to install a new HA system. The Data Domain Single
Node to HA Upgrade guide describes how to upgrade an existing system to an HA pair.
l Does not impact the ability to scale the product.
l Supports nondisruptive software updates.
HA is supported on the following Data Domain systems:
l DD6800
l DD9300
l DD9500
l DD9800
HA architecture
HA functionality is available for both IP and FC connections. Both nodes must have
access to the same IP networks, FC SANs, and hosts in order to achieve high
availability for the environment.
Over IP networks, HA uses a floating IP address to provide data access to the Data
Domain HA pair regardless of which physical node is the active node.
Over FC SANs, HA uses NPIV to move the FC WWNs between nodes, allowing the FC
initiators to re-establish connections after a failover.
Figure 1 on page 24 shows the HA architecture.
Figure 1 HA architecture
Note
The maximum random I/O stream count is limited to the maximum restore stream
count of a Data Domain system.
The random I/O enhancements allow the Data Domain system to support instant
access/instant restore functionality for backup applications such as Avamar and
NetWorker.
Note
Some systems support access using a keyboard and monitor attached directly to the
system.
Licensed features
Feature licenses allow you to purchase only those features you intend to use. Some
examples of features that require licenses are DD Extended Retention, DD Boost, and
storage capacity increases.
Consult with your EMC representative for information on purchasing licensed
features.
Each entry below lists the feature, the license name as it appears in the license key, and a description.

EMC Data Domain Boost (DDBOOST)   Enables the use of a Data Domain system with
the following applications: EMC Avamar, EMC NetWorker, Oracle RMAN, Quest vRanger,
Symantec Veritas NetBackup (NBU), and Backup Exec. The managed file replication (MFR)
feature of DD Boost also requires the DD Replicator license.

EMC Data Domain Cloud Tier (CLOUDTIER-CAPACITY)   Enables a Data Domain system to
move data from the active tier to low-cost, high-capacity object storage in the
public, private, or hybrid cloud for long-term retention.

EMC Data Domain Encryption (ENCRYPTION)   Allows data on system drives or external
storage to be encrypted while being saved and locked when moving the system to
another location.

EMC Data Domain I/OS, for IBM i operating environments (I/OS)   An I/OS license is
required when DD VTL is used to back up systems in the IBM i operating environment.
Apply this license before adding virtual tape drives to libraries.

EMC Data Domain Replicator (REPLICATION)   Adds DD Replicator for replication of
data from one Data Domain system to another. A license is required on each system.

EMC Data Domain Retention Lock Compliance Edition (RETENTION-LOCK-COMPLIANCE)
Meets the strictest data retention requirements from regulatory standards such as
SEC 17a-4.

EMC Data Domain Retention Lock Governance Edition (RETENTION-LOCK-GOVERNANCE)
Protects selected files from modification and deletion before a specified retention
period expires.

EMC Data Domain Shelf Capacity-Active Tier (CAPACITY-ACTIVE)   Enables a Data
Domain system to expand the active tier storage capacity to an additional enclosure
or a disk pack within an enclosure.

EMC Data Domain Storage Migration (STORAGE-MIGRATION-FOR-DATADOMAIN-SYSTEMS)
Enables migration of data from one enclosure to another to support replacement of
older, lower-capacity enclosures.

EMC Data Domain Virtual Tape Library, DD VTL (VTL)   Enables the use of a Data
Domain system as a virtual tape library over a Fibre Channel network. This license
also enables the NDMP Tape Server feature, which previously required a separate
license.

EMC High Availability (HA-ACTIVE-PASSIVE)   Enables the High Availability feature
in an Active-Standby configuration. You only need to purchase one HA license; the
license runs on the active node and is mirrored to the standby node.
Figure 2 legend:
1. Primary storage
2. Ethernet
3. Backup server
4. SCSI/Fibre Channel
5. Gigabit Ethernet or Fibre Channel
6. Tape system
7. Data Domain system
8. Management
9. NFS/CIFS/DD VTL/DD Boost
10. Data Verification
11. File system
12. Global deduplication and compression
13. RAID
As shown in Figure 2 on page 28, data flows to a Data Domain system through an
Ethernet or Fibre Channel connection. Data verification processes begin immediately
and continue while the data resides on the Data Domain system. In the file
system, the DD OS Global Compression algorithms dedupe and compress the data
for storage. Data is then sent to the disk RAID subsystem. When a restore operation is
required, data is retrieved from Data Domain storage, decompressed, verified for
consistency, and transferred to the backup servers over Ethernet (for NFS, CIFS, and
DD Boost) or Fibre Channel (for DD VTL and DD Boost).
The DD OS accommodates relatively large streams of sequential data from backup
software and is optimized for high throughput, continuous data verification, and high
compression. It also accommodates the large numbers of smaller files in nearline
storage (DD ArchiveStore).
When storing data from applications that are not specifically backup software, Data
Domain system performance is best under the following circumstances:
l Data is sent to the Data Domain system as sequential writes (no overwrites).
l Data is neither compressed nor encrypted before being sent to the Data Domain
system.
Note
Data Domain Management Center allows you to manage multiple systems from a
single browser window.
DD System Manager provides real-time graphs and tables that allow you to monitor
the status of system hardware components and configured features.
Additionally, a command set that performs all system functions is available to users at
the command-line interface (CLI). Commands configure system settings and provide
displays of system hardware status, feature configuration, and operation.
The command-line interface is available through a serial console or through an
Ethernet connection using SSH or Telnet.
Note
Some systems support access using a keyboard and monitor attached directly to the
system.
Note
DD System Manager uses HTTP port 80 and HTTPS port 443. If your Data
Domain system is behind a firewall, you may need to enable port 80 if using
HTTP, or port 443 if using HTTPS to reach the system. The port numbers can
be easily changed if security requirements dictate.
a secure log in, or you must install the certificate in your browser. For
instructions on how to install the certificate in your browser, see your browser
documentation.
3. Enter your assigned username and password.
Note
The initial username is sysadmin and the initial password is the system serial
number. For information on setting up a new system, see the EMC Data Domain
Operating System Initial Configuration Guide.
Note
If you enter an incorrect password 4 consecutive times, the system locks out
the specified username for 120 seconds. The login count and lockout period are
configurable and might be different on your system.
Note
If this is the first time you are logging in, you might be required to change your
password. If the system administrator has configured your username to require
a password change, you must change the password before gaining access to DD
System Manager.
5. To log out, click the log out button in the DD System Manager banner.
When you log out, the system displays the log in page with a message that your
log out is complete.
Page elements
The primary page elements are the banner, the navigation panel, the information
panels, and footer.
Figure 3 DD System Manager page components
1. Banner
2. Navigation panel
3. Information panels
4. Footer
Banner
The DD System Manager banner displays the program name and buttons for Refresh,
Log Out, and Help.
Navigation panel
The Navigation panel displays the highest level menu selections that you can use to
identify the system component or task that you want to manage.
The Navigation panel displays the top two levels of the navigation system. Click any
top level title to display the second level titles. Tabs and menus in the Information
panel provide additional navigation controls.
Information panel
The Information panel displays information and controls related to the selected item in
the Navigation panel. The information panel is where you find system status
information and configure a system.
Depending on the feature or task selected in the Navigation panel, the Information
panel may display a tab bar, topic areas, table view controls, and the More Tasks
menu.
Tab bar
Tabs provide access to different aspects of the topic selected in the Navigation panel.
Topic areas
Topic areas divide the Information panel into sections that represent different aspects
of the topic selected in the Navigation panel or parent tab.
For high-availability (HA) systems, the HA Readiness tab on the System Manager
dashboard indicates whether the HA system is ready to fail over from the active node
to the standby node. You can click on HA Readiness to navigate to the High
Availability section under HEALTH.
Working with table view options
Many of the views with tables of items contain controls for filtering, navigating, and
sorting the information in the table.
How to use common table controls:
l Click the diamond icon in a column heading to reverse the sort order of items in
the column.
l Click the < and > arrows at the bottom right of the view to move forward or
backward through the pages. To skip to the beginning of a sequence of pages,
click |<. To skip to the end, click >|.
l Use the scroll bar to view all items in a table.
l Enter text in the Filter By box to search for or prioritize the listing of those items.
l Click Update to refresh the list.
l Click Reset to return to the default listing.
More Tasks menu
Some pages provide a More Tasks menu at the top right of the view that contains
commands related to the current view.
Footer
The DD System Manager footer displays important information about the management
session.
The footer lists the following information:
l System hostname
l DD OS version
l Selected system model number
l User name and role for the currently logged-in user
Help buttons
Help buttons display a ? and appear in the banner, in the title of many areas of the
Information panel, and in many dialogs. Click the help button to display a help window
related to the current feature you are using.
The help window provides a contents button and navigation button above the help.
Click the contents button to display the guide contents and a search button that you
can use to search the help. Use the directional arrow buttons to page through the help
topics in sequential order.
Note
The following procedure describes how to start and run the DD System Manager
configuration wizard after the initial configuration of your system. For instructions on
running the configuration wizards at system startup, see the EMC Data Domain
Operating System Initial Configuration Guide.
Note
If you want to configure your system for high availability (HA), you must perform this
operation using the CLI Configuration Wizard. For more information, see the EMC Data
Domain DD9500/DD9800 Hardware Overview and Installation Guide and the EMC Data
Domain Operating System Initial Configuration Guide.
Procedure
1. Select Maintenance > System > Configure System.
2. Use the controls at the bottom of the Configuration Wizard dialog to select
which features you want to configure and to advance through the wizard. To
display help for a feature, click the help icon (question mark) in the lower left
corner of the dialog.
License page
The License page displays all installed licenses. Click Yes to add a license or click No
to skip license installation.
Item Description
Obtain Settings using DHCP Select this option to specify that the system collect network
settings from a Dynamic Host Control Protocol (DHCP)
server. When you configure the network interfaces, at least
one of the interfaces must be configured to use DHCP.
Manually Configure Select this option to use the network settings defined in the
Settings area of this page.
Domain Name Specifies the network domain to which this system belongs.
Default IPv4 Gateway Specifies the IPv4 address of the gateway to which the
system will forward network requests when there is no route
entry for the destination system.
Default IPv6 Gateway Specifies the IPv6 address of the gateway to which the
system will forward network requests when there is no route
entry for the destination system.
Item Description
Interface Lists the interfaces available on your system.
Netmask Specifies the network mask for this system. To configure the
network mask, you must set DHCP to No.
Link Displays whether the Ethernet link is active (Yes) or not (No).
Item Description
Obtain DNS using DHCP. Select this option to specify that the system collect DNS IP
addresses from a Dynamic Host Control Protocol (DHCP)
server. When you configure the network interfaces, at least
one of the interfaces must be configured to use DHCP.
Manually configure DNS list. Select this option when you want to manually enter DNS
server IP addresses.
Add (+) button Click this button to display a dialog in which you can add a
DNS IP address to the DNS IP Address list. You must select
Manually configure DNS list before you can add or
delete DNS IP addresses.
Delete (X) button Click this button to delete a DNS IP address from the DNS IP
Address list. You must select the IP address to delete before
this button is enabled. You must also select Manually configure DNS list
before you can delete DNS IP addresses.
IP Address Checkboxes Select a checkbox for a DNS IP address that you want to
delete. Select the DNS IP Address checkbox when you want
to delete all IP addresses. You must select Manually
configure DNS list before you can add or delete DNS IP
addresses.
Item Description
ID (Device in DD VE) The disk identifier, which can be any of the following.
l The enclosure and disk number (in the form Enclosure.Slot, or Enclosure.Pack for DS60 shelves)
l A device number for a logical device such as those used by
DD VTL and vDisk
l A LUN
Disks The disks that comprise the disk pack or LUN. This does not
apply to DD VE instances.
Model The type of disk shelf. This does not apply to DD VE instances.
Disk Count The number of disks in the disk pack or LUN. This does not
apply to DD VE instances.
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
License Needed The licensed capacity required to add the storage to the tier.
Failed Disks Failed disks in the disk pack or LUN. This does not apply to DD
VE instances.
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes,
giving a different disk capacity than the manufacturer's rating.
Item Description
ID (Device in DD VE) The disk identifier, which can be any of the following.
Disks The disks that comprise the disk pack or LUN. This does not
apply to DD VE instances.
Model The type of disk shelf. This does not apply to DD VE instances.
Disk Count The number of disks in the disk pack or LUN. This does not
apply to DD VE instances.
Disk Size (Size in DD VE) The data storage capacity of the disk when used in a Data
Domain system.a
Failed Disks Failed disks in the disk pack or LUN. This does not apply to DD
VE instances.
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes,
giving a different disk capacity than the manufacturer's rating.
Item Description
User Name The default administrator name is sysadmin. The sysadmin
user cannot be renamed or deleted.
Item Description
Send Alert Notification Emails to this address   Check to configure DD System
Manager to send alert notifications to the Admin email address as alert events occur.
Send Daily Alert Summary Emails to this address   Check to configure DD System
Manager to send alert summaries to the Admin email address at the end of each day.
Send Autosupport Emails to this address   Check to configure DD System Manager to
send the Admin user autosupport emails, which are daily reports that document
system activity and status.
Item Description
Mail Server Specify the name of the mail server that manages emails to
and from the system.
Send Alert Notification Emails to Data Domain   Check to configure DD System
Manager to send alert notification emails to Data Domain.
Item Description
Storage Unit The name of your DD Boost Storage Unit. You may optionally
change this name.
User For the default DD Boost user, either select an existing user,
or select Create a new Local User, and enter their User name,
Password, and Management Role. This role can be one of the
following:
l Admin role: Lets you configure and monitor the entire
Data Domain system.
l User role: Lets you monitor Data Domain systems and
change your own password.
l Security role: In addition to user role privileges, lets you
set up security-officer configurations and manage other
security-officer operators.
l Backup-operator role: In addition to user role privileges,
lets you create snapshots, import and export tapes to, or
move tapes within a DD VTL.
l None role: Intended only for EMC DD Boost
authentication, so you cannot monitor or configure a Data
Domain system. None is also the parent role for the SMT
tenant-admin and tenant-user roles. None is also the
preferred user type for DD Boost storage owners.
Creating a new local user here only allows that user to
have the "none" role.
Item Description
Configure DD Boost over Fibre Channel   Select the checkbox if you want to
configure DD Boost over Fibre Channel.
Group Name (1-128 Chars) Create an Access Group. Enter a unique name. Duplicate
access groups are not supported.
Devices The devices to be used are listed. They are available on all
endpoints. An endpoint is the logical target on the Data
Domain system to which the initiator connects.
Item Description
Active Directory/Kerberos Authentication   Expand this panel to enable, disable,
and configure Active Directory Kerberos authentication.
Workgroup Authentication Expand this panel to configure Workgroup authentication.
Item Description
Share Name Enter a share name for the system.
Item Description
Directory Path Enter a pathname for the export.
Item Description
Library Name Enter a name of 1 to 32 alphanumeric characters.
Drive Model Select the desired model from the drop-down list:
l IBM-LTO-1
l IBM-LTO-2
l IBM-LTO-3
l IBM-LTO-4
l IBM-LTO-5 (default)
l HP-LTO-3
l HP-LTO-4
Number of CAPs (Optional) Enter the number of cartridge access ports (CAPs):
l Up to 100 CAPs per library
l Up to 1000 CAPs per system
Changer Model Name Select the desired model from the drop-down list:
l L180 (default)
l RESTORER-L180
l TS3500
l I2000
l I6000
l DDVTL
Starting Barcode Enter the desired barcode for the first tape, in the format
A990000LA.
Tape Capacity (Optional) Enter the tape capacity. If not specified, the capacity is
derived from the last character of the barcode.
Item Description
Group Name Enter a unique name of 1 to 128 characters. Duplicate access groups
are not supported.
Initiators Select one or more initiators. Optionally, replace the initiator name by
entering a new one. An initiator is a backup client that connects to a
system to read and write data using the Fibre Channel (FC) protocol. A
specific initiator can support EMC DD Boost over FC or DD VTL, but not
both.
Devices The devices (drives and changer) to be used are listed. These are
available on all endpoints. An endpoint is the logical target on the Data
Domain system to which the initiator connects.
The following example shows SSH login to a system named mysystem using SSH
client software.
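(The original listing is not reproduced in this extract; the session below is a representative sketch, and the system name, prompt, and banner text are placeholders.)

   # ssh -l sysadmin mysystem
   Password:
   Last login: ...
   sysadmin@mysystem#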
l To list the top-level CLI commands, enter a question mark (?), or type the
command help at the prompt.
l To list all forms of a top-level command with an introduction, enter help
command or ? command.
l The end of each help description is marked END. Press Enter to return to the CLI
prompt.
l When the complete help description does not fit in the display, the colon prompt
(:) appears at the bottom of the display. The following guidelines describe what
you can do when this prompt appears.
n To move through the help display, use the up and down arrow keys.
n To quit the current help display and return to the CLI prompt, press q.
n To display help for navigating the help display, press h.
n To search for text in the help display, enter a slash character (/) followed by a
pattern to use as search criteria and press Enter. Matches are highlighted.
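For example (an illustrative session; output is omitted):

   sysadmin@mysystem# ?          (lists the top-level commands)
   sysadmin@mysystem# help net   (displays all forms of the net command)
   sysadmin@mysystem# ? net      (equivalent to help net)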
Note
When processing a heavy load, a system might be less responsive than normal. In this
case, management commands issued from either DD System Manager or the CLI
might take longer to complete. When the duration exceeds allowed limits, a timeout
error is returned, even if the operation completed.
The following table recommends the maximum number of user sessions supported by
DD System Manager:
8 GB models   10   15
Note
Initial HA system setup cannot be done from DD System Manager, but the status
of an already-configured HA system can be viewed from DD System Manager.
Note
Both DDRs must have identical hardware; this is validated during setup and system boot.
If the setup uses a fresh installation of both systems, the ha create command must be
run on the node with the HA license installed. If the setup pairs an existing system
with a new, freshly installed system (an upgrade), run the command from the existing system.
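A minimal CLI sketch of that workflow follows; it assumes the HA license is already installed on the node where ha create runs, and the exact arguments for your release are documented in the EMC Data Domain Operating System Command Reference Guide:

   sysadmin@node1# ha create     (run on the licensed or existing node)
   sysadmin@node1# ha status     (verify the HA pair after setup completes)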
Rebooting a system
Reboot a system when a configuration change, such as changing the time zone,
requires that you reboot the system.
Procedure
1. Select Maintenance > System > Reboot System.
2. Click OK to confirm.
Note
A controller is the chassis and any internal storage. A Data Domain system
refers to the controller and any optional external storage.
2. Plug in the power cord for your controller, and if there is a power button on the
controller, press the power button (as shown in the Installation and Setup Guide
for your Data Domain system).
3. To shut down power to a Data Domain system, use the system poweroff CLI
command.
This command automatically performs an orderly shut down of DD OS
processes and is available to administrative users only.
Results
NOTICE
Do not use the chassis power switch to power off the system. Doing so prevents
remote power control using IPMI. Use the system poweroff command instead.
The system poweroff command shuts down the system and turns off the power.
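For example, from an SSH session (the confirmation prompt shown is representative, not verbatim output):

   sysadmin@mysystem# system poweroff
   Continue? (yes|no|?) [no]: yes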
Note
The IPMI Remote System Power Down feature does not perform an orderly shutdown
of the DD OS. Use this feature only if the system poweroff command is
unsuccessful.
Note
When upgrading from 5.6.0.x to 6.0, first upgrade the 5.6.0.x system to 5.6.1.x (or
later) before upgrading to 6.0.
Note
You can use FTP or NFS to copy an upgrade package to a system. DD System
Manager is limited to managing five system upgrade packages, but there are no
restrictions, other than space limitations, when you manage the files directly in the
/ddvar/releases directory. FTP is disabled by default. To use NFS, /ddvar must
be exported and mounted from an external host.
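As an illustration of the NFS path only, the sketch below exports /ddvar to an administrative host and copies a package into the releases directory; the hostnames, mount point, and package file name are placeholders, and the export options for your environment may differ:

   (on the Data Domain system)
   sysadmin@mysystem# nfs add /ddvar admin-host.example.com
   sysadmin@mysystem# nfs enable

   (on the administrative host)
   # mount mysystem:/ddvar /mnt/ddvar
   # cp ddos-upgrade-package.rpm /mnt/ddvar/releases/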
Procedure
1. Select Maintenance > System.
2. To obtain an upgrade package, click the EMC Online Support link, click
Downloads, and use the search function to locate the package recommended
for your system by Support personnel. Save the upgrade package to the local
computer.
3. Verify that there are no more than four packages listed in the Upgrade
Packages Available on Data Domain System list.
DD System Manager can manage up to five upgrade packages. If five packages
appear in the list, remove at least one package before uploading the new
package.
4. Click Upload Upgrade Package to initiate the transfer of the upgrade package
to the system.
5. In the Upload Upgrade Package dialog, click Browse to open the Choose File to
Upload dialog. Navigate to the folder with the downloaded file, select the file,
and click Open.
6. Click OK.
An upload progress dialog appears. Upon successful completion of the upload,
the downloaded file (with a .rpm extension) appears in the list titled Upgrade
Packages Available on Data Domain System.
7. To verify the upgrade package integrity, click View Checksum and compare the
calculated checksum displayed in the dialog to the authoritative checksum on
the EMC Online Support site.
Note
Upgrade package files use the .rpm file extension. This topic assumes that you are
updating only DD OS. If you make hardware changes, such as adding, swapping, or
moving interface cards, you must update the DD OS configuration to correspond with
the hardware changes.
Procedure
1. Log in to the system where the upgrade is to be performed.
Note
For most releases, upgrades are permitted from up to two prior major release
versions. For Release 6.0, upgrades are permitted from Releases 5.6 and 5.7.
Note
As recommended in the Release Notes, reboot the Data Domain system before
upgrading to verify that the hardware is in a clean state. If any issues are
discovered during the reboot, resolve those issues before starting the upgrade.
For an MDU upgrade, a reboot may not be needed.
2. Select Data Management > File System, and verify that the file system is
enabled and running.
3. Select Maintenance > System.
4. From the Upgrade Packages Available on Data Domain System list, select the
package to use for the upgrade.
Note
You must select an upgrade package for a newer version of DD OS. DD OS does
not support downgrades to previous versions.
6. Verify the version of the upgrade package, and click OK to continue with the
upgrade.
The System Upgrade dialog displays the upgrade status and the time remaining.
When upgrading the system, you must wait for the upgrade to complete before
using DD System Manager to manage the system. If the system restarts, the
upgrade might continue after the restart, and DD System Manager displays the
upgrade status after login. EMC recommends that you keep the System
Upgrade progress dialog open until the upgrade completes or the system
powers off. When upgrading DD OS Release 5.5 or later to a newer version, and
if the system upgrade does not require a power off, a Login link appears when
the upgrade is complete.
Note
To view the status of an upgrade using the CLI, enter the system upgrade
status command. Log messages for the upgrade are stored in
/ddvar/log/debug/platform/upgrade-error.log and
/ddvar/log/debug/platform/upgrade-info.log.
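For example, a status check from the CLI might look like the following; the log view invocation is an assumption, and the exact log-viewing syntax can vary by release:

   sysadmin@mysystem# system upgrade status
   sysadmin@mysystem# log view debug/platform/upgrade-info.log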
7. If the system powers down, you must remove AC power from the system to
clear the prior configuration. Unplug all of the power cables for 30 seconds and
then plug them back in. The system powers on and reboots.
8. If the system does not automatically power on and there is a power button on
the front panel, press the button.
After you finish
For environments that use self-signed SHA-256 certificates, the certificates must be
regenerated manually after the upgrade process is complete, and trust must be
re-established with external systems that connect to the Data Domain system.
1. Run the adminaccess certificate generate self-signed-cert
regenerate-ca command to regenerate the self-signed CA and host certificates.
Regenerating the certificates breaks existing trust relationships with external
systems.
2. Run the adminaccess trust add host hostname type mutual command to
reestablish mutual trust between the Data Domain system and the external
system.
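Taken together, the recovery sequence looks like the following, where dd-external.example.com is a placeholder for each external system that connects to the Data Domain system:

   sysadmin@mysystem# adminaccess certificate generate self-signed-cert regenerate-ca
   sysadmin@mysystem# adminaccess trust add host dd-external.example.com type mutual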
Overview tab
The Overview tab displays information for all disks in the Data Domain system
organized by type. The categories that display are dependent on the type of storage
configuration in use.
The Overview tab lists the discovered storage in one or more of the following sections.
l Active Tier
Disks in the Active Tier are currently marked as usable by the file system. Disks are
listed in two tables, Disks in Use and Disks Not in Use.
l Retention Tier
If the optional EMC Data Domain Extended Retention (formerly DD Archiver)
license is installed, this section shows the disks that are configured for DD
Extended Retention storage. Disks are listed in two tables, Disks in Use and Disks
Not in Use. For more information, see the EMC Data Domain Extended Retention
Administration Guide.
l Cache Tier
SSDs in the Cache Tier are used for caching metadata. The SSDs are not usable
by the file system. Disks are listed in two tables, Disks in Use and Disks Not in Use.
l Cloud Tier
Disks in the Cloud Tier are used to store the metadata for data that resides in
cloud storage. The disks are not usable by the file system. Disks are listed in two
tables, Disks in Use and Disks Not in Use.
l Addable Storage
For systems with optional enclosures, this section shows the disks and enclosures
that can be added to the system.
l Failed/Foreign/Absent Disks (Excluding Systems Disks)
Shows the disks that are in a failed state; these cannot be added to the system
Active or Retention tiers.
l Systems Disks
Shows the disks where the DD OS resides when the Data Domain controller does
not contain data storage disks.
l Migration History
Shows the history of migrations.
Each section heading displays a summary of the storage configured for that section.
The summary shows tallies for the total number of disks, disks in use, spare disks,
reconstructing spare disks, available disks, and known disks.
Click a section plus (+) button to display detailed information, or click the minus (-)
button to hide the detailed information.
Item Description
Disk Group The name of the disk group that was created by the file
system (for example, dg1).
Disks Reconstructing The disks that are undergoing reconstruction, by disk ID (for
example, 1.11).
Total Disks The total number of usable disks (for example, 14).
Disks The disk IDs of the usable disks (for example, 2.1-2.14).
Size The size of the disk group (for example, 25.47 TiB).
Item Description
Disk The disk identifier, which can be any of the following.
l The enclosure and disk number (in the form Enclosure
Slot)
l A device number for a logical device such as those used by
DD VTL and vDisk
l A LUN
Pack The disk pack, 1-4, within the enclosure where the disk is
located. This value will only be 2-4 for DS60 expansion
shelves.
State The status of the disk, for example In Use, Available, Spare.
Size The data storage capacity of the disk when used in a Data
Domain system.a
Type The disk connectivity and type (For example, SAS).
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes,
giving a different disk capacity than the manufacturer's rating.
Enclosures tab
The Enclosures tab displays a table summarizing the details of the enclosures
connected to the system.
The Enclosures tab provides the following details.
Item Description
Enclosure The enclosure number. Enclosure 1 is the head unit.
Size The data storage capacity of the disk when used in a Data
Domain system.a
a. The Data Domain convention for computing disk space defines one gibibyte as 2^30 bytes,
giving a different disk capacity than the manufacturer's rating.
Disks tab
The Disks tab displays information on each of the system disks. You can filter the disks
viewed to display all disks, disks in a specific tier, or disks in a specific group.
The Disk State table displays a summary status table showing the state of all system
disks.
Item Description
Total The total number of inventoried disks in the Data Domain
system.
Spare (reconstructing) The number of disks that are in the process of data
reconstruction (spare disks replacing failed disks).
Not Installed The number of empty disk slots that the system can detect.
The Disks table displays specific information about each disk installed in the system.
Item Description
Disk The disk identifier, which can be:
l The enclosure and disk number (in the form
Enclosure.Slot).
l A device number for a logical device such as those used by
DD VTL and vDisk.
l A LUN.
Pack The disk pack, 1-4, within the enclosure where the disk is
located. This value will only be 2-4 for DS60 expansion
shelves.
State The status of the disk, which can be one of the following.
l Absent. No disk is installed in the indicated location.
l Available. An available disk is allocated to the active or
retention tier, but it is not currently in use.
l Copy Recovery. The disk has a high error rate but is not
failed. RAID is currently copying the contents onto a spare
drive and will fail the drive once the copy reconstruction is
complete.
l Destination. The disk is in use as the destination for
storage migration.
l Error. The disk has a high error rate but is not failed. The
disk is in the queue for copy reconstruction. The state will
Disk Life Used The percentage of an SSD's rated life span consumed.
Reconstruction tab
The Reconstruction tab displays a table that provides additional information on
reconstructing disks.
The following table describes the entries in the Reconstructing table.
Item Description
Disk Identifies disks that are being reconstructed. Disk labels are of
the format enclosure.disk. Enclosure 1 is the Data Domain
system, and external shelves start numbering with enclosure 2.
For example, the label 3.4 is the fourth disk in the second
shelf.
Disk Group Shows the RAID group (dg#) for the reconstructing disk.
Tier The name of the tier where the failed disk is being
reconstructed.
Time Remaining The amount of time before the reconstruction is complete.
When a spare disk is available, the file system automatically replaces a failed disk with
a spare and begins the reconstruction process to integrate the spare into the RAID
disk group. The disk use displays Spare and the status becomes Reconstructing.
Reconstruction is performed on one disk at a time.
The Beaconing Disk dialog box appears, and the LED light on the disk begins
flashing.
Configuring storage
Storage configuration features allow you to add and remove storage expansion
enclosures from the active, retention, and cloud tiers. Storage in an expansion
enclosure (which is sometimes called an expansion shelf) is not available for use until it
is added to a tier.
Note
Additional storage requires the appropriate license or licenses and sufficient memory
to support the new storage capacity. Error messages display if more licenses or
memory is needed.
DD6300 systems support the option to use ES30 enclosures with 4 TB drives (43.6
TiB) at 50% utilization (21.8 TiB) in the active tier if the available licensed capacity is
exactly 21.8 TiB. The following guidelines apply to using partial capacity shelves.
l No other enclosure types or drive sizes are supported for use at partial capacity.
l A partial shelf can only exist in the Active tier.
l Only one partial ES30 can exist in the Active tier.
l Once a partial shelf exists in a tier, no additional ES30s can be configured in that
tier until the partial shelf is added at full capacity.
Note
This requires licensing enough additional capacity to use the remaining 21.8 TiB of
the partial shelf.
l If the available licensed capacity exceeds 21.8 TiB, a partial shelf cannot be added.
l Deleting a 21.8 TiB license will not automatically convert a fully used shelf to a
partial shelf. The shelf must be removed and added back as a partial shelf.
Procedure
1. Select Hardware > Storage > Overview.
2. Expand the dialog for one of the available storage tiers:
l Active Tier
l Extended Retention Tier
l Cache Tier
l Cloud Tier
3. Click Configure.
4. In the Configure Storage dialog, select the storage to be added from the
Addable Storage list.
5. In the Configure list, select either Active Tier or Retention Tier.
The maximum amount of storage that can be added to the active tier depends
on the DD controller used.
Note
The licensed capacity bar shows the portion of licensed capacity (used and
remaining) for the installed enclosures.
Note
To remove an added shelf, select it in the Tier Configuration list, click Remove
from Configuration, and click OK.
Fail a disk
Fail a disk and force reconstruction. Select Hardware > Storage > Disks > Fail.
Unfail a disk
Make a disk previously marked Failed or Foreign usable to the system. Select
Hardware > Storage > Disks > Unfail.
Note
Floating IP addresses exist only in a two-node HA system; during failover, the IP
addresses "float" to the new active node. Floating IP addresses:
l Are configured only on the active node
l Are used for filesystem access and most configuration
l Can only be static
l Require the type floating argument when configured
Item Description
Interface The name of each interface associated with the selected system.
IP Address IP address associated with the interface. The address used by the
network to identify the interface. If the interface is configured
through DHCP, an asterisk appears after this value.
Link Whether the Ethernet connection is active (Yes/No).
IPMI interfaces configured   Displays Yes or No and indicates if IPMI health
monitoring and power management is configured for the interface.
2. To filter the interface list by interface name, enter a value in the Interface
Name field and click Update.
Filters support wildcards, such as eth*, veth*, or eth0*.
3. To filter the interface list by interface type, select a value from the Interface
Type menu and click Update.
On an HA system, there is a filter dropdown to filter by IP Address Type (Fixed,
Floating, or Interconnect).
4. To return the interfaces table to the default listing, click Reset.
5. Select an interface in the table to populate the Interface Details area.
Item Description
Auto-generated Addresses   Displays the automatically generated IPv6 addresses
for the selected interface.
Auto Negotiate When this feature displays Enabled, the interface automatically
negotiates Speed and Duplex settings. When this feature displays
Disabled, then Speed and Duplex values must be set manually.
Cable Shows whether the interface is Copper or Fiber.
Duplex Used in conjunction with the Speed value to set the data transfer
protocol. Options are Unknown, Full, Half.
Hardware Address The MAC address of the selected interface. For example,
00:02:b3:b0:8a:d2.
Latent Fault Detection (LFD) - HA systems only   The LFD field has a View
Configuration link, displaying a pop-up that lists LFD addresses and interfaces.
Speed Used in conjunction with the Duplex value to set the rate of data
transfer. Options are Unknown, 10 Mb/s, 100 Mb/s, 1000 Mb/s, 10
Gb/s.
Supported Speeds Lists all of the speeds that the interface can use.
6. To view IPMI interface configuration and management options, click View IPMI
Interfaces.
This link displays the Maintenance > IPMI information.
port is labeled 0 and corresponds to physical interface name ethxa, the next is 1/
ethxb, the next is 2/ethxc, and so forth.
Note
DD140, DD160, DD610, DD620, and DD630 systems do not support IPv6 on
interface eth0a (eth0 on systems that use legacy port names) or on any VLANs
created on that interface.
3. Click Configure.
4. In the Configure Interface dialog, determine how the interface IP address is to
be set:
Note
On an HA system, the Configure Interface dialog has a field for designating
whether the interface uses a Floating IP address (Yes/No). Selecting Yes
auto-selects the Manually Configure IP Address radio button; Floating IP
interfaces can only be configured manually.
l To use DHCP to assign the IP address: in the IP Settings area, select Obtain
IP Address using DHCP, and select either DHCPv4 for IPv4 access or
DHCPv6 for IPv6 access.
Setting a physical interface to use DHCP automatically enables the
interface.
Note
If you choose to obtain the network settings through DHCP, you can
manually configure the hostname at Hardware > Ethernet > Settings or
with the net set hostname command. You must manually configure the
host name when using DHCP over IPv6.
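For example, to set the hostname manually from the CLI (the hostname shown is a placeholder):

   sysadmin@mysystem# net set hostname dd6800-01.example.com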
l To specify IP settings manually: in the IP Settings area, select Manually
configure IP Address.
The IP Address and Netmask fields become active.
5. If you chose to manually enter the IP address, enter an IPv4 or IPv6 address. If
you entered an IPv4 address, enter a netmask address.
Note
You can assign just one IP address to an interface with this procedure. If you
assign another IP address, the new address replaces the old address. To attach
an additional IP address to an interface, create an IP alias.
7. Specify the MTU (Maximum Transfer Unit) size for the physical (Ethernet)
interface.
Do the following:
l Click the Default button to return the setting to the default value.
l Ensure that all of your network components support the size set with this
option.
9. Click Next.
The Configure Interface Settings summary page appears. The values listed
reflect the new system and interface state, which are applied after you click
Finish.
Note
The minimum MTU for IPv6 interfaces is 1280. The interface fails if you try to set the
MTU lower than 1280.
the list of interfaces in the dialog box, clear the checkbox for the interface to
remove it from bonding (failover or aggregate), and click Next.
l For a bonded interface, if the hardware for a slave interface fails, the bonded
interface is created with the remaining slaves. If no slaves remain, the bonded
interface is created with no slaves. Each slave hardware failure generates managed
alerts, one per failed slave.
Note
The alert for a failed slave disappears after the failed slave is removed from the
system. If new hardware is installed, the alerts disappear and the bonded interface
uses the new slave interface after the reboot.
l On DD4200, DD4500, and DD7200 systems, the ethMa interface does not support
failover or link aggregation.
which links within the bond are available for use. LACP provides a kind of
heartbeat failover and must be configured at both ends of the link.
7. If you selected Balanced or LACP mode, specify a bonding hash type in the
Hash list.
Options are: XOR-L2, XOR-L2L3, or XOR-L3L4.
XOR-L2 transmits through a bonded interface with an XOR hash of Layer 2
(inbound and outbound MAC addresses).
XOR-L2L3 transmits through a bonded interface with an XOR hash of Layer 2
(inbound and outbound MAC addresses) and Layer 3 (inbound and outbound IP
addresses).
XOR-L3L4 transmits through a bonded interface with an XOR hash of Layer 3
(inbound and outbound IP addresses) and Layer 4 (inbound and outbound
ports).
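From the CLI, bonding mode and hash type are set with the net aggregate command; the line below is a sketch only (the interface names are examples, and the exact mode and hash keywords for your release are documented in the Command Reference Guide):

   sysadmin@mysystem# net aggregate add veth1 interfaces eth1a eth1b mode lacp hash xor-L3L4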
Note
Configuring a VLAN
Create a new VLAN interface from either a physical interface or a virtual interface.
The recommended total VLAN count is 80. You can create up to 100 interfaces (minus
the number of aliases, physical and virtual interfaces) before the system prevents you
from creating any more.
Procedure
1. Select Hardware > Ethernet > Interfaces.
2. In the interfaces table, select the interface to which you want to add the VLAN.
The interface you select must be configured with an IP address before you can
add a VLAN.
3. Click Create and select VLAN.
4. In the Create VLAN dialog box, specify a VLAN ID by entering a number in the
VLAN Id box.
The range of a VLAN ID is between 1 and 4094 inclusive.
9. Click Next.
The Create VLAN summary page appears.
10. Review the configuration settings, click Finish, and click OK.
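The CLI equivalent uses net create interface; the sketch below is illustrative (interface eth0a, VLAN ID 150, and the addresses are examples), and the resulting VLAN interface is named with a dot suffix:

   sysadmin@mysystem# net create interface eth0a vlan 150
   sysadmin@mysystem# net config eth0a.150 192.168.150.10 netmask 255.255.255.0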
Configuring an IP alias
An IP alias assigns an additional IP address to a physical interface, a virtual interface,
or a VLAN.
The recommended total number of IP aliases, VLAN, physical, and virtual interfaces
that can exist on the system is 80. Although up to 100 interfaces are supported, as the
maximum number is approached, you might notice slowness in the display.
Note
When using a Data Domain HA system, if a user is created and logs in to the standby
node without first logging in to the active node, the user will not have a default alias to
use. To use aliases on the standby node, log in to the active node first.
Procedure
1. Select Hardware > Ethernet > Interfaces.
2. Click Create, and select IP Alias.
The Create IP Alias dialog appears.
7. Click Next.
The Create IP Alias summary page appears.
4. In the Configure IP Alias dialog box, change the settings as described in the
procedure for creating an IP Alias.
5. Click Next and Finish.
Destroying an interface
You can use DD System Manager to destroy or delete virtual, VLAN, and IP alias
interfaces.
When a virtual interface is destroyed, the system deletes the virtual interface, releases
its bonded physical interface, and deletes any VLANs or aliases attached to the virtual
interface. When you delete a VLAN interface, the OS deletes the VLAN and any IP
alias interfaces that are created under it. When you destroy an IP alias, the OS deletes
only that alias interface.
Procedure
1. Select Hardware > Ethernet > Interfaces.
2. Click the box next to each interface you want to destroy (Virtual or VLAN or IP
Alias).
3. Click Destroy.
4. Click OK to confirm.
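The CLI equivalent is net destroy, which removes the named virtual, VLAN, or IP alias interface (the interface name below is an example):

   sysadmin@mysystem# net destroy eth0a.150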
Domain Name
The fully qualified domain name associated with the selected system.
Hosts Mapping
IP Address
IP address of the host to resolve.
Host Name
Hostnames associated with the IP address.
DNS List
DNS IP Address
Current DNS IP addresses associated with the selected system. An asterisk
(*) indicates that the IP addresses were assigned through DHCP.
d. Click OK.
The system displays progress messages as the changes are applied.
4. To obtain the host and domain names from a DHCP server, select Obtain
Settings using DHCP and click OK.
At least one interface must be configured to use DHCP.
Procedure
1. Select Hardware > Ethernet > Settings.
2. Click Edit in the DNS List area.
3. To manually add a DNS IP address:
a. Select Manually configure DNS list.
The DNS IP address checkboxes become active.
b. Click Add (+).
c. In the Add DNS dialog box, enter the DNS IP address to add.
d. Click OK.
The system adds the new IP address to the list of DNS IP addresses.
e. Click OK to apply the changes.
4. To delete a DNS IP address from the list:
a. Select Manually configure DNS list.
The DNS IP address checkboxes become active.
b. Select the DNS IP address to delete and click Delete (X).
The system removes the IP address from the list of DNS IP addresses.
c. Click OK to apply the changes.
5. To obtain DNS addresses from a DHCP server, select Obtain DNS using DHCP
and click OK.
At least one interface must be configured to use DHCP.
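CLI equivalent
A minimal sketch, assuming the config set dns syntax (the addresses below are placeholders):
# config set dns 10.25.1.10 10.25.1.11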
backups. There are cases in which static addresses may be required to allow
connections to work, such as connections from the Data Domain system to remote
systems.
Static routes can be added and deleted from individual routing tables by adding or
deleting the table from the route specification. This provides the rules to direct
packets with specific source addresses through specific route tables. If a static route
is required for packets with those source addresses, the routes must be added to the
specific table where the IP address is routed.
Note
Routing for connections initiated from the Data Domain system, such as for
replication, depends on the source address used for interfaces on the same subnet. To
force traffic for a specific interface to a specific destination (even if that interface is
on the same subnet as other interfaces), configure a static routing entry between the
two systems: this static routing overrides source routing. This is not needed if the
source address is IPv4 and has a default gateway associated with it. In that case, the
source routing is already handled via its own routing table.
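For example, a static route could be added from the CLI with a command similar to the following sketch, assuming net route add accepts a Linux-style route specification (the addresses below are placeholders):
# net route add 192.168.2.0/24 gw 192.168.1.254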
Item Description
Destination The destination host/network where the network traffic (data) is sent.
Genmask The netmask for the destination net. Set to 255.255.255.255 for a host
destination and 0.0.0.0 for the default route.
Metric The distance to the target (usually counted in hops). Not used by the
DD OS, but might be needed by routing daemons.
MTU Maximum Transmission Unit (MTU) size for the physical (Ethernet)
interface.
Window Default window size for TCP connections over this route.
IRTT Initial RTT (Round Trip Time) used by the kernel to estimate the best
TCP protocol parameters without waiting on possibly slow answers.
Interface Interface name associated with the routing interface.
7. Click Finish.
8. After the process is completed, click OK.
The new route specification is listed in the Route Spec list.
Results
The system passphrase is set and the Change Passphrase button replaces the Set
Passphrase button.
Note
The file system must be disabled to change the passphrase. If the file system is
running, you are prompted to disable it.
NOTICE
Be sure to safeguard the passphrase. If the passphrase is lost, you can never
unlock the file system and access the data; the data is irrevocably lost.
limited-admin
The limited-admin role can configure and monitor the Data Domain system with
some limitations. Users who are assigned this role cannot perform data deletion
operations, edit the registry, or enter bash or SE mode.
user
The user role enables users to monitor systems and change their own password.
Users who are assigned the user management role can view system status, but
they cannot change the system configuration.
l Separation of privilege and duty apply. admin role users cannot perform
security officer tasks, and security officers cannot perform system
configuration tasks.
l During an upgrade, if the system configuration contains security officers, a
sec-off-defaults permission is created that includes a list of all current
security officers.
backup-operator
A backup-operator role user can perform all tasks permitted for user role users,
create snapshots for MTrees, import, export, and move tapes between elements
in a virtual tape library, and copy tapes across pools.
A backup-operator role user can also add and delete SSH public keys for non-
password-required logins (used mostly for automated scripting), and can add, delete,
reset, and view CLI command aliases, synchronize modified files, and wait for
replication to complete on the destination system.
none
The none role is for DD Boost authentication and tenant-unit users only. A none
role user can log in to a Data Domain system and can change his or her password,
but cannot monitor, manage, or configure the primary system. When the primary
system is partitioned into tenant units, either the tenant-admin or the tenant-user
role is used to define a user's role with respect to a specific tenant unit. The
tenant user is first assigned the none role to minimize access to the primary
system, and then either the tenant-admin or the tenant-user role is appended to
that user.
tenant-admin
A tenant-admin role can be appended to the other (non-tenant) roles when the
Secure Multi-Tenancy (SMT) feature is enabled. A tenant-admin user can
configure and monitor a specific tenant unit.
tenant-user
A tenant-user role can be appended to the other (non-tenant) roles when the
SMT feature is enabled. The tenant-user role enables a user to monitor a specific
tenant unit and change the user password. Users who are assigned the tenant-
user management role can view tenant unit status, but they cannot change the
tenant unit configuration.
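For example, a user can be assigned one of these roles at creation time with the user add command (the user name below is a placeholder):
# user add jsmith role backup-operator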
Item Description
Passphrase If no passphrase is set, the Set Passphrase button appears. If a
passphrase is set, the Change Passphrase button appears.
Enabled (Yes/No) The status of the service. If the service is disabled, enable it by
selecting it in the list and clicking Configure. Fill out the General
tab of the dialog box. If the service is enabled, modify its settings
by selecting it in the list and clicking Configure. Edit the settings
in the General tab of the dialog box.
Allowed Hosts The host or hosts that can access the service.
Service Options The port or session timeout value for the service selected in the
list.
HTTP port The port number opened for the HTTP protocol (port 80, by
default).
HTTPS port The port number opened for the HTTPS protocol (port 443, by
default).
SSH/SCP port The port number opened for the SSH/SCP protocol (port 22, by
default).
Session Timeout The amount of inactive time allowed before a connection closes.
The default is Infinite, that is, the connection does not close. EMC
recommends a session timeout maximum of five minutes. Use the
Advanced tab of the dialog box to set a timeout in seconds.
Note
Only users who are assigned the admin management role are permitted to access the
system using FTP.
Note
LFTP clients that connect to a Data Domain system via FTPS or FTP are disconnected
after reaching a set timeout limit. However, the LFTP client uses its cached username
and password to reconnect after the timeout while you are running any command.
Procedure
1. Select Administration > Access > Administrator Access.
2. Select FTP and click Configure.
3. To manage FTP access and which hosts can connect, select the General tab
and do the following:
a. To enable FTP access, select Allow FTP Access.
b. To enable all hosts to connect, select Allow all hosts to connect.
c. To restrict access to select hosts, select Limit Access to the following
systems, and modify the Allowed Hosts list.
Note
You can identify a host using a fully qualified hostname, an IPv4 address, or
an IPv6 address.
l To add a host, click Add (+). Enter the host identification and click OK.
l To modify a host ID, select the host in the Hosts list and click Edit
(pencil). Change the host ID and click OK.
l To remove a host ID, select the host in the Hosts list and click Delete
(X).
4. To set a session timeout, select the Advanced tab, and enter the timeout value
in seconds.
Note
The session timeout default is Infinite, that is, the connection does not close.
5. Click OK.
If FTPS is enabled, a warning message appears with a prompt to click OK to
proceed.
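CLI equivalent
FTP access can also be toggled from the CLI with the adminaccess command, as in this minimal sketch (host-list management options are not shown and may vary by release):
# adminaccess enable ftp
# adminaccess disable ftp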
Procedure
1. Select Administration > Access > Administrator Access.
2. Select FTPS and click Configure.
3. To manage FTPS access and which hosts can connect, select the General tab
and do the following:
a. To enable FTPS access, select Allow FTPS Access.
b. To enable all hosts to connect, select Allow all hosts to connect.
c. To restrict access to select hosts, select Limit Access to the following
systems, and modify the hosts list.
Note
You can identify a host using a fully qualified hostname, an IPv4 address, or
an IPv6 address.
l To add a host, click Add (+). Enter the host identification and click OK.
l To modify a host ID, select the host in the Hosts list and click Edit
(pencil). Change the host ID and click OK.
l To remove a host ID, select the host in the Hosts list and click Delete
(X).
4. To set a session timeout, select the Advanced tab and enter the timeout value
in seconds.
Note
The session timeout default is Infinite, that is, the connection does not close.
5. Click OK. If FTP is enabled, a warning message appears and prompts you to
click OK to proceed.
Note
You can identify a host using a fully qualified hostname, an IPv4 address, or
an IPv6 address.
l To add a host, click Add (+). Enter the host identification and click OK.
l To modify a host ID, select the host in the Hosts list and click Edit
(pencil). Change the host ID and click OK.
l To remove a host ID, select the host in the Hosts list and click Delete
(X).
4. To configure system ports and session timeout values, select the Advanced
tab, and complete the form.
l In the HTTP Port box, enter the port number. Port 80 is assigned by
default.
l In the HTTPS Port box, enter the number. Port 443 is assigned by default.
l In the Session Timeout box, enter the interval in seconds that must elapse
before a connection closes. The minimum is 60 seconds and the maximum is
31536000 seconds (one year).
Note
The session timeout default is Infinite, that is, the connection does not close.
5. Click OK.
Note
You must configure a system passphrase (system passphrase set) before you can
generate a CSR.
Procedure
1. Select Administration > Access > Administrator Access.
2. In the Services area, select HTTP or HTTPS and click Configure.
3. Select the Certificate tab.
4. Click Add.
A dialog appears for the protocol you selected earlier in this procedure.
6. Complete the CSR form and click Generate and download a CSR.
The CSR file is saved at the following path: /ddvar/certificates/CertificateSigningRequest.csr.
Use SCP, FTP, or FTPS to transfer the CSR file from the system to a computer from
which you can send the CSR to a CA.
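For example, the CSR can be copied off the system with a standard SCP client (the hostname below is a placeholder):
# scp sysadmin@dd-system.yourcompany.com:/ddvar/certificates/CertificateSigningRequest.csr .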
Note
You can identify a host using a fully qualified hostname, an IPv4 address, or
an IPv6 address.
l To add a host, click Add (+). Enter the host identification and click OK.
l To modify a host ID, select the host in the Hosts list and click Edit
(pencil). Change the host ID and click OK.
l To remove a host ID, select the host in the Hosts list and click Delete
(X).
4. To configure system ports and session timeout values, click the Advanced tab.
l In the SSH/SCP Port text entry box, enter the port number. Port 22 is
assigned by default.
l In the Session Timeout box, enter the interval in seconds that must elapse
before connection closes.
Note
The session timeout default is Infinite, that is, the connection does not close.
5. Click OK.
Note
Telnet access allows user names and passwords to cross the network in clear text,
making Telnet an insecure access method.
Procedure
1. Select Administration > Access > Administrator Access.
2. Select Telnet and click Configure.
3. To manage Telnet access and which hosts can connect, select the General tab.
a. To enable Telnet access, select Allow Telnet Access.
b. To enable all hosts to connect, select Allow all hosts to connect.
c. To restrict access to select hosts, select Limit Access to the following
systems, and modify the host list.
Note
You can identify a host using a fully qualified hostname, an IPv4 address, or
an IPv6 address.
l To add a host, click Add (+). Enter the host identification and click OK.
l To modify a host ID, select the host in the Hosts list and click Edit
(pencil). Change the host ID and click OK.
l To remove a host ID, select the host in the Hosts list and click Delete
(X).
4. To set a session timeout, select the Advanced tab and enter the timeout value
in seconds.
Note
The session timeout default is Infinite, that is, the connection does not close.
5. Click OK.
Note
The user-authentication module uses Greenwich Mean Time (GMT). To ensure that
user accounts and passwords expire correctly, configure settings to use the GMT that
corresponds to the target local time.
Procedure
1. Select Administration > Access > Local Users .
The Local Users view appears and shows the Local Users table and the Detailed
Information area.
Item Description
Name The user ID, as added to the system.
Last Login From The location where the user last logged in.
Last Login Time The time the user last logged in.
Note
User accounts configured with the admin or security officer roles can view all
users. Users with other roles can view only their own user accounts.
2. Select the user you want to view from the list of users.
Information about the selected user displays in the Detailed Information area.
Item Description
Tenant-User The list of tenant units the user can access as a tenant-user role
user.
Tenant-Admin The list of tenant units the user can access as a tenant-admin role
user.
Password Last Changed The date the password was last changed.
Minimum Days Between Change The minimum number of days between password changes that you allow a user. Default is 0.
Maximum Days Between Change The maximum number of days between password changes that you allow a user. Default is 90.
Warn Days Before Expire The number of days to warn the users before their password expires. Default is 7.
Disable Days After Expire The number of days after a password expires to disable the user account. Default is Never.
Note
The default values are the initial default password policy values. A system
administrator (admin role) can change them by selecting More Tasks > Change
Login Options.
Item Description
User The user ID or name.
Password The user password. Set a default password, and the user can
change it later.
Management Role The role assigned to the user, which can be admin, user, security,
backup-operator, or none.
Note
Only the sysadmin user (the default user created during the DD OS
installation) can create the first security-role user. After the first
security-role user is created, only security-role users can create
other security-role users.
Force Password Change Select this checkbox to require that the user change the password
during the first login when logging in to DD System Manager or to
the CLI with SSH or Telnet.
The default value for the minimum length of a password is 6 characters. The
default value for the minimum number of character classes required for a user
password is 1. Allowable character classes include:
l Lowercase letters (a-z)
l Uppercase letters (A-Z)
l Numbers (0-9)
l Special Characters ($, %, #, +, and so on)
4. To manage password and account expiration, select the Advanced tab and use
the controls described in the following table.
Item Description
Minimum Days Between Change The minimum number of days between password changes that you allow a user. Default is 0.
Maximum Days Between Change The maximum number of days between password changes that you allow a user. Default is 90.
Warn Days Before Expire The number of days to warn the users before their password expires. Default is 7.
Disable Days After Expire The number of days after a password expires to disable the user account. Default is Never.
Disable account on the following date Check this box and enter a date (mm/dd/yyyy) when you want to disable this account. Also, you can click the calendar to select a date.
5. Click OK.
Note
The default password policy can change if an admin-role user changes
it (More Tasks > Change Login Options). The default values are the initial
default password policy values.
Note
If SMT is enabled and a role change is requested from none to any other role,
the change is accepted only if the user is not assigned to a tenant-unit as a
management-user, is not a DD Boost user with its default-tenant-unit set, and is
not the owner of a storage-unit that is assigned to a tenant-unit.
Item Description
User The user ID or name.
Item Description
Minimum Days Between Change The minimum number of days between password changes that you allow a user. Default is 0.
Maximum Days Between Change The maximum number of days between password changes that you allow a user. Default is 90.
Warn Days Before Expire The number of days to warn the users before their password expires. Default is 7.
Disable Days After Expire The number of days after a password expires to disable the user account. Default is Never.
6. Click OK.
Note
The DD Retention Lock Compliance license must be installed. You are not permitted to
disable the authorization policy on DD Retention Lock Compliance systems.
Procedure
1. Log in to the CLI using a security officer username and password.
2. To enable the security officer authorization policy, enter: # authorization
policy set security-officer enabled
3. Specify the new configuration in the boxes for each option. To select the
default value, click Default next to the appropriate option.
4. Click OK to save the password settings.
Item Description
Minimum Days Between Change The minimum number of days between password changes that you allow a user. This value must be less than the Maximum Days Between Change value minus the Warn Days Before Expire value. The default setting is 0.
Maximum Days Between Change The maximum number of days between password changes that you allow a user. The minimum value is 1. The default value is 90.
Warn Days Before Expire The number of days to warn the users before their password expires. This value must be less than the Maximum Days Between Change value.
Minimum Number of Character Classes The minimum number of character classes required for a user password. Default is 1. Character classes include:
l Lowercase letters (a-z)
l Uppercase letters (A-Z)
l Numbers (0-9)
l Special Characters ($, %, #, +, and so on)
Lowercase Character Requirement Enable or disable the requirement for at least one lowercase character. The default setting is disabled.
Uppercase Character Requirement Enable or disable the requirement for at least one uppercase character. The default setting is disabled.
One Digit Requirement Enable or disable the requirement for at least one numerical character. The default setting is disabled.
Special Character Requirement Enable or disable the requirement for at least one special character. The default setting is disabled.
Max Consecutive Character Requirement Enable or disable the requirement for a maximum of three repeated characters. The default setting is disabled.
Prevent use of Last N Passwords Specify the number of remembered passwords. The range is 0 to 24, and the default setting is 1.
Maximum login attempts Specifies the maximum number of login attempts before a mandatory lock is applied to a user account. This limit applies to all user accounts, including sysadmin. A locked user cannot log in while the account is locked. The range is 4 to 10, and the default value is 4.
Unlock timeout (seconds) Specifies how long a user account is locked after the maximum number of login attempts. When the configured unlock timeout is reached, a user can attempt login. The range is 120 to 600 seconds, and the default period is 120 seconds.
Item Description
Mode The type of authentication mode. In Windows/Active Directory
mode, CIFS clients use Active Directory and Kerberos
authentication, and NFS clients use Kerberos authentication. In
Unix mode, CIFS clients use Workgroup authentication (without
Kerberos), and NFS clients use Kerberos authentication. In
Disabled mode, Kerberos authentication is disabled and CIFS
clients use Workgroup authentication.
Domain Controllers The name of the domain controller for the Workgroup or Active
Directory.
Organizational Unit The name of the organizational unit for the Workgroup or Active
Directory.
CIFS Server Name The name of the CIFS server in use (Windows mode only).
WINS Server The name of the WINS server in use (Windows mode only).
Key Distribution Centers Hostname(s) or IP address(es) of the KDCs in use (UNIX mode only).
Item Description
Windows Group The name of the Windows group.
Management Role The role of the group (admin, user, and so on)
Note
Use the complete realm name. Ensure that the user is assigned sufficient
privileges to join the system to the domain. The user name and password must
be compatible with Microsoft requirements for the Active Directory domain.
This user must also be assigned permission to create accounts in this domain.
6. Select the default CIFS server name, or select Manual and enter a CIFS server
name.
7. To select domain controllers, select Automatically assign, or select Manual
and enter up to three domain controller names.
You can enter fully qualified domain names, hostnames, or IP (IPv4 or IPv6)
addresses.
8. To select an organizational unit, select Use default Computers, or select
Manual and enter an organization unit name.
9. Click Next.
The Summary page for the configuration appears.
10. Click Finish.
The system displays the configuration information in the Authentication view.
11. To enable administrative access, click Enable to the right of Active Directory
Administrative Access.
3. Modify the domain and group name. These names are separated by a backslash.
For example: domainname\groupname.
4. Modify the management role for the group by selecting a different role from the
drop-down menu.
Note
Keytab files are generated on the authentication servers (KDCs) and contain a
shared secret between the KDC server and the DDR.
7. Click Finish.
The system displays the configuration information in the Active Directory/
Kerberos Authentication panel.
Item Description
Mode The type of authentication mode (Workgroup or Active Directory).
3. Click Configure.
The Workgroup Authentication dialog appears.
4. For Workgroup Name, select Manual and enter a workgroup name to join, or
use the default.
The Workgroup mode joins a Data Domain system to a workgroup domain.
5. For CIFS Server Name, select Manual and enter a server name (the DDR), or
use the default.
6. Click OK.
Item Description
NIS Status Enabled or Disabled.
Management Role The role of the group (admin, user, and so on).
4. Click OK.
l To add an authentication server, click Add (+) in the server table, enter the
server name, and click OK.
l To modify an authentication server, select the authentication server name
and click the edit icon (pencil). Change the server name, and click OK.
l To remove an authentication server name, select a server, click the X icon,
and click OK.
4. Click OK.
l To modify an NIS group, select the checkbox of the NIS group name in the
NIS group list and click Edit (pencil). Change the NIS group name, and click
OK.
l To remove an NIS group name, select the NIS group in the list and click
Delete X.
4. Click OK.
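CLI equivalent
A minimal sketch, assuming the authentication nis command family (the server name below is a placeholder):
# authentication nis servers add nis1.yourcompany.com
# authentication nis enable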
3. Enter the name of the mail server in the Mail Server box.
4. Click OK.
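CLI equivalent
A minimal sketch, assuming the config set mailserver syntax (the hostname below is a placeholder):
# config set mailserver mail.yourcompany.com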
2. To change the configuration, select More Tasks > Configure Time Settings.
The Configure Time Settings dialog appears.
3. In the Time Zone dropdown list, select the time zone where the Data Domain
system resides.
4. To manually set the time and date, select None, type the date in the Date box,
and select the time in the Time dropdown lists.
5. To use NTP to synchronize the time, select NTP and set how the NTP server is
accessed.
n To use DHCP to automatically select a server, select Obtain NTP
Servers using DHCP.
n To configure an NTP server IP address, select Manually Configure, add
the IP address of the server, and click OK.
6. Click OK.
7. If you changed the time zone, you must reboot the system.
a. Select Maintenance > System.
b. From the More Tasks menu, select Reboot System.
c. Click OK to confirm.
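CLI equivalent
A minimal sketch of the same settings, assuming the config and ntp command families (the time zone and server address below are placeholders):
# config set timezone US/Pacific
# ntp add timeserver 10.25.1.25
# ntp enable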
2. To change the configuration, select More Tasks > Set System Properties.
The Set System Properties dialog box appears.
3. In the Location box, enter information about where the Data Domain system is
located.
4. In the Admin Email box, enter the email address of the system administrator.
5. In the Admin Host box, enter the name of the administration server.
6. Click OK.
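CLI equivalent
A minimal sketch, assuming the config set command family (the values below are placeholders):
# config set location "Bldg4-rack10"
# config set admin-email djones@yourcompany.com
# config set admin-host admin-pc.yourcompany.com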
SNMP management
The Simple Network Management Protocol (SNMP) is a standard protocol for
exchanging network management information, and is a part of the Transmission
Control Protocol/Internet Protocol (TCP/IP) protocol suite. SNMP provides a tool for
network administrators to manage and monitor network-attached devices, such as
Data Domain systems, for conditions that warrant administrator attention.
To monitor Data Domain systems using SNMP, you will need to install the Data
Domain MIB in your SNMP Management system. DD OS also supports the standard
MIB-II, so you can also query MIB-II statistics for general data such as network
statistics. For full coverage of available data, use both the Data Domain MIB and the
standard MIB-II MIB.
The Data Domain system SNMP agent accepts queries for Data Domain-specific
information from management systems using SNMP v1, v2c, and v3. SNMP V3
provides a greater degree of security than v2c and v1 by replacing cleartext
community strings (used for authentication) with user-based authentication using
either MD5 or SHA1. Also, SNMP v3 user authentication packets can be encrypted
and their integrity verified with either DES or AES.
Data Domain systems can send SNMP traps (which are alert messages) using SNMP
v2c and SNMP v3. Because SNMP v1 traps are not supported, EMC recommends
using SNMP v2c or v3.
The default port that is open when SNMP is enabled is port 161. Traps are sent out
through port 162.
l The EMC Data Domain Operating System Initial Configuration Guide describes how to
set up the Data Domain system to use SNMP monitoring.
l The EMC Data Domain Operating System MIB Quick Reference describes the full set
of MIB parameters included in the Data Domain MIB branch.
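For example, SNMP can be enabled and a v2c community and trap host added from the CLI, as in this sketch (the community and host names below are placeholders, and the exact option syntax may vary by release):
# snmp enable
# snmp add ro-community localCommunity
# snmp add trap-host snmp-mgr.yourcompany.com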
Item Description
SNMP System Location The location of the Data Domain system being monitored.
SNMP System Contact The person designated as the contact for Data Domain
system administration.
SNMP V3 Configuration
Table 41 SNMP Users column descriptions
Item Description
Name The name of the user on the SNMP manager with access to the
agent for the Data Domain system.
Access The access permissions for the SNMP user, which can be Read-
only or Read-write.
Authentication Protocols The Authentication Protocol used to validate the SNMP user,
which can be MD5, SHA1, or None.
Privacy Protocol The encryption protocol used during the SNMP user
authentication, which can be AES, DES, or None.
Item Description
Host The IP address or domain name of the SNMP management host.
Port The port used for SNMP trap communication with the host. For
example, 162 is the default.
User The user on the trap host authenticated to access the Data
Domain SNMP information.
Item Description
Community The name of the community. For example, public, private, or
localCommunity.
Item Description
Host The systems designated to receive SNMP traps generated by
the Data Domain system. If this parameter is set, systems
receive alert messages, even if the SNMP agent is disabled.
Port The port used for SNMP trap communication with the host. For
example, 162 is the default.
3. In the text fields, add an SNMP system location (a description of where the
Data Domain system is located) and/or an SNMP system contact (for example,
the email address of the system administrator for the Data Domain system).
4. Click OK.
3. In the Name text field, enter the name of the user for whom you want to grant
access to the Data Domain system agent. The name must be a minimum of eight
characters.
4. Select either read-only or read-write access for this user.
Note
If the Delete button is disabled, the selected user is being used by one or more
trap hosts. Delete the trap hosts and then delete the user.
Note
The SNMP V2c Community string is sent in cleartext and is very easy to intercept. If
this occurs, the interceptor can retrieve information from devices on your network,
modify their configuration, and possibly shut them down. SNMP V3 provides
authentication and encryption features to prevent interception.
3. In the Community box, enter the name of a community for whom you want to
grant access to the Data Domain system agent.
4. Select either read-only or read-write access for this community.
5. If you want to associate the community to one or more hosts, add the hosts as
follows:
a. Click + to add a host.
The Host dialog box appears.
b. In the Host text field, enter the IP address or domain name of the host.
c. Click OK.
The Host is added to the host list.
6. Click OK.
The new community entry appears in the Communities table and lists the
selected hosts.
3. To change the access mode for this community, select either read-only or
read-write access.
Note
The Access buttons for the selected community are disabled when a trap host
on the same system is configured as part of that community. To modify the
access setting, delete the trap host and add it back after the community is
modified.
Note
DD System Manager does not allow you to delete a host when a trap host on
the same system is configured as part of that community. To delete a trap host
from a community, delete the trap host and add it back after the community is
modified.
Note
The Access buttons for the selected community are not disabled when the trap
host uses an IPv6 address and the system is managed by an earlier DD OS
version that does not support IPv6. EMC recommends that you always select a
management system that uses the same or a newer DD OS version than the
systems it manages.
a. Select the checkbox for each host or click the Host check box in the table
head to select all listed hosts.
b. Click the delete button (X).
6. To edit a host name, do the following:
a. Select the checkbox for the host.
b. Click the edit button (pencil icon).
Note
If the Delete button is disabled, the selected community is being used by one or
more trap hosts. Delete the trap hosts and then delete the community.
3. In the Host box, enter the IP address or domain name of the SNMP Host to
receive traps.
4. In the Port box, enter the port number for sending traps (port 162 is a common
port).
5. Select the user (SNMP V3) or the community (SNMP V2C) from the drop-
down menu.
Note
The Community list displays only those communities to which the trap host is
already assigned.
3. To modify the port number, enter a new port number in the Port box (port 162
is a common port).
4. Select the user (SNMP V3) or the community (SNMP V2C) from the drop-
down menu.
Note
The Community list displays only those communities to which the trap host is
already assigned.
CLI equivalent
2. Click the file name link to view the report using a text editor. If doing so is
required by your browser, download the file first.
Note
If the bundle is too large to be emailed, use the EMC support site to upload the
bundle. (Go to https://support.emc.com.)
2. Click the file name link and select a gz/tar decompression tool to view the ASCII
contents of the bundle.
If there are active alerts on the file system, replication, or protocols when a failover
occurs, these active alerts continue to show on the new active node after failover if
the alert conditions have not cleared.
Historical alerts on the filesystem, replication, and protocols stay with the node where
they originated rather than failing over together with the filesystem on a failover. This
means the CLIs on the active node will not present a complete/continuous view of
historical alerts for filesystem, replication, and protocols.
During a failover, local historical alerts stay with the node from which they were
generated; however, the historical alerts for the filesystem, replication, and protocols
(generally called "logical alerts") fail over together with the filesystem.
Note
The Health > High Availability panel displays only alerts that are HA-related. Those
alerts can be filtered by major HA component, such as HA Manager, Node,
Interconnect, Storage, and SAS connection.
2. To limit (filter) the entries in the Group Name list, type a group name in the
Group Name box or a subscriber email in the Alert Email box, and click Update.
3. To display detailed information for a group, select the group in the Group Name
list.
Notification tab
The Notification tab allows you to configure groups of email addresses that receive
system alerts for the alert types and severity levels you select.
Item Description
Group Name The configured name for the group.
Classes The number of alert classes that are reported to the group.
Item Description
Class A service or subsystem that can forward alerts. The listed classes
are those for which the notification group receives alerts.
Severity The severity level that triggers an email to the notification group. All
alerts at the specified severity level and above are sent to the
notification group.
Subscribers The subscribers area displays a list of all email addresses configured
for the notification group.
Control Description
Add button Click the Add button to begin creating a
notification group.
Class Attributes Configure button Click this Configure button to change the
classes and severity levels that generate
alerts for the selected notification group.
Filter By: Alert Email box Enter text in this box to limit the group name
list entries to groups that include an email
address that contains the specified text.
Filter By: Group Name box Enter text in this box to limit the group name
list entries to group names that contain the
specified text.
5. Click OK.
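CLI equivalent
A minimal sketch, assuming the alerts notify-list command family (the group name and address below are placeholders):
# alerts notify-list create backupteam
# alerts notify-list add backupteam emails djones@yourcompany.com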
b. Use the list boxes to select the hour, minute, and either AM or PM for the
summary report.
c. Click OK.
CLI equivalent
c. Click Finish.
Item Description
Delivery Time The delivery time shows the configured time for daily emails.
Email List This list displays the email addresses of those who receive the daily
emails.
Control Description
Configure button Click the Configure button to edit the
subscriber email list.
4. In the Notification Groups list, select groups to receive the test email and click
Next.
5. Optionally, add additional email addresses to receive the email.
6. Click Send Now and OK.
CLI equivalent
7. If you disabled sending of the test alert to EMC Data Domain and you want to
enable this feature now, do the following.
a. Select Maintenance > Support > Autosupport.
b. In the Alert Support area, click Enable.
Results
To test newly added alert emails for mailer problems, enter: autosupport test
email email-addr
For example, after adding the email address djones@yourcompany.com to the list,
check the address with the command: autosupport test email
djones@yourcompany.com
CLI equivalent
Note
The hostname or IP address specified for the ESRS gateway must match the
hostname or IP addresses specified when creating SSL certificates on the Data
Domain system, otherwise the support connectemc device register
command will fail.
Note
Log messages on an HA system are preserved on the node where the log file
originated.
Log files are rotated weekly. Every Sunday at 0:45 a.m., the system automatically
opens new log files for the existing logs and renames the previous files with appended
numbers. For example, after the first week of operation, the previous week's messages
file is renamed messages.1, and new messages are stored in a new messages file.
Each numbered file is rolled to the next number each week. For example, after the
second week, the file messages.1 is rolled to messages.2. If a messages.2 file
already existed, it rolls to messages.3. At the end of the retention period (shown in
the table below), the expired log is deleted. For example, an existing messages.9 file
is deleted when messages.8 rolls to messages.9.
Except as noted in this topic, the log files are stored in /ddvar/log.
Note
Files in the /ddvar directory can be deleted using Linux commands if the Linux user is
assigned write permission for that directory.
The set of log files on each system is determined by the features configured on the
system and the events that occur. The following table describes the log files that the
system can generate.
File Description Retention
cifs.log Log messages from the CIFS subsystem are logged only in debug/cifs/cifs.log. Size limit of 50 MiB. Retained for 10 weeks.
space.log Messages about disk space usage by system components, and messages from the clean process. A space use message is generated every hour. Each time the clean process runs, it creates approximately 100 messages. All messages are in comma-separated-value format with tags you can use to separate the disk space messages from the clean process messages. You can use third-party software to analyze either set of messages. The log file uses the following tags:
l CLEAN for data lines from clean operations.
l CLEAN_HEADER for lines that contain headers for the clean operations data lines.
l SPACE for disk space data lines.
l SPACE_HEADER for lines that contain headers for the disk space data lines.
A single file is kept permanently; there is no log file rotation for this log.
2. Click a log file name to view its contents. You may be prompted to select an
application, such as Notepad.exe, to open the file.
2. When viewing the log, use the up and down arrows to scroll through the file; use
the q key to quit; and enter a slash character (/) and a pattern to search
through the file.
The display of the messages file is similar to the following. The last message in the
example is an hourly system status message that the Data Domain system generates
automatically. The message reports system uptime, the amount of data stored, NFS
operations, and the amount of disk space used for data storage (%). The hourly
messages go to the system log and to the serial console if one is attached.
# log view
Jun 27 12:11:33 localhost rpc.mountd: authenticated unmount
request from perfsun-g.emc.com:668 for /ddr/col1/segfs (/ddr/
col1/segfs)
Note
Severity levels, in descending order, are: Emergency, Alert, Critical, Error, Warning,
Notice, Info, Debug.
Procedure
1. Go to the EMC Online Support website at https://support.emc.com, enter
Error Message Catalog in the search box, and click the search button.
2. In the results list, locate the catalog for your system and click on the link.
3. Use your browser search tool to search for a unique text string in the message.
The error message description looks similar to the following display.
Note
Some web browsers do not automatically ask for a login if a machine does not
accept anonymous logins. In that case, add a user name and password to the
FTP line. For example: ftp://sysadmin:your-pw@dd-system-name.yourcompany.com/
5. At the login pop-up, log into the Data Domain system as user sysadmin.
6. On the Data Domain system, you are in the directory just above the log
directory. Open the log directory to list the messages files.
7. Copy the file that you want to save. Right-click the file icon and select Copy To
Folder from the menu. Choose a location for the file copy.
8. If you want the FTP service disabled on the Data Domain system, after
completing the file copy, use SSH to log into the Data Domain system as
sysadmin and invoke the command adminaccess disable ftp.
The following command adds the system named log-server to the hosts that receive
log messages.
The following command removes the system named log-server from the hosts that
receive log messages.
The following command disables the sending of logs and clears the list of destination
hostnames.
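The commands themselves are not reproduced above; the following sketch assumes the standard log host command family:
# log host add log-server
# log host del log-server
# log host disable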
Note
SOL is used to view the boot sequence after a power cycle on a remote system. SOL
enables text console data that is normally sent to a serial port or to a directly attached
console to be sent over a LAN and displayed by a management host.
The DD OS CLI allows you to configure a remote system for SOL and view the remote
console output. This feature is supported only in the CLI.
NOTICE
IPMI power removal is provided for emergency situations during which attempts to
shut down power using DD OS commands fail. IPMI power removal simply removes
power to the system; it does not perform an orderly shutdown of the DD OS file
system. The proper way to remove and reapply power is to use the DD OS system
reboot command. The proper way to remove system power is to use the DD OS
system poweroff command and wait for the command to properly shut down the
file system.
Note
The IPMI user list for each remote system is separate from the DD System Manager
lists for administrator access and local users. Administrators and local users do not
inherit any authorization for IPMI power management.
Procedure
1. Select Maintenance > IPMI.
2. To add a user, complete the following steps.
a. Above the IPMI Users table, click Add.
b. In the Add User dialog box, type the user name (16 or fewer characters) and
password in the appropriate boxes (reenter the password in the Verify
Password box).
c. Click Create.
The user entry appears in the IPMI Users table.
Note
DD4200, DD4500, and DD7200 systems are an exception to the naming rule
described earlier. On these systems, the IPMI port, bmc0a, corresponds to shared port
ethMa in the network interface list. EMC recommends that the shared port ethMa be
reserved for IPMI traffic and system management traffic (using protocols such as
HTTP, Telnet, and SSH). Backup data traffic should be directed to other ports.
When IPMI and non-IPMI IP traffic share an Ethernet port, EMC recommends that you
do not use the link aggregation feature on the shared interface because link state
changes can interfere with IPMI connectivity.
Procedure
1. Select Maintenance > IPMI.
The IPMI Configuration area shows the IPMI configuration for the managed
system. The Network Ports table lists the ports on which IPMI can be enabled
and configured. The IPMI Users table lists the IPMI users who can access the
managed system.
Item Description
Port The logical name for a port that supports IPMI communications.
Enabled Whether the port is enabled for IPMI (Yes or No).
DHCP Whether the port uses DHCP to set its IP address (Yes or No).
Item Description
User Name The name of a user with authority to power manage the remote
system.
Note
If the IPMI port also supports IP traffic (for administrator access or backup
traffic), the interface port must be enabled before you configure IPMI.
5. Enable a disabled IPMI network port by selecting the network port in the
Network Ports table, and clicking Enable.
6. Disable an enabled IPMI network port by selecting the network port in the
Network Ports table, and clicking Disable.
7. Click Apply.
Preparing for remote power management and console monitoring with the
CLI
Remote console monitoring uses the Serial Over LAN (SOL) feature to enable viewing
of text-based console output without a serial server. You must use the CLI to set up a
system for remote power management and console monitoring.
Remote console monitoring is typically used in combination with the ipmi remote
power cycle command to view the remote system's boot sequence. This procedure
should be used on every system for which you might want to remotely view the
console during the boot sequence.
Procedure
1. Connect the console to the system directly or remotely.
l Use the following connectors for a direct connection.
n DIN-type connectors for a PS/2 keyboard
n USB-A receptacle port for a USB keyboard
n DB15 female connector for a VGA monitor
Note
If the IPMI port also supports IP traffic (for administrator access or backup
traffic), the interface port must be enabled with the net enable command
before you configure IPMI.
6. If this is the first time using IPMI, run ipmi user reset to clear IPMI users
that may be out of synch between two ports, and to disable default users.
7. To add a new IPMI user, enter ipmi user add user.
8. To set up SOL, do the following:
a. Enter system option set console lan.
b. When prompted, enter y to reboot the system.
3. Enter the remote system IPMI IP address or hostname and the IPMI username
and password, then click Connect.
4. View the IPMI status.
The IPMI Power Management dialog box appears and shows the target system
identification and the current power status. The Status area always shows the
current status.
Note
The Refresh icon (the blue arrows) next to the status can be used to refresh
the configuration status (for example, if the IPMI IP address or user
configuration were changed within the last 15 minutes using the CLI
commands).
NOTICE
The IPMI Power Down feature does not perform an orderly shutdown of the DD
OS. Use this option only if the DD OS hangs and cannot be used to gracefully
shut down the system.
Note
The remote system must be properly set up before you can manage power or monitor
the system.
Procedure
1. Establish a CLI session on the system from which you want to monitor a remote
system.
2. To manage power on the remote system, enter ipmi remote power {on |
off | cycle | status} ipmi-target <ipaddr | hostname> user
user.
3. To begin remote console monitoring, enter ipmi remote console ipmi-
target <ipaddr | hostname> user user.
Note
The user name is an IPMI user name defined for IPMI on the remote system. DD
OS user names are not automatically supported by IPMI.
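For example, using the syntax above (the address and user name below are placeholders):
# ipmi remote power status ipmi-target 192.168.1.22 user ipmiadmin
# ipmi remote console ipmi-target 192.168.1.22 user ipmiadmin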
2. To view the system uptime and identity information, select Maintenance >
System.
The system uptime and identification information appears in the System area.
others). Click anywhere in the alerts area to display more information on the current
alerts.
Column Description
Count A count of the current alerts for the
subsystem type specified in the adjacent
column. The background color indicates the
severity of the alert.
Most recent alerts The text of the most recent alert for the
subsystem type specified in the adjacent
column
Column Description
Status The current status of the file system.
Data Written: Post-compression The data quantity stored on the system after
compression.
Column Description
Left column The left column lists the services that may be
used on the system. These services can include
replication, DD VTL, CIFS, NFS, DD Boost, and
vDisk.
Right column The right column shows the status of each
service: enabled, disabled, or not licensed. The
replication service row displays the number of
replication contexts that are in normal,
warning, and error states. A color-coded box
displays green for normal operation, yellow for
warning situations, or red when errors are
present.
Label Description
Enclosures The enclosure icons display the number of
enclosures operating in the normal (green
checkmark) and degraded (red X) states.
Label Description
Model Number The model number is the number assigned to
the EMC Data Domain system.
Label Description
System Uptime The system uptime displays how long the
system has been running since the last system
start. The time in parentheses indicates when
the system uptime was last updated.
System Serial No. The system serial number is the serial number
assigned to the system. On newer systems,
such as DD4500 and DD7200, the system
serial number is independent of the chassis
serial number and remains the same during
many types of maintenance events, including
chassis replacements. On legacy systems,
such as DD990 and earlier, the system serial
number is set to the chassis serial number.
Chassis Serial No. The chassis serial number is the serial number
on the current system chassis.
3. To display additional information for a specific alert in the Details area, click the
alert in the list.
4. To clear an alert, select the alert checkbox in the list and click Clear.
A cleared alert no longer appears in the current alerts list, but it can be found in
the alerts history list.
5. To remove filtering and return to the full listing of current alerts, click Reset.
Item Description
Message The alert message text.
Severity The level of seriousness of the alert. For example, warning, critical,
info, or emergency.
Date The time and date the alert occurred.
Item Description
Name A textual identifier for the alert.
Severity The level of seriousness of the alert. For example, warning, critical,
info, emergency.
b. Click Update.
All alerts not matching the Severity and Class are removed from the list.
3. To display additional information for a specific alert in the Details area, click the
alert in the list.
4. To remove filtering and return to the full listing of cleared alerts, click Reset.
Item Description
Message The alert message text.
Severity The level of seriousness of the alert. For example, warning, critical,
info, or emergency.
Item Description
Name A textual identifier for the alert.
Severity The level of seriousness of the alert. For example, warning, critical,
info, or emergency.
Fan status
Fans are numbered and correspond to their location in the chassis. Hover over a
system fan to display a tooltip for that device.
Item Description
Description The name of the fan.
Level The current operating speed range (Low, Medium, High). The
operating speed changes depending on the temperature inside
the chassis.
Temperature status
Data Domain systems and some components are configured to operate within a
specific temperature range, which is defined by a temperature profile that is not
configurable. Hover over the Temperature box to display the temperature tooltip.
Item Description
Description The location within the chassis being measured. The components
listed depend on the model and are often shown as abbreviations.
Some examples are:
l CPU 0 Temp (Central Processing Unit)
Item Description
Description The type of NIC installed in the management panel.
Item Description
Description The name of the SSD.
Life Used The percentage of the rated operating life the SSD has used.
NVRAM status
Hover over NVRAM to display information about the Non-Volatile RAM, batteries, and
other components.
Item Description
Component The items in the component list depend on the NVRAM installed
in the system and can include the following items.
l Firmware version
l Memory size
l Error counts
l Flash controller error counts
l Board temperature
l CPU temperature
l Battery number (The number of batteries depends on the
system type.)
l Current slot number for NVRAM
Item Description
C/F Displays the temperature for select components in the Celsius/
Fahrenheit format.
Value Values are provided for select components and describe the
following.
l Firmware version number
l Memory size value in the displayed units
l Error counts for memory, PCI, and controller
l Flash controller error counts sorted in the following groups:
configuration errors (Cfg Err), panic conditions (Panic), Bus
Hang, bad block warnings (Bad Blk Warn), backup errors
(Bkup Err), and restore errors (Rstr Err)
l Battery information, such as percent charged and status
(enabled or disabled)
for data read from the system by DD Boost clients and data written to the system
by DD Boost clients.
Disk
The Disk graph displays the amount of data in the appropriate unit of
measurement based on the data received, such as KiB or MiB per second, going
to and from all disks in the system.
Network
The Network graph displays the amount of data in the appropriate unit of
measurement based on the data received, such as KiB or MiB per second, that
passes through each Ethernet connection. One line appears for each Ethernet
port.
Item Description
Name User name of the logged-in user.
Last Login From System from which the user logged in.
Types of reports
The New Report area lists the types of reports you can generate on your system.
Note
Replication reports can only be created if the system contains a replication license and
a valid replication context is configured.
Item Description
Data Written (GiB) The amount of data written before compression. This is
indicated by a purple shaded area on the report.
Time The timeline for data that was written. The time displayed on
this report changes based upon the Duration selection when
the chart was created.
Total Compression Factor The total compression factor reports the compression ratio.
Item Description
Used (GiB) The amount of space used after compression.
Time The date the data was written. The time displayed on this
report changes based upon the Duration selection when the
chart was created.
Usage Trend The dotted black line shows the storage usage trend. When
the line reaches the red line at the top, the storage is almost
full.
Cleaning Cleaning is the cleaning cycle (start and end time for each
cleaning cycle). Administrators can use this information to
choose the best time for space cleaning and the best throttle
setting.
Item Description
Date (or Time for 24 hour The last day of each week, based on the criteria set for the
report) report. In reports, a 24-hour period ranges from noon-to-
noon.
Data Written (Pre-Comp) The cumulative data written before compression for the
specified time period.
Used (Post-Comp) The cumulative data written after compression for the
specified time period.
Compression Factor The total compression factor. This is indicated by a black line
on the report.
Table 70 File System Weekly Cumulative Capacity chart label descriptions (continued)
Item Description
Space Used (GiB) The amount of space used. Post-comp is red shaded area.
Pre-Comp is purple shaded area.
Time The date the data was written.
Item Description
Date The date the data was written.
Item Description
Start Date The first day of the week for this summary.
End Date The last day of the week for this summary.
Data (Post-Comp) The cumulative data written before compression for the
specified time period.
Replication (Post-Comp) The cumulative data written after compression for the
specified time period.
Item Description
Time The period of data collection for this report.
Item Description
Data Written (Pre-Comp) The amount of data written pre-compression.
Item Description
Start Time The time the cleaning activity started.
End Time The time the cleaning activity finished.
Item Description
ID The Replication Context identification.
Item Description
ID The Replication Context identification.
Item Description
Destination Destination system name.
Item Description
Destination Destination system name.
Item Description
Network In (MiB) The amount of data entering the system. Network In is
indicated by a thin green line.
Network Out (MiB) The amount of data sent from the system. Network Out is
indicated by a thick orange line.
2. Select a filter by which to display the Task Log from the Filter By list box. You
can select All, In Progress, Failed, or Completed.
The Tasks view displays the status of all tasks based on the filter you select and
refreshes every 60 seconds.
4. To display detailed information about a task, select the task in the task list.
Item Description
System The system name.
Item Description
HA System bar Displays a green check mark when the system
is operating normally and ready for failover.
Take Node 1 Offline Allows you to take the active node offline if
necessary.
Post Time Indicates the time and date the alert was
posted.
Domain system. Logical space is the amount of uncompressed data written to the
system.
The file system space reporting tools (DD System Manager graphs and filesys
show space command, or the alias df) show both physical and logical space. These
tools also report the size and amounts of used and available space.
When a Data Domain system is mounted, the usual tools for displaying a file system's
physical use of space can be used.
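For example, either of the following reports the space statistics described above:
# filesys show space
# df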
The Data Domain system generates warning messages as the file system approaches
its maximum capacity. The following information about data compression gives
guidelines for disk use over time.
The amount of disk space used over time by a Data Domain system depends on:
l The size of the initial full backup.
l The number of additional backups (incremental and full) retained over time.
l The rate of growth of the backup dataset.
l The change rate of data.
For data sets with typical rates of change and growth, data compression generally
matches the following guidelines:
l For the first full backup to a Data Domain system, the compression factor is
generally 3:1.
l Each incremental backup to the initial full backup has a compression factor
generally in the range of 6:1.
l The next full backup has a compression factor of about 60:1.
Over time, with a schedule of weekly full and daily incremental backups, the aggregate
compression factor for all the data is about 20:1. The compression factor is lower for
incremental-only data or for backups with less duplicate data. Compression is higher
when all backups are full backups.
Types of compression
Data Domain compresses data at two levels: global and local. Global compression
compares received data to data already stored on disks. Duplicate data does not need
to be stored again, while data that is new is locally compressed before being written to
disk.
Local Compression
A Data Domain system uses a local compression algorithm developed specifically to
maximize throughput as data is written to disk. The default algorithm (lz) allows
shorter backup windows for backup jobs but uses more space. Local compression
options provide a trade-off between performance and space usage. To change
compression, see the section regarding changing local compression.
After you change the compression, all new writes use the new compression type.
Existing data is converted to the new compression type during cleaning. It takes
several rounds of cleaning to recompress all of the data that existed before the
compression change.
The initial cleaning after the compression change might take longer than usual.
Whenever you change the compression type, carefully monitor the system for a week
or two to verify that it is working properly.
End-to-end verification
End-to-end checks protect all file system data and metadata. As data comes into the
system, a strong checksum is computed. The data is deduplicated and stored in the
file system. After all data is flushed to disk, it is read back, and re-checksummed. The
checksums are compared to verify that both the data and the file system metadata
are stored correctly.
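The following minimal Python sketch illustrates this write-verify flow conceptually. It is not the DD OS implementation; the toy Store class merely stands in for the deduplicating file system.

import hashlib

class Store:
    """Toy content-addressed store standing in for the file system."""
    def __init__(self):
        self.blobs = {}
    def put(self, data):
        key = hashlib.sha256(data).hexdigest()  # deduplicate by content hash
        self.blobs.setdefault(key, data)
        return key
    def get(self, key):
        return self.blobs[key]

def write_with_verification(store, data):
    inline_sum = hashlib.sha256(data).hexdigest()  # checksum computed on ingest
    key = store.put(data)                          # data deduplicated and stored
    readback = store.get(key)                      # read back after flush to disk
    if hashlib.sha256(readback).hexdigest() != inline_sum:
        raise IOError("end-to-end verification failed")
    return key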
How the file system reclaims storage space with file system cleaning
When your backup application (such as NetBackup or NetWorker) expires data, the
data is marked by the Data Domain system for deletion. However, the data is not
deleted immediately; it is removed during a cleaning operation.
l During the cleaning operation, the file system is available for all normal operations
including backup (write) and restore (read).
l Although cleaning uses a significant amount of system resources, cleaning is self-
throttling and gives up system resources in the presence of user traffic.
l Data Domain recommends running a cleaning operation after the first full backup
to a Data Domain system. The initial local compression on a full backup is generally
a factor of 1.5 to 2.5. An immediate cleaning operation gives additional
compression by another factor of 1.15 to 1.2 and reclaims a corresponding amount
of disk space.
l When the cleaning operation finishes, a message is sent to the system log giving
the percentage of storage space that was reclaimed.
A default schedule runs the cleaning operation every Tuesday at 6 a.m. (tue 0600).
You can change the schedule or you can run the operation manually (see the section
regarding modifying a cleaning schedule).
Data Domain recommends running the cleaning operation once a week.
Any operation that disables the file system or shuts down a Data Domain system
during a cleaning operation (such as a system power-off or reboot) aborts the
cleaning operation. The cleaning operation does not immediately restart when the
system restarts. You can manually restart the cleaning operation or wait until the next
scheduled cleaning operation.
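For reference, these operations map to the filesys clean CLI commands. The following is a sketch; confirm exact syntax in the EMC Data Domain Operating System Command Reference Guide.
filesys clean start
Starts a cleaning operation manually.
filesys clean watch
Monitors the progress of a running cleaning operation.
filesys clean set schedule Tue 0600
Sets the weekly schedule (shown here matching the default of Tuesday at 6 a.m.).
filesys clean show schedule
Displays the current cleaning schedule.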
With MTree replication, if a file is created and deleted while a snapshot is being
replicated, then the next snapshot will not have any information about this file, and the
system will not replicate any content associated with this file. Directory replication will
replicate both the create and delete, even though they happen close to each other.
With the replication log that directory replication uses, operations like deletions,
renaming, and so on, execute as a single stream. This can reduce the replication
throughput. The use of snapshots by MTree replication avoids this problem.
Supported interfaces
Interfaces supported by the file system.
l NFS
l CIFS
l DD Boost
l DD VTL
DD990 128 or 256 GB / 4 GB 540 150 270 540 w<=540; r<=150; ReplSrc<=270; ReplDest<=540; ReplDest+w<=540; Total<=540
DD7200 128 or 256 GB / 4 GB 540 150 270 540 w<=540; r<=150; ReplSrc<=270; ReplDest<=540; ReplDest+w<=540; Total<=540
Note
The overall performance of the Data Domain system falls to unacceptable levels if
the system is required to support the maximum number of files and the workload from
the client machines is not carefully controlled.
When the file system passes the billion file limit, several processes or operations might
be adversely affected, for example:
l Cleaning may take a very long time to complete, perhaps several days.
l AutoSupport operations may take more time.
l Any process or command that needs to enumerate all the files.
If there are many small files, other considerations arise:
l The number of separate files that can be created per second (even if the files are
very small) may be more of a limitation than the number of MB/s that can be
moved into a Data Domain system. When files are large, the file creation rate is not
significant, but when files are small, the file creation rate dominates and may
become a factor. The file creation rate is about 100 to 200 files per second,
depending upon the number of MTrees and CIFS connections. This rate should be
taken into account during system sizing when a customer environment requires a
bulk ingest of a large number of files (see the sketch after this list).
l File access latencies are affected by the number of files in a directory. To the
extent possible, we recommend directory sizes of less than 250,000. Larger
directory sizes might experience slower responses to metadata operations such as
listing the files in the directory and opening or creating a file.
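As a rough illustration of the creation-rate limit described in the first bullet above, the Python sketch below estimates bulk-ingest time. The workload size is hypothetical, and the 150 files per second rate is simply the midpoint of the 100 to 200 range given in the text.

num_files = 10_000_000       # hypothetical bulk ingest of small files
creation_rate = 150          # files/s, midpoint of the range cited above
hours = num_files / creation_rate / 3600
print(f"Estimated ingest time: {hours:.1f} hours")  # about 18.5 hours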
NOTICE
The Data Domain DD2200 system does not use NVRAM, so firmware calculations
determine whether the battery charge is sufficient to save the data and disable the
file system if AC power is lost.
Cloud Tier Local Comp The type of compression in use for the cloud tier.
l See the section regarding types of compression for an overview.
l See the section regarding changing local compression.
Marker Type Backup software markers (tape markers, tag headers, or other
names are used) in data streams. See the section regarding
tape marker settings.
You can adjust the workload balance of the file system to increase performance based
on your usage.
Table 84 Workload Balance settings
Sequential workloads (%) Traditional backups and restores perform better with sequential
workloads.
Throttle The percentage of available resources the system uses for data
movement. A throttle value of 100% is the default throttle and
means that data movement will not be throttled.
Setting Description
DD System Status can be one of the following:
l Not licensed: No other information provided.
l Not configured: Encryption is licensed but not configured.
l Enabled: Encryption is enabled and running.
l Disabled: Encryption is disabled.
Setting Description
Encryption Progress View encryption status details for the active tier regarding the
application of changes and re-encryption of data. Status can be
one of the following:
l None
l Pending
l Running
l Done
Click View Details to display the Encryption Status Details dialog
that includes the following information for the Active Tier:
l Type (Example: Apply Changes when encryption has already
been initiated, or Re-encryption when encryption is a result of
compromised data, perhaps a previously destroyed key.)
l Status (Example: Pending)
l Details (Example: Requested on December xx/xx/xx and will
take effect after the next system clean).
Key Management
Key Manager Either the internal Data Domain Embedded Key Manager or the
optional RSA Data Protection Manager (DPM) Key Manager. Click
Configure to switch between key managers (if both are
configured), or to modify Key Manager options.
Setting Description
Server Status Online or offline, or the error messages returned by the RSA Key
Manager Server.
Key Class A specialized type of security class used by the optional RSA Data
Protection Manager (DPM) Key Manager that groups
cryptographic keys with similar characteristics. The Data Domain
system retrieves a key from the RSA server by key class. A key
class can be set up either to return the current key or to generate a
new key each time.
Note
FIPS mode Whether or not the imported host certificate is FIPS compliant. The
default mode is enabled.
Encryption Keys Lists keys by ID numbers. Shows when a key was created, how long
it is valid, its type (RSA DPM Key Manager or the Data Domain
internal key), its state (see Working with the RSA DPM Key
Manager, DPM Encryption Key States Supported by Data Domain),
and the amount of the data encrypted with the key. The system
displays the last updated time for key information above the right
column. Selected keys in the list can be:
l Synchronized so the list shows new keys added to the RSA
server (but are not usable until the file system is restarted).
l Deleted.
l Destroyed.
Note
The process of deleting files and removing snapshots does not immediately reclaim
disk space; the next cleaning operation reclaims the space.
l Level 1: At the first level of fullness, no more new data can be written to the file
system. An informative out of space alert is generated.
Remedy: Delete unneeded datasets, reduce the retention period, delete
snapshots, and perform a file system cleaning operation.
l Level 2: At the second level of fullness, files cannot be deleted. This is because
deleting files also requires free space, but the system has so little free space
available that it cannot even delete files.
Remedy: Expire snapshots and perform a file system cleaning operation.
l Level 3: At the third and final level of fullness, attempts to expire snapshots,
delete files, or write new data fail.
Remedy: Perform a file system cleaning operation to free enough space to at
least delete some files or expire some snapshots, and then rerun cleaning.
Note
To join the alert email list, see Viewing and Clearing Alerts.
Procedure
1. Verify that storage has been installed and configured (see the section on
viewing system storage information for more information). If the system does
not meet this prerequisite, a warning message is displayed. Install and configure
the storage before attempting to create the file system.
2. Select Data Management > File System > Summary > Create.
The File System Create Wizard is launched. Follow the instructions provided.
CAUTION
Disabling the file system when a backup application is sending data to the system
can cause the backup process to fail. Some backup software applications are able
to recover by restarting where they left off when they are able to successfully
resume copying files; others might fail, leaving the user with an incomplete
backup.
Procedure
1. Select Data Management > File System > Summary.
2. For File System, click Enable or Disable.
3. On the confirmation dialog, click Close.
Note
This requires licensing enough additional capacity to use the remaining 21.8 TiB of
the partial shelf.
l If the available capacity exceeds 21.8 TiB, a partial shelf cannot be added.
l Deleting a 21 TiB license will not automatically convert a fully-used shelf to a
partial shelf. The shelf must be removed, and added back as a partial shelf.
To expand the file system:
Procedure
1. Select Data Management > File System > Summary > Expand Capacity.
The Expand File System Capacity wizard is launched. The Storage Tier drop-
down list always contains Active Tier, and it may contain either Extended
Retention Tier or Cloud Tier as a secondary choice. The wizard displays the
current capacity of the file system for each tier as well as how much additional
storage space is available for expansion.
Note
File system capacity can be expanded only if the physical disks are installed on
the system and the file system is enabled.
CAUTION
The optional Write zeros to disk operation writes zeros to all file system disks,
effectively removing all traces of data. If the Data Domain system contains a
large amount of data, this operation can take many hours, or a day, to complete.
Note
Procedure
1. Select Data Management > File System > Summary > Destroy.
2. In the Destroy File System dialog box, enter the sysadmin password (it is the
only accepted password).
3. Optionally, click the checkbox for Write zeros to disk to completely remove
data.
4. Click OK.
Performing cleaning
This section describes how to start, stop, and modify cleaning schedules.
Starting cleaning
To immediately start a cleaning operation.
Procedure
1. Select Data Management > File System > Summary > Settings > Cleaning.
The Cleaning tab of the File System Setting dialog displays the configurable
settings for each tier.
2. For the active tier:
a. In the Throttle % text box, enter a system throttle amount. This is the
percentage of CPU usage dedicated to cleaning. The default is 50 percent.
b. In the Frequency drop-down list, select one of these frequencies: Never,
Daily, Weekly, Biweekly, and Monthly. The default is Weekly.
c. For At, configure a specific time.
d. For On, select a day of the week.
3. For the cloud tier:
a. In the Throttle % text box, enter a system throttle amount. This is the
percentage of CPU usage dedicated to cleaning. The default is 50 percent.
b. In the Frequency drop-down list, select one of these frequencies: Never,
After every 'N' Active Tier cleans.
Note
If a cloud unit is inaccessible when cloud tier cleaning runs, the cloud unit is
skipped in that run. Cleaning on that cloud unit occurs in the next run if the
cloud unit becomes available. The cleaning schedule determines the duration
between two runs. If the cloud unit becomes available and you cannot wait
for the next scheduled run, you can start cleaning manually.
4. Click Save.
Stopping cleaning
To immediately stop a cleaning operation.
Procedure
1. Select Data Management > File System > Summary > Settings > Cleaning.
The Cleaning tab of the File System Setting dialog displays the configurable
settings for each tier.
2. For the active tier:
a. In the Frequency drop-down list, select Never.
Performing sanitization
To comply with government guidelines, system sanitization, also called data shredding,
must be performed when classified or sensitive data is written to any system that is
not approved to store such data.
When an incident occurs, the system administrator must take immediate action to
thoroughly eradicate the data that was accidentally written. The goal is to effectively
restore the storage device to a state as if the event never occurred. If the data
leakage is with sensitive data, the entire storage will need to be sanitized using EMC
Professional Services' Secure Data erasure practice.
The Data Domain sanitization command exists to enable the administrator to delete
files at the logical level, whether a backup set or individual files. Deleting a file in most
file systems consists of just flagging the file or deleting references to the data on disk,
freeing up the physical space to be consumed at a later time. However, this simple
action introduces the problem of leaving behind a residual representation of underlying
data physically on disks. Deduplicated storage environments are not immune to this
problem.
Shredding data in a system implies eliminating the residual representation of that data
and thus the possibility that the file may be accessible after it has been shredded.
Data Domain's sanitization approach is compliant with the 2007 versions of the
following specifications:
l US Department of Defense 5220.22-M Clearing and Sanitization Matrix
l National Institute of Standards and Technology (NIST) Special Publication 800-88
Guidelines for Media Sanitization
zerotization mechanism. Clean data in the system being sanitized is online and
available to users.
Procedure
1. Delete the contaminated files or backups through the backup software or
corresponding client. In the case of backups, be sure to manage the backup
software appropriately to ensure that related files on that image are reconciled,
catalog records are managed as required, and so forth.
2. Run the system sanitize start command on the contaminated Data
Domain system to cause all previously used space in it to be overwritten once
(see the figure below).
3. Wait for the affected system to be sanitized. Sanitization can be monitored by
using the system sanitize watch command.
If the affected Data Domain system has replication enabled, all the systems
containing replicas need to be processed in a similar manner. Depending on how
much data exists in the system and how it is distributed, the system
sanitize command could take some time. However, during this time, all clean
data in the system is available to users.
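For reference, both commands named in this procedure are run from the system CLI, for example:
# system sanitize start
# system sanitize watch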
Note
Procedure
1. Select Data Management > File System > Summary > Settings > General.
2. From the Local Compression Type drop-down list, select a compression type.
Option Description
NONE Do not compress data.
LZ The default algorithm that gives the best throughput. Data Domain
recommends the lz option.
GZFAST A zip-style compression that uses less space for compressed data, but more
CPU cycles (twice as much as lz). Gzfast is the recommended alternative
for sites that want more compression at the cost of lower performance.
GZ A zip-style compression that uses the least amount of space for data
storage (10% to 20% less than lz on average; however, some datasets get
much higher compression). This also uses the most CPU cycles (up to five
times as much as lz). The gz compression type is commonly used for
nearline storage applications in which performance requirements are low.
3. Click Save.
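The same change can be made from the CLI with the filesys option command. The following is a sketch; verify exact syntax in the EMC Data Domain Operating System Command Reference Guide.
# filesys option set local-compression-type gzfast
# filesys option show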
Note
The DD VTL feature is not required or supported when the Data Domain system is
used as a Disk Staging device.
The reason that some backup applications use disk staging devices is to enable tape
drives to stream continuously. After the data is copied to tape, it is retained on disk for
as long as space is available. Should a restore be needed from a recent backup, more
than likely the data is still on disk and can be restored from it more conveniently than
from tape. When the disk fills up, old backups can be deleted to make space. This
delete-on-demand policy maximizes the use of the disk.
In normal operation, the Data Domain System does not reclaim space from deleted
files until a cleaning operation is done. This is not compatible with backup software
that operates in a staging mode, which expects space to be reclaimed when files are
deleted. When you configure disk staging, you reserve a percentage of the total space,
typically 20 to 30 percent, in order to allow the system to simulate the immediate
freeing of space.
The amount of available space is reduced by the amount of the staging reserve. When
the amount of data stored uses all of the available space, the system is full. However,
whenever a file is deleted, the system estimates the amount of space that will be
recovered by cleaning and borrows from the staging reserve to increase the available
space by that amount. When a cleaning operation runs, the space is actually recovered
and the reserve restored to its initial size. Since the amount of space made available
by deleting files is only an estimate, the actual space reclaimed by cleaning may not
match the estimate. The goal of disk staging is to configure enough reserve so that
you do not run out before cleaning is scheduled to run.
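The Python sketch below models this staging-reserve accounting. All numbers are hypothetical, and on a real system the reclaimable-space estimate is made by the file system itself.

total_space = 100.0          # TiB of physical capacity (hypothetical)
staging_reserve = 0.25       # 20 to 30 percent is typical, per the text
available = total_space * (1 - staging_reserve)   # 75.0 TiB usable

reclaim_estimate = 5.0       # TiB the system estimates cleaning will reclaim
available += reclaim_estimate  # borrowed from the reserve when files are deleted
print(f"Available after delete: {available} TiB")  # 80.0 TiB until cleaning runs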
Note
For information about how applications work in a Data Domain environment, see How
EMC Data Domain Systems Integrate into the Storage Environment. You can use these
matrices and integration guides to troubleshoot vendor-related issues.
Note
A fast copy operation makes the destination equal to the source, but not at a specific
time. There are no guarantees that the two are or were ever equal if you change either
folder during this operation.
Note
3. In the Destination text box, enter the pathname of the directory where the data
will be copied to. For example, /data/col1/backup/dir2. This destination
directory must be empty, or the operation fails.
l If the Destination directory exists, click the checkbox Overwrite existing
destination if it exists.
4. Click OK.
5. In the progress dialog box that appears, click Close to exit.
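The equivalent CLI operation uses the filesys fastcopy command; in this sketch the source path is illustrative:
# filesys fastcopy source /data/col1/backup/dir1 destination /data/col1/backup/dir2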
MTrees
MTrees overview
An MTree is a logical partition of the file system.
You can use MTrees in the following ways: for DD Boost storage units, DD VTL pools,
or an NFS/CIFS share. MTrees allow granular management of snapshots, quotas, and
DD Retention Lock, and, for systems that have DD Extended Retention, granular
management of data migration policies from the Active Tier to the Retention Tier.
MTree operations can be performed on a specific MTree as opposed to the entire file system.
Note
Up to the maximum number of configurable MTrees can be designated for MTree
replication contexts.
MTree limits
MTree limits for Data Domain systems
All other DD systems (DD OS 5.7 and later): 100 MTrees supported; up to 32 concurrently active, based on the model
Quotas
MTree quotas apply only to the logical data written to the MTree.
An administrator can set the storage space restriction for an MTree, Storage Unit, or
DD VTL pool to prevent it from consuming excess space. There are two kinds of quota
limits: hard limits and soft limits. You can set either a soft or hard limit or both a soft
and hard limit. Both values must be integers, and the soft value must be less than the
hard value.
When a soft limit is set, an alert is sent when the MTree size exceeds the limit, but
data can still be written to it. When a hard limit is set, data cannot be written to the
MTree when the hard limit is reached. Therefore, all write operations fail until data is
deleted from the MTree.
See the section regarding MTree quota configuration for more information.
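From the CLI, capacity quotas are managed with the quota commands. The following sketch assumes the DD OS 6.0 quota capacity syntax; the exact form may differ by release, so verify it in the EMC Data Domain Operating System Command Reference Guide.
# quota capacity set mtrees /data/col1/backup soft-limit 8 TiB hard-limit 10 TiB
# quota capacity show mtrees /data/col1/backup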
Quota enforcement
Enable or disable quota enforcement.
Item Description
MTree Name The pathname of the MTree.
Last 24 Hr Pre-Comp (pre-compression) Amount of raw data from the backup application that has been
written in the last 24 hours.
Last 24 Hr Post-Comp (post-compression) Amount of storage used after compression in the last 24 hours.
Last 24 hr Comp Ratio The compression ratio for the last 24 hours.
Weekly Avg Post-Comp Average amount of compressed storage used in the last five
weeks.
Last Week Post-Comp Average amount of compressed storage used in the last seven
days.
Weekly Avg Comp Ratio The average compression ratio for the last five weeks.
Last Week Comp Ratio The average compression ratio for the last seven days.
Item Description
Full Path The pathname of the MTree.
Pre-Comp Used The current amount of raw data from the backup application
that has been written to the MTree.
Quota
Pre-Comp Soft Limit Current value. Click Configure to revise the quota limits.
Pre-Comp Hard Limit Current value. Click Configure to revise the quota limits.
Protocols
Item Description
VTL Pool If applicable, the name of the DD VTL pool that was converted
to an MTree.
Physical Capacity
Measurements
Used (Post-Comp) MTree space that is used after compressed data has been
ingested.
Last Measurement Time Last time the system measured the MTree.
Submitted Measurements Displays the post compression status for the MTree.
Item Description
Source The source MTree pathname.
Status The status of the MTree replication pair. Status can be Normal,
Error, or Warning.
Sync As Of The last day and time the replication pair was synchronized.
Item Description
Total Snapshots The total number of snapshots created for this MTree. A total
of 750 snapshots can be created for each MTree.
Expired The number of snapshots in this MTree that have been marked
for deletion, but have not yet been removed by the cleaning
operation.
Unexpired The number of snapshots in this MTree that are marked for
keeping.
Oldest Snapshot The date of the oldest snapshot for this MTree.
Newest Snapshot The date of the newest snapshot for this MTree.
Next Scheduled The date of the next scheduled snapshot.
Assigned Snapshot The name of the snapshot schedule assigned to this MTree.
Schedules
Note
For information on how to manage DD Retention Lock for an MTree, see the section
about working with DD Retention Lock.
Item Description
Status Indicates whether DD Retention Lock is enabled or disabled.
Retention Period Indicates the minimum and maximum DD Retention Lock time
periods.
3. In the Modify Retention Lock dialog box, select Enable to enable DD Retention
Lock on the Data Domain system.
4. In the Retention Period panel, modify the minimum or maximum retention
period (the feature must be enabled first).
5. Select an interval (minutes, hours, days, years). Click Default to show the
default values.
6. Click OK.
Results
After you close the Modify Retention Lock dialog box, updated MTree information is
displayed in the DD Retention Lock summary area.
Note
For the MTrees Space Usage view, the system displays only pre-compressed
information. Data can be shared between MTrees so compressed usage for a single
MTree cannot be provided.
Note
DD System Manager displays physical capacity measurement (PCM) information only for MTrees, but from the command line interface you can view space
usage information for MTrees, tenants, tenant units, and pathsets. For more
information about how to use PCM from the command line, see the EMC Data Domain
Operating System Command Reference Guide.
5. Select how often the schedule triggers a measurement occurrence: every Day,
Week, or Month.
l For Day, select the time.
l For Week, select the time and day of the week.
l For Month, select the time, and days during the month.
6. Select MTree assignments for the schedule (the MTrees that the schedule will
apply to):
7. Click Create.
8. Optionally, click on the heading names to sort by schedule: Name, Status
(Enabled or Disabled), Priority (Urgent or Normal), Schedule (schedule timing),
and MTree Assignments (the number of MTrees the schedule is assigned to).
Note
Procedure
1. Select Data Management > MTree > Summary.
2. Select MTrees to assign schedules to.
3. Scroll down to the Physical Capacity Measurements area and click Assign to
the right of Schedules.
4. Select schedules to assign to the MTree and click Assign.
5. Click Save.
Creating an MTree
An MTree is a logical partition of the file system. Use MTrees for DD Boost storage
units, DD VTL pools, or an NFS/CIFS share.
MTrees are created in the area /data/col1/mtree_name.
Procedure
1. Select Data Management > MTree.
2. In the MTree overview area, click Create.
3. Enter the name of the MTree in the MTree Name text box. MTree names can be
up to 50 characters. The following characters are acceptable:
l Upper- and lower-case alphabetical characters: A-Z, a-z
l Numbers: 0-9
l Embedded space
l comma (,)
l period (.), as long as it is not the first character of the name.
l exclamation mark (!)
l number sign (#)
l dollar sign ($)
l per cent sign (%)
l plus sign (+)
l at sign (@)
l equal sign (=)
l ampersand (&)
l semi-colon (;)
l parentheses ( and )
l square brackets [ and ]
l curly brackets { and }
l caret (^)
l tilde (~)
l apostrophe (unslanted single quotation mark)
l single slanted quotation mark ()
4. Set storage space restrictions for the MTree to prevent it from consuming
excessive space. Enter a soft or hard limit quota setting, or both. With a soft
limit, an alert is sent when the MTree size exceeds the limit, but data can still be
written to the MTree. Data cannot be written to the MTree when the hard limit
is reached.
Note
When setting both soft and hard limits, a quota's soft limit cannot exceed the
quota's hard limit.
5. Click OK.
The new MTree displays in the MTree table.
Note
You may need to expand the width of the MTree Name column to see the entire
pathname.
Note
3. In the MTree tab, click the Summary tab, and then click the Configure button
in the Quota area.
4. In the Quota tab, click the Configure Quota button.
Deleting an MTree
Removes the MTree from the MTree table. The MTree data is deleted at the next
cleaning.
Note
Because the MTree and its associated data are not removed until file cleaning is run,
you cannot create a new MTree with the same name as a deleted MTree until the
deleted MTree is completely removed from the file system by the cleaning operation.
Procedure
1. Select Data Management > MTree.
2. Select an MTree.
3. In the MTree overview area, click Delete.
4. Click OK at the Warning dialog box.
5. Click Close in the Delete MTree Status dialog box after viewing the progress.
Undeleting an MTree
Undelete retrieves a deleted MTree and its data and places it back in the MTree table.
An undelete is possible only if file cleaning has not been run after the MTree was
marked for deletion.
Note
Procedure
1. Select Data Management > MTree > More Tasks > Undelete.
2. Select the checkboxes of the MTrees you wish to bring back and click OK.
3. Click Close in the Undelete MTree Status dialog box after viewing the progress.
The recovered MTree displays in the MTree table.
Renaming an MTree
Use the Data Management MTree GUI to rename MTrees.
Procedure
1. Select Data Management > MTree.
2. Select an MTree in the MTree table.
3. Select the Summary tab.
4. In the Detailed Information overview area, click Rename.
5. Enter the name of the MTree in the New MTree Name text box.
See the section about creating an MTree for a list of allowed characters.
6. Click OK.
The renamed MTree displays in the MTree table.
Snapshots
Snapshots overview
This chapter describes how to use the snapshot feature with MTrees.
A snapshot saves a read-only copy (called a snapshot) of a designated MTree at a
specific time. You can use a snapshot as a restore point, and you can manage MTree
snapshots and schedules and display information about the status of existing
snapshots.
Note
Snapshots created on the source Data Domain system are replicated to the
destination with collection and MTree replication. It is not possible to create snapshots
on a Data Domain system that is a replica for collection replication. It is also not
possible to create a snapshot on the destination MTree of MTree replication. Directory
replication does not replicate the snapshots, and it requires you to create snapshots
separately on the destination system.
Snapshots for the MTree named backup are created in the system directory /data/
col1/backup/.snapshot. Each directory under /data/col1/backup also has
a .snapshot directory with the name of each snapshot that includes the directory.
Each MTree has the same type of structure, so an MTree named SantaClara would
have a system directory /data/col1/SantaClara/.snapshot, and each
subdirectory in /data/col1/SantaClara would have a .snapshot directory as
well.
Note
The .snapshot directory is not visible if only /data is mounted. When the MTree
itself is mounted, the .snapshot directory is visible.
An expired snapshot remains available until the next file system cleaning operation.
The maximum number of snapshots allowed per MTree is 750. Warnings are sent when
the number of snapshots per MTree reaches 90% of the maximum allowed number
(from 675 to 749 snapshots), and an alert is generated when the maximum number is
reached. To clear the warning, expire snapshots and then run the file system cleaning
operation.
Note
To identify an MTree that is nearing the maximum number of snapshots, check the
Snapshots panel of the MTree page (see the section regarding viewing MTree snapshot information).
Snapshot retention for an MTree does not take any extra space, but if a snapshot
exists and the original file is no longer there, the space cannot be reclaimed.
Note
Field Description
Total Snapshots (Across The total number of snapshots, active and expired, on all MTrees
all MTrees) in the system.
Expired The number of snapshots that have been marked for deletion, but
have not yet been removed by the cleaning operation.
Next file system clean scheduled The date the next scheduled file system cleaning operation will be
performed.
Snapshots view
View snapshot information by name, by MTree, creation time, whether it is active, and
when it expires.
The Snapshots tab displays a list of snapshots and lists the following information.
Field Description
Selected MTree A drop-down list that selects the MTree the snapshot operates on.
Filter By Items to search for in the list of snapshots that display. Options
are:
l Name: Name of the snapshot (wildcards are accepted).
l Year: Drop-down list to select the year.
Status The status of the snapshot, which can be Expired or blank if the
snapshot is active.
Schedules view
View the days snapshots will be taken, the times, how long they will be retained, and
the naming convention.
Field Description
Name The name of the snapshot schedule.
Snapshot Name Pattern A string of characters and variables that translate into a snapshot
name (for example, scheduled-%Y-%m-%d-%H-%M, which
translates to scheduled-2010-04-12-17-33).
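The pattern variables follow strftime conventions, so the example above can be checked with a couple of lines of Python:
from datetime import datetime
print(datetime(2010, 4, 12, 17, 33).strftime("scheduled-%Y-%m-%d-%H-%M"))
# prints scheduled-2010-04-12-17-33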
1. Select a schedule in the Schedules tab. The Detailed Information area appears
listing the MTrees that share the same schedule with the selected MTree.
2. Click the Add/Remove button to add or remove MTrees from the schedule list.
Managing snapshots
This section describes how to manage snapshots.
Creating a snapshot
Create a snapshot when an unscheduled snapshot is required.
Procedure
1. Select Data Management > Snapshots to open the Snapshots view.
2. In the Snapshots view, click Create.
3. In the Name text field, enter the name of the snapshot.
4. In the MTree(s) area, select a checkbox of one or more MTrees in the Available
MTrees panel and click Add.
5. In the Expiration area, select one of these expiration options:
a. Never Expire.
b. Enter a number for the In text field, and select Days, Weeks, Month, or
Years from the drop-down list. The snapshot will be retained until the same
time of day as when it is created.
c. Enter a date (using the format mm/dd/yyyy) in the On text field, or click
Calendar and click a date. The snapshot will be retained until midnight
(00:00, the first minute of the day) of the given date.
6. Click OK and Close.
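The equivalent CLI operation uses the snapshot create command. This sketch assumes the form of the retention argument, so verify it in the EMC Data Domain Operating System Command Reference Guide (the MTree path comes from the examples earlier in this chapter):
# snapshot create snap-0412 mtree /data/col1/SantaClara retention 14days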
Note
3. In the Expiration area, select one of the following for the expiration date:
a. Never Expire.
b. In the In text field, enter a number and select Days, Weeks, Month, or Years
from the drop-down list. The snapshot will be retained until the same time of
day as when it is created.
c. In the On text field, enter a date (using the format mm/dd/yyyy) or click
Calendar and click a date. The snapshot will be retained until midnight
(00:00, the first minute of the day) of the given date.
4. Click OK.
Renaming a snapshot
Use the Snapshot tab to rename a snapshot.
Procedure
1. Select Data Management > Snapshots to open the Snapshots view.
2. Select the checkbox of the snapshot entry in the list and click Rename.
3. In the Name text field, enter a new name.
4. Click OK.
Expiring a snapshot
Snapshots cannot be deleted. To release disk space, expire snapshots and they will be
deleted in the next cleaning cycle after the expiry date.
Procedure
1. Select Data Management > Snapshots to open the Snapshots view.
2. Click the checkbox next to the snapshot entry in the list and click Expire.
Note
If multiple snapshots with the same name are scheduled to occur at the same time,
only one is retained. Which one is retained is indeterminate, thus only one of the
snapshots with that name should be scheduled for a given time.
10. Review the parameters in the schedule summary and click Finish to complete
the schedule or Back to change any entries.
11. If an MTree is not associated with the schedule, a warning dialog box asks if you
would like to add an MTree to the schedule. Click OK to continue (or Cancel to
exit).
12. To assign an MTree to the schedule, in the MTree area, click the checkbox of
one or more MTrees in the Available MTrees panel, then click Add and OK.
l CIFS overview...................................................................................................214
l Configuring SMB signing.................................................................................. 214
l Performing CIFS setup..................................................................................... 215
l Working with shares......................................................................................... 217
l Managing access control..................................................................................222
l Monitoring CIFS operation............................................................................... 227
l Performing CIFS troubleshooting.....................................................................230
CIFS
CIFS overview
Common Internet File System (CIFS) clients can have access to the system
directories on the Data Domain system.
l The /data/col1/backup directory is the destination directory for compressed
backup server data.
l The /ddvar/core directory contains Data Domain System core and log files
(remove old logs and core files to free space in this area).
Note
You can also delete core files from the /ddvar or the /ddvar/ext directory if it
exists.
Clients, such as backup servers that perform backup and restore operations with a
Data Domain System, at a minimum need access to the /data/col1/backup
directory. Clients that have administrative access need to be able to access
the /ddvar/core directory to retrieve core and log files.
As part of the initial Data Domain system configuration, CIFS clients were configured
to access these directories. This chapter describes how to modify these settings and
how to manage data access using DD System Manager and the cifs command.
Note
l The DD System Manager Protocols > CIFS page allows you to perform major CIFS
operations such as enabling and disabling CIFS, setting authentication, managing
shares, and viewing configuration and share information.
l The cifs command contains all the options to manage CIFS backup and restores
between Windows clients and Data Domain systems, and to display CIFS statistics
and status. For complete information about the cifs command, see the EMC Data
Domain Operating System Command Reference Guide.
l For information about the initial system configuration, see the EMC Data Domain
Operating System Initial Configuration Guide.
l For information about setting up clients to use the Data Domain system as a
server, see the related tuning guide, such as the CIFS Tuning Guide, which is
available from the support.emc.com web site. Search for the complete name of
the document using the Search EMC Support field.
l When SMB signing is set to required, SMB signing is required, and both computers
in the SMB connection must have SMB signing enabled.
SMB Signing CLI Commands
cifs option set "server-signing" required
Sets server signing to required.
cifs option reset "server-signing"
Resets server signing to the default (disabled).
As a best practice, whenever you change the SMB signing options, disable and then
enable (restart) CIFS service using the following CLI commands:
cifs disable
cifs enable
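For example, a complete session that requires SMB signing and then restarts CIFS, per the best practice above, looks like this:
cifs option set "server-signing" required
cifs disable
cifs enable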
The DD System Manager interface displays whether the SMB signing option is
disabled or set to auto or mandatory. To view this setting in the interface, navigate to:
Protocols > CIFS > Configuration tab. In the Options area, the value for the SMB
signing option will be disabled, auto or mandatory reflecting the value set using the CLI
commands.
Procedure
1. For the Data Domain system that is selected in the DD System Manager
Navigation tree, click Protocols > CIFS.
2. In the CIFS Status area, click Enable.
4. In the Log Level area, click the drop-down list to select the level number.
The level is an integer from 1 (one) to 5 (five). One is the default system level
that sends the least-detailed level of CIFS-related log messages, five results in
the most detail. Log messages are stored in the file /ddvar/log/debug/
cifs/cifs.log.
Note
A log level of 5 degrades system performance. Click Default in the Log
Level area after debugging an issue. This sets the level back to 1.
Note
Procedure
1. Select Protocols > CIFS tabs to navigate to the CIFS view.
2. Ensure authentication has been configured, as described in the section
regarding setting authentication parameters.
3. On the CIFS client, set shared directory permissions or security options.
4. On the CIFS view, click the Shares tab.
5. Click Create.
6. In the Create Shares dialog box, enter the following information:
Item Description
Share Name A descriptive name for the share.
Directory Path The path to the target directory (for example, /data/col1/
backup/dir1).
Note
The share name can be a maximum of 80 characters and cannot contain the
following characters: \ / : * ? " < > | + [ ] ; , = or extended ASCII characters.
7. Add a client by clicking Add (+) in the Clients area. The Client dialog box
appears. Enter the name of the client in the Client text box and click OK.
Consider the following when entering the client name.
l No blanks or tabs (white space) characters are allowed.
l It is not recommended to use both an asterisk (*) and an individual client name
or IP address for a given share. When an asterisk (*) is present, any other
client entries for that share are not used.
l It is not required to use both client name and client IP address for the same
client on a given share. Use client names when the client names are defined
in the DNS table.
l To make the share available to all clients, specify an asterisk (*) as the client. All
users in the client list can access the share, unless one or more user names
are specified, in which case only the listed names can access the share.
Repeat this step for each client that you need to configure.
8. In the Max Connections area, select the text box and enter the maximum
number of connections to the share that are allowed at one time. The default
value of zero (also settable via the Unlimited button) enforces no limit on the
number of connections.
9. Click OK.
The newly created share appears at the end of the list of shares, located in the
center of the Shares panel.
Note
To make the share available to all clients, specify an asterisk (*) as the
client. All users in the client list can access the share, unless one or more
user names are specified, in which case only the listed names can access the
share.
d. Click OK.
5. In the Max Connections area, in the text box, change the maximum number of
connections to the share that are allowed at one time. Or select Unlimited to
enforce no limit on the number of connections.
6. Click OK.
Note
User permissions from the existing share are carried over to the new share.
Procedure
1. In the CIFS Shares tab, click the checkbox for the share you wish to use as the
source.
2. Click Create From.
3. Modify the share information, as described in the section about modifying a
share on a Data Domain system.
3. Click OK.
The shares are removed.
4. Enter the path for the Folder to share, for example, enter C:\data\col1\backup\newshare.
5. Enter the Share name, for example, enter newshare. Click Next.
6. For the Share Folder Permissions, select Administrators have full access;
other users have read-only access. Click Next.
7. The Completing dialog shows that you have successfully shared the folder with
all Microsoft Windows clients in the network. Click Finish.
The newly created shared folder is listed in the Computer Management dialog
box.
# net use H: \\dd02\backup /USER:dd02\backup22
This command maps the backup share from Data Domain system dd02 to drive H on
the Windows system and gives the user named backup22 access to the
\\DD_sys\backup directory.
Note
For a description of DD OS user roles and Windows groups, see the section
about managing Data Domain systems.
l You can map the default Data Domain System group dd admin group2 to a
Windows group named Data Domain that you create on a Windows domain
controller.
l Access is available through SSH, Telnet, FTP, HTTP, and HTTPS.
l After setting up administrative access to the Data Domain system from the
Windows group Data Domain, you must enable CIFS administrative access
using the adminaccess command.
File access
This sections contains information about ACLs, setting DACL and SACL permissions
using Windows Explorer, and so on.
CAUTION
Data Domain recommends that you do not disable NTFS ACLs once they have
been enabled. Contact Data Domain Support prior to disabling NTFS ACLs.
Table 99 Permissions
Note
CREATOR OWNER is replaced by the user creating the file/folder for normal users
and by Administrators for administrative users.
Permissions for a New Object when the Parent Directory Has No ACL
The permissions are as follows:
l BUILTIN\Administrators:(OI)(CI)F
l NT AUTHORITY\SYSTEM:(OI)(CI)F
l CREATOR OWNER:(OI)(CI)(IO)F
l BUILTIN\Users:(OI)(CI)R
l BUILTIN\Users:(CI)(special access:)FILE_APPEND_DATA
l BUILTIN\Users:(CI)(IO)(special access:)FILE_WRITE_DATA
l Everyone:(OI)(CI)R
These permissions are described in more detail as follows:
l CIFS must be disabled to set this option. If CIFS is running, disable CIFS services.
l The idmap-type can be set to none only when ACL support is enabled.
l Whenever the idmap type is changed, a file system metadata conversion might be
required for correct file access. Without any conversion, the user might not be
able to access the data. To convert the metadata, consult your contracted support
provider.
Item Description
Open Connections Open CIFS connections
Max Open Files Maximum number of open files on a Data Domain system
Item Description
Sessions Active CIFS sessions
Item Description
Files File location
Authentication configuration
The information in the Authentication panel changes, depending on the type of
authentication that is configured.
Click the Configure link to the left of the Authentication label in the Configuration
tab. The system will navigate to the Administration > Access > Authentication page
where you can configure authentication for Active Directory, Kerberos, Workgroups,
and NIS.
Active directory configuration
Table 103 Active directory configuration information
Item Description
Mode The Active Directory mode displays.
CIFS Server Name The name of the configured CIFS server displays.
WINS Server Name The name of the configured WINS server displays.
Workgroup configuration
Table 104 Workgroup configuration authentication information
Item Description
Mode The Workgroup mode displays.
CIFS Server Name The name of the configured CIFS server displays.
WINS Server Name The name of the configured WINS server displays.
Item Description
Share Name The name of the share (for example, share1).
Directory Path The directory path to the share (for example, /data/col1/
backup/dir1).
Note
l To list information about a specific share, enter the share name in the Filter by
Share Name text box and click Update.
l Click Update to return to the default list.
l To page through the list of shares, click the < and > arrows at the bottom right of
the view to page forward or backward. To skip to the beginning of the list, click |<
and to skip to the end, click >|.
l Click the Items per Page drop-down arrow to change the number of share entries
listed on a page. Choices are 15, 30, or 45 entries.
Item Description
Share Name The name of the share (for example, share1).
Directory Path The directory path to the share (for example, /data/col1/
backup/dir1).
Note
Directory Path Status Indicates whether the configured directory path exists on the
DDR. Possible values are Path Exists or Path Does Not Exist,
the latter indicating an incorrect or incomplete CIFS
configuration.
Item Description
Max Connections The maximum number of connections allowed to the share at
one time. The default value is Unlimited.
Comment The comment that was configured when the share was
created.
Share Status The status of the share: either enabled or disabled.
l The Clients area lists the clients that are configured to access the share, along
with a client tally beneath the list.
l The User/Groups area lists the names and type of users or groups that are
configured to access the share, along with a user or group tally beneath the list.
l The Options area lists the name and value of configured options.
Note
96 GB RAM: 600 CIFS connections; 30,000 open files
Note
The system has a maximum limit of 600 CIFS connections and 250,000 open files.
However, if the system runs out of open files, the number of files can be
increased.
Note
File access latencies are affected by the number of files in a directory. To the
extent possible, we recommend directory sizes of less than 250,000. Larger
directory sizes might experience slower responses to metadata operations such as
listing the files in the directory and opening or creating a file.
Note
This example is for Windows 2003 SP1; substitute your domain server for the NTP
servers name (ntpservername).
Procedure
1. On the Windows system, enter commands similar to the following:
C:\>w32tm /config /syncfromflags:manual /manualpeerlist:ntp-server-name
C:\>w32tm /config /update
C:\>w32tm /resync
2. After NTP is configured on the domain controller, configure the time server
synchronization, as described in the section about working with time and date
settings.
l NFS overview...................................................................................................234
l Managing NFS client access to the Data Domain system................................. 235
l Displaying NFS information.............................................................................. 238
l Integrating a DDR into a Kerberos domain........................................................239
l Add and delete KDC servers after initial configuration...................................... 241
NFS
NFS overview
Network File System (NFS) clients can have access to the system directories or
MTrees on the Data Domain system.
l The /backup directory is the default destination for non-MTree compressed
backup server data.
l The /data/col1/backup path is the root destination when using MTrees for
compressed backup server data.
l The /ddvar/core directory contains Data Domain System core and log files
(remove old logs and core files to free space in this area).
Note
You can also delete core files from the /ddvar or the /ddvar/ext directory if it
exists.
Clients, such as backup servers that perform backup and restore operations with a
Data Domain System, need access to the /backup or /data/col1/backup areas.
Clients that have administrative access need to be able to access the /ddvar/core
directory to retrieve core and log files.
As part of the initial Data Domain system configuration, NFS clients were configured
to access these areas. This chapter describes how to modify these settings and how
to manage data access.
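As an illustration, a Linux NFS client with access to an export might mount it as follows; the hostname dd01 and the mount point are hypothetical, and the appropriate mount options come from the tuning guide for your client OS:
# mount -t nfs -o hard,intr,nfsvers=3 dd01:/data/col1/backup /mnt/ddbackup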
Note
l For information about the initial system configuration, see the EMC Data Domain
Operating System Initial Configuration Guide.
l The nfs command manages backups and restores between NFS clients and Data
Domain systems, and it displays NFS statistics and status. For complete
information about the nfs command, see the EMC Data Domain Operating System
Command Reference Guide.
l For information about setting up third-party clients to use the Data Domain system
as a server, see the related tuning guide, such as the Solaris System Tuning, which
is available from the Data Domain support web site. From the Documentation >
Integration Documentation page, select the vendor from the list and click OK.
Select the tuning guide from the list.
Note
/ddvar is an ext3 file system, and cannot be shared like a normal MTree-based share.
The information in /ddvar will become stale when the active node fails over to the
standby node because the filehandles are different on the two nodes. If /ddvar is
mounted to access log files or upgrade the system, unmount and remount /ddvar if a
failover has occurred since the last time /ddvar was mounted.
To create valid NFS exports that will failover with HA, the export needs to be created
from the Active HA node, and generally shared over the failover network interfaces.
2. Click Enable.
2. Click Disable.
Creating an export
You can use Data Domain System Manager's Create button on the NFS view or use
the Configuration Wizard to specify the NFS clients that can access the /backup,
/data/col1/backup, /ddvar, or /ddvar/core areas, or the /ddvar/ext area if it
exists.
A Data Domain system supports a maximum of 128 NFS exports, and 900
simultaneous connections are allowed.
Note
You have to assign client access to each export separately and remove access from
each export separately. For example, a client can be removed from /ddvar and still
have access to /data/col1/backup.
CAUTION
Procedure
1. Select Protocols > NFS.
The NFS view opens displaying the Exports tab.
2. Click Create.
3. Enter the pathname in the Directory Path text box (for example, /data/col1/
backup/dir1).
Note
4. In the Clients area, select an existing client or click the + icon to create a client.
The Client dialog box is displayed.
Note
Anonymous UID/GID:
l Map requests from UID (user identifier) or GID (group identifier) 0 to the
anonymous UID/GID (root_squash).
l Map all user requests to the anonymous UID/GID (all_squash).
l Use Default Anonymous UID/GID.
Note
c. Click OK.
5. Click OK to create the export.
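The equivalent CLI operation uses the nfs add command. In this sketch the client list and option list are illustrative, so verify the syntax in the EMC Data Domain Operating System Command Reference Guide:
# nfs add /data/col1/backup/dir1 192.168.1.0/24 (rw,no_root_squash)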
Modifying an export
Change the directory path, domain name, and other options using the GUI.
Procedure
1. Select Protocols > NFS.
The NFS view opens displaying the Exports tab.
Note
Anonymous UID/GID:
l Map requests from UID (user identifier) or GID (group identifier) 0 to the
anonymous UID/GID (root_squash).
Note
c. Click OK.
6. Click OK to modify the export.
Deleting an export
Delete an export from the NFS Exports tab.
Procedure
1. In the NFS Exports tab, click the checkbox of the export you wish to delete.
2. Click Delete.
3. Click OK and Close to delete the export.
Note
Click Configure to view the Administration > Access > Authentication tab where
you can configure Kerberos authentication.
2. Click an export in the table to populate the Detailed Information area, below the
Exports table.
In addition to the export's directory path, configured options, and status, the
system displays a list of clients.
Use the Filter By text box to sort by mount path.
Click Update for the system to refresh the table and use the filters supplied.
Click Reset for the system to clear the Path and Client filters.
CAUTION
The examples provided in this description are specific to the operating system
(OS) used to develop this exercise. You must use commands specific to your OS.
Note
For UNIX Kerberos mode, a keytab file must be transferred from the Key Distribution
Center (KDC) server, where it is generated, to the DDR. If you are using more than
one DDR, each DDR requires a separate keytab file. The keytab file contains a shared
secret between the KDC server and the DDR.
Note
When using a UNIX KDC, the DNS server does not have to be the KDC server; it can
be a separate server.
Procedure
1. Set the host name and the domain name for the DDR, using DDR commands.
net set hostname <host>
net set {domainname <local-domain-name>}
Note
2. Configure NFS principal (node) for the DDR on the Key Distribution Center
(KDC).
Example:
addprinc nfs/hostname@realm
Note
3. Verify that there are nfs entries added as principals on the KDC.
Example:
listprincs
nfs/hostname@realm
Note
The <keytab_file> is the keytab file used to configure keys in a previous step.
6. Copy the keytab file from the location where the keys for NFS DDR are
generated to the DDR in the /ddvar/ directory.
7. Set the realm on the DDR, using the following DDR command:
authentication kerberos set realm <home realm> kdc-type {unix | windows} kdcs <IP address of server>
8. When the kdc-type is UNIX, import the keytab file from /ddvar/ to /ddr/etc/,
where the Kerberos configuration file expects it. Use the following DDR
command to copy the file:
authentication kerberos keytab import
NOTICE
11. For each NFS client, import all its principals into a keytab file on the client.
Example:
ktadd -k <keytab_file> host/hostname@realm
ktadd -k <keytab_file> nfs/hostname@realm
This command joins the system to the krb5.test realm and enables Kerberos
authentication for NFS clients.
Note
A keytab generated on this KDC must exist on the DDR to authenticate using
Kerberos.
Note
A keytab generated on this KDC must exist on the DDR to authenticate using
Kerberos.
Note
Because the data set is divided between the source and destination enclosures during
migration, you cannot halt a migration and resume use of only the source enclosures.
Once started, the migration must complete. If a failure, such as a faulty disk drive,
interrupts the migration, address the issue and resume the migration.
Depending on the amount of data to migrate and the throttle settings selected, a
storage migration can take days or weeks. When all data is migrated, the finalize
process, which must be manually initiated using the storage migration
finalize command, restarts the filesystem. During the restart, the source
enclosures are removed from the system configuration and the destination enclosures
become part of the filesystem. When the finalize process is complete, the source
enclosures can be removed from the system.
Note
It is not possible to determine the utilization of the source shelf. The Data
Domain system performs all calculations based on the capacity of the shelf.
l Data migration is not supported for disks in the system controller.
CAUTION
l Loading shelves at the top of the rack may cause the rack to tip over.
l Validate that the floor can support the total weight of the DS60 shelves.
l Validate that the racks can provide enough power to the DS60 shelves.
l When adding more than five DS60s in the first rack, or more than six DS60s
in the second rack, stabilizer bars and a ladder are required to maintain the
DS60 shelves.
Note
3. When a storage migration is in progress, you can also view the status by
selecting Health > Jobs.
8. In the Review Migration Plan dialog, review the estimated migration schedule,
then click Next.
9. Review the precheck results in the Verify Migration Preconditions dialog, then
click Close.
Results
If any of the precheck tests fail, resolve the issue before you start the migration.
8. In the Review Migration Plan dialog, review the estimated migration schedule,
then click Start.
9. In the Start Migration dialog, click Start.
The Migrate dialog appears and updates during the three phases of the
migration: Starting Migration, Migration in Progress, and Copy Complete.
10. When the Migrate dialog title displays Copy Complete and a filesystem restart is
acceptable, click Finalize.
Note
This task restarts the filesystem and typically takes 10 to 15 minutes. The
system is unavailable during this time.
Results
When the migration finalize task is complete, the system is using the destination
enclosures and the source enclosures can be removed.
P4. The current migration request is the same as the interrupted migration request.
Resume and complete the interrupted migration.
P5. Check the disk group layout on the existing enclosures.
Storage migration requires that each source enclosure contain only one disk
group, and all the disks in the group must be in that enclosure.
P8. Source enclosures are in the same active tier or retention unit.
The system supports storage migration from either the active tier or the retention
tier. It does not support migration of data from both tiers at the same time.
Note
The preparation of new enclosures for storage migration is managed by the storage
migration process. Do not prepare destination enclosures as you would for an
enclosure addition. For example, use of the filesys expand command is
appropriate for an enclosure addition, but this command prevents enclosures from
being used as storage migration destinations.
A DS60 disk shelf contains four disk packs, of 15 disks each. When a DS60 shelf is the
migration source or destination, the disk packs are referenced as enclosure:pack. In
this example, the source is enclosure 7, pack 2 (7:2), and the destination is enclosure
7, pack 4 (7:4).
Procedure
1. Install the destination enclosures using the instructions in the product
installation guides.
2. Check to see if the storage migration feature license is installed.
# elicense show
3. If the license is not installed, update the elicense to add the storage migration
feature license.
# elicense update
4. View the disk states for the source and destination disks.
# disk show state
The source disks should be in the active state, and the destination disks should
be in the unknown state.
5. Run the storage migration precheck command to determine if the system is
ready for the migration.
# storage migration precheck source-enclosures 7:2 destination-enclosures 7:4
6. View the migration throttle setting.
# storage migration option show throttle
7. Start the migration.
# storage migration start source-enclosures 7:2 destination-enclosures 7:4
8. Optionally, view the disk states for the source and destination disks during the
migration.
# disk show state
During the migration, the source disks should be in the migrating state, and the
destination disks should be in the destination state.
9. Review the migration status as needed.
# storage migration status
10. View the disk states for the source and destination disks.
# disk show state
During the migration, the source disks should be in the migrating state, and the
destination disks should be in the destination state.
11. When the migration is complete, update the configuration to use the destination
enclosures.
Note
This task restarts the file system and typically takes 10 to 15 minutes. The
system is unavailable during this time.
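For example, using the storage migration finalize command described earlier in
this chapter:
# storage migration finalize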
12. If you want to remove all data from each of the source enclosures, remove the
data now.
# storage sanitize start enclosure <enclosure-id>[:<pack-id>]
Note
The storage sanitize command does not produce a certified data erasure. EMC
offers certified data erasure as a service. For more information, contact your
EMC representative.
13. View the disk states for the source and destination disks.
# disk show state
After the migration, the source disks should be in the unknown state, and the
destination disks should be in the active state.
Results
When the migration finalize task is complete, the system is using the destination
storage and the source storage can be removed.
elicense update
# elicense update mylicense.lic
New licenses: Storage Migration
Feature licenses:
## Feature Count Mode Expiration Date
-- ----------- ----- --------------- ---------------
1 REPLICATION 1 permanent (int) n/a
2 VTL 1 permanent (int) n/a
3 Storage Migration 1 permanent (int)
-- ----------- ----- --------------- ---------------
** This will replace all existing Data Domain licenses on the system with the above EMC ELMS
licenses.
Do you want to proceed? (yes|no) [yes]: yes
eLicense(s) updated.
Source enclosures:
Disks Count Disk Disk Enclosure Enclosure
Group Size Model Serial No.
-------- ----- ----- ---------- --------- --------------
2.1-2.15 15 dg1 1.81 TiB ES30 APM00111103820
-------- ----- ----- ---------- --------- --------------
Total source disk size: 27.29 TiB
Destination enclosures:
Disks Count Disk Disk Enclosure Enclosure
Group Size Model Serial No.
---------- ----- ------- -------- --------- --------------
11.1-11.15 15 unknown 931.51 GiB ES30 APM00111103840
---------- ----- ------- -------- --------- --------------
Total destination disk size: 13.64 TiB
Note
Currently storage migration is only supported on the active node. Storage migration is
not supported on the standby node of an HA cluster.
Note
Caching the file system metadata on SSDs improves I/O performance for both
traditional and random workloads.
For traditional workloads, offloading random access to metadata from HDDs to SSDs
allows the hard drives to accommodate streaming write and read requests.
For random workloads, SSD cache provides low latency metadata operations, which
allows the HDDs to serve data requests instead of cache requests.
Read cache on SSD improves random read performance by caching frequently
accessed data. Writing data to NVRAM combined with low latency metadata
operations to drain the NVRAM faster improves random write latency. The absence of
cache does not prevent file system operation; it only impacts file system performance.
When the cache tier is first created, a file system restart is only required if the cache
tier is being added after the file system is running. For new systems that come with
cache tier disks, no file system restart is required if the cache tier is created before
enabling the file system for the first time. Additional cache can be added to a live
system, without the need to disable and enable the file system.
Note
DD9500 systems that were upgraded from DD OS 5.7 to DD OS 6.0 require a one-
time file system restart after creating the cache tier for the first time.
One specific condition with regard to SSDs is that when the number of spare blocks
remaining gets close to zero, the SSD enters a read-only condition. When a read-only
condition occurs, DD OS treats the drive as read-only cache and sends an alert.
MDoF is supported on the following Data Domain systems:
l DD6300
l DD6800
l DD9300
l DD9500
l DD9800
MDoF licensing
A license enabled through ELMS is necessary for using the MDoF feature; the SSD
Cache license will not be enabled by default.
The following table describes the various SSD capacity licenses and the SSD
capacities for the given system:
Note
If SSDs are added to the system later, the system should automatically create the
SSD volume and notify the file system. SSD Cache Manager notifies its registered
clients so they can create their cache objects.
l If the SSD volume contains only one active drive, the last drive to go offline will
come back online if the active drive is removed from the system.
The next section describes how to manage the SSD cache tier from Data Domain
System Manager, and with the DD OS CLI.
Note
The licensed capacity bar shows the portion of licensed capacity (used and
remaining) for the installed enclosures.
Note
To remove an added shelf, select it in the Tier Configuration list, click Remove
from Configuration, and click OK.
CLI Equivalent
When the cache tier SSDs are installed in the head unit:
a. Add the SSDs to the cache tier.
# storage add disks 1.13,1.14 tier cache
Checking storage requirements...done
Adding disk 1.13 to the cache tier...done
SSD alerts
There are three alerts specific to the SSD cache tier.
The SSD cache tier alerts are:
l Licensing
If the file system is enabled and less physical cache capacity is present than the
license permits, an alert is generated that reports the current SSD capacity present
and the licensed capacity. This alert is classified as a warning alert. The absence of
cache does not prevent file system operation; it only impacts file system
performance. Additional cache can be added to a live system, without the need to
disable and enable the file system.
l Read-only condition
When the number of spare blocks remaining gets close to zero, the SSD enters a
read-only condition. When a read-only condition occurs, DD OS treats the drive as
read-only cache.
Alert EVT-STORAGE-00001 displays when the SSD is in a read-only state and
should be replaced.
l SSD end of life
When an SSD reaches the end of its lifespan, the system generates a hardware
failure alert identifying the location of the SSD within the SSD shelf. This alert is
classified as a critical alert.
Alert EVT-STORAGE-00016 displays when the EOL counter reaches 98. The drive
is failed proactively when the EOL counter reaches 99.
l Multiple endpoints are allowed per physical port, each using a virtual (NPIV) port.
The base port is a placeholder for the physical port and is not associated with an
endpoint.
l Endpoint failover/failback is automatically enabled when using NPIV.
Note
After NPIV is enabled, the Secondary System Address must be specified at each
of the endpoints. If not, endpoint failover does not occur.
l Multiple Data Domain systems can be consolidated into a single Data Domain
system, however, the number of HBAs remains the same on the single Data
Domain system.
l Endpoint failover is triggered when FC-SSM detects that a port has gone from
online to offline. If the physical port is offline before scsitarget is enabled, and the
port is still offline after scsitarget is enabled, an endpoint failover is not possible
because FC-SSM does not generate a port offline event. If the port comes back
online and auto-failback is enabled, any failed-over endpoints that use that port as
a primary port fail back to the primary port.
The Data Domain HA feature requires NPIV to move WWNs between the nodes of an
HA pair during the failover process.
Note
Procedure
1. Select Hardware > Fibre Channel.
2. Next to NPIV: Disabled, select Enable.
3. In the Enable NPIV dialog box, you are warned that all Fibre Channel ports must
be disabled before NPIV can be enabled. If you are sure that you want to
continue, select Yes.
CLI Equivalent
a. Ensure (global) NPIV is enabled.
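The port details shown below can be displayed with the scsitarget port command;
the detailed form used here is an assumption to be verified in the scsitarget section
of the EMC Data Domain Operating System Command Reference Guide:
# scsitarget port show detailed 0a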
System Address: 0a
Enabled: Yes
Status: Online
Transport: FibreChannel
Operational Status: Normal
FC NPIV: Enabled (auto)
.
.
.
e. Create an endpoint using the primary and secondary ports that you have
selected.
For example:
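The endpoint creation command might take the following form; the endpoint name
and system addresses are hypothetical, and the exact option names should be
verified in the scsitarget section of the EMC Data Domain Operating System Command
Reference Guide:
# scsitarget endpoint add test0a0b system-address 0a primary-system-address 0a secondary-system-address 0b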
a. Run the following command:
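The command shown here is an assumption (a detailed variant of the endpoint show
command used elsewhere in this guide):
# scsitarget endpoint show detailed test0a0b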
Endpoint: test0a0b
Current System Address: 0b
Primary System Address: 0a
Secondary System Address: 0b
Enabled: Yes
Status: Online
Transport: FibreChannel
FC WWNN: 50:02:18:80:08:a0:00:91
FC WWPN: 50:02:18:84:08:b6:00:91
f. Zone a host system to the auto-generated WWPN of the new endpoint.
g. Create a DD VTL, vDisk, or DD Boost over Fibre Channel (DFC) device, and
make this device available on the host system.
h. Ensure that the Data Domain device that is chosen can be accessed on the
host (read and/or written).
i. To test the endpoint failover, use the secondary option to move the
endpoint to the secondary system address (SSA).
For example, run the following command:
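Assuming the endpoint use command and the endpoint created earlier:
# scsitarget endpoint use test0a0b secondary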
j. Ensure that the Data Domain device that is chosen can still be accessed on
the host (read and/or written). To test the failback, use the primary option
to move the endpoint back to the primary system address (PSA).
For example, run the following command:
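Again assuming the endpoint use command:
# scsitarget endpoint use test0a0b primary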
k. Ensure that the Data Domain device that is chosen can still be accessed on
the host (read and/or written).
Disabling NPIV
Before you can disable NPIV, you must not have any ports with multiple endpoints.
Note
Procedure
1. Select Hardware > Fibre Channel.
2. Next to NPIV: Enabled, select Disable.
3. In the Disable NPIV dialog, review any messages about correcting the
configuration, and when ready, select OK.
Resources tab
The Hardware > Fibre Channel > Resources tab displays information about ports,
endpoints, and initiators.
Item Description
System Address System address for port
Link Status Link status: either Online or Offline; that is, whether or not
the port is up and capable of handling traffic.
Item Description
Name Name of endpoint.
Item Description
WWNN Unique worldwide node name, which is a 64-bit identifier (a
60-bit value preceded by a 4-bit Network Address Authority
identifier), of the FC node
Link Status Either Online or Offline; that is, whether or not the port is up
and capable of handling traffic.
Item Description
Name Name of initiator.
Configuring a port
Ports are discovered, and a single endpoint is automatically created for each port, at
startup.
The properties of the base port depend on whether NPIV is enabled:
l In non-NPIV mode, ports use the same properties as the endpoint, that is, the
WWPN for the base port and the endpoint are the same.
l In NPIV mode, the base port properties are derived from default values, that is, a
new WWPN is generated for the base port and is preserved to allow consistent
switching between NPIV modes. Also, NPIV mode provides the ability to support
multiple endpoints per port.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Ports, select a port, and then select Modify (pencil).
3. In the Configure Port dialog, select whether to automatically enable or disable
NPIV for this port.
4. For Topology, select Loop Preferred, Loop Only, Point to Point, or Default.
Enabling a port
Ports must be enabled before they can be used.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Ports > Enable. If all ports are already enabled, a message
to that effect is displayed.
3. In the Enable Ports dialog, select one or more ports from the list, and select
Next.
4. After the confirmation, select Next to complete the task.
Disabling a port
You can simply disable a port (or ports), or you can choose to fail over all endpoints
on the port (or ports) to another port.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Ports > Disable.
3. In the Disable Ports dialog, select one or more ports from the list, and select
Next.
4. In the confirmation dialog, you can continue with simply disabling the port, or
you can choose to fail over all endpoints on the ports to another port.
Adding an endpoint
An endpoint is a virtual object that is mapped to an underlying virtual port. In non-NPIV
mode (not available on HA configuration), only a single endpoint is allowed per
physical port, and the base port is used to configure that endpoint to the fabric. When
NPIV is enabled, multiple endpoints are allowed per physical port, each using a virtual
(NPIV) port, and endpoint failover/failback is enabled.
Note
When using NPIV, it is recommended that you use only one protocol (that is, DD VTL
Fibre Channel, DD Boost-over-Fibre Channel, or vDisk Fibre Channel) per endpoint.
For failover configurations, secondary endpoints should also be configured to have the
same protocol as the primary.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Endpoints, select Add (+ sign).
3. In the Add Endpoint dialog, enter a Name for the endpoint (from 1 to 128
characters). The field cannot be empty or be the word all, and cannot contain
the characters asterisk (*), question mark (?), front or back slashes (/, \), or
right or left parentheses [(,)].
4. For Endpoint Status, select Enabled or Disabled.
5. If NPIV is enabled, for Primary system address, select from the drop-down list.
The primary system address must be different from any secondary system
address.
6. If NPIV is enabled, for Fails over to secondary system addresses, check the
appropriate box next to the secondary system address.
7. Select OK.
Configuring an endpoint
After you have added an endpoint, you can modify it using the Configure Endpoint
dialog.
Note
When using NPIV, it is recommended that you use only one protocol (that is, DD VTL
Fibre Channel, DD Boost-over-Fibre Channel, or vDisk Fibre Channel) per endpoint.
For failover configurations, secondary endpoints should also be configured to have the
same protocol as the primary.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Endpoints, select an endpoint, and then select Modify (pencil).
3. In the Configure Endpoint dialog, enter a Name for the endpoint (from 1 to 128
characters). The field cannot be empty or be the word all, and cannot contain
the characters asterisk (*), question mark (?), front or back slashes (/, \), or
right or left parentheses [(,)].
4. For Endpoint Status, select Enabled or Disabled.
5. For Primary system address, select from the drop-down list. The primary
system address must be different from any secondary system address.
6. For Fails over to secondary system addresses, check the appropriate box next
to the secondary system address.
7. Select OK.
An endpoint may need to be modified when it is associated with a system address
that no longer exists, for example after a controller upgrade or when a controller
HBA (host bus adapter) has been moved. When the
system address for an endpoint is modified, all properties of the endpoint, including
WWPN and WWNN (worldwide port and node names, respectively), if any, are
preserved and are used with the new system address.
In the following example, endpoint ep-1 was assigned to system address 5a, but this
system address is no longer valid. A new controller HBA was added at system address
10a. The SCSI Target subsystem automatically created a new endpoint, ep-new, for
the newly discovered system address. Because only a single endpoint can be
associated with a given system address, ep-new must be deleted, and then ep-1 must
be assigned to system address 10a.
Note
It may take some time for the modified endpoint to come online, depending on the
SAN environment, since the WWPN and WWNN have moved to a different system
address. You may also need to update SAN zoning to reflect the new configuration.
Procedure
1. Show all endpoints to verify the endpoints to be changed:
# scsitarget endpoint show list
2. Disable all endpoints:
# scsitarget endpoint disable all
3. Delete the new, unnecessary endpoint, ep-new:
# scsitarget endpoint del ep-new
4. Modify the endpoint you want to use, ep-1, by assigning it the new system
address 10a:
# scsitarget endpoint modify ep-1 system-address 10a
5. Enable all endpoints:
# scsitarget endpoint enable all
Enabling an endpoint
Enabling an endpoint enables the port only if it is currently disabled, that is, you are in
non-NPIV mode.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Endpoints > Enable. If all endpoints are already enabled, a
message to that effect is displayed.
3. In the Enable Endpoints dialog, select one or more endpoints from the list, and
select Next.
4. After the confirmation, select Next to complete the task.
Disabling an endpoint
Disabling an endpoint does not disable the associated port, unless all endpoints using
the port are disabled, that is, you are in non-NPIV mode.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Endpoints > Disable.
3. In the Disable Endpoints dialog, select one or more endpoints from the list, and
select Next. If an endpoint is in use, you are warned that disabling it might
disrupt the system.
4. Select Next to complete the task.
Deleting an endpoint
You may want to delete an endpoint if the underlying hardware is no longer available.
However, if the underlying hardware is still present, or becomes available, a new
endpoint for the hardware is discovered automatically and configured based on default
values.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Select More Tasks > Endpoints > Delete.
3. In the Delete Endpoints dialog, select one or more endpoints from the list, and
select Next. If an endpoint is in use, you are warned that deleting it might
disrupt the system.
4. Select Next to complete the task.
Adding an initiator
Add initiators to allow backup clients to connect to the system and read and write
data using the FC (Fibre Channel) protocol. A specific initiator can support DD Boost
over FC, or DD VTL, but not both. A maximum of 1024 initiators can be configured for
a DD system.
Procedure
1. Select Hardware > Fibre Channel > Resources.
2. Under Initiators, select Add (+ sign).
3. In the Add Initiator dialog, enter the port's unique WWPN in the specified
format.
4. Enter a Name for the initiator.
5. Select the Address Method: Auto is used for standard addressing, and VSA
(Volume Set Addressing) is used primarily for addressing virtual buses, targets,
and LUNs.
6. Select OK.
CLI Equivalent
Item Description
Group Name Name of access group.
Note
There are two components to DD Boost: one component that runs on the backup
server and another that runs on the Data Domain system.
l In the context of the EMC NetWorker backup application, the EMC Avamar backup
application, and other DD Boost partner backup applications, the component that
runs on the backup server (DD Boost libraries) is integrated into the particular
backup application.
l In the context of Symantec backup applications (NetBackup and Backup Exec)
and the Oracle RMAN plug-in, you need to download an appropriate version of the
DD Boost plugin that is installed on each media server. The DD Boost plugin
includes the DD Boost libraries for integrating with the DD Boost server running on
the Data Domain system.
The backup application (for example, Avamar, NetWorker, NetBackup, or Backup
Exec) sets policies that control when backups and duplications occur. Administrators
manage backup, duplication, and restores from a single console and can use all of the
features of DD Boost, including WAN-efficient replicator software. The application
manages all files (collections of data) in the catalog, even those created by the Data
Domain system.
In the Data Domain system, storage units that you create are exposed to backup
applications that use the DD Boost protocol. For Symantec applications, storage units
are viewed as disk pools. For NetWorker, storage units are viewed as logical storage
units (LSUs). A storage unit is an MTree; therefore, it supports MTree quota settings.
(Do not create an MTree in place of a storage unit.)
This chapter does not contain installation instructions; refer to the documentation for
the product you want to install. For example, for information about setting up DD
Boost with Symantec backup applications (NetBackup and Backup Exec), see the
EMC Data Domain Boost for OpenStorage Administration Guide. For information on
setting up DD Boost with any other application, see the application-specific
documentation.
Additional information about configuring and managing DD Boost on the Data Domain
system can also be found in the EMC Data Domain Boost for OpenStorage Administration
Guide (for NetBackup and Backup Exec) and the EMC Data Domain Boost for Partner
Integration Administration Guide (for other backup applications).
Note
3. To select an existing user, select the user name in the drop-down list.
EMC recommends that you select a user name with management role privileges
set to none.
4. To create and select a new user, select Create a new Local User and do the
following:
a. Enter the new user name in the User field.
The user must be configured in the backup application to connect to the
Data Domain system.
4. Click Remove.
After removal, the user remains in the DD OS access list.
Enabling DD Boost
Use the DD Boost Settings tab to enable DD Boost and to select or add a DD Boost
user.
Procedure
1. Select Protocols > DD Boost > Settings.
2. Click Enable in the DD Boost Status area.
The Enable DD Boost dialog box is displayed.
3. Select an existing user name from the menu, or add a new user by supplying the
name, password, and role.
Configuring Kerberos
You can configure Kerberos by using the DD Boost Settings tab.
Procedure
1. Select Protocols > DD Boost > Settings.
2. Click Configure in the Kerberos Mode status area.
The Authentication tab under Administration > Access appears.
Note
Disabling DD Boost
Disabling DD Boost drops all active connections to the backup server. When you
disable or destroy DD Boost, the DD Boost FC service is also disabled.
Before you begin
Ensure there are no jobs running from your backup application before disabling.
Note
File replication started by DD Boost between two Data Domain systems is not
canceled.
Procedure
1. Select Protocols > DD Boost > Settings.
2. Click Disable in the DD Boost Status area.
3. Click OK in the Disable DD Boost confirmation dialog box.
Item Description
Storage Unit The name of the storage unit.
Item Description
Quota Hard Limit The percentage of hard limit quota used.
Last 24 hr Pre-Comp The amount of raw data from the backup application that has
been written in the last 24 hours.
Last 24 hr Post-Comp The amount of storage used after compression in the last 24
hours.
Last 24 hr Comp Ratio The compression ratio for the last 24 hours.
Weekly Avg Post-Comp The average amount of compressed storage used in the last
five weeks.
Last Week Post-Comp The average amount of compressed storage used in the last
seven days.
Weekly Avg Comp Ratio The average compression ratio for the last five weeks.
Last Week Comp Ratio The average compression ratio for the last seven days.
Note
The Data Movement tab is available only if the optional EMC Data Domain
Extended Retention (formerly DD Archiver) license is installed.
l Takes you to Replication > On-Demand > File Replication when you click the
View DD Boost Replications link.
Note
A DD Replicator license is required for DD Boost to display tabs other than the File
Replication tab.
Note
The storage-unit name must be enclosed in double quotes (") if the name
has an embedded space.
l comma (,)
l period (.), as long as it does not precede the name
l exclamation mark (!)
l number sign (#)
l dollar sign ($)
l per cent sign (%)
l plus sign (+)
l at sign (@)
l equal sign (=)
l ampersand (&)
l semi-colon (;)
l parentheses [( and )]
l square brackets ([ and ])
l curly brackets ({ and })
l caret (^)
l tilde (~)
l apostrophe (unslanted single quotation mark)
l single slanted quotation mark (')
l minus sign (-)
l underscore (_)
4. To select an existing username that will have access to this storage unit, select
the user name in the dropdown list.
EMC recommends that you select a username with management role privileges
set to none.
5. To create and select a new username that will have access to this storage unit,
select Create a new Local User and:
a. Enter the new user name in the User box.
The user must be configured in the backup application to connect to the
Data Domain system.
Note
Quota limits are pre-compressed values. To set quota limits, select Set to
Specific Value and enter the value. Select the unit of measurement: MiB, GiB,
TiB, or PiB.
Note
When setting both soft and hard limits, a quota's soft limit cannot exceed the
quota's hard limit.
7. Click Create.
8. Repeat the above steps for each Data Domain Boost-enabled system.
l The Quota panel shows quota information for the selected storage unit.
Table 118 Quota panel
Pre-Comp Hard Limit Current value of hard quota set for the storage unit.
To modify the pre-comp soft and hard limits shown in the tab:
1. Click the Configure button in the Quota panel.
2. In the Configure Quota dialog box, enter values for hard and soft quotas and
select the unit of measurement: MiB, GiB, TiB, or PiB. Click OK.
l Snapshots
The Snapshots panel shows information about the storage unit's snapshots.
Item Description
Total Snapshots The total number of snapshots created for this MTree. A total
of 750 snapshots can be created for each MTree.
Expired The number of snapshots in this MTree that have been marked
for deletion, but have not been removed with the clean
operation as yet.
Unexpired The number of snapshots in this MTree that are marked for
keeping.
Oldest Snapshot The date of the oldest snapshot for this MTree.
Newest Snapshot The date of the newest snapshot for this MTree.
Assigned Snapshot The name of the snapshot schedule assigned to this MTree.
Schedules
Note
n Click the Snapshots link to go to the Data Management > Snapshots tab.
Space Usage tab
The Space Usage tab graph displays a visual representation of data usage for the
storage unit over time.
l Click a point on a graph line to display a box with data at that point.
l Click Print (at the bottom of the graph) to open the standard Print dialog box.
l Click Show in new window to display the graph in a new browser window.
There are two types of graph data displayed: Logical Space Used (Pre-Compression)
and Physical Capacity Used (Post-Compression).
Daily Written tab
The Daily Written view contains a graph that displays a visual representation of data
that is written daily to the system over a period of time, selectable from 7 to 120 days.
The data amounts are shown over time for pre- and post-compression amounts.
Data Movement tab
A graph in the same format as the Daily Written graph that shows the amount of disk
space moved to the DD Extended Retention storage area (if the DD Extended
Retention license is enabled).
4. To rename the storage unit, edit the text in the Name field.
5. To select a different existing user, select the user name in the drop-down list.
EMC recommends that you select a username with management role privileges
set to none.
6. To create and select a new user, select Create a new Local User and do the
following:
a. Enter the new user name in the User box.
The user must be configured in the backup application to connect to the
Data Domain system.
Note
Quota limits are pre-compressed values. To set quota limits, select Set to
Specific Value and enter the value. Select the unit of measurement: MiB, GiB,
TiB, or PiB.
Note
When setting both soft and hard limits, a quota's soft limit cannot exceed the
quota's hard limit.
8. Click Modify.
Note
Deleted storage units are available until the next filesys clean command is run.
Procedure
1. Select Protocols > DD Boost > Storage Units > More Tasks > Undelete
Storage Unit....
2. In the Undelete Storage Units dialog box, select the storage unit(s) that you
want to undelete.
3. Click OK.
5. Click OK.
Note
You can also manage distributed segment processing via the ddboost option
commands, which are described in detail in the EMC Data Domain Operating
System Command Reference Guide.
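For example, a sketch of enabling this option from the CLI (verify the option name in
the command reference):
# ddboost option set distributed-segment-processing enabled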
Note
Virtual synthetics
A virtual synthetic full backup is the combination of the last full (synthetic or full)
backup and all subsequent incremental backups. Virtual synthetics are enabled by
default.
Low-bandwidth optimization
If you use file replication over a low-bandwidth network (WAN), you can increase
replication speed by using low bandwidth optimization. This feature provides additional
compression during data transfer. Low bandwidth compression is available to Data
Domain systems with an installed Replication license.
Low-bandwidth optimization, which is disabled by default, is designed for use on
networks with less than 6 Mbps aggregate bandwidth. Do not use this option if
maximum file system write performance is required.
Note
You can also manage low bandwidth optimization via the ddboost file-
replication commands, which are described in detail in the EMC Data Domain
Operating System Command Reference Guide.
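For example, a sketch of enabling low-bandwidth optimization from the CLI (verify
the option name in the command reference):
# ddboost file-replication option set low-bw-optim enabled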
Note
If DD Boost file replication encryption is used on systems without the Data at Rest
option, it must be set to on for both the source and destination systems.
Note
4. Select Protocols > DD Boost > More Tasks > Manage Certificates....
4. Select Protocols > DD Boost > More Tasks > Manage Certificates....
Note
a. Copy the certificate text to the clipboard using the controls in your
operating system.
b. Select I want to copy and paste the certificate text.
c. Paste the certificate text in the box below the copy and paste selection.
d. Click Add.
Note
6. Click OK.
7. Click OK.
l The IP address must be configured on the Data Domain system, and its interface
enabled. To check the interface configuration, select Hardware > Ethernet >
Interfaces page, and check for free ports. See the net chapter of the EMC Data
Domain Operating System Command Reference Guide or the EMC Data Domain
Operating System Initial Configuration Guide for information about configuring an IP
address for an interface.
l You can use the ifgroup commands to manage interface groups; these
commands are described in detail in the EMC Data Domain Operating System
Command Reference Guide.
l Interface groups provide full support for static IPv6 addresses, providing the same
capabilities for IPv6 as for IPv4. Concurrent IPv4 and IPv6 client connections are
allowed. A client connected with IPv6 sees IPv6 ifgroup interfaces only. A client
connected with IPv4 sees IPv4 ifgroup interfaces only. Individual ifgroups include
all IPv4 addresses or all IPv6 addresses. For details, see the EMC Data Domain
Boost for Partner Integration Administration Guide or the EMC Data Domain Boost for
OpenStorage Administration Guide.
l Configured interfaces are listed in Active Connections, on the lower portion of the
Activities page.
Note
See Using DD Boost on HA systems on page 307 for important information about
using interface groups with HA systems.
Interfaces
IFGROUP supports physical and virtual interfaces.
An IFGROUP interface is a member of a single IFGROUP <group-name> and may
consist of:
l Physical interface such as eth0a
l Virtual interface, created for link failover or link aggregation, such as veth1
l Virtual alias interface such as eth0a:2 or veth1:2
l Virtual VLAN interface such as eth0a.1 or veth1.1
l Within an IFGROUP <group-name>, all interfaces must be on unique underlying
interfaces (Ethernet, virtual Ethernet) to ensure failover in the event of a network
error.
IFGROUP provides full support for static IPv6 addresses, providing the same
capabilities for IPv6 as for IPv4. Concurrent IPv4 and IPv6 client connections are
allowed. A client connected with IPv6 sees IPv6 IFGROUP interfaces only. A client
connected with IPv4 sees IPv4 IFGROUP interfaces only. Individual IFGROUPs include
all IPv4 addresses or all IPv6 addresses.
For more information, see the EMC Data Domain Boost for Partner Integration
Administration Guide or the EMC Data Domain Boost for OpenStorage Administration
Guide.
Interface enforcement
IFGROUP lets you enforce private network connectivity, ensuring that a failed job
does not reconnect on the public network after network errors.
When interface enforcement is enabled, a failed job can only retry on an alternative
private network IP address. Interface enforcement is only available for clients that use
IFGROUP interfaces.
Interface enforcement is off (FALSE) by default. To enable interface enforcement,
you must add the following setting to the system registry:
system.ENFORCE_IFGROUP_RW=TRUE
After you've made this entry in the registry, you must do a filesys restart for
the setting to take effect.
For more information, see the EMC Data Domain Boost for Partner Integration
Administration Guide or the EMC Data Domain Boost for OpenStorage Administration
Guide.
Clients
IFGROUP supports various naming formats for clients. Client selection is based on a
specified order of precedence.
An IFGROUP client is a member of a single ifgroup <group-name> and may consist of:
l A fully qualified domain name (FQDN) such as ddboost.datadomain.com
l Wild cards such as *.datadomain.com or *
l A short name for the client, such as ddboost
l Client public IP range, such as 128.5.20.0/24
Prior to write or read processing, the client requests an IFGROUP IP address from the
server. To select the client IFGROUP association, the client information is evaluated
according to the following order of precedence.
1. IP address of the connected Data Domain system. If there is already an active
connection between the client and the Data Domain system, and the connection
exists on the interface in the IFGROUP, then the IFGROUP interfaces are made
available for the client.
2. Connected client IP range. An IP mask check is done against the client source IP;
if the client's source IP address matches the mask in the IFGROUP clients list,
then the IFGROUP interfaces are made available for the client.
l For IPv4, you can select five different masks in the range /8 to /32.
l For IPv6, fixed masks /64, /112, and /128 are available.
This host-range check is useful for separate VLANs with many clients where there
isn't a unique partial hostname (domain).
3. Client Name: abc-11.d1.com
4. Client Domain Name: *.d1.com
5. All Clients: *
For more information, see the EMC Data Domain Boost for Partner Integration
Administration Guide.
Note
5. Click OK.
6. In the Configured Clients section, click Add (+).
7. Enter a fully qualified client name or *.mydomain.com.
Note
The * client is initially available to the default group. The * client may only be a
member of one ifgroup.
Note
If the interface group does not have both clients and interfaces assigned, you
cannot enable the group.
4. Click Enabled to enable the interface group; clear the checkbox to disable.
5. Click OK.
Note
If you remove all interfaces from the group, it will be automatically disabled.
6. Click OK.
Note
6. Click OK.
Note
If the interface group to which the client belongs has no other clients, the
interface group is disabled.
Note
Interface groups used for replication are different from the interface groups previously
explained and are supported for DD Boost Managed File Replication (MFR) only. For
detailed information about using interface groups for MFR, see the EMC Data Domain
Boost for Partner Integration Administration Guide or the EMC Data Domain Boost for
OpenStorage Administration Guide.
Without the use of interface groups, configuration for replication requires several
steps:
1. Adding an entry in the /etc/hosts file on the source Data Domain system for the
target Data Domain system and hard coding one of the private LAN network
interfaces as the destination IP address.
2. Adding a route on the source Data Domain system to the target Data Domain
system specifying a physical or virtual port on the source Data Domain system to
the remote destination IP address.
3. Configuring LACP through the network on all switches between the Data Domain
systems for load balancing and failover.
4. Requiring different applications to use different names for the target Data Domain
system to avoid naming conflicts in the /etc/hosts file.
Using interface groups for replication simplifies this configuration through the use of
the DD OS System Manager or DD OS CLI commands. Using interface groups to
configure the replication path lets you:
l Redirect a hostname-resolved IP address away from the public network, using
another private Data Domain system IP address.
l Identify an interface group based on configured selection criteria, providing a
single interface group where all the interfaces are reachable from the target Data
Domain system.
l Select a private network interface from a list of interfaces belonging to a group,
ensuring that the interface is healthy.
l Provide load balancing across multiple Data Domain interfaces within the same
private network.
l Provide a failover interface for recovery for the interfaces of the interface group.
l Provide host failover if configured on the source Data Domain system.
l Use Network Address Translation (NAT).
The selection order for determining an interface group match for file replication is:
1. Local MTree (storage-unit) path and a specific remote Data Domain hostname
2. Local MTree (storage-unit) path with any remote Data Domain hostname
3. Any MTree (storage-unit) path with a specific Data Domain hostname
The same MTree can appear in multiple interface groups only if it has a different Data
Domain hostname. The same Data Domain hostname can appear in multiple interface
groups only if it has a different MTree path. The remote hostname is expected to be
an FQDN, such as dd890-1.emc.com.
The interface group selection is performed locally on both the source Data Domain
system and the target Data Domain system, independent of each other. For a WAN
replication network, only the remote interface group needs to be configured since the
source IP address corresponds to the gateway for the remote IP address.
Destroying DD Boost
Use this option to permanently remove all of the data (images) contained in the
storage units. When you disable or destroy DD Boost, the DD Boost FC service is also
disabled. Only an administrative user can destroy DD Boost.
Procedure
1. Manually remove (expire) the corresponding backup application catalog entries.
Note
If multiple backup applications are using the same Data Domain system, then
remove all entries from each of those applications' catalogs.
2. Select Protocols > DD Boost > More Tasks > Destroy DD Boost....
3. Enter your administrative credentials when prompted.
4. Click OK.
Note
Windows, Linux, HP-UX (64-bit Itanium architecture), AIX, and Solaris client
environments are supported.
Note
If you are using DD System Manager, the SCSI target daemon is automatically
enabled when you enable the DD Boost-over-FC service (later in this procedure).
l Verify that the DD Boost license is installed. In DD System Manager, select
Protocols > DD Boost > Settings. If the Status indicates that DD Boost is not
licensed, click Add License and enter a valid license in the Add License Key dialog
box.
CLI equivalents
# license show
# license add license-code
Procedure
1. Select Protocols > DD Boost > Settings.
2. In the Users with DD Boost Access section, specify one or more DD Boost user
names.
A DD Boost user is also a DD OS user. When specifying a DD Boost user name,
you can select an existing DD OS user name, or you can create a new DD OS
user name and make that name a DD Boost user. This release supports multiple
DD Boost users. For detailed instructions, see Specifying DD Boost User
Names.
CLI equivalents
# ddboost enable
Starting DDBOOST, please wait...............
DDBOOST is enabled.
Results
You are now ready to configure the DD Boost-over-FC service on the Data Domain
system.
Configuring DD Boost
After you have added user(s) and enabled DD Boost, you need to enable the Fibre
Channel option and specify the DD Boost Fibre Channel server name. Depending on
your application, you may also need to create one or more storage units and install the
DD Boost API/plug-in on media servers that will access the Data Domain system.
Procedure
1. Select Protocols > DD Boost > Fibre Channel.
2. Click Enable to enable Fibre Channel transport.
CLI equivalent
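A sketch of the command form, assuming the fc option of the ddboost option
command (verify against the command reference):
# ddboost option set fc enabled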
3. To change the DD Boost Fibre Channel server name from the default
(hostname), click Edit, enter a new server name, and click OK.
CLI equivalent
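A sketch, assuming the dfc-server-name command and a hypothetical server name:
# ddboost fc dfc-server-name set dd-dfc-1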
4. Select Protocols > DD Boost > Storage Units to create a storage unit (if not
already created by the application).
You must create at least one storage unit on the Data Domain system, and a DD
Boost user must be assigned to that storage unit. For detailed instructions, see
Creating a Storage Unit.
CLI equivalent
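For example, with a hypothetical storage unit name and the DD Boost user added
earlier:
# ddboost storage-unit create backup_su user ddboostuser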
Results
You are now ready to verify connectivity and create access groups.
Note
Avoid making access group changes on a Data Domain system during active backup or
restore jobs. A change may cause an active job to fail. The impact of changes during
active jobs depends on a combination of backup software and host configurations.
Procedure
1. Select Hardware > Fibre Channel > Resources > Initiators to verify that
initiators are present.
It is recommended that you assign aliases to initiators to reduce confusion
during the configuration process.
CLI equivalent
# scsitarget initiator show list
Initiator System Address Group Service
------------ ----------------------- ---------- -------
initiator-1 21:00:00:24:ff:31:b7:16 n/a n/a
initiator-2 21:00:00:24:ff:31:b8:32 n/a n/a
initiator-3 25:00:00:21:88:00:73:ee n/a n/a
initiator-4 50:06:01:6d:3c:e0:68:14 n/a n/a
initiator-5 50:06:01:6a:46:e0:55:9a n/a n/a
initiator-6 21:00:00:24:ff:31:b7:17 n/a n/a
initiator-7 21:00:00:24:ff:31:b8:33 n/a n/a
initiator-8 25:10:00:21:88:00:73:ee n/a n/a
initiator-9 50:06:01:6c:3c:e0:68:14 n/a n/a
initiator-10 50:06:01:6b:46:e0:55:9a n/a n/a
tsm6_p23 21:00:00:24:ff:31:ce:f8 SetUp_Test VTL
------------ ----------------------- ---------- -------
2. To assign an alias to an initiator, select one of the initiators and click the pencil
(edit) icon. In the Name field of the Modify Initiator dialog, enter the alias and
click OK.
CLI equivalents
# scsitarget initiator rename initiator-1 initiator-renamed
Initiator 'initiator-1' successfully renamed.
# scsitarget initiator show list
Initiator System Address Group
Service
----------------- ----------------------- ----------
-------
initiator-2 21:00:00:24:ff:31:b8:32 n/a
n/a
initiator-renamed 21:00:00:24:ff:31:b7:16 n/a
n/a
----------------- ----------------------- ----------
-------
3. On the Resources tab, verify that endpoints are present and enabled.
CLI equivalent
# scsitarget endpoint show list
------------- -------------- ------------ ------- ------
endpoint-fc-0 5a FibreChannel Yes Online
endpoint-fc-1 5b FibreChannel Yes Online
------------- -------------- ------------ ------- ------
7. Select one or more initiators. Optionally, replace the initiator name by entering a
new one. Click Next.
CLI equivalent
# ddboost fc group add test-dfc-group initiator initiator-5
Initiator(s) "initiator-5" added to group "test-dfc-group".
See the Data Domain Boost for OpenStorage Administration Guide for the
recommended value for different clients.
9. Indicate which endpoints to include in the group: all, none, or select from the list
of endpoints. Click Next.
CLI equivalents
# scsitarget group add Test device ddboost-dev8 primary-endpoint all secondary-endpoint all
Device 'ddboost-dev8' successfully added to group.
When presenting LUNs via attached FC ports on HBAs, ports can be designated
as primary, secondary or none. A primary port for a set of LUNs is the port that
is currently advertising those LUNs to a fabric. A secondary port is a port that
will broadcast a set of LUNs in the event of primary path failure (this requires
manual intervention). A setting of none is used in the case where you do not
wish to advertise selected LUNs. The presentation of LUNs is dependent upon
the SAN topology.
10. Review the Summary and make any modifications. Click Finish to create the
access group, which is displayed in the DD Boost Access Groups list.
CLI equivalent
# scsitarget group show detailed
Note
To change settings for an existing access group, select it from the list and click
the pencil icon (Modify).
Note
You cannot delete a group that has initiators assigned to it. Edit the group to
remove the initiators first.
Settings
Use the Settings tab to enable or disable DD Boost, select clients and users, and
specify advanced options.
The Settings tab shows the DD Boost status (Enabled or Disabled). Use the Status
button to switch between Enabled or Disabled.
Under Allowed Clients, select the clients that are to have access to the system. Use
the Add, Modify, and Delete buttons to manage the list of clients.
Under Users with DD Boost Access, select the users that are to have DD Boost
access. Use the Add, Change Password, and Remove buttons to manage the list of
users.
Expand Advanced Options to see which advanced options are enabled. Go to More
Tasks > Set Options to reset these options.
Active Connections
Use the Active Connections tab to see information about clients, interfaces, and
outbound files.
Item Description
Client The name of the connected client.
Memory (GiB) The amount of memory (in GiB) the client has, such as 7.8.
Item Description
Encrypted Whether the connection is encrypted (Yes) or not (No).
Item Description
Interface The IP address of the interface.
IP Network
The IP Network tab lists configured interface groups. Details include whether or not a
group is enabled and any configured client interfaces. Administrators can use the
Interface Group menu to view which clients are associated with an interface group.
Fibre Channel
The Fibre Channel tab lists configured DD Boost access groups. Use the Fibre Channel
tab to create and delete access groups and to configure initiators, devices, and
endpoints for DD Boost access groups.
Storage Units
Use the Storage Unit tab to create, modify, and delete storage units. To see detailed
information about a listed storage unit, select its name.
Item Description
Existing Storage Units
Pre-Comp Soft Limit Current value of soft quota set for the storage unit.
% of Pre-Comp Soft Limit Used Percentage of soft limit quota used.
Pre-Comp Hard Limit Current value of hard quota set for the storage unit.
Total Files The total number of file images on the storage unit.
Download Files Link to download storage unit file details in .tsv format. You
must allow pop-ups to use this function.
Storage Unit Status The current status of the storage unit (combinations are
supported). Status can be:
l D: Deleted
l RO: Read-only
l RW: Read/write
l RD: Replication destination
l RLE: DD Retention lock enabled
l RLD: DD Retention lock disabled
Original Size The size of the file before compression was performed.
Global Compression Size The total size after global compression of the files in the
storage unit when they were written.
Locally Compressed Size Total size after local compression of the files in the storage
unit when they were written.
Note
At present, for 16 Gb/s, EMC supports fabric and point-to-point topologies. Other
topologies will present issues.
Planning a DD VTL
The DD VTL (Virtual Tape Library) feature has very specific requirements, such as
proper licensing, interface cards, user permissions, etc. These requirements are listed
here, complete with details and recommendations.
l An appropriate DD VTL license.
n DD VTL is a licensed feature, and you must use NDMP (Network Data
Management Protocol) over IP (Internet Protocol) or DD VTL directly over FC
(Fibre Channel).
For added security, run the net filter add operation allow clients
<client-IP-address> interfaces <DD-interface-IP-address>
command.
Add the seq-id 1 option to the command to enforce this rule before any
other net filter rules.
l A backup software minimum record (block) size.
n EMC strongly recommends that backup software be set to use a minimum
record (block) size of 64 KiB or larger. Larger sizes usually give faster
performance and better data compression.
n Depending on your backup application, if you change the size after the initial
configuration, data written with the original size might become unreadable.
l Appropriate user access to the system.
n For basic tape operations and monitoring, only a user login is required.
n To enable and configure DD VTL services and perform other configuration
tasks, a sysadmin login is required.
DD VTL limits
Before setting up or using a DD VTL, review these limits on size, slots, etc.
l I/O Size The maximum supported I/O size for any DD system using DD VTL is 1
MB.
l Libraries DD VTL supports a maximum of 64 libraries per DD system (that is, 64
DD VTL instances on each DD system).
l Initiators DD VTL supports a maximum of 1024 initiators or WWPNs (world-wide
port names) per DD system.
l Tape Drives Information about tape drives is presented in the next section.
l Data Streams Information about data streams is presented in the following table.
Model   RAM / NVRAM           Backup write  Backup read  Repl source  Repl dest  Mixed
DD990   128 or 256 GB / 4 GB  540           150          270          540        w<=540; r<=150; ReplSrc<=270; ReplDest<=540; ReplDest+w<=540; Total<=540
DD7200  128 or 256 GB / 4 GB  540           150          270          540        w<=540; r<=150; ReplSrc<=270; ReplDest<=540; ReplDest+w<=540; Total<=540
Note
Some device drivers (for example, IBM AIX atape device drivers) limit library
configurations to specific drive/slot limits, which may be less than what the DD
system supports. Backup applications, and drives used by those applications, may
be affected by this limitation.
Note
There are no references to model numbers in this table because there are many
combinations of CPU cores and memory for each model; the number of supported
drives depends only on the CPU cores and memory, not on the particular model
itself.
40 to 59 NA NA 540
60 or more NA NA 1080
Tape barcodes
When you create a tape, you must assign a unique barcode (never duplicate barcodes
as this can cause unpredictable behavior). Each barcode consists of eight characters:
the first six are numbers or uppercase letters (0-9, A-Z), and the last two are the tape
code for the supported tape type, as shown in the following table.
Note
Although a DD VTL barcode consists of eight characters, either six or eight characters
may be transmitted to a backup application, depending on the changer type.
For multiple tape libraries, barcodes are automatically incremented, if the sixth
character (just before the "L") is a number. If an overflow occurs (9 to 0), numbering
moves one position to the left. If the next character to increment is a letter,
incrementation stops. Here are a few sample barcodes and how each will be
incremented:
l 000000L1 creates tapes of 100 GiB capacity and can accept a count of up to
100,000 tapes (from 000000 to 99999).
l AA0000LA creates tapes of 50 GiB capacity and can accept a count of up to
10,000 tapes (from 0000 to 9999).
l AAAA00LB creates tapes of 30GiB capacity and can accept a count of up to 100
tapes (from 00 to 99).
l AAAAAALC creates one tape of 10 GiB capacity. Only one tape can be created
with this name.
l AAA350L1 creates tapes of 100 GiB capacity and can accept a count of up to 650
tapes (from 350 to 999).
l 000AAALA creates one tape of 50 GiB capacity. Only one tape can be created
with this name.
l 5M7Q3KLB creates one tape of 30 GiB capacity. Only one tape can be created
with this name.
LTO-3 R RW RW
LTO-2 R RW RW
LTO-1 R RW RW
Setting up a DD VTL
To set up a simple DD VTL, use the Configuration Wizard, which is described in the
Getting Started chapter.
Similar documentation is available in the EMC Data Domain Operating System Initial
Configuration Guide.
Then, continue with the following topics to enable the DD VTL, create libraries, and
create and import tapes.
Managing a DD VTL
You can manage a DD VTL using the Data Domain System Manager (DD System
Manager) or the Data Domain Operating System (DD OS) Command Line Interface
(CLI). After you log in, you can check the status of your DD VTL process, check your
license information, and review and configure options.
Logging In
To use a graphical user interface (GUI) to manage your DD Virtual Tape Library (DD
VTL), log in to the DD System Manager.
CLI Equivalent
You can also log in at the CLI:
login as: sysadmin
Data Domain OS
Using keyboard-interactive authentication.
Password:
# scsitarget enable
Please wait ...
SCSI Target subsystem is enabled.
Accessing DD VTL
From the menu at the left of the DD System Manager, select Protocols > VTL.
Status
In the Virtual Tape Libraries > VTL Service area, the status of your DD VTL process
is displayed at the top, for example, Enabled: Running. The first part of the status is
Enabled (on) or Disabled (off). The second part is one of the following process
states.
State Description
Running DD VTL process is enabled and active (shown in green).
DD VTL License
The VTL License line tells you whether your DD VTL license has been applied. If it says
Unlicensed, select Add License. Enter your license key in the Add License Key dialog.
Select Next and OK.
Note
All license information should have been populated as part of the factory configuration
process; however, if DD VTL was purchased later, the DD VTL license key may not
have been available at that time.
CLI Equivalent
You can also verify that the DD VTL license has been installed at the CLI:
# license show
## License Key Feature
-- ------------------- -----------
1 DEFA-EFCD-FCDE-CDEF Replication
2 EFCD-FCDE-CDEF-DEFA VTL
-- ------------------- -----------
If the license is not present, note that each unit ships with a quick install card,
which shows the licenses that have been purchased. Enter the following
command to populate the license key.
# license add license-code
Enabling DD VTL
Enabling DD VTL broadcasts the WWN of the Data Domain HBA to the customer fabric
and enables all libraries and library drives. If a forwarding plan is required as part of a
change control process, enable DD VTL at this point to facilitate zoning.
Procedure
1. Make sure that you have a DD VTL license and that the file system is enabled.
2. Select Virtual Tape Libraries > VTL Service.
3. To the right of the Status area, select Enable.
4. In the Enable Service dialog, select OK.
5. After DD VTL has been enabled, note that Status will change to Enabled:
Running in green. Also note that the configured DD VTL options are displayed
in the Option Defaults area.
CLI Equivalent
# vtl enable
Starting VTL, please wait ...
VTL is enabled.
Disabling DD VTL
Disabling DD VTL closes all libraries and shuts down the DD VTL process.
Procedure
1. Select Virtual Tape Libraries > VTL Service.
2. To the right of the Status area, select Disable.
3. In the Disable Service dialog, select OK.
4. After DD VTL has been disabled, notice that the Status has changed to
Disabled: Stopped in red.
CLI Equivalent
# vtl disable
Item Description
Property Lists the configured options:
l auto-eject
l auto-offline
l barcode-length
Note
DD VTLs are assigned global options, by default, and those options are updated
whenever global options change, unless you change them manually using this method.
Procedure
1. Select Virtual Tape Libraries > VTL Service.
2. In the Option Defaults area, select Configure. In the Configure Default Options
dialog, change any or all of the default options.
3. Select OK.
4. Alternatively, to reset all of these service options to their factory defaults,
select Reset to Factory; the values are reset immediately.
Item Description
Name The name of a configured library.
From the More Tasks menu, you can create and delete libraries, as well as search for
tapes.
Creating libraries
DD VTL supports a maximum of 64 libraries per system, that is, 64 concurrently active
virtual tape library instances on each DD system.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries.
2. Select More Tasks > Library > Create
3. In the Create Library dialog, enter the following information:
Number of Drives Enter the number of drives (from 1 to 98; see the following
Note). The number of drives created should correspond to the
number of data streams that will write to a library.
Note
Drive Model Select the desired model from the drop-down list:
l IBM-LTO-1
l IBM-LTO-2
l IBM-LTO-3
l IBM-LTO-4
l IBM-LTO-5 (default)
l HP-LTO-3
l HP-LTO-4
Do not mix drive types, or media types, in the same library. This
can cause unexpected results and/or errors in the backup
operation.
Number of Slots Enter the number of slots in the library. Here are some things to
consider:
l The number of slots must be equal to or greater than the
number of drives.
l You can have up to 32,000 slots per individual library.
l You can have up to 64,000 slots per system.
l Try to have enough slots so tapes remain in the DD VTL and
never need to be exported to a vault to avoid reconfiguring
the DD VTL and to ease management overhead.
l Consider any applications that are licensed by the number of
slots.
As an example, for a standard 100-GB cartridge on a DD580,
you might configure 5000 slots. This would be enough to hold
up to 500 TB (assuming reasonably compressible data).
Number of CAPs (Optional) Enter the number of cartridge access ports (CAPs).
l You can have up to 100 CAPs per library.
l You can have up to 1000 CAPs per system.
Check your particular backup software application
documentation on the EMC Online Support Site for guidance.
Changer Model Name Select the desired model from the drop-down list:
l L180 (default)
l RESTORER-L180
l TS3500 (which should be used for IBMi deployments)
l I2000
l I6000
l DDVTL
Check your particular backup software application
documentation on the EMC Online Support Site for guidance.
Also refer to the DD VTL support matrix to see the compatibility
of emulated libraries to supported software.
Options
auto-eject default (disabled), enable, disable
4. Select OK.
After the Create Library status dialog shows Completed, select OK.
The new library appears under the Libraries icon in the VTL Service tree, and
the options you have configured appear as icons under the library. Selecting the
library displays details about the library in the Information Panel.
Note that access to VTLs and drives is managed with Access Groups.
CLI Equivalent
# vtl add NewVTL model L180 slots 50 caps 5
This adds the VTL library, NewVTL. Use 'vtl show config NewVTL'
to view it.
Deleting libraries
When a tape is in a drive within a library, and that library is deleted, the tape is moved
to the vault. However, the tape's pool does not change.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries.
2. Select More Tasks > Library > Delete.
3. In the Delete Libraries dialog, select or confirm the checkbox of the items to
delete:
l The name of each library, or
4. Select Next.
5. Verify the libraries to delete, and select Submit in the confirmation dialogs.
6. After the Delete Libraries Status dialog shows Completed, select Close. The
selected libraries are deleted from the DD VTL.
CLI Equivalent
# vtl del OldVTL
Pool Select the name of the pool in which to search for the tape. If no pools have
been created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to return a group of tapes.
Barcode allows the wildcards ? and *, where ? matches any single character
and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you
leave this blank, the barcode default (*) is used.
5. Select Search.
Item Description
Device The elements in the library, such as drives, slots, and CAPs
(cartridge access ports).
Empty The number of devices with no media loaded.
Property Value
auto-eject enabled or disabled
Item Description
Pool The name of the pool where the tapes are located.
Capacity The total configured data capacity of the tapes in that pool, in
GiB (Gibibytes, the base-2 equivalent of GB, Gigabytes).
Used The amount of space used on the virtual tapes in that pool.
From the More Tasks menu, you can delete, rename, or set options for a library;
create, delete, import, export, or move tapes; and add or delete slots and CAPs.
Creating tapes
You can create tapes in either a library or a pool. If initiated from a pool, the system
first creates the tapes, then imports them to the library.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library or Vault or
Pools > Pools > pool.
2. Select More Tasks > Tapes > Create.
3. In the Create Tapes dialog, enter the following information about the tape:
Number of Tapes For a library, select from 1 to 20. For a pool, select from 1 to 100,000, or
leave the default (20). [Although the number of supported tapes is
unlimited, you can create no more than 100,000 tapes at a time.]
Starting Barcode Enter the initial barcode number (using the format A99000LA).
Tape Capacity (optional) Specify the number of GiBs from 1 to 4000 for each tape (this
setting overrides the barcode capacity setting). For efficient use of disk
space, use 100 GiB or fewer.
CLI Equivalent
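A minimal sketch of the equivalent command (barcode, count, and pool are illustrative); capacity is derived from the barcode tape code unless explicitly specified:
# vtl tape add A00000L1 capacity 100 count 3 pool Default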
Note
Deleting tapes
You can delete tapes from either a library or a pool. If initiated from a library, the
system first exports the tapes, then deletes them. The tapes must be in the vault, not
in a library. On a Replication destination DD system, deleting a tape is not permitted.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library or Vault or
Pools > Pools > pool.
2. Select More Tasks > Tapes > Delete.
3. In the Delete Tapes dialog, enter search information about the tapes to delete,
and select Search:
Pool Select the name of the pool in which to search for the tape. If no pools have
been created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to search for a group of
tapes. Barcode allows the wildcards ? and *, where ? matches any single
character and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you
leave this blank, the barcode default (*) is used.
Select all pages Select the Select All Pages checkbox to select all tapes returned by the
search query.
Items Selected Shows the number of tapes selected across multiple pages, updated
automatically for each tape selection.
4. Select the checkbox of the tape that should be deleted or the checkbox on the
heading column to delete all tapes, and select Next.
5. Select Submit in the confirmation window, and select Close.
Note
After a tape is removed, the physical disk space used for the tape is not
reclaimed until after a file system cleaning operation.
CLI Equivalent
For example:
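The following is a hedged sketch (barcode, count, and pool are illustrative); the tapes must already be in the vault:
# vtl tape del A00010L1 count 5 pool Default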
Note
You can act on ranges; however, if there is a missing tape in the range, the
action will stop.
Importing tapes
Importing a tape means that an existing tape will be moved from the vault to a library
slot, drive, or cartridge access port (CAP).
The number of tapes you can import at one time is limited by the number of empty
slots in the library, that is, you cannot import more tapes than the number of currently
empty slots.
To view the available slots for a library, select the library from the stack menu. The
information panel for the library shows the count in the Empty column.
l If a tape is in a drive, and the tape origin is known to be a slot, a slot is reserved.
l If a tape is in a drive, and the tape origin is unknown (slot or CAP), a slot is
reserved.
l If a tape is in a drive, and the tape origin is known to be a CAP, a slot is not
reserved. (The tape returns to the CAP when removed from the drive.)
l To move a tape to a drive, see the section on moving tapes, which follows.
Procedure
1. You can import tapes using either step a. or step b.
a. Select Virtual Tape Libraries > VTL Service > Libraries > library. Then,
select More Tasks > Tapes > Import. In the Import Tapes dialog, enter
search information about the tapes to import, and select Search:
Pool Select the name of the pool in which to search for the tape. If no pools
have been created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to return a group of
tapes. Barcode allows the wildcards ? and *, where ? matches any single
character and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If
you leave this blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values
are 15, 30, and 45.
Items Selected Shows the number of tapes selected across multiple pages, updated
automatically for each tape selection.
b. Select Virtual Tape Libraries > VTL Service > Libraries> library >
Changer > Drives > drive > Tapes. Select tapes to import by selecting the
checkbox next to:
l An individual tape, or
l The Barcode column to select all tapes on the current page, or
l The Select all pages checkbox to select all tapes returned by the search
query.
Only tapes showing Vault in the Location column can be imported.
Select Import from Vault. This button is disabled by default and enabled
only if all of the selected tapes are from the Vault.
2. From the Import Tapes: library view, verify the summary information and the
tape list, and select OK.
3. Select Close in the status window.
CLI Equivalent
A00002L3 VTL_Pool vault RW 100 GiB 0.0 GiB (0.00%) 0x 2010/07/16 09:50:41
A00003L3 VTL_Pool vault RW 100 GiB 0.0 GiB (0.00%) 0x 2010/07/16 09:50:41
A00004L3 VTL_Pool vault RW 100 GiB 0.0 GiB (0.00%) 0x 2010/07/16 09:50:41
-------- -------- -------- ----- ------- --------------- ---- -------------------
VTL Tape Summary
----------------
Total number of tapes: 5
Total pools: 1
Total size of tapes: 500 GiB
Total space used by tapes: 0.0 GiB
Average Compression: 0.0x
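The listing above shows tapes still in the vault. As a hedged sketch of the import command itself (library and barcode names are illustrative):
# vtl import NewVTL barcode A00002L3 count 3 element slot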
Exporting tapes
Exporting a tape removes that tape from a slot, drive, or cartridge-access port (CAP)
and sends it to the vault.
Procedure
1. You can export tapes using either step a. or step b.
a. Select Virtual Tape Libraries > VTL Service > Libraries > library. Then,
select More Tasks > Tapes > Export. In the Export Tapes dialog, enter
search information about the tapes to export, and select Search:
Pool Select the name of the pool in which to search for the tape. If no pools have
been created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to return a group of
tapes. Barcode allows the wildcards ? and *, where ? matches any single
character and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you
leave this blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values
are 15, 30, and 45.
Select all pages Select the Select All Pages checkbox to select all tapes returned by the
search query.
Items Selected Shows the number of tapes selected across multiple pages, updated
automatically for each tape selection.
b. Select Virtual Tape Libraries > VTL Service > Libraries> library >
Changer > Drives > drive > Tapes. Select tapes to export by selecting the
checkbox next to:
l An individual tape, or
l The Barcode column to select all tapes on the current page, or
l The Select all pages checkbox to select all tapes returned by the search
query.
Only tapes with a library name in the Location column can be exported.
Select Export from Library. This button is disabled by default and enabled
only if all of the selected tapes have a library name in the Location column.
2. From the Export Tapes: library view, verify the summary information and the
tape list, and select OK.
3. Select Close in the status window.
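CLI Equivalent
A hedged sketch (library and barcode names are illustrative); the exported tapes return to the vault:
# vtl export NewVTL barcode A00002L3 count 3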
3. In the Move Tape dialog, enter search information about the tapes to move, and
select Search:
Pool N/A
Barcode Specify a unique barcode, or leave the default (*) to return a group of tapes.
Barcode allows the wildcards ? and *, where ? matches any single character and
* matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you leave
this blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15,
30, and 45.
4. From the search results list, select the tape or tapes to move.
5. Do one of the following:
a. Select the device from the Device list (for example, a slot, drive, or CAP),
and enter a starting address using sequential numbers for the second and
subsequent tapes. For each tape to be moved, if the specified address is
occupied, the next available address is used.
b. Leave the address blank if the tape in a drive originally came from a slot and
is to be returned to that slot; or if the tape is to be moved to the next
available slot.
6. Select Next.
7. In the Move Tape dialog, verify the summary information and the tape listing,
and select Submit.
8. Select Close in the status window.
Adding slots
You can add slots to a configured library to change the number of storage elements.
Note
Some backup applications do not automatically recognize that slots have been added
to a DD VTL. See your application documentation for information on how to configure
the application to recognize this type of change.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library.
2. Select More Tasks > Slots > Add.
3. In the Add Slots dialog, enter the Number of Slots to add. The total number of
slots cannot exceed 32,000 per library or 64,000 per system.
4. Select OK and Close when the status shows Completed.
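CLI Equivalent
A minimal sketch, assuming a library named NewVTL (name illustrative):
# vtl slot add NewVTL count 10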
Deleting slots
You can delete slots from a configured library to change the number of storage
elements.
Note
Some backup applications do not automatically recognize that slots have been deleted
from a DD VTL. See your application documentation for information on how to
configure the application to recognize this type of change.
Procedure
1. If the slot that you want to delete contains cartridges, move those cartridges to
the vault. The system will delete only empty, uncommitted slots.
2. Select Virtual Tape Libraries > VTL Service > Libraries > library.
3. Select More Tasks > Slots > Delete.
4. In the Delete Slots dialog, enter the Number of Slots to delete.
5. Select OK and Close when the status shows Completed.
Adding CAPs
You can add CAPs (cartridge access ports) to a configured library to change the
number of storage elements.
Note
CAPs are used by a limited number of backup applications. See your application
documentation to ensure that CAPs are supported.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library.
2. Select More Tasks > CAPs > Add.
3. In the Add CAPs dialog, enter the Number of CAPs to add. You can add from 1
to 100 CAPs per library and from 1 to 1,000 CAPs per system.
4. Select OK and Close when the status shows Completed.
Deleting CAPs
You can delete CAPs (cartridge access ports) from a configured library to change the
number of storage elements.
Note
Some backup applications do not automatically recognize that CAPs have been
deleted from a DD VTL. See your application documentation for information on how to
configure the application to recognize this type of change.
Procedure
1. If the CAP that you want to delete contains cartridges, move those cartridges
to the vault; otherwise, they will be moved there automatically.
2. Select Virtual Tape Libraries > VTL Service > Libraries > library.
3. Select More Tasks > CAPs > Delete.
4. In the Delete CAPs dialog, enter the Number of CAPs to delete. You can delete
a maximum of 100 CAPs per library or 1000 CAPs per system.
5. Select OK and Close when the status shows Completed.
Item Description
Vendor The name of the vendor who manufactured the changer
Column Description
Drive The list of drives by name, where name is Drive # and # is a number between 1
and n representing the address or location of the drive in the list of drives.
Status Whether the drive is Empty, Open, Locked, or Loaded. A tape must be present for
the drive to be locked or loaded.
Tape and library drivers To work with drives, you must use the tape and library
drivers supplied by your backup software vendor that support the IBM LTO-1, IBM
LTO-2, IBM LTO-3, IBM LTO-4, IBM LTO-5 (default), HP-LTO-3, or HP-LTO-4 drives
and the StorageTek L180 (default), RESTORER-L180, IBM TS3500, I2000, I6000, or
DDVTL libraries. For more information, see the Application Compatibility Matrices and
Integration Guides for your vendors. When configuring drives, also keep in mind the
limits on backup data streams, which are determined by the platform in use.
LTO drive capacities Because the DD system treats LTO drives as virtual drives,
you can set a maximum capacity of up to 4 TiB (4000 GiB) for each drive type. The
default capacities for each LTO drive type are as follows:
l LTO-1 drive: 100 GiB
l LTO-2 drive: 200 GiB
l LTO-3 drive: 400 GiB
l LTO-4 drive: 800 GiB
l LTO-5 drive: 1.5 TiB
Migrating LTO-1 tapes You can migrate tapes from existing LTO-1 type VTLs to
VTLs that include other supported LTO-type tapes and drives. The migration options
are different for each backup application, so follow the instructions in the LTO tape
migration guide specific to your application. To find the appropriate guide, go to the
EMC Online Support Site, and in the search text box, type in LTO Tape Migration for
VTLs.
Tape full: Early warning You receive a warning when the tape is almost full, that
is, when it is more than 99.9 percent, but less than 100 percent, used. The
application can continue writing until the end of the tape to reach 100 percent
capacity. The last write, however, is not recoverable.
From the More Tasks menu, you can create or delete a drive.
Creating drives
See the Number of drives supported by a DD VTL section to determine the maximum
number of drives supported for your particular DD VTL.
Procedure
1. Select Virtual Tape Libraries > VTL Service > Libraries > library> Changer >
Drives.
2. Select More Tasks > Drives > Create.
3. In the Create Drive dialog, enter the following information:
Number of See the table in the Number of Drives Supported by a DD VTL section, earlier
Drives in this chapter.
Model Name Select the model from the drop-down list. If another drive already exists, this
option is inactive, and the existing drive type must be used. You cannot mix
drive types in the same library.
l IBM-LTO-1
l IBM-LTO-2
l IBM-LTO-3
l IBM-LTO-4
l IBM-LTO-5 (default)
l HP-LTO-3
l HP-LTO-4
4. Select OK, and when the status shows Completed, select OK.
The added drive appears in the Drives list.
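CLI Equivalent
A hedged sketch, assuming two additional IBM-LTO-5 drives are added to a library named NewVTL (names illustrative):
# vtl drive add NewVTL count 2 model IBM-LTO-5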
Deleting drives
A drive must be empty before it can be deleted.
Procedure
1. If there is a tape in the drive that you want to delete, remove the tape.
2. Select Virtual Tape Libraries > VTL Service > Libraries > library > Changer >
Drives.
3. Select More Tasks > Drives > Delete.
4. In the Delete Drives dialog, select the checkboxes of the drives to delete, or
select the Drive checkbox to delete all drives.
5. Select Next, and after verifying that the correct drive(s) has been selected for
deletion, select Submit.
6. When the Delete Drive Status dialog shows Completed, select Close.
The drive is removed from the Drives list.
Column Description
Drive The list of drives by name, where name is Drive # and
# is a number between 1 and n representing the address
or location of the drive in the list of drives.
Tape The barcode of the tape in the drive (if any).
Endpoint The specific name of the endpoint.
From the More Tasks menu, you can delete the drive or perform a refresh.
Item Description
Barcode The unique barcode for the tape.
Pool The name of the pool that holds the tape. The Default pool
holds all tapes unassigned to a user-created pool.
Compression The amount of compression performed on the data on a tape.
Last Modified The date of the last change to the tape's information.
Modification times used by the system for age-based policies
might differ from the last modified time displayed in the tape
information sections of the DD System Manager.
Locked Until If a DD Retention Lock deadline has been set, the time set is
shown. If no retention lock exists, this value is Not
specified.
From the information panel, you can import a tape from the vault, export a tape to the
library, set a tape's state, create a tape, or delete a tape.
From the More Tasks menu, you can move a tape.
Item Description
Location The name of the pool.
Tape Count The number of tapes in the pool.
From the More Tasks menu, you can create, delete, and search for tapes in the vault.
Item Description
Group Name Name of group.
If you select View All Access Groups, you are taken to the Fibre Channel view.
From the More Tasks menu, you can create or delete a group.
check the LUNs for devices assigned to ports, and if there is no device
assigned to LUN 0, change the LUN of a device so it is assigned to LUN 0.
e. Select OK.
You are returned to the Devices dialog box where the new group is listed. To
add more devices, repeat these five substeps.
7. Select Next.
8. Select Close when the Completed status message is displayed.
CLI Equivalent
# vtl group add VTL_Group vtl NewVTL changer lun 0 primary-port all secondary-port all
# vtl group add VTL_Group vtl NewVTL drive 1 lun 1 primary-port all secondary-port all
# vtl group add SetUp_Test vtl SetUp_Test drive 3 lun 3 primary-port endpoint-fc-0
secondary-port endpoint-fc-1
Initiators:
Initiator Alias Initiator WWPN
--------------- -----------------------
tsm6_p23 21:00:00:24:ff:31:ce:f8
--------------- -----------------------
Devices:
Device Name LUN Primary Ports Secondary Ports In-use Ports
------------------ --- ------------- --------------- -------------
SetUp_Test changer 0 all all all
SetUp_Test drive 1 1 all all all
SetUp_Test drive 2 2 5a 5b 5a
SetUp_Test drive 3 3 endpoint-fc-0 endpoint-fc-1 endpoint-fc-0
------------------ --- ------------- --------------- -------------
d. In the Primary and Secondary Ports area, change the option that determines
the ports from which the selected device is seen. The following conditions
apply for designated ports:
l all: The checked device is seen from all ports.
l none: The checked device is not seen from any port.
l select: The checked device is seen from selected ports. Select the
checkboxes of the ports from which it will be seen.
If only primary ports are selected, the checked device is visible only from
primary ports.
If only secondary ports are selected, the checked device is visible only
from secondary ports. Secondary ports can be used if primary ports
become unavailable.
The switchover to a secondary port is not an automatic operation. You must
manually switch the DD VTL device to the secondary ports if the primary
ports become unavailable.
The port list is a list of physical port numbers. A port number denotes the
PCI slot, and a letter denotes the port on a PCI card. Examples are 1a, 1b, or
2a, 2b.
A drive appears with the same LUN on all ports that you have configured.
e. Select OK.
CLI Equivalent
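A hedged sketch, assuming drive 2 of library NewVTL in group VTL_Group should be visible on primary port 5a and secondary port 5b (all names illustrative):
# vtl group modify VTL_Group vtl NewVTL drive 2 lun 2 primary-port 5a secondary-port 5b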
Item Description
LUN Device address; the maximum is 16383. A LUN can be
used only once within a group, but can be used again within
another group. DD VTL devices added to a group must use
contiguous LUNs.
Primary Endpoints Initial (or default) endpoint used by the backup application. In
the event of a failure on this endpoint, the secondary endpoints
may be used, if available.
Name Name of initiator, which is either the WWPN or the alias
assigned to the initiator.
From the More Tasks menu, with a group selected, you can configure that group, or
set endpoints in use.
# ndmpd enable
Starting NDMP daemon, please wait...............
NDMP daemon is enabled.
5. Ensure that the NDMP daemon sees the devices in the TapeServer group:
# ndmpd show devicenames
NDMP Device Virtual Name Vendor Product Serial Number
----------------- ---------------- ------ ----------- -------------
/dev/dd_ch_c0t0l0 dd990-16 changer STK L180 6290820000
/dev/dd_st_c0t1l0 dd990-16 drive 1 IBM ULTRIUM-TD3 6290820001
/dev/dd_st_c0t2l0 dd990-16 drive 2 IBM ULTRIUM-TD3 6290820002
/dev/dd_st_c0t3l0 dd990-16 drive 3 IBM ULTRIUM-TD3 6290820003
/dev/dd_st_c0t4l0 dd990-16 drive 4 IBM ULTRIUM-TD3 6290820004
----------------- ---------------- ------ ----------- -------------
6. Add an NDMP user (ndmp in this example) with the following command:
# ndmpd user add ndmp
Enter password:
Verify password:
7. Verify that user ndmp is added correctly:
# ndmpd user show
ndmp
8. Display the NDMP configuration:
# ndmpd option show all
Name Value
-------------- --------
authentication text
debug disabled
port 10000
preferred-ip
-------------- --------
9. Change the default user password authentication to use MD5 encryption for
enhanced security, and verify the change (notice the authentication value
changed from text to md5):
# ndmpd option set authentication md5
# ndmpd option show all
Name Value
-------------- --------
authentication md5
debug disabled
port 10000
preferred-ip
-------------- --------
Results
NDMP is now configured, and the TapeServer access group shows the device
configuration. See the ndmpd chapter of the EMC Data Domain Operating System
Command Reference Guide for the complete command set and options.
Item Description
Name Name of initiator, which is either the WWPN or the alias
assigned to the initiator.
Online Endpoints Group name where ports are seen by initiator. Displays None
or Offline if the initiator is unavailable.
Item Description
Name Specific name of endpoint.
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
Status DD VTL link status, which is either Online (capable of
handling traffic) or Offline.
Configure Resources
Selecting Configure Resources takes you to the Fibre Channel area, where you can
configure endpoints and initiators.
Note
Item Description
Name Name of initiator.
Selecting Configure Initiators takes you to the Fibre Channel area, where you can
configure endpoints and initiators.
CLI Equivalent
# vtl initiator show
Initiator Group Status WWNN WWPN Port
--------- --------- ------ ----------------------- ----------------------- ----
tsm6_p1 tsm3500_a Online 20:00:00:24:ff:31:ce:f8 21:00:00:24:ff:31:ce:f8 10b
--------- --------- ------ ----------------------- ----------------------- ----
Item Description
System Address System address of endpoint.
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
NPIV NPIV status of this endpoint: either Enabled or Disabled.
Item Description
Name Specific name of endpoint.
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
Link Status Link status of this endpoint: either Online or Offline.
Configure Endpoints
Selecting Configure Endpoints takes you to the Fibre Channel area, where you can
change any of the above information for the endpoint.
CLI Equivalent
# scsitarget endpoint show list
Endpoint System Address Transport Enabled Status
-------- -------------- --------- ------- ------
endpoint-fc-0 5a FibreChannel Yes Online
endpoint-fc-1 5b FibreChannel Yes Online
Item Description
System Address System address of endpoint.
WWNN Unique worldwide node name, which is a 64-bit identifier (a
60-bit value preceded by a 4-bit Network Address Authority
identifier), of the FC node.
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
NPIV NPIV status of this endpoint: either Enabled or Disabled.
Item Description
Name Specific name of endpoint.
Enabled HBA (host bus adapter) port operational state, which is either
Yes (enabled) or No (not enabled).
Link Status Link status of this endpoint: either Online or Offline.
Item Description
Endpoint Specific name of endpoint.
Item Description
Location The location of the pool.
Name The name of the pool.
Remote Source Contains an entry only if the pool is replicated from another
DD system.
From the More Tasks menu, you can create and delete pools, as well as search for
tapes.
Creating pools
You can create backward-compatible pools, if necessary for your setup, for example,
for replication with a pre-5.2 DD OS system.
Procedure
1. Select Pools > Pools.
2. Select More Tasks > Pool > Create.
3. In the Create Pool dialog, enter a Pool Name, noting that a pool name:
l cannot be all, vault, or summary.
l cannot have a space or period at its beginning or end.
l is case-sensitive.
4. If you want to create a directory pool (which is backward compatible with the
previous version of DD System Manager), select the option Create a directory
backwards compatibility mode pool. However, be aware that the advantages
of using an MTree pool include the ability to:
l make individual snapshots and schedule snapshots.
l apply retention locks.
CLI Equivalent
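A minimal sketch (pool name illustrative); on current DD OS releases this creates an MTree pool by default:
# vtl pool add VTL_Pool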
Deleting pools
Before a pool can be deleted, you must have deleted any tapes contained within it. If
replication is configured for the pool, the replication pair must also be deleted.
Deleting a pool corresponds to renaming the MTree and then deleting it, which occurs
at the next cleaning process.
Procedure
1. Select Pools > Pools > pool.
2. Select More Tasks > Pool > Delete.
3. In the Delete Pools dialog, select the checkbox of items to delete:
l The name of each pool, or
l Pool Names, to delete all pools.
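CLI Equivalent
A hedged sketch, assuming the pool's tapes have already been deleted (pool name illustrative):
# vtl pool del VTL_Pool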
Item Description
Convert to MTree Pool Select this button to convert a Directory pool to an MTree
pool.
Capacity The total configured data capacity of tapes in the pool, in GiB
(Gibibytes, base-2 equivalent of GB, Gigabytes).
Name The name of the pool.
Remote Source Contains an entry only if the pool is replicated from another
DD system.
You can also select the Replication Detail button, at the top right, to go directly to
the Replication information panel for the selected pool.
From either the Virtual Tape Libraries or Pools area, from the More Tasks menu, you
can create, delete, move, copy, or search for a tape in the pool.
From the Pools area, from the More Tasks menu, you can rename or delete a pool.
2. With the directory pool you wish to convert highlighted, choose Convert to
MTree Pool.
3. Select OK in the Convert to MTree Pool dialog.
4. Be aware that conversion affects replication in the following ways:
l DD VTL is temporarily disabled on the replicated systems during conversion.
l The destination data is copied to a new pool on the destination system to
preserve the data until the new replication is initialized and synced.
Afterward, you may safely delete this temporarily copied pool, which is
named CONVERTED-pool, where pool is the name of the pool that was
upgraded (or the first 18 characters for long pool names). [This applies only
to DD OS 5.4.1.0 and later.]
l The target replication directory will be converted to MTree format. [This
applies only to DD OS 5.2 and later.]
l Replication pairs are broken before pool conversion and re-established
afterward if no errors occur.
l DD Retention Lock cannot be enabled on systems involved in MTree pool
conversion.
Note
You cannot move tapes from a tape pool that is a directory replication source. As a
workaround, you can:
l Copy the tape to a new pool, then delete the tape from the old pool.
l Use an MTree pool, which allows you to move tapes from a tape pool that is a
directory replication source.
Procedure
1. With a pool highlighted, select More Tasks > Tapes > Move.
Note that when started from a pool, the Tapes Panel allows tapes to be moved
only between pools.
2. In the Move Tapes dialog, enter information to search for the tapes to move,
and select Search:
Pool Select the name of the pool where the tapes reside. If no pools have been
created, use the Default pool.
Barcode Specify a unique barcode, or leave the default (*) to return a group of tapes.
Barcode allows the wildcards ? and *, where ? matches any single character
and * matches 0 or more characters.
Count Enter the maximum number of tapes you want to be returned to you. If you
leave this blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are
15, 30, and 45.
Items Selected Shows the number of tapes selected across multiple pages, updated
automatically for each tape selection.
Pool To copy tapes between pools, select the name of the pool where the tapes
currently reside. If no pools have been created, use the Default pool.
Count Enter the maximum number of tapes you want to be imported. If you leave this
blank, the barcode default (*) is used.
Tapes Per Page Select the maximum number of tapes to display per page. Possible values are 15,
30, and 45.
Items Selected Shows the number of tapes selected across multiple pages, updated
automatically for each tape selection.
Renaming pools
A pool can be renamed only if none of its tapes is in a library.
Procedure
1. Select Pools > Pools > pool.
2. Select More Tasks > Pool > Rename.
3. In the Rename Pool dialog, enter the new Pool Name, with the caveat that this
name:
l cannot be all, vault, or summary.
l cannot have a space or period at its beginning or end.
l is case-sensitive.
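CLI Equivalent
A hedged sketch (pool names illustrative); the same naming restrictions apply at the CLI:
# vtl pool rename VTL_Pool VTL_Pool2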
l DD Replicator overview....................................................................................358
l Prerequisites for replication configuration....................................................... 359
l Replication version compatibility...................................................................... 361
l Replication types..............................................................................................364
l Using DD Encryption with DD Replicator..........................................................368
l Replication topologies...................................................................................... 369
l Managing replication........................................................................................ 374
l Monitoring replication ..................................................................................... 389
l Replication between HA and non-HA systems................................................. 390
l Replicating a system with quotas to one without..............................................391
l Replication Scaling Context ............................................................................. 391
l Directory-to-MTree replication migration......................................................... 391
DD Replicator
DD Replicator overview
EMC Data Domain Replicator (DD Replicator) provides automated, policy-based,
network-efficient, and encrypted replication for DR (disaster recovery) and multi-site
backup and archive consolidation. DD Replicator asynchronously replicates only
compressed, deduplicated data over a WAN (wide area network).
DD Replicator performs two levels of deduplication to significantly reduce bandwidth
requirements: local and cross-site deduplication. Local deduplication determines the
unique segments to be replicated over a WAN. Cross-site deduplication further
reduces bandwidth requirements when multiple sites are replicating to the same
destination system. With cross-site deduplication, any redundant segment previously
transferred by any other site, or as a result of a local backup or archive, will not be
replicated again. This improves network efficiency across all sites and reduces daily
network bandwidth requirements by up to 99%, making network-based replication fast,
reliable, and cost-effective.
In order to meet a broad set of DR requirements, DD Replicator provides flexible
replication topologies, such as full system mirroring, bi-directional, many-to-one, one-
to-many, and cascaded. In addition, you can choose to replicate either all or a subset
of the data on your DD system. For the highest level of security, DD Replicator can
encrypt data being replicated between DD systems using the standard SSL (Secure
Socket Layer) protocol.
DD Replicator scales performance and supported fan-in ratios to support large
enterprise environments. When deployed over a 10 Gb network, DD Replicator can
mirror data between two systems at up to 52 TB/hour.
Before getting started with DD Replicator, note the following general requirements:
l DD Replicator is a licensed product. See your EMC Data Domain sales
representative to purchase licenses.
l You can usually replicate only between machines that are within two releases of
each other, for example, from 5.6 to 6.0. However, there may be exceptions to
this (as a result of atypical release numbering), so review the tables in the
Replication version compatibility section, or check with your EMC representative.
l If you are unable to manage and monitor DD Replicator from the current version of
the DD System Manager, use the replication commands described in the EMC
Data Domain Operating System Command Reference Guide.
Model   RAM / NVRAM           Write  Read  Repl src  Repl dest  Mixed
DD990   128 or 256 GB / 4 GB  540    150   270       540        w<=540; r<=150; ReplSrc<=270; ReplDest<=540; ReplDest+w<=540; Total<=540
DD7200  128 or 256 GB / 4 GB  540    150   270       540        w<=540; r<=150; ReplSrc<=270; ReplDest<=540; ReplDest+w<=540; Total<=540
l Adequate Storage: At a minimum, the destination must have the same amount
of space as the source.
l Destination Empty for Directory Replication: The destination directory must
be empty for directory replication, or its contents must no longer be needed,
because it will be overwritten.
In these tables:
l Each DD OS release includes all releases in that family; for example, DD OS 5.7
includes 5.7.1, 5.7.2, and all other 5.7.x releases.
l c = collection replication
l dir = directory replication
l m = MTree replication
l del = delta (low bandwidth optimization) replication
l dest = destination
l src = source
l NA = not applicable
Source \ Destination  5.0        5.1             5.2             5.3           5.4           5.5           5.6           5.7           6.0
5.0 (src)             c,dir,del  dir,del         dir,del         NA            NA            NA            NA            NA            NA
5.1 (src)             dir,del    c,dir,del,m(a)  dir,del,m(a)    dir,del,m(a)  dir,del,m(a)  NA            NA            NA            NA
5.2 (src)             dir,del    dir,del,m(a)    c,dir,del,m(b)  dir,del,m     dir,del,m     dir,del,m     NA            NA            NA
5.3 (src)             NA         dir,del,m(a)    dir,del,m       c,dir,del,m   dir,del,m     dir,del,m     NA            NA            NA
5.4 (src)             NA         dir,del,m(a)    dir,del,m       dir,del,m     c,dir,del,m   dir,del,m     dir,del,m     NA            NA
5.5 (src)             NA         NA              dir,del,m       dir,del,m     dir,del,m     c,dir,del,m   dir,del,m     dir,del,m     NA
5.6 (src)             NA         NA              NA              NA            dir,del,m     dir,del,m     c,dir,del,m   dir,del,m     dir,del,m
5.7 (src)             NA         NA              NA              NA            NA            dir,del,m     dir,del,m     c,dir,del,m   dir,del,m
Source \ Destination  5.0  5.1   5.2     5.3   5.4   5.5   5.6   5.7   6.0
5.0 (src)             c    NA    NA      NA    NA    NA    NA    NA    NA
5.1 (src)             NA   c     m(a)    m(b)  m(b)  NA    NA    NA    NA
5.2 (src)             NA   m(a)  c,m(a)  m(a)  m(a)  m(a)  NA    NA    NA
5.3 (src)             NA   m(c)  m(c)    c,m   m     m     NA    NA    NA
5.4 (src)             NA   m(c)  m(c)    m     c,m   m     m     NA    NA
5.5 (src)             NA   NA    m(c)    m     m     c,m   m     m     NA
5.6 (src)             NA   NA    NA      NA    m     m     c,m   m     m
5.7 (src)             NA   NA    NA      NA    NA    m     m     c,m   m
6.0 (src)             NA   NA    NA      NA    NA    NA    m     m     c,m
a. File migration is not supported with MTree replication on either the source or destination in this configuration.
b. File migration is not supported with MTree replication on the source in this configuration.
c. File migration is not supported with MTree replication on the destination in this configuration.
Source \ Destination  5.0  5.1  5.2  5.3  5.4  5.5  5.6  5.7
5.0 (src)             dir  dir  NA   NA   NA   NA   NA   NA
Replication types
Replication typically consists of a source DD system (which receives data from a
backup system) and one or more destination DD systems. Each DD system can be the
source and/or the destination for replication contexts. During replication, each DD
system can perform normal backup and restore operations.
Each replication type establishes a context associated with an existing directory or
MTree on the source. The replicated context is created on the destination when a
context is established. The context establishes a replication pair, which is always
active, and any data landing in the source will be copied to the destination at the
earliest opportunity. Paths configured in replication contexts are absolute references
and do not change based on the system in which they are configured.
A Data Domain system can be set up for directory, collection, or MTree replication.
l Directory replication provides replication at the level of individual directories.
l Collection replication duplicates the entire data store on the source and transfers
that to the destination, and the replicated volume is read-only.
l MTree replication replicates entire MTrees (that is, a virtual file structure that
enables advanced management). Media pools can also be replicated, and by
default (as of DD OS 5.3), an MTree is created that will be replicated. (A media
pool can also be created in backward-compatibility mode that, when replicated,
will be a directory replication context.)
For any replication type, note the following requirements:
l A destination Data Domain system must have available storage capacity that is at
least the size of the expected maximum size of the source directory. Be sure that
the destination Data Domain system has enough network bandwidth and disk
space to handle all traffic from replication sources.
l The file system must be enabled or, based on the replication type, will be enabled
as part of the replication initialization.
l The source must exist.
l The destination must not exist.
Directory replication
Directory replication transfers deduplicated data within a DD file system directory
configured as a replication source to a directory configured as a replication destination
on a different system.
With directory replication, a DD system can simultaneously be the source of some
replication contexts and the destination of other contexts. And that DD system can
also receive data from backup and archive applications while it is replicating data.
Directory replication has the same flexible network deployment topologies and cross-
site deduplication effects as managed file replication (the type used by DD Boost).
Here are some additional points to consider when using directory replication:
l Do not mix CIFS and NFS data within the same directory. A single destination DD
system can receive backups from both CIFS clients and NFS clients as long as
separate directories are used for CIFS and NFS.
l Any directory can be in only one context at a time. A parent directory may not be
used in a replication context if a child directory of that parent is already being
replicated.
Note
MTree replication
MTree replication is used to replicate MTrees between DD systems. Periodic
snapshots are created on the source, and the differences between them are
transferred to the destination by leveraging the same cross-site deduplication
mechanism used for directory replication. This ensures that the data on the
destination is always a point-in-time copy of the source, with file consistency. This
also reduces the amount of churn that is replicated, leading to more efficient utilization
of the WAN.
Collection replication
Collection replication performs whole-system mirroring in a one-to-one topology,
continuously transferring changes in the underlying collection, including all of the
logical directories and files of the DD file system.
Collection replication does not have the flexibility of the other types, but it can provide
higher throughput and support more objects with less overhead, which may work
better for high-scale enterprise cases.
Note
Here are some additional points to consider when using collection replication:
l No granular replication control is possible. All data is copied from the source to the
destination producing a read-only copy.
l Collection replication requires that the storage capacity of the destination system
be equal to, or greater than, the capacity of the source system. If the destination
capacity is less than the source capacity, the available capacity on the source is
reduced to the capacity of the destination.
l The DD system to be used as the collection replication destination must be empty
before configuring replication. After replication is configured, this system is
dedicated to receive data from the source system.
l With collection replication, all user accounts and passwords are replicated from
the source to the destination. However, as of DD OS 5.5.1.0, other elements of
configuration and user settings of the DD system are not replicated to the
destination; you must explicitly reconfigure them after recovery.
l DD Retention Lock Compliance supports collection replication.
l Collection replication is not supported in cloud tier-enabled systems.
Note
association phase, and the data is re-encrypted at the source using the
destination's encryption key before transmission to the destination.
If the destination has a different encryption configuration, the data transmitted is
prepared appropriately. For example, if the feature is turned off at the destination,
the source decrypts the data, and it is sent to the destination unencrypted.
l In a cascaded replication topology, a replica is chained among three Data Domain
systems. The last system in the chain can be configured as a collection, MTree, or
directory. If the last system is a collection replication destination, it uses the same
encryption keys and encrypted data as its source. If the last system is an MTree or
directory replication destination, it uses its own key, and the data is encrypted at
its source. The encryption key for the destination at each link is used for
encryption. Encryption for systems in the chain works as in a replication pair.
Replication topologies
DD Replicator supports five replication topologies (one-to-one, one-to-one
bidirectional, one-to-many, many-to-one, and cascaded). The tables in this section
show (1) how these topologies work with three types of replication (MTree, directory,
and collection) and two types of DD systems [single node (SN) and DD Extended
Retention] and (2) how mixed topologies are supported with cascaded replication.
In general:
l Single node (SN) systems support all replication topologies.
l Single node-to-single node (SN -> SN) can be used for all replication types.
l DD Extended Retention systems cannot be the source for directory replication.
l Collection replication cannot be configured from either a single node (SN) system
to a DD Extended Retention-enabled system, nor from a DD Extended Retention-
enabled system to an SN system.
l Collection replication cannot be configured if any or both systems have Cloud Tier
enabled.
In this table:
l SN = single node DD system (no DD Extended Retention)
l ER = DD Extended Retention system
Table 171 Topology Support by Replication Type and DD System Type (continued)
Cascaded replication supports mixed topologies where the second leg in a cascaded
connection is different from the first type in a connection (for example, A -> B is
directory replication, and B -> C is collection replication).
Mixed Topologies
l SN Dir Repl -> ER MTree Repl -> ER MTree Repl
l SN Dir Repl -> ER Col Repl -> ER Col Repl
l SN MTree Repl -> SN Col Repl -> SN Col Repl
l SN MTree Repl -> ER Col Repl -> ER Col Repl
One-to-one replication
The simplest type of replication is from a DD source system to a DD destination
system, otherwise known as a one-to-one replication pair. This replication topology
can be configured with directory, MTree, or collection replication types.
Figure 9 One-to-one replication pair
Bi-directional replication
In a bi-directional replication pair, data from a directory or MTree on DD system A is
replicated to DD system B, and from another directory or MTree on DD system B to
DD system A.
Figure 10 Bi-directional replication
One-to-many replication
In one-to-many replication, data flows from a source directory or MTree on one DD
system to several destination DD systems. You could use this type of replication to
create more than two copies for increased data protection, or to distribute data for
multi-site usage.
Figure 11 One-to-many replication
Many-to-one replication
In many-to-one replication, whether with MTree or directory, replication data flows
from several source DD systems to a single destination DD system. This type of
replication can be used to provide data recovery protection for several branch offices
on a corporate headquarters IT system.
Figure 12 Many-to-one replication
Cascaded replication
In a cascaded replication topology, a source directory or MTree is chained among
three DD systems. The last hop in the chain can be configured as collection, MTree, or
directory replication, depending on whether the source is directory or MTree.
For example, DD system A replicates one or more MTrees to DD system B, which then
replicates those MTrees to DD system C. The MTrees on DD system B are both a
destination (from DD system A) and a source (to DD system C).
Figure 13 Cascaded directory replication
Data recovery can be performed from the non-degraded replication pair context. For
example:
l In the event DD system A requires recovery, data can be recovered from DD
system B.
l In the event DD system B requires recovery, the simplest method is to perform a
replication resync from DD system A to (the replacement) DD system B. In this
case, the replication context from DD system B to DD system C should be broken
first. After the DD system A to DD system B replication context finishes resync, a
new DD system B to DD System C context should be configured and resynced.
Managing replication
You can manage replication using the Data Domain System Manager (DD System
Manager) or the Data Domain Operating System (DD OS) Command Line Interface
(CLI).
To use a graphical user interface (GUI) to manage replication, log in to the DD System
Manager.
Procedure
1. From the menu at the left of the DD System Manager, select Replication. If
your license has not been added yet, select Add License.
2. Select Automatic or On-Demand (you must have a DD Boost license for on-
demand).
CLI Equivalent
You can also log in at the CLI:
login as: sysadmin
Data Domain OS 6.0.x.x-12345
Using keyboard-interactive authentication.
Password:
Replication status
Replication Status shows the system-wide count of replication contexts exhibiting a
warning (yellow text) or error (red text) state, or if conditions are normal.
Summary view
The Summary view lists the configured replication contexts for a DD system,
displaying aggregated information about the selected DD system; that is, summary
information about the inbound and outbound replication pairs. The focus is the DD
system itself, and the inputs to it and outputs from it.
The Summary table can be filtered by entering a Source or Destination name, or by
selecting a State (Error, Warning, or Normal).
Item Description
Source System and path name of the source context, with format
system.path. For example, for directory dir1 on system
dd120-22, you would see dd120-22.chaos.local/data/
col1/dir1.
Destination System and path name of destination context, with format
system.path. For example, for MTree MTree1 on system
dd120-44, you would see dd120-44.chaos.local/data/
col1/MTree1.
Type Type of context: MTree, directory (Dir), or Pool.
Completion Time (Est.) Value is either Completed, or the estimated amount of time
required to complete the replication data transfer, based on the
transfer rate over the last 24 hours.
State Description Message about state of replica.
Connection Port System name and listen port used for replication connection.
Pre-Comp Remaining Pre-compressed data remaining to be replicated.
Pre-Comp Written Pre-compressed data written on the source.
Synced As Of Time Timestamp for last automatic replication sync operation
performed by the source. For MTree replication, this value is
updated when a snapshot is exposed on the destination. For
directory replication, it is updated when a sync point inserted by
the source is applied. A value of unknown displays during
replication initialization.
Completion Time (Est.) Value is either Completed, or the estimated amount of time
required to complete the replication data transfer, based on the
transfer rate over the last 24 hours.
Files Remaining (Directory Replication Only) Number of files that have not yet
been replicated.
Completion Predictor
The Completion Predictor is a widget for tracking a backup job's progress and for
predicting when replication will complete, for a selected context.
Note
Procedure
1. In the Create Pair dialog, select Add System.
2. For System, enter the hostname or IP address of the system to be added.
3. For User Name and Password, enter the sysadmin's user name and password.
4. Optionally, select More Options to enter a proxy IP address (or system name)
of a system that cannot be reached directly. If configured, enter a custom port
instead of the default port 3009.
Note
IPv6 addresses are supported only when adding a DD OS 5.5 or later system to
a management system using DD OS 5.5 or later.
5. Select OK.
6. If the system certificate is not verified, the Verify Certificate dialog shows
details about the certificate. Check the system credentials. Select OK if you
trust the certificate, or select Cancel.
l The file system is disabled on the source while encryption is configured and
enabled on the replica.
Procedure
1. In the Create Pair dialog, select Collection from the Replication Type menu.
2. Select the source system hostname from the Source System menu.
3. Select the destination system hostname from the Destination System menu.
The list includes only those hosts in the DD-Network list.
4. If you want to change any host connection settings, select the Advanced tab.
5. Select OK. Replication from the source to the destination begins.
Results
Test results from Data Domain returned the following performance guidelines for
replication initialization. These are guidelines only, and actual performance seen in
production environments may vary.
l Over a gigabit LAN: With a high enough shelf count to drive maximum input/output
and ideal conditions, collection replication can saturate a 1 GbE link (less roughly
10% protocol overhead), and can achieve 400-900 MB/sec on 10 GbE, depending on the
platform.
l Over a WAN, performance is governed by the WAN link line speed, bandwidth,
latency, and packet loss rate.
2. Select the source system hostname from the Source System menu.
3. Select the destination system hostname from the Destination System menu.
4. Enter the source path in the Source Path text box (notice the first part of the
path is a constant that changes based on the type of replication chosen).
5. Enter the destination path in the Destination Directory text box (notice the
first part of the path is a constant that changes based on the type of replication
chosen).
6. If you want to change any host connection settings, select the Advanced tab.
7. Select OK.
Replication from the source to the destination begins.
Test results from Data Domain returned the following guidelines for estimating
the time needed for replication initialization.
These are guidelines only and may not be accurate in specific production
environments.
l Using a T3 connection with 100 ms WAN latency, performance is about 40 MB/s of
pre-compressed data, which gives a data transfer rate of:
40 MB/s = 25 seconds/GB = 3.456 TB/day
l Using a gigabit LAN, performance is about 80 MB/s of pre-compressed data,
which gives a data transfer rate of about double the T3 WAN rate.
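Translating these guidelines into a seeding estimate is a rate-times-time calculation. The following Python sketch uses the guideline rates above (decimal units; the helper name is hypothetical).

SECONDS_PER_DAY = 86_400

def init_days(dataset_tb, rate_mb_per_s):
    """Days to initialize a replication pair at a sustained rate."""
    dataset_mb = dataset_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal units)
    return dataset_mb / rate_mb_per_s / SECONDS_PER_DAY

print(init_days(10, 40))  # 10 TB over a T3 WAN (~40 MB/s): about 2.9 days
print(init_days(10, 80))  # 10 TB over gigabit LAN (~80 MB/s): about 1.4 days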
CLI Equivalent
Here are examples of creating MTree or directory replication pairs at the CLI.
The last example specifies the IP version used as a replication transport.
# replication add source mtree://ddsource.test.com/data/col1/
examplemtree destination mtree://ddtarget.test.com/data/col1/
examplemtree (MTree example)
# replication add source dir://ddsource.test.com/data/col1/
directorytorep destination dir://ddtarget.test.com/backup/
directorytorep
# replication add source dir://ddsource.test.com/data/col1/
directorytorep destination dir://ddtarget.test.com/backup/
directorytorep ipversion ipv6
CLI Equivalent
# replication disable {destination | all}
CLI Equivalent
Before running this command, always run the filesys disable command.
Afterward, run the filesys enable command.
# replication break {destination | all}
The host entry will indicate an alternate destination address for that host. This may be
required on both the source and destination systems.
Procedure
1. Select the replication pair in the Summary table, and select Modify Settings.
You can also change these settings when you are performing Create Pair, Start
Resync, or Start Recover by selecting the Advanced tab.
2. In the Modify Connection Settings dialog, modify any or all of these settings:
a. Use Low Bandwidth Optimization: For enterprises with small data sets
and 6 Mb/s or less bandwidth networks, DD Replicator can further reduce
the amount of data to be sent using low bandwidth optimization. This
enables remote sites with limited bandwidth to use less bandwidth or to
replicate and protect more of their data over existing networks. Low
bandwidth optimization must be enabled on both the source and destination
DD systems. If the source and destination have incompatible low bandwidth
optimization settings, low bandwidth optimization will be inactive for that
context. After enabling low bandwidth optimization on the source and
destination, both systems must undergo a full cleaning cycle to prepare the
existing data, so run filesys clean start on both systems. The
duration of the cleaning cycle depends on the amount of data on the DD
system, but takes longer than a normal cleaning. For more information on
the filesys commands, see the EMC Data Domain Operating System
Command Reference Guide.
Important: Low bandwidth optimization is not supported if the DD Extended
Retention software option is enabled on either DD system. It is also not
supported for Collection Replication.
CLI Equivalent
# replication modify <destination> connection-host <new-host-name> [port <port>]
5. Select the replication destination system host name from the Destination
System menu.
6. Enter the replication source path in the Source Path text box.
7. Enter the replication destination path in the Destination Path text box.
8. To change any host connection settings, select the Advanced tab.
9. Select OK.
CLI Equivalent
# replication resync destination
DD Boost view
The DD Boost view provides configuration and troubleshooting information to
NetBackup administrators who have configured DD systems to use DD Boost AIR
(Automatic Image Replication) or any DD Boost application that uses managed file
replication.
See the EMC Data Domain Boost for OpenStorage Administration Guide for DD Boost
AIR configuration instructions.
The File Replication tab displays:
l Currently Active File Replication:
n Direction (Out-Going and In-Coming) and the number of files in each.
n Remaining data to be replicated (pre-compressed value in GiB) and the amount
of data already replicated (pre-compressed value in GiB).
n Total size: The amount of data to be replicated and the already replicated data
(pre-compressed value in GiB).
l Most Recent Status: Total file replications and whether completed or failed
n during the last hour
n over the last 24 hours
l Remote Systems:
n Select a replication from the list.
n Select the time period to be covered from the menu.
n Select Show Details for more information about these remote system files.
The Storage Unit Associations tab displays the following information, which you can
use for audit purposes or to check the status of DD Boost AIR events used for the
storage unit's image replications:
l A list of all storage unit Associations known to the system. The source is on the
left, and the destination is on the right. This information shows the configuration
of AIR on the Data Domain system.
l The Event Queue is the pending event list. It shows the local storage unit, the
event ID, and the status of the event.
An attempt is made to match both ends of a DD Boost path to form a pair and present
this as one pair/record. If the match is impossible, for various reasons, the remote
path will be listed as Unresolved.
Item Description
Start Starting point of time period.
Item Description
Duration Duration for replication (either 1d, 7d or 30d).
Pre-Comp Replicated Amount of pre-compressed outbound and inbound data (in GiB).
Topology view
The Topology view shows how the selected replication pairs are configured in the
network.
l The arrow between DD systems, which is green (normal), yellow (warning), or red
(error), represents one or more replication pairs.
l To view details, select a context, which opens the Context Summary dialog, with
links to Show Summary, Modify Options, Enable/Disable Pair, Graph
Performance, and Delete Pair.
l Select Collapse All to roll-up the Expand All context view and show only the name
of the system and the count of destination contexts.
l Select Expand All to show all the destination directory and MTree contexts
configured on other systems.
l Select Reset Layout to return to the default view.
l Select Print to open a standard print dialog.
Performance view
The Performance view displays a graph that represents the fluctuation of data during
replication. These are aggregated statistics of each replication pair for this DD system.
l Duration (x-axis) is 30 days by default.
l Replication Performance (y-axis) is in gibibytes or mebibytes (the binary
equivalents of gigabytes and megabytes).
l Network In is the total replication network bytes entering the system (all
contexts).
l Network Out is the total replication network bytes leaving the system (all
contexts).
l For a reading of a specific point in time, hover the cursor over a place on the
graph.
l During times of inactivity (when no data is being transferred), the shape of the
graph may display a gradually descending line, instead of an expected sharply
descending line.
Network Settings
l Bandwidth: Displays the configured data stream rate if bandwidth has been
configured, or Unlimited (default) if not. The average data stream to the
replication destination is at least 98,304 bits per second (12 KiB/s).
l Delay: Displays the configured network delay setting (in milliseconds) if it has
been configured, or None (default) if not.
l Listen Port: Displays the configured listen port value if it has been configured, or
2051 (default) if not.
Note
Currently, you can set and modify destination throttle only by using the command-
line interface (CLI); this functionality is not available in the DD System Manager.
For documentation on this feature, see the replication throttle command
in the EMC Data Domain Operating System Command Reference Guide. If the DD
System Manager detects that you have one or more destination throttles set, you
will be given a warning, and you should use the CLI to continue.
l Enter a number in the text box (for example, 20000), and select the rate
from the menu (bps, Kbps, Bps, or KBps).
l Select the 0 Bps (disabled) option to disable all replication traffic.
5. Select OK to set the schedule. The new schedule is shown under Permanent
Schedule.
Results
Replication runs at the given rate until the next scheduled change, or until a new
throttle setting forces a change.
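Because throttle rates can be entered in bits or bytes per second, it is easy to be off by a factor of eight when sizing a throttle. The following Python sketch normalizes the four accepted units; the helper and the decimal interpretation of Kbps and KBps are assumptions.

# Normalize a throttle entry to bytes per second.
UNIT_TO_BYTES_PER_SECOND = {
    "bps": 1 / 8,      # bits per second
    "Kbps": 1000 / 8,  # kilobits per second (decimal interpretation assumed)
    "Bps": 1,          # bytes per second
    "KBps": 1000,      # kilobytes per second (decimal interpretation assumed)
}

def throttle_bytes_per_second(value, unit):
    return value * UNIT_TO_BYTES_PER_SECOND[unit]

# The replication stream never falls below 98,304 bits/s:
print(throttle_bytes_per_second(98_304, "bps"))  # 12288.0 bytes/s = 12 KiB/s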
l You can determine the actual bandwidth and the actual network delay values for
each server by using the ping command.
l The default network parameters in a restorer work well for replication in low
latency configurations, such as a local 100Mbps or 1000Mbps Ethernet network,
where the latency round-trip time (as measured by the ping command) is usually
less than 1 millisecond. The defaults also work well for replication over low- to
moderate-bandwidth WANs, where the latency may be as high as 50-100
milliseconds. However, for high-bandwidth high-latency networks, some tuning of
the network parameters is necessary.
The key number for tuning is the bandwidth-delay product: the bandwidth
multiplied by the round-trip latency of the network. This number measures how
much data can be in transit on the network before any acknowledgments return
from the far end. If the bandwidth-delay product of a replication network is
more than 100,000, replication performance benefits from setting the network
parameters in both restorers.
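The tuning threshold described above is easy to compute. The following Python sketch shows the bandwidth-delay arithmetic; the example values are illustrative.

def bandwidth_delay_product(bandwidth_bytes_per_s, rtt_ms):
    """Bytes in flight on the link before the first acknowledgment returns."""
    return bandwidth_bytes_per_s * (rtt_ms / 1000.0)

# Example: a ~19 MB/s WAN with an 80 ms round-trip time.
bdp = bandwidth_delay_product(19_000_000, 80)
print(bdp)  # 1,520,000 -> well above 100,000, so tune both restorers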
Procedure
1. Select Replication > Advanced Settings > Change Network Settings to
display the Network Settings dialog.
2. In the Network Settings area, select Custom Values.
3. Enter Delay and Bandwidth values in the text boxes. The network delay setting
is in milliseconds, and bandwidth is in bytes per second.
4. In the Listen Port area, enter a new value in the text box. The default IP Listen
Port for a replication destination for receiving data streams from the replication
source is 2051. This is a global setting for the DD system.
5. Select OK. The new settings appear in the Network Settings table.
Monitoring replication
The DD System Manager provides many ways to track the status of replication from
checking replication pair status, to tracking backup jobs, to checking performance, to
tracking a replication process.
When specifying an IP version, use the following command to check its setting:
# replication show config rctx://2
CTX: 2
Source: mtree://ddbeta1.dallasrdc.com/data/col1/EDM1
Destination: mtree://ddbeta2.dallasrdc.com/data/col1/EDM_ipv6
Connection Host: ddbeta2-ipv6.dallasrdc.com
Connection Port: (default)
Ipversion: ipv6
Low-bw-optim: disabled
Encryption: disabled
Enabled: yes
Propagate-retention-lock: enabled
on the HA system if you want to use the DD System Manager graphical user interface
(GUI).
However, you can use the CLI to perform replication from a non-HA system to an HA
system, and from an HA system to a non-HA system.
Note
This feature appears only on Data Domain systems running DD OS version 6.0.
Note
Although you can use the graphical user interface (GUI) for this operation, it is
recommended you use the Command Line Interface (CLI) for optimal performance.
Note
If the source system contains retention-locked files, you might want to maintain
DD Retention Lock on the new MTree.
Note
This command might take longer than expected to complete. Do not press
Ctrl-C during this process; if you do, you will cancel the D2M migration.
Phase 1 of 4 (precheck):
Marking source directory /backup/dir1 as read-only...Done.
Phase 2 of 4 (sync):
Syncing directory replication context...0 files flushed.
current=45 sync_target=47 head=47
current=45 sync_target=47 head=47
Done. (00:09)
Phase 3 of 4 (fastcopy):
Starting fastcopy from /backup/dir1 to /data/col1/
mtree1...
Waiting for fastcopy to complete...(00:00)
Phase 4 of 4 (initialize):
Initializing MTree replication context...
(00:08) Waiting for initialize to start...
(00:11) Initialize started.
2. Begin ingesting data to the MTree on the source DD system when the migration
process is complete.
3. (Optional) Break the directory replication context on the source and target
systems.
See the EMC Data Domain Operating System Version 6.0 Command Reference
Guide for more information about the replication break command.
Troubleshooting D2M
If you encounter a problem setting directory-to-MTree (D2M) replication, there is an
operation you can perform to address several different issues.
The dir-to-mtree abort procedure can help cleanly abort the D2M process. You
should run this procedure in the following cases:
l The status of the D2M migration is listed as aborted.
l The Data Domain system rebooted during D2M migration.
l An error occurred when running the replication dir-to-mtree start
command.
l Ingest was not stopped before beginning migration.
l The MTree replication context was initialized before the replication dir-to-
mtree start command was entered.
Note
Do not run replication break on the MTree replication context before the D2M
process finishes.
Always run replication dir-to-mtree abort before running the replication
break command on the MTree replication context.
Running the replication break command prematurely will permanently leave the
directory replication source directory read-only.
If this occurs, contact EMC Support.
Procedure
1. Enter replication dir-to-mtree abort to abort the process.
2. Break the newly created MTree replication context on both the source and
destination Data Domain systems.
In the following example, the MTree replication context is rctx://2.
3. Delete the corresponding MTrees on both the source and destination systems.
Note
MTrees marked for deletion remain in the file system until the filesys clean
command is run.
See the EMC Data Domain Operating System Version 6.0 Command Reference
Guide for more information.
4. Run the filesys clean start command on both the source and
destination systems.
For more information on the filesys clean commands, see the EMC Data
Domain Operating System Version 6.0 Command Reference Guide.
5. Restart the process.
See Performing migration from directory replication to MTree replication.
l Abort the current D2M process as described in Aborting D2M replication on page
394 and restart the process with DD Retention Lock enabled on the source MTree.
An error occurs after initialization
If the replication dir-to-mtree start process finishes without error but you
detect an error during the MTree replication initialization (phase 4 of the D2M
migration process), you can perform the following steps:
1. Make sure there is no network issue.
2. Initialize the MTree replication context.
Note
For more information about DD Management Center, see the DD Management Center
User Guide. For more information about the DD OS command line interface, see the DD
OS Command Reference.
without affecting the entire file system. MTrees are assigned to Tenant Units and
contain that Tenant Unit's individualized settings for managing and monitoring SMT.
Multitenancy
Multitenancy refers to the hosting of an IT infrastructure by an internal IT department,
or an external service provider, for more than one consumer/workload (business unit/
department/Tenant) simultaneously. Data Domain SMT enables Data Protection-as-a-
Service.
RBAC (role-based access control)
RBAC offers multiple roles with different privilege levels, which combine to provide
the administrative isolation on a multi-tenant Data Domain system. (The next section
will define these roles.)
Storage Unit
A Storage Unit is an MTree configured for the DD Boost protocol. Data isolation is
achieved by creating a Storage Unit and assigning it to a DD Boost user. The DD Boost
protocol permits access only to Storage Units assigned to DD Boost users connected
to the Data Domain system.
Tenant
A Tenant is a consumer (business unit/department/customer) who maintains a
persistent presence in a hosted environment.
Tenant Self-Service
Tenant Self-Service is a method of letting a Tenant log in to a Data Domain system to
perform some basic services (add, edit, or delete local users, NIS groups, and/or AD
groups). This reduces the bottleneck of always having to go through an administrator
for these basic tasks. The Tenant can access only their assigned Tenant Units. Tenant
Users and Tenant Admins will, of course, have different privileges.
Tenant Unit
A Tenant Unit is the partition of a Data Domain system that serves as the unit of
administrative isolation between Tenants. Tenant Units are secured and logically
isolated from each other, which ensures security and isolation of the control path
when running multiple Tenants simultaneously on the shared infrastructure. Tenant
Units can contain one or more MTrees, which hold all configuration elements needed
in a multi-tenancy setup. Users, management-groups, notification-groups, and other
configuration elements are part of a Tenant Unit.
admin and tenant-user roles. You must use smt tenant-unit management-ip to
add and maintain management IP address(es) for Tenant Units.
Similarly, data access and data flow (into and out of Tenant Units) can be restricted to
a fixed set of local or remote data access IP address(es). The use of assigned data
access IP address(es) enhances the security of the DD Boost and NFS protocols by
adding SMT-related security checks. For example, the list of storage units returned
over DD Boost RPC can be limited to those which belong to the Tenant Unit with the
assigned local data access IP address. For NFS, access and visibility of exports can be
filtered based on the local data access IP address(es) configured. For example, using
showmount -e from the local data access IP address of a Tenant Unit will only
display NFS exports belonging to that Tenant Unit.
The sysadmin must use smt tenant-unit data-ip to add and maintain data
access IP address(es) for Tenant Units.
Note
Multiple Tenant Units belonging to the same tenant can share a default gateway.
Tenant Units that belong to different tenants cannot use the same default gateway.
limited-admin role
A user with a limited-admin role can perform all administrative operations on a Data
Domain system that an admin can; however, users with the limited-admin role cannot
delete or destroy MTrees. In DD OS, there is an equivalent limited-admin role.
tenant-admin role
A user with a tenant-admin role can perform certain tasks only when tenant self-
service mode is enabled for a specific Tenant Unit. Responsibilities include scheduling
and running a backup application for the Tenant and monitoring resources and
statistics within the assigned Tenant Unit. The tenant-admin is able to view audit logs,
but RBAC ensures that only audit logs from the Tenant Unit(s) belonging to the
tenant-admin are accessible. In addition, tenant-admins ensure administrative
separation when Tenant self-service mode is enabled. In the context of SMT, the
tenant-admin is usually referred to as the backup admin.
tenant-user role
A user with a tenant-user role can monitor the performance and usage of SMT
components only on Tenant Unit(s) assigned to them and only when Tenant self-
service is enabled, but a user with this role cannot view audit logs for their assigned
Tenant Units. In addition, tenant-users may run the show and list commands.
none role
A user with a role of none is not allowed to perform any operations on a Data Domain
system other than changing their password and accessing data using DD Boost.
However, after SMT is enabled, the admin can select a user with a none role from the
Data Domain system and assign them an SMT-specific role of tenant-admin or tenant-
user. Then, that user can perform operations on SMT management objects.
management groups
BSPs (backup service providers) can use management groups defined in a single,
external AD (active directory) or NIS (network information service) to simplify
managing user roles on Tenant Units. Each BSP Tenant may be a separate, external
company and may use a name-service such as AD or NIS.
With SMT management groups, the AD and NIS servers are set up and configured by
the admin in the same way as SMT local users. The admin can ask their AD or NIS
administrator to create and populate the group. The admin then assigns an SMT role
to the entire group. Any user within the group who logs in to the Data Domain system
is logged in with the role assigned to the group.
When users leave or join a Tenant company, they can be removed or added to the
group by the AD or NIS administrator. It is not necessary to modify the RBAC
configuration on a Data Domain system when users who are part of the group are
added or removed.
# smt enable
SMT enabled.
2. Verify that SMT is enabled.
# smt status
SMT is enabled.
3. Launch the SMT configuration wizard.
# smt tenant-unit setup
No tenant-units.
4. Follow the configuration prompts.
SMT TENANT-UNIT Configuration
Tenant-unit Name
Enter tenant-unit name to be created
: SMT_5.7_tenant_unit
Invalid tenant-unit name.
Enter tenant-unit name to be created
: SMT_57_tenant_unit
Do you want to add a local management ip to this tenant-unit? (yes|no) [no]: yes
Choose an ip from above table or enter a new ip address. New ip addresses will need
to be created manually.
Ip Address
Enter the local management ip address to be added to this tenant-unit
: 192.168.10.57
Do you want to add another local management ip to this tenant-unit? (yes|no) [no]:
Do you want to add another remote management ip to this tenant-unit? (yes|no) [no]:
Do you want to create a mtree for this tenant-unit now? (yes|no) [no]: yes
MTree Name
Enter MTree name
: SMT_57_tenant_unit
Invalid mtree path name.
Enter MTree name
:
SMT_57_tenant_unit
MTree Soft-Quota
Enter the quota soft-limit to be set on this MTree (<n> {MiB|GiB|TiB|PiB}|none)
:
MTree Hard-Quota
Enter the quota hard-limit to be set on this MTree (<n> {MiB|GiB|TiB|PiB}|none)
:
Do you want to assign another MTree to this tenant-unit? (yes|no) [no]: yes
Do you want to create another mtree for this tenant-unit? (yes|no) [no]:
Do you want to configure a management user for this tenant-unit? (yes|no) [no]:
Do you want to configure a management group for this tenant-unit (yes|no) [no]: yes
Management-Group Name
Enter the group name to be assigned to this tenant-unit
: SMT_57_tenant_unit_group
Management-Group Type
What type do you want to assign to this group (nis|active-directory)?
: nis
Do you want to configure another management user for this tenant-unit? (yes|no) [no]:
Do you want to configure another management group for this tenant-unit? (yes|no)
[no]:
Alert Configuration
Configuration complete.
Modifying quotas
To meet QoS criteria, a system administrator uses DD OS knobs to adjust the
settings required by the Tenant configuration. For example, the administrator can set
soft and hard quota limits on DD Boost Storage Units. Stream soft and hard
quota limits can be allocated only to DD Boost Storage Units assigned to Tenant Units.
After the administrator sets the quotas, the tenant-admin can monitor one or all
Tenant Units to ensure no single object exceeds its allocated quotas and deprives
others of system resources.
Quotas are set initially when prompted by the configuration wizard, but they can be
adjusted or modified later. The example below shows how to modify quotas for DD
Boost. (You can also use quota capacity and quota streams to deal with
capacity and stream quotas and limits.)
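The interplay of the two limits is straightforward: exceeding the soft limit typically raises an alert while writes continue, and reaching the hard limit stops further writes. The following Python sketch classifies usage against the two limits; it is a hypothetical illustration, not the DD OS enforcement code.

def quota_status(used_gib, soft_gib, hard_gib):
    """Classify usage against soft and hard quota limits."""
    if used_gib >= hard_gib:
        return "hard limit reached: writes are stopped"
    if used_gib >= soft_gib:
        return "soft limit exceeded: alert raised, writes continue"
    return "within quota"

print(quota_status(12, 10, 20))  # soft limit exceeded: alert raised, writes continue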
Procedure
1. To modify soft and hard quota limits on DD Boost Storage Unit su33:
ddboost storage-unit modify su33 quota-soft-limit 10 GiB quota-
hard-limit 20 GiB
2. To modify stream soft and hard limits on DD Boost Storage Unit su33:
ddboost storage-unit modify su33 write-stream-soft-limit 20
read-stream-soft-limit 6 repl-stream-soft-limit 20 combined-
stream-soft-limit 20
3. To report physical size for DD Boost Storage Unit su33:
ddboost storage-unit modify su33 report-physical-size 8 GiB
Tenant action: This alert is generated when the system restarts after
a power loss. If this alert repeats, contact your System
Administrator.
Managing snapshots
A snapshot is a read-only copy of an MTree captured at a specific point in time. A
snapshot can be used for many things, for example, as a restore point in case of a
system malfunction. The required role for using snapshot is admin or tenant-admin.
To view snapshot information for an MTree or a Tenant Unit:
# snapshot list mtree mtree-path | tenant-unit tenant-unit
Supported platforms
Cloud Tier is supported on physical platforms that have the necessary memory, CPU,
and storage connectivity to accommodate another storage tier.
DD Cloud Tier is supported on these systems (each must have expanded memory and
an additional SAS controller):
l DD990
l DD4200
l DD4500
l DD6800
l DD7200
l DD9300
l DD9500
l DD9800
l Data Domain Virtual Edition (DD VE): selected configurations
l Data Domain High Availability (HA): both nodes must be running DD OS 6.0 (or
higher), and they must be HA-enabled.
Note
DD Cloud Tier is not supported on any system not listed and is not supported on any
system with the Extended Retention feature enabled.
Note
The Cloud Tier feature may consume all available bandwidth in a shared WAN link,
especially in a low bandwidth configuration (1 Gbps), and this may impact other
applications sharing the WAN link. If there are shared applications on the WAN, the
use of QoS or other network limiting is recommended to avoid congestion and ensure
consistent performance over time.
If bandwidth is constrained, the rate of data movement will be slow and you will not be
able to move as much data to the cloud. It is best to use a dedicated link for data going
to the Cloud Tier.
Note
Do not send traffic over onboard management network interface controllers (ethMx
interfaces).
9. Click OK.
10. After the file system is disabled, select Enable Cloud Tier.
To enable the cloud tier, you must meet the storage requirement for the
licensed capacity. Configure the cloud tier of the file system. Click Next.
A cloud file system requires a local store for a local copy of the cloud metadata.
Note
Several public cloud providers use IP ranges for their endpoint and
authentication addresses. In this situation, the IP ranges used by the provider
need to be unblocked to accommodate potential IP changes.
n Remote cloud provider destination IP and access authentication IP address
ranges must be allowed through the firewall.
n For ECS private cloud, local ECS authentication and web storage (S3) access
IP ranges and ports 9020 (HTTP) and 9021 (HTTPS) must be allowed through
local firewalls.
Note
ECS private cloud load balancer IP access and port rules must also be
configured.
l Proxy settings
n If there are any existing proxy settings that cause data above a certain size to
be rejected, those settings must be changed to allow object sizes up to 4.5 MB.
n If customer traffic is being routed through a proxy, the self-signed/CA-signed
proxy certificate must be imported. See "Importing CA certificates" for details.
l OpenSSL cipher suites
n Ciphers - ECDHE-RSA-AES256-SHA384, AES256-GCM-SHA384
Note
Default communication with all cloud providers is initiated with strong cipher.
n TLS Version: 1.2
l Supported protocols
n HTTP & HTTPS
Note
Default communication with all public cloud providers occurs on secure HTTP
(HTTPS), but you can overwrite the default setting to use HTTP.
Importing CA certificates
Before you can add cloud units for EMC Elastic Cloud Storage (ECS), Virtustream
Storage Cloud, Amazon Web Services S3 (AWS), and Azure cloud, you must import
CA certificates.
Before you begin
For AWS, Virtustream, and Azure public cloud providers, root CA certificates can be
downloaded from https://www.digicert.com/digicert-root-certificates.htm.
l For an AWS cloud provider, download the Baltimore CyberTrust Root certificate.
l For a Virtustream cloud provider, download the DigiCert High Assurance EV Root
CA certificate.
l For ECS, the root certificate authority will vary by customer.
If ECS is configured with a load balancer and an HTTPS endpoint is used as an
endpoint in the configuration, be sure to import the self-signed or CA-signed root
certificate. Contact your load balancer provider for details.
l For an Azure cloud provider, download the Baltimore CyberTrust Root certificate.
If your downloaded certificate has a .crt extension, it will likely need to be converted
to a PEM-encoded certificate. If so, use OpenSSL to convert the file from .crt format
to .pem (for example, openssl x509 -inform der -in
BaltimoreCyberTrustRoot.crt -out BaltimoreCyberTrustRoot.pem).
Procedure
1. Select Data Management > File System > Cloud Units.
2. In the tool bar, click Manage Certificates.
The Manage Certificates for Cloud dialog is displayed.
3. Click Add.
The Add CA Certificate for Cloud dialog is displayed:
2. Click Add.
The Add Cloud Unit dialog is displayed.
3. Enter a name for this cloud unit. Only alphanumeric characters are allowed.
The remaining fields in the Add Cloud Unit dialog pertain to the cloud provider
account.
4. For Cloud provider, select EMC Elastic Cloud Storage (ECS) from the drop-
down list.
5. Enter the provider Access key as password text.
6. Enter the provider Secret key as password text.
7. Enter the provider Endpoint in this format: http://<ip/hostname>:<port>. If
you are using a secure endpoint, use https instead.
By default, ECS runs the S3 protocol on port 9020 for HTTP and 9021 for
HTTPS. When a load balancer is used, these ports are sometimes remapped to
80 for HTTP and 443 for HTTPS, respectively. Check with your network
administrator for the proper ports.
8. If an HTTP proxy server is required to get around a firewall for this provider,
click Configure for HTTP Proxy Server.
Enter the proxy hostname, port, user, and password.
9. Click Add.
The File System main window now displays summary information for the new
cloud unit as well a control for enabling and disabling the cloud unit.
l s3.amazonaws.com
l s3-us-west-1.amazonaws.com
l s3-us-west-2.amazonaws.com
l s3-eu-west-1.amazonaws.com
l s3-ap-northeast-1.amazonaws.com
l s3-ap-southeast-1.amazonaws.com
l s3-ap-southeast-2.amazonaws.com
l s3-sa-east-1.amazonaws.com
l ap-south-1
l ap-northeast-2
l eu-central-1
Note
The AWS user credentials must have permissions to create and delete buckets and to
add, modify, and delete files within the buckets they create. S3FullAccess is preferred,
but these are the minimum requirements:
l CreateBucket
l ListBucket
l DeleteBucket
l ListAllMyBuckets
l GetObject
l PutObject
l DeleteObject
Procedure
1. Select Data Management > File System > Cloud Units.
2. Click Add.
The Add Cloud Unit dialog is displayed.
3. Enter a name for this cloud unit. Only alphanumeric characters are allowed.
The remaining fields in the Add Cloud Unit dialog pertain to the cloud provider
account.
4. For Cloud provider, select Amazon Web Services S3 from the drop-down list.
5. Select the appropriate Storage region from the drop-down list.
6. Enter the provider Access key as password text.
7. Enter the provider Secret key as password text.
8. Ensure that port 443 (HTTPS) is not blocked in firewalls. Communication with
the AWS cloud provider occurs on port 443.
9. If an HTTP proxy server is required to get around a firewall for this provider,
click Configure for HTTP Proxy Server.
5. For Primary key, enter the new provider primary key as password text.
6. For Secondary key, enter the new provider secondary key as password text.
7. If an HTTP proxy server is required to get around a firewall for this provider,
click Configure for HTTP Proxy Server.
8. Click OK.
Procedure
1. Use the following CLI command to identify files in the cloud unit.
# filesys report generate file-location
Wait for cleaning to complete. The cleaning may take time depending on how
much data is present in the cloud unit.
4. Disable the file system.
5. Use the following CLI command to delete the cloud unit.
# cloud unit del unit-name
Data movement
Data is moved from the active tier to the cloud tier as specified by your individual data
movement policy. The policy is set on a per-MTree basis. Data movement can be
initiated manually or automatically using a schedule.
Procedure
1. Select Data Management > MTree.
2. In the top panel, select the MTree to which you want to add a data movement
policy.
3. Click the Summary tab.
5. For File Age in Days, set the file age threshold (Older than) and optionally, the
age range (Younger than).
Note
The minimum number of days for Older than is 14. For nonintegrated backup
applications, files moved to the cloud tier cannot be accessed directly and need
to be recalled to the active tier before you can access them. So, choose the age
threshold value as appropriate to minimize or avoid the need to access a file
moved to the cloud tier.
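To see how the Older than threshold and the optional Younger than bound interact, the following Python sketch models the selection window; it is a hypothetical illustration of the policy logic, not DD OS internals.

from datetime import datetime, timedelta
from typing import Optional

def eligible_for_cloud(mtime: datetime, older_than_days: int,
                       younger_than_days: Optional[int] = None) -> bool:
    """A file qualifies when its age falls inside the policy window."""
    age = datetime.now() - mtime
    if age < timedelta(days=older_than_days):
        return False  # too young to move (minimum Older than is 14 days)
    if younger_than_days is not None and age > timedelta(days=younger_than_days):
        return False  # older than the optional upper bound of the age range
    return True

# A 30-day-old file with Older than = 14 and no upper bound qualifies:
print(eligible_for_cloud(datetime.now() - timedelta(days=30), 14))  # True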
Note
The throttle is for adjusting resources for internal Data Domain processes; it
does not affect network bandwidth.
Note
If a cloud unit is inaccessible when cloud tier data movement runs, the cloud
unit is skipped in that run. Data movement on that cloud unit occurs in the next
run if the cloud unit becomes available. The data movement schedule
determines the duration between two runs. If the cloud unit becomes available
and you cannot wait for the next scheduled run, you can start data movement
manually.
Procedure
1. Select Data Management > File System > Summary.
Note
The Recall link is available only if a cloud unit is created and has data.
3. In the Recall File from Cloud dialog, enter the exact file name (no wildcards) and
full path of the file to be recalled, for example: /data/col1/mt11/
file1.txt. Click Recall.
The dialog closes and the system indicates that there are one to four files being
recalled. If the number of files is less than four, you can access the Recall File
from Cloud dialog again to initiate another recall. Once the limit of four
concurrent files is reached, you can no longer access the Recall File from Cloud
dialog.
4. To check the status of the recall, do one of the following:
l In the Cloud Tier section of the Space Usage panel, click Details.
l Expand the File System status panel at the bottom of the screen and click
Details.
The Cloud File Recall Details dialog is displayed, showing the file path, cloud
provider, recall progress, and amount of data transferred. If there are
unrecoverable errors during the recall, an error message is displayed. Hover
your cursor over the error message to display a tool tip with more details and
possible corrective actions.
Results
Once the file has been recalled to the active tier, you can restore the data.
Note
For nonintegrated applications, once a file has been recalled from the cloud tier to the
active tier, a minimum of 14 days must elapse before the file is eligible for data
movement. After 14 days, normal data movement processing will occur for the file.
This restriction does not apply to integrated applications.
Procedure
1. Check the location of the file using:
filesys report generate file-location [path {<path-name> |
all}] [output-file <filename>]
The pathname can be a file or directory; if it is a directory, all files in the
directory are listed.
Filename Location
-------- --------
/data/col1/mt11/file1.txt Cloud Unit 1
If the status shows that the recall isn't running for a given path, the recall may
have finished, or it may have failed.
Results
Once the file has been recalled to the active tier, you can restore the data.
Note
For nonintegrated applications, once a file has been recalled from the cloud tier to the
active tier, a minimum of 14 days must elapse before the file is eligible for data
movement. After 14 days, normal data movement processing will occur for the file.
This restriction does not apply to integrated applications.
Note
If the license is not installed, use the elicense update command to install
the license. Enter the command and paste the contents of the license file
after this prompt. After pasting, ensure there is a carriage return, then press
Control-D to save. You are prompted to replace licenses, and after
answering yes, the licenses are applied and displayed.
# elicense update
Enter the content of license file and then press Control-D,
or press Control-C to cancel.
2. Install certificates.
Before you can create a cloud profile, you must install the associated
certificates.
For AWS, Virtustream, and Azure public cloud providers, root CA certificates
can be downloaded from https://www.digicert.com/digicert-root-
certificates.htm.
l For an AWS cloud provider, download the Baltimore CyberTrust Root
certificate.
l For a Virtustream cloud provider, download the DigiCert High Assurance EV
Root CA certificate.
l For an Azure cloud provider, download the Baltimore CyberTrust Root
certificate.
l For ECS, the root certificate authority will vary by customer. Contact your
load balancer provider for details.
Downloaded certificate files have a .crt extension. Use openssl on any Linux or
Unix system where it is installed to convert the file from .crt format to .pem.
3. To configure the Data Domain system for data-movement to the cloud, you
must first enable the cloud feature and set the system passphrase if it has not
already been set.
# cloud enable
Cloud feature requires that passphrase be set on the system.
Enter new passphrase:
Re-enter new passphrase:
Passphrases matched.
The passphrase is set.
Encryption is recommended on the cloud tier.
Do you want to enable encryption? (yes|no) [yes]:
Encryption feature is enabled on the cloud tier.
Cloud feature is enabled.
4. Configure the cloud profile using the cloud provider credentials. The prompts
and variables vary by provider.
# cloud profile add <profilename>
Note
For security reasons, this command does not display the access/secret keys
you enter.
At the end of each profile addition you are asked if you want to set up a proxy.
If you do, these values are required: proxy hostname, proxy port, proxy
username, and proxy password.
5. Verify the cloud profile configuration:
# cloud profile show
Use the cloud unit list command to list the cloud units.
11. Configure the file migration policy for this MTree. You can specify multiple
MTrees in this command. The policy can be based on the age threshold or the
range.
a. To configure the age-threshold (migrating files older than the specified age
to cloud):
# data-movement policy set age-threshold age_in_days to-tier
cloud cloud-unit unitname mtrees mtreename
b. To configure the age-range (migrating only those files that are in the
specified age-range):
# data-movement policy set age-range min-age age_in_days max-
age age_in_days to-tier cloud cloud-unit unitname mtrees
mtreename
12. Export the file system, and from the client, mount the file system and ingest
data into the active tier. Change the modification date on the ingested files so
that they qualify for data migration. (Set the date to older than the age-threshold
value specified when configuring the data-movement policy; one way to do this is
shown in the sketch after this procedure.)
13. Initiate file migration of the aged files. Again, you can specify multiple MTrees
with this command.
# data-movement start mtrees mtreename
14. Verify that file migration worked and the files are now in the cloud tier:
# filesys report generate file-location path all
15. Once you have migrated a file to the cloud tier, you cannot directly read from
the file (attempting to do so results in an error). The file can only be recalled
back to the active tier. To recall a file to the active tier:
# data-movement recall path pathname
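One way to age test files from an NFS client, per step 12 above, is to push their modification time back past the policy threshold. A minimal Python sketch follows; the mount path is hypothetical.

import os
import time

path = "/mnt/ddr/data/col1/mt1/testfile"  # hypothetical NFS mount path
thirty_days = 30 * 24 * 3600
aged = time.time() - thirty_days  # older than a 14-day age-threshold

st = os.stat(path)
os.utime(path, (st.st_atime, aged))  # atime unchanged; mtime pushed back 30 days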
Note
Cloud encryption is allowed only through the Data Domain Embedded Key
Manager. The RSA Key Manager is not supported for cloud encryption.
8. Click OK.
9. Use the DD Encryption Keys panel to configure encryption keys.
Note
This process is designed for emergency situations only and will involve significant time
and effort from the Dell EMC engineering staff.
Note
If the source system is running DD OS 5.6 or 5.7 and replicating into a Cloud Tier
enabled system using MTree replication, the source system must be upgraded to a
release that can replicate to a Cloud Tier enabled system. Please see the DD OS
Release Notes system requirements.
Note
Files in the Cloud Tier cannot be used as base files for virtual synthetic operations.
The incremental forever or synthetic full backups need to ensure that the files remain
in the Active Tier if they will be used in virtual synthesis of new backups.
l The Space Usage Tab displays space usage over time, in MiB. You can select
a duration (one week, one month, three months, one year, or All). The data
is presented (color-coded) as pre-compression used (blue), post-
compression used (red), and the compression factor (green).
l The Consumption Tab displays the amount of post-compression storage
used and the compression ratio over time, which enables you to analyze
consumption trends. You can select a duration (one week, one month, three
months, one year, or All). The data is presented (color-coded) as capacity
(blue), post-compression used (red), compression factor (green), cleaning
(orange) and data movement (violet).
l The Daily Written Tab displays the amount of data written per day. You can
select a duration (one week, one month, three months, one year, or All). The
data is presented (color-coded) as pre-compression written (blue), post-
compression used (red), and the total compression factor (green).
Note
As of DD OS 5.5.1, only one retention unit per retention tier is allowed. However,
systems set up prior to DD OS 5.5.1 may continue to have more than one retention
unit, but you will not be allowed to add any more retention units to them.
Transparency of Operation
DD Extended Retention-enabled DD systems support existing backup applications
using simultaneous data access methods through NFS and CIFS file service protocols
over Ethernet, through DD VTL for open systems and IBMi, or as a disk-based target
using application-specific interfaces, such as DD Boost (for use with EMC Avamar,
EMC NetWorker, EMC GreenPlum, Symantec OpenStorage, and Oracle RMAN).
DD Extended Retention extends the DD architecture with automatic transparent data
movement from the active tier to the retention tier. All of the data in the two tiers is
accessible, although there might be a slight delay on initial access to data in the
retention tier. The namespace of the system is global and is not affected by data
movement. No partitioning of the file system is necessary to take advantage of the
two-tiered file system.
Data Movement Policy
The Data Movement Policy, which you can customize, is the policy by which files are
moved from the active to the retention tier. It is based on the time when the file was
last modified. You can set a different policy for each different subset of data, because
the policy can be set on a per-MTree basis. Files that may be updated need a policy
different from those that never change.
Deduplication within Retention Unit
For fault isolation purposes, deduplication occurs entirely within the retention unit for
DD Extended Retention-enabled DD systems. There is no cross-deduplication between
active and retention tiers, or between different retention units (if applicable).
Note
For both active and retention tiers, DD OS 5.2 and later releases support ES20 and
ES30 shelves, and DD OS 5.7 and later supports DS60 shelves on certain models.
Different Data Domain shelf types cannot be mixed in the same shelf set, and the shelf
sets must be balanced according to the configuration rules specified in the EMC ES30
Expansion Shelf Hardware Guide or the EMC DS60 Expansion Shelf Hardware Guide. With DD
Extended Retention, you can attach significantly more storage to the same controller.
For example, you can attach up to a maximum of 56 ES30 shelves on a DD990 with
DD Extended Retention. The active tier must include storage consisting of at least one
shelf. For the minimum and maximum shelf configuration for the Data Domain
controller models, refer to the expansion shelf hardware guides for ES30 and DS60.
Data Protection
On a DD Extended Retention-enabled DD system, data is protected with built-in fault
isolation features, disaster recovery capability, and DIA (Data Invulnerability
Architecture). DIA checks files when they are moved from the active to the retention
tier. After data is copied into the retention tier, the container and file system
structures are read back and verified. The location of the file is updated, and the space
on the active tier is reclaimed after the file is verified to have been correctly written to
the retention tier.
When a retention unit is filled up, namespace information and system files are copied
into it, so the data in the retention unit may be recovered even when other parts of
the system are lost.
Note
Sanitization and some forms of Replication are not supported for DD Extended
Retention-enabled DD systems.
Space Reclamation
To reclaim space that has been freed up by data moved to the retention tier, you can
use Space Reclamation (as of DD OS 5.3), which runs in the background as a low-
priority activity. It suspends itself when there are higher priority activities, such as
data movement and cleaning.
Encryption of Data at Rest
As of DD OS 5.5.1, you can use the Encryption of Data at Rest feature on DD
Extended Retention-enabled DD systems, if you have an encryption license.
Encryption is not enabled by default.
This is an extension of the encryption capability already available, prior to DD OS 5.5.1,
for systems not using DD Extended Retention.
Refer to the Managing Encryption of Data at Rest chapter in this guide for complete
instructions on setting up and using the encryption feature.
Note
For a list of applications supported with DD Boost, see the DD Boost Compatibility List
on the EMC Online Support site.
When you are using DD Extended Retention, data first lands in the active tier. Files are
moved in their entirety into the retention unit in the retention tier, as specified by your
Data Movement Policy. All files appear in the same namespace. There is no need to
partition data, and you can continue to expand the file system as desired.
All data is visible to all users, and all file system metadata is present in the active tier.
The trade-off in moving data from the active to the retention tier is larger capacity
versus slightly slower access time if the unit to be accessed is not currently ready for
access.
Note
l Both the source and destination systems must be configured as DD systems with
DD Extended Retention enabled.
l The file system must not be enabled on the destination until the retention unit has
been added to it, and replication has been configured.
Note
For DD Boost 2.3 or later, you can specify how multiple copies are to be made and
managed within the backup application.
DD4200
l 128 GB of RAM
l 1 - NVRAM IO module (4 GB)
l 4 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 6 - 1/10 GbE NIC cards for external connectivity
l 0 to 6 - Dual-Port FC HBA cards for external connectivity
l 0 to 6 - Combined NIC and FC cards, not to exceed four of any one specific IO
module
l 1 to 16 - ES30 SAS shelves (2 or 3 TB disks), not to exceed the system maximum
usable capacity of 192 TB. ES30 SATA shelves (1, 2, or 3 TB disks) are supported
for system controller upgrades.
If DD Extended Retention is enabled on a DD4200, the maximum usable storage
capacity of the active tier is 192 TB. The retention tier can have a maximum usable
capacity of 192 TB. The active and retention tiers have a total usable storage capacity
of 384 TB. External connectivity is supported for DD Extended Retention
configurations up to 16 shelves.
DD4500
l 192 GB of RAM
l 1 - NVRAM IO module (4 GB)
l 4 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 6 - 1/10 GbE NIC IO cards for external connectivity
l 0 to 6 - Dual-Port FC HBA cards for external connectivity
l 0 to 5 - Combined NIC and FC cards, not to exceed four of any one specific IO
module
l 1 to 20 - ES30 SAS shelves (2 or 3 TB disks), not to exceed the system maximum
usable capacity of 285 TB. ES30 SATA shelves (1 TB, 2 TB, or 3 TB) are supported
for system controller upgrades.
If DD Extended Retention is enabled on a DD4500, the maximum usable storage
capacity of the active tier is 285 TB. The retention tier can have a maximum usable
capacity of 285 TB. The active and retention tiers have a total usable storage capacity
of 570 TB. External connectivity is supported for DD Extended Retention
configurations up to 24 shelves.
DD6800
l 192 GB of RAM
l 1 - NVRAM IO module (8 GB)
l 3 - Quad-port SAS IO modules
l 1 - 1 GbE port on the motherboard
l 0 to 4 - 1/10 GbE NIC cards for external connectivity
l 0 to 4 - Dual-Port FC HBA cards for external connectivity
l 0 to 4 - Combined NIC and FC cards
l Shelf combinations are documented in the installation and setup guide for your DD
system, and the expansion shelf hardware guides for your expansion shelves.
DD9500
l Shelf combinations are documented in the installation and setup guide for your DD
system, and the expansion shelf hardware guides for your expansion shelves.
If DD Extended Retention is enabled on a DD9500, the maximum usable storage
capacity of the active tier is 864 TB. The retention tier can have a maximum usable
capacity of 864 TB. The active and retention tiers have a total usable storage capacity
of 1.7 PB. External connectivity is supported for DD Extended Retention
configurations up to 56 shelves.
DD9800
l 768 GB of RAM
l 1 - NVRAM IO module (8 GB)
l 4 - Quad-port SAS IO modules
l 1 - Quad 1 GbE ports on the motherboard
l 0 to 4 - 10 GbE NIC cards for external connectivity
l 0 to 4 - Dual-Port 16 Gbps FC HBA cards for external connectivity
l Shelf combinations are documented in the installation and setup guide for your DD
system, and the expansion shelf hardware guides for your expansion shelves.
If DD Extended Retention is enabled on a DD9800, the maximum usable storage
capacity of the active tier is 1008 TB. The retention tier can have a maximum usable
capacity of 1008 TB. The active and retention tiers have a total usable storage
capacity of 2.0 PB. External connectivity is supported for DD Extended Retention
configurations up to 56 shelves.
l Shelf Capacity Licenses, which display shelf capacity (in TiB), the shelf model
(such as ES30), and the shelf's storage tier (active or retention).
To delete a license, select the license in the Licenses list, and click Delete Selected
Licenses. If prompted to confirm, read the warning, and click OK to continue.
Note
n See the EMC Data Domain Expansion Shelf Hardware Guide for your shelf model
(ES20, ES30, or DS60).
The only command not available when you use the DD System Manager is archive
report.
CLI Equivalent
You can also verify that the Extended Retention license has been installed at
the CLI:
# license show
## License Key Feature
-- ------------------- -----------
1 AAAA-BBBB-CCCC-DDDD Replication
2 EEEE-FFFF-GGGG-HHHH VTL
-- ------------------- -----------
Create an archive unit, and add it to the file system. You are asked to specify
the number of enclosures in the archive unit:
# filesys archive unit add
Verify that the archive unit is created and added to the file system:
# filesys archive unit list all
l Clean Status shows the time the last cleaning operation finished, or the current
cleaning status if the cleaning operation is currently running. If cleaning can be
run, it shows a Start Cleaning button. When cleaning is running, the Start
Cleaning button changes to a Stop Cleaning button.
l Data Movement Status shows the time the last data movement finished. If data
movement can be run, it shows a Start button. When data movement is running,
the Start button changes to a Stop button.
l Space Reclamation Status shows the amount of space reclaimed after deleting
data in the retention tier. If space reclamation can be run, it shows a Start button.
If it is already running, it shows Stop and Suspend buttons. If it was running
previously and was suspended, it shows Stop and Resume buttons. There is also a
More Information button that will display detailed information about starting and
ending times, completion percentage, units reclaimed, space freed, etc.
l Selecting More Tasks > Destroy lets you delete all data in the file system,
including virtual tapes. This can be done only by a system administrator.
l Selecting More Tasks > Fast Copy lets you clone files and MTrees of a source
directory to a destination directory. Note that for DD Extended Retention-enabled
systems, fast copy will not move data between the active and retention tiers.
l Selecting More Tasks > Expand Capacity lets you expand the active or retention
tier.
4. Select the size to expand the retention unit, then click Configure.
5. After configuration completes, you are returned to the Expand File System
Capacity dialog. Click Finish to complete the retention tier expansion.
CLI Equivalent
To enable space reclamation:
# archive space-reclamation start
Previous Cycle:
---------------
Start time : Feb 21 2014 14:17
End time : Feb 21 2014 14:49
Effective run time : 0 days, 00:32.
Percent completed : 00 % (was stopped by user)
Units reclaimed : None
The system displays a warning telling you that you cannot revert the file system
to its original size after this operation.
5. Click Expand to expand the file system.
Note
You can specify different File Age Thresholds for each defined MTree. An MTree is a
subtree within the namespace that is a logical set of data for management purposes.
For example, you might place financial data, emails, and engineering data in separate
MTrees.
To take advantage of the space reclamation feature, introduced in DD OS 5.3, it is
recommended that you schedule data movement and file system cleaning on a bi-
weekly (every 14 days) basis. By default, cleaning is always run after data movement
completes. It is highly recommended that you do not change this default.
Avoid these common sizing errors:
l Setting a Data Movement Policy that is overly aggressive; data will be moved too
soon.
l Setting a Data Movement Policy that is too conservative: after the active tier fills
up, you will not be able to write data to the system.
l Having an undersized active tier and then setting an overly aggressive Data
Movement Policy to compensate.
Be aware of the following caveats related to snapshots and file system cleaning:
l Files in snapshots are not cleaned, even after they have been moved to the
retention tier. Space cannot be reclaimed until the snapshots have been deleted.
l It is recommended that you set the File Age Threshold for snapshots to the
minimum of 14 days.
Here are two examples of how to set up a Data Movement Policy.
l You could segregate data with different degrees of change into two different
MTrees and set the File Age Threshold to move data soon after the data stabilizes.
Create MTree A for daily incremental backups and MTree B for weekly fulls. Set
the File Age Threshold for MTree A so that its data is never moved, but set the File
Age Threshold for MTree B to 14 days (the minimum threshold).
l For data that cannot be separated into different MTrees, you could do the
following. Suppose the retention period of daily incremental backups is eight
weeks, and the retention period of weekly fulls is three years. In this case, it would
be best to set the File Age Threshold to nine weeks. If it were set lower, you would
be moving daily incremental data that was actually soon to be deleted.
8. Back in the Configuration tab, you can specify age threshold values for
individual MTrees by using the File Age Threshold per MTree link at the lower
right corner.
CLI Equivalent
To set the age threshold:
# archive data-movement policy set age-threshold {days|none}
mtrees mtree-list
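For example, the following sketch implements the MTree A/MTree B policy described earlier (the MTree names are hypothetical): data in MTree A is never moved, and data in MTree B is moved after the 14-day minimum:
# archive data-movement policy set age-threshold none mtrees /data/col1/mtree-a
# archive data-movement policy set age-threshold 14 mtrees /data/col1/mtree-b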
Note
Consult your contracted service provider, and refer to the instructions in the
appropriate System Controller Upgrade Guide.
Note
If you need to recover only a portion of a system, such as one retention unit, from
a collection replica, contact EMC Support.
l MTree replication: See the MTree Replication section in the Working with DD Replicator chapter.
l DD Boost managed file replication: See the EMC Data Domain Boost for OpenStorage Administration Guide.
Note
Files that are written to shares or exports that are not committed to be retained (even
if DD Retention Lock Governance or Compliance is enabled on the MTree containing
the files) can be modified or deleted at any time.
Retention locking prevents any modification or deletion of files under retention from
occurring directly from CIFS shares or NFS exports during the retention period
specified by a client-side atime update command. Some archive applications and
backup applications can issue this command when appropriately configured.
Applications or utilities that do not issue this command cannot lock files using DD
Retention Lock.
Retention-locked files are always protected from modification and premature deletion,
even if retention locking is subsequently disabled or if the retention-lock license is no
longer valid.
You cannot rename or delete non-empty folders or directories within an MTree that is
retention-lock enabled. However, you can rename or delete empty folders or
directories and create new ones.
The retention period of a retention-locked file can be extended (but not reduced) by updating the file's atime.
For both DD Retention Lock Governance and Compliance, once the retention period
for a file expires, the file can be deleted using a client-side command, script, or
application. However, the file cannot be modified even after the retention period for
the file expires. The Data Domain system never automatically deletes a file when its
retention period expires.
4. Optionally, delete files with expired retention periods using client-side commands.
Note
License keys are case-insensitive. Include the hyphens when typing keys.
d. Click Add.
2. Select an MTree for retention locking.
a. Select Data Management > MTree.
b. Select the MTree you want to use for retention locking. You can also create
an empty MTree and add files to it later.
3. Click the MTree Summary tab to display information for the selected MTree.
4. Scroll down to Retention Lock area and click Edit to the right of Retention
Lock.
5. Enable DD Retention Lock Governance on the MTree and change the default
minimum and maximum retention lock periods for the MTree, if required.
Perform the following actions in the Modify Retention Lock dialog box:
Note
To check retention lock configuration settings for any MTree, select the MTree
in the Navigation Panel, then click the Summary tab.
Note
License keys are case-insensitive. Include the hyphens when typing keys.
5. To change the default minimum and maximum retention lock periods for a
compliance-enabled MTree, type the following commands with security officer
authorization.
mtree retention-lock set min-retention-period period mtree
mtree-path
mtree retention-lock set max-retention-period period mtree
mtree-path
Note
The retention period is specified in the format [number] [unit]. For example: 1
min, 1 hr, 1 day, 1 mo, or 1 year. Specifying a minimum retention period of less
than 12 hours, or a maximum retention period longer than 70 years, results in an
error.
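For example, the following sketch (the MTree path is hypothetical) sets a 12-hour minimum and a 5-year maximum:
# mtree retention-lock set min-retention-period 12hr mtree /data/col1/compliance1
# mtree retention-lock set max-retention-period 5year mtree /data/col1/compliance1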
Note
Some client machines using NFS, but running a legacy OS, cannot set retention time later than 2038. The NFS protocol doesn't impose the 2038 limit and allows specifying times until 2106. Further, DD OS doesn't impose the 2038 limit.
Client-side commands are used to manage the retention locking of individual files.
These commands apply to all retention-lock-capable Data Domain systems and must
be issued in addition to the setup and configuration of DD Retention Lock on the Data
Domain system.
Required Tools for Windows Clients
You need the touch.exe command to perform retention-locking from a Windows-
based client.
To obtain this command, download and install utilities for Linux/Unix-based applications according to your Windows version. These utilities are EMC's recommendations; use the version appropriate for your environment.
l For Windows 8, Windows 7, Windows Server 2008, Windows Vista, Windows
Server 2003, and Windows XP:
http://sourceforge.net/projects/unxutils/files/latest
l For Windows Server 2008, Windows Vista Enterprise, Windows Vista Enterprise
64-bit edition, Windows Vista SP1, Windows Vista Ultimate, and Windows Vista
Ultimate 64-bit edition:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=23754
l For Windows Server 2003 SP1 and Windows Server 2003 R2:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=20983
Note
The touch command for Windows may have a different format than the Linux
examples in this chapter.
Follow the installation instructions provided and set the search path as needed on the
client machine.
Note
The commands listed in this section are to be used only on the client. They cannot be
issued through the DD System Manager or CLI. Command syntax may vary slightly,
depending on the utility you are using.
The topics that follow describe how to manage client-side retention lock file control.
Note
Some client machines using NFS, but running a legacy OS, cannot set retention time later than 2038. The NFS protocol doesn't impose the 2038 limit and allows specifying times until 2106. Further, DD OS doesn't impose the 2038 limit.
Note
A file must be completely written to the Data Domain system before it is committed to
be a retention-locked file.
For example, the following command:
ClientOS# touch -a -t 201412312230 SavedData.dat
will lock file SavedData.dat until 10:30 p.m. December 31, 2014.
For example, changing the atime from 201412312230 to 202012121230 using the
following command:
ClientOS# touch -a -t 202012121230 SavedData.dat
will cause the file to be locked until 12:30 p.m. December 12, 2020.
Note
Some client machines using NFS, but running a very old OS, cannot set retention time later than 2038. The NFS protocol doesn't impose the 2038 limit and allows specifying times until 2106. Further, DD OS doesn't impose the 2038 limit.
If the atime of SavedData.dat is 202012121230 (12:30 p.m. December 12, 2020) and
the touch command specifies an earlier atime, 202012111230 (12:30 p.m. December
11, 2020), the touch command fails, indicating that SavedData.dat is retention-
locked.
Note
If the retention period of the retention-locked file has not expired, the delete
operation results in a permission-denied error.
Privileged delete
For DD Retention Lock Governance (only), you can delete retention-locked files using this two-step process.
Procedure
1. Use the mtree retention-lock revert path command to revert the retention-locked file.
2. Delete the file on the client system using the rm filename command.
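A sketch of the two steps, using a hypothetical MTree path and file name; the first command runs on the Data Domain system, the second on the client:
# mtree retention-lock revert /data/col1/mtree1/SavedData.dat
ClientOS# rm SavedData.dat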
Note
User access permissions for a retention-locked file are updated using the Linux
command line tool chmod.
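For example, to make a hypothetical retention-locked file read-only for all users, run the following on the client:
ClientOS# chmod 444 SavedData.dat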
mtime
mtime is the last-modified time of a file. It changes only when the contents of the file
change. So, the mtime of a retention-locked file cannot change.
Replication
Collection replication, MTree replication, and directory replication replicate the locked
or unlocked state of files.
Files that are governance retention locked on the source are governance retention locked on the destination and have the same level of protection. For replication, the source system must have a DD Retention Lock Governance license installed; a license is not required on the destination system.
Replication is supported between systems that are:
l Running the same major DD OS version (for example, both systems are running DD
OS 5.5.x.x).
l Running DD OS versions within the next two consecutive higher or lower major
releases (for example, 5.3.x.x to 5.5.x.x or 5.5.x.x to 5.3.x.x). Cross-release
replication is supported only for directory and MTree replication.
Note
Be aware that:
l Collection replication and MTree replication replicate the minimum and maximum
retention periods configured on MTrees to the destination system.
l Directory replication does not replicate the minimum and maximum retention
periods to the destination system.
The procedure for configuring and using collection, MTree, and directory replication is
the same as for Data Domain systems that do not have a DD Retention Lock
Governance license.
Replication Resync
The replication resync destination command tries to bring the destination into
sync with the source when the MTree or directory replication context is broken
between destination and source systems. This command cannot be used with
collection replication. Note that:
l If files are migrated to the cloud tier before the context is broken, the MTree
replication resync overwrites all the data on the destination, so you will need to
migrate the files to the cloud tier again.
l If the destination directory has DD Retention Lock enabled, but the source
directory does not have DD Retention Lock enabled, then a resync of a directory
replication will fail.
l With MTree replication, resync will fail if the source MTree does not have retention lock enabled and the destination MTree has retention lock enabled.
l With MTree replication, resync will fail if the source and destination MTrees are retention lock enabled but the propagate retention lock option is set to FALSE.
Fastcopy
When the filesys fastcopy [retention-lock] source src destination
dest command is run on a system with a DD Retention Lock Governance enabled
MTree, the command preserves the retention lock attribute during the fastcopy
operation.
Note
If the destination MTree is not retention lock enabled, the retention-lock file attribute
is not preserved.
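A sketch of the command with hypothetical source and destination paths:
# filesys fastcopy retention-lock source /data/col1/mtree1 destination /data/col1/mtree2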
Filesys destroy
Effects of the filesys destroy command when it is run on a system with a DD
Retention Lock Governance enabled MTree.
l All data is destroyed, including retention-locked data.
l All filesys options are returned to their defaults. This means that retention
locking is disabled and the minimum and maximum retention periods are set back
to their default values on the newly created file system.
MTree delete
When the mtree delete mtree-path command attempts to delete a DD Retention
Lock Governance enabled (or previously enabled) MTree that currently contains data,
the command returns an error.
Replication
An MTree enabled with DD Retention Lock Compliance can be replicated via MTree
and collection replication only. Directory replication is not supported.
MTree and collection replication replicate the locked or unlocked state of files. Files
that are compliance retention locked on the source are compliance retention locked on
the destination and have the same level of protection. Minimum and maximum
retention periods configured on MTrees are replicated to the destination system.
To perform collection replication, the same security officer user must be present on
both the source and destination systems before starting replication to the destination
system and afterward for the lifetime of the source/replica pair.
Replication Resync
The replication resync destination command can be used with MTree
replication, but not with collection replication.
l If the destination MTree contains retention-locked files that do not exist on the
source, then resync will fail.
l Both source and destination MTrees must be enabled for DD Retention Lock
Compliance, or resync will fail.
Replication procedures
The topics in this section describe MTree and collection replication procedures
supported for DD Retention Lock Compliance.
Note
For full descriptions of the commands referenced in the following topics, see the EMC
Data Domain Operating System Command Reference Guide.
Note
License keys are case-insensitive. Include the hyphens when typing keys.
Note
License keys are case-insensitive. Include the hyphens when typing keys.
Note
License keys are case-insensitive. Include the hyphens when typing keys.
d. Click Add.
5. Break the current MTree context on the replication pair.
replication break mtree://destination-system-name/data/
col1/mtree-name
6. Create the new replication context.
replication add source mtree://source-system-name/data/
col1/mtree-name destination mtree://destination-system-
name/data/col1/mtree-name
7. Perform the following steps on the source system only.
8. Select an MTree for retention locking.
Click the Data Management > MTree tab, then the checkbox for the MTree
you want to use for retention locking. (You can also create an empty MTree and
add files to it later.)
9. Click the MTree Summary tab to display information for the selected MTree.
10. Lock files in the compliance-enabled MTree.
11. Ensure that both source and destination (replica) MTrees are the same.
replication resync mtree://destination-system-name/data/
col1/mtree-name
12. Check the progress of resync.
replication watch mtree://destination-system-name/data/
col1/mtree-name
Note
For collection replication the same security officer account must be used on both the
source and destination systems.
Procedure
1. Until instructed to do differently, perform the following steps on the source
system only.
2. Log in to the DD System Manager.
The DD System Manager window appears with DD Network in the Navigation
Panel.
3. Select a Data Domain system.
In the Navigation Panel, expand DD Network and select a system.
4. Add the DD Retention Lock Governance license, if it is not listed under Feature
Licenses.
a. Select Administration > Licenses
b. In the Licenses area click Add Licenses.
c. In the License Key text box, type the license key.
Note
License keys are case-insensitive. Include the hyphens when typing keys.
d. Click Add.
5. Create the replication context.
replication add source col://source-system-name
destination col://destination-system-name
6. Until instructed to do differently, perform the following steps on the destination
system only.
7. Destroy the file system.
filesys destroy
8. Log in to the DD System Manager.
The DD System Manager window appears with DD Network in the Navigation
Panel.
9. Select a Data Domain system.
In the Navigation Panel, expand DD Network and select a system.
10. Create a file system, but do not enable it.
filesys create
11. Create the replication context.
replication add source col://source-system-name
destination col://destination-system-name
12. Configure and enable the system to use DD Retention Lock Compliance.
system retention-lock compliance configure
(The system automatically reboots and executes the system retention-
lock compliance enable command.)
13. Perform the following steps on the source system only.
14. Initialize the replication context.
replication initialize source col://source-system-name
destination col://destination-system-name
15. Confirm that replication is complete.
replication status col://destination-system-name detailed
This command reports 0 pre-compressed bytes remaining when replication is
finished.
Fastcopy
When the filesys fastcopy [retention-lock] source src destination
dest command is run on a system with a DD Retention Lock Compliance enabled
MTree, the command preserves the retention lock attribute during the fastcopy
operation.
Note
If the destination MTree is not retention lock enabled, the retention-lock file attribute
is not preserved.
CLI usage
Considerations for a Data Domain system with DD Retention Lock Compliance.
l Commands that break compliance cannot be run. The following commands are
disallowed:
n filesys archive unit del archive-unit
n filesys destroy
n mtree delete mtree-path
n mtree retention-lock reset {min-retention-period period |
max-retention-period period} mtree mtree-path
n mtree retention-lock disable mtree mtree-path
n mtree retention-lock revert
n user reset
l The following command requires security officer authorization if the license being
deleted is for DD Retention Lock Compliance:
n license del license-feature [license-feature ...] |
license-code [license-code ...]
l The following commands require security officer authorization if DD Retention
Lock Compliance is enabled on an MTree specified in the command:
n mtree retention-lock set {min-retention-period period |
max-retention-period period} mtree mtree-path
n mtree rename mtree-path new-mtree-path
l The following commands require security officer authorization if DD Retention
Lock Compliance is enabled on the system:
n alerts notify-list reset
n config set timezone zonename
n config reset timezone
n cifs set authentication active-directory realm { [dc1 [dc2 ...]] }
n license reset
n ntp add timeserver time server list
n ntp del timeserver time server list
n ntp disable
n ntp enable
n ntp reset
n ntp reset timeservers
n replication break {destination | all}
n replication disable {destination | all}
n system set date MMDDhhmm[[CC]YY]
System clock
DD Retention Lock Compliance implements an internal security clock to prevent
malicious tampering with the system clock.
The security clock closely monitors and records the system clock. If there is an
accumulated two-week skew within a year between the security clock and the system
clock, the file system is disabled and can be resumed only by a security officer.
Finding the System Clock Skew
You can run the DD OS command system retention-lock compliance
status (security officer authorization required) to get system and security clock
information, including the last recorded security clock value, and the accumulated
system clock variance. This value is updated every 10 minutes.
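For example (security officer authorization is required):
# system retention-lock compliance status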
Removing the System Clock Skew
If the accumulated skew between the security clock and the system clock exceeds the allowed variance (two weeks), the file system is disabled. Complete these steps to restart the file system and remove the skew between security and system clocks.
Procedure
1. At the system console, enable the file system.
filesys enable
2. At the prompt, confirm that you want to quit the filesys enable command so that you can check whether the system date is correct.
3. Display the system date.
system show date
4. If the system date is not correct, set the correct date (security officer
authorization is required) and confirm it.
system set date MMDDhhmm[[CC]YY]
system show date
5. Enable the file system again.
filesys enable
6. At the prompt, continue to the enabling procedure.
7. A security officer prompt appears. Complete the security officer authorization
to start the file system. The security clock will automatically be updated to the
current system date.
DD Encryption
DD encryption overview
Data encryption protects user data if the Data Domain system is stolen or if the
physical storage media is lost during transit, and it eliminates accidental exposure of a
failed drive if it is replaced.
When data enters the Data Domain system using any of the supported protocols
(NFS, CIFS, DD VTL, DD Boost, and NDMP Tape Server), the stream is segmented,
fingerprinted, and de-duplicated (global compression). It is then grouped into multi-
segment compression regions, locally compressed, and encrypted before being stored
to disk.
Once enabled, the Encryption at Rest feature encrypts all data entering the Data
Domain system. You cannot enable encryption at a more granular level.
CAUTION
Data that has been stored before the DD Encryption feature is enabled does not
automatically get encrypted. To protect all of the data on the system, be sure to
enable the option to encrypt existing data when you configure encryption.
Additional Notes:
As of DD OS 5.5.1.0, Encryption of Data at Rest is supported for DD Extended Retention-enabled systems with a single retention unit. Because DD OS 5.5.1.0 and later support only one retention unit, systems set up at or after that release comply with this restriction automatically. Systems set up before 5.5.1.0 may have more than one retention unit; such systems cannot use Encryption of Data at Rest until all data has been moved or migrated to a single retention unit and the other retention units have been removed.
The filesys encryption apply-changes command applies any encryption
configuration changes to all data present in the file system during the next cleaning
cycle. For more information about this command, see the EMC Data Domain Operating
System Command Reference Guide.
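A minimal sketch; the configuration change is applied during the next cleaning cycle, which can also be started manually:
# filesys encryption apply-changes
# filesys clean start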
Encryption of Data at Rest supports all of the currently supported backup applications
described in the Backup Compatibility Guides available through EMC Online Support
at http://support.emc.com.
Data Domain Replicator can be used with encryption, enabling encrypted data to be
replicated using collection, directory, MTree, or application-specific managed file
replication with the various topologies. Each replication form works uniquely with
encryption and offers the same level of security. For more information, see the section
on using encryption of data at rest with replication.
Files locked using Data Domain Retention Lock can be stored, encrypted, and
replicated.
The autosupport feature includes information about the state of encryption on the
Data Domain system:
l Whether or not encryption is enabled
l The Key Manager in effect and which keys are used
l The encryption algorithm that is configured
l The state of the file system
Configuring encryption
This procedure includes configuring a key manager.
If the Encryption Status on the Data Management > File System > Encryption tab
shows Not Configured, click Configure to set up encryption on the Data Domain
system.
Provide the following information:
l Algorithm
n Select an encryption algorithm from the drop-down list or accept the default
AES 256-bit (CBC).
The AES 256-bit Galois/Counter Mode (GCM) is the most secure algorithm
but it is significantly slower than the Cipher Block Chaining (CBC) mode.
n Determine what data is to be encrypted: existing and new or only new. Existing
data will be encrypted during the first cleaning cycle after the file system is
restarted. Encryption of existing data can take longer than a standard file
system cleaning operation.
l Key Manager (select one of the two)
n Embedded Key Manager
By default, the Data Domain Embedded Key Manager is in effect after you
restart the file system unless you configure the RSA DPM Key Manager.
You can enable or disable key rotation. If enabled, enter a rotation interval between 1 and 12 months.
n RSA DPM Key Manager
Note
See the section about key management for an explanation about how the
Embedded Key Manager and the RSA DPM Key Manager work.
Note
The RSA DPM Key Manager requires setup on both an RSA DPM server and on
the Data Domain system. Follow the instructions in the RSA DPM key manager
encryption setup section before selecting the RSA DPM Key Manager in the Data
Domain interface. You can enable encryption using the Embedded Key Manager
before configuring the RSA DPM Key Manager. You can then switch to the RSA
DPM Key Manager after performing an RSA DPM key manager encryption setup
and following the procedure described in the changing key managers after setup
section.
The Summary shows your selected configuration values. Review them for correctness.
To change a value, click Back to navigate to the page where it was entered and modify
it.
A system restart is necessary to enable encryption. To apply the new configuration,
select the option to restart the file system.
Note
be configured for the Key Manager. If not, the Data Domain system continues to use
the latest known key.
Click on a Key MUID and the system displays the following information for the key in
the Key Details dialog: Tier/Unit (example: Active, Retention-unit-2), creation date,
valid until date, state (see DPM Encryption Key States Supported by Data Domain),
and post compression size. Click Close to close the dialog.
Table 179 DPM encryption key states supported by Data Domain
State Definition
Pending-Activated The key has just been created. After a file system restart, the key becomes Activated-RW.
Activated-RW / Activated-RO The key can be used for both reads and writes, or for reads only, respectively. Activated-RW is the latest activated key.
Compromised The key can only decrypt. After all of the data encrypted with the compromised key is re-encrypted, the state changes to Destroyed-Compromised. The keys are re-encrypted when a file system cleaning is run. You can delete a Destroyed-Compromised key, if necessary.
Note
A file system restart is necessary if keys have changed since the last sync.
Procedure
1. Using the DD System Manager, select the Data Domain system you are working
with in the Navigation panel.
Note
For information about the security officer, see the sections regarding creating local
users and enabling security authorization.
Note
After a file system clean has run, the key state will change to Destroyed.
Deleting a key
You can delete Key Manager keys that are in the Destroyed or Compromised-
Destroyed states. However, you only need to delete a key when the number of keys
has reached the maximum 254 limit. This procedure requires security officer
credentials.
Note
To reach the Destroyed state, the Destroying a Key procedure (for either the
Embedded Key Manager or the RSA DPM Key Manager) must be performed on the
key and a system cleaning must be run.
Procedure
1. Select Data Management > File System > Encryption.
2. In the Encryption Keys section, select the key or keys in the list to be deleted.
3. Click Delete....
The system displays the key to be deleted, and the tier and state for the key.
4. Type your security officer user name and password.
5. Confirm that you want to delete the key or keys by clicking Delete.
Note
After a file system clean has run, the key state changes to Destroyed.
Note
See the latest version of the RSA Data Protection Manager Server Administrator's Guide for more information about each step of this procedure.
Algorithm and cipher mode settings set on the RSA DPM Key Manager Server are
ignored by the Data Domain system. Configure these settings on the Data Domain
system.
Procedure
1. Create an identity for the Data Domain system using the X509 certificate. A
secure channel is created based on this certificate.
2. Create a key class with the proper attributes:
l Key length: 256 bits.
l Duration: For example, six months or whatever matches your policy.
l Auto-key generation: Select to have keys automatically generated.
Note
Multiple Data Domain systems can share the same key class. For more
information about key classes, see the section about RSA DPM key classes.
3. Create an identity using the Data Domain system's host certificate as its identity certificate. The identity and the key class have to be in the same identity group.
4. Import the certificates. See the section about importing certificates for more
information.
Note
If the key length is not 256 bits, the DPM configuration will fail.
2. Import the certificates by redirecting the certificate files using ssh command
syntax. See the EMC Data Domain Operating System Command Reference Guide
for details.
ssh sysadmin@<Data-Domain-system> adminaccess certificate import {host password password | ca} < path_to_the_certificate
For example, to import the host certificate host.p12 from your personal computer's desktop over to the Data Domain system DD1 using ssh, enter:
# ssh sysadmin@DD1 adminaccess certificate import host password
abc123 < C:\host.p12
3. Import the CA certificate, for example, ca.pem, from your desktop to DD1 via
SSH by entering:
# ssh sysadmin@DD1 adminaccess certificate import ca < C:\ca.pem
Note
By default, fips-mode is enabled. If the PKCS #12 client credential is encrypted with an algorithm that is not FIPS 140-2 approved, such as RC2, you must disable fips-mode. See the Data Domain Operating System Command Reference Guide for information about disabling fips-mode.
3. Log into the DD System Manager and select the Data Domain system you are
working with in the Navigation panel.
4. Click the Data Management > File System > Encryption tab.
Note
Certificates are only necessary for RSA Key Manager. Embedded Key Manager does
not use certificates.
Procedure
1. Select the option to upload the certificate as a .p12 file.
a. Enter a password.
b. Click Browse to find the .p12 file.
2. Select the option to upload the public key as a .pem file and use a generated
private key.
a. Click Browse to find the .pem file.
3. Click Add.
Deleting certificates
Select a certificate with the correct fingerprint.
Procedure
1. Select a certificate to delete.
2. Click Delete.
The system displays a Delete Certificate dialog with the fingerprint of the
certificate to be deleted.
3. Click OK.
3. In the Security Officer Credentials area, enter the user name and password of a
security officer.
4. Select one of the following:
l Select Apply to existing data and click OK. Decryption of existing data will
occur during the first cleaning cycle after the file system is restarted.
l Select Restart the file system now and click OK. DD Encryption will be
disabled after the file system is restarted.
After you finish
2. Disable the file system by clicking Disabled in the File System status area.
3. Use the procedure to lock or unlock the file system.
2. In the text fields of the Lock File System dialog box, provide:
l The username and password of a Security Officer account (an authorized
user in the Security User group on that Data Domain system).
l The current and a new passphrase.
3. Click OK.
This procedure re-encrypts the encryption keys with the new passphrase. This
process destroys the cached copy of the current passphrase (both in-memory
and on-disk).
CAUTION
Be sure to safeguard the passphrase. If the passphrase is lost, you will never be able to unlock the file system and access the data; the data will be irrevocably lost.
CAUTION
Do not use the chassis power switch to power off the system. Type the following command at the command prompt instead.
# system poweroff
3. Select an encryption algorithm from the drop-down list or accept the default
AES 256-bit (CBC).
The AES 256-bit Galois/Counter Mode (GCM) is the most secure algorithm but
it is significantly slower than the Cipher Block Chaining (CBC) mode.
Note
To reset the algorithm to the default AES 256-bit (CBC), click Reset to default.
Note
Encryption of existing data can take longer than a standard file system clean
operation.
l To encrypt only new data, select Restart file system now and click OK.