
ECS 3.8.1 Administration Guide

October 2024
Rev. 1.2

Notes, cautions, and warnings

NOTE: A NOTE indicates important information that helps you make better use of your product.

CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.

WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

© 2024 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents

Figures..........................................................................................................................................9

Tables..........................................................................................................................................10

Chapter 1: Overview.....................................................................................................................12
Revision history.................................................................................................................................................................. 12
Introduction......................................................................................................................................................................... 12
ECS platform.......................................................................................................................................................................13
ECS data protection..........................................................................................................................................................14
Configurations for availability, durability, and resilience..................................................................................... 15
ECS network....................................................................................................................................................................... 16
Load balancing considerations........................................................................................................................................ 16

Chapter 2: Getting Started with ECS........................................................................................... 17


Initial configuration.............................................................................................................................................................17
Log in to the ECS Portal.................................................................................................................................................. 18
Security fix for Management API............................................................................................................................. 18
View the Getting Started Task Checklist.....................................................................................................................18
View the ECS Portal Dashboard.....................................................................................................................................19
Upper-right menu bar..................................................................................................................................................19
View requests...............................................................................................................................................................20
View capacity utilization............................................................................................................................................ 20
View performance....................................................................................................................................................... 20
View storage efficiency............................................................................................................................................. 20
View geo monitoring................................................................................................................................................... 20
View node and disk health......................................................................................................................................... 21
View alerts..................................................................................................................................................................... 21
View audits.....................................................................................................................................................................21

Chapter 3: Storage Pools, VDCs, and Replication Groups............................................................ 22


Introduction to storage pools, VDCs, and replication groups.................................................................................22
Working with storage pools ........................................................................................................................................... 23
Create a storage pool................................................................................................................................................. 24
Edit a storage pool...................................................................................................................................................... 25
Working with VDCs in the ECS Portal ........................................................................................................................ 25
Create a VDC for a single site.................................................................................................................................. 26
Add a VDC to a federation........................................................................................................................................ 27
Edit a VDC.....................................................................................................................................................................28
Remove VDC from a Replication Group.................................................................................................................30
Fail a VDC (PSO)..........................................................................................................................................................31
Guidelines to check failover and bootstrap process............................................................................................31
Working with replication groups in the ECS Portal...................................................................................................32
Create a replication group......................................................................................................................................... 33
Edit a replication group.............................................................................................................................................. 34
Delete a replication group......................................................................................................................................... 35

Chapter 4: Authentication Providers............................................................................................37
Introduction to authentication providers..................................................................................................................... 37
Working with authentication providers in the ECS Portal.......................................................................................37
Considerations when adding Active Directory authentication providers....................................................... 38
AD or LDAP authentication provider settings...................................................................................................... 38
Add an AD or LDAP authentication provider........................................................................................................ 42
Add a Keystone authentication provider................................................................................................................42

Chapter 5: Namespaces...............................................................................................................44
Introduction to namespaces........................................................................................................................................... 44
Namespace tenancy....................................................................................................................................................44
Working with namespaces in the ECS Portal............................................................................................................. 45
Namespace settings................................................................................................................................................... 45
Create a namespace................................................................................................................................................... 49
Edit a namespace........................................................................................................................................................ 50
Delete a namespace.....................................................................................................................................................51

Chapter 6: Users and Roles......................................................................................................... 53


Introduction to users and roles...................................................................................................................................... 53
Users in ECS.......................................................................................................................................................................53
Management users......................................................................................................................................................54
Default management users....................................................................................................................................... 54
Object users..................................................................................................................................................................55
Domain and local users...............................................................................................................................................55
User scope.................................................................................................................................................................... 57
User tags....................................................................................................................................................................... 58
Management roles in ECS...............................................................................................................................................58
Security Administrator............................................................................................................................................... 58
System Administrator.................................................................................................................................................59
System Monitor........................................................................................................................................................... 59
Namespace Administrator......................................................................................................................................... 59
Tasks performed by role............................................................................................................................................ 60
Working with users in the ECS Portal..........................................................................................................................62
Add an object user...................................................................................................................................................... 63
Add a domain user as an object user......................................................................................................................65
Add domain users into a namespace...................................................................................................................... 65
Create a local management user or assign a domain user or AD or LDAP group to a management
role.............................................................................................................................................................................. 65
Assign the Namespace Administrator role to a user or AD or LDAP group ................................................. 66

Chapter 7: Identity and Access Management (S3)....................................................................... 67


Introduction to Identity and Access Management.................................................................................................... 67
Account Management................................................................................................................................................ 67
Access Management.................................................................................................................................................. 68
Users.................................................................................................................................................................................... 69
New User....................................................................................................................................................................... 70
Delete Users..................................................................................................................................................................70
Groups..................................................................................................................................................................................70

New Group..................................................................................................................................................................... 71
Delete Groups................................................................................................................................................................71
Roles...................................................................................................................................................................................... 71
New Role........................................................................................................................................................................72
Delete Roles.................................................................................................................................................................. 72
Policies................................................................................................................................................................................. 72
New Policy.....................................................................................................................................................................73
Delete Policies.............................................................................................................................................................. 73
Policy Simulator- Existing Policies...........................................................................................................................73
Policy Simulator- New Policy....................................................................................................................................74
Identity Provider................................................................................................................................................................ 75
New Identity Provider.................................................................................................................................................75
Delete Providers.......................................................................................................................................................... 75
SAML Service Provider Metadata.................................................................................................................................76
Generate SAML Service Provider Metadata........................................................................................................ 76
Root Access Key................................................................................................................................................................76
Create Access Key...................................................................................................................................................... 76

Chapter 8: Buckets......................................................................................................................78
Introduction to buckets....................................................................................................................................................78
Working with buckets in the ECS Portal..................................................................................................................... 79
Bucket settings............................................................................................................................................................ 79
Create a bucket........................................................................................................................................................... 82
Edit a bucket.................................................................................................................................................................83
Set ACLs........................................................................................................................................................................84
Set bucket policies...................................................................................................................................................... 86
Restrict user IP addresses that can access a CAS bucket............................................................................... 90
Create a bucket using the S3 API (with s3curl).........................................................................................................91
Bucket HTTP headers................................................................................................................................................ 93
Enable Data Movement................................................................................................................................................... 93
Data Mobility Common Issues .................................................................................................................................95
Troubleshoot Data Mobility.......................................................................................................................................95
Data Mobility Debug Logging .................................................................................................................................. 95
Bucket, object, and namespace naming conventions.............................................................................................. 96
S3 bucket and object naming in ECS..................................................................................................................... 96
OpenStack Swift container and object naming in ECS...................................................................................... 97
Atmos bucket and object naming in ECS...............................................................................................................97
CAS pool and object naming in ECS....................................................................................................................... 97
Simplified bucket delete ................................................................................................................................................. 98
Delete a bucket............................................................................................................................................................ 99
Simplified bucket delete common issues ..............................................................................................................99
Simplified bucket delete log files and debug logging ....................................................................................... 100
Priority task coordinator ............................................................................................................................................... 100
Priority task coordinator common issues ............................................................................................................ 101
Partial list results.............................................................................................................................................................. 101
Bucket listing limitation................................................................................................................................................... 101
Disable unused services................................................................................................................................................. 102

Chapter 9: File Access............................................................................................................... 104

Introduction to file access............................................................................................................................................. 104
ECS multi-protocol access............................................................................................................................................ 105
S3/NFS multi-protocol access to directories and files.................................................................................... 105
Multiprotocol access permissions..........................................................................................................................105
Working with NFS exports in the ECS Portal........................................................................................................... 107
Working with user or group mappings in the ECS Portal.......................................................................................107
ECS NFS configuration tasks....................................................................................................................................... 108
Create a bucket for NFS using the ECS Portal..................................................................................................108
Add an NFS export.................................................................................................................................................... 109
Add a user or group mapping using the ECS Portal............................................................................................111
Configure ECS NFS with Kerberos security......................................................................................................... 111
Mount an NFS export example..................................................................................................................................... 116
Best practices for mounting ECS NFS exports...................................................................................................117
NFS access using the ECS Management REST API................................................................................................ 117
NFS WORM (Write Once, Read Many)...................................................................................................................... 118
S3A support...................................................................................................................................................................... 120
Configuration at ECS................................................................................................................................................ 120
Configuration at Hadoop Node............................................................................................................................... 121
Geo-replication status.................................................................................................................................................... 122

Chapter 10: Maintenance........................................................................................................... 123


Maintenance......................................................................................................................................................................123
Rack.............................................................................................................................................................................. 123
Node..............................................................................................................................................................................123
Disk................................................................................................................................................................................ 124
ECS Appliance CRU and FRU guide availability........................................................................................................125

Chapter 11: Certificates............................................................................................................. 126


Introduction to certificates........................................................................................................................................... 126
ECS certificate tool.........................................................................................................................................................126
Installation....................................................................................................................................................................127
Configuration.............................................................................................................................................................. 128
View current certificates......................................................................................................................................... 129
Create certificate signing request.......................................................................................................................... 131
Create a self-signed certificate..............................................................................................................................133
Upload certificate...................................................................................................................................................... 135
Generate certificates...................................................................................................................................................... 136
Create a private key.................................................................................................................................................. 136
Generate a SAN configuration................................................................................................................................ 137
Create a self-signed certificate..............................................................................................................................138
Create a certificate signing request......................................................................................................................140
Upload a certificate..........................................................................................................................................................141
Authenticate with the ECS Management REST API..........................................................................................141
Upload a management certificate..........................................................................................................................142
Upload a data certificate for data access endpoints........................................................................................ 143
Add custom LDAP certificate................................................................................................................................. 144
Verify installed certificates............................................................................................................................................146
Verify the management certificate........................................................................................................................146
Verify the object certificate.................................................................................................................................... 147

Reset the object certificate.....................................................................................................................................147

Chapter 12: ECS Settings...........................................................................................................148


Introduction to ECS settings........................................................................................................................................ 148
Object base URL.............................................................................................................................................................. 148
Bucket and namespace addressing....................................................................................................................... 149
DNS configuration..................................................................................................................................................... 150
Add a Base URL.......................................................................................................................................................... 151
Key Management..............................................................................................................................................................151
Activating external key management.................................................................................................................... 151
External Key Manager Configuration..........................................................................................................................152
Create a cluster..........................................................................................................................................................153
Migrating external key management.....................................................................................................................154
Add external key management servers to cluster............................................................................................. 155
Update VDC to EKM Server Mapping.................................................................................................................. 156
Activate EKM Cluster............................................................................................................................................... 157
Key rotation.......................................................................................................................................................................157
Secure Remote Services................................................................................................................................................158
Secure Remote Services prerequisites ............................................................................................................... 158
Add a Secure Remote Services Server................................................................................................................ 159
Verify that ESRS call home works.........................................................................................................................160
Disable call home........................................................................................................................................................ 161
Alert policy......................................................................................................................................................................... 161
New alert policy..........................................................................................................................................................162
Event notification servers............................................................................................................................................. 162
SNMP servers.............................................................................................................................................................162
Syslog servers.............................................................................................................................................................167
Platform locking............................................................................................................................................................... 169
Lock and unlock nodes using the ECS Portal..................................................................................................... 170
Lock and unlock nodes using the ECS Management REST API..................................................................... 170
Licensing............................................................................................................................................................................. 171
Obtain the Dell EMC ECS license file.....................................................................................................................171
Upload the ECS license file......................................................................................................................................172
Security.............................................................................................................................................................................. 172
Password......................................................................................................................................................................172
Password Rules.......................................................................................................................................................... 173
Sessions........................................................................................................................................................................174
User Agreement......................................................................................................................................................... 175
About this VDC.................................................................................................................................................................175
Object version limitation settings................................................................................................................................ 176

Chapter 13: ECS Outage and Recovery....................................................................................... 177


Introduction to ECS site outage and recovery......................................................................................................... 177
TSO behavior.................................................................................................................................................................... 178
TSO behavior with the ADO bucket setting turned off....................................................................................178
TSO behavior with the ADO bucket setting turned on.....................................................................................179
TSO considerations................................................................................................................................................... 187
NFS file system access during a TSO................................................................................................................... 187
PSO behavior....................................................................................................................................................................188

Recovery on disk and node failures.............................................................................................................................188
NFS file system access during a node failure..................................................................................................... 189
Data rebalancing after adding new nodes................................................................................................................. 189

Chapter 14: Advanced Monitoring.............................................................................................. 190


Advanced Monitoring......................................................................................................................................................190
View Advanced Monitoring Dashboards.............................................................................................................. 190
Share Advanced Monitoring Dashboards............................................................................................................200
Flux API............................................................................................................................................................................. 200
Monitoring list of metrics........................................................................................................................................ 202
Flux API field descriptions.......................................................................................................................................203
Monitoring list of metrics: Non-Performance.................................................................................................... 203
Monitoring list of metrics: Performance.............................................................................................................. 213
Flux API replacements for deprecated dashboard API..................................................................................... 217
Dashboard APIs............................................................................................................................................................... 220

Part I: Document feedback........................................................................................................ 223


Index......................................................................................................................................... 224

Figures

1 ECS component layers............................................................................................................................................ 13


2 Guide icon...................................................................................................................................................................18
3 Getting Started Task Checklist.............................................................................................................................19
4 Upper-right menu bar..............................................................................................................................................19
5 Replication group spanning three sites and replication group spanning two sites.................................. 23
6 Adding a subset of domain users into a namespace using one AD attribute............................................ 56
7 Adding a subset of domain users into a namespace using multiple AD attributes...................................57
8 Bucket Policy Editor code view............................................................................................................................ 87
9 Bucket Policy Editor tree view............................................................................................................................. 87
10 Read/write request fails during TSO when data is accessed from non-owner site and owner
site is unavailable....................................................................................................................................................179
11 Read/write request succeeds during TSO when data is accessed from owner site and non-
owner site is unavailable....................................................................................................................................... 179
12 Read/write request succeeds during TSO when ADO-enabled data is accessed from non-
owner site and owner site is unavailable.......................................................................................................... 180
13 Object ownership example for a write during a TSO in a two-site federation........................................182
14 Read request workflow example during a TSO in a three-site federation............................................... 183
15 Passive replication in normal state.....................................................................................................................184
16 TSO for passive replication..................................................................................................................................184
17 ADO and Object Lock enabled bucket where the object is replicated from Zone 1 to Zone 2
during normal network connection.................................................................................................................... 185
18 ADO and Object Lock enabled bucket where a new version of an object is replicated from Zone
1 to Zone 2 during normal network connection but the lock is not replicated........................................186
19 ADO and Object Lock enabled bucket during TSO when the unlocked version of the object is
overwritten in Zone 2............................................................................................................................................186
20 ADO and Object Lock enabled bucket after TSO where the version updated in Zone 2 during
the outage will be replicated to Zone 1.............................................................................................................186

Tables

1 Revision dates and changes...................................................................................................................................12


2 ECS supported data services................................................................................................................................ 13
3 Erasure encoding requirements for regular and cold archives...................................................................... 14
4 Storage overhead..................................................................................................................................................... 14
5 ECS data protection schemes...............................................................................................................................15
6 Storage pool properties..........................................................................................................................................23
7 VDC properties.........................................................................................................................................................25
8 Options to modify replication and management endpoints........................................................................... 29
9 Replication Group properties................................................................................................................................ 32
10 Authentication provider properties......................................................................................................................37
11 AD or LDAP authentication provider settings...................................................................................................38
12 Keystone authentication provider settings........................................................................................................42
13 Namespace properties............................................................................................................................................45
14 Namespace settings................................................................................................................................................45
15 ECS Management REST API retention policy...................................................................................................48
16 Default management users....................................................................................................................................54
17 Tasks performed by ECS management user role.............................................................................................60
18 Object user properties............................................................................................................................................62
19 Management user properties................................................................................................................................63
20 Identities.....................................................................................................................................................................67
21 Policy types............................................................................................................................................................... 68
22 IAM user.....................................................................................................................................................................69
23 IAM groups................................................................................................................................................................ 70
24 Role.............................................................................................................................................................................. 71
25 IAM policy.................................................................................................................................................................. 72
26 Identity Provider.......................................................................................................................................................75
27 SAML Service Provider Metadata....................................................................................................................... 76
28 Root Access Key...................................................................................................................................................... 76
29 Bucket settings........................................................................................................................................................ 79
30 Bucket ACLs............................................................................................................................................................. 84
31 Predefined groups................................................................................................................................................... 85
32 Bucket headers........................................................................................................................................................ 93
33 Simplified bucket delete issues............................................................................................................................ 99
34 Priority task coordinator issues........................................................................................................................... 101
35 Mapping between NFS ACL and Object ACL attributes.............................................................................. 106
36 NFS export properties...........................................................................................................................................107
37 Mapping fields.........................................................................................................................................................108
38 Host parameters..................................................................................................................................................... 110
39 ECS Management REST API calls for managing NFS access...................................................................... 117
40 Autocommit terms.................................................................................................................................................. 118

41 Expected file operations.......................................................................................................................................120
42 Rack...........................................................................................................................................................................123
43 Node.......................................................................................................................................................................... 124
44 Disk............................................................................................................................................................................ 124
45 Distinguished Name (DN) fields......................................................................................................................... 139
46 Key Management properties............................................................................................................................... 152
47 Create a cluster...................................................................................................................................................... 153
48 New external key servers.....................................................................................................................................155
49 Key Management properties............................................................................................................................... 156
50 Secure Remote Services properties.................................................................................................................. 158
51 Syslog facilities used by ECS.............................................................................................................................. 168
52 Syslog severity keywords.....................................................................................................................................168
53 ECS Management REST API calls for managing node locking....................................................................170
54 Password rules........................................................................................................................................................ 173
55 Sessions.................................................................................................................................................................... 174
56 User agreement...................................................................................................................................................... 175
57 Object version limitation settings.......................................................................................................................176
58 Object Lock and ADO in different types of buckets..................................................................................... 185
59 Example scenario where locked data can be lost in TSO.............................................................................186
60 Advanced monitoring dashboards......................................................................................................................190
61 Advanced monitoring dashboard fields..............................................................................................................191
62 Alternative places to find removed data..........................................................................................................220
63 APIs removed in ECS 3.5.0................................................................................................................................. 222

Chapter 1: Overview
Topics:
• Revision history
• Introduction
• ECS platform
• ECS data protection
• ECS network
• Load balancing considerations

Revision history
Table 1. Revision dates and changes
Revision date           Description of change
October 2024, Rev 1.2   Updated TSO and PSO Minimum Requirements.
July 2024, Rev 1.1      Removed broken links.
April 2024, Rev 1.0     Initial release of ECS 3.8.1.

Introduction
Dell EMC ECS provides a complete software-defined cloud storage platform that supports the storage, manipulation, and
analysis of unstructured data on a massive scale on commodity hardware. You can deploy ECS as a turnkey storage appliance or
as a software product that is installed on a set of qualified commodity servers and disks. ECS offers the cost advantages of a
commodity infrastructure and the enterprise reliability, availability, and serviceability of traditional arrays.
ECS uses a scalable architecture that includes multiple nodes and attached storage devices. The nodes and storage devices are
commodity components, similar to devices that are generally available, and are housed in one or more racks.
A rack and its components that are supplied by Dell EMC with preinstalled software is referred to as an ECS
appliance. A rack and commodity nodes that are not supplied by Dell EMC is referred to as a Dell EMC ECS software-only
solution. Multiple racks are referred to as a cluster.
A rack, or multiple joined racks, whose processing and storage are handled as a coherent unit by the ECS infrastructure
software is referred to as a site, and at the ECS software level as a Virtual Data Center (VDC). When you add a VDC and its
storage pool to a replication group, the VDC appears as a zone.
Management users can access the ECS UI, which is referred to as the ECS Portal, to perform administration tasks. Management
users can be assigned one of four roles: Security Administrator, System Administrator, Namespace Administrator, and System
Monitor. Management tasks that can be performed in the ECS Portal can also be performed by using the ECS Management
REST API.
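For illustration, the following minimal Python sketch drives the ECS Management REST API with the requests library. The host name, port 4443 (the default management port), and credentials are placeholders; the /login call authenticates with HTTP Basic credentials and returns a session token in the X-SDS-AUTH-TOKEN response header, which later management calls pass back in the same header (endpoint paths are documented in the ECS Management REST API Reference).

import requests
import urllib3

urllib3.disable_warnings()  # the management certificate is often self-signed in lab setups

ECS_MGMT = "https://ecs.example.com:4443"     # placeholder management endpoint
USER, PASSWORD = "admin-user", "change-me"    # placeholder management credentials

# Authenticate: the session token is returned in the X-SDS-AUTH-TOKEN response header.
login = requests.get(f"{ECS_MGMT}/login", auth=(USER, PASSWORD), verify=False)
login.raise_for_status()
token = login.headers["X-SDS-AUTH-TOKEN"]

# Reuse the token for any management call, for example listing namespaces.
resp = requests.get(
    f"{ECS_MGMT}/object/namespaces",
    headers={"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"},
    verify=False,
)
print(resp.json())
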
ECS administrators can perform the following tasks in the ECS Portal:
● Configure and manage the object store infrastructure (compute and storage resources) for object users.
● Manage users, roles, and buckets within namespaces. Namespaces are equivalent to tenants.
Object users cannot access the ECS Portal, but can access the object store to read and write objects and buckets by using
clients that support the following data access protocols:
● Amazon Simple Storage Service (Amazon S3)
● EMC Atmos
● OpenStack Swift
● ECS CAS (content-addressable storage)
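For example, an S3-compatible client or SDK can read and write objects by pointing at an ECS node or load balancer. The following sketch uses the boto3 library; the endpoint address, port (9020 is the typical ECS S3 HTTP port, but confirm it for your deployment), credentials, and bucket name are placeholders rather than values from this guide.

import boto3

# Placeholders: replace with your ECS data endpoint and object user credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://ecs-node-or-load-balancer:9020",  # typical ECS S3 HTTP port; verify locally
    aws_access_key_id="object-user-name",
    aws_secret_access_key="object-user-secret-key",
)

# List the objects in a bucket that the object user can access.
response = s3.list_objects_v2(Bucket="example-bucket")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])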

For more information about object user tasks, see ECS Data Access Guide.
For more information about system monitor tasks, see ECS Monitoring Guide.

ECS platform
The ECS platform is composed of the data services, portal, storage engine, fabric, infrastructure, and hardware component
layers.

[Figure: the ECS component layers stacked from top to bottom — data services, portal, storage engine, fabric, infrastructure, and hardware.]

Figure 1. ECS component layers

Data services The data services component layer provides support for access to the ECS object store through object and NFS v3 protocols. In general, ECS provides multiprotocol access: data that is ingested through one protocol can be accessed through another. For example, data that is ingested through S3 can be modified through Swift or NFS v3. Multiprotocol access has some exceptions that arise from protocol semantics and from how each protocol represents data. The following table shows the object APIs and the protocols that are supported and how they interoperate.
NOTE: In this document, HDFS refers to the native Hadoop Compatible File System (HCFS) support, also known as ViPRFS. Hadoop support for ECS object storage is typically referenced as S3A.

Table 2. ECS supported data services

Protocols     Support                                            Interoperability
Object S3     Capabilities such as byte range updates and        File systems (NFS), Swift
              rich ACLs
Hadoop S3A    Hadoop 2.7+                                        S3
Atmos         Version 2.0                                        NFS (only path-based objects, not
                                                                 object ID style objects)
Swift         V2 APIs, Swift, and Keystone v3 authentication     File systems (NFS), S3
CAS           SDK v3.1.544 and later                             Not applicable
NFS           NFSv3                                              S3, Swift, Atmos (only path-based objects)

Portal The ECS Portal component layer provides a Web-based user interface that allows you to manage, license,
and provision ECS nodes. The portal has the following comprehensive reporting capabilities:
● Capacity utilization for each site, storage pool, node, and disk
● Performance monitoring on latency, throughput, transactions per second, and replication progress and
rate
● Diagnostic information, such as node and disk recovery status and statistics on hardware and process
health for each node, which helps identify performance and system bottlenecks.

Storage engine The storage engine component layer provides an unstructured storage engine that is responsible for
storing and retrieving data, managing transactions, and protecting and replicating data. The storage
engine provides access to objects ingested using multiple object storage protocols and the NFS file
protocols.

Fabric The fabric component layer provides cluster health management, software management, configuration
management, upgrade capabilities, and alerting. The fabric layer is responsible for keeping the services
running and managing resources such as the disks, containers, firewall, and network. It tracks and reacts
to environment changes such as failure detection and provides alerts that are related to system health.
The 9069 and 9099 ports are public IP ports that are protected by the Fabric firewall manager. These ports are not available outside of the cluster.

Infrastructure The infrastructure component layer uses SUSE Linux Enterprise Server 12 as the base operating system
for the ECS appliance, or qualified Linux operating systems for commodity hardware configurations.
Docker is installed on the infrastructure to deploy the other ECS component layers. The Java Virtual
Machine (JVM) is installed as part of the infrastructure because ECS software is written in Java.

Hardware The hardware component layer is an ECS appliance or qualified industry standard hardware. For more
information about ECS hardware, see Dell Support.

ECS data protection


ECS protects data within a site by mirroring the data onto multiple nodes, and by using erasure coding to break down data
chunks into multiple fragments and distribute the fragments across nodes. Erasure coding (EC) reduces the storage overhead
and ensures data durability and resilience against disk and node failures.
By default, the storage engine implements the Reed-Solomon 12 + 4 erasure coding scheme in which an object is broken into 12
data fragments and 4 coding fragments. The resulting 16 fragments are dispersed across the nodes in the local site. When an
object is erasure-coded, ECS can read the object directly from the 12 data fragments without any decoding or reconstruction.
The code fragments are used only for object reconstruction when a hardware failure occurs. ECS also supports a 10 + 2 scheme
for use with cold storage archives to store objects that do not change frequently and do not require the more robust default EC
scheme.

Table 3. Erasure encoding requirements for regular and cold archives

Use case          Minimum required nodes   Minimum required disks   Recommended disks   EC efficiency   EC scheme
Regular archive   4                        16                       32                  1.33            12 + 4
Cold archive      6                        12                       24                  1.2             10 + 2

Sites can be federated, so that data is replicated to another site to increase availability and data durability, and to ensure that
ECS is resilient against site failure. For three or more sites, in addition to the erasure coding of chunks at a site, chunks that are
replicated to other sites are combined using a technique that is called XOR to provide increased storage efficiency.
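As a conceptual illustration of the XOR technique (this is a toy sketch, not ECS code), a site can store only the XOR of two chunks replicated from other sites and still reconstruct either chunk on demand:

# Toy illustration of XOR-based chunk storage and recovery; not ECS code.
chunk_from_site_a = bytes([0x11, 0x22, 0x33, 0x44])
chunk_from_site_b = bytes([0xAA, 0xBB, 0xCC, 0xDD])

# Store only the XOR of the two replicated chunks (the space of one chunk, not two).
xor_chunk = bytes(a ^ b for a, b in zip(chunk_from_site_a, chunk_from_site_b))

# If site A fails, rebuild its chunk from the XOR chunk and site B's chunk.
recovered_a = bytes(x ^ b for x, b in zip(xor_chunk, chunk_from_site_b))
assert recovered_a == chunk_from_site_a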

Table 4. Storage overhead

Number of sites in     Default                    Cold archive
replication group      (Erasure Code: 12+4)       (Erasure Code: 10+2)
1                      1.33                       1.2
2                      2.67                       2.4
3                      2.00                       1.8
4                      1.77                       1.6
5                      1.67                       1.5
6                      1.60                       1.44
7                      1.55                       1.40
8                      1.52                       1.37

If you have one site, with erasure coding the object data chunks use more space (1.33 or 1.2 times storage overhead) than the
raw data bytes require. If you have two sites, the storage overhead is doubled (2.67 or 2.4 times storage overhead) because
both sites store a replica of the data, and the data is erasure coded at both sites. If you have three or more sites, ECS combines
the replicated chunks so that, counter intuitively, the storage overhead reduces.
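The overhead figures above can be reproduced with simple arithmetic. The sketch below is illustrative only (it is not ECS code); it models the single-site overhead as (data + coding fragments) / data fragments and, for three or more sites, approximates the XOR savings as an extra 1/(sites - 1) share of that overhead, which is consistent with Table 4. Small differences in the last digit come from rounding.

# Illustrative arithmetic for the storage overhead values in Table 4; not ECS code.
def local_overhead(data_fragments, coding_fragments):
    return (data_fragments + coding_fragments) / data_fragments

def geo_overhead(sites, local):
    # 1 site: local EC only. 2 sites: a full erasure-coded copy at each site.
    # 3+ sites: replicated chunks are XOR'ed, costing roughly 1/(sites - 1) extra.
    if sites == 1:
        return local
    if sites == 2:
        return 2 * local
    return local * (1 + 1 / (sites - 1))

regular = local_overhead(12, 4)   # 1.33 (12 + 4 scheme)
cold = local_overhead(10, 2)      # 1.20 (10 + 2 scheme)
for sites in range(1, 9):
    print(sites, round(geo_overhead(sites, regular), 2), round(geo_overhead(sites, cold), 2))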
When one node is down in a four-node system, ECS rebuilds the erasure-coded data with priority to avoid data unavailability (DU). Because one node is down, the EC fragments are redistributed across the remaining three nodes, so a node can hold more fragments than the EC scheme can tolerate losing. If the down node comes back, the system returns to normal. If the node does not recover and the node holding the most EC fragments then also goes down, the DU window is as large as the window during which that node is unavailable, and data loss (DL) can result.
The EC retiring feature converts EC chunks that can no longer be fully protected into three mirrored copies for data safety. However, EC retiring has some limitations:
● It increases capacity usage, raising the protection overhead from 1.33 to 3.
● When no node is down, EC retiring introduces unnecessary I/O.
● The feature applies to four-node systems. EC retiring does not trigger automatically; you must trigger it on demand by using an API through the service console.
For a detailed description of the mechanism that is used by ECS to provide data durability, resilience, and availability, see the
ECS High Availability Design White Paper.

Configurations for availability, durability, and resilience


Depending on the number of sites in the ECS system, different data protection schemes can increase availability and balance
the data protection requirements against performance. ECS uses the replication group to configure the data protection
schemes.

Table 5. ECS data protection schemes

Number of sites   Local Protection   Full Copy Protection*   Active           Passive
1                 Yes                Not applicable          Not applicable   Not applicable
2                 Yes                Always                  Not applicable   Not applicable
3 or more         Yes                Optional                Normal           Optional

* Full Copy Protection can be selected with Active. Full Copy Protection is not available if Passive is selected.

Local Protection Data is protected locally by using triple mirroring and erasure coding which provides resilience against disk
and node failures, but not against site failure.
Full Copy When the Replicate to All Sites setting is turned on for a replication group, the replication group makes
Protection a full readable copy of all objects to all sites within the replication group. Having full readable copies of
objects on all VDCs in the replication group provides data durability and improves local performance at all
sites at the cost of storage efficiency.
Active Active is the default ECS configuration. When a replication group is configured as Active, data is
replicated to federated sites and can be accessed from all sites with strong consistency. If you have
two sites, full copies of data chunks are copied to the other site. If you have three or more sites, the
replicated chunks are combined (XOR'ed) to provide increased storage efficiency. When data is accessed
from a site that is not the owner of the data, until that data is cached at the non-owner site, the access
time increases. Similarly, if the owner site that contains the primary copy of the data fails, and if you have
a global load balancer that directs requests to a non-owner site, the non-owner site must re-create the
data from XOR'ed chunks, and the access time increases.
Passive The Passive configuration includes two, three, or four active sites with an additional passive site that
is a replication target (backup site). The minimum number of sites for a Passive configuration is three
(two active, one passive) and the maximum number of sites is five (four active, one passive). Passive
configurations have the same storage efficiency as Active configurations. For example, the Passive
three-site configuration has the same storage efficiency as the Active three-site configuration (2.0 times
storage overhead). In the Passive configuration, all replication data chunks are sent to the passive site
and XOR operations occur only at the passive site. In the Active configuration, the XOR operations occur
at all sites. If all sites are on-premises, you can designate any of the sites as the replication target. If
there is a backup site hosted off-premise by a third-party data center, ECS automatically selects it as the
replication target when you create a Passive geo replication group (see Create a replication group). If you
want to change the replication target from a hosted site to an on-premises site, you can do so using the
ECS Management REST API.

ECS network
ECS network infrastructure consists of top-of-rack switches that allow for the following types of network connections:
● Public network – connects ECS nodes to your organization's network and carries data traffic.
● Internal private network – manages nodes and switches within the rack and across racks.
For more information about ECS networking, see the ECS Networking and Best Practices White Paper.
CAUTION: Connections from the customer's network to both front-end switches (rabbit and hare) are required to maintain the high availability architecture of the ECS appliance. If the customer chooses not to connect to their network in the required HA manner, there is no guarantee of high data availability for the use of this product.

Load balancing considerations


It is recommended that a load balancer is used in front of ECS.
In addition to distributing the load across ECS cluster nodes, a load balancer provides High Availability (HA) for the ECS cluster
by routing traffic to healthy nodes. Where network separation is implemented, and data and management traffic are separated,
the load balancer must be configured so that user requests, using the supported data access protocols, are balanced across the
IP addresses of the data network. ECS Management REST API requests can be made directly to a node IP on the management
network or can be load balanced across the management network for HA.
The load balancer configuration depends on the load balancer type. For information about test configurations and best practice,
contact ECS Remote Support.
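As a minimal illustration of the kind of health check a load balancer or monitoring script might perform against ECS data endpoints, the sketch below attempts a TCP connection to the S3 data port on each node. The node addresses are placeholders, and 9020 is assumed as the unencrypted S3 port (typical for ECS, but confirm it, and your load balancer's own health-check mechanism, for your deployment).

# Minimal node reachability probe; addresses and port are assumptions for illustration.
import socket

NODES = ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"]  # hypothetical data IPs
S3_PORT = 9020  # typical ECS S3 (HTTP) data port; verify in your environment

def node_is_reachable(host, port, timeout=2.0):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

healthy = [node for node in NODES if node_is_reachable(node, S3_PORT)]
print("Healthy data endpoints:", healthy)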

2
Getting Started with ECS
Topics:
• Initial configuration
• Log in to the ECS Portal
• View the Getting Started Task Checklist
• View the ECS Portal Dashboard

Initial configuration
The initial configuration steps that are required to get started with ECS include logging in to the ECS Portal for the first time,
using the ECS Portal Getting Started Task Checklist and Dashboard, uploading a license, and setting up an ECS virtual data
center (VDC).

About this task


NOTE: Set the user scope before you create the first object user. Setting up the user scope is a strict one-time configuration. Once configured for an ECS system, the user scope cannot be changed. If you want to change the user scope, ECS must be reinstalled and all the users, buckets, namespaces, and data must be cleaned up.
● Refer to Object users and User scope for more information about object users and user scope.
● For more information about object user tasks, see the ECS Data Access Guide.
To initially configure ECS, the root user or System Administrator must at a minimum:

Steps
1. Upload an ECS license.
See Licensing.
2. Select a set of nodes to create at least one storage pool.
See Create a storage pool.
3. Create a VDC.
See Create a VDC for a single site.
4. Create at least one replication group.
See Create a replication group.
a. Optional: Set authentication.
You can add Active Directory (AD), LDAP, or Keystone authentication providers to ECS to enable users to be
authenticated by systems external to ECS. See Introduction to authentication providers.
5. Create at least one namespace. A namespace is the equivalent of a tenant.
See Create a namespace.
a. Optional: Create object and/or management users.
See Working with users in the ECS Portal.
6. Create at least one bucket.
See Create a bucket.
After you configure the initial VDC, if you want to create an additional VDC and federate it with the first VDC, see Add a
VDC to a federation.



Log in to the ECS Portal
Log in to the ECS Portal to set up the initial configuration of a VDC. Log in to the ECS Portal from the browser by specifying
the IP address or fully qualified domain name (FQDN) of any node, or the load balancer that acts as the front end to ECS. The
login procedure is described below.

Prerequisites
Logging in to the ECS Portal requires the Security Administrator, System Administrator, System Monitor, or Namespace
Administrator role.
NOTE: You can log in to the ECS Portal for the first time with any valid login. However, you can configure the system only with the System Administrator and Security Administrator roles.
On initial ECS login, use the default credentials. You are then prompted to change the password for the root user immediately.

Steps
1. Type the public IP address of the first node in the system, or the address of the load balancer that is configured as the front
end, in the address bar of your browser: https://<node1_public_ip>.
2. Log in with the default root credentials:
● User Name: root
● Password: ChangeMe
NOTE: The first system administrator to log in to the ECS Portal is prompted to acknowledge the End User License
Agreement. Once you acknowledge the agreement, you are prompted to change the password.

3. After you change the password at first login, click Save.


You are logged out, and the ECS login screen is displayed.
4. Type the User Name and Password.
5. To log out of the ECS Portal, in the upper-right menu bar, click the arrow beside your username, and then click logout.
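The same credentials can be used to script management tasks against the ECS Management REST API instead of the Portal. The sketch below is a minimal example under common assumptions: the management API listens on port 4443, GET /login with basic authentication returns an X-SDS-AUTH-TOKEN header, and that token is passed on subsequent calls such as listing namespaces. Verify the endpoints and certificate handling against the ECS Management REST API documentation for your release, and note the proxy and load balancer registration requirement described in the next section.

# Minimal sketch of ECS Management REST API authentication; the host, credentials,
# and certificate handling (verify=False) are placeholders/assumptions.
import requests

ECS_MGMT = "https://ecs-node-or-load-balancer:4443"

# Obtain an authentication token with basic credentials.
login = requests.get(ECS_MGMT + "/login", auth=("root", "ChangeMe"), verify=False)
login.raise_for_status()
token = login.headers["X-SDS-AUTH-TOKEN"]

# Use the token on subsequent management calls, for example listing namespaces.
namespaces = requests.get(
    ECS_MGMT + "/object/namespaces",
    headers={"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"},
    verify=False,
)
print(namespaces.json())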

Security fix for Management API


NOTE: The Security fix for Management API is not required for ECS 3.8.1.0 and above. Please review KB 205031 for
detailed information.
ECS 3.8 is enhanced to allow only registered external servers, such as load balancers or proxies, to post management operations. If you do not register the load balancer or proxy server IP addresses in the accepted list, the management operations that are posted through those servers fail with a 403 Forbidden error.
Before you use a load balancer or attempt to access ECS through a proxy, ensure that you register the IP address of the server you are using in the list of trusted server names. After you upgrade to 3.8, register the load balancer and proxy IP addresses for management operations.
The knowledge base article How to prevent Host Header injection for ECS 3.8.x provides more information.
You do not have to register external servers that make Management API calls directly to the ECS nodes by using the node IP addresses. The procedure is required only for connections that pass through a proxy server or a load balancer.
This restriction does not impact data operations.

View the Getting Started Task Checklist


The Getting-started Task Checklist in the ECS Portal guides you through the initial ECS configuration. The checklist appears
when you first log in and when the portal detects that the initial configuration is not complete. The checklist automatically
appears until you dismiss it. On any ECS Portal page, in the upper-right menu bar, click the Guide icon to open the checklist.

Figure 2. Guide icon



The Getting-started Task Checklist displays in the portal.

Figure 3. Getting Started Task Checklist

1. The current step in the checklist.


2. An optional step. This step does not display a check mark even if you have completed the step.
3. Information about the current step.
4. Available actions.
5. Dismiss the checklist.
A completed step appears in green color.
A completed checklist gives you the option to browse the list again or recheck your configuration.

View the ECS Portal Dashboard


The ECS Portal Dashboard provides critical information about the ECS processes on the VDC you are currently logged in to.
The Dashboard is the first page you see after you log in. The title of each panel (box) links to the portal monitoring page that
shows more detail for the monitoring area.

Upper-right menu bar


The upper-right menu bar appears on each ECS Portal page.

Figure 4. Upper-right menu bar

Menu items include the following icons and menus:


1. The Alert icon displays a number that indicates how many unacknowledged alerts are pending for the current VDC. The
number displays 99+ if there are more than 99 alerts. You can click the Alert icon to see the Alert menu, which shows the
five most recent alerts for the current VDC.
2. The Help icon brings up the online documentation for the current portal page.
3. The Guide icon brings up the Getting Started Task Checklist.
4. The VDC menu displays the name of the current VDC. If your AD or LDAP credentials allow you to access more than one
VDC, you can switch the portal view to the other VDCs without entering your credentials.



5. The User menu displays the current user and allows you to log out. The User menu displays the last login time for the user.

View requests
The Requests panel displays the total requests, successful requests, and failed requests.
Failed requests are organized by system error and user error. User failures are typically HTTP 400 errors. System failures are
typically HTTP 500 errors. Click Requests to see more request metrics.
Request statistics do not include replication traffic.
NOTE: For partial upgrade scenarios (for example, during a 3.4 to 3.6 upgrade), nodes on 3.4 pull data from the dashboard API, whereas nodes upgraded to 3.6 pull data from the flux API. This may result in inconsistent display of data.

View capacity utilization


The Capacity Utilization panel displays the total, used, available, reserved, and percent full capacity.
NOTE: When the storage pool reaches 90% of its total capacity, it does not accept write requests and it becomes a
read-only system. A storage pool must have a minimum of four nodes and must have three or more nodes with more than
10% free capacity in order to allow writes. This reserved space is required to ensure that ECS does not run out of space
while persisting system metadata. If this criterion is not met, the write fails. The ability of a storage pool to accept writes
does not affect the ability of other pools to accept writes. For example, if you have a load balancer that detects a failed
write, the load balancer can redirect the write to another VDC.
Capacity amounts are shown in gibibytes (GiB) and tebibytes (TiB). One GiB is approximately equal to 1.074 gigabytes (GB). One
TiB is approximately equal to 1.1 terabytes (TB).
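A short sketch of the unit conversion and of the write-acceptance rule stated in the preceding note (at least four nodes, and three or more nodes with more than 10 percent free capacity). The per-node capacity figures are invented for illustration; this is not ECS code.

# Illustration only: binary units and the documented write-acceptance rule.
GIB = 2**30   # 1 GiB = 1,073,741,824 bytes (~1.074 GB)
TIB = 2**40   # 1 TiB = 1,099,511,627,776 bytes (~1.1 TB)

# Hypothetical (total, used) capacity per node in TiB for a four-node pool.
nodes = [(100, 93), (100, 85), (100, 88), (100, 60)]

def pool_accepts_writes(nodes):
    # At least four nodes, and three or more nodes with more than 10% free capacity.
    if len(nodes) < 4:
        return False
    with_headroom = sum(1 for total, used in nodes if (total - used) / total > 0.10)
    return with_headroom >= 3

print(pool_accepts_writes(nodes))  # True for the sample figures above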
The Used capacity indicates the amount of capacity that is in use. Click Capacity Utilization to see more capacity metrics.
The capacity metrics are available in the left menu.

View performance
The Performance panel displays how network read and write operations are currently performing, and the average read/write
performance statistics over the last 24 hours for the VDC.
Click Performance to see more comprehensive performance metrics.
NOTE:
● An SSD Cache Enabled label is displayed if the feature is enabled on the node. If Read Cache is disabled or the nodes do not have SSD disks, the SSD Cache Enabled label is not displayed.
● For partial upgrade scenarios (for example, during a 3.4 to 3.6 upgrade), nodes on 3.4 pull data from the dashboard API, whereas nodes upgraded to 3.6 pull data from the flux API. This may result in inconsistent display of data.

View storage efficiency


The Storage Efficiency panel displays the efficiency of the erasure coding (EC) process.
The chart shows the progress of the current EC process, and the other values show the total amount of data that is subject to
EC, the amount of EC data waiting for the EC process, and the current rate of the EC process. Click Storage Efficiency to see
more storage efficiency metrics.

View geo monitoring


The Geo Monitoring panel displays how much data from the local VDC is waiting for geo-replication, and the rate of the
replication.
Recovery Point Objective (RPO) refers to the point in time in the past to which you can recover. The value is the oldest data at
risk of being lost if a local VDC fails before replication is complete. Failover Progress shows the progress of any active failover
that is occurring in the federation involving the local VDC. Bootstrap Progress shows the progress of any active process to
add a new VDC to the federation. Click Geo Monitoring to see more geo-replication metrics.

View node and disk health


The Node & Data Disks panel displays the health status of disks and nodes.
A green check mark beside the node or disk number indicates the number of nodes or disks in good health. A red x indicates bad
health. Click Node & Data Disks to see more hardware health metrics. If the number of bad disks or nodes is a number other
than zero, clicking the count takes you to the corresponding Hardware Health tab (Offline Data Disks or Offline Nodes) on
the System Health page.
NOTE:
● If the data from failed disks has already been recovered and the failed disks are ready for replacement, they do not show in the Node & Data Disks panel. Click Manage Disks under System Health to go to Maintenance, which indicates whether there are disks that are ready for physical replacement. Alternatively, access Maintenance using the left panel menu: Manage > Maintenance.
● The maximum number of connections per node is 1000.

View alerts
The Alerts panel displays a count of critical alerts and errors.
Click Alerts to see the full list of current alerts. Any Critical or Error alerts are linked to the Alerts tab on the Events page
where only the alerts with a severity of Critical or Error are filtered and displayed.

NOTE: Alerts can also be filtered with Severity Info and Warning.

View audits
Audits can be filtered only by date and time range and by namespace.



3
Storage Pools, VDCs, and Replication Groups
Topics:
• Introduction to storage pools, VDCs, and replication groups
• Working with storage pools
• Working with VDCs in the ECS Portal
• Working with replication groups in the ECS Portal

Introduction to storage pools, VDCs, and replication


groups
This topic provides conceptual information about storage pools, virtual data centers (VDCs), and replication groups and the
following topics describe the operations that are required to configure them:
● Working with storage pools at the ECS Portal
● Working with VDCs at the ECS Portal
● Working with replication groups at the ECS Portal
The storage that is associated with a VDC must be assigned to a storage pool and the storage pool must be assigned to one or
more replication groups to allow the creation of buckets and objects.
A storage pool can be associated with more than one replication group. A best practice is to have a single storage pool for a site.
However, you can have as many storage pools as required, with a minimum of four nodes (and 16 disks) in each pool.
You might need to create more than one storage pool at a site for the following reasons:
● The storage pool is used for Cold Archive. The erasure coding scheme that is used for cold archive uses 10+2 coding rather
than the default ECS 12+4 scheme.
● A tenant requires the data to be stored on separate physical media.
CAUTION: You are not allowed to perform any management operations on storage pools, replication groups,
or namespaces until all the VDCs have been upgraded. Do not perform procedures such as extend, IP change,
network separation, or disk replacement during upgrade. Do not perform geo operations like PSO and add new
site while performing upgrade. Also, the upgrade may have enabled new bucket-level features. It is advisable not
to use new bucket-level features until all VDCs in a geo-federation have been upgraded.

NOTE:
● When the storage pool reaches 90% of its total capacity, it does not accept write requests, and it becomes a read-only
system. A storage pool must have a minimum of four nodes and must have three or more nodes with more than 10%
free capacity in order to allow writes. This reserved space is required to ensure that ECS does not run out of space while
persisting system metadata. If this criterion is not met, the write fails. The ability of a storage pool to accept writes does
not affect the ability of other pools to accept writes. For example, if you have a load balancer that detects a failed write,
the load balancer can redirect the write to another VDC.
● The maximum number of VDCs per ECS federation or replication group is eight.
● A node that is down in a single-site VDC (for example, VDC1) blocks the addition of a new second VDC (for example, VDC2).
The replication group is used by ECS for replicating data to other sites so that the data is protected and can be accessed from
other, active sites. When you create a bucket, you specify the replication group that it is in. ECS ensures that the bucket and
the objects in the bucket are replicated to all the sites in the replication group.
ECS can be configured to use more than one replication scheme, depending on the requirements to access and protect the
data. The following figure shows a replication group (RG 1) that spans all three sites. RG 1 takes advantage of the XOR storage
efficiency that is provided by ECS when using three or more sites. In the figure, the replication group that spans two sites (RG
2), contains full copies of the object data chunks and does not use XOR'ing to improve storage efficiency.



[Figure: a federation of VDC A, VDC B, and VDC C with storage pools SP 1, SP 2, and SP 3; RG 1 (SP 1, 2, 3) spans all three VDCs, and RG 2 (SP 1, 3) spans two VDCs.]

Figure 5. Replication group spanning three sites and replication group spanning two sites

The physical storage that the replication group uses at each site is determined by the storage pool that is in the replication
group. The storage pool aggregates the disk storage of each of the minimum of four nodes to ensure that it can handle the
placement of erasure coding fragments. A node cannot exist in more than one storage pool. The storage pool can span racks,
but it is always within a site.

Working with storage pools


You can use storage pools to organize storage resources based on business requirements. For example, if you require physical
separation of data, you can partition the storage into multiple storage pools.
You can go to Manage > Storage Pools > Storage Pool Management to view the details of existing storage pools, to create
storage pools, and to edit existing storage pools.
You cannot delete storage pools.
You cannot delete nodes from an existing storage pool when storage pool details are already saved in the portal UI.

Table 6. Storage pool properties


Field Description
Name The name of the storage pool.
Nodes The number of nodes that are assigned to the storage pool.
Status The state of the storage pool and of the nodes.
● Ready: At least four nodes are installed and all nodes are in the ready to use state.
● Not Ready: A node in the storage pool is not in the ready to use state.
● Partially Ready: Fewer than four nodes, and all nodes are in the ready to use state.

Host The fully qualified hostname that is assigned to the node.


Data IP The public IP address that is assigned to the node or the data IP address in a network separation
environment.
Rack ID The name that is assigned to the rack that contains the nodes.
Actions The action that can be completed for the storage pool.
● Edit: Change the storage pool name.
NOTE: The user is allowed to add nodes to a storage pool. The user is not allowed to remove nodes from a storage pool.

Cold Storage A storage pool that is specified as Cold Storage. Cold Storage pools use an erasure coding (EC)
scheme that is more efficient for infrequently accessed objects. Cold Storage is also known as a Cold
Archive. After a storage pool is created, this setting cannot be changed.

Create a storage pool


Storage pools must contain a minimum of four nodes. The first storage pool that is created is known as the system storage pool
because it stores system metadata.

Prerequisites
This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Manage > Storage Pools.
2. On the Storage Pool Management page, click New Storage Pool.
3. On the New Storage Pool page, in the Name field, type the storage pool name (for example, StoragePool1).
NOTE:
● A storage pool can only contain HDD nodes or NVMe nodes.
● Click Drive Technology to list the nodes of the same drive technology.

4. In the Cold Storage field, specify if this storage pool is Cold Storage. Cold storage contains infrequently accessed data. The
ECS data protection scheme for cold storage is optimized to increase storage efficiency. After a storage pool is created, this
setting cannot be changed.
NOTE: Cold storage requires a minimum hardware configuration of six nodes. For more information, see ECS data
protection.

5. From the Available Nodes list, select the nodes to add to the storage pool.
a. To select nodes one-by-one, click the -> icon beside each node.
b. To select all available nodes, click the + icon at the top of the Available Nodes list.
c. To narrow the list of available nodes, in the search field, type the public IP address for the node or the host name.
6. In the Available Capacity Alerting fields, select the applicable available capacity thresholds that will trigger storage pool
capacity alerts:
a. In the Critical field, select 10 %, 15 %, or No Alert.
For example, if you select 10 %, that means a Critical alert will be triggered when the available storage pool capacity is
less than 10 percent.
b. In the Error field, select 20 %, 25 %, 30 %, or No Alert.
For example, if you select 25 %, that means an Error alert will be triggered when the available storage pool capacity is
less than 25 percent.
c. In the Warning field, select 30 %, 35 %, 40 %, or No Alert.
For example, if you select 40 %, that means a Warning alert will be triggered when the available storage pool capacity is
less than 40 percent.
When a capacity alert is generated, a call home alert is also generated that alerts ECS customer support that the ECS
system is reaching its capacity limit.
7. Click Save.
8. Wait 10 minutes after the storage pool is in the Ready state before you perform other configuration tasks, to allow the
storage pool time to initialize.
If you receive the following error, wait a few more minutes before you attempt any further configuration. Error
7000 (http: 500): An error occurred in the API Service. An error occurred in the API
service.Cause: error insertVdcInfo. Virtual Data Center creation failure may occur when
Data Services has not completed initialization.



Edit a storage pool
You can change the name of a storage pool or change the set of nodes that are included in the storage pool.

Prerequisites
This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Manage > Storage Pools.
2. On the Storage Pool Management page, locate the storage pool that you want to edit in the table. Click Edit in the
Actions column beside the storage pool you want to edit.
3. On the Edit Storage Pool page:
● Nodes cannot be deleted from a storage pool if the storage pool is saved.
● To modify the storage pool name, in the Name field, type the new name.
● Drive Technology drop-down list is not editable.
● To modify the nodes included in the storage pool:
○ In the Available Nodes list, add a node to the storage pool by clicking the + icon beside the node.
● To modify the available capacity thresholds that will trigger storage pool capacity alerts, select the applicable alert
thresholds in the Available Capacity Alerting fields.
4. Click Save.

Working with VDCs in the ECS Portal


An ECS virtual data center (VDC) is the top-level resource that represents the collection of ECS infrastructure components to
manage as a unit.
You can use the Virtual Data Center Management page, available from Manage > Virtual Data Center, to:
● Create a VDC
● Federate multiple VDCs for a multi-site deployment
● Edit an existing VDC
● Update endpoints in multiple VDCs
● Delete VDCs
● View the details of existing VDCs

Table 7. VDC properties


Field Description
Name The name of the VDC.
Type The type of VDC is automatically set and can be either Hosted or on-premises.
● A Hosted VDC is hosted off-premise by a third-party data center (a backup site).
Replication Endpoints Endpoints for communication of replication data between VDCs when an ECS federation is
configured.
● By default, replication traffic runs between VDCs over the public network. By default the public
network IP address for each node is used as the Replication Endpoints.
● If a separate replication network is configured, the network IP address that is configured for
replication traffic of each node is used as the Replication Endpoints.
Management Endpoints Endpoints for communication of management commands between VDCs when an ECS federation is
configured.
● By default, management traffic runs between VDCs over the public network. By default the
public network IP address for each node is used as the Management Endpoints.

● If a separate management network is configured, the network IP address that is configured for
management traffic of each node is used for the Management Endpoints.
Status The state of the VDC.
● Online
● Permanently Failed: The VDC was deleted.
Actions The actions that can be completed for the VDC.
● Edit: Change the name of a VDC, the VDC access key, and the VDC replication and management
endpoints.
● Delete: Delete the VDC. The delete operation triggers a permanent failover of the VDC. You
cannot add the VDC again by using the same name. You cannot delete a VDC that is part of a
replication group until you first remove it from the replication group. You cannot delete a VDC
when you are logged in to the VDC you are trying to delete.
● Fail this VDC:

WARNING: Failing a VDC is permanent. The site cannot be added back.

NOTE: Fail this VDC is available when there is more than one VDC.
○ Ensure that Geo replication is up to date. Stop all writes to the VDC.
○ Ensure that all nodes of the VDC are shut down.
○ Replication to/from the VDC is disabled for all replication groups.
○ Recovery is initiated only when the VDC is removed from the replication group. Remove the VDC from the replication group next.
○ This VDC displays a status of Permanently Failed in any replication group to which it
belongs.
○ To reconstruct this VDC, it must be added as a new site. Any previous data is lost, as that data has failed over to other sites in the federation.

This topic provides conceptual information about storage pools, VDCs, and replication groups:
● Introduction to storage pools, VDCs, and replication groups

Create a VDC for a single site


You can create a VDC for a single-site deployment, or when you create the first VDC in a multi-site federation.

Prerequisites
This operation requires the System Administrator role in ECS.
Ensure that one or more storage pools are available and in the Ready state.

Steps
1. In the ECS Portal, select Manage > Virtual Data Center.
2. On the Virtual Data Center Management page, click New Virtual Data Center.
3. On the New Virtual Data Center page, in the Name field, type the VDC name (for example: VDC1).
4. To create an access key for the VDC, either:
● Type the VDC access key value in the Key field, or
● Click Generate to generate a VDC access key.
The VDC Access Key is used as a symmetric key for encrypting replication traffic between VDCs in a multi-site federation.
5. In the Replication Endpoints field, type the replication IP address of each node assigned to the VDC. Type them as a
comma-separated list.



By default replication traffic runs on the public network. Therefore by default, the IP address configured for the public
network on each node is entered here. If the replication network was separated from the public or management network,
each node's replication IP address is entered here.
If a load balancer is configured to distribute the load between the replication IP addresses of the nodes, the address
configured on the load balancer is displayed.
6. In the Management Endpoints field, type the management IP address of each node assigned to the VDC. Type them as a
comma-separated list.
By default management traffic runs on the public network. Therefore by default, the IP address configured for the public
network on each node is entered here. If the management network was separated from the public or replication network,
each node's management IP address is entered here.
7. Click Save.
When the VDC is created, ECS automatically sets the VDC's Type to either On-Premise or Hosted.
A Hosted VDC is hosted off-premise by a third-party data center (a backup site).
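As an illustration of steps 5 and 6, the Replication Endpoints and Management Endpoints fields accept comma-separated lists of IP addresses, one per node (the addresses below are hypothetical). If the replication and management networks are not separated, both fields contain the nodes' public IP addresses.

Replication Endpoints: 10.241.10.11,10.241.10.12,10.241.10.13,10.241.10.14
Management Endpoints: 10.241.20.11,10.241.20.12,10.241.20.13,10.241.20.14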

Add a VDC to a federation


You can add a VDC to an existing VDC (for example, VDC1) to create a federation. It is important when you perform this
procedure that you DO NOT create a VDC on the rack you want to add. Retrieve only the VDC Access Key from it, and then
proceed from the existing VDC (VDC1).

Prerequisites
Obtain the ECS Portal credentials for the root user, or for a user with System Administrator credentials, to log in to both VDCs.
In an ECS geo-federated system with multiple VDCs, the IP addresses for the replication and management networks are used
for connectivity of replication and management traffic between VDC endpoints. If the VDC you are adding to the federation is
configured with:
● Replication or management traffic running on the public network (default), you need the public network IP address that is
used by each node.
● Separate networks for replication or management traffic, you need the IP addresses of the separated network for each
node.
If a load balancer is configured to distribute the load between the replication IP addresses of the nodes, you need the IP address
that is configured on the load balancer.
Ensure that the VDC you are adding has a valid ECS license that is uploaded and has at least one storage pool in the Ready
state.

Steps
1. On the VDC you want to add (for example, VDC2):
a. Log in to the ECS Portal.
b. In the ECS Portal, select Manage > Virtual Data Center.
c. On the Virtual Data Center Management page, click Get VDC Access Key.
d. Select the key, and press Ctrl-c to copy it.
Important: You are only obtaining and copying the key of the VDC you want to add; you are not creating a VDC on the site you are logged in to.
e. Log out of the ECS Portal on the site you are adding.
2. On the existing VDC (for example, VDC1):
a. Log in to the ECS Portal.
b. Select Manage > Virtual Data Center.
c. On the Virtual Data Center Management page, click New Virtual Data Center.
d. On the New Virtual Data Center page, in the Name field, type the name of the new VDC you are adding.
e. Click in the Key field, and then press Ctrl-v to paste the access key you copied from the VDC you are adding (from
step 1d).
3. In the Replication Endpoints field, enter the replication IP address of each node in the storage pools that are assigned to
the site you are adding (for example, VDC2). Use the:
● Public IP addresses for the network if the replication network has not been separated.
● IP address configured for replication traffic, if you have separated the replication network.



● If a load balancer is configured to distribute the load between the replication IP addresses of the nodes, the replication
endpoint is the IP address that is configured on the load balancer.
Use a comma to separate IP addresses within the text box.

4. In the Management Endpoints fields, enter the management IP address of each node in the storage pools that are
assigned to the site you are adding (for example, VDC2). Use the:
● Public IP addresses for the network if the management network has not been separated.
● IP address configured for management traffic, if you have separated the management network.
Use a comma to separate IP addresses within the text box.

5. Click Save.

Results
The new VDC is added to the existing federation. The ECS system is now a geo-federated system. When you add the VDC to
the federation, ECS automatically sets the type of the VDC to either On-Premise or Hosted.

Next steps
NOTE: If the External Key Manager (EKM) feature is activated for the federation, you must add the necessary VDC-to-EKM mapping for the newly added VDC in the key management section.
To complete the configuration of the geo-federated system, you must create a replication group that spans multiple VDCs so
that data can be replicated between the VDCs. To do this, you must ensure that:
● You have created storage pools in the VDCs that will be in the replication group (see Create a storage pool).
● You create the replication group, selecting the VDCs that provide the storage pools for the replication group (see Create a
replication group).
● After you create the replication group, you can monitor the copying of user data and metadata to the new VDC that you
added to the replication group on the Monitor > Geo Replication > Geo Bootstrap Processing tab. When all the user
data and metadata are successfully replicated to the new VDC, the Bootstrap State is Done and the Bootstrap Progress
(%) is 100 on all the VDCs.
NOTE: All the existing data is copied to a new VDC when it is added to an existing replication group. Retention is maintained, and no additional data copy or migration tooling is required.

Edit a VDC
You can change the name, the access key, or the replication and management endpoints of the VDC.

Prerequisites
This operation requires the System Administrator role in ECS.
If you have an ECS geo-federated system, and you want to update VDC endpoints after you:
● Separated the replication or management networks for a VDC, or
● Changed the IP addresses of multiple nodes in a VDC.
Use the update endpoints in multiple VDCs procedure. If you attempt to update the VDC endpoints by editing the settings for an
individual VDC from the Edit Virtual Data Center < VDC name > page, you lose connectivity between VDCs.

Steps
1. In the ECS Portal, select Manage > Virtual Data Center.
2. On the Virtual Data Center Management page, locate the VDC you want to edit in the table. Click Edit in the Actions
column beside the VDC you want to edit.
3. On the Edit Virtual Data Center < VDC name > page:
● To modify the VDC name in the Name field, type the new name.
● To modify the VDC access key for the node you are logged into, in the Key field, type the new key value, or click
Generate to generate a new VDC access key.
● To modify the replication and management endpoints:



Table 8. Options to modify replication and management endpoints

Option: For a single VDC that is not part of an ECS federation.
Description:
a. In the Replication Endpoints field, enter the IP addresses for the replication endpoints. Use the:
● Public IP addresses for the network, if you did not separate the replication network.
● IP address configured for replication traffic, if you separated the replication network.
● IP address configured on the load balancer, if you configured a load balancer to distribute the load between the replication IP addresses of the nodes.
b. In the Management Endpoints field, enter the IP addresses for the management endpoints. Use the:
● Public IP addresses for the network, if you did not separate the management network.
● IP address configured for management traffic, if you separated the management network.
Use a comma to separate IP addresses within the text boxes.

Option: For a VDC that is part of an ECS federation, and for which you have separated the replication or management networks or changed multiple node IP addresses.
Description: Do not modify endpoints on the Edit Virtual Data Center < VDC name > page; you must modify the endpoints on the Update All VDC Endpoints page as described in Update replication and management endpoints in multiple VDCs.
NOTE: Updating endpoints on ECS requires the assistance of ECS Professional Services. Any change to endpoints without engaging ECS Remote Support could potentially lead to DU.

4. Click Save.

Update replication and management endpoints in multiple VDCs


In an ECS geo-federated system, you can use the Update All VDC Endpoints page to update replication and management
endpoints in multiple VDCs.

Prerequisites
NOTE: Updating endpoints on ECS requires the assistance of ECS Professional Services. Any change to endpoints without engaging ECS Remote Support could potentially lead to DU.
This operation requires the System Administrator role in ECS.
Update the VDC endpoints from the Update All VDC Endpoints page after you have:
● Separated the replication or management networks of a VDC in an existing ECS federation.
● Changed the IP address of multiple nodes in a VDC.
If you attempt to update the VDC endpoints by editing the settings for an individual VDC from the Edit Virtual Data Center
< VDC name > page, you will lose connectivity between VDCs.

About this task


In an ECS geo-federated system with multiple VDCs, the IP addresses configured for ECS replication and management networks
are used for connectivity of replication and management traffic between VDC endpoints. By default, all the ECS replication and
management traffic is configured to run on the ECS public network, and by default the IP addresses of the public network that
is configured for each node are used as the replication and management endpoints.
Optionally, you can separate the replication and management networks, and you must reconfigure the replication and
management endpoints for a VDC.

Steps
1. In the ECS Portal, select Manage > Virtual Data Center.
2. On the Virtual Data Center Management page, click Update All VDC Endpoints.
3. On the Update All VDC Endpoints page, if the replication network was separated, or if the IP address for the node was
changed, type the replication IP address of each node in the Replication Endpoints field for the VDC. Type them as a
comma-separated list.



4. On the Update All VDC Endpoints page, if the management network was separated, or if the IP address for the node was
changed, type the management IP address of each node in the Management Endpoints field for the VDC. Type them as a
comma-separated list.
5. Click Update Endpoints.

Remove VDC from a Replication Group


You can remove a VDC from a replication group (RG) in a multi-VDC federation without affecting the VDC or other RGs associated with the VDC. Removing a VDC from an RG no longer initiates a PSO; it initiates recovery.

Prerequisites
CAUTION: See Minimum Requirements to Remove VDC from a Replication Group before proceeding with the
below steps.
This operation requires the System Administrator role in ECS.
Restrictions to Remove VDC:
● You cannot log in to a VDC and remove the same VDC.
● If any VDC in the replication group is off, you cannot remove a VDC.
● If it is the only VDC in a replication group, you cannot remove a VDC.
● If any VDC in the replication group has a Bootstrap or Failover process in progress, you cannot remove a VDC.
● You cannot remove more than one VDC at a time.
● If the system is not fully upgraded, you cannot remove a VDC.
You cannot delete a VDC when it is still associated with any replication groups.
It is a prerequisite to stop any workload to the system and wait until data replication is completed between the VDCs.

NOTE: The time taken to complete data replication depends on the workload and network condition.

Steps
1. Log in to the ECS Portal.
2. In the ECS Portal, select Manage > Virtual Data Center, check to ensure the VDC you want to remove is online and
working correctly.
3. Select Manage > Replication Group and click Edit for the corresponding replication group.
The Edit Replication Group window opens.
4. Click Delete... for the VDC you want to delete.
The Confirm Remove VDC window opens. Read all the important notes before you select the check box to confirm removal of the VDC. Click OK.
5. Click Save in the Replication Group Management Window.
6. Select Monitoring > Geo Replication > Failover Processing. The Failover Progress column must show 100% done on all
remaining VDCs in the replication group.
See Guidelines to check failover and bootstrap process procedure for details.
7. Select Monitoring > Geo Replication > Bootstrap Processing. The Bootstrap Progress (%) column must show 100%
done on all remaining VDCs in the replication group.
The time that is taken for both the Failover process and the Bootstrap Process to complete depends on the amount of data
on the failed VDC.
See Guidelines to check failover and bootstrap process procedure for details.

Next steps
NOTE: You may have to wait for 5 to 10 minutes before the Failover and the Bootstrap process show the status on the
page.



Fail a VDC (PSO)
Failing a VDC or permanent site outage (PSO) is done from the VDC level. Removing a VDC from the replication group (RG)
initiates recovery, but a failed VDC does not initiate recovery.

Prerequisites

CAUTION: See Minimum Requirements to Trigger PSO before proceeding with the below steps.

This operation requires the System Administrator role in ECS.


Stop any workload to the system and wait until data replication is completed between the VDCs.

NOTE: The time taken to complete data replication depends on the workload and network condition.

Wait for more than 15 minutes after you power off the VDC for the system to confirm that the VDC is off. For unplanned PSO,
ensure the VDC is not accessible for more than 15 minutes.
Restrictions to fail a VDC.
● You cannot fail a VDC that is powered on. If a VDC is powered off for less than 15 minutes, the system does not detect that the VDC is off, which causes the Fail this VDC operation to fail.

Steps
1. Log in to the ECS Portal.
2. In the ECS Portal, select Manage > Virtual Data Center. Click Edit and select Fail this VDC. Wait for a few minutes to
ensure the VDC status shows Permanent Site Outage.
See the restrictions in the prerequisites section for limitations of this operation.
NOTE: Until the failover initiated in Step 3 is complete, there may be DU for objects that are owned by the VDC that is being failed (PSO).

3. Select Manage > Replication Group, click Edit, and remove the failed VDC from each replication group.
See the restrictions in the prerequisites section for limitations of this operation.
4. Select Monitoring > Geo Replication > Failover Processing. The Failover Progress column must show 100% done on all
remaining VDCs in the replication group.
5. Select Monitoring > Geo Replication > Bootstrap Processing. The Bootstrap Progress (%) column must show 100%
done on all remaining VDCs in the replication group.
The time that is taken for both the Failover process and the Bootstrap Process to complete depends on the amount of data
on the failed VDC.
NOTE: You may have to wait for 5 to 10 minutes before the Failover and the Bootstrap process show the status on the
page.

6. Remove the VDC only after the Failover Progress shows 100%. Select Manage > Virtual Data Center, click Edit, and delete the permanently failed VDC. If the VDC is still associated with any of the replication groups, this operation fails.
NOTE: After this step, Failover Processing data becomes unavailable.

Guidelines to check failover and bootstrap process


The following section describes how to check that the failover and bootstrap processes have completed:

Prerequisites
This operation requires the System Administrator role in ECS.

Steps
1. Log in to each of the VDCs in the replication group (RG) and check the following:
2. Select Monitoring > Geo Replication > Failover Processing and check the Failover Progress column for the VDCs. An additional row shows the failed VDC and its failover progress. The Failover Progress column shows 100% when the process is complete.
3. Select Monitoring > Geo Replication > Bootstrap Processing. Additional rows are shown for each of the remaining VDCs except the VDC that you are logged in to. The process is complete when the bootstrap progress shows 100%.

Checking failover and bootstrap process in different cases


2-site case (VDC1 and VDC2 exist; VDC2 is removed from the RG)
Failover process: Following the guidelines in the procedure, the UI of VDC1 shows one additional row with VDC2 as failed and its failover status.
Bootstrap process: There are no remaining zones in the RG other than VDC1, so no additional row is shown.
3-site case (VDC1, VDC2, and VDC3 exist; VDC3 is removed from the RG)
Failover process: The UIs of VDC1 and VDC2 each show one additional row with VDC3 as failed and its failover status.
Bootstrap process: If you log in to VDC1, VDC2 is the remaining zone, so you see an extra row showing the bootstrap progress of VDC2. Similarly, if you log in to VDC2, you see an extra row showing the bootstrap progress of VDC1.
4-site case (VDC1, VDC2, VDC3, and VDC4 exist; VDC4 is removed from the RG)
Failover process: The UIs of VDC1, VDC2, and VDC3 each show one additional row with VDC4 as failed and its failover status.
Bootstrap process:
If you log in to VDC1, VDC2 and VDC3 are the remaining zones, so you see two extra rows showing the bootstrap progress of VDC2 and VDC3.
If you log in to VDC2, VDC1 and VDC3 are the remaining zones, so you see two extra rows showing the bootstrap progress of VDC1 and VDC3.
If you log in to VDC3, VDC1 and VDC2 are the remaining zones, so you see two extra rows showing the bootstrap progress of VDC1 and VDC2.

Working with replication groups in the ECS Portal


You can use replication groups to define where storage pool content is protected. Replication groups can be local or global.
Local replication groups do not replicate data to other VDCs, but protect objects within the same VDC against disk or node
failures using mirroring and erasure coding techniques. Global replication groups protect objects by replicating them to another
site within an ECS federation and, by doing so, protect against site failures.
You can use the Replication Group Management page to:
● create new replication groups
● edit existing replication groups
● view the details of existing replication groups
You cannot delete replication groups in this release.
NOTE: Do not create more than eight replication groups in a cluster. If there is a requirement to have more than eight
replication groups in a cluster, contact ECS Remote Support.

Table 9. Replication Group properties


Field Description
Name The name of the replication group.
Type The replication type can be Active or Passive. Passive means that a site is designated as the target for
replication data and is only available when there is a minimum of three sites. Passive cannot be selected
if Replicate to All Sites is On. If you have a Hosted site, it will automatically be selected as the target
for replication in a Passive configuration.
VDC The number of VDCs in the replication group and the names of the VDCs where the storage pools are
located.
Storage Pool The names of the storage pools and their associated VDCs. A replication group can contain a storage
pool from each VDC in a federation.
Target The storage pool in the replication group that is the replication target in a Passive configuration.
Status The state of the replication group.
● Online
● Temp Unavailable: Replication traffic to this VDC has failed. If all replication traffic to the same VDC
is in the Temp Unavailable state, further investigation about the cause of the failure is recommended.
Actions The actions that can be completed for the replication group. Edit: Modify the replication group name and
the set of VDCs and storage pools in the replication group.

This topic provides conceptual information about storage pools, VDCs, and replication groups:
● Introduction to storage pools, VDCs, and replication groups

Create a replication group


Replication groups can be local to a VDC or can protect data by replicating data across sites.

Prerequisites
This operation requires the System Administrator role in ECS.
If you want the replication group to span multiple VDCs, you must ensure that the VDCs are federated (joined) to the primary
VDC, and that storage pools have been created in the VDCs that will be included in the replication group.
NOTE: The geo-replicated data is encrypted using AES256 when sent from one node to another. However, ECS 3.7 does not
use TLS for geo-replication. For enhanced security of traffic in transit, users must use a VPN (or other such secure network
channel) when enabling and using geo-replication.

Steps
1. In the ECS Portal, select Manage > Replication Group.
2. On the Replication Group Management page, click New Replication Group.
3. On the New Replication Group page, in the Name field, type a name (for example, ReplicationGroup1).
NOTE:
● A replication group can only contain HDD storage pools or EXF900 storage pools.
● Click Drive Technology to list the storage pools of the same drive technology.

4. Optionally, in the Replicate to All Sites field, click On for this replication group. You can only turn this setting on when you
create the replication group; you cannot turn it off later.
For a Passive configuration, leave this setting Off.

Replicate to All Sites Off: The replication group uses default replication. With default replication, data is stored at the primary
site and a full copy is stored at a secondary site chosen from the sites within the replication group. The secondary copy is
protected by triple-mirroring and erasure coding. This process provides data durability with storage efficiency.

Replicate to All Sites On: The replication group makes a full readable copy of all objects to all sites (VDCs) within the replication
group. Having full readable copies of objects on all VDCs in the replication group provides data durability and improves local
performance at all sites at the cost of storage efficiency.



5. In the Geo Replication Type field, click Active or Passive. Active is the default ECS configuration. You cannot change this
setting after you create the replication group.
Passive is available only when you have three or more sites.
Passive cannot be selected if Replicate to All Sites is On.
For more information about Active and Passive data protection schemes, see Configurations for availability, durability, and
resilience.
6. Click Add VDC to add storage pools from VDCs to the replication group.
The steps to add storage pools to a replication group depend on whether you have a single VDC, an Active environment, or a
Passive environment.
7. To add storage pools to an Active (or to a single site) configuration, use the steps below.
a. From the Virtual Data Center list, select the VDC that will provide a storage pool for the replication group.
b. From the Storage Pool list, select the storage pool that belongs to the selected VDC.
c. To include other sites in the replication group, click Add VDC.
d. Repeat these steps for each storage pool that you want to add to the replication group.
8. To add storage pools for a Passive configuration, complete the following steps.
a. In the Target VDC for Replication Virtual Data Center list, select the VDC that you want to add as the replication
target.
If you have a Hosted VDC that is hosted off-premise by a third-party data center (a backup site), it is automatically
selected as the replication target.
b. In the Target VDC for Replication Storage Pool list, select the storage pool that belongs to the selected VDC.
c. In each of the two Source VDC for Replication Virtual Data Center lists, select the VDC that you want to add as the
active site.
d. In each of the two Source VDC for Replication Storage Pool lists, select the storage pool that belongs to each
selected VDC. These storage pools will provide storage at the two active sites.
9. Click Save.

Next steps
After you create the replication group, you can monitor the copying of user data and metadata to the new VDC that you added
to the replication group on the Monitor > Geo Replication > Geo Bootstrap Processing tab. When all the user data and
metadata is successfully replicated to the new VDC, the Bootstrap State is Done and the Bootstrap Progress (%) is 100.
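If you script the creation workflow, you can also confirm that the new replication group is visible through the ECS Management REST API. The sketch below is an assumption: the /vdc/data-service/vpools resource is the path commonly documented for replication groups in the ECS Management REST API reference, so verify it for your release; $TOKEN holds the X-SDS-AUTH-TOKEN header returned by the Management API login, and <Node IP> is a placeholder.

# Assumption: replication groups are exposed as data service vpools in the
# ECS Management REST API; the call lists the replication groups visible to this VDC.
curl -ks -H "$TOKEN" "https://<Node IP>:4443/vdc/data-service/vpools"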

Edit a replication group


You can change the name of the replication group or change the set of VDCs and storage pools in the replication group.

Prerequisites
This operation requires the System Administrator role in ECS.

About this task


CAUTION: In a multisite federation, you can edit a replication group (RG) and choose to delete a VDC from one
or more replication groups to which it belongs. Removing a VDC from a RG no longer removes the VDC from the
federation. It only removes it from the RG and triggers recovery for that RG. In ECS, each object has a primary,
or owning VDC. The time that is taken to complete this failover process depends on the amount of data that
must be moved. If you created the replication group with the Replicate to All Sites setting turned on, the time
taken to move all data to the remaining sites is short, as a copy exists at all sites.
You cannot edit the Replicate to All Sites or Geo Replication Type settings. After you set these options when you first
create the replication group, they cannot be changed.

Steps
1. In the ECS Portal, select Manage > Replication Group.
2. On the Replication Group Management page, beside the replication group you want to edit, click Edit.
3. On the Edit Replication Group page,
● To modify the replication group name, in the Name field, type the new name.



● To add a VDC to the replication group, click Add VDC and select the VDC and storage pool from the list.
● To delete a VDC from the replication group, click the Delete button beside the VDC (and its storage pool).

CAUTION:
○ Deleting a VDC from one or more replication groups to which it belongs means that you are removing
this VDC from the replication group and not from the federation.
○ A VDC removed from a replication group cannot be added back to the same replication group
after deletion.
○ Ensure that geo-replication is up-to-date. Stop all writes to the VDC.
○ Ensure that nodes are shut down only for the failing (PSO) VDC at the federation level.
○ Recovery is initiated only when the VDC is removed from the replication group. Proceed to do that
next.
○ The VDC displays a status of Permanently Failed only when it has failed, not when it has been removed.
○ To reconstruct a failed VDC, add it as a new site. Any previous data is lost, as that data will have
failed over to other sites in the federation.

4. Click Save.

Delete a replication group


You can delete the last VDC associated with a user replication group when the VDC fails permanently.

Prerequisites
This operation requires the System Administrator role in ECS.

About this task


You can delete the failed VDC from the system after all references to it in user replication groups have been removed. After the
last VDC is removed, the replication group no longer appears in the list of user replication groups and can no longer be used.
Any attempt to perform operations on the deleted replication group, such as adding an active VDC to it, results in an error.

Steps
1. Shut down the VDC and wait for 15 minutes.
Replication group (RG) with a single VDC: In RG level, the shutdown VDC status is shown as Unattainable.
Replication group (RG) with more than one VDC: In RG level, the shutdown VDC status is shown as Temporarily
Unavailable.
2. Fail the shutdown VDC through REST API.
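The command in this step passes a management session token in the $TOKEN variable. A minimal sketch of obtaining that token, assuming the standard ECS Management REST API login endpoint on port 4443 and the X-SDS-AUTH-TOKEN response header (verify both against the ECS API documentation for your release; <admin_user>, <password>, and <Node IP> are placeholders):

# Log in with a System Administrator account; the X-SDS-AUTH-TOKEN header returned
# by /login is reused verbatim as the authentication header in later calls.
TOKEN=$(curl -iks -u <admin_user>:<password> "https://<Node IP>:4443/login" | grep X-SDS-AUTH-TOKEN | tr -d '\r')
# $TOKEN now holds a string of the form "X-SDS-AUTH-TOKEN: <token value>", which is
# why it can be passed directly with -H "$TOKEN" in the fail command below.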

curl -k -X POST -H Content-Type:application/json -H "$TOKEN" -H ACCEPT:application/json https://<Node IP>:4443/object/vdcs/vdc/<VDC ID>/fail?overridePrechecks=true

NOTE: An error message displays while trying to fail shutdown VDC from ECS UI.

Unable to Fail Virtual Data Center VDC2. Error 40040 (http: 400): Please contact customer
support when failing a VDC with replication groups only associated with a single VDC to
prevent potential data loss. To prevent potential data loss please contact customer
support for assistance failing a VDC associated with single VDC replication groups.

3. At the VDC and RG level, the state of the shutdown VDC is shown as Permanently Failed.
● If there is a single VDC, the associated RG gets a new drop-down. To delete the RG, select Delete.
● If the shutdown VDC is associated with other RGs, to delete the RG, select Edit > Remove.



NOTE: If you try to delete the failed VDC at the federation or VDC level while the failed VDC is still associated with any RG,
an error message is displayed.

Error 1007 (http: 405): Method not supported. Operation not supported. Reason:
Zone still referenced by one or more User Replication Groups.

4. To delete a failed VDC, select the VDC, and from the drop-down of the failed VDC, select Delete.



4
Authentication Providers
Topics:
• Introduction to authentication providers
• Working with authentication providers in the ECS Portal

Introduction to authentication providers


You can add authentication providers to ECS if you want users to be authenticated by systems external to ECS.
An authentication provider is a system that is external to ECS that can authenticate users on behalf of ECS. ECS stores the
information that allows it to connect to the authentication provider so that ECS can request authentication of a user.
In ECS, the following types of authentication provider are available:
● Active Directory (AD) authentication or Lightweight Directory Access Protocol (LDAP) authentication: Used to authenticate
domain users that are assigned to management roles in ECS.
● Keystone: Used to authenticate OpenStack Swift object users.
Authentication providers can be created from the ECS Portal or by using the ECS Management REST API or CLI. You can use
the following procedures to create AD/LDAP or Keystone authentication providers.
● Add an AD or LDAP authentication provider
● Add a Keystone authentication provider

Working with authentication providers in the ECS Portal
You can use the Authentication Provider Management page available from Manage > Authentication to view the details
of existing authentication providers, to add AD/LDAP or Keystone authentication providers, to edit existing authentication
providers, and to delete authentication providers.

Table 10. Authentication provider properties


Field Description
Name The name for the authentication provider.
Type The type of authentication provider. The authentication provider is an Active Directory (AD),
Lightweight Directory Access Protocol (LDAP), or Keystone V3 server.
Domains The domains that the authentication provider provides access to.
Status Indicates whether the authentication provider is Enabled or Disabled.
Actions The actions that can be completed for the authentication provider.
● Edit:
○ Change the AD/LDAP authentication provider settings.
○ Change the Keystone authentication provider settings.
● Delete: Delete the authentication provider.

● New Authentication Provider button: Add an authentication provider.

Considerations when adding Active Directory authentication providers
When you configure ECS to work with Active Directory (AD), you must decide whether to add a single AD authentication
provider to manage multiple domains, or to add separate AD authentication providers for each domain.
The decision to add a single AD authentication provider, or multiple, depends on the number of domains in the environment,
and the location on the tree from which the manager user can search. Authentication providers have a single search base from
which the search begins, and a single manager account that has read access at the search base level and below.
You can add a single authentication provider for multiple domains in the following conditions:
● You manage an AD forest
● The manager account has privileges to search all user entries in the tree
● The search is conducted throughout the whole forest from a single search base, not just the domains listed in the provider
Otherwise, add separate authentication providers for each domain.
NOTE: If you manage an AD forest and you have the necessary manager account privileges, there are scenarios in which
you might still want to add an authentication provider for each domain. For example, if you want tight control on each
domain and more granularity on setting the search base starting point for the search.
The search base must be high enough in the directory structure of the forest for the search to correctly find all the users in the
targeted domains. The following search examples describe the best options for adding either single or multiple authentication
providers:
● In the scenario where the forest in the configuration contains ten domains but you want to target only three, you would not
want to add a single authentication provider to manage multiple domains, because the search would unnecessarily span the
whole forest. This might adversely affect performance. In this case, you should add three separate authentication providers,
one for each targeted domain.
● In the scenario where the forest in the configuration contains ten domains and you want to target ten domains, adding a
single authentication provider to manage multiple domains is a good choice, because there is less overhead to set up.

AD or LDAP authentication provider settings


Provide authentication provider information when you add or edit an AD or LDAP authentication provider. You can customize an
LDAP certificate for ECS authentication.

Table 11. AD or LDAP authentication provider settings


Field Description and requirements
Name The name of the authentication provider. You can have multiple providers for different domains.
Description Free text description of the authentication provider.
Type The type of authentication provider. Active Directory or LDAP.
Domains The collection of administratively defined objects that share a common directory database, security
policies, and trust relationships. A domain can span multiple physical locations or sites and can
contain millions of objects.

Example: mycompany.com

If an alternate UPN suffix is configured in the Active Directory, the Domains field should also contain
the alternate UPN configured for the domain. For example, if myco is added as an alternate UPN suffix
for mycompany.com, then the Domains field should contain both myco and mycompany.com.

Server URLs The LDAP or LDAPS (secure LDAP) URL with the domain controller FQDN or IP address. The default port
for LDAP is 389. The default port for LDAPS is 636.

You can specify one or more LDAP or LDAPS server URLs.

Example: ldap://<Domain controller FQDN>:<port> (if not the default port) or
ldaps://<Domain controller FQDN>:<port> (if not the default port)

If the authentication provider supports a multidomain forest, use the global catalog server IP and
always specify the port number. The default port for LDAP is 3268. The default port for LDAPS is
3269.

Example:
ldap(s)://<Global catalog server FQDN>:<port>

Manager DN The Active Directory Bind user account that ECS uses to connect to the Active Directory or LDAP
server. This account is used to search Active Directory when an ECS administrator specifies a user
for role assignment.

This user account must have the Read all inetOrgPerson information privilege in Active Directory. The
InetOrgPerson object class is used in several non-Microsoft LDAP and X.500 directory services
to represent people in an organization.

To set this privilege in Active Directory:

1. Open Active Directory Users and Computers.


2. Right-click the domain, select Delegate Control, and then click Next.
3. In the Delegation of Control wizard, click Next, and then click Add.
4. In the Select Users, Computers, or Groups dialog box, select the user that you are using for
managerdn, and then click Next.
5. In the Tasks to Delegate page, in Delegate the following common tasks, check the Read
all inetOrgPerson information task, and then click Next.
6. Click Finish.
In this example, CN=Manager,CN=Users,DC=mydomaincontroller,DC=com, the Active Directory Bind user is
Manager, in the Users tree of the mydomaincontroller.com domain. Usually managerdn is a user who has
fewer privileges than Administrator, but has sufficient privileges to query Active Directory for user
attributes and group information.

Important: You must update this user account in ECS if the managerdn credentials change in Active Directory.

Manager Password The password of the managerdn user.

Important: You must update this password in ECS if the managerdn credentials change in Active Directory.

Providers This setting is Enabled by default when adding an authentication provider. ECS validates the
connectivity of the enabled authentication provider and that the name and domain of the enabled
authentication provider are unique.


Select Disabled only if you want to add the authentication provider to ECS, but you do not
immediately want to use it for authentication. ECS does not validate the connectivity of a disabled
authentication provider, but it does validate that the authentication provider name and domain are
unique.

Group Attribute This attribute applies to Active Directory and LDAP.

The AD attribute that is used to identify a group. Used for searching the directory by groups.

Example:
CN

NOTE: After you set this attribute for an AD authentication provider, you cannot change it,
because the tenants using this provider might already have role assignments and permissions
that are configured with group names in a format that uses this attribute.

Group allowlist This setting applies to Active Directory and LDAP.

Optional. One or more group names as defined by the authentication provider. This setting filters the
group membership information that ECS retrieves about a user.

● When a group or groups are in the allowlist, ECS is aware only of the membership of a user
in the specified groups. Multiple values (one value on each line in the ECS Portal, and values
comma-separated in CLI and API) and wildcards (for example MyGroup*, TopAdminUsers*) are
allowed.
● The default setting is blank. ECS is aware of all groups that a user belongs to. Asterisk (*) is the
same as blank.
Example: UserA belongs to Group1 and Group2.

If the allowlist is blank, ECS knows that UserA is a member of Group1 and Group2.

If the allowlist is Group1, ECS knows that UserA is a member of Group1, but does not know that UserA
is a member of Group2 (or of any other group).

Use care when adding an allowlist value. For example, if you map a user to a namespace based
on group membership, then ECS must be aware of the user's membership in the group.

To restrict access to a namespace to only users of certain groups, complete the following tasks.

● Add the groups to the namespace user mapping . The namespace is configured to accept only
users of these groups.
● Add the groups to the allowlist. ECS is authorized to receive information about them.

By default, if no groups are added to the namespace user mapping, users from any groups are
accepted, regardless of the allowlist configuration.

Group Object Classes This setting applies only to LDAP. It does not apply to other types of authentication providers.
Object classes that represent groups in a specified LDAP server, one per line. If this field is empty,
then authorization by an LDAP group is not available.

Example:
groupOfNames
groupOfUniqueNames

Group Member Attribute This setting applies only to LDAP. It does not apply to other types of authentication providers.
Group member attributes used in a specified LDAP server, one per line. If this field is empty, then
authorization by an LDAP group is not available.

Example:
member
uniqueMember

Search Scope The levels to search. Possible values are:


● One Level (search for users one level under the search base)
● Subtree (search the entire subtree under the search base)
Search Base The Base Distinguished Name that ECS uses to search for users or AD groups at login time and when
assigning roles or setting ACLs.

The following example searches for all users in the Users container:

CN=Users,DC=mydomaincontroller,DC=com

The following example searches for all users in the Users container in the myGroup organizational
unit. Note that the structure of the search base value begins with the leaf level and goes up to the
domain controller level, which is the reverse of the structure seen in the Active Directory Users and
Computers snap-in.

CN=Users,OU=myGroup,DC=mydomaincontroller,DC=com

Search Filter The string that is used to select subsets of users.

Example:
userPrincipalName=%u

NOTE: ECS does not validate this value when you add the authentication provider.

If an alternate UPN suffix is configured in the Active Directory, the Search Filter value must be of
the format sAMAccountName=%U, where %U is the username and does not contain the domain name.
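Before adding the provider in the ECS Portal, you can optionally verify the Manager DN credentials, search base, and search filter from any host that has the OpenLDAP client tools installed. This is a minimal sketch; the server, DN, filter, and user values reuse the illustrative examples from the table above and must be replaced with your own:

# Bind as the managerdn account and run the configured search filter under the
# configured search base; a successful result confirms that the account can read
# user entries before the provider is added to ECS.
ldapsearch -H ldap://mydomaincontroller.com:389 \
  -D "CN=Manager,CN=Users,DC=mydomaincontroller,DC=com" -W \
  -b "CN=Users,DC=mydomaincontroller,DC=com" \
  "(userPrincipalName=someuser@mycompany.com)" cn memberOf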

Configuration settings for users using a combination of AD and LDAP authentication providers
NOTE: These settings are also applicable for users using custom LDAP search queries.

1. Take a backup of /opt/storageos/conf/auth-head-conf.xml:

viprexec -i -c 'cp /opt/storageos/conf/auth-head-conf.xml /opt/storageos/conf/auth-head-conf.xml.bak'

2. Change the value of the property name="switchToLdapFromAD" from false to true:

viprexec -i -c 'sed -i "s/\"switchToLdapFromAD\" value=\"false\"/\"switchToLdapFromAD\" value=\"true\"/" /opt/storageos/conf/auth-head-conf.xml'

3. Restart objcontrolsvc on all nodes:

viprexec -i 'pidof objcontrolsvc; kill `pidof objcontrolsvc`;sleep 60;pidof objcontrolsvc'
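As a quick check after step 2, and again after the restart, you can confirm that the property was changed on every node. This is a minimal sketch using the same viprexec wrapper as the steps above:

# Print the switchToLdapFromAD property from each node's auth-head configuration;
# every node should now report value="true".
viprexec -i -c 'grep switchToLdapFromAD /opt/storageos/conf/auth-head-conf.xml'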

Add an AD or LDAP authentication provider
You can add one or more authentication providers to ECS to perform user authentication for ECS domain users.

Prerequisites
● This operation requires the Security Administrator role in ECS.
● You need access to the authentication provider information listed in AD/LDAP authentication provider settings. Note
especially the requirements for the Manager DN user.

Steps
1. In the ECS Portal, select Manage > Authentication.
2. On the Authentication Provider Management page, click New Authentication Provider.
3. On the New Authentication Provider page, type values in the fields. For more information about these fields, see AD/
LDAP authentication provider settings.
4. Click Save.
5. To verify the configuration, add a user from the authentication provider at Manage > Users > Management Users, and
then try to log in as the new user.

Next steps
If you want these users to perform ECS object user operations, add (assign) the domain users into a namespace. For more
information, see Add domain users into a namespace.

Add a Keystone authentication provider


You can add a Keystone authentication provider to authenticate OpenStack Swift users.

Prerequisites
● This operation requires the Security Administrator role in ECS.
● You can add only one Keystone authentication provider.
● Obtain the authentication provider information listed in Keystone authentication provider settings.

Steps
1. In the ECS Portal, select Manage > Authentication.
2. On the Authentication Provider Management page, click New Authentication Provider.
3. On the New Authentication Provider page, in the Type field, select Keystone V3.
The required fields are displayed.
4. Type values in the Name, Description, Server URL, Keystone Administrator, and Admin Password fields. For more
information about these fields, see Keystone authentication provider settings.
5. Click Save.

Keystone authentication provider settings


You must provide authentication provider information when you add or edit a Keystone authentication provider.

Table 12. Keystone authentication provider settings


Field Description
Name The name of the Keystone authentication provider. This name is used to identify
the provider in ECS.
Description Free text description of the authentication provider.
Type Keystone V3.

Server URL URL of the Keystone system that ECS connects to in order to validate Swift users.
Keystone Administrator Username for an administrator of the Keystone system. ECS connects to the
Keystone system using this username.
Admin Password Password of the specified Keystone administrator.
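Before saving the provider, you can optionally confirm that the Server URL is reachable and points at a Keystone V3 endpoint. A minimal sketch using curl; the host and port are illustrative (Keystone commonly listens on port 5000 with a /v3 path), so use whatever URL your OpenStack deployment exposes:

# An unauthenticated GET on the Keystone version root returns a JSON document
# describing the v3 API when the URL is correct and reachable.
curl -s "http://<keystone-host>:5000/v3/"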

5
Namespaces
Topics:
• Introduction to namespaces
• Working with namespaces in the ECS Portal

Introduction to namespaces
You can use namespaces to provide multiple tenants with access to the ECS object store and to ensure that the objects and
buckets written by users of each tenant are segregated from the other tenants.
ECS supports access by multiple tenants, where each tenant is defined by a namespace and the namespace has a set of
configured users who can store and access objects within the namespace. Users from one namespace cannot access the
objects that belong to another namespace.
Namespaces are global resources in ECS. A System Administrator or Namespace Administrator can access ECS from any
federated VDC and can configure the namespace settings. The object users that you assign to a namespace are global and can
access the object store from any federated VDC.
You configure a namespace with settings that define which users can access the namespace and what characteristics the
namespace has. Users with the appropriate privileges can create buckets, and can create objects within buckets, in the
namespace.
You can use buckets to create subtenants. The bucket owner is the subtenant administrator and can assign users to the
subtenant by using access control lists (ACLs). However, subtenants do not provide the same level of segregation as tenants.
Any user assigned to the tenant could be assigned privileges on a subtenant, so care must be taken when assigning users.
An object in one namespace can have the same name as an object in another namespace. ECS can identify objects by the
namespace qualifier.
You can configure namespaces to monitor and meter their usage, and you can grant management rights to the tenant so that it
can perform configuration, monitoring, and metering operations.
In the ECS Portal you can:
● create new namespaces
● edit namespaces
● delete namespaces
The namespace configuration tasks that you can perform in the ECS Portal can also be performed using the ECS Management
REST API.

Namespace tenancy
A System Administrator can set up namespaces in the following tenant scenarios:

Enterprise single tenant: All users access buckets and objects in the same namespace. Buckets can be created for subtenants,
to allow a subset of namespace users to access the same set of objects. For example, a subtenant might be a department
within the organization.

Enterprise multitenant: Departments within an organization are assigned to different namespaces and department users are
assigned to each namespace.

Cloud Service Provider single tenant: A single namespace is configured and the Service Provider provides access to the object
store for users within the organization or outside the organization.

Cloud Service Provider multitenant: The Service Provider assigns namespaces to different companies and assigns an
administrator for the namespace. The Namespace Administrator for the tenant can then add users and can monitor and meter
the use of buckets and objects.

Working with namespaces in the ECS Portal


You can use the Namespace Management page available from Manage > Namespace to:
● create new namespaces
● edit existing namespaces
● delete namespaces
● view the details of existing namespaces

Table 13. Namespace properties


Field Description
Name The name of the namespace.
Default replication group The default replication group for the namespace.
Notification quota The quota limit at which notification is generated.
Max quota The quota limit at which writes to the namespace are blocked.
Encryption Specifies if D@RE server-side encryption is enabled for the namespace.
Actions The actions that can be completed for the namespace.
● Edit: Change the namespace name, the namespace Administrator, the default
replication group, namespace quota, bucket quota, server-side encryption, access
during outage, and compliance settings for the namespace.
● Delete: Delete the namespace.
Namespace root user A namespace root user is a user which has complete access to the namespace resources.
The namespace root user is the default owner for resources created using IAM roles. With
this task, the system administrator and namespace administrator can manage ECS Portal
access for namespace root user.

Namespace settings
The following table describes the settings that you can specify when you create or edit an ECS namespace.
How to use namespace and bucket names when addressing objects in ECS is described in Object base URL.

Table 14. Namespace settings


Field Description Can be
edited
Name The name of the namespace, in lowercase characters. No

Namespace Admin The user ID of one or more users who are assigned to the namespace Administrator Yes
role; specify multiple users as a comma-separated list. Namespace Administrators can be
local or domain users. If the namespace Administrator is a domain user, ensure that
an authentication provider is added to ECS. See Introduction to users and roles for
details.
Domain Group Admin The domain group that is assigned to the namespace Administrator role. Any Yes
authenticated member is assigned the namespace Administrator role for the
namespace. The domain group must be assigned to the namespace by setting the
Domain User Mappings for the namespace. To use this feature, you must ensure
that an authentication provider is added to ECS. See Introduction to users and roles
for details.
Replication Group The default replication group for the namespace. Yes
Namespace Quota The storage space limit that is specified for the namespace. You can specify a Yes
storage limit for the namespace and define notification and access behavior when
the quota is reached. The quota set for a namespace cannot be less than 1 GB. You
can specify namespace quota settings in increments of GB. You can select one of
the following quota behavior options:
● Notification Only at < quota_limit_in_GiB > Soft quota setting at which you
are notified.
● Block Access Only at < quota_limit_in_GiB > Hard quota setting which, when
reached, prevents write or update access to buckets in the namespace.
● Block Access at < quota_limit_in_GiB > and Send Notification at
< quota_limit_in_GiB > Hard quota setting which, when reached, prevents
write or update access to the buckets in the namespace and the quota setting
at which you are notified.
Default Bucket Quota The default storage limit that is specified for buckets that are created in this Yes
namespace. This is a hard quota which, when reached, prevents write or update
access to the bucket. Changing the default bucket quota does not change the
bucket quota for buckets that are already created.
Server-side Encryption The default value for server-side encryption for buckets created in this namespace. No
● Server-side encryption, also known as Data At Rest Encryption or D@RE,
encrypts data inline before storing it on ECS disks or drives. This encryption
helps prevent sensitive data from being acquired from discarded or stolen
media.
● If you turn this setting on for the namespace, all its buckets are encrypted and this setting cannot
be changed when a bucket is created. If you want the buckets in the namespace to be
unencrypted, you must leave this setting off. If you leave this setting off for the namespace,
individual buckets can be set as encrypted when created.
● For a complete description of the feature, see the ECS 3.8 Security
Configuration and Hardening Guide.
Access During Outage The default behavior when accessing data in the buckets created in this namespace Yes
during a temporary site outage in a geo-federated setup.
● If you turn this setting on for the namespace and a temporary site outage
occurs, if you cannot access a bucket at the failed site where the bucket was
created (owner site), you can access a copy of the bucket at another site.
Objects that you access in the buckets in the namespace might have been
updated at the failed site, but changes might not have been propagated to the
site from which you are accessing the object.
● If you leave this setting off for the namespace, data in the site which has the
temporary outage is not available for access from other sites, and object read
for data that is owned by the failed site fails.
● In ECS 3.8, Object Lock and ADO can be enabled together in a namespace for
new buckets. However, there is a risk of losing locked versions during a TSO,
and hence for Object Lock buckets setting ADO is denied by default. You must

have system administrator privileges to allow Object Lock and ADO to co-exist
through the Management API. Before enabling it, you should understand the risk
of losing locked versions during a TSO.
● For more information, see TSO behavior with the ADO bucket setting turned on.
Compliance ● The rules that limit changes that can be made to retention settings on objects No
under retention. ECS has object retention features enabled or defined at the
object level, bucket level, and namespace level. Compliance strengthens these
features by limiting changes that can be made to retention settings on objects
under retention.
● You can turn this setting on only at the time the namespace is created; you
cannot change it after the namespace is created.
● Compliance is supported by S3 and CAS systems. For details about the rules
enforced by compliance, see the ECS Data Access Guide.
Retention Policies Enables one or more retention policies to be added and configured. Yes
● A namespace can have one or more associated retention policies, where each
policy defines a retention period. When you apply a retention policy to several
objects, rather than to an individual object, a change to the retention policy
changes the retention period for all the objects to which the policy is applied.
A request to modify an object before the expiration of the retention period is
disallowed.
● In addition to specifying a retention policy for several objects, you can specify
retention policies and a quota for the entire namespace.
● For more information about retention, see Retention periods and policies.
Domain Enables Active Directory (AD) or Lightweight Directory Access Protocol (LDAP) Yes
domains to be specified and the rules for including users from the domain to be
configured.
● Domain users can be assigned to ECS management roles and can use the ECS
self-service capability to register as object users.
● The mapping of domain users into a namespace is described in Domain users
require an assigned namespace to perform object user operations.
Namespace root user A namespace root user is a user which has complete access to the namespace Yes
resources. The namespace root user is the default owner for resources created
using IAM roles. With this task, the system administrator and namespace
administrator can manage ECS Portal access for namespace root user.

You can set the following attributes using the ECS Management REST API, but not from the ECS Portal.

Allowed (and Disallowed) Replication Groups: Enables a client to specify the replication groups that the namespace can use.

Retention periods and policies


ECS provides the ability to prevent data from being modified or deleted within a specified retention period.
You can specify retention by using retention periods and retention policies that are defined in the metadata that is associated
with objects and buckets. The retention periods and retention policies are checked each time a request to modify an object is
made. Retention periods are supported on all ECS object protocols (S3, Swift, Atmos, and CAS).
NOTE: For detailed information about setting retention on object interfaces, including CAS retention and CAS advanced
retention, see the ECS Data Access Guide.

Retention Periods: You can assign retention periods at the object level or the bucket level. Each time a user requests to
modify or delete an object, an expiration time is calculated. The object expiration time equals the object creation time plus
the retention period. When you assign a retention period for a bucket, the object expiration time is calculated based on the
retention period set on the object and the retention period set on the bucket, whichever is the longest. When you apply a
retention period to a bucket, the retention period for all objects in a bucket can be changed at any time, and can override the
value that is written to the object by an object client by setting it to a longer period. You can specify that an object is retained
indefinitely.

Auto-Commit Period: The autocommit period is the time interval in which updates through NFS are allowed for objects under
retention. This attribute enables NFS files that are written to ECS to be WORM-compliant. The interval is calculated from the
last modification time. The autocommit value must be less than or equal to the retention value, with a maximum of 1 day. A
value of 0 indicates no autocommit period.

Retention Policies: Retention policies are associated with a namespace. Any policy that is associated with the namespace
can be assigned to an object belonging to the namespace. A retention policy has an associated retention period. When you
change the retention period that is associated with a policy, the retention period automatically changes for all objects that
have that policy assigned. You can apply a retention policy to an object. When a user attempts to modify or delete the object,
the retention policy is retrieved, and the retention period in the retention policy is used with the object retention period and
the bucket retention period to verify whether the request is allowed. For example, you could define a retention policy for each
of the following document types, and each policy could have an appropriate retention period:
● Email - six months
● Financial - three years
● Legal - five years
When a user requests to modify or delete the legal document four years after it was created, the larger of the bucket retention
period or the object retention period is used to verify whether the operation can be performed. In this case, the request is not
allowed, and the document cannot be modified or deleted for one more year.

ECS Management REST API retention policy methods


The retention policy creation and configuration tasks that can be performed in the ECS Portal can also be performed using the
ECS Management REST API. The following table describes the ECS Management REST API methods that relate to retention
policies:

Table 15. ECS Management REST API retention policy


PUT /object/bucket/{bucketName}/retention: Sets the retention value for a bucket, which defines a mandatory retention
period that is applied to every object within the bucket. If the retention value is one year, an object from the bucket cannot be
modified or deleted for one year.
GET /object/bucket/{bucketName}/retention: Returns the retention period that is set for a specified bucket.
POST /object/namespaces/namespace/{namespace}/retention: Sets the retention configuration for a namespace, which acts
like a policy, where each policy is a <name>: <retention period> pair. You can define several retention policies for a namespace
and you can assign a policy, by name, to an object within the namespace. This allows you to change the retention period for
objects that have the same policy assigned, by changing the corresponding policy.
PUT /object/namespaces/namespace/{namespace}/retention/{class}: Updates the period for a retention class that is
associated with a namespace.
GET /object/namespaces/namespace/{namespace}/retention: Returns the retention classes that are defined for a namespace.

For information about how to access the ECS Management REST API, see the ECS Data Access Guide.
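For example, reading the retention settings uses the GET methods listed above. This is a minimal sketch that assumes a management session token in $TOKEN (the X-SDS-AUTH-TOKEN header returned by the Management API login) and placeholder bucket, namespace, and node values:

# Read the retention period currently set on a bucket (method from Table 15).
curl -ks -H "$TOKEN" "https://<Node IP>:4443/object/bucket/<bucketName>/retention"
# Read the retention classes (policies) defined for a namespace.
curl -ks -H "$TOKEN" "https://<Node IP>:4443/object/namespaces/namespace/<namespace>/retention"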

Create a namespace
You can create a namespace.

Prerequisites
● This operation requires the System Administrator role in ECS.
● A replication group must exist. The replication group provides access to storage pools in which object data is stored.
● If you want to enable domain users to access the namespace, an authentication provider must be added to ECS. To
configure domain object users or a domain group, you must plan how you want to map users into the namespace. For more
information about mapping users, see Domain users require an assigned namespace to perform object user operations.

About this task


For more information about namespaces, see Namespace settings.

Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, click New Namespace.
3. On the New Namespace page, in the Name field, type the name of the namespace.
● The name cannot be changed once created.
● To manage the ECS Portal access of a namespace root user, see
○ Manage namespace root user- System administrator
○ Manage namespace root user- Namespace administrator.
4. In the Namespace Admin field, specify the user ID of one or more domain or local users to whom you want to assign the
Namespace Administrator role.
You can add multiple users or groups as comma-separated lists.
5. In the Domain Group Admin field, you can also add one or more domain groups to whom you want to assign the
Namespace Administrator role.
6. In the Replication Group field, select the default replication group for this namespace.
7. In the Namespace Quota field, click On to specify a storage space limit for this namespace. If you enable a namespace
quota, select one of the following quota behavior options:
a. Notification Only at < quota_limit_in_GiB >
Select this option if you want to be notified when the namespace quota setting is reached.
b. Block Access Only at < quota_limit_in_GiB >
Select this option if you want write/update access to the buckets in this namespace to be blocked when the quota is
reached.
c. Block Access at < quota_limit_in_GiB > and Send Notification at < quota_limit_in_GiB >
Select this option if you want write/update access to the buckets in this namespace to be blocked when the quota is
reached and you want to be notified when the quota reaches a specified storage limit.
8. In the Default Bucket Quota field, click On to specify a default storage space limit that is automatically set on all buckets
that are created in this namespace.
9. In the Server-side Encryption field, click On to enable server-side encryption on all buckets that are created in the
namespace and to encrypt all objects in the buckets. If you leave this setting Off, you can apply server-side encryption to
individual buckets in the namespace at the time of creation.
10. In the Access During Outage field, click On or Off to specify the default behavior when accessing data in the buckets
created in this namespace during a temporary site outage in a geo-federated setup.
If you turn this setting on, if a temporary site outage occurs in a geo-federated system and you cannot access a bucket at
the failed site where it was created (owner site), you can access a copy of the bucket at another site.
If you leave this setting off, data in the site which has the temporary outage is not available for access from other sites, and
object reads for data that is owned by the failed site will fail.
11. In the Compliance field, click On to enable compliance features for objects in this namespace.
Once you turn this setting on, you cannot turn it off.
You can only turn this setting on during namespace creation.
Once you turn this setting on, you can add a retention policy by completing the following steps:

a. In the Retention Policies area, in the Name field, type the name of the policy.
b. In the Value fields, select a numerical value and then select the unit of measure (seconds, minutes, hours, days, months,
years, infinite) to set the retention period for this retention policy.
Instead of specifying a specific retention period, you can select Infinite as a unit of measure to ensure that buckets that
are assigned to this retention policy are never deleted.
c. Click Add to add the new policy.
12. To specify an Active Directory (AD) or Lightweight Directory Access Protocol (LDAP) domain that contains the users who
can log in to ECS and perform administration tasks for the namespace, click Domain.
a. In the Domain field, type the name of the domain.
b. Specify the groups and attributes for the domain users that are allowed to access ECS in this namespace by typing the
values in the Groups, Attribute, and Values fields.
For information about how to perform complex mappings using groups and attributes, see Domain users require an assigned
namespace to perform object user operations.
13. Click Save.
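The same namespace configuration can be scripted against the ECS Management REST API. The sketch below is an assumption: the /object/namespaces/namespace resource appears in the retention methods in Table 15, but the exact creation payload must be taken from the ECS Management REST API reference for your release; namespace.json is a placeholder file containing that payload, and $TOKEN is the management session token:

# Assumption: namespaces are created with a POST to the /object/namespaces/namespace
# resource; confirm the endpoint and the JSON payload fields in the ECS Management
# REST API reference before using this in automation.
curl -ks -X POST -H "$TOKEN" -H "Content-Type: application/json" -d @namespace.json "https://<Node IP>:4443/object/namespaces/namespace"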

Edit a namespace
You can change the configuration of an existing namespace.

Prerequisites
This operation requires the System Administrator role in ECS.
A Namespace Administrator can modify the AD or LDAP domain that contains the namespace's object users and the
management users that can be assigned the Namespace Administrator role for the namespace.

About this task


You cannot edit the Name, Server-side Encryption, or Compliance fields after the namespace has been created.
NOTE: Changing the replication group for a namespace does not change the replication group of data that was previously
written as part of the namespace.

Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, locate the namespace that you want to edit in the table. Click Edit in the Actions
column beside the namespace you want to edit.
3. On the Edit Namespace page:
● To modify the domain or local users to whom you want to assign the Namespace Administrator role, in the Namespace
Admin or Domain Group Admin fields, change the user IDs.
● To modify the default replication group for this namespace, in the Replication Group field, select a different replication
group.
● To modify which of the following settings are enabled, click the appropriate On or Off options.
○ Namespace Quota
○ Default Bucket Quota
○ Access During Outage
4. To modify an existing retention policy, in the Retention Policies area:
a. Click Edit in the Actions column beside the retention policy you want to edit.
b. To modify the policy name, in the Name field, type the new retention policy name.
c. To modify the retention period, in the Value field, type the new retention period for this retention policy.
5. To modify the AD or LDAP domain that contains the object users in the namespace and management users that can be
assigned the Namespace Administrator role for the namespace, click Domain.
a. To modify the domain name, in the Domain field, type the new domain name.
b. To modify the groups and attributes for the domain users that are allowed to access ECS in this namespace, type the
new values in the Groups, Attribute, and Values fields.
6. Click Save.

Manage namespace root user
You can manage the ECS Portal access of a namespace root user.

Prerequisites
This task requires the System Administrator or Namespace Administrator role in ECS.

About this task


Namespace root user is a user which has complete access to the namespace resources. Namespace root user is the default
owner for resources created using IAM roles. With this task, the system administrator and namespace administrator can manage
ECS Portal access for namespace root user.

Manage namespace root user- System administrator

Steps
1. In the ECS Portal, select Manage > Edit Namespace > MANAGE (next to Namespace Root User).
2. On the Manage page, you can:
● Enable or Disable ECS Portal access for namespace root user.
● Set or change ECS Portal login password for namespace root user.
3. Click Save.

Manage namespace root user- Namespace administrator

Steps
NOTE: This procedure is available only after the system administrator has enabled ECS Portal access for the namespace
root user.

1. In the ECS Portal, select Manage > Edit Namespace > MANAGE (next to Namespace Root User).
2. On the Manage page, you can:
● Change current ECS Portal login password for the namespace root user.
3. Click Save.
NOTE: The current namespace administrator session is invalidated after the password is changed.

Delete a namespace
You can delete a namespace, but you must delete the buckets in the namespace first.

Prerequisites
This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, locate the namespace that you want to delete in the table. Click Delete in the
Actions column beside the namespace you want to delete.
An alert displays informing you of the number of buckets in the namespace and instructs you to delete the buckets in the
namespace before removing the namespace. Click OK.
3. Delete the buckets in the namespace.
a. Select Manage > Buckets.
b. On the Bucket Management page, locate the bucket that you want to delete in the table. Click Delete in the Actions
column beside the bucket you want to delete.

c. Repeat step 3b for all the buckets in the namespace.
4. On the Namespace Management page, locate the namespace that you want to delete in the table. Click Delete in the
Actions column beside the namespace you want to delete.
Since there are no longer any buckets in this namespace, a message displays to confirm that you want to delete this
namespace. Click OK.
5. Click Save.

6
Users and Roles
Topics:
• Introduction to users and roles
• Users in ECS
• Management roles in ECS
• Working with users in the ECS Portal

Introduction to users and roles


In ECS you can configure users and roles to control access to the ECS management tasks and to the object store. Management
users can perform administration tasks in the ECS Portal. Object users cannot access the ECS Portal but can access the object
store using clients that support the ECS data access protocols.
Roles in ECS determine the operations that a management user can perform in the ECS Portal or by using the ECS Management
REST API.
Management users and object users are stored in different tables and their credentials are different. Management users require
a local username and password, or a link to a domain user account. Object users require a username and a secret key. You can
create a management user and an object user with the same name, but they are effectively different users as their credentials
are different.
Management user and object user names can be unique across the ECS system or can be unique within a namespace, which is
referred to as user scope.
Domain and local users can be assigned as management users or object users. Local user credentials are stored by ECS. Domain
users are users defined in an Active Directory (AD) or LDAP database, and ECS must communicate with the AD or LDAP server
to authenticate user login requests.
In the ECS Portal, you can:
● Add an object user
● Add a domain user as an object user
● Add domain users into a namespace
● Create a local management user or assign a domain user or AD group to a management role
● Assign the Namespace Administrator role to a user or AD group

Users in ECS
ECS requires two types of user: management users, who can perform administration of ECS, and object users, who access the
object store to read and write objects and buckets.
The following topics describe ECS user types and concepts.
● Management users
● Default management users
● Object users
● Domain and local users
● User scope

● User tags

Management users
Management users can perform the configuration and administration of the ECS system and of namespaces (tenants)
configured in ECS.
The roles that can be assigned to management users are Security Administrator, System Administrator, System Monitor, and
Namespace Administrator as described in Management roles in ECS.
Management users can be local users or domain users. Management users that are local users are authenticated by ECS against
the locally held credentials. Management users that are domain users are authenticated in Active Directory (AD) or Lightweight
Directory Access Protocol (LDAP) systems. For more information about domain and local users, see Domain and local users.
Management users are not replicated across geo-federated VDCs.

Default management users


On installation, ECS creates two default local management users, root and emcsecurity, to enable the initial and ongoing
configuration of ECS. The root and emcsecurity management users can access the ECS system by using the ECS Portal or the
ECS Management REST API. These default users are not replicated across sites in a geo-federation.

Table 16. Default management users

Default user: root
Default password: ChangeMe
User description: This user performs the initial configuration of the ECS system. The first time the root user accesses ECS, the
user is prompted to change the password and immediately log in again with the new password. From an audit perspective, to
know which user carried out changes to the system, the root user should not be used after system initialization. After system
initialization, each System Administrator user should log in to the system using their own credentials, not the root user
credentials.
Roles that are assigned to the user: System Administrator, Security Administrator
Role permissions:
● Upload license.
● Create, delete, modify storage pools, namespaces, buckets, and replication groups.
● View monitoring metrics.
● Grant permissions to object users.
Can more users be created? No

Default user: emcsecurity
Default password: ChangeMe
User description: This user can prevent remote SSH access to nodes by locking them. The password for this user should be
changed after system installation and securely recorded.
Role that is assigned to the user: Security Administrator
Role permissions:
● Upload certificates.
● Add authentication providers.
● Assign the Security Administrator role to one or more local users, or AD/LDAP users/groups, or both.
● Disable, delete any management user or change its password, including the root account.
● Lock and unlock nodes.
● Change its own password.
Can more users be created? Yes

Object users
Object users are users of the ECS object store. They access ECS through object clients that are using the object protocols that
ECS supports (S3, EMC Atmos, OpenStack Swift, and CAS). Object users can be assigned UNIX-style permissions to access
buckets that are exported as file systems.
A management user (System or namespace Administrator) can create an object user. The management user defines a username
and assigns a secret key to the object user when the user is created or at any time thereafter. A username can be a local name
or a domain-style username that includes @ in the name. The object user uses the secret key to access the ECS object store.
The secret key of object user is distributed by email or other means.
Users that are added to ECS as domain users can later add themselves as object users by creating their own secret key using
the ECS self-service capability through a client that communicates with the ECS Management REST API. The object username
that they are given is the same as their domain name. Object users do not have access to the ECS Portal. For more information
about domain users, see the Domain and local users. For information about creating a secret key, see the ECS Data Access
Guide.
Object users are global resources. An object user can have privileges to read and write buckets, and objects within the
namespace to which they are assigned, from any VDC.
NOTE: Set the user scope before you create the first object user. Setting up the user scope is a strict one-time
configuration. Once configured for an ECS system, the user scope cannot be changed. If you want to change the user
scope, ECS must be reinstalled and all the users, buckets, namespaces, and data must be cleaned up.
● Refer User scope for more information about object users and user scope.
● For more information about object user tasks, see the ECS Data Access Guide.

Domain and local users


ECS supports local users and domain users. Local and domain users can be assigned as management users or object users.
The ECS self-service capability authenticates domain users and enables domain users to create a secret key for themselves.
When a domain user creates their own secret key, they become an object user in the ECS system. You can use AD and LDAP
to give many users from an existing user database access to the ECS object store (as object users), without creating each user
individually.
NOTE: Domain users that are object users must be added (mapped) into a namespace. For more information, see Add
domain users into a namespace.
ECS stores credentials of local users. The credentials for object users are global resources and are available from all VDCs in
ECS.
Domain users are defined in an Active Directory AD or LDAP database. Domain usernames are defined by using the
user@domain.com format. Usernames without @ are authenticated against the local user database. ECS uses an
authentication provider to supply the credentials to communicate with the AD or LDAP server to authenticate a domain user
login request. Domain users assigned to management roles can be authenticated against their AD or LDAP credentials to enable
them to access ECS and perform ECS administration operations.
With a single login session, management users can switch between federated VDCs, except management users who have only
the Security Administrator role.



Domain users require an assigned namespace to perform object user operations

You must add (assign) domain users into a namespace if you want these users to perform ECS object user operations. To
access the ECS object store, object users and namespace Administrators must be assigned to a namespace. You can add an
entire domain of users into a namespace, or you can add a subset of the domain users into a namespace by specifying a
particular group or attribute associated with the domain.
A domain can provide users for multiple namespaces. For example, you might decide to add users such as the Accounts
department in the yourco.com domain into Namespace1, and users such as the Finance department in the yourco.com
domain into Namespace2. In this case, the yourco.com domain is providing users for two namespaces.
An entire domain, a particular set of users, or a particular user cannot be added into more than one namespace. For example, the
yourco.com domain can be added into Namespace1, but the domain cannot also be added into Namespace2.
The following example shows that a System or namespace Administrator has added into a namespace a subset of users in
the yourco.com domain; the users that have their Department attribute = Accounts in Active Directory. The System or
namespace Administrator has added the users in the Accounts department from this domain into a namespace by using the Edit
Namespace page in the ECS Portal.

Figure 6. Adding a subset of domain users into a namespace using one AD attribute

The following example shows a different example where the System or namespace Administrator is using more granularity
in adding users into a namespace. In this case, the System or namespace Administrator has added the members in the
yourco.com domain who belong to the Storage Admins group with the Department attribute = Accounts AND Region attribute
= Pacific, OR belong to the Storage Admins group with the Department attribute = Finance.



Figure 7. Adding a subset of domain users into a namespace using multiple AD attributes

For more information about adding domain users into namespaces using the ECS Portal, see Add domain users into a
namespace.

User scope
The user scope setting affects all object users, in all namespaces across all federated VDCs.
The user scope can be GLOBAL or NAMESPACE. If the scope is set to GLOBAL, object usernames are unique across all VDCs
in the ECS system. If the scope is set to NAMESPACE, object usernames are unique within a namespace, so the same object
username can exist in different namespaces.
NOTE: The CAS and Swift APIs are supported only with GLOBAL user scope; they are not supported with NAMESPACE
user scope.

The default setting is GLOBAL. If you intend to use ECS in a multitenant configuration and you want to ensure that namespaces
can use names that are in use in another namespace, you must change this setting to NAMESPACE.
NOTE: Set the user scope before you create the first object user. Setting up user scope is a strict one time configuration.
Once configured for an ECS system, user scope cannot be changed. If you want to change the user scope, ECS must be
reinstalled and all the users, buckets, namespaces, and data must be cleaned up.
● See Object users for more information about object users.
● For more information about object user tasks, see the ECS Data Access Guide.

Set the user scope


You can set the user scope using the ECS Management REST API.

Prerequisites
This operation requires the System Administrator role in ECS.
If you are going to change the default user scope setting from GLOBAL to NAMESPACE, you must do so before you create the
first object user in ECS.



About this task
The user scope setting affects all object users in ECS.

Steps
In the ECS Management REST API, use the PUT /config/object/properties API call and pass the user scope in the
payload.
The following example shows a payload that sets the user_scope to NAMESPACE.

PUT /config/object/properties/

<property_update>
    <properties>
        <properties>
            <entry>
                <key>user_scope</key>
                <value>NAMESPACE</value>
            </entry>
        </properties>
    </properties>
</property_update>
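
If you prefer to script this call, the following is a minimal sketch using Python and the requests library. The host name, port
4443, the /login token exchange, and the credentials shown are assumptions about a typical deployment; substitute values for
your environment and see the ECS Management REST API reference for the authoritative call.

# Minimal sketch (not an official tool): set user_scope to NAMESPACE through the
# ECS Management REST API. Assumes the management endpoint listens on port 4443 and
# returns an X-SDS-AUTH-TOKEN header from /login; verify=False is for lab systems
# with self-signed certificates only.
import requests

ECS_MGMT = "https://ecs.example.com:4443"   # hypothetical management endpoint

# Authenticate as a System Administrator and capture the session token.
login = requests.get(f"{ECS_MGMT}/login", auth=("root", "ChangeMe"), verify=False)
token = login.headers["X-SDS-AUTH-TOKEN"]

payload = """<property_update>
  <properties>
    <properties>
      <entry>
        <key>user_scope</key>
        <value>NAMESPACE</value>
      </entry>
    </properties>
  </properties>
</property_update>"""

resp = requests.put(
    f"{ECS_MGMT}/config/object/properties/",
    data=payload,
    headers={"X-SDS-AUTH-TOKEN": token, "Content-Type": "application/xml"},
    verify=False,
)
resp.raise_for_status()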

User tags
A tag in the form of name=value pairs can be associated with the user ID for an object user, and retrieved by an application. For
example, an object user can be associated with a project or cost-center. Tags cannot be associated with management users.
This functionality is not available from the ECS Portal. Tags can be set on an object user, and the tags that are associated with
the object user can be retrieved by using the ECS Management REST API. You can add a maximum of 20 tags.

Management roles in ECS


ECS defines roles to determine the operations that a user can perform in the ECS Portal or when accessing ECS using the ECS
Management REST API. Management users and groups can be assigned to administration roles in ECS and can be either local
users or domain users. Roles can also be assigned to Active Directory group names.
The following management roles are defined:
● Security Administrator
● System Administrator
● System Monitor
● Namespace Administrator

Security Administrator
Actions that are allowed to the Security Administrator:
● Upload certificates.
● Add authentication providers.
● Create, edit, and delete management users and/or AD/LDAP users/groups.

● Lock and unlock nodes.
● Change its own password.
The Security Administrator is authenticated by local authentication (not through AD/LDAP).
The Security Administrator cannot access the metering and monitoring UI pages and APIs.
The Security Administrator is the only management user who can lock and unlock nodes from the ECS Portal or the ECS
Management REST API. Locking a node disables remote SSH access to the node. The Security Administrator is a default local
user called emcsecurity. The emcsecurity user is described in Default management users.
The Security Administrator role can be assigned to another management user only by users with Security Administrator role.
System Administrators and System Monitors can view the lock status of nodes. For instructions on locking and unlocking nodes,
see Lock and unlock nodes using the ECS Portal.

System Administrator
The System Administrator role allows a user to configure ECS. During initial configuration, the System Administrator specifies
the storage used for the object store, how the store is replicated, how tenant access to the object store is configured (by
defining namespaces), and which users have permissions within an assigned namespace. The System Administrator can also
configure namespaces and perform namespace administration, or can assign a user who belongs to the namespace as the
Namespace Administrator.
The System Administrator has access to the ECS Portal and system administration operations can also be performed from
programmatic clients using the ECS Management REST API.
After initial installation of ECS, the System Administrator is a pre-provisioned local management user called root. The default
root user is described in Default management users.
Because management users are not replicated across sites, a System Administrator must be created at each VDC that requires
one.

System Monitor
The System Monitor role enables a user to have read-only access to the ECS Portal. The System Monitor can view all ECS
Portal pages and all information on the pages, except user detail information such as passwords and secret key data. The
System Monitor cannot provision or configure the ECS system. For example, the System Monitor cannot create or update
storage pools, replication groups, namespaces, buckets, or users through the portal or the ECS Management REST API. System
Monitors cannot modify any portal setting other than their own passwords.
Because management users are not replicated across sites, a System Monitor must be created at each VDC that requires one.

Namespace Administrator
The Namespace Administrator is a management user who can access the ECS Portal.
The Namespace Administrator can assign local users as object users for the namespace and create and manage buckets
within the namespace. Namespace Administrator operations can also be performed using the ECS REST API. A Namespace
Administrator can only be the administrator of a single namespace.
Because authentication providers and namespaces are replicated across sites (they are ECS global resources), a domain user
who is a Namespace Administrator can log in at any site and perform namespace administration from that site.
NOTE: If a domain user is to be assigned to the Namespace Administrator role, the user must first be mapped into the
namespace.
Local management users are not replicated across sites, so a local user who is a Namespace Administrator can only log in at the
VDC at which the management user was created. If you want the same username to exist at another VDC, the user must be
created at the other VDC. As they are different users, changes to a same-named user at one VDC, such as a password change,
are not propagated to the user with the same name at the other VDC.



Tasks performed by role
The tasks that can be performed in the ECS Portal or ECS Management REST API by each role are described in the following
table:

Table 17. Tasks performed by ECS management user role

Task | System Admin | System Monitor | Namespace Admin (includes namespace root) | Security Admin

Tenancy
Create namespaces (tenants) | Yes | No | No | No
Delete namespaces | Yes | No | No | No
NOTE: The namespace root is a namespace admin.

User management (management and object users unless otherwise noted)
Create local object users and assign them to namespaces | Yes (in all namespaces) | No | Yes (in one namespace) | No
Create local management users and assign them to namespaces | No | No | No | Yes
Delete local object users | Yes (in all namespaces) | No | Yes (in one namespace) | No
Delete or unlock local management users | No | No | No | Yes
Edit or change the password of local management users | No | No | No | Yes
Set user scope (global or namespace) for all object users | Yes | No | No | No
Add an AD, LDAP, or Keystone authentication provider | No | No | No | Yes
Delete an AD, LDAP, or Keystone authentication provider | No | No | No | Yes
Add AD and LDAP domain users or AD groups into a namespace | No | No | Yes (in one namespace) | Yes
Create an AD group (LDAP and Keystone groups are not supported) | No | No | No | Yes
Delete domain users or AD groups | No | No | No | Yes

Role management
Assign administration roles to local and domain management users and AD groups | No | No | No | Yes
Revoke roles from local and domain users and AD groups | No | No | No | Yes

Storage configuration
Create, modify storage pools | Yes (in the VDC where the System Admin was created) | No | No | No
Create, modify, delete VDCs | Yes (in the VDC where the System Admin was created) | No | No | No
Create, modify replication groups | Yes (in the VDC where the System Admin was created) | No | No | No
Create, modify, delete buckets | Yes (in all namespaces) | No | Yes (in one namespace) | No
Set the bucket ACL permissions for a user | Yes (buckets in all namespaces) | No | Yes (buckets in one namespace) | No
Create, modify, delete NFS exports | Yes (buckets in all namespaces) | No | Yes (buckets in one namespace) | No
Create, modify, delete mapping of users and groups to files and objects in buckets | Yes (buckets in all namespaces) | No | Yes (buckets in one namespace) | No
Add, modify, delete the object Base URL to use ECS object storage for Amazon S3 applications | Yes (in all namespaces) | No | No | No

Monitoring and reports
Get metering information for each namespace and bucket | Yes (in all namespaces) | Yes (in all namespaces) | Yes (in one namespace) | No
Get audit information (list of all activity of users using the ECS Portal and ECS Management REST API) | Yes (in all namespaces) | Yes (in all namespaces) | No | No
View alerts, and perform alert actions (such as acknowledging or assigning alerts) | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Configure alerts | Yes (in all namespaces) | No | No | No
Monitor capacity utilization of storage pools, nodes, disks, and the entire VDC | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Monitor the health and utilization of the infrastructure environment (nodes, disks, NIC bandwidth, CPU, and memory utilization) | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Monitor requests and network performance for VDCs and nodes | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Monitor status of data erasure encoding for each storage pool | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Monitor recovery status of storage pools after an outage or failure (data rebuilding process) | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Monitor disk use metrics at the VDC or individual node level | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Monitor geo-replication metrics including network traffic, data pending replication and XOR, failover and bootstrapping processing status | Yes (in all namespaces) | Yes (in all namespaces) | No | No
Monitor information about cloud-hosted VDCs and cloud replication traffic | Yes (in all namespaces) | Yes (in all namespaces) | No | No

Licensing, ESRS, security, and event configuration
View license and subscription information for all components | Yes | Yes | No | No
Procure and apply the new licenses | Yes | No | No | No
Add, modify, delete Secure Remote Services server | Yes | No | No | No
Change password | Yes | Yes | Yes | Yes
Lock nodes to prevent remote access through SSH | No | No | No | Yes
Add or delete an SNMP trap recipient to forward ECS events | Yes | No | No | No
Add or delete a syslog server to remotely store ECS logging messages | Yes | No | No | No

Working with users in the ECS Portal


You can use the User Management page available from Manage > Users to:
● Create a local user assigned as an object user for a namespace
● Add a domain user as an object user
● Add domain users into a namespace
● Create a local user assigned to a management role or assign a domain user or AD group to a management role
● Assign the Namespace Administrator role to a user or AD group
The User Management page has two tabs: the Object Users tab and the Management Users tab.

Object Users tab


You can use the Object Users tab to view the details of object users, to edit object users, and to delete object users. The
object users who are listed on this page include:
● The local object users who are created by a System or Namespace Administrator in the ECS Portal.
● The domain users that have become object users by way of obtaining a secret key using a client that communicates with the
ECS Management REST API.
A System Administrator sees the object users for all namespaces. A Namespace Administrator sees only the object users in their
namespace.

Table 18. Object user properties


Field Description
Name The name of the object user.

Namespace The namespace to which the object user is assigned.
Actions The actions that can be completed for the object user.
● Edit: Change the name of the object user, the namespace to which the user is assigned, or
the S3, Swift, or CAS object access passwords for the user.
● Delete: Delete the object user.
● New Object User button: Adds an object user.

Management Users tab


You can use the Management Users tab to view the details of local and domain management users, to edit management users,
and to delete management users. This tab is only visible to System Administrators.

Table 19. Management user properties


Field Description
Name The name of the management user.
Actions The actions that can be completed for the management user.
● Edit: For a local management user: Change the user's name, password, and Security
Administrator, System Administrator, or System Monitor role assignment. For a domain
management user: Change the AD or LDAP username or AD group name, and Security
Administrator, System Administrator, or System Monitor role assignment.
● Delete: Delete the management user.
● New Management User button: Add a management user that can be assigned the
Security Administrator role, the System Administrator role, or the System Monitor role.
● Unlock: A Security Administrator can unlock a management user who is locked out.

Add an object user


You can create object users and configure them to use the supported object access protocols. You can edit an object user
configuration by adding or removing access to an object protocol, or by creating a new secret key for the object user.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can assign new object users into any namespace.
● A Namespace Administrator can assign new object users into the namespace in which they are the administrator.
● If you create an object user who will access the ECS object store through the OpenStack Swift object protocol, the
Swift user must belong to an OpenStack group. A group is a collection of Swift users that have been assigned a role
by an OpenStack administrator. Swift users that belong to the admin group can perform all operations on Swift buckets
(containers) in the namespace to which they belong. Do not add ordinary Swift users to the admin group. For Swift users
that belong to any group other than the admin group, authorization depends on the permissions that are set on the Swift
bucket. You can assign permissions on the bucket from the OpenStack Dashboard UI or in the ECS Portal using the Custom
Group ACL for the bucket. For more information, see Set custom group bucket ACLs.

Steps
1. In the ECS Portal, select Manage > Users.
2. On the User Management page, click New Object User.
3. On the New Object User page, in the Name field, type a name for the local object user.
You can type domain-style names that include @ (for example, user@domain.com). You might want to do this to keep
names unique and consistent with AD names. However, local object users are authenticated using a secret key that is
assigned to the username, not through AD or LDAP.

NOTE: User names can include uppercase letters, lowercase letters, numbers, and any of the following characters: ! # $
&'()*+,-./:;=?@_~

4. In the Namespace field, select the namespace that you want to assign the object user to, and then complete one of the
following steps:
● To add the object user, and return later to specify passwords or secret keys to access the ECS object protocols, click
Save.
● To specify passwords or secret keys to access the ECS object protocols, click Next to Add Passwords.
5. On the Update Passwords for User <username> page, in the Object Access area, for each of the protocols that you
want the user to use to access the ECS object store, type or generate a key for use in accessing the S3/Atmos, Swift, or
CAS interfaces.
NOTE: You can lock or unlock an object user by selecting Edit > LOCK USER or Edit > UNLOCK USER.
a. For S3 access, in the S3/Atmos box, click Generate & Add Secret Key.
The secret key (password) is generated.
To view the secret key in plain text, select the Show Secret Key checkbox.
To create a second secret key to replace the first secret key for security reasons, click Generate & Add Secret Key.
The Add S3/Atmos Secret Key/Set Expiration on Existing Secret Key dialog is displayed. When adding a second
secret key, you can specify for how long to retain the first password. Once this time has expired, the first secret key
expires.
In the Minutes field, type the number of minutes for which you want to retain the first password before it expires. For
example, if you typed 3 minutes, you would see This password will expire in 3 minute(s).
After 3 minutes, you would see that the first password displays as expired and you could then delete it.
b. For Swift access:
● In the Swift Groups field, type the OpenStack group to which the user belongs.
● In the Swift password field, type the OpenStack Swift password for the user.
● Click Set Groups & Password.
If you want an S3 user to be able to access Swift buckets, you must add a Swift password and group for the user. The
S3 user is authenticated by using the S3 secret key, and the Swift group membership enables access to Swift buckets.
c. For CAS access:
● In the CAS field, type the password and click Set Password or click Generate to automatically generate the
password and click Set Password.
● Click Generate PEA file to generate a Pool Entry Authorization (PEA) file. The file output displays in the PEA file
box and the output is similar to the following example. The PEA file provides authentication information to CAS before
CAS grants access to ECS; this information includes the username and secret. The secret is the base64-encoded
password that is used to authenticate the ECS application.

NOTE: Generate PEA file button is displayed after the password is set.

<pea version="1.0.0">
<defaultkey name="s3user4">
<credential id="csp1.secret" enc="base64">WlFOOTlTZUFSaUl3Mlg3VnZaQ0k=</
credential>
</defaultkey>
<key type="cluster" id="93b8729a-3610-33e2-9a38-8206a58f6514" name="s3user4">
<credential id="csp1.secret" enc="base64">WlFOOTlTZUFSaUl3Mlg3VnZaQ0k=</
credential>
</key>
</pea>

● In the Default Bucket field, select a bucket, and click Set Bucket.
● Optional. Click Add Attribute and type values in the Attribute and Group fields.
● Click Save Metadata.
6. Click Close.
The passwords/secret keys are saved automatically.
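
After the object user has an S3 secret key, any S3-compatible SDK can use it against the ECS S3 data endpoint. The
following is a minimal sketch using Python and boto3; the endpoint URL (port 9021 is commonly the HTTPS S3 port on ECS),
bucket name, and credentials are placeholders for illustration only.

# Minimal sketch: use the object user's name and S3 secret key with boto3.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs.example.com:9021",      # hypothetical ECS S3 endpoint
    aws_access_key_id="user1",                         # object user name
    aws_secret_access_key="<generated-secret-key>",    # secret key from the portal
)

s3.put_object(Bucket="example-bucket", Key="hello.txt", Body=b"hello ECS")
print(s3.get_object(Bucket="example-bucket", Key="hello.txt")["Body"].read())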



Add a domain user as an object user
You can configure domain users so that they can access the ECS object store and generate secret keys for themselves. By
doing so, they add themselves as object users to ECS.

Prerequisites
● AD or LDAP domain users must have been added to ECS through an AD or LDAP authentication provider. Adding an
authentication provider must be performed by a System Administrator and is described in Add an AD or LDAP authentication
provider.
● Domain users must have been added into a namespace by a System or Namespace Administrator, as described in Add
domain users into a namespace.

Steps
Domain users can create secret keys for themselves by using the instructions in the ECS Data Access Guide.
When a domain user creates their own secret key, they become an object user in the ECS system.
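
The following is a minimal sketch of that self-service flow: the domain user authenticates to the ECS Management REST API
with their AD or LDAP credentials and creates a secret key for themselves. The endpoint host, port 4443, and the exact
payload element shown are assumptions; confirm the call against the ECS Data Access Guide for your release.

# Minimal sketch of domain-user self-service secret key creation.
import requests

ECS_MGMT = "https://ecs.example.com:4443"             # hypothetical management endpoint

login = requests.get(f"{ECS_MGMT}/login",
                     auth=("user@domain.com", "ad-password"), verify=False)
token = login.headers["X-SDS-AUTH-TOKEN"]

resp = requests.post(
    f"{ECS_MGMT}/object/secret-keys",
    data="<secret_key_create_param></secret_key_create_param>",
    headers={"X-SDS-AUTH-TOKEN": token, "Content-Type": "application/xml"},
    verify=False,
)
print(resp.text)   # the response contains the generated secret key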

Add domain users into a namespace


In the ECS Portal, you can add domain users into a namespace based on the AD or LDAP domain, groups, and attributes
associated with the users. Domain users must be added (mapped) into a namespace to perform ECS object user operations.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● An authentication provider must exist in the ECS system that provides access to the domain that includes the users you
want to add into the namespace.

Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, beside the namespace, click Edit.
3. On the Edit Namespace page, click Domain and type the name of the domain in the Domain field.
4. In the Groups field, type the names of the groups that you want to use to add users into the namespace.
The groups that you specify must exist in AD.
5. In the Attribute and Values fields, type the name of the attribute and the values for the attribute.
The specified attribute values for the users must match the attribute values specified in AD or LDAP.
If you do not want to use attributes to add users into the namespace, click the Attribute button with the trash can icon to
remove the attribute fields.
6. Click Save.

Create a local management user or assign a domain user or AD or LDAP group to a management role
You can create a local management user, and you can assign a management role to a local user, a domain user, or an AD or
LDAP group. Management users can perform system-level administration (VDC administration) and namespace administration.
You can also remove the management role assignment.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● By default, the ECS root user is assigned the System Administrator role and can perform the initial assignment of a user to
the System Administrator role.
● To assign a domain user or an AD or LDAP group to a management role, the domain users or AD or LDAP group must have
been added to ECS through an authentication provider. Adding an authentication provider must be performed by a System
Administrator and is described in Add an AD or LDAP authentication provider.

● To assign the Namespace Administrator role to a management user, you must create a management user using the following
procedure and perform the role assignment on the Edit Namespace page in the ECS Portal (see Assign the Namespace
Administrator role to a user or AD or LDAP group ). The user cannot log in until the Namespace Administrator role is
assigned.

Steps
1. In the ECS Portal, select Manage > Users.
2. On the User Management page, click the Management Users tab.
3. Click New Management User.
4. Click AD/LDAP User or Group or Local User.
● For a domain user, in the Username field, type the name of the user. The username and password that ECS uses to
authenticate a user are held in AD or LDAP, so you do not need to define a password.
● For an AD or LDAP group, in the Group Name field, type the name of the group. The username and password that ECS
uses to authenticate the AD or LDAP group are held in AD or LDAP, so you do not need to define a password.
● For a local user, in the Name field, type the name of the user and in the Password field, type the password for the user.
NOTE: User names can include uppercase letters, lowercase letters, numbers and any of the following characters: ! # $
&'()*+,-./:;=?@_~

5. To assign the System Administrator role to the user or AD or LDAP group, in the System Administrator box, click Yes.
If you select Yes, but later you want to remove System Administrator privileges from the user, you can edit this setting and
select No.
6. To assign the System Monitor role to the user or AD or LDAP group, in the System Monitor box, click Yes.
7. Click Save.

Assign the Namespace Administrator role to a user or AD or LDAP group
You can assign the Namespace Administrator role to a local management user, a domain user, or AD or LDAP group that exists in
the ECS system.

Prerequisites
● This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, beside the namespace into which you want to assign the Namespace
Administrator, click Edit.
3. On the Edit Namespace page:
a. For a local management user or a domain user, in the Namespace Admin field, type the name of the user to whom you
want to assign the Namespace Administrator role.
To add more than one Namespace Administrator, separate the names with commas.
A user can be assigned as the Namespace Administrator only for a single namespace.
b. For an AD or LDAP group, in the Domain Group Admin field, type the name of the AD or LDAP group to which you want
to assign the Namespace Administrator role.
When the AD or LDAP group is assigned the Namespace Administrator role, all users in the group are assigned this role.
An AD or LDAP group can be the Namespace Administrator only for one namespace.
4. Click Save.



7
Identity and Access Management (S3)
Topics:
• Introduction to Identity and Access Management
• Users
• Groups
• Roles
• Policies
• Identity Provider
• SAML Service Provider Metadata
• Root Access Key

Introduction to Identity and Access Management


ECS Identity and Access Management (IAM) is a service that provides secure fine-grained access to ECS resources.
NOTE:
● IAM is accessible only by S3 protocol. IAM policies and settings have no impact when data is accessed using other
protocols.
● Management users in ECS have complete access to IAM capabilities.
● When the IAM configuration is changed, the changes may not take effect immediately.
IAM consists of the following components:
● Account Management
● Access Management
● Identity Federation
● Secure Token Service

Account Management
Account Management enables you to manage IAM identities within each namespace such as users, groups, and roles.
All IAM entities have a unique ID associated with them. Deleting and re-creating an entity with the same name creates a new
unique ID for the new entity.

Identities
Table 20. Identities
Field Description
Namespace root user ● The namespace root user is an admin user in the
namespace.
● Only the namespace root user can access the ECS UI.
● The namespace root user is the owner of the buckets and
objects that are created by the IAM entities.
IAM user ● An IAM user is a person or an application in the namespace
that can interact with ECS resources.
● An IAM user can belong to one or more IAM groups.

● It is possible to create, view, modify, delete, and list IAM
users in ECS using both API and UI.
● IAM users cannot access the ECS UI.
IAM group ● An IAM group is a collection of IAM users.
● IAM groups do not nest and contain only IAM users.
● IAM groups let you specify permissions for all the users in
the group making management easier.
● Creating and managing groups can be done from both UI
and API.
● Tagging on groups is not supported.
IAM role ● An IAM role is similar to a user, in that it is an identity with
permission policies that determine what the identity can
and cannot do.
● An IAM role does not have any credentials that are
associated with it.
● An entity assumes a role by calling an API that provides it
with temporary credentials to access a resource.
● A federated user can assume an IAM role by authenticating
with an external identity provider.
● An IAM user can assume a role in the same or different
account (cross-account access).

NOTE: IAM and namespace root users access S3 and IAM APIs using Access Keys. Access Keys are long-term credentials
which consist of an access key ID and a secret access key. A user can have at most two Access Keys associated with it at
any time.

Tagging IAM Entities (Users and Role)


A tag is a label that you assign to a resource. Each tag consists of a key and an optional value, both of which you define. Custom
attributes are added to users and roles using a tag key-value pair. These tags can be used to control the access of an entity to
resources or to control what tags can be attached to an entity. Groups and policies cannot be tagged. You can apply the same
tag to multiple entities, but multiple tags on one entity cannot have the same key. A maximum of 50 tags per IAM entity is allowed.
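
The following is a minimal sketch of attaching and reading tags on an IAM user from an application, using an AWS-compatible
IAM SDK (boto3) pointed at the ECS IAM endpoint. The endpoint URL and credentials are placeholders, and the sketch assumes
ECS exposes the corresponding IAM-compatible tagging actions for your release.

# Minimal sketch: tag an IAM user and list its tags.
import boto3

iam = boto3.client(
    "iam",
    endpoint_url="https://ecs.example.com:9021/iam",   # hypothetical IAM endpoint
    aws_access_key_id="<namespace-root-access-key>",
    aws_secret_access_key="<namespace-root-secret-key>",
)

iam.tag_user(UserName="app-user1",
             Tags=[{"Key": "project", "Value": "analytics"},
                   {"Key": "cost-center", "Value": "cc-1234"}])

print(iam.list_user_tags(UserName="app-user1")["Tags"])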

Access Management
Access is managed by creating policies and attaching them to IAM identities or resources.

Policies
A policy is an object that, when associated with an identity or resource, defines its permissions. Permissions in the policies
determine if the request is permitted or denied. Policies are stored in JSON format. ECS IAM enables creation, modification,
listing, assigning, and deletion of policies on an identity or resource.
The following policy types are supported:

Table 21. Policy types


Policies Description
Identity-based policies Policies that are assigned to users, groups, and roles which grant permissions to an identity.
● Inline Policies
● Managed Policies (Both ECS and Customer managed)
Resource-based policies These policies are inline policies that are assigned to an ECS resource that grants specified
principal permission to perform specific action on the resource.

● Bucket Policy - Existing support for bucket policies is adapted to support IAM use cases.
● Trust Policy - Is a resource-based policy that is attached to an IAM role. Trust policies
identify the principal entities that can assume the role.
Permission Boundaries Use a managed-policy as the permission boundary for an IAM entity (user or role). That policy
defines the maximum permissions that identity-based policies can grant to an entity, but does
not grant permissions. Permissions boundaries do not define the maximum permissions that a
resource-based policy can grant to an entity.
Session policies Session policies are used with AssumeRole, AssumeRoleWithSAML, and GetFederationToken
APIs. Session policies limit the permissions that the role or user's identity-based policies grant
to the session. Session policies limit permissions for a created session, but do not grant
permissions.
Access Control Lists (ACLs) Existing ECS ACLs on buckets and objects are adapted to support IAM use cases. ACLs are
cross-account permissions policies that grant permissions to the specified principal. ACLs
cannot grant permissions to entities within the same account.

ECS IAM protects the following resources:


● Object Head API
○ S3 (buckets and objects)
● STS APIs
○ AssumeRole - Provides temporary credentials for cross account access.
○ AssumeRoleWithSAML - Provides temporary credentials for SAML authenticated users.
○ GetFederationToken - Provides temporary credentials for federated users.
● IAM API

ACLs
Access control lists enable you to manage access to objects and buckets. An ACL is attached to all objects and buckets.

Users
An IAM user represents a person or application in the namespace that can interact with ECS resources.

Table 22. IAM user


Field Description
Name Name of user
Creation Time The time at which the user is created.
Actions ● Create
● Delete
● Get
● List
Attached Groups Groups that are attached to the user.
Access Keys Access Keys that are related to the user.
Secret Keys Secret Keys that are related to the user.



New User

Steps
1. Select Manage > Identity and Access (S3) > Users > NEW USER.
2. In the Name field, type a unique name for the user and click NEXT.
To cancel creating a user, click CANCEL.
3. Add the user to one or more groups that give the user permissions to perform the required tasks, and click NEXT.
● You can also attach permission policies to the user and grant permissions.
● To limit the permissions of a user, you can set a permissions boundary by selecting an existing managed policy.
4. You can attach tags to add metadata to a user.
5. Click NEXT.
6. Review the data of the user, and click CREATE USER.
7. Click COMPLETE.
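
The steps above use the portal; IAM users can also be created through the IAM-compatible API. The following is a minimal
sketch using boto3, where the endpoint URL, credentials, and the user and group names are illustrative placeholders.

# Minimal sketch: create an IAM user and add it to an existing group.
import boto3

iam = boto3.client(
    "iam",
    endpoint_url="https://ecs.example.com:9021/iam",   # hypothetical IAM endpoint
    aws_access_key_id="<namespace-root-access-key>",
    aws_secret_access_key="<namespace-root-secret-key>",
)

iam.create_user(UserName="app-user1")
iam.add_user_to_group(GroupName="readers", UserName="app-user1")
print([g["GroupName"] for g in iam.list_groups_for_user(UserName="app-user1")["Groups"]])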

Delete Users

About this task


NOTE: Deleting a user permanently deletes the user, including all user data, user security credentials, user inline policies, and
user policy attachments.

Steps
1. Select Manage > Identity and Access (S3) > Users > DELETE USERS.
2. Select the user.
You can select more than one user.
3. Click DELETE USERS.
4. Click OK in the pop-up window.

Groups
An IAM group is a collection of IAM users. You can use groups to specify permissions for a collection of IAM users.

Table 23. IAM groups


Field Description
Name Name of the group
Users Users in the group
Policies Policies that are attached to the group, including inline policies.
Creation Time Creation date and time of the group
Actions ● Create
● Delete
● Get
● List



New Group
Steps
1. Select Manage > Identity and Access (S3) > Groups > NEW GROUP.
2. In the Group Name field, type a unique name for the group and click NEXT.
3. Select a Policy and click NEXT.
4. Click SAVE.
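
The same operation can be scripted through the IAM-compatible API. The following is a minimal sketch using boto3; the
endpoint URL, credentials, group name, and policy ARN are placeholders, and the ARN must be one that ECS returns for a
policy that exists in your namespace.

# Minimal sketch: create a group and attach a managed policy to it.
import boto3

iam = boto3.client("iam",
                   endpoint_url="https://ecs.example.com:9021/iam",  # hypothetical
                   aws_access_key_id="<access-key>",
                   aws_secret_access_key="<secret-key>")

iam.create_group(GroupName="readers")
iam.attach_group_policy(GroupName="readers",
                        PolicyArn="arn:aws:iam::ns1:policy/ReadOnly")  # placeholder ARN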

Delete Groups
About this task
NOTE: Deleting a group removes the permissions that belong to the group. Membership is also removed for users that are
members of the group.

Steps
1. Select Manage > Identity and Access (S3) > Groups > DELETE GROUPS.
2. Select the group.
You can select more than one group.
3. Click DELETE GROUPS.
4. Click OK in the pop-up window.

Roles
A role is similar to a user, in that it is an identity with permission policies that determine what the identity can and cannot do.
However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also,
a role does not have any credentials (password or access keys) associated with it. Instead, if a user is assigned to a role, access
keys are created dynamically and provided to the user.
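
The following is a minimal sketch of that dynamic-credential behavior: an IAM user calls the STS AssumeRole API and receives
temporary credentials scoped by the role's policies, then uses them for S3 access. The endpoint URLs and the role ARN are
illustrative placeholders.

# Minimal sketch: assume a role and use the temporary credentials with S3.
import boto3

sts = boto3.client("sts",
                   endpoint_url="https://ecs.example.com:9021/sts",  # hypothetical
                   aws_access_key_id="<iam-user-access-key>",
                   aws_secret_access_key="<iam-user-secret-key>")

creds = sts.assume_role(RoleArn="arn:aws:iam::ns1:role/backup-operator",  # placeholder
                        RoleSessionName="nightly-backup")["Credentials"]

s3 = boto3.client("s3",
                  endpoint_url="https://ecs.example.com:9021",
                  aws_access_key_id=creds["AccessKeyId"],
                  aws_secret_access_key=creds["SecretAccessKey"],
                  aws_session_token=creds["SessionToken"])
print(s3.list_buckets()["Buckets"])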

Table 24. Role


Field Description
Name Name of role
Description Description of role
Maximum CLI/API session duration Maximum session duration for the role
Namespace Namespace that can use this role

Effect Whether the trust policy statement allows or denies the specified principal (Allow or Deny).
Principal The principal (for example, a principal ARN) that is allowed or denied permission to assume the role.
Assume Role Policy Document The trust relationship policy document that grants an entity
permission to assume the role.
Permissions Boundary The ARN of the policy that is used to set the permission
boundary for the role.
Tags A list of tags that you want to attach to the newly created
role. Each tag consists of a key name and an associated value.



New Role

Steps
1. Select Manage > Identity and Access (S3) > Roles > NEW ROLE.
2. In the Name field, type a unique name for the role. Describe the role in the Description field. Click Edit to set the
maximum session duration for the role, and click NEXT.
3. NOTE: Choose either Step 3 or Step 4, then go to Step 5.

Click Namespace.
a. Select Effect: Allow or Deny.
b. Click ADD PRINCIPAL ARN to add principal ARN, and click SAVE.
c. Click NEXT.
4. Click SAML2.0 Federation and click NEXT.
a. Select a SAML Provider.
b. Select an Attribute.
c. Enter a Value.
Click ADD CONDITION if you want to add conditions.
d. Click NEXT.
e. Enter the JSON file and click NEXT.
5. Add permissions to the role and click NEXT.
To limit permissions of a role, you can set permissions boundary.
6. You can attach tags to add metadata to a role.
7. Click NEXT.
8. Review the data of the role, and click CREATE ROLE.
9. Click COMPLETE.
NOTE: Clicking EDIT TRUST RELATIONSHIP opens up a JSON editor which contains the trust policy.

Delete Roles

About this task

NOTE: Deleting a role deletes the role along with its inline policies and role policy attachments.

Steps
1. Select Manage > Identity and Access (S3) > Roles > DELETE ROLES.
2. Select the role.
You can select more than one role.
3. Click DELETE ROLES.
4. Click OK in the pop-up window.

Policies
An IAM policy is a document in JSON format that defines permissions for an identity or resource.

Table 25. IAM policy


Field Description
Policy Name Name of the policy

Type Type of the policy
Used as ● Policy permissions
● Permission boundary
Description Description of the policy.
Actions ● Attach
● Detach
● Delete

New Policy

Steps
1. NOTE: Inline policies can be applied to IAM users after the users are created, by using the edit operation. However,
managed policies are recommended.

Select Manage > Identity and Access (S3) > Policies > NEW POLICY.
2. Enter the details in the Name, and Description fields, and click NEXT.
3. Select the appropriate options in the Service, Actions, Resources, and Request Condition fields, and click NEXT.
Optionally you can click ADD ADDITIONAL PERMISSIONS to set additional permissions to the new policy.
4. Review the data of the new policy, and click CREATE POLICY.
5. Click COMPLETE.
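
Customer-managed policies can also be created through the IAM-compatible API instead of the portal wizard above. The
following is a minimal sketch using boto3; the JSON policy document, endpoint URL, credentials, and names are illustrative
placeholders.

# Minimal sketch: create a managed policy and attach it to a user.
import json
import boto3

iam = boto3.client("iam",
                   endpoint_url="https://ecs.example.com:9021/iam",  # hypothetical
                   aws_access_key_id="<access-key>",
                   aws_secret_access_key="<secret-key>")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": ["arn:aws:s3:::example-bucket", "arn:aws:s3:::example-bucket/*"],
    }],
}

policy = iam.create_policy(PolicyName="ExampleBucketReadOnly",
                           PolicyDocument=json.dumps(policy_doc))
iam.attach_user_policy(UserName="app-user1", PolicyArn=policy["Policy"]["Arn"])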

Delete Policies

About this task


NOTE:
● Deleting policy will detach the policy from entities it is attached to, and all policy versions will be deleted.
● The permissions boundary settings will be cleared on users or roles if the policy is used as permissions boundary on
them.

Steps
1. Select Manage > Identity and Access (S3) > Policies > DELETE POLICIES.
2. Select the policy.
You can select more than one policy.
3. Click DELETE POLICIES.
4. Click OK in the pop-up window.

Policy Simulator- Existing Policies


You can use the POLICY SIMULATOR to test and troubleshoot IAM Policies and Resource Policies.

About this task


NOTE:
● POLICY SIMULATOR is available from ECS 3.5.0.1.

● If more than one policy is attached to the user, group, or role, you can test all the policies, or select individual policies to
test.

Steps
1. Select Manage > Identity and Access (S3) > Policies > POLICY SIMULATOR.
The POLICY SIMULATOR opens in a new tab.
2. Select Namespace from the drop-down in the page header.
Existing Policies is the default in Mode.
3. Select an IAM entity to test the policy that is attached to it, at the upper left of the page.
● Select Group from the Group, Role, User drop-down list. Then choose the group. Or,
● Select Role from the Group, Role, User drop-down list. Then choose the role. Or,
● Select User from the Group, Role, User drop-down list. Then choose the user.
After you select the entity, you can see the policies that are attached to it.
4. Select the policy.
You can select more than one policy to test.
Click the policy to see the details about the policy.
5. Select Service.
6. Select Actions.
7. Select Global Settings.
Update the fields as required to test the policy.
NOTE:
● Include Resource Policy is available only for buckets and objects. Select Include Resource Policy, if you want to
include the policies that are associated with the bucket or the object in the policy simulation.
● Caller ARN is the ARN of the IAM user that you want to use as the simulated caller of the API operations. Caller
ARN is required if you include a resource policy so that the principal element of the policy has a value to use in
evaluating the policy.
● For global settings, if the policy is simulated without specifying a global condition key value, the following message
appears with Ok and Cancel options:

You have not specified any value for global conditions, are you sure to proceed
on simulation run anyway?

Click Ok to simulate the policy ignoring the global condition key.

8. Select Run Simulation.

Results
The result of the simulation is displayed in the Permission column.
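
The following is a minimal sketch of driving a simulation programmatically. It assumes that ECS exposes the IAM-compatible
SimulatePrincipalPolicy action behind the portal simulator; if your release does not, use the POLICY SIMULATOR page instead.
The endpoint, credentials, and ARNs are placeholders.

# Minimal sketch: simulate a principal's permissions for one action on one resource.
import boto3

iam = boto3.client("iam",
                   endpoint_url="https://ecs.example.com:9021/iam",  # hypothetical
                   aws_access_key_id="<access-key>",
                   aws_secret_access_key="<secret-key>")

result = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::ns1:user/app-user1",        # placeholder user ARN
    ActionNames=["s3:GetObject"],
    ResourceArns=["arn:aws:s3:::example-bucket/report.csv"],
)
for r in result["EvaluationResults"]:
    print(r["EvalActionName"], r["EvalDecision"])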

Policy Simulator- New Policy


Steps
1. Select Manage > Identity and Access (S3) > Policies > POLICY SIMULATOR.
The POLICY SIMULATOR opens in a new tab.
2. Select Namespace from the drop-down in the page header.
3. Select New Policy in Mode.
Existing Policies is the default in Mode.
4. Click New Policy.
5. Enter the name of the new policy.
6. Enter the policy details.
7. Click Apply.
8. Select the policy.

You can select more than one policy to test.
Click the policy to see the details about the policy.
9. Select Service.
10. Select Actions.
11. Select Global Settings.
Update the fields as required to test the policy.
NOTE:
● Include Resource Policy is available only for buckets and objects. Select Include Resource Policy, if you want to
include the policies that are associated with the bucket or the object in the policy simulation.
● Caller ARN is the ARN of the IAM user that you want to use as the simulated caller of the API operations. Caller
ARN is required if you include a resource policy so that the principal element of the policy has a value to use in
evaluating the policy.

12. Select Run Simulation.

Identity Provider
Table 26. Identity Provider
Field Description
Name Name of the identity provider.
Type Only SAML is supported.
Created The time at which the identity provider is created.
Metadata An XML document generated by an identity provider (IdP)
that supports SAML 2.0.

New Identity Provider

Steps
1. Select Manage > Identity and Access (S3) > Identity Provider > NEW IDENTITY PROVIDER.
2. Enter the details in the Name and Type fields.
3. Click Choose to select metadata provider.
4. Click NEXT.
5. Verify the details of the identity provider, and click NEW IDENTITY PROVIDER.
6. Click COMPLETE.

Delete Providers

Steps
1. Select Manage > Identity and Access (S3) > Identity Provider > DELETE PROVIDERS.
2. Select the identity provider.
You can select more than one identity provider.
3. Click DELETE PROVIDERS.
4. Click OK in the pop-up window.



SAML Service Provider Metadata
Security Assertion Markup Language (SAML) is an open standard for exchanging authentication and authorization data between
parties, in particular, between an identity provider and a service provider.

Table 27. SAML Service Provider Metadata


Field Description
Java Key Store The base64-encoded Java key store that is used in metadata generation.
Key Alias Key alias for the Java key store.
Key Password Password for the Java key store.
DNS Base URL DNS base URL to be used in metadata.

Generate SAML Service Provider Metadata


You can use this interface to generate the ECS metadata XML that is used to configure the ECS trust relationship with the
identity provider. The generation requires a Java key store and a DNS domain name, which is used as the entity Base URL to set
the Location in the Assertion Consumer Service.

Steps
1. Select Manage > Identity and Access (S3) > SAML Service Provider Metadata.
2. Click Choose to select a Java Key Store.
3. Enter the details in the Key Alias, Key Password, DNS Base URL fields.
4. Click GENERATE.

Root Access Key


Table 28. Root Access Key
Field Description
Access key ID ID of the access key
Created The time at which the access key is created.
Last Used The last time at which the access key is used.
Status ● Active
● Make Inactive
Actions ● Create
● Delete
● Get
● List

Create Access Key

Steps
1. NOTE: A root user can have a maximum of two access keys that are associated with it at any time.

Select Manage > Identity and Access (S3) > Root Access Key.

2. Click CREATE ACCESS KEY.
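
Access keys for IAM users can also be created and listed through the IAM-compatible API, subject to the same two-key limit
noted in step 1. The following is a minimal sketch using boto3; the endpoint URL, credentials, and user name are placeholders,
and whether a given identity can manage keys for another user depends on the policies in effect.

# Minimal sketch: create and list access keys for an IAM user.
import boto3

iam = boto3.client("iam",
                   endpoint_url="https://ecs.example.com:9021/iam",  # hypothetical
                   aws_access_key_id="<existing-access-key>",
                   aws_secret_access_key="<existing-secret-key>")

new_key = iam.create_access_key(UserName="app-user1")["AccessKey"]
print(new_key["AccessKeyId"])          # the SecretAccessKey is shown only at creation

for key in iam.list_access_keys(UserName="app-user1")["AccessKeyMetadata"]:
    print(key["AccessKeyId"], key["Status"])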



8
Buckets
Topics:
• Introduction to buckets
• Working with buckets in the ECS Portal
• Create a bucket using the S3 API (with s3curl)
• Enable Data Movement
• Bucket, object, and namespace naming conventions
• Simplified bucket delete
• Priority task coordinator
• Partial list results
• Bucket listing limitation
• Disable unused services

Introduction to buckets
Buckets are object containers that are used to control access to objects and to set properties that define attributes for all
contained objects, such as retention periods and quotas.
In S3, object containers are called buckets and this term has been adopted as a general term in ECS. In Atmos, the equivalent of
a bucket is a subtenant. In Swift, the equivalent of a bucket is a container. In CAS, a bucket is a CAS pool.
In ECS, buckets are assigned a type, which can be S3, Swift, Atmos, or CAS. S3, Atmos, or Swift buckets can be configured
to support file system access (for NFS). A bucket that is configured for file system access can be read and written by using
its object protocol and by using the NFS protocol. S3 and Swift buckets can also be accessed using each other's protocol.
Accessing a bucket using more than one protocol is often referred to as cross-head support.
You can create buckets for each object protocol using its API, usually using a client that supports the appropriate protocol. For
information about how to create a bucket using the S3 API, see Create a bucket using the S3 API (with s3curl). You can also
create S3, file system-enabled (NFS), and CAS buckets using the ECS Portal and the ECS Management REST API.
Bucket names for the different object protocols must conform to the ECS specifications described in Bucket and key naming
conventions.
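
As an alternative to the portal or s3curl, a bucket can be created from an application with any S3-compatible SDK. The
following is a minimal sketch using boto3; the endpoint URL, credentials, and bucket name are placeholders, and ECS-specific
bucket options (for example, file system enablement) are set through ECS extension headers or the portal rather than this call.

# Minimal sketch: create an S3 bucket on ECS with boto3.
import boto3

s3 = boto3.client("s3",
                  endpoint_url="https://ecs.example.com:9021",   # hypothetical endpoint
                  aws_access_key_id="user1",
                  aws_secret_access_key="<secret-key>")

s3.create_bucket(Bucket="example-bucket")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])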

Bucket ownership
A bucket is assigned to a namespace and object users are also assigned to a namespace. An object user can create buckets only
in the namespace to which the object user is assigned. An ECS System or Namespace Administrator can assign the object user
as the owner of a bucket, or a grantee in a bucket ACL, even if the user does not belong to the same namespace as the bucket,
so that buckets can be shared between users in different namespaces. For example, in an organization where a namespace is a
department, a bucket can be shared between users in different departments.

Bucket access
Objects in a bucket that belong to a replication group spanning multiple VDCs can be accessed from all of the VDCs in the
replication group. Objects in a bucket that belongs to a replication group that is associated with only one VDC can be accessed
from only that VDC. Buckets cannot be accessed or listed from other VDCs that are not in the replication group. However,
because the identity of a bucket and its metadata, such as its ACL, are global management information in ECS, and the global
management information is replicated across the system storage pools, the existence of the bucket can be seen from all VDCs in
the federation.

For information about how objects in buckets can be accessed during site outages, see TSO behavior with the ADO bucket
setting turned on.
In the ECS Portal, you can:
● Create a bucket
● Edit a bucket
● Set ACLs
● Set bucket policies

Working with buckets in the ECS Portal


You can use the Bucket Management page available from Manage > Buckets to:
● Create a bucket
● Edit a bucket
● Edit ACLs
● Edit bucket policies
● Delete a bucket
● View details of existing buckets in a selected namespace

Bucket settings
The following table describes the settings that you can specify when you create or edit a bucket:

Table 29. Bucket settings

Name (can be edited: No)
Name of the bucket. For information about bucket naming, see Bucket, object, and namespace naming conventions.

Namespace (can be edited: No)
Namespace with which the bucket is associated.

Replication Group (can be edited: No)
Replication group in which the bucket is created.

Bucket Owner (can be edited: Yes)
Bucket owner.

File System (can be edited: No)
Indicates whether the bucket can be used as a file system (NFS export). To simplify access to the file system, a default group,
and default permissions that are associated with the group, can be defined. For more information, see Default group.
NOTE: File system-enabled buckets only support the / delimiter when listing objects.

CAS (can be edited: No)
Indicates whether the bucket can be used for CAS data.

Metadata Search (can be edited: No)
Indicates that metadata search indexes are created for the bucket, based on specified key values.
● If turned on, metadata keys that are used as the basis for indexing objects in the bucket can be defined. These keys must be
specified at bucket creation time.
● After the bucket is created, search can be turned off altogether, but the configured index keys cannot be changed.
● The way to define the attribute is described in Metadata search fields.
NOTE: Metadata that is used for indexing is not encrypted, so metadata search can still be used on a bucket when Server-side
Encryption (D@RE) is turned on.

Data Mobility (can be edited: Yes)
Indicates whether Data Mobility is enabled or not. Requires Metadata Search and the LastModified field to be indexed to enable
it. Data Mobility allows you to set up automated copying of bucket data to a target bucket. This target bucket can be on an
external ECS cluster or in the cloud on AWS or similar S3-compatible storage. The bucket user must be an IAM user to enable
Data Mobility.

Access During Outage (ADO) (can be edited: Yes)
The ECS system behavior when accessing data in the bucket during a temporary site outage in a geo-federated setup.
● If you turn this setting on, and a temporary site outage occurs, if you cannot access a bucket at the failed site where the
bucket was created (owner site), you can access a copy of the bucket at another site. Objects that you access in the
buckets in the namespace might have been updated at the failed site, but changes might not have been propagated to the
site from which you are accessing the object.
● Turning this setting on in Object Lock-enabled buckets is disabled by default. Users can explicitly request system
administrators to allow this feature after understanding the data loss risks in Object Lock-enabled buckets during a
temporary site outage.
● If you turn this setting off, data in the site which has the temporary outage is not available for access from other sites, and
object reads for data that is owned by the failed site fail. This is the default ECS system behavior to maintain strong
consistency by continuing to allow access to data owned by accessible sites and preventing access to data owned by a
failed site.
● Read-Only option: Specifies whether a bucket with the ADO setting turned on is accessible as read-only or read/write
during a temporary site outage. If you select the Read-Only option, the bucket is only accessible in read-only mode during
the outage.
● For more information, see TSO behavior with the ADO bucket setting turned on.

Server-side Encryption (can be edited: No)
Indicates whether server-side encryption is turned on or off.
● Server-side encryption, also known as Data At Rest Encryption or D@RE, encrypts data inline before storing it on ECS disks
or drives. This encryption helps prevent sensitive data from being acquired from discarded or stolen media. If you turn
encryption on when the bucket is created, this feature cannot be turned off later.
● If the namespace of the bucket is encrypted, then every bucket is encrypted. If the namespace is not encrypted, you can
select encryption for individual buckets.
● For a complete description of the feature, see the ECS 3.8 Security Configuration and Hardening Guide.

Quota (can be edited: Yes)
The storage space limit that is specified for the bucket. You can specify a storage limit for the bucket and define notification
and access behavior when the quota is reached. The quota set for a bucket cannot be less than 1 GiB. You can specify bucket
quota settings in increments of GiB. You can select one of the following quota behavior options:
● Notification Only at < quota_limit_in_GiB >: Soft quota setting at which you are notified.
● Block Access Only at < quota_limit_in_GiB >: Hard quota setting which, when reached, prevents write/update access to
the bucket.
● Block Access at < quota_limit_in_GiB > and Send Notification at < quota_limit_in_GiB >: Hard quota setting which, when
reached, prevents write/update access to the bucket, and the quota setting at which you are notified.
NOTE: Quota enforcement depends on the usage that is reported by ECS metering. Metering is a background process that is
designed so that it does not impact foreground traffic, and so the metered value can lag the actual usage. Because of the
metering lag, there can be a delay in the enforcement of quotas.

Bucket Tagging (can be edited: Yes)
Name-value pairs that are defined for a bucket and enable buckets to be classified. For more information about bucket tagging,
see Bucket tagging.

Bucket Retention (can be edited: Yes)
Retention period for a bucket.
● The expiration of a retention period on an object within a bucket is calculated when a request to modify an object is made
and is based on the value set on the bucket and the objects themselves.
● The retention period can be changed during the lifetime of the bucket.
● You can find more information about retention and applying retention periods and policies in Retention periods and policies.

Auto-Commit Period (can be edited: Yes)
The autocommit period is the time interval in which updates through NFS are allowed for objects under retention. This
attribute enables NFS files that are written to ECS to be WORM-compliant. The interval is calculated from the last modification
time. The autocommit value must be less than or equal to the retention value, with a maximum of 1 day. A value of 0 indicates
no autocommit period.

Local Object Metadata Reads (can be edited: Yes)
Indicates whether Local CAS Metadata Reads is turned on or off. If turned on, this improves latency on CAS read operations
when the metadata has been successfully replicated to the local site.

Default group
When you turn the File System setting on for a bucket to enable file system access, you can assign a default group for the
bucket. The default group is a Unix group, the members of which have permissions on the bucket when it is accessed as a file
system. Without this assignment, only the bucket owner can access the file system.
You can also specify Unix permissions that are applied to files and directories created using object protocols so that they are
accessible when the bucket is accessed as a file system.

Metadata search fields


When you set the Metadata Search option to On for a bucket, objects in the bucket can be indexed based on their metadata
fields. S3 object clients can search for objects that are based on the indexed metadata using a rich query language.
Each object has a system metadata that is automatically assigned, and can also have user-assigned metadata. Both system and
user metadata can be in the index and used as the subject of metadata searches.
When metadata search is enabled on a bucket, you can select System or User as the metadata search Type. When you select
a metadata search Type as System, metadata that is automatically assigned to objects in a bucket is listed for selection in the
Name drop-down menu.
When you select a metadata search Type of User, you must specify the name of the user metadata to create an index for. You
must also specify its datatype so that ECS knows how to interpret the metadata values provided in search queries.
You can read more about the metadata search feature in the ECS Data Access Guide.
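As a rough illustration of how a client uses these indexes, the following sketch queries a bucket whose LastModified system key
has been indexed, using the ECS-modified s3curl described later in this chapter. The query expression syntax, and whether your
s3curl version signs the ECS-specific query parameter correctly, are assumptions to verify against the ECS Data Access Guide;
the profile, bucket name, and date value are placeholders.

./s3curl.pl --id=my_profile -- \
  "http://<DataNodeIP>:9020/mybucket?query=LastModified>2024-01-01T00:00:00Z"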

Bucket tagging
Bucket tags are key-value pairs that you can associate with a bucket so that the object data in the bucket can be categorized.
For example, you could define keys like Project or Cost Center on each bucket and assign values to them. You can add up
to ten tags to a bucket.
You can assign bucket tags and values using the ECS Portal or using a custom client through the ECS Management REST API.
Bucket tags are included in the metering data reports displayed in the ECS Portal or retrieved using the ECS Management REST
API.
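For a custom client, a minimal sketch of assigning tags through the ECS Management REST API is shown below. Both the
resource path (/object/bucket/{bucketName}/tags) and the payload shape used here are assumptions to confirm against the
ECS Management REST API Reference. The sketch assumes that $TOKEN already holds a management session token obtained
from the /login endpoint on port 4443, and the namespace, bucket, and tag values are placeholders.

# Assumed resource path and payload shape; confirm against the ECS Management REST API Reference.
curl -ks -X PUT \
  -H "X-SDS-AUTH-TOKEN: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"TagSet": [{"Key": "Project", "Value": "Alpha"}, {"Key": "Cost Center", "Value": "CC-1234"}]}' \
  "https://<ECS IP Address>:4443/object/bucket/mybucket/tags?namespace=ns1"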

Create a bucket
You can create and configure S3, S3+FS, or CAS buckets in the ECS Portal.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can create buckets in any namespace.
● A Namespace Administrator can create buckets in the namespace in which they are the administrator.

About this task


For CAS-specific instructions on setting up a CAS bucket for a CAS object user, see the ECS Data Access Guide.

Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, click New Bucket.
3. On the New Bucket page, on the Basic tab, do the following:
a. In the Name field, type the bucket name.
b. In the Namespace field, select the namespace that you want the bucket and its objects to belong to.
c. In the Replication Group field, select the replication group that you want to associate the bucket with.
d. In the Bucket Owner field, type the bucket owner, or select the Set current user as Bucket Owner checkbox.
The bucket owner must be an ECS object user for the namespace. If you do not specify a user, you are assigned as the
owner. However, you cannot access the bucket unless your username is also assigned as an object user.
The user that you specify is given Full Control.
e. Click Next.
4. On the New Bucket page, on the Required tab, do the following:
a. In the File System field, click On to specify that the bucket supports operation as a file system (for NFS).
The bucket is an S3 bucket that supports file systems.
You can set a default UNIX group for access to the bucket and for objects that are created in the bucket. For more
information, see Default group.
b. In the CAS field, click On to set the bucket as a CAS bucket.
By default, CAS is disabled and the bucket is marked as an S3 bucket.
In the Reflection Expiration field, click On to configure an expiration time for reflections in the bucket.
In the Reflection Age field, select the appropriate expiration time. (The minimum expiration time is 1 day, and the
maximum is 99 years.)
If there is no configured expiration time for a reflection, the reflection is never deleted.
In the Local Object Metadata Reads field, click On to enable Local CAS Metadata reads. This option is available only
when the bucket is enabled for CAS data.
You can enable or disable Local Object Metadata Reads by editing the bucket. Ensure that ADO is enabled and Read-Only
is disabled before enabling Local Object Metadata Reads.
c. In the Metadata Search field, click On to specify that the bucket supports searches that are based on object metadata.
If the Metadata Search setting is turned on, you can add user and system metadata keys that are used to create object
indexes. For more information about entering metadata search keys, see Metadata search fields.
NOTE: If the bucket supports CAS, metadata search is automatically enabled and a CreateTime key is automatically
created. The metadata can be searched using the S3 metadata search capability or using the Centera API.

d. In the Access During Outage field, click On if you want the bucket to be available during a temporary site outage. For
more information about this option, see TSO behavior with the ADO bucket setting turned on.
● If the Access During Outage setting is turned on, you have the option of selecting the Read-Only checkbox to
restrict create, update, or delete operations on the objects in the bucket during a temporary site outage. Once you
turn the Read-Only option on for the bucket, you cannot change it after the bucket is created. For more information
about this option, see TSO behavior with the ADO bucket setting turned on.
e. In the Server-side Encryption field, click On to specify that the bucket is encrypted.
f. Click Next.

5. On the New Bucket page, on the Optional tab, do the following:
a. In the Quota field, click On to specify a quota for the bucket and select the quota setting you require.
The settings that you can specify are described in Bucket settings.
b. In the Bucket Tagging field, click Add to add tags, and type name-value pairs.
For more information, see Bucket tagging.
c. In the Bucket Retention Period field, type a time period to set a bucket retention period for the bucket, or click
Infinite if you want objects in the bucket to be retained forever.
For more information about retention periods, see Retention periods and policies.
d. In the Auto-Commit Period field, type a time period to enable updates to the files that are under retention. The interval
applies only to file enabled buckets.
The autocommit value must be less than or equal to the retention value with a maximum of 1 day. A value of 0 indicates
no autocommit period.
e. Click Save to create the bucket.

Results
To assign permissions on the bucket for users or groups, see the tasks below.

Edit a bucket
You can edit some bucket settings after the bucket has been created and after it has had objects that are written to it.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the settings for a bucket in any namespace.
● A Namespace Administrator can edit the settings for a bucket in the namespace in which they are the administrator.

About this task

NOTE: You can copy your S3 bucket ARN using the copy icon next to your ARN.

You can edit the following bucket settings:


● Quota
● Bucket Owner
● Bucket Tagging
● Access During Outage
● Bucket Retention
● Reflection Expiration and Reflection Age (for CAS buckets)
You cannot change the following bucket settings:
● Replication Group
● Server-side Encryption
● File System
● CAS
● Metadata Search

Known issue with changing NFS bucket owners: If you change the bucket owner and try to revert the changes, reverting the
bucket ownership does not work. You cannot access the NFS mount as the previous bucket owner until you reset the bucket
ownership using the API mentioned in the support KB article KB 534080.

Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, in the Buckets table, select the Edit action for the bucket for which you want to
change the settings.
3. Edit the settings that you want to change.

You can find out more information about the bucket settings in Bucket settings.
4. Click Save.

Set ACLs
The privileges a user has when accessing a bucket are set using an Access Control List (ACL). You can assign ACLs for a user,
for predefined groups, such as all users, and for a custom group.
When you create a bucket and assign an owner to it, an ACL is created that assigns a default set of permissions to the bucket
owner - the owner is, by default, assigned full control.
You can modify the permissions that are assigned to the owner, or you can add new permissions for a user by selecting the Edit
ACL operation for the bucket.
In the ECS Portal, the Bucket ACLs Management page has User ACLs, Group ACLs, and Custom Group ACLs tabs to
manage the ACLs associated with individual users and predefined groups, and to allow groups to be defined that can be used to
access the bucket as a file system.

NOTE: For information about ACLs with CAS buckets, see the ECS Data Access Guide.

Bucket ACL permissions reference

The ACL permissions that can be assigned are provided in the following table. The permissions that are applicable depend on the
type of bucket.

Table 30. Bucket ACLs


ACL Permission
Read Allows the user to list the objects in the bucket.
Read ACL Allows the user to read the bucket ACL.
Write Allows the user to create or update any object in the bucket.
Write ACL Allows the user to write the ACL for the bucket.
Execute Sets the execute permission when accessed as a file system. This permission has no effect
when the object is accessed using the ECS object protocols.
Full Control Allows the user to Read, Write, Read ACL, and Write ACL.
NOTE: Nonowners can Read, Write, Read ACL, and Write ACL if the permission has been
granted or can only list the objects.

Privileged Write Allows user to perform writes to a bucket or object when the user does not have normal write
permission. Required for CAS buckets.
Delete Allows the user to delete buckets and objects. Required for CAS buckets.
None The user has no privileges on the bucket.

Set the bucket ACL permissions for a user


The ECS Portal enables you to set the bucket ACL for a user or for a pre-defined group.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the ACL settings for a bucket in any namespace.

● A Namespace Administrator can edit the ACL settings for a bucket in the namespace in which they are the administrator.

Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to edit in the table and select the Edit ACL action.
3. On the Bucket ACLs Management page, the User ACLs tab displays by default and shows the ACLs that have been
applied to the users who have access to the bucket.
The bucket owner has default permissions that are assigned.
NOTE: Because the ECS Portal supports S3, S3 + NFS File system, and CAS buckets, the range of permissions that can
be set are not applicable to all bucket types.

4. To set (or remove) the ACL permissions for a user that already has permissions that are assigned, in the ACL table, in the
Action column, click Edit or Remove.
5. To add a user and assign ACL permissions to the bucket, click Add.
a. Enter the username of the user that the permissions apply to.
b. Select the permissions for the user.
For more information about ACL permissions, see Bucket ACL permissions reference.
6. Click Save.

Set the bucket ACL permissions for a predefined group


You can set the ACL for a bucket for a predefined group from the ECS Portal.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the group ACL settings for a bucket in any namespace.
● A Namespace Administrator can edit the group ACL settings for a bucket in the namespace in which they are the
administrator.

Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to edit in the table and select the Edit ACL action.
3. Click the Group ACLs tab to set the ACL permissions for a predefined group.
4. Click Add.
5. The Edit Group page is displayed.
The group names are described in the following table:

Table 31. Predefined groups


Group Description
Public All users, authenticated or not
All users All authenticated users
Other Authenticated users but not the
bucket owner
Log delivery Not supported

6. Select the permissions for the group.


7. Click Save.

Set custom group bucket ACLs
You can set a group ACL for a bucket in the ECS Portal and you can set bucket ACLs for a group of users (Custom Group ACL),
for individual users, or a combination of both. For example, you can grant full bucket access to a group of users, but you can
also restrict (or even deny) bucket access to individual users in that group.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the group ACL settings for a bucket in any namespace.
● A Namespace Administrator can edit the group ACL settings for a bucket in the namespace in which they are the
administrator.

About this task


Custom group ACLs enable groups to be defined and for permissions to be assigned to the group. The main use case for
assigning groups to a bucket is to support access to the bucket as a file system. For example, when making the bucket available
for NFS.
Members of the UNIX group can access the bucket when it is accessed as a file system using NFS.

Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to edit in the table and select the Edit ACL action.
3. Click the Custom Group User ACLs tab to set the ACL for a custom group.
4. Click Add.
The Edit Custom Group page displays.
5. On the Edit Custom Group page, in the Custom Group Name field, type the name for the group.
This name can be a Unix/Linux group, or an Active Directory group.
6. Select the permissions for the group.
At a minimum you should assign Read, Write, Execute, and Read ACL.
7. Click Save.

Set bucket policies


The ECS Portal provides a Bucket Policy Editor to enable you to create a bucket policy for an existing bucket.
For each bucket, you can define ACLs for an object user. Bucket policies provide greater flexibility than ACLs and allow
fine-grained control over permissions for bucket operations and for operations on objects within the bucket. Policy conditions
are used to assign permissions for a range of objects that match the condition and are used to automatically assign permissions
to newly uploaded objects. The typical scenarios in which you would use bucket policies are described in Bucket policy
scenarios.
Policies are defined in JSON format and the syntax that is used for policies is the same syntax that is used for Amazon AWS.
The operations for which permissions can be assigned are limited to those operations supported by ECS. For more information,
see the ECS Data Access Guide.
The bucket policy editor has a code view and a tree view. The code view, which is shown in the following screenshot, enables
you to enter JSON policies from scratch or to paste in and modify existing policies. For example, if you have existing policies in
JSON format, you can paste them into the code view and edit them there.

Figure 8. Bucket Policy Editor code view

The tree view, which is shown in the following screenshot, provides a mechanism for navigating a policy and is useful where you
have many statements in a policy. You can expand and contract the statements and search them.

Figure 9. Bucket Policy Editor tree view

Bucket policy scenarios
In general, the bucket owner has full control on a bucket and can grant permissions to other users and can set S3 bucket
policies using an S3 client. In ECS, it is also possible for an ECS System or Namespace Administrator to set bucket policies using
the Bucket Policy Editor from the ECS Portal.
You can use bucket policies in the following typical scenarios:
● Grant bucket permissions to a user
● Grant bucket permissions to all users
● Automatically assign permissions to created objects

Grant bucket permissions to a user


To grant permission on a bucket to a user apart from the bucket owner, specify the resource that you want to change the
permissions for. Set the principal attribute to the name of the user, and specify one or more actions that you want to enable.
The following example shows a policy that grants a user who is named user1 the permission to update and read objects in the
bucket that is named mybucket:

{
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "Grant permission to user1",
            "Effect": "Allow",
            "Principal": ["user1"],
            "Action": [ "s3:PutObject", "s3:GetObject" ],
            "Resource": [ "mybucket/*" ]
        }
    ]
}

You can also add conditions. For example, if you only want the user to read and write objects when accessing the bucket from a
specific IP address, add an IpAddress condition as shown in the following policy:

{
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "Grant permission",
            "Effect": "Allow",
            "Principal": ["user1"],
            "Action": [ "s3:PutObject", "s3:GetObject" ],
            "Resource": [ "mybucket/*" ],
            "Condition": {"IpAddress": {"aws:SourceIp": "<Ip address>"}}
        }
    ]
}

Grant bucket permissions to all users


To grant permission on a bucket to all users, rather than to a specific user, specify the resource that you want to change the
permissions for. Set the principal attribute to anybody (*), and specify one or more actions that you want to enable.
The following example shows a policy that grants anyone permission to read objects in the bucket that is named mybucket:

{
    "Version": "2012-10-17",
    "Id": "S3PolicyId2",
    "Statement": [
        {
            "Sid": "statement2",
            "Effect": "Allow",
            "Principal": ["*"],
            "Action": [ "s3:GetObject" ],
            "Resource": [ "mybucket/*" ]
        }
    ]
}

Automatically assign permissions to created objects


You can use bucket policies to automatically enable access to ingested object data. In the following example bucket policy,
user1 and user2 can create subresources (that is, objects) in the bucket that is named mybucket and can set object ACLs.
Because they can set ACLs, the users can then set permissions for other users. A condition is also set so that the canned
ACL public-read must be specified when an object is created, which ensures that anybody can read all the created objects.

{
    "Version": "2012-10-17",
    "Id": "S3PolicyId3",
    "Statement": [
        {
            "Sid": "statement3",
            "Effect": "Allow",
            "Principal": ["user1", "user2"],
            "Action": [ "s3:PutObject", "s3:PutObjectAcl" ],
            "Resource": [ "mybucket/*" ],
            "Condition": {"StringEquals": {"s3:x-amz-acl": ["public-read"]}}
        }
    ]
}

Create a bucket policy


You can create a bucket policy for a selected bucket using the Bucket Policy Editor in the ECS Portal.

Prerequisites
This operation requires the System Administrator or Namespace Administrator role (for the namespace to which the bucket
belongs).

About this task


You can also create a bucket policy in a text editor and deploy it using the ECS Management REST API or the S3 API.

Steps
1. In the ECS Portal, select Manage > Buckets
2. From the Namespace drop-down, select the namespace to which the bucket belongs.
3. In the Actions column for the bucket, select Edit Policy from the drop-down menu.
4. Provided your policy is valid, you can switch to the tree view of the Bucket Policy Editor. The tree view makes it easier to
view your policy and to expand and contract statements.
5. In the Bucket Policy Editor, type the policy or copy and paste a policy that you have previously created.
Some examples are provided in Bucket policy scenarios and full details of the supported operations and conditions are
provided in the ECS Data Access Guide.
6. Click Save.
The policy is validated and, if valid, the Bucket Policy Editor exits and the portal displays the Bucket Management page. If
the policy is invalid, the error message provides information about the reason the policy is invalid.
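If you deploy a policy file through the S3 API instead, a minimal sketch using the ECS-modified s3curl looks like the following.
The ?policy subresource is the standard S3 PutBucketPolicy call; the profile name, bucket name, and policy file are placeholders,
and the supported operations and conditions accepted in the policy should be confirmed in the ECS Data Access Guide.

./s3curl.pl --id=my_profile --put=mypolicy.json -- \
  "http://<DataNodeIP>:9020/mybucket?policy"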

Restrict user IP addresses that can access a CAS bucket
You can restrict the client IP addresses that can access a CAS bucket. Only clients whose IPs are in the restriction list can
access ECS as the corresponding user.

Introduction
● By default, there is no restriction set for a user.
● CAS user IP restriction is applicable for users across all VDCs.
● Default IP limit for a user is 10, and it is configurable.

NOTE: To change the default IP limit value, contact ECS Remote Support.

To set user IP restrictions

PUT /object/user-cas/ip-restrictions/{namespace_name}/{user_name}

request body example 1: (by default, when there is no IP restriction to set for the user)

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions_param>
</user_ip_restrictions_param>

request body example 2: (only the client IPs provided below will have access as the user)

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions_param>
<ip_restrictions>128.128.128.128</ip_restrictions>
<ip_restrictions>127.127.127.127</ip_restrictions>
<user_name>clientuser</user_name>
</user_ip_restrictions_param>

To get user IP restrictions

GET /object/user-cas/ip-restrictions/{namespace_name}/{user_name}

response body example 1: (by default, when no IP restriction is set for the user)

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions>
<user_name>clientuser</user_name>
</user_ip_restrictions>

response body example 2: (when client IP restrictions are set for the user)

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions>
<ip_restrictions>128.128.128.128</ip_restrictions>
<ip_restrictions>127.127.127.127</ip_restrictions>
<user_name>clientuser</user_name>
</user_ip_restrictions>

To list user IP restrictions

GET /object/user-cas/ip-restrictions/

response body example 1:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions_list>
<userIpRestriction>
<user_name>clientuser</user_name>
</userIpRestriction>
<userIpRestriction>
<user_name>wuser1@sanity.local</user_name>

</userIpRestriction>
</user_ip_restrictions_list>

response body example 2:


<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions_list>
<userIpRestriction>
<ip_restrictions>128.128.128.128</ip_restrictions>
<ip_restrictions>127.127.127.127</ip_restrictions>
<user_name>clientuser</user_name>
</userIpRestriction>
<userIpRestriction>
<user_name>wuser1@sanity.local</user_name>
</userIpRestriction>
</user_ip_restrictions_list>

response body example 1: example for a user without any client IP restriction set. (By
default there won’t be any restriction set for a user)
response body example 2: example for user when client IP restriction is set
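For reference, a hedged sketch of driving these calls from the command line is shown below. It assumes the standard ECS
Management REST API pattern of logging in on port 4443 to obtain an X-SDS-AUTH-TOKEN, and it reuses the request body
shown in the set example above; the credentials, addresses, and file name are placeholders.

# Obtain a management session token (returned in the X-SDS-AUTH-TOKEN response header).
TOKEN=$(curl -iks -u <admin_user>:<password> "https://<ECS IP Address>:4443/login" \
        | grep X-SDS-AUTH-TOKEN | awk '{print $2}' | tr -d '\r')

# Apply the IP restriction list for a CAS user, using an XML file containing the request body shown above.
curl -ks -X PUT \
  -H "X-SDS-AUTH-TOKEN: $TOKEN" \
  -H "Content-Type: application/xml" \
  -d @ip_restrictions.xml \
  "https://<ECS IP Address>:4443/object/user-cas/ip-restrictions/<namespace_name>/<user_name>"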

Create a bucket using the S3 API (with s3curl)


You can use the S3 API to create a bucket in a replication group. Because ECS uses custom headers (x-emc), the string to sign
must be constructed to include these headers. In this task, the s3curl tool is used. There are also several programmatic clients
that you can use, for example, the S3 Java client.

Prerequisites
● To create a bucket, ECS must have at least one replication group configured.
● Ensure that Perl is installed on the Linux machine on which you run s3curl.
● Ensure that the curl tool and the s3curl tool are installed. The s3curl tool acts as a wrapper around curl.
● To use s3curl with x-emc headers, minor modifications must be made to the s3curl script. You can obtain the modified,
ECS-specific version of s3curl from the EMCECS Git Repository.
● Ensure that you have obtained a secret key for the user who creates the bucket. For more information, see ECS Data
Access Guide.

About this task


The EMC headers that can be used with buckets are described in Bucket HTTP headers.

Steps
1. Obtain the identity of the replication group in which you want the bucket to be created, by typing the following command:

GET https://<ECS IP Address>:4443/vdc/data-service/vpools

The response provides the name and identity of all data services virtual pools. In the following example, the ID is
urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global.

<data_service_vpools>
<data_service_vpool>
<creation_time>1403519186936</creation_time>
<id>urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-
bee8-79accc867f64:global</id>
<inactive>false</inactive>
<tags/>
<description>IsilonVPool1</description>
<name>IsilonVPool1</name>
<varrayMappings>
<name>urn:storageos:VirtualDataCenter:1de0bbc2-907c-4ede-b133-
f5331e03e6fa:vdc1</name>
<value>urn:storageos:VirtualArray:793757ab-ad51-4038-b80a-682e124eb25e:vdc1</
value>
</varrayMappings>

</data_service_vpool>
</data_service_vpools>

2. Set up s3curl by creating a .s3curl file in which to enter the user credentials.
The .s3curl file must have permission 0600 (rw-/---/---) when s3curl.pl is run.
In the following example, the profile my_profile references the user credentials for the user@yourco.com account, and
root_profile references the credentials for the root account.

%awsSecretAccessKeys = (
my_profile => {
id => 'user@yourco.com',
key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN'
},
root_profile => {
id => 'root',
key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN'
},
);

3. Add the endpoint that you want to use s3curl against to the .s3curl file.
The endpoint is the address of your data node or the load balancer that sits in front of your data nodes.

push @endpoints , (
'203.0.113.10', 'lglw3183.lss.dell.com',
);

4. Create the bucket using s3curl.pl and specify the following parameters:
● Profile of the user
● Identity of the replication group in which to create the bucket (<vpool_id>), which is set using the x-emc-
dataservice-vpool header.
● Any custom x-emc headers
● Name of the bucket (<BucketName>).
The following example shows a fully specified command:

./s3curl.pl --debug --id=my_profile --acl public-read-write --createBucket -- \
-H 'x-emc-file-system-access-enabled:true' \
-H 'x-emc-dataservice-vpool:<vpool_id>' http://<DataNodeIP>:9020/<BucketName>

The example uses the x-emc-dataservice-vpool header to specify the replication group in which the bucket is created
and the x-emc-file-system-access-enabled header to enable the bucket for file system access, such as for NFS.
NOTE: The --acl public-read-write argument is optional, but can be used to set permissions to enable access
to the bucket. For example, if you intend to access the bucket as NFS from an environment that is not secured using
Kerberos.

If successful, (with --debug on) output similar to the following is displayed:

s3curl: Found the url: host=203.0.113.10; port=9020; uri=/S3B4; query=;


s3curl: ordinary endpoint signing case
s3curl: StringToSign='PUT\n\n\nThu, 12 Dec 2013 07:58:39 +0000\nx-amz-acl:public-read-
write
\nx-emc-file-system-access-enabled:true\nx-emc-dataservice-vpool:
urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global:\n/
S3B4'
s3curl: exec curl -H Date: Thu, 12 Dec 2013 07:58:39 +0000 -H Authorization: AWS
root:AiTcfMDhsi6iSq2rIbHEZon0WNo= -H x-amz-acl: public-read-write -L -H content-
type:
--data-binary -X PUT -H x-emc-file-system-access-enabled:true

-H x-emc-dataservice-vpool:urn:storageos:ObjectStore:e0506a04-340b-4e78-
a694-4c389ce14dc8: http://203.0.113.10:9020/S3B4

Next steps
You can list the buckets using the S3 interface, using:

./s3curl.pl --debug --id=my_profile http://<DataNodeIP>:9020/

Bucket HTTP headers


There are various headers that determine the behavior of ECS when creating buckets using the objects APIs.
The following x-emc headers are provided:

Table 32. Bucket headers


Header Description
x-emc-dataservice-vpool Determines the replication group that is used to store the objects associated
with this bucket. If you do not specify a replication group using the x-emc-
dataservice-vpool header, ECS selects the default replication group that is
associated with the namespace.
x-emc-file-system-access-enabled Configures the bucket for NFS access. The header must not conflict with the
interface that is used. That is, a create bucket request from NFS cannot specify
x-emc-file-system-access-enabled=false.

x-emc-namespace Specifies the namespace that is used for this bucket. If the namespace is not
specified using the S3 convention of a host-style or path-style request, then it
can be specified using the x-emc-namespace header. If the namespace is not
specified in this header, the namespace that is associated with the user is used.
x-emc-retention-period Specifies the retention period that is applied to objects in a bucket. Each
time a request is made to modify an object in a bucket, the expiration of the
retention period for the object is calculated based on the retention period that is
associated with the bucket.
x-emc-is-stale-allowed Specifies whether the bucket is accessible during a temporary VDC outage in a
federated configuration.
x-emc-server-side-encryption-enabled Specifies whether objects that are written to a bucket are encrypted.
x-emc-metadata-search Specifies one or more user or system metadata values that are used to create
indexes of objects for the bucket. The indexes are used to perform object
searches that are filtered based on the indexed metadata.
x-emc-autocommit-period Specifies the autocommit period in seconds (applicable to FS enabled bucket
and for requests from NFS and Atmos heads).
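To illustrate how several of these headers combine on a single create bucket request, the following sketch extends the earlier
s3curl example. The header names are those listed in Table 32; the retention value of 86400 assumes the period is expressed in
seconds, and the namespace, replication group, and bucket names are placeholders.

./s3curl.pl --id=my_profile --createBucket -- \
  -H 'x-emc-namespace:ns1' \
  -H 'x-emc-retention-period:86400' \
  -H 'x-emc-is-stale-allowed:true' \
  -H 'x-emc-dataservice-vpool:<vpool_id>' \
  http://<DataNodeIP>:9020/<BucketName>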

Enable Data Movement


Data Mobility allows you to set up automated copying of bucket data to a target bucket. This target bucket can be on an
external ECS cluster or to the cloud on AWS or similar S3-compatible storage.

Prerequisites
Observe the following prerequisites:
● Data Mobility requires 192 GB nodes.
● Data Mobility must be running on all nodes.
● Metadata search must be enabled with Last Modified indexed.
This generally requires a new bucket, unless an existing bucket already has the Last Modified field indexed.

● Data Mobility is supported on IAM buckets only.
● Internal copy policies, which must be created through the Management API, require an additional IAM role for the target
bucket in order to work properly.
● Data Mobility supports one policy per bucket, and 100 policies per cluster.
● There may be a performance impact when running Data Movement policies. Specifically, you may observe slowness when
copying on the front end.
● Data Mobility does not copy object tags or retention/object lock.
● Data Mobility does not propagate deleted objects or delete markers to the target bucket.

Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, follow the workflow to create a new bucket, and click Next.
"Create a bucket" provides information.
3. In Metadata Search, select On.
a. In Type field, select System.
b. In Name field, select LastModified.
c. Click Add and Next.
4. In Data Mobility, select On.
5. In Policy Type: Copy to S3 is the default option.
a. In Destination Configuration Endpoint, specify the S3 API endpoint where you would like to copy the data.
For example, if you are copying data to AWS, the endpoint might be https://s3.amazonaws.com.
Destination Configuration Endpoint is editable only the first time that you create a policy.
WARNING: When a Data Movement policy is active, it writes objects to the specified destination bucket. This action may
overwrite data if objects with the same names already exist in that bucket. Therefore, the destination bucket should not be
written to by any other applications. If you are concerned about potentially overwriting data, you can enable versioning on
the destination bucket, although versioning may consume more capacity.
b. In TLS Certificate: If the target endpoint uses a self-signed or corporate-CA-signed SSL/TLS certificate, that is, one
that is not publicly trusted, provide the base64 (x.509) certificate, or its CA certificate. This is so that ECS can securely
communicate with the target.
Exclude the BEGIN CERTIFICATE and END CERTIFICATE statements from the key.
c. In Access Key, specify the S3 access key that is required to access the target bucket.
Access Key is a required field.
d. In Secret Key, specify the S3 secret key that is required to access the target bucket.
Secret Key is a required field.
e. In Bucket Name, enter the bucket within the Destination Endpoint to which you would like to copy data.
Bucket Name is editable only the first time that you create a policy.
f. Object Tag Filtering defaults to Off.
If enabled, the policy applies to objects with the specified object tag only. Only one tag may be specified in the format
name=value. When you change the object tag filter, the change affects only objects that have not already been copied.
In other words, changing the filter does not rescan the bucket to copy objects that were skipped because they did not
match the filter.
g. Server Side Encryption (SSE-S3) defaults to On.
When enabled, all objects that are copied to the target bucket are written with server-side (SSE-S3) object-level
encryption. When you change SSE-S3, the policy change affects only objects that have not already been copied.
h. Detailed Logging defaults to Off.
When set to Off, the Logged Operations, Logging Bucket, and Logging Prefix fields are not shown. When set to On, these fields are shown.
i. Logged Operations defaults to All Operations. Select Only Errors if desired.
Only Errors specifies that the detailed logs in the log bucket should include only errors. All Operations specifies that the
detailed logs should include operations on all objects. Logging all operations is useful for audit purposes, but may take up a
lot of space.
j. In Logging Bucket, specify the bucket in which to write the detailed logs.
This field is required if logging is enabled.
k. In Logging Prefix, specify the prefix in the bucket under which to write the detailed logs.

l. Click Validate Data Movement Policy to run a test in objcontrolsvc. ECS displays one of the following messages:
The Policy is valid.
Warning, missing configuration parameters. Some missing configuration must be completed before the policy can run.
Test failed.

Data Mobility Common Issues


Learn about some common issues that you may encounter with Data Mobility. "Troubleshoot Data Mobility" provides additional
information.

Watermark lag is never zero


To account for possible clock skew, the watermark is always at least 2 hours behind real time. Also, because the watermark
statistics are updated every hour, a watermark lag of between 2 and 3 hours is expected behavior.

Grafana inconsistencies
Grafana may experience inconsistencies because data collection and aggregation tasks run in batches. No data is displayed
when a cluster is added. Also, there are inconsistencies in large time ranges because ECS does not support real-time
aggregation.

resource-svc has high load, Data Mobility dashboard stats are lagged
This issue is caused by Data Mobility stats being aggregated too often, which can overload the scanner in resource-svc.
Contact Dell Customer Service to open a Service Request and reference STORAGE-32228.

Very large objects get stuck and never complete


Contact Dell Customer Service for a support request.

Troubleshoot Data Mobility


Learn how to troubleshoot common Data Mobility issues.

Steps
1. In the ECS Web Portal, go to Settings > Alerts Policy to check errors and get policy information.
2. If the Data Mobility policy data log bucket is enabled, check the logs in the log bucket for syncing errors.

Data Mobility Debug Logging


Each log captures one batch of copied objects. The maximum batch size is 10 K objects or 10 GiB of data.
The log format is: [log-prefix][job-start-date]/[log-write-timestamp]-[batch-shard].log where:
● [log-prefix] is the log prefix specified in the movement policy. It is a best practice to include the source bucket name in the
log prefix.
● [job-start-date] is the UTC date that the data movement job started. The format is yyyy-MM-dd.
● [log-write-timestamp] is the UTC time that the specific log object was written. The format is yyyy-MM-dd-hh-mm-
ss.
● [batch-shard] is the first six hex characters of the SHA1 hash of the first object key in the batch that was copied.
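Putting these pieces together, a detailed log object for one batch might be written under a key similar to the following, where
every value is made up for illustration: mybucket-dm-logs/ is the configured log prefix, 2022-09-02 is the UTC job start date,
2022-09-02-09-12-18 is the UTC log write time, and a1bd91 is the first six hex characters of the SHA1 hash of the first object
key in the batch.

mybucket-dm-logs/2022-09-02/2022-09-02-09-12-18-a1bd91.log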

Examples
See below examples of successful and failed log output from batch copied objects:
Successful log output:
2022-09-02T09:12:18Z DM.COPY demo test ASIA4258DF91ACFD0BBE S8d162an1wh1
2022-09-02T09:05:57Z 10,240 a1bd91c9d3c7f410b37dc36e14f203ca http://10.249.250.95:9020/
demo_target AKIAED117E670B7136E6 99 SUCCESS
Failed log output:
2022-05-27T07:10:18Z DM.COPY large-data large-buk ASIAEA0F10AF314E3CEE obj2_6542
2022-05-27T03:05:21Z 279 9073c9c324e9c6fb48c89955c0526e19
http://10.243.82.165:9020/ large-buk-target AKIAEAA15A0C9143994D 853 ERROR
"java.util.concurrent.ExecutionException: com.amazonaws.AmazonServiceException: Service
Unavailable (Service: Amazon S3; Status Code: 503; Error Code: Service Unavailable; Request
ID: null; Proxy: null)"

Bucket, object, and namespace naming conventions


Bucket and object (also referred to as key) names for the S3, OpenStack Swift, Atmos, and CAS protocols must conform to the
ECS specifications described in this section:
● S3 bucket and object naming in ECS
● OpenStack Swift container and object naming in ECS
● Atmos bucket and object naming in ECS
● CAS pool and object naming in ECS

Namespace name
The following rules apply to the naming of ECS namespaces:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode char)
● Valid characters are defined by regex /[a-zA-Z0-9-_]+/. That is, alphanumeric characters and hyphen (-) and underscore
(_) special characters.

S3 bucket and object naming in ECS


Bucket and object names must conform to the ECS naming specification when using the ECS S3 Object API.

Bucket name
The following rules apply to the naming of S3 buckets in ECS:
● Must be between one and 255 characters in length. (S3 requires bucket names to be 1–255 characters long)
● Can include dot (.), hyphen (-), and underscore (_) characters and alphanumeric characters ([a-zA-Z0-9])
● Can start with a hyphen (-) or alphanumeric character.
● Cannot start with a dot (.)
● Cannot contain a double dot (..)
● Cannot end with a dot (.)
● Must not be formatted as IPv4 address.
You can compare these rules with the naming restriction in the S3 bucket quotas, restrictions and limitations.

Object name
The following rules apply to the naming of S3 objects in ECS:
● Cannot be null or an empty string.
● Length range is 1..255 (Unicode char)
● No validation on characters

OpenStack Swift container and object naming in ECS


Container and object names must conform to the ECS naming specification when using the ECS OpenStack Swift Object API.

Container name
The following rules apply to the naming of Swift containers:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode char)
● Can include dot (.), hyphen (-), and underscore (_) characters and alphanumeric characters ([a-zA-Z0-9])
● Can include the at symbol (@) with the assistance of your customer support representative.

Object name
The following rules apply to the naming of Swift objects:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode char)
● No validation on characters

Atmos bucket and object naming in ECS


Subtenant and object names must conform to the ECS naming specification when using the ECS Atmos Object API.

Subtenant (bucket)
The subtenant is created by the server, so the client does not need to know the naming scheme.

Object name
The following rules apply to the naming of Atmos objects:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode characters)
● No validation on characters
● Name should be percent-encoded UTF-8.

CAS pool and object naming in ECS


CAS pools and objects (clips in CAS terminology) names must conform to the ECS naming specification when using the CAS
API.

CAS pool naming


The following rules apply to the naming of CAS pools in ECS:

● Can contain a maximum of 255 characters
● Cannot contain: ' " / & ? * < > <tab> <newline> or <space>

Clip naming
The CAS API does not support user-defined keys. When an application using CAS API creates a clip, it opens a pool, creates a
clip, and adds tags, attributes, streams and so on. After a clip is complete, it is written to a device.
A corresponding clip ID is returned by the CAS engine and can be referred to using <pool name>/<clip id>.

Simplified bucket delete


Simplified bucket delete empties the bucket on behalf of the user. You must start the new delete option through the S3 and
Management APIs. You must have System Admin privileges for the Management API. Simplified bucket delete requires all zones
to be fully upgraded to the supported ECS version. Simplified bucket delete is enabled by default on all zones.
When you initiate the bucket delete using the API, and while the bucket delete operation runs, bucket access is set to read-only,
no property changes are allowed on the bucket, objects and versions are removed, and MPUs are aborted. Also, ECS honors user
permissions, object lock, governance, compliance, retention, and legal hold during the delete operation. On completing the bucket
delete, ECS deletes all associated content for the bucket. If there is any object that cannot be removed, ECS puts the bucket
back into a writable state. ECS carries out nonempty bucket deletion asynchronously in the background; the deletion is requested
using a special header through the S3 or Management API. You can monitor the bucket deletion progress as well as configure and
throttle speed and resource consumption.
ECS carries out authorization for bucket deletion as a one-time atomic permission evaluation during task creation. You must
have IAM permission to carry out bucket deletion.
Metering estimates provide an easy way to calculate the number of objects in the bucket being deleted and the progress of the deletion.
ECS carries out a retry mechanism for recoverable errors to increase the overall stability and success rate for bucket deletion.
Recoverable errors are errors that are generated for temporary bucket deletion failures due to transitory system errors such as
network issues.
Simplified bucket delete operations are designed and implemented to be background, configurable operations to balance deletion
speed and resource consumption. Note the following performance observations regarding default configuration:
● Longer deletion times are correlated with higher object counts in a bucket, rather than object size.
● Bucket deletion takes longer during heavier system load.
● Deleting a versioning-enabled bucket can take longer.
● Concurrent bucket deletion has a small impact on throughput load and slightly increases CPU and memory consumption.
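As a rough sketch of the API-driven flow, the request below issues a standard S3 DeleteBucket call with an additional
empty-bucket header that asks ECS to empty the bucket first. The header name x-emc-empty-bucket used here is a
hypothetical placeholder; confirm the exact header or query parameter name in the ECS 3.8 Data Access Guide before use.

# Hypothetical example: verify the empty-bucket header name in the ECS Data Access Guide.
./s3curl.pl --id=my_profile --delete -- \
  -H 'x-emc-empty-bucket:true' \
  http://<DataNodeIP>:9020/<BucketName>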

Bucket deletion replication handling


During the bucket delete process, if a temporary site outage (TSO) is encountered, the operations pause until the TSO is
resolved. If a VDC is removed from a replication group or a permanent site outage (PSO) is initiated during the delete process,
the operation is aborted and a best effort is made to return the bucket to a writable state.
The simplified bucket delete request is rejected if the replication group that is associated with the bucket is in active recovery
from a VDC being removed. The operation is allowed once recovery reaches the bootstrap stage.
ECS supports simplified bucket delete on the following bucket types:
● Normal
● Versioned
● File System
● Access During Outage (ADO)
● Object Lock enabled

Delete a bucket
Using the ECS Portal user interface, you can delete a bucket when you no longer need it.

Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can delete a bucket in any namespace.
● A Namespace Administrator can delete a bucket in the namespace in which they are the administrator.

Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to delete in the table, and then select the Edit Bucket
> Delete Bucket action.
ECS displays the prompt: This Action is permanent. It can not be stopped once started and this action is
not reversible.
3. In the Delete Confirmation window, select the option to proceed with the deletion.
● Select the Delete the selected Buckets option to delete the buckets you selected.
● Select the Delete ENTIRE Contents including the Selected bucket option to delete the bucket and its contents.
4. Check the I acknowledge and would like to delete the bucket checkbox, and then click Delete.
NOTE: Buckets being deleted cannot be modified. To delete a bucket with Filesystem enabled, you must remove any
associated NFS exports before you delete the bucket.

The bucket deletion process is started. Buckets being deleted are marked as Bucket deletion is in progress. To view the
current status of the bucket deletion, select the View Bucket > View Delete Status action.

Simplified bucket delete common issues


Learn about some common issues that you may encounter with simplified bucket delete, and how to troubleshoot them.

Table 33. Simplified bucket delete issues

Issue: Bucket delete fails.
● Reason: You may not have the correct user permissions to initiate simplified bucket delete.
  Corrective action: Check that you have the correct permissions. You must have sysadmin privileges.
● Reason: The operation may fail because of retention policy (governance, object lock, or retention).
  Corrective action:
  1. Remove the retention rules on the buckets or zones that you are trying to delete.
  2. Then, either delete the objects manually or initiate a new bucket delete request.
  The ECS 3.8 Data Access Guide provides information about bucket API commands.
● Reason: The operation may fail because of service instability or other errors.
  Corrective action: Initiate a new bucket delete request. The ECS 3.8 Data Access Guide provides information about bucket
  API commands.

Issue: S3 or management API not accepting empty-bucket header or query parameters.
● Reason: May not meet policy and feature requirements.
  Corrective action: Verify that feature flags are enabled and all requirements are met.

Issue: EmptyBucket task no longer found.
● Reason: The bucket may already be deleted. The task is removed if the bucket is successfully deleted.
  Corrective action:
  1. Check if the bucket exists.
  2. If the bucket still exists, check if the bucket owner zone was removed from the associated RG.
  The ECS 3.8 Data Access Guide provides information about bucket API commands.

Issue: Tasks not performing.
● Reason: You may not have the correct permissions to initiate simplified bucket delete.
  Corrective action: Check that you have the correct permissions. You must have sysadmin privileges.
● Reason: Retention rules do not allow objects to be deleted.
  Corrective action:
  1. Remove the retention rules on the buckets or zones that you are trying to delete.
  2. Then, either delete the objects manually or initiate a new bucket delete request.
  The ECS 3.8 Data Access Guide provides information about bucket API commands.
● Reason: DT instability may cause listing or delete requests to fail during task execution.
  Corrective action: Initiate a new delete bucket request.
● Reason: TSO may cause the task to be delayed.
  Corrective action: Check if there is an active TSO on the associated RG. If so, delete resumes when the TSO resolves.
● Reason: PSO or zone removal from the RG may cause the task to fail and possibly leave the bucket in a read-only state.
  Corrective action:
  1. Reset the empty_bucket_in_progress flag for the bucket in DTQuery.
  2. Initiate a new delete bucket request once RG recovery is at the bootstrap stage.
  The ECS 3.8 Data Access Guide provides information about bucket API commands.

Issue: Tasks aborted.
● Reason: PSO, or a nonbucket owner zone removed from the RG.
  Corrective action: Initiate a new delete bucket request once RG recovery is at the bootstrap stage. The ECS 3.8 Data Access
  Guide provides information about bucket API commands.

Issue: Multiple bucket-delete tasks impacting system performance.
● Corrective action: Reduce the number of permits in PriorityTaskCoordinator to limit the parallel tasks that can run per DT:
  com.emc.ecs.priority.coordinator.OB.permits

Simplified bucket delete log files and debug logging


Learn about the simplified bucket delete configuration, deployment, and debug log locations.

Files
Configuration and deployment file name and locations:
● com.emc.ecs.empty_bucket.*
● com.emc.ecs.priority.*
● CF values are defined in: shared-cf-conf.xml

Priority task coordinator


The priority task coordinator allows ECS to schedule and run general tasks by priority. The priority task coordinator exposes a
limited set of APIs through the management interface that allows you to view and modify some tasks.
ECS implements a scheduling algorithm through the API that prevents starvation of lower-priority tasks. The priority task coordinator provides a limited management API to view and manage tasks generically, as well as visibility into some background tasks.

Task priority values are 1-10, with 1 being the highest priority. If there are more tasks to run than the allotted time allows, ECS switches scheduling to a time-based scheme based on task priority and gives higher-priority tasks more time to run. When a task exceeds its allotted processing time, ECS signals the task processor to save the task state and stop. ECS schedules tasks by priority and start time.
The default number of tasks running in parallel in the priority task coordinator is three. Increasing this value can have a performance impact, because the value is multiplied by the number of directory tables that exist for the given service. If you have disabled and then enabled the priority task coordinator feature, you must restart the services that are associated with it.

Priority task coordinator common issues


Learn about some common issues that you may encounter with a priority task coordinator and how to troubleshoot them.

Table 34. Priority task coordinator issues

Issue: Tasks not performing.
  Reason: The priority task coordinator may not be enabled.
  Corrective action:
  1. Verify that the feature flag is enabled: emc.ecs.priority.coordinator.featureEnabled
  2. Restart the associated services.

  Reason: The coordinators may be paused.
  Corrective action:
  1. Verify that the coordinators for OB/RT are not paused:
     ● com.emc.ecs.priority.coordinator.OB.pause
     ● com.emc.ecs.priority.coordinator.RT.pause
  2. If they are paused, restart them.

Issue: Tasks impacting system performance.
  Reason: Too many permits are configured on the priority task coordinator.
  Corrective action: Reduce the number of permits in PriorityTaskCoordinator (com.emc.ecs.priority.coordinator.OB.permits) to limit the parallel tasks that can run per DT.

Partial list results


Partial listing allows ECS to return a partial list result instead of a 500 error on listing timeout, which reduces disruptive and expensive listing timeouts. ECS supports partial list results in the following APIs: S3 ListObjects, ListObjectsV2, ListObjectVersions, and the ECS extended S3 Metadata Search GetObjectsList.
Listing timeouts can be caused by suboptimal queries, Java stalls, and DT or disk slowness. The default timeout window is five minutes, at which time ECS returns a 503 error. With partial listing, you can set a parameter that allows ECS to return a token, or marker, that you use to get the next page of results.

Enabling partial list results


The true|false flag on the ?allow-partial-results API call overrides the default behavior for an individual request.
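For example, the following is a hedged sketch of a listing request that opts in to partial results. The s3curl invocation style and the endpoint follow the examples used elsewhere in this guide; the credentials profile and bucket name are illustrative placeholders.

# Request a bucket listing and allow ECS to return a partial result on timeout
# (credentials profile "myprofile" and bucket "mybucket" are illustrative placeholders)
./s3curl.pl --id=myprofile -- "http://10.249.245.187:9020/mybucket?allow-partial-results=true" -v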

Bucket listing limitation


Buckets must be limited to 1000 items per page to prevent the resourcesvc from running out of memory.
If more than 1000 items exist, ECS returns a token, or marker, that you use to get the next page of results.
Output example:
<NextMarker>FR_STG_S3_AVI-DATA</NextMarker> <NextPageLink>/object/bucket?namespace=ns-fr&name=*&marker=FR_STG_S3_AVI-DATA</NextPageLink>
First listing request:

admin@ecs06pn08012-data:~> SDS_TOKEN=$(curl -i -s -L --location-trusted -k https://${MGMT_IP}:4443/login -u emcmonitor:ChangeMe | grep X-SDS-AUTH-TOKEN); echo ${SDS_TOKEN}
X-SDS-AUTH-TOKEN: BAAccFE0TUNtWUJxdnlSbTRmTjV4UnBKODJVTURnPQMAjAQASHVybjpzdG9yYWdlb3M6VmlydHVhbERhdGFDZW50ZXJEYXRhOmM1OWZjNTIzLTFiMTctNDE3Mi1iN2YyLTM0NDcxMTAyNDZhYQIADTE2NjEzNzc3NDkyMzcDAC51cm46VG9rZW46YzUzMDcyMGYtZDU4MC00MjA5LThkMDgtOGQwM2RkMzQ5ZmM3AgAC0A8=
admin@ecs06pn08012-data:~> curl -v -L --location-trusted -k -H "${SDS_TOKEN}" "https://${MGMT_IP}:4443/object/bucket?namespace=ns-fr" | xmllint --format - > /tmp/test.tmp
*   Trying 10.185.65.142...
* TCP_NODELAY set
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 10.185.65.142 (10.185.65.142) port 4443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
} [5 bytes data]
> GET /object/bucket?namespace=ns-fr HTTP/1.1
> Host: 10.185.65.142:4443
> User-Agent: curl/7.60.0
> Accept: */*
> X-SDS-AUTH-TOKEN: BAAccFE0TUNtWUJxdnlSbTRmTjV4UnBKODJVTURnPQMAjAQASHVybjpzdG9yYWdlb3M6VmlydHVhbERhdGFDZW50ZXJEYXRhOmM1OWZjNTIzLTFiMTctNDE3Mi1iN2YyLTM0NDcxMTAyNDZhYQIADTE2NjEzNzc3NDkyMzcDAC51cm46VG9rZW46YzUzMDcyMGYtZDU4MC00MjA5LThkMDgtOGQwM2RkMzQ5ZmM3AgAC0A8=
>
{ [5 bytes data]
< HTTP/1.1 200 OK
< Date: Thu, 25 Aug 2022 13:17:18 GMT
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Connection: keep-alive
<
{ [16245 bytes data]
100 1638k    0 1638k    0     0  2672k      0 --:--:-- --:--:-- --:--:-- 2668k
* Connection #0 to host 10.185.65.142 left intact
admin@ecs06pn08012-data:~> grep -i marker /tmp/test.tmp -C2
<object_buckets>
  <Filter>namespace=ns-fr&name=*</Filter>
  <NextMarker>FR_STG_S3_AVI-DATA</NextMarker>
  <NextPageLink>/object/bucket?namespace=ns-fr&name=*&marker=FR_STG_S3_AVI-DATA</NextPageLink>
  <object_bucket>
    <api_type>S3</api_type>
admin@ecs06pn08012-data:~>

Second listing request with NextMarker:

admin@ecs06

Disable unused services


This section provides information about the services that ECS supports and the available connection options.

About this task


ECS supports the following services and the connection options to connect to those services:
● S3 - http/https/https,http/disabled
● Atmos - http/https/https,http/disabled
● Swift - http/https/https,http/disabled
● NFS - enabled/disabled
● CAS - enabled/disabled
Each service watches its configuration key for changes and takes the appropriate action. For example, S3 watches for changes to the S3 key; when it detects that HTTP is no longer requested, you can no longer connect to the service using the HTTP protocol.
● To update a single service, run:

PUT /service/{service_name}
{
"name": "{service_name}",
"settings": ["setting1", "setting2", "settingN"]
}

For example,

PUT /service/atmos
{
"name": "atmos",
"settings": ["disabled"]
}

● To update multiple services, run:

PUT /service
{
"service": [{
"name": "{service_name}",
"settings": ["setting1", "setting2", "settingN"]
},
{
"name": "{service_name}",
"settings": ["setting1", "setting2", "settingN"]
}]
}

For example,

PUT /service
{ "service": [{
"name": "s3",
"settings": ["http", "https"]
},
{
"name": "swift",
"settings": ["http"]
},
{
"name": "cas",
"settings": ["enabled"]
}]
}
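As a hedged sketch of how such a call might be issued from the command line, the example below disables the Atmos head through the ECS Management REST API. The management port (4443) and the X-SDS-AUTH-TOKEN header follow the patterns shown elsewhere in this guide; the node IP, the credentials, and the use of a JSON body (matching the structure shown above) are assumptions that may differ in your environment.

# Obtain a management token (same login pattern as the listing example earlier in this chapter)
TOKEN=$(curl -i -s -k https://<node_ip>:4443/login -u <admin_user>:<password> | grep X-SDS-AUTH-TOKEN)

# Disable the Atmos service using the single-service form of the API
curl -k -X PUT -H "$TOKEN" -H "Content-Type: application/json" \
  -d '{"name": "atmos", "settings": ["disabled"]}' \
  https://<node_ip>:4443/service/atmos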

9
File Access
Topics:
• Introduction to file access
• ECS multi-protocol access
• Working with NFS exports in the ECS Portal
• Working with user or group mappings in the ECS Portal
• ECS NFS configuration tasks
• Mount an NFS export example
• NFS access using the ECS Management REST API
• NFS WORM (Write Once, Read Many)
• S3A support
• Geo-replication status

Introduction to file access


ECS allows you to configure object buckets for access as NFS file systems using NFSv3.
In the ECS Portal, you can make ECS buckets and the directories within them accessible as file systems to Unix users by:
● creating NFS exports of ECS buckets and specifying the hosts that you want to be able to access the export.
● mapping ECS object users/groups to Unix users/groups so that the Unix users can access the NFS export.
Mapping the ECS bucket owner to a Unix ID gives that Unix user permissions on the file system. In addition, ECS allows you to
assign a default custom group to the bucket so that members of a Unix group mapped to the ECS default custom group can
access the bucket.
ECS NFS supports:
● multi-protocol access, so that files written using NFS can also be accessed using object protocols, and vice versa.
● Kerberos security
● advisory locking and locking over multiple sites as well as shared and exclusive locks.
The ECS NFS configuration tasks that you can perform in the ECS Portal can also be performed using the ECS Management
REST API or CLI.



ECS multi-protocol access
ECS supports multi-protocol access, so that files written using NFS can also be accessed using Amazon Simple Storage Service
(Amazon S3), OpenStack Swift, and EMC Atmos object protocols. Similarly, objects written using S3 and OpenStack Swift
object protocols can be made available through NFS. For Atmos, objects created using the namespace interface can be listed
using NFS, however, objects created using an object ID cannot. Objects and directories created using object protocols can be
accessed by Unix users and by Unix group members by mapping the object users and groups.
For information on NFS and object protocol permissions, see Multiprotocol access permissions.

S3/NFS multi-protocol access to directories and files


ECS supports writing objects using the S3 protocol and accessing them as files using NFS and, conversely, writing files using
NFS and accessing the files as objects using the S3 protocol. You must understand how directories are managed when you use
multi-protocol access.
The S3 protocol does not make provision for the creation of folders or directories.
To enable multi-protocol operation, ECS support for the S3 protocol formalizes the use of / and creates directory objects for all
intermediate paths in an object name. An object named /a/b/c.txt results in the creation of a file object named c.txt and
directory objects for a and b. The directory objects are not exposed to users through the S3 protocol, and are maintained only
to provide multi-protocol access and compatibility with file system-based APIs. This means that ECS can display files within a
directory structure when the bucket is viewed as an NFS file system.
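As an illustration, the following sketch writes an object whose key contains intermediate path segments and then looks at the same bucket over an NFS mount. The AWS CLI, endpoint address, bucket name, and mount point are illustrative assumptions and are not prescribed by this guide.

# Write an object named a/b/c.txt through the S3 head (endpoint and bucket are placeholders)
aws s3api put-object --endpoint-url http://<ecs_node>:9020 \
  --bucket mybucket --key a/b/c.txt --body ./c.txt

# Over an NFS mount of the same bucket, ECS exposes the intermediate
# directory objects a and b as directories containing the file c.txt
ls /mnt/mybucket/a/b
# c.txt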

Limitations
● An issue can arise where both a directory object and a file object are created with the same name. This can occur in the
following ways:
○ A file path1/path2 is created from NFS, then an object path1/path2/path3 is created from S3. Because S3 allows
creation of objects that have another object's name as the prefix, this operation is valid and is supported. A file and a
directory called path2 will exist.
○ A directory path1/path2 is created from NFS, then an object path1/path2 is created from S3. This operation is a
valid operation from S3 because directory path1/path2 is not visible through the S3 API. A file and a directory called
path2 will exist.
To resolve this issue, requests from S3 always return the file, and requests from NFS always return the directory. However,
this means that in the first case the file created by NFS is hidden by the object created by S3.
● NFS does not support filenames with a trailing / in them, but the S3 protocol does. NFS does not show these files.

Multiprotocol access permissions


Objects can be accessed using NFS and using the object service. Each access method has a way of storing permissions: Object
Access Control List (ACL) permissions and File System permissions.
When an object is created or modified using the object protocol, the permissions that are associated with the object owner
are mapped to NFS permissions, and the corresponding permissions are stored. Similarly, when an object is created or modified
using NFS, ECS maps the NFS permissions of the owner to object permissions and stores them.
The S3 object protocol does not have the concept of groups. Changes to group ownership or permissions from NFS do not have
to be mapped to corresponding object permissions. When you create a bucket or an object within a bucket (the equivalent of a
directory and a file), ECS can assign UNIX group permissions, and they can be accessed by NFS users.
For NFS, the following ACL attributes are stored:
● Owner
● Group



● Other
For object access, the following ACLs are stored:
● Users
● Custom Groups
● Groups (Predefined)
● Owner (a specific user from Users)
● Primary Group (a specific group from Custom Groups)
For more information about ACLs, see Set ACLs.
The following table shows the mapping between NFS ACL attributes and object ACL attributes.

Table 35. Mapping between NFS ACL and Object ACL attributes
NFS ACL attribute Object ACL attribute
Owner User who is also an Owner
Group Custom Group that is also a Primary Group
Others Pre-Defined Group

Examples of this mapping are discussed later.


The following Access Control Entries (ACE) can be assigned to each ACL attribute.
NFS ACEs:
● Read (R)
● Write (W)
● Execute (X)
Object ACEs:
● Read (R)
● Write (W)
● Execute (X)
● ReadAcl (RA)
● WriteAcl (WA)
● Full Control (FC)

Creating and modifying an object using NFS and accessing using the object service
When an NFS user creates an object using the NFS protocol, the owner permissions are mirrored to the ACL of the object user
who is designated as the owner of the bucket. If the NFS user has RWX permissions, Full Control is assigned to the object
owner through the object ACL.
The permissions that are assigned to the group that the NFS file or directory belongs to are reflected onto a custom group of
the same name, if it exists. ECS reflects the permissions that are associated with Others onto predefined group permissions.
The following example illustrates the mapping of NFS permissions to object permissions.

NFS ACL Setting                     Object ACL Setting

Owner  John : RWX            --->   Users           John : Full Control
Group  ecsgroup : R-X               Custom Groups   ecsgroup : R-X
Other  RWX                          Groups          All_Users : R, RA
                                    Owner           John
                                    Primary Group   ecsgroup

When a user accesses ECS using NFS and changes the ownership of an object, the new owner inherits the owner ACL
permissions and is given Read_ACL and Write_ACL. The previous owner permissions are kept in the object user's ACL.
When a chmod operation is performed, the ECS reflects the permissions in the same way as when creating an object.
Write_ACL is preserved in Group and Other permissions if it exists in the object user's ACL.



Creating and modifying objects using the object service and accessing using NFS
When an object user creates an object using the object service, the user is the object owner and is automatically granted Full
Control of the object. The file owner is granted RWX permission. If the owner permissions are set to other than Full Control,
ECS reflects the object RWX permissions onto the file RWX permissions. An object owner with RX permissions results in an
NFS file owner with RX permissions. The object primary group, which is set using the Default Group on the bucket, becomes
the Custom Group that the object belongs to and the object permissions are set based on the default permissions that have
been set. These permissions are reflected onto the NFS.group permissions. If the object Custom Group has Full Control, these
permissions become the RWX permissions for the NFS group. If predefined groups are specified on the bucket, these are applied
to the object and are reflected as Others permissions for the NFS ACLs.
The following example illustrates the mapping of object permissions onto NFS permissions.

Object ACL Setting                          NFS ACL Setting

Users           John : Full Control  ---->  Owner  John : RWX
Custom Groups   ecsgroup : R-X              Group  ecsgroup : R-X
Groups          All_Users : R, RA           Other  RWX
Owner           John
Primary Group   ecsgroup

If the object owner is changed, the permissions that are associated with the new owner are applied to the object and are
reflected onto the file RWX permissions.

Working with NFS exports in the ECS Portal


You can use the Exports tab on the File page available from Manage > File to view the details of existing NFS exports, to create NFS exports, and to edit existing exports.
The Exports tab has a Namespace field that allows you to select the namespace for which you want to see the defined
exports.

Table 36. NFS export properties


Field Description
Namespace The namespace that the underlying storage belongs to.
Bucket The bucket that provides the underlying storage for the NFS export.
Export Path The mount point associated with the export, in the form: /<namespace_name>/
<bucket_name>/<export_name> . The export name can only be specified if you are
exporting a directory that exists within the bucket.
Actions The actions that can be completed on the NFS export.
● Edit: Edit the bucket quota and export host options.
● Delete: Delete the NFS export.

Working with user or group mappings in the ECS Portal

You can use the User/Group Mapping tab on the File page available from Manage > File to view the existing mappings
between ECS object users or groups and UNIX users, to create user or group mappings, and to edit mappings.
The User/Group Mapping tab has a Namespace field that allows you to select the namespace for which you want to see the
configured user or group mappings.



ECS stores the owner and group for the bucket, and the owner and group for files and directories within the bucket, as ECS
object username and custom group names, respectively. The names must be mapped to UNIX IDs in order that NFS users can
be given access with the appropriate privileges.
The mapping enables ECS to treat an ECS object user and an NFS user as the same user but with two sets of credentials, one
to access ECS using NFS, and one to access the ECS using the object protocols. Because the accounts are mapped, files that
are written by an NFS user are accessible as objects by the mapped object user and objects that are written by the object users
are accessible as files by the NFS user.
The permissions that are associated with the file or object are based on a mapping between POSIX and object protocol ACL
privileges. For more information, see Multiprotocol access permissions.

Table 37. Mapping fields


Field Description
User or Group Name The ECS object username or custom group name that is being mapped.
ID The UNIX User ID or Group ID that has been mapped to the object user.
Type Indicates whether the ID is for a User or Group.
Actions The actions that can be completed on the user or group mapping. They are: View and Delete

ECS NFS configuration tasks


You must perform the following tasks to configure ECS NFS.

Steps
1. Create a bucket for NFS using the ECS Portal
2. Add an NFS export
3. Add a user or group mapping using the ECS Portal
4. Configure ECS NFS with Kerberos security

Next steps
After you perform the above steps, you can mount the NFS export on the export host.

Create a bucket for NFS using the ECS Portal


You can use the ECS Portal to create a bucket configured for use with NFS.

Prerequisites
● This operation requires the Namespace Administrator or System Administrator role in ECS.
● If you are a Namespace Administrator, you can create buckets in your namespace.
● If you are System Administrator, you can create a bucket belonging to any namespace.

About this task


The steps provided here focus on the configuration you need to perform to make a bucket suitable for use by NFS. The bucket
that you create is an S3 bucket that is enabled for file system use.

Steps
1. In the ECS Portal, select Manage > Buckets > New Bucket.
2. On the New Bucket page, in the Name field, type a name for the bucket.
3. In the Namespace field, select the namespace that the bucket belongs to.
4. In the Replication Group field, select a replication group or leave blank to use the default replication group for the
namespace.



5. In the Bucket Owner field, type the name of the bucket owner.
6. On the Required page, in the File System field, click On.
Once file system access is enabled, the fields for setting a default group for the file system/bucket and for assigning group
permissions for files and directories that are created in the bucket are available.
7. In the Default Bucket Group field, type a name for the default bucket group.
This group is the group that is associated with the NFS root file system and with any files or directories that are created
in the NFS export. It enables users who are members of the group to access the NFS export and to access files and
directories.
This group must be specified at bucket creation. If it is not, the group would have to be assigned later from the NFS client.
8. Set the default permissions for files and directories that are created in the bucket using the object protocol.
These settings are used to apply UNIX group permissions to objects created using object protocols.
The S3 protocol does not have the concept of groups. Thus there is no opportunity for setting group permissions in S3 and
mapping them to UNIX permissions. Hence, this provides a one-off opportunity for a file or directory created using the S3
protocol to be assigned to the specified default group with the permissions specified here.
a. Set the Group File Permissions by clicking the appropriate permission buttons.
You normally set Read and Execute permissions.
b. Set the Group Directory Permissions by clicking the appropriate permission buttons.
You normally set Read and Execute permissions.
9. In the CAS field, do not enable CAS.
NOTE: A bucket that is intended for use as NFS cannot be used for CAS. The CAS field is disabled when the File
System field is enabled.

10. Enable any other bucket features that you require.


You can enable any of the following features on an NFS bucket:
● Quota
● Server-side Encryption
● Metadata Search
● Access During Outage
● Read-Only Access During Outage
● Bucket Retention Period
For information about these settings, see Bucket settings.
NOTE: A bucket that is compliance-enabled cannot be written to using the NFS protocol. However, data that are
written using object protocols can be read from NFS.

11. Click Save to create the bucket.
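As an alternative to the portal, a file system-enabled bucket can also be created through the ECS Management REST API. The sketch below reuses the object_bucket_create payload format shown later in this guide (in the autocommit section); the node IP, token variable, and names are placeholders.

cat > /tmp/nfs_bucket.xml <<'EOF'
<object_bucket_create>
  <name>nfsbucket1</name>
  <namespace>ns1</namespace>
  <filesystem_enabled>true</filesystem_enabled>
</object_bucket_create>
EOF

# POST the payload to the management API (port 4443, token header as elsewhere in this guide)
curl -i -k -X POST -T /tmp/nfs_bucket.xml -H "$TOKEN" \
  -H "Content-Type: application/xml" https://<node_ip>:4443/object/bucket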

Add an NFS export


You can use the portal to create an NFS export and set the options that control access to the export.

Prerequisites
● This operation requires the namespace Administrator or System Administrator role.
● If you are a namespace Administrator, you can add NFS exports into your namespace.
● If you are a System Administrator, you can add NFS exports into any namespace.
● You must have created a bucket to provide the underlying storage for the export. For more information, see Create a bucket
for NFS using the ECS Portal.

Steps
1. In the portal, select File > Exports > New Export.
The New File Export page is displayed.
2. On the New File Export page, in the Namespace field, select the namespace that owns the bucket that you want to
export.
3. In the Bucket field, select the bucket.



4. In the Export Path field, specify the path.
The system automatically generates the export path that is based on the namespace and bucket. You only have to enter a
name if you are exporting a directory that exists within the bucket. So if you enter /namespace1/bucket1/dir1, for
example, you should ensure that dir1 exists. If it does not, mounting the export fails.
5. Set the Show Bucket Quota option to No or Yes depending on how you want the export size to be reported.

Option Description
No The export size is reported as the storage pool size.
Yes The export size is reported as the hard quota on the bucket, if set.
6. To add the hosts that you want to be able to access the export, complete the following steps:
a. In the Export Host Options area, click Add.
The Add Export Host dialog is displayed.
b. In the Add Export Host dialog, specify one or more hosts that you want to be able to access the export and configure
the access options.
You must choose an Authentication option. This option is Sys unless you are intending to configure Kerberos. Default
values for Permissions (ro) and Write Transfer Policy (async) are already set in the Add Export Host dialog and are
passed to the NFS server. The remaining options are the same as the NFS server defaults and so are only passed by the
system if you change them.
The following table describes the parameters that you can specify when you add a host:

Table 38. Host parameters


Setting Description
Export Host Sets the IP address of the host or hosts that can access the export. Use a comma-
separated list to specify more than one host.
Permissions Enables access to the export to be set as Read/Write or Read only. This is the same
as setting rw or ro in /etc/exports.

Write Transfer Policy Sets the write transfer policy as synchronous or asynchronous. The default is
asynchronous. This parameter is the same as setting sync or async for an export
in /etc/exports.

Authentication Sets the authentication types that are supported by the export.
Mounting Directories Inside Specifies whether subdirectories of the export path are allowed as mount points.
Export This parameter is the same as the alldir setting in /etc/exports. With the
alldir option, if you exported /namespace1/bucket1, for example, you can
also mount subdirectories, such as/namespace1/bucket1/dir1, provided the
directory exists.
AnonUser An object username (must have an entry in user or group mapping) to which all
unknown (not mapped using the user or group mapping) user IDs are mapped when
accessing the export.
NOTE: For performance reasons, user mapping must be predefined and can be
selected here.

AnonGroup An object group name (must have an entry in user or group mapping) to which all
unknown (not mapped using the user or group mapping) group IDs are mapped when
accessing the export.
NOTE: For performance reasons, group mapping must be predefined and can be
selected here.

RootSquash An object username to which the root user ID (0) is mapped when accessing the
export.

NOTE: The AnonUser/AnonGroup is the object user or group name that is used to map any incoming user id. This
overrides user mapping provided at the namespace level.

c. Click Add to finish defining the host options.


7. If you want to add more hosts that can access the export, but with different options, repeat the previous step.



8. Click Save to save the NFS export definition.

Next steps
Add a user or group mapping using the ECS Portal.

Add a user or group mapping using the ECS Portal


To provide NFS access to the file system (the bucket), you must map an object user who has permissions on the bucket to a
UNIX User ID (UID) so that the UNIX user acquires the same permissions as the object user. Alternatively, you can map an ECS
custom group that has permissions on the bucket to a UNIX Group ID (GID) to provide access for members of the UNIX group.

Prerequisites
● This operation requires the Namespace Administrator or System Administrator role in ECS.
● If you are a Namespace Administrator, you can add user or group mappings into your namespace.
● If you are System Administrator, you can add user or group mappings into any namespace.
● For mapping a single ECS object user to a UNIX user:
  ○ Ensure that the UID exists on the NFS client and the username is an ECS object username.
● For mapping a group of ECS object users to a group of UNIX users:
  ○ Ensure that a default custom group has been assigned to the bucket (either a default group that was assigned at bucket creation, or a custom group ACL that was set after bucket creation). In order for UNIX group members to have access to the file system, a default custom group must be assigned to the bucket and the UID for each member of the group must be known to ECS. In other words, there must be a UNIX UID mapping for each member of the group in ECS.
  ○ Ensure that default object and directory permissions have been assigned to the bucket in order that group members have access to objects and directories created using object protocols.

Steps
1. In the ECS Portal, select Manage > File and click the User/Group Mapping tab.
2. Click New User/Group Mapping.
The New User/Group Mapping page is displayed.
3. In the User/Group Name field, type the name of the ECS object user or ECS custom group that you want to map to a UNIX
UID or GID.
4. In the Namespace field, select the namespace that the ECS object user or custom group belongs to.
5. In the ID field, enter the UNIX UID or GID that you want the ECS user or group to map to.
6. In the Type field, click the type of mapping: User or Group so that ECS knows that the ID you have entered is a UID or a
GID.
7. Click Save.

Configure ECS NFS with Kerberos security


To configure Kerberos authentication with ECS NFS, you must configure both the ECS nodes and the NFS client, and create
keytabs for the NFS server principal and for the NFS client principal.

Prerequisites
Depending on your internal IT setup, you can use a Key Distribution Center (KDC) or you can use Active Directory (AD) as your
KDC.
To use AD, follow the steps in these tasks:
● Register an ECS node with Active Directory
● Register a Linux NFS client with Active Directory

About this task


The following scenarios are supported:
● ECS client to single ECS node. The keytab on each ECS that you want to use as the NFS server must be specific to that
node.



● ECS client to load balancer. The keytab on all ECS nodes is the same, and uses the hostname of the load balancer.

Steps
1. Ensure that the hostname of the ECS node can be resolved.
You can use the hostname command to ensure that the FQDN of the ECS node is added to /etc/HOSTNAME.

dataservice-10-247-142-112:~ # hostname
ecsnode1.yourco.com
dataservice-10-247-142-112:~ # hostname -i
10.247.142.112
dataservice-10-247-142-112:~ # hostname -f
ecsnode1.yourco.com
dataservice-10-247-142-112:~ #

2. Create the Kerberos configuration file (krb5.conf) on the ECS node.


Change the file permissions to 644 and make the user with id 444(storageos) the owner of the file.

In the example below, the following values are used and must be replaced with your own settings.

Kerberos REALM      Set to NFS-REALM.LOCAL in this example.
KDC                 Set to kdcname.yourco.com in this example.
KDC Admin Server    In this example, the KDC acts as the admin server.

[libdefaults]
default_realm = NFS-REALM.LOCAL
[realms]
NFS-REALM.LOCAL = {
kdc = kdcname.yourco.com
admin_server = kdcname.yourco.com
}
[logging]
kdc = FILE:/var/log/krb5/krb5kdc.log
admin_server = FILE:/var/log/krb5/kadmind.log
default = SYSLOG:NOTICE:DAEMON

3. Add a host principal for the ECS node and create a keytab for the principal.
In this example, the FQDN of the ECS node is ecsnode1.yourco.com

$ kadmin
kadmin> addprinc -randkey nfs/ecsnode1.yourco.com
kadmin> ktadd -k /datanode.keytab nfs/ecsnode1.yourco.com
kadmin> exit

4. Copy the keytab (datanode.keytab).


Change its file permissions to 644 and make the user with id 444(storageos) the owner of the file.
5. Download the unlimited JCE policy archive from oracle.com and extract it to the /opt/emc/caspian/fabric/
agent/services/object/data/jce/unlimited directory.
Kerberos may be configured to use a strong encryption type, such as AES-256. In that situation, the JRE within the ECS
nodes must be reconfigured to use the 'unlimited' policy.
NOTE: This step should be performed only if you are using a strong encryption type.

6. Run the following command from inside the object container.

service storageos-dataservice restarthdfs

7. To set up the client, begin by making sure that the hostname of the client can be resolved.



You can use the hostname command to ensure that the FQDN of the ECS node is added to /etc/HOSTNAME.

dataservice-10-247-142-112:~ # hostname
ecsnode1.yourco.com
dataservice-10-247-142-112:~ # hostname -i
10.247.142.112
dataservice-10-247-142-112:~ # hostname -f
ecsnode1.yourco.com
dataservice-10-247-142-112:~ #

8. If your client is running SUSE Linux, make sure that the line NFS_SECURITY_GSS="yes" is uncommented in /etc/sysconfig/nfs.
9. If you are on Ubuntu, make sure that the line NEED_GSSD=yes is present in /etc/default/nfs-common.
10. Install rpcbind and nfs-common.
Use apt-get or zypper. On SUSE Linux, for nfs-common, use:

zypper install yast2-nfs-common

By default, these services are turned off on an Ubuntu client.


11. Set up your Kerberos configuration file.
In the example below, the following values are used and you must replace them with your own settings.

Kerberos REALM      Set to NFS-REALM.LOCAL in this example.
KDC                 Set to kdcname.yourco.com in this example.
KDC Admin Server    In this example, the KDC acts as the admin server.

[libdefaults]
default_realm = NFS-REALM.LOCAL
[realms]
NFS-REALM.LOCAL = {
kdc = kdcname.yourco.com
admin_server = kdcname.yourco.com
}
[logging]
kdc = FILE:/var/log/krb5/krb5kdc.log
admin_server = FILE:/var/log/krb5/kadmind.log
default = SYSLOG:NOTICE:DAEMON

12. Add a host principal for the NFS client and create a keytab for the principal.
In this example, the FQDN of the NFS client is nfsclient.yourco.com

$kadmin
kadmin> addprinc -randkey host/nfsclient.yourco.com
kadmin> ktadd -k /nkclient.keytab host/nfsclient.yourco.com
kadmin> exit

13. Copy the keytab file (nkclient.keytab) from the KDC machine to /etc/krb5.keytab on the NFS client machine.

scp /nkclient.keytab root@nfsclient.yourco.com:/etc/krb5.keytab


ssh root@nfsclient.yourco.com 'chmod 644 /etc/krb5.keytab'

14. Create a principal for a user to access the NFS export.

$kadmin
kadmin> addprinc yourusername@NFS-REALM.LOCAL
kadmin> exit



15. Log in as root and add the following entry to your /etc/fstab file.

HOSTNAME:MOUNTPOINT LOCALMOUNTPOINT nfs rw,user,nolock,noauto,vers=3,sec=krb5 0 0

For example:

ecsnode1.yourco.com:/s3/b1 /home/kothan3/1b1 nfs rw,user,nolock,noauto,vers=3,sec=krb5 0 0

16. Log in as a non-root user and run kinit as the non-root user principal that you created.

kinit yourusername@NFS-REALM.LOCAL

17. You can now mount the NFS export. For more information, see Mount an NFS export example and Best practices for
mounting ECS NFS exports.
NOTE:

Mounting as the root user does not require you to use kinit. However, when using root, authentication is done using the client machine's host principal rather than your Kerberos principal. Depending upon your operating system, you can configure the authentication module to fetch the Kerberos ticket when you log in, so that there is no need to fetch the ticket manually using kinit and you can mount the NFS share directly.

Register an ECS node with Active Directory


To use Active Directory (AD) as the KDC for your NFS Kerberos configuration, you must create accounts for the client and
server in AD and map the account to a principal. For the NFS server, the principal represents the NFS service accounts, for the
NFS client, the principal represents the client host machine.

Prerequisites
You must have administrator credentials for the AD domain controller.

Steps
1. Log in to AD.
2. In Server Manager, go to Tools > Active Directory Users and Computers.
3. Create a user account for the NFS principal using the format "nfs-<host>" , for example, "nfs-ecsnode1". Set a password
and set the password to never expire.
4. Create an account for yourself (optional and one time).
5. Execute the following command to create a keytab file for the NFS service account.

ktpass -princ nfs/<fqdn>@REALM.LOCAL +rndPass -mapUser nfs-<host>@REALM.LOCAL -mapOp set -crypto All -ptype KRB5_NT_PRINCIPAL -out filename.keytab

For example, to associate the nfs-ecsnode1 account with the principal nfs/ecsnode1.yourco.com@NFS-REALM.LOCAL, you can generate a keytab using:

ktpass -princ nfs/ecsnode1.yourco.com@NFS-REALM.LOCAL +rndPass -mapUser nfs-ecsnode1@NFS-REALM.LOCAL -mapOp set -crypto All -ptype KRB5_NT_PRINCIPAL -out nfs-ecsnode1.keytab

6. Import the keytab to the ECS node.

ktutil
ktutil> rkt <keytab to import>
ktutil> wkt /etc/krb5.keytab



7. Test registration by running.

kinit -k nfs/<fqdn>@NFS-REALM.LOCAL

8. See the cached credentials by running the klist command.


9. Delete the cached credentials by running the kdestroy command.
10. View the entries in the keytab file by running the klist command.
Example:

klist -kte /etc/krb5.keytab

11. Follow steps 2, 4, and 5 from Configure ECS NFS with Kerberos security to place the Kerberos configuration files
(krb5.conf, krb5.keytab and jce/unlimited) on the ECS node.

Register a Linux NFS client with Active Directory


To use Active Directory (AD) as the KDC for your NFS Kerberos configuration, you must create accounts for the client and
server in AD, and map the account to a principal. For the NFS server, the principal represents the NFS service accounts. For the
NFS client, the principal represents the client host machine.

Prerequisites
You must have administrator credentials for the AD domain controller.

Steps
1. Log in to AD.
2. In Server Manager, go to Tools > Active Directory Users and Computers.
3. Create a computer account for the client machine (for example, "nfsclient"). Set the password to never expire.
4. Create an account for a user (optional and one time).
5. Run the following command to create a keytab file for the NFS service account.

ktpass -princ host/<fqdn>@REALM.LOCAL +rndPass -mapUser <host>@REALM.LOCAL -mapOp set -crypto All -ptype KRB5_NT_PRINCIPAL -out filename.keytab

For example, to associate the nfsclient computer account with the principal host/nfsclient.yourco.com@NFS-REALM.LOCAL, you can generate a keytab using:

ktpass -princ host/nfsclient.yourco.com@NFS-REALM.LOCAL +rndPass -mapUser nfsclient$@NFS-REALM.LOCAL -mapOp set -crypto All -ptype KRB5_NT_PRINCIPAL -out nfsclient.keytab

6. Import the keytab to the client node.

ktutil
ktutil> rkt <keytab to import>
ktutil> wkt /etc/krb5.keytab

7. Test registration by running.

kinit -k host/<fqdn>@NFS-REALM.LOCAL

8. See the cached credentials by running the klist command.


9. Delete the cached credentials by running the kdestroy command.
10. View the entries in the keytab file by running the klist command.

klist -kte /etc/krb5.keytab



11. Follow steps 2, 4, and 5 from Configure ECS NFS with Kerberos security to place the Kerberos configuration files
(krb5.conf, krb5.keytab and jce/unlimited) on the ECS node.

Mount an NFS export example


When you mount an export, you must ensure that the following prerequisite steps are carried out:
● The bucket owner name is mapped to a Unix UID.
● A default group is assigned to the bucket and the name of the default group is mapped to a Linux GID. This ensures that the
default group shows as the associated Linux group when the export is mounted.
● You review the Best practices for mounting ECS NFS exports.
The following steps provide an example of how to mount an ECS NFS export file system.
1. Create a directory on which to mount the export. The directory should belong to the same owner as the bucket.
In this example, the user fred creates a directory /home/fred/nfsdir on which to mount an export.

su - fred
mkdir /home/fred/nfsdir

2. As the root user, mount the export in the directory mount point that you created.

mount -t nfs -o "vers=3,nolock" 10.247.179.162:/s3/tc-nfs6 /home/fred/nfsdir

When mounting an NFS export, you can specify the name or IP address of any of the nodes in the VDC or the address of the
load balancer.
It is important that you specify -o "vers=3".

3. Check that you can access the file system as user fred.
a. Change to user fred.

$ su - fred

b. Check you are in the directory in which you created the mount point directory.

$ pwd
/home/fred

c. List the directory.

fred@lrmh229:~$ ls -al
total
drwxr-xr-x 7 fred fredsgroup 4096 May 31 05:38 .
drwxr-xr-x 18 root root 4096 May 30 04:03 ..
-rw------- 1 fred fred 16 May 31 05:31 .bash_history
drwxrwxrwx 3 fred anothergroup 96 Nov 24 2015 nfsdir

In this example, the bucket owner is fred and a default group, anothergroup, was associated with the bucket.

If no group mapping had been created, or no default group has been associated with the bucket, you will not see a group
name but a large numeric value, as shown below.

fred@lrmh229:~$ ls -al
total
drwxr-xr-x 7 fred fredssgroup 4096 May 31 05:38 .
drwxr-xr-x 18 root root 4096 May 30 04:03 ..
-rw------- 1 fred fred 16 May 31 05:31 .bash_history
drwxrwxrwx 3 fred 2147483647 96 Nov 24 2015 nfsdir

If you have forgotten the group mapping, you can create appropriate mapping in the ECS Portal.



You can find the group ID by looking in /etc/group.

fred@lrmh229:~$ cat /etc/group | grep anothergroup


anothergroup:x:1005:

And adding a mapping between the name and GID (in this case: anothergroup => GID 1005).

If you try and access the mounted file system as the root user, or another user that does not have permissions on the file
system, you will see ?, as below.

root@lrmh229:~# cd /home/fred
root@lrmh229:/home/fred# ls -al
total
drwxr-xr-x 8 fred fredsgroup 4096 May 31 07:00 .
drwxr-xr-x 18 root root 4096 May 30 04:03 ..
-rw------- 1 fred fred 1388 May 31 07:31 .bash_history
d????????? ? ? ? ? ? nfsdir

Best practices for mounting ECS NFS exports


The following best practices apply when you mount ECS NFS exports.

Use async
Whenever possible, use the async mount option. This option dramatically reduces latency, improves throughput, and reduces
the number of connections from the client.

Set wsize and rsize to reduce round trips from the client
Where you expect to read and/or write large files, ensure that the read or write size of files is set appropriately using the
rsize and wsize mount options. Usually, you set the wsize and rsize options to the highest possible value to reduce the
number of round trips from the client. This is typically 512 KB (524288 B).
For example, to write a 10 MB file, if the wsize option is set to 524288 (512 KB), the client makes 20 separate calls. If the write
size is set as 32 KB this results in 16 times as many calls.
When using the mount command, you can supply the read and write size using the options (-o) switch. For example:

# mount 10.247.97.129:/home /home -o "vers=3,nolock,rsize=524288,wsize=524288"

NFS access using the ECS Management REST API


You can use the following APIs to configure and manage NFS access.

Table 39. ECS Management REST API calls for managing NFS access
Method Description
POST /object/nfs/exports Creates an export. The payload specifies the export path, the hosts that
can access the export, and a string that defines the security settings for
the export.
PUT/GET/DELETE /object/nfs/exports/{id} Performs the selected operation on the specified export.
GET /object/nfs/exports Retrieves all user exports that are defined for the current namespace.
POST /object/nfs/users Creates a mapping between an ECS object username or group name and a
UNIX user or group ID.

PUT/GET/DELETE /object/nfs/users/{mappingid} Performs the selected operation on the specified user or group mapping.
GET /object/nfs/users Retrieves all user mappings that are defined for the current namespace.

The ECS Management REST API documentation provides full details of the API, and the documentation for the NFS export
methods can be accessed in the EMC ECS REST API REFERENCE.
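A hedged sketch of how these endpoints might be called with curl is shown below. The management port (4443) and the token header follow the patterns used elsewhere in this guide; the payload files are placeholders, because the exact request schemas are defined in the ECS Management REST API reference rather than reproduced here.

# Create a user or group mapping (payload schema per the ECS Management REST API reference)
curl -k -X POST -H "$TOKEN" -H "Content-Type: application/xml" \
  -d @user_mapping.xml https://<node_ip>:4443/object/nfs/users

# Create an NFS export (payload specifies the export path, hosts, and security settings)
curl -k -X POST -H "$TOKEN" -H "Content-Type: application/xml" \
  -d @nfs_export.xml https://<node_ip>:4443/object/nfs/exports

# List all exports defined for the current namespace
curl -k -H "$TOKEN" https://<node_ip>:4443/object/nfs/exports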

NFS WORM (Write Once, Read Many)


NFS data becomes Write Once, Read Many (WORM) compliant when autocommit is implemented on it.
Creating files through NFS is a multistep process. To write to a new file, the NFS client first sends a CREATE request with no payload to the NFS server, and only after receiving a response does it issue a WRITE request. This is a problem for file system-enabled buckets under retention, because the file that is created with 0 bytes blocks any writes to it. For this reason, until ECS 3.3, retention on a file system-enabled bucket made the whole mounted file system read-only. Because there is no End of File (EOF) concept in NFS, setting a retention period for files on file system-enabled buckets after writing to them does not work as expected.
The autocommit period removes these constraints on NFS files in a retention-enabled bucket. During the autocommit period, certain types of updates are allowed (currently writes, ACL updates, the deletes that rsync requires, and the renames that the Vim editor requires), which removes the retention constraints for that period alone.
NOTE:
● The autocommit and the Atmos retention start delay are the same.
● Autocommit period is a bucket property like retention period.
● Autocommit period is,
○ Applicable only for the file system enabled buckets with retention period
○ Applicable to the buckets in the noncompliant namespace
○ Applies to only requests from NFS and Atmos

Seal file
The seal file functionality commits a file to the WORM state when the file is written, ignoring the remaining autocommit period. The seal is performed through the command chmod ugo-w <file> on the file.

NOTE: The seal functionality has no effect outside the retention period.
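For example, on a mounted export the seal might be applied as follows; the mount point and file name are illustrative.

# Remove write permission to commit the file to the WORM state before the autocommit period expires
chmod ugo-w /home/fred/nfsdir/archive.dat

# Subsequent writes within the retention period are denied
echo "update" >> /home/fred/nfsdir/archive.dat    # fails with a permission error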

High-level overview
Table 40. Autocommit terms
Term Description
Autocommit period       Time interval relative to the object's last modified time during which certain
                        retention constraints (for example, file modifications and file deletions) are not
                        applied. It has no effect outside the retention period.
Retention Start Delay   The Atmos head uses the start delay to indicate the autocommit period.

The following diagram provides an overview of the autocommit period behavior:



Autocommit configuration
The autocommit period can be set from the user interface or bucket REST API or S3 head or Atmos subtenant API.

User Interface
The user interface has the following support during bucket create and edit:
● When the File System is not enabled, no autocommit option is displayed.
● When the File System is enabled and no retention value is specified, autocommit is displayed but disabled.
● When the File System is enabled and a retention value is selected, autocommit is displayed and enabled for selection.
NOTE: The maximum autocommit period is limited to the smaller of the bucket retention period or the default maximum period of one day.

REST API
Create bucket REST API is modified with the new header x-emc-autocommit-period.

lglou063:~ # curl -i -k -T /tmp/bucket -X POST https://10.247.99.11:4443/object/bucket -H "$token" -H "Content-Type: application/xml" -v

The contents of /tmp/bucket:

<object_bucket_create>
<name>bucket2</name>
<namespace>s3</namespace>
<filesystem_enabled>true</filesystem_enabled>
<autocommit_period>300</autocommit_period>
<retention>1500</retention>
</object_bucket_create>

S3 head
Bucket creation
The bucket creation flow through the S3 head can use the optional request header x-emc-autocommit-period:seconds to set the autocommit period. The following checks are made in this flow:
● Allow only positive integers.
● Settable only for file system buckets.
● Settable only when the retention value is present.

./s3curl.pl --ord --id=naveen --key=+1Zh4YC2r2puuUaj3Lbnj3u0G9qgPRj0RIWJhPxH --createbucket -- -H 'x-emc-autocommit-period:600' -H 'x-emc-file-system-access-enabled:true' -H 'x-emc-namespace:ns1' http://10.249.245.187:9020/bucket5 -v

Atmos



The Atmos create subtenant request header, x-emc-retention-start-delay, captures the autocommit interval.

./atmoscurl.pl -user USER1 -action PUT -pmode TID -path / -header "x-emc-retention-period:300" -header "x-emc-retention-start-delay:120" -include

Behavior of file operations


Table 41. Expected file operations
File operation                                       Expected within autocommit period   Expected within retention period (after autocommit period)
Change permission of file                            Allowed                             Denied
Change ownership of file                             Allowed                             Denied
Write to an existing file                            Allowed                             Denied
Create an empty file                                 Allowed                             Allowed
Create a nonempty file                               Allowed                             Denied
Remove file                                          Allowed                             Denied
Move file                                            Allowed                             Denied
Rename file                                          Allowed                             Denied
Make dir                                             Allowed                             Allowed
Remove directory                                     Denied                              Denied
Move directory                                       Denied                              Denied
Rename directory                                     Denied                              Denied
Change permission on directory                       Denied                              Denied
List                                                 Allowed                             Allowed
Read the file                                        Allowed                             Allowed
Truncate file                                        Allowed                             Denied
Copy of local read-only files to NFS share           Allowed                             Allowed
Copy of read-only files from NFS share to NFS share  Allowed                             Allowed
Change atime/mtime of file/directory                 Allowed                             Denied

S3A support
The AWS S3A client is a connector for HDFS (Hadoop Distributed File System), which enables you to run Spark or MapReduce
jobs.

NOTE: S3A support is available on Hadoop 2.7 or later version.

Configuration at ECS
About this task
To use S3A on Hadoop, do the following:
NOTE: There are three ways to access ECS storage from Hadoop using the AWS S3A client.

ECS Object User


Steps
1. Create an object user that has S3 credentials.
2. Create a normal object bucket in the ECS UI. ECS does not support running the S3A client on file system-enabled buckets.
3. Make the object user the owner of the bucket.
4. Copy the S3 secret key, which must be configured on the HDP Ambari node.

IAM User
Steps
1. Create appropriate IAM policies and roles.
2. Create one or more IAM groups, and attach to policies.
3. Create one or more IAM users, and assign to groups.
4. Create a normal object bucket.
5. Provide access key and secret key information to IAM users.

SAML Assertions
Steps
1. Configure cross trust relationship between ECS and Identity Provider (ADFS).
2. Create appropriate IAM policies and roles.
3. User authenticates to Identity Provider on Hadoop node.
4. User presents the SAML assertion to ECS to receive temporary credentials.

Configuration at Hadoop Node


About this task
The following are some of the basic configuration parameters that should be added to Hadoop UI > HDFS > core-site.xml to make S3A work. Other parameters can also be added on the Hadoop node; the full set is described in the Hadoop S3A documentation.
NOTE: Putting S3A credentials in the Hadoop core-site file leads to a security vulnerability, because it allows bucket access to any Hadoop user who can view the credentials. If your Hadoop cluster contains sensitive data in the S3A object bucket, use one of the two IAM methods of authorization that are discussed above.

The following list of configuration parameters should be added in core-site.xml on the Hadoop UI. If you are using credential providers or IAM, you do not define the access key or secret key in core-site.

fs.s3a.endpoint=<ECS IP address (only one node address) or LoadBalancer IP>:9020
# Comment: s3a does not support multiple IP addresses, so it is better to use a load balancer
fs.s3a.access.key= <S3 Object User as created on ECS>
fs.s3a.secret.key=<S3 Object User Secret Key as on ECS>
fs.s3a.connection.maximum=15
fs.s3a.connection.ssl.enabled=false
fs.s3a.path.style.access=false
fs.s3a.connection.establish.timeout=5000
fs.s3a.connection.timeout=200000
fs.s3a.paging.maximum=1000
fs.s3a.threads.max=10



fs.s3a.socket.send.buffer=8192
fs.s3a.socket.recv.buffer=8192
fs.s3a.threads.keepalivetime=60
fs.s3a.max.total.tasks=5
fs.s3a.multipart.size=100M
fs.s3a.multipart.threshold=2147483647
fs.s3a.multiobjectdelete.enable=true
fs.s3a.acl.default=PublicReadWrite
fs.s3a.multipart.purge=false
fs.s3a.multipart.purge.age=86400
fs.s3a.block.size=32M
fs.s3a.readahead.range=64K
fs.s3a.buffer.dir=${hadoop.tmp.dir}/s3a

Significance of each parameter:

fs.s3a.access.key - Your AWS access key ID


fs.s3a.secret.key - Your AWS secret key
fs.s3a.connection.maximum - Controls how many parallel connections HttpClient spawns
(default: 15)
fs.s3a.connection.ssl.enabled - Enables or disables SSL connections to S3 (default: true)
fs.s3a.attempts.maximum - How many times we should retry commands on transient errors
(default: 10)
fs.s3a.connection.timeout - Socket connect timeout (default: 5000)
fs.s3a.paging.maximum - How many keys to request from S3 when doing directory listings
at a time (default: 5000)
fs.s3a.multipart.size - How big (in bytes) to split a upload or copy operation up into
(default: 100 MB)
fs.s3a.multipart.threshold - Until a file is this large (in bytes), use non-parallel
upload (default: 2 GB)
fs.s3a.acl.default - Set a canned ACL on newly created/copied objects (Private |
PublicRead | PublicReadWrite | AuthenticatedRead | LogDeliveryWrite | BucketOwnerRead |
BucketOwnerFullControl)
fs.s3a.multipart.purge - True if you want to purge existing multipart uploads that may
not have been completed/aborted correctly (default: false)
fs.s3a.multipart.purge.age - Minimum age in seconds of multipart uploads to purge
(default: 86400)
fs.s3a.buffer.dir - Comma separated list of directories that will be used to buffer file
writes out of (default: uses fs.s3.buffer.dir)
fs.s3a.server-side-encryption-algorithm - Name of server side encryption algorithm to
use for writing files (e.g. AES256) (default: null)

For details about each parameter, see the Hadoop S3A documentation.

Add this parameter in your command line while copying a large file: -D fs.s3a.fast.upload=true. For more details, see the Hadoop S3A documentation.
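Once core-site.xml is in place, a quick way to verify the configuration is to run standard Hadoop file system commands against an s3a:// URI; the bucket name and paths below are placeholders.

# List the root of the ECS bucket through the S3A connector
hadoop fs -ls s3a://mybucket/

# Copy a local file into the bucket
hadoop fs -put ./bigfile.dat s3a://mybucket/data/

# Run a distributed copy from HDFS into the bucket, enabling fast upload for large files
hadoop distcp -D fs.s3a.fast.upload=true /user/hadoop/input s3a://mybucket/input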

Geo-replication status
The ECS S3 head supports retrieving the geo-replication status of an object using replicationInfo. The API retrieves the geo-replication status of an object so that you can confirm that the object has been successfully replicated. This helps automate capacity management operations, enables site reliability operations, and ensures that critical data is not deleted accidentally.

Request:
GET /bucket/key?replicationInfo

Response:

<ObjectReplicationInfo xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <IndexReplicated>false</IndexReplicated>
  <ReplicatedDataPercentage>64.0</ReplicatedDataPercentage>
</ObjectReplicationInfo>
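A hedged example of issuing this call with the s3curl tool used elsewhere in this guide; the credentials profile, bucket, key, and endpoint are placeholders.

# Query the geo-replication status of an object through the ECS S3 head
./s3curl.pl --id=myprofile -- "http://<ecs_node>:9020/mybucket/mykey?replicationInfo" | xmllint --format -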



10
Maintenance
Topics:
• Maintenance
• ECS Appliance CRU and FRU guide availability

Maintenance
ECS allows you to monitor and manage disks.
All current generation 1, 2, and 3 hardware is supported for monitoring disk status.
Replacing a disk through the ECS UI is only supported for:
● Gen3 hardware (all EX-Series)
● Gen2 hardware (only U-Series)
NOTE: For hardware other than the above, a support request is required to replace disks.

The supported disk types are:


● NVMe SSD
● HDD
● SSD (used for SSD read cache functionality only)
Following is a description of the disk color coding:
● Red: These disks require your attention.
● Yellow: These disks have problems and are recovering. No action is required.
● Blue: These disks require your action.
● Green: These disks are operating normally.

Rack
The Rack page allows you to see all racks within the system and analyze node status inside each rack.
To view Rack, select Manage > Maintenance.

Table 42. Rack


Field Description
Rack Name of the racks
Data Disks Status of Data Disks
SSD Cache Disks Status of SSD Cache Disks
Unassigned Disks Status of disks that have not yet been automatically assigned
for use (this column is available from ECS 3.6). Disks
remaining in this category require contacting ECS Remote
Support for assistance.

Node
The Node page allows you to see all nodes within a rack and analyze disk status inside each node.
To view Node, select Manage > Maintenance > rack_name.

Table 43. Node
Field Description
Node Name of the nodes
Data Disks Status of Data Disks
SSD Cache Disks Status of SSD Cache Disks
Unassigned Disks Status of disks that have not yet been automatically assigned
for use (this column is available from ECS 3.6). Disks
remaining in this category require contacting ECS Remote
Support for assistance.

Disk
The Disk page allows you to see and manage all disks within a node.
To view Disk, select Manage > Maintenance > rack_name > node_name.

Table 44. Disk


Field Description
Disk Name of the disk
Slot Slot number of the disk
Serial number Serial number of the disk
NOTE: After a disk is removed or goes missing, the Status
column reflects it. However, the Serial number column
continues to display the serial number of the old disk until
the disk is replaced. The serial number of the new disk is
displayed after the old disk is replaced.

Status Status of the disk


SSD Life-Remaining The SSD Life-Remaining column shows the amount of wear
that is left on the disk. For a new disk, this column shows
100% and the green bar is full.

For NVMe SSD data disks:

● Standard Green (25% to 100%)


● Lemon or Lime (10% to 24%)
● Yellow or Orange (1% to 9%)
● Red (0%)

For SSD read cache disks:

● Standard Green (25% to 100%)


● Lemon or Lime (15% to 24%)
● Yellow or Orange (5% to 14%)
● Red (0% to 4%)
Description The Description column shows a description of the disk status,
the reason for a disk replacement process failure, or the
instructions that a user must follow to start or complete the
replacement process.
Actions When data recovery automatically completes for a failed disk,
the Actions column presents a Replace button for the target
disk on the supported hardware.

ECS Appliance CRU and FRU guide availability

ECS Appliance Storage Disk Replacement Guide


This document describes how to replace the HDDs on the ECS EX500, EX300, EX3000, and Gen 2 U-Series appliance servers,
as well as the NVMe SSDs on the EXF900. For detailed procedures for each server type, go to ECS Solve Online and select CRU
Procedures, or go to Dell Support.

ECS SSD CRU Replacement Guide


This document describes how to replace the SSDs on the ECS EX500, EX300, EX3000, and Gen 2 U-Series appliance servers.
For detailed procedures for each server type, go to ECS Solve Online, and select CRU Procedures, or Dell Support.

ECS SSD Read Cache Deployment Guide


This document describes how to deploy the SSD read cache on the ECS EX500, EX3000, and Gen 2 U-Series appliance servers.
For detailed procedures for each server type, go to ECS Solve Online, and select CRU Procedures, or Dell Support.
NOTE:
● The SSD read cache deployment procedure for EX300 servers is not a customer activity; contact Dell EMC Deployment
Services for assistance.
● ECS SSD Read Cache is not applicable to the EXF900 platform.

11
Certificates
Topics:
• Introduction to certificates
• ECS certificate tool
• Generate certificates
• Upload a certificate
• Verify installed certificates

Introduction to certificates
ECS ships with a default, unsigned SSL certificate installed in the keystore on each node. This certificate is not trusted by
applications that talk to ECS, or by the browser when users access ECS through the ECS Portal.
To prevent users from seeing an untrusted certificate error, or to allow applications to communicate with ECS, you should install
a certificate that is signed by a trusted Certificate Authority (CA). You can generate a self-signed certificate to use until you
have a CA-signed certificate. The self-signed certificate must be installed into the certificate store of any machine that
accesses ECS via HTTPS.
ECS uses the following types of SSL certificates:

Management certificates - Used for management requests using the ECS Management REST API. These HTTPS requests use
port 4443.

Object certificates - Used for requests using the supported object protocols. These HTTPS requests use ports 9021 (S3),
9023 (Atmos), and 9025 (Swift).

You can upload a self-signed certificate or a certificate that is signed by a CA, or, for an object certificate, you can
request ECS to generate a certificate for you. The key/certificate pairs are uploaded to ECS by using the ECS Management
REST API on port 4443.
The following topics explain how to create, upload, and verify certificates:
● ECS certificate tool
● Generate certificates
● Upload a certificate
● Verify installed certificates

ECS certificate tool


The ECS certificate tool assists in uploading SSL certificates to the ECS data or management interfaces.
The following topics explain how to work with the ECS certificate tool.
● Installation
● Configuration
● View current certificates
● Create certificate signing request
● Create a self-signed certificate
● Upload certificate

Installation
This task describes how to install the ECS Certificate Tool.

Steps
1. Download the ecs_certificate_tool package from here.
2. Upload the tool to /home/admin on one of the ECS nodes.
3. Change to the /home/admin directory and extract the package.

# cd /home/admin
# tar -zxvf ecs_certificate_tool-1.1.tgz

4. Change into the certificate tool directory.

# cd ecs_certificate_tool-1.1

5. Edit the config.ini file and enter the correct root UI credentials.

Command:

# sudo vi config.ini

Example:

[UI_CREDENTIALS]
USERNAME = root
PASSWORD = ChangeMe

6. Use the certificate tool to generate your SAN (subject alternative name) configuration. Manually add the FQDN and IP
address of your load balancer if you are using one.
Command:

sudo python ./ecs_certificate_tool.py generate_san

Example:

admin@provo-ex3000:~/ecs_certificate_tool-1.0> sudo python ./ecs_certificate_tool.py


generate_san
ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

======================================================================
Generating SAN (subject alternative name) config.
======================================================================

----------------------------------------------------------------------
Setting DATA_SUBJECT_ALTERNATIVE_NAME config
----------------------------------------------------------------------
Set DNS_NAMES to :
['layton-ex3000.example.com',
'ogden-ex3000.example.com',
'orem-ex3000.example.com',
'provo-ex3000.example.com',
'sandy-ex3000.example.com']

Set IP_ADDRESSES to :
['192.0.2.104',
'192.0.2.105',
'192.0.2.106',
'192.0.2.107',
'192.0.2.108']

----------------------------------------------------------------------

Setting MANAGEMENT_SUBJECT_ALTERNATIVE_NAME config
----------------------------------------------------------------------
Set DNS_NAMES to :
['layton-ex3000.example.com',
'ogden-ex3000.example.com',
'orem-ex3000.example.com',
'provo-ex3000.example.com',
'sandy-ex3000.example.com']

Set IP_ADDRESSES to :
['192.0.2.104',
'192.0.2.105',
'192.0.2.106',
'192.0.2.107',
'192.0.2.108']

Wrote changes to: /home/admin/ecs_certificate_tool-1.0/config.ini


DONE

Configuration
This task describes the configuration values for your certificates.
The config.ini file is where you set all the values for your certificate.
If you do not want to use a value, leave it blank, as in the example below:

# optional unit name


ORGANIZATIONAL_UNIT_NAME =

Example of config.ini file:

[GENERAL]
COMMON_NAME = *.ecs.example.com
# Two letter country name
COUNTRY_NAME = US
LOCALITY_NAME = Salt Lake City
STATE_OR_PROVINCE_NAME = Utah
STREET_ADDRESS = 123 Example Street
ORGANIZATION_NAME = Example Inc.
# optional unit name
ORGANIZATIONAL_UNIT_NAME =
# optional email address
EMAIL_ADDRESS = example@example.com

[UI_CREDENTIALS]
USERNAME = root
PASSWORD = ChangeMe

[SELF_SIGNED]
# 1825 days = 5 years
VALID_DAYS = 1825

[DATA_SUBJECT_ALTERNATIVE_NAME]
DNS_NAMES = node1.ecs.example.com node2.ecs.example.com node3.ecs.example.com
IP_ADDRESSES = 192.0.2.1 192.0.2.2 192.0.2.3 192.0.2.4

[MANAGEMENT_SUBJECT_ALTERNATIVE_NAME]
DNS_NAMES = node1.ecs.example.com node2.ecs.example.com node3.ecs.example.com
IP_ADDRESSES = 198.51.100.1 198.51.100.2 198.51.100.3 198.51.100.4

[ADVANCED]
# Probably dont use these unless you really know what your doing
SERIAL_NUMBER =
SURNAME =
GIVEN_NAME =
TITLE =
GENERATION_QUALIFIER =
X500_UNIQUE_IDENTIFIER =
DN_QUALIFIER =

PSEUDONYM =
USER_ID =
DOMAIN_COMPONENT =
JURISDICTION_COUNTRY_NAME =
JURISDICTION_LOCALITY_NAME =
BUSINESS_CATEGORY =
POSTAL_ADDRESS =
POSTAL_CODE =
INN =
OGRN =
SNILS =
UNSTRUCTURED_NAME =

View current certificates


This task describes how to view the current certificates using the ECS Certificate Tool.

Steps
Run the ecs_certificate_tool view_certs operation.

Command:

# sudo python ./ecs_certificate_tool.py view_certs

Example output:

ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

Authenticating using configured credentials..PASS

----------------------------------------------------------------------
View certificates
----------------------------------------------------------------------

======================================================================
Data Certificate:
======================================================================

Certificate:
Data:
Version: 3 (0x2)
Serial Number:
3b:0f:a3:e2:fa:0a:90:14:86:6c:a3:3a:26:5c:0b:8d:6e:18:7d:eb
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123 Example
Street, O=Example Inc./emailAddress=example@example.com
Validity
Not Before: Oct 17 18:35:06 2020 GMT
Not After : Oct 16 18:35:06 2025 GMT
Subject: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123
Example Street, O=Example Inc./emailAddress=example@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ad:13:ea:31:bb:13:30:fc:ad:75:1a:84:16:53:
76:9d:0d:96:60:69:04:70:ad:00:76:c5:e4:f0:39:
3d:e3:9b:2e:2a:06:0b:ae:29:16:22:69:73:1d:2b:
27:73:68:7a:42:62:84:37:9b:7e:7f:60:48:aa:80:
14:96:07:52:ac:d5:dd:1f:af:59:3b:88:5e:15:43:
f1:9e:29:91:0a:6d:19:8e:41:4b:3c:9f:0c:64:16:
5c:c6:61:a6:c7:28:a9:9e:14:81:10:7e:4a:4f:25:
93:20:d9:5b:fe:b3:ac:56:28:f0:89:2c:e3:97:18:
df:1d:e3:1b:6d:c5:08:fb:d6:97:81:82:b1:6b:33:
45:1d:de:7a:30:5c:6d:4a:70:96:06:f8:05:48:a7:
89:ad:ce:db:99:f2:61:88:92:75:e5:cf:d2:b1:2c:
28:60:6f:5e:ba:6c:02:f4:12:90:be:eb:6d:48:ae:

b2:3a:6e:76:a6:02:b1:9e:f7:95:2c:65:8a:80:1a:
64:52:ec:f5:0c:2b:c8:87:a7:e5:4d:f7:34:60:a5:
49:03:30:27:10:8d:ad:4e:92:52:8b:d9:6b:ad:2d:
15:60:a5:26:fc:1b:1d:69:9f:5c:a3:0f:d9:cb:b9:
1d:68:30:6c:c8:ca:e1:71:4b:88:bd:98:d7:10:ae:
89:c5
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:node1.ecs.example.com, DNS:node2.ecs.example.com,
DNS:node3.ecs.example.com, IP Address:192.0.2.1, IP Address:192.0.2.2, IP
Address:192.0.2.3, IP Address:192.0.2.4
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Key Usage: critical
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage: critical
TLS Web Server Authentication
X509v3 Authority Key Identifier:
0.
Signature Algorithm: sha256WithRSAEncryption
33:85:7e:3b:fd:fd:3a:35:97:17:11:2d:4d:e1:7e:03:35:82:
8a:47:30:ed:b2:f9:1b:b4:22:a2:60:00:b5:9c:aa:6c:0d:e7:
ea:c7:0a:e6:05:24:7d:bd:50:ab:23:9b:16:6a:e7:be:e9:21:
26:61:0e:e5:e1:62:7e:d8:01:3a:3e:19:14:89:c2:ef:62:a0:
17:5c:80:2b:24:6b:96:73:fa:b0:8f:4d:09:0e:69:4f:72:f0:
4d:b1:13:8d:90:4e:18:4b:82:be:fd:48:b0:c2:9d:9c:43:d9:
d9:73:e6:15:88:79:1f:3e:13:ec:c9:6f:5f:2a:08:7c:a7:5d:
b4:e1:50:0f:3c:49:e3:e4:9f:8f:dd:e0:b5:b5:2d:d8:2d:29:
94:2d:4b:66:20:36:f0:ae:3a:ae:a4:c5:91:3c:f4:2a:d6:f5:
24:ec:7b:3a:96:d6:75:91:f9:b3:1c:8a:93:87:1b:d7:f2:f7:
72:4d:0c:02:b9:2e:ab:f6:76:ca:c5:74:39:e0:a0:54:2b:85:
4d:dd:e6:c7:fc:d0:e7:bc:3e:9e:98:19:e5:ed:ad:5f:4b:ea:
20:17:c5:23:eb:09:ad:8e:13:57:75:78:f9:68:bb:18:34:fc:
3a:26:94:90:5e:ed:a6:09:bb:14:5c:bd:2e:d3:5b:c4:43:08:
66:95:e7:ee

======================================================================
Management Certificate:
======================================================================

Certificate:
Data:
Version: 3 (0x2)
Serial Number:
3b:0f:a3:e2:fa:0a:90:14:86:6c:a3:3a:26:5c:0b:8d:6e:18:7d:eb
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123 Example
Street, O=Example Inc./emailAddress=example@example.com
Validity
Not Before: Oct 17 18:35:06 2020 GMT
Not After : Oct 16 18:35:06 2025 GMT
Subject: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123
Example Street, O=Example Inc./emailAddress=example@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ad:13:ea:31:bb:13:30:fc:ad:75:1a:84:16:53:
76:9d:0d:96:60:69:04:70:ad:00:76:c5:e4:f0:39:
3d:e3:9b:2e:2a:06:0b:ae:29:16:22:69:73:1d:2b:
27:73:68:7a:42:62:84:37:9b:7e:7f:60:48:aa:80:
14:96:07:52:ac:d5:dd:1f:af:59:3b:88:5e:15:43:
f1:9e:29:91:0a:6d:19:8e:41:4b:3c:9f:0c:64:16:
5c:c6:61:a6:c7:28:a9:9e:14:81:10:7e:4a:4f:25:
93:20:d9:5b:fe:b3:ac:56:28:f0:89:2c:e3:97:18:
df:1d:e3:1b:6d:c5:08:fb:d6:97:81:82:b1:6b:33:
45:1d:de:7a:30:5c:6d:4a:70:96:06:f8:05:48:a7:
89:ad:ce:db:99:f2:61:88:92:75:e5:cf:d2:b1:2c:
28:60:6f:5e:ba:6c:02:f4:12:90:be:eb:6d:48:ae:
b2:3a:6e:76:a6:02:b1:9e:f7:95:2c:65:8a:80:1a:
64:52:ec:f5:0c:2b:c8:87:a7:e5:4d:f7:34:60:a5:

49:03:30:27:10:8d:ad:4e:92:52:8b:d9:6b:ad:2d:
15:60:a5:26:fc:1b:1d:69:9f:5c:a3:0f:d9:cb:b9:
1d:68:30:6c:c8:ca:e1:71:4b:88:bd:98:d7:10:ae:
89:c5
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:node1.ecs.example.com, DNS:node2.ecs.example.com,
DNS:node3.ecs.example.com, IP Address:192.0.2.1, IP Address:192.0.2.2, IP
Address:192.0.2.3, IP Address:192.0.2.4
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Key Usage: critical
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage: critical
TLS Web Server Authentication
X509v3 Authority Key Identifier:
0.
Signature Algorithm: sha256WithRSAEncryption
33:85:7e:3b:fd:fd:3a:35:97:17:11:2d:4d:e1:7e:03:35:82:
8a:47:30:ed:b2:f9:1b:b4:22:a2:60:00:b5:9c:aa:6c:0d:e7:
ea:c7:0a:e6:05:24:7d:bd:50:ab:23:9b:16:6a:e7:be:e9:21:
26:61:0e:e5:e1:62:7e:d8:01:3a:3e:19:14:89:c2:ef:62:a0:
17:5c:80:2b:24:6b:96:73:fa:b0:8f:4d:09:0e:69:4f:72:f0:
4d:b1:13:8d:90:4e:18:4b:82:be:fd:48:b0:c2:9d:9c:43:d9:
d9:73:e6:15:88:79:1f:3e:13:ec:c9:6f:5f:2a:08:7c:a7:5d:
b4:e1:50:0f:3c:49:e3:e4:9f:8f:dd:e0:b5:b5:2d:d8:2d:29:
94:2d:4b:66:20:36:f0:ae:3a:ae:a4:c5:91:3c:f4:2a:d6:f5:
24:ec:7b:3a:96:d6:75:91:f9:b3:1c:8a:93:87:1b:d7:f2:f7:
72:4d:0c:02:b9:2e:ab:f6:76:ca:c5:74:39:e0:a0:54:2b:85:
4d:dd:e6:c7:fc:d0:e7:bc:3e:9e:98:19:e5:ed:ad:5f:4b:ea:
20:17:c5:23:eb:09:ad:8e:13:57:75:78:f9:68:bb:18:34:fc:
3a:26:94:90:5e:ed:a6:09:bb:14:5c:bd:2e:d3:5b:c4:43:08:
66:95:e7:ee

DONE

Create certificate signing request


This task describes how to create a certificate signing request (CSR) for the data and management interfaces.

Steps
Create a CSR.

Create a CSR for the data interface:

admin@provo-ex3000:~/ecs_certificate_tool-1.0> sudo python ./ecs_certificate_tool.py


create_csr -d
ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------

Authenticating using configured credentials..PASS

----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------

Validating COMMON_NAME = *.ecs.example.com..PASS


Validating COUNTRY_NAME = US..PASS
Validating LOCALITY_NAME = Salt Lake City..PASS
Validating STATE_OR_PROVINCE_NAME = Utah..PASS

Validating STREET_ADDRESS = 123 Example Street..PASS
Validating ORGANIZATION_NAME = Example Inc...PASS
Validating EMAIL_ADDRESS = example@example.com..PASS
----------------------------------------------------------------------
Validating DNS_NAMES configuration
----------------------------------------------------------------------

Validating DNSName: node1.ecs.example.com..PASS


Validating DNSName: node2.ecs.example.com..PASS
Validating DNSName: node3.ecs.example.com..PASS

----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------

Validating IPv4Address: 192.0.2.1..PASS


Validating IPv4Address: 192.0.2.2..PASS
Validating IPv4Address: 192.0.2.3..PASS
Validating IPv4Address: 192.0.2.4..PASS

Validating SELF_SIGNED..PASS

All configurations items validated successfully!

Creating RSA private key..DONE


Wrote private key to /home/admin/ecs_certificate_tool-1.0/FNM00181300310-data_private.key
----------------------------------------------------------------------
Certificate Signing Request
----------------------------------------------------------------------

Creating Certificate Signing Request..DONE


Wrote certificate signing request to /home/admin/ecs_certificate_tool-1.0/FNM00181300310-
data.csr

Create a CSR for the management interface:

admin@provo-ex3000:~/ecs_certificate_tool-1.0> sudo python ./ecs_certificate_tool.py


create_csr -m
ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------

Authenticating using configured credentials..PASS

----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------

Validating COMMON_NAME = *.ecs.example.com..PASS


Validating COUNTRY_NAME = US..PASS
Validating LOCALITY_NAME = Salt Lake City..PASS
Validating STATE_OR_PROVINCE_NAME = Utah..PASS
Validating STREET_ADDRESS = 123 Example Street..PASS
Validating ORGANIZATION_NAME = Example Inc...PASS
Validating EMAIL_ADDRESS = example@example.com..PASS
----------------------------------------------------------------------
Validating DNS_NAMES configuration
----------------------------------------------------------------------

Validating DNSName: node1.ecs.example.com..PASS


Validating DNSName: node2.ecs.example.com..PASS
Validating DNSName: node3.ecs.example.com..PASS

----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------

Validating IPv4Address: 198.51.100.1..PASS

Validating IPv4Address: 198.51.100.2..PASS
Validating IPv4Address: 198.51.100.3..PASS
Validating IPv4Address: 198.51.100.4..PASS

Validating SELF_SIGNED..PASS

All configurations items validated successfully!

Creating RSA private key..DONE


Wrote private key to /home/admin/ecs_certificate_tool-1.0/FNM00181300310-
management_private.key
----------------------------------------------------------------------
Certificate Signing Request
----------------------------------------------------------------------

Creating Certificate Signing Request..DONE


Wrote certificate signing request to /home/admin/ecs_certificate_tool-1.0/FNM00181300310-
management.csr

Create a self-signed certificate


This task describes how to create a self-signed certificate for the data and management interfaces.

Steps
Create a self-signed certificate.

Create a self-signed certificate for the data interface:

admin@provo-ex3000:~/ecs_certificate_tool-1.0> sudo python ./ecs_certificate_tool.py


create_ssc -d
ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------

Authenticating using configured credentials..PASS

----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------

Validating COMMON_NAME = *.ecs.example.com..PASS


Validating COUNTRY_NAME = US..PASS
Validating LOCALITY_NAME = Salt Lake City..PASS
Validating STATE_OR_PROVINCE_NAME = Utah..PASS
Validating STREET_ADDRESS = 123 Example Street..PASS
Validating ORGANIZATION_NAME = Example Inc...PASS
Validating EMAIL_ADDRESS = example@example.com..PASS
----------------------------------------------------------------------
Validating DNS_NAMES configuration
----------------------------------------------------------------------

Validating DNSName: node1.ecs.example.com..PASS


Validating DNSName: node2.ecs.example.com..PASS
Validating DNSName: node3.ecs.example.com..PASS

----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------

Validating IPv4Address: 192.0.2.1..PASS


Validating IPv4Address: 192.0.2.2..PASS
Validating IPv4Address: 192.0.2.3..PASS
Validating IPv4Address: 192.0.2.4..PASS

Validating SELF_SIGNED..PASS

All configurations items validated successfully!

Creating RSA private key..DONE


Wrote private key to /home/admin/ecs_certificate_tool-1.0/FNM00181300310-data_private.key
----------------------------------------------------------------------
Self-signed certificate
----------------------------------------------------------------------

Creating self-signed certificate..DONE


Wrote Certificate to: /home/admin/ecs_certificate_tool-1.0/FNM00181300310-data.crt
admin@provo-ex3000:~/ecs_certificate_tool-1.0>

Create a self-signed certificate for the management interface:

admin@provo-ex3000:~/ecs_certificate_tool-1.0> sudo python ./ecs_certificate_tool.py


create_ssc -m
ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------

Authenticating using configured credentials..PASS

----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------

Validating COMMON_NAME = *.ecs.example.com..PASS


Validating COUNTRY_NAME = US..PASS
Validating LOCALITY_NAME = Salt Lake City..PASS
Validating STATE_OR_PROVINCE_NAME = Utah..PASS
Validating STREET_ADDRESS = 123 Example Street..PASS
Validating ORGANIZATION_NAME = Example Inc...PASS
Validating EMAIL_ADDRESS = example@example.com..PASS
----------------------------------------------------------------------
Validating DNS_NAMES configuration
----------------------------------------------------------------------

Validating DNSName: node1.ecs.example.com..PASS


Validating DNSName: node2.ecs.example.com..PASS
Validating DNSName: node3.ecs.example.com..PASS

----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------

Validating IPv4Address: 198.51.100.1..PASS


Validating IPv4Address: 198.51.100.2..PASS
Validating IPv4Address: 198.51.100.3..PASS
Validating IPv4Address: 198.51.100.4..PASS

Validating SELF_SIGNED..PASS

All configurations items validated successfully!

Creating RSA private key..DONE


Wrote private key to /home/admin/ecs_certificate_tool-1.0/FNM00181300310-
management_private.key
----------------------------------------------------------------------
Self-signed certificate
----------------------------------------------------------------------

Creating self-signed certificate..DONE


Wrote Certificate to: /home/admin/ecs_certificate_tool-1.0/FNM00181300310-management.crt

Upload certificate
This task describes how to upload the data and management certificates to ECS.

About this task


If you are using a self-signed certificate generated by this tool, the private key and certificate are generated in your current
directory.
If you have a certificate that is signed by a CA, copy the certificate and the private key to the certificate tool directory on the
ECS node.

Steps
Upload your certificate.
Upload data certificate.
Command:

# sudo python ./ecs_certificate_tool.py upload_certificate -c <path to certificate> -p


<path to private key> --data

Example:

admin@provo-ex3000:~/ecs_certificate_tool-1.0> sudo python ./ecs_certificate_tool.py


upload_certificate -c ./FNM00181300310-data.crt -p FNM00181300310-data_private.key --
data ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

----------------------------------------------------------------------
Upload Certificate
----------------------------------------------------------------------

Authenticating using configured credentials..PASS

Reading certificate from: ./FNM00181300310-data.crt..DONE


Reading private key from: FNM00181300310-data_private.key..DONE
Verifying the private key matches the certificate..DONE
Uploading the certificate to ECS..DONE

admin@provo-ex3000:~/ecs_certificate_tool-1.0>

NOTE: After uploading the data certificate, you have two options.
● You can wait two hours for dataheadsvc to propagate the new certificate across the cluster.
● You can manually restart dataheadsvc on the node you ran the tool from, but it can have a brief impact.

Command to restart dataheadsvc:

# sudo kill -9 `pidof dataheadsvc`

Upload management certificate.

Command:

# sudo python ./ecs_certificate_tool.py upload_certificate -c <path to certificate> -p


<path to private key> --management

Example:

admin@provo-ex3000:~/ecs_certificate_tool-1.0> sudo python ./ecs_certificate_tool.py


upload_certificate -c ./FNM00181300310-management.crt -p FNM00181300310-
management_private.key -m
ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log

----------------------------------------------------------------------
Upload Certificate

----------------------------------------------------------------------

Authenticating using configured credentials..PASS

Reading certificate from: ./FNM00181300310-management.crt..DONE


Reading private key from: FNM00181300310-management_private.key..DONE
Verifying the private key matches the certificate..DONE
Uploading the certificate to ECS..DONE

NOTE: After uploading the new management certificate, you must do rolling restarts of objcontrolsvc and nginx across
the cluster.
a. Generate a cluster wide MACHINES file:

# sudo getclusterinfo -a /root/MACHINES.VDC && sudo viprscp /root/MACHINES.VDC


/root/;sudo viprscp /root/MACHINES.VDC /home/admin/;sudo viprexec -i "pingall;
md5sum /root/MACHINES.VDC /home/admin/MACHINES.VDC"

b. Restart objcontrolsvc across the cluster:

# viprexec -f ~/MACHINES.VDC -i 'pidof objcontrolsvc; kill -9 `pidof


objcontrolsvc`; sleep 60; pidof objcontrolsvc'

c. Restart nginx across the cluster:

# viprexec -f ~/MACHINES.VDC -i -c "/etc/init.d/nginx restart;sleep 60;/etc/init.d/


nginx status"

Generate certificates
You can generate a self-signed certificate, or you can purchase a certificate from a certificate authority (CA). The CA-signed
certificate is recommended for production purposes because it can be validated by any client machine without any extra steps.
Certificates must be in PEM-encoded x509 format.
When you generate a certificate, you typically specify the hostname where the certificate is used as the common name (CN).
However, since ECS has multiple nodes, each with its own hostname, you must create a single certificate that supports all the
different host names in an ECS cluster. SSL certificates support this using the Subject Alternative Names (SAN) configuration.
This configuration section allows you to specify all the host names and IP addresses that the certificate should support.
For maximum compatibility with object protocols, the Common Name (CN) on your certificate must point to the wildcard DNS
entry used by S3, because S3 is the only protocol that uses virtually hosted buckets (and injects the bucket name into the
hostname). You can specify only one wildcard entry on an SSL certificate, and it must be under the CN. The other DNS entries
for your load balancer for the Atmos and Swift protocols must be registered as Subject Alternative Names (SANs) on the
certificate.
The topics in this section show how to generate a certificate or certificate request using openssl; however, your IT
organization may have different requirements or procedures for generating certificates.

Create a private key


You must create a private key to sign self-signed certificates and to create signing requests.

About this task


SSL uses public-key cryptography which requires a private and a public key. The first step in configuring it is to create a private
key. The public key is created automatically, using the private key, when you create a certificate signing request or a certificate.
The following steps describe how to use the openssl tool to create a private key.

Steps
1. Log in to an ECS node or to a node that you can connect to the ECS cluster.

2. Use the openssl tool to generate a private key.
For example, to create a key called server.key, use:

openssl genrsa -des3 -out server.key 2048

openssl genrsa -des3 -out server.key 3072

openssl genrsa -des3 -out server.key 4096

3. When prompted, enter a passphrase for the private key and reenter it to verify. You will need to provide this passphrase
when creating a self-signed certificate or a certificate signing request using the key.
You must create a copy of the key with the passphrase removed before uploading the key to ECS. For more information, see
Upload a certificate.
4. Set the permissions on the key file.

chmod 0400 server.key

Generate a SAN configuration


If you want your certificates to support Subject Alternative Names (SANs), you must define the alternative names in a
configuration file.

About this task


OpenSSL does not allow you to pass Subject Alternative Names (SANs) through the command line, so you must add them to a
configuration file first. To do this, you must locate your default OpenSSL configuration file. On Ubuntu, it is at /usr/lib/ssl/
openssl.cnf.
NOTE: The openssl.cnf file location varies with Linux distribution. Replace with the correct location in Step 1.

Ubuntu 18/20 - /etc/ssl/openssl.cnf (softlink in /usr/lib/ssl to /etc/ssl/openssl.cnf)


CentOS 7/8 - /etc/pki/tls/openssl.cnf
ECS 3.4/3.5 - /etc/ssl/openssl.cnf

Steps
1. Create the configuration file.

cp /etc/ssl/openssl.cnf request.conf

2. NOTE: If using a wildcard in the CN, it should also be added as an entry in the SAN section for maximum compatibility.

Edit the configuration file with a text editor and make the following changes.
a. Add the [ alternate_names ] section.
For example:

[ alternate_names ]
DNS.1 = os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com

NOTE: There is a space between the bracket and the name of the section.

NOTE: If you are using a load balancer, you can use FQDN instead of IP address.

If you are uploading the certificates to ECS nodes rather than to a load balancer, the format is:

[ alternate_names ]
IP.1 = <IP node 1>
IP.2 = <IP node 2>
IP.3 = <IP node 3>
...

b. In the section [ v3_ca ], add the following lines:

subjectAltName = @alternate_names
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth

The following line is likely to exist in this [ v3_ca ] section. If you create a certificate signing request, you must comment
it out as shown:

#authorityKeyIdentifier=keyid:always,issuer

c. In the [ req ] section, add the following lines:

x509_extensions = v3_ca #for self signed cert


req_extensions = v3_ca #for cert signing req

d. In the section [ CA_default ], uncomment or add the line:

copy_extensions = copy
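Taken together, the edited sections of request.conf might look like the following minimal sketch. The host names and IP addresses are placeholders, and the rest of the content copied from openssl.cnf is omitted:

[ req ]
# (existing settings from openssl.cnf remain here)
x509_extensions = v3_ca #for self signed cert
req_extensions = v3_ca #for cert signing req

[ CA_default ]
# (existing settings from openssl.cnf remain here)
copy_extensions = copy

[ v3_ca ]
subjectAltName = @alternate_names
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
#authorityKeyIdentifier=keyid:always,issuer

[ alternate_names ]
DNS.1 = os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com
IP.1 = 192.0.2.1
IP.2 = 192.0.2.2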

Create a self-signed certificate


You can create a self-signed certificate.

Prerequisites
● Create a private key using the procedure in Create a private key.
● To create certificates that use SAN, you must create a SAN configuration file using the procedure in Generate a SAN
configuration.
● Create a self-signed certificate for management and one for data (object).

About this task

Steps
1. Use the private key to create a self-signed certificate.
Two ways of creating the certificate are shown: one for use if you have already prepared a SAN configuration file to
specify the alternative server names, and another if you have not.
If you are using SAN:

openssl req -x509 -new -key server.key -config request.conf -out server.crt

If you are not, use:

openssl req -x509 -new -key server.key -out server.crt

Example output.

Signature ok
subject=/C=US/ST=GA/

2. Enter the pass phrase for your private key.
3. At the prompts, enter the fields for the DN for the certificate.
Most fields are optional. Enter a Common Name (CN).
NOTE: The CN should have a wildcard (*) to support both path style and virtual style addressing. The wildcard (*)
supports virtual style addressing whereas the FQDN supports path style.

You see the following prompts:

You are about to be asked to enter information that will be incorporated


into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Acme
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*.acme.com
Email Address []:

4. Enter the Distinguished Name (DN) details when prompted. More information about DN fields is provided in Distinguished
Name (DN) fields.
5. View the certificate.

openssl x509 -in server.crt -noout -text
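To confirm that the Subject Alternative Name entries from request.conf were included, you can filter the text output for the extension block; a sketch:

openssl x509 -in server.crt -noout -text | grep -A 3 "Subject Alternative Name"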

Distinguished Name (DN) fields

The following table describes the fields that make up the Distinguished Name (DN).

Table 45. Distinguished Name (DN) fields


Name Description Example
Common Name (CN) The CN should have a wildcard (*) to support both path style *.yourco.com
and virtual style addressing. The wildcard (*) supports virtual style ecs1.yourco.com
addressing whereas the FQDN supports path style.
Organization The legal name of your organization. This must not be abbreviated and Yourco Inc.
should include suffixes such as Inc, Corp, or LLC.
Organizational Unit The division of your organization handling the certificate. IT Department

Locality or City The city where your organization is located. Mountain View
State or Province The state or region where your organization is located. This must not be abbreviated. California

Country The two-letter ISO code for the country or region where your US
organization is located.
Email address An email address to contact your organization. contact@yourco.com

Create a certificate signing request
You can create a certificate signing request to submit to a CA to obtain a signed certificate.

Prerequisites
● You must create a private key using the procedure in Create a private key.
● To create certificates that use SAN, you must create a SAN configuration file using the procedure in Generate a SAN
configuration.

Steps
1. Use the private key to create a certificate signing request.
Two ways of creating the signing request are shown: one for use if you have already prepared a SAN configuration file to specify
the alternative server names, and another if you have not.
If you are using SAN:

openssl req -new -key server.key -config request.conf -out server.csr

If you are not, use:

openssl req -new -key server.key -out server.csr

When creating a signing request, you are asked to supply the Distinguished Name (DN) which comprises a number of fields.
Only the Common Name is required and you can accept the defaults for the other parameters.
2. Enter the pass phrase for your private key.
3. At the prompts, enter the fields for the DN for the certificate.
Most fields are optional. However, you must enter a Common Name (CN).
NOTE: The CN should have a wildcard (*) to support both path style and virtual style addressing. The wildcard (*)
supports virtual style addressing whereas the FQDN supports path style.

You will see the following prompts:

You are about to be asked to enter information that will be incorporated


into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Acme
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:*.acme.com
Email Address []:

More information about the DN fields is provided in Distinguished Name (DN) fields.
4. You are prompted to enter an optional challenge password and a company name.

Please enter the following 'extra' attributes


to be sent with your certificate request
A challenge password []:
An optional company name []:

5. View the certificate request.

openssl req -in server.csr -text -noout

Results
You can submit the certificate signing request to your CA who will return a signed certificate file.

Next steps
Once you receive the CA-signed certificate file, make sure it is in the correct format as described in CA-signed certificate file
format.

CA-signed certificate file format

If you received a signed certificate from a corporate CA, the format is host certificate > intermediate certificate > root
certificate, as shown below. The root certificate file should be included so that clients can import it.

NOTE: There is no text between the end of each certificate and the beginning of the next certificate.

——BEGIN CERTIFICATE——
host certificate
——END CERTIFICATE——
——BEGIN CERTIFICATE——
intermediate certificate
——END CERTIFICATE——
——BEGIN CERTIFICATE——
root certificate
——END CERTIFICATE——

If you received a signed certificate from a public CA, including the root certificate file is not required because it is installed
on the client. The certificate file format is host certificate > intermediate certificate, as shown below. (Note there is no text
between the end of the host certificate and the beginning of the intermediate certificate.)

——BEGIN CERTIFICATE——
host certificate
——END CERTIFICATE——
——BEGIN CERTIFICATE——
intermediate certificate
——END CERTIFICATE——
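Before uploading a combined certificate file, one way to check that the certificates are present and in the expected order is to print the subject and issuer of each certificate in the bundle. This is a sketch; the file name server_chain.crt is a placeholder:

openssl crl2pkcs7 -nocrl -certfile server_chain.crt | openssl pkcs7 -print_certs -noout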

Upload a certificate
You can upload management or data certificates to ECS. Whichever type of certificate that you upload, you must authenticate
with the API.
● Authenticate with the ECS Management REST API
● Upload a management certificate
● Upload a data certificate for data access endpoints

Authenticate with the ECS Management REST API


To run ECS Management REST API commands, you must first authenticate with the API service and obtain an authentication
token.

Steps
Authenticate with the ECS Management REST API and obtain an authentication token that can be used when using the API to
upload or verify certificates.
a. Run the following command:

export TOKEN=`curl -s -k -v -u <root user>:<root user password> https://<management


IP>:4443/login 2>&1 | grep X-SDS-AUTH-TOKEN | awk '{print $2, $3}'`

The username and password are those used to access the ECS Portal. The <management IP> is the public IP address of an ECS node.

b. Verify the token exported correctly.

echo $TOKEN

Example output:

X-SDS-AUTH-TOKEN:
BAAcTGZjUjJ2Zm1iYURSUFZzKzhBSVVPQVFDRUUwPQMAjAQASHVybjpzdG9yYWdlb3M6VmlydHVhbERhdGFDZW
50ZXJEYXRhOjcxYjA1ZTgwLTNkNzktND
dmMC04OThhLWI2OTU4NDk1YmVmYgIADTE0NjQ3NTM2MjgzMTIDAC51cm46VG9rZW46YWMwN2Y0NGYtMjE5OS00
ZjA4LTgyM2EtZTAwNTc3ZWI0NDAyAgAC
0A8=

Upload a management certificate


You can upload a management certificate which is used to authenticate access to management endpoints, such as the ECS
Portal and the ECS Management REST API.

Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN) as
described in Authenticate with the ECS Management REST API.
● Ensure that the machine that you use has a REST client (such as curl) and can access the ECS nodes using the ECS
Management REST API.
● Ensure that your private key and certificate are available on the machine from which you intend to perform the upload.

Steps
1. Ensure that your private key does not have a passphrase.
If it does, you can create a copy with the passphrase stripped, by typing the following command:

openssl rsa -in server.key -out server_nopass.key

2. Upload the keystore for the management path using your private key and signed certificate.
Using curl:

curl -svk -H "$TOKEN" -H "Content-type: application/


xml" -H "X-EMC-REST-CLIENT: TRUE" -X PUT
-d "<rotate_keycertchain><key_and_certificate><private_key>`cat privateKeyFile`</
private_key><certificate_chain>`cat certificateFile`</certificate_chain></
key_and_certificate></rotate_keycertchain>"https://<Management IP>:4443/vdc/keystore"

The privateKeyFile, for example <path>/server_nopass.key, and certificateFile, for example <path>/
server.crt, must be replaced with the path to the key and certificate files.
3. Log in to one of the ECS nodes as the security admin user.
4. Verify that the MACHINES file has all nodes in it.
The MACHINES file is used by ECS wrapper scripts that perform commands on all nodes, such as viprexec.
The MACHINES file is in /home/admin.
a. Display the contents of the MACHINES file.

cat /home/admin/MACHINES

b. If the MACHINES file does not contain all nodes, recreate it.

getrackinfo -c MACHINES

Verify that the MACHINES file now contains all nodes.


NOTE:
● Review the ~/MACHINES file and check that it contains the private.4 IP addresses of all nodes from all racks of the VDC.

● The following check detects and reports inconsistency in xDoctor:

XDOC-1986 |Check consistency of MACHINES file across /home/admin and /root and
getrackinfo -c

5. Restart the objcontrolsvc and nginx services once the management certificates are applied.
a. Restart the object service.

viprexec -f ~/MACHINES -i 'pidof objcontrolsvc;


kill `pidof objcontrolsvc`; sleep 60; pidof objcontrolsvc'

b. Restart the nginx service.

viprexec -i -c "/etc/init.d/nginx restart;sleep 60;/etc/init.d/nginx status"

Next steps
You can verify that the certificate has uploaded correctly using the following procedure: Verify the management certificate.

Upload a data certificate for data access endpoints


You can upload a data certificate which is used to authenticate access for the S3, EMC Atmos, or OpenStack Swift protocols.

Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN). See
Authenticate with the ECS Management REST API.
● Ensure that the machine that you use has a suitable REST client (such as curl) and can access the ECS nodes using the ECS
Management REST API.
● Ensure that your private key and certificate are available on the machine from which you intend to perform the upload.

Steps
1. Ensure that your private key does not have a pass phrase.
If it does, you can create a copy with the pass phrase stripped, using:

openssl rsa -in server.key -out server_nopass.key

2. NOTE:
● Sometimes when the API call is copied, the hyphen (-) is dropped from object-cert. To avoid errors, review the API
path before using it.
● You can use a single certificate that covers the nodes of more than one VDC.

Upload the keystore for the data path using your private key and signed certificate.

curl -svk -H "$TOKEN" -H "Content-type: application/


xml" -H "X-EMC-REST-CLIENT: TRUE" -X PUT
-d "<rotate_keycertchain><key_and_certificate><private_key>`cat privateKeyFile`</
private_key><certificate_chain>`cat certificateFile`</certificate_chain></
key_and_certificate></rotate_keycertchain>"https://<Management IP>:4443/object-cert/
keystore

The privateKeyFile, for example <path>/server_nopass.key, and certificateFile, for example <path>/
server.crt, must be replaced with the path to the key and certificate files.
3. The certificate is distributed when the dataheadsvc service is restarted. You can do this with the commands below.

NOTE: You do not have to restart the services when changing the data certificate; dataheadsvc is restarted
automatically on each node two hours after the certificate update.

viprexec -i 'pidof dataheadsvc; sudo kill -9 `pidof dataheadsvc`; sleep 60; pidof
dataheadsvc'

Next steps
You can verify that the certificate has correctly uploaded using the following procedure: Verify the object certificate.

Add custom LDAP certificate


You can add a custom LDAP certificate for ECS authentication.

Prerequisites
● This operation requires the System Administrator role in ECS.
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN) as
described in Authenticate with the ECS Management REST API.
● Ensure that the machine that you use has a suitable REST client (such as curl) and can access the ECS nodes using the ECS
Management REST API.
● Ensure your private key and certificate are available on the machine from which you intend to perform the upload.

Steps
1. To add a certificate to the TrustStore, use the following command. For one way to build the file1.json payload from a PEM file, see the sketch after these steps.

curl -s -k -X PUT -H Content-Type:application/json -H "X-SDS-AUTH-TOKEN:
BAAcNThsbWVXbGFLUGhZQ0pXOTZ4azNOeFNsZkhBPQMAjAQASHVybjpzdG9yYWdlb3M6Vmly
dHVhbERhdGFDZW50ZXJEYXRhOjU1NDEyZjdmLWQyNGItNGU1Ni05MGM1LWRmZWVmZGUwMTNi
ZgIADTE1NjM4OTM1NjE2NzIDAC51cm46VG9rZW46ZmE3NzIyNzktODg0Yi00NjAxLTg1MWUt
YjkyYjQ4YjAwOGMwAgAC0A8=" -H ACCEPT:application/json
https://10.243.83.105:4443/vdc/truststore -d @file1.json

Request Payload JSON example:

{
"add": [
"-----BEGIN CERTIFICATE-----\nMI7FS8J...DF=r\n-----END CERTIFICATE-----"
]
}

Response Payload JSON example:

{"certificate":["-----BEGIN CERTIFICATE-----\nMIIDdzCCAl+gAwIBAgIQU
+WFap1wZplFATLD4CWbnTANBgkqhkiG9w0BAQUFADBO\r
\nMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxFjAUBgoJkiaJk/IsZAEZFgZoYWRvb3Ax\r
\nHTAbBgNVBAMTFGhhZG9vcC1OSUxFMy1WTTQzLUNBMB4XDTE1MDkwNzEzMDA0MFoX\r
\nDTIwMDkwNzEzMTAzOVowTjEVMBMGCgmSJomT8ixkARkWBWxvY2FsMRYwFAYKCZIm\r
\niZPyLGQBGRYGaGFkb29wMR0wGwYDVQQDExRoYWRvb3AtTklMRTMtVk00My1DQTCC\r
\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOqoxHBtrND7CiHQvHXSDdKy\r
\nxyZv6qK0BjlQKlQR2qiCjfOC3By9b8cSvzVFo6mdDiQurxPjlz5JLALfbIMxcslN\r
\nBvDkzn9tzzspbYSLyRqOyMxe4F+Bo9Hm8nGLtZU6liLBglPgrSt77Qvi6pAU0EjN\r
\nNZ3ZqBYZcmx/rD3iCeHojcl/P4UDy4lbCb3l7w6GbrczGRimitkFiriD3kUtkXyw\r
\nMM4L+ZY1j8o6WXSfCMhX0nX8OCrSIukMyZKCreeUQg4xykSp6GhIB74I6R6gIAh0\r
\nFOqqLsRNjMRjEhWpVXB7tTW74E3DgVwe2PF/3aL1i9sx90UekZREhA3L1sKKm10C\r
\nAwEAAaNRME8wCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYE\r
\nFDRCvMyr1H52IuQtDpsfnj4PbBNwMBAGCSsGAQQBgjcVAQQDAgEAMA0GCSqGSIb3\r
\nDQEBBQUAA4IBAQAXPE0Agbjuhi02Yufz0UtUBQYcAHsRCwxrLKrvnz1UBqA3wU87\r
\nbUQDppQ7e3Iz5SJ9PZ/3nUVQyMgnI5NS1UP6Hn/j9jVlaAB4MKXzgXdBCIDUOtzh\r
\nBZuvlz6FDjBbBSyrAk3LVnqSC2DNYVPbyRrUBHQxWnYT2FuIMQeGDb/rjtWALkvb\r
\n7sTsLKiGHwkwmeYyrsiUpzoyarw21EWxLRRrMfNX1/CrGg883k0mYuEYgOaFmoi0\r
\nRWZBz2NE10V5yorVniuiql/Tvbi3gPYLhy3DFMO4mjh9eSgcakYCNsQFc5msmu4Y\r
\nG6ab3ChgU6kVF5sIEv/wXvyId8X2uLoa8Wcj\r\n-----END CERTIFICATE-----"]}

2. To get certificate from TrustStore, use:

curl -s -k -X GET -H Content-Type:application/json -H "X-SDS-AUTH-TOKEN:
BAAcNThsbWVXbGFLUGhZQ0pXOTZ4azNOeFNsZkhBPQMAjAQASHVybjpzdG9yYWdlb3M6Vmly
dHVhbERhdGFDZW50ZXJEYXRhOjU1NDEyZjdmLWQyNGItNGU1Ni05MGM1LWRmZWVmZGUwMTNi
ZgIADTE1NjM4OTM1NjE2NzIDAC51cm46VG9rZW46ZmE3NzIyNzktODg0Yi00NjAxLTg1MWUt
YjkyYjQ4YjAwOGMwAgAC0A8=" -H ACCEPT:application/json
https://10.243.83.105:4443/vdc/truststore

Response Payload JSON example:

{"certificate":["-----BEGIN CERTIFICATE-----\nMIIDdzCCAl+gAwIBAgIQU
+WFap1wZplFATLD4CWbnTANBgkqhkiG9w0BAQUFADBO\r
\nMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxFjAUBgoJkiaJk/IsZAEZFgZoYWRvb3Ax\r
\nHTAbBgNVBAMTFGhhZG9vcC1OSUxFMy1WTTQzLUNBMB4XDTE1MDkwNzEzMDA0MFoX\r
\nDTIwMDkwNzEzMTAzOVowTjEVMBMGCgmSJomT8ixkARkWBWxvY2FsMRYwFAYKCZIm\r
\niZPyLGQBGRYGaGFkb29wMR0wGwYDVQQDExRoYWRvb3AtTklMRTMtVk00My1DQTCC\r
\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOqoxHBtrND7CiHQvHXSDdKy\r
\nxyZv6qK0BjlQKlQR2qiCjfOC3By9b8cSvzVFo6mdDiQurxPjlz5JLALfbIMxcslN\r
\nBvDkzn9tzzspbYSLyRqOyMxe4F+Bo9Hm8nGLtZU6liLBglPgrSt77Qvi6pAU0EjN\r
\nNZ3ZqBYZcmx/rD3iCeHojcl/P4UDy4lbCb3l7w6GbrczGRimitkFiriD3kUtkXyw\r
\nMM4L+ZY1j8o6WXSfCMhX0nX8OCrSIukMyZKCreeUQg4xykSp6GhIB74I6R6gIAh0\r
\nFOqqLsRNjMRjEhWpVXB7tTW74E3DgVwe2PF/3aL1i9sx90UekZREhA3L1sKKm10C\r
\nAwEAAaNRME8wCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYE\r
\nFDRCvMyr1H52IuQtDpsfnj4PbBNwMBAGCSsGAQQBgjcVAQQDAgEAMA0GCSqGSIb3\r
\nDQEBBQUAA4IBAQAXPE0Agbjuhi02Yufz0UtUBQYcAHsRCwxrLKrvnz1UBqA3wU87\r
\nbUQDppQ7e3Iz5SJ9PZ/3nUVQyMgnI5NS1UP6Hn/j9jVlaAB4MKXzgXdBCIDUOtzh\r
\nBZuvlz6FDjBbBSyrAk3LVnqSC2DNYVPbyRrUBHQxWnYT2FuIMQeGDb/rjtWALkvb\r
\n7sTsLKiGHwkwmeYyrsiUpzoyarw21EWxLRRrMfNX1/CrGg883k0mYuEYgOaFmoi0\r
\nRWZBz2NE10V5yorVniuiql/Tvbi3gPYLhy3DFMO4mjh9eSgcakYCNsQFc5msmu4Y\r
\nG6ab3ChgU6kVF5sIEv/wXvyId8X2uLoa8Wcj\r\n-----END CERTIFICATE-----"]}

3. To delete certificate from TrustStore, use:

curl -s -k -X PUT -H Content-Type:application/json -H "X-SDS-AUTH-TOKEN:
BAAcNThsbWVXbGFLUGhZQ0pXOTZ4azNOeFNsZkhBPQMAjAQASHVybjpzdG9yYWdlb3M6Vmly
dHVhbERhdGFDZW50ZXJEYXRhOjU1NDEyZjdmLWQyNGItNGU1Ni05MGM1LWRmZWVmZGUwMTNi
ZgIADTE1NjM4OTM1NjE2NzIDAC51cm46VG9rZW46ZmE3NzIyNzktODg0Yi00NjAxLTg1MWUt
YjkyYjQ4YjAwOGMwAgAC0A8=" -H ACCEPT:application/json
https://10.243.83.105:4443/vdc/truststore -d @file1.json

Request Payload JSON example:

{
"remove": [
"-----BEGIN CERTIFICATE-----\nMIIGWT...PvCr\n-----END CERTIFICATE-----",
]
}

Response Payload JSON example:

{"certificate":[]}

4. To set TrustStore settings, use:

curl -s -k -X PUT -H Content-Type:application/json -H "X-SDS-AUTH-TOKEN:
BAAcNThsbWVXbGFLUGhZQ0pXOTZ4azNOeFNsZkhBPQMAjAQASHVybjpzdG9yYWdlb3M6Vmly
dHVhbERhdGFDZW50ZXJEYXRhOjU1NDEyZjdmLWQyNGItNGU1Ni05MGM1LWRmZWVmZGUwMTNi
ZgIADTE1NjM4OTM1NjE2NzIDAC51cm46VG9rZW46ZmE3NzIyNzktODg0Yi00NjAxLTg1MWUt
YjkyYjQ4YjAwOGMwAgAC0A8=" -H ACCEPT:application/json
https://10.243.83.105:4443/vdc/truststore/settings -d @truststoresettings.json

Request Payload JSON example:

{
"accept_all_certificates": ""
}

Response Payload JSON example:

{"accept_all_certificates":true}

5. To get TrustStore settings, use:

curl -s -k -X GET -H Content-Type:application/json -H "X-SDS-AUTH-TOKEN:
BAAcNThsbWVXbGFLUGhZQ0pXOTZ4azNOeFNsZkhBPQMAjAQASHVybjpzdG9yYWdlb3M6Vmly
dHVhbERhdGFDZW50ZXJEYXRhOjU1NDEyZjdmLWQyNGItNGU1Ni05MGM1LWRmZWVmZGUwMTNi
ZgIADTE1NjM4OTM1NjE2NzIDAC51cm46VG9rZW46ZmE3NzIyNzktODg0Yi00NjAxLTg1MWUt
YjkyYjQ4YjAwOGMwAgAC0A8=" -H ACCEPT:application/json
https://10.243.83.105:4443/vdc/truststore/settings

Response Payload JSON example:

{"accept_all_certificates":true}

Verify installed certificates


The object certificate and management certificate each has an ECS Management REST API GET request to retrieve the
installed certificate.
● Verify the management certificate
● Verify the object certificate

Verify the management certificate


You can retrieve the installed management certificate using the ECS Management REST API.

Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN). See
Authenticate with the ECS Management REST API.
● If you have restarted services, the certificate is available immediately. Otherwise, you must wait two hours to be sure that
the certificate is propagated to all nodes.

About this task

Steps
1. Use the GET /vdc/keystore method to return the certificate.
Using the curl tool, the method can be run by typing the following:

curl -svk -H "$TOKEN" https://x.x.x.x:4443/vdc/keystore

<?xml version="1.0" encoding="UTF-8" standalone="yes"?><certificate_chain><chain>


-----BEGIN CERTIFICATE-----
MIIDgjCCAmoCCQCEDeNwcGsttTANBgkqhkiG9w0BAQUFADCBgjELMAkGA1UEBhMC&#xD;
VVMxCzAJBgNVBAgMAkdBMQwwCgYDVQQHDANBVEwxDDAKBgNVBAoMA0VNQzEMMAoG&#xD;
A1UECwwDRU5HMQ4wDAYDVQQDDAVjaHJpczEsMCoGCSqGSIb3DQEJARYdY2hyaXN0&#xD;
b3BoZXIuZ2hva2FzaWFuQGVtYy5jb20wHhcNMTYwNjAxMTg0MTIyWhcNMTcwNjAy&#xD;
MTg0MTIyWjCBgjELMAkGA1UEBhMCVVMxCzAJBgNVBAgMAkdBMQwwCgYDVQQHDANB&#xD;
VEwxDDAKBgNVBAoMA0VNQzEMMAoGA1UECwwDRU5HMQ4wDAYDVQQDDAVjaHJpczEs&#xD;
MCoGCSqGSIb3DQEJARYdY2hyaXN0b3BoZXIuZ2hva2FzaWFuQGVtYy5jb20wggEi&#xD;
MA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDb9WtdcW5HJpIDOuTB7o7ic0RK&#xD;
dwA4dY/nJXrk6Ikae5zDWO8XH4noQNhAu8FnEwS5kjtBK1hGI2GEFBtLkIH49AUp&#xD;
c4KrMmotDmbCeHvOhNCqBLZ5JM6DACfO/elHpb2hgBENTd6zyp7mz/7MUf52s9Lb&#xD;
x5pRRCp1iLDw3s15iodZ5GL8pRT62puJVK1do9mPfMoL22woR3YB2++AkSdAgEFH&#xD;
1XLIsFGkBsEJObbDBoEMEjEIivnTRPiyocyWki6gfLh50u9Y9B2GRzLAzIlgNiEs&#xD;
L/vyyrHcwOs4up9QqhAlvMn3Al01VF+OH0omQECSchBdsc/R/Bc35FAEVdmTAgMB&#xD;
AAEwDQYJKoZIhvcNAQEFBQADggEBAAyYcvJtEhOq+n87wukjPMgC7l9n7rgvaTmo&#xD;

tzpQhtt6kFoSBO7p//76DNzXRXhBDADwpUGG9S4tgHChAFu9DpHFzvnjNGGw83ht&#xD;
qcJ6JYgB2M3lOQAssgW4fU6VD2bfQbGRWKy9G1rPYGVsmKQ59Xeuvf/cWvplkwW2&#xD;
bKnZmAbWEfE1cEOqt+5m20qGPcf45B7DPp2J+wVdDD7N8198Jj5HJBJt3T3aUEwj&#xD;
kvnPx1PtFM9YORKXFX2InF3UOdMs0zJUkhBZT9cJ0gASi1w0vEnx850secu1CPLF&#xD;
WB9G7R5qHWOXlkbAVPuFN0lTav+yrr8RgTawAcsV9LhkTTOUcqI=&#xD;
-----END CERTIFICATE-----</chain></certificate_chain>

2. You can verify the certificate using openssl on all nodes.

openssl s_client -showcerts -connect <node_ip>:<port>

NOTE: The management port is 4443.

For example:

openssl s_client -showcerts -connect 10.1.2.3:4443

Verify the object certificate


You can retrieve the installed object certificate using the ECS Management REST API.

Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN). See
Authenticate with the ECS Management REST API.
● If you have restarted services, the certificate will be available immediately. Otherwise, you need to wait two hours to be sure
that the certificate has propagated to all nodes.

Steps
1. Use the GET /object-cert/keystore method to return the certificate.
Using the curl tool, the method can be run by typing the following:

curl -svk -H "$TOKEN" https://x.x.x.x:4443/object-cert/keystore

2. You can verify the certificate using openssl on all nodes.

openssl s_client -showcerts -connect <node_ip>:<port>

NOTE: Ports are: s3: 9021, Atmos: 9023, Swift: 9025

Example:

openssl s_client -showcerts -connect 10.1.2.3:9021

Reset the object certificate


You can reset the object certificate using the ECS Management REST API. If there is an issue with the updated object
certificate, you may need to reset the object certificate back to the default unsigned certificate.

Steps
Use the following ECS Management REST API call to reset the object certificate back to the default unsigned certificate.

curl -svk -H "X-SDS-AUTH-TOKEN:$TOKEN" -H "Content-type: application/xml" -H


"X-EMCREST-CLIENT: TRUE" -X PUT -d "<rotate_keycertchain><system_selfsigned>true</
system_selfsigned></rotate_keycertchain>" https://`hostname -f`:4443/object-cert/keystore

12
ECS Settings
Topics:
• Introduction to ECS settings
• Object base URL
• Key Management
• External Key Manager Configuration
• Key rotation
• Secure Remote Services
• Alert policy
• Event notification servers
• Platform locking
• Licensing
• Security
• About this VDC
• Object version limitation settings

Introduction to ECS settings


This section describes the settings that the System Administrator can view and configure in the Settings section of the ECS
Portal. These settings include:
● Object base URL
● Key Management
● ESRS
● Alert policy
● Event notification
● Platform locking
● Licensing
● Security
● About this VDC

Object base URL


ECS supports Amazon S3 compatible applications that use virtual host style and path style addressing schemes. In multitenant
configurations, ECS enables the namespace to be provided in the URL.
The base URL is used as part of the object address where virtual host style addressing is used and enables ECS to know which
part of the address refers to the bucket and, optionally, namespace.
For example, if you are using an addressing scheme that includes the namespace so that you have addresses of the form
mybucket.mynamespace.mydomain.com, you must tell ECS that mydomain.com is the base URL so that ECS identifies
mybucket.mynamespace as the bucket and namespace.
By default, the base URL is set to s3.amazonaws.com.
An ECS System Administrator can add a base URL by using the ECS Portal or by using the ECS Management REST API.
The following topics describe the addressing schemes that are supported by ECS, how ECS processes API requests from S3
applications, how the addressing scheme affects DNS resolution, and how to add a base URL in the ECS Portal.
● Bucket and namespace addressing
● DNS configuration
● Add a Base URL

Bucket and namespace addressing


When an S3 compatible application makes an API request to perform an operation on an ECS bucket, ECS can identify the
bucket in several ways.
For authenticated API requests, ECS infers the namespace by using the namespace that the authenticated user is a member of.
To support anonymous, unauthenticated requests that require CORS support or anonymous access to objects, you must include
the namespace in the address so that ECS can identify the namespace for the request.
When the user scope is NAMESPACE, the same user ID can exist in multiple namespaces (for example, namespace1/user1 and
namespace2/user1). Therefore, you must include the namespace in the address. ECS cannot infer the namespace from the user
ID.
Namespace addresses require wildcard DNS entries (for example, *.ecs1.yourco.com) and also wildcard SSL certificates to
match if you want to use HTTPS. Non-namespace addresses and path style addresses do not require wildcards since there is
only one hostname for all traffic. If you use non-namespace addresses with virtual host style buckets, you will still need wildcard
DNS entries and wildcard SSL certificates.
You can specify the namespace in the x-emc-namespace header of an HTTP request. ECS also supports extraction of the
location from the host header.

Virtual host style addressing

In the virtual host style addressing scheme, the bucket name is in the hostname. For example, you can access the bucket named
mybucket on host ecs1.yourco.com using the following address:
http://mybucket.ecs1.yourco.com
You can also include a namespace in the address.
Example: mybucket.mynamespace.ecs1.yourco.com
To use virtual host style addressing, you must configure the base URL in ECS so that ECS can identify which part of the URL is
the bucket name. You must also ensure that the DNS system is configured to resolve the address. For more information on DNS
configuration, see DNS configuration
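For illustration only (the host names are the same placeholders used above, and a real request would also need S3 authentication headers), a virtual host style request carries the bucket, and optionally the namespace, in the Host header:

GET / HTTP/1.1
Host: mybucket.mynamespace.ecs1.yourco.com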

Path style addressing

In the path style addressing scheme, the bucket name is added to the end of the path.
Example: ecs1.yourco.com/mybucket
You can specify a namespace by using the x-emc-namespace header or by including the namespace in the path style address.
Example: mynamespace.ecs1.yourco.com/mybucket

ECS address processing

When ECS processes a request from an S3 compatible application to access ECS storage, ECS performs the following actions:
1. Try to extract the namespace from the x-emc-namespace header. If found, skip the following steps and process the
request.
2. Get the hostname of the URL from the host header and check if the last part of the address matches any of the configured
base URLs.
3. Where there is a BaseURL match, use the prefix part of the hostname (the part that is left when the base URL is removed),
to obtain the bucket location.
The following examples demonstrate how ECS handles incoming HTTP requests with different structures:
NOTE: When you add a base URL to ECS, you can specify if your URLs contain a namespace in the Use base URL with
Namespace field on the New Base URL page in the ECS Portal. This tells ECS how to treat the bucket location prefix. For
more information, see Add a Base URL

Example 1: Virtual Host Style Addressing, Use base URL with Namespace is enabled

Host: baseball.image.yourco.finance.com
BaseURL: finance.com
Use BaseURL with namespace enabled

Namespace: yourco
Bucket Name: baseball.image

Example 2: Virtual Host Style Addressing, Use base URL with Namespace is disabled

Host: baseball.image.yourco.finance.com
BaseURL: finance.com
Use BaseURL without namespace enabled

Namespace: null (Use other methods to determine namespace)


Bucket Name: baseball.image.yourco

Example 3: ECS treats this request as a path style request

Host: baseball.image.yourco.finance.com
BaseURL: not configured

Namespace: null (Use other methods to determine namespace.)


Bucket Name: null (Use other methods to determine the bucket name.)

DNS configuration
In order for an S3 compatible application to access ECS storage, you must ensure that the URL resolves to the address of the
ECS data node, or the data node load balancer.
If your application uses path style addressing, you must ensure that your DNS system can resolve the address. For example,
if your application issues requests in the form ecs1.yourco.com/bucket, you must have a DNS entry that resolves
ecs1.yourco.com to the IP address of the load balancer that is used for access to the ECS nodes. If your application is
configured to talk to Amazon S3, the URI is in the form s3-eu-west-1.amazonaws.com.
If your application uses virtual host style addressing, the URL includes the bucket name and can include a namespace. Under
these circumstances, you must have a DNS entry that resolves the virtual host style address by using a wildcard in the DNS
entry. This also applies where you are using path style addresses that include the namespace in the URL.
For example, if the application issues requests in the form mybucket.ecs1.yourco.com, you must have the following DNS
entries:
● ecs1.yourco.com
● *.ecs1.yourco.com
If the application is previously connected to the Amazon S3 service using mybucket.s3.amazonaws.com, you must have the
following DNS entries:
● s3.amazonaws.com
● *.s3.amazonaws.com
These entries resolve the virtual host style bucket address and the base name when you issue service-level commands (for
example, list buckets).
If you create an SSL certificate for the ECS S3 service, it must have the wildcard entry on the name of the certificate and the
non wildcard version as a Subject Alternate Name.
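As a sketch only (the syntax shown is BIND-style, and the host names and address are placeholders; your DNS product's syntax may differ), the records for the ecs1.yourco.com example might look like this:

ecs1.yourco.com.     IN  A  203.0.113.10   ; load balancer or data node address
*.ecs1.yourco.com.   IN  A  203.0.113.10   ; wildcard for virtual host style bucket addresses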

Add a Base URL
You must add a base URL if you use object clients that encode the location of an object, its namespace (optional), and bucket in
a URL.

Prerequisites
This operation requires the System Administrator role in ECS.

About this task


Ensure that the domain specified in a request that uses a URL to specify an object location resolves to the location of the ECS
data node or a load balancer that sits in front of the data nodes.

Steps
1. In the ECS Portal, select Settings > Object Base URL.
The Base URL Management page is displayed with a list of base URLs. The Use with Namespace property indicates
whether your URLs include a namespace.
2. On the Base URL Management page, click New Base URL.
3. On the New Base URL page, in the Name field, type the name of the base URL.
The name provides a label for this base URL in the list of base URLs on the Base URL Management page.
4. In the Base URL field, type the base URL.
If your object URLs are in the form https://mybucket.mynamespace.acme.com (that is,
bucket.namespace.baseurl ) or https://mybucket.acme.com (that is, bucket.baseurl), the base URL would
be acme.com.
5. In the Use base URL with Namespace field, click Yes if your URLs include a namespace.
6. Click Save.
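The ECS Management REST API can also be used to add a base URL. The following curl sketch assumes the /object/baseurl resource and the payload element names shown here; verify them against the ECS Management REST API reference for your release, and substitute your own management node address and authentication token:

curl -svk -H "X-SDS-AUTH-TOKEN: $TOKEN" -H "Content-Type: application/xml" \
  -X POST -d "<base_url_create_param><name>MyBaseURL</name><base_url>acme.com</base_url><is_namespace_in_host>true</is_namespace_in_host></base_url_create_param>" \
  https://x.x.x.x:4443/object/baseurl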

Key Management
As a part of Data at Rest Encryption (D@RE), ECS supports centralized external key managers. The centralized external key
managers are compliant with the Key Management Interoperability Protocol (KMIP), which enhances enterprise-grade security
in the system. It also enables customers to use centralized key servers to store top-level Key Encrypting Keys (KEKs), which
provides the following benefits:
● Takes advantage of Hardware Security Module (HSM) based key production and the latest encryption technology that is
provided by the specialized key management servers.
● Provides protection against loss of the entire appliance by storing top-level key information outside of the appliance.
ECS incorporates the KMIP standard for integration with external key managers and serves as a KMIP client, and supports the
following:
● Supports the Gemalto Safenet v8.9, Thales CipherTrust Manager 2.5.2, and IBM SKLM v3.01 (Security Key Lifecycle
Manager) key managers.

NOTE: The key manager supported versions are determined by Dell EMC's Key-Trust-Platform (KTP) client.

● Supports the use of top-level KEK (master key) supplied by an external key manager.
● Supports rotation of top-level KEK (master key) supplied by an external key manager.

Activating external key management

Prerequisites
● Ensure that you are the security administrator or have the credentials to log in as an administrator.
● Ensure that you complete the following steps before the activation.

Steps
1. Create an EKM Cluster representing the key management cluster. For more information, see Create cluster.
2. Create EKM Servers, each representing a member of the external key management cluster. For more information, see
External key servers
3. Map a set of the EKMServers to each VDC. For more information, see Add VDC to EKM Server Mapping.
● By default, ECS requires that there are at least two EKM Servers that are mapped per VDC.
● When mapping, the first EKMServer in the list is considered the primary, which is the server that is expected to handle
key creations and retrieval.
● The other EKMServers are considered secondaries and are used as a backup for key retrieval in case the primary is
unreachable or unavailable.
Once the EKMServers have been mapped, a background process is run to validate connectivity from each VDC. If all mapped
primary EKMServers are reachable, activate the EKMCluster, either through the UI or the API.

Results
Invoking the EKMCluster activation operation triggers a background task that runs the activation steps. These steps can be
observed through the UI. Upon completion of the activation steps, the following are done:
● Master key created on external key manager
● Master key retrieval validated against all external key members
● Internal reference to the new Master Key is updated
● Rotation key created on external key manager
● Rotation key retrieval validated against all external key members.
● Internal reference to the new Rotation key is updated
● All namespace keys are re-protected using the virtual master key, which now references the new rotation key.

External Key Manager Configuration


This section provides you with information about External Key Management properties.
System Administrators can add a cluster, view VDC EKM-mapping information, and rotate keys on the Settings > Key
Management page in the ECS Portal.

Table 46. Key Management properties


Field Description
Cluster Name Name of the cluster
Cluster Type Vendor Type
Server Count Total number of servers that have been created for the cluster.
Status Indicates the status of the cluster. When first created, it is in the 'UNACTIVATED' status.
When activation is performed, the status changes to match the step in the activation process.
FQDN/IP FQDN or IP address of the EKM Server
Server Host Server host is provided in the certificate that is used to identify the client associated with the
identity store.
Port Port number that is associated with the KMIP server. The port number is used for
communicating between ECS and the external key server. The default is 5696.
Import Server Certificate Import Server Certificate is associated with the key server that is presented to ECS for
validation.
Import Revocation Certificate Compromised certificate that is not accepted (can be an empty file).
Import Identity Store Client certificate, signed by server and encrypted into .p12 file.
Identity Store Password Identity store certificate password.
Username Provide the following values:
● Thales CipherTrust Manager: The username must match the username that is defined on the Thales
CipherTrust Manager server
● Gemalto (SafeNet) KeySecure: The username must match the username that is defined on the Gemalto
key server
● IBM Secure Key Lifecycle Manager (SKLM): Optional field
Password Provide the following values:
● Thales CipherTrust Manager: The password must match the password that is defined on the Thales
CipherTrust Manager server
● Gemalto (SafeNet) KeySecure: Password for the client that is defined on the Gemalto key
server
● IBM Secure Key Lifecycle Manager (SKLM): Optional field
Device Serial Number Provide the following values:
● Thales CipherTrust Manager: Optional field
● Gemalto (SafeNet) KeySecure: Optional field
● IBM Secure Key Lifecycle Manager (SKLM): Device serial number
Device ID Provide the following values:
● Thales CipherTrust Manager: Optional field
● Gemalto (SafeNet) KeySecure: Device ID
● IBM Secure Key Lifecycle Manager (SKLM): Optional field for SKLM

Create a cluster
From the Key Management External Key Servers page, you can create a cluster and then create external key servers.

About this task

Table 47. Create a cluster


Field Description
Cluster Name of the cluster
Type Vendor Type
Server Count Total number of servers that have been created for the cluster.
Status This field indicates the status of the cluster. When first created, it is in the
'UNACTIVATED' status. When activation is performed, the status changes
to match the step in the activation process.
FQDN/IP FQDN or IP address of the EKM Server
Server Host Server host is provided in the certificate that is used to identify the client
associated with the identity store.
Port Port number that is associated with the KMIP server. The port number
is used for communicating between ECS and the external key server. The
default is 5696.
Actions ● Edit - Edit the cluster name.
● Delete - Delete inactive cluster.
● Add Server - Add External Key Server to the cluster.
● Activate - Activate the cluster.

Steps
1. Select Settings > Key Management > External Key Servers > New Cluster.
2. In the Cluster Name field, type a unique name for the cluster.

3. In the External Key Management Type field, select the vendor type from the drop-down menu.
4. Click Save.
After creating a cluster and before VDC EKM-Mapping, add key servers.
NOTE: Only one cluster can be created and used per ECS federation.

Migrating external key management


Learn how to change the EKM server and cluster on which you are managing your ECS encryption keys and migrate the files to
the new server and cluster.

Prerequisites
● Ensure that you are the security administrator or have the credentials to log in as an administrator.
● Ensure that you complete the following steps before the activation.

Steps
1. Deactivate the existing KeySecure in ECS using Dtquery API.
Contact Customer Support to deactivate the existing KeySecure in ECS.
2. Remove the deactivated EKM server.
Update VDC to EKM Server Mapping provides information.
Remove the deactivated EKM cluster.
Change EKM Cluster status provides information.
4. Create an EKM Cluster representing the key management cluster.
Create cluster provides information.
5. Create EKM Servers, each representing a member of the external key management cluster.
New External Key Servers provides information.
6. Map a set of the EKMServers to each VDC.
Update VDC to EKM Server Mapping provides information.
● By default, ECS requires that there are at least two EKM Servers that are mapped per VDC.
● When mapping, the first EKMServer in the list is considered the primary, which is the server that is expected to handle
key creations and retrieval.
● The other EKMServers are considered secondaries and are used as a backup for key retrieval in case the primary is
unreachable or unavailable.
Once the EKMServers have been mapped, a background process is run to validate connectivity from each VDC. If all mapped
primary EKMServers are reachable, activate the EKMCluster, either through the UI or the API.

Results
Invoking the EKMCluster activation operation triggers a background task that runs the activation steps. These steps can be
observed through the UI. Upon completion of the activation steps, the following are done:
● Master key created on external key manager
● Master key retrieval validated against all external key members
● Internal reference to the new Master Key is updated
● Rotation key created on external key manager
● Rotation key retrieval validated against all external key members.
● Internal reference to the new Rotation key is updated
● All namespace keys are re-protected using the virtual master key, which would now reference the new rotation key.

Add external key management servers to cluster
An external key management cluster identifies a set of external key servers that are configured as part of the cluster. External
key servers are the entities that ECS nodes contact to create/retrieve cryptographic keys. After a cluster has been created,
servers must be added, and then mapped to a VDC before activating the cluster.

New External Key Servers


About this task

Table 48. New external key servers


Field Description
Cluster Name Name of the cluster
Hostname/IP of EKM Server The Hostname/IP of EKM Server
Server Host Name Server Host Name is the server name that is provided in the certificate that is used to identify
the client associated with the identity store.
Port Port number that is associated with the KMIP server. The port number is used for
communicating between ECS and the external key server. The default is 5696.
Import Server Certificate Import Server Certificate is associated with the key server that is presented to ECS for
validation.
Import Revocation Certificate Compromised certificate that is not accepted (can be an empty file).
Import Identity Store Client certificate, signed by server and encrypted into .p12 file.
Identity Store Password Identity store certificate password.
EKM Type ● Thales CipherTrust Manager
● Gemalto (SafeNet) KeySecure
● IBM Secure Key Lifecycle Manager (SKLM)
Username Provide the following values:
● Thales CipherTrust Manager
● Gemalto (SafeNet) KeySecure: The username must match the username that is defined on the Gemalto
key server.
● IBM Secure Key Lifecycle Manager (SKLM)
Password Provide the following values:
● Thales CipherTrust Manager
● Gemalto (SafeNet) KeySecure: Password for the client that is defined on the Gemalto key
server.
● IBM Secure Key Lifecycle Manager (SKLM)

Steps
1. Select Settings > Key Management > External Key Servers > New External Key Server.
2. In the New External Key Server form, enter the Hostname/IP of the EKM Server.
3. Enter the Server Host Name.
This is the server name that is provided in the certificate that is used to identify the client associated with the identity store.
4. Enter the Port.
If different from the default 5696.
5. To import the server certificate that is associated with the key server and presented for ECS validation, click Browse.
6. To import the revocation certificate that is not accepted, click Browse.
It can be an empty file.
7. To import the Identity store, which is the client certificate that is signed by server and encrypted into the .p12 file, click
Browse.

8. Enter the identity store password.
9. Confirm the identity store password.
10. Enter the username for the key server.
The username must match the client that is defined on the key server.
NOTE: Username and password are optional for Gemalto and Thales CipherTrust. These fields are available when the
cluster type is either Gemalto or Thales CipherTrust.

Device Serial Number and Device ID are optional for SKLM. These fields are only available when the cluster type is
SKLM.

11. Enter the password for the key server.


12. Confirm the password.
13. Click Save.
NOTE: At least two key servers should be added before proceeding to VDC EKM Mapping. Activation is not enabled
without a minimum of two key servers. Once a key server is mapped, the delete option is disabled. You must remove
EKM Mapping to delete the key server.

Update VDC to EKM Server Mapping


VDC EKM mapping assigns a subset of the cluster's member servers to a VDC so that nodes in the VDC can use them to access
cryptographic keys.

About this task

Table 49. Key Management properties


Field Description
VDC All VDCs in the system
Number of Servers Number of servers mapped
External Key Manager FQDN/IP FQDN or IP address of the EKM Server
Port Port number that is associated with the KMIP server. The port number is used for
communicating between ECS and the external key server. The default is 5696.
Status When not expanded, the Status indicates the overall status of the mapping.

When expanded, each server displays its individual status.

Actions Edit the VDC Mapping to add, remove, and prioritize key servers.

Steps
1. Select Settings > Key Management > VDC EKM Mapping.
a. To expand a VDC and see details of its servers, click >.
2. Click the Edit button for the VDC.
3. Select servers from the Available table and move them to the Selected table using the actions described in the table above.
If you are adding servers, a minimum of two servers is required. The first server in the selected list is the primary EKM
Server for the VDC.
After servers are mapped to a VDC, the EKM Cluster has to be activated.
NOTE: After the VDC mappings are created, a background process will validate connectivity to all mapped servers per
VDC. The result of this server check is attached to each server per VDC.

Activate EKM Cluster
Steps
To activate the EKM cluster, select External Key Servers > Edit > Activate.
As part of cluster activation, key rotation is automatically initiated. You can check its status in the Key Rotation tab as
described in the Key rotation section.
NOTE: Activation can be initiated successfully only after the server check (noted in the Add VDC to EKM Server Mapping
section) marks all the primary servers with a status of Good.

Key rotation
This section provides information about ECS Key rotation and the limitations.
ECS supports key rotation, the practice of changing keys to limit the amount of data that is protected by any given key, in
line with industry standard practices. Rotation can be performed on demand both through the API and the user interface, and
is designed to minimize the risk from compromised keys.
During key rotation, the system does the following:
● Creates a control key natively or on the EKM (if activated).
● Creates a rotation key natively.
● Activates the new control and rotation keys across all sites in the federation.
● Once activated, the new control key is used to generate new virtual control key.
● Once activated, the new rotation key is used to generate new virtual bucket key.
● The new virtual control key is used to rewrap all rotation and namespace keys.
● The new virtual bucket key is used to protect all new object keys and associated new data.
● Rewrapped namespace keys are instrumental in protecting existing data.
● Data is not reencrypted as a result of key rotation.
To initiate the key rotation, select Settings > Key Management > Key Rotation > Rotate Keys.
NOTE: Rotation is an asynchronous operation, and the latest status of the current operation can be seen in the table. The
Rotate Keys table also lists the status of previous rotation operations.

Summary of Key Management Changes since ECS 3.6


● Key management is more robust: both the control key and the rotation key are now rotated during key rotation.
● Native and external key management follow the same workflow, except that when using EKM the control key is external.
● After 3.6, only one control key is used in EKM, whereas before 3.6 there were both control keys and rotation keys in EKM.
● Even when using EKM, all rotation keys are internal to ECS, which keeps management simple.
● The control key is cached for the lifetime of a service, whereas before ECS 3.6 it was evicted.
● Changes are made to protect keys even when they are in cache.
NOTE: The native and external key management workflow changed in 3.6. To find the changes that are related to 3.5,
see the 3.5 Security Guide.

Limitations
● Key rotation does not rotate namespace and bucket keys.
● Only one key rotation request can be active anytime and any other new request fails.
● The scope of the key rotation is at cluster level so all the new system encrypted objects are affected.
● Namespace or bucket level rotation is not supported.

Secure Remote Services
Secure Remote Services is the recommended way for your technical support professional to receive
notification of potential system issues. Secure Remote Services enables your technical support professional to troubleshoot
system errors remotely by analyzing logs.
System Administrators can view Secure Remote Services information, add a Secure Remote Services server, disable Secure
Remote Services call home alerts, test the Secure Remote Services dial home feature, and delete a Secure Remote Services
server on the Settings > ESRS > EMC Secure Remote Services Management page in the ECS Portal. The EMC Secure
Remote Services Management page contains the following information:

Table 50. Secure Remote Services properties


Field Description
FQDN/IP Dell Secure Connect Gateway FQDN or IP address.
Port Dell Secure Connect Gateway port (9443 by default)
VDC Serial Number The software ID of the VDC.
Status Can be one of the following:
● Connected - Secure Remote Services server has been successfully added to the VDC,
ECS is registered with Secure Remote Services, and heartbeat between Secure Remote
Services server and VDC succeeds.
● Disconnected - After the Secure Remote Services server connection with the VDC is
established, Secure Remote Services and the VDC are not able to communicate with each
other, and the heartbeat fails.
The mouse-over error message for the Disconnected status reads: KeepAlive failed:
reason: INVALID_CREDENTIALS

● Failed - During the process of adding a Secure Remote Services server to the VDC, a
connection cannot be established between the Secure Remote Services server and the
VDC due to invalid Secure Remote Services FQDN/IP, port, or user credential information.
○ When invalid Dell Secure Connect Gateway FQDN/IP or port information is entered in
the ECS Portal when adding a Secure Remote Services server, the mouse-over error
message for the Failed status reads: Failed to configure the esrs server,
reason: ESRS_CONNECTION_REFUSED
○ When invalid Dell Support credentials are entered in the ECS Portal when adding a
Secure Remote Services server, the mouse-over error message for the Failed status
reads: Failed to add device <VDC serial number> to esrs gw <dell
secure connect gateway IP>, reason: INVALID_CREDENTIALS
● Disabled - The System Administrator has disabled the Secure Remote Services connection
with the VDC. A System Administrator might choose to do this temporarily during a
planned maintenance activity to prevent flooding Secure Remote Services with alerts.
Test Dial Home Status Can be one of the following:
● Never Run
● Passed (with timestamp)
● Failed (with timestamp)

Secure Remote Services prerequisites


ECS requires Secure Remote Services Virtual Edition (VE) version 3.12 and later.
Before you add a Secure Remote Services server to ECS using the ECS Portal, you must:
● Activate the ECS license and upload it to the ECS Portal.
● Configure the Dell Secure Connect Gateway server at the physical site where the VDC is located.
The Dell Secure Connect Gateway server facilitates communication between ECS and the backend Secure Remote Services
server at Dell.

● Verify that the Activated Site ID defined in the ECS license is supported on the Dell Secure Connect Gateway server that is
used by ECS.
The Activated Site ID is the license site number for the physical site where ECS is installed. You can obtain the Activated
Site ID in the Activated Site column in the ECS Portal on the Settings > Licensing page. To verify the Activated Site ID is
supported on the Dell Secure Connect Gateway server:
1. Log in to the Dell Secure Connect Gateway web UI.
2. In the top menu bar, click Devices > Manage Device.
A list of ECS-managed devices, such as VDCs and racks, is displayed. At the top of the list, there is a row with the
heading Site ID: followed by a comma-separated list of Site ID numbers. For example: Site ID: 67520, 89645,
111489

3. If the Activated Site ID you obtained from the ECS Portal is:
○ Listed in the Site ID row at the top of the Managed Device list in the Dell Secure Connect Gateway web UI, it is
supported on the Dell Secure Connect Gateway server.
○ Not listed in the Site ID row at the top of the Managed Device list in the Dell Secure Connect Gateway web UI, you
must add it by clicking the Add SiteID button at the bottom of the page.
● Verify that you have full access (administrator) rights to the Activated Site ID and that the VDC Serial Number is associated
with the Activated Site ID.
You can obtain the VDC Serial Number on the Settings > Licensing page in the ECS Portal.
1. Go to Dell Support.
2. Log in and validate that you have full access rights.
If you can access this page, then you can use your user account credentials to configure Secure Remote Services.
3. To verify that the VDC Serial number is associated with the Activated Site ID, click the Install Base link near the center
of the page.
If you have access to multiple sites, select the appropriate site in the My Sites drop-down list.
4. In the search box, type the VDC Serial Number.
The VDC Serial Number is verified when it is displayed in the Product ID column in the table below the search box.

Add a Secure Remote Services Server


You can use the ECS Portal to add a Secure Remote Services server to an existing VDC.

Prerequisites
● In an ECS geo-federated system, you must add a Secure Remote Services server for each VDC in the system.
● If you already have a Secure Remote Services server that is enabled, you must delete it, then add the new server. You
cannot edit a Secure Remote Services server in this release.
● Review the ESRS prerequisites before performing this task in the ECS Portal.
● This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Settings > ESRS > New Server.
2. On the New ESRS Server page:
a. In the FQDN/IP field, type the Dell Secure Connect Gateway FQDN or IP address.
b. In the PORT field, type the Dell Secure Connect Gateway port (9443 by default).
c. In the Username field, type the login username that is used to interface with ECS support. This username is the same
login username that is used to log in to Dell Support.
d. In the Password field, type the password set up with the login username.
3. Click Save.
Server connectivity may take a few minutes to complete. To monitor the process, click the refresh button. Possible states of
transition are Processing, Connected, or Failed.
NOTE: If you receive an INVALID_CREDENTIALS error message, email support@emc.com with a description of the
issue, your account email, VDC serial number, and Activated Site ID.

4. If Secure Remote Services is configured with more than one gateway for high availability, repeat steps 1 to 3 to add
additional Dell Secure Connect gateway servers.

Serial numbers
Serial numbers are specific to clusters and cannot be shared among clusters. You cannot configure specific serial numbers using
the GUI. However, you can add them through fcli.
1. To configure ESRS on a cluster, choose a serial number corresponding to the hardware.
2. Configure the serial number and the customer information for the cluster by specifying the information in the settings.conf
file on the installer node.

provo-amber:/home/admin # cat /opt/emc/caspian/installer/conf/settings.conf
#
# Installer Customer Settings
#

# Customer data of callhome, uncomment and have proper value to propagate to ECS
[fabric.installer.callhome]
customer_name = ""
customer_email = ""
software_serial_number = ##########

[fabric.installer.docker]
# Set as true for DIY
bypass_docker_configuration = false

3. Specify the rack serial number instead of psnt1 in the topology.txt file.

provo-amber:/home/admin # cat /root/topology.txt
#ID,HOSTNAME,ADDRESS,LOCATION,RACK,SHELF

4. Deploy fabric on the cluster.


5. To configure ESRS in the GUI, go to Settings > ESRS > New Server.

FQDN/IP:
Port:
User name:
Password:

6. To check the device status and connectivity, go to the Devices > Manage Device page in the ESRS web UI.
NOTE: To configure a serial number on a cluster using fcli, there must be a space character ' ' after each ',' in the JSON
for correct server-side parsing.

ESRS Configuration using FCLI

echo '{"esrs_config":{"esrs_connection_settings":{"hostname":"", "port":""},


"username":"", "password":"" }}' | sudo -i fcli lifecycle alert.configureesrs --body

fcli lifecycle alert.esrsconnection --id <ID>
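For illustration, the same configuration call with placeholder values filled in (the gateway FQDN, port, and support account credentials shown here are assumptions, not defaults):

echo '{"esrs_config":{"esrs_connection_settings":{"hostname":"scg.yourco.com", "port":"9443"}, "username":"support.user@yourco.com", "password":"SupportPassword"}}' | sudo -i fcli lifecycle alert.configureesrs --body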

Verify that ESRS call home works


You can verify ESRS connectivity by testing the ESRS call home functionality. In the ECS Portal, you can generate a test call
home alert, and then verify that the alert is received. This is an optional procedure.

Prerequisites
This operation requires the System Administrator role in ECS.
The VDC must be connected to the Dell Secure Connect gateway.

Steps
1. In the ECS Portal, select Settings > ESRS.
2. On the EMC Secure Remote Services Management page, click Test Dial Home in the Actions column.
If the ESRS notification is successfully received, the status displays as Passed with a timestamp in the Test Dial Home
Status column.
3. You can also verify that the latest test alert is present on the ESRS server.
a. SSH into the ESRS server.
b. Go to the location of the RSC file.
# cd /opt/connectemc/archive/
You can also verify the RSC file at:
# cd /opt/connectemc/poll
c. Check for the latest RSC file:
# ls -lrt RSC_<VDC SERIAL NUMBER>*
d. Open the RSC file, and check if the latest test alert shows in the description.

Disable call home

Prerequisites
This operation requires the System Administrator role in ECS.
The VDC must be connected to the Dell Secure Connect gateway.

About this task


System Administrators can use this feature to temporarily disable ESRS call home alerts. System Administrators should use this
feature during planned maintenance activities or during troubleshooting scenarios that require taking nodes offline to prevent
flooding ESRS with unnecessary alerts.

Steps
1. In the ECS Portal, select Settings > ESRS.
2. On the EMC Secure Remote Services Management page, click Disable in the Actions column beside the ESRS server
for which you want to temporarily disable call home alerts.
The ESRS server status displays as Disabled in the Status column.

Alert policy
Alert policies are created to alert about metrics, and are triggered when the specified conditions are met. Alert policies are
created per VDC.
You can use the Settings > Alerts Policy page to view alert policies.
There are two types of alert policy:

System alert ● System alert policies are precreated and exist in ECS during deployment.
policies ● All the metrics have an associated system alert policy.
● System alert policies cannot be updated or deleted.
● System alert policies can be enabled/disabled.
● Alert is sent to the UI and all channels (SNMP, SYSLOG, and Secure Remote Services).

User-defined ● You can create User-defined alert policies for the required metrics.
alert policies ● Alert is sent to the UI and customer channels (SNMP and SYSLOG).

New alert policy
You can use the Settings > Alerts Policy > New Alert Policy tab to create user-defined alert policies.

Steps
1. Select New Alert Policy.
2. Give a unique policy name.
3. Use the metric type drop-down menu to select a metric type.
Metric Type is a grouping of statistics. It consists of:
● Btree Statistics
● CAS GC Statistics
● Geo Replication Statistics
● Garbage Collection Statistics
● EKM
4. Use the metric name drop-down menu to select a metric name.
5. Select level.
a. To inspect metrics at the node level, select Node.
b. To inspect metrics at the VDC level, select VDC.
6. Select polling interval.
Polling Interval determines how frequently data should be checked. Each polling interval gives one data point, which is
compared against the specified condition; when the condition is met, an alert is triggered.
7. Select instances.
Instances describe how many data points to check and how many should match the specified conditions to trigger an alert.
For metrics where historical data is not available, only the latest data is used.

8. Select conditions.
You can set the threshold values and alert type with Conditions.
The alerts can be either a Warning Alert, Error Alert, or Critical Alert.

9. To add more conditions with multiple thresholds and with different alert levels, select Add Condition.
10. Click Save.

Event notification servers


You can add SNMPv2 servers, SNMPv3 servers, and Syslog servers to ECS to route SNMP and Syslog event notifications to
external systems.
In ECS, you can add the following types of event notification servers:
● Simple Network Management Protocol (SNMP) servers, also known as SNMP agents, provide data about network-managed
device status and statistics to SNMP Network Management Station clients. For more information, see SNMP servers.
● Syslog servers provide a method for centralized storage and retrieval of system log messages. ECS supports forwarding of
alerts and audit messages to remote syslog servers, and supports operations using the BSD Syslog and Structured Syslog
application protocols. For more information, see Syslog servers.
You can add event notification servers from the ECS Portal or by using the ECS Management REST API or CLI.
● Add an SNMPv2 trap recipient
● Add an SNMPv3 trap recipient
● Add a Syslog server

SNMP servers
Simple Network Management Protocol (SNMP) servers, also known as SNMP agents, provide data about network managed
device status and statistics to SNMP Network Management Station clients.

To allow communication between SNMP agents and SNMP Network Management Station clients, you must configure both sides
to use the same credentials. For SNMPv2, both sides must use the same Community name. For SNMPv3, both sides must
use the same Engine ID, username, authentication protocol and authentication passphrase, and privacy protocol and privacy
passphrase.
To authenticate traffic between SNMPv3 servers and SNMP Network Management Station clients, and to verify message
integrity between hosts, ECS supports the SNMPv3 standard use of the following cryptographic hash functions:
● Message Digest 5 (MD5)
● Secure Hash Algorithm 1 (SHA-1)
To encrypt all traffic between SNMPv3 servers and SNMP Network Management Station clients, ECS supports encryption of
SNMPv3 traffic by using the following cryptographic protocols:
● Digital Encryption Standard (using 56-bit keys)
● Advanced Encryption Standard (using 128-bit, 192-bit or 256-bit keys)
NOTE: Support for advanced security modes (AES192/256) provided by the ECS SNMP trap feature might be incompatible
with certain SNMP targets (for example, iReasoning).

Add an SNMPv2 trap recipient


You can configure Network Management Station clients as SNMPv2 trap recipients for the SNMP traps that are generated by
the ECS fabric using SNMPv2 standard messaging.

Prerequisites
This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Settings > Event Notification.
On the Event Notification page, the SNMP tab displays by default and lists the SNMP servers that have been added to
ECS.
2. To add an SNMP server target, click New Target.
The New SNMP Target page is displayed.
3. On the New SNMP Target page, complete the following steps:
a. In the FQDN/IP field, type the Fully Qualified Domain Name or IP address for the SNMP v2c trap recipient node that
runs the snmptrapd server.
b. In the Port field, type the port number of the SNMP v2c snmptrapd running on the Network Management Station
clients.
The default port number is 162.
c. In the Version field, select SNMPv2.
d. In the Community Name field, type the SNMP community name.
Both the SNMP server and any Network Management Station clients that access it must use the same community name
to ensure authentic SNMP message traffic, as defined by the standards in RFC 1157 and RFC 3584.
The default community name is public.
4. Click Save.
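If the trap recipient runs the net-snmp snmptrapd daemon, it must also be configured to accept traps for the same community name. A minimal sketch for snmptrapd.conf, assuming the default community name public:

# /etc/snmp/snmptrapd.conf on the Network Management Station
authCommunity log,execute,net public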

Add an SNMPv3 trap recipient


You can configure Network Management Station clients as SNMPv3 trap recipients for the SNMP traps that are generated by
the ECS fabric using SNMPv3 standard messaging.

Prerequisites
This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Settings > Event Notification.

On the Event Notification page, the SNMP tab displays by default and lists the SNMP servers that have been added to
ECS.
2. To add an SNMP server target, click New Target.
The New SNMP Target page is displayed.
3. On the New SNMP Target page, complete the following steps:
a. In the FQDN/IP field, type the Fully Qualified Domain Name or IP address for the SNMPv3 trap recipient node that runs
the snmptrapd server.
b. In the Port field, type the port number of the SNMPv3 snmptrapd running on the Network Management Station client.
The default port number is 162.
c. In the Version field, select SNMPv3.
d. In the Username field, type in the username that will be used in authentication and message traffic as per the User-
based Security Model (USM) defined by RFC 3414.
Both the SNMP server and any Network Management Station clients that access it must specify the same username to
ensure communication. This is an octet string of up to 32 characters in length.
e. In the Authentication box, click Enabled if you want to enable Message Digest 5 (MD5) (128-bit) or Secure Hash
Algorithm 1 (SHA-1) (160-bit) authentication for all SNMPv3 data transmissions, and do the following:
● In the Authentication Protocol field, select MD5 or SHA.
This is the cryptographic hash function to use to verify message integrity between hosts. The default is MD5.
● In the Authentication Passphrase field, type the string to use as a secret key for authentication between SNMPv3
USM standard hosts, when calculating a message digest.
The passphrase can be 16 octets long for MD5 and 20 octets long for SHA-1.

f. In the Privacy box, click Enabled if you want to enable Digital Encryption Standard (DES) (56-bit) or Advanced
Encryption Standard (AES) (128-bit, 192-bit or 256-bit) encryption for all SNMPv3 data transmissions, and do the
following:
● In the Privacy Protocol field, select DES, AES128, AES192, or AES256.
This is the cryptographic protocol to use in encrypting all traffic between SNMP servers and SNMP Network
Management Station clients. The default is DES.
● In the Privacy Passphrase field, type the string to use in the encryption algorithm as a secret key for encryption
between SNMPv3 USM standard hosts.
The length of this key must be 16 octets for DES and longer for the AES protocols.

4. Click Save.

Results
When you create the first SNMPv3 configuration, the ECS system creates an SNMP Engine ID to use for SNMPv3 traffic. The
Event Notification page displays that SNMP Engine ID in the Engine ID field. You could instead obtain an Engine ID from a
Network Monitoring tool and specify that Engine ID in the Engine ID field. The important issue is that the SNMP server and any
SNMP Network Management Station clients that have to communicate with it using SNMPv3 traffic must use the same SNMP
Engine ID in that traffic.
NOTE: Get the Engine ID from the SNMPv3 server and specify the Engine ID in the Engine ID field in the ECS UI.
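If the trap recipient runs the net-snmp snmptrapd daemon, the matching SNMPv3 user is typically defined in snmptrapd.conf with the same Engine ID, username, protocols, and passphrases that are configured in ECS. A sketch only; the user name and passphrases below are placeholders, and <ENGINE_ID> is the Engine ID shown on the Event Notification page:

# /etc/snmp/snmptrapd.conf on the Network Management Station
createUser -e 0x<ENGINE_ID> ecstrapuser SHA "myAuthPassphrase" AES "myPrivPassphrase"
authUser log,execute ecstrapuser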

Send a test SNMP trap


You can send a test SNMP trap to validate configuration, and test traps without having to do a real failure.

Prerequisites
This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Settings > Event Notification.
On the Event Notification page, the SNMP tab displays by default and lists the SNMP servers that have been added to
ECS.

2. To send a test SNMP trap, click the drop-down under Actions, and select Send Test Trap.
A success message is displayed for a successful SNMP trap.
3. Confirm that the test trap has reached the SNMP Target Host.
Current test functionality is to ensure that ECS can send out a trap, but there is no verification that it was received.
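One way to confirm delivery on a target host that runs net-snmp is to run snmptrapd in the foreground and log to stdout while you send the test trap. A sketch, assuming the default trap port:

snmptrapd -f -Lo udp:162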

Support for SNMP data collection, queries, and MIBs in ECS

ECS provides support for Simple Network Management Protocol (SNMP) data collection, queries, and MIBs in the following
ways:
● During the ECS installation process, your customer support representative can configure and start an snmpd server
to support specific monitoring of ECS node-level metrics. A Network Management Station client can query these kernel-
level snmpd servers to gather information about memory and CPU usage from the ECS nodes, as defined by standard
Management Information Bases (MIBs). For the list of MIBs for which ECS supports SNMP queries, see SNMP MIBs
supported for querying in ECS.
● The ECS fabric life cycle layer includes an snmp4j library which acts as an SNMP server to generate SNMPv2 traps and
SNMPv3 traps and send them to as many as ten SNMP trap recipient Network Management Station clients. For details of
the MIBs for which ECS supports as SNMP traps, see ECS-MIB SNMP Object ID hierarchy and MIB definition. You can add
the SNMP trap recipient servers by using the Event Notification page in the ECS Portal. For more information, see Add an
SNMPv2 trap recipient and Add an SNMPv3 trap recipient.

SNMP MIBs supported for querying in ECS


You can query the snmpd servers that can run on each ECS node from Network Management Station clients for the following
SNMP MIBs:
● MIB-2
● DISMAN-EVENT-MIB
● HOST-RESOURCES-MIB
● UCD-SNMP-MIB
You can query ECS nodes for the following basic information by using an SNMP Management Station or equivalent software:
● CPU usage
● Memory usage
● Number of processes running
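For example, assuming the node-level snmpd has been configured with an SNMPv2c community (the community name public and the MIB objects chosen here are illustrative), a Network Management Station could query a node like this:

# CPU load per processor and available memory, from the supported MIBs
snmpwalk -v2c -c public <node_ip> HOST-RESOURCES-MIB::hrProcessorLoad
snmpget -v2c -c public <node_ip> UCD-SNMP-MIB::memAvailReal.0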

ECS-MIB SNMP Object ID hierarchy and MIB definition


This topic describes the SNMP OID hierarchy and provides the full SNMP MIB-II definition for the enterprise MIB known as
ECS-MIB.
The SNMP enterprise MIB named ECS-MIB defines the objects trapAlarmNotification, notifyTimestamp,
notifySeverity, notifyType, and notifyDescription. The SNMP enterprise includes supported SNMP traps that
are associated with managing ECS appliance hardware. ECS sends traps from the Fabric lifecycle container, using services
provided by the snmp4j Java library. The objects contained in the ECS-MIB have the following hierarchy:

emc.............................1.3.6.1.4.1.1139
ecs.........................1.3.6.1.4.1.1139.102
trapAlarmNotification...1.3.6.1.4.1.1139.102.1.1
notifyTimestamp.....1.3.6.1.4.1.1139.102.0.1.1
notifySeverity......1.3.6.1.4.1.1139.102.0.1.2
notifyType..........1.3.6.1.4.1.1139.102.0.1.3
notifyDescription...1.3.6.1.4.1.1139.102.0.1.4

You can download the ECS-MIB definition (as the file ECS-MIB-v2.mib) from the Support Site in the Downloads section
under Add-Ons. The following Management Information Base syntax defines the SNMP enterprise MIB named ECS-MIB:

ECS-MIB DEFINITIONS ::= BEGIN


IMPORTS enterprises, Counter32, OBJECT-TYPE,
MODULE-IDENTITY, NOTIFICATION-TYPE
FROM SNMPv2-SMI;

ecs MODULE-IDENTITY
LAST-UPDATED "201605161234Z"
ORGANIZATION "EMC ECS"
CONTACT-INFO "EMC Corporation 176 South Street Hopkinton, MA 01748"
DESCRIPTION "The EMC ECS Manager MIB module"
::= { emc 102 }

emc OBJECT IDENTIFIER ::= { enterprises 1139 }

-- Top level groups

notificationData OBJECT IDENTIFIER ::= { ecs 0 }


notificationTrap OBJECT IDENTIFIER ::= { ecs 1 }

-- The notificationData group


-- The members of this group are the OIDs for VarBinds
-- that contain notification data.

genericNotify OBJECT IDENTIFIER ::= { notificationData 1 }

notifyTimestamp OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The timestamp of the notification"
::= { genericNotify 1 }

notifySeverity OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The severity level of the event"
::= { genericNotify 2 }

notifyType OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "A type of the event"
::= { genericNotify 3 }

notifyDescription OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "A complete description of the event"
::= { genericNotify 4 }

-- The SNMP trap


-- The definition of these objects mimics the SNMPv2 convention for
-- sending traps. The enterprise OID gets appended with a 0
-- and then with the specific trap code.

trapAlarmNotification NOTIFICATION-TYPE
OBJECTS {
notifyTimestamp,
notifySeverity,
notifyType,
notifyDescription
}
STATUS current
DESCRIPTION "This trap identifies a problem on the ECS. The description can be
used to describe the nature of the change"
::= { notificationTrap 1 }
END

Trap messages that are formulated in response to a Disk Failure alert are sent to the ECS Portal Monitor > Events >
Alerts page in the format Disk {diskSerialNumber} on node {fqdn} has failed:

2016-08-12 01:33:22 lviprbig248141.lss.dell.com [UDP: [10.249.248.141]:39116->[10.249.238.216]]:
iso.3.6.1.6.3.18.1.3.0 = IpAddress: 10.249.238.216 iso.3.6.1.6.3.1.1.4.1.0 =
OID: iso.3.6.1.4.1.1139.102.1.1 iso.3.6.1.4.1.1139.102.0.1.1 = STRING: "Fri Aug
12 13:48:03 GMT 2016" iso.3.6.1.4.1.1139.102.0.1.2 = STRING: "Critical"
iso.3.6.1.4.1.1139.102.0.1.3 = STRING: "2002" iso.3.6.1.4.1.1139.102.0.1.4 = STRING:
"Disk 1EGAGMRB on node provo-mustard.ecs.lab.dell.com has failed"

Trap messages that are formulated in response to a Disk Back Up alert are sent to the ECS Portal Monitor > Events >
Alerts page in the format Disk {diskSerialNumber} on node {fqdn} was revived:

2016-08-12 04:08:42 lviprbig249231.lss.dell.com [UDP: [10.249.249.231]:52469->[10.249.238.216]]:
iso.3.6.1.6.3.18.1.3.0 = IpAddress: 10.249.238.216 iso.3.6.1.6.3.1.1.4.1.0
= OID: iso.3.6.1.4.1.1139.102.1.1 iso.3.6.1.4.1.1139.102.0.1.1 = STRING:
"Fri Aug 12 16:23:23 GMT 2016" iso.3.6.1.4.1.1139.102.0.1.2 = STRING: "Info"
iso.3.6.1.4.1.1139.102.0.1.3 = STRING: "2025" iso.3.6.1.4.1.1139.102.0.1.4 = STRING:
"Disk 1EV1H2WB on node provo-copper.ecs.lab.dell.com was revived"

Syslog servers
Syslog servers provide a method for centralized storage and retrieval of system log messages. ECS supports forwarding of
alerts and audit messages to remote syslog servers, and supports operations using the following application protocols:
● BSD Syslog
● Structured Syslog
Alerts and audit messages that are sent to Syslog servers are also displayed on the ECS Portal, with the exception of OS level
Syslog messages (such as node SSH login messages), which are sent only to Syslog servers and not displayed in the ECS Portal.
Once you add a Syslog server, ECS initiates a syslog container on each node. The message traffic occurs over either TCP or the
default UDP.
ECS sends Audit log messages to Syslog servers, including the severity level, using the following format:
${serviceType} ${eventType} ${namespace} ${userId} ${message}
ECS sends Alert logs to Syslog servers using the same severity as appears in the ECS Portal, using the following format:
${alertType} ${symptomCode} ${namespace} ${message}
ECS sends Fabric alerts using the following format:
Fabric {symptomCode} "{description}"
Starting with ECS 3.1, ECS forwards only the following OS logs to Syslog servers:
● External SSH messages
● All sudo messages with Info severity and higher
● All messages from the auth facility with Warning severity and higher, which are security-related and authorization-related
messages

Add a Syslog server


You can configure a Syslog server to remotely store ECS logging messages.

Prerequisites
● This operation requires the System Administrator role in ECS.

Steps
1. In the ECS Portal, select Settings > Event Notification.
2. On the Event Notification page, click the Syslog tab.
This page lists the Syslog servers that have been added to ECS and allows you to configure new Syslog servers.
3. To add a Syslog server, click New Server.
The New Syslog Server page is displayed.

4. On the New Syslog Server page, complete the following steps:
a. In the Protocol field, select UDP or TCP.
UDP is the default protocol.
b. In the FQDN/IP field, type the Fully Qualified Domain Name or IP address for the node that runs the Syslog server.
c. In the Port field, type the port number for the Syslog server on which you want to store log messages.
The default port number is 514.
d. In the Severity field, select the severity threshold for messages to send to the log. The drop-down options are
Emergency, Alert, Critical, Error, Warning, Notice, Informational, or Debug.
5. Click Save.
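To confirm that the Syslog server is reachable, you can send a test message from any host that has the util-linux logger command (the server name, port, and facility below are placeholders; use -T instead of -d if you configured TCP):

logger -n syslog.yourco.com -P 514 -d -p local0.warning "ECS syslog connectivity test"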

Server-side filtering of Syslog messages


This topic describes how an ECS Syslog message can be further filtered with server-side configuration.
You can configure Syslog servers in the ECS Portal (or by using the ECS Management REST API) to specify the messages that
are delivered to the servers. You can then use server-side filtering techniques to reduce the number of messages that are saved
to the logs. Filtering is done at the facility level. A facility segments messages by type. ECS directs messages to facilities as
described in the following table.

Table 51. Syslog facilities used by ECS


Facility Keyword Defined use ECS use
1 user User-level messages Fabric alerts
3 daemon System daemons Operating system messages
4 auth Security and authorization messages ssh and sudo success and failure messages
16 local0 Local use 0 Object alerts, object audits
All facilities * - -

For each facility, you can filter by severity level by using the following format:
facility-keyword.severity-keyword
Severity keywords are described in the following table.

Table 52. Syslog severity keywords


Severity level number Severity level Keyword
0 Emergency emerg
1 Alert alert
2 Critical crit
3 Error err
4 Warning warn
5 Notice notice
6 Informational info
7 Debug debug
All severities All severities *

Modify the Syslog server configuration using the /etc/rsyslog.conf file
You can modify your existing configuration by editing the /etc/rsyslog.conf file on the Syslog server.

Steps
1. You might configure the /etc/rsyslog.conf file in the following ways:
a. To receive incoming ECS messages from all facilities and all severity levels, use this configuration and specify the
complete path and name of your target log file:

*.* /var/log/ecs-messages.all

b. To receive all fabric alerts, object alerts and object audits, use this configuration with the full path and name of your
target log file:

user.*,local0.* /var/log/ecs-fabric-object.all

c. To receive all fabric alerts, object alerts and object audits, and limit auth facility messages to warning severity and above,
use this configuration with the full path and name of your target log file:

user.*,local0.* /var/log/ecs-fabric-object.all
auth.warn /var/log/ecs-auth-messages.warn

d. To segment the traffic for a facility into multiple log files:

auth.info /var/log/ecs-auth-info.log
auth.warn /var/log/ecs-auth-warn.log
auth.err /var/log/ecs-auth-error.log

2. After any modification of the configuration file, restart the Syslog service on the Syslog server:

# service syslog restart

Output:

Shutting down syslog services done
Starting syslog services done
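On distributions that manage rsyslog with systemd, the equivalent restart is typically:

# systemctl restart rsyslog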

Platform locking
You can use the ECS Portal to lock remote access to nodes.
ECS can be accessed through the ECS Portal or the ECS Management REST API by management users assigned administration
roles. ECS can also be accessed at the node level by a privileged default node user named admin that is created during
the initial ECS installation. This default node user can perform service procedures on the nodes and has access:
● By directly connecting to a node through the management switch with a service laptop and using SSH or the CLI to directly
access the operating system of the node.
● By remotely connecting to a node over the network using SSH or the CLI to directly access the node's operating system.
For more information about the default admin node-level user, see the ECS Security Configuration and Hardening Guide.
Node locking provides a layer of security against remote node access. Without node locking, the admin node-level user can
remotely access nodes at any time to collect data, configure hardware, and run Linux commands. If all the nodes in a cluster are
locked, then remote access can be planned and scheduled for a defined window to minimize the opportunity for unauthorized
activity.
You can lock selected nodes in a cluster or all the nodes in the cluster by using the ECS Portal or the ECS Management REST
API. Locking affects only the ability to remotely access (SSH to) the locked nodes. Locking does not change the way the ECS
Portal and the ECS Management REST APIs access nodes, and it does not affect the ability to directly connect to a node
through the management switch.



For node maintenance using remote access, you can unlock a single node to allow remote access to the entire cluster by using
SSH as the admin user. After the admin user successfully logs in to the unlocked node using SSH, the admin user can SSH
from that node to any other node in the cluster through the private network.
You can unlock nodes to remotely use commands that provide Operating-System-level read-only diagnostics.
Node lock and unlock events appear in audit logs and Syslog. Failed attempts to lock or unlock nodes also appear in the logs.

Lock and unlock nodes using the ECS Portal


You can use the ECS Portal to lock and unlock remote SSH access to ECS nodes.

Prerequisites
This operation requires the Security Administrator role assigned to the emcsecurity user in ECS.

About this task


Locking affects only the ability to remotely access (SSH to) the locked nodes. Locking does not change the way the ECS Portal
and the ECS Management REST APIs access nodes, and it does not affect the ability to directly connect to a node through the
management switch.

Steps
1. Log in as the emcsecurity user.
For the initial login for this user, you are prompted to change the password and log back in.
2. In the ECS Portal, select Settings > Platform Locking.
The Platform Locking page lists the nodes in the cluster and displays the lock status.
The node states are:
● Unlocked: Displays an open green lock icon and the Lock action button.
● Locked: Displays a closed red lock icon and the Unlock action button.
● Offline: Displays a circle-with-slash icon but no action button because the node is unreachable and the lock state cannot
be determined.
3. Perform any of the following steps.
a. Click Lock in the Actions column beside the node that you want to lock.
Any user who is remotely logged in by SSH or CLI has approximately five minutes to exit before their session is
terminated. An impending shutdown message appears on the user's terminal screen.
b. Click Unlock in the Actions column beside the node that you want to unlock.
The admin default node user can remotely log in to the node after a few minutes.
c. Click Lock the VDC if you want to lock all unlocked, online nodes in the VDC.
This action does not set a persistent state; a new or offline node is not automatically locked when it is detected.

Lock and unlock nodes using the ECS Management REST API
You can use the following APIs to manage node locks.

Table 53. ECS Management REST API calls for managing node locking
Resource Description
GET /vdc/nodes Gets the data nodes that are configured in the cluster.
GET /vdc/lockdown Gets the locked or unlocked status of a VDC.
PUT /vdc/lockdown Sets the locked or unlocked status of a VDC.
PUT /vdc/nodes/{nodeName}/lockdown Sets the lock or unlock status of a node.
GET /vdc/nodes/{nodeName}/lockdown Gets the lock or unlock status of a node.
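
For example, the following minimal Python sketch locks remote SSH access to a single node and then reads back the lock status of the VDC through the ECS Management REST API. The management endpoint, node name, and request payload shown here are assumptions for illustration, and the authentication token is assumed to have been obtained beforehand (for example, from the /login endpoint); confirm the exact request and response formats against the ECS Management REST API reference for your release.

import requests

ECS_MGMT = "https://ecs.example.com:4443"           # hypothetical management endpoint
TOKEN = "<X-SDS-AUTH-TOKEN obtained from /login>"   # placeholder authentication token

headers = {
    "X-SDS-AUTH-TOKEN": TOKEN,
    "Content-Type": "application/json",
    "Accept": "application/json",
}

# Lock remote SSH access to one node (the payload format is an assumption).
node = "node1.example.com"
resp = requests.put(
    f"{ECS_MGMT}/vdc/nodes/{node}/lockdown",
    headers=headers,
    json={"lockdown": True},
    verify=False,   # many management endpoints use self-signed certificates
)
resp.raise_for_status()

# Read back the locked or unlocked status of the whole VDC.
status = requests.get(f"{ECS_MGMT}/vdc/lockdown", headers=headers, verify=False)
print(status.text)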



Licensing
ECS licensing is capacity-based. The ECS license file is a single file that contains base and add-on software features. A license
applies to a single VDC. In geo-federated systems, each VDC requires a license. In a VDC configuration with multiple racks, the
license file includes the total capacity for all racks in the VDC.
The ECS license file contains the following information:
● Feature: ViPR Unstructured (base feature) and may include ECS Server Side Encryption (free software add-on feature)
● Type: Permanent or Temporary
● Status: Licensed or Expired
● Entitlement: Describes the maximum storage that is licensed for the ViPR Unstructured feature in TB.
● VDC Serial Number: The software ID of the VDC
● PSNT: The quantity of PSNTs (racks) in the VDC. Each rack in a VDC has a product serial number tag (PSNT). In a VDC
with multiple racks, multiple PSNTs map to the serial number of the VDC. Click the right-facing arrow > next to the Feature
name in the licensing table to expand and display the PSNT values.
● Activated Site: The license site number for the physical site where ECS will be installed
● Expiration: If the license is temporary, the license expiration date displays. If the license is permanent, the date the license
was issued displays.
Once the license is activated, the information that is contained in the license file displays on the Settings > Licensing page in
the ECS Portal.
There is a single ECS license file for the following scenarios:
● New ECS 3.3 installation
● Adding racks to an existing VDC
● Adding nodes to an existing rack
● Adding disks to existing nodes
In each of these scenarios, you can obtain the license as described in Obtain the Dell EMC ECS license file.
If you are upgrading from ECS 3.2.0.x to 3.3, you must contact your ECS customer support representative to perform internal
license configuration. In ECS 3.3, the licensing scheme changed so that racks in a single VDC all have the same VDC serial
number. Prior to 3.3, racks in a single VDC had different VDC serial numbers. This change impacts how licensed capacity
entitlements are reported and therefore requires Dell EMC support.

Obtain the Dell EMC ECS license file


You can obtain a license file (.lic) from the Dell EMC license management website.

Prerequisites
To obtain the license file, you must have the License Authorization Code (LAC), which was emailed from Dell EMC. If you have
not received the LAC, contact your customer support representative.

Steps
1. Go to the license page at: https://support.emc.com/servicecenter/license/
2. From the list of products, select ECS Appliance.
3. Click Activate My Software.
4. On the Activate page, enter the LAC code, and then click Activate.
5. Select the feature to activate, and then click Start Activation Process.
6. Select Add a Machine to specify any meaningful string for grouping licenses.
For the machine name, enter any string that helps you to keep track of your licenses. (It does not have to be a machine
name.)
7. Enter the quantities for each feature, or select Activate All, and then click Next.
8. Optionally, specify an addressee to receive an email summary of the activation transaction.
9. Click Finish.
10. Click Save to File to save the license file (.lic) to a folder on your computer.



Next steps
Upload the ECS license file.

Upload the ECS license file


You can upload the ECS license file from the ECS Portal.

Prerequisites
● This operation requires the System Administrator role in ECS.
● Ensure that you have a valid license file. You can follow the instructions that are provided in Obtain the Dell EMC ECS
license file to obtain a license.

About this task


If you are installing more than one VDC in a geo-federated system, ensure that the licensing scheme across VDCs is the
same. For example, if the existing VDC has a server-side encryption-enabled license, any new VDC added to it must also
have a server-side encryption-enabled license.

Steps
1. In the ECS Portal, select Settings > Licensing.
2. On the Licensing page, in the Upload a New License File field, click Browse to go to your local copy of the license file.
3. Click Upload to add the license.
The license features and associated information are displayed in the list of licensed features.

Security
You can use the ECS Portal to change your password, set password rules, manage user sessions, and set user agreement text.
ECS logs an audit event when a password change fails for reasons such as a connection disruption or a forgotten password.
You can view the audit log for failed password changes.
● Password
● Password Rules
● Sessions
● User Agreement

Password
This section describes how a user with System Administrator or System Monitor role can change their own password.

Prerequisites
This operation requires either System Administrator role or System Monitor role in ECS.

About this task


The Change Password operation changes the password for the logged in user.

WARNING: All user sessions are terminated on successful password change.

NOTE:
● Users with either the System Administrator role or the System Monitor role can change their own password.
● To change the password, users must provide the old password.

Steps
1. In the ECS Portal, select Settings > Security > Password.
The Password page opens.



2. On the Password page, in the Old Password field, enter the old password.
3. Enter the new password in the Password field, and then enter it again in the Confirm Password field.
4. Click Save.

Password Change
This section describes how a user with the System Administrator role can change their own password, and also the passwords of other users.

Prerequisites
This operation requires the System Administrator role in ECS.

About this task


NOTE:
● Users with the System Administrator role can change their own password.
● Users with the System Administrator role can also change the passwords of other users (usually when the account of a
user is locked out, a user with the System Administrator role can reset the account of that user by setting a new
password).
● To change the password, users do not have to provide the old password.

Steps
1. In the ECS Portal, select Manage > Users > Management Users > Select User > Edit.
The Edit Management User <user_name> page opens.
2. Enter the new password in the Password field, and then enter it again in the Confirm Password field.
3. Click Save.

Password Rules
You can use the ECS Portal to set password rules.

Prerequisites
This operation requires the System or Security Administrator role in ECS.

About this task


NOTE: The Password Rules take effect only when the Password Rules Switch is Enabled. The Password Rules Switch
is Disabled by default.

Table 54. Password rules


Field Description
Password Rules Switch The Password Rules Switch controls the compliance checks
for the rules on this page.
● Enabled: All rules apply when creating/updating passwords
● Disabled: No rules apply when creating/updating
passwords
Minimum number of characters The minimum number of characters that are required for a
password must be greater than or equal to 8 and must not
exceed 256.
Minimum number of upper case characters The minimum number of upper case characters that are
required for a password must be greater than or equal to 1.
Minimum number of lower case characters The minimum number of lower case characters that are
required for a password must be greater than or equal to 1.

Minimum number of numeric characters The minimum number of numeric characters that are required
for a password must be greater than or equal to 1.
Minimum number of special characters The minimum number of special characters that are required
for a password must be greater than or equal to 1.
Minimum character change The minimum number of characters that must change from
previous password must be greater than or equal to 3.
Password expiration(days) The maximum lifetime of a password until it expires must be
greater than or equal to 1.
Max login attempts The maximum number of attempts to log in with invalid
password before the user is locked.
● Must be greater than or equal to 3.
● System admin can unlock users through the management
user interface.
If a user is locked due to password expiration or failed login
attempts, the System Administrator can unlock the user. Go to
Manage > Users > Management Users > Unlock Action.
Max lock day If the password is not changed, the number of days to lock a
user must be greater than or equal to 1.
Min Password duration(hours) The minimum lifetime for a password. Must be greater than or
equal to 1.
Passwords repeat The minimum number of previous passwords that cannot be
reused. Must be greater than or equal to 1.

NOTE: When a user is locked out due to password expiration or failed login attempts, the Security Administrator can unlock
the user from Manage > Users > Management Users > Unlock Action.

Steps
1. In the ECS Portal, select Settings > Security > Password Rules.
2. Enter the values for all the fields in the Password Rules page.
3. Click Save.
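
To illustrate how several of the rules in Table 54 combine, the following minimal Python sketch checks a candidate password against the minimum length, the character-class minimums, and the minimum character change from the previous password. It is illustrative only; ECS enforces these rules server-side when the Password Rules Switch is Enabled, and the exact enforcement logic (for example, how the character change count is computed) may differ.

import string

def check_password(candidate, previous, rules):
    # Return a list of rule violations for a candidate password (illustrative only).
    problems = []
    if len(candidate) < rules["min_length"]:
        problems.append("too short")
    if sum(c.isupper() for c in candidate) < rules["min_upper"]:
        problems.append("not enough upper case characters")
    if sum(c.islower() for c in candidate) < rules["min_lower"]:
        problems.append("not enough lower case characters")
    if sum(c.isdigit() for c in candidate) < rules["min_numeric"]:
        problems.append("not enough numeric characters")
    if sum(c in string.punctuation for c in candidate) < rules["min_special"]:
        problems.append("not enough special characters")
    # Count changed positions plus any length difference as the character change (an assumption).
    changed = sum(a != b for a, b in zip(candidate, previous)) + abs(len(candidate) - len(previous))
    if changed < rules["min_change"]:
        problems.append("too similar to the previous password")
    return problems

rules = {"min_length": 8, "min_upper": 1, "min_lower": 1,
         "min_numeric": 1, "min_special": 1, "min_change": 3}
print(check_password("NewPassw0rd!", "OldPassw0rd!", rules))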

Sessions
You can use the ECS Portal to manage user sessions.

Prerequisites
This operation requires the System Administrator role in ECS.

About this task

Table 55. Sessions


Field Description
Management sessions per user The maximum number of sessions per user. Must be greater
than or equal to 2.
Inactive session timeout(mins) The maximum number of minutes before an inactive session is
terminated.
● Must be greater than or equal to 15.
● Inactive session timeout applies to any logged in session.

● Use the Inactive UI Session timeout setting to limit ECS UI
session inactivity.
Inactive UI session timeout(mins) The maximum number of minutes before an inactive ECS UI
session is terminated.
● Must be greater than or equal to 10. Inactive UI session
timeout value may not exceed the Inactive session timeout
value.
● Inactive UI session timeout applies to any session where
the user logged in through ECS UI.

Steps
1. In the ECS Portal, select Settings > Security > Sessions.
2. Enter the values for all the fields in the Sessions page.
3. Click Save.

User Agreement
You can use the ECS Portal to define the user agreement text.

Prerequisites
This operation requires the System Administrator role in ECS.

About this task

Table 56. User agreement


Field Description
Agreement text Enter the appropriate agreement text or upload from a .txt
file. This content is displayed from the User Agreement link on
the login page. Maximum character limit is 5000.

Steps
1. In the ECS Portal, choose Settings > Security > User Agreement.
2. Enter the agreement text in the Agreement text field or click UPLOAD.TXT FILE to upload text from a text file.
You can review and modify the agreement text before you proceed.
3. Click Save.

About this VDC


You can view information about software version numbers for the current node or other nodes in the VDC on the About this
VDC page.

About this task


You can view information that is related to the node you are connected to on the About tab. You can view the names, IP
addresses, rack IDs, and software versions of the nodes available in the VDC on the Nodes tab. You can identify any nodes that
are not at the same software version as the node you are connected to on the Nodes tab.

Steps
1. In the ECS Portal, select Settings > About this VDC.



On the About this VDC page, the About tab displays by default and shows the ECS software version and ECS Object
service version for the current node.
2. On the About this VDC page, to view the software version for the reachable nodes in the cluster, click the Nodes tab.
The green checkmark indicates the current node. A star indicates the nodes that have a different software version.
3. On the About this VDC page, to view the End User License Agreement, click the End User License Agreement tab.
4. On the About this VDC page, to view the Infrastructure Telemetry Notice, click the ISG Product Telemetry Software
Notice tab.

Object version limitation settings


Objects with many versions can cause data unavailability (DU) when the objects are accessed by foreground or background
operations. With object version limitation settings, it is possible to prevent DU.
When object version limitation settings are enabled, a PUT to an object with more than 50,000 versions in a bucket with
versioning enabled returns 403 Forbidden. All other operations continue to work as normal while PUT requests remain
forbidden.
NOTE: The limit is enforced (PUT request that is rejected with 403) only on new installations. Whereas during upgrade,
only alerts are sent.

Table 57. Object version limitation settings


Parameter Description
enabled Enable or disable maximum object version count enforcement.
create_forbidden_threshold Maximum number of versions for an object.
alert.level_one.enabled Enable or disable the version count alert.
alert.level_one.threshold Alert user at percent of limit.
alert.level_two.enabled Enable or disable the version count alert.
alert.level_two.threshold Alert user at percent of limit.

Default values
● The threshold is 50,000 versions of an object.
● Alerts are enabled by default.
● The threshold is disabled during upgrade.
● Threshold is enabled during a new install.
● Default values can be modified.
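
As an illustration of the client-side effect, the following minimal sketch, assuming an S3-compatible client (boto3) pointed at an ECS S3 endpoint, shows how an application might handle the 403 Forbidden response that is returned when the version limit is enforced. The endpoint URL, port, credentials, bucket, and key are placeholders.

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    endpoint_url="https://ecs-s3.example.com:9021",   # hypothetical ECS S3 endpoint
    aws_access_key_id="<object user access key>",
    aws_secret_access_key="<object user secret key>",
)

try:
    s3.put_object(Bucket="versioned-bucket", Key="busy-object", Body=b"new version")
except ClientError as err:
    status = err.response["ResponseMetadata"]["HTTPStatusCode"]
    if status == 403:
        # The object may have exceeded the configured version limit (50,000 by default).
        # Remove old versions or write to a different key before retrying.
        print("PUT rejected:", err.response["Error"].get("Message", "403 Forbidden"))
    else:
        raise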



13
ECS Outage and Recovery
Topics:
• Introduction to ECS site outage and recovery
• TSO behavior
• PSO behavior
• Recovery on disk and node failures
• Data rebalancing after adding new nodes

Introduction to ECS site outage and recovery


ECS is designed to provide protection when a site outage occurs due to a disaster or other problem that causes a site to go
offline or become disconnected from the other sites in a geo-federated deployment.
Site outages can be classified as a temporary site outage (TSO) or a permanent site outage (PSO). A TSO is a failure of the
WAN connection between two sites, or a temporary failure of an entire site (for example, a power failure). A site can be brought
back online after a TSO. ECS can detect and automatically handle these types of temporary site failures.
A PSO is when an entire site becomes permanently unrecoverable, such as when a disaster occurs. In this case, the System
Administrator must permanently fail over the site from the federation to initiate failover processing.
TSO and PSO behavior are described in the following topics:
● TSO behavior
● TSO considerations
● NFS file system access during a TSO
● PSO behavior
CAUTION: See Minimum Requirements for TSO and PSO.

NOTE: For more information about TSO and PSO behavior, see the ECS High Availability Design white paper.

NOTE: Do not delete buckets during rejoin after TSO.

NOTE: Best practices for administrators to consider while configuring network bandwidth for ECS replication:
● For most scenarios, ECS replicates the same amount of data that it ingests. As a best practice, the replication bandwidth
allocation should be at least equal to or greater than the frontend ingest rate.
● In full replication mode, the bandwidth that is required is also dependent on the number of VDCs in the full replication
RG. For example, if a user has four VDCs in an RG, the replication network bandwidth that is required is three times the
frontend ingest rate.
● The network that is used for replication must be stable under high-utilization scenarios. It should avoid, or account for,
additional load such as load from a firewall.
● When a failure scenario such as a PSO, TSO, or VDC extend operation happens, there could be a backlog that is
generated. This increases the bandwidth that is required by replication to catch up and clear the backlog. Administrators
must account for these situations to avoid network saturation or in the worst case, network failure. Best practices to
consider:
○ Use a third-party QoS method to throttle the network used by replication.
○ Discuss options with the Dell Service provider, to tune the system based on your network situation.
ECS recovery and data balancing behavior are described in these topics:
● Recovery on disk and node failures
● Data rebalancing after adding new nodes



TSO behavior
VDCs in a geo-replicated environment have a heartbeat mechanism. Sustained loss of heartbeats for a configurable duration (by
default, 15 minutes) indicates a network or site outage and the system transitions to identify the TSO.
NOTE: TSO behavior applies to the IAM configuration as well; when the IAM configuration is changed, those changes do
not take effect immediately.
ECS marks the unreachable site as TSO and the site status displays as Temporarily unavailable on the Replication
Group Management page in the ECS Portal.
There are two important concepts that determine how the ECS system behaves during a TSO.
● Owner site: If a bucket or object is created within a namespace in Site A, then Site A is the owner site of that bucket
or object. When a TSO occurs, the behavior for read/write requests differs depending on whether the request is made from
the site that owns the bucket or object, or from a nonowner site that does not own the primary copy of the object.
● Access During Outage (ADO) bucket setting: Access to buckets and the objects within them during a TSO differs depending
on whether the ADO setting is turned on for buckets. The ADO setting can be set at the bucket level; meaning you can turn
this setting on for some buckets and off for others.
○ If the ADO setting is turned off for a bucket, strong consistency is maintained during a TSO by continuing to enable
access to data owned by accessible sites and preventing access to data owned by a failed site.
○ If the ADO setting is turned on for a bucket, read and optionally write access to all geo-replicated data is enabled,
including the data that is owned by the failed site. During a TSO, the data in the ADO-enabled bucket temporarily
switches to eventual consistency. Once all sites are back online, it reverts to strong consistency.

TSO behavior with the ADO bucket setting turned off


If the Access During Outage (ADO) setting is turned off for a bucket, during a TSO you will only be able to access the data in
that bucket if it is owned by an available site. You cannot access data in a bucket that is owned by a failed site. By default, the
ADO setting is turned off because there is a risk that object data retrieved during a TSO is not the most recent.
In the ECS system example shown in the following figure, Site A is marked as TSO and is unavailable. Site A is the owner of
Bucket 1, because that is where the bucket (and the objects within it) was created. At the time Bucket 1 was created, the ADO
setting was turned off. The read/write requests for objects in that bucket made by applications connected to Site A will fail.
When an application tries to access an object in that bucket from a nonowner site (Site B), the read/write request will also fail.
The scenario would be the same if the request was made before the site was officially marked as TSO by the system (which
occurs after the heartbeat is lost for a sustained period of time, which is set at 15 minutes by default). In other words, if a
read/write request was made from an application connected to Site B within 15 minutes of the power outage, the request would
still fail.



Figure 10. Read/write request fails during TSO when data is accessed from non-owner site and owner site is unavailable

The following figure shows a nonowner site that is marked as TSO with the ADO setting turned off for the bucket. When an
application tries to access the primary copy at the owner site, the read/write request made to the owner site will be successful.
A read/write request made from an application connected to the nonowner site fails.

Figure 11. Read/write request succeeds during TSO when data is accessed from owner site and non-owner site is unavailable

TSO behavior with the ADO bucket setting turned on


Turning the ADO setting on for a bucket marks the bucket, and all of the objects in the bucket, as available during an outage.
You can turn the ADO setting on for a bucket so that the primary copies of the objects in that bucket are available, even when



the site that owns the bucket fails. If the ADO setting is turned off for a bucket, the read/write requests for the objects in the
bucket that is owned by a failed site cannot be made from the other sites.
When you turn the ADO setting on, the following occurs during a TSO:
● Object data is accessible for both read and write operations during the outage.
● File systems within file system-enabled (NFS) buckets that are owned by the unavailable site are read-only during an outage.
You can turn the ADO setting on when you create a bucket, and you can change this setting after the bucket is created (as long
as all sites are online). You can turn the ADO setting on when creating a bucket from any of the following interfaces (see the
sketch after this list):
● ECS Portal (see Create a bucket)
● ECS Management REST API
● ECS CLI
● Object API REST interfaces such as S3, Swift, and Atmos
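
For example, the following minimal Python sketch creates an ADO-enabled bucket through the ECS Management REST API. The endpoint, the field names (such as is_stale_allowed for Access During Outage), and the payload structure are assumptions for illustration, and the authentication token is assumed to have been obtained from the /login endpoint; confirm the exact bucket-creation request against the ECS Management REST API reference for your release.

import requests

ECS_MGMT = "https://ecs.example.com:4443"           # hypothetical management endpoint
TOKEN = "<X-SDS-AUTH-TOKEN obtained from /login>"   # placeholder authentication token

payload = {
    "name": "ado-bucket-1",       # bucket to create
    "namespace": "ns1",           # owning namespace
    "is_stale_allowed": True,     # assumed field name for the ADO setting
}

resp = requests.post(
    f"{ECS_MGMT}/object/bucket",
    headers={"X-SDS-AUTH-TOKEN": TOKEN, "Content-Type": "application/json"},
    json=payload,
    verify=False,   # many management endpoints use self-signed certificates
)
resp.raise_for_status()
print("Bucket creation returned:", resp.status_code)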
With the ADO setting turned on for a bucket and upon detecting a temporary outage, read/write requests from applications that
are connected to a nonowner site are accepted and honored, as shown in the following figure.

Figure 12. Read/write request succeeds during TSO when ADO-enabled data is accessed from non-owner site and owner site is unavailable

The ECS system operates under the eventual consistency model during a TSO with ADO turned on for buckets. When a change
is made to an object at one site, it will be eventually consistent across all copies of that object at other sites. Until enough time
elapses to replicate the change to other sites, the value might be inconsistent across multiple copies of the data at a particular
point in time.
An important factor to consider is that turning the ADO setting on for buckets has performance consequences; ADO-enabled
buckets have slower read/write performance than buckets with the ADO turned off. The performance difference is because
when ADO is turned on for a bucket, ECS must first resolve object ownership to provide strong consistency when all sites
become available after a TSO. When ADO is turned off for a bucket, ECS does not have to resolve object ownership because
the bucket does not allow object ownership to change during a TSO.
The benefit of the ADO setting is that it enables you to access data during temporary site outages. The disadvantage is that the
data returned may be outdated and read/write performance on ADO buckets will be slower.
In ECS 3.8, you can enable the ADO setting on Object Lock-enabled buckets if you have System Administrator privileges and
you are aware of the data loss risks.
By default, the ADO setting is turned off because there is a risk that object data retrieved during a TSO is not the most recent.
TSO behavior with the ADO bucket setting that is turned on is described for the following ECS system configurations:
● Two-site geo-federated deployment with ADO-enabled buckets
● Three-site active federated deployment with ADO-enabled buckets
● Three-site passive federated deployment with ADO-enabled buckets



ADO Read-Only option
When you create a bucket and turn the ADO setting on, you also have the additional option of selecting Read-Only. You can only
set the Read-Only option while creating the bucket; you cannot change this option after the bucket has been created. When you
select the Read-Only option for the ADO-enabled bucket, the bucket is only accessible in read-only mode during the outage. You
cannot edit or delete the bucket and its contents, and you cannot create objects in the bucket during the outage. Access to file
systems is not impacted because they are automatically in read-only mode when ADO is turned on for file system buckets.
NOTE: ADO-enabled buckets (with or without the Read-Only option selected) will have slower read/write performance
than buckets with the ADO setting turned off.

Two-site geo-federated deployment with ADO-enabled buckets

When an application is connected to a nonowner site, and it modifies an object within an ADO-enabled bucket during a network
outage, ECS transfers ownership of the object to the site where the object was modified.
The following figure shows how a write to a nonowner site causes the nonowner site to take ownership of the object during a
TSO in a two-site geo-federated deployment. This functionality allows applications connected to each site to continue to read
and write objects from buckets in a shared namespace.
When the same object is modified in both Site A and Site B during a TSO, the copy on the nonowner site is the authoritative
copy. When an object that is owned by Site B is modified in both Site A and Site B during a network outage, the copy on Site A
is the authoritative copy that is kept, and the other copy is overwritten.
When network connectivity between two sites is restored, the heartbeat mechanism automatically detects connectivity,
restores service, and reconciles objects from the two sites. This synchronization operation is done in the background and
can be monitored on the Monitor > Recovery Status page in the ECS Portal.



Figure 13. Object ownership example for a write during a TSO in a two-site federation (panels: Before TSO - normal state; During TSO - Site A is temporarily unavailable; After TSO - Site A rejoins federation and object versions are reconciled)

Three-site active federated deployment with ADO-enabled buckets


NOTE: Do not delete buckets during rejoin after a TSO. Delete operations are not performed during rejoin execution.

When more than two sites are part of a replication group, and if network connectivity is interrupted between one site and the
other two, write, update, or ownership operations continue as they would with two sites, but the process for responding to read
requests is more complex.
If an application requests an object that is owned by a site that is not reachable, ECS sends the request to the site with
the secondary copy of the object. The secondary copy might have been subject to a data contraction operation, which is an
XOR between two different datasets that produces a new dataset. The site with the secondary copy must retrieve the chunks
of the object that were in the original XOR operation, and it must XOR those chunks with the recovery copy. This operation
returns the contents of the chunk that was originally stored on the owner site. Then the chunks from the recovered object can
be reassembled and returned. When the chunks are reconstructed, they are also cached so that the site can respond more
quickly to subsequent requests. Reconstruction is time consuming. More sites in a replication group imply more chunks that
must be retrieved from other sites, and hence reconstructing the object takes longer. The following figure shows the process
for responding to read requests in a three-site federation.



Figure 14. Read request workflow example during a TSO in a three-site federation
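
To make the XOR recovery step concrete, the following minimal Python sketch illustrates the principle: if a recovery chunk C was produced as A XOR B, the missing chunk A can be rebuilt as C XOR B. This is only an illustration of the arithmetic; the actual ECS chunk sizes, layout, and metadata handling are internal to the storage engine.

def xor_bytes(x, y):
    # Bytewise XOR of two equal-length byte strings.
    return bytes(a ^ b for a, b in zip(x, y))

chunk_a = b"chunk owned by the unreachable site"                    # primary chunk on the failed owner site
chunk_b = b"chunk from another site in the same XOR group"[:len(chunk_a)]

# The recovery site keeps only the XOR (contracted) chunk, not both originals.
recovery_chunk = xor_bytes(chunk_a, chunk_b)

# During a TSO, the original chunk is reconstructed from the recovery chunk and chunk B.
reconstructed = xor_bytes(recovery_chunk, chunk_b)
assert reconstructed == chunk_a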

Three-site passive federated deployment with ADO-enabled buckets

When ECS is deployed in a three-site passive configuration, the TSO behavior is the same as described in Three-site active
federated deployment with ADO-enabled buckets, with one difference. If a network connection fails between an active site and
the passive site, ECS always marks the passive site as TSO (not the active site).
When the network connection fails between the two active sites, the following normal TSO behavior occurs:
1. ECS marks one of the active sites as TSO (unavailable), for example, owner Site B.
2. Read/write/update requests are served from the site that is up (Site A).
3. For a read request, Site A requests the object from the passive site (Site C).
4. Site C decodes (undo XOR) the XOR chunks and sends to Site A.
5. Site A reconstructs a copy of the object to honor the read request.
6. If there is a write or update request, Site A becomes the owner of the object and keeps the ownership after the outage.
The following figure shows a passive configuration in a normal state; users can read and write to active Sites A and B and the
data and metadata is replicated one way to the passive Site C. Site C XORs the data from the active sites.



Figure 15. Passive replication in normal state

The following figure shows the workflow for a write request that is made during a TSO in a three-site passive configuration.

Figure 16. TSO for passive replication



Object Lock in ADO buckets

In ECS 3.8, with System Administrator privileges, you can enable ADO and Object Lock together in a bucket when you decide to
explicitly allow it. When Object Lock and ADO are enabled together in a bucket, there is a risk of losing locked versions during a
TSO. As a result, setting Object Lock on ADO buckets is denied by default. Object Lock and ADO can co-exist only when the
user understands and accepts the risk of losing locked versions during a TSO and still wants to allow this feature.
This scenario can be considered in three types of buckets.
1. Non-ADO
2. ADO read-only (RO)
3. ADO read/write (RW)

Table 58. Object Lock and ADO in different types of buckets


Object Lock and ADO Non-ADO ADO RO ADO RW
Set to not allowed Yes Yes No
Set to allowed Yes Yes Yes

Object Lock in ADO Read-Only (RO) buckets

Object Lock is allowed in Read-Only ADO buckets. Data cannot be lost, but it can be unavailable or inconsistent during a TSO.
This means that locked versions, or updates to locked versions, might not be visible. Suppose the first version of an object is
written and then updated to have a lock, and then a second version is created with a lock. If the second version is not replicated
before a temporary site outage, that data could be unavailable or inconsistent.

Object Lock for ADO buckets (ADO RW)

Object Lock is not allowed on ADO buckets by default. In ECS 3.8, you see an error if you try to enable ADO and Object Lock
together: "Object Lock has been disabled for use with ADO enabled buckets, consult your local systems administrator or data
access guide for information to enable."
The error occurs in the following three situations when a user tries to enable ADO and Object Lock together.
● Create a bucket with both options set to true in request.
● Enable ADO in the Object Lock bucket.
● Enable Object Lock in ADO bucket.
This forces you to accept the risk of losing locked data before enabling both services together. With System Administrator
privileges, you can enable ADO and Object Lock together through the Management API. The following example scenario, shown
in a series of figures, illustrates the data loss possibilities when ADO and Object Lock are enabled together. The following figure
shows the behavior of a bucket that is enabled with both ADO and Object Lock when the network connection is normal. The
first version of the object is created and locked in Zone 1, and is then replicated to Zone 2.

Figure 17. ADO and Object Lock enabled bucket where the object is replicated from Zone 1 to Zone 2 during normal
network connection

The following figure shows the behavior of the ADO and Object Lock enabled bucket as time elapses. A new version of the
object is created in Zone 1 and is replicated to Zone 2. The new version (V2) is then locked in Zone 1; however, the lock is not
replicated to Zone 2 at that point in time.



Figure 18. ADO and Object Lock enabled bucket where a new version of an object is replicated from Zone 1 to Zone 2
during normal network connection but the lock is not replicated

During an outage, the connection between two zones is lost. The following figure demonstrates how during an outage Zone 1 is
inaccessible. As ADO is enabled, the unlocked version of the object can be accessed and modified from Zone 2. This situation
may lead to the creation of a new version (V3) in Zone 2.

Figure 19. ADO and Object Lock enabled bucket during TSO when the unlocked version of the object is overwritten
in Zone 2

The following figure shows the state of the bucket that is enabled with ADO and Object Lock after normal connection is
restored and the data is reconciled between linked zones. This means V3 is replicated from Zone 2 to Zone 1, and V2 is lost as
the lock was not replicated before the outage.

Figure 20. ADO and Object Lock enabled bucket after TSO where the version updated in Zone 2 during the outage
will be replicated to Zone 1

Table 59. Example scenario where locked data can be lost in TSO

Network connectivity status | Activity in the bucket | Data available in Zone 1 | Data available in Zone 2
Normal | A new version of an object is created and locked. | Version 1 (Governance) | -
Normal | The new version is shipped to Zone 2. | Version 1 (Governance) | Version 1 (Governance)
Normal | Version 2 of the object is created and shipped to Zone 2. | Version 1 (Governance); Version 2 | Version 1 (Governance); Version 2
Normal | Version 2 is locked, and Version 3 is created and locked. Not shipped to Zone 2. | Version 1 (Governance); Version 2 (Compliance); Version 3 (Legal Hold) | Version 1 (Governance); Version 2
TSO | Zone 1 is marked TSO before Versions 2 and 3 are replicated. | Version 1 (Governance); Version 2 (Compliance); Version 3 (Legal Hold) | Version 1 (Governance); Version 2
TSO | Zone 1 is still in TSO, which means the unlocked Version 2 in Zone 2 could be modified to form Version 4. The locked Version 1 is still protected. | Version 1 (Governance); Version 2 (Compliance); Version 3 (Legal Hold) | Version 1 (Governance); Version 4
Normal (Post TSO) | A new state from Zone 2 is replicated to Zone 1. | Version 1 (Governance); Version 4 | Version 1 (Governance); Version 4

TSO considerations
You can perform many object operations during a TSO. You cannot perform create, delete, or update operations on the following
entities at any site in the geo-federation until the temporary failure is resolved, regardless of the ADO bucket setting:
● Namespaces
● Buckets
● Object users
● Authentication providers
● Replication groups (you can remove a VDC from a replication group for a site failover)
● NFS user and group mappings
The following limitations apply to buckets during a TSO:
● File systems within file system-enabled (NFS) buckets that are owned by the unavailable site are read-only.
● When you copy an object from a bucket owned by the unavailable site, the copy is a full copy of the source object. This
means that the same object's data is stored more than once. Under normal non-TSO circumstances, the object copy consists
of the data indexes of the object, not a full duplicate of the object's data.

NFS file system access during a TSO


NFS provides a single namespace across all ECS nodes and can continue to operate in the event of a TSO. When you mount
an NFS export, you can specify any of the ECS nodes as the NFS server or you can specify the address of a load balancer.
Whichever node you point at, the ECS system can resolve the file system path.
In the event of a TSO, if your load balancer can redirect traffic to a different site, your NFS export remains available. Otherwise,
you must remount the export from another, nonfailed site.
When the owner site fails, and ECS is required to reconfigure to point at a nonowner site, data can be lost due to NFS
asynchronous writes and also due to unfinished ECS data replication operations.
For more information about how to access NFS-enabled buckets, see Introduction to file access.



PSO behavior
If a disaster occurs, an entire site can become unrecoverable; this is referred to in ECS as a permanent site outage (PSO). ECS treats
the unrecoverable site as a temporary site failure, but only if the entire site is down or unreachable over the WAN. If the failure
is permanent, the System Administrator must permanently fail over the site from the federation to initiate failover processing.
This initiates resynchronization and reprotection of the objects that are stored on the failed site. The recovery tasks run
as a background process. For more information about how to perform the failover procedure in the ECS Portal, see Fail a VDC
(PSO).
NOTE:
● Before triggering PSO (planned or unplanned), ensure that the site is off (all nodes are shut down). Ensure you fail the
site from the federation and remove the site from all the replication groups.
● If you want to reuse the same racks from the PSOed site, disconnect the racks physically, and reimage the nodes before
you bring them online.
● IPs or FQDN hostnames should not be reused after they have been PSOed from a system.
● ECS supports multi-site PSO. Multi-site PSO in ECS is limited to full replication RGs, where all sites have all user data.
To perform a Multi-site PSO, contact ECS Remote Support.
Before you initiate a PSO in the ECS Portal, it is advised to contact your technical support representative, so that the
representative can validate the cluster health. Data is not accessible until the failover processing is completed. You can monitor
the progress of the failover processing on the Monitor > Geo Replication > Failover Processing tab in the ECS Portal. While
the recovery background tasks are running, but after failover processing has completed, some data from the removed site might
not be read back until the recovery tasks fully complete.

Recovery on disk and node failures


ECS continuously monitors the health of the nodes, their disks, and objects stored in the cluster. ECS disperses data protection
responsibilities across the cluster and automatically reprotects at-risk objects when nodes or disks fail.
NOTE:
● For the EXF900 platform (SSD NVMe type disks), during loss of the public network on a node, the ECS UI does not show
the node as Offline, and recovery is not initiated. During loss of the private.4 network on a node, the ECS UI shows the node
as Offline, and recovery is initiated.
● For all other ECS platforms, during loss of the public network, the ECS UI shows the node as Offline, and recovery is
initiated.

Disk health
ECS reports disk health as Good, Suspect, or Bad.
● Good: The partitions of the disk can be read from and written to.
● Suspect: The disk has not yet met the threshold to be considered bad.
● Bad: A certain threshold of declining hardware performance has been met. When met, no data can be read or written.
ECS writes only to disks in good health. ECS does not write to disks in suspect or bad health. ECS reads from good disks and
suspect disks. When two of an object’s chunks are located on suspect disks, ECS writes the chunks to other nodes.

Node health
ECS reports node health as Good, Suspect, or Bad.
● Good: The node is available and responding to I/O requests in a timely manner.
● Suspect: The node has been unavailable for more than 30 minutes.
● Bad: The node has been unavailable for more than an hour.
ECS writes to reachable nodes regardless of the node health state. When two of an object’s chunks are located on suspect
nodes, ECS writes two new chunks of it to other nodes.



Data recovery
When there is a failure of a node or drive in the site, the storage engine:
1. Identifies the chunks or erasure coded fragments affected by the failure.
2. Writes copies of the affected chunks or erasure coded fragments to good nodes and disks that do not currently have copies.

NFS file system access during a node failure


NFS provides a single namespace across all ECS nodes and can continue to operate in the event of node failure. When you
mount an NFS export, you can specify any of the ECS nodes as the NFS server or you can specify the address of a load
balancer. Whichever node you point at, the ECS system resolves the file system path.
In the event of a node failure, ECS recovers data using its data fragments. If your NFS export is configured for asynchronous
writes, you run the risk of losing data that is related to any transactions that have not yet been written to disk. This is the same
with any NFS implementation.
If you mounted the file system by pointing at an ECS node and that node fails, you must remount the export by specifying a
different node as the NFS server. If you mounted the export by using the load balancer address, failure of the node is handled by
the load balancer which automatically directs requests to a different node.

Data rebalancing after adding new nodes


When the number of nodes at a site is expanded due to the addition of new racks or storage nodes, new erasure coded chunks
are allocated to the new storage and existing data chunks are redistributed (rebalanced) across the new nodes. Four or more
nodes must exist for erasure coding of chunks to take place. Addition of new nodes over and above the required four nodes
results in erasure coding rebalancing.
The redistribution of erasure coded fragments is performed as a background task so that the chunk data is accessible during
the redistribution process. In addition, the new fragment data is distributed as a low priority to minimize network bandwidth
consumption.
Fragments are redistributed according to the same erasure coding scheme with which they were originally encoded. If a
chunk was written using the cold storage erasure coding scheme, ECS uses the cold storage scheme when creating the new
fragments for redistribution.



14
Advanced Monitoring
Advanced Monitoring dashboards provide critical information about the ECS processes on the VDC you are logged in to.
Topics:
• Advanced Monitoring
• Flux API
• Dashboard APIs

Advanced Monitoring
Advanced Monitoring dashboards provide critical information about the ECS processes on the VDC you are logged in to.
The advanced monitoring dashboards are based on a time series database and are provided by Grafana, which is a well-known
open-source time series analytics platform.
See the Grafana documentation for basic details of navigating Grafana dashboards.
● View Advanced Monitoring Dashboards
● Share Advanced Monitoring Dashboards

View Advanced Monitoring Dashboards


To view the advanced monitoring dashboards in the ECS Portal, select Advanced Monitoring. The Data Access Performance -
Overview dashboard is the default.

Table 60. Advanced monitoring dashboards


Dashboard Description
Data Access Performance - Overview You can use the Data Access Performance - Overview dashboard to
monitor VDC data.
Data Access Performance - by Namespaces You can use the Data Access Performance - by Namespaces dashboard
to monitor performance data for individual namespace or group of
Namespaces.
Data Access Performance - by Nodes You can use the Data Access Performance - by Nodes dashboard to see
performance data for individual node or group of nodes in a VDC.
Data Access Performance - by Protocols You can use the Data Access Performance - by Protocols dashboard to
see performance data for each supported protocol (S3, ATMOS, SWIFT) or
set of protocols.
Data Movement - Overview You can use the Data Movement - Overview dashboard to see the
performance of the data mobility policy on namespace and buckets by
watermark lag, total errors, bytes copied, and objects copied.
Disk Bandwidth - by Nodes You can use the Disk Bandwidth - by Nodes dashboard to monitor the
disk usage metrics by read or write operations at the node level. The
dashboard displays the latest values.
NOTE: For Disk Bandwidth - by Nodes dashboard, consistency
checker metric shows data only for read but not write as it is irrelevant.

Disk Bandwidth - Overview You can use the Disk Bandwidth - Overview dashboard to monitor the
disk usage metrics by read or write operations at the VDC level.
NOTE: For Disk Bandwidth - Overview dashboard, consistency
checker metric shows data only for read but not write as it is irrelevant.

Node Rebalancing You can use the Node Rebalancing dashboard to monitor the status of
data rebalancing operations when nodes are added to, or removed from, a
cluster. Node rebalancing is enabled by default at installation. Contact your
technical support representative to disable or reenable this feature.
Process Health - by Nodes You can use the Process Health - by Nodes dashboard to monitor
for each node of the VDC use of network interface, CPU, and available
memory. The dashboard displays the latest values, and the history graphs
display values in the selected range.
Process Health - Overview You can use the Process Health - Overview dashboard to monitor the
VDC use of network interface, CPU, and available memory. The dashboard
displays the latest average values, and the history graphs display values in
the selected time range.
Process Health - Process List by Node You can use the Process Health - Process List by Node dashboard to
monitor processes use of CPU, memory, average thread number and last
restart time in the selected time range. The dashboard displays the latest
values in the selected time range.
Recovery Status You can use the Recovery Status dashboard to monitor the data
recovered by the system.
SSD Read Cache You can use the SSD Read Cache dashboard to monitor total SSD disk
capacity and disk space that is used by SSD read cache.
Tech Refresh: Data Migration You can use the Tech Refresh: Data Migration dashboard to monitor the
data migration off and on a node or cluster.
Top Buckets You can use the Top Buckets dashboard to monitor the number of buckets
with top utilization that is based on total object size and count.

Table 61. Advanced monitoring dashboard fields

Dashboard | Field | Description
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | Related dashboards | Allows you to switch to other dashboards in the access performance group, with the selected time.
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | Transaction Summary | Lists the total Successful requests, System Failures, User Failures, and Failure % Rate for the selected VDCs, namespaces, nodes, or protocols.
Data Access Performance - Overview, by Nodes | Performance Summary | Lists the latest values of data access bandwidth and latency of read/write requests for the selected range.
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | Successful requests | The number of data requests that were successfully completed.
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | System Failures | The number of data requests that failed because of hardware or service errors (typically an HTTP error code of 5xx).
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | User Failures | The number of data requests from all object heads that are classified as user failures. User failures are known error types originating from the object heads (typically an HTTP error code of 4xx).
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | Failure % Rate | The percentage of failures for the VDC, namespace, nodes, or protocols.
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | TPS (success/failure) | Rate of successful and failed requests per second.
Data Access Performance - Overview, by Nodes, by Protocols | Bandwidth (read/write) | Data access bandwidth of successful requests per second.
Data Access Performance - Overview, by Namespaces, by Nodes, by Protocols | Failed Requests/s by error type (user/system) | Rate of failed requests per second, split by error type (user/system).
Data Access Performance - Overview, by Nodes, by Protocols; SSD Read Cache | Latency | Latency of read/write requests.
Data Access Performance - Overview, by Nodes | Successful requests drill down | Displays the rate of successful requests per second, by method, node, and protocol.
Data Access Performance - Overview, by Nodes | Successful Requests/s by Method | Rate of successful requests per second, by method.
Data Access Performance - by Namespaces, by Nodes, by Protocols | Successful Requests/s by Node | Rate of successful requests per second, by node.
Data Access Performance - Overview, by Nodes | Successful Requests/s by Protocol | Rate of successful requests per second, by protocol.
Data Access Performance - Overview, by Nodes | Failures drill down | Displays the rate of failed requests per second, by method, node, and protocol.
Data Access Performance - Overview, by Nodes | Failed Requests/s by Method | Rate of failed requests per second, by method.
Data Access Performance - by Namespaces, by Nodes, by Protocols | Failed Requests/s by Node | Rate of failed requests per second, by node.
Data Access Performance - Overview, by Nodes | Failed Requests/s by Protocol | Rate of failed requests per second, by protocol.
Data Access Performance - Overview, by Nodes | Failed Requests/s by error code | Rate of failed requests per second, by error code.
Data Access Performance - by Nodes, by Namespaces, by Protocols | Compare TPS of successful requests | Select multiple nodes and compare rates of successful requests per second.
Data Access Performance - by Namespaces | Compare TPS of failed requests | Select multiple nodes and compare rates of failed requests per second, by error type (user/system).
Data Access Performance - by Nodes, by Protocols | Compare read bandwidth | Select multiple nodes and compare data access bandwidth (read) of successful requests per second.
Data Access Performance - by Nodes, by Protocols | Compare write bandwidth | Select multiple nodes and compare data access bandwidth (write) of successful requests per second.
Data Access Performance - by Nodes, by Protocols | Compare read latency | Select multiple nodes and compare latency of read requests.
Data Access Performance - by Nodes, by Protocols | Compare write latency | Select multiple nodes and compare latency of write requests.
Data Access Performance - by Nodes, by Protocols | Compare rate of failed requests/s | Select multiple nodes and compare rates of failed requests per second, split by error type (user/system).
Data Access Performance - by Namespaces | Request drill down by nodes | Rate of requests per second, split by node.
Data Movement - Overview | Watermark Lag | System real time minus the watermark of the policy.
Data Movement - Overview | Total Errors | Total number of errors for the selected data movement policy on the namespace and bucket.
Data Movement - Overview | Objects Copied | Total objects copied for the selected data movement policy on the namespace and bucket.
Data Movement - Overview | Bytes Copied | Total bytes copied for the selected data movement policy on the namespace and bucket.
Disk Bandwidth - by Nodes, Overview | Read or Write | Indicates whether the row describes read data or write data.
Disk Bandwidth - by Nodes, Overview | Nodes | The number of nodes in the VDC. You can click the nodes number to see the disk bandwidth metrics for each node. There is no Nodes column when you have drilled down into the Nodes display for a VDC.
Disk Bandwidth - by Nodes, Overview | Total | Total disk bandwidth that is used for either read or write operations.
Disk Bandwidth - by Nodes, Overview | Hardware Recovery | Rate at which disk bandwidth is used to recover data after a hardware failure.
Disk Bandwidth - by Nodes, Overview | Erasure Encoding | Rate at which disk bandwidth is used in system erasure coding operations.
Disk Bandwidth - by Nodes, Overview | XOR | Rate at which disk bandwidth is used in the XOR data protection operations of the system. XOR operations occur for systems with three or more sites (VDCs).
Disk Bandwidth - by Nodes, Overview | Consistency Checker | Rate at which disk bandwidth is used to check for inconsistencies between protected data and its replicas.
Disk Bandwidth - by Nodes, Overview | Geo | Rate at which disk bandwidth is used to support geo-replication operations.
Disk Bandwidth - by Nodes, Overview | User Traffic | Rate at which disk bandwidth is used by object users.
Node Rebalancing | Data Rebalanced | Amount of data that has been rebalanced.
Node Rebalancing | Pending Rebalancing | Amount of data that is in the rebalance queue but has not been rebalanced yet.
Node Rebalancing | Rate of Rebalance (per day) | The incremental amount of data that was rebalanced during a specific time period. The default time period is one day.
Process Health - Process List by Node | Process Restarts | The last time the process restarted on the node in the selected time range. The maximum time range is 5 days because it is limited by the retention policy.
Process Health - Overview | Avg. NIC Bandwidth | Average bandwidth of the network interface controller hardware that is used by the selected VDC or node.
Process Health - Process List by Node | NIC Bandwidth | Bandwidth of the network interface controller hardware that is used by the selected VDC or node.
Process Health - Overview | Avg. CPU Usage | Average percentage of the CPU hardware that is used by the selected VDC or node.
Process Health - Overview | Avg. Memory Usage | Average usage of the aggregate memory available to the VDC or node.
Process Health - by Nodes, Overview | Relative NIC (%) | Percentage of the available bandwidth of the network interface controller hardware that is used by the selected VDC or node.
Process Health - by Nodes, Overview, Process List by Node | Relative Memory (%) | Percentage of the memory used relative to the memory available to the selected VDC or node.
Process Health - by Nodes, Process List by Node | CPU Usage | Percentage of the node's CPU used by the process. The list of tracked processes is not the complete list of processes running on the node, so the sum of the CPU used by the processes does not equal the CPU usage shown for the node.
Process Health - by Nodes | Memory Usage | The memory used by the process.
Process Health - by Nodes, Overview, Process List by Node | Relative Memory (%) | Percentage of the memory used relative to the memory available to the process.
Process Health - Process List by Node | Avg. # Thread | Average number of threads used by the process.
Process Health - Process List by Node | Last Restart | The last time the process restarted on the node.
Process Health - by Nodes | Host | -
Process Health - Process List by Node | Process | -
Recovery Status | Amount of Data to be Recovered | With the Current filter selected, this is the logical size of the data yet to be recovered. When a historical period is selected as the filter, this is the average amount of data pending recovery during the selected time. For example, if the first hourly snapshot in a historical time period showed 400 GB of data to be recovered and every other snapshot showed 0 GB, the value of this field would be 400 GB divided by the total number of hourly snapshots in the period.
SSD Read Cache | Disk Usage | SSD space used by the read cache.
SSD Read Cache | Disk Capacity | Total SSD disk capacity.
Tech Refresh: Data Migration | Remaining Volume to Migrate | Graph of the remaining volume on the source nodes.
Tech Refresh: Data Migration | Migration Speed | Graph of the migration speed of the source nodes.
Tech Refresh: Data Migration | Data Migration Status | Detailed status of the migration on the source nodes. Migration speed and predictions are calculated based on the last 1 hour of the currently selected time interval.
Top Buckets | Top Buckets by Size | Top used buckets by size.
Top Buckets | Top Buckets by Object Count | Top used buckets by object count.
Top Buckets | Time of Calculation | The time at which the displayed Top Buckets metrics were calculated.

View mode
Steps
1. To view a dashboard in the view mode, click the title of a dashboard, for example, TPS (success/failure) > View.
The dashboard opens in the view mode or in the full-screen mode.
2. Click the Back to dashboard icon to return to the dashboards view.

Export CSV
Steps
1. To export the dashboard data to .csv format, click the title of a dashboard.
2. Navigate to Inspect > Data.
3. Click Download CSV to download the dashboard data as a .csv file to your local storage.

Data Access Performance - Overview


The Data Access Performance - Overview dashboard is the default dashboard.
In the Data Access Performance - Overview dashboard, you can monitor for all nodes in the VDC:
● TPS (success/failure)
● Bandwidth (read/write)
● Failed Requests/s by error type (user/system)
● Latency
● Successful Requests/s by Method
● Successful Requests/s by Protocol
● Failed Requests/s by Method
● Failed Requests/s by Protocol
● Failed Requests/s by error code
To view the Data Access Performance - Overview dashboard in the ECS Portal, select Advanced Monitoring.
Click Successful requests drill down to see the successful requests by all the methods, nodes, and protocols.
Click Failures drill down to see the failed requests by all the methods, nodes, protocols, and error code.
Click Related dashboards to view the other dashboards, with the selected time.

Data Access Performance - by Namespaces

In the Data Access Performance - by Namespaces dashboard, you can monitor for namespaces:
● TPS (success/failure)
● Failed Requests/s by error type (user/system)
● Successful Requests/s by Node
● Failed Requests/s by Node
● Compare TPS of successful requests
● Compare TPS of failed requests
To view the Data Access Performance - by Namespaces dashboard in the ECS Portal, select Advanced Monitoring >
Related dashboards > Data Access Performance - by Namespaces.
All the namespace data are visible in the default view. To select a namespace, click the legend parameter for the namespace
below the graph.
Requests drill down by nodes shows the successful and failed requests by node.
Compare: selecting multiple namespaces compares the TPS of successful and failed requests.

Data Access Performance - by Nodes

In the Data Access Performance - by Nodes dashboard, you can monitor for nodes in a VDC:
● TPS (success/failure)
● Bandwidth (read/write)
● Failed Requests/s by error type (user/system)
● Latency
● Successful Requests/s by Method
● Successful Requests/s by Node
● Successful Requests/s by Protocol
● Failed Requests/s by Method
● Failed Requests/s by Node
● Failed Requests/s by Protocol
● Failed Requests/s by error code
● Compare TPS of successful requests
● Compare TPS of failed requests
● Compare read bandwidth
● Compare write bandwidth
● Compare read latency
● Compare write latency
To view the Data Access Performance - by Nodes dashboard in the ECS Portal, select Advanced Monitoring > Related
dashboards > Data Access Performance - by Nodes.
Data for all the nodes are visible in the default view. To select data for a node, click the legend parameter for the node below
the graph.
Successful requests drill down shows the successful requests by method, node, and protocol.
Failures drill down shows the failed requests by method, node, protocol, and error code.
Compare: selecting multiple nodes compares the TPS of successful and failed requests, read/write bandwidth, and read/write latency.

Data Access Performance - by Protocols

In the Data Access Performance - by Protocols dashboard, based on the protocol, you can monitor:
● TPS (success/failure)
● Bandwidth (read/write)
● Failed Requests/s by error type (user/system)
● Latency
● Successful Requests/s by Node
● Failed Requests/s by Node
● Compare TPS of successful requests
● Compare TPS of failed requests
● Compare read bandwidth
● Compare write bandwidth
● Compare read latency
● Compare write latency
To view the Data Access Performance - by Protocols dashboard in the ECS Portal, select Advanced Monitoring > Related
dashboards > Data Access Performance - by Protocols.
Data for all the protocols are visible in the default view. To select data for a protocol, click the legend parameter for the protocol
below the graph.
Requests drill down by nodes shows the successful and failed requests by node.
Compare: selecting multiple protocols compares the TPS of successful and failed requests, read/write bandwidth, and read/write latency.

Data Movement - Overview


The Data Movement - Overview dashboard is the default.
In the Data Movement - Overview dashboard, you can monitor the namespace and bucket in the VDC for a specific policy:
● Watermark Lag - If you are copying objects, a flat line with no upward movement indicates proper performance.
● Total Errors - If you are copying, you should investigate any number above zero.
● Objects Copied - If you are copying objects, a smooth upward line with no flattening indicates proper performance.
● Bytes Copied - If you are copying objects, a smooth upward line with no flattening indicates proper performance.
To view the Data Movement - Overview dashboard in the ECS Portal, select Advanced Monitoring.

Disk Bandwidth - by Nodes

You can use the Disk Bandwidth - by Nodes dashboard to monitor the disk usage metrics by read or write operations at the
node level. The dashboard displays the latest values.
To view the Disk Bandwidth - by Nodes dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Disk Bandwidth - by Nodes

Disk Bandwidth - Overview

You can use the Disk Bandwidth - Overview dashboard to monitor the disk usage metrics by read or write operations at the
VDC level.
To view the Disk Bandwidth - Overview dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Disk Bandwidth - Overview

Node Rebalancing

You can use the Node Rebalancing dashboard to monitor the status of data rebalancing operations when nodes are added to,
or removed from, a cluster. Node rebalancing is enabled by default at installation. Contact your customer support representative
to disable or re-enable this feature.
To view the Node Rebalancing dashboard, click Advanced Monitoring > expand Data Access Performance - Overview >
Node Rebalancing
A series of interactive graphs shows the amount of data rebalanced, the amount of data pending rebalancing, and the rate of rebalancing in bytes over time.
Node rebalancing works only for new nodes that are added to the cluster.

Process Health - by Nodes

You can use the Process Health - by Nodes dashboard to monitor, for each node of the VDC, the use of the network interface, CPU, and available memory. The dashboard displays the latest values, and the history graphs display values in the selected range.
To view the Process Health - by Nodes dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Process Health - by Nodes

Process Health - Overview

You can use the Process Health - Overview dashboard to monitor the VDC's use of the network interface, CPU, and available memory. The dashboard displays the latest average values, and the history graphs display values in the selected time range.
To view the Process Health - Overview dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Process Health - Overview

Process Health - Process List by Node

You can use the Process Health - Process List by Node dashboard to monitor each process's use of CPU and memory, its average thread count, and its last restart time. The dashboard displays the latest values in the selected time range.
To view the Process Health - Process List by Node dashboard, click Advanced Monitoring > expand Data Access
Performance - Overview > Process Health - Process List by Node

Recovery Status

You can use the Recovery Status dashboard to see:


● The latest value of the logical size of the data yet to be recovered in the selected time range, and
● History of the amount of data that is pending recovery in the selected time range.
To view the Recovery Status dashboard, click Advanced Monitoring > expand Data Access Performance - Overview >
Recovery Status.

SSD Read Cache


ECS is upgraded to enable SSD caching. There is a single SSD read cache drive per node. The SSD read cache feature is implemented on ECS Gen2 U-Series and Gen3 hardware.
If a VDC has a mixed hardware configuration where some nodes cannot support SSD read cache, then the SSD read cache feature is not supported in that VDC.
You can use the SSD Read Cache dashboard to monitor the total SSD disk capacity and the disk space that is used by the SSD read cache.
NOTE: Nodes that do not have SSD disks are also listed in the node selection drop-down, but their values are 0.

To view the SSD Read Cache dashboard, click Advanced Monitoring > expand Data Access Performance - Overview >
SSD Read Cache
See ECS Solve Online for details.

Tech Refresh: Data Migration


You can use the Tech Refresh: Data Migration dashboard to monitor the data migration off and on a node or cluster.
To view the Tech Refresh: Data Migration dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Tech Refresh: Data Migration

Top Buckets
ECS is upgraded with a metering mechanism that calculates the buckets with the highest utilization based on total object size and object count.
Statistics for the buckets with the highest utilization in the system are displayed in the monitoring dashboards. The number of buckets that are displayed on the monitoring dashboard is a configurable value.
To view the Top buckets dashboard, click Advanced Monitoring > expand Data Access Performance - Overview > Top
buckets.

Automatic Metering Reconstruction


Automatic metering reconstruction is a mechanism to reconstruct the metering statistics completely.
Metering is responsible for storing utilization statistics by namespace and bucket, based on object size and count. When an object is created in a bucket, the statistics are reported to the metering service, where they are aggregated and stored. Statistics are aggregated and mapped to the nearest multiple of five minutes. For example, objects that are created at 10:04:59 pm are mapped to 10:00:00 pm. The metering statistics are stored in time series format to provide a historical view of the statistics and to serve billing sample queries. The statistics are displayed in a time window.
As a result of logic errors in the implementation of metering and blob service side operations, wrong statistics can be reported to metering. Incorrect metering information is compounded and remains inaccurate from that point forward. Automatic metering reconstruction is a mechanism to overcome the problem of erroneous statistics.
This feature is disabled in ECS 3.5.0.0. You must manually enable it.
The automatic reconstruction is invoked in the following scenarios:
● During upgrade
● When the system recovers from a PSO

Share Advanced Monitoring Dashboards


The Share dashboard icon enables you to create a direct link to the dashboard or panel, share a snapshot of an interactive dashboard publicly, and export the dashboard to a JSON file.
For procedures on sharing the dashboard link, the dashboard snapshot, and the dashboard as a JSON file, see the Grafana documentation.

Flux API
The Flux API enables you to retrieve time series database data by sending REST queries using curl. You can get raw data from the fluxd service in a way similar to using the Dashboard API. You must obtain a token and provide it in each request.

Prerequisites
Requires one of the following roles:
● SYSTEM_ADMIN
● SYSTEM_MONITOR
Request payload examples

json:

{
"query": "from(bucket:\"monitoring_main\") |> range(start: -30m) |> filter(fn: (r) =>
r._measurement == \"statDataHead_performance_internal_transactions\")"
}

application/vnd.flux - CSV format:

query=from(bucket: "monitoring_main")
|> range(start: -30m)
|> filter(fn: (r) => r._measurement == "statDataHead_performance_internal_transactions")

Steps
1. Generate a token.

Token:

admin@ecs:> tok=$(curl -iks https://localhost:4443/login -u emcmonitor:#### | grep X-SDS-AUTH-TOKEN)

admin@ecs:/> echo $tok
X-SDS-AUTH-TOKEN:****

#### represents a password.
**** represents an X-SDS-AUTH-TOKEN value.
2. Run the query.
Curl arguments vary depending on the output format (JSON or CSV). See the examples for details.
Example
JSON example

admin@ecs:/> curl https://localhost:4443/flux/api/external/v2/query -XPOST -k -sS -H
"$tok" -H 'accept:application/json' -H 'content-type:application/json' -d '{
"query": "from(bucket:\"monitoring_main\") |> range(start: -30m) |> filter(fn: (r) =>
r._measurement == \"statDataHead_performance_internal_transactions\")" }'
{
"Series": [
{
"Datatypes": [
"long",
"dateTime:RFC3339",
"dateTime:RFC3339",
"dateTime:RFC3339",
"long",
"string",
"string",
"string",
"string",
"string",
"string"
],
"Columns": [
"table",
"_start",
"_stop",
"_time",
"_value",
"_field",
"_measurement",
"host",
"node_id",
"process",

"tag"
],
"Values": [
[
"0",
"2020-03-10T09:54:31.207799855Z",
"2020-03-10T10:24:31.207799855Z",
"2020-03-10T09:56:43Z",
"1",
"failed_request_counter",
"statDataHead_performance_internal_transactions",
"ecs.lss.emc.com",
"28cd473e-ca45-4623-b30d-0481c548a650",
"statDataHead",
"dashboard"
],
[
"0",
"2020-03-10T09:54:31.207799855Z",
"2020-03-10T10:24:31.207799855Z",
"2020-03-10T10:01:43Z",
"1",
"failed_request_counter",
"statDataHead_performance_internal_transactions",
"ecs.lss.emc.com",
"28cd473e-ca45-4623-b30d-0481c548a650",
"statDataHead",
"dashboard"
],
[
"0",
"2020-03-10T09:54:31.207799855Z",
"2020-03-10T10:24:31.207799855Z",
"2020-03-10T10:06:43Z",
"1",
"failed_request_counter",
"statDataHead_performance_internal_transactions",
"ecs.lss.emc.com",
"28cd473e-ca45-4623-b30d-0481c548a650",
"statDataHead",
"dashboard"
],

CSV example

admin@ecs:> curl https://localhost:4443/flux/api/external/v2/query -XPOST -k -sS -H
"$tok" -H 'accept:application/csv' -H 'content-type:application/vnd.flux' -d
'from(bucket:"monitoring_main") |> range(start:-30m) |> filter(fn: (r) => r._measurement
== "statDataHead_performance_internal_transactions")'
#datatype,string,long,dateTime:RFC3339,dateTime:RFC3339,dateTime:RFC3339,long,string,stri
ng,string,string,string,string
#group,false,false,true,true,false,false,true,true,true,true,true,true
#default,_result,,,,,,,,,,,
,result,table,_start,_stop,_time,_value,_field,_measurement,host,node_id,process,tag
,,0,2020-03-10T09:58:59.049910533Z,2020-03-10T10:28:59.049910533Z,2020-03-10T10:01:43Z,1,
failed_request_counter,statDataHead_performance_internal_transactions,ecs.lss.emc.com,28c
d473e-ca45-4623-b30d-0481c548a650,statDataHead,dashboard
,,0,2020-03-10T09:58:59.049910533Z,2020-03-10T10:28:59.049910533Z,2020-03-10T10:06:43Z,1,
failed_request_counter,statDataHead_performance_internal_transactions,ecs.lss.emc.com,28c
d473e-ca45-4623-b30d-0481c548a650,statDataHead,dashboard
,,0,2020-03-10T09:58:59.049910533Z,2020-03-10T10:28:59.049910533Z,2020-03-10T10:11:43Z,1,
failed_request_counter,statDataHead_performance_internal_transactions,ecs.lss.emc.com,28c
d473e-ca45-4623-b30d-0481c548a650,statDataHead,dashboard
,,0,2020-03-10T09:58:59.049910533Z,2020-03-10T10:28:59.049910533Z,2020-03-10T10:16:43Z,1,
failed_request_counter,statDataHead_performance_internal_transactions,ecs.lss.emc.com,28c
d473e-ca45-4623-b30d-0481c548a650,statDataHead,dashboard

Monitoring list of metrics


The following tags have common values across all measurements (see the example query after the list):

● host- name of data node
● node_id- ID of data node
● tag- internal, set to dashboard
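For example, a minimal Flux query sketch that uses these tags to restrict a measurement to a single data node (using the monitoring_main bucket from the request payload example above and the placeholder host name ecs_node_fqdn that the later examples use):

from(bucket: "monitoring_main")
|> range(start: -15m)
|> filter(fn: (r) => r._measurement == "statDataHead_performance_internal_transactions" and r.host == "ecs_node_fqdn")
|> keep(columns: ["_time", "_value", "_field", "host", "node_id"])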

Flux API field descriptions


cq_gc_data
● system_gc_pending- Metadata Reclaimable Garbage
● system_gc_reclaimed- Metadata Reclaimed Garbage
● system_gc_unreclaim- Metadata Unreclaimable Garbage
● user_gc_pending- User Data Reclaimable Garbage
● user_gc_reclaimed- User Data Reclaimed Garbage
● user_gc_unreclaim- User Data Unreclaimable Garbage
cq_gc_remaining_elements
● system_gc_total_remaining_garbage- Total Garbage of Metadata
● system_gc_full_garbage- Full Reclaimable Garbage of Metadata
● system_gc_garbage_in_reclaiming- Metadata Full Reclaimable Garbage in Reclaiming
● system_gc_partial_garbage- Partial Eligible Garbage of Metadata
● system_gc_partial_in_handling- Metadata Partial Eligible Garbage in Reclaiming
● user_gc_repo_total_garbage- Total Garbage of User Data
● user_gc_repo_full_garbage- Full Reclaimable Garbage of User Data
● user_gc_repo_partial_garbage- Partial Garbage of User Data
● user_gc_repo_eligible_partial - Partial Eligible Garbage of User Data
● user_gc_partial_in_handling- User Data Partial Eligible Garbage in Reclaiming

Monitoring list of metrics: Non-Performance

Database monitoring_main
Metrics in this database are raw; each is split by data node, that is, all have host and node_id tags.

Data for ECS Service I/O Statistics

Information:
Measurements in this section have the following structure:

service_IO_Statistics_data_read - for read I/O counters

service_IO_Statistics_data_write - for write I/O counters

Service is the name of the ECS service that produces the measurement, for example blob, cm, georcv,
statDataHead.

For example,

blob_IO_Statistics_data_read
cm_IO_Statistics_data_write

Measurement: blob_IO_Statistics_data_read
...
Tags: host, node_id, process, tag
Fields: read_CCTotal (float, bytes)
read_ECTotal (float, bytes)
read_GEOTotal (float, bytes)
read_RECOVERTotal (float, bytes)
read_USERTotal (float, bytes)
read_XORTotal (float, bytes)

Measurement: blob_IO_Statistics_data_write
...
Tags: host, node_id, process, tag
Fields: write_CCTotal (integer)
write_ECTotal (integer)
write_GEOTotal (integer)
write_RECOVERTotal (integer)
write_USERTotal (integer)
write_XORTotal (integer)
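As a sketch of how these counters can be queried over the Flux API (assuming the monitoring_main bucket shown in the earlier request payload example and the placeholder host name ecs_node_fqdn):

from(bucket: "monitoring_main")
|> range(start: -30m)
|> filter(fn: (r) => r._measurement == "blob_IO_Statistics_data_read" and r._field == "read_USERTotal" and r.host == "ecs_node_fqdn")
|> keep(columns: ["_time", "_value", "host"])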

Data for SSD Read cache

Measurement: blob_SSDReadCache_Stats
Tags: host, id, last, node_id, process
Fields: +Inf (integer)
0.0 (integer)
1000.0 (integer)
25000.0 (integer)
5000.0 (integer)
rocksdb_disk_capacity_failure_counter (integer)
rocksdb_disk_usage_counter_bytes (integer)
rocksdb_disk_usage_percentage_counter (integer)
ssd_capacity_counter_bytes (integer)
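As a sketch (assuming the monitoring_main bucket), the latest SSD capacity and read cache usage counters per node can be retrieved with a query such as:

from(bucket: "monitoring_main")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "blob_SSDReadCache_Stats" and (r._field == "ssd_capacity_counter_bytes" or r._field == "rocksdb_disk_usage_counter_bytes"))
|> last()
|> keep(columns: ["_time", "_value", "_field", "host"])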

CM statistics
These statistics represent processes in the ECS CM service, such as BTree GC, chunk management, and erasure coding.

Measurement: cm_BTREE_GC_Statistics
Tags: host, node_id, process, tag
Fields: accumulated_candidate_garbage_btree_gc_level_0 (integer)
accumulated_candidate_garbage_btree_gc_level_1 (integer)
accumulated_detected_data_btree_level_0 (integer)
accumulated_detected_data_btree_level_1 (integer)
accumulated_reclaimed_data_btree_level_0 (integer)
accumulated_reclaimed_data_btree_level_1 (integer)
candidate_chunks_btree_gc_level_0 (integer)
candidate_chunks_btree_gc_level_1 (integer)
candidate_garbage_btree_gc_level_0 (integer)
candidate_garbage_btree_gc_level_1 (integer)
copy_candidate_chunks_btree_gc_level_0 (integer)
copy_candidate_chunks_btree_gc_level_1 (integer)
copy_completed_chunks_btree_gc_level_0 (integer)
copy_completed_chunks_btree_gc_level_1 (integer)
copy_waiting_chunks_btree_gc_level_0 (integer)
copy_waiting_chunks_btree_gc_level_1 (integer)
deleted_chunks_btree_level_0 (integer)
deleted_chunks_btree_level_1 (integer)
deleted_data_btree_level_0 (integer)
deleted_data_btree_level_1 (integer)
full_reclaimable_chunks_btree_gc_level_0 (integer)
full_reclaimable_chunks_btree_gc_level_1 (integer)
reclaimed_data_btree_level_0 (integer)
reclaimed_data_btree_level_1 (integer)
usage_between_0%_and_5%_chunks_btree_gc_level_0 (integer)
usage_between_0%_and_5%_chunks_btree_gc_level_1 (integer)
usage_between_10%_and_15%_chunks_btree_gc_level_0 (integer)
usage_between_10%_and_15%_chunks_btree_gc_level_1 (integer)
usage_between_5%_and_10%_chunks_btree_gc_level_0 (integer)
usage_between_5%_and_10%_chunks_btree_gc_level_1 (integer)
verification_waiting_chunks_btree_gc_level_0 (integer)
verification_waiting_chunks_btree_gc_level_1 (integer)

Measurement: cm_Chunk_Statistics
Tags: host, node_id, process, tag
Fields: chunks_copy (integer)
chunks_copy_active (integer)
chunks_copy_s0 (integer)
chunks_level_0_btree (integer)
chunks_level_0_btree_active (integer)
chunks_level_0_btree_active_index_page (integer)
chunks_level_0_btree_active_leaf_page (integer)
chunks_level_0_btree_index_page (integer)
chunks_level_0_btree_leaf_page (integer)
chunks_level_0_btree_s0 (integer)
chunks_level_0_btree_s0_index_page (integer)
chunks_level_0_btree_s0_leaf_page (integer)
chunks_level_0_journal (integer)
chunks_level_0_journal_active (integer)
chunks_level_0_journal_s0 (integer)
chunks_level_1_btree (integer)
chunks_level_1_btree_active (integer)
chunks_level_1_btree_active_index_page (integer)
chunks_level_1_btree_active_leaf_page (integer)
chunks_level_1_btree_index_page (integer)
chunks_level_1_btree_leaf_page (integer)
chunks_level_1_btree_s0 (integer)
chunks_level_1_btree_s0_index_page (integer)
chunks_level_1_btree_s0_leaf_page (integer)
chunks_level_1_journal (integer)
chunks_level_1_journal_active (integer)
chunks_level_1_journal_s0 (integer)
chunks_repo (integer)
chunks_repo_active (integer)
chunks_repo_s0 (integer)
chunks_typeII_ec_pending (integer)
chunks_typeI_ec_pending (integer)
chunks_undertransform_ec_pending (integer)
chunks_xor (integer)
data_copy (integer)
data_level_0_btree (integer)
data_level_0_btree_index_page (integer)
data_level_0_btree_leaf_page (integer)
data_level_0_journal (integer)
data_level_1_btree (integer)
data_level_1_btree_index_page (integer)
data_level_1_btree_leaf_page (integer)
data_level_1_journal (integer)
data_repo (integer)
data_repo_copy (integer)
data_xor (integer)
data_xor_shipped (integer)

Measurement: cm_EC_Statistics
Tags: host, node_id, process, tag
Fields: chunks_ec_encoded (integer)
chunks_ec_encoded_alive (integer)
data_ec_encoded (integer)
data_ec_encoded_alive (integer)

Measurement: cm_Geo_Replication_Statistics_Geo_Chunk_Cache
Tags: host, node_id, process, tag
Fields: Capacity_of_Cache (integer)
Number_of_Chunks (integer)

Measurement: cm_REPO_GC_Statistics
Tags: host, node_id, process, tag
Fields: accumulated_deleted_garbage_repo (integer)
accumulated_reclaimed_garbage_repo (integer)
deleted_chunks_repo (integer)
deleted_data_repo (integer)
ec_freed_slots (integer)
full_reclaimable_aligned_chunk (integer)
merge_copy_overhead_in_deleted_data_repo (integer)
merge_copy_overhead_in_reclaimed_data_repo (integer)
reclaimed_chunk_repo (integer)
reclaimed_data_repo (integer)
slots_waiting_shipping (integer)
slots_waiting_verification (integer)
total_ec_free_slots (integer)

Measurement: cm_Rebalance_Statistics
Tags: host, node_id, process, tag
Fields: bytes_rebalanced (integer)
bytes_rebalancing_failed (integer)
chunks_canceled (integer)
chunks_for_rebalancing (integer)
chunks_rebalanced (integer)
chunks_total (integer)
jobs_canceled (integer)
segments_for_rebalancing (integer)
segments_rebalanced (integer)
segments_rebalancing_failed (integer)
segments_total (integer)

Measurement: cm_Rebalance_Statistics_CoS
Tags: CoS, host, node_id, process, tag
Fields: bytes_rebalanced (integer)
bytes_rebalancing_failed (integer)
chunks_canceled (integer)
chunks_for_rebalancing (integer)
chunks_rebalanced (integer)
chunks_total (integer)
jobs_canceled (integer)
segments_for_rebalancing (integer)
segments_rebalanced (integer)
segments_rebalancing_failed (integer)
segments_total (integer)

Measurement: cm_Recover_Statistics
Tags: host, node_id, process, tag
Fields: chunks_to_recover (integer)
data_recovered (integer)
data_to_recover (integer)

Measurement: cm_Recover_Statistics_CoS
Tags: CoS, host, node_id, process, tag
Fields: chunks_to_recover (integer)
data_recovered (integer)
data_to_recover (integer)

SR statistics
These statistics represent processes in the ECS SR service, which is responsible for space reclamation.

Measurement: sr_REPO_GC_Statistics
Tags: host, node_id, process, tag
Fields: accumulated_merge_copy_overhead_in_full_garbage (integer)
accumulated_total_repo_garbage (integer)
full_reclaimable_repo_chunk (integer)
garbage_in_partial_sr_tasks (integer)
garbage_in_repo_usage (integer)
merge_copy_overhead_in_full_garbage (integer)
merge_way_gc_processed_chunks (integer)
merge_way_gc_src_chunks (integer)
merge_way_gc_targeted_chunks (integer)
merge_way_gc_tasks (integer)
total_repo_garbage (integer)
usage_between_0%_and_33.3%_repo_chunk (integer)
usage_between_33.3%_and_50%_repo_chunk (integer)
usage_between_50%_and_66.7%_repo_chunk (integer)

SSM statistics
These statistics represent processes in the ECS storage manager service (SSM).

Measurement: ssm_sstable_SSTable_SS
Tags: SS, SSTable, last, process, tag
Fields: allocatedSpace (integer)
availableFreeSpace (integer)
downDurationTotal (integer)
freeSpace (integer)
largeBlockAllocated (integer)
largeBlockAllocatedSize (integer)
largeBlockFreed (integer)
largeBlockFreedSize (integer)
pendingDurationTotal (integer)
pingerDurationTotal (integer)
smallBlockAllocated (integer)
smallBlockFreed (integer)
smallBlockFreedSize (integer)
smallBlockSize (integer)
state (string)
timeInStateTotal (integer)
totalSpace (integer)
upDurationTotal (integer)

Measurement: ssm_sstable_SSTable_SS_datamigration
Tags: SS, SSTable, last, process
Fields: status (integer)
totalCapacityToMigrate (integer)

Database monitoring_last

Service status, memory, and cache statistics

Measurement: blob_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Measurement: blob_Total_memory_and_disk_cache_size
Tags: Total_memory_and_disk_cache_size, host, last, node_id, process
Fields: Disk_cache_size (integer)
Memory_cache_size (integer)

Measurement: cm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Measurement: eventsvc_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Measurement: mm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Measurement: resource_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Measurement: rm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Measurement: sr_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Measurement: sr_Total_memory_and_disk_cache_size
Tags: Total_memory_and_disk_cache_size, host, last, node_id, process
Fields: Disk_cache_size (integer)
Memory_cache_size (integer)

Measurement: ssm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)

Export of configuration framework values

Measurement: dtquery_cmf
Tags: last, process
Fields: com.emc.ecs.chunk.gc.btree.enabled (integer)
com.emc.ecs.chunk.gc.btree.scanner.verification.enabled (integer)
com.emc.ecs.chunk.gc.repo.enabled (integer)
com.emc.ecs.chunk.gc.repo.verification.enabled (integer)
com.emc.ecs.chunk.rebalance.is_enabled (integer)
com.emc.ecs.objectgc.cas.enabled (integer)
com.emc.ecs.sensor.btree_sr_pending_mininum (integer)
com.emc.ecs.sensor.repo_sr_pending_mininum (integer)

Top bucket statistics

Measurement: mm_topn_bucket_by_obj_count_place
Tags: last, place, process, tag
Fields: bucketName (string)
namespace (string)
value (integer)

Measurement: mm_topn_bucket_by_obj_size_place
Tags: last, place, process, tag
Fields: bucketName (string)
namespace (string)
value (integer)
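A minimal query sketch for these measurements, assuming the monitoring_last database is exposed as a bucket of the same name, as the other databases are in the examples in this chapter (each place returns its bucketName, namespace, and value fields):

from(bucket: "monitoring_last")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "mm_topn_bucket_by_obj_size_place")
|> last()
|> keep(columns: ["_time", "_value", "_field", "place"])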

Vnest membership and performance statistics

Measurement: vnestStat_membership_ismember
Tags: host, ismember, last, node_id, process
Fields: is_leader (string)

Measurement: vnestStat_performance_latency_type
Tags: host, id, last, node_id, process, type
Fields: +Inf (integer)
0.0 (integer)
1.0 (integer)
7999999.99999999 (integer)
825912.9477680004 (integer)
85266.52466135359 (integer)
8802.840841123942 (integer)
9.686250859269972 (integer)
908.7975284781536 (integer)
93.82345570870827 (integer)

Measurement: vnestStat_performance_transactions_from_type
Tags: from, host, last, node_id, process, type
Fields: failed_request_counter (integer)
succeed_request_counter (integer)

Database monitoring_op

Node system level statistics

Information:
Measurements listed in this section are from default Telegraf plugins. Here, measurement
name equals plugin name. Refer to plugin documentation for more information.

For example, see the Telegraf documentation for the "cpu" input plugin.

Measurement: cpu
Tags: cpu, host, node_id, tag
Fields: usage_guest (float)
usage_guest_nice (float)
usage_idle (float)
usage_iowait (float)
usage_irq (float)
usage_nice (float)
usage_softirq (float)
usage_steal (float)
usage_system (float)
usage_user (float)

Measurement: disk
Tags: device, fstype, host, mode, node_id, path, tag
Fields: free (integer)
inodes_free (integer)
inodes_total (integer)
inodes_used (integer)
total (integer)
used (integer)
used_percent (float)
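For example, a sketch of a query that returns the latest file system fill level per mount path on one node (using the placeholder host name ecs_node_fqdn from the examples later in this chapter):

from(bucket: "monitoring_op")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent" and r.host == "ecs_node_fqdn")
|> last()
|> keep(columns: ["_time", "_value", "path"])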

Measurement: diskio
Tags: ID_PART_ENTRY_UUID, SCSI_IDENT_SERIAL, SCSI_MODEL, SCSI_REVISION, SCSI_VENDOR,
host, name, node_id, tag
Fields: io_time (integer)
iops_in_progress (integer)
read_bytes (integer)
read_time (integer)
reads (integer)
weighted_io_time (integer)
write_bytes (integer)
write_time (integer)
writes (integer)

Measurement: linux_sysctl_fs
Tags: host, node_id, tag
Fields: aio-max-nr (integer)
aio-nr (integer)
dentry-age-limit (integer)
dentry-nr (integer)
dentry-unused-nr (integer)
dentry-want-pages (integer)
file-max (integer)
file-nr (integer)
inode-free-nr (integer)
inode-nr (integer)
inode-preshrink-nr (integer)

Measurement: mem
Tags: host, node_id, tag
Fields: active (integer)
available (integer)
available_percent (float)
buffered (integer)
cached (integer)
commit_limit (integer)
committed_as (integer)
dirty (integer)
free (integer)
high_free (integer)
high_total (integer)
huge_page_size (integer)
huge_pages_free (integer)
huge_pages_total (integer)
inactive (integer)
low_free (integer)
low_total (integer)
mapped (integer)
page_tables (integer)
shared (integer)
slab (integer)
swap_cached (integer)
swap_free (integer)
swap_total (integer)
total (integer)
used (integer)
used_percent (float)
vmalloc_chunk (integer)
vmalloc_total (integer)
vmalloc_used (integer)
wired (integer)
write_back (integer)
write_back_tmp (integer)

Measurement: net
Tags: host, interface, node_id, tag
Fields: bytes_recv (integer)
bytes_sent (integer)
bytes_sum (integer)
drop_in (integer)
drop_out (integer)
err_in (integer)
err_out (integer)
packets_recv (integer)
packets_sent (integer)
packets_sum (integer)
speed (integer)
utilization (integer)

Measurement: nstat
Tags: host, name, node_id, tag
Fields: IpExtInOctets (integer)
IpExtOutOctets (integer)
TcpInErrs (integer)
UdpInErrors (integer)

Measurement: processes
Tags: host, node_id, tag
Fields: blocked (integer)
dead (integer)
idle (integer)
paging (integer)
running (integer)
sleeping (integer)
stopped (integer)
total (integer)
total_threads (integer)
unknown (integer)
zombies (integer)

Measurement: procstat
Tags: host, node_id, process_name, tag, user
Fields: cpu_time (integer)
cpu_time_guest (float)
cpu_time_guest_nice (float)
cpu_time_idle (float)
cpu_time_iowait (float)
cpu_time_irq (float)
cpu_time_nice (float)
cpu_time_soft_irq (float)
cpu_time_steal (float)
cpu_time_stolen (float)
cpu_time_system (float)
cpu_time_user (float)
cpu_usage (float)
create_time (integer)
involuntary_context_switches (integer)
memory_data (integer)
memory_locked (integer)
memory_rss (integer)
memory_stack (integer)
memory_swap (integer)
memory_vms (integer)
nice_priority (integer)
num_fds (integer)
num_threads (integer)
pid (integer)
read_bytes (integer)
read_count (integer)
realtime_priority (integer)
rlimit_cpu_time_hard (integer)
rlimit_cpu_time_soft (integer)
rlimit_file_locks_hard (integer)
rlimit_file_locks_soft (integer)
rlimit_memory_data_hard (integer)
rlimit_memory_data_soft (integer)
rlimit_memory_locked_hard (integer)
rlimit_memory_locked_soft (integer)
rlimit_memory_rss_hard (integer)
rlimit_memory_rss_soft (integer)
rlimit_memory_stack_hard (integer)
rlimit_memory_stack_soft (integer)
rlimit_memory_vms_hard (integer)
rlimit_memory_vms_soft (integer)
rlimit_nice_priority_hard (integer)
rlimit_nice_priority_soft (integer)
rlimit_num_fds_hard (integer)
rlimit_num_fds_soft (integer)
rlimit_realtime_priority_hard (integer)
rlimit_realtime_priority_soft (integer)
rlimit_signals_pending_hard (integer)
rlimit_signals_pending_soft (integer)
signals_pending (integer)
voluntary_context_switches (integer)
write_bytes (integer)
write_count (integer)

Measurement: swap
Tags: host, node_id, tag
Fields: free (integer)
in (integer)
out (integer)
total (integer)
used (integer)
used_percent (float)

Measurement: system
Tags: host, node_id, tag
Fields: load1 (float)
load15 (float)
load5 (float)
n_cpus (integer)
n_users (integer)
uptime (integer)
uptime_format (string)

DT statistics

Measurement: dtquery_dt_dist_dt_node_id_type
Tags: dt_node_id, process, tag, type
Fields: count_i (integer)

Measurement: dtquery_dt_dist_host_dt_node_id
Tags: dt_node_id, process, tag
Fields: count_i (integer)

Measurement: dtquery_dt_dist_type_type
Tags: process, tag, type
Fields: count_i (integer)

Measurement: dtquery_dt_status
Tags: process, tag
Fields: total (integer)
unknown (integer)
unready (integer)

Measurement: dtquery_dt_status_detailed_type
Tags: process, tag, type
Fields: total (integer)
unknown (integer)
unready (integer)

Fabric agent statistics

Measurement: ecs_fabric_agent_dirstat_size_bytes
Tags: host, node_id, path, tag, url
Fields: gauge (float)

SR journal statistics

Measurement: sr_JournalParser_GC_RG_DT
Tags: DT, RG, last, process
Fields: majorMinorOfJournalRegion (string)
pendingChunks (integer)
timestampOfChunkRegion (string)
timestampOfJournalParserLastRun (string)

Measurement: sr_ObjectGC_CAS_RG
Tags: RG, last, process
Fields: STATUS (string)

Vnest Btree statistics

Measurement: vnestStat_btree
Tags: cumulative_stats, host, level, node_id, tag
Fields: level_count (float)
page_count (float)
size_bytes (float)

Database monitoring_vdc
Metrics in this database are calculated values over the whole VDC, without reference to a particular data node.

Information:

The metrics below are aggregated over data nodes from the raw measurements used in the Grafana ECS UI.

Measurement: cq_disk_bandwidth
Tags: type_op ('read', 'write')
Fields: consistency_checker (float)
erasure_encoding (float)
geo (float)
hardware_recovery (float)
total (float)
user_traffic (float)
xor (float)

Measurement: cq_node_rebalancing_summary
Tags: none
Fields: data_rebalanced (integer)
pending_rebalance (integer)

Measurement: cq_process_health
Tags: none
Fields: cpu_used (float)
mem_used (float)
mem_used_percent (float)
nic_bytes (float)
nic_utilization (float)

Measurement: cq_recover_status_summary
Tags: none
Fields: data_recovered (integer)
data_to_recover (integer)
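As a sketch (assuming the monitoring_vdc database is exposed as a bucket of the same name, like monitoring_main and monitoring_op in the examples in this chapter), the VDC-level disk bandwidth that user read traffic consumes could be queried as:

from(bucket: "monitoring_vdc")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cq_disk_bandwidth" and r.type_op == "read" and r._field == "user_traffic")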

Monitoring list of metrics: Performance

Information about generic tag values


The following tags have common values across all measurements:
● process- internal, set to statDataHead
● head- type of protocol, for example S3
● namespace- name of the namespace
● method - protocol-specific request method, for example GET, POST, READ, WRITE

Database monitoring_main
Performance metrics in this database are raw; each is split by data node, that is, all have host and node_id tags.
Most integer fields are increasing counters, that is, values that increase over time. Increasing counters restart from zero after a datahead service restart.
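Because these fields are increasing counters, they are normally converted to rates before being displayed. A minimal sketch of such a conversion using the standard Flux derivative() function (an illustration, not the exact transformation used by the built-in dashboards):

from(bucket: "monitoring_main")
|> range(start: -30m)
|> filter(fn: (r) => r._measurement == "statDataHead_performance_internal_transactions" and r._field == "succeed_request_counter")
// nonNegative: true avoids negative rates when the counter restarts from zero
|> derivative(unit: 1s, nonNegative: true)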

Measurement: statDataHead_performance_internal_error
Tags: host, node_id, process, tag
Fields: system_errors (integer)
user_errors (integer)

Measurement: statDataHead_performance_internal_error_code
Tags: code, host, node_id, process, tag
Fields: error_counter (integer)

Measurement: statDataHead_performance_internal_error_head
Tags: head, host, node_id, process, tag
Fields: system_errors (integer)
user_errors (integer)

Measurement: statDataHead_performance_internal_error_head_namespace
Tags: head, host, namespace, node_id, process, tag
Fields: system_errors (integer)
user_errors (integer)

Measurement: statDataHead_performance_internal_latency
Tags: host, id, node_id, process, tag
Fields: +Inf (integer)
0.0 (integer)
1.0 (integer)
111.6295328521717 (integer)
12461.15260479408 (integer)
23.183877401213103 (integer)
2588.0054039994393 (integer)
4.814963904455889 (integer)
537.4921713544796 (integer)
59999.999999999985 (integer)

Measurement: statDataHead_performance_internal_latency_head
Tags: head, host, id, node_id, process, tag
Fields: +Inf (integer)
0.0 (integer)
1.0 (integer)
111.6295328521717 (integer)
12461.15260479408 (integer)
23.183877401213103 (integer)
2588.0054039994393 (integer)
4.814963904455889 (integer)
537.4921713544796 (integer)
59999.999999999985 (integer)

Measurement: statDataHead_performance_internal_throughput
Tags: host, node_id, process, tag
Fields: total_read_requests_size (integer)
total_write_requests_size (integer)

Measurement: statDataHead_performance_internal_throughput_head
Tags: head, host, node_id, process, tag
Fields: total_read_requests_size (integer)
total_write_requests_size (integer)

Measurement: statDataHead_performance_internal_transactions
Tags: host, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)

Measurement: statDataHead_performance_internal_transactions_head
Tags: head, host, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)

Measurement: statDataHead_performance_internal_transactions_head_namespace
Tags: head, host, namespace, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)

Measurement: statDataHead_performance_internal_transactions_method
Tags: host, method, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)

Database monitoring_vdc
Performance metrics in this database are calculated values over the whole VDC, without reference to a particular data node.
Most of the values are one of the following (see the example queries after this list):

● Rates (number of requests per second) - for all measurements not ending in "_delta"
● Delta values (the increase of a counter from the previous time stamp) - for all measurements ending in "_delta"
● Downsampled values (aggregated to one point per day) - for all measurements ending in "_downsampled"
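For illustration, a sketch of two queries (assuming a monitoring_vdc bucket, as above) that read the same transaction statistic as a rate and as a delta:

// rate variant: successful requests per second across the VDC
from(bucket: "monitoring_vdc")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cq_performance_transaction" and r._field == "succeed_request_counter")

// delta variant: increase of the counter between consecutive points
from(bucket: "monitoring_vdc")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cq_performance_transaction_delta" and r._field == "succeed_request_counter_i")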

Measurement: cq_performance_error
Tags: none
Fields: system_errors (float)
user_errors (float)

Measurement: cq_performance_error_downsampled
Tags: none
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_code
Tags: code
Fields: error_counter (float)

Measurement: cq_performance_error_code_downsampled
Tags: code
Fields: error_counter (float)
Measurement: cq_performance_error_delta
Tags: none
Fields: system_errors_i (integer)
user_errors_i (integer)

Measurement: cq_performance_error_delta_downsampled
Tags: none
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_error_head
Tags: head
Fields: system_errors (float)
user_errors (float)

Measurement: cq_performance_error_head_downsampled
Tags: head
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_head_delta
Tags: head
Fields: system_errors_i (integer)
user_errors_i (integer)

Measurement: cq_performance_error_head_delta_downsampled
Tags: head
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_error_ns
Tags: namespace
Fields: system_errors (float)
user_errors (float)

Measurement: cq_performance_error_ns_downsampled
Tags: namespace
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_ns_delta
Tags: namespace
Fields: system_errors_i (integer)
user_errors_i (integer)

Measurement: cq_performance_error_ns_delta_downsampled
Tags: namespace
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_latency
Tags: id
Fields: p50 (float)
p99 (float)
Measurement: cq_performance_latency_downsampled
Tags: id
Fields: p50 (float)
p99 (float)
Measurement: cq_performance_latency_head
Tags: head, id
Fields: p50 (float)
p99 (float)

Measurement: cq_performance_latency_head_downsampled
Tags: head, id
Fields: p50 (float)
p99 (float)
Measurement: cq_performance_throughput
Tags: none
Fields: total_read_requests_size (float)
total_write_requests_size (float)

Measurement: cq_performance_throughput_downsampled
Tags: none
Fields: total_read_requests_size (float)
total_write_requests_size (float)
Measurement: cq_performance_throughput_head
Tags: head
Fields: total_read_requests_size (float)
total_write_requests_size (float)

Measurement: cq_performance_throughput_head_downsampled
Tags: head
Fields: total_read_requests_size (float)
total_write_requests_size (float)
Measurement: cq_performance_transaction
Tags: none
Fields: failed_request_counter (float)
succeed_request_counter (float)

Measurement: cq_performance_transaction_downsampled
Tags: none
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_delta
Tags: none
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)

Measurement: cq_performance_transaction_delta_downsampled
Tags: none
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Measurement: cq_performance_transaction_head
Tags: head
Fields: failed_request_counter (float)
succeed_request_counter (float)

Measurement: cq_performance_transaction_head_downsampled
Tags: head
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_head_delta
Tags: head
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)

Measurement: cq_performance_transaction_head_delta_downsampled
Tags: head
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Measurement: cq_performance_transaction_method
Tags: method
Fields: failed_request_counter (float)
succeed_request_counter (float)

Measurement: cq_performance_transaction_method_downsampled
Tags: method
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_ns
Tags: namespace
Fields: failed_request_counter (float)
succeed_request_counter (float)

Measurement: cq_performance_transaction_ns_downsampled
Tags: namespace
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_ns_delta
Tags: namespace
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)

Measurement: cq_performance_transaction_ns_delta_downsampled
Tags: namespace
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)

Flux API replacements for deprecated dashboard API

Processes statistics

Dashboard API

GET /dashboard/nodes/{id}/processes

GET /dashboard/processes/{id}

Flux API
Database:
● monitoring_op
Measurement:
● procstat (for detailed info on available fields and tags, see Github Influx Data Telegraf Inputs Procstat)
Fields:
● memory_rss- resident memory of a process (bytes)
● cpu_usage- cpu usage percentage for a process (percent used of a single cpu)
● num_threads- number of threads used by process (int)
Tags:
● process_name- valid process names:
○ nvmeengine
○ nvmetargetviewer
○ dtsm
○ rack-service-manager
○ rpcbind
○ blobsvc
○ cm
○ coordinatorsvc
○ dataheadsvc
○ dtquery
○ ecsportalsvc
○ eventsvc
○ georeceiver
○ metering
○ objcontrolsvc
○ resourcesvc
○ transformsvc
○ vnest
○ fluxd
○ influxd
○ throttler
○ grafana-server
○ dockerd
○ fabric-agent
○ fabric-lifecycle
○ fabric-registry
○ fabric-zookeeper
● host- ecs_node_fqdn
● node_id- host id
● range- maximum range is 1 hour
NOTE: Due to resource limitations, the range is limited to a maximum of 1 hour.

For a replacement of /dashboard/processes/{id}, specify the corresponding r.process_name
and r.node_id fields according to the "{id}" value.

For example, id "330e4b8f-4491-4ec7-b816-7b10ac9c6abf-cm" equals to:

r.node_id == "330e4b8f-4491-4ec7-b816-7b10ac9c6abf"
r.process_name == "cm"

Example query:

from(bucket: "monitoring_op")
|> filter(fn: (r) => r._measurement == "procstat" and r._field == "memory_rss" and
r.process_name == "vnest" and r.host == "ecs_node_fqdn")
|> range(start: -1h)
|> keep(columns: ["_time", "_value", "process_name"])

Example output:

#datatype,string,long,dateTime:RFC3339,long,string
#group,false,false,false,false,true
#default,_result,,,,
,result,table,_time,_value,process_name
,,0,2019-08-15T13:05:00Z,2505809920,vnest
,,0,2019-08-15T13:10:00Z,2505887744,vnest
,,0,2019-08-15T13:15:00Z,2506014720,vnest
,,0,2019-08-15T13:20:01Z,2506010624,vnest

Nodes statistics

Dashboard API

GET /dashboard/nodes/{id}

Database:
● monitoring_op
Measurement:
● cpu (for detailed info on available fields and tags, see Github Influx Data Telegraf CPU Input Plugin)
Fields:
● usage_idle- idle CPU usage (percent)
Tags:
● host- ecs_node_fqdn
● node_id- host id
● range- maximum range is 1 hour
NOTE: Due to resource limitations, the range is limited to a maximum of 1 hour.

Example query:

from(bucket: "monitoring_op")
|> filter(fn: (r) => r._measurement == "cpu" and r.cpu == "cpu-total" and r._field ==
"usage_idle" and r.host == "ecs_node_fqdn")
|> range(start: -1h)
|> keep(columns: ["_time", "_value", "host"])

Example output:

#datatype,string,long,dateTime:RFC3339,double,string
#group,false,false,false,false,true
#default,_result,,,,
,result,table,_time,_value,host
,,0,2019-08-15T13:20:00Z,19.549454477395525,host_name
,,0,2019-08-15T13:25:00Z,17.920104933062728,host_name
,,0,2019-08-15T13:30:00Z,18.050788903551002,host_name
,,0,2019-08-15T13:35:00Z,19.801364027505095,host_name

Measurement:
● mem (for detailed info on available fields and tags, see Github Influx Data Telegraf Memory Input Plugin)
Fields:
● free- free memory on the host (bytes)
Tags:
● host- ecs_node_fqdn
● node_id- host id
● range- maximum range is 1 hour
NOTE: Due to resource limitations, the range is limited to a maximum of 1 hour.

Example query:

from(bucket: "monitoring_op")
|> filter(fn: (r) => r._measurement == "mem" and r._field == "free" and r.host ==
"ecs_node_fqdn")
|> range(start: -1h)
|> keep(columns: ["_time", "_value", "host"])

Example output:

#datatype,string,long,dateTime:RFC3339,long,string
#group,false,false,false,false,true
#default,_result,,,,
,result,table,_time,_value,host
,,0,2019-08-15T14:10:00Z,3181088768,host_name
,,0,2019-08-15T14:15:00Z,2988388352,host_name
,,0,2019-08-15T14:20:00Z,3002994688,host_name
,,0,2019-08-15T14:25:00Z,3115741184,host_name

Performance statistics

Dashboard API

GET /dashboard/nodes/{id}

GET /dashboard/zones/localzone

GET /dashboard/zones/localzone/nodes

Dashboard APIs
Lists the APIs that are changed or deprecated.

APIs changed in ECS 3.6.0.0


The following APIs are changed in ECS 3.6.0.0:
● /dashboard/zones/localzone
● /dashboard/zones/localzone/nodes
● /dashboard/nodes/{id}
● /dashboard/storagepools/{id}/nodes
From the above APIs, the following data are removed:
● nodeCpuUtilization*, nodeMemoryUtilizationBytes*, nodeMemoryUtilization*
● nodeNicBandwidth*, nodeNicReceivedBandwidth*, nodeNicTransmittedBandwidth*
● nodeNicUtilization*, nodeNicReceivedUtilization*, nodeNicTransmittedUtilization*
● capacityRebalanceEnabled, capacityRebalanced, capacityPendingRebalancing
● capacityRebalancedAvg, capacityRebalanceRate, capacityPendingRebalancingAvg
● transactionReadLatency, transactionWriteLatency, transactionReadBandwidth, transactionWriteBandwidth
● transactionReadTransactionsPerSec, transactionWriteTransactionsPerSec, transactionErrors.*
● diskReadBandwidthTotal, diskWriteBandwidthTotal, diskReadBandwidthEc, diskWriteBandwidthEc
● diskReadBandwidthCc, diskWriteBandwidthCc, diskReadBandwidthRecovery, diskWriteBandwidthRecovery
● diskReadBandwidthGeo, diskWriteBandwidthGeo, diskReadBandwidthUser, diskWriteBandwidthUser
● diskReadBandwidthXor, diskWriteBandwidthXor

Alternative places to find removed data


The following table describes where to find replacements for the removed data. All data are accessible through the Flux API.
NOTE: Not all removed data have direct alternatives. Some of the removed data must be calculated from other metrics.

Table 62. Alternative places to find removed data

1. Node system level data
Data removed: nodeCpuUtilization*, nodeMemoryUtilizationBytes*, nodeMemoryUtilization*, nodeNicBandwidth*, nodeNicReceivedBandwidth*, nodeNicTransmittedBandwidth*, nodeNicUtilization*, nodeNicReceivedUtilization*, nodeNicTransmittedUtilization*
Where replacement can be found: See Monitoring list of metrics: Non-Performance > Database monitoring_op > Node system level statistics.
Measurements: cpu, mem, net

2. Rebalance related data

2.1 Data removed: capacityRebalanced, capacityPendingRebalancing, capacityRebalancedAvg, capacityRebalanceRate, capacityPendingRebalancingAvg
Where replacement can be found: See Monitoring list of metrics: Non-Performance > Database monitoring_vdc.
Measurement: cq_node_rebalancing_summary

2.2 Data removed: capacityRebalanceEnabled
Where replacement can be found: See Monitoring list of metrics: Non-Performance > Database monitoring_last > Export of configuration framework values.
Measurement: dtquery_cmf
Field: com.emc.ecs.chunk.rebalance.is_enabled (integer)

3. Transaction-related data
Data removed: transactionReadLatency, transactionWriteLatency, transactionReadBandwidth, transactionWriteBandwidth, transactionReadTransactionsPerSec, transactionWriteTransactionsPerSec, transactionErrors*
Where replacement can be found: For VDC metrics, see Monitoring list of metrics: Performance > Database monitoring_vdc. For Node metrics, see Monitoring list of metrics: Performance > Database monitoring_main.

4. Disk-related data
Data removed: diskReadBandwidthTotal, diskWriteBandwidthTotal, diskReadBandwidthEc, diskWriteBandwidthEc, diskReadBandwidthCc, diskWriteBandwidthCc, diskReadBandwidthRecovery, diskWriteBandwidthRecovery, diskReadBandwidthGeo, diskWriteBandwidthGeo, diskReadBandwidthUser, diskWriteBandwidthUser, diskReadBandwidthXor, diskWriteBandwidthXor
Where replacement can be found: For VDC metrics, see Monitoring list of metrics: Non-Performance > Database monitoring_vdc. For Node metrics, see Monitoring list of metrics: Non-Performance > Data for ECS Service I/O Statistics.
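
For example, the capacityRebalanceEnabled replacement in row 2.2 can be read with a query such as the following minimal sketch, which assumes that the monitoring_last database is exposed as a Flux bucket of the same name, in the same way that monitoring_op is in the earlier examples:

from(bucket: "monitoring_last")
|> filter(fn: (r) => r._measurement == "dtquery_cmf" and r._field ==
"com.emc.ecs.chunk.rebalance.is_enabled")
|> range(start: -1h)
|> keep(columns: ["_time", "_value"])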

APIs removed in ECS 3.5.0.0


The following table lists the APIs that are removed in ECS 3.5.0.0:

Table 63. APIs removed in ECS 3.5.0.0
API Name Syntax Description
Get Process GET /dashboard/processes/{id} Gets the process instance details.
Get Node Processes GET /dashboard/nodes/{id}/processes Gets the details of processes in the node.

Document feedback
If you have any feedback or suggestions regarding this document, email ecs.docfeedback@dell.com.


Index
A
Access Management 68

E
ECS management user role 60

K
Key rotation limitations 157

M
migrating external key management 154

N
Namespace Administrator 59

R
Rotate Keys 157
