ECS Administration Guide
3.8.1
October 2024
Rev. 1.2
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2024 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be trademarks of their respective owners.
Contents
Figures..........................................................................................................................................9
Tables..........................................................................................................................................10
Chapter 1: Overview.....................................................................................................................12
Revision history.................................................................................................................................................................. 12
Introduction......................................................................................................................................................................... 12
ECS platform.......................................................................................................................................................................13
ECS data protection..........................................................................................................................................................14
Configurations for availability, durability, and resilience..................................................................................... 15
ECS network....................................................................................................................................................................... 16
Load balancing considerations........................................................................................................................................ 16
Chapter 4: Authentication Providers............................................................................................37
Introduction to authentication providers..................................................................................................................... 37
Working with authentication providers in the ECS Portal.......................................................................................37
Considerations when adding Active Directory authentication providers....................................................... 38
AD or LDAP authentication provider settings...................................................................................................... 38
Add an AD or LDAP authentication provider........................................................................................................ 42
Add a Keystone authentication provider................................................................................................................42
Chapter 5: Namespaces...............................................................................................................44
Introduction to namespaces........................................................................................................................................... 44
Namespace tenancy....................................................................................................................................................44
Working with namespaces in the ECS Portal............................................................................................................. 45
Namespace settings................................................................................................................................................... 45
Create a namespace................................................................................................................................................... 49
Edit a namespace........................................................................................................................................................ 50
Delete a namespace.....................................................................................................................................................51
New Group..................................................................................................................................................................... 71
Delete Groups................................................................................................................................................................71
Roles...................................................................................................................................................................................... 71
New Role........................................................................................................................................................................72
Delete Roles.................................................................................................................................................................. 72
Policies................................................................................................................................................................................. 72
New Policy.....................................................................................................................................................................73
Delete Policies.............................................................................................................................................................. 73
Policy Simulator - Existing Policies.........................................................73
Policy Simulator - New Policy..................................................................74
Identity Provider................................................................................................................................................................ 75
New Identity Provider.................................................................................................................................................75
Delete Providers.......................................................................................................................................................... 75
SAML Service Provider Metadata.................................................................................................................................76
Generate SAML Service Provider Metadata........................................................................................................ 76
Root Access Key................................................................................................................................................................76
Create Access Key...................................................................................................................................................... 76
Chapter 8: Buckets......................................................................................................................78
Introduction to buckets....................................................................................................................................................78
Working with buckets in the ECS Portal..................................................................................................................... 79
Bucket settings............................................................................................................................................................ 79
Create a bucket........................................................................................................................................................... 82
Edit a bucket.................................................................................................................................................................83
Set ACLs........................................................................................................................................................................84
Set bucket policies...................................................................................................................................................... 86
Restrict user IP addresses that can access a CAS bucket............................................................................... 90
Create a bucket using the S3 API (with s3curl).........................................................................................................91
Bucket HTTP headers................................................................................................................................................ 93
Enable Data Movement................................................................................................................................................... 93
Data Mobility Common Issues .................................................................................................................................95
Troubleshoot Data Mobility.......................................................................................................................................95
Data Mobility Debug Logging .................................................................................................................................. 95
Bucket, object, and namespace naming conventions.............................................................................................. 96
S3 bucket and object naming in ECS..................................................................................................................... 96
OpenStack Swift container and object naming in ECS...................................................................................... 97
Atmos bucket and object naming in ECS...............................................................................................................97
CAS pool and object naming in ECS....................................................................................................................... 97
Simplified bucket delete ................................................................................................................................................. 98
Delete a bucket............................................................................................................................................................ 99
Simplified bucket delete common issues ..............................................................................................................99
Simplified bucket delete log files and debug logging ....................................................................................... 100
Priority task coordinator ............................................................................................................................................... 100
Priority task coordinator common issues ............................................................................................................ 101
Partial list results.............................................................................................................................................................. 101
Bucket listing limitation................................................................................................................................................... 101
Disable unused services................................................................................................................................................. 102
Introduction to file access............................................................................................................................................. 104
ECS multi-protocol access............................................................................................................................................ 105
S3/NFS multi-protocol access to directories and files.................................................................................... 105
Multiprotocol access permissions..........................................................................................................................105
Working with NFS exports in the ECS Portal........................................................................................................... 107
Working with user or group mappings in the ECS Portal.......................................................................................107
ECS NFS configuration tasks....................................................................................................................................... 108
Create a bucket for NFS using the ECS Portal..................................................................................................108
Add an NFS export.................................................................................................................................................... 109
Add a user or group mapping using the ECS Portal............................................................................................111
Configure ECS NFS with Kerberos security......................................................................................................... 111
Mount an NFS export example..................................................................................................................................... 116
Best practices for mounting ECS NFS exports...................................................................................................117
NFS access using the ECS Management REST API................................................................................................ 117
NFS WORM (Write Once, Read Many)...................................................................................................................... 118
S3A support...................................................................................................................................................................... 120
Configuration at ECS................................................................................................................................................ 120
Configuration at Hadoop Node............................................................................................................................... 121
Geo-replication status.................................................................................................................................................... 122
Reset the object certificate.....................................................................................................................................147
Recovery on disk and node failures.............................................................................................................................188
NFS file system access during a node failure..................................................................................................... 189
Data rebalancing after adding new nodes................................................................................................................. 189
Figures
Tables
41 Expected file operations.......................................................................................................................................120
42 Rack...........................................................................................................................................................................123
43 Node.......................................................................................................................................................................... 124
44 Disk............................................................................................................................................................................ 124
45 Distinguished Name (DN) fields......................................................................................................................... 139
46 Key Management properties............................................................................................................................... 152
47 Create a cluster...................................................................................................................................................... 153
48 New external key servers.....................................................................................................................................155
49 Key Management properties............................................................................................................................... 156
50 Secure Remote Services properties.................................................................................................................. 158
51 Syslog facilities used by ECS.............................................................................................................................. 168
52 Syslog severity keywords.....................................................................................................................................168
53 ECS Management REST API calls for managing node locking....................................................................170
54 Password rules........................................................................................................................................................ 173
55 Sessions.................................................................................................................................................................... 174
56 User agreement...................................................................................................................................................... 175
57 Object version limitation settings.......................................................................................................................176
58 Object Lock and ADO in different types of buckets..................................................................................... 185
59 Example scenario where locked data can be lost in TSO.............................................................................186
60 Advanced monitoring dashboards......................................................................................................................190
61 Advanced monitoring dashboard fields..............................................................................................................191
62 Alternative places to find removed data..........................................................................................................220
63 APIs removed in ECS 3.5.0................................................................................................................................. 222
1
Overview
Topics:
• Revision history
• Introduction
• ECS platform
• ECS data protection
• ECS network
• Load balancing considerations
Revision history
Table 1. Revision dates and changes
Revision Date Description of change
October 2024 Rev 1.2 Updated TSO and PSO Minimum Requirements.
July 2024 Rev 1.1 Removed broken links.
April 2024 Rev 1.0 Initial release of ECS 3.8.1.
Introduction
Dell EMC ECS provides a complete software-defined cloud storage platform that supports the storage, manipulation, and
analysis of unstructured data on a massive scale on commodity hardware. You can deploy ECS as a turnkey storage appliance or
as a software product that is installed on a set of qualified commodity servers and disks. ECS offers the cost advantages of a
commodity infrastructure and the enterprise reliability, availability, and serviceability of traditional arrays.
ECS uses a scalable architecture that includes multiple nodes and attached storage devices. The nodes and storage devices are
commodity components, similar to devices that are generally available, and are housed in one or more racks.
A rack and its components that are supplied by Dell EMC and that have preinstalled software is referred to as an ECS
appliance. A rack and commodity nodes that are not supplied by Dell EMC are referred to as a Dell EMC ECS software-only
solution. Multiple racks are referred to as a cluster.
A rack, or multiple joined racks, with processing and storage that is handled as a coherent unit by the ECS infrastructure
software is referred to as a site, and at the ECS software level as a Virtual Data Center (VDC). When you add a VDC and its
storage pool to a replication group, it appears as a Zone.
Management users can access the ECS UI, which is referred to as the ECS Portal, to perform administration tasks. Management
users can be assigned one of four roles: Security Administrator, System Administrator, Namespace Administrator, and System
Monitor. Management tasks that can be performed in the ECS Portal can also be performed by using the ECS Management
REST API.
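As a hedged sketch of how the ECS Management REST API is typically driven: a client authenticates with HTTP basic credentials against the /login endpoint and receives a session token in the X-SDS-AUTH-TOKEN response header, which it passes back on subsequent management calls. The node address, credentials, and the namespace-listing path below are placeholders to confirm against the ECS Management REST API reference for your release.

```python
import base64
import urllib.request

# Hypothetical management endpoint -- use your node or load balancer address.
ECS_NODE = "https://ecs-node.example.com:4443"

def login_request(user: str, password: str) -> urllib.request.Request:
    """Build the login call: GET /login with HTTP basic auth.
    ECS answers with a session token in the X-SDS-AUTH-TOKEN response header."""
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        f"{ECS_NODE}/login",
        headers={"Authorization": f"Basic {cred}"},
    )

def management_request(path: str, token: str) -> urllib.request.Request:
    """Build a follow-up management call that passes the token back."""
    return urllib.request.Request(
        f"{ECS_NODE}{path}",
        headers={"X-SDS-AUTH-TOKEN": token, "Accept": "application/json"},
    )
```

The requests are built but not sent here; in practice you would pass them to `urllib.request.urlopen` (with appropriate TLS handling for self-signed node certificates).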
ECS administrators can perform the following tasks in the ECS Portal:
● Configure and manage the object store infrastructure (compute and storage resources) for object users.
● Manage users, roles, and buckets within namespaces. Namespaces are equivalent to tenants.
Object users cannot access the ECS Portal, but can access the object store to read and write objects and buckets by using
clients that support the following data access protocols:
● Amazon Simple Storage Service (Amazon S3)
● EMC Atmos
● OpenStack Swift
● ECS CAS (content-addressable storage)
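Each of the protocols above is served on its own data port. As an illustrative sketch, the default ECS data ports commonly documented for the object heads (an assumption to verify against your deployment and any load balancer configuration) can be captured in a small helper that builds the base URL a client such as s3curl or an S3 SDK would target:

```python
# Default ECS object data ports as commonly documented (HTTP, HTTPS).
# These values are assumptions -- confirm them for your ECS release.
DATA_PORTS = {
    "s3": (9020, 9021),
    "atmos": (9022, 9023),
    "swift": (9024, 9025),
}

def endpoint(host: str, protocol: str, secure: bool = True) -> str:
    """Build the base URL for a given object protocol head."""
    http_port, https_port = DATA_PORTS[protocol]
    scheme, port = ("https", https_port) if secure else ("http", http_port)
    return f"{scheme}://{host}:{port}"

print(endpoint("ecs.example.com", "s3"))  # https://ecs.example.com:9021
```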
For more information about object user tasks, see ECS Data Access Guide.
For more information about system monitor tasks, see ECS Monitoring Guide.
ECS platform
The ECS platform is composed of the data services, portal, storage engine, fabric, infrastructure, and hardware component
layers.
[Figure: ECS software component layers - Data Services, Portal, Storage Engine, Fabric, Infrastructure, and Hardware.]
Data services The data services component layer provides support for access to the ECS object store through object
and NFS v3 protocols. In general, ECS provides multiprotocol access: data that is ingested through one
protocol can be accessed through another. For example, data that is ingested through S3 can be modified
through Swift or NFS v3. This multiprotocol access has some exceptions due to differences in protocol
semantics and representations.
NOTE: In this document, HDFS refers to the native Hadoop Compatible File System (HCFS) support, also
known as ViPRFS. Hadoop support for ECS object storage is typically referenced as S3A.
The following table shows the object APIs and the protocols that are supported and how they interoperate.
Portal The ECS Portal component layer provides a Web-based user interface that allows you to manage, license,
and provision ECS nodes. The portal has the following comprehensive reporting capabilities:
● Capacity utilization for each site, storage pool, node, and disk
● Performance monitoring on latency, throughput, transactions per second, and replication progress and
rate
● Diagnostic information, such as node and disk recovery status and statistics on hardware and process
health for each node, which helps identify performance and system bottlenecks.
Storage engine The storage engine component layer provides an unstructured storage engine that is responsible for
storing and retrieving data, managing transactions, and protecting and replicating data. The storage
engine provides access to objects ingested using multiple object storage protocols and the NFS file
protocol.
Fabric The fabric component layer provides cluster health management, software management, configuration
management, upgrade capabilities, and alerting. The fabric layer is responsible for keeping the services
running and managing resources such as the disks, containers, firewall, and network. It tracks and reacts
to environment changes such as failure detection and provides alerts that are related to system health.
Ports 9069 and 9099 are public IP ports that are protected by the Fabric firewall manager and are not
available outside of the cluster.
Infrastructure The infrastructure component layer uses SUSE Linux Enterprise Server 12 as the base operating system
for the ECS appliance, or qualified Linux operating systems for commodity hardware configurations.
Docker is installed on the infrastructure to deploy the other ECS component layers. The Java Virtual
Machine (JVM) is installed as part of the infrastructure because ECS software is written in Java.
Hardware The hardware component layer is an ECS appliance or qualified industry standard hardware. For more
information about ECS hardware, see Dell Support.
Sites can be federated, so that data is replicated to another site to increase availability and data durability, and to ensure that
ECS is resilient against site failure. For three or more sites, in addition to the erasure coding of chunks at a site, chunks that are
replicated to other sites are combined using a technique that is called XOR to provide increased storage efficiency.
If you have one site, with erasure coding the object data chunks use more space (1.33 or 1.2 times storage overhead) than the
raw data bytes require. If you have two sites, the storage overhead is doubled (2.67 or 2.4 times storage overhead) because
both sites store a replica of the data, and the data is erasure coded at both sites. If you have three or more sites, ECS combines
the replicated chunks so that, counterintuitively, the storage overhead is reduced.
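The overhead figures above follow from simple arithmetic. Assuming the erasure-coding schemes commonly cited for ECS (a default 12+4 scheme and a cold-storage 10+2 scheme; confirm the schemes in use for your deployment), the overhead is total fragments divided by data fragments, and a second site storing its own erasure-coded replica doubles it:

```python
def ec_overhead(data_fragments: int, coding_fragments: int) -> float:
    """Storage overhead of an erasure-coding scheme: total fragments / data fragments."""
    return (data_fragments + coding_fragments) / data_fragments

# Single site: 12+4 gives 16/12 = 1.33x overhead; 10+2 gives 12/10 = 1.2x.
assert round(ec_overhead(12, 4), 2) == 1.33
assert ec_overhead(10, 2) == 1.2

# Two sites: each site holds an erasure-coded replica, doubling the overhead.
assert round(2 * ec_overhead(12, 4), 2) == 2.67
assert 2 * ec_overhead(10, 2) == 2.4
```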
When one node is down in a four-node system, ECS starts rebuilding the erasure-coded (EC) data on priority to avoid data
unavailability (DU). Because one node is down, the EC segments are spread across the remaining three nodes, which results in
each node holding more segments than the EC code tolerates. If the down node comes back, the system returns to normal. If
instead the node holding the greatest number of EC segments also goes down, the DU window is as large as that node's
unavailability window, and if that node does not recover, data loss (DL) results.
The EC retiring feature converts unsafe EC chunks into triple-mirrored chunks for data safety. However, EC retiring has some
limitations:
● It increases system capacity usage, raising the protection overhead from 1.33 to 3.
● When no node is down, EC retiring introduces unnecessary I/O.
● The feature applies only to four-node systems. EC retiring is not triggered automatically; you must trigger it on demand using
an API through the service console.
For a detailed description of the mechanism that is used by ECS to provide data durability, resilience, and availability, see the
ECS High Availability Design White Paper.
Local Protection Data is protected locally by using triple mirroring and erasure coding, which provides resilience against disk
and node failures, but not against site failure.
Full Copy When the Replicate to All Sites setting is turned on for a replication group, the replication group makes
Protection a full readable copy of all objects to all sites within the replication group. Having full readable copies of
objects on all VDCs in the replication group provides data durability and improves local performance at all
sites, at the cost of storage efficiency.
Active Active is the default ECS configuration. When a replication group is configured as Active, data is
replicated to federated sites and can be accessed from all sites with strong consistency. If you have
two sites, full copies of data chunks are copied to the other site. If you have three or more sites, the
replicated chunks are combined (XOR'ed) to provide increased storage efficiency. When data is accessed
from a site that is not the owner of the data, until that data is cached at the non-owner site, the access
time increases. Similarly, if the owner site that contains the primary copy of the data fails, and if you have
a global load balancer that directs requests to a non-owner site, the non-owner site must re-create the
data from XOR'ed chunks, and the access time increases.
Passive The Passive configuration includes two, three, or four active sites with an additional passive site that
is a replication target (backup site). The minimum number of sites for a Passive configuration is three
(two active, one passive) and the maximum number of sites is five (four active, one passive). Passive
configurations have the same storage efficiency as Active configurations. For example, the Passive
three-site configuration has the same storage efficiency as the Active three-site configuration (2.0 times
storage overhead). In the Passive configuration, all replication data chunks are sent to the passive site
and XOR operations occur only at the passive site. In the Active configuration, the XOR operations occur
at all sites. If all sites are on-premises, you can designate any of the sites as the replication target. If
there is a backup site hosted off-premise by a third-party data center, ECS automatically selects it as the
replication target when you create a Passive geo replication group (see Create a replication group). If you
want to change the replication target from a hosted site to an on-premises site, you can do so using the
ECS Management REST API.
ECS network
ECS network infrastructure consists of top-of-rack switches that allow for the following types of network connections:
● Public network – connects ECS nodes to your organization's network, providing data access.
● Internal private network – manages nodes and switches within the rack and across racks.
For more information about ECS networking, see the ECS Networking and Best Practices White Paper.
CAUTION: Connections from the customer's network to both front-end switches (rabbit and hare) are required
to maintain the high availability architecture of the ECS appliance. If the customer chooses not to connect to
their network in the required HA manner, there is no guarantee of high data availability for the use of this
product.
2
Getting Started with ECS
Topics:
• Initial configuration
• Log in to the ECS Portal
• View the Getting Started Task Checklist
• View the ECS Portal Dashboard
Initial configuration
The initial configuration steps that are required to get started with ECS include logging in to the ECS Portal for the first time,
using the ECS Portal Getting Started Task Checklist and Dashboard, uploading a license, and setting up an ECS virtual data
center (VDC).
Steps
1. Upload an ECS license.
See Licensing.
2. Select a set of nodes to create at least one storage pool.
See Create a storage pool.
3. Create a VDC.
See Create a VDC for a single site.
4. Create at least one replication group.
See Create a replication group.
a. Optional: Set authentication.
You can add Active Directory (AD), LDAP, or Keystone authentication providers to ECS to enable users to be
authenticated by systems external to ECS. See Introduction to authentication providers.
5. Create at least one namespace. A namespace is the equivalent of a tenant.
See Create a namespace.
a. Optional: Create object and/or management users.
See Working with users in the ECS Portal.
6. Create at least one bucket.
See Create a bucket.
After you configure the initial VDC, if you want to create an additional VDC and federate it with the first VDC, see Add a
VDC to a federation.
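The initial-configuration steps above form a strict dependency chain (license before storage pool, storage pool before VDC, and so on). As a hypothetical illustration only (this helper is not part of the ECS product), the ordering can be sketched and checked like this:

```python
# Illustrative sketch: validate that a planned setup sequence respects the
# documented order of initial-configuration tasks. The task names are
# assumptions chosen for this example, not ECS API identifiers.

SETUP_ORDER = [
    "upload_license",
    "create_storage_pool",
    "create_vdc",
    "create_replication_group",
    "create_namespace",
    "create_bucket",
]

def validate_setup_sequence(tasks):
    """Return True if `tasks` lists known steps in the documented order."""
    positions = {name: i for i, name in enumerate(SETUP_ORDER)}
    indices = [positions[t] for t in tasks if t in positions]
    return indices == sorted(indices)
```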
Log in to the ECS Portal
Prerequisites
Logging in to the ECS Portal requires the Security Administrator, System Administrator, System Monitor, or Namespace
Administrator role.
NOTE: You can log in to the ECS Portal for the first time with any valid login. However, you can configure the system only with the System Administrator or Security Administrator role.
On initial ECS login, use the default credentials. You are then prompted to change the password for the root user immediately.
Steps
1. Type the public IP address of the first node in the system, or the address of the load balancer that is configured as the front
end, in the address bar of your browser: https://<node1_public_ip>.
2. Log in with the default root credentials:
● User Name: root
● Password: ChangeMe
NOTE: The first system administrator to log in to the ECS Portal is prompted to acknowledge the End User License
Agreement. Once you acknowledge the agreement, you are prompted to change the password.
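The same first login can be scripted against the ECS Management REST API, which listens on port 4443 and returns a session token in the X-SDS-AUTH-TOKEN response header. The sketch below only builds the URL and extracts the token from a headers mapping; verify the port and header name against the Management REST API documentation for your release.

```python
# Hedged sketch of first-login automation against the ECS Management REST API.
# Port 4443, the /login path, and the X-SDS-AUTH-TOKEN header follow the
# publicly documented Management REST API; confirm them for your release.

def login_url(host):
    """Build the management API login URL for a node or load balancer."""
    return f"https://{host}:4443/login"

def extract_auth_token(response_headers):
    """The API returns the session token in the X-SDS-AUTH-TOKEN header."""
    token = response_headers.get("X-SDS-AUTH-TOKEN")
    if token is None:
        raise ValueError("login did not return an auth token")
    return token
```

For example, with curl (default credentials, self-signed certificate skipped with -k): `curl -ik -u root:ChangeMe https://<node1_public_ip>:4443/login`.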
View requests
The Requests panel displays the total requests, successful requests, and failed requests.
Failed requests are organized by system error and user error. User failures are typically HTTP 400 errors. System failures are
typically HTTP 500 errors. Click Requests to see more request metrics.
Request statistics do not include replication traffic.
NOTE: For partial upgrade scenarios (for example, during a 3.4 to 3.6 upgrade), nodes on 3.4 pull data from the dashboard API, whereas nodes upgraded to 3.6 pull data from the flux API. This may result in inconsistent display of data.
View performance
The Performance panel displays how network read and write operations are currently performing, and the average read/write
performance statistics over the last 24 hours for the VDC.
Click Performance to see more comprehensive performance metrics.
NOTE:
● An SSD Cache Enabled label appears if the feature is enabled on the node. If Read Cache is disabled or the nodes do not have SSD disks, the SSD Cache Enabled label does not appear.
● For partial upgrade scenarios (for example, during a 3.4 to 3.6 upgrade), nodes on 3.4 pull data from the dashboard API, whereas nodes upgraded to 3.6 pull data from the flux API. This may result in inconsistent display of data.
View alerts
The Alerts panel displays a count of critical alerts and errors.
Click Alerts to see the full list of current alerts. Any Critical or Error alerts are linked to the Alerts tab on the Events page
where only the alerts with a severity of Critical or Error are filtered and displayed.
NOTE: Alerts can also be filtered by the Info and Warning severities.
View audits
Audits can be filtered only by date/time range and namespace.
NOTE:
● When the storage pool reaches 90% of its total capacity, it does not accept write requests, and it becomes a read-only
system. A storage pool must have a minimum of four nodes and must have three or more nodes with more than 10%
free capacity in order to allow writes. This reserved space is required to ensure that ECS does not run out of space while
persisting system metadata. If this criterion is not met, the write fails. The ability of a storage pool to accept writes does
not affect the ability of other pools to accept writes. For example, if you have a load balancer that detects a failed write,
the load balancer can redirect the write to another VDC.
● The maximum number of VDCs per ECS federation and/or replication group is eight.
● A node that is down in a single-site VDC (for example, VDC1) blocks adding a new second VDC (for example, VDC2).
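The write-eligibility rule above (a storage pool must have at least four nodes, at least three of which have more than 10% free capacity) can be sketched as a pure check. This is an illustration of the documented rule, not product code:

```python
# Illustrative check of the documented storage pool write-eligibility rule:
# >= 4 nodes in the pool, and >= 3 nodes with more than 10% free capacity.

def pool_accepts_writes(free_fractions):
    """free_fractions: per-node free capacity as fractions (0.0 to 1.0)."""
    if len(free_fractions) < 4:
        return False  # pool too small to place erasure coding fragments
    nodes_with_headroom = sum(1 for f in free_fractions if f > 0.10)
    return nodes_with_headroom >= 3
```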
The replication group is used by ECS for replicating data to other sites so that the data is protected and can be accessed from
other, active sites. When you create a bucket, you specify the replication group that it is in. ECS ensures that the bucket and
the objects in the bucket are replicated to all the sites in the replication group.
ECS can be configured to use more than one replication scheme, depending on the requirements to access and protect the
data. The following figure shows a replication group (RG 1) that spans all three sites. RG 1 takes advantage of the XOR storage efficiency that is provided by ECS when using three or more sites. In the figure, the replication group that spans two sites (RG 2) contains full copies of the object data chunks and does not use XOR'ing to improve storage efficiency.
Figure 5. Replication group spanning three sites and replication group spanning two sites
The physical storage that the replication group uses at each site is determined by the storage pool that is in the replication
group. The storage pool aggregates the disk storage of each of the minimum of four nodes to ensure that it can handle the
placement of erasure coding fragments. A node cannot exist in more than one storage pool. The storage pool can span racks,
but it is always within a site.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Manage > Storage Pools.
2. On the Storage Pool Management page, click New Storage Pool.
3. On the New Storage Pool page, in the Name field, type the storage pool name (for example, StoragePool1).
NOTE:
● A storage pool can only contain HDD nodes or NVMe nodes.
● Click Drive Technology to list the nodes of the same drive technology.
4. In the Cold Storage field, specify if this storage pool is Cold Storage. Cold storage contains infrequently accessed data. The
ECS data protection scheme for cold storage is optimized to increase storage efficiency. After a storage pool is created, this
setting cannot be changed.
NOTE: Cold storage requires a minimum hardware configuration of six nodes. For more information, see ECS data
protection.
5. From the Available Nodes list, select the nodes to add to the storage pool.
a. To select nodes one-by-one, click the -> icon beside each node.
b. To select all available nodes, click the + icon at the top of the Available Nodes list.
c. To narrow the list of available nodes, in the search field, type the public IP address for the node or the host name.
6. In the Available Capacity Alerting fields, select the applicable available capacity thresholds that will trigger storage pool
capacity alerts:
a. In the Critical field, select 10 %, 15 %, or No Alert.
For example, if you select 10 %, that means a Critical alert will be triggered when the available storage pool capacity is
less than 10 percent.
b. In the Error field, select 20 %, 25 %, 30 %, or No Alert.
For example, if you select 25 %, that means an Error alert will be triggered when the available storage pool capacity is
less than 25 percent.
c. In the Warning field, select 30 %, 35 %, 40 %, or No Alert.
For example, if you select 40 %, that means a Warning alert will be triggered when the available storage pool capacity is
less than 40 percent.
When a capacity alert is generated, a call home alert is also generated that alerts ECS customer support that the ECS
system is reaching its capacity limit.
7. Click Save.
8. Wait 10 minutes after the storage pool is in the Ready state before you perform other configuration tasks, to allow the
storage pool time to initialize.
If you receive the following error, wait a few more minutes before you attempt any further configuration:
Error 7000 (http: 500): An error occurred in the API Service. An error occurred in the API service. Cause: error insertVdcInfo. Virtual Data Center creation failure may occur when Data Services has not completed initialization.
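Step 8 above (wait for the Ready state, then allow settle time) can be sketched as a small polling loop. The status callback is injected so the logic is testable; how you fetch the real pool status (for example, via the ECS Management REST API) is an assumption left to the operator.

```python
# Hedged sketch of step 8: poll until the storage pool reports Ready, then
# wait an additional settle period (10 minutes by default) before performing
# other configuration tasks. Not product code; status retrieval is injected.
import time

def wait_for_pool_ready(get_status, settle_seconds=600, poll_seconds=30,
                        timeout_seconds=3600, sleep=time.sleep):
    """Block until get_status() == 'Ready', then wait settle_seconds more."""
    waited = 0
    while get_status() != "Ready":
        if waited >= timeout_seconds:
            raise TimeoutError("storage pool did not become Ready")
        sleep(poll_seconds)
        waited += poll_seconds
    sleep(settle_seconds)  # give the pool time to initialize
```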
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Manage > Storage Pools.
2. On the Storage Pool Management page, locate the storage pool that you want to edit in the table. Click Edit in the
Actions column beside the storage pool you want to edit.
3. On the Edit Storage Pool page:
● Nodes cannot be deleted from a storage pool after the storage pool is saved.
● To modify the storage pool name, in the Name field, type the new name.
● The Drive Technology drop-down list is not editable.
● To modify the nodes included in the storage pool:
○ In the Available Nodes list, add a node to the storage pool by clicking the + icon beside the node.
● To modify the available capacity thresholds that will trigger storage pool capacity alerts, select the applicable alert
thresholds in the Available Capacity Alerting fields.
4. Click Save.
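The available-capacity alert thresholds selected in the storage pool dialog can be modeled as a simple classifier: a severity fires when available capacity drops below the configured percentage. This is an illustrative sketch of the documented behavior, with the portal's default choices as example values:

```python
# Illustrative sketch of the capacity alert thresholds: a severity is raised
# when available capacity falls below the configured percentage. Pass None
# for a threshold to model the portal's "No Alert" choice.

def capacity_alert(available_pct, critical=10, error=25, warning=40):
    """Return 'Critical', 'Error', 'Warning', or None for a capacity value."""
    if critical is not None and available_pct < critical:
        return "Critical"
    if error is not None and available_pct < error:
        return "Error"
    if warning is not None and available_pct < warning:
        return "Warning"
    return None
```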
NOTE: Fail this VDC is available only when there is more than one VDC.
○ Ensure that geo-replication is up to date, and stop all writes to the VDC.
○ Ensure that all nodes of the VDC are shut down.
○ Replication to and from the VDC is disabled for all replication groups.
○ Recovery is initiated only when the VDC is removed from the replication group. Proceed to do that next.
○ This VDC displays a status of Permanently Failed in any replication group to which it belongs.
○ To reconstruct this VDC, it must be added as a new site. Any previous data is lost, as that data has failed over to other sites in the federation.
This topic provides conceptual information about storage pools, VDCs, and replication groups:
● Introduction to storage pools, VDCs, and replication groups
Prerequisites
This operation requires the System Administrator role in ECS.
Ensure that one or more storage pools are available and in the Ready state.
Steps
1. In the ECS Portal, select Manage > Virtual Data Center.
2. On the Virtual Data Center Management page, click New Virtual Data Center.
3. On the New Virtual Data Center page, in the Name field, type the VDC name (for example: VDC1).
4. To create an access key for the VDC, either:
● Type the VDC access key value in the Key field, or
● Click Generate to generate a VDC access key.
The VDC Access Key is used as a symmetric key for encrypting replication traffic between VDCs in a multi-site federation.
5. In the Replication Endpoints field, type the replication IP address of each node assigned to the VDC. Type them as a
comma-separated list.
Prerequisites
Obtain the ECS Portal credentials for the root user, or for a user with System Administrator credentials, to log in to both VDCs.
In an ECS geo-federated system with multiple VDCs, the IP addresses for the replication and management networks are used
for connectivity of replication and management traffic between VDC endpoints. If the VDC you are adding to the federation is
configured with:
● Replication or management traffic running on the public network (default), you need the public network IP address that is
used by each node.
● Separate networks for replication or management traffic, you need the IP addresses of the separated network for each
node.
If a load balancer is configured to distribute the load between the replication IP addresses of the nodes, you need the IP address
that is configured on the load balancer.
Ensure that the VDC you are adding has a valid ECS license that is uploaded and has at least one storage pool in the Ready
state.
Steps
1. On the VDC you want to add (for example, VDC2):
a. Log in to the ECS Portal.
b. In the ECS Portal, select Manage > Virtual Data Center.
c. On the Virtual Data Center Management page, click Get VDC Access Key.
d. Select the key, and press Ctrl-c to copy it.
Important: You are only obtaining and copying the key of the VDC you want to add; you are not creating a VDC on the site you are logged in to.
e. Log out of the ECS Portal on the site you are adding.
2. On the existing VDC (for example, VDC1):
a. Log in to the ECS Portal.
b. Select Manage > Virtual Data Center.
c. On the Virtual Data Center Management page, click New Virtual Data Center.
d. On the New Virtual Data Center page, in the Name field, type the name of the new VDC you are adding.
e. Click in the Key field, and then press Ctrl-v to paste the access key you copied from the VDC you are adding (from
step 1d).
3. In the Replication Endpoints field, enter the replication IP address of each node in the storage pools that are assigned to
the site you are adding (for example, VDC2). Use the:
● Public IP addresses for the network if the replication network has not been separated.
● IP address configured for replication traffic, if you have separated the replication network.
4. In the Management Endpoints fields, enter the management IP address of each node in the storage pools that are
assigned to the site you are adding (for example, VDC2). Use the:
● Public IP addresses for the network if the management network has not been separated.
● IP address configured for management traffic, if you have separated the management network.
Use a comma to separate IP addresses within the text box.
5. Click Save.
Results
The new VDC is added to the existing federation. The ECS system is now a geo-federated system. When you add the VDC to
the federation, ECS automatically sets the type of the VDC to either On-Premise or Hosted.
Next steps
NOTE: If the External Key Manager (EKM) feature is activated for the federation, you must add the necessary VDC-to-EKM mapping for the newly added VDC in the key management section.
To complete the configuration of the geo-federated system, you must create a replication group that spans multiple VDCs so
that data can be replicated between the VDCs. To do this, you must ensure that:
● You have created storage pools in the VDCs that will be in the replication group (see Create a storage pool).
● You create the replication group, selecting the VDCs that provide the storage pools for the replication group (see Create a
replication group).
● After you create the replication group, you can monitor the copying of user data and metadata to the new VDC that you
added to the replication group on the Monitor > Geo Replication > Geo Bootstrap Processing tab. When all the user
data and metadata are successfully replicated to the new VDC, the Bootstrap State is Done and the Bootstrap Progress
(%) is 100 on all the VDCs.
NOTE: All the existing data is copied to a new VDC when it is added to an existing replication group, and retention is maintained. This occurs without the need for additional data copy or migration tooling.
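The completion condition above (Bootstrap State is Done and Bootstrap Progress (%) is 100 on all VDCs) can be expressed as a small predicate. The dictionary shape is an assumption chosen for this illustration, not the actual API response format:

```python
# Illustrative predicate for the documented geo bootstrap completion check.
# The report structure is a hypothetical shape for this example only.

def bootstrap_complete(vdc_reports):
    """vdc_reports: iterable of {'state': str, 'progress': int}, one per VDC."""
    return all(r["state"] == "Done" and r["progress"] == 100
               for r in vdc_reports)
```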
Edit a VDC
You can change the name, the access key, or the replication and management endpoints of the VDC.
Prerequisites
This operation requires the System Administrator role in ECS.
If you have an ECS geo-federated system, and you want to update VDC endpoints after you:
● Separated the replication or management networks for a VDC, or
● Changed the IP addresses of multiple nodes in a VDC.
Use the update endpoints in multiple VDCs procedure. If you attempt to update the VDC endpoints by editing the settings for an
individual VDC from the Edit Virtual Data Center < VDC name > page, you lose connectivity between VDCs.
Steps
1. In the ECS Portal, select Manage > Virtual Data Center.
2. On the Virtual Data Center Management page, locate the VDC you want to edit in the table. Click Edit in the Actions
column beside the VDC you want to edit.
3. On the Edit Virtual Data Center < VDC name > page:
● To modify the VDC name in the Name field, type the new name.
● To modify the VDC access key for the node you are logged into, in the Key field, type the new key value, or click
Generate to generate a new VDC access key.
● To modify the replication and management endpoints, use the update endpoints in multiple VDCs procedure.
4. Click Save.
Prerequisites
NOTE: Updating endpoints on ECS requires the assistance of ECS Professional Services. Any change to endpoints without engaging ECS Remote Support could potentially lead to data unavailability (DU).
This operation requires the System Administrator role in ECS.
Update the VDC endpoints from the Update All VDC Endpoints page after you have:
● Separated the replication or management networks of a VDC in an existing ECS federation.
● Changed the IP address of multiple nodes in a VDC.
If you attempt to update the VDC endpoints by editing the settings for an individual VDC from the Edit Virtual Data Center
< VDC name > page, you will lose connectivity between VDCs.
Steps
1. In the ECS Portal, select Manage > Virtual Data Center.
2. On the Virtual Data Center Management page, click Update All VDC Endpoints.
3. On the Update All VDC Endpoints page, if the replication network was separated, or if the IP address for the node was
changed, type the replication IP address of each node in the Replication Endpoints field for the VDC. Type them as a
comma-separated list.
Prerequisites
CAUTION: See Minimum Requirements to Remove VDC from a Replication Group before proceeding with the
below steps.
This operation requires the System Administrator role in ECS.
Restrictions on removing a VDC:
● You cannot log in to a VDC and remove that same VDC.
● You cannot remove a VDC if any VDC in the replication group is off.
● You cannot remove a VDC if it is the only VDC in a replication group.
● You cannot remove a VDC if any VDC in the replication group has a Bootstrap or Failover process in progress.
● You cannot remove more than one VDC at a time.
● You cannot remove a VDC if the system is not fully upgraded.
You cannot delete a VDC when it is still associated with any replication groups.
Before you begin, stop any workload to the system and wait until data replication is completed between the VDCs.
NOTE: The time taken to complete data replication depends on the workload and network condition.
Steps
1. Log in to the ECS Portal.
2. In the ECS Portal, select Manage > Virtual Data Center, and verify that the VDC you want to remove is online and working correctly.
3. Select Manage > Replication Group and click Edit for the corresponding replication group.
The Edit Replication Group window opens.
4. Click Delete... for the VDC you want to delete.
The Confirm Remove VDC window opens. Read all the important notes, and then select the check box to confirm removal of the VDC. Click OK.
5. Click Save in the Replication Group Management Window.
6. Select Monitoring > Geo Replication > Failover Processing. The Failover Progress column must show 100% done on all
remaining VDCs in the replication group.
See Guidelines to check failover and bootstrap process procedure for details.
7. Select Monitoring > Geo Replication > Bootstrap Processing. The Bootstrap Progress (%) column must show 100%
done on all remaining VDCs in the replication group.
The time that is taken for both the Failover process and the Bootstrap Process to complete depends on the amount of data
on the failed VDC.
See Guidelines to check failover and bootstrap process procedure for details.
Next steps
NOTE: You may have to wait for 5 to 10 minutes before the Failover and the Bootstrap process show the status on the
page.
Prerequisites
CAUTION: See Minimum Requirements to Trigger PSO before proceeding with the below steps.
NOTE: The time taken to complete data replication depends on the workload and network condition.
Wait for more than 15 minutes after you power off the VDC so that the system can confirm that the VDC is off. For an unplanned PSO, ensure that the VDC has been inaccessible for more than 15 minutes.
Restrictions on failing a VDC:
● You cannot fail a VDC that is powered on. If a VDC has been powered off for less than 15 minutes, the system does not detect that the VDC is off, which causes the Fail VDC operation to fail.
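The 15-minute rule above can be sketched as a trivial guard: a Fail VDC (PSO) request should be attempted only after the site has been unreachable for more than 15 minutes. Illustrative only:

```python
# Illustrative guard for the documented 15-minute PSO precondition.

def can_trigger_pso(minutes_unreachable):
    """True when the system can be expected to have confirmed the VDC is off."""
    return minutes_unreachable > 15
```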
Steps
1. Log in to the ECS Portal.
2. In the ECS Portal, select Manage > Virtual Data Center. Click Edit and select Fail this VDC. Wait for a few minutes to
ensure the VDC status shows Permanent Site Outage.
See the restrictions in the prerequisites section for limitations of this operation.
NOTE: Until failover is complete after performing Step 3, there may be data unavailability (DU) for objects that are owned by the VDCs being failed (PSOed).
3. Select Manage > Replication Group, click Edit, and remove the failed VDC from each replication group.
See the restrictions in the prerequisites section for limitations of this operation.
4. Select Monitoring > Geo Replication > Failover Processing. The Failover Progress column must show 100% done on all
remaining VDCs in the replication group.
5. Select Monitoring > Geo Replication > Bootstrap Processing. The Bootstrap Progress (%) column must show 100%
done on all remaining VDCs in the replication group.
The time that is taken for both the Failover process and the Bootstrap Process to complete depends on the amount of data
on the failed VDC.
NOTE: You may have to wait for 5 to 10 minutes before the Failover and the Bootstrap process show the status on the
page.
6. Remove the VDC after the Failover Processing percentage shows 100%. Select Manage > Virtual Data Center, click Edit, and delete the Permanently Failed VDC. If the VDC is still associated with any replication group, this operation fails.
NOTE: After this step, Failover Processing data will become unavailable.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. Log in to each of the VDCs in the replication group (RG) and check the following:
This topic provides conceptual information about storage pools, VDCs, and replication groups:
● Introduction to storage pools, VDCs, and replication groups
Prerequisites
This operation requires the System Administrator role in ECS.
If you want the replication group to span multiple VDCs, you must ensure that the VDCs are federated (joined) to the primary
VDC, and that storage pools have been created in the VDCs that will be included in the replication group.
NOTE: The geo-replicated data is encrypted using AES256 when sent from one node to another. However, ECS 3.7 does not use TLS for geo-replication. For enhanced security of traffic in transit, use a VPN (or another secure network channel) when enabling and using geo-replication.
Steps
1. In the ECS Portal, select Manage > Replication Group.
2. On the Replication Group Management page, click New Replication Group.
3. On the New Replication Group page, in the Name field, type a name (for example, ReplicationGroup1).
NOTE:
● A replication group can only contain HDD storage pools or EXF900 storage pools.
● Click Drive Technology to list the storage pools of the same drive technology.
4. Optionally, in the Replicate to All Sites field, click On for this replication group. You can only turn this setting on when you
create the replication group; you cannot turn it off later.
For a Passive configuration, leave this setting Off.
Option Description
Replicate to All Sites Off: The replication group uses default replication. With default replication, data is stored at the primary site and a full copy is stored at a secondary site chosen from the sites within the replication group. The secondary copy is protected by triple-mirroring and erasure coding. This process provides data durability with storage efficiency.
Replicate to All Sites On: The replication group makes a full readable copy of all objects to all sites (VDCs) within the replication group. Having full readable copies of objects on all VDCs in the replication group provides data durability and improves local performance at all sites at the cost of storage efficiency.
Next steps
After you create the replication group, you can monitor the copying of user data and metadata to the new VDC that you added
to the replication group on the Monitor > Geo Replication > Geo Bootstrap Processing tab. When all the user data and
metadata is successfully replicated to the new VDC, the Bootstrap State is Done and the Bootstrap Progress (%) is 100.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Manage > Replication Group.
2. On the Replication Group Management page, beside the replication group you want to edit, click Edit.
3. On the Edit Replication Group page,
● To modify the replication group name, in the Name field, type the new name.
CAUTION:
○ Deleting a VDC from one or more replication groups to which it belongs removes this VDC from the replication group, not from the federation.
○ A VDC removed from a specific replication group cannot be added back to the same replication group after deletion.
○ Ensure that geo-replication is up to date, and stop all writes to the VDC.
○ Ensure that the nodes are shut down only when failing (PSO) the VDC at the federation level.
○ Recovery is initiated only when the VDC is removed from the replication group. Proceed to do that next.
○ This VDC will display a status of Permanently Failed for a failed VDC, but not for a removed VDC.
○ When failing a VDC, to reconstruct this VDC, it must be added as a new site. Any previous data will be lost, as that data will have failed over to other sites in the federation.
4. Click Save.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. Shut down the VDC and wait for 15 minutes.
Replication group (RG) with a single VDC: In RG level, the shutdown VDC status is shown as Unattainable.
Replication group (RG) with more than one VDC: In RG level, the shutdown VDC status is shown as Temporarily
Unavailable.
2. Fail the shutdown VDC through the REST API.
NOTE: An error message displays if you try to fail the shutdown VDC from the ECS UI.
3. At the VDC and RG level, the state of the shutdown VDC is shown as Permanently Failed.
● If there is a single VDC, the associated RG gets a new drop-down. To delete the RG, select Delete.
● If the shutdown VDC is associated with other RGs, to delete the RG, select Edit > Remove.
Error 1007 (http: 405): Method not supported. Operation not supported. Reason:
Zone still referenced by one or more User Replication Groups.
4. To delete a failed VDC, select the VDC, and from the drop-down of the failed VDC, select Delete.
Table 10. Authentication provider properties (continued)
Field Description
● New Authentication Provider button: Add an authentication provider.
Table 11. AD or LDAP authentication provider settings
Field Description and requirements
Domains Example: mycompany.com
If an alternate UPN suffix is configured in the Active Directory, the Domains field should also contain the alternate UPN configured for the domain. For example, if myco is added as an alternate UPN suffix for mycompany.com, then the Domains field should contain both myco and mycompany.com.
Server URLs The LDAP or LDAPS (secure LDAP) URL with the domain controller FQDN or IP address. The default port for LDAP is 389. The default port for LDAPS is 636.
Example: ldap://<Domain controller FQDN>:<port> (if not the default port) or ldaps://<Domain controller FQDN>:<port> (if not the default port)
If the authentication provider supports a multidomain forest, use the global catalog server and always specify the port number. The default port for LDAP is 3268. The default port for LDAPS is 3269.
Example: ldap(s)://<Global catalog server FQDN>:<port>
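The Server URLs rules above can be captured in a small helper: default ports are 389 (ldap) and 636 (ldaps) for a domain controller, and 3268 and 3269 for a global catalog server in a multidomain forest. An illustrative sketch, not product code:

```python
# Illustrative builder for the documented LDAP/LDAPS server URL defaults.
# (scheme, is_global_catalog) -> default port, per the table above.

DEFAULT_PORTS = {
    ("ldap", False): 389, ("ldaps", False): 636,   # domain controller
    ("ldap", True): 3268, ("ldaps", True): 3269,   # global catalog server
}

def server_url(scheme, host, global_catalog=False, port=None):
    """Build an LDAP or LDAPS server URL, applying the default port."""
    if port is None:
        port = DEFAULT_PORTS[(scheme, global_catalog)]
    return f"{scheme}://{host}:{port}"
```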
Manager DN The Active Directory Bind user account that ECS uses to connect to the Active Directory or LDAP
server. This account is used to search Active Directory when an ECS administrator specifies a user
for role assignment.
Providers This setting is Enabled by default when adding an authentication provider. ECS validates the
connectivity of the enabled authentication provider and that the name and domain of the enabled
authentication provider are unique.
Select Disabled only if you want to add the authentication provider to ECS, but you do not
immediately want to use it for authentication. ECS does not validate the connectivity of a disabled
authentication provider, but it does validate that the authentication provider name and domain are
unique.
Group Attribute The AD attribute that is used to identify a group. Used for searching the directory by groups.
Example:
CN
NOTE: After you set this attribute for an AD authentication provider, you cannot change it,
because the tenants using this provider might already have role assignments and permissions
that are configured with group names in a format that uses this attribute.
Optional. One or more group names as defined by the authentication provider. This setting filters the group membership information that ECS retrieves about a user.
● When a group or groups are in the allowlist, ECS is aware only of the membership of a user in the specified groups. Multiple values (one value on each line in the ECS Portal, and values comma-separated in the CLI and API) and wildcards (for example MyGroup*, TopAdminUsers*) are allowed.
● The default setting is blank. ECS is aware of all groups that a user belongs to. Asterisk (*) is the same as blank.
Example: UserA belongs to Group1 and Group2. If the allowlist is Group1, ECS knows that UserA is a member of Group1, but does not know that UserA is a member of Group2 (or of any other group).
Use care when adding an allowlist value. For example, if you map a user to a namespace based on group membership, then ECS must be aware of the user's membership in the group.
To restrict access to a namespace to only users of certain groups, complete the following tasks:
● Add the groups to the namespace user mapping. The namespace is configured to accept only users of these groups.
● Add the groups to the allowlist. ECS is authorized to receive information about them.
By default, if no groups are added to the namespace user mapping, users from any groups are accepted, regardless of the allowlist configuration.
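The allowlist semantics above can be sketched as a filter: with a blank allowlist (or an asterisk entry), ECS sees all of a user's groups; otherwise only groups matching an allowlist entry (wildcards such as MyGroup* permitted) are visible. An illustrative model only:

```python
# Illustrative model of the documented allowlist filtering of group
# membership. Wildcard matching uses shell-style patterns via fnmatch.
from fnmatch import fnmatch

def visible_groups(user_groups, allowlist):
    """Return the subset of user_groups that ECS would be aware of."""
    if not allowlist or "*" in allowlist:
        return list(user_groups)  # blank (or "*") means all groups visible
    return [g for g in user_groups
            if any(fnmatch(g, pattern) for pattern in allowlist)]
```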
Group Object Classes This setting applies only to LDAP. It does not apply to other types of authentication providers.
Object classes that represent groups in a specified LDAP server, one per line. If this field is empty,
then authorization by an LDAP group is not available.
Example:
groupOfNames
groupOfUniqueNames
Group Member This setting applies only to LDAP. It does not apply to other types of authentication providers.
Attribute Group member attributes used in a specified LDAP server, one per line. If this field is empty, then
authorization by an LDAP group is not available.
Example:
member
uniqueMember
CN=Users,DC=mydomaincontroller,DC=com
Example: userPrincipalName=%u
NOTE: ECS does not validate this value when you add the authentication provider.
If an alternate UPN suffix is configured in the Active Directory, the Search Filter value must be of the format sAMAccountName=%U, where %U is the username and does not contain the domain name.
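Per the description above, %U expands to the username without the domain, and %u (as in userPrincipalName=%u) to the full login name. A hypothetical substitution helper for illustration; the token meanings follow the table's description:

```python
# Illustrative substitution of the documented Search Filter tokens:
# %u -> full login name, %U -> username with the domain stripped.

def apply_search_filter(template, login):
    """Substitute %U (username only) and %u (full login) into the filter."""
    username = login.split("@", 1)[0]
    return template.replace("%U", username).replace("%u", login)
```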
Add an AD or LDAP authentication provider
You can add one or more authentication providers to ECS to perform user authentication for ECS domain users.
Prerequisites
● This operation requires the Security Administrator role in ECS.
● You need access to the authentication provider information listed in AD/LDAP authentication provider settings. Note
especially the requirements for the Manager DN user.
Steps
1. In the ECS Portal, select Manage > Authentication.
2. On the Authentication Provider Management page, click New Authentication Provider.
3. On the New Authentication Provider page, type values in the fields. For more information about these fields, see AD/LDAP authentication provider settings.
4. Click Save.
5. To verify the configuration, add a user from the authentication provider at Manage > Users > Management Users, and
then try to log in as the new user.
Next steps
If you want these users to perform ECS object user operations, add (assign) the domain users into a namespace. For more
information, see Add domain users into a namespace.
Prerequisites
● This operation requires the Security Administrator role in ECS.
● You can add only one Keystone authentication provider.
● Obtain the authentication provider information listed in Keystone authentication provider settings.
Steps
1. In the ECS Portal, select Manage > Authentication.
2. On the Authentication Provider Management page, click New Authentication Provider.
3. On the New Authentication Provider page, in the Type field, select Keystone V3.
The required fields are displayed.
4. Type values in the Name, Description, Server URL, Keystone Administrator, and Admin Password fields. For more
information about these fields, see Keystone authentication provider settings.
5. Click Save.
Table 12. Keystone authentication provider settings (continued)
Field Description
Server URL URL of the Keystone system that ECS connects to in order to validate Swift users.
Keystone Administrator Username for an administrator of the Keystone system. ECS connects to the
Keystone system using this username.
Admin Password Password of the specified Keystone administrator.
5
Namespaces
Topics:
• Introduction to namespaces
• Working with namespaces in the ECS Portal
Introduction to namespaces
You can use namespaces to provide multiple tenants with access to the ECS object store and to ensure that the objects and
buckets written by users of each tenant are segregated from the other tenants.
ECS supports access by multiple tenants, where each tenant is defined by a namespace and the namespace has a set of
configured users who can store and access objects within the namespace. Users from one namespace cannot access the
objects that belong to another namespace.
Namespaces are global resources in ECS. A System Administrator or Namespace Administrator can access ECS from any
federated VDC and can configure the namespace settings. The object users that you assign to a namespace are global and can
access the object store from any federated VDC.
You configure a namespace with settings that define which users can access the namespace and what characteristics the
namespace has. Users with the appropriate privileges can create buckets, and can create objects within buckets, in the
namespace.
You can use buckets to create subtenants. The bucket owner is the subtenant administrator and can assign users to the
subtenant by using access control lists (ACLs). However, subtenants do not provide the same level of segregation as tenants.
Any user assigned to the tenant could be assigned privileges on a subtenant, so care must be taken when assigning users.
An object in one namespace can have the same name as an object in another namespace. ECS can identify objects by the
namespace qualifier.
You can configure namespaces to monitor and meter their usage, and you can grant management rights to the tenant so that it
can perform configuration, monitoring, and metering operations.
In the ECS Portal you can:
● create new namespaces
● edit namespaces
● delete namespaces
The namespace configuration tasks that you can perform in the ECS Portal can also be performed using the ECS Management
REST API.
Namespace tenancy
A System Administrator can set up namespaces in the following tenant scenarios:
Enterprise single tenant: All users access buckets and objects in the same namespace. Buckets can be created for subtenants, to allow a subset of namespace users to access the same set of objects. For example, a subtenant might be a department within the organization.
Enterprise multitenant: Departments within an organization are assigned to different namespaces and department users are assigned to each namespace.
Cloud Service Provider single tenant: A single namespace is configured and the Service Provider provides access to the object store for users within the organization or outside the organization.
Cloud Service Provider multitenant: The Service Provider assigns namespaces to different companies and assigns an administrator for the namespace. The Namespace Administrator for the tenant can then add users and can monitor and meter the use of buckets and objects.
Namespace settings
The following table describes the settings that you can specify when you create or edit an ECS namespace.
How to use namespace and bucket names when addressing objects in ECS is described in Object base URL.
Table 14. Namespace settings (continued)
Field Description Can be edited
Namespace Admin The user ID of one or more users who are assigned to the Namespace Administrator Yes
role, as a comma-separated list. Namespace Administrators can be
local or domain users. If the Namespace Administrator is a domain user, ensure that
an authentication provider is added to ECS. See Introduction to users and roles for
details.
Domain Group Admin The domain group that is assigned to the namespace Administrator role. Any Yes
authenticated member is assigned the namespace Administrator role for the
namespace. The domain group must be assigned to the namespace by setting the
Domain User Mappings for the namespace. To use this feature, you must ensure
that an authentication provider is added to ECS. See Introduction to users and roles
for details.
Replication Group The default replication group for the namespace. Yes
Namespace Quota The storage space limit that is specified for the namespace. You can specify a Yes
storage limit for the namespace and define notification and access behavior when
the quota is reached. The quota set for a namespace cannot be less than 1 GB. You
can specify namespace quota settings in increments of GB. You can select one of
the following quota behavior options:
● Notification Only at < quota_limit_in_GiB > Soft quota setting at which you
are notified.
● Block Access Only at < quota_limit_in_GiB > Hard quota setting which, when
reached, prevents write or update access to buckets in the namespace.
● Block Access at < quota_limit_in_GiB > and Send Notification at
< quota_limit_in_GiB > Hard quota setting which, when reached, prevents
write or update access to the buckets in the namespace and the quota setting
at which you are notified.
Default Bucket Quota The default storage limit that is specified for buckets that are created in this Yes
namespace. This is a hard quota which, when reached, prevents write or update
access to the bucket. Changing the default bucket quota does not change the
bucket quota for buckets that are already created.
Server-side Encryption The default value for server-side encryption for buckets created in this namespace. No
● Server-side encryption, also known as Data At Rest Encryption or D@RE,
encrypts data inline before storing it on ECS disks or drives. This encryption
helps prevent sensitive data from being acquired from discarded or stolen
media.
● If you turn this setting on for the namespace, then all its buckets are
encrypted, and this setting cannot be changed when a bucket is created. If
you want the buckets in the namespace to be unencrypted, you must
leave this setting off. If you leave this setting off for the namespace, individual
buckets can be set as encrypted when created.
● For a complete description of the feature, see the ECS 3.8 Security
Configuration and Hardening Guide.
Access During Outage The default behavior when accessing data in the buckets created in this namespace Yes
during a temporary site outage in a geo-federated setup.
● If you turn this setting on for the namespace and a temporary site outage
occurs, if you cannot access a bucket at the failed site where the bucket was
created (owner site), you can access a copy of the bucket at another site.
Objects that you access in the buckets in the namespace might have been
updated at the failed site, but changes might not have been propagated to the
site from which you are accessing the object.
● If you leave this setting off for the namespace, data in the site which has the
temporary outage is not available for access from other sites, and object read
for data that is owned by the failed site fails.
● In ECS 3.8, Object Lock and ADO can be enabled together in a namespace for
new buckets. However, there is a risk of losing locked versions during a TSO,
and hence for Object Lock buckets setting ADO is denied by default. You must
have system administrator privileges to allow Object Lock and ADO to co-exist
through the Management API. Before enabling it, you should understand the risk
of losing locked versions during a TSO.
● For more information, see TSO behavior with the ADO bucket setting turned on.
Compliance ● The rules that limit changes that can be made to retention settings on objects No
under retention. ECS has object retention features enabled or defined at the
object level, bucket level, and namespace level. Compliance strengthens these
features by limiting changes that can be made to retention settings on objects
under retention.
● You can turn this setting on only at the time the namespace is created; you
cannot change it after the namespace is created.
● Compliance is supported by S3 and CAS systems. For details about the rules
enforced by compliance, see the ECS Data Access Guide.
Retention Policies Enables one or more retention policies to be added and configured. Yes
● A namespace can have one or more associated retention policies, where each
policy defines a retention period. When you apply a retention policy to several
objects, rather than to an individual object, a change to the retention policy
changes the retention period for all the objects to which the policy is applied.
A request to modify an object before the expiration of the retention period is
disallowed.
● In addition to specifying a retention policy for several objects, you can specify
retention policies and a quota for the entire namespace.
● For more information about retention, see Retention periods and policies.
Domain Enables Active Directory (AD) or Lightweight Directory Access Protocol (LDAP) Yes
domains to be specified and the rules for including users from the domain to be
configured.
● Domain users can be assigned to ECS management roles and can use the ECS
self-service capability to register as object users.
● The mapping of domain users into a namespace is described in Domain users
require an assigned namespace to perform object user operations.
Namespace root user A namespace root user is a user who has complete access to the namespace Yes
resources. The namespace root user is the default owner for resources created
using IAM roles. The System Administrator and Namespace Administrator can
manage ECS Portal access for the namespace root user.
You can set the following attributes using the ECS Management REST API, but not from the ECS Portal.
Allowed (and Disallowed) Replication Groups: Enables a client to specify the replication groups that the namespace can use.
Retention Periods: You can assign retention periods at the object level or the bucket level. Each time a user requests to modify or delete an object, an expiration time is calculated. The object expiration time equals the object creation time plus the retention period. When you assign a retention period for a bucket, the object expiration time is calculated based on the retention period set on the object and the retention period set on the bucket, whichever is longest. When you apply a retention period to a bucket, the retention period for all objects in a bucket can be changed at any time, and can override the value that is written to the object by an object client by setting it to a longer period. You can specify that an object is retained indefinitely.
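The expiration rule above (creation time plus the longer of the object-level and bucket-level retention periods) can be sketched as a small helper. This is illustrative only, not ECS code.

```python
def modification_allowed(now_s: int, created_s: int,
                         object_retention_s: int, bucket_retention_s: int) -> bool:
    """Return True when a modify/delete request arrives at or after the
    object's expiration time: creation time plus the longer of the
    object-level and bucket-level retention periods (epoch seconds)."""
    expiration = created_s + max(object_retention_s, bucket_retention_s)
    return now_s >= expiration
```

For example, with an object retention of 50 seconds and a bucket retention of 80 seconds, a request at 79 seconds after creation is refused, while a request at 100 seconds is allowed.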
Auto-Commit Period: The autocommit period is the time interval during which updates through NFS are allowed for objects under retention. This attribute enables NFS files that are written to ECS to be WORM-compliant. The interval is calculated from the last modification time. The autocommit value must be less than or equal to the retention value, with a maximum of 1 day. A value of 0 indicates no autocommit period.
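The autocommit constraint above (at most the retention value, capped at 1 day, with 0 meaning disabled) can be expressed as a short validity check. This is an illustrative sketch, not ECS validation code.

```python
MAX_AUTOCOMMIT_S = 24 * 60 * 60  # hard upper bound of 1 day, in seconds

def autocommit_is_valid(autocommit_s: int, retention_s: int) -> bool:
    """Apply the documented rule: the autocommit period must be less than
    or equal to the retention value, with a maximum of 1 day; a value of
    0 means no autocommit period."""
    if autocommit_s == 0:
        return True  # autocommit disabled
    return autocommit_s <= retention_s and autocommit_s <= MAX_AUTOCOMMIT_S
```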
Retention Policies: Retention policies are associated with a namespace. Any policy that is associated with the namespace can be assigned to an object belonging to the namespace. A retention policy has an associated retention period. When you change the retention period that is associated with a policy, the retention period automatically changes for objects that are assigned that policy. You can apply a retention policy to an object. When a user attempts to modify or delete an object, the retention policy is retrieved. The retention period in the retention policy is used with the object retention period and bucket retention period to verify whether the request is allowed. For example, you could define a retention policy for each of the following document types, where each policy has an appropriate retention period:
● Email - six months
● Financial - three years
● Legal - five years
When a user requests to modify or delete a legal document four years after it was created, the larger of the bucket retention period or the object retention period is used to verify whether the operation can be performed. In this case, the request is not allowed, and the document cannot be modified or deleted for one more year.
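The worked example above (a legal document four years into a five-year policy) can be reproduced with a short sketch. The policy table is the one listed above, with periods expressed in years for readability; the helper is illustrative and ignores any longer bucket-level or object-level retention.

```python
# Retention periods from the example above, in years (illustrative sketch).
POLICIES = {"email": 0.5, "financial": 3, "legal": 5}

def years_until_modifiable(policy: str, age_years: float) -> float:
    """Years remaining before an object under the given policy can be
    modified or deleted, considering only the policy's retention period."""
    return max(POLICIES[policy] - age_years, 0)
```

A legal document at age 4 is blocked for one more year, while a financial document at age 4 is already past its 3-year retention.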
For information about how to access the ECS Management REST API, see the ECS Data Access Guide.
Create a namespace
You can create a namespace.
Prerequisites
● This operation requires the System Administrator role in ECS.
● A replication group must exist. The replication group provides access to storage pools in which object data is stored.
● If you want to enable domain users to access the namespace, an authentication provider must be added to ECS. To
configure domain object users or a domain group, you must plan how you want to map users into the namespace. For more
information about mapping users, see Domain users require an assigned namespace to perform object user operations.
Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, click New Namespace.
3. On the New Namespace page, in the Name field, type the name of the namespace.
● The name cannot be changed once created.
● To manage the ECS Portal access of a namespace root user, see
○ Manage namespace root user- System administrator
○ Manage namespace root user- Namespace administrator.
4. In the Namespace Admin field, specify the user ID of one or more domain or local users to whom you want to assign the
Namespace Administrator role.
You can add multiple users or groups as comma-separated lists.
5. In the Domain Group Admin field, you can also add one or more domain groups to whom you want to assign the
Namespace Administrator role.
6. In the Replication Group field, select the default replication group for this namespace.
7. In the Namespace Quota field, click On to specify a storage space limit for this namespace. If you enable a namespace
quota, select one of the following quota behavior options:
a. Notification Only at < quota_limit_in_GiB >
Select this option if you want to be notified when the namespace quota setting is reached.
b. Block Access Only at < quota_limit_in_GiB >
Select this option if you want write/update access to the buckets in this namespace to be blocked when the quota is
reached.
c. Block Access at < quota_limit_in_GiB > and Send Notification at < quota_limit_in_GiB >
Select this option if you want write/update access to the buckets in this namespace to be blocked when the quota is
reached and you want to be notified when the quota reaches a specified storage limit.
8. In the Default Bucket Quota field, click On to specify a default storage space limit that is automatically set on all buckets
that are created in this namespace.
9. In the Server-side Encryption field, click On to enable server-side encryption on all buckets that are created in the
namespace and to encrypt all objects in the buckets. If you leave this setting Off, you can apply server-side encryption to
individual buckets in the namespace at the time of creation.
10. In the Access During Outage field, click On or Off to specify the default behavior when accessing data in the buckets
created in this namespace during a temporary site outage in a geo-federated setup.
If you turn this setting on, if a temporary site outage occurs in a geo-federated system and you cannot access a bucket at
the failed site where it was created (owner site), you can access a copy of the bucket at another site.
If you leave this setting off, data in the site which has the temporary outage is not available for access from other sites, and
object reads for data that is owned by the failed site will fail.
11. In the Compliance field, click On to enable compliance features for objects in this namespace.
Once you turn this setting on, you cannot turn it off.
You can only turn this setting on during namespace creation.
Once you turn this setting on, you can add a retention policy by completing the following steps:
a. In the Retention Policies area, in the Name field, type the name of the policy.
b. In the Value fields, select a numerical value and then select the unit of measure (seconds, minutes, hours, days, months,
years, infinite) to set the retention period for this retention policy.
Instead of specifying a specific retention period, you can select Infinite as a unit of measure to ensure that buckets that
are assigned to this retention policy are never deleted.
c. Click Add to add the new policy.
12. To specify an Active Directory (AD) or Lightweight Directory Access Protocol (LDAP) domain that contains the users who
can log in to ECS and perform administration tasks for the namespace, click Domain.
a. In the Domain field, type the name of the domain.
b. Specify the groups and attributes for the domain users that are allowed to access ECS in this namespace by typing the
values in the Groups, Attribute, and Values fields.
For information about how to perform complex mappings using groups and attributes, see Domain users require an assigned
namespace to perform object user operations.
13. Click Save.
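The three quota behavior options in step 7 amount to a small decision rule: compare current usage against the notification and block thresholds. The sketch below is illustrative only; how ECS evaluates thresholds internally may differ.

```python
def quota_actions(used_gib: float, notify_at_gib=None, block_at_gib=None):
    """Return which quota actions apply at the given namespace usage.

    notify_at_gib alone models "Notification Only at"; block_at_gib alone
    models "Block Access Only at"; setting both models "Block Access at
    ... and Send Notification at ..." (illustrative sketch, not ECS code).
    """
    actions = []
    if block_at_gib is not None and used_gib >= block_at_gib:
        actions.append("block-writes")
    if notify_at_gib is not None and used_gib >= notify_at_gib:
        actions.append("notify")
    return actions
```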
Edit a namespace
You can change the configuration of an existing namespace.
Prerequisites
This operation requires the System Administrator role in ECS.
The Namespace Administrator role can modify the AD or LDAP domain that contains the users in the namespace that are object
users or management users that can be assigned the Namespace Administrator role for the namespace.
Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, locate the namespace that you want to edit in the table. Click Edit in the Actions
column beside the namespace you want to edit.
3. On the Edit Namespace page:
● To modify the domain or local users to whom you want to assign the Namespace Administrator role, in the Namespace
Admin or Domain Group Admin fields, change the user IDs.
● To modify the default replication group for this namespace, in the Replication Group field, select a different replication
group.
● To modify which of the following settings are enabled, click the appropriate On or Off options.
○ Namespace Quota
○ Default Bucket Quota
○ Access During Outage
4. To modify an existing retention policy, in the Retention Policies area:
a. Click Edit in the Actions column beside the retention policy you want to edit.
b. To modify the policy name, in the Name field, type the new retention policy name.
c. To modify the retention period, in the Value field, type the new retention period for this retention policy.
5. To modify the AD or LDAP domain that contains the object users in the namespace and management users that can be
assigned the Namespace Administrator role for the namespace, click Domain.
a. To modify the domain name, in the Domain field, type the new domain name.
b. To modify the groups and attributes for the domain users that are allowed to access ECS in this namespace, type the
new values in the Groups, Attribute, and Values fields.
6. Click Save.
Manage namespace root user - System administrator
You can manage the ECS Portal access of a namespace root user.
Prerequisites
This task requires the System Administrator or Namespace Administrator role in ECS.
Steps
1. In the ECS Portal, select Manage > Edit Namespace > MANAGE (next to Namespace Root User).
2. On the Manage page, you can:
● Enable or Disable ECS Portal access for namespace root user.
● Set or change ECS Portal login password for namespace root user.
3. Click Save.
Manage namespace root user - Namespace administrator
NOTE: This procedure is available only after the system administrator has enabled ECS Portal access for the
namespace root user.
Steps
1. In the ECS Portal, select Manage > Edit Namespace > MANAGE (next to Namespace Root User).
2. On the Manage page, you can:
● Change current ECS Portal login password for the namespace root user.
3. Click Save.
NOTE: Current namespace administrator session will be invalidated after the password is changed.
Delete a namespace
You can delete a namespace, but you must delete the buckets in the namespace first.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, locate the namespace that you want to delete in the table. Click Delete in the
Actions column beside the namespace you want to delete.
An alert displays informing you of the number of buckets in the namespace and instructs you to delete the buckets in the
namespace before removing the namespace. Click OK.
3. Delete the buckets in the namespace.
a. Select Manage > Buckets.
b. On the Bucket Management page, locate the bucket that you want to delete in the table. Click Delete in the Actions
column beside the bucket you want to delete.
c. Repeat step 3b for all the buckets in the namespace.
4. On the Namespace Management page, locate the namespace that you want to delete in the table. Click Delete in the
Actions column beside the namespace you want to delete.
Since there are no longer any buckets in this namespace, a message displays to confirm that you want to delete this
namespace. Click OK.
5. Click Save.
6
Users and Roles
Topics:
• Introduction to users and roles
• Users in ECS
• Management roles in ECS
• Working with users in the ECS Portal
Users in ECS
ECS requires two types of user: management users, who can perform administration of ECS, and object users, who access the
object store to read and write objects and buckets.
The following topics describe ECS user types and concepts.
● Management users
● Default management users
● Object users
● Domain and local users
● User scope
Management users
Management users can perform the configuration and administration of the ECS system and of namespaces (tenants)
configured in ECS.
The roles that can be assigned to management users are Security Administrator, System Administrator, System Monitor, and
Namespace Administrator as described in Management roles in ECS.
Management users can be local users or domain users. Management users that are local users are authenticated by ECS against
the locally held credentials. Management users that are domain users are authenticated in Active Directory (AD) or Lightweight
Directory Access Protocol (LDAP) systems. For more information about domain and local users, see Domain and local users.
Management users are not replicated across geo-federated VDCs.
Object users
Object users are users of the ECS object store. They access ECS through object clients that are using the object protocols that
ECS supports (S3, EMC Atmos, OpenStack Swift, and CAS). Object users can be assigned UNIX-style permissions to access
buckets that are exported as file systems.
A management user (System or Namespace Administrator) can create an object user. The management user defines a username
and assigns a secret key to the object user when the user is created or at any time thereafter. A username can be a local name
or a domain-style username that includes @ in the name. The object user uses the secret key to access the ECS object store.
The object user's secret key is distributed by email or other means.
Users that are added to ECS as domain users can later add themselves as object users by creating their own secret key using
the ECS self-service capability through a client that communicates with the ECS Management REST API. The object username
that they are given is the same as their domain name. Object users do not have access to the ECS Portal. For more information
about domain users, see the Domain and local users. For information about creating a secret key, see the ECS Data Access
Guide.
Object users are global resources. An object user can have privileges to read and write buckets, and objects within the
namespace to which they are assigned, from any VDC.
NOTE: Set the user scope before you create the first object user. Setting up the user scope is a strict one-time
configuration. Once configured for an ECS system, the user scope cannot be changed. If you want to change the user
scope, ECS must be reinstalled and all the users, buckets, namespaces, and data must be cleaned up.
● See User scope for more information about object users and user scope.
● For more information about object user tasks, see the ECS Data Access Guide.
Domain users require an assigned namespace to perform object user operations
You must add (assign) domain users into a namespace if you want these users to perform ECS object user operations. To
access the ECS object store, object users and namespace Administrators must be assigned to a namespace. You can add an
entire domain of users into a namespace, or you can add a subset of the domain users into a namespace by specifying a
particular group or attribute associated with the domain.
A domain can provide users for multiple namespaces. For example, you might decide to add users such as the Accounts
department in the yourco.com domain into Namespace1, and users such as the Finance department in the yourco.com
domain into Namespace2. In this case, the yourco.com domain is providing users for two namespaces.
An entire domain, a particular set of users, or a particular user cannot be added into more than one namespace. For example, the
yourco.com domain can be added into Namespace1, but the domain cannot also be added into Namespace2.
The following example shows that a System or namespace Administrator has added into a namespace a subset of users in
the yourco.com domain; the users that have their Department attribute = Accounts in Active Directory. The System or
namespace Administrator has added the users in the Accounts department from this domain into a namespace by using the Edit
Namespace page in the ECS Portal.
Figure 6. Adding a subset of domain users into a namespace using one AD attribute
The following shows a different example, where the System or Namespace Administrator uses more granularity
when adding users into a namespace. In this case, the System or Namespace Administrator has added the members in the
yourco.com domain who belong to the Storage Admins group with the Department attribute = Accounts AND Region attribute
= Pacific, OR belong to the Storage Admins group with the Department attribute = Finance.
For more information about adding domain users into namespaces using the ECS Portal, see Add domain users into a
namespace.
User scope
The user scope setting affects all object users, in all namespaces across all federated VDCs.
The user scope can be GLOBAL or NAMESPACE. If the scope is set to GLOBAL, object usernames are unique across all VDCs
in the ECS system. If the scope is set to NAMESPACE, object usernames are unique within a namespace, so the same object
username can exist in different namespaces.
NOTE: CAS API and SWIFT API are supported only for GLOBAL user scope. CAS API and SWIFT API are not
supported with NAMESPACE user scope.
The default setting is GLOBAL. If you intend to use ECS in a multitenant configuration and you want to ensure that namespaces
can use names that are in use in another namespace, you must change this setting to NAMESPACE.
NOTE: Set the user scope before you create the first object user. Setting up user scope is a strict one time configuration.
Once configured for an ECS system, user scope cannot be changed. If you want to change the user scope, ECS must be
reinstalled and all the users, buckets, namespaces, and data must be cleaned up.
● See Object users for more information about object users.
● For more information about object user tasks, see the ECS Data Access Guide.
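The difference between GLOBAL and NAMESPACE scope comes down to the uniqueness check that applies when a new object username is registered. A sketch of that check (illustrative only, not ECS code):

```python
def username_conflicts(existing, scope, namespace, name):
    """existing: set of (namespace, username) pairs already registered.

    GLOBAL scope: the username must be unique across all namespaces.
    NAMESPACE scope: it must be unique only within its own namespace.
    """
    if scope == "GLOBAL":
        return any(existing_name == name for _, existing_name in existing)
    return (namespace, name) in existing
```

With GLOBAL scope, alice registered in ns1 blocks alice in ns2; with NAMESPACE scope, both can coexist.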
Change the user scope setting
Prerequisites
This operation requires the System Administrator role in ECS.
If you are going to change the default user scope setting from GLOBAL to NAMESPACE, you must do so before you create the
first object user in ECS.
Steps
In the ECS Management REST API, use the PUT /config/object/properties API call and pass the user scope in the
payload.
The following example shows a payload that sets the user_scope to NAMESPACE.
PUT /config/object/properties/

<property_update>
    <properties>
        <properties>
            <entry>
                <key>user_scope</key>
                <value>NAMESPACE</value>
            </entry>
        </properties>
    </properties>
</property_update>
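The payload can also be generated programmatically, which avoids hand-editing the nested <properties> elements. A sketch using only the Python standard library:

```python
import xml.etree.ElementTree as ET

def user_scope_payload(scope: str) -> str:
    """Build the <property_update> body for PUT /config/object/properties,
    mirroring the nested structure shown above (illustrative helper)."""
    root = ET.Element("property_update")
    outer = ET.SubElement(root, "properties")
    inner = ET.SubElement(outer, "properties")
    entry = ET.SubElement(inner, "entry")
    ET.SubElement(entry, "key").text = "user_scope"
    ET.SubElement(entry, "value").text = scope
    return ET.tostring(root, encoding="unicode")
```

The returned string can be sent as the request body with Content-Type: application/xml.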
User tags
A tag in the form of a name=value pair can be associated with the user ID for an object user, and retrieved by an application. For
example, an object user can be associated with a project or cost center. Tags cannot be associated with management users.
This functionality is not available from the ECS Portal. Tags can be set on an object user, and the tags that are associated with
the object user can be retrieved by using the ECS Management REST API. You can add a maximum of 20 tags.
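The 20-tag limit described above can be sketched as a simple client-side check. The helper name and error message are illustrative, not part of the ECS API; only the 20-tag maximum comes from the documentation.

```python
MAX_USER_TAGS = 20  # documented maximum number of tags per object user

def validate_user_tags(tags: dict) -> dict:
    """Validate a name=value tag set for an object user (illustrative
    sketch). Raises ValueError when the 20-tag limit is exceeded."""
    if len(tags) > MAX_USER_TAGS:
        raise ValueError(f"too many tags: {len(tags)} > {MAX_USER_TAGS}")
    return tags
```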
Security Administrator
Actions that are allowed to the Security Administrator:
● Upload certificates.
● Add authentication providers.
● Create, edit, and delete management users and/or AD/LDAP users/groups.
System Administrator
The System Administrator role allows a user to configure ECS and during initial configuration, specify the storage used for the
object store, how the store is replicated, how tenant access to the object store is configured (by defining namespaces), and
which users have permissions within an assigned namespace. The System Administrator can also configure namespaces and
perform namespace administration, or can assign a user who belongs to the namespace as the Namespace Administrator.
The System Administrator has access to the ECS Portal, and system administration operations can also be performed from
programmatic clients using the ECS Management REST API.
After initial installation of ECS, the System Administrator is a pre-provisioned local management user called root. The default
root user is described in Default management users.
Because management users are not replicated across sites, a System Administrator must be created at each VDC that requires
one.
System Monitor
The System Monitor role enables a user to have read-only access to the ECS Portal. The System Monitor can view all ECS
Portal pages and all information on the pages, except user detail information such as passwords and secret key data. The
System Monitor cannot provision or configure the ECS system. For example, the monitor cannot create or update storage pools,
replication groups, namespaces, buckets, or users through the portal or ECS Management REST API. Monitors cannot modify any
portal setting except their own passwords.
Because management users are not replicated across sites, a System Monitor must be created at each VDC that requires one.
Namespace Administrator
The Namespace Administrator is a management user who can access the ECS Portal.
The Namespace Administrator can assign local users as object users for the namespace and create and manage buckets
within the namespace. Namespace Administrator operations can also be performed using the ECS REST API. A Namespace
Administrator can only be the administrator of a single namespace.
Because authentication providers and namespaces are replicated across sites (they are ECS global resources), a domain user
who is a Namespace Administrator can log in at any site and perform namespace administration from that site.
NOTE: If a domain user is to be assigned to the Namespace Administrator role, the user must first be mapped into the namespace.
Local management users are not replicated across sites, so a local user who is a Namespace Administrator can only log in at the
VDC at which the management user was created. If you want the same username to exist at another VDC, the user must be
created at the other VDC. Because they are different users, changes to a same-named user at one VDC, such as a password change, are not propagated to the user with the same name at the other VDC.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can assign new object users into any namespace.
● A Namespace Administrator can assign new object users into the namespace in which they are the administrator.
● If you create an object user who will access the ECS object store through the OpenStack Swift object protocol, the
Swift user must belong to an OpenStack group. A group is a collection of Swift users that have been assigned a role
by an OpenStack administrator. Swift users that belong to the admin group can perform all operations on Swift buckets
(containers) in the namespace to which they belong. Do not add ordinary Swift users to the admin group. For Swift users
that belong to any group other than the admin group, authorization depends on the permissions that are set on the Swift
bucket. You can assign permissions on the bucket from the OpenStack Dashboard UI or in the ECS Portal using the Custom
Group ACL for the bucket. For more information, see Set custom group bucket ACLs.
Steps
1. In the ECS Portal, select Manage > Users.
2. On the User Management page, click New Object User.
3. On the New Object User page, in the Name field, type a name for the local object user.
You can type domain-style names that include @ (for example, user@domain.com). You might want to do this to keep
names unique and consistent with AD names. However, local object users are authenticated using a secret key that is
assigned to the username, not through AD or LDAP.
4. In the Namespace field, select the namespace that you want to assign the object user to, and then complete one of the
following steps:
● To add the object user, and return later to specify passwords or secret keys to access the ECS object protocols, click
Save.
● To specify passwords or secret keys to access the ECS object protocols, click Next to Add Passwords.
5. NOTE: You can lock or unlock an object user by:
● Edit > LOCK USER
● Edit > UNLOCK USER
On the Update Passwords for User <username> page, in the Object Access area, for each of the protocols that you want the user to use to access the ECS object store, type or generate a key for use in accessing the S3/Atmos, Swift, or CAS interfaces.
a. For S3 access, in the S3/Atmos box, click Generate & Add Secret Key.
The secret key (password) is generated.
To view the secret key in plain text, select the Show Secret Key checkbox.
To create a second secret key to replace the first secret key for security reasons, click Generate & Add Secret Key.
The Add S3/Atmos Secret Key/Set Expiration on Existing Secret Key dialog is displayed. When adding a second secret key, you can specify how long to retain the first secret key. When this time has elapsed, the first secret key expires.
In the Minutes field, type the number of minutes for which you want to retain the first password before it expires. For
example, if you typed 3 minutes, you would see This password will expire in 3 minute(s).
After 3 minutes, you would see that the first password displays as expired and you could then delete it.
b. For Swift access:
● In the Swift Groups field, type the OpenStack group to which the user belongs.
● In the Swift password field, type the OpenStack Swift password for the user.
● Click Set Groups & Password.
If you want an S3 user to be able to access Swift buckets, you must add a Swift password and group for the user. The
S3 user is authenticated by using the S3 secret key, and the Swift group membership enables access to Swift buckets.
c. For CAS access:
● In the CAS field, type the password and click Set Password or click Generate to automatically generate the
password and click Set Password.
● Click Generate PEA file to generate a Pool Entry Authorization (PEA) file. The file output displays in the PEA file
box and the output is similar to the following example. The PEA file provides authentication information to CAS before
CAS grants access to ECS; this information includes the username and secret. The secret is the base64-encoded
password that is used to authenticate the ECS application.
NOTE: The Generate PEA file button is displayed only after the password is set.
<pea version="1.0.0">
  <defaultkey name="s3user4">
    <credential id="csp1.secret" enc="base64">WlFOOTlTZUFSaUl3Mlg3VnZaQ0k=</credential>
  </defaultkey>
  <key type="cluster" id="93b8729a-3610-33e2-9a38-8206a58f6514" name="s3user4">
    <credential id="csp1.secret" enc="base64">WlFOOTlTZUFSaUl3Mlg3VnZaQ0k=</credential>
  </key>
</pea>
● In the Default Bucket field, select a bucket, and click Set Bucket.
● Optional. Click Add Attribute and type values in the Attribute and Group fields.
● Click Save Metadata.
6. Click Close.
The passwords/secret keys are saved automatically.
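The secret in the PEA file shown in the CAS step above is simply the base64 encoding of the CAS password; decoding the example credential value recovers it:

```python
import base64

# The enc="base64" credential value from the example PEA file
encoded = "WlFOOTlTZUFSaUl3Mlg3VnZaQ0k="
password = base64.b64decode(encoded).decode("ascii")
print(password)  # prints the plain-text CAS password
```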
Prerequisites
● AD or LDAP domain users must have been added to ECS through an AD or LDAP authentication provider. Adding an
authentication provider must be performed by a System Administrator and is described in Add an AD or LDAP authentication
provider.
● Domain users must have been added into a namespace by a System or Namespace Administrator, as described in Add
domain users into a namespace.
Steps
Domain users can create secret keys for themselves by using the instructions in the ECS Data Access Guide.
When a domain user creates their own secret key, they become an object user in the ECS system.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● An authentication provider must exist in the ECS system that provides access to the domain that includes the users you
want to add into the namespace.
Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, beside the namespace, click Edit.
3. On the Edit Namespace page, click Domain and type the name of the domain in the Domain field.
4. In the Groups field, type the names of the groups that you want to use to add users into the namespace.
The groups that you specify must exist in AD.
5. In the Attribute and Values fields, type the name of the attribute and the values for the attribute.
The specified attribute values for the users must match the attribute values specified in AD or LDAP.
If you do not want to use attributes to add users into the namespace, click the Attribute button with the trash can icon to
remove the attribute fields.
6. Click Save.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● By default, the ECS root user is assigned the System Administrator role and can perform the initial assignment of a user to
the System Administrator role.
● To assign a domain user or an AD or LDAP group to a management role, the domain users or AD or LDAP group must have
been added to ECS through an authentication provider. Adding an authentication provider must be performed by a System
Administrator and is described in Add an AD or LDAP authentication provider.
Steps
1. In the ECS Portal, select Manage > Users.
2. On the User Management page, click the Management Users tab.
3. Click New Management User.
4. Click AD/LDAP User or Group or Local User.
● For a domain user, in the Username field, type the name of the user. The username and password that ECS uses to
authenticate a user are held in AD or LDAP, so you do not need to define a password.
● For an AD or LDAP group, in the Group Name field, type the name of the group. The username and password that ECS
uses to authenticate the AD or LDAP group are held in AD or LDAP, so you do not need to define a password.
● For a local user, in the Name field, type the name of the user and in the Password field, type the password for the user.
NOTE: User names can include uppercase letters, lowercase letters, numbers and any of the following characters: ! # $
&'()*+,-./:;=?@_~
5. To assign the System Administrator role to the user or AD or LDAP group, in the System Administrator box, click Yes.
If you select Yes, but later you want to remove System Administrator privileges from the user, you can edit this setting and
select No.
6. To assign the System Monitor role to the user or AD or LDAP group, in the System Monitor box, click Yes.
7. Click Save.
Prerequisites
● This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Manage > Namespace.
2. On the Namespace Management page, beside the namespace into which you want to assign the Namespace
Administrator, click Edit.
3. On the Edit Namespace page:
a. For a local management user or a domain user, in the Namespace Admin field, type the name of the user to whom you
want to assign the Namespace Administrator role.
To add more than one Namespace Administrator, separate the names with commas.
A user can be assigned as the Namespace Administrator only for a single namespace.
b. For an AD or LDAP group, in the Domain Group Admin field, type the name of the AD or LDAP group to which you want
to assign the Namespace Administrator role.
When the AD or LDAP group is assigned the Namespace Administrator role, all users in the group are assigned this role.
An AD or LDAP group can be the Namespace Administrator only for one namespace.
4. Click Save.
Account Management
Account Management enables you to manage IAM identities, such as users, groups, and roles, within each namespace.
Every IAM entity has a unique ID associated with it. Deleting and re-creating an entity with the same name creates a new unique ID for the new entity.
Identities
Table 20. Identities
Field Description
Namespace root user ● The namespace root user is an admin user in the
namespace.
● Only the namespace root user can access the ECS UI.
● The namespace root user is the owner of the buckets and
objects that are created by the IAM entities.
IAM user ● An IAM user is a person or an application in the namespace
that can interact with ECS resources.
● An IAM user can belong to one or more IAM groups.
NOTE: IAM and namespace root users access S3 and IAM APIs using Access Keys. Access Keys are long-term credentials
which consist of an access key ID and a secret access key. A user can have at most two Access Keys associated with it at
any time.
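Clients use the access key pair to sign S3-style requests. Assuming the standard AWS Signature Version 4 scheme used by S3-compatible endpoints, the signing-key derivation chains HMAC-SHA256 over the secret access key; the key, date, and region values below are illustrative:

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    """Derive a SigV4 signing key by chaining HMAC-SHA256 over the date,
    region, service, and the literal string "aws4_request"."""
    k_date = hmac.new(("AWS4" + secret_key).encode(), date.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# Sample secret access key from the AWS documentation (not a real credential)
signing_key = derive_signing_key(
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "20240115", "us-east-1", "s3"
)
```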
Access Management
Access is managed by creating policies and attaching them to IAM identities or resources.
Policies
A policy is an object that, when associated with an identity or resource, defines its permissions. Permissions in the policies determine whether a request is permitted or denied. Policies are stored in JSON format. ECS IAM enables creation, modification, listing, assigning, and deletion of policies on an identity or resource.
The following policy types are supported:
ACLs
Access control lists enable you to manage access to objects and buckets. An ACL is attached to all objects and buckets.
Users
An IAM user represents a person or application in the namespace that can interact with ECS resources.
Steps
1. Select Manage > Identity and Access (S3) > Users > NEW USER.
2. In the Name field, type a unique name for the user and click NEXT.
To cancel creating a user, click CANCEL.
3. Add the user to one or more groups that give the user permissions to perform the required tasks, and click NEXT.
● You can also attach permission policies to the user and grant permissions.
● To limit the permissions of a user, you can set a permissions boundary.
4. You can attach tags to add metadata to a user.
5. Click NEXT.
6. Review the data of the user, and click CREATE USER.
7. Click COMPLETE.
Delete Users
Steps
1. Select Manage > Identity and Access (S3) > Users > DELETE USERS.
2. Select the user.
You can select more than one user.
3. Click DELETE USERS.
4. Click OK in the pop-up window.
Groups
An IAM group is a collection of IAM users. You can use groups to specify permissions for a collection of IAM users.
Delete Groups
About this task
NOTE: Deleting a group removes the permissions belonging to the group. Users that are members of the group also lose their membership in it.
Steps
1. Select Manage > Identity and Access (S3) > Groups > DELETE GROUPS.
2. Select the group.
You can select more than one group.
3. Click DELETE GROUPS.
4. Click OK in the pop-up window.
Roles
A role is similar to a user, in that it is an identity with permission policies that determine what the identity can and cannot do. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have any credentials (password or access keys) associated with it. Instead, when a user is assigned to a role, access keys are created dynamically and provided to the user.
Effect
Principal
Assume Role Policy Document The trust relationship policy document that grants an entity permission to assume the role.
Permissions Boundary The ARN of the policy that is used to set the permissions boundary for the role.
Tags A list of tags that you want to attach to the newly created role. Each tag consists of a key name and an associated value.
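The Assume Role Policy Document uses the standard JSON policy grammar. A hypothetical trust policy that allows a principal to assume the role might look like the following sketch; the principal ARN is a placeholder, not an actual ECS identifier:

```python
import json

# Hypothetical trust relationship policy; the principal ARN below is a
# placeholder -- substitute a real identity ARN from your namespace.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam::ns1:user/app-user"]},
            "Action": "sts:AssumeRole",
        }
    ],
}
trust_policy_json = json.dumps(trust_policy, indent=2)
```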
Steps
1. Select Manage > Identity and Access (S3) > Roles > NEW ROLE.
2. In the Name field, type a unique name for the role. Type a description of the role in the Description field. Click Edit to set the maximum session duration for the role, and click NEXT.
3. NOTE: Choose either Step 3 or Step 4, then go to Step 5.
Click Namespace.
a. Select Effect: Allow or Deny.
b. Click ADD PRINCIPAL ARN to add principal ARN, and click SAVE.
c. Click NEXT.
4. Click SAML2.0 Federation and click NEXT.
a. Select a SAML Provider.
b. Select an Attribute.
c. Enter a Value.
Click ADD CONDITION if you want to add conditions.
d. Click NEXT.
e. Enter the JSON file and click NEXT.
5. Add permissions to the role and click NEXT.
To limit permissions of a role, you can set permissions boundary.
6. You can attach tags to add metadata to a role.
7. Click NEXT.
8. Review the data of the role, and click CREATE ROLE.
9. Click COMPLETE.
NOTE: Clicking EDIT TRUST RELATIONSHIP opens a JSON editor that contains the trust policy.
Delete Roles
NOTE: Deleting a role also deletes its inline policies and role policy attachments.
Steps
1. Select Manage > Identity and Access (S3) > Roles > DELETE ROLES.
2. Select the role.
You can select more than one role.
3. Click DELETE ROLES.
4. Click OK in the pop-up window.
Policies
An IAM policy is a document in JSON format that defines permissions for an identity.
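For example, a minimal policy document might grant read-only access to a single bucket; the bucket name and the choice of actions below are illustrative:

```python
import json

# Hypothetical read-only policy; the bucket name is a placeholder.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::finance-reports",
                "arn:aws:s3:::finance-reports/*",
            ],
        }
    ],
}
policy_json = json.dumps(read_only_policy, indent=2)
```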
New Policy
Steps
1. NOTE: Inline policies can be applied to IAM users after the users are created, by using the edit operation. However, using managed policies is encouraged.
Select Manage > Identity and Access (S3) > Policies > NEW POLICY.
2. Enter the details in the Name, and Description fields, and click NEXT.
3. Select the appropriate options in the Service, Actions, Resources, and Request Condition fields, and click NEXT.
Optionally you can click ADD ADDITIONAL PERMISSIONS to set additional permissions to the new policy.
4. Review the data of the new policy, and click CREATE POLICY.
5. Click COMPLETE.
Delete Policies
Steps
1. Select Manage > Identity and Access (S3) > Policies > DELETE POLICIES.
2. Select the policy.
You can select more than one policy.
3. Click DELETE POLICIES.
4. Click OK in the pop-up window.
Steps
1. Select Manage > Identity and Access (S3) > Policies > POLICY SIMULATOR.
The POLICY SIMULATOR opens in a new tab.
2. Select Namespace from the drop-down in the page header.
Existing Policies is the default in Mode.
3. Select an IAM entity to test the policy that is attached to it, at the upper left of the page.
● Select Group from the Group, Role, User drop-down list. Then choose the group. Or,
● Select Role from the Group, Role, User drop-down list. Then choose the role. Or,
● Select User from the Group, Role, User drop-down list. Then choose the user.
After you select the entity, you can see the policies that are attached to it.
4. Select the policy.
You can select more than one policy to test.
Click a policy to see its details.
5. Select Service.
6. Select Actions.
7. Select Global Settings.
Update the fields as required to test the policy.
NOTE:
● Include Resource Policy is available only for buckets and objects. Select Include Resource Policy, if you want to
include the policies that are associated with the bucket or the object in the policy simulation.
● Caller ARN is the ARN of the IAM user that you want to use as the simulated caller of the API operations. Caller
ARN is required if you include a resource policy so that the principal element of the policy has a value to use in
evaluating the policy.
● For global settings, if a policy is simulated without specifying a global condition key value, a message appears:
You have not specified any value for global conditions, are you sure to proceed
on simulation run anyway?
with Ok and Cancel options. Click Ok to simulate the policy, ignoring the global condition key.
Results
The result of the simulation is displayed in the Permission column.
Identity Provider
Table 26. Identity Provider
Field Description
Name Name of the identity provider.
Type Only SAML is supported.
Created The time at which the identity provider is created.
Metadata An XML document generated by an identity provider (IdP)
that supports SAML 2.0.
Steps
1. Select Manage > Identity and Access (S3) > Identity Provider > NEW IDENTITY PROVIDER.
2. Enter the details in the Name and Type fields.
3. Click Choose to select the identity provider metadata file.
4. Click NEXT.
5. Verify the details of the identity provider, and click NEW IDENTITY PROVIDER.
6. Click COMPLETE.
Delete Providers
Steps
1. Select Manage > Identity and Access (S3) > Identity Provider > DELETE PROVIDERS.
2. Select the identity provider.
You can select more than one identity provider.
3. Click DELETE PROVIDERS.
4. Click OK in the pop-up window.
Steps
1. Select Manage > Identity and Access (S3) > SAML Service Provider Metadata.
2. Click Choose to select a Java Key Store.
3. Enter the details in the Key Alias, Key Password, DNS Base URL fields.
4. Click GENERATE.
Steps
1. NOTE: A root user can have a maximum of two access keys that are associated with it at any time.
Select Manage > Identity and Access (S3) > Root Access Key.
Introduction to buckets
Buckets are object containers that are used to control access to objects and to set properties that define attributes for all
contained objects, such as retention periods and quotas.
In S3, object containers are called buckets and this term has been adopted as a general term in ECS. In Atmos, the equivalent of
a bucket is a subtenant. In Swift, the equivalent of a bucket is a container. In CAS, a bucket is a CAS pool.
In ECS, buckets are assigned a type, which can be S3, Swift, Atmos, or CAS. S3, Atmos, or Swift buckets can be configured
to support file system access (for NFS). A bucket that is configured for file system access can be read and written by using
its object protocol and by using the NFS protocol. S3 and Swift buckets can also be accessed using each other's protocol.
Accessing a bucket using more than one protocol is often referred to as cross-head support.
You can create buckets for each object protocol using its API, usually using a client that supports the appropriate protocol. For information about how to create a bucket using the S3 API, see the ECS Data Access Guide.
You can also create S3, file system-enabled (NFS), and CAS buckets using the ECS Portal and the ECS Management REST API.
Bucket names for the different object protocols must conform to the ECS specifications described in Bucket and key naming
conventions.
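As an illustration of the general S3-style rules (3 to 63 characters; lowercase letters, digits, hyphens, and dots; alphanumeric at both ends), a name check might look like the sketch below. The ECS-specific rules in Bucket and key naming conventions take precedence over this approximation:

```python
import re

# General S3-style bucket name rules (illustrative; ECS conventions may differ):
# 3-63 chars, lowercase letters/digits/hyphens/dots, alphanumeric at both ends.
S3_NAME_RE = re.compile(r"^(?=.{3,63}$)[a-z0-9](?:[a-z0-9.-]*[a-z0-9])?$")

def is_valid_bucket_name(name: str) -> bool:
    if ".." in name:  # consecutive dots are not allowed
        return False
    return bool(S3_NAME_RE.match(name))
```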
Bucket ownership
A bucket is assigned to a namespace and object users are also assigned to a namespace. An object user can create buckets only
in the namespace to which the object user is assigned. An ECS System or Namespace Administrator can assign the object user
as the owner of a bucket, or a grantee in a bucket ACL, even if the user does not belong to the same namespace as the bucket,
so that buckets can be shared between users in different namespaces. For example, in an organization where a namespace is a
department, a bucket can be shared between users in different departments.
Bucket access
Objects in a bucket that belong to a replication group spanning multiple VDCs can be accessed from all of the VDCs in the
replication group. Objects in a bucket that belongs to a replication group that is associated with only one VDC can be accessed
from only that VDC. Buckets cannot be accessed or listed from other VDCs that are not in the replication group. However,
because the identity of a bucket and its metadata, such as its ACL, are global management information in ECS, and the global
management information is replicated across the system storage pools, the existence of the bucket can be seen from all VDCs in
the federation.
For information about how objects in buckets can be accessed during site outages, see TSO behavior with the ADO bucket
setting turned on.
In the ECS Portal, you can:
● Create a bucket
● Edit a bucket
● Set ACLs
● Set bucket policies
Bucket settings
The following table describes the settings that you can specify when you create or edit a bucket:
Table 29. Bucket settings
Attribute Description Can be Edited
CAS Indicates whether the bucket can be used for CAS data. No
Metadata Search Indicates that metadata search indexes are created for the bucket, based on specified key values. No
● If turned on, metadata keys that are used as the basis for indexing objects in the bucket can be defined. These keys must be specified at bucket creation time.
● After the bucket is created, search can be turned off altogether, but the configured index keys cannot be changed.
● The way to define the attribute is described in Metadata search fields.
NOTE: Metadata that is used for indexing is not encrypted, so metadata search can still be used on a bucket when Server-side Encryption (D@RE) is turned on.
Table 29. Bucket settings (continued)
Attribute Description Can be Edited
Data Mobility Indicates whether Data Mobility is enabled. Requires Metadata Search and the LastModified field to be indexed. Data Mobility allows you to set up automated copying of bucket data to a target bucket. This target bucket can be on an external ECS cluster or in the cloud on AWS or similar S3-compatible storage. The bucket user must be an IAM user to enable Data Mobility. Yes
Access During Outage (ADO) The ECS system behavior when accessing data in the bucket during a temporary site outage in a geo-federated setup. Yes
● If you turn this setting on and a temporary site outage occurs, and you cannot access a bucket at the failed site where the bucket was created (the owner site), you can access a copy of the bucket at another site. Objects that you access in the buckets in the namespace might have been updated at the failed site, but the changes might not have been propagated to the site from which you are accessing the object.
● Turning this setting on in Object Lock-enabled buckets is disabled by default. Users can explicitly request that system administrators allow this feature after understanding the data loss risks in Object Lock-enabled buckets during a temporary site outage.
● If you turn this setting off, data in the site that has the temporary outage is not available for access from other sites, and object reads for data that is owned by the failed site fail. This is the default ECS system behavior, which maintains strong consistency by continuing to allow access to data owned by accessible sites and preventing access to data owned by a failed site.
● Read-Only option: Specifies whether a bucket with the ADO setting turned on is accessible as read-only or read/write during a temporary site outage. If you select the Read-Only option, the bucket is accessible only in read-only mode during the outage.
● For more information, see TSO behavior with the ADO bucket setting turned on.
Server-side Encryption Indicates whether server-side encryption is turned on or off. No
● Server-side encryption, also known as Data At Rest Encryption or D@RE,
encrypts data inline before storing it on ECS disks or drives. This encryption
helps prevent sensitive data from being acquired from discarded or stolen
media. If you turn encryption on when the bucket is created, this feature cannot
be turned off later.
● If the namespace of the bucket is encrypted, then every bucket is encrypted.
If the namespace is not encrypted, you can select encryption for individual
buckets.
● For a complete description of the feature, see the ECS 3.8 Security
Configuration and Hardening Guide.
Quota The storage space limit that is specified for the bucket. You can specify a storage limit for the bucket and define notification and access behavior when the quota is reached. The quota set for a bucket cannot be less than 1 GiB. You can specify bucket quota settings in increments of 1 GiB. You can select one of the following quota behavior options: Yes
● Notification Only at <quota_limit_in_GiB>: Soft quota setting at which you are notified.
● Block Access Only at <quota_limit_in_GiB>: Hard quota setting which, when reached, prevents write/update access to the bucket.
● Block Access at <quota_limit_in_GiB> and Send Notification at <quota_limit_in_GiB>: Hard quota setting which, when reached, prevents write/update access to the bucket, and the quota setting at which you are notified.
NOTE: Quota enforcement depends on the usage that is reported by ECS metering. Metering is a background process that is designed so that it does not impact foreground traffic, so the metered value can lag the actual usage. Because of the metering lag, there can be a delay in the enforcement of quotas.
Table 29. Bucket settings (continued)
Attribute Description Can be Edited
Bucket Tagging Name-value pairs that are defined for a bucket and enable buckets to be classified. Yes
For more information about bucket tagging, see Bucket tagging.
Bucket Retention Retention period for a bucket. Yes
● The expiration of a retention period on an object within a bucket is calculated
when a request to modify an object is made and is based on the value set on the
bucket and the objects themselves.
● The retention period can be changed during the lifetime of the bucket.
● You can find more information about retention and applying retention periods
and policies in Retention periods and policies.
Auto-Commit Period The autocommit period is the time interval during which updates through NFS are allowed for objects under retention. This attribute enables NFS files that are written to ECS to be WORM-compliant. The interval is calculated from the last modification time. The autocommit value must be less than or equal to the retention value, with a maximum of 1 day. A value of 0 indicates no autocommit period. Yes
Local Object Metadata Reads Indicates whether Local CAS Metadata Reads is turned on or off. If turned on, this setting improves latency on CAS read operations when the metadata has been successfully replicated to the local site. Yes
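The auto-commit constraint in the table above (the value must not exceed the bucket retention value, is capped at 1 day, and 0 means no autocommit period) can be sketched as a simple check; the function name and the use of seconds as the unit are illustrative:

```python
ONE_DAY_SECONDS = 24 * 60 * 60

def is_valid_autocommit(autocommit_seconds: int, retention_seconds: int) -> bool:
    """Check the auto-commit rule: 0 disables autocommit; otherwise the
    value must not exceed the retention value and is capped at 1 day."""
    if autocommit_seconds == 0:
        return True
    return autocommit_seconds <= min(retention_seconds, ONE_DAY_SECONDS)
```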
Default group
When you turn the File System setting on for a bucket to enable file system access, you can assign a default group for the
bucket. The default group is a Unix group, the members of which have permissions on the bucket when it is accessed as a file
system. Without this assignment, only the bucket owner can access the file system.
You can also specify Unix permissions that are applied to files and directories created using object protocols so that they are
accessible when the bucket is accessed as a file system.
Bucket tagging
Bucket tags are key-value pairs that you can associate with a bucket so that the object data in the bucket can be categorized.
For example, you could define keys like Project or Cost Center on each bucket and assign values to them. You can add up
to ten tags to a bucket.
You can assign bucket tags and values using the ECS Portal or using a custom client through the ECS Management REST API.
Bucket tags are included in the metering data reports displayed in the ECS Portal or retrieved using the ECS Management REST
API.
Create a bucket
You can create and configure S3, S3+FS, or CAS buckets in the ECS Portal.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can create buckets in any namespace.
● A Namespace Administrator can create buckets in the namespace in which they are the administrator.
Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, click New Bucket.
3. On the New Bucket page, on the Basic tab, do the following:
a. In the Name field, type the bucket name.
b. In the Namespace field, select the namespace that you want the bucket and its objects to belong to.
c. In the Replication Group field, select the replication group that you want to associate the bucket with.
d. In the Bucket Owner field, type the bucket owner, or select the Set current user as Bucket Owner checkbox.
The bucket owner must be an ECS object user for the namespace. If you do not specify a user, you are assigned as the
owner. However, you cannot access the bucket unless your username is also assigned as an object user.
The user that you specify is given Full Control.
e. Click Next.
4. On the New Bucket page, on the Required tab, do the following:
a. In the File System field, click On to specify that the bucket supports operation as a file system (for NFS).
The bucket is an S3 bucket that supports file systems.
You can set a default UNIX group for access to the bucket and for objects that are created in the bucket. For more
information, see Default group.
b. In the CAS field, click On to set the bucket as a CAS bucket.
By default, CAS is disabled and the bucket is marked as an S3 bucket.
In the Reflection Expiration field, click On to configure an expiration time for reflections in the bucket.
In the Reflection Age field, select the appropriate expiration time. (The minimum expiration time is 1 day, and the
maximum is 99 years.)
If there is no configured expiration time for a reflection, the reflection is never deleted.
In the Local Object Metadata Reads field, click On to enable Local CAS Metadata reads. This option is available only
when the bucket is enabled for CAS data.
You can enable or disable Local Object Metadata Reads by editing the bucket. Ensure that ADO is enabled and Read-Only
is disabled before enabling Local Object Metadata Reads.
c. In the Metadata Search field, click On to specify that the bucket supports searches that are based on object metadata.
If the Metadata Search setting is turned on, you can add user and system metadata keys that are used to create object
indexes. For more information about entering metadata search keys, see Metadata search fields.
NOTE: If the bucket supports CAS, metadata search is automatically enabled and a CreateTime key is automatically
created. The metadata can be searched using the S3 metadata search capability or using the Centera API.
d. In the Access During Outage field, click On if you want the bucket to be available during a temporary site outage. For
more information about this option, see TSO behavior with the ADO bucket setting turned on.
● If the Access During Outage setting is turned on, you have the option of selecting the Read-Only checkbox to
restrict create, update, or delete operations on the objects in the bucket during a temporary site outage. Once you
turn the Read-Only option on for the bucket, you cannot change it after the bucket is created. For more information
about this option, see TSO behavior with the ADO bucket setting turned on.
e. In the Server-side Encryption field, click On to specify that the bucket is encrypted.
f. Click Next.
82 Buckets
5. On the New Bucket page, on the Optional tab, do the following:
a. In the Quota field, click On to specify a quota for the bucket and select the quota setting you require.
The settings that you can specify are described in Bucket settings.
b. In the Bucket Tagging field, click Add to add tags, and type name-value pairs.
For more information, see Bucket tagging.
c. In the Bucket Retention Period field, type a time period to set a bucket retention period for the bucket, or click
Infinite if you want objects in the bucket to be retained forever.
For more information about retention periods, see Retention periods and policies.
d. In the Auto-Commit Period field, type a time period to enable updates to the files that are under retention. The interval
applies only to file enabled buckets.
The autocommit value must be less than or equal to the retention value with a maximum of 1 day. A value of 0 indicates
no autocommit period.
e. Click Save to create the bucket.
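The autocommit rule in step 5.d can be sketched as a simple check (illustrative only; the helper name is ours, and ECS performs its own server-side validation):

```python
# Sketch of the autocommit rule: 0 disables autocommit; otherwise the
# value must not exceed the bucket retention period, capped at one day.
MAX_AUTOCOMMIT_SECONDS = 24 * 60 * 60  # 1 day

def autocommit_is_valid(autocommit_s: int, retention_s: int) -> bool:
    if autocommit_s == 0:
        return True  # no autocommit period
    return 0 < autocommit_s <= min(retention_s, MAX_AUTOCOMMIT_SECONDS)
```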
Results
To assign permissions on the bucket for users or groups, see the tasks below.
Edit a bucket
You can edit some bucket settings after the bucket has been created and after it has had objects that are written to it.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the settings for a bucket in any namespace.
● A Namespace Administrator can edit the settings for a bucket in the namespace in which they are the administrator.
To edit a bucket, you must be assigned to the Namespace Administrator or System Administrator role.
NOTE: You can copy your S3 bucket ARN using the copy icon next to your ARN.
Known issue with changing NFS bucket owners: If you change the bucket owner and try to revert the change, reverting the bucket ownership does not work. You cannot access the NFS mount as the previous bucket owner until you reset the bucket ownership using the API mentioned in support KB article KB 534080.
Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, in the Buckets table, select the Edit action for the bucket for which you want to
change the settings.
3. Edit the settings that you want to change.
You can find out more information about the bucket settings in Bucket settings.
4. Click Save.
Set ACLs
The privileges a user has when accessing a bucket are set using an Access Control List (ACL). You can assign ACLs for a user,
for predefined groups, such as all users, and for a custom group.
When you create a bucket and assign an owner to it, an ACL is created that assigns a default set of permissions to the bucket
owner - the owner is, by default, assigned full control.
You can modify the permissions that are assigned to the owner, or you can add new permissions for a user by selecting the Edit
ACL operation for the bucket.
In the ECS Portal, the Bucket ACLs Management page has User ACLs, Group ACLs, and Custom Group ACLs tabs to
manage the ACLs associated with individual users and predefined groups, and to allow groups to be defined that can be used to
access the bucket as a file system.
NOTE: For information about ACLs with CAS buckets, see the ECS Data Access Guide.
The ACL permissions that can be assigned are provided in the following table. The permissions that are applicable depend on the
type of bucket.
Privileged Write: Allows the user to perform writes to a bucket or object when the user does not have normal write permission. Required for CAS buckets.
Delete: Allows the user to delete buckets and objects. Required for CAS buckets.
None: The user has no privileges on the bucket.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the ACL settings for a bucket in any namespace.
● A Namespace Administrator can edit the ACL settings for a bucket in the namespace in which they are the administrator.
Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to edit in the table and select the Edit ACL action.
3. On the Bucket ACLs Management page, the User ACLs tab displays by default and shows the ACLs that have been
applied to the users who have access to the bucket.
The bucket owner has default permissions that are assigned.
NOTE: Because the ECS Portal supports S3, S3 + NFS File system, and CAS buckets, the range of permissions that can
be set are not applicable to all bucket types.
4. To set (or remove) the ACL permissions for a user that already has permissions that are assigned, in the ACL table, in the
Action column, click Edit or Remove.
5. To add a user and assign ACL permissions to the bucket, click Add.
a. Enter the username of the user that the permissions apply to.
b. Select the permissions for the user.
For more information about ACL permissions, see Bucket ACL permissions reference.
6. Click Save.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the group ACL settings for a bucket in any namespace.
● A Namespace Administrator can edit the group ACL settings for a bucket in the namespace in which they are the
administrator.
Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to edit in the table and select the Edit ACL action.
3. Click the Group ACLs tab to set the ACL permissions for a predefined group.
4. Click Add.
5. The Edit Group page is displayed.
The group names are described in the following table:
Set custom group bucket ACLs
You can set a group ACL for a bucket in the ECS Portal and you can set bucket ACLs for a group of users (Custom Group ACL),
for individual users, or a combination of both. For example, you can grant full bucket access to a group of users, but you can
also restrict (or even deny) bucket access to individual users in that group.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can edit the group ACL settings for a bucket in any namespace.
● A Namespace Administrator can edit the group ACL settings for a bucket in the namespace in which they are the
administrator.
Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to edit in the table and select the Edit ACL action.
3. Click the Custom Group User ACLs tab to set the ACL for a custom group.
4. Click Add.
The Edit Custom Group page displays.
5. On the Edit Custom Group page, in the Custom Group Name field, type the name for the group.
This name can be a Unix/Linux group, or an Active Directory group.
6. Select the permissions for the group.
At a minimum you should assign Read, Write, Execute, and Read ACL.
7. Click Save.
Figure 8. Bucket Policy Editor code view
The tree view, which is shown in the following screenshot, provides a mechanism for navigating a policy and is useful where you
have many statements in a policy. You can expand and contract the statements and search them.
Bucket policy scenarios
In general, the bucket owner has full control on a bucket and can grant permissions to other users and can set S3 bucket
policies using an S3 client. In ECS, it is also possible for an ECS System or Namespace Administrator to set bucket policies using
the Bucket Policy Editor from the ECS Portal.
You can use bucket policies in the following typical scenarios:
● Grant bucket permissions to a user
● Grant bucket permissions to all users
● Automatically assign permissions to created objects
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "Grant permission to user1",
      "Effect": "Allow",
      "Principal": ["user1"],
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": ["mybucket/*"]
    }
  ]
}
You can also add conditions. For example, if you want the user to read and write objects only when accessing the bucket from a
specific IP address, add an IpAddress condition as shown in the following policy:
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "Grant permission",
      "Effect": "Allow",
      "Principal": ["user1"],
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": ["mybucket/*"],
      "Condition": {"IpAddress": {"aws:SourceIp": "<Ip address>"}}
    }
  ]
}
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId2",
  "Statement": [
    {
      "Sid": "statement2",
      "Effect": "Allow",
      "Principal": ["*"],
      "Action": ["s3:GetObject"],
      "Resource": ["mybucket/*"]
    }
  ]
}
{
  "Version": "2012-10-17",
  "Id": "S3PolicyId3",
  "Statement": [
    {
      "Sid": "statement3",
      "Effect": "Allow",
      "Principal": ["user1", "user2"],
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": ["mybucket/*"],
      "Condition": {"StringEquals": {"s3:x-amz-acl": ["public-read"]}}
    }
  ]
}
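Policies like the examples above can also be assembled programmatically before being pasted into the Bucket Policy Editor. The following sketch builds the condition-based policy; the function name and its parameters are ours, not an ECS API:

```python
import json

def make_policy(policy_id, sid, principals, actions, resources, condition=None):
    """Assemble an S3-style bucket policy document (illustrative sketch)."""
    statement = {
        "Sid": sid,
        "Effect": "Allow",
        "Principal": principals,
        "Action": actions,
        "Resource": resources,
    }
    if condition is not None:
        statement["Condition"] = condition
    return {"Version": "2012-10-17", "Id": policy_id, "Statement": [statement]}

policy = make_policy(
    "S3PolicyId1",
    "Grant permission to user1",
    ["user1"],
    ["s3:PutObject", "s3:GetObject"],
    ["mybucket/*"],
    condition={"IpAddress": {"aws:SourceIp": "<Ip address>"}},
)
print(json.dumps(policy, indent=2))  # paste the output into the Bucket Policy Editor
```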
Prerequisites
This operation requires the System Administrator or Namespace Administrator role (for the namespace to which the bucket
belongs).
Steps
1. In the ECS Portal, select Manage > Buckets
2. From the Namespace drop-down, select the namespace to which the bucket belongs.
3. In the Actions column for the bucket, select Edit Policy from the drop-down menu.
4. Provided your policy is valid, you can switch to the tree view of the Bucket Policy Editor. The tree view makes it easier to
view your policy and to expand and contract statements.
5. In the Bucket Policy Editor, type the policy or copy and paste a policy that you have previously created.
Some examples are provided in Bucket policy scenarios and full details of the supported operations and conditions are
provided in the ECS Data Access Guide.
6. Click Save.
The policy is validated and, if valid, the Bucket Policy Editor exits and the portal displays the Bucket Management page. If
the policy is invalid, the error message provides information about the reason the policy is invalid.
Restrict user IP addresses that can access a CAS bucket
You can restrict the client IP addresses that can access a CAS bucket. Only the IP addresses in the restriction list can access
ECS as the corresponding user.
Introduction
● By default, there is no restriction set for a user.
● CAS user IP restriction is applicable for users across all VDCs.
● Default IP limit for a user is 10, and it is configurable.
NOTE: To change the default IP limit value, contact ECS Remote Support.
PUT /object/user-cas/ip-restrictions/{namespace_name}/{user_name}
response body example 1: (by default when there is no IP restriction set for the user)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions>
<user_name>clientuser</user_name>
</user_ip_restrictions>
response body example 2: (only the client IPs provided below will have access as the user)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<user_ip_restrictions_param>
  <ip_restrictions>128.128.128.128</ip_restrictions>
  <ip_restrictions>127.127.127.127</ip_restrictions>
  <user_name>clientuser</user_name>
</user_ip_restrictions_param>
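The request body above can be generated with a few lines of code (an illustrative sketch; only the element names shown in the example are assumed, and the helper name is ours):

```python
import xml.etree.ElementTree as ET

def build_ip_restrictions(user_name, ips):
    # Build the <user_ip_restrictions_param> body shown above.
    root = ET.Element("user_ip_restrictions_param")
    for ip in ips:
        ET.SubElement(root, "ip_restrictions").text = ip
    ET.SubElement(root, "user_name").text = user_name
    return ET.tostring(root, encoding="unicode")

body = build_ip_restrictions("clientuser", ["128.128.128.128", "127.127.127.127"])
```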
GET /object/user-cas/ip-restrictions/{namespace_name}/{user_name}
GET /object/user-cas/ip-restrictions/
response body example 1: a user without any client IP restriction set (by default, no restriction is set for a user)
response body example 2: a user with a client IP restriction set
Prerequisites
● To create a bucket, ECS must have at least one replication group configured.
● Ensure that Perl is installed on the Linux machine on which you run s3curl.
● Ensure that the curl tool and the s3curl tool are installed. The s3curl tool acts as a wrapper around curl.
● To use s3curl with x-emc headers, minor modifications must be made to the s3curl script. You can obtain the modified,
ECS-specific version of s3curl from the EMCECS Git Repository.
● Ensure that you have obtained a secret key for the user who creates the bucket. For more information, see ECS Data
Access Guide.
Steps
1. Obtain the identity of the replication group in which you want the bucket to be created, by typing the following command:
The response provides the name and identity of all data services virtual pools. In the following example, the ID is
urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global.
<data_service_vpools>
  <data_service_vpool>
    <creation_time>1403519186936</creation_time>
    <id>urn:storageos:ReplicationGroupInfo:8fc8e19b-edf0-4e81-bee8-79accc867f64:global</id>
    <inactive>false</inactive>
    <tags/>
    <description>IsilonVPool1</description>
    <name>IsilonVPool1</name>
    <varrayMappings>
      <name>urn:storageos:VirtualDataCenter:1de0bbc2-907c-4ede-b133-f5331e03e6fa:vdc1</name>
      <value>urn:storageos:VirtualArray:793757ab-ad51-4038-b80a-682e124eb25e:vdc1</value>
    </varrayMappings>
  </data_service_vpool>
</data_service_vpools>
2. Set up s3curl by creating a .s3curl file in which to enter the user credentials.
The .s3curl file must have permission 0600 (rw-/---/---) when s3curl.pl is run.
In the following example, the profile my_profile references the user credentials for the user@yourco.com account, and
root_profile references the credentials for the root account.
%awsSecretAccessKeys = (
my_profile => {
id => 'user@yourco.com',
key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN'
},
root_profile => {
id => 'root',
key => 'sZRCTZyk93IWukHEGQ3evPJEvPUq4ASL8Nre0awN'
},
);
3. Add the endpoint that you want to use s3curl against to the .s3curl file.
The endpoint is the address of your data node or the load balancer that sits in front of your data nodes.
push @endpoints , (
'203.0.113.10', 'lglw3183.lss.dell.com',
);
4. Create the bucket using s3curl.pl and specify the following parameters:
● Profile of the user
● Identity of the replication group in which to create the bucket (<vpool_id>), which is set using the x-emc-
dataservice-vpool header
● Any custom x-emc headers
● Name of the bucket (<BucketName>).
The following example shows a fully specified command:
The example uses the x-emc-dataservice-vpool header to specify the replication group in which the bucket is created
and the x-emc-file-system-access-enabled header to enable the bucket for file system access, such as for NFS.
NOTE: The -acl public-read-write argument is optional, but can be used to set permissions that enable access
to the bucket. For example, if you intend to access the bucket over NFS from an environment that is not secured using
Kerberos.
-H x-emc-dataservice-vpool:urn:storageos:ObjectStore:e0506a04-340b-4e78-
a694-4c389ce14dc8: http://203.0.113.10:9020/S3B4
Next steps
You can list the buckets using the S3 interface, using:
x-emc-namespace: Specifies the namespace that is used for this bucket. If the namespace is not specified using the S3 convention of a host-style or path-style request, then it can be specified using the x-emc-namespace header. If the namespace is not specified in this header, the namespace that is associated with the user is used.
x-emc-retention-period: Specifies the retention period that is applied to objects in the bucket. Each time a request is made to modify an object in the bucket, the expiration of the retention period for the object is calculated based on the retention period that is associated with the bucket.
x-emc-is-stale-allowed: Specifies whether the bucket is accessible during a temporary VDC outage in a federated configuration.
x-emc-server-side-encryption-enabled: Specifies whether objects that are written to the bucket are encrypted.
x-emc-metadata-search: Specifies one or more user or system metadata values that are used to create indexes of objects for the bucket. The indexes are used to perform object searches that are filtered based on the indexed metadata.
x-emc-autocommit-period: Specifies the autocommit period in seconds (applicable to file-system-enabled buckets and to requests from the NFS and Atmos heads).
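Taken together, such headers might be collected into a request header map like the following (an illustrative sketch; all values are placeholders, not a verified configuration):

```python
# Illustrative x-emc header set for a bucket-create request.
# Values are placeholders; consult the ECS Data Access Guide for
# the authoritative header semantics and value formats.
create_bucket_headers = {
    "x-emc-namespace": "ns1",
    "x-emc-retention-period": "86400",                 # seconds
    "x-emc-is-stale-allowed": "true",                  # accessible during a TSO
    "x-emc-server-side-encryption-enabled": "false",
    "x-emc-autocommit-period": "300",                  # seconds, FS-enabled buckets only
}
```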
Prerequisites
Observe the following prerequisites:
● Data Mobility requires 192 GB nodes.
● Data Mobility must be running on all nodes.
● Metadata search must be enabled with Last Modified indexed.
This generally requires a new bucket, unless an existing bucket already has the Last Modified field indexed.
● Data Mobility is supported on IAM buckets only.
● Internal copy policies, which must be created through the Management API, require an additional IAM role for the target
bucket in order to work properly.
● Data Mobility supports one policy per bucket, and 100 policies per cluster.
● There may be a performance impact when running Data Movement policies. Specifically, you may observe slowness when
copying on the front end.
● Data Mobility does not copy object tags or retention/object lock.
● Data Mobility does not propagate deleted objects or delete markers to the target bucket.
Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, follow the workflow to create a new bucket, and click Next.
"Create a bucket" provides information.
3. In Metadata Search, select On.
a. In the Type field, select System.
b. In the Name field, select LastModified.
c. Click Add and Next.
4. In Data Mobility, select On.
5. In Policy Type, Copy to S3 is the default option.
a. In Destination Configuration Endpoint, specify the S3 API endpoint where you would like to copy the data.
For example, if you are copying data to AWS, the endpoint might be https://s3.amazonaws.com.
Destination Configuration Endpoint is editable only the first time that you create a policy.
WARNING: When Data Movement policy is active, it writes objects to the specified destination bucket. This action may
overwrite data if objects exist in a bucket with the same name. Therefore, the destination bucket should not be written
to by any other applications. If you are concerned with potentially overwriting data, you can enable versioning on the
destination bucket. Versioning may consume more capacity. Data in the destination bucket may get overwritten by an
active policy.
b. In TLS Certificate, if the target endpoint uses a self-signed or corporate-CA-signed SSL/TLS certificate (that is, one
that is not publicly trusted), provide the Base64 (X.509) certificate, or its CA certificate, so that ECS can securely
communicate with the target.
Exclude the BEGIN CERTIFICATE and END CERTIFICATE statements from the key.
c. In Access Key, specify the S3 access key that is required to access the target bucket.
Access Key is a required field.
d. In Secret Key, specify the S3 secret key that is required to access the target bucket.
This is a required field.
e. In Bucket Name, enter the bucket within the Destination Endpoint to which you would like to copy data.
Bucket Name is editable only the first time that you create a policy.
f. Object Tag Filtering defaults to Off.
If enabled, the policy applies only to objects with the specified object tag. Only one tag may be specified, in the format
name=value. When you change the object tag filter, the change affects only objects that have not already been copied.
In other words, changing the filter does not rescan the bucket to copy objects that were skipped because they did not
match the filter.
g. Server Side Encryption (SSE-S3) defaults to On.
When enabled, all objects that are copied to the target bucket are written with server-side (SSE-S3) object-level
encryption. When you change SSE-S3, the policy change affects only objects that have not already been copied.
h. Detailed Logging defaults to Off.
When Off, the logged operations, logging bucket, and logging prefix settings are not shown. When On, they are shown.
i. Logged Operations defaults to All Operations. Select Only Errors if desired.
Only Errors specifies that the detailed logs in the log bucket include only errors. All Operations specifies that the
detailed logs include operations on all objects. Logging all operations is useful for audit purposes, but may take up a
lot of space.
j. In Logging Bucket, specify the bucket in which to write the detailed logs.
This field is required if logging is enabled.
k. In Logging Prefix, specify the prefix in the bucket under which to write the detailed logs.
l. Click Validate Data Movement Policy to run a test in objcontrolsvc. ECS displays one of the following messages:
The Policy is valid.
Warning, missing configuration parameters. Some missing configuration must be completed before the policy can run.
Test failed.
Grafana inconsistencies
Grafana may experience inconsistencies because data collection and aggregation tasks run in batches. No data is displayed
when a cluster is first added. Also, there are inconsistencies over large time ranges because ECS does not support real-time
aggregation.
resource-svc has a high load, and Data Mobility dashboard stats are lagged
This issue is caused by Data Mobility stats being aggregated too often, which can overload the scanner in resource-svc.
Contact Dell Customer Service to open a Service Request and reference STORAGE-32228.
Steps
1. In the ECS Web Portal, go to Settings > Alerts Policy to check errors and get policy information.
2. If the Data Mobility policy data log bucket is enabled, check the logs in the log bucket for syncing errors.
Examples
The following are examples of successful and failed log output from batch-copied objects:
Successful log output:
2022-09-02T09:12:18Z DM.COPY demo test ASIA4258DF91ACFD0BBE S8d162an1wh1
2022-09-02T09:05:57Z 10,240 a1bd91c9d3c7f410b37dc36e14f203ca http://10.249.250.95:9020/
demo_target AKIAED117E670B7136E6 99 SUCCESS
Failed log output:
2022-05-27T07:10:18Z DM.COPY large-data large-buk ASIAEA0F10AF314E3CEE obj2_6542
2022-05-27T03:05:21Z 279 9073c9c324e9c6fb48c89955c0526e19
http://10.243.82.165:9020/ large-buk-target AKIAEAA15A0C9143994D 853 ERROR
"java.util.concurrent.ExecutionException: com.amazonaws.AmazonServiceException: Service
Unavailable (Service: Amazon S3; Status Code: 503; Error Code: Service Unavailable; Request
ID: null; Proxy: null)"
Namespace name
The following rules apply to the naming of ECS namespaces:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode char)
● Valid characters are defined by regex /[a-zA-Z0-9-_]+/. That is, alphanumeric characters and hyphen (-) and underscore
(_) special characters.
Bucket name
The following rules apply to the naming of S3 buckets in ECS:
● Must be between one and 255 characters in length. (S3 requires bucket names to be 1–255 characters long)
● Can include dot (.), hyphen (-), and underscore (_) characters and alphanumeric characters ([a-zA-Z0-9])
● Can start with a hyphen (-) or alphanumeric character.
● Cannot start with a dot (.)
● Cannot contain a double dot (..)
● Cannot end with a dot (.)
● Must not be formatted as IPv4 address.
You can compare these rules with the naming restrictions in the S3 bucket quotas, restrictions, and limitations.
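The rules above can be expressed as a validation function (an illustrative sketch; the function name is ours, and ECS performs its own authoritative validation):

```python
import re

# Character set and IPv4 shape from the ECS S3 bucket-name rules above.
_BUCKET_CHARS = re.compile(r'^[A-Za-z0-9._-]+$')
_IPV4 = re.compile(r'^\d{1,3}(\.\d{1,3}){3}$')

def is_valid_bucket_name(name: str) -> bool:
    if not 1 <= len(name) <= 255:
        return False                      # 1-255 characters
    if not _BUCKET_CHARS.match(name):
        return False                      # alphanumerics, dot, hyphen, underscore
    if name[0] in '._':
        return False                      # must start with a hyphen or alphanumeric
    if '..' in name or name.endswith('.'):
        return False                      # no double dots, no trailing dot
    if _IPV4.match(name):
        return False                      # must not be formatted as an IPv4 address
    return True
```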
Object name
The following rules apply to the naming of S3 objects in ECS:
● Cannot be null or an empty string.
● Length range is 1..255 (Unicode char)
● No validation on characters
Container name
The following rules apply to the naming of Swift containers:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode char)
● Can include dot (.), hyphen (-), and underscore (_) characters and alphanumeric characters ([a-zA-Z0-9])
● Can include the at symbol (@) with the assistance of your customer support representative.
Object name
The following rules apply to the naming of Swift objects:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode char)
● No validation on characters
Subtenant (bucket)
The subtenant is created by the server, so the client does not need to know the naming scheme.
Object name
The following rules apply to the naming of Atmos objects:
● Cannot be null or an empty string
● Length range is 1..255 (Unicode characters)
● No validation on characters
● Name should be percent-encoded UTF-8.
● Can contain a maximum of 255 characters
● Cannot contain: ' " / & ? * < > <tab> <newline> or <space>
Clip naming
The CAS API does not support user-defined keys. When an application using CAS API creates a clip, it opens a pool, creates a
clip, and adds tags, attributes, streams and so on. After a clip is complete, it is written to a device.
A corresponding clip ID is returned by the CAS engine and can be referred to using <pool name>/<clip id>.
Delete a bucket
Using the ECS Portal user interface, delete a bucket when you no longer need it.
Prerequisites
● This operation requires the System Administrator or Namespace Administrator role in ECS.
● A System Administrator can delete a bucket in any namespace.
● A Namespace Administrator can delete a bucket in the namespace in which they are the administrator.
To delete a bucket, you must be assigned to the Namespace Administrator or System Administrator role.
Steps
1. In the ECS Portal, select Manage > Buckets.
2. On the Bucket Management page, locate the bucket that you want to delete in the table, and then select the Edit Bucket
> Delete Bucket action.
ECS prompts This Action is permanent. It can not be stopped once started and this action is
not reversible.
3. In the Delete Confirmation window, select the option to proceed with the deletion.
● Select the Delete the selected Buckets option to delete the buckets you selected.
● Select the Delete ENTIRE Contents including the Selected bucket option to delete the bucket and its contents.
4. Check the I acknowledge and would like to delete the bucket checkbox, and then click Delete.
NOTE: Buckets being deleted cannot be modified. To delete a bucket with Filesystem enabled, you must remove any
associated NFS exports before you delete the bucket.
The bucket deletion process is started. Buckets being deleted are marked as Bucket deletion is in progress. To view the
current status of the bucket deletion, select the View Bucket > View Delete Status action.
Reason: The operation may fail because of service instability or other errors.
Corrective action: Initiate a new bucket delete request. The ECS 3.8 Data Access Guide provides information about bucket API commands.

Issue: S3 or management API not accepting empty-bucket header or query parameters
Reason: May not meet policy and feature requirements.
Corrective action: Verify that feature flags are enabled and all requirements are met.

Issue: EmptyBucket task no longer found
Reason: The bucket may already be deleted. The task is removed if the bucket is successfully deleted.
Corrective action: 1. Check if the bucket exists. 2. If the bucket still exists, check if the bucket owner zone was removed from the associated RG.
Table 33. Simplified bucket delete issues (continued)

Issue: Tasks not performing
Reason: You may not have the correct permissions to initiate simplified bucket delete.
Corrective action: Check that you have the correct permissions. You must have sysadmin privileges.

Reason: Retention rules do not allow objects to be deleted.
Corrective action: 1. Remove the retention rules on the buckets or zones that you are trying to delete. 2. Then, either delete the objects manually or initiate a new bucket delete request. The ECS 3.8 Data Access Guide provides information about bucket API commands.

Reason: DT instability may cause listing or delete requests to fail during task execution.
Corrective action: Initiate a new delete bucket request.

Reason: TSO may cause the task to be delayed.
Corrective action: Check if there is an active TSO on the associated RG. If so, delete resumes when the TSO resolves.

Reason: PSO or zone removal from the RG may cause the task to fail and possibly leave the bucket in a read-only state.
Corrective action: 1. Reset the empty_bucket_in_progress flag for the bucket in DTQuery. 2. Initiate a new delete bucket request once RG recovery is at the bootstrap stage. The ECS 3.8 Data Access Guide provides information about bucket API commands.

Issue: Tasks aborted
Reason: PSO, or a nonbucket owner zone is removed from the RG.
Corrective action: Initiate a new delete bucket request once RG recovery is at the bootstrap stage. The ECS 3.8 Data Access Guide provides information about bucket API commands.

Issue: Multiple bucket-delete tasks impacting system performance
Corrective action: Reduce the number of permits in PriorityTaskCoordinator (com.emc.ecs.priority.coordinator.OB.permits) to limit the parallel tasks that can run per DT.
Files
Configuration and deployment file names and locations:
● com.emc.ecs.empty_bucket.*
● com.emc.ecs.priority.*
● CF values are defined in: shared-cf-conf.xml
Task Priority values are 1-10, with 1 being the highest priority. If there are more tasks to run than the allotted time allows, ECS
switches scheduling to a time-based scheme based on task priority. ECS gives higher-priority tasks more time to run. When a
task's processing time is exceeded, ECS signals the task processor to save the task state and stop. ECS schedules tasks by
priority and start time.
The default number of tasks running in parallel in the priority task coordinator is three. Increasing this value can have a
performance impact, as the value is multiplied by the number of directory tables that exist for the given service. If you have
disabled and then enabled the priority task coordinator feature, you must restart the services that are associated with it.
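The "priority and start time" ordering can be sketched with a priority queue (illustrative only; the task names and times are invented):

```python
import heapq

# Tasks ordered by (priority, start_time); priority 1 is highest,
# and ties on priority break on the earlier start time.
tasks = [
    (3, 10, "geo-rebalance"),    # (priority, start_time, name) - invented values
    (1, 12, "empty-bucket-A"),
    (1, 5,  "empty-bucket-B"),
]
heapq.heapify(tasks)
run_order = [heapq.heappop(tasks)[2] for _ in range(3)]
```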
admin@ecs06pn08012-data:~> SDS_TOKEN=$(curl -i -s -L --location-trusted -k https://$
{MGMT_IP}:4443/login -u emcmonitor:ChangeMe | grep X-SDS-AUTH-TOKEN); echo ${SDS_TOKEN} X-
SDS-AUTH-TOKEN:
BAAccFE0TUNtWUJxdnlSbTRmTjV4UnBKODJVTURnPQMAjAQASHVybjpzdG9yYWdlb3M6VmlydHVhbERhdGFDZW50ZXJ
EYXRhOmM1OWZjNTIzLTFiMTctNDE3Mi1iN2YyLTM0NDcxMTAyNDZhYQIADTE2NjEzNzc3NDkyMzcDAC51cm46VG9rZW
46YzUzMDcyMGYtZDU4MC00MjA5LThkMDgtOGQwM2RkMzQ5ZmM3AgAC0A8= admin@ecs06pn08012-data:~> curl
-v -L --location-trusted -k -H "${SDS_TOKEN}" https://${MGMT_IP}:4443/object/bucket?
namespace=ns-fr | xmllint --format - > /tmp/test.tmp
* Trying 10.185.65.142...
* TCP_NODELAY set
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
* Connected to 10.185.65.142 (10.185.65.142) port 4443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
} [5 bytes data]
> GET /object/bucket?namespace=ns-fr HTTP/1.1
> Host: 10.185.65.142:4443
> User-Agent: curl/7.60.0
> Accept: */*
> X-SDS-AUTH-TOKEN: BAAccFE0TUNtWUJxdnlSbTRmTjV4UnBKODJVTURnPQMAjAQASHVybjpzdG9yYWdlb3M6VmlydHVhbERhdGFDZW50ZXJEYXRhOmM1OWZjNTIzLTFiMTctNDE3Mi1iN2YyLTM0NDcxMTAyNDZhYQIADTE2NjEzNzc3NDkyMzcDAC51cm46VG9rZW46YzUzMDcyMGYtZDU4MC00MjA5LThkMDgtOGQwM2RkMzQ5ZmM3AgAC0A8=
>
{ [5 bytes data]
< HTTP/1.1 200 OK
< Date: Thu, 25 Aug 2022 13:17:18 GMT
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Connection: keep-alive
<
{ [16245 bytes data]
100 1638k    0 1638k    0     0  2672k      0 --:--:-- --:--:-- --:--:-- 2668k
* Connection #0 to host 10.185.65.142 left intact
admin@ecs06pn08012-data:~> grep -i marker /tmp/test.tmp -C2
<object_buckets>
  <Filter>namespace=ns-fr&name=*</Filter>
  <NextMarker>FR_STG_S3_AVI-DATA</NextMarker>
  <NextPageLink>/object/bucket?namespace=ns-fr&name=*&marker=FR_STG_S3_AVI-DATA</NextPageLink>
  <object_bucket>
  <api_type>S3</api_type>
admin@ecs06pn08012-data:~>
Second listing request with NextMarker:
admin@ecs06
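The second listing request passes the returned NextMarker value back as the marker query parameter. A minimal sketch of composing that request, assuming MGMT_IP and SDS_TOKEN are set as in the login example above:

```shell
# Compose the next-page URL from the NextMarker value returned above.
MARKER="FR_STG_S3_AVI-DATA"
NEXT_URL="https://${MGMT_IP:-192.0.2.10}:4443/object/bucket?namespace=ns-fr&marker=${MARKER}"
echo "$NEXT_URL"
# To execute against a live VDC (token from the login example):
# curl -s -L --location-trusted -k -H "${SDS_TOKEN}" "$NEXT_URL" | xmllint --format -
```

Repeat until the response no longer contains a NextMarker element.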
PUT /service/{service_name}
{
"name": "{service_name}",
"settings": ["setting1", "setting2", "settingN"]
}
For example,
PUT /service/atmos
{
"name": "atmos",
"settings": ["disabled"]
}
PUT /service
{
"service": [{
"name": "{service_name}",
"settings": ["setting1", "setting2", "settingN"]
},
{
"name": "{service_name}",
"settings": ["setting1", "setting2", "settingN"]
}]
}
For example,
PUT /service
{ "service": [{
"name": "s3",
"settings": ["http", "https"]
},
{
"name": "swift",
"settings": ["http"]
},
{
"name": "cas",
"settings": ["enabled"]
}]
}
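The bulk payload above can be submitted with curl against the management API. A minimal sketch, assuming MGMT_IP and SDS_TOKEN are set as in the earlier login example:

```shell
# Build the bulk service-settings payload shown above as a single JSON string.
PAYLOAD='{"service":[
  {"name":"s3","settings":["http","https"]},
  {"name":"swift","settings":["http"]},
  {"name":"cas","settings":["enabled"]}
]}'
echo "$PAYLOAD"
# To apply it (requires a live management endpoint):
# curl -k -L --location-trusted -X PUT "https://${MGMT_IP}:4443/service" \
#   -H "${SDS_TOKEN}" -H "Content-Type: application/json" -d "$PAYLOAD"
```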
9
File Access
Topics:
• Introduction to file access
• ECS multi-protocol access
• Working with NFS exports in the ECS Portal
• Working with user or group mappings in the ECS Portal
• ECS NFS configuration tasks
• Mount an NFS export example
• NFS access using the ECS Management REST API
• NFS WORM (Write Once, Read Many)
• S3A support
• Geo-replication status
Limitations
● An issue can arise where both a directory object and a file object are created with the same name. This can occur in the
following ways:
○ A file path1/path2 is created from NFS, then an object path1/path2/path3 is created from S3. Because S3 allows
creation of objects that have another object's name as the prefix, this operation is valid and is supported. A file and a
directory called path2 will exist.
○ A directory path1/path2 is created from NFS, then an object path1/path2 is created from S3. This operation is a
valid operation from S3 because directory path1/path2 is not visible through the S3 API. A file and a directory called
path2 will exist.
To resolve this issue, requests from S3 always return the file, and requests from NFS always return the directory. However,
this means that in the first case the file created by NFS is hidden by the object created by S3.
● NFS does not support filenames with a trailing / in them, but the S3 protocol does. NFS does not show these files.
Table 35. Mapping between NFS ACL and Object ACL attributes
NFS ACL attribute Object ACL attribute
Owner User who is also an Owner
Group Custom Group that is also a Primary Group
Others Pre-Defined Group
Creating and modifying an object using NFS and accessing using the object service
When an NFS user creates an object using the NFS protocol, the owner permissions are mirrored to the ACL of the object user
who is designated as the owner of the bucket. If the NFS user has RWX permissions, Full Control is assigned to the object
owner through the object ACL.
The permissions that are assigned to the group that the NFS file or directory belongs to are reflected onto a custom group of
the same name, if it exists. ECS reflects the permissions that are associated with Others onto predefined group permissions.
The following example illustrates the mapping of NFS permissions to object permissions.
When a user accesses ECS using NFS and changes the ownership of an object, the new owner inherits the owner ACL
permissions and is given Read_ACL and Write_ACL. The previous owner permissions are kept in the object user's ACL.
When a chmod operation is performed, ECS reflects the permissions in the same way as when creating an object.
Write_ACL is preserved in Group and Other permissions if it exists in the object user's ACL.
If the object owner is changed, the permissions that are associated with the new owner are applied to the object and are
reflected onto the file RWX permissions.
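The owner mapping described above can be sketched as a small helper; map_mode is a hypothetical function for illustration, and only the rwx-to-Full Control case is stated by ECS:

```shell
# Hypothetical helper illustrating the stated owner-permission mapping:
# NFS rwx on the owner maps to Full Control in the object ACL; other mode
# strings map to the matching individual object permissions.
map_mode() {
  if [ "$1" = "rwx" ]; then
    echo "Full Control"
  else
    echo "individual permissions for mode: $1"
  fi
}
map_mode rwx
```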
Steps
1. Create a bucket for NFS using the ECS Portal
2. Add an NFS export
3. Add a user or group mapping using the ECS Portal
4. Configure ECS NFS with Kerberos security
Next steps
After you perform the above steps, you can mount the NFS export on the export host.
Prerequisites
● This operation requires the Namespace Administrator or System Administrator role in ECS.
● If you are a Namespace Administrator, you can create buckets in your namespace.
● If you are a System Administrator, you can create a bucket belonging to any namespace.
Steps
1. In the ECS Portal, select Manage > Buckets > New Bucket.
2. On the New Bucket page, in the Name field, type a name for the bucket.
3. In the Namespace field, select the namespace that the bucket belongs to.
4. In the Replication Group field, select a replication group or leave blank to use the default replication group for the
namespace.
Prerequisites
● This operation requires the Namespace Administrator or System Administrator role.
● If you are a Namespace Administrator, you can add NFS exports into your namespace.
● If you are a System Administrator, you can add NFS exports into any namespace.
● You must have created a bucket to provide the underlying storage for the export. For more information, see Create a bucket
for NFS using the ECS Portal.
Steps
1. In the portal, select File > Exports > New Export.
The New File Export page is displayed.
2. On the New File Export page, in the Namespace field, select the namespace that owns the bucket that you want to
export.
3. In the Bucket field, select the bucket.
Option Description
No The export size is reported as the storage pool size.
Yes The export size is reported as the hard quota on the bucket, if set.
6. To add the hosts that you want to be able to access the export, complete the following steps:
a. In the Export Host Options area, click Add.
The Add Export Host dialog is displayed.
b. In the Add Export Host dialog, specify one or more hosts that you want to be able to access the export and configure
the access options.
You must choose an Authentication option. This option is Sys unless you intend to configure Kerberos. Default
values for Permissions (ro) and Write Transfer Policy (async) are already set in the Add Export Host dialog and are
passed to the NFS server. The remaining options are the same as the NFS server defaults and so are only passed by the
system if you change them.
The following table describes the parameters that you can specify when you add a host:
Write Transfer Policy Sets the write transfer policy as synchronous or asynchronous. The default is
asynchronous. This parameter is the same as setting sync or async for an export
in /etc/exports.
Authentication Sets the authentication types that are supported by the export.
Mounting Directories Inside Export Specifies whether subdirectories of the export path are allowed as mount points.
This parameter is the same as the alldir setting in /etc/exports. With the
alldir option, if you exported /namespace1/bucket1, for example, you can
also mount subdirectories, such as /namespace1/bucket1/dir1, provided the
directory exists.
AnonUser An object username (must have an entry in user or group mapping) to which all
unknown (not mapped using the user or group mapping) user IDs are mapped when
accessing the export.
NOTE: For performance reasons, user mapping must be predefined and can be
selected here.
AnonGroup An object group name (must have an entry in user or group mapping) to which all
unknown (not mapped using the user or group mapping) group IDs are mapped when
accessing the export.
NOTE: For performance reasons, group mapping must be predefined and can be
selected here.
RootSquash An object username to which the root user ID (0) is mapped when accessing the
export.
NOTE: The AnonUser/AnonGroup is the object user or group name that is used to map any incoming user id. This
overrides user mapping provided at the namespace level.
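For comparison, the defaults that are passed to the NFS server (Sys authentication, ro, async) roughly correspond to a native export line such as the following; the host name and path are placeholders:

```
/namespace1/bucket1 client1.example.com(ro,async,sec=sys)
```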
Next steps
Add a user or group mapping using the ECS Portal.
Prerequisites
● This operation requires the Namespace Administrator or System Administrator role in ECS.
● If you are a Namespace Administrator, you can add user or group mappings into your namespace.
● If you are a System Administrator, you can add user or group mappings into any namespace.
● For mapping a single ECS object user to a UNIX user:
○ Ensure that the UID exists on the NFS client and the username is an ECS object username.
● For mapping a group of ECS object users to a group of UNIX users:
○ Ensure that a default custom group has been assigned to the bucket (either a default group that was assigned at bucket
creation, or a custom group ACL that was set after bucket creation). In order for UNIX group members to have access to
the file system, a default custom group must be assigned to the bucket and the UID for each member of the group must
be known to ECS. In other words, there must be a UNIX UID mapping for each member of the group in ECS.
○ Ensure that default object and directory permissions have been assigned to the bucket so that group members have
access to objects and directories created using object protocols.
Steps
1. In the ECS Portal, select Manage > File and click the User/Group Mapping tab.
2. Click New User/Group Mapping.
The New User/Group Mapping page is displayed.
3. In the User/Group Name field, type the name of the ECS object user or ECS custom group that you want to map to a UNIX
UID or GID.
4. In the Namespace field, select the namespace that the ECS object user or custom group belongs to.
5. In the ID field, enter the UNIX UID or GID that you want the ECS user or group to map to.
6. In the Type field, click the type of mapping: User or Group so that ECS knows that the ID you have entered is a UID or a
GID.
7. Click Save.
Prerequisites
Depending on your internal IT setup, you can use a Key Distribution Center (KDC) or you can use Active Directory (AD) as your
KDC.
To use AD, follow the steps in these tasks:
● Register an ECS node with Active Directory
● Register a Linux NFS client with Active Directory
Steps
1. Ensure that the hostname of the ECS node can be resolved.
You can use the hostname command to ensure that the FQDN of the ECS node is added to /etc/HOSTNAME.
In the example below, the following values are used and must be replaced with your own settings.
[libdefaults]
default_realm = NFS-REALM.LOCAL
[realms]
NFS-REALM.LOCAL = {
kdc = kdcname.yourco.com
admin_server = kdcname.yourco.com
}
[logging]
kdc = FILE:/var/log/krb5/krb5kdc.log
admin_server = FILE:/var/log/krb5/kadmind.log
default = SYSLOG:NOTICE:DAEMON
3. Add a host principal for the ECS node and create a keytab for the principal.
In this example, the FQDN of the ECS node is ecsnode1.yourco.com
$ kadmin
kadmin> addprinc -randkey nfs/ecsnode1.yourco.com
kadmin> ktadd -k /datanode.keytab nfs/ecsnode1.yourco.com
kadmin> exit
7. To set up the client, begin by making sure that the hostname of the client can be resolved.
8. If your client is running SUSE Linux, make sure that the line NFS_SECURITY_GSS="yes" is uncommented in
/etc/sysconfig/nfs.
9. If you are on Ubuntu, make sure that the line NEED_GSSD=yes is present in /etc/default/nfs-common.
10. Install rpcbind and nfs-common.
Use apt-get or zypper. On SUSE Linux, for nfs-common, use:
[libdefaults]
default_realm = NFS-REALM.LOCAL
[realms]
NFS-REALM.LOCAL = {
kdc = kdcname.yourco.com
admin_server = kdcname.yourco.com
}
[logging]
kdc = FILE:/var/log/krb5/krb5kdc.log
admin_server = FILE:/var/log/krb5/kadmind.log
default = SYSLOG:NOTICE:DAEMON
12. Add a host principal for the NFS client and create a keytab for the principal.
In this example, the FQDN of the NFS client is nfsclient.yourco.com
$ kadmin
kadmin> addprinc -randkey host/nfsclient.yourco.com
kadmin> ktadd -k /nfsclient.keytab host/nfsclient.yourco.com
kadmin> exit
13. Copy the keytab file (nfsclient.keytab) from the KDC machine to /etc/krb5.keytab on the NFS client machine.
$ kadmin
kadmin> addprinc yourusername@NFS-REALM.LOCAL
kadmin> exit
For example:
16. Log in as a non-root user and run kinit as the non-root user that you created.
kinit yourusername@NFS-REALM.LOCAL
17. You can now mount the NFS export. For more information, see Mount an NFS export example and Best practices for
mounting ECS NFS exports.
NOTE:
Mounting as the root user does not require you to use kinit. However, when using root, authentication is done using
the client machine's host principal rather than your Kerberos principal. Depending upon your operating system, you can
configure the authentication module to fetch the Kerberos ticket when you log in, so that there is no need to fetch the
ticket manually using kinit and you can mount the NFS share directly.
Prerequisites
You must have administrator credentials for the AD domain controller.
Steps
1. Log in to AD.
2. In Server Manager, go to Tools > Active Directory Users and Computers.
3. Create a user account for the NFS principal using the format "nfs-<host>", for example, "nfs-ecsnode1". Set a password,
and set it to never expire.
4. Create an account for yourself (optional and one time).
5. Execute the following command to create a keytab file for the NFS service account.
For example, to associate the nfs-ecsnode1 account with the principal nfs/ecsnode1.yourco.com@NFS-REALM.LOCAL, you
can generate a keytab using:
ktutil
ktutil> rkt <keytab to import>
ktutil> wkt /etc/krb5.keytab
kinit -k nfs/<fqdn>@NFS-REALM.LOCAL
11. Follow steps 2, 4, and 5 from Configure ECS NFS with Kerberos security to place the Kerberos configuration files
(krb5.conf, krb5.keytab and jce/unlimited) on the ECS node.
Prerequisites
You must have administrator credentials for the AD domain controller.
Steps
1. Log in to AD.
2. In Server Manager, go to Tools > Active Directory Users and Computers.
3. Create a computer account for the client machine (for example, "nfsclient"). Set the password to never expire.
4. Create an account for a user (optional and one time).
5. Run the following command to create a keytab file for the NFS service account.
For example, to associate the nfsclient account with the principal host/nfsclient.yourco.com@NFS-REALM.LOCAL, you
can generate a keytab using:
ktutil
ktutil> rkt <keytab to import>
ktutil> wkt /etc/krb5.keytab
kinit -k host/<fqdn>@NFS-REALM.LOCAL
su - fred
mkdir /home/fred/nfsdir
2. As the root user, mount the export in the directory mount point that you created.
When mounting an NFS export, you can specify the name or IP address of any of the nodes in the VDC or the address of the
load balancer.
It is important that you specify -o "vers=3".
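A representative mount command for this step, using the node name from the Kerberos examples and placeholder namespace and bucket names, would look like:

```
mount -t nfs -o "vers=3,nolock" ecsnode1.yourco.com:/namespace1/bucket1 /home/fred/nfsdir
```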
3. Check that you can access the file system as user fred.
a. Change to user fred.
$ su - fred
b. Check that you are in the directory in which you created the mount point directory.
$ pwd
/home/fred
fred@lrmh229:~$ ls -al
total
drwxr-xr-x 7 fred fredsgroup 4096 May 31 05:38 .
drwxr-xr-x 18 root root 4096 May 30 04:03 ..
-rw------- 1 fred fred 16 May 31 05:31 .bash_history
drwxrwxrwx 3 fred anothergroup 96 Nov 24 2015 nfsdir
In this example, the bucket owner is fred and a default group, anothergroup, was associated with the bucket.
If no group mapping had been created, or no default group has been associated with the bucket, you will not see a group
name but a large numeric value, as shown below.
fred@lrmh229:~$ ls -al
total
drwxr-xr-x 7 fred fredsgroup 4096 May 31 05:38 .
drwxr-xr-x 18 root root 4096 May 30 04:03 ..
-rw------- 1 fred fred 16 May 31 05:31 .bash_history
drwxrwxrwx 3 fred 2147483647 96 Nov 24 2015 nfsdir
If you have forgotten the group mapping, you can create an appropriate mapping in the ECS Portal by adding a mapping
between the group name and the GID (in this case: anothergroup => GID 1005).
If you try to access the mounted file system as the root user, or as another user that does not have permissions on the file
system, you will see ? characters, as shown below.
root@lrmh229:~# cd /home/fred
root@lrmh229:/home/fred# ls -al
total
drwxr-xr-x 8 fred fredsgroup 4096 May 31 07:00 .
drwxr-xr-x 18 root root 4096 May 30 04:03 ..
-rw------- 1 fred fred 1388 May 31 07:31 .bash_history
d????????? ? ? ? ? ? nfsdir
Use async
Whenever possible, use the async mount option. This option dramatically reduces latency, improves throughput, and reduces
the number of connections from the client.
Set wsize and rsize to reduce round trips from the client
Where you expect to read and/or write large files, ensure that the read or write size of files is set appropriately using the
rsize and wsize mount options. Usually, you set the wsize and rsize options to the highest possible value to reduce the
number of round trips from the client. This is typically 512 KB (524288 B).
For example, to write a 10 MB file, if the wsize option is set to 524288 (512 KB), the client makes 20 separate calls. If the write
size is set to 32 KB, this results in 16 times as many calls (320).
When using the mount command, you can supply the read and write size using the options (-o) switch. For example:
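The example itself is not preserved here; a sketch of a mount command supplying the 512 KB read and write sizes discussed above (server and path are placeholders) would be:

```
mount -t nfs -o "vers=3,nolock,rsize=524288,wsize=524288" ecsnode1.yourco.com:/namespace1/bucket1 /mnt/bucket1
```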
Table 39. ECS Management REST API calls for managing NFS access
Method Description
POST /object/nfs/exports Creates an export. The payload specifies the export path, the hosts that
can access the export, and a string that defines the security settings for
the export.
PUT/GET/DELETE /object/nfs/exports/{id} Performs the selected operation on the specified export.
GET /object/nfs/exports Retrieves all user exports that are defined for the current namespace.
POST /object/nfs/users Creates a mapping between an ECS object username or group name and a
UNIX user or group ID.
The ECS Management REST API documentation provides full details of the API, and the documentation for the NFS export
methods can be accessed in the EMC ECS REST API REFERENCE.
Seal file
The seal file function commits a file to the WORM state when the file is written, ignoring the remaining autocommit
period. The seal is performed through the command chmod ugo-w <file> on the file.
NOTE: The seal functionality has no effect outside the retention period.
High-level overview
Table 40. Autocommit terms
Term Description
Autocommit period Time interval relative to the object's last modified time during which certain retention
constraints (for example, file modifications and file deletions) are not applied. These
constraints do not apply outside of the retention period.
Retention Start Delay Atmos head uses the start delay to indicate the autocommit period.
User Interface
The user interface has the following support during bucket create and edit:
● When the File System is not enabled, no autocommit option is displayed.
● When the File System is enabled and no retention value is specified, autocommit is displayed but disabled.
● When the File System is enabled and a retention value is selected, autocommit is displayed and enabled for selection.
NOTE: The maximum autocommit period is limited to the smaller of the Bucket Retention period or the default maximum
period of one day.
REST API
The create bucket REST API is modified with the new header, x-emc-autocommit-period.
S3 head
Bucket creation
Bucket creation through the S3 head can use the optional request header x-emc-auto-commit-period: <seconds> to set the
autocommit period. The following checks are made in this flow:
● Allow only positive integers.
● Settable only for file system buckets.
● Settable only when the retention value is present.
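A bucket-creation request carrying the header might look like the following sketch; the bucket name, host, retention value, and authorization details are placeholders, and the file-system and retention header names should be verified against the ECS REST API reference:

```
PUT /worm-bucket HTTP/1.1
Host: ecs.example.com:9021
x-emc-file-system-access-enabled: true
x-emc-retention-period: 86400
x-emc-auto-commit-period: 300
Authorization: AWS <access_key>:<signature>
```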
Atmos
./atmoscurl.pl -user USER1 -action PUT -pmode TID -path / -header "x-emc-retention-period:300" -header "x-emc-retention-start-delay:120" -include
S3A support
The AWS S3A client is a connector for Hadoop (HDFS) that enables you to run Spark or MapReduce jobs against
S3-compatible object storage.
Configuration at ECS
About this task
To use S3A on Hadoop, do the following:
NOTE:
IAM User
Steps
1. Create appropriate IAM policies and roles.
2. Create one or more IAM groups, and attach to policies.
3. Create one or more IAM users, and assign to groups.
4. Create a normal object bucket.
5. Provide access key and secret key information to IAM users.
SAML Assertions
Steps
1. Configure cross trust relationship between ECS and Identity Provider (ADFS).
2. Create appropriate IAM policies and roles.
3. The user authenticates to the Identity Provider on the Hadoop node.
4. The user presents the SAML assertion to ECS to receive temporary credentials.
Putting S3A credentials in the Hadoop core-site file leads to a security vulnerability, since this allows bucket access for any
Hadoop user who can view the credentials. If your Hadoop cluster contains sensitive data in the S3A object bucket, use
one of the two IAM methods of authorization discussed above.
The following list of configuration parameters should be added to core-site.xml in the Hadoop UI. If you are using credential
providers or IAM, do not define the access key or secret key in core-site.xml.
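The parameter list itself is not reproduced here; a minimal illustrative set, using standard Hadoop S3A property names with placeholder values (omit the key properties when using credential providers or IAM):

```xml
<property>
  <name>fs.s3a.endpoint</name>
  <value>https://ecs.example.com:9021</value>
</property>
<property>
  <name>fs.s3a.path.style.access</name>
  <value>true</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>IAM_USER_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>IAM_USER_SECRET_KEY</value>
</property>
```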
Geo-replication status
The ECS S3 head supports reporting the geo-replication status of an object through replicationInfo. The API retrieves the
geo-replication status of an object using replicationInfo. This automates capacity management operations, enables site
reliability operations, and ensures that critical data is not deleted accidentally.
Retrieve the geo-replication status of an object through the API to confirm that the object has been successfully replicated.
Request:
GET /bucket/key?replicationInfo
Response:
<ObjectReplicationInfo xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<IndexReplicated>false</IndexReplicated>
<ReplicatedDataPercentage>64.0</ReplicatedDataPercentage>
</ObjectReplicationInfo>
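A caller can confirm replication by checking ReplicatedDataPercentage (and IndexReplicated) in the response. A minimal parsing sketch, with the response above inlined for illustration:

```shell
# Inline the sample response and extract the replicated percentage with sed.
RESPONSE='<ObjectReplicationInfo><IndexReplicated>false</IndexReplicated><ReplicatedDataPercentage>64.0</ReplicatedDataPercentage></ObjectReplicationInfo>'
PCT=$(printf '%s' "$RESPONSE" | sed -n 's|.*<ReplicatedDataPercentage>\(.*\)</ReplicatedDataPercentage>.*|\1|p')
echo "replicated: ${PCT}%"
```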
Maintenance
ECS allows you to monitor and manage disks.
All current generation 1, 2, and 3 hardware is supported for monitoring the disk status.
Replacing a disk through the ECS UI is only supported for:
● Gen3 hardware (all EX-Series)
● Gen2 hardware (U-Series only)
NOTE: For hardware other than the above, a support request is required to replace disks.
Rack
The Rack page allows you to see all racks within the system and analyze node status inside each rack.
To view Rack, select Manage > Maintenance.
Node
The Node page allows you to see all nodes within a rack and analyze disk status inside each node.
To view Node, select Manage > Maintenance > rack_name.
Table 43. Node
Field Description
Node Name of the nodes
Data Disks Status of Data Disks
SSD Cache Disks Status of SSD Cache Disks
Unassigned Disks Status of disks that have not yet been automatically assigned
for use (this column is available from ECS 3.6). Disks
remaining in this category require contacting ECS Remote
Support for assistance.
Disk
The Disk page allows you to see and manage all disks within a node.
To view Disk, select Manage > Maintenance > rack_name > node_name.
ECS Appliance CRU and FRU guide availability
11
Certificates
Topics:
• Introduction to certificates
• ECS certificate tool
• Generate certificates
• Upload a certificate
• Verify installed certificates
Introduction to certificates
ECS ships with a default unsigned SSL certificate installed in the keystore for each node. This certificate is not trusted by
applications that talk to ECS, or by the browser when users access ECS through the ECS Portal.
To prevent users from seeing an untrusted certificate error, and to allow applications to communicate with ECS, you should install
a certificate that is signed by a trusted Certificate Authority (CA). You can generate a self-signed certificate to use until you
have a CA-signed certificate. The self-signed certificate must be installed into the certificate store of any machine that will
access ECS via HTTPS.
ECS uses the following types of SSL certificates:
Management certificates Used for management requests using the ECS Management REST API. These HTTPS requests use port 4443.
Object certificates Used for requests using the supported object protocols. These HTTPS requests use ports 9021 (S3), 9023 (Atmos), and 9025 (Swift).
You can upload a self-signed certificate, a certificate that is signed by a CA, or, for an object certificate, you can
request ECS to generate a certificate for you. The key/certificate pairs can be uploaded to ECS by using the ECS Management
REST API on port 4443.
The following topics explain how to create, upload, and verify certificates:
● ECS certificate tool
● Generate certificates
● Upload a certificate
● Verify installed certificates
Installation
This task describes how to install the ECS certificate tool.
Steps
1. Download the ecs_certificate_tool package.
2. Upload the tool to /home/admin on one of the ECS nodes.
3. Change to the /home/admin directory and extract the package.
# cd /home/admin
# tar -zxvf ecs_certificate_tool-1.1.tgz
# cd ecs_certificate_tool-1.1
5. Edit the config.ini file and enter the correct root UI credentials.
Command:
# sudo vi config.ini
Example:
[UI_CREDENTIALS]
USERNAME = root
PASSWORD = ChangeMe
6. Use the certificate tool to generate your SAN (subject alternative name) configuration. Manually add the FQDN and IP
address of your load balancer if you are using one.
Command:
Example:
======================================================================
Generating SAN (subject alternative name) config.
======================================================================
----------------------------------------------------------------------
Setting DATA_SUBJECT_ALTERNATIVE_NAME config
----------------------------------------------------------------------
Set DNS_NAMES to :
['layton-ex3000.example.com',
'ogden-ex3000.example.com',
'orem-ex3000.example.com',
'provo-ex3000.example.com',
'sandy-ex3000.example.com']
Set IP_ADDRESSES to :
['192.0.2.104',
'192.0.2.105',
'192.0.2.106',
'192.0.2.107',
'192.0.2.108']
----------------------------------------------------------------------
Setting MANAGEMENT_SUBJECT_ALTERNATIVE_NAME config
----------------------------------------------------------------------
Set DNS_NAMES to :
['layton-ex3000.example.com',
'ogden-ex3000.example.com',
'orem-ex3000.example.com',
'provo-ex3000.example.com',
'sandy-ex3000.example.com']
Set IP_ADDRESSES to :
['192.0.2.104',
'192.0.2.105',
'192.0.2.106',
'192.0.2.107',
'192.0.2.108']
Configuration
This task describes the values for your certificates.
You set all the values for your certificate in the config.ini file.
If you do not want to use a value, leave it blank, as in the example below:
[GENERAL]
COMMON_NAME = *.ecs.example.com
# Two letter country name
COUNTRY_NAME = US
LOCALITY_NAME = Salt Lake City
STATE_OR_PROVINCE_NAME = Utah
STREET_ADDRESS = 123 Example Street
ORGANIZATION_NAME = Example Inc.
# optional unit name
ORGANIZATIONAL_UNIT_NAME =
# optional email address
EMAIL_ADDRESS = example@example.com
[UI_CREDENTIALS]
USERNAME = root
PASSWORD = ChangeMe
[SELF_SIGNED]
# 1825 days = 5 years
VALID_DAYS = 1825
[DATA_SUBJECT_ALTERNATIVE_NAME]
DNS_NAMES = node1.ecs.example.com node2.ecs.example.com node3.ecs.example.com
IP_ADDRESSES = 192.0.2.1 192.0.2.2 192.0.2.3 192.0.2.4
[MANAGEMENT_SUBJECT_ALTERNATIVE_NAME]
DNS_NAMES = node1.ecs.example.com node2.ecs.example.com node3.ecs.example.com
IP_ADDRESSES = 198.51.100.1 198.51.100.2 198.51.100.3 198.51.100.4
[ADVANCED]
# Probably don't use these unless you really know what you're doing
SERIAL_NUMBER =
SURNAME =
GIVEN_NAME =
TITLE =
GENERATION_QUALIFIER =
X500_UNIQUE_IDENTIFIER =
DN_QUALIFIER =
PSEUDONYM =
USER_ID =
DOMAIN_COMPONENT =
JURISDICTION_COUNTRY_NAME =
JURISDICTION_LOCALITY_NAME =
BUSINESS_CATEGORY =
POSTAL_ADDRESS =
POSTAL_CODE =
INN =
OGRN =
SNILS =
UNSTRUCTURED_NAME =
Steps
Run the ecs_certificate_tool view_certs operation.
Command:
Example output:
ecs_certificate_tool v1.0
log_file: /home/admin/ecs_certificate_tool-1.0/certificate_tool.log
----------------------------------------------------------------------
View certificates
----------------------------------------------------------------------
======================================================================
Data Certificate:
======================================================================
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
3b:0f:a3:e2:fa:0a:90:14:86:6c:a3:3a:26:5c:0b:8d:6e:18:7d:eb
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123 Example
Street, O=Example Inc./emailAddress=example@example.com
Validity
Not Before: Oct 17 18:35:06 2020 GMT
Not After : Oct 16 18:35:06 2025 GMT
Subject: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123
Example Street, O=Example Inc./emailAddress=example@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ad:13:ea:31:bb:13:30:fc:ad:75:1a:84:16:53:
76:9d:0d:96:60:69:04:70:ad:00:76:c5:e4:f0:39:
3d:e3:9b:2e:2a:06:0b:ae:29:16:22:69:73:1d:2b:
27:73:68:7a:42:62:84:37:9b:7e:7f:60:48:aa:80:
14:96:07:52:ac:d5:dd:1f:af:59:3b:88:5e:15:43:
f1:9e:29:91:0a:6d:19:8e:41:4b:3c:9f:0c:64:16:
5c:c6:61:a6:c7:28:a9:9e:14:81:10:7e:4a:4f:25:
93:20:d9:5b:fe:b3:ac:56:28:f0:89:2c:e3:97:18:
df:1d:e3:1b:6d:c5:08:fb:d6:97:81:82:b1:6b:33:
45:1d:de:7a:30:5c:6d:4a:70:96:06:f8:05:48:a7:
89:ad:ce:db:99:f2:61:88:92:75:e5:cf:d2:b1:2c:
28:60:6f:5e:ba:6c:02:f4:12:90:be:eb:6d:48:ae:
b2:3a:6e:76:a6:02:b1:9e:f7:95:2c:65:8a:80:1a:
64:52:ec:f5:0c:2b:c8:87:a7:e5:4d:f7:34:60:a5:
49:03:30:27:10:8d:ad:4e:92:52:8b:d9:6b:ad:2d:
15:60:a5:26:fc:1b:1d:69:9f:5c:a3:0f:d9:cb:b9:
1d:68:30:6c:c8:ca:e1:71:4b:88:bd:98:d7:10:ae:
89:c5
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:node1.ecs.example.com, DNS:node2.ecs.example.com,
DNS:node3.ecs.example.com, IP Address:192.0.2.1, IP Address:192.0.2.2, IP
Address:192.0.2.3, IP Address:192.0.2.4
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Key Usage: critical
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage: critical
TLS Web Server Authentication
X509v3 Authority Key Identifier:
0.
Signature Algorithm: sha256WithRSAEncryption
33:85:7e:3b:fd:fd:3a:35:97:17:11:2d:4d:e1:7e:03:35:82:
8a:47:30:ed:b2:f9:1b:b4:22:a2:60:00:b5:9c:aa:6c:0d:e7:
ea:c7:0a:e6:05:24:7d:bd:50:ab:23:9b:16:6a:e7:be:e9:21:
26:61:0e:e5:e1:62:7e:d8:01:3a:3e:19:14:89:c2:ef:62:a0:
17:5c:80:2b:24:6b:96:73:fa:b0:8f:4d:09:0e:69:4f:72:f0:
4d:b1:13:8d:90:4e:18:4b:82:be:fd:48:b0:c2:9d:9c:43:d9:
d9:73:e6:15:88:79:1f:3e:13:ec:c9:6f:5f:2a:08:7c:a7:5d:
b4:e1:50:0f:3c:49:e3:e4:9f:8f:dd:e0:b5:b5:2d:d8:2d:29:
94:2d:4b:66:20:36:f0:ae:3a:ae:a4:c5:91:3c:f4:2a:d6:f5:
24:ec:7b:3a:96:d6:75:91:f9:b3:1c:8a:93:87:1b:d7:f2:f7:
72:4d:0c:02:b9:2e:ab:f6:76:ca:c5:74:39:e0:a0:54:2b:85:
4d:dd:e6:c7:fc:d0:e7:bc:3e:9e:98:19:e5:ed:ad:5f:4b:ea:
20:17:c5:23:eb:09:ad:8e:13:57:75:78:f9:68:bb:18:34:fc:
3a:26:94:90:5e:ed:a6:09:bb:14:5c:bd:2e:d3:5b:c4:43:08:
66:95:e7:ee
======================================================================
Management Certificate:
======================================================================
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
3b:0f:a3:e2:fa:0a:90:14:86:6c:a3:3a:26:5c:0b:8d:6e:18:7d:eb
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123 Example
Street, O=Example Inc./emailAddress=example@example.com
Validity
Not Before: Oct 17 18:35:06 2020 GMT
Not After : Oct 16 18:35:06 2025 GMT
Subject: CN=*.ecs.example.com, C=US, L=Salt Lake City, ST=Utah/street=123
Example Street, O=Example Inc./emailAddress=example@example.com
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:ad:13:ea:31:bb:13:30:fc:ad:75:1a:84:16:53:
76:9d:0d:96:60:69:04:70:ad:00:76:c5:e4:f0:39:
3d:e3:9b:2e:2a:06:0b:ae:29:16:22:69:73:1d:2b:
27:73:68:7a:42:62:84:37:9b:7e:7f:60:48:aa:80:
14:96:07:52:ac:d5:dd:1f:af:59:3b:88:5e:15:43:
f1:9e:29:91:0a:6d:19:8e:41:4b:3c:9f:0c:64:16:
5c:c6:61:a6:c7:28:a9:9e:14:81:10:7e:4a:4f:25:
93:20:d9:5b:fe:b3:ac:56:28:f0:89:2c:e3:97:18:
df:1d:e3:1b:6d:c5:08:fb:d6:97:81:82:b1:6b:33:
45:1d:de:7a:30:5c:6d:4a:70:96:06:f8:05:48:a7:
89:ad:ce:db:99:f2:61:88:92:75:e5:cf:d2:b1:2c:
28:60:6f:5e:ba:6c:02:f4:12:90:be:eb:6d:48:ae:
b2:3a:6e:76:a6:02:b1:9e:f7:95:2c:65:8a:80:1a:
64:52:ec:f5:0c:2b:c8:87:a7:e5:4d:f7:34:60:a5:
49:03:30:27:10:8d:ad:4e:92:52:8b:d9:6b:ad:2d:
15:60:a5:26:fc:1b:1d:69:9f:5c:a3:0f:d9:cb:b9:
1d:68:30:6c:c8:ca:e1:71:4b:88:bd:98:d7:10:ae:
89:c5
Exponent: 65537 (0x10001)
X509v3 extensions:
X509v3 Subject Alternative Name:
DNS:node1.ecs.example.com, DNS:node2.ecs.example.com,
DNS:node3.ecs.example.com, IP Address:192.0.2.1, IP Address:192.0.2.2, IP
Address:192.0.2.3, IP Address:192.0.2.4
X509v3 Basic Constraints: critical
CA:FALSE
X509v3 Key Usage: critical
Digital Signature, Non Repudiation, Key Encipherment
X509v3 Extended Key Usage: critical
TLS Web Server Authentication
X509v3 Authority Key Identifier:
0.
Signature Algorithm: sha256WithRSAEncryption
33:85:7e:3b:fd:fd:3a:35:97:17:11:2d:4d:e1:7e:03:35:82:
8a:47:30:ed:b2:f9:1b:b4:22:a2:60:00:b5:9c:aa:6c:0d:e7:
ea:c7:0a:e6:05:24:7d:bd:50:ab:23:9b:16:6a:e7:be:e9:21:
26:61:0e:e5:e1:62:7e:d8:01:3a:3e:19:14:89:c2:ef:62:a0:
17:5c:80:2b:24:6b:96:73:fa:b0:8f:4d:09:0e:69:4f:72:f0:
4d:b1:13:8d:90:4e:18:4b:82:be:fd:48:b0:c2:9d:9c:43:d9:
d9:73:e6:15:88:79:1f:3e:13:ec:c9:6f:5f:2a:08:7c:a7:5d:
b4:e1:50:0f:3c:49:e3:e4:9f:8f:dd:e0:b5:b5:2d:d8:2d:29:
94:2d:4b:66:20:36:f0:ae:3a:ae:a4:c5:91:3c:f4:2a:d6:f5:
24:ec:7b:3a:96:d6:75:91:f9:b3:1c:8a:93:87:1b:d7:f2:f7:
72:4d:0c:02:b9:2e:ab:f6:76:ca:c5:74:39:e0:a0:54:2b:85:
4d:dd:e6:c7:fc:d0:e7:bc:3e:9e:98:19:e5:ed:ad:5f:4b:ea:
20:17:c5:23:eb:09:ad:8e:13:57:75:78:f9:68:bb:18:34:fc:
3a:26:94:90:5e:ed:a6:09:bb:14:5c:bd:2e:d3:5b:c4:43:08:
66:95:e7:ee
DONE
Steps
Create a CSR.
----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------
Validating STREET_ADDRESS = 123 Example Street..PASS
Validating ORGANIZATION_NAME = Example Inc...PASS
Validating EMAIL_ADDRESS = example@example.com..PASS
----------------------------------------------------------------------
Validating DNS_NAMES configuration
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------
Validating SELF_SIGNED..PASS
----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------
Validating IPv4Address: 198.51.100.2..PASS
Validating IPv4Address: 198.51.100.3..PASS
Validating IPv4Address: 198.51.100.4..PASS
Validating SELF_SIGNED..PASS
Steps
Create a self-signed certificate.
----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------
Validating SELF_SIGNED..PASS
----------------------------------------------------------------------
Validating REST API Credentials
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating GENERAL configuration
----------------------------------------------------------------------
----------------------------------------------------------------------
Validating IP_ADDRESSES configuration
----------------------------------------------------------------------
Validating SELF_SIGNED..PASS
Upload certificate
This task describes how to upload data and management certificates to ECS.
Steps
Upload your certificate.
Upload data certificate.
Command:
Example:
----------------------------------------------------------------------
Upload Certificate
----------------------------------------------------------------------
admin@provo-ex3000:~/ecs_certificate_tool-1.0>
NOTE: After uploading the data certificate, you have two options.
● You can wait two hours for dataheadsvc to propagate the new certificate across the cluster.
● You can manually restart dataheadsvc on the node you ran the tool from, but this can cause a brief interruption to data access on that node.
Command:
Example:
----------------------------------------------------------------------
Upload Certificate
----------------------------------------------------------------------
NOTE: After uploading the new management certificate, you must perform rolling restarts of objcontrolsvc or nginx across
the cluster.
a. Generate a cluster-wide MACHINES file:
Generate certificates
You can generate a self-signed certificate, or you can purchase a certificate from a certificate authority (CA). The CA-signed
certificate is recommended for production purposes because it can be validated by any client machine without any extra steps.
Certificates must be in PEM-encoded X.509 format.
When you generate a certificate, you typically specify the hostname where the certificate is used as the common name (CN).
However, since ECS has multiple nodes, each with its own hostname, you must create a single certificate that supports all the
different host names for an ECS cluster. SSL certificates support this using the Subject Alternative Names (SAN) configuration,
which allows you to specify all the host names and IP addresses that the certificate should support.
For maximum compatibility with object protocols, the Common Name (CN) on your certificate must point to the wildcard DNS
entry used by S3, because S3 is the only protocol that uses virtually hosted buckets (and injects the bucket name into the
hostname). You can specify only one wildcard entry on an SSL certificate, and it must be under the CN. The other DNS entries
for your load balancer for the Atmos and Swift protocols must be registered as Subject Alternative Names (SANs) on the
certificate.
The topics in this section show how to generate a certificate or certificate request using openssl; however, your IT
organization may have different requirements or procedures for generating certificates.
Steps
1. Log in to an ECS node or to a node from which you can connect to the ECS cluster.
2. Use the openssl tool to generate a private key.
For example, to create a key called server.key, use:
3. When prompted, enter a passphrase for the private key and reenter it to verify. You will need to provide this passphrase
when creating a self-signed certificate or a certificate signing request using the key.
You must create a copy of the key with the passphrase removed before uploading the key to ECS. For more information, see
Upload a certificate.
4. Set the permissions on the key file.
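The commands for steps 2 and 4 did not survive in this copy; the following is a minimal sketch, assuming the key file is named server.key. The -passout option is shown only so the sketch runs non-interactively; in practice, omit it and enter the passphrase at the prompt.

```shell
# Step 2: generate a passphrase-protected 2048-bit RSA private key.
# -aes128 encrypts the key on disk.
openssl genrsa -aes128 -passout pass:changeit -out server.key 2048

# Step 4: restrict the key file to its owner.
chmod 600 server.key
```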
Steps
1. Create the configuration file.
cp /etc/ssl/openssl.cnf request.conf
2. Edit the configuration file with a text editor and make the following changes.
NOTE: If you are using a wildcard in the CN, also add it as an entry in the SAN section for maximum compatibility.
a. Add the [ alternate_names ] section.
For example:
[ alternate_names ]
DNS.1 = os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com
NOTE: There is a space between the bracket and the name of the section.
NOTE: If you are using a load balancer, you can use FQDN instead of IP address.
If you are uploading the certificates to ECS nodes rather than to a load balancer, the format is:
[ alternate_names ]
IP.1 = <IP node 1>
IP.2 = <IP node 2>
IP.3 = <IP node 3>
...
subjectAltName = @alternate_names
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
The following line is likely to exist in this [ v3_ca ] section. If you create a certificate signing request, you must comment
it out as shown:
#authorityKeyIdentifier=keyid:always,issuer
copy_extensions = copy
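Assembled from the fragments above, a minimal request.conf might look like the following sketch. The section names assume the layout of the copied openssl.cnf, and the hostnames are examples.

```
[ req ]
distinguished_name = req_distinguished_name
x509_extensions    = v3_ca    # applied to self-signed certificates
req_extensions     = v3_ca    # applied to signing requests

[ req_distinguished_name ]
# DN prompts as copied from openssl.cnf

[ v3_ca ]
subjectAltName = @alternate_names
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
#authorityKeyIdentifier=keyid:always,issuer

[ alternate_names ]
DNS.1 = os.example.com
DNS.2 = atmos.example.com
DNS.3 = swift.example.com
```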
Prerequisites
● Create a private key using the procedure in Create a private key.
● To create certificates that use SAN, you must create a SAN configuration file using the procedure in Generate a SAN
configuration.
● Create a self-signed certificate for management and one for data (object).
Steps
1. Use the private key to create a self-signed certificate.
Two ways of creating the certificate are shown: one for use if you have already prepared a SAN configuration file to
specify the alternative server names, and another if you have not.
If you are using SAN:
openssl req -x509 -new -key server.key -config request.conf -out server.crt
Example output.
Signature ok
subject=/C=US/ST=GA/
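If you are not using a SAN configuration file, the equivalent command drops -config. The sketch below is self-contained (it generates a throwaway key) and uses -subj and -passin only to avoid the interactive prompts; it ends with the decode command used in step 5 to view the result.

```shell
# Throwaway key so the sketch is runnable on its own; normally server.key
# already exists from "Create a private key".
openssl genrsa -aes128 -passout pass:changeit -out server.key 2048

# Create the self-signed certificate without a SAN configuration file.
openssl req -x509 -new -key server.key -passin pass:changeit \
  -subj "/C=US/ST=GA/O=Example Inc./CN=*.ecs.example.com" \
  -days 365 -out server.crt

# Step 5: view the certificate.
openssl x509 -in server.crt -text -noout
```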
2. Enter the pass phrase for your private key.
3. At the prompts, enter the fields for the DN for the certificate.
Most fields are optional. However, you must enter a Common Name (CN).
NOTE: The CN should have a wildcard (*) to support both path style and virtual style addressing. The wildcard (*)
supports virtual style addressing whereas the FQDN supports path style.
4. Enter the Distinguished Name (DN) details when prompted. More information about DN fields is provided in Distinguished
Name (DN) fields.
5. View the certificate.
The following table describes the fields that make up the Distinguished Name (DN).
Locality or City: The city where your organization is located. Example: Mountain View
State or Province: The state or region where your organization is located. This must not be abbreviated. Example: California
Country: The two-letter ISO code for the country or region where your organization is located. Example: US
Email address: An email address to contact your organization. Example: contact@yourco.com
Create a certificate signing request
You can create a certificate signing request to submit to a CA to obtain a signed certificate.
Prerequisites
● You must create a private key using the procedure in Create a private key.
● To create certificates that use SAN, you must create a SAN configuration file using the procedure in Generate a SAN
configuration.
Steps
1. Use the private key to create a certificate signing request.
Two ways of creating the signing request are shown: one for use if you have already prepared a SAN configuration file to
specify the alternative server names, and another if you have not.
If you are using SAN:
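The request commands did not survive in this copy; a sketch follows. server.csr is a placeholder output name, and request.conf is the file from Generate a SAN configuration. The throwaway key generation and the -subj/-passin options are included only to keep the sketch self-contained and non-interactive.

```shell
# Throwaway key; normally server.key already exists.
openssl genrsa -aes128 -passout pass:changeit -out server.key 2048

# With a SAN configuration file:
#   openssl req -new -key server.key -config request.conf -out server.csr

# Without a SAN configuration file:
openssl req -new -key server.key -passin pass:changeit \
  -subj "/C=US/CN=*.ecs.example.com" -out server.csr
```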
When creating a signing request, you are asked to supply the Distinguished Name (DN) which comprises a number of fields.
Only the Common Name is required and you can accept the defaults for the other parameters.
2. Enter the pass phrase for your private key.
3. At the prompts, enter the fields for the DN for the certificate.
Most fields are optional. However, you must enter a Common Name (CN).
NOTE: The CN should have a wildcard (*) to support both path style and virtual style addressing. The wildcard (*)
supports virtual style addressing whereas the FQDN supports path style.
More information on the DN fields are provided in Distinguished Name (DN) fields.
4. You are prompted to enter an optional challenge password and a company name.
Results
You can submit the certificate signing request to your CA who will return a signed certificate file.
Next steps
Once you receive the CA-signed certificate file, make sure it is in the correct format as described in CA-signed certificate file
format.
If you received a signed certificate from a corporate CA, the format is host certificate > intermediate certificate > root
certificate, as shown below. The root certificate file should be included so that clients can import it.
NOTE: There is no text between the end of each certificate and the beginning of the next certificate.
-----BEGIN CERTIFICATE-----
host certificate
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate certificate
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
root certificate
-----END CERTIFICATE-----
If you received a signed certificate from a public CA, including the root certificate file is not required because it is installed
on the client. The certificate file format is host certificate > intermediate certificate, as shown below. (Note there is no text
between the end of the host certificate and the beginning of the intermediate certificate.)
-----BEGIN CERTIFICATE-----
host certificate
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
intermediate certificate
-----END CERTIFICATE-----
Upload a certificate
You can upload management or data certificates to ECS. Whichever type of certificate you upload, you must first authenticate
with the API.
● Authenticate with the ECS Management REST API
● Upload a management certificate
● Upload a data certificate for data access endpoints
Steps
Authenticate with the ECS Management REST API and obtain an authentication token that can be used when calling the API to
upload or verify certificates.
a. Run the following command:
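The command itself did not survive in this copy. A sketch of the login call, hedged on the exact flags (the token is returned in the X-SDS-AUTH-TOKEN response header, and the angle-bracket values are placeholders):

```shell
export TOKEN=$(curl -s -k -D - -o /dev/null \
  -u "<username>:<password>" https://<public_ip>:4443/login \
  | grep X-SDS-AUTH-TOKEN | tr -d '\r')
```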
The username and password are those used to access the ECS Portal. The public_ip is the public IP address of the node.
b. Verify the token exported correctly.
echo $TOKEN
Example output:
X-SDS-AUTH-TOKEN:
BAAcTGZjUjJ2Zm1iYURSUFZzKzhBSVVPQVFDRUUwPQMAjAQASHVybjpzdG9yYWdlb3M6VmlydHVhbERhdGFDZW
50ZXJEYXRhOjcxYjA1ZTgwLTNkNzktND
dmMC04OThhLWI2OTU4NDk1YmVmYgIADTE0NjQ3NTM2MjgzMTIDAC51cm46VG9rZW46YWMwN2Y0NGYtMjE5OS00
ZjA4LTgyM2EtZTAwNTc3ZWI0NDAyAgAC
0A8=
Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN) as
described in Authenticate with the ECS Management REST API.
● Ensure that the machine that you use has a REST client (such as curl) and can access the ECS nodes using the ECS
Management REST API.
● Ensure that your private key and certificate are available on the machine from which you intend to perform the upload.
Steps
1. Ensure that your private key does not have a passphrase.
If it does, you can create a copy with the passphrase stripped, by typing the following command:
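A sketch, assuming the key is server.key; the throwaway key generation is included only to make the sketch self-contained.

```shell
# Throwaway passphrase-protected key; normally server.key already exists.
openssl genrsa -aes128 -passout pass:changeit -out server.key 2048

# Write an unencrypted copy of the private key; keep it protected on disk.
openssl rsa -in server.key -passin pass:changeit -out server_nopass.key
chmod 600 server_nopass.key
```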
2. Upload the management keystore using your private key and signed certificate.
Using curl:
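A sketch of the call, hedged on the exact payload; the PUT /vdc/keystore endpoint is taken from the verification step later in this chapter, and angle-bracket values are placeholders:

```shell
curl -svk -H "$TOKEN" -H "Content-type: application/xml" -X PUT \
  -d "<rotate_keycertchain>
        <key_and_certificate>
          <private_key>$(cat <path>/server_nopass.key)</private_key>
          <certificate_chain>$(cat <path>/server.crt)</certificate_chain>
        </key_and_certificate>
      </rotate_keycertchain>" \
  https://<public_ip>:4443/vdc/keystore
```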
The privateKeyFile, for example <path>/server_nopass.key, and certificateFile, for example <path>/server.crt, must be
replaced with the paths to the key and certificate files.
3. Log in to one of the ECS nodes as the security admin user.
4. Verify that the MACHINES file has all nodes in it.
The MACHINES file is used by ECS wrapper scripts that perform commands on all nodes, such as viprexec.
The MACHINES file is in /home/admin.
a. Display the contents of the MACHINES file.
cat /home/admin/MACHINES
b. If the MACHINES file does not contain all nodes, recreate it.
getrackinfo -c MACHINES
● The following check detects and reports inconsistency in xDoctor:
XDOC-1986 |Check consistency of MACHINES file across /home/admin and /root and
getrackinfo -c
5. Restart the objcontrolsvc and nginx services once the management certificates are applied.
a. Restart the object service.
Next steps
You can verify that the certificate has uploaded correctly using the following procedure: Verify the management certificate.
Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN). See
Authenticate with the ECS Management REST API.
● Ensure that the machine that you use has a suitable REST client (such as curl) and can access the ECS nodes using the ECS
Management REST API.
● Ensure that your private key and certificate are available on the machine from which you intend to perform the upload.
Steps
1. Ensure that your private key does not have a pass phrase.
If it does, you can create a copy with the pass phrase stripped, using:
2. NOTE:
● Sometimes when the API call is copied, the hyphen is dropped from object-cert. To avoid errors, review the API call
before using it.
● You can use a single certificate that covers all nodes from more than one VDC.
Upload the keystore for the data path using your private key and signed certificate.
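A sketch of the call, hedged on the exact payload; the PUT /object-cert/keystore endpoint is the one named in the NOTE above, and angle-bracket values are placeholders:

```shell
curl -svk -H "$TOKEN" -H "Content-type: application/xml" -X PUT \
  -d "<rotate_keycertchain>
        <key_and_certificate>
          <private_key>$(cat <path>/server_nopass.key)</private_key>
          <certificate_chain>$(cat <path>/server.crt)</certificate_chain>
        </key_and_certificate>
      </rotate_keycertchain>" \
  https://<public_ip>:4443/object-cert/keystore
```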
The privateKeyFile, for example <path>/server_nopass.key, and certificateFile, for example <path>/server.crt, must be
replaced with the paths to the key and certificate files.
3. The certificate is distributed when the dataheadsvc service is restarted. You can do this with the commands below.
NOTE: You do not have to restart the services when changing the data certificate; dataheadsvc is restarted
automatically on each node two hours after the certificate update.
viprexec -i 'pidof dataheadsvc; sudo kill -9 `pidof dataheadsvc`; sleep 60; pidof
dataheadsvc'
Next steps
You can verify that the certificate has correctly uploaded using the following procedure: Verify the object certificate.
Prerequisites
● This operation requires the System Administrator role in ECS.
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN) as
described in Authenticate with the ECS Management REST API.
● Ensure that the machine that you use has a suitable REST client (such as curl) and can access the ECS nodes using the ECS
Management REST API.
● Ensure your private key and certificate are available on the machine from which you intend to perform the upload.
Steps
1. To add a certificate to the TrustStore, use:
{
"add": [
"-----BEGIN CERTIFICATE-----\nMI7FS8J...DF=r\n-----END CERTIFICATE-----"
]
}
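A sketch of the call; the PUT /vdc/truststore endpoint is an assumption based on the ECS Management REST API, and truststore_add.json is a placeholder file holding the JSON payload above:

```shell
curl -svk -H "$TOKEN" -H "Content-Type: application/json" -X PUT \
  -d @truststore_add.json https://<public_ip>:4443/vdc/truststore
```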
{"certificate":["-----BEGIN CERTIFICATE-----\nMIIDdzCCAl+gAwIBAgIQU
+WFap1wZplFATLD4CWbnTANBgkqhkiG9w0BAQUFADBO\r
\nMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxFjAUBgoJkiaJk/IsZAEZFgZoYWRvb3Ax\r
\nHTAbBgNVBAMTFGhhZG9vcC1OSUxFMy1WTTQzLUNBMB4XDTE1MDkwNzEzMDA0MFoX\r
\nDTIwMDkwNzEzMTAzOVowTjEVMBMGCgmSJomT8ixkARkWBWxvY2FsMRYwFAYKCZIm\r
\niZPyLGQBGRYGaGFkb29wMR0wGwYDVQQDExRoYWRvb3AtTklMRTMtVk00My1DQTCC\r
\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOqoxHBtrND7CiHQvHXSDdKy\r
\nxyZv6qK0BjlQKlQR2qiCjfOC3By9b8cSvzVFo6mdDiQurxPjlz5JLALfbIMxcslN\r
\nBvDkzn9tzzspbYSLyRqOyMxe4F+Bo9Hm8nGLtZU6liLBglPgrSt77Qvi6pAU0EjN\r
\nNZ3ZqBYZcmx/rD3iCeHojcl/P4UDy4lbCb3l7w6GbrczGRimitkFiriD3kUtkXyw\r
\nMM4L+ZY1j8o6WXSfCMhX0nX8OCrSIukMyZKCreeUQg4xykSp6GhIB74I6R6gIAh0\r
\nFOqqLsRNjMRjEhWpVXB7tTW74E3DgVwe2PF/3aL1i9sx90UekZREhA3L1sKKm10C\r
\nAwEAAaNRME8wCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYE\r
\nFDRCvMyr1H52IuQtDpsfnj4PbBNwMBAGCSsGAQQBgjcVAQQDAgEAMA0GCSqGSIb3\r
\nDQEBBQUAA4IBAQAXPE0Agbjuhi02Yufz0UtUBQYcAHsRCwxrLKrvnz1UBqA3wU87\r
\nbUQDppQ7e3Iz5SJ9PZ/3nUVQyMgnI5NS1UP6Hn/j9jVlaAB4MKXzgXdBCIDUOtzh\r
\nBZuvlz6FDjBbBSyrAk3LVnqSC2DNYVPbyRrUBHQxWnYT2FuIMQeGDb/rjtWALkvb\r
\n7sTsLKiGHwkwmeYyrsiUpzoyarw21EWxLRRrMfNX1/CrGg883k0mYuEYgOaFmoi0\r
\nRWZBz2NE10V5yorVniuiql/Tvbi3gPYLhy3DFMO4mjh9eSgcakYCNsQFc5msmu4Y\r
\nG6ab3ChgU6kVF5sIEv/wXvyId8X2uLoa8Wcj\r\n-----END CERTIFICATE-----"]}
2. To get the certificates from the TrustStore, use:
{"certificate":["-----BEGIN CERTIFICATE-----\nMIIDdzCCAl+gAwIBAgIQU
+WFap1wZplFATLD4CWbnTANBgkqhkiG9w0BAQUFADBO\r
\nMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxFjAUBgoJkiaJk/IsZAEZFgZoYWRvb3Ax\r
\nHTAbBgNVBAMTFGhhZG9vcC1OSUxFMy1WTTQzLUNBMB4XDTE1MDkwNzEzMDA0MFoX\r
\nDTIwMDkwNzEzMTAzOVowTjEVMBMGCgmSJomT8ixkARkWBWxvY2FsMRYwFAYKCZIm\r
\niZPyLGQBGRYGaGFkb29wMR0wGwYDVQQDExRoYWRvb3AtTklMRTMtVk00My1DQTCC\r
\nASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAOqoxHBtrND7CiHQvHXSDdKy\r
\nxyZv6qK0BjlQKlQR2qiCjfOC3By9b8cSvzVFo6mdDiQurxPjlz5JLALfbIMxcslN\r
\nBvDkzn9tzzspbYSLyRqOyMxe4F+Bo9Hm8nGLtZU6liLBglPgrSt77Qvi6pAU0EjN\r
\nNZ3ZqBYZcmx/rD3iCeHojcl/P4UDy4lbCb3l7w6GbrczGRimitkFiriD3kUtkXyw\r
\nMM4L+ZY1j8o6WXSfCMhX0nX8OCrSIukMyZKCreeUQg4xykSp6GhIB74I6R6gIAh0\r
\nFOqqLsRNjMRjEhWpVXB7tTW74E3DgVwe2PF/3aL1i9sx90UekZREhA3L1sKKm10C\r
\nAwEAAaNRME8wCwYDVR0PBAQDAgGGMA8GA1UdEwEB/wQFMAMBAf8wHQYDVR0OBBYE\r
\nFDRCvMyr1H52IuQtDpsfnj4PbBNwMBAGCSsGAQQBgjcVAQQDAgEAMA0GCSqGSIb3\r
\nDQEBBQUAA4IBAQAXPE0Agbjuhi02Yufz0UtUBQYcAHsRCwxrLKrvnz1UBqA3wU87\r
\nbUQDppQ7e3Iz5SJ9PZ/3nUVQyMgnI5NS1UP6Hn/j9jVlaAB4MKXzgXdBCIDUOtzh\r
\nBZuvlz6FDjBbBSyrAk3LVnqSC2DNYVPbyRrUBHQxWnYT2FuIMQeGDb/rjtWALkvb\r
\n7sTsLKiGHwkwmeYyrsiUpzoyarw21EWxLRRrMfNX1/CrGg883k0mYuEYgOaFmoi0\r
\nRWZBz2NE10V5yorVniuiql/Tvbi3gPYLhy3DFMO4mjh9eSgcakYCNsQFc5msmu4Y\r
\nG6ab3ChgU6kVF5sIEv/wXvyId8X2uLoa8Wcj\r\n-----END CERTIFICATE-----"]}
{
"remove": [
"-----BEGIN CERTIFICATE-----\nMIIGWT...PvCr\n-----END CERTIFICATE-----"
]
}
{"certificate":[]}
{
"accept_all_certificates": ""
}
Response Payload JSON example:
{"accept_all_certificates":true}
{"accept_all_certificates":true}
Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN). See
Authenticate with the ECS Management REST API.
● If you have restarted services, the certificate is available immediately. Otherwise, you must wait two hours to be sure that
the certificate is propagated to all nodes.
Steps
1. Use the GET /vdc/keystore method to return the certificate.
Using the curl tool, the method can be run by typing the following:
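A sketch of the curl call (the angle-bracket value is a placeholder):

```shell
curl -svk -H "$TOKEN" https://<public_ip>:4443/vdc/keystore
```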
tzpQhtt6kFoSBO7p//76DNzXRXhBDADwpUGG9S4tgHChAFu9DpHFzvnjNGGw83ht
qcJ6JYgB2M3lOQAssgW4fU6VD2bfQbGRWKy9G1rPYGVsmKQ59Xeuvf/cWvplkwW2
bKnZmAbWEfE1cEOqt+5m20qGPcf45B7DPp2J+wVdDD7N8198Jj5HJBJt3T3aUEwj
kvnPx1PtFM9YORKXFX2InF3UOdMs0zJUkhBZT9cJ0gASi1w0vEnx850secu1CPLF
WB9G7R5qHWOXlkbAVPuFN0lTav+yrr8RgTawAcsV9LhkTTOUcqI=
-----END CERTIFICATE-----</chain></certificate_chain>
For example:
Prerequisites
● Ensure that you have authenticated with the ECS Management REST API and stored the token in a variable ($TOKEN). See
Authenticate with the ECS Management REST API.
● If you have restarted services, the certificate will be available immediately. Otherwise, you need to wait two hours to be sure
that the certificate has propagated to all nodes.
Steps
1. Use the GET /object-cert/keystore method to return the certificate.
Using the curl tool, the method can be run by typing the following:
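A sketch of the curl call (the angle-bracket value is a placeholder):

```shell
curl -svk -H "$TOKEN" https://<public_ip>:4443/object-cert/keystore
```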
Example:
Steps
Use the following ECS Management REST API call to reset the object certificate back to the default self-signed certificate.
Chapter 12: ECS Settings
Topics:
• Introduction to ECS settings
• Object base URL
• Key Management
• External Key Manager Configuration
• Key rotation
• Secure Remote Services
• Alert policy
• Event notification servers
• Platform locking
• Licensing
• Security
• About this VDC
• Object version limitation settings
In the virtual host style addressing scheme, the bucket name is in the hostname. For example, you can access the bucket named
mybucket on host ecs1.yourco.com using the following address:
http://mybucket.ecs1.yourco.com
You can also include a namespace in the address.
Example: mybucket.mynamespace.ecs1.yourco.com
To use virtual host style addressing, you must configure the base URL in ECS so that ECS can identify which part of the URL is
the bucket name. You must also ensure that the DNS system is configured to resolve the address. For more information on DNS
configuration, see DNS configuration.
In the path style addressing scheme, the bucket name is added to the end of the path.
Example: ecs1.yourco.com/mybucket
You can specify a namespace by using the x-emc-namespace header or by including the namespace in the path style address.
Example: mynamespace.ecs1.yourco.com/mybucket
When ECS processes a request from an S3 compatible application to access ECS storage, ECS performs the following actions:
1. Try to extract the namespace from the x-emc-namespace header. If found, skip the following steps and process the
request.
2. Get the hostname of the URL from the host header and check if the last part of the address matches any of the configured
base URLs.
3. Where there is a BaseURL match, use the prefix part of the hostname (the part that is left when the base URL is removed),
to obtain the bucket location.
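The matching in steps 2 and 3 can be sketched with shell parameter expansion; this is illustrative only, not the actual ECS implementation, and it uses the example values shown in the examples that follow.

```shell
host="baseball.image.yourco.finance.com"   # from the Host header
base="finance.com"                         # configured base URL

prefix="${host%.$base}"        # strip the matching base URL -> baseball.image.yourco

# With "Use base URL with Namespace" enabled, the last label of the prefix
# is the namespace and the rest is the bucket name.
namespace="${prefix##*.}"      # -> yourco
bucket="${prefix%.*}"          # -> baseball.image
echo "bucket=$bucket namespace=$namespace"
```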
The following examples demonstrate how ECS handles incoming HTTP requests with different structures:
NOTE: When you add a base URL to ECS, you can specify if your URLs contain a namespace in the Use base URL with
Namespace field on the New Base URL page in the ECS Portal. This tells ECS how to treat the bucket location prefix. For
more information, see Add a Base URL
Example 1: Virtual Host Style Addressing, Use base URL with Namespace is enabled
Host: baseball.image.yourco.finance.com
BaseURL: finance.com
Use BaseURL with namespace enabled
Namespace: yourco
Bucket Name: baseball.image
Example 2: Virtual Host Style Addressing, Use base URL with Namespace is disabled
Host: baseball.image.yourco.finance.com
BaseURL: finance.com
Use BaseURL without namespace enabled
Example 3: Base URL is not configured
Host: baseball.image.yourco.finance.com
BaseURL: not configured
DNS configuration
In order for an S3 compatible application to access ECS storage, you must ensure that the URL resolves to the address of the
ECS data node, or the data node load balancer.
If your application uses path style addressing, you must ensure that your DNS system can resolve the address. For example,
if your application issues requests in the form ecs1.yourco.com/bucket, you must have a DNS entry that resolves
ecs1.yourco.com to the IP address of the load balancer that is used for access to the ECS nodes. If your application is
configured to talk to Amazon S3, the URI is in the form s3-eu-west-1.amazonaws.com.
If your application uses virtual host style addressing, the URL includes the bucket name and can include a namespace. Under
these circumstances, you must have a DNS entry that resolves the virtual host style address by using a wildcard in the DNS
entry. This also applies where you are using path style addresses that include the namespace in the URL.
For example, if the application issues requests in the form mybucket.ecs1.yourco.com, you must have the following DNS
entries:
● ecs1.yourco.com
● *.ecs1.yourco.com
If the application is previously connected to the Amazon S3 service using mybucket.s3.amazonaws.com, you must have the
following DNS entries:
● s3.amazonaws.com
● *.s3.amazonaws.com
These entries resolve the virtual host style bucket address and the base name when you issue service-level commands (for
example, list buckets).
If you create an SSL certificate for the ECS S3 service, it must have the wildcard entry as the name of the certificate and the
non-wildcard version as a Subject Alternative Name.
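As a sketch, BIND-style records for the ecs1.yourco.com example might look like the following; the address is an assumed load balancer VIP from the documentation range:

```
ecs1.yourco.com.    IN  A      203.0.113.10
*.ecs1.yourco.com.  IN  CNAME  ecs1.yourco.com.
```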
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Settings > Object Base URL.
The Base URL Management page is displayed with a list of base URLs. The Use with Namespace property indicates
whether your URLs include a namespace.
2. On the Base URL Management page, click New Base URL.
3. On the New Base URL page, in the Name field, type the name of the base URL.
The name provides a label for this base URL in the list of base URLs on the Base URL Management page.
4. In the Base URL field, type the base URL.
If your object URLs are in the form https://mybucket.mynamespace.acme.com (that is,
bucket.namespace.baseurl ) or https://mybucket.acme.com (that is, bucket.baseurl), the base URL would
be acme.com.
5. In the Use base URL with Namespace field, click Yes if your URLs include a namespace.
6. Click Save.
Key Management
As a part of Data at Rest Encryption (D@RE), ECS supports centralized external key managers. The centralized external key
managers are compliant with the Key Management Interoperability Protocol (KMIP), which enhances enterprise-grade security
in the system. It also enables customers to use centralized key servers to store top-level Key Encrypting Keys (KEKs),
which provides the following benefits:
● Lets you benefit from Hardware Security Module (HSM) based key generation and the latest encryption technology
that is provided by specialized key management servers.
● Provides protection against loss of the entire appliance by storing top-level key information outside of the appliance.
ECS incorporates the KMIP standard for integration with external key managers and serves as a KMIP client, and supports the
following:
● Supports the Gemalto Safenet v8.9, Thales CipherTrust Manager 2.5.2, and IBM SKLM v3.01 (Security Key Lifecycle
Manager) key managers.
NOTE: The key manager supported versions are determined by Dell EMC's Key-Trust-Platform (KTP) client.
● Supports the use of top-level KEK (master key) supplied by an external key manager.
● Supports rotation of top-level KEK (master key) supplied by an external key manager.
Prerequisites
● Ensure that you are the security administrator or have the credentials to log in as an administrator.
● Ensure that you complete the following steps before the activation.
Results
Invoking the EKMCluster activation operation triggers a background task to run the activation steps. These steps can be
observed through the UI. Upon completion of the activation steps, the following have been completed:
● Master key created on external key manager
● Master key retrieval validated against all external key members
● Internal reference to the new Master Key is updated
● Rotation key created on external key manager
● Rotation key retrieval validated against all external key members.
● Internal reference to the new Rotation key is updated
● All namespace keys are re-protected using the virtual master key, which now references the new rotation key.
Create a cluster
From the Key Management External Key Servers, you can create a cluster and then create external key servers.
Steps
1. Select Settings > Key Management > External Key Servers > New Cluster.
2. In the Cluster Name field, type a unique name for the cluster.
Prerequisites
● Ensure that you are the security administrator or have the credentials to log in as an administrator.
● Ensure that you complete the following steps before the activation.
Steps
1. Deactivate the existing KeySecure in ECS using Dtquery API.
Contact Customer Support to deactivate the existing KeySecure in ECS.
2. Remove the deactivated EKM server.
Update VDC to EKM Server Mapping provides information.
3. Remove the deactivated EKM cluster
Change EKM Cluster status provides information.
4. Create an EKM Cluster representing the key management cluster.
Create cluster provides information.
5. Create EKM Servers, each representing a member of the external key management cluster.
New External Key Servers provides information.
6. Map a set of the EKMServers to each VDC.
Update VDC to EKM Server Mapping provides information.
● By default, ECS requires that there are at least two EKM Servers that are mapped per VDC.
● When mapping, the first EKMServer in the list is considered the primary, which is the server that is expected to handle
key creations and retrieval.
● The other EKMServers are considered secondaries and are used as a backup for key retrieval in case the primary is
unreachable or unavailable.
Once the EKMServers have been mapped, a background process is run to validate connectivity from each VDC. If all mapped
primary EKMServers are reachable, activate the EKMCluster, either through the UI or the API.
Results
Invoking the EKMCluster activation operation triggers a background task to run the activation steps. These steps can be
observed through the UI. Upon completion of the activation steps, the following have been completed:
● Master key is created on the external key manager.
● Master key retrieval is validated against all external key members.
● Internal reference to the new master key is updated.
● Rotation key is created on the external key manager.
● Rotation key retrieval is validated against all external key members.
● Internal reference to the new rotation key is updated.
● All namespace keys are re-protected using the virtual master key, which now references the new rotation key.
Steps
1. Select Settings > Key Management > External Key Servers > New External Key Server.
2. In the New External Key Server form, enter the Hostname/IP of the EKM Server.
3. Enter the Server Host Name.
This is the server name in the certificate that is used to identify the client associated with the identity store.
4. Enter the Port.
Enter a value only if it is different from the default, 5696.
5. To import the server certificate that is associated with the key server and presented for ECS validation, click Browse.
6. To import the revocation certificate, which identifies certificates that are not accepted, click Browse.
It can be an empty file.
7. To import the Identity store, which is the client certificate that is signed by the server and encrypted into the .p12 file, click
Browse.
Device Serial Number and Device ID are optional for SKLM. These fields are only available when the cluster type is
SKLM.
Actions - Edit the VDC Mapping to add, remove, and prioritize key servers.
Steps
1. Select Settings > Key Management > VDC EKM Mapping.
a. To expand the VDC and see details of the servers, click > .
2. Click the Edit button for the VDC.
3. Select servers to move from the Available table to the Selected table, using the actions available in the table above.
If you are adding servers, a minimum of two servers are required. The first server in the selected list is the primary EKM
Server for the VDC.
After servers are mapped to a VDC, the EKM Cluster must be activated.
NOTE: After the VDC mappings are created, a background process will validate connectivity to all mapped servers per
VDC. The result of this server check is attached to each server per VDC.
Key rotation
This section provides information about ECS Key rotation and the limitations.
ECS supports key rotation, the practice of changing keys to limit the amount of data that is protected by any given key, in
line with industry-standard practices. Key rotation can be performed on demand, both through the API and the user interface,
and is designed to minimize the risk from compromised keys.
During key rotation, the system does the following:
● Creates a control key natively or on the EKM (if activated).
● Creates a rotation key natively.
● Activates the new control and rotation keys across all sites in the federation.
● Once activated, the new control key is used to generate a new virtual control key.
● Once activated, the new rotation key is used to generate a new virtual bucket key.
● The new virtual control key is used to rewrap all rotation and namespace keys.
● The new virtual bucket key is used to protect all new object keys and associated new data.
● Rewrapped namespace keys protect existing data.
● Data is not reencrypted as a result of key rotation.
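A minimal sketch of the rewrapping step is below. XOR stands in for the real key-wrap algorithm (which this guide does not specify); the point of the sketch is that only the wrapping layer changes, while data encrypted under the namespace keys stays untouched:

```python
import os

def wrap(wrapping_key: bytes, key: bytes) -> bytes:
    # XOR is an illustrative stand-in for a real key-wrap algorithm.
    return bytes(a ^ b for a, b in zip(wrapping_key, key))

unwrap = wrap  # XOR is its own inverse

def rotate(wrapped_namespace_keys, old_virtual_control_key):
    """Create a new control key and rewrap the namespace keys under it.

    Object data is never reencrypted during rotation; only the keys
    wrapping the namespace keys change.
    """
    new_control_key = os.urandom(32)  # created natively or on the EKM
    rewrapped = [
        wrap(new_control_key, unwrap(old_virtual_control_key, w))
        for w in wrapped_namespace_keys
    ]
    return new_control_key, rewrapped
```

After rotation, each namespace key unwraps correctly under the new control key, so existing data remains readable without reencryption.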
To initiate the key rotation, select Settings > Key Management > Key Rotation > Rotate Keys.
NOTE: Rotation is an asynchronous operation, and the latest status of the current operation can be seen in the table. The
Rotate Keys table also lists the status of previous rotation operations.
Limitations
● Key rotation does not rotate namespace and bucket keys.
● Only one key rotation request can be active at any time; any other new request fails.
● The scope of key rotation is at the cluster level, so all new system-encrypted objects are affected.
● Namespace or bucket level rotation is not supported.
● Failed - During the process of adding a Secure Remote Services server to the VDC, a
connection cannot be established between the Secure Remote Services server and the
VDC due to invalid Secure Remote Services FQDN/IP, port, or user credential information.
○ When invalid Dell Secure Connect Gateway FQDN/IP or port information is entered in
the ECS Portal when adding a Secure Remote Services server, the mouse-over error
message for the Failed status reads: Failed to configure the esrs server,
reason: ESRS_CONNECTION_REFUSED
○ When invalid Dell Support credentials are entered in the ECS Portal when adding a
Secure Remote Services server, the mouse-over error message for the Failed status
reads: Failed to add device <VDC serial number> to esrs gw <dell
secure connect gateway IP>, reason: INVALID_CREDENTIALS
● Disabled - The System Administrator has disabled the Secure Remote Services connection
with the VDC. A System Administrator might choose to do this temporarily during a
planned maintenance activity to prevent flooding Secure Remote Services with alerts.
Test Dial Home Status - Can be one of the following:
● Never Run
● Passed (with timestamp)
● Failed (with timestamp)
3. If the Activated Site ID you obtained from the ECS Portal is:
○ Listed in the Site ID row at the top of the Managed Device list in the Dell Secure Connect Gateway web UI, it is
supported on the Dell Secure Connect Gateway server.
○ Not listed in the Site ID row at the top of the Managed Device list in the Dell Secure Connect Gateway web UI, you
must add it by clicking the Add SiteID button at the bottom of the page.
● Verify that you have full access (administrator) rights to the Activated Site ID and that the VDC Serial Number is associated
with the Activated Site ID.
You can obtain the VDC Serial Number on the Settings > Licensing page in the ECS Portal.
1. Go to Dell Support.
2. Log in and validate that you have full access rights.
If you can access this page, then you can use your user account credentials to configure Secure Remote Services.
3. To verify that the VDC Serial number is associated with the Activated Site ID, click the Install Base link near the center
of the page.
If you have access to multiple sites, select the appropriate site in the My Sites drop-down list.
4. In the search box, type the VDC Serial Number.
The VDC Serial Number is verified when it is displayed in the Product ID column in the table below the search box.
Prerequisites
● In an ECS geo-federated system, you must add a Secure Remote Services server for each VDC in the system.
● If you already have a Secure Remote Services server that is enabled, you must delete it, then add the new server. You
cannot edit a Secure Remote Services server in this release.
● Review the ESRS prerequisites before performing this task in the ECS Portal.
● This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Settings > ESRS > New Server.
2. On the New ESRS Server page:
a. In the FQDN/IP field, type the Dell Secure Connect Gateway FQDN or IP address.
b. In the PORT field, type the Dell Secure Connect Gateway port (9443 by default).
c. In the Username field, type the login username that is used to interface with ECS support. This username is the same
login username that is used to log in to Dell Support.
d. In the Password field, type the password set up with the login username.
3. Click Save.
Server connectivity may take a few minutes to complete. To monitor the process, click the refresh button. Possible states of
transition are Processing, Connected, or Failed.
NOTE: If you receive an INVALID_CREDENTIALS error message, email support@emc.com with a description of the
issue, your account email, VDC serial number, and Activated Site ID.
Serial numbers
Serial numbers are specific to clusters and cannot be shared among clusters. You cannot configure specific serial numbers using
the GUI. However, you can add them through fcli.
1. To configure ESRS on a cluster, choose a serial number corresponding to the hardware.
2. Configure the serial number and the user information for the cluster by specifying the information in the settings.conf file
on the installer node.
[fabric.installer.docker]
# Set as true for DIY
bypass_docker_configuration = false
FQDN/IP:
Port:
User name:
Password:
6. To check the device status and connectivity, go to Devices > Manage Device ESRS page.
NOTE: To configure serial number on a cluster using fcli, there must be space characters ' ' after ',' in the JSON file for
server-side correct parsing.
Prerequisites
This operation requires the System Administrator role in ECS.
The VDC must be connected to the Dell Secure Connect gateway.
Steps
1. In the ECS Portal, select Settings > ESRS.
2. On the EMC Secure Remote Services Management page, click Disable in the Actions column beside the ESRS server
for which you want to temporarily disable call home alerts.
The ESRS server status displays as Disabled in the Status column.
Alert policy
Alert policies are created to alert about metrics, and are triggered when the specified conditions are met. Alert policies are
created per VDC.
You can use the Settings > Alerts Policy page to view alert policies.
There are two types of alert policy:
System alert policies
● System alert policies are precreated and exist in ECS during deployment.
● All the metrics have an associated system alert policy.
● System alert policies cannot be updated or deleted.
● System alert policies can be enabled/disabled.
● Alerts are sent to the UI and all channels (SNMP, SYSLOG, and Secure Remote Services).
User-defined alert policies
● You can create user-defined alert policies for the required metrics.
● Alerts are sent to the UI and customer channels (SNMP and SYSLOG).
Steps
1. Select New Alert Policy.
2. Enter a unique policy name.
3. Use the metric type drop-down menu to select a metric type.
Metric Type is a grouping of statistics. It consists of:
● Btree Statistics
● CAS GC Statistics
● Geo Replication Statistics
● Garbage Collection Statistics
● EKM
4. Use the metric name drop-down menu to select a metric name.
5. Select level.
a. To inspect metrics at the node level, select Node.
b. To inspect metrics at the VDC level, select VDC.
6. Select polling interval.
Polling Interval determines how frequently data is checked. Each polling interval yields one data point, which is
compared against the specified condition; when the condition is met, an alert is triggered.
7. Select instances.
Instances describe how many data points to check and how many of them must match the specified conditions to trigger an
alert. For metrics where historical data is not available, only the latest data is used.
8. Select conditions.
You can set the threshold values and alert type with Conditions.
An alert can be a Warning, Error, or Critical alert.
9. To add more conditions with multiple thresholds and with different alert levels, select Add Condition.
10. Click Save.
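The polling interval, instances, and conditions described in the steps above interact as sketched below; function and parameter names are illustrative, not the ECS implementation:

```python
def alert_triggered(data_points, threshold, instances, matches_required):
    """Inspect the most recent `instances` data points (one per polling
    interval) and trigger when at least `matches_required` of them
    exceed `threshold`.

    When less history is available, only the latest data points are used.
    """
    window = data_points[-instances:]
    matched = sum(1 for value in window if value > threshold)
    return matched >= matches_required
```

For example, with `instances=3` and `matches_required=2`, an alert fires only when at least two of the last three polled values exceed the threshold.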
SNMP servers
Simple Network Management Protocol (SNMP) servers, also known as SNMP agents, provide data about network managed
device status and statistics to SNMP Network Management Station clients.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Settings > Event Notification.
On the Event Notification page, the SNMP tab displays by default and lists the SNMP servers that have been added to
ECS.
2. To add an SNMP server target, click New Target.
The New SNMP Target page is displayed.
3. On the New SNMP Target page, complete the following steps:
a. In the FQDN/IP field, type the Fully Qualified Domain Name or IP address for the SNMP v2c trap recipient node that
runs the snmptrapd server.
b. In the Port field, type the port number of the SNMP v2c snmptrapd running on the Network Management Station
clients.
The default port number is 162.
c. In the Version field, select SNMPv2.
d. In the Community Name field, type the SNMP community name.
Both the SNMP server and any Network Management Station clients that access it must use the same community name
to ensure authentic SNMP message traffic, as defined by the standards in RFC 1157 and RFC 3584.
The default community name is public.
4. Click Save.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Settings > Event Notification.
f. In the Privacy box, click Enabled if you want to enable Data Encryption Standard (DES) (56-bit) or Advanced
Encryption Standard (AES) (128-bit, 192-bit or 256-bit) encryption for all SNMPv3 data transmissions, and do the
following:
● In the Privacy Protocol field, select DES, AES128, AES192, or AES256.
This is the cryptographic protocol to use in encrypting all traffic between SNMP servers and SNMP Network
Management Station clients. The default is DES.
● In the Privacy Passphrase field, type the string to use in the encryption algorithm as a secret key for encryption
between SNMPv3 USM standard hosts.
The length of this key must be 16 octets for DES and longer for the AES protocols.
4. Click Save.
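The passphrase length rule in step f can be checked up front. The 16-octet DES requirement is stated above; for AES, the guide says only "longer", so this sketch requires AES passphrases to exceed 16 octets without assuming exact minimums:

```python
def privacy_passphrase_ok(protocol: str, passphrase: str) -> bool:
    """Validate the Privacy Passphrase length for SNMPv3 encryption."""
    octets = len(passphrase.encode("utf-8"))
    if protocol == "DES":
        return octets == 16      # must be 16 octets for DES
    if protocol in ("AES128", "AES192", "AES256"):
        return octets > 16       # "longer for the AES protocols"
    raise ValueError(f"unknown privacy protocol: {protocol}")
```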
Results
When you create the first SNMPv3 configuration, the ECS system creates an SNMP Engine ID to use for SNMPv3 traffic. The
Event Notification page displays that SNMP Engine ID in the Engine ID field. You could instead obtain an Engine ID from a
Network Monitoring tool and specify that Engine ID in the Engine ID field. The important issue is that the SNMP server and any
SNMP Network Management Station clients that have to communicate with it using SNMPv3 traffic must use the same SNMP
Engine ID in that traffic.
NOTE: Get the Engine ID from the SNMPv3 server and specify the Engine ID in the Engine ID field in the ECS UI.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Settings > Event Notification.
On the Event Notification page, the SNMP tab displays by default and lists the SNMP servers that have been added to
ECS.
ECS provides support for Simple Network Management Protocol (SNMP) data collection, queries, and MIBs in the following
ways:
● During the ECS installation process, your customer support representative can configure and start an snmpd server
to support specific monitoring of ECS node-level metrics. A Network Management Station client can query these kernel-
level snmpd servers to gather information about memory and CPU usage from the ECS nodes, as defined by standard
Management Information Bases (MIBs). For the list of MIBs for which ECS supports SNMP queries, see SNMP MIBs
supported for querying in ECS.
● The ECS fabric life cycle layer includes an snmp4j library which acts as an SNMP server to generate SNMPv2 traps and
SNMPv3 traps and send them to as many as ten SNMP trap recipient Network Management Station clients. For details of
the MIBs for which ECS supports as SNMP traps, see ECS-MIB SNMP Object ID hierarchy and MIB definition. You can add
the SNMP trap recipient servers by using the Event Notification page in the ECS Portal. For more information, see Add an
SNMPv2 trap recipient and Add an SNMPv3 trap recipient.
emc.............................1.3.6.1.4.1.1139
ecs.........................1.3.6.1.4.1.1139.102
trapAlarmNotification...1.3.6.1.4.1.1139.102.1.1
notifyTimestamp.....1.3.6.1.4.1.1139.102.0.1.1
notifySeverity......1.3.6.1.4.1.1139.102.0.1.2
notifyType..........1.3.6.1.4.1.1139.102.0.1.3
notifyDescription...1.3.6.1.4.1.1139.102.0.1.4
You can download the ECS-MIB definition (as the file ECS-MIB-v2.mib) from the Support Site in the Downloads section
under Add-Ons. The following Management Information Base syntax defines the SNMP enterprise MIB named ECS-MIB:
notifyTimestamp OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The timestamp of the notification"
::= { genericNotify 1 }
notifySeverity OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "The severity level of the event"
::= { genericNotify 2 }
notifyType OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "A type of the event"
::= { genericNotify 3 }
notifyDescription OBJECT-TYPE
SYNTAX OCTET STRING
MAX-ACCESS read-only
STATUS current
DESCRIPTION "A complete description of the event"
::= { genericNotify 4 }
trapAlarmNotification NOTIFICATION-TYPE
OBJECTS {
notifyTimestamp,
notifySeverity,
notifyType,
notifyDescription
}
STATUS current
DESCRIPTION "This trap identifies a problem on the ECS. The description can be
used to describe the nature of the change"
::= { notificationTrap 1 }
END
Trap messages that are formulated in response to a Disk Failure alert are sent to the ECS Portal Monitor > Events >
Alerts page in the format Disk {diskSerialNumber} on node {fqdn} has failed:
Trap messages that are formulated in response to a Disk Back Up alert are sent to the ECS Portal Monitor > Events >
Alerts page in the format Disk {diskSerialNumber} on node {fqdn} was revived:
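The two disk trap message formats above can be reproduced with a small formatter (a sketch; the internal ECS formatting code is not shown in this guide):

```python
def disk_trap_message(disk_serial_number: str, fqdn: str, event: str) -> str:
    """Format the Disk Failure / Disk Back Up trap messages.

    event is "failed" or "revived", matching the two formats above.
    """
    verb = {"failed": "has failed", "revived": "was revived"}[event]
    return f"Disk {disk_serial_number} on node {fqdn} {verb}"
```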
Syslog servers
Syslog servers provide a method for centralized storage and retrieval of system log messages. ECS supports forwarding of
alerts and audit messages to remote syslog servers, and supports operations using the following application protocols:
● BSD Syslog
● Structured Syslog
Alerts and audit messages that are sent to Syslog servers are also displayed on the ECS Portal, with the exception of OS level
Syslog messages (such as node SSH login messages), which are sent only to Syslog servers and not displayed in the ECS Portal.
Once you add a Syslog server, ECS initiates a syslog container on each node. The message traffic occurs over either TCP or the
default UDP.
ECS sends Audit log messages to Syslog servers, including the severity level, using the following format:
${serviceType} ${eventType} ${namespace} ${userId} ${message}
ECS sends Alert logs to Syslog servers using the same severity as appears in the ECS Portal, using the following format:
${alertType} ${symptomCode} ${namespace} ${message}
ECS sends Fabric alerts using the following format:
Fabric {symptomCode} "{description}"
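The three message formats above can be sketched as simple formatters (illustrative only; the field values in the examples are hypothetical):

```python
def audit_message(service_type, event_type, namespace, user_id, message):
    # ${serviceType} ${eventType} ${namespace} ${userId} ${message}
    return f"{service_type} {event_type} {namespace} {user_id} {message}"

def alert_log_message(alert_type, symptom_code, namespace, message):
    # ${alertType} ${symptomCode} ${namespace} ${message}
    return f"{alert_type} {symptom_code} {namespace} {message}"

def fabric_alert_message(symptom_code, description):
    # Fabric {symptomCode} "{description}"
    return f'Fabric {symptom_code} "{description}"'
```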
Starting with ECS 3.1, ECS forwards only the following OS logs to Syslog servers:
● External SSH messages
● All sudo messages with Info severity and higher
● All messages from the auth facility with Warning severity and higher, which are security-related and authorization-related
messages
Prerequisites
● This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Settings > Event Notification.
2. On the Event Notification page, click the Syslog tab.
This page lists the Syslog servers that have been added to ECS and allows you to configure new Syslog servers.
3. To add a Syslog server, click New Server.
The New Syslog Server page is displayed.
For each facility, you can filter by severity level by using the following format:
facility-keyword.severity-keyword
Severity keywords are described in the following table.
Steps
1. You can configure the /etc/rsyslog.conf file in the following ways:
a. To receive incoming ECS messages from all facilities and all severity levels, use this configuration and specify the
complete path and name of your target log file:
*.* /var/log/ecs-messages.all
b. To receive all fabric alerts, object alerts and object audits, use this configuration with the full path and name of your
target log file:
user.*,local0.* /var/log/ecs-fabric-object.all
c. To receive all fabric alerts, object alerts and object audits, and limit auth facility messages to warning severity and above,
use this configuration with the full path and name of your target log file:
user.*,local0.* /var/log/ecs-fabric-object.all
auth.warn /var/log/ecs-auth-messages.warn
auth.info /var/log/ecs-auth-info.log
auth.warn /var/log/ecs-auth-warn.log
auth.err /var/log/ecs-auth-error.log
2. After any modification of the configuration file, restart the Syslog service on the Syslog server:
Platform locking
You can use the ECS Portal to lock remote access to nodes.
ECS can be accessed through the ECS Portal or the ECS Management REST API by management users assigned administration
roles. ECS can also be accessed at the node level by a privileged default node user, named admin, that is created during the
initial ECS install. This default node user can perform service procedures on the nodes and has access:
● By directly connecting to a node through the management switch with a service laptop and using SSH or the CLI to directly
access the operating system of the node.
● By remotely connecting to a node over the network using SSH or the CLI to directly access the node's operating system.
For more information about the default admin node-level user, see the ECS Security Configuration and Hardening Guide.
Node locking provides a layer of security against remote node access. Without node locking, the admin node-level user can
remotely access nodes at any time to collect data, configure hardware, and run Linux commands. If all the nodes in a cluster are
locked, then remote access can be planned and scheduled for a defined window to minimize the opportunity for unauthorized
activity.
You can lock selected nodes in a cluster or all the nodes in the cluster by using the ECS Portal or the ECS Management REST
API. Locking affects only the ability to remotely access (SSH to) the locked nodes. Locking does not change the way the ECS
Portal and the ECS Management REST APIs access nodes, and it does not affect the ability to directly connect to a node
through the management switch.
Prerequisites
This operation requires the Security Administrator role assigned to the emcsecurity user in ECS.
Steps
1. Log in as the emcsecurity user.
For the initial login for this user, you are prompted to change the password and log back in.
2. In the ECS Portal, select Settings > Platform Locking.
The Platform Locking page lists the nodes in the cluster and displays the lock status.
The node states are:
● Unlocked: Displays an open green lock icon and the Lock action button.
● Locked: Displays a closed red lock icon and the Unlock action button.
● Offline: Displays a circle-with-slash icon but no action button because the node is unreachable and the lock state cannot
be determined.
3. Perform any of the following steps.
a. Click Lock in the Actions column beside the node that you want to lock.
Any user who is remotely logged in by SSH or CLI has approximately five minutes to exit before their session is
terminated. An impending shutdown message appears on the user's terminal screen.
b. Click Unlock in the Actions column beside the node that you want to unlock.
The admin default node user can remotely log in to the node after a few minutes.
c. Click Lock the VDC if you want to lock all unlocked, online nodes in the VDC.
This action does not set a persistent state; a new or offline node is not automatically locked once it is detected.
Lock and unlock nodes using the ECS Management REST API
You can use the following APIs to manage node locks.
Table 53. ECS Management REST API calls for managing node locking
Resource Description
GET /vdc/nodes Gets the data nodes that are configured in the cluster.
GET /vdc/lockdown Gets the locked or unlocked status of a VDC.
PUT /vdc/lockdown Sets the locked or unlocked status of a VDC.
PUT /vdc/nodes/{nodeName}/lockdown Sets the locked or unlocked status of a node.
GET /vdc/nodes/{nodeName}/lockdown Gets the locked or unlocked status of a node.
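The calls in Table 53 can be driven from a script. The sketch below only builds the method, URL, and body for each call; the authentication token handling and the exact "true"/"false" PUT payload are assumptions not specified by the table:

```python
from typing import Optional, Tuple

def lockdown_request(base_url: str,
                     node_name: Optional[str] = None,
                     lock: Optional[bool] = None) -> Tuple[str, str, Optional[str]]:
    """Build (method, url, body) for the VDC/node lockdown API.

    node_name=None targets the whole VDC; lock=None means a GET that
    reads the current status instead of a PUT that sets it.
    """
    path = "/vdc/lockdown" if node_name is None else f"/vdc/nodes/{node_name}/lockdown"
    if lock is None:
        return ("GET", base_url + path, None)
    return ("PUT", base_url + path, "true" if lock else "false")
```

For example, `lockdown_request("https://ecs.example.com:4443", "node1", lock=True)` yields a PUT against `/vdc/nodes/node1/lockdown`.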
Prerequisites
To obtain the license file, you must have the License Authorization Code (LAC), which was emailed from Dell EMC. If you have
not received the LAC, contact your customer support representative.
Steps
1. Go to the license page at: https://support.emc.com/servicecenter/license/
2. From the list of products, select ECS Appliance.
3. Click Activate My Software.
4. On the Activate page, enter the LAC code, and then click Activate.
5. Select the feature to activate, and then click Start Activation Process.
6. Select Add a Machine to specify any meaningful string for grouping licenses.
For the machine name, enter any string that helps you to keep track of your licenses. (It does not have to be a machine
name.)
7. Enter the quantities for each feature, or select Activate All, and then click Next.
8. Optionally, specify an addressee to receive an email summary of the activation transaction.
9. Click Finish.
10. Click Save to File to save the license file (.lic) to a folder on your computer.
Prerequisites
● This operation requires the System Administrator role in ECS.
● Ensure that you have a valid license file. You can follow the instructions that are provided in Obtain the Dell EMC ECS
license file to obtain a license.
Steps
1. In the ECS Portal, select Settings > Licensing.
2. On the Licensing page, in the Upload a New License File field, click Browse to go to your local copy of the license file.
3. Click Upload to add the license.
The license features and associated information are displayed in the list of licensed features.
Security
You can use the ECS Portal to change your password, set password rules, manage user sessions, and set user agreement text.
ECS logs an audit event when a password change fails for reasons such as connection disruption or a forgotten password.
You can see an audit log entry for failed password changes.
● Password
● Password Rules
● Sessions
● User Agreement
Password
This section describes how a user with System Administrator or System Monitor role can change their own password.
Prerequisites
This operation requires either System Administrator role or System Monitor role in ECS.
NOTE:
● Users with either the System Administrator or the System Monitor role can change their password.
● To change the password, users must provide the old password.
Steps
1. In the ECS Portal, select Settings > Security > Password.
The Password page opens.
Password Change
This section describes how a user with the System Administrator role can change their own password, and also the passwords of other users.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Manage > Users > Management Users > Select User > Edit.
The Edit Management User <user_name> page opens.
2. Enter the new password in the Password field, and then enter it again in the Confirm Password field.
3. Click Save.
Password Rules
You can use the ECS Portal to set password rules.
Prerequisites
This operation requires the System or Security Administrator role in ECS.
NOTE: When a user is locked out due to password expiration or failed login attempts, a Security Administrator can unlock the
user from Manage > Users > Management Users using the Unlock action.
Steps
1. In the ECS Portal, select Settings > Security > Password Rules.
2. Enter the values for all the fields in the Password Rules page.
3. Click Save.
Sessions
You can use the ECS Portal to manage user sessions.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, select Settings > Security > Sessions.
2. Enter the values for all the fields in the Sessions page.
3. Click Save.
User Agreement
You can use the ECS Portal to define the user agreement text.
Prerequisites
This operation requires the System Administrator role in ECS.
Steps
1. In the ECS Portal, choose Settings > Security > User Agreement.
2. Enter the agreement text in the Agreement text field or click UPLOAD.TXT FILE to upload text from a text file.
You can review and modify the agreement text before you proceed.
3. Click Save.
Steps
1. In the ECS Portal, select Settings > About this VDC.
Default values
● The threshold is 50,000 versions of an object.
● Alerts are enabled by default.
● The threshold is disabled during an upgrade.
● The threshold is enabled during a new installation.
● Default values can be modified.
NOTE: For more information about TSO and PSO behavior, see the ECS High Availability Design white paper.
NOTE: Best practices for administrators to consider while configuring network bandwidth for ECS replication:
● For most scenarios, ECS replicates the same amount of data that it ingests. As a best practice, the replication bandwidth
allocation should be at least equal to or greater than the frontend ingest rate.
● In full replication mode, the required bandwidth also depends on the number of VDCs in the full replication
RG. For example, if a user has four VDCs in an RG, the required replication network bandwidth is three times the
frontend ingest rate.
● The network that is used for replication must be stable under high-utilization scenarios. It should avoid or account for
additional load, such as load from a firewall.
● When a failure scenario such as a PSO, TSO, or VDC extend operation occurs, a backlog can be generated. This increases
the bandwidth that replication requires to catch up and clear the backlog. Administrators must account for these situations
to avoid network saturation or, in the worst case, network failure. Best practices to
consider:
○ Use a third-party QoS method to throttle the network used by replication.
○ Discuss options with the Dell Service provider, to tune the system based on your network situation.
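The full-replication sizing guidance above reduces to simple arithmetic; this helper is illustrative only:

```python
def required_replication_bandwidth(ingest_rate: float, num_vdcs: int) -> float:
    """In full replication mode, each VDC replicates its ingest to every
    other VDC in the replication group, so the required replication
    bandwidth is (num_vdcs - 1) times the frontend ingest rate."""
    if num_vdcs < 2:
        return 0.0  # a single VDC has nothing to replicate to
    return ingest_rate * (num_vdcs - 1)
```

With four VDCs in the RG and a 1 GB/s ingest rate, this gives 3 GB/s of replication bandwidth, matching the example above.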
ECS recovery and data balancing behavior are described in these topics:
● Recovery on disk and node failures
● Data rebalancing after adding new nodes
The following figure shows a nonowner site that is marked as TSO with the ADO setting turned off for the bucket. When an
application tries to access the primary copy at the owner site, the read/write request made to the owner site will be successful.
A read/write request made from an application connected to the nonowner site fails.
Figure 11. Read/write request succeeds during TSO when data is accessed from owner site and non-owner site is
unavailable
Figure 12. Read/write request succeeds during TSO when ADO-enabled data is accessed from non-owner site and
owner site is unavailable
The ECS system operates under the eventual consistency model during a TSO with ADO turned on for buckets. When a change
is made to an object at one site, it will be eventually consistent across all copies of that object at other sites. Until enough time
elapses to replicate the change to other sites, the value might be inconsistent across multiple copies of the data at a particular
point in time.
An important factor to consider is that turning the ADO setting on for buckets has performance consequences; ADO-enabled
buckets have slower read/write performance than buckets with the ADO turned off. The performance difference is because
when ADO is turned on for a bucket, ECS must first resolve object ownership to provide strong consistency when all sites
become available after a TSO. When ADO is turned off for a bucket, ECS does not have to resolve object ownership because
the bucket does not enable change of object ownership during a TSO.
The benefit of the ADO setting is that it enables you to access data during temporary site outages. The disadvantage is that the
data returned may be outdated and read/write performance on ADO buckets will be slower.
In ECS 3.8, you can enable the ADO setting in Object Lock-enabled buckets if you have System Administrator privileges and
are aware of the data loss risks.
By default, the ADO setting is turned off because there is a risk that object data retrieved during a TSO is not the most recent.
TSO behavior with the ADO bucket setting that is turned on is described for the following ECS system configurations:
● Two-site geo-federated deployment with ADO-enabled buckets
● Three-site active federated deployment with ADO-enabled buckets
● Three-site passive federated deployment with ADO-enabled buckets
When an application is connected to a nonowner site, and it modifies an object within an ADO-enabled bucket during a network
outage, ECS transfers ownership of the object to the site where the object was modified.
The following figure shows how a write to a nonowner site causes the nonowner site to take ownership of the object during a
TSO in a two-site geo-federated deployment. This functionality allows applications connected to each site to continue to read
and write objects from buckets in a shared namespace.
When the same object is modified in both Site A and Site B during a TSO, the copy on the nonowner site is the authoritative
copy. When an object that is owned by Site B is modified in both Site A and Site B during a network outage, the copy on Site A
is the authoritative copy that is kept, and the other copy is overwritten.
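The reconciliation rule above can be sketched as a small decision function. This is an illustrative sketch only, not ECS source code; the function name and site labels are hypothetical:

```python
def authoritative_copy(owner_site: str, modified_sites: set) -> str:
    """Return the site whose copy of an object wins after a TSO.

    Per the ADO reconciliation rule: if a nonowner site modified the
    object during the outage, its copy is authoritative and that site
    becomes the new owner; otherwise the owner site's copy is kept.
    """
    nonowner_writers = modified_sites - {owner_site}
    if nonowner_writers:
        # In a two-site federation there is at most one nonowner site.
        return nonowner_writers.pop()
    return owner_site

# Object owned by Site B is modified in both Site A and Site B during
# the outage: the copy on Site A (the nonowner) is kept, and Site A
# becomes the owner.
print(authoritative_copy("Site B", {"Site A", "Site B"}))  # prints "Site A"
```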
When network connectivity between two sites is restored, the heartbeat mechanism automatically detects connectivity,
restores service, and reconciles objects from the two sites. This synchronization operation is done in the background and
can be monitored on the Monitor > Recovery Status page in the ECS Portal.
Figure 13. Object ownership example for a write during a TSO in a two-site federation (after the TSO, Site A rejoins the federation and object versions are reconciled; primary copies of the two unchanged objects still reside in Site A, which remains their owner site, while the primary copy of the updated Word document now exists in Site B, which becomes its owner site)
When more than two sites are part of a replication group, and if network connectivity is interrupted between one site and the
other two, write, update, or ownership operations continue as they would with two sites, but the process for responding to read
requests is more complex.
If an application requests an object that is owned by a site that is not reachable, ECS sends the request to the site with
the secondary copy of the object. The secondary copy might have been subject to a data contraction operation, which is an
XOR between two different datasets that produces a new dataset. The site with the secondary copy must retrieve the chunks
of the object that is in the original XOR operation, and it must XOR those chunks with the recovery copy. This operation
returns the contents of the chunk that is originally stored on the owner site. Then the chunks from the recovered object can
be reassembled and returned. When the chunks are reconstructed, they are also cached so that the site can respond more
quickly to subsequent requests. Reconstruction is time consuming. More sites in a replication group imply more chunks that
must be retrieved from other sites, and hence reconstructing the object takes longer. The following figure shows the process
for responding to read requests in a three-site federation.
Figure 14. Read request workflow example during a TSO in a three-site federation. In the figure, Site A is the owner site and is in TSO; Sites B and C are nonowner sites. 1. The application issues a read request for an MP4 object. 2. The load balancer routes the request to one of the sites that is up, Site C. 3. Site C routes the request to Site B, where the secondary copy resides; it is an XOR copy, so it must be reconstructed. 4. The XOR chunks are retrieved. 5. The XOR chunks are used to reconstruct the secondary copy of the object. 6. The read request completes successfully.
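The XOR recovery described above relies on the property that XORing the parity chunk with one surviving chunk yields the other chunk. A minimal Python sketch of the idea (illustrative only, not ECS code; real chunks are fixed-size binary blocks):

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Two data chunks of equal length, one stored at each owner site.
chunk_site_a = b"chunk stored on A"
chunk_site_b = b"chunk stored on B"

# The recovery site stores only the XOR (parity) of the two chunks.
parity = xor_bytes(chunk_site_a, chunk_site_b)

# If Site A is unreachable, its chunk is recovered by XORing the
# parity with the surviving chunk from Site B.
recovered = xor_bytes(parity, chunk_site_b)
print(recovered == chunk_site_a)  # prints True
```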
When ECS is deployed in a three-site passive configuration, the TSO behavior is the same as described in Three-site active
federated deployment with ADO-enabled buckets, with one difference. If a network connection fails between an active site and
the passive site, ECS always marks the passive site as TSO (not the active site).
When the network connection fails between the two active sites, the following normal TSO behavior occurs:
1. ECS marks one of the active sites as TSO (unavailable), for example, owner Site B.
2. Read/write/update requests are rendered from the site that is up (Site A).
3. For a read request, Site A requests the object from the passive site (Site C).
4. Site C decodes (undo XOR) the XOR chunks and sends to Site A.
5. Site A reconstructs a copy of the object to honor the read request.
6. If there is a write or update request, Site A becomes the owner of the object and keeps the ownership after the outage.
The following figure shows a passive configuration in a normal state; users can read and write to active Sites A and B and the
data and metadata is replicated one way to the passive Site C. Site C XORs the data from the active sites.
Figure 15. Three-site passive configuration in a normal state (chunk 1 on Site A and chunk 2 on Site B are replicated to the passive Site C, which stores chunk 3 = chunk 1 XOR chunk 2 in the ADO-enabled bucket)
The following figure shows the workflow for a write request that is made during a TSO in a three-site passive configuration.
Figure 16. Write request workflow example during a TSO in a three-site passive configuration (the network connection between the active Sites A and B is down and Site B is marked as TSO; the application issues a write request for an MP4 object to the ADO-enabled bucket)
In ECS 3.8, with system administrator privileges, you can explicitly enable ADO and Object Lock together in a bucket. When Object Lock and ADO are enabled together, there is a risk of losing locked versions during a TSO. For this reason, setting Object Lock on ADO buckets is denied by default; the two features can coexist only when you acknowledge the risk of losing locked versions during a TSO and still choose to allow the combination.
This scenario can be considered for three types of buckets:
1. Non-ADO
2. ADO read-only (RO)
3. ADO read/write (RW)
Object Lock is allowed in read-only (RO) ADO buckets. Data cannot be lost, but it can be unavailable or inconsistent during a TSO; a locked version, or an update to a locked version, might not be visible. For example, suppose the first version of an object is written and then updated to have a lock, and then a second version is created with a lock. If the second version is not replicated before a temporary site outage, data could be unavailable or inconsistent.
Object Lock is not allowed on ADO buckets by default. In ECS 3.8, you see an error if you try to enable ADO and Object Lock
together: "Object Lock has been disabled for use with ADO enabled buckets, consult your local systems administrator or data
access guide for information to enable."
The error occurs in the following three situations when a user tries to enable ADO and Object Lock together:
● Create a bucket with both options set to true in the request.
● Enable ADO in an Object Lock-enabled bucket.
● Enable Object Lock in an ADO-enabled bucket.
This forces you to accept the risks of losing locked data before enabling both features together. With system administrator privileges, you can enable ADO and Object Lock together through the Management API. The following example scenario, presented as a series of figures, illustrates the data loss possibilities when ADO and Object Lock are enabled together. The first figure shows the behavior of a bucket that is enabled with both ADO and Object Lock when the network connection is normal: the first version of the object is created and locked in Zone 1, and is then replicated to Zone 2.
Figure 17. ADO and Object Lock enabled bucket where the object is replicated from Zone 1 to Zone 2 during normal
network connection
The following figure shows the behavior of the ADO and Object Lock enabled bucket as time elapses. A new version of the object is created in Zone 1 and is replicated to Zone 2. The new version (V2) is then locked in Zone 1; however, the lock has not yet been replicated to Zone 2.
During an outage, the connection between the two zones is lost. The following figure shows that during the outage Zone 1 is inaccessible. Because ADO is enabled, the unlocked version of the object can be accessed and modified from Zone 2, which may lead to the creation of a new version (V3) in Zone 2.
Figure 19. ADO and Object Lock enabled bucket during TSO when the unlocked version of the object is overwritten
in Zone 2
The following figure shows the state of the bucket enabled with ADO and Object Lock after the normal connection is restored and the data is reconciled between the linked zones: V3 is replicated from Zone 2 to Zone 1, and V2 is lost because its lock was not replicated before the outage.
Figure 20. ADO and Object Lock enabled bucket after TSO where the version updated in Zone 2 during the outage
will be replicated to Zone 1
Table 59. Example scenario where locked data can be lost in a TSO

Network connectivity status | Activity in the bucket | Data available in Zone 1 | Data available in Zone 2
Normal | A new version of an object is created and locked. | Version 1 (Governance) | -
Normal | The new version is shipped to Zone 2. | Version 1 (Governance) | Version 1 (Governance)
Normal | Version 2 of the object is created and shipped to Zone 2. | Version 1 (Governance), Version 2 | Version 1 (Governance), Version 2
TSO considerations
You can perform many object operations during a TSO. You cannot perform create, delete, or update operations on the following
entities at any site in the geo-federation until the temporary failure is resolved, regardless of the ADO bucket setting:
● Namespaces
● Buckets
● Object users
● Authentication providers
● Replication groups (you can remove a VDC from a replication group for a site failover)
● NFS user and group mappings
The following limitations apply to buckets during a TSO:
● File systems within file system-enabled (NFS) buckets that are owned by the unavailable site are read-only.
● When you copy an object from a bucket owned by the unavailable site, the copy is a full copy of the source object. This
means that the same object's data is stored more than once. Under normal non-TSO circumstances, the object copy consists
of the data indexes of the object, not a full duplicate of the object's data.
Disk health
ECS reports disk health as Good, Suspect, or Bad.
● Good: The partitions of the disk can be read from and written to.
● Suspect: The disk has not yet met the threshold to be considered bad.
● Bad: A certain threshold of declining hardware performance has been met. When met, no data can be read or written.
ECS writes only to disks in good health. ECS does not write to disks in suspect or bad health. ECS reads from good disks and
suspect disks. When two of an object’s chunks are located on suspect disks, ECS writes the chunks to other nodes.
Node health
ECS reports node health as Good, Suspect, or Bad.
● Good: The node is available and responding to I/O requests in a timely manner.
● Suspect: The node has been unavailable for more than 30 minutes.
● Bad: The node has been unavailable for more than an hour.
ECS writes to reachable nodes regardless of the node health state. When two of an object's chunks are located on suspect nodes, ECS writes new copies of those chunks to other nodes.
Advanced Monitoring
Advanced Monitoring dashboards provide critical information about the ECS processes on the VDC you are logged in to.
The advanced monitoring dashboards are based on a time series database and are provided by Grafana, a well-known open-source time series analytics platform.
See the Grafana documentation for basic details of navigating Grafana dashboards.
● View Advanced Monitoring Dashboards
● Share Advanced Monitoring Dashboards
Disk Bandwidth - Overview: You can use the Disk Bandwidth - Overview dashboard to monitor the disk usage metrics by read or write operations at the VDC level.
NOTE: In the Disk Bandwidth - Overview dashboard, the consistency checker metric shows data only for read, not for write, because write is irrelevant to it.

Dashboard | Metric | Description
Disk Bandwidth - by Nodes; Disk Bandwidth - Overview | Erasure Encoding | Rate at which disk bandwidth is used in system erasure coding operations.
Disk Bandwidth - by Nodes; Disk Bandwidth - Overview | XOR | Rate at which disk bandwidth is used in the XOR data protection operations of the system. XOR operations occur for systems with three or more sites (VDCs).
Disk Bandwidth - by Nodes; Disk Bandwidth - Overview | Consistency Checker | Rate at which disk bandwidth is used to check for inconsistencies between protected data and its replicas.
Disk Bandwidth - by Nodes; Disk Bandwidth - Overview | User Traffic | Rate at which disk bandwidth is used by object users.
Node Rebalancing | Data Rebalanced | Amount of data that has been rebalanced.
Node Rebalancing | Pending Rebalancing | Amount of data that is in the rebalance queue but has not been rebalanced yet.
Node Rebalancing | Rate of Rebalance (per day) | The incremental amount of data that was rebalanced during a specific time period. The default time period is one day.
Process Health - Process List by Node | Process Restarts | The last time the process restarted on the node in the selected time range. The maximum time range could be 5 days because it is limited by the retention policy.
Process Health - Overview | Avg. NIC Bandwidth | Average bandwidth of the network interface controller hardware that is used by the selected VDC or node.
Process Health - Process List by Node | NIC Bandwidth | Bandwidth of the network interface controller hardware that is used by the selected VDC or node.
Process Health - Overview | Avg. CPU Usage | Average percentage of the CPU hardware that is used by the selected VDC or node.
Process Health - Overview | Avg. Memory Usage | Average usage of the aggregate memory available to the VDC or node.
Process Health - by Nodes; Process Health - Overview | Relative NIC (%) | Percentage of the available bandwidth of the network interface controller hardware that is used by the selected VDC or node.
Process Health - by Nodes; Process Health - Overview; Process Health - Process List by Node | Relative Memory (%) | Percentage of the memory used relative to the memory available to the selected VDC or node.
Process Health - by Nodes; Process Health - Process List by Node | CPU Usage | Percentage of the node's CPU used by the process. The list of processes that are tracked is not the complete list of processes running on the node. The sum of the CPU used by the processes is not equal to the CPU usage shown for the node.
Process Health - by Nodes | Memory Usage | The memory used by the process.
Process Health - by Nodes; Process Health - Overview; Process Health - Process List by Node | Relative Memory (%) | Percentage of the memory used relative to the memory available to the process.
Process Health - Process List by Node | Avg. # Thread | Average number of threads used by the process.
Process Health - Process List by Node | Last Restart | The last time the process restarted on the node.
Process Health - by Nodes | Host | -
Process Health - Process List by Node | Process | -
Recovery Status | Amount of Data to be Recovered | With the Current filter selected, this is the logical size of the data yet to be recovered.
View mode
Steps
1. To view a dashboard in view mode, click the title of a dashboard, for example TPS (success/failure), and then click View.
The dashboard opens in view mode or in full-screen mode.
2. Click the Back to dashboard icon to return to the dashboards view.
Export CSV
Steps
1. To export dashboard data to .csv format, click the title of a dashboard.
2. Navigate to Inspect > Data.
3. Click Download CSV to save the dashboard data to your local storage in .csv format.
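Once downloaded, the CSV can be processed with standard tooling. A minimal Python sketch, assuming hypothetical column names (the actual headers depend on the dashboard panel you export):

```python
import csv
import io

# Sample rows shaped like a dashboard CSV export. The column names
# here are illustrative assumptions, not documented headers.
sample = """Time,TPS (success),TPS (failure)
2024-10-01 00:00:00,120,3
2024-10-01 00:01:00,118,5
"""

reader = csv.DictReader(io.StringIO(sample))
rows = list(reader)

# Find the peak failure rate across the exported time range.
peak_failures = max(int(r["TPS (failure)"]) for r in rows)
print(peak_failures)  # prints 5
```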
In the Data Access Performance - by Namespaces dashboard, you can monitor for namespaces:
● TPS (success/failure)
● Failed Requests/s by error type (user/system)
● Successful Requests/s by Node
● Failed Requests/s by Node
● Compare TPS of successful requests
● Compare TPS of failed requests
To view the Data Access Performance - by Namespaces dashboard in the ECS Portal, select Advanced Monitoring >
Related dashboards > Data Access Performance - by Namespaces.
Data for all the namespaces is visible in the default view. To select a namespace, click the legend parameter for the namespace below the graph.
The requests drill-down by nodes shows the successful and failed requests by node.
Compare: selecting multiple namespaces compares the TPS of successful and failed requests.
In the Data Access Performance - by Nodes dashboard, you can monitor for nodes in a VDC:
● TPS (success/failure)
● Bandwidth (read/write)
● Failed Requests/s by error type (user/system)
● Latency
● Successful Requests/s by Method
● Successful Requests/s by Node
● Successful Requests/s by Protocol
● Failed Requests/s by Method
● Failed Requests/s by Node
● Failed Requests/s by Protocol
● Failed Requests/s by error code
● Compare TPS of successful requests
● Compare TPS of failed requests
● Compare read bandwidth
● Compare write bandwidth
● Compare read latency
● Compare write latency
To view the Data Access Performance - by Nodes dashboard in the ECS Portal, select Advanced Monitoring > Related
dashboards > Data Access Performance - by Nodes.
Data for all the nodes is visible in the default view. To select data for a node, click the legend parameter for the node below the graph.
The successful requests drill-down shows the successful requests by method, node, and protocol.
The failures drill-down shows the failed requests by method, node, protocol, and error code.
In the Data Access Performance - by Protocols dashboard, based on the protocol, you can monitor:
● TPS (success/failure)
● Bandwidth (read/write)
● Failed Requests/s by error type (user/system)
● Latency
● Successful Requests/s by Node
● Failed Requests/s by Node
● Compare TPS of successful requests
● Compare TPS of failed requests
● Compare read bandwidth
● Compare write bandwidth
● Compare read latency
● Compare write latency
To view the Data Access Performance - by Protocols dashboard in the ECS Portal, select Advanced Monitoring > Related
dashboards > Data Access Performance - by Protocols.
Data for all the protocols is visible in the default view. To select data for a protocol, click the legend parameter for the protocol below the graph.
The requests drill-down by nodes shows the successful and failed requests by node.
Compare: selecting multiple protocols compares the TPS of successful and failed requests, read/write bandwidth, and read/write latency.
You can use the Disk Bandwidth - by Nodes dashboard to monitor the disk usage metrics by read or write operations at the
node level. The dashboard displays the latest values.
To view the Disk Bandwidth - by Nodes dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Disk Bandwidth - by Nodes
You can use the Disk Bandwidth - Overview dashboard to monitor the disk usage metrics by read or write operations at the
VDC level.
To view the Disk Bandwidth - Overview dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Disk Bandwidth - Overview
You can use the Node Rebalancing dashboard to monitor the status of data rebalancing operations when nodes are added to,
or removed from, a cluster. Node rebalancing is enabled by default at installation. Contact your customer support representative
to disable or re-enable this feature.
To view the Node Rebalancing dashboard, click Advanced Monitoring > expand Data Access Performance - Overview >
Node Rebalancing
A series of interactive graphs shows the amount of data rebalanced, the amount pending rebalancing, and the rate of rebalancing in bytes over time.
Node rebalancing works only for new nodes that are added to the cluster.
You can use the Process Health - by Nodes dashboard to monitor each node's use of the network interface, CPU, and available memory in the VDC. The dashboard displays the latest values, and the history graphs display values in the selected range.
To view the Process Health - by Nodes dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Process Health - by Nodes
You can use the Process Health - Overview dashboard to monitor the VDC use of network interface, CPU, and available
memory. The dashboard displays the latest average values and the history graphs display values in the selected time range.
To view the Process Health - Overview dashboard, click Advanced Monitoring > expand Data Access Performance -
Overview > Process Health - Overview
You can use the Process Health - Process List by Node dashboard to monitor each process's use of CPU and memory, its average thread count, and its last restart time. The dashboard displays the latest values in the selected time range.
To view the Process Health - Process List by Node dashboard, click Advanced Monitoring > expand Data Access
Performance - Overview > Process Health - Process List by Node
Recovery Status
Top Buckets
ECS includes a metering mechanism that identifies the buckets with the highest utilization, based on total object size and object count.
Statistics for the buckets with top utilization are displayed in the monitoring dashboards. The number of buckets that is displayed on the monitoring dashboard is a configurable value.
To view the Top buckets dashboard, click Advanced Monitoring > expand Data Access Performance - Overview > Top
buckets.
Flux API
The Flux API enables you to retrieve time series database data by sending REST queries, for example with curl. You can get raw data from the fluxd service in a way similar to using the Dashboard API: you must get a token and provide the token in the requests.
Prerequisites
Requires one of the following roles:
● SYSTEM_ADMIN
json:
{
"query": "from(bucket:\"monitoring_main\") |> range(start: -30m) |> filter(fn: (r) =>
r._measurement == \"statDataHead_performance_internal_transactions\")"
}
query=from(bucket: "monitoring_main")
|> range(start: -30m)
|> filter(fn: (r) => r._measurement == "statDataHead_performance_internal_transactions")
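To illustrate how the token and query fit together, here is a hedged Python sketch that builds the JSON payload shown above. The endpoint URL and header names in the comments are placeholders for illustration, not documented values; use the address and token from your ECS deployment:

```python
import json

# Flux query, identical to the one in the JSON body above.
query = (
    'from(bucket:"monitoring_main") '
    '|> range(start: -30m) '
    '|> filter(fn: (r) => r._measurement == '
    '"statDataHead_performance_internal_transactions")'
)
payload = json.dumps({"query": query})

# The payload could then be POSTed with the token in a header, e.g.:
#
#   curl -ks -X POST "https://<node>:<port>/flux/query" \
#        -H "X-SDS-AUTH-TOKEN: <token>" \
#        -H "Content-Type: application/json" \
#        -d "$payload"
#
# (URL path, port, and header are assumptions for illustration.)

print("monitoring_main" in payload)  # prints True
```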
Steps
1. Generate a token.
Token:
CSV example
Database monitoring_main
Performance metrics in this database are raw; each is split by data node, that is, all have host and node_id tags.
Information:
Measurements in this section have the following structure:
Service is the name of the ECS service that produces the measurement, for example blob, cm, georcv, statDataHead.
For example,
blob_IO_Statistics_data_read
cm_IO_Statistics_data_write
Measurement: blob_IO_Statistics_data_read
...
Tags: host, node_id, process, tag
Fields: read_CCTotal (float, bytes)
read_ECTotal (float, bytes)
read_GEOTotal (float, bytes)
read_RECOVERTotal (float, bytes)
read_USERTotal (float, bytes)
Measurement: blob_IO_Statistics_data_write
...
Tags: host, node_id, process, tag
Fields: write_CCTotal (integer)
write_ECTotal (integer)
write_GEOTotal (integer)
write_RECOVERTotal (integer)
write_USERTotal (integer)
write_XORTotal (integer)
Measurement: blob_SSDReadCache_Stats
Tags: host, id, last, node_id, process
Fields: +Inf (integer)
0.0 (integer)
1000.0 (integer)
25000.0 (integer)
5000.0 (integer)
rocksdb_disk_capacity_failure_counter (integer)
rocksdb_disk_usage_counter_bytes (integer)
rocksdb_disk_usage_percentage_counter (integer)
ssd_capacity_counter_bytes (integer)
CM statistics
These statistics represent processes in the ECS CM service, such as BTree GC, chunk management, and erasure coding.
Measurement: cm_BTREE_GC_Statistics
Tags: host, node_id, process, tag
Fields: accumulated_candidate_garbage_btree_gc_level_0 (integer)
accumulated_candidate_garbage_btree_gc_level_1 (integer)
accumulated_detected_data_btree_level_0 (integer)
accumulated_detected_data_btree_level_1 (integer)
accumulated_reclaimed_data_btree_level_0 (integer)
accumulated_reclaimed_data_btree_level_1 (integer)
candidate_chunks_btree_gc_level_0 (integer)
candidate_chunks_btree_gc_level_1 (integer)
candidate_garbage_btree_gc_level_0 (integer)
candidate_garbage_btree_gc_level_1 (integer)
copy_candidate_chunks_btree_gc_level_0 (integer)
copy_candidate_chunks_btree_gc_level_1 (integer)
copy_completed_chunks_btree_gc_level_0 (integer)
copy_completed_chunks_btree_gc_level_1 (integer)
copy_waiting_chunks_btree_gc_level_0 (integer)
copy_waiting_chunks_btree_gc_level_1 (integer)
deleted_chunks_btree_level_0 (integer)
deleted_chunks_btree_level_1 (integer)
deleted_data_btree_level_0 (integer)
deleted_data_btree_level_1 (integer)
full_reclaimable_chunks_btree_gc_level_0 (integer)
full_reclaimable_chunks_btree_gc_level_1 (integer)
reclaimed_data_btree_level_0 (integer)
reclaimed_data_btree_level_1 (integer)
usage_between_0%_and_5%_chunks_btree_gc_level_0 (integer)
usage_between_0%_and_5%_chunks_btree_gc_level_1 (integer)
usage_between_10%_and_15%_chunks_btree_gc_level_0 (integer)
usage_between_10%_and_15%_chunks_btree_gc_level_1 (integer)
usage_between_5%_and_10%_chunks_btree_gc_level_0 (integer)
usage_between_5%_and_10%_chunks_btree_gc_level_1 (integer)
verification_waiting_chunks_btree_gc_level_0 (integer)
verification_waiting_chunks_btree_gc_level_1 (integer)
Measurement: cm_Chunk_Statistics
Tags: host, node_id, process, tag
Fields: chunks_copy (integer)
Measurement: cm_EC_Statistics
Tags: host, node_id, process, tag
Fields: chunks_ec_encoded (integer)
chunks_ec_encoded_alive (integer)
data_ec_encoded (integer)
data_ec_encoded_alive (integer)
Measurement: cm_Geo_Replication_Statistics_Geo_Chunk_Cache
Tags: host, node_id, process, tag
Fields: Capacity_of_Cache (integer)
Number_of_Chunks (integer)
Measurement: cm_REPO_GC_Statistics
Tags: host, node_id, process, tag
Fields: accumulated_deleted_garbage_repo (integer)
accumulated_reclaimed_garbage_repo (integer)
deleted_chunks_repo (integer)
deleted_data_repo (integer)
ec_freed_slots (integer)
full_reclaimable_aligned_chunk (integer)
merge_copy_overhead_in_deleted_data_repo (integer)
merge_copy_overhead_in_reclaimed_data_repo (integer)
reclaimed_chunk_repo (integer)
reclaimed_data_repo (integer)
slots_waiting_shipping (integer)
slots_waiting_verification (integer)
Measurement: cm_Rebalance_Statistics
Tags: host, node_id, process, tag
Fields: bytes_rebalanced (integer)
bytes_rebalancing_failed (integer)
chunks_canceled (integer)
chunks_for_rebalancing (integer)
chunks_rebalanced (integer)
chunks_total (integer)
jobs_canceled (integer)
segments_for_rebalancing (integer)
segments_rebalanced (integer)
segments_rebalancing_failed (integer)
segments_total (integer)
Measurement: cm_Rebalance_Statistics_CoS
Tags: CoS, host, node_id, process, tag
Fields: bytes_rebalanced (integer)
bytes_rebalancing_failed (integer)
chunks_canceled (integer)
chunks_for_rebalancing (integer)
chunks_rebalanced (integer)
chunks_total (integer)
jobs_canceled (integer)
segments_for_rebalancing (integer)
segments_rebalanced (integer)
segments_rebalancing_failed (integer)
segments_total (integer)
Measurement: cm_Recover_Statistics
Tags: host, node_id, process, tag
Fields: chunks_to_recover (integer)
data_recovered (integer)
data_to_recover (integer)
Measurement: cm_Recover_Statistics_CoS
Tags: CoS, host, node_id, process, tag
Fields: chunks_to_recover (integer)
data_recovered (integer)
data_to_recover (integer)
SR statistics
These statistics represent processes in the ECS SR service, which is responsible for space reclamation.
Measurement: sr_REPO_GC_Statistics
Tags: host, node_id, process, tag
Fields: accumulated_merge_copy_overhead_in_full_garbage (integer)
accumulated_total_repo_garbage (integer)
full_reclaimable_repo_chunk (integer)
garbage_in_partial_sr_tasks (integer)
garbage_in_repo_usage (integer)
merge_copy_overhead_in_full_garbage (integer)
merge_way_gc_processed_chunks (integer)
merge_way_gc_src_chunks (integer)
merge_way_gc_targeted_chunks (integer)
merge_way_gc_tasks (integer)
total_repo_garbage (integer)
usage_between_0%_and_33.3%_repo_chunk (integer)
usage_between_33.3%_and_50%_repo_chunk (integer)
usage_between_50%_and_66.7%_repo_chunk (integer)
Measurement: ssm_sstable_SSTable_SS
Tags: SS, SSTable, last, process, tag
Fields: allocatedSpace (integer)
availableFreeSpace (integer)
downDurationTotal (integer)
freeSpace (integer)
largeBlockAllocated (integer)
largeBlockAllocatedSize (integer)
largeBlockFreed (integer)
largeBlockFreedSize (integer)
pendingDurationTotal (integer)
pingerDurationTotal (integer)
smallBlockAllocated (integer)
smallBlockFreed (integer)
smallBlockFreedSize (integer)
smallBlockSize (integer)
state (string)
timeInStateTotal (integer)
totalSpace (integer)
upDurationTotal (integer)
Measurement: ssm_sstable_SSTable_SS_datamigration
Tags: SS, SSTable, last, process
Fields: status (integer)
totalCapacityToMigrate (integer)
Database monitoring_last
Measurement: blob_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)
Measurement: blob_Total_memory_and_disk_cache_size
Tags: Total_memory_and_disk_cache_size, host, last, node_id, process
Fields: Disk_cache_size (integer)
Memory_cache_size (integer)
Measurement: cm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)
Measurement: eventsvc_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)
Measurement: mm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)
Measurement: resource_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)
Measurement: rm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
Measurement: sr_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)
Measurement: sr_Total_memory_and_disk_cache_size
Tags: Total_memory_and_disk_cache_size, host, last, node_id, process
Fields: Disk_cache_size (integer)
Memory_cache_size (integer)
Measurement: ssm_Process_status
Tags: Process_status, host, node_id, process
Fields: MemoryTableFreeSpacePercentagePerMinute (integer)
NumberofWritePageAllocationOutsideWriteCache (integer)
Measurement: dtquery_cmf
Tags: last, process
Fields: com.emc.ecs.chunk.gc.btree.enabled (integer)
com.emc.ecs.chunk.gc.btree.scanner.verification.enabled (integer)
com.emc.ecs.chunk.gc.repo.enabled (integer)
com.emc.ecs.chunk.gc.repo.verification.enabled (integer)
com.emc.ecs.chunk.rebalance.is_enabled (integer)
com.emc.ecs.objectgc.cas.enabled (integer)
com.emc.ecs.sensor.btree_sr_pending_mininum (integer)
com.emc.ecs.sensor.repo_sr_pending_mininum (integer)
Measurement: mm_topn_bucket_by_obj_count_place
Tags: last, place, process, tag
Fields: bucketName (string)
namespace (string)
value (integer)
Measurement: mm_topn_bucket_by_obj_size_place
Tags: last, place, process, tag
Fields: bucketName (string)
namespace (string)
value (integer)
Measurement: vnestStat_membership_ismember
Tags: host, ismember, last, node_id, process
Fields: is_leader (string)
Measurement: vnestStat_performance_latency_type
Tags: host, id, last, node_id, process, type
Fields: +Inf (integer)
0.0 (integer)
1.0 (integer)
7999999.99999999 (integer)
825912.9477680004 (integer)
85266.52466135359 (integer)
8802.840841123942 (integer)
9.686250859269972 (integer)
908.7975284781536 (integer)
93.82345570870827 (integer)
Measurement: vnestStat_performance_transactions_from_type
Database monitoring_op
Information:
Measurements listed in this section are from default Telegraf plugins. Here, the measurement name equals the plugin name. Refer to the plugin documentation for more information; for example, the Telegraf "cpu" plugin has its own page in the Telegraf plugin documentation.
Measurement: cpu
Tags: cpu, host, node_id, tag
Fields: usage_guest (float)
usage_guest_nice (float)
usage_idle (float)
usage_iowait (float)
usage_irq (float)
usage_nice (float)
usage_softirq (float)
usage_steal (float)
usage_system (float)
usage_user (float)
Measurement: disk
Tags: device, fstype, host, mode, node_id, path, tag
Fields: free (integer)
inodes_free (integer)
inodes_total (integer)
inodes_used (integer)
total (integer)
used (integer)
used_percent (float)
Measurement: diskio
Tags: ID_PART_ENTRY_UUID, SCSI_IDENT_SERIAL, SCSI_MODEL, SCSI_REVISION, SCSI_VENDOR,
host, name, node_id, tag
Fields: io_time (integer)
iops_in_progress (integer)
read_bytes (integer)
read_time (integer)
reads (integer)
weighted_io_time (integer)
write_bytes (integer)
write_time (integer)
writes (integer)
Measurement: linux_sysctl_fs
Tags: host, node_id, tag
Fields: aio-max-nr (integer)
aio-nr (integer)
dentry-age-limit (integer)
dentry-nr (integer)
dentry-unused-nr (integer)
dentry-want-pages (integer)
file-max (integer)
file-nr (integer)
inode-free-nr (integer)
inode-nr (integer)
inode-preshrink-nr (integer)
Measurement: mem
Tags: host, node_id, tag
Fields: active (integer)
available (integer)
Measurement: net
Tags: host, interface, node_id, tag
Fields: bytes_recv (integer)
bytes_sent (integer)
bytes_sum (integer)
drop_in (integer)
drop_out (integer)
err_in (integer)
err_out (integer)
packets_recv (integer)
packets_sent (integer)
packets_sum (integer)
speed (integer)
utilization (integer)
Measurement: nstat
Tags: host, name, node_id, tag
Fields: IpExtInOctets (integer)
IpExtOutOctets (integer)
TcpInErrs (integer)
UdpInErrors (integer)
Measurement: processes
Tags: host, node_id, tag
Fields: blocked (integer)
dead (integer)
idle (integer)
paging (integer)
running (integer)
sleeping (integer)
stopped (integer)
total (integer)
total_threads (integer)
unknown (integer)
zombies (integer)
Measurement: procstat
Tags: host, node_id, process_name, tag, user
Fields: cpu_time (integer)
cpu_time_guest (float)
Measurement: swap
Tags: host, node_id, tag
Fields: free (integer)
in (integer)
out (integer)
total (integer)
used (integer)
used_percent (float)
Measurement: system
Tags: host, node_id, tag
Fields: load1 (float)
load15 (float)
load5 (float)
n_cpus (integer)
n_users (integer)
uptime (integer)
uptime_format (string)
Measurement: dtquery_dt_dist_dt_node_id_type
Tags: dt_node_id, process, tag, type
Fields: count_i (integer)
Measurement: dtquery_dt_dist_host_dt_node_id
Tags: dt_node_id, process, tag
Fields: count_i (integer)
Measurement: dtquery_dt_dist_type_type
Tags: process, tag, type
Fields: count_i (integer)
Measurement: dtquery_dt_status
Tags: process, tag
Fields: total (integer)
unknown (integer)
unready (integer)
Measurement: dtquery_dt_status_detailed_type
Tags: process, tag, type
Fields: total (integer)
unknown (integer)
unready (integer)
Measurement: ecs_fabric_agent_dirstat_size_bytes
Tags: host, node_id, path, tag, url
Fields: gauge (float)
SR journal statistics
Measurement: sr_JournalParser_GC_RG_DT
Tags: DT, RG, last, process
Fields: majorMinorOfJournalRegion (string)
pendingChunks (integer)
timestampOfChunkRegion (string)
timestampOfJournalParserLastRun (string)
Measurement: sr_ObjectGC_CAS_RG
Tags: RG, last, process
Fields: STATUS (string)
Measurement: vnestStat_btree
Tags: cumulative_stats, host, level, node_id, tag
Fields: level_count (float)
page_count (float)
size_bytes (float)
Information:
The metrics below are raw measurements aggregated over data nodes and are used in the Grafana ECS UI.
Measurement: cq_disk_bandwidth
Tags: type_op ('read', 'write')
Fields: consistency_checker (float)
erasure_encoding (float)
geo (float)
hardware_recovery (float)
total (float)
user_traffic (float)
xor (float)
Measurement: cq_node_rebalancing_summary
Tags: none
Fields: data_rebalanced (integer)
pending_rebalance (integer)
Measurement: cq_process_health
Tags: none
Fields: cpu_used (float)
mem_used (float)
mem_used_percent (float)
nic_bytes (float)
nic_utilization (float)
Measurement: cq_recover_status_summary
Tags: none
Fields: data_recovered (integer)
data_to_recover (integer)
Database monitoring_main
Performance metrics in this database are raw; each is split by data node, that is, all measurements have host and node_id tags.
Most integer fields are increasing counters, that is, values that only increase over time. Increasing counters restart from zero
after a datahead service restart.
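Because increasing counters reset to zero on restart, rates should be derived with counter-reset handling. A minimal Flux sketch (bucket and measurement names are taken from the listings below; the 1-hour range is illustrative):

```
// Per-second transaction rate from an increasing counter.
// derivative(nonNegative: true) treats a drop in the counter
// (for example, after a datahead service restart) as a reset
// rather than reporting a negative rate.
from(bucket: "monitoring_main")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "statDataHead_performance_internal_transactions" and
r._field == "succeed_request_counter")
|> derivative(unit: 1s, nonNegative: true)
```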
Measurement: statDataHead_performance_internal_error
Tags: host, node_id, process, tag
Fields: system_errors (integer)
user_errors (integer)
Measurement: statDataHead_performance_internal_error_code
Tags: code, host, node_id, process, tag
Fields: error_counter (integer)
Measurement: statDataHead_performance_internal_error_head_namespace
Tags: head, host, namespace, node_id, process, tag
Fields: system_errors (integer)
user_errors (integer)
Measurement: statDataHead_performance_internal_latency
Tags: host, id, node_id, process, tag
Fields: +Inf (integer)
0.0 (integer)
1.0 (integer)
111.6295328521717 (integer)
12461.15260479408 (integer)
23.183877401213103 (integer)
2588.0054039994393 (integer)
4.814963904455889 (integer)
537.4921713544796 (integer)
59999.999999999985 (integer)
Measurement: statDataHead_performance_internal_latency_head
Tags: head, host, id, node_id, process, tag
Fields: +Inf (integer)
0.0 (integer)
1.0 (integer)
111.6295328521717 (integer)
12461.15260479408 (integer)
23.183877401213103 (integer)
2588.0054039994393 (integer)
4.814963904455889 (integer)
537.4921713544796 (integer)
59999.999999999985 (integer)
Measurement: statDataHead_performance_internal_throughput
Tags: host, node_id, process, tag
Fields: total_read_requests_size (integer)
total_write_requests_size (integer)
Measurement: statDataHead_performance_internal_throughput_head
Tags: head, host, node_id, process, tag
Fields: total_read_requests_size (integer)
total_write_requests_size (integer)
Measurement: statDataHead_performance_internal_transactions
Tags: host, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)
Measurement: statDataHead_performance_internal_transactions_head
Tags: head, host, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)
Measurement: statDataHead_performance_internal_transactions_head_namespace
Tags: head, host, namespace, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)
Measurement: statDataHead_performance_internal_transactions_method
Tags: host, method, node_id, process, tag
Fields: failed_request_counter (integer)
succeed_request_counter (integer)
Database monitoring_vdc
Performance metrics in this database are values calculated over the whole VDC, without reference to a particular data node.
Most values are floats; the _delta variants store integer fields.
Measurement: cq_performance_error
Tags: none
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_downsampled
Tags: none
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_code
Tags: code
Fields: error_counter (float)
Measurement: cq_performance_error_code_downsampled
Tags: code
Fields: error_counter (float)
Measurement: cq_performance_error_delta
Tags: none
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_error_delta_downsampled
Tags: none
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_error_head
Tags: head
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_head_downsampled
Tags: head
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_head_delta
Tags: head
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_error_head_delta_downsampled
Tags: head
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_error_ns
Tags: namespace
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_ns_downsampled
Tags: namespace
Fields: system_errors (float)
user_errors (float)
Measurement: cq_performance_error_ns_delta
Tags: namespace
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_error_ns_delta_downsampled
Tags: namespace
Fields: system_errors_i (integer)
user_errors_i (integer)
Measurement: cq_performance_latency
Tags: id
Fields: p50 (float)
p99 (float)
Measurement: cq_performance_latency_downsampled
Tags: id
Measurement: cq_performance_latency_head_downsampled
Tags: head, id
Fields: p50 (float)
p99 (float)
Measurement: cq_performance_throughput
Tags: none
Fields: total_read_requests_size (float)
total_write_requests_size (float)
Measurement: cq_performance_throughput_downsampled
Tags: none
Fields: total_read_requests_size (float)
total_write_requests_size (float)
Measurement: cq_performance_throughput_head
Tags: head
Fields: total_read_requests_size (float)
total_write_requests_size (float)
Measurement: cq_performance_throughput_head_downsampled
Tags: head
Fields: total_read_requests_size (float)
total_write_requests_size (float)
Measurement: cq_performance_transaction
Tags: none
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_downsampled
Tags: none
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_delta
Tags: none
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Measurement: cq_performance_transaction_delta_downsampled
Tags: none
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Measurement: cq_performance_transaction_head
Tags: head
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_head_downsampled
Tags: head
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_head_delta
Tags: head
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Measurement: cq_performance_transaction_head_delta_downsampled
Tags: head
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Measurement: cq_performance_transaction_method
Tags: method
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_method_downsampled
Tags: method
Fields: failed_request_counter (float)
Measurement: cq_performance_transaction_ns_downsampled
Tags: namespace
Fields: failed_request_counter (float)
succeed_request_counter (float)
Measurement: cq_performance_transaction_ns_delta
Tags: namespace
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Measurement: cq_performance_transaction_ns_delta_downsampled
Tags: namespace
Fields: failed_request_counter_i (integer)
succeed_request_counter_i (integer)
Processes statistics
Dashboard API
GET /dashboard/nodes/{id}/processes
GET /dashboard/processes/{id}
Flux API
Database:
● monitoring_op
Measurement:
● procstat (for detailed info on available fields and tags, see the Telegraf Procstat input plugin documentation on GitHub)
Fields:
● memory_rss: resident memory of a process (bytes)
● cpu_usage: CPU usage percentage for a process (percent used of a single CPU)
● num_threads: number of threads used by the process (int)
Tags:
● process_name: valid process names:
○ nvmeengine
○ nvmetargetviewer
○ dtsm
○ rack-service-manager
○ rpcbind
○ blobsvc
○ cm
○ coordinatorsvc
○ dataheadsvc
○ dtquery
○ ecsportalsvc
○ eventsvc
Example tag filters:
r.node_id == "330e4b8f-4491-4ec7-b816-7b10ac9c6abf"
r.process_name == "cm"
Example query:
from(bucket: "monitoring_op")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "procstat" and r._field == "memory_rss" and
r.process_name == "vnest" and r.host == "ecs_node_fqdn")
|> keep(columns: ["_time", "_value", "process_name"])
Example output:
#datatype,string,long,dateTime:RFC3339,long,string
#group,false,false,false,false,true
#default,_result,,,,
,result,table,_time,_value,process_name
,,0,2019-08-15T13:05:00Z,2505809920,vnest
,,0,2019-08-15T13:10:00Z,2505887744,vnest
,,0,2019-08-15T13:15:00Z,2506014720,vnest
,,0,2019-08-15T13:20:01Z,2506010624,vnest
Nodes statistics
Dashboard API
GET /dashboard/nodes/{id}
Database:
● monitoring_op
Measurement:
● cpu (for detailed info on available fields and tags, see the Telegraf CPU input plugin documentation on GitHub)
Example query:
from(bucket: "monitoring_op")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cpu" and r.cpu == "cpu-total" and r._field ==
"usage_idle" and r.host == "ecs_node_fqdn")
|> keep(columns: ["_time", "_value", "host"])
Example output:
#datatype,string,long,dateTime:RFC3339,double,string
#group,false,false,false,false,true
#default,_result,,,,
,result,table,_time,_value,host
,,0,2019-08-15T13:20:00Z,19.549454477395525,host_name
,,0,2019-08-15T13:25:00Z,17.920104933062728,host_name
,,0,2019-08-15T13:30:00Z,18.050788903551002,host_name
,,0,2019-08-15T13:35:00Z,19.801364027505095,host_name
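Since usage_idle reports the idle percentage, total CPU utilization can be derived by subtracting it from 100. A sketch, assuming the same bucket, measurement, and host placeholder as the query above:

```
// Percent CPU busy = 100 - usage_idle, computed per point with map().
from(bucket: "monitoring_op")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cpu" and r.cpu == "cpu-total" and
r._field == "usage_idle" and r.host == "ecs_node_fqdn")
|> map(fn: (r) => ({r with _value: 100.0 - r._value}))
|> keep(columns: ["_time", "_value", "host"])
```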
Measurement:
● mem (for detailed info on available fields and tags, see the Telegraf Memory input plugin documentation on GitHub)
Fields:
● free: free memory on the host (bytes)
Tags:
● host: ECS node FQDN
● node_id: host ID
NOTE: Due to resource limitations, the query range is limited to a maximum of 1 hour.
Example query:
from(bucket: "monitoring_op")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "free" and r.host ==
"ecs_node_fqdn")
|> keep(columns: ["_time", "_value", "host"])
Example output:
#datatype,string,long,dateTime:RFC3339,long,string
#group,false,false,false,false,true
#default,_result,,,,
,result,table,_time,_value,host
,,0,2019-08-15T14:10:00Z,3181088768,host_name
,,0,2019-08-15T14:15:00Z,2988388352,host_name
,,0,2019-08-15T14:20:00Z,3002994688,host_name
,,0,2019-08-15T14:25:00Z,3115741184,host_name
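For smoother trends within the 1-hour limit, the free-memory series can be downsampled with aggregateWindow (a sketch; the 10-minute window is illustrative):

```
// 10-minute averages of free memory for one node.
from(bucket: "monitoring_op")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "free" and
r.host == "ecs_node_fqdn")
|> aggregateWindow(every: 10m, fn: mean)
|> keep(columns: ["_time", "_value", "host"])
```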
Dashboard API
GET /dashboard/nodes/{id}
GET /dashboard/zones/localzone
GET /dashboard/zones/localzone/nodes
Dashboard APIs
Lists the APIs that are changed or deprecated.
Data removed: nodeNicUtilization*, nodeNicReceivedUtilization*, nodeNicTransmittedUtilization*
Where replacement can be found: See Monitoring list of metrics: Non-Performance > Database monitoring_op > Node system level statistics.
Measurements: cpu, mem, net
Where replacement can be found: See Monitoring list of metrics: Non-Performance > Database monitoring_vdc.
Measurement: cq_node_rebalancing_summary
Where replacement can be found: See Monitoring list of metrics: Non-Performance > Database monitoring_last > Export of configuration framework values.
Measurement: dtquery_cmf
3. Transaction-related data
Data removed: transactionReadLatency, transactionWriteLatency, transactionReadBandwidth, transactionWriteBandwidth, transactionReadTransactionsPerSec, transactionWriteTransactionsPerSec, transactionErrors*
Where replacement can be found: For VDC metrics, see Monitoring list of metrics: Performance > Database monitoring_vdc. For Node metrics, see Monitoring list of metrics: Performance > Database monitoring_main.
4. Disk-related data
Data removed: diskReadBandwidthTotal, diskWriteBandwidthTotal, diskReadBandwidthEc, diskWriteBandwidthEc, diskReadBandwidthCc, diskWriteBandwidthCc, diskReadBandwidthRecovery, diskWriteBandwidthRecovery, diskReadBandwidthGeo, diskWriteBandwidthGeo, diskReadBandwidthUser, diskWriteBandwidthUser, diskReadBandwidthXor, diskWriteBandwidthXor
Where replacement can be found: For VDC metrics, see Monitoring list of metrics: Non-Performance > Database monitoring_vdc. For Node metrics, see Monitoring list of metrics: Non-Performance > Data for ECS Service I/O Statistics.
E
ECS management user role 60
K
Key rotation limitations 157
M
migrating external key management 154
N
Namespace Administrator 59
R
Rotate Keys 157