
Isilon OneFS

Version 7.0

Administration Guide

Published November, 2012

Copyright 2001 - 2012 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.

For the most up-to-date product documentation, go to the Isilon Customer Support Center.

EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000   In North America 1-866-464-7381
www.EMC.com


CONTENTS

Chapter 1   Introduction to Isilon scale-out NAS.........15
    Architecture.........16
    Isilon Node.........16
    Internal and external networks.........17
    Isilon cluster.........17
    Cluster administration.........17
    Quorum.........17
    Splitting and merging.........18
    Storage pools.........18
    IP address pools.........19
    The OneFS operating system.........19
    Data-access protocols.........19
    Identity management and access control.........20
    Structure of the file system.........21
    Data layout.........21
    Writing files.........21
    Reading files.........22
    Metadata layout.........22
    Locks and concurrency.........22
    Striping.........22
    Data protection overview.........23
    N+M data protection.........24
    Data mirroring.........24
    The file system journal.........25
    Virtual hot spare.........25
    Balancing protection with storage space.........25
    VMware integration.........25
    The iSCSI option.........25
    Software modules.........26

Chapter 2   Authentication and access control.........27
    Data access control.........29
    ACLs.........29
    UNIX permissions.........30
    Mixed-permission environments.........30
    Roles and privileges.........31
    Built-in roles.........31
    OneFS privileges.........32
    Authentication.........34
    Local provider.........34
    File provider.........34
    Active Directory.........35
    LDAP.........35
    NIS.........35
    Authentication provider features.........36
    Identity management.........36
    Identity types.........36
    Access tokens.........37
    ID mapping.........38
    On-disk identity selection.........39
    User mapping across identities.........40
    Configuring user mapping.........40
    Well-known security identifiers.........41
    Access zones.........41
    Home directories.........42
    Home directory creation through SMB.........42
    Home directory creation through SSH and FTP.........42
    Home directory creation in mixed environments.........43
    Default home directory settings in authentication providers.........44
    Supported expansion variables.........44
    Managing access permissions.........45
    Configure access management settings.........45
    Modify ACL policy settings.........46
    Update cluster permissions.........51
    Managing roles.........52
    View roles.........52
    Create a custom role.........53
    Modify a role.........53
    Delete a custom role.........53
    Create a local user.........53
    Create a local group.........54
    Managing users and groups.........55
    Modify a local user.........55
    Modify a local group.........55
    Delete a local user.........56
    Delete a local group.........56
    Creating file providers.........56
    Create a file provider.........56
    Generate a password file.........57
    Managing file providers.........57
    Modify a file provider.........57
    Delete a file provider.........58
    Password file format.........58
    Group file format.........59
    Netgroup file format.........59
    Create an Active Directory provider.........59
    Managing Active Directory providers.........60
    Modify an Active Directory provider.........60
    Delete an Active Directory provider.........60
    Configure Kerberos settings.........60
    Active Directory provider settings.........61
    Create an LDAP provider.........62
    Managing LDAP providers.........65
    Modify an LDAP provider.........65
    Delete an LDAP provider.........65
    Create a NIS provider.........65
    Managing NIS providers.........66
    Modify a NIS provider.........66
    Delete a NIS provider.........67
    Create an access zone.........67
    Managing access zones.........68
    Modify an access zone.........68
    Associate an IP address pool with an access zone.........69
    Delete an access zone.........69


Chapter 3   File sharing.........71
    NFS.........72
    SMB.........72
    HTTP.........72
    FTP.........73
    Mixed protocol environments.........73
    Write caching with SmartCache.........73
    Write caching for asynchronous writes.........74
    Write caching for synchronous writes.........74
    Create an NFS export.........75
    Create an SMB share.........75
    Configure NFS file sharing.........76
    Disable NFS file sharing.........77
    NFS service settings.........77
    NFS export behavior settings.........78
    NFS performance settings.........78
    NFS client compatibility settings.........80
    Configure SMB file sharing.........80
    File and directory permission settings.........81
    Disable SMB file sharing.........81
    Snapshots settings directory.........82
    SMB performance settings.........82
    SMB security settings.........83
    Configure and enable HTTP file sharing.........83
    Configure and enable FTP file sharing.........84
    Managing NFS exports.........84
    Modify an NFS export.........85
    Delete an NFS export.........85
    View and configure default NFS export settings.........85
    Managing SMB shares.........86
    Add a user or group to an SMB share.........86
    Modify an SMB share.........86
    Delete an SMB share.........87
    SMB share settings.........87
    View and modify SMB share settings.........88

Chapter 4   Snapshots.........89
    Data protection with SnapshotIQ.........90
    Snapshot disk-space usage.........90
    Snapshot schedules.........91
    Snapshot aliases.........91
    File and directory restoration.........91
    File clones.........91
    File clones considerations.........92
    iSCSI LUN clones.........93
    Snapshot locks.........93
    Snapshot reserve.........93
    SnapshotIQ license functionality.........93
    Creating snapshots with SnapshotIQ.........94
    Create a SnapRevert domain.........94
    Create a snapshot.........95
    Create a snapshot schedule.........95
    Snapshot naming patterns.........96
    Managing snapshots.........99
    Reducing snapshot disk-space usage.........99
    Delete snapshots.........100
    Modify a snapshot.........101
    Modify a snapshot alias.........101
    View snapshots.........101
    Snapshot information.........101
    Restoring snapshot data.........102
    Revert a snapshot.........102
    Restore a file or directory using Windows Explorer.........102
    Restore a file or directory through a UNIX command line.........103
    Clone a file from a snapshot.........103
    Managing snapshot schedules.........103
    Modify a snapshot schedule.........103
    Delete a snapshot schedule.........104
    View snapshot schedules.........104
    Managing with snapshot locks.........104
    Create a snapshot lock.........104
    Modify a snapshot lock.........105
    Delete a snapshot lock.........105
    Snapshot lock information.........105
    Configure SnapshotIQ settings.........106
    SnapshotIQ settings.........106
    Set the snapshot reserve.........107

Chapter 5   Data replication with SyncIQ.........109
    Replication policies and jobs.........110
    Source and target cluster association.........111
    Full and differential replication.........111
    Controlling replication job resource consumption.........111
    Replication reports.........112
    Replication snapshots.........112
    Source cluster snapshots.........112
    Target cluster snapshots.........113
    Data failover and failback with SyncIQ.........113
    Data failover.........114
    Data failback.........114
    Recovery times and objectives for SyncIQ.........114
    SyncIQ license functionality.........115
    Creating replication policies.........115
    Excluding directories in replication.........115
    Excluding files in replication.........116
    File criteria options.........117
    Configure default replication policy settings.........118
    Create a replication policy.........119
    Create a SyncIQ domain.........123
    Assess a replication policy.........124
    Managing replication to remote clusters.........124
    Start a replication job.........124
    Pause a replication job.........124
    Resume a replication job.........125
    Cancel a replication job.........125
    View active replication jobs.........125
    View replication performance information.........125
    Replication job information.........125
    Initiating data failover and failback with SyncIQ.........126
    Fail over data to a secondary cluster.........126
    Fail over SmartLock directories.........127
    Failover revert.........127
    Fail back data to a primary cluster.........128
    Prepare SmartLock directories for failback.........128
    Fail back SmartLock directories.........129
    Managing replication policies.........130
    Modify a replication policy.........130
    Delete a replication policy.........130
    Enable or disable a replication policy.........130
    View replication policies.........131
    Replication policy information.........131
    Replication policy settings.........131
    Managing replication to the local cluster.........133
    Cancel replication to the local cluster.........133
    Break local target association.........133
    View replication jobs targeting the local cluster.........134
    Remote replication policy information.........134
    Managing replication performance rules.........134
    Create a network traffic rule.........134
    Create a file operations rule.........135
    Modify a performance rule.........135
    Delete a performance rule.........135
    Enable or disable a performance rule.........136
    View performance rules.........136
    Managing replication reports.........136
    Configure default replication report settings.........136
    Delete replication reports.........136
    View replication reports.........137
    Replication report information.........137
    Managing failed replication jobs.........138
    Resolve a replication policy.........138
    Reset a replication policy.........138
    Perform a full or differential replication.........139

Chapter 6   Data layout with FlexProtect.........141
    File striping.........142
    Data protection levels.........142
    FlexProtect data recovery.........142
    Smartfail.........143
    Node failures.........143
    Managing protection levels.........144
    Data protection level information.........144
    Data protection level disk space usage.........145

Chapter 7   NDMP backup.........147
    NDMP two way backup.........148
    NDMP protocol support.........148
    Supported DMAs.........148
    NDMP hardware support.........149
    NDMP backup limitations.........149
    NDMP performance recommendations.........149
    Excluding files and directories from NDMP backups.........151
    Configuring basic NDMP backup settings.........152
    Configure and enable NDMP backup.........152
    Disable NDMP backup.........153
    View NDMP backup settings.........153
    NDMP backup settings.........153
    Create an NDMP user account.........153
    Managing NDMP user accounts.........154
    Modify the password of an NDMP user account.........154
    Delete an NDMP user account.........154
    View NDMP user accounts.........154
    Managing NDMP backup devices.........154
    Detect NDMP backup devices.........154
    Modify an NDMP backup device name.........155
    Delete a device entry for a disconnected NDMP backup device.........155
    View NDMP backup devices.........155
    NDMP backup device settings.........156
    Managing NDMP backup ports.........156
    Modify NDMP backup port settings.........156
    Enable or disable an NDMP backup port.........156
    View NDMP backup ports.........157
    NDMP backup port settings.........157
    Managing NDMP backup sessions.........157
    Terminate an NDMP session.........157
    View NDMP sessions.........158
    NDMP session information.........158
    View NDMP backup logs.........159
    NDMP environment variables.........159

Chapter 8   File retention with SmartLock.........163
    SmartLock operation modes.........164
    Enterprise mode.........164
    Compliance mode.........164
    Replication and backup with SmartLock.........165
    Data replication in compliance mode.........165
    Data replication and backup in enterprise mode.........165
    SmartLock license functionality.........166
    SmartLock best practices and considerations.........166
    Set the compliance clock.........167
    View the compliance clock.........168
    Creating a SmartLock directory.........168
    Retention periods.........168
    Autocommit time periods.........168
    Create a SmartLock directory.........169
    Managing SmartLock directories.........170
    Modify a SmartLock directory.........170
    View SmartLock directory settings.........170
    SmartLock directory configuration settings.........170
    Managing files in SmartLock directories.........171
    Set a retention period through a UNIX command line.........172
    Set a retention period through Windows PowerShell.........172
    Commit a file to a WORM state through a UNIX command line.........172
    Commit a file to a WORM state through Windows Explorer.........173
    Override the retention period for all files in a SmartLock directory.........173
    Delete a file committed to a WORM state.........174
    View WORM status of a file.........174
Chapter 9   Protection domains.........175
    Protection domain considerations.........176
    Create a protection domain.........176
    Delete a protection domain.........176
    View protection domains.........177
    Protection domain types.........177

Chapter 10   Cluster administration.........179
    User interfaces.........180
    Web administration interface.........180
    Command-line interface.........180
    Node front panel.........180
    OneFS Platform API.........181
    Connecting to the cluster.........181
    Log in to the web administration interface.........181
    Open an SSH connection to a cluster.........181
    Restart or shut down the cluster.........181
    Licensing.........182
    Activating licenses.........182
    Activate a license through the web administration interface.........182
    Activate a license through the command-line interface.........183
    View license information.........183
    Unconfiguring licenses.........183
    Unconfigure a license.........184
    General cluster settings.........184
    Configuring the cluster date and time.........184
    Set the cluster date and time.........185
    Specify an NTP time server.........185
    Set the cluster name.........186
    Specify contact information.........186
    View SNMP settings.........186
    Configure SMTP email settings.........186
    Configuring SupportIQ.........187
    Enable and configure SupportIQ.........187
    Disable SupportIQ.........188
    Enable or disable access time tracking.........188
    Specify the cluster join mode.........188
    Specify the cluster character encoding.........188
    Cluster statistics.........189
    Performance monitoring.........189
    Cluster monitoring.........189
    Monitor the cluster.........190
    View node status.........191
    Events and notifications.........191
    Monitoring cluster hardware.........196
    View node hardware status.........196
    SNMP monitoring.........197
    Cluster maintenance.........199
    Replacing node components.........199
    Managing cluster nodes.........200
    Remote support using SupportIQ.........201
    SupportIQ scripts.........202
    Upgrading OneFS.........203
    Cluster join modes.........204
    Event notification settings.........204
    System job management.........205
    Job engine overview.........205
    Job performance impact.........207
    Job impact policies.........207
    Job priorities.........208
    Managing system jobs.........208
    Monitoring system jobs.........211
    Creating impact policies.........211
    Managing impact policies.........212

Chapter 11   SmartQuotas.........215
    Quotas overview.........216
    Quota types.........216
    Usage accounting and limits.........218
    Disk-usage calculations.........219
    Quota notifications.........220
    Quota notification rules.........221
    Quota reports.........221
    Creating quotas.........222
    Create an accounting quota.........222
    Create an enforcement quota.........223
    Managing quotas.........223
    Search for quotas.........224
    Manage quotas.........224
    Export a quota configuration file.........225
    Import a quota configuration file.........225
    Managing quota notifications.........226
    Configure default quota notification settings.........226
    Configure custom quota notification rules.........227
    Map an email notification rule for a quota.........228
    Configure a custom email quota notification template.........228
    Managing quota reports.........229
    Create a quota report schedule.........229
    Generate a quota report.........229
    Locate a quota report.........230
    Basic quota settings.........230
    Advisory limit quota notification rules settings.........231
    Soft limit quota notification rules settings.........232
    Hard limit quota notification rules settings.........233
    Limit notification settings.........233
    Quota report settings.........234
    Custom email notification template variable descriptions.........235

Chapter 12   Storage pools.........237
    Storage pool overview.........238
    Autoprovisioning.........238
    Virtual hot spare and SmartPools.........239
    Spillover and SmartPools.........239
    Node pools.........240
    Add or move node pools in a tier.........240
    Change the name or protection level of a node pool.........240
    SSD pools.........241
    File pools with SmartPools.........241
    Tiers.........242
    Create a tier.........242
    Rename a tier.........242
    Delete a tier.........243
    File pool policies.........243
    Pool monitoring.........243
    Monitor node pools and tiers.........244
    View unhealthy subpools.........244
    Creating file pool policies with SmartPools.........244
    Managing file pool policies.........245
    Configure default file pool policy settings.........245
    Configure default file pool protection settings.........246
    Configure default I/O optimization settings.........246
    Modify a file pool policy.........246
    Copy a file pool policy.........247
    Prioritize a file pool policy.........247
    Use a file pool template policy.........247
    Delete a file pool policy.........248
    SmartPools settings.........248
    Default file pool protection settings.........250
    Default file pool I/O optimization settings.........252

Chapter 13   Networking.........253
    Cluster internal network overview.........254
    Internal IP address ranges.........254
    Cluster internal network failover.........254
    External client network overview.........254
    External network settings.........255
    IP address pools.........255
    Connection balancing with SmartConnect.........256
    External IP failover.........257
    NIC aggregation.........257
    VLANs.........258
    DNS name resolution.........258
    IPv6 support.........258
    Configuring the internal cluster network.........259
    Modify the internal IP address range.........259
    Modify the internal network netmask.........259
    Configure and enable an internal failover network.........260
    Disable internal network failover.........261
    Configuring an external network.........261
    Adding a subnet.........261
    Managing external client subnets.........266
    Managing IP address pools.........268
    Managing IP address pool interface members.........271
    Configure DNS settings.........274
    Managing external client connections with SmartConnect.........275
    Configure client connection balancing.........275
    Client connection settings.........276
    Managing network interface provisioning rules.........276
    Create a node provisioning rule.........277
    Modify a node provisioning rule.........278
    Delete a node provisioning rule.........278

Chapter 14   Hadoop.........279
    Hadoop support overview.........280
    Hadoop cluster integration.........280
    Managing HDFS.........280
    Configure the HDFS protocol.........280
    Create a local user.........282
    Enable or disable the HDFS service.........282

Chapter 15   Antivirus.........283
    On-access scanning.........284
    Antivirus policy scanning.........284
    Individual file scanning.........284
    Antivirus scan reports.........285
    ICAP servers.........285
    Supported ICAP servers.........285
    Antivirus threat responses.........286
    Configuring global antivirus settings.........287
    Exclude files from antivirus scans.........287
    Configure on-access scanning settings.........288
    Configure antivirus threat response settings.........288
    Configure antivirus report retention settings.........288
    Enable or disable antivirus scanning.........289
    Managing ICAP servers.........289
    Add and connect to an ICAP server.........289
    Test an ICAP server connection.........289
    Modify ICAP connection settings.........289
    Temporarily disconnect from an ICAP server.........289
    Reconnect to an ICAP server.........290
    Remove an ICAP server.........290
    Create an antivirus policy.........290
    Managing antivirus policies.........291
    Modify an antivirus policy.........291
    Delete an antivirus policy.........291
    Enable or disable an antivirus policy.........291
    View antivirus policies.........291
    Managing antivirus scans.........291
    Scan a file.........291
    Manually run an antivirus policy.........292
    Stop a running antivirus scan.........292
    Managing antivirus threats.........292
    Manually quarantine a file.........292
    Rescan a file.........292
    Remove a file from quarantine.........292
    Manually truncate a file.........292
    View threats.........293
    Antivirus threat information.........293
    Managing antivirus reports.........293
    Export an antivirus report.........293
    View antivirus reports.........293
    View antivirus events.........294

Chapter 16   iSCSI.........295
    iSCSI targets and LUNs.........296
    Using SmartConnect with iSCSI targets.........296
    iSNS client service.........296
    Access control for iSCSI targets.........297
    CHAP authentication.........297
    Initiator access control.........297
    iSCSI considerations and limitations.........297
    Supported SCSI mode pages.........298
    Supported iSCSI initiators.........298
    Configuring the iSCSI and iSNS services.........298
    Configure the iSCSI service.........298
    Configure the iSNS client service.........299
    View iSCSI sessions and throughput.........299
    Create an iSCSI target.........299
    Managing iSCSI targets.........301
    Modify iSCSI target settings.........301
    Delete an iSCSI target.........301
    View iSCSI target settings.........301
    Configuring iSCSI initiator access control.........302
    Configure iSCSI initiator access control.........302
    Control initiator access to a target.........303
    Modify initiator name.........303
    Remove an initiator from the access list.........303
    Create a CHAP secret.........304
    Modify a CHAP secret.........304
    Delete a CHAP secret.........304
    Enable or disable CHAP authentication.........305
    Creating iSCSI LUNs.........305
    Create an iSCSI LUN.........305
    Clone an iSCSI LUN.........307
    Managing iSCSI LUNs.........309
    Modify an iSCSI LUN.........309
    Delete an iSCSI LUN.........309
    Migrate an iSCSI LUN to another target.........309
    Import an iSCSI LUN.........310
    View iSCSI LUN settings.........311

Chapter 17   VMware integration.........313
    VASA.........314
    Isilon VASA alarms.........314
    VASA storage capabilities.........314
    VAAI.........315
    VAAI support for block storage.........315
    VAAI support for NAS.........315
    Configuring VASA support.........315
    Enable VASA.........315
    Download the Isilon vendor provider certificate.........315
    Add the Isilon vendor provider.........316
    Disable or re-enable VASA.........316

Chapter 18

File System Explorer

317

Browse the file system................................................................................318 Create a directory........................................................................................318 Modify file and directory properties.............................................................318 View file and directory properties................................................................318
OneFS 7.0 Administration Guide
13

CONTENTS

File and directory properties........................................................................319

14

OneFS 7.0 Administration Guide

CHAPTER 1 Introduction to Isilon scale-out NAS

The EMC Isilon scale-out NAS storage platform combines modular hardware with unified software to harness unstructured data. Powered by the distributed OneFS operating system, an EMC Isilon cluster delivers a scalable pool of storage with a global namespace. The platform's unified software provides centralized web-based and command-line administration to manage the following features:
• A symmetrical cluster that runs a distributed file system
• Scale-out nodes that add capacity and performance
• Storage options that manage files, block data, and tiering
• Flexible data protection and high availability
• Software modules that control costs and optimize resources

Architecture..........................................................16
Isilon Node...........................................................16
Internal and external networks........................................17
Isilon cluster........................................................17
The OneFS operating system............................................19
Structure of the file system..........................................21
Data protection overview..............................................23
VMware integration....................................................25
The iSCSI option......................................................25
Software modules......................................................26


Architecture
OneFS combines the three traditional layers of storage architecture (file system, volume manager, and data protection) into a scale-out NAS cluster. In contrast to a scale-up approach, EMC Isilon takes a scale-out approach by creating a cluster of nodes that runs a distributed file system. Each node adds resources to the cluster. Because each node contains globally coherent RAM, as a cluster becomes larger, it becomes faster. Meanwhile, the file system expands dynamically and redistributes content, which eliminates the work of partitioning disks and creating volumes.

Nodes work as peers to spread data across the cluster. Segmenting and distributing data, a process known as striping, not only protects data, but also enables a user connecting to any node to take advantage of the entire cluster's performance.

The use of distributed software to scale data across commodity hardware sets OneFS apart from other storage systems. No master device controls the cluster; no slaves invoke dependencies. Instead, each node helps control data requests, boosts performance, and expands the cluster's capacity.

Isilon Node
As a rack-mountable appliance, a node includes the following components in a 2U or 4U rack-mountable chassis with an LCD front panel: CPUs, RAM, NVRAM, network interfaces, InfiniBand adapters, disk controllers, and storage media. An Isilon cluster comprises three or more nodes, up to 144. When you add a node to a cluster, you increase the cluster's aggregate disk, cache, CPU, RAM, and network capacity. OneFS groups RAM into a single coherent cache so that a data request on a node benefits from data that is cached anywhere. NVRAM is grouped to write data with high throughput and to protect write operations from power failures. As the cluster expands, spindles and CPU combine to increase throughput, capacity, and input-output operations per second (IOPS).

EMC Isilon makes several types of nodes, all of which can be added to a cluster to balance capacity and performance with throughput or IOPS:

Node       | Use case
-----------|-------------------------------------------------
S-Series   | IOPS-intensive applications
X-Series   | High-concurrency and throughput-driven workflows
NL-Series  | Near-primary accessibility, with near-tape value

The following EMC Isilon nodes improve performance:

Node                    | Function
------------------------|----------------------------------------------------
Performance Accelerator | Independent scaling for high performance
Backup Accelerator      | High-speed and scalable backup-and-restore solution


Internal and external networks


A cluster includes two networks: an internal network to exchange data between nodes and an external network to handle client connections. Nodes exchange data through the internal network with a proprietary, unicast protocol over InfiniBand. Each node includes redundant InfiniBand ports so you can add a second internal network in case the first one fails. Clients reach the cluster with 1 GigE or 10 GigE Ethernet. Since every node includes Ethernet ports, the cluster's bandwidth scales with performance and capacity as you add nodes.

Isilon cluster
An Isilon cluster consists of three or more hardware nodes, up to 144. Each node runs the Isilon OneFS operating system, the distributed file-system software that unites the nodes into a cluster. A cluster's storage capacity ranges from a minimum of 18 TB to a maximum of 15.5 PB.

Cluster administration
OneFS centralizes cluster management through a web administration interface and a command-line interface. Both interfaces provide methods to activate licenses, check the status of nodes, configure the cluster, upgrade the system, generate alerts, view client connections, track performance, and change various settings.

In addition, OneFS simplifies administration by automating maintenance with a job engine. You can schedule jobs that scan for viruses, inspect disks for errors, reclaim disk space, and check the integrity of the file system. The engine manages the jobs to minimize impact on the cluster's performance.

With SNMP versions 1, 2c, and 3, you can remotely monitor hardware components, CPU usage, switches, and network interfaces. EMC Isilon supplies management information bases (MIBs) and traps for the OneFS operating system.

OneFS also includes a RESTful application programming interface, known as the Platform API, to automate access, configuration, and monitoring. For example, you can retrieve performance statistics, provision users, and tap the file system. The Platform API integrates with OneFS role-based access control to increase security. See the Isilon Platform API Reference.
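For example, the following sketch queries the Platform API from any host with curl. The cluster hostname and credentials are placeholders, and the API port is assumed to be 8080; the /platform/1/event URI is the events resource referenced later in this guide:

   # list system events through the Platform API
   # -k skips certificate validation; use a trusted certificate in production
   curl -k -u admin:password https://cluster.example.com:8080/platform/1/event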

Quorum
An Isilon cluster must have a quorum to work properly. A quorum prevents data conflicts (for example, conflicting versions of the same file) in case two groups of nodes become unsynchronized. If a cluster loses its quorum for read and write requests, you cannot access the OneFS file system. For a quorum, more than half the nodes must be available over the internal network. A seven-node cluster, for example, requires a four-node quorum. A 10-node cluster requires a six-node quorum.

If a node is unreachable over the internal network, OneFS separates the node from the cluster, an action referred to as splitting. When the split node can reconnect with the cluster and resynchronize with the other nodes, the node rejoins the cluster's majority group, an action referred to as merging. Although a cluster can contain only one majority group, nodes that split from the majority side can form multiple groups. A OneFS cluster contains two quorum properties:

• read quorum (efs.gmp.has_quorum)
• write quorum (efs.gmp.has_super_block_quorum)

By connecting to a node with SSH and running the sysctl command-line tool as root, you can view the status of both types of quorum. Here is an example for a cluster that has a quorum for both read and write operations, as the command's output indicates with a 1, for true:
sysctl efs.gmp.has_quorum
efs.gmp.has_quorum: 1
sysctl efs.gmp.has_super_block_quorum
efs.gmp.has_super_block_quorum: 1

The degraded states of nodes (such as smartfail, read-only, offline, and so on) affect quorum in different ways. A node in a smartfail or read-only state affects only write quorum. A node in an offline state, however, affects both read and write quorum. In a cluster, the combination of nodes in different degraded states determines whether read requests, write requests, or both work.

A cluster can lose write quorum but keep read quorum. Consider a four-node cluster in which nodes 1 and 2 are working normally. Node 3 is in a read-only state, and node 4 is in a smartfail state. In such a case, read requests to the cluster succeed. Write requests, however, receive an input-output error because the states of nodes 3 and 4 break the write quorum.

A cluster can also lose both its read and write quorum. If nodes 3 and 4 in a four-node cluster are in an offline state, both write requests and read requests receive an input-output error, and you cannot access the file system. When OneFS can reconnect with the nodes, OneFS merges them back into the cluster. Unlike a RAID system, an Isilon node can rejoin the cluster without being rebuilt and reconfigured.

Splitting and merging


Splitting and merging optimize the use of nodes without your intervention. OneFS monitors every node in a cluster. If a node is unreachable over the internal network, OneFS separates the node from the cluster, an action referred to as splitting. When the cluster can reconnect to the node, OneFS adds the node back into the cluster, an action referred to as merging.

If a cluster splits during a write operation, OneFS might need to re-allocate blocks for the file on the side with the quorum, which causes the allocated blocks on the side without a quorum to become orphans. When the split nodes reconnect with the cluster, the OneFS Collect system job reclaims the orphaned blocks. Meanwhile, as nodes split and merge with the cluster, the OneFS AutoBalance job redistributes data evenly among the nodes in the cluster, optimizing protection and conserving space.

Storage pools
Storage pools segment nodes and files into logical divisions to simplify the management and storage of data. A storage pool comprises node pools and tiers. Node pools group equivalent nodes to protect data and ensure reliability. Tiers combine node pools to optimize storage by need, such as a frequently used high-speed tier or a rarely accessed archive. The SmartPools module groups nodes and files into pools. By default, the basic unlicensed technology provisions node pools and creates one file pool. When you license the SmartPools module, you receive more features. You can, for example, create multiple file pools and govern them with policies. The policies move files, directories, and file
pools among node pools or tiers. You can also define how OneFS handles write operations when a node pool or tier is full. A virtual hot spare, which reserves space to reprotect data if a drive fails, comes with both the licensed and unlicensed technology.

IP address pools
Within a subnet, you can partition a cluster's external network interfaces into pools of IP address ranges. The pools empower you to customize your storage network to serve different groups of users. Although you must initially configure the default external IP subnet in IPv4 format, you can configure additional subnets in IPv4 or IPv6. You can associate IP address pools with a node, a group of nodes, or NIC ports. For example, you can set up one subnet for storage nodes and another subnet for accelerator nodes. Similarly, you can allocate ranges of IP addresses on a subnet to different teams, such as engineering and sales. Such options help you create a storage topology that matches the demands of your network. In addition, network provisioning rules streamline the setup of external connections. After you configure the rules with network settings, you can apply the settings to new nodes. As a standard feature, the OneFS SmartConnect module balances connections among nodes by using a round-robin policy with static IP addresses and one IP address pool for each subnet. The licensed version of SmartConnect adds features, such as defining IP address pools to support multiple DNS zones.

The OneFS operating system


A distributed operating system based on FreeBSD, OneFS presents an Isilon cluster's file system as a single share or export with a central point of administration. The OneFS operating system does the following:
• Supports common data-access protocols, such as SMB and NFS.
• Connects to multiple identity management systems, such as Active Directory and LDAP.
• Authenticates users and groups.
• Controls access to directories and files.

Data-access protocols
With the OneFS operating system, you can access data with multiple file-sharing and transfer protocols. As a result, Microsoft Windows, UNIX, Linux, and Mac OS X clients can share the same directories and files. OneFS supports the following protocols:

SMB: Server Message Block gives Windows users access to the cluster. OneFS works with SMB 1, SMB 2, and SMB 2.1. With SMB 2.1, OneFS supports client opportunity locks (oplocks) and large (1 MB) MTU sizes. The default file share is /ifs.

NFS: The Network File System enables UNIX, Linux, and Mac OS X systems to remotely mount any subdirectory, including subdirectories created by Windows users. OneFS works with versions 2 through 4 of the Network File System protocol (NFSv2, NFSv3, NFSv4). The default export is /ifs.

FTP: File Transfer Protocol lets systems with an FTP client connect to the cluster to exchange files.

iSCSI: The Internet Small Computer System Interface protocol provides access to block storage.

HDFS: The Hadoop Distributed File System protocol makes it possible for a cluster to work with Apache Hadoop, a framework for data-intensive distributed applications. HDFS integration requires a separate license.

HTTP: Hypertext Transfer Protocol gives systems browser-based access to resources. OneFS includes limited support for WebDAV.

Identity management and access control


OneFS works with multiple identity management systems to authenticate users and control access to files. In addition, OneFS features access zones that allow users from different directory services to access different resources based on their IP address. Role-based access control, meanwhile, segments administrative access by role. OneFS authenticates users with the following identity management systems:
• Microsoft Active Directory (AD)
• Lightweight Directory Access Protocol (LDAP)
• Network Information Service (NIS)
• Local users and local groups
• A file provider for accounts in /etc/spwd.db and /etc/group files. With the file provider, you can add an authoritative third-party source of user and group information.

You can manage users with different identity management systems; OneFS maps the accounts so that Windows and UNIX identities can coexist. A Windows user account managed in Active Directory, for example, is mapped to a corresponding UNIX account in NIS or LDAP. To control access, an Isilon cluster works with both the access control lists (ACLs) of Windows systems and the POSIX mode bits of UNIX systems. When OneFS must transform a file's permissions from ACLs to mode bits or from mode bits to ACLs, OneFS merges the permissions to maintain consistent security settings. OneFS presents protocol-specific views of permissions so that NFS exports display mode bits and SMB shares show ACLs. You can, however, manage not only mode bits but also ACLs with standard UNIX tools, such as the chmod and chown commands. In addition, ACL policies enable you to configure how OneFS manages permissions for networks that mix Windows and UNIX systems.

Access zones: OneFS includes an access zones feature. Access zones allow users from different authentication providers, such as two untrusted Active Directory domains, to access different OneFS resources based on an incoming IP address. An access zone can contain multiple authentication providers and SMB namespaces.

RBAC for administration: OneFS includes role-based access control (RBAC) for administration. In place of a root or administrator account, RBAC lets you manage administrative access by role. A role limits privileges to an area of administration. For example, you can create separate administrator roles for security, auditing, storage, and backup.

Structure of the file system


OneFS presents all the nodes in a cluster as a global namespace, that is, as the default file share, /ifs. In the file system, directories are inode number links. An inode contains file metadata and an inode number, which identifies a file's location. OneFS dynamically allocates inodes, and there is no limit on the number of inodes. To distribute data among nodes, OneFS sends messages with a globally routable block address through the cluster's internal network. The block address identifies the node and the drive storing the block of data.

Data layout
OneFS evenly distributes data among a cluster's nodes with layout algorithms that maximize storage efficiency and performance. The system continuously reallocates data to conserve space. OneFS breaks data down into smaller sections called blocks, and then the system places the blocks in a stripe unit. By referencing either file data or erasure codes, a stripe unit helps safeguard a file from a hardware failure. The size of a stripe unit depends on the file size, the number of nodes, and the protection setting. After OneFS divides the data into stripe units, OneFS allocates, or stripes, the stripe units across nodes in the cluster.

When a client connects to a node, the client's read and write operations take place on multiple nodes. For example, when a client connects to a node and requests a file, the node retrieves the data from multiple nodes and rebuilds the file. You can optimize how OneFS lays out data to match your dominant access pattern: concurrent, streaming, or random.

Writing files
On a node, the input-output operations of the OneFS software stack split into two functional layers: a top layer, or initiator, and a bottom layer, or participant. In read and write operations, the initiator and the participant play different roles.

When a client writes a file to a node, the initiator on the node manages the layout of the file on the cluster. First, the initiator divides the file into blocks of 8 KB each. Second, the initiator places the blocks in one or more stripe units. At 128 KB, a stripe unit consists of 16 blocks. Third, the initiator spreads the stripe units across the cluster until they span a width of the cluster, creating a stripe. The width of the stripe depends on the number of nodes and the protection setting.

After dividing a file into stripe units, the initiator writes the data first to non-volatile random-access memory (NVRAM) and then to disk. NVRAM retains the information when the power is off. During the write transaction, NVRAM guards against failed nodes with journaling. If a node fails mid-transaction, the transaction restarts without the failed node. When the
node returns, it replays the journal from NVRAM to finish the transaction. The node also runs the AutoBalance job to check the file's on-disk striping. Meanwhile, uncommitted writes waiting in the cache are protected with mirroring. As a result, OneFS eliminates multiple points of failure.
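As a worked example of these defaults, consider a hypothetical 1 MB file:

   file size    = 1 MB = 1,024 KB
   blocks       = 1,024 KB / 8 KB  = 128 blocks of 8 KB
   stripe units = 128 blocks / 16  = 8 stripe units of 128 KB each

The initiator then spreads those eight stripe units, plus any erasure codes that the protection setting requires, across nodes in the cluster.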

Reading files
In a read operation, a node acts as a manager to gather data from the other nodes and present it to the requesting client. Because an Isilon cluster's coherent cache spans all the nodes, OneFS can store different data in each node's RAM. By using the internal InfiniBand network, a node can retrieve file data from another node's cache faster than from its own local disk. If a read operation requests data that is cached on any node, OneFS pulls the cached data to serve it quickly. In addition, for files with an access pattern of concurrent or streaming, OneFS pre-fetches in-demand data into a managing node's local cache to further improve sequential-read performance.

Metadata layout
OneFS protects metadata by spreading it across nodes and drives. Metadata, which includes information about where a file is stored, how it is protected, and who can access it, is stored in inodes and protected with locks in a B+ tree, a standard structure for organizing data blocks in a file system to provide instant lookups. Meanwhile, OneFS replicates a file's metadata at least to the protection level of the file. Working together as peers, all the nodes help manage metadata access and locking. If a node detects an error in metadata, the node looks up the metadata in an alternate location and then corrects the error.

Locks and concurrency


OneFS includes a distributed lock manager that orchestrates locks on data across all the nodes in a cluster. The lock manager grants locks for the file system, byte ranges, and protocols, including SMB share-mode locks and NFS advisory locks. OneFS also supports SMB opportunistic locks and NFSv4 delegations. Because OneFS distributes the lock manager across all the nodes, any node can act as a lock coordinator. When a thread from a node requests a lock, the lock manager's hashing algorithm typically assigns the coordinator role to a different node. The coordinator allocates a shared lock or an exclusive lock, depending on the type of request. A shared lock allows users to share a file simultaneously, typically for read operations. An exclusive lock allows only one user to access a file, typically for write operations.

Striping
In a process known as striping, OneFS segments files into units of data and then distributes the units across nodes in a cluster. Striping protects your data and improves cluster performance. To distribute a file, OneFS reduces it to blocks of data, arranges the blocks into stripe units, and then allocates the stripe units to nodes over the internal network.


At the same time, OneFS distributes erasure codes that protect the file. The erasure codes encode the file's data in a distributed set of symbols, adding space-efficient redundancy. With only a part of the symbol set, OneFS can recover the original file data. Taken together, the data and its redundancy form a protection group for a region of file data. OneFS places the protection groups on different drives on different nodes, creating data stripes.

Because OneFS stripes data across nodes that work together as peers, a user connecting to any node can take advantage of the entire cluster's performance. By default, OneFS optimizes striping for concurrent access. If your dominant access pattern is streaming (that is, lower concurrency, higher single-stream workloads, such as with video), you can change how OneFS lays out data to increase sequential-read performance. To better handle streaming access, OneFS stripes data across more drives. Streaming is most effective on clusters or subpools serving large files.

Data protection overview


An Isilon cluster is designed to serve data even when components fail. By default, OneFS protects data with erasure codes, enabling you to retrieve files when a node or disk fails. As an alternative to erasure codes, you can protect data with two to eight mirrors. When you create a cluster with five or more nodes, erasure codes deliver as much as 80 percent efficiency. On larger clusters, erasure codes provide as much as four levels of redundancy.

OneFS applies data protection at the level of the file, not the block. You can, however, set different protection levels on directories, files, file pools, subpools, and the cluster. Although a file inherits the protection level of its parent directory by default, you can change the protection level at any time. OneFS protects metadata and inodes at the same protection level as their data. A system job called FlexProtect detects and repairs degraded files.

In addition to erasure codes and mirroring, OneFS includes the following features to help protect the integrity, availability, and confidentiality of data:

Antivirus: OneFS can send files to servers running the Internet Content Adaptation Protocol (ICAP) to scan for viruses and other threats.

Clones: OneFS enables you to create clones that share blocks with other files to save space.

NDMP backup and restore: OneFS can back up data to tape and other devices through the Network Data Management Protocol. Although OneFS supports both NDMP 3-way and 2-way backup, 2-way backup requires an Isilon Backup Accelerator node.

Protection domains: You can apply protection domains to files and directories to prevent changes.

The following software modules also help protect data, but they require a separate license:


SyncIQ: SyncIQ replicates data on another Isilon cluster and automates failover and failback operations between clusters. If a cluster becomes unusable, you can fail over to another Isilon cluster.

SnapshotIQ: You can protect data with a snapshot, a logical copy of data stored on a cluster.

SmartLock: The SmartLock tool prevents users from modifying and deleting files. With a SmartLock license, you can commit files to a write-once, read-many state: the file can never be modified and cannot be deleted until after a set retention period. SmartLock can help you comply with Securities and Exchange Commission Rule 17a-4.

N+M data protection


OneFS supports N+M erasure code levels of N+1, N+2, N+3, and N+4. In the N+M data model, N represents the number of nodes, and M represents the number of simultaneous failures of nodes or drives that the cluster can handle without losing data. For example, with N+2 the cluster can lose two drives on different nodes or lose two nodes.

To protect drives and nodes separately, OneFS also supports N+M:B. In the N+M:B notation, M is the number of disk failures, and B is the number of node failures. With N+3:1 protection, for example, the cluster can lose three drives or one node without losing data. The default protection level for clusters larger than 18 TB is N+2:1. The default for clusters smaller than 18 TB is N+1.

The quorum rule dictates the number of nodes required to support a protection level. For example, N+3 requires at least seven nodes so you can maintain a quorum if three nodes fail. You can, however, set a protection level that is higher than the cluster can support. In a four-node cluster, for example, you can set the protection level at 5x. OneFS protects the data at 4x until a fifth node is added, after which OneFS automatically reprotects the data at 5x.
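The minimum cluster size for a protection level follows from the quorum rule; the sketch below simply restates that rule as arithmetic:

   # more than half of the n nodes must survive M node failures:
   #   n - M > n / 2, which simplifies to n >= 2M + 1
   # example: N+3 (M = 3) requires n >= 7 nodes, as stated above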

Data mirroring
You can protect on-disk data with mirroring, which copies data to multiple locations. OneFS supports two to eight mirrors. You can use mirroring instead of erasure codes, or you can combine erasure codes with mirroring. Mirroring, however, consumes more space than erasure codes. Mirroring data three times, for example, duplicates the data three times, which requires more space than erasure codes. As a result, mirroring suits transactions that require high performance, such as with iSCSI LUNs.


You can also mix erasure codes with mirroring. During a write operation, OneFS divides data into redundant protection groups. For files protected by erasure codes, a protection group consists of data blocks and their erasure codes. For mirrored files, a protection group contains all the mirrors of a set of blocks. OneFS can switch the type of protection group as it writes a file to disk. By changing the protection group dynamically, OneFS can continue writing data despite a node failure that prevents the cluster from applying erasure codes. After the node is restored, OneFS automatically converts the mirrored protection groups to erasure codes.

The file system journal


A journal, which records file-system changes in a battery-backed NVRAM card, recovers the file system after failures, such as a power loss. When a node restarts, the journal replays file transactions to restore the file system.

Virtual hot spare


When a drive fails, OneFS uses space reserved in a subpool instead of a hot spare drive. The reserved space is known as a virtual hot spare. In contrast to a spare drive, a virtual hot spare automatically resolves drive failures and continues writing data. If a drive fails, OneFS migrates data to the virtual hot spare to reprotect it. You can reserve as many as four disk drives as a virtual hot spare.

Balancing protection with storage space


You can set protection levels to balance protection requirements with storage space. Higher protection levels typically consume more space than lower levels because you lose an amount of disk space to storing erasure codes. The overhead for the erasure codes depends on the protection level, the file size, and the number of nodes in the cluster. Since OneFS stripes both data and erasure codes across nodes, the overhead declines as you add nodes.
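As a simplified illustration of the overhead, treat each protection stripe as a set of data stripe units plus erasure-code units; the real layout varies with file size and cluster size:

   efficiency = data units / (data units + erasure-code units)
   # example: 4 data units + 1 erasure-code unit per stripe
   #   4 / (4 + 1) = 80 percent usable space

This matches the efficiency figure cited earlier for clusters of five or more nodes.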

VMware integration
OneFS integrates with several VMware products, including vSphere, vCenter, and ESXi. For example, OneFS works with the VMware vSphere API for Storage Awareness (VASA) so that you can view information about an Isilon cluster in vSphere. OneFS also works with the VMware vSphere API for Array Integration (VAAI) to support the following features for block storage: hardware-assisted locking, full copy, and block zeroing. VAAI for NFS requires an ESXi plug-in. With the Isilon for vCenter plug-in, you can back up and restore virtual machines on an Isilon cluster. With the Isilon Storage Replication Adapter, OneFS integrates with the VMware vCenter Site Recovery Manager to recover virtual machines that are replicated between Isilon clusters.

The iSCSI option


Block-based storage offers flexible storage and access. OneFS enables clients to store block data on an Isilon cluster by using the Internet Small Computer System Interface (iSCSI) protocol. With the iSCSI module, you can configure block storage for Windows, Linux, and VMware systems. On the network side, the logical network interface (LNI) framework dynamically manages interfaces for network resilience. You can combine multiple network interfaces with LACP
and LAGG to aggregate bandwidth and to fail over client sessions. The iSCSI module requires a separate license.

Software modules
You can license additional EMC Isilon software modules to manage a cluster by using advanced features.
SmartLock: SmartLock protects critical data from malicious, accidental, or premature alteration or deletion to help you comply with SEC 17a-4 regulations. You can automatically commit data to a tamper-proof state and then retain it with a compliance clock.

SyncIQ automated failover and failback: SyncIQ replicates data on another Isilon cluster and automates failover and failback between clusters. If a cluster becomes unusable, you can fail over to another Isilon cluster. Failback restores the original source data after the primary cluster becomes available again.

File clones: OneFS provides provisioning of full read/write copies of files, LUNs, and other clones. OneFS also provides virtual machine linked cloning through VMware API integration.

SnapshotIQ: SnapshotIQ protects data with a snapshot, a logical copy of data stored on a cluster. A snapshot can be restored to its top-level directory.

SmartPools: SmartPools enable you to create multiple file pools governed by file-pool policies. The policies move files and directories among node pools or tiers. You can also define how OneFS handles write operations when a node pool or tier is full.

SmartConnect: A SmartConnect license adds advanced balancing policies to evenly distribute CPU usage, client connections, or throughput. The licensed mode also lets you define IP address pools to support multiple DNS zones in a subnet. In addition, SmartConnect supports IP failover, also known as NFS failover.

InsightIQ: The InsightIQ virtual appliance monitors and analyzes the performance of your Isilon cluster to help you optimize storage resources and forecast capacity.

Aspera for Isilon: Aspera moves large files over long distances fast. Aspera for Isilon is a cluster-aware version of Aspera technology for non-disruptive, wide-area content delivery.

iSCSI: OneFS supports the Internet Small Computer System Interface (iSCSI) protocol to provide block storage for Windows, Linux, and VMware clients. The iSCSI module includes parallel LUN allocation and zero-copy support.

HDFS: OneFS works with the Hadoop Distributed File System protocol to help clients running Apache Hadoop, a framework for data-intensive distributed applications, analyze big data.

SmartQuotas: The SmartQuotas module tracks disk usage with reports and enforces storage limits with alerts.


CHAPTER 2 Authentication and access control

OneFS supports several methods for ensuring that your cluster remains secure, through UNIX- and Windows-style data access permissions as well as configuration controls including role-based administration and access zones. OneFS is designed for a mixed environment in which both Windows Access Control Lists (ACLs) and standard UNIX permissions can be configured on the cluster file system. Windows and UNIX permissions cannot coexist on a single file or directory; however, OneFS uses identity mapping to translate between Windows and UNIX permissions as needed.

Access zones enable you to partition authentication control configuration based on the IP address that a user connects to on the cluster. OneFS includes a built-in access zone named "system." By default, new authentication providers, SMB shares, and NFS exports are added to the system zone. When a new IP address pool is added to the cluster, you can select a single access zone that will be used when connecting to any IP address in that pool.

Roles enable you to assign privileges to member users and groups. By default, only the "root" and "admin" users can log in to the web administration interface through HTTP, or the command-line interface (CLI) through SSH. The root and admin users can then assign other users to built-in or custom roles with login privileges and other privileges that are required to perform administrative functions. It is recommended that you assign users to roles that contain the minimum set of privileges necessary.

In most situations, the default permission policy settings, system access zone, and built-in roles are sufficient; however, you can create additional access zones and custom roles and modify permission policies as necessary for your particular environment.
• Data access control.................................................29
• Roles and privileges................................................31
• Authentication......................................................34
• Identity management.................................................36
• Access zones........................................................41
• Home directories....................................................42
• Managing access permissions.........................................45
• Managing roles......................................................52
• Create a local user.................................................53
• Create a local group................................................54
• Managing users and groups...........................................55
• Creating file providers.............................................56
• Managing file providers.............................................57
• Create an Active Directory provider.................................59
• Managing Active Directory providers.................................60
• Create an LDAP provider.............................................62
• Managing LDAP providers.............................................65
• Create a NIS provider...............................................65
• Managing NIS providers..............................................66
• Create an access zone...............................................67
• Managing access zones...............................................68


Data access control


OneFS supports two types of authorization data on a file: Windows-style access control lists (ACLs) and POSIX mode bits (UNIX permissions). The type used is based on the ACL policies that are set and on the file-creation method. Generally, files that are created over SMB or in a directory that has an ACL receive an ACL; otherwise, OneFS relies on the POSIX mode bits that define UNIX permissions. In either case, the owner can be represented by a UNIX identifier (UID or GID) or by its Windows identifier (SID). The primary group can be represented by a GID or SID. Although mode bits are present when a file has an ACL, the mode bits are provided only for protocol compatibility and are not used for access checks.

During user authorization, OneFS compares the access token that is generated during the initial connection with the authorization data on the file. All user and identity mapping occurs during token generation; no mapping takes place when evaluating permissions.

Access to a file or directory can be governed by either a Windows access control list (ACL) or UNIX mode bits. Regardless of the security model, OneFS enforces access rights consistently across access protocols. A user is granted or denied the same rights to a file when using SMB for Windows file sharing as when using NFS for UNIX file sharing.

The OneFS file system ships with UNIX permissions. By using Windows Explorer or OneFS administrative tools, you can give a file or directory an ACL. In addition to Windows domain users and groups, ACLs in OneFS can include local, NIS, and LDAP users and groups. After you give a file an ACL, OneFS stops enforcing the file's mode bits, which remain only as an estimate of the effective permissions.

An EMC Isilon cluster includes global policy settings that enable you to customize the default ACL and UNIX permissions to best support your environment. Although you can configure ACL policies to optimize a cluster for UNIX or Windows, you should do so only if you understand how ACL and UNIX permissions interact.

ACLs
In Windows environments, file and directory permissions, referred to as access rights, are defined in access control lists (ACLs). Although ACLs are more complex than mode bits, ACLs can express much more granular sets of access rules. OneFS uses the ACL processing rules commonly associated with Windows ACLs.

A Windows ACL contains zero or more access control entries (ACEs), each of which represents the security identifier (SID) of a user or a group as a trustee. In OneFS, an ACL can contain ACEs with a UID, GID, or SID as the trustee. Each ACE contains a set of rights that allow or deny access to a file or folder. An ACE can optionally contain an inheritance flag to specify whether the ACE should be inherited by child folders and files.

Instead of the standard three permissions available for mode bits, ACLs have 32 bits of fine-grained access rights. Of these, the upper 16 bits are general and apply to all object types. The lower 16 bits vary between files and directories but are defined in a way that allows most applications to use the same bits for files and directories.

Rights can be used for granting or denying access for a given trustee. A user's access can be blocked explicitly through a deny ACE. Access can also be blocked implicitly by ensuring that the user does not directly (or indirectly through a group) appear in an ACE that grants the right in question.


UNIX permissions
In a UNIX environment, file and directory access is controlled by POSIX mode bits, which grant read, write, or execute permissions to the owning user, the owning group, and everyone else. OneFS supports the standard UNIX tools for changing permissions, chmod and chown. For more information, see the OneFS man pages for the chmod, chown, and ls commands.

All files contain 16 permission bits, which provide information about the file or directory type and the permissions. The lower 9 bits are grouped as three 3-bit sets, called triples, which contain the read (r), write (w), and execute (x) permissions for each class of users (owner, group, and other). You can set permissions flags to grant permissions to each of these classes. Assuming the user is not root, OneFS uses the class to determine whether to grant or deny access to the file. The classes are not cumulative; the first class matched is used. It is therefore common to grant permissions in decreasing order.
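For example, the following standard commands set and verify a common permission pattern; the path and account names are hypothetical:

   # rwx for the owner, r-x for the group, and no access for others
   chmod 750 /ifs/data/project
   chown jdoe:engineering /ifs/data/project
   ls -ld /ifs/data/project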

Mixed-permission environments
When a file operation requests an object's authorization data (for example, with the ls -l command over NFS or with the Security tab of the Properties dialog box in Windows Explorer over SMB), OneFS attempts to provide that data in the requested format. In an environment that mixes UNIX and Windows systems, some translation may be required when performing create file, set security, get security, or access operations.

NFS access of Windows-created files


If a file contains an owning user or group that is a SID, the system attempts to map it to a corresponding UID or GID before returning it to the caller. In UNIX, authorization data is retrieved by calling stat(2) on a file and examining the owner, group, and mode bits. Over NFSv3, the GETATTR command functions similarly. The system approximates the mode bits and sets them on the file whenever its ACL changes. Mode bit approximations need to be retrieved only to service these calls. SID-to-UID and SID-to-GID mappings are cached in both the OneFS ID mapper and the stat cache. If a mapping has recently changed, the file might report inaccurate information until the file is updated or the cache is flushed.

SMB access of UNIX-created files


No UID-to-SID or GID-to-SID mappings are performed when creating an ACL for a file; all UIDs and GIDs are converted to SIDs or principals when the ACL is returned. OneFS uses a two-step process for returning a security descriptor, which contains SIDs for the owner and primary group of an object:
1. The current security descriptor is retrieved from the file. If the file does not have a discretionary access control list (DACL), a synthetic ACL is constructed from the file's lower 9 mode bits, which are separated into three sets of permission triples, one each for owner, group, and everyone. For details about mode bits, see "UNIX permissions."
2. Two access control entries (ACEs) are created for each triple: the allow ACE contains the corresponding rights that are granted according to the permissions; the deny ACE contains the corresponding rights that are denied. In both cases, the trustee of the
ACE corresponds to the file owner, group, or everyone. After all of the ACEs are generated, any that are not needed are removed before the synthetic ACL is returned.

Roles and privileges


In addition to controlling access to files and directories through ACLs and POSIX mode bits, OneFS controls configuration-level access through administrator roles. A role is a collection of OneFS privileges, usually associated with a configuration subsystem, that are granted to members of that role as they log in to the cluster through the platform API, command-line interface, or web administration interface. OneFS includes built-in administrator roles with predefined sets of privileges that cannot be modified. You can also create additional roles with configurable sets of privileges. Privileges have one of two forms:
Action: Allows a user to perform a specific action on the cluster. For example, the ISI_PRIV_LOGIN_SSH privilege allows a user to log in to the cluster through an SSH client.

Read/Write: Allows a user to view or modify a configuration subsystem such as statistics, snapshots, or quotas. For example, the ISI_PRIV_SNAPSHOT privilege allows an administrator to create and delete snapshots and snapshot schedules. A read/write privilege can grant either read-only (RO) or read/write (RW) access. Read-only access allows a user to view configuration settings; read/write access allows a user to view and modify configuration settings.

OneFS includes a small set of privileges that allow access to Platform API URIs, but do not allow additional configuration through the CLI or web administration interface. For example, the ISI_PRIV_EVENT privilege provides access to the /platform/1/event URI, but does not allow access to the isi events CLI command. By default, API-only privileges are part of the built-in SystemAdmin role but are hidden from the system privilege list that is viewable by running the isi auth privileges command.
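For example, you can list the privileges that are visible on the system from the command line. The command name is taken from this guide; the output lines are illustrative, not a verbatim transcript:

   isi auth privileges
   # ISI_PRIV_AUTH        Configure external authentication providers.
   # ISI_PRIV_SNAPSHOT    Schedule, take, and view snapshots.
   # ...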

Built-in roles
Built-in roles include privileges to perform a set of administrative functions. The following list describes each of the built-in roles from most powerful to least powerful, including the privileges and read/write access levels (if applicable) that are assigned to each role. You can assign users and groups to built-in roles as well as to roles that you create.

SecurityAdmin: Administer security configuration on the cluster, including authentication providers, local users and groups, and role membership.
   ISI_PRIV_LOGIN_CONSOLE  (N/A)
   ISI_PRIV_LOGIN_PAPI     (N/A)
   ISI_PRIV_LOGIN_SSH      (N/A)
   ISI_PRIV_AUTH           (Read/write)
   ISI_PRIV_ROLE           (Read/write)

SystemAdmin: Administer all aspects of cluster configuration that are not specifically handled by the SecurityAdmin role.
   ISI_PRIV_LOGIN_CONSOLE  (N/A)
   ISI_PRIV_LOGIN_PAPI     (N/A)
   ISI_PRIV_LOGIN_SSH      (N/A)
   ISI_PRIV_EVENT          (Read-only)
   ISI_PRIV_LICENSE        (Read-only)
   ISI_PRIV_NFS            (Read/write)
   ISI_PRIV_QUOTA          (Read/write)
   ISI_PRIV_SMB            (Read/write)
   ISI_PRIV_SNAPSHOT       (Read/write)
   ISI_PRIV_STATISTICS     (Read/write)
   ISI_PRIV_NS_TRAVERSE    (N/A)
   ISI_PRIV_NS_IFS_ACCESS  (N/A)

AuditAdmin: View all system configuration settings.
   ISI_PRIV_LOGIN_CONSOLE  (N/A)
   ISI_PRIV_LOGIN_PAPI     (N/A)
   ISI_PRIV_LOGIN_SSH      (N/A)
   ISI_PRIV_LICENSE        (Read-only)
   ISI_PRIV_NFS            (Read-only)
   ISI_PRIV_QUOTA          (Read-only)
   ISI_PRIV_SMB            (Read-only)
   ISI_PRIV_SNAPSHOT       (Read-only)
   ISI_PRIV_STATISTICS     (Read-only)

OneFS privileges
Privileges in OneFS are assigned through role membership; they cannot be assigned directly to users and groups.
Table 1 Login privileges

OneFS privilege         | User right                                                        | Privilege type
ISI_PRIV_LOGIN_CONSOLE  | Log in from the console.                                          | Action
ISI_PRIV_LOGIN_PAPI     | Log in to the Platform API and the web administration interface.  | Action
ISI_PRIV_LOGIN_SSH      | Log in using SSH.                                                 | Action

Table 2 Security privileges

OneFS privilege  | User right                                    | Privilege type
ISI_PRIV_AUTH    | Configure external authentication providers.  | Read/Write
ISI_PRIV_ROLE    | Create new roles and assign privileges.       | Read/Write

Table 3 Configuration privileges

OneFS privilege      | User right                           | Privilege type
ISI_PRIV_NFS         | Configure the NFS server.            | Read/Write
ISI_PRIV_QUOTA       | Configure file system quotas.        | Read/Write
ISI_PRIV_SMARTPOOLS  | Configure storage pools.             | Read/Write
ISI_PRIV_SMB         | Configure the SMB server.            | Read/Write
ISI_PRIV_SNAPSHOT    | Schedule, take, and view snapshots.  | Read/Write

Table 4 Namespace privileges

OneFS privilege         | User right                                                           | Privilege type
ISI_PRIV_NS_TRAVERSE    | Traverse and view directory metadata.                                | Action
ISI_PRIV_NS_IFS_ACCESS  | Access the /ifs directory tree through the namespace REST service.  | Action

Table 5 Platform API-only privileges

OneFS privilege      | User right                                | Privilege type
ISI_PRIV_EVENT       | View and modify system events.            | Read/Write
ISI_PRIV_LICENSE     | Activate OneFS software licenses.         | Read/Write
ISI_PRIV_STATISTICS  | View file system performance statistics.  | Read/Write


Authentication
OneFS supports a variety of local and remote authentication providers to verify that users attempting to access the cluster are who they claim to be. Anonymous access, which does not require authentication, is supported for protocols that allow it.

An authentication provider must be added to an access zone before you can use it. By default, when you create an authentication provider, it is added to the built-in system zone, which already includes a local provider and a file provider. You can create multiple instances of each provider type, but it is recommended that you use only a single instance of a provider type within an access zone. For more information about creating and managing access zones, see "Access zones."

OneFS supports the concurrent use of multiple authentication providers. For example, OneFS is frequently configured to authenticate Windows clients with Active Directory and to authenticate UNIX clients with LDAP. It is important that you understand how the providers interact before enabling multiple providers on the cluster.

Authentication providers support a mix of the following features:
• Authentication. All authentication providers support plain text authentication; some providers can also be configured to support NTLM or Kerberos authentication.
• Ability to manage users and groups directly on the cluster.
• Netgroups. Used primarily by NFS, netgroups configure access to NFS exports.
• UNIX-centric user and group properties such as login shell, home directory, UID, and GID. Missing information is supplemented by configuration templates or additional authentication providers.
• Windows-centric user and group properties such as NetBIOS domain and SID. Missing information is supplemented by configuration templates.

Local provider
The local provider provides authentication and lookup facilities for user accounts that were added by an administrator. Local users do not include system accounts such as root or admin. The local provider also maintains local group membership. Local authentication can be useful when Active Directory, LDAP, or NIS directory services are not used, or when a specific user or application needs to access the cluster. Unlike UNIX groups, local groups can include built-in groups and Active Directory groups as members. Local groups can also include users from other providers. Netgroups are not supported in the local provider. Each access zone in the cluster contains a separate instance of the local provider, which allows each access zone to have its own list of local users that can authenticate to it.

File provider
A file provider enables you to supply an authoritative third-party source of user and group information to the cluster. A third-party source is useful in UNIX environments where passwd, group, and netgroup files are synchronized across multiple UNIX servers. OneFS uses standard BSD /etc/spwd.db and /etc/group database files as the backing store for the file provider. You generate the spwd.db file by running the pwd_mkdb command-line utility. You can script updates to the database files.
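For example, the following sketch compiles a BSD-format master.passwd file into the spwd.db backing store. The paths are hypothetical; -d sets the directory in which the generated database files are written:

   pwd_mkdb -d /ifs/data/auth /ifs/data/auth/master.passwd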


The built-in system file provider includes services to list, manage, and authenticate against system accounts such as root, admin, and nobody. Modifying the system file provider is not recommended.

Active Directory
The Active Directory directory service is a Microsoft implementation of Lightweight Directory Access Protocol (LDAP), Kerberos, and DNS technologies that can store information about network resources. Active Directory can serve many functions, but the primary reason for joining the cluster to an Active Directory domain is to perform user and group authentication. When the cluster joins an Active Directory domain, a single Active Directory machine account is created. The machine account is used to establish a trust relationship with the domain and to enable the cluster to authenticate and authorize users in the Active Directory forest. By default, the machine account is named the same as the cluster; however, if the cluster name is more than 15 characters long, the name is hashed and displayed after joining the domain. Whenever possible, a single Active Directory instance should be used when all domains have a trust relationship. Multiple instances should be used only to grant access to multiple sets of mutually-untrusted domains.

LDAP
The Lightweight Directory Access Protocol (LDAP) is a networking protocol that enables you to define, query, and modify directory services and resources. OneFS can authenticate users and groups against an LDAP repository in order to grant them access to the cluster. The LDAP service supports the following features:
• Users, groups, and netgroups.
• Configurable LDAP schemas. For example, the ldapsam schema allows NTLM authentication over the SMB protocol for users with Windows-like attributes.
• Simple bind authentication, with and without SSL; a verification sketch follows this list.
• Redundancy and load balancing across servers with identical directory data.
• Multiple LDAP provider instances for accessing servers with different user data.
• Encrypted passwords.
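Before you create an LDAP provider, you can confirm that a simple bind and user lookup succeed by using the standard OpenLDAP tools from any UNIX host. The server, base DN, and uid below are placeholders:

   # -x requests a simple (non-SASL) bind; -H gives the server URI; -b sets the search base
   ldapsearch -x -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=jsmith)"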

NIS
The Network Information Service (NIS) provides authentication and identity uniformity across local area networks. OneFS includes a NIS authentication provider that enables you to integrate the cluster with your NIS infrastructure. NIS, designed by Sun Microsystems, can be used to authenticate users and groups when they access the cluster. The NIS provider exposes the passwd, group, and netgroup maps from a NIS server. Hostname lookups are also supported. Multiple servers can be specified for redundancy and load balancing. NIS is different from NIS+, which OneFS does not support.


Authentication provider features


The following table compares features that are available with each of the authentication providers that OneFS supports. In the following table, an 'x' indicates that a feature is fully supported by a provider; an asterisk (*) indicates that additional configuration or support from another provider is required.

Authentication provider | NTLM | Kerberos | User/group management | Netgroups | UNIX properties | Windows properties
Active Directory        | x    | x        |                       |           | *               | x
LDAP                    | *    |          |                       | x         | x               |
NIS                     |      |          |                       | x         | x               |
Local                   | x    |          | x                     |           | x               | x
File                    | x    |          | x                     | x         | x               | x

Identity management
There are several methods by which a user can be identified. UNIX users are represented by a user or group identifier (UID or GID); Windows users are represented by a security identifier (SID). Names can also be used as identifiers in one of a variety of formats, depending on their source (for example, SMB, NFSv3, NFSv4, or Kerberos). OneFS provides advanced identity management options to equate these different identity types and enable proper access controls.

Identity types
OneFS supports three primary identity types, each of which can be stored directly on the file system: user identifier (UID) and group identifier (GID) for UNIX, and security identifier (SID) for Windows. These identity types are used when creating files, checking file ownership or group membership, and performing file access checks. In OneFS, names are classified as a secondary identifier and are used for authentication but never for authorization.

UNIX and Windows identifiers are formatted as follows:

- A UID or GID is a 32-bit number with a maximum value of 4,294,967,295.
- A SID is a series of authorities and sub-authorities ending with a 32-bit relative identifier (RID). Most SIDs have the form S-1-5-21-A-B-C-<RID>, where A, B, and C are specific to a domain or computer and <RID> denotes the object in the domain.

When a name is provided as an identifier, it is converted into the corresponding user or group object and the correct identity type. There are various ways that a name can be entered or displayed:

- UNIX assumes unique case-sensitive namespaces for users and groups. For example, "Name" and "name" represent different objects.
- Windows provides a single, case-insensitive namespace for all objects and also specifies a prefix to target an Active Directory domain (for example, domain\name).


- Kerberos and NFSv4 define principals, which require names to be formatted the same way as email addresses (for example, name@domain.com).

Multiple names can reference the same object. For example, given the name "support" and the domain "example.com", support, EXAMPLE\support, and support@example.com are all names for a single object in Active Directory.

Access tokens
Access tokens form the basis of who you are when performing actions on the cluster, and they supply the primary owner and group identities to use during file creation. Access tokens are also compared against the ACL or mode bits during authorization checks.

An access token includes all UIDs, GIDs, and SIDs for an identity, as well as all OneFS privileges. OneFS exclusively uses the information in the token to determine whether a user has access to a resource. It is important that the token contains the correct list of UIDs, GIDs, and SIDs at all times.

An access token is created from one of the following sources:

Source                                  Authentication method
Username                                SMB impersonate user; Kerberized NFSv3; Kerberized NFSv4; mountd root mapping; HTTP; FTP
Privilege Attribute Certificate (PAC)   SMB NTLM; Active Directory Kerberos
User identifier (UID)                   NFS AUTH_SYS mapping

Access token generation


For most protocols, the access token is generated from the username or from the authorization data retrieved during authentication. The process of token generation and user mapping is described below:

1. Using the initial identity, the user is looked up in all configured authentication providers in the access zone, in the order in which they are listed, until a match is found. (An exception to this behavior occurs if the AD provider is configured to call other providers, such as LDAP or NIS.) The user identity and group list are retrieved from the authenticating provider, and any SIDs, UIDs, or GIDs are added to the initial token.
2. All identities in the token are queried in the ID mapper. All SIDs are converted to their equivalent UID/GID and vice versa. These ID mappings are also added to the access token.
3. If the username matches any user mapping rules, the rules are processed in order and the token is updated accordingly. (For details about user mapping rules, see "User mapping.")


4. The default on-disk identity is calculated by using the final token and the global setting. These identities are used for newly created files.
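To verify the result of this process for a particular user, you can inspect the final access token from the command line. The subcommand and flag shown here are assumptions; confirm the exact syntax in the isi auth mapping entry of the OneFS Command Reference:

isi auth mapping token --user=jsmith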

ID mapping
The file access protocols provided by OneFS support a limited number of identity types: UIDs, GIDs, and SIDs. When an identity is requested in a type that does not match the stored type, a mapping is required. Administrators with advanced knowledge of UNIX and Windows identities can modify the default settings that determine how those identities are mapped in the system.

Mappings are stored in a cluster-distributed database called the ID mapper. When retrieving a mapping from the database, the ID mapper takes a source identity and a target identity type (UID, GID, or SID) as input. If a mapping already exists between the specified source and the requested type, that mapping is returned; otherwise, a new mapping is created. Each mapping is stored as a one-way relationship from source to destination; two-way mappings are presented as two complementary one-way mappings in the database.

There are four types of identity mappings. The mapping type and identity source determine whether a mapping is stored persistently in the ID mapper.

- External mappings are derived from identity sources outside OneFS. For example, Active Directory (AD) can store a UID or GID along with a SID. When the SID is retrieved from AD, the UID/GID is also retrieved and used for mappings on OneFS. By default, mappings derived from AD are not persistently stored in the ID mapper, but mappings from other external identity sources, including LDAP and NIS, are persistently stored.
- Algorithmic mappings are created by adding a UID or GID to a well-known base SID, resulting in a temporary "UNIX SID." (For more information, see Mapping UNIX IDs to Windows IDs.) Unlike external mappings, algorithmic mappings are not persistently stored in the ID mapper database.
- Manual mappings are set explicitly by running the isi auth mapping command at the command line. For command syntax and examples, see the OneFS Command Reference. Manual mappings are stored persistently in the ID mapper database.
- Automatic mappings are generated if no other mapping type can be found. A SID is mapped to a UID or GID out of the default range of 1,000,000-2,000,000. This range is assumed to be otherwise unused, and a check is made only to ensure that there is no mapping from the given UID before it is used. After creation, these mappings are stored persistently in the ID mapper database.
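For example, a manual UID-to-SID mapping might be created with a command of the following general form. The identifiers are hypothetical and the flag names are assumptions; see the isi auth mapping entry in the OneFS Command Reference for the exact syntax:

isi auth mapping create --source-uid=4000 --target-sid=S-1-5-21-1234567890-987654321-123456789-1000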

Mapping Windows IDs to UNIX IDs


If the caller requests a SID-to-UID or SID-to-GID mapping, the actual object (an AD user or group) must first be located. After the object is located, the following rules are applied to create two mappings, one in each direction:

1. If the object has an associated UID or GID (an external mapping), create a mapping from the SID.
2. If a mapping for the SID already exists in the ID mapper database, use that mapping.
3. Determine whether a lookup of the user or group in an external source is necessary, according to the following conditions:
   - The user or group is in the primary domain or one of the listed lookup domains.
   - Lookup is enabled for users or groups.
4. If a lookup is necessary, follow these steps:
   a. By default, normalize the user or group name to lowercase.
   b. Search all authentication providers except Active Directory for a matching user or group object by name.
   c. If an object is found, use the associated UID or GID to create an external mapping.
5. Allocate an automatic mapping from the configured range.

Mapping UNIX IDs to Windows IDs


UID-to-SID and GID-to-SID mappings are used only if the caller requests a mapping to be created and one does not already exist. The resulting UNIX SIDs are never stored on disk.

UIDs and GIDs have a set of predefined mappings to and from SIDs. If a UID-to-SID or GID-to-SID mapping is requested, a temporary UNIX SID is generated in the format S-1-22-1-<UID> or S-1-22-2-<GID> by using the following rules:

- UIDs are mapped to a SID with a domain of S-1-22-1 and a resource ID (RID) matching the UID. For example, the UNIX SID for UID 600 is S-1-22-1-600.
- GIDs are mapped to a SID with a domain of S-1-22-2 and a RID matching the GID. For example, the UNIX SID for GID 800 is S-1-22-2-800.
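Because these mappings are purely algorithmic, they can be computed without consulting the ID mapper at all. The following minimal shell sketch (not an Isilon tool) derives the temporary UNIX SID for a given UID or GID:

# Derive the temporary "UNIX SID" for a UID or GID (illustrative only).
unix_sid() {
  case "$1" in
    uid) echo "S-1-22-1-$2" ;;  # UIDs map into the S-1-22-1 domain
    gid) echo "S-1-22-2-$2" ;;  # GIDs map into the S-1-22-2 domain
  esac
}
unix_sid uid 600   # prints S-1-22-1-600
unix_sid gid 800   # prints S-1-22-2-800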

On-disk identity selection


OneFS can store either UNIX or Windows identities in file metadata on disk. The identities are set when a file is created or when a file's access control data is modified. Choosing the preferred identity to store is important because nearly all protocols require some level of mapping to operate correctly. You can choose to store the UNIX or the Windows identity, or you can allow OneFS to determine the optimal identity to store.

The on-disk selection does not guarantee the preferred identity can always be stored on disk. On new installations, the on-disk identity is set to native, which is optimized for a mixed Windows and UNIX environment. When you upgrade from OneFS 6.0 or earlier, the on-disk identity is set to unix to match the file system behavior of those earlier versions without requiring an upgrade of all your files and directories.

The available on-disk identities and the corresponding actions taken by the system authentication daemon are described below.
- native: Determine the identity to store on disk by checking the following ID mapping types, in order. The first rule that applies is used to set the on-disk identity.
  1. Algorithmic mappings: If an incoming SID matches S-1-22-1-<UID> or S-1-22-2-<GID> (also called a "UNIX SID"), convert it back to the corresponding UID or GID and set it as the on-disk identity.
  2. External mappings: If an incoming UID or GID is defined in an external provider (AD, LDAP, or NIS), set it as the on-disk identity.
  3. Persistent mappings (usually created with the isi auth mapping create command): If an incoming identity has a mapping that is stored persistently in the ID mapper database, store the incoming identity as the on-disk identity unless the mapping is flagged as on-disk, in which case set the target ID as the on-disk identity. For example, if a mapping of GID:10000 -> S-1-5-32-545 exists and the --on-disk option has been set with the isi auth mapping modify command, a request for the on-disk storage of GID:10000 returns S-1-5-32-545.
  4. Automatic mappings: If an incoming SID is not mapped to a UID or GID, set the SID as the on-disk identity. If a UNIX identifier is later required (for example, for cross-protocol NFS or local file system access), a mapping to an auto-allocated UNIX identifier is created.
- unix: Always store incoming UNIX identifiers on disk. For incoming SIDs, search the configured authentication providers by user name. If a match is found, the SID is mapped to either a UID or a GID. If the SID does not exist on the cluster (for example, because it is local to the client or part of an untrusted AD domain), a UID or GID is allocated from the ID mapper database and stored on disk, and the resulting SID-to-UID or SID-to-GID mapping is stored in the ID mapper database.
- sid: Store incoming SIDs on disk, with the exception of temporary UNIX SIDs, which are always converted back to their corresponding UNIX identifiers before being stored on disk. For incoming UIDs or GIDs, search the configured authentication providers. If a match is found, store the SID on disk; otherwise, store the UNIX identity.

User mapping across identities


User mapping provides a way to control the permissions given to users by specifying which user and group identifiers (SIDs, UIDs, and GIDs) the user has. These identifiers are used when creating files and when checking file or group ownership. Mapping rules can be used to rename users, add supplemental user identities, and modify a user's group membership. User mapping is performed only during login or protocol access.

Configuring user mapping


You can create and configure user mapping rules in each access zone. By default, every mapping rule is processed. This behavior allows multiple rules to be applied, but it can present problems when applying a "deny all" rule such as "deny all unknown users." Additionally, replacement rules can interact with rules that contain wildcard characters. To minimize complexity when configuring multiple mapping rules, it is recommended that you group rules by type and organize them in the following order:

1. Replacements: Any user renaming should be processed first to ensure that all instances of the name are replaced.
2. Joins: After the names are set by any replacement operations, use join, add, and insert rules to add extra identifiers.
3. Allow/deny: All processing must be stopped before a default deny rule can be applied. To do this, create a rule that matches allowed users but does nothing (such as an add operator with no field options) and has the break option. After enumerating the allowed users, a catchall deny can be placed at the end to replace anybody unmatched with an empty user.

Within each group of rules, put explicit rules before rules involving wildcard characters; otherwise, the explicit rules might be skipped.
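As a sketch of this ordering, a complete rule set might look like the following. The operator notation here is simplified pseudo-syntax, not the exact OneFS rule grammar; see the user mapping documentation for the real syntax:

# 1. Replacement: rename the Active Directory user to the UNIX account.
#      CORP\jane => jane
# 2. Join: attach the AD identities to the renamed account.
#      jane &= CORP\jane
# 3. Allow/deny: a no-op rule with the break option stops processing for
#    allowed users; a final catchall replaces everyone else with an empty user.
#      jane [break]
#      * => <empty>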


Well-known security identifiers


The OneFS file system can store any SID it receives in a file's ACL, including all well-known SIDs; however, only a small subset of well-known SIDs have meaning in OneFS.
Table 6 Well-known SIDs

- S-1-1-0 (Everyone): A system-controlled list of all users, including anonymous users and guests. If set on an ACL, this SID grants file or directory access to all users. If assigned to a role, all users are considered members of that role.
- S-1-3-0 (Creator Owner): A placeholder in an inheritable access control entry (ACE) for the identity of the object's creator. This well-known SID is replaced when the ACE is inherited.
- S-1-3-1 (Creator Group): A placeholder in an inheritable ACE for the identity of the object creator's primary group. This well-known SID is replaced when the ACE is inherited.
- S-1-3-4 (Owner Rights): A group that represents the object's current owner which, when applied to an object through an ACE, instructs the system to ignore the object owner's implied READ_CONTROL and WRITE_DAC permissions.
- S-1-5-21-domain-501 (Guest): An account for users who do not have individual accounts. This account does not require a password and cannot log in to a shell. By default, the Guest account is mapped to the UNIX 'nobody' account and is disabled.
- S-1-5-32-544 (Administrators): A built-in group whose members can administer the cluster through Microsoft MMC RPC calls. After the initial OneFS installation, this group contains only the Administrator account. The Domain Admins group is added to this group the first time a cluster is joined to an Active Directory domain.

Access zones
Access zones provide a way to partition cluster configuration into self-contained units, allowing a subset of parameters to be configured as a virtual cluster. OneFS includes a built-in access zone called "system." By default, all cluster IP addresses connect to the system zone, which contains all configured authentication providers, all available SMB shares, and all available NFS exports.

Access zones contain all of the configuration settings that are necessary to support authentication and identity management services in OneFS. You can create additional access zones and configure each zone with its own set of authentication providers, user mapping rules, and SMB shares. NFS users can be authenticated against the system zone only.

Multiple access zones are particularly useful for server consolidation, for example when merging multiple Windows file servers that are potentially joined to different untrusted forests. If you create access zones, it is recommended that you use them for data access only and that you use the system zone strictly for configuration access. To use an access zone, you must configure your network settings to map an IP address pool to the zone.

Home directories
When you create a local user, OneFS automatically creates a home directory for the user. OneFS also supports dynamic home directory creation for users who access the cluster by connecting to an SMB share or by logging in through FTP or SSH. Regardless of the method by which a home directory was created, you can configure access to the home directory through a combination of SMB, SSH, and FTP.

Home directory creation through SMB


You can create a special SMB share that includes expansion variables in the share path, enabling users to access their home directories by connecting to the share. You can enable dynamic creation of home directories that do not exist at SMB connection time. By default, an SMB share's directory path is created with a synthetic ACL based on mode bits. You can enable the "inheritable ACL" setting on a share to specify that, if the parent directory has an inheritable ACL, the ACL is inherited on the share path. For details about creating and modifying SMB shares through the CLI, see the isi smb create and isi smb modify entries in the OneFS Command Reference.
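For example, a home directory share might be created with a command of the following general form. The share name and path are illustrative, and the exact flags should be verified against the isi smb create entry in the OneFS Command Reference:

isi smb create share homedir --path="/ifs/home/%U"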

Home directory creation through SSH and FTP


For users who access the cluster through SSH or FTP, you can configure home directory support by modifying authentication provider settings. The following authentication provider settings determine how home directories are set up.

- Home Directory Naming: Specifies the path to use as a template for naming home directories. The path must begin with /ifs and can contain variables, such as %U, that are expanded to generate the home directory path for the user.
- Create home directories on first login: Specifies whether to create a home directory the first time a user logs in, if a home directory does not already exist for the user.
- UNIX Shell: Specifies the path to the user's login shell. This setting applies only to users who access the file system through SSH.


Home directory creation in mixed environments


If a user logs in through both SMB and SSH, it is recommended that you set up the home directory so that the path template is the same in both the SMB share and the authentication provider against which the user authenticates through SSH.

Home directory permissions


A user's home directory can be set up with a Windows ACL or with POSIX mode bits, which are then converted into a synthetic ACL. The method by which a home directory is created determines the initial permissions that are set on the home directory. When you create a local user, the user's home directory is created with mode bits by default. For users who authenticate against external sources, home directories can be dynamically created at login time. If a home directory is created during a login through SSH or FTP, it is set up with mode bits; if a home directory is created during an SMB connection, it receives either mode bits or an ACL. For example, if an LDAP user first logs in through SSH or FTP, the user's home directory is created with mode bits. However, if the same user first connects through an SMB share, the home directory is created with the permissions indicated by the configured SMB settings. If the "inherited path ACL" setting is enabled, an ACL is generated; otherwise, mode bits are used. Because SMB sends an NT password hash to authenticate SMB users, only users from authentication providers that can handle NT hashes can log in over SMB. These providers include the local provider, Active Directory, and LDAP with Samba extensions enabled. File, NIS, and non-Samba LDAP users cannot log in over SMB.

Dot file provisioning


Home directories that are created through SSH or FTP are provisioned with configuration files that are pulled from a template "skeleton" directory. The skeleton directory is defined in the configuration settings of the user's access zone. The skeleton directory, which is located at /usr/share/skel by default, contains a set of files that are copied to the user's home directory when a local user is created or when a user home directory is dynamically created during login. Files in the skeleton directory whose names begin with "dot." are renamed to remove the "dot" prefix when they are copied to the user's home directory. For example, dot.cshrc is copied to the user's home directory as .cshrc. This naming convention makes the dot files in the skeleton directory visible through the command-line interface without requiring the ls -a command. For SMB shares that might use home directories that were provisioned with dot files, you can set an option to prevent users who connect to the share through SMB from viewing the dot files. For example, the following command modifies an SMB share named "homedir" to hide dot files:
isi smb modify share homedir --hide-dot-files=yes

For a user who can access the cluster both through SMB and through an SSH or FTP login, dot files are not provisioned if the user's home directory is created dynamically through an SMB connection; however, they can be manually copied from the skeleton directory of the user's access zone. You can find the location of the skeleton directory by running the isi zone zones view command through the OneFS command-line interface.


Default home directory settings in authentication providers


The default settings that affect how home directories are set up differ based on the authentication provider that the user authenticates against.

Authentication provider   Home directory naming   Home directory creation   UNIX login shell
Local                     /ifs/home/%U            Enabled                   /bin/sh
File                      None                    Disabled                  None
Active Directory          /ifs/home/%D/%U *       Disabled                  /bin/sh
LDAP                      None                    Disabled                  None
NIS                       None                    Disabled                  None

* For Active Directory users, provider information, if available, overrides this value.

Supported expansion variables


You can include expansion variables in an SMB share path or in an authentication provider's home directory template. OneFS supports the following expansion variables.

- %U: Expands to the user name (for example, user_001). This variable is typically included at the end of the path, for example, /ifs/home/%U.
- %D: Expands to the user's domain name, which varies by authentication provider:
  - For Active Directory users, %D expands to the Active Directory NetBIOS name.
  - For local users, %D expands to the cluster name in uppercase characters. For example, given a cluster named cluster1, %D expands to CLUSTER1.
  - For users in the system file provider, %D expands to UNIX_USERS.
  - For users in a file provider other than the system provider, %D expands to FILE_USERS.
  - For LDAP users, %D expands to LDAP_USERS.
  - For NIS users, %D expands to NIS_USERS.
- %Z: Expands to the access zone name (for example, System). If multiple zones are activated, this variable is useful for differentiating users in separate zones. For example, given the path /ifs/home/%Z/%U, a user named "admin7" in the system zone is mapped to /ifs/home/System/admin7.
- %L: Expands to the host name of the cluster, normalized to lowercase.
- %0: Expands to the first character of the user name.
- %1: Expands to the second character of the user name.
- %2: Expands to the third character of the user name.

If the user name includes fewer than three characters, the %0, %1, and %2 variables wrap around. For example, given a user named "ab", the %2 variable maps to a; given a user named "a", all three variables map to a.
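For example, the character variables can be combined with %U to spread a large number of home directories across subdirectories. The template below is hypothetical:

# Template: /ifs/home/%0/%1/%U
#   user "jsmith" -> /ifs/home/j/s/jsmith
#   user "ab"     -> /ifs/home/a/b/ab
#   user "a"      -> /ifs/home/a/a/a   (the variables wrap around)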

Managing access permissions


The internal representation of identities and permissions can contain information from UNIX sources, Windows sources, or both. Because access protocols can process the information from only one of these sources, the system may need to make approximations to present the information in a format the protocol can process.

Configure access management settings


You can configure default access settings, including the identity type to store on disk, the Windows workgroup name to use when running in local mode, whether to send NTLMv2 responses for SMB connections, and character substitution for spaces encountered in user and group names. If you change the on-disk identity, it is recommended that you also run the Repair Permissions job.

1. Click Cluster Management > Access Management > Settings.
2. Configure the following settings as needed.
   - Send NTLMv2: Configures the type of NTLM response that is sent to an SMB client. Acceptable values are yes and no (the default).
   - On-Disk Identity: Controls the preferred identity to store on disk. If OneFS is unable to convert an identity to the preferred format, it is stored as-is. This setting does not affect identities that are currently stored on disk. Select one of the following settings:
     - native: Let OneFS determine the identity to store on disk. This is the recommended setting.
     - unix: Always store incoming UNIX identifiers (UIDs and GIDs) on disk.
     - sid: Store incoming Windows security identifiers (SIDs) on disk, unless the SID was generated from a UNIX identifier; in that case, convert it back to the UNIX identifier and store it on disk.
     If you change the on-disk identity selection, permission errors may occur unless you run the Repair Permissions job as described in the final step of this procedure.
   - Workgroup: Specifies the NetBIOS workgroup. The default value is WORKGROUP.
   - Space Replacement: For clients that have difficulty parsing spaces in user and group names, specifies a substitute character.
3. Click Save.
4. If you changed the on-disk identity, run the Repair Permissions job with the Convert permissions task to prevent potential permission errors.
   a. Click Protocols > ACLs > Repair Permissions Job.
   b. Optional: Modify the Priority and Impact policy settings.
   c. For the Repair task setting, select Convert permissions.
   d. For the Path to repair setting, type or click Browse to select the path to the directory whose permissions you want to repair.
   e. For the Target setting, ensure that the Use default system type option is selected.
   f. For the Access Zone setting, select the zone that is using the directory specified in the Path to repair setting.
   g. Click Start.

Modify ACL policy settings


The default ACL policy settings are sufficient for most cluster deployments. Because they change the behavior of permissions throughout the system, these policies should be modified only as necessary by experienced administrators with advanced knowledge of Windows ACLs. This is especially true for the advanced settings, which are applied regardless of the cluster's environment.

For UNIX, Windows, or balanced environments, the optimal permission policy settings are selected and cannot be modified. However, you can choose to manually configure the cluster's default permission settings if necessary to support your particular environment.

1. Click Protocols > ACLs > ACL Policies.
2. In the Standard Settings section, under Environment, select the setting that best describes your environment, or select Configure permission policies manually to configure individual permission policies.
   - UNIX only: Causes cluster permissions to operate with UNIX semantics, as opposed to Windows semantics. Enabling this option prevents ACL creation on the system.
   - Balanced: Causes cluster permissions to operate in a mixed UNIX and Windows environment. This setting is recommended for most cluster deployments.
   - Windows only: Causes cluster permissions to operate with Windows semantics, as opposed to UNIX semantics. Enabling this option causes the system to return an error on UNIX chmod requests.
   - Configure permission policies manually: Allows you to configure the individual permission policies available under Permission Policies.
3. If you selected the Configure permission policies manually option, configure the following settings as needed.

   ACL creation over SMB: Specifies whether to allow or deny creation of ACLs over SMB. Select one of the following options.
   - Do not allow the creation of ACLs over Windows File Sharing (SMB): Prevents ACL creation on the cluster.
   - Allow the creation of ACLs over SMB: Allows ACL creation on the cluster.

   Note: Inheritable ACLs on the system take precedence over this setting: if inheritable ACLs are set on a folder, any new files and folders created in that folder inherit the folder's ACL. Disabling this setting does not remove ACLs currently set on files. If you want to clear an existing ACL, run the chmod -b <mode> <file> command to remove the ACL and set the correct permissions.

   chmod on files with existing ACLs: Controls what happens when a chmod operation is initiated on a file with an ACL, either locally or over NFS. This setting controls any elements that set UNIX permissions, including File System Explorer. Enabling this policy setting does not change how chmod operations affect files that do not have ACLs. Select one of the following options.
   - Remove the existing ACL and set UNIX permissions instead: For chmod operations, removes any existing ACL and instead sets the chmod permissions. Select this option only if you do not need permissions to be set from Windows.
   - Remove the existing ACL and create an ACL equivalent to the UNIX permissions: Stores the UNIX permissions in a Windows ACL. Select this option only if you want to remove Windows permissions but do not want files to have synthetic ACLs.
   - Remove the existing ACL and create an ACL equivalent to the UNIX permissions, for all users/groups referenced in old ACL: Stores the UNIX permissions in a Windows ACL. Select this option only if you want to remove Windows permissions but do not want files to have synthetic ACLs.
   - Merge the new permissions with the existing ACL: Causes Windows and UNIX permissions to operate smoothly in a balanced environment by merging permissions that are applied by chmod with existing ACLs. An ACE for each identity (owner, group, and everyone) is either modified or created, but all other ACEs are unmodified. Inheritable ACEs are also left unmodified to enable Windows users to continue to inherit appropriate permissions. However, UNIX users can set specific permissions for each of those three standard identities.
   - Deny permission to modify the ACL: Prevents users from making NFS and local chmod operations. Enable this setting if you do not want to allow permission sets over NFS.

   Note: If you run the chmod command with the same permissions that are currently set on a file with an ACL, the operation may silently fail: the operation appears to be successful, but if you were to examine the permissions on the cluster, you would notice that the chmod command had no effect. As a workaround, you can run the chmod command with different permissions and then run a second chmod command to revert to the original permissions. For example, if your file shows 755 UNIX permissions and you want to confirm this number, you could run chmod 700 <file> followed by chmod 755 <file>.

   ACLs created on directories by UNIX chmod: On Windows systems, the access control entries for directories can define fine-grained rules for inheritance; on UNIX, the mode bits are not inherited. Making ACLs that are created on directories by the chmod command inheritable is more secure for tightly controlled environments but may deny access to some Windows users who would otherwise expect access. Select one of the following options.
   - Make them inheritable
   - Do not make them inheritable

   chown on files with existing ACLs: Changes a file or folder's owning user or group. Select one of the following options.
   - Modify the owner and/or group permissions: Causes the chown operation to perform as it does in UNIX. Enabling this setting modifies any ACEs in the ACL associated with the old and new owner or group.
   - Do not modify the ACL: Causes the NFS chown operation to function as it does in Windows. When a file owner is changed over Windows, no permissions in the ACL are changed.

   Note: Over NFS, the chown operation changes the permissions and the owner or owning group. For example, consider a file owned by user Joe with "rwx------" (700) permissions, signifying "rwx" permissions for the owner but no permissions for anyone else. If you run the chown command to change ownership of the file to user Bob, the owner permissions are still "rwx", but they now represent the permissions for Bob rather than for Joe; Joe loses all of his permissions. This setting does not affect UNIX chown operations performed on files with UNIX permissions, and it does not affect Windows chown operations, which do not change any permissions.


   Access checks (chmod, chown): In UNIX environments, only the file owner or superuser has the right to run a chmod or chown operation on a file. In Windows environments, you can implement this policy setting to give users the right to perform chmod operations (the "change permissions" right) or the right to perform chown operations (the "take ownership" right). The "take ownership" right only gives users the ability to take file ownership, not to give ownership away. Select one of the following options.
   - Allow only owners to chmod or chown: Causes chmod and chown access checks to operate with UNIX-like behavior.
   - Allow owner and users with 'take ownership' right to chown, and owner and users with 'change permissions' right to chmod: Causes chmod and chown access checks to operate with Windows-like behavior.

4. In the Advanced Settings section, configure the following settings as needed.

   Treatment of "rwx" permissions: In UNIX environments, "rwx" permissions signify two things: a user or group has read, write, and execute permissions, and a user or group has the maximum possible level of permissions. When you assign UNIX permissions to a file, no ACLs are stored for that file. However, a Windows system processes only ACLs; Windows does not process UNIX permissions. Therefore, when you view a file's permissions on a Windows system, the cluster must translate the UNIX permissions into an ACL. This type of ACL is called a synthetic ACL. Synthetic ACLs are not stored anywhere; they are dynamically generated as needed and then discarded. If a file has UNIX permissions, you may notice synthetic ACLs when you run the ls command on the cluster to view the file's ACL. When the cluster generates a synthetic ACL, it maps UNIX permissions to Windows rights. Windows supports a more granular permissions model than UNIX does, and it specifies rights that cannot easily be mapped from UNIX permissions. When the cluster maps "rwx" permissions to Windows rights, you must enable one of the following options. The main difference between "rwx" and "Full Control" is the broader set of permissions granted with "Full Control". Select one of the following options.
   - Retain 'rwx' permissions: Generates an ACE that provides only read, write, and execute permissions.
   - Treat 'rwx' permissions as Full Control: Generates an ACE that provides the maximum Windows permissions for a user or a group by adding the "change permissions" right, the "take ownership" right, and the "delete" right.

   Group owner inheritance: Operating systems tend to work with group ownership and permissions in two different ways: BSD inherits the group owner from the file's parent folder; Windows and Linux inherit the group owner from the file creator's primary group. If you enable a setting that causes the group owner to be inherited from the creator's primary group, it can be overridden on a per-folder basis by running the chmod command to set the set-gid bit. This inheritance applies only when the file is created. For more information, see the manual page for the chmod command. Select one of the following options.
   - When an ACL exists, use Linux and Windows semantics, otherwise use BSD semantics: Controls file behavior based on whether the new file inherits ACLs from its parent folder. If it does, the file uses the creator's primary group. If it does not, the file inherits from its parent folder.
   - BSD semantics - Inherit group owner from the parent folder: Causes the group owner to be inherited from the file's parent folder.
   - Linux and Windows semantics - Inherit group owner from the creator's primary group: Causes the group owner to be inherited from the file creator's primary group.
   chmod (007) on files with existing ACLs: Specifies whether to remove ACLs when running the chmod (007) command. Select one of the following options.
   - chmod(007) does not remove existing ACL: Sets 007 UNIX permissions without removing an existing ACL.
   - chmod(007) removes existing ACL and sets 007 UNIX permissions: Removes ACLs from files over UNIX file sharing (NFS) and locally on the cluster through the chmod (007) command. If you enable this setting, be sure to run the chmod command on the file immediately after using chmod (007) to clear an ACL. In most cases, you do not want to leave 007 permissions on the file.

   Owner permissions: It is impossible to represent the breadth of a Windows ACL's access rules using a set of UNIX permissions. Therefore, when a UNIX client requests UNIX permissions for a file with an ACL over NFS (an action known as a "stat"), it receives an imperfect approximation of the file's true permissions. By default, running an ls -l command from a UNIX client returns a more open set of permissions than the user expects. This permissiveness compensates for applications that incorrectly inspect the UNIX permissions themselves when determining whether to attempt a file-system operation. The purpose of this policy setting is to ensure that these applications proceed with the operation to allow the file system to properly determine user access through the ACL. Select one of the following options.
   - Approximate owner mode bits using all possible owner ACEs: Makes the owner permissions appear more permissive than the actual permissions on the file.
   - Approximate owner mode bits using only the ACE with the owner ID: Makes the owner permissions appear more accurate, in that you see only the permissions for a particular owner and not the more permissive set. However, this may cause access-denied problems for UNIX clients.


   Group permissions: Select one of the following options for group permissions.
   - Approximate group mode bits using all possible group ACEs: Makes the group permissions appear more permissive than the actual permissions on the file.
   - Approximate group mode bits using only the ACE with the group ID: Makes the group permissions appear more accurate, in that you see only the permissions for a particular group and not the more permissive set. However, this may cause access-denied problems for UNIX clients.

No "deny" ACEs

The Windows ACL user interface cannot display an ACL if any "deny" ACEs are out of canonical ACL order. However, in order to correctly represent UNIX permissions, deny ACEs may be required to be out of canonical ACL order. Select one of the following options.
l

Remove deny ACEs from synthetic ACLs: Does not include "deny" ACEs when generating synthetic ACLs. This setting can cause ACLs to be more permissive than the equivalent mode bits. Do not modify synthetic ACLs and mode bit approximations: Specifies to not modify synthetic ACL generation; deny ACEs will be generated when necessary.

This option can lead to permissions being reordered, permanently denying access if a Windows user or an application performs an ACL get, an ACL modification, and an ACL set (known as a "roundtrip") to and from Windows. Access check You can control who can change utimes, which are the access and (utimes) modification times of a file, by selecting one of the following options.
l

Allow only owners to change utimes to client-specific times (POSIX compliant): Allows only owners to change utimes, which complies with the POSIX standardan approach that is probably familiar to administrators of UNIX systems. Allow owners and users with write access to change utimes to clientspecific times: Allows owners as well as users with write access to modify utimesa less restrictive approach that is probably familiar to administrators of Windows systems.

Update cluster permissions


You can run the Repair Permissions job to update file permissions or ownership. If you change the on-disk identity, it is recommended that you run this job with the Convert permissions task to ensure that the changes propagate throughout the file system.

1. Click Protocols > ACLs > Repair Permissions Job.
2. Optional: To change the priority level of this job compared to other jobs, click a value (1-10) in the Priority box.
3. Optional: To specify a different impact policy for this job to use, click an available option in the Impact policy box.
4. For Repair task, select one of the following settings:
   - Convert permissions: For each file and directory within the specified Path to repair, converts the owner, group, and access control list (ACL) to the target on-disk identity. To prevent permissions issues, run this task whenever the on-disk identity has been changed.
   - Clone permissions: Applies the permissions settings of the specified Template Directory as-is to the directory specified in Path to repair.
   - Inherit permissions: Recursively applies the ACL that is used by the specified Template Directory to each file and subdirectory within the specified Path to repair directory, according to normal inheritance rules.
5. For Path to repair, type the full path, beginning at /ifs, of the directory whose permissions need to be repaired, or click Browse to navigate to the directory in File System Explorer.
6. For Template Directory (available with the Clone and Inherit tasks only), type the full path, beginning at /ifs, of the directory whose permissions settings you want to apply, or click Browse to navigate to the directory in File System Explorer.
7. Optional: For Target (available with the Convert task only), select the on-disk identity type to convert to:
   - Use default system type: Uses the system's default identity type. This is the default setting.
   - Use native type: If a user or group does not have an authoritative UNIX identifier (UID or GID), uses the Windows identity type (SID).
   - Use UNIX type: Uses the UNIX identity type.
   - Use SID (Windows) type: Uses the Windows identity type.
8. Optional: For Access Zone (available with the Convert task only), select the access zone to use for ID mapping.

Managing roles
You can view, add, or remove members of any role. Except for built-in roles, whose privileges you cannot modify, you can add or remove OneFS privileges on a role-by-role basis. Roles take both users and groups as members. If a group is added to a role, all users who are members of that group are assigned the privileges associated with the role. Similarly, members of multiple roles are assigned the combined privileges of each role.

View roles
You can view information about built-in and custom roles. For information about the commands and options used in this procedure, run the isi auth roles --help command.

1. Establish an SSH connection to any node in the cluster.
2. At the command prompt, run one of the following commands.
   - To view a basic list of all roles on the cluster, run:
     isi auth roles list
   - To view detailed information about each role on the cluster, including member and privilege lists, run:
     isi auth roles list --verbose
   - To view detailed information about a single role, run the following command, where <role> is the name of the role:
     isi auth roles view <role>

Create a custom role


You can create custom roles and then assign privileges and members to them.

1. Establish an SSH connection to any node in the cluster.
2. At the command prompt, run the following command, where <name> is the name to assign to the role and --description <string> specifies an optional description of the role:
   isi auth roles create <name> [--description <string>]

Results: After creating a role, you can add privileges and member users and groups by running the isi auth roles modify command. For more information, see "Modify a role" or run the isi auth roles modify --help command.
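For example, the following sequence creates a role and then grants it a privilege and a member user. The role name is arbitrary, and the --add-priv and --add-user flags are assumptions to be checked against the OneFS Command Reference:

isi auth roles create smb-admins --description "Delegated SMB administration"
isi auth roles modify smb-admins --add-priv ISI_PRIV_SMB
isi auth roles modify smb-admins --add-user jsmith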

Modify a role
You can modify the description and the user or group membership of any role, including built-in roles. However, you cannot modify the name or privileges that are assigned to built-in roles.

1. Establish an SSH connection to any node in the cluster.
2. At the command prompt, run the following command, where <role> is the role name and <options> are optional parameters:
   isi auth roles modify <role> [<options>]

   For a complete list of the available options, see the OneFS Command Reference.

Delete a custom role


Deleting a role does not affect the privileges or users that are assigned to it. Built-in roles cannot be deleted.

1. Establish an SSH connection to any node in the cluster.
2. At the command prompt, run the following command, where <role> is the name of the role that you want to delete:
   isi auth roles delete <role>
3. At the confirmation prompt, type y.

Create a local user


Each access zone includes a local provider that allows you to create and manage local users and groups. When creating a local user account, you can configure its name, password, home directory, UNIX user identifier (UID), UNIX login shell, and group memberships.

1. Click Cluster Management > Access Management > Users.
2. From the Select a zone list, select an access zone (for example, System).
3. From the Select a provider list, select the local provider for the zone (for example, LOCAL:System).
4. Click Create a user.
5. In the Username field, type a username for the account.
6. In the Password field, type a password for the account.
7. Optional: Configure the following additional settings as needed.
   - Allow password to expire: Select this check box to specify that the password is allowed to expire.
   - UID: If this setting is left blank, the system automatically allocates a UID for the account. This is the recommended setting. You cannot assign a UID that is in use by another local user account.
   - Full Name: Type a full name for the user.
   - Email Address: Type an email address for the account.
   - Primary Group: Click Select group to specify the owner group.
   - Additional Groups: Specify any additional groups to make this user a member of.
   - Home Directory: Type the path to the user's home directory. If you do not specify a path, a directory is automatically created at /ifs/home/<Username>.
   - UNIX Shell: This setting applies only to users who access the file system through SSH. From the list, select the shell that you want. By default, the /bin/zsh shell is selected.
   - Enabled: Select this check box to allow the user to authenticate against the local database for SSH, FTP, HTTP, and Windows file sharing through SMB. This setting is not used for UNIX file sharing through NFS.
   - Account Expires: Optionally select one of the following options:
     - Never expires: Specifies that this account does not have an expiration date.
     - Account expires on: Displays the Expiration date field; type the date in the format mm/dd/yyyy.
   - Prompt password change: Select this check box to prompt for a password change the next time the user logs in.
8. Click Create User.

Create a local group


In the local provider of an access zone, you can create groups and assign members to them.

1. Click Cluster Management > Access Management > Groups.
2. From the Select a zone list, select an access zone (for example, System).
3. From the Select a provider list that appears, select the local provider for the zone (for example, LOCAL:System).
4. Click the Create a group link.
5. In the Group Name box, type a name for the group.
6. Optional: To override automatic allocation of the UNIX group identifier (GID), in the GID box, type a numerical value. You cannot assign a GID that is in use by another group. It is recommended that you leave this field blank to allow the system to generate the GID automatically.
7. Optional: Follow these steps for each member that you want to add to the group:
   a. For the Members setting, click Add user. The Select a User dialog box appears.
   b. For the Search for setting, select either Users or Well-known SIDs.
   c. If you selected Users, specify values for the following fields:
      - Username: Type all or part of a user name, or leave the field blank to return all users. Wildcard characters are accepted.
      - Access Zone: Select the access zone that contains the authentication provider that you want to search.
      - Provider: Select an authentication provider.
   d. Click Search.
   e. In the Search Results table, select a user and then click Select. The dialog box closes.
8. Click Create.

Managing users and groups


You can view the users and groups of any authentication provider. You can create, modify, and delete users and groups in the local provider only.

Modify a local user


You can modify any setting for a local user account except the user name.

1. Click Cluster Management > Access Management > Users.
2. From the Select a zone list, select an access zone (for example, System).
3. From the Select a provider list, select the local provider for the access zone (for example, LOCAL:System).
4. In the list of users, click View details for the local user whose settings you want to modify.
5. For each setting that you want to modify, click Edit, make the change, and then click Save.
6. Click Close.

Modify a local group


You can add or remove members from a local group.

1. Click Cluster Management > Access Management > Groups.
2. From the Select a zone list, select an access zone (for example, System).
3. From the Select a provider list, select the local provider for the access zone (for example, LOCAL:System).
4. In the list of groups, click View details for the local group whose settings you want to modify.
5. For the Members setting, click Edit.
6. Add or remove the users that you want, and then click Save.
7. Click Close.

Delete a local user


A deleted user can no longer access the cluster through the command-line interface, the web administration interface, or a file access protocol. When you delete a local user account, its home directory remains in place.

1. Click Cluster Management > Access Management > Users.
2. From the Select a zone list, select an access zone (for example, System).
3. From the Select a provider list, select the local provider for the zone (for example, LOCAL:System).
4. Click Delete for the user that you want to delete.
5. In the confirmation dialog box, click Delete.

Delete a local group


You can delete a local group even if members are assigned to it; deleting a group does not affect the members of that group.

1. Click Cluster Management > Access Management > Groups.
2. From the Select a zone list, select an access zone (for example, System).
3. From the Select a provider list, select the local provider for the zone (for example, LOCAL:System).
4. Click Delete for the group that you want to delete.
5. In the confirmation dialog box, click Delete.

Creating file providers


You can create one or more file providers, each with its own combination of replacement files, for each access zone. Password database files, which are also called user database files, must be in binary format.

Create a file provider


You can specify replacement files for any combination of users, groups, and netgroups.

1. Click Cluster Management > Access Management > File Provider.
2. Click Add a file provider.
3. In the File Provider Name field, type a name for the file provider.
4. Optional: Specify one or more replacement files by typing or browsing to their locations:
   - Users File: The full path to the spwd.db replacement file.
   - Groups File: The full path to the group replacement file.
   - Netgroups File: The full path to the netgroup replacement file.
5. Optional: To enable this provider to authenticate users, select the Authenticate users from this provider check box.
6. Optional: To specify a home directory naming template, in the Home Directory Naming field, type the full directory path that will contain all home directories.
7. Optional: To automatically create home directories for users the next time they log in, select the Create home directories on first login check box.
8. Optional: From the UNIX Shell list, select the shell that will be used when users access the file system through SSH.
9. Click Add File Provider.

Generate a password file


A file provider requires a password database file in binary format. To generate a binary password file, use the pwd_mkdb utility in the command-line interface (CLI).

1. Establish an SSH connection to any node in the cluster.
2. Run the following command, where -d <directory> specifies the location in which to store the spwd.db file and <file> specifies the location of the source password file:
   pwd_mkdb -d <directory> <file>

   If you omit the -d option, the file is created in the /etc directory. For full command usage, view the manual ("man") page by running the man pwd_mkdb command.

   For example, the following command generates an spwd.db file in the /ifs directory from a password file located at /ifs/test.passwd:
   pwd_mkdb -d /ifs /ifs/test.passwd

What to do next: To use the spwd.db file when creating or modifying a file provider in the web administration interface, specify its full path in the Users File setting.

Managing file providers


Each file provider pulls directly from up to three replacement database files: a group file that uses the same format as /etc/group; a netgroups file; and a binary password file, spwd.db, which provides fast access to the data in a file that uses the /etc/master.passwd format. You must copy the replacement files to the cluster and reference them by their directory path. If the replacement files are located outside the /ifs directory tree, you must manually distribute them to every node in the cluster. Changes that are made to the system provider's files are automatically distributed across the cluster.
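Because /ifs is shared by every node, placing the replacement files under /ifs avoids manual per-node distribution. For example (the paths are hypothetical):

mkdir -p /ifs/auth-files
cp /tmp/group /tmp/netgroup /tmp/spwd.db /ifs/auth-files/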

Modify a file provider


You can modify any setting for a file provider, including its name.

1. Click Cluster Management > Access Management > File Provider.
2. In the File Providers table, click View details for the provider whose settings you want to modify.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.
4. Click Close.


Delete a file provider


To stop using a file provider, you can clear all of its replacement file settings, or you can permanently delete the provider.

1. Click Cluster Management > Access Management > File Provider.
2. In the File Providers table, click Delete for the provider that you want to delete.
3. In the confirmation dialog box, click Delete.

Password file format


A file provider uses a binary password database file, spwd.db, which you generate from a master.passwd-formatted file by using the pwd_mkdb command-line utility. The spwd.db file has ten colon-separated fields. A sample line looks like the following:
admin:*:10:10::0:0:Web UI Administrator:/ifs/home/admin:/bin/zsh

The fields are defined below in the order in which they appear in the file.

Note: UNIX systems often define the passwd format as a subset of these fields, omitting the class, change, and expire fields. To convert a file from passwd to master.passwd format, add :0:0: between the GID field and the Gecos field.

- Username: The user's name. This field is case sensitive. OneFS does not set a limit on the length; however, many applications truncate the name to 16 characters.
- Password: The user's encrypted password. If authentication is not required for the user, an asterisk (*) can be substituted for the password. The asterisk character is guaranteed not to match any password.
- UID: The user's primary identifier. This value should be in the range 0-4294967294. Take care when choosing a UID to ensure that it does not conflict with an existing account; for example, do not choose the reserved value 0 as the UID. There is no guarantee of compatibility if an assigned value conflicts with an existing UID.
- GID: The group identifier of the user's primary group. All users are a member of at least one group, which is used for access checks and can also be used when creating files.
- Class: This field is not supported by OneFS and should be blank.
- Change: Password change time. OneFS does not support changing the passwords of users in the file provider.
- Expiry: The time at which the account expires. OneFS does not support account expiration for users in the file provider.
- Gecos: This field can store a variety of information; it is usually used to store the user's full name.
- Home: The user's home directory. This field should point to a directory on /ifs.
- Shell: The absolute path to the user's shell (/bin/sh, /bin/csh, /bin/tcsh, /bin/bash, /bin/rbash, /bin/zsh, or /sbin/nologin). For example, to deny command-line access to the user, set the shell to /sbin/nologin.

Group file format


The file provider uses the format of the /etc/group file found on most UNIX systems. The group file uses four colon-separated fields. A sample line looks like the following:
admin:*:10:root,admin

The fields are defined below in the order in which they appear in the file.
- Group name: The group's name. This field is case sensitive. Although OneFS does not set a limit on the length of the group name, many applications truncate the name to 16 characters.
- Password: This field is not supported by OneFS and should be set to an asterisk (*).
- GID: The group identifier. This value should be in the range of 0-4294967294. Be careful when choosing a GID to ensure that it does not conflict with an existing group.
- Group members: A comma-delimited list of user names that make up the group's members.
Netgroup file format


A netgroup file consists of one or more netgroups, each of which can contain members. Members of a netgroup can be hosts, users, or domains, specified in a member triple. A netgroup can also contain another netgroup. Each entry in a netgroup file consists of the netgroup name, followed by a space-delimited set of member triples and nested netgroup names. A nested netgroup must be defined elsewhere in the file. A member triple takes the form (host, user, domain), where host is a machine, user is a user name, and domain is a domain. Any combination is valid except an empty triple (,,). A sample file looks like the following:
rootgrp (myserver, root, somedomain.com) othergrp (someotherserver, root, somedomain.com)
othergrp (other-win,, somedomain.com) (other-linux,, somedomain.com)

In this sample file, the netgroup rootgrp resolves to all four hosts because it nests othergrp, while othergrp resolves to only the last two hosts. A new line signifies a new netgroup. For long netgroup entries, you can type a backslash character (\) in the right-most position of a line to indicate line continuation, as in the following example.
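For example, a hypothetical entry for a netgroup named labgrp split across two physical lines with a trailing backslash:

labgrp (host1,, somedomain.com) (host2,, somedomain.com) \
    (host3,, somedomain.com)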

Create an Active Directory provider


You can configure multiple Active Directory domains, each with its own configuration settings. You can join each access zone to an Active Directory domain.
1. Click Cluster Management > Access Management > Active Directory.
2. Click Join a domain.
3. In the Domain Name field, type a fully qualified Active Directory domain name. This name is used as the provider name.
4. In the User field, type the user name of an account that is authorized to join the Active Directory domain.
5. In the Password field, type the password for the account.
6. Optional: In the Organizational Unit field, type the organizational unit (OU) to connect to on the Active Directory server.
7. Optional: In the Machine Account field, type the name of the machine account. Joining the domain will fail if the machine account exists but resides in a different organizational unit than the one you specified.
8. Optional: To enable Active Directory authentication for NFS, select the Enable Secure NFS check box. If you enable this setting, OneFS registers NFS service principal names (SPNs) during the domain join.
9. Optional: Click Advanced Active Directory Settings to configure advanced settings.
10. Click Join.

Managing Active Directory providers


You can view, modify, and delete Active Directory providers. OneFS includes a Kerberos configuration file for Active Directory in addition to the global Kerberos configuration file, both of which you can configure through the command-line interface.

Modify an Active Directory provider


You can modify the advanced settings for an Active Directory provider.
1. Click Cluster Management > Access Management > Active Directory.
2. In the list of Active Directory providers, click View details for the provider whose settings you want to modify.
3. Click Advanced Active Directory Settings.
4. For each setting that you want to modify, click Edit, make the change, and then click Save.
5. Optional: Click Close.

Delete an Active Directory provider


When you delete an Active Directory provider, you disconnect the cluster from the Active Directory domain that is associated with the provider, disrupting service for users who are accessing it. After you leave an Active Directory domain, users can no longer access the domain from the cluster.
1. Click Cluster Management > Access Management > Active Directory.
2. In the Active Directory Providers table, click Leave for the domain you want to leave.
3. In the confirmation dialog box, click Leave.

Configure Kerberos settings


Kerberos 5 protocol configuration is supported through the command line only. In addition to the global Kerberos configuration file, OneFS includes a Kerberos configuration file for Active Directory. You can modify either file by following this procedure.

Most settings require modification only if you are using a Kerberos Key Distribution Center (KDC) other than Active Directory; for example, if you are using an MIT KDC for NFS version 3 or version 4 authentication.
1. Establish an SSH connection to any node in the cluster.
2. Run the isi auth krb5 command with the add, modify, or delete subcommand to specify which entries to modify in the Kerberos configuration file. For usage information, see the OneFS Command Reference.
3. Propagate the changes to the Kerberos configuration file by running the isi auth krb5 write command.
By default, changes are written to the global Kerberos configuration file, /etc/krb5.conf. To update the Kerberos configuration file for Active Directory, use the --path option to specify the /etc/likewise-krb5-ad.conf file.
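For example, the following command, built only from the options described above, writes pending changes to the Active Directory Kerberos configuration file rather than the global file:

isi auth krb5 write --path /etc/likewise-krb5-ad.conf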

Active Directory provider settings


You can view or modify the settings for an Active Directory provider.
- Services For UNIX: Specifies whether to support RFC 2307 attributes for domain controllers. RFC 2307 is required for Windows UNIX Integration and Services For UNIX technologies.
- Map user/group into primary domain: Enables the lookup of unqualified user names in the primary domain. If this setting is not enabled, the primary domain must be specified for each authentication operation.
- Ignore Trusted Domains: Ignores all trusted domains.
- Trusted Domains: Specifies trusted domains to include if Ignore Trusted Domains is set to Yes.
- Domains to Ignore: Specifies trusted domains to ignore even if Ignore Trusted Domains is set to No.
- Offline Alerts: Sends an alert if the domain goes offline.
- Enhanced Privacy: Encrypts communication to and from the domain controller.
- Home Directory Naming: Specifies the path to use as the home directory naming template.
- Create Home Directory: Creates a home directory the first time a user logs in.
- UNIX Shell: Specifies the path to the UNIX login shell.
- Lookup User: Looks up Active Directory users in all other providers before allocating a UID.
- Match Users with Lowercase: Normalizes Active Directory user names to lowercase before lookup.
- Auto-assign UIDs: Enables UID allocation for unmapped Active Directory users.
- Lookup Group: Looks up Active Directory groups in all other providers before allocating a GID.
- Match Groups with Lowercase: Normalizes Active Directory group names to lowercase before lookup.
- Auto-assign GIDs: Enables GID allocation for unmapped Active Directory groups.
- Make UID/GID assignments for users and groups in these specific domains: Restricts user and group lookups to the specified domains.

Create an LDAP provider


You can create multiple LDAP providers and add them to one or more access zones. Each instance of an LDAP provider can have its own configuration settings. If an access zone is configured to use all providers, which is the default behavior for the system zone, the new provider is automatically added to that zone.
1. Click Cluster Management > Access Management > LDAP.
2. Click Add an LDAP provider.
3. In the LDAP Provider Name text box, type a name for the provider.
4. In the Servers text box, type one or more valid LDAP server URIs, one per line, in the format ldaps://server:port (secure LDAP) or ldap://server:port (non-secure LDAP).
   If you do not specify a port, the default port is used (389 for LDAP; 636 for secure LDAP). If non-secure LDAP (ldap://) is specified, the bind password is transmitted to the server in clear text. If the Load balance servers option is not selected, servers are accessed in the order in which they are listed. (To verify a server URI and base DN from a client machine, see the sketch after this procedure.)
5. Optional: Configure the following settings as needed.
   - Load balance servers: Select the check box to connect to a random server, or clear the check box to connect according to the order in which the servers are listed in the Servers setting.
   - Base Distinguished Name: Type the distinguished name (DN) of the entry at which to start LDAP searches. Base DNs may include cn (Common Name), l (Locality), dc (Domain Component), ou (Organizational Unit), or other components. For example, dc=emc,dc=com is a base DN for emc.com.
   - Bind to: Type the distinguished name of the entry to use to bind to the LDAP server.
   - Password: Specify the password to use when binding to the LDAP server. Use of this password does not require a secure connection; if the connection is not using TLS, the password is sent in clear text.
6. Optional: Click Default Query Settings to configure the following additional settings.
   - Distinguished Name: Type the distinguished name of the entry at which to start LDAP searches.
   - Search Scope: Defines the default depth from the base DN to perform LDAP searches. Click to select one of the following values:
     base: Search only the entry at the base DN.
     onelevel: Search all entries exactly one level below the base DN.
     subtree: Search the base DN and all entries below it.
     children: Search all entries below the base DN, excluding the base DN itself.
   - Search Timeout: Type the number of seconds after which a search will not be retried and will fail. The default value is 100.
7. Optional: Click User Query Settings to configure the following additional settings.
   - Distinguished Name: Type the distinguished name of the entry at which to start LDAP searches for users.
   - Search Scope: Defines the depth from the base DN to perform LDAP searches for users. Click to select one of the following values:
     default: Use the setting defined in the default query settings.
     base: Search only the entry at the base DN.
     onelevel: Search all entries exactly one level below the base DN.
     subtree: Search the base DN and all entries below it.
     children: Search all entries below the base DN, excluding the base DN itself.
   - Query Filter: Sets the LDAP filter for user objects.
   - Authenticate users from this provider: Select the check box to enable the provider to respond to authentication requests, or clear the check box to prevent responding to authentication requests.
   - Home Directory Naming: Type the full path to the location on /ifs to create home directories.
   - Create home directories on first login: Select the check box to automatically create a home directory when a user logs in, if one does not already exist for the user.
   - UNIX Shell: Click to select a login shell from the list. This setting applies only to users who access the file system through SSH.
8. Optional: Click Group Query Settings to configure the following additional settings.
   - Distinguished Name: Type the distinguished name of the entry at which to start LDAP searches for groups.
   - Search Scope: Defines the depth from the base DN to perform LDAP searches for groups. Click to select one of the following values:
     default: Use the setting defined in the default query settings.
     base: Search only the entry at the base DN.
     onelevel: Search all entries exactly one level below the base DN.
     subtree: Search the base DN and all entries below it.
     children: Search all entries below the base DN, excluding the base DN itself.
   - Query Filter: Sets the LDAP filter for group objects.
9. Optional: Click Netgroup Query Settings to configure the following additional settings.
   - Distinguished Name: Type the distinguished name of the entry at which to start LDAP searches for netgroups.
   - Search Scope: Defines the depth from the base DN to perform LDAP searches for netgroups. Click to select one of the following values:
     default: Use the setting defined in the default query settings.
     base: Search only the entry at the base DN.
     onelevel: Search all entries exactly one level below the base DN.
     subtree: Search the base DN and all entries below it.
     children: Search all entries below the base DN, excluding the base DN itself.
   - Query Filter: Sets the LDAP filter for netgroup objects.
10. Optional: Click Advanced LDAP Settings to configure the following additional settings.
   - Name Attribute: Specifies the LDAP attribute that contains UIDs, which are used as login names. The default value is uid.
   - Common Name Attribute: Specifies the LDAP attribute that contains common names. The default value is cn.
   - Email Attribute: Specifies the LDAP attribute that contains email addresses. The default value is email.
   - GECOS Field Attribute: Specifies the LDAP attribute that contains GECOS fields. The default value is gecos.
   - UID Attribute: Specifies the LDAP attribute that contains UID numbers. The default value is uidNumber.
   - GID Attribute: Specifies the LDAP attribute that contains GIDs. The default value is gidNumber.
   - Home Directory Attribute: Specifies the LDAP attribute that contains home directories. The default value is homeDirectory.
   - UNIX Shell Attribute: Specifies the LDAP attribute that contains UNIX login shells. The default value is loginShell.
   - Netgroup Members Attribute: Specifies the LDAP attribute that contains netgroup members. The default value is memberNisNetgroup.
   - Netgroup Triple Attribute: Specifies the LDAP attribute that contains netgroup triples. The default value is nisNetgroupTriple.
   - Group Members Attribute: Specifies the LDAP attribute that contains group members. The default value is memberUid.
   - Unique Group Members Attribute: Specifies the LDAP attribute that contains unique group members. This attribute determines which groups a user is a member of if the LDAP server is queried by the user's DN instead of the user's name. This setting has no default value.
   - UNIX Password Attribute: Specifies the LDAP attribute that contains UNIX passwords. This setting has no default value.
   - Windows Password Attribute: Specifies the LDAP attribute that contains Windows passwords. The default value is ntpasswdhash.
   - Certificate Authority File: Specifies the full path to the root certificates file.
   - Require secure connection for passwords: Specifies whether to require a TLS connection.
   - Ignore TLS Errors: Continues over a secure connection even if identity checks fail.
11. Click Add LDAP provider.
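Before or after adding the provider, it can help to confirm that a server URI and base DN return the expected entries. This is a sketch using the standard OpenLDAP ldapsearch client from any machine with network access to the LDAP server; the host name, base DN, and filter below are hypothetical:

ldapsearch -x -H ldap://ldap.example.com:389 -b "dc=example,dc=com" "(uid=jsmith)"

If the expected user entry is returned, the same URI and base DN should work in the Servers and Base Distinguished Name settings.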

Managing LDAP providers


You can view, modify, and delete LDAP providers or you can stop using an LDAP provider by removing it from all access zones that are using it.

Modify an LDAP provider


You can modify any setting for an LDAP provider except its name. You must specify at least one server for the provider to be enabled.
1. Click Cluster Management > Access Management > LDAP.
2. In the list of LDAP providers, click View details for the provider whose settings you want to modify.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.
4. Optional: Click Close.

Delete an LDAP provider


When you delete an LDAP provider, it is removed from all access zones. As an alternative, you can stop using an LDAP provider by removing it from each access zone that contains it, so that the provider remains available for future use.
1. Click Cluster Management > Access Management > LDAP.
2. Click Delete for the provider that you want to delete.
3. In the confirmation dialog box, click Delete.

Create a NIS provider


You can create multiple NIS providers and add them to one or more access zones. Each instance of a NIS provider can have its own configuration settings. If an access zone is configured to use all providers, which is the default behavior for the system zone, the new provider is automatically added to the zone.
1. Click Cluster Management > Access Management > NIS.
2. Click Add a NIS provider.
3. In the NIS Provider Name field, type a name for the provider.
4. In the Servers text box, type one or more valid IP addresses, host names, or fully qualified domain names, separated by commas.
   If the Load balance servers option is not selected, servers will be accessed in the order in which they are listed.
5. Optional: For the Load balance servers setting, select the check box to connect to a random server, or clear the check box to connect according to the order in which the servers are listed in the Servers setting.
6. Optional: Click Default Query Settings to configure the following additional settings.
   - NIS Domain: Type the NIS domain name.
   - Search Timeout: Type the number of seconds after which a search will not be retried and will fail. The default value is 100.
   - Retry Frequency: Type the number of seconds after which a search will be retried. The default value is 5.
7. Optional: Click User Query Settings to configure the following additional settings.
   - Authenticate users from this provider: Select the check box to enable this provider to respond to authentication requests, or clear the check box to prevent responding to authentication requests.
   - Home Directory Naming: Type the full path to the location on /ifs to create home directories.
   - Create home directories on first login: Select the check box to automatically create a home directory when a user logs in, if one does not already exist for the user.
   - UNIX Shell: Click to select a shell from the list. This setting applies only to users who access the file system through SSH.
8. Optional: Click Host Name Query Settings to configure the following additional settings.
   - Resolve Hosts: Select the check box to resolve hosts, or clear the check box to specify not to resolve hosts.
9. Click Add NIS provider.

Managing NIS providers


You can view and modify NIS providers or delete providers that are no longer needed. As an alternative to deleting a NIS provider, you can remove it from any access zones that are using it.

Modify a NIS provider


You can modify any setting for a NIS provider except its name. You must specify at least one server for the provider to be enabled.
1. Click Cluster Management > Access Management > NIS.
2. In the list of NIS providers, click View details for the provider whose settings you want to modify.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.
4. Click Close.


Delete a NIS provider


When you delete a NIS provider, it is removed from all access zones. As an alternative, you can stop using a NIS provider by removing it from each access zone that contains it, so that the provider remains available for future use.
1. Click Cluster Management > Access Management > NIS.
2. Click Delete for the provider that you want to delete.
3. In the confirmation dialog box, click Delete.

Create an access zone


When you create an access zone, you can add one or more authentication provider instances, user mapping rules, and SMB shares, or you can create an empty access zone and configure it later.
1. Click Cluster Management > Access Management > Access Zones.
2. Click Create an access zone.
3. In the Access Zone Name field, type a name for the access zone.
4. Optional: In the Authentication Providers list, click one of the following options:
   - Use all authentication providers: Adds an instance of each available provider to the access zone.
   - Manually select authentication providers: Allows you to select one or more provider instances to add to the access zone. Follow these steps for each provider instance that you want to add:
     a. Click Add an authentication provider.
     b. In the Authentication Provider Type list, select a provider type. A provider type is listed only if an instance of that type exists and is not already in use by the access zone.
     c. In the Authentication Provider list, select an available provider instance.
     d. If you are finished adding provider instances, you can change the priority in which they are called by changing the order in which they are listed. To do so, click the title bar of a provider instance and drag it up or down to a new position in the list.
5. Optional: In the User Mapping Rules list, follow these steps for each user mapping rule that you want to add:
   a. Click Create a user mapping rule. The User Mapping Rules table appears and displays the Create a User Mapping Rule form.
   b. In the Operation list, click to select one of the following operations:
      - Append fields from a user: Modifies a token by adding specified fields to it. All appended identifiers become members of the additional groups list.
      - Insert fields from a user: Modifies a token by adding specified fields from another token. An inserted primary user or group becomes the new primary user or group in the token and moves the old primary user or group to the additional identifiers list. Modifying the primary user leaves the token's user name unchanged. When inserting additional groups from a token, the new groups are added to the existing groups.
      - Replace a user with a new user: Replaces a token with the token identified by another user. If another user is not specified, the token is removed from the list and no user is inserted to replace it. If there are no tokens in the list, access is denied with a "no such user" error.
      - Remove supplemental groups from a user: Modifies a token by removing the supplemental groups.
      - Join two users together: Inserts the new token into the list of tokens. If the new token is the second user, it is inserted after the existing token; otherwise, it is inserted before the existing token. The insertion point is primarily relevant when the existing token is already the first in the list, because the first token is used to determine the ownership of new system objects.
   c. Fill in the fields as needed. Available fields differ depending on the selected operation.
   d. Click Add Rule.
   e. If you are finished adding user mapping rules, you can change the priority in which they are called by changing the order in which they are listed. To do so, click the title bar of a rule and drag it up or down to a new position in the list. To ensure that each rule gets processed, it is recommended that you list replacements first and allow/deny rules last.
6. Optional: For the SMB Shares setting, select one of the following options:
   - Use no SMB shares: Ignores all SMB shares.
   - Use all SMB shares: Adds each available SMB share to the access zone.
   - Manually select SMB shares: Allows you to select the SMB shares to add to the access zone. The following additional steps are required:
     a. Click Add SMB shares. The Select SMB Shares dialog box appears.
     b. Select the check box for each SMB share that you want to add to the access zone.
     c. Click Select.
7. Click Create Access Zone.
What to do next
Before you can use an access zone, you must associate it with an IP address pool. See "Associate an IP address pool with an access zone."

Managing access zones


You can configure the authentication providers, SMB namespace, and user mapping rules for any access zone. You can modify the name of any zone except the system zone.

Modify an access zone


You can modify the authentication providers, SMB shares, and user mapping rules for an access zone. You can modify the name of any access zone except the system zone.
1. Click Cluster Management > Access Management > Access Zones.
2. Click View details for the access zone whose settings you want to modify.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.


Associate an IP address pool with an access zone


You can specify which access zone to use according to the IP address that a user connects to.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings section, under Subnets, click a subnet name (for example, subnet0).
3. In the IP Address Pools section, click the + icon if necessary to view the settings for a pool.
4. Next to the Basic Settings heading, click Edit. The Configure IP Pool dialog box appears.
5. For the Access zone setting, select the zone to use when connecting through an IP address that belongs to this pool.
6. Click Submit.

Delete an access zone


You can delete any access zone except the built-in system zone. When you delete an access zone, any associated authentication provider instances or SMB shares remain available to other zones.
1. Click Cluster Management > Access Management > Access Zones.
2. Click Delete for the zone that you want to delete.
3. In the confirmation dialog box, click Delete.


CHAPTER 3 File sharing

Multi-protocol support is built into the OneFS operating system, enabling a single file or directory to be accessed through SMB for Windows file sharing, NFS for UNIX file sharing, secure shell (SSH), FTP, and HTTP. By default, only the SMB and NFS protocols are enabled.
OneFS creates the /ifs directory, which is the root directory for all file system data on the cluster. The /ifs directory is configured as an SMB share and an NFS export by default. You can create additional shares and exports within the /ifs directory tree.
You can set Windows- and UNIX-based permissions on OneFS files and directories. Users who have the required permissions and administrative privileges can create, modify, and read data on the cluster through one or more of the supported file sharing protocols:
- SMB: Allows Microsoft Windows and Mac OS X clients to access files that are stored on the cluster.
- NFS: Allows UNIX, Linux, Mac OS X, Solaris, and other UNIX-based clients to access files that are stored on the cluster.
- HTTP (with optional DAV): Allows clients to access files that are stored on the cluster through a web browser.
- FTP: Allows any client that is equipped with an FTP client program to access files that are stored on the cluster through the FTP protocol.
This chapter covers NFS, SMB, HTTP, FTP, mixed protocol environments, write caching with SmartCache, creating NFS exports and SMB shares, configuring NFS, SMB, HTTP, and FTP file sharing, and managing NFS exports and SMB shares.


NFS
NFS exports provide UNIX clients network access to file system resources on the cluster. OneFS includes a configurable NFS service that enables you to create and manage NFS exports. OneFS supports asynchronous and synchronous communication over NFS.
The OneFS cluster supports the following authentication providers for NFS file sharing:
- Network Information Service (NIS): A client/server directory service protocol for distributing system configuration data, such as user and host names, between computers on a network.
- Lightweight Directory Access Protocol (LDAP): An application protocol for querying and modifying directory services running over TCP/IP.

SMB
SMB shares provide Windows clients network access to file system resources on the cluster. OneFS includes a configurable SMB service that enables you to create and manage SMB shares. You can grant permissions to users and groups to carry out operations such as reading, writing, and setting access permissions on SMB shares.
The /ifs directory is configured as an SMB share and enabled by default. OneFS supports the "user" and "anonymous" security modes. If the "user" security mode is enabled, when you connect to a share from an SMB client you must provide a valid user name with proper credentials.
The SMB protocol uses security identifiers (SIDs) exclusively for authorization data. All identities are converted to SIDs during retrieval and are converted back to their on-disk representation before storage.
When a file or directory is created, OneFS checks the access control list (ACL) of its parent directory. If any inheritable access control entries (ACEs) are found, a new ACL is generated from those ACEs. If no inheritable ACEs are found, a default ACL is created from the combined file and directory create mask and create mode settings.
OneFS supports the following SMB clients:
- SMB 1 in Windows 2000/Windows XP and later
- SMB 1 in Mac OS X 10.5 and later
- SMB 2 in Windows Vista/Windows Server 2008 and later
- SMB 2.1 in Windows 7/Windows Server 2008 R2 and later

HTTP
OneFS includes a configurable HTTP service, which clients use to request files that are stored on the cluster and to interact with the web administration interface. OneFS supports the Distributed Authoring and Versioning (DAV) service to enable multiple users to manage and modify files. DAV is a set of extensions to HTTP that allows clients to read from and write to the cluster through the HTTP protocol. You can enable DAV in the web administration interface.
OneFS supports a form of the web-based DAV (WebDAV) protocol that enables users to modify and manage files on remote web servers. OneFS performs distributed authoring, but does not support versioning and does not perform security checks. Each node in the cluster runs an instance of the Apache HTTP Server to provide HTTP access. You can configure the HTTP service to run in different modes.


FTP
The FTP service is disabled by default. You can enable the FTP service to allow any node in the cluster to respond to FTP requests through a standard user account.
When configuring FTP access, make sure that the specified FTP root is the home directory of the user who logs in. For example, the FTP root for local user "jsmith" should be /ifs/home/jsmith. You can enable the transfer of files between remote FTP servers and enable anonymous FTP service on the root by creating a local user named "anonymous" or "ftp".

Mixed protocol environments


The /ifs directory is the root directory for all file system data in the cluster, serving as an SMB share, an NFS export, and a document root directory. You can create additional shares and exports within the /ifs directory tree. Although it is not recommended, the OneFS cluster can be configured to use SMB or NFS exclusively. Within these scenarios you can also enable HTTP, FTP, and SSH.
Access rights are enforced consistently across access protocols regardless of the security model. A user is granted or denied the same rights to a file when using SMB as when using NFS. Clusters running OneFS support a set of global policy settings that enable you to customize the default access control list (ACL) and UNIX permissions settings.
OneFS is configured with traditional UNIX permissions on the file tree. By using Windows Explorer or OneFS administrative tools, any file or directory can be given an ACL. ACLs in OneFS can include local, NIS, and LDAP users and groups in addition to Windows domain users and groups. After a file is given an ACL, its mode bits are no longer enforced and exist only as an estimate of the effective permissions. It is recommended that you configure ACL and UNIX permissions only if you fully understand how they interact with one another.

Write caching with SmartCache


Write caching accelerates the process of writing data to the cluster. OneFS includes a write-caching feature called SmartCache, which is enabled by default for all files and directories. If write caching is enabled, OneFS writes data to a write-back cache instead of immediately writing the data to disk. OneFS can write the data to disk at a time that is more convenient. It is recommended that you do not disable write caching for a file or directory and that you enable write caching for all file pool policies. OneFS interprets writes to the cluster as either synchronous or asynchronous, depending on a client's specifications. The impacts and risks of write caching depend on what protocols clients use to write to the cluster, and whether the writes are interpreted as synchronous or asynchronous. If you disable write caching, client specifications are ignored and all writes are performed synchronously. The following table explains how clients' specifications are interpreted, according to the protocol.

NFS
  Synchronous: The stable field is set to data_sync or file_sync.
  Asynchronous: The stable field is set to unstable.
SMB
  Synchronous: The write-through flag has been applied.
  Asynchronous: The write-through flag has not been applied.
iSCSI
  Synchronous: The write-cache enabled (WCE) setting is set to false.
  Asynchronous: The WCE setting is set to true.
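To illustrate the NFS row above, a Linux client mounted with the sync option sends its writes as stable (file_sync) operations, which OneFS treats as synchronous. This is a sketch with a hypothetical host name and paths, and exact behavior varies by client NFS implementation:

mount -t nfs -o sync cluster.example.com:/ifs/data /mnt/data

Without the sync option, clients typically send unstable writes followed by commit operations, which OneFS treats as asynchronous.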

Write caching for asynchronous writes


Writing to the cluster asynchronously with write caching is the fastest method of writing data to your cluster. Write caching for asynchronous writes requires fewer cluster resources than write caching for synchronous writes, and will improve overall cluster performance for most workflows. However, there is some risk of data loss with asynchronous writes. The following describes the risk of data loss for each protocol when write caching for asynchronous writes is enabled:
- NFS: If a node fails, no data will be lost except in the unlikely event that a client of that node also crashes before it can reconnect to the cluster. In that situation, asynchronous writes that have not been committed to disk will be lost.
- SMB: If a node fails, asynchronous writes that have not been committed to disk will be lost.
- iSCSI: If a node fails, asynchronous writes that have not been committed can cause inconsistencies in any file system that is laid out on the LUN, rendering the file system unusable.
It is recommended that you do not disable write caching, regardless of the protocol that you are writing with. If you are writing to the cluster with asynchronous writes and you decide that the risks of data loss are too great, it is recommended that you configure your clients to use synchronous writes rather than disable write caching.

Write caching for synchronous writes


Write caching for synchronous writes costs cluster resources, including a negligible amount of storage space. Although it is not as fast as write caching with asynchronous writes, unless cluster resources are extremely limited, write caching with synchronous writes is faster than writing to the cluster without write caching. Write caching does not affect the integrity of synchronous writes; if a cluster or a node fails, none of the data in the write-back cache for synchronous writes is lost.

Create an NFS export


OneFS does not restrict the number of NFS exports that you can create.
1. Click Protocols > UNIX Sharing (NFS) > NFS Export.
2. Click Add an Export.
3. Optional: In the Description text field, type a comment with basic information about the export you are creating. There is a 255 character limit.
4. Optional: In the Clients text field, type the names of the clients that are allowed to access the specified directories. To export to all clients, leave this field blank. You can specify a client by host name, IP address, subnet, or netgroup. You can include multiple clients in this field, in the form of one client entry per line.
5. Click Browse and select the file directory you want to export. Repeat this step to add as many directory paths as needed.
6. Specify export permissions:
   - Restrict actions to read-only.
   - Enable mount access to subdirectories. Allows subdirectories below the path(s) to be mounted.
7. Specify User/Group mapping. If you select the Custom Default option, you can limit access by mapping root users or all users to a specific user and/or group ID. For root squash, map the root user (UID 0) to the user name "nobody".
8. Select the security type for the export:
   - UNIX system
   - Kerberos5
   - Kerberos5 Integrity
   - Kerberos5 Privacy
9. Configure Advanced NFS Export Settings.
10. Click Save.
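After saving, a UNIX client can mount the new export. A minimal sketch, assuming a hypothetical export of /ifs/data on a cluster reachable at cluster.example.com, with mount access to subdirectories enabled in step 6:

mount -t nfs cluster.example.com:/ifs/data /mnt/data
mount -t nfs cluster.example.com:/ifs/data/projects /mnt/projects

The second mount succeeds only because subdirectory mount access was enabled.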

Create an SMB share


When creating an SMB share, you can override default permissions, performance, and access settings. You can configure SMB home directory provisioning by using directory path variables to automatically create and redirect users to their own home directories.
1. Click Protocols > Windows Sharing (SMB) > SMB Shares.
2. Click Add a share.
3. In the Share Name text field, type a name for the share. Share names can contain up to 80 characters, and can only contain alphanumeric characters, hyphens, and spaces.
4. In the Description text field, type a comment with basic information about the share you are creating. There is a 255 character limit. A description is optional, but is helpful when managing multiple shares.
5. In the Directory to be Shared field, type the full path of the share, beginning with /ifs, or click Browse to locate the share. The variables that can be used in the directory path are as follows:
   - %D: NetBIOS domain name.
   - %U: User name, for example user_001.
   - %Z: Zone name, for example System.
   - %L: Host name of the cluster, normalized to lowercase.
   - %0: First character of the user name.
   - %1: Second character of the user name.
   - %2: Third character of the user name.
   For example, for a user in the domain DOMAIN with the user name user_1, the path /ifs/home/%D/%U is interpreted as /ifs/home/DOMAIN/user_1.
6. Apply the initial Directory ACLs settings. These settings can be modified later.
   - To maintain the existing permissions on the shared directory, click the Do not change existing permissions option.
   - To apply a default ACL to the shared directory, click the Apply Windows default ACLs option.
   If the Auto-Create Directories setting is selected, an ACL with the equivalent of UNIX 700 mode bit permissions is created for any directory that is automatically created.
7. Optional: If needed, apply the home directory provisioning options.
   - Select the Allow Variable Expansion option to expand path variables (%U, %L, %D, and %Z) in the share directory path.
   - Select the Auto-Create Directories option to automatically create directories when users access the share for the first time. This option can only be selected if Allow Variable Expansion has been applied.
   Allow Variable Expansion is required for paths to be created automatically. If Allow Variable Expansion is applied but Auto-Create Directories is not, no paths will be automatically created, and any expansion variables in the name will not be expanded. For example, if a new share named "home_share" is created with the path /ifs/%U/home and user "user_1" attempts to connect to "home_share", the connection fails and the user sees an error message.
8. If needed, apply the Users & Groups options.
9. If needed, apply advanced SMB share settings.
10. Click Create.
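To see home directory provisioning in action from a UNIX machine, the standard Samba smbclient tool can open the share; the cluster name, share name, domain, and user below are hypothetical:

smbclient //cluster.example.com/home_share -U DOMAIN/user_1

If the share path is /ifs/home/%D/%U with Allow Variable Expansion and Auto-Create Directories enabled, this first connection creates /ifs/home/DOMAIN/user_1 and places the user in it.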

Configure NFS file sharing


You can enable or disable the NFS service, set the lock protection level, and set the security type. These settings are applied across all nodes in the cluster. You can change the settings for individual NFS exports as you create them, or edit the settings for individual exports as needed.
1. Click Protocols > UNIX Sharing (NFS) > NFS Settings.
2. Enable or disable the NFS service and version support settings:
   - NFS Service
   - NFSv2 Support
   - NFSv3 Support
   - NFSv4 Support
3. Select the Lock Protection Level setting.
4. Click the Reload Cached Configuration button. The cached NFS export settings are reloaded to ensure that changes to DNS or NIS are applied.
5. In the Users/Groups Mapping menu, click Custom Default. A box containing the settings for Map to User Credentials and Also map these user groups appears.
   a. To limit access by mapping root users or all users to a specific user or group, from the Root users list, click Specific username and then type the user names in the text field. A user is any user available in one of the configured authorization providers.
   b. To map users to groups, select the Also map these users to groups check box, click Specific user group(s), and then type the group names in the text field.
6. Select the security type. The default setting is UNIX.
7. Click Save.

Disable NFS file sharing


OneFS supports multiple versions of NFS. You can disable support for NFSv2, NFSv3, NFSv4, or the entire NFS service.
1. Click Protocols > UNIX Sharing (NFS) > NFS Settings.
2. Click disable for the entire NFS service or for individual versions of the NFS protocol.
3. Click Save.

NFS service settings


The NFS service settings are the global settings that determine how the NFS file sharing service operates. These settings include the versions of NFS to support, the lock protection level, NFS exports configuration, user/group mappings, and security types.
- Service: Enables or disables the NFS service. This setting is enabled by default.
- NFSv2 support: Enables or disables support for NFSv2. This setting is enabled by default.
- NFSv3 support: Enables or disables support for NFSv3. This setting is enabled by default.
- NFSv4 support: Enables or disables support for NFSv4. This setting is disabled by default.
- Lock protection level: Determines the number of node failures that can happen before a lock may be lost. The default value is +2.

NFS export behavior settings


The NFS export behavior settings are global settings that control options such as whether non-root users can set file times, the general encoding settings of an export, whether to look up UIDs (incoming user identifiers), and the server clock granularity.
- Can set time: Permits non-root users to set file times. The default value is Yes.
- Encoding: Overrides the general encoding settings the cluster has for the export. The default value is DEFAULT.
- Map Lookup UID: Looks up incoming user identifiers (UIDs) in the local authentication database. The default value is No.
- Symlinks: Enables symlink support for the export. The default value is Yes.
- Time delta: Sets the server clock granularity. The default value is 1e-9.

NFS performance settings


The performance settings for NFS control block size, read and write transfer sizes, readdirplus prefetching, asynchronous commit and setattr behavior, and the action and reply types for DATASYNC, FILESYNC, and UNSTABLE writes.
- Block size: The block size reported to NFSv2+ clients. The default value is 8192.
- Commit asynchronously: If set to yes, allows NFSv3 and NFSv4 COMMIT operations to be asynchronous. The default value is No.
- Directory read transfer: The preferred directory read transfer size reported to NFSv3 and NFSv4 clients. The default value is 131072.
- Read transfer max: The maximum read transfer size reported to NFSv3 and NFSv4 clients. The default value is 1048576.
- Read transfer multiple: The recommended read transfer size multiple reported to NFSv3 and NFSv4 clients. The default value is 512.
- Read transfer preferred: The preferred read transfer size reported to NFSv3 and NFSv4 clients.
- Readdirplus prefetch: The number of file nodes to be prefetched on readdir. The default value is 10.
- Setattr asynchronous: If set to yes, performs set attribute operations asynchronously. The default value is No.
- Write datasync action: The action to perform for DATASYNC writes. The default value is DATASYNC.
- Write datasync reply: The reply to send for DATASYNC writes. The default value is DATASYNC.
- Write filesync action: The action to perform for FILESYNC writes. The default value is FILESYNC.
- Write filesync reply: The reply to send for FILESYNC writes. The default value is FILESYNC.
- Write transfer max: The maximum write transfer size reported to NFSv3 and NFSv4 clients. The default value is 1048576.
- Write transfer multiple: The recommended write transfer size reported to NFSv3 and NFSv4 clients. The default value is 512.
- Write transfer preferred: The preferred write transfer size reported to NFSv3 and NFSv4 clients. The default value is 524288.
- Write unstable action: The action to perform for UNSTABLE writes. The default value is UNSTABLE.
- Write unstable reply: The reply to send for UNSTABLE writes. The default value is UNSTABLE.


NFS client compatibility settings


The NFS client compatibility settings are global settings that affect the customization of NFS exports. These settings include the maximum file size, enabling readdirplus, and 32-bit file IDs.
- Max file size: Specifies the maximum file size to allow. The default value is 9223372036854776000.
- Readdirplus enable: Enables readdirplus. The default value is yes.
- Return 32 bit file IDs: Returns 32-bit file IDs.

Configure SMB file sharing


You can configure the global settings for SMB, including SMB server, snapshots directory, SMB share, file and directory permissions, performance, and security settings. The global advanced settings for SMB shares are the same as the advanced settings for individual SMB shares. To change the advanced settings for an individual share, click SMB Shares.
Modifying the advanced settings could result in operational failures. Be aware of the potential consequences before committing changes to these settings.
1. Click Protocols > Windows Sharing (SMB) > SMB Settings. The SMB Settings page appears. The advanced settings for an SMB share are the following:
   - File and Directory Permissions: Create Permissions, Create Mask (Dir), Create Mode (Dir), Create Mask (File), and Create Mode (File).
   - Performance Settings: Change Notify and Oplocks.
   - Security Settings: Impersonate Guest, Impersonate User, and NTFS ACL.
2. Select the setting you want to modify, and then click the drop-down list next to it and select Custom default.
   A message appears, reminding you that uninformed changes to the advanced settings could result in operational failures and that you should be aware of the potential consequences before saving them. Click Continue. The setting properties can now be modified.
3. Make your changes to all of the settings you want to modify, and then click Save.
Results
The global settings for SMB have now been configured.


File and directory permission settings


You can view and configure the Create Permissions, Create Mask (Dir), Create Mode (Dir), Create Mask (File), and Create Mode (File) file and directory permission settings of an SMB share.
If changes to the file and directory permission settings are made from the SMB Settings tab, those changes will affect all current and future SMB shares. To change settings for individual SMB shares, click Protocols > Windows Sharing (SMB) > SMB Shares, and select the share you want to modify from the list that appears.
If the mask and mode bits match the default values, a green check mark appears next to a setting, indicating that the specified read (R), write (W), or execute (X) permission is enabled at the user, group, or other level. The "other" level includes all users who are not listed as the owner of the share and are not part of the group level that the file belongs to.
- Create Permissions: Sets the default source permissions to apply when a file or directory is created. The default value is Default ACL.
- Create Mask (Dir): Specifies UNIX mode bits that are removed when a directory is created, restricting permissions. Mask bits are applied before mode bits are applied.
- Create Mode (Dir): Specifies UNIX mode bits that are added when a directory is created, enabling permissions. Mode bits are applied after mask bits are applied.
- Create Mask (File): Specifies UNIX mode bits that are removed when a file is created, restricting permissions. Mask bits are applied before mode bits are applied.
- Create Mode (File): Specifies UNIX mode bits that are added when a file is created, enabling permissions. Mode bits are applied after mask bits are applied.
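As a worked illustration, assuming the mask is applied as a bitwise AND and the mode as a bitwise OR, which matches the restrict and enable descriptions above, and using hypothetical values rather than defaults: a client requests mode 0766 for a new file, Create Mask (File) is 0755, and Create Mode (File) is 0010. The mask is applied first, 0766 AND 0755 = 0744, and the mode bits are then added, 0744 OR 0010 = 0754, so the file is created with mode 0754.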

Disable SMB file sharing


You can disable the SMB file sharing service.
1. Click Protocols > Windows Sharing (SMB) > SMB Settings.
2. Click Disable SMB.
3. Click Save.


Snapshots directory settings


You can view and configure the settings that control the snapshot directories in SMB. Any changes made to these settings will affect the behavior of the SMB service, and may affect all current and future SMB shares.
- Visible at Root: Specifies whether to make the .snapshot directory visible at the root of the share. The default value is Yes.
- Accessible at Root: Specifies whether to make the .snapshot directory accessible at the root of the share. The default value is Yes.
- Visible in Subdirectories: Specifies whether to make the .snapshot directory visible in subdirectories of the share root. The default value is No.
- Accessible in Subdirectories: Specifies whether to make the .snapshot directory accessible in subdirectories of the share root. The default value is Yes.

SMB performance settings


You can view and configure the Change Notify and Oplocks performance settings of an SMB share. If changes to the performance settings are made from the SMB Settings tab, those changes will affect all current and future SMB shares. To change performance settings for individual SMB shares, click Protocols > Windows Sharing (SMB) > SMB Shares, and select the share you want to modify from the list that appears.
- Change Notify: Configures notification of clients when files or directories change. This helps prevent clients from seeing stale content, but requires server resources. The default value is All.
- Oplocks: Indicates whether an opportunistic lock (oplock) request is allowed. An oplock allows clients to provide performance improvements by using locally-cached information. The default value is Yes.


SMB security settings


You can view and configure the Impersonate Guest, Impersonate User, and NTFS ACL security settings of an SMB share. If changes to the security settings are made from the SMB Settings tab, those changes will affect all current and future SMB shares. To change settings for individual SMB shares, click Protocols > Windows Sharing (SMB) > SMB Shares, and select the share you want to modify from the list that appears.
- Impersonate Guest: Determines guest access to a share. The default value is Never.
- Impersonate User: Allows all file access to be performed as a specific user. This must be a fully qualified user name. The default value is No value.
- NTFS ACL: Allows ACLs to be stored and edited from SMB clients. The default value is Yes.

Configure and enable HTTP file sharing


You can configure HTTP and DAV to enable users to edit and manage files collaboratively across remote web servers.
1. Navigate to Protocols > HTTP Settings.
2. Select the setting for HTTP Protocol:
   - Enable HTTP: Allows HTTP access for cluster administration and browsing content on the cluster.
   - Disable HTTP and redirect to the web interface: Allows only administrative access to the web administration interface. This is the default mode.
   - Disable HTTP entirely: Closes the HTTP port used for file access. Users can still access the web administration interface, but they must specify the port number (8080) in the URL in order to do so.
3. In the Document root directory field, type or click Browse to navigate to an existing directory in /ifs, or click File System Explorer to create a new directory and set its permissions.
   The HTTP server runs as the daemon user and group. To properly enforce access controls, you must grant the daemon user or group read access to all files under the document root, and allow the HTTP server to traverse the document root.
4. In the Server hostname field, type the HTTP server name. The server hostname must be a fully qualified SmartConnect zone name and valid DNS name. The name must begin with a letter and contain only letters, numbers, and hyphens (-).
5. In the Administrator email address field, type an email address to display as the primary contact for issues that occur while serving files.
6. From the Active Directory Authentication list, select an authentication setting:
   - Off.
   - Basic Authentication Only: Enables HTTP basic authentication. User credentials are sent in plain text.
   - Integrated Authentication Only: Enables HTTP authentication via NTLM, Kerberos, or both.
   - Integrated and Basic Authentication: Enables both basic and integrated authentication.
   - Basic Authentication with Access Controls: Enables HTTP authentication via NTLM and Kerberos, and enables the Apache web server to perform access checks.
   - Integrated and Basic Auth with Access Controls: Enables HTTP basic authentication and integrated authentication, and enables access checks via the Apache web server.
7. Click the Enable DAV check box. This allows multiple users to manage and modify files collaboratively across remote web servers.
8. Optionally, click the Disable access logging check box.
9. Click Submit.
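Once DAV is enabled, any DAV-capable client can write to the document root. A minimal sketch using curl with basic authentication; the host name, user, and file name are hypothetical, and the daemon user must be able to traverse and read the target path:

curl -u user_1 -T report.txt http://cluster.example.com/report.txt

The -T option issues an HTTP PUT, which a DAV-enabled server accepts as a file upload.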

Configure and enable FTP file sharing


You can configure the File Transfer Protocol (FTP) service to upload and download files that are stored on the cluster. OneFS includes a secure FTP (sFTP) service that you can enable for this purpose.
1. Navigate to Protocols > FTP Settings.
2. Click Enable. The FTP service is disabled by default.
3. Select one of the following Service settings:
   - Server-to-server transfers. Enables the transfer of files between two remote FTP servers. This setting is disabled by default.
   - Anonymous access. Enables users with "anonymous" or "ftp" as the user name to access files and directories without authentication. This setting is disabled by default.
   - Local access. Enables local users to access files and directories with their local user name and password. Enabling this setting allows local users to upload files directly through the file system. This setting is enabled by default.

4. Click Submit.
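After enabling the service, you can verify access with any standard FTP client. For example, with anonymous access enabled (the cluster name below is a hypothetical placeholder):

ftp cluster.example.com

When prompted, log in with the user name "anonymous" or "ftp"; no password is required.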

Managing NFS exports


The default /ifs export is configured to allow UNIX clients to mount any subdirectory. You can view and modify NFS export settings, and you can delete NFS exports that are no longer needed.

You must define all mount points for a given export host as a single export rule, which is the collection of options and constraints that govern the access of an export to the file system. To add a mount point for an export host that appears in the list of export rules, modify that entry rather than add a new one. You can apply individual host rules to each export, or you can specify all hosts, which eliminates the need to create multiple rules for the same host. To prevent problems when setting up new exports, delete export rules for directories that have been removed from the file system.
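For reference, a UNIX client mounts an export with the standard mount command. The host name and directories below are hypothetical placeholders:

mount -t nfs cluster.example.com:/ifs/data/projects /mnt/projects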


Changes to the advanced settings affect all current and future NFS exports that use default settings, and may impact the availability of the NFS file sharing service. Do not make changes to these settings unless you have experience working with NFS. It is recommended that you change the default values for individual NFS exports as you create them, or edit the settings of existing exports.

Modify an NFS export


You can modify the settings for individual NFS exports.
1. Click Protocols > UNIX Sharing (NFS) > NFS Export.
2. From the list of NFS exports, click View details for the export whose settings you want to modify.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.
4. Click Close.

Delete an NFS export


You can delete NFS exports that are no longer needed. You can delete all the exports on a cluster by selecting the Export ID/Path option, and then selecting Delete from the drop-down menu.
1. Click Protocols > UNIX Sharing (NFS) > NFS Export.
2. From the list of NFS exports, click the check box for the export that you want to delete.
3. Click Delete.
4. In the confirmation dialog box, click Delete to confirm the deletion.

View and configure default NFS export settings


Default settings apply to all current and future NFS exports. For each setting, you can customize the default value or select the factory default value. The factory default value cannot be modified. Modifying the global default values is not recommended; instead, override the settings for NFS exports as you create them, or modify the settings of existing exports.
1. Click Protocols > UNIX Sharing (NFS) > NFS Settings.
2. Click the NFS export settings menu.
3. For each setting that you want to modify, click System Default in the list of options and select Custom Default. If a confirmation dialog box appears, click Continue.
4. Make your changes to the information in the setting value text field.
5. When you are finished modifying settings, click Save.


Managing SMB shares


You can configure the rules and other settings that govern the interaction between your Windows network and individual SMB shares on the cluster. OneFS supports %U, %D, %Z, %L, %0, %1, %2, and %3 variable expansion and automatic provisioning of user home directories. You can configure the users and groups that are associated with an SMB share, and view or modify their share-level permissions. It is recommended that you configure advanced SMB share settings only if you have a solid understanding of the SMB protocol.
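For example, home directory provisioning typically combines these variables in the share's path. The following illustration assumes that %D expands to the connecting user's NetBIOS domain name and %U to the user name; the names and paths are hypothetical, so verify the variable meanings against the OneFS variable-expansion reference before relying on them.

Shared Directory: /ifs/home/%D/%U

For the user jsmith in the domain EXAMPLE, the path expands to /ifs/home/EXAMPLE/jsmith, and a Windows client can map the share as usual:

net use H: \\cluster.example.com\home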

Add a user or group to an SMB share


For each SMB share, you can add share-level permissions for specific users and groups.
1. Click Protocols > Windows Sharing (SMB) > SMB Shares, and then click View details for the share you want to add a user or group to.
2. Click Edit next to the Users & Groups option. The User/Group permission list for the share appears.
3. Click Add a User or Group, and then select the option you want to search for:
   - Users. Enter the user name you want to search for in the text field, and then click Search.
   - Groups. Enter the group you want to search for in the text field, and then click Search.
   - Well-known SIDs. Skip to step 5.
4. From the Access Zone list, select the access zone you want to search.
5. From the Provider list, select the authentication provider you want to search. Only providers that are currently configured and enabled on the cluster are listed.
6. Click Search. The results of the search appear in the Search Results box.
7. In the search results, click the user, group, or SID that you want to add to the SMB share, and then click Select.
8. By default, the access rights of the new account are set to Deny All. To enable a user or group to access the share, follow these additional steps:
   a. Next to the user or group account you added, click Edit.
   b. Select the permission level you want to assign to the user or group: Run as Root, or a specific permission level of Full Control, Read-Write, or Read.
9. Click Save.

Modify an SMB share


You can modify the permissions, performance, and access settings for individual SMB shares. You can configure SMB home directory provisioning by using directory path variables to automatically create and redirect users to their own home directories.

Any changes made to these settings affect only this share. To change the global default values, use the SMB Settings tab.
1. Click Protocols > Windows Sharing (SMB) > SMB Shares.
2. From the list of SMB shares, locate the share you want to modify and then click View details.
3. For each setting that you want to modify, click Edit, make the change, and then click Save.
4. To modify the settings for file and directory permissions, performance, or security, click Advanced SMB Share Settings.

Delete an SMB share


You can delete SMB shares that are no longer needed. Although unused SMB shares do not hinder cluster performance, you can remove them to keep the share list manageable. When an SMB share is deleted, the share path is removed but the directory it referenced still exists; if you create a new share with the same path as the deleted share, the directory that the previous share referenced becomes accessible again through the new share. You can delete all of the shares on a cluster by selecting the Name/Path option, and then selecting Delete from the drop-down menu.
1. On the web dashboard, click Protocols > Windows Sharing (SMB) > SMB Shares.
2. From the list of SMB shares, select the share you want to delete.
3. Click Delete.
4. In the confirmation dialog box, click Delete to confirm the deletion.
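OneFS 7.0 also exposes share management on the command line under the isi smb shares namespace. The following is a hedged sketch; the share name is hypothetical, and you should confirm the exact syntax with isi smb shares delete --help:

isi smb shares delete Marketing
isi smb shares list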

SMB share settings


Settings for SMB shares can be applied on individual shares, or on a global level for the entire SMB service. The settings for an SMB share can be applied at the time the share is created, or modified later. To view the settings for an individual SMB share, on the web dashboard click Protocols > Windows Sharing (SMB) > SMB Shares. In the list of SMB shares that appears, locate the share you want to view, and then click the View details link.

The basic settings for an SMB share are the following:
- Share Name
- Description
- Shared Directory
- Home Directory Provisioning
- Users and Groups

The advanced settings for individual SMB shares are the same as the advanced settings for all SMB shares. To change the global default values for the advanced settings, click the SMB Settings tab.

Uninformed changes to the advanced settings could result in operational failures. Be aware of the potential consequences of changes before committing to save them. The advanced settings for an SMB share are the following:
- File and Directory Permissions. Includes Create Permissions, Create Mask (Dir), Create Mode (Dir), Create Mask (File), and Create Mode (File).
- Performance Settings. Includes Change Notify and Oplocks.
- Security Settings. Includes Impersonate Guest, Impersonate User, and NTFS ACL.

View and modify SMB share settings


You can view the status and settings of any SMB share on the cluster.
1. Click Protocols > Windows Sharing (SMB) > SMB Shares.
2. To view the settings for file and directory permissions, performance, or security, click Advanced SMB Share Settings.
Any changes made to these settings override the default settings for this share only. To change the global default values, use the SMB Settings tab.


CHAPTER 4 Snapshots

A OneFS snapshot is a logical pointer to data stored on a cluster at a specific point in time. A snapshot contains a directory on a cluster, and includes all data stored in the given directory and any subdirectories that the directory contains. If data contained in a snapshot is modified, the snapshot stores a physical copy of the original data and references the copy.

Snapshots are created according to user specifications, or are automatically generated by OneFS to facilitate system operations. Some snapshots generated by OneFS operations are optional and can be disabled; others are required, because the operations that generate them cannot function without snapshots.

To create and manage snapshots, you must configure a SnapshotIQ license on the cluster. However, some OneFS operations generate snapshots for internal system use without requiring that a SnapshotIQ license be configured; if an application generates such a snapshot, you can still view it. Unless a SnapshotIQ license is configured, all snapshots generated by OneFS operations are automatically deleted when they are no longer needed.

You can identify and locate snapshots by name or ID. A snapshot name is specified by a user and assigned to the subdirectory that contains the snapshot. A snapshot ID is a numerical identifier that the system assigns to the snapshot.
- Data protection with SnapshotIQ
- Snapshot disk-space usage
- Snapshot schedules
- Snapshot aliases
- File and directory restoration
- File clones
- Snapshot locks
- Snapshot reserve
- SnapshotIQ license functionality
- Creating snapshots with SnapshotIQ
- Managing snapshots
- Restoring snapshot data
- Managing snapshot schedules
- Managing snapshot locks
- Configure SnapshotIQ settings
- Set the snapshot reserve


Data protection with SnapshotIQ


You can create snapshots to protect data through the SnapshotIQ tool. Snapshots protect data against accidental deletion and modification by enabling you to restore deleted files and revert modified files. To use the SnapshotIQ tool, you must configure a SnapshotIQ license on the cluster.

Snapshots are less costly than backing up your data on a separate physical storage device, in terms of both time and storage consumption. The time required to move data to another physical device depends on the amount of data being moved, whereas snapshots are created almost instantaneously regardless of the amount of data they contain. Also, because snapshots are available locally, end users can often restore their data without the assistance of a system administrator, saving administrators the time it takes to retrieve the data from another physical location. Snapshots also require less space than a remote backup, because unaltered data is referenced rather than recreated, whereas another physical storage device requires a full physical copy of the data.

However, snapshots do not protect against hardware or file-system issues. Snapshots reference data that is stored on a cluster, so if the data on the cluster becomes unavailable, the snapshots are unavailable as well. For this reason, it is recommended that you use snapshots in addition to backing up your data on a separate physical device; snapshots are convenient, but they do not protect against all forms of data loss.

Snapshot disk-space usage


The amount of disk space used by a snapshot depends on the number of snapshots that contain the same data and on the amount of modification made to the data that the snapshot contains.

At the time it is created, a OneFS snapshot consumes a negligible amount of storage space on the cluster. As the files that a snapshot contains are modified, however, the snapshot stores read-only copies of the original, unmodified data. This allows the snapshot to maintain a pointer to the data that existed in a directory at the time the snapshot was created, even after the data has changed. A snapshot consumes only the space that is necessary to restore the data it contains; if the data that a snapshot contains is not modified, the snapshot does not consume additional storage space on the cluster.

To reduce disk-space usage, snapshots that contain the same information reference each other, with older snapshots referencing newer ones. If data is modified on the cluster, and several snapshots contain the same data, only one copy of the unmodified data is made. The size of a snapshot reflects the amount of disk space consumed by physical copies of data stored in that snapshot; because snapshots almost always reference data that is stored by other snapshots, the size of a snapshot does not reflect the amount of data that it references.

Because OneFS snapshots do not consume a set amount of storage space, there is no available-space requirement for creating a snapshot. The size of a snapshot grows as the data that it contains is modified. A cluster can contain no more than 2048 snapshots.


Snapshot schedules
OneFS can automatically generate snapshots according to a snapshot schedule. With snapshot schedules, you can periodically generate snapshots of a directory without having to manually create a snapshot every time. You can also assign an expiration period to the snapshots that are generated, causing OneFS to automatically delete each snapshot after the specified period has expired. It is often advantageous to create more than one snapshot schedule for a directory, with shorter expiration periods assigned to snapshots that are generated more frequently, and longer expiration periods assigned to snapshots that are generated less frequently.

Snapshot aliases
A snapshot alias is an optional, alternative name for a snapshot. If a snapshot is assigned an alias, and that alias is later assigned to another snapshot, the alias is automatically removed from the old snapshot before it is assigned to the new snapshot.

Snapshot aliases are most commonly used by snapshot schedules. When an alias is specified in a snapshot schedule, OneFS assigns the alias to each snapshot generated by the schedule, so the alias is attached only to the most recent snapshot generated according to the schedule. You can therefore use an alias to quickly identify the most recent snapshot generated according to a schedule. OneFS also uses snapshot aliases internally to identify the most recent snapshot generated by OneFS operations.

File and directory restoration


There are various ways that you can restore the files and directories contained in snapshots: you can copy data from a snapshot, clone a file from a snapshot, or revert an entire snapshot.

Copying a file out of a snapshot creates a new copy of the file on the cluster. If you copy a file from a snapshot, two copies of the same file exist on the cluster, taking up twice the amount of storage space as the original file. Even if you delete the original file from the non-snapshot directory, the copy of the file is still stored in the snapshot.

You can also clone a file from a snapshot. While copying a file from a snapshot immediately consumes additional space on the cluster, cloning a file from a snapshot does not consume any additional space unless the clone or cloned file is modified.

Finally, you can revert an entire snapshot. Reverting a snapshot replaces the contents of a directory with the data stored in the snapshot. Before a snapshot is reverted, OneFS creates a snapshot of the directory that is being replaced, enabling you to undo the revert. Reverting a snapshot can be useful if you have made many changes to files and directories contained in a snapshot and want to revert all of the changes. If new files or directories have been created in a directory since a snapshot of the directory was created, those files and directories are deleted when the snapshot is reverted.

File clones
OneFS enables you to create file clones that share blocks with existing files in order to save space on the cluster. Although you can clone files from snapshots, clones are primarily used internally by OneFS. The shared blocks are contained in a shadow store that is referenced by both the clone and
the cloned file. A file clone usually consumes less space and takes less time to create than a file copy. Immediately after a clone is created, all data originally contained in the cloned file is transferred to a shadow store. A shadow store is a hidden file that holds shared data for clones and cloned files.

Because both files reference all blocks from the shadow store, the two files together consume no more space than the original file; the clone does not take up any additional space on the cluster. However, if the cloned file or clone is modified, the file and clone share only the blocks that are common to both of them, and the modified, unshared blocks occupy additional space on the cluster.

Over time, the shared blocks contained in the shadow store might become useless if neither the file nor the clone references them. The cluster routinely deletes blocks that are no longer needed, but you can also cause the cluster to delete unused blocks at any time by running the shadow store delete job.

File clones considerations


Clones of files and cloned files behave differently in OneFS than files that have not been cloned. Be aware of the following considerations about file clones:
- Reading a cloned file might be slower than reading a copied file. Specifically, reading non-cached data from a cloned file is slower than reading non-cached data from a copied file. Reading cached data from a cloned file takes no more time than reading cached data from a copied file.
- When a file and its clone are replicated to another Isilon cluster or backed up to a Network Data Management Protocol (NDMP) backup device, the file and clone do not share blocks on the target Isilon cluster or backup device. Shadow stores are not transferred to the target cluster or backup device, so clones and cloned files consume the same amount of space as copies.
- When a file is cloned, the shadow store referenced by the clone and cloned file is assigned to the storage pool of the cloned file. If you delete the storage pool that the shadow store resides on, the shadow store is moved to a pool occupied either by the original file or a clone of the file.
- The protection level of a shadow store is at least as high as the most protected file or clone referencing the shadow store. For example, if a cloned file resides in a storage pool with +2 protection, and the clone resides in a storage pool with +3 protection, the shadow store is protected at +3.
- Quotas account for clones and cloned files as if they consumed both shared and unshared data; from the perspective of a quota, a clone and a copy of the same file do not consume different amounts of data. However, if the quota includes data protection overhead, the data protection overhead for the shadow store is not accounted for by the quota.
- Clones cannot contain alternate data streams (ADS). If you clone a file with alternate data streams, the clone will not contain them.


iSCSI LUN clones


OneFS enables you to create clones of iSCSI logical units (LUNs) that share blocks with existing LUNs in order to save space on the cluster. Internally, OneFS creates iSCSI LUN clones by creating file clones.

Snapshot locks
A snapshot lock prevents a snapshot from being deleted. If a snapshot has one or more locks applied to it, the snapshot cannot be deleted and is referred to as a locked snapshot. A locked snapshot cannot be deleted manually, and OneFS will not delete it automatically: if the expiration period of a locked snapshot passes, the system does not delete the snapshot until all locks on the snapshot have been deleted.

OneFS applies snapshot locks to ensure that snapshots generated by OneFS applications are not deleted prematurely. For this reason, it is recommended that you do not delete snapshot locks or modify their expiration periods.

A limited number of locks can be applied to a snapshot at a time. If you create too many snapshot locks and the limit is reached, OneFS might be unable to apply a snapshot lock when necessary. For this reason, it is recommended that you do not create snapshot locks.

Snapshot reserve
The snapshot reserve enables you to set aside a minimum portion of the cluster-storage capacity specifically for snapshots. If specified, the percentage of cluster capacity that is reserved for snapshots is not accessible to any other OneFS operation. The snapshot reserve does not limit the amount of space that snapshots are allowed to consume on the cluster. Snapshots can consume more than the percentage of capacity specified by the snapshot reserve. It is recommended that you do not specify a snapshot reserve.

SnapshotIQ license functionality


You can create snapshots only if you configure a SnapshotIQ license on a cluster. However, you can view snapshots and snapshot locks that are created for internal use by OneFS without configuring a SnapshotIQ license. The following table describes the functionality that is available with the SnapshotIQ license configured and unconfigured:

Functionality                               Configured   Unconfigured
Create snapshots and snapshot schedules     Yes          No
Configure SnapshotIQ settings               Yes          No
View snapshot schedules                     Yes          Yes
Delete snapshots                            Yes          Yes
Access snapshot data                        Yes          Yes
View snapshots                              Yes          Yes

If you unconfigure a SnapshotIQ license, you will not be able to create new snapshots, all snapshot schedules will be disabled, and you will not be able to modify snapshots or snapshot settings. However, you will still be able to delete snapshots and access data contained in snapshots.

Creating snapshots with SnapshotIQ


To create snapshots, you must configure the SnapshotIQ license on the cluster. You can create snapshots either by creating a snapshot schedule or by manually generating an individual snapshot.

Manual snapshots are useful if you want to create a snapshot immediately, or at a time that is not specified in a snapshot schedule. For example, if you plan to make changes to your file system but are unsure of the consequences, you can capture the current state of the file system in a snapshot before you make the changes.

Before capturing snapshots, consider that reverting a snapshot requires a SnapRevert domain to exist for the directory that is being reverted. If you intend to revert snapshots for directories, it is recommended that you create SnapRevert domains for those directories while the directories are empty.

Create a SnapRevert domain


Before you can revert a snapshot that contains a directory, you must create a SnapRevert domain for the directory. It is recommended that you create SnapRevert domains for a directory while the directory is empty.

The root path of the SnapRevert domain must be the same as the root path of the snapshot. For example, a domain with a root path of /ifs/data/media cannot be used to revert a snapshot with a root path of /ifs/data/media/archive. To revert /ifs/data/media/archive, you must create a SnapRevert domain with a root path of /ifs/data/media/archive.

1. Click Cluster Management > Operations > Operations Summary.
2. In the Running Jobs area, click Start Job.
3. From the Job list, select Domain Mark.
4. Optional: To specify a priority for the job, from the Priority list, select a priority. Lower values indicate a higher priority. If you do not specify a priority, the job is assigned the default domain mark priority.
5. Optional: To specify the amount of cluster resources the job is allowed to consume, from the Impact policy list, select an impact policy. If you do not specify a policy, the job is assigned the default domain mark policy.
6. From the Domain type list, select snaprevert.
7. Ensure that the Delete domain check box is cleared.
8. In the Domain root path field, type the path of the directory you want to create a SnapRevert domain for, and then click Start.


Create a snapshot
You can create a snapshot of a directory.
1. Click Data Protection > SnapshotIQ > Summary.
2. Click Capture a new snapshot.
3. Optional: To modify the default name of the snapshot, in the Capture a Snapshot area, in the Snapshot Name field, type a name.
4. In the Directory Path field, specify the directory that you want the snapshot to contain.
5. Optional: To create an alternative name for the snapshot, specify a snapshot alias.
   a. Next to Create an Alias, click Yes.
   b. To modify the default snapshot alias name, in the Alias Name field, type an alternative name for the snapshot.
6. Optional: To assign a time at which OneFS automatically deletes the snapshot, specify an expiration period.
   a. Next to Snapshot Expiration, click Snapshot Expires on.
   b. In the calendar, specify the day that you want the snapshot to be automatically deleted.
7. Click Capture.
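You can also create a snapshot from the command line. The following is a hedged sketch: it assumes that the isi snapshot snapshots create subcommand accepts the directory path plus --name and --expires options, in the style of the isi snapshot locks commands shown later in this chapter. The path and name are hypothetical; confirm the syntax with isi snapshot snapshots create --help.

isi snapshot snapshots create /ifs/data/media --name MediaSnap --expires 1M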

Create a snapshot schedule


You can create a snapshot schedule to continuously generate snapshots of directories.
1. Click Data Protection > SnapshotIQ > Snapshot Schedules.
2. Click Create a snapshot schedule.
3. Optional: To modify the default name of the snapshot schedule, in the Create a Snapshot Schedule area, in the Schedule Name field, type a name for the snapshot schedule.
4. Optional: To modify the default naming pattern for the snapshot schedule, in the Naming pattern for Generated Snapshots field, type a naming pattern. Each snapshot generated according to this schedule is assigned a name based on the pattern. For example, the following naming pattern is valid:
WeeklyBackup_%m-%d-%Y_%H:%M

The example produces names similar to the following:


WeeklyBackup_07-13-2012_14:21

5. In the Directory Path field, specify the directory that you want to be contained in snapshots that are generated according to this schedule.
6. Specify how often you want to generate snapshots according to the schedule:
   - To generate snapshots every day, or to skip generating snapshots for a specified number of days, from the Snapshot Frequency list, select Daily and specify how often you want to generate snapshots.

   - To generate snapshots on specific days of the week, and optionally skip generating snapshots for a specified number of weeks, from the Snapshot Frequency list, select Weekly and specify how often you want to generate snapshots.

   - To generate snapshots on specific days of the month, and optionally skip generating snapshots for a specified number of months, from the Snapshot Frequency list, select Monthly and specify how often you want to generate snapshots.
   - To generate snapshots on specific days of the year, from the Snapshot Frequency list, select Yearly and specify how often you want to generate snapshots.
   A snapshot schedule cannot span multiple days. For example, you cannot specify to begin generating snapshots at 5:00 PM Monday and end at 5:00 AM Tuesday. To continuously generate snapshots for a period greater than a day, you must create two snapshot schedules. For example, to generate snapshots from 5:00 PM Monday to 5:00 AM Tuesday, create one schedule that generates snapshots from 5:00 PM to 11:59 PM on Monday, and another schedule that generates snapshots from 12:00 AM to 5:00 AM on Tuesday.
7. Optional: To assign an alternative name to the most recent snapshot generated by the schedule, specify a snapshot alias.
   a. Next to Create an Alias, click Yes.
   b. To modify the default snapshot alias name, in the Alias Name field, type an alternative name for this snapshot.
8. Optional: To specify a length of time that snapshots generated according to the schedule exist on the cluster before OneFS automatically deletes them, specify an expiration period.
   a. Next to Snapshot Expiration, click Snapshots expire.
   b. Next to Snapshots expire, specify how long you want to retain the snapshots generated according to this schedule.
9. Click Create.
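A command-line equivalent also exists in the isi snapshot namespace. The sketch below is an assumption about the OneFS 7.0 syntax, in particular the argument order and the plain-English schedule grammar, so verify it with isi snapshot schedules create --help before use; all names, paths, and patterns are hypothetical.

isi snapshot schedules create WeeklyBackup /ifs/data/media "WeeklyBackup_%m-%d-%Y_%H:%M" "every Friday at 11:00 PM" --duration 1M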

Snapshot naming patterns


If you schedule snapshots to be automatically generated by OneFS, either according to a snapshot schedule or a replication policy, you must assign a snapshot naming pattern that determines how the snapshots are named. Snapshot naming patterns contain variables that include information about how and when the snapshot is created. The following variables can be included in a snapshot naming pattern:

%A: The day of the week.
%a: The abbreviated day of the week. For example, if the snapshot is generated on a Sunday, %a is replaced with "Sun".
%B: The name of the month.
%b: The abbreviated name of the month. For example, if the snapshot is generated in September, %b is replaced with "Sep".
%C: The first two digits of the year. For example, if the snapshot is created in 2012, %C is replaced with "20".
%c: The time and day. This variable is equivalent to specifying "%a %b %e %T %Y".
%d: The two-digit day of the month.
%e: The day of the month. A single-digit day is preceded by a blank space.
%F: The date. This variable is equivalent to specifying "%Y-%m-%d".
%G: The year. This variable is equivalent to specifying "%Y". However, if the snapshot is created in a week that has less than four days in the current year, the year that contains the majority of the days of the week is displayed. The first day of the week is calculated as Monday. For example, if a snapshot is created on Sunday, January 1, 2017, %G is replaced with "2016", because only one day of that week is in 2017.
%g: The abbreviated year. This variable is equivalent to specifying "%y". However, if the snapshot was created in a week that has less than four days in the current year, the year that contains the majority of the days of the week is displayed. The first day of the week is calculated as Monday. For example, if a snapshot is created on Sunday, January 1, 2017, %g is replaced with "16", because only one day of that week is in 2017.
%H: The hour, represented on the 24-hour clock. Single-digit hours are preceded by a zero. For example, if a snapshot is created at 1:45 AM, %H is replaced with "01".
%h: The abbreviated name of the month. This variable is equivalent to specifying "%b".
%I: The hour, represented on the 12-hour clock. Single-digit hours are preceded by a zero. For example, if a snapshot is created at 1:45 AM, %I is replaced with "01".
%j: The numeric day of the year. For example, if a snapshot is created on February 1, %j is replaced with "32".
%k: The hour, represented on the 24-hour clock. Single-digit hours are preceded by a blank space.
%l: The hour, represented on the 12-hour clock. Single-digit hours are preceded by a blank space. For example, if a snapshot is created at 1:45 AM, %l is replaced with "1".
%M: The two-digit minute.
%m: The two-digit month.
%p: AM or PM.
%{PolicyName}: The name of the replication policy that the snapshot was created for. This variable is valid only if you are specifying a snapshot naming pattern for a replication policy.
%R: The time. This variable is equivalent to specifying "%H:%M".
%r: The time. This variable is equivalent to specifying "%I:%M:%S %p".
%S: The two-digit second.
%s: The second, represented in UNIX or POSIX time.
%{SrcCluster}: The name of the source cluster of the replication policy that the snapshot was created for. This variable is valid only if you are specifying a snapshot naming pattern for a replication policy.
%T: The time. This variable is equivalent to specifying "%H:%M:%S".
%U: The two-digit numerical week of the year. Numbers range from 00 to 53. The first day of the week is calculated as Sunday.
%u: The numerical day of the week. Numbers range from 1 to 7. The first day of the week is calculated as Monday. For example, if a snapshot is created on Sunday, %u is replaced with "7".
%V: The two-digit numerical week of the year that the snapshot was created in. Numbers range from 01 to 53. The first day of the week is calculated as Monday. If the week of January 1 is four or more days in length, that week is counted as the first week of the year.
%v: The day that the snapshot was created. This variable is equivalent to specifying "%e-%b-%Y".
%W: The two-digit numerical week of the year that the snapshot was created in. Numbers range from 00 to 53. The first day of the week is calculated as Monday.
%w: The numerical day of the week that the snapshot was created on. Numbers range from 0 to 6. The first day of the week is calculated as Sunday. For example, if the snapshot was created on Sunday, %w is replaced with "0".
%X: The time that the snapshot was created. This variable is equivalent to specifying "%H:%M:%S".
%Y: The year that the snapshot was created in.
%y: The last two digits of the year that the snapshot was created in. For example, if the snapshot was created in 2012, %y is replaced with "12".
%Z: The time zone that the snapshot was created in.
%z: The offset from Coordinated Universal Time (UTC) of the time zone that the snapshot was created in. If preceded by a plus sign, the time zone is east of UTC. If preceded by a minus sign, the time zone is west of UTC.
%+: The time and date that the snapshot was created. This variable is equivalent to specifying "%a %b %e %X %Z %Y".
%%: Escapes a percent sign. For example, "100%%" is replaced with "100%".

Managing snapshots
You can view and delete snapshots, and you can modify snapshot attributes: the name, expiration period, and alias of an existing snapshot. However, you cannot modify the data contained in a snapshot; snapshot data is read-only.

Reducing snapshot disk-space usage


If multiple snapshots contain the same directories, deleting one of the snapshots might not free the entire amount of space that the system reports as the size of the snapshot.


The size of a snapshot is the maximum amount of data that might be freed if the snapshot is deleted. Be aware of the following considerations when attempting to reduce the capacity used by snapshots:
- Deleting a snapshot frees only the space that is taken up exclusively by that snapshot. If two snapshots reference the same stored data, that data is not freed until both snapshots are deleted. Remember that snapshots store data contained in all subdirectories of the root directory; if snapshot_one contains /ifs/data/, and snapshot_two contains /ifs/data/dir, the two snapshots most likely share data.
- If you delete a directory, and then re-create it, a snapshot containing the directory stores the entire re-created directory, even if the files in that directory are never modified.
- Deleting multiple snapshots that contain the same directories is more likely to free data than deleting multiple snapshots that contain different directories.
- If multiple snapshots of the same directories exist, deleting the older snapshots is more likely to free disk space than deleting newer snapshots. Snapshots store only data that cannot be found on the file system or in another snapshot, so if you delete the oldest snapshot of a directory, the amount of space reported as the size of the snapshot is freed.
- Snapshots that are assigned expiration dates are automatically marked for deletion by the snapshot daemon. If the daemon is disabled, snapshots are not automatically deleted by the system.

Delete snapshots
You can delete a snapshot if you no longer want to access the data contained in the snapshot.

Disk space occupied by deleted snapshots is freed when the snapshot delete job runs. Also, if you delete a snapshot that contains clones or cloned files, data in a shadow store might no longer be referenced by files on the cluster; unreferenced data in a shadow store is deleted when the shadow store delete job runs. OneFS routinely runs both the shadow store delete and snapshot delete jobs, but you can also run the jobs manually at any time.

1. Click Data Protection > SnapshotIQ > Snapshots.
2. Specify the snapshots that you want to delete.
   a. For each snapshot you want to delete, in the Saved File System Snapshots table, in the row of the snapshot, select the check box.
   b. From the Select an action list, select Delete.
   c. In the confirmation dialog box, click Delete.
3. Optional: To increase the speed at which deleted snapshot data is freed on the cluster, run the snapshot delete job.
   a. Navigate to Cluster Management > Operations.
   b. In the Running Jobs area, click Start Job.
   c. From the Job list, select SnapshotDelete.
   d. Click Start.
4. Optional: To increase the speed at which deleted data shared between cloned files is freed on the cluster, run the shadow store delete job. Run the shadow store delete job only after you run the snapshot delete job.

   a. Navigate to Cluster Management > Operations.
   b. In the Running Jobs area, click Start Job.
   c. From the Job list, select ShadowStoreDelete.
   d. Click Start.

Modify a snapshot
You can modify the name and expiration date of a snapshot.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. In the Saved File System Snapshots table, in the row of the snapshot that you want to modify, click View Details.
3. In the Snapshot Details area, modify snapshot attributes.
4. Next to each snapshot attribute that you modified, click Save.

Modify a snapshot alias


You can modify the alias of a snapshot to assign a different alternative name to the snapshot.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. Above the Saved File System Snapshots table, click View snapshot aliases.
3. In the Snapshot Aliases table, in the row of the alias that you want to modify, click View details.
4. In the Snapshot Alias Details pane, in the Alias Name area, click Edit.
5. In the Alias Name field, type a new alias name.
6. Click Save.

View snapshots
You can view all snapshots.
1. Click Data Protection > SnapshotIQ > Snapshots.
2. In the Saved File System Snapshots table, view snapshots.

Snapshot information
You can view information about snapshots, including the total amount of space consumed by all snapshots. The following information is displayed in the Saved Snapshots area:

- SnapshotIQ Status: Indicates whether the SnapshotIQ tool is accessible on the cluster.
- Total Number of Saved Snapshots: Indicates the total number of snapshots that exist on the cluster.
- Total Number of Snapshots Pending Deletion: Indicates the total number of snapshots that were deleted on the cluster since the last snapshot delete job was run. The space consumed by the deleted snapshots is not freed until the snapshot delete job runs again.
- Total Number of Snapshot Aliases: Indicates the total number of snapshot aliases that exist on the cluster.
- Capacity Used by Saved Snapshots: Indicates the total amount of space consumed by all snapshots.

Restoring snapshot data


You can restore snapshot data through various methods. You can revert a snapshot, or access snapshot data through the snapshots directory. You can revert all data contained in a snapshot path to the state it was in when the snapshot was created. You can also access individual files and directories through the snapshots directory. From the snapshots directory, you can either clone a file or copy a directory or a file. The snapshots directory can be accessed through Windows Explorer or a UNIX command line. You can disable and enable access to the snapshots directory for any of these methods through snapshots settings.

Revert a snapshot
You can revert a directory back to the state it was in when a snapshot was taken.

Before you begin:
- Create a SnapRevert domain for the directory.
- Create a snapshot of the directory.

1. Click Cluster Management > Operations > Operations Summary.
2. In the Running Jobs area, click Start job.
3. From the Job list, select SnapRevert.
4. Optional: To specify a priority for the job, from the Priority list, select a priority. Lower values indicate a higher priority. If you do not specify a priority, the job is assigned the default snapshot revert priority.
5. Optional: To specify the amount of cluster resources the job is allowed to consume, from the Impact policy list, select an impact policy. If you do not specify a policy, the job is assigned the default snapshot revert policy.
6. In the Snapshot field, type the name or ID of the snapshot that you want to revert, and then click Start.

Restore a file or directory using Windows Explorer


If the Microsoft Shadow Copy Client is installed on your computer, you can use it to restore files and directories that are stored in snapshots.

You can access up to 64 snapshots of a directory through Windows Explorer, starting with the most recent snapshot. To access more than 64 snapshots of a directory, access the cluster through a UNIX command line.

1. In Windows Explorer, navigate to the directory that you want to restore, or to the directory that contains the file that you want to restore.
2. Right-click the folder, and then click Properties.
3. In the Properties window, click the Previous Versions tab.
4. Select the version of the folder that you want to restore, or the version of the folder that contains the version of the file that you want to restore.

5. Restore the version of the file or directory:
   - To restore all files in the selected directory, click Restore.
   - To copy the selected directory to another location, click Copy, and then specify a location to copy the directory to.
   - To restore a specific file, click Open, and then copy the file into the original directory, replacing the existing copy with the snapshot version.

Restore a file or directory through a UNIX command line


You can restore a file or directory through a UNIX command line.
1. Open a connection to the cluster through a UNIX command line.
2. To view the contents of the snapshot that you want to restore a file or directory from, run the ls command on a subdirectory of the snapshots root directory. For example, the following command displays the contents of the /archive directory contained in Snapshot2012Jun04:

ls /ifs/.snapshot/Snapshot2012Jun04/archive

3. Copy the file or directory by using the cp command. For example, the following command creates a copy of file1:

cp /ifs/.snapshot/Snapshot2012Jun04/archive/file1 /ifs/archive/file1_copy

Clone a file from a snapshot


You can clone a file from a snapshot. However, you cannot clone a file from a non-snapshot directory.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. To view the contents of the snapshot that you want to clone a file from, run the ls command on a subdirectory of the snapshots root directory. For example, the following command displays the contents of the /archive directory contained in Snapshot2012Jun04:

ls /ifs/.snapshot/Snapshot2012Jun04/archive

3. Clone a file from the snapshot by running the cp command with the -c option. For example, the following command clones test.txt from Snapshot2012Jun04:

cp -c /ifs/.snapshot/Snapshot2012Jun04/archive/test.txt /ifs/archive/test_clone.txt

Managing snapshot schedules


You can modify, delete, and view snapshot schedules. You can modify the name, path, naming pattern, schedule, expiration period, and alias of a snapshot schedule. Modifications to a snapshot schedule affect only the snapshots that are generated after the schedule is modified.

Modify a snapshot schedule


You can modify a snapshot schedule. Any changes are applied only to snapshots generated after the modifications are made; existing snapshots are not affected by schedule modifications.

If you modify the alias of a snapshot schedule, the alias is assigned to the next snapshot generated based on the schedule. However, the old alias is not removed from the last snapshot that it was assigned to; unless you manually remove it, the alias remains attached to that snapshot.

1. Click Data Protection > SnapshotIQ > Snapshot Schedules.
2. In the Snapshot Schedules table, in the row of the snapshot schedule you want to modify, click View details.
3. In the Snapshot Schedule Details area, modify snapshot schedule attributes.
4. Next to each snapshot schedule attribute that you modified, click Save.

Delete a snapshot schedule


You can delete a snapshot schedule. Deleting a snapshot schedule does not delete snapshots that were previously generated according to the schedule.
1. Click Data Protection > SnapshotIQ > Snapshot Schedules.
2. In the Snapshot Schedules table, in the row of the snapshot schedule you want to delete, click Delete.
3. In the Confirm Delete dialog box, click Delete.

View snapshot schedules


You can view snapshot schedules.
1. Click Data Protection > SnapshotIQ > Snapshot Schedules.
2. In the Snapshot Schedules table, view snapshot schedules.
3. Optional: To view detailed information about a snapshot schedule, in the Snapshot Schedules table, in the row of the snapshot schedule that you want to view, click View details. Snapshot schedule settings are displayed in the Snapshot Schedule Details area. Snapshots that are scheduled to be generated according to the schedule are displayed in the Snapshot Calendar area.

Managing snapshot locks


You can create and delete snapshot locks, and modify their expiration periods.

It is recommended that you do not create, delete, or modify snapshot locks unless you are instructed to do so by Isilon Technical Support. Deleting a snapshot lock that was created by OneFS might result in data loss: the corresponding snapshot might be deleted while it is still in use by OneFS, and if OneFS cannot access a snapshot that is necessary for an operation, the operation will malfunction and data loss might result. Modifying the expiration period of a snapshot lock can have similar impacts; reducing the expiration period of a snapshot lock that was created by OneFS might cause the lock to be deleted prematurely.

Create a snapshot lock


You can create snapshot locks that prevent snapshots from being deleted. Although you can prevent a snapshot from being automatically deleted by creating a snapshot lock, it is recommended that you instead extend the expiration period of the snapshot by modifying the snapshot.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Create a snapshot lock by running the isi snapshot locks create command. For example, the following command applies a snapshot lock to "Snapshot April2012", sets the lock to expire in one month, and adds a description of "Maintenance Lock":

isi snapshot locks create "Snapshot April2012" --expires "1M" --comment "Maintenance Lock"

Modify a snapshot lock


You can modify the expiration date of a snapshot lock. It is recommended that you do not modify the expiration dates of snapshot locks.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Modify a snapshot lock by running the isi snapshot locks modify command. For example, the following command sets a snapshot lock that is applied to "Snapshot 2012Apr16" and has an ID of 1 to expire in two days:

isi snapshot locks modify "Snapshot 2012Apr16" 1 --expires "2D"

Delete a snapshot lock


You can delete a snapshot lock. It is recommended that you do not delete snapshot locks.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Delete a snapshot lock by running the isi snapshot locks delete command. For example, the following command deletes a snapshot lock that is applied to "Snapshot 2012Apr16" and has an ID of 1:

isi snapshot locks delete "Snapshot 2012Apr16" 1

The system prompts you to confirm that you want to delete the snapshot lock.
3. Type yes and then press ENTER.

Snapshot lock information


You can view snapshot lock information through the isi snapshot locks view and isi snapshot locks list commands. You can view the following information about snapshot locks:

- ID: Numerical identification number of the snapshot lock.
- Comment: Description of the snapshot lock. This can be any string specified by a user.
- Expires: The date that the snapshot lock will be automatically deleted by OneFS.
- Count: The number of times the snapshot lock is held. The file clone operation can hold a single snapshot lock multiple times; if multiple file clones are created simultaneously, the file clone operation holds the same lock multiple times, rather than creating multiple locks. If you delete a snapshot lock that is held more than once, you delete only one of the instances of the lock. To delete a snapshot lock that is held multiple times, you must delete the lock the same number of times as displayed in the count field.
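For example, assuming that locks are addressed by snapshot name, and by lock ID for a single lock, as the create, modify, and delete examples in this section suggest (confirm with isi snapshot locks list --help):

isi snapshot locks list "Snapshot April2012"
isi snapshot locks view "Snapshot April2012" 1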

Configure SnapshotIQ settings


You can configure SnapshotIQ settings that determine how snapshots can be created and the methods that users can use to access snapshot data.
1. Click Data Protection > SnapshotIQ > Settings.
2. Modify SnapshotIQ settings, and then click Save.

SnapshotIQ settings
SnapshotIQ settings determine how snapshots behave and can be accessed. The following SnapshotIQ settings can be configured:

- Snapshot Scheduling: Determines whether snapshots can be generated. Disabling snapshot generation might cause some OneFS operations to fail. It is recommended that you do not disable this setting.
- Auto-create Snapshots: Determines whether snapshots are automatically generated according to snapshot schedules.
- Auto-delete Snapshots: Determines whether snapshots are automatically deleted according to their expiration dates.

NFS Visibility & Accessibility:
- Root Directory Accessible: Determines whether snapshot directories are accessible through NFS.
- Root Directory Visible: Determines whether snapshot directories are visible through NFS.
- Sub-directories Accessible: Determines whether snapshot subdirectories are accessible through NFS.

SMB Visibility & Accessibility:
- Root Directory Accessible: Determines whether snapshot directories are accessible through SMB.
- Root Directory Visible: Determines whether snapshot directories are visible through SMB.
- Sub-directories Accessible: Determines whether snapshot subdirectories are accessible through SMB.

Local Visibility & Accessibility:
- Root Directory Accessible: Determines whether snapshot directories are accessible through the local file system, accessed through an SSH connection or the local console.
- Root Directory Visible: Determines whether snapshot directories are visible through the local file system, accessed through an SSH connection or the local console.
- Sub-directories Accessible: Determines whether snapshot subdirectories are accessible through the local file system, accessed through an SSH connection or the local console.
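From the command line, the current values can be inspected with the settings subcommand. This is a hedged sketch that assumes a view action parallel to the modify action shown in the next topic:

isi snapshot settings view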

Set the snapshot reserve


You can specify a minimum percentage of cluster-storage capacity that you want to reserve for snapshots.

The snapshot reserve does not limit the amount of space that snapshots are allowed to consume on the cluster; snapshots can consume more than the percentage of capacity specified by the reserve. It is recommended that you do not specify a snapshot reserve.

1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Set the snapshot reserve by running the isi snapshot settings modify command with the --reserve option. For example, the following command sets the snapshot reserve to 20%:
isi snapshot settings modify --reserve 20


CHAPTER 5 Data replication with SyncIQ

OneFS enables you to replicate data from one Isilon cluster to another through the SyncIQ tool. To replicate data from one Isilon cluster to another, you must configure a SyncIQ license on both clusters.

You can specify which data you want to replicate at the directory level, with the option to exclude specific files and sub-directories from being replicated. SyncIQ creates and references snapshots to replicate a consistent point-in-time image of a root directory. Metadata, such as access control lists (ACLs) and alternate data streams (ADS), is replicated along with data.

You can use data replication to retain a consistent backup copy of your data on another Isilon cluster. OneFS offers automated failover and failback capabilities that enable you to continue operations on another Isilon cluster if a primary cluster becomes unavailable.
- Replication policies and jobs
- Replication snapshots
- Data failover and failback with SyncIQ
- Recovery times and objectives for SyncIQ
- SyncIQ license functionality
- Creating replication policies
- Managing replication to remote clusters
- Initiating data failover and failback with SyncIQ
- Managing replication policies
- Managing replication to the local cluster
- Managing replication performance rules
- Managing replication reports
- Managing failed replication jobs


Replication policies and jobs


Data replication is coordinated according to replication policies and jobs. Replication policies specify what data is replicated, where the data is replicated to, and how often the data is replicated. Replication jobs are the operations that replicate data from one Isilon cluster to another. OneFS generates replication jobs according to replication policies.

A replication policy specifies two clusters: the source and the target. The cluster on which the replication policy exists is the source cluster. The cluster that data is being replicated to is the target cluster. When a replication policy starts, OneFS generates a replication job for the policy. When a replication job runs, files from a directory on the source cluster are replicated to a directory on the target cluster; these directories are known as source and target directories. After the first replication job created by a replication policy finishes, the target directory and all files contained in the target directory are set to a read-only state, and can be modified only by other replication jobs belonging to the same replication policy.

You can configure replication policies to replicate data according to a schedule; however, all replication policies can be started manually at any time. There is no limit to the number of replication policies that can exist on a cluster. It is recommended that you ensure that ACL policy settings are the same across source and target clusters.

You can create two types of replication policies: synchronization policies and copy policies. A synchronization policy maintains an exact replica of the source directory on the target cluster. If a file or sub-directory is deleted from the source directory, the file or directory is deleted from the target cluster when the policy is run again. You can use synchronization policies to fail over and fail back data between source and target clusters. When a source cluster becomes unavailable, you can fail over data to a target cluster and make the data available to clients. When the source cluster becomes available again, you can fail back the data to the source cluster.

A copy policy maintains recent versions of the files that are stored on the source cluster. However, files that are deleted on the source cluster are not deleted from the target cluster. Failback is not supported for copy policies. Copy policies are most commonly used for archival purposes: they enable you to remove files from the source cluster without losing those files on the target cluster. Deleting files on the source cluster improves performance on the source cluster while the deleted files are maintained on the target cluster. This can be useful if, for example, your source cluster is being used for production purposes and your target cluster is being used only for archiving.

When a replication policy is started, OneFS creates a replication job on the source cluster. After a job is created for a replication policy, OneFS does not create another job for the policy until the existing job completes. Any number of replication jobs can exist on a cluster at a given time; however, only five replication jobs can run on a source cluster at the same time. If more than five replication jobs exist on a cluster, the first five jobs run while the others are queued to run. The number of replication jobs that a single target cluster can support concurrently is dependent on the number of workers available on the target cluster.
There is no limit to the amount of data that can be replicated in a replication job; any number of files and directories can be replicated by a single job. You can control the amount of cluster resources and network bandwidth that data synchronization consumes, enabling you to prevent a large replication job from overwhelming the system. Because each node in a cluster is able to send and receive data, the speed at which data is replicated increases for larger clusters.
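The difference between the two policy types can be illustrated with a simple scenario. Suppose that /ifs/data/dir/file1 (a hypothetical path) is replicated by an initial job and is then deleted from the source directory:

Policy type     Result on the target after the next job runs
Synchronize     file1 is deleted from the target directory
Copy            file1 remains in the target directory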


Source and target cluster association


OneFS associates a replication policy with a target cluster by marking the target cluster when the job is run for the first time. Even if you modify the name or IP address of the target cluster, the mark persists on the target cluster. When a replication policy is run, OneFS checks the mark to ensure that data is being replicated to the correct location. On the target cluster, you can manually break an association between a replication policy and target directory. Breaking the association between a source and target cluster causes the mark on the target cluster to be deleted. You might want to manually break a target association if an association is obsolete. If you break the association of a policy, the policy is disabled on the source cluster and you are not able to run the policy. If you want to run the disabled policy again, you must reset the replication policy. Breaking an association of a policy causes either a full or differential replication to occur the next time the replication policy is run. During a full or differential replication, OneFS creates a new association between the source and target. Depending on the amount of data being replicated, a full or differential replication can take a very long time to complete.

Full and differential replication


If a replication policy encounters an error that cannot be fixed (for example, if the association was broken on the target cluster), you might need to reset the replication policy. If you reset a replication policy, OneFS performs either a full or differential replication the next time the policy is run. You can specify the type of replication that OneFS performs.

During a full replication, OneFS transfers all data from the source cluster regardless of what data exists on the target cluster. Full replications consume large amounts of network bandwidth and can take a very long time to complete. However, a full replication consumes less CPU than a differential replication.

During a differential replication, OneFS transfers only data that does not already exist on the target cluster; OneFS does this by first checking whether a file already exists on the target cluster before sending the file. If a file already exists on the target cluster, OneFS does not transfer the file to the target cluster. A differential replication consumes less network bandwidth than a full replication; however, differential replications consume more CPU. Differential replication can be much faster than a full replication, provided there is an adequate amount of available CPU for the differential replication job to consume.
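To summarize the trade-offs described above:

Replication type   Data transferred                            Network bandwidth   CPU usage
Full               All data in the source directory            Higher              Lower
Differential       Only data missing from the target cluster   Lower               Higher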

Controlling replication job resource consumption


You can create rules that limit the network traffic created and the rate at which files are sent by replication jobs. You can also specify the number of workers that are spawned by a replication policy to limit the amount of cluster resources consumed. Also, you can restrict a replication policy to connect only to a specific storage pool.

You can create network-traffic rules that control the amount of network traffic generated by replication jobs during specified time periods. These rules can be useful if, for example, you want to limit the amount of network traffic created during other resource-intensive operations. You can create multiple network traffic rules to enforce different limitations at different times. For example, you might allocate a small amount of network bandwidth during peak business hours, but allow unlimited network bandwidth during non-peak hours.
When a replication job runs, OneFS generates workers on the source and target cluster. Workers on the source cluster send data while workers on the target cluster write data. You can modify the maximum number of workers generated per node to control the amount of resources that a replication job is allowed to consume. For example, you can increase the maximum number of workers per node to increase the speed at which data is replicated to the target cluster. OneFS generates no more than 40 workers for a replication job.

You can also reduce resource consumption through file-operation rules that limit the rate at which replication policies are allowed to send files. However, it is recommended that you do not create file-operation rules unless the files you intend to replicate are predictably similar in size, and not especially large.

Replication reports
OneFS generates reports that contain detailed information about replication job operations. You cannot customize the content of replication policy reports. OneFS routinely deletes replication policy reports. You can specify the maximum number of replication reports retained by OneFS and the length of time that replication reports are retained by OneFS. If the maximum number of replication reports is exceeded on a cluster, OneFS deletes reports beginning with the oldest reports. If a replication job fails, and you run the job again, OneFS creates two reports for that replication job and consolidates those reports into a single report. This is also true if you manually cancel a replication job and then start the replication policy again. If you delete a replication policy, OneFS automatically deletes any reports that were generated for that policy.

Replication snapshots
OneFS generates snapshots to facilitate replication, failover, and failback between Isilon clusters. Snapshots generated by OneFS can also be used for archival purposes on the target cluster.

Source cluster snapshots


OneFS generates snapshots on the source cluster to ensure that a consistent point-in-time image is replicated, and that unaltered data is not sent to the target cluster.

Before a replication job is run, OneFS takes a snapshot of the source directory. OneFS then replicates data according to the snapshot, rather than the current state of the cluster. This enables modifications to source-directory files while ensuring that an exact point-in-time image of the source directory is replicated. For example, if a replication job of /ifs/data/dir/ starts at 1:00 PM and finishes at 1:20 PM, and /ifs/data/dir/file is modified at 1:10 PM, the modifications are not reflected on the target cluster, even if /ifs/data/dir/file is not replicated until 1:15 PM.

You can replicate data according to a snapshot generated with the SnapshotIQ tool. If you replicate according to a user-generated snapshot, OneFS does not generate another snapshot of the source directory. This method can be useful if you want to replicate identical copies of data to multiple Isilon clusters.

Source snapshots are also used by OneFS to ensure that replication jobs do not transfer unmodified data. When a job is created for a replication policy, OneFS checks to see if this is the first job created for the policy. If this is not the first job created for the policy, OneFS compares the snapshot generated for the earlier job with the snapshot generated for the new job. OneFS replicates only data that has changed since the last time a snapshot was generated for the replication policy. When a replication job completes, the system deletes the previous source-cluster snapshot and retains the most recent snapshot until the next job runs.

Target cluster snapshots


When a replication job is run, OneFS generates a snapshot on the target cluster to facilitate failover operations. When the next replication job is created for the replication policy, the job creates a new snapshot and deletes the old one. If the SnapshotIQ module is licensed on the target cluster, you can configure a replication policy to generate additional snapshots that remain on the target cluster even as subsequent replication jobs run. Snapshots are generated to facilitate failover on the target cluster regardless of whether the SnapshotIQ module is licensed. These snapshots are generated when a replication job completes. OneFS retains only one of these snapshots per replication policy, and deletes the old snapshot once the new snapshot is created. If the SnapshotIQ module is licensed on the target cluster, you can configure OneFS to generate snapshots on the target cluster that are not automatically deleted when subsequent replication jobs run. These snapshots contain the same data as the snapshots that are generated for failover purposes; however, you can configure how long these snapshots are retained on the target cluster. You can access these snapshots the same way that you access other snapshots generated on a cluster. These snapshots provide a consistent view of replicated files even after subsequent transfers.

Data failover and failback with SyncIQ


OneFS enables you to perform automated data failover and failback operations between Isilon clusters. If a cluster is rendered unusable, you can fail over to another Isilon cluster, enabling clients to access their data on the other cluster. If the unusable cluster becomes accessible again, you can fail back to the original Isilon cluster.

For the purposes of explaining failover and failback procedures, the cluster originally accessed by clients is referred to as the primary cluster, and the cluster that client data is originally replicated to is referred to as the secondary cluster. Failover is the process that allows clients to modify data on a secondary cluster. Failback is the process that allows clients to access data on the primary cluster again, and begins to replicate data back to the secondary cluster.

Failover and failback can be useful in disaster recovery procedures. For example, if a primary cluster is damaged by a natural disaster, clients can be migrated to a secondary cluster. Then, after the primary cluster is repaired, the clients can be migrated back to the primary cluster. You can also use failover and failback to facilitate scheduled cluster maintenance. For example, if you are upgrading the primary cluster, you might want to migrate clients to a secondary cluster while the primary cluster is upgraded. You can then migrate clients back to the primary cluster after maintenance is complete.

Automated data failover and failback are not supported for SmartLock directories. However, you can manually fail over and fail back SmartLock directories.


Data failover
Data failover is the process of preparing data on the secondary cluster to be modified by clients. After you fail over to a secondary cluster, you can redirect clients to modify their data on the secondary cluster.

Before failover is performed, you must create and run a replication policy on the primary cluster. The failover process is initiated on the secondary cluster. Failover is performed per replication policy; if the data you want to migrate is spread across multiple replication policies, you must initiate failover for each replication policy.

You can use any replication policy to fail over. However, if the action of the replication policy is set to copy, any file that was deleted on the primary cluster will be present on the secondary cluster. When the client connects to the secondary cluster, all files that were deleted on the primary cluster are available to the client.

If you initiate failover for a replication policy while an associated replication job is running, the replication job fails, and the failover operation succeeds. Because data might be in an inconsistent state, OneFS uses the snapshot generated by the last successful replication job to revert data on the secondary cluster to the last recovery point.

If a disaster occurs on the primary cluster, any modifications to data that were made after the last successful replication job started are not reflected on the secondary cluster. When a client connects to the secondary cluster, their data appears in the same state as it was when the last successful replication job was started.

Data failback
Data failback is the process of restoring clusters to the roles they occupied before the failover was performed, with the primary cluster hosting clients and the secondary cluster being replicated to for backup. The failback process includes updating the primary cluster with all of the modifications that were made to the data on the secondary cluster, preparing the primary cluster to be accessed by clients, and resuming data replication from the primary to the secondary cluster. At the end of the failback process, you can redirect users to resume accessing their data on the primary cluster. You can fail back data with any replication policy that meets the following criteria:
• Was failed over
• Is a synchronization policy
• Does not replicate a SmartLock directory
• Does not exclude any files or directories from replication

Recovery times and objectives for SyncIQ


The Recovery Point Objective (RPO) and the Recovery Time Objective (RTO) are measurements of the impacts that a disaster will have on business operations. You can calculate your RPO and RTO for a disaster recovery with replication policies.

RPO is the maximum amount of time for which data is lost if a cluster suddenly becomes unavailable. For an Isilon cluster, the RPO is the amount of time that has passed since the last completed replication job started. The RPO is never greater than the time it takes for two consecutive replication jobs to run and complete. If a disaster occurs while a replication job is running, the data on the secondary cluster is reverted to the state it was in when the last replication job completed. For example, consider an environment in which a replication policy is scheduled to run every three hours, and replication jobs take two hours to complete. If a disaster occurs an hour after a replication job begins, the RPO is four hours, because it has been four hours since a completed job began replicating data.

RTO is the maximum amount of time required to make backup data available to clients after a disaster. The RTO is always less than or approximately equal to the RPO, depending on the rate at which replication jobs are created for a given policy.

If replication jobs run continuously, meaning that another replication job is created before the previous replication job completes, the RTO is approximately equal to the RPO. When the secondary cluster is failed over, the data on the cluster is reset to the state it was in when the last job completed; resetting the data takes an amount of time proportional to the time it took users to modify the data.

If replication jobs run on an interval, meaning that there is a period of time after a replication job completes before the next replication job starts, the relationship between RTO and RPO is dependent on whether a replication job is running when the disaster occurs. If a job is in progress when a disaster occurs, the RTO is roughly equal to the RPO. However, if a job is not running when a disaster occurs, the RTO is negligible, because the secondary cluster was not modified since the last replication job ran, and the failover procedure is essentially instantaneous.
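The RPO example above can be worked through as simple arithmetic; the clock times are illustrative:

Jobs start every three hours: 10:00, 13:00, 16:00; each job takes two hours to complete.
A disaster occurs at 14:00, one hour after the 13:00 job begins.
The last completed job started at 10:00 and finished at 12:00.
RPO = 14:00 - 10:00 = 4 hours; after failover, the secondary cluster holds the data as it was at 10:00.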

SyncIQ license functionality


You can replicate data to another Isilon cluster only if you configure a SyncIQ license on both the local cluster and the cluster you are replicating data to. If you unconfigure a SyncIQ license, you cannot create, run, or manage replication policies; all previously created replication policies are disabled. Replication policies that target the local cluster are also disabled. However, data that was previously replicated to the local cluster is still available.

Creating replication policies


You can create replication policies that determine when replication jobs are created. You can use replication jobs to replicate data with SyncIQ.

Excluding directories in replication


You can exclude directories from being replicated by replication policies even if the directories exist under the specified source directory. You cannot fail back replication policies that exclude directories.

By default, all files and directories under the source directory of a replication policy are replicated to the target cluster. However, you can prevent any of the directories under the source directory from being replicated. If you specify a directory to exclude, files and directories under the excluded directory are not replicated to the target cluster. If you specify a directory to include, only the files and directories under the included directory are replicated to the target cluster; by including directories, you are excluding any directories that are not contained in the included directories.

If you both include and exclude directories, any excluded directories must be contained in one of the included directories; otherwise, the excluded-directory setting has no effect. For example, consider a policy in which the specified root directory is /ifs/data, and the following directories are included and excluded:

Included directories:
• /ifs/data/media/music
• /ifs/data/media/movies

Excluded directories:

• /ifs/data/archive
• /ifs/data/media/music/working

In this example, the setting that excludes the /ifs/data/archive directory has no effect, because the /ifs/data/archive directory is not under either of the included directories; the /ifs/data/archive directory is not replicated regardless of whether the directory is explicitly excluded. However, the setting that excludes the /ifs/data/media/music/working directory does have an effect, because the directory would be replicated if the setting was not specified.

In addition, if you exclude a directory that contains the source directory, the excluded-directory setting has no effect. For example, if the root directory of a policy is /ifs/data, explicitly excluding the /ifs directory has no effect.

Any directories that you explicitly include or exclude must be contained in or under the specified root directory. For example, consider a policy in which the specified root directory is /ifs/data. In this example, you could include both the /ifs/data/media and the /ifs/data/users/ directories because they are under /ifs/data.

Excluding directories from a synchronization policy does not cause the directories to be deleted on the target cluster. For example, consider a replication policy that synchronizes /ifs/data on the source cluster to /ifs/data on the target cluster. If the policy excludes /ifs/data/media from replication, and /ifs/data/media/file exists on the target cluster, running the policy does not cause /ifs/data/media/file to be deleted from the target cluster.
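The effect of these settings can be sketched as a directory tree; entries other than those named in the example are illustrative:

/ifs/data                     root directory of the policy
    archive/                  not replicated (not under an included directory)
    media/
        movies/               replicated (included)
        music/                replicated (included)
            working/          not replicated (explicitly excluded)
    users/                    not replicated (not under an included directory)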

Excluding files in replication


If you do not want specific files to be replicated by a replication policy, you can exclude them from the replication process through file-matching criteria statements. You can configure file-matching criteria statements during the replication policy creation process. You cannot fail back replication policies that exclude files. A file-criteria statement can include one or more elements. Each file-criteria element contains a file attribute, a comparison operator, and a comparison value. To combine multiple criteria elements into a criteria statement, use the Boolean "AND" and "OR" operators. You can configure any number of file-criteria definitions. Configuring file-criteria statements can cause the associated jobs to run slowly. It is recommended that you specify file-criteria statements in a replication policy only if necessary. Modifying a file-criteria statement will cause a full replication to occur the next time that a replication policy is started. During a full replication, all data is replicated from the source to the target directory, and OneFS creates a new association between the source and target. Depending on the amount of data being replicated, a full replication can take a very long time to complete.
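For example, a file-criteria statement might combine two elements with the OR operator, as in the following sketch; the attribute values are illustrative, and files that match the statement would be excluded from replication:

(File name matches "*.tmp") OR (Size is greater than 1 GB)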


For synchronization policies, if you modify the comparison operators or comparison values of a file attribute, and a file no longer matches the specified file-matching criteria, the file is deleted from the target the next time the job is run. This rule does not apply to copy policies.

File criteria options


You can configure a replication policy to exclude files that meet or do not meet specific criteria. You can specify file criteria based on the following options:
Date created
  Includes or excludes files based on when the file was created. This option is available for copy policies only. You can specify a relative or specific date and time. Time settings are based on a 24-hour clock.

Date accessed
  Includes or excludes files based on when the file was last accessed. This option is available for copy policies only, and only if the global access-time-tracking option of the cluster is enabled. You can specify a relative date and time, such as "two weeks ago", or a specific date and time, such as "January 1, 2012". Time settings are based on a 24-hour clock.

Date modified
  Includes or excludes files based on when the file was last modified. This option is available for copy policies only. You can specify a relative or specific date and time. Time settings are based on a 24-hour clock.

File name
  Includes or excludes files based on the file name. You can specify to include or exclude full or partial names that contain specific text. The following wildcards are accepted:

  *
    Matches any string in place of the asterisk. For example, specifying "m*" would match "movies" and "m123".

  [ ]
    Matches any characters contained in the brackets, or a range of characters separated by a dash. For example, specifying "b[aei]t" would match "bat", "bet", and "bit", and specifying "1[4-7]2" would match "142", "152", "162", and "172".
    You can exclude characters within brackets by following the first bracket with an exclamation mark. For example, specifying "b[!ie]" would match "bat" but not "bit" or "bet".
    You can match a bracket within a bracket if it is either the first or last character. For example, specifying "[[c]at" would match "cat" and "[at".
    You can match a dash within a bracket if it is either the first or last character. For example, specifying "car[-s]" would match "cars" and "car-".

  ?
    Matches any character in place of the question mark. For example, specifying "t?p" would match "tap", "tip", and "top".

  Alternatively, you can filter file names by using POSIX regular-expression (regex) text. Regular expressions are sets of symbols and syntax that are used to match patterns of text. These expressions can be more powerful and flexible than simple wildcard characters. Isilon clusters support IEEE Std 1003.2 (POSIX.2) regular expressions. For more information about POSIX regular expressions, see the BSD man pages.

Path
  Includes or excludes files based on the file path. This option is available for copy policies only. You can specify to include or exclude full or partial paths that contain specified text. You can also include the wildcard characters *, ?, and [ ].

Size
  Includes or excludes files based on size. File sizes are represented in multiples of 1024, not 1000.

Type
  Includes or excludes files based on one of the following file-system object types:
  • Soft link
  • Regular file
  • Directory
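As a point of comparison between the wildcard and regex filtering methods, the following pairs match the same file names; the patterns are illustrative:

Wildcard "m*"   is equivalent to the regular expression "^m.*$"
Wildcard "t?p"  is equivalent to the regular expression "^t.p$"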

Configure default replication policy settings


You can configure default settings for replication policies. If you do not modify these settings when creating a replication policy, the specified default settings are applied.
1. Click Data Protection > SyncIQ > Settings.
2. In the Default Policy Settings section, specify how you want the replication policies to connect to target clusters by selecting one of the following options:
   • Click Connect to any nodes in the cluster.
   • Click Connect to only the nodes in the subnet and pool if the target cluster name specifies a SmartConnect zone.
3. Specify which nodes you want replication policies to connect to when a policy is run:
   • To connect policies to all nodes on a source cluster, click Run the policy on all nodes in this cluster.
   • To connect policies only to nodes contained in a specified subnet and pool, click Run the policy only on nodes in the specified subnet and pool, and then select the subnet and pool from the Subnet and pool list.
   Replication does not support dynamic pools.
4. Click Submit.

Create a replication policy


You can create a replication policy with SyncIQ that defines how and when data is replicated to another Isilon cluster. Configuring a SyncIQ policy is a five-step process.
Configure replication policies carefully. If you modify any of the following policy settings after the policy is run, OneFS performs either a full or differential replication the next time the policy is run:
• Source directory
• Included or excluded directories
• File-criteria statement
• Target cluster name or address
  This applies only if you target a different cluster. If you modify the IP or domain name of a target cluster, and then modify the replication policy on the source cluster to match the new IP or domain name, a full replication is not performed.
• Target directory

Configure basic policy settings


You must configure basic settings for a replication policy.
1. Click Data Protection > SyncIQ > Policies.
2. Click Add policy.
3. In the Basic Settings area, in the Policy name field, type a name for the replication policy.
4. Optional: To specify a description for the replication policy, in the Description field, type a description.
5. In the Action area, specify the type of replication policy:
   • To copy all files from the source directory to the target directory, click Copy.
     Failback is not supported for copy policies.
   • To copy all files from the source directory to the target directory and delete any files on the target directory that are not in the source directory, click Synchronize.
6. In the Run job area, specify whether the job runs according to a schedule or only when initiated by a user:
   • To run the policy only when manually initiated by a user, click Manually.
   • To run the policy automatically according to a schedule, click Scheduled, click Edit schedule, and then specify a schedule.
     If you configure a replication policy to run more than once a day, you cannot configure the interval to span across two calendar days. For example, you can configure a replication policy to run every hour starting at 7:00 PM and ending at 11:00 PM, but you cannot configure a replication policy to run every hour starting at 7:00 PM and ending at 1:00 AM.
What to do next
The next step in the process of creating a replication policy is specifying source directories and files.

Specify source directories and files


You must specify policy source directories and files for a replication policy.
1. In the Source Cluster pane, in the Root directory field, type the full path of the source directory that you want to replicate to the target cluster.
   You must specify a directory contained in /ifs. You cannot specify the /ifs/.snapshot directory or a subdirectory of it.
2. Optional: To prevent specific subdirectories of the root directory from being replicated, click the Add directory links located to the right of the Include directories and Exclude directories fields.
3. Optional: To prevent specific files from being replicated, specify a file-criteria statement.
   a. In the File criteria box, click Add criteria.
   b. In the Configure File Matching Criteria dialog box, select a file attribute and comparison operator.
What to do next
The next step in the process of creating a replication policy is specifying the target directory.

Specify the policy target directory


You must specify a target cluster and directory to replicate data to.
1. In the Target Cluster area, in the Name or address field, type one of the following:
   • The fully qualified domain name of any node in the target cluster.
   • The host name of any node in the target cluster.
   • The name of a SmartConnect zone in the target cluster.
   • The IPv4 or IPv6 address of any node in the target cluster.
   • localhost
     This will replicate data to another directory on the local cluster.
   Replication does not support dynamic pools.
2. Specify how you want the replication policy to connect to the target cluster by selecting one of the following options:
   • Click Connect to any nodes in the cluster.
   • Click Connect to only the nodes in the subnet and pool if the target cluster name specifies a SmartConnect zone.
3. In the Target directory field, type the absolute path of the directory on the target cluster that you want to replicate data to.
   If you specify an existing directory on the target cluster, ensure that the directory is not the target of another replication policy. If this is a synchronization policy, ensure that the directory is empty; all files are deleted from the target of a synchronization policy the first time the policy is run. If the specified target directory does not already exist on the target cluster, the directory is created the first time the job is run.
   It is recommended that you do not specify the /ifs directory. If you specify the /ifs directory, the entire target cluster is set to a read-only state, preventing you from storing any other data on the cluster.
   If this is a copy policy, and files exist in the target directory that are also present in the source directory, those files are overwritten when the job is run.
What to do next
The next step in the process of creating a replication policy is specifying policy target snapshot settings.

Configure policy target snapshot settings


You can optionally specify target cluster snapshot settings that determine how archival snapshots are generated on the target cluster. You can use archival snapshots the same way that you use regular snapshots for data protection. One snapshot is always retained on the target cluster to facilitate failover, regardless of these settings.
1. In the Target Cluster Archival Snapshots area, next to Create snapshots, specify whether to create archival snapshots on the target cluster by selecting one of the following options:
   • Click Create archival snapshots on the target cluster.
   • Click Do not create archival snapshots on the target cluster.

2. Optional: To modify the default alias of the last snapshot created according to this replication policy, in the Snapshot alias name field, type a new alias. You can specify the alias name as a snapshot naming pattern. For example, the following naming pattern is valid:
%{PolicyName}-on-%{SrcCluster}-latest

The previous example produces names similar to the following:


newPolicy-on-Cluster1-latest

3. Optional: To modify the snapshot naming pattern, in the Snapshot naming pattern field, type a naming pattern. Each snapshot generated for this replication policy is assigned a name based on this pattern. For example, the following naming pattern is valid:
%{PolicyName}-from-%{SrcCluster}-at-%H:%M-on-%m-%d-%Y

The example produces names similar to the following:


newPolicy-from-Cluster1-at-10:30-on-7-12-2012

4. Specify whether you want OneFS to automatically delete snapshots generated according to this policy.

   • Click Do not delete any archival snapshots.
   • Click Delete archival snapshots when they expire and specify an expiration period.

What to do next
The next step in the process of creating a replication policy is configuring advanced policy settings.

Configure advanced policy settings


You can optionally configure advanced settings for a replication policy.
1. In the Workers per node field, specify the maximum number of concurrent processes per node that will perform replication operations.
   Do not modify the default setting without consulting Isilon Technical Support.
2. From the Log level list, select the level of logging you want OneFS to perform for replication jobs. The following log levels are valid, listed from least to most verbose:
   • Error
   • Notice
   • Network Activity
   • File Activity

3. If you want OneFS to perform a checksum on each file data packet that is affected by the replication job, select the Validate file integrity check box.
   If you enable this option, and the checksum values for a file data packet do not match, OneFS retransmits the affected packet.
4. To configure shared secret authentication, in the Shared secret field, type a shared secret.
   To establish this type of authentication, you must configure both the source and target cluster to require the same shared secret. For more information, see the Isilon Knowledge Base. This feature does not perform any encryption.
5. To modify the length of time OneFS retains replication reports for the policy, in the Keep reports for area, specify a length of time.
   After the specified expiration period has passed for a report, OneFS automatically deletes the report.
   Some units of time are displayed differently when you view a report than how they were originally entered. Entering a number of days that is equal to a corresponding value in weeks, months, or years results in the larger unit of time being displayed. For example, if you enter a value of 7 days, 1 week appears for that report after it is created. This change occurs because OneFS internally records report retention times in seconds and then converts them into days, weeks, months, or years for display.
6. Specify which nodes you want the replication policy to connect to when the policy is run:
   • To connect the policy to all nodes on the source cluster, click Run the policy on all nodes in this cluster.
   • To connect the policy only to nodes contained in a specified subnet and pool, click Run the policy only on nodes in the specified subnet and pool, and then select the subnet and pool from the Subnet and pool list.
   Replication does not support dynamic pools.
7. Specify whether to record information about files that are deleted by synchronization jobs by selecting one of the following options:
   • Click Record when a synchronization deletes files or directories.
   • Click Do not record when a synchronization deletes files or directories.
   This option is applicable for synchronization policies only.
What to do next
The next step in the process of creating a replication policy is saving the replication policy settings.

Save replication policy settings


OneFS does not begin replicating data according to a replication policy until you save the replication policy.
Before you begin
Review the current settings of the replication policy. If necessary, modify the policy settings.
1. Click Submit.
What to do next
You can increase the speed at which you can fail back a replication policy by creating a replication domain for the source directory of the policy.

Create a SyncIQ domain


You can create a SyncIQ domain to increase the speed at which failback is performed for a replication policy. Failing back a replication policy requires that a SyncIQ domain be created for the source directory.
OneFS automatically creates a SyncIQ domain during the failback process. However, if you intend to fail back a replication policy, it is recommended that you create a SyncIQ domain for the source directory of the replication policy while the directory is empty.
1. Click Cluster Management > Operations > Operations Summary.
2. In the Running Jobs area, click Start Job.
3. From the Job list, select Domain Mark.
4. Optional: To specify a priority for the job, from the Priority list, select a priority.
   Lower values indicate a higher priority. If you do not specify a priority, the job is assigned the default domain mark priority.
5. Optional: To specify the amount of cluster resources the job is allowed to consume, from the Impact policy list, select an impact policy.
   If you do not specify a policy, the job is assigned the default domain mark policy.
6. From the Domain type list, select synciq.
7. Ensure that the Delete domain check box is cleared.
8. In the Domain root path field, type the path of a source directory of a replication policy, and then click Start.

Assess a replication policy


Before running a replication policy for the first time, you can view statistics on the files that would be affected by the replication without transferring any files. This can be useful if you want to preview the size of the data set that is affected if you run the policy. You can assess only replication policies that have never been run before.
1. Click Data Protection > SyncIQ > Policies.
2. In the Policies area, in the row of the policy you want to assess, click Assess.
3. After the job completes, click Data Protection > SyncIQ > Reports.
4. In the Reports table, in the row of the assessment job, click View details.
   The report displays data as if the assessment job had transferred the data to the target directory. The assessed data is displayed in the Transferred column.

Managing replication to remote clusters


You can manually run, view, assess, pause, resume, cancel, resolve, and reset replication jobs that target other clusters. After a policy job starts, you can pause the job to suspend replication activities. Afterwards, you can resume the job, continuing replication from the point where the job was interrupted. You can also cancel a running or paused replication job if you want to free the cluster resources allocated for the job. A paused job reserves cluster resources whether or not the resources are in use. A cancelled job releases its cluster resources and allows another replication job to consume those resources. No more than five running and paused replication jobs can exist on a cluster at a time. However, there is no limit to the number of canceled replication jobs that can exist on a cluster. If a replication job remains paused for more than a week, OneFS automatically cancels the job.

Start a replication job


You can manually start a replication job for a replication policy at any time.
If you want to replicate data according to an existing snapshot, at the OneFS command prompt, run the isi sync policy run command with the --use_snapshot option. Replicating data according to snapshots generated by the SyncIQ tool is not supported.
1. Click Data Protection > SyncIQ > Policies.
2. In the Policies table, in the Actions column of the policy you want to start a replication job for, click Start.
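For example, a command of the following general form starts a job for a replication policy according to an existing user-generated snapshot. The policy name, snapshot name, and argument order shown are assumptions for illustration; verify the exact syntax in the OneFS CLI reference:

isi sync policy run newPolicy --use_snapshot UserSnapshot_2012-07-12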

Pause a replication job


You can pause a running replication job and then resume the job later. Pausing a replication job temporarily stops data from being replicated, but does not free the cluster resources replicating the data.
1. Click Data Protection > SyncIQ > Summary.
2. In the Currently Running table, in the Actions column of the job, click Pause.

Resume a replication job


You can resume a paused replication job.
1. Click Data Protection > SyncIQ > Summary.
2. In the Currently Running table, in the Actions column of the job, click Resume.

Cancel a replication job


You can cancel a running or paused replication job. Cancelling a replication job stops data from being replicated and frees the cluster resources that were replicating data. You cannot resume a cancelled replication job; to restart replication, you must start the replication policy again.
1. Click Data Protection > SyncIQ > Summary.
2. In the Currently Running table, in the Actions column of the job, click Cancel.

View active replication jobs


You can view information about replication jobs that are currently running or paused.
1. Click Data Protection > SyncIQ > Policies.
2. In the Currently Running and Recently Completed tables, review information about active replication jobs.

View replication performance information


You can view information about how many files are sent and the amount of network bandwidth consumed by replication policies.
1. Click Data Protection > SyncIQ > Policies.
2. In the Network Performance and File Operations tables, view performance information.

Replication job information


You can view information about replication jobs through the Currently Running and Recently Completed tables. The following information is displayed in the Currently Running table:
• Run: The status of the job.
• Policy: The name of the associated replication policy.
• Started: The time the job started.
• Elapsed: Indicates how much time has elapsed since the job started.
• Transferred: The number of files that were transferred during the job run, and the total size of all transferred files.
• Sync Type: The type of replication being performed. The possible values are Initial, which indicates that either a differential or a full replication is being performed; Upgrade, which indicates that a policy-conversion replication is being performed; and Incremental, which indicates that only modified files are being transferred to the target cluster.
• Source: The source directory on the source cluster.


• Target: The target directory on the target cluster.
• Actions: Displays any job-related actions that you can perform.

The Recently Completed table contains the following information:

• Run: Indicates the status of the job. A green icon indicates that the last job completed successfully. A yellow icon indicates that the last job did not complete successfully, but that an earlier job did complete successfully. A red icon indicates that jobs have run, but that none of the jobs completed successfully. If no icon appears, the job was not run.
• Policy: The name of the associated replication policy.
• Started: The time at which the job started.
• Ended: The time at which the job finished running.
• Duration: Indicates the total amount of time that the job ran for.
• Transferred: The number of files that were transferred during the job run, and the total size of all transferred files.
• Sync Type: The type of replication that was performed. The possible values are Initial, which indicates that either a differential or a full replication was performed; Upgrade, which indicates that a policy-conversion replication occurred after upgrading the OneFS operating system or merging policies; and Incremental, which indicates that only modified files were transferred to the target cluster.
• Source: The source directory on the source cluster.
• Target: The target directory on the target cluster.

Initiating data failover and failback with SyncIQ


You can fail over to a secondary Isilon cluster if, for example, a cluster becomes unavailable. You can also fail back to a primary cluster if, for example, the primary cluster becomes available again. You can undo a failover if you decide that the failover was unnecessary, or for testing purposes. The procedures for failing over and failing back vary if SmartLock directories are involved. Failover revert is not supported for SmartLock directories.

Fail over data to a secondary cluster


You can fail over to a secondary Isilon cluster if, for example, a cluster becomes unavailable. Complete the following procedure for each replication policy you want to fail over.
Before you begin
Create and successfully run a replication policy.
1. On the secondary Isilon cluster, in the Isilon web administration interface, click Data Protection > SyncIQ > Local Targets.
2. In the Local Targets table, in the row of a replication policy, click Allow Writes.
What to do next
Redirect clients to begin accessing the secondary cluster.


Fail over SmartLock directories


You can fail over SmartLock directories to a secondary Isilon cluster if, for example, a cluster becomes unavailable. Complete the following procedure for each replication policy you want to fail over.
Before you begin
Create and successfully run a replication policy.
1. On the secondary cluster, in the Isilon web administration interface, click Data Protection > SyncIQ > Local Targets.
2. In the Policies table, in the row of the replication policy, enable writes to the target directory of the policy:
   • If the last replication job completed successfully and a replication job is not currently running, click Allow Writes. This will maintain the association between the primary and secondary cluster.
   • If a replication job is currently running, it is recommended that you wait until the replication job completes, and then click Allow Writes. This will maintain the association between the primary and secondary cluster.
   • If the primary cluster became unavailable while a replication job was running, click Break. This will break the association between the primary and secondary cluster.
3. If you clicked Break, restore any files left in an inconsistent state.
   a. Delete all files that were not committed to a WORM state from the target directory.
   b. Copy all files from the failover snapshot to the target directory.
      Failover snapshots are named according to the following naming pattern:
      SIQ-Failover-<policy-name>-<year>-<month>-<day>_<hour>-<minute>-<second>
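For example, assuming a policy named newPolicy whose failover snapshot was created at 10:30:00 on July 12, 2012, and a target directory of /ifs/data/target (all names and times are hypothetical), the files could be copied back with a command such as the following, provided that snapshot directories are accessible through the local file system:

cp -a /ifs/.snapshot/SIQ-Failover-newPolicy-2012-07-12_10-30-00/* /ifs/data/target/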

4. If any SmartLock directory configuration settings, such as an autocommit time period, were specified for the source directory of the replication policy, apply those settings to the target directory.
What to do next
Redirect clients to begin accessing the secondary cluster.

Failover revert
You can perform a failover revert if, for example, the primary cluster becomes available before data is modified on the secondary cluster. You also might want to perform a failover revert if you failed over for testing purposes.
Failover revert enables you to replicate data from the primary cluster to the secondary cluster again. Failover revert does not migrate data back to the primary cluster. If clients modified data on the secondary cluster, and you want to migrate the modified data back to the primary cluster, you must fail back to the primary cluster.
Failover revert is not supported for SmartLock directories. Complete the following procedure for each replication policy you want to revert.
Before you begin
Fail over a replication policy.

1. On the secondary Isilon cluster, click Data Protection > SyncIQ > Local Targets.
2. In the Local Targets table, in the row of a replication policy, click Disallow Writes, and then, in the confirmation dialog box, click Yes.

Fail back data to a primary cluster


After you fail over to a secondary cluster, you can fail back to the primary cluster if, for example, the primary cluster is available again.
Before you begin
Fail over a replication policy.
1. On the primary cluster, click Data Protection > SyncIQ > Policies.
2. In the Policies table, in the row of each policy you want to fail back, click Prepare re-sync.
   A mirror policy is created for each replication policy on the secondary cluster. Mirror policies are named according to the following pattern:
<replication-policy-name>_mirror
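For example, the mirror policy for a replication policy named newPolicy (a hypothetical name) would be named newPolicy_mirror.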

3. On the secondary cluster, replicate data to the primary cluster by using the mirror policies.
   You can replicate data either by manually starting the mirror policies or by modifying the mirror policies and specifying a schedule.
4. Disallow client access to the secondary cluster and run each mirror policy again.
5. On the primary cluster, click Data Protection > SyncIQ > Local Targets.
6. For each mirror policy, in the Local Targets table, in the row of the mirror policy, click Allow Writes.
7. On the secondary cluster, click Data Protection > SyncIQ > Policies.
8. For each mirror policy, in the Policies table, in the row of the policy, click Prepare re-sync.
What to do next
Redirect clients to begin accessing the primary cluster.

Prepare SmartLock directories for failback


You must prepare SmartLock directories before you can fail back to them.
You cannot fail back to compliance directories or to enterprise directories with the privileged delete functionality permanently disabled. Instead, you must create new SmartLock directories, and fail back to those empty directories. Failing back to an empty SmartLock directory creates two copies of your data on the primary cluster. If you do not have space on the primary cluster to store two copies of your data, contact Isilon Technical Support for information about reformatting your cluster.
Complete the following procedure for each SmartLock directory you want to fail back to.
Before you begin
Fail over SmartLock directories.
1. On the primary cluster, enable the privileged delete functionality for the directory you want to fail back to by running the isi worm modify command.


For example, the following command enables privileged delete functionality for /ifs/data/dir:
isi worm modify --path /ifs/data/dir --privdel on

2. Disable the autocommit time period for the directory you want to fail back to by running the isi worm modify command.
   For example, the following command disables the autocommit time period for /ifs/data/dir:
isi worm modify --path /ifs/data/dir --autocommit none

Fail back SmartLock directories


After you fail over SmartLock directories to a cluster, you can fail back to the cluster you originally failed over from if, for example, the original cluster becomes available again. In some cases, you might be able to fail back a SmartLock directory by following the procedure for non-SmartLock directory failback. Follow the non-SmartLock directory procedure if the following statements are true:
• The SmartLock directory you are failing back is an enterprise directory.
• The privileged delete functionality was not permanently disabled for the directory.
• During the failover process, you maintained the association between the source and target cluster.

Complete the following procedure for each replication policy you want to fail back.
Before you begin
Prepare SmartLock directories for failback.
1. On the secondary cluster, create a replication policy that meets the following requirements:
   • The source directory is the target directory of the policy you are failing back.
   • If you are failing back to an enterprise directory with the privileged delete functionality enabled, the target directory of the policy must be the source directory of the policy you are failing back.
   • If you are failing back to a compliance directory, or an enterprise directory with the privileged delete functionality permanently disabled, the target must be an empty SmartLock directory. The directory must be of the same SmartLock type as the source directory of the policy you are failing back. For example, if the target directory is a compliance directory, the source must also be a compliance directory.

2. Optional: Replicate data to the primary cluster by running the policy you created.
   Continue to replicate data until a time when client access to the cluster is minimal. For example, you might wait until a weekend when client access to the cluster is reduced.
3. Disallow client access to the secondary cluster and run the policy that you created.
4. On the primary cluster, click Data Protection > SyncIQ > Local Targets.
5. In the Local Targets table, for the replication policy that you created, click Allow Writes.
6. Optional: If any SmartLock directory configuration settings, such as an autocommit time period, were specified for the source directory of the replication policy, apply those settings to the target directory.
7. Prepare the source directory of the replication policy on the secondary cluster for failback.
   For more information, see Prepare SmartLock directories for failback.
8. Begin replicating data by enabling or replacing the replication policy that you originally failed over.
What to do next
Redirect clients to begin accessing the primary cluster.

Managing replication policies


You can modify, delete, view, enable, and disable replication policies.

Modify a replication policy


You can modify the settings of a replication policy. If you modify any of the following policy settings after the policy runs, OneFS performs either a full or differential replication the next time the policy runs:

- Source directory
- Included or excluded directories
- File-criteria statement
- Target cluster
  This applies only if you target a different cluster. If you modify the IP or domain name of a target cluster, and then modify the replication policy on the source cluster to match the new IP or domain name, a full replication is not performed.
- Target directory

1. Click Data Protection > SyncIQ > Policies.
2. In the Policies table, click the name of the policy you want to modify.
3. Modify the settings of the replication policy, and then click Submit.

Delete a replication policy


You can delete a replication policy. Once a policy is deleted, it no longer creates replication jobs. Deleting a replication policy breaks the target association on the target cluster, and allows writes to the target directory again. If you want to temporarily suspend a replication policy from creating replication jobs, you can disable the policy, and then enable the policy again later.

1. Click Data Protection > SyncIQ > Policies.
2. In the Policies table, in the Actions column of the policy, click Delete.
3. In the confirmation dialog box, click Yes.

Enable or disable a replication policy


You can temporarily suspend a replication policy from creating replication jobs, and then enable it again later. If you disable a replication policy while an associated replication job is currently running, the running replication job completes. However, another replication job will not be created according to the policy until the policy is enabled.

1. Click Data Protection > SyncIQ > Policies.

2. In the Policies table, in the Actions column of the policy, click either Enable or Disable.
   If neither Enable nor Disable is displayed in the Actions column, verify that an associated replication job is not running. If an associated replication job is not running, ensure that the SyncIQ license is configured on the cluster.

View replication policies


You can view information about replication policies on a cluster.

1. Click Data Protection > SyncIQ > Policies.
2. In the Policies table, review information about replication policies.

Replication policy information


You can view information about replication policies through the Policies table.

- Run If a replication job is running for this policy, indicates whether the job is running or paused. If no job is running, indicates whether the SyncIQ tool is disabled on the cluster. If no icon appears, SyncIQ is enabled and no replication job is currently running.
- Data Indicates the status of the last run of the job. A green icon indicates that the last job completed successfully. A yellow icon indicates that the last job did not complete successfully, but that an earlier job did complete successfully. If no icon appears, a job for the policy has not been run.
- Policy Displays the name of the policy.
- Last Known Good Indicates when the last successful job ran.
- Schedule Indicates when the next job is scheduled to run. A value of Manual indicates that the job can be run only manually.
- Source Displays the source directory path.
- Target Displays the target directory path.
- Actions Displays any policy-related actions that you can perform.

Replication policy settings


Replication policies are configured to run according to specific replication policy settings. The following replication policy fields are available through the Isilon web administration interface and OneFS command-line interface.

- Policy name Name of the policy.
- Description Optional string that describes the policy. For example, the description might explain the purpose or function of the policy.
- Action Describes how the policy replicates data. All policies copy files from the source directory to the target directory and update files in the target directory to match files on the source directory. The action dictates how deleting a file on the source directory affects the target. The following values are valid:
  - Copy If a file is deleted in the source directory, the file is not deleted in the target directory.
  - Synchronize Deletes files in the target directory if they are no longer present on the source. This ensures that an exact replica of the source directory is maintained on the target cluster.


- Run job Specifies whether the job is run automatically according to a schedule, or only manually when specified by a user.
- Root directory The full path of the source directory. Data is replicated from the source directory to the target directory.
- Include directories Determines which directories are included in replication. If one or more directories are specified by this setting, any directories that are not specified are not replicated.
- Exclude directories Determines which directories are excluded from replication. Any directories specified by this setting are not replicated.
- File criteria Determines which files are excluded from replication.
- Name or address (of target cluster) The IP address or fully qualified domain name of the target cluster.
- Target directory The full path of the target directory. Data is replicated to the target directory from the source directory.
- Create snapshots Determines whether archival snapshots are generated on the target cluster.
- Snapshot alias name Specifies an alias for the latest archival snapshot taken on the target cluster.
- Snapshot naming pattern Specifies how archival snapshots are named on the target cluster.
- Snapshot expiration Specifies how long archival snapshots are retained on the target cluster before they are automatically deleted by the system.
- Workers per node Specifies the number of workers per node that are generated by OneFS to perform each replication job for the policy.
- Log level Specifies the amount of information that is recorded in the logs for replication jobs. More verbose options include all information from less verbose options. The following list describes the log levels from least to most verbose:
  - Notice Includes job and process-level activity, including job starts and stops, and worker coordination information. It is recommended that you select this option.
  - Error Includes events related to specific types of failures.
  - Network Activity Includes more job-level activity and work-item information, including specific paths and snapshot names.
  - File Activity Includes a separate event for each action taken on a file. Do not select this option without first consulting Isilon Technical Support.

Replication logs are typically used only for debugging purposes. If necessary, you can log in to a node through the command-line interface and view the contents of the /var/log/isi_migrate.log file on the node.
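For example, assuming you are logged in to a node over SSH, you can display the most recent entries of the replication log with the standard tail utility:
tail /var/log/isi_migrate.log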
- Check integrity Determines whether OneFS performs a checksum on each file data packet that is affected by a replication job. If a checksum value does not match, OneFS retransmits the affected file data packet.
- Shared secret Determines whether OneFS references a shared secret on the source and target cluster to prevent certain types of attacks. This feature does not perform any encryption.


- Keep reports for Specifies how long replication reports are kept before they are automatically deleted by OneFS.
- Source node restrictions Specifies whether replication jobs connect to any nodes in the cluster or if jobs can connect only to nodes in a specified subnet and pool.
- Delete on synchronization Determines whether OneFS records when a synchronization job deletes files or directories on the target cluster.

The following replication policy fields are available only through the OneFS command-line interface.

- Password Specifies a password to access the target cluster.
- Max reports Specifies the maximum number of replication reports that are retained for this policy.
- Diff sync Determines whether full or differential replications are performed for this policy. Full or differential replications are performed the first time a policy is run and after a policy is reset.
- Rename pattern Determines whether snapshots generated for the replication policy on the source cluster are deleted when the next replication policy is run. If specified, snapshots that are generated for the replication policy on the source cluster are retained and renamed according to the specified rename pattern. If not specified, snapshots generated on the source cluster are deleted. Specifying this setting does not require that the SnapshotIQ license be configured on the cluster.
- Rename expiration If snapshots generated for the replication policy on the source cluster are retained, specifies an expiration period for the snapshots.
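Of these, the Diff sync setting corresponds to the --diff_sync option of the isi sync policy modify command, which is shown in the full and differential replication procedure later in this chapter. For example, assuming a policy named newPolicy:
isi sync policy modify newPolicy --diff_sync on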

Managing replication to the local cluster


You can control certain operations of replication jobs that target the local cluster. You can cancel a currently running job that targets the local cluster, or you can break the association between a policy and its specified target. Breaking a source and target cluster association causes the replication policy to perform a full replication the next time a job for the policy is run.

Cancel replication to the local cluster


You can cancel a replication job that is targeting the local cluster.

1. Click Data Protection > SyncIQ > Local Targets.
2. In the Local Targets table, specify whether to cancel a specific replication job or all replication jobs targeting the local cluster.
   - To cancel a specific job, in the Actions column of the job, click Cancel.
   - To cancel all jobs, click Cancel all running target policies.

Break local target association


You can break the association between a replication policy and the local cluster. Breaking this association requires you to reset the replication policy before you can run the policy again.

Depending on the amount of data being replicated, a full or differential replication can take a very long time to complete.

1. Click Data Protection > SyncIQ > Local Targets.
2. In the Local Targets table, in the Actions column of the policy, click Break.
3. In the Confirm dialog box, click Yes.

View replication jobs targeting the local cluster


You can view information about currently running replication jobs that are replicating data to the local cluster.

1. Click Data Protection > SyncIQ > Local Targets.
2. In the Local Targets table, view information about replication jobs.

Remote replication policy information


You can view information about replication policies that are currently targeting the local cluster. The following information is displayed in the Local Targets table:

- Run The status of the last run of the job. The following icons might appear:
  - Green Indicates that the last job completed successfully.
  - Yellow Indicates that the last job did not complete successfully, but that an earlier job did complete successfully.
  - Red Indicates that jobs have run, but that none of the jobs completed successfully.
  A yellow or red icon might indicate that the policy is in an unrunnable state. You can view more detailed policy-status information and, if necessary, resolve the source-target association, through the web administration interface on the source cluster.
- Policy The name of the replication policy.
- Updated The time when data about the policy or job was last collected from the source cluster.
- Source The source directory on the source cluster.
- Target The target directory on the target cluster.
- Coordinator IP The IP address of the node on the source cluster that is acting as the job coordinator.
- Actions Displays any job-related actions that you can perform.

Managing replication performance rules


You can manage the impact of replication on cluster performance by creating rules that limit the network traffic created and the rate at which files are sent by replication jobs.

Create a network traffic rule


You can create a network traffic rule that limits the amount of network traffic that replication policies are allowed to generate during a specified time period.

1. Click Data Protection > SyncIQ > Performance.
2. In the Network Rules area, click Add rule.

3. In the Edit Limit dialog box, in the Limit (bits/sec) area, specify the maximum number of bits per second that replication policies are allowed to send.
4. In the Days area, select the days of the week that you want to apply the rule.
5. In the Start and End areas, specify the period of time that you want to apply the rule.
6. Optional: In the Description box, type a description of the network traffic rule.
7. Click Enabled. If you do not select Enabled, the rule is disabled by default.
8. Click Submit.
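For example, to cap replication traffic at roughly 100 Mb/s during weekday business hours, you might enter 100000000 in the Limit (bits/sec) field, select Monday through Friday in the Days area, and set the Start and End areas to 9:00 AM and 5:00 PM. These values are purely illustrative; choose limits based on your own network capacity.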

Create a file operations rule


You can create a file-operations rule that limits the number of files that replication jobs can send per second.

1. Click Data Protection > SyncIQ > Performance.
2. In the File Operations Rules area, click Add rule.
3. In the Edit Limit dialog box, in the Limit (files/sec) area, specify the maximum number of files per second that replication jobs are allowed to send.
4. In the Days area, select the days that you want to apply the rule.
5. In the Start and End areas, specify the period of time that you want to apply the rule.
6. Optional: In the Description box, type a description of the file-operations rule.
7. Click Enabled. If you do not select Enabled, the rule is disabled by default.
8. Click Submit.

Modify a performance rule


You can modify a performance rule.

1. Click Data Protection > SyncIQ > Performance.
2. In either the Network Rules table or the File Operation Rules table, in the row of the rule you want to modify, click Edit.
3. In the Edit Limit dialog box, modify rule settings as needed, and then click Submit.

Delete a performance rule


You can delete a performance rule.

1. Click Data Protection > SyncIQ > Performance.
2. In either the Network Rules table or the File Operation Rules table, in the row of the rule, click Delete.
3. In the Confirm dialog box, click Yes.


Enable or disable a performance rule


You can disable a performance rule to temporarily prevent the rule from being enforced. You can also enable a performance rule after it has been disabled.

1. Click Data Protection > SyncIQ > Performance.
2. In either the Network Rules table or the File Operation Rules table, in the row of the rule, click Edit.
3. In the Edit Limit dialog box, click either Enabled or Disabled.
4. Click Submit.

View performance rules


You can view performance rules.

1. Click Data Protection > SyncIQ > Performance.
2. In either the Network Rules table or the File Operation Rules table, view performance rules.

Managing replication reports


In addition to viewing replication reports, you can configure how long reports are retained on the cluster. You can also request that the cluster immediately delete any reports that have passed their expiration period. If necessary, you can also rebuild the databases that contain replication reports.

Configure default replication report settings


You can configure the default amount of time that OneFS retains replication reports on the cluster and the maximum number of reports that OneFS retains for each replication policy.

1. Click Data Protection > SyncIQ > Settings.
2. In the Report Settings area, in the Retain reports for area, specify how long you want to retain replication reports. After the specified expiration period has passed for a report, OneFS automatically deletes the report.
   Some units of time are displayed differently when you view a report than how you originally enter them. Entering a number of days that is equal to a corresponding value in weeks, months, or years results in the larger unit of time being displayed. For example, if you enter a value of 7 days, 1 week appears for that report after it is created. This change occurs because OneFS internally records report retention times in seconds and then converts them into days, weeks, months, or years for display.
3. In the Number of reports to keep per policy field, type the maximum number of reports you want to retain at a time for a replication policy.
4. Click Submit.

Delete replication reports


Replication reports are routinely deleted by OneFS after the reports are retained past their expiration date or when the number of reports exceeds the specified maximum. Excess reports are periodically deleted by OneFS; however, you can manually delete all excess replication reports at any time.

1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Delete excess replication reports by running the following command:
isi sync report rotate

What to do next
If OneFS did not delete the desired number of replication reports, configure the replication report settings, and then repeat this procedure.

View replication reports


You can view replication reports.

1. Click Data Protection > SyncIQ > Reports.
2. In the Reports table, view replication reports.
3. Optional: View details of a replication report by clicking View Details in the Actions column of the report you want to view.

Replication report information


You can view information about replication reports. The following information is displayed in the Reports table:

- Status The status of the last run of the job. The following icons are displayed:
  - Green Indicates that the last job completed successfully.
  - Yellow Indicates that the last job did not complete successfully, but that an earlier job did complete successfully.
  - Red Indicates that jobs have run, but were unsuccessful.
  If no icon appears, the job was not run.
- Policy The name of the associated policy for the job. You can view or edit settings for the policy by clicking the policy name.
- Started, Ended, and Duration Indicates when the job started and ended, and the duration of the job.
- Transferred The total number of files that were transferred during the job run, and the total size of all transferred files. For assessed policies, Assessment appears.
- Sync Type The action that was performed by the replication job. The following actions are displayed:
  - Initial Sync Indicates that either a differential or a full replication was performed.
  - Incremental Sync Indicates that only modified files were transferred to the target cluster.
  - Failover / Failback Allow Writes Indicates that writes were enabled on a target directory of a replication policy.
  - Failover / Failback Disallow Writes Indicates that an allow writes operation was undone.


  - Failover / Failback Resync Prep Indicates that an association between files on the source cluster and files on the target cluster was created. This is the first step in the failback preparation process.
  - Failover / Failback Resync Prep Domain Mark Indicates that a SyncIQ domain was created for the source directory. This is the second step in the failback preparation process.
  - Failover / Failback Resync Prep Restore Indicates that a source directory was restored to the last recovery point. This is the third step in the failback preparation process.
  - Failover / Failback Resync Prep Finalize Indicates that a mirror policy was created on the target cluster. This is the last step in the failback preparation process.
  - Upgrade Indicates that a policy-conversion replication occurred after upgrading the OneFS operating system or merging policies.
- Source The source directory on the source cluster.
- Target The target directory on the target cluster.
- Actions Displays any report-related actions that you can perform.

Managing failed replication jobs


If a replication job fails due to an error, OneFS might disable the corresponding replication policy. If a replication policy is disabled, the policy cannot be run. Replication policies are often disabled because the IP address or hostname of the target cluster was modified.

To resume replication for a disabled policy, you must either fix the error that caused the policy to be disabled, or reset the replication policy. It is recommended that you attempt to fix the issue rather than reset the policy. If you believe you have fixed the error, you can return the replication policy to an enabled state by resolving the policy. You can then run the policy again to test whether the issue was fixed. If you are unable to fix the issue, you can reset the replication policy. Resetting the policy causes a full or differential replication to be performed the next time the policy is run. Depending on the amount of data being synchronized or copied, a full or differential replication can take a very long time to complete.

Resolve a replication policy


If a replication job encounters an error, and you fix the issue that caused the error, you can resolve the replication policy. Resolving a replication policy enables you to run the policy again. If you cannot resolve the issue that caused the error, you can reset the replication policy.

1. Click Data Protection > SyncIQ > Policies.
2. In the Policies table, in the Actions column of the policy, click Resolve.
3. In the confirmation dialog box, type yes and then click Yes.

Reset a replication policy


If a replication job encounters an error that you cannot resolve, you can reset the corresponding replication policy. Resetting a policy causes OneFS to perform a full or differential replication the next time the policy is run. Resetting a replication policy also deletes the snapshot generated for the policy on the source cluster.

Depending on the amount of data being replicated, a full or differential replication can take a very long time to complete. Reset a replication policy only if you cannot fix the issue that caused the replication error. If you fix the issue that caused the error, resolve the policy instead of resetting the policy.

1. Click Data Protection > SyncIQ > Policies.
2. In the Policies table, in the Actions column of a policy, click Reset.
3. In the Confirm dialog box, type yes and then click Yes.
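A policy can also be reset from the command line. The following is a sketch, assuming a policy named newPolicy and that your OneFS release provides a reset subcommand for isi sync policy; the exact subcommand name is an assumption here, so verify it with isi sync policy --help:
isi sync policy reset newPolicy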

Perform a full or differential replication


After you reset a replication policy, you must perform either a full or differential replication.

Before you begin
Reset a replication policy.

1. Open a secure shell (SSH) connection to any node in the cluster and log in through the root or compliance administrator account.
2. Specify the type of replication you want to perform by running the isi sync policy modify command.
   - To perform a full replication, disable the --diff_sync option. For example, the following command disables differential synchronization for newPolicy:
isi sync policy modify newPolicy --diff_sync off
   - To perform a differential replication, enable the --diff_sync option. For example, the following command enables differential synchronization for newPolicy:
isi sync policy modify newPolicy --diff_sync on

3. Run the policy by running the isi sync policy run command. For example, the following command runs newPolicy:
isi sync policy run newPolicy


CHAPTER 6 Data layout with FlexProtect

An Isilon cluster is designed to continuously serve data, even when one or more components simultaneously fail. OneFS ensures data availability by striping or mirroring data across the cluster. If a cluster component fails, data stored on the failed component is available on another component. After a component failure, lost data is restored on healthy components by the FlexProtect proprietary system. Data protection is specified at the file level, not the block level, enabling the system to recover data quickly. Because all data, metadata, and parity information is distributed across all nodes in the cluster, an Isilon cluster does not require a dedicated parity node or drive. This ensures that no single node limits the speed of the rebuild process.
- File striping
- Data protection levels
- FlexProtect data recovery
- Managing protection levels
- Data protection level information
- Data protection level disk space usage


File striping
OneFS uses the back-end network to automatically allocate and stripe data across nodes and disks in the cluster. OneFS protects data as the data is being written. No separate action is necessary to stripe data.

OneFS breaks files into smaller logical chunks called stripes before writing the files to disk; the size of each file chunk is referred to as the stripe unit size. Each OneFS block is 8 KB, and a stripe unit consists of 16 blocks, for a total of 128 KB per stripe unit. During a write, OneFS breaks data into stripes and then logically places the data in a stripe unit. As OneFS stripes data across the cluster, OneFS fills the stripe unit according to the number of nodes and protection level.

OneFS can continuously reallocate data and make storage space more usable and efficient. As the cluster size increases, OneFS stores large files more efficiently.
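As a worked example of the arithmetic above: because each stripe unit holds 16 blocks of 8 KB (128 KB), a 1 MB file (1,024 KB) is broken into 1,024 / 128 = 8 stripe units, which are then distributed across the nodes in the cluster along with any protection information.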

Data protection levels


Data protection levels determine the amount of redundant data created on the cluster to ensure that data is protected against component failures. OneFS enables you to modify the protection levels in real time, while clients are reading and writing data on the cluster.

OneFS provides several levels of configurable data protection settings. You can modify these protection settings at any time without rebooting or taking the cluster or file system offline. The configured protection level allows up to four failures at one time on distinct nodes. When planning your storage solution, keep in mind that increasing the parity protection settings reduces write performance and requires additional storage space for the increased number of nodes.

OneFS uses the Reed-Solomon algorithm for N+M protection. In the N+M data protection model, N represents the number of data-stripe units, and M represents the number of simultaneous node or drive failures, or a combination of node and drive failures, that the cluster can withstand without incurring data loss. N must be larger than M.

In addition to N+M data protection levels, OneFS also supports data mirroring from 2x to 8x, allowing from two to eight mirrors of the specified content. In terms of overall cluster performance and resource consumption, N+M protection is often more efficient than mirrored protection. However, because read and write performance is reduced for N+M protection, data mirroring might be faster for data that is updated often and is small in size. Data mirroring requires significant overhead and might not always be the best data-protection method. For example, if you enable 3x mirroring, the specified content is duplicated three times on the cluster; depending on the amount of content mirrored, this can consume a significant amount of storage space.
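For example, at the N+2 level on a five-node pool, each protection group is written as three data stripe units plus two FEC units (3+2), so protection consumes 2 of every 5 units written, a 40% overhead. The same N+2 level on a ten-node pool is written as 8+2, reducing the overhead to 20%, which illustrates why N+M protection becomes more space-efficient as clusters grow.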

FlexProtect data recovery


OneFS uses the FlexProtect proprietary system to detect and repair files and directories that are in a degraded state due to node or drive failures. OneFS protects data in the cluster based on the configured protection policy. OneFS rebuilds failed disks, uses free storage space across the entire cluster to further prevent data loss, monitors data, and migrates data off of at-risk components.

OneFS distributes all data and error-correction information across the cluster and ensures that all data remains intact and accessible even in the event of simultaneous component failures. Under normal operating conditions, all data on the cluster is protected against one or more failures of a node or drive. However, if a node or drive fails, the cluster protection status is considered to be in a degraded state until the data is protected by OneFS again. OneFS reprotects data by rebuilding data in the free space of the cluster. While the protection status is in a degraded state, data is more vulnerable to data loss.

Because data is rebuilt in the free space of the cluster, the cluster does not require a dedicated hot-spare node or drive in order to recover from a component failure. Because a certain amount of free space is required to rebuild data, it is recommended that you reserve adequate free space through the virtual hot spare feature. As a cluster grows larger, data restriping operations become faster. As you add more nodes, the cluster gains more CPU, memory, and disks to use during recovery operations.

Smartfail
OneFS protects data stored on failing nodes or drives through a process called smartfailing. During the smartfail process, OneFS places a device into quarantine. Quarantined devices can be used only for read operations. While the device is quarantined, OneFS reprotects the data on the device by distributing the data to other devices. After all data migration is complete, OneFS logically removes the device from the cluster, the cluster logically changes its width to the new configuration, and the node or drive can be physically replaced.

OneFS automatically smartfails devices only as a last resort. Although you can manually smartfail nodes or drives, it is recommended that you first consult Isilon Technical Support.

Occasionally a device might fail before OneFS detects a problem. If a drive fails without being smartfailed, OneFS automatically starts rebuilding the data to available free space on the cluster. However, because a node might recover from a failure, if a node fails, OneFS does not start rebuilding data unless the node is logically removed from the cluster.

Node failures
Because node loss is often a temporary issue, OneFS does not automatically start reprotecting data when a node fails or goes offline. If a node reboots, the file system does not need to be rebuilt because it remains intact during the temporary failure.

If an N+1 data protection level is configured, and one node fails, all of the data is still accessible from every other node in the cluster. If the node comes back online, the node rejoins the cluster automatically without requiring a full rebuild.

To ensure that data remains protected, if you physically remove a node from the cluster, you must also logically remove the node from the cluster. After you logically remove a node, the node automatically reformats its own drives and resets itself to the factory default settings. The reset occurs only after OneFS has confirmed that all data has been reprotected. You can logically remove a node using the smartfail process. It is important that you use the smartfail process only when you want to permanently remove a node from the cluster.

If you remove a failed node before adding a new node, data stored on the failed node must be rebuilt in the free space in the cluster. After the new node is added, the data is then distributed to the new node. It is more efficient to add a replacement node to the cluster before failing the old node, because OneFS can immediately use the replacement node to rebuild the data stored on the failed node.


Managing protection levels


You can apply a protection level to a node pool, file, or directory. This flexibility enables you to protect disparate sets of data at different levels.

The default protection policy of node pools is N+2:1, which means that two drives or one node can fail without causing any data loss. For clusters or node pools containing less than two petabytes or fewer than 16 nodes, N+2:1 is the recommended protection level. However, if the cluster or node pool is larger, you might consider a higher protection level.

OneFS allows you to specify a protection level that the cluster is currently incapable of matching. If you specify an unmatchable protection level, the cluster will continue trying to match the requested protection level until a match is possible. For example, in a four-node cluster, you might specify a 5x protection level; in this example, OneFS would protect the data at 4x until you added a fifth node to the cluster, at which point OneFS would reprotect the data at the 5x level.

For 4U Isilon IQ X-Series and NL-Series nodes, and IQ 12000x/EX 12000 combination platforms, the minimum cluster size of three nodes requires a minimum protection level of N+2:1.

Data protection level information


Data protection levels determine the level of hardware failure that a cluster can recover from without suffering data loss. You can protect your data at the following data protection levels:

- N+1 The cluster can absorb the failure of any single drive or the unscheduled shutdown of any single node without causing any loss in stored data. Requires a minimum of two nodes.
- N+2:1 The cluster can recover from two simultaneous drive failures or one node failure without sustaining any data loss.
- N+2 The cluster can recover from two simultaneous drive or node failures without sustaining any data loss.
- N+3:1 The cluster can recover from three simultaneous drive failures or one node failure without sustaining any data loss.
- N+3 The cluster can recover from three simultaneous drive or node failures without sustaining any data loss.
- N+4 The cluster can recover from four simultaneous drive or node failures without sustaining any data loss. Requires a minimum of five nodes.
- Nx (Data mirroring) The cluster can recover from N - 1 node failures without sustaining data loss. For example, 5x protection means that the cluster can recover from four node failures; 5x requires a minimum of five nodes.

Data protection level disk space usage


Increasing protection levels increases the amount of space consumed by the data on the cluster. The parity overhead for N+M protection levels depends on the file size and the number of nodes in the cluster. The percentage of parity overhead declines as the cluster gets larger.

The following table describes the estimated amount of overhead depending on the N+M protection level and the size of the cluster or node pool. Each cell shows the layout of data stripe units plus FEC units, with the protection overhead in parentheses; a mirroring value such as 3x indicates that data is mirrored at that level instead. The table is for informational purposes only and does not reflect protection level recommendations based on cluster size.

Number of nodes  +1          +2:1          +2           +3:1         +3           +4
3                2+1 (33%)   4+2 (33%)     3x           6+3 (33%)    4x           5x
4                3+1 (25%)   6+2 (25%)     2+2 (50%)    9+3 (25%)    4x           5x
5                4+1 (20%)   8+2 (20%)     3+2 (40%)    12+3 (20%)   4x           5x
6                5+1 (17%)   10+2 (17%)    4+2 (33%)    15+3 (17%)   3+3 (50%)    5x
7                6+1 (14%)   12+2 (14%)    5+2 (29%)    15+3 (17%)   4+3 (43%)    5x
8                7+1 (13%)   14+2 (12.5%)  6+2 (25%)    15+3 (17%)   5+3 (38%)    4+4 (50%)
9                8+1 (11%)   16+2 (11%)    7+2 (22%)    15+3 (17%)   6+3 (33%)    5+4 (44%)
10               9+1 (10%)   16+2 (11%)    8+2 (20%)    15+3 (17%)   7+3 (30%)    6+4 (40%)
12               11+1 (8%)   16+2 (11%)    10+2 (17%)   15+3 (17%)   9+3 (25%)    8+4 (33%)
14               13+1 (7%)   16+2 (11%)    12+2 (14%)   15+3 (17%)   11+3 (21%)   10+4 (29%)
16               15+1 (6%)   16+2 (11%)    14+2 (13%)   15+3 (17%)   13+3 (19%)   12+4 (25%)
18               16+1 (6%)   16+2 (11%)    16+2 (11%)   15+3 (17%)   15+3 (17%)   14+4 (22%)
20               16+1 (6%)   16+2 (11%)    16+2 (11%)   16+3 (16%)   16+3 (16%)   16+4 (20%)
30               16+1 (6%)   16+2 (11%)    16+2 (11%)   16+3 (16%)   16+3 (16%)   16+4 (20%)
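For example, reading the table above, a seven-node pool at +2:1 writes each protection group as 12 data stripe units plus 2 FEC units, so FEC consumes 2 of the 14 units written, or approximately 14% of the consumed space.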

The parity overhead for mirrored data protection is not affected by the number of nodes in the cluster. The following table describes the parity overhead for each mirrored data protection level.

Protection level   2x    3x    4x    5x    6x    7x    8x
Parity overhead    50%   67%   75%   80%   83%   86%   88%
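In general, mirrored overhead is (x - 1)/x for x-way mirroring; for example, 5x keeps five full copies of the content, so 4 of every 5 blocks consumed are redundant, an 80% overhead, matching the table above.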


CHAPTER 7 NDMP backup

OneFS enables you to back up and restore file-system data through the Network Data Management Protocol (NDMP). From a backup server, you can direct backup and recovery processes between an Isilon cluster and backup devices such as tape devices, media servers, and virtual tape libraries (VTLs).

OneFS supports both NDMP three-way backup and NDMP two-way backup. During a three-way NDMP backup operation, a data management application (DMA) on a backup server instructs the cluster to start backing up data to a tape media server that is either attached to the LAN or directly attached to the DMA. During a two-way NDMP backup, a DMA on a backup server instructs a Backup Accelerator node on the cluster to start backing up data to a tape media server that is attached to the Backup Accelerator node. Two-way NDMP backup is the most efficient method in terms of cluster resource consumption; however, two-way NDMP backup requires that one or more Backup Accelerator nodes be attached to the cluster. In both the two-way and three-way NDMP backup models, file history data is transferred from the cluster to a backup server.

Before a backup begins, OneFS creates a snapshot of the targeted directory. OneFS then backs up the snapshot, which ensures that the backup image represents a specific point in time. After the backup is completed or canceled, OneFS automatically deletes the snapshot. You do not need to configure a SnapshotIQ license on the cluster to perform NDMP backups. However, if a SnapshotIQ license is configured on the cluster, you can generate a snapshot through the SnapshotIQ tool, and then back up the snapshot. If you back up a snapshot that you generated, OneFS does not create another snapshot for the backup.

If you are backing up SmartLock compliance directories, it is recommended that you do not specify autocommit time periods for the SmartLock directories.
- NDMP two-way backup
- NDMP protocol support
- Supported DMAs
- NDMP hardware support
- NDMP backup limitations
- NDMP performance recommendations
- Excluding files and directories from NDMP backups
- Configuring basic NDMP backup settings
- Create an NDMP user account
- Managing NDMP user accounts
- Managing NDMP backup devices
- Managing NDMP backup ports
- Managing NDMP backup sessions
- View NDMP backup logs
- NDMP environment variables


NDMP two-way backup


To perform NDMP two-way backups, you must attach a Backup Accelerator node to the Isilon cluster and attach one or more tape devices to the Backup Accelerator node. Before the tape devices can be used for backup, you must detect the devices through OneFS.

Each Backup Accelerator node contains Fibre Channel ports through which you can connect tape and media changer devices or Fibre Channel switches. If you connect a Fibre Channel switch to a port, you can connect multiple tape and media changer devices through a single port; for more information, see your Fibre Channel switch documentation about zoning the switch to allow communication between the Backup Accelerator node and the tape and media changer devices.

If you attach devices to a Backup Accelerator node, the cluster detects the devices when you start or restart the Backup Accelerator node or when you rescan the Fibre Channel ports to discover devices. If a cluster detects a tape or media changer device, it creates a device entry for each path to the device. If you connect a device through a Fibre Channel switch, multiple paths can exist for a single device. For example, if you connect a tape device to a Fibre Channel switch, and then connect the Fibre Channel switch to two Fibre Channel ports, OneFS creates two entries for the device, one for each path.

If you perform an NDMP two-way backup operation, you must assign static IP addresses to the Backup Accelerator node. If you connect to the cluster through a data management application (DMA), you must connect to the IP address of a Backup Accelerator node. If you perform an NDMP three-way backup, you can connect to any node in the cluster.

NDMP protocol support


You can back up cluster data through NDMP version 3 or 4. OneFS supports the following features of NDMP versions 3 and 4:

- Full (0) NDMP backups
- Level-based (1-9) NDMP incremental backups
- Token-based NDMP backups
- NDMP TAR and dump backup types
  If you specify the NDMP dump backup type, the backup is stored on the backup device in TAR format.
- Path-based and dir/node file history formats
- Direct Access Restore (DAR)
- Directory DAR (DDAR)
- Including and excluding specific files and directories from backup
- Backup of file attributes
- Backup of Access Control Lists (ACLs)
- Backup of Alternate Data Streams (ADSs)

Supported DMAs
NDMP backups are coordinated by a data management application (DMA) that runs on a backup server. OneFS supports the following DMAs:

- Symantec NetBackup
- EMC NetWorker
- Symantec Backup Exec
- IBM Tivoli Storage Manager
- Quest Software NetVault
- CommVault Simpana
- Atempo Time Navigator

NDMP hardware support


OneFS can back up data to specific backup devices. OneFS supports the following types of emulated and physical tape devices:

- LTO-3
- LTO-4
- LTO-5

OneFS supports the following virtual tape libraries (VTLs):

- FalconStor VTL 5.20
- Data Domain VTL 5.1.04 or later

Point-to-point (also known as direct) topologies are not supported for VTLs. VTLs must include a Fibre Channel switch.

NDMP backup limitations


The OneFS NDMP backup functionality includes the following limitations:

- OneFS does not back up file system configuration data, such as file protection level policies and quotas.
- OneFS does not support multiplexing across multiple streams.
- OneFS does not support shared storage options to shared media drives.
- OneFS does not support restoring data from a file system other than OneFS. However, you can migrate data from a NetApp storage system to OneFS.
- Backup Accelerator nodes cannot interact with more than 1024 device paths, including paths of tape and media changer devices. For example, if each device has four paths, you can connect 256 devices to a Backup Accelerator node. If each device has two paths, you can connect 512 devices.
- OneFS does not support more than 64 concurrent NDMP sessions.

NDMP performance recommendations


To improve the speed and efficiency of NDMP backup and restore operations, consider implementing the following recommended best practices.

General performance recommendations
The following best practices can improve the performance of NDMP backup and restore operations:

- If you are backing up multiple directories that contain small files, set up a separate schedule for each directory.
- If you are performing three-way NDMP backups, run multiple NDMP sessions on multiple nodes.

- Do not perform both three-way and two-way NDMP backup operations on the same cluster.
- Install the latest patches from Isilon and your data management application (DMA) vendor when available.
- Restore files through Direct Access Restore (DAR) and Directory DAR (DDAR). This is especially recommended if you restore files frequently. However, it is recommended that you do not use DAR to restore a full backup or a large number of files.
- Use the largest tape record size available for your version of OneFS. The largest tape record size for OneFS versions 6.5.5 and later is 256 KB. The largest tape record size for versions of OneFS earlier than 6.5.5 is 128 KB.
- If possible, do not include or exclude files from backup. Including or excluding files can affect backup performance, due to filtering overhead during tree walks.
- Limit the depth of nested subdirectories in your file system.
- Limit the number of files in a directory. Distribute files across multiple directories instead of including a large number of files in a single directory.

Networking recommendations
The following best practices are recommended for configuring the connection between a cluster and NDMP backup devices:

- Assign static IP addresses to Backup Accelerator nodes.
- Configure SmartConnect zones that are dedicated to NDMP backup activity. It is recommended that you connect NDMP sessions only through SmartConnect zones that are exclusively used for NDMP backup.
- Configure multiple policies when scheduling backup operations, with each policy capturing a portion of the file system. Do not attempt to back up the entire file system through a single policy.

Backup Accelerator recommendations
The following best practices are recommended if you are performing NDMP two-way backups:

- Run four concurrent streams per Backup Accelerator node. This is recommended only if you are backing up a significant amount of data. Running four concurrent streams might not be possible or necessary for smaller backups.
- Attach more Backup Accelerator nodes to larger clusters. The recommended number of Backup Accelerator nodes depends on the type of nodes that are included in the cluster, as listed in the following table:

Node type   Recommended number of nodes per Backup Accelerator node
i-Series    5
X-Series    3
NL-Series   3
S-Series    3


DMA-specific recommendations
If possible, configure your DMA according to the following best practices:

- If you perform backup operations through Symantec NetBackup, it is recommended that you increase throughput by specifying an NDMP buffer size of 256 KB.
- It is recommended that you use path-based file history instead of dir/node file history.
- Enable multistreaming, which enables OneFS to back up data to multiple tape devices concurrently.

Excluding files and directories from NDMP backups


You can exclude files and directories from NDMP backup operations by specifying NDMP environment variables through the DMA. You can explicitly include or exclude specific files or directories in NDMP backup operations. If you include a file or directory, all other non-included files and directories are automatically excluded from backup operations. If a file or directory is excluded, all files except the excluded directories are backed up.

You can specify include and exclude patterns to include or exclude files and directories that match the patterns. You can specify the following special characters in patterns:

- * Takes the place of any character or characters. For example, archive* includes or excludes /ifs/data/archive1 and /ifs/data/archive42_a/media.
- [ ] Takes the place of a range of letters or numbers. For example, data_store_[a-f] and data_store_[abcdef] include or exclude /ifs/data/data_store_a, /ifs/data/data_store_c, and /ifs/data/data_store_f/archive.
- ? Takes the place of any single character. For example, user_? includes or excludes /ifs/data/user_1 and /ifs/data/user_2/archive.
- \ Includes a blank space. For example, user\ 1 includes or excludes /ifs/data/user 1 and /ifs/data/user 1/archive.

Although you can specify both anchored and unanchored patterns, it is recommended that you do not specify unanchored patterns. Unanchored patterns target a string of text that might belong to several files or directories, such as home or user1. An anchored pattern targets a specific file path, such as /home/user1. Specifying unanchored patterns can degrade the performance of backups and result in empty directories being backed up.

For example, you might back up the /ifs/data directory, which contains the following files and directories:

- /ifs/data/home/user1/file.txt
- /ifs/data/home/user2/user1/file.txt

- /ifs/data/home/user3/other/file.txt
- /ifs/data/home/user4/emptydirectory

If you include the home directory, you will back up the following files and directories:

- /ifs/data/home/user1/file.txt
- /ifs/data/home/user2/user1/file.txt
- /ifs/data/home/user3/other/file.txt
- /ifs/data/home/user4/emptydirectory

The empty directory /ifs/data/home/user4/emptydirectory would be backed up.

If you specify both include and exclude patterns, any excluded files or directories under the included directories are not backed up. If the excluded directories are not contained in any of the included directories, the exclude specification has no effect. For example, assume that you are backing up the /ifs/data directory, and you include the following directories:

- /ifs/data/media/music
- /ifs/data/media/movies

Also, assume that you exclude the following directories:

- /ifs/data/media/music/working
- /ifs/data/archive

In this example, the setting that excludes the /ifs/data/archive directory has no effect, because the /ifs/data/archive directory is not contained under either of the included directories. However, the setting that excludes the /ifs/data/media/music/working directory does have an effect, because that directory is contained under the /ifs/data/media/music directory. If the exclude setting had not been specified, the /ifs/data/media/music/working directory would have been backed up.
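As an illustrative sketch of how such a pattern reaches the backup session, the exclusion above could be expressed through the EXCLUDE NDMP environment variable covered in the NDMP environment variables section of this guide. The mechanism for setting environment variables depends entirely on your DMA, so treat the following line as a form rather than exact syntax:
EXCLUDE=/ifs/data/media/music/working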

Configuring basic NDMP backup settings


You can configure basic NDMP backup settings that control whether NDMP backups are enabled for the cluster. You can also configure which data management application (DMA) vendor you want to access the Isilon cluster through. You can optionally configure the port that the Isilon cluster is accessed through for NDMP backups.

Configure and enable NDMP backup


NDMP backup in OneFS is disabled by default. Before you can perform NDMP backups, you must enable NDMP backups and configure NDMP settings.

1. Click Data Protection > Backup > NDMP Settings.
2. In the Service area, click Enable.
3. Optional: To specify a port through which data management applications (DMAs) access the cluster, or the default DMA vendor that OneFS is configured to interact with, in the Settings area, click Edit settings.
   - To specify the port, in the Port number field, type a port number.
   - To modify the DMA vendor, from the DMA vendor list, select the name of the DMA vendor you are coordinating backup operations through.


     If your DMA vendor is not included in the list, select generic. However, any vendors not included in the list are not supported.
4. Optional: To add an NDMP user account through which a DMA can access the cluster, click Add administrator.
   a. In the Add Administrator dialog box, in the Name field, type a name for the account.
   b. In the Password and Confirm password fields, type a password for the account.
   c. Click Submit.

Disable NDMP backup


You can disable NDMP backup if you no longer want to back up data through NDMP.

1. Click Data Protection > Backup > NDMP Settings.
2. In the Service area, click Disable.

View NDMP backup settings


You can view current NDMP backup settings, which indicate whether the service is enabled, the port through which data management applications (DMAs) connect to the cluster, and the DMA vendor that OneFS is configured to interact with.

1. Click Data Protection > Backup > NDMP Settings.
2. In the Settings area, review NDMP backup settings.

NDMP backup settings


You can configure settings that control how NDMP backups are performed on the cluster. The following NDMP backup settings can be configured:

- Port number The number of the port through which data management applications (DMAs) can connect to the cluster.
- DMA vendor The DMA vendor that the cluster is configured to interact with.

Create an NDMP user account


Before you can perform NDMP backups, you must create an NDMP user account through which a data management application (DMA) can access the cluster.

1. Click Data Protection > Backup > NDMP Settings.
2. In the NDMP Administrators area, click Add administrator.
3. In the Add Administrator dialog box, in the Name field, type a name for the account.
4. In the Password and Confirm password fields, type a password for the account.
5. Click Submit.


Managing NDMP user accounts


To access the cluster through a data management application (DMA), you must log in to the cluster by providing valid authentication credentials. You can configure these authentication credentials by creating NDMP user accounts.

Modify the password of an NDMP user account


You can modify the password associated with an NDMP user account.

1. Click Data Protection > Backup > NDMP Settings.
2. In the NDMP Administrators table, in the row for the user account that you want to modify, click Change password.
3. In the Password and Confirm password fields, type a new password for the account.
4. Click Submit.

Delete an NDMP user account


You can delete an NDMP user account if you no longer want to access the cluster through that account.

1. Click Data Protection > Backup > NDMP Settings.
2. In the NDMP Administrators table, in the row for the user account that you want to delete, click Delete.
3. In the Confirm dialog box, click Yes.

View NDMP user accounts


You can view information about NDMP user accounts.

1. Click Data Protection > Backup > NDMP Settings.
2. In the NDMP Administrators area, review information about NDMP user accounts.

Managing NDMP backup devices


You can manage tape and media changer devices that are attached to an Isilon cluster through a Backup Accelerator node. After you attach a tape or media changer device to a cluster, you must configure OneFS to detect the device; this establishes a connection between the cluster and the device. After the cluster detects a device, you can modify the name that the cluster has assigned to the device, or disconnect the device from the cluster.

Detect NDMP backup devices


If you connect devices to a Backup Accelerator node, you must configure OneFS to detect the devices before OneFS can back up data to and restore data from the devices. You can scan a specific node, a specific port, or all ports on all nodes.

1. Click Data Protection > Backup > Devices.
2. Click Discover devices.
3. Optional: To scan only a specific node for NDMP devices, from the Nodes list, select a node.

4. Optional: To scan only a specific port for NDMP devices, from the Ports list, select a port.
   If you specify a port and a node, only the specified port on the node is scanned. However, if you specify only a port, the specified port is scanned on all nodes.
5. Optional: To remove entries for devices or paths that have become inaccessible, select the Delete inaccessible paths or devices check box.
6. Click Submit.

Results
For each device that is detected, an entry is added to either the Tape Devices or Media Changers table.

Modify an NDMP backup device name


You can modify the name of an NDMP device entry.
1. Click Data Protection > Backup > Devices.
2. In the Tape Devices table, click the name of the backup device entry that you want to modify.
3. In the Rename Device dialog box, in the Device Name field, type a new name for the backup device.
4. Click Submit.

Delete a device entry for a disconnected NDMP backup device


If you physically remove an NDMP device from the cluster, OneFS retains the entry for the device; you can delete the device entry for the removed device. You can also delete the entry for a device that is still physically attached to the cluster, which causes OneFS to disconnect from the device. If you delete the entry for a connected device without physically disconnecting it, OneFS detects the device again the next time it scans the ports. You cannot delete a device entry for a device that is currently being backed up to or restored from.
1. Click Data Protection > Backup > Devices.
2. In the Tape Devices table, in the row for the device that you want to disconnect or remove, click Delete device.
3. In the Confirm dialog box, click Yes.

View NDMP backup devices


You can view information about tape and media changer devices that are currently attached to the cluster through a Backup Accelerator node.
1. Click Data Protection > Backup > Devices.
2. In the Tape Devices and Media Changers tables, review information about NDMP backup devices.


NDMP backup device settings


If you attach tape or media changer devices to a cluster through a Backup Accelerator node, OneFS creates a device entry for each device and configures settings for each entry. The following settings appear in the Tape Devices and Media Changers tables:
- Name: A unique device name assigned by OneFS.
- State: Whether data is currently being backed up to or restored from the device. If the device is in use, Read/Write appears; if the device is not in use, Closed appears.
- WWN: The world wide node name (WWNN) of the device.
- Product: The name of the device vendor and the model name or number of the device.
- Serial Number: The serial number of the device.
- Paths: The name of the Backup Accelerator node that the device is attached to, and the port number or numbers to which the device is connected.
- LUN: The logical unit number (LUN) of the device.
- Port ID: The port ID of the device, which binds the logical device to the physical device.
- WWPN: The world wide port name (WWPN) of the port on the tape or media changer device.
- Actions: Actions that you can perform on the device. The following action is available:
  - Delete: Disconnects the device by deleting the associated device entry.

Managing NDMP backup ports


You can manage the Fibre Channel ports through which tape and media changer devices attach to a Backup Accelerator node. You can modify the settings of an NDMP backup port, and you can enable or disable a port.

Modify NDMP backup port settings


You can modify the settings of an NDMP backup port.
1. Click Data Protection > Backup > Ports.
2. In the Ports table, click the port whose settings you want to modify.
3. In the Edit Port dialog box, modify port settings as needed, and then click Submit.

Enable or disable an NDMP backup port


You can enable or disable an NDMP backup port.
1. Click Data Protection > Backup > Ports.
2. In the Ports table, in the row of the port that you want to enable or disable, click Enable or Disable.


View NDMP backup ports


You can view information about the Fibre Channel ports of Backup Accelerator nodes attached to a cluster.
1. Click Data Protection > Backup > Ports.
2. In the Ports table, review information about NDMP backup ports.

NDMP backup port settings


If a Backup Accelerator node is attached to your cluster, OneFS assigns default settings to each port on the node. These settings control, for example, the Fibre Channel topology that the port is configured to support. You can modify NDMP backup port settings at any time. The following settings appear in the Ports table:
- Port: The name of the Backup Accelerator node and the number of the port.
- Topology: The type of Fibre Channel topology that the port is configured to support. The following settings might appear:
  - Point to Point: The port supports a point-to-point topology, with one backup device or Fibre Channel switch directly connected to the port.
  - Loop: The port supports an arbitrated loop topology, with multiple backup devices connected to a single port in a circular formation.
  - Auto: The port detects the topology automatically. This is the recommended setting, and is required if you are using a fabric topology.
- WWNN: The world wide node name (WWNN) of the port. This name is the same for each port on a given node.
- WWPN: The world wide port name (WWPN) of the port. This name is unique to the port.
- Rate: The rate at which data is sent through the port. Valid values are 1 Gb/s, 2 Gb/s, 4 Gb/s, and Auto. If set to Auto, the port automatically detects the data rate.
- Actions: Actions that you can perform on the port. The following actions might be available:
  - Enable: Enables the port. If Enable appears, the port is currently disabled.
  - Disable: Disables the port. If Disable appears, the port is currently enabled.

Managing NDMP backup sessions


You can view the status of NDMP backup sessions or terminate a session that is in progress.

Terminate an NDMP session


You can terminate an NDMP session if you want to interrupt an NDMP backup or restore operation.
1. Click Data Protection > Backup > Sessions.
2. In the Sessions table, in the row of the NDMP session that you want to terminate, click Kill.
3. In the Confirm dialog box, click Yes.

View NDMP sessions


You can view information about NDMP sessions that exist between the cluster and data management applications (DMAs).
1. Click Data Protection > Backup > Sessions.
2. In the Sessions table, review information about NDMP sessions.

NDMP session information


OneFS displays information about active NDMP sessions in the OneFS web administration interface. The following information is included in the Sessions table:
- Session: The unique identification number that OneFS assigned to the session.
- Elapsed: How much time has elapsed since the session started.
- Transferred: The amount of data that has been transferred during the session.
- Throughput: The average throughput of the session over the past five minutes.
- Client/Remote: The IP address of the backup server that the data management application (DMA) is running on. If a three-way NDMP backup or restore operation is currently running, the IP address of the remote tape media server also appears.
- Mover/Data: The current state of the data mover and the data server. The first word describes the activity of the data mover; the second describes the activity of the data server. The data mover and data server send data to and receive data from each other during backup and restore operations. The data mover is a component of the backup server that receives data during backups and sends data during restore operations. The data server is a component of OneFS that sends data during backups and receives data during restore operations. The following states might appear:
  - Active: The data mover or data server is currently sending or receiving data.
  - Paused: The data mover is in the process of receiving data during a backup operation, but is temporarily unable to continue. While the data mover is paused, the data server cannot send data to it. The data server cannot be paused.
  - Idle: The data mover or data server is not sending or receiving data.
  - Listen: The data mover or data server is ready to send or receive data, but is waiting to connect to its counterpart.
- Operation: The type of operation that is currently in progress. If no operation is in progress, this field is blank. The following operations can appear:
  - Backup (0-9): Data is currently being backed up to a media server. The number indicates the level of the NDMP backup.
  - Restore: Data is currently being restored from a media server.
- Source/Destination: If an operation is currently in progress, the /ifs directories that are affected by the operation. If a backup is in progress, the path of the source directory that is being backed up is displayed. If a restore operation is in progress, the path of the directory that is being restored is displayed along with the destination directory to which the tape media server is restoring data. If you are restoring data to the same location that you backed it up from, the same path appears twice.
- Device: The name of the tape or media changer device that is communicating with the cluster.
- Mode: How OneFS is interacting with data on the backup media server. OneFS interacts with data in one of the following ways:
  - Read/Write: OneFS is reading and writing data during a backup operation.
  - Read: OneFS is reading data during a restore operation.
  - Raw: The DMA has access to tape drives, but the drives are not necessarily attached to tape media.
- Actions: Session-related actions that you can perform on the session. You can perform the following action:
  - Kill: Terminates the NDMP session.

View NDMP backup logs


You can view information about NDMP backup and restore operations through NDMP backup logs.
1. Click Data Protection > Backup > Logs.
2. In the Log Location area, from the Node list, select the node that you want to view NDMP backup logs for.
3. In the Log Contents area, review information about NDMP backup and restore operations.

NDMP environment variables


You can configure how NDMP backup and restore operations are performed by setting NDMP environment variables through your data management application (DMA). If you do not specify these variables, OneFS assigns default values. For more information about how to configure NDMP environment variables, see your DMA documentation.

The following NDMP environment variables are valid:

FILESYSTEM <path>
Specifies the full path of the directory that you want to back up. The default value is /ifs.

LEVEL <integer>
Specifies the level of NDMP backup to perform. This variable is ignored if BASE_DATE is specified. The following values are valid:
- 0: Performs a full NDMP backup.
- 1-9: Performs an incremental backup at the specified level.
The default value is 0.

BASE_DATE
If this variable is specified, a token-based incremental backup is performed. Also, if this variable is specified, the dump dates file is not updated, regardless of the setting of the UPDATE variable.

UPDATE {Y | N}
Determines whether OneFS updates the dump dates file. The following values are valid:
- Y: OneFS updates the dump dates file.
- N: OneFS does not update the dump dates file.

TYPE <backup-format>
Specifies the format for the backup. The following values are valid:
- tar: Backups are in TAR format.
- dump: Backups are in dump format. If you specify dump, the backup is still stored on the backup device in TAR format.
The default value is tar.

HIST <file-history-format>
Specifies the file history format. The following values are valid:
- D: Specifies dir/node file history.
- F: Specifies path-based file history.
- Y: Specifies the default file history format determined by your DMA.
- N: Disables file history.
The default value is Y.

DIRECT {Y | N}
Enables or disables Direct Access Restore (DAR) and Directory DAR (DDAR). The following values are valid:
- Y: Enables DAR and DDAR.
- N: Disables DAR and DDAR.
The default value is Y.

FILES <file-matching-pattern>
If this variable is specified, OneFS includes in backup operations only the files and directories that match the specified pattern. Separate multiple patterns with a space. There is no default value.

EXCLUDE <file-matching-pattern>
If this variable is specified, OneFS does not back up files that match the specified pattern. Separate multiple patterns with a space. There is no default value.

ENCODING <encoding-type>
Encodes file-selection or file-history information according to the specified encoding type. The following values are valid: UTF8, UTF8_MAC, EUC_JP, EUC_JP_MS, EUC_KR, CP932, CP949, CP1252, ISO_8859_1, ISO_8859_2, ISO_8859_3, ISO_8859_4, ISO_8859_5, ISO_8859_6, ISO_8859_7, ISO_8859_8, ISO_8859_9, ISO_8859_10, ISO_8859_13, ISO_8859_14, ISO_8859_15, and ISO_8859_16. The default value is UTF8.

RESTORE_HARDLINK_BY_TABLE {Y | N}
Determines whether OneFS recovers hard links by building a hard-link table during restore operations. Specify this variable if hard links have been incorrectly backed up and restore operations are failing. If a restore operation fails because hard links were incorrectly backed up, the following message appears in the NDMP backup logs:
Bad hardlink path for <path>

FH_REPORT_FULL_DIRENTS {Y | N}
If you are using node-based file history, specifying Y causes entries for backed-up directories to be reported. You must enable this variable if you are performing an incremental backup with node-based file history through NDMP v4.
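For illustration, the following sketch shows how a backup job might set several of these variables. The values and paths are hypothetical, and the mechanism for setting them (an environment pane, a job script, and so on) depends on your DMA:

# Hypothetical variable assignments for a backup job; syntax varies by DMA.
FILESYSTEM=/ifs/data/projects
LEVEL=0
TYPE=tar
HIST=F
DIRECT=Y
EXCLUDE="*.tmp *.bak"

With these values, OneFS would perform a full backup of /ifs/data/projects in TAR format with path-based file history and DAR enabled, skipping files that match *.tmp or *.bak.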


CHAPTER 8 File retention with SmartLock

OneFS enables you to prevent users from modifying and deleting files on an Isilon cluster through the SmartLock tool. To use the SmartLock tool, you must configure a SmartLock license on a cluster. With the SmartLock tool, you can create SmartLock directories. Within SmartLock directories, you can commit files to a write once read many (WORM) state. A file committed to a WORM state is non-erasable and non-rewritable. The file can never be modified, and cannot be deleted until after a specified retention period has passed.
SmartLock operation modes................................................................................164
Replication and backup with SmartLock..............................................................165
SmartLock license functionality...........................................................................166
SmartLock best practices and considerations......................................................166
Set the compliance clock....................................................................................167
View the compliance clock..................................................................................168
Creating a SmartLock directory............................................................................168
Managing SmartLock directories.........................................................................170
Managing files in SmartLock directories..............................................................171


SmartLock operation modes


OneFS can operate in either SmartLock enterprise mode or SmartLock compliance mode. The operation mode controls not only how SmartLock directories function, but also how users can access the cluster. The SmartLock operation mode is set during the initial cluster configuration process, before you configure the SmartLock license. If you do not set a cluster to compliance mode, the cluster is automatically set to enterprise mode. All clusters that are upgraded from a OneFS version earlier than 7.0 are automatically set to enterprise mode. You cannot modify the operation mode after it has been set.

Enterprise mode
You can use SmartLock enterprise mode if you want to protect files from accidental modification or deletion, but are not required by law to do so. SmartLock enterprise mode is the default SmartLock operation mode. If a file is committed to a WORM state in a SmartLock enterprise directory, the file can be deleted by the root user through the privileged delete feature.

SmartLock enterprise directories reference the system clock to facilitate time-dependent operations, including file retention. Before you can create SmartLock enterprise directories, you must configure the SmartLock enterprise license.

Isilon clusters operating in SmartLock enterprise mode cannot be made compliant with the regulations defined by U.S. Securities and Exchange Commission rule 17a-4.

Compliance mode
SmartLock compliance mode enables you to protect your data in compliance with the regulations defined by U.S. Securities and Exchange Commission rule 17a-4.

If you set a cluster to SmartLock compliance mode, you cannot log in to that cluster through the root user account. Instead, you log in through the compliance administrator account, which you must configure during the initial cluster configuration process. While logged in through the compliance administrator account, you can perform administrative tasks through the sudo command.

In SmartLock compliance mode, you can create SmartLock compliance directories. In a SmartLock compliance directory, a file can be committed to a WORM state either manually or automatically by the system. A file that has been committed to a WORM state in a compliance directory cannot be modified or deleted before the specified retention period has expired. You cannot delete committed files, even if you are logged in through the compliance administrator account; the privileged delete feature is not available in SmartLock compliance mode.

Before you can create SmartLock compliance directories, you must set the SmartLock compliance clock. SmartLock compliance directories reference the compliance clock to facilitate time-dependent operations, including file retention. You can set the compliance clock only once; after it is set, you cannot modify the compliance-clock time.

In addition to SmartLock compliance directories, you can also create SmartLock enterprise directories on SmartLock compliance clusters.


SmartLock compliance mode is not compatible with Isilon for vCenter, VMware vSphere API for Storage Awareness (VASA), or the vSphere API for Array Integration (VAAI) NAS Plug-In for Isilon.

Replication and backup with SmartLock


If you want to create duplicate copies of SmartLock directories on other storage devices, there are some considerations that you should be aware of.

If you replicate SmartLock directories to another Isilon cluster through the SyncIQ tool, the WORM state of files is replicated. However, SmartLock directory configuration settings are not transferred to the target directory. For example, if you replicate a directory that contains a committed file that is set to expire on March 4th, the file will still be set to expire on March 4th on the target cluster. However, if the directory on the source cluster is set to not allow files to be committed for more than a year, the target directory is not automatically set to the same restriction.

It is recommended that you configure all nodes on both the source and target clusters into Network Time Protocol (NTP) peer mode to ensure that the node clocks are synchronized. In compliance mode, it is recommended that you configure NTP peer mode on both clusters before you set the compliance clock, to ensure that the compliance clocks are initially set to the same time.

Failover and failback procedures for SmartLock directories are different from failover and failback for other directories.

Configuring an autocommit time period for a target SmartLock directory can cause replication jobs to fail. If the target SmartLock directory commits a file to a WORM state, the next replication job cannot update that file. Do not configure SmartLock settings for a target SmartLock directory unless you are no longer replicating data to the directory.

Data replication in compliance mode


To remain compliant with the regulations defined by U.S. Securities and Exchange Commission rule 17a-4, and create another physical copy of the data stored on an Isilon cluster, you can replicate data to another Isilon cluster through the Isilon SyncIQ tool. Other methods of replicating or backing up data are not compliant. To replicate data contained in SmartLock compliance directories to another Isilon cluster, the other cluster must also be running in SmartLock compliance mode. The source and target directories of the replication policy must be root paths of SmartLock compliance directories on the source and target cluster. If you attempt to replicate data from a compliance directory to a non-compliance directory, the replication job will fail.

Data replication and backup in enterprise mode


Before replicating or backing up data from an enterprise SmartLock directory, be aware of the limitations.

It is recommended that you do not replicate a SmartLock enterprise directory to a SmartLock compliance directory on another Isilon cluster. Files that are replicated to a compliance directory are subject to the same restrictions as any file in a compliance directory, even if the files were replicated from an enterprise directory.

If you replicate data from a SmartLock directory to a SmartLock directory on a target cluster, all metadata that relates to the retention date and commit status persists on the target cluster. However, if you replicate data to a non-SmartLock directory, all metadata relating to the retention date and commit status is lost.

If you back up data to an NDMP device, all SmartLock metadata relating to the retention date and commit status is transferred to the NDMP device. When you restore the data to an Isilon cluster, the metadata persists only if you restore to a SmartLock directory; if you restore to a non-SmartLock directory, the metadata is lost. You can restore to a SmartLock directory only if the directory is empty.

SmartLock license functionality


You can create SmartLock directories and commit files to a WORM state only if you configure a SmartLock license on a cluster. If you unconfigure a SmartLock license or your SmartLock license expires, you cannot create new SmartLock directories on the cluster or modify SmartLock directory configuration settings. However, you can still commit files within existing SmartLock directories to a WORM state. On a cluster in SmartLock enterprise mode, you also lose access to the privileged delete command, because privileged delete is part of the SmartLock tool; as a result, you cannot delete committed files before their expiration time. If you unconfigure a SmartLock license from a cluster that is running in SmartLock compliance mode, root access to the cluster is not restored.

SmartLock best practices and considerations


It is recommended that you follow best practices when enforcing file retention with the SmartLock tool. It is especially important that you follow best practices when creating and managing compliance directories. Be aware of the following best practices and considerations:
- You cannot move or rename a directory that contains a SmartLock directory. A SmartLock directory can be renamed only if the directory is empty. You cannot move a file that has been committed to a WORM state, even after the retention period for the file has expired.
- SmartLock compliance directories reference the compliance clock, which is controlled by the compliance clock daemon. Because a user can disable the compliance clock daemon, it is possible to increase the retention period of WORM committed files in SmartLock compliance mode. However, it is not possible to decrease the retention period of a WORM committed file.
- It is recommended that you create files outside of SmartLock directories and then transfer them into a SmartLock directory after you are finished working with them. If you are uploading files to a cluster, upload them to a non-SmartLock directory, and then later transfer them to a SmartLock directory. If a file is committed to a WORM state while it is being uploaded, the file becomes trapped in an inconsistent state.
- Files can be committed to a WORM state even if they are still open. If you specify an autocommit time period for a directory, the autocommit time period is calculated from the time the file was last modified, not when the file was last closed. If you leave a file open and do not write to it for longer than the autocommit time period, the file is committed to a WORM state the next time you attempt to write to it.
- To commit a file to a WORM state, you must remove all existing read-write permissions from the file. However, if the file is already in a read-only state, removing the read-write permissions does not commit the file to a WORM state; you must successfully remove at least one read-write permission for the file to be committed. To commit a file that is currently set to a read-only state, you must first enable write permissions for the file, and then remove the read-write permissions.
- If the autocommit time period expires for a file and the file is then accessed by a user, the file is committed to a WORM state, but the read-write permissions of the file are not modified. The file is still committed, can never be modified, and cannot be deleted until the specified retention period expires; however, the WORM state is not reflected in the read-write permissions.
- If you are replicating SmartLock directories to another Isilon cluster, it is recommended that you do not enable autocommit for the target directories. If you enable autocommit on target directories, files might be committed on the target before they are committed on the source. If this happens, replication jobs fail, and OneFS cannot replicate data to the target cluster.
- If you run the touch command on a file in a SmartLock directory without specifying a retention expiration date, and you then commit the file, the retention period is automatically set to the minimum retention period specified for the SmartLock directory. If you have not specified a minimum retention period for the SmartLock directory, the file is assigned a retention period of zero seconds. It is recommended that you specify a minimum retention period for all SmartLock directories.
- It is recommended that you set SmartLock configuration settings only once and do not modify the settings after files have been added to the SmartLock directory. If an autocommit time period is specified for the directory, modifying SmartLock configuration settings can affect the retention period of files, even if the autocommit time period of the files has already expired.

Set the compliance clock


Before you can create SmartLock compliance directories, you must set the compliance clock. Setting the compliance clock configures it to the same time as the cluster system clock, so before you set the compliance clock, ensure that the cluster system clock is set to the correct time.

After the compliance clock is set, if it becomes unsynchronized with the system clock, the compliance clock slowly corrects itself to match the system clock, at a rate of approximately one week per year. For example, at that rate, a compliance clock that is one day behind the system clock takes roughly seven weeks to catch up.

1. Open a secure shell (SSH) connection to any node in the cluster and log in through the compliance administrator account.
2. Set the compliance clock by running the following command:
isi worm cdate set

The system displays output similar to the following:


WARNING! The Compliance Clock can only be set once! It can only be set to the current system time. Once it is set, it may NEVER be set again. !! Are you SURE you want to set the Compliance Clock? (yes, [no])

3. Type yes and then press ENTER.


View the compliance clock


You can view the current time of the compliance clock.
1. Open a secure shell (SSH) connection to any node in the cluster and log in through the compliance administrator account.
2. View the compliance clock by running the following command:
isi worm cdate

The system displays output similar to the following:


Current Compliance Clock Date/Time: 2012-07-07 01:00:00

Creating a SmartLock directory


You can create a SmartLock directory and configure settings that control how long files are retained in a WORM state and when files are automatically committed to a WORM state.

Retention periods
When a file is committed to a WORM state in a SmartLock directory, the file is retained for a specified retention period. If you manually commit a file by removing its read-write privileges, you can optionally specify a date on which the retention period expires. You can configure minimum and maximum retention periods for a SmartLock directory to prevent files from being retained for too long or too short a time.

For example, assume that you have a SmartLock directory with a minimum retention period of two days. At 1:00 PM on Monday, you commit a file to a WORM state and specify the file to expire on Tuesday at 3:00 PM. Because of the minimum retention period, the retention period still expires two days later, on Wednesday at 1:00 PM.

You can also configure a default retention period that is assigned when a client manually commits a file but does not specify an expiration date. For example, assume that you have a SmartLock directory with a default retention period of two days. At 1:00 PM on Monday, you commit a file to a WORM state without specifying an expiration date. The retention period expires two days later, on Wednesday at 1:00 PM.

Autocommit time periods


You can configure an autocommit time period for SmartLock directories. After a file has existed in a SmartLock directory without being modified for the specified autocommit time period, the file is automatically committed to a WORM state the next time it is accessed by a user.

After the autocommit time period for a file has passed, the file continues to reference the current autocommit time period until the file is accessed by a user. Because of this, increasing the autocommit time period of a directory might cause files to be committed to a WORM state later than expected. For example, assume that you have a SmartLock directory with an autocommit time period of one day and a default retention period of one day, and you copy a file into the directory on Monday at 3:00 PM. If, at 5:00 PM on Tuesday, you increase the autocommit time period to two days and the file has not been accessed, users can modify or delete the file until 3:00 PM on Wednesday.

Decreasing the autocommit time period of a directory might cause a file to be released from a WORM state earlier than expected. For example, assume that you have a SmartLock directory with an autocommit time period of one day and a default retention period of one day, and you copy a file into the directory on Monday at 3:00 PM. If, at 4:00 PM on Tuesday, the file has not yet been accessed by a user and you decrease the autocommit time period to two hours, the file is set to be released from a WORM state at 5:00 PM on Tuesday instead of 3:00 PM on Wednesday.

Modifying the minimum, maximum, or default retention period of a SmartLock directory can modify the retention period of files, even after the autocommit time period of a file expires. For example, assume that you have a SmartLock directory with an autocommit time period of two days and a default retention period of one day, and you copy a file into the directory on Monday at 3:00 PM. If, at 4:00 PM on Wednesday, the file has not been accessed by a user and you decrease the default retention period to two hours, the file is set to be released from a WORM state at 5:00 PM on Wednesday instead of 3:00 PM on Thursday.

If you specify an autocommit time period along with a minimum, maximum, or default retention period, the retention period is calculated from the time that the autocommit period expires. For example, assume that you have a SmartLock directory with a minimum retention period of two days and an autocommit time period of one day. At 1:00 PM on Monday, you modify a file; then, at 1:00 PM on Tuesday, you access the file, causing it to be committed to a WORM state. The retention period expires on Thursday at 1:00 PM, two days after the autocommit time period for the file expired.

Create a SmartLock directory


You can create a SmartLock directory and commit files in that directory to a WORM state. Before creating a WORM root directory, be aware of the following conditions and requirements:
- You cannot create a SmartLock directory as a subdirectory of an existing SmartLock directory.
- Hard links cannot cross SmartLock directory boundaries.
- Creating a SmartLock directory causes a corresponding SmartLock domain to be created for that directory.

1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi worm mkdir command to create a SmartLock directory. The path specified in the isi worm mkdir command cannot be the path of an existing directory.
For example, the following command creates a compliance directory with a default retention period of four years, a minimum retention period of three years, and a maximum retention period of five years:
sudo isi worm mkdir --path /ifs/data/dir --compliance --default 4y --min 3y --max 5y
The following command creates an enterprise directory with an autocommit time period of thirty minutes and a minimum retention period of three months:
isi worm mkdir --path /ifs/data/dir --autocommit 30n --min 3m
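As an illustration of the hard-link restriction noted above, a link that crosses a SmartLock directory boundary fails; the paths here are hypothetical:

# Assume /ifs/data/dir is a SmartLock directory and /ifs/data/other is not.
# Hard links cannot cross SmartLock domain boundaries, so this command fails:
ln /ifs/data/dir/file.txt /ifs/data/other/link.txt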


Managing SmartLock directories


You can modify the default, minimum, and maximum retention period and the autocommit period for a SmartLock directory at any time.

Modify a SmartLock directory


You can modify the SmartLock configuration settings for a SmartLock directory. It is recommended that you set SmartLock configuration settings only once and do not modify the settings after files are added to the SmartLock directory.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Modify SmartLock configuration settings by running the isi worm modify command. For example, the following command sets the default retention period to one year:
isi worm modify --path /ifs/data/protected_directory --default 1y

View SmartLock directory settings


You can view the SmartLock configuration settings for SmartLock directories.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi worm list command to view the SmartLock configuration settings for SmartLock directories.

SmartLock directory configuration settings


SmartLock directory configuration settings determine when files are committed to a WORM state and how long files are retained in a WORM state for a given directory. All SmartLock directories are assigned the following settings:
- ID: The numerical ID of the corresponding SmartLock domain.
- Root path: The path of the directory.
- Type: The type of directory. Enterprise directories display SmartLock; compliance directories display Compliance.
- Override date: The override retention date for the directory. Files committed to a WORM state are not released from a WORM state until after the specified date, regardless of the maximum retention period for the directory or whether a user specifies a retention period expiration date.
- Default retention period: The default retention period for the directory. If a retention period expiration date is not explicitly assigned by a user, the default retention period is assigned to the file when it is committed to a WORM state.
- Minimum retention period: The minimum retention period for the directory. Files are retained in a WORM state for at least the specified amount of time, even if a user specifies a retention period expiration date that equates to a shorter period of time.
- Maximum retention period: The maximum retention period for the directory. Files are retained in a WORM state for no longer than the specified amount of time, even if a user specifies a retention period expiration date that equates to a longer period of time.
- Autocommit period: The autocommit time period for the directory. After a file exists in the SmartLock directory without being modified for the specified time period, the file is automatically committed the next time it is accessed by a user.
- Privileged delete: The state of the privileged delete functionality for the directory. The following values are valid:
  - On: A root user can delete files committed to a WORM state by running the isi worm filedelete command.
  - Off: WORM committed files cannot be deleted, even through the isi worm filedelete command.
  - Disabled (Permanently): WORM committed files cannot be deleted, even through the isi worm filedelete command. After this setting is applied to a SmartLock directory, it cannot be modified.

Retention and autocommit periods are expressed in the format <integer><time>, where <time> is one of the following values: y (years), m (months), w (weeks), or d (days). The autocommit period also accepts h (hours) and n (minutes).

Managing files in SmartLock directories


You can commit files in SmartLock directories to a WORM state by removing the read-write privileges of the file. You can also set a specific date on which the retention period of the file expires. If the cluster is set to SmartLock enterprise mode and you are accessing the cluster through the root user account, you can delete files that are committed to a WORM state.

If you need to retain all currently committed files until a specified date, you can override the retention date for all files in a SmartLock directory. An override retention date extends the retention period of all files scheduled to expire earlier than the specified date; it does not decrease the retention period of files that are scheduled to expire later than the specified date.

The retention period expiration date is set by modifying the access time of a file. On the UNIX command line, the access time can be modified through the touch command. Although there is no method of modifying the access time through Windows Explorer, it can be modified through Windows PowerShell. Accessing a file does not set the retention period expiration date.

Set a retention period through a UNIX command line


You can specify when a file is released from a WORM state through a UNIX command line.
1. Open a connection to any node in the cluster through a UNIX command line and log in.
2. Set the retention period by modifying the access time of the file through the touch command. For example, the following command sets a retention date of June 1, 2013 for /ifs/data/test.txt:
touch -at 201306010000 /ifs/data/test.txt
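The timestamp passed to touch with the -t argument uses the form [[CC]YY]MMDDhhmm, so 201306010000 above means midnight on June 1, 2013. As another sketch, using the same hypothetical file, the following command sets a retention date of December 31, 2013 at 11:59 PM:

# Set the access time (and therefore the retention expiration) to 2013-12-31 23:59
touch -at 201312312359 /ifs/data/test.txt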

Set a retention period through Windows Powershell


You can specify when a file is released from a WORM state through Microsoft Windows PowerShell.
1. Open the Windows PowerShell command prompt.
2. Optional: If you have not already done so, establish a connection to the cluster by running the net use command. For example, the following command establishes a connection to the /ifs directory on cluster.ip.address.com:
net use "\\cluster.ip.address.com\ifs" /user:root password

3. Specify the name of the file you want to set a retention period for by creating an object. The file must exist in a SmartLock directory. For example, the following command creates an object for /smartlock/file.txt:
$file = Get-Item "\\cluster.ip.address.com\ifs\smartlock\file.txt"

4. Specify the retention period by setting the last access time for the file. For example, the following command sets an expiration date of July 1, 2012 at 1:00 PM:
$file.LastAccessTime = Get-Date "2012/7/1 1:00 pm"

Commit a file to a WORM state through a UNIX command line


You can commit a file to a WORM state through a UNIX command line. After a file is committed to a WORM state, you can never modify the file, and you cannot delete the file until the retention period expires. Additionally, you cannot change the path of a file that is committed to a WORM state.

To commit a file to a WORM state, you must remove all write privileges from the file. However, if you attempt to remove write privileges from a file that is already set to a read-only state, the file is not committed to a WORM state. In that case, you must add write privileges to the file, and then return the file to a read-only state.

1. Open a connection to the cluster through a UNIX command line and log in.
2. Remove write privileges from the file by running the chmod command. For example, the following command removes write privileges from /ifs/data/smartlock/file.txt:
chmod ugo-w /ifs/data/smartlock/file.txt
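To confirm that all write bits were removed, you can list the file's permissions with ls; the output shown in the comment below is illustrative only:

ls -l /ifs/data/smartlock/file.txt
# Illustrative output: no write bits remain on the file
# -r--r--r--  1 root  wheel  1024 Jun  1 12:00 /ifs/data/smartlock/file.txt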

Commit a file to a WORM state through Windows Explorer


You can commit a file to a WORM state through Microsoft Windows Explorer. After a file is committed to a WORM state, you can never modify the file, and you cannot delete the file until the retention period expires. Additionally, you cannot change the path of a file that is committed to a WORM state.

To commit a file to a WORM state, you must apply the read-only setting. If a file is already set to a read-only state, you must first remove the file from a read-only state and then return it to a read-only state.

1. In Windows Explorer, navigate to the file that you want to commit to a WORM state.
2. Right-click the file and then click Properties.
3. In the Properties window, click the General tab.
4. Select the Read-only check box, and then click OK.

Override the retention period for all files in a SmartLock directory


You can override the retention period for files in a SmartLock directory. All files committed to a WORM state within the directory remain in a WORM state until after the specified date. If files are committed to a WORM state after the retention period is overridden, the override date functions as a minimum retention date: those files do not expire until at least the specified date, regardless of user specifications.
1. Open a connection to the cluster through a UNIX command line and log in.
2. Override the retention period expiration date for all WORM committed files in a SmartLock directory by running the isi worm modify command. For example, the following command overrides the retention period expiration date of /ifs/data/smartlock to June 5, 2012:
isi worm modify --path /ifs/data/smartlock --override 20120605


Delete a file committed to a WORM state


You can delete a file that is committed to a WORM state before the retention period for that file has expired. You can delete WORM committed files only if you are logged in as the root user.

Before you begin:
- The SmartLock directory that contains the file must allow privileged delete functionality.

1. Open a connection to the cluster through a UNIX command line and log in through the root user account.
2. If privileged delete functionality was disabled for the SmartLock directory, modify the directory by running the isi worm modify command with the --privdel option. For example, the following command enables privileged delete for /ifs/worm/enterprise:
isi worm modify --path /ifs/worm/enterprise --privdel on

3. Delete the WORM committed file by running the isi worm filedelete command. For example, the following command deletes /ifs/worm/enterprise/file:
isi worm filedelete /ifs/worm/enterprise/file

The system displays output similar to the following:


!! Are you sure? Please enter 'yes' to confirm: (yes, [no])

4. Type yes and then press ENTER.

View WORM status of a file


You can view the WORM status of an individual file.
1. Open a connection to the cluster through a UNIX command line and log in.
2. View the WORM status of the file by running the isi worm info command. For example, the following command displays the WORM status of /ifs/worm/enterprise/file:
isi worm info --path /ifs/worm/enterprise/file --verbose


CHAPTER 9 Protection domains

Protection domains are markers that OneFS uses to prevent modifications to files and directories. If a domain is applied to a directory, the domain is also applied to all of the files and subdirectories under that directory. You can specify domains manually; however, domains are usually created automatically by OneFS. There are three types of domains: SyncIQ, SmartLock, and SnapRevert.

SyncIQ domains can be assigned to source and target directories of replication policies. OneFS automatically creates SyncIQ domains for target directories of replication policies the first time that the policies are run, and for source directories during the failback process. You can manually create SyncIQ domains for source directories before you initiate the failback process, but you cannot delete SyncIQ domains that mark target directories of replication policies.

SmartLock domains are assigned to SmartLock directories to prevent committed files from being modified or deleted. SmartLock domains are automatically created when a SmartLock directory is created. You cannot delete SmartLock domains; however, if you delete a SmartLock directory, OneFS automatically deletes the SmartLock domain associated with it.

SnapRevert domains are assigned to directories that are contained in snapshots to prevent files and directories from being modified while a snapshot is being reverted. SnapRevert domains are not created automatically by OneFS, and you cannot revert a snapshot until you create a SnapRevert domain for the directory that the snapshot contains. You can create SnapRevert domains for subdirectories of directories that already have SnapRevert domains. For example, you could create SnapRevert domains for both /ifs/data and /ifs/data/archive. A SnapRevert domain can be deleted if you no longer want to revert snapshots of a directory.

Protection domain considerations.......................................................................176
Create a protection domain.................................................................................176
Delete a protection domain.................................................................................176
View protection domains.....................................................................................177
Protection domain types......................................................................................177


Protection domain considerations


You can manually create protection domains before they are required by OneFS to perform certain actions. However, manually creating protection domains can limit your ability to interact with the data contained in the domain. Before creating protection domains, be aware of the following considerations:
- Copying a large number of files into a protection domain might take a very long time, because each file must be marked individually as belonging to the protection domain.
- You cannot move directories in or out of protection domains. However, you can move a directory contained in a protection domain to another location within the same protection domain.
- Creating a protection domain for a directory that contains a large number of files takes more time than creating a protection domain for a directory with fewer files. Because of this, it is recommended that you create protection domains for directories while the directories are empty, and then add files to the directories.
- If a domain is currently preventing the modification or deletion of a file, you cannot create a protection domain for a directory that contains that file. For example, if /ifs/data/smartlock/file.txt is set to a WORM state by a SmartLock domain, you cannot create a SnapRevert domain for /ifs/data/.

Create a protection domain


You can create replication or snapshot revert domains to facilitate snapshot revert and failover operations. You cannot create a SmartLock domain; SmartLock domains are created automatically when you create a SmartLock directory.
1. Click Cluster Management > Operations > Operations Summary.
2. In the Running Jobs area, click Start job.
3. From the Job list, select Domain Mark.
4. Optional: To specify a priority for the job, from the Priority list, select a priority. Lower values indicate a higher priority. If you do not specify a priority, the job is assigned the default domain mark priority.
5. Optional: To specify the amount of cluster resources the job is allowed to consume, from the Impact policy list, select an impact policy. If you do not specify a policy, the job is assigned the default domain mark policy.
6. From the Domain type list, select the type of domain that you want to create.
7. Ensure that the Delete domain check box is cleared.
8. In the Domain root path field, type the path of the directory that you want to create a domain for.
9. Click Start.

Delete a protection domain


You can delete a replication or snapshot revert domain if you want to move directories out of the domain. You cannot delete a SmartLock domain; SmartLock domains are deleted automatically when you delete a SmartLock directory.
1. Click Cluster Management > Operations > Operations Summary.
2. In the Running Jobs area, click Start job.
3. From the Job list, select Domain Mark.
4. Optional: To specify a priority for the job, from the Priority list, select a priority. Lower values indicate a higher priority. If you do not specify a priority, the job is assigned the default domain mark priority.
5. Optional: To specify the amount of cluster resources the job is allowed to consume, from the Impact policy list, select an impact policy. If you do not specify a policy, the job is assigned the default domain mark policy.
6. From the Domain type list, select the type of domain that you want to delete.
7. Select the Delete domain check box.
8. In the Domain root path field, type the path of the directory that is associated with the domain that you want to delete.
9. Click Start.

View protection domains


You can view protection domains on a cluster.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. View protection domains by running the isi domain list command.

Protection domain types


There are three general protection domain types: SmartLock, SnapRevert, and SyncIQ. Each protection domain type can be divided into additional subcategories. The following domain types appear in the output of the isi domain list command:
- SmartLock: SmartLock domain of an enterprise directory.
- Compliance: SmartLock domain of a compliance directory.
- SyncIQ: SyncIQ domain that prevents users from modifying files and directories.
- SyncIQ, Writable: SyncIQ domain that allows users to modify files and directories.
- SnapRevert: SnapRevert domain that prevents users from modifying files and directories while a snapshot is being reverted.
- SnapRevert, Writable: SnapRevert domain that allows users to modify files and directories.

If Incomplete is appended to a domain, OneFS is in the process of creating the domain. An incomplete domain does not prevent files from being modified or deleted.


CHAPTER 10 Cluster administration

The OneFS cluster can be managed through both the web administration interface and the command-line interface. General cluster settings and module licenses can be configured and managed through either interface. Using either interface, you can view cluster status details for node pools, tiers, and file pool policies; manage tiers and file pool policies; and configure events to generate email notifications and SNMP traps. You can also use the web administration interface to configure and graph real-time and historical cluster performance.
User interfaces....................................................................................................180
Connecting to the cluster.....................................................................................181
Licensing.............................................................................................................182
General cluster settings.......................................................................................184
Cluster statistics..................................................................................................189
Performance monitoring......................................................................................189
Cluster monitoring...............................................................................................189
Monitoring cluster hardware................................................................................196
Cluster maintenance............................................................................................199
Remote support using SupportIQ.........................................................................201
Upgrading OneFS.................................................................................................203
Cluster join modes...............................................................................................204
Event notification settings...................................................................................204
System job management.....................................................................................205


User interfaces
Depending on your preference, location, or the task at hand, OneFS provides four different interfaces for managing the cluster:
- Web administration interface: The browser-based OneFS web administration interface provides secure access with OneFS-supported browsers. You can use this interface to view robust graphical monitoring displays and to perform cluster-management tasks.
- Command-line interface: Cluster-administration tasks can be performed using the command-line interface (CLI). Although most tasks can be performed from either the CLI or the web administration interface, a few tasks can be accomplished only from the CLI.
- Node front panel: You can monitor node and cluster details from a node front panel.
- OneFS Platform API: OneFS includes a RESTful services application programming interface (API). Through this interface, cluster administrators can develop clients and software to automate the management and monitoring of their Isilon storage systems.

Web administration interface


The OneFS web administration interface provides a graphical user interface for a single point of management. In this interface, you can view and monitor cluster status, and you can perform most tasks that are related to managing the cluster. Access to the OneFS interface is controlled by built-in administrator roles with predefined sets of privileges that cannot be modified. With OneFS, you can also create additional roles with configurable sets of privileges. The root user has access to both the command-line and web administration interfaces. The root user and some administrator roles can create additional local users who can be given privileges to use the file system or manage some or all of the administrative system. The OneFS web administration interface uses port 8080 as its default port.

Command-line interface
The OneFS command-line interface can be used to manage the OneFS cluster. The FreeBSD-based interface provides an extended standard UNIX command set for managing all aspects of the cluster. With the OneFS command-line interface, you can run OneFS isi commands to configure, monitor, and manage the cluster. You can access the command-line interface by opening a secure shell (SSH) connection or serial connection to any node in the cluster.

Node front panel


You can view node and cluster details from the node front panel; Accelerator nodes do not have front panels. The front panel of each node contains an LCD screen with five buttons. Details that you can monitor from the node front panel include the following:
• Node status
• Events
• Cluster details, capacity, IP and MAC addresses
• Throughput
• Drive status


Consult the node-specific installation documentation for a complete list of monitoring activities that can be performed by using the node front panel.

OneFS Platform API


OneFS provides an API that you can use to automate cluster monitoring and management tasks. The OneFS Platform API (PAPI) is an HTTP-based interface that conforms to the principles of the Representational State Transfer (REST) architecture. Through this interface, cluster administrators can develop clients and software to automate the management and monitoring of their Isilon storage systems. An understanding of HTTP/1.1 (RFC 2616) is required to use the API; wherever possible, PAPI follows the standards of operation that HTTP/1.1 defines. For more information, see the OneFS Platform API Reference.
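You can issue PAPI requests with any HTTP client. The following sketch assumes the default web administration port 8080 and uses a placeholder resource path rather than a real endpoint; consult the OneFS Platform API Reference for the actual URI namespace:

# curl -k -u root https://<yourNodeIPaddress>:8080/platform/1/<resource path>

The -k option skips certificate verification, which may be necessary if the cluster uses a self-signed certificate, and -u root prompts for the root password.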

Connecting to the cluster


You can access the OneFS cluster through the web administration interface or through an SSH connection. You can also use a serial connection to perform cluster-administration tasks through the command-line interface, or use the node front panel to execute a subset of cluster-management tasks. For information about connecting to the node front panel, see the installation documentation for your node.

Log in to the web administration interface


You can monitor and manage your cluster from the browser-based web administration interface.
1. Open a browser window and type the URL for your cluster in the address field, replacing yourNodeIPaddress in the following example with the first IP address you provided when you configured ext-1: http://<yourNodeIPaddress>.
   If your security certificates have not been configured, a message appears. Resolve any certificate configurations and continue to the web site.
2. Log in to OneFS by typing your OneFS credentials in the Username and Password fields.

Open an SSH connection to a cluster


You can use any SSH client, such as OpenSSH or PuTTY, to connect to a OneFS cluster.
Before you begin
You must have valid OneFS credentials to log in to a cluster after the connection is open.
1. Open a secure shell (SSH) connection to any node in the cluster, using the IP address and port number for the node.
2. Log in with your OneFS credentials.
   At the OneFS command-line prompt, you can use isi commands to monitor and manage your cluster.
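For example, from a UNIX-based client with OpenSSH installed, you might connect as follows (a minimal sketch that assumes the default SSH port and root credentials):

# ssh root@<yourNodeIPaddress>

After you log in, a command such as isi status summarizes node and cluster health.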

Restart or shut down the cluster


You can restart or shut down the cluster from the web administration interface.
1. Click Cluster Management > Hardware Configuration > Shutdown & Reboot Controls.
2. In the Shut Down or Reboot This Cluster area, select the action that you want to perform:
   • To shut down the cluster, click Shut down, and then click Submit.
   • To stop the cluster and then restart it, click Reboot, and then click Submit.
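You can also restart or shut down the cluster from the command-line interface. A minimal sketch, assuming the isi config console syntax in OneFS 7.0:

# isi config
>>> reboot all

Replacing reboot all with shutdown all shuts the cluster down instead.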

Licensing
Advanced cluster features are available when you license OneFS software modules. Each optional OneFS software module requires a separate license. For more information about the following optional software modules, contact your EMC Isilon sales representative.
• HDFS
• InsightIQ
• Isilon for vCenter
• SmartConnect Advanced
• SmartLock
• SmartPools
• SmartQuotas
• SnapshotIQ
• SyncIQ
• iSCSI

Activating licenses
Optional OneFS modules, which provide advanced cluster features, are activated by a valid license key. To activate a licensed OneFS module, you must obtain a license key and then enter the key through either the OneFS web administration interface or the command-line interface. To obtain a module license, contact your EMC Isilon Storage Division sales representative.

Activate a license through the web administration interface


To activate a OneFS module, you must enter a valid module license key through the web administration interface.
Before you begin
Before you can activate an Isilon software module, you must obtain a valid license key, and you must have root user privileges on your OneFS cluster. To obtain a license key, contact your EMC Isilon Storage Division sales representative.
1. Click Help > About This Cluster.
2. In the Licensed Modules section, click Activate license.
   The Activate License page appears.
3. In the License key field, type the license key for the module that you want to enable.
4. Read the end user license agreement, click I have read and agree, and then click Submit.
   The License List page appears, displaying an updated expiration date for the software module.

Results
You can manage the added features in both the OneFS command-line and web administration interfaces, excluding the few features that appear in only one interface.

Activate a license through the command-line interface


To activate a OneFS module, you must enter a valid module license key.
Before you begin
Before you can activate an Isilon software module, you must obtain a valid license key, and you must have root user privileges on your OneFS cluster. To obtain a license key, contact your EMC Isilon sales representative.
1. Open a secure shell (SSH) connection with any node in the cluster.
2. At the OneFS command prompt, log in to the cluster as root.
3. At the command prompt, type the following command, where <license key> is the key for the module:
   isi license activate <license key>
   OneFS returns a confirmation message similar to the following text: SUCCESS, <module name> has been successfully activated.
Results
You can manage the added features through the OneFS command-line and web administration interfaces, excluding the few features that appear in only one interface.

View license information


You can view information about the current status of any optional Isilon software modules.
1. Click Help > About This Cluster.
2. In the Licensed Modules area, review information about licensed modules, including status and expiration date.
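You can also review license status from the command-line interface. Assuming the OneFS 7.0 syntax, the following command lists the software modules along with their license status:

# isi license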

Unconfiguring licenses
You can unconfigure a OneFS licensed module, but removing a licensed feature may have system-wide implications. You may want to unconfigure a license for a OneFS software module if, for example, you enabled an evaluation version of a module but later decided not to purchase a permanent license. Unconfiguring a module license disables recurring jobs or scheduled operations for the module, but it does not deactivate the license. You can unconfigure module licenses only through the command-line interface. You cannot unconfigure a module license through the web administration interface. The results of unconfiguring a license are different for each module.
• HDFS: No system impact.
• InsightIQ: No system impact.
• Isilon for vCenter: If you unconfigure this license, you cannot manage vSphere machine backup and restore operations.
• SmartPools: If you unconfigure a SmartPools license, all file pool policies (except the default file pool policy) are deleted.
• SmartConnect: If you unconfigure a SmartConnect license, the system converts dynamic IP address pools to static IP address pools.
• SmartLock: If you unconfigure a SmartLock license, you cannot create new SmartLock directories or modify SmartLock directory configuration settings for existing directories. You can commit files to a write once read many (WORM) state even after the SmartLock license is unconfigured, but you cannot delete WORM-committed files from enterprise directories.
• SnapshotIQ: If you unconfigure a SnapshotIQ license, the system disables snapshot schedules.
• SmartQuotas: If you unconfigure a SmartQuotas license, the system disables the thresholds that you have configured.
• SyncIQ: If you unconfigure a SyncIQ license, the system disables SyncIQ policies and jobs.
• iSCSI: If you unconfigure an iSCSI license, iSCSI initiators can no longer establish iSCSI connections to the cluster.

Unconfigure a license
You can unconfigure a licensed module only through the command-line interface. You must have root user privileges on your OneFS cluster to unconfigure a module license. Unconfiguring a license does not deactivate the license.
1. Open a secure shell (SSH) connection with any node in the cluster.
2. At the OneFS command prompt, log in to the cluster as root.
3. At the command prompt, run the following command, replacing <module name> with the name of the module:
   isi license unconfigure -m <module name>
   If you do not know the module name, run the isi license command at the command prompt for a list of OneFS modules and their status.
   OneFS returns a confirmation message similar to the following text: The <module name> module has been unconfigured. The license is unconfigured, and any processes enabled for the module are disabled.

General cluster settings


General settings that are applied across the entire cluster can be modified. You can modify the following general settings to customize the OneFS cluster for your needs:
• Cluster name
• Cluster date and time, NTP settings
• Character encoding
• Email settings
• SNMP monitoring
• SupportIQ settings

Configuring the cluster date and time


As an alternative to the Network Time Protocol (NTP) method, in which the cluster automatically synchronizes its date and time settings through an NTP server, the date and time reported by the cluster can be set manually. The NTP service can be configured to ensure that all nodes in a cluster are synchronized to the same time source.
Windows domains provide a mechanism to synchronize members of the domain to a master clock running on the domain controllers, and OneFS uses a service to adjust the cluster time to that of Active Directory. Whenever a cluster is joined to an Active Directory domain and an external NTP server is not configured, the cluster is set automatically to Active Directory time, which is synchronized by a job that runs every 6 hours. When the cluster and domain time fall out of sync by more than 4 minutes, OneFS generates an event notification. If the cluster and Active Directory fall out of sync by more than 5 minutes, authentication will not work. To summarize:
• If no NTP server is configured but the cluster is joined to an Active Directory domain, the cluster synchronizes with Active Directory every 6 hours.
• If an NTP server is configured, the cluster synchronizes the time with the NTP server.

Set the cluster date and time


You can set the date, time, and time zone that is used by the OneFS cluster.
1. Click Cluster Management > General Settings > Date & Time.
   The Date and Time page appears and displays a list of each node's IP address and the date and time settings for each node.
2. From the Date and time lists, select the month, date, year, hour, and minute settings.
3. From the Time zone list, select a value.
   If the time zone that you want is not in the list, select Advanced from the Time zone list, and then select the time zone from the Advanced time zone list.
4. Click Submit.

Specify an NTP time server


You can specify one or more Network Time Protocol (NTP) servers to synchronize the system time on the cluster. The cluster periodically contacts the NTP servers and sets the date and time based on the information it receives.
1. Click Cluster Management > General Settings > NTP.
2. Optional: Add a server.
   a. In the Server IP or hostname field, type the host name or IP address of the NTP server, click Add, and then click Submit.
   b. Optional: To enable NTP authentication with a keyfile, type the path and file name in the Keyfile field, and then click Submit.
   The server is added to the list of NTP servers.
3. Optional: Delete a server.
   a. Select the check box next to the server name in the Server list for each server that you want to delete.
   b. Click Delete.
   c. Click Submit.


Set the cluster name


You can assign a name and add a login message to your cluster to make the cluster and its nodes more easily recognizable on your network. Cluster names must begin with a letter and can contain only numbers, letters, and hyphens. The cluster name is added to the node number to identify each node in the cluster. For example, the first node in a cluster named Images may be named Images-1.
1. Click Cluster Management > General Settings > Cluster Identity.
2. Optional: In the Cluster Name and Description area, type a name for the cluster in the Cluster Name field and type a description in the Cluster Description field.
3. Optional: In the Login Message area, type a title in the Message Title field and a message in the Message Body field.
4. Click Submit.
What to do next
Add the cluster name to your DNS servers.

Specify contact information


You can specify contact information so that Isilon Technical Support personnel and event notification recipients can contact you.
1. Click Dashboard > Events > Notification Settings.
2. In the Contact Information area, click Modify contact information settings.
   The Cluster Name and Description page appears and displays cluster identity and contact information.
3. In the Contact Information area, type the name and contact information in the fields for those details.
4. Click Submit.

View SNMP settings


You can review SNMP monitoring settings.
1. Click Cluster Management > General Settings > SNMP Monitoring.

Configure SMTP email settings


You can send event notifications with SMTP by using an SMTP mail server. If your SMTP server is configured to use authentication, you can also enable SMTP authentication. You can use SMTP email settings if your network environment requires the use of an SMTP server or if you want to route OneFS cluster event notifications with SMTP through a port.
1. Click Cluster Management > General Settings > Email Settings.
2. In the Email Settings area, type the SMTP information for your environment in each field.
3. Optional: For the Use SMTP AUTH option, select Yes, type the user credentials, and then select a connection security option.
4. Click Submit.

You can test your configuration by sending a test event notification.

Configuring SupportIQ
OneFS logs contain data that Isilon Technical Support personnel can securely upload, with your permission, and then analyze to troubleshoot cluster problems. You must enable and configure the SupportIQ module before SupportIQ can run scripts to gather data; the feature may have been enabled when the cluster was first set up. When SupportIQ is enabled, Isilon Technical Support personnel can request logs via scripts that gather cluster data and then upload the data to a secure location.
As an option, you can also enable remote access, which allows Isilon Technical Support personnel to troubleshoot your cluster remotely and run additional data-gathering scripts. Remote access is disabled by default. To enable Isilon to remotely access your cluster using SSH, you must provide the cluster password to a Technical Support engineer.

Enable and configure SupportIQ


You can enable and configure SupportIQ to allow the SupportIQ agent to run scripts that gather and upload information about your cluster to Isilon Technical Support personnel. Optionally, you can enable remote access to your cluster.
1. Click Cluster Management > General Settings > SupportIQ.
2. In the SupportIQ Settings area, select the Enable SupportIQ check box.
3. For SupportIQ alerts, select an option:
   • Send alerts via SupportIQ agent (HTTPS) and by email (SMTP): SupportIQ delivers notifications to Isilon through the SupportIQ agent over HTTPS and by email over SMTP.
   • Send alerts via SupportIQ agent (HTTPS): SupportIQ delivers notifications to Isilon only through the SupportIQ agent over HTTPS.
4. Optional: Enable HTTPS proxy support for SupportIQ.
   a. Select the HTTPS proxy for SupportIQ check box.
   b. In the Proxy host field, type the IP address or fully qualified domain name (FQDN) of the HTTP proxy server.
   c. In the Proxy port field, type the number of the port on which the HTTP proxy server receives requests.
   d. Optional: In the Username field, type the user name for the proxy server.
   e. Optional: In the Password field, type the password for the proxy server.
5. Optional: Enable remote access to the cluster.
   a. Select the Enable remote access to cluster via SSH and web interface check box.
      The remote-access end user license agreement (EULA) appears.
   b. Review the EULA and, if you agree to the terms and conditions, select the I have read and agree to... check box.
6. Click Submit.
   A successful configuration is indicated by a message similar to SupportIQ settings have been updated.


Disable SupportIQ
You can disable SupportIQ so that the SupportIQ agent does not run scripts to gather and upload data about your OneFS cluster.
1. Click Cluster Management > General Settings > SupportIQ.
2. Clear the Enable SupportIQ check box.
3. Click Submit.
   The SupportIQ agent is deactivated.

Enable or disable access time tracking


Access time tracking can be enabled to support features that require it. By default, the OneFS cluster does not track the timestamp when files are accessed, but you can enable this feature to support OneFS features that use it. For example, access time tracking must be enabled to configure SyncIQ policy criteria that match files based on when they were last accessed. Enabling access time tracking may affect cluster performance.
1. Click File System Management > File System Settings > Access Time Tracking.
2. In the Access Time Tracking area, select a configuration option:
   • To enable access time tracking, click Enabled, and then specify in the Precision fields how often to update the last-accessed time by typing a numeric value and by selecting a unit of measure, such as Seconds, Minutes, Hours, Days, Weeks, Months, or Years. For example, if you configure a Precision setting of 1 day, the cluster updates the last-accessed time once each day, even if some files were accessed more often than once during the day.
   • To disable access time tracking, click Disabled.
3. Click Submit.

Specify the cluster join mode


You can specify the method to use when nodes are added to the cluster.
1. Click Cluster Management > General Settings > Join Mode.
2. In the Settings area, select the mode that you want to be used when nodes are added to the cluster:
   • Manual: joins can be initiated by either the node or the cluster.
   • Secure: joins can be initiated only by the cluster.
3. Click Submit.

Specify the cluster character encoding


You can modify the character encoding set for the cluster after installation. Only OneFS-supported character sets are available for selection; UTF-8 is the default character set for OneFS nodes. You must restart the cluster to apply character encoding changes.
Character encoding is typically established during installation of the cluster. Modifying the character encoding after installation may render files unreadable if done incorrectly. Modify settings only if necessary, and only after consultation with Isilon Technical Support.
1. Click File System Management > File System Settings > Character Encoding.
2. From the Character encoding list, select the character-encoding set that you want to use.
3. Click Submit, and then click Yes to acknowledge that the encoding change becomes effective after the cluster is restarted.
4. Restart the cluster.
Results
After the cluster restarts, the web administration interface reflects your change.

Cluster statistics

Command-line options provide the ability to view performance, historical, and in-depth usage statistics for your cluster. The isi statistics and isi status command-line tools include options for querying and filtering the display of OneFS cluster performance and usage statistics. You can use generic and type-specific options to control filtering, aggregation, and reporting for each mode of statistics reporting. You can access these modes of operation by typing subcommands in the isi statistics and isi status tools. For more information about the statistics and status options and descriptions of the subcommands, see the OneFS 7.0 Command Reference.
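For example, the following commands (a minimal sketch; the available subcommands and flags vary by OneFS release) display overall cluster status and a summary of current system performance:

# isi status
# isi statistics system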

Performance monitoring
Cluster throughput can be monitored through either the web administration interface or the command-line interface. You can view cluster throughput graphically and numerically for average and maximum usage. Performance can be monitored through the web administration interface or the command-line interface by using the isi statistics command options. You can view details about the input and output traffic to and from the cluster's file system and also monitor throughput distribution across the cluster. Advanced performance monitoring and analytics are available through the InsightIQ module, which requires a separate license. For more information about optional software modules, contact your EMC Isilon Storage Division sales representative.

Cluster monitoring
Cluster health, performance, and status can be monitored from both the web administration interface and from the command-line interface. In addition, real-time and historical performance can be graphed in the web administration interface. The condition and status of the OneFS hardware can be monitored in the OneFS dashboard through the web administration interface. You can monitor information about the health and performance of the cluster, including the following.

• Node status: Health and performance statistics for each node in the cluster, including hard disk drive (HDD) and solid-state drive (SSD) usage.
• Client connections: Number of clients connected per node.
• New events: List of event notifications generated by system events, including the severity, unique instance ID, start time, alert message, and scope of the event.
• Cluster size: Current view: used and available HDD and SSD space and space reserved for the virtual hot spare (VHS). Historical view: total used space and cluster size for a one-year period.
• Cluster throughput (file system): Current view: average inbound and outbound traffic volume passing through the nodes in the cluster for the past hour. Historical view: average inbound and outbound traffic volume passing through the nodes in the cluster for the past two weeks.
• CPU usage: Current view: average system, user, and total percentages of CPU usage for the past hour. Historical view: CPU usage for the past two weeks.

Information is accessible for individual nodes, including node-specific network traffic, internal and external network interfaces, and details about node pools, tiers, and overall cluster health. Using the OneFS dashboard, you can monitor the status and health of the OneFS system hardware. In addition, SNMP can be used to remotely monitor hardware components, such as fans, hardware sensors, power supplies, and disks.

Monitor the cluster


You can monitor the health and performance of a cluster with charts and tables that show the status and performance of nodes, client connections, events, cluster size, cluster throughput, and CPU usage.
1. Click Dashboard > Cluster Overview > Cluster Status.
2. Optional: View cluster details:
   • Status: To view details about a node, click the ID number of the node.
   • Client connection summary: To view a list of current connections, click Dashboard > Cluster Overview > Client Connections Status.
   • New events: To view more information about an event, click View details in the Actions column.
   • Cluster size: To switch between current and historical views, click Historical or Current near the Monitoring section heading. In historical view, click Used or Cluster size to change the display.
   • Cluster throughput (file system): To switch between current and historical views, click Historical or Current next to the Monitoring section heading. To view throughput statistics for a specific period within the past two weeks, click Dashboard > Cluster Overview > Throughput Distribution. You can hide or show inbound or outbound throughput by clicking Inbound or Outbound in the chart legend. To view maximum throughput, next to Show, select Maximum.
   • CPU usage: To switch between current and historical views, click Historical or Current near the Monitoring section heading. You can hide or show a plot by clicking System, User, or Total in the chart legend. To view maximum usage, next to Show, select Maximum.

View node status


You can view the current and historical status of a node.
1. Click Dashboard > Cluster Overview > Cluster Status.
2. Optional: In the Status area, click the ID number for the node whose status you want to view.
   Information about the node appears, including the chassis and drive status, node size, throughput, CPU usage, and client connections.
3. View node details:
   • Status: To view network settings for a node interface, subnet, or pool, click the link in the Status area.
   • Client connections: To view current clients connected to this node, review the list in this area.
   • Chassis and drive status: To view the state of drives in this node, review this area. To view details about a drive, click the name link of the drive; for example, Bay1.
   • Node size: To switch between current and historical views, click Historical or Current next to the Monitoring area heading. In historical view, click Used or Cluster size to change the display accordingly.
   • Node throughput (file system): To switch between current and historical views, click Historical or Current next to the Monitoring area heading. To view throughput statistics for a period within the past two weeks, click Dashboard > Cluster Overview > Throughput Distribution. You can hide or show inbound or outbound throughput by clicking Inbound or Outbound in the chart legend. To view maximum throughput, next to Show, select Maximum.
   • CPU usage: To switch between current and historical views, click Historical or Current next to the Monitoring area heading. You can hide or show a plot by clicking System, User, or Total in the chart legend. To view maximum usage, next to Show, select Maximum.

Events and notifications


OneFS generates events and notifications to alert you to potential problems related to cluster health and performance. Cluster events and event notifications enable you to receive information about the health and performance of the cluster. You can select the OneFS hardware, software, network, and system events that you want to monitor, and you can cancel, quiet, or unquiet events. In addition, you can configure event notification rules to send an email notification or SNMP trap when a threshold is exceeded.

Event notification methods


You can configure event notification rules to send notifications to specified email recipients or to SupportIQ, or to generate SNMP traps. You can create notification rules that perform an action in response to the occurrence of an event. When you configure event notification rules, you can choose from three methods to notify recipients: email, SupportIQ, and SNMP trap. Each event notification method can be configured through the web administration interface or the command-line interface.
• Email: If you configure email event notifications, you designate recipients and specify SMTP, authorization, and security settings. You can specify batch email settings and the email notification template that you want to use when email notifications are sent.
• SupportIQ: If you enable SupportIQ, you can specify the protocol that you prefer to use for notifications: HTTPS, SMTP, or both.
• SNMP trap: If you configure the OneFS cluster for SNMP monitoring, you select events to send SNMP traps to one or more network monitoring stations, or trap receivers. Each event can generate one or more SNMP traps. The ISILON-TRAP-MIB describes the traps that the cluster can generate, and the ISILON-MIB describes the associated varbinds that accompany the traps. You can download both management information base (MIB) files from the cluster. You must configure an event notification rule to generate SNMP traps.

Coalesced events
Multiple related or duplicate event occurrences are grouped, or coalesced, into one logical event by the OneFS system. For example, if the CPU fan crosses the speed threshold more than 10 times in an hour, the system coalesces this sequence of identical but discrete occurrences into one event. You can view coalesced events and details through the web administration interface or the command-line interface. This message is representative of coalesced event output.
# isi events show 24.924
ID: 24.924
Type: 199990001
Severity: critical
Value: 0.0
Message: Disk Errors detected (Bay 1)
Node: 21
Lifetime: Sun Jun 17 23:29:29 2012 - Now
Quieted: Not quieted
Specifiers: disk: 35
            val: 0.0
            devid: 24
            drive_serial: 'XXXXXXXXXXXXX'
            lba: 1953520064L
            lnn: 21
            drive_type: 'HDD'
            device: 'da1'
            bay: 1
            unit: 805306368
Coalesced by: --
Coalescer Type: Group
Coalesced events:
ID      STARTED      ENDED  SEV  LNN  MESSAGE
24.911  06/17 23:29  --     I    21   Disk stall: Bay 1, Type HDD, LNUM 35. ...
24.912  06/17 23:29  --     I    21   Sector error: da1 block 1953520064
24.913  06/17 23:29  --     I    21   Sector error: da1 block 2202232
24.914  06/17 23:29  --     I    21   Sector error: da1 block 2202120
24.915  06/17 23:29  --     I    21   Sector error: da1 block 2202104
24.916  06/17 23:29  --     I    21   Sector error: da1 block 2202616
24.917  06/17 23:29  --     I    21   Sector error: da1 block 2202168
24.918  06/17 23:29  --     I    21   Sector error: da1 block 2202106
24.919  06/17 23:29  --     I    21   Sector error: da1 block 2202105
24.920  06/17 23:29  --     I    21   Sector error: da1 block 1048670
24.921  06/17 23:29  --     I    21   Sector error: da1 block 223
24.922  06/17 23:29  --     I    21   Disk Repair Initiated: Bay 1, Type HDD, LNUM...

This message is representative output for coalesced duplicate events.

# isi events show 1.3035
ID: 1.3035
Type: 500010001
Severity: info
Value: 0.0
Message: SmartQuotas threshold violation on quota violated, domain direc...
Node: All
Lifetime: Thu Jun 14 01:00:00 2012 - Now
Quieted: Not quieted
Specifiers: enforcement: 'advisory'
            domain: 'directory /ifs/quotas'
            name: 'violated'
            val: 0.0
            devid: 0
            lnn: 0
Coalesced by: --
Coalescer Type: Duplicate
Coalesced events:
ID      STARTED      ENDED  SEV  LNN  MESSAGE
18.621  06/14 01:00  --     I    All  SmartQuotas threshold violation on quota vio...
18.630  06/15 01:00  --     I    All  SmartQuotas threshold violation on quota vio...
18.638  06/16 01:00  --     I    All  SmartQuotas threshold violation on quota vio...
18.647  06/17 01:00  --     I    All  SmartQuotas threshold violation on quota vio...
18.655  06/18 01:00  --     I    All  SmartQuotas threshold violation on quota vio...

Quieting, unquieting, and canceling events


You can change event state by quieting, unquieting, or canceling an event. The following information describes the actions that you can select to change the state of an event and the result of the selection:
• Quiet: Acknowledges an event, which removes it from the list of new events and adds it to the list of quieted events. If a new event of the same event type is triggered, it is a separate new event and must be quieted.
• Unquiet: Returns a quieted event to an unacknowledged state in the list of new events and removes it from the list of quieted events.
• Cancel: Permanently ends an occurrence of an event. Events are canceled, or ended, in one of two ways: the system cancels an event when conditions are met that end its duration, which is bounded by a start time and an end time, or you cancel an event manually.

Most events are canceled automatically by the system when they reach the end of their duration. However, they remain in the system until you manually acknowledge, or quiet, them. You can acknowledge events through either the web administration interface or the command-line interface. For information about managing events through the command-line interface, see the OneFS Command Reference.
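For example, assuming the OneFS 7.0 isi events subcommands, you can list new events and then quiet or cancel a specific event by its instance ID:

# isi events list
# isi events quiet <event ID>
# isi events cancel <event ID>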


Responding to events
You can view event details and respond to cluster events through the web administration interface or the command-line interface. In the web administration interface, you can view and manage new events, open events, and recently ended events. You can also view coalesced events and more detailed information about specific events, and you can quiet or cancel events.

View event details


You can view the details of an event, and you can add a new notification rule or add the event's settings to an existing notification rule.
1. Click Dashboard > Events > Summary.
   The New Events page appears and displays a list of all new events.
2. In the Actions column of the event whose details you want to view, click View details.
   The Event Details page appears and displays additional information about the event.
3. Optional: To acknowledge the event, click Quiet Event.
4. Optional: To create a new event notification rule or to add the settings of this event to an existing event notification rule, click Create Notification Rule:
   • To add a new notification rule for this event, in the Create Rule area, select Create a new notification rule for event, click Submit, and then specify the settings for the rule.
   • To add the settings of this event to an existing event notification rule, in the Create Rule area, select Add to an existing notification rule, select the existing event notification rule from the list, and then click Submit.

View the event history


You can view all events in a chronological list and view additional information about a selected event.
1. Click Dashboard > Events > Events History.
   The Events History page appears and displays a list of all events that have occurred on your OneFS cluster, in chronological order from newest to oldest.
2. Scroll down and navigate to subsequent pages to view the events in chronological order.

Manage an event
You can change the status of an event by quieting, unquieting, or canceling it.
1. Click Dashboard > Events > Summary.
   The New Events page appears and displays a list of all new, or unquieted, events.
2. Perform the following actions as needed:
   • To view additional information about an event, in the Actions column for that event, click View details.
   • To acknowledge an event, click Quiet.
   • To restore an event to an unacknowledged state, click Unquiet.
   • To permanently remove an occurrence of an event, click Cancel.


Managing event notification rules


You can create, modify, or delete event notification rules and configure event notification settings through the web administration interface or the command-line interface. You can also configure whether notifications are received individually or in a batch.

Send a test event notification


You can generate a test event notification to confirm that event notifications are working as you intend.
1. Click Dashboard > Events > Notification Settings.
   The Cluster Events page appears and displays email and SupportIQ settings and contact information.
2. In the Send Test Event area, click Send test event.
3. On the Cluster Events page, click Summary to verify whether the test event was successful.
   A corresponding test event notification appears in the New Events list, with a message in the Message column similar to Test event sent from WebUI.

View event notification rules


You can view a list of event notification rules and details about specific rules.
1. Click Dashboard > Events > Event Notification Rules.
   The Cluster Events page appears and displays a list of all event notification rules.
2. In the Actions column of the rule whose settings you want to view, click Edit.
3. When you have finished viewing the rule details, click Cancel.

Modify an event notification rule


You can modify event notification rules that you created. System event notification rules cannot be modified.
1. Click Dashboard > Events > Event Notification Rules.
   The Cluster Events page appears.
2. In the Actions column for the rule that you want to modify, click Edit.
   The Edit Notification Rule page appears.
3. Modify the event notification rule settings as needed.
4. Click Submit.

Delete an event notification rule


You can delete event notification rules that you created, but system event notification rules cannot be deleted.
1. Click Dashboard > Events > Event Notification Rules.
   The Cluster Events page appears and displays a list of event notification rules.
2. In the Notification Rules area, in the Actions column for the rule that you want to delete, click Delete.
3. Click Yes to confirm the deletion.


View event notification settings


You can view email, SupportIQ, and contact information for event notifications.
1. Click Dashboard > Events > Notification Settings.

Modify event notification settings


You can modify email, SupportIQ, and contact settings for event notifications.
1. Click Dashboard > Events > Notification Settings.
2. Click the Modify link for the setting that you want to change.
3. Click Submit.

Specify event-notification batch mode or template settings


You can choose an event-notification batch option to specify whether you want to receive notifications individually or as an aggregate. You can also specify a custom notification template for email notifications. Before you can select a custom notification template, you must create it and then upload it to a directory at or below /ifs; for example, /ifs/templates.
1. Click Cluster Management > General Settings > Email Settings.
   The General Settings page appears and displays SMTP email settings and batch and custom template options for event notification settings.
2. In the Event Notification Settings area, select a Notification batch mode option.
3. To use the default notification template, leave the Set custom notification template field blank.
4. To use a custom template, specify the template file:
   • Click Browse, navigate to and select the template file that you want to use, and then click OK.
   • Alternatively, in the Set custom notification template field, type the path and file name of the template file that you want to use.
5. Click Submit.

Monitoring cluster hardware


The default Linux SNMP tools or a GUI-based SNMP tool of your choice can be used to monitor cluster hardware. You can enable SNMP on all OneFS nodes to remotely monitor the hardware components across the cluster, including fans, hardware sensors, power supplies, and disks. To maintain optimal cluster health, you can enable and configure SupportIQ to forward all cluster events to Isilon Technical Support for analysis and resolution. The isi batterystatus command also can be used to display the current state of NVRAM batteries and charging systems on node hardware that supports the command.
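For example, on node hardware that supports it, the following command reports the current state of the NVRAM batteries and charging systems:

# isi batterystatus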

View node hardware status


You can view the hardware status of a node.
1. Click Dashboard > Cluster Overview > Cluster Status.
2. Optional: In the Status area, click the ID number for a node.
   Information about the node appears, including the chassis and drive status, node size, throughput, CPU usage, and client connections.
3. In the Chassis and drive status area, click Platform.
   Information about the hardware appears, including status, a list of monitored components, the system partitions, and the hardware log.

SNMP monitoring
You can use SNMP to remotely monitor the OneFS cluster hardware components, such as fans, hardware sensors, power supplies, and disks. The default Linux SNMP tools or a GUI-based SNMP tool of your choice can be used for this purpose. You can enable SNMP monitoring on individual nodes on your cluster, and you can also monitor cluster information from any node. Generated SNMP traps are sent to your SNMP network. You can configure an event notification rule that specifies the network station where you want to send SNMP traps for specific events, so that when an event occurs, the cluster sends the trap to that server. OneFS supports SNMP in read-only mode. SNMP v1 and v2c are enabled by default, but you can configure settings for SNMP v3 alone or for SNMP v1, v2c, and v3. When SNMP v3 is used, OneFS requires AuthNoPriv as the default. AuthPriv is not supported.
Elements in an SNMP hierarchy are arranged in a tree structure, similar to a directory tree. As with directories, identifiers move from general to specific as the string progresses from left to right. Unlike a file hierarchy, however, each element is not only named, but also numbered. For example, the SNMP entity .iso.org.dod.internet.private.enterprises.isilon.oneFSss.ssLocalNodeId.0 maps to .1.3.6.1.4.1.12124.3.2.0. The part of the name that refers to the OneFS SNMP namespace is the 12124 element. Anything further to the right of that number is related to OneFS-specific monitoring.
Management Information Base (MIB) documents define human-readable names for managed objects and specify their data types and other properties. You can download MIBs that are created for SNMP monitoring of a OneFS cluster from the web administration interface or manage them using the command-line interface. MIBs are stored in /usr/local/share/snmp/mibs/ on a OneFS node. The OneFS ISILON-MIBs serve two purposes:
• Augment the information available in standard MIBs.
• Provide OneFS-specific information that is unavailable in standard MIBs.
ISILON-MIB is a registered enterprise MIB. OneFS clusters have two separate MIBs:
• ISILON-MIB: Defines a group of SNMP agents that respond to queries from a network monitoring system (NMS), called OneFS Statistics Snapshot agents. As the name implies, these agents snapshot the state of the OneFS file system at the time they receive a request and report this information back to the NMS.
• ISILON-TRAP-MIB: Generates SNMP traps to send to an SNMP monitoring station when the circumstances occur that are defined in the trap protocol data units (PDUs).
The OneFS MIB files map the OneFS-specific object IDs with descriptions. Download or copy MIB files to a directory where your SNMP tool can find them, such as /usr/share/snmp/mibs/ or /usr/local/share/snmp/mibs, depending on the tool that you use. To have Net-SNMP tools read the MIBs to provide automatic name-to-OID mapping, add -m All to the command, as in the following example:
snmpwalk -v2c -c public -m All <node IP> isilon
If the MIB files are not in the default Net-SNMP MIB directory, you may need to specify the full path, as in the following example. All three lines are one command:
snmpwalk -m /usr/local/share/snmp/mibs/ONEFS-MIB.txt:/usr/local/share/snmp/mibs/ONEFS-SNAPSHOT-MIB.txt:/usr/local/share/snmp/mibs/ONEFS-TRAP-MIB.txt \
-v2c -C c -c public <node IP> enterprises.onefs
These examples are from running the snmpwalk command on a cluster. Your SNMP version may require different arguments.
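For example, to read the local node ID object described above, assuming the MIB files are in place and SNMP v2c access is enabled with the public community string:

snmpget -v2c -c public <node IP> .1.3.6.1.4.1.12124.3.2.0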

Configure the cluster for SNMP monitoring


You can configure your cluster to monitor hardware components by using SNMP. You can enable or disable SNMP monitoring, allow SNMP access by version, and configure other settings, some of which are optional. All SNMP access is read-only. The OneFS cluster does not generate SNMP traps unless you configure an event notification rule to send events.
Before you begin
When SNMP v3 is used, OneFS requires AuthNoPriv as the default. AuthPriv is not supported.
1. Click Cluster Management > General Settings > SNMP Monitoring.
   The SNMP Monitoring page appears.
2. In the Service area, enable or disable SNMP monitoring:
   a. To disable SNMP monitoring, click Disable, and then click Submit.
      SNMP monitoring is disabled on the cluster.
   b. To enable SNMP monitoring, click Enable, and then continue with the following steps to configure your settings.
3. In the Downloads area, click Download for the MIB file that you want to download.
   Follow the download process that is specific to your browser.
4. If you are using Internet Explorer as your browser, right-click the Download link, select Save As from the menu, and save the file to your local drive.
   You can save the text in the file format that is specific to your Net-SNMP tool.
5. Copy MIB files to a directory where your SNMP tool can find them, such as /usr/share/snmp/mibs/ or /usr/local/share/snmp/mibs, depending on the SNMP tool that you use.
   To have Net-SNMP tools read the MIBs to provide automatic name-to-OID mapping, add -m All to the command, as in the following example:
   snmpwalk -v2c -c public -m All <node IP> isilon
6. Navigate back to the SNMP Monitoring page.
7. Configure General Settings:
   a. In the Settings area, configure protocol access by selecting the version that you want.
      OneFS does not support writable OIDs; therefore, no write-only community string setting is available.
   b. In the System location field, type the system name.
      This setting is the value that the node reports when responding to queries. Type a name that helps to identify the location of the node.
   c. Type the contact email address in the System contact field.
8. Optional: If you selected SNMP v1/v2 as your protocol, in the SNMP v1/v2c Settings section, type the community name in the Read-only community field.
9. Configure SNMP v3 Settings:
   a. In the Read-only user field, type the SNMP v3 security name to change the name of the user with read-only privileges.
      The default read-only user is general.
   b. In the SNMP v3 password field, type the new password for the read-only user to set a new SNMP v3 authentication password.
      The default password is password. The password must contain at least eight characters and must not contain any space characters.
   c. Type the new password in the Confirm password field to confirm the new password.
10. Click Submit.
Results
SNMP monitoring is configured for remote monitoring of the cluster hardware components.

Managing SNMP settings


SNMP can be used to monitor cluster hardware and system information. Settings can be configured through either the web administration interface or the command-line interface. You can enable SNMP monitoring on individual nodes in the cluster, and you can monitor information cluster-wide from any node when you enable SNMP on each node. When using SNMP on a OneFS cluster, you should use a fixed general username. A password for the general user can be configured in the web administration interface.
You should configure a network monitoring system (NMS) to query each node directly through a static IP address. This approach allows you to confirm that all nodes have external IP addresses and therefore respond to SNMP queries. Because the SNMP proxy is enabled by default, the SNMP implementation on each node is configured automatically to proxy for all other nodes in the cluster except itself. This proxy configuration allows the Isilon Management Information Base (MIB) and standard MIBs to be exposed seamlessly through the use of context strings for supported SNMP versions. After you download and save the appropriate MIBs, you can configure SNMP monitoring through either the web administration interface or through the command-line interface.

Cluster maintenance
Isilon nodes contain components that can be replaced or upgraded in the field by trained service personnel. Isilon Technical Support can assist you with replacing node components or upgrading components to increase performance.

Replacing node components


If a node component fails, Isilon Technical Support will work with you to quickly replace the component and return the node to a healthy status. The following components, which are considered field-replaceable units (FRUs), can be replaced in the field by trained service personnel:
• battery
• boot flash drive
• SATA/SAS drive
• memory (DIMM)
• fan
• front panel
• intrusion switch
• network interface card (NIC)
• IB/NVRAM card
• SAS controller
• NVRAM battery
• power supply
If your cluster is configured to send alerts to Isilon Technical Support, you will be contacted when a component needs to be replaced. If your cluster is not configured to send alerts to Isilon, you will need to initiate a service request on your own.

Managing cluster nodes


You can add and remove nodes from a cluster. You can also shut down or restart the entire cluster. The following actions can be taken to manage the health and performance of a cluster:
• Add a node to the cluster: Expand a cluster by adding another node.
• Remove a node from the cluster: Take a node out of the cluster.
• Shut down or restart the cluster: Shut down or restart the cluster to perform maintenance.

Add a node to the cluster


You can add an available node to an existing cluster. If the available node is running a different version of OneFS than the cluster, or contains different patches, the node will be upgraded or downgraded to match the cluster. The following nodes have minimum OneFS version requirements and will not join clusters that are running a lower version of OneFS:
• S200 and X200 nodes require version 6.5.2 or later.
• X400 and NL400 nodes require version 6.5.5 or later.
Before you begin
For a node to be added to a cluster, an internal IP address must be available. Before you add new nodes, add IP addresses as necessary. For information on how to add IP addresses, see "Managing the internal cluster network."
1. Navigate to Cluster Management > Hardware Configuration > Add Nodes.
2. In the Available Nodes table, click Add for the node that you want to add to the cluster.

Remove a node from the cluster


You can remove a node from a cluster. When you remove a node, the system smartfails the node to ensure that data on the node is transferred to other nodes in the cluster. Removing a storage node from a cluster deletes the data from that node. Before the system deletes the data, the FlexProtect job safely redistributes data across the nodes remaining in the cluster.
1. Navigate to Cluster Management > Hardware Configuration > Remove Nodes.
2. In the Remove Node area, specify the node that you want to remove.
3. Click Submit.
   If you remove a storage node, the Cluster Status area displays smartfail progress. If you remove a non-storage accelerator node, it is immediately removed from the cluster.

Shut down or restart a cluster


You can shut down or restart an entire cluster.
1. Navigate to Cluster Management > Hardware Configuration > Shutdown & Reboot Controls.
2. In the Shut Down or Reboot This Cluster area, specify an action:
   • Shut down: Shuts down the cluster.
   • Reboot: Stops and then restarts the cluster.
3. Click Submit.

Remote support using SupportIQ


With your permission, Isilon Technical Support personnel can remotely manage your OneFS cluster to troubleshoot an open support case. The Isilon SupportIQ module allows Isilon Technical Support personnel to gather diagnostic data about the cluster. An Isilon Technical Support representative can use SupportIQ to run scripts that gather data about cluster settings and operations. The SupportIQ agent then uploads the information to a secure Isilon FTP site so that it is available for Isilon Technical Support personnel to review. These scripts do not affect cluster services or data availability. The SupportIQ scripts are based on the Isilon isi_gather_info log-gathering tool. For more information, see the isi_gather_info man page.
The SupportIQ module is included with the OneFS operating system and does not require a separate license. You must enable and configure the SupportIQ module before SupportIQ can run scripts to gather data. The feature may have been enabled when the cluster was first set up, but you can enable or disable SupportIQ through the Isilon web administration interface.
In addition to enabling the SupportIQ module to allow the SupportIQ agent to run scripts, you can enable remote access, which allows Isilon Technical Support personnel to monitor cluster events and remotely manage your cluster using SSH or the web administration interface. Remote access helps Isilon Technical Support to quickly identify and troubleshoot cluster issues. Other diagnostic tools are available for you to use in conjunction with Isilon Technical Support to gather and upload information such as packet capture metrics. If you enable remote access, you must also share cluster login credentials with Isilon Technical Support personnel. Isilon Technical Support personnel remotely access your cluster only in the context of an open support case and only after receiving your permission.


SupportIQ scripts
When SupportIQ is enabled, Isilon Technical Support personnel can request logs with scripts that gather cluster data and then upload the data. The SupportIQ scripts are located in the /usr/local/SupportIQ/Scripts/ directory on each node.
The following list describes the data-gathering scripts that SupportIQ can run. These scripts can be run automatically, at the request of an Isilon Technical Support representative, to collect information about your cluster's configuration settings and operations. The SupportIQ agent then uploads the information to a secure Isilon FTP site, so that it is available for Isilon Technical Support personnel to analyze. The SupportIQ scripts do not affect cluster services or the availability of your data.
• Clean watch folder: Clears the contents of /var/crash.
• Get application data: Collects and uploads information about OneFS application programs.
• Generate dashboard file daily: Generates daily dashboard information.
• Generate dashboard file sequence: Generates dashboard information in the sequence that it occurred.
• Get ABR data (as built record): Collects as-built information about hardware.
• Get ATA control and GMirror status: Collects system output and invokes a script when it receives an event that corresponds to a predetermined eventid.
• Get cluster data: Collects and uploads information about overall cluster configuration and operations.
• Get cluster events: Gets the output of existing critical events and uploads the information.
• Get cluster status: Collects and uploads cluster status details.
• Get contact info: Extracts contact information and uploads a text file that contains it.
• Get contents (var/crash): Uploads the contents of /var/crash.
• Get job status: Collects and uploads details on a job that is being monitored.
• Get domain data: Collects and uploads information about the cluster's Active Directory Services (ADS) domain membership.
• Get file system data: Collects and uploads information about the state and health of the OneFS /ifs/ file system.
• Get IB data: Collects and uploads information about the configuration and operation of the InfiniBand back-end network.
• Get logs data: Collects and uploads only the most recent cluster log information.
• Get messages: Collects and uploads active /var/log/messages files.
• Get network data: Collects and uploads information about cluster-wide and node-specific network configuration settings and operations.
• Get NFS clients: Runs a command to check if nodes are being used as NFS clients.
• Get node data: Collects and uploads node-specific configuration, status, and operational information.
• Get protocol data: Collects and uploads network status information and configuration settings for the NFS, SMB, FTP, and HTTP protocols.
• Get Pcap client stats: Collects and uploads client statistics.
• Get readonly status: Warns if the chassis is open and uploads a text file of the event information.
• Get usage data: Collects and uploads current and historical information about node performance and resource usage.
• isi_gather_info: Collects and uploads all recent cluster log information.
• isi_gather_info --incremental: Collects and uploads changes to cluster log information that have occurred since the most recent full operation.
• isi_gather_info --incremental single node: Collects and uploads changes to cluster log information that have occurred since the most recent full operation. Prompts you for the node number.
• isi_gather_info single node: Collects and uploads details for a single node. Prompts you for the node number.
• Upload the dashboard file: Uploads dashboard information to the secure Isilon Technical Support FTP site.
Upgrading OneFS
Two options are available for upgrading the OneFS operating system: a rolling upgrade or a simultaneous upgrade. Before upgrading from OneFS 6.0 or 6.5.x, a pre-upgrade check must be performed.

A rolling upgrade individually upgrades and restarts each node in the cluster sequentially. During a rolling upgrade, the cluster remains online and continues serving clients with no interruption in service, although some connection resets may occur on SMB clients. Rolling upgrades are performed sequentially by node number, so a rolling upgrade takes longer to complete than a simultaneous upgrade. The final node in the upgrade process is the node that you used to start the upgrade. Rolling upgrades are not available for all clusters. For instructions on how to upgrade the cluster operating system, see the OneFS Release Notes.

A simultaneous upgrade installs the new operating system and restarts all nodes in the cluster at the same time. Simultaneous upgrades are faster than rolling upgrades but require a temporary interruption of service during the upgrade process. Your data is inaccessible during the time that it takes to complete the upgrade.
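The ordering rule for rolling upgrades described above can be illustrated with a short Python sketch. This is a conceptual illustration only, not OneFS code; the node numbers and initiating node are hypothetical examples.

# Illustrative sketch: rolling upgrades visit nodes sequentially by node
# number, and the node that initiated the upgrade is upgraded last.
def rolling_upgrade_order(node_numbers, initiating_node):
    remaining = sorted(n for n in node_numbers if n != initiating_node)
    return remaining + [initiating_node]

# Example: a 5-node cluster where the upgrade was started from node 3.
print(rolling_upgrade_order([1, 2, 3, 4, 5], initiating_node=3))
# Output: [1, 2, 4, 5, 3]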

Before beginning either a simultaneous or rolling upgrade, OneFS compares the current cluster and operating system with the new version to ensure that the cluster meets certain criteria, such as configuration compatibility (SMB, LDAP, SmartPools), disk availability, and the absence of critical cluster events. If upgrading puts the cluster at risk, OneFS warns you, provides information about the risks, and prompts you to confirm whether to continue the upgrade. If the cluster does not meet the pre-upgrade criteria, the upgrade does not proceed, and the unsupported statuses are listed.

Cluster join modes


You can specify the method that you want to use to join nodes to a cluster. The join mode determines how the system responds when a new node is added to the subnet occupied by the OneFS cluster. You can set join modes through the web administration interface or the command-line interface.

Manual: Configures OneFS to join new nodes to the cluster in a separate manual process, allowing the addition of a node without requiring authorization.

Secure: Requires authorization of every node added to the cluster. If you use the secure join mode, you cannot use the serial console wizard option [2] Join an existing cluster to join a node to the cluster. Instead, you must add the node from the cluster by using the web administration interface or the isi devices -a add -d <unconfigured_node_serial_no> command in the command-line interface.

Event notification settings


You can specify whether you want to receive event notifications as aggregated batches or individually for each event. Batch notifications are sent every 10 seconds.

The batch options that are described below affect both the content and the subject line of notification emails that are sent in response to system events. You can specify event notification batch options when you configure SMTP email settings.

Notification batch mode options:

Batch all: Generates a single email that contains aggregated notifications for all events, regardless of severity or category.
Batch by severity: Generates an email that contains aggregated notifications for each event of the same severity, regardless of event category.
Batch by category: Generates an email that contains aggregated notifications for each event of the same category, regardless of severity.
No batching: Generates one email per event.

Custom notification template options:

No custom notification template is set: Sends the email notification in the default OneFS notification template format.
Set custom notification template: Sends the email notifications in the format that you defined in your custom template file.
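To make the batching behavior concrete, the following Python sketch groups pending events into notification digests according to the selected batch mode. This is a conceptual illustration only, not OneFS code; the event fields and mode names are hypothetical.

from collections import defaultdict

# Illustrative sketch: one digest per group, where the grouping rule is
# determined by the notification batch mode described above.
def batch_events(events, mode):
    if mode == "no_batching":
        return [[e] for e in events]   # one email per event
    if mode == "batch_all":
        return [events]                # one aggregated email for all events
    key = {"batch_by_severity": lambda e: e["severity"],
           "batch_by_category": lambda e: e["category"]}[mode]
    groups = defaultdict(list)
    for e in events:
        groups[key(e)].append(e)
    return list(groups.values())       # one email per severity or category

events = [{"severity": "critical", "category": "hardware"},
          {"severity": "warning",  "category": "hardware"},
          {"severity": "critical", "category": "network"}]
print(len(batch_events(events, "batch_by_severity")))   # 2 digests
print(len(batch_events(events, "batch_by_category")))   # 2 digests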

System job management


For a cluster to run efficiently, regular maintenance must be performed to keep the file system properly organized. The OneFS System Job Engine schedules and executes this maintenance. On a clustered system, even a routine task such as a virus scan can affect distributed computing performance. The System Job Engine automates the execution of maintenance jobs while constantly monitoring and mitigating their impact on the overall performance of the cluster. System administrators access job schedules and policies through the OneFS interface and can tailor maintenance to the specific workflow of the cluster.

Job engine overview


The System Job Engine breaks jobs down into smaller blocks of work, allowing the engine to react quickly to errors or cluster performance degradation even during longer jobs. Jobs are broken down into phases, which are broken down into tasks, which are directed at items.

Job: An application built into the System Job Engine.

Phase: Jobs are broken down into phases. Some jobs have as few as one phase; others have as many as seven. If an error is detected while a job is running, the job does not progress past its current phase unless it is determined safe to do so.

Task: Phases are broken down into tasks. A phase includes at least one task, but may employ several. A task is the actual action that is taken on the system.

Item: Items are the targets of tasks, the file system components that are being operated on by a task.

The Job Engine tracks the interaction between a task and its target items:

If an error is detected, the job continues as long as the error does not affect the overall goal of the job; otherwise the job is cancelled.

If a task is slowing the performance of a node, the task is asked to slow down and consume fewer resources.

The System Job Engine accumulates task and item results in logs that provide administrators insight into the maintenance of a cluster.
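The job, phase, task, and item hierarchy can be pictured with a minimal Python sketch. This is a conceptual illustration only, not Job Engine code; the class and field names are hypothetical.

from dataclasses import dataclass, field
from typing import List

# Illustrative sketch: a job contains phases, a phase contains tasks,
# and each task is directed at the items it operates on.
@dataclass
class Task:
    action: str
    items: List[str] = field(default_factory=list)  # targets of the task

@dataclass
class Phase:
    name: str
    tasks: List[Task]

@dataclass
class Job:
    name: str
    phases: List[Phase]

# Example: a toy two-phase job.
job = Job("ToyScan", [
    Phase("enumerate", [Task("list", ["/ifs/data"])]),
    Phase("process", [Task("scan", ["/ifs/data/a", "/ifs/data/b"])]),
])
print(sum(len(t.items) for p in job.phases for t in p.tasks))  # 3 items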

System jobs
OneFS offers a number of jobs to assist with cluster administration and maintenance.

AutoBalance: Balances free space within the cluster. AutoBalance is most efficient in clusters containing only hard disk drives (HDDs).
AutoBalanceLin: Balances free space within the cluster. AutoBalanceLin is most efficient in clusters where file system metadata is stored on solid state drives (SSDs).
AVScan: Performs an antivirus scan on all files.
Collect: Reclaims free space that could not be freed earlier due to a node or drive being unavailable.
DomainMark: Associates a path and its contents with a domain.
FlexProtect: Performs a protection pass on the file system. FlexProtect is most efficient in clusters containing only HDDs.
FlexProtectLin: Performs a protection pass on the file system. FlexProtectLin is most efficient in clusters where file system metadata is stored on SSDs.
FSAnalyze: Gathers file system analytics.
IntegrityScan: Verifies file system integrity.
MediaScan: Removes media-level errors from disks.
MultiScan: Runs the AutoBalance and Collect jobs together.
PermissionRepair: Corrects file and directory permissions in the /ifs directory.
QuotaScan: Updates quota accounting for domains created on an existing file tree.
SetProtectPlus: Applies a default file policy across the cluster. This job is used only if SmartPools is not licensed.

ShadowStoreDelete: Creates free space associated with a shadow store.
SmartPools: Enforces SmartPools file policies. This job is used only if SmartPools is licensed.
SnapRevert: Reverts an entire snapshot back to head.
SnapshotDelete: Creates free space that is associated with deleted snapshots.
TreeDelete: Deletes a file path in the /ifs directory.

Job performance impact


The System Job Engine constantly monitors the cluster to ensure that system maintenance jobs do not place an excessive processing burden on individual nodes. The Job Engine tracks cluster performance at both the system and process level by monitoring the load placed on cluster CPUs. If a job affects overall system performance, the Job Engine reduces the activity of maintenance jobs to yield resources to users.

However, certain jobs are more vital to cluster health than others. For these jobs, an administrator may want to ensure greater access to system resources than would be provided to less urgent jobs. For this reason, every system job has an associated impact policy. Impact policies set a limit on the system resources that a job is allowed to consume and dictate the times of day when a job is allowed to run.

Job impact policies


Impact policies dictate the times of day when a job is allowed to run and the maximum amount of resources that a job is allowed to consume. The following default impact policies are available to administrators.

LOW: Allowed to run any time of day. Resource consumption: low.
MEDIUM: Allowed to run any time of day. Resource consumption: medium.
HIGH: Allowed to run any time of day. Resource consumption: unlimited.
OFF_HOURS: Allowed to run outside of business hours. Resource consumption: low.

These default policies cannot be deleted or modified. However, administrators can tailor policies to a specific workflow by creating new policies. New policies are created by copying and modifying the default policies.
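The relationship between a policy's allowed window and its resource cap can be sketched in a few lines of Python. This is a conceptual illustration only, not OneFS code; the business-hours window of 09:00 to 17:00 is an assumed example.

# Illustrative sketch: decide whether a job governed by one of the
# default policies above may run at a given hour, and at what cap.
POLICIES = {
    "LOW":       {"hours": set(range(24)), "cap": "low"},
    "MEDIUM":    {"hours": set(range(24)), "cap": "medium"},
    "HIGH":      {"hours": set(range(24)), "cap": "unlimited"},
    # Assumes business hours of 09:00-17:00 for this example.
    "OFF_HOURS": {"hours": {h for h in range(24) if not 9 <= h < 17},
                  "cap": "low"},
}

def may_run(policy_name, hour):
    policy = POLICIES[policy_name]
    return (hour in policy["hours"], policy["cap"])

print(may_run("OFF_HOURS", 10))  # (False, 'low'): blocked in business hours
print(may_run("OFF_HOURS", 22))  # (True, 'low')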


Job priorities
Every job in the System Job Engine is assigned a priority. Priorities determine which job yields when two jobs attempt to run at the same time: the higher-priority job runs first. The highest-priority jobs have a priority of 1, and higher-priority jobs always interrupt lower-priority jobs. If a lower-priority job is interrupted, it is inserted back into the priority queue; when the job reaches the front of the priority queue again, it resumes from where it left off. If two jobs at the same priority level attempt to run, the job that entered the queue first runs.

The following list contains the default priority value for each job. These priorities can be adjusted by a system administrator.

AutoBalance: 4
AutoBalanceLin: 4
AVScan: 6
Collect: 4
SmartPools: 6
MultiScan: 4
FlexProtect: 1
FSAnalyze: 6
IntegrityScan: 1
MediaScan: 8
Repair: 5
QuotaScan: 6
SetProtectPlus: 6
SnapshotDelete: 2
TreeDelete: 4
Upgrade: 3
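The scheduling rules above (lower numbers run first, ties are broken by arrival order, and interrupted jobs re-enter the queue) can be modeled with a small Python sketch. This is a conceptual illustration only, not the actual Job Engine implementation.

import heapq
import itertools

# Illustrative sketch: a priority queue in which a lower number means a
# higher priority and ties are broken by arrival order.
class JobQueue:
    def __init__(self):
        self._heap = []
        self._arrival = itertools.count()

    def add(self, name, priority):
        heapq.heappush(self._heap, (priority, next(self._arrival), name))

    def next_job(self):
        priority, _, name = heapq.heappop(self._heap)
        return name, priority

q = JobQueue()
q.add("MediaScan", 8)
q.add("FlexProtect", 1)
q.add("AutoBalance", 4)
q.add("IntegrityScan", 1)   # same priority as FlexProtect, arrives later
print(q.next_job())          # ('FlexProtect', 1)
print(q.next_job())          # ('IntegrityScan', 1)
print(q.next_job())          # ('AutoBalance', 4)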

Managing system jobs


Administrators can manually control job activity. The following actions are available to control current job activity:

Start a job: Begin a job that is currently inactive.
Pause a job: Momentarily stop a job.
Update a job: Change the settings of a running, waiting, or paused job.
Resume a job: Continue a paused job.
Cancel a job: Discontinue a job that is currently active.
Retry a job: Restart a job that was previously interrupted by the system.
Modify job settings: Change a job's priority level or impact policy.

Start a job
You can start a job manually at any time.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. In the Running Jobs area, click Start job.
3. From the Job list, select the job that you want to run.
4. From the Priority list, select the priority level for the job. If no priority is selected, the job runs at its default priority.
5. From the Impact policy list, select the impact policy for the job. If no impact policy is selected, the job runs with the default impact policy for that job type.
6. Click Start.

The job appears in the Running Jobs table.

Pause a job
You can pause an in-progress job. Pausing a job allows you to temporarily free cluster resources without losing the progress made by the job.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. In the Running Jobs table, click Pause for the job you want to pause.

The job moves from the Running Jobs table to the Paused and Waiting Jobs table. To continue the job later, click Resume for that job in the Paused and Waiting Jobs table.

Update a job
You can change the priority and impact policy of a running, waiting, or paused job.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. In the Running Jobs or Paused and Waiting Jobs table, click Update for the job you want to update.
3. Adjust the priority of the job by selecting a new priority from the Priority list.
4. Adjust the default impact policy of the job by selecting a new impact policy from the Impact policy list.
5. Click Update to save the new settings. If you update a running job, the job automatically resumes. If you update a paused or waiting job, the job returns to that status.

Results
Only the current instance of the job runs with the updated settings. The next instance of the job returns to the default settings for that job. To permanently modify job settings, click a job name, and then click Modify job defaults in the Job Details area.


Resume a job
You can resume a paused job. The job continues from the phase in which it was paused.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. In the Paused and Waiting Jobs table, click Resume for the job you want to continue.

The job appears in the Running Jobs table.

Cancel a job
You can discontinue a running, paused, or waiting job.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. In the Running Jobs table, click Cancel for the job you want to cancel. This action can also be performed on a job in the Paused and Waiting Jobs table.

The job appears in the Recent Job History table as User Cancelled.

Retry a job
If a job fails, you can manually restart the job without waiting for the next scheduled run time.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. In the Failed Jobs table, click Retry for the job you want to run again.

The job moves from the Failed Jobs table to the Running Jobs table.

Modify job settings


You can modify the default priority level, impact policy, and schedule of a job.

1. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
2. In the Jobs table, click the name of the job you want to modify. Information related to the job appears, including the current default settings, schedule, current state, and recent activity.
3. In the Job Details area, click Modify job defaults.
4. Adjust the default priority of the job by selecting a new priority from the Default priority list.
5. Adjust the default impact policy of the job by selecting a new impact policy from the Default impact policy list.
6. Adjust the schedule of the job by clicking Edit schedule.
a. In the Interval area, click Daily, Weekly, Monthly, or Yearly to set the frequency at which the job runs. When you click an option, the right side of the Interval area updates with additional scheduling options. Select the exact day or days you want the job to run.
b. In the Frequency area, click Once or Multiple times to set the number of times the job runs on the day specified in the previous step. When you click an option, the right side of the Frequency area updates with additional scheduling options. Select the exact time or times of day you want the job to run.
c. Click Done. The Schedule line displays the new schedule.

7. Click Submit.

The modified settings appear in the Jobs table.

Monitoring system jobs


Administrators can monitor jobs that are currently running and review the results of completed jobs. The following actions are available for viewing job activity:

View active jobs: See a list of jobs that are currently running.
View job history: See the recent activity of a specific job.

View active jobs


You can view the system jobs that are currently running on your cluster.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. Locate the Running Jobs table. All currently active jobs appear in the Running Jobs table, along with job settings and progress details.

View job history


You can view recent job engine activity for all jobs or for a specific job.

1. Navigate to Cluster Management > Operations > Operations Summary.
2. Locate the Recent Job History table. The table displays a chronological list of the last ten job events that occurred on the cluster. Event information includes the time the event occurred, the job responsible for the event, and results related to the event.
3. To view recent events beyond the last ten, click View full job history.
4. Click a job name to view recent events for that job only. Recent events for the job are listed in the Job Events table.

Creating impact policies


Administrators can create impact policies that fit the specific needs of their workflow. The following actions are available when working with impact policies:

Create an impact policy: Create and configure a new impact policy.
Copy an impact policy: Use an existing impact policy as the foundation for a new policy.

Create an impact policy


You can create and configure new impact policies for use by scheduled jobs.

1. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
2. In the Impact Policies area, click Add impact policy. The Policy Details area appears.
3. In the Name field, type the name of the new impact policy. The name must be less than 64 characters.
4. Optional: In the Description field, type an overview of the new impact policy. The description must be less than 1024 characters. Include information specific to the impact policy, such as unique schedule parameters or logistical requirements that make the new impact policy necessary.
5. Click Submit.
6. In the Impact Schedule area, modify the schedule of the impact policy by adding, editing, or deleting impact intervals. The default impact schedule for a new policy is to run at any time with an impact setting of Low.
7. In the Policy Details area, click Submit.

The new impact policy appears in the Impact Policies table and is now available to assign to jobs.

Copy an impact policy


You can use an existing impact policy as the template for a new policy by making a copy of the policy and then editing the copy.

1. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
2. In the Impact Policies table, click Copy for the policy you want to use to create your new policy. The copy appears directly below the original in the Impact Policies table, with _1 added to the original name.
3. Click Edit for the copy you want to modify.
4. Modify one or all of the following:

Policy name
a. In the Policy field, type a new name for the policy.
b. Click Submit.

Policy description
a. In the Description field, type an overview for the new impact policy.
b. Click Submit.

Impact schedule
a. In the Impact Schedule area, modify the schedule of the new impact policy by adding, editing, or deleting impact intervals.

Managing impact policies


Administrators can tailor existing impact policies to fit their workflow. The following actions are available when working with impact policies:

Modify an impact policy: Adjust the impact profile of a policy by changing its resource allocation or schedule.
Delete an impact policy: Remove an impact policy from the System Job Engine.
Modify the impact schedule of a policy: Establish a period of time during which specific impact limitations are placed on a job. For example, a job can run with low impact during business hours but with high impact during non-business hours.
View impact policy settings: Open a full list of the available impact policies to review current settings.


Modify an impact policy


You can change the name, description, and impact intervals of an existing impact policy.

Before you begin
The base impact policies are high, medium, low, and off_hours. These policies cannot be modified directly. If you want to modify one of these policies, create a copy of the policy, and then modify the copy.

1. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
2. In the Impact Policies table, click Edit for the policy you want to modify.
3. Modify one or all of the following:

Policy name
a. In the Policy field, type a new name for the impact policy.
b. Click Submit.

Policy description
a. In the Description field, type a new overview for the impact policy.
b. Click Submit.

Impact schedule
a. In the Impact Schedule area, modify the schedule of the impact policy by adding, editing, or deleting impact intervals.

Delete an impact policy


You can delete unneeded policies from the Impact Policies list. The base impact policies are high, medium, low, and off_hours; these policies cannot be deleted.

1. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
2. In the Impact Policies table, click Delete for the impact policy you want to delete.
3. Confirm that you want to delete the impact policy.

The impact policy is removed from the Impact Policies table.

Modify the impact schedule of a policy


You can set the impact schedule to dictate when, and at what impact level, the jobs under a policy run.

1. Adjust the impact schedule of a policy while performing one of the following tasks:

Create a new impact policy
a. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
b. In the Impact Policies area, click Add impact policy.
c. Type a policy Name and Description, and then click Submit. The Impact Schedule area appears at the bottom of the page.

Copy an impact policy
a. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
b. In the Impact Policies table, click Copy for the policy you want to use to create your new policy.
c. Click Edit for the copy you want to modify. The Impact Schedule area appears at the bottom of the page.

Modify an impact policy
a. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
b. In the Impact Policies table, click Edit for the policy you want to modify. The Impact Schedule area appears at the bottom of the page.

The Impact Schedule area displays the schedule for the policy in a table. Every row in the table contains an impact interval. An impact interval is a window of time during which the impact level of a job is raised or lowered, or the job is paused.

2. Modify the impact schedule of the policy by performing any of the following actions:

Add an impact interval
a. Click Add impact interval.
b. From the Impact list, select the impact level for the new interval.
c. Specify the day and time to begin the impact interval.
d. Specify the day and time to end the impact interval.
e. Click Submit. The new impact interval appears as part of the impact schedule for the policy. Create additional impact intervals as necessary. New impact intervals overwrite existing intervals.

Modify an impact interval
a. In the Impact Interval table, click Edit for the interval you want to modify.
b. Adjust the impact level, start time, or end time.
c. Click Submit. The modified impact interval appears as part of the impact schedule for the policy.

Delete an impact interval
a. In the Impact Interval table, click Delete for the interval you want to delete.
b. Confirm that you want to delete the impact interval. The impact interval is removed from the impact schedule for the policy.

View impact policy settings


You can view the impact policy settings for any job.

1. Navigate to Cluster Management > Operations > Jobs and Impact Policies.
2. In the Jobs table, click the name of the job you want to view.
3. In the Job Details area, locate the following impact policy settings:

Default impact policy: The impact policy that the job uses.
Schedule: The impact schedule that the job uses.


CHAPTER 11 SmartQuotas

The SmartQuotas module is an optional quota-management tool that monitors and enforces administrator-defined storage limits. Using accounting and enforcement quota limits, reporting capabilities, and automated notifications, SmartQuotas manages storage use, monitors disk storage, and issues alerts when disk-storage limits are exceeded.

Quotas help you manage storage usage according to criteria that you define. Quotas are used as a method of tracking, and sometimes limiting, the amount of storage that a user, group, or project consumes. Quotas are a useful way of ensuring that a user or department does not infringe on the storage that is allocated to other users or departments. In some quota implementations, writes beyond the defined space are denied; in other cases, a simple notification is sent.

The SmartQuotas module requires a separate license. For additional information about the SmartQuotas module or to activate the module, contact your EMC Isilon sales representative.
Quotas overview
Creating quotas
Managing quotas
Managing quota notifications
Managing quota reports
Basic quota settings
Advisory limit quota notification rules settings
Soft limit quota notification rules settings
Hard limit quota notification rules settings
Limit notification settings
Quota report settings
Custom email notification template variable descriptions


Quotas overview
The integrated OneFS SmartQuotas module is an optional quota-management tool that monitors and enforces administrator-defined storage limits. Through the use of accounting and enforcement quota limits, reporting capabilities, and automated notifications, you can manage storage utilization, monitor disk storage, and issue alerts when storage limits are exceeded.

A storage quota defines the boundaries of storage capacity that are allowed for an entity in a OneFS cluster, such as a group, a user, or a directory. The SmartQuotas module can provision, monitor, and report disk-storage usage and can send automated notifications when storage limits are exceeded or approached. SmartQuotas also provides flexible reporting options that can help you analyze data usage.

Quota types
OneFS uses the concept of quota types, sometimes referred to as quota domains, as the fundamental organizational unit of storage quotas. Storage quotas comprise a set of resources and an accounting of each resource type for that set. Storage quota creation always begins with creating one or more quota types.

Every quota type is defined by a directory or an entity, which together encapsulate the files and subdirectories to be tracked. When you describe a storage quota type, three important identifiers are used:

The directory that it is on
The quota entity
Whether snapshots are to be tracked against the quota limit

Quota types support default user and group entities (in addition to specified users and groups) to describe quotas that have default user and group policies. You can choose a quota type from the following entities:

Directory: A specific directory and its subdirectories.

User: Either a specific user or a default user (every user). Specific-user quotas that you configure take precedence over a default user quota.

Group: All members of a specific group or all members of a default group (every group). Any specific-group quotas that you configure take precedence over a default group quota. Associating a group quota with a default group quota creates a linked quota.

You can create multiple quota types on the same directory, but they must be of a different type or have a different snapshot option. Quota types can be specified for any directory in OneFS and can be nested within each other, creating a hierarchy of complex storage-use policies. You should not create quotas of any type on the OneFS root (/ifs); a root-level quota may significantly degrade performance.

Nested storage quotas can overlap. For example, the following quota settings ensure that the finance directory never exceeds 5 TB, while limiting the users in the finance department to 1 TB each:

Set a 5 TB hard quota on /ifs/data/finance.
Set 1 TB soft quotas on each user in the finance department.

A default quota type is a quota that does not account for a set of files, but instead specifies a policy for new entities that match a trigger. For example, default-user@/ifs/cs becomes specific-user@/ifs/cs for each specific user that is not otherwise defined. As an example,

you can create a default-user quota on the /ifs/dir-1 directory, where that directory is owned by the root user. The default-user type automatically creates a new domain on that directory for root and adds the usage there:
my-OneFS-1# mkdir /ifs/dir-1
my-OneFS-1# isi quota quotas create --default-user --path=/ifs/dir-1
my-OneFS-1# isi quota quotas ls -v --path=/ifs/dir-1
Type          Path        Policy       Snap  Usage
------------- ----------- ------------ ----- -----
default-user  /ifs/dir-1  enforcement  no    0B
    [usage-with-no-overhead] (0B)
    [usage-with-overhead]    (0B)
    [usage-inode-count]      (0)
* user:root   /ifs/dir-1  enforcement  no    0B
    [usage-with-no-overhead] (0B)
    [usage-with-overhead]    (2.0K)
    [usage-inode-count]      (1)

Now add a file that is owned by a different user (admin).


my-OneFS-1# touch /ifs/dir-1/somefile
my-OneFS-1# chown admin /ifs/dir-1/somefile
my-OneFS-1# isi quota quotas ls -v --path=/ifs/dir-1
Type          Path        Policy       Snap  Usage
------------- ----------- ------------ ----- -----
default-user  /ifs/dir-1  enforcement  no    0B
    [usage-with-no-overhead] (0B)
    [usage-with-overhead]    (0B)
    [usage-inode-count]      (0)
* user:root   /ifs/dir-1  enforcement  no    18B
    [usage-with-no-overhead] (18B)
    [usage-with-overhead]    (2.0K)
    [usage-inode-count]      (1)
* user:admin  /ifs/dir-1  enforcement  no    0B
    [usage-with-no-overhead] (0B)
    [usage-with-overhead]    (1.5K)
    [usage-inode-count]      (1)

In this example, the default-user type created a new specific-user type automatically (user:admin) and added the new usage to it. Default-user does not have any usage because it is used only to generate new quotas automatically. Default-user enforcement is copied to a specific-user (user:admin), and the inherited quota is called a linked quota. In this way, each user account gets its own usage accounting.

Defaults can overlap; for example, default-user@/ifs and default-user@/ifs/cs may both be defined. If default enforcement changes, OneFS storage quotas propagate the changes to the linked quotas asynchronously. Because the update is asynchronous, there is some lag before updates are in effect. If a default type (every user or every group) is deleted, OneFS deletes all children that are marked as inherited. As an option, you can delete the default without deleting the children, but this action breaks inheritance on all inherited children.

Continuing with the example, add another file that is owned by the root user. Because the root type exists, the new usage is added to it.
my-OneFS-1# touch /ifs/dir-1/anotherfile
my-OneFS-1# isi quota ls -v --path=/ifs/dir-1
Type          Path        Policy       Snap  Usage
------------- ----------- ------------ ----- -----
default-user  /ifs/dir-1  enforcement  no    0B
    [usage-with-no-overhead] (0B)
    [usage-with-overhead]    (0B)
    [usage-inode-count]      (0)
* user:root   /ifs/dir-1  enforcement  no    39B
    [usage-with-no-overhead] (39B)
    [usage-with-overhead]    (3.5K)
    [usage-inode-count]      (2)
* user:admin  /ifs/dir-1  enforcement  no    0B
    [usage-with-no-overhead] (0B)
    [usage-with-overhead]    (1.5K)
    [usage-inode-count]      (1)

The enforcement on default-user is copied to the specific-user when the specific-user allocates within the type, and the new inherited quota type is also a linked quota. Configuration changes for linked quotas must be made on the parent (default) quota that the linked quota is inheriting from. Changes to the parent quota are propagated to all children. If you want to override configuration from the parent quota, you must unlink the quota first.

Usage accounting and limits


Storage quotas support two usage types that you can create to manage storage space: accounting and enforcement limits. OneFS quotas can be configured by usage type to track or limit storage use. The accounting option, which monitors disk-storage use, is useful for auditing, planning, and billing. Enforcement limits set storage limits for users, groups, or directories.

Accounting: The accounting option tracks but does not limit disk-storage use. Using the accounting option for a quota, you can monitor inode count and physical and logical space resources. Physical space refers to all of the space that is used to store files and directories, including data and metadata in the domain. Logical space refers to the sum of all file sizes, excluding file metadata and sparse regions. User data storage is tracked using logical-space calculations, which do not include protection overhead. For example, by using the accounting option, you can do the following:

Track the amount of disk space that is used by various users or groups to bill each entity for only the disk space used.
Review and analyze reports that help you identify storage usage patterns, which you can use to define storage policies for the organization and educate users of the file system about using storage more efficiently.
Plan for capacity and other storage needs.

Enforcement limits: Enforcement limits include all of the functionality of the accounting option, plus the ability to limit disk storage and send notifications. Using enforcement limits, you can logically partition a cluster to control or restrict how much storage a user, group, or directory can use. For example, you can set hard- or soft-capacity limits to ensure that adequate space is always available for key projects and critical applications and to ensure that users of the cluster do not exceed their allotted storage capacity. Optionally, you can deliver real-time email quota notifications to users, group managers, or administrators when they are approaching or have exceeded a quota limit.

If a quota type uses the accounting-only option, enforcement limits cannot be used for that quota.

The actions of an administrator who is logged in as root may push a domain over a quota threshold. For example, changing the protection level or taking a snapshot has the potential to exceed quota parameters. System actions such as repairs also may push a quota domain over the limit. There are three types of administrator-defined enforcement thresholds.


Hard: Limits disk usage to a size that cannot be exceeded. If an operation, such as a file write, causes a quota target to exceed a hard quota, the following events occur: the operation fails, an alert is logged to the cluster, and a notification is issued to specified recipients. Writes resume when the usage falls below the threshold.

Soft: Allows a limit with a grace period that can be exceeded until the grace period expires. When a soft quota is exceeded, an alert is logged to the cluster and a notification is issued to specified recipients; however, data writes are permitted during the grace period. If the soft threshold is still exceeded when the grace period expires, data writes fail, and a hard-limit notification is issued to the recipients you have specified. Writes resume when the usage falls below the threshold.

Advisory: An informational limit that can be exceeded. When an advisory quota threshold is exceeded, an alert is logged to the cluster and a notification is issued to specified recipients. Advisory thresholds do not prevent data writes.
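To make the three threshold behaviors concrete, the following Python sketch evaluates a proposed write against hard, soft, and advisory thresholds. This is a conceptual illustration only, not OneFS code; the field names, return values, and the 7-day grace period are hypothetical.

import time

# Illustrative sketch: evaluate a proposed write against the three
# administrator-defined threshold types described above.
def check_write(usage, write_size, quota):
    new_usage = usage + write_size
    if quota.get("hard") is not None and new_usage > quota["hard"]:
        return "deny"                   # hard limit: the write fails
    if quota.get("soft") is not None and new_usage > quota["soft"]:
        if time.time() > quota["grace_expires"]:
            return "deny"               # grace period expired: writes fail
        return "allow_and_notify"       # within grace period: notify only
    if quota.get("advisory") is not None and new_usage > quota["advisory"]:
        return "allow_and_notify"       # advisory: informational only
    return "allow"

quota = {"advisory": 30, "soft": 40, "hard": 50,
         "grace_expires": time.time() + 7 * 86400}  # assumed 7-day grace
print(check_write(usage=25, write_size=10, quota=quota))  # allow_and_notify
print(check_write(usage=45, write_size=10, quota=quota))  # deny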

Disk-usage calculations

For each quota that you configure, you can specify whether data-protection overhead is included in future disk-usage calculations. Overhead settings should be configured carefully, because they can significantly affect the amount of disk space that is available to users. Typically, most quota configurations do not need to include overhead calculations.

If you include data-protection overhead in usage calculations for a quota, future disk-usage calculations for the quota include the total amount of space that is required to store files and directories, in addition to any space that is required to accommodate your data-protection settings, such as parity or mirroring. For example, consider a user who is restricted by a 40 GB quota that includes data-protection overhead in its disk-usage calculations. If your cluster is configured with a 2x data-protection level (mirrored) and the user writes a 10 GB file to the cluster, that file actually consumes 20 GB of space: 10 GB for the file and 10 GB for the data-protection overhead. In this example, the user has reached 50 percent of the 40 GB quota by writing a 10 GB file to the cluster.

You can configure quotas to include the space that is consumed by snapshots. A single path can have two quotas applied to it: one without snapshot usage (the default) and one with snapshot usage. If snapshots are included in the quota, more files are included in the calculation than are in the current directory.
For each quota that you configure, you can specify whether data-protection overhead is included in future disk-usage calculations. Overhead settings should be configured carefully, because they can significantly affect the amount of disk space that is available to users. Typically, most quota configurations do not need to include overhead calculations. If you include data-protection overhead in usage calculations for a quota, future disk-usage calculations for the quota include the total amount of space that is required to store files and directories, in addition to any space that is required to accommodate your dataprotection settings, such as parity or mirroring. For example, consider a user who is restricted by a 40 GB quota that includes data-protection overhead in its disk-usage calculations. If your cluster is configured with a 2x data-protection level (mirrored) and the user writes a 10 GB file to the cluster, that file actually consumes 20 GB of space: 10 GB for the file and 10 GB for the data-protection overhead. In this example, the user has reached 50 percent of the 40 GB quota by writing a 10 GB file to the cluster. You can configure quotas to include the space that is consumed by snapshots. A single path can have two quotas applied to it: one without snapshot usage (default) and one with snapshot usage. If snapshots are included in the quota, more files are included in

The actual disk usage is the sum of the current directory and any snapshots of that directory. You can see which snapshots are included in the calculation by examining the .snapshot directory for the quota path. Older snapshots are not added retroactively to usage when you create a new quota; only those snapshots created after the QuotaScan job finishes are included in the calculation.

If you do not include data-protection overhead in usage calculations for a quota, future disk-usage calculations for the quota include only the space that is required to store files and directories. Space that is required for the cluster's data-protection setting is not included. Consider the same example user, who is now restricted by a 40 GB quota that does not include data-protection overhead in its disk-usage calculations. If your cluster is configured with a 2x data-protection level and the user writes a 10 GB file to the cluster, that file consumes only 10 GB of space: 10 GB for the file and no space for the data-protection overhead. In this example, the user has reached 25 percent of the 40 GB quota by writing a 10 GB file to the cluster. This method of disk-usage calculation is recommended for most quota configurations.

Clones and cloned files are accounted for by quotas as though they consume both shared and unshared data: a clone and a copy of the same file do not consume different amounts of data. If the quota includes data-protection overhead, however, the data-protection overhead for shared data is not included in the usage calculation.
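The arithmetic in the two 40 GB examples above can be captured in a few lines of Python. This is an illustrative sketch only, not OneFS code; it assumes a simple N-times mirroring factor rather than a full parity calculation.

# Illustrative sketch: quota usage for a write, with and without
# data-protection overhead, assuming simple N-times mirroring.
def quota_usage_gb(file_size_gb, protection_factor, include_overhead):
    return file_size_gb * protection_factor if include_overhead else file_size_gb

quota_gb = 40  # the 40 GB quota from the examples above, 2x mirroring
with_overhead = quota_usage_gb(10, 2, include_overhead=True)
without_overhead = quota_usage_gb(10, 2, include_overhead=False)
print(with_overhead, f"{with_overhead / quota_gb:.0%}")        # 20 50%
print(without_overhead, f"{without_overhead / quota_gb:.0%}")  # 10 25%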

Quota notifications
Storage quota notifications, which are generated as a part of enforcement quotas, provide users with information about threshold violations when a violation condition occurs and while the violation condition persists. Each notification rule defines the condition that is to be enforced and the action that is to be executed when the condition is true. An enforcement quota can define multiple notification rules.

Quota notifications are generated on quota domains with enforcement quota thresholds that you define. When thresholds are exceeded, automatic email notifications can be sent to specified users, or you can monitor notifications as system alerts or receive emails for these events. Notifications can be configured globally, to apply to all quota domains, or for specific quota domains.

Enforcement quotas support the following notification settings. A given quota can use only one of these settings.

Turn Off Notifications for this Quota: Disables all notifications for the quota.
Use Default Notification Rules: Uses the global default notification for the specified type of quota.
Use Custom Notification Rules: Enables the creation of advanced, custom notifications that apply to the specific quota. Custom notifications can be configured for any or all of the threshold types (hard, soft, or advisory) for the specified quota.


Quota notification rules can be written to trigger an action according to event thresholds (a notification condition). A rule can specify a schedule, such as "every day at 1:00 AM," for executing an action, or it can specify immediate notification of certain state transitions. When an event occurs, a notification trigger executes one or more specified actions, such as sending an email to a user or administrator or pushing a cluster alert to the interface. Examples of notification conditions include the following:

"Notify when a threshold is exceeded; at most, once every 5 minutes"
"Notify when allocation is denied; at most, once an hour"
"Notify while over threshold, daily at 2 AM"
"Notify while grace period expired weekly, on Sundays at 2 AM"

Notifications are triggered for events grouped by the following categories:

Instant notifications: Includes the write-denied notification, triggered when a hard threshold denies a write, and the threshold-exceeded notification, triggered at the moment a hard, soft, or advisory threshold is exceeded. These are one-time notifications because they represent a discrete event in time.

Ongoing notifications: Generated on a scheduled basis to indicate a persisting condition, such as a hard, soft, or advisory threshold being over a limit or a soft threshold's grace period being expired for a prolonged period.

Quota notification rules


You can write quota notification rules that are triggered by event thresholds. The following examples demonstrate the types of criteria that can be used to configure notification rules:

"Notify when a threshold is exceeded; at most, once every 5 minutes"
"Notify when allocation is denied; at most, once an hour"
"Notify while over threshold, daily at 2 AM"
"Notify while grace period expired weekly, on Sundays at 2 AM"

When an event occurs, a notification is triggered according to your notification rule. For example, you can create a notification rule that sends an email to a user or administrator when a disk-space allocation threshold is exceeded by a group.
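A rule such as "notify when a threshold is exceeded; at most, once every 5 minutes" is essentially a condition plus a rate limit, as in this minimal Python sketch. This is a conceptual illustration only, not OneFS code; the rule fields are hypothetical.

import time

# Illustrative sketch: a notification rule that fires when its condition
# is true, but at most once per rate-limit interval.
class NotificationRule:
    def __init__(self, condition, action, min_interval_s):
        self.condition = condition
        self.action = action
        self.min_interval_s = min_interval_s
        self.last_fired = None

    def evaluate(self, usage, threshold):
        now = time.time()
        limited = (self.last_fired is not None
                   and now - self.last_fired < self.min_interval_s)
        if self.condition(usage, threshold) and not limited:
            self.last_fired = now
            self.action(usage, threshold)

rule = NotificationRule(
    condition=lambda usage, threshold: usage > threshold,
    action=lambda usage, threshold: print(f"over quota: {usage} > {threshold}"),
    min_interval_s=300,   # "at most, once every 5 minutes"
)
rule.evaluate(45, 40)  # fires
rule.evaluate(46, 40)  # suppressed by the rate limit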

Quota reports
The OneFS SmartQuotas module provides reporting options that enable administrators to manage cluster resources and analyze usage statistics more effectively.

Storage quota reports provide a summarized view of the past or present state of the quota domains. After raw reporting data is collected by OneFS, you can produce data summaries by using a set of filtering parameters and sort types. Storage-quota reports include information about violators, grouped by threshold types. You can generate reports from a historical data sample or from current data. In either case, the reports are views of usage data at a given time. OneFS does not provide reports on data that is aggregated over time, such as trending reports, but you can use raw data to analyze trends. There is no configuration limit on the number of reports other than the space that is needed to store them.

OneFS provides three methods of data collection and reporting:

Scheduled reports are generated and saved on a regular interval.
Ad hoc reports are generated and saved at the request of the user.
Live reports are generated for immediate and temporary viewing.

Scheduled reports are placed by default in the /ifs/.isilon/smartquotas/reports directory, but the location is configurable to any directory under /ifs. Each generated report includes the quota domain definition, state, usage, and global configuration settings. By default, ten reports are kept at a time, and older reports are purged. Ad hoc reports can be created on demand to provide a current view of the storage quotas system. These live reports can be saved manually. Ad hoc reports are saved to a location that is separate from scheduled reports to avoid skewing the timed-report sets.

Creating quotas
You can create two types of storage quotas to monitor data: accounting quotas and enforcement quotas. Storage quota limits and restrictions can apply to specific users, groups, or directories. The type of quota that you create depends on your goal.

Accounting quotas monitor, but do not limit, disk usage.
Enforcement quotas monitor and limit disk usage. You can create enforcement quotas that use any combination of hard limits, soft limits, and advisory limits. Enforcement quotas are not recommended for snapshot-tracking quota domains.

After you create a new quota, it begins to report data almost immediately, but the data is not valid until the QuotaScan job completes. Before using quota data for analysis or other purposes, verify that the QuotaScan job has finished.

Create an accounting quota


You can create an accounting quota to monitor, but not limit, disk usage. Optionally, you can include snapshot data, data-protection overhead, or both in the accounting quota.

1. Click File System Management > SmartQuotas > Quotas & Usage.
2. On the Storage Quotas & Usage page, click Create a storage quota.
3. From the Quota Type list, select the target for this quota: a directory, user, or group.
4. Depending on the target that you selected, select the entity that you want to apply the quota to. For example, if you selected User from the Quota Type list, you can target either all users or a specific user.
5. In the Directory path field, type the path and directory for the quota, or click Browse, and then select a directory.
6. Optional: In the Usage Accounting area, select the options that you want:
To include snapshot data in the accounting quota, select the Include Snapshot Data check box.
To include data-protection overhead in the accounting quota, select the Include Data-Protection Overhead check box.
7. In the Usage Limits area, click No Usage Limit (Accounting Only).
8. Click Create Quota.

Results
The quota appears in the Quotas & Usage list.

What to do next
After you create a new quota, it begins to report data almost immediately, but the data is not valid until the QuotaScan job completes. Before using quota data for analysis or other purposes, verify that the QuotaScan job has finished.

Create an enforcement quota


An enforcement quota monitors and limits disk usage. You can create enforcement quotas that set hard, soft, and advisory limits.

1. Click File System Management > SmartQuotas > Quotas & Usage.
2. On the Storage Quotas & Usage page, click Create a storage quota.
3. From the Quota Type list, select the target for this quota: a directory, user, or group.
4. Depending on the target that you selected, select the entity that you want to apply the quota to. For example, if you selected User from the Quota Type list, you can target all users or a specific user.
5. In the Directory path field, type the path and directory for the quota, or click Browse, and then select a directory.
6. Optional: In the Usage Accounting area, select the Include Snapshot Data check box, the Include Data-Protection Overhead check box, or both to include them in the quota.
7. In the Usage Limits area, click Specify Usage Limits. The usage limit options appear.
8. Select the check box next to the option for each type of limit that you want to enforce.
9. Type numerals in the fields and select from the lists the values that you want to use for the quota.
10. In the Limit Notifications area, click the notification option that you want to apply to the quota.
11. To generate an event notification, select the Create cluster event check box.
12. If you selected the option to use custom notification rules, click the link to expand the custom notification type that applies to the usage-limit selections.
13. Click Create Quota.

Results
The quota appears in the Quotas & Usage list.

What to do next
After you create a new quota, it begins to report data almost immediately, but the data is not valid until the QuotaScan job completes. Before using quota data for analysis or other purposes, verify that the QuotaScan job has finished.

Managing quotas
The configured values of a storage quota can be modified, and you can enable or disable a quota. You can modify the default storage quotas, and you can create quota limits and restrictions that apply to specific users, groups, or directories.

Quota management in OneFS is simplified by the quota search feature, which enables you to locate a quota or quotas by using filters. You can also clone quotas to speed quota creation, and you can unlink quotas that are associated with a parent quota. Optionally, custom notifications can be configured for quotas. You can also temporarily disable a quota and then enable it when needed.

Quotas can be managed through either the web administration interface or the command-line interface. Moving quota directories across quota domains is not supported.

Search for quotas


You can search for a quota by using a variety of search criteria. By default, all storage quotas and display options are listed on this page before you apply report or search filters. If the Quotas & Storage section is collapsed, click Define quota display.

1. Click File System Management > SmartQuotas > Quotas & Usage.
2. In the Quotas & Usage area, for Report Filters, select Search for specific quotas within this report.
3. In the Quota Type list, select the quota type that you want to find.
4. If you selected User Quota or Group Quota for the quota type, type a full or partial user or group name in the User or Group field. You can use the wildcard character (*) in the User or Group field.
To search for only default users, select the Only show default users check box.
To search for only default groups, select the Only show default groups check box.
5. In the Directory Path field, type a full or partial path. You can use the wildcard character (*) in the Directory Path field.
To search subdirectories, select the Include subdirectories check box.
To search for only quotas that are in violation, select the Only show quotas for which usage limits are currently in violation check box.
6. Optional: Click Update Display. Quotas that match the search criteria appear in the sections where quotas are listed.

Results
An accounting or enforcement quota with a threshold value of zero is indicated by a dash (-). You can click the column headings to sort the result set.

To clear the result set and display all storage quotas, in the Quotas & Usage area, for Report Filters, select Show all quotas and usage for this report, and then click Update Display.

Manage quotas
Quotas help you monitor and analyze the current or historical use of disk storage. You can search for quotas, and you can view, modify, delete, and unlink a quota.

An initial QuotaScan job must run for default or scheduled quotas; otherwise, the displayed data may be incomplete. Before you modify a quota, consider how the changes will affect the file system and end users.


The options to edit or delete a quota appear only when the quota is not linked to a default quota. The option to unlink a quota is available only when the quota is linked to a default quota.

1. Click File System Management > SmartQuotas > Quotas & Usage.
2. From the Quota Report options, select the type of quota report that you want to view or manage.
To monitor and analyze current disk storage use, click Show current quotas and usage (Live Report).
To monitor and analyze historical disk storage use, click Show archived quota report, and then select from the list of archived scheduled and manually generated quota reports.
3. For Report Filters, select the filters to be used for this quota report.
To view all information in the quota report, click Show all quotas and usage for this report.
To filter the quota report, click Search for specific quotas within this report, and then select the filters that you want to apply.
4. Click Update Display. The quota report appears below.
5. Optional: Select a quota to view its settings or to perform the following management actions:
To review or edit this quota, click View details.
To delete this quota, click Delete.
To unlink a linked quota, click Unlink. Configuration changes for linked quotas must be made on the parent (default) quota that the linked quota is inheriting from. Changes to the parent quota are propagated to all children. If you want to override configuration from the parent quota, you must first unlink the quota.

Export a quota configuration file


You can export quota settings as a configuration file, which can then be imported for reuse on another OneFS cluster. You can also store the exported quota configurations in a location outside of the cluster. You can pipe the XML output to a file or directory; the XML file can then be imported to another cluster.

1. Establish an SSH connection to any node in the cluster.
2. At the command prompt, run the following command:

isi_classic quota list --export

The quota configuration file displays as raw XML.

Import a quota configuration file


You can import quota settings in the form of a configuration file that has been exported from another OneFS cluster.

1. Establish an SSH connection to any node in the cluster.

2. Navigate to the location of the exported quota configuration file.
3. At the command prompt, run the following command, where <filename> is the name of an exported configuration file:

isi_classic quota import --from-file=<filename>

The system parses the file and imports the quota settings from the configuration file. Quota settings that you configured before importing the quota configuration file are retained, and the imported quota settings are effective immediately.

Managing quota notifications


Quota notifications can be enabled or disabled, modified, and deleted. By default, a global quota notification is already configured and applied to all quotas. You can continue to use the global quota notification settings, modify the global notification settings, or disable or set a custom notification for a quota.
Enforcement quotas support four types of notifications and reminders:
- Threshold exceeded
- Over-quota reminder
- Grace period expired
- Write access denied
If a directory service is used to authenticate users, you can configure notification mappings that control how email addresses are resolved when the cluster sends a quota notification. If necessary, you can remap the domain that is used for quota email notifications, and you can remap Active Directory domains, local UNIX domains, or both.

Configure default quota notification settings


You can configure default global quota notification settings that apply to all quotas of a specified threshold type. Custom notification settings that you configure for a quota take precedence over the default global notification settings.
1. Click File System Management > SmartQuotas > Settings.
2. Optional: On the Quota Settings page, for Scheduled Reporting, select On.
3. Click Change Schedule, and then select a report frequency from the list. Reporting schedule settings appear for the frequency that you selected.
4. Select the reporting schedule options that you want, and then click Select.
5. In the Scheduled Report Archiving area, you can configure the following size and directory options:
- To set the number of scheduled reports to archive, type the number of reports in the Limit archive size field.
- To specify an archive directory that is different from the default, type the path in the Archive Directory field, or click Browse to select the path.
6. In the Manual Report Archiving area, you can configure the following size and directory options:
- To set the number of manually generated reports to archive, type the number of reports in the Limit archive size field.
- To specify an archive directory that is different from the default, type the path in the Archive Directory field, or click Browse to select the path.

7. In the Email Mapping Rules area, choose each mapping rule that you want to use by selecting the check box in the Provider Type column.
8. In the Notification Rules area, define default notification rules for each rule type.
- Click Default Notifications Settings to expand the list of limit notification rule types.
- Click Advisory Limit Notification Rules to display the default settings options for this type of notification. Click Event: Advisory Limit Value Exceeded and Event: While Advisory Limit Remains Exceeded to set the options that you want.
- Click Soft Limit Notification Rules to display the default settings options for this type of notification. Click Event: Soft Limit Value Exceeded, Event: While Soft Limit Remains Exceeded, Event: Soft Limit Grace Period Expired, and Event: Soft Limit Write Access Denied to set the options that you want.
- Click Hard Limit Notification Rules to display the options for this type of notification. Click Event: Hard Limit Write Access Denied and Event: While Hard Limit Remains Exceeded to set the options that you want.
9. Click Save.
What to do next
After you create a new quota, it begins to report data almost immediately, but the data is not valid until the QuotaScan job completes. Before using quota data for analysis or other purposes, verify that the QuotaScan job has finished.

Configure custom quota notification rules


You can configure custom quota notification rules that apply only to a specified quota. Quota-specific custom notification rules must be configured for that quota; if notification rules are not configured for a quota, the default event notification configuration is used. For more information about configuring default notification rules, see Create an event notification rule.
Before you begin
An enforcement quota must exist or be in the process of being created. To configure notifications for an existing enforcement quota, follow the procedure to modify a quota, and then use these steps from the Quota Details pane of the specific quota.
1. In the Limit Notifications area, click Use Custom Notification Rules. The links display for the rules options that are available for the type of notification that you have selected for this quota.
2. Click View details, and then click Edit limit notifications.
3. Click the link for the limit notification type that you want to configure for this quota. From the list, select the target for this quota: a directory, user, or group. The Limit Notification Rules options for the selected type appear.
4. Select or type the values to configure the custom notification rule for this quota.
5. Click Create quota when you have finished configuring the settings for this notification rule.
Results
The quota appears in the Quotas & Usage list.
What to do next
After you create a new quota, it begins to report data almost immediately, but the data is not valid until the QuotaScan job completes. Before using quota data for analysis or other purposes, verify that the QuotaScan job has finished.

Map an email notification rule for a quota


Email notification mapping rules control how email addresses are resolved when the cluster sends a quota notification. If necessary, you can remap the domain used for SmartQuotas email notifications. You can remap Active Directory Windows domains, local UNIX domains, or NIS domains.
1. Click File System Management > SmartQuotas > Settings.
2. Optional: In the Email Mapping area, click Create an email mapping rule. The settings for email mapping rules appear.
3. From the Provider Type list, select the authentication provider type for this mapping rule.
4. From the Current Domain list, select the domain that you want to use for the mapping rule.
5. In the Map-to-Domain field, type the name of the domain that you want to map email notifications to. Repeat this step to map more than one domain.
6. Click Save Rule. A message indicates that the email mapping rule has been created and added to the Email Mapping Rules list.

Configure a custom email quota notification template


If email notifications are enabled, you can configure custom templates for email notifications. If the default email notification templates do not meet your needs, you can configure your own custom email notification templates by using a combination of text and SmartQuotas variables.
1. Open a text editor and create a .txt file that includes any combination of text and OneFS email notification variables.
2. Save the template file as ASCII text or in ISO-8859-1 format.
3. Upload the file to an appropriate directory on the OneFS cluster, for example, /ifs/templates.
Example 1 Example of a custom quota email notification text file

The following example illustrates a custom email template to notify recipients about an exceeded quota.
Text-file contents with variables:
The disk quota on directory <ISI_QUOTA_PATH> owned by <ISI_QUOTA_OWNER> was exceeded. The <ISI_QUOTA_TYPE> quota limit is <ISI_QUOTA_THRESHOLD>, and <ISI_QUOTA_USAGE> is in use. Please free some disk space by deleting unnecessary files. For more information, contact Jane Anderson in IT.
Email contents with resolved variables:
The disk quota on directory /ifs/data/sales_tools/collateral owned by jsmith was exceeded. The hard quota limit is 10 GB, and 11 GB is in use. Please free some disk space by deleting unnecessary files. For more information, contact Jane Anderson in IT.

What to do next
To use the custom template, click Cluster Management > General Settings > Email Settings, and then select the custom template in the Event Notification Settings area.

Managing quota reports


You can create, configure, and schedule reports to monitor, track, and analyze storage use on a OneFS cluster. Quota reports are managed by configuring settings that control when reports are scheduled, how they are generated, where and how many are stored, and how they are viewed. The maximum number of scheduled reports that are available for viewing in the web administration interface can be configured for each report type. When the maximum number of reports is stored, the system deletes the oldest reports to make space for new reports as they are generated.

Create a quota report schedule


You can configure quota report settings to generate the quota report on a specified schedule. These settings determine whether and when scheduled reports are generated, and where and how the reports are stored. If you disable a scheduled report, you can still run unscheduled reports at any time.
1. Click File System Management > SmartQuotas > Settings.
2. Optional: On the Quota Settings page, for Scheduled Reporting, click On. The Report Frequency option appears.
3. Click Change schedule, and then select the report frequency that you want from the list. Reporting schedule settings specific to that frequency appear.
4. Select the reporting schedule options that you want.
5. Click Save.
Results
Reports are generated according to your criteria and can be viewed in the Generated Reports Archive.

Generate a quota report


In addition to scheduled quota reports, you can generate a report to capture usage statistics at a point in time.
Before you begin
Quotas must exist and the initial QuotaScan job must run before you can generate a quota report.
1. Click File System Management > SmartQuotas > Generated Reports Archive.
2. In the Generated Quota Reports Archive area, click Generate a quota report. The Generate a Quota Report area appears.
3. Click Generate Report.
Results
The new report appears in the Quota Reports list.

Locate a quota report


You can locate quota reports, which are stored as XML files, and then use your own tools and transforms to view them.
1. Establish an SSH connection to any node in the cluster.
2. Navigate to the directory where quota reports are stored. The following path is the default quota report location:
/ifs/.isilon/smartquotas/reports
If quota reports are not in the default directory, you can run the isi quota settings command to find the directory where they are stored.
3. At the command prompt, run one of the following commands:
- To view a list of all quota reports in the specified directory, run:
ls -a *.xml
- To view a specific quota report in the specified directory, run:
ls <filename>.xml
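Because quota reports are ordinary XML files, standard shell tools can also help identify them. For example, the following sketch lists the most recently modified report in the default location described above:
ls -t /ifs/.isilon/smartquotas/reports/*.xml | head -1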

Basic quota settings


When you create a storage quota, the following attributes must be defined, at a minimum. When you specify usage limits, more options are available for defining your quota.
Directory Path: The directory that the quota is on.
User Quota: Select to automatically create a quota for every current or future user that stores data in the specified directory.
Group Quota: Select to automatically create a quota for every current or future group that stores data in the specified directory.
Include Snapshot Data: Select to count all snapshot data in usage limits; this setting cannot be changed after the quota is created.
Include Data-Protection Overhead: Select to count protection overhead in usage limits.
No Usage Limit: Select to account for usage only.
Specify Usage Limits: Select to enforce advisory, soft, or absolute limits.

Advisory limit quota notification rules settings


You can configure custom quota notification rules for advisory limits for a quota. These settings are available when you select the option to use custom notification rules. For each option, the notation indicates whether the setting applies to the Exceeded and Remains exceeded conditions.
Send email: Specify the type of email to use. (Exceeded: Yes. Remains exceeded: Yes.)
Notify owner: Select to send an email notification to the owner of the entity. (Exceeded: Yes. Remains exceeded: Yes.)
Notify another: Select to send an email notification to another recipient and type the recipient's email address. (Exceeded: Yes. Remains exceeded: Yes.)
Message template: Select the template type to use in formatting email notifications: Default (leave the Message Template field blank to use the default) or Custom. (Exceeded: Yes. Remains exceeded: Yes.)
Create cluster event: Select to generate an event notification for the quota when exceeded. (Exceeded: Yes. Remains exceeded: Yes.)
Delay: Specify the length of time (hours, days, or weeks) to delay before generating a notification. (Exceeded: Yes. Remains exceeded: No.)
Frequency: Specify the notification and alert frequency: daily, weekly, monthly, or yearly; depending on the selection, specify intervals, day to send, time of day, and multiple emails per rule. (Exceeded: No. Remains exceeded: Yes.)


Soft limit quota notification rules settings


You can configure custom soft limit notification rules for a quota. These settings are available when you select the option to use custom notification rules. For each option, the notation indicates whether the setting applies to the Exceeded, Remains exceeded, Grace period expired, and Write access denied conditions.
Send email: Specify the recipient of the email notification. (Applies to all four conditions.)
Notify owner: Select to send an email notification to the owner of the entity. (Applies to all four conditions.)
Notify another: Select to send an email notification to another recipient and type the recipient's email address. (Applies to all four conditions.)
Message template: Select the template type to use in formatting email notifications: Default (leave the Message Template field blank to use the default) or Custom. (Applies to all four conditions.)
Create cluster event: Select to generate an event notification for the quota. (Applies to all four conditions.)
Delay: Specify the length of time (hours, days, or weeks) to delay before generating a notification. (Exceeded: Yes. Remains exceeded: No. Grace period expired: No. Write access denied: Yes.)
Frequency: Specify the notification and alert frequency: daily, weekly, monthly, or yearly; depending on the selection, specify intervals, day to send, time of day, and multiple emails per rule. (Exceeded: No. Remains exceeded: Yes. Grace period expired: Yes. Write access denied: No.)

Hard limit quota notification rules settings


You can configure custom quota notification rules for hard limits for a quota. These settings are available when you select the option to use custom notification rules. For each option, the notation indicates whether the setting applies to the Write access denied and Exceeded conditions.
Send email: Specify the recipient of the email notification. (Write access denied: Yes. Exceeded: Yes.)
Notify owner: Select to send an email notification to the owner of the entity. (Write access denied: Yes. Exceeded: Yes.)
Notify another: Select to send an email notification to another recipient and type the recipient's email address. (Write access denied: Yes. Exceeded: Yes.)
Message template: Select the template type to use in formatting email notifications: Default (leave the Message Template field blank to use the default) or Custom. (Write access denied: Yes. Exceeded: Yes.)
Create cluster event: Select to generate an event notification for the quota when exceeded. (Write access denied: Yes. Exceeded: Yes.)
Delay: Specify the length of time (hours, days, or weeks) to delay before generating a notification. (Write access denied: Yes. Exceeded: No.)
Frequency: Specify the notification and alert frequency: daily, weekly, monthly, or yearly; depending on the selection, specify intervals, day to send, time of day, and multiple emails per rule. (Write access denied: No. Exceeded: Yes.)

Limit notification settings


You have three notification options when you create an enforcement quota: turn off notifications, use default rules, or define custom rules. Enforcement quotas support the following notification settings for each threshold type. A quota can use only one of these settings.
Use Default Notification Rules: Uses the default notification rules that you configured for the specified threshold type.
Turn Off Notifications for this Quota: Disables all notifications for the quota.
Use Custom Notification Rules: Provides settings to create basic custom notifications that apply to only this quota.
Quota report settings


You can configure quota report settings that track disk usage. These settings determine whether and when scheduled reports are generated, and where and how reports are stored.
Scheduled reporting: Enables or disables the scheduled reporting feature.
- Off: Manually generated on-demand reports can be run at any time.
- On: Reports run automatically according to the schedule that you specify.
Report frequency: Specifies the interval for this report to run: daily, weekly, monthly, or yearly. You can further refine the report schedule by using the following options:
- Generate report every: Specify the numeric value for the selected report frequency; for example, every 2 months.
- Generate reports on: Select the day or multiple days to generate reports.
- Select report day by: Specify the date or the day of the week to generate the report.
- Generate one report per specified by: Set the time of day to generate this report.
- Generate multiple reports per specified day: Set the intervals and times of day to generate the report for that day.
Scheduled report archiving: Determines the maximum number of scheduled reports that are available for viewing on the SmartQuotas Reports page. When the maximum number of reports is stored, the system deletes the oldest reports to make space for new reports as they are generated.
- Limit archive size for scheduled reports to a specified number of reports: Type the integer to specify the maximum number of reports to keep.
- Archive Directory: Browse to the directory where you want to store quota reports for archiving.
Manual report archiving: Determines the maximum number of manually generated (on-demand) reports that are available for viewing on the SmartQuotas Reports page. When the maximum number of reports is stored, the system deletes the oldest reports to make space for new reports as they are generated.
- Limit archive size for live reports to a specified number of reports: Type the integer to specify the maximum number of reports to keep.
- Archive Directory: Browse to the directory where you want to store quota reports for archiving.

Custom email notification template variable descriptions


If the default OneFS email notification templates do not meet your needs, you can configure and upload your own custom email templates for SmartQuotas notifications. An email template can contain text and, optionally, variables that represent values. You can use any of the following SmartQuotas variables in your templates. Template files must be saved as .txt files.
ISI_QUOTA_PATH: Path of quota domain. Example: /ifs/data
ISI_QUOTA_THRESHOLD: Threshold value. Example: 20 GB
ISI_QUOTA_USAGE: Disk space in use. Example: 10.5 GB
ISI_QUOTA_OWNER: Name of quota domain owner. Example: jsmith
ISI_QUOTA_TYPE: Threshold type. Example: Advisory
ISI_QUOTA_GRACE: Grace period, in days. Example: 5 days
ISI_QUOTA_EXPIRATION: Expiration date of grace period. Example: Fri Feb 23 14:23:19 PST 2007
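As a further illustration, a soft-limit template could combine the grace-period variables; the wording here is arbitrary and uses only the variables listed above:
The soft quota on <ISI_QUOTA_PATH> was exceeded. The limit is <ISI_QUOTA_THRESHOLD>, and <ISI_QUOTA_USAGE> is in use. You have <ISI_QUOTA_GRACE> to reduce usage; write access will be denied after <ISI_QUOTA_EXPIRATION>.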


CHAPTER 12 Storage pools

Storage pools are a logical division of nodes and files. They give you the ability to aggregate and manage large numbers of files from a single management interface. OneFS uses storage pools to efficiently manage and protect the data on a cluster.

Node pools are sets of like nodes that are grouped into a single pool of storage. Node pool membership changes automatically through the addition or removal of nodes to or from the cluster. File pools are user-defined logical groupings of files that are stored in node pools according to file pool policies.
By default, the basic unlicensed technology is implemented in a cluster, and additional features are available when you license the SmartPools module. These licensed features include the ability to create multiple file pools and file pool policies that direct specific files and directories to a targeted node pool or tier, and spillover management, which enables you to define how write operations are handled when a node pool or tier is full. Virtual hot spare allocation, which reserves space for data re-protection if a drive fails, is available with both the licensed and unlicensed technology.
The following list compares licensed and unlicensed storage pool features.
Automatic pool provisioning: Unlicensed: Yes. Licensed: Yes.
Spillover: Unlicensed: Undirected. Licensed: Configurable.
Policy-based data movement: Unlicensed: No. Licensed: Yes.
Virtual hot spare: Unlicensed: Yes. Licensed: Yes.
This chapter covers the following topics:
- Storage pool overview
- Autoprovisioning
- Virtual hot spare and SmartPools
- Spillover and SmartPools
- Node pools
- SSD pools
- File pools with SmartPools
- Tiers
- File pool policies
- Pool monitoring
- Creating file pool policies with SmartPools
- Managing file pool policies
- SmartPools settings
- Default file pool protection settings
- Default file pool I/O optimization settings


Storage pool overview


OneFS provides storage pools to simplify the management and storage of data. File pools, node pools, and tiers are types of storage pools.
Node pools are physical nodes that are grouped by type to optimize reliability and data protection settings. Node pools are created automatically, or autoprovisioned, by OneFS when the system is installed and whenever nodes are added or removed. Tiers are collections of node pools that are grouped to optimize storage according to need, such as a mission-critical high-speed tier or node pools that are best suited to data archiving.
The licensed SmartPools technology gives you the ability to create file pools, which are logical collections of files. Membership in a file pool is determined by criteria-based rules, called file pool policies, that specify operations to determine which storage pools store the data. By default, the basic storage pool feature creates one file pool for the OneFS cluster. This file pool provides a single point of management and a single namespace for the cluster. The licensed SmartPools module enables you to create multiple file pools according to file attributes and to direct spillover to a target tier or node pool. Using the licensed module, you can create multiple storage pools and apply pool policies to manage file storage according to the criteria that you specify.
OneFS includes the following basic features:
- Node pools: Groups of equivalent nodes that are associated in a single pool of storage.
- Tiers: Groups of node pools, used to optimize data storage according to OneFS platform type.
OneFS adds the following features when the SmartPools module is licensed:
- Custom file pools: Storage pools that you define to filter files and directories into specific node pools according to your criteria. Using file attributes that you specify in a file pool policy, such as file size, type, access time, and location, custom file pools automate data movement and storage according to your unique storage needs. The licensed module also includes customizable template policies that are optimized for archiving, extra protection, performance, and VMware files.
- Storage pool spillover: Automated node-capacity overflow management. Spillover defines how to handle write operations when a storage pool is not writable. When spillover is enabled, data is redirected to a specified storage pool. If spillover is disabled, new data writes fail and an error message appears.
If the SmartPools module is not licensed, files are stored on any available node pools across the cluster.

Autoprovisioning
Autoprovisioning is the process of automatically assigning storage by node type to improve the performance and reliability of the file storage system. When you configure a cluster, OneFS automatically assigns nodes to node pools, or autoprovisions, in your cluster to increase data-protection and cluster reliability. Autoprovisioning reduces the time required for the manual management tasks associated with configuring storage pools and resource planning.


Nodes are not provisioned, meaning they are not associated with each other and not writable, until at least three nodes of an equivalence class are assigned to the pool. If you have added only two nodes of an equivalence class to your cluster, there is no communication between those nodes until one more node of that class is added.
If you remove nodes from a provisioned cluster so that fewer than three equivalence-class nodes remain, the pool is underprovisioned. In this situation, when two like nodes remain, they are still writable; if only one node remains, it is not writable, but it remains readable.
Node pool attributes and status are visible in the web administration interface. You can also view storage pool health information through the command-line interface.

Virtual hot spare and SmartPools


Virtual hot spare allocation allows you to reserve space that is used to protect data in the event of a drive failure. The virtual hot spare option reserves the free space needed to rebuild the data if a disk failure occurs. For example, if you specify two virtual drives and 15 percent, each node pool reserves virtual drive space that is equivalent to two drives or 15 percent of its total capacity for virtual hot spare, whichever is larger. You can reserve space in node pools across the cluster for this purpose, up to the equivalent of a maximum of four full drives.
Virtual hot spare allocation is defined using these options:
- A minimum number of virtual drives in the node pool (1-4).
- A minimum percentage of total disk space (0-20 percent).
- A combination of minimum virtual drives and total disk space. The larger of the two values determines the space allocation, not the sum of the numbers.
It is important to understand the following information when configuring VHS settings:
- If you configure both settings, the enforced minimum value satisfies both requirements.
- If you select the option to reduce the amount of available space, free-space calculations do not include the space reserved for the virtual hot spare.
- The reserved virtual hot spare free space is used for write operations unless you select the option to deny new data writes. If Reduce amount of available space is enabled while Deny new data writes is disabled, it is possible for the file system to report utilization as more than 100 percent.
Virtual hot spare reservations affect spillover. For example, if the virtual hot spare reservation is 10 percent of storage pool capacity, spillover occurs when the storage pool is 90 percent full.
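As a worked example of the larger-value rule, consider a hypothetical node pool of 36 drives of 1 TB each (36 TB total): a setting of two virtual drives reserves 2 TB, while a setting of 15 percent reserves 5.4 TB. Because 5.4 TB is the larger value, that is the amount reserved.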

Spillover and SmartPools


When SmartPools is licensed, a specific storage pool can be designated to receive spill data when a file pool target is not writable. If spillover is undesirable, it can be disabled so that the write operation fails instead. With spillover management, write operations can be directed to a specified node pool or tier in the cluster when there is not enough space to write a file according to the file pool policy rules.
Virtual hot spare reservations affect spillover. For example, if the virtual hot spare reservation is 10 percent of storage pool capacity, spillover occurs when the storage pool is 90 percent full.

Node pools
A node pool is a logical grouping of equivalent nodes across the cluster. OneFS nodes are grouped automatically to create a storage pool for ease of administration and application of file pool target policies. Each node in the OneFS clustered storage system is a peer, and any node can handle a data request. File pool policies can then be applied to files to target node pools that have different performance and capacity characteristics to meet different workflow requirements. Each node that is added to a cluster increases aggregate disk, cache, CPU, and network capacity. When additional nodes are added to the cluster, they are automatically added to node pools according to matching attributes, such as drive size, RAM, series, and SSD-node ratio.

Add or move node pools in a tier


You can group node pools into tiers and move node pools among tiers to use resources most efficiently or for other cluster management purposes.
1. Click File System Management > SmartPools > Summary. The SmartPools page appears and displays two groupings: the current capacity usage and a list of tiers and node pools.
2. In the Tiers & Node Pools area, select and drag a node pool to the tier name to add it to the tier. To add a node pool that is currently in another tier, expand that tier, and then drag the node pool to the target tier name.
3. Continue dragging and dropping node pools until the tier is complete. Each node pool that you added to the tier appears under the tier name when the tier is expanded.

Change the name or protection level of a node pool


You can change the name or protection level of a node pool.
1. Click File System Management > SmartPools > Summary. The SmartPools page appears and displays two groupings: the current capacity usage and a list of tiers and node pools.
2. In the Tiers & Node Pools section, in the row of the node pool that you want to modify, click Edit. A dialog box appears.
3. Type a new name for the node pool, select a protection level from the list, or do both. A node pool name can contain alphanumeric characters but cannot begin with a number.
4. Click Submit. The modified node pool appears in the list of tiers and node pools.


SSD pools
OneFS clusters can contain both HDDs and SSDs. When OneFS autoprovisions nodes, nodes with SSDs are grouped into equivalent node pools. Your SSD strategy defines how SSDs are used within the cluster.
Clusters that include both hard-disk drives (HDDs) and solid-state drives (SSDs) can be optimized by your SSD strategy options to increase performance across a wide range of workflows. The SSD strategy is applied on a per-file basis. When you select your options during the creation of a file pool policy, you can identify the directories and files in the OneFS cluster that require faster or slower performance, and OneFS automatically moves that data to the appropriate pool and drive type.
Global namespace acceleration (GNA) allows data stored on node pools without SSDs to use SSDs elsewhere in the cluster to store extra metadata mirrors, which accelerates metadata read operations. To avoid overloading node pools with SSDs, certain thresholds must be satisfied for GNA to be enabled: 20 percent or more of the nodes in the cluster must contain at least one SSD, and 1.5 percent or more of the total cluster storage must be SSD-based. For best results, ensure that at least 2.0 percent of the total cluster storage is SSD-based before enabling global namespace acceleration. If the ratio of accessible SSD-containing nodes in the cluster drops below the 20 percent requirement, GNA is not active despite being enabled; GNA is reactivated when the ratio is corrected.
The following SSD strategy options are listed in order of slowest to fastest:
- Avoid SSDs: Writes all associated file data and metadata to HDDs only. Use this option to free SSD space only after consulting with Isilon Technical Support personnel; using this strategy may negatively affect performance.
- Metadata read acceleration: The default setting. Writes both file data and metadata to HDDs, and writes an extra mirror of the file metadata to SSDs, if available. The SSD mirror is in addition to the number of mirrors required to satisfy the protection level. Enabling GNA makes read acceleration available to files in node pools that do not contain SSDs; GNA applies only to metadata and the extra mirrors.
- Metadata read/write acceleration: Writes file data to HDDs and metadata to SSDs, when available. This strategy accelerates metadata writes in addition to reads but requires about four to five times more SSD storage than the Metadata read acceleration setting. Enabling GNA does not affect read/write acceleration.
- Data on SSDs: Uses SSD node pools for both data and metadata. Regardless of whether global namespace acceleration is enabled, any SSD blocks reside on the file target pool if there is room. This SSD strategy does not result in the creation of additional mirrors beyond the normal protection level but requires significantly more SSD storage than the other SSD strategy options.
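As a worked example of the GNA thresholds described above (the cluster figures are hypothetical): in a 10-node cluster in which 2 nodes contain at least one SSD, 20 percent of the nodes contain SSDs, which satisfies the node requirement. If that cluster holds 400 TB of total storage, at least 6 TB (1.5 percent) must be SSD-based for GNA to be enabled, and at least 8 TB (2.0 percent) is recommended.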

File pools with SmartPools


File pools provide policy-based control of the storage characteristics of your files. By default, all files in the cluster belong to a single file pool, which is defined by the default file pool policy. Additional file pools can be created when you license the SmartPools module.
Using the licensed SmartPools module, you can create multiple file pools, which are user-defined file aggregates that are governed by automated file pool policies. By applying file pool policies to file pools, data can be moved automatically from one type of storage to another within a single cluster, from a single point of management, to meet performance, space, cost, or other requirements while retaining protection-level settings.
File pool policies are based on the file attributes that you specify. For example, a file pool policy can be created for a specific file extension that requires high availability, so you can target a pool that provides the fastest reads or read/writes. Another file pool policy can be created to evaluate the last-accessed date, allowing you to target node pools best suited to archiving for historical or regulatory purposes.

Tiers
A tier is a user-defined collection of node pools that can be used as a target for a file pool policy. You can create tiers to assign your data to any of the node pools in the tier to meet your data-classification needs.
For example, a collection of node pools can be assigned to a tier that you create for frequently accessed or mission-critical data that requires high availability and fast access. In a three-tier system, this classification may be Tier 1. You can classify data that is used less frequently or that is accessed by fewer users as Tier 2 data. Tier 3 usually comprises data that is seldom used and can be archived for historical or regulatory purposes.
A node pool can belong to only one tier.

Create a tier
You can group node pools into a tier that can be used as a target for a file pool policy.
1. Click File System Management > SmartPools > Summary. The SmartPools page appears and displays two groupings: the current capacity usage and a list of tiers and node pools.
2. In the Tiers & Node Pools section, click Create a Tier.
3. In the dialog box that appears, type a name for the tier, and then click Submit. The tier appears in the list of tiers and node pools.
4. Select and drag a node pool to the tier name to add it to the tier. Continue dragging and dropping node pools until the tiered group is complete. Each node pool that you added to the tier appears under the tier name when the tier is expanded.

Rename a tier
You can modify the name of a tier that contains node pools. A tier name can contain alphanumeric characters but cannot begin with a number.
1. Click File System Management > SmartPools > Summary. The SmartPools page appears and displays two groupings: the current capacity usage and a list of tiers and node pools.
2. In the Tiers & Node Pools area, in the row of the tier that you want to rename, click Edit.
3. In the dialog box that appears, type a name for the tier, and then click Submit. The renamed tier appears in the list of tiers and node pools.


Delete a tier
You can delete a tier, but the option is not available until you move all node pools out of that tier.
1. Click File System Management > SmartPools > Summary. The SmartPools page appears and displays two groupings: the current capacity usage and a list of tiers and node pools.
2. In the Tiers & Node Pools area, in the row of the tier that you want to delete, click Delete.
3. In the confirmation dialog box that appears, click Yes to confirm the deletion.
Results
The tier is removed from the list of tiers and node pools.

File pool policies


File pool policies define file movement among node pools, optimization for file-access patterns, and file protection settings.
The licensed SmartPools module augments the basic OneFS storage pool features by giving you the ability to create multiple file pools through file pool policies, so you can automatically store files on a specified target according to your criteria. You configure file pool policies using file attributes that are both system-defined and user-defined. You can then set protection levels and access attributes for the file types that you specify. Multiple criteria can be defined in one file pool policy, and you can also use time-based filters for the date that a file was last accessed, modified, or created. You can also define a relative elapsed time instead of a date, such as three days before the current date.
The unlicensed OneFS storage pool technology allows you to configure the default file pool policy for managing the node pools that are created when the cluster is autoprovisioned. By default, the basic unlicensed storage pool technology is implemented in a cluster: the default file pool contains all files and is targeted to any node pool. This single file pool is defined by the default file pool policy, which is configured in the default file pool policy settings. You cannot reorder or remove the default file pool policy. The settings in the default file pool policy apply to all files that are not covered by another file pool policy. For example, data that is not covered by a file pool policy can be moved to a tier that you identify as a default for this purpose.
All file pool policy operations are executed when the SmartPools job runs. When new files are created, OneFS temporarily chooses a node pool for them, using a mechanism based on the file pool policies that were in effect when the last SmartPools job ran. The files are then moved according to a matching file pool policy when the next SmartPools job runs.
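For example, a hypothetical file pool policy might combine a path criterion and a time-based filter: match files under /ifs/data/projects whose last-accessed date is more than 90 days before the current date, and move them to a node pool or tier designated for archiving, leaving their protection-level settings unchanged.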

Pool monitoring
Pool health, performance, and status can be monitored through the web administration interface or the command-line interface. Information is displayed for individual nodes, including node-specific network traffic, internal and external network interfaces, and drive status. You can configure real-time and historical performance to be graphed in the web administration interface.
You can assess pool health and performance by viewing the following information:
- Subpool status
- Node status
- New events
- Cluster size
- Cluster throughput
- CPU usage

Monitor node pools and tiers


You can view the status and details of node pools and tiers in the SmartPools summary. Pool names that are longer than 40 characters are truncated by OneFS; to view the full pool name, rest the mouse pointer over the shortened name to display a tooltip with the long name.
1. Click File System Management > SmartPools > Summary. The SmartPools page appears and displays two groupings: the current capacity usage and a list of tiers and node pools.
2. In the Current Capacity Usage area, move the pointer over the usage bar-graph measurements to view details.
3. In the Tiers & Node Pools area, expand any tiers to view all node pool information.

View unhealthy subpools


When subpools are unhealthy, OneFS exposes them in a list.
1. Click File System Management > SmartPools > Summary. The SmartPools page appears and displays three groupings: the current capacity usage, a list of tiers and node pools, and any unhealthy subpools.
2. In the Unhealthy Subpools area, review the details of any problematic subpools.

Creating file pool policies with SmartPools


File pool policies are used to filter and store files by the attributes and values that you specify, automating file management and aggregation. The SmartPools module must be licensed to create file pool policies.
File pool policies have two parts: the criteria that determine which files are members, and the operations that determine the settings that are applied to member files. Membership criteria are defined by tests against file attributes and can be combined with Boolean AND and OR operators. Operations are defined by settings for storage target, protection level, and I/O optimization. For example, you can create a file pool policy that identifies all JPG files that are larger than 2 MB and moves them to nearline storage.
The following file attributes can be used to define a file pool policy:
File name: Name of the file.
Path: Where the file is stored.
File type: File-system object type.
File size: Size of the file.
Modified time: When the file was last modified.
Create time: When the file was created.
Metadata change time: When the file metadata was last modified.
Access time: When the file was last accessed.
User attributes: Custom attributes.

OneFS supports UNIX shell-style (glob) pattern matching for file name attributes and paths, using these characters: *, ?, and [a-z]; examples appear at the end of this section.
As many as four file pool policies can apply to a file (one per action) if the stop-processing option is not selected. However, if the stop-processing option is selected when you create a file pool policy, only one file pool policy can be applied, because OneFS applies only the first matching policy rule that it encounters. If a file matches multiple policies, subsequent policies in the list are not evaluated. For example, if one policy rule moves all JPG files to a nearline node pool, another policy rule moves all files smaller than 2 MB to a performance tier, and the JPG rule is first in the list, then all JPG files smaller than 2 MB are moved to nearline storage instead of to the performance tier.
OneFS provides customizable template policies that archive older files, increase the protection level for specified files, send files that are saved to a particular path to a higher-performance disk pool, and change the access setting for VMware files. You can also copy any file pool policy except the default file pool policy, and then modify the settings that you need to change.
After a file pool policy is created, OneFS stores it and lists it with the other file pool policies. When the SmartPools job runs, it traverses the stored file pool policy list from top to bottom for each file, and policies are applied in the order of that list. The file pool policy list can be reordered at any time, but the default file pool policy is always last in the list of enabled file pool policies.
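Returning to the glob characters mentioned above, a few hypothetical patterns illustrate the matching behavior: *.jpg matches any file name that ends in .jpg; backup?.log matches backup1.log and backupA.log but not backup10.log, because ? matches exactly one character; and [a-c]*.tmp matches any .tmp file whose name begins with a, b, or c.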

Managing file pool policies


File pool policies can be modified, reordered, copied, or removed. The default file pool policy can be modified, and template policies can be applied.
You can perform the following file pool policy management tasks:
- Modify file pool policies
- Modify the default file pool policy
- Copy file pool policies
- Use a file pool policy template
- Reorder file pool policies
- Delete file pool policies

Configure default file pool policy settings


You can configure file pool policies to filter and store files according to criteria that you specify. File pool policy settings include protection levels and I/O optimization.
1. Click File System Management > SmartPools > Settings. The SmartPools Settings page appears.
2. On the SmartPools Settings page, choose the default settings that you want, and then click Submit.
Results
Changes to the default file pool policy are applied when the next scheduled SmartPools job runs.

Configure default file pool protection settings


You can configure default file pool protection settings. The default settings are applied to any file that is not covered by another file pool policy.
Note: If existing file pool policies direct data to a specific node pool or tier, do not add or modify a file pool policy that targets anywhere for the Data storage target option. Target a specific file pool instead.
1. Click File System Management > SmartPools > Settings. The SmartPools page appears.
2. In the SmartPools Settings section, choose the settings that you want to apply as the global default for Data storage target, Snapshot storage target, or Protection level.
3. Click Submit. The settings that you selected are applied to any file that is not covered by another file pool policy.

Configure default I/O optimization settings


You can specify default I/O optimization settings.
1. Click File System Management > SmartPools > Settings. The SmartPools page appears.
2. In the Default File Pool I/O Optimization Settings area, choose the settings that you want to apply as the global default for SmartCache and Data access pattern.
3. Click Submit. The settings that you selected are applied to any file that is not covered by another file pool policy.

Modify a file pool policy


You can modify the name and description, filter criteria, and the protection and I/O optimization settings that are applied to files that are filtered by a file pool policy.
Note: If existing file pool policies direct data to a specific node pool or tier, do not add or modify a file pool policy that targets anywhere for the Data storage target option. Target a specific file pool instead.
1. Click File System Management > SmartPools > File Pool Policies. The SmartPools page appears and displays three groupings: a list of file pool policies, a list of template policies, and the latest scan job results.
2. In the File Pool Policies area, in the Actions column of the file pool policy that you want to modify, click View details. The settings options appear.
3. Make your changes in the appropriate areas, and then click Submit.

Results
Changes to the file pool policy are applied when the next scheduled SmartPools job runs. To run the job immediately, click Start SmartPools Job.

Copy a file pool policy


You can copy any file pool policy except the default file pool policy, and then modify the settings of the copy.
1. Click File System Management > SmartPools > File Pool Policies. The SmartPools page appears and displays three groupings: a list of file pool policies, a list of template policies, and the latest scan job results.
2. In the File Pool Policies area, in the Actions column of the file pool policy that you want to copy, click Copy. The settings options appear.
3. Make changes in the appropriate areas, and then click Submit. A copy of the file pool policy is added to the list of policies in the File Pool Policies area.
Results
The copied file pool policy name is prefaced with "Copy of" so that you can differentiate it from the source policy.

Prioritize a file pool policy


File pool policies are evaluated in descending order according to their position in the File Pool Policies list. By default, new policies are inserted immediately above the default file pool policy, which is always last. You can give a policy higher or lower priority by moving it up or down the list.
1. Click File System Management > SmartPools > File Pool Policies. The SmartPools page appears and displays three groupings: a list of file pool policies, a list of template policies, and the latest scan job results.
2. In the Order column of the File Pool Policies area, select the policy that you want to move.
3. Click Move up or Move down until the policy is positioned where you want it in the order.

Use a file pool template policy


You can use a OneFS template to configure file pool policies.
1. Click File System Management > SmartPools > File Pool Policies. The SmartPools page appears and displays three groupings: a list of file pool policies, a list of template policies, and the latest scan job results.
2. In the Action column of the Template Policies area, in the row of the template that you want to use, click Use. The file pool policy settings options appear, with values preconfigured for the type of template that you selected.
3. Optional: Rename the template or modify the template policy settings.
4. Click Submit.
Results
The policy is added to the File Pool Policies list.

Delete a file pool policy


You can delete any file pool policy except the default policy. Because a file pool policy determines the target where a file is stored, when you delete a file pool policy, the files that are associated with that policy may be stored on a different target, depending on the settings of a different file pool policy or the default file pool policy. Files are not moved to another target until the SmartPools job runs.
1. Click File System Management > SmartPools > File Pool Policies. The SmartPools page appears and displays three groupings: a list of file pool policies, a list of template policies, and the latest scan job results.
2. In the File Pool Policies area, in the Actions column of the file pool policy that you want to remove, click Delete.
3. In the confirmation dialog box, click Yes to confirm the deletion.
Results
The file pool policy is removed from the list in the File Pool Policies area.

SmartPools settings
SmartPools settings include directory protection, global namespace acceleration, virtual hot spare, node pool spillover, protection management, and I/O optimization management.

Directory protection
Description: Increases the amount of protection for directories to a higher level than the directories and files that they contain, so that data that is not lost can still be accessed.
Notes: The option to Protect directories at one level higher should be enabled. When device failures result in data loss (for example, three drives or two nodes in a +2:1 policy), enabling this setting ensures that intact data is still accessible. When this setting is disabled, the directory that contains a file pool is protected according to your protection-level settings, but the devices used to store the directory and the file may not be the same, so there is potential to lose nodes with file data intact but be unable to access the data because those nodes contained the directory.
As an example, consider a cluster that has a +2 default file pool protection setting and no additional file pool policies. OneFS directories are always mirrored, so they are stored at 3x, which is the mirrored equivalent of the +2 default. This configuration can sustain a failure of two nodes before data loss or inaccessibility. If this setting is enabled, all directories are protected at 4x. If the cluster experiences three node failures, although individual files may be inaccessible, the directory tree is available and provides access to files that are still accessible. In addition, if another file pool policy protects some files at a higher level, these too are accessible in the event of a three-node failure.

Global namespace acceleration
Description: Specifies whether to allow per-file metadata to use SSDs in the node pool.
- Disabled: Restrict per-file metadata to the target pool of the file, except in the case of spillover. This is the default setting.
- Enabled: Allow per-file metadata to use the SSDs in any node pool.
Notes: This setting is available only if 20 percent or more of the nodes in the cluster contain SSDs and at least 1.5 percent of the total cluster storage is SSD-based.

Virtual hot spare
Description: Reserves a minimum amount of space in the node pool that can be used for data migration in the event of a drive failure. To reserve disk space for use as a virtual hot spare, select one or both of the following options:
- Reduce amount of available space: Subtracts the space reserved for the virtual hot spare when calculating available free space.
- Deny new data writes: Prevents write operations from using reserved disk space.
- VHS space to reserve: You can reserve a minimum number of virtual drives (1-4), as well as a minimum percentage of total disk space (0-20 percent).
Notes: If you configure both the minimum number of virtual drives and a minimum percentage of total disk space when you configure reserved VHS space, the enforced minimum value satisfies both requirements. If Reduce amount of available space is enabled while Deny new data writes is disabled, it is possible for the file system utilization to be reported at more than 100 percent.

Global spillover
Description: Specifies how to handle write operations to a node pool that is full.
- Enabled: Redirect write operations from a node pool that is not writable to another node pool.
- Disabled: Return a disk space error for write operations to a node pool that is not writable.

Spillover data to
Description: Specifies which pool to target when a storage pool is not writable.
Notes: When spillover is enabled but it is important that data writes do not fail, select anywhere for the Spillover data to setting, even if file pool policies send data to specific pools.

Protection management
Description: Uses SmartPools technology to manage node pool protection.
- SmartPools manages protection settings: Specify that SmartPools manages the protection-level settings. You can optionally modify the default settings under Default Protection Settings.
- Including files with manually-managed protection settings: Overwrite any protection settings that were configured through File System Explorer or the command-line interface.
Notes: Disabling both the protection management and I/O optimization management settings disables SmartPools functionality.

I/O optimization management
Description: Uses SmartPools technology to manage node pool I/O optimization.
- SmartPools manages I/O optimization settings: Specify that SmartPools technology is used to manage I/O optimization. You can optionally modify the default settings in the Default I/O Optimization Settings group.
- Including files with manually-managed I/O optimization settings: Overwrite any I/O optimization settings that were configured through File System Explorer or the command-line interface.
Notes: Disabling both the protection management and I/O optimization management settings disables SmartPools functionality.

Default file pool protection settings


Default protection settings include specifying the data pool, snapshot pool, protection level, and SSD strategy for files that are filtered by the default file pool policy.

Data storage target

Specifies the node pool or tier that you want to target with this file pool policy. Select one of the following options to define your SSD strategy:
- Metadata read acceleration. Default. Writes both file data and metadata to HDDs, and metadata to SSDs. Accelerates metadata reads only. Uses less SSD space than the Metadata read/write acceleration setting.
- Metadata read/write acceleration. Writes metadata to SSD pools. Uses significantly more SSD space than Metadata read acceleration, but accelerates metadata reads and writes.
- Avoid SSDs. Writes all associated file data and metadata to HDDs only.
- Data on SSDs. Uses nodes with SSDs for both data and metadata. Regardless of whether global namespace acceleration is enabled, any SSD blocks reside on the file's target pool if there is room.

Notes: If GNA is not enabled and the pool that you choose to target does not contain SSDs, you cannot define a strategy. If existing file pool policies direct data to a specific node pool or tier, do not add or modify a file pool policy to target anywhere for the Data storage target option; target a specific file pool instead. Metadata read acceleration writes both file data and metadata to HDD pools, but adds an additional SSD mirror if possible to accelerate read performance; it uses HDDs to provide reliability and an SSD, if available, to improve read performance, and is recommended for most uses. When you select Metadata read/write acceleration, the strategy uses SSDs, if available in the file pool policy, for performance and reliability; the extra mirror may be from a different node pool with GNA enabled or from the same node pool. The Data on SSDs strategy does not result in the creation of additional mirrors beyond the normal protection level; both file data and metadata are stored on SSDs if available within the file pool policy, and this option requires a significant amount of SSD storage. Use Avoid SSDs to free SSD space only after consulting with Isilon Technical Support personnel; it may negatively affect performance.

Snapshot storage target

Specifies the node pool or tier that you want to target for snapshot storage with this file pool policy. The settings are the same as those for Data storage target, but apply to snapshot data. The notes for Data storage target also apply to Snapshot storage target.

Protection level
- Default protection level of disk pool. Assign the default protection policy of the disk pool to the filtered files.
- Specific level. Assign a specified protection policy to the filtered files.

Notes: To change the protection policy to a specific level, select a new value from the list.

Default file pool I/O optimization settings

You can manage the I/O optimization settings that are used in the default file pool policy, including files with manually-managed attributes. To allow SmartPools to overwrite optimization settings that were configured using File System Explorer or the isi set command, select the Including files with manually-managed I/O optimization settings option in the Default Protection Settings group.

SmartCache

Enables or disables SmartCache.

Notes: SmartCache can improve performance, but can also lead to data loss if a node loses power or crashes while uncommitted data is in the write cache.

Data access pattern

Defines the optimization settings for accessing data: Concurrency, Streaming, or Random.

Notes: By default, iSCSI LUNs are configured to use a random access pattern. Other files and directories use a concurrent access pattern by default.


CHAPTER 13 Networking

After you determine the topology of your network, you can set up and manage your internal and external networks. There are two types of networks associated with a cluster:
- Internal. Nodes use the internal network to communicate with one another. Communication occurs through InfiniBand connections. You can optionally configure a failover network for redundancy.
- External. Clients connect to the cluster through the external network with Ethernet. The Isilon cluster supports standard network communication protocols, including NFS, SMB, HTTP, and FTP. The cluster includes various external Ethernet connections, providing flexibility for a wide variety of network configurations. External speeds vary by product.

With the cluster's web administration interface, you can manage both the internal and external network settings from a centralized location.

This chapter covers the following topics:
- Cluster internal network overview
- External client network overview
- Configuring the internal cluster network
- Configuring an external network
- Managing external client connections with SmartConnect
- Managing network interface provisioning rules


Cluster internal network overview


The internal network enables communication between nodes in a cluster. You can configure a single internal network, or you can optionally specify a second internal network that provides failover in the event that the int-a or int-b switch becomes unavailable. If your network topology uses a single internal network, only the int-a interface must be configured. If your network topology uses more than one internal network, you must first configure the int-a IP address ranges, and then configure the int-b and failover IP address ranges.

Internal IP address ranges


The number of internal IP addresses determines how many nodes can be joined to the cluster. When you initially configure the cluster, you specify one or more IP address ranges for the internal network. This range of addresses is used by the nodes to communicate. It is recommended that you create a range of addresses large enough to add nodes later. For certain configuration changes, such as deleting an IP address assigned to a node, the cluster must be rebooted. If the IP address range defined during the initial configuration is too restrictive for the size of the internal network, you can add ranges to the int-a network and int-b network.

Cluster internal network failover


Internal network failover provides redundancy for intra-cluster communications. To enable an internal failover network, the int-a ports of each node in the cluster must be connected to one switch, and the int-b ports on each node must be connected to another switch. The failover function is automatically enabled when the following conditions are met:
- IP address ranges on separate subnets are configured for the int-a, int-b, and failover networks.
- The int-b interface is enabled.

Enabling an internal failover network requires that the cluster be rebooted.

External client network overview


Client computers connect to a cluster through the external network. OneFS supports network subnets, IP address pools, and network provisioning rules. Subnets simplify front-end network management and provide flexibility in implementing and maintaining the cluster's network. By creating IP address pools within subnets, you can further partition your network interfaces. External network settings can be configured once using provisioning rules and then automatically applied as nodes are added to the cluster. You must initially configure the default external IP subnet in IPv4 format. After configuration is complete, you can configure additional subnets using IPv4 or IPv6. IP address pools can be associated with a node or a group of nodes as well as with the NIC ports on the nodes. For example, based on the network traffic that you expect, you might decide to establish one subnet for storage nodes and another subnet for accelerator nodes.

How you set up your external network subnets depends on your network topology. In a basic network topology where all client-node communication occurs through a single gateway, only a single external subnet is required. If clients connect through multiple external subnets or internal connections, you must configure multiple external network subnets.

External network settings


You can configure settings for your external networks through a wizard in the command-line interface. During the initial cluster setup, you must specify the following information about your external network:
- Netmask
- IP address range
- Gateway
- Domain name server list (optional)
- DNS search list (optional)
- SmartConnect zone name (optional)
- SmartConnect service address (optional)

After you configure these settings, OneFS performs the following actions:
- Creates a default external network subnet called subnet0, with the specified netmask, gateway, and SmartConnect service address.
- Creates a default IP address pool called pool0 with the specified IP address range, the SmartConnect zone name, and the external interface of the first node in the cluster as the only member.
- Creates a default network provisioning rule called rule0, which automatically assigns the first external interface for all newly added nodes to pool0.
- Adds pool0 to subnet0 and configures pool0 to use the virtual IP of subnet0 as its SmartConnect service address.
- Sets the global, outbound DNS settings to the domain name server list and DNS search list, if provided.

After you have configured the external network for the cluster through the command-line Configuration wizard, you can make changes to your external network settings through the web administration interface. For example, you can add external network subnets, or modify existing external network settings such as subnets, IP address pools, and network provisioning rules.

IP address pools
IP address pools are logical network partitions of the nodes and external network interfaces that belong to a cluster. IP address pools are also used to configure SmartConnect zones and IP failover support for protocols such as NFS.

IP address pools:
- Map available addresses to configured interfaces.
- Belong to external network subnets.
- Allow you to partition your cluster's network interfaces into groups.
- Can be assigned to groups in your organization.

Multiple pools for a single subnet require a configured SmartConnect Advanced license.

The IP address pool of a subnet consists of one or more ranges of IP addresses and a set of cluster interfaces. All IP address ranges in a pool must be unique. A default IP address pool is configured during the initial cluster setup using the command-line Configuration wizard. You can modify the default IP address pool at any time, and you can also add, remove, or modify additional pools. If you add external network subnets to your cluster by using the Subnet wizard, you must specify the IP address pools that belong to the subnet.

IP address pools are allocated to external network interfaces either dynamically or statically. The static allocation method assigns one IP address per pool interface. The IP addresses remain assigned, regardless of that interface's status, but the method does not guarantee that all IP addresses are assigned. The dynamic allocation method distributes all pool IP addresses, and an IP address can be moved depending on the interface's status and connection policy settings.
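To review how pools, IP address ranges, and allocation methods are configured on a cluster, you can list the pool details from the command-line interface. This is the same listing command this guide uses elsewhere for verification; the output fields vary by configuration:

isi networks ls pools -v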

Connection balancing with SmartConnect


SmartConnect balances client connections in OneFS. The SmartConnect module is available in two modes:
- Basic. The unlicensed Basic mode balances client connections by using a round robin policy. The Basic mode is limited to static IP address allocation and to one IP address pool per external network subnet. This mode is included with OneFS as a standard feature and does not require a license.
- Advanced. The licensed Advanced mode enables features such as CPU utilization, connection counting, and client connection policies in addition to the round robin policy. The Advanced mode also allows IP address pools to be defined to support multiple DNS zones within a single subnet, and supports IP failover.

The following information describes the SmartConnect DNS client-connection balancing policies:
- Round Robin. This method selects the next available node on a rotating basis. This is the default state (after SmartConnect is activated) if no other policy is selected. Round robin is the only connection policy available without a SmartConnect Advanced license.
- Connection Count. This method determines the number of open TCP connections on each available node and optimizes the cluster usage.
- Network Throughput. This method determines the average throughput on each available node to optimize the cluster usage.
- CPU Usage. This method determines the average CPU utilization on each available node to optimize the cluster usage.

SmartConnect requires that a new name server (NS) record be added to the existing authoritative DNS zone that contains the cluster.
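For example, if a pool's SmartConnect zone name is cluster.example.com and the subnet's SmartConnect service IP is 10.10.10.10, the delegation added to the parent zone might look like the following BIND-style records. This is a minimal sketch; the zone name, host name, and addresses are illustrative assumptions, not values taken from this guide:

; Delegate the SmartConnect zone to the SmartConnect service IP (illustrative values)
cluster.example.com.    IN NS    sip.example.com.
sip.example.com.        IN A     10.10.10.10

Incoming DNS queries for names in the delegated zone are then answered by the SmartConnect service IP according to the pool's connection balancing policy.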


External IP failover
External IP failover redistributes IP addresses among node interfaces in an IP address pool when one or more interfaces become unavailable. To enable dynamic IP allocation and IP failover in your cluster, you must have an active SmartConnect Advanced license. The unlicensed SmartConnect Basic module, provided as a standard feature of the OneFS operating system, does not support IP failover.

Dynamic IP allocation ensures that all IP addresses in the IP address pool are assigned to member interfaces, which allows clients to connect to any IP address in the pool and receive a response. If a node or an interface becomes unavailable, OneFS moves the IP address to other member interfaces in the IP address pool. IP failover ensures that all of the IP addresses in the pool are assigned to an available node: when an interface becomes unavailable, its dynamic IP addresses are redistributed among the remaining available interfaces, and subsequent client connections are directed to the nodes that are assigned those IP addresses.

If your cluster has an active SmartConnect Advanced license, you may have already enabled IP failover while running the Subnet wizard to configure your external network settings. You can also modify your subnet settings at any time to enable IP failover for selected IP address pools. IP failover occurs when a pool is set to use dynamic IP address allocation. You can further configure IP failover for your network environment by using the following options:
- IP allocation method. This method ensures that all of the IP addresses in the pool are assigned to an available node.
- Rebalance policy. This policy controls how IP addresses are redistributed when node interface members for a given IP address pool become available after a period of unavailability.
- IP failover policy. This policy determines how to redistribute the IP addresses among remaining members of an IP address pool when one or more members are unavailable.

SmartConnect requires that a new name server (NS) record be added to the existing authoritative DNS zone that contains the cluster.

NIC aggregation
Network interface card (NIC) aggregation, also known as link aggregation, is optional, and enables you to combine the bandwidth of a node's physical network interface cards into a single logical connection. NIC aggregation provides improved network throughput. Configuring link aggregation requires advanced knowledge of network switches. Consult your network switch documentation before configuring your cluster for link aggregation. NIC aggregation can be configured during the creation of a new external network subnet by using the Subnet wizard. Alternatively, NIC aggregation can be configured on the existing IP address pool of a subnet. When you configure a node through the web administration interface to enable NIC aggregation, storage administrators must be aware of the following:
- OneFS provides support for the following link aggregation methods:
  - Link Aggregation Control Protocol (LACP). Supports the IEEE 802.3ad Link Aggregation Control Protocol (LACP). This method is recommended for switches that support LACP and is the default mode for new pools.
  - Legacy Fast EtherChannel (FEC) mode. This method is compatible with aggregated configurations in earlier versions of OneFS.
  - Etherchannel (FEC). This method is a newer implementation of the Legacy FEC mode.
  - Active/Passive Failover. This method transmits all data through the master port, which is the first port in the aggregated link. The next active port in an aggregated link takes over if the master port is unavailable.
  - Round-Robin Tx. This method balances outbound traffic across all active ports in the aggregated link and accepts inbound traffic on any port.
- Some NICs may allow aggregation of ports only on the same network card.
- For LACP and FEC aggregation modes, the switch must support IEEE 802.3ad link aggregation. Since the trunks on the network switch must also be set up, the node must be correctly connected with the right ports on the switch.

VLANs
Virtual LAN (VLAN) tagging is an optional setting for the external network subnet that enables a cluster to participate in multiple virtual networks. A VLAN is a group of hosts that communicate as though they are connected to the same local area network, regardless of their physical location. Enabling a cluster to participate in a VLAN allows multiple cluster subnets to be supported without multiple network switches; one physical switch enables multiple virtual subnets. Configuring a VLAN requires advanced knowledge of network switches. Consult your network switch documentation before configuring your cluster for a VLAN.

DNS name resolution


You can configure the DNS server settings used for your external network. The DNS server setting designates up to three domain name service (DNS) servers that the cluster uses to resolve hostnames to IP addresses. You can also add up to six search domains. You can configure the DNS server settings during initial cluster configuration using the command-line Configuration wizard. The settings can also be modified at any time after the initial configuration through the web administration interface or through the isi networks command via the command-line interface.

IPv6 support
OneFS provides support for IPv6 through a dual-stack configuration. You can configure a cluster with IPv6 addresses. With dual-stack support in OneFS, you can use both IPv4 and IPv6 addresses. However, configuring a cluster to use IPv6 exclusively is not supported. When you set up the cluster, the initial subnet must use IPv4 addresses. The following table describes important distinctions between IPv4 and IPv6.

IPv4                                   IPv6
32-bit addresses                       128-bit addresses
Subnet mask                            Prefix length
Address Resolution Protocol (ARP)      Neighbor Discovery Protocol (NDP)
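The subnet mask and prefix length notations express the same information: a dotted-decimal IPv4 netmask maps to a prefix length by counting its leading one-bits. A worked illustration (not taken from this guide):

255.255.255.0  = 11111111.11111111.11111111.00000000  = prefix length /24
255.255.0.0    = 11111111.11111111.00000000.00000000  = prefix length /16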

Configuring the internal cluster network


You can use the web administration interface to modify the internal cluster network settings, including the following tasks:
- Configure settings for the int-b and failover networks
- Enable internal network failover
- Modify the int-b and failover network settings
- Delete IP addresses
- Migrate IP addresses

You can configure the int-b and failover internal networks to provide back up networks in the event of an int-a network failure. Configuration involves specifying a valid netmask and IP address range for the network.

Modify the internal IP address range


Internal IP addresses are used for intra-cluster communication. You can add, remove, or migrate IP addresses for the cluster's internal network.

When the cluster was originally set up, the wizard prompted you to configure the int-a, int-b, and failover networks. However, if more nodes are added to the cluster and the initial setup did not provide enough IP addresses, the internal IP ranges must be expanded.

1. Click Cluster Management > Network Configuration.
2. In the Internal Network Settings area, select the network that you want to add IP addresses for.
   - To select the int-a network, click int-a.
   - To select the int-b/failover network, click int-b / Failover.
3. In the IP Ranges area, add, delete, or migrate your IP address ranges. Ideally, the new range is contiguous with the previous one. For example, if your current IP address range is 192.168.160.60-192.168.160.162, the new range should start with 192.168.160.163.
4. Click Submit. If you entered a contiguous range, the new range appears as one range that includes the IP addresses you added as well as the previous ones.

Modify the internal network netmask


You can modify the netmask value for the internal network. If the netmask is too restrictive for the size of the internal network, you must modify the netmask settings. It is recommended that you specify a class C netmask, such as 255.255.255.0, for the internal netmask. This netmask is large enough to accommodate future clusters.

For the changes in netmask value to take effect, you must reboot the cluster.
1. Click Cluster Management > Network Configuration.
2. In the Internal Network Settings area, select the network that you want to configure the netmask for.
   - To select the int-a network, click int-a.
   - To select the int-b/failover network, click int-b / Failover.
3. In the Netmask field, type a netmask value. You cannot modify the netmask value if the change invalidates any node addresses.
4. Click Submit. A dialog box prompts you to reboot the cluster.
5. Specify when you want to reboot the cluster.
   - To immediately reboot the cluster, click Yes. When the cluster finishes rebooting, the login page appears.
   - Click No to return to the Edit Internal Network page without changing the settings or rebooting the cluster.

Configure and enable an internal failover network


You can enable a cluster failover network through the web administration interface. By default, the int-b and failover internal networks are disabled.
1. Click Cluster Management > Network Configuration.
2. In the Internal Network Settings area, click int-b / Failover.
3. In the IP Ranges area, for the int-b network, click Add range. The Add IP Range dialog box appears.
4. In the first IP range field, type the IP address at the low end of the range. Add an IP range for the int-b network, and ensure that there is no overlap with the int-a network. For example, if the IP address range for the int-a network is 192.168.1.1-192.168.1.100, specify a range of 192.168.2.1-192.168.2.100 for the int-b network. This example assumes a class C netmask.
5. In the second IP range field, type the IP address at the high end of the range.
6. Click Submit.
7. In the IP Ranges area for the Failover network, click Add range. Add an IP address range for the failover network, ensuring there is no overlap with the int-a network or the int-b network. The Edit Internal Network page appears, and the new IP address range appears in the IP Ranges list.
8. In the Settings area, specify a valid netmask. Ensure that there is no overlap between the IP address ranges for the int-b network and the failover network. It is recommended that you use a class C netmask, such as 255.255.255.0, for the internal network.
9. In the Settings area, for State, click Enable to enable the int-b and failover networks. For the changes to the network settings to take effect, you must reboot the cluster.
10. Click Submit. The Confirm Cluster Reboot dialog box appears.
11. To reboot the cluster, click Yes.

Disable internal network failover


You can disable the int-b and failover internal networks through the web administration interface. To make changes to the int-b and failover network states, you must reboot the cluster.
1. Click Cluster Management > Network Configuration.
2. In the State area, click Disable.
3. Click Submit. The Confirm Cluster Reboot dialog box appears.
4. To reboot the cluster, click Yes. After the cluster reboots, the login page appears.

Configuring an external network


You can configure all network connections between the cluster and client computers.

Adding a subnet
OneFS provides a four-step wizard that enables you to add and configure an external subnet. This procedure explains how to start the Subnet wizard and configure the new subnet. To add a subnet, you must perform the following steps:
1. Configure the subnet's basic and advanced settings.
2. Assign an initial IP address pool to be used by the subnet.
3. Optional: Configure SmartConnect for the IP address pool.
4. Assign external network interfaces to the subnet's IP address pool.

Configure subnet settings

You can add a subnet to a cluster's external network by using the Subnet wizard.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click Add subnet.
3. In the Basic section, in the Name field, type a unique name for the subnet. The name can be up to 32 alphanumeric characters long and can include underscores or hyphens, but not spaces or other punctuation.
4. Optional: In the Description field, type a descriptive comment about the subnet. This value can contain up to 128 characters.
5. Specify the IP address format for the subnet and configure an associated netmask or prefix length setting:
   - For an IPv4 subnet, click IPv4 in the IP Family list. In the Netmask field, type a dotted decimal octet (x.x.x.x) that represents the subnet's mask.
   - For an IPv6 subnet, click IPv6 in the IP Family list. In the Prefix length field, type an integer (ranging from 1 to 128) that represents the network prefix length.
6. In the MTU list, type or select the size of the maximum transmission units the cluster uses in network communication. Any numerical value is allowed, but might not be compatible with your network. Common settings are 1500 (standard frames) and 9000 (jumbo frames). Although OneFS supports both 1500 MTU and 9000 MTU, it is recommended that you configure switches for jumbo frames. Jumbo frames enable the cluster to communicate more efficiently with all the nodes in the cluster. To benefit from using jumbo frames, all devices in the network path must be configured for jumbo frames.
7. In the Gateway address field, type the IP address of the gateway server device through which the cluster communicates with systems outside of the subnet.
8. In the Gateway priority field, type an integer for the priority of the subnet gateway for nodes assigned to more than one subnet. You can configure only one default gateway per node, but each subnet can be assigned a gateway. When a node belongs to more than one subnet, this option enables you to define the preferred default gateway. A value of 1 represents the highest priority, and 10 represents the lowest priority.
9. If you plan to use SmartConnect for connection balancing, in the SmartConnect service IP field, type the IP address that will receive all incoming DNS requests for each IP address pool according to the client connection policy. You must have at least one subnet configured with a SmartConnect service IP in order to use connection balancing.
10. Optional: In the Advanced section, you can enable VLAN tagging if you want to enable the cluster to participate in virtual networks. Configuring a VLAN requires advanced knowledge of network switches. Consult your network switch documentation before configuring your cluster for a VLAN.
11. If you enable VLAN tagging, you must also type a VLAN ID that corresponds to the ID number for the VLAN set on the switch, with a value from 2 to 4094.
12. Optional: In the Hardware load balancing field, type the IP address for a hardware load balancing switch that uses Direct Server Return (DSR). This routes all client traffic to the cluster through the switch. The switch determines which node handles the traffic for the client, and passes the traffic to that node.
13. Click Next. The Step 2 of 4 -- IP Address Pool Settings dialog box appears.

What to do next: This is the first of four steps required to configure an external network subnet. To save the network configuration, you must complete the remaining steps. For information on the next step, see Configure an IP address pool.


Add an IP address pool to a new subnet


You must configure an IP address pool for a cluster's external network subnet. IP address pools partition your cluster's external network interfaces into groups, or pools, of unique IP address ranges in a subnet. If your cluster is running SmartConnect Basic for connection balancing, you can configure only one IP address pool per subnet. With the optional, licensed SmartConnect Advanced module, you can configure unlimited IP address pools per subnet.

Before you begin: This is the second of four steps required to configure an external network subnet on a cluster. You must complete the previous steps in the Subnet wizard before you can perform these steps.

1. In the Step 2 of 4 IP Address Pool Settings dialog box, type a unique Name for the IP address pool. The name can be up to 32 alphanumeric characters long and can include underscores or hyphens, but no spaces or other punctuation.
2. Type an optional Description for the IP address pool. The description can contain up to 128 characters.
3. In the Access zone list, select an access zone for the pool. OneFS includes a built-in system access zone.
4. In the IP range (low-high) area, click New. The Subnet wizard adds an IP address range with default Low IP and High IP values.
5. Select the default Low IP value and replace it with the starting IP address of the subnet's IP address pool.
6. Select the default High IP value and replace it with the ending IP address of the subnet's IP address pool.
7. Optional: Add IP address ranges to the IP address pool by repeating steps 4 through 6 as needed.
8. Click Next. The Step 3 of 4 SmartConnect Settings dialog box appears.

What to do next: This is the second of four steps required to configure an external network subnet. To save the network configuration, you must complete each of the remaining steps. For information on the next step, see Configure pool SmartConnect settings.

Configure SmartConnect settings for a new subnet


You can configure connection balancing for a cluster's external network subnet. SmartConnect is a client connection balancing management module that you can optionally configure as part of a cluster's external network subnet. The settings available on the wizard page depend on whether you have a SmartConnect Advanced license. The unlicensed mode of the SmartConnect module is limited to using a round robin balancing policy.

Before you begin: This is the third of four steps required to configure an external network subnet on a cluster by using the Subnet wizard. To access the SmartConnect page in the wizard, you must first complete the steps on the previous Subnet wizard pages.

1. In the Step 3 of 4 SmartConnect Settings dialog box, type a Zone name for the SmartConnect zone that this IP address pool represents. The zone name must be unique among the pools served by the SmartConnect service subnet specified in step 3 below.
2. In the Connection policy list, select the type of connection balancing policy that the IP address pool for this subnet uses. The policy determines how SmartConnect distributes incoming DNS requests across the members of an IP address pool.
   - Round Robin. Selects the next available node on a rotating basis. This is the default policy if no other policy is selected.
   - Connection Count. Determines the number of open TCP connections on each available node to optimize the cluster usage.
   - Network Throughput. Uses the overall average throughput volume on each available node to optimize the cluster usage.
   - CPU Usage. Examines average CPU utilization on each available node to optimize the cluster usage.
3. In the SmartConnect service subnet list, select the name of the external network subnet whose SmartConnect service answers DNS requests on behalf of the IP address pool. A pool can have only one SmartConnect service answering DNS requests. If this option is left blank, the IP address pool the subnet belongs to is excluded when SmartConnect answers incoming DNS requests for the cluster.

If you have activated an optional SmartConnect Advanced license, complete the following steps for the options in the SmartConnect Advanced section of this wizard page.

4. In the IP allocation method list, select the method by which IP addresses are assigned to the member interfaces for this IP address pool:
   - Static. Select this IP allocation method to assign IP addresses when member interfaces are added to the IP pool. As members are added to the pool, this method allocates the next unused IP address from the pool to each new member. After an IP address is allocated, the pool member keeps the address indefinitely unless one of the following is true:
     - The member interface is removed from the network pool.
     - The member node is removed from the cluster.
     - The member interface is moved to another IP address pool.
   - Dynamic. Select this IP allocation method to ensure that all IP addresses in the IP address pool are assigned to member interfaces, which allows clients to connect to any IP address in the pool and be guaranteed a response. If a node or an interface becomes unavailable, its IP addresses are automatically moved to other available member interfaces in the pool. If you select the dynamic IP allocation method, you can specify the SmartConnect rebalance policy and the IP failover policy in the next two steps.
5. Select the type of SmartConnect rebalance policy to use when IP addresses are redistributed. IP address redistribution occurs when node interface members in an IP address pool become available. These options can be selected only if the IP allocation method is set to Dynamic.
   - Automatic Failback (default). Automatically redistributes IP addresses. The automatic rebalance is triggered by a change to one of the following items: cluster membership, cluster external network configuration, or a member network interface.
   - Manual Failback. Does not redistribute IP addresses until you manually issue a rebalance command through the command-line interface.
6. The IP failover policy, also known as NFS failover, determines how to redistribute the IP addresses among remaining members of an IP address pool when one or more members are unavailable. To enable IP failover, you must first set the IP allocation method to Dynamic, and then select which type of IP failover policy to use:
   - Round Robin. Selects the next available node on a rotating basis. This is the default policy if no other policy is selected.
   - Connection Count. Determines the number of open TCP connections on each available node to optimize the cluster usage.
   - Network Throughput. Uses the overall average throughput volume on each available node to optimize the cluster usage.
   - CPU Usage. Examines average CPU utilization on each available node to optimize the cluster usage.
7. Click Next to store the changes that you made to this wizard page. The Step 4 of 4 IP Address Pool Members dialog box appears.

What to do next: This is the third of four steps required to configure an external network subnet. To save the network configuration, you must complete each of the remaining steps. For information on the next step, see Select the interface members for an IP address pool.

Select interface members for a new subnet


You can select which network interfaces belong to the IP address pool of the external network subnet.

Before you begin: This is the final of four steps required to configure an external network subnet on a cluster by using the Subnet wizard. The choices available on this wizard page depend on the types of external network interfaces of your Isilon cluster nodes.

1. In the Step 4 of 4 IP Address Pool Members dialog box, select which Available interfaces on which nodes you want to assign to the current IP address pool, and then click the right arrow button. You can also drag and drop the selected interfaces between the Available interfaces table and the Interfaces in current pool table. Selecting an available interface for a node whose Type is designated Aggregation bonds together the external interfaces for the selected node. For information about NIC aggregation, refer to NIC aggregation. In the case of aggregated links, choose the aggregation mode that corresponds to the switch settings from the Aggregation mode drop-down list. Configuring link aggregation requires advanced knowledge of how to configure network switches. Consult your network switch documentation before configuring your cluster for link aggregation.
2. When you have finished assigning external network interfaces to the IP address pool, click Submit. The external subnet settings you configured by using the Subnet wizard appear on the Edit Subnet page.

What to do next: You can change the subnet configuration settings at any time without going through the four-step wizard process. For more information, see Managing external client subnets.

Managing external client subnets


To modify a subnet, you can go directly to any of the subnet settings that you want to change. You can modify the default external network subnet for your cluster, which was configured during cluster installation by using the command-line Configuration wizard. You can also configure any new subnets you have added to the cluster after the initial setup.

Modify external subnet settings


You can modify the subnet for the external network.

Modifying an external network subnet that is in use can disable access to the cluster and the web administration interface. OneFS displays a warning if deleting a subnet will terminate communication between the cluster and the web administration interface.

1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet you want to modify.
3. In the Settings area, click Edit.
4. Modify the Basic subnet settings as needed.
   - Description. A descriptive comment that can be up to 128 characters.
   - Netmask. The subnet mask for the network interface. This field appears only for IPv4 subnets.
   - MTU. The maximum size of the transmission units the cluster uses in network communication. Any numerical value is allowed, but might not be compatible with your network. Common settings are 1500 (standard frames) and 9000 (jumbo frames).
   - Gateway address. The IP address of the gateway server through which the cluster communicates with systems outside of the subnet.
   - Gateway priority. The priority of the subnet's gateway for nodes that are assigned to more than one subnet. Only one default gateway can be configured on each Isilon node, but each subnet can have its own gateway. If a node belongs to more than one subnet, this option enables you to define the preferred default gateway. A value of 1 is the highest priority, with 10 being the lowest priority.
   - SmartConnect service IP. The IP address that receives incoming DNS requests from outside the cluster. SmartConnect responds to these DNS requests for each IP address pool according to the pool's client-connection policy. To use connection balancing, at least one subnet must be configured with a SmartConnect service IP address.
5. Optional: Modify the Advanced settings as needed. Configuring a virtual LAN requires advanced knowledge of network switches. Consult your network switch documentation before configuring your cluster for a VLAN. If you are not using a virtual LAN, leave the VLAN options disabled.
   - VLAN tagging. You can enable VLAN tagging. VLAN tagging allows a cluster to participate in multiple virtual networks. VLAN support provides security across subnets that is otherwise available only by purchasing additional network switches.
   - VLAN ID. If you enabled VLAN tagging, type a VLAN ID that corresponds to the ID number for the VLAN that is set on the switch, with a value from 2 to 4094.
   - Hardware load balancing IPs. You can enter the IP address for a hardware load balancing switch that uses Direct Server Return (DSR).
6. Click Submit.

Remove an external subnet


You can delete an external network subnet that you no longer need. Deleting an external network subnet that is in use can prevent access to the cluster and the web administration interface. OneFS displays a warning if deleting a subnet will terminate communication between the cluster and the web administration interface.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet you want to delete. The Edit Subnet page appears for the subnet you specified.
3. Click Delete subnet. A confirmation dialog box appears.
4. Click Yes to delete the subnet. If the subnet you are deleting is used to communicate with the web administration interface, the confirmation message will contain an additional warning.

Create a static route


You can create a static route to connect to networks that are unavailable through the default routes. You configure a static route on a per-pool basis. A static route can be configured only with the command-line interface and only with the IPv4 protocol. 1. Open a secure shell (SSH) connection to any node in the cluster and log in.

2. Create a static route by running the following command, where the route is specified as the destination subnet and netmask followed by the gateway address:
isi networks modify pool --name <subnetname>:<poolname> --add-static-routes <subnet>/<netmask>-<gateway>

The system displays output similar to the following example:


Modifying pool 'subnet0:pool0': Saving: OK

3. To ensure that the static route was created, run the following command: isi networks ls pools -v.
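For example, to add a static route for the 192.168.205.0/24 network through a gateway at 192.168.130.1 to pool0 in subnet0, you might run the following command. The pool name and addresses are illustrative assumptions, following the syntax shown above:

isi networks modify pool --name subnet0:pool0 --add-static-routes 192.168.205.0/24-192.168.130.1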

Remove a static route


You can remove static routes.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Remove a static route by running the following command:
isi networks modify pool --name <subnetname>:<poolname> --remove-static-routes <subnet>/<netmask>-<gateway>

The system displays output similar to the following example:


Modifying pool 'subnet0:pool0': Saving: OK

3. To ensure that the static route was removed, run the following command: isi networks ls pools -v.
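For example, removing the illustrative route added earlier might look like the following command (same assumed pool name and addresses as above):

isi networks modify pool --name subnet0:pool0 --remove-static-routes 192.168.205.0/24-192.168.130.1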

Enable or disable VLAN tagging


You can configure a cluster to participate in multiple virtual private networks, also known as virtual LANs or VLANs. You can also configure a VLAN when creating a subnet using the Subnet wizard.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet that you want to configure VLAN tagging for.
3. In the Settings area for the subnet, click Edit.
4. In the VLAN tagging list, select Enabled or Disabled. If you select Enabled, proceed to the next step. If you select Disabled, proceed to step 6.
5. In the VLAN ID field, type a number between 2 and 4094 that corresponds to the VLAN ID number set on the switch.
6. Click Submit.

Managing IP address pools


You can configure IP address pools to control what network interfaces and IP addresses are attached to different subnets. IP address pools allow you to customize how clients access the cluster.

Add an IP address pool


You can use the web interface to add an IP address pool.
1. Click Cluster Management > Network Configuration.

2. Click the name of the subnet that you want to add an IP address pool to.
3. In the IP Address Pools area, click Add pool. The IP Address Pool wizard appears.
4. Complete the three steps of the wizard. For more information, see Add an IP address pool to a new subnet, Configure SmartConnect settings for a new subnet, and Select interface members for a new subnet.

Modify an IP address pool


You can use the web interface to modify IP address pool settings.
1. Click Cluster Management > Network Configuration.
2. Click the name of the subnet containing the pool you want to modify.
3. In the Basic Settings area, click Edit for the IP address pool you want to modify. The Configure IP Pool page appears.
4. Modify the settings as needed. For more information, see Add an IP address pool to a new subnet.

Delete an IP address pool


You can use the web interface to delete an IP address pool.
1. Click Cluster Management > Network Configuration.
2. Click the name of the subnet containing the pool you want to delete.
3. Click Delete pool next to the pool you want to delete.

Modify a SmartConnect zone


You can modify the settings of a SmartConnect zone that you created for an external network subnet using the Subnet wizard.
1. Click Cluster Management > Network Configuration.
2. Click the name of the external network subnet that contains the SmartConnect zone you want to modify.
3. In the SmartConnect settings area for the pool containing the SmartConnect settings you want to modify, click Edit.
4. Modify any SmartConnect settings, and then click Submit. For more information, see Configure SmartConnect settings for a new subnet.

Disable a SmartConnect zone


You can remove a SmartConnect zone from an external network subnet.
1. Click Cluster Management > Network Configuration.
2. Click the name of the external network subnet that contains the SmartConnect zone you want to disable.
3. In the SmartConnect settings area for the pool containing the SmartConnect zone you want to delete, click Edit.
4. To disable the SmartConnect zone, delete the name of the SmartConnect zone from the Zone name field and leave the field blank.
5. Click Submit to disable the SmartConnect zone. For more information, see Configure SmartConnect settings for a new subnet.


Configure IP failover
You can configure IP failover to reassign an IP address from an unavailable node to a functional node, which enables clients to continue communicating with the cluster, even after a node becomes unavailable.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet you want to set up IP failover for.
3. Expand the area of the pool you want to modify and click Edit in the SmartConnect Settings area.
4. Optional: In the Zone name field, enter a name of up to 128 characters for the zone.
5. In the Connection Policy list, select a balancing policy:
   - Round Robin. Selects the next available node on a rotating basis. This is the default policy if no other policy is selected.
   - Connection Count. Determines the number of open TCP connections on each available node to optimize the cluster usage.
   - Network Throughput. Uses the overall average throughput volume on each available node to optimize the cluster usage.
   - CPU Usage. Examines average CPU utilization on each available node to optimize the cluster usage.
6. If you purchased a license for SmartConnect Advanced, you also have access to the following lists:
   - IP allocation method. This setting determines how IP addresses are assigned to clients. Select either Dynamic or Static.
   - Rebalance policy. This setting defines the client redirection policy for when a node becomes unavailable. The IP allocation method must be set to Dynamic in order for rebalance policy options to be selected.
   - IP failover policy. This setting defines the client redirection policy for when an IP address is unavailable.
Allocate IP addresses to accommodate new nodes


You can expand capacity by adding new nodes to your Isilon cluster. After the hardware installation is complete, you can allocate IP addresses for a new node on one of the cluster's existing external network subnets, and then add the node's external interfaces to the subnet's IP address pool. You can also use network provisioning rules to automate the process of configuring the external network interfaces for new nodes when they are added to a cluster, although you may still need to allocate more IP addresses for the new nodes, depending on how many are already configured.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet that contains the IP address pool that you want to allocate more IP addresses to in order to accommodate the new nodes.
3. In the Basic settings area, click Edit.
4. Click New to add a new IP address range using the Low IP and High IP fields, or click the respective value in either the Low IP or High IP columns and type a new beginning or ending IP address.
5. Click Submit.
6. In the Pool members area, click Edit.
7. In the Available interfaces table, select one or more interfaces for the newly added node, and then click the right arrow button to move the interfaces into the Interfaces in current pool table.
8. Click Submit to assign the new node interfaces to the IP address pool.

Managing IP address pool interface members


You can assign interfaces to specific IP address pools. Available interfaces are presented as all possible combinations of each interface on each node in the cluster, as well as NIC aggregation for each interface type on each node in the cluster.

Modify the interface members of a subnet


You can use the web interface to modify interface member settings.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the subnet containing the interface members you want to modify.
3. Click Edit next to the Pool members area. For more information, see Select interface members for a new subnet.

Remove interface members from an IP address pool


You can use the web interface to remove interface members from an IP address pool.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet containing the IP address pool that you want to remove interface members from.
3. In the Pool members area, click Edit.
4. Remove a node's interface from the IP address pool by clicking the node's name in the Interfaces in current pool column, and then clicking the left arrow button. You can also drag and drop node interfaces between the Available interfaces list and the Interfaces in current pool column.
5. When you have finished removing node interfaces from the IP address pool, click Submit.
For more information, see Select interface members for a new subnet.

Configure NIC aggregation


You can configure cluster IP address pools to use NIC aggregation. This procedure describes how to configure network interface card (NIC) aggregation for an IP address pool belonging to an existing subnet. You can also configure NIC aggregation while configuring an external network subnet using the Subnet wizard.

Configuring NIC aggregation means that multiple physical external network interfaces on a node are combined into a single logical interface. If a node has two external Gigabit Ethernet interfaces, both will be aggregated. On a node with both Gigabit and 10 Gigabit Ethernet interfaces, both types of interfaces can be aggregated, but only with interfaces of the same type; NIC aggregation cannot be used with mixed interface types.

An external interface for a node cannot be used by an IP address pool in both an aggregated configuration and as an individual interface. You must remove the individual interface for a node from the Interfaces in current pool table before configuring an aggregated NIC; otherwise, the web administration interface displays an error message when you click Submit.

Configuring link aggregation requires advanced knowledge of network switches. Consult your network switch documentation before configuring your cluster for NIC aggregation.

Before you begin: You must enable NIC aggregation on the cluster before you can enable NIC aggregation on the switch. If the cluster is configured but the switch is not configured, the cluster can continue to communicate. If the switch is configured but the cluster is not configured, the cluster cannot communicate, and you are unable to configure the cluster for NIC aggregation.

1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet that contains the IP address pool that you want to add aggregated interface members to.
3. In the Pool members area, click Edit. In the case of multiple IP address pools, expand the pool that you want to add the aggregated interfaces to, and then click Edit in the Pool members area.
4. In the Available interfaces table, click the aggregated interface for the node, which is indicated by a listing of AGGREGATION in the Type column, and then click the right arrow button to move the aggregated interface to the Interfaces in current pool table. For example, to aggregate the network interface card for Node 2 of the cluster, click the interface named ext-agg, Node 2 under Available interfaces.
5. From the Aggregation mode drop-down list, select the aggregation mode that corresponds to the network switch settings. Consult your network switch documentation for supported NIC aggregation modes. OneFS supports the following NIC aggregation modes:
   - Link Aggregation Control Protocol (LACP). Supports the IEEE 802.3ad Link Aggregation Control Protocol (LACP). This method is recommended for switches that support LACP and is the default mode for new pools.
   - Legacy Fast EtherChannel (FEC) mode. This method is compatible with aggregated configurations in earlier versions of OneFS.
   - Etherchannel (FEC). This method is the newer implementation of the Legacy FEC mode.
   - Active/Passive Failover. This method transmits all data through the master port, which is the first port in the aggregated link. The next active port in an aggregated link takes over if the master port is unavailable.
   - Round-Robin Tx. This method balances outbound traffic across all active ports in the aggregated link and accepts inbound traffic on any port.

6. Click Submit.

Remove an aggregated NIC from an IP address pool


You can remove an aggregated NIC configuration from an IP address pool if your network environment has changed. However, you must first replace the aggregated setting with single-NIC settings in order for the node to continue supporting network traffic.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the name of the subnet that contains the IP address pool with the NIC aggregation settings you want to remove.
3. In the Pool members area, click Edit.
4. Select the name of the aggregated NIC for the node that you want to remove in the Interfaces in current pool table, and then click the left arrow button to move the name into the Available interfaces table.
5. Select one or more individual interfaces for the node in the Available interfaces table, and then click the right arrow button to move the interfaces into the Interfaces in current pool table.
6. When you have completed modifying the node interface settings, click Submit.

Move nodes between IP address pools


You can move nodes between IP address pools in the event of a network reconfiguration or installation of a new network switch. The process of moving nodes between IP address pools involves creating a new IP address pool and then assigning it to the nodes so that they are temporarily servicing multiple subnets. After testing that the new IP address pool is working correctly, the old IP address pool can safely be deleted.
1. Create a new IP address pool with the interfaces belonging to the nodes you want to move. For more information, see Add an IP address pool.
2. Verify that the new IP address pool functions properly by connecting to the nodes you want to move with IP addresses from the new pool.
3. Delete the old IP address pool. For more information, see Delete an IP address pool.

Reassign a node to another external subnet


You can move a node interface to a different subnet. Nodes can be reassigned to other subnets. 1. Click Cluster Management > Network Configuration. 2. In the External Settings area, click the subnet containing the node that you want to modify. 3. In the IP Address Pools area, click Edit next to the Pool members area.

4. Reassign the interface members that you want to move by dragging and dropping them from one column to the other, or by clicking an interface member and using the left-arrow and right-arrow buttons.

NIC and LNI aggregation options


Network interface card (NIC) and logical network interface (LNI) mapping options can be configured for aggregation. The following list provides guidelines for interpreting the aggregation options.
- Nodes support multiple network card configurations.
- LNI numbering corresponds to the physical positioning of the NIC ports as found on the back of the node. LNI mappings are numbered from left to right.
- Aggregated LNIs are listed in the order in which they are aggregated at the time they are created.
- NIC names correspond to the network interface name as shown in command-line interface tools such as ifconfig and netstat.

A node with two Gigabit Ethernet interfaces:

LNI     NIC   Aggregated LNI            Aggregated NIC   Aggregated NIC (Legacy FEC mode)
ext-1   em0   ext-agg = ext-1 + ext-2   lagg0            fec0
ext-2   em1

A node with four Gigabit Ethernet interfaces:

LNI     NIC   Aggregated LNI                              Aggregated NIC   Aggregated NIC (Legacy FEC mode)
ext-1   em0   ext-agg = ext-1 + ext-2                     lagg0            fec0
ext-2   em1   ext-agg-2 = ext-3 + ext-4                   lagg1            fec1
ext-3   em2   ext-agg-3 = ext-3 + ext-4 + ext-1 + ext-2   lagg2            fec2
ext-4   em3

A node with two Gigabit Ethernet and two 10 Gigabit Ethernet interfaces:

LNI        NIC     Aggregated LNI                       Aggregated NIC   Aggregated NIC (Legacy FEC mode)
ext-1      em0     ext-agg = ext-1 + ext-2              lagg0            fec0
ext-2      em1     10gige-agg-1 = 10gige-1 + 10gige-2   lagg1            fec1
10gige-1   cxgb0
10gige-2   cxgb1
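Because NIC names are the interface names that the operating system reports, you can confirm an aggregated interface from the OneFS command line. The following is a minimal sketch, assuming an aggregation whose NIC name is lagg0 as in the first table above; the interface name on your node may differ:

# List the link state and member ports of the aggregated interface.
ifconfig lagg0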

Configure DNS settings


You can configure the domain name servers and the DNS search list that the cluster uses to resolve host names.
1. Click Cluster Management > Network Configuration.
2. In the DNS Settings area, click Edit.
3. In the Domain name server(s) field, type the IP address of the domain name server that you want to add. This is the domain name server address that the cluster uses to answer all DNS requests. You can specify domain name server addresses in IPv4 or IPv6 format.
4. In the DNS search list field, enter the local domain name.
The domain name you type in the DNS search list field is used for resolving unqualified host names.
5. Click Submit.

Managing external client connections with SmartConnect


You can control which nodes a client connects to when it communicates with the cluster. OneFS enables you to configure which nodes in the cluster a client initially connects to. If you have a SmartConnect Advanced license, you can also configure how clients are redirected if a node becomes unavailable.

Configure client connection balancing


You can configure connection balancing for your cluster's external network connections with SmartConnect. You may have already configured SmartConnect while setting up an external network subnet using the Subnet wizard. However, you can configure or modify connection balancing settings at any time as your networking needs evolve.
Before you begin
You must first enable SmartConnect by setting up a SmartConnect service address on the external network subnet that answers incoming DNS requests.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, click the link for the subnet that you want to configure to use connection balancing.
3. In the Settings area, verify that the SmartConnect service IP was configured. If the SmartConnect service IP field reads Not set, click Edit, and then specify the IP address that DNS requests are directed to.
4. In the SmartConnect settings area, click Edit.
5. In the Zone name field, type a name for the SmartConnect zone that this IP address pool represents. The zone name must be unique among the pools served by the SmartConnect service subnet that is specified in step 7 below.
6. In the Connection policy drop-down list, select the type of connection balancing policy that the IP address pool for this zone uses. The policy determines how SmartConnect distributes incoming DNS requests across the members of an IP address pool. Round Robin is the only connection policy available without an active SmartConnect Advanced license.

Round Robin: Selects the next available node on a rotating basis. This is the default policy if no other policy is selected.

Connection Count: Determines the number of open TCP connections on each available node to optimize cluster usage.

Network Throughput: Uses the overall average throughput volume on each available node to optimize cluster usage.

CPU Usage: Examines average CPU utilization on each available node to optimize cluster usage.

7. In the SmartConnect service subnet list, select the name of the external network subnet whose SmartConnect service answers DNS requests on behalf of the IP address pool.
A pool can have only one SmartConnect service answering DNS requests. If this option is left blank, the IP address pool that the SmartConnect service belongs to is excluded when SmartConnect answers incoming DNS requests for the cluster.
If you have purchased a license for the SmartConnect Advanced module, complete the following steps in the SmartConnect Advanced area.
8. In the IP allocation method list, select the method by which IP addresses are assigned to the member interfaces for this IP address pool.

Client connection settings


The OneFS SmartConnect module balances client connections across all nodes within an Isilon cluster, or across selected nodes.
1. Click Cluster Management > Network Configuration.
2. In the External Network Settings area, next to the SmartConnect settings section, click Edit.
3. Optional: In the Zone name field, enter a name of up to 128 characters for the zone.
4. In the Connection policy list, select a balancing policy. Round Robin is the only connection policy available without an active SmartConnect Advanced license.

Round Robin: Selects the next available node on a rotating basis. This is the default policy if no other policy is selected.

Connection Count: Determines the number of open TCP connections on each available node to optimize cluster usage.

Network Throughput: Uses the overall average throughput volume on each available node to optimize cluster usage.

CPU Usage: Examines average CPU utilization on each available node to optimize cluster usage.

5. In the SmartConnect service subnet field, type the subnet used for this policy.
6. Click Submit.

Managing network interface provisioning rules


You can use provisioning rules to automate the configuration of external network interfaces for new nodes when they are added to a cluster.
When you initially configure a cluster, OneFS creates a default provisioning rule called rule0. This rule adds an external network interface of a newly added node to the default subnet and default IP address pool. You can modify this default provisioning rule and add more provisioning rules as needed.
Provisioning rules specify how new nodes are configured when they are added to a cluster. For example, you can create one provisioning rule that configures new Isilon storage nodes, and another rule that configures new accelerator nodes. Similarly, you can create provisioning rules by model or type, such as the Isilon X-series storage and accelerator nodes.
After the provisioning rules are configured, new nodes are evaluated and configured according to those rules. If the type of the new node (storage-i, accelerator-i, storage, accelerator, or backup-accelerator) matches the type defined in a rule, the new node's interface name is added to the subnet and the IP address pool specified in the rule. OneFS automatically checks new provisioning rules against existing rules to ensure that there are no conflicts.

Create a node provisioning rule


Configure one or more provisioning rules to automate the process of adding new nodes to your Isilon cluster. All Isilon nodes support provisioning rules.
Before you begin
Configure the external network settings for your cluster, especially the subnets and IP address pools. You must also verify that the IP address pool included in the provisioning rule has sufficient IP addresses to accommodate the new node's client connections.
1. Click Cluster Management > Network Configuration.
2. In the Provisioning Rules area, click Add rule.
3. In the Name field, type a unique name for the provisioning rule. The rule name can be up to 32 characters long and can include spaces or other punctuation.
4. Optional: In the Description field, type a descriptive comment about the provisioning rule.
5. In the If node type is list, select the type of node that you want the rule to apply to:

All: Applies the provisioning rule to all types of Isilon nodes that join the cluster.

Storage-i: Applies the provisioning rule only to Isilon i-Series storage nodes that join the cluster.

Accelerator-i: Applies the provisioning rule only to Isilon i-Series performance-accelerator nodes that join the cluster.

Storage: Applies the provisioning rule only to Isilon storage nodes that join the cluster.

Accelerator: Applies the provisioning rule only to performance-accelerator nodes that join the cluster.

Backup-Accelerator: Applies the provisioning rule only to Isilon backup-accelerator nodes that join the cluster.

6. In the then assign interface list, select the interface to assign to the external network subnet and IP address pool for the node specified in the rule:

ext-1: The first external Gigabit Ethernet interface on the cluster.

ext-2: The second external Gigabit Ethernet interface on the cluster.

ext-3: The third external Gigabit Ethernet interface on the cluster.

ext-4: The fourth external Gigabit Ethernet interface on the cluster.

ext-agg: The first and second external Gigabit Ethernet interfaces aggregated together.

ext-agg-2: The third and fourth external Gigabit Ethernet interfaces aggregated together.

ext-agg-3: The first four external Gigabit Ethernet interfaces aggregated together.

ext-agg-4: All six Gigabit Ethernet interfaces aggregated together.

10gige-1: The first external 10 Gigabit Ethernet interface on the cluster.

10gige-2: The second external 10 Gigabit Ethernet interface on the cluster.

10gige-agg-1: The first and second external 10 Gigabit Ethernet interfaces aggregated together.

7. In the Subnet list, select the external subnet that the new node will join.
8. In the Pool list, select the IP address pool that belongs to the subnet and that should be used by the new node.
9. Click Submit.

Modify a node provisioning rule


You can modify a node provisioning rule.
1. Click Cluster Management > Network Configuration.
2. In the Provisioning Rules area, click the name of the rule that you want to modify.
3. Modify the provisioning rule settings as needed.
4. When you have finished modifying the provisioning rule, click Submit.

Delete a node provisioning rule


You can delete a provisioning rule that is no longer necessary.
1. Click Cluster Management > Network Configuration.
2. In the Provisioning Rules area, click Delete next to the rule that you want to delete. A confirmation dialog box appears.
3. Click Yes to delete the rule, or click No to keep the rule.

CHAPTER 14 Hadoop

Hadoop is a flexible, open-source framework for large-scale distributed computation. The OneFS file system can be configured for native support of the Hadoop Distributed File System (HDFS) protocol, enabling your cluster to participate in a Hadoop system. HDFS integration requires a separate license. To obtain additional information or to activate HDFS support for your EMC Isilon cluster, contact your EMC Isilon sales representative.
- Hadoop support overview
- Hadoop cluster integration
- Managing HDFS
- Configure the HDFS protocol
- Create a local user
- Enable or disable the HDFS service


Hadoop support overview


An HDFS implementation adds HDFS to the list of protocols that can be used to access the OneFS file system. Implementing HDFS on an Isilon cluster does not create a separate HDFS file system. The cluster can continue to be accessed through NFS, SMB, FTP, and HTTP. The HDFS implementation from Isilon is a lightweight protocol layer between the OneFS file system and HDFS clients. Unlike with a traditional HDFS implementation, files are stored in the standard POSIX-compatible file system on an Isilon cluster. This means files can be accessed by the standard protocols that OneFS supports, such as NFS, SMB, FTP, and HTTP as well as HDFS. Files that will be processed by Hadoop can be loaded by using standard Hadoop methods, such as hadoop fs -put, or they can be copied by using an NFS or SMB mount and accessed by HDFS as though they were loaded by Hadoop methods. Also, files loaded by Hadoop methods can be read with an NFS or SMB mount. The supported versions of Hadoop are as follows:
- Apache Hadoop 0.20.203.0
- Apache Hadoop 0.20.205
- Cloudera (CDH3 Update 3)
- Greenplum HD 1.1
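To illustrate the protocol interoperability described above, the following sketch loads a file with a standard Hadoop method and then reads the same file over an NFS mount. The host name, mount point, and file paths are hypothetical, and the example assumes the default HDFS root path of /ifs:

# From a Hadoop compute node: load a file by using a standard Hadoop method.
hadoop fs -put results.csv /data/results.csv

# From an NFS client: mount the cluster and read the same file through the
# POSIX namespace (/data under the HDFS root maps to /ifs/data).
mount -t nfs cluster.example.com:/ifs/data /mnt/isilon
cat /mnt/isilon/results.csv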

Hadoop cluster integration


To enable native HDFS support in OneFS, you must integrate the Isilon cluster with a cluster of Hadoop compute nodes. This process requires configuration of the Isilon cluster as well as each Hadoop compute node that needs access to the cluster.

Managing HDFS
To keep the HDFS service performing efficiently on a OneFS cluster, you will need to be familiar with the user and system configuration options available as part of an HDFS implementation. There are two different methods that you can use to manage an HDFS implementation:
- Hadoop client machines are configured directly through their Hadoop installation directory.
- A secure shell (SSH) connection to a node in the Isilon cluster is used to configure the HDFS service.

Configure the HDFS protocol


You can specify which HDFS distribution to use, and you can set the logging level, the root path, the Hadoop block size, and the number of available worker threads. You configure HDFS by running the isi hdfs command in the OneFS command-line interface.
1. Open a secure shell (SSH) connection to any node in the cluster and log in by using the root account.
You can combine multiple options in a single isi hdfs command. For command usage and syntax, run the isi hdfs -h command.


2. To specify which distribution of the HDFS protocol to use, run the isi hdfs command with the --force-version option. Valid values are listed below. Note that these values are case-sensitive.
- AUTO: Attempts to match the distribution that is being used by the Hadoop compute node.
- APACHE_0_20_203: Uses the Apache Hadoop 0.20.203 release.
- APACHE_0_20_205: Uses the Apache Hadoop 0.20.205 release.
- CLOUDERA_CDH3: Uses version 3 of Cloudera's distribution, which includes Apache Hadoop.
- GREENPLUM_HD_1_1: Uses the Greenplum HD 1.1 distribution.

For example, the following command forces OneFS to use version 0.20.203 of the Apache Hadoop distribution:
isi hdfs --force-version=APACHE_0_20_203

3. To set the default logging level for the Hadoop daemon across the cluster, run the isi hdfs command with the --log-level option. Valid values are listed below, in descending order from the highest to the lowest logging level. The default value is NOTICE. The values are case-sensitive.
- EMERG: A panic condition. This is normally broadcast to all users.
- ALERT: A condition that should be corrected immediately, such as a corrupted system database.
- CRIT: Critical conditions, such as hard device errors.
- ERR: Errors.
- WARNING: Warning messages.
- NOTICE: Conditions that are not error conditions, but may need special handling.
- INFO: Informational messages.
- DEBUG: Messages that contain information typically of use only when debugging a program.

For example, the following command sets the log level to WARNING:
isi hdfs --log-level=WARNING

4. To set the path on the cluster to present as the HDFS root directory, run the isi hdfs command with the --root-path option. Valid values include any directory path beginning at /ifs, which is the default HDFS root directory. For example, the following command sets the root path to /ifs/hadoop:
isi hdfs --root-path=/ifs/hadoop

5. To set the Hadoop block size, run the isi hdfs command with the --block-size option. Valid values are 4KB to 1GB. The default value is 64MB. For example, the following command sets the block size to 32 MB:
isi hdfs --block-size=32MB

6. To tune the number of worker threads that HDFS uses, run the isi hdfs command with the --num-threads option. Valid values are 1 to 256 or auto, which is calculated as twice the number of cores. The default value is auto.

For example, the following command specifies 8 worker threads:


isi hdfs --num-threads=8

7. To allocate IP addresses from an IP address pool, run the isi hdfs command with the --add-ip-pool option. Valid values are in the form <subnet>:<pool>.

For example, the following command allocates IP addresses from a pool named "pool2," which is in the "subnet0" subnet:
isi hdfs --add-ip-pool=subnet0:pool2
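Because multiple options can be combined in a single isi hdfs command, the settings from the preceding steps can also be applied at once. The following is a minimal sketch that reuses the example values from above; adjust the values for your environment:

# Set the HDFS root path, block size, and worker-thread count in one command.
isi hdfs --root-path=/ifs/hadoop --block-size=32MB --num-threads=auto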

Create a local user


To access files on OneFS by using the HDFS protocol, you must first create a local Hadoop user that maps to a user on a Hadoop client.
1. Open a secure shell (SSH) connection to any node in the cluster and log in by using the root user account.
2. At the command prompt, run the isi auth users create command to create a local user. For example, isi auth users create --name="user1".
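As a further hedged sketch, if jobs on a Hadoop compute node run under an account named hduser (a hypothetical account name), you could create a matching local user on the cluster:

# Create a local OneFS user that maps to the Hadoop client account.
isi auth users create --name="hduser"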

Enable or disable the HDFS service


The HDFS service, which is enabled by default after you activate an HDFS license, can be enabled or disabled by running the isi services command.
1. Open a secure shell (SSH) connection to any node in the cluster and log in by using the root user account.
2. At the command prompt, run the isi services command to enable or disable the HDFS service, isi_hdfs_d.
- To enable the HDFS service, run the following command:
isi services isi_hdfs_d enable
- To disable the HDFS service, run the following command:
isi services isi_hdfs_d disable
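To confirm the result, you can list the cluster services and check the state reported for isi_hdfs_d. This sketch assumes the isi services -a option, which lists all services rather than only the enabled ones:

# List all services and filter for the HDFS daemon.
isi services -a | grep isi_hdfs_d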


CHAPTER 15 Antivirus

OneFS enables you to scan the file system for computer viruses and other security threats on an Isilon cluster by integrating with third-party scanning services through the Internet Content Adaptation Protocol (ICAP). OneFS sends files through ICAP to a server running third-party antivirus scanning software. These servers are referred to as ICAP servers. ICAP servers scan files for viruses. After an ICAP server scans a file, it informs OneFS of whether the file is a threat. If a threat is detected, OneFS informs system administrators by creating an event, displaying near real-time summary information, and documenting the threat in an antivirus scan report. You can configure OneFS to request that ICAP servers attempt to repair infected files. You can also configure OneFS to protect users against potentially dangerous files by truncating or quarantining infected files. Before OneFS sends a file to be scanned, it ensures that the scan is not redundant. If a file has not been modified, OneFS will not send the file to be scanned unless the virus database on the ICAP server has been updated since the last scan.
- On-access scanning
- Antivirus policy scanning
- Individual file scanning
- Antivirus scan reports
- ICAP servers
- Supported ICAP servers
- Antivirus threat responses
- Configuring global antivirus settings
- Managing ICAP servers
- Create an antivirus policy
- Managing antivirus policies
- Managing antivirus scans
- Managing antivirus threats
- Managing antivirus reports


On-access scanning
You can configure OneFS to send files to be scanned before they are opened, after they are closed, or both. Sending files to be scanned after they are closed is faster but less secure. Sending files to be scanned before they are opened is slower but more secure.

If OneFS is configured to ensure that files are scanned after they are closed, when a user creates or modifies a file on the cluster, OneFS queues the file to be scanned. OneFS then sends the file to an ICAP server to be scanned when convenient. In this configuration, users can always access their files without any delay. However, it is possible that after a user modifies or creates a file, a second user might request the file before the file is scanned. If a virus was introduced to the file from the first user, the second user will be able to access the infected file. Also, if an ICAP server is unable to scan a file, the file will still be accessible to users.

If OneFS ensures that files are scanned before they are opened, when a user attempts to download a file from the cluster, OneFS first sends the file to an ICAP server to be scanned. The file is not sent to the user until the scan is complete. Scanning files before they are opened is more secure than scanning files after they are closed, because users can access only scanned files. However, scanning files before they are opened requires users to wait for files to be scanned. You can also configure OneFS to deny access to files that cannot be scanned by an ICAP server, which can increase the delay. For example, if no ICAP servers are available, users will not be able to access any files until the ICAP servers become available again.

If you configure OneFS to ensure that files are scanned before they are opened, it is recommended that you also configure OneFS to ensure that files are scanned after they are closed. Scanning files as they are both opened and closed will not necessarily improve security, but it will usually improve data availability when compared to scanning files only when they are opened. If a user wants to access a file, the file may have already been scanned after the file was last modified, and will not need to be scanned again provided that the ICAP server database has not been updated since the last scan.

Antivirus policy scanning


You can create antivirus scanning policies that send files that are contained in a specific directory to be scanned. Antivirus policies can be run manually at any time, or configured to run according to a schedule. Antivirus policies target a specific directory on the cluster. You can prevent an antivirus policy from sending certain files within the specified root directory based on the size, name, or extension of the file. Antivirus policies do not target snapshots. Only on-access scans include snapshots. Antivirus scans are handled by the OneFS job engine, and function the same as any system job.

Individual file scanning


You can send a specific file to an ICAP server to be scanned at any time. This is useful if, for example, a virus is detected in a file but the ICAP server is unable to repair it. After the virus database has been updated, you can send the file to the ICAP server again, and the ICAP server might then be able to repair the file. You can also use individual file scanning to verify that the cluster and ICAP server are properly configured to interact with each other.


Antivirus scan reports


OneFS generates a report about antivirus scans. Each time that an antivirus policy is run, OneFS generates a report for that policy. OneFS also generates a report every 24 hours that includes all on-access scans that occurred during the day. Antivirus scan reports contain the following information:
- The time that the scan started.
- The time that the scan ended.
- The total number of files scanned.
- The total size of the files scanned.
- The total network traffic sent.
- The network throughput that was consumed by virus scanning.
- Whether the scan succeeded.
- The total number of infected files detected.
- The names of infected files.
- The threats associated with infected files.
- How OneFS responded to detected threats.
- The name and IP address of the user that triggered the scan. This information is not included in reports triggered by antivirus scan policies.

ICAP servers
The number of ICAP servers that are required to support an Isilon cluster depends on how virus scanning is configured, the amount of data the cluster processes, and the processing power of the ICAP servers.
If you intend to scan files only according to antivirus scan policies, it is recommended that you have a minimum of two ICAP servers for a cluster. If you intend to scan files on access, it is recommended that you have at least one ICAP server for each node in the cluster.
If you configure more than one ICAP server for a cluster, OneFS distributes files to the ICAP servers on a rotating basis and does not adjust the distribution based on the processing power of the ICAP servers. Because of this, it is important to ensure that the processing power of each ICAP server is relatively equal; if one server is significantly more powerful than another, OneFS does not send more files to the more powerful server.

Supported ICAP servers


You can scan your Isilon cluster for viruses using a supported ICAP server. OneFS supports ICAP servers running the following antivirus scanning software:
- Symantec Scan Engine 5.2 and later
- Trend Micro Interscan Web Security Suite 3.1 and later
- Kaspersky Anti-Virus for Proxy Server 5.5 and later
- McAfee VirusScan Enterprise 8.7 and later, with VirusScan Enterprise for Storage 1.0 and later


Antivirus threat responses


You can configure the system to repair, quarantine, or truncate any files in which the ICAP server detects viruses.
OneFS and ICAP servers can react in one or more of the following ways when threats are detected:

Alert: All threats that are detected cause an event to be generated in OneFS at the warning level, regardless of the threat response configuration.

Repair: The ICAP server attempts to repair the infected file before returning the file to OneFS.

Quarantine: OneFS quarantines the infected file. A quarantined file cannot be accessed by any user. However, a quarantined file can be removed from quarantine by the root user if the root user is connected to the cluster through secure shell (SSH). If you back up your cluster through NDMP backup, quarantined files remain quarantined when the files are restored. If you replicate quarantined files to another Isilon cluster, the quarantined files continue to be quarantined on the target cluster. Quarantines operate independently of access control lists (ACLs).

Truncate: OneFS truncates the infected file. When a file is truncated, OneFS reduces the size of the file to zero bytes to render the file harmless.

You can configure OneFS and ICAP servers to react in one of the following ways when threats are detected:

Repair or quarantine: Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS quarantines the file. If the ICAP server repairs the file successfully, OneFS sends the file to the user. Repair or quarantine can be useful if you want to protect users from accessing infected files while retaining all data on a cluster.

Repair or truncate: Attempts to repair infected files. If an ICAP server fails to repair a file, OneFS truncates the file. If the ICAP server repairs the file successfully, OneFS sends the file to the user. Repair or truncate can be useful if you are not concerned with maintaining all data on your cluster and you want to free storage space. However, data in infected files will be lost.

Alert only: Only generates an event for each infected file. It is recommended that you do not apply this setting.

Repair only: Attempts to repair infected files. Afterwards, OneFS sends the files to the user, whether or not the ICAP server repaired the files successfully. It is recommended that you do not apply this setting. If you only attempt to repair files, users can still access infected files that the ICAP server fails to repair.

Quarantine: Quarantines all infected files. It is recommended that you do not apply this setting. If you quarantine files without attempting to repair them, you might deny access to infected files that could have been repaired.

Truncate: Truncates all infected files. It is recommended that you do not apply this setting. If you truncate files without attempting to repair them, you might delete data unnecessarily.


Configuring global antivirus settings


You can configure global antivirus settings that are applied to all antivirus scans, unless otherwise specified.

Exclude files from antivirus scans


You can prevent files from being scanned by antivirus policies.
1. Click Data Protection > Antivirus > Settings.
2. In the File size restriction area, specify whether to exclude files from being scanned based on size.
- Click Scan all files regardless of size.
- Click Only scan files smaller than the maximum file size and specify a maximum file size.
3. In the Filename restrictions area, specify whether to exclude files from being scanned based on file names and extensions.
- Click Scan all files.
- Click Only scan files with the following extensions or filenames.
- Click Scan all files except those with the following extensions or filenames.

4. Optional: If you chose to exclude files based on file names and extensions, specify the criteria by which files will be selected.
a. In the Extensions area, click Edit list, and specify extensions.
b. In the Filenames area, click Edit list, and specify filenames.
You can specify the following wildcards:

* : Matches any string in place of the asterisk. For example, specifying "m*" would match "movies" and "m123".

[] : Matches any characters contained in the brackets, or a range of characters separated by a dash. For example, specifying "b[aei]t" would match "bat", "bet", and "bit", and specifying "1[4-7]2" would match "142", "152", "162", and "172". You can exclude characters within brackets by following the first bracket with an exclamation mark; for example, specifying "b[!ie]" would match "bat" but not "bit" or "bet". You can match a bracket within a bracket if it is either the first or last character; for example, specifying "[[c]at" would match "cat" and "[at".

- : You can match a dash within a bracket if it is either the first or last character. For example, specifying "car[-s]" would match "cars" and "car-".

? : Matches any character in place of the question mark. For example, specifying "t?p" would match "tap", "tip", and "top".
5. Click Submit.

Configure on-access scanning settings


You can configure OneFS to automatically scan files as they are accessed by users. On-access scans operate independently of antivirus policies.
1. Click Data Protection > Antivirus > Settings.
2. In the On Access Scans area, specify whether you want files to be scanned as they are accessed.
- To require that all files be scanned before they are opened by a user, select Scan files when they are opened, and then specify whether you want to allow access to files that cannot be scanned.
- To scan files after they are closed, select Scan files when they are closed.
3. In the Directories to be scanned area, specify the directories that you want to apply on-access settings to. If no directories are specified, on-access scanning settings are applied to all files. If you specify a directory, only files from the specified directories will be scanned as they are accessed.
4. Click Submit.

Configure antivirus threat response settings


You can configure how OneFS handles files in which threats are detected.
1. Click Data Protection > Antivirus > Settings.
2. In the Action on detection area, specify how you want OneFS to react to potentially infected files.

Configure antivirus report retention settings


You can configure how long antivirus reports are kept on the cluster before they are automatically deleted by OneFS.
1. Click Data Protection > Antivirus > Settings.
2. In the Reports area, specify the period of time that you want OneFS to keep reports.


Enable or disable antivirus scanning


You can enable or disable all antivirus scanning.
1. Click Data Protection > Antivirus > Summary.
2. In the Service area, enable or disable antivirus scanning.
- Click Enable.
- Click Disable.

Managing ICAP servers


Before you can send files to be scanned on an ICAP server, you must configure OneFS to connect to the server. You can test, modify, and remove an ICAP server connection. You can also temporarily disconnect and reconnect to an ICAP server.

Add and connect to an ICAP server


You can add and connect to an ICAP server. After a server is added, OneFS can send files to the server to be scanned for viruses.
1. Click Data Protection > Antivirus > Summary.
2. In the ICAP Servers area, click Add server.
3. In the Add ICAP Server dialog box, in the ICAP URL field, type the IP address of the ICAP server that you want to connect to.
4. In the Description field, type a description of this ICAP server.
5. Click Submit.
The ICAP server is displayed in the ICAP Servers table.
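As a point of reference, ICAP addresses are often written as full ICAP URLs of the form defined in RFC 3507, where 1344 is the protocol's default port and the service path depends on the antivirus vendor. The address below is illustrative only:

icap://192.0.2.10:1344/avscan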

Test an ICAP server connection


You can test the connection between the cluster and an ICAP server.
1. Click Data Protection > Antivirus > Summary.
2. In the ICAP Servers table, in the row of the ICAP server whose connection you want to test, click Test connection.
If the connection test succeeds, the Status column displays a green icon. If the connection test fails, the Status column displays a red icon.

Modify ICAP connection settings


You can modify the IP address and optional description of ICAP server connections.
1. Click Data Protection > Antivirus > Summary.
2. In the ICAP Servers table, in the row of the ICAP server whose connection settings you want to modify, click Edit.
3. Modify settings as needed, and then click Submit.

Temporarily disconnect from an ICAP server


If you want to prevent OneFS from sending files to an ICAP server but want to retain the ICAP server connection settings, you can temporarily disconnect from the ICAP server.
1. Click Data Protection > Antivirus > Summary.

2. In the ICAP Servers table, in the row of the ICAP server that you want to temporarily disconnect from, click Disable.

Reconnect to an ICAP server


You can reconnect to an ICAP server that you have temporarily disconnected from.
1. Click Data Protection > Antivirus > Summary.
2. In the ICAP Servers table, in the row of the ICAP server that you want to reconnect to, click Enable.

Remove an ICAP server


If you are no longer using an ICAP server connection, you can permanently disconnect from the ICAP server.
1. Click Data Protection > Antivirus > Summary.
2. In the ICAP Servers table, in the row of the ICAP server that you want to remove, click Delete.

Create an antivirus policy


You can create an antivirus policy that causes specific files to be scanned for viruses each time the policy is run.
1. Click Data Protection > Antivirus > Policies.
2. Click Add policy.
3. In the Name field, type a name for this antivirus policy.
4. Click Add directory and select a directory that you want to scan. Optionally, repeat this step to specify multiple directories.
5. In the Restrictions area, specify whether you want to enforce the file size and name restrictions specified by the global antivirus settings.
- Click Enforce file size and filename restrictions.
- Click Scan all files within the root directories.
6. In the Run policy area, specify whether you want to run the policy according to a schedule or manually. Scheduled policies can also be run manually at any time.
- To run the policy only manually, click Manually.
- To run the policy according to a schedule:
a. Click Scheduled.
b. In the Interval area, specify on what days you want the policy to run.
c. In the Frequency area, specify how often you want the policy to run on the specified days.
7. Click Submit.


Managing antivirus policies


You can modify and delete antivirus policies. You can also temporarily disable antivirus policies if you want to retain the policy but do not want to scan files.

Modify an antivirus policy


You can modify an antivirus policy.
1. Click Data Protection > Antivirus > Policies.
2. In the Policies table, click the name of the antivirus policy that you want to modify.
3. Modify settings as needed, and then click Submit.

Delete an antivirus policy


You can delete an antivirus policy.
1. Click Data Protection > Antivirus > Policies.
2. In the Policies table, in the row of the antivirus policy that you want to delete, click Delete.

Enable or disable an antivirus policy


You can temporarily disable an antivirus policy if you want to retain the policy but do not want to scan files.
1. Click Data Protection > Antivirus > Policies.
2. In the Policies table, in the row of an antivirus policy, enable or disable the policy.
- Click Enable.
- Click Disable.

View antivirus policies


You can view antivirus policies.
1. Click Data Protection > Antivirus > Policies.
2. In the Policies table, view antivirus policies.

Managing antivirus scans


You can scan multiple files for viruses by manually running an antivirus policy, or scan an individual file without an antivirus policy. You can also stop antivirus scans.

Scan a file
You can manually scan an individual file for viruses.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Run the isi avscan manual command.
For example, the following command scans /ifs/data/virus_file:
isi avscan manual /ifs/data/virus_file


Manually run an antivirus policy


You can manually run an antivirus policy at any time.
1. Click Data Protection > Antivirus > Policies.
2. In the Policies table, in the row of the policy that you want to run, click Start.

Stop a running antivirus scan


You can stop a running antivirus scan.
1. Click Data Protection > Antivirus > Summary.
2. In the Currently Running table, in the row of the antivirus scan that you want to stop, click Cancel.

Managing antivirus threats


You can repair, quarantine, or truncate files in which threats are detected. If you believe that a quarantined file is no longer a threat, you can rescan the file or remove the file from quarantine.

Manually quarantine a file


You can quarantine a file to prevent the file from being accessed by users.
1. Click Data Protection > Antivirus > Detected Threats.
2. In the Detected Threats table, in the row of a file, click Quarantine.

Rescan a file
You can rescan a file for viruses if, for example, you believe that the file is no longer a threat.
1. Click Data Protection > Antivirus > Detected Threats.
2. In the Detected Threats table, in the row of a file, click Rescan.

Remove a file from quarantine


You can remove a file from quarantine if, for example, you believe that the file is no longer a threat.
1. Click Data Protection > Antivirus > Detected Threats.
2. In the Detected Threats table, in the row of a file, click Restore.

Manually truncate a file


If a threat is detected in a file, and the file is irreparable and no longer needed, you can truncate the file.
1. Click Data Protection > Antivirus > Detected Threats.
2. In the Detected Threats table, in the row of a file, click Truncate. The Confirm dialog box appears.
3. Click Yes.

View threats
You can view files that are identified as threats by an ICAP server.
1. Click Data Protection > Antivirus > Detected Threats.
2. In the Detected Threats table, view potentially infected files.

Antivirus threat information


You can view information about the antivirus threats that OneFS detects.

Status: Displays an icon that indicates the status of the detected threat. The icon appears in one of the following colors:
- Red: OneFS did not take any action on the file.
- Orange: The file was truncated.
- Yellow: The file was quarantined.

Threat: Displays the name of the detected threat as it is recognized by the ICAP server.

Filename: Displays the name of the file.

Directory: Displays the directory in which the file is located.

Remediation: Indicates how OneFS responded to the file when the threat was detected. If OneFS did not quarantine or truncate the file, Infected appears.

Detected: Displays the time that the file was detected.

Policy: Displays the name of the antivirus policy that detected the threat. If the threat was detected as a result of a manual antivirus scan of an individual file, Manual scan appears.

Currently: Displays the current state of the file.

File size: Displays the size of the file, in bytes. Truncated files display a size of zero bytes.

Managing antivirus reports


In addition to viewing antivirus reports through the web administration interface, you can export reports to a comma-separated values (CSV) file. You can also view events that are related to antivirus activity.

Export an antivirus report


You can export an antivirus report to a comma-separated values (CSV) file.
1. Click Data Protection > Antivirus > Reports.
2. In the Reports table, in the row of a report, click Export.
3. Save the CSV file.

View antivirus reports


You can view antivirus reports.
1. Click Data Protection > Antivirus > Reports.
2. In the Reports table, in the row of a report, click View Details.


View antivirus events


You can view events that relate to antivirus activity.
1. Click Dashboard > Events > Event History.
2. In the Event History table, view all events.
All events related to antivirus scans are classified as warnings. The following events are related to antivirus activities:

Anti-Virus scan found threats: A threat was detected by an antivirus scan. These events do not provide threat details, but refer to specific reports on the Antivirus Reports page.

No ICAP Servers available: OneFS is unable to communicate with any ICAP servers.

ICAP Server Unresponsive or Invalid: OneFS is unable to communicate with an ICAP server.


CHAPTER 16 iSCSI

As an alternative to file-based storage, block-based storage is a flexible way to store and access nearly any type of data. The Isilon iSCSI module enables you to provide block storage for Microsoft Windows, Linux, and VMware systems over an IP network. The Isilon iSCSI module requires a separate license. To obtain additional information about the iSCSI module or to activate the module for your cluster, contact your EMC Isilon sales representative.
The Isilon iSCSI module enables you to create and manage iSCSI targets on an Isilon cluster. The targets become available as SCSI block devices on which clients can store structured and unstructured data. iSCSI targets contain one or more logical units, each uniquely identified by a logical unit number (LUN), that clients can connect to and format by using their local file systems, as they would a physical disk device. You can configure separate data protection levels for each logical unit with Isilon FlexProtect or data mirroring. For basic access control, you can configure each target to limit access to a list of initiators. You can also require initiators to authenticate with a target by using the Challenge-Handshake Authentication Protocol (CHAP).
The iSCSI module includes the following features:
- Support for using a Microsoft Internet Storage Name Service (iSNS) server
- Isilon SmartConnect Advanced dynamic IP allocation
- Isilon FlexProtect
- Data mirroring from 2x to 8x
- LUN cloning
- One-way CHAP authentication
- Initiator access control

This chapter covers the following topics:
- iSCSI targets and LUNs
- iSNS client service
- Access control for iSCSI targets
- iSCSI considerations and limitations
- Supported SCSI mode pages
- Supported iSCSI initiators
- Configuring the iSCSI and iSNS services
- Create an iSCSI target
- Managing iSCSI targets
- Configuring iSCSI initiator access control
- Creating iSCSI LUNs
- Managing iSCSI LUNs


iSCSI targets and LUNs


A logical unit defines a storage object (such as a disk or disk array) that is made accessible by an iSCSI target on an Isilon cluster. Each logical unit is uniquely identified by a logical unit number (LUN). Although a LUN is an identifier for a logical unit, the terms are often used interchangeably. A logical unit must be associated with a target, and each target can contain one or more logical units. The Isilon iSCSI module supports the following three types of LUNs:

Normal: This is the default LUN type for clone LUNs and imported LUNs, and the only type available for newly created LUNs. Normal LUNs can be either writeable or read-only.

Snapshot: A snapshot LUN is a copy of a normal LUN or another snapshot LUN. Although snapshot LUNs require little time and disk space to create, they are read-only. You can create snapshot LUNs by cloning existing normal or snapshot LUNs, but you cannot create snapshot clones of clone LUNs.

Clone: A clone LUN is a copy of a normal, snapshot, or clone LUN. A clone LUN, which is a compromise between a normal LUN and a snapshot LUN, is implemented using overlay and mask files in conjunction with a snapshot. Clone LUNs require little time and disk space to create, and the LUN is fully writeable. You can create clone LUNs by cloning or importing existing LUNs.

Using SmartConnect with iSCSI targets


When connecting to iSCSI targets, the initiator can specify a SmartConnect service IP or virtual IP address. When an initiator connects to a target with the SmartConnect service, the iSCSI session is redirected to a node in the cluster, based on the SmartConnect connection policy settings. The default connection policy setting is round robin. If the SmartConnect service is configured to use dynamic IP allocation, which requires a SmartConnect Advanced license key, connections are redirected to other nodes in case of failure.

iSNS client service


The Internet Storage Name Service (iSNS) protocol is used by iSCSI initiators to discover and configure iSCSI targets. The iSCSI module supports the Microsoft iSNS server for target discovery. The iSNS server establishes a repository of active iSCSI nodes, which can then be used as initiators or targets. Additionally, the iSNS client service can be enabled, disabled, configured, and tested through the iSCSI module in OneFS.

Access control for iSCSI targets


The iSCSI module supports Challenge-Handshake Authentication Protocol (CHAP) and initiator access control for connections to individual targets. The CHAP and initiator access control security options can be implemented together or used separately.

CHAP authentication
The iSCSI module supports the Challenge-Handshake Authentication Protocol (CHAP) to authenticate initiator connections to iSCSI targets. You can restrict initiator access to a target by enabling CHAP authentication and then adding user:secret pairs to the target's CHAP secrets list. Enabling CHAP authentication requires initiators to provide a valid user:secret pair to authenticate their connections to the target. CHAP authentication is disabled by default. The Isilon iSCSI module does not support mutual CHAP authentication.

Initiator access control


You can control which initiators are allowed to connect to a target by enabling initiator access control and configuring the target's initiator access list. By default, initiator access control is disabled, and all initiators are allowed to access the target. You can restrict access to a target by enabling access control and then adding initiators to the target's initiator access list. If you enable access control but leave the initiator access list empty, no initiators are able to access the target.

iSCSI considerations and limitations


When planning your iSCSI deployment, be aware of the following limitations and considerations.
- Multipath I/O (MPIO) is recommended only for iSCSI workflows with primarily read-only operations, because the node must invalidate the data cache on all other nodes during file-write operations and because performance decreases in proportion to the number of write operations. If all MPIO sessions are connected to the same node, performance should not decrease.
- The Isilon iSCSI module supports one-way Challenge-Handshake Authentication Protocol (CHAP), with the target authenticating the initiator. The authentication configuration is shared by all of the nodes, so a target authenticates its initiator regardless of the node that the initiator is connecting through. Mutual CHAP authentication between an initiator and a target is not supported.
- The Isilon iSCSI module supports the importing of normal LUNs only; importing snapshot LUNs and clone LUNs is not supported.
- You cannot back up and then restore a snapshot or clone LUN, or replicate snapshot or clone LUNs to another cluster. It is recommended that you deploy a backup application to back up iSCSI LUNs on the iSCSI client, as the backup application ensures that the LUN is in a consistent state at the time of backup.
- The Isilon iSCSI module does not support the following:
  - Internet Protocol Security (IPsec)
  - Multiple connections per session (MCS)
  - iSCSI host bus adapters (HBAs)


Supported SCSI mode pages


The SCSI Mode Sense command is used to obtain device information from mode pages in a target device. The Mode Select command is used to set new values. OneFS supports the following mode pages:

Mode page name                               Page code   Subpage code
Caching mode page*                           08h         00h
Return all mode pages only                   3Fh         00h
Control mode page**                          0Ah         00h
Informational exceptions control mode page   1Ch         00h

* For the caching mode page, OneFS supports the write cache enable (WCE) parameter only.
** OneFS supports querying this mode page through the Mode Sense command, but does not support changing the fields of this page through the Mode Select command.
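As a hedged illustration from the initiator side, the sg3_utils package on a Linux client can query a mode page such as the caching mode page once a LUN is connected. The device name is hypothetical, and the exact option syntax may vary by sg3_utils version:

# Read mode page 08h (caching) from the connected iSCSI block device.
sg_modes -p 08 /dev/sdb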

Supported iSCSI initiators


OneFS 7.0.0 or later is compatible with the following iSCSI initiators:

Operating system                             iSCSI initiator
Microsoft Windows 2003 (32-bit and 64-bit)   Microsoft iSCSI Initiator 2.08 or later (Certified)
Microsoft Windows 2008 (32-bit and 64-bit)   Microsoft iSCSI Initiator (Certified)
Microsoft Windows 2008 R2 (64-bit only)      Microsoft iSCSI Initiator (Certified)
Red Hat Enterprise Linux 5                   Linux Open-iSCSI Initiator (Supported)
VMware ESX 4.0 and ESX 4.1                   iSCSI Initiator (Certified)
VMware ESXi 4.0 and ESXi 4.1                 iSCSI Initiator (Certified)
VMware ESXi 5.0                              iSCSI Initiator (Certified)

Configuring the iSCSI and iSNS services


You can enable, disable, and configure the iSCSI service and the iSNS client service. The iSNS client service is used by iSCSI initiators to discover targets. These settings are applied to all of the nodes in the cluster. You cannot modify these settings for individual nodes.

Configure the iSCSI service


You can enable or disable the iSCSI service for all the nodes in a cluster. Before you disable the iSCSI service, be aware of the following considerations:
- All of the current iSCSI sessions will be terminated for all the nodes in the cluster.
- Initiators cannot establish new sessions until the iSCSI service is re-enabled.


1. Click File System Management > iSCSI > Settings.
2. In the iSCSI Service area, set the service state that you want:
- If the service is disabled, you can enable it by clicking Enable.
- If the service is enabled, you can disable it by clicking Disable.

Configure the iSNS client service


You can configure and enable or disable the Internet Storage Name Service (iSNS), which iSCSI initiators use to discover targets.
1. Click File System Management > iSCSI > Settings.
2. In the iSNS Client Service area, configure the iSNS client service settings:
- iSNS server address: Type the IP address of the iSNS server with which you want to register iSCSI target information.
- iSNS server port: Type the iSNS server port number. The default port number is 3205.
3. Click Test connection to validate the iSNS configuration settings. If the connection to the iSNS server fails, check the iSNS server address and the iSNS server port number.
4. Click Submit.
5. Change the service to the state that you want:
- If the service is disabled, you can enable it by clicking Enable. Enabling the service allows OneFS to register information about iSCSI targets.
- If the service is enabled, you can disable it by clicking Disable. Disabling the service prevents OneFS from registering information about iSCSI targets.

View iSCSI sessions and throughput


If the iSCSI service is enabled on the cluster, you can view a summary of current iSCSI sessions and current throughput. To view historic iSCSI throughput data, you must obtain the EMC Isilon InsightIQ virtual appliance, which requires a separate license. For more information, contact your EMC Isilon representative.
1. Click File System Management > iSCSI > Summary.
2. Review the current throughput data and current session information.
- The Current Throughput area displays a chart that illustrates overall inbound and outbound throughput across all iSCSI sessions during the past hour, measured in kilobits per second (Kbps). This chart automatically updates every 15 seconds.
- The Current Sessions area displays information about each current connection between an initiator and a target, including the client and target IP addresses; node, target, and LUN; operations per second; and the inbound, outbound, and total throughput in bits per second. You can view details about a target by clicking the target name.

Create an iSCSI target


You can configure one or more iSCSI targets, each with its own settings for initiator access control and authentication. A target is required as a container object for one or more logical units.
1. Click File System Management > iSCSI > Targets & Logical Units.
Configure the iSNS client service
299

iSCSI

2. In the Targets area, click Add target. 3. In the Name field, type a name for the target. The name must begin with a letter and can contain only lowercase letters, numbers, and hyphens (-). 4. In the Description field, type a descriptive comment for the target. 5. In the Default path field, type the full path of the directory, beginning with /ifs, where the logical unit number (LUN) directory is created, or click Browse to select a directory. This directory is used only if no other directory is specified during LUN creation or if a LUN is not created. The directory must be in the /ifs directory tree. The full path to the directory is required, and wildcard characters are not supported. 6. Add one or more SmartConnect pools for the target to connect with. This setting overrides any global default SmartConnect pools that are configured for iSCSI targets. a. For the SmartConnect pool(s) setting, click Edit list. b. Move pools between the Available Pools and Selected Pools lists by clicking a pool and then clicking the right or left arrow. To remove all selected pools at once, click clear. c. Click OK. 7. Click Submit. 8. Optional: In the Initiator Access Control area, enable and configure the settings for initiator access control. a. Click Enable to restrict target access to initiators that are added to the initiator access control settings. b. Click Add initiator. c. In the Initiator name field, type the name of the initiator that you want to allow to access this target, or click Browse to select from a list of initiators. An initiator name must begin with an iqn. prefix. d. Click OK. To continue adding initiators, click OK and add another. When you are finished adding initiators, click OK. 9. Optional: In the CHAP Authentication area, enable and configure Challenge-Handshake Authentication Protocol (CHAP) settings. If CHAP authentication is enabled and the CHAP secrets list is empty, no initiators can access the target. a. Click Enable to require initiators to authenticate with the target. b. Click Add username. c. In the Username field, type the name that the initiator will use to authenticate with the target. You can specify an initiator's iSCSI qualified name (IQN) as the username. Depending on whether you specify an IQN, valid usernames differ in the following ways: If you specify an IQN as the username, the Username value must begin with an iqn. prefix. The characters that are allowed after the iqn. prefix are alphanumeric characters, periods (.), hyphens (-), and colons (:). All other usernames can use alphanumeric characters, periods (.), hyphens (-), and underscores (_).
300

OneFS 7.0 Administration Guide

iSCSI

CHAP usernames and passwords are case-sensitive. d. In the Secret and Confirm secret fields, type the secret that the initiator will use to authenticate with the target. A CHAP secret must be 12 to 16 characters long and can contain any combination of letters, numbers, and symbols. e. Click OK. 10. Click Submit.
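Once the target exists, a Linux initiator can discover and log in to it with standard Open-iSCSI commands. A minimal sketch; the node IP and target IQN below are placeholders for your environment, and this assumes the standard iSCSI port (TCP 3260):

    # Discover targets exposed by a cluster node, then log in to one of them.
    iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260
    iscsiadm -m node -T iqn.2000-01.com.example:mytarget -p 192.0.2.10:3260 --login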

Managing iSCSI targets


You can configure one or more targets for an iSCSI server, and each target can contain one or more logical units. The targets define connection endpoints and serve as container objects for logical units on an iSCSI server. iSCSI initiators on clients establish connections to the targets. You can control access to the target by configuring SmartConnect pools, initiator access control, and authentication with the Challenge-Handshake Authentication Protocol (CHAP). To discover targets, the iSCSI module works with a server running Microsoft Internet Storage Name Service (iSNS).

Modify iSCSI target settings


You can modify a target's description, change the path where logical unit directories are created, and modify the list of SmartConnect pools that the target uses. You can also manage the target's settings for initiator access control and authentication.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target that you want to modify.
3. Modify the target's settings as needed.
   - Changing the default path does not affect existing logical units.
   - Changing the security settings does not affect existing connections.
4. Click Submit.

Delete an iSCSI target


When you delete a target, all of the logical unit numbers (LUNs) that are contained in the target are also deleted, along with all the data stored in those LUNs. Additionally, any iSCSI sessions that are connected to the target are terminated. This operation cannot be undone.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Delete for the target that you want to delete.
3. In the confirmation dialog box, click Yes.
The target and all LUNs and LUN data that are contained in the target are deleted, and any iSCSI sessions on the target are terminated.

View iSCSI target settings


You can view information about a target, including its iSCSI qualified name (IQN), default LUN directory path, capacity, and SmartConnect pool settings. You can also view the logical units that are associated with the target, as well as the settings for initiator access control and authentication.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, click the name of a target.
3. Review the following sections for information about the target. To modify these settings, click Edit target.
   - Target Details: Displays the target name, IQN, description, default path, capacity, and SmartConnect pool settings. The name and IQN cannot be modified.
   - Logical Units: Displays any logical units that are contained in the target. You can add or import a logical unit, or manage existing logical units. You can also select the columns to display or hide.
   - Allowed Initiators: Displays the target's initiator access control status, and lists the names of any initiators that are allowed to access the target when access control is enabled.
   - CHAP Authentication: Displays the target's CHAP authentication status, and lists all user:secret pairs for the target.

Configuring iSCSI initiator access control


You can configure access control to specify which initiators are allowed to connect to a target. If initiator access control is enabled for an iSCSI target, access to that target is limited to a specified list of allowed initiators. Access control is disabled by default. Modifications to a target's access control settings are applied to subsequent connection requests. Current connections are not affected.

Configure iSCSI initiator access control


You can configure access control to specify which initiators are allowed to connect to a target. If initiator access control is enabled for an iSCSI target, access is limited to a list of allowed initiators; access control is disabled by default. Modifications to a target's access control settings are applied to subsequent connection requests. Current connections are not affected.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target whose initiator access state you want to change.
3. In the Initiator Access Control area, configure the access control state.
   - If access control is disabled, click Enable to restrict target access to initiators that you add to the initiator access list. If you enable access control and the initiator access list is empty, no initiators are able to connect to the target.
   - If access control is enabled, click Disable to allow all initiators access to the target. If you disable access control, the list of allowed initiators is ignored.


4. Add initiators by clicking Add initiator.

Control initiator access to a target


You can control access to a target by adding initiators to its initiator access list. If you enable initiator access control, the initiator access list specifies which initiator names are allowed to access the target. However, the initiator access list is ignored unless initiator access control is enabled.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target that you want to allow an initiator to access.
3. In the Initiator Access Control area, click Add initiator.
4. In the Initiator name field, type the name of the initiator that you want to allow to access the target, or click Browse to select from a list of known initiators. An initiator name requires the iqn. prefix.
5. Click OK.
6. To continue adding initiators, click OK and add another.
7. When you are finished adding initiators, click OK.

Modify initiator name


You can rename or replace an initiator that is allowed to connect to a target when access control is enabled.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target that you want to modify.
3. In the Initiator Access Control area, click Edit for the initiator that you want to modify.
4. Modify the initiator name.
5. Click OK.

Remove an initiator from the access list


You can remove an initiator from a target's initiator access list so that the initiator is no longer able to connect to a target when access control is enabled.

If you remove all of the allowed initiators for a target and access control is enabled, the target will deny new connections until you disable access control. Removing an allowed initiator for a target does not affect the initiator's access to other targets.

1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target that you want to modify.
3. In the Initiator Access Control area, under Actions, click Delete for the initiator that you want to remove from the access list.
4. In the confirmation dialog box, click Yes.


Create a CHAP secret


To use CHAP authentication, you must create user:secret pairs in the target's CHAP secrets list and enable CHAP authentication. Initiators must then authenticate with the target by providing a user:secret pair.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target that you want to create a CHAP secret for.
3. In the CHAP Authentication area, click Add username.
4. In the Username field, type the name that the initiator uses to authenticate with the target. You can specify an initiator's iSCSI qualified name (IQN) as the username.
   - If you specify an IQN as the username, the Username value must begin with an iqn. prefix. The characters that are allowed after the iqn. prefix are alphanumeric characters, periods (.), hyphens (-), and colons (:).
   - All other usernames can use alphanumeric characters, periods (.), hyphens (-), and underscores (_).
   CHAP usernames and passwords are case-sensitive.
5. In the Secret and Confirm secret fields, type the secret that the initiator will use to authenticate with the target. A CHAP secret must be 12 to 16 characters long and can contain any combination of letters, numbers, and symbols.
6. Click OK.
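On the initiator side, the matching CHAP credentials must be configured before login. The following Open-iSCSI sketch assumes a target record that has already been discovered; the IQN, portal address, username, and secret are placeholders and must match a user:secret pair created on the target:

    # Point the node record at CHAP, supply the credentials, then log in.
    TGT="iqn.2000-01.com.example:mytarget"; PORTAL="192.0.2.10:3260"
    iscsiadm -m node -T "$TGT" -p "$PORTAL" -o update \
      -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T "$TGT" -p "$PORTAL" -o update \
      -n node.session.auth.username -v chapuser
    iscsiadm -m node -T "$TGT" -p "$PORTAL" -o update \
      -n node.session.auth.password -v "secret12345678"
    iscsiadm -m node -T "$TGT" -p "$PORTAL" --login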

Modify a CHAP secret


You can modify the settings for a CHAP user:secret pair.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target that you want to modify a CHAP user:secret pair for.
3. In the CHAP Authentication area, under Actions, click Edit for the username whose settings you want to modify.
4. Make the changes that you want, and then click OK.

Delete a CHAP secret


You can delete a CHAP user:secret pair that is no longer needed. If you delete all of a target's CHAP secrets while CHAP authentication is enabled, no initiators are able to access the target until you disable CHAP authentication.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target that you want to delete a CHAP user:secret pair for.
3. In the CHAP Authentication area, under Actions, click Delete for the CHAP user:secret pair that you want to delete.


4. In the confirmation dialog box, click Yes.

Enable or disable CHAP authentication


You can enable or disable CHAP authentication for a target. Modifications to a target's CHAP authentication status are applied to subsequent connection requests. Current connections are unaffected.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Targets area, under Actions, click Edit for the target whose CHAP authentication state you want to modify.
3. In the CHAP Authentication area, configure the target's CHAP authentication state.
   - If CHAP authentication is disabled, you can click Enable to require initiators to authenticate with the target. If CHAP authentication is enabled and the CHAP secrets list is empty, no initiator is able to access the target.
   - If CHAP authentication is enabled, click Disable to stop authenticating initiators with the target. If CHAP authentication is disabled, the CHAP secrets list is ignored.
4. Add CHAP user:secret pairs by clicking Add username.

Creating iSCSI LUNs


There are two methods by which you can add a LUN: you can create a new LUN, or you can clone an existing LUN. When you create a LUN, you can set its target assignment, LUN number, directory path, size, provisioning policy, access state, write access, protection settings, and I/O optimization settings. The LUN number uniquely identifies the logical unit.
To clone a LUN, you must enable the Isilon SnapshotIQ module. SnapshotIQ is a separately licensed feature. For more information, contact your EMC Isilon sales representative. Cloned copies of logical units can be part of the same target or a different target.
LUN cloning, like LUN creation, is asynchronous, and the status of in-progress cloning is monitored in the same way as LUN creation. A cloned LUN is inaccessible to iSCSI initiators until the cloning is complete.

Create an iSCSI LUN


You can create a logical unit and assign it to an iSCSI target for access. When you create a logical unit, you must assign it to an existing iSCSI target. Each target can contain one or more logical units.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Logical Units area, click Add logical unit.
3. In the Add Logical Unit area, in the Description field, type a descriptive comment for the logical unit.

4. From the Target list, select the target that will contain the logical unit.
5. Select one of the LUN number options:
   - To assign the next available number to the logical unit, click Automatic. This is the default setting.
   - To manually assign a number to the logical unit, click Manual and then, in the Number field, type an integer value. The value must be within the range 0-255 and must not be assigned to another logical unit within the target.
   By default, the LUN number forms part of the directory name that is created for storing the LUN data.
6. To manually specify the path where the LUN directory is created, in the Path field, type the full path of the directory, beginning with /ifs, or click Browse to select the directory. The directory must be in the /ifs directory tree. You must specify the full path to the directory, and wildcard characters are not allowed. The default path is /ifs/iscsi/ISCSI.LUN.<TargetName>.<LUNnumber>, where <TargetName> is the Target value and <LUNnumber> is the LUN number.
7. In the Size field, specify the LUN capacity by typing an integer value and then selecting a unit of measure from the list (MB, GB, or TB). The minimum LUN size is 1 MB. The maximum LUN size is determined by the OneFS file system. After you create a LUN, you can increase its size, but you cannot decrease it.
8. Select one of the Provisioning options:
   - To specify that blocks are unallocated until they are written, click Thin provision.
   - To immediately allocate all the blocks, click Pre-allocate space. This is the default setting. Allocation of all the blocks for a large LUN can take hours or even days.

9. Select one of the LUN access options:
   - To make the LUN accessible, click Online. This is the default setting.
   - To make the LUN inaccessible, click Offline.
10. Select one of the Write access options:
   - To allow iSCSI initiators to write to the LUN, click Read-Write. This is the default setting.
   - To prevent iSCSI initiators from writing to the LUN, click Read-Only.

11. Under Protection Settings, from the Disk pool list, select the disk pool to contain the logical unit.
12. From the SSD strategy list, select a strategy to use if solid-state drives (SSDs) are available:
   - Metadata read acceleration (Recommended): Writes metadata and all user data on hard disk drives (HDDs) and additionally creates a mirror backup of the metadata on an SSD. Depending on the global namespace acceleration setting, the SSD mirror may be an extra mirror in addition to the number required to satisfy the protection level.
   - Metadata read/write acceleration with performance redundancy (Requires more SSD space): Writes all metadata on an SSD and writes all user data on HDDs.
   - Data on SSDs (Requires most SSD space): Similar to metadata acceleration, but also writes one copy of the file's user data (if mirrored) or all of the data (if not mirrored) on SSDs. Regardless of whether global namespace acceleration is enabled, any SSD blocks reside on the file's target pool if there is room. This SSD strategy does not create additional mirrors beyond the normal protection level.
   - Avoid SSDs (Reduces performance): Never uses SSDs; writes all associated file data and metadata to HDDs only.

13. From the Protection level list, select a protection policy for the logical unit. Select Use iSCSI default (2x), which is the recommended setting for best performance, or one of the mirrored options, such as 2x to 8x.
14. Select one of the Write Cache options:
   - To prevent write caching for files that contain LUN data, click Disabled. This is the recommended setting for LUNs.
   - To allow write caching for files that store LUN data, click Enabled.
   The Write Cache option controls whether file writes are sent to the coalescer or to the endurant cache. With Write Cache disabled, which is the default and recommended setting, all file writes are sent to the endurant cache, which guarantees that committed data is preserved. If Write Cache is enabled, all file writes are sent to the coalescer. Write caching can improve performance, but can lead to data loss if a node loses power or crashes while uncommitted data is in the write cache.
15. Select one of the Data access pattern options:
   - To select a random access pattern, click Random. This is the recommended setting for LUNs.
   - To select a concurrent access pattern, click Concurrency.
   - To select a streaming access pattern, click Streaming. Streaming access patterns can improve performance in some workflows.

16. Click Submit.
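After you submit, a Linux initiator that is already logged in to the target does not necessarily see the new LUN until its sessions are rescanned. A minimal sketch using standard Open-iSCSI and util-linux commands:

    # Rescan all active iSCSI sessions so the initiator detects the new LUN,
    # then list block devices to confirm that it appeared.
    iscsiadm -m session --rescan
    lsblk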

Clone an iSCSI LUN


You can clone an existing LUN to create a new LUN.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Logical Units area, under Actions, click Clone for the logical unit that you want to clone. The page updates and the clone options appear. Most of the fields for these options are populated with information from the logical unit that you selected to clone.
3. From the LUN type list, select Normal, Clone, or Snapshot.
4. Modify the other settings as needed. The settings for the clone vary according to the source LUN type.
5. Click Submit.

iSCSI LUN cloning operations


Depending on the clone LUN type, the contents (or blocks) of a source LUN are either copied or referenced, and the attributes may or may not be copied. In general, clone and snapshot type clone operations are fast, whereas normal type clones can take several minutes or even hours to create, depending on the size of the LUN. The following table describes the result of each cloning operation.


- Normal source, Normal clone: A snapshot of the source LUN is created. The clone LUN is then created by copying the LUN data from the snapshot. After completing the copy, the snapshot is deleted. The copy process may take several hours to complete for large LUNs if the source LUN has a pre-allocated provisioning policy. The copy process may also take several minutes for thinly provisioned LUNs that are significantly used.
- Normal source, Snapshot clone: A snapshot of the source LUN is created. The clone LUN is configured to reference the data from the snapshot. The snapshot is deleted when the clone is deleted.
- Normal source, Clone clone: A snapshot of the source LUN is created. The system then creates a clone LUN that references data from the snapshot.
- Snapshot source, Normal clone: The clone LUN is created by copying the LUN data from the snapshot. The copy process may take several minutes to complete for large LUNs if the source LUN has a pre-allocated provisioning policy. The copy process may also take several minutes for thinly provisioned LUNs that are heavily used.
- Snapshot source, Snapshot clone: The clone LUN is configured to reference the data from the same snapshot that the source LUN references. The underlying snapshot is not deleted when a LUN is deleted unless the LUN being deleted is the last LUN referencing the snapshot.
- Snapshot source, Clone clone: The clone LUN is configured to reference the data from the same snapshot that the source LUN references. The underlying snapshot is not deleted when a LUN is deleted unless the LUN being deleted is the only LUN referencing the snapshot.
- Clone source, Normal clone: A snapshot of the source LUN is created. The clone LUN is then created by copying the LUN data from the snapshot. After completing the copy, the snapshot is deleted. The copy process may take several minutes to complete for large LUNs if the source LUN has a pre-allocated provisioning policy. The copy process may also take several minutes for thinly provisioned LUNs that are heavily used.
- Clone source, Snapshot clone: Not allowed.
- Clone source, Clone clone: A clone of the clone LUN is created. The clone LUN is configured to reference data from the snapshot.

Managing iSCSI LUNs


You can manage a LUN by modifying it, migrating it to another target, viewing its settings, or deleting it. In addition, you can import a LUN that you replicated on a remote cluster. After you create a LUN, you can manage it in the following ways:
- Modify a LUN
- Delete a LUN
- Migrate a LUN to another target
- Import a LUN
- View LUN settings

Modify an iSCSI LUN


You can modify only certain settings for a logical unit.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Logical Units area, under Actions, click Edit for the logical unit that you want to modify.
3. Modify the logical unit's settings.
4. Click Submit.

Delete an iSCSI LUN


Deleting a logical unit permanently deletes all data on the logical unit. This operation cannot be undone.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Logical Units area, under Actions, click Delete for the logical unit that you want to delete.
3. In the confirmation dialog box, click Yes.

Migrate an iSCSI LUN to another target


You can move a logical unit from one target to another, change the value of its logical unit number (LUN), or update the path to the LUN directory. You cannot modify the path of a snapshot LUN.
The name of a logical unit comprises its target name and its LUN value, separated by a colon (such as mytarget:0).
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Logical Units area, under Actions, click Move for the logical unit that you want to move.
3. From the To target list, select a new target for the logical unit.

4. Select one of the To LUN number options:
   - To assign the next available number to the logical unit, click Automatic. This is the default setting.
   - To manually assign a number to the logical unit, click Manual and then, in the Number box, type an integer value. The value must be within the range 0-255 and must not be assigned to another logical unit.
5. To configure the path where the LUN directory is created, in the To path box, type the full path of the directory, or click Browse to select the directory. If a path is not specified, the LUN directory is unchanged from the original directory where the LUN was created.
6. Click Submit.

Import an iSCSI LUN


You can recreate logical units that were replicated to a remote cluster or that were backed up and then restored to a remote cluster. The iSCSI module does not support replicating or restoring logical unit snapshots or clone copies.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Logical Units area, click Import logical unit.
3. In the Description field, type a descriptive comment for the logical unit.
4. In the Source path field, type the full path (beginning with /ifs) of the directory that contains the logical unit that you want to import, or click Browse to select the directory.
5. From the Target list, select the target that will contain the logical unit.
6. Select one of the LUN number options:
   - To assign the next available number to the logical unit, click Automatic. This is the default setting.
   - To manually assign a number to the logical unit, click Manual, and then in the Number field, type an integer value. The value must be within the range 0-255 and must not be assigned to another logical unit.
7. Select one of the LUN access options:
   - To make the LUN accessible, click Online. This is the default setting.
   - To make the LUN inaccessible, click Offline.
8. Select one of the Write access options:
   - To allow iSCSI initiators to write to the LUN, click Read-Write. This is the default setting.
   - To prevent iSCSI initiators from writing to the LUN, click Read-Only.
9. Select one of the caching options:
   - To allow write caching for files storing LUN data, click Enabled.
   - To prevent write caching for files storing LUN data, click Disabled.
10. Click Submit.


View iSCSI LUN settings


You can view information about a logical unit, including its logical unit number (LUN), iSCSI target, LUN type, LUN directory path, iSCSI qualified name, and other settings.
1. Click File System Management > iSCSI > Targets & Logical Units.
2. In the Logical Units area, under Target:LUN, click the name of the logical unit that you want to view.
3. Review the following settings:
   - LUN: Displays the numerical identifier of the logical unit. You can modify the LUN value by using the move operation.
   - Target: Displays the name of the iSCSI target that contains the logical unit. You can modify the target by using the move operation.
   - Description: Displays an optional description for the logical unit. You can modify the description by clicking Edit LUN.
   - Type: Displays the LUN type (normal, clone, or snapshot). You cannot modify this setting.
   - Size: Displays the LUN capacity. You can increase the size of normal or clone LUNs by clicking Edit LUN, but you cannot decrease the size. You cannot modify the size of snapshot LUNs.
   - Status: Displays the connection status (online or offline) and write access permissions (read-only or read-write) of the LUN. You can modify write-access settings for normal or clone LUNs by clicking Edit LUN. You cannot modify write-access settings for snapshot LUNs.
   - Path: Displays the path to the directory where the LUN files are stored. You can change the path for normal or clone LUNs by using the move operation. You cannot modify the path for snapshot LUNs.
   - Disk pool: Displays the disk pool of the LUN. You can modify the disk pool by clicking Edit LUN.
   - Protection level: Displays the mirroring level (such as 2x, 3x, or 4x) or FlexProtect protection policy for the LUN. You can modify the protection policy for normal or clone LUNs by clicking Edit LUN. You cannot modify this setting for snapshot LUNs.
   - Write Cache: Displays whether SmartCache is enabled or disabled. You can change this setting for normal or clone LUNs by clicking Edit LUN. You cannot modify this setting for snapshot LUNs.
   - Data access pattern: Displays the access pattern setting (Random, Concurrency, or Streaming) for the LUN. You can change the access pattern for normal or clone LUNs by clicking Edit LUN. You cannot modify this setting for snapshot LUNs.
   - SCSI name: Displays the iSCSI qualified name (IQN) of the LUN. You cannot modify this setting.
   - EUI: Displays the extended unique identifier (EUI), which uniquely identifies the LUN. You cannot modify this setting.
   - NAA: Displays the LUN's T11 Network Address Authority (NAA) identifier. You cannot modify this setting.
   - Serial number: Displays the serial number of the LUN. You cannot modify this setting.


CHAPTER 17 VMware integration

OneFS integrates with VMware infrastructures, including vSphere, vCenter, and ESXi. VMware integration enables you to view information about and interact with Isilon clusters through VMware applications. OneFS interacts with VMware infrastructures through VMware vSphere API for Storage Awareness (VASA) and VMware vSphere API for Array Integration (VAAI).
OneFS integrates with VMware vCenter through the Isilon for vCenter plug-in, which enables you to locally back up and restore virtual machines on an Isilon cluster. For more information about Isilon for vCenter, see the following documents:
- Isilon for vCenter Release Notes
- Isilon for vCenter Installation Guide
- Isilon for vCenter User Guide

VASA...................................................................................................................314
VAAI....................................................................................................................315
Configuring VASA support...................................................................................315
Disable or re-enable VASA...................................................................................316


VASA
OneFS communicates with VMware vSphere through VMware vSphere API for Storage Awareness (VASA). VASA support enables you to view information about Isilon clusters through vSphere, including Isilon-specific alarms in vCenter. VASA support also enables you to integrate with VMware profile-driven storage by providing storage capabilities for Isilon clusters in vCenter.
For OneFS to communicate with vSphere through VASA, your VMware environment must include ESXi 5.0 or later hypervisors. To configure VASA support, you must access the cluster through the root account. Because SmartLock compliance mode disables root access to the cluster, the cluster must not be running in compliance mode.

Isilon VASA alarms


If the VASA service is enabled on an Isilon cluster and the cluster is added as a VMware vSphere API for Storage Awareness (VASA) vendor provider in vCenter, OneFS generates alarms in vSphere. The following describes the alarm that OneFS generates:

Alarm name: Thin-provisioned LUN capacity exceeded
Description: There is not enough available space on the cluster to allocate space for writing data to thinly provisioned LUNs. If this condition persists, you will not be able to write to the virtual machine on this cluster. To resolve this issue, you must free storage space on the cluster.

VASA storage capabilities


OneFS integrates with VMware vCenter through VMware vSphere API for Storage Awareness (VASA) to display the storage capabilities of Isilon clusters in vCenter. The following storage capabilities are displayed through vCenter:
- Archive: The Isilon cluster is composed of Isilon NL-Series nodes. The cluster is configured for maximum capacity.
- Performance: The Isilon cluster is composed of Isilon i-Series, Isilon X-Series, or Isilon S-Series nodes that contain solid-state drives (SSDs). The cluster is configured for maximum performance. If a cluster is composed of i-Series, X-Series, or S-Series nodes but does not contain SSDs, the cluster is recognized as a capacity cluster.
- Capacity: The Isilon cluster is composed of Isilon X-Series nodes that do not contain SSDs. The cluster is configured for a balance between performance and capacity.
- Hybrid: The Isilon cluster is composed of nodes associated with two or more storage capabilities. For example, if the cluster contains both Isilon S-Series and NL-Series nodes, the storage capability of the cluster is displayed as Hybrid.


VAAI
OneFS uses VMware vSphere API for Array Integration (VAAI) to support offloading specific virtual machine storage and management operations from VMware ESXi hypervisors to an Isilon cluster. VAAI support enables you to accelerate the process of creating virtual machines and virtual disks. For OneFS to interact with your vSphere environment through VAAI, your VMware environment must include ESXi 5.0 or later hypervisors. If you enable VAAI capabilities for an Isilon cluster, when you clone a virtual machine residing on the cluster through VMware, OneFS clones the files related to that virtual machine. For more information on file clones, see Clones.

VAAI support for block storage


OneFS support for VMware vSphere API for Array Integration (VAAI) for block storage is enabled by default. OneFS supports the following VAAI primitives for block storage:
- Hardware Assisted Locking
- Full Copy
- Block Zeroing
OneFS does not support the thin provisioning block reclaim mechanism.
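On an ESXi 5.x host, you can check which VAAI primitives a given device reports as supported. This is a standard esxcli command; the naa identifier below is a placeholder for your LUN's device ID:

    # Show VAAI primitive support (ATS, Clone, Zero, Delete) for one device.
    esxcli storage core device vaai status get -d naa.600000000000000000000000000000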

VAAI support for NAS


To enable OneFS to use VMware vSphere API for Array Integration (VAAI) for NAS, you must install the VAAI NAS plug-in for Isilon on the ESXi server. For more information on the VAAI NAS plug-in for Isilon, see the VAAI NAS plug-in for Isilon Release Notes.
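Plug-in installation is typically performed with esxcli on the ESXi host. A hedged sketch; the bundle path and file name below are placeholders for the package you obtain from EMC Isilon, and the host may need to be rebooted afterward:

    # Install the VAAI NAS plug-in bundle on the ESXi host.
    esxcli software vib install -d /vmfs/volumes/datastore1/isilon-vaai-nas.zip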

Configuring VASA support


To enable VMware vSphere API for Storage Awareness (VASA) support for a cluster, you must enable the VASA daemon on the cluster, download the Isilon vendor provider certificate, and add the Isilon vendor provider in vCenter.

Enable VASA
You must enable an Isilon cluster to communicate with VMware vSphere API for Storage Awareness (VASA) by enabling the VASA daemon.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Enable VASA by running the following command:
   isi services isi_vasa_d enable

Download the Isilon vendor provider certificate


To add an Isilon cluster VASA vendor provider in VMware vCenter, you must use a vendor provider certificate. You can download a vendor provider certificate from an Isilon cluster.
1. In a supported web browser, connect to an Isilon cluster at https://<IPAddress>, where <IPAddress> is the IP address of the Isilon cluster.
2. Retrieve the security certificate and save the certificate to a location on your machine.

For more information about exporting a security certificate, see the documentation for your browser. Record the location where you saved the certificate. You will need this file path when adding the vendor provider in vCenter.
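As an alternative to exporting the certificate through a browser, you can retrieve it from the command line. A sketch using OpenSSL; 192.0.2.10 is a placeholder node IP, and this assumes the web interface serves the certificate on port 443:

    # Fetch the cluster's TLS certificate and save it in PEM format.
    echo | openssl s_client -connect 192.0.2.10:443 2>/dev/null \
      | openssl x509 -out isilon_vasa_cert.pem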

Add the Isilon vendor provider


You must add an Isilon cluster as a vendor provider in VMware vCenter before you can view information about the storage capabilities of the cluster through vCenter.
Before you begin: Download a vendor provider certificate.
1. In vCenter, navigate to the Add Vendor Provider window.
2. Fill out the following fields in the Add Vendor Provider window:
   - Name: Type a name for this VASA provider, specified as any string. For example, type EMC Isilon Systems.
   - URL: Type http://<IPAddress>:8081/vasaprovider, where <IPAddress> is the IP address of a node in the Isilon cluster.
   - Login: Type root.
   - Password: Type the password of the root user.
   - Certificate location: Type the file path of the vendor provider certificate for this cluster.
3. Select the Use Vendor Provider Certificate box.
4. Click OK.

Disable or re-enable VASA


You can disable or re-enable an Isilon cluster to communicate with VMware vSphere through VMware vSphere API for Storage Awareness (VASA).
To disable support for VASA, you must disable both the VASA daemon and the Isilon web administration interface. You will not be able to administer the cluster through an internet browser while the web interface is disabled. To re-enable support for VASA, you must enable both the VASA daemon and the web interface.
1. Open a secure shell (SSH) connection to any node in the cluster and log in.
2. Disable or enable the web interface by running one of the following commands:
   isi services apache2 disable
   isi services apache2 enable
3. Disable or enable the VASA daemon by running one of the following commands:
   isi services isi_vasa_d disable
   isi services isi_vasa_d enable

CHAPTER 18 File System Explorer

The File System Explorer is a web-based interface that enables you to manage the content stored on the cluster. You can use the File System Explorer to navigate the Isilon file system (/ifs), add directories, and manage file and directory properties, including data protection, I/O optimization, and UNIX permissions.
Isilon file system directory permissions are initially set to allow full access for all users. Any user can delete any file, regardless of the permissions on the individual file. Depending on your environment, you might want to establish permission restrictions through the File System Explorer.
You can view and configure file and directory properties from within Windows clients that are connected to the cluster. However, because Windows and UNIX permissions differ from one another, you must be careful not to make any unwanted changes that affect file and directory access.
The File System Explorer displays up to 1000 files in a directory. If more than 1000 files exist within a directory, the files are displayed without additional information, such as file size and last modified date.

Browse the file system........................................................................................318
Create a directory................................................................................................318
Modify file and directory properties.....................................................................318
View file and directory properties........................................................................318
File and directory properties................................................................................319


Browse the file system


You can browse the Isilon file system (/ifs) through the File System Explorer.
1. Navigate to File System Management > File System Explorer.
2. View files and directories.
   - You can expand and collapse directories in the Directories pane.
   - The contents of the selected directory are displayed in the right pane. You can view the contents of another directory by clicking the directory in the Directories pane.

Create a directory
You can create a directory under /ifs through the File System Explorer.
1. Navigate to File System Management > File System Explorer.
2. In the Directories pane, specify where you want to create the directory.
3. Click Add Directory.
4. In the New Directory Properties dialog box, in the Directory name field, type a name for the directory.
5. From the User list, select the owner of the directory.
6. From the Group list, select the group for the directory.
7. In the Permissions table, specify the basic permissions for the directory.
8. Click Submit.

Modify file and directory properties


You can modify the data protection, I/O optimization, and UNIX permission properties of files and directories through the File System Explorer.
1. Navigate to File System Management > File System Explorer.
2. In the Directories pane, click the directory that contains the file or directory that you want to modify properties for.
3. In the right pane, in the row of the file or directory that you want to modify properties for, click Properties.
4. In the Properties dialog box, specify the properties of the file or directory.
5. Click Submit.
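The UNIX permission properties shown in this dialog box correspond to standard ownership and mode settings, so the same changes can also be made from a node's shell. A minimal sketch; the path, user, and group are placeholders:

    # Set the owner, group, and basic permissions on a directory under /ifs.
    chown alice:engineering /ifs/data/projects
    chmod 750 /ifs/data/projects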

View file and directory properties


You can view the data protection, I/O optimization, and UNIX permission properties of files and directories through the File System Explorer.
1. Navigate to File System Management > File System Explorer.
2. In the Directories pane, click the directory that contains the file or directory that you want to view properties for.
3. In the right pane, in the row of the file or directory that you want to view properties for, click Properties.
4. In the Properties dialog box, view the properties of the file or directory.


File and directory properties


Each file and directory is assigned specific data protection, I/O optimization, and UNIX permission properties that you can view through the File System Explorer. The following properties are displayed in the Properties dialog box of the File System Explorer:
Protection Settings
- Settings management: Specifies whether protection settings are managed manually or by SmartPools. If you modify either or both protection settings, this property automatically refreshes to Manually managed. If you specify Managed by SmartPools, the protection settings automatically refresh to match the SmartPools specifications the next time the SmartPools job is run.
- Disk pool: The disk pool whose protection policy is applied if SmartPools is configured to manage protection settings. This property is available only if SmartPools is licensed and enabled on the cluster.
- SSD: The SSD strategy that is used for user data and metadata if solid-state drives (SSDs) are available. The following SSD strategies are available:
  - Metadata acceleration: OneFS creates a mirror backup of file metadata on an SSD and writes the rest of the metadata plus all user data to hard disk drives (HDDs). Depending on the global namespace acceleration setting, the SSD mirror might be an extra mirror in addition to the number required to satisfy the protection level.
  - Avoid SSDs: OneFS does not write data or metadata to SSDs; OneFS writes all data and metadata to HDDs only.
  - Data on SSDs: Similar to metadata acceleration, OneFS creates a mirror backup of file metadata on an SSD and writes the rest of the metadata plus all user data to HDDs. However, OneFS also writes one copy of the file user data (if mirrored) or all of the data (if not mirrored) to SSDs. All SSD blocks reside on the file target pool if there is adequate space available, regardless of whether global namespace acceleration is enabled. OneFS does not create additional mirrors beyond the normal protection level.
- Protection level: The FlexProtect or data mirroring protection policy for this file or directory. If SmartPools is licensed and enabled on the cluster, the default protection policy for files and directories is inherited from the specified disk pool.

I/O Optimization Settings
- Settings management: Specifies whether I/O optimization settings are managed manually or by SmartPools. If you modify either or both I/O optimization settings, this property automatically refreshes to Manually managed. If you specify Managed by SmartPools, the I/O optimization settings automatically refresh to match the SmartPools specifications the next time the SmartPools job is run.
- SmartCache: Specifies whether write caching with SmartCache is enabled for this file or directory.
- Data access pattern: The optimization settings for accessing data. The following data access patterns are available:
  - Concurrency: The file or directory is optimized to support many clients simultaneously.
  - Streaming: The file or directory is optimized for high-speed streaming of a single file. For example, this pattern can be useful if a single client needs to read very quickly from a single file.
  - Random: The file or directory is optimized for unpredictable access.
  The default data access pattern of iSCSI LUNs is the random access pattern. The default data access pattern of other files and directories is the concurrent access pattern.

UNIX Permissions
- User: The owner of the file or directory.
- Group: The group of the file or directory.
- Permissions: The basic permissions for the file or directory.
