Red Hat Ceph Storage 5 File System Guide
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
This guide describes how to configure the Ceph Metadata Server (MDS) and how to create, mount,
and work with the Ceph File System (CephFS). Red Hat is committed to replacing problematic language
in our code, documentation, and web properties. We are beginning with these four terms: master,
slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be
implemented gradually over several upcoming releases. For more details, see our CTO Chris
Wright's message.
Table of Contents
CHAPTER 1. INTRODUCTION TO THE CEPH FILE SYSTEM 5
1.1. CEPH FILE SYSTEM FEATURES AND ENHANCEMENTS 5
1.2. CEPH FILE SYSTEM COMPONENTS 6
1.3. CEPH FILE SYSTEM AND SELINUX 7
1.4. CEPH FILE SYSTEM LIMITATIONS AND THE POSIX STANDARDS 8
1.5. ADDITIONAL RESOURCES 8
CHAPTER 2. THE CEPH FILE SYSTEM METADATA SERVER 10
2.1. PREREQUISITES 10
2.2. METADATA SERVER DAEMON STATES 10
2.3. METADATA SERVER RANKS 10
2.4. METADATA SERVER CACHE SIZE LIMITS 11
2.5. FILE SYSTEM AFFINITY 11
2.6. MANAGEMENT OF MDS SERVICE USING THE CEPH ORCHESTRATOR 12
2.6.1. Prerequisites 12
2.6.2. Deploying the MDS service using the command line interface 12
2.6.3. Deploying the MDS service using the service specification 14
2.6.4. Removing the MDS service using the Ceph Orchestrator 17
2.7. CONFIGURING FILE SYSTEM AFFINITY 18
2.8. CONFIGURING MULTIPLE ACTIVE METADATA SERVER DAEMONS 20
2.9. CONFIGURING THE NUMBER OF STANDBY DAEMONS 21
2.10. CONFIGURING THE STANDBY-REPLAY METADATA SERVER 22
2.11. EPHEMERAL PINNING POLICIES 23
2.12. MANUALLY PINNING DIRECTORY TREES TO A PARTICULAR RANK 23
2.13. DECREASING THE NUMBER OF ACTIVE METADATA SERVER DAEMONS 24
2.14. ADDITIONAL RESOURCES 26
CHAPTER 3. DEPLOYMENT OF THE CEPH FILE SYSTEM 27
3.1. PREREQUISITES 27
3.2. LAYOUT, QUOTA, SNAPSHOT, AND NETWORK RESTRICTIONS 27
3.3. CREATING CEPH FILE SYSTEMS 28
3.4. ADDING AN ERASURE-CODED POOL TO A CEPH FILE SYSTEM 32
3.5. CREATING CLIENT USERS FOR A CEPH FILE SYSTEM 35
3.6. MOUNTING THE CEPH FILE SYSTEM AS A KERNEL CLIENT 37
3.7. MOUNTING THE CEPH FILE SYSTEM AS A FUSE CLIENT 41
3.8. ADDITIONAL RESOURCES 45
CHAPTER 4. MANAGEMENT OF CEPH FILE SYSTEM VOLUMES, SUB-VOLUME GROUPS, AND SUB-VOLUMES 47
4.1. CEPH FILE SYSTEM VOLUMES 47
4.1.1. Creating a file system volume 47
4.1.2. Listing file system volume 48
4.1.3. Removing a file system volume 48
4.2. CEPH FILE SYSTEM SUBVOLUME GROUPS 49
4.2.1. Creating a file system subvolume group 49
4.2.2. Listing file system subvolume groups 50
4.2.3. Fetching absolute path of a file system subvolume group 50
4.2.4. Creating snapshot of a file system subvolume group 51
4.2.5. Listing snapshots of a file system subvolume group 52
4.2.6. Removing snapshot of a file system subvolume group 52
4.2.7. Removing a file system subvolume group 53
4.3. CEPH FILE SYSTEM SUBVOLUMES 54
CHAPTER 5. CEPH FILE SYSTEM ADMINISTRATION 69
5.1. PREREQUISITES 69
5.2. USING THE CEPHFS-TOP UTILITY 69
5.3. USING THE MDS AUTOSCALER MODULE 71
5.4. UNMOUNTING CEPH FILE SYSTEMS MOUNTED AS KERNEL CLIENTS 72
5.5. UNMOUNTING CEPH FILE SYSTEMS MOUNTED AS FUSE CLIENTS 72
5.6. MAPPING DIRECTORY TREES TO METADATA SERVER DAEMON RANKS 73
5.7. DISASSOCIATING DIRECTORY TREES FROM METADATA SERVER DAEMON RANKS 74
5.8. ADDING DATA POOLS 75
5.9. TAKING DOWN A CEPH FILE SYSTEM CLUSTER 76
5.10. REMOVING A CEPH FILE SYSTEM 78
5.11. USING THE CEPH MDS FAIL COMMAND 79
5.12. CLIENT FEATURES 80
5.13. CEPH FILE SYSTEM CLIENT EVICTIONS 81
5.14. BLOCKLIST CEPH FILE SYSTEM CLIENTS 82
5.15. MANUALLY EVICTING A CEPH FILE SYSTEM CLIENT 82
5.16. REMOVING A CEPH FILE SYSTEM CLIENT FROM THE BLOCKLIST 83
5.17. ADDITIONAL RESOURCES 84
CHAPTER 6. NFS CLUSTER AND EXPORT MANAGEMENT 86
6.1. PREREQUISITES 86
6.2. CREATING AN NFS CLUSTER 86
6.3. CUSTOMIZING AN NFS CONFIGURATION 87
6.4. EXPORTING CEPH FILE SYSTEM NAMESPACES OVER THE NFS PROTOCOL (LIMITED AVAILABILITY) 89
6.5. MODIFYING THE CEPH FILE SYSTEM EXPORTS 93
6.6. CREATING CUSTOM CEPH FILE SYSTEM EXPORTS 96
6.7. DELETING CEPH FILE SYSTEM EXPORTS 98
6.8. DELETING AN NFS CLUSTER 98
CHAPTER 7. CEPH FILE SYSTEM QUOTAS 100
7.1. PREREQUISITES 100
7.2. CEPH FILE SYSTEM QUOTAS 100
7.3. VIEWING QUOTAS 100
7.4. SETTING QUOTAS 101
7.5. REMOVING QUOTAS 102
CHAPTER 8. FILE AND DIRECTORY LAYOUTS 104
8.1. PREREQUISITES 104
8.2. OVERVIEW OF FILE AND DIRECTORY LAYOUTS 104
8.3. SETTING FILE AND DIRECTORY LAYOUT FIELDS 104
8.4. VIEWING FILE AND DIRECTORY LAYOUT FIELDS 105
8.5. VIEWING INDIVIDUAL LAYOUT FIELDS 106
8.6. REMOVING DIRECTORY LAYOUTS 107
8.7. ADDITIONAL RESOURCES 108
CHAPTER 9. CEPH FILE SYSTEM SNAPSHOTS 109
9.1. PREREQUISITES 109
9.2. CEPH FILE SYSTEM SNAPSHOTS 109
9.3. CREATING A SNAPSHOT FOR A CEPH FILE SYSTEM 109
9.4. ADDITIONAL RESOURCES 111
CHAPTER 10. CEPH FILE SYSTEM SNAPSHOT SCHEDULING 112
10.1. PREREQUISITES 112
10.2. CEPH FILE SYSTEM SNAPSHOT SCHEDULES 112
10.3. ADDING A SNAPSHOT SCHEDULE FOR A CEPH FILE SYSTEM 113
10.4. ADDING A SNAPSHOT SCHEDULE FOR CEPH FILE SYSTEM SUBVOLUME 115
10.5. ACTIVATING SNAPSHOT SCHEDULE FOR A CEPH FILE SYSTEM 117
10.6. ACTIVATING SNAPSHOT SCHEDULE FOR A CEPH FILE SYSTEM SUB VOLUME 118
10.7. DEACTIVATING SNAPSHOT SCHEDULE FOR A CEPH FILE SYSTEM 118
10.8. DEACTIVATING SNAPSHOT SCHEDULE FOR A CEPH FILE SYSTEM SUB VOLUME 119
10.9. REMOVING A SNAPSHOT SCHEDULE FOR A CEPH FILE SYSTEM 119
10.10. REMOVING A SNAPSHOT SCHEDULE FOR A CEPH FILE SYSTEM SUB VOLUME 120
10.11. REMOVING SNAPSHOT SCHEDULE RETENTION POLICY FOR A CEPH FILE SYSTEM 121
10.12. REMOVING SNAPSHOT SCHEDULE RETENTION POLICY FOR A CEPH FILE SYSTEM SUB VOLUME 122
10.13. ADDITIONAL RESOURCES 122
CHAPTER 11. CEPH FILE SYSTEM MIRRORS 123
11.1. PREREQUISITES 123
11.2. CEPH FILE SYSTEM MIRRORING 123
11.3. CONFIGURING A SNAPSHOT MIRROR FOR A CEPH FILE SYSTEM 123
11.4. VIEWING THE MIRROR STATUS FOR A CEPH FILE SYSTEM 127
APPENDIX A. HEALTH MESSAGES FOR THE CEPH FILE SYSTEM 130
APPENDIX B. METADATA SERVER DAEMON CONFIGURATION REFERENCE 133
APPENDIX C. JOURNALER CONFIGURATION REFERENCE 148
APPENDIX D. CEPH FILE SYSTEM CLIENT CONFIGURATION REFERENCE 150
CHAPTER 1. INTRODUCTION TO THE CEPH FILE SYSTEM
The Ceph File System has the following features and enhancements:
Scalability
The Ceph File System is highly scalable due to horizontal scaling of metadata servers and direct client
reads and writes with individual OSD nodes.
Shared File System
The Ceph File System is a shared file system so multiple clients can work on the same file system at
once.
Multiple File Systems
Starting with Red Hat Ceph Storage 5, you can have multiple file systems active on one storage
cluster. Each CephFS has its own set of pools and its own set of Metadata Server (MDS) ranks.
Deploying multiple file systems requires more running MDS daemons. This can increase
metadata throughput, but also increases operational costs. You can also limit client access to certain
file systems.
High Availability
The Ceph File System provides a cluster of Ceph Metadata Servers (MDS). One is active and others
are in standby mode. If the active MDS terminates unexpectedly, one of the standby MDS becomes
active. As a result, client mounts continue working through a server failure. This behavior makes the
Ceph File System highly available. In addition, you can configure multiple active metadata servers.
Configurable File and Directory Layouts
The Ceph File System allows users to configure file and directory layouts to use multiple pools, pool
namespaces, and file striping modes across objects.
POSIX Access Control Lists (ACL)
The Ceph File System supports the POSIX Access Control Lists (ACL). ACLs are enabled by default
with the Ceph File Systems mounted as kernel clients with kernel version kernel-3.10.0-327.18.2.el7
or newer. To use an ACL with the Ceph File Systems mounted as FUSE clients, you must enable
them.
Client Quotas
The Ceph File System supports setting quotas on any directory in a system. The quota can restrict
the number of bytes or the number of files stored beneath that point in the directory hierarchy.
CephFS client quotas are enabled by default.
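As an illustration, quotas are set with virtual extended attributes on a mounted CephFS directory; the path and limits below are hypothetical:
setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/projects    # 100 GB byte limit
setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/projects           # 10,000 file limit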
IMPORTANT
Additional Resources
See the Management of MDS service using the Ceph Orchestrator section in the Operations
Guide to install Ceph Metadata servers.
See the Deployment of the Ceph File System section in the File System Guide to create Ceph
File Systems.
Clients
The CephFS clients perform I/O operations on behalf of applications using CephFS, such as ceph-
fuse for FUSE clients and kcephfs for kernel clients. CephFS clients send metadata requests to an
active Metadata Server. In return, the CephFS client learns of the file metadata, and can begin safely
caching both metadata and file data.
Metadata Servers (MDS)
The MDS does the following:
Caches hot metadata to reduce requests to the backing metadata pool store.
Coalesces metadata mutations to a compact journal with regular flushes to the backing
metadata pool.
The diagram below shows the component layers of the Ceph File System.
The bottom layer represents the underlying core storage cluster components:
Ceph OSDs (ceph-osd) where the Ceph File System data and metadata are stored.
Ceph Metadata Servers (ceph-mds) that manage Ceph File System metadata.
Ceph Monitors (ceph-mon) that manage the master copy of the cluster map.
The Ceph Storage protocol layer represents the Ceph native librados library for interacting with the
core storage cluster.
The CephFS library layer includes the CephFS libcephfs library that works on top of librados and
represents the Ceph File System.
The top layer represents two types of Ceph clients that can access the Ceph File Systems.
The diagram below shows more details on how the Ceph File System components interact with each
other.
Additional Resources
See the Management of MDS service using the Ceph Orchestrator section in the File System
Guide to install Ceph Metadata servers.
See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System
Guide to create Ceph File Systems.
SELinux support applies to the Ceph File System Metadata Server (MDS), the CephFS File System in User
Space (FUSE) clients, and the CephFS kernel clients.
Additional Resources
See Using SELinux on Red Hat Enterprise Linux 8 for more information about SELinux.
If a client’s attempt to write a file fails, the write operations are not necessarily atomic. That is,
the client might call the write() system call on a file opened with the O_SYNC flag with an 8 MB
buffer, then terminate unexpectedly, and the write operation might be only partially applied.
Almost all file systems, even local file systems, have this behavior.
In situations when the write operations occur simultaneously, a write operation that exceeds
object boundaries is not necessarily atomic. For example, writer A writes "aa|aa" and writer B
writes "bb|bb" simultaneously, where "|" is the object boundary, and "aa|bb" is written rather
than the proper "aa|aa" or "bb|bb".
POSIX includes the telldir() and seekdir() system calls that allow you to obtain the current
directory offset and seek back to it. Because CephFS can fragment directories at any time, it is
difficult to return a stable integer offset for a directory. As such, calling the seekdir() system
call to a non-zero offset might often work but is not guaranteed to do so. Calling seekdir() to
offset 0 will always work. This is equivalent to the rewinddir() system call.
Sparse files propagate incorrectly to the st_blocks field of the stat() system call. Because CephFS
does not explicitly track which parts of a file are allocated or written, the st_blocks field is
always populated by the file size divided by the block size. This behavior causes utilities,
such as du, to overestimate used space.
When the mmap() system call maps a file into memory on multiple hosts, write operations are
not coherently propagated to caches of other hosts. That is, if a page is cached on host A, and
then updated on host B, the page on host A is not coherently invalidated.
CephFS clients present a hidden .snap directory that is used to access, create, delete, and
rename snapshots. Although this directory is excluded from the readdir() system call, any
process that tries to create a file or directory with the same name returns an error. The name of
this hidden directory can be changed at mount time with the -o snapdirname=.<new_name>
option or by using the client_snapdir configuration option.
Additional Resources
See the Management of MDS service using the Ceph Orchestrator section in the File System
Guide to install Ceph Metadata servers.
See the Deployment of the Ceph File System section in the Red Hat Ceph Storage File System
Guide to create Ceph File Systems.
If you want to use NFS Ganesha as an interface to the Ceph File System with Red Hat
OpenStack Platform, see the CephFS through NFS-Ganesha Installation in the Deploying the
Shared File Systems service with CephFS through NFS Guide.
CHAPTER 2. THE CEPH FILE SYSTEM METADATA SERVER
2.1. PREREQUISITES
A running and healthy Red Hat Ceph Storage cluster.
Installation of the Ceph Metadata Server daemons (ceph-mds). See the Management of MDS
service using the Ceph Orchestrator section in the Red Hat Ceph Storage File System Guide for
details on configuring MDS daemons.
Active — manages metadata for files and directories stored on the Ceph File System.
Standby — serves as a backup, and becomes active when an active MDS daemon becomes
unresponsive.
By default, a Ceph File System uses only one active MDS daemon. However, systems with many clients
benefit from multiple active MDS daemons.
You can configure the file system to use multiple active MDS daemons so that you can scale metadata
performance for larger workloads. The active MDS daemons dynamically share the metadata workload
when metadata load patterns change. Note that systems with multiple active MDS daemons still require
standby MDS daemons to remain highly available.
NOTE
To change the value of mds_beacon_grace, add this option to the Ceph configuration
file and specify the new value.
Ranks define how the metadata workload is shared between multiple Metadata Server (MDS) daemons.
The number of ranks is the maximum number of MDS daemons that can be active at one time. Each
MDS daemon handles a subset of the CephFS metadata that is assigned to that rank.
Each MDS daemon initially starts without a rank. The Ceph Monitor assigns a rank to the daemon. The
MDS daemon can only hold one rank at a time. Daemons only lose ranks when they are stopped.
The actual number of ranks in the CephFS is only increased if a spare daemon is available to accept the
new rank.
Rank States
Ranks can be:
Damaged - A rank that is damaged; its metadata is corrupted or missing. Damaged ranks are
not assigned to any MDS daemons until the operator fixes the problem, and uses the ceph
mds repaired command on the damaged rank.
A memory limit: Use the mds_cache_memory_limit option. Red Hat recommends a value
between 8 GB and 64 GB for mds_cache_memory_limit. Setting more cache can cause issues
with recovery. This limit is approximately 66% of the desired maximum memory use of the MDS.
IMPORTANT
Red Hat recommends using memory limits instead of inode count limits.
Inode count: Use the mds_cache_size option. By default, limiting the MDS cache by inode
count is disabled.
In addition, you can specify a cache reservation by using the mds_cache_reservation option for MDS
operations. The cache reservation is limited as a percentage of the memory or inode limit and is set to
5% by default. The intent of this parameter is to have the MDS maintain an extra reserve of memory for
its cache for new metadata operations to use. As a consequence, the MDS should in general operate
below its memory limit because it will recall old state from clients to drop unused metadata in its cache.
Be aware that the cache limit is not a hard limit. Potential bugs in the CephFS client or MDS or
misbehaving applications might cause the MDS to exceed its cache size. The
mds_health_cache_threshold option configures the storage cluster health warning message, so that
operators can investigate why the MDS cannot shrink its cache.
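As a minimal sketch, both limits can be adjusted centrally with the ceph config command; the 16 GB value is only illustrative:
ceph config set mds mds_cache_memory_limit 17179869184   # 16 GB cache memory limit
ceph config set mds mds_cache_reservation 0.05           # keep the default 5% cache reservation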
Additional Resources
See the Metadata Server daemon configuration reference section in the Red Hat Ceph Storage
File System Guide for more information.
You can configure a Ceph File System (CephFS) to prefer one Ceph MDS over another Ceph MDS. For
example, you might have an MDS running on newer, faster hardware that you want to give preference to
over a standby MDS running on older, perhaps slower, hardware. You can specify
this preference by setting the mds_join_fs option, which enforces this file system affinity. Ceph
Monitors give preference to MDS standby daemons with mds_join_fs equal to the file system name
with the failed rank. The standby-replay daemons are selected before choosing another standby
daemon. If no standby daemon exists with the mds_join_fs option, then the Ceph Monitors will choose
an ordinary standby for replacement or any other available standby as a last resort. The Ceph Monitors
will periodically examine Ceph File Systems to see if a standby with a stronger affinity is available to
replace the Ceph MDS that has a lower affinity.
Additional Resources
See the Configuring file system affinity section in the Red Hat Ceph Storage File System Guide
for details.
2.6.1. Prerequisites
A running Red Hat Ceph Storage cluster.
2.6.2. Deploying the MDS service using the command line interface
Using the Ceph Orchestrator, you can deploy the Metadata Server (MDS) service using the placement
specification in the command line interface. Ceph File System (CephFS) requires one or more MDS.
NOTE
Ensure you have at least two pools, one for Ceph file system (CephFS) data and one for
CephFS metadata.
Prerequisites
Procedure
Example
2. There are two ways of deploying MDS daemons using placement specification:
Method 1
Use ceph fs volume to create the MDS daemons. This creates the CephFS volume and pools
associated with the CephFS, and also starts the MDS service on the hosts.
Syntax
NOTE
Example
[ceph: root@host01 /]# ceph fs volume create test --placement="2 host01 host02"
Method 2
Create the pools, CephFS, and then deploy MDS service using placement specification:
Syntax
Example
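A minimal sketch of this step; the pool names and placement group counts are illustrative:
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64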
b. Create the file system for the data pools and metadata pools:
Syntax
Example
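A sketch of this step, assuming the pools created above and the illustrative file system name test:
ceph fs new test cephfs_metadata cephfs_data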
Syntax
Example
[ceph: root@host01 /]# ceph orch apply mds test --placement="2 host01 host02"
Verification
Example
Example
Syntax
Example
Additional Resources
See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph
File System (CephFS).
NOTE
Ensure you have at least two pools, one for the Ceph File System (CephFS) data and one
for the CephFS metadata.
Prerequisites
Procedure
Example
Syntax
service_type: mds
service_id: FILESYSTEM_NAME
placement:
hosts:
- HOST_NAME_1
- HOST_NAME_2
- HOST_NAME_3
Example
service_type: mds
service_id: fs_name
placement:
hosts:
- host01
- host02
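Assuming the specification above is saved as mds.yaml and is available inside the cephadm shell, it can be applied with the orchestrator:
ceph orch apply -i mds.yaml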
Example
Example
Example
Example
Syntax
Example
8. Once the MDS service is deployed and functional, create the CephFS:
Syntax
Example
Verification
Example
Syntax
Example
Additional Resources
See the Red Hat Ceph Storage File System Guide for more information about creating the Ceph
File System (CephFS).
Prerequisites
Procedure
There are two ways of removing MDS daemons from the cluster:
Method 1
Example
Example
Syntax
Example
This command will remove the file system, its data, and metadata pools. It also tries to
remove the MDS using the enabled ceph-mgr Orchestrator module.
Method 2
Use the ceph orch rm command to remove the MDS service from the entire cluster:
Example
Syntax
Example
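A sketch of this step, reusing the test file system name from the deployment examples:
ceph orch rm mds.test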
Verification
Syntax
ceph orch ps
Example
Additional Resources
See Deploying the MDS service using the command line interface section in the Red Hat
Ceph Storage Operations Guide for more information.
See Deploying the MDS service using the service specification section in the Red Hat
Ceph Storage Operations Guide for more information.
Prerequisites
Procedure
Example
Standby daemons:
Syntax
Example
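A minimal sketch, assuming a standby daemon named mds.b and a file system named cephfs01; both names are illustrative:
ceph config set mds.b mds_join_fs cephfs01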
After a Ceph MDS failover event, the file system favors the standby daemon for which the
affinity is set.
Example
Standby daemons:
The mds.b daemon now has join_fscid=27 in the file system dump output.
IMPORTANT
Additional Resources
See the File system affinity section in the Red Hat Ceph Storage File System Guide for more
details.
IMPORTANT
Do not convert all standby MDS daemons to active ones. A Ceph File System (CephFS)
requires at least one standby MDS daemon to remain highly available.
Prerequisites
Procedure
1. Set the max_mds parameter to the desired number of active MDS daemons:
Syntax
Example
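A sketch of this step for a file system named cephfs:
ceph fs set cephfs max_mds 2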
This example increases the number of active MDS daemons to two in the CephFS called cephfs.
NOTE
Ceph only increases the actual number of ranks in the CephFS if a spare MDS
daemon is available to take the new rank.
Syntax
Example
+-------------+
| STANDBY MDS |
+-------------+
| node3 |
+-------------+
Additional Resources
See the Metadata Server daemons states section in the Red Hat Ceph Storage File System
Guide for more details.
See the Decreasing the number of active MDS Daemons section in the Red Hat Ceph Storage
File System Guide for more details.
See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide for
more details.
Prerequisites
Procedure
Syntax
NOTE
Example
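A sketch of this step, assuming a file system named cephfs and a target of two standby daemons:
ceph fs set cephfs standby_count_wanted 2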
This specific standby-replay daemon follows the active MDS’s metadata journal. The standby-replay
daemon is only used by the active MDS of the same rank, and is not available to other ranks.
IMPORTANT
If using standby-replay, then every active MDS must have a standby-replay daemon.
Prerequisites
Procedure
Syntax
Example
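A sketch of this step for a file system named cephfs:
ceph fs set cephfs allow_standby_replay 1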
In this example, the Boolean value is 1, which enables the standby-replay daemons to be
assigned to the active Ceph MDS daemons.
Additional Resources
See the Using the ceph mds fail command section in the Red Hat Ceph Storage File System
Guide for details.
Note: Installation of the attr package is a prerequisite for the ephemeral pinning policies.
Distributed
This policy enforces that all of a directory’s immediate children must be ephemerally pinned. For
example, use a distributed policy to spread a user’s home directory across the entire Ceph File
System cluster. Enable this policy by setting the ceph.dir.pin.distributed extended attribute.
Random
This policy enforces a chance that any descendent subdirectory might be ephemerally pinned. You
can customize the percent of directories that can be ephemerally pinned. Enable this policy by
setting the ceph.dir.pin.random and setting a percentage. Red Hat recommends setting this
percentage to a value smaller than 1% (0.01). Having too many subtree partitions can cause slow
performance. You can set the maximum percentage by setting the
mds_export_ephemeral_random_max Ceph MDS configuration option. The parameters
mds_export_ephemeral_distributed and mds_export_ephemeral_random are already enabled.
NOTE
For more information see the Why does Red Hat recommend less than 0.01% chance for
any descendent subdirectory to be pinned by setting the ceph.dir.pin.random attribute?
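As an illustration, both policies are enabled with setfattr on directories of a mounted Ceph File System; the paths and the 0.1% value are hypothetical:
setfattr -n ceph.dir.pin.distributed -v 1 /mnt/cephfs/home    # ephemerally pin each immediate child
setfattr -n ceph.dir.pin.random -v 0.001 /mnt/cephfs/tmp      # 0.1% chance for each descendent subdirectory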
Additional Resources
See the Manually pinning directory trees to a particular rank section in the Red Hat Ceph Storage
File System Guide for details on manually setting pins.
A directory’s export pin is inherited from its closest parent directory, but can be overwritten by setting an
export pin on that directory. Setting an export pin on a directory affects all of its sub-directories, for
example:
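A sketch of the inheritance behavior, using hypothetical directories a/ and a/b on a mounted Ceph File System:
mkdir -p a/b                        # new directories start without an export pin
setfattr -n ceph.dir.pin -v 1 a/    # a/ and a/b are now both pinned to rank 1
setfattr -n ceph.dir.pin -v 0 a/b   # a/b is overridden and pinned to rank 0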
Directory a/b is now pinned to rank 0, and directory a/ and the rest of its sub-directories are still
pinned to rank 1.
Prerequisites
Procedure
Syntax
Example
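A sketch of this step, pinning a hypothetical directory to rank 2:
setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/cephfs_dir1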
Additional Resources
See the Ephemeral pinning policies section in the Red Hat Ceph Storage File System Guide for
details on automatically setting pins.
Prerequisites
The rank that you will remove must be active first, meaning that you must have the same
number of MDS daemons as specified by the max_mds parameter.
Procedure
1. Set the same number of MDS daemons as specified by the max_mds parameter:
Syntax
Example
+------+--------+-------+---------------+-------+-------+--------+--------+
| RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS |
+------+--------+-------+---------------+-------+-------+--------+--------+
| 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 |
| 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 |
+------+--------+-------+---------------+-------+-------+--------+--------+
+-----------------+----------+-------+-------+
| POOL | TYPE | USED | AVAIL |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 4638 | 26.7G |
| cephfs_data | data | 0 | 26.7G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| node3 |
+-------------+
2. On a node with administration capabilities, change the max_mds parameter to the desired
number of active MDS daemons:
Syntax
Example
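A sketch of this step, reducing the cephfs file system shown above to one active daemon:
ceph fs set cephfs max_mds 1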
3. Wait for the storage cluster to stabilize to the new max_mds value by watching the Ceph File
System status.
Syntax
Example
cephfs - 0 clients
+------+--------+-------+---------------+-------+-------+--------+--------+
| RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS |
+------+--------+-------+---------------+-------+-------+--------+--------+
| 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 |
+------+--------+-------+---------------+-------+-------+--------+--------+
+-----------------+----------+-------+-------+
| POOL | TYPE | USED | AVAIL |
+-----------------+----------+-------+-------+
| cephfs_metadata | metadata | 4638 | 26.7G |
| cephfs_data | data | 0 | 26.7G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| node3 |
| node2 |
+-------------+
Additional Resources
See the Metadata Server daemons states section in the Red Hat Ceph Storage File System
Guide.
See the Configuring multiple active Metadata Server daemons section in the Red Hat
Ceph Storage File System Guide.
CHAPTER 3. DEPLOYMENT OF THE CEPH FILE SYSTEM
2. Create a Ceph client user with the appropriate capabilities, and make the client key available on
the node where the Ceph File System will be mounted.
3. Mount CephFS on a dedicated node, using either a kernel client or a File System in User Space
(FUSE) client.
3.1. PREREQUISITES
A running and healthy Red Hat Ceph Storage cluster.
IMPORTANT
All user capability flags, except rw, must be specified in alphabetical order.
Example
client.0
key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
caps: [mds] allow rwp
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a
client.1
key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw==
caps: [mds] allow rw
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a
In this example, client.0 can modify layouts and quotas on the file system cephfs_a, but client.1
cannot.
Snapshots
When creating or deleting snapshots, clients require the s flag, in addition to rw capabilities. When the
capability string also contains the p flag, the s flag must appear after it.
Example
client.0
key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
caps: [mds] allow rw, allow rws path=/temp
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a
In this example, client.0 can create or delete snapshots in the temp directory of file system cephfs_a.
Network
Restricting clients connecting from a particular network.
Example
client.0
key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw==
caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8
caps: [mon] allow r network 10.0.0.0/8
caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8
The optional network and prefix length is in CIDR notation, for example, 10.3.0.0/16.
Additional Resources
See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File
System Guide for details on setting the Ceph user capabilities.
Prerequisites
Procedure
c. Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
Example
d. Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
Example
Syntax
Example
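A sketch of this step; the file system name cephfs01 is illustrative:
ceph fs volume create cephfs01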
NOTE
By running this command, Ceph automatically creates the new pools, and
deploys a new Ceph Metadata Server (MDS) daemon to support the new file
system. This also configures the MDS affinity accordingly.
3. Verify access to the new Ceph File System from a Ceph client.
Syntax
IMPORTANT
Example
NOTE
Example
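A hedged sketch that matches the description below, assuming the cephfs01 file system and a client.1 user:
ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw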
In this example, root_squash is enabled for the file system cephfs01, except
within the /volumes directory tree.
IMPORTANT
The Ceph client can only see the CephFS it is authorized for.
Syntax
Example
Syntax
mkdir PATH_TO_NEW_DIRECTORY_NAME
Example
d. On the Ceph client node, mount the new Ceph File System:
Syntax
Example
e. On the Ceph client node, list the directory contents of the new mount point, or create a file
on the new mount point.
Additional Resources
See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File
System Guide for more details.
See the Mounting the Ceph File System as a kernel client section in the Red Hat Ceph Storage
File System Guide for more details.
See the Mounting the Ceph File System as a FUSE client section in the Red Hat Ceph Storage
File System Guide for more details.
See Ceph File System limitations and the POSIX standards section in the Red Hat Ceph Storage
File System Guide for more details.
See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for more details.
IMPORTANT
For production environments, Red Hat recommends using the default replicated data
pool for CephFS. The creation of inodes in CephFS creates at least one object in the
default data pool. It is better to use a replicated pool for the default data to improve
small-object write performance, and to improve read performance for updating
backtraces.
Prerequisites
Procedure
Syntax
Example
Example
Syntax
Example
Syntax
Example
=========
RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS
0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921
POOL TYPE USED AVAIL
cephfs-metadata-ec metadata 787M 8274G
cephfs-data-ec data 2360G 12.1T
STANDBY MDS
cephfs-ec.example.irsrql
cephfs-ec.example.cauuaj
Syntax
Example
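A sketch of this step, using the pool and file system names described below:
ceph fs add_data_pool cephfs-ec cephfs-data-ec01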
This example adds the new data pool, cephfs-data-ec01, to the existing erasure-coded file
system, cephfs-ec.
6. Verify that the erasure-coded pool was added to the Ceph File System:
Syntax
Example
STANDBY MDS
cephfs-ec.example.irsrql
cephfs-ec.example.cauuaj
Syntax
mkdir PATH_TO_DIRECTORY
setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY
Example
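A sketch of this step, matching the directory and pool described below:
mkdir /mnt/cephfs/newdir
setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir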
In this example, all new files created in the /mnt/cephfs/newdir directory inherit the directory
layout and place the data in the newly added erasure-coded pool.
Additional Resources
See The Ceph File System Metadata Server chapter in the Red Hat Ceph Storage File System
Guide for more information about CephFS MDS.
See the Creating Ceph File Systems section in the Red Hat Ceph Storage File System Guide for
more information.
See the Erasure Code Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for
more information.
See the Erasure Coding with Overwrites section in the Red Hat Ceph Storage Storage Strategies
Guide for more information.
Prerequisites
Procedure
Example
Syntax
To restrict the client to only writing in the temp directory of filesystem cephfs_a:
Example
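A hedged sketch of a command that produces a keyring like the one shown below:
ceph fs authorize cephfs_a client.1 / r /temp rw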
client.1
key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
To completely restrict the client to the temp directory, remove the root (/) directory:
Example
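A hedged sketch, assuming the same client and file system as above:
ceph fs authorize cephfs_a client.1 /temp rw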
NOTE
Supplying all or asterisk as the file system name grants access to every file
system. Typically, it is necessary to quote the asterisk to protect it from the shell.
Syntax
Example
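A sketch of the verification step that produces output like the listing below:
ceph auth get client.1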
client.1
key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
caps mds = "allow r, allow rw path=/temp"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=cephfs_a"
Syntax
Example
b. Copy the client keyring from the Ceph Monitor node to the /etc/ceph/ directory on the
client node:
Syntax
Example
5. From the client node, set the appropriate permissions for the keyring file:
Syntax
Example
Additional Resources
See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for
more details.
IMPORTANT
Clients running on other Linux distributions, aside from Red Hat Enterprise Linux, are
permitted but not supported. If issues are found in the CephFS Metadata Server or other
parts of the storage cluster when using these clients, Red Hat will address them. If the
cause is found to be on the client side, then the issue will have to be addressed by the
kernel vendor of the Linux distribution.
Prerequisites
Procedure
Example
d. Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
Example
e. Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
Example
f. From the client node, set the appropriate permissions for the configuration file:
Manually Mounting
Syntax
mkdir -p MOUNT_POINT
Example
3. Mount the Ceph File System. To specify multiple Ceph Monitor addresses, separate them with
commas in the mount command, specify the mount point, and set the client name:
NOTE
As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As
such, a secret file is no longer necessary. Just specify the client ID with
name=CLIENT_ID, and mount.ceph will find the right keyring file.
Syntax
Example
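A hedged sketch, assuming illustrative Monitor host names, the admin client, and a file system named cephfs01:
mount -t ceph host01:6789,host02:6789,host03:6789:/ /mnt/cephfs -o name=admin,fs=cephfs01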
NOTE
You can configure a DNS server so that a single host name resolves to multiple
IP addresses. Then you can use that single host name with the mount command,
instead of supplying a comma-separated list.
NOTE
You can also replace the Monitor host names with the string :/ and mount.ceph
will read the Ceph configuration file to determine which Monitors to connect to.
NOTE
You can set the nowsync option to asynchronously execute file creation and
removal on the Red Hat Ceph Storage clusters. This improves the performance
of some workloads by avoiding round-trip latency for these system calls without
impacting consistency. The nowsync option requires kernel clients with Red Hat
Enterprise Linux 8.4 or later.
Example
Syntax
stat -f MOUNT_POINT
Example
Automatically Mounting
2. On the client host, create a new directory for mounting the Ceph File System.
Syntax
mkdir -p MOUNT_POINT
Example
Syntax
The first column sets the Ceph Monitor host names and the port number.
The third column sets the file system type, in this case, ceph, for CephFS.
The fourth column sets the various options, such as the user name and the secret file using the
name and secretfile options. You can also set specific volumes, sub-volume groups, and sub-
volumes using the ceph.client_mountpoint option.
Set the _netdev option to ensure that the file system is mounted after the networking
subsystem starts to prevent hanging and networking issues. If you do not need access time
information, then setting the noatime option can increase performance.
Example
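A hedged sketch of an /etc/fstab entry, with illustrative Monitor host names and client name:
#DEVICE                                 PATH         TYPE  OPTIONS                     DUMP  FSCK
host01:6789,host02:6789,host03:6789:/   /mnt/cephfs  ceph  name=admin,_netdev,noatime  0     0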
The Ceph File System will be mounted on the next system boot.
NOTE
As of Red Hat Ceph Storage 4.1, mount.ceph can read keyring files directly. As
such, a secret file is no longer necessary. Just specify the client ID with
name=CLIENT_ID, and mount.ceph will find the right keyring file.
NOTE
You can also replace the Monitor host names with the string :/ and mount.ceph
will read the Ceph configuration file to determine which Monitors to connect to.
Additional Resources
See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for
more details on creating a Ceph user.
See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for
details.
Prerequisites
Procedure
Example
d. Copy the Ceph client keyring from the Ceph Monitor node to the client node:
Syntax
Example
e. Copy the Ceph configuration file from a Ceph Monitor node to the client node:
Syntax
Example
f. From the client node, set the appropriate permissions for the configuration file:
Manually Mounting
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
NOTE
If you used the path option with MDS capabilities, then the mount point must be
within what is specified by the path.
Syntax
Example
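A hedged sketch, assuming the client.1 user created earlier and a file system named cephfs01:
ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs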
NOTE
If you do not use the default name and location of the user keyring, that is
/etc/ceph/ceph.client.CLIENT_ID.keyring, then use the --keyring option to
specify the path to the user keyring, for example:
Example
NOTE
Use the -r option to instruct the client to treat that path as its root:
Syntax
Example
NOTE
If you want to automatically reconnect an evicted Ceph client, then add the --
client_reconnect_stale=true option.
Example
Syntax
stat -f MOUNT_POINT
Example
Automatically Mounting
Syntax
mkdir PATH_TO_MOUNT_POINT
Example
NOTE
If you used the path option with MDS capabilities, then the mount point must be
within what is specified by the path.
Syntax
The first column sets the Ceph Monitor host names and the port number.
The third column sets the file system type, in this case, fuse.ceph, for CephFS.
The fourth column sets the various options, such as the user name and the keyring using the
ceph.name and ceph.keyring options. You can also set specific volumes, sub-volume groups,
and sub-volumes using the ceph.client_mountpoint option. To specify which Ceph File System
to access, use the ceph.client_fs option. Set the _netdev option to ensure that the file system
is mounted after the networking subsystem starts to prevent hanging and networking issues. If
you do not need access time information, then setting the noatime option can increase
performance. If you want to automatically reconnect after an eviction, then set the
client_reconnect_stale=true option.
Example
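A hedged sketch of an /etc/fstab entry for a FUSE mount, with illustrative values:
#DEVICE  PATH         TYPE       OPTIONS                                                                      DUMP  FSCK
none     /mnt/cephfs  fuse.ceph  ceph.id=1,ceph.client_fs=cephfs01,ceph.client_mountpoint=/,_netdev,defaults  0     0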
The Ceph File System will be mounted on the next system boot.
Additional Resources
See the Ceph user management chapter in the Red Hat Ceph Storage Administration Guide for
more details on creating a Ceph user.
See the Creating Ceph File Systems section of the Red Hat Ceph Storage File System Guide for
details.
See Section 3.5, “Creating client users for a Ceph File System” for details.
See Section 3.6, “Mounting the Ceph File System as a kernel client” for details.
See Section 3.7, “Mounting the Ceph File System as a FUSE client” for details.
See Chapter 2, The Ceph File System Metadata Server for details on configuring the CephFS
Metadata Server daemon.
CHAPTER 4. MANAGEMENT OF CEPH FILE SYSTEM VOLUMES, SUB-VOLUME GROUPS, AND SUB-VOLUMES
The Ceph Manager volumes module implements the following file system export abstractions:
CephFS volumes
CephFS subvolumes
NOTE
This creates the Ceph File System, along with the data and metadata pools.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
Procedure
Syntax
Example
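A sketch of this step with an illustrative volume name:
ceph fs volume create cephfs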
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS volume.
Procedure
Example
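A sketch of the listing command:
ceph fs volume ls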
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS volume.
Procedure
1. If the mon_allow_pool_delete option is not set to true, then set it to true before removing the
CephFS volume:
Example
Syntax
Example
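A sketch of both steps, assuming a volume named cephfs:
ceph config set mon mon_allow_pool_delete true
ceph fs volume rm cephfs --yes-i-really-mean-it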
Starting with Red Hat Ceph Storage 5.0, the subvolume group snapshot feature is not supported. You
can only list and remove the existing snapshots of these subvolume groups.
NOTE
When creating a subvolume group, you can specify its data pool layout, uid, gid, and file
mode in octal numerals. By default, the subvolume group is created with an octal file
mode ‘755’, uid ‘0’, gid ‘0’, and data pool layout of its parent directory.
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System deployed.
Procedure
Syntax
Example
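A sketch of this step with illustrative volume and group names:
ceph fs subvolumegroup create cephfs subgroup0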
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
Procedure
Syntax
Example
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
Procedure
Syntax
Example
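A sketch of this step, reusing the names from the earlier examples:
ceph fs subvolumegroup getpath cephfs subgroup0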
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
In addition to read (r) and write (w) capabilities, clients also require s flag on a directory path
within the file system.
Procedure
Syntax
Example
client.0
key: AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
caps: [mds] allow rw, allow rws path=/bar 1
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a 2
1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system
cephfs_a.
Syntax
Example
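A sketch of this step with illustrative names:
ceph fs subvolumegroup snapshot create cephfs subgroup0 snap0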
The command implicitly snapshots all the subvolumes under the subvolume group.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
Procedure
Syntax
Example
NOTE
Using the --force flag allows the command to succeed when it would otherwise fail because the
snapshot does not exist.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
Procedure
Syntax
Example
NOTE
Removing a subvolume group fails if the group is not empty or does not exist. The --force flag
allows the removal of a non-existent subvolume group to succeed.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
Procedure
Syntax
Example
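A sketch of this step; the --force flag is optional:
ceph fs subvolumegroup rm cephfs subgroup0 --force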
NOTE
When creating a subvolume, you can specify its subvolume group, data pool layout, uid,
gid, file mode in octal numerals, and size in bytes. The subvolume can be created in a
separate RADOS namespace by specifying the --namespace-isolated option. By
default, a subvolume is created within the default subvolume group, and with an octal file
mode ‘755’, uid of its subvolume group, gid of its subvolume group, data pool layout of its
parent directory, and no size limit.
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System deployed.
Procedure
Syntax
Example
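A sketch of this step, creating a subvolume in the subgroup0 group:
ceph fs subvolume create cephfs sub0 --group_name subgroup0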
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS subvolume.
Procedure
Syntax
Example
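A sketch of the listing step:
ceph fs subvolume ls cephfs --group_name subgroup0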
NOTE
The ceph fs subvolume resize command resizes the subvolume quota using the size
specified by new_size. The --no_shrink flag prevents the subvolume from shrinking
below the currently used size of the subvolume. The subvolume can be resized to an
infinite size by passing inf or infinite as the new_size.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS subvolume.
Procedure
Syntax
Example
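A sketch of this step; the 10 GiB size is illustrative:
ceph fs subvolume resize cephfs sub0 10737418240 --group_name subgroup0 --no_shrink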
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS subvolume.
Procedure
Syntax
Example
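A sketch of this step:
ceph fs subvolume getpath cephfs sub0 --group_name subgroup0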
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS subvolume.
Procedure
Syntax
Example
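A sketch of the command that produces output like the listing below:
ceph fs subvolume info cephfs sub0 --group_name subgroup0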
Example output
{
"atime": "2022-05-09 09:27:15",
"bytes_pcent": "undefined",
"bytes_quota": "infinite",
"bytes_used": 0,
"created_at": "2022-05-09 09:27:15",
"ctime": "22022-05-09 09:27:15",
"data_pool": "cephfs_data",
"features": [
"snapshot-clone",
"snapshot-autoprotect",
"snapshot-retention"
],
"gid": 0,
"mode": 16877,
"mon_addrs": [
"10.8.128.22:6789",
"10.8.128.23:6789",
"10.8.128.24:6789"
],
"mtime": "2022-05-09 09:27:15",
"path": "/volumes/subgroup0/sub0/6d01a68a-e981-4ebe-84ca-96b660879173",
"pool_namespace": "",
"state": "complete",
"type": "subvolume",
"uid": 0
}
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS subvolume.
In addition to read (r) and write (w) capabilities, clients also require s flag on a directory path
within the file system.
Procedure
Syntax
Example
1 2 In the example, client.0 can create or delete snapshots in the bar directory of file system
cephfs_a.
Syntax
Example
[root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0
NOTE
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
To create or delete snapshots, in addition to read and write capability, clients require s flag on a
directory path within the filesystem.
Syntax
CLIENT_NAME
key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
caps mds = allow rw, allow rws path=DIRECTORY_PATH
Example
[client.0]
key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw==
caps mds = "allow rw, allow rws path=/bar"
caps mon = "allow r"
caps osd = "allow rw tag cephfs data=cephfs_a"
In the above example, client.0 can create or delete snapshots in the bar directory of filesystem
cephfs_a.
Procedure
Syntax
Example
This creates the CephFS file system, its data and metadata pools.
2. Create a subvolume group. By default, the subvolume group is created with an octal file mode
'755', and data pool layout of its parent directory.
Syntax
Example
3. Create a subvolume. By default, a subvolume is created within the default subvolume group, and
with an octal file mode ‘755’, uid of its subvolume group, gid of its subvolume group, data pool
layout of its parent directory, and no size limit.
Syntax
Example
Syntax
Example
[root@mon ~]# ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0
NOTE
a. If the source subvolume and the target clone are in the default group, run the following
command:
Syntax
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0
b. If the source subvolume is in the non-default group, then specify the source subvolume
group in the following command:
Syntax
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0
c. If the target clone is to a non-default group, then specify the target group in the following
command:
Syntax
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1
Syntax
Example
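A sketch of the status check that produces output like the listing below; the clone and group names follow the earlier examples:
ceph fs clone status cephfs clone0 --group_name subgroup1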
{
"status": {
"state": "complete"
}
}
Additional Resources
See the Managing Ceph users section in the Red Hat Ceph Storage Administration Guide.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS subvolume.
Procedure
Syntax
Example
Prerequisites
A CephFS subvolume.
Procedure
Syntax
Example
[root@mon ~]# ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0
Example output
{
"created_at": "2022-05-09 06:18:47.330682",
"data_pool": "cephfs_data",
"has_pending_clones": "no",
"size": 0
}
NOTE
The ceph fs subvolume rm command removes the subvolume and its contents in two
steps. First, it moves the subvolume to a trash folder, and then asynchronously purges its
contents.
A subvolume can be removed retaining existing snapshots of the subvolume using the --retain-
snapshots option. If snapshots are retained, the subvolume is considered empty for all operations not
involving the retained snapshots. Retained snapshots can be used as a clone source to recreate the
subvolume, or cloned to a newer subvolume.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
A CephFS subvolume.
Procedure
Syntax
Example
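A sketch of the removal step that retains existing snapshots:
ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots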
Syntax
NEW_SUBVOLUME can either be the same subvolume that was deleted earlier, or it can be cloned to a
new subvolume.
Example
[root@mon ~]# ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0
NOTE
Using the --force flag allows the command to succeed when it would otherwise fail because the
snapshot does not exist.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System deployed.
Procedure
Syntax
Example
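A sketch of this step, reusing the earlier snapshot names:
ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0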
Custom metadata allows users to store their own metadata in subvolumes. Users can store key-value
pairs, similar to xattrs, in a Ceph File System.
NOTE
If the key_name already exists, then the old value is replaced by the new value.
Prerequisites
A Ceph File System (CephFS), CephFS volume, subvolume group, and subvolume created.
Procedure
Syntax
Example
[ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0
Example
[ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 "test meta" cluster --group_name subgroup0
This creates another metadata entry with the KEY_NAME test meta and the VALUE cluster.
3. Optional: You can also set the same metadata with a different value:
Example
[ceph: root@host01 /]# ceph fs subvolume metadata set cephfs sub0 "test_meta" cluster2 --group_name subgroup0
Prerequisites
Procedure
Syntax
Example
[ceph: root@host01 /]# ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0
cluster
Prerequisites
Procedure
Syntax
Example
Prerequisites
Procedure
Syntax
Example
Example
{}
CHAPTER 5. CEPH FILE SYSTEM ADMINISTRATION
Monitoring CephFS metrics in real-time, see Section 5.2, “Using the cephfs-top utility”.
Mapping a directory to a particular MDS rank, see Section 5.6, “Mapping directory trees to Metadata Server daemon ranks”.
Disassociating a directory from an MDS rank, see Section 5.7, “Disassociating directory trees from Metadata Server daemon ranks”.
Adding a new data pool, see Section 5.8, “Adding data pools”.
Working with files and directory layouts, see Chapter 8, File and directory layouts.
Removing a Ceph File System, see Section 5.10, “Removing a Ceph File System”.
Using the ceph mds fail command, see Section 5.11, “Using the ceph mds fail command”.
Manually evicting a CephFS client, see Section 5.15, “Manually evicting a Ceph File System client”.
5.1. PREREQUISITES
A running, and healthy Red Hat Ceph Storage cluster.
NOTE
Currently, not all of the performance stats are available in the Red Hat Enterprise Linux 8
kernel. cephfs-top is supported on Red Hat Enterprise Linux 8 and above and uses one of
the standard terminals in Red Hat Enterprise Linux.
IMPORTANT
Prerequisites
Procedure
1. Enable the Red Hat Ceph Storage 5 tools repository, if it is not already enabled:
Example
Example
Example
[root@client ~]# ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring
NOTE
Optionally, use the --id argument to specify a different Ceph user, other than
client.fstop.
Example
NOTE
By default, cephfs-top connects to the storage cluster named ceph. To use a non-default storage cluster name, use the --cluster NAME option with the cephfs-top utility.
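As a minimal sketch, assuming the client.fstop user created above and, for the second command, a hypothetical non-default cluster name, running the utility might look like:
[root@client ~]# cephfs-top
[root@client ~]# cephfs-top --cluster mycluster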
The mds_autoscaler module monitors the following file system settings to inform placement count adjustments:
The Ceph Monitor daemons are still responsible for promoting or stopping MDS daemons according to these settings. The mds_autoscaler module simply adjusts the number of MDS daemons that are spawned by the orchestrator.
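As a hedged sketch, enabling the module through the Ceph Manager might look like the following; running it from a Cephadm shell on host01 is an assumption:
[ceph: root@host01 /]# ceph mgr module enable mds_autoscaler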
Prerequisites
Procedure
Example
Prerequisites
Procedure
Syntax
umount MOUNT_POINT
Example
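For illustration, with /mnt/cephfs as an assumed mount point, unmounting a kernel client might look like:
[root@client ~]# umount /mnt/cephfs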
Additional Resources
Prerequisites
Procedure
Syntax
fusermount -u MOUNT_POINT
Example
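For illustration, again assuming /mnt/cephfs as the mount point, unmounting a FUSE client might look like:
[root@client ~]# fusermount -u /mnt/cephfs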
Additional Resources
IMPORTANT
An internal balancer already dynamically spreads the application load. Therefore, only
map directory trees to ranks for certain carefully chosen applications.
In addition, when a directory is mapped to a rank, the balancer cannot split it.
Consequently, a large number of operations within the mapped directory can overload
the rank and the MDS daemon that manages it.
Prerequisites
Verify that the attr package is installed on the CephFS client node with a mounted Ceph File
System.
Procedure
Syntax
Example
client.1
key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==
caps: [mds] allow r, allow rwp path=/temp
caps: [mon] allow r
caps: [osd] allow rw tag cephfs data=cephfs_a
Syntax
Example
This example assigns the /temp directory and all of its subdirectories to rank 2.
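A hedged sketch of the command behind this example, assuming the file system is mounted under /mnt/cephfs and that the attr package provides setfattr:
[root@client ~]# setfattr -n ceph.dir.pin -v 2 /mnt/cephfs/temp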
Additional Resources
See the Layout, quota, snapshot, and network restrictions section in the Red Hat Ceph Storage
File System Guide for more details about the p flag.
See the Manually pinning directory trees to a particular rank section in the Red Hat Ceph Storage
File System Guide for more details.
See the Configuring multiple active Metadata Server daemons section in the Red Hat
Ceph Storage File System Guide for more details.
Prerequisites
Ensure that the attr package is installed on the client node with a mounted CephFS.
Procedure
Syntax
Example
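As a sketch, setting the ceph.dir.pin attribute to -1 removes the mapping; the directory path below is an assumed example:
[root@client ~]# setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/temp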
NOTE
Additional Resources
See the Mapping directory trees to Metadata Server daemon ranks section in Red Hat
Ceph Storage File System Guide for more details.
Before using another data pool in the Ceph File System, you must add it as described in this section.
By default, for storing file data, CephFS uses the initial data pool that was specified during its creation.
To use a secondary data pool, you must also configure a part of the file system hierarchy to store file
data in that pool or optionally within a namespace of that pool, using file and directory layouts.
Prerequisites
Procedure
Syntax
Replace:
Example
2. Add the newly created pool under the control of the Metadata Servers:
Syntax
Replace:
Example:
Example
Syntax
Example:
Example
5. If you use the cephx authentication, make sure that clients can access the new pool.
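A hedged end-to-end sketch of this procedure, using a hypothetical pool name cephfs_data_ssd and a file system named cephfs:
[ceph: root@host01 /]# ceph osd pool create cephfs_data_ssd
[ceph: root@host01 /]# ceph fs add_data_pool cephfs cephfs_data_ssd
[ceph: root@host01 /]# ceph fs ls
The last command verifies that the new pool appears in the data pools list for the file system.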
Additional Resources
See the File and directory layouts section in the Red Hat Ceph Storage File System Guide for
details.
See the Creating client users for a Ceph File System section in the Red Hat Ceph Storage File
System Guide for details.
You can also take the CephFS cluster down quickly to test the deletion of a file system and bring the Metadata Server (MDS) daemons down, for example, when practicing a disaster recovery scenario. Doing this sets the joinable flag to false, which prevents the MDS standby daemons from activating the file system.
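As a hedged sketch, assuming a file system named cephfs, bringing the file system down quickly might look like:
[ceph: root@host01 /]# ceph fs fail cephfs
and marking it joinable again afterwards:
[ceph: root@host01 /]# ceph fs set cephfs joinable true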
Prerequisites
Procedure
Syntax
Example
Syntax
Example
or
Syntax
Example
NOTE
Syntax
Example
WARNING
This operation is destructive and will make the data stored on the Ceph File System
permanently inaccessible.
Prerequisites
Procedure
Syntax
Replace
FS_NAME with the name of the Ceph File System you want to remove.
Example
ceph fs status
Example
Syntax
Replace
FS_NAME with the name of the Ceph File System you want to remove.
Example
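For illustration, with cephfs as an assumed file system name, the removal command might look like:
[ceph: root@host01 /]# ceph fs rm cephfs --yes-i-really-mean-it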
5. Optional. Remove data and metadata pools associated with the removed file system.
Additional Resources
See the Delete a Pool section in the Red Hat Ceph Storage Storage Strategies Guide .
Mark an MDS daemon as failed. If the daemon was active, a suitable standby daemon was available, and the standby daemon was active after disabling the standby-replay configuration, using this command forces a failover to the standby daemon. Disabling the standby-replay daemon prevents new standby-replay daemons from being assigned.
Restart a running MDS daemon. If the daemon was active and a suitable standby daemon was
available, the "failed" daemon becomes a standby daemon.
Prerequisites
Procedure
To fail a daemon:
Syntax
Example
NOTE
You can find the Ceph MDS name from the ceph fs status command.
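A minimal sketch, where example01 is a hypothetical MDS name taken from the ceph fs status output:
[ceph: root@host01 /]# ceph mds fail example01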
Additional Resources
See the Decreasing the number of active Metadata Server daemons section in the Red Hat
Ceph Storage File System Guide.
See the Configuring the number of standby daemons section in the Red Hat Ceph Storage File
System Guide.
See the Metadata Server ranks section in the Red Hat Ceph Storage File System Guide.
IMPORTANT
You can list all the CephFS features by using the fs features ls command. You can add or remove
requirements by using the fs required_client_features command.
Syntax
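As a hedged sketch, assuming a file system named cephfs and using one of the features described below, adding and removing a client feature requirement might look like:
[ceph: root@host01 /]# ceph fs required_client_features cephfs add metric_collect
[ceph: root@host01 /]# ceph fs required_client_features cephfs rm metric_collect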
Feature Descriptions
reply_encoding
Description
The Ceph Metadata Server (MDS) encodes request replies in an extensible format, if the client supports this feature.
reclaim_client
Description
The Ceph MDS allows a new client to reclaim the state of another, possibly dead, client. This feature is used by NFS Ganesha.
lazy_caps_wanted
Description
When a stale client resumes, the Ceph MDS only needs to re-issue the capabilities that are
explicitly wanted, if the client supports this feature.
multi_reconnect
Description
After a Ceph MDS failover event, the client sends a reconnect message to the MDS to reestablish
cache states. A client can split large reconnect messages into multiple messages.
deleg_ino
Description
A Ceph MDS delegates inode numbers to a client, if the client supports this feature. Delegating
inode numbers is a prerequisite for a client to do async file creation.
metric_collect
Description
CephFS clients can send performance metrics to a Ceph MDS.
alternate_name
Description
CephFS clients can set and understand alternate names for directory entries. This feature allows
for encrypted file names.
You can evict CephFS clients automatically, if they fail to communicate promptly with the MDS daemon, or manually. Automatic client eviction happens in the following scenarios:
If a CephFS client has not communicated with the active MDS daemon for over the default of
300 seconds, or as set by the session_autoclose option.
During MDS startup or failover, the MDS daemon goes through a reconnect phase waiting for
all the CephFS clients to connect to the new MDS daemon. If any CephFS clients fail to
reconnect within the default time window of 45 seconds, or as set by the
mds_reconnect_timeout option.
Additional Resources
See the Manually evicting a Ceph File System client section in the Red Hat Ceph Storage File
System Guide for more details.
An internal “osdmap epoch barrier” mechanism is used when updating the Ceph OSD map. The purpose of the barrier is to verify that the CephFS clients receiving the capabilities have a sufficiently recent Ceph OSD map before any capabilities are assigned that might allow access to the same RADOS objects, so as not to race with canceled operations, such as those resulting from ENOSPC conditions or from clients blocklisted by evictions.
If you are experiencing frequent CephFS client evictions due to slow nodes or an unreliable network, and you cannot fix the underlying issue, then you can ask the MDS to be less strict. It is possible to respond to slow CephFS clients by simply dropping their MDS sessions, but permit the CephFS clients to re-open sessions and to continue talking to Ceph OSDs. Setting the mds_session_blocklist_on_timeout and mds_session_blocklist_on_evict options to false enables this mode.
NOTE
When blocklisting is disabled, evicting a CephFS client only affects the MDS daemon you send the command to. On a system with multiple active MDS daemons, you need to send an eviction command to each active daemon.
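A hedged sketch of enabling this mode with the ceph config command; applying it globally to the mds scope is an assumption here:
[ceph: root@host01 /]# ceph config set mds mds_session_blocklist_on_timeout false
[ceph: root@host01 /]# ceph config set mds mds_session_blocklist_on_evict false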
Prerequisites
Procedure
Syntax
Example
Syntax
Example
IMPORTANT
Removing a CephFS client from the blocklist puts data integrity at risk, and does not guarantee a fully healthy and functional CephFS client as a result. The best way to get a fully healthy CephFS client back after an eviction is to unmount the CephFS client and do a fresh mount. If other CephFS clients are accessing files that the blocklisted CephFS client was buffering I/O to, it can result in data corruption.
Prerequisites
Procedure
Example
Syntax
Example
3. Optionally, you can have kernel-based CephFS clients automatically reconnect when removing them from the blocklist. On the kernel-based CephFS client, set the following option to clean, either when doing a manual mount or when mounting automatically with an entry in the /etc/fstab file:
recover_session=clean
4. Optionally, you can have FUSE-based CephFS clients automatically reconnect when removing them from the blocklist. On the FUSE client, set the following option to true, either when doing a manual mount or when mounting automatically with an entry in the /etc/fstab file:
client_reconnect_stale=true
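For illustration only, with a hypothetical monitor address, user name, and mount point, a kernel client mount using this option might look like:
[root@client ~]# mount -t ceph mon1:6789:/ /mnt/cephfs -o name=user1,recover_session=clean
For a FUSE client, one way to apply the option is through the [client] section of the Ceph configuration file, shown here only as an assumed sketch:
[client]
client_reconnect_stale = true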
Additional Resources
See the Mounting the Ceph File System as a FUSE client section in the Red Hat Ceph Storage
File System Guide for more information.
For details, see the Deployment of the Ceph File System section in the Red Hat Ceph Storage
File System Guide.
For details, see the Red Hat Ceph Storage Installation Guide.
For details, see the The Ceph File System Metadata Server section in the Red Hat Ceph Storage
File System Guide.
CHAPTER 6. NFS CLUSTER AND EXPORT MANAGEMENT
6.1. PREREQUISITES
A running, and healthy Red Hat Ceph Storage cluster.
Prerequisites
Procedure
Example
Example
Syntax
Example
[ceph: root@host01 /]# ceph nfs cluster create nfs-cephfs "host01 host02"
NFS Cluster Created Successfully
In this example, the NFS Ganesha cluster name is nfs-cephfs and the daemon containers are deployed to host01 and host02.
IMPORTANT
Red Hat only supports one NFS Ganesha daemon running per host.
Syntax
Example
NOTE
Prerequisites
An NFS cluster created using the ceph nfs cluster create command.
Procedure
Example
Example
LOG {
COMPONENTS {
ALL = FULL_DEBUG;
}
}
Syntax
Example
[ceph: root@host01 /]# ceph nfs cluster config set nfs-cephfs -i nfs-cephfs.conf
Syntax
Example
LOG {
COMPONENTS {
ALL = FULL_DEBUG;
}
}
5. Optional: If you want to remove the user-defined configuration, run the following command:
Syntax
Example
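As a hedged sketch, assuming the nfs-cephfs cluster name used throughout this chapter, resetting the user-defined configuration might look like:
[ceph: root@host01 /]# ceph nfs cluster config reset nfs-cephfs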
NOTE
This technology is Limited Availability. See the Deprecated functionality chapter for
additional information.
IMPORTANT
IMPORTANT
NFS clients are unable to create CephFS snapshots through their native NFS mount.
They must use server-side operator tooling for their snapshot needs.
Prerequisites
An NFS cluster created using the ceph nfs cluster create command.
Procedure
For Red Hat Ceph Storage 5.0, create the export as follows:
Syntax
Example
[ceph: root@host01 /]# ceph nfs export create cephfs cephfs01 nfs-cephfs /ceph --path=/
For Red Hat Ceph Storage 5.1 and later, create the export as follows:
Syntax
Example
[ceph: root@host01 /]# ceph nfs export create cephfs nfs-cephfs /ceph cephfs01 --path=/
In this example, the BINDING (/ceph) is the pseudo root path, which must be unique and an absolute path.
NOTE
The --readonly option exports a path with read-only permission; the default is read and write permissions.
NOTE
Syntax
Example
Syntax
Example
"transports": [
"TCP"
],
"fsal": {
"name": "CEPH",
"user_id": "cephnfs11",
"fs_name": "cephfs",
"sec_label_xattr": ""
},
"clients": []
}
Syntax
Example
Syntax
Example
{
"export_id": 1,
"path": "/",
"cluster_id": "nfs-cephfs",
"pseudo": "/ceph",
"access_type": "RW",
"squash": "none",
"security_label": true,
"protocols": [
4
],
"transports": [
"TCP"
],
"fsal": {
"name": "CEPH",
"user_id": "nfs.nfs-cephfs.1",
"fs_name": "cephfs"
},
"clients": []
}
Syntax
Example
a. To automatically mount on boot, open and edit the /etc/fstab file by adding a new line:
Syntax
Example
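For illustration, a hypothetical /etc/fstab entry for the export shown above (the host name and local mount point are assumptions) might look like:
host01:/ceph   /mnt/nfs-cephfs   nfs4   defaults,_netdev   0 0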
6. On a client host, to mount an exported NFS Ceph File System created with an ingress service:
Syntax
mount -t nfs VIRTUAL_IP_ADDRESS:BINDING LOCAL_MOUNT_POINT
Replace LOCAL_MOUNT_POINT with the mount point to mount the export on.
Example
This example mounts the export nfs-cephfs that exists on an NFS cluster created with --ingress --virtual-ip 10.10.128.75 on the mount point /mnt.
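A minimal sketch of that command, with /mnt as the assumed local mount point:
[root@client ~]# mount -t nfs 10.10.128.75:/ceph /mnt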
Prerequisites
Procedure
Syntax
Example
],
"transports": [
"TCP"
],
"fsal": {
"name": "CEPH",
"user_id": "cephnfs11",
"fs_name": "cephfs",
"sec_label_xattr": ""
},
"clients": []
}
Example
[ceph: root@host01 /]# ceph nfs export get nfs-cephfs /ceph > export.conf
Syntax
{
"export_id": EXPORT_ID,
"path": "/",
"cluster_id": "CLUSTER_NAME",
"pseudo": "CLUSTER_PSEUDO_PATH",
"access_type": "RW/RO",
"squash": "SQUASH",
"security_label": SECURITY_LABEL,
"protocols": [
PROTOCOL_ID_
],
"transports": [
"TCP"
],
"fsal": {
"name": "NAME",
"user_id": "USER_ID",
"fs_name": "FILE_SYSTEM_NAME",
"sec_label_xattr": ""
},
"clients": []
}
Example
{
"export_id": 1,
"path": "/",
"cluster_id": "nfs-cephfs",
"pseudo": "/ceph",
"access_type": "RW",
"squash": "none",
"security_label": true,
"protocols": [
4
],
"transports": [
"TCP"
],
"fsal": {
"name": "CEPH",
"user_id": "cephnfs11",
"fs_name": "cephfs",
"sec_label_xattr": ""
},
"clients": []
}
Syntax
Example
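As a hedged sketch, applying an edited export specification from the export.conf file created above might look like:
[ceph: root@host01 /]# ceph nfs export apply nfs-cephfs -i export.conf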
Syntax
Example
],
"fsal": {
"name": "CEPH",
"user_id": "cephnfs11",
"fs_name": "cephfs",
"sec_label_xattr": ""
},
"clients": []
}
Prerequisites
An NFS cluster created using the ceph nfs cluster create command.
A CephFS created.
Procedure
Example
Syntax
EXPORT {
Export_Id = EXPORT_ID;
Transports = TCP/UDP;
Path = PATH;
Pseudo = PSEUDO_PATH;
Protocols = NFS_PROTOCOLS;
Access_Type = ACCESS_TYPE;
Attr_Expiration_Time = EXPIRATION_TIME;
Squash = SQUASH;
FSAL {
Name = NAME;
Filesystem = "CEPH_FILE_SYSTEM_NAME";
User_Id = "USER_ID";
}
}
Example
EXPORT {
Export_Id = 2;
Transports = TCP;
Path = /;
Pseudo = /ceph1/;
Protocols = 4;
Access_Type = RW;
Attr_Expiration_Time = 0;
Squash = None;
FSAL {
Name = CEPH;
Filesystem = "cephfs";
User_Id = "nfs.nfs-cephfs.2";
}
}
Syntax
Example
Syntax
Example
"sec_label_xattr": ""
},
"clients": []
}
Prerequisites
A CephFS created.
Procedure
Syntax
Example
Prerequisites
Procedure
Example
Syntax
Example
CHAPTER 7. CEPH FILE SYSTEM QUOTAS
7.1. PREREQUISITES
A running, and healthy Red Hat Ceph Storage cluster.
Limitations
CephFS quotas rely on the cooperation of the client mounting the file system to stop writing
data when it reaches the configured limit. However, quotas alone cannot prevent an adversarial,
untrusted client from filling the file system.
Once processes that write data to the file system reach the configured limit, a short period of time elapses between when the amount of data reaches the quota limit and when the processes stop writing data. The time period is generally measured in tenths of a second. However, processes continue to write data during that time. The amount of additional data that the processes write depends on the amount of time elapsed before they stop.
When using path-based access restrictions, be sure to configure the quota on the directory to
which the client is restricted, or to a directory nested beneath it. If the client has restricted
access to a specific path based on the MDS capability, and the quota is configured on an
ancestor directory that the client cannot access, the client will not enforce the quota. For
example, if the client cannot access the /home/ directory and the quota is configured on
/home/, the client cannot enforce that quota on the directory /home/user/.
Snapshot file data that has been deleted or changed does not count towards the quota.
There is no support for quotas with NFS clients when using setxattr, and no support for file-level quotas on NFS. To use quotas on NFS shares, you can export them using subvolumes and set the --size option.
NOTE
If the attributes appear on a directory inode, then that directory has a configured quota. If
the attributes do not appear on the inode, then the directory does not have a quota set,
although its parent directory might have a quota configured. If the value of the extended
attribute is 0, the quota is not set.
Prerequisites
Procedure
Syntax
Example
Syntax
Example
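For illustration, with /mnt/cephfs/mydir as a hypothetical directory, viewing the byte and file limits might look like:
[root@client ~]# getfattr -n ceph.quota.max_bytes /mnt/cephfs/mydir
[root@client ~]# getfattr -n ceph.quota.max_files /mnt/cephfs/mydir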
Additional Resources
This section describes how to use the setfattr command and the ceph.quota extended attributes to set
the quota for a directory.
Prerequisites
Procedure
Syntax
Example
Syntax
Example
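A hedged sketch, using a hypothetical directory and limits of 100 GB and 10,000 files:
[root@client ~]# setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/mydir
[root@client ~]# setfattr -n ceph.quota.max_files -v 10000 /mnt/cephfs/mydir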
Additional Resources
Prerequisites
Procedure
Syntax
Example
Syntax
Example
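For illustration, setting the attributes back to 0 removes the quota; the directory path is an assumed example:
[root@client ~]# setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/mydir
[root@client ~]# setfattr -n ceph.quota.max_files -v 0 /mnt/cephfs/mydir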
Additional Resources
CHAPTER 8. FILE AND DIRECTORY LAYOUTS
8.1. PREREQUISITES
A running, and healthy Red Hat Ceph Storage cluster.
A layout of a file or directory controls how its content is mapped to Ceph RADOS objects. The directory
layouts serve primarily for setting an inherited layout for new files in that directory.
To view and set a file or directory layout, use virtual extended attributes or extended file attributes
(xattrs). The name of the layout attributes depends on whether a file is a regular file or a directory:
Layouts Inheritance
Files inherit the layout of their parent directory when you create them. However, subsequent changes to the parent directory layout do not affect children. If a directory does not have any layout set, files inherit the layout from the closest ancestor directory that has a layout in the directory structure.
IMPORTANT
When you modify the layout fields of a file, the file must be empty, otherwise an error
occurs.
Prerequisites
Procedure
Syntax
Replace:
Example
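A hedged sketch of setting a layout field, with a hypothetical directory and the cephfs_data_ssd pool name used only as an example:
[root@client ~]# setfattr -n ceph.dir.layout.pool -v cephfs_data_ssd /mnt/cephfs/mydir
For a regular (and empty) file, the equivalent attribute names start with ceph.file.layout instead of ceph.dir.layout.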
Additional Resources
See the table in the Overview of file and directory layouts section of the Red Hat Ceph Storage
File System Guide for more details.
Prerequisites
Procedure
Syntax
Replace
Example
NOTE
A directory does not have an explicit layout until you set it. Consequently, attempting to
view the layout without first setting it fails because there are no changes to display.
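For illustration, with an assumed directory path, viewing a directory layout might look like:
[root@client ~]# getfattr -n ceph.dir.layout /mnt/cephfs/mydir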
Additional Resources
For more information, see Setting file and directory layout fields section in the Red Hat
Ceph Storage File System Guide.
Prerequisites
Procedure
Syntax
Replace
Example
NOTE
Pools in the pool field are indicated by name. However, newly created pools can
be indicated by ID.
Additional Resources
For more information, see File and directory layouts section in the Red Hat Ceph Storage File
System Guide.
NOTE
When you set a file layout, you cannot change or remove it.
Prerequisites
Procedure
Syntax
Example
Syntax
Example
NOTE
The pool_namespace field is the only field you can remove separately.
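As a hedged sketch with an assumed directory path, removing the layout, or only its pool_namespace field, might look like:
[root@client ~]# setfattr -x ceph.dir.layout /mnt/cephfs/mydir
[root@client ~]# setfattr -x ceph.dir.layout.pool_namespace /mnt/cephfs/mydir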
Additional Resources
CHAPTER 9. CEPH FILE SYSTEM SNAPSHOTS
9.1. PREREQUISITES
A running, and healthy Red Hat Ceph Storage cluster.
WARNING
Each Ceph Metadata Server (MDS) cluster allocates the snap identifiers
independently. Using snapshots for multiple Ceph File Systems that are sharing a
single pool causes snapshot collisions, and results in missing file data.
Additional Resources
See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File
System Guide for more details.
See the Creating a snapshot schedule for a Ceph File System section in the Red Hat
Ceph Storage File System Guide for more details.
NOTE
Prerequisites
Procedure
Example
Syntax
Example
Syntax
mkdir NEW_DIRECTORY_PATH
Example
This example creates the new-snaps subdirectory, which informs the Ceph Metadata Server (MDS) to start making snapshots.
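A minimal sketch of that step, assuming the file system is mounted at /mnt/cephfs and that snapshots are enabled on it:
[root@client ~]# mkdir /mnt/cephfs/.snap/new-snaps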
a. To delete snapshots:
Syntax
rmdir NEW_DIRECTORY_PATH
Example
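For illustration, with the same assumed mount point:
[root@client ~]# rmdir /mnt/cephfs/.snap/new-snaps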
IMPORTANT
Additional Resources
See the Ceph File System snapshot schedules section in the Red Hat Ceph Storage File System
Guide for more details.
See the Ceph File System snapshots section in the Red Hat Ceph Storage File System Guide for
more details.
CHAPTER 10. CEPH FILE SYSTEM SNAPSHOT SCHEDULING
10.1. PREREQUISITES
A running, and healthy Red Hat Ceph Storage cluster.
IMPORTANT
When a storage cluster is under normal load, the scheduler keeps snapshots apart based precisely on the specified time. When the Ceph Manager is under a heavy load, it is possible that a snapshot might not get scheduled right away, resulting in a slightly delayed snapshot. If this happens, then the next scheduled snapshot acts as if there was no delay. Scheduled snapshots that are delayed do not cause drift in the overall schedule.
Usage
Scheduling snapshots for a Ceph File System (CephFS) is managed by the snap_schedule Ceph Manager module. This module provides an interface to add, query, and delete snapshot schedules, and to manage the retention policies. This module also implements the ceph fs snap-schedule command, with several subcommands to manage schedules and retention policies. All of the subcommands take the CephFS volume path and subvolume path arguments to specify the file system path when using multiple Ceph File Systems. If the CephFS volume path is not specified, the argument defaults to the first file system listed in the fs_map; if the subvolume path is not specified, the argument defaults to nothing.
Snapshot schedules are identified by the file system path, the repeat interval, and the start time. The repeat interval defines the time between two subsequent snapshots. The interval format is a number plus a time designator: h(our), d(ay), or w(eek). For example, an interval of 4h means one snapshot every four hours. The start time is a string value in the ISO format, %Y-%m-%dT%H:%M:%S, and if not specified, the start time uses a default value of last midnight. For example, if you schedule a snapshot at 14:45, using the default start time value, with a repeat interval of 1h, the first snapshot is taken at 15:00.
Retention policies are identified by the file system path and the retention policy specification. Defining a retention policy consists of either a number plus a time designator, or concatenated pairs in the format COUNT TIME_PERIOD. The policy ensures that a number of snapshots is kept, and that the snapshots are at least the specified time period apart. The time period designators are: h(our), d(ay), w(eek), m(onth), y(ear), and n. The n time period designator is a special modifier, which means keep the last number of snapshots regardless of timing. For example, 4d means keeping four snapshots that are at least one day, or longer, apart from each other.
Additional Resources
See the Creating a snapshot for a Ceph File System section in the Red Hat Ceph Storage File
System Guide for more details.
See the Creating a snapshot schedule for a Ceph File System section in the Red Hat
Ceph Storage File System Guide for more details.
A CephFS path can only have one retention policy, but a retention policy can have multiple count-time
period pairs.
NOTE
Once the scheduler module is enabled, running the ceph fs snap-schedule command
displays the available subcommands and their usage format.
Prerequisites
Procedure
Example
Example
Example
Syntax
Example
NOTE
This example creates a snapshot schedule for the path /cephfs within the file system mycephfs, snapshotting every four hours, starting on 16 May 2022 at 2:00 PM.
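A hedged sketch of the command behind this example; the --fs argument is shown because the example names the mycephfs file system:
[ceph: root@host02 /]# ceph fs snap-schedule add /cephfs 4h 2022-05-16T14:00:00 --fs mycephfs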
Syntax
Example
Syntax
Example
Syntax
Example
This example displays the status of the snapshot schedule for the CephFS /cephfs path in JSON format. If no format is specified, the default is plain text.
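As an illustrative sketch of that command:
[ceph: root@host02 /]# ceph fs snap-schedule status /cephfs --format=json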
Additional Resources
See the Ceph File System snapshot schedules section in the Red Hat Ceph Storage File System
Guide for more details.
See the Ceph File System snapshots section in the Red Hat Ceph Storage File System Guide for
more details.
Schedules are considered different if their repeat intervals and start times are different.
Add a snapshot schedule for a CephFS file path that does not exist yet. A CephFS path can only have
one retention policy, but a retention policy can have multiple count-time period pairs.
NOTE
Once the scheduler module is enabled, running the ceph fs snap-schedule command
displays the available subcommands and their usage format.
IMPORTANT
Currently, only subvolumes that belong to the default subvolume group can be scheduled
for snapshotting.
Prerequisites
A working Red Hat Ceph Storage cluster with Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
Syntax
Example
[ceph: root@host02 /]# ceph fs snap-schedule add /.. 4h 2022-05-16T14:00:00 --fs cephfs --subvol subvol_1
Schedule set for path /..
NOTE
This example creates a snapshot schedule for the subvolume path, snapshotting every four hours, starting on 16 May 2022 at 2:00 PM.
Syntax
Example
Syntax
Example
/volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h
Syntax
Example
This example displays the status of the snapshot schedule for the
/volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. path in JSON
format.
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
This example activates all schedules for the CephFS /cephfs path.
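A minimal sketch of that command:
[ceph: root@host02 /]# ceph fs snap-schedule activate /cephfs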
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
This example deactivates the daily snapshots for the /cephfs path, thereby pausing any further
snapshot creation.
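As a hedged sketch, where 1d selects the daily schedule mentioned above:
[ceph: root@host02 /]# ceph fs snap-schedule deactivate /cephfs 1d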
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
This section provides the steps to remove a snapshot schedule for a Ceph File System (CephFS).
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
This example removes the specific snapshot schedule for the /cephfs volume that snapshots every four hours and started on 16 May 2022 at 2:00 PM.
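A hedged sketch of that removal:
[ceph: root@host02 /]# ceph fs snap-schedule remove /cephfs 4h 2022-05-16T14:00:00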
Syntax
Example
This example removes all the snapshot schedules for the /cephfs volume path.
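And, as a sketch, removing every schedule on the path:
[ceph: root@host02 /]# ceph fs snap-schedule remove /cephfs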
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
Prerequisites
A working Red Hat Ceph Storage cluster with a Ceph File System (CephFS) deployed.
Procedure
Syntax
Example
CHAPTER 11. CEPH FILE SYSTEM MIRRORS
11.1. PREREQUISITES
The source and the target storage clusters must be running Red Hat Ceph Storage 5.0 or later.
Management of CephFS mirrors is done by the CephFS mirroring daemon (cephfs-mirror). Snapshot data is synchronized by doing a bulk copy to the remote CephFS. The order in which snapshot pairs are synchronized is based on their creation, using the snap-id.
IMPORTANT
IMPORTANT
Red Hat supports running only one cephfs-mirror daemon per storage cluster.
NOTE
The time taken for synchronizing to the remote storage cluster depends on the file size and the total number of files in the mirroring path.
Prerequisites
The source and the target storage clusters must be healthy and running Red Hat Ceph Storage
5.0 or later.
Root-level access to a Ceph Monitor node in the source and the target storage clusters.
Procedure
Syntax
Example
This command creates a Ceph user called cephfs-mirror, and deploys the cephfs-mirror daemon on the given node.
2. On the target storage cluster, create a user for each CephFS peer:
Syntax
Example
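A hedged sketch of such a user, borrowing the client.mirror_remote name that upstream Ceph documentation commonly uses (an assumption here, not a requirement), and assuming a file system named cephfs:
[ceph: root@host01 /]# ceph fs authorize cephfs client.mirror_remote / rwps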
Example
Syntax
Example
Syntax
Example
WARNING
Example
Syntax
Example
Copy the token string between the double quotes for use in the next step.
6. On the source storage cluster, import the bootstrap token from the target storage cluster:
Syntax
Example
Syntax
Example
Syntax
Example
NOTE
See the Viewing the mirror status for a Ceph File System link in the Additional
Resources section of this procedure on how to find the peer UUID value.
Syntax
Example
IMPORTANT
Only absolute paths inside the Ceph File System are valid.
NOTE
The Ceph Manager mirroring module normalizes the path. For example, the
/d1/d2/../dN directories are equivalent to /d1/d2. Once a directory has been
added for mirroring, its ancestor directories and subdirectories are prevented
from being added for mirroring.
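For illustration, using the /home/user1 directory that appears in the status output later in this chapter and an assumed file system name of cephfs:
[ceph: root@host01 /]# ceph fs snapshot mirror add cephfs /home/user1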
a. Optional. To stop snapshot mirroring for a directory, use the following command:
Syntax
Example
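A minimal sketch of stopping mirroring for the same assumed directory:
[ceph: root@host01 /]# ceph fs snapshot mirror remove cephfs /home/user1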
Additional Resources
See the Viewing the mirror status for a Ceph File System section in the Red Hat Ceph Storage
File System Guide for more information.
See the Ceph File System mirroring section in the Red Hat Ceph Storage File System Guide for
more information.
Prerequisites
Procedure
Example
2. Find the Ceph File System ID on the node running the CephFS mirroring daemon:
Syntax
Example
Syntax
Example
Syntax
Example
{
"/home/user1": {
"state": "idle", 1
"last_synced_snap": {
"id": 120,
"name": "snap1",
"sync_duration": 0.079997898999999997,
"sync_time_stamp": "274900.558797s"
},
"snaps_synced": 2, 2
"snaps_deleted": 0, 3
"snaps_renamed": 0
}
}
3 failed means the directory has hit the upper limit of consecutive failures.
The default number of consecutive failures is 10, and the default retry interval is 60 seconds.
The synchronization stats: snaps_synced, snaps_deleted, and snaps_renamed are reset when the
cephfs-mirror daemon restarts.
Additional Resources
See the Ceph File System mirrors section in the Red Hat Ceph Storage File System Guide for
more information.
APPENDIX A. HEALTH MESSAGES FOR THE CEPH FILE SYSTEM
"Behind on trimming…"
Code: MDS_HEALTH_TRIM
CephFS maintains a metadata journal that is divided into log segments. The length of journal (in
number of segments) is controlled by the mds_log_max_segments setting. When the number of
segments exceeds that setting, the MDS starts writing back metadata so that it can remove (trim)
the oldest segments. If this process is too slow, or a software bug is preventing trimming, then this
health message appears. The threshold for this message to appear is for the number of segments to
be double mds_log_max_segments.
MDS cache. When the MDS needs to shrink its cache to stay within its own cache size limits, the MDS
sends messages to clients to shrink their caches too. If a client is unresponsive, it can prevent the
MDS from properly staying within its cache size, and the MDS might eventually run out of memory
and terminate unexpectedly. This message appears if a client has taken more time to comply than
the time specified by the mds_recall_state_timeout option (default is 60 seconds). See Metadata
Server cache size limits section for details.
If the administrator forces the MDS to enter into read-only mode by using the
force_readonly administration socket command.
The MDS has failed to trim its cache to comply with the limit set by the administrator. If the MDS cache
becomes too large, the daemon might exhaust available memory and terminate unexpectedly. By
default, this message appears if the MDS cache size is 50% greater than its limit.
Additional Resources
See the Metadata Server cache size limits section in the Red Hat Ceph Storage File System
Guide for details.
APPENDIX B. METADATA SERVER DAEMON CONFIGURATION REFERENCE
mon_force_standby_active
Description
If set to true, monitors force MDS in standby replay mode to be active. Set under the [mon] or
[global] section in the Ceph configuration file.
Type
Boolean
Default
true
max_mds
Description
The number of active MDS daemons during cluster creation. Set under the [mon] or [global]
section in the Ceph configuration file.
Type
32-bit Integer
Default
1
mds_cache_memory_limit
Description
The memory limit the MDS enforces for its cache. Red Hat recommends using this parameter instead of the mds_cache_size parameter.
Type
64-bit Integer Unsigned
Default
1073741824
mds_cache_reservation
Description
The cache reservation, memory or inodes, for the MDS cache to maintain. The value is a
percentage of the maximum cache configured. Once the MDS begins dipping into its reservation,
it recalls client state until its cache size shrinks to restore the reservation.
Type
Float
Default
0.05
mds_cache_size
Description
The number of inodes to cache. A value of 0 indicates an unlimited number. Red Hat recommends
to use the mds_cache_memory_limit to limit the amount of memory the MDS cache uses.
Type
32-bit Integer
Default
0
mds_cache_mid
Description
The insertion point for new items in the cache LRU, from the top.
Type
Float
Default
0.7
mds_dir_commit_ratio
Description
The fraction of a directory that contains erroneous information before Ceph commits using a full update instead of a partial update.
Type
Float
Default
0.5
mds_dir_max_commit_size
Description
The maximum size of a directory update in MB before Ceph breaks the directory into smaller
transactions.
Type
32-bit Integer
Default
90
mds_decay_halflife
Description
The half-life of the MDS cache temperature.
Type
Float
Default
5
mds_beacon_interval
Description
The frequency, in seconds, of beacon messages sent to the monitor.
Type
Float
Default
4
mds_beacon_grace
Description
The interval without beacons before Ceph declares an MDS laggy and possibly replaces it.
Type
Float
Default
15
mds_blocklist_interval
Description
The blocklist duration for failed MDS daemons in the OSD map.
Type
Float
Default
24.0*60.0
mds_session_timeout
Description
The interval, in seconds, of client inactivity before Ceph times out capabilities and leases.
Type
Float
Default
60
mds_session_autoclose
Description
The interval, in seconds, before Ceph closes a laggy client’s session.
Type
Float
Default
300
mds_reconnect_timeout
Description
The interval, in seconds, to wait for clients to reconnect during a MDS restart.
Type
Float
Default
45
mds_tick_interval
Description
How frequently the MDS performs internal periodic tasks.
Type
Float
Default
5
mds_dirstat_min_interval
Description
The minimum interval, in seconds, to try to avoid propagating recursive statistics up the tree.
Type
Float
Default
1
mds_scatter_nudge_interval
Description
How quickly changes in directory statistics propagate up.
Type
Float
Default
5
mds_client_prealloc_inos
Description
The number of inode numbers to preallocate per client session.
Type
32-bit Integer
Default
1000
mds_early_reply
Description
Determines whether the MDS allows clients to see request results before they commit to the
journal.
Type
Boolean
Default
true
mds_use_tmap
Description
Use trivialmap for directory updates.
Type
Boolean
Default
true
mds_default_dir_hash
Description
The function to use for hashing files across directory fragments.
Type
32-bit Integer
Default
2, that is, rjenkins
mds_log
Description
Set to true if the MDS should journal metadata updates. Disable for benchmarking only.
Type
Boolean
Default
true
mds_log_skip_corrupt_events
Description
Determines whether the MDS tries to skip corrupt journal events during journal replay.
Type
Boolean
Default
false
mds_log_max_events
Description
The maximum events in the journal before Ceph initiates trimming. Set to -1 to disable limits.
Type
32-bit Integer
Default
-1
mds_log_max_segments
Description
The maximum number of segments or objects in the journal before Ceph initiates trimming. Set
to -1 to disable limits.
Type
32-bit Integer
Default
30
mds_log_max_expiring
Description
The maximum number of segments to expire in parallel.
Type
32-bit Integer
Default
20
mds_log_eopen_size
Description
The maximum number of inodes in an EOpen event.
Type
32-bit Integer
Default
100
mds_bal_sample_interval
Description
Determines how frequently to sample directory temperature when making fragmentation
decisions.
Type
Float
Default
3
mds_bal_replicate_threshold
Description
The maximum temperature before Ceph attempts to replicate metadata to other nodes.
Type
Float
Default
8000
mds_bal_unreplicate_threshold
Description
The minimum temperature before Ceph stops replicating metadata to other nodes.
Type
Float
Default
0
mds_bal_frag
Description
Determines whether or not the MDS fragments directories.
Type
Boolean
Default
false
mds_bal_split_size
Description
The maximum directory size before the MDS splits a directory fragment into smaller bits. The root
directory has a default fragment size limit of 10000.
Type
32-bit Integer
Default
10000
mds_bal_split_rd
Description
The maximum directory read temperature before Ceph splits a directory fragment.
Type
Float
Default
25000
mds_bal_split_wr
Description
The maximum directory write temperature before Ceph splits a directory fragment.
Type
Float
Default
10000
mds_bal_split_bits
Description
The number of bits by which to split a directory fragment.
Type
32-bit Integer
Default
3
mds_bal_merge_size
Description
The minimum directory size before Ceph tries to merge adjacent directory fragments.
Type
32-bit Integer
Default
50
mds_bal_merge_rd
Description
The minimum read temperature before Ceph merges adjacent directory fragments.
Type
Float
Default
1000
mds_bal_merge_wr
Description
The minimum write temperature before Ceph merges adjacent directory fragments.
Type
Float
Default
1000
mds_bal_interval
Description
The frequency, in seconds, of workload exchanges between MDS nodes.
Type
32-bit Integer
Default
10
mds_bal_fragment_interval
Description
The frequency, in seconds, of adjusting directory fragmentation.
Type
32-bit Integer
Default
5
mds_bal_idle_threshold
Description
The minimum temperature before Ceph migrates a subtree back to its parent.
Type
Float
Default
0
mds_bal_max
Description
The number of iterations to run the balancer before Ceph stops. For testing purposes only.
Type
32-bit Integer
Default
-1
mds_bal_max_until
Description
The number of seconds to run the balancer before Ceph stops. For testing purposes only.
Type
32-bit Integer
Default
-1
mds_bal_mode
Description
The method for calculating MDS load:
1 = Hybrid.
3 = CPU load.
Type
32-bit Integer
Default
0
mds_bal_min_rebalance
Description
The minimum subtree temperature before Ceph migrates.
Type
Float
Default
0.1
mds_bal_min_start
Description
The minimum subtree temperature before Ceph searches a subtree.
Type
Float
Default
0.2
mds_bal_need_min
Description
The minimum fraction of target subtree size to accept.
Type
Float
Default
0.8
mds_bal_need_max
Description
The maximum fraction of target subtree size to accept.
Type
Float
Default
1.2
mds_bal_midchunk
Description
Ceph migrates any subtree that is larger than this fraction of the target subtree size.
Type
Float
Default
0.3
mds_bal_minchunk
Description
Ceph ignores any subtree that is smaller than this fraction of the target subtree size.
Type
Float
Default
0.001
mds_bal_target_removal_min
Description
The minimum number of balancer iterations before Ceph removes an old MDS target from the
MDS map.
Type
32-bit Integer
Default
5
mds_bal_target_removal_max
Description
The maximum number of balancer iterations before Ceph removes an old MDS target from the
MDS map.
Type
32-bit Integer
Default
10
mds_replay_interval
Description
The journal poll interval when in standby-replay mode for a hot standby.
Type
Float
Default
1
mds_shutdown_check
Description
The interval for polling the cache during MDS shutdown.
Type
32-bit Integer
Default
0
mds_thrash_exports
Description
Ceph randomly exports subtrees between nodes. For testing purposes only.
Type
32-bit Integer
Default
0
mds_thrash_fragments
Description
Ceph randomly fragments or merges directories.
Type
32-bit Integer
Default
0
mds_dump_cache_on_map
Description
Ceph dumps the MDS cache contents to a file on each MDS map.
Type
Boolean
Default
false
mds_dump_cache_after_rejoin
Description
Ceph dumps MDS cache contents to a file after rejoining the cache during recovery.
Type
Boolean
Default
false
mds_verify_scatter
Description
Ceph asserts that various scatter/gather invariants are true. For developer use only.
Type
Boolean
Default
false
mds_debug_scatterstat
Description
Ceph asserts that various recursive statistics invariants are true. For developer use only.
Type
Boolean
Default
false
mds_debug_frag
Description
Ceph verifies directory fragmentation invariants when convenient. For developer use only.
Type
Boolean
Default
false
mds_debug_auth_pins
Description
The debug authentication pin invariants. For developer use only.
Type
Boolean
Default
false
mds_debug_subtrees
Description
Debugging subtree invariants. For developer use only.
Type
Boolean
Default
false
mds_kill_mdstable_at
Description
Ceph injects a MDS failure in a MDS Table code. For developer use only.
Type
32-bit Integer
Default
0
mds_kill_export_at
Description
Ceph injects a MDS failure in the subtree export code. For developer use only.
Type
32-bit Integer
Default
0
mds_kill_import_at
Description
Ceph injects a MDS failure in the subtree import code. For developer use only.
Type
32-bit Integer
Default
0
mds_kill_link_at
Description
Ceph injects a MDS failure in a hard link code. For developer use only.
Type
32-bit Integer
Default
0
mds_kill_rename_at
Description
Ceph injects a MDS failure in the rename code. For developer use only.
Type
32-bit Integer
Default
0
mds_wipe_sessions
Description
Ceph deletes all client sessions on startup. For testing purposes only.
Type
Boolean
Default
0
mds_wipe_ino_prealloc
Description
Ceph deletes inode preallocation metadata on startup. For testing purposes only.
Type
Boolean
Default
0
mds_skip_ino
Description
The number of inode numbers to skip on startup. For testing purposes only.
Type
32-bit Integer
Default
0
mds_standby_for_name
Description
The MDS daemon is a standby for another MDS daemon of the name specified in this setting.
Type
String
Default
N/A
mds_standby_for_rank
Description
An instance of the MDS daemon is a standby for another MDS daemon instance of this rank.
Type
32-bit Integer
Default
-1
mds_standby_replay
Description
Determines whether the MDS daemon polls and replays the log of an active MDS when used as a
hot standby.
Type
Boolean
Default
false
APPENDIX C. JOURNALER CONFIGURATION REFERENCE
journaler_write_head_interval
Description
How frequently to update the journal head object.
Type
Integer
Required
No
Default
15
journaler_prefetch_periods
Description
How many stripe periods to read ahead on journal replay.
Type
Integer
Required
No
Default
10
journaler_prezero_periods
Description
How many stripe periods to zero ahead of write position.
Type
Integer
Required
No
Default
10
journaler_batch_interval
Description
Maximum additional latency in seconds to incur artificially.
Type
Double
Required
No
Default
.001
journaler_batch_max
Description
Maximum bytes that will be delayed flushing.
Type
64-bit Unsigned Integer
Required
No
Default
0
APPENDIX D. CEPH FILE SYSTEM CLIENT CONFIGURATION REFERENCE
client_acl_type
Description
Set the ACL type. Currently, the only possible values are posix_acl, to enable POSIX ACL, or an empty string. This option only takes effect when fuse_default_permissions is set to false.
Type
String
Default
"" (no ACL enforcement)
client_cache_mid
Description
Set the client cache midpoint. The midpoint splits the least recently used lists into a hot and warm
list.
Type
Float
Default
0.75
client_cache_size
Description
Set the number of inodes that the client keeps in the metadata cache.
Type
Integer
Default
16384 (16 MB)
client_caps_release_delay
Description
Set the delay between capability releases in seconds. The delay sets how many seconds a client
waits to release capabilities that it no longer needs in case the capabilities are needed for another
user space operation.
Type
Integer
Default
5 (seconds)
client_debug_force_sync_read
Description
If set to true, clients read data directly from OSDs instead of using a local page cache.
Type
Boolean
Default
false
client_dirsize_rbytes
Description
If set to true, use the recursive size of a directory (that is, total of all descendants).
Type
Boolean
Default
true
client_max_inline_size
Description
Set the maximum size of inlined data stored in a file inode rather than in a separate data object in
RADOS. This setting only applies if the inline_data flag is set on the MDS map.
Type
Integer
Default
4096
client_metadata
Description
Comma-delimited strings for client metadata sent to each MDS, in addition to the automatically
generated version, host name, and other metadata.
Type
String
Default
"" (no additional metadata)
client_mount_gid
Description
Set the group ID of CephFS mount.
Type
Integer
Default
-1
client_mount_timeout
Description
Set the timeout for CephFS mount in seconds.
Type
Float
Default
300.0
client_mount_uid
Description
Set the user ID of CephFS mount.
Type
Integer
Default
-1
client_mountpoint
Description
An alternative to the -r option of the ceph-fuse command.
Type
String
Default
/
client_oc
Description
Enable object caching.
Type
Boolean
Default
true
client_oc_max_dirty
Description
Set the maximum number of dirty bytes in the object cache.
Type
Integer
Default
104857600 (100MB)
client_oc_max_dirty_age
Description
Set the maximum age in seconds of dirty data in the object cache before writeback.
Type
Float
Default
5.0 (seconds)
client_oc_max_objects
Description
Set the maximum number of objects in the object cache.
Type
Integer
Default
1000
client_oc_size
Description
Set how many bytes of data the client will cache.
Type
Integer
Default
209715200 (200 MB)
client_oc_target_dirty
Description
Set the target size of dirty data. Red Hat recommends keeping this number low.
Type
Integer
Default
8388608 (8MB)
client_permissions
Description
Check client permissions on all I/O operations.
Type
Boolean
Default
true
client_quota_df
Description
Report root directory quota for the statfs operation.
Type
Boolean
Default
true
client_readahead_max_bytes
Description
Set the maximum number of bytes that the kernel reads ahead for future read operations.
Overridden by the client_readahead_max_periods setting.
Type
Integer
Default
0 (unlimited)
client_readahead_max_periods
Description
Set the number of file layout periods (object size * number of stripes) that the kernel reads
ahead. Overrides the client_readahead_max_bytes setting.
Type
Integer
Default
4
client_readahead_min
Description
Set the minimum number of bytes that the kernel reads ahead.
Type
Integer
Default
131072 (128KB)
client_snapdir
Description
Set the snapshot directory name.
Type
String
Default
".snap"
client_tick_interval
Description
Set the interval in seconds between capability renewal and other upkeep.
Type
Float
Default
1.0
client_use_random_mds
Description
Choose random MDS for each request.
Type
Boolean
Default
false
fuse_default_permissions
Description
When set to false, the ceph-fuse utility does its own permissions checking, instead of relying on the permissions enforcement in FUSE. Set to false together with the client_acl_type=posix_acl option to enable POSIX ACL.
Type
Boolean
Default
true
DEVELOPER OPTIONS
These options are internal. They are listed here only to complete the list of options.
client_debug_getattr_caps
Description
Check if the reply from the MDS contains required capabilities.
Type
Boolean
Default
false
client_debug_inject_tick_delay
Description
Add artificial delay between client ticks.
Type
Integer
Default
0
client_inject_fixed_oldest_tid
Type
Boolean
Default
false
client_inject_release_failure
Type
Boolean
Default
false
client_trace
Description
The path to the trace file for all file operations. The output is designed to be used by the Ceph
synthetic client. See the ceph-syn(8) manual page for details.
Type
String
Default
"" (disabled)