Tutorial CEPH - Redhat
CEPH-101
Revision 02-0514
MSST 2014
COURSE OUTLINE
1 Module 1 - Course Introduction
2 Module 2 - Ceph History and Components
3 Module 3 - Ceph Data Placement
4 Module 4 - RADOS
  4.3 Replication
5 Module 5 - Ceph Block Storage (Ceph RBD)
6 Module 6 - Ceph File systems (CephFS)
7 Module 7 - Creating a Ceph Storage Cluster
Course Overview
Course Objectives
After completing this course, delegates should be able to:
Course Agenda
Module 1 - Course Introduction
Module 2 - Ceph History and Components
Module 3 - Ceph Data Placement
Module 4 - RADOS Object Store
Module 5 - Ceph Block Storage (Ceph RBD)
Module 6 - Ceph File systems (CephFS)
Module 7 - Creating a Ceph Storage Cluster
Course Prerequisites
Course Catalog
CEPH-100 - Ceph Fundamentals (ILT)
CEPH-101 - Ceph Essentials (WBT)
CEPH-110 - Ceph Operations & Tuning (ILT & VCT)
CEPH-120 - Ceph and OpenStack (ILT & VCT)
CEPH-130 - Ceph Unified Storage for OpenStack (VCT)
CEPH-200 - Ceph Open Source Development (ILT)
Course Material
Course Material
PDF files
-
Course Material
How to add a note to your PDF files
End Module 1
Module 2 - Ceph History and Components
Module Objectives
By the end of this module, you will be able to:
The Ceph cluster components
librados
The Ceph Gateway
The Ceph Block Device
The Ceph File System
Storage Challenges
Storage Costs
Money
-
Time
-
Ceph Delivers
Ceph: The Future Of Storage
A new philosophy
-
A new design
-
Open Source
Community-focused equals strong, sustainable ecosystem
Scalable
No single point of failure
Software-based
Self-managing
Flexible
Unified
All the analysts will tell you that we're facing a data explosion. If you are responsible for managing data for your company, you don't need the analysts to tell you that. As disks become less expensive, it becomes easier for users to generate content. And that content must be managed, protected, and backed up so that it is available to your users whenever they request it.
Ceph: Technological Foundations
Built to address the following challenges
Cluster Components
After completing this section you will be able to:
OSD
Monitors
Ceph journal
Ceph Cluster
Monitors
Note 1: Ceph Monitors are daemons. The primary role of a monitor is to maintain the state of the cluster by managing critical Ceph Cluster state and configuration information. The Ceph Monitors maintain a master copy of the CRUSH Map, and Ceph daemons and clients can check in periodically with the monitors to be sure they have the most recent copy of the map.
Note 2: The monitors must establish a consensus regarding the state of the cluster, which is why there must be an odd number of monitors.
Note 3: In critical environments, and to provide even more reliability and fault tolerance, it can be advisable to run up to 5 Monitors.
For the Ceph Storage Cluster to be operational and accessible, more than half of the Monitors must be running and operational. If the number of running Monitors drops below that majority, Ceph, which always favors the integrity of the data over its accessibility, will make the complete Ceph Storage Cluster inaccessible to any client.
For your information, the Ceph Storage Cluster maintains different maps for its operations:
- MON Map
- OSD Map
- CRUSH Map
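On a running cluster, a quick way to see the monitors and whether they have quorum is the following set of illustrative commands (run from any admin node):
# ceph mon stat
# ceph quorum_status
# ceph mon dump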
File System:
-
Responsible for replication
Responsible for coherency
Responsible for re-balancing
Responsible for recovery
Atomic transactions
Synchronization and notifications
Send computation to the data
Note 1: The overall design goal of the OSD is to bring the computing power as close as possible to the data and to let it perform as much work as it can. For now, it performs the functions listed in the bullet list, depending on its role (primary or secondary), but in the future Ceph will probably leverage the close link between the OSD and the data to extend the computational power of the OSD.
For example: the OSD could drive the creation of the thumbnail of an object rather than having the client be responsible for such an operation.
Ceph requires a modern Linux file system. We have tested XFS, btrfs, and ext4, and these are the supported file systems. Full-size and extensive tests have been performed on btrfs, but it is not recommended for production environments.
Right now, for stability, the recommendation is to use XFS.
Ceph Journal
Ceph OSDs
-
Ceph Journal
Ceph OSDs
-
Note 1: A write to the Ceph cluster is acknowledged once the minimum number of replica journals have been written to.
Note 2: The OSD stops writing every few seconds and synchronizes the journal with the file system commits performed, so it can trim operations from the journal and reclaim the space on the journal disk volume.
Note 3: The replay sequence will start after the last sync operation, as previous journal records were trimmed out.
Communication methods
Communication methods
Note 1: Service interfaces built on top of this native interface include the Ceph Block Device, the Ceph Gateway, and the Ceph File System.
Note 2: Amazon S3 and OpenStack Swift. The Ceph Gateway is referred to as radosgw.
Note 3: Python module
librados
librados is a native C library that allows applications to work with the Ceph Cluster (RADOS). There are similar libraries
available for C++, Java, Python, Ruby, and PHP.
When applications link with librados, they can interact with the objects in the Ceph Cluster (RADOS) through a native protocol.
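As a minimal sketch of that native access, using the rados command-line tool (itself built on librados) and assuming a pool named testpool already exists:
# echo "hello world" > /tmp/hello.txt
# rados -p testpool put hello-object /tmp/hello.txt
# rados -p testpool ls
# rados -p testpool get hello-object /tmp/hello.copy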
Ceph Gateway
The gateway application sits on top of a web server; it uses the librados library to communicate with the Ceph cluster and writes to the OSD processes directly.
The Ceph Gateway (also known as the RADOS Gateway) is an HTTP REST gateway used to access objects in the Ceph Cluster. It is built on top of librados, implemented as a FastCGI module using libfcgi, and can be used with any FastCGI-capable web server. Because it uses a unified namespace, it is compatible with the Amazon S3 RESTful API and the OpenStack Swift API.
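As an illustrative sketch (the user, bucket, and file names are made up, and the S3 client must first be configured to point at the gateway host rather than amazonaws.com): create a gateway user with radosgw-admin, then use any S3-compatible client such as s3cmd:
# radosgw-admin user create --uid=johndoe --display-name="John Doe"
# s3cmd mb s3://my-bucket
# s3cmd put /tmp/hello.txt s3://my-bucket/hello.txt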
CephFS
The Ceph File System is a parallel file system that provides a massively scalable, single-hierarchy, shared disk.
End Module 2
Module 3 - Ceph Data Placement
Module Objectives
After completing this module you will be able to:
Define CRUSH
Discuss the CRUSH hierarchy
Explain where to find CRUSH rules
Explain how the CRUSH data placement algorithm is used to
determine data placement
Understand Placement Groups in Ceph
Understand Pools in Ceph
What is CRUSH?
A pseudo-random placement algorithm
CRUSH
CRUSH (Controlled Replication Under Scalable Hashing)
Rule-based configuration
-
De-clustered placement
Excellent data re-distribution
Migration proportional to change
What is a PG?
-
A Placement Group (PG) aggregates a series of objects into a group, and maps the group to a series of OSDs.
Without them
-
Extra Benefits
-
The total number of PGs must be adjusted when growing the cluster
As devices leave or join the Ceph cluster, most PGs remain where they are
CRUSH will adjust just enough of the data to ensure uniform distribution
Note 1: Tracking object placement and object metadata on a per-object basis is computationally expensive; i.e., a system with millions of objects cannot realistically track placement on a per-object basis. Placement groups address this barrier to performance and scalability. Additionally, placement groups reduce the number of processes and the amount of per-object metadata Ceph must track when storing and retrieving data.
Note 2: Increasing the number of placement groups reduces the variance in per-OSD load across your cluster. We recommend approximately 50-200 placement groups per OSD to balance out memory and CPU requirements and per-OSD load. For a single pool of objects, you can use the following formula: Total Placement Groups = (OSDs * (50-200)) / number of replicas.
When using multiple data pools for storing objects, you need to ensure that you balance the number of placement groups per pool with the number of placement groups per OSD, so that you arrive at a reasonable total number of placement groups that provides reasonably low variance per OSD without taxing system resources or making the peering process too slow.
1. ceph osd pool set <pool-name> pg_num <pg_num>
2. ceph osd pool set <pool-name> pgp_num <pgp_num>
The pgp_num parameter should be equal to pg_num
The second command will trigger the rebalancing of your data
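As a worked example of the formula above: with 100 OSDs, 3 replicas, and a target of 100 PGs per OSD, Total Placement Groups = (100 * 100) / 3, which is roughly 3333 and is usually rounded up to the next power of two. For a hypothetical pool named mypool:
ceph osd pool set mypool pg_num 4096
ceph osd pool set mypool pgp_num 4096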
Pools
ownership/access
number of object replicas
number of placement groups
the CRUSH rule set to use.
When you first deploy a cluster without creating a pool, Ceph uses the default pools for storing data.
A pool has a default number of replicas. It is currently 2, but the Firefly version will bump the default up to 3.
A pool differs from CRUSH's location-based buckets in that a pool doesn't have a single physical location, and a pool provides you with some additional functionality, including:
Replicas: You can set the desired number of copies/replicas of an object
- A typical configuration stores an object and one additional copy (i.e., size = 2), but you can determine the number of copies/replicas.
Placement Groups: You can set the number of placement groups for the pool.
- A typical configuration uses approximately 100 placement groups per OSD to provide optimal balancing without using up too many computing resources. When setting up multiple pools, be careful to ensure you set a reasonable number of placement groups for both the pool and the cluster as a whole.
CRUSH Rules: When you store data in a pool, a CRUSH rule set mapped to the pool enables CRUSH
- to identify a rule for the placement of the primary object and object replicas in your cluster. You can create a custom CRUSH rule for your pool.
Snapshots: When you create snapshots with ceph osd pool mksnap, you effectively take a snapshot of a particular pool.
Set Ownership: You can set a user ID as the owner of a pool.
Pools
Supply a name
Supply how many PGs can belong to the pool
data
metadata
rbd
To organize data into pools, you can list, create, and remove pools. You can also view the utilization statistics for each pool.
Listing the pools:
ceph osd lspools
Creating the pools:
ceph osd pool create {pool-name} {pg-num} [{pgp-num}]
Deleting the pools:
ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]
Renaming the pools:
ceph osd pool rename {current-pool-name} {new-pool-name}
Statistics for the pools:
rados df
Snapshotting pools:
ceph osd pool mksnap {pool-name} {snap-name}
Removing a snapshot:
ceph osd pool rmsnap {pool-name} {snap-name}
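For example, to create a pool named mypool with 128 placement groups, snapshot it, and check its utilization (the pool and snapshot names are only illustrations):
ceph osd pool create mypool 128 128
ceph osd lspools
ceph osd pool mksnap mypool mysnap
rados df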
Pools
Pool attributes
-
Attributes
size: number of replica objects
min_size: minimum number of replicas available for IO
crash_replay_interval: number of seconds to allow clients to replay acknowledged, but uncommitted, requests
pgp_num: effective number of placement groups to use when calculating data placement
crush_ruleset: ruleset to use for mapping object placement in the cluster (CRUSH Map module)
hashpspool: set/unset the HASHPSPOOL flag on a given pool
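These attributes can be inspected and changed with ceph osd pool get and ceph osd pool set, for example (the pool name is illustrative):
ceph osd pool get mypool size
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2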
To generate the PG id, we use the pool id and a hash of the object name modulo the number of PGs.
The first OSD in the list returned is the primary OSD; the next ones are secondaries.
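You can ask the cluster to show this calculation for any object name with ceph osd map (pool and object names are illustrative):
ceph osd map mypool my-object
The output includes the pool id, the placement group the object name hashes to, and the up/acting set of OSDs; the first OSD listed is the primary.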
The command used to view the CRUSH Map is: ceph osd tree
root
 datacenter
  rack
   host
    osd
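As a hedged illustration of how such a hierarchy looks in a decompiled CRUSH map (the bucket names and weights are made up):
host node1 {
    id -2
    alg straw
    hash 0
    item osd.0 weight 1.000
    item osd.1 weight 1.000
}
rack rack1 {
    id -3
    alg straw
    hash 0
    item node1 weight 2.000
}
root default {
    id -1
    alg straw
    hash 0
    item rack1 weight 2.000
}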
A list of OSDs
A list of the rules to tell CRUSH how data is to be replicated
A default CRUSH map is created when you create the cluster
The default CRUSH Map is not suited for production clusters 1
Note 1: This default CRUSH Map is fine for a sandbox-type installation only! For production clusters, it should be customized
for better management, performance, and data security
up: Running
down: Not running or can't be contacted
in: Holds data
out: Does NOT hold data
As a quick way to remember it, the weight value indicates the proportion of data an OSD will hold if it is up and running
By default, if an OSD has been down for 5 minutes or more, Ceph will start copying its data to other OSDs in order to satisfy the number of replicas the pool must hold.
Remember that if the number of replicas available goes below the min_size pool parameter, no IO will be served.
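This grace period is controlled by the mon osd down out interval option (300 seconds by default). During planned maintenance you can temporarily prevent this re-replication, for example:
# ceph osd set noout
(perform the maintenance, then)
# ceph osd unset noout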
CRUSH
When it comes time to store an object in the cluster (or retrieve one), the client
calculates where it belongs.
CRUSH
The OSDs are always talking to each other (and the monitors)
They know when something is wrong
The 3rd & 5th nodes noticed that the 2nd node on the bottom row is gone
They are also aware that they have replicas of the missing data
CRUSH
Use the CRUSH algorithm to determine how the cluster should look
based on its new state
and move the data to where clients running CRUSH expect it to be
CRUSH
End Module 3
Module 4 - RADOS
RADOS
Module Objectives
What Is Ceph?
A Storage infrastructure:
-
An Object Store:
-
Scalability
Redundancy
Flexibility
Objects are
-
A name
A payload (contents)
Any number of key-value pairs (attributes).
Replication
Object Replication
-
Replication Principle
Monitors (MONs)
Quorum management
-
A C API (librados.h)
A C++ API (librados.hpp)
Ceph Gateway
RADOS Gateway
-
Summary
End Module 4
CEPH-101 : RADOS
Module 5 - Ceph Block Storage (Ceph RBD)
Module Objectives
RBD: Native
The Ceph Block Device interacts with Ceph OSDs using the
librados and librbd libraries.
The Ceph Block Devices are striped over multiple OSDs in a Ceph
Object Store.
Virtualization containers
-
Maps data blocks into objects for storage in the Ceph Object Store.
Inherits librados capabilities such as snapshots and clones
Virtualization containers can boot a VM without transferring the boot image to the VM itself.
The rbdmap config file tells which RBD devices need to be mapped at boot time
As far as the VM is concerned, it sees a block device and is not even aware of the Ceph cluster.
Software requirements
krbd: The kernel rados block device (rbd) module that allows the Linux kernel to access Ceph Block Devices.
librbd: A shared library that allows applications to access Ceph
Block Devices.
QEMU/KVM: is a widely used open source hypervisor. More info on
the project can be found at http://wiki.qemu.org/Main_Page
libvirt: the virtualization API that supports KVM/QEMU and other
hypervisors. Since the Ceph Block Device supports QEMU/KVM, it
can also interface with software that uses libvirt.
You will be dependent on the kernel version for the best performance and for avoiding bugs
A kernel version of at least 3.8 is very highly recommended
To use an RBD directly in the VM itself, you need to:
Install librbd (will also install librados)
Then map the RBD device
The rbd kernel device itself is able to access and use the Linux
page cache to improve performance if necessary.
librbd cannot leverage the Linux page cache for its own use, therefore librbd implements its own caching mechanism.
By default, caching is disabled in librbd.
Note 1: In write-back mode, librbd caching can coalesce contiguous requests for better throughput.
We offer Write Back (aka Cache Enabled, which is the default once caching is activated) and Write Through support.
Be cautious with Write Back, as the host will be caching and will acknowledge the write IO request as soon as the data is placed in the server's local librbd cache.
Write Through is highly recommended for production servers to avoid losing data in case of a server failure.
Consider 2 values: U, the amount of uncommitted (dirty) data in the cache, and M, the maximum dirty byte threshold
Writes are acknowledged immediately if U < M
Otherwise, only after writing data back to disk until U < M
Write-through caching
-
Note 1: In write-back mode it can coalesce contiguous requests for better throughput.
The ceph.conf settings for RBD should be placed in the [client] section of your configuration file.
The settings include (defaults in parentheses):
- rbd cache: Enable RBD caching. Value is true or false (false)
- rbd cache size: The RBD cache size in bytes (32 MB)
- rbd cache max dirty: The dirty byte threshold that triggers write-back. Must be less than rbd cache size (24 MB)
- rbd cache target dirty: The dirty target before the cache starts writing back data. Does not hold write IOs to the cache (16 MB)
- rbd cache max dirty age: Number of seconds dirty bytes are kept in the cache before writing back (1)
- rbd cache writethrough until flush: Start in write-through mode and switch to write-back after the first flush occurs. Value is true or false
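Putting this together, a hedged ceph.conf example that simply restates the defaults listed above, with caching switched on:
[client]
rbd cache = true
rbd cache size = 33554432
rbd cache max dirty = 25165824
rbd cache target dirty = 16777216
rbd cache max dirty age = 1
Setting rbd cache max dirty = 0 forces write-through behaviour even with the cache enabled.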
Snapshots
Snapshots
-
Do not change
Support incremental snapshots
Data is read from the original data
Clones
Clone creation
-
Create snapshot
Protect snapshot
Clone snapshot
Clone behavior
-
Read from it
Write to it
Clone it
Resize it
Clones
Ceph supports
-
Clones are
-
Copies of snapshots
Writable (2)
Never written to
Note 1 : Reads are always served from the original snapshot used to create the clone. Ceph supports many copy-on-write
clones of a snapshot
Note 2 : Snapshots are read-only!
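The clone creation steps listed above map onto rbd commands such as these (the pool, image, and snapshot names are illustrative):
# rbd snap create rbd/myimage@snap1
# rbd snap protect rbd/myimage@snap1
# rbd clone rbd/myimage@snap1 rbd/myclone
The resulting clone can then be mapped, read, written, resized, or cloned again like any other image.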
Clones
Read Operation
-
Note 1 : If data has been updated in the clone, data is read from the clone mounted on the host.
End Module 5
Module 6 - Ceph File Systems (CephFS)
Module Objectives
Describe the methods to store and access data using the Ceph File
System
Explain the purpose of a metadata server cluster (MDS)
Directory hierarchy
File metadata (owner, timestamps, mode, etc.)
Stores metadata in RADOS
Does not access file content
Only required for shared file system (1)
Note 1: The MDS requires a 64-bit OS because of the size of the inodes. This also means that ceph-fuse must also be run from a 64-bit capable client.
Note 2: CephFS also keeps the recursive size of each directory, which will appear at each level (. & .. directory names).
There are 2 ways to mount a file system:
1. The kernel-based tool
2. The ceph-fuse tool (the only alternative supported on kernels that do not have the CephFS support, e.g. 2.6.32)
ceph-fuse is most of the time slower than the CephFS kernel module.
Note 3: To mount with the kernel module, issue mount -t ceph <mon1,mon2,...>, making sure all running MON nodes are listed for MON failure fault tolerance.
To create a snapshot of a file system:
In the .snap directory of the file system, create a directory, and that's it. From the file system root directory tree, issue the mkdir ./.snap/snap_20131218_100000 command.
To delete a snapshot of a file system, remove the corresponding snapshot directory name in the .snap directory, and that's it. From the file system root directory tree, issue the rmdir ./.snap/snap_20131218_100000 command.
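For reference, hedged examples of both mount methods (the monitor names, mount point, and secret file path are illustrative):
# mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# ceph-fuse -m mon1:6789 /mnt/cephfs
The name= and secretfile= mount options are covered later in this course.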
Active
Standby
Are a possibility
This configuration is currently not supported/recommended
MDS functionality
DTP
DTP
Summary
End Module 6
Module 7 - Creating a Ceph Storage Cluster
Module Objectives
ceph-deploy
Manual cluster creation
Getting started
-
[name]
parameter=value 1
Usually you use the global section to enable or disable general options such as cephx authentication.
cephx is the mechanism that will let you set permissions.
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
public network = {network}[, {network}]
cluster network = {network}[, {network}]
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]
osd journal size = 1024
filestore xattr use omap = true ; required for EXT4
[mon]
parameter = value
mon addr
host
The host parameter is used by mkcephfs, so you should not use it, as that command is deprecated.
Keep the ceph.conf file as slim as possible
Since Cuttlefish
Declaring every Monitor is not required
The only mandatory parameters are, in the [global] section
-
/var/lib/ceph/mon/$cluster-`hostname`
A done file
An upstart or sysvinit file for the Monitor to be started
Note 1: The mon_initial_members parameter avoids a split brain during the first start by making sure quorum is gained as soon as possible.
The default install path is /var/lib/ceph/mon/$cluster-`hostname`.
The best practice is to use the default path.
The default MDS path must contain the same files as the MONs
[mds.0]
host = daisy
[mds.1]
host = eric
[osd]
osd data = /var/lib/ceph/osd/$cluster-$id
osd journal = /var/lib/ceph/osd/$cluster-$id/journal
osd journal size = 256 ; Size, in megabytes
; filestore xattr use omap = true ; for ext3/4
[osd.0]
host = daisy
[osd.1]
host = eric
Configuring OSDs
-
Journal parameters
-
As you can see in this slide, data coming from a client to an RBD image will be split among the various OSD processes and underlying drives, as explained on the previous slide.
Image Order
-
The object size is 1 << order; for example, 1 << 22 = 4 MB
Order 12 = 4 KB
Order 13 = 8 KB
The default is order 22 = 4 MB
Note 1: The << C operator is the left bit-shift operator; << shifts the left operand's bits by the right operand value.
A binary value, for example: 1 = 0001. If we do 1 << 2, the resulting value is 4 = 0100.
The opposite operator is >>, the right bit-shift operator.
The advantage of these operators is that they are executed in a single CPU cycle.
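In practice, the order is chosen when an image is created; for example (the image name and size are illustrative):
# rbd create myimage --size 1024 --order 23
creates a 1 GB image whose data is striped over 1 << 23 = 8 MB objects instead of the default 4 MB objects.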
Rollback a snapshot
-
RBD requirements
-
Step by step
-
# modprobe rbd
# mkdir /mnt/mountpoint
# mkfs.ext4 /dev/rbd<x>
# mount /dev/rbd<x> /mnt/mountpoint
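Before the file system step, the image itself must exist and be mapped; a hedged end-to-end sketch (pool and image names are illustrative, and the device node is whatever rbd showmapped reports):
# rbd create foo --size 1024
# rbd map foo
# rbd showmapped
# mkfs.ext4 /dev/rbd0
# mount /dev/rbd0 /mnt/mountpoint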
Step by Step 1
# modprobe rbd
# rbd map foo@s1
# rbd showmapped
# blockdev --getro /dev/rbd<x>
Subdirectory devices/{n}
-
Remember the --pool option can be replaced with the -p option for quicker typing
# rbd unmap /dev/rbd0
# rbd showmapped
Available options
rbd protocol
-
As you can see in this slide, data coming from a client to an RBD image will be split among the various OSD processes and underlying drives, as explained on the previous slide.
Specific parameters
-
Appending : rbd_cache_max_dirty=0
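For example, with QEMU's rbd driver the option can be appended to the drive specification; this is an untested sketch with made-up pool and image names:
-drive format=raw,file=rbd:mypool/myimage:rbd_cache_max_dirty=0
which forces write-through behaviour for that guest even if caching is enabled elsewhere.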
If a client does not respond but did not properly close the image
(such as in the case of a client crash)
-
CephFS: in-kernel
FUSE
FUSE
-
Note
-
Deep mount
-
You will adjust your file system ACLs starting at the root
Note
-
You can specify the MON port number in the mount command
Mount options
-
name=<name>
secretfile=/path/to/file
Mount options:
-
rsize=<bytes>
wsize=<bytes>
If a file
-
If a directory
-
Changing layout
-
-object_size=value in bytes
-stripe_count=value as a decimal integer
-stripe_unit=value in bytes
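One way to read and change these fields is through CephFS's virtual extended attributes; whether this interface or an older helper tool applies depends on your kernel and Ceph version, and the file names here are illustrative:
# getfattr -n ceph.file.layout /mnt/cephfs/existingfile
# setfattr -n ceph.file.layout.object_size -v 8388608 /mnt/cephfs/newfile
# setfattr -n ceph.file.layout.stripe_count -v 2 /mnt/cephfs/newfile
# setfattr -n ceph.file.layout.stripe_unit -v 4194304 /mnt/cephfs/newfile
Layouts can only be changed on newly created, still-empty files.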
Naming
-
# mkdir .snap/<name>
Just copy from the .snap directory tree to the normal tree:
# cp -a .snap/<name>/<file> .
Or, to restore everything, wipe the current tree and copy it back:
# rm -rf ./*
# cp -a .snap/<name>/<file> .
Discard a snapshot
-
# rmdir .snap/<name>
Summary
Deploying a cluster
Configuration file format
Working with Ceph clients
-
rados
rbd
Mounting a CephFS File System
End Module 7
Module Objectives
About the
About the
About the
About the
About the
Please Tell Us
http://www.inktank.com/trainingfeedback
Q&A
Summary
End Module 8
CEPH-101 : Thanks