Ceph Deploy
In this example, configure a Ceph Cluster with 3 Nodes like follows.
Furthermore, each Storage Node has a free block device to use for Ceph.
([/dev/sdb] is used in this example)
|
+----------------------------+----------------------------+
| | |
|10.0.0.51 |10.0.0.52 |10.0.0.53
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [node01.srv.world] | | [node02.srv.world] | | [node03.srv.world] |
| Object Storage +----+ Object Storage +----+ Object Storage |
| Monitor Daemon | | | | |
| Manager Daemon | | | | |
+-----------------------+ +-----------------------+ +-----------------------+
[1] Generate an SSH key-pair on the [Monitor Daemon] Node (called the Admin Node here)
and set it on each Node.
Configure the key-pair with no passphrase as the [root] account here.
If you use a common account, it also needs Sudo configured.
If you set a passphrase on the SSH key-pair, it also needs an SSH Agent configured.
[root@node01 ~]# ssh-keygen -q -N ""
[root@node01 ~]# vi ~/.ssh/config
# define each Node and its SSH user
Host node01
Hostname node01.srv.world
User root
Host node02
Hostname node02.srv.world
User root
Host node03
Hostname node03.srv.world
User root
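Next, copy the public key to each Node. For example, with [ssh-copy-id] like follows
(a minimal sketch, not necessarily the exact commands used here; repeat for [node02] and [node03]).
[root@node01 ~]# ssh-copy-id node01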
root@node01.srv.world's password:
# generate a unique ID (fsid) for the new Cluster
[root@node01 ~]# uuidgen
f2e52449-e87b-4786-981e-1f1f58186a7c
# create new config
# file name ⇒ (any Cluster Name).conf
# set Cluster Name [ceph] (default) on this example ⇒ [ceph.conf]
[root@node01 ~]# vi /etc/ceph/ceph.conf
[global]
# specify cluster network for monitoring
cluster network = 10.0.0.0/24
# specify public network
public network = 10.0.0.0/24
# specify UUID generated above
fsid = f2e52449-e87b-4786-981e-1f1f58186a7c
# specify IP address of Monitor Daemon
mon host = 10.0.0.51
# specify Hostname of Monitor Daemon
mon initial members = node01
osd pool default crush rule = -1
# mon.(Node name)
[mon.node01]
# specify Hostname of Monitor Daemon
host = node01
# specify IP address of Monitor Daemon
mon addr = 10.0.0.51
# allow to delete pools
mon allow pool delete = true
# generate secret key for Cluster monitoring
[root@node01 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring
# generate secret key for Cluster admin
[root@node01 ~]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring
# generate key for bootstrap
[root@node01 ~]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
# import generated keys into the Monitor keyring
[root@node01 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
[root@node01 ~]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# create a key for Manager Daemon ([$NODENAME] is the target Node's hostname, [node01] on here)
[root@node01 ~]# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.node01]
key = AQB7seJk/PK8ARAAnPnPxdr+6Npqxz92J3flng==
[root@node01 ~]# ceph auth get-or-create mgr.node01 > /etc/ceph/ceph.mgr.admin.keyring
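With the keyring in place, the Manager Daemon can be started, for example like follows
(a minimal sketch; it assumes the default [ceph] cluster name and that the Monitor Daemon is already configured and running).
# place the keyring in the Manager Daemon's data directory and start the service
[root@node01 ~]# mkdir -p /var/lib/ceph/mgr/ceph-node01
[root@node01 ~]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-node01/keyring
[root@node01 ~]# chown -R ceph:ceph /var/lib/ceph/mgr/ceph-node01
[root@node01 ~]# systemctl enable --now ceph-mgr@node01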
If SELinux is enabled, it also needs a policy module that allows rules like follows.
require {
type ceph_t;
type ptmx_t;
type initrc_var_run_t;
type sudo_exec_t;
type chkpwd_exec_t;
type shadow_t;
class file { execute execute_no_trans lock getattr map open read };
class capability { audit_write sys_resource };
class process setrlimit;
class netlink_audit_socket { create nlmsg_relay };
class chr_file getattr;
}
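A policy module containing these rules can be generated from the audit log and loaded, for example like follows
(a sketch; the [cephmon] module name and the [audit2allow] workflow are assumptions, not taken from this article).
[root@node01 ~]# grep ceph /var/log/audit/audit.log | audit2allow -M cephmon
[root@node01 ~]# semodule -i cephmon.pp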
# if Firewalld is running, allow the Monitor Daemon service
[root@node01 ~]# firewall-cmd --add-service=ceph-mon
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
[6] Confirm the Cluster status. It's OK if [Monitor Daemon] and [Manager Daemon]
are enabled like follows.
OSDs (Object Storage Devices) are configured in the next section, so it's no problem
if the status shows [HEALTH_WARN] at this point.
[root@node01 ~]# ceph -s
cluster:
id: f2e52449-e87b-4786-981e-1f1f58186a7c
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum node01 (age 2m)
mgr: node01(active, since 34s)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
[2] Configure an OSD (Object Storage Device) on each Node from the Admin Node.
Block devices ([/dev/sdb] in this example) are formatted for OSD use, so be careful if
they contain existing data.
# if Firewalld is running on each Node, allow the Ceph service ports (see the sketch below)
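For example, [node02] can be set up from the Admin Node like follows (a minimal sketch, not necessarily the exact
commands used here; it assumes the Ceph packages are already installed on each Node, and the same steps are
repeated for [node01] and [node03]).
# transfer required files
[root@node01 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring node02:/etc/ceph/
[root@node01 ~]# scp /var/lib/ceph/bootstrap-osd/ceph.keyring node02:/var/lib/ceph/bootstrap-osd/
# allow the Ceph service in Firewalld and create an OSD on the free block device
[root@node01 ~]# ssh node02 "firewall-cmd --add-service=ceph; firewall-cmd --runtime-to-permanent"
[root@node01 ~]# ssh node02 "ceph-volume lvm create --data /dev/sdb"
# confirm the Cluster status again (all 3 OSDs are up in the output below)
[root@node01 ~]# ceph -s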
cluster:
id: f2e52449-e87b-4786-981e-1f1f58186a7c
health: HEALTH_OK
services:
mon: 1 daemons, quorum node01 (age 13m)
mgr: node01(active, since 11m)
osd: 3 osds: 3 up (since 4m), 3 in (since 4m)
data:
pools: 1 pools, 1 pgs
objects: 2 objects, 449 KiB
usage: 80 MiB used, 480 GiB / 480 GiB avail
pgs: 1 active+clean
Enable Ceph Object Gateway (RADOSGW) to access the Ceph Cluster Storage via the Amazon
S3 or OpenStack Swift compatible API.
This example is based on the environment like follows.
|
+--------------------+ | +----------------------+
| [dlp.srv.world] |10.0.0.30 | 10.0.0.31| [www.srv.world] |
| Ceph Client +-----------+-----------+ RADOSGW |
| | | | |
+--------------------+ | +----------------------+
+----------------------------+----------------------------+
| | |
|10.0.0.51 |10.0.0.52 |10.0.0.53
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [node01.srv.world] | | [node02.srv.world] | | [node03.srv.world] |
| Object Storage +----+ Object Storage +----+ Object Storage |
| Monitor Daemon | | | | |
| Manager Daemon | | | | |
+-----------------------+ +-----------------------+ +-----------------------+
[1] Transfer the required files to the RADOSGW Node and configure it from the Admin Node.
# transfer public key
# transfer files
# configure RADOSGW
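These steps are, for example, like follows (a minimal sketch, not the article's exact commands; the
[client.rgw.www] instance name, the [beast] frontend on port 7480, the keyring capabilities, and the
package installation are assumptions for this environment).
# transfer public key and required files from the Admin Node
[root@node01 ~]# ssh-copy-id www
[root@node01 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring www:/etc/ceph/
# on [www.srv.world], install RADOSGW and define a Gateway instance in [ceph.conf]
[root@www ~]# dnf -y install ceph-radosgw
[root@www ~]# vi /etc/ceph/ceph.conf
[client.rgw.www]
host = www
rgw frontends = "beast port=7480"
# create a keyring for the Gateway instance and start it
[root@www ~]# mkdir -p /var/lib/ceph/radosgw/ceph-rgw.www
[root@www ~]# ceph auth get-or-create client.rgw.www osd 'allow rwx' mon 'allow rw' -o /var/lib/ceph/radosgw/ceph-rgw.www/keyring
[root@www ~]# chown -R ceph:ceph /var/lib/ceph/radosgw
[root@www ~]# systemctl enable --now ceph-radosgw@rgw.www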
# verify status
# that's OK if the following answer is shown after a few seconds
[root@node01 ~]# curl www.srv.world:7480
[2] On the Object Gateway Node, create an S3 compatible user who can authenticate to
the Object Gateway.
# for example, create [serverworld] user
[root@www ~]# radosgw-admin user create --uid=serverworld --display-name="Server World" --email=admin@srv.world
{
"user_id": "serverworld",
"display_name": "Server World",
"email": "admin@srv.world",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "serverworld",
"access_key": "9YRQNWJ1CG6DH69KL2RT",
"secret_key": "Ht07yUzoQFKOeFcMC0Dn9DkAJHqBn2M75mUmC78T"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
# show the user list
[root@www ~]# radosgw-admin user list
[
"serverworld"
]
# show the user's details
[root@www ~]# radosgw-admin user info --uid=serverworld
{
"user_id": "serverworld",
"display_name": "Server World",
"email": "admin@srv.world",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "serverworld",
"access_key": "9YRQNWJ1CG6DH69KL2RT",
"secret_key": "Ht07yUzoQFKOeFcMC0Dn9DkAJHqBn2M75mUmC78T"
}
.....
.....
[3] Verify access with the S3 interface by creating a Python test script on a client
computer as a common user.
[cent@dlp ~]$ pip3 install boto3
[cent@dlp ~]$ vi s3_test.py
import sys
import boto3
from botocore.config import Config

# access key and secret key of the [serverworld] user added in the [2] section
session = boto3.session.Session(
    aws_access_key_id = '9YRQNWJ1CG6DH69KL2RT',
    aws_secret_access_key = 'Ht07yUzoQFKOeFcMC0Dn9DkAJHqBn2M75mUmC78T'
)
# Object Gateway endpoint
s3client = session.client('s3', endpoint_url = 'http://www.srv.world:7480', config = Config())

# create [my-new-bucket]
bucket = s3client.create_bucket(Bucket = 'my-new-bucket')
# list Buckets
print(s3client.list_buckets())
# remove [my-new-bucket]
s3client.delete_bucket(Bucket = 'my-new-bucket')
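Then run the script; if the access key and the endpoint are correct, [list_buckets()] prints a response
that includes [my-new-bucket] before the bucket is removed again (output omitted here).
[cent@dlp ~]$ python3 s3_test.py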
Enable [Ceph Dashboard] to manage the Ceph Cluster on the Web GUI.
This example is based on the environment like follows.
|
+--------------------+ | +----------------------+
| [dlp.srv.world] |10.0.0.30 | 10.0.0.31| [www.srv.world] |
| Ceph Client +-----------+-----------+ RADOSGW |
| | | | |
+--------------------+ | +----------------------+
+----------------------------+----------------------------+
| | |
|10.0.0.51 |10.0.0.52 |10.0.0.53
+-----------+-----------+ +-----------+-----------+ +-----------+-----------+
| [node01.srv.world] | | [node02.srv.world] | | [node03.srv.world] |
| Object Storage +----+ Object Storage +----+ Object Storage |
| Monitor Daemon | | | | |
| Manager Daemon | | | | |
+-----------------------+ +-----------------------+ +-----------------------+
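[1] On the [Monitor Daemon] Node, enable the Dashboard module and add a login user. The commands below
are a minimal sketch (the package installation, the self-signed certificate, and the [serverworld] user
with the [administrator] role are assumptions, not taken from this article).
[root@node01 ~]# dnf -y install ceph-mgr-dashboard
[root@node01 ~]# ceph mgr module enable dashboard
[root@node01 ~]# ceph dashboard create-self-signed-cert
# the password is passed via a file
[root@node01 ~]# echo 'P@ssw0rd' > pass.txt
[root@node01 ~]# ceph dashboard ac-user-create serverworld -i pass.txt administrator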
# confirm the [dashboard] module is enabled
[root@node01 ~]# ceph mgr module ls | grep dashboard
dashboard on
# confirm the Dashboard URL
[root@node01 ~]# ceph mgr services
{
"dashboard": "https://10.0.0.51:8443/"
}
[2] On the Dashboard Host, if Firewalld is running, allow the service port.
[root@node01 ~]# firewall-cmd --add-port=8443/tcp
success
[root@node01 ~]# firewall-cmd --runtime-to-permanent
success
[3] Access the Dashboard URL from a client computer with a web browser, and the
Ceph Dashboard login form is shown. Log in as the user you added in the [1] section above.
After logging in, it's possible to see the various statuses of the Ceph Cluster.