Using Docker Swarm with Ceph storage
https://docs.ceph.com/en/mimic/start/quick-start-preflight/#rhel-centos
Ceph supports ext4, btrfs, and xfs.
Installation (ceph-deploy)
Add the Ceph repository to the ceph-deploy admin node, then install ceph-deploy.
OCTOPUS v15.2.16 (Stable)
Octopus is the 15th stable release of Ceph. It is named after an order of 8-limbed cephalopods.
STARTING OVER
Remove the Ceph packages, wipe all data and configuration, and also clear (remove) the LVM volumes; only after that can a reinstall and a fresh deploy succeed.
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*
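For reference, the LVM cleanup described above can usually be done in a single step with ceph-volume, assuming the package is still installed on the node and /dev/sdc is the OSD device being recycled (as in the transcript below):
$ sudo ceph-volume lvm zap /dev/sdc --destroy   # wipes LVM metadata, the partition table and data signatures on the OSD device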
[algorizm@v134 ~]$ sudo lvscan
ACTIVE '/dev/boxvg/boxvg1' [501.00 GiB] inherit
ACTIVE '/dev/ceph-d9b19e74-86ff-4d37-90fa-14232daf1835/osd-block-e51a292c-18b2-4bcf-8981-7ece2fa5eb4b' [<500.00 GiB] inherit
[algorizm@v134 ~]$ sudo lvremove /dev/ceph-d9b19e74-86ff-4d37-90fa-14232daf1835/osd-block-e51a292c-18b2-4bcf-8981-7ece2fa5eb4b
Do you really want to remove active logical volume ceph-d9b19e74-86ff-4d37-90fa-14232daf1835/osd-block-e51a292c-18b2-4bcf-8981-7ece2fa5eb4b? [y/n]: y
Logical volume "osd-block-e51a292c-18b2-4bcf-8981-7ece2fa5eb4b" successfully removed
[algorizm@v134 ~]$ sudo mkfs.xfs -f /dev/sdc
meta-data=/dev/sdc isize=512 agcount=4, agsize=32768000 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=131072000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=64000, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[asmanager@v134 ~]$ sudo lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
boxvg1 boxvg -wi-ao---- 501.00g
[algorizm@v134 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
fd0 2:0 1 4K 0 disk
sda 8:0 0 100G 0 disk
├─sda1 8:1 0 500M 0 part /boot
├─sda2 8:2 0 30G 0 part /
├─sda3 8:3 0 30G 0 part /home
├─sda4 8:4 0 1K 0 part
├─sda5 8:5 0 4.5G 0 part /tmp
├─sda6 8:6 0 4G 0 part [SWAP]
└─sda7 8:7 0 31G 0 part
└─boxvg-boxvg1 253:0 0 501G 0 lvm /box
sdb 8:16 0 470G 0 disk
└─sdb1 8:17 0 470G 0 part
└─boxvg-boxvg1 253:0 0 501G 0 lvm /box
sdc 8:32 0 500G 0 disk
sr0 11:0 1 1024M 0 rom
$ sudo yum remove ceph ceph-deploy ceph-common ceph-mds ceph-mgr
Preflight Checklist
# Apply the repository on each node
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-octopus/el7/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-octopus/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-octopus/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
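The three sections above belong in one repo file under /etc/yum.repos.d/ on every node; the file name ceph.repo below is an assumption, and the second command is a quick check that yum sees it:
$ sudo vi /etc/yum.repos.d/ceph.repo    # paste the [ceph], [ceph-noarch] and [ceph-source] sections above
$ yum repolist enabled | grep -i ceph   # the ceph and ceph-noarch repos should be listed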
# Ceph monitors need time synchronization, so install NTP
$ sudo yum install ntp ntpdate ntp-doc
$ sudo yum update
$ sudo yum install ceph
$ sudo yum install ceph-deploy
$ sudo yum install ceph-common ceph-mds ceph-mgr -y
$ sudo yum install fcgi -y ### ceph
# /etc/hosts.allow
sshd:192.168.172.43,192.168.172.44
# /etc/hosts
$ echo -e "
> 192.168.172.42 v134
> 192.168.172.43 v135
> 192.168.172.44 v136
> " | sudo tee -a /etc/hosts
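> /etc/hosts">
A quick sanity check that the new entries resolve (hostnames from above):
$ getent hosts v135 v136    # should print the 192.168.172.x lines just appended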
# ~/.ssh/config (use ansible)
[algorizm@v134 ceph-cluster]$ sudo vi ~/.ssh/config
Host v134
Hostname v134
User algorizm
Port 5501
Host v135
Hostname v135
User algorizm
Port 5501
Host v136
Hostname v136
User algorizm
Port 5501
# cd ~/ (/home/algorizm)
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /ceph-admin/.ssh/id_rsa.
Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
$ ssh-copy-id {username}@node1
$ ssh-copy-id {username}@node2
$ ssh-copy-id {username}@node3
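With the keys copied and the ~/.ssh/config above in place, each node should be reachable without a password; a simple check (node names from this setup):
$ for h in v134 v135 v136; do ssh "$h" hostname; done   # each should print the remote hostname without prompting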
Storage Cluster
[algorizm@v134 ceph-cluster]$ mkdir /box/ceph-cluster
[algorizm@v134 ceph-cluster]$ pwd
/box/ceph-cluster
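Note: the quick-start flow normally begins by generating the cluster definition in this working directory with ceph-deploy new; that step is not shown in the transcript, but assuming v134 is the initial monitor host it would be:
$ ceph-deploy new v134    # writes ceph.conf and ceph.mon.keyring into the current directory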
[algorizm@v134 ceph-cluster]$ ceph-deploy mon create-initial
Once you complete the process, your local directory should have the following keyrings:
ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
ceph.bootstrap-rbd.keyring
[algorizm@v134 ceph-cluster]$ ceph-deploy admin v134 v135 v136
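ceph-deploy admin copies ceph.conf and the admin keyring to /etc/ceph on each node; the quick start also suggests making the keyring readable so ceph commands can be run without sudo (optional):
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring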
# Ceph status after pushing the admin keyring (HEALTH_WARN).
[algorizm@v134 ceph-cluster]$ ceph -v
ceph version 15.2.16 (d46a73d6d0a67a79558054a3a5a72cb561724974) octopus (stable)
[algorizm@v134 ceph-cluster]$ sudo ceph -s
cluster:
id: 5cdfeff3-0d31-4204-a151-b4eea0e6d575
health: HEALTH_WARN
mon is allowing insecure global_id reclaim
Module 'restful' has failed dependency: No module named 'pecan'
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum v134 (age 23m)
mgr: v134(active, since 8m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
# Install the missing Python modules (pecan, cherrypy)
$ sudo pip3 install pecan werkzeug --proxy="http://192.168.1.139:3128"
$ sudo pip3 install cherrypy werkzeug --proxy="http://192.168.1.139:3128"
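After installing the modules, the active mgr normally needs a restart to pick them up; the unit name below assumes the mgr id matches the hostname v134:
$ sudo systemctl restart ceph-mgr@v134   # restart the active mgr so the restful module finds pecan/cherrypy
$ sudo ceph -s                           # the 'failed dependency' warning should disappear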
$ ceph-deploy osd create --data /dev/sdc v134
$ ceph-deploy osd create --data /dev/sdc v135
$ ceph-deploy osd create --data /dev/sdc v136
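Once the three OSDs are created they should appear in the CRUSH tree, one per host and all up:
$ sudo ceph osd tree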
# https://www.suse.com/support/kb/doc/?id=000019960#disclaimer
[algorizm@v134 ceph-deploy]$ sudo ceph -s
cluster:
id: 66250246-7716-4ab0-9166-560125020fa8
health: HEALTH_WARN
mon is allowing insecure global_id reclaim
services:
mon: 1 daemons, quorum v134 (age 15h)
mgr: v134(active, since 15h)
osd: 3 osds: 3 up (since 2m), 3 in (since 2m)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 1.5 TiB / 1.5 TiB avail
pgs: 1 active+clean
$ ceph config set mon mon_warn_on_insecure_global_id_reclaim true
$ ceph config set mon mon_warn_on_insecure_global_id_reclaim_allowed true
$ ceph config set mon auth_allow_insecure_global_id_reclaim false
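To confirm the setting took effect before re-checking the cluster health:
$ sudo ceph config get mon auth_allow_insecure_global_id_reclaim   # should now report false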
[algorizm@v134 ceph-deploy]$ sudo ceph -s
cluster:
id: 66250246-7716-4ab0-9166-560125020fa8
health: HEALTH_OK
services:
mon: 1 daemons, quorum v134 (age 15h)
mgr: v134(active, since 15h)
osd: 3 osds: 3 up (since 3m), 3 in (since 3m)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 1.5 TiB / 1.5 TiB avail
pgs: 1 active+clean
Ceph Monitor
https://docs.ceph.com/en/mimic/rados/configuration/common/#monitors
At least three monitors are recommended (Paxos algorithm).
Ceph production clusters typically deploy with a minimum of 3 Ceph Monitor daemons to ensure high availability should a monitor instance crash. At least three (3) monitors ensures that the Paxos algorithm can determine which version of the Ceph Cluster Map is the most recent from a majority of Ceph Monitors in the quorum.
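This cluster currently runs a single monitor on v134; expanding to the recommended three (node names assumed from this setup) and checking quorum would look roughly like:
$ ceph-deploy mon add v135
$ ceph-deploy mon add v136
$ sudo ceph quorum_status --format json-pretty   # all three monitors should appear in the quorum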
# Add an additional mgr node (v141)
$ ceph-deploy install --no-adjust-repos v141
$ ceph-deploy mgr create v141
$ ceph-deploy admin v141
$ ssh v141
$ sudo ceph -s
[algorizm@v141 ~]$ sudo ceph -s
cluster:
id: 66250246-7716-4ab0-9166-560125020fa8
health: HEALTH_OK
services:
mon: 1 daemons, quorum v134 (age 17h)
mgr: v134(active, since 17h), standbys: v141
osd: 3 osds: 3 up (since 2h), 3 in (since 2h)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 1.5 TiB / 1.5 TiB avail
pgs: 1 active+clean
# CentOS 7 does not provide the required Python 3 modules, so CentOS 8 is recommended.
Note that the dashboard, prometheus, and restful manager modules will not work on the CentOS 7 build due to Python 3 module dependencies that are missing in CentOS 7.
[algorizm@v141 ~]$ sudo rpm -Uvh *.rpm
Preparing... ################################# [100%]
package python-werkzeug-0.9.1-2.el7.noarch is already installed
package python-routes-1.13-2.el7.noarch is already installed
package python-jwt-1.5.3-1.el7.noarch is already installed
package python-cherrypy-3.2.2-4.el7.noarch is already installed
[algorizm@v141 ~]$ sudo rpm -ivh https://download.ceph.com/rpm-octopus/el7/noarch/ceph-mgr-dashboard-15.2.16-0.el7.noarch.rpm --httpproxy 192.168.1.139 --httpport 3128
Retrieving https://download.ceph.com/rpm-octopus/el7/noarch/ceph-mgr-dashboard-15.2.16-0.el7.noarch.rpm
error: Failed dependencies:
python3-cherrypy is needed by ceph-mgr-dashboard-2:15.2.16-0.el7.noarch
python3-jwt is needed by ceph-mgr-dashboard-2:15.2.16-0.el7.noarch
python3-routes is needed by ceph-mgr-dashboard-2:15.2.16-0.el7.noarch
python3-werkzeug is needed by ceph-mgr-dashboard-2:15.2.16-0.el7.noarch
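For reference, on a build where these Python 3 dependencies exist (e.g. the el8 packages on CentOS 8), enabling the dashboard would follow the upstream docs roughly as below; untested here, and the password file path is only an example:
$ sudo ceph mgr module enable dashboard
$ sudo ceph dashboard create-self-signed-cert
$ echo 'Secret123!' > /tmp/dashboard_pass
$ sudo ceph dashboard ac-user-create admin -i /tmp/dashboard_pass administrator
$ sudo ceph mgr services   # prints the dashboard URL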
# Nautilus reached its official end of life on 2021-06-30; even allowing for CentOS 7 compatibility, a production environment could no longer get releases and patches.
Name | Initial release | Latest | End of life
---|---|---|---
Nautilus | 2019-03-19 | 14.2.22 | 2021-06-30
Mimic | 2018-06-01 | 13.2.10 | 2020-07-22
Luminous | 2017-08-01 | 12.2.13 | 2020-03-01
For a production deployment:
Ceph OSDs (ceph-osd) - Handles the data store, data replication and recovery. A Ceph cluster needs at least two Ceph OSD servers. I will use three CentOS 7 OSD servers here.
Ceph Monitor (ceph-mon) - Monitors the cluster state, OSD map and CRUSH map. I will use one server.
Ceph Meta Data Server (ceph-mds) - This is needed to use Ceph as a File System.