Install Ceph Hammer


Set up Ceph Repos

On all Ceph nodes, add the repository keys. To install the release.asc key, execute the following:

sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'

To install the autobuild.asc key, execute the following (QA and developers only):

sudo rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/autobuild.asc'
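
To confirm the keys were imported, you can list the GPG public keys known to rpm (an optional check, not required by the install):

# rpm -qa 'gpg-pubkey*'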

Add Ceph extras: for RPM packages, add the Ceph Extras repository to /etc/yum.repos.d (e.g., as ceph-extras.repo). Some Ceph Extras packages (e.g., QEMU) must take priority over the standard packages, so make sure you set priority=2:

[ceph-extras]
name=Ceph Extras Packages
baseurl=http://ceph.com/packages/ceph-extras/rpm/centos6/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-extras-noarch]
name=Ceph Extras noarch
baseurl=http://ceph.com/packages/ceph-extras/rpm/centos6/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-extras-source]
name=Ceph Extras Sources
baseurl=http://ceph.com/packages/ceph-extras/rpm/centos6/SRPMS
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

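Note: the priority= lines only take effect if the yum priorities plugin is installed and enabled. A minimal check, assuming the stock CentOS 6 base repositories:

# yum install -y yum-plugin-priorities
# grep enabled /etc/yum/pluginconf.d/priorities.conf
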
Add the Ceph repo: for major releases, add a Ceph entry to the /etc/yum.repos.d directory. Create a ceph.repo file:

[ceph]
name=Ceph packages for $basearch
baseurl=http://ceph.com/rpm-hammer/el6/$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-hammer/el6/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://ceph.com/rpm-hammer/el6/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

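As an optional sanity check that yum picks up the new repo files, you can list the Ceph repositories it now sees:

# yum clean all
# yum repolist | grep -i ceph
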
Install EPEL repository

## RHEL/CentOS 6 64-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

Install other packages: install the third-party dependencies required by Ceph:

# yum install -y snappy leveldb gdisk python-argparse gperftools-libs

Install Ceph:

yum install ceph

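Once the install completes, a quick way to confirm you got a Hammer (0.94.x) build:

# ceph --version
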
Deploy Ceph: on ceph-node1, add the first MON. Create a directory for Ceph and create your Ceph cluster configuration file:

# mkdir /etc/ceph
# touch /etc/ceph/ceph.conf

Generate an FSID for your Ceph cluster (e.g. 792cda6c-af73-46c7-a60b-89d8aa8cf2cb):

# uuidgen

Create the Ceph config file /etc/ceph/ceph.conf as:

[global]
fsid = {cluster-id}
mon initial members = {hostname}[, {hostname}]
mon host = {ip-address}[, {ip-address}]

#All clusters have a front-side public network.
#If you have two NICs, you can configure a back side cluster 
#network for OSD object replication, heart beats, backfilling,
#recovery, etc.
public network = {network}[, {network}]
#cluster network = {network}[, {network}] 

#Clusters require authentication by default.
auth cluster required = cephx
auth service required = cephx
auth client required = cephx

#Choose reasonable numbers for your journals, number of replicas
#and placement groups.
osd journal size = {n}
osd pool default size = {n}  # Write an object n times.
osd pool default min size = {n} # Allow writing n copies in a degraded state.
osd pool default pg num = {n}
osd pool default pgp num = {n}

#Choose a reasonable crush leaf type.
#0 for a 1-node cluster.
#1 for a multi-node cluster in a single rack
#2 for a multi-node, multi-chassis cluster with multiple hosts in a chassis
#3 for a multi-node cluster with hosts across racks, etc.
osd crush chooseleaf type = {n}

An example ceph.conf:

[global]
fsid = 792cda6c-af73-46c7-a60b-89d8aa8cf2cb
public network = 172.28.0.0/16

#Choose reasonable numbers for your journals, number of replicas
#and placement groups.
osd journal size = 1024
osd pool default min size = 1
osd pool default pg num = 128
osd pool default pgp num = 128

[mon]
mon initial members = ceph-node1
mon host = ceph-node1,ceph-node2,ceph-node3
mon addr = 172.28.1.89,172.28.0.228,172.28.0.177

[mon.ceph-node1]
host = ceph-node1
mon addr = 172.28.1.89

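The pg num / pgp num of 128 above follows the common rule of thumb from the Ceph documentation of this era: total PGs ≈ (number of OSDs × 100) / replica count, rounded up to the next power of two. For the 3-OSD cluster built below, with the default of 3 replicas: 3 × 100 / 3 = 100, which rounds up to 128.
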
Create a keyring for your cluster and generate a monitor secret key as follows:

# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'

Create a client.admin user and add the user to the keyring:

# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

Add the client.admin key to ceph.mon.keyring:

# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
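
As an optional check, you can list the entries now present in the monitor keyring to confirm that both the mon. and client.admin keys are there:

# ceph-authtool -l /tmp/ceph.mon.keyring
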
Create the first monitor daemon:

# monmaptool --create --add ceph-node1 172.28.1.89 --fsid 792cda6c-af73-46c7-a60b-89d8aa8cf2cb /tmp/monmap
# mkdir /var/lib/ceph/mon/ceph-ceph-node1
# ceph-mon --mkfs -i ceph-node1 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

Start the ceph service:

[root@ceph-node1 ceph]# service ceph start
=== mon.ceph-node1 === 
Starting Ceph mon.ceph-node1 on ceph-node1...
2015-05-14 12:22:22.356497 7f61f29b97a0 -1 WARNING: 'mon addr' config option 172.28.1.89:0/0 does not match monmap file
         continuing with monmap configuration
Starting ceph-create-keys on ceph-node1...
[root@ceph-node1 ceph]# ceph status 
    cluster 792cda6c-af73-46c7-a60b-89d8aa8cf2cb
     health HEALTH_ERR
            64 pgs stuck inactive
            64 pgs stuck unclean
            no osds
     monmap e1: 1 mons at {ceph-node1=172.28.1.89:6789/0}
            election epoch 2, quorum 0 ceph-node1
     osdmap e1: 0 osds: 0 up, 0 in
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

Creating OSDs:

# ceph-disk list | grep unknown
# parted /dev/sdb mklabel GPT
# parted /dev/sdc mklabel GPT
# parted /dev/sdd mklabel GPT
# ceph-disk prepare --cluster ceph --cluster-uuid 792cda6c-af73-46c7-a60b-89d8aa8cf2cb --fs-type xfs /dev/sdb
# df
# ceph-disk prepare --cluster ceph --cluster-uuid 792cda6c-af73-46c7-a60b-89d8aa8cf2cb --fs-type xfs /dev/sdc
# ceph-disk prepare --cluster ceph --cluster-uuid 792cda6c-af73-46c7-a60b-89d8aa8cf2cb --fs-type xfs /dev/sdd
# lsblk
# ceph-disk activate /dev/sdb1
# ceph-disk activate /dev/sdc1
# ceph-disk activate /dev/sdd1
# ceph -s

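Besides ceph -s, you can verify that all three OSDs registered in the CRUSH map and are up (an optional check, not part of the history above):

# ceph osd tree
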
Now the ceph status:

[root@ceph-node1 ceph]# ceph -s
    cluster 792cda6c-af73-46c7-a60b-89d8aa8cf2cb
     health HEALTH_WARN
            59 pgs degraded
            64 pgs stuck unclean
            59 pgs undersized
            too few PGs per OSD (21 < min 30)
     monmap e1: 1 mons at {ceph-node1=172.28.1.89:6789/0}
            election epoch 2, quorum 0 ceph-node1
     osdmap e15: 3 osds: 3 up, 3 in; 37 remapped pgs
      pgmap v20: 64 pgs, 1 pools, 0 bytes data, 0 objects
            101936 kB used, 2094 GB / 2094 GB avail
                  32 active+undersized+degraded+remapped
                  27 active+undersized+degraded
                   5 active
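
The "too few PGs per OSD" warning can be cleared by raising the placement-group count of the default pool (assumed here to be the automatically created rbd pool), for example to the 128 used in the example ceph.conf:

# ceph osd pool set rbd pg_num 128
# ceph osd pool set rbd pgp_num 128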

Copy the config and admin keyring to the other nodes:

# scp /etc/ceph/ceph.c* ceph-node2:/etc/ceph
# scp /etc/ceph/ceph.c* ceph-node3:/etc/ceph

Now, on the other nodes, you should be able to issue

ceph -s


Scale up the cluster:

Adding monitors: on ceph-node2:

# mkdir -p /var/lib/ceph/mon/ceph-ceph-node2 /tmp/ceph-node2
# vim /etc/ceph/ceph.conf
# ceph auth get mon. -o /tmp/ceph-node2/monkeyring
# ceph mon getmap -o /tmp/ceph-node2/monmap
# ceph-mon -i ceph-node2 --mkfs --monmap /tmp/ceph-node2/monmap --keyring /tmp/ceph-node2/monkeyring
# service ceph start
# ceph mon add ceph-node2 172.28.0.228:6789
# ceph -s

On ceph-node3:

# mkdir -p /var/lib/ceph/mon/ceph-ceph-node3 /tmp/ceph-node3
# vim /etc/ceph/ceph.conf
# ceph auth get mon. -o /tmp/ceph-node3/monkeyring
# ceph mon getmap -o /tmp/ceph-node3/monmap
# ceph-mon -i ceph-node3 --mkfs --monmap /tmp/ceph-node3/monmap --keyring /tmp/ceph-node3/monkeyring
# service ceph start
# ceph mon add ceph-node3 172.28.0.177:6789

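With all three monitors in place, it is worth confirming that they form a quorum (an optional check):

# ceph mon stat
# ceph quorum_status --format json-pretty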

Configuring NTP: on ceph-node1:

# chkconfig ntpd on
# ssh ceph-node2 chkconfig ntpd on
# ssh ceph-node3 chkconfig ntpd on
# ntpdate pool.ntp.org
# ssh ceph-node2 ntpdate pool.ntp.org
# ssh ceph-node3 ntpdate pool.ntp.org
# /etc/init.d/ntpd start
# ssh ceph-node2 /etc/init.d/ntpd start
# ssh ceph-node3 /etc/init.d/ntpd start

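To verify that each node is actually syncing time, you can inspect the NTP peer list (an optional check, not part of the history above):

# ntpq -p
# ssh ceph-node2 ntpq -p
# ssh ceph-node3 ntpq -p
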
Adding OSDs: to add OSDs on ceph-node2 and ceph-node3, repeat the steps from Creating OSDs on ceph-node1 (see the sketch below).
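
A condensed sketch of those steps, assuming the additional nodes have the same three data disks /dev/sdb, /dev/sdc and /dev/sdd (adjust the device names to the actual hardware before running):

# run as root on each additional OSD node
for dev in sdb sdc sdd; do
    parted /dev/$dev mklabel GPT
    ceph-disk prepare --cluster ceph --cluster-uuid 792cda6c-af73-46c7-a60b-89d8aa8cf2cb --fs-type xfs /dev/$dev
    ceph-disk activate /dev/${dev}1
done
ceph -s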