CEPH: Ceph on the Blades

Environment

Dependencies

Linux Kernel

Ceph Kernel Client: We currently recommend:

  • v3.6.6 or later in the v3.6 stable series
  • v3.4.20 or later in the v3.4 stable series
  • btrfs: If you use the btrfs file system with Ceph, we recommend using a recent Linux kernel (v3.5 or later).

Testing Environment

OS: CentOS release 6.3 (Final)

  • Server nodes
uname -a
Linux Blade3 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
  • Client node
uname -a
Linux Blade8 3.8.8-1.el6.elrepo.x86_64 #1 SMP Wed Apr 17 16:47:58 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux

Install

On all the nodes

rpm --import 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
su -c 'rpm -Uvh http://ceph.com/rpm-bobtail/el6/x86_64/ceph-release-1-0.el6.noarch.rpm'
yum -y install ceph
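
A quick sanity check on each node after the install; ceph -v reports the installed release, which should match across all blades:

rpm -q ceph
ceph -v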

Configuration

  • Location: /etc/ceph/ceph.conf
  • To be copied to all the nodes (server nodes and clients); see the example after the configuration block
[global]
	auth cluster required = none
	auth service required = none
	auth client required = none
[osd]
	osd journal size = 1000
	filestore xattr use omap = true
	osd mkfs type = ext4
	osd mount options ext4 = user_xattr,rw,noexec,nodev,noatime,nodiratime
[mon.a]
	host = blade3
	mon addr = <IP of blade3>:6789
[mon.b]
	host = blade4
	mon addr = <IP of blade4>:6789
[mon.c]
	host = blade5
	mon addr = <IP of blade5>:6789
[osd.0]
	host = blade3
[osd.1]
	host = blade4
[osd.2]
	host = blade5
[mds.a]
	host = blade3
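
A minimal way to distribute the file, assuming it is edited on blade3 and that passwordless SSH to the other blades is already in place (blade8 being the client node in this setup):

# Push the same ceph.conf to the remaining server nodes and to the client
for node in blade4 blade5 blade8; do
    scp /etc/ceph/ceph.conf root@${node}:/etc/ceph/ceph.conf
done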

Create CEPH cluster
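
A rough sketch of the bobtail-era procedure using mkcephfs, run from blade3, assuming passwordless SSH to blade4 and blade5 and the default OSD data paths under /var/lib/ceph; verify the exact flags against the man pages shipped with the installed version.

# On each server node, create the OSD data directory for its id
mkdir -p /var/lib/ceph/osd/ceph-0    # ceph-1 on blade4, ceph-2 on blade5

# From blade3: build the monitor, OSD and MDS stores on all nodes
mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.keyring

# Start all daemons on all nodes, then check cluster state
service ceph -a start
ceph health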

Performance

1G Ethernet     10G Ethernet
Example         Example
Example         Example
Example         Example