Ceph:EcoSystem


The story so far

Research - beginning to mid 2000s, DOE grant, scalable metadata management, high security, focus on scalability, reliability, performance

Released as open source in 2006.

Mid 2007 - DreamHost took over development. Early growing community, purely development with no deliverables for almost 5 years, emergence of RGW and the Linux kernel modules.

Inktank - stable releases, more documentation, more deployments, support; Inktank vs Ceph vs Inktank Ceph Enterprise. A storage revolution involving standard hardware, open source software and enterprise products & services.

Red Hat bought Inktank in 2014, a logical progression.


What's New

Firefly - released May 2014, the first time a release was late! Ceph 0.80.x. Firefly features - cache tiering pools, erasure coded pools, RGW quotas.

Erasure coded pools - normally every object is replicated, giving high durability and quick recovery, but this takes more space. Erasure coded pools instead split each object into pieces and add parity pieces, so missing parts can be rebuilt. Just as durable, recovery is slower, but it requires less storage.
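
To make the split-and-parity idea concrete, here is a minimal Python sketch using a single XOR parity chunk (k=4 data chunks, m=1). It only illustrates the concept; it is not Ceph's actual erasure code plugins or profile parameters.

 # Split an object into k data chunks plus one XOR parity chunk,
 # then rebuild a lost chunk from the survivors.
 def xor(a: bytes, b: bytes) -> bytes:
     return bytes(x ^ y for x, y in zip(a, b))
 
 def encode(obj: bytes, k: int = 4):
     size = -(-len(obj) // k)  # ceil(len/k) bytes per chunk
     chunks = [obj[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(k)]
     parity = chunks[0]
     for c in chunks[1:]:
         parity = xor(parity, c)
     return chunks, parity  # k data chunks + 1 parity chunk
 
 def rebuild(chunks, parity, lost: int) -> bytes:
     # XOR of the parity chunk and the surviving data chunks gives back the lost chunk
     rebuilt = parity
     for i, c in enumerate(chunks):
         if i != lost:
             rebuilt = xor(rebuilt, c)
     return rebuilt
 
 data = b"hello erasure coded pools"
 chunks, parity = encode(data)
 assert rebuild(chunks, parity, lost=2) == chunks[2]

Real erasure coded pools use k data chunks plus m parity chunks (surviving any m simultaneous losses), so the storage overhead is (k+m)/k instead of the full extra copies a replicated pool keeps.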

Cache tiering - multiple pools in the same cluster; one pool can act as a cache for another pool, for hot/cold data management.
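
As a rough illustration of the hot/cold flow, here is a minimal Python sketch with dictionaries standing in for the two pools; the object names and the simple write-back behaviour are assumptions made for the example, not Ceph's implementation.

 base_pool = {"cold-object": b"rarely used data"}  # backing pool (e.g. erasure coded)
 cache_pool = {}                                   # fast pool acting as the cache tier
 dirty = set()                                     # objects written but not yet flushed
 
 def read(name: str) -> bytes:
     if name not in cache_pool:                    # cache miss: promote from the base pool
         cache_pool[name] = base_pool[name]
     return cache_pool[name]
 
 def write(name: str, data: bytes) -> None:
     cache_pool[name] = data                       # writes land in the cache tier first
     dirty.add(name)
 
 def flush() -> None:
     for name in dirty:                            # later, dirty objects are written back
         base_pool[name] = cache_pool[name]
     dirty.clear()
 
 write("hot-object", b"frequently used data")
 read("cold-object")                               # promoted into the cache on first read
 flush()                                           # hot-object now persisted in base_pool

In Ceph the cache pool typically sits on faster storage (e.g. SSD-backed, replicated) in front of a larger, slower or erasure coded base pool.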


The Future

Focus areas:

Next-gen use cases and capacity - OpenStack, cold storage, STaaS, sync and share

Optimization - improving performance, OSD optimization for SSDs, high throughput, Trove

Integration with existing systems - LDAP, Kerberos, compliance

Simplify - installation and management, moving towards VMware

The next version is called Giant - soon to be released. The 0.85.x dev release adds RDMA support and improved SSD performance; 0.86.x was released on 7 October as a release candidate, with low-level debugging and locally repairable codes.

RBD - client-side caching is now enabled by default.

CephFS - not production ready! Lots of activity sanding down rough edges; feature complete and ready to be tested, feedback encouraged.

Ecosystem Update

Business as usual for Inktank despite the takeover. Increased support for Red Hat / CentOS / Fedora, with continued support for Ubuntu. Hiring and growing. Ceph vs Gluster - two different use cases.


Ceph Developer Summit

If you want a feature, submit a blueprint; if there's enough support, you will get a session at the summit.


Weekly performance meetings

pad.ceph.com/p/performance_weekly