Ceph, Xen, and CloudStack:
Semper Melior
Xen User Summit| New Orleans, LA | 18 SEP 2013
2
•Patrick McGarry
•Community monkey
•Inktank / Ceph
•/. > ALU > P4
•@scuttlemonkey
•patrick@inktank.com
Accept no substitutes
C’est Moi
3
•Ceph in <30s
•Ceph, a little bit more
•Ceph in the wild
•Orchestration
•Community status
•What’s Next?
•Questions
The plan, Stan
Welcome!
4
On commodity hardware
Ceph can run on any
infrastructure, metal
or virtualized to
provide a cheap and
powerful storage
cluster.
Object, block, and file
Low overhead doesn’t
mean just hardware,
it means people too!
Awesomesauce
Infrastructure-aware
placement algorithm
allows you to do really
cool stuff.
Huge and beyond
Designed for exabyte
scale; current
implementations are in
the multi-petabyte range.
HPC, Big Data, Cloud,
raw storage.
…besides wicked-awesome?
What is Ceph?
Software All-in-1 CRUSH Scale
5
Find out more!
Ceph.com
…but you can find out more
Use it today
Dreamhost.com/cloud/DreamObjects
Get Support
Inktank.com
That WAS fast
6
OBJECTS VIRTUAL DISKS FILES & DIRECTORIES
CEPH
FILE SYSTEM
A distributed, scale-out
filesystem with POSIX
semantics that provides
storage for legacy and
modern applications
CEPH
GATEWAY
A powerful S3- and Swift-
compatible gateway that
brings the power of the
Ceph Object Store to
modern applications
CEPH
BLOCK DEVICE
A distributed virtual block
device that delivers high-
performance, cost-effective
storage for virtual machines
and legacy applications
CEPH OBJECT STORE
A reliable, easy to manage, next-generation distributed object
store that provides storage of unstructured data for applications
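Everything in this stack sits on RADOS, and the object store is scriptable directly through librados. A minimal sketch using the python-rados bindings; the 'data' pool and config path are assumptions for illustration:

```python
import rados

# Connect using the local ceph.conf; the path and 'data' pool are example values
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('data')

# Write an object and read it back; RADOS replicates it according to CRUSH rules
ioctx.write_full('greeting', b'Hello, RADOS')
print(ioctx.read('greeting'))

ioctx.close()
cluster.shutdown()
```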
7
8
9
• CRUSH
– Pseudo-random placement
algorithm
– Ensures even distribution
– Repeatable, deterministic
– Rule-based configuration
• Replica count
• Infrastructure topology
• Weighting
10
[diagram: an object's data hashed into placement groups]
hash(object name) % num pg
CRUSH(pg, cluster state, rule set)
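The first step really is just a stable hash modulo the number of placement groups; only the second step needs CRUSH and the current cluster map. A toy illustration of the idea in Python (the hash function here is illustrative, not the one Ceph uses):

```python
import hashlib

def object_to_pg(object_name, num_pg):
    """Step 1: hash(object name) % num_pg -> placement group."""
    digest = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return digest % num_pg

# Deterministic and repeatable: every client computes the same PG for the same name,
# so there is no central lookup table to ask.
print(object_to_pg('rbd_data.1021.0000000000000001', num_pg=2048))

# Step 2, CRUSH(pg, cluster state, rule set) -> [osd.4, osd.17, ...], is computed
# the same way on every client from the shared cluster map and placement rules.
```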
11
[diagram: placement groups mapped onto OSDs by CRUSH]
12-17
[diagrams: placement groups distributed across OSDs; a client computes an object's location with CRUSH; when a node fails, the surviving OSDs notice, recompute placement from the new cluster map, and re-replicate, so failures are transparent to clients]
…with Marty Stouffer
Ceph in the Wild
18
No incendiary devices please…
Linux Distros
19
Object && Block
Via RBD and RGW (Swift API)
Our BFF
Identity
Via Keystone
More coming!
Work continues with updates in
Havana and Icehouse.
OpenStack
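Because RGW speaks the Swift API and can validate tokens against Keystone, OpenStack-side code never needs to know Ceph is behind it. A hedged sketch with python-swiftclient; the Keystone endpoint, tenant, and credentials are placeholders:

```python
from swiftclient import client as swift

# Keystone endpoint, tenant, and credentials are placeholders
conn = swift.Connection(
    authurl='http://keystone.example.com:5000/v2.0',
    user='demo',
    key='s3cret',
    tenant_name='demo',
    auth_version='2',
)

# These calls land on the Ceph gateway exactly as they would on native Swift
conn.put_container('backups')
conn.put_object('backups', 'hello.txt', contents=b'stored in RADOS via RGW')
headers, body = conn.get_object('backups', 'hello.txt')
print(body)
```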
20
Block
Alternate primary and secondary
Community maintained
Community
Wido from 42on.com
More coming in 4.2!
Snapshot & backup support
Cloning (layering) support
No NFS for system VMs
Secondary/Backup storage (s3)
CloudStack
21
A blatant ripoff!
Primary Storage Flow
•The mgmt server never talks
to the Ceph cluster
•One mgmt server can
manage 1000s of hypervisors
•Mgmt server can be clustered
•Multiple Ceph clusters/pools
can be added to CloudStack
cluster
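Since the management server never touches the cluster, it is the hypervisor side that creates and attaches RBD images through librbd. A minimal python-rbd sketch of that volume-create path; the 'cloudstack' pool and image name are assumptions:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('cloudstack')        # example primary-storage pool

# Create a 10 GiB image for a new root disk; qemu/libvirt on the host attaches it
rbd.RBD().create(ioctx, 'vm-42-root', 10 * 1024**3)

image = rbd.Image(ioctx, 'vm-42-root')
print(image.size())
image.close()

ioctx.close()
cluster.shutdown()
```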
22
A pretty package
A commercially
packaged OpenStack
solution backed by
Ceph.
RADOS for Archipelago
Virtual server
management
software tool on top
of Xen or KVM.
RBD backed
Complete
virtualization
management with
KVM and containers.
BBC territory
Talk next week in
Berlin
So many delicious flavors
Other Cloud
SUSE Cloud Ganeti Proxmox OpenNebula
23
Since 2.6.35
Kernel clients for RBD
and CephFS. Active
development as a
Linux file system.
iSCSI ahoy!
One of the Linux iSCSI
target frameworks.
Emulates: SBC (disk),
SMC (jukebox), MMC
(CD/DVD), SSC (tape),
OSD.
Getting creative
Creative community
member used Ceph to
back their VMware
infrastructure via
fibre channel.
You can always use more friends
Project Intersection
Kernel STGT VMware
Love me!
Slightly out-of-date.
Some work has been
done, but could use
some love.
Wireshark
24
CephFS
CephFS can serve as a
drop-in replacement
for HDFS.
Upstream
Ceph VFS module is
upstream in Samba.
CephFS or RBD
Re-exporting CephFS
or RBD for NFS/CIFS.
MOAR projects
Project Intersection
Hadoop Samba Ganesha
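All three of these re-exports sit on libcephfs, which also ships Python bindings. A hedged sketch assuming the python-cephfs package; the directory path is an example only:

```python
import cephfs

# Talk to the MDS directly through libcephfs -- no kernel mount required
fs = cephfs.LibCephFS()
fs.conf_read_file('/etc/ceph/ceph.conf')   # example config path
fs.mount()

# Create a directory that Hadoop, Samba, or Ganesha could then serve out
fs.mkdir('/hadoop-staging', 0o755)
print(fs.stat('/hadoop-staging'))

fs.shutdown()
```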
Recently Open Source
Commercially
supported product
from Citrix. Recently
Open Sourced. Still a
bit of a tech preview.
XenServer
25
Support for libvirt
XenServer can manipulate Ceph!
Don’t let the naming fool you, it’s easy
Blktap{2,3,asplode}
Qemu; new boss, same as the old boss
(but not really)
What’s in a name?
Ceph :: XenServer :: Libvirt
Block device :: VDI :: storage vol
Pool :: Storage Repo :: storage pool
Doing it with Xen*
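On the experimental libvirt storage path, a Ceph pool shows up as an ordinary libvirt storage pool whose volumes are RBD images. A hedged sketch of defining one through the libvirt Python bindings; the monitor hostname, pool name, secret UUID, and connection URI are placeholders:

```python
import libvirt

# RBD-backed storage pool definition; monitor host, names, and UUID are examples
pool_xml = """
<pool type='rbd'>
  <name>ceph-cloudstack</name>
  <source>
    <name>cloudstack</name>
    <host name='mon1.example.com' port='6789'/>
    <auth type='ceph' username='admin'>
      <secret uuid='2a5b08e4-3dca-4ff9-9a1d-40389758d081'/>
    </auth>
  </source>
</pool>
"""

conn = libvirt.open('qemu:///system')           # or the Xen URI for your stack
pool = conn.storagePoolDefineXML(pool_xml, 0)   # persistent pool definition
pool.create(0)                                  # start it; volumes map to RBD images
print(pool.listVolumes())
conn.close()
```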
26
Thanks David Scott!
XenServer host arch
[diagram: Client (CloudStack, OpenStack, XenDesktop) → xapi/XenAPI → xenopsd + SM adapters → libvirt/libxl → libxenguest, libxc, qemu → xen; storage adapters for ceph and ocfs2]
27
Come for the block
Stay for the object and file
No matter what you use!
Reduced Overhead
Easier to manage one cluster
“Other Stuff”
CephFS prototypes
fast development profile
ceph-devel
lots of partner action
Gateway Drug
28
Squash Hotspots
Multiple hosts = parallel workload
But what does that mean?
Instant Clones
No time to boot for many images
Live migration
Shared storage allows you to
move instances between compute
nodes transparently.
Blocks are delicious
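"Instant clones" here means RBD layering: snapshot a gold image once, protect it, and every new disk is a copy-on-write child created in milliseconds. A hedged python-rbd sketch; pool and image names are examples:

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')            # example pool holding the gold image

# Snapshot and protect the gold image so it can act as a clone parent
gold = rbd.Image(ioctx, 'ubuntu-gold')
gold.create_snap('base')
gold.protect_snap('base')
gold.close()

# Each new VM disk is a copy-on-write child -- no full copy, so it is nearly instant
rbd.RBD().clone(ioctx, 'ubuntu-gold', 'base',
                ioctx, 'vm-43-root',
                features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```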
29
Flexible APIs
Native support for swift and s3
And less filling!
Secondary Storage
Coming with 4.2
Horizontal Scaling
Easy with HAProxy or others
Objects can juggle
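The S3 side looks like Amazon to clients, so CloudStack's S3 secondary storage (or anything sitting behind an HAProxy VIP) just points at the gateway endpoint. A hedged sketch with the classic boto library; host and keys are placeholders:

```python
import boto
import boto.s3.connection

# Endpoint and credentials are examples -- point at your RGW host or HAProxy VIP
conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com',
    is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('secondary-storage')
key = bucket.new_key('templates/tiny-linux.qcow2')
key.set_contents_from_string('not really a template, just a demo object')

for k in bucket.list():
    print(k.name, k.size)
```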
30
Neat prototypes
Image distribution to hypervisors
You can dress them up, but you can’t take them anywhere
Still early
You can fix that!
Outside uses
Great way to combine resources.
Files are tricksy
31
Where the metal meets the…software
Deploying this stuff
32
Procedural, Ruby
Written in Ruby, this
is more of the dev-
side of DevOps. Once
you get past the
learning curve it’s
powerful though.
Model-driven
Aimed more at the
sysadmin, this
declarative tool has
very wide penetration
(even on Windows!).
Agentless, whole stack
Using the built-in
OpenSSH in your OS,
this super easy tool
goes further up the
stack than most.
Fast, 0MQ
Using ZeroMQ this tool
is designed for massive
scale and fast, fast, fast.
Unfortunately, 0MQ has
no built-in encryption.
The new hotness
Orchestration
Chef Puppet Ansible Salt
33
Canonical Unleashed
Being language
agnostic, this tool can
completely encapsulate
a service. Can also
handle provisioning all
the way down to
hardware.
Dell has skin in the game
Complete operations
platform that can dive
all the way down to
BIOS/RAID level.
Others are joining in
Custom provisioning
and
orchestration, just
one example of how
busy this corner of
the market is.
Doing it w/o a tool
If you prefer not to
use a tool, Ceph gives
you an easy way to
deploy your cluster by
hand.
MOAR HOTNESS
Orchestration Cont’d
Juju Crowbar ComodIT Ceph-deploy
34
All your space are belong to us
Ceph Community
35
36
Up and to the right!
Code Contributions
37
Up and to the right!
Commits
38
Up and to the right!
List Participation
39
This Ceph thing sounds hot.
What’s Next?
40
An ongoing process
While the first pass
for disaster recovery
is done, we want to
get to built-in, world-
wide replication.
Reception efficiency
Currently underway
in the community!
Headed to dynamic
Can already do this in
a static pool-based
setup. Looking to get
to a use-based
migration.
Making it open-er
Been talking about it
forever. The time is
coming!
Hop on board!
The Ceph Train
Geo-Replication Erasure Coding Tiering Governance
41
Quarterly Online Summit
Online summit puts
the core devs together
with the Ceph
community.
Not just for NYC
More planned,
including Santa Clara
and London. Keep an
eye out:
http://inktank.com/cephdays/
Geek-on-duty
During the week
there are times when
Ceph experts are
available to help. Stop
by #ceph on irc.oftc.net
Email makes the world go
Our mailing lists are
very active, check out
ceph.com for details
on how to join in!
Open Source is Open!
Get Involved!
CDS Ceph Day IRC Lists
42
http://wiki.ceph.com/04Development/Project_Ideas
Lists, blueprints,
sideboard, paper cuts,
etc.
http://tracker.ceph.com/
All the things!
New #ceph-devel
Splitting off developer
chatter to make it
easier to filter
discussions.
http://ceph.com/resources/mailing-list-irc/
Our mailing lists are
very active, check out
ceph.com for details
on how to join in!
Patches welcome
Projects
Wiki Redmine IRC Lists
43
Comments? Anything for the good of the cause?
Questions?
E-MAIL
patrick@inktank.com
WEBSITE
Ceph.com
SOCIAL
@scuttlemonkey
@ceph
Facebook.com/cephstorage
Editor's Notes
  • #10: The way CRUSH is configured is somewhat unique. Instead of defining pools for different data types, workgroups, subnets, or applications, CRUSH is configured with the physical topology of your storage network. You tell it how many buildings, rooms, shelves, racks, and nodes you have, and you tell it how you want data placed. For example, you could tell CRUSH that it’s okay to have two replicas in the same building, but not on the same power circuit. You also tell it how many copies to keep.
  • #11: With CRUSH, the first thing that happens is the data gets split into a certain number of sections. These are called “placement groups”. The number of placement groups is configurable. Then, the CRUSH algorithm is invoked, passing along the latest cluster map and a set of placement rules, and it determines where the placement group belongs in the cluster. This is a pseudo-random calculation, but it’s also repeatable; given the same cluster state and rule set, it will always return the same results.
  • #12: Each placement group is run through CRUSH and stored in the cluster. Notice how no node has received more than one copy of a placement group, and no two nodes contain the same information? That’s important.
  • #13: When it comes time to store an object in the cluster (or retrieve one), the client calculates where it belongs.
  • #14: What happens, though, when a node goes down? The OSDs are always talking to each other (and the monitors), and they know when something is amiss. The third and fifth node on the top row have noticed that the second node on the bottom row is gone, and they are also aware that they have replicas of the missing data.
  • #15: What happens, though, when a node goes down? The OSDs are always talking to each other (and the monitors), and they know when something is amiss. The third and fifth node on the top row have noticed that the second node on the bottom row is gone, and they are also aware that they have replicas of the missing data.
  • #16: The OSDs collectively use the CRUSH algorithm to determine how the cluster should look based on its new state, and move the data to where clients running CRUSH expect it to be.
  • #17: Because of the way placement is calculated instead of centrally controlled, node failures are transparent to clients.
  • #21: 4.2 ready (working on RBD Java bindings). QEMU and libvirt are creating images in format 1; hacky stuff to make format 2. RBD for Primary and RGW S3 for Secondary (templates, backups, ISOs).
  • #22: You can have a management server which is communicating with all of your agents (hypervisors). Management servers can be clustered for HA/failover or performance.
  • #27: Client -> XenAPI -> Domain manager -> xen control library -> standard xen libraries && "upstream" qemu. Storage plugins -> libvirt support (experimental build) -> ceph && ocfs2.