Architecting a Scale-Out Cloud Storage Solution

Open versus Commercial Implementation

• Commercial – comes at a price
• Open architecture – with support
• Whatever your choice, the hardware must be flexible.
Appliance vs. SW on Commodity HW
The appliance model wins when…
• Workloads: fairly standard across organizations
• Economics/Growth: savings from commodity hardware are limited to one “box” and are therefore small relative to the overall TCO savings from easy deployment and management
• Flexibility: all needed flexibility can be accomplished by configuration changes in a single box
• Cloud: not a factor

SW on commodity HW wins when…
• Workloads: vary tremendously across multiple dimensions (e.g. performance, availability, capacity, I/O requirements, etc.)
• Economics/Growth: savings from commodity hardware stretch across multiple “boxes” and are massive
• Flexibility: significant need for flexibility in hardware to optimize capacity, performance, availability, etc.
• Cloud: a major factor, driving the need to run on multiple disparate devices across the internet

  Ben Golub, Computerworld, October 11, 2011: “Storage is a hard problem with a soft(ware) solution”
Open Software Choices
  •   GlusterFS
  •   OpenStack (Swift)
  •   NexentaStor
  •   OS Nexus
  •   Open-E
  •   OpenStorage Software
  •   Openfiler
  •   FreeNAS
  •   ………
Three Leading Open Architectures
• Lustre/Gluster (Gluster bought by Red Hat)
    – HPC – the others don’t have the throughput
    – If something fails, it takes days to bring it back
    – Gluster fixed HA and failover and now they’ve been acquired…?
• OpenStack (Swift) (numerous contributors)
    –   Compute, storage, and networking management
    –   Many firms are trying to commercialize it; Rackspace is one of the largest
    –   Pure cloud storage = block-level replication, not file-level
    –   Still at least 6-12 months from being production-ready
• ZFS/NexentaStor (Nexenta)
    – ZFS from OpenSolaris, developed at Sun; Nexenta added a GUI and redundancy for HA
    – Swiss army knife = rich feature set, highly configurable platform for unified storage
      management (iSCSI, NAS, DAS)
    – GlusterFS and Swift need a dedicated server head node plus storage head nodes;
      NexentaStor requires only one head node
Case Study – Korea Telecom
• Storage Cloud – mobile & online storage
• NexentaStor for unified storage management
• Cirrascale for consolidation (space savings),
  cooling efficiency (power savings and
  reliability), and lowest TCO
  – Decouple server and storage
  – Flexibility to configure/reconfigure on the fly
Case Study – Korea Telecom
                                 Problem
• 6 x 4U storage chassis per rack
   – 24 x 2TB HDDs per chassis
   – 288TB per rack
• Due to overheating, could not
  install more than 6 chassis in
  each rack
• Each rack occupied 1 x 2 floor
  tiles
• High cost of wasted
  infrastructure became a
  problem for the data center
Case Study – Korea Telecom
                         Problem Resolved
• Cirrascale provided Storage Bricks
  in a 1:4 Head Node to Storage
  Blade configuration
   –   12 x 2TB HDDs per Storage Blade
   –   96TB per Storage Brick
   –   14 Storage Bricks per rack
   –   1.3PB storage capacity per rack
       (compared to 288TB and lots of heat)
• Reduced almost 5 racks to 1
• Rack still had 8U of space available
  for switching and other equipment
• All in the same two floor tiles
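The before/after capacity arithmetic in this case study can be checked directly:

```python
# Before: 6 chassis per rack x 24 HDDs x 2TB each
before_tb = 6 * 24 * 2
assert before_tb == 288  # the 288TB per rack quoted above

# After: each 1:4 Storage Brick = 4 storage blades x 12 x 2TB HDDs
brick_tb = 4 * (12 * 2)
rack_tb = 14 * brick_tb  # 14 bricks per rack
print(brick_tb, rack_tb)  # 96 1344 -> 96TB per brick, ~1.3PB per rack
```

The consolidation ratio, 1344/288 ≈ 4.7, is the “reduced almost 5 racks to 1” above.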
Vertical Cooling Technology

[Rack diagram: three tiers of blades (12 blades per tier), server blades above storage blades, each tier sitting over a tray of 8 fans; cooling air flows vertically up through the rack. A close-up shows the air gap between the HDDs and the blade chassis. Ethernet and I/O are cabled from the backplane to the foot of the rack, keeping I/O on a separate axis from the air flow. The bottom 8U is horizontal and holds switches, patch panels, and serial concentrators.]
Storage Systems
• “Storage Brick” is the base product
   – 1 Head Node (control logic/power supply)
   – 1-4 Storage Blades (12-48 HDDs)
   – 12-36 Storage Bricks per frame
• 2PB per rack in a 1:3 configuration
• Flexible configuration
   – Mix and match storage media, modify head-node-to-HDD ratios, upgrade or downgrade server types
   – Rack- or blade-level redundancy depending on cost and application requirements
• Flexible management
   – IPMI 2.0 support for integration with leading tools
   – Individual HDDs hot-swappable
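The brick model reduces to simple arithmetic; a small sketch (bricks-per-rack and drive size here are deployment assumptions, not quoted specs):

```python
def rack_capacity_tb(bricks_per_rack: int, blades_per_brick: int,
                     drive_tb: int, drives_per_blade: int = 12) -> int:
    """Raw TB per rack for a 1:blades_per_brick head-node configuration."""
    return bricks_per_rack * blades_per_brick * drives_per_blade * drive_tb

# A 1:3 configuration with 3TB drives and 18 bricks per rack:
print(rack_capacity_tb(18, 3, 3))   # 1944 -> roughly the 2PB per rack quoted
# The Korea Telecom 1:4 configuration with 2TB drives and 14 bricks:
print(rack_capacity_tb(14, 4, 2))   # 1344 -> the ~1.3PB from the case study
```

The same function covers every supported ratio (1:1 through 1:4), which is the point of decoupling the head node from storage.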
Modular Design

[Rack elevation diagrams, front and back views: SB1315 head nodes and SB1058 storage blades (bricks numbered 1-18) interleaved with fan trays.]

• Up to 2.2PB: 4 servers, 1:14 head node to storage blade ratio, 56 disk blades (672 drives)
• Up to 1.9PB: 54 disk blades (648 drives)
• Same 2 SKUs as before, with an additional SAS switch blade using the LSI 6160
• Reconfigure storage and compute ratio at any time
• Optional SSD
• 2.5” or 3.5” HDDs; 1, 2, or 3TB SATA or SAS
• User-defined performance and features
Why Separate Head Node from Storage?
 • Easily configure the server independent of
   storage
 • Optimize the compute to storage
   configuration
 • Storage blades can be SSD, SAS, or SATA, or
   any combination
Storage Server Capacity/Configurability
        Feature               Cirrascale               Standard 4U Rackmount    High Density 4U Rackmount
Floor Tiles                   2                        2                        2
Height                        87.5”                    84.0”                    84.0”
Maximum Capacity              2PB                      720TB                    1.38PB
Floor                         Raised or Concrete       Raised or Concrete       Raised or Concrete
Maximum Storage               672 drives               240 drives               432 drives
                              Up to 3TB HDDs           Up to 3TB HDDs           Up to 3TB HDDs
                              14 server heads          10 server heads          10 server heads
Configuration Granularity
 Drives                       12 drives                24 drives                45 drives
 Server:Drive Ratio           1:12, 1:24, 1:36, 1:48   1:24                     1:45
Case Study: SAN for Cloud Deployment
Cirrascale HW Configuration                     Dell HW Configuration
• Head Node:                                    • Head Node: PowerEdge R510 2U
  –   Dual Xeon X5606 2.13GHz quad core CPUs        –   Dual Xeon X5530 2.4GHz quad core CPUs
  –   48GB DDR3-1333MHz ECC memory                  –   32GB DDR3-1333MHz ECC memory
  –   2 x 10GbE ports                               –   2 x 10GbE ports
  –   2 x 250GB, 7200 RPM 2.5” HDD (hot swap)       –   2 x 250GB, 7200 RPM 3.5” HDD
• Storage Blade:                                •    Storage JBOD: MD1220 4U
  – 24 x 2.5” 300GB SAS 15K RPM HDDs                – 24 x 2.5” 300GB SAS 10K HDDs
    7.4 TB per Blade                                  7.4 TB per JBOD
• Head Node to Storage Blade Ratio 1:2          • Head Node to JBOD Ratio 1:2
  – 1 Head Node: 48 HDDs                            – 1 Head Node: 48 HDDs

Bottom line
     2.6PB, 7 Racks, $2.4M                              2.6PB, 27 Racks, $4.3M
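The bottom line can be restated per PB and per rack from the figures above:

```python
# Same 2.6PB target deployment on both configurations
configs = {
    "Cirrascale": {"pb": 2.6, "racks": 7,  "cost_musd": 2.4},
    "Dell":       {"pb": 2.6, "racks": 27, "cost_musd": 4.3},
}

for name, c in configs.items():
    cost_per_pb = round(c["cost_musd"] / c["pb"], 2)   # $M per PB
    tb_per_rack = round(c["pb"] * 1000 / c["racks"])   # density
    print(name, cost_per_pb, "M$/PB,", tb_per_rack, "TB/rack")
# Cirrascale: ~0.92 M$/PB, ~371 TB/rack
# Dell:       ~1.65 M$/PB, ~96 TB/rack
```

Roughly 44% lower cost per PB and nearly 4x the rack density, which is the point of the comparison.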
Summary
• Every storage architecture balances trade-offs among:
      • Availability              • Manageability
      • Backup/Recovery           • Compliance
      • Performance               • Scalability
      • Capacity                  • Density
      • Power Consumption         • Cost
• Compelling benefits among SAN, iSCSI, NAS, and DAS are
  driving Cloud Service Providers and enterprises to install
  converged and hybrid solutions
• Open Software and Hardware platforms provide the most
  flexible, cost-effective, future-proof platforms for scale


Editor's Notes

  • #4: The appliance model is winning in areas like network security. A single, well-integrated, easy-to-manage firewall appliance can meet the workloads of most IT operations. The savings on a single, generic server are not material relative to the management ease of deploying a reliable, integrated appliance. To the extent that flexibility is needed, it can be achieved by configuring rules within the appliance.

    By contrast, storage pools are massive and growing, and the workloads they serve differ massively across organizations. The ideal hardware, disk-to-server ratio, I/O choice, etc., differ dramatically depending on whether one is providing storage for video, audio, images, or virtual machines, or whether one is dealing with read-intensive versus write-intensive applications. Therefore, having flexibility in the underlying hardware is critical.

    Furthermore, in an environment where pools of hundreds of servers and petabytes of disk are becoming commonplace, the economics of being able to flexibly source commodity hardware from multiple vendors become very compelling.

    Finally, with the move to the cloud, it is clear that a hardware-bound model of storage will face challenges in the long term. One simply can't move a large, proprietary storage appliance to the cloud. For that matter, one also can't easily adapt the proprietary appliance model to the kinds of SMAQ (Storage, MapReduce and Query) workloads I discussed in my previous posts. Hence, in hybrid cloud or Big Data environments, there will be a clear imperative for the software model to win out.

    For all of these reasons, I think it is pretty clear that storage will follow the compute, middleware, and application-server segments of the IT market in adopting the software model of the world.