FCoE vs. iSCSI: Making the Choice
Stephen Foskett
stephen@fosketts.net | @SFoskett
FoskettServices.com | Blog.Fosketts.net | GestaltIT.com
This is Not a Rah-Rah Session
This is Not a Cage Match
First, let’s talk about convergence and Ethernet
Converging on Convergence
- Data centers rely more on standard ingredients
- What will connect these systems together?
- IP and Ethernet are logical choices
Drivers of Convergence
What's in it for you?
The Storage Network Roadmap
Performance: Throughput vs. Latency
- High throughput
- Low latency
Serious Performance
- 10 GbE is faster than most storage interconnects
- iSCSI and FCoE can both perform at wire rate
Latency is Critical Too
- Latency is even more critical in shared storage
Benefits Beyond Speed
- 10 GbE takes performance off the table (for now…)
- But performance is only half the story:
  - Simplified connectivity
  - New network architecture
  - Virtual machine mobility
Server Connectivity
- 1 GbE network + 1 GbE cluster + 4 Gb FC storage consolidated onto a single 10 GbE link (plus 6 Gbps extra capacity)
Flexibility
- No more rat’s-nest of cables
- Servers become interchangeable units: swappable, brought on line quickly, few cable connections
- Less concern about availability of I/O slots, cards, and ports
- CPU, memory, and chipset are the deciding factors, not the HBA or network adapter
Changing Data Center
- Placement and cabling of SAN switches and adapters dictates where to install servers
- Considerations for placing SAN-attached servers: cable types and lengths, switch location, logical SAN layout
- Applies to both FC and GbE iSCSI SANs
- A unified 10 GbE network allows the same data and storage networking in any rack position
Virtualization: Performance and Flexibility
- Performance and flexibility benefits are amplified with virtual servers
- 10 GbE accelerates storage performance, especially latency (“the I/O blender”)
- Can allow performance-sensitive applications to use virtual servers
Virtual Machine Mobility
- Moving virtual machines is the next big challenge
- Physical servers are difficult to move around and between data centers
- Pent-up desire to move virtual machines from host to host, and even to different physical locations
Enhanced 10 Gb Ethernet
Ethernet and SCSI were not made for each other
SCSI expects a lossless transport with guaranteed delivery
Ethernet expects higher-level protocols to take care of issues
“Data Center Bridging” is a project to create lossless Ethernet
AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
iSCSI and NFS are happy with or without DCB
DCB is a work in progress
FCoE requires PFC (802.1Qbb, or PAUSE) and DCBX (802.1Qaz)
QCN (802.1Qau) is still not ready
The DCB family:
- Priority Flow Control (PFC): 802.1Qbb
- Congestion Management (QCN): 802.1Qau
- Bandwidth Management (ETS): 802.1Qaz
- Data Center Bridging Exchange Protocol (DCBX)
- PAUSE: 802.3x
- Traffic Classes: 802.1p/Q
Flow Control
- PAUSE (802.3x): reactive, not proactive like FC’s credit approach; allows a receiver to block incoming traffic on a point-to-point Ethernet link
- Priority Flow Control (802.1Qbb): uses an 8-bit mask in PAUSE to specify 802.1p priorities; blocks a class of traffic, not the entire link; ratified and shipping
- Result of PFC: handles transient spikes, makes Ethernet lossless; required for FCoE
(Diagram: PFC between Switch A and Switch B; graphic courtesy of EMC)
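To make the PFC mechanics above concrete, here is a minimal sketch (Python, not from the deck) of what an 802.1Qbb PFC frame carries: the MAC Control EtherType 0x8808, the PFC opcode 0x0101, a priority-enable bitmask, and eight per-priority pause quanta.

```python
import struct

def build_pfc_frame(src_mac, pause_quanta):
    """Build an 802.1Qbb Priority Flow Control MAC Control frame.

    pause_quanta: eight per-priority pause times (0 = don't pause).
    A nonzero quantum for priority i also sets bit i of the
    priority-enable vector. Padding to the 64-byte Ethernet
    minimum and the FCS are omitted for clarity.
    """
    assert len(src_mac) == 6 and len(pause_quanta) == 8
    dst_mac = bytes.fromhex("0180c2000001")   # MAC Control multicast address
    ethertype = 0x8808                        # MAC Control
    opcode = 0x0101                           # PFC (plain PAUSE is 0x0001)
    enable_vector = 0
    for prio, quanta in enumerate(pause_quanta):
        if quanta:
            enable_vector |= 1 << prio
    frame = dst_mac + src_mac
    frame += struct.pack("!HHH", ethertype, opcode, enable_vector)
    frame += struct.pack("!8H", *pause_quanta)
    return frame

# Pause only priority 3 (a common choice for FCoE traffic) for 0xFFFF quanta:
frame = build_pfc_frame(bytes(6), [0, 0, 0, 0xFFFF, 0, 0, 0, 0])
```

This is how PFC blocks one traffic class: only the bit for priority 3 is set, so the other seven classes keep flowing.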
Bandwidth Management
- Enhanced Transmission Selection (ETS), 802.1Qaz: latest in a series of attempts at Quality of Service (QoS); allows “overflow” traffic to better utilize bandwidth
- Data Center Bridging Exchange (DCBX) protocol: allows devices to determine mutual capabilities; required for ETS, useful for others; ratified and shipping
(Table: offered traffic vs. realized utilization for HPC, Storage, and LAN classes sharing a 10 GbE link: HPC 3/3/2 Gb/s, Storage 3/3/3 Gb/s, LAN varying 3–6 Gb/s; graphic courtesy of EMC)
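The ETS behavior above (guaranteed weighted shares, with unused bandwidth overflowing to classes that still have demand) can be illustrated with a toy allocator; this is a sketch of the idea, not the 802.1Qaz algorithm itself.

```python
def ets_allocate(link_gbps, demand, weights):
    """Toy ETS-style allocator: each class gets a weighted share of the
    link, and bandwidth a class doesn't need overflows to classes that
    still have unmet demand (illustration only, not 802.1Qaz)."""
    alloc = {c: 0.0 for c in demand}
    active = set(demand)                 # classes with unmet demand
    remaining = float(link_gbps)
    while remaining > 1e-9 and active:
        total_w = sum(weights[c] for c in active)
        spare = 0.0
        for c in list(active):
            share = remaining * weights[c] / total_w
            take = min(share, demand[c] - alloc[c])
            alloc[c] += take
            spare += share - take        # unused share overflows
            if demand[c] - alloc[c] < 1e-9:
                active.discard(c)
        remaining = spare
    return alloc

# Storage and HPC take only what they need; LAN soaks up the overflow:
alloc = ets_allocate(10, {"LAN": 6, "Storage": 3, "HPC": 2},
                     {"LAN": 1, "Storage": 1, "HPC": 1})
# LAN ends up with 5 Gb/s: its ~3.3 Gb/s share plus the overflow
```

With equal weights each class is guaranteed about a third of the link, but the spare capacity from Storage and HPC is not wasted, which is exactly the "overflow" benefit the slide claims for ETS.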
Congestion Notification
- Need a more proactive approach to persistent congestion
- QCN (802.1Qau): notifies edge ports of congestion, allowing traffic to flow more smoothly
- Improves end-to-end network latency (important for storage); should also improve overall throughput
- Not quite ready
(Graphic courtesy of Broadcom)
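A rough sketch of the congestion-point arithmetic behind QCN: the switch compares its queue to a setpoint and weighs how fast the queue is growing; negative feedback triggers a notification to the source. Constants here are illustrative; 802.1Qau specifies the real parameters and quantization.

```python
def qcn_feedback(qlen, qlen_old, q_eq=26, w=2):
    """Congestion-point feedback in the spirit of 802.1Qau (illustrative).

    q_eq is the target (equilibrium) queue length; w weights the queue's
    rate of growth. Negative feedback means "congested": the congestion
    point sends a notification so the source rate limiter backs off.
    """
    q_off = qlen - q_eq          # how far above the setpoint we are
    q_delta = qlen - qlen_old    # how fast the queue is growing
    return -(q_off + w * q_delta)

# Queue above target and growing -> negative feedback, notify the source:
assert qcn_feedback(qlen=40, qlen_old=30) < 0
# Queue below target and draining -> positive, no notification needed:
assert qcn_feedback(qlen=20, qlen_old=22) > 0
```

The proactive part is the `q_delta` term: a rapidly growing queue produces strong feedback before the buffer actually fills, which is what lets QCN improve latency rather than just prevent drops.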
Now we can talk about iSCSI vs. FCoE
Why Choose a Protocol?
SAN History: SCSI
- Early storage protocols were system-dependent and short-distance: microcomputers used internal ST-506 disks; mainframes used external bus-and-tag storage
- SCSI allowed systems to use external disks: a block protocol with one-to-many communication; external enclosures, RAID
- Replaced ST-506 and ESDI in UNIX systems
- SAS dominates in servers; PCs use IDE (SATA)
(Copyright 2006, GlassHouse Technologies)
The Many Faces of SCSI
- “SCSI”
- SAS
- iSCSI
- “FC”
- FCoE
Comparing Protocols
Why Go iSCSI?
- iSCSI targets are robust and mature: just about every storage vendor offers iSCSI arrays, and software targets abound (Nexenta, Microsoft, StarWind)
- Client-side iSCSI is strong as well: a wide variety of iSCSI adapters/HBAs, plus software initiators for UNIX, Windows, VMware, and Mac
- Smooth transition from 1- to 10-gigabit Ethernet: plug it in and it works, no extensions required
- iSCSI over DCB is rapidly appearing
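As a taste of what those software initiators implement, here is the CHAP response computation used during iSCSI login authentication (RFC 1994 CHAP, carried in the iSCSI login exchange). The identifier, secret, and challenge values below are made up for illustration.

```python
import hashlib

def chap_response(identifier, secret, challenge):
    """CHAP response as used by iSCSI login authentication:
    MD5 over the one-byte identifier, the shared secret, and
    the target's challenge (RFC 1994)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# An initiator answering a target's challenge (values are hypothetical):
resp = chap_response(0x01, b"sharedsecret", bytes.fromhex("deadbeef"))
assert len(resp) == 16  # MD5 digest length
```

The target runs the same computation with its copy of the secret and compares digests, so the secret itself never crosses the wire.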
iSCSI Support Matrix
iSCSI Reality Check
The Three-Fold Path of Fibre Channel
FCoE Spotters’ Guide
- Fibre Channel over Ethernet (FCoE): FC-BB-5
- Bandwidth Management (ETS): 802.1Qaz
- Priority Flow Control (PFC): 802.1Qbb
- Congestion Management (QCN): 802.1Qau
(Diagram: FCoE layers Fibre Channel over Ethernet)
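The layering in that diagram can be sketched as an encapsulation routine. This is an approximation of FC-BB-5 framing (14-byte FCoE header ending in SOF, 4-byte trailer starting with EOF; the SOF/EOF code points are my assumption), with the Ethernet FCS and any 802.1Q tag omitted.

```python
import struct

FCOE_ETHERTYPE = 0x8906
SOF_I3 = 0x2E   # start-of-frame code (assumed, per FC-BB-5 tables)
EOF_T = 0x42    # end-of-frame code (assumed, per FC-BB-5 tables)

def encapsulate_fcoe(dst_mac, src_mac, fc_frame):
    """Wrap a raw Fibre Channel frame (FC header + payload + FC CRC)
    in FCoE: Ethernet header, FCoE header (version + reserved + SOF),
    the FC frame, then a trailer (EOF + reserved)."""
    assert len(dst_mac) == 6 and len(src_mac) == 6
    eth = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([SOF_I3])   # version 0 + reserved, then SOF
    trailer = bytes([EOF_T]) + bytes(3)      # EOF + reserved
    return eth + fcoe_hdr + fc_frame + trailer

# A 92-byte FC frame (24-byte header, 64-byte payload, 4-byte CRC):
out = encapsulate_fcoe(bytes(6), bytes(6), bytes(92))
```

The key point for the "spotter": the FC frame rides inside Ethernet byte-for-byte, which is why FCoE needs lossless Ethernet underneath; there is no TCP to retransmit a dropped frame.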
Why Go FCoE?
- Large FC install base/investment: storage arrays and switches; management tools and skills
- Allows for incremental adoption: FCoE as an edge protocol promises to reduce connectivity costs; end-to-end FCoE would be implemented later
- I/O consolidation and virtualization capabilities: many DCB technologies map to the needs of server virtualization architectures
- Also leverages Ethernet infrastructure and skills

"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

  • 41. Who’s Pushing FCoE and Why?
    - Cisco wants to move to an all-Ethernet future
    - Brocade sees it as a way to knock off Cisco in the Ethernet market
    - QLogic, Emulex, and Broadcom see it as a differentiator to push silicon
    - Intel wants to drive CPU upgrades
    - NetApp thinks its unified storage will win as native FCoE targets
    - EMC and HDS want to extend their dominance of high-end FC storage
    - HP, IBM, and Oracle don’t care about FC anyway
  • 43. Comparing Protocol Efficiency. Source: Ujjwal Rajbhandari, Dell Storage Product Marketing, http://www.delltechcenter.com/page/Comparing+Performance+Between+iSCSI,+FCoE,+and+FC
  • 44. Comparing Protocol Throughput. Source: as above.
  • 45. Comparing Protocol CPU Efficiency. Source: as above.
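Those slides report measured results; as a back-of-the-envelope complement, per-frame payload efficiency can be computed from header sizes alone. The header-size assumptions below are mine, not from the Dell study.

```python
def payload_efficiency(payload, overhead):
    """Fraction of on-the-wire bytes that are actual data."""
    return payload / (payload + overhead)

# iSCSI on a standard 1500-byte Ethernet frame:
eth = 14 + 4                 # Ethernet header + FCS
ip, tcp, bhs = 20, 20, 48    # IPv4, TCP, iSCSI Basic Header Segment
iscsi_payload = 1500 - ip - tcp - bhs
iscsi_eff = payload_efficiency(iscsi_payload, eth + ip + tcp + bhs)

# FCoE carrying a full 2112-byte FC payload (needs "baby jumbo" frames):
fcoe_hdr = 14 + 4            # FCoE header + trailer
fc_hdr = 24 + 4              # FC header + FC CRC
fcoe_eff = payload_efficiency(2112, eth + fcoe_hdr + fc_hdr)

print(f"iSCSI ~{iscsi_eff:.1%}, FCoE ~{fcoe_eff:.1%}")
```

Under these assumptions both protocols are in the 93–97% range per frame, which is consistent with the deck's claim that raw efficiency is not the deciding factor; real-world differences come from CPU load, offload hardware, and congestion behavior.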
  • 47. Counterpoint: Why Ethernet?
    - Why converge on Ethernet at all? Lots of work just to make Ethernet perform unnatural acts!
    - Why not InfiniBand? Converged I/O already works; excellent performance and scalability; wide hardware availability and support. But it’s kinda pricey, and it’s another new network
    - Why not something else entirely? Token Ring would have been great! FCoTR: Get on the Ring

Editor's Notes

  • #14: The average server includes quite a few back-end ports, from traditional data networking to storage connectivity, management, and clustering. Many server manufacturers are consolidating most or all of these functions on 10 GbE, and improving performance at the same time. Rather than four bonded Gigabit Ethernet ports, less than half the capacity of a single 10 GbE connection is required. Next, server management is moved to the same IP/Ethernet connection, and capacity is left over for iSCSI to replace a Gigabit Ethernet or Fibre Channel port or two. Even cluster communication will be bundled over 10 GbE, whether a simple IP heartbeat or advanced DMA using RoCE. The average server now needs just two redundant 10 GbE ports rather than up to a dozen ports and cables.