WHITE PAPER




FCoE Convergence at the
Access Layer with Juniper
Networks QFX3500 Switch
First Top-of-Rack Switch Built to Solve All the Challenges
Posed by Access-Layer Convergence







                      Table of Contents
                      Executive Summary
                      Introduction
                      Access-Layer Convergence Modes
                      Option 1: FCoE Transit Switch (DCB Switch with FIP Snooping)
                          FCoE Servers with CNA
                      Option 2: FCoE-FC Gateway (Using NPIV Proxy)
                      Option 3: FCoE-FC Switch (Full FCF) (Not Recommended)
                      Deployment Models Available Today
                          Rack-Mount Servers and Top-of-Rack FCoE-FC Gateway
                          Blade Servers with Pass-Through Modules and Top-of-Rack FCoE-FC Gateway
                          Blade Servers with Embedded DCB Switch and Top-of-Rack FCoE-FC Gateway
                          Blade Servers with Embedded FCoE-FC Gateway
                          Servers Connected Through FCoE Transit Switch to an FCoE-Enabled Fibre Channel SAN Fabric
                      The Standards that Allow for Server I/O and Access-Layer Convergence
                          Enhancements to Ethernet for Converged Data Center Networks—DCB
                          Enhancements to Fibre Channel for Converged Data Center Networks—FCoE
                      Future Direction for FCoE
                      A Brief Note on iSCSI
                      Conclusion
                      About Juniper Networks

                      Table of Figures
                      Figure 1: The phases of convergence, from separate networks, to access layer convergence, to the fully converged network
                      Figure 2: Operation of an FCoE transit switch vs. an FCoE-FC gateway
                      Figure 3: Operation of an FCoE transit switch
                      Figure 4: FCoE servers with CNA
                      Figure 5: Rack-mount servers and top-of-rack FCoE-FC gateway
                      Figure 6: Blade servers with pass-through modules and top-of-rack FCoE-FC gateway
                      Figure 7: Blade servers with embedded DCB switch and top-of-rack FCoE-FC gateway
                      Figure 8: Servers connected through an FCoE transit switch to an FCoE-enabled FC SAN fabric
                      Figure 9: PFC, ETS, and QCN







                         Executive Summary
                         In 2011, customers will finally be able to invest in convergence-enabling equipment and begin reaping the benefits of
                         convergence in their data centers. With the first wave of standards now complete—both the IEEE Data Center Bridging (DCB)
                         enhancements to Ethernet and the InterNational Committee for Information Technology Standards (INCITS) T11 FC-BB-5
                         standard for Fibre Channel over Ethernet (FCoE)—enterprises can benefit from server- and access-layer I/O convergence
                         while continuing to leverage their investment in their existing aggregation, core LAN, and Fibre Channel (FC) backbones.

                         So why the focus on server and access-layer I/O convergence? Simply put, the industry recognizes that the first wave of
                         standards does not meet the needs of full convergence and so it is working on a second wave of standards—including FC-
                         BB-6—as well as various forms of fabric technology to better address the challenges of full convergence. The new standards
                         are designed to provide lower cost convergence strategies for the smaller enterprise, and to address the scaling issues that
                         come about from convergence in general as well as increased data center scale. As a result, 2011 is the year to focus on the
                         benefits to be gained from converging the access layer while laying a foundation for the future.

                         Juniper Networks® QFX3500 Switch is the first top-of-rack switch built to solve all of the challenges posed by access-layer
                         convergence. It works for both rack-mount and blade servers, and for organizations with combined or separate LAN and
                         storage area network (SAN) teams. It is also the first product to leverage a new generation of ASIC technologies. It offers
                         1.28 terabits per second (Tbps) of bandwidth implemented with a single ultra-low latency ASIC and soft-programmable
                         ports capable of gigabit Ethernet (GbE), 10GbE, 40GbE, and 2/4/8 Gbps FC, supported through small form-factor pluggable
                         (SFP+) GbE copper, 10GbE optical and direct attach copper (DAC), and quad small form-factor pluggable (QSFP) dense
                         optical connectivity.








                         Figure 1: The phases of convergence, from separate networks, to access layer convergence, to the fully
                                                                  converged network.







                         Introduction
                         The network is the critical enabler of all services delivered from the data center. A simple, streamlined, and scalable data
                         center network fabric can deliver greater efficiency and productivity, as well as lower operating costs. Such a network
                         also allows the data center to support much higher levels of business agility and not become a bottleneck that hinders a
                         company from releasing new products or services.

                         To allow businesses to make sound investment decisions, this white paper examines the following areas to clarify the
                         most compelling options for convergence in 2011:

                         1.	 Review the different types of convergence-capable products that are available on the market based upon the current
                             standards and consider the capabilities of those products

                         2.	 Consider the deployment scenarios for those products

                         3.	 Look forward to some of the new product and solution capabilities expected over the next couple of years

                         Access-Layer Convergence Modes
                         When buying a convergence platform, it is possible to deploy products based on three very different modes of operation.
                         Products on the market today may be capable of one or more of these modes depending on hardware and software
                         configuration and license enablement.

                         •	 FCoE transit switch—DCB switch with FCoE Initialization Protocol (FIP) snooping

                         •	 FCoE-FC gateway—using N_Port ID Virtualization (NPIV) proxy

                         •	 FCoE-FC switch—full Fibre Channel Forwarder (FCF) capability

                         In principle, these systems can be used in multiple places within a deployment. However, for the purpose of this document
                         and based on the most likely deployments in 2011, only the server access-layer convergence model will be covered.


                                                        Figure 2: Operation of an FCoE transit switch vs. an FCoE-FC gateway







                      Option 1: FCoE Transit Switch (DCB Switch with FIP Snooping)
                      In this model, the SAN team enables their backbone SAN fabric for FCoE, while the network team deploys a top-of-rack DCB
                      switch with FIP snooping. Servers are deployed with Converged Network Adapters (CNAs), and blade servers are deployed
                      with pass-through modules or embedded DCB switches. These are connected to the top-of-rack switch, which then has
                      Ethernet connectivity to the LAN aggregation layer and Ethernet connectivity to the FCoE ports of the SAN backbone.

                      A common question at this point is whether a DCB switch with no Fibre Channel stack can indeed be a viable part of
                      a converged deployment and, in particular, whether such a switch gives not just the necessary security but also the
                      performance and manageability required in a storage network deployment.

                      Since this is, at one level, just a Layer 2 switch, this solution ensures that the switch in each server rack is not consuming
                      an FC domain ID. Fibre Channel networks have a scale restriction that limits them to just a few tens of switches. As
                      convergence and 10GbE force a move towards top-of-rack switches, any solution deployed must ensure that convergence
                      does not cause an FC SAN scaling problem.



                                                                Figure 3: Operation of an FCoE transit switch

                      FCoE Servers with CNA
                      A rich implementation of an FCoE transit switch will provide strong management and monitoring of the traffic separation,
                      allowing the SAN team to monitor FCoE traffic throughput. Specifically, a fully manageable DCB switch will allow the user to
                      monitor traffic on a per-user-priority and per-priority-group basis, not just per port.

                      FIP snooping as defined in the FCoE standard provides perimeter protection, ensuring that the presence of an Ethernet layer
                      in no way impacts existing SAN security. The SAN backbone can be simply FCoE-enabled with either FCoE blades within
                      chassis-based systems or FCoE-FC gateways connected to the edge of the SAN backbone. In addition, the traditional Fibre
                      Channel Security Profile (FC-SP) mechanisms work seamlessly through FCoE, allowing CNA-to-FCF authentication to be
                      used through the DCB switch.

                      Perhaps less obviously, FIP snooping also means that the switch has a very clear view of each and every FCoE session
                      running through it, both in terms of the path, which is derived from the source and destination media access control (MAC)
                      addresses of the virtual Fibre Channel ports, and in terms of the actual status of the virtual FC connection, which is
                      monitored by snooping the FIP keepalives.
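
                      To illustrate, the sketch below models the session state a FIP-snooping switch can build purely from observed FIP traffic. It is a minimal, hypothetical sketch in Python: the class names, the simplified frame events, and the fixed keepalive timeout are all illustrative assumptions rather than any product's implementation.

```python
# Illustrative sketch of the state a FIP-snooping transit switch can derive
# purely by observing FIP frames. Class and field names are hypothetical;
# real implementations operate on parsed frames in the data plane.
from dataclasses import dataclass, field
import time

@dataclass
class FcoeSession:
    vn_port_mac: str          # fabric-assigned MAC of the server's virtual N_Port
    vf_port_mac: str          # MAC of the FCF's virtual fabric port
    ingress_port: int         # switch port the session was learned on
    last_keepalive: float = field(default_factory=time.time)

class FipSnooper:
    """Tracks FCoE sessions and installs perimeter ACLs from snooped FIP frames."""
    KEEPALIVE_TIMEOUT = 90.0  # illustrative; FC-BB-5 derives this from the FKA period

    def __init__(self):
        self.sessions: dict[str, FcoeSession] = {}

    def on_flogi_accept(self, vn_mac: str, vf_mac: str, port: int) -> None:
        # A snooped fabric login accept reveals both endpoint MACs, so the
        # switch can install an ACL permitting FCoE only between this pair.
        self.sessions[vn_mac] = FcoeSession(vn_mac, vf_mac, port)

    def on_keepalive(self, vn_mac: str) -> None:
        if vn_mac in self.sessions:
            self.sessions[vn_mac].last_keepalive = time.time()

    def expire_stale(self) -> list[str]:
        # Sessions whose keepalives stop are torn down and their ACLs removed.
        now = time.time()
        stale = [m for m, s in self.sessions.items()
                 if now - s.last_keepalive > self.KEEPALIVE_TIMEOUT]
        for m in stale:
            del self.sessions[m]
        return stale
```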







                         Just as with any Ethernet deployment, the switch can use a link aggregation group (LAG) to balance the Ethernet packets
                         (including FCoE) across multiple links. As with any FC switch, this load balancing can include the OX_ID (Fibre Channel
                         originator exchange ID) in order to carry out the Fibre Channel best practice of exchange-based load balancing. Finally, the
                         FCoE protocol includes load-balancing capabilities to ensure that the FCoE servers are evenly and appropriately distributed
                         across the multiple FCoE FC fabric connections.
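
                         As a concrete illustration of exchange-based load balancing, the following sketch hashes on the OX_ID along with the L2 addresses. The field selection and the CRC-based hash are assumptions made for illustration, not a description of any particular switch's hash function.

```python
# Hedged sketch of exchange-based LAG load balancing: including the FC OX_ID
# in the hash keeps all frames of one Fibre Channel exchange on a single link
# (preserving in-order delivery) while spreading distinct exchanges across
# the LAG. Field choices and the CRC32 hash are illustrative assumptions.
import zlib

def lag_member(src_mac: str, dst_mac: str, ox_id: int, n_links: int) -> int:
    key = f"{src_mac}|{dst_mac}|{ox_id:04x}".encode()
    return zlib.crc32(key) % n_links

# Frames of the same exchange always select the same link:
a = lag_member("0e:fc:00:01:01:01", "0e:fc:00:02:02:02", 0x1234, 4)
b = lag_member("0e:fc:00:01:01:01", "0e:fc:00:02:02:02", 0x1234, 4)
assert a == b
# Different exchanges between the same endpoints may use different links:
print([lag_member("0e:fc:00:01:01:01", "0e:fc:00:02:02:02", x, 4)
       for x in (0x1234, 0x1235, 0x1236, 0x1237)])
```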

                         FCoE transit switches have several advantages:

                         •	 Low-cost top-of-rack DCB switch

                         •	 Rich monitoring of FCoE traffic at top of rack (QFX3500 Switch)

                         •	 FCoE enablement of SAN backbone (FCoE blades or FCoE-FC gateway) managed by the SAN team for clean
                            management separation

                         •	 Load balancing carried out between CNAs and FCoE ports of the SAN fabric as well as point-to-point throughout the
                            Ethernet infrastructure

                         •	 Comprehensive security maintained through FIP snooping and FC-SP

                         •	 No heterogeneous support issues, as top of rack is L2 connectivity only



                                                                     Figure 4: FCoE servers with CNA

                         Option 2: FCoE-FC Gateway (Using NPIV Proxy)
                         In this model, the SAN and Ethernet teams agree jointly to deploy an FCoE-FC top-of-rack gateway. From a cabling
                         perspective, the deployment is identical to Option 1, with the most visible difference being that the cable between the top of
                         rack and the SAN backbone is now carrying native Fibre Channel traffic rather than FCoE traffic.

                         As with Option 1, this solution ensures that the switch in each server rack is not consuming an FC domain ID. In this case,
                         however, unlike Option 1, a much richer level of Fibre Channel functionality has been enabled within the switch. The FCoE-
                         FC gateway uses NPIV technology so that it presents to the servers as an FCoE-enabled Fibre Channel switch, and presents
                         to the SAN backbone as a group of FC servers. It then simply proxies sessions from one domain to the other with intelligent
                         load-balancing and automated failover capability across the Fibre Channel links to the fabric.
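
                         The sketch below captures the essence of this NPIV proxy behavior: server logins are answered locally and re-issued northbound as NPIV FDISCs on the least-loaded uplink, with sessions redistributed on uplink failure. Class and method names are hypothetical, and least-loaded placement is one plausible policy rather than necessarily the QFX3500's.

```python
# Conceptual sketch of the session proxying an FCoE-FC gateway performs.
# Southbound it answers server logins as if it were an FCF; northbound it
# re-issues each login as an NPIV FDISC on one of its N_Port uplinks, so the
# SAN fabric simply sees a group of FC servers. Names are hypothetical.

class NpivProxyGateway:
    def __init__(self, uplinks: list[str]):
        self.uplinks = list(uplinks)        # gateway N_Ports toward the FC fabric
        self.sessions: dict[str, str] = {}  # server WWPN -> uplink carrying it

    def server_login(self, wwpn: str) -> str:
        # Answer the server's FLOGI/FDISC locally, then proxy it to the
        # fabric as an NPIV FDISC on the least-loaded uplink.
        load = {u: 0 for u in self.uplinks}
        for uplink in self.sessions.values():
            load[uplink] += 1
        chosen = min(self.uplinks, key=lambda u: load[u])
        self.sessions[wwpn] = chosen
        return chosen

    def uplink_failed(self, failed: str) -> list[str]:
        # Automated failover: sessions on the failed N_Port are re-proxied
        # across the surviving uplinks.
        moved = [w for w, u in self.sessions.items() if u == failed]
        self.uplinks.remove(failed)
        for wwpn in moved:
            del self.sessions[wwpn]
        for wwpn in moved:
            self.server_login(wwpn)
        return moved
```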







                      FCoE-FC gateways have several advantages:

                      •	 Clean separation of management through role-based access control (QFX3500 Switch)

                      •	 No need for FCoE enablement of the SAN backbone

                      •	 Fine-grained FCoE session-based load balancing (at the virtual machine level for NPIV-enabled hypervisors—QFX3500
                         Switch) and full Ethernet LAG with exchange-based load balancing on the Ethernet-facing connectivity

                      •	 No heterogeneous support issues, as the FCoE-FC gateway presents to the SAN fabric as a Fibre Channel-enabled server
                         (N_Port to F_Port)

                      •	 Available post deployment as a license upgrade and fungible port reconfiguration with no additional hardware (QFX3500
                         Switch)

                      •	 Support for an upstream DCB switch such as an embedded switch in blade server shelf (QFX3500 Switch), as well as
                         direct CNA connectivity or connectivity via blade server pass-through modules

                      Option 3: FCoE-FC Switch (Full FCF) (Not Recommended)
                      For deployments of any size, there is no value to local switching, as any rack is either pure server or pure storage. In addition,
                      although the SAN standards limit deployments to 239 switches, the practical supported limits are typically within the 16 to
                      32 range (in reality, most deployments are kept well below these limits). As such, this option has limited value in production
                      data centers.
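
                      Some simple arithmetic, sketched below, shows why. The rack and backbone counts are hypothetical, but any full-FCF top-of-rack deployment consumes one domain ID per rack and quickly exceeds the practical fabric limits cited above.

```python
# Back-of-the-envelope arithmetic on FC domain ID consumption if every
# top-of-rack switch were a full FCF. The deployment counts are illustrative.

FC_STANDARD_MAX_SWITCHES = 239   # ceiling in the FC standards
PRACTICAL_FABRIC_LIMIT = 32      # typical supported limit cited above

tor_switches = 40                # hypothetical: one full-FCF switch per rack
backbone_switches = 8            # hypothetical existing FC SAN backbone

domains = tor_switches + backbone_switches
print(f"{domains} domain IDs needed vs. a practical limit of {PRACTICAL_FABRIC_LIMIT}")
# A transit switch or NPIV gateway at top of rack consumes zero domain IDs,
# which is why Options 1 and 2 sidestep this scaling problem entirely.
```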

                      For very small configurations where a single switch needs to connect to both servers and storage, Juniper believes that
                      Internet Small Computer System Interface (iSCSI) is the best approach in 2011, while the FC-BB-6 VN2VN model (see
                      “Future Direction for FCoE” section later in this white paper) will be the preferred FCoE end-to-end model in 2012.

                      Deployment Models Available Today
                      As previously noted, this paper focuses on deployments that apply for server access layer convergence. As such, it is assumed
                      that this access layer is in turn connecting both to some form of Ethernet aggregation/core layer on one side and a Fibre
                      Channel backbone on the other. The term “Fibre Channel backbone” implies a traditional FC SAN of some form which has
                      attached to it the FC disk and tape, as well as, most likely, existing FC servers.

                      By leveraging either an FCoE transit switch or an FCoE-FC gateway, whether separately or together, there are a number
                      of deployment options for supporting both rack-mount servers and blade servers. Each approach has its merits, and
                      organizations may want to use different approaches, depending on their requirements.

                      In terms of physical deployment in most data centers, the Ethernet aggregation and core, the FC backbone, and the FC disk
                      and tape are likely to be colocated in some centralized location within the data center with the server racks. From a cabling
                      perspective, this means that the same physical cable infrastructure can easily support any of the deployment models
                      discussed below.

                      Rack-Mount Servers and Top-of-Rack FCoE-FC Gateway
                      This deployment model is perhaps the most recognized and best understood. The QFX3500 Switch fully supports this model
                      and, unlike other products, the QFX3500 enables this mode through a single license that allows up to 12 of its 48 SFP+ ports
                      to be configured for 2/4/8 Gbps FC instead of 10GbE.








                                                      Figure 5: Rack-mount servers and top-of-rack FCoE-FC gateway

                         Blade Servers with Pass-Through Modules and Top-of-Rack FCoE-FC Gateway
                         This model is similar to the previous rack-mount servers and top-of-rack FCoE-FC gateway model. The challenge with this
                         model is the complex cabling that accompanies pass-through modules. Using pass-through has the benefit of removing an
                         entire layer from the network topology, thereby simplifying the data center, ensuring a single network operating system at all
                         layers, and allowing the edge of the network to leverage the richer functionality available with the feature-rich ASICs used at
                         top of rack. The use of modern pass-through modules and well-constructed cabling solutions provides all the cable
                         simplicity benefits of an embedded blade switch with none of the limitations.

                                            Figure 6: Blade servers with pass-through modules and top-of-rack FCoE-FC gateway







                      Blade Servers with Embedded DCB Switch and Top-of-Rack FCoE-FC Gateway
                      To support this deployment model, it is necessary to ensure that both the CNAs and the FCoE-FC gateway have particularly
                      feature-rich implementations of the full FC-BB-5 standard in order to support many-to-many L2 visibility for fan-in load
                      balancing and high availability.

                      The QFX3500 Switch is the first fully FC-BB-5-enabled gateway capable of easily supporting upstream DCB switches,
                      including third-party embedded blade shelf switches. Juniper strongly recommends using such switches only if they have
                      implemented FIP snooping for perimeter detection, and they have fully standards-based, feature-rich DCB implementations.

                      When deploying a DCB switch in between the servers and the gateway, an Ethernet LAG is formed between the two devices,
                      providing optimum packet distribution. In the case of the QFX3500 Switch, the Fibre Channel OX_ID is included in the LAG
                      hash, ensuring exchange-based load balancing across the link. Additionally, for enhanced scaling, the ports of the QFX3500
                      can be configured in a trusted mode where it is known that there is an upstream DCB switch with FIP snooping.

                      Increasingly, however, this option is seen as undesirable, as it adds an additional network tier and makes it hard to
                      standardize the network access layer in a multivendor server environment.




                                    Figure 7: Blade servers with embedded DCB switch and top-of-rack FCoE-FC gateway

                      Blade Servers with Embedded FCoE-FC Gateway
                      Typically, embedded switches have a limited power and heat budget, so the simpler the module the better. There is also an
                      issue with limited space for port connections. With a gateway, some of these ports must be Ethernet and some must be Fibre
                      Channel, further restricting the available bandwidth in both cases. In addition, such modules are not commonly available
                      for all blade server families, making the deployment of a standard and consistent infrastructure challenging. Overall, these
                      issues make this an undesirable use case.

                      Servers Connected Through FCoE Transit Switch to an FCoE-Enabled Fibre Channel SAN Fabric
                      There is an interesting case for using FCoE transit switches as the access layer connecting both to Ethernet aggregation and
                      to an FCoE-enabled Fibre Channel SAN fabric. An FCoE-FC gateway has to be actively managed and monitored by both the
                      SAN and LAN teams—a considerable challenge for some organizations. An FCoE transit switch is not active at the FCoE layer
                      of the protocol stack, so there is nothing for the SAN team to actively manage. Therefore, while the SAN team would still
                      need monitoring capabilities, there is no active overlap of management, and this minimizes the possibility of configuration
                      mistakes by different groups.







                         There are various ways to enable the FC SAN fabric. One model is to include some FCoE-enabled switches within the SAN
                         fabric; this can be accomplished by adding an FCoE blade to one of the chassis-based SAN directors. As with the previous
                         use cases, the FCoE ports deployed on the SAN fabric must support multiple virtual fabric ports per physical port for
                         this deployment to be viable. Another option is to use the QFX3500 Switch configured as an FCoE-FC gateway, which is
                         connected locally to a pure FC SAN fabric and administered by the SAN team.

                         For larger customers, where the merging of LAN and SAN network teams is unlikely to happen for several years, this provides
                         a very clean and simple converged deployment model.



                                   Figure 8: Servers connected through an FCoE transit switch to an FCoE-enabled FC SAN fabric (FC SAN
                                   fabrics managed by the SAN team, FCoE transit switches by the LAN team, servers by the server team)

                         The Standards that Allow for Server I/O and Access-Layer Convergence

                         Enhancements to Ethernet for Converged Data Center Networks—DCB
                         Ethernet, originally developed to handle traffic using a best-effort delivery approach, has mechanisms to support lossless
                         traffic through 802.3X Pause, but these are rarely deployed. When used in a converged network, Pause frames can lead to
                         cross-traffic blocking and congestion. Ethernet also has mechanisms to support fine-grained queuing (user priorities), but
                         again, these are rarely deployed within the data center. The next logical step for Ethernet will be to leverage these capabilities
                         and enhance existing standards to meet the needs of convergence and virtualization, propelling Ethernet into the forefront as
                         the preeminent infrastructure for LANs, SANs, and high-performance computing (HPC) clusters.

                         These enhancements benefit Ethernet I/O convergence (remembering that most servers have multiple 1GbE network
                         interface cards not for bandwidth but to support multiple network services), and existing Ethernet- and IP-based storage
                         protocols such as network-attached storage (NAS) and iSCSI. These enhancements also provide the appropriate platform for
                         supporting FCoE. In the early days when these standards were being developed and before they moved under the auspices of
                         the IEEE, the term Converged Enhanced Ethernet (CEE) was used to identify them.







                      DCB—a set of IEEE standards. Ethernet needed a variety of enhancements to support I/O, network convergence, and
                      server virtualization. Server virtualization is covered in other Juniper white papers, even though it is part of the DCB protocol
                      set. With respect to I/O and network convergence, the development of new standards began with the following existing
                      standards:

                      1.	User Priority for Class of Service—802.1p—which already allows identification of eight separate lanes of traffic (used as-is)

                      2.	 Ethernet Flow Control (Pause, symmetric, and/or asymmetric flow control)—802.3X—which is leveraged for priority flow
                          control (PFC)

                      3.	 MAC Control Frame for PFC—802.3bd—to allow 802.3X to apply to individual user priorities (modified)

                      A number of new standards that leverage these components have been developed and have either been formally approved
                      or are in the final stages of the approval process. These include:

                      1.	 PFC—IEEE 802.1Qbb—which applies traditional 802.3X Pause to individual priorities instead of the port

                      2.	 Enhanced Transmission Selection (ETS)—IEEE 802.1Qaz—which is a grouping of priorities and bandwidth allocation to
                          those groups (see the sketch after this list)

                      3.	 Ethernet Congestion Management (QCN)—IEEE 802.1Qau—which is a cross-network, as opposed to point-to-point,
                          backpressure mechanism

                      4.	 Data Center Bridging Exchange Protocol (DCBx), part of the ETS standard for DCB auto-negotiation
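
                      To make the ETS bandwidth-allocation behavior concrete, here is a minimal sketch assuming three priority groups on a 10GbE port. The group names, weights, and offered loads are invented for illustration; the work-conserving redistribution of unused bandwidth is the property the standard requires.

```python
# Minimal sketch of ETS behavior on a 10GbE port: each priority group gets a
# guaranteed share, and bandwidth a group does not use is redistributed to
# groups that still have queued traffic (ETS is work-conserving). The group
# weights and offered loads below are illustrative assumptions.

def ets_allocate(link_gbps: float, weights: dict[str, float],
                 offered: dict[str, float]) -> dict[str, float]:
    realized = {g: 0.0 for g in weights}
    remaining = link_gbps
    active = set(weights)
    # Iteratively hand each active group its weighted share, capped at its
    # offered load; any unused share flows back into the pool for the others.
    while active and remaining > 1e-9:
        total_w = sum(weights[g] for g in active)
        spare = 0.0
        for g in sorted(active):
            share = remaining * weights[g] / total_w
            take = min(share, offered[g] - realized[g])
            realized[g] += take
            spare += share - take
        remaining = spare
        active = {g for g in active if offered[g] - realized[g] > 1e-9}
    return realized

# The LAN group offers little traffic, so the SAN and HPC groups absorb its
# unused share: LAN gets 1.0 Gbps, SAN ~5.14 Gbps, HPC ~3.86 Gbps.
print(ets_allocate(10.0,
                   weights={"LAN": 0.3, "SAN": 0.4, "HPC": 0.3},
                   offered={"LAN": 1.0, "SAN": 6.0, "HPC": 5.0}))
```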

                      The final versions of the standards specify minimum requirements for compliance, detail the maximum in terms of external
                      requirements, and also describe in some detail the options for implementing internal behavior and the downsides of some
                      lower cost but standards-compliant ways of implementing DCB. It is important to note that these standards are separate
                      from the efforts to solve the Layer 2 multipathing issues, which are not technically necessary to make convergence work. Also,
                      neither these standards nor those around L2 multipathing address a number of other challenges that arise when networks
                      are converged and flattened.



                                                                     Figure 9: PFC, ETS, and QCN







                         Enhancements to Fibre Channel for Converged Data Center Networks—FCoE
                         FCoE—the protocol developed within T11. The FCoE protocol was developed by the T11 Technical Committee—a
                         subgroup of the International Committee for Information Technology Standards (INCITS)—as part of the Fibre Channel
                         Backbone 5 (FC-BB-5) project. The standard was passed over to INCITS for public comment and final ratification in
                         2009, and has since been formally ratified. In 2009, T11 started development work on Fibre Channel Backbone 6 (FC-BB-6),
                         which is intended to address a number of issues not covered in the first standard, and to develop a number of new
                         deployment scenarios.

                         FCoE was designed to allow organizations to move to Ethernet-based storage while, at least in theory, minimizing the cost of
                         change. To the storage world, FCoE is, in many ways, just FC with a new physical media type; many of the tools and services
                         remain the same. To the Ethernet world, FCoE is just another upper level protocol riding over Ethernet.

                         The FC-BB-5 standard clearly defines all of the details involved in mapping FC through an Ethernet layer, whether directly
                         or through simplified L2 connectivity. It lays out both the responsibilities of the FCoE-enabled endpoints and FC fabrics as
                         well as of the Ethernet layer. Finally, it clearly states the additional security mechanisms that are recommended to maintain
                         the level of security that a physically separate SAN traditionally provides. Overall, apart from the scale-up and scale-down
                         aspects, FC-BB-5 defines everything needed to build and support the products and solutions discussed earlier.
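
                         The core of that mapping is simple encapsulation: a native FC frame travels unmodified inside an Ethernet frame with its own EtherType. The sketch below shows the idea; the header layout is condensed and the SOF/EOF code values are illustrative assumptions, so consult FC-BB-5 for the exact bit-level format.

```python
# Simplified sketch of FC-BB-5 encapsulation: a complete FC frame (header,
# payload, CRC) rides inside an Ethernet frame with EtherType 0x8906. The
# FCoE header and trailer carry the SOF/EOF delimiters that framed it on
# native FC links. Layout is condensed and code values are illustrative.
import struct

FCOE_ETHERTYPE = 0x8906  # FCoE per FC-BB-5; FIP uses a separate EtherType

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    assert len(fc_frame) <= 2140  # FC header (24) + max payload (2112) + CRC (4)
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = struct.pack("!B12xB", 0x00, 0x36)  # version/reserved; SOF code
    fcoe_trailer = struct.pack("!B3x", 0x41)         # EOF code; reserved
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A full-size FC frame yields an Ethernet frame well over 1500 bytes, which
# is why FCoE requires baby-jumbo (roughly 2.2 KB) support end to end.
```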

                         While the development of FCoE as an industry standard will bring the deployment of unified data center infrastructures
                         closer to reality, FCoE by itself is not enough to complete the necessary convergence. Many additional enhancements to
                         Ethernet and changes to the way networking products are designed and deployed are required to make it a viable, useful, and
                         pragmatic implementation. Many, though not all, of the additional enhancements are provided by the standards developed
                         through the IEEE DCB committee. In theory, the combination of the DCB and FCoE standards allows for full network
                         convergence. In reality, they only solve the problem for relatively small-scale data centers. Applying these techniques to
                         larger deployments means using the protocols purely for server- and access-layer I/O convergence, through FCoE transit
                         switches (DCB switches with FIP snooping) and FCoE-FC gateways (using N_Port ID Virtualization to eliminate SAN scaling
                         and heterogeneous support issues).

                         Juniper Networks EX4500 Ethernet Switch and QFX3500 Switch both support an FCoE transit switch mode. The QFX3500
                         also supports FCoE-FC gateway mode. These products are industry firsts in many ways:

                         1.	 The EX4500 and QFX3500 are fully standards-based with rich implementations from both a DCB and FC-BB-5
                             perspective.

                         2.	 The EX4500 and QFX3500 are purpose-built FCoE transit switches.

                         3.	 QFX3500 is a purpose-built FCoE-FC gateway which includes fungible combined Ethernet/Fibre Channel ports.

                         4.	 QFX3500 features a single Packet Forwarding Engine (PFE) design.

                         5.	 The EX4500 and QFX3500 switches both include feature-rich L3 capabilities.

                         6.	 QFX3500 supports low latency with cut-through switching.

                         Future Direction for FCoE
                         There are two key initiatives underway within FC-BB-6 that will prove critical to the adoption of FCoE for small and large
                         businesses alike.

                         For smaller businesses, a new FCoE mode has been developed, allowing for a fully functional FCoE deployment without
                         the need for either the traditional FC services stack or FC L3 forwarding. Instead, the FCoE end devices directly discover and
                         attach to each other through a pure L2 Ethernet infrastructure. This can be as simple as a DCB-enabled Ethernet switch,
                         with the addition of FIP snooping for security. It makes FCoE simpler than either iSCSI or NAS, since it no longer needs a
                         complex Fibre Channel (or FCoE) switch, and because the FCoE endpoints have proper discovery mechanisms. This mode
                         of operation is commonly referred to as VN_Node to VN_Node or VN2VN. It can be used by itself for small to medium scale
                         FCoE deployments, or in conjunction with the existing FCoE models for larger deployments to allow them to benefit from
                         local L2 connectivity.
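
                         The following sketch gives a rough feel for the VN2VN idea: with no fabric to assign addresses, each end device probes for and claims its own N_Port_ID over L2 multicast, then logs in to its peers point-to-point. The message flow and the claim scheme here are heavy simplifications of the draft FC-BB-6 behavior, offered only to show how little infrastructure the model requires.

```python
# Loose sketch of FC-BB-6 VN2VN: FCoE endpoints discover each other and claim
# N_Port_IDs themselves over plain L2 multicast, with no FCF, no FC services
# stack, and no domain IDs. This is a heavy simplification of the draft.
import random

class Vn2VnNode:
    def __init__(self, wwpn: str):
        self.wwpn = wwpn
        self.neighbor_ids: set[int] = set()  # IDs heard in other nodes' claims
        self.n_port_id: int | None = None

    def hear_claim(self, claimed_id: int) -> None:
        self.neighbor_ids.add(claimed_id)

    def claim_id(self) -> int:
        # Probe for an unused ID and claim it if no neighbor already holds it
        # (the real protocol multicasts probes and waits for objections).
        candidate = random.randrange(0x000001, 0x00FFFF)
        while candidate in self.neighbor_ids:
            candidate = random.randrange(0x000001, 0x00FFFF)
        self.n_port_id = candidate
        return candidate

# Two directly attached nodes settle on distinct IDs and can then log in to
# each other point-to-point:
a = Vn2VnNode("10:00:00:00:c9:aa:aa:aa")
b = Vn2VnNode("10:00:00:00:c9:bb:bb:bb")
b.hear_claim(a.claim_id())
a.hear_claim(b.claim_id())
assert a.n_port_id != b.n_port_id
```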







                         For larger businesses, a set of approaches is being investigated to remove the practical FC scaling restrictions that currently
                         limit deployment sizes. As this work continues, it is hoped that the standards will evolve not only to solve some of these
                         scaling limitations, but also to more fully address many of the other challenges that arise as a result of blending L2 switching,
                         L3 FC forwarding, and FC services.

                         Juniper fully understands these challenges, which are similar to the challenges of blending L2 Ethernet, L3 IP forwarding, and
                         higher level network services for routing. As part of Juniper’s 3-2-1 data center architecture, we have already demonstrated
                         many of these approaches with Juniper Networks EX Series Ethernet Switches, MX Series 3D Universal Edge Routers, SRX
                         Series Services Gateways, and Juniper Networks Junos® Space.

                         A Brief Note on iSCSI
                         Although not the subject of this white paper, it is important to note that the implementation of DCB and products—such as
                         the QFX3500 and Juniper Networks QFabric™ family of products, along with the latest generation of CNAs and storage—
                         provides many benefits to iSCSI for those deployments where the FC-BB-5 standards prove too limiting.

                         This is of particular interest given that most CNAs and many storage subsystems can now be deployed through different
                         licensing as either FCoE or iSCSI, giving the end user significant protection against the protocol debate.

                         Conclusion
                         Juniper Networks® QFX3500 Switch is the first top-of-rack switch built to solve all of the challenges posed by access-layer
                         convergence. It is the first fully FC-BB-5-enabled gateway capable of easily supporting upstream DCB switches, including
                         third-party embedded blade shelf switches. It works for both rack-mount and blade servers, and for organizations with
                         combined or separate LAN and storage area network (SAN) teams. It is also the first product to leverage a new generation of
                         powerful ASICs.

                         Industry firsts in many ways, Juniper Networks EX4500 Ethernet Switch and QFX3500 Switch both support an FCoE
                         transit switch mode, and the QFX3500 also supports FCoE-FC gateway mode. They are fully standards-based with rich
                         implementations from both a DCB and FC-BB-5 perspective and feature-rich L3 capabilities. The QFX3500 Switch is a
                         purpose-built FCoE-FC gateway which includes fungible combined Ethernet/FC ports, a single PFE design, and low latency
                         cut-through switching.

                         There are a number of very practical server I/O access-layer convergence topologies that can be used as a step along the
                         path to full network convergence. During 2011 and 2012, developments such as LAN on motherboard (LoM), QSFP, 40GbE,
                         and the FCoE Direct Discovery Direct Attach model will further bring Ethernet economics to FCoE convergence efforts.

                         About Juniper Networks
                         Juniper Networks is in the business of network innovation. From devices to data centers, from consumers to cloud providers,
                         Juniper Networks delivers the software, silicon and systems that transform the experience and economics of networking.
                         The company serves customers and partners worldwide. Additional information can be found at www.juniper.net.




Corporate and Sales Headquarters: Juniper Networks, Inc., 1194 North Mathilda Avenue, Sunnyvale, CA 94089 USA. Phone: 888.JUNIPER (888.586.4737) or 408.745.2000. Fax: 408.745.2100. www.juniper.net

APAC Headquarters: Juniper Networks (Hong Kong), 26/F, Cityplaza One, 1111 King’s Road, Taikoo Shing, Hong Kong. Phone: 852.2332.3636. Fax: 852.2574.7803

EMEA Headquarters: Juniper Networks Ireland, Airside Business Park, Swords, County Dublin, Ireland. Phone: 35.31.8903.600. EMEA Sales: 00800.4586.4737. Fax: 35.31.8903.601

To purchase Juniper Networks solutions, please contact your Juniper Networks representative at 1-866-298-6428 or an authorized reseller.

Copyright 2011 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos,
NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other
countries. All other trademarks, service marks, registered marks, or registered service marks are the property of
their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper
Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.

2000422-001-EN         July 2011                       Printed on recycled paper



