2. OpenStack Networking
This diagram depicts a sample OpenStack Networking deployment, with a dedicated OpenStack Networking node performing L3 routing and DHCP, and running the advanced services FWaaS and LBaaS.
Two Compute nodes run the Open vSwitch agent (openvswitch-agent) and have two physical network cards each: one for tenant traffic and another for management connectivity.
The OpenStack Networking node has a third network card dedicated to provider traffic.
3. Open vSwitch
Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch
similar to the Linux software bridge.
OVS provides switching services to virtualized networks with support for
industry standard NetFlow, OpenFlow, and sFlow. Open vSwitch is also able
to integrate with physical switches using layer 2 features, such as STP, LACP,
and 802.1Q VLAN tagging.
Tunneling with VXLAN and GRE is supported with Open vSwitch version 1.11.0-1.el6 or later.
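As a rough sketch of that tunneling support (the bridge name br-tun and the remote IP addresses are placeholders, not values from this deployment), tunnel ports of both types can be created with ovs-vsctl:
ovs-vsctl add-br br-tun
ovs-vsctl add-port br-tun vxlan0 -- set interface vxlan0 type=vxlan options:remote_ip=192.0.2.20
ovs-vsctl add-port br-tun gre0 -- set interface gre0 type=gre options:remote_ip=192.0.2.21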
4. Modular Layer 2 (ML2)
ML2 is the OpenStack Networking core plug-in introduced in OpenStack’s Havana release.
Superseding the previous model of monolithic plug-ins, ML2’s modular design enables the concurrent operation of mixed network technologies.
The monolithic Open vSwitch and Linux Bridge plug-ins have been deprecated and removed; their functionality has instead been reimplemented as ML2 mechanism drivers.
5. ML2 network types
Multiple network segment types can be operated concurrently. In addition, these network
segments can interconnect using ML2’s support for multi-segmented networks.
Ports are automatically bound to the segment with connectivity; it is not necessary to bind them
to a specific segment. Depending on the mechanism driver, ML2 supports the following network
segment types:
1. flat
2. GRE
3. local
4. VLAN
5. VXLAN
6. The various type drivers are enabled in the [ml2] section of the ml2_conf.ini file.
Tenant networks
Tenant networks are created by users for connectivity within projects. They are
fully isolated by default and are not shared with other projects.
OpenStack Networking supports a range of tenant network types:
Flat - All instances reside on the same network, which can also be shared with
the hosts. No VLAN tagging or other network segregation takes place.
VLAN - OpenStack Networking allows users to create multiple provider or tenant networks. Instances on these networks can also communicate with dedicated servers, firewalls, load balancers, and other network infrastructure on the same layer 2 VLAN.
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
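For context, a fuller [ml2] block in ml2_conf.ini might look like the sketch below; the tenant_network_types and mechanism_drivers lines are illustrative assumptions rather than values taken from this text.
[ml2]
type_drivers = local,flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch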
7. VXLAN and GRE tunnels - VXLAN and GRE use network overlays to support
private communication between instances. An OpenStack Networking
router is required to enable traffic to traverse outside of the GRE or VXLAN
tenant network.
A router is also required to connect directly-connected tenant networks
with external networks, including the Internet; the router provides the ability
to connect to instances directly from an external network using floating IP
addresses.
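A hedged example with the legacy neutron CLI (the router, subnet, and external network names are placeholders): the router attaches the tenant subnet, gets a gateway on the external network, and a floating IP is allocated from that network.
neutron router-create tenant-router
neutron router-interface-add tenant-router tenant-subnet
neutron router-gateway-set tenant-router public
neutron floatingip-create public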
8. Configure controller nodes
Edit /etc/neutron/plugin.ini (a symbolic link to /etc/neutron/plugins/ml2/ml2_conf.ini).
Add flat to the existing list of type drivers and set flat_networks to *:
type_drivers = vxlan,flat
flat_networks = *
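A minimal sketch of that edit; note that flat_networks is normally set in the [ml2_type_flat] section of the same file:
[ml2]
type_drivers = vxlan,flat

[ml2_type_flat]
flat_networks = *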
Create an external network as a flat network and associate it with the
configured physical_network.
9. Create a subnet using the neutron subnet-create command.
Restart the neutron-server service to apply the changes.
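A sketch of those steps with the legacy neutron CLI; the network name public, the physical network physnet1, and the addresses are placeholder values:
neutron net-create public --router:external --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create public 203.0.113.0/24 --name public_subnet --disable-dhcp --gateway 203.0.113.1 --allocation-pool start=203.0.113.10,end=203.0.113.50
systemctl restart neutron-server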
10. Configure the Network and Compute nodes
1. Create an external network bridge (br-ex) and add an associated port (eth1) to it.
Create the external bridge in /etc/sysconfig/network-scripts/ifcfg-br-ex.
In /etc/sysconfig/network-scripts/ifcfg-eth1, configure eth1 to connect to br-ex (both files are sketched after step 3).
Reboot the node or restart the network service for the changes to take effect.
2. Configure physical networks in /etc/neutron/plugins/ml2/openvswitch_agent.ini and map bridges to the physical network.
3. Restart the neutron-openvswitch-agent service on both the network and compute nodes for the changes to take effect.
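A combined sketch of the files touched in steps 1 through 3; the IP addressing, the physnet1 name, and the eth1 device are assumptions for illustration:
# /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
bridge_mappings = physnet1:br-ex

# apply the changes
systemctl restart network
systemctl restart neutron-openvswitch-agent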
12. Standard OVS is built out of three main components:
ovs-vswitchd – a user-space daemon that implements the switch logic
kernel module (fast path) – processes received frames based on a lookup table
ovsdb-server – a database server that ovs-vswitchd queries to obtain its configuration; external clients can talk to ovsdb-server using the OVSDB protocol
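A quick way to see these components in practice is with the standard OVS tools; the br-ex bridge name is carried over from the earlier example:
ovs-vsctl show              # reads the configuration held by ovsdb-server
ovs-ofctl dump-flows br-ex  # shows the OpenFlow tables handled by ovs-vswitchd
ovs-dpctl dump-flows        # shows the kernel fast-path flow cache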
13. When a frame is received, the fast path (kernel space) uses match fields
from the frame header to determine the flow table entry and the set of
actions to execute.
If the frame does not match any entry in the lookup table, it is sent to the user-space daemon (ovs-vswitchd), which requires more CPU processing.
The user-space daemon then determines how to handle frames of this type and sets the right entries in the fast-path lookup tables.
14. OVS has two kinds of ports:
Outbound ports, which are connected to the physical NICs on the host using kernel device drivers.
Inbound ports, which are connected to VMs. The VM guest operating system (OS) is presented with vNICs using the well-known VirtIO paravirtualized network driver.
15. PCI Passthrough
Through Intel’s VT-d extension (IOMMU for AMD) it is possible to present PCI devices on the host system to the virtualized guest OS. This is supported by KVM (Kernel-based Virtual Machine).
Using this technique it is possible to give a guest VM exclusive access to a NIC; for all practical purposes, the VM thinks the NIC is directly connected to it.
PCI passthrough suffers from one major shortcoming: a single interface, eth0 on one of the VNFs, has complete access to and ownership of the physical NIC.
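A hedged sketch of the host-side preparation on Red Hat Enterprise Linux; the PCI address 0000:03:00.0 and the vendor/device ID 8086 10fb are placeholders for the NIC being passed through:
grubby --update-kernel=ALL --args="intel_iommu=on iommu=pt"   # enable the IOMMU, then reboot
modprobe vfio-pci
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo 8086 10fb > /sys/bus/pci/drivers/vfio-pci/new_id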
16. Data Plane Development Kit (DPDK)
The Data Plane Development Kit (DPDK) consists of a set of libraries and user-space drivers for fast packet processing.
It is designed to run mostly in user space, enabling applications to perform their own packet processing operations directly from/to the NIC.
The DPDK libraries provide only minimal packet operations within the application, but they enable receiving and sending packets with a minimum number of CPU cycles.
DPDK does not provide a networking stack; instead, it helps bypass the kernel network stack in order to deliver high performance.
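A minimal sketch of preparing a host NIC for a DPDK application; the hugepage count and PCI address are placeholder assumptions:
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
dpdk-devbind.py --status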
17. DPDK-accelerated Open vSwitch (OVS-DPDK)
Open vSwitch can be bundled with DPDK for better performance, resulting in a DPDK-accelerated OVS (OVS+DPDK).
At a high level, the idea is to replace the standard OVS kernel datapath with a DPDK-based datapath, creating a user-space vSwitch on the host that uses DPDK internally for its packet forwarding.
The nice thing about this architecture is that it is mostly transparent to users, as the basic OVS features and the interfaces it exposes (such as OpenFlow, OVSDB, and the command line) remain mostly the same.
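A sketch of switching OVS over to the DPDK datapath, using the syntax introduced in OVS 2.7; the bridge name, port name, and PCI address are placeholders:
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl add-br br-phy -- set bridge br-phy datapath_type=netdev
ovs-vsctl add-port br-phy dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:03:00.0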
18. DPDK with Red Hat OpenStack Platform
Generally, we see two main use cases for DPDK with Red Hat and Red Hat OpenStack Platform:
DPDK-enabled applications, or VNFs, written on top of Red Hat Enterprise Linux as a guest operating system. Here we are talking about network functions that take advantage of DPDK, rather than the standard kernel networking stack, for enhanced performance.
DPDK-accelerated Open vSwitch, running within Red Hat OpenStack Platform compute nodes (the hypervisors). Here it is all about boosting the performance of OVS and allowing for faster connectivity between VNFs.