The ONIX Controller Model (a distributed control platform for large-scale production networks)
The Onix controller model first relates to the idealized SDN framework in that it provides a variety of northbound RESTful interfaces.
These can be used to program, interrogate, and configure the controller's numerous functions, such as basic controller functionality, flow and forwarding entry programming, and topology.
Open Source OpenFlow controller capabilities (against an idealized controller framework)
Onix capabilities (against an idealized controller framework)
NOX/POX
NOX was developed by Nicira and donated to the research community, becoming open source in 2008.
This made it one of the first open-source OpenFlow controllers.
NOX provides a C++ API to OpenFlow (OF v1.0) and an asynchronous, event-based
programming model.
NOX is both a primordial controller and a component-based framework for developing
SDN applications.
It provides support modules specific to OpenFlow but can and has been extended.
The NOX core provides helper methods and APIs for interacting with OpenFlow
switches, including a connection handler and event engine.
Components of a NOX-based network: OpenFlow (OF) switches, a server running a NOX controller process, and a database containing the network view.
NOX Architecture
Advantages of POX over NOX
POX has a Pythonic OpenFlow interface (see the sketch after this list).
POX has reusable sample components for path selection, topology discovery,
and so on.
POX specifically targets Linux, Mac OS, and Windows.
POX supports the same GUI and visualization tools as NOX.
POX performs well compared to NOX applications written in Python.
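To illustrate the Pythonic interface mentioned in the first item, the following is a minimal sketch of a POX component that simply floods every received packet (hub behavior). It assumes POX's standard component convention (a launch() entry point); the component and handler names are illustrative.

# Minimal POX component sketch: flood every packet (hub behavior).
from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

def _handle_PacketIn(event):
    # Wrap the unparsed packet in a packet_out and flood it on all ports
    msg = of.ofp_packet_out()
    msg.data = event.ofp
    msg.actions.append(of.ofp_action_output(port=of.OFPP_FLOOD))
    event.connection.send(msg)

def launch():
    # Subscribe to PacketIn events from every connected switch
    core.openflow.addListenerByName("PacketIn", _handle_PacketIn)
    log.info("hub component running")

Placed on POX's module path, such a component would typically be started with ./pox.py <module_name>.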
Trema
Trema is an OpenFlow programming framework for developing an OpenFlow
controller.
The Trema OpenFlow controller is an extensible set of Ruby scripts.
Developers can customize or extend the base controller functionality (a class object) by defining their own controller subclass and adding message handlers to it.
In addition, the core modules provide a message bus (an IPC mechanism via Messenger) that allows the applications/user modules to communicate with each other and with the core modules.
Other core modules include timer and logging libraries, a packet parser library, and
hash-table and linked-list structure libraries.
Trema core/user module relationships
Trema architecture and API interfaces
Ryu
Ryu is a component-based, open source framework implemented entirely in Python.
The Ryu messaging service does support components developed in other languages.
Components include OpenFlow wire protocol support, event management, messaging, in-memory state management, application management, infrastructure services, and a series of reusable libraries.
Additionally, applications like Snort, a layer 2 switch, GRE tunnel abstractions, VRRP,
as well as services (e.g., topology and statistics) are available.
At the API layer, Ryu has an OpenStack Quantum plug-in that supports both GRE-based overlay and VLAN configurations.
Ryu also supports a REST interface to its OpenFlow operations.
Ryu architecture, applications and APIs
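Because Ryu applications are ordinary Python classes, the controller-subclass pattern can be shown directly. The sketch below follows Ryu's RyuApp/event-handler style for OpenFlow 1.3 and simply floods each packet; the class name HubApp is illustrative.

# Minimal Ryu application sketch (OpenFlow 1.3): flood every packet.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class HubApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Flood the packet out of all ports except the ingress port
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = None
        if msg.buffer_id == ofp.OFP_NO_BUFFER:
            data = msg.data
        out = parser.OFPPacketOut(datapath=dp, buffer_id=msg.buffer_id,
                                  in_port=msg.match['in_port'],
                                  actions=actions, data=data)
        dp.send_msg(out)

Such an application would be started with ryu-manager <file>.py against OpenFlow 1.3 switches.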
Big Switch Networks/Floodlight
Floodlight is a very popular SDN controller contribution from Big Switch Networks to
the open source community.
Floodlight is based on Beacon from Stanford University.
Floodlight is an Apache-licensed, Java-based OpenFlow controller (non-OSGI).
The architecture of Floodlight, as well as its API interface, is shared with Big Switch Networks' commercial enterprise offering, Big Network Controller.
The Floodlight core architecture is modular, with components including topology
management, device management, path computation, infrastructure for web access,
counter store, and a generalized storage abstraction for state storage.
These components are treated as loadable services with interfaces that export state.
The controller itself presents a set of extensible REST APIs as well as an event
notification system.
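Because these REST APIs are language-agnostic, the controller can be driven from outside the Java process. The sketch below queries the switch list over REST from Python; the endpoint path (/wm/core/controller/switches/json) and default port 8080 are the commonly documented Floodlight defaults and should be verified against the version in use.

# Sketch: querying Floodlight's northbound REST API from Python.
import json
import urllib.request

CONTROLLER = "http://127.0.0.1:8080"  # assumed default REST port

def list_switches():
    # Returns the switches currently connected to the controller
    url = CONTROLLER + "/wm/core/controller/switches/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    for sw in list_switches():
        print(sw.get("switchDPID", sw))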
Floodlight Architecture
The core module, called the Floodlight Provider, handles I/O from switches and translates OpenFlow messages into Floodlight events, creating an event-driven, asynchronous application framework.
Floodlight incorporates a threading model that allows modules to share threads with other
modules.
Event handling within this structure happens within the publishing module’s thread
context.
Synchronized locks protect shared data.
Component dependencies are resolved at load-time via configuration.
The topology manager uses LLDP (as do most OpenFlow switches) for the discovery of both OpenFlow and non-OF endpoints.
Juniper Networks Contrail Systems Controller
Supports L3VPN overlays.
L3VPN utilizes virtual routing and forwarding (VRF).
VRF is a technology included in Internet Protocol (IP) network routers that
enables multiple instances of a routing table to exist in a virtual router and work
simultaneously.
Juniper Networks Virtual Network System SDN controller system
The Control Node uses BGP to distribute network state, presenting a standardized protocol for horizontal scalability and the potential of multivendor interoperability.
The communication/messaging between the Control Node and the vRouter Agent is intended to be an open standard using XMPP (Extensible Messaging and Presence Protocol).
Analytics: Provides the query interface and storage interface for statistics/counter
reporting.
Configuration: Provides the compiler that uses the high-level data model to convert API requests for network actions into a low-level data model for implementation via the control code.
Control: The Control Node uses BGP to distribute network state, presenting a standardized protocol for horizontal scalability and the potential of multivendor interoperability. This also makes it particularly useful for interoperability with existing BGP networks.
Interaction between controller and Juniper Networks vRouter
The vRouter Agent converts XMPP control messages into VRF instantiations representing the tenants and programs the appropriate FIB entries for these entities in the hypervisor-resident vRouter forwarding plane.
Multi-tenancy in Juniper Networks vRouter
Contrail VNS capabilities (against an idealized controller framework)
Controller Placement Problem
Controller Placement
A key design choice of the SDN control plane is placement of the controller(s),
which impacts a wide range of network issues ranging from latency to
resiliency, from energy efficiency to load balancing, and so on.
Important Questions
1. How many controllers to place in an SDN network?
2. Where to locate them in the network?
3. How to map switches to controllers?
Significance of the Controller Placement Problem in SDN
Due to the intrinsic decoupling of the data and control planes in SDN, an SDN
switch queries the SDN controller (using PACKET-IN messages) for appropriate
forwarding actions for every new flow that it encounters.
Consequently, the flow setup time in SDN depends on the switch-controller
communication latency.
Thus, the placement of the SDN controller(s) is a critical design consideration, as
it directly affects the control latency, which in turn affects a wide range of
network issues such as routing policy updates, load balancing, scalability, energy
efficiency, etc.
Average Latency
Switch-controller communication latency is due to (i) round-trip propagation delay between a switch and its assigned controller, and (ii) processing delay at the controller.
The controller locations determine the switch-controller propagation latency,
while the switch-controller mappings affect the controller load, and hence its
response time.
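A small sketch of the average-latency placement metric: for a candidate set of controller locations, each switch attaches to its nearest controller, and the placement minimizing the mean propagation delay is chosen by exhaustive search (a k-median-style formulation). The distance matrix below is a toy example, not data from any particular topology; processing delay at the controller is ignored.

# Sketch: brute-force average-latency controller placement.
from itertools import combinations

def average_latency(placement, dist):
    # Mean latency when every switch attaches to its nearest controller
    n = len(dist)
    return sum(min(dist[s][c] for c in placement) for s in range(n)) / n

def best_placement(dist, k):
    # Exhaustively search all k-subsets of candidate sites
    sites = range(len(dist))
    return min(combinations(sites, k), key=lambda p: average_latency(p, dist))

# Toy example: 4 switches, pairwise propagation delays in milliseconds.
dist = [
    [0, 5, 9, 12],
    [5, 0, 4, 8],
    [9, 4, 0, 6],
    [12, 8, 6, 0],
]
for k in (1, 2):
    p = best_placement(dist, k)
    print(k, p, round(average_latency(p, dist), 2))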
Placement #2 experiences 30.22% lower latency than placement #1.
Resilience (Against Controller Failure)
In the event of a controller failure, all switches are mapped to the surviving controller.
Thus, if controller C1 fails, the network is managed by controller C2.
We find that upon failure of C1, the maximum switch-controller latency experienced in
placement #2 is 18.19% lower than that in placement #1.
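The same machinery can evaluate resilience: remove each controller in turn, remap every switch to its nearest surviving controller, and record the worst latency seen. A self-contained sketch with an assumed toy distance matrix follows.

# Sketch: worst-case switch-controller latency after any single controller failure.
def worst_case_after_failure(placement, dist):
    worst = 0.0
    for failed in placement:
        survivors = [c for c in placement if c != failed]
        # Latency each switch sees after remapping to the nearest survivor
        latencies = [min(dist[s][c] for c in survivors) for s in range(len(dist))]
        worst = max(worst, max(latencies))
    return worst

dist = [
    [0, 5, 9, 12],
    [5, 0, 4, 8],
    [9, 4, 0, 6],
    [12, 8, 6, 0],
]
print(worst_case_after_failure((1, 3), dist))  # 12.0: losing the controller at node 1 leaves the farthest switch 12 ms from node 3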
Quality of Service (QoS)
If the network has a performance requirement that the switch-controller latency should be bounded, then it is necessary to optimize both the location and the number of controllers in the network.
For example, if the upper bound for the propagation latency experienced by a request is 25 ms, we find that the placement and number of controllers in Figure 2(c) cannot meet this requirement.
We observe that two controllers are unable to meet this QoS requirement, and at least three controllers are required. Figure 2(f) shows a possible placement with three controllers that can meet the QoS requirements.
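The latency-bound question can be answered in the same brute-force way: increase the number of controllers until some placement keeps every switch within the bound (25 ms in the example above). The sketch below does this over an assumed toy distance matrix; candidate controller sites are taken to be the switch locations themselves.

# Sketch: smallest number of controllers that meets a propagation-latency bound.
from itertools import combinations

def meets_bound(placement, dist, bound_ms):
    # Every switch must reach its nearest controller within the bound
    return all(min(dist[s][c] for c in placement) <= bound_ms
               for s in range(len(dist)))

def min_controllers(dist, bound_ms):
    sites = range(len(dist))
    for k in range(1, len(dist) + 1):
        for placement in combinations(sites, k):
            if meets_bound(placement, dist, bound_ms):
                return k, placement
    return None

# Toy example: propagation delays (ms) among 5 candidate sites/switches.
dist = [
    [0, 10, 30, 40, 55],
    [10, 0, 20, 35, 50],
    [30, 20, 0, 15, 30],
    [40, 35, 15, 0, 20],
    [55, 50, 30, 20, 0],
]
print(min_controllers(dist, bound_ms=25))  # e.g. (2, (0, 3)) for this toy matrix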
References
1. OpenFlow, Software Defined Networking (SDN) and Network Function Virtualization (NFV), Prof. Raj Jain, Washington University in Saint Louis.
2. SDN: Software Defined Networks, Thomas D. Nadeau and Ken Gray, O'Reilly, 2013.
3. A Survey on Controller Placement in SDN, T. Das, V. Sridharan, and M. Gurusamy, IEEE Communications Surveys & Tutorials, 22(1), 472-503, 2019.