Learning POX: Network virtualization

The application name is aggregator.py.

Overview

Network virtualization is a paradigm in which the details of the physical network topology are hidden from the user behind some intermediate entity. This entity handles events from the physical switches and translates commands received from the user application.

POX has frameworks for both sides of OpenFlow: the controller side, which is used to control switches, and the switch side, which takes commands from controllers. The aggregator acts as a switch, allowing other controllers to connect to it and send it OpenFlow commands. Underneath, however, it implements this by controlling other OpenFlow switches.

It aggregates the ports and flow tables of all the underlying switches and presents them to its own controller as if they were all part of one big switch. When the controller sends a flow table entry, the aggregator translates it and installs it on the underlying switches. When the controller requests flow statistics, the aggregator collects them from all the switches and responds with a single combined statistics reply message.
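
For the statistics path, a minimal sketch of the combining step could look like the following. This is not the aggregator's actual code: the function and argument names are made up for illustration, the entries are assumed to be POX ofp_flow_stats objects, and a real implementation would also rewrite port numbers in matches and actions as described in the Ports section below.

def combine_flow_stats (per_switch_stats, cookie_to_upstream):
    """
    per_switch_stats   : dict mapping switch dpid -> list of flow-stats
                         entries (POX ofp_flow_stats objects)
    cookie_to_upstream : dict mapping cookies used on the underlying
                         switches back to the cookies the controller knows
    Returns one flat list of entries for a single combined stats reply.
    """
    combined = []
    for dpid, entries in per_switch_stats.items():
        for entry in entries:
            # Translate the per-switch cookie back to the controller's cookie
            entry.cookie = cookie_to_upstream.get(entry.cookie, entry.cookie)
            combined.append(entry)
    # A real aggregator might also merge entries that belong to the same
    # controller flow, summing their packet_count and byte_count fields.
    return combined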

Implementation details

Underlying switches are interconnected using flow-based GRE tunnels, which rely on the Open vSwitch Nicira (NXM) extensions.

Example topology

POX Aggregator component

All the flows installed by the controller are translated and installed on all the switches hidden underneath the aggregator. An example of redirect action translation is shown below.

Redirect action translation


POX Aggregator flows

During the initialization stage, three flow tables are created and prepopulated with service rules on each switch controlled by the aggregator. They are:

  • Table 0, responsible for multiplexing packets coming from local or tunnel ports to the following two tables
  • Table 1, named “Remote”, responsible for properly redirecting traffic coming from a tunnel port
  • Table 2, named “Openflow”, the table into which translated flow entries are installed

The following table lists the service rules that are pre-installed in each table on each switch; a sketch of how a redirect (output) action maps onto these rules follows the table.

Preinstalled flows

Table      Match                        Action
Table 0    Port = 3 (tunnel port)       Resubmit to Table 1
Table 0    Any                          Resubmit to Table 2
Table 1    Tunnel id = 0                Redirect to controller
Table 1    Tunnel id = 1                Redirect to port 1
Table 1    Tunnel id = 2                Redirect to port 2
Table 1    Tunnel id = 0x7e (MALL)      Redirect to all
Table 1    Tunnel id = 0x7f (MFLOOD)    Flood
Table 2    Any                          Redirect to controller
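
As an illustration (not the actual aggregator code), the following plain-Python sketch shows the decision behind translating a redirect (output) action in terms of these rules. The tunnel port number (3) and the special tunnel ids are taken from the table above; the function name and its return format are made up for the example.

TUNNEL_PORT = 3       # tunnel port number used in the example topology
TUN_CONTROLLER = 0    # tunnel id: deliver to the controller
MALL = 0x7e           # tunnel id: output to all ports
MFLOOD = 0x7f         # tunnel id: flood

def translate_output (src_dpid, dst_dpid, dst_local_port):
    """Describe the actions switch src_dpid needs in order to deliver a
    packet to port dst_local_port of switch dst_dpid."""
    if src_dpid == dst_dpid:
        # The destination port is local: a plain output action is enough
        return ["output:%d" % dst_local_port]
    # The destination is behind another switch: tag the packet with the
    # destination port number as the tunnel id and push it into the GRE
    # tunnel; Table 1 ("Remote") on the receiving switch maps the tunnel id
    # back to the local port.
    return ["set tun_id=%d" % dst_local_port,
            "set tun_dst=<IP of switch %d>" % dst_dpid,   # remote_ip=flow
            "output:%d" % TUNNEL_PORT]

For example, translate_output(1, 2, 1) yields the actions a flow on switch 1 needs so that the packet comes out of port 1 of switch 2. Flooding works in the same spirit: the packet is flooded locally and also sent over the tunnel with tunnel id MFLOOD (0x7f), so that the remote switches flood it out of their own local ports.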

Ports

The aggregator hides all the underlying switches and is seen as one big switch with many ports, so it has to translate switch port numbers. A simple formula is used:

Aggregator port number = switch port number + MAX_PORT_NUMBER * (switch dpid - 1)

Where MAX_PORT_NUMBER is a constant that defines the maximum number of ports a single switch can own. In our module it is equal to 16. The table below shows some example mappings; a short sketch of the translation follows it.

Local port   Switch dpid   Aggregator port
1            1             1
2            1             2
1            2             17
2            2             18
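
A minimal sketch of this translation in both directions is shown below (MAX_PORT_NUMBER as above; the helper names are made up for illustration):

MAX_PORT_NUMBER = 16   # maximum number of ports a single switch may own

def to_aggregator_port (dpid, local_port):
    # e.g. (dpid=2, local port=1) -> 17, as in the table above
    return local_port + MAX_PORT_NUMBER * (dpid - 1)

def to_local_port (agg_port):
    # Inverse mapping: aggregator port -> (dpid, local port)
    dpid = (agg_port - 1) // MAX_PORT_NUMBER + 1
    local_port = agg_port - MAX_PORT_NUMBER * (dpid - 1)
    return dpid, local_port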

Using cookies

The cookie field is used to map flow table entries between the aggregator and the underlying switches.
For this purpose, a cookie is generated and assigned to each flow table entry installed on the aggregator and on the controlled switches. Two mapping dictionaries are maintained to store the correspondence between cookies.

The Nicira Extended Match (NXM) support in POX makes it possible to use the cookie as a key when modifying and deleting flow entries on the switches.
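
A hedged sketch of this bookkeeping is shown below; the dictionary names and the counter are illustrative assumptions, and only the idea of generating a cookie per flow entry and keeping two mappings comes from the description above.

import itertools

_next_cookie = itertools.count(1)
controller_to_switch = {}   # cookie used by the controller -> generated cookie
switch_to_controller = {}   # generated cookie -> cookie used by the controller

def register_flow (controller_cookie):
    """Generate a cookie for a newly installed flow entry and remember the
    correspondence in both directions."""
    generated = next(_next_cookie)
    controller_to_switch[controller_cookie] = generated
    switch_to_controller[generated] = controller_cookie
    return generated

When the controller later modifies or deletes an entry, the aggregator can look up the generated cookie in controller_to_switch and use it as the key on every underlying switch.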

Running the aggregator

  1. Start Mininet using “sudo python agg_net.py”
  2. Add GRE tunnels to the switches using “sudo ./aggregator.sh 2”
  3. Start the main POX controller: “./pox.py log.level --DEBUG openflow.of_01 --port=7744 forwarding.l2_pairs”
  4. Start the aggregator: “./pox.py log.level --DEBUG edge.aggregator --ips=172.16.0.1,172.16.0.2”

Note that OVS 2.2.90 and Ubuntu 13.10 were used for experimentation.

Mininet script (agg_net.py)

#!/usr/bin/python

from mininet.net import Mininet
from mininet.node import Controller, RemoteController, Node
from mininet.cli import CLI
from mininet.log import setLogLevel, info
from mininet.link import Link, Intf

def aggNet():

    NODE1_IP='172.16.0.1'
    NODE2_IP='172.16.0.2'
    CONTROLLER_IP='127.0.0.1'

    net = Mininet( topo=None,
                   build=False)

    net.addController( 'c0',
                      controller=RemoteController,
                      ip=CONTROLLER_IP,
                      port=6633)

    h1 = net.addHost( 'h1', ip='10.0.0.1' )
    h2 = net.addHost( 'h2', ip='10.0.0.2' )
    h3 = net.addHost( 'h3', ip='10.0.0.3' )
    h4 = net.addHost( 'h4', ip='10.0.0.4' )
    s1 = net.addSwitch( 's1' )
    s2 = net.addSwitch( 's2' )

    net.addLink( h1, s1 )
    net.addLink( h2, s1 )
    net.addLink( h3, s2 )
    net.addLink( h4, s2 )

    net.start()
    CLI( net )
    net.stop()

if __name__ == '__main__':
    setLogLevel( 'info' )
    aggNet()

Tunnel setup script (aggregator.sh)

#!/bin/bash
# Usage: ./aggregator.sh <number of switches>
num=$1
echo "Adding tunnels for $num switches"

# Recreate the dummy interfaces that serve as local GRE tunnel endpoints
rmmod dummy 2> /dev/null
modprobe dummy numdummies=$((num+1))

for x in $(seq 1 $num); do
  # Give the dummy interface the tunnel endpoint address of switch s$x
  ifconfig dummy$x 172.16.0.$x
  # (Re)create the flow-based GRE tunnel port on switch s$x
  ovs-vsctl del-port s$x tun$x 2> /dev/null
  ovs-vsctl add-port s$x tun$x -- set Interface tun$x type=gre \
	options:remote_ip=flow options:local_ip=172.16.0.$x options:key=flow
done
