Description
I am trying to use MaxiNet to emulate large topologies (1000-2000 nodes) with TCLinks for latency emulation; however, the results I have gotten so far with the OVSKernelSwitch are quite far from what I expect when measuring with ping.
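For context, latency is attached per link roughly like this (a minimal two-host sketch, not my actual script; the names and the single 83ms delay are illustrative, and I am assuming MaxiNet applies TCLink options the same way plain Mininet does):

from mininet.topo import Topo

class DelayTopo(Topo):
    def build(self):
        h1 = self.addHost('h1')
        h2 = self.addHost('h2')
        s1 = self.addSwitch('s1')
        # TCLink delay is applied per direction; one 83ms hop is crossed
        # once each way per ping, which gives the expected 166ms RTT.
        self.addLink(h1, s1, delay='83ms')
        self.addLink(s1, h2)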
Here is an excerpt from a 600-second ping run on a 500-node topology where the RTT should be 166 ms:
From 10.1.0.170 icmp_seq=165 Destination Host Unreachable
From 10.1.0.170 icmp_seq=166 Destination Host Unreachable
64 bytes from 10.1.0.54: icmp_seq=141 ttl=64 time=26714 ms
From 10.1.0.170 icmp_seq=196 Destination Host Unreachable
From 10.1.0.170 icmp_seq=197 Destination Host Unreachable
From 10.1.0.170 icmp_seq=198 Destination Host Unreachable
64 bytes from 10.1.0.54: icmp_seq=180 ttl=64 time=31360 ms
64 bytes from 10.1.0.54: icmp_seq=181 ttl=64 time=30352 ms
64 bytes from 10.1.0.54: icmp_seq=184 ttl=64 time=29393 ms
64 bytes from 10.1.0.54: icmp_seq=187 ttl=64 time=28595 ms
64 bytes from 10.1.0.54: icmp_seq=188 ttl=64 time=29569 ms
64 bytes from 10.1.0.54: icmp_seq=217 ttl=64 time=369 ms
64 bytes from 10.1.0.54: icmp_seq=218 ttl=64 time=166 ms
64 bytes from 10.1.0.54: icmp_seq=189 ttl=64 time=29568 ms
The same behaviour occurs throughout the entire run.
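(The ping itself is unremarkable, something along the lines of ping -w 600 10.1.0.54, run from the host at 10.1.0.170.)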
Smaller topologies (10-20 nodes) seem to work fine.
The MaxiNet paper recommends using the "OpenFlow 1.0 userspace reference implementation" when emulating large numbers of switches, which I assume is the UserSwitch from Mininet. However, when I change
net = maxinet.Experiment(cluster, topo, switch=OVSKernelSwitch)
n343 = net.addSwitch('n343', cls=OVSKernelSwitch, dpid=Tools.makeDPID(343), wid=ids[3])
to
net = maxinet.Experiment(cluster, topo, switch=UserSwitch)
n343 = net.addSwitch('n343', cls=UserSwitch, dpid=Tools.makeDPID(343), wid=ids[3])
in my topology the script crashes with the following error:
Traceback (most recent call last):
File "maxinet_500nodes_m1.py", line 1382, in <module>
myNetwork()
File "maxinet_500nodes_m1.py", line 28, in myNetwork
n343 = net.addSwitch('n343', cls=UserSwitch, dpid=Tools.makeDPID(343), wid=ids[3])
File "/usr/local/lib/python2.7/dist-packages/MaxiNet-1.2-py2.7.egg/MaxiNet/Frontend/maxinet.py", line 1259, in addSwitch
self.get_worker(name).addSwitch(name, cls, **params)
File "/usr/local/lib/python2.7/dist-packages/MaxiNet-1.2-py2.7.egg/MaxiNet/Frontend/maxinet.py", line 411, in addSwitch
return self.mininet.addSwitch(name, cls, **params)
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 169, in __call__
return self.__send(self.__name, args, kwargs)
File "/usr/local/lib/python2.7/dist-packages/Pyro4/core.py", line 388, in _pyroInvoke
raise data
AssertionError
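For what it is worth, I import both switch classes directly from mininet.node, so the import itself should not be the problem:
from mininet.node import OVSKernelSwitch, UserSwitch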
The samples in MaxiNet/MaxiNet/Frontend/examples/ work as expected.
I installed MaxiNet through the install.sh script on four machines running Ubuntu 16.04.4 LTS.
I had to downgrade Pyro with: sudo pip install Pyro4==4.30
I am using the POX controller with the forwarding.l2_learning component.
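(It is started in the usual way, roughly: ./pox.py forwarding.l2_learning)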
My questions are:
What is the best switch type to use for large topologies like mine?
If UserSwitch is the right one, then what am I doing wrong?
Thank you for your help.
(I have omitted the full topology script because it is quite large, but I can post it if requested.)