Testing throughput at Gigabit speeds

Overview-

cnWave provides consistent, high-bandwidth internet in dense urban environments. It is a 60 GHz, multi-node wireless Software-Defined Network (SDN) that enables high-speed internet connectivity in a variety of environments. cnWave acts as a high-speed backbone network to which other networks connect.

Throughput vs MCS-

The throughput numbers provided above are bidirectional, assuming a 50/50 uplink/downlink split in a mesh network. Support for asymmetric throughput on PTP links is planned for a future release.
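As a quick sanity check on the 50/50 split, an aggregate bidirectional figure can simply be halved to get the expected per-direction rate. The 3890 Mbps aggregate below is illustrative only, chosen to match the Spirent UDP result later in this post (1.95 Gbps up + 1.94 Gbps down):

```shell
# Illustrative numbers only: 3890 Mbps aggregate matches the Spirent
# UDP result later in this post (1.95 Gbps up + 1.94 Gbps down).
aggregate_mbps=3890
per_direction_mbps=$((aggregate_mbps / 2))   # 50/50 uplink/downlink split
echo "Expected per-direction throughput: ${per_direction_mbps} Mbps"
```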

Tools used-

  1. iperf version 3.7 (widely used; supports bidirectional testing)
  2. Speedtest

Traffic Generator-

Spirent

PTP Setup Block diagram-

Setup Requirements-

  • PC: Dell OptiPlex 7060 with 16 GB RAM and an Intel 10G SFP+ NIC, running Ubuntu 18.04 with iperf 3.7 installed
  • 4 × 10G SFP+ Cambium modules with 2 × 5 m optical cables (10G rated)
  • 2 × V5000 devices

Iperf3 examples-

Example for PTP V5000-V5000
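The original example here wasn't reproduced, so as a hedged sketch (the address and flag choices below are placeholders and assumptions, not from the original post), a bidirectional iperf3 run between two PCs behind the V5000s would look roughly like this:

```shell
# On the PC behind the first V5000 (the server side):
iperf3 -s

# On the PC behind the second V5000 (the client side).
# 192.168.1.10 is a placeholder for the server PC's address.
# -t 30     run for 30 seconds
# -P 8      eight parallel streams, to keep a 10G NIC busy
# --bidir   measure both directions at once (needs iperf3 >= 3.7)
iperf3 -c 192.168.1.10 -t 30 -P 8 --bidir
```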

Speed test-


Spirent Traffic Example-

UDP bidirectional: uplink = 1.95 Gbps, downlink = 1.94 Gbps


I have a V3000-to-V3000 PtP link spanning 661 meters. I recently upgraded to software version 1.2.2 and wanted to test the new Channel Bonding feature. The POP is running the onboard E2E controller.

I wanted to use the devices to test, and I wish there was a “link capacity test” like the PMP450 product line. I came across this post and discovered the next best thing for what I need.

I used PuTTY to SSH into both sides of the link, with the same username and password as the web interface.

On the POP side I ran:

service iperf -s

On the CN side I ran:

service iperf -c [ipv6 of PoP] -i1 -fm -w128k -P 16 --bidir

Then retrieved the output on the CN:

system iperf show

Some of the output is shown below (it looks like it runs several iterations).

I had to SSH into the POP a second time to shut down the iperf service, using the commands:

system iperf kill
system iperf clear

I don’t know how much of what I discovered is “proper” so to speak, but thought it might help someone else who wanted to test without downloading software onto multiple computers, or else just wanted a unit-to-unit test.

In fact, if Cambium would take a suggestion, it would be great to incorporate a test like this into the web UI. It’s clear that the units themselves can support it using the CLI.

Output:


[ 6][TX-C] 7.00-8.00 sec 13.2 MBytes 110 Mbits/sec 0 279 KBytes
[ 8][TX-C] 7.00-8.00 sec 13.2 MBytes 111 Mbits/sec 0 273 KBytes
[ 10][TX-C] 7.00-8.00 sec 13.1 MBytes 110 Mbits/sec 0 267 KBytes
[ 12][TX-C] 7.00-8.00 sec 12.9 MBytes 108 Mbits/sec 0 269 KBytes
[ 14][TX-C] 7.00-8.00 sec 13.4 MBytes 112 Mbits/sec 0 267 KBytes
[ 16][TX-C] 7.00-8.00 sec 13.4 MBytes 113 Mbits/sec 0 271 KBytes
[ 18][TX-C] 7.00-8.00 sec 13.0 MBytes 109 Mbits/sec 0 262 KBytes
[ 20][TX-C] 7.00-8.00 sec 13.2 MBytes 110 Mbits/sec 0 277 KBytes
[ 22][TX-C] 7.00-8.00 sec 13.1 MBytes 110 Mbits/sec 0 280 KBytes
[ 24][TX-C] 7.00-8.00 sec 12.9 MBytes 108 Mbits/sec 0 267 KBytes
[ 26][TX-C] 7.00-8.00 sec 12.9 MBytes 108 Mbits/sec 0 273 KBytes
[ 28][TX-C] 7.00-8.00 sec 13.3 MBytes 111 Mbits/sec 0 260 KBytes
[ 30][TX-C] 7.00-8.00 sec 13.2 MBytes 110 Mbits/sec 0 277 KBytes
[ 32][TX-C] 7.00-8.00 sec 13.1 MBytes 110 Mbits/sec 0 264 KBytes
[ 34][TX-C] 7.00-8.00 sec 12.9 MBytes 108 Mbits/sec 0 262 KBytes
[ 36][TX-C] 7.00-8.00 sec 12.9 MBytes 108 Mbits/sec 0 258 KBytes
[SUM][TX-C] 7.00-8.00 sec 210 MBytes 1759 Mbits/sec 0
[ 38][RX-C] 7.00-8.00 sec 15.7 MBytes 131 Mbits/sec
[ 40][RX-C] 7.00-8.00 sec 15.4 MBytes 129 Mbits/sec
[ 42][RX-C] 7.00-8.00 sec 15.5 MBytes 130 Mbits/sec
[ 44][RX-C] 7.00-8.00 sec 15.5 MBytes 130 Mbits/sec
[ 46][RX-C] 7.00-8.00 sec 15.6 MBytes 131 Mbits/sec
[ 48][RX-C] 7.00-8.00 sec 15.5 MBytes 130 Mbits/sec
[ 50][RX-C] 7.00-8.00 sec 15.3 MBytes 128 Mbits/sec
[ 52][RX-C] 7.00-8.00 sec 15.9 MBytes 133 Mbits/sec
[ 54][RX-C] 7.00-8.00 sec 15.4 MBytes 129 Mbits/sec
[ 56][RX-C] 7.00-8.00 sec 15.7 MBytes 132 Mbits/sec
[ 58][RX-C] 7.00-8.00 sec 15.6 MBytes 131 Mbits/sec
[ 60][RX-C] 7.00-8.00 sec 15.2 MBytes 128 Mbits/sec
[ 62][RX-C] 7.00-8.00 sec 15.4 MBytes 130 Mbits/sec
[ 64][RX-C] 7.00-8.00 sec 15.5 MBytes 130 Mbits/sec
[ 66][RX-C] 7.00-8.00 sec 15.5 MBytes 130 Mbits/sec
[ 68][RX-C] 7.00-8.00 sec 15.4 MBytes 129 Mbits/sec
[SUM][RX-C] 7.00-8.00 sec 248 MBytes 2081 Mbits/sec
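For anyone capturing the `system iperf show` output to a file, the aggregate rates can be pulled out of the [SUM] lines with awk. The file name and the two sample lines below are just a stand-in for the capture above:

```shell
# Recreate a two-line stand-in for the captured iperf output above.
cat > iperf_output.txt <<'EOF'
[SUM][TX-C] 7.00-8.00 sec 210 MBytes 1759 Mbits/sec 0
[SUM][RX-C] 7.00-8.00 sec 248 MBytes 2081 Mbits/sec
EOF

# Field 6 of each [SUM] line is the rate in Mbits/sec.
tx=$(awk '/\[SUM\]\[TX-C\]/ {print $6}' iperf_output.txt)
rx=$(awk '/\[SUM\]\[RX-C\]/ {print $6}' iperf_output.txt)
total=$((tx + rx))
echo "TX ${tx} Mbps + RX ${rx} Mbps = ${total} Mbps aggregate"
```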



I’m trying out 1.2.2.1-beta4 and I want to thank you for including this feature in the GUI. This has made bandwidth testing much much easier!

Thank you!


Hey, how are you enabling channel bonding on the V5000? I have two V5000s and I don't see any option for it.


In addition to the throughput test on the unit, please try the combination of 1.2.2.1 and cnMaestro X 4.1.0; you get access not only to throughput testing but to many other features as well.

Plus I hope you are using the latest 1.2.2.1 official release now.

BTW, what throughput did you get on the V3000 with Channel Bonding switched on?

We are currently on the free cnMaestro, but I did upgrade to the 1.2.2.1 release.

Our iPerf speeds at 661 meters distance, with bonding, PtP, with onboard E2E controller:
[SUM] 0.00-10.00 sec 3.11 GBytes 2.67 Gbits/sec 0 sender
[SUM] 0.00-10.01 sec 3.10 GBytes 2.66 Gbits/sec receiver


That’s brilliant; the limit is 2.7 Gbps, so you’re hitting the sweet spot. Thanks for the feedback!

That should happen, please. It’d be so useful for diagnosing problems to be able to run iperf from IP to IP in the web UI.