Hello everyone,
we have a strange network problem on a setup where we use a V3000/V5000 link in point-to-point mode.
The following setup:
Windows 2022 Server (“backup1”) <-> 10Gbit Switch <-> V3000 <-----> V5000 <-> 10Gbit Switch <-> Windows 2016 Server (hyperv5)
Now when we transfer data between these two servers we get the full speed of the Cambium IPv4 Layer 2 bridge (1.4-1.6 Gbit/s) in one direction, but when we transfer data in the other direction the speed drops to about 80-120 Mbit/s per transfer. If I start 10 transfers in parallel, each transfer still gets this ~100 Mbit/s, so in total we are back at ~1 Gbit/s.
The same occurs with iperf measurements: one direction reaches the expected speed, the other is strongly limited, and with parallel streams the total speed goes up again.
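For anyone wanting to reproduce the single-stream vs. parallel-stream comparison described above, a minimal iperf3 recipe would look like this (the host placeholder is mine, substitute your own addresses):

```shell
# On the receiving server (e.g. hyperv5), start an iperf3 server:
iperf3 --server --port 5201

# On the sending server (e.g. backup1), test a single TCP stream:
iperf3 --client <hyperv5-ip> --port 5201 --time 10

# Test the opposite direction without swapping roles (-R = reverse mode):
iperf3 --client <hyperv5-ip> --port 5201 --time 10 -R

# Repeat with 10 parallel streams (-P) to see whether the aggregate recovers:
iperf3 --client <hyperv5-ip> --port 5201 --time 10 -P 10
```

If a single stream is slow in one direction but 10 parallel streams restore the aggregate, that usually points at a per-connection limit (window size, latency, loss) rather than at raw link capacity.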
I am not 100% sure whether the problem is really with the Cambium bridge or somewhere else. However, I can only reproduce it when the data runs over the Cambium bridge. Locally on either side, over the 10 Gbit switch, I can't see this massive difference between RX and TX.
The Cambium bridge is on MCS 12 for both RX and TX, so everything should be fine on the radio side …
Maybe one of you has an idea about this error pattern?
Many thanks, best regards from Tyrol
Andreas
Hi Andreas,
To rule things out, can you run iperf between your Windows 2022 server and the V3000 (which I believe you have configured as the PoP) and see whether the 10 Gbit switch is contributing to this behavior?
Example iperf command on your 2022 server:
iperf3 --server -p 6004
Example iperf command in the CLISH of the V3000 (replace <server-ip> with the address of your 2022 server):
service iperf -c <server-ip> -t 10 -p 6004
More references: Link Test using IPERF in CLISH
I am about to make some observations related to your pain point on 1.2.2.1-b2.
Shall keep you posted.
Hi @Prasanna_TM,
Thanks for your answer.
The V5000 is the PoP, and iperf looks good in both directions. The link is in use right now, so it is not at full speed.
CLISH>service iperf -c 10.2.0.27 -p 5201
Connecting to host 10.2.0.27, port 5201
[ 6] local 10.2.0.180 port 56922 connected to 10.2.0.27 port 5201
[ ID] Interval Transfer Bitrate Retr Cwnd
[ 6] 0.00-1.00 sec 111 MBytes 929 Mbits/sec 4 419 KBytes
[ 6] 1.00-2.00 sec 109 MBytes 918 Mbits/sec 0 419 KBytes
[ 6] 2.00-3.00 sec 110 MBytes 924 Mbits/sec 0 419 KBytes
[ 6] 3.00-4.00 sec 110 MBytes 924 Mbits/sec 0 419 KBytes
[ 6] 4.00-5.00 sec 104 MBytes 871 Mbits/sec 0 419 KBytes
[ 6] 5.00-6.00 sec 101 MBytes 851 Mbits/sec 0 419 KBytes
[ 6] 6.00-7.00 sec 108 MBytes 908 Mbits/sec 0 419 KBytes
[ 6] 7.00-8.00 sec 108 MBytes 905 Mbits/sec 0 419 KBytes
[ 6] 8.00-9.00 sec 106 MBytes 889 Mbits/sec 3 419 KBytes
[ 6] 9.00-10.00 sec 109 MBytes 917 Mbits/sec 0 419 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate Retr
[ 6] 0.00-10.00 sec 1.05 GBytes 904 Mbits/sec 7 sender
[ 6] 0.00-10.00 sec 1.05 GBytes 902 Mbits/sec receiver
iperf Done.
CLISH>service iperf -c 10.2.0.27 -p 5201 -R
Connecting to host 10.2.0.27, port 5201
Reverse mode, remote host 10.2.0.27 is sending
[ 6] local 10.2.0.180 port 56978 connected to 10.2.0.27 port 5201
[ ID] Interval Transfer Bitrate
[ 6] 0.00-1.00 sec 98.0 MBytes 822 Mbits/sec
[ 6] 1.00-2.00 sec 101 MBytes 844 Mbits/sec
[ 6] 2.00-3.00 sec 99.8 MBytes 838 Mbits/sec
[ 6] 3.00-4.00 sec 100 MBytes 839 Mbits/sec
[ 6] 4.00-5.00 sec 99.8 MBytes 837 Mbits/sec
[ 6] 5.00-6.00 sec 97.5 MBytes 818 Mbits/sec
[ 6] 6.00-7.00 sec 97.1 MBytes 814 Mbits/sec
[ 6] 7.00-8.00 sec 100 MBytes 842 Mbits/sec
[ 6] 8.00-9.00 sec 99.1 MBytes 831 Mbits/sec
[ 6] 9.00-10.00 sec 96.7 MBytes 810 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bitrate
[ 6] 0.00-10.00 sec 989 MBytes 830 Mbits/sec sender
[ 6] 0.00-10.00 sec 989 MBytes 829 Mbits/sec receiver
iperf Done.
The really weird thing is that the slow single-session speed only occurs between some devices. When testing with other devices the speed is fine.
I am also researching in other directions, as I'm not really sure cnWave is the problem; maybe Windows Server 2022 could be the source of the issue.
But I haven't found the solution to this really weird problem yet! :-/
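Since Windows Server 2022 is still a suspect, one low-risk thing worth checking (my suggestion, not something confirmed in this thread) is the TCP receive-window autotuning setting on both servers, because "single stream slow, many parallel streams fine" is a classic symptom of a small effective per-connection receive window:

```shell
:: Run in an elevated command prompt on each Windows server.
:: Show the global TCP settings, including "Receive Window Auto-Tuning Level":
netsh interface tcp show global

:: If autotuning is reported as disabled or highlyrestricted,
:: restoring the default is one thing to try:
netsh interface tcp set global autotuninglevel=normal
```

This only inspects and restores the Windows default; it does not touch the Cambium side, so it is easy to rule in or out.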
Thank you
Andy