Throughput research and output

Ok guys,

I’m still having problems with this throughput thing, but I think I’m getting somewhere.

Over the RF network, between AP-SM or BH-BH, we can’t get the throughput Motorola claim. I have managed to improve the throughput, but it is very spiky and erratic. As much as Motorola would like me to believe it is down to my end points, the same end points have no problem over a wired network or a Wi-Fi network.

Doing research on this whole subject and delving deeper into TCP (RWIN, latency, MTUs), this is what I saw.

I developed a simple client/server programme that sends UDP packets; I choose the rate (packets/sec) and the size of each packet (10/100/500/1000/1518/3000 bytes). The throughput is rock solid, no spikes, a completely flat line, with 100% of the traffic getting through.
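
In case it helps anyone reproduce this, here is a rough sketch of the kind of sender I used (not the exact programme; the address, rate and packet size are placeholders you would set for your own test):

# UDP blaster sketch: send fixed-size packets at a fixed rate.
# The receiver just loops on recvfrom() and totals bytes per second.
import socket
import time

HOST, PORT = "192.168.0.2", 5001   # receiver address (placeholder)
PKT_SIZE = 1518                    # bytes per packet (I tested 10 to 3000)
RATE = 500                         # packets per second

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
payload = b"x" * PKT_SIZE
interval = 1.0 / RATE

while True:
    start = time.time()
    sock.sendto(payload, (HOST, PORT))
    # crude pacing: sleep off whatever is left of this packet's time slice
    time.sleep(max(0.0, interval - (time.time() - start)))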

Hence the problem lies with TCP and latency. I have read up on TCP spoofing (used over VSAT to get around the ~600 ms delay) and on the SNOOP protocol (a latency-aware protocol that works similarly to VSAT spoofing). The conclusion is that the throughput is there, but you can’t really make use of it because of the way TCP works; unless you start putting in additional devices to deal with the latency, or use a non-TCP transport protocol, you can’t use the throughput.
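
To put a rough number on why TCP hits a ceiling while UDP does not, this is the back-of-the-envelope sum I am working from (the receive window and RTT figures are my assumptions, not Canopy specs):

# TCP can only have one receive window of unacknowledged data in flight per
# round trip, so throughput tops out at roughly RWIN / RTT no matter how big
# the pipe is.
rwin_bytes = 17520      # a common default Windows receive window (assumption)
rtt = 0.025             # ~25 ms round trip over the RF link (assumption)

ceiling_bps = rwin_bytes * 8 / rtt
print(round(ceiling_bps / 1e6, 1), "Mbit/s ceiling")   # ~5.6 Mbit/s

# UDP has no window and no ACK clock, which is why the UDP test runs flat
# at whatever rate I ask for.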

What is confusing me is that any designer of a wireless-based network product would be aware of this and would build some form of spoofing into their products, and I am sure Motorola would have done that, but as I can’t speak to anyone senior, the thought of ringing up their front-line soldiers and asking them has me shaking in my boots. So unless I am misconfiguring the Canopy equipment, I need to look at some additional accelerator device/mechanism. I am looking at Packeteer’s SkyX, but I think it requires making changes at the client side.

Anyone else got any input or words of wisdom on the subject?

VJ,
Yes, you are correct in that this is a TCP inefficiency and not a Canopy inefficiency, and we have been aware of this for a long time. The fix to compensate for this inefficiency gets rather complicated as you begin to work with packets being received out of order, in addition to many other things. You can adjust window sizes on the client side to help reduce this issue. Another major item is not to rate-limit the uplink to a small value, because the TCP inefficiency has to do with the TCP acknowledgements going back up the uplink. If there is any saturation on the uplink, you will experience a slowed downlink. We have seen many customers rate-limit the uplink to a small value, not understanding that with TCP applications the downlink will suffer.
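
As a rough illustration of that point (example figures only, not Canopy specifications):

# Even a modest TCP download needs some uplink capacity just for ACKs.
downlink_bps = 6_000_000    # example downlink rate
mss = 1460                  # typical TCP payload per segment
ack_frame = 64              # minimum Ethernet frame carrying an ACK

segments_per_sec = downlink_bps / (mss * 8)       # ~514 segments/s
acks_per_sec = segments_per_sec / 2               # delayed ACK: one per two segments
ack_bps = acks_per_sec * ack_frame * 8            # ~132 kbit/s of uplink

print(round(acks_per_sec), "ACKs/s,", round(ack_bps / 1000), "kbit/s of uplink")
# If the uplink is rate limited below this, or saturated by other traffic,
# the ACKs queue up and the downlink throttles itself.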

This is still under discussion within engineering; we are aware of it, and it will be addressed.

Regards

I have done a lot of tests between legacy APs and SMs using various types of bandwidth measurement tools. I agree with your conclusion: you can realize the full bandwidth (or close to it) that Motorola claims when your transport-layer protocol is UDP. When you are working with TCP apps, which is almost everything, your links are really going to “feel” the 3-way handshake, the checksums and the acknowledgments on a link with 20-25 ms of latency.

It has been over a year since I studied TCP in depth, but knowledge of transport-layer protocols helps when distinguishing between bandwidth and throughput, especially in a Canopy system.

Thanks Canopy support, I have been ripping my network apart for the last week thinking it was a config issue on my side. I have now dedicated my time to understanding the low-level Canopy transfer in depth.

I have outstanding questions for Canopy support and anyone else:

page 59 of the Canopy user guide states:
1 slot = 64 bytes
1 frame = 33 data slots
400 frames / second

therefore 64 x 33 x 400 x 8 (for bits) = 6.7 Mbit/s

page 61:
optimum size = 1518 bytes
max uplink from 1 AP to 1 SM = 300 pps
max downlink from 1 AP to 1 SM = 1800 pps

In the 300 and 1800 pps, are we talking Ethernet packets or Canopy packets?
Either way it does not add up:
Ethernet packets: 1518 x 8 (for bits) x 1800 = 21.9 Mbit/s, well over 20 Mbit/s
Canopy packets: 64 x 8 x 1800 = 921.6 kbit/s, roughly 900 kbit/s

Also, if the frame has 33 slots at 64 bytes, that’s 2112 bytes in each frame. How is a 1518-byte frame the most optimum? 1518/64 = 23.7, say 24 slots, so what happens to the other 9 slots?
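
This is my working model of the slot maths, which may well be wrong; it assumes a packet occupies whole 64-byte slots and that slots are not shared between packets:

# If a packet has to occupy whole 64-byte slots, how many slots does it take
# and how many of the 33 data slots in a frame are left unusable?
import math

SLOT_BYTES = 64
SLOTS_PER_FRAME = 33

for pkt in (128, 512, 1024, 1518):
    slots = math.ceil(pkt / SLOT_BYTES)
    pkts_per_frame = SLOTS_PER_FRAME // slots
    leftover = SLOTS_PER_FRAME - pkts_per_frame * slots
    print(pkt, "bytes ->", slots, "slots,", pkts_per_frame, "packet(s)/frame,",
          leftover, "slots left over")

# Under this model a 1518-byte packet needs 24 slots, only one fits per frame
# and 9 slots sit idle, which is exactly what I am asking about.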

Also, in the 7.2.9 link test you can set the size of the packet; using different packet sizes we were able to calculate the pps rate and found that we got over 300 pps in both uplink and downlink.
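
For reference, this is the simple relationship I assumed when working back from the link test throughput to a pps figure (the numbers below are made up for illustration, not from an actual test):

# pps = measured throughput / bits per packet (my assumption for how the link
# test figures relate to packet rate)
def pps(throughput_bps, packet_bytes):
    return throughput_bps / (packet_bytes * 8)

# e.g. a hypothetical 1.4 Mbit/s uplink result with 512-byte test packets:
print(round(pps(1_400_000, 512)))   # ~342 pps, already over the quoted 300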

I am not trying to punch holes in the product, only attempting to get a better understanding so I can work around it.

Using Advantage reduces latency to 7 ms; how does that affect performance? Can I use an Advantage AP with a classic SM and still get benefits? Can I have a mixture of both SMs provided I have an Advantage AP?

Any insight from anyone would help, or any ideas of where I should look; we are now looking into developing a UDP-based bandwidth test :-). Anyone worked with SNOOP or SACK (selective ACKs)?

Ash

VJ, you should read the release notes for 7.2 Issue 2. Under hardware scheduling you can use legacy SMs with Advantage APs, enable hardware scheduling, and receive the benefits of increased throughput and reduced latency. It will also tell you how to use the frame calculator to configure colocated Advantage and legacy APs.

I’ve got 2 Advantage APs and an SM on order, so I will read up on how it is supposed to work and then test to see how it actually works.

Anyone using Canopy for last mile with VSAT as their backbone?

We get the following:

directly into the backbone – fine, no speed issues
one SM to a second SM over the RF – OK, up to 1.2 MB sustained
over RF and VSAT – turtle speed… and sometimes it’s OK-ish

VJ, here are some real-world numbers for you about page 59.

I ran 3 tests, 5.7 AP to SM.

Test 1

Downlink data rate 75%
data slots down=24
data slots up=7
u ack=3
d ack=3
ctl slots=3

calculated
down with 24 slots=4,915,200
up with 7 slots=1,433,600

actual test
down=4,872,192
up=1,054,208
throughput=5,926,400

not too bad, but still ~400k missing on the up.

here’s where it starts getting bad

test 2

downlink data rate=50%
down slots=16
up slots=15
u ack=3
d ack=3
ctl slots=3

calculated
down with 16 slots=3,276,800
up with 15 slots=3,072,000

actual test
down=3,250,176
up=1,797,632
throughput=5,047,808

down looks good, but lost 1,274,368 on the up

test 3

downlink data rate=25%
down slots=8
up slots=22
u ack=3
d ack=3
ctl slots=3

calculated
down with 8 slots=1,638,400
up with 22 slots=4,505,600

actual test
down=1,628,160
up=2,231,808
throughput=3,859,968

once again down is close, but up is way off.

I did run the same tests changing burst and sustained rates in both the AP and SM and did not notice any change. This was 7.2.9, software scheduling.
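
For anyone who wants to check my sums, the “calculated” figures above are just slots x 64 bytes x 400 frames/s x 8 bits; here they are side by side with what I measured:

# Calculated slot throughput vs my measured link test results (from the three
# tests above); the last column is the uplink shortfall.
SLOT_BYTES, FRAMES_PER_SEC = 64, 400

def slot_bps(slots):
    return slots * SLOT_BYTES * FRAMES_PER_SEC * 8

tests = [
    # (down slots, up slots, measured down, measured up)
    (24,  7, 4_872_192, 1_054_208),   # test 1, 75% downlink
    (16, 15, 3_250_176, 1_797_632),   # test 2, 50% downlink
    ( 8, 22, 1_628_160, 2_231_808),   # test 3, 25% downlink
]

for down, up, m_down, m_up in tests:
    print("down", slot_bps(down), "vs", m_down,
          "| up", slot_bps(up), "vs", m_up,
          "| up shortfall", slot_bps(up) - m_up)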

Hi attitude,

Can you give me details of the exact environment?

What data did you transfer,
what protocol, ftp/http/ms-d
UDP/TCP
was it SM to SM or SM to AP,
how many SMs were registered to the AP at the time,
how far was the SM from the AP?

What measurement tool did you use?

tx