AR750S-EXT Gigabit Ethernet Network Throughput - Posting Pointer?

This is a pointer to a question I posted in the “Hardware” category; I wanted to post it here as well, just in case different people follow different threads…

Post:

Thanks!

Not sure you read my original posting…

Have you actually run data through the AR750S (LAN <-> WAN) at sustained gigabit speeds (or the 900 Mbps you indicate)?

I indicated that if I remove the AR750S from the path and just connect directly to my edge device, the computers can all reach data-transfer speeds greater than 900 Mbit/s.

The bottleneck seems to be related to the AR750S, for some unknown reason.

My testing using flent with eight simultaneous streams plus pings (tcp_8down, tcp_8up, and bidirectional RRUL) indicates that with NAT and the “standard” OpenWrt firewall rules, running through the CPU (flow offloading not enabled), throughput is between 370 and 420 Mbps (TCP payload, not on-wire bit rate). This is likely a CPU-processing limitation and is consistent with other single-core, MIPS-based processors (the venerable Archer C7 v2, QCA9558 at 720 MHz, achieves 310-380 Mbps under the same test conditions). Use of flow offloading may increase this; however, the flow-offloading feature is currently “broken” on OpenWrt master, apparently due to upstream Linux changes. Without NAT, the rates are higher. Adding SQM with piece_of_cake.qos significantly increases the CPU load, dropping throughput to the 160-210 Mbps range.
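For reference, a sketch of the kind of flent runs described above (the server hostname `netperf.example.com` is a placeholder; point it at your own netperf/flent server and adjust the test length):

```
# 8 simultaneous download streams plus pings, 60-second run, plot saved to PNG
flent tcp_8down -p all_scaled -l 60 -H netperf.example.com -t ar750s-down -o ar750s-down.png

# 8 simultaneous upload streams plus pings
flent tcp_8up -p all_scaled -l 60 -H netperf.example.com -t ar750s-up -o ar750s-up.png

# bidirectional RRUL (realtime response under load)
flent rrul -p all_scaled -l 60 -H netperf.example.com -t ar750s-rrul -o ar750s-rrul.png
```

And a minimal /etc/config/sqm sketch of the piece_of_cake.qos setup referred to above (the interface name and shaper rates are assumptions; set them for your own WAN interface and link speed):

```
config queue 'wan'
	option enabled '1'
	option interface 'eth0.2'         # assumed WAN interface, adjust to yours
	option download '420000'          # kbit/s, placeholder value
	option upload '420000'            # kbit/s, placeholder value
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
```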

To be very clear, this is not a GL.iNet-implementation limitation, but a general one of MIPS-based SoCs. While gigabit throughput can be achieved with ASICs (such as in enterprise-grade L3 switches), if a general-purpose CPU is involved, a multi-core x86_64/AMD64 processor is likely required. Some have suggested that certain ARM-based Marvell SoCs can handle gigabit rates, though I have not tested any. For symmetric gigabit, x86_64/AMD64 is almost certainly required.

Edit: An IPQ40xx can handle gigabit in the downstream direction without SQM (or 600-800 Mbps upstream), but I don’t consider that sufficient for what it sounds like you’re expecting from GigE connectivity, assumed symmetric.

Edit: This is on OpenWrt master. If the GL.iNet firmware, or whatever firmware you’re running, offers “Enable software offload”, you might try that. The ar71xx/ath79 switches do not support the hardware-offload “sub-option”.
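If your firmware exposes it only over SSH, software flow offloading can also be enabled from the command line via UCI; a minimal sketch for an 18.06-style firewall config (the hardware variant is left commented out since, as noted above, it isn’t supported on the ar71xx/ath79 switches):

```
uci set firewall.@defaults[0].flow_offloading='1'
# Hardware-offload sub-option, not supported on ar71xx/ath79:
# uci set firewall.@defaults[0].flow_offloading_hw='1'
uci commit firewall
/etc/init.d/firewall restart
```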


Using the firmware provided by GL, we have added software offload to the firmware; the actual LAN → WAN rate is about 700 Mbps, which depends on the network environment.


Thank you all for the suggestions and comments!

One of my problems in not attaining higher speeds was an intermittently bad Ethernet cable…

I am now attaining just a little below 600 Mbit/s throughput, which seems to be the most I can expect out of the AR750S!

Thanks, again!


Following up on this, I enabled flow offloading both on OpenWrt master (with a pending patch to accommodate the upstream Linux changes after v18) and on the GL.iNet firmware (which supports flow offloading, as I understand it). Under stressful conditions (8 TCP streams plus pings), the GL-AR750S was able to route and NAT ~870 Mbps downstream, or ~650 Mbps either upstream or as an aggregate of downstream and upstream. These numbers are consistent with those given just above, which I am guessing were for a single stream.