Issues with traffic shaping on GL-SF1200

Hello everyone, I bought the GL-SF1200 router about three weeks ago.

Although it ships with a fairly old OpenWrt version, the device is nice. I got it to gain more control over my connection, since my Italian ISP forces me to use their poor modem, on which I can’t even set the DNS.

Anyway, I chose the SF1200 also for traffic shaping, to distribute and limit bandwidth between the various devices connected to the Internet. Unfortunately, at this stage the router is only good if your downstream is 100 Mbps or less. If you have more, as in my case, the bandwidth is capped at around 100 Mbps and you can’t get any more throughput.

In the original GL.iNet interface, the user is warned that this feature is CPU intensive, but it’s still odd that you can only use 100 Mbps. Since it’s a Gigabit router, with an FTTH 1 Gbps connection you could only use 10% of your advertised speed (that’s not my case; I have a 200 Mbps VDSL line).

I tried to dig deeper and found some interesting things. By default the router has both software and hardware flow offloading enabled, which gives the device much higher throughput: I can saturate my downstream connection, and maybe 1 Gbps is achievable too. I never tried the latter since I don’t have an FTTH connection, but I assume it’s possible, because thanks to flow offloading the CPU is barely involved during high-speed transfers.
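For reference, on stock OpenWrt these flags live in the firewall defaults section; assuming GL’s firmware keeps the standard /etc/config/firewall layout, something like this shows whether they are set:

```shell
# Show the flow-offloading options in the firewall defaults section
# (1 = enabled, empty or 0 = disabled)
uci -q get firewall.@defaults[0].flow_offloading
uci -q get firewall.@defaults[0].flow_offloading_hw
```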

Anyway, if I disable flow offloading in the firewall service, a single core hits 100% CPU usage as soon as traffic gets anywhere near 100 Mbps, and the device can’t go any faster. It also seems poorly optimized, because during a transfer only one core goes to 100% while the other three stay mostly idle. I tried irqbalance as well, but got no better results.
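In case anyone wants to reproduce this, offloading can also be toggled from the CLI; a sketch, again assuming the standard OpenWrt firewall config:

```shell
# Disable both software and hardware flow offloading, then reload the firewall
uci set firewall.@defaults[0].flow_offloading='0'
uci set firewall.@defaults[0].flow_offloading_hw='0'
uci commit firewall
/etc/init.d/firewall restart
```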

Experimenting with custom tc qdiscs didn’t change the situation much, so even installing the SQM package from LuCI is useless. If you want to shape traffic on this device, you have to cap your connection at around 100 Mbit/s. Anything above that is wasted, simply because the CPU can’t handle the traffic.
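To be concrete about what I mean by capping, this is roughly the SQM queue I ended up with, set just below the ~100 Mbps the CPU can shape. A sketch of /etc/config/sqm; the interface name is a placeholder for your actual WAN interface:

```
config queue 'wan'
	option enabled '1'
	option interface 'eth0.2'        # placeholder: use your real WAN interface
	option download '95000'          # kbit/s, just under the ~100 Mbps CPU limit
	option upload '20000'            # kbit/s
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
```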

I also raised the CPU frequency, which defaults to 800 MHz and can be set to a maximum of 1 GHz. It didn’t change much; the device simply can’t handle high traffic. So if you want more throughput you can enable software/hardware flow offloading, but then you can’t do traffic shaping, because all transfers are delegated to the hardware, with all the bufferbloat that usually appears under heavy bandwidth usage.

Has anyone experienced the same behavior? Is there a workaround for this issue?

How did you modify the CPU rate? Did you have any heat issues?

# scaling_setspeed only takes effect under the userspace governor
for cpu in /sys/devices/system/cpu/cpu[0-3]; do
    echo userspace > "$cpu/cpufreq/scaling_governor"
    echo 1000000 > "$cpu/cpufreq/scaling_setspeed"
done

No issues so far. Since I have flow offloading enabled, the CPU isn’t used for transfers.

I think raising the clock is mostly pointless anyway, because 800 MHz should be enough for anything not traffic related. With flow offloading disabled the CPU does handle the traffic, but even at the maximum clock and with no shaping limits, it can’t reach the maximum download speed of my connection.


The GL-SF1200 is advertised by GL.iNet to have “SF19A2890, Dual-Core 1GHz”:

Yet it seems to be clocked at only 800 MHz. Also, did you find 4 cores?

I do not work for and I do not have formal association with GL.iNet

There’s a bit of a misconception around that. The manufacturer’s datasheet reports “Quad-processing MIPS32® interAptiv with Two Physical Cores and Four Virtual Processing Elements”.

So there are two physical cores, but the CPUs reported by /proc/cpuinfo and htop are four:

Talking about the CPU, I don’t really know what the maximum load of this device is. In theory a dual-core processor is fully loaded at 2.0, but since this one reports 4 CPUs, should its maximum be 4.0? As far as I know, the Linux load average counts runnable tasks, so the “fully loaded” mark should equal the number of logical CPUs, which here would be 4.0. Can someone confirm?

Another weird thing about this processor is that the system load never goes below 1.0. It can go above, but never below. If the maximum load is 2.0, that means the system is always at least half loaded. That’s not normal: some OpenWrt systems sit around 0.1–0.3 when idle.

And the screenshot above was taken with most of the GL services disabled, since they’re useless to me (I left only the gl_ipv6 service enabled, because my ISP provides IPv6 connectivity). In that state the system always hovers around 1.0 and never drops below it. With all GL services enabled, the load is usually around 1.4–1.7, which is almost fully loaded if the maximum is 2.0. It’s weird.
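The comparison is easy to check directly on the device; a minimal sketch that works on any Linux box:

```shell
# Compare the 1-minute load average with the number of logical CPUs:
# the system is "fully loaded" when load is roughly equal to the CPU count
load=$(cut -d ' ' -f1 /proc/loadavg)
cpus=$(grep -c ^processor /proc/cpuinfo)
echo "load ${load} across ${cpus} logical CPUs"
```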