~1Gbit WIFI Performance Limitation on Spitz AX (GL-X3000)?

Hello GL.iNet Team and Community,

First off, I'm very impressed with the features and flexibility of my Spitz AX (GL-X3000). The combination of a modern OpenWrt base with the polished GL.iNet UI is fantastic.

I've spent considerable time optimizing the local RF environment to achieve a near-perfect Wi-Fi connection. However, after conducting a series of in-depth performance benchmarks, I've encountered a hard throughput limit that I would like to discuss and understand on a technical level.

My Setup & Link Quality:

I'm connecting a Windows PC (with an Intel AX210 Wi-Fi card) to the Spitz AX. The Wi-Fi 6 (AX) connection itself is stable and reports truly excellent metrics in LuCI:

  • Signal: -28 dBm

  • PHY Link Rate: Consistently 1921 Mbps (TX/RX) on a 160MHz channel.

To eliminate this specific client as a variable, I've also tested with a modern Android phone right next to the router, achieving a signal of -16 dBm and a reported link rate of 2402 Mbps (TX/RX).

The Core Issue: iPerf3 throughput is capped far below the link rate

When running an iperf3 server directly on the Spitz AX (“iperf3 -s”) to measure raw Wi-Fi throughput with the router itself as the endpoint, the performance is consistently capped (the full test setup is sketched after the results below).

  • Test 1: Router to Client (TX / Download benchmark)

    • Command: iperf3.exe -c <router-ip> -P 16 -R

    • Result: Capped at a stable ~850-900 Mbps.

  • Test 2: Client to Router (RX / Upload benchmark)

    • Command: iperf3.exe -c <router-ip> -P 16

    • Result: Significantly lower, capped at a stable ~550-600 Mbps.

  • Confirmation: The same limits are observed by running iPerf3 on the Android phone, which also never exceeds ~950 Mbps despite its 2402 Mbps link rate.
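
For anyone who wants to reproduce these numbers, the full setup looks roughly like this (replace <router-ip> with the router's LAN address):

    # On the Spitz AX (via SSH):
    iperf3 -s

    # On the Windows client:
    iperf3.exe -c <router-ip> -P 16 -R    # Test 1: router -> client (download)
    iperf3.exe -c <router-ip> -P 16       # Test 2: client -> router (upload)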

Observation & Hypothesis:
Despite a flawless multi-gigabit PHY link rate, the router's own CPU processing appears to be the bottleneck. The ~950 Mbps ceiling strongly suggests a net throughput limit equivalent to a 1GbE interface, leading to the hypothesis that the data path to the CPU is architecturally limited.

Exhaustive Troubleshooting & Optimization Performed:

To ensure this is not a configuration issue, we performed a deep dive into the system's software configuration (a consolidated sketch of how these settings were applied follows the list below).

  1. Verified Hardware Links:

    • ethtool eth0 confirmed the CPU port supports 2500baseT/Full.

    • iwinfo rax0 assoclist confirmed the excellent multi-gigabit Wi-Fi link rates.

  2. Enabled Flow Offloading:

    • Both flow_offloading and flow_offloading_hw were enabled in /etc/config/firewall. While this primarily benefits routed traffic, it was enabled to ensure an optimal baseline configuration.
  3. Addressed Single-Core Bottleneck (IRQ Affinity):

    • cat /proc/interrupts revealed that the primary WLAN IRQ (7: ... GICv3 237 Level 0000:00:00.0) was being handled exclusively by a single CPU core (CPU1).

    • We manually forced symmetric load distribution by setting the affinity mask: echo 3 > /proc/irq/7/smp_affinity. This change was made persistent via /etc/rc.local.

  4. Enabled Packet Steering (RPS):

    • To support the IRQ affinity change, RPS was also enabled for both cores on the Wi-Fi interface: echo 3 > /sys/class/net/rax0/queues/rx-0/rps_cpus. This was also made persistent.
  5. Tuned Kernel Network Stack (/etc/sysctl.conf):

    • Network Buffers: Increased rmem_max, wmem_max, tcp_rmem, tcp_wmem, and netdev_max_backlog to values suitable for high-speed networks.

    • TCP Optimizations: Enabled tcp_fastopen=3 and tcp_mtu_probing=1.

    • Connection Tracking: Increased net.netfilter.nf_conntrack_max to 65536.

    • Queuing Discipline: Set net.core.default_qdisc=fq_codel to manage bufferbloat.

    • (Note: An attempt to use kmod-tcp-bbr resulted in system instability and reboots, so it was removed, confirming a package/kernel incompatibility. The system is stable with the cubic default.)
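
For reference, here is a condensed sketch of how these settings were applied and persisted. The IRQ number (7) and Wi-Fi interface name (rax0) are specific to my unit, and the buffer sizes below are illustrative examples rather than the exact values I used:

    # /etc/config/firewall - flow offloading in the defaults section
    config defaults
        option flow_offloading '1'
        option flow_offloading_hw '1'

    # /etc/rc.local - persist IRQ affinity and packet steering across reboots
    echo 3 > /proc/irq/7/smp_affinity                    # spread the WLAN IRQ over both cores
    echo 3 > /sys/class/net/rax0/queues/rx-0/rps_cpus    # enable RPS on the Wi-Fi interface
    exit 0

    # /etc/sysctl.conf - network stack tuning (values are illustrative)
    net.core.rmem_max=4194304
    net.core.wmem_max=4194304
    net.ipv4.tcp_rmem=4096 87380 4194304
    net.ipv4.tcp_wmem=4096 65536 4194304
    net.core.netdev_max_backlog=2500
    net.ipv4.tcp_fastopen=3
    net.ipv4.tcp_mtu_probing=1
    net.netfilter.nf_conntrack_max=65536
    net.core.default_qdisc=fq_codel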

Conclusion after Optimization:
Even after applying all these deep-level software and kernel optimizations, the throughput limits remained identical: ~950 Mbps TX and ~600 Mbps RX. This strongly indicates that the bottleneck is not a tunable software parameter but a fundamental architectural limitation of the Spitz AX.

My Questions for the GL.iNet Engineers & Developers:

  1. Are these measured throughput values the expected performance ceiling of the MediaTek SoC when its CPU cores are the traffic endpoint?

  2. Can you confirm if there is a known architectural bottleneck in the data path between the Wi-Fi subsystem and the CPU that explains this "1Gbit-like" wall?

  3. Is the significant performance asymmetry (TX > RX) a known characteristic of the platform's hardware offloading capabilities (e.g., TSO/GSO being more efficient than LRO/GRO) or the driver implementation?

  4. Is there any potential for future firmware or proprietary driver updates to improve this?

I want to emphasize that my goal is to understand, on a technical level, the product I purchased for over 400 EUR. If the Wi-Fi throughput really is internally limited to roughly 950 Mbps, that is a critical piece of information.

Thank you for your time and expertise. I'm looking forward to your feedback and a productive discussion.

CellMonster

Just a basic check - do you have network acceleration enabled in LuCI? If so, I would disable that and test again. It may not change it, but network acceleration has been a culprit for various performance issues. It might do the same thing you have already done manually, so ymmv.

Thanks @packetmonkey for the idea. I didn't mention it above, but I've already tested with “Off”, “On - Software Acceleration” and “On - Hardware Acceleration”, and it makes absolutely no difference in my scenario. Speeds stay exactly the same as reported above in all three modes.

Hello,

In some test scenarios, the packets generated by "iperf3" may have compatibility issues with the switching chip. Please try using "OpenSpeedTest" for additional testing.

I’ve found that when using iPerf to test Wi-Fi I get much better throughput when using the -w 4M option. Give that a try and let us know if your speeds increase. Also, I typically do not use the -P option; in my experience the -w 4M is enough to get good results.
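
For reference, such a run looks roughly like this (replace <router-ip> with the router's LAN address):

    iperf3.exe -c <router-ip> -w 4M        # single stream, 4 MB window, client -> router
    iperf3.exe -c <router-ip> -w 4M -R     # reverse direction: router -> client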

Hello everyone,

I'd like to follow up on my initial performance analysis with some fascinating new data, thanks to the suggestion from @bruce to use OpenSpeedTest. This has ultimately solved the mystery, and I'm sharing my complete findings to help others.

A quick recap: My initial tests using iperf3 directly on the router showed a hard performance wall at ~900 Mbps (TX) and ~600 Mbps (RX), despite a flawless 1900+ Mbps Wi-Fi link rate. This led to the suspicion of a 1Gbit hardware bottleneck.

The breakthrough came from testing with OpenSpeedTest hosted on the router itself.

The results are dramatically different and confirm that the bottleneck is software- and application-specific, not a hard hardware limit.
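
For those curious how the test was hosted: the OpenSpeedTest files are static HTML/JS, so they only need a web server on the router. The snippet below is purely illustrative (the port, path and directives are my own assumptions here); OpenSpeedTest's documentation provides a complete recommended Nginx configuration:

    # Illustrative server block, placed inside the http { } context of nginx.conf.
    # Assumes the OpenSpeedTest static files were copied to /www/openspeedtest.
    server {
        listen 3000;                # any free port on the router
        root /www/openspeedtest;
        index index.html;
        client_max_body_size 0;     # do not cap the upload test payloads
        # OpenSpeedTest's sample config adds further directives (e.g. for the upload POSTs).
    }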


Test Results: OpenSpeedTest (hosted on Spitz AX via Nginx)

Test 1: Download (Router TX -> Client RX)

  • Result: A fantastic 1336.0 Mbps!

  • Link Rate during test: 1921 Mbps (Router TX)

  • CPU Load (htop): Both cores at nearly 100% utilization.

Test 2: Upload (Client TX -> Router RX)

  • Result: 760.0 Mbps.

  • Link Rate during test: 1729 Mbps (Router RX)

  • CPU Load (htop): Both cores again at nearly 100% utilization.

Final Analysis & Conclusions

Based on this comprehensive data, here are my final conclusions:

1. The Spitz AX CPU is the definitive bottleneck, but the limit depends on the application.
The OpenSpeedTest result of 1336 Mbps proves the hardware is capable of exceeding 1 Gbit/s throughput to a CPU-hosted application. However, it also shows that at this speed, the dual-core 1.3 GHz CPU is completely saturated. This confirms that achieving the advertised 2402 Mbps Wi-Fi speeds is not possible in any scenario where the router's CPU is the endpoint (e.g., VPN server, file server, etc.), as the CPU processing power is the limiting factor.

2. A significant performance penalty exists for raw socket tools like iperf3.
The fact that OpenSpeedTest (via Nginx) is ~50% faster than iperf3 in the download test points to a specific inefficiency in how raw network traffic is handled by the kernel/drivers, as suggested by the GL.iNet staff. This is a crucial finding for anyone benchmarking this platform.

3. The platform exhibits a strong, persistent TX/RX asymmetry.
Both iperf3 and OpenSpeedTest show that the router is significantly faster at sending data (Download) than receiving it (Upload). Since the CPU is maxed out in both tests, this suggests the hardware/driver architecture is fundamentally more efficient at TX processing. At ~760 Mbps, the upload performance hits the CPU wall and will not go higher.

In summary:
The Spitz AX is a powerful device, but users should be aware that CPU-bound tasks will be limited to roughly 1.3 Gbps for downloads and ~760 Mbps for uploads under ideal Wi-Fi conditions and with an optimal application stack. Performance with less optimized tools like iperf3 will be considerably lower.

Thank you to the GL.iNet team for the pointer that ultimately solved this puzzle! This has been a very educational deep dive. I hope this detailed analysis helps other users understand the real-world performance characteristics of this router.

Excellent testing, methodology, and reporting on this issue @CellMonster. Thanks for providing the followup!

@packetmonkey You are most welcome, hope it's helpful for someone!

In fact, for rigorous experiments the OpenSpeedTest server would normally be installed on a separate physical machine, so that the router can focus purely on forwarding packets. After all, running the OpenSpeedTest server also consumes the router's own CPU resources.


For quick verification and experimentation, putting the OpenSpeedTest server on the router itself this time simply saved some extra setup work.