Beryl 7 (GL-MT3600BE) - TX speeds unstable

Hi there,

Could someone with a Beryl 7 (GL-MT3600BE) please run an iperf test on the LAN port and share their results?

I’m seeing a very strange issue. No matter how I configure the Beryl (router mode, AP/bridge mode, with or without DHCP, just forwarding traffic, etc.), I get very bad upload speeds when connecting to another wired device on the network.

Example via the Beryl:

[  5][TX-C]   0.00-3.90 sec  33.5 MBytes  72.1 Mbits/sec  sender
[  7][RX-C]   0.00-3.90 sec  438 MBytes   942 Mbits/sec   receiver

So:

  • Upload (client → LAN) ≈ 70–80 Mbit/s

  • Download (LAN → client) ≈ 900+ Mbit/s

Very asymmetric.

This does not happen when:

  • The same client is plugged directly into the main router

  • Or when using a different AP (Cudy WR3000 in my case)

Directly connected (no Beryl), I get:

[  5][TX-C]   0.00-10.00 sec  2.73 GBytes  2.34 Gbits/sec  sender
[  7][RX-C]   0.00-10.00 sec  1.09 GBytes   939 Mbits/sec  receiver

So the physical cable and general network path are fine.

On the Beryl side I checked the interface stats and I’m seeing a huge number of FCS errors on the LAN port:

ethtool -S eth1 | egrep -i 'crc|fcs|err|drop|pause|disc|over'

rx_overflow: 0
rx_fcs_errors: 260393
rx_short_errors: 0
rx_long_errors: 0
rx_checksum_errors: 0

The rx_fcs_errors counter keeps increasing under load.
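For anyone wanting to reproduce the check, a minimal loop to watch the counter grow under load could look like this (a sketch; the interface and counter names are taken from the ethtool output above and may differ per firmware):

```shell
# Sample rx_fcs_errors every 5 seconds and print the delta since last sample.
prev=$(ethtool -S eth1 | awk '/rx_fcs_errors/ {print $2}')
while sleep 5; do
  cur=$(ethtool -S eth1 | awk '/rx_fcs_errors/ {print $2}')
  printf '%s rx_fcs_errors=%s (+%s)\n' "$(date +%T)" "$cur" "$((cur - prev))"
  prev=$cur
done
```

A healthy port should show a delta of 0 even while iperf3 is running.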

The LAN link negotiates at 2.5G full duplex, EEE is disabled, and I’ve tested with a different power supply (even a power bank) to rule out electrical noise — same behaviour.

Since the exact same cable and setup works perfectly when bypassing the Beryl, this makes me suspect:

  • A faulty LAN port on the unit?

  • 2.5G PHY instability?

  • A driver issue in the current firmware?

Has anyone seen similar behaviour on the Beryl 7 / MT7987 platform?

Would you recommend forcing the LAN port to 1G, updating firmware, or just RMA?
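In case it helps anyone reproduce: limiting the port to 1G can be tried non-persistently with ethtool before resorting to an RMA (a sketch; whether the MT7987 driver honours these options on eth1 is an assumption):

```shell
# Advertise only 1000baseT/Full so autonegotiation settles at 1G
# (0x020 is the standard ethtool advertise bit for 1000baseT Full).
ethtool -s eth1 advertise 0x020

# Verify the negotiated speed afterwards:
ethtool eth1 | grep -i speed
```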

Any suggestions are welcome.

Cheers

When you say “on the LAN port”, do you mean with iperf running on the device itself? I’ve done various tests on mine with traffic passing through the device and see no performance issues like this, but I haven’t run iperf on the device itself.

Can you give a bit more clarification on the topology/setup where you see the performance issue that prompted you to start testing? It’s not clear (to me, anyway) what you mean by “to another wired device”.

I had a similar experience.

But when troubleshooting thoroughly I found that my network cable was actually broken.

With the Brume 3 the same cable works; the Beryl 7 showed the exact symptoms from your post.

My conclusion was that this device is a lot more sensitive to bad cables; I did not even know that could be a thing.

Since I make most of my cables myself, I went with a classic shielded RJ45 plug rather than a shielded RJ45 EZ plug.

After getting a bad batch of those EZ plugs and moving into my new house, I had the weirdest issues. I even had a 20-meter cable perform well for 6 months and then suddenly fail. My usual diagnostics (pinging and checking latency) gave no indication of breakage. Basically, the internal copper plating was loose on those plugs, and the way the rubber terminated was also a problem.

I really don't trust these EZ plugs one bit for stability, so for me it was easy to tell what was broken; stability returned with the classic RJ45.

I'm also wondering whether the exposed copper from such a plug could interfere with some of the internal hardware behind the port, which could explain some very strange error behaviour. I don't know if you use such plugs on this cable? (The difference: an EZ plug has holes at the tip where the cores pass through and are exposed to air; a classic plug is sealed, with no holes.)

@oorweeg No, I mean iperf running on another machine downstream (multiple other machines, in fact). The general topology is the following:

Gaming PC (iperf client)
→ Beryl LAN port (eth1) vlan 10 untagged
→ (Beryl config: bridge/router, doesn’t really matter, happens either way)
→ Beryl WAN port (eth0) - tested vlan 10 tagged as well as vlan10 untagged - doesn’t make a difference

Then 2 other configurations tested
Config 1:
→ Layer 2 switch
→ target device (iperf server) (either another PC on the LAN or my NAS (both vlan 10))

Config 2:
→ 2.5G NIC on the main router (x86 machine running opnsense)
→ target device (iperf server) (either another PC on the LAN or my NAS (both vlan 10))

In both cases, when I put the Beryl in the chain the upload drops.
If I remove the Beryl I get 1G or 2.5G depending on the configuration, since my L2 switch is 1G and I get 2.5G when connecting to the other interface on the main router.

@xize11 OK, as you say, I also haven't really had the experience of a specific device being more or less sensitive to cables, but I guess an easy test I can do on the weekend is to bring both PCs next to the main router, plug them in directly, and see if anything changes.

Still, though, how bad can this cable be that it goes from 2.5G without the Beryl to <100M with it?

In my case the faulty cable was shorter than a meter. I do know sites like AliExpress sell cheap cables, some of which come with these RJ45 EZ plugs attached; I really advise against using cables that have those.

It doesn’t need to be that bad: you only need a minimal amount of loss (maybe even 0.1% or 0.01%, depending on latency and other factors) to totally tank throughput. And that loss could be coming from the cable or from somewhere else.
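For a rough feel for why, the classic Mathis et al. approximation caps steady-state TCP throughput at MSS / (RTT × √loss). A quick sketch (the 1460-byte MSS is standard for a 1500 MTU; the 2 ms LAN round-trip time is an assumption):

```shell
# TCP throughput ceiling per the Mathis approximation, in Mbit/s,
# for a 1460-byte MSS and an assumed 2 ms RTT, at various loss rates.
awk 'BEGIN {
  mss = 1460 * 8      # MSS in bits
  rtt = 0.002         # assumed 2 ms round-trip time
  n = split("0.01 0.001 0.0001", p, " ")
  for (i = 1; i <= n; i++)
    printf "loss %-7s -> ~%.0f Mbit/s\n", p[i], mss / (rtt * sqrt(p[i])) / 1e6
}'
```

At 1% loss and 2 ms RTT the ceiling is already down around 58 Mbit/s, which is in the same ballpark as the TX numbers in this thread.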

Hmm ok let’s see.

I mean, all my cables are self-made using normal connectors on Cat7 cable. But I will test with another cable and with the machines closer together, and then write back. If that fixes it, it would be a first for me, but as @xize11 says, it's apparently possible. I still can't quite wrap my head around how one router can do full speed with no problems on a given cable while another one collapses on it, but oh well, I'm open to surprises :smiley:

Also, this was my original post, in case it's of interest.

I did some steps which led me to the conclusion it was the cable.

For what it’s worth, using shielded cables without the proper grounding intended for them can actually make things worse: the shielding acts like an antenna, picking up interference that messes with the signal, the exact opposite of its intended purpose. It's less of a problem on a short cable, but something to keep in mind, as you are often better off using cheaper unshielded cables and getting better performance.

I tested pretty much everything I can think of, and I can say for the first time in a while… I don’t understand what I’m seeing.

Clarifying the setup (since it came up above):

  • iperf3 is running on two PCs (Windows iperf3.exe), not on the router/AP itself.

  • Running it on the AP itself gives more or less the same results.

  • Direct here means PC → Beryl → PC (no main router in the path).

  • Indirect = traffic traverses the main router (and my L2 switch / VLANs).

  • Main router interfaces: 1× WAN, 1× 2.5G, 1× 10G.

I swapped interconnect patch cables (Cat5e / Cat6 / Cat7) for the tests and get the same behavior. The only thing I can’t fully exclude is the in-wall / house cabling on some indirect paths, but since the issue also shows up in direct PC↔Beryl↔PC tests with known-good patch leads, the house cabling seems irrelevant.

Here is the data:

direct pc-pc on 1G (Via Beryl 7(stock)) - tested with cat 5e, cat6 and cat 7 interconnects
iperf3.exe -c 192.168.8.218 --bidir

[ 5][TX-C] 0.00-10.00 sec 130 MBytes 109 Mbits/sec sender
[ 5][TX-C] 0.00-10.04 sec 129 MBytes 107 Mbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 53.0 MBytes 44.4 Mbits/sec sender
[ 7][RX-C] 0.00-10.04 sec 51.9 MBytes 43.3 Mbits/sec receiver

direct pc-pc on 2.5G (Via Beryl 7(stock)) - tested with cat 5e, cat6 and cat 7 interconnects

[ 5][TX-C] 0.00-10.01 sec 157 MBytes 132 Mbits/sec sender
[ 5][TX-C] 0.00-10.04 sec 156 MBytes 130 Mbits/sec receiver
[ 7][RX-C] 0.00-10.01 sec 54.2 MBytes 45.5 Mbits/sec sender
[ 7][RX-C] 0.00-10.04 sec 53.6 MBytes 44.8 Mbits/sec receiver

Direct Cudy
iperf3.exe -c 192.168.1.2 --bidir (1G)
[ 5][TX-C] 0.00-10.00 sec 1.10 GBytes 946 Mbits/sec sender
[ 5][TX-C] 0.00-10.02 sec 1.10 GBytes 943 Mbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 530 MBytes 445 Mbits/sec sender
[ 7][RX-C] 0.00-10.02 sec 530 MBytes 443 Mbits/sec receiver

Indirect Beryl - PC(vlan 10 2.5G)-Main(2.5G)-L2 Switch(10G)->AP(1G (vlan10))-PC(1G)
iperf3.exe -c 192.168.10.160 --bidir

[ 5][TX-C] 0.00-10.01 sec 68.8 MBytes 57.6 Mbits/sec sender
[ 5][TX-C] 0.00-10.02 sec 68.4 MBytes 57.3 Mbits/sec receiver
[ 7][RX-C] 0.00-10.01 sec 845 MBytes 708 Mbits/sec sender
[ 7][RX-C] 0.00-10.02 sec 843 MBytes 706 Mbits/sec receiver

Indirect Beryl - PC(vlan 10 2.5)-Main(2.5G)-L2 Switch(10G)->AP(1G (vlan10))-PC(1G)
iperf3.exe -c 192.168.10.160 --bidir

[ ID][Role] Interval Transfer Bitrate
[ 5][TX-C] 0.00-10.01 sec 96.4 MBytes 80.8 Mbits/sec sender
[ 5][TX-C] 0.00-10.01 sec 96.1 MBytes 80.6 Mbits/sec receiver
[ 7][RX-C] 0.00-10.01 sec 1.07 GBytes 918 Mbits/sec sender
[ 7][RX-C] 0.00-10.01 sec 1.07 GBytes 915 Mbits/sec receiver

Indirect Cudy - PC(vlan 10)->Main(2.5G)->NAS(vlan 30)
iperf3.exe -c 192.168.30.10 --bidir

[ ID][Role] Interval Transfer Bitrate Retr
[ 5][TX-C] 0.00-10.00 sec 1.07 GBytes 923 Mbits/sec sender
[ 5][TX-C] 0.00-10.00 sec 1.07 GBytes 922 Mbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 1.05 GBytes 899 Mbits/sec 0 sender
[ 7][RX-C] 0.00-10.00 sec 1.04 GBytes 895 Mbits/sec receiver

Indirect Cudy - PC(vlan 10 1G)-Main(2.5G)-L2 Switch(10G)->AP(1G (vlan10))-PC(1G)
iperf3.exe -c 192.168.10.160 --bidir

[ ID][Role] Interval Transfer Bitrate
[ 5][TX-C] 0.00-10.00 sec 1.10 GBytes 946 Mbits/sec sender
[ 5][TX-C] 0.00-10.02 sec 1.10 GBytes 943 Mbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 530 MBytes 445 Mbits/sec sender
[ 7][RX-C] 0.00-10.02 sec 530 MBytes 443 Mbits/sec receiver

Indirect Beryl - PC(vlan 10 2.5G)-Main(2.5G)-L2 Switch(10G)->NAS
iperf3.exe -c 192.168.30.10 --bidir

[ ID][Role] Interval Transfer Bitrate Retr
[ 5][TX-C] 0.00-10.01 sec 2.41 GBytes 2.07 Gbits/sec sender
[ 5][TX-C] 0.00-10.01 sec 2.41 GBytes 2.07 Gbits/sec receiver
[ 7][RX-C] 0.00-10.01 sec 1.91 GBytes 1.64 Gbits/sec 4 sender
[ 7][RX-C] 0.00-10.01 sec 1.91 GBytes 1.64 Gbits/sec receiver

Main to NAS
iperf3 -c 192.168.30.10 --bidir

[ ID][Role] Interval Transfer Bitrate Retr
[ 5][TX-C] 0.00-10.00 sec 10.0 GBytes 8.60 Gbits/sec 398 sender
[ 5][TX-C] 0.00-10.00 sec 10.0 GBytes 8.60 Gbits/sec receiver
[ 7][RX-C] 0.00-10.00 sec 10.7 GBytes 9.16 Gbits/sec 515 sender
[ 7][RX-C] 0.00-10.00 sec 10.7 GBytes 9.16 Gbits/sec receiver

In addition, it doesn't show up in exactly one scenario, which is what makes me not understand it. I would have called it a defective port, but in the Beryl-to-NAS case it works.

So yeah, any ideas?

Hmm, I'm thinking here.

Are there any network switches involved, even if they are not directly in the path?

Sometimes a switch that doesn't function properly, or a device connected to a bad cable anywhere on your network, can pull down the main router.

Even a continuous spam of DHCP requests can be enough, where the client never acknowledges them and almost immediately requests again.

Otherwise, try temporarily disabling those switches and only allow the path to the Beryl 7.

It is also possible there is some IPv6-related problem, or some kind of loop due to an IP conflict.

Can you run in UDP mode rather than TCP mode, and in one direction at a time? This will validate the maximum throughput the physical path can sustain, ignoring any losses causing TCP to slow down further.

Upload

iperf3 -c {{server_ip}} -u -b 930M

Download

iperf3 -c {{server_ip}} -u -b 930M -R

You will need to ignore most of the output and focus on the last 2 lines, which will look something like this. Usually the sender shows the rate you set (930M here) and the receiver shows how much was actually received.

- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.01  sec  1.08 GBytes   930 Mbits/sec  0.000 ms  0/803239 (0%)  sender
[  5]   0.00-10.02  sec  1.05 GBytes   903 Mbits/sec  0.016 ms  22084/803239 (2.7%)  receiver

The receiver line is what matters for each direction. These numbers should be higher than your TCP results and will give some indication of why the slowness is happening.
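If you want to pull just those figures out automatically, something like this should work on the plain-text output (a sketch; the field positions match the sample lines above, and {{server_ip}} is the same placeholder as before):

```shell
# Print the lost/total datagrams and loss percentage from the receiver line.
iperf3 -c {{server_ip}} -u -b 930M | awk '/receiver/ {print $(NF-2), $(NF-1)}'
```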

@xize11 There is an L2 switch on the 10G port of the main router which then distributes everything. However, this switch is always present regardless of whether the Beryl is used, so it should not have an influence; the indirect measurements are taken via this switch.

@oorweeg OK, done. Introducing the Beryl results in... well, you will see in the results. In a single direction it generally works (I have tested it before), regardless of whether it's TCP or UDP.

Cudy:

PC(vlan10 1G)-Main(2.5G)-L2 Switch(10G)->AP(1G (vlan10))-PC(vlan10 1G)
iperf3.exe -c 192.168.10.57 -u -b 930M

[  5]   0.00-10.00  sec  1.08 GBytes   930 Mbits/sec  0.000 ms  0/796133 (0%)  sender
[  5]   0.00-10.01  sec  1.07 GBytes   920 Mbits/sec  0.009 ms  7881/796129 (0.99%)  receiver

iperf3.exe -c 192.168.10.57 -u -b 930M -R

[  5]   0.00-10.00  sec  1.08 GBytes   930 Mbits/sec  0.000 ms  0/796382 (0%)  sender
[  5]   0.00-10.00  sec  1.08 GBytes   930 Mbits/sec  0.009 ms  0/796287 (0%)  receiver


Beryl 1G:

PC(vlan10 1G)-Main(2.5G)-L2 Switch(10G)->AP(1G (vlan10))-PC(vlan10 1G)
iperf3.exe -c 192.168.10.57 -u -b 930M

[  5]   0.00-10.01  sec  1.08 GBytes   929 Mbits/sec  0.000 ms  0/796074 (0%)  sender
[  5]   0.00-10.01  sec  1.08 GBytes   926 Mbits/sec  0.764 ms  2046/796074 (0.26%)  receiver

iperf3.exe -c 192.168.10.57 -u -b 930M -R

[  5]   0.00-10.02  sec   963 MBytes   806 Mbits/sec  0.000 ms  0/691271 (0%)  sender
[SUM]  0.0-10.0 sec  5 datagrams received out-of-order
[  5]   0.00-10.01  sec   961 MBytes   805 Mbits/sec  0.018 ms  626/691150 (0.091%)  receiver


Beryl 2.5G

PC(vlan10 2.5G)-Main(2.5G)-L2 Switch(10G)->AP(1G (vlan10))-PC(vlan10 1G)
iperf3.exe -c 192.168.10.57 -u -b 930M

[  5]   0.00-10.00  sec  1.08 GBytes   930 Mbits/sec  0.000 ms  0/796232 (0%)  sender
[  5]   0.00-10.00  sec   749 MBytes   628 Mbits/sec  0.062 ms  258046/796232 (32%)  receiver

iperf3.exe -c 192.168.10.57 -u -b 930M -R

[  5]   0.00-10.01  sec  1.08 GBytes   929 Mbits/sec  0.000 ms  0/796197 (0%)  sender
[SUM]  0.0-10.0 sec  24 datagrams received out-of-order
[  5]   0.00-10.00  sec  1.08 GBytes   929 Mbits/sec  0.020 ms  0/796061 (0%)  receiver


I have the feeling I want to like this router more than it wants to like me. I'm considering returning it; for a travel router it acts very unstable. And this is under perfect conditions, so what am I to expect in a real-world scenario where the conditions aren't perfect?

The UDP test there highlights the serious problem, and why your TCP speeds are so low.

To confirm: to change the speeds, you're connecting two different devices (one 2.5G and one 1G) to the 2.5G LAN port on the Beryl, and with the 1G device it's fine?

Can you share the output of the following commands with each device connected:

ethtool eth1

ethtool -a eth1

ethtool --show-eee eth1

And after a fresh reboot and retest at each speed:

ethtool -S eth1

Just done some testing of my own.

  • Untagged device on WAN (Set as LAN port)
  • VLAN 10 Tagged Device on LAN
  • WAN port is connected at 2.5G
  • LAN port is tested connected at both 1G and 2.5G

When using 2.5G I cannot get iperf3 to work at all in one of the directions (2.5G WAN → 2.5G LAN). The test connects but fails and doesn’t return a result.

When using 2.5G WAN → 1G LAN, it works fine at full speed (940Mbps).

Will do some further testing to see if it's the same when moving the VLAN to eth0, and report back.


***

Leaving the details here for continuity in the thread but this post can be ignored. See the update here: Beryl 7 (GL-MT3600BE) - TX speeds unstable - #23 by oorweeg

***

@m.v.petev Try setting the MTU of your vlan tagged interfaces on the Beryl 7 to 1496 rather than the default 1500 and test again.

There seems to be some issue/bug with egress VLAN traffic when the Ethernet port is running at 2.5G that is not present at 1G. In my case, after setting the MTU to 1496 (so the entire Ethernet frame is reduced to the regular 1514 bytes instead of 1518, since 1496 payload + 14-byte header + 4-byte 802.1Q tag = 1514), I am able to pass traffic again, whereas before iperf3 did not work at all.
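For a quick, non-persistent test of the workaround, the MTU can be dropped on the live VLAN device (the device name eth1.10 is an assumption; adjust to your setup):

```shell
# Reduce the VLAN device MTU so tagged frames stay within 1514 bytes on the wire.
ip link set dev eth1.10 mtu 1496

# Verify the new MTU took effect:
ip link show dev eth1.10
```

This is lost on reboot; to make it persistent, set the MTU on the device in /etc/config/network instead.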

Performance-wise, once this is working I get about 1600Mbps with 1 stream or 1900Mbps with 4 streams (-P 4) in this configuration (Untagged → Tagged). Not full speed, but still a significant improvement over using a 1Gbps port.

In the other direction (Tagged→Untagged) I get 2350Mbps with this configuration

Edit

It is not clear whether what I am seeing is a known issue with the MTK drivers that has already been patched upstream. @m.v.petev Hopefully this MTU workaround confirms you see the same thing; if so, we can loop in the GL team here to try to get a fix into a future release.

Edit 2

This would likely impact the Brume 3 as well, but I can’t test that currently as my Brume 3 is my main gateway device.

Edit 3 - Issue Overview

While we wait for @m.v.petev to confirm whether the workaround resolves the issue for him, and whether we are both seeing the same bug, here is a summary of my findings so far.

Impacted devices

  • Beryl 7

Issue Behaviour

  • Egress/Tx VLAN tagging at 2.5G physical port speed is broken.
  • Symptoms range from heavy packet loss to total loss of connectivity.

Steps To Replicate

  • Configure WAN port as LAN in default bridge
  • Configure 802.1q VLAN device on eth1 (LAN Port)
  • Create new static IP interface with DHCP
  • Attach new VLAN device to new static IP interface
  • Assign new static IP interface to ‘LAN’ firewall zone
  • Connect iperf3 server/client to physical ports at 2.5G
  • Run iperf3 test between server and client between subnets
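On OpenWrt, the replication steps above can be sketched with UCI roughly like this (the interface names, VLAN ID, and test subnet are assumptions, and the DHCP-server part is left out for brevity):

```shell
# 802.1q VLAN device on top of the LAN port (eth1)
uci set network.vlan10=device
uci set network.vlan10.type='8021q'
uci set network.vlan10.ifname='eth1'
uci set network.vlan10.vid='10'
uci set network.vlan10.name='eth1.10'

# Static-IP interface attached to the VLAN device
uci set network.testlan=interface
uci set network.testlan.proto='static'
uci set network.testlan.device='eth1.10'
uci set network.testlan.ipaddr='192.168.50.1'
uci set network.testlan.netmask='255.255.255.0'

# Put the new interface in the 'lan' firewall zone (zone index is an assumption)
uci add_list firewall.@zone[0].network='testlan'

uci commit network && uci commit firewall && /etc/init.d/network restart
```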

Test Results

  • iperf3 test from Tagged (LAN) → To Untagged (WAN)
    • Full speed (2300Mbps), no issues
  • iperf3 test from Untagged (WAN) → To Tagged (LAN)
    • iperf3 connects but throughput test fails and reports 0Mbps

Workarounds

With the following workarounds the issue is resolved and the Untagged (WAN) → Tagged (LAN) throughput test works:

  • Setting VLAN interface MTU to 1496
    • Speed 1900Mbps
  • Running at 1G port speeds
    • Speed 940Mbps

Troubleshooting steps with no effect/improvement

  • ethtool -K eth1 tx-checksum-ipv4 off
  • ethtool -K eth1 tx-scatter-gather off
  • ethtool -K eth1 tso off
  • ethtool -K eth1 gso off
  • ethtool -K eth1 gro off
  • ethtool -K eth1 tx-vlan-offload off
  • ethtool -A eth1 tx off

Hello there, thanks for all the testing. We might be getting somewhere.

Unfortunately I can’t confirm the results with the MTU setting; however, my configuration is slightly different.

What I have now is a VLAN 10 trunk coming in on the WAN interface.

network.@device[0]=device
network.@device[0].name='br-lan'
network.@device[0].type='bridge'
network.@device[0].ports='eth1' 'eth0.1'

network.@device[1]=device
network.@device[1].name='eth1'

network.@device[2]=device
network.@device[2].name='eth0.10'
network.@device[2].mtu='1496'

network.wan=interface
network.wan.proto='dhcp'
network.wan.force_link='0'
network.wan.ipv6='0'
network.wan.classlessroute='0'
network.wan.metric='10'
network.wan.device='eth0.10'
network.wan.disabled='1'

Result is the same:

[  5]   0.00-10.01  sec  1.08 GBytes   930 Mbits/sec  0.000 ms  0/796662 (0%)  sender
[  5]   0.00-10.01  sec   758 MBytes   635 Mbits/sec  0.014 ms  252367/796662 (32%)  receiver

Just to completely exclude VLAN effects, I have tested the following:

I set a port on the L2 switch to carry just untagged VLAN 10. Obviously this changes the WAN port negotiation speed to 1G, but the LAN port on the other side is left at 2.5G.

interface 1/1/6
  vlan access 10
  no shutdown

I set the Beryl in two configurations, one with DHCP:

network.@device[0]=device
network.@device[0].name='br-lan'
network.@device[0].type='bridge'
network.@device[0].ports='eth1'

network.@device[1]=device
network.@device[1].name='eth1'

network.wan.proto='dhcp'
network.wan.force_link='0'
network.wan.ipv6='0'
network.wan.classlessroute='0'
network.wan.metric='10'
network.wan.device='eth0'

Resulting in no change to the speeds:

[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.01  sec  1.08 GBytes   929 Mbits/sec  0.000 ms  0/796223 (0%)  sender
[  5]   0.00-10.01  sec   764 MBytes   641 Mbits/sec  0.063 ms  247192/796222 (31%)  receiver

One without DHCP

network.wan.disabled='1'
[ ID] Interval           Transfer     Bitrate         Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec  0.000 ms  0/793730 (0%)  sender
[  5]   0.00-10.00  sec   774 MBytes   649 Mbits/sec  0.030 ms  238145/793730 (30%)  receiver

Interesting that you don’t see the same VLAN issue that I do. But the fact that there is an issue without VLANs at all suggests there is more than one problem, and I somehow managed to find a different one.

Just to help with clarity, would you be able to knock up a diagram in Excalidraw or Paint or something, showing the topology, where/what the iperf devices are, and the direction across the topology of the traffic with the loss?

Absolutely. Well, I can do it ASCII-style; I think that might be easier:

   [PC #1]
 Realtek 2.5GbE 
     |
     | 2.5GbE (wired) (VLAN 10 untagged)
     v
+-----------------------+
| Cudy WR3000 (OpenWrt) | (or Beryl)
| Access Point          |
+-----------------------+
     |
     | 2.5GbE (wired) (VLAN 10, 20 Trunk) 
     v
+----------------------------------------------+
| OPNsense Main Router (x86, i7-4790)          |
|  - Realtek 2.5GbE NIC (to WR3000)            |
|  - Intel X520-DA1 10G SFP+ (to switch)       |
|                                              |
| VLANs on router:                             |
|   VLAN 1   = Management                      |
|   VLAN 10  = Devices                         |
|   VLAN 20  = Servers                         |
|   VLAN 30  = IoT                             |
|   VLAN 99  = Dump                            |
+----------------------------------------------+
     |
     | 10G SFP+  (TRUNK: VLANs 1/10/20/30/99)
     v
+---------------------------+
| 2930F (L2 Switch)         |
+---------------------------+
  |            |                |                 |                 |
  | 10G SFP+   | 1G (copper)    | 1G (copper)     | 1G (copper)     | 1G (copper)
  |            |                |                 |                 |
  v            v                v                 v                 v
+---------+  +----------------+ +---------------------+ +---------------------+
|  NAS    |  | Linksys MX4200 | | D-Link Aquila M30 #1 | | D-Link Aquila M30 #2 |
| X520 10G|  | (OpenWrt AP)   | | (OpenWrt AP)         | | (OpenWrt AP)         |
+---------+  +----------------+ +---------------------+ +---------------------+
                |
                | 1G (wired) (VLAN 10 untagged)
                v
            [PC #2]
            Realtek 1GbE

The configurations of where the PCs have been plugged in, via which AP (and even with no AP in the path), we discussed previously, so this is just the general infrastructure.

PC #1 ↔ PC #2 is where we see the results. So as not to spam, I have also tested the same traffic from PC #1 towards various APs as well as the NAS, and the situation is the same whenever the Beryl is in the middle, with the only exception of going to the NAS, in which case I get 1.2-1.8G.

Loving the ASCII diagram! :heart: :heart:

So to line up with my testing: you see issues in the same direction as me (egress from the Beryl towards the main router), but not quite the same issue.

You see issues even when there is no VLAN tagging outbound from the Beryl towards the main router (removing the tagging solves it for me), and no improvement from the MTU reduction when there is tagging (which also solves it for me).

Deffo some kind of driver issue based on my testing, and it could be the same issue as yours, just perhaps not manifesting quite the same way due to the different PHY on the main-router side of your setup. :thinking: I speculate that the MTK is mishandling egress frames at the higher rate, and perhaps your Realtek PHY is less fussy, so you see loss, while the PHY in my test device is strict and drops the frames entirely.
That doesn’t explain why it is still broken for you when untagged, though…

Can you run the ethtool EEE output command from the earlier post? It seems like EEE is supported but disabled on the Beryl; I just want to check that is the case in your setup and that EEE isn’t running on that Realtek link to the Beryl.

Edit

I’m going to test again here just to validate that my VLAN-tagged test client isn’t the broken side in my setup, causing the drops in my tests.