My network spans two properties, and I recently bought a GL-MT6000 (Flint 2) for one and a GL-BE9300 (Flint 3) for the other. Since I regularly access devices and services remotely, I set up Tailscale on both routers.
Site A:
Flint 3
LAN = 192.168.1.0/24
Tailscale = 100.100.1.1
Allow Remote Access WAN = true
Allow Remote Access LAN = true
Site B:
Flint 2
LAN = 192.168.2.0/24
Tailscale = 100.100.2.1
Allow Remote Access WAN = true
Allow Remote Access LAN = true
I have nginx for reverse proxying at each site and use my subdomains to point to the various services. Connectivity between the two sites worked when using IP addresses, but not the subdomains. To resolve that, I added:
Site A:
nft add rule inet fw4 dstnat iifname "tailscale0" tcp dport { 80, 443 } dnat ip to 192.168.1.250
nft add rule inet fw4 srcnat ip saddr 192.168.2.0/24 ip daddr 192.168.1.0/24 masquerade
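Worth noting for anyone copying these: rules added with `nft add rule` into the fw4 table are not persistent, since a firewall reload (`/etc/init.d/firewall restart`) recreates the table and drops them. A quick way to confirm they are currently in place (chain names assume OpenWrt's standard fw4 table):

```shell
# Show the DNAT rule redirecting HTTP/HTTPS arriving over Tailscale
nft list chain inet fw4 dstnat

# Show the masquerade rule for Site B -> Site A traffic
nft list chain inet fw4 srcnat
```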
The only issue is that the connection is ridiculously slow, whether using IPs or subdomains. Is this a configuration issue or a hardware limitation?
Have you checked the connection status and throughput of your Tailscale link?
If your connection is established through a relay node (DERP), performance may be limited by the bandwidth load on the Tailscale relay server.
Even with a direct (peer-to-peer) connection, factors such as geographic distance and ISP routing can affect overall speed.
You can verify and benchmark the connection via SSH using the commands below:
# Check Tailscale connection status
tailscale status
# Install iperf3 for bandwidth testing
opkg update && opkg install iperf3
# On one device, start the iperf3 server
iperf3 -s
# On the other device, run the client tests:
# Upload test
iperf3 -c <tailscale_peer_ip>
# Download test (reverse mode)
iperf3 -c <tailscale_peer_ip> -R
This will help determine whether the speed limitation originates from the Tailscale relay, ISP routing, or local network performance.
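To check specifically whether the path is relayed, `tailscale ping` reports whether replies arrive directly or via a DERP relay, and `tailscale netcheck` summarises NAT traversal conditions and the nearest relay region (the peer IP below is Site B's address from this thread):

```shell
# Ping the peer over the tailnet; output shows "via DERP(...)" or a direct endpoint
tailscale ping 100.100.2.1

# Report UDP reachability, NAT type, and DERP region latencies
tailscale netcheck
```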
The MT6000 → BE9300 link shows approximately 45 Mbps.
The BE9300 → MT6000 link shows approximately 85 Mbps.
These rates appear to be within the expected range.
Could you clarify how you measured the slower speeds you observed?
It may help to run iperf3 or OpenSpeedTest on a LAN device to test throughput along the full path:
LAN <-> MT6000 <-> Tailscale Tunnel <-> BE9300 <-> LAN
I regularly upload photos from Site B to Immich running in Docker on a Synology NAS at Site A. I usually access Immich using a subdomain (reverse proxied) via a desktop browser. Anything that needs to be uploaded just gets dragged into the browser window. The broadband at Site B is roughly 500Mbps down and 50Mbps up. Unless the images in question are >1MB, the progress bars just flash from 0 to 100.
I've just:
Tried uploading 15 images totalling 11.6MB - it took just under 5 minutes. Previously it would have taken ~5 seconds.
Disabled Tailscale on both routers
Tried uploading 132 images totalling 514MB - took around 2 minutes.
Tomorrow I'll re-enable Tailscale and then use iperf3 to test from laptop at Site B to NAS at Site A
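For scale, the two uploads above work out to very different effective rates (rough figures, assuming the ~5 minute and ~2 minute durations quoted). The Tailscale-enabled figure is two orders of magnitude below the ~45 Mbps the iperf3 test measured for the tunnel, which suggests something other than raw tunnel bandwidth:

```python
def mbps(megabytes: float, seconds: float) -> float:
    """Effective throughput in megabits per second."""
    return megabytes * 8 / seconds

# 11.6 MB in ~5 minutes with Tailscale enabled
print(round(mbps(11.6, 5 * 60), 2))  # ≈ 0.31 Mbps
# 514 MB in ~2 minutes with Tailscale disabled
print(round(mbps(514, 2 * 60), 1))   # ≈ 34.3 Mbps
```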
@will.qiu Whilst looking to re-enable and test, I pasted the original post into ChatGPT and it suggested the connection was being relayed. I noticed that the solution it suggested only mentioned LAN Subnet Route - I had both LAN and WAN selected.
I've now re-enabled Tailscale with:
Allow Remote Access WAN = false
Allow Remote Access LAN = true
Everything appears to be working well, with performance being no different to the “before” state - even after removing the rules that I had added.
So, after restarting Tailscale, everything started working again?
It’s possible that Tailscale initially selected an unsuitable DERP server or was unable to establish a direct connection, and the restart resolved the issue.
In any case, it’s good to hear that everything is functioning normally now.
No, I don’t think the issue was resolved just by restarting. (“IT” is both my career and passion, so naturally reboots are a last-resort activity.)
From what I can tell, the issue was down to “Allow Remote Access WAN” being set to “true”. When enabled, the subdomains used for reverse proxying ceased to function. According to ChatGPT, the rules I added to resolve the subdomain problem caused the connection to be relayed.
The issue was resolved by setting “Allow Remote Access WAN” to “false”. With that applied, everything works at the expected speed and doesn’t require any additional rules.