I’m not seeing the supposed ‘bug.’
As best I can tell (I can’t track down the exact script/line of code to point out the specific bug), GL.iNet configures multiple route tables to handle different types of traffic (VPN vs non-VPN). These appear to be:
- main (also 1) → used by the local router
- 51 → used for DNS
- 52 → used for VPN clients (e.g. via_vpn)
- 53 → used for non-VPN clients (e.g. bypass_vpn)
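Assuming those table numbers are right, their contents can be inspected directly on the router with `ip route` and `ip rule` (a diagnostic sketch, not from GL.iNet docs; the table names are the ones I inferred above):

```shell
# Dump each routing table GL.iNet appears to use
ip route show table main   # default table, used by the router itself
ip route show table 51     # DNS table
ip route show table 52     # via_vpn table (VPN clients)
ip route show table 53     # bypass_vpn table (non-VPN clients)

# The policy rules that select between the tables
ip rule show
```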
When a VPN connection is established, the router configures the main AND via_vpn route tables to route 0.0.0.0/1 and 128.0.0.0/1 through the VPN connection (dev wgclient). Together these two /1 routes cover all of IPv4 and, being more specific than the 0.0.0.0/0 default, take precedence over it without replacing it. The router does not add these routes to the bypass_vpn route table, so non-VPN devices will not route through the VPN (as expected).
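One way to confirm which route actually wins for a given packet is `ip route get` (a diagnostic sketch; 192.168.8.100 is a hypothetical LAN client IP, and `br-lan` is the usual OpenWrt LAN bridge):

```shell
# Which route does the router itself take to an Internet address?
# With the /1 routes in main, this should show "dev wgclient".
ip route get 1.1.1.1

# Which route would a LAN client's traffic take?
ip route get 1.1.1.1 from 192.168.8.100 iif br-lan
```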
Since I'm using a VPN policy that defaults devices to non-VPN, I believe the behavior should be to configure only the via_vpn route table with 0.0.0.0/1 and 128.0.0.0/1, and NOT the main or bypass_vpn route tables.
As I said, this is the best I can tell. It's clear from /usr/bin/route_policy that the 51, 52, and 53 route tables are populated from the main route table, and that the actual route policy marks traffic coming from VPN devices (0x80000) and from non-VPN devices (0x60000). So perhaps the route tables aren't actually consulted for client traffic? I'm not sure.
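If the marks do drive routing, I'd expect `ip rule` entries keyed on fwmark, roughly like this sketch (the exact masks and priorities are my assumptions, not confirmed from the firmware):

```shell
# Mark-based policy routing sketch: the firewall tags packets with an
# fwmark, and these rules steer marked packets into the per-policy tables.
ip rule add fwmark 0x80000/0x80000 lookup 52   # VPN-marked clients -> via_vpn
ip rule add fwmark 0x60000/0x60000 lookup 53   # non-VPN-marked clients -> bypass_vpn
# Unmarked traffic falls through to the built-in main-table rule (pref 32766)
```

If rules like these exist, a client's path is decided by its mark and table, which would explain why the main table's /1 routes matter mostly for the router's own traffic.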
The main thing I am trying to figure out is how best to configure the main route table to not use the VPN. I could simply delete the 0/1 and 128/1 routes, but I'm not sure whether that would have a downstream impact on client devices.
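If deleting them turns out to be safe, it would amount to just this (a sketch; since table 52 keeps its own copies of the /1 routes, VPN-policied clients should still route via wgclient, but I'd verify with real client traffic afterward):

```shell
# Remove the VPN covering routes from main only
ip route del 0.0.0.0/1 dev wgclient table main
ip route del 128.0.0.0/1 dev wgclient table main

# Sanity check: the router's own traffic should now follow the WAN default route
ip route get 1.1.1.1
```

The open question is whether any GL.iNet script re-adds these routes (e.g. on VPN reconnect), which would make the deletion only temporary.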