WireGuard connection watchdog script for LuCI/GL.iNet router?

Hello Bruce,

thanks again for your detailed tests!

I think there is still a fundamental misunderstanding:

:white_check_mark: DNS itself already works correctly
:cross_mark: The issue is that WireGuard policy routing splits DNS traffic depending on the include/exclude rules.

In my deployments the DNS server is always the Domain Controller on the other side of the tunnel (distributed by DHCP).
Therefore, DNS traffic must always go through the VPN, regardless of which destination domain or IP is queried.

Right now the GL firmware treats DNS as destination traffic
→ but DNS is service traffic, independent of the final target.

Because of this behavior:

  • Using include breaks DNS (DNS packets stay local instead of reaching the DC)

  • Using exclude forces me to maintain a huge exclusion list, which does not scale across many customer installations

We need DNS routing separate from include/exclude destination rules.

:pushpin: What is needed
A simple option like:

“Send DNS traffic to VPN”
(all DNS packets: UDP/TCP 53, optionally DoT/DoH to specified DNS IPs)

This solves the problem cleanly and makes “include” usable again.
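To make the request concrete, here is a minimal sketch of what such an option could do under the hood on OpenWrt-based firmware. This is purely illustrative, not GL.iNet's actual implementation; the routing table number, fwmark value, and `br-lan` interface are assumptions:

```shell
# Hypothetical sketch (NOT the actual GL.iNet implementation):
# mark all DNS packets from the LAN and route them via the
# WireGuard routing table, ahead of the include/exclude policy rules.

WG_TABLE=52    # routing table used by the WireGuard tunnel (assumed)
DNS_MARK=0x35  # arbitrary fwmark reserved for DNS traffic

# Mark UDP and TCP port 53 traffic entering from the LAN bridge
iptables -t mangle -A PREROUTING -i br-lan -p udp --dport 53 -j MARK --set-mark $DNS_MARK
iptables -t mangle -A PREROUTING -i br-lan -p tcp --dport 53 -j MARK --set-mark $DNS_MARK

# Route marked packets through the tunnel's table with high priority,
# so DNS bypasses the destination-based include/exclude rules
ip rule add fwmark $DNS_MARK table $WG_TABLE priority 100
```

DoT/DoH to specified resolver IPs could be covered the same way by matching port 853/443 toward those IPs.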

I hope this makes the situation clearer.
Thank you again for reviewing it!

Best regards,
Dustin

Hello,

Whether the policy is "include" or "exclude", DNS traffic is split according to the configured policy rules. For example, with "include": if the destination address is in the "include" list, the lookup uses the VPN DNS and the access request goes through the VPN tunnel; conversely, if the destination is not in the "include" list, the lookup uses the WAN DNS and the access request goes to the WAN (or on to the next tunnel rule).
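The include/exclude decision described above can be modeled as a tiny shell function. This is purely illustrative (not the actual firmware logic, which matches destinations against ipsets in the kernel):

```shell
# Illustrative model of the split logic described above (not firmware code):
# with an "include" policy, a destination on the list resolves via VPN DNS
# and its traffic goes through the tunnel; anything else uses WAN DNS.
dns_for_destination() {
    dest="$1"
    include_list="$2"   # space-separated include-list entries
    for entry in $include_list; do
        if [ "$entry" = "$dest" ]; then
            echo "VPN_DNS"
            return 0
        fi
    done
    echo "WAN_DNS"
}
```

For example, `dns_for_destination corp.example "corp.example ad.example"` prints `VPN_DNS`, while a destination not on the list prints `WAN_DNS` — which is exactly why queries for external names never reach a resolver behind the tunnel.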

So we are curious why in your configuration the DNS of the VPN client cannot reach the domain control server of the VPN server, but in my test DNS works properly.

  1. Please SSH into the router and execute: ipset list
  2. Please export the syslog covering the issue via GL GUI > SYSTEM > Log
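For reference, both diagnostics can also be collected from the shell (192.168.8.1 is the assumed default LAN IP; adjust to your setup):

```shell
# Log in to the router (assumed default LAN address)
ssh root@192.168.8.1

# On the router: dump the policy ipsets and the current syslog
ipset list > /tmp/ipset-list.txt
logread > /tmp/syslog.txt   # same content as GL GUI > SYSTEM > Log
```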

Hello Bruce,

thank you for the clarification. I think I understand how your current logic works — DNS traffic is split depending on the destination policy (include/exclude).

However, this logic doesn’t fit setups where the DNS server itself is located behind the VPN — for example, when the VPN server side has a Windows Domain Controller (DC) providing DNS via DHCP.

In this situation:

  • The client must always send all DNS queries through the VPN, because the only valid resolver is the remote DC.

  • The DC DNS decides where each domain should go — internal or external.

  • If GL.iNet splits DNS before it reaches the DC, the lookup fails or resolves with wrong results.

That’s why in my setup, include breaks the DNS logic:
DNS packets never reach the remote DC, because they’re already handled locally.
The only way to make it work is exclude, which forces everything through the tunnel first — but that requires a huge exclude list.

So the problem is not that DNS cannot “reach” the DC — it’s that GL.iNet’s current policy engine decides DNS direction before the DNS query can even reach the correct server.

For corporate or AD environments, DNS must be treated as service routing, not destination-based routing.

I’ll still collect the ipset list and syslog for your reference.

Best regards,
Dustin
___________________________________________
BusyBox v1.33.2 (2025-09-04 11:11:17 UTC) built-in shell (ash)


  _______                     ________        __
 |       |.-----.-----.-----.|  |  |  |.----.|  |_
 |   -   ||  _  |  -__|     ||  |  |  ||   _||   _|
 |_______||   __|_____|__|__||________||__| |____|
          |__| W I R E L E S S   F R E E D O M

OpenWrt 21.02-SNAPSHOT,

root@GL-XE3000:~# ipset list
Name: dst_net10
Type: hash:net
Revision: 6
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 2760
References: 1
Number of entries: 38
Members:
196.0.0.0/6
193.0.0.0/8
112.0.0.0/5
16.0.0.0/4
126.0.0.0/8
176.0.0.0/4
173.0.0.0/8
192.192.0.0/10
32.0.0.0/3
1.0.0.0/8
160.0.0.0/5
64.0.0.0/3
12.0.0.0/6
168.0.0.0/6
192.170.0.0/15
194.0.0.0/7
200.0.0.0/5
120.0.0.0/6
172.0.0.0/12
10.0.0.0/8
192.176.0.0/12
192.128.0.0/11
172.64.0.0/10
4.0.0.0/6
124.0.0.0/7
128.0.0.0/3
172.128.0.0/9
208.0.0.0/4
8.0.0.0/7
192.0.0.0/9
192.169.0.0/16
192.160.0.0/13
172.32.0.0/11
174.0.0.0/7
192.172.0.0/14
2.0.0.0/7
96.0.0.0/4
11.0.0.0/8

Name: dst_net10_6
Type: hash:net
Revision: 6
Header: family inet6 hashsize 1024 maxelem 65536
Size in memory: 1240
References: 0
Number of entries: 0
Members:

Name: dst_net1832
Type: hash:net
Revision: 6
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 2824
References: 1
Number of entries: 37
Members:
172.32.0.0/11
120.0.0.0/6
126.0.0.0/8
32.0.0.0/3
192.0.0.0/9
174.0.0.0/7
11.0.0.0/8
176.0.0.0/4
192.170.0.0/15
192.128.0.0/11
192.176.0.0/12
124.0.0.0/7
192.192.0.0/10
8.0.0.0/7
194.0.0.0/7
208.0.0.0/4
4.0.0.0/6
172.0.0.0/12
172.64.0.0/10
193.0.0.0/8
192.169.0.0/16
173.0.0.0/8
64.0.0.0/3
160.0.0.0/5
192.172.0.0/14
1.0.0.0/8
12.0.0.0/6
196.0.0.0/6
96.0.0.0/4
172.128.0.0/9
200.0.0.0/5
168.0.0.0/6
16.0.0.0/4
128.0.0.0/3
192.160.0.0/13
2.0.0.0/7
112.0.0.0/5

Name: dst_net1832_6
Type: hash:net
Revision: 6
Header: family inet6 hashsize 1024 maxelem 65536
Size in memory: 1240
References: 0
Number of entries: 0
Members:

Name: GL_MAC_BLOCK
Type: hash:mac
Revision: 0
Header: hashsize 1024 maxelem 65536
Size in memory: 200
References: 1
Number of entries: 0
Members:
root@GL-XE3000:~#

________________________________
Fri Oct 31 11:53:47 2025 daemon.info dnsmasq[21110]: using nameserver x.x.x.x#53
Fri Oct 31 11:53:47 2025 daemon.info dnsmasq[21110]: using nameserver 8.8.8.8#53
Fri Oct 31 11:53:47 2025 daemon.info dnsmasq[21110]: using nameserver 8.8.4.4#53
Fri Oct 31 11:53:47 2025 daemon.info gl-repeater[3224]: (repeater.lua:1731) interface "wwan" up
Fri Oct 31 11:53:47 2025 daemon.notice netifd: wwan (21667): PING x.x.x.x (x.x.x.x): 56 data bytes
Fri Oct 31 11:53:47 2025 daemon.info glc: (common.c:1741) Parse +COPS response success
Fri Oct 31 11:53:47 2025 daemon.notice netifd: wwan (21667): 64 bytes from x.x.x.x: seq=0 ttl=64 time=9.368 ms
Fri Oct 31 11:53:47 2025 daemon.notice netifd: wwan (21667):
Fri Oct 31 11:53:47 2025 daemon.notice netifd: wwan (21667): --- x.x.x.x ping statistics ---
Fri Oct 31 11:53:47 2025 daemon.notice netifd: wwan (21667): 1 packets transmitted, 1 packets received, 0% packet loss
Fri Oct 31 11:53:47 2025 daemon.notice netifd: wwan (21667): round-trip min/avg/max = 9.368/9.368/9.368 ms
Fri Oct 31 11:53:47 2025 user.notice kmwan: config json str={ "op": 6, "data": { } }
Fri Oct 31 11:53:48 2025 user.notice kmwan: config json str={ "op": 2, "data": { "cells": [ { "interface": "wwan", "netdev": "apclix0", "track_mode": "force", "addr_type": 4, "force_ip": "x.x.x.x", "tracks": [ { "type": "ping", "ip": "1.1.1.1" }, { "type": "ping", "ip": "8.8.8.8" }, { "type": "ping", "ip": "208.67.222.222" }, { "type": "ping", "ip": "208.67.220.220" } ] } ] } }
Fri Oct 31 11:53:48 2025 kern.debug kernel: [ 67.893721] [add_dev_config 319]add node success. iface:wwan, dev:apclix0, ifindex:12
Fri Oct 31 11:53:48 2025 daemon.info gl-repeater[3224]: (repeater.lua:1738) interface wwan status offline
Fri Oct 31 11:53:48 2025 daemon.info glc: (common.c:1741) Parse +COPS response success
Fri Oct 31 11:53:48 2025 user.notice firewall: Reloading firewall due to ifup of wwan (apclix0)
Fri Oct 31 11:53:48 2025 daemon.info avahi-daemon[7852]: Joining mDNS multicast group on interface apclix0.IPv6 with address fe80::ecee:71ff:fe39:7758.
Fri Oct 31 11:53:48 2025 daemon.info avahi-daemon[7852]: New relevant interface apclix0.IPv6 for mDNS.
Fri Oct 31 11:53:48 2025 daemon.info avahi-daemon[7852]: Registering new address record for fe80::ecee:71ff:fe39:7758 on apclix0.*.
Fri Oct 31 11:53:49 2025 kern.debug kernel: [ 69.236179] Set_ByPassCac_Proc(): set CAC value to 1
Fri Oct 31 11:53:49 2025 kern.debug kernel: [ 69.538442] BcnCheck start after 2500 ms (ra0)
Fri Oct 31 11:53:49 2025 kern.debug kernel: [ 69.542886] BcnCheck start after 2500 ms (ra0)
Fri Oct 31 11:53:49 2025 daemon.info gl-repeater[3224]: (repeater-portal.lua:395) portal detecting...
Fri Oct 31 11:53:50 2025 daemon.info gl-repeater[3224]: (repeater.lua:1738) interface wwan status online
Fri Oct 31 11:53:50 2025 daemon.info gl-repeater[3224]: (repeater-portal.lua:418) not found portal
Fri Oct 31 11:53:52 2025 cron.err crond[7473]: time disparity of 3112 minutes detected
Fri Oct 31 12:38:10 2025 kern.debug kernel: [ 2729.580780] entrytb_aid_aquire(): found non-occupied aid:5, allocated from:4
Fri Oct 31 12:38:10 2025 kern.warn kernel: [ 2729.587881] 7981@C13L2,MacTableInsertEntry() 1577: New Sta:xx:xx:xx:xx:xx:xx
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.598280] 7981@C08L3,ap_cmm_peer_assoc_req_action() 1714: Recv Assoc from STA - xx:xx:xx:xx:xx:xx
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.607739] 7981@C08L3,ap_cmm_peer_assoc_req_action() 2241: ASSOC Send ASSOC response (Status=0)...
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.616940] 7981@C01L3,wifi_sys_conn_act() 1115: wdev idx = 2
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.623118] 7981@C08L3,hw_ctrl_flow_v2_connt_act() 215: wdev_idx=2
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.747008] 7981@C15L3,WPABuildPairMsg1() 5310: <=== send Msg1 of 4-way
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.753688] 7981@C15L3,PeerPairMsg2Action() 6303: ===>Receive msg 2
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.760474] 7981@C15L3,WPABuildPairMsg3() 5595: <=== send Msg3 of 4-way
Fri Oct 31 12:38:10 2025 kern.notice kernel: [ 2729.767173] 7981@C15L3,PeerPairMsg4Action() 6734: ===>Receive msg 4
Fri Oct 31 12:38:10 2025 kern.warn kernel: [ 2729.778721] 7981@C15L2,PeerPairMsg4Action() 7098: AP SETKEYS DONE(rax0) - AKMMap=WPA2PSK, PairwiseCipher=AES, GroupCipher=AES, wcid=2 from xx:xx:xx:xx:xx:xx
Fri Oct 31 12:38:13 2025 daemon.info dnsmasq-dhcp[21110]: DHCPDISCOVER(br-lan) xx:xx:xx:xx:xx:xx
Fri Oct 31 12:38:13 2025 daemon.info dnsmasq-dhcp[21110]: DHCPOFFER(br-lan) x.x.x.x xx:xx:xx:xx:xx:xx
Fri Oct 31 12:38:13 2025 daemon.info dnsmasq-dhcp[21110]: DHCPREQUEST(br-lan) x.x.x.x xx:xx:xx:xx:xx:xx
Fri Oct 31 12:38:13 2025 daemon.warn dnsmasq-dhcp[21110]: Ignoring domain dwnet.intern for DHCP host name x
Fri Oct 31 12:38:13 2025 daemon.info dnsmasq-dhcp[21110]: DHCPACK(br-lan) x.x.x.x xx:xx:xx:xx:xx:xx x
Fri Oct 31 12:39:21 2025 kern.notice kernel: [ 2800.221288] 7981@C15L3,PeerGroupMsg1Action() 7167: ===>Receive group msg 1
Fri Oct 31 12:53:21 2025 daemon.notice netifd: modem_0001_4 (12236): udhcpc: sending renew to x.x.x.x
Fri Oct 31 12:53:21 2025 daemon.notice netifd: modem_0001_4 (12236): udhcpc: lease of x.x.x.x obtained, lease time 7200
Fri Oct 31 12:53:21 2025 user.notice kmwan: config json str={ "op": 6, "data": { } }
Fri Oct 31 13:07:53 2025 authpriv.info dropbear[13424]: Child connection from x.x.x.x:12246
Fri Oct 31 13:08:02 2025 authpriv.notice dropbear[13424]: Password auth succeeded for 'x' from x.x.x.x:12246

Hello,

I understand what you are requesting, and I think firmware v4.7 and earlier may be more in line with your needs. The earlier firmware had not yet developed or supported DNS split, so all DNS traffic went to the VPN.

For your scenario, all DNS lookups would use the VPN DNS, which matches your purpose.

But most ordinary users may prefer that DNS can also be split (WAN and VPN) instead of all of it going to the VPN DNS.

If DNS is not split, the resolved IP is not necessarily the best/fastest one. For example, suppose a user configures an exclude list and the VPN tunnel ends at a server in the United States, with all DNS going to the remote VPN DNS (v4.7 firmware). If the user just wants to access a web page in Hong Kong whose domain is on the exclude list (the HTTP/S session should go to the WAN), the IP returned by the VPN DNS is in the United States, so page loading and the response speed of page elements are greatly reduced, since the connection goes to the US server.
On the other hand, streaming services such as HBO may have been added to the exclude list so that they go to the WAN; but with all DNS using the VPN DNS (v4.7 firmware), the app can detect this, misjudge the client as using a VPN, and deny access. I assume the reason is that the IP returned by the VPN DNS does not match, or does not belong to, the local area of the ISP the WAN is connected to.

That is why we had to improve the firmware and develop DNS split: to ensure that each domain is resolved to the best IP, which is more in line with user expectations and ensures higher availability.

For your scenario, I think there are two options:

  1. Downgrade the firmware to v4.7 or earlier.
  2. Set the DNS server of all LAN clients to the DC IP, or manually set the router's DNS to the DC IP. This should make all DNS requests go to the DC. Then configure the exclude list according to your needs; it will not affect the DNS settings, as in my previous test.
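Option 2 can be scripted on OpenWrt-based firmware by advertising the DC as DNS server via DHCP option 6. A minimal sketch, assuming the DC's address is 10.0.0.10 (replace with the real DC IP):

```shell
# Advertise the DC (assumed 10.0.0.10) as DNS server to all LAN DHCP clients.
# DHCP option 6 is "Domain Name Server".
uci add_list dhcp.lan.dhcp_option='6,10.0.0.10'
uci commit dhcp
/etc/init.d/dnsmasq restart
```

Clients will pick up the new DNS server on their next DHCP renewal.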

Hello Bruce,

thank you for the detailed explanation – now everything makes sense regarding the difference between the older (v4.7) and newer firmware behavior.

I fully understand why the DNS split feature was introduced – for most home or private users it’s beneficial that DNS can be split between WAN and VPN for performance and streaming services.
However, our use case is fundamentally different:

  • We deploy GL.iNet routers in managed business environments with Windows Domain Controllers.

  • The DC is always the DNS server for all clients and is located behind the VPN.

  • DHCP on the GL router points to the DC as DNS – clients must not and cannot modify DNS settings locally.

  • Many users do not have administrative rights to change network settings, so manually configuring DNS on each PC is not an option.

Because of the new split-DNS logic, the router now intercepts DNS traffic before it reaches the remote DC.
This breaks the Active Directory environment (domain login, group policies, Kerberos, internal name resolution, etc.).

Downgrading to v4.7 could work temporarily, but it is not a long-term solution.
We operate and maintain a large number of GL.iNet routers, and managing firmware downgrades or custom images across all devices is not practical.

What would really help is a configurable option in the GUI such as:

“Always send all DNS traffic through VPN tunnel (disable DNS split).”

That way, the split behavior stays the default for private users, but professional or enterprise users like us can explicitly enable full VPN DNS routing when required.

Thank you again for all the support and clear explanations — I really appreciate your time.
I hope your R&D team can consider adding such an option in a future firmware release.

Best regards,
Dustin