Hello, I just got the GL-RM1 a few days ago.
I upgraded it to firmware V1.8.2 release1 before first use, via the desktop app GLKVM V1.4.0 release2.
I occasionally experience connection drops when using the desktop app. I exported the logs a couple of times and ran them through Claude, which suggested trying a browser instead to see whether the issue still occurred; it did not.
After investigating, the issue appears to be that the desktop app always routes through the GL cloud relay (av99082.r&lt;N&gt;.glkvm.top), even when the PC and the GL-RM1 are on the same LAN. I confirmed this by watching kvmd's HTTP access output on the device while using the app: every request arrives from 127.0.0.1 with a referer of https://av99082.r&lt;N&gt;.glkvm.top/, indicating it is being proxied in via the GL cloud tunnel (mptun0) rather than reaching the device directly. The relay subdomain also rotates between sessions (I have seen r4 and r11 in the same uptime window), which I assume is normal GSLB behavior on your end.
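For reference, the check I did can be approximated with a small script. The field layout and sample values here are my own stand-ins based on what I saw (peer address plus Referer header); the exact kvmd access-log format may differ:

```python
import re

# A glkvm.top relay referer, e.g. https://av99082.r4.glkvm.top/ (subdomain rotates).
RELAY_RE = re.compile(r"https://[\w-]+\.r\d*\.glkvm\.top/")

def is_relayed(peer_ip: str, referer: str) -> bool:
    """A request arriving from loopback with a glkvm.top referer has been
    proxied in through the cloud tunnel rather than hitting the LAN IP."""
    return peer_ip == "127.0.0.1" and bool(RELAY_RE.match(referer))

# Hypothetical (peer_ip, referer) pairs as seen in kvmd's HTTP access output:
requests = [
    ("127.0.0.1", "https://av99082.r4.glkvm.top/"),   # via cloud relay
    ("127.0.0.1", "https://av99082.r11.glkvm.top/"),  # relay, rotated subdomain
    ("10.0.0.42", "https://10.0.0.17/"),              # direct LAN browser session
]

for peer, ref in requests:
    print(peer, "relay" if is_relayed(peer, ref) else "direct")
```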
To isolate where the failure was happening, I temporarily loaded the same UI in a regular browser at https://10.0.0.17 (the device's LAN IP). I want to be clear that this was only a diagnostic test to narrow down the problem. I am not trying to use the browser as my workflow, and the browser path is not a solution I am asking for. My intended workflow is the desktop GLKVM app, which is what I bought the device for.
What the diagnostic test showed:
- With the app, every request is proxied through the cloud relay, and sessions reliably drop within an hour or two.
- With the browser pointed directly at the LAN IP, requests come in from the PC's LAN address with no cloud hop, and the session has stayed up cleanly across multi-hour windows.
That delta isolates the failure to the cloud relay path that the app is forced through. The device itself, its Ethernet link, its MQTT connection to the GL cloud, and its WireGuard tunnel all stay up the whole time, so the device is never actually offline when the app shows the failure screen.
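The isolation logic is simple enough to state as a truth table. This sketch (function and label names are mine, not from any GL tool) shows how the two independent observations pin the failure on the relay path:

```python
def diagnose(device_up_on_lan: bool, relay_session_alive: bool) -> str:
    """Combine two independent observations: a direct LAN check of the
    device and the state of the app's relay-backed session."""
    if not device_up_on_lan:
        return "device or local network problem"
    if relay_session_alive:
        return "healthy"
    # Device reachable directly, but the relay-backed session dropped:
    # the fault is isolated to the cloud relay path the app uses.
    return "cloud relay path failure"

# During every app-side drop I observed, the device was still reachable:
print(diagnose(device_up_on_lan=True, relay_session_alive=False))
# -> cloud relay path failure
```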
When a drop happens with the app, the pattern in the logs looks like this:
- Janus reports `WebRTC resources freed` and `Memsink closed` for the active session, with no preceding `ICE failed` or DTLS alert.
- About 20-25 seconds later, kvmd removes the WebSocket client (`Removed client socket: WsSession(...)`).
- The app surfaces the "Connection failed, check connection status" screen, and a fresh session can be established immediately by clicking reconnect.
The delay between the WebRTC channel dying and the WebSocket finally timing out is consistent with the underlying transport (mptun0 / cloud relay) freezing briefly, with no explicit failure logged on the device side. Whatever is happening is happening above the device's networking stack, somewhere in the cloud relay path that the app uses.
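To make the 20-25 second pattern concrete, here is roughly how I matched the two events up. The timestamp format and line text are stand-ins for the real Janus/kvmd log lines, but the timing logic is the same:

```python
from datetime import datetime

# Stand-in log lines; real Janus/kvmd output differs, these are illustrative.
janus_teardown = "2024-05-01 14:02:11 Janus: WebRTC resources freed; Memsink closed"
kvmd_removal   = "2024-05-01 14:02:33 kvmd: Removed client socket: WsSession(...)"

def ts(line: str) -> datetime:
    """Parse the leading 'YYYY-MM-DD HH:MM:SS' timestamp of a log line."""
    return datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")

gap = (ts(kvmd_removal) - ts(janus_teardown)).total_seconds()
print(f"WebSocket outlived the WebRTC session by {gap:.0f} s")

# A gap in the ~20-25 s band, with no ICE/DTLS error in between, matches a
# transport (relay) freeze followed by the WebSocket read timeout firing.
assert 20 <= gap <= 25
```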
A few smaller observations that may or may not be related, but seemed worth mentioning:
- Every WebRTC negotiation logs `Waiting for candidates-done callback... (slow gathering, ... Consider enabling full-trickle)` from Janus, followed by `Still waiting...` a second later, with the DTLS handshake completing 1-3 seconds in. This happens on every connect and reconnect.
- After a cold boot, the cloud tunnel takes around 40 minutes to come up, because gl-cloud's initial `fetch server` call fails with "Timestamp Invalid" until NTP corrects the clock from 1970. During that window, the app cannot connect at all because Janus's TURN config (`/tmp/turnserver.json`) does not exist yet and Janus logs `Invalid response: missing username`.
- rtty logs `SSL certificate error(20): unable to get local issuer certificate` on registration. Registration still succeeds, but the local CA bundle does not appear to validate the rtty server's cert.
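On the boot-time issue specifically: the fix could be as simple as gating the first cloud auth attempt on a sane clock instead of burning retries that are guaranteed to fail. A sketch of the idea, where the threshold and function names are my own assumptions:

```python
import time

# Assumption: any clock earlier than this must still be at the 1970 default,
# so a signed "fetch server" request would fail with "Timestamp Invalid".
MIN_SANE_EPOCH = 1_700_000_000  # roughly Nov 2023, before this firmware shipped

def clock_is_sane(now: float) -> bool:
    return now >= MIN_SANE_EPOCH

def should_attempt_cloud_auth(now: float) -> bool:
    """Wait for NTP to correct the clock rather than retrying doomed auth."""
    return clock_is_sane(now)

print(should_attempt_cloud_auth(0.0))          # clock still at 1970 -> False
print(should_attempt_cloud_auth(time.time()))  # NTP-corrected clock -> True
```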
What I'd like to see from a product standpoint, in priority order:
- The cloud relay path that the app uses needs to be more resilient to brief transport hiccups. Whatever is silently freezing the session for ~20 seconds and then triggering a teardown should at minimum surface a clearer cause on the device side, and ideally not tear the session down at all for transient blips. This is the actual bug from my perspective.
- A "connect via LAN" or LAN auto-discovery option in the desktop app would be a meaningful improvement for the case where the PC and the device are on the same subnet. Today, even with the device sitting on the same switch, every byte goes out to the cloud and back, which adds latency and creates the failure mode above. This would also reduce load on your relay infrastructure.
- The post-boot 40-minute window where the app cannot connect is a poor first impression after any power cycle. A hardware RTC, or eager NTP at boot before gl-cloud's first auth attempt, would close that gap.
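For the LAN option in particular, the app-side logic need not be complicated: probe the device's LAN address first and fall back to the relay. A hedged sketch under my own naming (the probe itself is stubbed; in a real client it would be a short TCP/TLS connection attempt):

```python
from typing import Callable, Optional

def pick_endpoint(lan_ip: Optional[str],
                  relay_host: str,
                  probe: Callable[[str], bool]) -> str:
    """Prefer a direct LAN connection when the device answers locally;
    fall back to the cloud relay otherwise."""
    if lan_ip and probe(lan_ip):
        return f"https://{lan_ip}"      # same-subnet path, no cloud hop
    return f"https://{relay_host}"      # remote case: relay still works

# Simulated probes, since this sketch has no real device to reach:
print(pick_endpoint("10.0.0.17", "av99082.r4.glkvm.top", probe=lambda ip: True))
print(pick_endpoint(None, "av99082.r4.glkvm.top", probe=lambda ip: False))
```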
Happy to provide logs privately if that would help diagnose the relay-side behavior.