The lspci output is good news. I was worried that your modem might have yet another different PCI product/subproduct id, but it's the same as mine: you can see the product id 0x0308 and sub-id 0x5201 so that patch is correct for your modem too.
I think that patch isn't taking effect for some reason. To confirm: what does
dmesg | grep mhi-pci-generic
return for you? When the patch is correctly applied, you should see
Assuming you see the latter, please could I take a look at your drivers/bus/mhi/host/pci_generic.c to see if there's some reason it hasn't applied right on the older kernel? You can email me at chris@arachsys.com if that's easier.
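In case it helps others following along, a quick sanity check is to grep the source tree for the new IDs, and to confirm which driver actually bound to the device on the router (a sketch; adjust paths to your build tree):

```shell
# Check whether the patched IDs made it into the driver source:
grep -n '0x0308\|0x5201' drivers/bus/mhi/host/pci_generic.c

# On the running system, see which kernel driver is bound to the modem:
lspci -nnk | grep -A3 -i mhi
```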
@SpitzAX3000 this guide is great, thank you for putting it together. My concern is how friendly vanilla OpenWrt is going to be with the modem side of the router: signal status, modem information, querying the modem, changing from NSA to SA, blocking/unblocking bands, etc. Are there LuCI modules or tools to help with that, or does everything have to be done via the command line? I think this is probably the main advantage of using the GL.iNet firmware.
It would be nice to have an updated guide bundling all the work/discovery/changes you and @ChrisW have done. Unfortunately for me, it has become too technical and risky to apply to my X3000, because it is my main gateway with TMHI, and if I brick it in the process I will be isolated.
Please keep updating and maintaining this thread, it's invaluable for all of us who want to use plain vanilla OpenWrt. My cascading router behind the X3000, which acts primarily as a modem, is an MT6000 running 23.05.4.
I haven't explored LuCI tools to view such info, but there should be some. Sometimes it's to your advantage to set up the connection using the CLI, as these GUI packages can sometimes cause issues (as we have seen with some of the GL.iNet firmware). Not to mention that using one or two commands to start the 5G connection isn't difficult, nor does it require deep technical knowledge.
My guide should be sufficient to get you a very stable vanilla OpenWrt. Chris and I have been discussing something else: patching the stock Linux driver to recognize the modem module in PCIe mode. Once the Linux driver is patched as per Chris's fix, you won't even need the driver-compilation part of my guide.
If you are happy with the GL.iNet firmware and are having few or negligible issues, then just stick with it.
Man, how slow are we talking here? I've got a small NUC (J4125) that I'm trying to compile it on, in an LXC container on Proxmox, and it's been like 6 hours now and it's still pegged on make -j $(($(nproc)+1)) toolchain/install. I don't have any big x86 machines anymore to run this on natively.
Yes, it’s gonna be very slow. I compiled it on a NUC with Linux installed and it took about 40 minutes.
If you don’t want to install Linux on your NUC, you can try booting Ubuntu/Mint from a fast USB drive. Using containers and virtual machines won’t fully utilize all of your CPU’s cores.
Agreed! If you install vanilla OpenWrt and then compile the drivers a few days later, it won’t work, as the kernel versions are being bumped rapidly!
Yes, and don’t forget to also add the -d option. I found adding -d to be better than using udhcpc. Although you need to run it on boot, you can automate it rather than invoking it manually.
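One way to automate it on OpenWrt is via /etc/rc.local (a sketch only; "your-connect-command" is a placeholder for the actual connect command from the guide, with the -d option mentioned above):

```shell
#!/bin/sh
# /etc/rc.local sketch (hypothetical): bring the 5G connection up at boot.
# Replace 'your-connect-command -d' with the real command from the guide.
( sleep 20; your-connect-command -d ) &   # give the modem time to enumerate
exit 0
```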
For the second command you mentioned, there's no need. Just go to the dnsmasq settings and point it at whichever resolvers you prefer.
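For reference, the same dnsmasq change can be made from the CLI with UCI instead of LuCI (a sketch; 9.9.9.9 is just a placeholder, substitute your preferred resolver):

```shell
# Sketch: forward DNS queries to a specific upstream resolver via dnsmasq.
uci add_list dhcp.@dnsmasq[0].server='9.9.9.9'
uci commit dhcp
/etc/init.d/dnsmasq restart
```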
I think MBIM is the supported interface over PCIe MHI, although QMI can be tunnelled within MBIM for commands that aren't supported directly. Allegedly uqmi has the --mbim flag to support this, although I've never tried it as I've not needed anything more complicated than basic connect.
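If anyone wants to experiment with it, the invocation would look something like this (untested per the caveat above; the device path is an assumption, substitute whatever node your modem exposes):

```shell
# Sketch: send a QMI request tunnelled over MBIM using uqmi's --mbim flag.
uqmi -d /dev/wwan0mbim0 --mbim --get-signal-info
```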
I can't investigate the packet drops myself I'm afraid - my 100Mbps mobile signal here is nowhere near fast enough to be able to stress the device! It's a little puzzling though: there shouldn't be that much overhead and the mt7981 is nice and fast.
[Edit: the modem may not support in-band address discovery with DHCPv4/6; not all of them do. You might need to use the addresses returned by a BASIC_CONNECT_IP_CONFIGURATION query, but I think the OpenWrt umbim netifd script does that out of the box.]
I have now managed to get both MBIM and QMI to work, although the qmicli tool cannot make a connection when the modem is in QMI mode; instead I am using mbimcli to connect.
When I have time, I will try to find out the cause of the dropped packets. Initially it seems to me it's because of the datagram count and max aggregation size configured by the driver:
# qmicli -d /dev/wwan0mbim0 --wda-get-data-format
[/dev/wwan0mbim0] Successfully got data format
QoS flow header: no
Link layer protocol: 'raw-ip'
Uplink data aggregation protocol: 'mbim'
Downlink data aggregation protocol: 'mbim'
NDP signature: '5460041'
Downlink data aggregation max datagrams: '32'
Downlink data aggregation max size: '32768'
I was able to change these values using qmicli, but when the connection starts, they revert to the defaults. If I change them on the fly after establishing a connection, the Internet stops working.
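For anyone wanting to reproduce this, the change would look roughly like the following (a sketch: the key names follow libqmi's --wda-set-data-format syntax, the values are examples only, and as noted above the driver appears to reset them when the connection comes up):

```shell
# Sketch: try raising the downlink aggregation limits before connecting.
# Values are examples; the driver may override them on connect.
qmicli -d /dev/wwan0mbim0 --device-open-proxy \
  --wda-set-data-format="link-layer-protocol=raw-ip,ul-protocol=mbim,dl-protocol=mbim,dl-datagram-max-size=65536,dl-max-datagrams=64"
```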
I'm intrigued that you're seeing packet loss with both drivers. I wonder if that means it's something intrinsic to the modem itself, or its default configuration?
(You don't get any kind of packet loss using the same measurement methodology over the ethernet ports I assume? Sounds like your mobile connection is practically the same speed as gigabit ethernet (!) so this isn't a completely unfair comparison.)
Yes, this is a minor bug in the modem itself - the sequence numbers are only used for debugging so it's harmless if they're broken, but the driver warns anyway.
It happens on both usb cdc-wdm and pcie-mhi, but in usb mode it's been downgraded to a debug message (presumably because it's so common) whereas in pcie mode it gets reported more loudly to dmesg.