AdGuard Home crashes all the time: out of memory

For about two weeks now it has been impossible for me to use AdGuard Home :frowning:

I launch it; it runs for about five minutes, then crashes and there is no more internet.

Here is the kernel log from LuCI:

Today I installed the latest 4.7 beta firmware to see if it would fix the problem, but it did not.

[  500.264020] Hardware name: GL.iNet GL-X3000 (DT)
[  500.268629] Call trace:
[  500.271077]  dump_backtrace+0x0/0x198
[  500.274729]  show_stack+0x14/0x20
[  500.278034]  dump_stack+0xb4/0xf4
[  500.281337]  dump_header+0x40/0x180
[  500.284814]  oom_kill_process+0x1b4/0x1b8
[  500.288811]  out_of_memory+0x204/0x310
[  500.292550]  __alloc_pages_slowpath+0x860/0x988
[  500.297069]  __alloc_pages_nodemask+0x1dc/0x248
[  500.301587]  do_read_cache_page+0x2ac/0x670
[  500.305757]  read_cache_page+0x10/0x18
[  500.309493]  page_get_link+0x34/0x118
[  500.313142]  vfs_get_link+0x38/0x40
[  500.316620]  ovl_get_link+0x3c/0x60
[  500.320107]  trailing_symlink+0x1c8/0x228
[  500.324107]  path_openat+0x264/0xff8
[  500.327670]  do_filp_open+0x60/0xc0
[  500.331148]  do_open_execat+0x60/0x1d0
[  500.334885]  open_exec+0x3c/0x60
[  500.338104]  load_elf_binary+0x1cc/0x1428
[  500.342102]  search_binary_handler.part.60+0xac/0x278
[  500.347150]  search_binary_handler+0x18/0x28
[  500.351410]  load_script+0x1e4/0x280
[  500.354974]  search_binary_handler.part.60+0xac/0x278
[  500.360013]  __do_execve_file.isra.63+0x534/0x740
[  500.364703]  __arm64_sys_execve+0x40/0x50
[  500.368702]  el0_svc_common.constprop.2+0x7c/0x110
[  500.373491]  el0_svc_handler+0x20/0x80
[  500.377230]  el0_svc+0x8/0x680
[  500.380338] Mem-Info:
[  500.382625] active_anon:76349 inactive_anon:913 isolated_anon:0
[  500.382625]  active_file:376 inactive_file:305 isolated_file:0
[  500.382625]  unevictable:0 dirty:0 writeback:0 unstable:0
[  500.382625]  slab_reclaimable:1828 slab_unreclaimable:15327
[  500.382625]  mapped:498 shmem:1060 pagetables:535 bounce:0
[  500.382625]  free:1136 free_pcp:255 free_cma:0
[  500.415239] Node 0 active_anon:305396kB inactive_anon:3652kB active_file:1564kB inactive_file:1736kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:2512kB dirty:0kB writeback:0kB shmem:4240kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[  500.438100] DMA32 free:4544kB min:4096kB low:5120kB high:6144kB active_anon:305396kB inactive_anon:3652kB active_file:844kB inactive_file:1652kB unevictable:0kB writepending:0kB present:520512kB managed:491532kB mlocked:0kB kernel_stack:2368kB pagetables:2140kB bounce:0kB free_pcp:892kB local_pcp:732kB free_cma:0kB
[  500.465973] lowmem_reserve[]: 0 0 0
[  500.469467] DMA32: 146*4kB (UME) 264*8kB (UME) 57*16kB (UE) 15*32kB (UE) 7*64kB (UE) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 4536kB
[  500.482492] 1919 total pagecache pages
[  500.486242] 0 pages in swap cache
[  500.489571] Swap cache stats: add 0, delete 0, find 0/0
[  500.494804] Free swap  = 0kB
[  500.497702] Total swap = 0kB
[  500.500631] 130128 pages RAM
[  500.503528] 0 pages HighMem/MovableOnly
[  500.507386] 7245 pages reserved
[  500.510553] Tasks state (memory values in pages):
[  500.515292] [  pid  ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
[  500.523970] [   1236]    81  1236      316       38    28672        0             0 ubusd
[  500.532167] [   1385]     0  1385      236       16    28672        0             0 urngd
[  500.540350] [   3896]     0  3896      253       17    24576        0             0 fcgiwrap
[  500.548792] [   3929]     0  3929      211       17    24576        0             0 fwdd
[  500.556933] [   3931]     0  3931      259       22    28672        0             0 fcgiwrap
[  500.565386] [   3932]     0  3932      259       21    28672        0             0 fcgiwrap
[  500.573831] [   3933]     0  3933      259       21    28672        0             0 fcgiwrap
[  500.582307] [   3934]     0  3934      259       21    28672        0             0 fcgiwrap
[  500.590761] [   4091]   514  4091      295       40    32768        0             0 logd
[  500.598853] [   4143]     0  4143      678      201    36864        0             0 rpcd
[  500.606959] [   4295]     0  4295      528      113    32768        0             0 lua
[  500.614961] [   4343]     0  4343     1111      414    36864        0             0 eco
[  500.622965] [   4733]     0  4733      253       17    28672        0             0 dropbear
[  500.631402] [   4877]     0  4877     1556      138    40960        0             0 modem_AT
[  500.639878] [   4951]     0  4951      236       18    32768        0             0 carrier-monitor
[  500.648943] [   5044]     0  5044      480       69    32768        0             0 netifd
[  500.657215] [   5306]     0  5306     1804      160    45056        0             0 gl_nas_diskmana
[  500.666316] [   6482]     0  6482     1349      155    40960        0             0 uhttpd
[  500.674607] [   6691]     0  6691      409       34    32768        0             0 dbus-daemon
[  500.683308] [   6871] 65534  6871      545       74    32768        0             0 avahi-daemon
[  500.692135] [   8472]     0  8472      466       82    32768        0             0 mount.ntfs-3g
[  500.701071] [   9332]     0  9332     2407      382    49152        0             0 nginx
[  500.709268] [   9428]     0  9428     3778     1297    61440        0             0 nginx
[  500.717443] [   9429]     0  9429     3680     1224    61440        0             0 nginx
[  500.725621] [   9903]     0  9903     1458      100    36864        0             0 sms_manager
[  500.734349] [  10120]     0 10120      343       87    24576        0             0 smsd
[  500.742460] [  10165]     0 10165      345       95    28672        0             0 smsd
[  500.750575] [  10415]     0 10415     1601      105    45056        0             0 gl_b2r_daemon
[  500.759464] [  10684]     0 10684      453      156    28672        0             0 sh
[  500.767388] [  10892]     0 10892     8686      756   102400        0             0 smbd
[  500.775507] [  10893]     0 10893     5504      318    73728        0             0 nmbd
[  500.783616] [  10940]     0 10940     8367      402    94208        0             0 smbd-notifyd
[  500.792407] [  10941]     0 10941     8365      400    90112        0             0 cleanupd
[  500.800873] [  10957]     0 10957      311       15    24576        0             0 ntpd
[  500.808985] [  11092]     0 11092   308113      387    73728        0             0 lpa_arm64_v1.47
[  500.818065] [  11153]     0 11153      766      188    32768        0             0 eco
[  500.826094] [  11264]     0 11264     2371      391    49152        0             0 lua
[  500.834112] [  11327]     0 11327      205       13    28672        0             0 gl_fan
[  500.842394] [  11392]     0 11392     1963      175    40960        0             0 gl_nas_sys
[  500.851050] [  11440]     0 11440     1963      175    40960        0             0 gl_nas_sys
[  500.859679] [  11487]     0 11487      814      254    36864        0             0 eco
[  500.867689] [  13021]     0 13021      311       15    24576        0             0 crond
[  500.875921] [  13522]     0 13522      310       15    28672        0             0 udhcpc
[  500.884228] [  16103]     0 16103      277       18    24576        0             0 qcm
[  500.892287] [  16490]     0 16490      310       14    24576        0             0 udhcpc
[  500.900574] [  18376]   453 18376      735       45    36864        0             0 dnsmasq
[  500.908949] [  18384]     0 18384      734       42    36864        0             0 dnsmasq
[  500.917302] [  21491]     0 21491   382318    68647   704512        0             0 AdGuardHome
[  500.926001] [  21518]     0 21518     1141      209    36864        0             0 eco
[  500.934038] [  21525]     0 21525      310       14    28672        0             0 ash
[  500.942054] [  21530]     0 21530      275       10    24576        0             0 traffic_statist
[  500.951164] [  21531]     0 21531      310       14    24576        0             0 ash
[  500.959172] [  21534]     0 21534      214       15    24576        0             0 sleep
[  500.967351] [  21677]     0 21677      310       13    24576        0             0 login
[  500.975576] [  21851]     0 21851      310       13    28672        0             0 sh
[  500.983510] [  21852]     0 21852      310       13    28672        0             0 sh
[  500.991433] [  21853]     0 21853      310       10    28672        0             0 mount
[  500.999616] [  21854]     0 21854      453      155    28672        0             0 sh
[  501.007536] [  21855]     0 21855      233       10    28672        0             0 get_interface_s
[  501.016610] [  21856]     0 21856      310       10    28672        0             0 grep
[  501.024718] [  21857]     0 21857      310       10    24576        0             0 grep
[  501.032814] [  21858]     0 21858      310       11    24576        0             0 sh
[  501.040775] [  21859]     0 21859      416       74    32768        0             0 procd
[  501.048966] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/,task=AdGuardHome,pid=21491,uid=0
[  501.061995] Out of memory: Killed process 21491 (AdGuardHome) total-vm:1529272kB, anon-rss:274580kB, file-rss:8kB, shmem-rss:0kB, UID:0 pgtables:688kB oom_score_adj:0
[  633.440371] WiFi@C15L1,RTMPDeletePMKIDCache() 1311: IF(2), del PMKID CacheIdx=0

Check if a more up-to-date version of AdGuard Home might solve the issue: [Script] Update AdGuard Home

If that does not help, reduce the number of filter lists as well.

Apparently the script will not work because I have ARMv8; is this really an issue?

From LuCI:
ARMv8 Processor rev 4

Give it a try.

I gave it a try, and while the script worked (maybe they should update the disclaimer?), AdGuard Home is still broken with the same behavior :frowning: :frowning:

[38601.677755] Out of memory: Killed process 12932 (AdGuardHome) total-vm:1529812kB, anon-rss:270820kB, file-rss:8kB, shmem-rss:0kB, UID:0 pgtables:708kB oom_score_adj:0

Too many filter lists then, maybe?

Can I delete filters from the CLI?

Is there a way to reset AdGuard Home to defaults from the CLI?

I doubt it is the filters. It was working fine and stopped suddenly, and I had not made any changes to the filters beforehand.

Try to edit /etc/AdGuardHome/config.yaml.

But I guess you will not have access to SSH after the crash.

I usually restart the router, then quickly go to the admin panel and stop AdGuard Home, then edit the yaml file, commenting out the lines with the filters.
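For reference, the filter entries in that file look roughly like this (the URL and name below are only illustrative); setting enabled: false, or commenting the whole entry out, disables a list:

filters:
  - enabled: false    # was true; set to false to disable this list
    url: https://example.org/big-blocklist.txt
    name: Example big blocklist
    id: 1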

This is a very common issue when the filters are too big.

I usually set the MINI filters (HaGeZi) on GL.iNet routers, except the Flint 2, which has enough memory and processing power.

Yup, stopping AGH and then restoring the factory copy will reset AdGuard Home's settings:

/etc/init.d/AdGuardHome stop
rm -fr /etc/AdGuardHome/*
cp /rom/etc/AdGuardHome/* /etc/AdGuardHome/
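Afterwards, start it again with /etc/init.d/AdGuardHome start and set it up fresh from the web panel.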


Thanks for the help.

I disabled the rules in the yaml file and it seems to hold up so far. Will report back if it crashes again, but I think we are good.

So... what lists do you recommend, and what is the maximum number of entries they should have?

For devices with a small amount of RAM, set the MINI lists from GitHub - hagezi/dns-blocklists: DNS-Blocklists: For a better internet - keep the internet clean!


Hi,

I had a similar issue on a Flint 2. My question is: how many filter lists are too many, and why are you not preventing the user from selecting specific lists, or alerting in the user interface, when the total number of lists or entries in a list exceeds the memory capacity of said router?

An important usability heuristic is to prevent users from reaching error states.

I understand this package is primarily maintained by another project (the AdGuard Home team, hence the separate admin UI), but this seems like a mandatory feature if users like us configure our routers and then find them crashing because the lists we selected exceed the memory capacity of the hardware. I imagine other users would appreciate it, and it would reduce support inquiries, a number your PM team should be tracking from a feature-prioritization perspective: eliminating customer complaints and support costs is a hidden feature whose metrics clearly show more profit when selecting requirements for a major release.


It should be possible to have the router download the selected lists and count the total hostnames/IPs they block before loading them into memory, refusing to load if there are "too many" hosts. How many are too many depends on the router's specs and the user's usage.

Everything takes away from the precious memory and CPU cycles. Try running AGH and a VPN at the same time and watch the max speed take a nosedive on most routers; some routers do this with just the VPN enabled.

So depending on your use case you can block X hosts in Y memory. It is up to the company to set some limits, as they know the capabilities of their routers best. For example, maybe cap all 128 MB routers at a certain number, with an "I'm sure I want to eat all my memory" checkbox if you want to go above the set limit.

That's just one way I can think of, but there are many ways to implement your good idea. Whether it happens is another question. The good thing is that you could try to make your own package to take care of this, maybe add some more tweaks as well, and install it. This could be done in a simple shell or Python script for sure.
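As a rough illustration, a minimal shell sketch of such a pre-flight check; the list URLs and the limit are made up, and the comment handling is approximate:

# Hypothetical pre-flight check: refuse to load if the combined
# blocklists are too big for this router. URLs and LIMIT are examples.
LIMIT=200000
total=0
for url in "https://example.org/list1.txt" "https://example.org/list2.txt"; do
    # Count non-blank, non-comment lines (hosts/adblock style).
    count=$(wget -qO- "$url" | grep -vcE '^[[:space:]]*([#!]|$)')
    total=$((total + count))
done
echo "Total entries: $total"
if [ "$total" -gt "$LIMIT" ]; then
    echo "Refusing to start AdGuard Home: $total entries exceed the $LIMIT limit"
    exit 1
fi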

AGH is made to use as much RAM as possible; see their forums for an explanation. Linux should manage the memory better, and GL.iNet needs to look into that, because it clearly doesn't: AGH gets killed for using too much, which should have been prevented. Out of the box, AGH will eat all your RAM like candy, by design.
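For what it's worth, the log above shows Total swap = 0kB, so one stopgap is giving the kernel some swap on external storage (assuming mkswap/swapon are in your BusyBox build; the path and size below are illustrative, and internal flash is a bad place for this due to wear):

# Create and enable a 256 MB swap file on an attached USB drive.
dd if=/dev/zero of=/mnt/usb/swapfile bs=1M count=256
chmod 600 /mnt/usb/swapfile
mkswap /mnt/usb/swapfile
swapon /mnt/usb/swapfile
free    # Total swap should no longer be 0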

Maybe Pi-hole is better, as it allocates RAM differently. For what it's worth, my Pi-hole with over 2.5 million hosts blocked idles at 93 MB of RAM used; that includes caches, on plain Debian + Pi-hole and nothing else. You can run Pi-hole on OpenWrt too, but it's a bit more involved, as you need to set up a container solution (Docker or LXC).
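If you go the container route, a minimal Docker invocation looks roughly like this; the timezone, password, and host ports are placeholders, and the admin-password variable has changed between Pi-hole image versions:

# Run the official Pi-hole image, exposing DNS and the web UI.
docker run -d --name pihole \
  -e TZ=Europe/Paris \
  -e WEBPASSWORD=changeme \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80/tcp \
  --restart unless-stopped \
  pihole/pihole:latest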

Easy: If it crashes, then there are too many.

Other questions like “why don't you warn users?” must be directed to the AdGuard Home team; this is a third-party application.

That's not the correct answer. The reason I say this is that I've been working as a product manager for over two decades. The right way to look at this, from either a support perspective or a product-management perspective, is: if you bundle the feature, you are responsible for it. For example, when I used to work on security software, we got the firewall engine from another company. When our customers installed our security suite, they didn't care who made the underlying technology; ultimately we were responsible for supporting the product. You need to take ownership and communicate with the AdGuard team on behalf of your customers. It's not our responsibility to make requests to that team and then have them funnel into your software. I find this a little insulting, because I bought the product from you.


I don't need to do anything because I don't work for GL :wink: So for me this is a perfectly fine answer. You might address your concerns to @bruce instead.

But since AGH is a big open-source project, I would assume nobody will communicate with them, because they wouldn't care about it. It's a completely third-party integration.

But @bruce might be able to assist you with that.


My recommendation is to use the upstream address from your account at https://adguard-dns.io
Inside, you can turn on basic filters and manual filters.
If you pay for AdGuard VPN, you will be given extra queries per month.
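In AdGuard Home that goes into the dns section of config.yaml; the exact DoH address comes from your adguard-dns.io dashboard, so the one below is only a placeholder:

dns:
  upstream_dns:
    - https://<your-private-endpoint>/dns-query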

Classic buck-passing to make it AGH's problem. It's like McDonald's telling customers to complain to the cow farmers if they don't like the hamburgers.


If you don't work here, why are you a moderator? You're a moderator on an official forum of a brand.

Any customer who posts here will assume you work for them. Do you moderate and post here for free, like a hobby?

If you go into a supermarket and see someone wearing the company's clothes, you assume they work there, right?
