The Beryl AX does not use jffs2. Instead it uses ubifs for its overlay mount, which means df reports a pessimistic ~170MB of usable space to start with:
root@GL-MT3000:~# df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/root                44.0M     44.0M         0 100% /rom
tmpfs                   240.2M    508.0K    239.7M   0% /tmp
/dev/ubi0_2             169.7M      1.0M    163.9M   1% /overlay
overlayfs:/overlay      169.7M      1.0M    163.9M   1% /
tmpfs                   512.0K         0    512.0K   0% /dev
root@GL-MT3000:~# mount
/dev/root on /rom type squashfs (ro,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,noatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,noatime)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noatime)
/dev/ubi0_2 on /overlay type ubifs (rw,noatime,assert=read-only,ubi=0,vol=2)
overlayfs:/overlay on / type overlay (rw,noatime,lowerdir=/,upperdir=/overlay/upper,workdir=/overlay/work)
tmpfs on /dev type tmpfs (rw,nosuid,relatime,size=512k,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,mode=600,ptmxmode=000)
debugfs on /sys/kernel/debug type debugfs (rw,noatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,noatime,mode=700)
pstore on /sys/fs/pstore type pstore (rw,noatime)
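If you want to keep an eye on how much overlay space remains as you add packages, a minimal helper like the one below works (assuming your df supports the -P and -m flags, which GNU coreutils and most busybox builds do):

```shell
# Sketch: print the available MB on a given mount point.
# -P forces POSIX output so long filesystem names don't wrap lines.
free_mb() {
    df -Pm "$1" | awk 'NR==2 { print $4 }'
}
```

On the Beryl AX you would call it as `free_mb /overlay`, which should land near the 163.9M shown in the df output above.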
It's definitely zippier than the Beryl MT1300 in both CPU performance and disk I/O, but still not at the level of something like a BCM4908, much less a BCM4912. The numbers below are in 1000s of bytes per second processed.
Beryl AX:
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
blowfish cbc    42867.48k    48022.36k    49695.76k    49990.66k    50289.02k    50118.66k
aes-128 cbc     47493.63k    51901.60k    53360.73k    53963.77k    53897.90k    54050.76k
aes-256 cbc     36996.83k    39643.80k    40452.69k    40829.86k    40896.98k    40697.86k
BCM4908:
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
blowfish cbc    47774.26k    56447.57k    59677.62k    60133.46k    60582.44k    60511.57k
aes-128 cbc     57659.15k    71957.35k    77010.79k    78551.42k    78604.97k    78582.72k
aes-256 cbc     45797.69k    54128.37k    57348.81k    57799.15k    58234.67k    57890.13k
BCM4912:
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
blowfish cbc    54371.69k    65426.60k    69130.67k    70232.31k    70382.24k    70555.31k
aes-128 cbc     62112.32k    78851.61k    85189.50k    87079.99k    87247.53k    87291.53k
aes-256 cbc     49896.06k    59858.20k    63485.52k    64060.28k    64670.22k    64389.12k
Beryl MT1300:
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes  16384 bytes
blowfish cbc    13558.07k    14779.26k    15353.88k    15077.72k    15425.54k    15257.26k
aes-128 cbc     11820.90k    13560.92k    12850.69k    13802.57k    13373.92k    13771.27k
aes-256 cbc      9250.92k     9904.52k    10036.73k    10060.37k    10130.77k     9961.04k
Script: openssl speed aes-128-cbc aes-256-cbc bf-cbc
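Since the tables report throughput in 1000s of bytes per second, a small convenience helper makes cross-router comparison easier. This is just a sketch for reading the tables, not part of the benchmark itself:

```shell
# Sketch: convert an `openssl speed` cell like "47493.63k" to MB/s.
# ${1%k} strips the trailing "k"; awk does the division and rounding.
kbytes_to_mbs() {
    awk -v v="${1%k}" 'BEGIN { printf "%.1f\n", v / 1000 }'
}
```

For example, `kbytes_to_mbs 87291.53k` reports the BCM4912's best aes-128 run as 87.3 MB/s.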
Updated! The Beryl MT1300 numbers above were added once I got it back up and running.
The 4.5.x firmware removes the "max number of users" setting that breaks the MT1300 in favor of simply setting the beginning and end of the DHCP scope, which is nice.
The integrated AdGuard Home is decent, but to do anything meaningful you really have to flip over to the full AdGuard Home UI on port 3000. According to netstat, AdGuard Home's DNS resolver actually listens on port 3053. Rather than having AGH respond natively on port 53, GL.iNet adds a dnsmasq forwarder that forwards requests from port 53 to 3053. You can see this in LuCI here:
My preference when setting up AGH is to have it answer requests natively on port 53 and then have dnsmasq respond on a different port for local names and specific upstream requests. I just find it more efficient: you put load on one service rather than two, and the overall response time should be shorter, though whether any of this is measurable is probably a debate for another time. I'll likely tweak the settings to minimize the internal hops.
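A rough sketch of that arrangement on an OpenWrt-based build: move dnsmasq off port 53 via UCI, then point AGH's own listener at 53 in its YAML config. The port numbers and the 127.0.0.1 upstream here are my choices, not GL.iNet defaults:

```shell
# Move dnsmasq to a high port so AGH can own port 53 (5353 is an assumption)
uci set dhcp.@dnsmasq[0].port='5353'
uci commit dhcp
/etc/init.d/dnsmasq restart
# Then in AdGuard Home's YAML config:
#   dns:
#     port: 53
#     upstream_dns:
#       - 127.0.0.1:5353   # hand local-name lookups back to dnsmasq
/etc/init.d/adguardhome restart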
I did happen to test my adguardhome updater script and it breaks because, for some absurd reason, GL.iNet decided to rename the default config file from adguardhome.yaml to config.yaml. Edit: this has now been fixed to work with the Beryl AX and similar routers.
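The fix in the updater script boils down to probing for both filenames rather than hardcoding one. A minimal sketch (the /etc/AdGuardHome directory in the usage line is an assumption; substitute wherever your build keeps the config):

```shell
# Sketch: return the AGH config path, preferring GL.iNet's renamed file
find_agh_conf() {
    dir="$1"
    for name in config.yaml adguardhome.yaml; do
        if [ -f "$dir/$name" ]; then
            echo "$dir/$name"
            return 0
        fi
    done
    return 1
}
```

Then the script can do `conf=$(find_agh_conf /etc/AdGuardHome) || exit 1` and work unchanged on both stock OpenWrt and GL.iNet firmware.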