GL-AXT1800 max Micro SD card size

I know spec says 512GB but is there any reason 1TB wouldn’t work?

Short answer: maybe the required filesystem isn't supported?

A little guessing: most of the time it's because of an addressing limit.
Filesystems are generally block-oriented; the blocks need to be addressed, and every address needs space.

For example: if you have 1MB and want a block size of 10KB, you'll need a TOC (table of contents) of roughly 102 'files' (see inodes).
1MB = 1024KB → /10 → 102.4, but an address can only be a whole one, not a fraction of one.

But this means you can store a maximum of 102 files, minus the TOC itself … The TOC needs to contain every address plus some metadata, like filename, size, ACL (or similar), date of creation, date of modification, …
Because you are clever, and don't want to waste space on the TOC, you decide to store only 4 clusters with a size of 256KB each … Great idea if you have 4 files of roughly 250KB each … But there is no space for a fifth file, even if all 4 files are only 4KB …
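To make that trade-off concrete, here is a rough sketch in code (purely illustrative numbers and a hypothetical fixed-size 'TOC entry'; real filesystems lay out their metadata very differently):

```python
# Illustrative only: how block size trades address-table (TOC) overhead
# against the number of separately addressable blocks.
def layout(volume_bytes: int, block_bytes: int, bytes_per_entry: int = 64):
    """Return (number of addressable blocks, rough TOC size in bytes)."""
    blocks = volume_bytes // block_bytes      # addresses must be whole, so round down
    toc_bytes = blocks * bytes_per_entry      # one entry per block: address + metadata
    return blocks, toc_bytes

MB = 1024 * 1024
for block in (10 * 1024, 256 * 1024):         # the 10KB and 256KB examples above
    blocks, toc = layout(1 * MB, block)
    print(f"block size {block // 1024:>3}KB -> {blocks:>3} blocks, ~{toc} bytes of TOC")
```

Small blocks mean a bigger TOC to store, cache, and search; huge blocks mean a tiny TOC but very few 'slots'. Scale the volume up to hundreds of GB and the TOC itself becomes a real cost.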

And don't forget: a directory is also a file (hence the quotes around 'files', above).
Wikipedia can explain this better than me: File system - Wikipedia

What I am trying to show: It’s complicated.

Sometimes it just doesn't make sense: even if it were possible to use larger storage, access would be slow, because the TOC needs to be read, cached, and searched.
I don't know what exactly the reason is for GL.iNet routers, but I trust the limit they gave us.

1 Like

I have a 1TB in mine. Works fine.

2 Likes

I do not see why the limit would be 512 GB.
If it were 2GB, 32GB, 2TB, or 128TB, I would say it could actually be a real limit.

In the case of 2GB, it would support only SD cards.
In the case of 32GB, it would support up to SDHC cards.
In the case of 2TB, it would support up to SDXC cards.
In the case of 128TB, it would support up to SDUC cards.

My guess is that it was tested up to a 512 GB microSD card, but would at least support all SDXC cards.
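For what it's worth, those family ceilings can be sketched like this (ceilings per the SD specification; whether a given host actually supports them is, of course, the whole question):

```python
# SD card family capacity ceilings, per the SD specification (a host may support less).
SD_FAMILIES = [
    ("SD",   2 * 10**9),     # up to 2GB
    ("SDHC", 32 * 10**9),    # up to 32GB
    ("SDXC", 2 * 10**12),    # up to 2TB
    ("SDUC", 128 * 10**12),  # up to 128TB
]

def family_for(capacity_bytes: int) -> str:
    """Smallest SD family whose ceiling covers the given card capacity."""
    for name, ceiling in SD_FAMILIES:
        if capacity_bytes <= ceiling:
            return name
    return "beyond current SD specifications"

print(family_for(512 * 10**9))  # -> SDXC
print(family_for(1 * 10**12))   # -> SDXC; a 1TB card is still 'just' an SDXC card
```

512GB doesn't sit on any of those boundaries, which is why it looks more like a tested value than a hard limit.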

1 Like

@Eheasterweekend

Good question.

CC @groentjuh @LupusE

I saw this thread a while back (several months; last year), before I had an account (I'm only recently a GL·iNet newbie-customer). Now that I have an account, I thought I'd offer my insights, especially given @LupusE's thoughtful remarks.

It's not that simple, @groentjuh.
+1 to @LupusE for citing other (file-system) limitations.

I'll start by citing an actual, tangible, empirical example, to make the core point, before delving into the abstract.

I have an Olympus LS-10 (from ~2008) which is a PCM audio recorder.

  • The LS-10 being a PCM recorder means that it records uncompressed audio; it outputs WAV (RIFF (little-endian)) files. Thus, storage capacity is a concern.
  • Context (skip this list-item if uninterested in “why uncompressed?”): while the LS-10 can encode (on-the-fly, while recording) using lossy codecs (MP3, WMA), one of the major reasons I purchased it (circa £250 at the time; so not a no-brainer decision) was its by-default ability to record not just losslessly (eg FLAC), but (specifically) uncompressed. — If I had come across another product, at the time, during my research, which didn't feature MP3/WMA encoding, and wasn't excessively expensive (pro-grade field recorders easily run to multiple thousands, each), then I might've opted for that other product, instead (to avoid giving patent-license fees to patent-trolls). Though, only if it was from a reputable maker; I knew (from experience elsewhere) Olympus to be a maker of quality hand-held (or otherwise portable/field) audio recorders. Not that I found anything else comparable, anyway. So, anyway, why uncompressed? Because:
    • I wanted to end up with losslessly-encoded/compressed recordings (because fidelity; can always transcode to lossy (or other (future) lossless) codecs later)
    • from the perspective of recording integrity/reliability, high-end (field) recorders will capture uncompressed, because
      • it's simpler (complexity is the enemy of reliability (and/or security, depending on context)), thus uses less power, and so can capture for longer on a set (battery) of power-cells. Short of actually measuring, using a pair of Lithium AA/LR6 cells, I can get at least 12 hours record-time out of it; I suspect more (I record in ~4-hour batches, and a set of Lithiums will last several batches).
      • encoding/compression is better/best done on more capable hardware (such as a micro-computer, later), in order to yield a good compression-ratio in a reasonable amount of compute-time. A micro-computer has a vastly superior CPU, and lots more RAM, to do the task well.
      • in case any file(-system) corruption occurs, recovering uncompressed data is far more likely (else easier) than if using any compression at all (even lossless) — this is especially important, in my (current) use-case; what I'm recording can't be repeated/re-captured/interrupted (think of one-off events).
      • storage-capacity concerns are mitigated by using high-capacity storage volumes, and even if that's insufficient by having multiple volumes to swap in/out — compare using an external battery (connected via USB) with your phone to overcome the limits of its ~3Ah internal battery. Though, 16GB yields ~25 hours of recording at 44·1kHz, 16-bit(-per-sample) (I'm primarily recording voice, for which 44·1kHz@16-bit is plenty (overkill, actually; but that's the recorder's lowest sample-rate/bit-depth; besides/again, can always re-sample on a computer, later/after). If needed, it can capture at (up to) 192kHz/24-bit (192kHz is definitely overkill)).
    • given the above advantages, and that storage is only becoming better value (price-per-GiB) with time, the challenges disappear after a while (as they now have, for this case/context)
  • The LS-10's documentation clearly states that the maximum capacity SD card it can use is 16GB (not 32GB). That was after a firmware update, too. (The previous limit might've been 8GB.)
  • I successfully use 16GB SDHCs with/in it (I wanted to avoid storage being a limiting factor). (My first 16GB SDHC (in ~2008) was £100 at the time(!))
  • Because SD capacity has greatly increased since 2008, fewer 16GB SDHCs are sold (and (local) retailers aren't stocking (m)any, now). Because the LS-10 still functions flawlessly after ~14 years (treating equipment well/gently helps, as well as buying quality in the first place), I wanted to ensure that I had a sufficient number/collection/pool of 16GB SDHCs (given that EEPROM/Flash wears out over time, with use) to continue using the LS-10 for the rest of its life.
  • When shopping around, at local retailers, a sales-droid confidently told me that a 32GB SDHC should work. I had my doubts (which I'll elaborate on, later).
  • To refute, I actually tested the case of a 32GB SDHC in the LS-10.
  • The LS-10 wouldn't even boot fully/properly, and simply displayed a flashing error message (it thought that the SDHC was corrupt, if I recall). It wouldn't respond to anything other than a shutdown command. I wasn't surprised by this (but the sales-droid would've been).
  • I promptly removed the 32GB SDHC, and returned the 16GB SDHC (which yielded normal behaviour, once again).

Full disclosure: I didn't go to the trouble of re-formatting the 32GB with only a 16GB partition. That might have worked, but I'm still doubtful. Besides, I had only one 32GB SDHC at the time, which I needed for other things (and tested it only after ensuring my data was preserved, and only then after toggling the hardware read-only switch to read-only mode, for the test; lest the LS-10 inadvertently bork the 32GB SDHC).

The cards used are SanDisk; so hardly of dubious quality. They've all been very reliable.

So, just because the SD specs might say that SDHC can be up to 32GB (and (µ)SDXC up to 2TB); well, that's for/about the cards themselves. Doesn't mean that a host device can actually use them.

Though, yes, ideally it'd be nice if devices with card slots had their capacity limits at the same thresholds as the different classes of SD cards do (SD, SDHC, SDXC, etc.). Sometimes that isn't possible, or is otherwise impractical overkill. Think about 64-bit addressing, which allows the theoretical possibility of a many-petabyte storage device; well, those aren't going to exist (as a single, non-array device) for a long time, yet. Probably beyond the service-life of today's products.
Besides, higher-spec tends to cost more. A lot of products are in price-sensitive markets; that is, they compete on price, not quality. I'm reminded of The Market for Lemons (about how quality tends to decline, due to customer/buyer decisions over which of competing products to purchase).


Now onto the abstract.

I'm old enough to recall that, with what're now ancient (eg Pentium 2) PCs, if you wanted to install a new (high-capacity) HDD in them, you had to be mindful of whether the disk-controller on the motherboard could actually address that much storage (which, if I recall, came down to the bit-length of its LBA chip/firmware). Some couldn't (shorter LBA bit-length). Others did have sufficient bits such that the next limit was at something like 1–2TiB (so, when the max HDD capacity at the time was circa 100GiB, you knew you were fine for the foreseeable future).

The only easy/practical (and reliable) way to determine whether hardware was capable of higher capacity was to actually test it with an HDD of capacity greater than the suspected limit (I forget what that lower limit was, now; it might've been 64GiB/128GiB).

Aside: actually, having just checked, 128GiB and 2TiB are plausible and do make sense.
LBA doesn't address bits, but sectors, traditionally of 512 bytes each. A 28-bit LBA (classic ATA) gives 2²⁸ × 512B = 128GiB (the well-known ~137GB limit), while a 32-bit sector address (as used by MBR partition tables) gives 2³² × 512B = 2048GiB = 2TiB.
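A quick back-of-envelope of those limits (assuming the traditional 512-byte LBA sector; the address widths are the classic ATA/MBR ones):

```python
# Back-of-envelope: maximum addressable capacity = (number of addresses) x (sector size).
SECTOR = 512  # bytes; the traditional LBA sector size

def max_capacity_gib(address_bits: int, sector_bytes: int = SECTOR) -> float:
    """Largest capacity reachable with the given LBA address width, in GiB."""
    return (2 ** address_bits) * sector_bytes / 2 ** 30

print(max_capacity_gib(28))  # 28-bit ATA LBA          -> 128.0 GiB (~137 GB)
print(max_capacity_gib(32))  # 32-bit (MBR partitions) -> 2048.0 GiB = 2 TiB
print(max_capacity_gib(48))  # 48-bit ATA LBA          -> 134217728.0 GiB = 128 PiB
```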

It's been a long time since those days, and my memory is fuzzy on the specific numbers. It's all rather moot, and an historical anecdote, now that we have multi-TiB HDDs connecting via serial (rather than parallel (ribbon-cable)) links, and other goodness/marvels.

It simply didn't matter what any specification about theoretical/maximum limits stated. What mattered was what the actual hardware was capable of.
Same as in networking; just because you have a link with 1Gbps capacity, doesn't mean that you'll be saturating it. Even if you do saturate it, your useful-payload throughput will be, at best, ~800Mbps (80%), due to various overheads (MTU/MSS limits (~1500 byte MTU is still very common), IP+TCP packet headers, non-payload-carrying control/signalling packets (eg TCP ACKs), (end-to-end) latency, packet-loss, other traffic traversing shared links, buffer-bloat, and other (even more obscure) factors), even if the remote host itself is capable of sustaining 1Gbps (maybe over a LAN, but unlikely over the WAN). Your local link's maximum/theoretical capacity, means little in a broader context.
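To put a rough number on that (a sketch only; it counts per-packet header overhead for plain IPv4/TCP over Ethernet and ignores ACKs, latency, loss, and competing traffic, which is where the rest of the gap comes from):

```python
# Rough upper bound on TCP payload throughput over a 1Gbps Ethernet link,
# counting only per-packet header/framing overhead.
LINK_BPS     = 1_000_000_000      # 1Gbps link
MTU          = 1500               # bytes of IP packet per Ethernet frame
IP_TCP_HDR   = 20 + 20            # IPv4 + TCP headers, no options
ETH_OVERHEAD = 14 + 4 + 8 + 12    # Ethernet header + FCS + preamble + inter-frame gap

payload_per_frame = MTU - IP_TCP_HDR    # 1460 bytes of useful data per frame
wire_per_frame    = MTU + ETH_OVERHEAD  # 1538 bytes actually occupying the wire
efficiency        = payload_per_frame / wire_per_frame

print(f"~{efficiency:.1%} efficiency, so at most ~{LINK_BPS * efficiency / 1e6:.0f} Mbps of payload")
# Headers alone cost ~5%; ACKs, retransmits, latency, and shared links eat the rest,
# which is how you end up nearer the ~80% figure in practice.
```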

Back to storage. That's just at the hardware/signalling level. Then you've got the likes of the excellent points which @LupusE made, about higher abstractions like filesystems (which can be their own minefield).
Consider that more than a few filesystems, still in use today, have limits on the maximum size of a single file (eg 2GiB), which we're starting to hit (HD video, anyone?). The LS-10 actually implements a graceful (multi-file) work-around for this limit (having reached it a few times, which happens at ~3·4 hours of continuous recording (at 44·1kHz, 16-bit)).
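The arithmetic behind those figures (assuming stereo, 44·1kHz, 16-bit, as described above; the 2GiB per-file cap and the 16GB card are the examples from this post):

```python
# Worked arithmetic for uncompressed PCM: bytes per second, hours per file, hours per card.
SAMPLE_RATE      = 44_100   # samples per second
BYTES_PER_SAMPLE = 2        # 16-bit
CHANNELS         = 2        # stereo

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS  # 176,400 B/s

file_limit = 2 * 2**30      # a 2GiB per-file cap
card       = 16 * 10**9     # a 16GB card (decimal GB, as marketed)

print(f"{file_limit / bytes_per_second / 3600:.1f} h per file")  # ~3.4 h continuous
print(f"{card / bytes_per_second / 3600:.1f} h per card")        # ~25 h total
```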

Consider the Millennium Bug (year 2000 problem). That came about because, back in the 1970s, computer scientists, when faced with very limited storage/memory, opted for the trade-off of using only 2-digit years. It made sense at the time (same as dial-up and DSL did), and (then) the need for 4-digit year-counters was some 30 years away.

Why would hardware-makers not make similar trade-offs, today?
So the spec says that you can theoretically use a 9000ZiB whatever. So? Who has one of those, or will in the next 10 years? No-one. So, if it's significantly extra work/cost to faithfully implement the spec, for no gain, why do it? Especially when a sub-set is sufficient for today and the near future.

While I'm all for getting the most out of hardware (hence buying the largest capacity SD cards my devices can use), let's not forget that it wasn't many years ago when bitty-box gateways (like GL·iNet sells; yes, gateways, because routers (proper) are something quite different; and no, multi-WAN doesn't mean that they're doing routing), were extremely limited in both RAM and permanent-storage (I'm thinking of a handful of MiB, for each; I have an original Linksys WRT54G for example, which sports a whole 16MiB of RAM(!) :laughing:).

So, 512GB is positively enormous, by comparison. That's more than some non-ancient micro-computers have!
512GB of storage, with 512MiB of RAM, is enough to host comparable services of a small network (including non-essential nice-to-haves, like Squid, Privoxy, and a Tor client configured to keep the directory data between sessions/reboots (rather than having to re-fetch it at each (re-)launch), not to mention obvious features of GL·iNet boxes like AdGuard Home, and multiple performant VPNs). All in a box which can fit in your pocket :astonished::exploding_head:.

Even if that much really isn't enough (what're you installing that you need >512GB?!); well,

  • use the USB port to attach external storage (USB hubs for the win)
  • use the GL·iNet gateway as a VPN client, to connect to your lab back at HQ/base, where you can host all the storage you want, like a many-dozen-TiB (or even a 1 PiB, like Linus Tech Tips) OpenZFS storage array :slightly_smiling_face:

I say this as someone who has a habit of fulfilling Parkinson's Law of Data (“Data expands to fill the space available for storage”).


However, all that aside;

That's helpful to know; thank you @jdub.

Though, have you actually tried filling it up? That'd be the real test. A device can report any capacity it wants (which is exactly what some counterfeit/fraudulent USB flash drives do), but bad things (eg corruption) happen when you start trying to fill it with data.
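For anyone wanting to try it, a toy sketch of the idea (this is what purpose-built tools like f3 or H2testw do far more thoroughly; the file name and chunk sizing here are arbitrary):

```python
# Toy fill-and-verify test: write known pseudo-random data across the card,
# then read it back and compare. A card that lies about its capacity (or is
# failing) will return different data once writes wrap around or get dropped.
import hashlib, os

CHUNK = 8 * 1024 * 1024  # 8MiB per chunk

def fill_and_verify(mountpoint: str, chunks: int, seed: bytes = b"capacity-test") -> bool:
    path = os.path.join(mountpoint, "fill_test.bin")
    expected = []
    with open(path, "wb") as f:
        for i in range(chunks):
            block = hashlib.shake_256(seed + i.to_bytes(8, "big")).digest(CHUNK)
            expected.append(hashlib.sha256(block).digest())
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # make sure data actually hits the card, not just a cache
    with open(path, "rb") as f:
        return all(hashlib.sha256(f.read(CHUNK)).digest() == d for d in expected)

# e.g. fill_and_verify("/mnt/sdcard", chunks=900 * 1024 // 8) to cover ~900GiB of a '1TB' card
```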

I played it safe, took the specifications seriously, and use 512GB SanDisk µSDXCs (with an A2 Application Performance Class (I/O rating), because capacity is not the only metric which matters, for good ExtRoot performance), myself. Reliability (of the card (hardware), the data on it, and the system using it) was more important to me than absolute maximal capacity.

I also, especially, wanted to avoid buying non-inexpensive µSDXCs, only to then discover that they couldn't be used. Something is better than nothing.

But, maybe in future :slightly_smiling_face:. I'm sure I'll find new ways to fill 512GB, even in an AXT1800 :grin: (large Squid cache?).

1 Like

@Lee-Carre, yes. Current space used: 842GB on the card I was talking about at the time. I have a 1.5TB card now; 1.1TB used, 312GB free.

1 Like

:+1:

Sounds like it's doing just fine. Helpful to know; thanks for sharing/reporting.

I presume that you're using Ext4, or similar, on that?

Maybe groentjuh was right about 512GB being what the developers tested it up to (perhaps 512GB was the highest-capacity TF that existed at the time; I know that >1TB is still relatively new to market). I would imagine that the marketing+legal departments would want to avoid making product claims which are unproven/speculation, lest it blow up in their face later on.

So, perhaps the TF slot on the AXT1800 does implement the full SDXC spec. If so, then great. I guess we'll find out, as people use ever-greater-capacity TF cards :smirk:.

LUKS encrypted partition with ext4, yes.

My limits were defined by the card reader.

Filesystem and firmware limits do indeed also play a role, but in this case we're talking about OpenWrt devices, which I doubt would add many (if any) additional limits.