ls makefiles can be cleaned up
We are currently building ls six (yes, six) times. This is because we have both i386 and amd64 trees, even though we deliver just one binary each for usr/bin, xpg4/bin, and xpg6/bin.
We can eliminate the whole subdirectory dance and simplify this.
Updated by Jason King 2 months ago
To clarify -- it's the XPG4/6 versions of ls where 32 + 64 bit binaries are built, but only one is delivered. We do deliver /bin/ls and /bin/amd64/ls.
One other note: some distros do rely on the 32-bit binary's address-space limit to keep ls from getting out of control on large directories (most invocations of ls require it to read all the directory entries into memory in order to sort them); a 64-bit ls can (and has) consumed gigabytes of RAM.
Updated by Garrett D'Amore 2 months ago
Arguably, a 32-bit ls can do the same: it runs until it exhausts its virtual address space, then crashes.
Of course, extremely large directories (millions of files in a flat directory) are problematic for many things. I imagine, for example, that many utilities will read all of the entries in and sort them.
For example, the Go standard library has a convenience function that does just this (os.ReadDir reads a directory and returns its entries sorted by filename). I imagine shell invocations might do it via glob expansion. And so forth.
Feeding a list into sort would do the same.
So a real and valid question is whether ls deserves special treatment in this regard, or whether we should rely on, e.g., sensible use of ulimit and reasonable directory sizes.
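On the ulimit option: the shell's `ulimit -v` is a wrapper around setrlimit(RLIMIT_AS) (historically spelled RLIMIT_VMEM on Solaris-derived systems, with RLIMIT_AS as an alias). This hedged sketch, with illustrative sizes and a hypothetical helper name, shows that under such a cap a runaway allocation fails cleanly with NULL from malloc rather than consuming the machine:

```c
#include <sys/resource.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Fork a child, cap its address space at `cap` bytes, then attempt a
 * single allocation of `want` bytes.  Returns 1 if the allocation was
 * refused under the cap (malloc returned NULL), 0 if it succeeded,
 * -1 on error.
 */
static int
alloc_refused_under_cap(rlim_t cap, size_t want)
{
	pid_t pid = fork();
	if (pid == -1)
		return (-1);
	if (pid == 0) {
		struct rlimit rl = { cap, cap };
		(void) setrlimit(RLIMIT_AS, &rl);  /* what ulimit -v does */
		void *p = malloc(want);
		_exit(p == NULL ? 1 : 0);
	}
	int status;
	(void) waitpid(pid, &status, 0);
	return (WIFEXITED(status) ? WEXITSTATUS(status) : -1);
}
```

The shell equivalent is setting `ulimit -v` before running ls, which bounds the worst case without any special casing inside ls itself.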
An associated question is whether this consumption is necessarily a problem. On a system with tens (or hundreds) of GB of RAM, does ls consuming a few GB necessarily constitute a problem? After all, other programs might well do the same when faced with similar circumstances.
Of course, if ls is using excessive memory due to wasteful design, that's another matter entirely. (For example, if it allocated a page per directory entry, or something stupid like that.) I'm not sure we have benchmarked this at all yet.