Why do you need ARG_MAX? (Or how do you decide on ARG_MAX?)

Asked 2 years ago, Updated 2 years ago, 101 views

On Linux, there is a limit on the total length of the arguments passed to a command from the shell. For example, if you run a command like `foobar *` against a directory with many files and get an "Argument list too long" error, this limit is the reason.

A brief look suggests that this upper limit is defined as ARG_MAX. In my Ubuntu (64-bit) environment, the value was as follows:

$ getconf ARG_MAX
2097152

This ARG_MAX is described in man 3 sysconf as follows:

ARG_MAX - _SC_ARG_MAX
    The maximum length of the arguments to the exec(3) family of
    functions. Must not be less than _POSIX_ARG_MAX (4096).

Based on this description, the limit on argument length appears to come from the arguments passed to the exec family of system calls. But what kind of constraint creates this upper limit?

The actual ARG_MAX value is not that large. In my environment, ARG_MAX is exactly 2²¹ (2097152), so I suspect there is a reason for this particular value, but I don't know what it is.

In addition, ARG_MAX does not limit the argument length of shell built-in commands such as echo. In other words, the limit does not seem to come from handling the arguments themselves, but from some constraint at exec time, though I cannot tell exactly where.

So I have a question.

  • Why do I need an argument length limit for shell commands?
  • In particular, how is ARG_MAX determined?

linux kernel

2022-09-30 19:15

2 Answers

As for necessity, it is probably because a computer's memory is finite. Don't forget that memory capacity was very small in the 1970s and 1980s when UNIX was developed (a MicroVAX II had 1 MB of memory).

As an extreme example, the MS-DOS command line could be at most 127 characters, stored in the PSP: the byte at offset 0x80 holds the command-line length, and the text itself follows it.

Even in the UNIX world, it is quite reasonable to impose a constraint: if the single process you are about to start could use all remaining memory just to hold its command line, every other running process would be affected.

As for why the number is what it is, the /usr/include/limits.h on our HP-UX 11.11 contains the following comment:

The default values are provided here because the constants are specified by special publications including XPG (X/Open Portability Guide) Issue 2 and SVID (System V Interface Definition) Issue 2.
#define ARG_MAX 2048000 /* maximum length of arguments ... */


If the reasoning is explained anywhere, it is probably in those documents, but I don't have time to dig through XPG2 or SVID2 right now, so I'll leave that to someone else.

Shell built-in commands are not subject to the limit because they do not start another process; their arguments are allowed to be as long as memory permits.


2022-09-30 19:15

http://man7.org/linux/man-pages/man2/execve.2.html

the total size is limited to 1/4 of the allowed stack size.

So says the man page of execve.
When I actually checked it by hand, ARG_MAX does indeed come out to 1/4 of the maximum stack size.
(ulimit -s reports the stack size in KiB, so ARG_MAX = output × 1024 / 4.)

$ ulimit -s
8192
$ getconf ARG_MAX
2097152
$ ulimit -s 4096
$ ulimit -s
4096
$ getconf ARG_MAX
1048576

Also, as to why a restriction is necessary: I think it is related not only to execve itself but also to the fact that, as with function calls, arguments are passed via registers and the stack. Without the limit, the stack might overflow when too many arguments are given. (No confidence.)


2022-09-30 19:15



© 2024 OneMinuteCode. All rights reserved.