listen(fd, -1) should give maximum backlog, not minimum
The venerable backlog argument to listen(): what do we do with negative values in it? POSIX says:
If listen() is called with a backlog argument value that is less than 0, the function behaves as if it had been called with a backlog argument value of 0.
A backlog argument of 0 may allow the socket to accept connections, in which case the length of the listen queue may be set to an implementation-defined minimum value.
But Linux does:
if ((unsigned int)backlog > somaxconn) backlog = somaxconn;
And OpenBSD does:
if (backlog < 0 || backlog > somaxconn) backlog = somaxconn;
Some examples of portable userland code that rely on -1 meaning "maximum" on other platforms:
- php-fpm: https://github.com/php/php-src/blob/master/sapi/fpm/fpm/fpm_sockets.h#L17 (it's polite and checks for OS defines)
- nginx: https://github.com/nginx/nginx/blob/master/src/os/unix/ngx_freebsd_config.h#L103 (also does it for darwin and obsd, but not linux?)
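The "polite" version of the trick looks roughly like this (a sketch, not php-fpm's actual code; the platform list and the fallback constant are illustrative):

```c
#include <sys/socket.h>

#ifndef SOMAXCONN
#define SOMAXCONN 128            /* conservative fallback, illustrative */
#endif

/* Hypothetical helper: pick the "maximum" listen backlog portably.
 * On platforms known to clamp a negative backlog up to somaxconn,
 * pass -1; elsewhere fall back to SOMAXCONN, since a strict POSIX
 * implementation would treat -1 as 0. */
static int max_backlog(void)
{
#if defined(__linux__) || defined(__FreeBSD__) || \
    defined(__OpenBSD__) || defined(__APPLE__)
    return -1;                   /* kernel clamps to somaxconn */
#else
    return SOMAXCONN;
#endif
}
```

Code that instead passes -1 unconditionally is betting that every target kernel does the clamp.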
We currently follow POSIX and interpret all negative values as equivalent to 0. That means anything written assuming -1 means "maximum" (and not politely sniffing the OS... or running under LX brand!) works with default settings, but then yields ECONNREFUSED all the time under load. Luckily, almost all such software exposes a user-visible knob to adjust the listen backlog, which gets people out of the hole, but it's annoying.
Should we just follow the crowd instead of sticking by POSIX on this one?