
Classification
Title: subprocess/popen close_fds performs poorly if SC_OPEN_MAX is high
Type: performance
Components: Library (Lib)    Versions: Python 2.6

Process
Status: closed    Resolution: fixed
Assigned To: georg.brandl
Nosy List: georg.brandl, gvanrossum, hvbargen, klaas, loewis, pjdelport, ssbarnea
Priority: normal    Keywords: patch

Created on 2007-02-19 10:17 by hvbargen, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name               Uploaded                 Description
posix_closerange.patch  klaas, 2007-10-16 04:05
posix_closerange.patch  klaas, 2008-01-05 06:51  Updated patch
Messages (14)
msg31289 - Author: H. von Bargen (hvbargen) Date: 2007-02-19 10:17
If the value of sysconf("SC_OPEN_MAX") is high
and you try to start a subprocess with subprocess.py or os.popen2 with close_fds=True, then starting the other process is very slow.
This boils down to the following code in subprocess.py:
        def _close_fds(self, but):
            for i in xrange(3, MAXFD):
                if i == but:
                    continue
                try:
                    os.close(i)
                except:
                    pass

and the similar code in popen2.py:
    def _run_child(self, cmd):
        if isinstance(cmd, basestring):
            cmd = ['/bin/sh', '-c', cmd]
        for i in xrange(3, MAXFD):
            try:
                os.close(i)
            except OSError:
                pass

There has been an optimization already (range was replaced by xrange to reduce the memory impact), but I think the problem is that for high values of MAXFD, a high percentage of the os.close calls will usually fail, raising an exception (which is an "expensive" operation).
It has already been suggested to add a C implementation called "rclose" or "close_range" that tries to close all FDs in a given range (min, max) without the overhead of Python exception handling.

I'd like to emphasize that this is not a theoretical but a real-world problem:
We have a Python application in a production environment on Sun Solaris. Some other software running on the same server needed a high value of 260000 for SC_OPEN_MAX (set with ulimit -n or in some /etc/ file; I don't know which one).
Suddenly, calling any other process with subprocess.Popen(..., close_fds=True) took 14 seconds (!) instead of a few microseconds.
This caused a huge performance degradation, since the subprocess itself needs only a few seconds.

See also:
Patches item #1607087 "popen() slow on AIX due to large FOPEN_MAX value".
This contains a fix, but only for AIX - and I think the patch does not support the "but" argument used in subprocess.py.
The correct solution should be coded in C and should do the same as the _close_fds routine in subprocess.py.
It could be optimized to make use of operating-system-specific calls such as "closefrom" or "fcntl", as proposed in the patch, to close all handles from (but+1) to MAXFD.
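
For illustration, here is a minimal sketch (not from the original report) of one such optimization: on Linux, /proc/self/fd lists only the descriptors that are actually open, so a close loop can skip the huge unopened range entirely. The helper name and the fallback behavior are assumptions.

    import os

    def _close_open_fds(but):
        # Hypothetical helper: close every open fd above 2 except 'but'.
        # /proc/self/fd enumerates only fds that are really open (Linux).
        try:
            names = os.listdir('/proc/self/fd')
        except OSError:
            return  # no /proc here; a real implementation needs a fallback
        for name in names:
            fd = int(name)
            if fd > 2 and fd != but:
                try:
                    os.close(fd)
                except OSError:
                    pass  # e.g. the fd used by listdir itself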
msg31290 - Author: Martin v. Löwis (loewis) (Python committer) Date: 2007-02-20 23:45
Wouldn't it be simpler for you to simply not pass close_fds=True to popen?
msg31291 - Author: H. von Bargen (hvbargen) Date: 2007-02-21 15:42
No, I have to use close_fds=True, because I don't want the subprocess to inherit each and every file descriptor.
This is for two reasons:
i) Security - why should the subprocess be able to work with all of the parent process's files?
ii) Sometimes, for whatever reason, the subprocess (Oracle Reports in this case) seems to hang. And because it inherited all of the parent's log file handles, the parent cannot close and remove its log files correctly. This is how I stumbled upon close_fds in the first place. BTW, on MS Windows a similar (but not equivalent) solution was to create the log files as non-inheritable.
msg31292 - Author: Martin v. Löwis (loewis) (Python committer) Date: 2007-02-21 18:18
I understand you don't want the subprocess to inherit "incorrect" file descriptors. However, there are other ways to prevent that from happening:
- you should close file descriptors as soon as you are done with the files
- you should set the FD_CLOEXEC flag on all file descriptors you don't want to be inherited, using fcntl(fd, F_SETFD, 1) (see the sketch below)

I understand that there are cases where neither strategy is practical, but if you follow them, performance will be much better, as the closing of unused file descriptors is done in the exec(2) implementation of the operating system.
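
A minimal sketch of the FD_CLOEXEC approach described above (the helper name is an assumption for illustration):

    import fcntl

    def set_cloexec(fd):
        # Mark fd close-on-exec so the OS drops it automatically in the
        # child during exec(2), with no per-fd close loop at fork time.
        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)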
msg31293 - Author: H. von Bargen (hvbargen) Date: 2007-02-22 20:16
Of course I am already closing any files as soon as possible.

I know that I could use FD_CLOEXEC. But that would require that I set it explicitly for each descriptor I use in my program, which would be tedious work and require platform-specific coding all around the program. And the Python library functions (e.g. the logging module) do not use FD_CLOEXEC either.
Right now, more or less the only platform-specific code in the program is where I call subprocesses, and I'd like to keep it that way.
The same is true for the socket module: all sockets are by default inherited by child processes.
So the only way to prevent unwanted handles from being inherited by child processes is in fact to specify close_fds=True in subprocess.py.
If you think that a performance patch similar to patch #1607087 makes no sense, then the close_fds argument should either be marked as deprecated, or at least the documentation should mention that the implementation is slow for large values of SC_OPEN_MAX.
msg31294 - Author: Piet Delport (pjdelport) Date: 2007-03-18 16:43
I've been bitten by this too: close_fds turns my trivial little monitoring daemon (which reads the output of another status-reporting command once a second or so) into a monster that hogs the entire server.

As H. von Bargen says, this kind of degradation should at least come with a warning (or, better, be fixed by C/Pyrex-ifying close_fds).
msg56490 - Author: Mike Klaas (klaas) Date: 2007-10-16 04:05
This problem has also afflicted us.

Attached is a patch which adds closerange(fd_low, fd_high) to the posix (and consequently os) module, and modifies subprocess to use it. The patch is against trunk but should work for 2.5maint.

I don't really think that this is useful enough to add to the public API, but it results in a huge performance benefit for subprocess:

[python-trunk]$ ./python -m timeit -s 'import python_close' 'python_close.go(100000)'
10 loops, best of 3: 358 msec per loop
[python-trunk]$ ./python -m timeit -s 'import os' 'os.closerange(4, 100000)'
10 loops, best of 3: 20.7 msec per loop
[python-trunk]$ ./python -m timeit -s 'import python_close' 'python_close.go(300000)'
10 loops, best of 3: 1.05 sec per loop
[python-trunk]$ ./python -m timeit -s 'import os' 'os.closerange(4, 300000)'
10 loops, best of 3: 63 msec per loop

[python-trunk]$ cat python_close.py
import os, sys

def go(N):
    for i in xrange(4, N):
        try:
            os.close(i)
        except:
            pass
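
For context, a sketch of how subprocess's _close_fds could use the new call, splitting the range around the single kept descriptor. This assumes closerange(fd_low, fd_high) closes fd_low inclusive through fd_high exclusive while ignoring errors, and mirrors subprocess's own MAXFD fallback:

    import os

    try:
        MAXFD = os.sysconf("SC_OPEN_MAX")
    except (AttributeError, ValueError):
        MAXFD = 256

    def _close_fds(but):
        # Two range closes replace up to MAXFD individual os.close()
        # calls, each of which raised and swallowed an exception
        # whenever the fd was not actually open.
        os.closerange(3, but)
        os.closerange(but + 1, MAXFD)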
msg56491 - Author: Mike Klaas (klaas) Date: 2007-10-16 05:04
I see that some spaces crept into the patch. I can fix that (and perhaps rename the function to start with an underscore) if desired.
msg56492 - Author: Mike Klaas (klaas) Date: 2007-10-16 05:12
Finally, I did not include any platform-specific optimizations, as I don't have AIX or Solaris boxes on which to test them (and they can easily be added later). An order-of-magnitude speedup on all *nix platforms is a good start.
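
As an illustration of what such a platform-specific path might look like (not part of the attached patch), some systems provide closefrom(3) in libc, which can be reached via ctypes; the helper name and the availability check are assumptions:

    import ctypes
    import ctypes.util

    def _closefrom(lowfd):
        # Hypothetical sketch: use libc's closefrom(3) where it exists
        # (e.g. Solaris); returns False when the symbol is missing.
        libc = ctypes.CDLL(ctypes.util.find_library("c"))
        if not hasattr(libc, "closefrom"):
            return False
        libc.closefrom(lowfd)  # closes every fd >= lowfd
        return True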
msg57016 - Author: Guido van Rossum (gvanrossum) (Python committer) Date: 2007-11-01 17:39
Looks good to me.
msg59281 - Author: Mike Klaas (klaas) Date: 2008-01-05 06:51
Updated patch, now with documentation and a test. The patch is against 2.6 trunk, but will probably apply against 3.0 as well.
msg59289 - Author: Guido van Rossum (gvanrossum) (Python committer) Date: 2008-01-05 16:51
I'll review this when I have time. Thanks! If you want a speedier response, please find someone else among the core developers; or perhaps the bug day on Jan 19 will help.
msg60224 - Author: Georg Brandl (georg.brandl) (Python committer) Date: 2008-01-19 20:22
Reviewed and committed patch as r60097. Thanks for your work!
msg160956 - Author: Sorin Sbarnea (ssbarnea) Date: 2012-05-17 09:45
Am I wrong, or should we reopen this bug? Try `ulimit -n unlimited` and any Python code that uses popen becomes practically blocking.
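
For reference, a quick way to inspect the limit that drives this cost (a small illustration, not from the thread):

    import os
    import resource

    # The close_fds loop scales with this value; "ulimit -n unlimited"
    # can make it enormous.
    print(os.sysconf("SC_OPEN_MAX"))
    print(resource.getrlimit(resource.RLIMIT_NOFILE))  # (soft, hard)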
History
Date                 User          Action  Args
2022-04-11 14:56:22  admin         set     github: 44595
2020-04-24 00:29:24  vstinner      set     resolution: accepted -> fixed
2012-05-17 09:45:06  ssbarnea      set     type: performance
                                           messages: + msg160956
                                           nosy: + ssbarnea
2008-01-19 20:22:46  georg.brandl  set     status: open -> closed
                                           assignee: gvanrossum -> georg.brandl
                                           resolution: accepted
                                           messages: + msg60224
                                           nosy: + georg.brandl
2008-01-05 16:51:09  gvanrossum    set     assignee: gvanrossum
                                           messages: + msg59289
2008-01-05 06:51:55  klaas         set     files: + posix_closerange.patch
                                           messages: + msg59281
                                           components: + Library (Lib), - Interpreter Core
                                           versions: + Python 2.6, - Python 2.5
2007-11-01 17:39:17  gvanrossum    set     nosy: + gvanrossum
                                           messages: + msg57016
2007-10-16 05:19:31  loewis        set     keywords: + patch
2007-10-16 05:12:40  klaas         set     messages: + msg56492
2007-10-16 05:04:17  klaas         set     messages: + msg56491
2007-10-16 04:05:37  klaas         set     files: + posix_closerange.patch
                                           nosy: + klaas
                                           messages: + msg56490
2007-02-19 10:17:38  hvbargen      create