This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

Classification
Title: memory leak threading or socketserver module
Type:
Stage:
Components: Interpreter Core
Versions: Python 2.4

Process
Status: closed
Resolution: works for me
Dependencies:
Superseder:
Assigned To:
Nosy List: akuchling, idsvandermolen, jyasskin, schuppenies
Priority: normal
Keywords:

Created on 2006-07-05 12:47 by idsvandermolen, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name Uploaded Description
memory.txt idsvandermolen, 2006-07-05 12:47 server and client source code with test shell script
memory2.4.2.csv schuppenies, 2008-04-04 19:33 memory usage for 2.4.2
memory2.6a2.csv schuppenies, 2008-04-04 19:37 memory usage for 2.6a2
Messages (5)
msg29046 - Author: Ids van der Molen (idsvandermolen) Date: 2006-07-05 12:47
A long-running threaded server does not release memory, but holds on to
large amounts of it for apparently no reason, eventually causing memory
exhaustion. This problem occurs with Python 2.4.2 (SUSE 10.1 version) and
2.4.3 (compiled by the customer on Red Hat 8.0 and SUSE 10.0). The problem
does _not_ occur with Python 2.2.1 (Red Hat 8.0 version).

The problem can be reproduced by running multiple concurrent clients that
each send lots of data (25 MB) to a threaded server. It looks like the
garbage collector does not always release memory used in threads, because
the ForkingMixIn and plain variants of TCPServer do not show this problem
(although there it may be masked by the separate process memory spaces).
Testing:
To reproduce the problem, run the server code and create multiple client
connections by running multiple instances of the client code, using the
test-run shell script. The server's memory usage peaks at about 550 MB and
then drops back, but the remaining amount grows with every test run,
eventually consuming all available memory.
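The attached memory.txt is not reproduced in this page; the following is a
hypothetical sketch of the kind of server and client the report describes
(names, port, and chunk sizes are illustrative; the Python 3 spelling
socketserver is used here, the module was named SocketServer in 2.4):

    import socket
    import socketserver

    class SinkHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # Read and discard everything the client sends on this connection.
            while True:
                data = self.request.recv(65536)
                if not data:
                    break

    class ThreadedServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
        allow_reuse_address = True

    def run_server(host="127.0.0.1", port=8888):
        # ThreadingMixIn spawns one thread per client connection.
        ThreadedServer((host, port), SinkHandler).serve_forever()

    def run_client(host="127.0.0.1", port=8888, size=25 * 1024 * 1024):
        # Each client connects and streams roughly 25 MB to the server.
        sock = socket.create_connection((host, port))
        chunk = b"x" * 65536
        sent = 0
        while sent < size:
            sock.sendall(chunk)
            sent += len(chunk)
        sock.close()

Running run_server() in one process and several concurrent run_client()
invocations from other processes, repeated over many rounds, mirrors the
test shell script described in the report.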
msg29047 - Author: A.M. Kuchling (akuchling) * (Python committer) Date: 2006-10-09 20:16
Confirmed; the sample server does leak on Linux.

I couldn't figure out why, though.  I doubt it's an
interaction between threads and GC, unless there's a
refcounting bug in the threading module.  gc.garbage doesn't
contain anything, so it doesn't look like an __del__ cycle.
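For reference, a minimal way to perform the check described here (an
illustration of the idea, not the exact commands used at the time):

    import gc

    # Force a full collection, then inspect gc.garbage: in Python 2 it holds
    # unreachable objects the collector could not free, e.g. cycles whose
    # objects define __del__.
    found = gc.collect()
    print("objects found unreachable:", found)
    print("uncollectable objects in gc.garbage:", len(gc.garbage))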
msg62883 - Author: Jeffrey Yasskin (jyasskin) * (Python committer) Date: 2008-02-24 07:38
It's possible but unlikely that r61011 fixed this. SocketServer did create
the reference cycles that r61011 removed, but those tended to get cleaned up
by gc.collect(), so it sounds like that wasn't the bug you're seeing here.
I haven't had time yet to check, so I'm mentioning it here so the
possibility doesn't get lost.
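As a side note, cycles of that kind are reclaimed once the cyclic collector
runs, which is why they would not normally account for a persistent leak. A
small illustration of the mechanism (not the actual SocketServer code paths):

    import gc

    class Node(object):
        pass

    a, b = Node(), Node()
    a.peer = b
    b.peer = a               # reference cycle: refcounts never reach zero on their own
    del a, b
    collected = gc.collect() # the cyclic collector reclaims the cycle here
    print("objects collected:", collected)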
msg64939 - Author: Robert Schuppenies (schuppenies) * (Python committer) Date: 2008-04-04 19:33
I can *not* confirm the leak. I tested using the provided scripts (with
small modifications to log memory usage), doing 1000 runs instead of 20. I
am running on Debian Linux and checked both the reported Python 2.4.2 and
the current trunk (2.6a2). I have attached my results for both. To me they
look like ordinary variations in memory usage.
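The exact logging changes are not shown in the issue; a minimal sketch of
how per-run resident memory could be recorded on Linux (the helper name and
CSV layout are assumptions, not the actual modification):

    # Hypothetical helper: read the resident set size (VmRSS) of the current
    # process from /proc/self/status (Linux only) and append it to a CSV file.
    def log_rss(csv_path, run_number):
        rss_kb = None
        with open("/proc/self/status") as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    rss_kb = int(line.split()[1])  # value is reported in kB
                    break
        with open(csv_path, "a") as out:
            out.write("%d,%s\n" % (run_number, rss_kb))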
msg99876 - Author: A.M. Kuchling (akuchling) * (Python committer) Date: 2010-02-22 23:39
I can no longer confirm this bug, either; trying the scripts with the current trunk doesn't seem to leak.  Backing out Jeffrey's r61011 didn't bring the problem back, so I'll just conclude that the problem has gotten fixed along the way somehow.
History
Date                 User            Action  Args
2022-04-11 14:56:18  admin           set     github: 43615
2010-02-22 23:39:39  akuchling       set     status: open -> closed
                                             resolution: works for me
                                             messages: + msg99876
2008-04-04 19:37:36  schuppenies     set     files: + memory2.6a2.csv
2008-04-04 19:33:46  schuppenies     set     files: + memory2.4.2.csv
                                             nosy: + schuppenies
                                             messages: + msg64939
2008-02-24 07:38:47  jyasskin        set     nosy: + jyasskin
                                             messages: + msg62883
2006-07-05 12:47:35  idsvandermolen  create