This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

classification
Title: test_zlib is too slow
Components: Library (Lib)
Versions: Python 2.4

process
Status: closed
Resolution: fixed
Assigned To: nascheme
Nosy List: brett.cannon, mwh, nascheme, rhettinger, tim.peters
Priority: low

Created on 2004-05-26 17:35 by mwh, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Messages (6)
msg20901 - (view) Author: Michael Hudson (mwh) (Python committer) Date: 2004-05-26 17:35
I don't know what it's doing, but I've never seen it fail, and waiting for it has certainly wasted quite a lot of my life :-)
msg20902 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2004-05-26 18:17
I hate this slow test.  If you want to label it as an explicitly requested resource (regrtest -u zlib), then be my guest.
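
For reference, guarding an expensive test behind a regrtest resource works roughly as in the sketch below. The test.support.requires() hook is real (spelled test.test_support in the Python 2.4 era), but "zlib" as a resource name is only Raymond's suggestion here, not an existing regrtest resource; the test body is illustrative.

    import unittest
    import zlib
    from test import support

    class SlowZlibTest(unittest.TestCase):
        def test_round_trip(self):
            # Raises ResourceDenied (a SkipTest subclass) unless the run
            # was started with `regrtest -u zlib`, so the expensive case
            # is skipped by default.
            support.requires('zlib')
            data = b'spam and eggs ' * 100000
            self.assertEqual(zlib.decompress(zlib.compress(data)), data)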
msg20903 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2004-05-26 18:45
I'm sure most of the cases in test_zlib are quite zippy (yes, pun intended).  Do the right thing: determine which cases are the time hogs, and pare them down.  By eyeball, only these subtests consume enough time to notice:

test_manydecompimax
test_manydecompimaxflush
test_manydecompinc

s/_many/_some/ isn't enough on its own <wink>.
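
One way to find the time hogs Tim describes is to time each test case separately. A minimal sketch follows; the helper function and its reporting format are illustrative, not something from the thread:

    import time
    import unittest

    def time_test_cases(suite):
        """Run each test in `suite` on its own and print its wall-clock time."""
        for test in suite:
            if isinstance(test, unittest.TestSuite):
                time_test_cases(test)  # recurse into nested suites
            else:
                start = time.perf_counter()
                test.run(unittest.TestResult())
                print('%-45s %6.2fs' % (test.id(), time.perf_counter() - start))

    # e.g.:
    # time_test_cases(unittest.defaultTestLoader.loadTestsFromName('test.test_zlib'))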
msg20904 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2004-05-26 19:52
A quick look at the tests Tim lists shows that each of them runs the basic incremental decompression test 8 times, growing the data from the base size up to 2**7 times the base size; the size multipliers come from [1 << n for n in range(8)].  So we get exponential growth in data size for each test, with a 1921-character string as the base.

It also compresses in 32-byte steps and then decompresses in 4-byte steps; the defaults are 256 and 64.
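
In outline, the expensive pattern looks something like the sketch below, reconstructed from Brett's description. The base-string length, size multipliers, and step sizes match what he reports; everything else is illustrative, not the actual test_zlib code.

    import zlib

    BASE = b'x' * 1921                   # stands in for the real 1921-character base string
    SIZES = [1 << n for n in range(8)]   # 1, 2, 4, ..., 128: exponential data growth

    def round_trip(data, comp_step=32, decomp_step=4):
        # Compress in small comp_step-byte increments...
        co = zlib.compressobj()
        compressed = b''.join(co.compress(data[i:i + comp_step])
                              for i in range(0, len(data), comp_step))
        compressed += co.flush()
        # ...then decompress in even smaller decomp_step-byte increments,
        # which is where the time goes.
        dco = zlib.decompressobj()
        out = b''.join(dco.decompress(compressed[i:i + decomp_step])
                       for i in range(0, len(compressed), decomp_step))
        return out + dco.flush()

    for mult in SIZES:
        data = BASE * mult
        assert round_trip(data) == data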

Perhaps we should just move these tests to something like test_zlib_long and have it require the overloaded largefile resource?
msg20905 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2004-05-27 06:24
Persevere: taking tests you don't understand and just moving them to artificially bloat the time taken by an unrelated test is so lazy on so many counts I won't make you feel bad by belaboring the point <wink>.  Moving them to yet another -u option doomed to be unused is possibly worse.

IOW, fix the problem, don't shuffle it around.

Or, IOOW, pare the expensive ones down.  Since they never fail for anyone, it's not like they're testing something delicate.  Does it *need* to try so many distinct cases?  That will take some thought, but it's a real help that you already know the answer <wink>.
msg20906 - (view) Author: Neil Schemenauer (nascheme) * (Python committer) Date: 2004-06-05 19:35
Fixed in test_zlib.py 1.26.  I removed a bunch of magic numbers while I was at it.
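
"Removed a bunch of magic numbers" presumably means replacing inline literals with named constants, along these lines (illustrative only; the actual names in test_zlib.py 1.26 may differ):

    # Before: sizes scattered through the tests as bare literals, e.g.
    #     co.compress(data[i:i + 32])
    # After: one obvious place to tune how hard the tests work.
    SIZE_MULTIPLIERS = [1 << n for n in range(8)]  # data-size growth factors
    COMPRESS_STEP = 32                             # bytes fed per compress() call
    DECOMPRESS_STEP = 4                            # bytes fed per decompress() call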
History
Date                 User   Action  Args
2022-04-11 14:56:04  admin  set     github: 40296
2004-05-26 17:35:04  mwh    create