This issue tracker has been migrated to GitHub and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: potential crash and free memory read
Type: Stage:
Components: Interpreter Core
Versions: Python 2.4

process
Status: closed
Resolution: fixed
Dependencies: Superseder:
Assigned To: loewis
Nosy List: loewis, nnorwitz
Priority: normal
Keywords: patch

Created on 2005-11-16 05:26 by nnorwitz, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name: tokenizer.patch
Uploaded: nnorwitz, 2005-11-16 05:26
Description: fix attempt 1
Messages (2)
msg49049 - Author: Neal Norwitz (nnorwitz) * (Python committer) Date: 2005-11-16 05:26
Martin,

I think this problem came about from the work on PEP
263 (coding spec).  Attached is a patch that corrects a
free memory write.  The problem shows up with valgrind
and test_coding IIRC.

There is an XXX comment in the code which points to
another problem.  It's possible that you could break
and not do a strcpy().  Or perhaps decoding_fgets()
shouldn't call error_ret().  I'm not sure whether
error_ret() should ever free the buffer.  I think my
preference would be that it doesn't: that way we can
deallocate the buffer where we allocate it.  I think I
plugged all the other leaks.

Let me know what you think.
msg49050 - Author: Neal Norwitz (nnorwitz) * (Python committer) Date: 2006-06-02 06:23

Someone already fixed the XXX comment.  Applied the rest of
the patch.  This should be backported to 2.4.

Committed revision 46602.
History
Date | User | Action | Args
2022-04-11 14:56:14 | admin | set | github: 42598
2005-11-16 05:26:23 | nnorwitz | create |