Issue 959379
This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.
Created on 2004-05-24 11:32 by astrand, last changed 2022-04-11 14:56 by admin. This issue is now closed.
Messages (5)
msg20865 - Author: Peter Åstrand (astrand) | Date: 2004-05-24 11:32

As we all know, the file object's destructor invokes the close() method automatically. But most people are not aware that errors from close() are silently ignored, which can lead to silent data loss. Consider this example:

```
$ python -c 'open("foo", "w").write("aaa")'
```

No traceback or warning message is printed, but the file is zero bytes large, because the close() system call returned EDQUOT. Another similar example is:

```
$ python -c 'f=open("foo", "w"); f.write("aaa")'
```

When using an explicit close(), you get a traceback:

```
$ python -c 'f=open("foo", "w"); f.write("aaa"); f.close()'
Traceback (most recent call last):
  File "<string>", line 1, in ?
IOError: [Errno 122] Disk quota exceeded
```

I'm aware that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message?
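The asymmetry Peter describes can be reproduced without actually exceeding a disk quota: a small file subclass whose close() raises shows that an explicit close() propagates the error, while the destructor swallows it. A minimal sketch (the FailingClose class and its simulated error are invented for illustration; they are not part of the original report):

```python
import io
import os
import tempfile

class FailingClose(io.FileIO):
    """Hypothetical file type whose close() fails after closing,
    simulating an EDQUOT-style error at close time."""
    def close(self):
        was_open = not self.closed
        super().close()
        if was_open:
            raise OSError("simulated: Disk quota exceeded")

fd, path = tempfile.mkstemp()
os.close(fd)

# Explicit close: the error propagates to the caller.
f = FailingClose(path, "w")
f.write(b"aaa")
caught = None
try:
    f.close()
except OSError as exc:
    caught = exc
print("explicit close raised:", caught)

# Implicit close via the destructor: the same error is swallowed;
# CPython merely prints an "Exception ignored in: ..." note to stderr.
g = FailingClose(path, "w")
g.write(b"aaa")
del g

os.remove(path)
```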
msg20866 - Author: Terry J. Reedy (terry.reedy) | Date: 2004-06-01 17:53

I think there are two separate behavior issues here: implicit file close and interpreter shutdown. What happens with

```
$ python -c 'f=open("foo", "w"); f.write("aaa"); del f'
```

which forces the implicit close *before* shutdown? As I recall, the ref manual says little about the shutdown process, which I believe is necessarily implementation/system dependent. There is certainly little that can be guaranteed once the interpreter is partly deconstructed itself.

> I'm aware that exceptions cannot be raised in destructors, but wouldn't it be possible to at least print a warning message?

Is there already a runtime warning mechanism, or are you proposing that one be added?
msg20867 - Author: Peter Åstrand (astrand) | Date: 2004-06-01 18:16

It has nothing to do with interpreter shutdown; the same thing happens in long-lived processes when the file object goes out of scope at the end of a function. For example, the code below fails silently:

```
def foo():
    f = open("foo", "w")
    f.write("bar")

foo()
time.sleep(1000)
```
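The function-scope case above is exactly what the with statement (added in Python 2.5, after this thread) addresses: the file is closed explicitly at the end of the block, so a close()-time error surfaces at the call site instead of vanishing in the destructor. A sketch of that later idiom (file name and helper are illustrative only):

```python
import os
import tempfile

def foo(path):
    # The with-statement closes the file on block exit; a failing
    # close() would raise here, at the call site, rather than being
    # silently ignored later by the destructor.
    with open(path, "w") as f:
        f.write("bar")

fd, path = tempfile.mkstemp()
os.close(fd)
foo(path)

with open(path) as f:
    content = f.read()
os.remove(path)
print(content)
```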
msg20868 - Author: Tim Peters (tim.peters) | Date: 2004-06-01 18:23

I think the issue here is mainly that an explicit file.close() maps to fileobject.c's file_close(), which checks the return value of the underlying C-level close call and raises an exception (or not) as appropriate. But file_dealloc(), which is called as part of recycling garbage file objects, does not look at the return value of the underlying C-level close call it makes, and consequently never raises an exception based on that return value either.
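A footnote on how this area later evolved: CPython still does not raise from the destructor, but modern Python 3 at least emits a ResourceWarning when a file object is finalized while still open. A quick check, assuming CPython's immediate reference-count finalization (the temp file is just scaffolding for the demonstration):

```python
import os
import tempfile
import warnings

fd, path = tempfile.mkstemp()
os.close(fd)

with warnings.catch_warnings(record=True) as recorded:
    warnings.simplefilter("always")
    f = open(path)
    # Dropping the last reference finalizes the still-open file;
    # the io finalizer closes it and emits a ResourceWarning.
    del f

os.remove(path)

got_warning = any(issubclass(w.category, ResourceWarning) for w in recorded)
print("ResourceWarning emitted:", got_warning)
```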
msg20869 - Author: Peter Åstrand (astrand) | Date: 2004-11-07 14:16

Fixed in revision 2.193 of fileobject.c.
History

Date | User | Action | Args
---|---|---|---
2022-04-11 14:56:04 | admin | set | github: 40283
2004-05-24 11:32:13 | astrand | create |