Issue 856706
This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.
Created on 2003-12-09 07:08 by stevenhowe, last changed 2022-04-11 14:56 by admin. This issue is now closed.
Messages (5)
msg19341 - Author: Steven Howe (stevenhowe) | Date: 2003-12-09 07:08
Using os.popen3 inside a thread (start_new_thread) returns different results from the stderr file descriptor. I discovered this while trying to trap the output of the badblocks program. I have a USB floppy that I am trying to format after a badblocks check; the floppy is on /dev/sdb.

Code:

```
import thread
import os

def dd(outfilename):
    cmd = '/sbin/badblocks -n -s -v -c 16 -b 512 /dev/sdb'
    channels = os.popen3(cmd)
    ch = ' '
    ff = '/tmp/%s' % outfilename
    out = open(ff, 'w')
    while ch != '':
        ch = channels[2].read(1)
        out.write(ch)
```

Run it two ways: first as stand-alone code, then as a threaded program:

```
dd('nothread.err')
thread.start_new_thread(dd, ('thread.err',))
```

Now view the results with od -ta. You will see that the results are very different: all of the verbose data on currently completed blocks is missing.

Steven Howe
msg19342 - Author: Andrew Gaul (gaul) | Date: 2003-12-12 10:03
Confirmed with Python CVS and Fedora on x86. e2fsprogs/misc/badblocks:alarm_intr sets a SIGALRM handler and calls alarm(1), but it is not getting fired. I will look into this further.
msg19343 - Author: Steven Howe (stevenhowe) | Date: 2003-12-12 19:02
Hello Gaul. Well, I found a workaround. Using threading.Thread, I start a routine that forks and execv's a script (execv has no way to accept redirects like '> 2>') that runs the badblocks program and routes its output to files. Then I start a thread that uses open() to attach <stderr> to a progress-reading routine and, when it completes, <stdout> for the badblock list (if any).

This method created another problem. Popen3 does not return an end of file ('') until the process has ended. With badblocks forked, there is no synchronization between my script and the badblocks output, so I can and do overrun the <stderr> output, which then returns an EOF.

Another workaround: I wrote a routine to make sure I never read to the end of the file:

```
readsize = os.stat(fname)[stat.ST_SIZE] - fptr.tell() - BIAS
```

All this so I can use threading. No doubt you're asking why use threading? I'm making a pygtk app similar to 'gfloppy' that can handle USB floppies, and I need to make a progress meter. Using threading allows a GTK callback to examine the current status of badblocks. But a fix would be preferable.

Thanks, Steven Howe
msg19344 - Author: Andrew Gaul (gaul) | Date: 2003-12-25 19:42
This appears to be a duplicate of #853411. The thread on python-dev with the subject "Python threads end up blocking signals in subprocesses" discusses this.
msg19345 - Author: Facundo Batista (facundobatista) * | Date: 2005-01-15 19:59
Duplicate of #853411 (the OP says so in the other bug).
History

Date | User | Action | Args
---|---|---|---
2022-04-11 14:56:01 | admin | set | github: 39685 |
2003-12-09 07:08:52 | stevenhowe | create |