This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Problems with os.system and ulimit -f
Type:
Stage:
Components: Interpreter Core
Versions: Python 2.4

process
Status: closed
Resolution: not a bug
Dependencies:
Superseder:
Assigned To: nnorwitz
Nosy List: gsbarbieri, kowaltowski, nnorwitz
Priority: normal
Keywords:

Created on 2004-10-12 15:34 by kowaltowski, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name: testsystem.zip
Uploaded: kowaltowski, 2004-10-12 15:34
Description: Files testsystem.c, testsystem.py and largefile.c
Messages (5)
msg22654 - (view) Author: Tomasz Kowaltowski (kowaltowski) Date: 2004-10-12 15:34
Python version (running under Fedora Core 2 Linux):
   Python 2.3.3 (#1, May  7 2004, 10:31:40)
   [GCC 3.3.3 20040412 (Red Hat Linux 3.3.3-7)] on linux2
---------------------------------------------------------------------------
I found a problem when executing the bash command
'ulimit -f' through the 'os.system' function. According to
the documentation, this function should behave exactly
like the C library function 'system'. However, it does not,
as illustrated by the minimal Python and C examples
testsystem.py and testsystem.c (see the attached
zip file).

In these examples, 'largefile' is a compiled C program
which writes endlessly to stdout (source also
attached). The C program testsystem.c works as expected
and prints the following output:

   command: ulimit -f 10; largefile > xxx;
   result = 153

The Python program testsystem.py **does not stop**; if
interrupted by Ctrl-C it prints:

  command: ulimit -f 10; largefile > xxx;
  result = 0

In both cases the output file 'xxx' has 10240 bytes,
i.e., 10 blocks, as limited by 'ulimit'.

It is interesting, though, that the command 'ulimit -t 1'
(CPU time) produces correct results in both the Python
and C versions, i.e., it interrupts the execution and prints:

  command: ulimit -t 1; largefile > xxx;
  result = 137




msg22655 - (view) Author: Tomasz Kowaltowski (kowaltowski) Date: 2004-10-22 11:52

I also tested the new version "Python 2.4b1" -- the problem
still occurs :-(.
msg22656 - (view) Author: Gustavo Sverzut Barbieri (gsbarbieri) Date: 2004-11-26 19:38

The problem is that Python ignores the SIGXFSZ ("File size
limit exceeded") signal.

import signal
signal.signal(signal.SIGXFSZ, signal.SIG_DFL)

solves your problem.


Any Python developer: why does Python ignore this signal?
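The workaround above can be sketched end to end. This is a minimal, hypothetical reproduction (not from the attached files): it assumes a Linux system where the coreutils `yes` command stands in for the reporter's `largefile` program, since `yes` also writes to stdout until something stops it. Restoring the default disposition before calling os.system lets the spawned children inherit it, so the writer is killed by SIGXFSZ and the shell reports exit status 128 + SIGXFSZ (153 on Linux), matching the C output in the original report.

```python
import os
import signal

# Restore the default SIGXFSZ disposition so that the children spawned
# by os.system() inherit it (Python sets this signal to SIG_IGN at
# startup, and an ignored disposition survives fork/exec).
old = signal.signal(signal.SIGXFSZ, signal.SIG_DFL)
try:
    # `yes` stands in for the reporter's largefile program; the shell's
    # exit status reflects how the last command terminated.
    status = os.system("ulimit -f 10; yes > xxx")
finally:
    signal.signal(signal.SIGXFSZ, old)   # put Python's handler back
    if os.path.exists("xxx"):
        os.remove("xxx")                 # clean up the truncated file

# The shell reports 128 + signal number for a signal-killed command;
# on Linux SIGXFSZ is 25, giving 153 as in testsystem.c's output.
print(os.waitstatus_to_exitcode(status))
```

With the `signal.signal` line removed, the children inherit SIG_IGN instead, the writes fail with an ordinary error, and no fatal signal is ever delivered, which is the behavior the reporter observed.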
msg22657 - (view) Author: Neal Norwitz (nnorwitz) * (Python committer) Date: 2005-10-03 06:35

It's set in Python/pythonrun.c; the reason is in the log
message below. I'm not sure this can be changed. Any
suggestions? I'd be happy to update the documentation,
but have no idea where.

revision 2.160
date: 2002/04/23 20:31:01;  author: jhylton;  state: Exp;  lines: +3 -0

Ignore SIGXFSZ.

The SIGXFSZ signal is sent when the maximum file size limit is
exceeded (RLIMIT_FSIZE).  Apparently, it is also sent when the 2GB
file limit is reached on platforms without large file support.

The default action for SIGXFSZ is to terminate the process and dump
core.  When it is ignored, the system call that caused the limit to be
exceeded returns an error and sets errno to EFBIG.  Python always
checks errno on I/O syscalls, so there is nothing to do with the
signal.
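That log message can be demonstrated concretely. The sketch below (a hypothetical example; the 4 KB cap and the file name 'xxx' are arbitrary) shows that with SIGXFSZ ignored, a write past RLIMIT_FSIZE does not kill the process but fails with errno EFBIG, which Python surfaces as an OSError:

```python
import errno
import os
import resource
import signal

# Make the premise explicit: ignore SIGXFSZ, as Python does at startup.
signal.signal(signal.SIGXFSZ, signal.SIG_IGN)

# Cap the maximum file size at 4 KB (RLIMIT_FSIZE is in bytes).
soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
resource.setrlimit(resource.RLIMIT_FSIZE, (4096, hard))

fd = os.open("xxx", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
caught = None
try:
    while True:
        os.write(fd, b"x" * 1024)   # the write past 4096 bytes fails
except OSError as exc:
    caught = exc                    # EFBIG error instead of a crash
finally:
    os.close(fd)
    os.remove("xxx")
    resource.setrlimit(resource.RLIMIT_FSIZE, (soft, hard))  # restore

print(caught.errno == errno.EFBIG)
```

This is exactly why the interpreter itself keeps running under 'ulimit -f': the failed syscall becomes a catchable exception rather than a fatal signal, and a child that inherits the ignored disposition (as with os.system) behaves the same way.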
msg22658 - (view) Author: Neal Norwitz (nnorwitz) * (Python committer) Date: 2005-12-19 04:57

I don't know how to improve this, so closing.
History
Date                 User         Action  Args
2022-04-11 14:56:07  admin        set     github: 41007
2004-10-12 15:34:59  kowaltowski  create