8000 bpo-31804: multiprocessing calls flush on sys.stdout at exit even if … by SlaushVunter · Pull Request #5492 · python/cpython · GitHub

Status: Closed · wants to merge 4 commits
Changes from 1 commit
6 changes: 4 additions & 2 deletions Lib/multiprocessing/process.py
@@ -314,8 +314,10 @@ def _bootstrap(self):
finally:
threading._shutdown()
util.info('process exiting with exitcode %d' % exitcode)
sys.stdout.flush()
sys.stderr.flush()
if sys.stdout is not None and not sys.stdout.closed:
Member:
A minor issue, but I'd like to stick to the idiom already used in the previous PR: https://github.com/python/cpython/pull/4073/files#diff-d6a41af14d72f372f37ced0cc15c1f58L17

Author:
Of course. That's the easier-to-ask-forgiveness-than-permission (EAFP) idiom.

sys.stdout.flush()
if sys.stderr is not None and not sys.stderr.closed:
sys.stderr.flush()

return exitcode

41 changes: 41 additions & 0 deletions Lib/test/_test_multiprocessing.py
@@ -653,6 +653,47 @@ def test_forkserver_sigkill(self):
self.check_forkserver_death(signal.SIGKILL)


class TestStdOutAndErr(unittest.TestCase):
Member:
Perhaps you can move these tests closer to the test added by the other PR:
https://github.com/python/cpython/pull/4073/files#diff-b046ab474480855fb4a01de88cfc82bbR585

I also wonder if those tests can be unified. They seem to be doing very similar things.

Author:
My first guess was to update that test case too, but someone disabled it by adding a _ prefix to the class name. The test module itself is also prefixed with _, so these tests are not run automatically, and I couldn't make that test case work.
That test also seemed to cover inter-process Event handling, so I stuck to the strict and often impractical "only test one thing at a time" principle.
I think the "_test_multiprocessing.py" module should be refurbished, but I don't think I am the right person for that.

Member (@pitrou, Feb 12, 2018):

Ha. You misunderstood how _test_multiprocessing works :-) The test classes there are used as base classes for the actual tests in test_multiprocessing_fork, test_multiprocessing_forkserver and test_multiprocessing_spawn. So you just have to run one of those three test modules to get the tests to work!

(and that's why those test classes have an underscore prefix)
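The layout pitrou describes can be sketched like this (class names are simplified for illustration and are not the actual CPython test class names):

```python
import io
import unittest

# _test_multiprocessing defines mixin-style base classes that are not
# unittest.TestCase subclasses themselves, so the runner never collects
# them directly (hence the leading underscore in the module and classes).
class _TestExitcode:
    START_METHOD = None  # filled in by each concrete test module

    def test_start_method_is_set(self):
        self.assertIsNotNone(self.START_METHOD)

# test_multiprocessing_fork, test_multiprocessing_forkserver and
# test_multiprocessing_spawn then derive runnable TestCases, one per
# start method:
class WithForkTestExitcode(_TestExitcode, unittest.TestCase):
    START_METHOD = 'fork'

# Running just the concrete class exercises the inherited test:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WithForkTestExitcode)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```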

Author:

Whoa! :) Now it all makes sense! I will make yet another commit (3rd) and unify the test cases.

@staticmethod
def closeIO(stream_name):
getattr(sys, stream_name).close()

@staticmethod
def removeIO(stream_name):
setattr(sys, stream_name, None)

def test_closed_stdio(self):
"""
bpo-28326: multiprocessing.Process depends on sys.stdout being open
"""
self.run_process(self.closeIO)

def test_no_stdio(self):
"""
bpo-31804: If you start Python by pythonw then sys.stdout and
Member:

I think there's no need for the docstring (or comment) to be that long. Just put the reference to the issue number + a 2- or 3-line summary.

Author:
I trimmed the docstring.

sys.stderr are set to None. If you also use multiprocessing,
then when the child process finishes, BaseProcess._bootstrap
calls sys.stdout.flush() and sys.stderr.flush() in its finally
clause. This causes the process exit code to be nonzero (it is 1).

This unit test sets sys.stdout and sys.stderr to None instead of
changing the Python interpreter used to start the child process
to pythonw.exe, because that is Windows specific. This method
cannot test whether an error occurs (because stdout or stderr is
None) before the target function is called. However, the errors
occurred in the multiprocessing shutdown code, so this can still
test the previously mentioned bug. This way the feature can also
be tested on other operating systems.
"""
self.run_process(self.removeIO)

def run_process(self, target):
for stream_name in ('stdout', 'stderr'):
proc = multiprocessing.Process(target=target, args=(stream_name,))
proc.start()
proc.join()
self.assertEqual(proc.exitcode, 0)

#
#
#