Description
Checklist
- I have verified that the issue exists against the `master` branch of Celery.
- This has already been asked to the discussions forum first.
- I have read the relevant section in the contribution guide on reporting bugs.
- I have checked the issues list for similar or identical bug reports.
- I have checked the pull requests list for existing proposed fixes.
- I have checked the commit log to find out if the bug was already fixed in the master branch.
- I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway).
Mandatory Debugging Information
- I have included the output of `celery -A proj report` in the issue (if you are not able to do this, then at least specify the Celery version affected).
  5.2.1 (dawn-chorus)
- I have verified that the issue exists against the `master` branch of Celery.
- I have included the contents of `pip freeze` in the issue.
- I have included all the versions of all the external dependencies required to reproduce this bug.
  Pyomo 6.1.2 (CPython 3.9.6 on Darwin 20.6.0)
Optional Debugging Information
- I have tried reproducing the issue on more than one Python version and/or implementation.
- I have tried reproducing the issue on more than one message broker and/or result backend.
- I have tried reproducing the issue on more than one version of the message broker and/or result backend.
- I have tried reproducing the issue on more than one operating system.
- I have tried reproducing the issue on more than one workers pool.
- I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled.
- I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies.
Downgrading to 5.1.2 removes the issue.
Related Issues and Possible Duplicates
Related Issues
- None
Possible Duplicates
- None
Environment & Settings
Celery version: 5.2.1
pyomo version: 6.1.2
celery report
Output:
Steps to Reproduce
I have already debugged this; the explanation is below.
We are running tasks which use the pyomo solver. For every log line issued by its `tee.py:writeOutputBuffer` method, the solver reports:
2021-12-04 09:51:11,264 ERROR [tee.py:writeOutputBuffer] Output stream (<celery.utils.log.LoggingProxy object at 0x134a5edc0>) closed before all output was written to it. The following was left in the output buffer: '\n'
An example of an original message which pyomo is trying to log (note that it contains newline characters, including one at the end):
`' 10 10 3481.4695 877 10430.2484 3473.1362 11931 66.70%\n Impl Bds: 172\n'`
This is the `writeOutputBuffer` method in pyomo:
```python
def writeOutputBuffer(self, ostreams):
    if not self.encoding:
        ostring, self.output_buffer = self.output_buffer, b''
    elif self.buffering == 1:
        EOL = self.output_buffer.rfind(self.newlines or '\n') + 1
        ostring = self.output_buffer[:EOL]
        self.output_buffer = self.output_buffer[EOL:]
    else:
        ostring, self.output_buffer = self.output_buffer, ''
    if not ostring:
        return
    for stream in ostreams:
        try:
            written = stream.write(ostring)
        except:
            written = 0
        if written and not self.buffering:
            stream.flush()
        # Note: some derived file-like objects fail to return the
        # number of characters written (and implicitly return None).
        # If we get None, we will just assume that everything was
        # fine (as opposed to tossing the incomplete write error).
        if written is not None and written != len(ostring):
            logger.error(
                "Output stream (%s) closed before all output was "
                "written to it. The following was left in "
                "the output buffer:\n\t%r" % (
                    stream, ostring[written:],))
```
As you can see, it checks whether the number of characters reported as written by the stream (which here is Celery's `LoggingProxy` object) matches the length of the original message.
The problem is caused by the `celery.utils.log.LoggingProxy.write` method, which `rstrip`s the newline characters `\n` from the log message. This means that `write` returns an integer smaller than the length of the original message, and that causes pyomo to complain with an error log. Since this happens for every log line (thousands of them per run), it is extremely annoying :).
```python
def write(self, data):
    # type: (AnyStr) -> int
    """Write message to logging object."""
    if _in_sighandler:
        safe_data = safe_str(data)
        print(safe_data, file=sys.__stderr__)
        return len(safe_data)
    if getattr(self._thread, 'recurse_protection', False):
        # Logger is logging back to this file, so stop recursing.
        return 0
    if data and not self.closed:
        self._thread.recurse_protection = True
        try:
            safe_data = safe_str(data).rstrip('\n')
            if safe_data:
                self.logger.log(self.loglevel, safe_data)
                return len(safe_data)
        finally:
            self._thread.recurse_protection = False
    return 0
```
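The mismatch can be reproduced in isolation. The following is a minimal sketch, not Celery code: the hypothetical `StrippingProxy` class below only mimics the `rstrip('\n')` behavior of `LoggingProxy.write` so that pyomo's length check can be seen failing on its own.

```python
import logging

logger = logging.getLogger("demo")

class StrippingProxy:
    """Hypothetical stand-in mimicking LoggingProxy.write's stripping."""

    def write(self, data):
        stripped = data.rstrip('\n')
        if stripped:
            logger.info(stripped)
        # Returning the *stripped* length is the behavior pyomo trips on.
        return len(stripped)

message = "solver progress line\n"
written = StrippingProxy().write(message)

# pyomo's check: written != len(message) triggers the "closed before
# all output was written" error, even though nothing was lost.
print(written, len(message))  # 20 21
```

Nothing here is broken from the proxy's point of view; the message was logged. It is only the return value that makes the caller believe the write was partial.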
I am not sure why this is done, but it changes the original message.
Is having a newline prohibited, or does it cause some issues?
I am simply curious why we do that, as it makes pyomo log a lot of errors since celery does not report writing the entire contents of the message...
Possible 'fix':
- go back to not stripping the message?
- make it optional?
- is it required only for bytes data (looking at the recent PRs it says something about data being bytes etc.)?
If you could explain why the message is stripped of newline chars and suggest a way out, that would be nice :).
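One more possible direction, purely as a sketch and not a proposed patch: keep stripping the newline before handing the text to the logger (so log records stay single-line), but report the full input length back to the caller, so file-like consumers such as pyomo never see a "partial write". The `PatchedProxy` class and names below are illustrative only.

```python
import logging

logger = logging.getLogger("demo")

class PatchedProxy:
    """Sketch only: log stripped text but acknowledge the full write."""

    loglevel = logging.INFO

    def write(self, data):
        safe_data = data.rstrip('\n')
        if safe_data:
            logger.log(self.loglevel, safe_data)
        # Report the original length so callers comparing the return
        # value against len(data) are satisfied.
        return len(data)

print(PatchedProxy().write("hello\n"))  # 6
```

This keeps the logger output unchanged while making the proxy honor the usual `TextIOBase.write` contract of returning the number of characters accepted.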
Required Dependencies
- Minimal Python Version: N/A or Unknown
- Minimal Celery Version: N/A or Unknown
- Minimal Kombu Version: N/A or Unknown
- Minimal Broker Version: N/A or Unknown
- Minimal Result Backend Version: N/A or Unknown
- Minimal OS and/or Kernel Version: N/A or Unknown
- Minimal Broker Client Version: N/A or Unknown
- Minimal Result Backend Client Version: N/A or Unknown
Python Packages
pip freeze
Output:
Other Dependencies
N/A
Minimally Reproducible Test Case
Expected Behavior
The message is logged as provided by the user; its contents are not changed by celery (or the stripping is made optional).
Actual Behavior
Celery removes the `\n` newline characters from the log message, changing its contents and under-reporting the number of characters written.