Logs are stopping after 10MB and never continue · Issue #1489 · docker-java/docker-java

@ohadasulin

Description

I want to dump a service's logs to a file, but the logs are truncated after 10MB of data on the stream.

My code:

DefaultDockerClientConfig dockerClientConfig = DefaultDockerClientConfig.createDefaultConfigBuilder()
    .withDockerHost("tcp://x.x.x.x:2375")
    .withDockerTlsVerify(false)
    .build();

DockerHttpClient httpClient = new ApacheDockerHttpClient.Builder()
    .dockerHost(dockerClientConfig.getDockerHost())
    .build();

DockerClient dockerClient = DockerClientBuilder
    .getInstance(dockerClientConfig)
    .withDockerHttpClient(httpClient)
    .build();

String serviceName = "kafka_kafka";
final LogSwarmObjectCmd logSwarmObjectCmd = dockerClient.logServiceCmd(serviceName)
    .withStdout(Boolean.TRUE)
    .withFollow(true);

log.debug("Service {}, dumping logs start", serviceName);
LogToFileResultCallback resultCallback =
    logSwarmObjectCmd.exec(new LogToFileResultCallback(Paths.get("/tmp"), serviceName));
Thread.sleep(100000000);

In the callback, I just write each frame to a file like this:

Writer writer = ...

String strToWrite = new String(frame.getPayload(), charset);
writer.write(strToWrite);
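
For reference, here is a minimal sketch of what such a callback could look like, assuming a subclass of docker-java's ResultCallback.Adapter<Frame>; the class name, constructor signature, and file naming are illustrative guesses, not the exact code from this report:

import com.github.dockerjava.api.async.ResultCallback;
import com.github.dockerjava.api.model.Frame;

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical callback: appends every frame to <dir>/<serviceName>.stdout.log
public class LogToFileResultCallback extends ResultCallback.Adapter<Frame> {

    private final BufferedWriter writer;

    public LogToFileResultCallback(Path dir, String serviceName) throws IOException {
        this.writer = Files.newBufferedWriter(dir.resolve(serviceName + ".stdout.log"));
    }

    @Override
    public void onNext(Frame frame) {
        try {
            // Decode the raw payload and append it to the log file
            writer.write(new String(frame.getPayload(), StandardCharsets.UTF_8));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    @Override
    public void onComplete() {
        flushQuietly();
        super.onComplete();
    }

    @Override
    public void onError(Throwable throwable) {
        flushQuietly();
        super.onError(throwable);
    }

    private void flushQuietly() {
        try {
            writer.flush();
        } catch (IOException ignored) {
            // best-effort flush when the stream ends
        }
    }
}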

My file ends as if it stopped mid-write:

-bash-4.2# tail -n 1 /tmp/kafka_kafka.stdout.log 
[2020-11-09 10:07:14,745] INFO [GroupCoordinator 1001]: Removed 0 offsets associated with delet
-bash-4.2# 

The file consistently stops growing at 10MB:

-bash-4.2# ls -lh /tmp/kafka_kafka.stdout.log
-rw-r--r--. 1 root root 10M Nov 11 12:54 /tmp/kafka_kafka.stdout.log

Is there any way to remove this limitation? The full log should be much bigger (~80MB).
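
One way to narrow down where the truncation happens (a diagnostic sketch, not a confirmed fix): count the bytes delivered to onNext independently of the file, and flush the writer on every frame so buffering cannot hide the tail. If the counter also stalls at ~10MB, the limit is on the stream/transport side rather than in the file-writing code. The fragment below is meant to replace onNext in the hypothetical callback sketched above; bytesReceived is a new field:

import java.util.concurrent.atomic.AtomicLong;

// New field on the callback: total bytes actually delivered by the log stream
private final AtomicLong bytesReceived = new AtomicLong();

@Override
public void onNext(Frame frame) {
    long total = bytesReceived.addAndGet(frame.getPayload().length);
    try {
        writer.write(new String(frame.getPayload(), StandardCharsets.UTF_8));
        writer.flush(); // flush per frame so the file reflects everything received so far
    } catch (IOException e) {
        throw new UncheckedIOException(e);
    }
    // Report roughly once per megabyte crossed
    if (total % (1024 * 1024) < frame.getPayload().length) {
        System.out.printf("Received ~%d MB from the log stream so far%n", total / (1024 * 1024));
    }
}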
