Status: Closed
Labels: enhancement (New feature or request)
Description
Proposal:
Allow specifying the maximum request body size accepted by the InfluxDB instance, and have the client split the data points into correctly sized batches.
Current behavior:
If you try to send a larger batch of data points with

client.write_points(points)

you get the error message:
<html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.17.4</center>
</body>
</html>
This already happens when the plain data points amount to only 1,500,000 bytes.
Desired behavior:
The client checks the size of the request body and splits the data across multiple requests if necessary.
Use case:
We do not own the InfluxDB instance and cannot change its max body size.
It is also hard to calculate the byte size of the body ourselves, because the body is constructed inside this library.
Furthermore, the byte sizes of our data points vary widely, so it is very hard to find a single batch size that works for all of them.
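As a workaround sketch of the desired behavior: once the points are serialized to line protocol, they can be greedily grouped into batches whose encoded size stays under a byte budget, and each batch written separately. The helper below is a minimal, hypothetical illustration (the function name, the `max_body_bytes` budget, and the assumption that each line is one serialized point are all mine, not part of the library's API):

```python
def split_by_bytes(lines, max_body_bytes):
    """Greedily group line-protocol lines into batches whose encoded size
    (lines joined by newlines) stays at or under max_body_bytes.

    A single line larger than the budget still becomes its own batch,
    since it cannot be split further at this level.
    """
    batches, current, size = [], [], 0
    for line in lines:
        n = len(line.encode("utf-8")) + 1  # +1 for the newline separator
        if current and size + n > max_body_bytes:
            batches.append(current)
            current, size = [], 0
        current.append(line)
        size += n
    if current:
        batches.append(current)
    return batches


# Example: five 10-byte lines with a 25-byte budget -> batches of 2, 2, 1
batches = split_by_bytes(["x" * 10] * 5, max_body_bytes=25)
print([len(b) for b in batches])
```

The caller would then issue one write per batch (e.g. one `client.write_points(...)` call each); having this logic inside the client itself, driven by a configurable body-size limit, is what this issue proposes.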