DataFrameClient retry on batches · Issue #378 · influxdata/influxdb-python
This repository was archived by the owner on Oct 29, 2024. It is now read-only.
DataFrameClient retry on batches #378
Closed
@dirkdevriendt

Description

When writing medium to large dataframes (50K datapoints and up) we sometimes run into server errors (HTTP 500, timeouts, etc.). I understand this could be considered outside the scope of the client library, but if the error is caught outside the batch loop, we can't know which data was uploaded and have to restart the upload for the entire dataframe.

Ideally, when using the batch_size option on the DataFrameClient.write_points function, it would be aware of these errors and retry a batch if needed. There are a few retry decorators out there that could be of use (e.g. https://github.com/d0ugal/retrace).
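For illustration, here is a minimal, library-agnostic sketch of what per-batch retry could look like. The `retry` decorator stands in for what a dependency like retrace would provide, and `write_batch` is a hypothetical callable that the caller would wrap around `DataFrameClient.write_points` — neither is part of the influxdb-python API today.

```python
import time


def retry(tries=3, delay=1.0, backoff=2.0, exceptions=(Exception,)):
    """Sketch of a retry decorator with exponential backoff.

    A real implementation would come from a library such as retrace;
    this stand-in just re-invokes the function up to `tries` times.
    """
    def decorator(func):
        def wrapper(*args, **kwargs):
            wait = delay
            for attempt in range(tries):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == tries - 1:
                        raise  # retries exhausted: propagate the error
                    time.sleep(wait)
                    wait *= backoff
        return wrapper
    return decorator


def write_dataframe_in_batches(write_batch, dataframe, batch_size=50000, tries=3):
    """Split `dataframe` into row slices and write each slice with retries.

    `write_batch` is a hypothetical callable taking one slice, e.g. a
    lambda wrapping DataFrameClient.write_points. Because retries happen
    per batch, a transient HTTP 500 or timeout only re-sends the failing
    slice, not the whole dataframe.
    """
    guarded = retry(tries=tries, delay=0.1, exceptions=(IOError,))(write_batch)
    for start in range(0, len(dataframe), batch_size):
        guarded(dataframe[start:start + batch_size])
```

The key design point is that the retry boundary sits inside the batch loop, so successfully written batches are never re-uploaded when a later batch fails transiently.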

Any thoughts? Would this be a valuable addition? Would you mind taking a dependency on a retry decorator?
