Bug Report
Describe the bug
I have observed segfaults with the following (incomplete) backtrace on 1.7.2. They are quite rare, so I don't have a reliable way to reproduce them. My guess is that we are passing bad data to OpenSSL.
```
[engine] caught signal (SIGSEGV)
#0  0x7f072535bdf0      in  ???() at ???:0
#1  0x7f072535c292      in  ???() at ???:0
#2  0x7f07255ab876      in  ???() at ???:0
#3  0x7f07255ac7b0      in  ???() at ???:0
#4  0x7f07255aca2f      in  ???() at ???:0
#5  0x7f07255bef81      in  ???() at ???:0
#6  0x7f07255bf0b2      in  ???() at ???:0
#7  0x560fc3384609      in  tls_net_write() at src/tls/openssl.c:382
#8  0x560fc3384c0a      in  flb_tls_net_write_async() at src/tls/flb_tls.c:257
#9  0x560fc338e483      in  flb_io_net_write() at src/flb_io.c:362
#10 0x560fc339095f      in  flb_http_do() at src/flb_http_client.c:1147
#11 0x560fc33f63fc      in  cb_splunk_flush() at plugins/out_splunk/splunk.c:236
#12 0x560fc33624da      in  output_pre_cb_flush() at include/fluent-bit/flb_output.h:466
#13 0x560fc3800fc6      in  co_init() at lib/monkey/deps/flb_libco/amd64.c:117
```
It's very possible that the Splunk instance is behaving badly here, but I don't think that should cause crashes in Fluent Bit itself.
Your Environment
- Version used: 1.7.2
- Environment name and version (e.g. Kubernetes? What version?): Kubernetes 1.16.13
- Filters and plugins: Tail, Kubernetes, Splunk