Using Fluentd and out_http plugin versions
OS version: Debian 11
Bare Metal or Within Docker or Kubernetes or other: official Docker image
Problem

I use a `compress gzip` buffer (built-in, no plugin) and `compress_request true` with this http output plugin. Fluentd attempts to gunzip the buffer from disk, which is then recompressed by this plugin.
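For reference, a minimal config sketch of the setup described above (the match tag, endpoint URL, and buffer path are placeholders, not taken from my actual deployment):

```
<match app.logs>
  @type http
  endpoint https://example.com/ingest   # placeholder endpoint
  compress_request true                 # plugin gzips the request body

  <buffer>
    @type file
    path /var/log/fluent/buffer         # placeholder path
    compress gzip                       # chunks stored gzip-compressed on disk
  </buffer>
</match>
```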
Steps to replicate
Expected Behavior or What you need to ask
According to the Fluentd doc https://docs.fluentd.org/configuration/buffer-section#:~:text=Fluentd%20will%20decompress,plugin%20as%20is):

> Fluentd will decompress these compressed chunks automatically before passing them to the output plugin (The exceptional case is when the output plugin can transfer data in compressed form. In this case, the data will be passed to the plugin as is).
Can we somehow let Fluentd know that this output plugin can transfer data in compressed form, and skip the decompression/recompression cycle?
The main reason we came to this is that Fluentd sometimes errors when decompressing the gzipped buffer chunks, and it chokes on them under the same up-to-one-week retry logic we put in place for cases like network loss. We'd rather Fluentd pass the bad chunks to this plugin, which would send them as-is to my endpoint in the cloud, where we have all the processing power needed to attempt to recover them or discard them without choking up the pipeline.
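To illustrate the redundant work being described (using Ruby's stdlib `Zlib`; the payload is made up): with `compress gzip` plus `compress_request true`, each chunk currently goes through one gunzip and one fresh gzip, while a pass-through would send the stored bytes unchanged:

```ruby
require 'zlib'

# What the buffer stores on disk: gzip-compressed chunk data.
stored = Zlib.gzip('{"log":"hello"}' * 100)

# Current path: Fluentd gunzips the chunk, then the plugin re-gzips it
# for the request body (compress_request true).
decompressed = Zlib.gunzip(stored)
request_body = Zlib.gzip(decompressed)

# Desired path: hand the stored bytes to the HTTP request unchanged.
passthrough_body = stored

# Both bodies decode to the same payload; the second skips two passes,
# and a chunk that fails to gunzip would no longer stall the pipeline.
Zlib.gunzip(request_body) == Zlib.gunzip(passthrough_body)  # => true
```

This also shows why a corrupt `stored` only hurts the current path: the pass-through never calls `Zlib.gunzip` on it locally.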