Filebeat: setting elasticsearch index name per input
I’m trying to set up Filebeat so that two log sources end up in different indices of the target Logstash. All involved services (Filebeat, Logstash, Elasticsearch) are at version 8.12.0.
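One common approach (a sketch, not taken from the question itself — the paths, the `log_type` field name, and the input ids below are placeholder assumptions): tag each input in `filebeat.yml` with a custom field and route on that field in the Logstash pipeline.

```yaml
# filebeat.yml — two inputs, each tagged with a custom routing field
filebeat.inputs:
  - type: filestream
    id: app-logs                 # placeholder id
    paths:
      - /var/log/app/*.log       # placeholder path
    fields:
      log_type: app              # custom field used for routing in Logstash
    fields_under_root: true

  - type: filestream
    id: audit-logs
    paths:
      - /var/log/audit/*.log
    fields:
      log_type: audit
    fields_under_root: true

output.logstash:
  hosts: ["logstash:5044"]
```

On the Logstash side, the `elasticsearch` output's `index` option can then interpolate that field, e.g. `index => "%{[log_type]}-%{+YYYY.MM.dd}"`, so events from each input land in their own index.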
How does Filebeat store the state of reading or sending data?
I am reading about Filebeat in the ELK stack, and since I am a newbie, my question is: does the spooler also keep the actual data the harvester reads on disk, or does it only track state, i.e. how much data the harvester has read from the input or sent to the output?
Parsing the “message” field with Filebeat
I have a log that contains the following output in the message field:
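The log content itself is not shown above, but if the `message` field holds a JSON string (an assumption for this sketch), Filebeat's `decode_json_fields` processor can parse it into structured fields:

```yaml
# filebeat.yml — parse a JSON payload embedded in the "message" field
processors:
  - decode_json_fields:
      fields: ["message"]     # field(s) containing the JSON string
      target: ""              # "" merges the parsed keys into the event root
      overwrite_keys: true    # let parsed keys replace existing ones
```

If the payload is structured text rather than JSON, the `dissect` processor (with a tokenized pattern matching the actual log layout) is the usual alternative.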
Filebeat Error in Kubernetes when Parsing JSON Logs
Background:
I’m following a guide to configure Filebeat in Kubernetes for parsing JSON logs. The guide suggests using autodiscover to detect and configure the inputs automatically.
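A minimal hints-based autodiscover sketch for a Filebeat DaemonSet (the path pattern follows the documented default for the container input; treat the whole fragment as an assumption, since the guide being followed is not shown):

```yaml
# filebeat.yml — Kubernetes autodiscover with hints enabled, sketch
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*-${data.kubernetes.container.id}.log
```

With hints enabled, individual pods can then opt in to JSON parsing through `co.elastic.logs/*` annotations (for example `co.elastic.logs/json.keys_under_root: "true"`) instead of hard-coding per-container inputs.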