# Logstash configuration
Logstash configuration refers to the process of defining the input, filter, and output pipelines for the Logstash data processing engine.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md] It allows users to collect data from various sources, transform it, and send it to a designated destination, such as Elasticsearch.
## Pipeline Structure
A Logstash configuration file specifies a pipeline that consists of three distinct sections:
- Input: Defines how data enters the pipeline.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]
- Filter: Modifies or transforms the data as it passes through the pipeline.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]
- Output: Defines where the data is sent after it has been processed.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]
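Put together, a minimal pipeline file has this three-part shape. This is an illustrative sketch using the stdin, mutate, and stdout plugins, which are not part of the source configuration:

```conf
input {
  stdin { }                    # read events from standard input
}

filter {
  mutate {
    add_field => { "env" => "demo" }   # example transformation: tag each event
  }
}

output {
  stdout {
    codec => rubydebug         # print each processed event in readable form
  }
}
```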
## Implementation
Logstash is typically executed as a service using a specific configuration file. For example, to run Logstash with a configuration file named logstash-test.conf, the following command is used:^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]
```shell
logstash -f /etc/logstash/logstash-test.conf
```
When deployed in a Docker container, the configuration file is often mounted into the container from the host machine using the -v flag to ensure persistence and easy updates.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]
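Such a bind mount can be sketched as follows; the container name, image tag, and in-container pipeline path are illustrative assumptions, not taken from the source:

```shell
# Sketch only: the image tag and in-container path are assumptions.
# The -v flag mounts the host config into the container's pipeline directory.
docker run -d --name logstash \
  -v /etc/logstash/logstash-test.conf:/usr/share/logstash/pipeline/logstash.conf \
  logstash:7.6.2
```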
## Example Configurations
Logstash configurations vary based on the target environment and data source.
### Testing Environment
In a testing environment, the configuration might ingest data from a Kafka topic.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]
```conf
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_test"
    topics_pattern => "k8s-fb-test-.*"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-test-%{+YYYY.MM.dd}"
  }
}
```
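One pitfall in the index name above: the date reference uses Joda-time tokens, where lowercase `dd` means day-of-month while uppercase `DD` means day-of-year. A daily index pattern such as `k8s-test-%{+YYYY.MM.dd}` corresponds to the value GNU `date` prints with:

```shell
# Logstash's %{+YYYY.MM.dd} resolves to year.month.day-of-month,
# the same value GNU date formats as:
date +%Y.%m.%d
```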
### Production Environment
In a production environment the configuration is nearly identical, differing only in the consumer group ID, the topic pattern that matches production data streams, and the target index name.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]
```conf
input {
  kafka {
    bootstrap_servers => "10.4.7.11:9092"
    client_id => "10.4.7.200"
    consumer_threads => 4
    group_id => "k8s_prod"
    topics_pattern => "k8s-fb-prod-.*"
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["10.4.7.12:9200"]
    index => "k8s-prod-%{+YYYY.MM.dd}"
  }
}
```
## Related Concepts
- [[Elasticsearch]]
- [[Kibana]]
- [[Kafka]]
- [[Filebeat]]