
Sidecar Logging Pattern with FileBeat

The sidecar logging pattern is a deployment strategy in containerized environments where a logging agent runs in its own container alongside the application container within the same Pod. This pattern effectively decouples the application logic from the logging infrastructure, allowing for standardized log collection without modifying the application code.^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]

Architecture

In a typical Kubernetes implementation using FileBeat, the sidecar pattern involves co-locating two containers in a single Pod definition^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]:

  1. Application Container: Runs the business logic (e.g., a Tomcat server) and writes its logs to a file on a volume shared with the sidecar.
  2. FileBeat Container: Runs a dedicated logging agent configured to read log files generated by the application container.

These two containers share a volume, typically an emptyDir, which is mounted at specific paths in both containers. The application writes logs to this volume, and the FileBeat sidecar reads from the same volume^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].

Implementation

The core mechanism relies on a shared file system between the containers.

  • Shared Volume: A volume, typically an emptyDir: {}, is declared in the Pod specification's volumes list.
  • Application Mount: The application container mounts this volume to a directory like /opt/tomcat/logs to write stdout.log^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].
  • Sidecar Mount: The FileBeat container mounts the same volume to a directory like /logm to access the log files^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].

This configuration makes new log lines available to the sidecar agent as soon as the application writes them.
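The shared-volume wiring described above can be sketched as a Pod manifest. This is a minimal illustration, not a production spec: the Pod name, image names, and the volume name "logs" are assumptions, while the two mount paths follow the examples in this section.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tomcat-with-filebeat          # illustrative name
spec:
  containers:
    - name: app                       # application container (e.g., Tomcat)
      image: tomcat:9                 # assumed image
      volumeMounts:
        - name: logs
          mountPath: /opt/tomcat/logs # application writes stdout.log here
    - name: filebeat                  # sidecar logging agent
      image: docker.elastic.co/beats/filebeat:7.17.0  # assumed version
      volumeMounts:
        - name: logs
          mountPath: /logm            # same volume, mounted read-only
          readOnly: true
  volumes:
    - name: logs
      emptyDir: {}                    # shared scratch volume, lives as long as the Pod
```

Because emptyDir is Pod-scoped, both containers see the same directory contents, and the volume is discarded when the Pod is deleted.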

FileBeat Configuration

The FileBeat container requires specific configuration to function correctly as a sidecar. This configuration is often injected via a startup script or a ConfigMap^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].

Key configuration elements include:

  • Input Source: The filebeat.inputs section defines the log paths to monitor, matching the shared volume mount point (e.g., /logm/*.log)^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].
  • Multiline Pattern: To keep Java stack traces and other multi-line entries together, a multiline pattern is configured (e.g., multiline.pattern: '^\d{2}', which treats a line beginning with two digits, such as a timestamp, as the start of a new log entry)^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].
  • Metadata: Fields such as topic or environment tags (e.g., topic: logm-${PROJ_NAME}) are added to the log events to facilitate downstream routing^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].
  • Output: Logs are shipped to an output backend like Kafka, where topics are dynamically constructed based on the environment (e.g., k8s-fb-$ENV-%{[topic]})^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md].
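Putting these elements together, a minimal filebeat.yml for the sidecar might look like the sketch below. The PROJ_NAME and ENV environment variables are assumed to be injected into the container, the Kafka broker address is a placeholder, and the negate/match settings are one common way to apply the two-digit pattern; the paths, pattern, and topic naming follow the examples above.

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /logm/*.log                   # shared volume mount point
    multiline.pattern: '^\d{2}'       # a new entry starts with two digits (timestamp)
    multiline.negate: true
    multiline.match: after            # non-matching lines attach to the previous entry
    fields:
      topic: logm-${PROJ_NAME}        # routing metadata; PROJ_NAME injected via env
    fields_under_root: true

output.kafka:
  hosts: ["kafka:9092"]               # placeholder broker address
  topic: 'k8s-fb-${ENV}-%{[topic]}'   # per-environment topic, built from the event field
```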

Data Flow

The lifecycle of a log entry in this architecture typically follows these steps^[400-devops__06-Kubernetes__k8s-paas__07.Promtheus监控k8s企业级应用.md]:

  1. The Application container writes log data to a file located in the shared volume (e.g., /opt/tomcat/logs/stdout.log).
  2. The FileBeat Sidecar container reads the file from the shared volume mount point (e.g., /logm/stdout.log).
  3. FileBeat processes the data (applying multiline rules and metadata) and ships it to a Message Queue (Kafka).
  4. Logstash consumes the topics from Kafka and forwards the structured data to Elasticsearch for indexing.
  5. Kibana queries Elasticsearch to visualize the log data for users.
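Steps 4 and 5 are typically wired up with a Logstash pipeline along the lines of the sketch below. The topic pattern mirrors the FileBeat output described earlier; the broker address, Elasticsearch address, and index naming are illustrative assumptions.

```
input {
  kafka {
    bootstrap_servers => "kafka:9092"        # placeholder broker address
    topics_pattern    => "k8s-fb-.*"         # consume topics from all environments
    codec             => "json"              # FileBeat ships events as JSON
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]   # placeholder ES address
    index => "k8s-fb-%{+YYYY.MM.dd}"         # daily index, illustrative naming
  }
}
```

Kibana then reads these indices directly from Elasticsearch; it needs no pipeline configuration of its own beyond an index pattern.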
