Kubernetes Metrics Server

Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data^[400-devops__06-Kubernetes__devops-helm__terraform-helm__helm__charts__metrics-server__README.md]. It is a scalable, efficient source of container resource metrics (such as CPU and memory), essential for monitoring cluster health and enabling automation.

It collects metrics from Kubernetes nodes via kubelet and serves them through the Kubernetes API, typically at /apis/metrics.k8s.io^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
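Because the Metrics API is served through the aggregation layer, it can be queried directly with `kubectl get --raw` once Metrics Server is running. A minimal sketch (requires a working cluster; the namespace is illustrative):

```shell
# List node metrics from the aggregated Metrics API
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# List pod metrics for a specific namespace (kube-system used as an example)
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"
```

This is the same endpoint that kubectl top and the Horizontal Pod Autoscaler consume.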

Architecture and Data Flow

Metrics Server acts as a cluster-level data aggregator^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md]. It registers with the API Server via kube-aggregator, allowing standard Kubernetes tools to query metrics just like any other API resource^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].

Data collection follows this path:

  1. Source: kubelet on each node collects raw metrics from the underlying containers and nodes^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
  2. Collection: Metrics Server scrapes data from all kubelets^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
  3. Storage: Metrics are held in the memory of the Metrics Server^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
  4. Access: Data is exposed via the Metrics API for consumption by other components^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].

Because data is stored only in memory, Metrics Server does not retain historical data; a restart results in data loss^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].

Key Characteristics

  • Non-Persistent: As an in-memory metrics store, it does not save data to disk or retain historical logs^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
  • Extension-Based: It is not built into the core Kubernetes distribution but is deployed as an extension or add-on^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md]. For example, environments like Docker Desktop require manual installation, whereas managed services like GKE may provide pre-configured monitoring solutions^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
  • Network Requirements: It must be able to reach the kubelet on every node to gather statistics^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].

Installation and Configuration

Installation

Metrics Server is typically deployed within the kube-system namespace^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md]. Installation often involves applying a manifest that creates the necessary Deployment, ServiceAccount, RBAC roles, and an APIService to register the Metrics endpoint^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
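A typical installation applies the upstream manifest published by the metrics-server project, then verifies that the APIService has registered. A sketch, assuming cluster access and the project's standard release URL:

```shell
# Apply the upstream manifest: creates the Deployment, ServiceAccount,
# RBAC roles, and the v1beta1.metrics.k8s.io APIService in kube-system
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Confirm the Metrics API has registered and become Available
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl -n kube-system get deployment metrics-server
```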

TLS Configuration

A common installation issue, particularly in local development environments (e.g., Docker Desktop), involves TLS certificate verification. To resolve connection failures between Metrics Server and kubelets, the container argument --kubelet-insecure-tls is often used^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md]. This parameter disables CA verification, which is acceptable for testing but not recommended for production^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
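In manifest form, the flag is appended to the container's argument list. A minimal sketch of the relevant Deployment fragment (testing only, since it skips kubelet CA verification):

```yaml
# Excerpt from the metrics-server Deployment spec (not a complete manifest)
containers:
  - name: metrics-server
    args:
      - --kubelet-insecure-tls   # disables CA verification; do not use in production
```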

Helm Configuration

When deploying via Helm, notable configuration parameters include^[400-devops__06-Kubernetes__devops-helm__terraform-helm__helm__charts__metrics-server__README.md]:

  • args: Command-line arguments passed to the Metrics Server container (e.g., --kubelet-insecure-tls).
  • hostNetwork.enabled: Determines whether hostNetwork mode is used.
  • image: Specifies the repository and tag for the Metrics Server image (default is k8s.gcr.io/metrics-server-amd64).
  • replicas: Sets the number of pods (default is 1).
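The parameters above map onto a values file. A hypothetical values.yaml sketch (the image tag shown is illustrative; check the chart's actual default):

```yaml
args:
  - --kubelet-insecure-tls   # testing only
hostNetwork:
  enabled: false
image:
  repository: k8s.gcr.io/metrics-server-amd64
  tag: v0.3.6                # illustrative; use the chart default
replicas: 1
```

Applied with something like `helm install metrics-server -f values.yaml <chart>`, where the chart reference depends on the repository in use.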

Use Cases and Integration

Resource Monitoring

The primary tool for human interaction with Metrics Server is kubectl top^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md]. This command relies on the Metrics Server to display resource usage.

  • Node Usage: kubectl top node displays CPU and memory utilization for cluster nodes^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
  • Pod Usage: kubectl top pods -n <namespace> displays resource consumption for pods^[400-devops-06-kubernetes-k8s-ithelp-day24-readme.md].
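The output is a simple table of current usage. An illustrative session (node name and figures are examples, not real measurements):

```shell
$ kubectl top node
NAME             CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
docker-desktop   250m         6%     1600Mi          40%
```

If Metrics Server is missing or unhealthy, this command fails with an error such as "Metrics API not available".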

Autoscaling

Metrics Server is a foundational prerequisite for the Horizontal Pod Autoscaler (HPA)^[400-devops-06-kubernetes-k8s-ithelp-day26-readme.md]. HPA requires current resource metrics (CPU/memory usage) to calculate the desired replica count for a workload^[400-devops-06-kubernetes-k8s-ithelp-day26-readme.md].
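The replica calculation the HPA performs from these metrics follows the scaling rule documented by Kubernetes: desiredReplicas = ceil(currentReplicas × currentMetricValue / desiredMetricValue). A minimal Python sketch (function name and inputs are illustrative):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Sketch of the HPA scaling rule:
    desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
    """
    return math.ceil(current_replicas * (current_metric / target_metric))

# e.g. 3 pods averaging 90% CPU against a 60% target scale to ceil(4.5) = 5
print(desired_replicas(3, 90.0, 60.0))
```

Note the real controller adds tolerances and stabilization windows on top of this formula; without Metrics Server, current_metric is simply unavailable.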

Without Metrics Server, kubectl top will fail, and HPA will not be able to scale based on standard resource metrics^[400-devops-06-kubernetes-k8s-ithelp-day26-readme.md]. For autoscaling based on custom metrics, other adapters like the Prometheus Adapter are required^[400-devops__06-Kubernetes__devops-helm__terraform-helm__helm__charts__metrics-server__README.md].
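For standard resource metrics, a typical HPA manifest targets CPU utilization sourced from the Metrics API. A hypothetical example (the Deployment name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa            # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # illustrative target workload
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60   # scale out above 60% average CPU
```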

Sources

  • 400-devops-06-kubernetes-k8s-ithelp-day24-readme.md
  • 400-devops-06-kubernetes-k8s-ithelp-day26-readme.md
  • 400-devops__06-Kubernetes__devops-helm__terraform-helm__helm__charts__metrics-server__README.md