Kubernetes high availability with keepalived and nginx reverse proxy

Kubernetes high availability with keepalived and nginx reverse proxy refers to the architectural practice of deploying multiple API Server instances behind a load balancing layer to ensure cluster resilience. In this setup, [[Nginx]] acts as a Layer 4 (L4) reverse proxy for traffic distribution, while [[Keepalived]] manages a Virtual IP (VIP) to provide automatic failover.^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]

Architecture Overview

In a high-availability (HA) Kubernetes deployment, the control plane components are distributed across multiple nodes to eliminate single points of failure. The API Server is the cluster's central management endpoint, and making it highly available typically involves two components deployed on dedicated reverse proxy nodes (often referred to as OP or operator nodes):

  1. Nginx: Load balances traffic across the backend API Server instances.
  2. Keepalived: Manages the shared Virtual IP (VIP) to ensure the entry point remains available even if a proxy node fails.^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]

This architecture separates the control plane management from the infrastructure networking layer.^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]
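The resulting traffic path can be sketched as follows (ports match this setup; node counts are illustrative):

```
 kubectl / kubelet clients
           │
           ▼
    Virtual IP (VIP) ── held by one proxy node, managed by Keepalived
           │
           ▼
  Nginx L4 proxy :7443 ── active node; a backup proxy node stands by
           │
   ┌───────┼───────┐
   ▼       ▼       ▼
 kube-apiserver instances, each listening on :6443
```

If the active proxy node fails, Keepalived moves the VIP to the backup node, so clients keep using the same entry address.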

Nginx Configuration (L4 Reverse Proxy)

Nginx is configured as a Layer 4 (TCP) stream proxy. It listens on a specific port (e.g., 7443) and forwards connections to the upstream Kubernetes API Servers (typically running on port 6443).^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]

Upstream Configuration

An upstream block defines the pool of backend API Server instances. The configuration typically includes passive health-check parameters such as max_fails and fail_timeout, which temporarily remove a backend from rotation after repeated connection failures.^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]
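Such an upstream block might look like the following sketch; the backend IP addresses, the upstream name, and the exact parameter values are assumptions:

```nginx
# Pool of backend kube-apiserver instances (addresses are placeholders).
# This block lives inside the stream {} context of nginx.conf.
upstream kube-apiserver {
    server 192.168.1.11:6443 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:6443 max_fails=3 fail_timeout=30s;
    server 192.168.1.13:6443 max_fails=3 fail_timeout=30s;
}
```

With these values, a backend that fails 3 connection attempts is taken out of rotation for 30 seconds before Nginx retries it.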

Server Block

The server block defines the listening port for the proxy. It passes requests to the defined upstream group via the proxy_pass directive.^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]
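A minimal sketch of the stream server block, assuming port 7443 and an upstream group named kube-apiserver defined in the same stream context:

```nginx
# Inside the stream {} context; assumes an upstream group
# named "kube-apiserver" is defined alongside this block.
server {
    listen 7443;                  # entry port exposed behind the VIP
    proxy_pass kube-apiserver;    # forward TCP connections to the pool
    proxy_connect_timeout 2s;
}
```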

Keepalived Configuration

Keepalived is used to implement the Virtual Router Redundancy Protocol (VRRP). It assigns a floating Virtual IP (VIP) to the active "master" node.

Master Node Configuration

On the designated master node, Keepalived is configured with:

  • State: MASTER.
  • Priority: a higher numerical value (e.g., 100).
  • Virtual Router ID: a shared identifier between nodes (e.g., 251).
  • Authentication: a shared password for cluster security.^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]
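A sketch of /etc/keepalived/keepalived.conf on the master node, using the example values above; the interface name, VIP address, and password are placeholders:

```conf
! Master node configuration (illustrative values)
vrrp_instance VI_1 {
    state MASTER
    interface eth0              ! host network interface
    virtual_router_id 251       ! must match on every node in the group
    priority 100                ! higher than any backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass K8sVip01      ! placeholder shared password
    }
    virtual_ipaddress {
        192.168.1.100           ! the floating VIP (placeholder)
    }
}
```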

Backup Node Configuration

On the backup node(s), the configuration differs in:

  • State: BACKUP.
  • Priority: a lower numerical value (e.g., 90).
  • Interface: may differ depending on the host's network interface (e.g., eth0 vs ens33).^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]
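The backup node's counterpart differs only in state, priority, and possibly the interface name; addresses and the password remain placeholders:

```conf
! Backup node: same virtual_router_id and password, lower priority
vrrp_instance VI_1 {
    state BACKUP
    interface ens33             ! interface name may differ per host
    virtual_router_id 251       ! same as the master
    priority 90                 ! lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass K8sVip01      ! must match the master
    }
    virtual_ipaddress {
        192.168.1.100           ! same VIP as the master
    }
}
```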

Health Checking

To ensure the VIP is only assigned to a node with a functioning Nginx proxy, a VRRP script is used. This script checks whether the Nginx proxy port (e.g., 7443) is listening. If the check fails, Keepalived reduces the node's VRRP priority, allowing the backup node to take over the VIP.^[400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md]
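A sketch of how this hooks into the Keepalived configuration; the script path and weight value are assumptions, and the check script itself can be as simple as `ss -lnt | grep -q ':7443 '`, exiting non-zero when the port is not listening:

```conf
! Hypothetical health-check hook for the Nginx proxy port
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   ! hypothetical check script
    interval 2                                ! run every 2 seconds
    weight -30                                ! subtract 30 from priority on failure
}

vrrp_instance VI_1 {
    ! ... state, interface, priority, VIP, etc. ...
    track_script {
        chk_nginx
    }
}
```

With weight -30, a failed check drops the master's effective priority (e.g., 100 - 30 = 70) below the backup's 90, triggering the failover.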

Sources

  • 400-devops__06-Kubernetes__k8s-paas__02.企业部署实战_K8S.md