L4 reverse proxy and high availability with Nginx and Keepalived

L4 reverse proxy and high availability with Nginx and Keepalived involves configuring a pair of servers (nodes) to load balance traffic while ensuring that a Virtual IP (VIP) remains available even if one of the nodes fails^[400-devops-06-kubernetes-k8s-paas-02-k8s.md]. This setup is typically placed in front of backend services like Kubernetes API Servers^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].

Architecture

The architecture generally consists of two or more nodes running both Nginx (for load balancing) and Keepalived (for high availability).

  • Nginx: Configured to operate at Layer 4 (TCP stream) to forward traffic to backend endpoints^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].
  • Keepalived: Manages the shared Virtual IP and monitors the health of the local Nginx service to trigger failover if necessary^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].

Nginx L4 Configuration

Nginx is configured using the stream module to handle TCP traffic. An upstream block defines the backend servers, and a server block listens on a specific port to proxy the traffic^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].

Example Configuration

This configuration listens on port 7443 and load balances traffic across two backend servers on port 6443^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].

```nginx
stream {
    upstream kube-apiserver {
        server 10.4.7.21:6443     max_fails=3 fail_timeout=30s;
        server 10.4.7.22:6443     max_fails=3 fail_timeout=30s;
    }
    server {
        listen 7443;
        proxy_connect_timeout 2s;
        proxy_timeout 900s;
        proxy_pass kube-apiserver;
    }
}
```
  • max_fails and fail_timeout: Define how many connection attempts must fail within the fail_timeout window before the server is marked unavailable (here, 3 failures within 30 seconds)^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].
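
A quick way to sanity-check the proxy path is a plain TCP connect against the listener. A minimal sketch in Python; the address 10.4.7.10:7443 is the example VIP and proxy port from this page, and the function name is ours:

```python
# Minimal sketch (not from the source): check whether a TCP endpoint such as
# the Nginx stream listener accepts connections within a timeout.
import socket

def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage against the example VIP (requires the lab network to be up):
# tcp_port_open("10.4.7.10", 7443)
```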

Keepalived High Availability

Keepalived implements the VRRP (Virtual Router Redundancy Protocol) to manage a Virtual IP (VIP). One node acts as the MASTER holding the VIP, while others act as BACKUP^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].
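
The election rule can be summarized as: among the healthy nodes, the one with the highest priority holds the VIP. A simplified model in Python (illustrative only; real VRRP runs over multicast advertisements handled by Keepalived, and the names here are ours):

```python
# Simplified model of VRRP master election (not the actual protocol):
# the healthy node with the highest priority holds the VIP; a failed
# health check removes a node from contention, triggering failover.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    priority: int
    healthy: bool = True

def elect_master(nodes):
    """Return the healthy node with the highest priority, or None."""
    candidates = [n for n in nodes if n.healthy]
    return max(candidates, key=lambda n: n.priority, default=None)

master = Node("10.4.7.11", priority=100)
backup = Node("10.4.7.12", priority=90)

assert elect_master([master, backup]) is master
master.healthy = False           # e.g. chk_nginx fails on the master
assert elect_master([master, backup]) is backup
```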

Health Check Script

A custom script is used to monitor the Nginx proxy port. If the port is not listening, the script exits with an error code, causing Keepalived to reduce the node's priority and trigger a failover^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].

```bash
#!/bin/bash
CHK_PORT=$1
if [ -n "$CHK_PORT" ]; then
        # Count listening TCP sockets on the given port (":PORT " avoids
        # matching the port number as a substring elsewhere in the output).
        PORT_PROCESS=$(ss -lnt | grep -c ":$CHK_PORT ")
        if [ "$PORT_PROCESS" -eq 0 ]; then
                echo "Port $CHK_PORT Is Not Used, End."
                exit 1
        fi
else
        echo "Check Port Can't Be Empty!"
        exit 1
fi
```
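
The script is wired into Keepalived with a vrrp_script block, which the track_script section of the vrrp_instance then references. A hedged sketch; the script path, interval, and weight below are assumptions for illustration, not values from the source:

```
vrrp_script chk_nginx {
    script "/etc/keepalived/check_port.sh 7443"   # path is an assumption
    interval 2    # run the check every 2 seconds (illustrative)
    weight -20    # lower priority by 20 on failure (illustrative)
}
```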

Master Configuration

On the master node (e.g., 10.4.7.11), the state is set to MASTER with a higher priority^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 251
    priority 100
    advert_int 1
    mcast_src_ip 10.4.7.11
    nopreempt

    authentication {
        auth_type PASS
        auth_pass 11111111
    }
    track_script {
         chk_nginx
    }
    virtual_ipaddress {
        10.4.7.10
    }
}
```

Backup Configuration

On the backup node (e.g., 10.4.7.12), the state is set to BACKUP with a lower priority^[400-devops-06-kubernetes-k8s-paas-02-k8s.md].

```
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 251
    priority 90
    ...
    virtual_ipaddress {
        10.4.7.10
    }
}
```
  • nopreempt (on Master): Prevents a recovered node from immediately reclaiming the VIP on the strength of its higher priority; the VIP stays on its current holder until manual intervention or another failover^[400-devops-06-kubernetes-k8s-paas-02-k8s.md]. Note that Keepalived only honors nopreempt when the instance's initial state is set to BACKUP.
  • [[Reverse proxy]]
  • [[High availability]]
  • [[VRRP]]
  • Kubernetes

Sources

^[400-devops-06-kubernetes-k8s-paas-02-k8s.md]