Two-tier proxy architecture for Kubernetes

The two-tier proxy architecture is a networking pattern used in Kubernetes clusters to route external traffic to internal services. It typically consists of a reverse proxy (such as Nginx) running on the cluster infrastructure nodes, which forwards traffic to the Ingress Controller (such as ingress-nginx) running inside the Kubernetes cluster.^[01-ingress__README.md]

Architecture Overview

In this setup, the first tier is a reverse proxy located on the infrastructure or operating system level. The second tier is the Kubernetes Ingress Controller, which handles the routing to specific services and pods within the cluster.^[01-ingress__README.md]

This design creates a separation of concerns:

  • Tier 1 (External Proxy): Listens for external traffic (e.g., on port 80 or 443) and performs initial forwarding.
  • Tier 2 (Ingress Controller): Receives traffic from the external proxy and routes it to Kubernetes Services based on Ingress rules.

Configuration

Tier 1: External Nginx Configuration

The external proxy (e.g., an Nginx instance on node hdss7-12) is configured to define an upstream backend pointing to the Kubernetes nodes where the Ingress Controller is hosted^[01-ingress__README.md].

The configuration uses an upstream block to specify the Ingress Controller's NodePort. For example, if the Ingress Controller is exposed on port 30035 (mapped from port 80) across nodes 10.4.7.21 and 10.4.7.22, the configuration would look like this^[01-ingress__README.md]:

upstream default_backend_nginx {
    # Kubernetes nodes hosting the Ingress Controller, reached via its NodePort
    server 10.4.7.21:30035 max_fails=3 fail_timeout=10s;
    server 10.4.7.22:30035 max_fails=3 fail_timeout=10s;
}
server {
    listen       80;
    server_name  *.od.com;

    location / {
        proxy_pass http://default_backend_nginx;
        # Preserve the original Host header so the Ingress Controller
        # can match its host-based routing rules
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

This configuration ensures that requests matching *.od.com are proxied to the Kubernetes Ingress Controller.^[01-ingress__README.md]

Tier 2: Kubernetes Ingress Controller

The second tier consists of the Ingress Controller deployed within the Kubernetes cluster^[01-ingress__README.md]. The controller is exposed inside the cluster through a Service of type NodePort.

In the provided example, the ingress-nginx-controller service listens internally on port 80 and exposes it via NodePort 30035^[01-ingress__README.md]. The Ingress resource then defines routing rules, such as directing traffic for myapp.od.com to the myapp-svc service^[01-ingress__README.md].
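A minimal sketch of the corresponding manifests, assuming the names, labels, and namespace defaults shown here (the selector label `app: ingress-nginx` and the Ingress name `myapp` are illustrative assumptions; only the service names, host, and ports come from the example above):

```yaml
# Service exposing the Ingress Controller via NodePort 30035
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
spec:
  type: NodePort
  selector:
    app: ingress-nginx        # assumed controller pod label
  ports:
    - name: http
      port: 80                # internal service port
      targetPort: 80          # controller container port
      nodePort: 30035         # port the external proxy forwards to
---
# Ingress rule routing myapp.od.com to myapp-svc
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                 # assumed resource name
spec:
  ingressClassName: nginx
  rules:
    - host: myapp.od.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp-svc
                port:
                  number: 80
```

Because the Service pins `nodePort: 30035`, the upstream block in the Tier-1 configuration can target that port on any node running the controller.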

Traffic Flow

  1. DNS Resolution: A user accesses a URL like http://myapp.od.com. The DNS system resolves the domain to the IP address of the external proxy^[01-ingress__README.md].
  2. External Proxy: The external Nginx server receives the request on port 80 and matches the server_name *.od.com block^[01-ingress__README.md]. It forwards the request to the configured upstream (e.g., 10.4.7.21:30035).
  3. Ingress Controller: The request hits the NodePort of the ingress-nginx-controller Service inside the cluster^[01-ingress__README.md]. The Ingress Controller parses the Host header (myapp.od.com) and matches it against defined Ingress rules.
  4. Service Routing: The Ingress Controller forwards the traffic to the backend Service (myapp-svc), which then routes it to a specific Pod (e.g., myapp-depl-...)^[01-ingress__README.md].
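Each hop in this flow can be exercised directly with curl, which is useful when debugging which tier is dropping a request. This is an illustrative sketch assuming the example addresses above are reachable and that hdss7-12 (the proxy node) resolves from the client:

```shell
# Tier 1: send a request through the external proxy, supplying the
# Host header so the *.od.com server block matches
curl -H "Host: myapp.od.com" http://hdss7-12/

# Tier 2: bypass the proxy and hit the Ingress Controller's NodePort
# directly on a cluster node; the same Host header drives the Ingress rule
curl -H "Host: myapp.od.com" http://10.4.7.21:30035/
```

If the second command works but the first does not, the problem is in the external Nginx tier; if both fail, inspect the Ingress Controller and its Service.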

Sources

^[01-ingress__README.md]