Ingress NGINX Controller

The Ingress NGINX Controller is an Ingress Controller implementation that uses NGINX as a reverse proxy and load balancer within a Kubernetes cluster^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md].

Installation and Components

Installation is typically performed by applying a manifest file (e.g., 01-ingress.yaml) using kubectl apply^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]. This deployment process creates a dedicated namespace (ingress-nginx) and establishes several standard Kubernetes components, including:

  • ServiceAccount: ingress-nginx^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]
  • ConfigMap: ingress-nginx-controller^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]
  • RBAC Resources: ClusterRoles, ClusterRoleBindings, Roles, and RoleBindings for both the controller and admission webhook services^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]
  • Services: A controller service (NodePort) and an admission webhook service (ClusterIP)^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]
  • Deployment: The controller deployment itself^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]
  • Webhooks: A ValidatingWebhookConfiguration for the admission service^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]
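
Once the controller is running, workloads are exposed by creating Ingress resources that the controller watches. A minimal sketch (the hostname, Service name, and port below are illustrative, not taken from the source):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress                         # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"     # older-style class annotation; newer setups use spec.ingressClassName
spec:
  rules:
  - host: demo.od.com                        # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service               # hypothetical backend Service
            port:
              number: 80
```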

Networking Configuration

Service Types

The controller is exposed via a Service of type NodePort^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]. For example, the service might map HTTP port 80 to port 30035 and HTTPS port 443 to port 30603 on the cluster nodes^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md].
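The NodePort mapping described above would look roughly like this in the controller's Service manifest (a sketch showing only the port-related fields; selector labels vary by manifest version):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80          # service port inside the cluster
    targetPort: 80    # container port on the controller pods
    nodePort: 30035   # port opened on every cluster node
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30603
  selector:
    app.kubernetes.io/name: ingress-nginx
```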

External Proxying

To allow external access via standard ports, a reverse proxy (such as Nginx) is often configured on a public-facing node^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]. This proxy forwards traffic to the backend Ingress Controller nodes.

An example configuration defines an upstream pointing to the NodePorts on the worker nodes:

```nginx
upstream default_backend_nginx {
    server 10.4.7.21:30035    max_fails=3 fail_timeout=10s;
    server 10.4.7.22:30035    max_fails=3 fail_timeout=10s;
}
```

The server block then proxies requests for *.od.com to this upstream^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md].
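
That server block might look like the following (a sketch; the `proxy_set_header` directives are typical additions, not taken from the source):

```nginx
server {
    listen       80;
    server_name  *.od.com;

    location / {
        proxy_pass http://default_backend_nginx;   # upstream defined above
        proxy_set_header Host       $http_host;    # preserve the original host header
        proxy_set_header X-Real-IP  $remote_addr;  # pass the client address to the backend
    }
}
```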

  • [[Kubernetes]]
  • [[Nginx]]
  • [[Service (Kubernetes)]]
  • [[DNS]]

Sources

^[400-devops-06-kubernetes-k8s-learning-linux-02-ingress-readme.md]