Kubernetes Ingress resource routing

Kubernetes Ingress is an API object that manages external access to services within a Kubernetes cluster, typically via HTTP/HTTPS.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md] It operates as a Layer 7 (Application Layer) load balancer, routing traffic to specific Services based on rules defined in the Ingress resource.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]

Architecture and Components

An Ingress setup requires two main components: the Ingress resource and an Ingress Controller.

  • Ingress Resource: A YAML manifest defining the routing rules, such as which hostnames or paths map to which backend Services.
  • Ingress Controller: A deployment (e.g., NGINX Ingress) that runs within the cluster and watches for changes to Ingress resources.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md] It reads these rules and configures its underlying load balancer (often software like NGINX) to enforce them.

The controller itself is exposed to the outside world via a Service, commonly of type NodePort, which listens on a specific port (e.g., 30035) on the cluster nodes.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]
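That controller Service can be sketched as a NodePort manifest. This is an illustrative sketch, not taken from the source: the name, namespace, and selector label are assumptions; only the node port 30035 comes from the example above.

```yaml
# Hypothetical Service exposing the NGINX Ingress Controller on every node.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative name
  namespace: ingress-nginx         # illustrative namespace
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller Pod label
  ports:
  - name: http
    port: 80          # Service port inside the cluster
    targetPort: 80    # controller container's listening port
    nodePort: 30035   # external port on each node (matches the example)
```

With this in place, traffic sent to `<any-node-ip>:30035` reaches the controller, which then applies the Ingress rules described below.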

Traffic Flow

Ingress routing typically follows a specific chain to get traffic from an external user to a container:

  1. External Request: A user (or a reverse proxy like Nginx) sends a request to the cluster's IP or a domain name (e.g., myapp.od.com).^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]
  2. Node Entry: Traffic hits the NodePort exposed by the Ingress Controller's Service (e.g., port 30035).^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]
  3. Controller Processing: The Ingress Controller receives the traffic, inspects the Host header (e.g., myapp.od.com), and looks up the matching Ingress rules.
  4. Service Forwarding: The controller proxies the request to the backend [[Service]] defined in the Ingress rule (e.g., myapp-svc).
  5. Pod Delivery: The Service routes the traffic to a specific Pod IP based on its own selector logic.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]

Example Configuration

The following example demonstrates a typical Ingress setup using the NGINX Ingress Controller.

1. Backend Resources

First, a Deployment and a standard ClusterIP Service are created.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]

  • Deployment: myapp-depl (manages the application Pods).
  • Service: myapp-svc (selects Pods with the label app=myapp).
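These backend resources might look like the following sketch. The names myapp-depl and myapp-svc and the label app=myapp come from the example; the replica count, image, and port are illustrative assumptions.

```yaml
# Hypothetical backend Deployment and ClusterIP Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-depl
spec:
  replicas: 2                 # illustrative replica count
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp            # label the Service selects on
    spec:
      containers:
      - name: myapp
        image: nginx:1.25     # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc
spec:
  type: ClusterIP
  selector:
    app: myapp                # must match the Pod labels above
  ports:
  - port: 80
    targetPort: 80
```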

2. Ingress Resource

An Ingress resource binds the external hostname to the internal Service.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ing
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.od.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-svc
            port:
              number: 80
```

In this configuration:

  • The controller listens for requests to myapp.od.com.
  • It forwards traffic to the Service myapp-svc on port 80.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]

3. Verification

You can verify the routing by accessing the application through the NodePort on any node, for example with a browser or curl: http://myapp.od.com:30035/hostname.html (this assumes myapp.od.com resolves to a node IP, so the controller sees the expected Host header).^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]

Integration with Reverse Proxies

In production environments, an Ingress Controller is often placed behind an enterprise reverse proxy (like HAProxy or external Nginx). This outer proxy handles SSL Termination or initial load balancing before passing traffic to the Kubernetes Ingress NodePort.^[400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md]

Example configuration for an outer Nginx proxy:

```nginx
upstream default_backend_nginx {
    server 10.4.7.21:30035 max_fails=3 fail_timeout=10s;
    server 10.4.7.22:30035 max_fails=3 fail_timeout=10s;
}

server {
    server_name *.od.com;
    location / {
        proxy_pass http://default_backend_nginx;
        proxy_set_header Host $http_host;
    }
}
```

Sources

  • 400-devops__06-Kubernetes__k8s-learning__linux__02-ingress__README.md
  • [[Services]]
  • [[Deployments]]
  • [[Namespaces]]
  • [[Reverse proxy]]
  • [[Load balancing]]