Ingress fundamentals in Kubernetes

Ingress is an API object that manages external access to services within a Kubernetes cluster, typically for HTTP and HTTPS traffic^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md]. It acts as a unified entry point for network traffic, allowing routing rules to be managed centrally rather than exposing individual services directly.

Core Functionality

Ingress provides efficient network traffic management by consolidating layer 7 routing rules into a single resource^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md]. Key capabilities include:

  • Load Balancing: Distributing incoming traffic across multiple backend services^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
  • SSL Termination: Handling TLS decryption at the ingress layer, relieving backend services of the computational burden^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
  • Name-based Virtual Hosting: Routing traffic to different services based on the requested hostnames^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
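
The capabilities above map directly onto fields of the Ingress resource. A minimal sketch follows; the resource name, hostname, Secret, and backend Service (`example-ingress`, `app.example.com`, `app-tls`, `app-service`) are hypothetical placeholders, not values from the source:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
spec:
  ingressClassName: nginx        # which controller should implement these rules
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls        # assumed Secret holding the TLS certificate;
                                 # decryption happens here, not in the backend
  rules:
    - host: app.example.com      # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service   # hypothetical backend Service
                port:
                  number: 80
```

Traffic for `app.example.com` is decrypted at the ingress layer and load-balanced across the endpoints behind `app-service`.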

Architecture and Components

To function, an Ingress resource requires an Ingress Controller^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md]. While the Ingress resource defines the rules (the configuration), the Ingress Controller is the component that actually executes them, typically by running a reverse proxy (such as NGINX) or a load balancer.

A common implementation is the Ingress NGINX Controller^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md]. When deployed, it typically creates a dedicated namespace (e.g., ingress-nginx) along with supporting resources such as a ServiceAccount, a ConfigMap, and ClusterRole bindings^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].

Service Exposure

The Ingress Controller itself is exposed to the network via a standard [[Service]], commonly of type: LoadBalancer or type: NodePort^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md]. In environments that support LoadBalancer (such as the major cloud providers), the controller automatically receives an External IP. If the environment does not support it, the External IP remains in <pending> status, and the controller may fall back to a NodePort configuration on the host machine^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
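
A sketch of such a Service, assuming the default ingress-nginx naming conventions (the name, namespace, and selector label below mirror typical ingress-nginx deployments but are assumptions, not a verbatim copy of the upstream manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed default controller Service name
  namespace: ingress-nginx
spec:
  type: LoadBalancer   # change to NodePort where no external LB is available
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

With type: LoadBalancer, the cloud provider provisions the External IP; with type: NodePort, each node exposes the controller on a high port instead.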

Basic Usage Workflow

A typical workflow involves three main steps: deploying the backend application, exposing it internally, and creating the Ingress rule^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].

  1. Deploy Backend: Create a Deployment (e.g., demo) running a web server image^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
  2. Expose Service: Create a [[Service]] to make the application reachable within the cluster^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
  3. Create Ingress: Define an Ingress resource that maps a hostname (e.g., demo.localdev.me) to the backend service^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
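
The three steps above can be sketched as declarative manifests. These are illustrative, assuming an `httpd` web server image and the `nginx` ingress class; the object names mirror the examples in the text (`demo`, `demo.localdev.me`):

```yaml
# 1. Deploy Backend: a Deployment running a web server image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: httpd
          image: httpd          # assumed web server image
          ports:
            - containerPort: 80
---
# 2. Expose Service: make the application reachable within the cluster
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
---
# 3. Create Ingress: map the hostname to the backend service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-localhost          # hypothetical name
spec:
  ingressClassName: nginx
  rules:
    - host: demo.localdev.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```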

For local testing without a cloud load balancer, `kubectl port-forward` can be used to reach the Ingress Controller service directly^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md].
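
A minimal sketch of that local-testing approach, assuming a running cluster with the controller deployed under the default ingress-nginx names (so no runnable test is included):

```shell
# Forward local port 8080 to the ingress controller's HTTP port
# (Service name and namespace are the assumed ingress-nginx defaults)
kubectl port-forward --namespace ingress-nginx \
  service/ingress-nginx-controller 8080:80

# In another terminal: demo.localdev.me resolves to 127.0.0.1,
# so the request reaches the controller, which routes it by hostname
curl http://demo.localdev.me:8080
```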

Related Concepts

  • [[Service]]
  • [[LoadBalancer]]
  • NodePort
  • [[Deployment]]

Sources

^[400-devops__06-Kubernetes__k8s-learning__06.ingress__README.md]