Kubernetes networking

  1. Services

  2. Ingress

  3. Network policies

  4. DNS

  5. CNI plugins

Services

What is a Service and why do we need it in Kubernetes?

As we all know, each pod in a Kubernetes cluster gets its own IP address, but pods are ephemeral: they die frequently, new pods are created in their place, and each new pod is assigned a new IP address. This becomes a problem in production if we have to continuously monitor pods for their IP addresses and keep updating our deployments to match.

This problem is solved by the Kubernetes Service, which provides a stable (static) IP address that remains valid even when pods die. In front of a group of pods, we place a Service that represents a static IP address through which we can access the pods.

The Service API, part of Kubernetes, is an abstraction to help you expose groups of Pods over a network. Each Service object defines a logical set of endpoints (usually these endpoints are Pods) along with a policy about how to make those pods accessible.

Services also provide other benefits, such as:

  1. Load balancing: suppose we have 2 replicas of a pod; the Service receives each request and forwards it to one of the pods, so clients can call a single stable IP address instead of addressing individual pods

  2. Loose coupling: Services are a good abstraction for loosely coupled communication within the cluster and with clients outside the cluster

Service Endpoints

An Endpoints object is created automatically whenever we create a Service, and it carries the same name as the Service. It is used to keep track of which pods are members of the Service. As pods are ephemeral, this object updates its list of endpoints whenever pods die or are recreated.
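As a rough sketch, the Endpoints object for a hypothetical Service named my-service might look like this (the pod IPs here are made up):

apiVersion: v1
kind: Endpoints
metadata:
  name: my-service    # same name as the Service it tracks
subsets:
  - addresses:
      - ip: 10.244.1.5    # current pod IPs (hypothetical)
      - ip: 10.244.2.7
    ports:
      - port: 80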

Service port

The Service port is arbitrary, whereas the targetPort must match the port on which the application container inside the pod is listening.
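For instance, a hypothetical ports section where the Service listens on 80 but the container listens on 8080:

ports:
- port: 80          # arbitrary port exposed by the Service
  targetPort: 8080  # must match the port the container listens on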

Types of services

ClusterIP service:

We specify a selector in the Service and match it against the labels of the pods. All pods whose labels match become the service endpoints. Next, we also specify the target port on which the container application is running, as defined in the deployment file.

Now, when a request comes in, the Service first checks which pods match its selector. Those pods become the service endpoints, and the Service then picks one of them and forwards the request to the port defined by the targetPort attribute.

apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP
  selector:
    run: app-nginx     # matches pods labeled run=app-nginx
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 80     # port the container listens on (defaults to port)
    protocol: TCP

Key points:

  1. Exposes the Service on a cluster-internal IP.

  2. Choosing this value makes the Service only reachable from within the cluster.

  3. This is the default that is used if you don't explicitly specify a type for a Service.

  4. You can expose the Service to the public internet using an Ingress or a Gateway.

Node port:

It creates a Service that is accessible on a static port on each worker node in the cluster, meaning external traffic can reach the worker nodes directly through the static port exposed by the Service. We specify this port in the nodePort attribute, and its value can be anywhere between 30000 and 32767. A ClusterIP Service, to which the NodePort Service routes, is also created automatically when we create a NodePort. This type is neither efficient nor secure, as we allow external client requests to talk directly to the nodes.

When you create a NodePort Service, Kubernetes opens a port (in the range 30000-32767) on all of its worker nodes. Note that the same port number is used across all of them. All traffic incoming to a worker node's IP address on that specific port is redirected to a pod linked with that Service.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007

Key points:

  1. Exposes the Service on each Node's IP at a static port, i.e. the nodePort.

  2. To make the node port available, Kubernetes sets up a cluster IP address, the same as if you had requested a Service of type: ClusterIP.
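With the example above, the Service would be reachable from outside the cluster at <any-node-ip>:30007, while pods inside the cluster could still reach it through its automatically assigned cluster IP.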

Load balancer:

In this type, we create an external load balancer that is connected to the Service. These load balancers are provided by cloud providers such as AWS, GCP, Azure, etc. Whenever we create a LoadBalancer Service type, a cluster IP and node port are automatically created, to which the external load balancer of the cloud platform routes the traffic:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 192.0.2.127
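The status.loadBalancer.ingress field is populated by the cloud provider once the external load balancer has been provisioned; in this example, traffic sent to 192.0.2.127 on port 80 is routed to targetPort 9376 on the matching pods.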

Key points:

  1. Exposes the Service externally using an external load balancer.

  2. Kubernetes does not directly offer a load-balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.

External name:

This type of service maps the service to an external DNS name, allowing you to reference an external service by its DNS name instead of an IP address.

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
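With this Service in place, a lookup for my-service.prod.svc.cluster.local from inside the cluster returns a CNAME record for my.database.example.com.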

Headless service:

This is used when a client or a pod wants to connect to a specific pod directly, without going through the Service, which selects pods at random. This happens with stateful applications such as a MySQL deployment, where clients need to identify the specific IP address of each pod; this is done with a Kubernetes DNS lookup, through which pod IP addresses can be discovered.

The DNS lookup for a Service normally returns a single IP address, the cluster IP. However, if we set clusterIP to None, the DNS lookup returns the pod IP addresses instead of the Service IP address. A simple DNS lookup therefore provides the IP addresses, and clients can use them to communicate with the pods directly.
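A minimal sketch of a headless Service, assuming pods labeled app: mysql (the names and ports here are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: mysql-headless
spec:
  clusterIP: None    # headless: DNS returns the pod IPs directly
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306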

Multiport service:

For some Services, you need to expose more than one port. Kubernetes lets you configure multiple port definitions on a Service object. When using multiple ports for a Service, you must give all of your ports names so that they are unambiguous.

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
    - name: https
      protocol: TCP
      port: 443
      targetPort: 9377

Ingress

What is Ingress?

Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.

Here is a simple example where an Ingress sends all its traffic to one Service:

[Diagram: an Ingress routing all traffic to a single Service]

An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting. An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.

An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
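Assuming an NGINX ingress controller is installed in the cluster, a request to <ingress-address>/testpath would match the Prefix rule above, be rewritten to / by the rewrite-target annotation, and be forwarded to port 80 of the test Service.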

Network policies

NetworkPolicies are used to control incoming and outgoing traffic to a pod. They are an application-centric construct that lets you specify how a pod is allowed to communicate with various entities over the network. NetworkPolicies apply to a connection with a pod on one or both ends and are not relevant to other connections.

The entities that a Pod can communicate with are identified through a combination of the following 3 identifiers:

  1. Other pods that are allowed (exception: a pod cannot block access to itself)

  2. Namespaces that are allowed

  3. IP blocks (exception: traffic to and from the node where a Pod is running is always allowed, regardless of the IP address of the Pod or the node)

Kubernetes Network Policies

These are Kubernetes resources that control the traffic between pods. Kubernetes network policies let developers secure access to and from their applications; this is how we can restrict access for a client.

By default, traffic is always allowed: with no NetworkPolicy in place, any connection between pods is permitted. Only once a policy selects a pod must traffic to or from it be explicitly allowed.

Network Policy In Pods

All pods in a Kubernetes cluster can communicate with each other. By default, all pods are non-isolated; a pod becomes isolated once a NetworkPolicy selects it. Once we have such a policy in a namespace selecting a specific pod, all incoming and outgoing traffic of that pod is restricted to what the policy allows, as in the sketch below.
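For example, a minimal sketch of a default-deny policy that isolates every pod in a namespace (the namespace name is hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: prod
spec:
  podSelector: {}    # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  # no ingress or egress rules are listed, so all traffic is denied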

Network Policy Specification

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978

podSelector – selects the grouping of pods to which the policy applies. The top-level spec.podSelector picks pods in the policy's own namespace, while a podSelector inside a from/to rule picks pods in the same namespace that are allowed as ingress sources or egress destinations.

policyTypes – indicates which types of rules the policy contains: Ingress, Egress, or both.

ingress – each NetworkPolicy may include a list of allowed ingress rules, an allowlist for inbound traffic.

egress – each NetworkPolicy may include a list of allowed egress rules, an allowlist for outbound traffic.

Note: NetworkPolicies take effect only when a networking solution that supports them, such as Antrea, Calico, Cilium, Kube-router, Romana, or Weave Net, is deployed in the cluster.

DNS

Kubernetes creates DNS records for Services and pods. You can contact Services with consistent DNS names instead of IP addresses: the kubelet configures each pod's DNS so that running containers can look up Services by name rather than by IP.
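For example, assuming the default cluster domain cluster.local, a Service named my-service in the prod namespace gets a record of the form:

my-service.prod.svc.cluster.local

and a pod with IP 10.244.1.5 in that namespace gets a record of the form:

10-244-1-5.prod.pod.cluster.local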

CNI plugins

Container Network Interface (CNI), a Cloud Native Computing Foundation project, consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers.

It is a framework for dynamically configuring networking resources. It uses a group of libraries and specifications written in Go. The plugin specification defines an interface for configuring the network, provisioning IP addresses, and maintaining connectivity with multiple hosts.

When used with Kubernetes, CNI can integrate smoothly with the kubelet to enable the use of an overlay or underlay network to automatically configure the network between pods. Overlay networks encapsulate network traffic using a virtual interface such as Virtual Extensible LAN (VXLAN). Underlay networks work at the physical level and comprise switches and routers.

Once you've specified the network configuration type, the container runtime defines the network that containers join. The runtime adds the interface to the container namespace via a call to the CNI plugin and allocates IP addresses and routes for the connected subnetwork via calls to the IP Address Management (IPAM) plugin.
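As a rough illustration, a minimal CNI configuration file (conventionally placed under /etc/cni/net.d/ on each node) using the reference bridge and host-local IPAM plugins might look like this; the network name and subnet are hypothetical:

{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}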

Kubernetes 1.25 supports Container Network Interface (CNI) plugins for cluster networking. You must use a CNI plugin that is compatible with your cluster and that suits your needs. Different plugins are available (both open and closed source) in the wider Kubernetes ecosystem. A CNI plugin is required to implement the Kubernetes network model.
