The Ultimate Guide to Pods and Services in Kubernetes

Kubernetes is a powerful container orchestration platform, but to harness its full potential, it's important to understand its core components, with pods and services playing foundational roles. In this article, we'll dive into what they are and how they work together to expose and manage access to applications running within a Kubernetes cluster.

What Is a Pod?

Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.

Before we create a pod, let's check the API resources available in your Kubernetes cluster with the kubectl api-resources command. It lists the API resources supported by the Kubernetes API server, including their short names, API groups, and whether they are namespaced. This is useful for understanding the capabilities of your cluster, especially when working with custom resources or exploring new Kubernetes features. The kubectl explain command complements this by providing detailed information about individual resources and their fields.
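
For example, the following commands list the available API resources and show the documented fields of the Pod resource:

Shell
 
kubectl api-resources
kubectl explain pod
kubectl explain pod.spec.containers
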

Let's create a basic pod configuration file for a pod running a simple Nginx container.

Create a file named nginx-pod.yaml:

YAML
 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80


Here is a brief explanation of the terms used:

apiVersion specifies which Kubernetes API version is used to create the object; core objects such as Pods use v1.
kind identifies the type of object being created, in this case a Pod.
metadata assigns the Pod a name (nginx-pod) and labels (app: nginx) that other objects, such as Services, can use to select it.
spec.containers lists the containers that run in the Pod; here a single container named nginx uses the nginx:latest image and exposes container port 80.

Use kubectl to apply the configuration file and create the pod:
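
Shell
 
kubectl apply -f nginx-pod.yaml
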

Check the status of the pod to ensure it has been created and is running:
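
Shell
 
kubectl get pods
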

You should see an output similar to this:

Shell
 
NAME         READY   STATUS    RESTARTS    AGE
nginx-pod    1/1     Running   0           10s


Next, delete the pod:
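
Shell
 
kubectl delete pod nginx-pod
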

If you then list the pods again, the output should look similar to the following:

Shell
 
kubectl get pod
No resources found in default namespace.


What Is a Service?

Creating a service for an Nginx pod in Kubernetes allows you to expose the Nginx application and make it accessible within or outside the cluster. Here's a step-by-step guide to creating a Service for an Nginx pod.

First, ensure you have an Nginx pod running. If you don't already have one, create a YAML file named nginx-pod.yaml:

YAML
 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80


Apply the Pod configuration:
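
Shell
 
kubectl apply -f nginx-pod.yaml
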

Create a YAML file named nginx-service.yaml to define the Service:

YAML
 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP


Here are a few things to note:

The selector (app: nginx) matches the label on the Pod created earlier, so the Service routes traffic to that Pod.
port is the port the Service exposes inside the cluster, while targetPort is the container port the traffic is forwarded to; both are 80 here.
type: ClusterIP (the default) gives the Service a virtual IP that is reachable only from within the cluster.

Apply the Service configuration using kubectl:
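
Shell
 
kubectl apply -f nginx-service.yaml


Verify the Service with kubectl get service nginx-service; you should see an output similar to this: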

Shell
 
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
nginx-service    ClusterIP   10.96.0.1        <none>        80/TCP    10s


Since the Service is of type ClusterIP, it is accessible only within the cluster. To access it from outside the cluster, you can change the Service type to NodePort or LoadBalancer.
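
If you want to test the ClusterIP Service from inside the cluster, one option is to run a temporary pod that curls the Service by its DNS name (the curlimages/curl image here is just a convenient choice):

Shell
 
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl http://nginx-service
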

To expose the Service externally using NodePort, modify the nginx-service.yaml file:

YAML
 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30007  # Specify a node port in the range 30000-32767 (optional)
  type: NodePort


Apply the updated Service configuration:
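
Shell
 
kubectl apply -f nginx-service.yaml
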

You can now access the Nginx application using the node's IP address and the node port (e.g., http://<node-ip>:30007).
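
For example, you can look up a node's IP address and then request the page, replacing <node-ip> with an address reported for your cluster:

Shell
 
kubectl get nodes -o wide
curl http://<node-ip>:30007
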

Multi-Container Pods

Using a single container per pod provides maximum granularity and decoupling. However, there are scenarios where deploying multiple containers, sometimes referred to as composite containers, within a single pod is beneficial. These secondary containers can perform various roles: handling logging or enhancing the primary container (sidecar concept), acting as a proxy to external systems (ambassador concept), or modifying data to fit an external format (adapter concept). These secondary containers complement the primary container by performing tasks it doesn't handle.

Below is an example of a Kubernetes Pod configuration that includes a primary container running an Nginx server and a secondary container acting as a sidecar for handling logging. The sidecar uses a lightweight BusyBox image to demonstrate how it can tail the Nginx access logs.

YAML
 
apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-logging-sidecar
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  - name: log-sidecar
    image: busybox:latest
    command: ["sh", "-c", "tail -f /var/log/nginx/access.log"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/nginx
  volumes:
  - name: shared-logs
    emptyDir: {}


You will then do the following, using the commands shown below: delete any existing Pod that conflicts with this configuration, apply the new configuration, and confirm that both containers in the Pod are running.
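
A minimal sequence of commands for this, assuming the configuration above is saved as nginx-with-logging-sidecar.yaml:

Shell
 
kubectl delete pod nginx-pod --ignore-not-found
kubectl apply -f nginx-with-logging-sidecar.yaml
kubectl get pod nginx-with-logging-sidecar
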

By following these steps, you will delete the existing Pod and apply the new configuration, setting up a Pod with an Nginx container and a sidecar container for logging. This will ensure the new configuration is active and running in your Kubernetes cluster.
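
To confirm that the sidecar is tailing the Nginx access log, you can send a request through a port-forward and then read the sidecar container's output (this assumes curl is available on your workstation):

Shell
 
kubectl port-forward pod/nginx-with-logging-sidecar 8080:80 &
curl -s http://localhost:8080/ > /dev/null
kubectl logs nginx-with-logging-sidecar -c log-sidecar
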

Multi-container pods in Kubernetes offer several advantages: because all containers in a pod share the same network namespace and can share volumes, closely cooperating processes can be deployed, scaled, and managed as a single unit. The most common multi-container patterns are described below.

Multi-Container Pod Patterns

Sidecar Pattern

Sidecar containers can enhance the primary application by providing auxiliary functions such as logging, configuration management, or proxying. This pattern helps extend functionality without modifying the primary container. A sidecar container can handle logging by collecting and forwarding logs from the main application container.

Ambassador Pattern

Ambassador containers act as a proxy between the primary application and external services, handling tasks such as proxying, load balancing, SSL termination, or API gateway functions. By managing this communication in a separate container, the pattern abstracts integration and configuration complexity away from the primary application container.

YAML
 
apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: web-app
    image: python:3.8-slim
    command: ["python", "-m", "http.server", "8000"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/envoy
  - name: envoy-proxy
    image: envoyproxy/envoy:v1.18.3
    args: ["-c", "/etc/envoy/envoy.yaml"]
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: config-volume
      mountPath: /etc/envoy
  volumes:
  - name: config-volume
    configMap:
      name: envoy-config

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-config
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: listener_0
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 8080
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              route_config:
                name: local_route
                virtual_hosts:
                - name: local_service
                  domains: ["*"]
                  routes:
                  - match:
                      prefix: "/"
                    route:
                      cluster: external_service
              http_filters:
              - name: envoy.filters.http.router
      clusters:
      - name: external_service
        connect_timeout: 0.25s
        type: LOGICAL_DNS
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: external_service
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: api.example.com
                    port_value: 80


This example demonstrates the Ambassador pattern in Kubernetes, where an Envoy proxy acts as an intermediary to manage external communication for the primary application container. This pattern helps abstract communication complexities and enhances modularity and maintainability.
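
With this setup, the web-app container reaches the external API by sending requests to the Envoy listener on localhost:8080 rather than to api.example.com directly, since all containers in the Pod share the same network namespace. As a quick check, assuming both manifests above are saved together as app-with-ambassador.yaml, you could apply them and confirm that the proxy starts:

Shell
 
kubectl apply -f app-with-ambassador.yaml
kubectl logs app-with-ambassador -c envoy-proxy
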

Adapter Pattern

Adapter containers transform data between the primary application container and external systems, ensuring compatibility when the application produces or consumes data in a format that differs from what those systems require. For example, an adapter container can reformat log data to meet the requirements of an external logging service.

Suppose you have a primary application container that generates logs in a custom format. You need to send these logs to an external logging service that requires logs in a specific standardized format. An adapter container can be used to transform the log data into the required format before sending it to the external service.

YAML
 
apiVersion: v1
kind: Pod
metadata:
  name: app-with-adapter
spec:
  containers:
  - name: log-writer
    image: busybox
    command: ["sh", "-c", "while true; do echo \"$(date) - Custom log entry\" >> /var/log/custom/app.log; sleep 5; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/custom
  - name: log-adapter
    image: busybox
    command: ["sh", "-c", "tail -f /var/log/custom/app.log | while read line; do echo \"$(echo $line | sed 's/ - / - {\"timestamp\": \"/;s/$/\"}/')\" >> /var/log/json/app.json; done"]
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/custom
    - name: json-logs
      mountPath: /var/log/json
  - name: log-sender
    image: busybox
    command: ["sh", "-c", "while true; do cat /var/log/json/app.json | grep -v '^$' | while read line; do echo \"Sending log to external service: $line\"; done; sleep 10; done"]
    volumeMounts:
    - name: json-logs
      mountPath: /var/log/json
  volumes:
  - name: shared-logs
    emptyDir: {}
  - name: json-logs
    emptyDir: {}


This example demonstrates the Adapter pattern in Kubernetes, where an adapter container transforms data from the primary application container into the required format before sending it to an external system. This pattern helps integrate applications with external services by handling data format transformations within the sidecar container.
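
To try this out, assuming the manifest above is saved as app-with-adapter.yaml, you could apply it, give the containers a few seconds to produce log entries, and then inspect the transformed output and the sender's logs:

Shell
 
kubectl apply -f app-with-adapter.yaml
kubectl exec app-with-adapter -c log-adapter -- tail /var/log/json/app.json
kubectl logs app-with-adapter -c log-sender
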

Summary

In summary, pods are the smallest deployable units in Kubernetes. A pod hosts one or more tightly coupled containers that share the same network namespace and can share volumes, which is what enables patterns such as the sidecar, ambassador, and adapter shown above.

Services expose a set of pods, selected by labels, behind a stable virtual IP and DNS name. Depending on the type (ClusterIP, NodePort, or LoadBalancer), a Service makes an application reachable from within the cluster or from outside it, decoupling clients from individual pod IPs.