Kubernetes Demystified: Solving Service Dependencies

This series of articles explores some of the common problems enterprise customers encounter when using Kubernetes. One question frequently asked by Container Service customers is, "How do I handle dependencies between services?"

An application's components usually depend both on middleware services (databases, message queues, and so on) and on other business services. With traditional deployment methods, applications must be started and stopped in a specific order to satisfy these dependencies.

When Kubernetes, Docker Swarm, or another container orchestration technology is used to deploy applications in a distributed environment, the various components start concurrently, so no particular startup order can be guaranteed. In addition, while an application is running, the services it depends on may fail or be migrated. Handling service dependencies between containers is therefore a question customers raise frequently.

Method 1: Inspecting Dependencies in an Application

We can add dependency checks to the application's startup logic: if a required service cannot be reached, the application retries; if the service is still unreachable after a set number of retries, the application gives up and exits. Kubernetes and Docker then restart the container according to its restart policy, waiting progressively longer between attempts.

The following snippet from a simple Go application shows how to check whether the MySQL service dependency is ready.

  ...
    // Connect to database.
    hostPort := net.JoinHostPort(config.Host, config.Port)
    log.Println("Connecting to database at", hostPort)
    dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s?timeout=30s",
        config.Username, config.Password, hostPort, config.Database)

    db, err = sql.Open("mysql", dsn)
    if err != nil {
        // sql.Open fails only on a malformed DSN or an unknown driver,
        // so fail fast rather than pinging a nil handle below.
        log.Fatal(err)
    }

    var dbError error
    maxAttempts := 20
    for attempts := 1; attempts <= maxAttempts; attempts++ {
        dbError = db.Ping()
        if dbError == nil {
            break
        }
        log.Println(dbError)
        time.Sleep(time.Duration(attempts) * time.Second)
    }
    if dbError != nil {
        log.Fatal(dbError)
    }

    log.Println("Application started successfully.")
    ...


"Fail Fast" is an important principle of Design by Contract that helps ensure system robustness and predictability. In the preceding code, if the retry mechanism fails, log.Fatal(dbError) is reported and the process ends. In addition, the K8S and Docker container restart rollback functions ensure that system resource are not exhausted by repeated failed attempts to access application dependencies.

Method 2: Independent Service Dependency Inspection Logic

In the real world, some legacy applications and frameworks cannot be modified, so we want to decouple the dependency checks from the application logic.

One common method is to put the dependency check in the startup script defined by the container's Dockerfile; the Docker documentation on controlling startup order describes this approach, and a rough sketch of such an entrypoint script is shown below. Another method is to use the Kubernetes pod mechanism itself to add the dependency check, which is the approach the rest of this section focuses on.
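The sketch assumes a hypothetical docker-entrypoint.sh, MYSQL_HOST/MYSQL_PORT environment variables, and an image that ships nc; it blocks until the dependency accepts TCP connections and then execs the real command:

#!/bin/sh
# docker-entrypoint.sh (hypothetical): wait for MySQL before starting the application.
set -e

MYSQL_HOST="${MYSQL_HOST:-mysql}"
MYSQL_PORT="${MYSQL_PORT:-3306}"

# Block until the dependency accepts TCP connections (requires nc in the image).
until nc -z "$MYSQL_HOST" "$MYSQL_PORT"; do
  echo "waiting for ${MYSQL_HOST}:${MYSQL_PORT} ..."
  sleep 2
done

# Hand over to the command passed as CMD, typically the application binary.
exec "$@"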

Before we start, we need to understand the pod lifecycle.

First, a pod contains three types of containers:

  1. Infrastructure container: the well-known pause container, which holds the pod's network and IPC namespaces.
  2. Init containers: initialization containers, generally used to initialize and prepare the environment for the application. The application containers start only after all init containers have run to completion.
  3. Main containers: the application containers themselves.

Best practices for Kubernetes generally rely on initialization containers to inspect service dependencies. We use the following WordPress example to show how this is done.

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  ports:
  - name: wordpress
    port: 80
    targetPort: 80
  selector:
    app: wordpress
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  serviceName: mysql 
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "true"
        livenessProbe:
          exec:
            command: ["mysqladmin", "ping"]
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          exec:
            # Check we can execute queries over TCP (skip-networking is off).
            command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
          initialDelaySeconds: 5
          periodSeconds: 2
          timeoutSeconds: 1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4
        ports:
        - containerPort: 80
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql
        - name: WORDPRESS_DB_PASSWORD
          value: ""
      initContainers:
      - name: init-mysql
        image: busybox
        command: ['sh', '-c', 'until nslookup mysql; do echo waiting for mysql; sleep 2; done;']


In the WordPress Deployment's pod template, we added an initContainers section. The init container keeps retrying until the mysql domain name resolves, and uses that as the signal that the MySQL service dependency is ready.

At the same time, we introduced a readinessProbe and a livenessProbe in the MySQL StatefulSet to determine whether the MySQL process is ready to serve requests. In Kubernetes, a pod is added to a Service's endpoints only after its readiness probe passes; because mysql is a headless service, its DNS name resolves to the pod only from that point on, which is what makes the DNS check in the init container a meaningful readiness signal.
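If the dependency were instead exposed through an ordinary ClusterIP service, its DNS name would resolve as soon as the Service object existed, so probing the port itself would be more meaningful. A possible variant of the init container, assuming the busybox build ships nc:

initContainers:
- name: init-mysql
  image: busybox
  command: ['sh', '-c', 'until nc -z mysql 3306; do echo waiting for mysql; sleep 2; done;']

Deploying the manifests shows the resulting startup ordering: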

$ kubectl create -f wordpress.yaml
service "mysql" created
service "wordpress" created
statefulset "mysql" created
deployment "wordpress" created
$ kubectl get pods
NAME                         READY     STATUS     RESTARTS   AGE
mysql-0                      0/1       Running    0          5s
wordpress-797655cf44-w4p87   0/1       Init:0/1   0          5s
$ kubectl get pods
NAME                         READY     STATUS     RESTARTS   AGE
mysql-0                      1/1       Running    0          11s
wordpress-797655cf44-w4p87   0/1       Init:0/1   0          11s
$ kubectl get pods
NAME                         READY     STATUS            RESTARTS   AGE
mysql-0                      1/1       Running           0          14s
wordpress-797655cf44-w4p87   0/1       PodInitializing   0          14s
$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
mysql-0                      1/1       Running   0          17s
wordpress-797655cf44-w4p87   1/1       Running   0          17s
$ kubectl describe pods wordpress-797655cf44-w4p87
...
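To confirm that the MySQL pod is exposed through the service only after its readiness probe succeeds, you can also watch the service's endpoints; the list stays empty until mysql-0 reports Ready (output omitted here):

$ kubectl get endpoints mysql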

NOTE:

  1. Liveness probe: mainly used to determine whether the container is still working; it can detect deadlocks, unresponsive processes, and similar situations, and a failing liveness probe causes the container to be restarted.
  2. Readiness probe: mainly used to determine whether the service is ready to receive traffic (a sketch of HTTP probes for the WordPress container follows this list).
  3. Readiness probes cannot be used in init containers.
  4. If a pod restarts, all of its init containers are run again.
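For completeness, HTTP probes could also be attached to the wordpress container in the Deployment above. The following snippet is only a sketch; the path, delays, and periods are assumptions rather than part of the original example:

        livenessProbe:
          httpGet:
            path: /               # any URL the PHP application always answers
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5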

Conclusion

This article discussed common solutions used to inspect service dependencies and provided an example to demonstrate how to use init containers, liveness and readiness probes, and other service health check and dependency inspection functions.

Kubernetes provides flexible pod lifecycle management functions. Due to space limitations, we did not discuss postStart, preStop, and other lifecycle hooks.
