How to Create a Kubernetes Cluster on AWS With Jenkins and Spring Boot

Kubernetes cluster under construction.

In this article, we will set up an AWS environment to deploy a Dockerized Spring Boot application in a Kubernetes cluster using free-tier EC2 instances, in a few minutes. Kubernetes can be installed on AWS, as explained in the Kubernetes documentation, using conjure-up, Kubernetes Operations (kops), CoreOS Tectonic, or kube-aws. Out of those options, I found kops easier to use, and it is nicely designed for customizing the installation, executing upgrades, and managing Kubernetes clusters over time. 

Steps to Follow

  1. First, we need an AWS account and access keys to start with. Log in to your AWS console and generate access keys for your user by navigating to the Users/Security credentials page.
  2. Create an EC2 instance with a t2.micro instance type for managing the Kubernetes cluster.
  3. Create a new IAM user (or use an existing IAM user) and grant it the following permissions, which kops needs to provision AWS resources:        
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
AmazonVPCFullAccess
IAMFullAccess


  4. Install the AWS CLI by following its official installation guide:
pip install awscli --upgrade --user


  5. Install kops by following its official installation guide:
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops


  6. Configure the AWS CLI by providing the Access Key, Secret Access Key, and the AWS region (note: a region such as ap-south-1, not an availability zone) where you want the Kubernetes cluster to be installed:        
aws configure
AWS Access Key ID [None]: 
AWS Secret Access Key [None]: 
Default region name [None]: ap-south-1
Default output format [None]:


  7. Create an AWS S3 bucket for kops to persist its state. Note that for any region other than us-east-1, create-bucket requires an explicit location constraint:
bucket_name=dev.k8s.abdul.in

aws s3api create-bucket --bucket ${bucket_name} --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1


  8. Enable versioning for the above S3 bucket:        
aws s3api put-bucket-versioning --bucket ${bucket_name} --versioning-configuration Status=Enabled


  9. Provide a name for the Kubernetes cluster and set the S3 bucket URL in the following environment variables:     
export KOPS_CLUSTER_NAME=dev.k8s.abdul.in

export KOPS_STATE_STORE=s3://${bucket_name}

      These export statements can be added to the ~/.bash_profile or ~/.profile file, depending on the operating system, to make them available in all terminal sessions.

  10. Create a Private Hosted Zone, give it a name (mine is dev.k8s.abdul.in), and associate it with your VPC in the ap-south-1 region. To create a private hosted zone, follow the steps here.
  11. Create a Kubernetes cluster definition using kops by providing the required node count, node size, and AWS zones. The node size, or rather the EC2 instance type, should be chosen according to the workload you plan to run on the Kubernetes cluster:
kops create cluster --node-count=2 --node-size=t2.micro --zones=ap-south-1b --name=${KOPS_CLUSTER_NAME} --master-size=t2.micro --master-count=1 --dns private


      If you see any authentication issues when creating the cluster, generate an SSH key pair and register the public key with kops as the admin login key:      

ssh-keygen

kops create secret --name dev.k8s.abdul.in sshpublickey admin -i ~/.ssh/id_rsa.pub


     If needed, execute the following command to find additional parameters for kops create cluster:  

kops create cluster --help

   

  Review the Kubernetes cluster definition by executing the below command:     

 kops edit cluster --name ${KOPS_CLUSTER_NAME}


  12. Now, let’s create the Kubernetes cluster on AWS by executing the kops update command:
kops update cluster --name ${KOPS_CLUSTER_NAME} --yes


  13. The above command may take some time to create the required infrastructure resources on AWS. Execute the validate command to check its status and wait until the cluster becomes ready:     
 kops validate cluster

     

Once the above process completes, kops will configure the Kubernetes CLI (kubectl) with the Kubernetes cluster API endpoint and user credentials.
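As a quick sanity check (assuming the cluster is up and your kubeconfig now points at it), you can confirm connectivity like this:

```shell
# Confirm kubectl is pointed at the new cluster
kubectl config current-context

# List the master and worker nodes; all should report STATUS "Ready"
kubectl get nodes
```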

  14. Now, you may want to deploy the Kubernetes dashboard to access the cluster via its web-based user interface:      
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml


  15. Execute the below command to find the admin user’s password:      
kops get secrets kube --type secret -o plaintext


  16. Execute this command to find the Kubernetes master hostname:      
kubectl cluster-info


  17. To access the Kubernetes dashboard through a proxy, type the following command:      
kubectl proxy --address 0.0.0.0 --accept-hosts '.*' &


Now the dashboard can be accessed at a URL like the following:

http://<AWSk8sManagementpublicIP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/

  18. To skip the Kubernetes dashboard login, you need to grant admin privileges to the dashboard's Service Account. To do that, enter the following:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF


 Afterward, you can use the Skip option on the login page to access the dashboard.

If you are using dashboard version v1.10.1 or later, you must also add --enable-skip-login to the deployment's command-line arguments. You can do so by adding it to the args section:

kubectl edit deployment/kubernetes-dashboard --namespace=kube-system


For example:

containers:
- args:
  - --auto-generate-certificates
  - --enable-skip-login # <-- add this line
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
  19. Go to the AWS master and nodes security groups, click on Inbound rules, and add the following rule.

      Port range to open on the nodes and master: 30000-32767
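If you prefer the CLI, the same inbound rule can be added with the AWS CLI. The security group ID below (sg-0123456789abcdef0) is a placeholder for your own nodes' security group:

```shell
# Open the Kubernetes NodePort range (30000-32767) to all sources;
# restrict --cidr to your own IP range in production
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 0.0.0.0/0
```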

  20. Deploying the Spring Boot Application via Jenkins

     Environment Setup:

sudo yum install java-1.8.0-openjdk

sudo yum install java-1.8.0-openjdk-devel

sudo alternatives --config java

sudo yum remove java-1.7*


sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo

sudo rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key

sudo yum install -y jenkins

sudo service jenkins start


wget https://www-eu.apache.org/dist/maven/maven-3/3.6.2/binaries/apache-maven-3.6.2-bin.tar.gz

tar -xvf apache-maven-3.6.2-bin.tar.gz


Go to the home path /home/ec2-user:     

 vi .bash_profile


Add the Maven bin directory to the existing PATH variable, save, and reload the profile:      

source ~/.bash_profile
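For reference, the lines added to ~/.bash_profile might look like the following, assuming Maven was extracted into /home/ec2-user:

```shell
# Point M2_HOME at the extracted Maven directory and add its bin folder to PATH
export M2_HOME=/home/ec2-user/apache-maven-3.6.2
export PATH=$PATH:$M2_HOME/bin
```

After sourcing the profile, `mvn -version` should resolve from the new PATH entry.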


sudo yum install -y docker

sudo service docker start

sudo usermod -aG docker ec2-user

sudo usermod -aG docker jenkins


sudo yum install -y git


After this software is installed, open the Jenkins UI, create an admin user, and install the suggested plugins. Then copy the kubeconfig to the Jenkins home so that Jenkins can talk to the cluster:

sudo mkdir -p /var/lib/jenkins/.kube

sudo cp -i /root/.kube/config /var/lib/jenkins/.kube/config

sudo chown -R jenkins:jenkins /var/lib/jenkins/.kube/

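With the environment ready, a Jenkins freestyle job (or a shell step in a pipeline) could build and deploy the application with commands along these lines. This is a sketch: the image name abdul/spring-boot-demo and the manifest path k8s/deployment.yaml are placeholders for your own project, and BUILD_NUMBER is the standard Jenkins environment variable:

```shell
# Build the Spring Boot fat jar
mvn clean package

# Build and push the Docker image, tagged with the Jenkins build number
docker build -t abdul/spring-boot-demo:${BUILD_NUMBER} .
docker push abdul/spring-boot-demo:${BUILD_NUMBER}

# Apply the Kubernetes manifests (Deployment + NodePort Service)
kubectl apply -f k8s/deployment.yaml
```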
  21. To shut down the Kubernetes cluster, enter the following command:
kops edit ig nodes

     and set maxSize  and minSize  to 0.

kops get ig

kops edit ig <MasterNodeName>

     and again set maxSize and minSize to 0.

kops update cluster --yes 

kops rolling-update cluster
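For reference, after running kops edit ig nodes, the worker instance group spec with both sizes set to zero would look roughly like this (cluster details taken from the example above; your generated spec will contain additional fields):

```yaml
# kops instance group spec for the worker nodes, scaled down to zero
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  machineType: t2.micro
  maxSize: 0
  minSize: 0
  role: Node
  subnets:
  - ap-south-1b
```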

Awesome, the cluster is offline now! 

If you want to turn your cluster back on, revert the settings, changing your master count back to at least 1 and your node count to 2. A sample project is available here.

Further Reading


Deploying a Kubernetes Cluster With Amazon EKS

Getting Started With Kubernetes

 

 

 

 
