How to Create a Kubernetes Cluster on AWS With Jenkins and Spring Boot
In this article, we will set up an AWS environment to deploy a Dockerized Spring Boot application to a Kubernetes cluster running on free-tier EC2 instances, all in a few minutes. Kubernetes can be installed on AWS, as explained in the Kubernetes documentation, using conjure-up, Kubernetes Operations (kops), CoreOS Tectonic, or kube-aws. Of those options, I found kops the easiest to use, and it is nicely designed for customizing the installation, executing upgrades, and managing Kubernetes clusters over time.
Steps to Follow
- First, we need an AWS account and access keys to start with. Log in to your AWS console and generate access keys for your user by navigating to the Users -> Security credentials page.
- Create a t2.micro EC2 instance for managing the Kubernetes cluster.
- Create a new IAM user or use an existing IAM user, and grant it the following permissions, which kops requires:
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
AmazonVPCFullAccess
IAMFullAccess
- Install the AWS CLI by following its official installation guide:
pip install awscli --upgrade --user
- Install kops by following its official installation guide:
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
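kops drives the cluster through the Kubernetes CLI (kubectl), which is used later in this guide but is not always present on a fresh EC2 instance. As a sketch, assuming a Linux amd64 host and the installation method from the Kubernetes documentation of this era, kubectl can be installed the same way:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl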
- Configure the AWS CLI by providing the access key, the secret access key, and the AWS region in which you want the Kubernetes cluster installed:
aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: ap-south-1
Default output format [None]:
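To confirm the CLI picked up the credentials correctly, you can query the caller identity; the account and user ARN printed should match the IAM user created above:
aws sts get-caller-identity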
- Create an AWS S3 bucket for kops to persist its state:
bucket_name=dev.k8s.abdul.in
aws s3api create-bucket --bucket ${bucket_name} --region ap-south-1 --create-bucket-configuration LocationConstraint=ap-south-1
- Enable versioning for the above S3 bucket:
aws s3api put-bucket-versioning --bucket ${bucket_name} --versioning-configuration Status=Enabled
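If you want to double-check that versioning took effect, query the bucket's versioning state; it should report Status: Enabled:
aws s3api get-bucket-versioning --bucket ${bucket_name}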
- Provide a name for the Kubernetes cluster and set the S3 bucket URL in the following environment variables:
export KOPS_CLUSTER_NAME=dev.k8s.abdul.in
export KOPS_STATE_STORE=s3://${bucket_name}
These two lines can be added to the ~/.bash_profile or ~/.profile file, depending on the operating system, to make the variables available in all terminal sessions, as shown below.
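For example, a minimal sketch using the bucket name from this guide (substitute your own values):
echo 'export KOPS_CLUSTER_NAME=dev.k8s.abdul.in' >> ~/.bash_profile
echo 'export KOPS_STATE_STORE=s3://dev.k8s.abdul.in' >> ~/.bash_profile
source ~/.bash_profile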
- Create a Private Hosted Zone, give it a name (mine is dev.k8s.abdul.in), and associate it with your VPC in the ap-south-1 region. To create a private hosted zone, follow the steps here.
- Create a Kubernetes cluster definition using kops by providing the required node count, node size, and AWS zones. The node size, or rather the EC2 instance type, should be chosen according to the workload you plan to run on the cluster:
kops create cluster --node-count=2 --node-size=t2.micro --zones=ap-south-1b --name=${KOPS_CLUSTER_NAME} --master-size=t2.micro --master-count=1 --dns private
If you are seeing any authentication issues, set the following environment variables so that kops reads the credentials directly instead of going through the AWS CLI.
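For example, a minimal sketch following the kops documentation, which reads the values back from the AWS CLI configuration:
export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)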
kops also needs an SSH public key for accessing the cluster nodes. Generate a key pair and register the public key as a cluster secret:
ssh-keygen
kops create secret --name ${KOPS_CLUSTER_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub
If needed, execute the help command to find additional parameters:
kops create cluster --help
Review the Kubernetes cluster definition by executing the following command:
kops edit cluster --name ${KOPS_CLUSTER_NAME}
- Now, let's create the Kubernetes cluster on AWS by executing the kops update command:
kops update cluster --name ${KOPS_CLUSTER_NAME} --yes
- The above command may take some time to create the required infrastructure resources on AWS. Execute the validate command to check its status and wait until the cluster becomes ready:
kops validate cluster
Once the above process completes, kops will configure the Kubernetes CLI (kubectl) with the Kubernetes cluster API endpoint and user credentials.
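You can verify that kubectl is talking to the new cluster by listing its nodes; one master and two worker nodes should eventually show as Ready:
kubectl get nodes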
- Now you can deploy the Kubernetes dashboard to access the cluster via its web-based user interface:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
- Execute the following command to find the admin user's password:
kops get secrets kube --type secret -o plaintext
- Execute this command to find the Kubernetes master hostname:
kubectl cluster-info
- To access the Kubernetes dashboard through a proxy, type the following command:
kubectl proxy --address 0.0.0.0 --accept-hosts '.*' &
The dashboard can then be accessed at a URL like the following:
http://<AWSk8sManagementpublicIP>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/
- To skip the Kubernetes dashboard login, you need to grant admin privileges to the dashboard's service account. To do that, enter the following:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF
Afterward, you can use the Skip option on the login page to access the dashboard. Keep in mind that this binding gives the dashboard cluster-admin rights, so it is only suitable for development environments.
If you are using dashboard version v1.10.1 or later, you must also add --enable-skip-login to the deployment's command-line arguments. You can do so by adding it to the args:
kubectl edit deployment/kubernetes-dashboard --namespace=kube-system
For example:
containers:
- args:
  - --auto-generate-certificates
  - --enable-skip-login # <-- add this line
  image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
- Go to the security groups of the AWS master and nodes, click on Inbound rules, and add the following rule.
Port range to open on the nodes and master: 30000-32767 (the Kubernetes NodePort range)
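If you prefer the CLI over the console, the same rule can be added with a command like the following (the security group ID is a placeholder; repeat for both the master and nodes security groups):
aws ec2 authorize-security-group-ingress --group-id <security-group-id> --protocol tcp --port 30000-32767 --cidr 0.0.0.0/0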
Deploying a Spring Boot Application via Jenkins
Environment Setup:
- Type the following commands to install Java 8, make it the default, and remove Java 7 if present:
sudo yum install java-1.8.0-openjdk
sudo yum install java-1.8.0-openjdk-devel
sudo alternatives --config java
sudo yum remove java-1.7*
- Install Jenkins on the Kubernetes management server:
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import http://pkg.jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install -y jenkins
sudo service jenkins start
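Jenkins listens on port 8080 by default, so make sure that port is reachable on the management server. The first login asks for an unlock password, which can be read from the Jenkins home directory:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword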
- To install Maven, enter the following:
wget https://www-eu.apache.org/dist/maven/maven-3/3.6.2/binaries/apache-maven-3.6.2-bin.tar.gz
tar -xvf apache-maven-3.6.2-bin.tar.gz
Go to the home directory (/home/ec2-user) and edit the profile:
vi .bash_profile
Add the Maven bin directory to the existing PATH variable (see the snippet below), save, and reload the profile:
source ~/.bash_profile
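For example, assuming Maven was extracted into the home directory as above, the following lines in .bash_profile put mvn on the PATH:
M2_HOME=/home/ec2-user/apache-maven-3.6.2
PATH=$PATH:$M2_HOME/bin
export PATH
Afterward, mvn -version should print the installed Maven version.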
- To install Docker, type the following command:
sudo yum install -y docker
sudo service docker start
sudo usermod -aG docker ec2-user
sudo usermod -aG docker jenkins
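Group membership only takes effect in a new session, so restart Jenkins (and re-log-in as ec2-user) before running any Docker builds:
sudo service jenkins restart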
- To install git, type the following command:
sudo yum install -y git
After this software is installed, create an admin user in the Jenkins UI and install the suggested plugins.
- Add the JDK and Maven paths in Global Tool Configuration.
- Go to Manage Jenkins -> Manage Plugins -> Available and install the Pipeline Utility Steps and Amazon ECR plugins.
- Go to Credentials, then Add AWS Credentials, and enter your Access Key ID and Secret Access Key. If you haven't created an AWS access key yet, see the AWS documentation.
- Go to Manage Jenkins -> Configure System -> Global properties and create these two environment variables:
- awsECRUrl -> your ECR repository URL. If you haven't created an ECR repository before, see the AWS documentation.
- awsID -> ecr:<regionname>:<awscredentialsid>, e.g. ecr:ap-south-1:f990151f-44cf-4adc-971a-457629978f9b (the Jenkins AWS credentials ID comes from the ID column of the credentials added earlier).
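To sanity-check the ECR values, you can list the repositories in the region; the repositoryUri field in the output is the value to use for awsECRUrl:
aws ecr describe-repositories --region ap-south-1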
- Now create a Pipeline job, add the GitHub project URL, and enable the GitHub hook trigger for GITScm polling. Select Pipeline script from SCM -> SCM: Git -> enter the repository URL, then click Save and Apply.
- Before triggering the job, enter the commands below so that the Kubernetes scripts can be executed by the jenkins user:
mkdir -p /var/lib/jenkins/.kube
sudo cp -i /root/.kube/config /var/lib/jenkins/.kube/config
sudo chown -R jenkins:jenkins /var/lib/jenkins/.kube
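To confirm the jenkins user can now reach the cluster, run kubectl as that user; it should list the nodes without errors:
sudo -u jenkins kubectl get nodes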
- Once the job starts, you will be asked to enter the AWS master IP for patching the NodePort.
- Once the job completes, you can find the application URL in the console logs.
- To shut down the Kubernetes cluster, enter the following command and set maxSize and minSize to 0:
kops edit ig nodes
- To shut down the master node, you need to specify the master's instance group name in the command. To discover the name of the master, type the following:
kops get ig
kops edit ig <MasterNodeName>
Again, set maxSize and minSize to 0.
- Finally, apply the changes to the cluster:
kops update cluster --yes
kops rolling-update cluster
Awesome, the cluster is offline now!
If you want to turn your cluster back on, revert the settings, changing your master to at least 1 and your nodes to 2, as sketched below. A sample project is available here.
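A minimal sketch of reverting, using the same commands as above:
kops edit ig nodes # set minSize and maxSize back to 2
kops edit ig <MasterNodeName> # set minSize and maxSize back to 1
kops update cluster --yes
kops rolling-update cluster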
Further Reading
Deploying a Kubernetes Cluster With Amazon EKS
Getting Started With Kubernetes