Creating a Gossip-Based Kubernetes Cluster on AWS
Creating a Kubernetes cluster using Kops requires a top-level domain or a subdomain, and setting up Route 53 hosted zones. This domain allows the worker nodes to discover the master, and the master to discover all the etcd servers. It is also needed for kubectl to be able to talk directly to the master. This works well, but poses an additional hassle for developers.
Kops 1.6.2 adds experimental support for gossip-based discovery of nodes. This makes the process of setting up a Kubernetes cluster using Kops DNS-free, and much simpler.
Let’s take a look!
Upgrade kops (or run brew install kops if it is not installed yet):
brew upgrade kops
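Homebrew is not required; the kops binary can also be downloaded directly from the project's GitHub releases. A sketch for macOS (adjust the version and platform in the URL for your setup):
curl -LO https://github.com/kubernetes/kops/releases/download/1.6.2/kops-darwin-amd64
chmod +x kops-darwin-amd64
sudo mv kops-darwin-amd64 /usr/local/bin/kops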
Check the version:
~ $ kops version
Version 1.6.2
Create an S3 bucket as the “state store”. Kops keeps all cluster state in this bucket and locates it through the KOPS_STATE_STORE environment variable.
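For example (the bucket name kops-state-store-example is a placeholder; S3 bucket names are globally unique, so pick your own):
aws s3 mb s3://kops-state-store-example --region us-east-1   # placeholder bucket name
export KOPS_STATE_STORE=s3://kops-state-store-example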
Create a Kubernetes cluster. The cluster name must end with the suffix .k8s.local; that suffix is what tells kops to use gossip-based discovery instead of DNS:
kops create cluster cluster.k8s.local --zones us-east-1a --yes
The output looks like this:
I0622 16:52:07.494558 83656 create_cluster.go:655] Inferred --cloud=aws from zone "us-east-1a"
I0622 16:52:07.495012 83656 create_cluster.go:841] Using SSH public key: /Users/argu/.ssh/id_rsa.pub
I0622 16:52:08.540445 83656 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-east-1a
I0622 16:52:16.327523 83656 apply_cluster.go:396] Gossip DNS: skipping DNS validation
I0622 16:52:25.539755 83656 executor.go:91] Tasks: 0 done / 67 total; 32 can run
I0622 16:52:29.843320 83656 vfs_castore.go:422] Issuing new certificate: "kubecfg"
I0622 16:52:30.108046 83656 vfs_castore.go:422] Issuing new certificate: "kubelet"
I0622 16:52:30.139629 83656 vfs_castore.go:422] Issuing new certificate: "kube-scheduler"
I0622 16:52:31.072229 83656 vfs_castore.go:422] Issuing new certificate: "kube-proxy"
I0622 16:52:31.082560 83656 vfs_castore.go:422] Issuing new certificate: "kube-controller-manager"
I0622 16:52:31.579158 83656 vfs_castore.go:422] Issuing new certificate: "kops"
I0622 16:52:32.742807 83656 executor.go:91] Tasks: 32 done / 67 total; 13 can run
I0622 16:52:43.057189 83656 executor.go:91] Tasks: 45 done / 67 total; 18 can run
I0622 16:52:50.047375 83656 executor.go:91] Tasks: 63 done / 67 total; 3 can run
I0622 16:53:02.047610 83656 vfs_castore.go:422] Issuing new certificate: "master"
I0622 16:53:03.027007 83656 executor.go:91] Tasks: 66 done / 67 total; 1 can run
I0622 16:53:04.197637 83656 executor.go:91] Tasks: 67 done / 67 total; 0 can run
I0622 16:53:04.884362 83656 update_cluster.go:229] Exporting kubecfg for cluster
Kops has set your kubectl context to cluster.k8s.local
Cluster is starting. It should be ready in a few minutes.
Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.cluster.k8s.local
The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
* read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md
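The kops create cluster command above relies on defaults for the instance groups: one m3.medium master and two t2.medium nodes, as the validation output below confirms. These can be overridden with flags; a sketch using flags available in kops 1.6 (the cluster name and sizes here are illustrative), leaving off --yes so the configuration can be reviewed before applying it:
kops create cluster cluster2.k8s.local --zones us-east-1a --master-size m3.medium --node-size t2.medium --node-count 3
kops update cluster cluster2.k8s.local --yes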
Wait for a few minutes for the cluster to be created.
Then, validate the cluster:
~ $ kops validate cluster
Using cluster from kubectl context: cluster.k8s.local
Validating cluster cluster.k8s.local
INSTANCE GROUPS
NAME                 ROLE    MACHINETYPE  MIN  MAX  SUBNETS
master-us-east-1a    Master  m3.medium    1    1    us-east-1a
nodes                Node    t2.medium    2    2    us-east-1a

NODE STATUS
NAME                           ROLE    READY
ip-172-20-36-52.ec2.internal   node    True
ip-172-20-38-117.ec2.internal  master  True
ip-172-20-49-179.ec2.internal  node    True
Your cluster cluster.k8s.local is ready
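When scripting cluster creation, validation can be polled in a loop instead of waiting by hand; a small shell sketch, assuming kops validate cluster exits with a non-zero status while nodes are still joining:
until kops validate cluster; do sleep 30; done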
Get the list of nodes using kubectl:
~ $ kubectl get nodes
NAME                           STATUS        AGE  VERSION
ip-172-20-36-52.ec2.internal   Ready,node    4h   v1.6.2
ip-172-20-38-117.ec2.internal  Ready,master  4h   v1.6.2
ip-172-20-49-179.ec2.internal  Ready,node    4h   v1.6.2
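To confirm the cluster can actually schedule workloads, a quick smoke test can be run with stock kubectl commands (the nginx image and the deployment name are arbitrary choices for illustration; the LoadBalancer service provisions an AWS ELB, which takes a minute or two to come up):
kubectl run nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=LoadBalancer
kubectl get pods,svc
kubectl delete deployment,svc nginx   # clean up when done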
Deleting a cluster is pretty straightforward as well:
kops delete cluster cluster.k8s.local --yes
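Omitting --yes turns the command into a dry run: kops lists the AWS resources it would delete without actually removing them, which is a good sanity check before tearing a cluster down:
kops delete cluster cluster.k8s.local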
That’s it!
Here, you can find several examples of getting started with Kubernetes.
File issues at github.com/kubernetes/kops/issues.