Setting up Kubernetes

So far I have deployed almost all of my applications in Docker containers, and managing them has become a problem. Right now the way I keep track of the container images I build is by running docker images, and the commands to run them are kept in Makefiles scattered across separate folders. Scaling is also a problem, because I have to take my DigitalOcean droplet down just to upgrade to a more powerful one. Additionally, Kubernetes offers routing, which could replace the nginx-proxy container. For ease of use, it is time to jump ship!

What is Kubernetes?

You can skip this section if you already know what Kubernetes is. In simple English, it is a container manager. It abstracts over a bunch of servers that have the Docker daemon installed and schedules containers onto them based on their available resources. For example, if a server is running low on memory, then when the user asks Kubernetes to deploy a large Java app, it will probably not hand it to that poor server.
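
To make this concrete, here is a minimal sketch of how you tell the scheduler what a container needs. The pod name, image, and numbers are all made up for illustration; the real mechanism is the resources.requests field, which the scheduler uses to pick a node with enough room.

# A hypothetical pod that asks the scheduler for 1Gi of memory.
# Kubernetes will only place it on a node with that much to spare.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: big-java-app        # made-up name
spec:
  containers:
  - name: app
    image: openjdk:8        # stand-in for a large Java app
    command: ["sleep", "3600"]
    resources:
      requests:
        memory: "1Gi"
EOF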

Choosing a platform

Kubernetes is mature, and the big cloud providers, e.g. AWS, Azure, and Google Cloud, even provide managed clusters as a product. I chose Google Kubernetes Engine on Google Cloud to deploy a managed Kube cluster, because I have a lot of free credits left from the free trial.

Using Terraform to provision a cluster

I wrote the following Terraform files to provision a Kube cluster on Google Cloud:

variable.tf

variable "project" {
  default = "coding-marathon"
}

variable "region_us_east4" {
  default = "us-east4" # north virginia
}

variable "availability_zone_us_east4" {
  type        = "map"
  default     = {
    "a" = "us-east4-a"
    "b" = "us-east4-b"
    "c" = "us-east4-c"
  }
}

goo.tf

provider "google" {
  credentials = "${file("google_project_coding_marathon_credentials.json")}"
  project = "${var.project}"
  region = "${var.region_us_east4}"
}

resource "google_container_cluster" "kub-cluster1" {
  name = "kub-cluster1"
  zone = "${var.availability_zone_us_east4["a"]}"
  initial_node_count = 2
  description = "defines a Kubernetes cluster for my coding marathon project"
  node_config {
    machine_type = "g1-small"
  }

  master_authorized_networks_config {
    cidr_blocks = [
      # {
      #   cidr_block   = "YOUR_IP/32",
      #   display_name = "NAME"
      # },
    ]
  }
}

output.tf

output "client_certificate" {
  value = "${google_container_cluster.kub-cluster1.master_auth.0.client_certificate}"
}
output "client_key" {
  value = "${google_container_cluster.kub-cluster1.master_auth.0.client_key}"
}
output "cluster_ca_certificate" {
  value = "${google_container_cluster.kub-cluster1.master_auth.0.cluster_ca_certificate}"
}
output "ip" {
  value = "${google_container_cluster.kub-cluster1.endpoint}"
}
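
Before the first apply, Terraform needs to download the Google provider plugin, so run terraform init once in this directory (terraform plan is optional but lets you preview the changes):

terraform init   # one-time setup: downloads the google provider plugin
terraform plan   # optional: preview what will be created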

Running terraform apply will print the client_certificate, client_key, and cluster_ca_certificate outputs. These credentials are needed for setting up kubectl on our laptop/desktop.

The outputs come out base64-encoded, so the following Makefile decodes them and captures them in files.

apply:
	terraform apply
	terraform output client_key | base64 --decode > client_key.txt
	terraform output client_certificate | base64 --decode > client_certificate.txt
	terraform output cluster_ca_certificate | base64 --decode > cluster_ca_certificate.txt

Running make will save those keys to .txt files.
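
As an optional sanity check, the decoded files should look like PEM material (the file names match the Makefile above):

head -1 client_certificate.txt       # should print -----BEGIN CERTIFICATE-----
head -1 cluster_ca_certificate.txt   # should print -----BEGIN CERTIFICATE-----
head -1 client_key.txt               # should print a -----BEGIN ... PRIVATE KEY----- line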

Configuring kubectl

You can already see the cluster running if you go to the Google Cloud dashboard. The way to interact with the cluster is through a CLI tool called kubectl that you can install on your local computer. Installation instructions are here.

kubectl is configured through a YAML file, and it finds that file through an environment variable. We can put this configuration file anywhere we want, preferably together with our Terraform scripts.

Make a file called config that looks like the following:

config

apiVersion: v1
clusters:
- cluster:
    certificate-authority: google-terraform/cluster_ca_certificate.txt
    server: https://CLUSTER_IP
  name: kub-cluster1
contexts:
- context:
    cluster: kub-cluster1
    namespace: default
    user: admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate: google-terraform/client_certificate.txt
    client-key: google-terraform/client_key.txt
- name: developer
  user: {}

Change CLUSTER_IP to your cluster's ip from the Terraform output. Also, make sure you change the google-terraform/BLAHBLAHBLAH_KEY.txt paths to the correct paths to the keys on your system.
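
If you would rather not hand-write the YAML, kubectl can build the same entries for you. This is just a sketch using the kubectl config subcommands, pointed at our file via KUBECONFIG; substitute CLUSTER_IP and the key paths as above:

KUBECONFIG=./config kubectl config set-cluster kub-cluster1 \
    --server=https://CLUSTER_IP \
    --certificate-authority=google-terraform/cluster_ca_certificate.txt
KUBECONFIG=./config kubectl config set-credentials admin \
    --client-certificate=google-terraform/client_certificate.txt \
    --client-key=google-terraform/client_key.txt
KUBECONFIG=./config kubectl config set-context default \
    --cluster=kub-cluster1 --user=admin --namespace=default
KUBECONFIG=./config kubectl config use-context default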

Now we export an environment variable: export KUBECONFIG=config:$KUBECONFIG. This makes the config file in the current directory take precedence over the default config file.
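
A quick way to confirm kubectl picked up our file (assuming current-context is set to default as in the config above):

kubectl config current-context   # should print: default
kubectl config get-contexts      # our cluster and context should be listed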

And we can talk to the cluster:

$ kubectl get nodes
NAME                                          STATUS    ROLES     AGE       VERSION
gke-kub-cluster1-default-pool-8a3d3617-909d   Ready     <none>    41m       v1.8.10-gke.0
gke-kub-cluster1-default-pool-8a3d3617-rmdf   Ready     <none>    41m       v1.8.10-gke.0

If you get a connection error, make sure your ip is included in the master_authorized_networks_config block defined in the Terraform file.
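
For example, look up your public ip (ifconfig.me is just one such service; any equivalent works), add it to the commented-out cidr_blocks entry in goo.tf, and re-apply:

curl https://ifconfig.me   # prints your public ip; use it as YOUR_IP/32 in cidr_blocks
make apply                 # re-runs terraform apply with the updated config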

Day 7 was a long one. In the end, we have ourselves a running Kubernetes cluster on Google Cloud, and we have configured our command line tool kubectl to connect to the remote cluster.

In the next post, we will migrate all the Docker apps from the previous days to this Kubernetes cluster. See you on Day 8.