
A complete DevOps cycle for building and deploying a Go application to a Kubernetes cluster.

In this example we will focus on creating an elegant solution, using several higher-level services and pieces of software to enable agile, automated development and delivery of a (minimal) microservice that lives on a Kubernetes (k8s) cluster.


This article is divided into four parts: we start with simple code, and it grows into something bigger (an auto-scaled, revision-controlled, containerized, globally distributed microservice). There are hundreds (even thousands, when combined) of different ways to accomplish what we are doing here, so I am going to focus on providing a mixture of approaches and methods rather than giving examples of command-line arguments.

Let's begin.

Go Application

Our journey to the clouds begins with a minimal Go application (main.go) that simply serves a Unix timestamp as JSON-formatted plain text, which can be used to sync time between Kubernetes nodes or with other services.

What is a Unix timestamp and why should you use it?

A Unix timestamp represents a point in time (the time elapsed since January 1, 1970 00:00 UTC) regardless of region, time zone, or any systemic or cultural differences. A timestamp is always useful when a time-critical task is carried out, as in this example.


{
	"type": "epoch",
	"data": 1552299848,
	"unit": "sec",
	"rev": "574cb4c"
}

program output (pretty formatted)
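
To make this concrete, here is a minimal sketch of what such an application could look like. This is an illustration, not the repository's exact code; in particular, the rev value is assumed to be injected at build time (e.g. via go build -ldflags "-X main.rev=$(git rev-parse --short HEAD)") so that it matches the short commit id:

package main

import (
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// rev is assumed to be overwritten at build time via -ldflags.
var rev = "dev"

type epoch struct {
	Type string `json:"type"`
	Data int64  `json:"data"`
	Unit string `json:"unit"`
	Rev  string `json:"rev"`
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Serve the current Unix timestamp as JSON.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(epoch{
			Type: "epoch",
			Data: time.Now().Unix(),
			Unit: "sec",
			Rev:  rev,
		})
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}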

Why JSON?

JSON-formatted text can be parsed easily by a machine, even one with minimal resources. JSON is also easy for humans to read and write.


A Unix timestamp microservice can be used in various cases; here we assume that we need it for our TOTP (Time-Based One-Time Password) services to work reliably. I chose the Go language because of its rapid growth in popularity and, of course, for its deliverability.

On TOTP

The TOTP algorithm is widely used by two-step authentication mechanisms (2FA, 2-step verification, multi-factor authentication).


Go applications can be executed within incredibly small containers: once you link all the dependencies into the executable, you do not need a Go runtime to run your application. After all, Go is essentially designed for cloud computing.

Containerization

The term containerization comes from logistics, where it means packaging payloads (goods) in a standardized way. With containerization we can make sure that the software can be transported anywhere, regardless of which runtime is going to consume it (Kubernetes, Docker, LXD, containerd).

Docker is not the only containerization option, but it makes everyone's life very... very... easy, so it is easy to understand why it is so popular.


Of course, you can choose any other programming language for a microservice; they all have pros and cons, which are beyond the scope of this article. For those who may ask what to choose, these are programming languages that can also be used to build fast, scalable microservices:

  • JavaScript
  • Python
  • Java
  • Ruby
  • PHP
  • C++

sorted by popularity

Our Go app does not take into account any time drift (which may be caused by network lag or jitter), so it is not expected to have atomic-clock-grade accuracy (small differences can be ignored, since the TOTP spec recommends a 30-second window). If you want to discover how networked, IP-based devices get their clocks synced, please check Cisco's Network Time Protocol Best Practices.

Docker

Our Go app is intended to be built in a Docker container (golang image), with the compiled binary then copied into Docker's empty (scratch) image, in order to have a minimal Docker image to run.
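
A multi-stage Dockerfile along these lines would do the job. This is a sketch; the repository's actual Dockerfile may differ, and the REV build argument is an assumption used to bake the commit id into the binary:

# Build stage: compile a statically linked binary inside the golang image.
FROM golang:1.12 AS builder
ARG REV=dev
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags "-X main.rev=${REV}" -o /go-microepoch .

# Final stage: copy only the binary into an empty image.
FROM scratch
COPY --from=builder /go-microepoch /go-microepoch
# Run as an unprivileged user, in line with security-by-design.
USER 1000
EXPOSE 8080
ENTRYPOINT ["/go-microepoch"]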

Our code is going to be built in a container?

Yes, and this makes a great example of using containers.

Think of a case where you are in a hurry (or on the go) and your team expects you to fix a bug ASAP, but you do not have a development environment or the time to set up a new one.

Let's think of another case: with containers, anyone can check out your code base and make modifications without hesitating over getting their computer bloated.

It also makes automation really simple: no need to worry about mixing up configuration with another project, etc.

Since we are linking the Go dependencies (libraries) into the binary, the final image can be executed like any other Docker image. It runs with no privileges, so we can say that it is secure in its simplicity. (You should not run applications with root privileges in a container; to conform to security-by-design principles (Wikipedia Link), a process should only have access to the resources it needs and nothing more.)

We also need to serve our built Docker images so that our Kubernetes cluster will be able to download them. Of course, we could have used one of the public registries like Docker Hub or Google's Container Registry, but in the context of proprietary software we would not want to use a public registry; instead, we are going to create our own private registry for our Docker images.

Creating a private Docker registry is a relatively easy task; just follow the instructions here. While putting this example into practice, we omit a fundamental necessity for a private registry for the sake of quickness: authentication. The Docker Registry Authentication Specification allows a variety of authentication mechanisms to be built on top of it; companies should choose the authentication approach that suits them.
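
For reference, the simplest possible (unauthenticated) registry is a single container running the official registry image, roughly:

docker run -d -p 5000:5000 --restart=always --name registry registry:2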

Uber's Docker Registry - Kraken

Uber’s Cluster Management team developed Kraken, an open source, peer-to-peer (P2P) Docker registry. Docker containers are a foundational building block of Uber’s infrastructure, but as the number and size of our compute clusters grew, a simple Docker registry setup with sharding and caches couldn’t keep up with the throughput required to distribute Docker images efficiently.

With a focus on scalability and availability, Kraken was designed for Docker image management, replication, and distribution in a hybrid cloud environment. With pluggable back-end support, Kraken can also be plugged into existing Docker registry setups as the distribution layer.

(from Uber's blog)


I have already deployed a private registry on one of my cloud servers, available behind an nginx reverse proxy. This does not impact our workflow except for the URL used to pull/push the Docker image; if you want to use one of the public registries, you are good to go.

Our Private Docker Registry is located at: bitadvise.com
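
Pushing to and pulling from it then works like any other registry; the image name and tag below are illustrative:

docker tag go-microepoch bitadvise.com/go-microepoch:staging
docker push bitadvise.com/go-microepoch:staging
docker pull bitadvise.com/go-microepoch:staging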

Make

In the GNU world, make is a powerful tool for building/installing/uninstalling an executable. Our Makefile defines targets for our workspace (repository) so end users (including ourselves) can quickly issue commands without needing to know all the details of what the actual command does.

A simple make build command will build a Docker image, and make run runs our latest image in a Docker container.

usage: make [target]

build                          - Build go-microepoch Docker image (this also updates staging image on docker registry if it is built on Travis)
build-no-cache                 - Build go-microepoch Docker image with --no-cache option enabled
deploy                         - Deploy image to bitadvise.com Registry and update K8s app image (Production)
help                           - Show targets
run                            - Run go-microepoch and publish on TCP 8080 port (detached)
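
A minimal sketch of how such targets could be wired up (the actual Makefile is richer; the image tag, the REV build argument, and the deploy script name are illustrative):

# Recipe lines must be indented with tabs.
IMAGE := bitadvise.com/go-microepoch

build:
	docker build --build-arg REV=$$(git rev-parse --short HEAD) -t $(IMAGE):latest .

run:
	docker run -d -p 8080:8080 $(IMAGE):latest

deploy:
	docker push $(IMAGE):latest
	./deploy.sh  # Bash script that rolls the update via kubectl (name is illustrative)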

Travis CI / CD

In this case our Continuous Integration / Delivery service is Travis, one of the most popular CI/CD services, and it is free for open-source projects.

(It could have been one of the other popular services/software like Jenkins or Bamboo; it would not change our workflow. You may also use git's pre/post-commit hooks for local automated builds.)

Thanks to automated services, we do not need any development environment for Go: one can simply push changes to the git repository (via a web IDE or even a mobile phone), and the code will be built by an automated Continuous Integration system. This enables us to work on virtually any device, anywhere in the world.

simplified workflow

Our CI pipeline starts with a .travis.yml file; Travis uses this file to run our pipeline in a virtual machine. When a commit is pushed to the git repository, Travis boots up a clean VM, prepares the build environment, and builds our Docker image; if the commit is pushed to the "master" branch, it also pushes the image to our private Docker registry and finally rolls out an update on the Kubernetes cluster.
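
Something along these lines (a sketch, not the repository's exact pipeline; the branch logic is simplified):

sudo: required
language: minimal
services:
  - docker
script:
  - make build
after_success:
  - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then make deploy; fi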

We use Travis's protected repository variables to pass our Kubernetes service account tokens to the Bash script.
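
Inside that script, the token can be used to point kubectl at the cluster, roughly like this (variable, context, and account names are illustrative; a real setup would also pin the cluster's CA certificate):

# Configure kubectl from the protected Travis variables.
kubectl config set-cluster production-cluster --server="$K8S_SERVER"
kubectl config set-credentials travis-deployer --token="$K8S_TOKEN"
kubectl config set-context deploy --cluster=production-cluster --user=travis-deployer
kubectl config use-context deploy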

Kubernetes

Certified Kubernetes Providers

The Kubernetes cluster in this example is provided by Google Cloud Platform. Again, it does not matter which cloud service you host your Kubernetes cluster on: either choose one of the managed Kubernetes services provided by the likes of Amazon Web Services and DigitalOcean, or create your own mini cluster locally with minikube.

Our Kubernetes cluster configuration is as below:

  • Our cluster is in a data center located in the Netherlands
  • It consists of three nodes and one load balancer service (nginx Ingress)
  • There are two namespaces, for Production and Staging
  • Our production deployment is named epoch-app and is updated whenever the master branch changes (a sketch of its manifest follows this list)
  • Our testing deployment is named epoch-test and is updated whenever developers commit to the staging branch
  • We expose the deployments through the nginx Ingress
  • We have two service accounts for Continuous Deployment, one for each namespace
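
As a rough sketch, the production deployment manifest could look like this (replica count, labels, and the namespace name are illustrative assumptions):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: epoch-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: epoch-app
  template:
    metadata:
      labels:
        app: epoch-app
    spec:
      containers:
        - name: epoch-app
          image: bitadvise.com/go-microepoch:latest
          ports:
            - containerPort: 8080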

When we merge/push changes to the "master" branch of our git repository, Travis tells the Kubernetes cluster to change the application image via kubectl. Kubernetes then tries to roll the new image out across its pods. Thanks to Kubernetes' internal workings, an update introduces no downtime (Kubernetes first creates new pods with the updated image, then checks their health; if they are in good condition, it deletes the previous pods and redirects traffic to the new ones), and the new version of our Go application will be on air in a few minutes without disrupting clients.
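
In practice this boils down to something like the following two commands (the container and namespace names are illustrative; TRAVIS_COMMIT is provided by Travis):

# Point the deployment at the freshly pushed image.
kubectl set image deployment/epoch-app epoch-app=bitadvise.com/go-microepoch:"$TRAVIS_COMMIT" --namespace=production
# Block until the rolling update completes (or fails).
kubectl rollout status deployment/epoch-app --namespace=production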

Congratulations, we can now access our freshly updated microservice. Notice the "rev" element in the application's output: it matches the repository's latest SHA-1 commit id (short form). You can also use a reverse proxy service like Cloudflare to access the microservice under a namespace (domain), or you can create a new "A" record on your DNS server.

Click and check out how cool it is:

http://epoch.bitadvise.com (Production, master branch)

http://test.epoch.bitadvise.com (Testing, staging branch)

TODO

  • Write a test for "Inspect the container to determine if it is really running"
  • Write a test for "If the timestamp matches the expected regex"
  • Create another cluster on a different cloud provider, then sync the two Kubernetes clusters together (federation is still in beta)
  • Switched to Google Cloud DNS (auto Ingress DNS naming, CA certification)
  • Create a Kubernetes Ingress (nginx)
  • Arrange Kubernetes secrets for RBAC or ABAC (possibly ABAC in this headless setup) (ended up setting up two roles for both namespaces)
  • Separate Production and Staging environments on the same Kubernetes cluster using namespaces
  • Redirect "epoch.bitadvise.com" to the Kubernetes load balancer
  • Provide more learning materials