Kubernetes and Golang at Gini-Recruit
In this article I will talk about our experience with Kubernetes and Golang: how we switched programming languages and decided to move our whole infrastructure to a more stable one.
In the beginning we started like most software products: as a monolith. After a while we noticed that the codebase was growing too big and our API started to slow down. We had built everything in PHP, and the codebase kept getting bigger and bigger.
We tried to make the codebase smaller, but unfortunately the code was already too complex. So we decided it was better to start rewriting. We rewrote the modules part by part into a microservice architecture. During the rewrite we evaluated whether our current programming language was still what we wanted to use.
Golang, our new programming language of choice
In PHP we had an XML generator that produced feeds for all our customers. This generator was very slow. After a while we decided to look for a new language and evaluated Java, Python and Golang.
The result was that Golang was the best choice for our needs. Golang is easy to learn, easy to maintain and, above all, fast! A process that took around 25 minutes in PHP took less than a minute in Golang.
The general API calls were also faster in Golang, so we decided to continue rewriting PHP code in Golang. By now about 25 percent of our code is in Golang.
It also gives us more options to work well with our new infrastructure.
Our old infrastructure
When we started, we created a small cluster with a redundant environment, based on Xen virtual private servers. It started to grow fast. At first it was built with the following components:
- 2 load balancers;
- 2 API / portal servers;
- 2 MySQL servers / elastic search servers;
- 1 task server.
At the time this was a perfect stack for us. When we started to grow, we first upgraded our VPSes with more memory and processing power. This worked well until we wanted more flexibility. We had been treating the VPSes as pets, which in our view was not the right way to handle them.
Better to terminate one and start a new one in its place. So we were happy when our hosting provider launched a new product: an OpenStack environment. That gives a lot more freedom: it is easier to maintain, easier to extend and, above all, has a well-working API.
We reproduced our environment on the OpenStack platform and created a few auto-scaling processes that monitored our network and created and destroyed VPSes at specified thresholds. Our platform started to become more scalable.
Later we discovered a new project, built with insights from Google: the Kubernetes project. In the beginning it was a bit of a guess whether it would be usable for us, but after a test period we fell in love with the technology and the community. We decided to take the next step.
Kubernetes, our new infrastructure
We have run our entire staging environment in Kubernetes for months. In the beginning it was trial and error; some of our code needed to be rewritten to work optimally in a Docker environment. But at the time of writing we are migrating our live platform to the new infrastructure: a private cloud.
Docker gives us the flexibility we need. We can create containers on the fly to give our product temporary extra performance. During the day we can scale up the front-end and back-end services; in the evenings we use that capacity for heavy processes. This way the servers are always in use and rarely sit idle.
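This kind of day/night scaling boils down to changing a deployment's replica count, which Kubernetes lets you do with a single command. A sketch, where the deployment name `frontend` and the replica counts are made-up examples:

```shell
# Scale the (hypothetical) front-end up during the day...
kubectl scale deployment frontend --replicas=6

# ...and back down in the evening to free capacity for batch jobs.
kubectl scale deployment frontend --replicas=2
```

The scheduler then starts or stops pods across the hosts to match the requested replica count.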
Our Kubernetes hosts have a lot of capacity, most of it permanently used by processes for our project. Thanks to the Kubernetes scheduler we no longer have to think about where a container runs; we can trust that it is running somewhere, of course with a bit of help from our monitoring infrastructure.
The numbers tell the tale
It is important to know what is happening in your application and environment. Because Kubernetes is a dynamic environment, this brings a few challenges. Every pod (a group of Docker containers that belong together) can be moved from one host to another. To track these changes it is important to know where the pods are and what their status is.
We tested two solutions: Prometheus and DataDog. For now we prefer DataDog because it is a hosted service. Later we will look further into Prometheus, which is a very promising software package.
In my next article I will go more in depth on the technology we use and how we implemented it.